Syndicated vs. panel data: the two-sentence version
Syndicated data measures retail sales — what was sold through which stores. Panel data measures consumer behavior — who bought what, repeated over time. Both are sample-based; both get projected up to population-level estimates (total channel for syndicated, total US households for panel); they answer different questions.
A category analyst pulling SPINS or Circana weekly is looking at syndicated data. A consumer-insights team studying repeat-purchase behavior on a new product launch is looking at panel data.
Syndicated POS scan data — what it measures
Syndicated retail data tracks transactions at the point of sale. The data flows from one of two sources:
- Retailer scan data — POS transactions from chains that license their data to a syndicator (SPINS, Circana, NielsenIQ).
- Distributor flow data — what was shipped from distributors (KeHE, UNFI for natural; others for conventional) to retailers in their networks.
Both get reconciled and projected up to a total-channel estimate. The output: a sales number per SKU per retailer per week.
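The projection step is conceptually just a ratio scale-up from the measured sample to the channel universe. A deliberately toy sketch (real syndicator projection models are far more involved, with store stratification and imputation; every number here is invented):

```python
# Toy projection from measured stores to a channel estimate.
# All numbers invented; real syndicator models are far more involved.
measured_sku_dollars = 141.0  # weekly $ across measured stores
measured_acv = 3.0            # $MM ACV of measured stores
channel_acv  = 4.5            # $MM ACV of the full channel universe

projected = measured_sku_dollars * (channel_acv / measured_acv)
print(f"projected channel dollars: ${projected:.0f}/week")  # -> $212/week
```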
What it tells you well:
- Which SKUs are selling at which retailers
- Distribution metrics (ACV, TDP, store count)
- Velocity (sales per store per week; see the code sketch after this list)
- Promotional lift, regional performance, share trends
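Both distribution and velocity are simple arithmetic over the weekly extract. A minimal sketch in pandas, with hypothetical column names rather than any vendor's actual schema (velocity here is dollars per selling store-week; %ACV is the share of all-commodity volume held by stores that sold the SKU):

```python
import pandas as pd

# Hypothetical weekly syndicated extract: one row per SKU x store x week.
# Column names are illustrative, not any vendor's actual schema.
scans = pd.DataFrame({
    "sku":      ["BAR-01"] * 6,
    "store_id": [101, 101, 102, 102, 103, 103],
    "week":     ["W1", "W2", "W1", "W2", "W1", "W2"],
    "dollars":  [31.0, 27.0, 44.0, 39.0, 0.0, 0.0],
})

# Store universe with all-commodity-volume (ACV) weights.
stores = pd.DataFrame({
    "store_id":   [101, 102, 103],
    "acv_weight": [0.9, 2.1, 1.5],  # $MM annual all-commodity volume
})

selling = scans[scans["dollars"] > 0]

# Velocity: dollars per selling store-week.
store_weeks = len(selling)  # each row is one selling store-week
velocity = selling["dollars"].sum() / store_weeks

# %ACV: share of total ACV held by stores that sold the SKU at least once.
selling_ids = selling["store_id"].unique()
pct_acv = (stores.loc[stores["store_id"].isin(selling_ids), "acv_weight"].sum()
           / stores["acv_weight"].sum())

print(f"velocity: ${velocity:.2f}/store/week, distribution: {pct_acv:.0%} ACV")
```

Note that nothing in this data identifies a household; every metric derives from store-level sales, which is exactly why the list below exists.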
What it can't tell you:
- Whether the same household bought your product twice
- Whether a SKU you launched is being bought by your existing customers or by net-new buyers
- Demographic and lifestyle attributes of the actual buyer
- Cross-category basket behavior
Panel data — what it measures
Consumer panel data tracks what individual households buy over time. The major panels (NielsenIQ Homescan, Circana's panel, Numerator) recruit a representative sample of households who self-report (or scan) their purchases continuously.
NielsenIQ Homescan, one of the two largest US household panels, tracks approximately 100,000 households — a sample that projects to total US household purchase behavior by demographic cut. Circana's panel is comparable in scope. Both panels report with a lag of several weeks and at a coarser cadence than weekly syndicated scan data.
The output: a longitudinal record of every measured household's purchase history, projectable to the total US population (or a demographic cut) by household demographics.
What it tells you well:
- Repeat purchase rate after trial (see the code sketch after this list)
- Buyer demographics (income, age, household composition, region)
- Cross-category basket — what else does your buyer pick up in the same trip?
- Penetration (% of households buying your brand at least once)
- Buyer source-of-volume — when you grow, are you stealing from competitors, growing the category, or both?
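Penetration and repeat rate fall directly out of the longitudinal household record. A minimal sketch, again with hypothetical column names, using projection weights the way household panels do (each panel household stands in for some number of US households):

```python
import pandas as pd

# Hypothetical panel extract: one row per household x purchase occasion.
# 'weight' is the household's projection factor (households represented).
purchases = pd.DataFrame({
    "household_id": [1, 1, 2, 3, 3, 3],
    "brand":        ["A", "A", "A", "A", "A", "A"],
    "weight":       [1200, 1200, 950, 1100, 1100, 1100],
})
total_panel_weight = 10_000  # projected households in the full panel universe

# Weighted penetration: share of projected households buying at least once.
buyers = purchases.drop_duplicates("household_id")
penetration = buyers["weight"].sum() / total_panel_weight

# Repeat rate: share of buying households with 2+ purchase occasions,
# weighted by projection factor.
counts = purchases.groupby("household_id").agg(
    n=("brand", "size"), weight=("weight", "first")
)
repeat_rate = counts.loc[counts["n"] >= 2, "weight"].sum() / counts["weight"].sum()

print(f"penetration: {penetration:.1%}, repeat rate: {repeat_rate:.1%}")
```

The projection weights are doing real work here: the same arithmetic unweighted would quietly assume the panel is a perfect miniature of the US population.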
What it can't tell you well:
- Real-time weekly sales movement (panels report on a slower cadence with smaller samples per cell)
- Performance at a specific retailer below the major-chain level
- Detail on small or new SKUs (sample size per SKU is too small for reliable reads on long-tail items)
Where they overlap and disagree
Both syndicated and panel data report some version of "category dollars," and the numbers usually disagree — sometimes by 5%, sometimes by 30%. Reasons:
- Universe differences. Syndicated data measures stores that license their POS. Panel data extrapolates from household purchases regardless of where bought. Channels covered by one and not the other (notably DTC, Amazon, regional chains, Costco, and convenience) explain a lot of the gap.
- Whole Foods is a specific case. Whole Foods doesn't report scanner data to SPINS; Circana carries Whole Foods as part of conventional grocery; NielsenIQ projects Whole Foods sales via panel data. So "Whole Foods sales" is itself a source-dependent number.
- Projection methodology differences. Each syndicator and each panel uses a different projection model.
- Definition differences. A "category" defined by SPINS attributes is not the same as a "category" defined by NielsenIQ category hierarchy. Cross-source category numbers are usually not apples-to-apples without normalization.
The right reaction to a discrepancy is rarely "which one is right." It's "what's the question, and which source's universe matches it better."
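One way to act on that is to decompose the gap channel by channel before debating whose number is right. A toy sketch, with every figure invented for illustration:

```python
# Toy gap decomposition between two hypothetical category reads.
# Every figure here is invented for illustration.
syndicated_total = 410.0   # $MM, measured-store universe
panel_total      = 505.0   # $MM, all-outlet household projection

# Channels the panel captures that this syndicated universe doesn't.
unmeasured_channels = {
    "DTC / ecommerce":    35.0,
    "Amazon":             25.0,
    "club (e.g. Costco)": 15.0,
    "convenience":         5.0,
}

explained = sum(unmeasured_channels.values())
residual = (panel_total - syndicated_total) - explained

print(f"raw gap: ${panel_total - syndicated_total:.0f}MM")
print(f"explained by channel coverage: ${explained:.0f}MM")
print(f"residual (projection + definition differences): ${residual:.0f}MM")
```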
A worked example: a new flavor launch
A wellness snack brand launches a new flavor of their bestselling bar in Q1. By Q2, they want to understand: is the new flavor growing the brand or cannibalizing the original?
Syndicated data (SPINS) answers:
- New flavor ACV after 12 weeks: 38% in the natural channel
- New flavor velocity: $29/store/week at carrying stores
- Original flavor velocity: down 4% at stores carrying both flavors, flat at stores carrying only the original
The syndicated read suggests mild cannibalization at dual-SKU stores — the original flavor is slightly down where the new one was added. But is that definitively cannibalization, or just noise?
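One way to pressure-test that read without panel data is a matched comparison on the store-week velocities themselves, for example a two-sample test between stores that added the new flavor and stores that didn't. A hedged sketch on simulated data (the test choice and the simulated numbers are illustrative, not a SPINS deliverable):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated weekly velocities ($/store/week) for the ORIGINAL flavor.
# Means and spreads are invented to mirror the scenario above.
dual_stores = rng.normal(loc=27.0, scale=4.0, size=120)  # carry both flavors
solo_stores = rng.normal(loc=28.0, scale=4.0, size=120)  # carry original only

# Welch two-sample t-test: is the dual-store dip distinguishable from noise?
t_stat, p_value = stats.ttest_ind(dual_stores, solo_stores, equal_var=False)

print(f"dual mean: {dual_stores.mean():.1f}, solo mean: {solo_stores.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# Even a significant dip only says WHERE sales moved, not WHO moved them;
# attributing it to existing buyers still requires panel data.
```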
Panel data answers:
- 62% of new flavor buyers in the first 8 weeks were new to the brand — they'd never bought any of the brand's SKUs in the prior 52 weeks
- 21% were existing brand buyers who traded up to the new flavor (and reduced original-flavor purchase frequency)
- 17% were existing brand buyers who added the new flavor without reducing original-flavor purchase frequency
The panel cuts the question cleanly: 62% of trial came from net-new buyers, so the new flavor is growing the brand rather than just reshuffling existing buyers. The 4% velocity dip on the original at dual-SKU stores does reflect real cannibalization from the 21% who traded up, but with a clear majority of trial coming from outside the brand, the launch is net-positive.
This answer requires panel data. Syndicated data alone can't tell you whether the $29/store/week on the new SKU came from new buyers or from existing brand buyers who switched.
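The three panel segments above amount to bucketing households by prior-year brand history and post-launch behavior. A minimal sketch of that logic, with hypothetical column names; real panel deliverables handle the 52-week windowing and the projection weighting for you:

```python
import pandas as pd

# Hypothetical household histories around the new-flavor launch.
panel = pd.DataFrame({
    "household_id":           [1, 2, 3, 4, 5],
    "bought_brand_prior_52w": [False, False, True, True, True],
    "bought_new_flavor":      [True, True, True, True, False],
    "orig_freq_before":       [0.0, 0.0, 1.2, 0.8, 1.0],  # buys/month
    "orig_freq_after":        [0.0, 0.0, 0.4, 0.8, 1.0],
})

def classify(row):
    if not row["bought_new_flavor"]:
        return "non-trier"
    if not row["bought_brand_prior_52w"]:
        return "new to brand"
    if row["orig_freq_after"] < row["orig_freq_before"]:
        return "traded up (cannibalizing)"
    return "additive (incremental)"

triers = panel[panel["bought_new_flavor"]].copy()
triers["segment"] = triers.apply(classify, axis=1)
print(triers["segment"].value_counts(normalize=True))
```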
A practical decision rule
| Question | Source |
|---|---|
| Are we gaining distribution in Sprouts? | Syndicated (SPINS) |
| Did our new flavor grow the category or steal from our existing SKUs? | Panel |
| Is our promo working at Whole Foods? | Syndicated (Circana, since SPINS doesn't carry Whole Foods) |
| Are repeat buyers staying after the promo ends? | Panel |
| What's our share at Kroger this quarter? | Syndicated |
| What demographics over-index on our brand? | Panel |
| How fast is the keto bar segment growing? | Syndicated for the size; panel for the buyer story |
| How does our new SKU's trial rate compare to the category average? | Panel |
| Which of our retail partners drives our highest-LTV buyers? | Panel (if the panel has retailer-specific detail) |
Most weekly category and sales reporting is syndicated. Most strategic and innovation work pulls in panel data alongside.
When the numbers disagree significantly
A common scenario: the brand's SPINS natural-channel sales are up 12% year-over-year. The panel says brand penetration is down 3%. Which is right?
Both can be correct simultaneously. They're measuring different things:
- SPINS up 12%: the brand's dollar sales through measured natural retailers are growing — possibly because of velocity gains, distribution expansion, price increases, or all three.
- Panel penetration down 3%: fewer households bought the brand at all in the trailing year — possibly because the brand converted fewer new buyers even as existing buyers purchased more (higher basket size or frequency per buying household).
The combined story: the brand is getting more dollars per buyer but losing at the top of the funnel. That's a different strategic problem than "we're growing on both dimensions" or "we're shrinking on both." Without both data sources, the diagnosis, and any strategy built on it, would point the wrong way.
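The two reads reconcile through a standard decomposition: dollars = buying households × dollars per buying household, so the growth rates multiply. A quick arithmetic sketch using the section's figures (and ignoring, for simplicity, the universe mismatch between natural-channel dollars and all-outlet household penetration, which a real analysis would control for):

```python
# Dollars = buying households x dollars per buying household,
# so YoY growth rates multiply across the two factors.
sales_growth       = 0.12   # SPINS natural-channel dollars, YoY
penetration_growth = -0.03  # panel buying households, YoY

# (1 + sales) = (1 + households) * (1 + dollars per buyer)
dollars_per_buyer_growth = (1 + sales_growth) / (1 + penetration_growth) - 1
print(f"implied dollars per buying household: {dollars_per_buyer_growth:+.1%}")
# -> roughly +15.5%: fewer buyers, each worth materially more.
```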
The cost asymmetry between syndicated and panel data
A common surprise when brands first add panel data to their syndicated read: per unit of insight, panel data is materially more expensive than the syndicated POS contract that sets their expectations.
Rough US-market pricing in 2026:
- Syndicated POS (SPINS, Circana, NielsenIQ): $30,000–$150,000/year for a brand-level contract with multi-retailer coverage. Pricing scales with retailer count, category depth, and add-on banners like Kroger total or banner-level reads.
- Household panel data (NielsenIQ Homescan, Circana Panel, Numerator): $40,000–$120,000/year for full demographic and category access at the brand level. Numerator's receipt panel sits at the lower end of this range; legacy Nielsen and Circana panels at the higher end, particularly when buyer-level retailer attribution is included.
The asymmetry isn't in absolute dollars — both ranges overlap — but in what you get per dollar. A syndicated contract gives weekly SKU-level sales across hundreds of retailers and thousands of stores. A panel contract gives demographic-cut household behavior on a small sample, projectable but inherently noisier at the sub-category level.
The practical budget implication: brands typically pay for syndicated first (it's the operating-system data for weekly sales tracking) and add panel data when a specific strategic question — repeat rate, source of volume, demographic profile — is worth the marginal $50,000+/year. Panel is the analytical splurge; syndicated is the utility bill.
Smaller brands often delay panel data until a Series B raise or post-$10M revenue. Before that point, the strategic decisions panel data informs aren't yet load-bearing enough to justify the cost. Once the brand is making national distribution bets, fighting for shelf in a category where source-of-volume matters to the retailer buyer, or running a defensible new-item launch program, panel data earns its keep — and the cost-per-decision drops sharply because the same dataset informs five or six strategic calls a year.
Doing this in Scout
Scout's primary surface is syndicated retail data — SPINS extracts your team uploads on a weekly cadence, presented as shared dashboards across sales, category, and commercial leadership. Panel data isn't integrated into the Scout surface today; the panel-data use cases (repeat rate, source of volume, demographic profile) typically live in a separate workflow against the panel vendor's own platform. The pattern most teams follow: Scout for the weekly syndicated cadence, panel data for quarterly strategic reads and innovation decisions.
Summary + further reading
- Syndicated data measures store-level retail sales; panel data measures household-level consumer behavior.
- They disagree on category numbers because they measure different universes with different projection rules — neither is wrong.
- Most weekly category reporting is syndicated; panel data is the right tool for repeat rate, buyer demographics, and source-of-volume questions.
- When the two sources disagree, diagnose the cause rather than picking the "right" one — they often tell different true stories simultaneously.
Related: What is SPINS data? · Reading SPINS panel coverage