Why SPINS panel coverage matters
You're reading a SPINS report that says your brand is doing 28% ACV in Natural Channel last 12 weeks. Three weeks later the same report says 31% — but you didn't add doors. You ask your SPINS rep, who explains that one of the regional natural co-ops just came back into the panel after a months-long data gap, and your "growth" is actually a sample-mix change.
That's the kind of thing that derails a brand-side analyst's morning. SPINS data isn't wrong, but it's the output of a projection from a sample of stores to a full channel — and the projection has rules. Most analysts learn those rules by being burned by them. This page collects the ones that burn most often.
The audience is the brand-side analyst at a natural, specialty, or wellness CPG brand who's pulling SPINS data weekly and presenting to commercial leadership. If you're using SPINS for retailer-level performance reads, post-promo lift, or distribution analysis, the projection and suppression rules below change how you should read every number that comes out of the portal.
The three diagnostic questions for any anomalous cell
Before the methodology deep-dive, here are the three questions that short-circuit most SPINS data confusion. Bookmark these — the worked example and detail below show why each one works.
- Did ACV change with sales? If no, the change is panel or projection, not real demand. A genuine sales crash takes ACV down with it (stores stop ordering, distribution thins). A suppression or backfill event leaves ACV intact.
- Is the prior or following week unusually high? Backfill is common and usually evens out over a 4-week window. If you see a zero followed by an unusual spike, check whether the sum of those two weeks is close to the two-week normal for that retailer. If yes, it's a backfill, not a real spike.
- Is this a direct-scan retailer or a distributor-flow estimate? Distributor-flow numbers move more, both because of shipping latency and because the projection coefficients are larger. A 30% week-over-week move in independent natural (KeHE/UNFI) warrants more skepticism than the same move at Sprouts, which reports direct scan.
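Questions 1 and 2 are arithmetic, so they can be sketched as checks against a weekly extract. A minimal Python sketch — the field names, numbers, and tolerances are illustrative, not SPINS conventions:

```python
# Hypothetical weekly rows for one retailer cell (illustrative numbers).
weeks = [
    {"week": 1, "dollars": 12400, "acv": 0.38},
    {"week": 2, "dollars": 11950, "acv": 0.39},
    {"week": 3, "dollars": 0,     "acv": 0.39},  # the suspect cell
    {"week": 4, "dollars": 13100, "acv": 0.40},
]

def is_panel_artifact(prev, curr, acv_tol=0.03):
    """Question 1: a real demand crash drags ACV down with dollars.
    Dollars collapsing while ACV holds points at suppression or backfill."""
    dollars_crashed = curr["dollars"] < 0.5 * prev["dollars"]
    acv_held = abs(curr["acv"] - prev["acv"]) < acv_tol
    return dollars_crashed and acv_held

def looks_like_backfill(zero_wk, spike_wk, normal, tol=0.15):
    """Question 2: a zero followed by a spike whose two-week sum lands
    close to 2x a normal week is a backfill, not a real spike."""
    two_week = zero_wk["dollars"] + spike_wk["dollars"]
    return abs(two_week - 2 * normal) / (2 * normal) < tol
```

`is_panel_artifact(weeks[1], weeks[2])` flags the week-3 cell above. Question 3 is a lookup against your retailer list rather than arithmetic, so it stays a human check.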
The SPINS panel coverage methodology — what's happening under the hood
SPINS combines two data sources for the natural and specialty channel:
- Direct retailer scan data from natural and specialty chains that license their POS to SPINS — Sprouts, Natural Grocers, and a long list of regional naturals and specialty chains. (Notably, Whole Foods Market does not report scanner data to SPINS; for Whole Foods coverage, brands rely on Circana or NielsenIQ panel projections.)
- Distributor-flow data from KeHE and UNFI, which captures what was shipped to independent natural retailers (the long tail of independent co-ops, single-store naturals, and specialty stores that don't license their POS individually).
Those two streams get reconciled, deduplicated, and projected up to a representation of the total channel. Projection means: if the sample stores in market X represent N% of estimated channel sales, the sample number gets multiplied by 1/N to estimate the total. The projection coefficient varies by:
- Geography. Coverage density isn't uniform across regions. Areas with denser natural-retail concentration (West Coast, Northeast) produce smaller projection multipliers than areas with sparser natural retail.
- Channel slice. Natural channel projects differently than Conventional MULO. SPINS' MULO+ extension (Specialty/Natural Enhanced + Conventional MULO, with the conventional portion powered by Circana data) layers two different projection models together into one number.
- Time period. Projection coefficients update as the underlying panel composition changes. A retailer leaving or joining mid-quarter shifts the coefficient retroactively.
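The 1/N scaling is easy to see in code. A hedged sketch of how a panel-composition change alone manufactures apparent growth — the coverage fractions are invented for illustration:

```python
def project(sample_dollars, coverage):
    """Scale sample dollars to an estimated channel total: if the panel
    captures `coverage` (a fraction) of channel sales, the projection
    multiplier is 1 / coverage."""
    return sample_dollars / coverage

# Identical underlying sales, two panel compositions:
est_before = project(100_000, 0.40)  # panel covers 40% of the channel
est_after  = project(100_000, 0.36)  # a retailer leaves the panel

growth = est_after / est_before - 1  # ~11% "growth" from mix alone
```

Nothing about demand changed between the two estimates; only the denominator did.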
This is fine when you're looking at total-channel trend lines. It's a problem when you're looking at week-over-week movement on a specific cell (retailer × geography × period), because the "movement" might be panel composition, not actual sales change.
SPINS data suppression — when SPINS hides the cell entirely
Beyond projection, SPINS applies suppression to cells where the sample size is too small to publish without violating a retailer's contract or compromising statistical reliability. The practical effect: you'll see blanks, zeros, or "insufficient data" markers where you expected numbers.
The thresholds vary by retailer contract and aren't published in one public reference. In practice:
- Banner-level Kroger detail is suppressed in the standard feed unless you've licensed banner-level breakouts (Ralphs, King Soopers, Fred Meyer, Harris Teeter, Smith's, Fry's, and others). Without that add-on, "Kroger total" is your only handle, and you can't separate King Soopers strength from Harris Teeter weakness.
- Independent natural in lower-density geographies often shows zero for multiple weeks running, then a backfilled number when distributor data catches up.
- Small-share categories in MULO can show suppression on retailer × week cells even when the rolled-up category is fine.
Suppression doesn't mean zero sales. It means SPINS isn't publishing the number. Reading suppressed cells as "zero" is the single most common analytical error in this space — and it biases distribution, share, and post-promo lift calculations downward.
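One way to see the downward bias: compare an average that coerces suppressed cells to zero against one that drops them from the denominator. A minimal sketch with invented numbers:

```python
# Three weeks of one retailer cell; None marks a suppressed cell.
raw = [12400, None, 13100]

# Wrong: treating suppression as zero drags every average down.
as_zero = [x if x is not None else 0 for x in raw]
avg_wrong = sum(as_zero) / len(as_zero)      # 8,500 -- understated

# Better: exclude suppressed cells from the denominator entirely.
observed = [x for x in raw if x is not None]
avg_right = sum(observed) / len(observed)    # 12,750
```

The same denominator discipline applies to distribution, share, and lift math: a suppressed cell is a missing observation, not an observation of zero.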
Worked example — a brand "losing" a banner
Fictional but representative: a wellness brand sells through roughly 40% of Sprouts stores nationally. Distribution is stable, velocity is consistent. The weekly SPINS pull comes back like this:
| Week | Sprouts $ | Sprouts ACV | Note |
|---|---|---|---|
| W1 | $12,400 | 38% | Normal |
| W2 | $11,950 | 39% | Normal |
| W3 | $0 | 39% | Anomaly |
| W4 | $13,100 | 40% | Normal |
| W5 | $25,300 | 39% | Anomaly |
What actually happened: SPINS suppressed the W3 cell because of a panel data delivery issue, then backfilled the W3 dollars into the W5 number when the data arrived. Read naively, this looks like a catastrophic week followed by a 100%+ spike. The reality: every week was normal. Total dollars over the five-week window are exactly what they should be.
The tell is in the ACV column. If sales had truly gone to zero in W3, ACV would also have dropped — no stores selling = no distribution. ACV held steady at 38–40% throughout. That's the load-bearing diagnostic.
The worked example exercises all three diagnostic questions: ACV held at 38–40% throughout (question 1: no real distribution event), the W3 + W5 sum came out to roughly two normal weeks (question 2: the classic backfill pattern), and Sprouts reports direct scan, so an anomaly there is the exception rather than the baseline expectation (question 3).
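The reconciliation check behind question 2, in code: sum the five-week window and compare it to five normal weeks. The "normal" figure is an illustrative trailing average, not a SPINS output:

```python
window = [12400, 11950, 0, 13100, 25300]  # W1-W5 dollars from the table
normal = 12500                            # illustrative trailing average

total = sum(window)                             # 62,750
deviation = total / (len(window) * normal) - 1  # fraction off five normal weeks
```

The window comes in well under 1% off five normal weeks, which is what a suppress-then-backfill event looks like: dollars shifted between weeks, not created or destroyed.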
How to validate a suspicious number before presenting it
Before putting a SPINS number in a deck that goes to a buyer or a CEO, run these checks:
Check 1: Is it consistent with velocity? If the brand is in 200 Sprouts stores and the dollar figure implies $4/store/week, that's implausibly low for a normally moving SKU. Cross-check: dollars ÷ (ACV-implied store count) = implied velocity. If implied velocity is outside the historical range by more than 20–30%, the cell is suspect.
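Check 1 as a function. The store universe, historical band, and tolerance below are assumptions for illustration, not SPINS-published figures:

```python
def implied_velocity(dollars, acv, universe_stores):
    """Back out $/store/week: dollars divided by the ACV-implied
    store count (ACV fraction times the retailer's store universe)."""
    return dollars / (acv * universe_stores)

# ~410 Sprouts stores is an assumption for illustration.
v = implied_velocity(12400, 0.39, 410)   # roughly $78/store/week

hist_low, hist_high = 60.0, 95.0         # historical band (illustrative)
suspect = not (hist_low * 0.7 <= v <= hist_high * 1.3)
```

A cell whose implied velocity lands outside the widened band gets the other two checks before it gets a slide.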
Check 2: Is there a parallel signal in a different data source? If the brand also tracks inventory movement through their 3PL or distributor, a KeHE/UNFI shipment anomaly should appear in both SPINS and the distributor's own sell-through report. If SPINS shows a spike and the distributor doesn't, the spike is likely a panel artifact.
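Check 2 as a sketch: flag weeks where SPINS jumps well above the distributor's own sell-through without the distributor series moving with it. The threshold and numbers are illustrative:

```python
spins   = {18: 12600, 19: 25300, 20: 12900}  # week -> SPINS dollars
distrib = {18: 12500, 19: 12800, 20: 12700}  # week -> distributor sell-through

def unconfirmed_spikes(spins_by_week, distrib_by_week, threshold=0.5):
    """Weeks where SPINS exceeds the distributor read by more than
    `threshold` are candidates for a panel artifact, not real demand."""
    return [wk for wk, dollars in spins_by_week.items()
            if wk in distrib_by_week
            and dollars > (1 + threshold) * distrib_by_week[wk]]
```

Here week 19 gets flagged: SPINS doubled while the distributor's sell-through barely moved.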
Check 3: Call your SPINS rep. This isn't weakness; it's due diligence. SPINS reps can pull the data-delivery log for your account and tell you whether the anomalous week had a known panel event — a retailer data gap, a backfill, a projection coefficient change. The confirmation takes 10 minutes and saves you from having a category director question your methodology in a meeting.
SPINS projection methodology: coefficient changes and retroactive adjustments
One of the least-documented sources of apparent SPINS volatility is a retroactive panel change. When a retailer joins or leaves SPINS' panel — or when a major distributor changes their data-sharing agreement — SPINS sometimes revises historical data to adjust for the change in panel composition.
The practical effect: a 52-week report you pulled in January may show slightly different numbers than the same 52-week report you pull in April if a panel event happened in between. The data isn't wrong — it's more accurate — but if you're tracking brand performance against a historical baseline, the baseline itself can shift.
The mitigation: lock down the historical baseline report at the start of a reporting cycle (e.g., fiscal year start) and store it separately. Use the locked version for year-over-year comparisons; use the current version for current-period reads. When the numbers don't reconcile, ask your SPINS rep whether a panel composition change could explain the difference.
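A minimal sketch of the lock-and-diff workflow, assuming the 52-week pull can be exported as a week-to-dollars mapping; the file format and 2% tolerance are assumptions, not SPINS features:

```python
import json
import pathlib

def lock_baseline(report, path):
    """Snapshot the baseline pull (week -> dollars) once per cycle;
    refuse to overwrite so the locked baseline stays frozen."""
    p = pathlib.Path(path)
    if p.exists():
        raise FileExistsError(f"baseline already locked: {path}")
    p.write_text(json.dumps(report, indent=2))

def drift_vs_baseline(current, path, tol=0.02):
    """Weeks where the current pull moved more than `tol` against the
    locked baseline -- candidates for a panel-composition question."""
    baseline = json.loads(pathlib.Path(path).read_text())
    return {wk: (baseline[wk], cur)
            for wk, cur in current.items()
            if wk in baseline and abs(cur / baseline[wk] - 1) > tol}
```

Anything `drift_vs_baseline` returns is the list you bring to the rep, already scoped to specific weeks.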
Doing this in Scout
The standard SPINS portal shows the raw numbers but doesn't draw attention to suppression vs. projection vs. real movement. The cells that should trigger an investigation look identical to the cells that shouldn't.
Scout dashboards over your SPINS extracts give you a working surface for applying the three diagnostic questions above — ACV, dollars, and units on the same time series; cross-retailer comparisons; and a shared dashboard your sales and category leadership can read together instead of mailing PDFs around. Scout doesn't auto-flag suppressed cells or auto-detect backfills today, so the diagnostic discipline still belongs to the analyst. What Scout does is give your team a single place where the diagnostic work compounds — once you've figured out a Week-18 backfill, the annotation lives on the dashboard instead of in a pivot table on someone's laptop.
Summary + further reading
- SPINS data is projected from a sample, not measured directly, and the projection coefficients shift as the panel composition changes.
- Suppressed cells are not zero — reading them as zero biases distribution, share, and lift calculations downward.
- Three diagnostic questions for any anomalous cell: did ACV move with sales? is the surrounding week unusually high? is the retailer direct-scan or distributor-flow?
- Before presenting a suspicious number, validate: implied velocity, parallel distributor signal, and a quick call to your SPINS rep.
- Historical data can be retroactively adjusted when panel composition changes; lock down baseline reports at the start of reporting cycles.
Related reading: What is SPINS data? · Syndicated vs. panel data