SPINS metrics decision tree: velocity, share of shelf, or TDP?

Why this SPINS metrics decision tree matters

A category lead asks the analyst three different questions in the same Slack thread:

  1. "How is our brand performing at Sprouts?"
  2. "Are we under-indexed on shelf in our key stores?"
  3. "Where is our distribution growing fastest?"

Each of these wants a different metric. The first wants velocity ($/store/week or units/store/week). The second wants share of shelf plus the over/under-index. The third wants ACV change (or TDP change if assortment is part of the story).

Most analyst confusion in this space comes from picking the wrong metric for the question and then defending the wrong number in a meeting. This page is the decision tree.

Velocity, ACV, TDP, and share of shelf: one sentence each

  • Velocity — sales per carrying store per week. Measures the quality of distribution, not the breadth.
  • ACV — dollar-weighted % of category sales at carrying stores. Measures the importance-weighted breadth of distribution.
  • TDP — ACV × average SKUs per carrying store. Measures breadth × assortment depth in one number.
  • Share of shelf — facings as % of total category facings. Measures shelf-level competitive presence (a leading indicator of share of sales). Note: shelf data lives in separate vendor systems from SPINS sales data; share of shelf is typically reconciled outside the syndicated surface.
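
The four definitions above can be sketched as one-line functions. This is an illustrative sketch, not a SPINS API: the function names and inputs (dollar sales, carrying-store counts, category dollars, SKU counts, facings) are assumptions about data you would already have pulled from your extracts.

```python
# Illustrative metric definitions; input names are assumptions, not SPINS fields.

def velocity(dollar_sales, carrying_stores, weeks):
    """Sales per carrying store per week: the quality of distribution."""
    return dollar_sales / (carrying_stores * weeks)

def acv(category_sales_at_carrying_stores, total_category_sales):
    """Dollar-weighted % of category sales at carrying stores: breadth."""
    return 100 * category_sales_at_carrying_stores / total_category_sales

def tdp(acv_pct, avg_skus_per_carrying_store):
    """ACV x SKUs/door: breadth and assortment depth in one number."""
    return acv_pct * avg_skus_per_carrying_store

def share_of_shelf(brand_facings, total_category_facings):
    """Brand facings as % of total category facings."""
    return 100 * brand_facings / total_category_facings

# A 13-week quarter of $3.3M across 1,587 carrying stores:
print(velocity(3_300_000, 1_587, 13))  # ~ $160/store/week
```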

The decision tree

Question type 1 — "Is the brand growing or shrinking?"

Sub-question | Right metric
Is total brand $ growing? | $ sales (just look at the dollars)
Is the per-store performance improving? | Velocity ($/store/week)
Is distribution expanding? | ACV change
Is the brand getting more shelf real estate? | TDP change (if SKU count is moving) or share of shelf change (if facings)
Is the brand growing because of more stores or more sales per store? | Decompose: $ growth = ACV change × velocity change
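
The last row's decomposition can be sketched directly. This assumes store count moves roughly in proportion to ACV, so the velocity factor is the residual of the dollar factor over the distribution factor; the function name and inputs are illustrative.

```python
# Multiplicative growth decomposition (illustrative, assumes store count
# tracks ACV): $ growth factor = distribution factor x velocity factor.

def decompose_growth(dollars_prev, dollars_curr, acv_prev, acv_curr):
    dollar_factor = dollars_curr / dollars_prev
    distribution_factor = acv_curr / acv_prev
    velocity_factor = dollar_factor / distribution_factor  # the residual
    return distribution_factor, velocity_factor

# $3.0M -> $3.3M on flat 60% ACV:
dist, vel = decompose_growth(3_000_000, 3_300_000, 60, 60)
print(f"distribution {dist - 1:+.0%}, velocity {vel - 1:+.0%}")
# distribution +0%, velocity +10% -> the growth is per-store performance
```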

Question type 2 — "How does the brand compare to peers?"

Sub-question | Right metric
Who has more total dollars in the category? | $ share
Who is selling more per store? | Velocity comparison
Who has wider distribution? | ACV comparison (weighted) or store count (unweighted)
Who has more shelf real estate? | Share of shelf
Who is converting shelf to sales most efficiently? | Over/under-index = $ share ÷ share of shelf
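
The over/under-index in the last row is a simple ratio; a value above 1 means the brand converts shelf to sales above its footprint. A minimal sketch with made-up numbers:

```python
# Over/under-index = $ share / share of shelf (both in %). Illustrative values.

def shelf_index(dollar_share_pct, shelf_share_pct):
    return dollar_share_pct / shelf_share_pct

# 15% $ share on 12% of facings: over-indexing, an argument for more shelf.
print(shelf_index(15, 12))  # 1.25
# 9% $ share on the same 12% of facings: under-converting.
print(shelf_index(9, 12))   # 0.75
```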

Question type 3 — "What should we do strategically?"

Question | Right metric to investigate
Can we add more stores? | ACV. If ACV is already 90%+, distribution growth is capped — focus elsewhere.
Can we add SKUs to existing stores? | TDP and SKUs/door. If SKUs/door is 1, headroom is real.
Can we earn more shelf at retailers we're already in? | Share of shelf vs. share of sales (over-index argues for more shelf).
Can we improve velocity at the stores we're in? | Velocity by store, segmented by store type, region, or banner.

A worked example

A brand reports the following changes from 2024-Q4 to 2025-Q1:

Metric | Q4 2024 | Q1 2025 | Change
Total $ | $3.0M | $3.3M | +10%
ACV | 60% | 60% | flat
Velocity ($/store/week) | $145 | $160 | +10%
TDP | 120 | 132 | +10%
Share of shelf | 12% | 12% | flat

What the right metric tells you, and the wrong metric lets you miss:

  • $ alone says "+10% growth" — true but not useful for strategy.
  • ACV says distribution is flat. No new doors. Door-expansion isn't the growth driver.
  • Velocity says +10%. Same stores, more sales per store. This is the actual story.
  • TDP says +10%. Combined with flat ACV, this means the SKU count per door went up — the brand added a SKU at existing doors. ACV didn't change but TDP did.
  • Share of shelf says flat. Same shelf real estate as before.

The growth story is "we added a SKU at our existing doors and it's selling, lifting per-door velocity 10% and TDP 10% with no change in ACV or shelf footprint." Anyone defending the growth story with just the +10% number is leaving the strategic implication on the floor — namely, that the brand has demonstrated assortment-depth upside and could push for more SKUs at existing doors with a data-backed argument.
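
The worked example's arithmetic can be checked directly: with TDP = ACV × SKUs/door, flat ACV plus a 10% TDP gain forces SKUs/door up ~10%. Numbers are from the table above.

```python
# Verifying the worked example's decomposition from the reported figures.

acv_q4, acv_q1 = 60, 60      # ACV %, flat quarter over quarter
tdp_q4, tdp_q1 = 120, 132    # TDP, +10%

# TDP = ACV x SKUs/door, so SKUs/door = TDP / ACV
skus_per_door_q4 = tdp_q4 / acv_q4  # 2.0
skus_per_door_q1 = tdp_q1 / acv_q1  # 2.2 -> one added SKU per ~5 doors' worth
print(skus_per_door_q4, skus_per_door_q1)

# $ growth = distribution change x velocity change = 1.0 x 1.10 = +10%,
# consistent with the $3.0M -> $3.3M move in the table.
```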

Velocity benchmarks by channel and category

Velocity varies enormously by category and channel. Knowing the rough benchmarks for your category keeps you from misreading whether a number is strong or weak:

Category / channel | Typical velocity range ($/store/week) | Notes
Protein bars, Natural Channel | $35–$120 | Category leaders (RXBar, Larabar) run $80–$140 at Sprouts
Supplements (capsule/powder), Natural Channel | $20–$80 | High velocity at top-selling SKUs in standalone supplement sets
RTD beverages, Natural Channel | $40–$150 | Wide range; energy/functional tend higher
Snack chips/crackers, Natural Channel | $25–$90 | Staple-shelf categories; velocity driven by reorder frequency
Natural Channel overall (median brand) | $30–$60 | Rough benchmark for a mid-tier performing brand

These are rough ranges — what matters for benchmarking is your category-specific velocity at a given retailer, not cross-category averages. A $45/store/week velocity in a supplement category is very different from $45/store/week in a commodity snack category. Ask your SPINS rep for the category velocity distribution, not just your brand's number.
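
Asking for the category velocity distribution rather than a cross-category average amounts to a percentile rank. A minimal sketch, with a made-up list of peer velocities at one retailer:

```python
# Where does a brand's velocity sit in the category distribution at one
# retailer? Peer velocities below are invented for illustration.

def velocity_percentile(brand_velocity, category_velocities):
    """% of category peers this brand's velocity beats."""
    below = sum(v < brand_velocity for v in category_velocities)
    return 100 * below / len(category_velocities)

peers = [22, 30, 38, 41, 45, 52, 58, 63, 74, 95]  # $/store/week
print(velocity_percentile(62, peers))  # beats 7 of 10 peers: 70.0
```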

Velocity as a pitch tool for new distribution

Velocity is the metric that wins new distribution at a retailer who doesn't yet carry the brand. The logic: a retailer's buyer evaluates new items by asking "if I authorize this SKU, what will it do per store per week?" They have a minimum velocity threshold (which varies by category and chain) below which they won't authorize.

The strongest new-distribution pitch pairs two velocity numbers:

  1. Current velocity at comparable retailers already carrying the brand. "We're doing $62/store/week at Natural Grocers, which has a similar demographics profile to your core stores." This is the proof of concept.

  2. Category velocity benchmark for the segment. "The plant-based protein bar segment averages $48/store/week at the natural channel; our velocity is running 29% above category average." This frames your performance relative to the shelf set the buyer manages.

What doesn't work: presenting velocity at a very different retailer as proof of performance ("we do $120/store/week at Whole Foods so you should carry us at Kroger"). Different demographics, different category sets, different price sensitivity — velocity doesn't transfer across channels without explanation.

Anti-patterns to watch for

  • Reporting velocity without also reporting ACV. Velocity at high ACV is a strong story. Velocity at low ACV is suspicious — small store base means high variance and panel sensitivity. A $180/store/week velocity that's based on 6% ACV and 8 stores is statistically unreliable. Always report the sample (store count or ACV) alongside velocity.
  • Reporting ACV change without store count. ACV can shift from panel composition or weighting changes alone (see Reading SPINS panel coverage). Always cross-check that store count moved in the same direction.
  • Reporting TDP without decomposing. TDP up 10% can be ACV +10% (door growth), SKUs/door +10% (assortment growth), or some mix. Each implies a different commercial action.
  • Reporting share of shelf without share of sales. Share of shelf is only useful in ratio with share of sales. Standalone shelf share doesn't tell you whether the brand is over- or under-converting.
  • Using velocity to compare across categories without normalization. A $60/store/week velocity means something very different in a high-turn snack category vs. a slow-turn supplement category. Cross-category velocity comparisons are noise without category-typical velocity context.
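
The first two anti-patterns can be enforced mechanically: never emit a velocity without its sample, and flag thin store bases. The thresholds below are illustrative placeholders, not SPINS guidance.

```python
# Reporting guard: velocity always travels with its sample (store count, ACV),
# and small bases get flagged. Thresholds are illustrative.

def report_velocity(vel_per_store, store_count, acv_pct,
                    min_stores=30, min_acv=10.0):
    line = (f"${vel_per_store:.0f}/store/week "
            f"(n={store_count} stores, {acv_pct:.0f}% ACV)")
    if store_count < min_stores or acv_pct < min_acv:
        line += " [CAUTION: small base, high variance]"
    return line

print(report_velocity(180, 8, 6))    # the suspicious case from the bullet above
print(report_velocity(62, 410, 60))  # safe to present
```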

Diagnosing velocity by region and banner

Aggregate velocity hides nearly every interesting commercial question. A national brand at $62/store/week average velocity might be running $95/store/week in the Pacific Northwest and $38/store/week in the Southeast — same brand, same SKUs, very different stories underneath.

The first decomposition cut for any velocity diagnostic:

  • By region. A 2–3× velocity spread between regions is normal for natural-channel brands. Wellness and functional categories skew dramatically to West Coast and Northeast metro markets; treat the regional aggregate as the operating reality, not the national average. If your category leader runs $80/store/week nationally but $135/store/week in California, that 1.7× regional skew is also the shape your benchmarking should follow.
  • By banner. Within a chain like Sprouts or Whole Foods, velocity varies by store cluster (urban vs. suburban, format size, median household income within trade area). For Kroger specifically, see the Kroger banner vs. total page — the banner-level decomposition for Kroger requires a separate data source from standard SPINS.
  • By new vs. mature stores. Stores that picked up the brand in the last 12 weeks typically run 30–50% below mature-store velocity for the first 6 months as shoppers discover the new placement. If you're expanding distribution fast, the velocity trend can look like a decline even though every cohort is performing on plan. Cohort-aging velocity (week-since-launch on the x-axis) is the right view, not week-over-week aggregate velocity.
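
The cohort-aging view in the last bullet is a re-index from calendar week to weeks-since-launch per store. A minimal sketch, assuming your export can be shaped into (store_id, first_week_carried, week, velocity) rows:

```python
# Re-indexing velocity to weeks-since-launch per store, so fast distribution
# expansion doesn't read as an aggregate velocity decline. Input shape is an
# assumption about your export, not a SPINS format.
from collections import defaultdict

def cohort_velocity(rows):
    """Average velocity keyed by weeks since each store first carried the brand."""
    buckets = defaultdict(list)
    for store_id, first_week, week, vel in rows:
        buckets[week - first_week].append(vel)
    return {age: sum(v) / len(v) for age, v in sorted(buckets.items())}

rows = [
    ("s1", 0, 0, 30), ("s1", 0, 12, 55),  # mature store ramping over its life
    ("s2", 12, 12, 28),                   # new store: week 0 of *its* life
]
print(cohort_velocity(rows))  # {0: 29.0, 12: 55.0}
```

Plotting the returned dict (age on the x-axis) gives the week-since-launch curve the bullet recommends over week-over-week aggregate velocity.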

The right way to present velocity to commercial leadership: lead with the regional or banner cut that's most actionable, not the national average. A "we're down 8% on velocity" headline that hides a 24% gain in Pacific Northwest and a 22% drop in Mid-Atlantic is the worst of both worlds — it obscures both the success and the problem the team should be working on, and it makes the analyst look like a junior who's never run a real diagnostic.

Doing this in Scout

Scout's brand-performance views surface velocity, ACV, and TDP side-by-side from SPINS extracts so the metric set is one read instead of four pivot tables. Share-of-shelf data comes from a separate vendor surface (audit / image-recognition / planogram) and isn't integrated into Scout today; the over/under-index calculation is a manual reconciliation against the shelf-data export, paired with the share-of-sales number from the Scout dashboard. For the velocity benchmarking use case, Scout's retailer-comparison view lets you see the brand's velocity against the category distribution at each retailer — so the "how does our $62/store/week rank in this category at this chain" question is a glance rather than a sort.

Summary + further reading

  • Pick the metric that matches the question type — growth, peer comparison, or strategic action each calls for different metrics.
  • The most common error is reporting one metric (usually $ growth or ACV) without the others, which misses the strategic implication.
  • Always cross-check ACV change against store count; always decompose TDP change into ACV vs. SKUs/door.
  • For new-distribution pitches, pair current-retailer velocity against category benchmark velocity — not cross-channel velocity that won't transfer.

Related: What is TDP? · What is share of shelf?

See this on your own data: book a Scout demo

Want this as a Google Sheet?

Drop your email and we'll send the worked example.