Reconciling Kroger promo lift across SPINS, Stratum, and scan

Why this matters

The Tuesday after the brand's quarterly TPR at Kroger ends, three different lift numbers show up:

  • SPINS reports the promo at +30% lift (versus the brand's 12-week trailing baseline).
  • 84.51° Stratum reports the same promo at +18% lift (versus the brand's pre-promo loyalty-attached purchase rate).
  • Kroger in-store scan data — pulled through the brand's KPM or category-buyer relationship — reports +35% lift (versus the same period last year, raw).

All three numbers come from the same promo, the same brand, the same Kroger banners. They disagree by 17 percentage points. The CMO wants one number for the board deck. The CFO wants the most conservative one. The category lead wants to defend the promo to the buyer.

This page is about which number answers which question, and how to reconcile the three without picking the most flattering one.

What each source actually measures during a promo

For full background on the sources themselves, see SPINS vs. 84.51° Stratum vs. Circana. For the baseline-period mechanics that drive much of the disagreement, see Choosing a baseline period for SPINS post-promo lift. The promo-specific summary:

| Source | What it measures | Cadence | Baseline default | Blind spots |
| --- | --- | --- | --- | --- |
| SPINS | Projected $ at Kroger from syndicated panel | Weekly with multi-week lag | Trailing N-week pre-promo or YoY | No household, no basket; banner-level is a paid add-on; suppression on small cells |
| 84.51° Stratum | Loyalty-card-attached purchases at Kroger family of banners | Near-real-time (days, not weeks) | Pre-promo loyalty-attached purchase rate, often comparable to a 4–8 week trailing window | Loyalty-attached only (~50% of trips per 84.51°'s public framing); behavior shift on non-loyalty trips invisible |
| Kroger in-store scan (via KPM or buyer-shared data) | Raw POS dollars at the banner level, all trips | Days | Often YoY raw, or "prior 4 weeks" — varies by buyer | No household attribution, no cross-retailer context, baseline choice not standardized |

The disagreement starts here: the three sources have different universes, different baselines by default, and different lags. The fact that they produce different lift numbers is structural, not a data quality problem. Reconciling them means accepting that each answers a different version of "did the promo work."

Why the three numbers diverge — five real causes

1. Loyalty-attached vs. total trips

Stratum measures loyalty-card-attached purchases. Roughly half of Kroger trips have a loyalty card attached, per 84.51°'s public framing. A promo that disproportionately attracted non-loyalty trips (e.g., walk-in shoppers responding to in-store signage) will look bigger in SPINS/scan than in Stratum because Stratum can't see the non-loyalty incremental.

A promo that disproportionately drove existing loyalty buyers to buy more will look bigger in Stratum than in SPINS, because SPINS treats this as ordinary baseline behavior while Stratum surfaces the per-household frequency change.

2. Baseline choice

SPINS's default baselines (trailing 4 or 12 weeks) handle pre-promo pull-forward differently than scan's typical "prior 4 weeks" cut. Stratum's loyalty-attached pre-promo rate is its own baseline that doesn't directly compare to either. Same promo, three baselines, three lifts. A reconciliation that doesn't first align the baseline window definitions is doing apples-to-oranges math.

The five common SPINS baseline choices and their tradeoffs are covered in Choosing a baseline period for SPINS post-promo lift.
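The sensitivity is easy to see in a minimal sketch. All numbers below are invented weekly sales figures, but the mechanics are the point: the same promo week reads as three different lifts under three baseline conventions.

```python
# Hypothetical weekly brand dollars at Kroger; the promo week is the last entry.
weekly_sales = [100, 102, 101, 105, 104, 106, 108, 107, 110, 112, 111, 118, 140]
promo_week = weekly_sales[-1]
same_week_last_year = 104  # assumed YoY comparable

def lift_vs(baseline):
    return (promo_week - baseline) / baseline * 100

trailing_4 = sum(weekly_sales[-5:-1]) / 4     # mean of the 4 pre-promo weeks
trailing_12 = sum(weekly_sales[-13:-1]) / 12  # mean of the 12 pre-promo weeks

print(f"vs trailing-4:  {lift_vs(trailing_4):+.1f}%")          # ≈ +24%
print(f"vs trailing-12: {lift_vs(trailing_12):+.1f}%")         # ≈ +31%
print(f"vs YoY raw:     {lift_vs(same_week_last_year):+.1f}%") # ≈ +35%
```

On a gently growing series, the trailing-4 baseline is the highest (most recent sales) and so yields the lowest lift; raw YoY yields the highest. That spread alone can account for most of a multi-point disagreement between sources.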

3. Banner-level vs. total-Kroger reads

SPINS standard delivery is total-Kroger; banner-level is a paid add-on. Stratum and scan typically deliver banner-level by default. If the promo over-indexed at certain banners (Ralphs, King Soopers — the natural/urban-leaning banners), banner-level reads will show strength that the total-Kroger SPINS read averages out. See A workflow for monitoring Kroger banner performance in SPINS and Reading Kroger total-store performance.

4. Basket substitution

A promo on the brand's 6 oz pack might cannibalize sales of the brand's 12 oz pack within the same shopping trip. SPINS catches the basket-level wash at the brand-aggregate level only if the brand SKUs are rolled up; at SKU level, the 6 oz looks like a hero while the 12 oz looks soft. Stratum's basket-level view directly shows the within-trip substitution; SPINS and scan don't.
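A small sketch of the SKU-view vs. rollup-view gap, with invented dollar figures:

```python
# Hypothetical promo-week and baseline dollars by pack size (all numbers invented).
baseline = {"6oz": 40_000, "12oz": 60_000}
promo    = {"6oz": 70_000, "12oz": 48_000}

def lift_pct(p, b):
    return (p - b) / b * 100

# SKU view: the promoted 6 oz looks like a hero, the 12 oz looks soft.
for sku in baseline:
    print(sku, f"{lift_pct(promo[sku], baseline[sku]):+.0f}%")  # 6oz +75%, 12oz -20%

# Brand rollup: the within-trip substitution washes out to a net lift.
brand_lift = lift_pct(sum(promo.values()), sum(baseline.values()))
print("brand rollup", f"{brand_lift:+.0f}%")  # +18%
```

The rollup number is the one that answers "did the promo grow the brand"; the SKU-level pair is what the substitution looks like before the wash.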

5. Reporting lag and what counts as the "promo period"

SPINS's weekly cadence and Circana-licensed conventional data have a multi-week lag and a fixed weekly cut. A promo that ran Tuesday-to-Tuesday is reported as a Sunday-to-Saturday weekly bucket, which includes one or two non-promo days. Stratum's near-real-time cut and scan data can isolate the actual promo dates. Same promo, different period boundaries, different totals.

A reconciliation framework

Three steps to make the numbers comparable rather than just adjacent.

Step 1 — align the baseline definitions

Pick one baseline window and apply it to all three sources where possible.

  • For SPINS, pull the trailing 8 weeks pre-promo (excluding the immediate pre-promo week to limit pull-forward contamination — see Choosing a baseline period for SPINS post-promo lift).
  • For Stratum, pull the same 8-week pre-promo window in loyalty-attached terms.
  • For scan, pull the same 8-week pre-promo window from the raw scan feed.

This still leaves the universe differences (loyalty-only for Stratum), but at least the baseline windows match.

Step 2 — align the period boundaries

Define the "promo period" by actual TPR start and end dates, not by the syndicator's weekly bucket. Where the syndicator's weekly bucket is the only option (SPINS), note explicitly that the SPINS read includes 1–2 non-promo days at the edges.
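The boundary mismatch is easy to quantify with invented daily data. The dates below assume a Tuesday-start TPR; the flat daily dollars are hypothetical.

```python
from datetime import date, timedelta

# Assumed TPR window: Tue 2025-09-09 through Mon 2025-09-15 (7 promo days).
promo_start, promo_end = date(2025, 9, 9), date(2025, 9, 15)

def daily_sales(d):
    # Invented: $10K/day baseline, $16K/day during the promo.
    return 16_000 if promo_start <= d <= promo_end else 10_000

# Exact promo window — what scan and Stratum can isolate.
exact = sum(daily_sales(promo_start + timedelta(n)) for n in range(7))

# Syndicated Sun–Sat bucket containing the promo start (Sun 2025-09-07):
# 2 non-promo days plus only 5 of the 7 promo days.
bucket_start = date(2025, 9, 7)
bucket = sum(daily_sales(bucket_start + timedelta(n)) for n in range(7))

print(exact, bucket)  # 112000 100000
```

The weekly bucket both dilutes the promo with non-promo days and pushes two promo days into the following bucket, so the period-boundary note is not pedantry: it changes the numerator.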

Step 3 — reconcile what each source is "actually saying"

Build a small table that decomposes the lift number into its universe and baseline:

| Source | Lift % | Numerator (incremental $) | Universe | Comment |
| --- | --- | --- | --- | --- |
| SPINS | +30% | +$120K syndicated-projection $ | Total Kroger family, all trips | Includes loyalty + non-loyalty, includes panel projection |
| Stratum | +18% | +$50K loyalty-attached $ | Loyalty cards only (~50% of trips) | Loyalty buyers — likely existing brand customers |
| Scan | +35% | +$140K raw POS $ | Total Kroger, raw POS | No baseline rigor; raw YoY can include brand growth |

Now the numbers tell a coherent story:

  • Scan's +35% includes the brand's organic growth (no trend adjustment). Subtract the brand's ~10% YoY growth and you're at a ~+25% trend-adjusted lift.
  • SPINS's +30% is the syndicated-projected lift; broadly in line with the trend-adjusted scan read.
  • Stratum's +18% is the loyalty-attached lift only. If loyalty-attached trips are ~50% of baseline dollars, total lift is a dollar-weighted blend of the loyalty and non-loyalty lifts. Reaching the trend-adjusted SPINS/scan reads of ~25–30% therefore requires a non-loyalty lift of roughly +32–42%, higher than the loyalty lift and consistent with non-loyalty shoppers being more responsive to in-store promo signage.

That's the actual story: the brand's promo over-indexed on non-loyalty trips. The three numbers don't disagree; they're each measuring different slices.
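The reconciliation arithmetic above can be sketched directly. The inputs are the illustrative figures from the table; the ~50% loyalty share is an assumption taken from 84.51°'s public framing, and the simple trend subtraction is a first-order correction, not a formal adjustment.

```python
scan_raw_lift   = 35.0  # raw YoY, includes brand trend
brand_yoy_trend = 10.0  # brand's organic YoY growth
stratum_lift    = 18.0  # loyalty-attached lift
loyalty_share   = 0.5   # assumed loyalty-attached share of baseline dollars
spins_lift      = 30.0  # trailing-baseline lift, trend already implicit

# First-order trend adjustment for the raw scan read.
scan_adjusted = scan_raw_lift - brand_yoy_trend

# Dollar-weighted blend identity: total = s * loyalty + (1 - s) * non_loyalty.
# Solve for the non-loyalty lift implied by the SPINS read.
implied_non_loyalty = (spins_lift - loyalty_share * stratum_lift) / (1 - loyalty_share)

print(f"trend-adjusted scan: +{scan_adjusted:.0f}%")        # +25%
print(f"implied non-loyalty: +{implied_non_loyalty:.0f}%")  # +42%
```

Running the same identity against the trend-adjusted scan read (+25%) instead of SPINS implies roughly +32% non-loyalty lift, which is where the +32–42% range comes from.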

Worked example — a Kroger TPR across banners

A wellness brand runs a 25%-off TPR on the 8 oz hero SKU across all Kroger banners, week of 2025-09-09. Brand has been growing ~10% YoY.

| Source | Reported lift | Baseline used | Period boundary | Banner detail available? |
| --- | --- | --- | --- | --- |
| SPINS (banner-level add-on) | +30% Kroger total; +42% Ralphs, +38% King Soopers, +22% core banner-name, +35% Fred Meyer | Trailing 12 weeks pre-promo | Sun–Sat weekly bucket | Yes |
| 84.51° Stratum | +18% loyalty-attached, brand level | Trailing 8 weeks pre-promo (loyalty-attached) | Promo start–end dates | Yes — broadly tracks the banner pattern |
| Kroger scan (KPM) | +35% raw YoY, Kroger total; banner skew matches SPINS | YoY raw, week of last year | Promo start–end dates | Yes |

Reconciled reading:

  • Trend-adjusted SPINS lift (12-wk trailing baseline, in-period): +30% as reported. Already trend-implicit because trailing-12 is a current-state baseline. Defensible.
  • Trend-adjusted scan lift: +35% raw YoY minus ~10% YoY brand growth = +25% trend-adjusted. Within range of SPINS.
  • Stratum loyalty-attached lift: +18%. The gap to SPINS/scan (+30/+25%) implies non-loyalty incremental was higher than loyalty incremental. Probably the in-store signage worked better on impulse trips than on planned-list loyalty trips.

The right reporting summary:

"Kroger TPR delivered roughly +25–30% trend-adjusted lift across the family of banners, with the strongest reads at Ralphs (+42%) and Fred Meyer (+35%), where the brand's category fit is highest. The Stratum loyalty-attached read of +18% suggests the promo over-indexed on non-loyalty (impulse) trips relative to the brand's typical promo pattern."

That report uses all three sources, names the difference, and doesn't pretend the +35% scan number is the same as the +30% SPINS number.

Anti-patterns

  • Picking the highest of three lift numbers and using it without context. The +35% scan number includes brand trend; using it unadjusted overstates promo effectiveness and biases future promo-investment decisions.
  • Picking the lowest of three lift numbers as "the conservative one." Stratum's loyalty-attached lift is narrower in universe, not more conservative — it excludes a real chunk of the incremental dollars that SPINS and scan are correctly measuring.
  • Averaging the three numbers. The mean of +18%, +30%, and +35% is +27.7%. There's no methodology that produces that number. It's numerology, not analysis.
  • Comparing different baseline windows across sources. If SPINS uses trailing-12 and scan uses YoY, the two lifts aren't comparable. Re-pull on aligned baselines or document the gap explicitly.
  • Reporting Stratum's loyalty-attached lift as "Kroger lift" without the universe footnote. The number is correct for loyalty-attached trips; it's not the brand's full Kroger lift.
  • Ignoring banner-level skew when reporting total. A flat total-Kroger lift can hide +40% at urban-skewed banners and -10% at value-skewed banners. See A workflow for monitoring Kroger banner performance in SPINS.
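The masking effect in the last anti-pattern is just dollar-weighted averaging. A sketch with invented banner weights and lifts:

```python
# Hypothetical banner-level baseline dollars ($K) and lift % (all invented).
banners = {
    "urban-skewed": (200, +40.0),
    "value-skewed": (400, -10.0),
    "mid-market":   (400,   0.0),
}

total_base = sum(base for base, _ in banners.values())
total_incr = sum(base * lift / 100 for base, lift in banners.values())
total_lift = total_incr / total_base * 100

print(f"total-Kroger lift: {total_lift:+.1f}%")  # +4.0%
```

A near-flat +4% total "hides" a +40% banner and a -10% banner; the banner-level read, not the total, is where the strategic finding lives.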

Doing this in Scout

Scout's promo-analysis surface lets you pull SPINS-extracted Kroger lift side-by-side with uploaded Stratum exports and (where the brand has them) scan exports, with aligned baseline windows configurable across all three sources. The scenario explorer view shows the lift number from each source, the baseline window used, and the universe footnote, so the reconciliation table above is the default view rather than a manual assembly. Direct API feeds to Stratum and to Kroger scan aren't wired today; the integration model is upload-driven.

Summary + further reading

  • SPINS, 84.51° Stratum, and Kroger scan each measure a different slice of the same promo. The three lift numbers will disagree structurally; the disagreement is not a data-quality issue.
  • The reconciliation framework: align baseline windows, align period boundaries, then decompose the lift gap into universe (loyalty vs. total) and methodology (raw vs. trend-adjusted) differences.
  • Report the picture from all three sources rather than picking the most flattering — the universe gap (loyalty vs. non-loyalty, banner skew) is often the strategic finding.

Related: SPINS vs. 84.51° Stratum vs. Circana: Kroger data sources · Choosing a baseline period for SPINS post-promo lift · A workflow for monitoring Kroger banner performance in SPINS

To see this on your own data, book a Scout demo.
