Bain & Company pioneered the perfect store concept over 15 years ago. Since then, every major CPG company has adopted it under their own name. Coca-Cola calls it RED—Right Execution Daily. P&G calls it Golden Store. Unilever calls it Perfect Store. PepsiCo calls it Flawless Execution.
Different names, but they all face the same problem: they invest in category strategy, planogram design, trade promotions, and field teams, but none of that investment converts into sales unless execution at the shelf matches the plan.
Perfect store execution is the framework that defines what 'matching the plan' actually looks like at the store level—and gives a field team a scored standard to measure against on every visit.
Most enterprise CPG brands already have a perfect store program. The problem is that most programs run on measurement methods too slow and too inaccurate to catch execution failures before they cost revenue. A program that captures compliance data monthly is telling a category manager what the shelf looked like on audit day—not what a shopper encountered on any other day of the month.
Perfect store execution is a structured framework used by CPG brands to define, measure, and enforce in-store execution standards across every outlet in a network. It establishes what 'correct' looks like at the store level—for every SKU position, every price tag, every promotional display—and scores each store against that standard on every visit.
The framework exists because the distance between a planogram approved at HQ and a shelf that matches it in a store is larger than planning teams generally account for.
Field reps vary in attention and knowledge, resets execute imperfectly, store managers make adjustments, and competitor activity changes the shelf between visits. Without a defined standard and a consistent measurement method, there's no reliable way to know whether any given store is actually executing the brand strategy or only approximately executing it.
A perfect store score is a composite number—typically 0 to 100—calculated from weighted sub-scores across five execution pillars. A store scoring 88 is executing 88% of the defined standard. The key is understanding which pillar is generating the 12% shortfall and what that costs in revenue terms—not just reporting the aggregate number.
That distinction between measuring a score and understanding what drives it is what separates a perfect store program that improves execution from one that just tracks it.
Each pillar is a distinct category of execution failure with its own commercial consequence and its own field action.
A blended compliance score that combines all five produces a number that looks useful but doesn't tell a category manager or field execution director where to focus. The five pillars need to be measured, weighted, and tracked separately.
The right SKUs are on the shelf, in the correct quantity, with no gaps.
Availability is typically weighted heaviest in the perfect store score—around 40% in beverage and food categories—because nothing else matters if the product isn't there. An empty shelf position is a lost sale regardless of how well the price tag, the planogram, and the display are executed.
Availability covers on-shelf availability (OSA), out-of-stock detection, and near-out-of-stock situations where a SKU has dropped to 1–2 remaining facings. A product with one unit pushed to the back has effectively disappeared from a shopper's view before the shelf technically goes empty.
Products are in the correct shelf position, at the correct height, with the correct number of facings facing forward. Visibility is where the Golden Zone matters directly.
The Golden Zone—approximately 120 to 160 centimeters from the floor—is the shelf band that sits at eye level for the average adult shopper. Behavioral research consistently shows that products in the Golden Zone generate significantly higher sales velocity than the same product at floor level or above head height.
A field rep who completes a reset and counts the right number of facings but places a hero SKU at floor level has technically met the facing count requirement while missing the commercial intent of the planogram.
Visibility also covers share of shelf and planogram compliance at the position level—not just whether the right SKUs are present but whether they're in the right positions relative to competitors and within the category layout.
The correct price tag is on the shelf, visible, and matches the agreed promotional or everyday price.
Price integrity has two distinct failure modes:
The first is a missing price tag. A shopper who can't find the price of a product hesitates. In impulse categories—beverages, snacks, personal care—hesitation breaks the purchase decision. A display without a price tag is a display that isn't converting at its intended rate.
The second is a price tag that doesn't match the current promotion. When a trade promotion launches on Monday and the promotional price tag hasn't been updated, the promotional display runs at full price. The brand funded the display, negotiated the secondary placement, and built the campaign—and the shelf is charging shoppers the non-promotional price because the tag wasn't changed during the reset.
Promotional displays are built correctly, on time, in the right location, with the right point-of-sale materials in place.
This pillar covers both primary shelf promotions—shelf-talkers, wobblers, price-drop tags—and secondary placements: endcaps, floor displays, clip strips, pallet stacks, and checkout positions.
Secondary placement tracking is where most perfect store programs have their biggest blind spot. A holiday display authorized for the endcap and built correctly on setup day can be moved by a store associate managing space pressure the following Wednesday.
Without visit-level verification, the brand doesn't know until post-campaign sales analysis shows the promotion underperformed—by which point the trade budget has already been spent and the campaign window has closed.
Point-of-sale materials—wobblers, shelf-talkers, header cards, floor decals, secondary display signage—are present, correctly positioned, and undamaged.
POSM compliance is one of the hardest pillars to verify manually at scale. A field rep moving quickly through a store confirms a display is built but may miss that the header card is missing, the price-drop wobbler is turned the wrong way, or the floor decal has been partially covered by a neighboring brand's fixture.
Each of these failures reduces the conversion rate of the display, often without appearing in any audit report. The display is checked as present. The POSM non-compliance stays invisible.
Together, these five pillars define what a perfect store actually means operationally. But how do you score them consistently, and what does that score actually tell you?
Perfect Store Score = (Sum of weighted sub-scores ÷ Maximum possible score) × 100
A concrete example. A beverage brand defines five weighted sub-KPIs:
| Pillar | Weight |
| --- | --- |
| Availability | 40 points |
| Visibility / planogram compliance | 25 points |
| Pricing compliance | 15 points |
| Promotional execution | 15 points |
| POSM (Point of Sale Materials) compliance | 5 points |
A store achieves full availability (40/40), full visibility (25/25), correct pricing (15/15), a promotional display built correctly but missing the price-drop wobbler (11/15), and POSM partially in place (3/5). Perfect Store Score = 94/100 = 94%.
That 94% looks strong. But the shortfall is concentrated in promotion—where the missing wobbler means the display that was funded isn't converting at the promotional rate. A blended 94% score hides a specific, commercially significant problem. A pillar-level breakdown makes it visible.
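The weighted-score arithmetic from the example above can be sketched in a few lines of Python. The pillar names and weights mirror the example table; the function name and dictionary layout are illustrative assumptions, not any real Store360 API:

```python
# Example pillar weights from the beverage-brand table above (sum to 100).
PILLARS = {
    "availability": 40,
    "visibility": 25,
    "pricing": 15,
    "promotion": 15,
    "posm": 5,
}

def perfect_store_score(achieved: dict[str, float]) -> float:
    """(Sum of weighted sub-scores / maximum possible score) x 100."""
    max_score = sum(PILLARS.values())  # 100 with these example weights
    earned = sum(min(achieved.get(pillar, 0), weight)
                 for pillar, weight in PILLARS.items())
    return round(earned / max_score * 100, 1)

# The store from the worked example: full availability, visibility, and
# pricing; promo display missing its wobbler (11/15); POSM partial (3/5).
visit = {"availability": 40, "visibility": 25, "pricing": 15,
         "promotion": 11, "posm": 3}
print(perfect_store_score(visit))  # → 94.0
```

Reporting the per-pillar dictionary alongside the aggregate is what surfaces the promotion shortfall that the blended 94% hides.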
Availability is weighted highest for a beverage or snack brand because an empty shelf position loses a sale in seconds. A beauty brand doing a seasonal launch weights promotion and POSM execution highest because the display drives awareness and trial for shoppers who aren't yet loyal buyers.
The perfect store framework is designed to be calibrated to a brand's commercial priorities—not applied as a generic 100-point checklist identical across categories and store formats. Two brands in the same retailer may have different weights on the same five pillars, and both are correct for their respective strategies.
That calibration is also what makes perfect store scores useful for benchmarking across stores. Once the weights are set, a category manager can rank every store in a network by score, identify the consistent laggards, and direct field team attention to the accounts where execution gaps are costing the most revenue.
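Once weights are fixed and every store has a score, the ranking step itself is trivial. A minimal sketch, with invented store names and scores:

```python
# Hypothetical scores after a network-wide scoring pass; names and
# values are invented for illustration.
scores = {"Store 114": 95, "Store 208": 78, "Store 301": 88, "Store 042": 71}

# Rank lowest first so field attention goes to the biggest execution gaps.
laggards = sorted(scores, key=scores.get)
print(laggards[:2])  # → ['Store 042', 'Store 208']
```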
The relationship between execution completeness and promotional return is not linear. A promotion at 90% execution does not deliver 90% of the expected lift. The shortfall is typically larger—sometimes significantly so—depending on which component is missing.
The reason is that a promotional execution has interdependent components. The display generates foot traffic to the product location. The price tag converts that foot traffic into purchase decisions. The product availability fulfills those decisions. If any one component is missing, the conversion chain breaks.
A display built correctly without the promotional price tag doesn't convert at the promotional rate. Shoppers see the product but no reason to switch from their usual purchase. An out-of-stock on the hero SKU on day three of a two-week campaign means the display collected shopper attention but generated no sale for those days—and a shopper who encountered an empty position may not return. A display positioned in a back-of-store endcap rather than the contracted front-of-store position reaches significantly fewer shoppers than the same display in the agreed location.
The practical implication: a trade marketing director managing a campaign that's 'mostly' complete may be recovering significantly less than the modeled return, because the missing component was the specific conversion trigger in the execution chain.
> "If I walk away from a store today and I have the results tomorrow or at the end of the month, I've missed the opportunity. You only get one chance to make that change. If you do that while you're in the store in real time, you potentially have that sale or that increased volume you would never get when you walk out and find out the result later." — Retail execution leader with 25 years in CPG, including senior roles at Coca-Cola Hellenic
This is why the perfect store score needs to be broken down by pillar, not just reported as a total.
The aggregate number tells a field execution director how their network is performing. The pillar breakdown tells them where the revenue is leaking and what a field rep needs to fix on the next visit.
The last-mile problem in perfect store execution isn't motivational. Field reps generally want to execute correctly. The breakdown is structural—it happens in the handoffs between strategy, instruction, and in-store action.
A reset brief sent by email or PDF gets read on the day it arrives, interpreted individually by each rep, and acted on days later when the rep is standing in the store without the original document.
Two reps in the same region executing the same reset from the same brief can produce two meaningfully different shelf states—not because either rep failed, but because written instructions without visual confirmation allow for interpretation variance.
A planogram approved six weeks ago may not reflect a range review that happened three weeks ago, a promotion that launched last Monday, or the store-specific fixture dimensions that differ from the standard template.
A rep executing against an outdated planogram produces an outdated shelf—and reports it as compliant because it matches the document they were given.
In a traditional audit workflow, a rep completes a task, marks it as done on a checklist, and moves on. Whether the execution matched the plan—whether the price tag is actually there, whether the display is facing the correct direction, whether the hero SKU is at eye level—depends entirely on the rep's own assessment. A rep who self-reports full compliance and a shelf that's 85% compliant both look identical in a manual checklist report.
When a compliance failure is identified—a price tag missing, a display in the wrong bay, a hero SKU at floor level—the correction requires the rep to return or a follow-up visit to be scheduled. If the failure happened on day one of a three-week promotion, and the correction happens on day eight after a manager reviews a report and routes a follow-up, seven days of promotional investment have already delivered reduced returns.
That gap between when an execution failure occurs and when it gets corrected is where trade spend leaks. It's not visible in the campaign launch report. It's only visible in the post-campaign sales analysis—by which point the budget is spent and the window has closed.
Traditional perfect store programs measure compliance at the speed of the audit schedule—monthly for most brands, weekly for the most intensive programs. Image recognition changes the measurement speed to the speed of the store visit.
Every visit generates a scored record. Every gap generates a correction task. Every correction gets documented before the rep leaves the store.
| Step | What happens |
| --- | --- |
| 1. Detect | Rep photographs the shelf section. IR reads all five pillar components and returns a perfect store score for that section within 90 seconds. |
| 2. Assign | Gaps are ranked by commercial priority and delivered as a correction task list to the rep's phone before they leave the aisle. |
| 3. Correct | Rep fixes what's correctable during the current visit. For issues requiring store manager involvement, the rep has photographic evidence to support the conversation immediately. |
| 4. Verify | The correction is photo-documented. HQ sees a before-and-after view of shelf state from the same visit. Compliance reflects actual execution, not self-reported task completion. |
That four-step cycle, completed within a single visit, is the operational difference between a perfect store program that produces accurate compliance data and one that produces optimistic self-assessments.
The key is in steps two and three: the correction happens during the visit rather than on a follow-up trip scheduled days later. An execution failure corrected during the same visit costs the brand hours of sub-optimal shelf time. The same failure routed to a reporting dashboard and actioned on a follow-up visit costs the brand days.
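The detect-assign-correct-verify cycle can be sketched as plain control flow. The `Gap` structure, field names, and function here are illustrative assumptions about what such a workflow tracks, not Store360 internals:

```python
from dataclasses import dataclass

@dataclass
class Gap:
    pillar: str            # e.g. "pricing"
    description: str       # e.g. "promo tag missing on hero SKU"
    impact: float          # estimated revenue at risk, used for ranking
    fixable_in_visit: bool # False when store-manager involvement is needed

def visit_cycle(detected_gaps: list[Gap]) -> dict[str, list[Gap]]:
    """One assign->correct pass within a single store visit.

    Illustrative control flow only; a real system would attach photo
    evidence and timestamps at the verify step.
    """
    # Assign: rank by commercial impact before the rep leaves the aisle.
    tasks = sorted(detected_gaps, key=lambda g: g.impact, reverse=True)
    # Correct: split what the rep can fix now from what needs escalation.
    fixed_now = [g for g in tasks if g.fixable_in_visit]
    escalated = [g for g in tasks if not g.fixable_in_visit]
    return {"fixed": fixed_now, "escalated": escalated}
```

Ranking by estimated impact rather than detection order is the design choice that turns a raw defect list into a prioritized task list.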
The most experienced field reps know every SKU in their category and can read a shelf accurately in 60 seconds. Most reps don't. The gap between the best reps and the average rep in a large field team is the gap between a store that consistently scores 95 and one that consistently scores 78—same planogram, same standards, different execution outcomes.
> "The best sales reps are only a small percentage of people. With image recognition, we're now able to put a tool in the hands of the average rep where they can already get the result and be told the right thing to do. Any salesperson can now perform like a great salesperson because you're telling them what to do. You're making it easy for them." — Retail execution leader with 25 years in CPG, including senior roles at Coca-Cola Hellenic
Image recognition software removes the skill dependency from shelf reading. The rep photographs the section. The AI reads every SKU's position, facing count, price tag, and POSM status. The rep receives a prioritized task list showing which gaps have the highest commercial impact. The analysis that was previously dependent on expert knowledge now happens automatically—which means execution quality across the full field team converges toward the standard the best reps were already meeting.
That convergence is what makes a perfect store program scalable. It stops being an initiative that works well in stores with experienced reps and works poorly everywhere else. It becomes a consistent standard that applies network-wide, regardless of rep tenure or category expertise.
Coca-Cola's Right Execution Daily (RED) program—one of the most sophisticated perfect store programs in CPG—runs on Vision Group’s Store360, branded for Coca-Cola as iRed. That deployment started as a proof of concept and produced 18% revenue growth at Walmart. It's now a global standard across the Coca-Cola system.
Store360 calculates a perfect store score for each store visit. A category manager sees compliance trends by store, by banner, by region, by week—not a monthly average that buries individual store failures in aggregate data.
A store that's fully compliant three weeks out of four shows a 75% compliance trend. A monthly average rounds that to 'acceptable.'
Store360 maps each SKU to its exact shelf position—row, bay, height from floor. A brand can set a Golden Zone target (priority SKUs between rows 2 and 4, corresponding to 120–160cm) and receive an automatic flag when any SKU drifts outside that range. That's a visit-level KPI, not an audit observation.
Store360 reads price tags during the shelf photo—detecting missing tags, mislabeled prices, and promotional pricing that hasn't been updated after a campaign change. Pricing compliance appears alongside availability and positioning failures in a single workflow, on the same visit.
Store360 captures promotional displays, endcaps, and POSM presence in the same visit workflow as the primary shelf read. A brand running a holiday campaign can confirm both home shelf compliance and secondary display execution during the same store visit—without a separate audit program.
The perfect store score across a network creates a natural ranking. Store360's dashboard surfaces the highest and lowest scoring stores by region, banner, and account—so a field execution director can identify which stores consistently execute above network average and use those as execution benchmarks for the rest of the network.
For brands that want to extend perfect store scoring into store-level engagement, Vision Group's ClickToWin platform turns execution scores into a leaderboard visible to store managers and location staff.
Store managers see their score, their rank against peers in the same region or banner, and the specific gaps that are costing them points. A CPG brand using ClickToWin with a foodservice partner saw location managers improving execution scores week-over-week without financial incentives—leaderboard visibility alone drove compliance improvement.
Proof points:

- L'Oréal at Walmart: $50,000+ in replenishment orders across 10 stores in two weeks, moving from audit data that was 2–4 weeks old to live shelf visibility during the visit.
- Vision Group network: 22% fewer out-of-stocks and 600,000+ field hours saved annually across client deployments.
Store360 is live in 55+ countries, runs on the device a field rep already carries, and most clients go live in under 30 days—no new hardware, no retailer permission required.
→ Book a personalized walkthrough to see how Store360 calculates a perfect store score during a standard store visit.
Perfect store execution is a structured framework used by CPG brands to define, measure, and enforce in-store execution standards across every store in a network. It establishes what 'correct' looks like at the store level—for every SKU position, every price tag, every promotional display—and scores each store against that standard. The framework traces back to Bain & Company's work with CPG companies over 15 years ago and is now used industry-wide under different brand names.
Coca-Cola calls it Right Execution Daily (RED). P&G calls it Golden Store. Unilever uses the term Perfect Store. PepsiCo calls it Flawless Execution. The framework and the five core pillars are consistent across all versions—the names reflect each company's internal branding of the same fundamental execution standard.
Availability (are the right SKUs on the shelf with no gaps), visibility (are products in the correct position and height with correct facings), pricing (are price tags present, visible, and showing the correct price), promotion (are displays built correctly, on time, in the right location), and POSM compliance (are point-of-sale materials present, correctly positioned, and undamaged). Each pillar is scored separately and weighted according to the brand's commercial priorities.
A perfect store score is a composite number—typically 0 to 100—calculated from weighted sub-scores across the five execution pillars. A store scoring 88 is executing 88% of the defined standard. The score is most useful when broken down by pillar, because an aggregate number masks where the shortfall is concentrated and which specific failure is generating the most commercial damage.
Perfect Store Score = (Sum of weighted sub-scores ÷ Maximum possible score) × 100. Each pillar is assigned a weight based on its commercial impact for the brand and category. Availability is typically weighted highest in food and beverage categories. Promotional execution and POSM may be weighted higher for a brand running a seasonal campaign or new product launch.
The Golden Zone is the shelf area between approximately 120 and 160 centimeters from the floor—the height range where products sit at eye level for the average adult shopper. Products positioned in the Golden Zone consistently generate higher sales velocity than the same products at floor level or above the sightline. In a perfect store program, Golden Zone positioning is tracked under the visibility pillar. A product with the correct facing count but positioned on the bottom shelf after a reset is failing the visibility KPI even though it's technically present.
POSM stands for Point of Sale Material—the physical marketing materials attached to displays and shelf positions, including wobblers, shelf-talkers, header cards, floor decals, and promotional signage. POSM drives purchase conversion at the moment of decision. A promotional display built correctly but missing its price-drop wobbler, or with a shelf-talker facing the wrong direction, converts at a lower rate than a fully executed display. POSM compliance is one of the hardest pillars to verify manually because a field rep moving quickly through a store confirms a display is present but may miss individual material failures.
Because a promotional execution has interdependent components. The display drives foot traffic. The price tag converts that foot traffic. The product availability fulfills the conversion. If any one component is missing, the conversion chain breaks. A display without a promotional price tag doesn't convert at the promotional rate—shoppers see the product but no reason to switch. An out-of-stock on the hero SKU on day three of the campaign means days of accumulated foot traffic generated no sales. The shortfall from missing one component is disproportionate to its weight in the overall execution checklist.
The last-mile problem is the distance between a strategy approved at HQ and a shelf that matches it in a store. Instructions sent as PDFs get interpreted individually by each rep. Planograms on file don't always reflect the current promotion or recent range review. Self-reported compliance on a checklist doesn't verify that execution matched the plan. And the correction cycle for failures identified in a report—reviewed days after the visit—is too slow to protect the campaign window where the trade investment is active.
Image recognition changes measurement from monthly audit snapshots to visit-level data. A field rep photographs a shelf section; the AI reads all five pillar components and returns a perfect store score within 90 seconds. Gaps are delivered as a prioritized task list to the rep's phone before they leave the aisle. The rep corrects what's fixable during the current visit, with photographic documentation before and after. The compliance record reflects what the shelf actually looked like, not what the rep reported on a checklist.
Top-revenue SKUs at priority accounts should be scored on every visit. Promotional windows should be tracked at launch, mid-campaign, and close—not just at setup. A monthly audit cadence is sufficient for understanding long-term trends but too infrequent to catch execution failures during the 3–4 week campaign windows where trade investment is active. The correction has to happen while the window is still open.
PICOS stands for Perfect In-store Conditions for Outstanding Sales—a structured perfect store framework used by several large CPG companies that defines execution standards across product placement, inventory, communication, outlet coverage, and service. It's an alternative naming convention for the same five-pillar approach, calibrated specifically for field teams to use as a visit-level checklist.
Once execution weights are set and scores are calculated consistently across a network, stores can be ranked from highest to lowest. A field execution director can identify the top-performing 20% of stores as execution benchmarks—understanding what conditions, store formats, or rep assignments produce consistently high scores—and direct improvement efforts at the bottom 20%, where execution gaps are concentrated. Research consistently shows that 20–25% of a network typically accounts for 65–70% of execution failures.
A planogram compliance score measures whether products are positioned correctly against the approved planogram—specifically covering the visibility and sometimes availability pillars. A perfect store score is broader: it aggregates compliance across all five pillars including pricing, promotion, and POSM. A store can be fully planogram-compliant and still score 70% on perfect store if price tags are missing, promotional displays haven't been built, or POSM materials are absent.
A monthly audit tells a category manager how compliant their stores were when the auditor visited. It doesn't tell them what happened on any of the other 29 days—or which promotional campaign scored 94% on execution but delivered only 70% of its modeled lift because the price-drop wobbler was missing on days 3 through 8.
A perfect store program running on visit-level image recognition data closes that gap. Every visit generates a scored record across all five pillars. Every gap generates a correction task. Every correction gets documented. The program doesn't just measure execution after the fact—it drives it during the visit, while a correction still changes the outcome.
Coca-Cola's Right Execution Daily program has operated on this principle for years. Nestlé field reps now close distribution voids and compliance gaps in under 60 seconds with data that previously took 15–20 minutes to gather manually. The standard doesn't change. The speed at which it gets enforced does.
→ Book a walkthrough of Vision Group's Store360 here.