
How to Build a Product Discovery Engine (Not Just a Campaign)

Move beyond one-off tests to a systematic, repeatable process for continuously discovering new winning products in your catalog.

10 min read

From Random Testing to Systematic Discovery

Most eCommerce brands approach product discovery the same way:

  1. Launch new product
  2. Add it to Google Shopping feed
  3. Hope the algorithm picks it up
  4. Check back in 30 days
  5. Product has 47 impressions and zero sales
  6. Conclude "the product doesn't work"
  7. Repeat

This isn't product discovery. It's product abandonment.

Real product discovery requires a system - a repeatable process that continuously tests, evaluates, and promotes products through your portfolio. Not campaigns you set up once, but an engine that runs automatically.

Here's how to build one.

What Is a Product Discovery Engine?

A product discovery engine is a systematic process that:

  • Identifies which products to test next (prioritization)
  • Allocates appropriate budget for data gathering (exploration)
  • Evaluates performance using business metrics, not just ROAS (scoring)
  • Promotes winners to higher budget tiers (graduation)
  • Retests failed products under different conditions (rehabilitation)
  • Operates continuously without manual intervention (automation)

Think of it as a funnel:

Catalog (500 products)
    ↓
Discovery Queue (prioritized list)
    ↓
Testing (active exploration campaigns)
    ↓
Evaluation (scoring + performance check)
    ↓
Graduation (promotion to core campaigns)
    ↓
Optimization (scale winners)

Component 1: Product Scoring & Prioritization

Not all products deserve equal testing budget. You need a scoring system that prioritizes which products to discover first.

The Discovery Score Formula

We use a weighted scoring system:

Discovery Score = 
  (Margin Weight × Margin Score) +
  (Demand Weight × Demand Score) +
  (Stock Weight × Stock Score) +
  (Newness Weight × Newness Score) +
  (Strategic Weight × Strategic Score)

Margin Score (Weight: 30%)

Products with higher margins should get priority - they leave more room to absorb inefficient exploration spend.

If margin >50%: Score = 10
If margin 40-50%: Score = 8
If margin 30-40%: Score = 6
If margin 20-30%: Score = 4
If margin <20%: Score = 2

Demand Score (Weight: 25%)

Use external demand signals (Google Trends, Merchant Center Diagnostics, search volume).

If search volume >10K/month: Score = 10
If search volume 5K-10K: Score = 8
If search volume 1K-5K: Score = 6
If search volume 100-1K: Score = 4
If search volume <100: Score = 2

Stock Score (Weight: 20%)

No point discovering a winner you can't fulfill.

If stock >100 units: Score = 10
If stock 50-100: Score = 8
If stock 20-50: Score = 6
If stock 5-20: Score = 3
If stock <5: Score = 1

Newness Score (Weight: 15%)

New products need data before seasonal demand hits.

If product age <30 days: Score = 10
If age 30-60 days: Score = 8
If age 60-90 days: Score = 6
If age 90-180 days: Score = 4
If age >180 days: Score = 2

Strategic Score (Weight: 10%)

Manual override for strategic priorities (new category, seasonal prep, competitor response).

Strategic priority = Set manually (0-10)

Example Calculation

Product A: Winter Coat

  • Margin: 45% → Score = 8
  • Search Volume: 8,000/month → Score = 8
  • Stock: 75 units → Score = 8
  • Age: 15 days old → Score = 10
  • Strategic: High (winter season prep) → Score = 9
Discovery Score = (0.30 × 8) + (0.25 × 8) + (0.20 × 8) + (0.15 × 10) + (0.10 × 9)
                = 2.4 + 2.0 + 1.6 + 1.5 + 0.9
                = 8.4 / 10

Product B: Basic T-Shirt

  • Margin: 22% → Score = 4
  • Search Volume: 45,000/month → Score = 10
  • Stock: 200 units → Score = 10
  • Age: 400 days old → Score = 2
  • Strategic: None → Score = 0
Discovery Score = (0.30 × 4) + (0.25 × 10) + (0.20 × 10) + (0.15 × 2) + (0.10 × 0)
                = 1.2 + 2.5 + 2.0 + 0.3 + 0
                = 6.0 / 10

Result: Winter Coat (8.4) gets priority over T-Shirt (6.0) despite T-Shirt's higher demand.
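
The same calculation is easy to script if you prefer code to spreadsheet formulas. A minimal JavaScript sketch using the tier boundaries and weights above - the product object shape is our own, for illustration only:

// Minimal sketch of the Discovery Score; tier boundaries and weights
// match the tables above. The product object shape is illustrative.
function marginScore(margin) {        // margin as a percentage
  if (margin > 50) return 10;
  if (margin > 40) return 8;
  if (margin > 30) return 6;
  if (margin > 20) return 4;
  return 2;
}

function demandScore(searchVolume) {  // monthly searches
  if (searchVolume > 10000) return 10;
  if (searchVolume > 5000) return 8;
  if (searchVolume > 1000) return 6;
  if (searchVolume > 100) return 4;
  return 2;
}

function stockScore(units) {
  if (units > 100) return 10;
  if (units > 50) return 8;
  if (units > 20) return 6;
  if (units > 5) return 3;
  return 1;
}

function newnessScore(ageDays) {
  if (ageDays < 30) return 10;
  if (ageDays < 60) return 8;
  if (ageDays < 90) return 6;
  if (ageDays < 180) return 4;
  return 2;
}

function discoveryScore(p) {
  return 0.30 * marginScore(p.margin) +
         0.25 * demandScore(p.searchVolume) +
         0.20 * stockScore(p.stock) +
         0.15 * newnessScore(p.ageDays) +
         0.10 * p.strategic;          // manual 0-10 strategic override
}

// Winter Coat from the example above:
// discoveryScore({margin: 45, searchVolume: 8000, stock: 75, ageDays: 15, strategic: 9}) returns 8.4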

Implementation in Google Sheets

Create a scoring spreadsheet:

  1. Export your product catalog from Merchant Center
  2. Add columns: margin %, search volume, stock level, date added, strategic priority
  3. Create calculated columns for each score component
  4. Calculate final Discovery Score
  5. Sort by Discovery Score descending
  6. Top 50-100 products = Discovery Queue

Update weekly or monthly as inventory, seasons, and priorities change.
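
If the catalog lives in Sheets, the scoring sketch above can double as a spreadsheet formula: paste it into Apps Script (Extensions → Apps Script) alongside this thin wrapper, and the score becomes available in any cell.

// Exposes the discoveryScore() sketch as a Sheets custom function, e.g.
// =DISCOVERYSCORE(45, 8000, 75, 15, 9) returns 8.4.
function DISCOVERYSCORE(marginPct, searchVolume, stockUnits, ageDays, strategic) {
  return discoveryScore({
    margin: marginPct,
    searchVolume: searchVolume,
    stock: stockUnits,
    ageDays: ageDays,
    strategic: strategic
  });
}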

Component 2: Testing Framework

Once you have your Discovery Queue, you need a structured testing process.

Campaign Structure

Discovery Campaign 1: High-Priority Testing

  • Products: Top 30 from Discovery Queue
  • Budget: 15% of total ad spend
  • ROAS Target: 225% (aggressive data gathering)
  • Bid Strategy: Maximize Conversions
  • Refresh: Every 30 days (graduate/demote)

Discovery Campaign 2: Medium-Priority Testing

  • Products: Next 50 from Discovery Queue
  • Budget: 10% of total ad spend
  • ROAS Target: 250%
  • Bid Strategy: Maximize Conversions
  • Refresh: Every 45 days

Discovery Campaign 3: Long-Tail Testing

  • Products: All others not in core campaigns
  • Budget: 5% of total ad spend
  • ROAS Target: None (maximize reach)
  • Bid Strategy: Maximize Clicks
  • Goal: Get minimum data on everything
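
It helps to codify the three tiers as data before automating anything - budget shares and refresh cadences then live in one place instead of three campaign settings. A sketch; the field names are our own, not a Google Ads API schema:

// Illustrative tier configuration; field names are our own, not Google Ads API objects.
// ROAS targets are expressed as multiples (2.25 = 225%).
var DISCOVERY_TIERS = [
  { name: 'discovery_high',     source: 'top 30 of queue',  budgetShare: 0.15,
    roasTarget: 2.25, bidStrategy: 'Maximize Conversions', refreshDays: 30 },
  { name: 'discovery_medium',   source: 'next 50 of queue', budgetShare: 0.10,
    roasTarget: 2.50, bidStrategy: 'Maximize Conversions', refreshDays: 45 },
  { name: 'discovery_longtail', source: 'everything else',  budgetShare: 0.05,
    roasTarget: null, bidStrategy: 'Maximize Clicks',      refreshDays: null }
];

// Example: derive a daily budget for each tier from total daily spend.
function tierDailyBudget(tier, totalDailySpend) {
  return totalDailySpend * tier.budgetShare;
}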

Data Gathering Thresholds

Before evaluating a product, ensure it has minimum data:

  • Minimum impressions: 1,000
  • Minimum clicks: 25
  • Minimum time: 14 days

Products below these thresholds stay in testing regardless of initial metrics.

Why? With 50 impressions and 0 conversions, you don't know if the product is bad or just didn't reach the right people yet.
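
A single gate function keeps under-sampled products out of the evaluation step; a minimal sketch:

// Evaluate a product only once it clears all minimum-data thresholds.
function hasEnoughData(stats) {
  return stats.impressions >= 1000 &&
         stats.clicks >= 25 &&
         stats.daysActive >= 14;
}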

Component 3: Performance Evaluation

After products hit data thresholds, evaluate using a multi-factor scorecard.

The Viability Score

Different from Discovery Score - this measures actual performance.

Viability Score = 
  (ROAS Score × 0.35) +
  (Conversion Rate Score × 0.25) +
  (CTR Score × 0.15) +
  (Margin Score × 0.15) +
  (Velocity Score × 0.10)

ROAS Score

>400%: Score = 10
300-400%: Score = 8
200-300%: Score = 6
150-200%: Score = 4
100-150%: Score = 2
<100%: Score = 1

Conversion Rate Score

(Relative to catalog average)

>2x average: Score = 10
1.5-2x average: Score = 8
1-1.5x average: Score = 6
0.5-1x average: Score = 4
<0.5x average: Score = 2

CTR Score

(Relative to catalog average)

>1.5x average: Score = 10
1.25-1.5x: Score = 8
1-1.25x: Score = 6
0.75-1x: Score = 4
<0.75x: Score = 2

Margin Score

(Same as Discovery Score)

Velocity Score

(Growth trajectory)

Increasing ROAS week-over-week: Score = 10
Stable ROAS: Score = 6
Decreasing ROAS: Score = 3

Decision Matrix

Based on Viability Score:

  • Score ≥8: Graduate to Core Portfolio (proven winner)
  • Score 6 to <8: Move to Growth Portfolio (continue testing with a higher budget)
  • Score 4 to <6: Keep in Discovery (needs more data or optimization)
  • Score <4: Pause (failed test; consider rehabilitation later)
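
Together, the Viability Score and the decision matrix fit in a short script. A sketch, with catalog averages passed in; the object shapes are our own:

// Viability Score + decision matrix sketch. `p` holds the product's metrics,
// `avg` holds catalog averages; both shapes are illustrative.
function roasScore(roas) {            // ROAS as a multiple (4.0 = 400%)
  if (roas > 4.0) return 10;
  if (roas > 3.0) return 8;
  if (roas > 2.0) return 6;
  if (roas > 1.5) return 4;
  if (roas > 1.0) return 2;
  return 1;
}

function relativeScore(value, average, cutoffs, scores) {
  var ratio = value / average;        // score relative to the catalog
  for (var i = 0; i < cutoffs.length; i++) {
    if (ratio > cutoffs[i]) return scores[i];
  }
  return scores[scores.length - 1];
}

function marginScore(m) {             // same tiers as the Discovery Score
  return m > 50 ? 10 : m > 40 ? 8 : m > 30 ? 6 : m > 20 ? 4 : 2;
}

function velocityScore(trend) {       // 'up' | 'flat' | 'down', week over week
  return trend === 'up' ? 10 : trend === 'flat' ? 6 : 3;
}

function viabilityScore(p, avg) {
  return 0.35 * roasScore(p.roas) +
         0.25 * relativeScore(p.convRate, avg.convRate, [2, 1.5, 1, 0.5], [10, 8, 6, 4, 2]) +
         0.15 * relativeScore(p.ctr, avg.ctr, [1.5, 1.25, 1, 0.75], [10, 8, 6, 4, 2]) +
         0.15 * marginScore(p.marginPct) +
         0.10 * velocityScore(p.roasTrend);
}

function decide(score) {
  if (score >= 8) return 'graduate';      // Core Portfolio
  if (score >= 6) return 'growth';        // Growth Portfolio
  if (score >= 4) return 'keep-testing';  // stay in Discovery
  return 'pause';                         // rehabilitation candidate later
}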

Component 4: Graduation Process

Winners need to leave Discovery and join your main portfolio.

Graduation Workflow

  1. Identify graduates: Run Viability Score weekly, flag products ≥8
  2. Verify sustainability: Check that performance is stable for 14+ days
  3. Update product labels: Change custom_label_0 from "discovery" to "core"
  4. Move to core campaign: Product automatically enters core campaigns via label filtering
  5. Backfill Discovery Queue: Add next product from prioritization list
  6. Monitor post-graduation: Track for 30 days to ensure performance holds

Graduation Report Template

Track weekly:

Metric                  This Week   Last Week   Change
Products in Discovery   87          92          -5
Products Graduated      5           3           +2
Products Paused         7           4           +3
New Products Added      12          8           +4
Avg. Viability Score    5.8         5.4         +0.4

Component 5: Rehabilitation Strategy

Products that "fail" discovery deserve a second chance under different conditions.

Why Products Fail Initial Testing

  • Wrong season (winter product tested in summer)
  • Wrong audience (shown to irrelevant users)
  • Poor product data (bad images, title, description)
  • Insufficient budget (algorithm didn't try hard enough)
  • Market timing (tested before trend emerged)

Rehabilitation Triggers

Retest paused products when:

  • Seasonal shift: Summer arrives, retry summer products
  • Data updated: New images, better descriptions added
  • Trend detected: Google Trends shows rising interest
  • Competitor success: Similar products winning for competitors
  • Time passed: 90+ days since last test (market may have changed)
  • Price change: Price dropped 20%+, making it more competitive
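
These triggers also reduce to a short rule check; a sketch (every field on the product object is an illustrative flag you'd populate from your own data):

// Returns true when a paused product qualifies for a retest.
function shouldRehabilitate(p, today) {
  return p.backInSeason ||                  // seasonal shift
         p.dataUpdatedSincePause ||         // new images / descriptions
         p.trendRising ||                   // e.g. flagged from Google Trends
         p.competitorTraction ||            // similar products winning elsewhere
         daysBetween(p.pausedDate, today) >= 90 ||
         p.priceDropPct >= 20;
}

function daysBetween(from, to) {
  return Math.round((to - from) / (1000 * 60 * 60 * 24));
}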

Rehabilitation Campaign

Separate campaign for "second chance" products:

  • Budget: 3-5% of total
  • Products: Previously paused, now meeting rehabilitation triggers
  • ROAS Target: 200% (very aggressive learning)
  • Test duration: 21 days
  • Decision: Graduate, pause permanently, or retry again later

Component 6: Automation & Tooling

A true engine runs automatically. Here's how to automate each component.

Prioritization Automation

Option A: Google Sheets + Scripts

  1. Connect Google Sheets to Merchant Center API
  2. Auto-pull product data daily
  3. Calculate Discovery Scores automatically
  4. Email top 20 products weekly
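
For the email step in Option A, a minimal Apps Script sketch; getCatalogRows() is an assumed helper that reads your product sheet, and discoveryScore() is the scoring function sketched earlier:

// Weekly digest: score the catalog and email the top 20 candidates.
// getCatalogRows() is an assumed helper; replace the address with your own.
function emailTopDiscoveryCandidates() {
  var products = getCatalogRows();
  products.sort(function(a, b) {
    return discoveryScore(b) - discoveryScore(a);   // highest score first
  });

  var lines = products.slice(0, 20).map(function(p) {
    return p.id + '  ' + discoveryScore(p).toFixed(1);
  });

  MailApp.sendEmail('you@example.com',
      'Weekly Discovery Queue: top 20',
      lines.join('\n'));
}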

Option B: Zapier/Make.com Workflow

  1. Trigger: New product added to Merchant Center
  2. Action 1: Lookup margin from your inventory system
  3. Action 2: Get search volume from Google Trends API
  4. Action 3: Calculate Discovery Score
  5. Action 4: Add to Discovery Queue if score >6

Custom Labels for Automation

Use Google Merchant Center custom labels to automate campaign assignments:

  • custom_label_0: Portfolio tier (discovery / growth / core / paused)
  • custom_label_1: Discovery Score bucket (high / medium / low)
  • custom_label_2: Viability Score bucket (winner / testing / failed)
  • custom_label_3: Margin tier (premium / standard / budget)
  • custom_label_4: Strategic tags (seasonal / new_launch / rehabilitation)

Then create campaigns that filter by these labels. When you update a label in Merchant Center, products automatically move between campaigns.
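
A common way to manage these labels without touching your primary feed is a supplemental feed backed by a Google Sheet: update a cell, and the label (and therefore the campaign assignment) changes on the next fetch. A minimal layout sketch with made-up SKUs:

id       custom_label_0  custom_label_1  custom_label_2  custom_label_3  custom_label_4
SKU-001  discovery       high            testing         premium         new_launch
SKU-002  core            high            winner          standard        seasonal
SKU-003  paused          low             failed          budget          rehabilitation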

Graduation Automation

Create a Google Ads Script:


// Sketch (requires adaptation): getProductPerformance, updateMerchantCenterLabel,
// logGraduation and sendWeeklyGraduationReport are your own helpers.
function autoGraduateProducts() {
  var discoveryProducts = getProductPerformance('discovery_campaign');

  discoveryProducts.forEach(function(product) {
    // Graduate only products that cleared the data thresholds and the ROAS bar.
    if (product.impressions > 1000 &&
        product.conversions > 5 &&
        product.roas > 3.0 &&
        product.daysTested > 14) {

      // Flip the Merchant Center label so label-filtered campaigns pick it up.
      updateMerchantCenterLabel(product.id, 'custom_label_0', 'core');
      logGraduation(product.id, product.name, product.roas);
    }
  });

  sendWeeklyGraduationReport();
}

Alerting System

Set up alerts for:

  • Discovery campaign ROAS <150% for 7+ days (budget too aggressive)
  • Zero graduations in 14 days (prioritization or testing issue)
  • >20 graduations in 7 days (graduating too easily)
  • Discovery queue <20 products (need to refill)
  • Any product with 2,000+ impressions still in discovery (stuck in queue)
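
Each alert is a threshold comparison, so they fit naturally in the same script. A sketch, where getDiscoveryMetrics() is an assumed helper that aggregates stats across your discovery campaigns:

// Health checks for the discovery engine; emails anything that trips.
// getDiscoveryMetrics() is an assumed helper returning aggregate stats.
function checkDiscoveryAlerts() {
  var m = getDiscoveryMetrics();
  var alerts = [];

  if (m.roas7d < 1.5)          alerts.push('Discovery ROAS below 150% for 7+ days');
  if (m.graduations14d === 0)  alerts.push('Zero graduations in the last 14 days');
  if (m.graduations7d > 20)    alerts.push('More than 20 graduations in 7 days');
  if (m.queueSize < 20)        alerts.push('Discovery queue below 20 products');
  if (m.stuckCount > 0)        alerts.push(m.stuckCount + ' products stuck at 2,000+ impressions');

  if (alerts.length > 0) {
    MailApp.sendEmail('you@example.com', 'Discovery engine alerts', alerts.join('\n'));
  }
}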

Putting It All Together: The Weekly Workflow

Monday: Prioritization

  1. Update Discovery Score spreadsheet (5 min - automated)
  2. Review new products added to catalog (10 min)
  3. Manually adjust strategic priorities if needed (5 min)

Wednesday: Evaluation

  1. Pull performance data from Google Ads (automated)
  2. Calculate Viability Scores for products with enough data (automated)
  3. Review graduation candidates (10 min)
  4. Approve graduations (5 min)

Friday: Maintenance

  1. Review paused products for rehabilitation triggers (10 min)
  2. Check discovery campaign budgets and pacing (5 min)
  3. Review weekly graduation report (5 min)
  4. Identify any issues or opportunities (10 min)

Monthly: Strategic Review

  1. Analyze which product categories are graduating vs. failing
  2. Adjust scoring weights if needed
  3. Review overall portfolio health metrics
  4. Plan next month's strategic priorities

Total weekly time investment: ~50 minutes + monthly 2-hour review

Expected Results Timeline

Month 1:

  • Products in testing: 50-80
  • Graduations: 2-5
  • Discovery campaign ROAS: 180-220%
  • Blended ROAS: 10-15% below baseline

Month 2-3:

  • Products in testing: 80-120
  • Graduations: 5-12 per month
  • Discovery campaign ROAS: 220-260%
  • Blended ROAS: within 5% of baseline

Month 4-6:

  • Products in testing: 100-150
  • Graduations: 8-15 per month
  • Discovery campaign ROAS: 260-300%
  • Blended ROAS: +5-10% vs baseline
  • Revenue: +15-25% vs baseline

Month 7-12:

  • Total active products: 2-3x starting baseline
  • Continuous graduation flow: 10-20/month
  • Blended ROAS: At or above baseline
  • Revenue: +30-50% vs starting point

Your Implementation Checklist

Week 1: Foundation

  • ☐ Export product catalog with all data
  • ☐ Build Discovery Score spreadsheet
  • ☐ Calculate scores for all products
  • ☐ Create Discovery Queue (top 50-100 products)

Week 2: Campaign Setup

  • ☐ Create custom labels in Merchant Center
  • ☐ Tag Discovery Queue products
  • ☐ Set up 3 discovery campaigns
  • ☐ Configure budget allocation (15% / 10% / 5%)

Week 3: Automation

  • ☐ Set up Google Sheets automation
  • ☐ Configure label-based campaign filtering
  • ☐ Create graduation tracking spreadsheet
  • ☐ Set up weekly email reports

Week 4: Launch & Monitor

  • ☐ Launch discovery campaigns
  • ☐ Monitor daily for first week
  • ☐ Adjust budgets if needed
  • ☐ Document baseline metrics

Ongoing: Operate & Optimize

  • ☐ Weekly evaluation & graduation process
  • ☐ Monthly strategic review
  • ☐ Quarterly scoring weight optimization

Need help building your product discovery engine? We offer Discovery Engine Implementation that sets up the entire system for you, including automation and custom reporting.

Tags
Product Discovery · Campaign Optimization · Exploration Strategy · Scaling
