Lesson 19: Pricing and Discount Experiment Design Across Launch Windows

Most indie teams do one of two things with discounts: they never test until panic mode, or they run discounts so often that buyers learn to wait and ignore full price.

This lesson gives you a repeatable experiment framework so pricing decisions become controlled business tests tied to launch windows, not emotional reactions to slow days.

What You Will Build

By the end of this lesson, you will have:

  1. A pricing baseline and guardrail sheet
  2. A launch-window discount hypothesis table
  3. A small event-by-event test design
  4. A metric review template for post-window decisions
  5. A stop-loss rule to avoid over-discounting

Step 1 - Lock the baseline before any discount tests

Start with one baseline:

  • list price by region tier
  • target net revenue per unit
  • floor price you will not cross
  • expected conversion range at full price

Without this baseline, every discount looks "good" in isolation.
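If you want the guardrail to be checkable rather than just written down, here is a minimal sketch in Python; the tier price, fee rate, floor value, and conversion range are hypothetical placeholders for your own numbers.

```python
# Hypothetical baseline values; replace with your own tiers, fees, and floor.
PLATFORM_FEE = 0.30  # assumed store revenue share

BASELINE = {
    "list_price_usd": 19.99,                   # tier-1 list price
    "target_net_per_unit": 12.00,              # what you want to keep per sale
    "floor_price_usd": 9.99,                   # hard minimum you will not cross
    "full_price_conversion": (0.008, 0.015),   # expected store-page conversion range
}

def discounted_price(list_price: float, discount_pct: float) -> float:
    """Price the buyer sees after a percentage discount."""
    return round(list_price * (1 - discount_pct / 100), 2)

def violates_floor(discount_pct: float) -> bool:
    """True if a proposed discount would break the hard price guardrail."""
    return discounted_price(BASELINE["list_price_usd"], discount_pct) < BASELINE["floor_price_usd"]

print(violates_floor(40))  # 19.99 -> 11.99, above the floor: False
print(violates_floor(55))  # 19.99 -> 9.00, below the floor: True
```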

Mini Task

Create pricing_baseline_v1.md with your full-price assumptions and one hard minimum price guardrail.

Step 2 - Define one hypothesis per window

Do not test "everything at once." For each window, write one clear hypothesis:

  • Window: launch week
  • Change: no discount, premium messaging emphasis
  • Expected outcome: stronger anchor for later discount windows

Then for a later event:

  • Window: seasonal promo
  • Change: modest discount band
  • Expected outcome: conversion lift without collapsing average selling price

Each hypothesis should be falsifiable, not vague.
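One way to keep a hypothesis falsifiable is to record it with an explicit pass condition. A minimal sketch, with hypothetical fields and thresholds you would replace with your own:

```python
# Hypothetical hypothesis records; one entry per launch window.
HYPOTHESES = [
    {
        "window": "launch week",
        "change": "no discount, premium messaging emphasis",
        "expected_outcome": "full-price anchor holds for later discount windows",
        "pass_if": "conversion stays within the baseline full-price range",
    },
    {
        "window": "seasonal promo",
        "change": "modest discount band, e.g. 10-15%",
        "expected_outcome": "conversion lift without collapsing average selling price",
        "pass_if": "net revenue per day >= 1.2x trailing 30-day average",
    },
]
```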

Step 3 - Build a simple experiment matrix

Use a compact matrix:

  1. window name and date range
  2. discount percent tested
  3. audience segment focus (new wishlist traffic, returning visitors)
  4. key message angle (value, update, urgency, social proof)
  5. success metric and failure threshold

Keep this matrix small enough to review weekly.
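The matrix can live in a spreadsheet or a small script; the sketch below mirrors the five columns above with hypothetical windows, dates, and thresholds.

```python
import csv

# Hypothetical rows; one per test window, mirroring the five columns above.
MATRIX = [
    {
        "window": "autumn sale", "dates": "2025-10-13..2025-10-20",
        "discount_pct": 15, "segment": "new wishlist traffic",
        "message_angle": "value", "success_metric": "net revenue per day",
        "failure_threshold": "< 0.9x baseline net revenue per day",
    },
    {
        "window": "winter sale", "dates": "2025-12-18..2026-01-02",
        "discount_pct": 20, "segment": "returning visitors",
        "message_angle": "update", "success_metric": "wishlist-to-purchase rate",
        "failure_threshold": "below trailing 30-day rate",
    },
]

# Export for the weekly review so the matrix stays small and visible.
with open("experiment_matrix.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=MATRIX[0].keys())
    writer.writeheader()
    writer.writerows(MATRIX)
```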

Step 4 - Track the right metrics after each window

Minimum review metrics:

  • gross units sold
  • net revenue after platform fee
  • conversion rate change vs baseline
  • wishlist-to-purchase trend
  • refund rate movement

A spike in units sold paired with weaker net revenue may still be a loss.
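To make that concrete, here is a rough net-revenue comparison in Python; the fee rate, prices, unit counts, and refund rates are illustrative assumptions, not benchmarks.

```python
PLATFORM_FEE = 0.30  # assumed store revenue share

def net_revenue(units: int, price: float, refund_rate: float) -> float:
    """Net revenue after refunds and the platform fee."""
    kept_units = units * (1 - refund_rate)
    return kept_units * price * (1 - PLATFORM_FEE)

# Baseline week: full price, no promo traffic.
baseline = net_revenue(units=400, price=19.99, refund_rate=0.03)

# Promo week: 60% more units at a 40% discount, slightly higher refunds.
promo = net_revenue(units=640, price=11.99, refund_rate=0.05)

print(round(baseline))  # 5429
print(round(promo))     # 5103 -> more units sold, less money kept
```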

Step 5 - Add stop-loss and cooldown rules

Set two protection rules:

  1. Stop-loss: halt repeated discounting if net revenue per day drops below your threshold
  2. Cooldown: require a no-discount interval before the next test window

This protects long-term pricing trust and keeps experiments meaningful.
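A minimal sketch of both rules, assuming a hypothetical daily net-revenue floor and a simple calendar-based cooldown:

```python
from datetime import date, timedelta

# Hypothetical protection thresholds; tune them against your own baseline.
STOP_LOSS_NET_PER_DAY = 400.0   # halt further discounting below this daily net revenue
COOLDOWN_DAYS = 28              # required full-price interval between test windows

def stop_loss_triggered(daily_net_revenue: list[float]) -> bool:
    """Halt repeated discounting when the trailing week's net revenue falls below the floor."""
    recent = daily_net_revenue[-7:]
    return sum(recent) / len(recent) < STOP_LOSS_NET_PER_DAY

def cooldown_satisfied(last_discount_end: date, next_window_start: date) -> bool:
    """Require a no-discount interval before the next test window begins."""
    return next_window_start - last_discount_end >= timedelta(days=COOLDOWN_DAYS)

# A window planned 19 days after the last promo fails the cooldown check.
print(cooldown_satisfied(date(2025, 3, 1), date(2025, 3, 20)))  # False
```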

Pro Tips

  • Keep your test windows aligned to one clear operational rhythm (for example, weekly review every Monday) so the team does not evaluate results at random times.
  • Pair each pricing test with one short change-log note (patch size, major bug fixes, content drops) so you can separate pricing impact from product-change impact.
  • Freeze new discount experiments when store reviews trend sharply negative. Fix sentiment first, then resume pricing tests.
  • Tag each test window with one owner and one approval deadline so discount changes are not blocked by last-minute ambiguity.

Common Mistakes

  • Testing too many discount levels at once
  • Ignoring regional pricing impact on net results
  • Measuring only units sold instead of net revenue
  • Running back-to-back promos with no recovery window

Troubleshooting

Discount increased units but revenue still dropped

Your discount depth likely exceeded your margin-safe zone. Re-test a narrower band and compare net, not gross.

Results look noisy and hard to interpret

The window mixed too many variables (price, copy, and a major update). Hold the rest steady and change only one variable in the next cycle.

Community expects constant sales now

You have conditioned buyers with a dense promo cadence. Add cooldown spacing and restore value messaging during full-price windows.

Mini Challenge

Create launch_discount_experiment_round1.md with:

  1. baseline assumptions
  2. two launch-window hypotheses
  3. one test matrix table
  4. pass/fail metric thresholds
  5. next action if each test fails

Use it for your next store-event planning sprint.
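If a starting skeleton helps, the sketch below writes one possible layout for that file; every section heading, number, and threshold is a placeholder to replace with your own plan.

```python
from pathlib import Path

# Hypothetical skeleton; edit each section to match your own baseline and windows.
TEMPLATE = """\
# Launch Discount Experiment - Round 1

## 1. Baseline assumptions
- List price (tier 1): $19.99, floor: $9.99
- Expected full-price conversion: 0.8-1.5%

## 2. Launch-window hypotheses
- Launch week: no discount, premium messaging -> full-price anchor holds
- Seasonal promo: 10-15% band -> conversion lift without collapsing ASP

## 3. Test matrix
| window | dates | discount % | segment | message angle | success metric | failure threshold |
|--------|-------|------------|---------|---------------|----------------|-------------------|

## 4. Pass/fail thresholds
- Pass: net revenue per day >= 1.2x trailing 30-day average
- Fail: net revenue per day < 0.9x baseline

## 5. Next action if a test fails
- Narrow the discount band and re-test after the cooldown interval.
"""

Path("launch_discount_experiment_round1.md").write_text(TEMPLATE)
```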

FAQ

How big should my first launch-window discount test be?

Start with a narrow, margin-safe band (often single digits to low teens) unless you already have reliable historical data. Wide first tests make it harder to diagnose what actually moved conversion.

Should I run the same discount percentage in every region?

Usually no. Keep regional purchasing power and platform recommendations in mind, then verify that net revenue and refund behavior stay healthy per region instead of forcing one global number.

What is the fastest way to avoid over-discounting?

Use a hard cooldown rule and a stop-loss threshold before each event starts. If either rule triggers, skip the next discount window and review your baseline assumptions.

When should we cancel a planned discount window entirely?

Cancel when your current build stability, review sentiment, or support capacity falls below your threshold. Discount traffic amplifies operational pain if core quality is not ready.

Lesson Recap

You now have:

  • baseline pricing guardrails
  • window-specific experiment hypotheses
  • a compact test matrix
  • net-revenue-first review metrics
  • stop-loss and cooldown protection rules

This turns discount timing from guesswork into repeatable launch operations.

Next Lesson Teaser

Next, you will assemble a launch-ops control panel that combines pricing tests, wishlist pacing, and patch-freeze rules into one weekly go/no-go dashboard.

Related Learning

Bookmark this lesson before your next promo planning session so discount decisions stay evidence-based.