Lesson Goal
In Lesson 7 you designed a live ops roadmap and content calendar.
In this lesson you will:
- Define safe, focused A/B tests for offers and prices.
- Set up guardrails so experiments cannot silently ruin your economy.
- Learn a simple way to read results and decide your next move.
You are not trying to become a data scientist overnight. You are learning how to ask one clear question at a time and let players answer it with their behavior.
Step 1 – Pick One Clear Question per Test
Weak tests try to answer five questions at once. Strong tests answer one.
Common questions for an early monetization experiment:
- “Does a cheaper starter pack increase payer conversion without tanking revenue?”
- “Do players respond better to a time‑limited cosmetic bundle than a permanent one?”
- “Does moving the store entry point to the main menu increase conversion?”
Your job is to translate your roadmap into single‑question tests:
- Open your live ops calendar from Lesson 7.
- For one upcoming event or offer, write one sentence:
  - “We want to know if X vs Y leads to better metric Z.”
Examples:
- “We want to know if $2.99 vs $4.99 for the starter pack leads to better day‑7 payer conversion.”
- “We want to know if a 3‑day welcome bundle converts better than a permanent starter bundle.”
Mini‑task:
Write three candidate questions, then circle the one that would be most useful for your next 30 days of development.
Step 2 – Design Variants You Can Actually Ship
Once you have your question, you need variants (A and B).
Design rules for good variants:
- Change one major thing at a time (price, discount, contents, or presentation).
- Keep implementation cheap: reuse existing items, art, and UI where possible.
- Make sure both variants are fair and aligned with your values.
Examples:
- Price test
  - Variant A: Starter pack at $4.99 with 1 skin + 500 soft currency.
  - Variant B: Starter pack at $2.99 with the same contents.
- Content/value test
  - Variant A: Skin only.
  - Variant B: Skin + small boost (for example, 2 hours of double XP).
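Variants like these can be captured as simple data so the “change one major thing” rule is mechanically checkable. A minimal sketch in Python; the `Variant` class and all names are hypothetical, not part of any SDK:

```python
# Hypothetical data structures for the price test above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Variant:
    name: str
    price_usd: float
    contents: tuple  # item identifiers included in the bundle

STARTER_PACK_TEST = {
    "A": Variant("starter_pack_a", 4.99, ("skin_basic", "soft_currency_500")),
    "B": Variant("starter_pack_b", 2.99, ("skin_basic", "soft_currency_500")),
}

# Sanity check: the variants share contents, so only price differs.
assert STARTER_PACK_TEST["A"].contents == STARTER_PACK_TEST["B"].contents
```

Writing the variants down as data also makes it harder to accidentally ship a “completely different bundle” test by mistake.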
Avoid:
- Completely different bundles (you will not know if price or contents made the difference).
- “Joke” variants you know are bad; treat both sides as something you would ship.
Mini‑task:
Sketch your A and B variants for a single offer in a table: name, contents, price, and where it appears in the UI.
Step 3 – Set Guardrails Before You Launch
Experiments are safest when you decide in advance what “too far” looks like.
Set guardrails in three areas:
- Player experience
  - Maximum allowed pay‑to‑win feeling (for example, no stat boosts in competitive modes).
  - Limits on FOMO tactics (for example, no hard paywalls on core progression).
- Economy stability
  - Maximum allowed increase or decrease in average revenue per paying user (ARPPU) or soft‑currency inflation before you roll back.
- Operational risk
  - Time window: tests run for a defined period (for example, 7–14 days).
  - Rollback plan: how to revert to the previous configuration in one step.
Write these as simple rules:
- “If D7 retention drops by more than 5% for new players in either variant, we stop the test and revert.”
- “If support tickets about unfairness spike, we stop the test.”
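Rules like these can also be pre-agreed as code, so nobody argues about thresholds mid-test. A small sketch, assuming the 5% relative-drop rule from the example above; the function name and threshold are illustrative:

```python
# Sketch of an automated guardrail check. The 5% threshold mirrors the
# example rule above; it is an assumption, not a recommendation.

def should_stop_test(d7_retention_baseline: float,
                     d7_retention_variant: float,
                     max_relative_drop: float = 0.05) -> bool:
    """Return True if D7 retention dropped more than the allowed fraction."""
    if d7_retention_baseline <= 0:
        return False  # not enough data to judge
    drop = (d7_retention_baseline - d7_retention_variant) / d7_retention_baseline
    return drop > max_relative_drop

# Example: baseline 30% D7 retention vs 27% in the variant is a 10%
# relative drop, which exceeds the 5% guardrail.
```

The point is not the code itself but that the stop condition is written down before launch, so the decision to revert is mechanical rather than emotional.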
Mini‑task:
For your chosen test, write two guardrail rules: one about player experience, one about metrics.
Step 4 – Wire the Test into Your Analytics
Your A/B test needs to be visible in data, not just in your design doc.
Add or confirm analytics events for:
- Install / first session (to tie players to a cohort).
- Offer view (who saw variant A vs B).
- Offer purchase (who bought which variant, when, and at what price).
Include properties such as:
- experiment_id (for example, starter_pack_price_test_01).
- variant (A or B).
- price_point or bundle_id.
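A minimal sketch of tagging these events, assuming a hypothetical `send_event` wrapper around whatever analytics SDK you actually use; the property names mirror the list above:

```python
# Sketch: every experiment-related event carries experiment_id and variant.
# send_event and EVENT_LOG stand in for a real analytics SDK.

EXPERIMENT_ID = "starter_pack_price_test_01"
EVENT_LOG = []  # placeholder sink so the sketch is testable

def send_event(name: str, properties: dict) -> None:
    EVENT_LOG.append((name, properties))  # real code would call your SDK here

def log_offer_view(player_id: str, variant: str, price_point: float) -> None:
    send_event("offer_view", {
        "player_id": player_id,
        "experiment_id": EXPERIMENT_ID,
        "variant": variant,          # "A" or "B"
        "price_point": price_point,
    })

def log_offer_purchase(player_id: str, variant: str, price_point: float) -> None:
    send_event("offer_purchase", {
        "player_id": player_id,
        "experiment_id": EXPERIMENT_ID,
        "variant": variant,
        "price_point": price_point,
    })
```

Because every event carries the same `experiment_id` and `variant` properties, your later breakdowns become simple group-bys instead of detective work.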
In your dashboard or spreadsheet, prepare:
- A breakdown of conversion (buyers / players who saw the offer) per variant.
- A breakdown of revenue per player and per payer per variant.
- A simple retention check: D1/D7 for players exposed to each variant.
Mini‑task:
Write down your experiment_id, the variant values you will use, and which events must include them. This becomes your implementation checklist.
Step 5 – Roll Out the Test Safely
How you roll out the test matters as much as the design.
Safer rollout patterns:
- Soft launch or limited regions before global rollout.
- Start with a small percentage of new players (for example, 20% in variant B) rather than 50/50 on day one.
- Prefer tests on new players (where possible) so you do not radically change expectations for existing spenders.
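The small-percentage rollout above is usually implemented with stable hashing, so each player lands in the same variant every session. A sketch; the salt format and 20% split are assumptions for illustration:

```python
# Sketch of deterministic variant assignment. Hashing experiment_id plus
# player_id means the same player always gets the same bucket, and a new
# experiment reshuffles players independently of the last one.
import hashlib

def assign_variant(player_id: str,
                   experiment_id: str = "starter_pack_price_test_01",
                   percent_in_b: int = 20) -> str:
    """Deterministically map a player to 'A' or 'B'."""
    digest = hashlib.sha256(f"{experiment_id}:{player_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0-99
    return "B" if bucket < percent_in_b else "A"

# The same player always lands in the same variant:
assert assign_variant("player_123") == assign_variant("player_123")
```

Deterministic assignment also makes rollback cleaner: turning the experiment off simply routes everyone to A, with no stored per-player state to clean up.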
Operational steps:
- Confirm analytics are live and logging for both variants before counting results.
- Confirm UI strings and prices are correct in all supported currencies.
- Announce internally what is being tested, what the guardrails are, and who can push the rollback button if needed.
Mini‑task:
Decide whether this test will run on new installs only or also on existing players. Write down why, and what could go wrong in each case.
Step 6 – Read the Results and Decide
At the end of your test window, you need to make a call: keep A, keep B, or iterate.
Look at:
- Offer conversion per variant.
- Revenue per player and revenue per payer per variant.
- Short‑term retention for affected players (D1, D7).
- Any qualitative feedback (reviews, support, Discord).
Avoid overreacting to:
- Very small sample sizes (do not trust a test where 10 people bought A and 7 people bought B).
- One‑day spikes caused by marketing or influencer traffic.
When in doubt, prefer the variant that:
- Meets your floor metrics (for example, at least break‑even revenue).
- Respects player trust and keeps your future options open.
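The first two read-outs can be computed directly from exposure counts and purchase rows. A sketch with toy numbers; the function and field names are illustrative, and the data shapes are assumptions:

```python
# Sketch: conversion and revenue per variant from raw counts.

def summarize(exposures: dict, purchases: list) -> dict:
    """exposures: variant -> number of players who saw the offer.
    purchases: list of (variant, revenue_usd) rows."""
    summary = {}
    for variant, seen in exposures.items():
        rows = [rev for v, rev in purchases if v == variant]
        buyers = len(rows)
        revenue = sum(rows)
        summary[variant] = {
            "conversion": buyers / seen if seen else 0.0,
            "revenue_per_player": revenue / seen if seen else 0.0,
            "revenue_per_payer": revenue / buyers if buyers else 0.0,
        }
    return summary

# Toy example: B converts better (5% vs 3%) but earns less per payer,
# so revenue per player ends up roughly equal - exactly the kind of
# trade-off the guardrails and floor metrics exist to arbitrate.
result = summarize({"A": 1000, "B": 1000},
                   [("A", 4.99)] * 30 + [("B", 2.99)] * 50)
```

Note how small these buyer counts are; with 30 and 50 purchases, the difference could easily be noise, which is why the sample-size warning above matters.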
Mini‑task:
Draft a short, plain‑language summary template:
“We ran [experiment_id]. Variant X performed better on metric Y by Z%, with no major negative effects on retention or feedback. We will [adopt/iterate/park] this variant.”
Step 7 – Document and Fold Back into Your Roadmap
Experiments become powerful when you stack learnings over time.
Create a simple “Monetization Experiments Log” with columns like:
- experiment_id
- question
- variants
- metrics
- result (win/loss/inconclusive)
- decision (adopt/revert/redo later)
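One low-tech way to keep this log is a CSV with those columns, appended after every test. A sketch; the file name and helper function are hypothetical:

```python
# Sketch: append one row to a "Monetization Experiments Log" CSV,
# writing the header only when the file does not exist yet.
import csv
import os

COLUMNS = ["experiment_id", "question", "variants", "metrics",
           "result", "decision"]

def log_experiment(path: str, row: dict) -> None:
    """Append one experiment record to the log."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

log_experiment("experiments_log.csv", {
    "experiment_id": "starter_pack_price_test_01",
    "question": "Does $2.99 vs $4.99 improve D7 payer conversion?",
    "variants": "A=$4.99, B=$2.99",
    "metrics": "conversion, revenue/payer, D7 retention",
    "result": "inconclusive",
    "decision": "redo later with larger sample",
})
```

A spreadsheet works just as well; the important part is that every row answers the same six questions in the same order.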
After each test:
- Record what you actually did, what you observed, and what you will change next.
- Update your live ops calendar from Lesson 7 to reflect the new baseline or next test.
This prevents you from running the same unclear test three times and wondering why nothing changes.
Your Checklist Before Moving On
Before you proceed to the next lesson, make sure you have:
- One clear test question for an offer or price.
- Two ship‑ready variants that differ in a meaningful but focused way.
- Written guardrails for player experience and metrics.
- A minimal analytics plan with experiment IDs and variant tags.
- A rollout strategy that starts small and can be rolled back.
- A simple way to log results and decisions after the test.
With this in place, your monetization experiments stop being wild guesses and start becoming measured, reversible steps that respect both your players and your studio.