Lesson 13 gave you a stable frame budget. This lesson asks whether the mission is fair and readable when someone else plays it—not when you run it on muscle memory. You will script observation, capture numbers that matter to stealth (time-to-detect, alert escalations, fail reasons), and ship small builds with honest patch notes.



Lesson objective

By the end of this lesson you will have:

  1. A one-page playtest script with three fixed routes (baseline, aggressive, gadget-heavy) and success criteria tied to Lesson 1 pillars.
  2. A minimal observability pass in Blueprint or logs: timestamps for first sight stimulus, alert tier changes, and mission outcome (win / fail / quit) so you can compare builds without arguing from memory.
  3. A patch-note template you reuse each iteration, plus one round of balance changes justified by data or observer forms, not solo author bias.

Step 1: Lock what you are testing this week

  1. Freeze content scope for the session: no new gadgets, no greybox reshapes—only numbers and placement tweaks inside the slice you already built.
  2. Name the build (StealthSlice_Playtest_YYYYMMDD_A) and record the git hash or Perforce changelist in your notes.
  3. List two hardware targets (from Lesson 13) so FPS regressions do not masquerade as AI tuning.

Pro tip: If players blame “bad AI” but FPS dipped, fix performance first; perception timers stretch when frames stutter.


Step 2: Write the route script (10–15 minutes per run)

Borrow lane language from Lesson 3 and objective beats from Lesson 9.

For each route, document:

  • Start spawn and first 30 seconds (what HUD should show).
  • Mandatory waypoints (A → B → C) with expected stealth posture (crouch / walk / sprint only where allowed).
  • Forbidden shortcuts for that run (e.g. “no distraction throws” on Route 1).
  • Win condition and acceptable fail states (spotted once allowed? combat allowed?).
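One way to keep a route script honest is to store it as plain data you can sanity-check before the session. A sketch, assuming hypothetical spawn tags and waypoint names for the slice:

```python
# Route 1 ("baseline") expressed as data; all names below are hypothetical.
ROUTE_1 = {
    "name": "baseline",
    "spawn": "Spawn_Courtyard",
    "waypoints": ["A_Gatehouse", "B_Archive", "C_Exfil"],
    "posture": {"A_Gatehouse": "crouch", "B_Archive": "walk", "C_Exfil": "crouch"},
    "forbidden": ["distraction_throw"],  # "no distraction throws" on Route 1
    "win": "reach_exfil",
    "allowed_fails": {"spotted_once": True, "combat": False},
}

def validate_route(route: dict) -> bool:
    """Every mandatory waypoint must declare an expected stealth posture."""
    return all(w in route["posture"] for w in route["waypoints"])
```

Running `validate_route` on each route before the session catches a waypoint you renamed in-engine but forgot in the script.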

Give observers a printed or second-screen checklist so they watch feet, UI, and audio stingers from Lesson 11 without backseat driving.


Step 3: Observability without a full analytics stack

You do not need a backend. You need repeatable lines in Output Log or a CSV append from Blueprint.

Log at least:

  • Mission start / checkpoint reached / objective complete.
  • Perception events: first stimulus time (game time or real seconds since start), stimulus type (sight / hearing / damage), new alert tier from Lesson 7.
  • Engaged resolution (lost player, KO, player death, timeout).

Blueprint sketch: on the AI controller or perception delegate, avoid Print String in shipping builds; use UE_LOG macros in C++, or a small append to Saved/Playtests/run.csv via the Blueprint file I/O node set (keep async writes bounded).

Time-to-detect for this course means: elapsed time from route start or last checkpoint until first non-idle alert on any guard that matters to that route. Compare medians across three players, not one hero run.


Step 4: Human observation form (5 minutes to fill)

Ask each observer for four scores (1–5) plus one sentence each:

  1. Goal clarity (do they know what to do next without you talking)?
  2. Detection fairness (did alerts feel earned vs random)?
  3. Readability (silhouette + UI + sound agree)?
  4. Friction (undo cost after a mistake: too harsh / ok / too forgiving)?

Free text: one cheese moment, one delight moment.
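Aggregating the forms is simple enough to do per session. A sketch, assuming each form is a dict keyed by the four questions above:

```python
from statistics import mean

QUESTIONS = ("goal_clarity", "detection_fairness", "readability", "friction")

def aggregate(forms: list[dict[str, int]]) -> dict[str, float]:
    """Mean score per question across observer forms (scores 1-5)."""
    return {q: round(mean(f[q] for f in forms), 1) for q in QUESTIONS}
```

A question averaging under 3 across three observers is a stronger signal than any single run's commentary.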


Step 5: Route exploits and patch discipline

  1. Reproduce every cheese twice: once following the script, once chasing the shortcut.
  2. Classify: fix now (breaks pillar), fix next milestone (needs art), accept with telemetry (speedrun friendly but not default win).
  3. If you change collision, lighting, or peripheral AI, re-run Lesson 12 sightline spot checks before calling balance done.

Step 6: Patch-note template (copy per build)

Use this skeleton in Docs/Patchnotes_buildId.md or Notion:

  • Build ID / date / platform preset (Development vs Shipping).
  • Intent (what hypothesis we tested).
  • Player-facing changes (bullets, no engine jargon).
  • Balance table (sight radius, hearing range, search duration, cooldowns) with old → new values.
  • Known issues (honest list).
  • Metrics snapshot (median time-to-detect per route, win rate, quit points).
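The skeleton can be stamped out programmatically so every build gets identical headings. A sketch whose field names mirror the bullets above (the Markdown layout is one reasonable rendering, not a required format):

```python
def patch_note(build_id: str, intent: str, changes: list[str],
               balance: dict[str, tuple[float, float]],
               known_issues: list[str]) -> str:
    """Render a Markdown patch note; balance maps parameter -> (old, new)."""
    lines = [f"# {build_id}", "", f"**Intent:** {intent}", "",
             "## Player-facing changes"]
    lines += [f"- {c}" for c in changes]
    lines += ["", "## Balance", "| Parameter | Old | New |", "| --- | --- | --- |"]
    lines += [f"| {k} | {old} | {new} |" for k, (old, new) in balance.items()]
    lines += ["", "## Known issues"] + [f"- {k}" for k in known_issues]
    return "\n".join(lines)
```

Because the balance table is generated from data, the “old → new” column can never silently drift from what you actually changed.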

Ship the patch note with the zip or Itch page so returning testers know what moved.


Recap

  • Scripts beat vibes: routes make sessions comparable.
  • Logs or CSV hooks turn “feels off” into before / after numbers.
  • Exploit triage prevents silent scope creep into new geometry.
  • Patch notes are part of the vertical slice deliverable, not an afterthought.

FAQ

How many testers before I change numbers?
Three fresh runs minimum on the same build. If two agree on the same pain, it is real.

Do I need NDAs for friends-and-family?
No for most slices, but do ask them not to stream if assets are placeholder or licensed.

What if time-to-detect is “fine” but everyone quits?
Check objective clarity and checkpoint spacing first; stealth math cannot fix a lost player.


Next: Lesson 15: Packaging, Trailer Capture, and Case Study turns this balanced slice into a shippable build, trailer capture rails, and a one-page case study. Finish Lesson 14 when you have one documented iteration with patch notes and observer forms filed.