Case Studies and Experiments May 16, 2026

Twenty-One-Day Wishlist Source Tagging Experiment - A Synthesized UTM Discipline Pattern Before October 2026 Next Fest

Synthesized 2026 Experiments pattern for 21-day wishlist source tagging and UTM discipline before October Next Fest—tag taxonomy, weekly gates, release-evidence proof, and honest limits for 1–3 person teams. Pattern playbook, not named-studio biography.

By GamineAI Team



Your operating review Block 4 says “Steam discovery” every Friday—but you cannot prove whether Bluesky, fest badge, creator embed, or email moved wishlists last week. October 2026 Next Fest punishes that ambiguity: you will spend fest week guessing which capsule tweak worked.

This Case Studies and Experiments article is a synthesized 21-day tagging experiment—not a named-studio biography. Numbers below are model planning anchors for your own spreadsheet, not reported facts about any one game.

Why this matters now (May 2026)

  1. Fest density — More demos mean more external links; untagged links collapse attribution into “misc.”
  2. Discovery refresh — Steam surfaces reward coherent stacks; you need per-surface rows in capsule iteration logs.
  3. Partner diligence — Publisher packets increasingly ask how you measure marketing, not whether you post.
  4. Truth audits — Seven-day store truth fails if traffic spikes from a mislabeled creator clip you cannot trace.

Direct answer: Run a 21-day experiment with frozen tag taxonomy, one new tagged link per weekday minimum, weekly export to release-evidence/marketing-and-demo/attribution/, and Block 4 rows that name dominant surface with evidence path—not vibes.

Pattern disclaimer

| Allowed | Not allowed in this pattern |
|---------|-----------------------------|
| Model week-by-week structure | “Studio X gained 400% wishlists” |
| Qualitative outcome tables | Fabricated Steam app IDs or revenue |
| Your team’s measured exports | Copy-pasting our model numbers into decks |
| Honest “unknown” weeks | Pretending tags existed before week one |

Who should run this experiment

  • 1–3 person teams with a live Steam page and at least one external channel (social, newsletter, or creator)
  • Studios preparing October 2026 Next Fest who already completed or scheduled a truth audit
  • Teams with four Friday operating sheets started or planned

Skip if: no wishlist button yet, or you change store URL structure weekly (stabilize first).

Prerequisites (90 minutes, day 0)

  • [ ] Steamworks traffic stats access
  • [ ] Spreadsheet or markdown log template (below)
  • [ ] release-evidence/marketing-and-demo/attribution/README.md stub
  • [ ] Agreement: no capsule A/B during experiment weeks unless tagged as capsule-vN
  • [ ] Price experiments paused—price noise confounds source reads

Tag taxonomy (freeze on day 0)

Use lowercase hyphen slugs. Do not rename mid-experiment.

| Dimension | Allowed values (examples) |
|-----------|---------------------------|
| surface | steam-discovery, steam-fest, steam-creator, itch, bluesky, mastodon, newsletter, discord, press, unknown |
| creative | capsule-v3, trailer-a, demo-build-481, devlog-12 |
| intent | wishlist, demo-play, page-view-only |

Composite example: surface=bluesky&creative=trailer-a&intent=wishlist

UTM mapping (external web)

| Param | Maps to |
|-------|---------|
| utm_source | surface |
| utm_medium | social / email / embed |
| utm_campaign | creative |
| utm_content | optional A/B variant |
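The taxonomy-to-UTM mapping can be sketched as a tiny helper. This is a minimal sketch: the function name, base URL, and example values are illustrative assumptions; only the parameter mapping (surface → utm_source, creative → utm_campaign) comes from the table above.

```python
from urllib.parse import urlencode

# Map the frozen taxonomy onto UTM parameters for external web links.
# Names and URLs here are illustrative, not part of the pattern itself.
def tagged_url(base, surface, creative, medium, content=None):
    params = {
        "utm_source": surface,      # taxonomy dimension: surface
        "utm_medium": medium,       # social / email / embed
        "utm_campaign": creative,   # taxonomy dimension: creative
    }
    if content:
        params["utm_content"] = content  # optional A/B variant
    return f"{base}?{urlencode(params)}"

print(tagged_url("https://example.com/press", "bluesky", "trailer-a", "social"))
# -> https://example.com/press?utm_source=bluesky&utm_medium=social&utm_campaign=trailer-a
```

Generating links from one helper instead of hand-typing them is what keeps week-over-week comparisons honest: a typo in a hand-built query string silently lands in the unknown band.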

Steam store note: Steamworks does not honor UTMs on the store URL the way web analytics do—tags matter for off-Steam links that land on Steam via redirector pages, link filters, or tracked short links you control. Document which links are Steam-native (tracked only in Steamworks) vs external→Steam.

Experiment rules

  1. One new tagged outbound link per weekday minimum (even if low traffic).
  2. No untagged fest posts during the 21 days.
  3. Weekly export every Sunday to release-evidence/.../attribution/week-N.csv (or screenshot + markdown table if CSV unavailable).
  4. Block 4 must cite dominant surface + file path.
  5. Confound freeze: no untagged viral posts; if one happens, log as surface=unknown and note in README—do not retro-tag hope.

What “source tagging” means in 2026 (not 2018 growth hacks)

Source tagging here is operational discipline, not a funnel optimization cult. You are building a receipt trail that answers three questions every Friday:

  1. Which link did we publish?
  2. Which asset version did that link carry?
  3. What Steamworks row moved in the same 48-hour window (visits, wishlists) without claiming false causation?

UTM parameters on web landing pages still matter for itch pages, press kits, and newsletter tools. Steam-native traffic still matters in Steamworks → Traffic & Installs (or equivalent reporting for your app). The experiment forces both into one README narrative so publisher diligence can grep a folder instead of interrogating you live.

Steam-native vs external→Steam (decision tree)

Link published?
├─ Yes → Points to steam store / demo directly?
│   ├─ Yes → Log in inventory as steam-native; screenshot Steamworks by date
│   └─ No  → Points to landing / itch / press?
│       ├─ Yes → Full UTM + short-link click log
│       └─ No  → Fix link before posting
└─ No → Do not claim surface credit this week
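The same branch can be written as a small function for your weekly log script. A sketch only, assuming three yes/no answers per link; the return labels are illustrative, not part of the frozen taxonomy.

```python
# Minimal sketch of the decision tree above.
# Return labels are hypothetical names for inventory bookkeeping.
def classify_link(published, points_to_steam, points_to_landing):
    if not published:
        return "no-surface-credit"      # do not claim surface credit this week
    if points_to_steam:
        return "steam-native"           # log in inventory; screenshot Steamworks by date
    if points_to_landing:
        return "external-utm"           # full UTM + short-link click log
    return "fix-before-posting"         # broken or unclassified link

print(classify_link(True, True, False))   # -> steam-native
print(classify_link(True, False, True))   # -> external-utm
```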

Teams that skip the branch treat every wishlist spike as “Twitter worked” because a post went viral—then truth audits reveal the demo still mislabeled co-op.

Link inventory template (day 1)

Copy into link-inventory.md:

# Link inventory — <app name> — <start date>

| # | Label | URL (truncated ok) | surface | creative | Last verified |
|---|-------|-------------------|---------|----------|---------------|
| 1 | Steam store canonical | | steam-discovery | capsule-v? | |
| 2 | Fest demo branch | | steam-fest | demo-build-? | |
| 3 | Bluesky bio link | | bluesky | profile | |
...

## Untagged (fix before week 2)
- 

Rule: rows without surface cannot appear in “dominant surface” claims.
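This rule can be enforced mechanically. A minimal sketch, assuming inventory rows as dicts whose keys mirror the template columns; the row values below are illustrative:

```python
# Rows without a surface slug are excluded from "dominant surface" claims.
def claimable_rows(rows):
    return [r for r in rows if r.get("surface")]

inventory = [
    {"label": "Steam store canonical", "surface": "steam-discovery"},
    {"label": "Mystery Discord pin", "surface": ""},  # untagged: fix before week 2
]
print([r["label"] for r in claimable_rows(inventory)])
# -> ['Steam store canonical']
```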

Week 1 (days 1–7) — Baseline naming

Goal: Stop losing data to “misc.”

Daily micro-task (15–25 min)

| Day | Task | Proof |
|-----|------|-------|
| Mon | Inventory every live link to store/demo | link-inventory.md |
| Tue | Tag top five historical links | Before/after table |
| Wed | Create Bluesky template with fixed query string | Screenshot |
| Thu | Tag itch embed if scoped per HTML5 SKU opinion | Row in inventory |
| Fri | Operating sheet Block 4 names surfaces | studio/weekly-reviews/ |
| Sat | Export Steamworks traffic by source (screenshot) | week-1-steamworks.png |
| Sun | README week-1 summary | Honest “unknown %” |

Model week-1 qualitative outcome

| Signal | Healthy pattern | Unhealthy pattern |
|--------|-----------------|-------------------|
| Inventory | Every link has slug | Mystery Discord pins |
| Steamworks | Can name top 2 surfaces | 100% “other” |
| Block 4 | Surfaces listed | “posted stuff” |

Week 2 (days 8–14) — Controlled posts

Goal: Deliberate posts with one changed variable.

Posting discipline

  • One surface per day emphasis (rotate Bluesky → newsletter → itch).
  • Same creative tag for three days when testing copy, change creative when testing visual.
  • Pair with screenshot composition gates only after tags exist—otherwise you confound visual and source.

Model mid-experiment review (60 minutes, day 10)

Questions:

  1. Did any tagged link show zero clicks but non-zero wishlists? → investigate Steam discovery lag or untagged paths.
  2. Did unknown rise week over week? → enforcement failure, not audience failure.
  3. Are fest badges live without surface=steam-fest? → fix before week 3.

Week 2 failure mode

Tag sprawl: inventing bluesky-thread-may-12-take-2. Freeze to taxonomy; put nuance in README notes, not slug grammar.
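One way to hold the freeze is a hard-coded set check before any link goes out. The set mirrors the surface row of the taxonomy table; the helper name is hypothetical.

```python
# Frozen day-0 taxonomy; mirrors the surface dimension table above.
SURFACES = {
    "steam-discovery", "steam-fest", "steam-creator", "itch",
    "bluesky", "mastodon", "newsletter", "discord", "press", "unknown",
}

def check_slug(surface):
    # Reject invented slugs like "bluesky-thread-may-12-take-2";
    # nuance belongs in README notes, not slug grammar.
    return surface in SURFACES

print(check_slug("bluesky"))                      # -> True
print(check_slug("bluesky-thread-may-12-take-2")) # -> False
```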

Short-link and redirect hygiene

You do not need a paid stack on day one. Pick one short-link owner (Bitly, self-hosted redirect, or platform-native analytics) and freeze it for 21 days.

| Approach | Pros | Cons |
|----------|------|------|
| Platform-native (Bluesky/Mastodon analytics) | Zero setup | Hard to compare across surfaces |
| Single short-link domain | Consistent click log | Another bill / DNS task |
| Self-hosted 302 in repo | Grep-able config | Engineering time |

Model rule: same short-link base path per surface so week-2 compare is apples-to-apples. Example: yoursite.com/go/bluesky/* vs yoursite.com/go/newsletter/*.

Document redirects in link-inventory.md—partners spot broken redirects faster than you expect.
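A self-hosted 302 in a repo can be as small as a stdlib handler. This is a sketch under assumptions: the redirect map, port, and placeholder store URL (app id deliberately fake) are illustrative, and the per-surface path prefix follows the model rule above.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Grep-able redirect config: the path prefix encodes the surface,
# the target carries the frozen UTM string. All URLs are placeholders.
REDIRECTS = {
    "/go/bluesky/store": "https://store.steampowered.com/app/000000/?utm_source=bluesky",
    "/go/newsletter/store": "https://store.steampowered.com/app/000000/?utm_source=newsletter",
}

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        target = REDIRECTS.get(self.path)
        if target:
            self.send_response(302)                # temporary redirect
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_error(404, "unknown short link")

# To run locally: HTTPServer(("", 8080), Redirector).serve_forever()
```

Because the map lives in the repo, a broken redirect shows up in diff review before a partner finds it.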

Creator and press links (high confound lane)

Creators are not your employees. When a video drops untagged:

  1. Log surface=unknown for that week’s narrative.
  2. Add a creator kit row with pre-tagged URLs before the next outbound.
  3. Never retroactively tag the creator’s URL without their repost—your inventory must reflect links you control.

Press keys belong in release-evidence/marketing-and-demo/press/ with frozen UTMs per outlet slug (surface=press, creative=outlet-name).

Week 3 (days 15–21) — Decision week

Goal: Pick one dominant external surface to double down pre-fest, and one to pause.

Decision matrix (qualitative)

| Signal | Evidence quality | Action |
|--------|------------------|--------|
| High visits + high wishlist adds, aligned in Steamworks | Strong | Increase cadence |
| High clicks, low wishlists | Weak funnel | Fix truth audit before spend |
| Low clicks, high wishlists | Likely discovery/internal | Do not credit external post |
| Unknown >30% of narrative | Broken discipline | Repeat week 1 inventory |

Day 21 deliverables

  1. attribution/decision-memo.md (one page)
  2. Updated capsule calendar with surface notes
  3. Block 4 row: “dominant surface + next 14-day plan”
  4. Optional: wire up the telemetry primer for demo opens—not required to finish the experiment

Release-evidence layout

release-evidence/marketing-and-demo/attribution/
  README.md
  link-inventory.md
  week-1-steamworks.png
  week-2-export.md
  week-3-export.md
  decision-memo.md

README minimum table:

| Week | Dominant surface (claimed) | Unknown % band | Capsule frozen? |
|------|----------------------------|----------------|-----------------|
| 1    |                            |                | Y/N             |
| 2    |                            |                | Y/N             |
| 3    |                            |                | Y/N             |

Spreadsheet columns (if you prefer Sheets)

| Column | Purpose |
|--------|---------|
| date | Post or link live date |
| url | Full tagged URL |
| surface | Taxonomy slug |
| creative | Asset id |
| clicks | From short-link or platform analytics |
| steam_visits | Manual Steamworks row (date range) |
| wishlist_delta | From Steamworks (same range) |
| notes | Confounds (sale, patch, fest badge) |

Do not merge weeks in one row—weekly grain keeps honesty.
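The weekly grain also makes the unknown band computable. A sketch assuming the column names above; the rows are model values for illustration, not reported data.

```python
from collections import defaultdict

# Per-week unknown band from weekly-grain rows (percent of rows
# tagged surface=unknown). Column names mirror the spreadsheet above.
def unknown_band(rows):
    totals, unknown = defaultdict(int), defaultdict(int)
    for r in rows:
        totals[r["week"]] += 1
        if r["surface"] == "unknown":
            unknown[r["week"]] += 1
    return {w: round(100 * unknown[w] / totals[w]) for w in totals}

rows = [
    {"week": 1, "surface": "bluesky"},
    {"week": 1, "surface": "unknown"},
    {"week": 2, "surface": "newsletter"},
]
print(unknown_band(rows))  # -> {1: 50, 2: 0}
```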

Integration with operating review

| Block | Experiment hook |
|-------|-----------------|
| Engineering | Log build hash when demo link changes |
| Production | Scope freeze for untagged “just this once” posts |
| Marketing | Mandatory dominant surface + evidence path |
| Finance | Defer paid ads until week 3 decision—avoid untagged spend |

Four consecutive Friday sheets during the experiment make partner questions answerable without a live call.

Confounds to log explicitly

  • Steam seasonal sale (tag confound=steam-sale)
  • Patch day (tag confound=patch-481)
  • Creator video you did not coordinate (tag surface=unknown)
  • Browser demo SKU traffic separate from PC wishlist path

Ignoring confounds produces false “Bluesky wins” stories.

Sibling articles (read order)

  1. Capsule iteration calendar — when to change creative tags
  2. Truth audit challenge — before scaling spend
  3. Post-Next-Fest plateau playbook — after fest, same tags persist
  4. Release-evidence taxonomy — parent folder rules
  5. AI disclosure checklist — if generative assets appear in tagged creatives

Anti-patterns (synthesized post-mortems)

  1. UTM on Steam URL only — expecting Steam to parse like GA4.
  2. Renaming tags week 2 — breaks week-over-week compare.
  3. Celebrating wishlists without visit row — discovery moved, not your post.
  4. Twenty tags, zero posts — inventory theater.
  5. Skipping Sunday export — partner packet has nothing to grep.

Day-by-day detail (model calendar)

| Day | Focus | Minimum proof artifact |
|-----|-------|------------------------|
| 1 | Full inventory | link-inventory.md ≥15 rows |
| 2 | Tag historical top 5 | Diff column “tag added” |
| 3 | Bluesky template | Screenshot + template file |
| 4 | itch / web SKU check | Row + browser SKU note |
| 5 | Friday operating sheet | Block 4 surfaces |
| 6 | Steamworks screenshot | Dated PNG |
| 7 | Week 1 README | Unknown band estimate |
| 8 | Newsletter or Discord tagged send | Outbound log row |
| 9 | Same creative, new copy only | Two rows, same creative |
| 10 | Mid-experiment review | 60-min notes in README |
| 11 | Pause untagged channel | Written team rule |
| 12 | Creator kit update | Pre-tagged URLs file |
| 13 | Production scope check | No surprise fest posts |
| 14 | Week 2 export | Markdown table |
| 15 | Compare week 1 vs 2 bands | Qualitative paragraph |
| 16 | Double-down surface chosen | Draft memo bullet |
| 17 | Capsule calendar annotation | Link to calendar article |
| 18 | Demo link tag audit | Build hash + URL row |
| 19 | Confound log review | Any sale/patch rows |
| 20 | Partner packet dry-run | Cold-open attribution folder |
| 21 | decision-memo.md final | One-page max |

Skipping a day is allowed; skipping Sunday export is not.

Interpreting Steamworks without lying

Steam reporting lags and buckets shift. Model interpretation rules:

  • Same-week narrative: “visits rose after tagged Bluesky post; wishlists rose in same window—hypothesis external surface contributed.”
  • Forbidden narrative: “Bluesky caused X wishlists” without click row + confound check.
  • Discovery spike with no tagged post: credit steam-discovery, not last night’s Discord joke.
  • Fest badge live: require surface=steam-fest row before fest week posts.

Pair qualitative reads with plateau diagnostics after fest, not during week two panic.

Engineering hooks (optional, day 12+)

If you already run privacy-safe telemetry, add one demo-open event property referrer_tag mirrored from URL query—not twenty events. Godot / Unity implementation is team-specific; the experiment only requires documented property names in attribution/README.md.

Web demos under Godot WASM ceilings should pass tags into embed parent URLs where itch allows—log failures as unknown, not silent success.
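Mirroring the tag into one telemetry property can be sketched with stdlib URL parsing. referrer_tag is the property name from the text; the URLs and the explicit unknown fallback are assumptions for illustration.

```python
from urllib.parse import urlparse, parse_qs

# Mirror one URL query param into a single demo-open event property.
# Missing tag is logged as "unknown" loudly, never as silent success.
def referrer_tag(url):
    qs = parse_qs(urlparse(url).query)
    return qs.get("referrer_tag", ["unknown"])[0]

print(referrer_tag("https://example.com/demo?referrer_tag=bluesky"))  # -> bluesky
print(referrer_tag("https://example.com/demo"))                       # -> unknown
```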

Second FAQ batch

Should we tag internal playtest links?

Yes—surface=playtest prevents false external credit.

What about Steam curator keys?

Log as surface=press or surface=steam-creator with curator slug; never mix with personal social tags.

Can we run two experiments at once?

Not with capsule A/B and source tagging unless creative dimension separates them—default no for micro-teams.

Does this replace 14 screenshot tools?

No—visual stack is parallel; tags tell you which screenshot version was live when a link fired.

Partner / publisher questions this experiment answers

| Question | Evidence path |
|----------|---------------|
| “How do you know marketing works?” | attribution/decision-memo.md |
| “What will you double before fest?” | Week 3 dominant surface row |
| “Do you understand Steam vs external?” | Inventory branch notes |
| “Are you guessing?” | Unknown band trend across weeks |

FAQ

Is 21 days mandatory?

No. Twenty-one days is the model length: long enough to build the habit and capture three Sunday exports. Fourteen days is the minimum if fest is imminent; do not shorten below two weekly exports.

Do itch and Steam need separate inventories?

Yes—different surface values, different confounds, same README table.

What if we have no newsletter?

Rotate available surfaces; document newsletter=n/a in README.

Can AI write our tagged posts?

Assistive drafting OK; human publishes and verifies URL string character-for-character.

How does this interact with price anchor worksheet?

Finish source tagging before price A/B—price changes swamp attribution.

Ninety-minute “start Monday” sprint

| Minute | Output |
|--------|--------|
| 0–20 | Create folders + README |
| 20–45 | link-inventory.md top 20 links |
| 45–70 | Tag five highest-traffic links |
| 70–85 | Schedule 21-day calendar holds |
| 85–90 | Block 4 template line for Friday |

Glossary

| Term | Meaning |
|------|---------|
| Source tagging | Naming where traffic originated |
| Surface | Platform or Steam-native bucket |
| Creative | Which asset version carried the link |
| Unknown band | Qualitative % of unattributed narrative |
| Confound | Event that invalidates single-post credit |

Printable week gates

Week 1 gate: inventory complete, unknown band noted.
Week 2 gate: three deliberate tagged posts shipped.
Week 3 gate: decision memo + capsule calendar note.

Fail any gate → extend experiment one week; do not pretend fest prep is done.

Contrarian note

Some teams argue attribution is pointless for tiny samples. The 2026 counter-argument is process proof: partners and future-you need evidence you can steer, not a TED-chart fantasy. An honest “unknown 40%” in week one beats a fabricated funnel slide.

After the experiment (days 22–42)

Do not delete the folder when the sprint ends. Days 22–42 maintain a lighter cadence:

  • One Sunday export per fortnight
  • Block 4 still names dominant surface
  • Update decision-memo.md only when you change surface strategy or creative major version
  • Feed learnings into operating review adoption if partners ask for ops proof

Stopping all tagging after day 21 recreates the archaeology problem before launch week.

Email to teammates (model, day 0)

Subject: 21-day attribution experiment — tag freeze

Body:

We are freezing surface / creative slugs for 21 days. No store posts without an inventory row. Sunday exports go to release-evidence/marketing-and-demo/attribution/. Price tests paused. Questions go to README, not new tag names.

Short mail reduces week-2 tag sprawl more than a Notion doc nobody opens. Pin the README table in your team chat every Sunday so exports do not slip.

Sample decision memo outline (model)

# Attribution decision — YYYY-MM-DD

## Dominant external surface
- Name:
- Evidence paths:

## Surface to pause
- Name:
- Reason:

## Unknown band trend
- Week 1 / 2 / 3:

## Fest prep implication
- Capsule:
- Demo link tags:

## Next 14 days
- One surface to double:
- One creative to refresh:

Close: October 2026 Next Fest rewards teams that can name their traffic lanes with receipts, not anecdotes. Run the 21-day tagging experiment in May and June so fest week is execution—not archaeology in link shorteners. Tag every outbound path, export every Sunday, and let Block 4 tell the truth about which surface actually moved the needle. When in doubt, log unknown loudly and fix links first—not the taxonomy or the deck.