Lesson 31: Rolling Stability Dashboard CSV Export and Webhook Ingest Automation
Lesson 30 gave you a manual rolling dashboard that stays honest about release-train health.
This lesson adds automation you can actually maintain: a small ingest path that pulls closed Lesson 29 tracker rows into the dashboard sheet or database on a schedule, plus an optional webhook lane when your incident tool already emits JSON.
The goal is not a full data platform. It is one boring pipeline that stops human copy errors from becoming fake green trains.

What you will build
By the end of this lesson, you will have:
- A versioned CSV contract that matches the Lesson 29 closing row schema
- A scheduled export job (CI or server cron) that uploads or appends rows idempotently
- A signed webhook receiver sketch you can implement behind HTTPS with replay protection
- A failure visibility rule so silent drops are impossible for more than one run
Step 1 - Freeze the row contract before you automate
Your ingest file must include at minimum the columns from Lesson 30’s mini exercise:
`build_id`, `promotion_date`, `yellow_open_close`, `stability_signal`, `dialogue_signal`, `train_note`, `recurrence_flag`
Add two automation columns:
- `ingest_batch_id` (UUID or UTC timestamp from the exporter)
- `source` (`csv_export` or `webhook`)
If you skip the contract step, you will automate chaos.
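For reference, here is a complete v1 header plus one illustrative row for the fake build `0.9.7-rc2` from the mini exercise below. The signal values are placeholders; reuse whatever vocabulary your Lesson 30 sheet already defines.

```csv
build_id,promotion_date,yellow_open_close,stability_signal,dialogue_signal,train_note,recurrence_flag,ingest_batch_id,source
0.9.7-rc2,2026-03-14,2/2,green,green,crash spike closed after hotfix,false,2026-03-14T06:00:00Z,csv_export
```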
Step 2 - Choose CSV first, webhooks second
CSV wins when:
- your tracker already lives in Sheets, Notion, or Linear exports
- your team is more comfortable with Git diffs than HTTP services
Webhooks win when:
- PagerDuty, Opsgenie, or GitHub already emits structured incident payloads you want to map into `train_note` or `recurrence_flag`
Never start with both on day one. Ship CSV ingest, prove the dashboard moves correctly for two real patches, then add webhooks.
Step 3 - Implement a scheduled exporter
Pick the smallest runtime you already operate:
- GitHub Actions `on: schedule` writing an artifact plus an optional commit to a `metrics/` branch
- a single cron on a tiny VM that runs a Python or Node script and pushes the CSV to your dashboard host with `scp`/`rsync`
- a Google Apps Script time-driven trigger if your dashboard is already Sheets-native
Hard requirements:
- exporter writes UTF-8 CSV with stable header order
- exporter sets `ingest_batch_id` once per run
- exporter is idempotent: a re-run replaces the same batch slice instead of duplicating rows (see the sketch after this list)
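A minimal Python sketch of those three requirements, assuming the rows for the current run are already collected as dicts; the function name and output-path handling are illustrative, not a fixed API:

```python
import csv
from datetime import datetime, timezone

# Stable header order is part of the v1 contract; never reorder it casually.
HEADER = [
    "build_id", "promotion_date", "yellow_open_close", "stability_signal",
    "dialogue_signal", "train_note", "recurrence_flag",
    "ingest_batch_id", "source",
]

def export_batch(rows, out_path):
    """Write one ingest batch as UTF-8 CSV with a stable header order.

    ingest_batch_id is set exactly once per run, and writing in "w" mode
    means a re-run regenerates this batch slice instead of appending to it.
    """
    batch_id = datetime.now(timezone.utc).isoformat()  # or a UUID4 string
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=HEADER)
        writer.writeheader()
        for row in rows:
            writer.writerow(dict(row, ingest_batch_id=batch_id, source="csv_export"))
    return batch_id
```

Rewriting the whole batch file in `w` mode is the cheapest idempotency you can buy: the job can crash and re-run all day without ever duplicating a row.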
Step 4 - Add idempotent merge rules in the dashboard
When importing:
- Match rows on `build_id` plus `ingest_batch_id` if you expect multiple partial uploads
- For the same `build_id` with a newer `ingest_batch_id`, overwrite the train columns only
- Never delete human annotations in free-text cells unless they live in a protected column outside the ingest range
Document the merge rule beside the sheet tab so future you does not “fix” it with manual sorting.
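One possible reading of those rules in code, as a sketch: it assumes each row is a dict keyed by contract column, both sides are keyed by `build_id`, and batch ids sort chronologically as strings (true if you use UTC timestamps).

```python
# Columns the ingest pipeline owns. Anything outside this list is treated
# as a protected human-annotation column and survives the merge untouched.
TRAIN_COLUMNS = [
    "promotion_date", "yellow_open_close", "stability_signal",
    "dialogue_signal", "train_note", "recurrence_flag",
    "ingest_batch_id", "source",
]

def merge_batch(existing, incoming):
    """Merge incoming rows into existing rows, both keyed by build_id.

    A row with a newer ingest_batch_id overwrites train columns only;
    unknown build_ids are inserted whole. Nothing is ever deleted.
    """
    for build_id, new_row in incoming.items():
        old_row = existing.get(build_id)
        if old_row is None:
            existing[build_id] = dict(new_row)
        elif new_row["ingest_batch_id"] > old_row["ingest_batch_id"]:
            # UTC-timestamp batch ids compare chronologically as strings.
            for col in TRAIN_COLUMNS:
                old_row[col] = new_row[col]
    return existing
```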
Step 5 - Optional webhook receiver sketch
If you add HTTP ingest:
- require HTTPS termination you control
- verify a shared secret header or HMAC signature computed from raw body plus timestamp
- reject replays older than a few minutes using the timestamp window
- respond `200` only after the row is persisted; otherwise return `500` so the sender retries responsibly
Map only the fields you need. Do not mirror entire incident payloads into player-visible sheets.
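A minimal receiver sketch using Flask and the standard library, assuming the sender puts an HMAC-SHA256 of `timestamp + "." + raw_body` in an `X-Signature` header and a Unix timestamp in `X-Timestamp`. The header names, secret variable, and the `map_payload_to_row`/`persist_row` stubs are illustrative, not any vendor's real scheme.

```python
import hashlib
import hmac
import os
import time

from flask import Flask, abort, request

app = Flask(__name__)
SECRET = os.environ["INGEST_WEBHOOK_SECRET"].encode()  # name lives in your CI vault
MAX_SKEW_SECONDS = 300  # replay window: a few minutes, per the rule above

def map_payload_to_row(payload):
    # Placeholder mapper: keep only contract fields, never whole payloads.
    return {"build_id": payload.get("build_id"), "train_note": payload.get("summary")}

def persist_row(row):
    # Placeholder persistence: replace with your sheet/DB append.
    print("persist", row)
    return True

@app.route("/ingest", methods=["POST"])
def ingest():
    timestamp = request.headers.get("X-Timestamp", "")
    signature = request.headers.get("X-Signature", "")
    body = request.get_data()  # raw bytes, before any JSON parsing

    # Reject stale or unparseable timestamps outright (replay protection).
    try:
        if abs(time.time() - int(timestamp)) > MAX_SKEW_SECONDS:
            abort(401)
    except ValueError:
        abort(401)

    # HMAC over raw body plus timestamp, compared in constant time.
    expected = hmac.new(
        SECRET, timestamp.encode() + b"." + body, hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(expected, signature):
        abort(401)

    row = map_payload_to_row(request.get_json())
    if not persist_row(row):  # must be durable before acknowledging
        return "ingest failed", 500  # sender retries responsibly
    return "ok", 200
```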
Mini exercise
Create `ingest_contract_v1.md` that lists:
- final column names and types
- example row for a fake build `0.9.7-rc2`
- merge rule paragraph
- where secrets live (CI vault name only, not literal secrets)
- alert destination if ingest fails twice in a row
Troubleshooting
Duplicate rows after every cron run
Your merge key is too weak. Move uniqueness to `build_id` plus the highest `ingest_batch_id` per column group.
Webhook works in Postman but not production
Check TLS intermediates, clock skew on signature timestamps, and body canonicalization (JSON minification breaks naive HMAC strings).
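A quick way to see the canonicalization failure mode (a sketch; the point is that the signature must cover the exact bytes received, never a re-serialized copy):

```python
import json

# Bytes exactly as the sender transmitted (and signed) them, minified:
raw = b'{"severity":2,"build_id":"0.9.7-rc2"}'

# Parsing and re-serializing changes spacing (and possibly key order),
# so an HMAC computed over the re-serialized string can never match one
# computed over the original bytes.
reserialized = json.dumps(json.loads(raw)).encode()
assert raw != reserialized  # always verify the signature over raw bytes
```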
Dashboard owners distrust automation
Keep Lesson 29 manual export as a break-glass path for one release. Confidence follows choice, not force.
FAQ
Do we need a database?
Not until three rolling columns feel tight. Sheets plus CSV is enough for many indies through their first major launch.
How does this relate to player log correlation?
Use the same build_id string in ingest rows, CI signing logs, and player-facing crash bundles. The Unity Build Profile and Signing Preflight Checklist chapter already nags you to align those tokens; treat this lesson as the dashboard side of that same habit.
Should webhooks include PII?
No. Strip player identifiers in the mapper. Store counts and severities, not names.
Lesson recap
You can now:
- export Lesson 29 closures on a schedule without copy drift
- merge ingested rows safely into Lesson 30 views
- add optional signed webhooks without inventing a new schema per vendor
- see failures loudly when automation breaks
Next lesson teaser
When you outgrow Sheets, promote the same rows into a read-only warehouse layer with contracts and service accounts: Lesson 32: Read-Only Analytics Warehouse Contracts and Service Accounts. Until then, keep ingest batches small and your merge rules written down.
Related learning
- Lesson 29: Two-Week Post-Patch Confidence Tracker
- Lesson 30: Multi-Patch Rolling Stability Dashboard
- Lesson 26: Analytics Confidence Review and Live-Ops Handoff
- 15 Free Crash Triage and Repro Logging Tools for Unity, Unreal, and Godot Teams (2026 Edition)
- Unity Cloud Save Conflict Resolution Overwrites Newer Data — merge discipline parallels for cloud-backed player state, not dashboard cells, but the idempotency mindset transfers.
Bookmark this lesson when your Lesson 30 sheet becomes the most opened tab in release week.