Lesson 121: Cross-Window Decision-Outcome Divergence Review Lane Wiring for Strategy Packet Calibration Continuity (2026)
Direct answer: Build one deterministic divergence review lane per release window that compares strategy packet predictions to observed outcomes, classifies drift severity, and forces policy calibration updates when mismatch thresholds are breached.
Why this matters now (2026 calibration pressure)
Lesson 120 gave you signer-traceable strategy approval packets. That solved approval replay, but not calibration continuity.
In 2026 release lanes, teams often approve mitigation choices with strong evidence and still repeat the same planning mistakes because they never close the loop between:
- predicted risk in the packet
- observed risk after release
- policy changes needed for next windows
Without a divergence lane, packet quality can look high while decision accuracy decays quietly.
This lesson wires the lane that prevents silent drift.

What this lesson adds beyond Lesson 120
Lesson 120 answers: "Was the decision approved correctly?"
Lesson 121 answers: "Was the approved decision directionally correct, and what must change now?"
You are adding:
- a decision-outcome comparison schema
- divergence scoring and severity bands
- automatic calibration triggers
- signer-visible correction workflow
This converts governance from static approval into adaptive control.
Learning goals
By the end of this lesson, you will be able to:
- model expected vs observed outcome rows for each strategy packet
- calculate divergence scores with confidence weighting
- classify divergence severity with deterministic rules
- trigger policy and template calibration when thresholds fail
- export one review bundle for leadership and audit continuity
Prerequisites
- Lesson 119 mitigation simulation lane outputs
- Lesson 120 strategy-approval packet IDs and signer records
- release window outcome metrics with stable identifiers
- one baseline policy version for comparison
1) Define divergence lane contract
Create a schema file such as divergence_lane_review.csv with required columns:
- identifiers: window_id, strategy_packet_id, decision_lane_id
- expected: expected_risk_band, expected_impact_score, expected_time_to_stabilize_h
- observed: observed_risk_band, observed_impact_score, observed_time_to_stabilize_h
- review: confidence_at_approval, divergence_score, divergence_class, calibration_action, owner, status
If a packet cannot populate these fields, mark it non-reviewable and block closure.
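A minimal intake check, as a Python sketch: it treats the score, class, and action columns as computed later by the scorer, and blocks closure by marking incomplete rows non_reviewable. The column names follow the contract above; the mark_non_reviewable helper itself is illustrative.

```python
import csv

# Intake fields a packet must populate before review; the score, class,
# and action columns are filled in later by the scoring pass.
INTAKE_FIELDS = [
    "window_id", "strategy_packet_id", "decision_lane_id",
    "expected_risk_band", "expected_impact_score", "expected_time_to_stabilize_h",
    "observed_risk_band", "observed_impact_score", "observed_time_to_stabilize_h",
    "confidence_at_approval",
]

def mark_non_reviewable(rows: list[dict]) -> list[dict]:
    """Flag rows with missing contract fields so lane closure can be blocked."""
    for row in rows:
        missing = [c for c in INTAKE_FIELDS if not (row.get(c) or "").strip()]
        if missing:
            row["status"] = "non_reviewable"  # downstream gates key off this
    return rows

with open("divergence_lane_review.csv", newline="") as f:
    rows = mark_non_reviewable(list(csv.DictReader(f)))
```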
2) Normalize expected and observed metrics
Do not compare raw metrics without normalization. Use one lane policy:
- map risk bands to numeric scale
- map impact scale definitions to fixed thresholds
- map stabilization time to window-adjusted values
Example normalized scale:
- risk band: low=1, medium=2, high=3, critical=4
- impact score: 0 to 100
- stabilization: hours from incident start to policy-defined steady state
Normalization ensures divergence rows stay comparable across windows.
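One way to encode this as a sketch. The 168-hour baseline and the window_hours scaling are assumptions standing in for your policy's window-adjustment rule, not a fixed convention:

```python
RISK_BAND_SCALE = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def normalize(risk_band: str, impact_score: float,
              time_to_stabilize_h: float, window_hours: float,
              baseline_hours: float = 168.0) -> dict:
    """Map raw packet metrics onto the lane's fixed normalized scales."""
    return {
        "risk": RISK_BAND_SCALE[risk_band.lower()],    # 1..4
        "impact": max(0.0, min(100.0, impact_score)),  # clamp to 0..100
        # Scale stabilization hours to a baseline window length so short and
        # long release windows stay comparable (assumed adjustment policy).
        "tts_h": time_to_stabilize_h * (baseline_hours / window_hours),
    }
```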
3) Compute divergence score with confidence weighting
Use a practical formula:
divergence_score = (risk_delta * 0.35) + (impact_delta * 0.40) + (stabilization_delta * 0.25)
Where:
risk_delta = abs(expected_risk - observed_risk) / 3
impact_delta = abs(expected_impact - observed_impact) / 100
stabilization_delta = min(abs(expected_tts - observed_tts) / policy_tts_cap, 1.0)
Then confidence-adjust:
confidence_adjusted_divergence = divergence_score * (0.6 + (confidence_at_approval * 0.4))
Rationale:
- high-confidence wrong predictions are more serious than low-confidence misses
- this discourages overconfident approvals without evidence depth
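The scoring function below implements the formula above verbatim; only the dict shape (the output of the normalization step) is an assumption:

```python
def divergence_score(expected: dict, observed: dict, policy_tts_cap: float) -> float:
    """Weighted divergence per the lane formula (weights 0.35 / 0.40 / 0.25)."""
    risk_delta = abs(expected["risk"] - observed["risk"]) / 3
    impact_delta = abs(expected["impact"] - observed["impact"]) / 100
    stabilization_delta = min(
        abs(expected["tts_h"] - observed["tts_h"]) / policy_tts_cap, 1.0)
    return risk_delta * 0.35 + impact_delta * 0.40 + stabilization_delta * 0.25

def confidence_adjusted(score: float, confidence_at_approval: float) -> float:
    """High-confidence misses weigh more: multiplier runs 0.6 (conf=0) to 1.0 (conf=1)."""
    return score * (0.6 + confidence_at_approval * 0.4)

# Example: a medium-risk prediction that landed high, with 0.9 approval confidence.
score = divergence_score(
    {"risk": 2, "impact": 30.0, "tts_h": 4.0},
    {"risk": 3, "impact": 55.0, "tts_h": 10.0},
    policy_tts_cap=24.0,
)
adjusted = confidence_adjusted(score, 0.9)  # ~0.268 -> class D1
```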
4) Define divergence severity classes
Use deterministic bands:
- D0 (aligned): score < 0.15
- D1 (minor drift): 0.15 to 0.29
- D2 (material drift): 0.30 to 0.49
- D3 (critical drift): >= 0.50
Policy effects:
- D0: close with no calibration
- D1: log note, optional calibration
- D2: mandatory template calibration in next window
- D3: mandatory policy and signer review before next approval cycle
No manual downgrade of class without signed waiver record.
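A direct mapping of the bands, treating each as a half-open interval so a continuous score always lands in exactly one class:

```python
def divergence_class(adjusted_score: float) -> str:
    """Deterministic band mapping; thresholds come straight from the lane policy."""
    if adjusted_score < 0.15:
        return "D0"
    if adjusted_score < 0.30:
        return "D1"
    if adjusted_score < 0.50:
        return "D2"
    return "D3"
```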
5) Wire class-to-action routing
Every divergence class needs explicit next action.
D0 route
- attach outcome evidence
- mark lane complete
- preserve baseline
D1 route
- add correction note to packet template hints
- assign lightweight review owner
- track for recurrence
D2 route
- revise risk-band mapping logic
- revise mitigation-option scoring guidance
- require review in next strategy planning kickoff
D3 route
- open calibration incident ticket
- require signer revalidation workshop
- block next comparable lane approvals until corrective changes are merged
Route determinism prevents "acknowledge and ignore" behavior.
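Encoding the routes as data makes "acknowledge and ignore" harder, since every class must resolve to concrete actions before a lane can move. A sketch with illustrative action identifiers:

```python
# Class-to-action routing as data, mirroring the routes above.
ROUTES: dict[str, list[str]] = {
    "D0": ["attach_outcome_evidence", "mark_lane_complete", "preserve_baseline"],
    "D1": ["add_template_hint_note", "assign_review_owner", "track_recurrence"],
    "D2": ["revise_risk_band_mapping", "revise_option_scoring_guidance",
           "require_planning_kickoff_review"],
    "D3": ["open_calibration_incident", "require_signer_revalidation_workshop",
           "block_comparable_lane_approvals"],
}

def required_actions(cls: str) -> list[str]:
    """Every class resolves to explicit next actions; unknown classes fail loudly."""
    return ROUTES[cls]
```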
6) Add cross-window recurrence detection
Single-window drift can be noise. Recurrent drift is governance failure.
Create recurrence keys:
- decision_pattern_key (same lane type + same risk class + same mitigation family)
- team_mode_key (same owner group and release context)
Escalate when:
- D2 appears in 2 consecutive windows for same key
- D3 appears once with confidence above configured threshold
- combined D1+D2 rate breaches policy cap for a quarter
This lets you distinguish isolated misses from structural bias.
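A sketch of the first escalation trigger (D2 in two consecutive windows for the same key). The lane_type and mitigation_family row fields are assumed names, and the other two triggers would hang off the same keys:

```python
from collections import defaultdict

def recurrence_key(row: dict) -> tuple:
    """decision_pattern_key: lane type + risk class + mitigation family."""
    return (row["lane_type"], row["expected_risk_band"], row["mitigation_family"])

def find_escalations(windows: list[list[dict]]) -> set[tuple]:
    """Escalate keys that hit D2 in two consecutive windows (oldest first)."""
    d2_streak: dict[tuple, int] = defaultdict(int)
    escalated: set[tuple] = set()
    for window_rows in windows:
        d2_keys = {recurrence_key(r) for r in window_rows
                   if r["divergence_class"] == "D2"}
        for key in list(d2_streak):       # streak breaks when a key skips a window
            if key not in d2_keys:
                d2_streak[key] = 0
        for key in d2_keys:
            d2_streak[key] += 1
            if d2_streak[key] >= 2:
                escalated.add(key)
    return escalated
```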
7) Build calibration patch queue
When D2 or D3 happens, produce concrete patch rows:
- patch_id
- target (template, policy rule, scoring weight, evidence requirement)
- reason
- linked_divergence_rows
- expected_effect
- owner
- deadline
- validation_window
Do not close divergence lanes until patch rows exist and are assigned.
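A sketch of patch-row creation plus the closure guard; the 14-day deadline and the patch_id format are placeholder policy choices, not fixed conventions:

```python
from datetime import date, timedelta

def open_patch_row(divergence_row: dict, target: str, reason: str, owner: str) -> dict:
    """Create one concrete, assignable patch row for a D2/D3 divergence."""
    return {
        "patch_id": f"patch-{divergence_row['window_id']}-{divergence_row['strategy_packet_id']}",
        "target": target,  # template, policy rule, scoring weight, or evidence requirement
        "reason": reason,
        "linked_divergence_rows": divergence_row["strategy_packet_id"],
        "expected_effect": "reduce divergence class for this recurrence key",
        "owner": owner,
        "deadline": (date.today() + timedelta(days=14)).isoformat(),  # assumed 14-day SLA
        "validation_window": "next",
    }

def closure_eligible(divergence_row: dict, patch_rows: list[dict]) -> bool:
    """D2/D3 rows stay open until an assigned patch row links back to them."""
    if divergence_row["divergence_class"] not in ("D2", "D3"):
        return True
    return any(p["linked_divergence_rows"] == divergence_row["strategy_packet_id"]
               and p["owner"] for p in patch_rows)
```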
8) Enforce signer-visible calibration acknowledgment
Because Lesson 120 established signer accountability, divergence corrections must be signer-visible.
Add required acknowledgment fields:
- signer_ack_required (true for D2/D3)
- signer_ack_at
- signer_ack_note
- signer_ack_hash
If signer acknowledgment is missing on D2/D3, hold next lane approval package.
This is how you stop recurring packet optimism from bypassing governance.
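A minimal acknowledgment record, hashed over canonical JSON so signer_ack_hash stays stable and verifiable across replays; the field set mirrors the columns above:

```python
import hashlib
import json

def signer_ack_hash(signer_id: str, packet_id: str, ack_note: str, ack_at: str) -> str:
    """Stable hash over the acknowledgment fields for the signer ack log."""
    payload = json.dumps(
        {"signer_id": signer_id, "packet_id": packet_id,
         "note": ack_note, "at": ack_at},
        sort_keys=True, separators=(",", ":"),
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def hold_next_approval(row: dict) -> bool:
    """D2/D3 rows without a recorded acknowledgment hold the next lane package."""
    return row["divergence_class"] in ("D2", "D3") and not row.get("signer_ack_hash")
```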
9) Create divergence review meeting template
Run one short meeting per window with fixed sections:
- top divergence rows by severity
- recurrence signals
- patch queue status
- signer acknowledgment status
- next-window calibration readiness
Output one bundle:
- divergence_review_summary.md
- updated patch queue rows
- decision log entry with go/hold for next planning cycle
Keep meeting output machine-readable and auditable.
10) Add CI gate for calibration completeness
Before promoting the next strategy-planning packet batch, enforce checks:
- no open D3 rows without accepted patch plan
- no expired D2 patches
- signer acknowledgment complete for required rows
- policy version bumped when calibration changes merged
If any check fails, CI returns hold_calibration_incomplete.
This prevents release momentum from skipping learning loops.
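A sketch of the gate as one function; status values such as patch_plan_accepted, and the severity field on patch rows, are assumed conventions for your own schemas:

```python
from datetime import date

def calibration_gate(rows: list[dict], patches: list[dict],
                     calibration_merged: bool, policy_version_bumped: bool) -> str:
    """Evaluate the four promotion checks; any failure returns the hold code."""
    open_d3 = any(r["divergence_class"] == "D3"
                  and r.get("status") != "patch_plan_accepted" for r in rows)
    expired_d2 = any(p.get("severity") == "D2" and p.get("status") != "merged"
                     and date.fromisoformat(p["deadline"]) < date.today()
                     for p in patches)
    missing_ack = any(r["divergence_class"] in ("D2", "D3")
                      and not r.get("signer_ack_hash") for r in rows)
    version_gap = calibration_merged and not policy_version_bumped
    if open_d3 or expired_d2 or missing_ack or version_gap:
        return "hold_calibration_incomplete"
    return "ok"
```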
11) Suggested lane artifacts
Use this folder pattern per window:
/governance/divergence/<window_id>/
Files:
- divergence_lane_review.csv
- divergence_scoring_policy.md
- calibration_patch_queue.csv
- signer_ack_log.csv
- divergence_review_summary.md
Store artifact hashes in your Lesson 120-style evidence index so artifacts remain replayable.
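A short sketch of the hashing step, reusing the folder pattern above:

```python
import hashlib
from pathlib import Path

def index_window_artifacts(window_id: str) -> dict[str, str]:
    """Hash each lane artifact so the window's evidence index stays replayable."""
    base = Path(f"/governance/divergence/{window_id}")
    return {p.name: hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(base.glob("*")) if p.is_file()}
```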
12) Failure matrix for reviewers
| Condition | Interpretation | Decision |
|---|---|---|
| D0 majority and no recurrence | model alignment stable | proceed |
| isolated D2 with assigned patch | controlled drift | proceed with watch |
| D2 recurrence without patch progress | structural mismatch | hold comparable approvals |
| any D3 without signer acknowledgment | critical governance breach | hold |
| calibration patch merged but policy version unchanged | continuity gap | hold until versioned |
Use this matrix to keep decisions consistent under pressure.
Implementation walkthrough (small-team friendly)
Step A - Assemble one window data pack
Collect:
- approved strategy packets
- observed release outcomes
- confidence records at approval time
- stabilization timestamps
Build one row per approved lane decision.
Step B - Run scorer and class mapper
Apply normalized scoring function and produce:
- divergence score
- class D0 to D3
- required action route
Export immutable results with row hashes.
Step C - Open patch queue entries
For each D2/D3 row:
- open calibration patch entry
- assign owner and deadline
- link affected templates or policy rules
No verbal-only "we will improve this" notes.
Step D - Run signer acknowledgment pass
For D2/D3, collect signer acknowledgments before approving next comparable lane.
Log signature metadata and hash.
Step E - Gate next-window planning
CI checks divergence completion criteria.
If all pass, allow planning packet promotion. If not, hold.
This sequence usually fits within a lightweight weekly governance rhythm.
Common mistakes to avoid
Mistake: Comparing outcomes without confidence context
Fix: weight divergence by approval-time confidence to expose overconfident misses.
Mistake: Treating all divergence as equal
Fix: use D0-D3 classes with explicit action paths.
Mistake: Closing divergence rows before calibration patches are assigned
Fix: require patch queue row creation for D2/D3 closure eligibility.
Mistake: Ignoring recurrence across windows
Fix: create recurrence keys and threshold-based escalation.
Mistake: Letting signer accountability end at approval
Fix: require signer acknowledgment on material/critical divergence.
FAQ
Is divergence review only for failed releases?
No. Strong releases still need divergence analysis because near-miss patterns often appear before visible failure.
Can we skip confidence weighting for simplicity?
You can start without it, but confidence weighting is strongly recommended because it highlights risky overconfidence patterns that raw deltas hide.
What if observed metrics arrive late?
Mark row status as pending_observation, set deadline, and prevent final closure until required observation fields are complete.
Should every D1 trigger a patch?
Not always. D1 can be monitored first. But recurring D1 on the same decision pattern should escalate to D2-style calibration work.
How many windows are enough for recurrence analysis?
Three windows is a practical starting point for small teams, then expand as data quality improves.
Lesson recap
You now have a cross-window divergence review lane that converts strategy packet outcomes into measurable calibration signals, routes drift through deterministic actions, and blocks future approvals when learning obligations are skipped.
Next lesson teaser
Next, Lesson 122 will wire multi-cohort effectiveness segmentation so calibration patches can be retained for stable cohorts while conditional rollback paths protect unstable cohorts without freezing the full release lane.
See also
- Lesson 120: Strategy-Approval Audit Packet Wiring for Mitigation-Lane Decision Replay and Signer Traceability (2026)
- Lesson 119: Mitigation-Option Simulation Lane Wiring for Blocker Convergence Strategy Planning (2026)