Lesson 120: Strategy-Approval Audit Packet Wiring for Mitigation-Lane Decision Replay and Signer Traceability (2026)

Direct answer: Build one immutable strategy-approval audit packet per mitigation-lane decision so release teams can replay why a lane was chosen, which signer approved it, what evidence existed at approval time, and whether post-window outcomes matched projected risk.

Why this matters now (2026 release-window governance pressure)

Lesson 119 gave you mitigation-lane simulation and ranking. That solves option comparison, but it does not solve accountability by itself.

In 2026 release operations, teams are increasingly asked:

  • why this lane was approved instead of alternatives
  • whether the approver had complete evidence
  • whether the final outcome matches the claimed risk profile

If your answer is "we discussed it in chat" or "the lead signed off verbally," audit posture collapses fast. You need a deterministic packet with replayable artifacts.

This lesson wires that packet discipline.


What you will produce

  1. lesson120_strategy_approval_packet_schema.yaml
  2. lesson120_signer_traceability_policy.yaml
  3. lesson120_strategy_packet_builder.py
  4. lesson120_strategy_packet_validator.py
  5. lesson120_approval_fail_matrix.csv
  6. strategy-approval/{release_window_id}/SAP-*.json outputs

Prerequisites: Lessons 117-119 (convergence dashboard, SLA forecast bands, and mitigation-option simulation lanes).

1) Define packet identity and scope contract

Create lesson120_strategy_approval_packet_schema.yaml with required top-level fields:

  • strategy_packet_id
  • release_window_id
  • exception_cluster_id
  • selected_lane_id
  • selected_mitigation_option_id
  • ranking_snapshot_id
  • approval_timestamp_utc
  • approval_state
  • signer_roster
  • evidence_bundle
  • decision_rationale
  • post_window_outcome_link
  • packet_sha256

Treat strategy_packet_id as immutable after issue. Any substantive update creates a new revision.
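A minimal completeness check over these top-level fields might look like the sketch below. The field set mirrors the schema above; treating it as a Python constant is an illustrative choice, not part of the schema file itself.

```python
# Required top-level fields from lesson120_strategy_approval_packet_schema.yaml.
REQUIRED_FIELDS = {
    "strategy_packet_id", "release_window_id", "exception_cluster_id",
    "selected_lane_id", "selected_mitigation_option_id", "ranking_snapshot_id",
    "approval_timestamp_utc", "approval_state", "signer_roster",
    "evidence_bundle", "decision_rationale", "post_window_outcome_link",
    "packet_sha256",
}

def missing_fields(packet: dict) -> set:
    """Return the required top-level fields absent from a packet payload."""
    return REQUIRED_FIELDS - packet.keys()
```

An empty result means the packet is structurally complete; anything else should block the draft from advancing.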

2) Wire signer traceability model

Create lesson120_signer_traceability_policy.yaml with signer classes:

  • strategy_owner
  • release_authority
  • risk_governance_reviewer
  • observer_optional

For each signer capture:

  • signer_id
  • role
  • authority_scope
  • signature_timestamp_utc
  • signature_method
  • signature_hash

No signature means no approval state transition to approved.
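That rule can be enforced with a small coverage check. Which roles count as required is a policy decision; the set below (all signer classes except the optional observer) is an assumption for illustration.

```python
# Assumed required roles; observer_optional is deliberately excluded.
REQUIRED_ROLES = {"strategy_owner", "release_authority", "risk_governance_reviewer"}

def roster_covers_required_roles(signer_roster: list) -> bool:
    """True only when every required role has signed.

    A roster entry with an empty signature_hash does not count:
    no signature means no approval state transition to approved.
    """
    signed_roles = {
        s["role"] for s in signer_roster if s.get("signature_hash")
    }
    return REQUIRED_ROLES <= signed_roles
```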

3) Capture decision replay baseline artifacts

Every packet must reference artifacts from Lesson 119:

  • simulation ranking snapshot
  • top-two lane comparison row
  • confidence class and risk overlay
  • fallback decision label

Store by hash and URI, not by "latest dashboard view." Replay must survive dashboard UI drift.

4) Standardize decision rationale fields

Add deterministic rationale keys:

  • selection_primary_reason
  • selection_secondary_reason
  • rejected_lane_1_reason
  • rejected_lane_2_reason
  • known_risk_acceptances
  • required_follow_up_controls

Keep rationale concise and testable. Avoid narrative-only comments that cannot be mapped to evidence rows.
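"Testable" can be taken literally: a validator can reject a packet whose rationale keys are missing or empty, which is what fail-matrix scenario A6 exercises. A sketch, assuming rationale is stored as a flat dict keyed by the names above:

```python
# Deterministic rationale keys from step 4.
RATIONALE_KEYS = {
    "selection_primary_reason", "selection_secondary_reason",
    "rejected_lane_1_reason", "rejected_lane_2_reason",
    "known_risk_acceptances", "required_follow_up_controls",
}

def rationale_complete(decision_rationale: dict) -> bool:
    """All rationale keys must be present with non-empty values."""
    return all(decision_rationale.get(k) for k in RATIONALE_KEYS)
```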

5) Build strategy packet assembler

Implement lesson120_strategy_packet_builder.py:

  1. ingest selected lane from Lesson 119 output
  2. ingest signer roster and authority mapping
  3. ingest evidence hashes and source URIs
  4. compose packet payload using schema
  5. compute packet hash
  6. emit immutable packet revision

Output path pattern:

  • strategy-approval/{release_window_id}/SAP-{exception_cluster_id}-r{revision}.json
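Step 5, computing the packet hash, is where determinism is usually lost. One way to keep it reproducible is to canonicalize the payload before hashing; the exclusion of the hash field from its own input and the specific canonicalization (sorted keys, compact separators) are assumptions of this sketch.

```python
import hashlib
import json

def packet_sha256(payload: dict) -> str:
    """Hash a canonical JSON form of the packet payload.

    packet_sha256 itself is excluded from the hashed body so the stored
    hash can be verified against a deterministic rebuild.
    """
    body = {k: v for k, v in payload.items() if k != "packet_sha256"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Because keys are sorted, two builds that assemble fields in different orders still produce identical hashes.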

6) Add approval state machine

Allowed state transitions:

  • draft -> pending_signatures
  • pending_signatures -> approved
  • pending_signatures -> rejected
  • approved -> superseded (only by higher revision)

Invalid examples:

  • draft -> approved with missing signatures
  • approved -> draft

State machine violations should fail validation immediately.
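The transition table above is small enough to encode directly; a minimal sketch:

```python
# Allowed approval state transitions from step 6.
ALLOWED_TRANSITIONS = {
    ("draft", "pending_signatures"),
    ("pending_signatures", "approved"),
    ("pending_signatures", "rejected"),
    ("approved", "superseded"),  # only via a higher packet revision
}

def can_transition(current: str, target: str) -> bool:
    """Reject anything not explicitly allowed, e.g. draft -> approved."""
    return (current, target) in ALLOWED_TRANSITIONS
```

A whitelist of pairs fails closed: any transition you forgot to model is invalid by default, which matches the fail-immediately posture above.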

7) Attach evidence-bundle continuity checks

evidence_bundle must include:

  • ranking snapshot hash
  • simulation rule version
  • forecast band snapshot hash
  • fail-matrix run id
  • dashboard extract hash

If any dependency hash is missing, mark packet incomplete_evidence and block approval.
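A continuity check over the bundle can return the state label directly; the concrete key names below are assumed spellings of the five items above.

```python
# Assumed evidence_bundle keys corresponding to the five required items.
REQUIRED_EVIDENCE_KEYS = {
    "ranking_snapshot_hash", "simulation_rule_version",
    "forecast_band_snapshot_hash", "fail_matrix_run_id",
    "dashboard_extract_hash",
}

def evidence_state(evidence_bundle: dict) -> str:
    """Return 'complete' or 'incomplete_evidence' per the rule above."""
    missing = [k for k in REQUIRED_EVIDENCE_KEYS if not evidence_bundle.get(k)]
    return "incomplete_evidence" if missing else "complete"
```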

8) Build validator for signer and evidence integrity

Implement lesson120_strategy_packet_validator.py checks:

  1. schema completeness
  2. signer role coverage minimums
  3. signature uniqueness and timestamp ordering
  4. evidence hash presence and format
  5. approved state requires complete signer/evidence conditions
  6. packet hash reproducibility

Run validator before storing packet in approval registry.
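Check 3 (signature uniqueness and timestamp ordering) can be sketched as below. It assumes timestamps are ISO-8601 UTC strings, which sort correctly as plain strings, and that the roster is stored in signing order.

```python
def signature_order_ok(signer_roster: list) -> bool:
    """Validator check 3: unique signer ids, non-decreasing timestamps.

    Assumes signature_timestamp_utc values are ISO-8601 UTC strings,
    so lexicographic comparison matches chronological order.
    """
    ids = [s["signer_id"] for s in signer_roster]
    if len(ids) != len(set(ids)):
        return False  # duplicated signer id
    stamps = [s["signature_timestamp_utc"] for s in signer_roster]
    return all(a <= b for a, b in zip(stamps, stamps[1:]))
```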

9) Define fail matrix

Create lesson120_approval_fail_matrix.csv:

scenario_id,condition,expected_result
A1,approved packet missing release authority signer,fail
A2,signer id duplicated across conflicting roles,fail
A3,selected lane id missing from ranking snapshot,fail
A4,evidence bundle missing forecast snapshot hash,fail
A5,packet hash differs on deterministic rebuild,fail
A6,rejected lanes not documented in rationale fields,fail
A7,complete packet with all signer and evidence constraints,pass
A8,approved packet later superseded by revision with full lineage links,pass

Use this matrix whenever schema or signer policy changes.

10) Wire packet-to-dashboard decision panel

Add a strategy-approval panel showing:

  • packet id and revision
  • selected lane and confidence class
  • signer roster status
  • evidence bundle completeness
  • approval state timeline

This lets reviewers see operational readiness before release approvals.

11) Add post-window outcome traceability

Approval packets are not complete without outcome tracking.

Add fields:

  • actual_outcome_status (met_projection, partial_match, diverged)
  • actual_convergence_hours
  • observed_regression_events
  • outcome_review_timestamp_utc

Link these to the original packet id. Do not overwrite rationale or signatures when outcome data arrives.

12) Add divergence escalation rule

Escalate when:

  • lane marked low regression risk causes critical regression
  • convergence misses high-band forecast by threshold
  • signer approved without required evidence fields

Escalation output:

  • strategy_divergence_incident_id
  • affected_packet_id
  • required_policy_update_action
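The first two escalation triggers can be sketched as a single predicate. The field names (claimed_regression_risk, forecast_high_band_hours) and the threshold default are hypothetical; the third trigger, approval without required evidence, belongs in the validator rather than here.

```python
def needs_escalation(packet: dict, outcome: dict,
                     miss_threshold_hours: float = 12.0) -> bool:
    """Escalate when a 'low risk' lane regressed, or when actual
    convergence overshot the high forecast band by the threshold.

    Field names and the 12-hour default are illustrative assumptions.
    """
    low_risk_regressed = (
        packet.get("claimed_regression_risk") == "low"
        and outcome.get("observed_regression_events", 0) > 0
    )
    # Missing forecast band defaults to +inf, so this branch stays False.
    overshoot = (
        outcome.get("actual_convergence_hours", 0.0)
        - packet.get("forecast_high_band_hours", float("inf"))
    )
    return low_risk_regressed or overshoot > miss_threshold_hours
```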

This keeps lesson-loop governance real instead of symbolic.

13) Enforce storage and retention discipline

Store packet revisions under immutable paths:

  • strategy-approval/{release_window_id}/packets/
  • strategy-approval/{release_window_id}/validation/
  • strategy-approval/{release_window_id}/outcome-reviews/

Retention rules:

  • keep all approved revisions
  • keep superseded revisions with lineage pointer
  • never delete rejected drafts without archive reason code

14) Add human review protocol

Before final approval:

  1. reviewer verifies top-two lane comparison
  2. reviewer verifies signer scopes and authority
  3. reviewer verifies evidence bundle completeness
  4. reviewer verifies rationale references real snapshot ids
  5. reviewer records go/hold recommendation

This prevents checkbox approvals under time pressure.

15) Add CI policy gates

Block merge or release gate when:

  • validator returns fail
  • approval state is approved without complete signer coverage
  • selected lane cannot be replayed from evidence hashes
  • packet hash mismatch on rebuild

CI logs should include packet id and exact failed constraint.

Recommended artifact layout

  • strategy-approval/{release_window_id}/schemas/lesson120_strategy_approval_packet_schema.yaml
  • strategy-approval/{release_window_id}/policies/lesson120_signer_traceability_policy.yaml
  • strategy-approval/{release_window_id}/packets/SAP-{cluster}-r{n}.json
  • strategy-approval/{release_window_id}/validation/SAP-{cluster}-r{n}.log
  • strategy-approval/{release_window_id}/outcome-reviews/SAP-{cluster}-r{n}-outcome.md

Common mistakes to avoid

  • approving with only one signer under emergency pressure
  • storing rationale without rejected-lane reasons
  • linking to mutable dashboards instead of hashed snapshots
  • updating approved packets in place instead of revisioning
  • skipping post-window outcome linkage because launch "succeeded"

Pro tips

  • Assign one signer policy owner per quarter.
  • Require reason codes for all rejected lanes above score threshold.
  • Track approval latency from pending_signatures to approved.
  • Run outcome divergence review weekly during active windows.

Mini challenge (15 minutes)

  1. Take one cluster from Lesson 119 output.
  2. Build draft packet with selected lane and top-two rationale.
  3. Add signer roster with one signer intentionally missing.
  4. Run validator and confirm fail.
  5. Add missing signer and rerun to confirm pass.

If pass/fail transitions are deterministic, your packet wiring is production-ready.

Troubleshooting

Packet hash changes between reruns

Check source ordering and timestamp normalization. Deterministic fields must be sorted and canonicalized before hashing.

Approval stuck in pending_signatures

Inspect signer role requirements. One missing release_authority signature should keep state locked.

Selected lane cannot be replayed

Ranking snapshot id or hash references are missing or stale. Rebuild packet from current validated snapshot only.

FAQ

Is Lesson 120 replacing simulation ranking from Lesson 119?

No. Lesson 119 decides candidate strategy quality. Lesson 120 governs how that decision is approved, signed, replayed, and audited.

Should every lane decision require a full signer roster?

For high-impact release windows, yes. For low-impact lanes, you can define reduced signer policy tiers, but rules must remain explicit and versioned.

Can we use screenshots instead of structured packets?

No. Screenshots are useful context, not authoritative evidence. Approval replay requires structured fields, hashes, and signer lineage.

Lesson recap

You now have strategy-approval audit packet wiring that converts mitigation-lane choices into replayable governance artifacts with signer traceability, evidence continuity, and measurable post-window outcome checks.

Next lesson teaser

Next, Lesson 121 will wire calibration-patch effectiveness verification so divergence fixes are measured against next-window outcomes and retained, adjusted, or rolled back with evidence.
