Lesson 12: Automated Playmode Tests for Net Edge Cases - Connect, Disconnect, and Timeout Coverage

Lesson 11 gave you a profiling baseline.
Now you need something stricter than "we tested it once and it seemed fine."

This lesson helps you build a small automated safety net for the ugliest multiplayer regressions: failed joins, cleanup after disconnects, and timeouts that leave the session in a half-broken state.

Lesson objective

By the end of this lesson, you will have:

  1. A short list of edge cases worth automating first
  2. A playmode test harness for host and client lifecycle checks
  3. Assertions for disconnect cleanup and timeout recovery
  4. One repeatable QA gate you can run before friends-and-family builds

Why this matters now

Multiplayer bugs often hide in transitions, not in steady-state gameplay:

  • a player leaves and their avatar never despawns
  • the host drops and the UI stays stuck on "connecting"
  • a late join half-succeeds and leaves stale state in memory
  • retrying a failed connection duplicates managers, listeners, or callbacks

These are expensive bugs because they rarely show up during a happy-path play session.
Automation gives you a boring, repeatable answer to "did we break connection flow again?"

Start with three high-value edge cases

Do not try to automate every theoretical network failure in one pass. Start with:

  1. Client connects successfully and reaches ready state
  2. Client disconnects cleanly and server-side objects/state are released
  3. Connection attempt times out or fails and the UI plus network state reset correctly

If those three are stable, your external test builds become much easier to trust.

Step-by-step workflow

Step 1 - Define the pass conditions in plain language

Before writing a single test, write one sentence for each expected result:

  • "When a client joins, one player object is spawned and marked ready."
  • "When a client disconnects, owned objects despawn and the lobby count decreases."
  • "When a connection fails, the player returns to a safe menu state and can retry."

This keeps your tests focused on behavior instead of framework trivia.

Step 2 - Create a dedicated multiplayer test scene

Use a tiny scene that contains only:

  • NetworkManager
  • the transport config your vertical slice uses
  • minimal connection UI or connection-state controller
  • one lightweight player prefab

Avoid your full gameplay map.
The less unrelated content in the scene, the easier it is to understand failing tests.
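As a sketch, a playmode fixture can load that dedicated scene before each test and force a shutdown afterward. This assumes Unity Netcode for GameObjects and the Unity Test Framework; the scene name is a placeholder for whatever you call your test scene:

```csharp
using System.Collections;
using Unity.Netcode;
using UnityEngine.SceneManagement;
using UnityEngine.TestTools;

public class NetTestSceneFixture
{
    const string TestSceneName = "NetEdgeCaseTestScene"; // placeholder scene name

    [UnitySetUp]
    public IEnumerator LoadTestScene()
    {
        // Load the dedicated test scene so every test starts from a known state.
        yield return SceneManager.LoadSceneAsync(TestSceneName, LoadSceneMode.Single);
    }

    [UnityTearDown]
    public IEnumerator TearDown()
    {
        // Shut the session down even after a failed assertion, so later tests
        // do not inherit a half-open NetworkManager.
        if (NetworkManager.Singleton != null && NetworkManager.Singleton.IsListening)
        {
            NetworkManager.Singleton.Shutdown();
        }
        yield return null;
    }
}
```

The scene must be included in the build settings (or the test scene list) for `LoadSceneAsync` to find it.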

Step 3 - Build a small harness around host and client startup

Your test harness should make three actions cheap:

  1. Start host
  2. Start one client
  3. Stop one side and wait for cleanup

If setup is painful, your team will stop running the tests. Keep the harness boring and reusable.
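The harness referenced in the example test later in this lesson might look roughly like this. It is a sketch assuming Netcode for GameObjects; note that running a host and a separate client in one Editor process usually requires Netcode's multi-instance test utilities or a second NetworkManager, so the `Host` and `Client` fields are placeholders for whatever your project wires up:

```csharp
using System.Collections;
using System.Linq;
using NUnit.Framework;
using Unity.Netcode;
using UnityEngine;

// Sketch of a TestNetBootstrap harness (assumes Netcode for GameObjects).
public static class TestNetBootstrap
{
    // Placeholders: a second in-process peer typically comes from Netcode's
    // multi-instance test helpers or a second NetworkManager your setup creates.
    public static NetworkManager Host;
    public static NetworkManager Client;

    // Count only remote clients; in Netcode the host registers as a client too.
    public static int HostConnectedClientCount =>
        Host.ConnectedClientsIds.Count(id => id != Host.LocalClientId);

    // Count spawned player objects owned by remote clients.
    public static int SpawnedPlayerCountForClients() =>
        Host.SpawnManager.SpawnedObjects.Values
            .Count(o => o.IsPlayerObject && o.OwnerClientId != Host.LocalClientId);

    public static void StartHost() => Host.StartHost();
    public static void StartClient() => Client.StartClient();
    public static void StopClient() => Client.Shutdown();

    public static void StopAll()
    {
        if (Client != null) Client.Shutdown();
        if (Host != null) Host.Shutdown();
    }

    // State-based wait with a ceiling, so a hung connection fails the test
    // loudly instead of stalling the whole run.
    public static IEnumerator WaitForClients(int expected, float timeoutSeconds = 10f)
    {
        float deadline = Time.realtimeSinceStartup + timeoutSeconds;
        while (HostConnectedClientCount < expected)
        {
            if (Time.realtimeSinceStartup > deadline)
                Assert.Fail($"Timed out waiting for {expected} connected client(s).");
            yield return null;
        }
    }

    public static IEnumerator WaitForDisconnectCleanup(float timeoutSeconds = 10f)
    {
        float deadline = Time.realtimeSinceStartup + timeoutSeconds;
        while (HostConnectedClientCount > 0)
        {
            if (Time.realtimeSinceStartup > deadline)
                Assert.Fail("Timed out waiting for disconnect cleanup.");
            yield return null;
        }
    }
}
```

Notice that every wait is tied to observable session state plus a timeout, never a bare fixed delay.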

Step 4 - Assert state transitions, not just method calls

Good multiplayer tests assert outcomes such as:

  • number of connected clients
  • number of spawned player objects
  • whether connection state changed from Connecting to Connected or Failed
  • whether cleanup flags/events fired after disconnect

Avoid weak tests like "this method executed" if they do not prove the session recovered correctly.
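One lightweight way to assert on transitions is to record them as they happen and assert on the recorded sequence afterward. This recorder class is hypothetical; the callbacks are standard Netcode for GameObjects events:

```csharp
using System.Collections.Generic;
using Unity.Netcode;

// Records observed connection transitions so tests can assert on outcomes
// ("did we reach Connected, then Disconnected?") rather than on method calls.
public class ConnectionStateRecorder
{
    public readonly List<string> Transitions = new List<string>();

    public void Attach(NetworkManager manager)
    {
        Transitions.Add("Connecting");
        manager.OnClientConnectedCallback += _ => Transitions.Add("Connected");
        manager.OnClientDisconnectCallback += _ => Transitions.Add("Disconnected");
    }
}

// In a test body:
//   CollectionAssert.Contains(recorder.Transitions, "Connected");
//   Assert.AreEqual("Disconnected", recorder.Transitions[^1]);
```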

Step 5 - Add timeout and retry coverage

A reliable edge-case suite should simulate at least one failed path:

  • wrong address
  • unreachable host
  • forced disconnect during handshake

Then verify:

  • error UI appears
  • stale connection state is cleared
  • retry works without restarting the whole app

This is the kind of bug that slips into external builds when nobody automates it.
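A failed-path test might look roughly like the sketch below, assuming UnityTransport as the transport. The port is deliberately one nothing listens on, and the timeout ceiling depends on your transport's connect-timeout settings:

```csharp
using System.Collections;
using NUnit.Framework;
using Unity.Netcode;
using Unity.Netcode.Transports.UTP;
using UnityEngine;
using UnityEngine.TestTools;

public class ConnectionFailureTests
{
    [UnityTest]
    public IEnumerator FailedConnectLeavesSessionRetryable()
    {
        // Point the client at an address nothing is listening on.
        var transport = NetworkManager.Singleton.GetComponent<UnityTransport>();
        transport.SetConnectionData("127.0.0.1", 59999);

        bool failed = false;
        NetworkManager.Singleton.OnClientDisconnectCallback += _ => failed = true;

        NetworkManager.Singleton.StartClient();

        // State-based wait for the transport's connect timeout, with a ceiling.
        float deadline = Time.realtimeSinceStartup + 20f;
        while (!failed && Time.realtimeSinceStartup < deadline)
            yield return null;

        Assert.IsTrue(failed, "Expected the connection attempt to fail.");
        Assert.IsFalse(NetworkManager.Singleton.IsConnectedClient);

        // The important part: shutdown and a fresh attempt must work in-process.
        NetworkManager.Singleton.Shutdown();
        yield return null;
        Assert.IsFalse(NetworkManager.Singleton.IsListening);
    }
}
```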

Example - minimal playmode lifecycle test

This is intentionally small. The goal is to prove the pattern, not to mirror your full production harness.

using System.Collections;
using NUnit.Framework;
using Unity.Netcode;
using UnityEngine.TestTools;

public class MultiplayerConnectionLifecycleTests
{
    [UnityTest]
    public IEnumerator ClientJoinAndDisconnectCleansUpState()
    {
        // Arrange: one host, one client.
        TestNetBootstrap.StartHost();
        TestNetBootstrap.StartClient();

        // State-based wait: continue only once the host actually sees the client.
        yield return TestNetBootstrap.WaitForClients(1);

        Assert.AreEqual(1, TestNetBootstrap.HostConnectedClientCount);
        Assert.AreEqual(1, TestNetBootstrap.SpawnedPlayerCountForClients());

        // Act: drop the client and wait for server-side cleanup to settle.
        TestNetBootstrap.StopClient();

        yield return TestNetBootstrap.WaitForDisconnectCleanup();

        // Assert: the session is back to a clean, empty state.
        Assert.AreEqual(0, TestNetBootstrap.HostConnectedClientCount);
        Assert.AreEqual(0, TestNetBootstrap.SpawnedPlayerCountForClients());

        TestNetBootstrap.StopAll();
    }
}

Your real harness can wrap:

  • setup and teardown
  • wait helpers for host/client state
  • transport-specific timing
  • scene reload protection

The important part is that the test proves the session returns to a clean state.

Example - connection failure test checklist

For a failed join, verify all of these:

  1. no connected client remains registered
  2. no duplicate NetworkManager is alive after the attempt
  3. the player sees a failure state or retry prompt
  4. a second connection attempt can start cleanly

If your test only checks for one error message, it is too weak.
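The checklist translates into assertions roughly like these. This is a fragment for the end of a failed-join test; `connectionUi` is a placeholder for your own connection-state controller, and the claim that `StartClient` returns `false` when it cannot start is an assumption to verify against your Netcode version:

```csharp
// 1. No connected client remains registered.
Assert.IsFalse(NetworkManager.Singleton.IsConnectedClient);

// 2. No duplicate NetworkManager survived the retry/reload path.
Assert.AreEqual(1, Object.FindObjectsOfType<NetworkManager>().Length);

// 3. The player landed in a visible failure/retry state (hypothetical UI hook).
Assert.IsTrue(connectionUi.ShowsRetryPrompt);

// 4. A second attempt can start cleanly.
Assert.IsTrue(NetworkManager.Singleton.StartClient());
```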

Mini challenge

Create one MultiplayerEdgeCaseMatrix.md note for this course project with three columns:

  1. edge case
  2. automated assertion
  3. manual follow-up test

Seed it with these rows:

  • join succeeds
  • client disconnect cleanup
  • handshake timeout
  • host closes session mid-lobby

That matrix becomes your fast pre-playtest checklist.

Pro tips

  • Keep these tests in a tiny scene so failures isolate quickly.
  • Prefer one host plus one client test path before scaling to more peers.
  • Log build ID or commit hash alongside failures so regressions are traceable.
  • Treat flaky tests as urgent tech debt; ignored netcode tests become decoration fast.

Common mistakes

  • Automating only the happy path and calling it multiplayer coverage.
  • Running edge-case tests in a giant gameplay scene with too many unrelated systems.
  • Forgetting teardown, which creates false positives or cross-test contamination.
  • Asserting on timers alone instead of checking actual network/session state.

Troubleshooting

"The test passes locally but fails randomly in CI."

Your waits are probably too optimistic or teardown is incomplete. Add explicit state-based waits rather than only fixed delays.

"Disconnect cleanup takes longer than expected."

Check whether owned objects, event subscriptions, or singleton state survive the disconnect. Cleanup bugs often live outside transport code.

"Retry after failure still breaks the session."

Look for stale NetworkManager instances, duplicated callbacks, or UI state that never leaves Connecting.

FAQ

Should I automate packet loss simulation right now?

Not first. For this course stage, connection lifecycle and cleanup coverage give you a better payoff than deep transport chaos testing.

Do I need full CI before writing these tests?

No. Write them locally first, prove they catch regressions, then wire them into CI or your pre-release checklist.

How many net edge-case tests are enough for a vertical slice?

Start with 3 to 5 stable tests. A small trustworthy suite beats a large flaky one every time.

Recap

You now have the blueprint for a lightweight automated multiplayer safety net: isolate a test scene, automate host/client lifecycle checks, assert cleanup after disconnects, and verify failed joins recover cleanly.

Next lesson teaser

Lesson 13 moves from automation into platform constraints, including transport choices, Steam P2P vs dedicated-server thinking, and the practical limits that shape your release path.

Final note

Run these tests before every external build and after every networking-heavy merge. They are your cheapest defense against embarrassing session-flow regressions.