Lesson 11: Profiler with Networking Counters - Bandwidth, Tick, and GC Spikes

By Lesson 10, your server validation baseline blocks obvious abuse paths.
Now you need hard evidence that your slice stays stable under real session load.

This lesson gives you a repeatable profiling route so performance conversations stop being guesswork.

Lesson objective

By the end of this lesson, you will have:

  1. A profiling scene baseline for host plus 2 clients
  2. A counter checklist for bandwidth, tick, and packet behavior
  3. A Timeline workflow for finding GC and serialization spikes
  4. A lightweight regression gate for future networking changes

Why this matters now

Multiplayer performance issues are often invisible in solo play mode:

  • packet bursts increase latency feel before FPS visibly drops
  • unstable tick pacing causes rubber-banding and delayed hit confirmation
  • garbage collection spikes desync client prediction from authority updates

If you capture these early, Lesson 12 automation becomes much more reliable.

Step-by-step profiling workflow

Step 1 - Lock a reproducible test scenario

Create one scripted 3-minute scenario:

  • same map and spawn points
  • same bot count or scripted movement paths
  • same combat interactions every run

Use this exact scenario each time so counter trends are comparable.
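One way to keep the scenario truly identical run to run is to script bot movement on a fixed schedule. This is a minimal sketch, not a required approach; the class name, waypoint setup, and speed values are all placeholders:

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical sketch: drives one bot along the same waypoints on the same
// schedule every run, so counter trends stay comparable across captures.
public class ScriptedScenarioBot : MonoBehaviour
{
    [SerializeField] private Transform[] waypoints;        // same points every run
    [SerializeField] private float moveSpeed = 3f;
    [SerializeField] private float scenarioDuration = 180f; // the 3-minute scenario

    private IEnumerator Start()
    {
        float elapsed = 0f;
        int index = 0;
        while (elapsed < scenarioDuration && waypoints.Length > 0)
        {
            Transform target = waypoints[index % waypoints.Length];
            while (elapsed < scenarioDuration &&
                   Vector3.Distance(transform.position, target.position) > 0.1f)
            {
                transform.position = Vector3.MoveTowards(
                    transform.position, target.position, moveSpeed * Time.deltaTime);
                elapsed += Time.deltaTime;
                yield return null; // advance one frame
            }
            index++;
        }
    }
}
```

Scripted movement removes input variance, which is usually the biggest source of noise between captures.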

Step 2 - Capture host and client traces separately

Run three roles:

  1. host build
  2. client A
  3. client B

Capture profiler data from the host first, then from each client, using identical scenario timing.
Do not mix traces from different encounter flows.
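To keep the three traces separate, you can have each build write its own binary profiler log via Unity's `Profiler` API. A sketch, assuming you set the role name per launch (the role naming scheme here is just a suggestion):

```csharp
using UnityEngine;
using UnityEngine.Profiling;

// Sketch: each role ("host", "clientA", "clientB") writes its own trace file,
// so host and client captures never mix.
public class RoleTraceCapture : MonoBehaviour
{
    [SerializeField] private string roleName = "host"; // set per build or launch arg

    private void OnEnable()
    {
        Profiler.logFile = $"{Application.persistentDataPath}/{roleName}_trace";
        Profiler.enableBinaryLog = true; // .raw file, loadable in the Profiler window
        Profiler.enabled = true;
    }

    private void OnDisable()
    {
        Profiler.enabled = false;
        Profiler.logFile = "";
    }
}
```

Load the resulting .raw files back into the Profiler window one role at a time when comparing runs.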

Step 3 - Track core networking counters

For each run, record:

  • bytes sent and received per second
  • RPC and NetworkVariable update counts
  • packet resend or reliability pressure indicators
  • average and worst-frame tick duration

Store values in a simple baseline table by build ID.
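One low-friction layout for that baseline table; the column names and build ID scheme are suggestions, not a required schema:

```
build_id   bytes_out/s  bytes_in/s  rpc/s  netvar_updates/s  avg_tick_ms  worst_tick_ms
<id>       <n>          <n>         <n>    <n>               <ms>         <ms>
```

One row per run, keyed by build ID, is enough to spot trends without extra tooling.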

Step 4 - Correlate counters with Timeline spikes

When counters jump, inspect Timeline markers around that moment:

  • serialization work
  • object spawn/despawn bursts
  • allocations followed by GC collections

The goal is to link "network symptom" to "main-thread cause."
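To make that link visible, you can wrap your own serialization-heavy code in a `ProfilerMarker` so it appears as a named span on the Timeline next to GC and spawn events. The marker string and the `WriteState` method below are hypothetical names:

```csharp
using Unity.Profiling;
using UnityEngine;

// Sketch: a named ProfilerMarker makes your serialization work show up on
// the Timeline, so counter spikes can be matched to a main-thread cause.
public class StateSerializer : MonoBehaviour
{
    private static readonly ProfilerMarker s_WriteMarker =
        new ProfilerMarker("Snapshot.WriteState"); // hypothetical marker name

    public void WriteState()
    {
        using (s_WriteMarker.Auto()) // opens/closes the marker around this scope
        {
            // serialization work you want to correlate with counter spikes
        }
    }
}
```

Markers cost almost nothing in builds, so it is safe to leave them in for every capture.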

Step 5 - Define pass/fail thresholds

Set practical guardrails for your slice branch, for example:

  • no sustained bandwidth growth above your target envelope
  • no GC spike over your agreed frame budget in combat windows
  • no repeated tick stalls during objective-heavy moments

If one threshold fails, log it as a release blocker for the current playtest build.
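The thresholds above can be encoded as a tiny gate check that runs over each capture's summary values. A sketch under assumed numbers; the limits here are placeholders you should replace with your team's agreed envelope:

```csharp
// Sketch of a pass/fail gate over one capture summary.
// All threshold values below are placeholder assumptions.
public readonly struct PerfSample
{
    public readonly float BytesOutPerSec;
    public readonly float WorstTickMs;
    public readonly float WorstGcMs;

    public PerfSample(float bytesOut, float worstTick, float worstGc)
    {
        BytesOutPerSec = bytesOut;
        WorstTickMs = worstTick;
        WorstGcMs = worstGc;
    }
}

public static class PerfGate
{
    private const float MaxBytesOutPerSec = 30_000f; // placeholder envelope
    private const float MaxTickMs = 8f;              // placeholder tick budget
    private const float MaxGcMs = 4f;                // placeholder GC budget

    public static bool Passes(PerfSample s) =>
        s.BytesOutPerSec <= MaxBytesOutPerSec &&
        s.WorstTickMs <= MaxTickMs &&
        s.WorstGcMs <= MaxGcMs;
}
```

A failing sample is exactly the release-blocker signal described above, and the same check can later feed the Lesson 12 automation.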

Example - simple counter capture scaffold

using Unity.Netcode;
using UnityEngine;

public class NetPerfSnapshot : MonoBehaviour
{
    private float _sampleTimer;
    private int _samples;

    private void Update()
    {
        // Sample roughly once per second, independent of time scale.
        _sampleTimer += Time.unscaledDeltaTime;
        if (_sampleTimer < 1f) return;
        _sampleTimer = 0f;
        _samples++;

        var netManager = NetworkManager.Singleton;
        if (netManager == null || !netManager.IsListening) return;

        // ConnectedClientsList is only populated on the server/host.
        int connected = netManager.IsServer ? netManager.ConnectedClientsList.Count : -1;
        Debug.Log($"NET_PERF sample={_samples} connected={connected} frame={Time.frameCount}");
    }
}

This script is intentionally minimal.
Pair it with Unity Profiler counters and structured log exports from your headless run.

Mini challenge

Create a multiplayer-profiler-baseline.md file with:

  1. scenario description and test duration
  2. three key network counters with target ranges
  3. one worst-frame GC event and suspected source
  4. one mitigation task for next sprint

Repeat after your next networking feature and compare deltas.

Pro tips

  • Profile builds, not just Editor, before making final optimization calls.
  • Keep one "known-good" baseline trace per milestone.
  • Investigate sudden traffic jumps around spawn waves and objective sync events first.

Common mistakes

  • Comparing traces from different scenarios and calling it a regression.
  • Optimizing rendering systems when network serialization is the real bottleneck.
  • Looking only at average values and ignoring worst-frame behavior.

Troubleshooting

"Bandwidth looks fine, but movement still feels jittery."

Check tick interval stability and packet burst timing, not just total bytes.

"GC spikes appear random across runs."

Verify scene flow is truly deterministic and disable unrelated background systems during capture.

"Profiler data is too noisy to act on."

Limit tracked counters to a short must-watch list and keep scenario duration consistent.

FAQ

Should I optimize host and clients equally right now?

Prioritize host authority stability first, then validate one representative client path.

Do I need dedicated server infra to profile meaningfully?

No. For this stage, local or LAN host-plus-clients traces provide high-value signals.

How often should this profiling pass run?

At minimum, after each networking-heavy feature merge and before every external playtest drop.

Recap

You now have a repeatable profiling loop: fixed scenario, core network counters, Timeline correlation, and pass/fail guardrails tied to build IDs.

Next lesson teaser

Lesson 12 adds automated playmode tests for net edge cases so regressions are caught in CI instead of late QA.

Final note

Treat this as your multiplayer health dashboard: if counters drift, investigate before you add more gameplay complexity.