Lesson 5: RPC vs NetworkVariable Tradeoffs (When to Replicate and Payload Size Habits)

Most Unity multiplayer slices become noisy here: too many RPCs for persistent data, or too many NetworkVariables for one-shot events. Both work, but both can create hidden sync debt if applied everywhere.

In this lesson, you will pick a clear replication rule set so gameplay state stays readable, bandwidth stays predictable, and debugging stays sane.

Lesson objective

By the end of this lesson, you will have:

  1. A simple decision framework for RPC vs NetworkVariable
  2. One cleaned-up replication path for your current gameplay actions
  3. Payload habits that reduce unnecessary traffic
  4. A repeatable test checklist for packet pressure edge cases

Why this matters

Your vertical slice can feel stable with two players and then break with four once packet load increases. Replication decisions made now become expensive to unwind later, especially when gameplay systems multiply.

If Lesson 4 gave you movement authority boundaries, Lesson 5 gives you state replication boundaries.

Core decision framework

Use this default model:

Use NetworkVariable for durable state

Good examples:

  • Current weapon slot
  • Team assignment
  • Health value that should persist for late joiners
  • Match phase enum (warmup, live, overtime)

Think: "What should a late-joining client immediately know?"
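A minimal sketch of the durable-state side, assuming Netcode for GameObjects. The class and field names (MatchState, Phase, Health) are illustrative, not from the lesson's project; the point is that NetworkVariables deliver their current value to late joiners automatically on spawn.

```csharp
using Unity.Netcode;
using UnityEngine;

public enum MatchPhase : byte { Warmup, Live, Overtime }

public class MatchState : NetworkBehaviour
{
    // Everyone can read; only the server may write.
    public NetworkVariable<MatchPhase> Phase = new NetworkVariable<MatchPhase>(
        MatchPhase.Warmup,
        NetworkVariableReadPermission.Everyone,
        NetworkVariableWritePermission.Server);

    public NetworkVariable<int> Health = new NetworkVariable<int>(
        100,
        NetworkVariableReadPermission.Everyone,
        NetworkVariableWritePermission.Server);

    public override void OnNetworkSpawn()
    {
        // A late joiner receives the current value at spawn, then deltas.
        Phase.OnValueChanged += (oldPhase, newPhase) =>
            Debug.Log($"Match phase: {oldPhase} -> {newPhase}");
    }
}
```

Using a byte-backed enum here also pays into the payload habits covered later in this lesson.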

Use RPC for transient events

Good examples:

  • Play muzzle flash now
  • Trigger one hit marker pulse
  • Request server action from input intent
  • Broadcast one-time round-start cue

Think: "What is an event, not a continuously readable state?"
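A sketch of the transient side, again with illustrative names (RoundCues). The RPC fires once and is never stored, so a client who joins later simply never sees it, which is exactly the behavior you want for one-shot cues.

```csharp
using Unity.Netcode;
using UnityEngine;

public class RoundCues : NetworkBehaviour
{
    public void StartRound()
    {
        if (!IsServer) return;          // only the server broadcasts the cue
        PlayRoundStartCueClientRpc();
    }

    [ClientRpc]
    private void PlayRoundStartCueClientRpc()
    {
        // Audio/UI flourish only; no gameplay state is mutated here.
        Debug.Log("Round start!");
    }
}
```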

Common anti-patterns to avoid

Anti-pattern 1 - Spamming RPC per-frame for values

If a value changes continuously and should be observable as state, forcing every update through an RPC is wasteful, and because RPCs are not replayed on connect, late joiners have no way to recover the current value.

Anti-pattern 2 - Packing one-shot events into NetworkVariables

Event-like data in a variable can be missed, overwritten, or replayed in unintended ways when clients reconnect or deserialize late.

Anti-pattern 3 - Sending oversized payload structs

Large nested payloads for tiny events increase traffic and GC risk. Start minimal, then add only what is proven necessary.

Step-by-step implementation pass

Step 1 - Audit one gameplay loop

Choose a single loop from your slice (example: fire weapon -> apply damage -> update HUD).

For each replicated signal, label it:

  • state (durable)
  • event (transient)
  • request (client asks server)

Step 2 - Convert one mixed signal path

If you currently broadcast everything as RPCs, migrate one durable value (for example ammoInMagazine) to a NetworkVariable.

If you currently overuse variables for events, move one one-shot cue (for example playReloadFx) to RPC.

Do one path at a time so regressions are traceable.

Step 3 - Apply payload-size habits

Adopt these habits immediately:

  • Avoid strings in high-frequency paths where IDs work
  • Use compact enums/bytes for small category values
  • Send only changed fields, not full object snapshots
  • Keep event payloads purpose-specific
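The habits above can be sketched as one compact event payload, assuming Netcode's INetworkSerializable. The type names (HitKind, HitEvent) are illustrative; a byte-backed enum plus a ushort stands in for what might otherwise be a string key and a full struct snapshot.

```csharp
using Unity.Netcode;

public enum HitKind : byte { Body, Head, Shield }

public struct HitEvent : INetworkSerializable
{
    public HitKind Kind;    // 1 byte instead of a string like "headshot"
    public ushort Damage;   // 2 bytes; ample range for a damage value

    public void NetworkSerialize<T>(BufferSerializer<T> serializer)
        where T : IReaderWriter
    {
        serializer.SerializeValue(ref Kind);
        serializer.SerializeValue(ref Damage);
    }
}
```

Three bytes per event instead of a serialized string and nested fields; the same shape scales to any purpose-specific cue.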

Step 4 - Add authority checks at boundaries

For request RPCs:

  • Client requests
  • Server validates
  • Server mutates durable state

Never trust request payloads blindly, even in friends-and-family builds.
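The request -> validate -> mutate pattern above can be sketched like this. The cooldown value and class name (InteractServer) are illustrative assumptions, not from the lesson; the key moves are identifying the sender from trusted transport data and validating before touching durable state.

```csharp
using Unity.Netcode;
using UnityEngine;

public class InteractServer : NetworkBehaviour
{
    private float lastInteractTime;

    [ServerRpc] // owner-only by default; the body runs on the server
    public void RequestInteractServerRpc(ServerRpcParams rpcParams = default)
    {
        // 1. Identify the sender from transport data, not the payload.
        ulong senderId = rpcParams.Receive.SenderClientId;

        // 2. Validate on the server before mutating anything.
        if (Time.time - lastInteractTime < 0.25f) return; // basic rate limit

        // 3. Only now mutate server-authoritative state.
        lastInteractTime = Time.time;
        Debug.Log($"Client {senderId} interacted.");
    }
}
```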

Step 5 - Run replication pressure test

Repeat this checklist:

  1. Host + one client run normal actions for 3 minutes
  2. Trigger rapid action bursts (reload, fire, interact spam)
  3. Join an extra client mid-session (if supported)
  4. Reconnect one client and verify durable state accuracy
  5. Inspect logs for dropped/malformed action handling

Capture at least one "before vs after" bandwidth snapshot if your tooling allows it.

Example pattern - state via NetworkVariable, action via RPC

using Unity.Netcode;
using UnityEngine;

public class WeaponReplicator : NetworkBehaviour
{
    // Durable state: everyone reads, only the server writes,
    // so late joiners receive the current ammo count on spawn.
    public NetworkVariable<int> AmmoInMag = new NetworkVariable<int>(
        30, NetworkVariableReadPermission.Everyone, NetworkVariableWritePermission.Server
    );

    [ServerRpc] // owner-only by default; the body runs on the server
    public void RequestFireServerRpc(ServerRpcParams rpcParams = default)
    {
        // Validate before mutating durable state.
        if (AmmoInMag.Value <= 0) return;

        AmmoInMag.Value -= 1;   // server-authoritative mutation
        PlayFireFxClientRpc();  // transient cue, never stored
    }

    [ClientRpc]
    private void PlayFireFxClientRpc()
    {
        // Visual + audio one-shot only; no gameplay state changes here.
    }
}

This keeps persistent ammo as state while firing effects stay transient.

Mini challenge

Create replication-decision-log.md with three columns:

  1. Signal name
  2. Chosen transport (RPC or NetworkVariable)
  3. Reason (event vs durable state)

Then include one payload reduction you made (for example enum instead of string key).

If another teammate can read your file and predict your runtime behavior, your model is clear enough.

Pro tips

  • Build a short "replication glossary" in your repo so team decisions stay consistent.
  • Prefix request RPC methods with Request to signal validation expectations.
  • Keep read/write permissions explicit on every NetworkVariable.

Common mistakes

  • Using client-write NetworkVariables for security-sensitive state
  • Broadcasting effect events as durable state
  • Rebuilding full structs for tiny updates

Troubleshooting

"Late joiners miss key state."

That state likely belongs in NetworkVariables (or another durable sync path), not event-only RPCs.

"Clients disagree on ammo or score."

Recheck write permissions and ensure the server is sole authority for state mutation.

"Bandwidth spikes during action-heavy moments."

Audit payload fields and event frequency; remove non-essential data first.

Recap

You now have a clear rule set: persistent gameplay truth through NetworkVariables, transient moments through RPCs, and tighter payload habits for stable growth.

Next lesson teaser

Lesson 6 covers scene management with networking, where many slices fail during map transitions, object respawn timing, and ownership continuity.

Closing note

Bookmark this lesson before Lesson 6. Good replication boundaries make every later netcode decision easier.