Lesson 11: Performance Profiling and Fixes
Your vertical slice is now feature-complete enough to feel real. This is exactly when performance debt starts to hide under "it runs fine on my machine." In this lesson, you will run a repeatable profiling pass, identify the top bottlenecks, and apply fixes that are safe for a small indie production timeline.
Lesson Objective
By the end of this lesson you will:
- Capture a repeatable performance baseline in Editor and Build
- Identify top issues across CPU, GPU, and memory
- Apply three practical fixes with measurable impact
- Document a lightweight performance regression checklist for every future milestone
Why this matters now
Lesson 10 added polish systems (VFX, camera punch, hit-stop). Those improvements are valuable, but they can silently increase draw calls, overdraw, garbage collection pressure, and frame spikes. Profiling now prevents late-stage panic before launch and makes your next content additions predictable.
Step-by-step workflow
Step 1: Define your test scenarios first
Do not profile random gameplay. Use fixed scenarios so data is comparable:
- Combat stress: multiple enemies, particles, UI updates
- Traversal stress: moving across your biggest level chunk
- Menu stress: transitions between menu, load, gameplay, pause
Run each scenario for 60 to 90 seconds and name your captures consistently.
Pro tip: Record both average and worst-case frame time. Players remember spikes, not averages.
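To keep runs comparable, some teams script the stress scenario instead of playing it by hand. Here is a minimal sketch of that idea; the component name, spawn count, and ring placement are illustrative, not part of this course's codebase:

```csharp
using UnityEngine;

// Hypothetical helper: spawns a fixed combat stress scenario so every
// profiling capture exercises the same workload.
public class CombatStressScenario : MonoBehaviour
{
    [SerializeField] GameObject enemyPrefab;   // assumed enemy prefab reference
    [SerializeField] int enemyCount = 20;      // fixed count for comparability
    [SerializeField] float durationSeconds = 90f;

    void Start()
    {
        for (int i = 0; i < enemyCount; i++)
        {
            // Deterministic ring placement instead of random spawns,
            // so each run produces the same spatial workload.
            float angle = (Mathf.PI * 2f / enemyCount) * i;
            var pos = new Vector3(Mathf.Cos(angle), 0f, Mathf.Sin(angle)) * 10f;
            Instantiate(enemyPrefab, pos, Quaternion.identity);
        }
        Invoke(nameof(EndCapture), durationSeconds);
    }

    void EndCapture()
    {
        Debug.Log("Scenario complete: combat-stress-90s");
    }
}
```

Even this small amount of determinism makes your before/after numbers far easier to trust.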
Step 2: Capture baseline in Editor and Build
Start with Unity Profiler:
- Open Window -> Analysis -> Profiler
- Profile Play Mode in Editor first for fast iteration
- Then make a development build and profile the Player build
Track at minimum:
- Main Thread ms
- Render Thread ms
- GC Alloc per frame
- Draw calls / batches
Create a small baseline table in your notes:
- Scenario
- Editor avg ms / worst ms
- Build avg ms / worst ms
- Biggest observed spike source
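One lightweight way to fill that table is a tiny recorder that tracks average and worst frame time while a scenario runs. This is a sketch under the assumption that you attach it for the duration of a capture; the field and log format are illustrative:

```csharp
using UnityEngine;

// Minimal frame-time recorder: logs average and worst-case ms
// for the current capture when the component is disabled.
public class FrameTimeRecorder : MonoBehaviour
{
    float _totalMs;
    float _worstMs;
    int _frames;

    void Update()
    {
        // unscaledDeltaTime ignores timeScale effects (hit-stop, pause),
        // so the numbers reflect real wall-clock frame cost.
        float ms = Time.unscaledDeltaTime * 1000f;
        _totalMs += ms;
        _frames++;
        if (ms > _worstMs) _worstMs = ms;
    }

    void OnDisable()
    {
        if (_frames > 0)
            Debug.Log($"avg {_totalMs / _frames:F2} ms, worst {_worstMs:F2} ms over {_frames} frames");
    }
}
```

Copy the logged numbers straight into your baseline table after each run.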
Step 3: Find top CPU spikes in Timeline view
In Profiler CPU module:
- Switch to Timeline
- Click on frame spikes over your target threshold
- Expand heavy functions and locate hot paths
Common offenders in indie slices:
- Expensive Update() loops running on too many objects
- Frequent GetComponent calls in per-frame paths
- Repeated string operations and logging in gameplay loops
Fix pattern: cache references, move periodic work to lower frequency ticks, and gate expensive logic by proximity or state.
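That fix pattern looks roughly like this in a Unity script. This is a hedged sketch: the EnemyBrain name, the 0.2-second interval, and the threat-check body are illustrative placeholders:

```csharp
using UnityEngine;

public class EnemyBrain : MonoBehaviour
{
    const float ThreatTickInterval = 0.2f; // run heavy logic 5x/sec, not every frame

    Transform _player;   // cached once, not fetched per frame
    Rigidbody _body;     // cached component reference
    float _nextThreatTick;

    void Awake()
    {
        _body = GetComponent<Rigidbody>();                 // cache instead of per-frame GetComponent
        var playerObj = GameObject.FindWithTag("Player");  // one-time lookup (assumes a "Player" tag)
        if (playerObj != null) _player = playerObj.transform;
    }

    void Update()
    {
        // Cheap per-frame work stays here; expensive checks are gated.
        if (Time.time >= _nextThreatTick)
        {
            _nextThreatTick = Time.time + ThreatTickInterval;
            RunThreatCheck();
        }
    }

    void RunThreatCheck()
    {
        if (_player == null) return;
        // Placeholder for the expensive logic (pathing, line-of-sight, etc.).
        float sqrDist = (_player.position - transform.position).sqrMagnitude;
        // sqrMagnitude avoids a square root when you only compare distances.
        bool inRange = sqrDist < 15f * 15f;
        // ... react to inRange ...
    }
}
```

Offsetting each instance's first tick by a small random delay also spreads the cost so dozens of brains do not all spike on the same frame.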
Step 4: Use Frame Debugger for rendering waste
Open Window -> Analysis -> Frame Debugger and inspect draw sequence:
Look for:
- Too many material variants causing batch breaks
- Layer/sorting mistakes that increase overdraw
- Post-processing enabled in scenes that do not need it
Practical fixes:
- Share materials where possible
- Reduce transparent overlap on UI/VFX layers
- Disable costly effects in non-critical cameras
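On the "share materials" point, a common self-inflicted batch break is a script touching renderer.material, which silently clones the material into a new variant. A sketch of the safer per-renderer override (the _BaseColor property name assumes a URP Lit shader; verify the result in the Frame Debugger, since per-renderer property blocks are not compatible with the SRP Batcher):

```csharp
using UnityEngine;

public class TintWithoutBreakingBatching : MonoBehaviour
{
    static MaterialPropertyBlock _block; // reused to avoid per-call allocation
    static readonly int ColorId = Shader.PropertyToID("_BaseColor"); // assumed shader property

    public void SetTint(Renderer r, Color c)
    {
        // Pitfall: r.material.color = c; clones the material (new variant, broken batch).
        // A MaterialPropertyBlock overrides values per renderer while the
        // shared material asset stays intact.
        if (_block == null) _block = new MaterialPropertyBlock();
        r.GetPropertyBlock(_block);
        _block.SetColor(ColorId, c);
        r.SetPropertyBlock(_block);
    }
}
```

Whichever approach you pick, let the Frame Debugger confirm the batch count actually dropped.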
Step 5: Memory and GC pass
Use Profiler Memory module and watch GC Alloc spikes:
Common sources:
- New lists/arrays created every frame
- LINQ in hot loops
- String concatenation in UI updates
Quick wins:
- Reuse collections
- Avoid per-frame allocations
- Pool short-lived gameplay objects (projectiles, hit sparks, floating text)
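The pooling quick win can lean on Unity's built-in ObjectPool<T> (available since Unity 2021). A minimal sketch; the HitSparkPool name and capacity are illustrative:

```csharp
using UnityEngine;
using UnityEngine.Pool;

public class HitSparkPool : MonoBehaviour
{
    [SerializeField] ParticleSystem sparkPrefab; // assumed hit-spark prefab
    ObjectPool<ParticleSystem> _pool;

    void Awake()
    {
        _pool = new ObjectPool<ParticleSystem>(
            createFunc: () => Instantiate(sparkPrefab),
            actionOnGet: ps => ps.gameObject.SetActive(true),
            actionOnRelease: ps => ps.gameObject.SetActive(false),
            actionOnDestroy: ps => Destroy(ps.gameObject),
            defaultCapacity: 16);
    }

    public void Spawn(Vector3 position)
    {
        var ps = _pool.Get();
        ps.transform.position = position;
        ps.Play();
        StartCoroutine(ReleaseWhenDone(ps)); // return to pool instead of Destroy()
    }

    System.Collections.IEnumerator ReleaseWhenDone(ParticleSystem ps)
    {
        yield return new WaitWhile(() => ps.IsAlive(true));
        _pool.Release(ps);
    }
}
```

Pooling trades a little warm-up memory for the removal of Instantiate/Destroy churn, which is usually the right trade for projectiles and hit effects.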
Step 6: Apply 3 fixes and re-measure
Pick only your highest-impact fixes for this lesson:
- One CPU-side fix
- One render-side fix
- One memory-side fix
Re-run the same 3 scenarios and compare before/after numbers. Document the delta in milliseconds and the change in stability (fewer and smaller spikes).
Example mini-fix set for this course stage
- CPU: Cache enemy target references and run expensive threat checks every 0.2s instead of every frame
- Render: Merge duplicate VFX materials and simplify one heavy fullscreen effect
- Memory: Replace per-hit new list allocation with reused buffers in combat event pipeline
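The "reused buffers" memory fix might look like this. A hedged sketch: the CombatEvents class, the Health component, and the buffer size are illustrative stand-ins for your own combat pipeline:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class CombatEvents : MonoBehaviour
{
    // Allocated once and cleared per use, instead of `new List<...>()`
    // per hit, which shows up as GC Alloc every frame during fights.
    static readonly Collider[] _overlapBuffer = new Collider[32];
    readonly List<Health> _targets = new List<Health>(32);

    public void ResolveHit(Vector3 center, float radius)
    {
        _targets.Clear(); // reuse, don't reallocate

        // Non-allocating physics query writes into the preallocated array.
        int count = Physics.OverlapSphereNonAlloc(center, radius, _overlapBuffer);
        for (int i = 0; i < count; i++)
        {
            if (_overlapBuffer[i].TryGetComponent(out Health h))
                _targets.Add(h);
        }
        // ... apply damage to _targets ...
    }
}
```

After a change like this, confirm in the Profiler that GC Alloc during combat frames actually reached zero (or near it) rather than assuming the fix landed.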
If two fixes conflict with readability or game feel, keep the option that preserves player clarity and target another hotspot.
Mini challenge
Run a "late-fight" stress case with your most chaotic encounter and maintain a stable frame budget for at least 90 seconds. Then write a one-paragraph "what changed" summary with measured improvements and one remaining risk.
Troubleshooting
Profiler shows huge spikes only in Editor
Editor overhead can distort results. Always validate in a development build before deciding.
Numbers fluctuate too much to compare
Your test path is not consistent. Use the same spawn counts, route, and actions each run.
GC spikes persist after obvious fixes
Search for hidden allocations in UI bindings, events, and helper utilities. Deep Profile can help briefly, but do not leave it on for regular runs.
Game feels worse after optimization
You may have over-reduced polish feedback. Reintroduce selective feedback while keeping optimized data flow.
Common mistakes to avoid
- Optimizing without baseline numbers
- Chasing tiny micro-optimizations before fixing big spikes
- Measuring only average FPS and ignoring 1% low behavior
- Applying many changes at once without isolated validation
Pro tips
- Keep a simple performance-notes.md per milestone
- Add a pre-release "performance gate" checklist for every build candidate
- Profile after any major system addition, not only at the end of development
Recap
You now have a repeatable profiling workflow: define scenarios, capture baseline, isolate CPU/GPU/memory issues, apply targeted fixes, and verify improvements with the same test paths. This process keeps your Unity 2026 project stable as you approach launch tasks.
Next lesson teaser
Lesson 12 focuses on playtesting and bug triage. You will convert raw player feedback and issue reports into a practical severity rubric and milestone exit criteria.
FAQ
Should I optimize in Editor or Build first?
Use Editor for fast diagnosis, but make final decisions from Build profiling data.
Do I need DOTS for this course stage?
No. Most slice bottlenecks can be solved with architecture cleanup, batching discipline, and allocation control.
What frame target should I aim for?
Use your platform target and genre expectations, then prioritize stable frame time over headline FPS.