Lesson 6: Quality Assurance & Testing Processes
You have a workflow for how work gets done. The next step is defining how you verify that work: how you test games before release, track bugs, and decide when to involve dedicated QA so you ship with confidence instead of hope.
What You'll Learn
By the end of this lesson you will be able to:
- Define a testing approach – what to test, when, and who does it (devs vs QA)
- Create simple test plans – checklists and scenarios so nothing critical is missed
- Set up bug tracking – one place for bugs, clear severity, and ownership
- Decide when to involve QA – internal vs external, and when to hire
- Avoid common pitfalls – no plan, no tracking, or testing too late
Why This Matters
Games that ship with fewer critical bugs usually have a clear way to find, log, and fix issues before release. You do not need a huge QA team to benefit: even a small test plan and a shared bug list help. This lesson helps you design a process that fits your team size and budget.
Step 1: Define What You Test and When
Critical paths first
- List the flows that must work: first-time launch, core loop, save/load, main menu, settings, and any platform-specific requirements (e.g. achievements, cloud save).
- Test these every milestone or before every release. Add regression checks (did we break something we already fixed?) as the project grows.
When to test
- During development: Developers test their own work as they go (smoke tests, quick playthroughs).
- Before milestones: A short focused pass on the critical path and recent changes.
- Before release: A full pass (or multiple passes) with a test plan and bug triage. The earlier you start, the more time you have to fix issues.
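The developer smoke tests above can be partially automated. Below is a minimal sketch of a save/load round-trip smoke check; `save_game` and `load_game` are hypothetical stand-ins (here backed by JSON) for your game's real save system.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical stand-ins for your game's real save/load code.
def save_game(state: dict, path: Path) -> None:
    path.write_text(json.dumps(state))

def load_game(path: Path) -> dict:
    return json.loads(path.read_text())

def smoke_test_save_load() -> bool:
    """Round-trip check: save state, reload it, confirm nothing changed."""
    state = {"level": 3, "hp": 42, "inventory": ["sword", "potion"]}
    with tempfile.TemporaryDirectory() as tmp:
        path = Path(tmp) / "save.json"
        save_game(state, path)
        return load_game(path) == state

print("save/load smoke test:", "PASS" if smoke_test_save_load() else "FAIL")
```

Even one scripted check like this, run on every build, catches the "we broke saving last Tuesday" class of bug before a human ever opens the game.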
Who tests
- Solo or tiny team: Developers do their own testing plus a final pass before release. Use a checklist so you do not skip steps.
- Larger team: Designate someone (or rotate) to do a dedicated test pass before milestones. Optionally bring in external QA for pre-release or full regression.
Pro tip: Start with one checklist (e.g. "Pre-release checklist") that covers install, first run, main loop, save/load, and quit. Expand it as the game grows.
Common mistake: Testing only at the end. Bugs found late cost more to fix. Test early and often, even in rough form.
Step 2: Create a Simple Test Plan
What a test plan is
- A list of test cases or scenarios: "Do X, expect Y." It can be a document, a spreadsheet, or a set of issues in your bug tracker tagged as "test case."
- Focus on the critical path and high-risk areas (new features, recent changes, platform-specific code).
Minimal test plan outline
- Setup – Install, first launch, permissions (e.g. storage, network).
- Core loop – Main gameplay flow from start to end (or one full session).
- Save / load – Save game, quit, reopen, load; confirm state is correct.
- Settings and UI – Main menu, options, key UI flows.
- Platform – Platform-specific features (e.g. achievements, cloud, controller, touch).
- Exit – Clean quit, no crashes or hangs.
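One way to keep a plan like the outline above executable is to store it as data and summarize each manual pass. This is an illustrative sketch; the case IDs and steps are placeholders for your own critical path.

```python
# A minimal pre-release test plan as data: each case is "do X, expect Y".
# Case IDs and steps are illustrative; replace with your own critical path.
TEST_PLAN = [
    {"id": "setup-01", "steps": "Install and launch for the first time", "expect": "Main menu appears"},
    {"id": "core-01",  "steps": "Play one full session of the core loop", "expect": "No crashes, progress kept"},
    {"id": "save-01",  "steps": "Save, quit, reopen, load",               "expect": "State matches the save"},
    {"id": "ui-01",    "steps": "Open settings, change an option, back",  "expect": "Option persists"},
    {"id": "exit-01",  "steps": "Quit from the main menu",                "expect": "Clean exit, no hang"},
]

def report(results: dict[str, bool]) -> str:
    """Summarize a manual test pass: results maps case id -> pass/fail."""
    failed = [cid for cid, ok in results.items() if not ok]
    total = len(results)
    if failed:
        return f"{total - len(failed)}/{total} passed; FAILED: {', '.join(failed)}"
    return f"{total}/{total} passed"

# Example pass where the save/load case failed:
print(report({c["id"]: c["id"] != "save-01" for c in TEST_PLAN}))  # 4/5 passed; FAILED: save-01
```

A spreadsheet works just as well; the point is that each case has an ID, concrete steps, and an expected result, so a failed pass names exactly which check broke.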
Pro tip: Keep the plan short at first. Add cases when you find a bug that would have been caught by a specific check. Over time the plan becomes your regression suite.
Common mistake: A test plan that is too long or vague. Prefer a few clear, executable steps over a long essay.
Step 3: Set Up Bug Tracking
One place for bugs
- Use a single tool or list: Jira, Linear, Notion, GitHub Issues, or a shared spreadsheet. Everyone logs bugs there and no one uses private notes or chat as the only record.
What each bug needs
- Summary – Short description (e.g. "Game crashes when loading save on Android").
- Steps to reproduce – Numbered steps so anyone can reproduce.
- Expected vs actual – What should happen vs what happens.
- Severity – Critical (blocker, crash), major (feature broken), minor (cosmetic, workaround exists). Define what each level means for your team.
- Owner – Who is responsible for fixing or triaging. Can be "unassigned" until triage.
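The fields above map naturally to a record type. This is a hypothetical sketch (your tracker defines its own schema); it shows how the summary, steps, expected vs actual, severity, and owner fit together.

```python
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    P0 = 0  # critical: blocker or crash, must fix before release
    P1 = 1  # major: feature broken
    P2 = 2  # minor: cosmetic, or a workaround exists

@dataclass
class Bug:
    summary: str
    steps: list[str]           # numbered steps so anyone can reproduce
    expected: str
    actual: str
    severity: Severity
    owner: str = "unassigned"  # set at triage

crash = Bug(
    summary="Game crashes when loading save on Android",
    steps=["Start game", "Load existing save"],
    expected="Save loads and gameplay resumes",
    actual="App crashes to home screen",
    severity=Severity.P0,
)
print(crash.severity.name, crash.owner)  # P0 unassigned
```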
Triage
- Regularly (e.g. weekly or before release) go through new bugs: confirm reproducibility, set severity, assign owner. Close duplicates and "won't fix" with a brief reason.
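A triage pass is essentially two queries over the bug list: what is most severe, and what still has no owner. A small illustrative sketch, with dicts standing in for tracker issues:

```python
# Illustrative triage helpers over a shared bug list (dicts stand in
# for real tracker issues).
def triage(bugs: list[dict]) -> list[dict]:
    """Sort by severity so P0s surface first."""
    order = {"P0": 0, "P1": 1, "P2": 2}
    return sorted(bugs, key=lambda b: order[b["severity"]])

def unassigned(bugs: list[dict]) -> list[str]:
    """Flag anything that still has no owner after triage."""
    return [b["summary"] for b in bugs if b.get("owner", "unassigned") == "unassigned"]

bugs = [
    {"summary": "Typo in credits", "severity": "P2", "owner": "sam"},
    {"summary": "Crash loading save", "severity": "P0"},
]
print(triage(bugs)[0]["summary"])  # Crash loading save
print(unassigned(bugs))            # ['Crash loading save']
```

Most trackers give you both views as saved filters; the value of a weekly triage is that someone actually looks at them.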
Pro tip: Use a simple severity scale (e.g. P0/P1/P2) and agree that P0 must be fixed before release. That keeps the bar clear.
Common mistake: Bugs in chat or email only. If it is not in the tracker, it will be forgotten.
Step 4: Decide When to Involve Dedicated QA
When dev-only testing is enough
- Very small teams, small scope, or early prototypes. Developers test their own work and do a final pass with a checklist.
When to add dedicated QA
- When the team or scope grows and developers no longer have time to do full passes.
- When you need coverage across many platforms, devices, or languages.
- When you want an independent view (someone who did not build the feature).
Internal vs external QA
- Internal: Part of your team, knows the project well, can test continuously.
- External: Contract or agency, good for pre-release bursts or specialized testing (e.g. compliance, localization). Clear scope and deliverables (test plan, bug list, sign-off criteria) help.
Pro tip: Even one dedicated tester (internal or contract) for the last few weeks before release can catch a lot of issues. Define what "ready for release" means (e.g. no P0 bugs, critical path signed off) so QA knows when to stop.
Common mistake: Involving QA only in the last week with no test plan or bug process. Give them time, a plan, and a tracker.
Step 5: Integrate QA With Your Workflow
When QA runs
- After a build is ready (e.g. a CI build or a release candidate). QA should not be testing mid-change on a known-broken build unless you explicitly want exploratory or "broken build" testing.
When bugs get fixed
- Developers fix in priority order (e.g. P0 first). QA re-tests after a fix and closes the bug or reopens with new steps. Avoid "fix and forget" without verification.
Sign-off
- Define what "QA sign-off" means: e.g. all P0/P1 fixed and re-verified, critical path passed. Use it as a gate for release so everyone knows when the game is considered ready.
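A sign-off definition like the one above can be written down as an explicit check. This is one possible formulation, not a standard; adjust the rule (here: all P0/P1 fixes re-verified, critical path passed) to whatever your team agrees "ready" means.

```python
def qa_sign_off(open_bugs: list[dict], critical_path_passed: bool) -> tuple[bool, str]:
    """Example release gate: all P0/P1 fixed and re-verified, critical path passed.
    Each bug dict has 'severity' ('P0'/'P1'/'P2') and 'verified' (fix re-tested by QA)."""
    blockers = [b for b in open_bugs
                if b["severity"] in ("P0", "P1") and not b["verified"]]
    if not critical_path_passed:
        return False, "critical path not signed off"
    if blockers:
        return False, f"{len(blockers)} P0/P1 bug(s) not fixed and re-verified"
    return True, "ready for release"

open_bugs = [
    {"severity": "P0", "verified": True},   # fixed and re-tested by QA
    {"severity": "P2", "verified": False},  # minor, does not block
]
print(qa_sign_off(open_bugs, critical_path_passed=True))   # (True, 'ready for release')
print(qa_sign_off(open_bugs, critical_path_passed=False))  # (False, 'critical path not signed off')
```

Because the gate is explicit, "are we ready?" stops being a judgment call in the release meeting and becomes a query over the tracker.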
Pro tip: Add a short "test status" to your standup or weekly sync (e.g. "Last build tested, 3 P1 open, targeting fix by Friday"). That keeps QA visible and avoids last-minute surprises.
Common mistake: Treating QA as a separate phase that happens after development is "done." Weave testing into the loop so issues are found and fixed earlier.
Mini Challenge
- This week: Add one thing – a short pre-release checklist (5–10 items) for your current or last project, or a single shared place where you log bugs (even a doc or spreadsheet). Share it with your team and use it for the next build.
Troubleshooting
"We never have time to test."
Reserve time before each milestone or release. Even a half-day focused pass with a checklist is better than none. Move scope or date if needed so testing is not skipped.
"Bugs get lost."
All bugs go in one tracker with clear owner and severity. Triage regularly so nothing sits unassigned.
"QA finds bugs too late."
Involve QA earlier (e.g. last 2–4 weeks, not last 3 days) and give them a test plan. Test critical path after each major feature lands.
Summary
- Define what you test and when – critical path, milestones, and release; decide who tests (devs vs QA).
- Create a simple test plan – checklist or scenarios for setup, core loop, save/load, UI, platform, and exit.
- Set up bug tracking – one place, clear severity and owner, and regular triage.
- Decide when to involve QA – add dedicated QA when scope or team grows; use internal or external and define scope.
- Integrate QA with your workflow – test after builds, verify fixes, and define sign-off so release is a clear gate.
What's Next
In Lesson 7: Marketing & Community Management, you will define how your studio promotes games and manages community: channels, messaging, and when to hire or outsource marketing so your games reach players.