Case study May 11, 2026

From 14-Minute Unity Builds to 90-Second Iteration - Asset Import Worker + Accelerator Cache Server Indie Pipeline Case Study (2026)

2026 Unity 6.6 LTS case study - Asset Import Worker, Accelerator Cache Server, Library/PackageCache hygiene, IL2CPP per-platform cache, and Build Profile splits combined to deliver 4-9x iteration speedups for a representative 3-person indie team.

By GamineAI Team

Dream House illustration - hero image for the 2026 Unity 6.6 LTS Asset Import Worker + Accelerator Cache Server indie case study

Why this matters now

The iteration pain of indie Unity development in 2026 is not what it was in 2023. Unity 6.6 LTS matured the Asset Import Worker (the parallel asset-import lane that became default-on for new projects in 6.6.4f1) and the Accelerator Cache Server (Unity's on-premise/cloud cache that lets machines skip re-importing assets another machine has already imported), and the Accelerator's pricing model stabilized for small teams in early 2026. Before mid-2025, indie teams that tried to wire both up usually either hit edge-case import failures or ran the cache server in such a leaky way that the speed gain evaporated within a week.

By Q2 2026, three things changed simultaneously: the Import Worker's failure modes became well-documented in Unity Discussions; the Accelerator Cache Server got a credible Docker path that small teams could host on a spare workstation or a $5-10/mo VPS; and Unity's 6.6 LTS branch absorbed enough cache-invalidation fixes that running the two together stopped being a heroic feat. Indie teams that resisted the upgrade because "we got burned in 2024" are leaving 4-9x iteration speedups on the table while their competitors ship demos faster.

This post is a composite case study of a representative 3-person indie team running a mid-size 2D RPG on Unity 6.6.4f1. The team's name is changed and a few specifics are aggregated across multiple real teams, but every measurement in this article is anchored to a real Unity 6.6 LTS configuration that a small team can reproduce. By the end of this post you will know exactly which seven changes mattered, which two are easy to misconfigure, and how to maintain the speedup once you have it.

If you have not yet upgraded to Unity 6.6 LTS, queue the Unity 6.6 LTS upgrade safety sprint first - then come back here. If your build time is dominated by shader compilation rather than asset import, start with Shader Variant Explosion Unity 6 Build Time Triage Flow and then come back here. The two pieces compose well.

The team profile

The case study team:

  • Three people: a tech artist (also handling pipeline), a programmer, and a designer who scripts in C#.
  • Unity 6.6.4f1 LTS with URP 17.0.x and Addressables 2.4.x.
  • A mid-size 2D RPG, ~12 GB Library/ folder at baseline, ~3.5 GB Assets/, ~22,000 asset files total including a sprite atlas pipeline, dialogue ScriptableObjects, audio banks via FMOD Studio, and a small Animator Controller library.
  • Targets: Windows, macOS, Steam Deck (Linux), and an Android demo branch.
  • CI runs on GitHub Actions using game-ci/unity-builder@v4 with a self-hosted runner.
  • Local hardware: 2× M2 MacBook Pro 32 GB, 1× Windows desktop with 32 GB and an 8-core Ryzen.

Their iteration loop before the intervention: edit C# → save → wait for the compile and domain reload → press Play in the Editor → wait → test. The team measured the median "Save to Play" time at 38 seconds under steady state, and a clean Editor open at 2 minutes 50 seconds. A full Windows IL2CPP build from a clean Library/ on the Windows desktop measured 14 minutes 18 seconds. They had been treating 14-minute builds as "the cost of doing business."

The trigger that prompted the intervention: their lead programmer ran a clean build on a Friday afternoon, the build hit 14 minutes 47 seconds, and during the wait the team realized they were running at least one full clean build per week plus two to three CI re-imports per day. That was conservatively 90 minutes per developer per week lost to wait time. For a 3-person team at an effective $50/hour all-in cost, that was roughly $225/week of unrealized productivity, or about $12k/year. The Accelerator Cache Server hosting bill is $5-10/month. The math did not need a spreadsheet.

The starting baseline (five measurements)

Before changing anything, the team measured five specific numbers. Without this baseline they could not have decomposed the speedup later.

  1. Clean Editor open (delete Library/, open project): 2 min 50 sec median over 3 runs.
  2. Steady-state Save to Play (modify one C# file, hit Play): 38 sec median over 10 runs.
  3. Full asset re-import (right-click Assets/, "Reimport All"): 9 min 12 sec.
  4. Full Windows IL2CPP build from clean: 14 min 18 sec.
  5. CI clean build on GitHub Actions self-hosted runner: 12 min 41 sec (faster than local because of NVMe).

They wrote these to a pipeline/baseline-2026-04-12.md file in the repo. Skipping this step is the single most common mistake teams make - they intervene, feel faster, but cannot defend the improvement to their producer or repeat it on a new machine.

The seven changes that mattered

Each change was made and measured independently. The team did not bundle changes; they wanted to know which change contributed what. The order below is the order they actually applied, chosen so that earlier changes did not invalidate later measurements.

Change 1 - Enable Asset Import Worker explicitly

Unity 6.6 ships Asset Import Worker default-on for new projects but not for projects upgraded from older 6.x branches. The team's project was a 2023 LTS migration, so the worker was off by default.

Open Edit → Preferences → Asset Pipeline → Asset Import Workers and set the worker count to Cores - 1. On the 8-core Ryzen this meant 7. On the M2 MBPs with 8 performance cores plus efficiency cores, the team set it to 6 to leave headroom for the OS.
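
If you want to pin the setting in code so a new machine cannot silently fall back to the default, an editor script along these lines can do it - a minimal sketch, assuming the AssetDatabase.DesiredWorkerCount API behaves in 6.6 as it does in recent Editor versions; the class name is illustrative:

using UnityEditor;
using UnityEngine;

// Illustrative editor utility (place under an Editor/ folder): pins the import
// worker count to Cores - 1 on domain reload so a fresh machine does not
// silently fall back to the default. Adjust the count on Apple Silicon.
[InitializeOnLoad]
internal static class ImportWorkerCountEnforcer
{
    static ImportWorkerCountEnforcer()
    {
        int desired = Mathf.Max(1, SystemInfo.processorCount - 1);
        if (AssetDatabase.DesiredWorkerCount != desired)
        {
            AssetDatabase.DesiredWorkerCount = desired;
            // Apply immediately instead of waiting for the next import batch.
            AssetDatabase.ForceToDesiredWorkerCount();
            Debug.Log($"Asset Import Worker count set to {desired}.");
        }
    }
}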

Verification: Watch the Editor's bottom-right import progress bar during a re-import; you should see a worker count next to the asset names.

Measurement: Full asset re-import dropped from 9 min 12 sec to 4 min 31 sec.

Change 2 - Stand up Accelerator Cache Server in Docker

The team ran the official Unity Accelerator Docker image on the Windows desktop (which was always on for the team's Plastic SCM mirror). Configuration:

  • Allocate 60 GB to the cache mount.
  • Bind it to the team's LAN on the office network and tunnel via Tailscale for remote work.
  • Set the cache lifetime to 30 days.

In Unity, open Edit → Preferences → Cache Server and point at the Accelerator instance with namespace rpg-main. Test the connection.
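
The Preferences entry is per-user, so it can drift between machines. The same settings can also be pinned at the project level from an editor script - a minimal sketch, assuming the EditorSettings cache server properties exposed in recent Editor versions; the endpoint address and menu path are illustrative:

using UnityEditor;
using UnityEngine;

// Illustrative: pin the Accelerator endpoint at the project level so per-user
// Preferences cannot drift between machines. The address below is an example.
internal static class AcceleratorProjectSettings
{
    [MenuItem("Tools/Pipeline/Configure Accelerator Cache Server")]
    private static void Configure()
    {
        EditorSettings.cacheServerMode = CacheServerMode.Enabled;
        EditorSettings.cacheServerEndpoint = "192.168.1.20:10080"; // example LAN address and port
        EditorSettings.cacheServerNamespacePrefix = "rpg-main";
        EditorSettings.cacheServerEnableDownload = true;
        EditorSettings.cacheServerEnableUpload = true;
        Debug.Log("Accelerator cache server settings applied to this project.");
    }
}

With the mode set to Enabled at the project level, the project's endpoint takes precedence over whatever an individual machine has configured in Preferences.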

Verification: Delete Library/ on a developer machine, open the project, and watch the import progress: cached assets fly in within seconds rather than reimporting.

Measurement: Clean Editor open (after the cache had warmed once) dropped from 2 min 50 sec to 28 sec on a developer machine. The first developer to open after deletion still paid the import cost; everyone after them downloaded from the cache.

Change 3 - Library/ and PackageCache/ hygiene

The team had inherited a few habits from the 2023 LTS era that no longer made sense:

  • .gitignore was missing Library/, Logs/, Temp/, Obj/, and crucially *.csproj and *.sln generated files. Two of those were being committed by mistake on Windows because the global .gitignore had drifted.
  • The PackageCache/ folder was being copied across machines via rsync to "speed up package resolution" - which actually corrupted the cache on version drifts.
  • There was a 4.2 GB legacy Resources/ folder containing assets that had been moved to Addressables but never deleted.

Three changes:

  • Restore a canonical .gitignore for Unity 6.6 projects (Unity provides one in the Hub when creating a new project; copy it).
  • Stop syncing PackageCache/ between machines. Let Unity resolve it from manifest.json per machine.
  • Delete the legacy 4.2 GB Resources/ folder after confirming nothing in the Addressables groups still pointed at it via Assets/AddressableAssetsData/AssetGroups/*.asset searches (a scripted version of that check is sketched after this list).
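
The Addressables confirmation can also be scripted instead of grepping the group assets by hand - a minimal sketch, assuming the Addressables editor API; Assets/Resources is the legacy path from this case study and the menu path is illustrative:

using System.Linq;
using UnityEditor;
using UnityEditor.AddressableAssets;
using UnityEngine;

// Illustrative check: list any Addressables entries that still point into the
// legacy Resources/ folder before deleting it.
internal static class LegacyResourcesAudit
{
    private const string LegacyPath = "Assets/Resources"; // the case-study legacy folder

    [MenuItem("Tools/Pipeline/Audit Legacy Resources References")]
    private static void Audit()
    {
        var settings = AddressableAssetSettingsDefaultObject.Settings;
        if (settings == null)
        {
            Debug.LogWarning("No Addressables settings found in this project.");
            return;
        }

        var offenders = settings.groups
            .Where(g => g != null)
            .SelectMany(g => g.entries)
            .Where(e => e.AssetPath.StartsWith(LegacyPath))
            .ToList();

        if (offenders.Count == 0)
            Debug.Log("No Addressables entries reference the legacy folder; safe to delete.");
        else
            foreach (var entry in offenders)
                Debug.LogWarning($"Still referenced by Addressables: {entry.AssetPath}");
    }
}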

Measurement: Full Windows IL2CPP build from clean dropped from 14 min 18 sec to 10 min 04 sec (anything under Resources/ is pulled into every player build, so the legacy 4.2 GB folder had been imported and packed on every build even though nothing referenced it).

Change 4 - AssetDatabase batching for procedural imports

The team's tech artist had written an editor script that programmatically created Sprite assets from spritesheets at import time. The script called AssetDatabase.CreateAsset once per sprite, which triggered a refresh per sprite - approximately 1200 refresh calls per spritesheet import.

The fix was a two-line wrap:

// Defer imports until StopAssetEditing so the creates below are processed as one batch.
AssetDatabase.StartAssetEditing();
try
{
    foreach (var sprite in sprites)
    {
        AssetDatabase.CreateAsset(sprite, GetPath(sprite));
    }
}
finally
{
    // Always resume, even if a create throws, or the Editor stays in the batched state.
    AssetDatabase.StopAssetEditing();
}

Unity batches refreshes between the start and stop calls. This single change accelerated spritesheet imports from approximately 6 seconds per spritesheet to 0.4 seconds per spritesheet.

Verification: The team added a small using var scope = new AssetDatabaseBatchScope(); helper to enforce the pattern across all of their editor tooling.
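
The helper itself is only a few lines. A sketch of what such a disposable scope can look like - the AssetDatabaseBatchScope name comes from the team's tooling, the body here is illustrative:

using System;
using UnityEditor;

// Illustrative disposable wrapper so every editor tool batches asset edits the
// same way: using var scope = new AssetDatabaseBatchScope();
public sealed class AssetDatabaseBatchScope : IDisposable
{
    public AssetDatabaseBatchScope()
    {
        AssetDatabase.StartAssetEditing();
    }

    public void Dispose()
    {
        // Always paired with StartAssetEditing, even if the calling tool throws.
        AssetDatabase.StopAssetEditing();
    }
}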

Measurement: Steady-state Save to Play dropped from 38 sec to 17 sec because the AssetDatabase scope catches not just spritesheet imports but every editor automation step that touches multiple assets.

Change 5 - IL2CPP per-platform cache

Unity 6.6's IL2CPP cache lives in a Library/Bee/PlatformIL2CPPCache/ folder that survives between builds for the same platform but gets invalidated when you switch targets. The team had been switching between the Windows and Steam Deck targets multiple times per day to test builds, and each switch invalidated the IL2CPP cache for the target they were switching away from.

The fix was to set up two parallel project folders on the Windows desktop: one always pointed at the Windows target, one always pointed at the Steam Deck target. They created them as git worktrees of the same clone, so both tracked the same remote branch while each kept its own Library/Bee/PlatformIL2CPPCache/.
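
The worktree setup is a couple of commands - a sketch with illustrative folder and branch names. Note that git refuses to check out one branch in two worktrees, so the second folder gets its own local branch tracking the same remote branch:

# From the existing clone (the Windows-target folder); names are illustrative.
git worktree add --track -b main-steamdeck ../rpg-steamdeck origin/main

# Open ../rpg-steamdeck in an Editor instance pinned to the Steam Deck target;
# it shares the repository objects but keeps its own Library/Bee/PlatformIL2CPPCache/.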

Measurement: Full Windows IL2CPP build from clean dropped from 10 min 04 sec to 6 min 22 sec when the IL2CPP cache was warm; full Steam Deck IL2CPP build dropped from 11 min 30 sec to 7 min 11 sec.

Change 6 - Build Profile split per platform

Unity 6 introduced Build Profiles as a first-class API. The team had not migrated; they were still using the legacy single Build Settings dialog and toggling platform target manually. Build Profiles let you store per-platform scripting defines, scene lists, Addressables profile selection, and texture compression settings as named assets that you can apply atomically.

The team created seven Build Profiles:

  • Windows-Dev (Mono backend, Development Build on, light texture compression)
  • Windows-Release (IL2CPP backend, Development Build off, full texture compression)
  • Mac-Dev, Mac-Release
  • SteamDeck-Dev, SteamDeck-Release
  • Android-Demo (separate scripting defines, separate scene list for the demo)
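
Because profiles are assets with a scripting API, a profile can also be activated and built from a batch-mode method instead of the dialog - a minimal sketch, assuming Unity 6's BuildProfile API; the asset path, output path, and method name are illustrative:

using UnityEditor;
using UnityEditor.Build.Profile;
using UnityEngine;

// Illustrative batch-mode entry point: build with a named Build Profile asset
// instead of toggling settings in the Build Settings dialog.
internal static class ProfileBuilds
{
    public static void BuildWindowsRelease()
    {
        var profile = AssetDatabase.LoadAssetAtPath<BuildProfile>(
            "Assets/Settings/Build Profiles/Windows-Release.asset"); // illustrative path
        if (profile == null)
        {
            Debug.LogError("Windows-Release build profile not found.");
            return;
        }

        BuildProfile.SetActiveBuildProfile(profile);

        var report = BuildPipeline.BuildPlayer(new BuildPlayerWithProfileOptions
        {
            buildProfile = profile,
            locationPathName = "Builds/Windows/RPG.exe", // illustrative output path
            options = BuildOptions.None
        });
        Debug.Log($"Build finished: {report.summary.result}");
    }
}

Invoked via -executeMethod from the command line, a method like this keeps per-platform settings out of per-developer muscle memory.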

This change did not directly cut build time, but it eliminated a class of "I forgot to flip a setting" rebuilds that had been costing the team ~3 unnecessary rebuilds per week.

Measurement: Saved roughly 30 minutes per week per developer in avoided rebuilds. The team also pinned this in a pipeline/build-profile-conventions.md doc.

Change 7 - CI-side cache sharing

The GitHub Actions self-hosted runner was discarding its Library/ between every CI run because actions/checkout@v4 was set to clean the workspace by default. The team kept the clean (for correctness) but cached Library/ between runs:

- uses: actions/cache@v4
  with:
    path: |
      Library
      Library/Bee
    key: Library-${{ runner.os }}-${{ hashFiles('Packages/manifest.json', 'ProjectSettings/**/*.asset') }}
    restore-keys: |
      Library-${{ runner.os }}-

The key combines runner.os with a hash of Packages/manifest.json and the ProjectSettings/ assets, so the cache invalidates safely when package versions or project settings change but is reused across normal feature branches.

Verification: Compare CI logs across two consecutive PRs. The second PR should see Cache restored from key: Library-Linux-<hash> near the start of the build job.

Measurement: CI clean build on the self-hosted runner dropped from 12 min 41 sec to roughly 4 min 18 sec on cache-miss runs and to 1 min 32 sec once the Library cache was warm. The Asset Import Worker still handled any newly added assets on top of the restored cache, so a partially stale cache degraded gracefully instead of forcing a full re-import.

The seven measurements after

The team re-measured the same five baselines plus two new metrics that mattered now that builds were faster:

  1. Clean Editor open (warm Accelerator cache): 2 min 50 sec → 28 sec (~6x faster).
  2. Steady-state Save to Play: 38 sec → 17 sec (~2.2x faster).
  3. Full asset re-import: 9 min 12 sec → 4 min 31 sec (~2x faster; this stayed the least-improved metric because a forced re-import is exactly the work the cache cannot serve - the Import Worker still has to do it).
  4. Full Windows IL2CPP build from clean: 14 min 18 sec → 6 min 22 sec (~2.2x faster locally).
  5. CI clean build on self-hosted runner: 12 min 41 sec → 1 min 32 sec (~8.3x faster with warm cache; 2.9x faster even on cache-miss runs).
  6. New metric: Plastic SCM checkout-to-Play on a fresh machine: 18 min → 2 min 14 sec (a brand-new contributor or a CI ephemeral runner gets to a playable Editor state in under 3 minutes).
  7. New metric: Daily aggregate wait time per developer: roughly 18 min/day → 4 min/day based on tracking the previous five metrics weighted by frequency.

The composite headline of "14 minutes to 90 seconds" comes from the CI cache-warm clean build number plus a Save-to-Play loop that no longer felt like waiting at all. Different developers felt different parts of the speedup most viscerally; the team agreed it was the cumulative effect of the whole stack that changed how the work felt day to day.

Decomposing the speedup - which change contributed what

Putting the per-change measurements on the same axis:

  • Change 1 (Asset Import Worker): ~2x speedup on import-bound work.
  • Change 2 (Accelerator Cache Server): ~6x speedup on clean Editor open after the cache warms; no speedup on the first import after a deletion, since that machine still pays the full cost and populates the cache for everyone else.
  • Change 3 (Library/PackageCache hygiene + delete legacy Resources/): ~1.4x speedup on full builds; large variance based on how much legacy folder cruft each team has.
  • Change 4 (AssetDatabase batching): ~15x speedup on procedural imports; ~2x speedup on Save-to-Play because procedural imports often run during code reloads.
  • Change 5 (IL2CPP per-platform cache): ~1.6x speedup on platform-target builds.
  • Change 6 (Build Profile split): no direct speedup; meaningful indirect speedup via fewer unnecessary rebuilds.
  • Change 7 (CI Library cache): ~3-8x speedup on CI depending on cache hit rate.

The multiplicative compounding is where the headline 4-9x comes from. None of these changes individually is dramatic; the stack is dramatic.

Common mistakes that erased gains

The team also tracked what did not work, so future teams can skip the dead ends:

  • Storing Library/ in Plastic SCM "for backup": catastrophic. The folder is meant to be regenerated. Storing it inflates the repo, slows down clone, and serves stale data to fresh checkouts.
  • Sharing PackageCache/ between machines via rsync: corrupted the cache on version drifts. Stop. Each machine resolves from manifest.json.
  • Setting Asset Import Worker count to N (where N = full core count): starves the OS and the Editor's main thread; iteration slows under heavy worker load.
  • Running Accelerator on a developer laptop: defeats the purpose. Run it on a desktop, a NAS, or a small VPS.
  • Caching Library/ in CI without a proper invalidation key: silently serves stale Library/ across breaking package upgrades. Always include Packages/manifest.json and ProjectSettings/ hashes in the cache key.
  • Treating Build Profiles as "just the UI": the API lets you script profile creation and avoid drift between developer machines. Use it.
  • Skipping the baseline measurement: you cannot defend the speedup to your producer six months later.

Pro tips for holding the speed gain

Speed gains decay if you do not maintain them. The team set up these guardrails:

  1. Quarterly re-measurement. Run the five baseline measurements every quarter and write to pipeline/baseline-<date>.md. If a metric regressed more than 25%, investigate before it compounds.
  2. CI minute budget. Set a budget for CI minutes per week and post the trend in the team channel. Spikes catch bad caching changes early.
  3. Accelerator cache health check. Once a month, check the Accelerator's hit rate. A hit rate below 70% means the cache is fragmenting; bump the namespace or extend the lifetime.
  4. Codify Build Profiles in PR review. New scripting defines must land via a Build Profile change, not via the Build Settings dialog. PR review enforces this.
  5. One pipeline owner. Rotate but always have one named person responsible for the pipeline this quarter. Shared ownership lets pipeline rot creep back.
  6. Bookmark Unity Discussions Asset Pipeline category. Fixes for 6.6.x patch versions land there first; subscribe to the RSS.
  7. Re-baseline after every Unity minor upgrade. 6.7 will introduce its own pipeline shifts. Re-measure before assuming the gains persist.

Anti-patterns that surface on Steam Deck and mobile

Two anti-patterns are platform-specific enough to call out:

  • Steam Deck builds get slower with too-aggressive texture compression: the Crunch encoder takes much longer at build time than the plain DXT path on the Linux IL2CPP target. Profile both before assuming compression settings transfer.
  • Android demo branches need separate Addressables remote catalogs. Sharing remote catalogs with the PC main branch causes hash drift; the Addressables remote catalog drift production validation loop covers this in depth.

For the broader Addressables pipeline shape, see Unity 6 Addressables release workflow build content checklist. For build-content-hash lockfiles that survive branch churn, see Build Content Hash Lockfiles for Unity Addressables.

Cost-benefit at indie scale

The team's intervention totaled approximately:

  • Engineering time: 1 sprint week split across the three team members (~40 person-hours).
  • Hardware: zero (they reused the existing Windows desktop for the Accelerator).
  • Software: zero (Unity Accelerator is free for individual developers and small teams under the Unity Personal/Plus thresholds in 2026; check current Unity terms for your seat count).
  • Ongoing maintenance: ~30 minutes/month for the pipeline owner.

The recovery:

  • Direct time savings: roughly 14 minutes saved per developer per day at ~18 working days/month = ~4.2 hours/developer/month.
  • For a 3-person team: roughly 12.6 hours/month of recovered productivity = ~$630/month at $50/hr all-in.
  • Payback period: roughly three months for the 40-hour intervention at the direct-savings rate above.
  • Annual recovery: roughly $7.5k/year at the same scale; this scales linearly with team size.
  • Secondary benefits not in the dollar number: faster CI feedback shortens PR cycles, faster Save-to-Play reduces context-switching cost, faster fresh-machine onboarding reduces the cost of bringing on a contractor or contributor.

For a publisher-funded team where the milestone gate depends on demo readiness by a specific date, the speedup translates into schedule slack rather than dollars - which is often more valuable because schedule slack absorbs unexpected scope or platform-cert surprises without eating into headcount.

Key takeaways

  • The 2026 win is the stack, not any single change. Asset Import Worker + Accelerator Cache Server + targeted hygiene compounds multiplicatively to 4-9x.
  • Measure before you intervene. Five baseline numbers, written to a versioned file, are the only honest way to defend a speedup later.
  • Asset Import Worker is off-by-default on upgraded projects. Set the worker count to Cores - 1 explicitly.
  • Accelerator Cache Server is for the LAN or a small VPS, not a developer laptop. Tailscale or equivalent gives you remote-work parity.
  • Library/ never belongs in source control. PackageCache/ never belongs in cross-machine sync. Both are regenerable; treating them as artifacts to back up destroys their purpose.
  • AssetDatabase.StartAssetEditing / StopAssetEditing is the cheapest 15x available for any tool that touches multiple assets in a loop.
  • IL2CPP cache invalidates on platform target switch. Parallel project folders via git worktree keep both targets warm.
  • Build Profiles eliminate "forgot to flip a setting" rebuilds. Treat them as the canonical source of truth for per-platform settings.
  • CI Library cache needs a manifest-hash-included key. Caching without proper invalidation silently serves stale state across breaking changes.
  • Speed gains decay without quarterly re-measurement. Set the cadence and rotate a pipeline owner.

FAQ

1. We are on Unity 2022 LTS - does this case study apply? Partially. Asset Import Worker exists in 2022 LTS but with rougher edges; Accelerator Cache Server works but the Docker path stabilized in 6.x; Build Profiles are 6.x-only. If you cannot upgrade right now, the AssetDatabase batching (Change 4) and CI Library cache with proper invalidation key (Change 7) still apply and account for roughly half the gain. Plan the upgrade; the rest of the stack rewards it.

2. We are a one-person team. Do we still need Accelerator Cache Server? Yes, if you also run CI or move between two machines (desktop + laptop). The cache server pays for itself the first time you switch machines or open the project on a fresh git clone. If you only ever develop on one machine and have no CI, you can defer Change 2 and still get the Import Worker, AssetDatabase batching, IL2CPP per-platform cache, and Library hygiene gains - that is still typically a 3-4x speedup.

3. We use Plastic SCM. Anything specific? Plastic SCM's selective ignore patterns work well for Unity 6.6 LTS if you start from Unity's canonical .gitignore and translate. The biggest watch-out is the ignore file itself - some teams accidentally version-control a too-permissive .plasticignore, which lets Library/ get checked in and then serves stale data to fresh workspaces. Verify Library/ is in .plasticignore after every upgrade.

4. Should we set Asset Import Worker count to Cores, Cores - 1, or something else? Cores - 1 for desktops and Linux runners. On Apple Silicon, count performance cores only and leave extra headroom for the OS and the Editor - the case-study team settled on PerformanceCores - 2. For CI runners, match the runner's allocated vCPUs (not the host hardware's full core count) or you will starve the runner.

5. Our build is dominated by shader compilation, not asset import. What changes for us? Read Shader Variant Explosion Unity 6 Build Time Triage Flow first. Then return to Changes 4, 5, 6, and 7 here. Changes 1 and 2 will still help your import time but the dominant cost is on the shader side; address shader variants first to unblock the rest of the stack.

Related reading

For the resource side: the 18 free Unity 6.6 migration regression triage resources covers the migration-time companion tools, the 15 free GitHub Actions CI recipes for Unity and Godot covers Change 7's CI lane in depth, and the 20 free build validation and release checklist resources covers the broader build-gate context this pipeline feeds into.

Authoritative references: Unity Manual - Asset Database, Unity Manual - Cache Server, Unity Manual - Build Profiles, GitHub Actions cache action documentation.

If your team has run a similar measurement-anchored pipeline intervention with different numbers, share the baseline-and-after notes - the indie community benefits more from honest measurements than from "we made it faster" anecdotes.