Unity Sentis ONNX Model Import Failed - Neural Network Asset and Backend Fix

Problem: You import an ONNX file into Unity, but Sentis throws import errors, creates a broken neural network asset, or fails at runtime when selecting a backend.

Common messages include:

  • unsupported operator or unsupported opset
  • shape inference failed
  • backend execution error in GPU or CPU path

This usually happens when the exported model graph does not match Sentis-supported operators or when runtime backend expectations differ between Editor and Player builds.


Quick fix checklist

  1. Re-export the ONNX model with a Sentis-friendly opset (a lower, stable opset rather than the latest experimental one).
  2. Remove or replace unsupported operators in the source model pipeline.
  3. Validate input tensor shape and data type exactly match what your Unity inference code sends.
  4. Test inference on CPU backend first, then move to GPU backend once correctness is stable.
  5. Run the same smoke test in a Development Build, not only in Editor.

Root cause summary

Most Unity Sentis ONNX import failures come from one of these:

  • Model export mismatch - ONNX graph uses operators Sentis does not fully support in your package version.
  • Shape or dtype mismatch - model expects specific dimensions or float type but receives different input at runtime.
  • Backend-specific behavior - GPU backend fails for an op chain that works on CPU.
  • Version drift - exporter library, ONNX runtime assumptions, and Sentis package version are out of sync.

Step-by-step fix

Step 1 - Confirm package and model version alignment

  1. Check Unity Sentis package version in Package Manager.
  2. Confirm your ONNX exporter version from the training/inference pipeline repo.
  3. If model was exported long ago, re-export from a pinned pipeline revision.

Verification: Document one known-good trio: Unity version + Sentis version + exporter/opset.
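The known-good trio can be captured as a small machine-readable record that lives next to your export scripts, so a failing setup can be diffed against a working one. All version strings below are illustrative placeholders, not values from this guide:

```python
# Record one known-good combination of versions so a failing import can be
# diffed against a configuration that is known to work.
# Every version string here is an illustrative placeholder.
KNOWN_GOOD = {
    "unity": "2023.2.20f1",
    "sentis": "1.4.0",
    "exporter": "torch-2.2",
    "opset": 17,
}

def version_drift(current: dict) -> list:
    """Return the keys where the current setup differs from the known-good trio."""
    return [k for k in KNOWN_GOOD if current.get(k) != KNOWN_GOOD[k]]
```

A CI step can call `version_drift` with the live environment's versions and fail loudly before anyone spends time debugging the import itself.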


Step 2 - Re-export ONNX with conservative settings

  1. Export with a stable opset supported by your current Sentis version.
  2. Disable experimental graph optimizations that fuse uncommon ops when possible.
  3. Keep dynamic dimensions minimal unless required.

Common mistake: exporting with the newest opset by default because the training framework was silently upgraded.
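As one sketch, assuming a PyTorch export path, the conservative settings above can be pinned in a single helper so they never silently track exporter defaults. The opset value is only an example; use the opset your Sentis version documents as supported:

```python
# Conservative, pinned settings for torch.onnx.export.
# opset 17 is a placeholder; confirm the supported opset for your Sentis version.
def conservative_export_kwargs(opset_version: int = 17) -> dict:
    return {
        "opset_version": opset_version,  # pin explicitly; never rely on the exporter default
        "do_constant_folding": True,     # safe constant folding, no exotic op fusion
        "dynamic_axes": None,            # static shapes unless the game genuinely needs dynamic dims
    }

# Usage (sketch): torch.onnx.export(model, sample_input, "model.onnx",
#                                   **conservative_export_kwargs())
```

Keeping the kwargs in one function means a framework upgrade cannot change your export behavior without a visible diff in this file.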


Step 3 - Validate operators before import

  1. Inspect model graph in your ML toolchain or ONNX inspection utility.
  2. Identify unsupported operators and replace them in source model architecture where practical.
  3. Re-export and keep a changelog line of what was replaced.

Pro tip: Keep one "Unity-compatible" export config profile in your training repo to avoid repeating this work.
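The operator audit above reduces to a set difference, assuming you have already extracted the graph's node types (for example with the `onnx` Python package) and maintain an allowlist of operators your Sentis version supports. The operator lists below are illustrative only:

```python
def unsupported_ops(model_ops: set, supported_ops: set) -> set:
    """Operators present in the exported graph but absent from the allowlist."""
    return set(model_ops) - set(supported_ops)

# Illustrative data only; real lists come from your graph and your Sentis docs.
graph_ops = {"Conv", "Relu", "Resize", "GridSample"}
allowlist = {"Conv", "Relu", "Resize"}
print(unsupported_ops(graph_ops, allowlist))  # prints {'GridSample'}
```

Running this check in the training repo, before the file ever reaches Unity, turns a vague import error into an actionable list of operators to replace.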


Step 4 - Validate input tensor contract in Unity

  1. Log input tensor shape and type before inference call.
  2. Ensure batch/channel ordering matches model expectation.
  3. Ensure normalization logic in Unity matches training preprocessing.

If preprocessing differs, the model may import cleanly but produce broken-looking output, which is easy to mistake for a backend failure.
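The input contract can be checked explicitly before any inference call, so a mismatch fails with a precise message instead of an opaque backend error. The shape and dtype below are assumptions for a typical NCHW image model, not values from this guide:

```python
def check_tensor_contract(shape, dtype, expected_shape, expected_dtype="float32"):
    """Fail loudly with a precise message instead of letting the backend fail opaquely."""
    if tuple(shape) != tuple(expected_shape):
        raise ValueError(
            f"shape mismatch: got {tuple(shape)}, expected {tuple(expected_shape)}")
    if dtype != expected_dtype:
        raise ValueError(f"dtype mismatch: got {dtype}, expected {expected_dtype}")

# Hypothetical contract: batch 1, 3 channels, 224x224 input, float32.
check_tensor_contract((1, 3, 224, 224), "float32", (1, 3, 224, 224))
```

The same assertion belongs on the Unity side too (logging the tensor shape and type before the inference call, as in step 1 above), so both halves of the pipeline enforce one contract.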


Step 5 - Start with CPU backend, then promote

  1. Run inference on CPU backend first for deterministic debugging.
  2. Once outputs are sane, switch to GPU backend and re-run the same sample input set.
  3. If GPU fails, keep CPU as fallback and track GPU op incompatibility separately.

Verification: Same input sample should produce numerically close outputs across backends within expected tolerance.
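The cross-backend verification can be expressed as an elementwise tolerance check over flattened outputs; the tolerance values below are placeholders to tune per model:

```python
import math

def outputs_close(cpu_out, gpu_out, rel_tol=1e-3, abs_tol=1e-5):
    """True when two flat output vectors agree within tolerance (math.isclose semantics)."""
    if len(cpu_out) != len(gpu_out):
        return False
    return all(math.isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol)
               for a, b in zip(cpu_out, gpu_out))
```

Dump both backends' outputs for the same sample input to a file or log, then run this comparison offline; a length mismatch or a large divergence points at a backend-specific op problem rather than a preprocessing bug.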


Step 6 - Test in Development Build

Editor success does not guarantee Player success.

  1. Build Development player for your target platform.
  2. Run one minimal inference scene with known test inputs.
  3. Check logs for backend initialization and execution path.

Alternative fixes

  • If one model family keeps failing, export an alternate architecture with a simpler operator set for in-game inference.
  • Keep heavy model server-side and run lightweight classifier/selector locally in Unity.
  • Use CPU-only mode for platforms where GPU backend behavior is inconsistent.

Prevention tips

  • Pin exporter and opset in your model CI pipeline.
  • Add a pre-merge test that loads ONNX in a Unity test project and runs one inference step.
  • Store a tiny golden input/output pair to detect regression when model or Sentis package changes.
  • Avoid silent package bumps without inference smoke tests.
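The golden input/output pair from the tips above can be stored as plain JSON next to the model and reloaded after any model or package bump; the file name is illustrative:

```python
import json

def save_golden_pair(input_vec, output_vec, path="golden_pair.json"):
    """Store a tiny known-good input/output pair for regression detection."""
    with open(path, "w") as f:
        json.dump({"input": list(input_vec), "output": list(output_vec)}, f)

def load_golden_pair(path="golden_pair.json"):
    """Reload the pair so a smoke test can re-run inference and compare outputs."""
    with open(path) as f:
        d = json.load(f)
    return d["input"], d["output"]
```

A pre-merge smoke test then loads the pair, runs one inference step, and compares the fresh output against the stored one within a tolerance, catching regressions from silent package bumps.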

FAQ

Why does ONNX import in Editor but fail in build?
Backend support and platform runtime constraints can differ. Always validate in a Development Build.

Should I always use GPU backend?
No. CPU backend is often more stable and easier to debug. Promote to GPU after correctness is proven.

Can I keep dynamic shapes?
You can, but fixed or constrained dimensions are usually easier to support in Unity game runtime pipelines.


Bookmark this fix for the next model refresh and share it with whoever owns your training export scripts.