Why Cross-Platform Input Design Matters
If your game ships on just one platform, it is tempting to hard-code input everywhere:
- `Input.GetKeyDown(KeyCode.Space)` sprinkled through scripts
- UI buttons wired directly to engine events
- Platform-specific hacks like `#if UNITY_STANDALONE` vs `#if UNITY_ANDROID`
That approach collapses the moment you add:
- Gamepad support on PC
- A console build with certification requirements
- A mobile port with touch and on-screen controls
Suddenly, every change to controls feels risky, and you are fighting bugs like:
- Menu actions working on keyboard but not gamepad
- Touch input that breaks when UI is refactored
- Platform-only edge cases you cannot easily test
This article walks through how to build a cross-platform input system that:
- Treats actions as intent, not raw hardware buttons
- Keeps platform-specific code isolated in one layer
- Scales from keyboard + mouse → gamepad → touch without rewrites
Examples assume Unity-style C# and Godot-style structures, but the ideas apply to any engine.
Step 1 - Define Actions, Not Buttons
The first mistake many projects make is tying gameplay directly to physical inputs:
```csharp
if (Input.GetKeyDown(KeyCode.Space)) {
    Jump();
}
```
Instead, define a small, explicit set of actions:
- `MoveHorizontal`
- `MoveVertical`
- `Jump`
- `AttackPrimary`
- `Interact`
- `Pause`
Then build your game logic against those actions:
```csharp
if (inputActions.Jump.WasPressedThisFrame()) {
    Jump();
}
```
Guidelines for defining actions:
- Name them after player intent, not devices (`Jump`, not `Spacebar`)
- Keep the list small and reusable across characters and scenes
- Separate gameplay actions from UI/navigation actions when needed
Once actions are defined, every system that cares about input talks to the same API, no matter where the signal comes from.
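The action list above can also be captured as an enum rather than raw strings, which catches typos at compile time. This is a sketch under that assumption; the names mirror the article's list and carry no device information.

```csharp
// Device-agnostic action list as an enum. Each member names player intent,
// never a physical button.
public enum GameAction {
    MoveHorizontal,
    MoveVertical,
    Jump,
    AttackPrimary,
    Interact,
    Pause
}
```

The later examples in this article use string action names for flexibility; an enum trades that flexibility for compiler checking, and either works with the same architecture.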
Step 2 - Introduce An Input Abstraction Layer
Next, introduce an input abstraction that lives between your game and the engine’s raw APIs.
In Unity terms, this might be:
- An `IInputSource` interface
- A central `InputRouter` or `InputService` singleton (or dependency-injected service)
Conceptually:
- The abstraction exposes actions and axes (e.g. `GetActionDown("Jump")`, `GetAxis("MoveHorizontal")`)
- Under the hood, it can read from keyboard, gamepad, or touch depending on platform and player preferences
Example interface:
```csharp
public interface IInputSource {
    float GetAxis(string actionName);
    bool GetAction(string actionName);
    bool GetActionDown(string actionName);
    bool GetActionUp(string actionName);
}
```
Your player controller, UI navigation, and other systems now depend only on `IInputSource`, never on `Input.GetKey` or engine-specific singletons directly.
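As a sketch of what such a consumer looks like, here is a minimal player controller that reads only from the abstraction. The interface is repeated so the snippet compiles on its own; the move speed constant and jump counter are hypothetical stand-ins for real gameplay code.

```csharp
public interface IInputSource {
    float GetAxis(string actionName);
    bool GetAction(string actionName);
    bool GetActionDown(string actionName);
    bool GetActionUp(string actionName);
}

public class PlayerController {
    private readonly IInputSource _input;
    private const float MoveSpeed = 5f;

    public float X { get; private set; }
    public int Jumps { get; private set; }

    public PlayerController(IInputSource input) { _input = input; }

    // Call once per frame. No engine input APIs appear here, so this class
    // works unchanged with keyboard, gamepad, or touch sources.
    public void Tick(float deltaTime) {
        X += _input.GetAxis("MoveHorizontal") * MoveSpeed * deltaTime;
        if (_input.GetActionDown("Jump")) Jumps++;
    }
}
```

Because the controller takes `IInputSource` through its constructor, it is also trivial to drive from a fake source in unit tests.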
Step 3 - Map Devices To Actions Per Platform
With an abstraction in place, you can create platform-specific implementations.
Example: Desktop Keyboard + Mouse
```csharp
public class DesktopInputSource : IInputSource {
    public float GetAxis(string actionName) {
        switch (actionName) {
            case "MoveHorizontal": return Input.GetAxisRaw("Horizontal");
            case "MoveVertical": return Input.GetAxisRaw("Vertical");
            default: return 0f;
        }
    }

    public bool GetAction(string actionName) {
        switch (actionName) {
            case "Jump": return Input.GetKey(KeyCode.Space);
            case "AttackPrimary": return Input.GetMouseButton(0);
            default: return false;
        }
    }

    public bool GetActionDown(string actionName) {
        switch (actionName) {
            case "Jump": return Input.GetKeyDown(KeyCode.Space);
            case "AttackPrimary": return Input.GetMouseButtonDown(0);
            default: return false;
        }
    }

    public bool GetActionUp(string actionName) {
        switch (actionName) {
            case "Jump": return Input.GetKeyUp(KeyCode.Space);
            case "AttackPrimary": return Input.GetMouseButtonUp(0);
            default: return false;
        }
    }
}
```
Example: Gamepad
```csharp
public class GamepadInputSource : IInputSource {
    public float GetAxis(string actionName) {
        switch (actionName) {
            case "MoveHorizontal": return Input.GetAxisRaw("GamepadHorizontal");
            case "MoveVertical": return Input.GetAxisRaw("GamepadVertical");
            default: return 0f;
        }
    }

    public bool GetAction(string actionName) {
        switch (actionName) {
            case "Jump": return Input.GetButton("GamepadJump");
            case "AttackPrimary": return Input.GetButton("GamepadAttack");
            default: return false;
        }
    }

    public bool GetActionDown(string actionName) {
        switch (actionName) {
            case "Jump": return Input.GetButtonDown("GamepadJump");
            case "AttackPrimary": return Input.GetButtonDown("GamepadAttack");
            default: return false;
        }
    }

    public bool GetActionUp(string actionName) {
        switch (actionName) {
            case "Jump": return Input.GetButtonUp("GamepadJump");
            case "AttackPrimary": return Input.GetButtonUp("GamepadAttack");
            default: return false;
        }
    }
}
```
Example: Touch + On-Screen Controls
On mobile, you often drive actions from:
- On-screen buttons
- Virtual joystick
- Gesture recognizers
Wrap those into an implementation:
```csharp
public class TouchInputSource : IInputSource {
    private readonly VirtualJoystick _joystick;
    private readonly VirtualButton _jumpButton;

    public TouchInputSource(VirtualJoystick joystick, VirtualButton jumpButton) {
        _joystick = joystick;
        _jumpButton = jumpButton;
    }

    public float GetAxis(string actionName) {
        switch (actionName) {
            case "MoveHorizontal": return _joystick.Horizontal;
            case "MoveVertical": return _joystick.Vertical;
            default: return 0f;
        }
    }

    public bool GetAction(string actionName) {
        return actionName == "Jump" && _jumpButton.IsHeld;
    }

    public bool GetActionDown(string actionName) {
        return actionName == "Jump" && _jumpButton.WasPressedThisFrame;
    }

    public bool GetActionUp(string actionName) {
        return actionName == "Jump" && _jumpButton.WasReleasedThisFrame;
    }
}
```
With this pattern, gameplay code never changes when you add or swap devices.
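Choosing which implementation to construct is the one place platform branching belongs. Here is a sketch of that bootstrap step; the `Platform` enum and the stub source classes are assumptions for illustration, and in Unity the branch would key off `Application.platform` (or `#if` blocks confined to this single file).

```csharp
public interface IInputSource {
    float GetAxis(string actionName);
    bool GetAction(string actionName);
    bool GetActionDown(string actionName);
    bool GetActionUp(string actionName);
}

// Compile-only stand-ins for the real Step 3 implementations.
public class DesktopInputSource : IInputSource {
    public virtual float GetAxis(string a) => 0f;
    public virtual bool GetAction(string a) => false;
    public virtual bool GetActionDown(string a) => false;
    public virtual bool GetActionUp(string a) => false;
}
public class GamepadInputSource : DesktopInputSource { }
public class TouchInputSource : DesktopInputSource { }

public enum Platform { Desktop, Console, Mobile }

public static class InputSourceFactory {
    // The ONLY place that knows about platforms. Everything else in the
    // game just receives an IInputSource.
    public static IInputSource Create(Platform platform) {
        switch (platform) {
            case Platform.Console: return new GamepadInputSource();
            case Platform.Mobile:  return new TouchInputSource();
            default:               return new DesktopInputSource();
        }
    }
}
```

Keeping the branch in one factory is also the fix for the "preprocessor spaghetti" pitfall discussed later: platform checks live here and nowhere else.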
Step 4 - Detect And Switch Active Devices Gracefully
Modern games often support multiple devices at once:
- Player uses mouse and keyboard in menus
- Switches to controller on the couch
- Plays on Steam Deck with built-in controls
You can monitor recent input to decide which source is “active”:
- Track when a gamepad button is pressed
- Track when mouse or keyboard input changes
- Track when touch events occur
Then your `InputRouter` can:
- Enable the corresponding `IInputSource`
- Update UI hints (button glyphs vs keys vs touch icons)
Simple approach:
- Keep a list of candidate sources
- Each source can report whether it saw input this frame
- Prioritize the most recent active source for UI and gameplay hints
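The three bullets above can be sketched as a tiny "most recent device wins" tracker. The per-source activity callback is an assumption standing in for real per-device checks (e.g. polling for any gamepad button, any key, or any touch this frame).

```csharp
using System;
using System.Collections.Generic;

public class ActiveDeviceTracker {
    private readonly List<(string Name, Func<bool> SawInput)> _sources =
        new List<(string, Func<bool>)>();

    public string ActiveDevice { get; private set; } = "None";

    public void Register(string name, Func<bool> sawInputThisFrame) =>
        _sources.Add((name, sawInputThisFrame));

    // Call once per frame: whichever source saw input most recently stays
    // active, which also decides which UI glyphs to show.
    public void Update() {
        foreach (var (name, sawInput) in _sources)
            if (sawInput()) ActiveDevice = name;
    }
}
```

A real router would likely debounce switches (stick drift can register as "input") and raise an event when `ActiveDevice` changes so UI hints can refresh.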
Step 5 - Keep UI, Menus, And Gameplay In Sync
Input systems often break because UI and gameplay use different logic:
- Menus use engine-specific navigation
- Gameplay uses a custom input abstraction
Instead:
- Route both UI navigation and gameplay through the same action names:
  - `NavigateUp`, `NavigateDown`, `NavigateLeft`, `NavigateRight`
  - `Submit`, `Cancel`, `Pause`
Map devices accordingly:
- Keyboard: arrows / WASD, Enter, Escape
- Gamepad: d-pad or left stick, A / B (or Cross / Circle)
- Touch: tap, swipe, explicit UI events that trigger actions
This makes it much easier to:
- Ensure menus are fully navigable with gamepad
- Support remapping once for all contexts
- Keep accessibility features (e.g. remappable input) consistent
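As a sketch of what "one action vocabulary" buys you, here is menu navigation driven through the same `IInputSource` interface as gameplay. The interface is repeated so the snippet stands alone; the wrap-around list-of-labels menu model is a deliberately tiny assumption.

```csharp
public interface IInputSource {
    float GetAxis(string actionName);
    bool GetAction(string actionName);
    bool GetActionDown(string actionName);
    bool GetActionUp(string actionName);
}

public class MenuNavigator {
    private readonly IInputSource _input;
    private readonly string[] _items;

    public int SelectedIndex { get; private set; }
    public string SelectedItem => _items[SelectedIndex];

    public MenuNavigator(IInputSource input, string[] items) {
        _input = input;
        _items = items;
    }

    // Works identically for arrow keys, d-pad, or swipe gestures, because
    // each device is mapped onto NavigateUp / NavigateDown elsewhere.
    public void Tick() {
        int count = _items.Length;
        if (_input.GetActionDown("NavigateDown"))
            SelectedIndex = (SelectedIndex + 1) % count;
        if (_input.GetActionDown("NavigateUp"))
            SelectedIndex = (SelectedIndex + count - 1) % count;
    }
}
```

Nothing in this class knows about devices, so a controller-navigable menu falls out of the same mapping work you already did for gameplay.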
Step 6 - Plan For Remapping And Accessibility
If you ever intend to support:
- Custom key bindings
- Swapped A/B confirm/cancel behavior by region
- Accessibility features (one-hand mode, alternative layouts)
Design your input layer with remapping in mind:
- Store mappings in data (JSON, scriptable objects, config files)
- Treat engine-level bindings as “raw inputs” that feed into an action map
For example:
- `RawInput` → "Keyboard Space"
- `Action` → "Jump"
- Mapping: `Keyboard Space` → `Jump`, `Gamepad South Button` → `Jump`
When a player remaps:
- Only the mapping layer changes
- Game code still queries `Jump` as usual
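The mapping layer described above can be sketched as a small data-driven table. `Resolve` returning null for unbound inputs is an assumption; in practice the bindings would be loaded from JSON, config files, or scriptable objects.

```csharp
using System.Collections.Generic;

// Raw device inputs map to action names; remapping only ever touches
// this table, never gameplay code.
public class ActionMap {
    private readonly Dictionary<string, string> _rawToAction =
        new Dictionary<string, string>();

    public void Bind(string rawInput, string action) =>
        _rawToAction[rawInput] = action;

    // Returns the action a raw input triggers, or null if unbound.
    public string Resolve(string rawInput) =>
        _rawToAction.TryGetValue(rawInput, out var action) ? action : null;
}
```

When the player remaps, only `Bind` calls change; everything downstream keeps asking for `Jump`.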
Step 7 - Test On Real Hardware, Not Just In Editor
Cross-platform input bugs are rarely obvious in editor-only testing.
Set up a testing checklist:
- [ ] Keyboard-only playthrough on PC
- [ ] Controller-only playthrough (Xbox/PlayStation/Switch pad)
- [ ] Mouse-only navigation in menus
- [ ] Full session on target console hardware (dev kits)
- [ ] Mobile/touch test on real devices, not just emulators
Watch for:
- Actions that fire twice or not at all
- Menus that cannot be navigated without a mouse
- Inconsistent dead zones or sensitivity across devices
Log your findings and fix them in the abstraction layer, not by special-casing every scene.
Common Pitfalls To Avoid
1. Hardcoding Input Everywhere
- Symptoms: copy-pasted `Input.GetKeyDown` calls throughout your codebase
- Fix: route input through a single abstraction and phase out direct calls over time
2. Tying Actions To A Single Device
- Symptoms: menu works on keyboard, breaks on controller
- Fix: define device-agnostic actions and map them per device
3. Ignoring Accessibility
- Symptoms: players cannot remap controls or play one-handed
- Fix: design remapping from day one and keep action names stable
4. Platform-Specific Preprocessor Spaghetti
- Symptoms: dozens of `#if` branches around input logic
- Fix: move platform checks into the factory/bootstrapping code that chooses `IInputSource` implementations
Bringing It All Together
To build a robust cross-platform input system for PC, console, and mobile:
- Treat input as intentful actions, not hardware buttons
- Introduce an input abstraction layer with a consistent API
- Implement per-device sources (keyboard/mouse, gamepad, touch) behind that interface
- Let your router detect and switch active devices gracefully
- Keep UI and gameplay on the same action vocabulary
- Invest early in remapping and accessibility
- Test on real hardware and fix issues at the abstraction layer
Do this once, and you get:
- Cleaner, more maintainable code
- Easier ports to new platforms and storefronts
- Happier players who can play your game the way they want, wherever they want
Use this article as a checklist the next time you start a multi-platform project—or when you finally refactor that input system you have been afraid to touch.