Neural Networks for Game AI

Neural networks represent the cutting edge of artificial intelligence in games. These powerful systems can learn from data, recognize patterns, and make complex decisions that traditional AI methods struggle with. In this chapter, you'll learn how to implement neural networks in your games to create more intelligent, adaptive, and engaging AI.

What You'll Learn

  • Understand what neural networks are and how they work
  • Implement basic neural networks for game AI
  • Use neural networks for decision-making and pattern recognition
  • Create adaptive AI that learns from player behavior
  • Optimize neural networks for real-time game performance
  • Apply neural networks to specific game scenarios

What Are Neural Networks?

A neural network is a computing system inspired by biological neural networks. It consists of interconnected nodes (neurons) organized in layers that process information and learn from data.

Basic Structure

Neural networks have three main components:

  1. Input Layer: Receives data (player position, health, enemy count, etc.)
  2. Hidden Layers: Process information through weighted connections
  3. Output Layer: Produces decisions or predictions (attack, flee, patrol, etc.)

Key Concept: Neural networks learn by adjusting the weights between neurons based on training data, allowing them to recognize patterns and make intelligent decisions.
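To make the weight-adjustment idea concrete, here is a single neuron in plain C# (a minimal sketch; the names are illustrative, not from any library). It computes a weighted sum of its inputs, adds a bias, and squashes the result through a sigmoid so the output lands between 0 and 1:

```csharp
using System;

public static class SingleNeuron
{
    // One neuron: weighted sum of inputs plus bias, squashed by a sigmoid.
    // "Training" means nudging `weights` and `bias` until the outputs
    // match the training examples.
    public static float Activate(float[] inputs, float[] weights, float bias)
    {
        float sum = bias;
        for (int i = 0; i < inputs.Length; i++)
            sum += inputs[i] * weights[i];

        return 1f / (1f + (float)Math.Exp(-sum)); // sigmoid: maps to (0, 1)
    }
}
```

For example, with inputs { distance = 0.2f, health = 0.9f }, weights { -3f, 2f }, and bias 0f, the weighted sum is 1.2 and the output is about 0.77 — a strong "attack" signal when the player is close and health is high.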

Why Use Neural Networks in Games?

Advantages:

  • Adaptive Behavior: AI can learn and adapt to player strategies
  • Pattern Recognition: Identify complex patterns in gameplay
  • Non-Linear Decisions: Handle situations traditional AI struggles with
  • Emergent Behavior: Create unexpected, interesting AI behaviors
  • Scalability: Handle complex scenarios with many variables

Considerations:

  • Performance Cost: Neural networks require computation time
  • Training Required: Need data or time to train the network
  • Opacity: Decisions are harder to debug and explain than traditional AI
  • Complexity: More complex to implement than state machines

Basic Neural Network Implementation

Simple Feedforward Network

Let's create a basic neural network that decides whether an enemy should attack or flee:

using UnityEngine;
using System.Collections.Generic;

public class SimpleNeuralNetwork
{
    // Network layer sizes (defaults: 3 inputs, 4 hidden neurons, 2 outputs)
    private int inputSize;
    private int hiddenSize;
    private int outputSize;

    // Weights: connections between layers
    private float[,] inputToHiddenWeights;
    private float[,] hiddenToOutputWeights;

    // Biases: offsets for each neuron
    private float[] hiddenBiases;
    private float[] outputBiases;

    // Layer sizes are constructor parameters so the same class can back the
    // larger networks used later in the chapter
    public SimpleNeuralNetwork(int inputSize = 3, int hiddenSize = 4, int outputSize = 2)
    {
        this.inputSize = inputSize;
        this.hiddenSize = hiddenSize;
        this.outputSize = outputSize;
        InitializeWeights();
    }

    private void InitializeWeights()
    {
        // Initialize weights randomly (small values)
        inputToHiddenWeights = new float[inputSize, hiddenSize];
        hiddenToOutputWeights = new float[hiddenSize, outputSize];
        hiddenBiases = new float[hiddenSize];
        outputBiases = new float[outputSize];

        // Random initialization
        for (int i = 0; i < inputSize; i++)
        {
            for (int j = 0; j < hiddenSize; j++)
            {
                inputToHiddenWeights[i, j] = Random.Range(-0.5f, 0.5f);
            }
        }

        for (int i = 0; i < hiddenSize; i++)
        {
            for (int j = 0; j < outputSize; j++)
            {
                hiddenToOutputWeights[i, j] = Random.Range(-0.5f, 0.5f);
            }
        }
    }

    // Forward pass: process input through network
    public float[] Forward(float[] inputs)
    {
        // Input to hidden layer
        float[] hidden = new float[hiddenSize];
        for (int i = 0; i < hiddenSize; i++)
        {
            float sum = hiddenBiases[i];
            for (int j = 0; j < inputSize; j++)
            {
                sum += inputs[j] * inputToHiddenWeights[j, i];
            }
            hidden[i] = Sigmoid(sum); // Activation function
        }

        // Hidden to output layer
        float[] outputs = new float[outputSize];
        for (int i = 0; i < outputSize; i++)
        {
            float sum = outputBiases[i];
            for (int j = 0; j < hiddenSize; j++)
            {
                sum += hidden[j] * hiddenToOutputWeights[j, i];
            }
            outputs[i] = Sigmoid(sum);
        }

        return outputs;
    }

    // Activation function: Sigmoid
    private float Sigmoid(float x)
    {
        return 1f / (1f + Mathf.Exp(-x));
    }

    // Train network using backpropagation (simplified)
    public void Train(float[] inputs, float[] expectedOutputs, float learningRate = 0.1f)
    {
        // Forward pass; hidden activations are kept for the weight updates.
        // (Forward() recomputes them internally — wasteful but harmless for
        // a network this small.)
        float[] hidden = ForwardToHidden(inputs);
        float[] outputs = Forward(inputs);

        // Calculate output errors
        float[] outputErrors = new float[outputSize];
        for (int i = 0; i < outputSize; i++)
        {
            outputErrors[i] = (expectedOutputs[i] - outputs[i]) * outputs[i] * (1f - outputs[i]);
        }

        // Calculate hidden errors
        float[] hiddenErrors = new float[hiddenSize];
        for (int i = 0; i < hiddenSize; i++)
        {
            float sum = 0f;
            for (int j = 0; j < outputSize; j++)
            {
                sum += outputErrors[j] * hiddenToOutputWeights[i, j];
            }
            hiddenErrors[i] = sum * hidden[i] * (1f - hidden[i]);
        }

        // Update weights (backpropagation)
        UpdateWeights(inputs, hidden, outputErrors, hiddenErrors, learningRate);
    }

    private float[] ForwardToHidden(float[] inputs)
    {
        float[] hidden = new float[hiddenSize];
        for (int i = 0; i < hiddenSize; i++)
        {
            float sum = hiddenBiases[i];
            for (int j = 0; j < inputSize; j++)
            {
                sum += inputs[j] * inputToHiddenWeights[j, i];
            }
            hidden[i] = Sigmoid(sum);
        }
        return hidden;
    }

    private void UpdateWeights(float[] inputs, float[] hidden, float[] outputErrors, float[] hiddenErrors, float learningRate)
    {
        // Update hidden to output weights
        for (int i = 0; i < hiddenSize; i++)
        {
            for (int j = 0; j < outputSize; j++)
            {
                hiddenToOutputWeights[i, j] += learningRate * outputErrors[j] * hidden[i];
            }
        }

        // Update input to hidden weights
        for (int i = 0; i < inputSize; i++)
        {
            for (int j = 0; j < hiddenSize; j++)
            {
                inputToHiddenWeights[i, j] += learningRate * hiddenErrors[j] * inputs[i];
            }
        }

        // Update biases
        for (int i = 0; i < outputSize; i++)
        {
            outputBiases[i] += learningRate * outputErrors[i];
        }

        for (int i = 0; i < hiddenSize; i++)
        {
            hiddenBiases[i] += learningRate * hiddenErrors[i];
        }
    }
}

Using the Neural Network in Game AI

public class NeuralNetworkAI : MonoBehaviour
{
    private SimpleNeuralNetwork brain;
    public Transform player;

    void Start()
    {
        brain = new SimpleNeuralNetwork();
        TrainNetwork();
    }

    void Update()
    {
        // Get inputs: player distance, health, enemy count (clamped to 0-1
        // so the network never sees values outside its trained range)
        float[] inputs = new float[3];
        inputs[0] = Mathf.Clamp01(Vector3.Distance(transform.position, player.position) / 20f);
        inputs[1] = Mathf.Clamp01(GetComponent<Health>().currentHealth / 100f);
        inputs[2] = Mathf.Clamp01(CountNearbyEnemies() / 10f);

        // Get decision from neural network
        float[] outputs = brain.Forward(inputs);

        // outputs[0] = attack probability, outputs[1] = flee probability
        if (outputs[0] > outputs[1])
        {
            Attack();
        }
        else
        {
            Flee();
        }
    }

    private void TrainNetwork()
    {
        // Training examples: (inputs, expected outputs)
        // Example 1: Close player, high health, few enemies -> Attack
        float[] inputs1 = { 0.2f, 0.9f, 0.1f };
        float[] outputs1 = { 1f, 0f }; // Attack

        // Example 2: Close player, low health, many enemies -> Flee
        float[] inputs2 = { 0.3f, 0.2f, 0.8f };
        float[] outputs2 = { 0f, 1f }; // Flee

        // Train on multiple examples
        for (int i = 0; i < 1000; i++)
        {
            brain.Train(inputs1, outputs1);
            brain.Train(inputs2, outputs2);
            // Add more training examples...
        }
    }

    private int CountNearbyEnemies()
    {
        // Count enemies within range
        Collider[] colliders = Physics.OverlapSphere(transform.position, 10f);
        int count = 0;
        foreach (var col in colliders)
        {
            if (col.CompareTag("Enemy"))
                count++;
        }
        return count;
    }

    private void Attack()
    {
        // Attack behavior
        Debug.Log("Neural Network Decision: Attack!");
    }

    private void Flee()
    {
        // Flee behavior
        Debug.Log("Neural Network Decision: Flee!");
    }
}

Neural Networks for Pattern Recognition

Neural networks excel at recognizing patterns in game data. This is useful for predicting player behavior, identifying game states, or recognizing attack patterns.

Player Behavior Prediction

public class PlayerBehaviorPredictor
{
    private SimpleNeuralNetwork predictor;

    // Inputs: player position, velocity, health, weapon type
    // Output: predicted next action (move, attack, defend, etc.)

    public PlayerBehaviorPredictor()
    {
        predictor = new SimpleNeuralNetwork();
        // NOTE: this predictor needs 4 inputs and 4 outputs, so the network's
        // layer sizes must be adjusted from the 3-input, 2-output setup shown
        // earlier before it will work as intended.
    }

    public string PredictAction(PlayerData data)
    {
        float[] inputs = {
            data.position.x / 100f,      // Normalized position
            data.position.y / 100f,
            data.health / 100f,          // Normalized health
            (float)data.weaponType / 5f  // Normalized weapon type
        };

        float[] outputs = predictor.Forward(inputs);

        // Find highest output (most likely action)
        int maxIndex = 0;
        for (int i = 1; i < outputs.Length; i++)
        {
            if (outputs[i] > outputs[maxIndex])
                maxIndex = i;
        }

        string[] actions = { "Move", "Attack", "Defend", "Retreat" };
        return actions[maxIndex];
    }

    public void TrainOnPlayerHistory(List<PlayerData> history)
    {
        // Train network on historical player behavior
        for (int i = 0; i < history.Count - 1; i++)
        {
            float[] inputs = ConvertToInputs(history[i]);
            float[] expectedOutputs = ConvertToOutputs(history[i + 1].action);

            predictor.Train(inputs, expectedOutputs);
        }
    }

    private float[] ConvertToInputs(PlayerData data)
    {
        return new float[] {
            data.position.x / 100f,
            data.position.y / 100f,
            data.health / 100f,
            (float)data.weaponType / 5f
        };
    }

    private float[] ConvertToOutputs(string action)
    {
        float[] outputs = new float[4];
        string[] actions = { "Move", "Attack", "Defend", "Retreat" };

        for (int i = 0; i < actions.Length; i++)
        {
            outputs[i] = (actions[i] == action) ? 1f : 0f;
        }

        return outputs;
    }
}

Adaptive AI with Neural Networks

Neural networks can adapt to player behavior in real-time, creating more challenging and engaging gameplay.

Learning from Player Actions

public class AdaptiveEnemyAI : MonoBehaviour
{
    private SimpleNeuralNetwork decisionNetwork;
    private List<GameState> gameHistory;
    public Transform player; // assign in the Inspector

    void Start()
    {
        decisionNetwork = new SimpleNeuralNetwork();
        gameHistory = new List<GameState>();
    }

    void Update()
    {
        // Capture and record the current state so there is history to learn from
        GameState currentState = CaptureGameState();
        gameHistory.Add(currentState);
        if (gameHistory.Count > 300) gameHistory.RemoveAt(0); // keep history bounded

        // Make decision
        float[] inputs = StateToInputs(currentState);
        float[] outputs = decisionNetwork.Forward(inputs);

        // Execute decision
        ExecuteDecision(outputs);

        // Learn from player's response (roughly once a second at 60 FPS)
        if (Time.frameCount % 60 == 0)
        {
            LearnFromPlayerResponse();
        }
    }

    private GameState CaptureGameState()
    {
        GameState state = new GameState();
        state.playerPosition = player.transform.position;
        state.playerHealth = player.GetComponent<Health>().currentHealth;
        state.enemyPosition = transform.position;
        state.enemyHealth = GetComponent<Health>().currentHealth;
        state.distance = Vector3.Distance(transform.position, player.transform.position);
        return state;
    }

    private float[] StateToInputs(GameState state)
    {
        return new float[] {
            state.distance / 20f,
            state.playerHealth / 100f,
            state.enemyHealth / 100f
        };
    }

    private void LearnFromPlayerResponse()
    {
        if (gameHistory.Count < 2) return;

        // Analyze if player is adapting to our strategy
        GameState previousState = gameHistory[gameHistory.Count - 2];
        GameState currentState = gameHistory[gameHistory.Count - 1];

        // If player is countering our strategy, adjust
        if (IsPlayerCountering(previousState, currentState))
        {
            // Retrain network with new strategy
            RetrainNetwork();
        }
    }

    private bool IsPlayerCountering(GameState prev, GameState curr)
    {
        // Analyze if player changed strategy in response to our actions
        // This is a simplified example
        return Mathf.Abs(curr.playerPosition.x - prev.playerPosition.x) > 5f;
    }

    private void RetrainNetwork()
    {
        // Use recent game history to retrain
        for (int i = 0; i < gameHistory.Count - 1; i++)
        {
            float[] inputs = StateToInputs(gameHistory[i]);
            float[] expectedOutputs = DetermineOptimalResponse(gameHistory[i + 1]);

            decisionNetwork.Train(inputs, expectedOutputs, 0.05f); // Lower learning rate
        }
    }

    private float[] DetermineOptimalResponse(GameState state)
    {
        // Determine what the optimal response should have been
        // This is simplified - in practice, you'd use more sophisticated methods
        float[] outputs = new float[2];

        if (state.distance < 5f && state.enemyHealth > 50f)
        {
            outputs[0] = 1f; // Attack
            outputs[1] = 0f;
        }
        else
        {
            outputs[0] = 0f;
            outputs[1] = 1f; // Flee
        }

        return outputs;
    }

    private void ExecuteDecision(float[] outputs)
    {
        if (outputs[0] > outputs[1])
        {
            AttackPlayer();
        }
        else
        {
            FleeFromPlayer();
        }
    }

    private void AttackPlayer() { /* Attack implementation */ }
    private void FleeFromPlayer() { /* Flee implementation */ }
}

// Simple data snapshot of one frame, used by AdaptiveEnemyAI above
public class GameState
{
    public Vector3 playerPosition;
    public Vector3 enemyPosition;
    public float playerHealth;
    public float enemyHealth;
    public float distance;
}

Performance Optimization

Neural networks can be computationally expensive. Here are strategies to optimize them for real-time games:

Optimization Techniques

1. Reduce Network Size

// Smaller networks = faster computation
// 3 inputs, 2 hidden, 2 outputs (fast)
// vs
// 10 inputs, 20 hidden, 5 outputs (slow)

2. Pre-compute Common Calculations

// Cache activation results at a coarse resolution. Raw float keys almost
// never repeat exactly, so an unquantized cache gets no hits and grows forever.
private Dictionary<int, float> sigmoidCache = new Dictionary<int, float>();

private float Sigmoid(float x)
{
    int key = Mathf.RoundToInt(x * 100f); // quantize to steps of 0.01

    float result;
    if (sigmoidCache.TryGetValue(key, out result))
        return result;

    result = 1f / (1f + Mathf.Exp(-x));
    sigmoidCache[key] = result;
    return result;
}

3. Update Less Frequently

// Update neural network every N frames instead of every frame
private int updateInterval = 5; // Update every 5 frames

void Update()
{
    if (Time.frameCount % updateInterval == 0)
    {
        MakeNeuralNetworkDecision();
    }
}

4. Use Simpler Activation Functions

// ReLU is cheaper to compute than Sigmoid
private float ReLU(float x)
{
    return Mathf.Max(0f, x);
}
// Note: ReLU outputs are unbounded, so keep Sigmoid on the output layer
// if you read the outputs as probabilities.

5. Batch Processing

// Process multiple AI entities together
public void ProcessBatch(List<AIEntity> entities)
{
    // Evaluating every entity's network in one tight loop amortizes
    // per-call overhead; the bigger win comes from vectorizing the math
    // itself, e.g. with Unity's Job System and Burst, or a matrix library.
}
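One more scheduling trick that pairs well with techniques 3 and 5: if every entity uses the same update interval, they all run their networks on the same frame, causing a periodic cost spike. Giving each entity a random phase offset spreads the work evenly (a sketch; `MakeNeuralNetworkDecision` is the same placeholder used in technique 3):

```csharp
// Stagger neural updates across entities so the per-frame cost stays flat
// instead of spiking every N frames.
private int updateInterval = 5;
private int updateOffset;

void Start()
{
    // Each entity picks a different phase within the interval
    updateOffset = Random.Range(0, updateInterval);
}

void Update()
{
    if ((Time.frameCount + updateOffset) % updateInterval == 0)
    {
        MakeNeuralNetworkDecision();
    }
}
```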

Practical Applications

1. Enemy AI Decision Making

Neural networks can replace or enhance traditional state machines for more complex decision-making:

public class NeuralEnemyAI : MonoBehaviour
{
    private SimpleNeuralNetwork brain;

    // Inputs: player distance, health ratio, weapon type, time of day
    // Outputs: attack probability, patrol probability, hide probability

    public void MakeDecision()
    {
        float[] inputs = GatherInputs();
        float[] outputs = brain.Forward(inputs);

        // Choose action based on highest output
        int action = GetHighestOutputIndex(outputs);
        ExecuteAction(action);
    }
}

2. Difficulty Adjustment

Neural networks can learn optimal difficulty settings for individual players:

public class AdaptiveDifficulty
{
    private SimpleNeuralNetwork difficultyNetwork;

    // Inputs: player skill, play time, death count, completion rate
    // Output: optimal difficulty multiplier

    public float CalculateDifficulty(PlayerStats stats)
    {
        float[] inputs = {
            stats.skillLevel,
            stats.playTime / 3600f, // Hours
            stats.deathCount / 100f,
            stats.completionRate
        };

        float[] output = difficultyNetwork.Forward(inputs);
        return output[0]; // Difficulty multiplier
    }
}

3. Procedural Content Generation

Neural networks can generate game content based on player preferences:

public class NeuralContentGenerator
{
    private SimpleNeuralNetwork generator;

    // Inputs: player preferences, game state, difficulty
    // Outputs: level parameters, enemy types, item placement

    public LevelData GenerateLevel(PlayerProfile profile, GameState state)
    {
        float[] inputs = ProfileToInputs(profile, state);
        float[] outputs = generator.Forward(inputs);

        return OutputsToLevelData(outputs);
    }
}

Common Challenges and Solutions

Challenge: Network Takes Too Long to Train

Problem: Training neural networks can be time-consuming.

Solution:

  • Use pre-trained networks when possible
  • Train offline, use trained network in game
  • Use simpler networks for real-time learning
  • Implement incremental learning (update gradually)
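One way to realize "train offline, use the trained network in game" is to serialize the weights. A hedged sketch: `NetworkWeights` is a hypothetical container, and it assumes you add accessors to `SimpleNeuralNetwork` for reading and writing its weight arrays (the class as written keeps them private):

```csharp
using System.IO;
using UnityEngine;

// Flat, serializable snapshot of a trained network's parameters
[System.Serializable]
public class NetworkWeights
{
    public float[] inputToHidden;   // flattened [input, hidden] matrix
    public float[] hiddenToOutput;  // flattened [hidden, output] matrix
    public float[] hiddenBiases;
    public float[] outputBiases;
}

public static class NetworkStorage
{
    // Save after offline training (e.g., in the editor or a build step)
    public static void Save(NetworkWeights weights, string path)
    {
        File.WriteAllText(path, JsonUtility.ToJson(weights));
    }

    // Load at runtime so no training cost is paid in-game
    public static NetworkWeights Load(string path)
    {
        return JsonUtility.FromJson<NetworkWeights>(File.ReadAllText(path));
    }
}
```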

Challenge: Network Makes Unpredictable Decisions

Problem: Neural networks can be hard to debug and predict.

Solution:

  • Add constraints to network outputs
  • Use ensemble methods (multiple networks voting)
  • Implement fallback to traditional AI
  • Log and analyze network decisions
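The fallback bullet can be made concrete by gating on the network's confidence: when the strongest output barely beats the runner-up, defer to your existing rule-based AI. A sketch (it assumes the attack/flee outputs from earlier in the chapter; `RunStateMachineFallback` is a hypothetical entry point into your traditional AI):

```csharp
// If the decision margin is small, the network is effectively guessing —
// fall back to predictable, debuggable rule-based behavior instead.
private const float ConfidenceMargin = 0.15f;

private void Decide(float[] outputs)
{
    float best = Mathf.Max(outputs[0], outputs[1]);
    float second = Mathf.Min(outputs[0], outputs[1]);

    if (best - second < ConfidenceMargin)
    {
        Debug.Log("Low confidence — using rule-based fallback");
        RunStateMachineFallback(); // hypothetical traditional-AI entry point
    }
    else if (outputs[0] > outputs[1])
    {
        Attack();
    }
    else
    {
        Flee();
    }
}
```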

Challenge: Performance Issues

Problem: Neural networks slow down game performance.

Solution:

  • Reduce network size
  • Update less frequently
  • Use simpler activation functions
  • Consider using neural networks only for important AI entities

Tools and Resources

Neural Network Libraries

  • Unity ML-Agents: Unity's machine learning toolkit
  • TensorFlow.js: JavaScript neural network library
  • SharpNeat: C# neural network library
  • Neural Network Playground: Visual learning tool

Learning Resources

  • Neural Networks and Deep Learning: Online course
  • Game AI Pro Book Series: Game-specific AI techniques
  • Unity ML-Agents Documentation: Official Unity resources

Next Steps

You've learned how to implement neural networks for game AI, from basic feedforward networks to adaptive learning systems. In the next chapter, AI Ethics in Game Development, you'll explore the ethical considerations and responsible practices for using AI in games.

Practice Exercise:

  • Create a simple neural network that decides enemy behavior
  • Implement a player behavior predictor
  • Build an adaptive difficulty system using neural networks
  • Optimize a neural network for real-time performance

Neural networks open up incredible possibilities for game AI. Start simple, experiment, and gradually build more sophisticated systems. Your AI will become more intelligent and engaging than ever before!