AI agents in gaming: from smarter NPCs to real-time companions

Most NPCs in games are scripted state machines. They follow a patrol route. They react to a trigger. They deliver a line of dialogue you’ve heard four times before. There’s no memory of past interactions, no adaptation, nothing surprising after the first few encounters. AI agents are starting to change all that — and honestly, the implications go well beyond just making games more entertaining.
Key Takeaway
AI agents are replacing scripted NPCs in games with adaptive characters that have persistent memory, dynamic goals, and genuine decision-making – creating emergent narrative and player experiences that no amount of hand-authored content can replicate.
If you’re building games or entertainment products, this is worth paying attention to. The technical approaches being developed in gaming are, if I’m honest, ahead of what most enterprise AI teams are currently doing. And the lessons transfer more directly than you might expect.
What AI agents actually add to gaming
The difference between a scripted NPC and an agent-based NPC is roughly like the difference between a vending machine and an actual conversation. A scripted NPC produces a fixed set of outputs for a fixed set of inputs. An agent-based NPC perceives its environment, holds state, reasons about what to do, takes action — and can do all of this in situations that were never explicitly programmed for.
This matters differently depending on what kind of game you’re building. In open-world games, it means NPCs who actually remember things. A merchant who’s holding a grudge. A guard who clocked your behaviour pattern two sessions ago and adjusted accordingly. These aren’t gimmicks — they fundamentally change how a player relates to a world. In narrative games, it means characters who can improvise authentic responses rather than following a branching script that writers had to author years before the player made their choice. And in companion experiences, it means an AI partner that understands your play style, adapts its strategy, and communicates in plain language instead of through a menu of predetermined options.
All three of these are being built right now. This isn’t theoretical.
LLM-agent-game: what agent-based NPCs look like in practice
The LLM-agent-game project by onjas-buidl on GitHub is one of the cleaner open-source examples of what happens when you give NPCs a proper reasoning loop. Each NPC is an LLM-powered agent with memory, goals, and a planning process that generates behaviour from those goals rather than from a predetermined script.
What you get are NPCs that genuinely surprise you. A merchant who refuses to trade because they remember you tried to steal from them days ago. A guard who’s started watching the east gate because you always use it. A villager with their own objective that intersects with the player’s quest in ways neither party anticipated. That last one, in particular, starts to feel like something qualitatively different from traditional game design.
The technical mechanism is a perceive-plan-act loop: the agent observes relevant state from its environment, reasons about what to do given its goals and memory, then executes an action. If that sounds familiar, it’s because it’s the same architecture that underpins business AI agents. The domain is different; the pattern is identical.
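The loop itself is simple enough to sketch. Here is a minimal, self-contained version of the east-gate guard example: the class and method names are illustrative, and the `plan` step is a hand-written heuristic standing in for what would be an LLM reasoning call in LLM-agent-game.

```python
from dataclasses import dataclass, field

@dataclass
class GuardNPC:
    """Minimal perceive-plan-act agent. `plan` is a stand-in for an
    LLM reasoning step; here it is a hand-written heuristic."""
    memory: list = field(default_factory=list)

    def perceive(self, world):
        # Read only the state this NPC could plausibly observe.
        return {"player_at_east_gate": world.get("player_pos") == "east_gate"}

    def plan(self, observation):
        # An LLM would reason over goals + memory; we approximate it.
        seen_before = sum(1 for m in self.memory if m.get("player_at_east_gate"))
        if observation["player_at_east_gate"] and seen_before >= 2:
            return "watch_east_gate"
        return "patrol"

    def act(self, world):
        obs = self.perceive(world)
        action = self.plan(obs)
        self.memory.append(obs)  # persist what was observed this tick
        return action

guard = GuardNPC()
actions = [guard.act({"player_pos": "east_gate"}) for _ in range(3)]
# After two observed visits, the guard adapts: patrol, patrol, watch_east_gate
```

The point of the sketch is the shape, not the heuristic: swap `plan` for an LLM call with the agent's goals and memory in the prompt, and you have the core of an agent-based NPC.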
The implementation challenges are similar too. Managing context window limits when an NPC has accumulated a long history. Making sure the agent’s reasoning is fast enough not to break immersion. Preventing the agent from doing something that violates the game’s internal logic. These are engineering problems the gaming community is actively working through, and the solutions will be applicable beyond games.
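The context-window problem in particular has a well-worn mitigation: keep recent events verbatim and collapse older ones into a summary before they enter the prompt. A minimal sketch, with the summarisation reduced to string truncation for illustration (a real system would summarise with the LLM itself):

```python
def compact_memory(events, keep_recent=3):
    """Keep recent events verbatim and collapse older ones into a
    one-line summary, bounding what goes into the LLM context."""
    if len(events) <= keep_recent:
        return events
    old, recent = events[:-keep_recent], events[-keep_recent:]
    summary = (
        f"[summary of {len(old)} earlier events: "
        + "; ".join(old[:2]) + " ...]"
    )
    return [summary] + recent

log = [f"event {i}" for i in range(10)]
compacted = compact_memory(log)
# 10 events become 1 summary line + the 3 most recent events
```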
driVLMe: vision-language agents in real-time environments
driVLMe is a research project applying vision-language models to real-time driving in a game environment. It’s worth knowing about because it represents a different class of problem: not an NPC with persistent goals and memory, but an agent that must perceive a high-bandwidth visual environment and act within tight latency constraints.
Standard VLMs are too slow for real-time control. driVLMe explores how to structure the perception and action pipeline to make this tractable. That’s genuinely hard, and the research matters.
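One common way to make a slow perception model fit a real-time budget is to decouple perception from control: run the expensive model on every Nth frame and drive a cheap per-frame controller off the cached result. This sketch is a generic illustration of that pattern, not driVLMe's actual pipeline; the class, labels, and stride are all assumptions.

```python
class FramePipeline:
    """Latency sketch: the slow vision model runs every `stride` frames;
    a cheap controller reuses the cached label in between."""

    def __init__(self, stride=4):
        self.stride = stride
        self.last_label = "clear"
        self.vlm_calls = 0

    def slow_vlm(self, frame_id):
        # Stand-in for an expensive vision-language model inference.
        self.vlm_calls += 1
        return "car_ahead" if frame_id < 4 else "clear"

    def step(self, frame_id):
        # Refresh perception only on every Nth frame...
        if frame_id % self.stride == 0:
            self.last_label = self.slow_vlm(frame_id)
        # ...and run the cheap per-frame controller off the cached label.
        return "brake" if self.last_label == "car_ahead" else "cruise"

pipe = FramePipeline()
actions = [pipe.step(f) for f in range(8)]
# 8 frames of control from only 2 model calls
```

The trade-off is staleness: the controller can act on perception that is up to `stride - 1` frames old, which is why stride selection is itself a tuning problem.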
For anyone building in robotics, autonomous vehicles, or any real-time visual environment, this work is directly relevant. For gaming founders, it opens up AI companions that perceive the game world through vision rather than structured state — which dramatically expands what they can understand and react to.
Real-time companions: the business model shift
The most commercially significant AI agent pattern in gaming right now is the real-time companion. An agent that’s always present during play. Knows the player. Communicates in natural language. Helps, challenges, or entertains based on what’s actually happening.
Several companies are building this. The companion agent remembers that you hate stealth missions and suggests alternatives. It knows you’ve been stuck on the same boss fight for two hours and adjusts how it talks to you. It can explain mechanics, suggest strategy, and celebrate with you in a way that doesn’t feel copy-pasted.
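Underneath, a companion like this is a player profile plus a policy for when to speak and what to say. A minimal sketch of that idea, with all thresholds, profile fields, and lines of dialogue invented for illustration (none of this is from a shipped product):

```python
player_profile = {
    "dislikes": {"stealth"},               # learned from past sessions
    "stuck_minutes": {"boss:warden": 120}, # tracked during this session
}

def companion_hint(profile, current_activity):
    """Pick a line based on what the profile says about this player.
    Thresholds and phrasing are illustrative assumptions."""
    if current_activity in profile["dislikes"]:
        return "There is a combat route past this section if you'd rather fight."
    stuck = profile["stuck_minutes"].get(current_activity, 0)
    if stuck > 60:
        return "That boss punishes greed - try baiting the third swing."
    return "Looking good."

hint = companion_hint(player_profile, "boss:warden")
```

In a real companion the dialogue would be generated by an LLM conditioned on the profile, but the structure (persistent profile in, context-appropriate intervention out) is the same.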
This is a subscription business, not a product sale. Players don’t buy it once; they engage with it continuously, and the value compounds as the agent learns more about them. That’s a fundamentally different economic model, and it creates a data flywheel that benefits whoever builds it well. Every interaction teaches the agent something about the player, which makes the companion more valuable, which drives more engagement, which generates more data. The founders who build this loop ethically and effectively will have an advantage that’s very difficult for latecomers to catch up with.
What patterns from game AI transfer to business AI
More than most people realise, honestly.
The perceive-plan-act loop. Game AI has been implementing this for decades. The version in LLM-agent-game — agent perceives structured environmental state, plans using an LLM reasoning step, executes actions with defined effects — maps directly to business agents operating in structured data environments.
Memory architecture. Games have developed sophisticated memory systems for NPCs: what they know, what they’ve observed, what they remember about specific characters, how memories decay. These are exactly the problems business AI agents face. The game AI literature has solutions the business AI community is only beginning to explore. If you want to understand agent memory patterns in depth, the post on AI agent memory covers the core architectural approaches.
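Memory decay is a good concrete example, because the naive version (forget old things) buries exactly the memories that matter. A common fix is to rank memories by importance weighted by recency, so a significant old event can outrank a trivial recent one. A sketch with an exponential half-life; the weighting scheme and numbers are illustrative, not from a specific game:

```python
import math

def memory_score(memory, now, half_life=24.0):
    """Importance weighted by exponential recency decay (hours).
    The scheme is an illustrative assumption, not a specific game's."""
    age = now - memory["t"]
    recency = math.exp(-age * math.log(2) / half_life)
    return memory["importance"] * recency

memories = [
    {"text": "player tried to steal", "importance": 9, "t": 0},
    {"text": "player bought bread",   "importance": 2, "t": 47},
    {"text": "player said hello",     "importance": 1, "t": 48},
]
ranked = sorted(memories, key=lambda m: memory_score(m, now=48), reverse=True)
# The two-day-old theft still outranks this hour's small talk
```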
Goal-directed behaviour. Game AI has spent years on agents that pursue goals intelligently in dynamic environments, handle unexpected obstacles, and re-plan when the situation changes. For certain categories of task, this is essentially solved. Business AI agents dealing with multi-step processes that encounter unexpected states would benefit from applying these techniques rather than reinventing them.
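The cheapest form of re-planning is to derive the plan from current world state every time you need it, so an unexpected obstacle just means calling the planner again. A toy sketch of that pattern (goal names, routes, and world keys are all invented for illustration):

```python
def make_plan(goal, world):
    """Naive goal-directed planner: the plan is re-derived from
    current world state, so re-planning is just calling it again."""
    if goal == "deliver_message":
        route = ["north_bridge"] if world["bridge_open"] else ["ferry", "dock"]
        return route + ["castle"]
    return []

world = {"bridge_open": True}
plan = make_plan("deliver_message", world)       # go via the bridge
world["bridge_open"] = False                     # unexpected obstacle appears
replanned = make_plan("deliver_message", world)  # same goal, new route
```

Real game planners (GOAP, HTN, behaviour trees) are far more sophisticated, but the principle carries over to business agents: keep the goal stable and recompute the path when the environment changes, rather than hard-coding the path.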
Real-time performance constraints. Games need decisions in milliseconds. The techniques developed to hit those targets — caching, pre-computation, hierarchical planning — are directly applicable to business AI systems with low-latency requirements.
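Caching is the simplest of these to show. If expensive reasoning is memoised on a coarse situation key, a repeated situation answers instantly instead of re-running the model. A sketch using Python's standard `functools.lru_cache`; the situation key and the stubbed "reasoning" are illustrative:

```python
import functools

@functools.lru_cache(maxsize=1024)
def cached_decision(situation_key):
    """Expensive reasoning (normally an LLM or planner call) memoised
    on a coarse situation key. The 'reasoning' here is a stub."""
    return f"response_to_{situation_key}"

first = cached_decision("ambush_at_gate")
second = cached_decision("ambush_at_gate")  # served from cache, no recompute
stats = cached_decision.cache_info()        # 1 miss, then 1 hit
```

The hard part in practice is choosing the key: too fine and you never hit the cache, too coarse and distinct situations get the same answer.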
Human-agent interaction design. This one is underrated. Games have spent decades studying how humans interact with AI characters and what makes those interactions feel natural. When to let the AI speak. When to let the human lead. How to avoid an agent that feels either passive or overbearing. These lessons apply directly to how you design business AI agent UX.
What this means for founders building games or entertainment products
The opportunity is significant. The implementation challenges are real. Here’s the honest version.
The good news: the foundational technology is accessible. You don’t need to build your own LLM to give NPCs genuine intelligence. The open-source examples show you the architecture; the LLM APIs provide the reasoning capability; you provide the game state and the character definitions. The barrier to entry is lower than it’s ever been.
The hard problems are engineering and design, not AI. Making decisions fast enough not to break immersion. Preventing agents from saying or doing things that violate content guidelines or the game world’s logic. Designing memory so agents feel consistent without holding every interaction in context indefinitely. Building the economic model around a companion agent that’s a continuous service rather than a one-time purchase.
The design challenge deserves particular emphasis. Agent-based NPCs can feel more real than scripted ones, but they can also feel uncanny and inconsistent in ways scripted NPCs never do. A scripted NPC always behaves as expected — predictable, yes, but safe. An agent-based NPC can surprise the player in genuinely delightful ways, but can also surprise them in ways that break immersion or feel fundamentally wrong. Finding the right balance is a design problem as much as a technical one.
The founders who do this well will treat agent behaviour as a design system, not just a technical feature. Define the agent’s personality with the same rigour you’d apply to any major character. Define the boundaries of what it will and won’t do. Build evaluation systems that let you measure whether the agent is behaving within those boundaries at scale. The same discipline applies to AI agents in business operations — the context differs, the discipline doesn’t.
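"Boundaries plus evaluation" can start as something very simple: tag each generated line with topics and check the tags against global and per-character limits before it ships. A minimal sketch; the topic tags, character names, and boundary sets are invented, and in practice the tagging would come from a classifier rather than being hand-supplied.

```python
FORBIDDEN_TOPICS = {"real-world politics", "out-of-world tech"}  # global rules
PERSONA_TABOOS = {"blacksmith": {"magic lore"}}                  # per-character

def within_boundaries(npc, tagged_topics):
    """Check a generated line's topic tags against global and
    per-character boundaries before the line is shown to the player."""
    banned = FORBIDDEN_TOPICS | PERSONA_TABOOS.get(npc, set())
    return not (set(tagged_topics) & banned)

ok = within_boundaries("blacksmith", ["weapon prices"])   # allowed
bad = within_boundaries("blacksmith", ["magic lore"])     # violates persona
```

Run the same check over logged interactions in bulk and you have the beginnings of the evaluation system the paragraph above describes: a measurable answer to "is this character staying in character at scale?"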
The near-term landscape
Agent-based NPCs aren’t a distant prospect. They’re being deployed in games now, in limited forms, and capability is expanding quickly. Within two or three years, player expectations in certain genres will shift from “NPCs follow scripts” to “NPCs feel like actual characters with memory and personality.” That shift, when it arrives, will be a platform shift.
Platform shifts create new winners. They also leave behind the people who missed the transition. The founders building agent-based character systems today — even in modest, constrained forms — will have a head start on the design and engineering skills that the market will demand.
And the patterns being developed in gaming are teaching the broader AI industry things about how agents operate in dynamic, real-time environments with human users. That feedback loop is worth paying attention to whether you’re building games or not.
Practical takeaways
- Start with the LLM-agent-game open-source project as a reference implementation. Understand the perceive-plan-act loop properly before you try to build something novel on top of it.
- The companion agent business model is the most commercially interesting pattern in gaming AI. Think through retention, subscription mechanics, and the data flywheel before you start designing the technical implementation.
- Treat agent behaviour as a design system, not a technical feature. Personality, boundaries, and evaluation criteria all need the same rigour as any major game character.
- The hard problems are latency, consistency, and content safety — not the underlying intelligence. Plan your engineering resources accordingly.
- The patterns from game AI — memory architecture, goal-directed behaviour, real-time performance — transfer directly to business AI systems. If you’re building agent systems outside gaming, the game AI literature is worth your time.
The NPCs in games being built right now will be looked back on the way we look back at the first websites: functional, recognisable, and nothing like what the medium eventually became. The agents are just getting started.
Frequently Asked Questions
How are AI agents different from traditional game NPCs?
Traditional NPCs are state machines with predetermined scripts — they patrol routes, respond to triggers, and play fixed dialogue with no memory or adaptation. AI agents have persistent memory, dynamic goal-setting, and the ability to adapt their behaviour based on player interactions, creating genuinely unpredictable encounters.
What are the main challenges of using AI agents in games?
Key challenges include computational cost of running LLMs at game scale, maintaining narrative coherence when AI generates dynamic dialogue, preventing agents from generating inappropriate content, and ensuring consistent character personality across diverse player interactions.
Which games are already using AI agents?
Several experimental and indie titles are deploying LLM-powered NPCs, and major studios are testing agent-based character systems. AI Dungeon pioneered narrative AI agents. More recently, games like Worlds and multiple research projects from studios like Ubisoft and Electronic Arts demonstrate agent-based character behaviour in production environments.
About the Author
Ronnie Huss is a serial founder and AI strategist based in London. He builds technology products across SaaS, AI, and blockchain. Learn more about Ronnie Huss →