
Google DeepMind's neural net plays a mean game of CTF

Rise of the robots

Google's DeepMind research division have made a pretty solid argument that the future of game AI lies in self-teaching neural networks. Not content with destroying chess forever (credit to the BBC), their most recent project was to have a team of AI agents learn a Quake 3-derived game of Capture The Flag from scratch. Not only did they master it: after nearly half a million simulated games, these bots aren't just better than human players, they also make more cooperative teammates than a human when paired with one.

The game being played is, admittedly, a relatively simple one. Maps are small (but procedurally generated), combat mechanics are basic (just 'tag' the enemy to make them drop a carried flag), and matches are only 2v2, but these AIs have taught themselves some surprisingly human-like strategies from scratch. They'll defend their base when needed, camp the enemy base while waiting for a teammate to score a capture, and cover their teammate when they think they might be at risk. You can see a video breakdown of the learning process below, and read more here.


Unlike regular game bots, which exist purely as in-game scripts, these AIs interact with the game much as a human would, give or take. They see the world as a stream of pixel images and enter inputs through an emulated game controller. While the AIs were given positive feedback for scoring, they knew nothing else at first. Not how to see, not how to control the game, and definitely not how to spawn-camp. And while they were initially trained in blocky walled arenas, they also fared well in a rolling, procedurally generated desert environment dotted with cacti.
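To make that interface concrete, here's a rough Python sketch of the loop described above: pixels and a sparse score-based reward come in, controller-style actions go out. This is a hypothetical illustration, not DeepMind's code; the ToyCTFEnv class and the random policy are invented stand-ins for the real game and the trained neural network.

```python
# Minimal sketch (assumptions, not DeepMind's implementation) of the
# pixels-in / controller-out loop described above.
import numpy as np

# Emulated controller inputs (hypothetical action set)
ACTIONS = ["forward", "back", "strafe_left", "strafe_right",
           "turn_left", "turn_right", "jump", "tag"]

class ToyCTFEnv:
    """Stand-in environment: returns an RGB frame each step and a sparse
    reward of +1 only on a (randomly simulated) flag capture."""
    def step(self, action):
        frame = np.random.randint(0, 256, size=(84, 84, 3), dtype=np.uint8)
        reward = 1.0 if np.random.rand() < 0.001 else 0.0  # rare capture event
        return frame, reward

def policy(frame):
    """Placeholder policy: a trained agent would map pixels to an action
    with a neural network; here we simply pick one at random."""
    return np.random.randint(len(ACTIONS))

env = ToyCTFEnv()
frame = np.zeros((84, 84, 3), dtype=np.uint8)
total_reward = 0.0
for t in range(1000):                  # one short episode
    action = policy(frame)             # pixels in...
    frame, reward = env.step(action)   # ...controller action out
    total_reward += reward             # game score is the only learning signal
print(f"episode return: {total_reward}")
```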

While training an AI like this would need to happen in a massive corporate environment (huge numbers of simulated games were run in parallel), I'm genuinely excited to see whether the resulting agents can be brought to consumer-scale hardware. Just how much processing power does it take to run an AI that reads the game by sight and reacts fast enough to play in real time, anyway? For those willing to dive deep on this, DeepMind's complete paper on the subject can be found here.
