
EA's self-taught Battlefield bots are a glimpse at the future of AI

You turn me right round, baby

So much of AI in games is smoke and mirrors, designed to create the impression of intelligence. Characters move around navigation node-maps hand-placed by developers, seeking cover behind whatever objects the level designers have marked as the most dramatic-looking places to hide.

SEED (Search for Extraordinary Experiences Division) are a research group within EA that - among other things - are experimenting with a different, much more organically grown form of AI. After six days of Battlefield 1 training, their neural net-spawned little soldier men do seem to have developed something of a life of their own.


You can read the exact how and why of EA's little research project here, but the simple version is that they gave an internal prototype self-learning AI network 30 minutes of gameplay footage to study as an example of 'good' play, and then gave the system six days of real time (roughly 300 CPU-days, counting parallelization and accelerated time) to learn the game. The end result might be flawed, but it's impressively capable considering the nature of the game.
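SEED haven't published their actual code, but the two-stage recipe they describe - clone the behaviour shown in the demo footage first, then let the bots keep learning from their own matches - is a familiar one, and a toy sketch of it might look something like this (network sizes, dimensions and names are invented for illustration):

```python
# Hypothetical sketch, not SEED's real pipeline: bootstrap a policy from
# recorded "good" play, then refine it with self-play reinforcement learning.
import torch
import torch.nn as nn

obs_dim, n_actions = 32, 12               # assumed sizes, purely illustrative

policy = nn.Sequential(                   # tiny stand-in for the real network
    nn.Linear(obs_dim, 64), nn.ReLU(),
    nn.Linear(64, n_actions),
)
optimiser = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stage 1: behaviour cloning on the 30 minutes of expert footage,
# treated here as (observation, action) training pairs.
def clone_from_demos(demo_obs, demo_actions, epochs=10):
    for _ in range(epochs):
        logits = policy(demo_obs)
        loss = loss_fn(logits, demo_actions)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()

# Stage 2 (not shown): self-play, where bots generate their own experience
# and are rewarded for in-game outcomes, further refining the cloned policy.
```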

While it's easy to laugh at AIs chasing each other in circles, the fact that they've learnt how to play a (mostly) coherent round of Battlefield against each other in less than a week of training is truly impressive. Consider how much time and effort is poured into conventional game AI without much improvement from generation to generation, and how much of it can be undone by a single typo. It's also likely that, given another week, month or more of training, plus some play against humans, the neural net framework could still pick up some new tricks.

Right now, such self-taught AIs probably aren't the immediate way forward for shooters such as Battlefield. It's too much of a computational load to have them process their field of vision in any halfway human-like fashion, so their senses have to be limited to a handful of key data points. Still, they're useful fodder for brute-force QA testing, as their occasionally bumbling antics mean they'll find themselves in significantly more varied situations than their hard-coded counterparts.
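SEED's real observation format isn't public, but "a handful of key data points" usually means something like a small, fixed set of numbers per frame rather than a rendered image. A purely illustrative example (every field name here is made up):

```python
# Hypothetical sketch of a compact per-frame observation a bot might receive
# instead of raw pixels. Not SEED's actual format.
from dataclasses import dataclass

@dataclass
class BotObservation:
    health: float                   # 0.0 - 1.0
    ammo_fraction: float            # 0.0 - 1.0
    nearest_enemy_bearing: float    # radians relative to facing
    nearest_enemy_distance: float   # metres
    enemy_visible: bool
    on_objective: bool

    def as_vector(self):
        """Flatten to the fixed-size numeric vector the network consumes."""
        return [self.health, self.ammo_fraction,
                self.nearest_enemy_bearing, self.nearest_enemy_distance,
                float(self.enemy_visible), float(self.on_objective)]
```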

However, in other genres (turn-based strategy especially, and real-time stuff after that) neural nets seem likely to gain the upper hand soon. The key thing about a good neural net-based AI is that there's no smoke and mirrors. While it may perceive the game slightly differently than a human, it has learnt the game from the ground up, tactics included, and has the potential to learn even more. Of course, there's the age-old bugbear of AI having far higher accuracy and faster reactions than any human, but those can be artificially lowered with relative ease.
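The article doesn't say how that lowering is done, but the usual trick is to add delay and noise on top of the trained policy rather than retrain it. A hypothetical sketch, with the policy interface and all the numbers invented:

```python
# Hypothetical sketch of handicapping a trained bot: delay its decisions and
# jitter its aim so it behaves more like a human opponent.
import random
from collections import deque

class HumanisedBot:
    def __init__(self, policy, reaction_delay_frames=12, aim_jitter_deg=2.5):
        self.policy = policy                 # assumed to return an action
        self.pending = deque()               # queue of not-yet-executed decisions
        self.delay = reaction_delay_frames
        self.jitter = aim_jitter_deg

    def act(self, observation):
        self.pending.append(self.policy(observation))
        if len(self.pending) <= self.delay:  # still "reacting" to what it saw
            return None                      # no action this frame
        action = self.pending.popleft()
        # add aim error so accuracy drops to human-ish levels
        # (assumes the action exposes yaw/pitch fields)
        action.yaw += random.gauss(0.0, self.jitter)
        action.pitch += random.gauss(0.0, self.jitter)
        return action
```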

AI in action games hasn't advanced much in the past few years. It'll be interesting to see what neural nets like this can bring to the table as we move into the 2020s, especially if learning can be offloaded to external, cloud-based systems.
