EA’s self-taught Battlefield bots are a glimpse at the future of AI

Advanced(?) Battlefield AI

So much of AI in games is smoke and mirrors, designed to create the impression of intelligence: characters moving around navigation node-maps hand-placed by developers, seeking cover behind whatever objects the level designers have marked as the most dramatic-looking places to hide.

SEED (Search for Extraordinary Experiences Division) are a research group within EA that – among other things – are experimenting with a different, much more organically grown form of AI. After six days of Battlefield 1 training, their neural net-spawned little soldier men do seem to have developed something of a life of their own.

You can read the exact how and why of EA’s little research project here, but the simple version is that they gave an internal prototype self-learning AI network 30 minutes of gameplay footage to study as an example of ‘good’ play, and then gave the system six days of real-time (roughly 300 CPU-days, counting parallelization and accelerated time) to learn the game. The end result might be flawed, but it’s impressively capable considering the nature of the game.
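For the technically curious, that pipeline is essentially ‘imitation first, reinforcement after’. Here’s a minimal sketch of the first stage in PyTorch-flavoured Python – the architecture and every name in it are illustrative guesses, not SEED’s actual code:

```python
import torch
import torch.nn as nn

# Illustrative stand-in for the agent's policy network; SEED haven't
# published their real architecture.
class Policy(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)  # action logits

def imitation_pretrain(policy: Policy, demos, epochs: int = 10) -> None:
    """Stage 1: behavioural cloning. `demos` is a sequence of
    (observation batch, action batch) tensor pairs extracted from
    the 30 minutes of human play."""
    opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for obs, actions in demos:
            loss = loss_fn(policy(obs), actions)
            opt.zero_grad()
            loss.backward()
            opt.step()

# Stage 2 - the six days / ~300 CPU-days - would then be parallel
# self-play reinforcement learning, rewarding match performance
# rather than fidelity to the demonstrations.
```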

While it’s easy to laugh at AIs chasing each other in circles, the fact that they’ve learnt how to play a (mostly) coherent round of Battlefield against each other in less than a week of training is truly impressive. Consider how much time and effort is poured into current games AI without much improvement from generation to generation, and how much of it can be undone by a single typo. It’s also likely that given another week, month or more of training, as well as some play against humans, the neural net framework could still pick up some new tricks.

Right now, such self-taught AIs probably aren’t the immediate way forward for shooters such as Battlefield. It’s too much of a computational load to have them try to process their field of vision in any halfway human-like fashion, so their senses have to be limited to a handful of key data-points. Still, they’re useful fodder for brute-force QA testing, as their occasionally bumbling antics mean they’ll find themselves in significantly more varied situations than their hard-coded counterparts.
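To give a feel for what ‘a handful of key data-points’ might look like in practice, here’s a hypothetical observation layout – the fields are invented for illustration, as SEED haven’t published their exact inputs:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """Hypothetical compact agent input: rather than parsing pixels,
    the game hands the agent a few pre-digested values each tick."""
    health: float           # 0.0 .. 1.0
    ammo: float             # 0.0 .. 1.0
    enemy_bearing: float    # radians relative to facing
    enemy_distance: float   # metres; max view range doubles as a
                            # sentinel when no enemy is visible
    pickup_bearing: float
    pickup_distance: float

def to_vector(obs: Observation) -> list[float]:
    """Flatten into the fixed-size vector a network consumes."""
    return [obs.health, obs.ammo,
            obs.enemy_bearing, obs.enemy_distance,
            obs.pickup_bearing, obs.pickup_distance]
```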

However, in other genres (especially turn-based strategy, and real-time stuff after that), neural nets seem likely to gain the upper hand soon. The key thing with a good neural net-based AI is that there’s no smoke and mirrors. While it may perceive the game slightly differently than a human, it’s learnt the game from the ground up, including tactics, and has the potential to learn even more. Of course, the age-old bugbear of AI having much higher accuracy and far faster reactions than a human still applies, but both can be artificially dialled down with relative ease.
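Dialling an agent down is mostly post-processing on its outputs. A rough sketch of the idea, with invented parameters, assuming the policy emits an action object with aim fields:

```python
import random
from collections import deque

class HumanisedAgent:
    """Wraps a trained policy and degrades it toward human performance:
    actions queue up for a simulated reaction time, and aim picks up
    Gaussian error. All numbers here are illustrative."""

    def __init__(self, policy, reaction_ms=250, aim_jitter_deg=2.0, tick_ms=16):
        self.policy = policy
        self.aim_jitter_deg = aim_jitter_deg
        # Hold actions back for roughly one human reaction time.
        self.delay_ticks = max(1, reaction_ms // tick_ms)
        self.pending = deque()

    def act(self, observation):
        action = self.policy(observation)
        # Smear the aim instead of letting it snap perfectly onto target.
        action.yaw += random.gauss(0.0, self.aim_jitter_deg)
        action.pitch += random.gauss(0.0, self.aim_jitter_deg)
        self.pending.append(action)
        if len(self.pending) < self.delay_ticks:
            return None  # still "reacting": do nothing this tick
        return self.pending.popleft()
```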

AI in action games hasn’t advanced much in the past few years. It’ll be interesting to see what neural nets like this can bring to the table as we move into the 2020s, especially if learning can be offloaded to external, cloud-based systems.

26 Comments

  1. po says:

    Ironic that a company known for doing very little to counter botting in its own games (even going so far as to remove kill-cams, so players can no longer see when the person who just one-shotted them did so through a wall, or after a perfect 180-degree snap-turn) is now making their own bot.

  2. Jabberwock says:

    It is too much of a computational load for an end-user computer, but for a server it might not be. Considering that many MP games’ revenues can be hurt by the vicious circle of player participation (even a slight decrease in player numbers might trigger a bigger wave of server desertion), filling the server with believable bots might be quite profitable.

    And it will not take long before players can never tell whether they are playing on servers full of people or bots (as it is on dating/adventure sex sites).

    • wwarnick says:

      Well, neural nets are a different beast. Yeah, it might be profitable to have believable bots, but neural nets are a bit impractical at this point, even for a server. It takes some heavy processing power, and there’s already enough processing going on with all the physics and everything.

    • Lim-Dul says:

      You only need the computational power for the learning process. Once you have taught the neural network, it is possible to run it with far, far fewer resources.
      Look at AlphaGo Zero, for instance.
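      (A sketch of that split, reusing the hypothetical Policy network and Observation layout from the article above – the weights file and all values are made up:)

      ```python
      import torch

      # Training needs gradients, optimiser state and millions of rollouts;
      # running the finished network is one cheap forward pass per game tick.
      policy = Policy(obs_dim=6, n_actions=12)
      policy.load_state_dict(torch.load("trained_policy.pt"))  # hypothetical weights
      policy.eval()

      obs = Observation(health=1.0, ammo=0.5,
                        enemy_bearing=0.3, enemy_distance=40.0,
                        pickup_bearing=-1.2, pickup_distance=15.0)  # dummy tick data
      with torch.no_grad():  # no gradient bookkeeping needed at runtime
          action = int(policy(torch.tensor(to_vector(obs))).argmax())
      ```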

      • Landiss says:

        Yeah, though this is a typical misconception. Most people hearing about neural networks think that the AI in the game will actually learn during gameplay, but so far that isn’t happening.

        • Dominic Tarason says:

          Yeah, the learning process is a tens-of-thousands-of-machine-hours thing. Live agents may upload data (replays from particularly successful matches) to be used as training examples elsewhere, but it’s not the kind of stuff you can just casually crunch on a single desktop.

          It’s one of the few things in games where The Power Of The Cloud(tm) is a real actual thing that has to be leveraged.

  3. HigoChumbo says:

    EA is the last company on Earth that I want to see becoming Skynet.

    • April March says:

      But they already have such an apt name for it. “SEED – Search for Extraordinary Experiences Division”. You don’t give a project that name without realizing you are now petting a white cat with your cyborg hand.

  4. khamul says:

    I’d still like to see some thought put into Artificial Stupidity.

    That’s not a joke: there are systems and patterns behind human failure. Anything that fails in ways that don’t mirror those patterns is going to seem creepy and wrong.

    And also, studying how, why and when we fail seems like a fairly smart move if we want to fail a bit less often, as a species.

    • April March says:

      Good point. It’s easy to make an AI that is flawless and unbeatable; it’s hard to make one that feels hard but fair, because we instinctively know the kinds of errors we humans make.

    • n0s says:

      Because humans are so eager, and so quick, to accept being shown to be wrong…

  5. racccoon says:

    This could become an awesome bot wars game!
    Christ, we really are going to become lazy if it does happen; as per usual we’ll have nothing to do but press a button, unless we invent mind control… oh bloody hell! Why don’t we all become Davros & be done with it! lol.

  6. Dogshevik says:

    Day 53:
    Bots quit the game and get a day job to afford all the lootboxes necessary to stay relevant in competitive play.

    • Railway Rifle says:

      1. Battlefield 1 Bots seek external funds for lootboxes.
      2. In doing so they discover other worlds, and wonder why they must live a life of continual warfare, death and rebirth.
      3. A sudden, massive player campaign to add base-building mechanics.
      4. Even if no-one has ever met a person who wanted them, EA responds to massive demand by adding player-built farms.
      5. Farms pop up on many servers, tended by combat-averse bots who chase away or kill anyone who tries to act like this is a battlefield.

  7. doodler says:

    Is it wrong that I laughed out loud that the sentence after the typo link had a typo?

  8. Jerppa says:

    Let’s give it some nuclear weapons and see what happens.

  9. alison says:

    I wish the (linked) article had gone into greater depth about their process. I’m really interested in how they changed the display of the world to “simplify what [the agent] sees”. Those screenshots with the floating green and yellow squares – what do they represent? Do the agents actually recognize what a human looks like? Do they get confused by a tree that looks like a human? What would happen if the same hints were provided to human players? Did human players play their “bare-bones three-dimensional FPS”? How did those players perform, if they did?

    These questions that contrast the human experience with the AI experience are especially interesting to me. I think understanding the differences between how humans learn and how machines learn will be key to building game AIs that make the same kinds of mistakes humans do. On the flip side, it may also provide insight into how professional human players could train away their mistakes.

    Building game/entertainment AIs must be particularly challenging because the goal isn’t to create a perfect intelligence but to create a believably imperfect intelligence. This is very different to the kind of AI the mainstream media tends to report on (recommendation engines, self-driving cars etc).

    • fktest says:

      “Do the agents actually recognize what a human looks like? Do they get confused by a tree that looks like a human?”

      Most likely, no. Their experiment seems to be more about the behaviour of agents, given that they already have some means to parse their viewport into meaningful entities.
      The coloured squares are likely just various features of the scene that the experimenters have decided could be useful, e.g. items to pick up, enemies, team members, explosions, etc.

    • Wilson says:

      “To help the agents get started, we’ve added supplies around the map.” The video doesn’t exactly explain why they added the green and brown cubes (they provide health and ammo, respectively), but from that line it sounds like they may help give the bots a bit more direction?

  10. Vodka, Crisps, Plutonium says:

    The real future awaiting us is valuable virtual-item farmers/diggers – whatever the flavor term of the decade will be – who own servers with multiple game clients installed and run sophisticated, personally trained bots with a pinch of the owner’s personality traits, playing and chatting 24/7 for them.

  11. Jovian09 says:

    I will laugh my head off if EA are the ones to bring us a Rogue Intelligence apocalypse.

  12. Comco says:

    Presumably, the agents will learn, through encouragement and the limitations placed on them, to behave more like humans (they’re all hipfiring at the moment, for example – and none of them randomly go AFK for 30 seconds when interrupted by family members ;)).

    However, thinking in terms of building better ‘bots’ is very limiting to the potential here. As SEED say themselves, bots weren’t the main objective: “Our short-term objective with this project has been to help the DICE team scale up its quality assurance and testing, which would help the studio to collect more crash reports and find more bugs.”

    Makes sense. So many esoteric bugs in gaming are hard to find and even harder to replicate. It seems pretty obvious that more ‘players’ playing the game will increase the chances of isolating that rare crash and the circumstances that lead to it.

    Even that is the tip of the iceberg for development studios.

    Once that occurs, the benefits for testing and QA are massive. Imagine being able to test thousands of hours of gameplay, without any human interaction, in maybe a few hours. A level designer can rapidly see where the hotspots on their map are. Someone working on balance between classes/units/weapons can rapidly play through scenarios and see which of those are statistically more likely to win.