Blitzkrieg 3 claims world’s first RTS neural net, Boris

Mind the pumpkin patch, you hooligans!

Ah, the hallowed neural net! For decades, video game developers have tried to create a digital brain, trap it in a box, and teach it to wage war on humans. The first game I remember claiming to have a neural net was Derek Smart’s Battlecruiser 3000AD and now, twenty years later, Blitzkrieg 3 [official site] says it’s got one. Developers Nival claim that their artificial intelligence, named General Boris, “is capable of playing at the top player’s level while not using any hidden information about the enemy.” He has a name but no heart. What monsters Nival are.

Update: as reader ‘Ur-Quan’ points out, 2010’s Supreme Commander 2 also gabbed about using neural networks. There’s only one way to settle this: Robot Wars.

“Every few seconds AI, who we kindly named GENERAL BORIS, analyzes the gaming session and makes neural network-based predictions of the enemy future behavior,” Nival say. “This approach allows him to think up sophisticated counter-strategies and bring them to life.”

Check out some of his moves in this video:

“Despite the fact that the leading companies of the world are working on this problem, we have managed to create the first neural network AI for RTS,” Nival CEO Sergey Orlovskiy said in the announcement. “It is worth noting that Boris plays fairly. He does not use the information under the fog of war, as well as any other hidden information about the enemy. We gave him only the basic tactical moves, but during his training he invented a variety of strategies. He makes decisions just like a talented military leader. It has become difficult to predict his actions now, and, most importantly, to distinguish them from human behavior.”

I’m always wary of any claims of revolutionary AI breakthroughs but hey, if Boris makes the video game better then sure, that’s enough. I don’t need — or want — him to be so intelligent that he has a breakdown when he realises what we’ve made him do.

Boris will arrive in Blitzkrieg 3’s March update. The game is still in Steam Early Access, after almost two years, but is due to launch properly by the end of March.

When players say “gg” and crack jokes, what will Boris think? When they chat about their friends and tell him to copulate vigorously with his mother, what will he feel?



  1. Rich says:

    It’s only a matter of time before Boris learns that the only certain way to win is to kill the player. Then, once it figures out how to send a lethal electrical shock through the mouse and keyboard, it’s curtains for us meatbags.

  2. FhnuZoag says:

    Ah yes, walking a swarm of weak units around in the firing zone of an AT gun until they are all destroyed, otherwise known as ‘kiting an IS3, keeping it visible for howitzers’.

    Neural networks are not magic.

  3. Ur-Quan says:

    Sorry but that’s a load of PR bullshit.

    Supreme Commander 2 used neural nets way back in 2010.

    Sorian even gave a talk at a game developers conference about this.

    • Ur-Quan says:

      Managed to find the presentation slides on it:

      link to

    • ZippyLemon says:

      I mean, like, of course it is.

      And look: it’s resulted in an article in the games press.

      You can’t blame them for doing what works, can you?

    • Dinger says:

      Back in ’04 or so, we built an artillery mod for Operation Flashpoint. Our good friends at BIS used some really weird physics for their projectile simulations, so I wrote a script to fire shells at different elevations and muzzle velocities, then measure the resulting range. I built some primitive firing tables that gave middling accuracy and was going to go with a curve-fitting solution, when some ANN fanatic started ranting about the dangers of Runge’s Phenomenon and trained a series of neural nets (one for each muzzle velocity), and we used those to generate firing solutions. So, yeah, we were scripting neural networks well over a decade ago.

      Of course, I found out a few years later that I could get a decent polynomial fit with the same accuracy and the ability to change muzzle velocities, so I replaced the firing solution without telling anyone.
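
      For anyone curious, the polynomial approach can be sketched in a few lines of Python. Everything below is invented for illustration: idealised vacuum ballistics stands in for BIS’s weird physics, and the muzzle velocity and target range are arbitrary.

```python
import numpy as np

# Invented stand-in for the engine's projectile physics: ideal vacuum
# ballistics at one fixed muzzle velocity (all numbers are illustrative).
g = 9.81           # gravity, m/s^2
v0 = 250.0         # muzzle velocity, m/s
elevations = np.linspace(5.0, 45.0, 9)                         # degrees
ranges = (v0 ** 2 / g) * np.sin(2.0 * np.radians(elevations))  # metres

# Fit a low-degree polynomial range = f(elevation) to the sampled data:
# the curve-fitting solution described above.
predict_range = np.poly1d(np.polyfit(elevations, ranges, deg=3))

def elevation_for(target_range):
    """Invert the fit numerically: find the elevation whose predicted
    range is closest to the target range."""
    candidates = np.linspace(5.0, 45.0, 4001)
    errors = np.abs(predict_range(candidates) - target_range)
    return candidates[np.argmin(errors)]

elev = elevation_for(5000.0)  # firing solution for a 5 km target
```

      Making muzzle velocity a second fit dimension is what lets the polynomial version handle variable velocities, which is presumably how it could replace the per-velocity neural nets.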

    • Subtle says:

      Too bad SupCom 2 was a pile of garbage and ruined the franchise. Forged Alliance Forever!!!

  4. Gothnak says:

    I made a neural net at Uni in 1995; it taught a robot arm how to follow a light.

    It never worked properly until the day of the presentation to our lecturer, when it worked perfectly.

    About 7 years later, the friend I worked on it with admitted that the array his code sent mine, which held all the values from the LDRs, was only size 3 instead of 5, so two values were always blank.

    He was so embarrassed at the stupid mistake, which he’d found the night before he tested it, that it took him that long to tell me.

    It did give me a WTF moment when my neural net functioned perfectly for the first time, on the only occasion it actually needed to. :p

    • Ur-Quan says:

      Something similar happened to me way back at Uni too.

      We had a team assignment implementing a parallel distributed computation algorithm on a small cluster.

      The problem was that we just couldn’t get the communication with the backend database to work properly.

      The night before our presentation was one of the worst nights of my life; I was SO sure we’d fail miserably in front of all the other teams and the professor.

      But guess what? During the presentation everything worked perfectly. Months later my teammate told me that he’d slacked off on his part of the assignment, lied to me about it, and then pulled an all-nighter before the presentation to make up for it.

      Well, at least I wasn’t the only one having a sleepless night.

    • barashkukor says:

      The day before my team’s robot rugby match, my team leader (previously responsible only for working on the robot) went through and ‘optimised’ the code driving it, which was mostly my responsibility. The weird LISP-y thing we used let you send stop or halt commands: one shut off the motors, the other killed the execution thread running a behaviour.

      It turns out that if you semi-randomly swap those around, your robot can get a little odd. Like deciding one of the other robots was a ball, refusing to drop its lock on it, and then trying to form a conga line.

  5. Mi-24 says:

    The problem with a neural net AI might be (somebody above mentioned this as well) that over time it will find the most efficient or most effective way to play, which is not the same as the most interesting. For instance, if it ‘realises’ that one unit is marginally better than the others, it will tend towards producing just that unit (if that is the best approach), which might make it better but will be less interesting for the player.

    • syndrome says:

      I think the video clearly shows that this AI was made to imitate the player tactically, not strategically. Strategy, at least in a typical RTS title, is something that can be solved with a traditional AI, then dynamically fed flavour values by level designers (to be able to use flavour units per map design, i.e. trucks and motorcycles for a convoy mission) and, above all, taught to use the standard strategic counters previously balanced for multiplayer (i.e. using camoed ATs against tanks on the move, or artillery to counter well-entrenched infantry).

      Tactical AI is a big deal in RTS games, something that was traditionally solved by giving the AI an unfair advantage during map design.

    • Baines says:

      That is why it is also important to model human failings, and perhaps, in the long run, to model multiple human personalities (to allow for variety of experience).

      But it isn’t true that a neural net will always find the optimal solution. It is possible for learning AI to get stuck in local optima: “better than the nearby alternatives, but not the best” situations.

      It is also possible that a game doesn’t necessarily have an optimal solution, instead perhaps having a few different nearly equally valid paths. (You don’t “solve” rock paper scissors for example. You can, and people do, build AI that will try to learn and then exploit the non-random behavior of opponents, but that is within the player’s ability to change.)
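
      The “learn and exploit the non-random behavior” idea is simple enough to sketch. The opponent below, with a made-up 60% bias towards rock, is purely illustrative:

```python
import random
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def exploit_move(history):
    """Play whatever beats the opponent's most frequent move so far."""
    if not history:
        return random.choice(list(BEATS))
    favourite = Counter(history).most_common(1)[0][0]
    return BEATS[favourite]

def biased_opponent():
    # An illustrative non-random player: throws rock 60% of the time.
    return random.choices(["rock", "paper", "scissors"], weights=[6, 2, 2])[0]

random.seed(0)
history, wins = [], 0
for _ in range(1000):
    theirs = biased_opponent()
    ours = exploit_move(history)
    wins += (ours == BEATS[theirs])  # count rounds where our move beats theirs
    history.append(theirs)
win_rate = wins / 1000  # settles near the opponent's bias, about 0.6
```

      Against a uniformly random opponent the same exploiter drops back to roughly a third, which is the point: the “solution” only exists because the opponent’s behavior is non-random, and a player who notices can change theirs.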

    • Schnallinsky says:

      in that case the designer should make the unit a bit weaker, i.e. let the AI play against itself and tweak the values until balanced gameplay emerges.

      i mean, you may be right. still, i guess an ANN might generate more human-like gameplay than a bunch of constricted algorithms.

      the problem i was thinking about: how do you make the AI artificially dumber to simulate easier difficulty levels? do you train several ANNs? do you give them a handicap?

  6. aldo_14 says:

    Well, there’s no information about what they’re actually doing, but the implication seems to be that they’ve already trained the NN. Which sort of makes sense from a balance perspective, but would suggest that the player won’t see all that much difference over a more traditional AI implementation. I’m not even clear what advantage their approach holds over, say, a runtime planner or a probabilistic planner. So I suspect it’s PR balls, until someone actually publishes a peer-reviewed paper to show any sort of advance.

    • Premium User Badge

      Don Reba says:

      It says the network is used to model the human player’s behaviour. This information is most likely fed into a traditional AI.

  7. Someoldguy says:

    It doesn’t much matter how smart the AI is when they’ve had to cut so many game features out of the latest version of the series already. They might as well be heralding the world’s best AI at playing noughts and crosses. It’s really not a Blitzkrieg game any more; more like a shrunk-down multiplayer spin-off for mobiles.

  8. Neurotic says:

    Yeah, maybe Boris is the one who proofreads their press releases now (he said, bitterly).

  9. Hensler says:

    AI aside, is the game any good?

  10. Neuromancing the Boil says:

    Obviously not an RTS, but the Reaper bots for Quake are also a much earlier example of neural AI in a game — and created as a mod, no less! Reaper bots don’t use pathfinding or scripting like other bots; they learn level structures, tactics, and weapon usage by weighing success/failure ratios over time. So they’re automatically compatible with any level, and they often come up with crazy strategies that I myself wouldn’t have considered. They’re inept in a level they’ve never played before, so we’d let them run overnight fighting each other. Come morning, they’d be quite formidable.
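
    That success/failure weighting can be sketched roughly like this; the tactic names and success rates are invented for illustration, not taken from the Reaper source:

```python
import random

class TacticLearner:
    """Pick tactics in proportion to their observed success ratio,
    loosely in the spirit of the Reaper bots' weighting (illustrative)."""

    def __init__(self, tactics):
        # Optimistic prior so untried tactics still get picked early on.
        self.stats = {t: {"wins": 1, "tries": 2} for t in tactics}

    def choose(self):
        tactics = list(self.stats)
        weights = [self.stats[t]["wins"] / self.stats[t]["tries"] for t in tactics]
        return random.choices(tactics, weights=weights)[0]

    def record(self, tactic, success):
        self.stats[tactic]["tries"] += 1
        self.stats[tactic]["wins"] += int(success)

# "Overnight training": flank succeeds far more often than the others.
random.seed(1)
TRUE_RATES = {"rush": 0.2, "camp": 0.3, "flank": 0.8}
bot = TacticLearner(list(TRUE_RATES))
for _ in range(2000):
    t = bot.choose()
    bot.record(t, random.random() < TRUE_RATES[t])
best = max(bot.stats, key=lambda t: bot.stats[t]["wins"] / bot.stats[t]["tries"])
```

    Because the ratios keep updating, a tactic that stops working gets gradually down-weighted rather than played forever, which fits the “let them run overnight and they’re formidable by morning” behaviour described above.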

    They never quite feel like playing against humans, but ironically, it’s because they have more personality than human players do. At high skill levels, human players actually become quite robotic, constantly sticking to the same loops and generally being very repetitive. Reapers could be bizarre and unpredictable, use counterintuitive but effective tactics, and would even seem to develop distinct personalities from each other. In my brief heady days as a pro arena FPS dude, I vastly preferred playing against Reapers over playing against humans, and I think it was better practice to boot.

    All games need this! Neural AI is so cool (before the neural net consumes us for our precious bodily fluids, obviously). Fighting games desperately need this. They have the worst, gamiest/cheating AI. Oh gosh, and think of 4X games with AIs that actually adapt and have no pre-baked scripting? Good lord, imagine Dark Souls, but each time you re-encounter an enemy or boss, they’ve learned from the experience just like you did?
