The Tale Of The Ultimate Starcraft AI

By Jim Rossignol on January 19th, 2011 at 3:31 pm.


The techno-chaps over at Ars Technica have posted an interesting story about the development of the ultimate Starcraft AI. Written by student Haomiao Huang, it’s the story of the “Berkeley Overmind”, which recently beat a former Starcraft pro in head-to-head matches (although it is a way off beating the real current masters, it seems). Here’s a snippet: “In theory, a computer should be great at controlling many units simultaneously, since it’s not limited by human speeds. Indeed, there is a common misconception that because StarCraft is real-time, it must be a game of reflexes. But while speed is useful and important, it is no substitute for knowing the right thing to do… To handle these issues and limit computational overhead, our agent uses artificial potential fields for unit movement. The potential field controller generates virtual forces that push the mutalisks around, balancing attractive forces on targets with repulsive forces on threats. Summing up the forces acting on a mutalisk gives a direction to fly, resulting in a simple but robust control scheme.”
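The force-summing idea in that snippet can be sketched in a few lines of Python. This is purely illustrative – the function name, the inverse-square falloff, and the force constants are assumptions for the sketch, not taken from the actual Overmind code:

```python
import math

def steer(unit, targets, threats, attract=1.0, repel=4.0):
    """Return a unit-length (dx, dy) direction from summed virtual forces.

    Targets pull the unit toward them; threats push it away. The
    inverse-square falloff and the constants are illustrative choices.
    """
    fx = fy = 0.0
    ux, uy = unit
    for tx, ty in targets:
        dx, dy = tx - ux, ty - uy
        d = math.hypot(dx, dy) or 1e-9
        # Attractive force: pull toward the target, fading with distance.
        fx += attract * dx / (d * d)
        fy += attract * dy / (d * d)
    for tx, ty in threats:
        dx, dy = ux - tx, uy - ty
        d = math.hypot(dx, dy) or 1e-9
        # Repulsive force: push away from the threat, stronger when close.
        fx += repel * dx / (d * d)
        fy += repel * dy / (d * d)
    # Normalise the summed force into a direction to fly.
    norm = math.hypot(fx, fy) or 1e-9
    return fx / norm, fy / norm
```

A mutalisk sitting between a target on its right and a nearby threat on its left would get a direction pointing right – away from the threat, toward the target.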

A useful read for those of you interested in the subtleties of game AI, with some illustrative videos. Go read!


28 Comments »

  1. Tei says:

    A* describes a potential field, imho, with every tile as a vector/node of the field pointing to a nearby tile.

    Maybe potentials are more analogue; the problem with A* is that you have to define those “nodes/tiles”, I suppose. It doesn’t sound impossible to write A* over a dynamic grid of any resolution, much like appending nodes to a queue. But I digress.

    This
    http://www.jhlabs.com/maps/doc/vector_field.jpg
    versus this
    http://upload.wikimedia.org/wikipedia/commons/thumb/f/f4/Pathfinding_A_Star.svg/430px-Pathfinding_A_Star.svg.png

    • FriendlyFire says:

      A* is a pathfinding algorithm. Potential fields are a decision algorithm. They have very little in common.

      With A*, you must already know what you want to reach, a goal. With potential fields, you can just input all the constraints (repulsive and attractive fields) and the unit(s) will automatically know the best possible location and action to do. Potential fields are very much a short-range algorithm, being less suitable for large-scale movement.

      I’ve implemented both algorithms in a turn-based strategy game and can definitely say that they complement one another, depending on the situation. The biggest problem with potential fields is that they tend to be much less effective when obstacles (like terrain) come into play, because you then run into the phenomenon called “local maxima”: a unit reaches a spot that is better than everything immediately around it but still far from the true goal – created by things such as a cavity in a cliff. Units get stuck there because, as far as they can tell, they are in the best possible location – any movement lowers their potential. Ways around this include having units emit repulsive fields of their own, forcing themselves to keep moving, or pre-processing the potentials to fill in any concave surfaces so that no cavity exists.

      It’s really neat stuff and extremely appropriate to use with flying units.

    • heretic says:

      Don’t know about potential fields, but I’ve implemented A* myself – a very nice algorithm. There are other algorithms that work with heuristic functions, but personally I found A* really great for implementing some interesting real-time path-finding behaviours. It’s useful for modelling dynamic congestion: if an optimal path you’ve calculated suddenly becomes congested (by traffic, or whatever else), you fire the algorithm off again and change the original path accordingly.

      What’s nice is that since it’s a simple heuristic function you can really plug in any information you want about the world, and the algorithm will work (quite fast usually) to find an optimal path based on the constraints you gave it.

      It’s a best-first search, though; that worked fine for my project but may not be appropriate for others.
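      For anyone curious what the above looks like in practice, here is a minimal A* sketch in Python on a 4-connected grid – not from any of the projects mentioned, just an illustration with a Manhattan-distance heuristic (admissible for unit-cost moves):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid; grid[y][x] == 1 marks a wall.
    Returns the list of (x, y) cells from start to goal, or None."""
    def h(p):
        # Manhattan distance: the heuristic that guides the search.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # (f = g + h, g, position)
    g = {start: 0}
    came_from = {}
    while open_heap:
        _, cost, pos = heapq.heappop(open_heap)
        if pos == goal:
            path = [pos]
            while pos in came_from:
                pos = came_from[pos]
                path.append(pos)
            return path[::-1]
        if cost > g.get(pos, float("inf")):
            continue  # stale heap entry; a cheaper route was found later
        x, y = pos
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                ng = cost + 1
                if ng < g.get((nx, ny), float("inf")):
                    g[(nx, ny)] = ng
                    came_from[(nx, ny)] = pos
                    heapq.heappush(open_heap, (ng + h((nx, ny)), ng, (nx, ny)))
    return None
```

The priority queue always expands the node with the lowest f = g + h, which is what makes it best-first rather than depth-first.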

  2. Gnoupi says:

    The way the mutalisks are moving in the last videos – flanking, avoiding damage, etc. – makes me think of AI War. Not fully, but it’s starting to look like it.

    It’s this kind of competition that makes you realize just how impressive AI War is, especially its AI and the way it acts. Of course AI War is less focused on micro management and reflexes, but still.

    It would be interesting to see that kind of contest in AI War, building an AI able to compete with the enemy ones.

  3. Wilson says:

    Interesting article.

  4. Pijama says:

    Damn.

    Better start saving some cash now for the eventual neural implants….

  5. evilsooty says:

    They should get these guys to work on the Civ V AI!

    • Magrippinho says:

      I had a similar thought: “Creative Assembly, you have one last chance to make the AI halfway decent, or else we’ll be hiring these guys to handle it for the next Total War. The current staff will be sacked. Sincerely, Sega.”

    • Dominic White says:

      From the looks of it, this AI really isn’t ‘smart’, so much as it is delightfully gimmicky. It just pumps out mutalisks as fast as it can, and abuses its ability to micromanage 20+ units simultaneously to pull off hit and run attacks that a human player would never be capable of.

      Like they said, it takes advantage of things an AI can do that a human cannot.

    • Starky says:

      In general the trick of good human vs AI is for the AI to seem human and flawed, react in similar ways as players and be fun.

      Any game dev can write AI code that can beat a human player 100% of the time, just by pure cheating, FPS bots with perfect aim and perfect player detection, so on so forth.

      In Starcraft a bot could easily be designed to macro perfectly – it would never miss a pylon, it would always spend its money on churning out units, it would always perfectly saturate mineral lines and never queue units.
      Then it could perfectly micro many units individually (pulling back wounded units, for example), attacking 8 places at once. That is a lot harder and more sophisticated – but it’s not good video game AI. It could also map hack and respond based on that information.

      Making an AI that lives within the same constraints as a human player – say 200 APM for the best of players, fog of war, a limited viewpoint, and control constraints (mouse speed etc.) – that would be amazing game AI.

      Disclaimer for those unable to read between the lines: NOT talking about the article AI specifically, just AI in general.

    • donerkebab says:

      Incredible the haters that come out on this. This was one of the most interesting video game stories I have read in quite a few months. If you read the whole article you would realize that in Starcraft it’s simply not true that “anyone can write AI code that can beat a human player 100% of the time”. Also the AI was written for computer vs. computer, not computer vs. human. Talk about missing the point.

  6. Longrat says:

    Yay, I helped!

  7. opel says:

    That mutalisk micro is absolutely beautiful. Every competitive player dreams of that level of control.

  8. Dreamhacker says:

    Having studied algorithms and data structures at uni level myself, I am both incredibly impressed and incredibly intimidated by the prospect of making an evolutionary algorithmic AI for a game as complex as Starcraft.

    I’m also somewhat smug knowing that so-called “pro” players can still be beaten by AI players :)

  9. Mccy_McFlinn says:

    Having read the article I can see why the AI has to “cheat” when set to hard on most RTS games.

    • Wilson says:

      Yeah, the article does highlight very well why that happens. Of course, I’d be interested to know how much the fact that Starcraft is a micro-heavy game helped. It certainly gives the AI an advantage against human players, but I imagine getting an AI to play something like Civ5 well would be much harder, since the computer can gain no advantage from being faster. The stuff they did with the threat map and so on would be more important there.

    • Rhin says:

      Real-time decision making is really hard to encapsulate into useful data for an AI. The slower, more turn-based, and more rule-driven a game is, the more easily it can be modelled in a way that makes computing power easily applicable (see: Deep Blue).

      Basically, programmers have no good way of building an AI for something they can’t describe the rules for. (Well, there are neural networks and genetic algorithms, but then you don’t even really know what you’re building.)

  10. The Army of None says:

    That was the most interesting article I’ve read all week. All hail our new, glorious mutalisk overlords.

    • Melipone says:

      I for one welcome them.

      I agree, my attention did not waver throughout, great read.

  11. Brumisator says:

    We are so dead.

  12. SwiftRanger says:

    I think it says enough about SC that they had to rewrite the unit pathing as well.

    If we’re talking about RTS AI you can’t go around this guy.

  13. Daiv says:

    Good read. When can we expect the RPS Hivemind to consume this AI Overmind?

  14. Berzee says:

    A fine piece of wordsmanship! And linksmanship!

  15. golden_worm says:

    Good read, inspiring stuff.

    I have never played StarCraft; it’s always been a bit of a hole in my gaming knowledge. It never really had any appeal for me as a multiplayer game.

    Coming up with an AI sounded quite fun in a puzzle game sort of a way. I’m thinking an RTS crossed with SpaceChem crossed with Gratuitous Space Battles. The RTS genre almost doesn’t matter – just simple programmable “agents” to set against each other. Kind of like a cyber-brain Robot Wars.

  16. ManaTree says:

    I go to the same place in the same department as these guys (and the professor; I’ll be having him next semester!). It’s pretty crazy. Glad to know our school is still quite solid for its CS. :)

    I wish I was as crazy as them too. I’m definitely not up there now, and I’m not even sure if I will be by the time I get out.

  17. Hoaxfish says:

    Now, all we have to do is trick the real Skynet into playing Starcraft and humanity is safe… Though the mental stability of some South Koreans might have to be sacrificed.

    • Rhin says:

      I thought the plan was to trick Skynet into playing WoW? Preferably as a healer class.