Electric Dreams, Part 1: The Lost Future Of AI

In 2001 two scientific researchers, John Laird and Michael van Lent, wrote an article for AI Magazine titled ‘Human-Level AI’s Killer Application – Interactive Computer Games’. The magazine, published and distributed by the stern and serious American Association for Artificial Intelligence, went out to universities and laboratories around the world. In their piece, Laird and van Lent described a future for the games industry where cutting-edge artificial intelligence was the selling point for games. “The graphics race seems to have run its course,” they declared. As they saw it, “better AI [is] becoming the point of comparison” for modern games. This didn’t quite work out.

This is a series of posts about artificial intelligence and videogames. It’s also about science, society, the future, the past, YouTube, Elon Musk, and how all of these things can hurt and help the future of the games that we play and love. It’s about how Laird and van Lent’s dream never came true, and probably never will – but it’s also about a new hope that I have for science, research and games, and one that you can be a part of. In a sense, I’m going to claim the same thing that Laird and van Lent did fourteen years ago – that the games industry might be on the brink of major change. It’ll be up to you to decide if I’m repeating the same old failed predictions, or if something is different this time. In this first part, we’re going to look back and ask why nothing happened fourteen years ago, and examine our relationship with better AI in modern games.

The ‘killer app’ article came out one year into the new millennium, an exciting time for gamers whether or not you were interested in artificial intelligence. A new console generation was on the horizon, I was pre-ordering JRPGs like they were limited-edition jubilee memorabilia, and big games were appearing on the release schedule with equally big promises attached. Strategy games like Shogun: Total War promised to test your tactical thinking and offer exciting battles with human-like opponents (IGN called the AI ‘startlingly realistic’ in one preview). Artificial life in games like The Sims could make little digital people with hopes and fears and needs, people whose lives you could influence and poke and watch. Black & White, which came out within a few months of that issue of AI Magazine and was no doubt on Laird and van Lent’s minds, offered the player a chance to teach a creature right from wrong, and watch it learn from its actions.

It’s easy to see why people thought that AI was changing games. Traditional ideas like Total War’s pursuit of the perfect opponent inherited from the old AI tradition of chess playing, but they were matched with brave attempts to create new kinds of games with AI – tinkering with living things, understanding how they thought. Many of these games were celebrated for their use of AI, even if it never quite became a selling point. Yet the illusion often faded, revealing the clunky systems underneath – Shogun’s generals would stand in a hail of arrows mulling over their options, little Sims would stand around awkwardly to avoid invading someone’s personal space on the way up some stairs. This posed a problem for developers, because no AI is perfect and people tend to remember the time it slipped up rather than the many times that it didn’t.

One solution to this is to constantly strive for improvement: more intelligence, more understanding, more work on getting software to be robust and dependable. But it wasn’t the only solution, and one developer in particular was beginning to refine another answer to the AI problem in games. At the Game Developers Conference in 2002, Jaime Griesemer and Chris Butcher, two members of the original Halo team, spoke to an audience of designers, programmers, and other games industry professionals. “If you’re looking for tips about how to make the enemies in your game intelligent,” Jaime told the conference, “we don’t have any for you. We don’t know how to do that, but it sounds really hard.” It was a tongue-in-cheek comment – Halo’s AI enemies had a lot of work put into them to make them smart and interesting – but it hinted at the advice that Griesemer really wanted to get across. The simple fact was that players didn’t really know what they wanted. Players correlated intelligence with challenge – in other words, smarter enemies should be harder to defeat. Bungie noticed that if you simply doubled the health of an enemy, playtesters would report that they seemed more intelligent. The takeaway was a realisation that what the game was doing simply didn’t matter – what mattered was what the player thought the game was doing.

This anecdote often comes up at both industry and academic events, and I often hear developers defending this view by explaining that the AI’s primary function in a game is to entertain the player, and that this doesn’t necessarily require it to be intelligent. But it betrays a way of thinking that sums up the problem of AI and games: the industry needs things that work, and AI generally doesn’t. AI never seems to get any better in games – we always see the same hilarious pathfinding mistakes or errors in judgement. These days in particular, everyone will see it: games are now exposed to players in every minute of every hour of every day, through Twitch streams, through YouTube Let’s Plays, through simply watching your friend play a game in a Steam broadcast. If a game has a problem in it, someone will eventually see it, and if developers are afraid of that happening then AI is a terrifying prospect.

Why is it that AI always seems to trip up? It’s hard for us now, in an age of Alien: Isolation and Grand Theft Auto V, to remember what constituted innovation in the past. Concepts like crowds, squad AI or hiding in shadows were once revolutionary ideas. It turns out that as we get used to new technologies, we use the word ‘intelligent’ to describe them less and less, until eventually we just take them for granted, like YouTube’s video recommendations or the route planner on Google Maps. People in artificial intelligence call this The AI Effect – “AI is whatever hasn’t been done yet”, as the much-misquoted line goes. It’s natural, then, that AI often breaks down or doesn’t work quite right. In a sense that’s part of what it means to be AI.

Five years after Laird predicted a bright new future for games in his AI Magazine piece, he was interviewed by The Economist for an article about artificial intelligence in games, titled “When looks are no longer enough“. He’s quoted a few times in the article, but one quote in particular stands out. “We are topping out on the graphics,” he said. “So what’s going to be the next thing that improves gameplay?” Half a decade after his original prediction, the question was still rhetorical to Laird. The Economist seemed to think so too – the rest of the article heavily featured a game called Façade, a dynamic drama simulation that AI researchers Michael Mateas and Andrew Stern had spent years developing. Façade was extremely unusual – a game cutting-edge enough to warrant scientific papers being written about it, but playable and interesting enough to spread around the games world and be played by hundreds of thousands of people. “It’s an example of where I hope to see computer games go in five years,” Laird said. Six months after the Economist article was published, the PlayStation 3 hit shelves, going head-to-head with the Xbox 360. It was a year of Gears of War, Elder Scrolls, Company of Heroes and Call of Duty. The graphics race, it turned out, had a long way left to run.

Where does this leave us today? At the start of this piece I told you I was going to make the same claims as the ones made in that original article over a decade ago, a claim of change that has been so wrong in the past. If you look around at the current state of the games industry, it’s easy to see a similar atmosphere to 2001 or 2006. Games like Alien: Isolation and No Man’s Sky seem to hint at AI being applied at larger scales and more robustly than ever before. Just like 2006’s Façade, we’ve recently seen AI research projects like Prom Week breaking through into the games industry – Prom Week even won a nomination in the IGF. Does this mean we’re seeing the games industry finally accept AI as a worthwhile field to explore and experiment with?

I can’t tell you whether or not the time has finally come for Laird and van Lent’s dream of the future. My claim is going to be somewhat different, because the games industry has changed a lot in fourteen years. I’m going to argue over the course of this series that AI research no longer has to wait for the games industry to take it on. Instead, it’s time to acknowledge that academic research is the games industry, as much as indies, AAA or the people who play and talk about games are. By breaking down these mental walls between ‘academia’ and games, we can start to ask more important questions. What kind of contribution can academics make? How can they best make it? Who might be best-placed to help? Is ‘making a contribution’ something academics should even be doing?

We’ll meet some of the people asking exactly these questions in the next part, in two weeks’ time.

97 Comments

  1. BluePencil says:

    Excellent idea for a series and I really look forward to seeing the future entries.

  2. garfieldsam says:

    Awesome idea for a series. As someone who does machine learning for a living I’d love to see if/how the huge advances in the field for business and science have been leveraged in video games. But of course I am biased. :]

  3. Mike says:

    Hey everyone! If you want to ask any questions/ping me about the series, I’ll be loitering in the comments.

    • LaurieCheers says:

      I read this thinking “surely by now they should have mentioned Michael Cook and all his AI research projects?”, then scrolled back up to the top and saw it’s written _by_ Michael Cook!
      It’s like watching a movie starring Brad Pitt, set in a world that’s exactly like ours but for one detail – Brad Pitt doesn’t exist.

      • Koozer says:

        Maybe Mike is oblivious to all his research projects, as they are all actually carried out by an AI?

      • Mike says:

        Aww, thanks! That’s kind of you to think of me. Actually I did wonder if I should’ve mentioned who I am as context for the series… I’ll be doing it in the next part as I talk much more about scientists and folks that I know personally.

      • emotionengine says:

        Indeed. One of the first things that came to mind while reading this was this fascinating RPS article from 2012: link to rockpapershotgun.com

        And sure enough. . .

    • garfieldsam says:

      Hey Mike,

      Are you planning to write about the actual techniques used in game AI at all? If not, any recommendations or ideas on where I could learn more? I’ve been curious for a while, just haven’t done anything about it yet.

  4. Mungrul says:

    With Molyneux in the news, I hope Demis Hassabis and his work on Black & White doesn’t get brushed under the carpet.
    While the game itself may have been a tangled, unfinished mess (albeit still an enjoyable one), the creature AI was remarkable. I think it really got its chance to shine in the expansion, Creature Isle.

    Of course, Demis’ company, DeepMind, got bought by Google for £400M last year. And the company’s focus was machine learning. I doubt we’re going to see anything directly game-related come out of that deal, but whatever it is, I’m sure it’ll be interesting.

    • Mike says:

      Not game-related perhaps, but actually their work is using games a whole lot! Their machine learning experiments are all run on Atari games, and they’ve managed to get them playing the games pretty well, just from reading the screen’s pixel outputs. It’s cool stuff – although to be taken with a pinch of salt as far as long-term AI goes, I think.

  5. James Currie says:

    Really love this as an idea for a series. Even at the amateurish levels at which I can make digital things (Basically just mods) I am careful with my AI, more with difficulty than creating something akin to the Geth in Mass Effect (I hope they get the odd mention, they are a great example of how AI could operate).

    As I can think of nothing interesting, have an AI anecdote:

    On the Thrawn’s Revenge mod for Star Wars: Empire at War: Forces of Corruption I had an AI bug that meant the route programming would not work properly. That was something I was learning about at the time in Maths so I thought ‘I can fix this!’. I did, I also tweaked some other things that devoted processing power to the AI and I changed the algorithm such that it made backup plans and catered its fleets to attack specific fleets that I built. It wasn’t a big tweak in the programming but ‘it’ just obliterated me every time. I still keep that AI on a .text file somewhere in the EAW:FOC folder – one day I will defeat it, I just need to strike the heavy shipyards while it fights someone else. Unfortunately – it knows I will try that. Bugger.

    • evilbobthebob says:

      Cool, somebody else who mods FoC on RPS…I work with the Phoenix Rising mod primarily as a level designer, but I’ve looked at the AI scripting myself and it always amused me how Petroglyph added ‘magic’ fleets and armies for the AI so it appeared they were building consistently.

      • James Currie says:

        It’s quite a mess, isn’t it? When I was fiddling I found that little bit of code and thought ‘so wait, they can bring in fleets in an instant? Well damn!’ I let it do that for fighters and light ships and forced it to actually build anything larger. Fleets of 100 X-wings, 230 Y-wings and 60 Nebulon-B Frigates awaited me after that. Now it has to build everything like I do. Still beats me every time; the best I could ever do was hold onto fragments of territory and let it pick me off. The 2 hour battles, I tell you… that AI just didn’t quit.

        I might give the Phoenix Rising mod another look; when I last played it, it only had space battles.

        • evilbobthebob says:

          Yeah we’ve completely redone both land and space combat to use a unified system that’s less based around X-Wing/TIE Fighter or RPG values and more based on physical ship/unit characteristics.

  6. Napalm Sushi says:

    I recall Minecraft going for an astoundingly long time, even after 1.0, with no A.I. whatsoever. A U-bend in a cave was sufficient to throw off anything that might pursue you.

    • Mike says:

      Minecraft is interesting – we don’t think of it as having AI but it uses lots of AI techniques to do things like generate its world or even move a character from A to B.

      • GameCat says:

        Uhm, I don’t think a seeded procedural map generator has anything to do with AI.
        It doesn’t respond to anything other than the given seed, which is constant for every map.

        AI must constantly change its behaviour to adapt to the player character’s (or other AIs’) presence etc.

        • kalzekdor says:

          You’d actually be surprised. Complex world generation has a lot of AI related tasks associated with it. A quick example would be the interaction between rivers and nearby terrain. You need to determine how the water will flow, what effect that erosion has over time, adjust for vegetation and weather, etc. Whenever you’re modelling a large intersection of discrete systems AI techniques are usually the most efficient means. It’s not nearly as simple as throwing a seeded RNG at the problem, particularly if you want your world to feel natural.

          That’s not even getting into human acts modifying the world, such as in Dwarf Fortress world generation.

          • GameCat says:

            I was thinking about Minecraft, not about general procedural generation.
            I doubt Minecraft world generator simulates erosion etc. over certain amount of time.

          • kalzekdor says:

            I have no first hand knowledge of how Minecraft performs its world generation, so I just rattled off an example from personal experience. I do know Minecraft uses biomes and generates terrain on the fly as players explore, starting from a flattened 3d noise map. The terrain is then iterated over, adding more detail with each pass. It’s not the sort of thing people usually associate with AI, but the world is generated “intelligently”.

          • LionsPhil says:

            If we’re going to call any algorithm “intelligence”, then drawing a diagonal line to the screen is AI.

          • kalzekdor says:

            Any algorithm? No. Modeling large scale interactions through use of heuristic optimization? Yeah, I’d say that counts.

            A single ant behaves simply and predictably. A colony behaves intelligently and can adapt. Emergent complexity can be best modeled by AI techniques, even when dealing with simple “rules”.

          • Mike says:

            Thanks for fighting the corner of AI definitions! The heuristic search example was really good.

          • JamesTheNumberless says:

            GameCat is brilliantly demonstrating the “AI effect” here. There are two broad kinds of AI: those based on the physical symbol system hypothesis (or representation and search), and those which are biologically inspired.

            We are surrounded by so much AI based on the PSSH that we no longer see most of it as AI. Mention path-finding or terrain generation or even certain optimization problems (yes, even drawing a straight line on a raster) and nobody thinks very much of the “intelligence” involved. Nobody is impressed anymore by a Google search; it has long since moved from “it’s AI” to “it’s just an algorithm”. Really, it’s both.

            Mention neural networks or cellular automata and you may get a nod of approval from people looking for applications of AI, but actually these things are “just algorithms” too – in many ways they are even simpler algorithms, and it’s actually the emergent properties of their behaviour which are interesting.

            Use a genetic algorithm to play Tetris and it’s “AI”… for a while. Until people are familiar with it, then all you’re doing is “scripting a bot.” Put a neural network in a robot so it can navigate rooms and, for a while, it’s AI – until your vacuum cleaner starts working on the same principles :)

        • Mike says:

          When we say ‘AI’ in relation to games, what often comes to mind is moving little people around from A to B and shooting at them. But AI is a broad, broad field and lots of the problems that it solved a long time ago are now no longer thought of as AI any more – I actually mention this in the piece a bit, for this exact reason.

          World generation is still considered part of procedural generation, which is usually handled by AI departments in universities (and by other departments, like design, in some). AI is a really difficult term to pin down, in reality, but I’d class world generation as a part of it. You’re using software to try and solve a problem that we think – or used to think, anyway – requires intelligence. You can do it in very simple ways, but the task itself is still in the domain of AI, I think.
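
          To make that concrete, here’s a toy sketch in Python of that ‘start from seeded noise, then iterate passes over it’ pattern that kalzekdor described above. The smoothing pass is my own illustrative stand-in, not anything from Minecraft’s actual generator:

```python
import random

def heightmap(size, seed, passes=3):
    """Toy world generation: start from seeded random noise, then run
    repeated smoothing passes (each cell averaged with its neighbours)
    so the terrain gains large-scale coherence instead of pure static."""
    rng = random.Random(seed)
    h = [[rng.random() for _ in range(size)] for _ in range(size)]
    for _ in range(passes):
        new = [[0.0] * size for _ in range(size)]
        for y in range(size):
            for x in range(size):
                cells = [h[y][x]]
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < size and 0 <= nx < size:
                        cells.append(h[ny][nx])
                new[y][x] = sum(cells) / len(cells)
        h = new
    return h

world = heightmap(16, seed=42)
# The same seed always yields the same world, like Minecraft's seeded maps.
```

          The point isn’t the specific algorithm – it’s that the generator is making thousands of small decisions about what the world should look like, which is exactly the kind of task AI techniques get applied to.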

  7. sorian says:

    As an AI developer, I am really interested to see where this series goes.

    • FriendlyFire says:

      I hope your work on SupCom and PA gets a nod in the series!

      • LionsPhil says:

        Yeah, it’s one of the best RTS AIs I’ve seen.

        • LionsPhil says:

          Although come to think of it, Red Alert 2: Yuri’s Revenge on Brutal was pretty horrible too. It seemed to be competent enough to do some kind of heatmap of where your defenses were weakest and punch through that, rather than just dribbling tanks one at a time into your prism towers.

  8. tangoliber says:

    Often, the smarter AI is, the stupider it appears. Instead of simplifying what it needs to do, the developer gives it more freedom to make complex decisions, which it still isn’t capable of doing consistently.
    Brink had some of the most intelligent bot AI I’ve ever seen. Their decision making appears very nuanced to me. (Compare, for instance, how much better they are than the bots in another objective-based, class-based game, Killzone 2.) But there are way too many factors to consider in a match of Brink, and it is too much for any AI to process. So Brink gets the reputation for having comically bad AI, mostly because they tried. On the other hand, Quake 3 and Unreal Tournament continue to be praised for their AI… but the roles of those bots are much simpler.
    The AI in every RTS game seems terrible as well, but still probably has a lot of impressive stuff going on behind the scenes.
    ——————————-
    For single player FPS, I don’t believe that bots with complex AI are very fun. I prefer the Doom/Serious Sam/platformer way… where each enemy has a recognizable pattern, and it is through the combination and convergence of many patterns that the player becomes overwhelmed and unable to process it all.

    • LionsPhil says:

      UT2k4 bots are actually pretty nuanced. At higher levels they will form up into assault teams and take flanking routes to objectives. It’s map-hinted, obviously, but it’s there as part of the illusion of smarts.

      (They’re also pretty impressively robust. Drop them off the navigation node grid and they’ll wander around trying to find it again. Some older/other games just give up in this situation—HL1 NPCs would just freeze up.)

      • LionsPhil says:

        (Argh, stupid lack of edit.)

        Also, Brink really dropped the ball on some missions. That one where you had to stop a missile being launched just could not be achieved with bots IME, but was balanced, or even easy, with humans. They just could not focus on the objective sharply enough to complete it, or to defend you while you completed it.

    • Mike says:

      You get at a really crucial balance in a lot of games which use AI, that I want to come back to later on in the series: developers often have to make choices between creating AI for simpler scenarios, or risking producing something which is underwhelming. I think it’s good to do both things, I think approaching complex scenarios like Brink (or far more complex ones, that I’ll talk about later on!) is something we should be doing, because it helps us look at these problems and get closer to solutions for them. Good talk!

    • Mungrul says:

      I was a Mac user when Quake 3 came out. I desperately wanted to create custom content for the game but there were no map editing tools on the Mac at the time, so playing with bot AI scripts and creating my own was one of the only meaningful ways I could create content.
      I loved the “weighting” of decisions, and it was really cool creating distinct personalities in the script that manifested exactly how I’d planned in-game. Bots with distinct barks that preferred certain weapons, that kind of thing. It was also fun to model bots around the play-styles of friends and then name them after said friends!
      So while they may not have possessed the most spectacularly intelligent AI on the planet, the coding was elegant enough to give anyone the chance to futz around with AI “programming”.
      I don’t work in anything game or code-related these days, but I thank the time I spent doing that and other Quake modding for giving me a deeper understanding of how games fit together.

      • Mungrul says:

        Actually, thinking back on it further, it was the original Quake’s “Reaperbot” that introduced me to the concept of bots. I didn’t have an internet connection and desperately wanted to play multiplayer Quake, so Reaperbot scratched that itch. I’d quite often create games with massive amounts of bots in just to watch them find their ways around levels. It was fascinating, exhilarating stuff.

    • The First Door says:

      From what I remember F.E.A.R. had some really good AI in it, and that was a single player FPS. I remember seeing talks about how the AI was clever enough to discover barricades made by the player and search for other routes and such. I think they were touting the fact it wasn’t built on a state machine, but had cleverer planning algorithms.

      I do remember thinking when I was playing thought that the enemies were very good at flanking and working together against you!

  9. JustAchaP says:

    One word. F.E.A.R

    • Mungrul says:

      I loved FEAR’s AI. I’m sure there were some clever tricks involved, and I seem to remember the map design complemented the AI by combining to create scenarios that made you believe the AI was actively trying to out-flank you. Great stuff.

    • Bishop says:

      I’m not sure the AI was /that/ clever. I think the first Crysis is maybe the ‘smartest’ I’ve seen: they’d all hide behind rocks at once and, without speaking to each other, just jump out and overwhelm you, but it felt like I was playing against Deep Blue. Zero regard for their own lives, just this monstrous hive mind designed to make the game as unfun as possible. I think what makes F.E.A.R.’s AI work is how the AI’s decision making was exposed to the player: hearing them shout not just what they were going to do but also what they weren’t going to do really sold them as not just intelligent but also human. For anyone that hasn’t played it, you’d often hear the enemies shout things like “Flush him out!” only for another to respond “No way!”, and these great morale breaks were normally reserved for just after you’d stapled an enemy’s head to the wall.

      • The First Door says:

        I mentioned this just above, but I found a paper about it:

        The F.E.A.R. AI did use an, at the time, relatively new technique for choosing AI actions – at least new in games; the technique itself was actually about 35 years old!

        link to alumni.media.mit.edu

  10. instago says:

    First things first – pathfinding should never be an issue. A*, people, Jesus! It’s not that hard.

    Here’s the thing: if you want unbeatable AI, practically anyone who’s read an AI textbook or taken a class will be able to build it. But players don’t want unbeatable AI – they want human-like AI. And that’s not really an area of research in the (academic) field of AI. We can make AI better than humans, and the areas where we want human-like AI are practically exclusively video games.

    So that’s the problem. No serious AI researcher cares about human-like AI, because we can already make better-than-human AI (for specific tasks). And the only people who do care about human-like AI, like AI game devs, tend to keep their secrets close to their chest. There isn’t exactly a culture of sharing what you come up with in the private sector, because that can mean you go out of business (at worst) or at least give your competitors an edge.

    If you want AI to improve in games, where you start is getting people to publish their findings and implementations of various aspects of human-like AI from actual, published games.

    • Mike says:

      A* isn’t a magic bullet! It’s a great pathfinding algorithm but things are more complicated when real game development gets involved. Problems become more complex, they overlap with other systems you don’t have control over, and so on.

      But players don’t want unbeatable AI- they want human-like AI. And that’s not really an area of research in the (academic) field of AI.

      Actually, you’d be surprised! There’s lots of research going on into AI that can be curious, that can be deceived, that can be humanlike. There’s a whole world of exciting research projects out there. I’m going to try and show off some of them later on in the series! Academic AI is much more than optimisation and proof, and there’s a lot of amazing, passionate games work going on out there that you don’t often hear about.
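
      For anyone curious what the textbook version of A* actually looks like, here’s a minimal sketch in Python (the grid, unit step costs and Manhattan heuristic are illustrative choices, not anything from a shipped game). Notice how much it leaves out: groups of units, moving obstacles, terrain costs, and all the other systems a real game layers on top – which is exactly why it isn’t a magic bullet.

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* over a 4-connected grid: 0 = walkable, 1 = wall.
    Returns the list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: admissible for 4-connected movement
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries are (f = g + h, g, cell)
    came_from = {}
    best_g = {start: 0}
    while open_heap:
        _, g, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]
            while cur in came_from:      # walk parent links back to start
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None                          # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(path)  # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

      Forty-odd lines for one agent on a static map – and everything beyond that (crowds, replanning when the world changes, blending the path with animation and other systems) is where the real game development problems start.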

      • tormeh says:

        I actually think those faults are essential to really advanced AI in general, not just in games. Humans aren’t deceivable and subject to optical illusions etc. because it’s funny – evolution has no sense of humour, nor sentience. We’re fallible because we take shortcuts and make linearisations and approximations, because that makes problems a lot less complex to solve.

      • garfieldsam says:

        Speaking of which, here’s one of many awesome examples!:

        link to io9.com

    • thefinn says:

      Not entirely true. If you’ve noticed, recently there have been sporadic articles about people in high places saying that commercial-style AI could become a real issue in the near future. They are calling for a dumbing down of “no holds barred AI”.

      More human is exactly what they now have in mind.

      Although I do agree with most of your points at least until recently.

    • b0rsuk says:

      A* pathfinding is CRAP for groups of beings and predicting behavior of crowds. Watch the “supreme commander 2 flowfield” video on youtube. It shows you the glorified A* used on groups. They choke, mess around, and are stumped by the simplest traffic obstacles. It reminds me of the horror when you try to move tanks over a bridge in C&C.

      Then they showcase their Flowfield algorithm. There’s no contest.
      I would post a direct link, but then this post would be nuked as spam. Go to youtube, type supreme commander flowfield, and watch.

      • aoanla says:

        This: A* is a classic top-down pathfinding approach that doesn’t really work if your computed path is frequently intersected by other moving objects while you’re following it. (It needs continual recomputation on collision.)

        So, no, it’s not that simple – as with so many things in AI, there are robust solutions to problems, which people then decide are generally applicable outside the context of the problem they were designed to solve.

    • aldo_14 says:

      if you want unbeatable AI, practically anyone who’s read an AI textbook, or has taken a class will be able to do it

      That’s simply not true in any environment where the player actually has agency, because uncertainty – especially with unknown probabilities – is pretty much the hardest element to address. Remember AI is different from, say, giving the enemy NPC perfect aim or vision.

  11. skittles says:

    I think the major problem is not the possibility of good AI happening, it is simply the time involved. Complex AI like that dreamed of previously is mostly possible with today’s technology. The problem, however, is two-fold: it is difficult (hence time and money), and it is not strictly necessary. You can deceive the player in various ways and they will still happily play your game, as the Halo guys said.

    Creating a complex AI then becomes just as much work as the game itself, and companies cannot justify that. So each and every time we get the bare minimum: the ‘looks like it works’ AI. And each and every time the codebase is mostly scrapped and people start over again, or it is simply reused directly.

    The only way I see AI in gaming progressing in any real, meaningful way is if someone steps forward and takes it up as middleware. There is no particularly successful or decent AI middleware, and until there is, AI is pretty much going to be at a stand-still. At least that is my take.

  12. gavanw says:

    I’m working on a new type of game AI described at voxelquest.com (sorry for the plug but typing on a phone so linking is faster). Uses many techniques that are old, from an academic standpoint, but relatively unused in games.

    • Mike says:

      Good old-fashioned AI techniques (I sometimes see the acronym GOFAI which used to really confuse me) are actually enjoying a little bit of a resurgence in some areas of games research. I’ll definitely check your project out!

    • Jetadex says:

      I was about to give you a shoutout but you beat me to it by a solid week.

  13. ikehaiku says:

    Well, one thing: multiplayer.
    The original essay was published back in a day when multiplayer already existed, of course, but was not as common or as ubiquitous as today.

    Who truly needs AI today? FPS, RTS, MOBA and so forth mostly use a solo campaign as a glorified tutorial.
    Why spend many hours coding an AI (and good AI is hard to code, I imagine), when users will turn to MP (relatively easy to implement) anyway after a few hours?

    And this in turn (only my guess) means there are probably fewer and fewer “AI-specialist” devs, which in turn makes implementing a good AI harder and more expensive, and the cycle continues.

    • Montag says:

      I’m completely OK with playing one of the forty patrolling guards of Dishonored to offer ONE player the scenario the dev created. This is just about seeing nothing in the shadows, right?

      (I chose Dishonored because of Arkane’s myth The Crossing. “polygon dot com the-mirror-men-of-arkane” :> )

    • Mike says:

      Funny you should mention that! In the original draft I had of this article I actually mentioned multiplayer as another factor alongside the graphics race. Just after Laird got quoted in the second Economist article, PS Online and XBox Live launched, and multiplayer became a real thing for online console gamers. I think that had a huge effect, because people (mistakenly, I think) assumed that this would be an easy end to all their AI needs.

    • Nihilist says:

      Well, me. I’ve been a gamer since Pong and I don’t care about and/or have time for multiplayer. BUT you are right, it was and is an easy way out of any AI questions. Just create a framework in which humans can interact and leave the rest be.

      You lost me as a customer, too, but that doesn’t matter.

  14. Jade Raven says:

    I am getting a distinct Adam Curtis impression from reading that second paragraph. I can’t help but read it in my head with his voice.

    • aoanla says:

      Yeah, that definitely happened to me too. Had to check the byline to make sure it wasn’t a guest post by him.

  15. thefinn says:

    Way past due for this article.

    I’m a 44yo gamer. I have been complaining about this for so many years now…

    The whole graphics thing is so old. What is the point of the immersion of Skyrim’s environment when you shoot someone IN THE HEAD and they walk around for 20 seconds, after which they just say “I could’ve sworn I heard something.”

    It has become beyond unacceptable and this was a “good” game.

    • Hedgeclipper says:

      I used to be a smart AI but then I took an arrow to the head.

    • horsemedic says:

      This. Universally bad AI is why I’ve lost interest in most AAA titles and gravitate instead to roguelikes and clever indie games that make no attempt to script intelligent enemies, but rather make you fight mindless hordes of them in interesting patterns.

      While the field is fascinating and I’m looking forward to this series, I’d rather developers focused on ways to incorporate other humans’ intelligence into single-player campaigns—like Dark Souls.

    • Jonnyuk77 says:

      I’m with Finn, 20 years playing PC games and you really notice how little has changed.

      I’m not sure what the fix is either; the Skyrim dungeon does highlight a few problems. Stealthily approaching two wizards stood either side of an altar, I arrow one in the chops; he goes down, rag-dolling hard. The other, who presumably knew the recently deceased, exclaims, “there must be someone there…”. Indeed there must, as your once watchful chum now has an arrow in the eye. A short while later he calms a little – “it’s probably nothing…” – and takes up position once more in his now solitary vigil.

      No, it is something… it’s me, the beefed-up rogue in the shadows. Look, your friend’s corpse is still in the fireplace behind you… there, see? It’s Steve… the bloke you’ve watched over this altar with for an interminable length of time. Don’t dismiss your friend’s sudden demise as a chance happening – it’s me, death incarnate! I’m here!

      I’ve seen enemies with arrows in them dismiss the projectile’s appearance after I’ve hid for a few seconds. I’ve danced in the face of kings as they’ve talked to advisers. I’ve stood in fires as I’ve been addressed by wizards, on tables when talking to merchants, on house roofs when talking to guards… and not one of them has seen fit to comment on the fact: “are you aware that you are on fire?”, or “you have only just arrived at court; do not skip around the room while the king discusses dragons.”

      And finally, yes. Yes, I have heard they are “reforrming the Daarrngard”. How do I know, you ask? Because the guard you are next to told me – yes, just now, in fact, while you were stood next to him – and the past seven guards have also told me: “did you know they are reforrming the Daaarngard?” Yes, YES! I bloody well know!

      Not sure what the fix is, but hope it’s as much fun as the faults.

      • Hedgeclipper says:

        Most of that, though, is lazy design and not bothering to devote enough resources to the AI team, presumably because they feel the players don’t care much. Length of alert state is just a settings fix, for example, and it’s not much harder to flag when there’s an ally’s dead body close by (a bit more complicated to check if it’s been seen). Checking whether the character is standing still and facing the king when he’s talking would also be easy. Whether you’re in a fire or on a house would require a lot more work setting up the environment to flag information for the game to act on, but it’s not really cutting-edge AI either – and tracking which barks they’ve played to you, how long ago, how often, etc. would be easier still.

        None of it is particularly cutting-edge any more; there’s nothing technically challenging about having a creature that sees you stealth-take-down his friend run round lighting a bunch of lanterns and waving his torch in all the dark corners. But it takes time to program, and in general, while the graphics team on a AAA game can easily have a hundred people working on it, the AI team is often one guy locked in a closet working on a hand-me-down 386 – because most of the market still buys games on the “oh, shiny!” and is content to chuckle indulgently about arrows to the knee.
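(To illustrate how much of the above is plain bookkeeping rather than research, here is a hypothetical guard-alertness sketch; every name, number and rule in it is invented for illustration, not taken from any shipped game.)

```python
from dataclasses import dataclass, field

@dataclass
class Guard:
    ALERT_DURATION: float = 30.0   # seconds of searching after a disturbance - a tunable
    alert_timer: float = 0.0
    barks_played: dict = field(default_factory=dict)

    def on_noise(self):
        # Any disturbance resets the search timer: "length of alert state"
        # really is just a setting.
        self.alert_timer = self.ALERT_DURATION

    def update(self, dt, visible_corpses):
        # An ally's corpse in view keeps the guard permanently alert;
        # otherwise alertness simply decays over time.
        if visible_corpses:
            self.alert_timer = self.ALERT_DURATION
        else:
            self.alert_timer = max(0.0, self.alert_timer - dt)

    @property
    def state(self):
        return "searching" if self.alert_timer > 0 else "idle"

    def try_bark(self, line, now, cooldown=60.0):
        # Remember which lines were delivered and when, so seven guards
        # in a row don't repeat the same rumour at the player.
        last = self.barks_played.get(line)
        if last is None or now - last >= cooldown:
            self.barks_played[line] = now
            return line
        return None
```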

    • Voice of Majority says:

      This was done reasonably well in the Thief games (of old). I suppose it was part of what made the games interesting. If you hurt one of the guards and it was seen, they would look for you, alert others, and not give up. If you just made some noise they’d look for a while and then give up. The response was in line with the evidence the guards had.

  16. b0rsuk says:

    Sophisticated AI doesn’t look good on screenshots or trailers. Creeps don’t need good AI – whole genres exist where AI is an afterthought. Many multiplayer games don’t benefit from better AI – mostly RTS games do, but that’s a declining genre, and RTS has worse problems, like the archaic “select a rectangle and choose a point target for it” control scheme. AI could be used to model civilians in FPS games, but FPS games are not meant to be a realistic representation of war; they’re war porn. And for completely symmetric games… do board games do better? Because of face-to-face human interaction?
    Therefore, no significant change will happen to game AI. Good AI will remain a novelty, unless someone comes up with a way to make a whole new genre possible solely because of good AI – a groundbreaking game which convincingly models a small town, procedurally generates AI routines like those in Ultima 5, simulates crowd behavior.

    • Hedgeclipper says:

      Most forms of strategy games could do with a lot more smarts – difficulty there usually means the computer ‘cheats’ relative to the player – it would be nice if it played better by the same rules and better still if it showed more styles or ‘personalities’

      • James says:

        I noticed this in my modding of RTS games. The AI can often just summon units out of thin air; forcing the AI to play as you do has a drastic effect on how the game plays out. It tends to end badly for the player, as the AI must make the decisions that you have to make, only it is an i7 processor devoting most of its power to the task of ‘Build unit X or prioritise building Y’, whilst the human mind is considering the same thing but is also considering such important matters as Jammy Dodger supplies, the present need for coffee and how damn cool your new mouse looks. AI could be far better than it already is with just a few tweaks to the way it operates and the way it must interact with the game. AI has become one of those things, like QA testing, that AAA titles just seem to cut corners on these days.

        • Hedgeclipper says:

          I agree to a point, but it isn’t just summoning units; it’s not uncommon for the AI to have perfect information while the player has fog of war, for instance.
          The AI can be unbeatable at things like building a ‘perfect base’ and can obviously get the clicks in much faster and at optimal times – it’s less good at working out strategic-level things: optimal unit mixes versus the player, when to attack or defend, avoiding strong points and suicidal attacks, taking advantage of openings and so on. Ideally what we want is AI that can handle all that and then do it in a way that looks ‘in character’ – if your game has you fighting humans, they should make human-seeming mistakes or show skill based on the difficulty setting (Lizards from Zarrg should think like Lizards from Zarrg). We’re a long way off that, though; chess has been unwinnable for most humans for years, but it still doesn’t play like a human, and research doesn’t look to have advanced much further against more difficult games like Go, let alone something with as many possible states as your average strategy game.

    • Mike says:

      Well, you never know :)

      “unless someone comes up with a way to make a whole new genre possible solely because of good AI”

      I’m actually involved in a paper at the moment about AI-based game design, games where an AI technique is the core component of the gameplay. A game like Third Eye Crime is a good example of this: link to thirdeyecrime.com

      • FriendlyFire says:

        While I won’t claim that it’s a complex AI, Left 4 Dead’s AI director is likely partially responsible for that game’s success. It made playing against bots far more interesting, dynamic and (ironically enough) human-like than it usually is.

        There’s also games like AI War which are dedicated to this sort of asymmetrical, players vs AI gameplay. I wouldn’t say they’re genres, but they certainly exist.
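(The director idea can be caricatured in a few lines. To be clear, this is not Valve’s algorithm – just an invented sketch of intensity-based pacing: track how hard the player is being hit, and hold back spawns after a spike so the session has peaks and lulls rather than a constant trickle.)

```python
import random

class Director:
    """Toy pacing director: builds pressure while the player is coping,
    backs off into a lull once measured 'intensity' spikes."""
    RELAX_THRESHOLD = 70.0   # back off above this intensity
    DECAY = 5.0              # intensity lost per quiet tick

    def __init__(self, seed=0):
        self.intensity = 0.0
        self.relaxing = False
        self.rng = random.Random(seed)  # seeded for reproducible pacing

    def on_damage(self, amount):
        # Every hit the player takes raises the measured intensity.
        self.intensity += amount

    def tick(self):
        """Return how many enemies to spawn this tick."""
        self.intensity = max(0.0, self.intensity - self.DECAY)
        if self.intensity > self.RELAX_THRESHOLD:
            self.relaxing = True          # start a lull
        elif self.intensity == 0.0:
            self.relaxing = False         # lull over, build toward a new peak
        if self.relaxing:
            return 0
        return self.rng.randint(1, 3)     # normal trickle of enemies
```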

  17. b0rsuk says:

    Great AI sounds awfully close to “replayable games”, and no big studio wants that! Games are supposed to be like movie tickets: you should watch them one after another. The thinking is, if a player is satisfied with a game for too long, he won’t buy more games. Who cares about great AI if you can just script everything in a short, linear game? You might object that multiplayer games are just that – replayable. But fear not: the matchmaking service can be taken down (“costs no longer justify the profit”, all online gamers nod in agreement), and dedicated servers are not there to save us.

  18. SentientDesigns says:

    Excellent manifesto so far for Part 1. Looking forward to seeing which direction this series steers. “[Game] academic research is the games industry” does sound awfully familiar: steps will need to be taken from both the traditional industry and academia to drive this point home, however (not necessarily actions or products, but a shift in mentality and priorities). Interested to see which (if any) such steps the series and contributors will suggest.

    A somewhat cryptic point made by the article (and not expanded enough) is that “the industry needs things that work, and AI generally doesn’t”, because of the ‘AI effect’ mentioned later. If this point stands (it is unclear if this is Mike’s point or if he is playing devil’s advocate in that part of the article), where does that leave AI in commercial games, and more interestingly in game AI research as (part of) the game industry? Is the answer to aim for “more intelligence, more understanding” under the hood or is that going to soon be taken for granted and not registered as AI at all? As (some) indie games embrace a (carefully crafted) presentation that hints at unfinished, amateur, commercially non-viable and even childish games, should game AI (at least the ‘bold’ AI developed in academia) embrace the fact that it is ‘broken’, and build on it (and play with the concept) rather than build despite/around it? I would certainly like the series to attempt to tackle this issue head-on rather than to re-iterate Laird (“Does this mean we’re seeing the games industry finally accept AI as a worthwhile field to explore and experiment with?”), as the risk of having the same discussion/rhetoric in another decade is obvious.

  19. A Gentleman and a Taffer says:

    Ooh, this sounds like a great series. I always wondered what happened to the great AI promises of 10+ years ago. AI was always a key selling point with games, but now hardly anyone mentions it. FEAR seemed like a peak. Was it, or can AI be much smarter now and it’s just kept artificially stupid as a design choice? Alien: Isolation obviously has some clever business going on; that’s one of the only recent examples I can think of where I noticed a leap in AI ability. Or am I falling for the ‘doubled health makes players think the enemy is smarter’ trick times infinity?

    • LionsPhil says:

      From what I’ve seen, the Alien is almost all cheating, teleporting around as necessary to provide appropriate scares.

      One of the awesome things about FEAR’s AI is that they wrote a short paper about it, and how it does goal-seeking using a pathfinding algorithm (good ol’ A*).

      • Mike says:

        I think this is a bit dismissive of what the team have done on A:I. I got the impression that, although the Alien does teleport, there’s a lot of intelligence supporting the game as a whole. Teleportation doesn’t mean you’re cheating, it’s no different to the sleight of hand a good GM might use in a tabletop RPG. The mistake is thinking of the Alien as the intelligence in the game – really it’s the game as a whole, a kind of Director-like influence a la Left 4 Dead.

      • aldo_14 says:

        One of the awesome things about FEAR’s AI is that they wrote a short paper about it, and how it does goal-seeking using a pathfinding algorithm (good ol’ A*).

        They call it A*, but the proper term would be heuristic planning – you relax an initial planning domain, use the solution of that relaxed domain to form a heuristic, then use the heuristic to guide state-space exploration of the unrelaxed problem. A* is essentially a subclass of heuristic planning, where the heuristic is something like Manhattan or straight-line distance within the world.

        Mind you, that paper seems to badly misunderstand HTN planning – “look at Hierarchical Task Network planning (HTN), which facilitates planning actions that occur in parallel better than STRIPS planning”. That’s completely not what HTN is for (in short, it’s for more rapid plan generation, using human-defined decomposition rules to recursively refine an abstract task into a task hierarchy resolving to a schedulable set of primitive tasks).

        It’s quite startling, the difference in quality between that paper and an academic one – more so given the level of coding skill required for games programming.
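(A toy version of the planning-as-search idea discussed above: treat sets of world facts as graph nodes and actions as edges, then run an A*-style search toward the goal, with the count of missing goal facts as a relaxed-domain heuristic. The actions, costs and facts below are invented – this is not the FEAR implementation.)

```python
import heapq
import itertools

ACTIONS = {
    # name: (preconditions, effects, cost) - all invented for the sketch
    "draw_weapon": (set(),         {"armed"},       1),
    "take_cover":  (set(),         {"in_cover"},    1),
    "attack":      ({"armed"},     {"target_dead"}, 2),
    "flank":       ({"in_cover"},  {"flanking"},    2),
}

def plan(start, goal):
    """Return a cheapest action sequence that turns the `start` facts
    into a superset of the `goal` facts, or None if unreachable."""
    tie = itertools.count()  # tiebreaker so the heap never compares states
    frontier = [(len(goal - start), 0, next(tie), frozenset(start), [])]
    seen = set()
    while frontier:
        _, g, _, state, actions = heapq.heappop(frontier)
        if goal <= state:            # all goal facts achieved
            return actions
        if state in seen:
            continue
        seen.add(state)
        for name, (pre, eff, cost) in ACTIONS.items():
            if pre <= state:         # action applicable in this state
                nxt = frozenset(state | eff)
                h = len(goal - nxt)  # heuristic: goal facts still missing
                heapq.heappush(frontier,
                               (g + cost + h, g + cost, next(tie), nxt,
                                actions + [name]))
    return None
```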

    • varangian says:

      F.E.A.R. would be, in the FPS realm at least, one of my favourite examples of good AI, though I tend to think of it as Apparent Intelligence rather than Artificial. After all, anything close to actual intelligence – say, something that was dog-smart – would have the player dead in seconds every time. Just think about how you’d get on if you were attacked by a pack of dogs, and then imagine the dogs had guns and knew how to use them. What you really want are adversaries with enough tricks in their repertoire to look like they’re reacting intelligently, rather than standing around like idiots with bullseyes painted on their foreheads. And F.E.A.R. did that well, as the opposition rolled into cover or jumped through windows to evade you and crawled under machinery to flank you. Just by picking actions more or less at random it would sometimes do something that looked really smart, and when it didn’t it still looked good, so the illusion kept going.

      • A Gentleman and a Taffer says:

        Yep, completely get that. The two best games, in my humble opinion, are Deus Ex and Dark Souls, and they are built on the whole principle of artificial stupidity. Deus Ex’s guards are so incredibly forgiving, in ways a realistic AI would suck all the fun out of toying with. Likewise Dark Souls enemies, though tough, all clearly have fundamental weaknesses and stupidities built into their design. I’m curious to know, though, if this realisation (backed up by the Halo comments in the article) has led all game designers to put realistic AI at the bottom of the priority list because ‘gamers won’t notice/don’t want realistic’. That would be a shame; I can think of loads of game types that would benefit from realistic AI were such a thing possible. Imagine a Splinter Cell game where, rather than an unrealistic army of guards to fight through each level, you just had to outwit the one or two intelligent guards actually on duty.

        • Josh W says:

          It also reminds me of how, in games like Quake, the enemies seemed vastly more powerful than you and better equipped, but their AI gave you a significant advantage. Dark Souls is similar: instead of manipulating their weapon switching and range, or their fire rate and lack of prediction, you are messing with their animations and over-commitment.

          But what if the game made your enemies less powerful than you, but more intelligent? I think it could be fascinating to play a strategy game where you are fighting some guerrilla fighters, or an FPS where you are facing creatures with very restricted attack types, slow projectiles etc. but very canny ways of using them.

          • Immobile Piper says:

            Relevant to this, “Smart Kobold” is a roguelike made with more or less that premise. You’re a powerful fighter venturing to a cave full of puny, supposedly intelligent kobolds.

    • Mike says:

      I’m going to mention Alien: Isolation a little later in the series. It was one of my favourite games of 2014, and I think it’s an interesting example where AI was used in a very sharp and clever way, even though it wasn’t a supersentient mega-intelligence. The developers knew the limits of what they could do, and designed the game around it.

      They also sold the idea of AI which was a very powerful trick in itself. But I think they did a good job, all the same.

      • A Gentleman and a Taffer says:

        Cool, looking forward to that article then. It will be interesting to get some insight. I understand the comments above that it was down to a lot of teleporting and pretending it didn’t know where you were – but of course the game knows exactly where you are at all times. As a layman with no clue how these things work, though, I really felt like I was being hunted in that game, rather than the usual feeling of playing glorified whack-a-mole with enemies. It was possibly my favourite game of last year, and the deployment of that one ‘smart’ baddie felt like the delivery of something gaming has promised for a long time but not yet delivered.

  20. kalzekdor says:

    Great article, looking forward to more in this series. Though most of my coding these days is dictated by client needs, AI was what got me into programming in the first place. It’s something that I remain strongly passionate about. Games have always had a wonderful feedback loop relationship with AI, with games advancing AI which advances games which advances AI, etc.

    When I have free time (so rare these days) I still dabble with some AI/game related projects.

  21. Imbecile says:

    I get frustrated by the lack of focus on AI in games. I guess the issue is that ultimately we tend to focus on things like resolution and frames per second which provide obvious numerical benchmarks, even if they don’t actually make the game significantly better.

    There are third party engines for physics and other effects. Could there not be one for AI too, or is it too contingent on each individual game?

  22. Urgelt says:

    Very enjoyable article!

    I’m a player, not a developer, but I’ve been thinking about AI in games since the mid-90’s. Back then, AI was absolutely about challenge level of enemies. Nothing else.

    Fast forward to a moment I think is as important as any discussed in the article: the release of Bethesda’s Oblivion title. Not only was enemy AI a factor in difficulty level, but AI was now important in portraying NPC personalities and routines. Suddenly it became possible to glimpse a distant future, where NPCs might – less than more, probably – behave as though there were actual people playing them, with personalities and the ability to have opinions and motives and understand and respond to language in ways appropriate to their characters.

    But since then, progress across the industry towards that elusive and unstated goal hasn’t been stellar. Or even noticeable, actually.

    One reason is the existence of consoles. Games designed for consoles have had severe restrictions on processing power and storage. Games designed *not* for consoles didn’t have a whole gaming market to address with their products – just desktop computers, and those were fragmented, too – and so could afford to spend less money on development. Proprietary market fragmentation into ‘stovepipes’ made game development more expensive and more difficult all around.

    Alas, market fragmentation hasn’t gotten better, it’s gotten worse. To the desktop categories and consoles we now must add mobile devices, many of which are also proprietary. The gaming market is more fragmented than ever, so economies of scale in development for really tough design work like AI are problematical.

    There are a few hopeful signs, however. One is the rising level of tech, which is improving processing and storage even for mobile devices at a rapid clip. Another is the growing ubiquity of broadband. A third is the growth of cloud computing and the software infrastructure required for that to work. And another is academic research. I wouldn’t say we’ve seen huge AI breakthroughs there, but incremental advances in AI have occurred.

    Given market fragmentation and the large variance in computational and storage abilities of consumer devices, and given improving telecommunications and cloud computing, the answer almost suggests itself. AI needs to be done in the cloud, independent of proprietary platforms. Games – and other apps – can be designed with hooks into the cloud to harvest behaviors for game characters and enemies. (Siri and Cortana already do this.)

    Many of the infrastructure pieces needed to generate AI behaviors for games and other applications are already in place: search engines, big data, speech comprehension, generation and translation, data farms and the software to enable cloud functionality. But there are still some things missing.

    Vision, for one. There is no architectural vision for cloud-driven personality emulation. Ownership, for another. Some algorithms are in the public domain, but many are not, and most have restrictions on who can use them or when. (IP laws are a big obstacle to AI advancement, I believe.) Also missing: a viable economic model to pay for AI development and market it. And of course there are no standards for any of it: no protocols for client requests or for responding to them, no protocols connecting search engines, language parsers, etc. to AI algorithmic guts. There aren’t even any discussions about vision or standards going on anywhere. This or that university may demonstrate a cool algorithm or solution to a particular problem, but no-one is piecing it all together into a set of solutions for personality emulation.

    AI (personality emulation, really) has to be in the cloud; platform fragmentation makes it difficult to do anywhere else, and the cloud is mature enough right now to perform the work. But I don’t see AI in gaming making any dramatic strides over the next five years. For those strides to occur, conversations about how to organize the cloud to perform this work would need to be already underway. It isn’t yet happening.

    • Voice of Majority says:

      I doubt IP laws are a big issue. You cannot patent an algorithm or a mathematical equation. However, I do agree that cloud computing is going to have an impact on this by changing expectations. Google, Apple and Microsoft are all hard at work to make their digital assistants seem intelligent and human. Once that is taken for granted, we will begin to expect it from all of our digital experiences. You may not need the cloud to achieve better AI, but it will raise the bar.

    • aldo_14 says:

      Many of the infrastructure pieces needed to generate AI behaviors for games and other applications are already in place: search engines, big data, speech comprehension, generation and translation, data farms and the software to enable cloud functionality

      To be honest, I’m not really seeing the relevance to AI here, with the possible exception of being able to offload previously intractable problems onto a cloud system. It’s missing a fundamental aspect of AI behaviour, at least in terms of intelligent-agent behaviour: rational, goal-orientated activity.

      A fundamental element of rationality is the formation and execution of plans. Planning in a games-AI context involves a (generally speaking) large, dynamic, continuous and stochastic environment – close to the hardest possible one (with the exceptions that there is at least no issue of partial observability, and communications can be robust and loss-free).

      Whilst there is not quite the same need for optimality as there is for planning in general, it is still an ongoing problem how to understand both the uncertainties/probabilities in such an environment and – critically – how to form executable and adaptable plans in real time.

      • JamesTheNumberless says:

        That’s why the smartest AI is the kind that doesn’t have a plan to start with! The trouble with having to have a representation of the world is that you then encounter the Frame Problem. The more complex the environments in games get, the harder it is to use this kind of AI effectively. In real world AI researchers have for a while now stopped trying to make computers think the way that we think we think as humans, and started looking at the wider biological context and in particular at insects and other creatures which act as though they have a plan yet don’t actually have brains capable of forming any kind of symbolic representation of the world.

        • aldo_14 says:

          That’s why the smartest AI is the kind that doesn’t have a plan to start with! The trouble with having to have a representation of the world is that you then encounter the Frame Problem. The more complex the environments in games get, the harder it is to use this kind of AI effectively. In real world AI researchers have for a while now stopped trying to make computers think the way that we think we think as humans, and started looking at the wider biological context and in particular at insects and other creatures which act as though they have a plan yet don’t actually have brains capable of forming any kind of symbolic representation of the world.

          Well, the lazy riposte is ‘define intelligence’. And it is a bit of a lazy one, but it has to be said that one of the definitions of intelligence is acting rationally – and a core component of rationality is formation and use of plans.

          (Of course, the ironic thing about this context is that the frame problem is most easily handled in a games context, because all the variables are at least known and programmer-defined. The qualification problem, and how to avoid an intractably complex state-search space, are of course two other issues.)

          That’s not to say there isn’t real value in something like Brooks’ subsumption architecture, or the use of something like a highly relaxed POMDP solution as a guiding heuristic. But the prevailing definition (as per Wooldridge) of an intelligent agent – and this is basically what an NPC character equates to – requires goal-orientated behaviour, which really requires some form of planning in order to ensure short term decisions don’t render long term goals intractable.

          Planning is still relevant and actively researched within real-world contexts; it’s just that it has moved away from the concept of some fixed activity sequence towards finding ways to encompass uncertainty. The reality is not that a planning approach is ineffective; it’s that ways are being researched to make it more effective. The de facto standard for intelligent, autonomous agents is still BDI – plan-driven, and grounded in Bratman’s theory of human reasoning.

          (NB: one other thing. I wouldn’t categorize planning as ‘thinking like humans’; if anything it’s on the other side of the spectrum, as all the planning approaches I can think of are driven by, in effect, some form of strict logical chaining.)
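(For readers who haven’t met BDI: a deliberately minimal caricature of the belief-desire-intention loop – revise beliefs, commit to a desire whose plan’s context holds, execute steps, and drop the intention if the context breaks. The plan names and contexts here are invented; real BDI systems are far richer.)

```python
class BDIAgent:
    """Toy BDI agent. `plans` maps desire -> (context facts, plan steps)."""

    def __init__(self, plans):
        self.beliefs = set()     # facts the agent currently holds true
        self.plans = plans
        self.intention = None    # desire currently committed to
        self.steps = []

    def perceive(self, percepts):
        # Belief revision, naively: new percepts just accumulate.
        self.beliefs |= percepts

    def deliberate(self, desires):
        # Commit to the first desire whose plan context holds right now.
        for desire in desires:
            context, steps = self.plans[desire]
            if context <= self.beliefs:
                self.intention, self.steps = desire, list(steps)
                return desire
        return None

    def step(self):
        # Execute one plan step; abandon the intention if its context broke.
        if self.intention is None or not self.steps:
            return None
        context, _ = self.plans[self.intention]
        if not context <= self.beliefs:
            self.intention, self.steps = None, []   # reconsider next cycle
            return None
        return self.steps.pop(0)
```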

          • JamesTheNumberless says:

            Right, and that’s actually not an easy question. The definition of intelligence preferred by cybernetics (not as much to do with cyborgs as you might think) – currently the preferred branch of AI and the one in which the most progress is being made – is about as far away from these ideas of rationalizing and planning as you can get.

            To have a plan you have to have a representation, and the need to update this representation – and to know which parts of the representation *not* to update in order to make decisions by planning – is known as the Frame Problem. It becomes more and more applicable to games the more games become simulations of the physical world.

            What you say is perfectly logical only for the PSSH branch of AI which is not in terribly good health right now.

      • Urgelt says:

        The relevance to AI is ‘personality emulation.’ That is, in fact, the core problem of AI, according to Turing. I think he was correct.

        Goal-oriented behavior for NPCs is already in games, albeit only in a very embryonic way. For the most part, story scripting determines what goals characters will have and how they will seek to attain them. But coding is also present in many games which allow NPCs to pursue very simplistic goals and exhibit habits independently of story scripting. (I mentioned Oblivion. I didn’t mention Skyrim or the Fallout games because they haven’t really advanced in this respect all that much.) There are even emergent behaviors, some of which surprise even game developers – for an example, I remember in Oblivion seeing Empire soldiers attacking one another because one of them fired an arrow at a deer, and it missed and hit the other soldier. All of that is taking place independent of the story and without the player initiating it. That makes it interesting (if rather obviously inane).

        A more sophisticated version of personality emulation might permit the player to talk (by voice or typing) to an NPC using language instead of selecting options from a menu of phrases, be understood, and receive life-like responses. The processing for that, I suggest, should happen in the cloud. There will still be a need for game developers to assign habits, activities and goals to their NPCs, hopefully of growing sophistication.

        Conversation between player and NPCs could also alter NPC behaviors and goals, if the machine can understand them well enough. That’s AI, too. A wee bit of that is already in games; for example, when a player recruits an NPC, the NPC abandons what it was doing and becomes a follower. But those options are severely limited, and not based on text or voice language input from the player. Cloud processing could improve that experience by interpreting player language for the game and generating topical responses. The rest will have to come from game developers, who could design in more behaviors, more goals, more optional ways for players to influence NPCs.

        The game environment is dynamic and, to some extent, stochastic. But there are easily captured variables that can be used. The game knows what quests have been completed and which are left undone, which NPCs are enemies and which are not, whether certain events have taken place, the reputation of the player, the quality of his clothing or armor and his weapons, and so forth. For example, if bandits are hostile, they should not respond to conversational gambits in the same way as if they’re friendly to the player. Those variables are legitimate inputs for a cloud-operated personality emulation; in fact, without them, the NPC won’t make any sense no matter how sophisticated cloud-based AI becomes.
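        As a rough illustration of those variables acting as inputs (a hypothetical sketch, not any real game’s or cloud service’s API): a handful of easily captured state flags select the stance an NPC takes before any language generation happens.

        ```python
        # Hypothetical sketch: easily captured game-state variables select the
        # conversational stance that would be fed to a dialogue generator.

        def npc_stance(state):
            """Map simple game-state flags to a conversational stance."""
            if state["hostile"]:
                # Hostile bandits shouldn't respond to friendly gambits.
                return "threaten"
            if state["player_reputation"] >= 50 and state["quest_completed"]:
                return "grateful"
            if state["player_reputation"] < 0:
                return "wary"
            return "neutral"

        print(npc_stance({"hostile": True,
                          "player_reputation": 60,
                          "quest_completed": True}))   # threaten
        print(npc_stance({"hostile": False,
                          "player_reputation": 60,
                          "quest_completed": True}))   # grateful
        ```

        However sophisticated the language model at the other end, gating its output on variables like these is what keeps the NPC’s responses consistent with the world the player actually sees.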

        • JamesTheNumberless says:

          Some important concepts in AI are situatedness and embodiment. There’s an argument which says that our intelligence is the way that it is not only because of the computational capacity of our neurons or the configuration of our brain, but because of the sort of bodies we have, and the way we both physically and socially take our place in the world.

          Taking this further, you will never have a truly human intelligence, unless it has a truly human experience of being. IMO the focus with game AI should not be to make a convincing simulation of a human, but to make a convincing intelligence embodied and situated in the game environment.

          Even if there were a way to harness some vast cloud-based Chinese Room in real time to update your AI, throwing computing power at these problems has generally proven not to be the silver bullet that the likes of Newell and Simon probably hoped it would be.

          Turing never intended his imitation game to be regarded as some sort of test of true AI; the fact that he realized the problems with it every bit as well as Searle did is too often played down.

          • Urgelt says:

            Turing instructed us that with AI, appearances matter more than substance. When machines can seem human, they’ll be intelligent. This does not preclude other, new kinds of intelligence, of course.

            I didn’t use the phrase ‘simulation of human intelligence’ because I am indifferent as to the internal mechanisms which produce the *illusion* of intelligence. I used the word ‘emulation’, which is to say, its outward appearance. This is pertinent in gaming, because what gamers expect and want of games is in most senses a social experience. Humans crave human interactions, and so long as they do, games will present human NPCs with which to interact. The more lifelike they are, the more satisfying will be our interactions with them. The outward appearance is all that matters to consumers of games; the only people fretting about internal mechanisms are people working on games and AI research.

            AI researchers may be interested in the ‘human experience of being,’ but gamers are not. The illusion of humanity is plenty good enough to satisfy their cravings for social interactions within game settings. It is a pragmatic goal, and one which avoids preconceived notions of how to do it.

            That said, games might be exactly what is needed to generate, for a machine intelligence, the ‘human experience of being.’ Game worlds are becoming quite sophisticated and extensive; they are small realities which can be experienced. (Robotics is the other main avenue for generating an ‘experience of being.’)

            But my own opinion is that language processing is more important to generating the appearance of intelligence; and I don’t particularly care if we ever go beyond the appearance of intelligence to some deeper actuality. The appearance is good enough for me, and I think that was Turing’s main point.

          • JamesTheNumberless says:

            I understand what you’re getting at, but my main point – aside from anything about Turing – is that I just don’t think the way we do AI in games right now is ever going to cut the mustard, even if you throw more clouds at it. I think biologically inspired AI is the key to AI that adapts better to both the nuances of how a human plays and the increasingly complex emergent situations that arise from a modern game engine. However, I *do* think we should be experimenting more in games with “intelligent” features, in the cybernetic sense of intelligent.

  23. froz says:

    Looks like the beginning of a great series. AI in games is a fascinating subject.

    I hope you will look more into strategy games (especially turn-based). You would think that AI is most important there and should be improving all the time, but my impression is that it’s not.

    What I also noticed is that game designers often favour increasing complexity and/or scope instead of improving AI, which in turn means that the AI is even worse, as it can’t handle the new challenges. A good example of this is the Total War series, which seems to me not to have improved its AI over the years at all, probably because it got so much more complex than the original Shogun.

  24. JonMinton says:

    AI moved into search engines, social network sites and Amazon recommendations. Games developers realised there were two ways to make games appear intelligent: either develop games as funny-shaped corridor (FSC) simulators, in which the player is always funnelled into making the same predictable decisions by the options made available… or have more open environments but use real intelligence, i.e. other people, as the competition.
    And that’s exactly how games have developed since, as well as becoming much more commercially focused. From this perspective both approaches have their advantages: FSC simulators have limited replay value – the apparent smarts and challenge are all smoke and mirrors, so people get bored of the games quickly and buy something else. And multiplayer games, when they’re successful at generating communities, have plenty of replay value, but effectively hold anyone who wants to remain part of that community to ransom: if you want to keep playing with your friends, you’ve got to pay us. Whichever route the player takes, the producers win.