
Electric Dreams, Part 1: The Lost Future Of AI

And How It Might Still Come To Pass

In 2001 two AI researchers, John Laird and Michael van Lent, wrote an article for AI Magazine titled ‘Human-Level AI’s Killer Application: Interactive Computer Games’. The magazine, published and distributed by the stern and serious American Association for Artificial Intelligence, went out to universities and laboratories around the world. In their piece, Laird and van Lent described a future for the games industry where cutting-edge artificial intelligence was the selling point for games. “The graphics race seems to have run its course,” they declared. As they saw it, “better AI [is] becoming the point of comparison” for modern games. This didn’t quite work out.

This is a series of posts about artificial intelligence and videogames. It’s also about science, society, the future, the past, YouTube, Elon Musk, and how all of these things can hurt and help the future of the games that we play and love. It’s about how Laird and van Lent’s dream never came true, and probably never will - but it’s also about a new hope that I have for science, research and games, and one that you can be a part of. In a sense, I’m going to claim the same thing that Laird and van Lent did fourteen years ago - that the games industry might be on the brink of major change. It’ll be up to you to decide if I’m repeating the same old failed predictions, or if something is different this time. In this first part, we’re going to look back and ask why nothing happened fourteen years ago, and examine our relationship with better AI in modern games.

The ‘killer app’ article came out one year into the new millennium, an exciting time for gamers whether or not you were interested in artificial intelligence. A new console generation was on the horizon, I was pre-ordering JRPGs like they were limited-edition jubilee memorabilia, and big games were appearing on the release schedule with equally big promises attached. Strategy games like Shogun: Total War promised to test your tactical thinking and offer exciting battles against human-like opponents (IGN called the AI 'startlingly realistic' in one preview). Artificial life in games like The Sims could make little digital people with hopes and fears and needs, people whose lives you could influence and poke and watch. Black & White, which came out within a few months of that edition of AI Magazine and was no doubt on Laird and van Lent's minds, offered the player the chance to teach a creature right from wrong, and to watch it learn from its actions.

It's easy to see why people thought AI was changing games. Traditional ideas like Total War's pursuit of the perfect opponent inherited from the old AI tradition of chess-playing, but they were matched with brave attempts to create new kinds of games with AI - tinkering with living things, understanding how they thought. Many of these games were celebrated for their use of AI, even if it never quite became a selling point. Yet the illusion often faded, revealing the clunky systems underneath - Shogun's generals would stand in hails of arrows mulling over their options, little Sims would wait around awkwardly on a staircase to avoid invading someone's personal space. This posed a problem for developers, because no AI is perfect and people tend to remember the one time it slipped up rather than the many times that it didn’t.

One solution to this is to constantly strive for improvement: more intelligence, more understanding, more work on making software robust and dependable. But it wasn’t the only solution, and one developer in particular was beginning to refine another answer to the AI problem in games. At the Game Developers Conference in 2002, Jaime Griesemer and Chris Butcher, two members of the original Halo team, spoke to an audience of designers, programmers and other games industry professionals. "If you’re looking for tips about how to make the enemies in your game intelligent," Griesemer told the conference, "we don’t have any for you. We don’t know how to do that, but it sounds really hard." It was a tongue-in-cheek comment - a lot of work had gone into making Halo's AI enemies smart and interesting - but it hinted at the advice Griesemer really wanted to get across. The simple fact was that players didn't really know what they wanted. Players equated intelligence with challenge - in other words, smarter enemies should be harder to defeat. Bungie noticed that if you simply doubled an enemy's health, playtesters would report that it seemed more intelligent. The takeaway was a realisation that what the game was actually doing simply didn't matter - what mattered was what the player thought the game was doing.
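
It's worth spelling out just how small that change is. Below is a minimal sketch of the idea - not Bungie's code, and every name in it is hypothetical - showing an enemy whose decision-making is left completely untouched while only its health is scaled, which is exactly the kind of change playtesters read as 'more intelligent'.

```python
# A minimal sketch of the playtest finding described above - not Bungie's
# actual code, and every name here is hypothetical. The enemy's behaviour is
# never touched; only its health is scaled, yet that is the change playtesters
# read as "more intelligent", because the enemy now survives long enough to
# show off the behaviours it already had.

from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Enemy:
    health: int
    behaviours: tuple = ("take_cover", "flank", "retreat_when_hurt")


def make_seem_smarter(enemy: Enemy, factor: int = 2) -> Enemy:
    """Return a copy with scaled health and an identical 'brain'."""
    return replace(enemy, health=enemy.health * factor)


grunt = Enemy(health=100)
tougher_grunt = make_seem_smarter(grunt)

assert tougher_grunt.behaviours == grunt.behaviours  # same decision-making code
assert tougher_grunt.health == 200                   # only the player's perception changes
```

The asymmetry is the point: the 'smarter-seeming' enemy costs one multiplication, while genuinely smarter behaviour means new code, new testing and new ways to fail in front of the player.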

This anecdote comes up again and again at both industry and academic events, and I often hear developers defending this view by explaining that the AI's primary function in a game is to entertain the player, and that this doesn't necessarily require it to be intelligent. But it betrays a way of thinking that sums up the problem of AI and games: the industry needs things that work, and AI generally doesn't. AI never seems to get any better in games - we always see the same hilarious pathfinding mistakes or errors in judgement. These days in particular, everyone will see it: games are now exposed to players in every minute of every hour of every day, through Twitch streams, through YouTube Let's Plays, through simply watching your friend play a game in a Steam broadcast. If a game has a problem in it, someone will eventually see it, and if developers are afraid of that happening then AI is a terrifying prospect.

Why is it that AI always seems to trip up? It's hard for us now, in an age of Alien: Isolation and Grand Theft Auto V, to remember what constituted innovation in the past. Concepts like crowds, squad AI or hiding in shadows were once revolutionary ideas. It turns out that as we get used to a new technology, we use the word 'intelligent' to describe it less and less, until eventually we just take it for granted, like YouTube's video recommendations or the route planner on Google Maps. People in artificial intelligence call this the AI Effect - "AI is whatever hasn't been done yet", as someone was once misquoted. It’s natural, then, that AI often breaks down or doesn’t work quite right. In a sense, that’s part of what it means to be AI.

Five years after Laird predicted a bright new future for games in his AI Magazine piece, he was interviewed by The Economist for an article about artificial intelligence in games, titled "When looks are no longer enough". He’s quoted a few times in the article, but one quote in particular stands out. “We are topping out on the graphics,” he said. “So what's going to be the next thing that improves gameplay?” Half a decade after his original prediction, the question was still a rhetorical one to Laird. The Economist seemed to think so too - the rest of the article leaned heavily on a game called Façade, a dynamic drama simulation that AI researchers Michael Mateas and Andrew Stern had spent years developing. Façade was extremely unusual - a game cutting-edge enough to warrant scientific papers being written about it, yet playable and interesting enough to spread around the games world and be played by hundreds of thousands of people. “It's an example of where I hope to see computer games go in five years,” Laird said. Six months after the Economist article was published, the PlayStation 3 hit shelves, going head-to-head with the Xbox 360. It was a year of Gears of War, Elder Scrolls, Company of Heroes and Call of Duty. The graphics race, it turned out, had a long way left to run.

Where does this leave us today? At the start of this piece I told you I was going to make the same claim as the one made in that original article over a decade ago - a claim of change that has been so wrong in the past. If you look around at the current state of the games industry, it’s easy to see a similar atmosphere to that of 2001 or 2006. Games like Alien: Isolation and No Man’s Sky seem to hint at AI being applied at larger scales and more robustly than ever before. Just like 2006’s Façade, we’ve recently seen AI research projects like Prom Week breaking through into the games industry - Prom Week even picked up a nomination at the IGF. Does this mean we’re seeing the games industry finally accept AI as a worthwhile field to explore and experiment with?

I can’t tell you whether or not the time has finally come for Laird and van Lent’s dream of the future. My claim is going to be somewhat different, because the games industry has changed a lot in fourteen years. I’m going to argue over the course of this series that AI research no longer has to wait for the games industry to take it on. Instead, it’s time to acknowledge that academic research is the games industry, as much as indies, AAA or the people who play and talk about games are. By breaking down these mental walls between ‘academia’ and games, we can start to ask more important questions. What kind of contribution can academics make? How can they best make it? Who might be best-placed to help? Is ‘making a contribution’ something academics should even be doing?

We’ll meet some of the people asking exactly these questions in the next part, in two weeks’ time.

About the Author
Michael Cook

Mike is an AI researcher and game designer based in the UK. He's currently a senior lecturer at King's College London, and is best known as the designer of the game-making AI ANGELINA. He is less well-known as an expert on crocheting little Hollow Knight characters.
