
For a best-case scenario future, generative AI must put creators at its heart

What happens if everything to do with generative AI goes right?

A CRT screen in the centre reads 'Electric Nightmares Part 3: Better Living'. Behind it is a collage of a man hugging a robot, a white dove, a heart, a hand in a peace sign
Image credit: Rock Paper Shotgun

One of the most controversial and often-cited criticisms of modern AI systems is that they’re built from, and dependent on, the stolen work of other people. It’s far from the only criticism people have of this new technology, which might make us wonder exactly why anyone wants it in the first place. Today, I’d like to talk about some of the brighter futures that might come out of generative AI for us, as well as why the path toward them might be a tricky one to walk cleanly.

Someone asked me the other day what I love most about my job, and after a decade my answer is still basically the same: being surprised by weird things. That goes for generative AI, too. One of the best ways to use generative AI in games is to lean into the strangeness that AI can offer, rather than trying to make it as human-like as possible. A nice example of this is iNNk, a game developed by some colleagues of mine at ITU Copenhagen. iNNk is a drawing game you play in a team against an AI: one person tries to draw a word, and their friends must guess the word before the AI does. Playtests showed people developing all sorts of wild strategies, from scribbling over the image to cover up what they were doing, to using visual jokes and wordplay to trick the AI’s more literal brain. This is a great example of how AI can enable new kinds of game experience, rather than encroaching on the human presence in the games we can already make.

AI can also provide new routes for accessibility. One of my favourite papers on the use of AI in games, “Your Buddy, The Grandmaster”, is about how so-called ‘superhuman’ AI systems might be used to support players at every level of ability and access needs, rather than simply trying to beat the world champions. The paper describes several different games based on the hardcore platformer Celeste, in which a superhuman AI enables new ways of playing the game. In one, the player’s control over the game is reduced down to pressing a single button – the jump button – and the AI takes control of all other aspects of the game. The AI’s high skill level means it can adapt to the player’s jumping, but it won’t play the game on its own, so both parties have to co-operate to finish it. In another version, the player can ask the AI for a ghost that demonstrates how to complete the level in a way it believes the player is capable of. None of the new games are supposed to be better than Celeste – Celeste is its own, beautiful thing. But they are all unique and interesting variations that might be more accessible or appealing to people of different skill levels, accessibility needs or interests.

Celeste is a challenging platformer at the best of times, but AI-assisted versions of it can make it more accessible to players of all skill levels. | Image credit: Maddy Makes Games

This kind of application – live AI systems reshaping our games or playing alongside us – isn’t what you mostly hear about when generative AI comes up. More often the dream we’re sold is more straightforward. Nvidia’s recent demo at CES, where people had conversations with game characters by speaking into a microphone, is billed by the tech giant as “the future of games”. A lot of generative AI products try to sell us on a vision of a future that is entirely open and free, which plays into a lot of dreams kids have of a game where they can go anywhere or do anything.

The reason we often end up hearing this kind of pitch is, I think, because of where generative technology has ended up. In the early 2010s, generative systems were about the kind of procedural content generation techniques you’d see in games such as Minecraft or Spelunky. Although they were often treated as purely technical processes, these techniques became ways for designers to express themselves through algorithms. Spelunky’s level generator is an extension of game designer Derek Yu’s own sense of how to design levels; Minecraft’s sweeping landscapes are painted with a brush that’s been refined and trimmed over years of updates from Mojang’s developers. Traditional procedural generation techniques were an extension of the person who made the system.

Generative AI, however, struggles to pin down its guiding hand. A lot of its guidance comes from its dataset, but as we saw in the previous part of this series, data often comes from a confusing array of sources, many of which rest on unsteady legal ground. Nvidia’s NPC scripts didn’t all come from the same set of data by your favourite author – it’s a mish-mash of all sorts of things. This is perhaps why many of these generative AI pitches swing the other way, and present the player as the guiding hand. If there’s no human author, and the generative system doesn’t have a coherent voice, then the selling point has to come from the only human left in the process: you.

But is that really what we want? Looking at the big hitters of 2023, and the joyful social media fun with actors from Baldur’s Gate 3 or Final Fantasy 16 (and sometimes both of those games together), I can’t help but feel that losing this would be unsatisfying for most people. If I thought I was cool enough to write and act out the protagonist role in a video game, I’d have taken my one-man theatre adaptation of Mass Effect to the stage already.

Spelunky's level generator is an integral part of why its design has resonated so deeply with its players. | Image credit: Mossmouth

I don’t think this is insurmountable, though. I think generative AI systems can get past this, if we re-centre them on creators. It’s not just about having people donate their data to build models from it; it’s about giving those creators the ability to control, edit, shape and remake those models to achieve their own goals, and to do so without using data from millions of other unwitting and uncredited collaborators. If we can find a way to put generative AI back firmly in the hands of creative people, then we can begin to discover what artistic and empowering uses this technology might have.

Some of the touted benefits of AI are hiding other problems, which makes it tricky to talk about upsides. For example, AI could have a profound impact on localisation. AI translation has come on a lot in the last decade, which raises the possibility of translating games that would otherwise never be localised – games that have been abandoned, games only available through emulation, or the many hundreds of thousands of games made through hobbyist communities around the world. These are all games that are highly unlikely to ever result in paid translation work, so it’s potentially a powerful application of the technology.

Yet there’s a tension here, too. AI translations often lack the cultural understanding or poetry of a human-authored work. Translation work – even, or perhaps especially, unpaid fan translation – is an important way that people gain experience, build communities of translators, and keep languages alive and changing. Not to mention that AI translation is much better at some languages than others, and languages under threat in particular are often very poorly supported. Leaning more into AI translation could help bring a lot of new games to a lot of new audiences, and help preserve the history of the medium for more people, but it might also have unforeseen consequences for other parts of our community.

One of the difficult things to navigate when thinking about AI’s benefits is that it’s hard to distribute those benefits to people equally. If we were to make a little robot that helps to make your game better, that might be a huge benefit to the thousands of independent game developers struggling around the world to compete with the likes of Ubisoft and Activision. But there also wouldn’t be anything to stop studios like Ubisoft and Activision from building a thousand of those little robots and setting them loose in their company to widen the gap again. Nor would it help the tens of thousands of aspiring independent developers who can’t make enough money to afford the one robot in the first place.

AI benefits tend to be double-edged swords. But that doesn’t mean we can’t identify optimistic and positive stories at the same time, because thinking about these outcomes can help to guide us towards technologies and futures that we’re happier with, and that maybe don’t have these drawbacks. In the next and final part of this series, we’ll try to thread the needle through some of these ideas, and ask how AI is actually being used today by game developers, for better or worse.


In Part 4, A Short Distance Ahead, we look at what the near future of AI promises, and what we can do with it.
