
The creator of the Civilization V superintelligence mod on AI safety

Going beyond evil robots

Last month, the University of Cambridge’s Centre for the Study of Existential Risk released a mod for Civ V that introduced superintelligent AI to the game - not in the form of AI opponents, but as a technology that can end the game for every player if left unchecked. It's a novel overhaul of the science system, as well as an attempt to teach people about AI safety.

While I had some problems with the mod, I also thought it was a fascinating idea. Keen to learn more, I spoke to project director Shahar Avin about how the mod came about, the issues that it presents both poorly and well, and how people can get involved with AI safety themselves.

RPS: How and when did you first get the idea for the mod?

Shahar Avin: How - The idea was inspired by the concept of "differential technological development" in the study of extreme technological risks, which says that it's easier to make technology development go faster than to make it go slower. At the same time, some technologies make other technologies less risky (AI safety makes AI less risky, clean energy makes a growing economy less risky, etc.). So you want to invest selectively, and early, in the technologies that make the world safer.

When I first came across this idea, it made intuitive sense to me because I could imagine all possible technologies as a big graph, and how you might want to advance through that graph in different ways - much as the Civilization technology tree does. That led me to think that the world of Civ captures some fairly complex machinery in an approachable way, and that adding other aspects of extreme technological risk to it could work quite well.

When - The green light for the project came during the Cambridge Conference on Catastrophic Risk, in December 2016, just after the conference dinner in Clare College.

RPS: What made you pick Civ V, and were there any other games that you considered?

Shahar: As mentioned above (regarding differential technological development), I started from thinking about making a mod for the Civilization series specifically, rather than making a game or game mod in general. Back then Civ VI had only just come out, so we went for the safer option of building it for the more mature Civ V - a game I had a lot of experience playing, so I already had some ideas about how to integrate the concepts into it.

RPS: Have you seen any depictions of superintelligent AI in games (or books and films) that interest you? I'm thinking of examples that go beyond the 'evil robot rebellion' idea, like The Talos Principle or Nier: Automata.

Shahar: I like the Minds in Iain M. Banks' Culture novels, and the spaceships in Ann Leckie's Ancillary Justice. In both, the superintelligences depicted (which, fortunately, are very much aligned with human values) collaborate with humans while being very non-human themselves. Importantly, they are neither servants, enemies, nor gods, which are the common tropes for human-level or smarter artificial intelligences. I also like the Fourth Men in Olaf Stapledon's Last and First Men. I guess I read more books than I play games... I really liked the haiku bot in Harebrained Schemes' Shadowrun: Hong Kong! [Shahar later told me that he'd checked out The Red Strings Club following our interview, and that there was a good chance he'd add it to his "educational arsenal". --Matt]


RPS: The mod depicts a superintelligent AI going rogue as the ‘default scenario’, in that it’s guaranteed to happen if the player doesn’t build safety centres. Do you worry that might send out an inaccurate message?

Shahar: If you build a bridge and never worry about the stresses it is going to be placed under, the 'default scenario' is for that bridge to collapse. While the separation of AI research into "capabilities" and "safety" is artificial - bridge builders find safety to be an integral part of their work, and many "non-safety" AI researchers do as well - it does seem that, as a species, we tend to work on failure modes and edge cases only once they have been pointed out to us, usually by nature or an adversary in some sort of accident or catastrophe. When the stakes are very high, as we think they are with AI, this is not a good approach. So yes, I think that if we don't start thinking now about how widely deployed AI systems might fail in catastrophic ways, yet we continue to develop and deploy such systems, the default outcome is that we'll experience these catastrophes.

RPS: Is there anything else that you wish you could have communicated more clearly? On the other hand, which likely dangers concerning superintelligence do come across well?

Shahar: There is a lot of nuance that we couldn't find ways to include given the time and budget we had, or that we thought would make the mod less fun. It is very unrealistic to have a turn counter for superintelligence (safe or rogue) - we simply don't know how far we are from creating this technology, and we may not know until we're very close. The mod focuses on nation states developing AI, but we know that in our world corporations play a much larger role in AI development, and their actions and incentives are not the same as those of nation states. We kept the AI-related technologies fairly minimal, because we didn't want the mod to dominate the rest of the game, but that means we give a very sketchy and not very informative view of the history of AI in the 20th and 21st centuries (though we did add more content in the Civilopedia entries).

I would have loved to give the player more options for handling AI risks - multi-country treaties, global bans on risky applications, sharing of safety information, certification and standards, etc. - many of which are currently active research topics in the governance of AI. Nonetheless, I'm happy that we got the basic components roughly right: AI could be a great boon but also pose an existential risk; there are things we can do right now to minimise that risk; AI safety research is probably top of the list of things we can do today; cooperation between nations is much safer than competition when risky technology is involved; and geopolitics can wreak havoc on the most altruistic and safety-minded player.


RPS: Will we see any changes or additions come to the mod, and have you got any plans for similar projects in the future?

Shahar: We have some small tweaks in the pipeline following player feedback (mostly from the Steam page), and we'll continue to respond to bug reports and feature requests. We would love to see these concepts incorporated into more recent versions of Civ, and into other games, but we don't yet have concrete plans for how to go about it - whether through additional mods or through direct collaboration with game studios. We are also looking at the potential to depict other existential risks in a similar manner, and at creating a much more detailed simulation of the AI scenario.

RPS: Why should we fear superintelligent AI?

Shahar: I don't think fear is the right response to superintelligent AI - I think curiosity is much more important. I think we should be aware of the possible futures that involve superintelligent AI, and be aware that some of these futures end very badly for humanity, while others end up being much better than our current ways of life. Most importantly, we should be aware that there are choices we can make now that affect which future we end up in, but we'll only be able to make these choices in an informed manner if we're willing to engage both with the details of the technology, and with the complexity of its possible impacts.

The mod depicts one particular risk scenario of rogue superintelligence: a fast takeoff of a system that can solve problems and act on the world in a manner that significantly surpasses human ability, yet does not take human values into account or defer to human oversight, and which aims at a state that is not conducive to human survival. We have seen corporations willing to spend significant sums on obfuscating scientific findings about the harms of smoking or about climate change, despite being entities made by people, of people and for people. It seems plausible that more powerful, yet less human, artificial agents could take much more harmful actions in the course of their operation. Again, though, this is only one of many possible futures - the key message is to explore further, and then act responsibly based on our exploration.


RPS: Beyond playing and talking about the mod, are there any other ways people can get involved with the AI safety debate?

Shahar: Lots! Technical AI safety is a growing field, with research agendas, introductory materials, career guides, and full-time jobs. There is also a growing field of AI policy and strategy, with several research centres around the world. Numerous student societies, reading groups, blogs, grants, prizes and conferences also offer opportunities to get involved.

RPS: Thanks for your time.
