The creator of the Civilization V superintelligence mod on AI safety


Last month, the University of Cambridge’s Centre for the Study of Existential Risk released a mod for Civ V that introduced superintelligent AI to the game – not in the form of AI opponents, but as a technology that can end the game for every player if it’s left unchecked. It’s a novel overhaul of the science system, as well as an attempt to teach people about AI safety.

While I had some problems with the mod, I also thought it was a fascinating idea. Keen to learn more, I spoke to project director Shahar Avin about how the mod came about, the issues that it presents both poorly and well, and how people can get involved with AI safety themselves.

RPS: How and when did you first get the idea for the mod?

Shahar Avin: How – The idea was inspired by the concept of “differential technology” in the study of extreme technological risks, which says that it’s easier to make technology development go faster than to make it go slower. At the same time, there are some technologies that make other technologies less risky (AI safety makes AI less risky, clean energy makes a growing economy less risky, etc.). So you want to selectively invest in technologies that make the world safer, so that they arrive earlier.

When I first came across this idea, it made intuitive sense to me because I could imagine all possible technologies as a big graph, and how you might want to advance down that graph in different ways – the way the Civilisation technology tree does it. That led me to think that the world of Civ captures some fairly complex machinery in an approachable way, and that adding other aspects of extreme technological risk to it could work quite well.
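
To make that picture concrete, here’s a minimal sketch of the idea – my own illustration, not anything from the mod’s code, with all technology names and relationships invented: among research orders that satisfy the same prerequisites, “differential” investment prefers the ones where safety-enhancing techs land before the risky techs they’re meant to de-risk.

```python
# Toy illustration of differential technological development:
# among research orders that all respect the same prerequisites,
# prefer those where risk-reducing techs arrive before the risky
# techs they mitigate. All names and links here are invented.

TECHS = {                       # tech -> prerequisites
    "computing": [],
    "ai": ["computing"],
    "ai_safety": ["computing"],
    "superintelligence": ["ai"],
}

# Which techs make which other techs less dangerous (hypothetical).
MITIGATES = {"ai_safety": {"ai", "superintelligence"}}

def respects_prereqs(order):
    """Check that each tech is researched only after its prerequisites."""
    seen = set()
    for tech in order:
        if any(p not in seen for p in TECHS[tech]):
            return False
        seen.add(tech)
    return True

def risky_before_safety(order):
    """Count risky techs that arrive before the tech meant to de-risk them."""
    position = {tech: i for i, tech in enumerate(order)}
    return sum(
        1
        for safety, risky in MITIGATES.items()
        for tech in risky
        if position[tech] < position[safety]
    )

rushed = ["computing", "ai", "superintelligence", "ai_safety"]
careful = ["computing", "ai_safety", "ai", "superintelligence"]

assert respects_prereqs(rushed) and respects_prereqs(careful)
print(risky_before_safety(rushed))   # 2 -> safety arrives last
print(risky_before_safety(careful))  # 0 -> safety arrives first
```

Both orderings are legal moves down the same graph; the difference is purely in which path you fund first.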

When – The green light for the project came during the Cambridge Conference on Catastrophic Risk, in December 2016, just after the conference dinner in Clare College.

RPS: What made you pick Civ V, and were there any other games that you considered?

Shahar: As mentioned above (regarding differential technology), I started from thinking about making a mod for the Civilization series, rather than making a game or game mod in general. Back then Civ VI had only just come out, so we decided to go for the safer option of modding the more mature Civ V – a game I had a lot of experience playing, so I already had some ideas about how to integrate the concepts into it.

RPS: Have you seen any depictions of superintelligent AI in games (or books and films) that interest you? I’m thinking of examples that go beyond the ‘evil robot rebellion’ idea, like The Talos Principle or Nier: Automata.

Shahar: I like the Minds in Iain M. Banks’ Culture novels, and the spaceships in Ann Leckie’s Ancillary Justice. In both, the superintelligences depicted (which, fortunately, are very much aligned with human values) collaborate with humans while being very non-human themselves. Importantly, they are neither servants, enemies, nor gods, which are the common tropes for human-level or smarter artificial intelligences. I also like the Fourth Men in Olaf Stapledon’s Last and First Men. I guess I read more books than I play games… I really liked the Haiku bot in Harebrained Schemes’ Shadowrun: Hong Kong! [Shahar later told me that he’d checked out The Red Strings Club following our interview, and that there was a good chance he’d add it to his “educational arsenal”. –Matt]


RPS: The mod depicts a superintelligent AI going rogue as the ‘default scenario’, in that it’s guaranteed to happen if the player doesn’t build safety centres. Do you worry that might send out an inaccurate message?

Shahar: If you build a bridge and never worry about the stresses it is going to be placed under, the ‘default scenario’ is for that bridge to collapse. While the separation of AI research into “capabilities” and “safety” is artificial – bridge builders find safety to be an integral part of their work, and many “non-safety” AI researchers do as well – it does seem that, as a species, we tend to only work on failure modes and edge cases once these have been pointed out to us, usually by nature or an adversary in some sort of accident or catastrophe. When the stakes are very high, as we think they are with AI, this is not a good approach. So yes, I think that if we don’t start thinking now about how widely deployed AI systems might fail in catastrophic ways, yet we continue to develop and deploy such systems, the default outcome is that we’ll experience these catastrophes.
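
For readers who haven’t played the mod, the mechanic Shahar is defending boils down to a race. Here’s a toy model of it, based only on how the mod is described above – not its actual scripts, and with every rate and threshold invented for illustration.

```python
# Toy model of the mod's "default scenario": rogue-AI progress ticks
# up every turn, while safety progress only accrues if the player has
# built safety centres. NOT the mod's actual code; all numbers invented.

def simulate(turns, safety_centres, rogue_rate=4, safety_rate=3):
    rogue = safe = 0
    for _ in range(turns):
        rogue += rogue_rate
        safe += safety_rate * safety_centres  # no centres -> no safety progress
        if safe >= 100:
            return "aligned superintelligence wins the game"
        if rogue >= 100:
            return "rogue superintelligence ends the game for everyone"
    return "still racing"

print(simulate(turns=50, safety_centres=0))  # rogue AI: the default outcome
print(simulate(turns=50, safety_centres=2))  # safety research wins the race
```

With zero safety centres the rogue counter is the only one moving, which is exactly the “guaranteed to happen if the player doesn’t build safety centres” dynamic the question describes.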

RPS: Is there anything else that you wish you could have communicated more clearly? On the other hand, which likely dangers concerning superintelligence do come across well?

Shahar: There is a lot of nuance that we couldn’t find ways to include given the time and budget we had, or that we thought would make the mod less fun. It is very unrealistic to have a counter for superintelligence (safe or rogue) – we simply don’t know how far we are from creating this technology, and may not know until we’re very close. The mod focuses on nation states developing AI, but we know that in our world corporations play a much larger role in AI development, and their actions and incentives are not the same as those of nation states. We kept the AI-related technologies fairly minimal, because we didn’t want the mod to dominate the rest of the game, but that means we give a very sketchy and not very informative view of the history of AI in the 20th and 21st centuries (though we did add more content in the Civilopedia entries).

I would have loved to give the player more options to handle AI risks – multi-country treaties, global bans on risky applications, sharing of safety information, certification and standards, etc. – many of which are active research topics in the governance of AI. Nonetheless, I’m happy that we got the basic components roughly right – AI could be a great boon but also pose an existential risk, there are things we can do right now to minimise that risk, AI safety research is probably top of the list of things we can do today, cooperation between nations is much safer than competition when risky technology is involved, and geopolitics can wreak havoc on the most altruistic and safety-minded player.


RPS: Will we see any changes or additions come to the mod, and have you got any plans for similar projects in the future?

Shahar: We have some small tweaks in the pipeline following player feedback (mostly from the Steam page), and we’ll continue to respond to bug reports and feature requests. We would love to see these concepts incorporated into more recent versions of Civ, and into other games, but we don’t yet have concrete plans for how to go about it – whether through additional mods or through direct collaboration with game studios. We are also looking at the potential to depict other existential risks in a similar manner, and at creating a much more detailed simulation of the AI scenario.

RPS: Why should we fear superintelligent AI?

Shahar: I don’t think fear is the right response to superintelligent AI – I think curiosity is much more important. I think we should be aware of the possible futures that involve superintelligent AI, and be aware that some of these futures end very badly for humanity, while others end up being much better than our current ways of life. Most importantly, we should be aware that there are choices we can make now that affect which future we end up in, but we’ll only be able to make these choices in an informed manner if we’re willing to engage both with the details of the technology, and with the complexity of its possible impacts.

The mod depicts a particular risk scenario of rogue superintelligence: a fast takeoff of a system that is capable of solving problems and acting on the world in a manner that significantly surpasses human ability, yet does not take human values into account or defer to human oversight, and which aims at a state that is not conducive to human survival. We have seen corporations willing to spend significant sums on the obfuscation of scientific findings regarding smoking harms or climate change, despite being entities made by people, of people, and for people. It seems plausible that more powerful, yet less human, artificial agents could take much more harmful actions in the course of their operation. Again, though, this is only one of many possible futures – the key message is to explore further, and then act responsibly based on our exploration.


RPS: Beyond playing and talking about the mod, are there any other ways people can get involved with the AI safety debate?

Shahar: Lots! Technical AI safety is a growing field, with research agendas, introductory materials, career guides, and full-time jobs. There is also a growing field of AI policy and strategy, with several research centres around the world. There are also numerous student societies, reading groups, blogs, grants, prizes and conferences that offer opportunities to get involved.

RPS: Thanks for your time.

16 Comments

  1. Godwhacker says:

    It’s almost certainly the right time to be thinking about these things, but the chances of superintelligence remain pretty slim in my opinion. AI has definitely gotten better in recent years, but there’s a huge, huge gulf between the domain-specific AI we have now and any sort of generalised intelligence, let alone superintelligence.

    What if the processing power required is proven to be out of reach? What if deep learning hits a limit, like expert systems and planning did before it? Hopefully we’ll get as far as self-driving cars and beyond customised search results, but there’s no inevitability about any of this.

    • simz04 says:

      On the contrary, we are closing in on Skynet-like AI, and it doesn’t seem that the military big wigs have the brains to understand the threat. More remotely controlled weapons are being developed every day, and super AIs able to learn and adapt are being created too. All it takes is a spark of life from an unchecked super-AI to end the world.

      Just look at the DMZ between the Koreas – it’s filled with auto-turrets that can shoot people without human intervention via infrared sensors. MIT in Boston is working on Terminator himself; they have a bot that looks like him, and they sure plan on putting weapons on it in the near future (and if they don’t, someone else will).

      Soon we will have fully automated factories controlled and operated by robots with minimal human input.

      The danger is very real, and tomorrow is closing in real fast. It’s really scary when you think of it – this could happen in less than 25 years at this rate.

      • automatic says:

        Much more likely Skynet will be a mixture of chatbots, fake profiles and advertising banner managers that influence people’s values through emotional manipulation. Marketing companies have been doing this kind of stuff for years. It’s just a matter of adapting the techniques to the virtual world and we’ll have a robot Goebbels.

  2. Zenicetus says:

    I don’t think Banks’ Minds or the ship AIs in the Ancillary Justice series are good examples, and not just because they work with humans (more or less). The hallmark of a true alien, or an AI mind, would be that you can’t understand it – no common ground for communication at all. Of course, that doesn’t make a good subject for fiction, so AIs are usually personalized as humans.

    I hadn’t read Banks until recently, and was interested in all the praise for the books involving the Minds. But I was very disappointed when I finally read the books, including “Excession”. The Minds were just like human personalities, and didn’t come across as AIs at all, except for the power they had available. Conversations between Minds sounded like any human conversation. Very disappointing.

    The AI threat, if there is one, is that it would be *completely* orthogonal to human experience and needs. The danger would probably come from an unsuspected source, like the Paperclips game. The challenge will be recognizing it for what it is, and trying to stop it while there’s still time. In other words, detection and fast response will matter more than prediction and quarantine ahead of time.

    • jonatron says:

      It sure would be a shame if over a long period of time our motivations were slowly transformed from those which benefit fellow humans to some kind of technological complex. Thankfully, AI is a long way off, right?

    • jssebastian says:

      If you want fiction about AI that is more “alien”, try Adam Roberts’ “The Thing Itself”. The title refers both to John Carpenter’s classic horror film “The Thing” and to Kant’s Critique of Pure Reason, and to the idea that an AI’s consciousness might be structured so differently from ours that it could escape the Kantian categories through which we perceive “the thing itself” – the actual reality of the universe.

      I wouldn’t take it as a realistic prediction of the future, but it’s a pretty interesting book.

    • jakinbandw says:

      The best AI story I know is Friendship is Optimal, where an AI attempts to optimize friendship and the satisfaction of human values. It does have the downside of being a fanfiction, though. It starts with a company making a My Little Pony game.

    • BlueTemplar says:

      You might want to try Charles Stross’ Accelerando.
      It features a large spectrum of intelligences: human, non-human and artificial.

      And for bonus points, since the book was started ~20 years ago (during the dot-com boom), the first chapter is a prediction of… 2018, which managed to get at least some things right!
      link to antipope.org

  3. Arglebargle says:

    No Person of Interest connections?

    The producer talked about how they started out doing a mildly futuristic police procedural with a twist, and how by season three it had turned into more of a documentary.

    Definitely exploring the territory.

  4. WiggumEsquilax says:

    Haikubot is a Reddit autoresponse algorithm.

    Jivebot OTOH is

    INFUSED WITH THE JIVE,
    OUR VERSE IS UNSTOPPABLE.
    TREMBLE, MEATBOUND FOOLS.

  5. Cederic says:

    I’m actually disappointed to find out that the superintelligent AI isn’t playing the game against you.

    Instead it’s a gimmick to push the whole ‘beware the AI’ narrative. Tell you what, University of Cambridge, how about you advance AI to a state where it’s even remotely likely to need us to worry about these issues.

  6. Mezelf says:

    We haven’t reached the Singularity yet, but in many ways the AI nightmare is already here in the form of Google.

    It’s neither independent nor super-intelligent but it is:
    1. super informed about every aspect of our lives
    2. manipulative (algorithms)
    3. evil (thanks to neoliberalism and Silicon Valley billionaires)
    4. entrenched and unavoidable in our modern society

    • Kollega says:

      Last time this mod got an article on RPS, one of the commenters posted this article – which posits that “superintelligent AI that’ll optimize humanity out of existence” is a fear of Silicon Valley moneybags because they fear being beaten at their own game. I’m tempted to agree.

      • LCheers says:

        That’s a great article, and matches what I’ve been thinking about AI. Even if we solve the “control problem”, corporations are already busy destroying the world for the sake of short-term gains. Giving them superintelligent AI would just help them do it more effectively.

      • LTK says:

        It’s a pretty weak argument though. Here’s a counterargument.