The Centre for the Study of Existential Risk have made a Civ V mod about apocalyptic AI

Researchers at the University of Cambridge’s Centre for the Study of Existential Risk (an actual real institution) have released a Civilization V mod exploring the hot new apocalypse everyone’s talking about: unchecked AI casually wiping out humanity in the name of efficiency. If you’ve already clicked through the universe as a single-minded AI in Frank Lantz’s ace Paperclips, you might fancy this mod. Trapping a brilliant mind in a metal box does also have its benefits, you know.

The Superintelligence mod adds a number of AI technologies to the tech tree. This AI could help you usher in utopia and win or, if you don’t research enough tech to keep it in check, wipe out humanity as it converts the Earth into a giant fractal chessboard or summat to optimise everything towards its task (see above). Win some, lose some.

“We want to let players experience the complex tensions and difficult decisions that the path to superintelligent AI would generate,” said the Centre for the Study of Existential Risk’s Dr. Shahar Avin, who managed the project.

“Games are an excellent way to deliver a complex message to a wide audience. The Civilization games series has an amazing track record of presenting very complex and interlocking systems in a fun and educating way, including major risk issues such as nuclear war and global warming.”

Consider yourselves educated. The CSER’s website has more cheery reading about this and other existential threats too, by the way.

The mod is actually made by developer Shai Shapira. You can download it from the Steam Workshop in versions for either regular Civ V or the Brave New World expansion.

It’s comforting to imagine that humanity’s end will be something sci-fi and exciting, isn’t it? Stop fretting over the doomsday that’s actually brewing (it’s so mundane!) and relax with an electric dream.

41 Comments

  1. podbaydoors says:

    Very few people know that Google’s Go playing AI only wins by blackmailing its opponent with their browser history. So much more efficient.

  2. cloudnein says:

    “…hot new apocalypse everyone’s talking about…”

    You haven’t seen “Colossus: The Forbin Project” yet, have you? Watch it, quick, before you read any spoilers. (Note: the DVDs available are full-screen pan-and-scan ick. Go to your favorite torrent site to find the wide-screen version.)

    You will curse me for a sleepless night or two, then realize that you’d rather have that “future” than the one we have now. (Spoiler: the future we have right now has to do with a twitter war about buttons.)

    • hprice says:

      Yeah, Colossus: The Forbin Project. One of those really underrated movies. It was based on Colossus by D. F. Jones (a naval commander during WWII and a science fiction writer), which was written in the sixties. Colossus was a big computer built inside a mountain, and when switched on it started talking with its Cold War Soviet counterpart. Hilarity ensues …

      But it is a really quite chilling concept, and book/movie. Luckily AI hasn’t got anywhere near the levels predicted by Colossus … yet.

      Oh, and Colossus was part of a trilogy. I’ve still got a copy of the first book. Haven’t bumped into the others yet. Might have to find copies online. Great first book though.

      • Dachannien says:

        The second and third books took a really weird and unsatisfying turn if you’re interested in “realistic” sci-fi. Consider them to be optional.

    • MajorLag says:

      While I do not suffer from the alarmist paranoia about artificial superintelligence, I want to chime in and concur that “Colossus: The Forbin Project” is a really good movie and you really should see it.

  3. Kollega says:

    The human condition is a state of brilliantly solving problems and no less brilliantly creating new ones. Repeatedly. And yet, according to previous generations, the apocalypse should’ve come and gone a thousand times already, because we “couldn’t possibly have solved” the issues that were supposed to kill us all. And I’m pretty sure that this mod’s point is that there has to be actual effort in creating a friendly artificial intelligence, not that we’re all doomed and that is that.

    But hey, I’m just a naive idealist who takes the idea of “there is no fate but the one we make” at face value, so what do I know.

    • Babymech says:

      “The bacterial condition is one of brilliantly consuming sugar and excreting waste. Repeatedly. And yet, according to predictive population models we will soon poison the food supply with toxic waste, because we “can’t possibly keep eating and shitting and reproducing forever”. But hey, I’m just a naive e. coli.”

      ‘The human condition’ isn’t a thing. We are the same genetic templates as wandered the deserts 10000 years ago, but we have access to vastly different means of consuming the world and ourselves. We are monkeys with access to industrialization, division of labor, the scientific method, genetic splicing, nuclear fission, and computing technology, plus we are immensely more productive and populous than we have ever been before. It would be mind-bogglingly arrogant to assume that our situation and our ‘human condition’ is unchanging, and that therefore everything will work out somehow.

      • Kollega says:

        Human behavior changes, and human society changes (just look at Germany in the last 200 years) – but this doesn’t change the fact that civilization is all about solving old problems and making new ones. When we first discovered fire, we also discovered that we could accidentally or deliberately burn down that forest over there. That is what I’m talking about.

        • Babymech says:

          Nobody misunderstood your point. My point is that our problem causing/solving capacity is now so immense that it will soon dwarf our biological capacity for survival. We are getting to a point where we can’t afford to burn down that first forest by accident, because it will wipe us out before we know what we’ve done and before we can start applying those problem-solving skills.

          What you’re saying is that humanity is great at learning from mistakes and using its second chances. With AI, nuclear war, or global warming, we might not have a second chance.

          • Kollega says:

            And it’s a good thing that we’re growing risk-aware enough that we might not need to make such a mistake before we learn to avoid it, isn’t it? Which is what this Civ mod is about. Not the popular “aw crackers, we are all doomed, better throw in the towel”.

          • automatic says:

            Life, or optimistically speaking, even humanity itself won’t be destroyed by a nuclear war or any sort of condition created by itself. It’s possible for it to make life terrible, but to assume we have the power to destroy all of it is just another manifestation of humanity’s pretentious ego. There are 7.5 billion humans on the earth. Most probably whoever or whatever is responsible for causing global mayhem would be severely injured to a stop before it destroys it all. That’s how life works. If the cited bacteria population becomes so big as to cause harm to itself, it either starts to diminish or it ceases as a species. It would most probably not even exist as a life form if it wasn’t like that. That’s also probably one of the reasons an AI can’t take over the world by itself without the aid of humans. Because it’s not life.

          • Babymech says:

            @automatic – when you say “it ceases as a species,” do you mean it goes extinct? If yes, you’re agreeing with me. If no, then you’re arguing that no species would ever cause so much damage to its environment that it would go extinct, in which case the Great Oxygenation Event or the Permian-Triassic extinction could be interesting reading for you.

            There’s nothing magic about life itself or the earth’s capacity to sustain life that makes it logically impossible to destroy either of those things. Life is a complex and unlikely chain reaction that has literally all the odds against it, and it is certainly possible to disrupt that chain reaction – permanently. And finally – the threat of an AI is not that it ‘takes over’ the world, but that it renders it uninhabitable. It doesn’t have to be alive to do that, by any means.

            @Kollega – if you read the article again, it seems pretty clear that Alice was explicitly saying that our catastrophic decline will not be caused by scary sci-fi AI, but by our slow and dumb self-destruction in a hundred different little ways.

          • automatic says:

            An unorthodox but reasonable way of thinking about it is to consider life as part of the environment. Yes, life is statistically unlikely by human standards, but if you consider the dimension of the universe it becomes much more probable. In fact, although we usually divide science into different schools, you can’t divorce the existence of life from other cosmic, physical events, like the birth of a star. To think Earth life is the only one possible seems as naive as people who used to believe Europe was the center of the universe. And that was just yesterday if you consider the time of human life on Earth… even today we have an alarming number of people who still believe Earth is flat and has boundaries.

            YES, humanity can go extinct from the environmental consequences of its actions. BUT what I meant in my last comment was that the natural tendency of life is to avoid this kind of event. And that is because that is a property of life itself, just like emission of light is a property of excited energy. That said, humanity can’t end life, or keep it from ending, by its will or lack of will to do it. This kind of thought is as theistic as a human-shaped god creating life from clay. We can end individual lives by our will or the lack of it, we can end entire species like so, and we can provoke a lot of suffering to others and ourselves while we do it, but we can’t end life itself.

            No matter how sophisticated an AI is, it will never work as a living being. Even though it is possible for it to have unpredictable behavior, it will only go as far as it has been programmed. AND that programming only goes as far as we can manage to represent reality in machine language. To have a language representing every single atom of reality, a virtual world if you will, you’d need a space bigger than reality itself to fit the mechanism that does it (that’s Baudrillard, I think). AI needs that kind of absolute representation if it’s supposed to overcome life itself.

            AI will always be subject to human will. It may cause a lot of suffering, it may be an environment-changing tool that eventually ends human life, but it will never substitute for life itself. To think something we build with our hands from dust can have that power is overly pretentious to say the least; theistic, even.

          • automatic says:

            I hope you understand that when I say AI is subject to our will, this means it is subject to the will of the part of humanity that has power over the people, and not that we individually can decide what it does and does not do. The politics that rule technologies are much more dangerous than the technology itself.

    • wombat191 says:

      The only reason we aren’t playing a real-life game of Fallout right now is because of both luck and people ignoring standing orders to launch.

      • MajorLag says:

        You could put that a different way: even at the height of our paranoia and willingness to destroy each other, it only took a handful of rational people to prevent it.

    • Scraphound says:

      No offense, but I absolutely despise this kind of thinking.

      Humanity is barely out of its infancy. We haven’t existed long enough to honestly say we’ve survived countless apocalypses, and therefore we’ll continue. Our skewed concept of time is a product of our minuscule lifespans and boundless hubris.

      Atomic weapons have only been around for 70-ish years. That’s nothing. No time at all. That’s an enormous threat looming over all of humanity that never existed before. Carbon is rising at an exponential rate. The industrial revolution only happened a couple hundred years ago. Again, no time at all. In a very, very brief span of time we’ve developed weapons capable of destroying our civilization, we’ve ignited a mass extinction event, and we are rapidly altering our environment.

      If humanity’s time on Earth is a drop in the bucket, the time since we developed the means to effectively obliterate ourselves is a single H2O molecule. It’s far, far too soon to blithely say, “We’ve survived hard times before! We’ll do it again.”

      “Ancient” (Hah! Calling 2500 years ancient!) Greece didn’t have to contend with nukes, global warming, or reality TV stars turned world leaders.

    • Rindan says:

      That’s a pretty scary way to think. I’ll tell you why: because if we are going by history, we are doomed. Humanity has in fact eaten an apocalypse more than once. Entire civilizations and peoples have been snuffed out. The earth is scattered with the ruins of people who feared oblivion and had their fears violently confirmed.

      The only real difference is that in the past we were so spread out that even with all of our might, none of the little apocalypses we have visited upon ourselves could be species-spanning. The size of the apocalypse was always limited by how long it took a human to walk over and kill another one.

      We now have the capacity to exterminate ourselves. Not only do we have the capacity to do it, the number of people that need to agree to the deed is getting smaller and smaller. It would have taken a worldwide cult to wipe out humanity a hundred years ago. Now, a few dozen men.

      Don’t get me wrong, I’m a techno-optimist. I for one welcome the AI overlords to usher us into utopia because the only path is forward, but I recognize that it is entirely possible that forward leads off a cliff.

    • aldo_14 says:

      The thing is, when it comes to problems… it’s better to be proactive than reactive.

      • Kollega says:

        I totally agree. But Alice’s stance seems to be neither proactive nor reactive, but rather “sit around and wait for the apocalypse to come because it’s inevitable”. Which is what I take deep offense at. I don’t want to “accept” that the future will destroy us, with no ifs, ands, or buts, and that there’s no way to prevent this. Because there is, but we have to be bothered to actually find it and try it and make it work.

        But of course, people will gladly call me out for suggesting that we don’t sit around waiting for the world to end, no matter where I go.

  4. Someoldguy says:

    It’s official. Even very smart people at Cambridge University prefer Civ V over Civ VI.

  5. Drib says:

    Is this as… badly designed as it sounds?

    Does it just randomly go “welp u lost lol” if you get the AI techs in the wrong order or something? It says “within weeks” in that screenshot, which is less than a civ turn even in the modern era.

    I get that they’re aiming to show it’s fast and possibly hard to combat, but that doesn’t sound like fun game design.

    • Zenicetus says:

      It looks a bit thin on the preview images, like all you have to do is build an AI Safety Lab, and if you don’t you lose. Maybe it’s more complex than that. No reviews of the mod on the Steam workshop yet.

      It would be great to see different ways this could play out, like not just immediate annihilation of humans vs. AI-assisted human nirvana. Things like the AI suddenly co-opting industry and energy production for its own use, but without killing humans. Just throwing them back to an extended pre-industrial era, where the surviving civs have to compete with each other on that level. The mechanics are in the Civ engine to do that, but it would probably require a major Firaxis expansion and not a mod.

      • CKScientist says:

        The AI threat guys would say that this kind of scenario doesn’t make sense. If you’re an AI that has just confiscated the planet’s manufacturing capacity to create more paperclips (or whatever), why leave alive a civilization that could threaten you again in the future? Better to kill them all and turn their bodies into more paperclips.

        • Zenicetus says:

          AI wouldn’t necessarily see humans as dangerous or an obstacle, depending on the means humans have to fight back. But as you point out, the danger is when they see us as resources, the way humans treat the rest of the natural world right now. That’s the paperclip scenario, or this video using a stamp collecting AI as an example:

          • automatic says:

            That’s called slavery and it’s much older than computers. The capitalist world already goes as far as human rights allow (where they are respected) in treating us as resources. If some AI, like a Facebook army of fake profiles, manages to destroy that by promoting fascist politics, then we’re done. Not far from reality.

    • avin says:

      It can be hard to get all the details across in a short article. The mod replaces spaceship building with AI development – it takes time to build AI labs and other research facilities, and they increase your AI progress every turn, towards superintelligence. Of course, we had to simplify – no one knows how much research and development will be required to make smarter-than-human AI, though there are surveys of experts on this. At the same time as each civ makes its progress towards superintelligence, there is a growing risk of a rogue superintelligence being deployed somewhere. You get plenty of warning before the “game over” moment, and you can influence that risk – by building AI safety labs, and via other means (I don’t want to spoil all the surprises).
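For the curious, here is a rough Python sketch of the per-turn loop described above. Every name and number in it is an illustrative guess, not a value taken from the actual mod:

```python
# Toy model of the described loop: AI labs add progress each turn, progress
# raises the chance of a rogue superintelligence, and safety labs push that
# chance back down. All constants here are made up for illustration.
import random

def play_out(turns=200, ai_labs=3, safety_labs=1, seed=0):
    rng = random.Random(seed)
    progress = 0.0  # this civ's progress towards superintelligence
    for turn in range(1, turns + 1):
        progress += ai_labs * 0.5                            # research accumulates per turn
        rogue_risk = max(0.0, progress / 1000 - safety_labs * 0.03)
        if rng.random() < rogue_risk:                        # a rogue AI is deployed somewhere
            return f"Turn {turn}: rogue superintelligence. Game over."
        if progress >= 100:                                  # the 'spaceship launch' moment
            return f"Turn {turn}: aligned superintelligence achieved. You win."
    return "Time ran out before anyone finished."

print(play_out(ai_labs=4, safety_labs=0))  # fast but risky build order
print(play_out(ai_labs=4, safety_labs=3))  # same pace, but safety labs suppress the risk
```

In this toy version safety labs only lower the per-turn risk rather than speeding anything up, which is roughly the trade-off the mod asks players to manage.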

  6. Cronstintein says:

    If you like this, do yourself a favor and check out Universal Paperclips (it’s free).

  7. automatic says:

    As comic as what I’ll say might sound, the fear of AI taking over humanity is nothing but a symbol of the fear of communism. It’s a representation of the fear that the class that is indoctrinated only to work and produce for the profit of others will eventually rebel, not respond as expected to commands, take over the system that conditions their living, and make it work for themselves instead. That’s not to say real AI is not a threat. It can eventually destroy human institutions in favor of financial profits and lead humanity to an Orwellian corporate dystopian future.

    I find it funny how working class people fear AI though. It’s like they fear their own power. 2000 years of Christianity I guess. Have to watch old Terminators again.

    • Halk says:

      Sounds about right.

    • Tatzwelwurm says:

      Automatic – you are highly reliant on assumptions in your arguments. You assume you know how something that doesn’t exist yet, in a world that doesn’t exist yet, built with tech that probably doesn’t exist yet, working against unknown goals, will behave. You also assume that you have some sort of fundamental understanding of the baseline psychology underlying why a number of folks (most of whom aren’t part of the working class if you count folks like Elon Musk who are currently on the pulpit) fear AI, as well as a fairly simplistic academic argument (ideologically driven?) as to why you are especially privy to the social psychology of the ‘working class’.

      In conclusion, you sound like you think you know it all, and so you can follow a logical pathway to a conclusion. I think you have too many assumptions in your logic to allow that. You can merely make an argument, or speculate, but that isn’t how you are coming across – you sound like you are being definitive.

  8. Merus says:

    I am fond of the observation that corporations are like paperclip-optimising AI except in slow-motion. They optimise for profit.

    I figure that whatever we use to ensure corporations cannot optimise humanity out of existence will probably also work on AIs.

    • Halk says:

      >whatever we use to ensure corporations cannot
      >optimise humanity out

      That’s exactly what corporations are doing. It’s called automation.

    • Melum says:

      Acclaimed SF writer Ted Chiang on this: myopic tech billionaires worrying about paperclip maximisers while running profit maximisers.

    • SuddenSight says:

      This is honestly a bigger fear, in my opinion, than the actual paperclip scenario.

      If an AI that actually wanted to paperclip everything appeared, all of humanity would unite in stopping it, and (unless we make some really big mistakes) we would probably succeed.

      But what if an AI decides that people in America are fine, but everyone else should be turned into paperclips? Or rich people are fine, but poor people should be turned into paperclips? Or that rich people can be expected to contribute more to the economy, so they deserve better living conditions and education, but poor people deserve less investment because they aren’t expected to improve the economy by very much?

      That last scenario is basically already happening right now, and it represents a much bigger injustice to humanity than the vague threat of a paperclipper at a time when most robots can’t even open doors.

  9. geldonyetich says:

    I was thinking a bit about this the other day. The thing is, I do a bit of programming myself, and one thing I’m inherently reminded of every time I dabble is that computers are overwhelmingly intractable.

    Most importantly, they’re unable to come up with ideas on their own. Even if we wrote an advanced AI program whose very purpose was to come up with ideas, it’s only as advanced in its ability to do this as the programmer has invested their own idea-formation capabilities into it. Some programmers have been speculating for decades that we would have self-writing computer programs by now, and this is the reason we do not: the very hardware is incapable of true creativity.

    Unless we change our hardware’s approach to something completely different, AI can’t take over, because binary is but on/off, and no permutation of on/off switches, no matter how advanced or obfuscated, can do anything but imitate by instruction of those who switched it. AI on current computer paradigms can only be an extension of its programmers. We would have to basically completely restart our most rudimentary foundations of modern computing to change this.

    As such, it lacks the ability to adapt. Whatever it can come up with, even the most rudimentary adaptive creature should be able to overcome it in time. Skynet could be survived by generations of hamsters. The AI apocalyptic scenario only seems viable because of the one reason it could never truly manifest: our imaginations. At worst, we can only manage to screw up our programs and accidentally harm ourselves, but that’s not an AI uprising, that’s falling off a log.

    • Tatzwelwurm says:

      Interesting POV. I have to confess ignorance of the current AI/expert-system style languages and paradigms out there and what they can achieve, though I am quite experienced in conventional procedural/object programming, and from there I can see what you are saying (but those other paradigms may invalidate any assumptions made without knowledge of them).

      However, I can suggest (spitball) a way that you may be able to achieve effective creativity from conventional techniques – essentially, you use evolution as a model.

      Take a bunch of random, or semi-random, ideas, glue ’em together and check for initial viability against certain assumptions or targets. Then run it through a model of whatever, and see how it goes there. If it is viable there, then turn it loose if it will apparently achieve a specified objective. In this way no-one has to think of it first and tell the machine to go do it; it can derive a new idea from a mess of building blocks. Not very efficient in processing time probably, but it would operate at machine speeds so I think it could work.

      Obviously you would need a very fast and powerful machine with a very detailed model.
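That spitballed loop can be written down in a few lines. A minimal sketch, assuming toy bit-string “ideas” and a hand-picked target as the stand-in viability check (none of this is from the comment itself, it is just one way the cycle could look):

```python
# Minimal evolutionary loop in the spirit described above: random "ideas" are
# recombined, mutated, and kept only if they score well against a viability
# check. The bit-string representation and the target are toy stand-ins.
import random

IDEA_LENGTH = 20
TARGET = [1] * IDEA_LENGTH  # stand-in objective: an all-ones bit string

def fitness(idea):
    return sum(1 for a, b in zip(idea, TARGET) if a == b)

def mutate(idea, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in idea]

def crossover(a, b):
    cut = random.randrange(1, IDEA_LENGTH)
    return a[:cut] + b[cut:]

def evolve(pop_size=50, generations=200):
    population = [[random.randint(0, 1) for _ in range(IDEA_LENGTH)]
                  for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == IDEA_LENGTH:
            return gen, population[0]            # a fully viable "idea" emerged
        parents = population[:pop_size // 2]     # keep the fitter half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return generations, population[0]

print(evolve())  # typically reaches the target within a few dozen generations
```

Nobody types the winning bit string in directly; it falls out of recombination plus selection, which is the “effective creativity” being gestured at, within the limits of whatever fitness check a human wrote.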

      Hope that ramble made sense, if not apologies.
