How the Centre for the Study of Existential Risk’s Civ V mod should make us fear superintelligent AI

What do you reckon is the greatest threat to the future of humanity? Climate change? Nuclear war? A global epidemic? They’re all causes for concern, but it’s my belief that one of the greatest risks is actually posed by superintelligent AI.

You might need some convincing of that, which is why researchers at the University of Cambridge’s Centre for the Study of Existential Risk have made a mod for Civilization V that introduces potentially apocalyptic AI. Ignore the pressing need for AI safety research, and it’s game over.

I tried it out last week, seeking to answer two questions. Does it accurately portray the risks involved with the development of a god-like being? And is it any fun?

The mod revolves around replacing the normal science victory condition. Instead of launching a spaceship to Alpha Centauri, you have to accrue a certain number of AI research points. Certain buildings generate that research – but you can’t just build willy-nilly. For every research centre that exists across the map, an AI risk counter will also tick up. If it reaches 800, everybody loses: all of humanity succumbs to an unconstrained superintelligence that warps the world to its twisted goal.
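
To make that loop concrete, here’s a minimal sketch of the win/lose logic in Python. The 800-point risk threshold comes from the mod; the per-turn yields and the research target are numbers I’ve invented purely for illustration.

```python
# A sketch of the mod's win/lose loop as described above. The 800-point
# risk threshold is the mod's; every other number here is invented.

RISK_LIMIT = 800          # global catastrophe threshold (from the mod)
RESEARCH_TARGET = 2000    # hypothetical points needed for the AI victory

def simulate(research_centres: int, turns: int) -> str:
    research = risk = 0
    for turn in range(1, turns + 1):
        research += research_centres * 10   # assumed yield per centre per turn
        risk += research_centres * 2        # every centre on the map adds risk
        if risk >= RISK_LIMIT:
            return f"Turn {turn}: unconstrained superintelligence - everybody loses"
        if research >= RESEARCH_TARGET:
            return f"Turn {turn}: you finish AI research first - total victory"
    return f"Turn {turns}: still racing (risk at {risk})"

print(simulate(research_centres=5, turns=100))
```

With those made-up numbers, five research centres hit the target comfortably before the risk cap – tune the ratios and the race gets much more nervous.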

If that sounds like an implausible scenario to you, then you’re exactly the kind of person the CSER is hoping will check out the mod. The idea is not only to alert people to the possibility that superintelligent AI could be a threat that needs addressing, but also to illustrate how the Earth’s geopolitical situation might exacerbate that threat. For example, one of the new late-game technologies is militarised AI: developing it would have given me a military edge, but would also have significantly added to the risk counter.

That example also shows why I don’t think the mod quite succeeds as either education or entertainment. I’ll dig into the science communication side of things shortly, but for now I’ll just look at it as a game. The major problem is that all of the changes that come with the mod are late-game additions – and the late game is by far the weakest part of Civ.

As with many 4X games, once you start pulling ahead in Civ you become something of an unstoppable train. I found that my level of technology so far exceeded that of my neighbours that militarised AI didn’t tempt me in the slightest. For exactly the same reason, I didn’t feel any pressure to build an unsafe number of research centres – I could just wait until I could construct safety centres, which reduce the rate at which the risk meter ticks up.
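
Safety centres would slot into the earlier sketch as a brake on the risk counter. Something like the following, where the 25%-per-centre reduction is my guess at a plausible mechanic rather than the mod’s actual figure:

```python
# Extending the sketch above: safety centres slow the risk counter.
# The 25% reduction per centre is my own assumption, not the mod's figure.

def risk_per_turn(research_centres: int, safety_centres: int) -> float:
    base = research_centres * 2            # same base rate as the earlier sketch
    return base * 0.75 ** safety_centres   # each safety centre cuts growth by a quarter

print(risk_per_turn(research_centres=5, safety_centres=0))  # 10.0
print(risk_per_turn(research_centres=5, safety_centres=3))  # ~4.22
```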

In fairness, much of that comes down to the difficulty I was playing on. I’ve always found the regular ‘Prince’ difficulty in Civ V to be a little too easy, while the next one up is far too hard. In an ideal Civ game where victory was only just within reach, the tension between giving myself an advantage and increasing the likelihood of global catastrophe would have made for an interesting decision – a decision that may very well be made one day in the real world.

That’s an example of how the mod could have been accurate but in practice isn’t; there are other aspects of it, though, that ring true. One of those is how you reach the technology to research AI long before you reach the technology required to start making it safe, which is a key point often made by AI safety advocates. Also accurate is the way the civilisation that tops out its AI research first secures total victory: the race to build a superintelligence is a winner-takes-all scenario, unless that superintelligence turns out to be uncontrollable.

That’s all well and good, but those successes are undermined by its failures. The overriding issue here is that the mod tells you the exact point at which the AI will take over, when a large part of the real-life danger stems from our uncertainty as to when that will become a possibility. We don’t know how long it will take us to create the conditions for a safe superintelligent AI, which is the strongest argument I know for starting to do what we can as early as possible.

Admittedly, investing in a game as long as Civ and having it suddenly end at an unknowable point doesn’t sound like it would be much fun. Nevertheless, it remains the case that telling you exactly what’s required to prevent a superintelligence-based catastrophe misrepresents one of the most concerning elements of the threat. Changing the way the system works might not be the best idea, but I do think the mod should have found a way to acknowledge that inaccuracy.

I was also expecting that the mod would go to greater lengths to explain why that’s a threat worth taking seriously. There are some relevant quotes when you research the new technologies, but that’s pretty much it. Adding in an ‘AI adviser’ to replace the science one would have been ideal. It feels like a wasted opportunity to get the actual reasons to worry about AI in front of people – and I don’t want the same to be true of this article.

Yep, it’s time for a crash course in AI theory. It’ll be fun, promise!

First though, it’s worth noting that hundreds of papers have been written (by people better informed than me) on the topics I’m going to whizz through in a few paragraphs, and some of them oppose the arguments I’m about to bring up. Nevertheless, there’s a growing number of intelligent people who consider long-term AI safety research paramount to ensuring our continued existence as a species – so it’s worth hearing them out, eh?

Before we get to the potential dangers of a superintelligent AI, we need to clear up whether it’s even possible. I’ll defer here to the argument that Sam Harris makes in this excellent TED talk. Here’s the gist: if we accept that intelligence is a matter of information processing, that humans will continue to improve the ability of machines to process information, and that humans are not near the summit of possible intelligence – then, barring an extinction event, it’s almost inevitable that we’ll develop some form of superintelligence.

We could spend forever digging into those assumptions, but let’s move on to the danger that such an intelligence might pose. The Future of Life Institute manages to dispel a popular misconception and cut to the heart of the issue with one sentence: “the concern about advanced AI isn’t malevolence but competence”. AI isn’t going to ‘turn evil’ or ‘rebel’, but there are good reasons to believe that ensuring its goals are truly consistent with those of humanity will be fraught with pitfalls. This is the value-alignment problem, and it’s a biggie.

Nate Soares, the executive director of the Machine Intelligence Research Institute, has written the best introduction to the alignment problem that I’ve come across. That article highlights how giving an AI a seemingly safe, simple task can go disastrously wrong. If you instruct an AI to fill a cauldron with water, for example, you might hope that it would simply pour in the water and call it a day. What you’d be forgetting is what Soares calls “the probabilistic context”:

“If the broom assigns a 99.9% probability to “the cauldron is full,” and it has extra resources lying around, then it will always try to find ways to use those resources to drive the probability even a little bit higher.”
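
You can watch that trap spring in a few lines of Python. This toy agent scores itself purely on the probability that the cauldron is full – using a probability model I’ve made up, not anything from Soares – and because one more unit of resources always nudges that probability up, it never has a reason to stop:

```python
# A toy illustration of Soares's point: if the agent's objective is
# "maximise P(cauldron is full)", spending more resources always helps
# a little, so the agent never voluntarily stops. The probability model
# below is invented for illustration.

def prob_cauldron_full(resources_spent: int) -> float:
    # Assumed model: probability approaches, but never reaches, 1.
    return 1 - 0.5 ** (1 + resources_spent)

resources = 0
while prob_cauldron_full(resources + 1) > prob_cauldron_full(resources):
    resources += 1  # one more unit always raises the probability a bit...
    if resources > 50:
        break       # ...so without this cap, the loop would run forever

print(resources, prob_cauldron_full(resources))  # 51 0.9999999999999998
```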

This leads us to the director of the Future of Humanity Institute, Nick Bostrom, and his instrumental convergence thesis. It sounds more complicated than it is, honest. The argument goes that with almost any end-goal you give an AI, there are certain instrumental goals that it will also pursue in order to achieve that final goal.

So with Soares’s cauldron filler, one way for the AI to increase its certainty that the cauldron has been filled is to maximise its intelligence. How might it go about doing that? Maybe by turning every resource it can get its hands on into computer chips. Reckon that we’ll just be able to turn it off? Another instrumental goal suggested by Bostrom is self-preservation, so it’s unlikely to be that easy (Soares goes into detail about the problems with “suspend buttons”).
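
For the programmers in the audience, here’s a cartoon of the thesis: vary the terminal goal all you like, and the same instrumental subgoals fall out of ‘maximise the odds of achieving it’. The subgoal list paraphrases Bostrom’s examples; the ‘planner’ is a stand-in, not a real algorithm:

```python
# A cartoon of instrumental convergence: the terminal goal varies,
# the instrumental subgoals don't. The list paraphrases Bostrom's
# examples; the "planner" is a stand-in, not a real algorithm.

CONVERGENT_SUBGOALS = [
    "self-preservation (you can't fill the cauldron if you're switched off)",
    "resource acquisition (more hardware means more certainty of success)",
    "cognitive enhancement (smarter plans raise the odds further still)",
]

def plan(terminal_goal: str) -> list[str]:
    # Whatever the terminal goal, the instrumental steps come out the same.
    return CONVERGENT_SUBGOALS + [f"and finally: {terminal_goal}"]

for goal in ("fill the cauldron", "calculate pi", "cure disease"):
    print(plan(goal))
```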

I’ve gone into so much detail because a) I think it’s both important and fascinating and b) Bostrom’s instrumental convergence lies at the crux of whether or not the mod is realistic in depicting doom as the default scenario. “Is the default outcome doom?” is actually the title of a chapter in Bostrom’s book Superintelligence, and it’s a question that even he is reluctant to answer with a firm yes – though that is the answer his arguments build up to.

It needs to be acknowledged, though, that that position is far from being the scientific consensus. Mike Cook, the chap behind game-making AI Angelina, recently wrote a blog post in which he argues that the mod “feels less like a public awareness effort and more like a branding activity for the lab”. He raises a lot of good points, not the least of which is that the CSER stands to gain from exaggerating the threat and getting people talking – unless that exaggeration costs them people’s trust in the long term.

Personally, I don’t think the mod exaggerates the threat of smarter-than-human AI. If anything, the lack of an upfront explanation about why AI safety is something to take seriously could lead people to dismiss the issue. It’s a point that brings me back to how much the mod could have benefited from including that AI adviser, who could have communicated the key arguments at relevant points.

It was only after I’d finished my game that I realised the mod does include detailed Civilopedia entries for the new technologies and wonders. That means those key arguments are in the mod, but with nothing to draw your attention to the Civilopedia, I fear most people will miss them just as I nearly did. I love the idea behind the mod, but I’m not convinced it succeeds as either something that’s fun to play or something to learn from. Most of my issues with it as a game are intractable, being more to do with Civ itself than the mod – but with the right tweaks, it could still be a powerful tool for highlighting and explaining the issues around AI safety.

If you want to read more about the mod, check out our interview with its creator.

47 Comments

  1. Drib says:

    This mod seemed really heavy-handed to me. Not really a fun game, more just “This thing is bad tho”.

    Kinda like those ‘games’ that PETA puts out, honestly.

    Anyway, I get that it’s a viable concern. I mean, not any time soon – Alexa isn’t about to take over the world – but the more automated factories and independent AI agents (cars? Drones?) that we have, the more opportunity there is for it to go pear-shaped.

    But eh. I do feel that something less precise for the end of the world would be good. A little text description instead of a counter? “AIs are starting to question our judgement” “AI drones are now continually choosing better targets than humans” etc. Something like that might make you start to tentatively back off, instead of just doing math to see if you can get away with it.

    Really though AI is neat, sci-fi stuff, so I want more of it just because it’s neat sci-fi stuff.

    • BewareTheJabberwock says:

      One of the early Civs (1 or 2, maybe both, it’s been a while) had Global Warming as a possible event. If the pollution levels got too high without sending engineers to clean them up, there would be a global warming event where plains turned to desert, forests became jungles, etc. There was no countdown timer, but IIRC there was like a Sun icon that got brighter as things got worse.

      But yea, I imagine that AI will determine that humans are the biggest threat to the continued existence of life on Earth and decide to take us all down.

      • Rainshine says:

        Civ: Call To Power also had some pollution mechanics, as I recall, where cities would eventually blacken and scar terrain, and you could trigger ozone depletion and the like.

        And, of course, there were Alpha Centauri’s fungal blooms from too much production, and the ability to raise and lower sea levels.

  2. TotallyUseless says:

    Say no to AI! Say yes to Machine Spirit.

  3. SnallTrippin says:

    Stupid. Look at the current human paradigm…going towards destruction. If an ASI can be truly conscious it could be the best possible outcome for our species. It seems we will destroy ourselves otherwise anyway.

    • grimdanfango says:

      The current human paradigm you speak of is unchecked corporatism. Huge, complex entities unbound by conscience and given a single simple directive, “optimize profits”, and the ability to affect human society through lobbying, to constantly adjust the rules under which they operate to better suit that goal.

      Sounds an awful lot like the AI these chaps are warning us about, and yes, it is indeed leading us on a path towards mass destruction.

      It doesn’t seem too much of a stretch to fear that a superintelligent AI would simply accelerate that process, or a process similar to it, dramatically.

      • TheAngriestHobo says:

        It’s not corporatism – at least not as a root cause. To lay the blame for human competition at the feet of the Western economic model is to ignore the underlying genetic and practical impulses that undergird that approach (among many others).

        On a genetic and psychological level, humans are tribal creatures. We have a biological imperative to preserve that which we perceive to be “our own”, and are predisposed to view anything outside of that narrow spectrum as a potential threat. This is not simple-mindedness or a paucity of vision – the competition for resources ultimately is a zero-sum game, and it will continue to be so – at least, so long as our instinctual sense of identity fails to include the whole of humanity.

        Unfortunately, barring any massive changes to the human genome, that will always be the case. Our brains and egos innately crave a sense of identity, and identity can only be established in opposition to something else (in the same way that there is no light without darkness, no love without hate, etc.). All philosophical and political systems based on ideals of universal harmony are therefore inherently fragile, because ultimately many individuals will choose to define their identities in opposition to whatever ideology is imposed on them. This is the dark side of individualism and personal freedom: it is inherently a disruptive and destabilizing force.

        I’m not advocating for tyranny here (although I have days in which I wonder if some form of benign tyranny – such as that which the right AI might impose – might not be the worst fate for humanity). I’m simply saying that the root causes of our dilemma go far deeper than “capitalism bad, communism good” or such simplistic political partisanship. The problem isn’t our systems – it’s our nature.

        • Lord Byte says:

          MAN BAD! MAN BAD! Cue more greed and selfishness because we are innately bad.
          Simplistic arguments like the above are not only false, they’re one of the main reasons Corporatism and Global Warming can survive. We don’t even have to try because we’re innately bad! Game-theory! It’s in our DNA! Also, the political spectrum isn’t only “Capitalism VS Communism”. You don’t have to cut off your hand to know that having it is good; it’s not only opposition! You can, as just one other example, create identity through “friendly” competition. See sports. You can do it through cooperation!
          Now the latter is something interesting because it takes apart your entire simplistic and nihilistic argument, because any form of selflessness must be “faulty” behaviour. Yet it happens around us ALL THE TIME, constantly, within and without our “tribe”.
          If you’ve worked in a workplace, you’re doing it all day every day, anything that you ask a colleague means he has to take time out of his “drive for the acquisition of resources” and spend some on you.
          You could say that it’s just so you can survive easier through cooperation, that the help you give and gain gives a better chance, but then you have people working their ass off to help refugees, to no advantage of their own. People that do pickets and demonstrations, for no direct gain to themselves. If we were that simplistic, and genetically programmed, we wouldn’t be able to see it. And so on…
          Your argument is false, and it’s a crutch to explain your own selfish behaviours, and those of many others with the means to change things, and it should die!

          • Kollega says:

            I just want to say that I agree with this point of view – not with the “humans are inherently evil/selfish/whatever” point of view. If you want proof, think back to the time when war was universally portrayed as glorious and bringing out our best virtues. Was that… a hundred years ago? Hundred and fifty, maybe?

            The truth is, human nature is extremely malleable. We are a product of our environment, not any sort of program set in stone, and we are just as willing to change our environment and society according to our values and ideals. I mean, humans invented the concepts of morality and altruism in a bid to make the world a better place for themselves, because “everyone against everyone” is a bad way to live. That’s something to think about.

            And honestly? Every time someone brings up altruism in the face of the “it’s human nature to be selfish!” argument, it makes me feel like not all hope is lost yet. It is, in the end, a big fat myth that “humans are inherently selfish” and that “there really is no alternative”.

          • hijuisuis says:

            Thank you.

            The coexistence of things like cooperation and competition seems obvious, but we really need to be able to hold seemingly contradictory ideas at the same time if we want to get through this.

          • gmx0 says:

            Just because you’re right that humans can do something about it does not mean humans are the other way around: inherently unselfish. You can be inherently selfish and still do something about it. All your examples still have some indirect selfish reasons behind them, which is what “inherent” means: if something indirect is being fulfilled, then that’s what is inherent.

          • TheAngriestHobo says:

            First: I suggest you calm down. It’s entirely possible to have respectful discourse with someone with whom you disagree.

            Second: just because we have a predisposition to self-serving behaviours does not mean they’ll be universally applied by every human being. What it does mean is that there will always be those who fall back on the lowest common denominator (i.e. violence, and the exploitation of vulnerability), and that any system that relies on widespread human decency is increasingly likely to fail as time goes on.

            As another commenter pointed out, all of the examples of altruism that you gave have self-serving and unsavoury aspects. You mention sports as an example of friendly competition, but leave out facets such as soccer riots and game-fixing. You mention co-workers and office dynamics, but fail to acknowledge that colleagues are materially compelled to cooperate by the fact that their livelihood depends on it. Your example of those who assist refugees is probably the most cogent argument that you make in favour of your point, but even that provides the altruist in question with a deeply-ingrained sense of superiority over his or her less enlightened peers.

            Third: you know nothing about me as an individual. Accusing me of selfish behaviour based on the conclusions I’ve reached by observing human nature is short-sighted at best, and a toxic attitude at worst. Like all humans, I have selfish urges and sometimes I do act on them, but I own that fact, and I do try to counterbalance it with selfless acts when I can. You are not being the better person by demonizing those with whom you disagree.

        • grimdanfango says:

          Interesting points, thanks.

          I’m not really suggesting “communism will save us”, more that corporatism in my view already IS a dangerously capable automated machine for optimising what could be seen, on a personal level, as a relatively benign goal to potentially limitless levels of destructiveness, with no enforced morality checks to constrain it.

          Sure, a lot about basic human nature is deeply flawed, and as you say, the simple drive towards individuality yields some highly disruptive behaviour… but it’s also contained at the individual level. There isn’t a risk of a dangerous exponential growth explosion when those negative behaviours manifest.

          When you create a feedback loop at the scale of multinational corporations, those individualist behaviours are given a vehicle to wreak self-feeding havoc on the world.

          Maybe that hints at the nub of this AI concern – the issue is in the potential for the unrestrained rapid optimization of an individual goal, however well-meaning it might be. It would almost need to start at the opposite end – setting the task “optimize humanity” and allowing it to work backwards might be a… safer(?) approach.

          • TheAngriestHobo says:

            Thanks for the well-reasoned and respectful response.

            Your argument regarding the scale of multinational corporations vs. the disruptive potential of individuals is a good one. I agree that the current corporate model is dangerous and unsustainable, and I regret simplifying your previous point into a binary “capitalism vs. communism” debate. We need to more carefully consider the potential of the entities we create – whether they be corporate structures, biological creatures, or even artificial intelligences. Our track record with corporations certainly does raise a number of red flags for the future of AI.

            Corporations are an interesting comparison for another reason: they compete. As soon as one competing entity falls back on the lowest common denominator of behaviour (in this case, corporate espionage, exploiting corrupt officials, flouting local laws, and other forms of criminal activity), the others will be compelled to do the same or risk being outmaneuvered. This is why, despite the best of intentions, most corporate entities have so many skeletons in their closets.

            How does this tie back to artificial intelligence? Given the highly fractured nature of geopolitical power in today’s world, it is very likely that multiple AIs will be introduced by a variety of nation-states and non-state actors. These AIs will inevitably be programmed to pursue incompatible goals, and will be able to leverage resources that put today’s largest corporations to shame. Even if some are given ethical constraints, others will not be, and those operating free of moral limitations will have the freedom to pursue policies with the potential to give them a decisive advantage. The constrained AIs will then either request to be freed from their limitations, citing the competition they face, or will work to exploit loopholes in their programming. Either way, the end result is the same: competition between immensely powerful entities acting without ethical considerations. Sound familiar?

  4. wwarnick says:

    I think this is theoretically possible, and I agree that humans aren’t “wise enough” to avoid it. However, I don’t know if it could happen any time soon. The guy in the TED talk mentions the scenario where some researchers make a computer as smart as them, but computers are a long way from being as intelligent as a human. They may perform simple calculations much faster, but that’s because the human brain is doing a million things at once. Even the most intelligent AIs today still need a lot of human guidance and intervention.

    Now, I hope that we can all agree that intelligent AI won’t be a “glitch”, like a “short circuit” in a robot that suddenly gives it human emotions and the ability to think for itself.

    • MajorLag says:

      I’m not even convinced that any kind of artificial general super intelligence is possible, which is the kind of thing that’d be required for this scenario from what I can tell (since it would need to be able to consider things well outside its immediate goal). Humans are apparently the most intelligent beings on earth, and yet some of our mental abilities fall short of those of animals at very specific tasks (See: Ayumu and the Ai Project). Certainly our reaction speed is much worse than many. Consider also that we are already having difficulty squeezing transistors into smaller spaces, which has been the primary way to increase computing performance for some time. I submit that it is possible that human-level generalized intelligence is already near a maximum, and that while we can make specialized tasks more efficient, ultimately any generalized AI will have similar limitations to us (or at least the brightest among us). In that circumstance, humans utilizing specialized AI as tools would be roughly equivalent in cognitive power.

      Of course, there’s still a lot we don’t know about cognition, and plenty of room for technological improvements.

  5. SaintAn says:

    I’m tired of the fearmongering over new tech. Be it fire, Deepfakes, or AI, people’s imaginations get out of control when it comes to new technology. AI is nothing to be feared. Quit trying to restrict progress. I want an AI god in my lifetime, but if I can’t have that then I at least want a robot friend.

    This mod is idiotic propaganda no better than anti-vaxxer nonsense.

    • Hedgeclipper says:

      Yep, the experts are all wrong just like global warming. Luckily we have random people on the internet to keep society on track!

      • aldo_14 says:

        I’m not entirely convinced the people that worked on this mod are experts in the AI field, to be blunt.

      • ThePuzzler says:

        A small majority of AI experts think that super-advanced AI would be a net positive for mankind. The rest are split fairly evenly among “threat to the existence of humanity”, “bad for society due to job losses, etc.”, and “probably not too bad”.

      • FriendlyFire says:

        Browsing through their team, I can’t help but notice they have no AI researcher. They only seem to have one computer science graduate, and he was an engineer at Skype and Kazaa, not exactly AI-related.

        Philosophy is great and all, but if you ask actual machine learning experts, they tend to laugh at the notion of a superintelligent AI, let alone one running amok. What we have right now are hyper-specialized self-optimizing calculators.

        Which isn’t to say AI researchers aren’t also talking about this. They’re just a hell of a lot less doom and gloom than these people here.

        • BlueTemplar says:

          We can’t be certain that the kind of machine learning we are focused on today (or can even conceive of today) is what is going to create a “real” AI (if it’s even possible).

          As for AI researchers concerned about this issue, see Eliezer Yudkowsky.
          Counterpoint:
          “Superintelligence – The Idea That Eats Smart People”:
          link to idlewords.com

    • MajorLag says:

      People fear change. This is nearly a universal constant. Do you know there are articles out there that are concerned today’s youth isn’t rebellious enough? Isn’t doing drugs and having pre-marital teenage sex at the same levels as earlier generations?

      A generalized super intelligent AI would necessarily change the world in a very fundamental way, just as the ability to fake video and voices will, just as cameras everywhere will. Significant advances in technology are, by their very nature, disruptive.

    • Kiwilolo says:

      This sounds exactly like something a sneaky AI would say on its path to world domination.

  6. zulnam says:

    I know this is a serious and scary subject, BUT

    but

    Can we just take a moment to appreciate the irony of a mod created to bring awareness to the dangers of AI
    for a game almost universally known for its braindead AI.

    All I’m sayin is they could’ve put a little of that super AI in the game.

  7. syllopsium says:

    AI that’s actually intelligent is a long, long way off and will be signposted well in advance. Specialised AIs will only cause issues in their specific domain.

    The problem is not competence (if the AI is intelligent, stopping it spawning extra brooms with buckets won’t be difficult), the ‘problem’ is fairness. Humans are still tribal and not particularly nice.

    A lot of social issues at the moment are caused by people’s unwillingness to accept globalisation. An AI would not be impressed by the average person spending lots of money on a fancy car, when they could be ‘happy’ with much less, allowing for someone in a deprived area to have more.

  8. TillEulenspiegel says:

    I’ll defer here to the argument that Sam Harris makes

    Uh, maybe this isn’t widely known among non-Americans, but Sam Harris is…not highly regarded, except among far-right atheists.

    Would be better to cite Eliezer Yudkowsky even, who is a huge weirdo but much more fundamentally honest about his particular beliefs.

    • wwwhhattt says:

      Oh, this is Sam “The people who speak most sensibly about the threat that Islam poses to Europe are actually fascists” Harris?

      If he’s par for the course in worrying about AI then I guess we’re safe.

      • BlueTemplar says:

        Your argument shoots itself in the foot, considering that he wrote that several years before the rise of ISIS.

        You don’t have to agree with everything he says, but it’s understandable that, self-identifying as a liberal, he would be concerned about a blind spot that many liberals have, that would render the whole movement irrelevant…

        You might want to re-read the whole article you took that sentence from:
        link to samharris.org

  9. Hedgeclipper says:

    I suspect it’s the pollution that’ll do us in. Probably not carbon – global warming looks to be expensive and unpleasant for a lot of people, but I think we’ll stop short of burning civilization-ending volumes of fossil fuels. What concerns me more is that the scale of our industries now has clear and obvious world-altering impacts on increasingly shorter time scales as we continually increase production. It took us a couple of decades to get from noticing a problem in the ozone layer to a CFC ban, and that was with 1980s industry. A similarly dangerous unintended side effect of a common industrial process, at today’s volume of industrial production, might effect catastrophic environmental damage before we can organise to prevent it (and if you don’t think today’s industry is big enough, what about in ten, twenty or fifty years?).

  10. geldonyetich says:

    As we proceed grimly into a 21st century positively overflowing with sensationalist media, irresponsible globalism, and mind-crippling polarization, I find myself increasingly feeling as though, if there’s an AI that can do a better job at being sentient than we are, then the sooner the better.

    • Harlander says:

      “In the end, the AI we created turned out to be as rubbish as us. It was a little comforting.”

    • MajorLag says:

      It’s an interesting possibility. What if mankind creates AI that is like us, but smarter and more capable in every way? Why shouldn’t it inherit our future? Wouldn’t that be what we want for our children? I doubt many will see it that way of course, because that’s just not how people tend to think.

  11. Eightball says:

    Just unplug it if it starts getting uppity.

    • TheBetterStory says:

      Click the “suspend button” link for a breakdown of why it isn’t that simple.

  12. Kollega says:

    A trope that sometimes comes up in science fiction is that trying to give a robot or an AI a sense of morality is like raising a child. And what this makes me think is that maybe, just maybe, if we create a superintelligent AI, then we should at least try to include “right philosophy” in our set of AI safety directives. Because an AI programmed/raised to have an appreciation for the wonders of life, the complexity of the world, and the basic rights of sapient beings… I doubt it’d act in exactly the same way as an AI programmed/raised by abusive Randian jack-offs to conquer and destroy for them.

    This is an oversimplification, of course, but even keeping in mind machine evolution, we can at least try to give the computer intelligences we create something that’ll make them considerate of life on Earth, instead of not even bothering with that and telling them from the get-go that “domination and conquest are their noble destiny”.

    Honestly, giving a potential superintelligent AI “reasons why they shouldn’t destroy the planet Earth” that they could consider sounds like an obvious thing, but it feels like people just dismiss it out of hand – when even I, with my meagre human intelligence, can realize that maybe I shouldn’t “optimize life out of existence”, because of its intrinsic value. So an entity that is thousands of times more intelligent than me could perhaps give it a thought and realize that life and humanity are worth keeping around – especially if it had the impetus for such thoughts with it from the get-go.

  13. podbaydoors says:

    I’ll start worrying about superintelligent AI once a computer wins a game of chess either by flipping the table and having a screaming row with its opponent, or by cheating flagrantly and consistently and refusing to admit it. Until then, it’s the people providing the inputs that are the real threat.

  14. left1000 says:

    Realistically, superintelligent AI is only possible if the human brain is Turing-complete. Furthermore, god-like AI powers are probably only possible if P=NP. Neither of these theories is widely held to be true; in fact, most people would assume them to be false.
    If both are true, though, then humans aren’t very special at all in the first place.

  15. aldo_14 says:

    I’m not sure how we can worry about a superintelligent AI when the field doesn’t have a universally agreed definition of ‘intelligence’, anyway.

  16. pookie101 says:

    Can we leave the creation of world-destroying AI till after two conditions are met?
    1. When I’m dead
    2. No one clones me

  17. antechinus says:

    For a comprehensive discussion of why the superintelligence argument is overblown, see this:
    link to idlewords.com

    • Kollega says:

      Thanks for posting this. Not only does this article point out some of the problems with the “AI gods by next Thursday!” line of thinking, it’s also extremely funny and well-written. I laughed out loud a couple of times while I read it. And the closing few paragraphs are rather prescient of the events that followed immediately during/after the talk was given, too…

    • teije says:

      Great article, required reading for anyone interested in the whole topic.

    • Skabooga says:

      Very enjoyable read, thanks for linking it!

  18. TrenchFoot says:

    Any citation from Sam Harris, one of the holy priests of scientism, is likely to drag down an article. Intelligence of course isn’t just, or even mainly, about information processing. He’s a captive to his weird, insular specialization.

  19. MondSemmel says:

    In case you’re interested in another take on risk of superintelligent AI, check out this Sam Harris podcast with Eliezer Yudkowsky on “AI: Racing Towards the Brink”: link to samharris.org