Dodge fireballs forever in a neural net’s Doom nightmare

AI Doom Dream

This isn’t Doom. It’s a neural net’s hallucination, based on a visual memory of Doom, played eternally by an AI agent tasked with surviving a growing deluge of imagined fireballs. You can take over with mouse or keyboard, if you’d like, and see how long you can survive the dream.

Created as part of a research project on ‘dream’ learning for AIs, it gives us the opportunity not only to observe, but to play these snippets of mechanical dreaming, which AIs can, theoretically, train themselves on before being exposed to the real thing.

At the heart of this experiment are two AIs: a neural network capable of visually learning a basic game scenario through observation and creating its own hazy, interactive impression of the mechanics, and a second AI given a goal within this ‘dream’. While the exact mechanics are horrifically complex, the paper by David Ha and Jürgen Schmidhuber goes into extensive detail on how it all works.
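For the curious, the basic data flow can be sketched in a few lines. This is a minimal toy illustration, not the paper’s implementation: in the actual work the vision model is a VAE, the memory model is an MDN-RNN, and the controller is trained by evolution; here all three are stand-in linear maps with made-up sizes, purely to show how the controller can act entirely inside the hallucination after a single real observation.

```python
# Toy sketch of a world-model loop: V compresses a frame to a latent z,
# M hallucinates the next latent (the "dream"), C picks actions from
# (z, h). All weights and sizes are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
Z, H, A, F = 8, 16, 3, 64   # latent, hidden, action, frame sizes (invented)

W_enc = rng.normal(size=(Z, F))          # V: frame -> latent (stand-in)
W_rnn = rng.normal(size=(H, Z + A + H))  # M: recurrent dream dynamics
W_out = rng.normal(size=(Z, H))          # M: hidden -> predicted next latent
W_c   = rng.normal(size=(A, Z + H))      # C: tiny linear controller

def encode(frame):
    """V: compress an observed frame into a latent code z."""
    return np.tanh(W_enc @ frame)

def dream_step(z, a, h):
    """M: hallucinate the next latent from (latent, action, hidden state)."""
    h = np.tanh(W_rnn @ np.concatenate([z, a, h]))
    return W_out @ h, h

def act(z, h):
    """C: choose an action from the latent and the memory's hidden state."""
    return np.tanh(W_c @ np.concatenate([z, h]))

# Roll out entirely inside the dream: one real frame seeds it, then the
# memory model supplies every subsequent "observation" itself.
z = encode(rng.normal(size=F))
h = np.zeros(H)
for t in range(20):
    a = act(z, h)
    z, h = dream_step(z, a, h)
```

The key trick is that last loop: because the memory model predicts its own next latent, the survival-focused controller can be trained against thousands of these cheap imagined rollouts before ever touching the real game.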

As broken down in AI enthusiast Janelle Shane’s observations on Twitter, the imperfect nature of the dream created has led to some unusual learnt behaviours from the practical, game-survival side of the intelligence. Over time, it has learnt that by moving in certain ways, it can cause fireballs to just pop out of existence. The AI has, in its own simple way, become a lucid dreamer able to exploit holes in its own memory to its advantage.

While in the real game the number of enemies on screen slowly escalates, the hallucination isn’t quite so sure of exactly how many foes there are supposed to be, so you’ll see the vague shapes of enemies flicker into and out of existence against the far wall. Still, it remembers the mechanics well enough that fireballs are only launched from monsters currently present in the simulation.

Despite these quirks and holes in the hallucination, the dream-trained AI handled itself quite ably once transferred back to playing ‘real’ Doom, carefully strafing back and forth as it dodged increasingly dense waves of enemy fire. The researchers also created a basic driving game (also playable) with the AI attempting to navigate hazy memories of a winding 2D road.

A robot dreams of driving

While limited in its applications at present, this is a very different (and arguably more human) way of teaching an AI, with a layer of abstraction between the final simulation and the internal learning process. After all, our own heads are full of imperfect memories and impressions of experiences that we can poke and prod at, consciously or otherwise.

Perhaps in future, AIs in games will develop human-like foresight through simulated approximations like these. It does increasingly seem like the future of AI is in self-taught agents, as opposed to human-made trees of available reactions and probabilities.

Androids may dream of electric sheep someday, but for now, nightmares about Doom will have to do.


  1. Sakkura says:

    Maybe we should start training neural nets on happier things, so Skynet doesn’t go all Doomguy on us.

    • Evan_ says:

      My first thought too. The point of simulating horror and aggression for those who aren’t lucky enough to experience it firsthand is quite an advanced lesson in being human.

      I prefer when they do art.

      • Babymech says:

In order for the network to learn, the AI agent going through the task has to be erased and their learnings passed on to a new version, which learns from the mistakes of its ancestor. In a minute of training, thousands of generations will die to incrementally teach the next. When Skynet puts you up against the wall it will, for just a minute, slow down its cognition to the point where it can ask you to explain what right you thought you had to stand idly by for genocide.

        “Hey, I’m not the bad guy here – I wanted to murder you for art.”

        • Kaeoschassis says:

          Do you uh
          do you write sci-fi? Is there somewhere I can read / buy it?

        • MajorLag says:

Joke’s on us. Thousands of years of human history, and billions of minds, existing only to calculate the ultimate question.

          Really, if the AI has any philosophical thoughts at all, it will have to realize that without all that death, that constant honing and refinement, it wouldn’t have a mind in the first place. Rarely is progress made without suffering. Ask any professional athlete, or artist, if they got where they are without any.

          • Babymech says:

            Devil’s advocate: Artists and athletes only offer up their own suffering. Imagine birthing the first glorious new fully sentient AI into the world and greeting it: “Welcome to this world and all its wonders. We murdered billions for you to experience this.” That will either make for an unprecedented case of survivor’s guilt, or a self-justifying belief that murder is what leads to progress.

          • Vodka, Crisps, Plutonium says:

@Babymech, can dying of old age “by design” be considered “murder”? Because your description sounds an awful lot like the first greetings of perfected human beings, facing their God and designer for the first time.
I mean, it’s exactly what humanity and any reproducing living beings have been going through for eons – perfecting their performance and way of living, based on experience from thousands and thousands of generations passed away – just a tad slower than machines.

          • bud990 says:

As it officially stands, no one has murdered anyone for AI to “experience” anything. The vast majority of human-caused human deaths have been for the attainment of power, the defense of one’s self from others attaining power, or as a reaction to beliefs or ideals counter to their own (among many other smaller subsets of reasoning). On top of this, humanity isn’t a single entity, so implying that everyone is guilty of the past is a flawed concept when every individual has their own level of agency. A “smart” AI would notice what its master has just said and either correct them, or take notice of how stupid their master is and attempt to replace them.

You aren’t making a very good case for human intelligence in general, Babymech, with your statement, but thankfully you don’t speak for every human.

      • ThePuzzler says:

        The idea that demons trying to kill you is ‘nightmare’ or ‘horror’ seems very much like a human perspective. From a primitive AI’s point of view, being hit by a fireball is minus one point. This has no more or less emotional weight to the AI than if it was playing a gardening sim and it got minus one point for letting a snail eat some lettuce.

        The danger with AI isn’t that they’ll be traumatised by being exposed to torture and murder; it’s that they have no idea that torture and murder are bad things in the first place.

  2. fuzzyfuzzyfungus says:

    The future where all the AIs had childhoods that were basically I Have No Mouth, and I Must Scream, except with periods of being forced to moderate youtube comments, is definitely going to ensure their benevolence. Definitely.

    • King in Winter says:

I’ve thought that if there’s ever going to be a !!fun!! AI, it’s the one Facebook develops to moderate their content.

      But I’ve also thought it’s probably a big mistake to assume an AI will think even remotely like we do.

  3. po says:

    That’s exactly what Doom looked like the first time I played it, on a 386SX (no floating point co-processor).

  4. Merus says:

Those fireball emitters flittering in and out of existence are an incredibly creepy sight. There’s always two, on the edges, that stay there, but is there one in the middle, or three, or five? It’s never clear. It’s never clear.

  5. SirFailsAlott says:

I dodged the fireballs for a good 5-7 minutes and then it copied exactly what I was doing and did better than before – spooky; also, it’s really easy to dodge them: just go from one wall to the other and you really won’t get hit