This isn't Doom. It's a neural net's hallucination, based on a visual memory of Doom, played eternally by an AI agent tasked with surviving a growing deluge of imagined fireballs. You can take over with mouse or keyboard, if you'd like, and see how long you can survive the dream.
Created as part of a research project on 'dream' learning for AIs, these snippets of mechanical dreaming can be not only observed but played: simulations that AIs can, theoretically, train themselves on before being exposed to the real thing.
The heart of this experiment is a pair of AIs: a neural network that learns a basic game scenario visually, through observation, and builds its own hazy, interactive impression of the mechanics; and a second AI given a goal to pursue within this 'dream'. While the exact mechanics are horrifically complex, the paper by David Ha and Jürgen Schmidhuber goes into extensive detail on how it all works.
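To make the split between the two AIs concrete, here is a deliberately tiny, hand-written sketch of the idea: a "dream" function stands in for the learned world model (in the real paper this is a VAE plus a recurrent network, not hand-written rules), and a trivial controller tries to survive inside it. Every name here (`dream_step`, `rollout`, `dodge`) is invented for illustration and is not from the paper's code.

```python
import random

# Toy stand-in for the learned world model: given the current state and
# an action, predict the next state and whether an imagined fireball hit
# the agent. The drift term mimics the dream's imperfect memory.
def dream_step(state, action):
    fireball_x, player_x = state
    fireball_x += random.choice([-1, 0, 1])       # hazy, drifting prediction
    player_x = max(0, min(10, player_x + action)) # strafe within the room
    hit = abs(fireball_x - player_x) < 1
    return (fireball_x, player_x), hit

# The second AI's objective: survive as long as possible inside the dream.
def rollout(policy, steps=100):
    state = (5, 0)  # (fireball position, player position)
    for t in range(steps):
        state, hit = dream_step(state, policy(state))
        if hit:
            return t  # survived t steps before an imagined fireball hit
    return steps

# Trivial "controller": strafe away from the imagined fireball.
dodge = lambda s: 1 if s[1] >= s[0] else -1

print(rollout(dodge))
```

The key point the sketch illustrates is that the controller never touches the real game during training: it only ever queries the world model, which is why flaws in that model (like fireballs popping out of existence) become exploitable.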
As AI enthusiast Janelle Shane observed on Twitter, the imperfect nature of the dream has led to some unusual learnt behaviours from the practical, game-survival side of the intelligence. Over time, it has learnt that by moving in certain ways, it can cause fireballs to simply pop out of existence. The AI has, in its own simple way, become a lucid dreamer, able to exploit holes in its own memory to its advantage.
While in the real game the number of enemies on screen slowly escalates, the hallucination isn't quite sure how many foes there are supposed to be, so you'll see the vague shapes of enemies flicker into and out of existence against the far wall. Still, it remembers the mechanics well enough that fireballs are only launched by monsters currently present in the simulation.
Despite these quirks and holes in the hallucination, the dream-trained AI handled itself quite ably once transferred back to playing 'real' Doom, carefully strafing back and forth as it dodged increasingly dense waves of enemy fire. The researchers also created a basic driving game (also playable) with the AI attempting to navigate hazy memories of a winding 2D road.
While limited in its applications at present, this is a very different (and arguably more human) way of teaching an AI, with a layer of abstraction between the final simulation and the internal learning process. After all, our own heads are full of imperfect memories and impressions of experiences that we can poke and prod at, consciously or otherwise.
Perhaps in future, AIs in games will develop human-like foresight through simulated approximations like these. It increasingly seems that the future of AI lies in self-taught agents, as opposed to human-made trees of available reactions and probabilities.
Androids may dream of electric sheep someday, but for now, nightmares about Doom will have to do.