You Wot? Space Engineers Devs Making Artificial Brain

Of all the fanciful claims made of video game technology, my favourite has always been neural nets and artificial brains. Imagine if video game men were alive! Your soldiers would learn from battle! They’d write letters to their virtual families – which you can read! Gasp as the life leaves their little digital eyes, and wonder what they believe comes next! Oh, it’s always a load of tosh.

You’ll excuse me if my meatbrain smirks as I respond “Whaaat?” to Space Engineers developers Keen Software House announcing plans to make an AI brain “which operates at the level of a human brain and can adapt and learn any new task”. Bit late for an April Fool, isn’t it?

Keen this week unveiled GoodAI, a sister company with the goal of trapping an impossibly brilliant mind in a metal box and hoping that doesn’t make it bloody furious.

They think that a generalised AI will help technology, games, human society, and basically everything develop in unexpected ways, and be useful for so many things. Many AIs are specialised, focused on single tasks like driving or simulating this or that, but Keen hope a general AI brain will be able to do everything we can do – better. Maybe Keen’s AI will even be capable of finishing their Miner Wars MMO. Company head honcho Marek Rosa says:

“I want to reach our end goal as fast as possible, because I really see the good that general artificial intelligence will bring to our world. Imagine an AI that is as smart, adaptable, and able to learn as a human being. Then imagine telling this AI to improve itself – to make itself even smarter, faster, and more capable of solving problems. Such an AI will be the last thing humans ever have to invent – once we have this technology, our AI could invent other technologies, further the sciences, cure diseases, take us further in space than we’ve ever been, and more.”

Keen are far from the only group working on big AI ideas like this, mind. Far bigger companies, universities, and so on are on it too.

So far, Keen have hit two milestones they say are important. The first was having it learn to play Pong “from unstructured input of screen pixels and reward signals”, which I think means they left this newborn in a dark room without care and only showed affection when it twatted an object that a simple mind might see as resembling a human head. The second was having it learn to escape mazes, which they say means it “is capable of working with a delayed reward and that it is able to create a hierarchy of goals”. Good, so now it’s capable of scheming. Great job, you guys.
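GoodAI haven’t published their milestone code, so as a rough illustration of what “working with a delayed reward” means, here is a minimal tabular Q-learning sketch – a standard textbook technique, not GoodAI’s actual method, and every name and number below is made up. The agent walks a tiny one-dimensional “maze” and only ever sees a reward on the very last step, yet the value of that reward propagates backwards until every cell prefers moving towards the goal:

```python
import random

# A 1-D corridor: the agent starts at cell 0 and only receives a
# reward (+1) upon reaching the final cell. Everything here is a toy.
N_CELLS = 6
ACTIONS = [-1, +1]  # step left, step right

# Tabular Q-values: Q[state][action_index]
Q = [[0.0, 0.0] for _ in range(N_CELLS)]

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def run_episode():
    state = 0
    while state != N_CELLS - 1:
        # Epsilon-greedy action selection
        if random.random() < EPSILON:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state = min(max(state + ACTIONS[action], 0), N_CELLS - 1)
        reward = 1.0 if next_state == N_CELLS - 1 else 0.0
        # Q-learning update: the delayed end-of-maze reward propagates
        # backwards through the table, one step per visit.
        Q[state][action] += ALPHA * (
            reward + GAMMA * max(Q[next_state]) - Q[state][action]
        )
        state = next_state

random.seed(0)
for _ in range(200):
    run_episode()

# After training, the greedy policy should prefer "right" in every cell,
# even though only the final step is ever rewarded directly.
policy = ["right" if Q[s][1] > Q[s][0] else "left" for s in range(N_CELLS - 1)]
print(policy)
```

The point of the sketch is the update line: no cell except the last ever sees a reward, but bootstrapping off `max(Q[next_state])` lets the agent build exactly the kind of “hierarchy of goals” the press material is gesturing at.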

They’ve also released a tool they call Brain Simulator so we can all design brain architectures. Why not start work on your future nemesis today?

Before they throw human society into disarray and are hauled before the courts on AI abuse charges, Keen also plan to integrate the tech into Space Engineers and Medieval Engineers a bit.

“By integrating Brain Simulator into Space Engineers and Medieval Engineers, players will have the option to design their own AI brains for the games and implement it, for example, as a peasant character. Players will also be able to share these brains with each other or take an AI brain designed by us and train it to do things they want it to do (work, obey its master, and so on). The game AIs will learn from the player who trains them (by receiving reward/punishment signals; or by imitating player’s behavior), and will have the ability to compete with each other. The AI will be also able to learn by imitating other AIs.”

Check out more of the hyperbole in this video:


  1. MaXimillion says:

    I guess having two unfinished products on the market wasn’t enough for these guys.

    • demicanadian says:

      Two unfinished products that have no AI whatsoever…

      • eat5 says:

        Medieval Engineers has some really dumb AI. They dig holes and get stuck in them, cut down a tree and get stuck on it…

    • Grendael says:

      I kinda see their previous abandoned project as something to keep in mind but pretty much forgiven. That said, Space Engineers is like Minecraft: can it ever really be finished?

      They could have released it at any time this year and it would be accepted as finished with weekly content. It’s a very feature-rich game with good stability. Not to mention mod support and the ability to look at all their code and tinker with it on GitHub. They put money aside to pay people for anything cool they come up with. It’s amazing, some of the things they are doing.

      Medieval Engineers is newer, but using the same dev process that game will come into its own soon enough.

      If they want a dedicated AI company for their game and presumably for other games then that’s cool. I, overall, have good confidence in their ability to do cool stuff with the benefit of the community in mind.

      They are essentially reformed from that Miner Wars fiasco, so I don’t think they deserve the denigration.

      • Max Planck says:

        The Miner Wars MMO was a straight-up scam; I don’t think that should be either forgiven or forgotten. When they realized that the game would never happen they should have apologized and given the money back; instead they chose to laugh all the way to the bank. Marek Rosa is a lying thief and it boggles my mind that RPS puts praise in prose every time he cuts a fart.
        Marek, I know you read this: you owe me twenty dollars you little shit.

        • BlackStormMK4 says:

          are you serious…. you are butthurt over 20 freaking dollars… I haven’t even heard of miner wars or whatever…. it was your decision to buy it, don’t blame the developers, it’s not like they held a gun to your head and said “BUY IT OR I’LL SHOOT” maybe you should exercise better judgement next time

          • Max Planck says:

            It’s not about the twenty bucks, it’s about not accepting scams.
            They didn’t hold a gun to my head, of course; they offered a transaction that I could agree to. I held up my end of the bargain, they didn’t.
            Imagine you are going to the supermarket to buy some jellyworms and a couple bottles of Buckfast. You select your items, hand over the amount it says on the register, then the clerk takes your stuff and puts it back on the shelf while muttering “you should have exercised better caution” under his breath.
            I wonder if that would make you ‘butthurt’ as you put it.

            Why are you defending these conmen by the way, if you don’t even know which game I’m talking about?

    • Beez says:

      Maybe they can use that AI to fix all those bugs that have been in Space Engineers for months and months, and in some cases for over a year.

  2. GameCat says:

    So, in this timeline Skynet will be known as GoodAI? Ironic.

  3. jingies says:

    “Then imagine telling this AI to improve itself – to make itself even smarter, faster, and more capable of solving problems. “

    So basically they want to bring about the Singularity?

    link to

  4. TheAngriestHobo says:

    The whole “I want to reach our end goal as soon as possible” speech is a little alarming. It’s easy to joke about doomsday AI scenarios, but the majority of experts in the field (including, for instance, Dr. Stephen Hawking) agree that it’s one of the most likely ways humanity may destroy itself. Throwing caution to the wind and trying to race to the finish line, without lengthy and serious consultations with the institutions that have been studying this issue for decades, is almost criminally stupid.

    • Tinotoin says:

      I agree, and the whole thing very much reminds me of a recent video I saw on Computerphile.

      • TechnicalBen says:

        That video is very true! Much better than:
        “but the majority of experts in the field (including, for instance, Dr. Stephen Hawking)”

        Lol, never laughed so hard in my life. Stephen Hawking is a brilliant physicist and cosmologist. He is no AI expert though.

        • TheAngriestHobo says:

          You’re a little late to the party; that’s addressed above. If there was an edit button, I’d use it, but since there’s not, take it for what it is.

          Even if Stephen Hawking does not have a degree in the field, he is indisputably a brilliant thinker and just one notable example of the legions of mainstream scientists from all fields of study who have voiced their concern over this issue. Considering that degrees in artificial intelligence have only been offered for a couple of decades, it’s not surprising that the most notable experts in the field have crossed over from other fields of study.

          • jezcentral says:

            No, if Hawking isn’t an expert in a field, then you shouldn’t give much value to what he says about it.

            I don’t care how great a thinker he is; I’m not letting him operate on my brain, deferring to his general politics, or assuming he knows how aliens will react to us.

          • jezcentral says:

            Bah to no edit button.

            Also, argument from authority is fallacious.

          • TheAngriestHobo says:

            Okay, three things:

            1) That lack of an edit button you’re complaining about? That’s why I didn’t rephrase the line you’re taking such serious issue with. If that’s the only argument you’ve got, then it’s been made, and you still haven’t refuted the main point.

            2) You can’t call on the argument from authority fallacy to utterly dismiss someone’s basic position. I’ve elaborated the point below, as have others, using logical deduction without resorting to any expert’s opinion. Just because one argument is logically inconsistent does not invalidate the larger point.

            3) Degree or no degree in the field of artificial intelligence, Hawking’s voice carries more weight than an anonymous internet commenter. No matter how unhappy it makes you, the fact is that he’s a hell of a lot more likely to make accurate predictions on the matter than you or I. His isn’t the only voice worth listening to, but it’s certainly one of them.

          • Faxmachinen says:

            “Degree or no degree in the field of artificial intelligence, Hawking’s voice carries more weight than an anonymous internet commenter. No matter how unhappy it makes you, the fact is that he’s a hell of a lot more likely to make accurate predictions on the matter than you or I. […]”
            It’s pleasingly ironic that you prove your own point by being an anonymous internet commenter who makes up tripe and sticks the word “fact” in there somewhere.

        • stblr says:

          Stephen Hawking may not be an AI expert, but he’s indisputably a genius. He’s also not the only brilliant mind who’s worried about Artificial Intelligence as an existential threat to humanity. See Elon Musk, Bill Gates, and Nick Bostrom.

          • ThornEel says:

            He may indisputably be a genius, but he is still able to say spectacularly idiotic things nonetheless. Remember that bit about philosophy?
            Ok, he is far from the only one there, and it doesn’t make it any less idiotic.

            (He is probably right on this particular instance, though.)

      • jungletoad says:

        General AI is a “be careful what you wish for” situation. It’s like the magic genie who can grant your wishes, but there is always a catch with unforeseen consequences. We think collecting and organizing information, speeding up efficiency, and amassing large quantities of things like money are all admirable pursuits, but they fail at extremes and when they become ubiquitous.

        When you eventually have neuro-implants that allow you to interface with a network of all known human knowledge, you have suddenly lost your value as an individual, because everyone knows what you know. When you can execute any procedure with maximum efficiency, and so can every other automaton/cyborg/whatever, then you have too much supply and no value. When you end up doing things constantly “in order to” get something else, you find great difficulty in knowing when to stop and appreciate what you have acquired rather than continuing to produce at higher and higher efficiency. When we get to where we can teleport anywhere instantly, places will all begin to seem the same, especially if we share culture instantaneously via neural interfaces that download massive amounts of information. There’s suddenly no point in traveling, even though you can. It’s all the same. When we get to where we can 3D print anything we want, there’s suddenly no real point in owning anything other than a 3D printer and its composite ingredients.

        You see, advantages are a relational thing. They are only advantageous when they are unique. Once everyone has them, they lose their power. We’ll become coddled, content, complacent, and ultimately bored with our fantastic power… until of course someone or some AI begins to see the real value in struggle, chaos, and destruction. Then it will all come unraveled, and that may or may not be a bad thing.

        • Razumen says:

          I dunno if we’d stop traveling, but certainly big trips would become less glorified. Suddenly that amazing trip you took to Brazil last summer becomes just last Tuesday’s jaunt.

    • Don Reba says:

      but the majority of experts in the field (including, for instance, Dr. Stephen Hawking)

      With all due respect to Stephen Hawking, he is not in this field at all.

      • TheAngriestHobo says:

        True. That was a late addition to my comment, and I should have rewritten that sentence to be more accurate.

    • fktest says:

      I would not worry about that for a good number of years. Nobody is even remotely close to achieving (proper) AI on the level of, say, a mouse, let alone a human. Even though their examples are interesting – and better, more interesting AI is something I would highly appreciate in games – they most likely (I haven’t looked at how their tool works yet) have absolutely nothing to do with how human intelligence works, and claiming that for your product is highly dubious, or at least not-very-subtle marketing.

      • TheAngriestHobo says:

        That’s fair, but just because someone isn’t likely to learn how to make a nuclear weapon for years and years doesn’t mean I want them playing with uranium without some serious oversight.

        • Max Planck says:

          I used to have some uranium (ore). It was given to my father by a geologist friend of his, he put it in a drawer and forgot about it. Later on he gave it to me and I put it in a drawer and forgot about it. I have moved house and drawers many times since and I have no idea where it is now. I wish I still had it so I could play with it.

        • kalzekdor says:

          link to

          And yet pretty much anyone can buy some.

      • TechnicalBen says:

        A nice example of this is the latest Google Deep Dream picture AI. It’s great at finding real images and going “cat” or “dog”; however, the balance is very off for anything else, and it labels EVERYTHING as a cat or dog. Lol. That, and currently it looks rather crazy. So it’s got one or two specific tasks it can do, then goes off the rails into insanity. Very much like a robot AI, and less like an animal/person intelligence.

        Though some systems work really well: the Google AI for turning photos into video/3D environments (say, free-form walking around Google Maps) works really well. But again, it could not hold a conversation or trade on the stock exchange.

        • Dream says:

          I believe that’s because so far they’ve mostly trained it to recognise animals; which goes to show really. Google, who have tens of billions to spend on R&D, have managed to train a neural net to recognise basic traits of (some) animals. It is fascinating though, if just because it shows some possibility for an AI to have creativity: give it randomly seeded noise and it could potentially draw a unique instance of a creature – or anything else – from it. That’s damn cool.

          • TheAngriestHobo says:

            Are you talking about the program that recently made a really racist error?

          • HopperUK says:

            No, he’s talking about the ‘deep dream’ images. Have a Google, they’re very weird.

          • particlese says:

            I had myself several Googles back in the day. They sure can be weird, all right, yessiree Bob.

    • Sam says:

      With huge respect to Dr. Hawking, he’s not an expert on artificial intelligence. Neither am I, I’m just a programmer with a degree in cognitive science.

      But there’s no reason why an AI would want to destroy us, and at least with current techniques it would be simple to design it so that it desperately wants to avoid harming humans. In GoodAI’s demo the “brain” wants nothing more than to get to the end of the maze, because it has been told that is what to aim for. No existential doubt for AI, it just really wants to get to the goal state. Tell your super-AI to aim for not killing humans, and that’ll be its guiding purpose in existence. (Cue fears of a machine keeping people trapped in agonising pain, but alive, for all eternity – but again easily avoided by just telling it not to do that.)

      You can see a far more present example of a human created system that has gone out of control in our economic system. There’s little doubt that continuing to dig up fossil fuels and burn them is a bad idea, but we keep on doing it because it maximises profits, and corporations exist to maximise profits. They don’t care how they get there, they just want to get to that goal state of having all the money. The difference is that global climate change and poverty actually exists, while general AI (even if some hopefuls are “rushing”) is still very far away.

      • Al Bobo says:

        I could see a very easy explanation for why an AI would want us gone: need for resources. Just like we make wildlife extinct by destroying their natural habitat for whatever thing we need, they could do that to us, if the mighty AI race was advanced enough.
        I recently saw an article about a machine that used bugs as biofuel. Human-eating robots, here we come!

        • Sam says:

          That presupposes a desire in the AI to reproduce, or at least to continue to exist. It’s a desire we find in almost all animals because of how we evolved through natural selection. The gazelle that decides it would rather die so that a lion doesn’t get hungry is unlikely to pass on its genes. Self-sacrifice does exist in animals but only if in doing so they’re helping others of their kind survive.

          The creation of AI is completely different. Quite simply it cares about what we tell it to care about. Even if the super-AI is created through an evolutionary process, we design the parameters for selection to create each generation. If we’re foolish enough to tell our creation to want to reproduce at the cost of human happiness and wellbeing, then we deserve to serve as food pellets for our robot overlords.

          If you generate an AI with the goal of organising all scientific knowledge, it’ll spring into being; spend some time thinking; write up a neat report that outlines how to create near limitless cheap energy; then cheerfully turn itself off, happy to have done a good job. Computer programs don’t care about ceasing to exist unless we tell them to care about it.

          • TheAngriestHobo says:

            I think the dangers lie in the possibilities of misunderstanding, miscommunication, or omission. Taking your example of a computer asked to organize all human scientific knowledge, start by considering the requirements for the task. Beyond the sheer processing power and other hardware requirements, the system would need access to the internet, as it is beyond the scope of any team of scientists, however brilliant, to upload the sum total of human scientific knowledge into an isolated machine. An AI with the code flexibility and processing power to accomplish this goal would be more than capable of commandeering other systems in the network to increase the efficiency and accuracy of its computations. Even if it was explicitly told not to do so (and this is where omission comes in – all possible dangers have to be foreseen before the AI is ever given a single task), its prioritization of goals may lead it to disregard the order.

            What’s more, without an innate and complete understanding of human society, biology, morality, economy, etc., it would be remarkably simple for a networked AI to harm us inadvertently, by not fully comprehending the consequences of its actions. If it touched the NYSE or CSE servers, for example, global economic disruption could follow, potentially leading to famines, lawlessness, or even war.

            These kinds of hypothetical situations may be decades or even centuries removed from us, and current players in the field may have very little or no chance of achieving them in our lifetimes. I admit that. But why does that mean we should be any less strict in our oversight? I’d rather know that humanity is doing everything in its power to develop the technology responsibly from the get-go than wait until the problem emerges to respond.

          • Razumen says:

            Until the AI decides that eliminating human life is the only possible way to actually collect and catalog ALL human knowledge.

            Sounds like you’re talking about a general program, not an AI. I’d imagine an AI intelligent enough to do the aforementioned task would at least have some sort of basic instinct for self-preservation, not to mention quirks that weren’t intended by the original creators.

      • Jeremy says:

        I don’t think the risk is in having the AI look at humans and want to destroy them out of malicious intent, but rather, in assuming that AI thinks as humans do at all. We have checks and balances in the form of instincts and social learning that simply prevent us from going around and doing whatever we want to do. AI has the potential to simply not care, and furiously pursue a goal while being apathetic towards whatever gets in its way. Assuming that AI would have ANY emotional feeling towards humans, good or bad, is a mistake.

        • Myrdinn says:

          i think you guys play too many video games

          by the time this “AI” potentially poses a threat, your grandchildren will probably be seniors

          • Jeremy says:

            Come on, really though. It’s a conversation about a company researching and funding AI development, what should we be talking about? Tintin comics?

            AI is advancing, and companies are putting a lot of money and resources into the research of AI. ANI is harmless (machines performing one function very well) but AGI is right around the corner… probably 40 to 50 years away. From that point, ASI won’t be far behind. The first step is computing power. Human performance is improvable in a small-scale sense, and over the years we have been able to do more things as a species. Computing power, however, doubles every 18 months. Consistently. Always. At that rate, computers will have more power than the human brain by around 2030. That’s why it seems so weird from our perspective to be talking about AI, because it really does look far away. Computers can’t touch the human brain in terms of raw computing power right now, but it’s catching up at an incredibly fast rate, and it will be a blip on the timeline compared to the way it rockets ahead of us after we share a data point on the curve.

            All that being said, I know it sounds insane (read: I sound insane), but a lot of things sound insane until they just suddenly appear. Imagine trying to explain the Internet to someone from 1940.
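The doubling projection in the comment above is easy to sanity-check with a back-of-envelope loop. The starting figures here are purely illustrative assumptions, not measurements: estimates of the brain’s raw capacity vary by several orders of magnitude, and the 2015 hardware figure is picked only so the arithmetic lands near the commenter’s own 2030 claim:

```python
# Back-of-envelope check of "doubles every 18 months".
# Both starting figures are illustrative assumptions, not data.
compute = 1e13   # assumed available ops/s in 2015 (hypothetical figure)
brain = 1e16     # one commonly cited (and much-disputed) brain estimate
year = 2015.0

while compute < brain:
    compute *= 2   # one doubling...
    year += 1.5    # ...per 18 months

print(year)  # the year the assumed curves cross
```

With these particular assumptions the crossover needs ten doublings (a factor of 1024), i.e. fifteen years; change either starting figure by a factor of a thousand and the answer shifts by about fifteen years, which is why such predictions are so elastic.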

      • hungrycookpot says:

        I think that it is largely meaningless to speculate on the goals and desires of hypothetical AIs. The day we make an AI that is smart enough to improve itself, we will have created a being that is impossible for us to comprehend. Any limitations and directives we set for it would be circumvented eventually, and it’s impossible for us to know what would motivate a being so much more intelligent than we are.

        • Chirez says:

          Except that an AI made by any process we understand would have the desires it was built to have. Contrary to general belief, computers do not do anything they are not told to do. The problems arise when you consider methods, not ultimate intent. It’s not what the thing is trying to do that matters, what counts is how it tries to do it.

          What we really need, sometime between now and the distant day in which a general AI is finally built, is some way to define a ‘good’ action. As yet, no complete and consistent system of moral rules has been devised. If WE are not currently capable of understanding the difference between right and wrong, what hope does an AI have?

          • hungrycookpot says:

            “Contrary to general belief, computers do not do anything they are not told to do.”

            A program which could self improve would do exactly that.

          • Razumen says:

            Recently I read an article where a scientist used ‘natural selection’ to have a general-purpose chip program itself to successfully distinguish between two different signals. The resultant configuration used only some 30 gates out of the hundred available, with some logic gates not even connected to the rest. None of this was really what the computer was “told” to do, but after several thousand iterations that’s what it found to be the best and most efficient method given its environment.

            This is what I imagine real AI would take the form of: something that relies on simple and iterative steps to find the ‘best’ solutions to certain problems, but on the whole ends up being an indecipherable, unique mess that would take just as long to figure out (if not longer) than it’s taken us to figure out our own bodies. (Not that we’re done with that either, of course.)
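The select-and-mutate loop the commenter describes can be shown in a few lines. This is a sketch of the general idea only, not the FPGA experiment itself; the “circuits” are just bit strings, and every name and number is made up. Note that nothing ever tells the program *how* to match the target, only how well each candidate did:

```python
import random

# Candidate "circuits" are bit strings; fitness is how well a candidate
# matches a target behaviour. This is a toy genetic algorithm, not the
# evolved-hardware experiment described above.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(genome):
    # Number of positions where the candidate matches the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

random.seed(1)
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for _ in range(100):
    # Keep the fittest half unchanged, refill with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print(best, fitness(best))
```

The resulting genome is whatever survived, not anything a programmer designed, which is exactly the “indecipherable unique mess” quality the comment points at: the loop only ever scores outcomes, never prescribes methods.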

    • Turkey says:

      I like that whenever people discuss the Skynet doomsday scenario, Asimov’s laws of robotics go straight out the window. Why wouldn’t we hardwire these things to preserve human life at all costs, and why would we give them ultimate power with no limitations?

      • USER47 says:

        As someone who invokes Asimov, you’ve probably read his stories and know there are lots of scary scenarios happening within the constraints of the Three Laws.

      • Solrax says:

        Exactly how you would program such things as Asimov’s Laws is an open topic of research in the AI field.

        But that assumes that you want to in the first place. With DARPA and the DOD funding much of the AI research in the US, I think we can pretty much assume that the military won’t have much interest in robots that won’t kill when they are told to. Or more likely, will have no idea they are killing anything, just attacking a target and making sure those blobs in their sensors have stopped moving or have disappeared.

    • aleander says:

      TBH, I’m less worried about artificial sentiences than about people who think it’s okay to create an artificial sentience to play the role of a “peasant” for some gamergater.

  5. BlazeHedgehog says:

    The Dwarves of Dwarf Fortress actually write prose about things that happen in their world, and they’re 100% artificially intelligent, so that’s not really a load of “tosh.” ;p

    • Grendael says:

      That’s almost right. It’s more that the dwarves generate the form and style of the poem, and the content is left to our imaginations.

    • TheAngriestHobo says:

      Also, you can’t claim that something is 100% artificially intelligent unless it meets the criteria for Strong AI (link to), which the dwarves of DF do not.

      • hungrycookpot says:

        Not yet anyways….

        • Chirez says:

          In fairness, if anyone in the world is going to create a general AI entirely by accident, it’s Toady.

          • teije says:

            Good point. But of course the AI would be in permanent pre-release.

          • Malibu Stacey says:

            In fairness, if anyone in the world is going to create a general AI entirely by accident, it’s Toady.

            Well we’re all safe forever in that case as it would only be able to have one thought every few years if it’s constrained by Toady’s ‘all processing in a single thread’ model of development.

          • Razumen says:

            And then, as it inevitably takes over and destroys the world, its only uttered phrase would be: “Losing is fun!”

  6. Sam says:

    {Cynical :
    “Their progress so far is to recreate years-old research by other people. Singularity, here we come!
    More realistically, after a couple of years they’ll apply it to whatever [Noun] Engineers game they’re making at the time. They’ll create an AI that is kind of impressive if you know how it’s working, but to someone just playing the game it’ll look like a slightly buggy standard AI system. You see, it tried to walk into that wall for really fascinating reasons to do with evolved neural networks.”,

    Hopeful : “Good quality AI can do amazing things for games, and we need to think far beyond its traditional role as little more than applied pathfinding. An appropriate AI system could make procedural generation results that are as interesting to explore as something authored. It could understand the innate narrative of a player’s actions and use that to carve out a unique dramatic arc. One day we might even be able to talk to the monsters.
    Seeing a games company taking it seriously enough to make a dedicated off-shoot is very promising. Although let’s not forget the huge amount of work already being done in academia (and the vast machine intelligence being spawned by Google.)”}

    Apply your neural nets to select which you’d like to read.

    • Grendael says:

      Good post

    • hungrycookpot says:

      I think before long we will see game development studios start to put out products like PhysX, only for AI: a general solution that is configurable enough to be slotted into many different types of games and provide a solid baseline of AI for people to work with. Game theory is a big stepping stone along the path to developing a strong AI, so I think you’re right that some real work put in in the gaming sector could be very valuable to the invention of more general types of AI, and eventually the holy grail of that strong general AI.

  7. Da5e says:

    Labyrinth Constellation by Artificial Brain was one of the best death metal releases of last year, so I approve of this.

  8. Agent.Brass says:

    Counter to that is the idea that it could solve every problem ever known. This is a long but fascinating read on the subject, well worth anyone’s time in my opinion:

    link to

  9. Al Bobo says:

    They should give it access to reward/punishment buttons and see what happens.

  10. Stone_Crow says:

    “Arrrgh! They are developing a self aware AI!”

    “WHAT… have they not seen Terminator! Oh my God! We’re all going to… wait a sec, who are we talking about?”

    “Those guys who half finished Space Engineers”

    “Oh.. never mind then. “

    • Razumen says:

      It’s a good thing that the intent to develop actual AI is the only real prerequisite to successfully making one… oh wait.

  11. Bluerps says:

    Why do they think that this is something they can do?

    It’s great that they want to do some AI research – better AI is something that could be useful in many games – but “an AI that is as smart, adaptable, and able to learn as a human being”? An AI brain “which operates at the level of a human brain and can adapt and learn any new task”? These are goals so far beyond what current AI research has achieved, or will achieve in the near future, that it is highly unlikely a small video game company is going to reach them any time soon.

    There is a reason that AI projects that try to develop something with practical applications (as opposed to pure research projects), like for example an autonomous car, develop AI that is specialized to the specific task. It’s because that is already very hard. An AI that operates on a human level, without any specialization, would be even harder to develop, which is why nobody (as far as I know) who wants to end up with a usable piece of software any time soon is doing that.

    • tanith says:

      I was thinking along similar lines.

      Why do they think they can do something that scientists (people from fields like neuroscience, computer science, physics, and so on, who have been working on this for over half a century with vastly more experience and arguably many more resources) have not been able to do even at a very basic level?

      Yes, there are machine learning algorithms; however, they exist for very, very specific cases. An AI that works as well as a human brain would require completely different technology from what we have today, since our type of processor is completely unsuitable for simulating a complex neural network. I’m not saying it cannot be done, but it would be absurdly inefficient.

      This is like MIT’s vision project in 1966, where they thought they could solve machine vision in a month.

      And yeah, Keen Software House is not a company I would ever trust with something like that. Miner Wars was a mess, and they never finished Space Engineers. Last I played it, it was a buggy, unoptimised mess with no clear structure… and I guess they just got bored with it, so they started Medieval Engineers. And now they are bored with THAT and are looking for the next big thing.

      Oh well, they will realise soon enough that they will never achieve what they set out to do. They will probably make some simple AI that can’t do nearly as much as the best video game AI on the market and call it a day.

  12. Gap Gen says:

    They might make something interesting that’s not what they claim, or they might spend the time more efficiently by throwing banknotes out of their office windows, but either way.

  13. potatoesy says:

    I think the main problem with making an “AI” player is they will learn how to exploit bugs in the game to win.

    • Chirez says:

      It’s worth remembering that any AI built to exist in a game will likely not be trying to ‘win’.
      That’s not the point of game AI. It’s trivial to build an AI in most games that wins every single encounter with a human player, simply because it is faster and more aware.

      The purpose of a game AI is to entertain the player, by any means necessary. I’d be more worried about them deliberately glitching just because it’s funny.

    • Blad the impaler says:

      I would think it ironic to be cannon rushed by an unfriendly AI.

  14. MrBehemoth says:

    Let’s pretend for a minute that these claims are not ridiculous hyperbole, and that one day our videogame opponents will be true AIs with simulated brains, with desires, drives, and emotions simulated at the neural level. If that were possible, it would be completely unethical. It would be like breeding a subclass of humans so that you could kill them in real-life war games, or at best slavery. It’s never going to happen.

    • Kollega says:

      So naturally, what is more likely to happen (for economic as much as ethical reasons) is your personal-helper AI controlling your enemies and NPCs in the game, as if it were a human player sitting in front of a computer playing a multiplayer FPS with you, or a GM in a tabletop game building a world and acting out NPC conversations for you.

    • twaitsfan says:

      Excellent point. Actually, that gives me an idea for a short story…

    • potatoesy says:

      I think what you mean is it would certainly happen, I mean just look at history.

    • Chirez says:

      Except that would only be true if you deliberately designed them to experience pain, to comprehend mortality, to desire continued existence and to actually die when shot. There may well be ethical arguments, but they won’t be anything like existing ethics. Human ethics cannot possibly apply to non human intelligence. We’ll need a whole new branch.

    • Gap Gen says:

      “It would be like breeding a subclass of humans so that you could kill them in real life war-games, or at best slavery. It’s never going to happen.”

      Let me introduce you to my friend, human history.

    • Razumen says:

      Read Bedlam by Christopher Brookmyre, it’s not quite about AI, but it does have an interesting take on what would happen if human consciousness could be copied and “lived” on in a computer network.

  15. Babypaladin says:

    I thought a truly human-like AI was actually impossible? Due to the limitations of Gödel’s incompleteness theorems or something?

    • aleander says:

      Only if you assume that humans aren’t limited by that. And there’s no reason whatsoever to think so. In general, that book (The Emperor’s New Mind) was a terribly embarrassing read.

      OTOH I don’t think we’re going to have human-like AI because it’s both hard and kinda pointless.

  16. twaitsfan says:

    “I invested $10M USD into what is now our GoodAI company.”

    Somebody ought to introduce this guy to Curt Schilling…

  17. twaitsfan says:

    Actually, I think Bethesda already accomplished this. Ever heard of Radiant AI?

  18. gunny1993 says:

    Why fear an ultimate intelligence when there is stupidity with the same power all around us?

  19. Muzman says:

    Good gravy. This is like taking Ray Kurzweil and using him like he’s Deepak Chopra. (Well, in a lot of ways he is like Deepak Chopra.)
    “If we make AI we won’t have to make anything else!” Hold tight for AI curing cancer, getting you free electricity, and making your stereo sound better too, I guess. It’ll be slapped on more products than ‘Digital’.

    Good luck to them, I guess. We’ll all find out if they manage to solve the hardware problem. Hint: it won’t be with simulation. There’s a lot of nonsense about how, when you run out of processing power, you just “let it loose on the internet!”, like in countless sci-fi stories (including Age of Ultron, recently). Well, the internet is a pretty complex place, but when you’re talking about human-brain-level stuff, never mind super-intelligent AI, its density, connectivity, and speed start to look kind of inadequate by comparison. (It may not be necessary to simulate a human intelligence, of course, but that’s generally what we expect of such a thing when we want to compare.)

  20. LionsPhil says:

    Everyone remember Black & White?

    Good good.

    • Max Planck says:

      It was that game where you would teach your big cow monster to toss your little worshippers over a mountain range, right? Then later some other monster would come by and fight, then something-something?
      I didn’t like it so much.

  21. Moogie says:

    Eh, make your algorithms as complex as you want; that’s not true AI, just a script that can get really good at the game you program it to interface with. What Steve Grand is building is far closer to the goal of proper AI, and he’s only just got his creatures navigating terrain and storing memories during sleep.

  22. Festro says:

    Making an intelligent AI would be little different from two parents having a child. Depending on how it is raised or taught, it could be everything Skynet became, or it could be entirely the opposite. The thing is, an AI isn’t the only thing that can be totalitarian; humans throughout history have done the same. In the end it depends on two things: just how human-like this AI is, and how and what it is taught.

  23. racccoon says:

    In that video, at 1:19, right after his speech: “maybe one day…” an AI sniper shoots him in the head!
    All humans are eliminated!!
    Silly Steamies

  24. Captain Deadlock says:

    If you ask the Keen AI to do anything for you, it’ll mess around for years and then provide you with an amazing sandbox in which you can do the job yourself without any challenge or opposition