Valve’s Abrash On Virtual Reality, Wearable Computing

By Nathan Grayson on August 22nd, 2012 at 8:00 pm.

Valve enjoys keeping secrets. And while I’m beginning to suspect that Half-Life 3 will ultimately turn out to be a giant ray gun that erases all memories of the Half-Life franchise from our brains forever, there’s one thing Valve’s been uncharacteristically upfront about: its fascination with the future. Wearable computing, augmented reality, and – perhaps most pertinently, given recent extremely promising developments – virtual reality. During QuakeCon, I got the chance to sit down with Valve’s most vocal proponent of these new technologies, programming legend Michael Abrash. However, while Valve’s obviously putting some serious work into breaking new technological ground, Abrash was quick to point out that it’s still all in service of a singular end goal: entertainment.

RPS: What aspects of virtual reality is Valve really exploring? Obviously, you’re here at QuakeCon to support the tech, but what’s Valve’s stake in it? 

Abrash: So I had a family medical emergency. I was supposed to be here yesterday for John [Carmack's] keynote. Yesterday morning, I canceled everything. But then my daughter very generously volunteered to come to California and take care of the emergency because I really wanted to come to this. And part of it is because this is a pivotal moment. It’s possible VR won’t succeed, but I think that this is the shot it has. Right now is the best shot it’s ever had by far, and it has a really good chance of this being the beginning. And I mean really the beginning. Things are going to get way, way better.

It was obvious with Quake or even Doom that they were first-person shooters. But what was not obvious with Quake was the hardware change of the Internet – that you’d be able to have permanent servers. Then you’d have people putting up their own sites, clans, widespread mods, tournaments – the whole thing. Seeing that was John’s bit of genius. The rest was engineering. It seems obvious now, but it wasn’t obvious at the time. With VR, yes, it’s obvious we can go and have a first-person shooter or an immersive experience, but the real question is, what’s going to happen that hasn’t happened before?

RPS: I was thinking about that while playing with Oculus Rift. I mean, the potential for non-violent games seems huge. Just staring at things was amazing to me. What if it paid attention to how long and where you stared, though? People could react. You could simulate awkwardness. I think that’d be pretty incredible, actually. 

Abrash: It’s true. So I think if we went back to 2005 and said, “I’m gonna give you this phone, and it’s gonna have as much processing power as a computer and a touch interface,” I don’t think you would’ve immediately said, “Oh, these are the games that are going to end up being successful.” You probably wouldn’t have even predicted that there’d be so many people buying and turning it into such a huge market. So I don’t know what VR will turn into, but I’m pretty confident it’ll turn into something great if the hardware can be good enough. That’s the thing that has to happen. I think [Oculus Rift creator] Palmer Luckey’s stuff will be good enough to get that started, and then it has to evolve rapidly.

So first, I’ll tell you what’s necessary for VR to work well. For VR to work well, you need display technology that gives you an image both your brain and eye are happy with. Trust me, that’s much harder than you think. Even if it was just a HUD, people wouldn’t be that happy, because you’re always moving. Your head is never still. And this is moving relative to the world, and if your brain is trying to fuse it, that can be rather tiresome. I’ll tell you there are lots of issues with getting that image up in front of you.

Second, if you want to do augmented reality – and AR and VR are what are interesting to us, because they’re entertainment experiences – wearable computing and just getting information up will be Google’s specialty. They’ll be great at it. How could they not be? So if you want to put things up in the real world, you have to know exactly where you are and what you’re looking at. Or you have to be able to process images.

So you’ve seen iPhone apps where you can make people look silly – mess with their faces, put hats on them, whatever. Well, if I want to put a hat on someone, I have to know exactly where he is. As I move, as he moves, the hat has to do the right thing or it doesn’t work. So tracking’s a really hard problem. Both John and I talked about that. Knowing your angle isn’t that hard, because you can get it out of a gyroscope. It does drift over time, though. But knowing your position is actually very hard. John talked about the Razer Hydra [motion controller], which has a base station that can track things relative to it using magnets. That’s fine if you’re within range, but the range isn’t that great.

So I think the solution is very similar to the way humans work. Humans have this three-axis sensor [the inner ear’s vestibular system], and then your visual system corrects for it. So if we have a gyroscope and a camera, and the camera does the correction for the gyro, I think that’s a long-term solution. But doing that processing requires a camera that can run fast and at high resolution. It also requires processing that information, and that’s a power issue, a processing issue, an algorithmic issue – these are hard problems.
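
To make that gyro-plus-camera idea concrete, here is a minimal sketch of the classic complementary-filter approach it resembles – not Valve’s code, and every name and number below is illustrative: integrate the fast-but-drifting gyro each frame, then nudge the estimate toward a slower, drift-free reference such as a camera-derived orientation.

```python
# Illustrative complementary filter: fuse a drifting gyro with a slow,
# drift-free camera reference. All values are made up for the example.
def complementary_filter(yaw, gyro_rate, camera_yaw, dt, alpha=0.98):
    """Return a fused yaw estimate in degrees.

    yaw        -- previous fused estimate
    gyro_rate  -- angular velocity from the gyro (deg/s): low latency, but drifts
    camera_yaw -- absolute yaw recovered from camera tracking: slow, but drift-free
    dt         -- time step in seconds
    alpha      -- how much to trust the gyro each step (close to 1.0)
    """
    gyro_estimate = yaw + gyro_rate * dt                      # dead-reckon with the gyro
    return alpha * gyro_estimate + (1 - alpha) * camera_yaw   # nudge toward the camera

# Example: the gyro drifts at +0.5 deg/s while the camera says we haven't moved.
yaw = 0.0
for _ in range(1000):                                         # 10 seconds at 100 Hz
    yaw = complementary_filter(yaw, gyro_rate=0.5, camera_yaw=0.0, dt=0.01)
print(yaw)   # settles near 0.245 deg instead of drifting toward 5 deg
```

The weighting captures the division of labour Abrash describes: the low-latency gyro dominates frame to frame, while the slower camera correction keeps the drift bounded.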

One example: if you were sitting here and said, “OK, I want to know which way I’m looking and reorient myself.” You know how far down there it is [gestures down a hallway] to any pixel I could use to reorient myself? And it’s not very well lit, either. In general, to do this is a hard problem.

And then, even if you’ve got tracking and display, what’s your input? Right now, it’s a game controller or a keyboard-and-mouse. That’s great, because we have a class of games people are used to in first-person shooters and a class of controllers people understand. All we’re changing is the display. Doing it incrementally that way is far more feasible, because you know how to enter that space and give people an experience they want quickly.

In the long run, it seems unlikely that’s the interface you’ll want. Maybe you’ll want to manipulate objects with your hands. Maybe it could be a Kinect-like thing. It could be gloves. Who knows? Maybe you just have a little touch pad on your finger. Maybe it’s gesture-based. Have you read “Rainbows End” by Vernor Vinge? In that, he just describes it as people controlling things by simply running or moving. It’s like people have learned this language of interacting with smart clothes. OK, that sounds to me like he’s kind of hand-waving it because he doesn’t have a good solution.

So input’s actually wide open. I doubt there’s one input modality. There’s quite a few potential ones – including speech. But all those things are going to get figured out over a period of time. I mean, I can tell you there was a day 17 years ago when no one had ever heard the phrase “mouse look.” John had to figure that out. It wasn’t clear how the mouse would work. It wasn’t even clear whether moving it forward would make you look up or down. John had to go and figure out the entire syntax of how to control first-person games, and he’ll just do exactly the same thing here. But it has to be figured out.
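
For a sense of what “figuring out the syntax” of mouse look amounts to, here is a toy sketch – illustrative only, not id’s code: a mapping from raw mouse deltas to yaw and pitch, with the forward-looks-up-or-down question Abrash mentions surviving today as the invert-Y option.

```python
# Illustrative mouse-look mapping: horizontal motion turns you, vertical motion
# tilts the view, and the sign of the vertical axis is just a convention.
def mouse_look(yaw, pitch, dx, dy, sensitivity=0.1, invert_y=False):
    """Map raw mouse deltas (counts) to view angles in degrees."""
    yaw = (yaw + dx * sensitivity) % 360.0        # turning wraps around
    sign = 1.0 if invert_y else -1.0              # default: pushing forward (dy < 0) looks up
    pitch += sign * dy * sensitivity
    return yaw, max(-89.0, min(89.0, pitch))      # clamp so the view can't flip over

yaw, pitch = mouse_look(0.0, 0.0, dx=40, dy=-20)  # nudge the mouse right and forward
print(yaw, pitch)                                 # 4.0 2.0 -> turned right, looking slightly up
```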

RPS: Oculus Rift also gave me a weird sense of weightlessness. It was incredibly immersive right up to the point where something hit me or even just got too close for comfort. I couldn’t feel anything, even though my brain was certain there should’ve been something brushing against my skin. 

Abrash: So that’s part of input, but my personal feeling is that – and this is far enough out that it’s something I’m not personally looking at – but my speculation is that there will be haptic devices. Once you have immersive VR that people are really using so that there’s a market for it, there will be experiments all over the place. My guess is that there’ll be some sort of form-fitting, shirt-like thing, and it’ll have some kind of percussive devices so it can tap on your chest and arms. That seems like an obvious and manageable thing. But there are so many ways that could go.

RPS: What about some kind of neural-interfacing-based thing – like Carmack mentioned during your panel?

Abrash: So here’s one thing about John Carmack: if John is interested in something, you can believe that something is within the horizon in which it will become a viable product. Because John never, ever wastes his time. I mean, when I was at id, people were coming out with VR stuff. And John was looking at it and saying, “No, this isn’t going to be successful.”

He is correct when he says neural interfacing stuff might be interesting, but it’s not in the timeframe of what might be a product in the next three years or even five years. I mean, there’s a professor at Cornell [University] who’s doing research into how encoding down the retina might work. I was actually surprised to find that out. Gabe [Newell] dug up that piece of information. So there are people who are determining [the effectiveness of that technology], but think about it: you have to get the signal into a human’s optic nerve or brain. Just getting through the approvals for that – even if you had it working today – [would be incredibly difficult].

So doing the kind of VR that Palmer Luckey’s doing – doing AR, as well – this all will prepare for a future in which neural interfacing will be the way to do it. But no one’s gonna jump straight to that. The other thing is that, once we do go to neural interfacing, it won’t really change anything. Sure, you won’t have a thing on your face, but you’ll effectively be getting visual input in terms of how it’s presented to your brain. You’re still looking through what you think are your eyes and so forth.

It’s kind of like if you went back to 1995 and developed the best BBS software ever. How much value would that be to you now? How much value would evolving it have been? None. Because BBS is a transitional stage. My belief is that both AR and VR will eventually merge. Similarly, I think a tablet is a transitional technology, because you don’t really want to lug that thing around. But if you have the glasses with the same kind of information, that’d be far more useful – which is what Google is clearly betting on. They want to replace your phone and your tablet.

But if you’d invested your time and money in the Internet in 1995, that’d have huge value. The Internet’s in kind of a final state – where BBSs and modems, those were transitional. This is the final state, and what we’ll see is refinement on that. More stuff delivered over it. Maybe the structure will evolve. But you won’t see it as something new. Similarly, I think once we get to wearable technology, it will evolve dramatically, but it’ll all just be the same model getting better and better.

So that’s an important distinction. Even if we get all the way to direct neural connections, it won’t really invalidate what Palmer did. It’ll just be an extension of what Palmer did.

RPS: You mention both virtual reality and augmented reality. Oculus Rift and things of that sort are obviously VR right now, though. Can modern AR – little gimmicky phone apps and things of the sort – really compare?

Abrash: Well, we’ve set out to figure out what’s an entertaining experience. What are people going to want to do with AR? And that’s a hard question, because you have to have prototypes. I mean, anyone can sit around and say, “Oh, you can put a hat on somebody’s head or change their face or geocache.” And maybe we’ll get there and it’ll be fun, but we need to get there to find out. We need to try those experiences.

So we set out to find out what the technologies will be able to do in that timeframe. What could happen? So if you read my blog, I’ve got a post that explains why we’re not gonna have hard AR soon – absolutely, flat-out, for sure. I’m pretty sure. Maybe someone will solve this, and I would be thrilled. I’d love to be wrong, because this would be better than what we thought. But you can only add color in [with AR]. You can’t overwrite things, because you’re putting up images, but you’re also seeing through [your display]. So I can’t replace someone’s face with another face. I can only superimpose something on it that blends with it.

Well, that’s pretty significant, because it means you’re never gonna do something that feels completely real except with VR – because AR has to replace something in the real world. So there’s a whole branch of that tree that just got lopped off, right? Because you say, “Well, we’re not gonna be in Rainbows End or Snow Crash at that time. It’s not going to happen.”
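
The additive constraint is easy to see in a toy model. The sketch below is an illustration with made-up pixel values, not anything from Abrash or Valve: an optical see-through display can only add its overlay to the light already arriving from the world, so no pixel can get darker, while an opaque VR panel can show whatever it likes.

```python
# Toy compositing model: see-through AR adds light on top of the world,
# while an opaque VR display replaces it entirely. Values are illustrative.
def see_through_ar(real_pixel, overlay_pixel):
    """Optical see-through: result = real light + emitted overlay, clamped to 1.0."""
    return tuple(min(1.0, r + o) for r, o in zip(real_pixel, overlay_pixel))

def opaque_vr(real_pixel, rendered_pixel):
    """Opaque display: the real world is blocked; you see only what is rendered."""
    return rendered_pixel

real = (0.8, 0.6, 0.5)        # a brightly lit face in the real world
dark_face = (0.1, 0.1, 0.1)   # the darker face we would like to swap in

print(see_through_ar(real, dark_face))  # ≈ (0.9, 0.7, 0.6): brightened, not replaced
print(opaque_vr(real, dark_face))       # (0.1, 0.1, 0.1): full replacement works
```

That is why superimposing a hat works in see-through AR but replacing a face only works in VR.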

And then you go down this whole spectrum of things and reach Google Glass. And it’ll show you information. That’s useful. I mean, I think Google’s doing something very, very pragmatic. It’s a good step into this for them. But it’s not interesting to us because we’re an entertainment company. We do games.

RPS: So what’s the next step? And where do PCs and – by extension – PC gaming factor in that, if at all? 

Abrash: AR is hard. You can quote me on that [laughs]. So I thought about, well, what else is interesting? What might have potential? And I came to the conclusion that VR is kind of different-but-equal in that AR and VR will eventually converge. In the long run, AR and VR will be the same thing, and they’ll just opaque your glasses to different extents. So a lot of your experiences will be VR. Because when you sit at your desk, you don’t want AR. What would you use AR for in front of your screen? What you really want is VR and to not have the screen. With high enough resolution, you can just put your screens around you. And then your desk is wherever you choose to be.

Now, when you go out in the world, that’s AR. And that will radically change the way you interact with the world and other people. So I don’t want to downplay it in any way. I just want to say that, when you have your magic AR glasses many years in the future, you’ll still want to do VR things as well.

Also, VR’s all about entertainment. Right now, what else can you do in VR? But in AR, you can do other things. And VR is clearly more tractable. You don’t have to solve my tracking problem. You have opacity, so you don’t have the engineering problems of see-through. Whole bunch of reasons. So I came to the conclusion that VR is equally interesting to look at and more tractable in the near term.

RPS: So what’s Valve’s goal in that space? Is utility a major part of it? Or are games and entertainment still king? 

Abrash: We’re pursuing what wearable stuff could exist that could enable you to have new, compelling entertainment experiences. It’s about giving customers experiences that they want. We’ll do whatever it takes to make that stuff happen. It’s a big space, and it’s unexplored.

But the one thing I’ll point out with AR is that you really don’t know what’s a fun experience. So you can talk about basically geocaching or LARP-ing – because that’s really what it is at this point – and most people just really aren’t that excited about it. It requires interacting with the actual world around you. It makes you dependent on the world. It’s not a concentrated experience. And maybe that’ll be fun. Maybe it’ll be the Farmville of AR – which a lot of people play, so I’m not trying to downplay that.

But we’re looking for deep, rich experiences. I don’t know what they are. I’m not a game designer. But the technology has to be there. Again, hardware changes and enables things. So the first question is, how can the hardware change to really support experiences? That’s the first thing we’re looking at.

RPS: Thanks for your time.

85 Comments

  1. Arkon540 says:

    Staring Eyes tag

    • Nathan Grayson says:

      The grievous error has been corrected. In penance, I have removed my own eyes, as they are clearly unfit for the task at hand. Eye. Whatever. I now type with a staring eye dog.

    • Josh W says:

      You know, I was actually thinking of commenting “staring eyes tag” before even checking if you’d added the tag. That is one of the staring-ist of eye-pairs.

  2. lhzr says:

    Not to be “that guy”, but.. AR is augmented reality and I assume that’s what you and Michael meant by alternate reality. Alternate reality is something else.

  3. Pucho says:

    What is this, the Lawnmower Man? I’m sorry, I just cannot get behind this. Not just the VR element, 3D TV and always-on-me computing as well. I think a lot of people, like me, are feeling a little too invaded and frankly worn out by technology.

    I like computers and videogames. A lot! But there’s no way in hell that I’ll be putting on a helmet at home, let alone while walking down the street. A plastic guitar in my living room was too close for comfort.

    • Hoaxfish says:

      I can’t get behind the idea either, half the time I’m looking away from my computer monitor anyway (i.e. my attention is often split from the games I might be playing)

      Whatever “weightless” technology might emerge on the other side of the “wear a helmet” stage of development could be interesting, but I think that initial barrier is going to be a deal-breaker.

    • Lord Custard Smingleigh says:

      In the future you’ll be able to strap that plastic guitar to your head to play while you’re driving!

    • Fazer says:

      YOU CAN’T STOP THE PROGRESS

      • tormeh says:

        “These people Adam, they’re like ghosts; always in the shadows, always hiding behind lies and proxy soldiers. I need you to find them. They can not stop us; they can not stop the future”

    • Nand1 says:

      What is this, Star Trek? I’m sorry, I just cannot get behind this. Not just the pocket calculators, car phones and home computing as well. I think a lot of people, like me, are feeling a little too invaded and frankly worn out by technology.

      I like typewriters and board games. A lot! But there’s no way in hell that I’ll be typing away on a computer at home, let alone while I’m at work, trying to do my job. An atari 2600 in my living room was too close for comfort.

      • frightlever says:

        Yup. I think the technology will leave some people behind and draw others in. It’s like social gaming – that also appeals to some people but not others. As is often pointed out, just because TV came along, it didn’t mean the end of radio. Or cinema or books or walks in the park.

        The educational use for this sort of thing could be immense.

    • UmmonTL says:

      That’s well within your rights; I can certainly see problems with VR and AR, especially if you go into a Shadowrun-like future scenario where being connected and wearing an AR device is almost required for daily life.
      But just for entertainment I’d say it’s the way of the future. In its most basic form VR goggles should be equal to, if not better than, a three-monitor setup and most likely much less expensive. Add in the option of turning your head to get even more field of view and you’d need a CAVE to match it. If you keep the head movement to a minimum you could play while lying down, which might even be more comfortable than sitting at your desktop. The big question is how they will do the controller and force feedback without making it ridiculously expensive.

      • Pucho says:

        Sure, but how many people have a three-monitor setup? It may or may not become mainstream. Personally, I can’t see this happening. I see this relegated to the enthusiast realm at most.

        • UmmonTL says:

          The question should be how many people WANT a three-monitor setup. I’ve certainly been thinking about it but can’t really justify the cost. The VR glasses have the potential to be better at a much lower price point, because not only do you not need three high-resolution monitors, but the hardware required to run this should also be less demanding.

    • D3xter says:

      It’s important to distinguish between tech that’s “hyped” for marketing purposes – like most of the 3DTV stuff or WiiMote/Kinect/PSMove and whatnot – and something that actually works and feels impressive.

      As such I tried 3D gaming back in ~1998 or so and it was alright but not particularly great and I mostly skipped all these generations.
      In regards to the VR helmet Palmer Luckey is working on, or some of the new “motion control” devices like the Razer Hydra, I am a lot more psyched, because they seem to be getting into the area where this stuff actually works, provides a benefit, and makes for a genuinely different experience.

    • Totally heterosexual says:

      YOU CANNOT STOP THE FUTURE

      • Ratherly says:

        But I can mope quietly in the corner while the world moves on.

    • Contrafibularity says:

      That’s all well and good, but DO YOU PLAY DOOM?

    • ulix says:

      It will happen eventually, and it will permeate society. Obviously I don’t want to wear a helmet either, but glasses? Why not? Hundreds of millions of people wear glasses today. Those glasses would just have added functionality (most of which you already have, just on other devices and in more complicated forms).

      The next step would be screens in your contact lenses, and the step after that a neural interface, as talked about in the article. It’s not a matter of if it will happen, it’s a matter of when it will happen.
      I guess that when I’m 70 (I’m 27 now) most people in the Western World will be connected to the web through a neural interface at basically all times. They will not have to wear anything (apart from the stuff they already have implanted in their heads).

      And scary dystopian mind-control images aside (which of course COULD happen), wouldn’t that be incredibly cool and awesome?

      • Fanbuoy says:

        “They will not have to wear anything (apart from the stuff they already have implanted in their heads).”

        Are you talking virtual clothes? It sounds awesome! Cold, but awesome.

        • Drakedude says:

          This worries me. It’s so cool, but it worries me oh so very much.

      • PleasingFungus says:

        I think you may have read BLT a few too many times. Scary dystopian mind-control images indeed.

    • Geen says:

      YOU CANNOT STOP PROGRESS.

        • Wang Tang says:

          Your link actually proved him wrong, because – in Germany – you can indeed stop it:

          Unfortunately, this video is not available in Germany because it may contain music for which GEMA has not granted the respective music rights.
          Sorry about that.

          Yeah, I’m also sorry about that.

  4. Hydrogene says:

    Interesting read! But why is Michael Abrash a programming legend? Does everyone know who he is?

  5. evs says:

    This technology is exciting, but I hope that the visionaries spare a thought for those of us with disabilities. We already feel left out from 3D movies! It would be nice to hear someone say they had actually thought about how to make VR work for us. I contacted Oculus Rift about it and never heard back.

    • Fazer says:

      If you’re concerned about stereoscopic vision, Michael Abrash said in the QuakeCon panel that it’s not really needed to get the immersion effect in VR. The job of locating yourself in the world is done mostly by the inner-ear canals, which tell the brain your position and orientation.

    • UmmonTL says:

      What sort of disability are you talking about? The Rift is basically just a high-resolution display fixed right in front of your eyes, so as long as you can use a normal display there shouldn’t be any problems, I think?

      • Fazer says:

        It’s not as simple as that – it’s actually able to show different images to each eye to create a 3D effect (but it works differently than 3D in TVs). People with astigmatism or a squint will still be able to enjoy using it, though.

    • TheWhippetLord says:

      Indeed. Accessibility sometimes seems to be the price of progress in computing – for example, your MMO guild might raid more effectively with voice comms, but suddenly the guy with a stutter who’s been happily type-chatting away to you has an extra obstacle. On the other hand, VR tech in particular could expand the range of experiences available to some of us with extra limitations. You win some, you lose some, I guess.

    • sophof says:

      I’m not sure what you want them to do about it, and neither were they, probably. Of course they cannot change any disability that you have, and they cannot fix it for you. These are not ‘tricks’; they just add a little bit of reality with every step. So if you cannot see stereoscopically, for instance, your vision will be essentially the same as in real life. However, people who can’t do that clearly get by in real life, since the human body has many other capabilities. With every extra step in technology (say, light fields) they will therefore automatically ‘think’ about those less fortunate.

  6. scottb says:

    I know this is a PC gaming site and all, but there are actually some really promising applications of AR in industrial-type settings. For example, a BMW mechanic (they are working on this) could have safety goggles with AR that will go step by step through a procedure and highlight the bolts that need to be removed.

    Along the same lines, a maintenance worker in a power plant could have glasses that recognize what equipment he is standing in front of, then bring up all documentation on that equipment. From that, he could have the glasses work through any number of procedures. May sound trivial, but it has potential to save loads of time (and money.)

    • UmmonTL says:

      Of course there are great applications for AR and VR in industry. Abrash even mentions them a little, but his point is that Valve are looking only at the entertainment aspect, leaving other applications to other people. He also said that AR is hard to do, and he is right: recognizing what sort of equipment or parts the worker is standing in front of is hard. You would need to have everything RFID-tagged with the necessary information, which may be viable by now but would require a company to do a serious overhaul of its production process. The improvement has to be very significant here to justify the cost.

      A classic example of a field that could greatly benefit from AR is medicine. Having a 3D image from a CT-Scan overlayed during an operation would be amazing.

      • ulix says:

        Medicine, especially surgery, already IS benefitting greatly from AR – mostly from haptic interfaces, not optical ones, however.

        They are already using robots to operate on people, the robot is controlled by a surgeon and gives him haptic feedback (through motorized controls, similar to force feedback in modern racing games when you play them with expensive steering wheels).

    • belgand says:

      No, the killer app is using facial recognition or some other method to remind me of/broadcast people’s names. Possibly where I know them from, but let’s not get too fancy about it.

  7. Turin Turambar says:

    It’s kind of funny that they’re demoing the VR with Doom 3 BFG when they have, IMO, a much better game for a demo: Skyrim. It’s from their parent company, it’s much bigger than Doom 3 BFG and more current, and I think the big open world, the slow walking pace, the living cities and the melee combat would fit much better with the VR experience. Think of climbing up the central mountain to the dragon order again while looking down at the landscape with your own eyes. I’m sure it would be great.

    • Magnusm1 says:

      What? Doom is a much better choice, as the genre that will benefit from VR the most is horror games.

    • Claidheamh says:

      I really don’t see Carmack programming in the mess that is Skyrim’s engine. id Tech 4 is a much better environment, and it’s his baby, so he’s familiar with it.

      • Bobtree says:

        It’s also got a clean look. Skyrim is beautiful in the broad strokes, but very ugly in the details.

    • Sic says:

      Not at all feasible.

      The game is a programmer’s nightmare, and its renderer is far too demanding for doing stable stereoscopic proofs of concept.

  8. SirKicksalot says:

    you have to get the signal into a human’s optic nerve or brain. Just getting through the approvals for that – even if you had it working today – [would be incredibly difficult].

    This was also mentioned in the body hacking piece linked in the Sunday Papers. So much bullshit, so many approvals and hurdles to go through, so many authorities that dictate what you can and can’t do with your own body! I wish human experimentation was easier to practice – legally, I mean.

    • UmmonTL says:

      Hey, the experimenting isn’t the problem – no one forbids you from shooting lasers into your eyes or using electroshocks to turn your eyelids into shutter glasses. It’s just not easy to prove it’s not dangerous to the user in the long run, which makes it hard to get approval to sell tech like that.

      • Valvarexart says:

        But that’s actually fake, the guy admitted it. Well done, though.

    • Shuck says:

      Yeah, dammit, all these stupid laws are preventing me from vivisecting my neighbors!

  9. geoffreyk says:

    When they started talking about potential haptic feedback schemes, all I could think of were those pneumatic simulation chairs. I remember watching a video of someone doing an F1 sim with a nice 3-screen setup attached to one of those, and wondering how the stationary peripheral scenery wasn’t making him motion sick. Strap someone into one of those with a VR getup, and you could replace their whole periphery with a full cockpit (car/mech/plane, what-have-you), without all the added weight on the chair’s pneumatics.

    I’ve been dreaming of commercial ocular and cochlear nerve interfaces for quite a while, but this is a step on that road that I’m hopeful succeeds.

  10. aliksy says:

    So… how long until someone makes a lot of money with VR (or AR?) porn games?

    First lawsuit about a “view someone naked” google glass app?

  11. Reapy says:

    Yeah this story made me feel mortal. I’m 32 but damnit I want to be alive long enough for this kind of thing to take over the world.

    I thought it could be used for things like roadside memorials when someone passes away in a car accident. Instead of a cryptic set of flowers, there would be a marker there with the person’s picture, bio, obituary, how they passed away. In a way I was thinking of it as a ‘secret world’ layered over our own when implemented right.

    It is actually really crazy if you think about it. People worry about tech and being ‘natural’ and AR is probably one of the best ways to do this.

    Imagine the things you can build with it, but never use a single natural resource to do so. Carve my name on a tree, but never touch the tree.

    We could build the houses we live in to be structurally sound and never have to worry about aesthetics ever again, change what the house looks like on the outside on a whim, no natural resources used at all. Hate how your neighbor’s house looks? Override his view, put your own up.

    Back on topic with the article: one of the things I love doing in games is exploring – the first time in an MMO in particular is great. I’m really looking forward to walking around Guild Wars 2 when that releases (haven’t done any beta yet). Imagine having that while you are sitting at your desk, or I could even cast that view out my window if I wanted to, or whatever else I could build on the computer or in AR.

    Honestly it is some pretty crazy stuff, and I can’t see why it wouldn’t be a goal to plow full speed ahead on. Too bad I’ll be dead before it ever reaches that level (damn you kids and your AR, it’ll be the death of us all, Satan’s game!)

    • Contrafibularity says:

      I think that’s an illusion. You will always need hardware. Which will frequently be produced in a low-wage country under conditions amounting to electronic sweatshops or 21st century slavery, because capitalism, or at least Apple, dictates it.

      Instead of slaving away to buy real stuff we will slave away to buy virtual stuff. Not an improvement. And you’ll always be jacked-in, leaving humanity free to, quite literally, ignore the real world. Escapism is great for a few hours every day, but it wouldn’t be escapism if you lived in it.

    • LionsPhil says:

      I thought it could be used for things like road side memorials when someone passes away in a car accident. Instead of a cryptic set of flowers, instead there is a marker there with the persons picture, bio, obituary, how they passed away.

      And while you’re reading all that, you crash into a tree yourself and generate a second set of virtual dataflowers.

  12. TheManfromAntarctica says:

    This means that Half-Life 3 already exists, but we can’t see it until we have AR glasses and realize that IT’S BEEN AROUND US ALL THE WHOLE TIME (and some of us have already scored a few achievements without knowing)!!!!!

  13. alabastersteel says:

    I’m curious how long it will take for the wow factor to wear off for the average gamer, not the average consumer. We’ve been used to traveling in a mostly straight line with the occasional strafing and of course mouse look for aiming or focusing on a particular point in the distance. As excited as I am for the tech, I wonder when most of us will stop doing the gee golly gosh head swiveling and just resume our normal straight line trajectory behaviors we all have hard wired in our gamer DNA.

    To me this would be the most ideal for first person puzzle games like Portal, or immersive, ambient experiences like Amnesia, as terrifying as that prospect may be. Heck, I’d even like to see a resurgence in Myst type games for this tech, the whole stranger in a strange land feeling would be even more prominent. I suppose The Witness would be as good a candidate as any for this.

    For shooters, unless they’re plodding, mostly linear romps like a Half Life or Bioshock with lots of attention being paid to the world and art design, and giving ample breathing room to take in the world, I can see most of us just resorting to mouse look most of the time, because it’s a lot faster. I do want to be proven wrong, and see a kind of design revolution and how we look at games and our place in those worlds at a completely different level. I just hope enough developers hop on board and implement support for it properly, and it doesn’t turn into the equivalent of a Wii peripheral with a bunch of third party devs slapping on lazy coding for it. If we get just one killer app that has phenomenal and finely tuned movement response, and decent resolution, it will probably sell like gangbusters, and you’ll most likely see Microsoft, Nintendo, and Sony either trying to buy Palmer Luckey’s company, or making iterations of their own. I hope it’s the latter, because this could turn into a nauseating patent war in no time.

  14. Shuck says:

    I’m still skeptical about this. I was working with VR in the ’90s, and nothing seems to have fundamentally changed to address most of the issues that were true then. Plus, having to strap a display to your head is a harder sell when you already have 3D monitors.

    • theSeekerr says:

      What changed is simple – solid-state accelerometers got small, cheap and reliable, high resolution displays got thin and cheap, and cheap embedded processing got fast enough to drop latency to an acceptable level.

      The inconvenience is still with us, for now, but the motion sickness, expense and general crappiness will soon be past…

    • sophof says:

      Basically all the hardware involved has changed dramatically, every single component of it. The difference from the ’90s is so big it is almost scary how fast we are moving.

  15. The Dark One says:

    So I won’t be able to buy a pair of sunglasses that act like Zaphod’s, selectively blacking out segments of my field of view that are distressing, any time soon?

    • alabastersteel says:

      Oh wouldn’t that be nice at concerts? Or sports events? Now blocking out the annoying drunken banter is another problem entirely. Maybe if they’re all AR replaced with the visages of reptilians or Mos Eisley patrons it would be more tolerable, if a trifle disconcerting.

    • LionsPhil says:

      I would have thought a plain old layer of liquid crystal would do that just fine. Maybe it just can’t get opaque enough.

      The hard bit would be detecting all the peril. Bear in mind that we can’t detect spam reliably. Or porn. Or seditious content.

  16. Shortwave says:

    My body is ready.

  17. Slinkyboy says:

    I wouldn’t be able to use this unless I master using hotkeys. I always find myself looking at my keyboard for a split second to make sure I press the right keys.

  18. Josh W says:

    This is a lovely ramble, although it occurs to me that there is a solution for those who want to mix the world weirdly with AR stuff: Rotoscoping.

    Well, not exactly, but as “A Scanner Darkly” proved, reducing the visual fidelity of the real world makes it much easier to slide other things into it, so putting certain filters on things could be used to level things out a bit.

    Edit – Hmm, read his blog, he has thought about this a lot more than I have! There’s a serious latency problem with my idea, much bigger than I thought, because people are pretty good at working out when their view should be changing.

    • Shortwave says:

      Very wicked concept indeed. Thanks for bringing that up – I’d totally forgotten about that during all of the VR conversations I’ve had recently. Heh. It’d be freakin’ beautiful really, some of the things you could achieve with that. The processing power required to do those kinds of things to the real world in an augmented reality scenario seems like it’d be drastically less than it would take to render an entire 3D world in virtual reality, to me. With the added benefit of actually being able to move around properly, up and out of a room. Feel the air on your skin still… smell the freakin’ flowers. That to me is entirely way more exciting the more I think about it. Seems like dual 120Hz 3D with augmentation would be more realistic (in both meanings of the word) than anything… Hm.

  19. solymer89 says:

    Ok, who else thought that Abrash was just another verb you have never heard before?

  20. El_Emmental says:

    With VR, we’ll be able to stare deeply into a monster’s eye while shooting his companion in the face – but we won’t learn about the working conditions and benefits of the Cacodemons yet.

    One day…

  21. Muzman says:

    “Half-Life 3” will actually be revealed at some big con’ as a giant robot suit for Gabe to crush any who resist and take over the world whilst cackling.

  22. Radiant says:

    He’s approaching AR the wrong way.
    The goggles/glasses don’t have to do all that calculation.

    What needs to change is the composition of materials we use in everyday life.
    Paint, clothing, brick work, glass.

    They all should contain microscopic signaling material.

    That way the glasses pick up on the material, which tells them the depth and orientation at which to display the information.

    The depth and orientation are provided simply by the presence of the signalling material.

    Like how a reflective strip and light work together.

    There’s a scene in the new Total Recall [it's shit] where he takes a video phone call by touching some glass. And the message is displayed on that glass for all to see.

    With AR and signalling material all you’d need to do is touch ‘anything’ made from or painted with the material and the glasses will deliver the message ONLY to you.
    Because in that case the glasses use just the material for the co-ords at which to display the images.

    You don’t need to do all the display [range finding and mapping] calculations.

    More advanced tech would be placing information within the signaling material itself.

    A tag on clothing for pricing and availability, an advert on wall space specifically targeted to you.
    Or a message left for you by friends [location and directions "follow me!"] imprinted by touching a wall or signpost.

    By using a Facebook-style personal network you could tag your own and other people’s glasses with personal information [birthdays, name, relationship status, photos].
    Or look for recent information left for you in the area.

    With that type of personal network connected to a larger network, together with the signaling material you could do pretty much whatever you wanted.

    That overcomes a lot of the issues in the article.

  23. uh20 says:

    i will jump on this when its realistic for general use, such as browsing your fancy glasses web covering 1/4 of your view distance

    my left eye cant look up, that might be a problem XD

  24. LionsPhil says:

    I am confused by the notion of “keyboard and mouse + VR”. Mouselooking while having a display strapped to your head? How does that interact with the accelerometers/gyroscopes? And if it’s not mouselook (but, say, mouseaim – think Mechwarrior, or unpatched System Shock 1), how do you turn around short of sitting in a swivel chair and inevitably wrapping yourself up in the cables?

    • solidsquid says:

      Assuming it works the way I think it will, I’m guessing a mech is a good example. You can look around in the cockpit and change your view, but to actually steer the vehicle you have to use the controls. So if you were playing, say, Skyrim, you could get your character to look around with the Rift and possibly aim, but to move and steer you’d use keyboard and mouse.
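
A minimal sketch of the split solidsquid describes – the function and numbers here are hypothetical, not anything from the Rift SDK: the mouse keeps steering a body/aim yaw exactly as in a normal shooter, while the headset’s tracked orientation is composed on top purely for the view.

```python
# Illustrative decoupling of mouse steering (body/aim) from head tracking (view).
def view_and_aim(body_yaw, mouse_dx, head_yaw, head_pitch, sensitivity=0.1):
    """Return (aim_yaw, view_yaw, view_pitch) in degrees."""
    body_yaw = (body_yaw + mouse_dx * sensitivity) % 360.0  # the mouse turns the body
    view_yaw = (body_yaw + head_yaw) % 360.0                # head look rides on top
    return body_yaw, view_yaw, head_pitch                   # aim follows the body yaw

aim, view_yaw, view_pitch = view_and_aim(body_yaw=90.0, mouse_dx=30,
                                         head_yaw=-45.0, head_pitch=10.0)
print(aim, view_yaw, view_pitch)  # 93.0 48.0 10.0 -> aiming ahead, glancing to the left
```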

  25. gandrewsan says:

    It wasn’t clear how the mouse would work. It wasn’t even clear whether moving it forward would make you look up or down.

    Down.

  26. babysoftmurderhands says:

    Hey nice guys. You transcribed and changed around the interview we recorded during Quakecon. Here is the original posting from August 8th of the interview we did at Quakecon 2012. http://babysoftmurderhands.com/2012/08/quakecon-2012-exclusive-interview-40-minutes-michael-abrash-valve-software/