Valve's Abrash On Virtual Reality, Wearable Computing
Valve enjoys keeping secrets. And while I'm beginning to suspect that Half-Life 3 will ultimately turn out to be a giant ray gun that erases all memories of the Half-Life franchise from our brains forever, there's one thing Valve's been uncharacteristically upfront about: its fascination with the future. Wearable computing, augmented reality, and - perhaps most pertinently, given recent extremely promising developments - virtual reality. During QuakeCon, I got the chance to sit down with Valve's most vocal proponent of these new technologies, programming legend Michael Abrash. However, while Valve's obviously putting some serious work into breaking new technological ground, Abrash was quick to point out that it's still all in service of a singular end goal: entertainment.
RPS: What aspects of virtual reality is Valve really exploring? Obviously, you're here at QuakeCon to support the tech, but what's Valve's stake in it?
Abrash: So I had a family medical emergency. I was supposed to be here yesterday for John [Carmack's] keynote. Yesterday morning, I canceled everything. But then my daughter very generously volunteered to come to California and take care of the emergency because I really wanted to come to this. And part of it is because this is a pivotal moment. It's possible VR won't succeed, but I think that this is the shot it has. Right now is the best shot it's ever had by far, and it has a really good chance of this being the beginning. And I mean really the beginning. Things are going to get way, way better.
It was obvious with Quake or even Doom that they were first-person shooters. But what was not obvious with Quake was that the hardware change - the Internet - meant you'd be able to have permanent servers. Then you'd have people putting up their own sites, clans, widespread mods, tournaments - the whole thing. Seeing that was John's bit of genius. The rest was engineering. It seems obvious now, but it wasn't obvious at the time. With VR, yes, it's obvious we can go and have a first-person shooter or an immersive experience, but the real question is, what's going to happen that hasn't happened before?
RPS: I was thinking about that while playing with Oculus Rift. I mean, the potential for non-violent games seems huge. Just staring at things was amazing to me. What if it paid attention to how long and where you stared, though? People could react. You could simulate awkwardness. I think that'd be pretty incredible, actually.
Abrash: It's true. So I think if we went back to 2005 and said, "I'm gonna give you this phone, and it's gonna have as much processing power as a computer and a touch interface," I don't think you would've immediately said, "Oh, these are the games that are going to end up being successful." You probably wouldn't have even predicted that there'd be so many people buying it, turning it into such a huge market. So I don't know what VR will turn into, but I'm pretty confident it'll turn into something great if the hardware can be good enough. That's the thing that has to happen. I think [Oculus Rift creator] Palmer Luckey's stuff will be good enough to get that started, and then it has to evolve rapidly.
So first, I'll tell you what's necessary for VR to work well. For VR to work well, you need display technology that gives you an image both your brain and eye are happy with. Trust me, that's much harder than you think. Even if it were just a HUD, people wouldn't be that happy, because you're always moving. Your head is never still. And the image is moving relative to the world, and if your brain is trying to fuse the two, that can be rather tiresome. I'll tell you, there are lots of issues with getting that image up in front of you.
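To put a rough number on that point (my arithmetic and figures, not Abrash's), here's a minimal sketch of how far a head-locked image slips against the world during one frame of latency:

```python
# Back-of-the-envelope: how far does a head-locked image smear against
# the world during one frame of latency? Illustrative numbers only.
head_turn_rate_deg_s = 120.0   # a brisk but ordinary head turn
latency_s = 0.020              # assumed 20 ms motion-to-photon latency
fov_deg = 90.0                 # assumed horizontal field of view
width_px = 1280                # assumed horizontal resolution

error_deg = head_turn_rate_deg_s * latency_s   # angular slip per frame
error_px = error_deg * (width_px / fov_deg)    # converted to screen pixels

print(f"{error_deg:.1f} degrees of slip = {error_px:.0f} pixels")
# ~2.4 degrees, ~34 pixels - easily visible, and tiring for the brain to fuse
```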
Second, if you want to do augmented reality - and AR and VR are what are interesting to us, because they're entertainment experiences - wearable computing and just getting information up will be Google's specialty. They'll be great at it. How could they not be? So if you want to put things up in the real world, you have to know exactly where you are and what you're looking at. Or you have to be able to process images.
So you've seen iPhone apps where you can make people look silly - mess with their faces, put hats on them, whatever. Well, if I want to put a hat on someone, I have to know exactly where he is. As I move, as he moves, the hat has to do the right thing or it doesn't work. So tracking's a really hard problem. Both John and I talked about that. Knowing your angle isn't that hard, because you can get it out of a gyroscope. It does drift over time, though. But knowing your position is actually very hard. John talked about the Razer Hydra [motion controller], which has a base station that can track things relative to it using magnetic fields. That's fine if you're within range, but the range isn't that great.
So I think the solution is very similar to the way humans work. Humans have this three-axis sensor - the vestibular system in the inner ear - and then your visual system corrects for its drift. So if we have a gyroscope and a camera, and the camera does the correction for the gyroscope's drift, I think that's a long-term solution. But doing that processing requires a camera that can run fast and at high resolution. It also requires processing that information, and that's a power issue, a processing issue, an algorithmic issue - these are hard problems.
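In code, the fusion Abrash is describing looks something like a complementary filter. This is a minimal one-axis sketch under my own assumptions - the names and gain are hypothetical, not Valve's implementation:

```python
def fuse_yaw(prev_yaw, gyro_rate, dt, camera_yaw=None, gain=0.02):
    """One-axis complementary filter: the gyro integrates quickly but
    drifts; a camera-derived heading is slow and intermittent but
    drift-free, so we pull the estimate toward it by a small gain.
    Hypothetical sketch, not Valve's code."""
    yaw = prev_yaw + gyro_rate * dt          # dead-reckon from the gyro
    if camera_yaw is not None:               # camera fixes arrive only sometimes
        yaw += gain * (camera_yaw - yaw)     # bleed off accumulated drift
    return yaw

# Usage: integrate at gyro rate (say 1000 Hz); camera corrections arrive
# only when a frame has been processed (say 30 Hz) - which is exactly the
# power/processing bottleneck mentioned above.
yaw = 0.0
yaw = fuse_yaw(yaw, gyro_rate=0.5, dt=0.001)                   # gyro-only step
yaw = fuse_yaw(yaw, gyro_rate=0.5, dt=0.001, camera_yaw=0.0)   # step with a camera fix
```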
One example: if you were sitting here and said, "OK, I want to know which way I'm looking and reorient myself." You know how far it is [gestures down a hallway] to any pixel I could use to reorient myself? And it's not very well lit, either. In general, doing this is a hard problem.
And then, even if you've got tracking and display, what's your input? Right now, it's a game controller or a keyboard-and-mouse. That's great, because we have a class of games people are used to in first-person shooters and a class of controllers people understand. All we're changing is the display. Doing it incrementally that way is far more feasible, because you know how to enter that space and give people an experience they want quickly.
In the long run, it seems unlikely that's the interface you'll want. Maybe you'll want to manipulate objects with your hands. Maybe it could be a Kinect-like thing. It could be gloves. Who knows? Maybe you just have a little touch pad on your finger. Maybe it's gesture-based. Have you read "Rainbows End" by Vernor Vinge? In that, he just describes it as people controlling things by simply running or moving. It's like people have learned this language of interacting with smart clothes. OK, that sounds to me like he's kind of hand-waving it because he doesn't have a good solution.
So input's actually wide open. I doubt there's one input modality. There's quite a few potential ones - including speech. But all those things are going to get figured out over a period of time. I mean, I can tell you there was a day 17 years ago when no one had ever heard the phrase "mouse look." John had to figure that out. It wasn't clear how the mouse would work. It wasn't even clear whether moving it forward would make you look up or down. John had to go and figure out the entire syntax of how to control first-person games, and he'll just do exactly the same thing here. But it has to be figured out.
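For what it's worth, the "syntax" in question is tiny in code but wasn't obvious at the time - even the sign of the vertical axis was up for grabs. A hedged sketch, with my own naming, not id's code:

```python
def mouse_look(yaw, pitch, dx, dy, sensitivity=0.1, invert_y=False):
    """Map raw mouse deltas to view angles. The open question Abrash
    describes - does pushing the mouse forward look up or down? - is
    literally the sign applied to dy. Illustrative only."""
    yaw += dx * sensitivity                 # left/right always maps to yaw
    sign = 1.0 if invert_y else -1.0        # the convention that had to be figured out
    pitch += sign * dy * sensitivity
    pitch = max(-89.0, min(89.0, pitch))    # clamp so the view can't flip over
    return yaw, pitch
```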
RPS: Oculus Rift also gave me a weird sense of weightlessness. It was incredibly immersive right up to the point where something hit me or even just got too close for comfort. I couldn't feel anything, even though my brain was certain there should've been something brushing against my skin.
Abrash: So that's part of input, but my personal feeling is that - and this is far enough out that it's something I'm not personally looking at - but my speculation is that there will be haptic devices. Once you have immersive VR that people are really using so that there's a market for it, there will be experiments all over the place. My guess is that there'll be some sort of form-fitting, shirt-like thing, and it'll have some kind of percussive devices so it can tap on your chest and arms. That seems like an obvious and manageable thing. But there are so many ways that could go.
RPS: What about some kind of neural-interfacing-based thing - like Carmack mentioned during your panel?
Abrash: So here's one thing about John Carmack: if John is interested in something, you can believe that something is within the horizon in which it will become a viable product. Because John never, ever wastes his time. I mean, when I was at id, people were coming out with VR stuff. And John was looking at it and saying, "No, this isn't going to be successful."
He is correct when he says neural interfacing stuff might be interesting, but it's not in the timeframe of what might be a product in the next three years or even five years. I mean, there's a professor at Cornell [University] who's doing research on how encoding down the retina might work. I was actually surprised to find that out. Gabe [Newell] dug up that piece of information. So there are people who are determining [the effectiveness of that technology], but think about it: you have to get the signal into a human's optic nerve or brain. Just getting through the approvals for that - even if you had it working today - [would be incredibly difficult].
So doing the kind of VR that Palmer Luckey's doing - and doing AR as well - all of this prepares us for a future in which neural interfacing will be the way to do it. But no one's gonna jump straight to that. The other thing is that, once we do go to neural interfacing, it won't really change anything. Sure, you won't have a thing on your face, but you'll effectively be getting visual input in terms of how it's presented to your brain. You're still looking through what you think are your eyes and so forth.
It's kind of like if you went back to 1995 and developed the best BBS software ever. How much value would that be to you now? How much value would evolving it have been? None. Because BBSes were a transitional stage. My belief is that AR and VR will eventually merge. Similarly, I think a tablet is a transitional technology, because you don't really want to lug that thing around. But if you had glasses with the same kind of information, that'd be far more useful - which is what Google is clearly betting on. They want to replace your phone and your tablet.
But if you'd invested your time and money in the Internet in 1995, that'd have huge value. The Internet's in kind of a final state, whereas BBSes and modems were transitional. This is the final state, and what we'll see is refinement on it. More stuff delivered over it. Maybe the structure will evolve. But you won't see it as something new. Similarly, I think once we get to wearable technology, it will evolve dramatically, but it'll all just be the same model getting better and better.
So that's an important distinction. Even if we get all the way to direct neural connections, it won't really invalidate what Palmer did. It'll just be an extension of what Palmer did.
RPS: You mention both virtual reality and augmented reality. Oculus Rift and things of that sort are obviously VR right now, though. Can modern AR - little gimmicky phone apps and the like - really compare?
Abrash: Well, we've set out to figure out what's an entertaining experience. What are people going to want to do with AR? And that's a hard question, because you have to have prototypes. I mean, anyone can sit around and say, "Oh, you can put a hat on somebody's head or change their face or geocache." And maybe we'll get there and it'll be fun, but we need to get there to find out. We need to try those experiences.
So we set out to find out what the technologies will be able to do in that timeframe. What could happen? So if you read my blog, I've got a post that explains why we're not gonna have hard AR soon - absolutely, flat-out, for sure. I'm pretty sure. Maybe someone will solve this, and I would be thrilled. I'd love to be wrong, because this would be better than what we thought. But you can only add color in [with AR]. You can't overwrite things, because you're putting up images, but you're also seeing through [your display]. So I can't replace someone's face with another face. I can only superimpose something on it that blends with it.
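A toy model makes that constraint concrete. With optical see-through AR, the display's photons add to the light already arriving from the scene; nothing can subtract light, so a pixel can only get brighter. A sketch under those assumptions, with purely illustrative numbers:

```python
import numpy as np

world = np.array([0.6, 0.4, 0.3])    # light from the real scene (RGB, 0..1 scale)
overlay = np.array([0.2, 0.0, 0.5])  # light the display emits - never negative

# See-through optics are additive: the eye sees scene light plus display light.
perceived = np.clip(world + overlay, 0.0, 1.0)

# Replacing someone's face would require a full overwrite - i.e. opacity:
# perceived = overlay   # only possible when the display blocks the world (VR)
```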
Well, that's pretty significant, because it means you're never gonna do something that feels completely real except with VR - because to do that, AR would have to replace something in the real world, and it can't. So there's a whole branch of that tree that just got lopped off, right? Because you say, "Well, we're not gonna be in Rainbows End or Snow Crash at that time. It's not going to happen."
And then you go down this whole spectrum of things and reach Google Glass. And it'll show you information. That's useful. I mean, I think Google's doing something very, very pragmatic. It's a good step into this for them. But it's not interesting to us because we're an entertainment company. We do games.
RPS: So what's the next step? And where do PCs - and, by extension, PC gaming - factor into that, if at all?
Abrash: AR is hard. You can quote me on that [laughs]. So I thought about, well, what else is interesting? What might have potential? And I came to the conclusion that VR is kind of different-but-equal in that AR and VR will eventually converge. In the long run, AR and VR will be the same thing, and they'll just opaque your glasses to different extents. So a lot of your experiences will be VR. Because when you sit at your desk, you don't want AR. What would you use AR for in front of your screen? What you really want is VR and to not have the screen. With high enough resolution, you can just put your screens around you. And then your desk is wherever you choose to be.
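Some back-of-the-envelope arithmetic (my figures, not Abrash's) shows what "high enough resolution" means for putting your screens around you:

```python
# To render a virtual monitor as sharply as a real one, the HMD needs
# roughly the monitor's angular pixel density across its whole field of view.
monitor_px_per_deg = 1920 / 30.0   # a 1080p monitor filling ~30 degrees of view
hmd_fov_deg = 100.0                # assumed per-eye horizontal field of view

required_px = monitor_px_per_deg * hmd_fov_deg
print(f"~{required_px:.0f} horizontal pixels per eye")  # ~6400 - far beyond 2012 panels
```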
Now, when you go out in the world, that's AR. And that will radically change the way you interact with the world and other people. So I don't want to downplay it in any way. I just want to say that, when you have your magic AR glasses many years in the future, you'll still want to do VR things as well.
Also, VR's all about entertainment. Right now, what else can you do in VR? But in AR, you can do other things. And VR is clearly more tractable. You don't have to solve the tracking problem I described. You have opacity, so you don't have the engineering problems of see-through displays. A whole bunch of reasons. So I came to the conclusion that VR is equally interesting to look at and more tractable in the near term.
RPS: So what's Valve's goal in that space? Is utility a major part of it? Or are games and entertainment still king?
Abrash: We're pursuing what wearable stuff could exist that could enable you to have new, compelling entertainment experiences. It's about giving customers experiences that they want. We'll do whatever it takes to make that stuff happen. It's a big space, and it's unexplored.
But the one thing I'll point out with AR is that you really don't know what's a fun experience. So you can talk about basically geocaching or LARP-ing - because that's really what it is at this point - and most people just really aren't that excited about it. It requires interacting with the actual world around you. It makes you dependent on the world. It's not a concentrated experience. And maybe that'll be fun. Maybe it'll be the Farmville of AR - which a lot of people play, so I'm not trying to downplay that.
But we're looking for deep, rich experiences. I don't know what they are. I'm not a game designer. But the technology has to be there. Again, hardware changes and enables things. So the first question is, how can the hardware change to really support experiences? That's the first thing we're looking at.
RPS: Thanks for your time.