LivePlace: Online Photorealism On Rubbish PCs?

I still have no idea what photorealism means. It's just something people said in Edge circa 1994 about Playstation 1 dinosaurs.

Well, apparently so, but there’s a catch. TechCrunch broke the story, and has been following its developments. It started with the posting of an impressive video of a cityscape, apparently running on the OTOY server-side rendering technology, meaning that really fancy graphics can be displayed on… well, pretty much anything down to mobile phones. And that video’s beneath the cut, along with a few thoughts on the implications from RPS’ most tech-illiterate – and, in fact, generally illiterate – member.

This tech apparently works in any browser, without a plug-in. The basic idea is – as the server-side name may suggest – that rather than the work to create a world being done on your computer, it’s done by the computers running the virtual world. Your responses to the world are sent back, the server renders the image and lobs it at you. Since your computer’s doing relatively little work, it doesn’t need anything other than the ability to display a colour image.

The immediate question would be lag. OTOY claim latency of 12-17 milliseconds on the US west coast, where their test server is, and 100ms in Japan – presumably plus any latency of your own connection. While this may be insufficient for an action game – though Jim points out Quake is playable on pings of 120ms or so – there are all sorts of games where it’d be absolutely fine. What LivePlace seems to be would be a good example – a post-Second Life interactive virtual world thing.

The second head-scratcher would be the actual set-up. Sure, you don’t need the PC power to do the work, but the people you’re buying the service from do. In other words, there needs to be the equivalent rendering power to create the image for each person on a server. So, say, for an online world with 200,000 people connected at any time, you’d need a server stack of 200,000 computers. That’s an over-literal take, of course, but it’s still an incredible amount of computing power. How much would this actually cost?
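For the back-of-envelope inclined, here’s a toy sketch of how that cost scales. Every number in it is plucked from the air for illustration – the players-per-GPU figure and the server price are invented, not anything OTOY have actually said:

```python
# Back-of-envelope: what might one-render-server-per-player cost?
# All figures below are illustrative guesses, not published numbers.

def monthly_server_cost(concurrent_users, players_per_gpu=4,
                        cost_per_gpu_server_month=200.0):
    """Estimate monthly hardware cost if each GPU server can render
    views for a handful of players at once."""
    servers_needed = -(-concurrent_users // players_per_gpu)  # ceiling division
    return servers_needed * cost_per_gpu_server_month

# 200,000 concurrent players, 4 views per GPU server:
print(monthly_server_cost(200_000))  # -> 10000000.0, i.e. $10m a month at these made-up prices
```

Even if the real numbers are wildly different, the shape of the problem is the same: the bill grows linearly with the number of simultaneous players.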

But really, the most interesting thing about this is the questions it raises. It’s towards a future where a game system is more like renting water, or cable television. All you have is a device to show the images which are provided to you. Is this something which appeals to current gamers? Could they be talked around? After all, if people have problems with Steam being down and not being able to play games, this has the potential to be far worse. Conversely, the idea of never having to worry about whether your graphics card can handle the future equivalent of Crysis ever again is certainly a major plus.

If ever a story needed to be ended with a “Hmm” it’s this one.



  1. The Hammer says:

    The shot of the moving cars at the start was amazing…

    But, er, every building has an interior? What kind of claims are these?! The Getaway 2?!

  2. Junior says:

    But what is the weapon selection like!?

  3. Nuyan says:


    It’s actually strange that I’d never heard of the concept of server-side rendering before, as it makes a lot of sense and all sounds rather logical. At the same time it almost sounds like an April Fools’ joke.

  4. Cigol says:

    I’m with the ‘Hmm.’ on this one.


  5. Fat Zombie says:

    Holy moholies. That’s quite something.

  6. Cigol says:

    Apparently the first part of the video, the impressive part, is actually a pre-rendered video made by some artist that has nothing to do with the project?

  7. Kieron Gillen says:

    Cigol: It’s definitely worth reading TechCrunch’s story, both for that and the comment from the actual developer.


  8. MisterBritish says:

    I think it was that Tom Francis who called this in the PC Gamer (UK) podcast. Had he seen a glimpse of this?

    I don’t think server-side computing will take off hugely in the near future (mostly due to connection issues, lag, etc.), but it is THE FUTURE, if that makes sense.

  9. Mil says:

    But really, the most interesting thing about this is the questions it raises. It’s towards a future where a game system is more like renting water, or cable television. All you have is a device to show the images which are provided to you.

    The balance between the extremes of “all computing power in the servers + dumb terminals” and “all computing power in user machines” has jumped around quite a bit during the history of computing. For the last 15 years or so the move has been towards the latter, so periodically you get people proposing the old dumb terminal idea as if it were something new. It might actually begin to make more sense as communications get better, but I doubt anything like what’s described in the article is going to happen any time soon.

  10. Guest says:

    Looks a little too fancy to be true. It just wouldn’t be technically feasible. After all, the client/server wouldn’t simply be exchanging abstract information about the player’s location and actions like they do in every other multiplayer game; the client would have to tell the server where it is located right now, the server then would have to render what the client’s seeing and submit this huge amount of data to the client. Or several thousand clients. At once. Unless they invented some new kind of tube to transmit this, I highly doubt that this wouldn’t end up being ridiculously expensive.

    Besides, simply transferring the cost for a new graphics card from the client to the server wouldn’t actually make that much of a financial difference for the average player – after all, the server would require a lot of maintenance and upgrades in general, and several thousand new graphics cards as soon as they want to offer a new game – the client will still have to pay for it one way or another, usually with a subscription fee. And that subscription would only get you a new graphics card for the one streaming game you’re playing; if you prefer to play several games simultaneously, you’ll have to cough up fees for several subscriptions, each requiring new graphics cards every year or so, probably causing you to end up paying even more to keep those servers up to date than you’d pay for a new piece of hardware.

    Although this system would be a lot easier for casuals who can’t be arsed to keep their rig well equipped and just want to play something for an hour or two as soon as they get home from work, it seems rather redundant, considering that we already have consoles that are perfect for this sort of thing. I just can’t imagine many people paying for this kind of stuff.

    But then again, people are paying for Second Life, so what the hell do I know. Besides, one of the most cited advantages of PC games over console titles is moddability, which would be non-existent with streamed games. So you could have the ultimate console experience right from your PC! Wouldn’t that be incredible? You’ll never have to leave your PC and walk into your living room to play a game; you can just simulate it on your standard computer, without having to stop downloading porn! What a Brave New World indeed!

  11. Cooper says:

    I think the nod to cable tv is spot on. I can see the OTOY server-side rendering technology being a massive hit with the ‘new media’ types. Soap operas where you control the camera and which narrative thread to follow. Exploration of CG-rendered scenes from Hollywood movies. Interactive art spaces, all that kinda stuff.

    Also: I doubt this will take off massively for gaming. Maybe a bit for social networking. But I doubt it. The same way Second Life or Activeworlds suggested a Gibsonian realisation of the ‘net but failed to be (really) anything more than curios.

  12. Gap Gen says:

    Yes, you’d need some very clever latency hiding (e.g. predicting which images to render given a particular input as mouse movements have some momentum, etc), but in principle, maybe? Sounds like cleverer people than me have thought about this before opening their mouths.

  13. kenoxite says:

    This is mucho extraordinaire. I’m absolutely shocked and hanging on the bait of this leak-o-hype machine.

    Brilliant. Thanks for the heads up, RPSers.

  14. Theory says:

    If it’s a streaming video then it’s going to be running at a far lower res than games that render on the client. I think they could get away with it, assuming there’s actually a game behind it all that tempts people into paying.

  15. Monkfish says:

    By the time technology has evolved enough to make this idea truly feasible, your common-or-garden rubbish PC will have sufficient processing power to render ultra-realistic worlds anyway, thus eliminating its primary advantage.


  16. Noc says:

    Guest, I’ve got a Macbook. I do plenty of gaming, but it’s almost exclusively either indie games or games that came out before 2004. Why? Because I can’t afford to buy another computer. It’ll happen eventually, but the important thing is that the most bare-assed of machines tend to be the most portable: laptops and PDAs and the like. People buy them because they need something that doesn’t have a big blocky tower, but that choice usually hamstrings them from packing any real graphical heat. This would be an interesting way to circumvent that, in that it uses the internet to enhance the portability of gaming machines. I’m not concerned with making existing graphics technology obsolete, but if they can make the big ‘ole metal tower obsolete . . .

    The trend that would be needed to make this ubiquitous and affordable, though, is something I know nothing about: the cost of putting together supercomputers. It occurs to me that modern advances could cause a notable drop in those costs, to a level that would still be prohibitive to individuals but might become a more feasible expense for a game company. If anything, I can see this being run on the publisher side, rather than the developer’s: your Windows Live account would provide you with graphics, as would Steam or whatnot.

    It’s clearly a long ways away either way, but it’s an interesting thought.

  17. Turin Turambar says:

    You would need a 100Mbps or better connection. You need to stream uncompressed video in high definition.

  18. randomnine says:

    A 120ms ping works in Quake because, ever since QuakeWorld, the client does client-side prediction: instead of purely showing you where the server thinks you are, the client can simulate in advance based on your inputs and render up-to-the-millisecond predicted information (which the server might overrule later, of course).

    This online rendering stuff is going to be horribly unresponsive for, well, just about anything. Latency of 20ms or so might work if you can get it, but remember you’ll have to add video encoding/decoding latency to that as well.

    As Turin says, yes, expect heavily compressed video.

    Still: at least it’ll be cheap, ish, since it’ll basically come down to renting a gaming PC on the other side of the country for a couple hours rather than buying one outright. I’m guessing maybe something like 50p/hr at peak times. If you spend more per year on your internet connection than on your PC, maybe it’d be worth trying.
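The prediction trick randomnine describes can be sketched like this – an invented toy example of the general idea, nothing to do with Quake’s actual code:

```python
# Minimal sketch of client-side prediction: the client applies its own
# inputs immediately, then reconciles when an authoritative server
# state (which may disagree) eventually arrives.

class PredictingClient:
    def __init__(self):
        self.position = 0.0   # locally predicted position
        self.pending = []     # inputs the server hasn't confirmed yet

    def apply_input(self, seq, move):
        """Apply a move locally right away; remember it for later replay."""
        self.pending.append((seq, move))
        self.position += move  # no waiting for the round trip

    def on_server_state(self, last_acked_seq, server_position):
        """Server overrules us: rewind to its state, then replay any
        inputs it hasn't processed yet."""
        self.position = server_position
        self.pending = [(s, m) for s, m in self.pending if s > last_acked_seq]
        for _, move in self.pending:
            self.position += move

client = PredictingClient()
client.apply_input(1, 1.0)
client.apply_input(2, 1.0)       # predicted position is now 2.0
client.on_server_state(1, 0.5)   # server disagreed about input 1
print(client.position)           # 1.5: server's 0.5 plus replayed input 2
```

This is exactly what server-side rendering can’t do: the client has no world model to predict with, so every input eats the full round trip plus encode/decode time.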

  19. MetalCircus says:

    Okay, I saw this a couple of days ago and my reaction was equally “hmm”.

    It’s impressive as hell, sure, and I’d love to see something like this. As a PC gamer with a relatively rubbish computer (by today’s standards anyway) I personally would love something like this, and it might work towards crushing piracy a little bit if you think about it. But how is this possible? It’s like Tomorrow’s World, but slightly more believable.

  20. meh says:

    well the bandwidth costs don’t scale the way tv does. tv can be multicast; this can’t. means tv’s cost is based on # channels; a virtual world’s on # users.

    this setup only makes sense if bandwidth is cheaper than processing power, and I guess that’s what they believe.
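meh’s scaling point, as a trivial sketch – the 4 Mbit-per-stream figure is invented, but the shapes of the two curves are the whole argument:

```python
# Broadcast TV cost scales with channel count, because one multicast
# stream serves any number of viewers. Per-user rendering scales with
# the audience itself, because every viewer needs a unique stream.

def tv_bandwidth_mbps(channels, stream_mbps=4.0):
    return channels * stream_mbps    # one multicast stream per channel

def world_bandwidth_mbps(users, stream_mbps=4.0):
    return users * stream_mbps       # one unicast stream per user

print(tv_bandwidth_mbps(100))         # 400.0 - fixed, however big the audience
print(world_bandwidth_mbps(200_000))  # 800000.0 - grows with every viewer
```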

  21. Swyyw says:

    Impressive stuff. But like others have commented, it’s hard to tell if this would be economically viable.
    When he showed the paint application though, I couldn’t help but think about World of World of Warcraft :p

  22. Mike says:

    The only thing that worries me here is that server-side means that I’m less likely to have a personal copy of the world. To what extent can I crush buildings and lay waste to districts? I can customise my room, but that’s not how I want gaming to end up.

  23. Evangel says:

    The bandwidth of a DVI single link connection is 3.96 gigabits per second. If you compressed it down to a hundredth of the size, it’d be about 40 megabits per second. Assuming you get a thousand users, you’d need a 40 gigabit internet connection at the server and a 40 megabit connection at the client.

    I’m not sure about American internet, but here in Australia, the best you can (realistically) get is ADSL2+, which is 24/8 megabits per second – still a fair way short of the 40 megabits required.

    Of course, if it’s running at piss-small resolutions (320×240), then I can see it being… somewhat reasonable, but the only machines that render at those resolutions are PDAs and cell phones, which would both be working on a wireless connection. Seeing as wireless networking is about 100Mbit if you own the network and are the only one using it, I don’t see this being reasonable any time soon.
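Evangel’s arithmetic, reproduced as a quick sketch (the 3.96 Gbit/s figure comes from single-link DVI’s 165 MHz pixel clock at 24 bits per pixel; the 100:1 compression ratio is his assumption):

```python
# Single-link DVI: 165 MHz pixel clock x 24 bits per pixel of payload.
dvi_gbps = 165e6 * 24 / 1e9            # 3.96 Gbit/s uncompressed

# Assume 100:1 compression, as in the comment above.
per_client_mbps = dvi_gbps * 1000 / 100        # Gbit -> Mbit, then /100
server_gbps = per_client_mbps * 1000 / 1000    # a thousand simultaneous users

print(per_client_mbps)  # 39.6 Mbit/s per viewer
print(server_gbps)      # 39.6 Gbit/s at the server end
```

So the per-client figure lands around 40 Mbit/s, not 400 – still beyond 2008-era ADSL2+, but by a factor of two rather than twenty.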

  24. Martin Kingsley says:

    Theoretically intriguing, but practically improbable. I live to be proved wrong.

  25. NATO says:

    It seems to me that the big advantage to this, from the distributor’s point of view, is the way it could bring back the commercial break from its failing position on TV… Because that’s exactly what this is, the true realisation of interactive TV.

    Can you imagine it? Every 10 to 15 minutes the whole world spontaneously freezes as every person is force fed a commercial break customised to match whatever products they have ever shown any interest in while in the world. But would it be a short break or long one? Good god is there time to go to the bathroom at least? don’t know. got to stay. got to watch…

    I suppose if you’re lucky it would just teleport you to an in-game ad so you could still interact a bit with some other kidnap victims for 30 seconds before being whisked back to un-reality. I know I sound pessimistic, but really I feel sort of optimistic about this – after all, no-one can ever force you to play a game like that.

  26. Ian says:

    I’m not sure I understand this technology. Does it essentially mean that instead of my 8800 rendering it realtime, there’s an 8800 in the server for each person that’s connected to the stream? That sounds like it would be massively inefficient once you start serving up live realtime renders to thousands of people!

  27. Mark O'Brien says:

    The other thing is that it’s ray-traced. That’s a big deal. Ray-tracing is very processor-intensive at any kind of high resolution. The fact that they are trying to do both server-side rendering and ray-tracing at the same time makes me think that something is up. Server-side rendering on its own is out there enough.

    On the other hand, if they really are targeting low-res devices like phones and PDAs (400 x 300), ray-tracing is actually a good choice.

    Something like this could actually work if you forgot about animating the images. Imagine an MMO version of Myst. You can choose where to go and interact with your environment, but it’s not really animated. Server-rendered images would just replace the text of a text-only MUD. That didn’t look like what they were displaying in the video, however.

    The whole thing strikes me as highly suspicious, like maybe they’re trying to grab money from gullible venture capitalists. Reminds me of Steorn.

  28. Grey_Ghost says:

    Well, I am flabbergasted. Should this stuff actually work then I’d be very interested in it.

  29. Tanner says:

    Add some goggles, and HELLOOOOOO METAVERSE!

  30. Evil Timmy says:

    Hey! Xbox 360 developers! Pay attention! That’s what bloom effects and lighting are supposed to look like! Not rubbish like everyone having a glowing outline and looking like even their skin is shrink-wrapped. You don’t win by having ‘MOAR SHADARZZ!’, you win by producing something that doesn’t look like it came from a toybox and is about to melt under a blazing spotlight.

    This is cool, but I doubt it’ll work well outside of places with blazing internet like Sweden and Japan. The US is far, far behind on this, and the UK not much better. Besides, as was said above, we’ll have this-side-of-the-uncanny-valley video processing and rendering before 10% of the US has 23Mbps internet (enough for compressed 720p + surround streaming). In select areas, or public spaces, it could definitely work (imagine being able to pause your game at home and go pick it up at the mall arcade with friends), but seeing as client-side horsepower continues to get smaller, faster, and cheaper, not to mention wrapped in increasingly reliable and arguably better hardware and software, I think this is a nice tech demo, but sorta DOA.

  31. DSX says:

    The idea of the metaverse is exactly what struck me too, that and as I was watching the vid, my father comes in the room and remarks that “People create a world like that to hide from the real world” and then walks out to watch TV again for the next 8 hours.. while I sit at my PC for the next 8 or so. Irony. had a similar technology/rendering issue, an entire persistent world (a real world, about 2% of the land developed) which was solved by locally stored textures and user created content accessible via any web browser.

    While it was cartoony and immensely low-res compared to city-scape, the problem was that after a year or so play, you had about (no joke) 50+ million separate tiny files stored in your game directory that your PC spent endless seek time sifting through as you traveled the world. It was impossible to cache enough to make it fluid even with the strongest game PC at the time.

  32. Jonathan says:


    There are a number of problems I can see with this. Firstly, resolution. Mobiles, PDAs, laptops et al suffer from low native screen resolutions. You could have the equivalent of a hundred 8800 GTs and a PhysX card, but the picture will still be breathtakingly jaggy. Also the screens are just too small to see the graphics that could be created; in fact, better graphics engines can actually be detrimental in use. The failure of UMD is a good example of this – high quality on a small screen is just eye strain waiting to happen.

    Secondly, animation. I noticed there wasn’t any while the camera was moving. By the sounds of it, panning around a moving model rather than a static image would confuse the heck out of the server.

    Reply to Cooper:
    Except Eastenders would require 5,000 invisible cameramen to record it all. Unless you mean CGI soaps, which would mean a six-month wait between episodes. Either way these morons would have something to say about it.

  33. thesombrerokid says:

    this will never take off
    1) 12ms is nowhere near the feedback required for visual information. a lag of 120 works on quake for control because dead reckoning gives the illusion of contiguous play

    i.e. if the server rendered something 1/100 of a second before you saw it, then you responded, then 1/100 of a second later the server received and processed that response and sent the result back, which you subsequently received 1/100 of a second after that, you’d get pretty pissed off (and that’s optimum performance). in the more likely 100ms case it would take half a second for cause and effect to be processed. quake would not be playable in these conditions

    2) the aforementioned bandwidth constraints of streaming 1920×1200 video at 30fps put the bandwidth required at roughly 20x what even the best commercial markets, with largely available 100mb connections, can offer

    3) the obvious one: why! why would a consumer want this when he could keep his pc locally and run it all himself, because it’d cost the same :S except he wouldn’t get control over the hardware, and that includes doing dodgy or personal things without megacorp analysing it

  34. Richard Jones says:

    Looked impressive, sure, but why was it so mind-numbingly boring? Why re-create in a virtual space exactly what you can get by walking out the door? Get some imagination, people!!! The subway was a prime example. Physics doesn’t apply, right? Trash doesn’t need to exist. So why use a crappy old subway car with grab bars and shitty seats and trash all over the floor? Do something INTERESTING!

  35. Zarniwoop says:

    A few thoughts: Firstly, for all those worrying about bandwidth, I reckon that’ll be solved by limiting the resolution somewhat, perhaps to 720p, which should make it playable considering I can stream trailers off the apple site on a good day with my 5.5Mb/s connection. Of course you’ll still need some hefty compression going on, but I think it is doable. Secondly, with regards to computing power, why can’t you just have 1 big mainframe which renders this world, and then just have players acting as camera points within it? I mean sure, that’ll take quite a bit of power, but considerably less than it would to render this world individually for every single user.

    And lastly: how uninspired is this world?! Why on earth would anyone pay money to visit a world which looks exactly like the one they live in?

  36. wichenroder says:

    I don’t buy it.

  37. Mark O'Brien says:


    I think anything like the kind of compression you see in regular streaming videos would be difficult if not impossible to achieve.

    Firstly, compressing takes some processing power, making it even slower to render, but even then the problem is that compressing a live interactive stream is fundamentally different from streaming a recorded movie.

    AFAIK, video compression techniques examine sequences of images in their entirety trying to find what they have in common and so save that information only once. You can’t really do that with live video, because the sequence doesn’t exist yet for you to examine. That’s one of the reasons why webcams have much worse quality than streaming a video from the apple site. It also explains why there is a lag from “live” video streams – I suspect they actually buffer a second or two and compress them before they send.

  38. ChrisL says:

    What? No matrix jokes?

  39. hINDUs says:

    I did the math:
    320×240 at 24fps = 42,2 MBit
    640×272 at 24fps = 95,6 MBit
    640×480 at 24fps = 168,8 Mbit
    800×600 at 24fps = 263,7 Mbit
    This is an uncompressed RGB stream

    When using XviD compression at 640×272, you get very good quality at 900 Kbit, which gives us a compression ratio of 100:1. This can be pushed as far as 1000:1 with AVC/H.264.

    This given we have:
    320×240 at 24fps = 42,2 MBit – XVID 432 Kbit – H.264 43Kbit
    640×272 at 24fps = 95,6 MBit – XVID 979 Kbit – H.264 98Kbit
    640×480 at 24fps = 168,8 Mbit – XVID 1729 Kbit – H.264 173Kbit
    800×600 at 24fps = 263,7 Mbit – XVID 2700 Kbit – H.264 270Kbit

    Well, the XVID bitrate values are calculated at “very good quality”, and the H.264 are “maximum compression, blurry image, no quality”
    So this is doable… But I think it will run at some variable resolution and at 5-15 fps… and oh, I have seen very impressive demos from the demoscene which were raytraced at nice resolutions and ran smoothly on 2-3 GHz machines!

    As for video compression, it usually looks ahead 1-2 frames; webcams are shitty by design :P – connect a DV cam and use a decent resolution and compression and you will get a perfect quality stream :)

    The Apple trailers are encoded at, let’s say, 900 Kbit while your webcam is 100-200…
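hINDUs’ uncompressed figures check out if you read “MBit” as binary megabits (dividing by 1024×1024 rather than 1,000,000) – a quick sketch reproducing the table:

```python
# Reproduce the uncompressed-stream table: width x height x 24-bit
# colour x 24 fps, expressed in binary megabits (/ 1024 / 1024), which
# is why the figures land a touch below the decimal-megabit values.

def uncompressed_mbit(width, height, fps=24, bits_per_pixel=24):
    return width * height * bits_per_pixel * fps / (1024 * 1024)

for w, h in [(320, 240), (640, 272), (640, 480), (800, 600)]:
    print(f"{w}x{h} at 24fps = {uncompressed_mbit(w, h):.1f} Mbit")
# prints 42.2, 95.6, 168.8 and 263.7 Mbit respectively
```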

  40. Scott says:

    God damnit, people keep stealing my ideas. First it was the smaller buttons toward the body on the Guitar Hero controllers, now this! I was literally thinking about server-side rendering earlier today.
    *puts on tin-foil hat*

  41. thesombrerokid says:

    compression = not runnable on any device! and thus defeats the purpose. not to mention you’re introducing more latency into the equation: at best you can encode a stream in the same amount of time it takes to watch it, introducing another .5 second lag on top of the .5 second lag you’ve already got, meaning you’re seeing significant frames a second apart. so you’ve solved one problem but made the other one even worse. xvid working on a .5 second stream would be pretty bad compression anyway

    ohh and
    800×600 at 24fps = 263,7 Mbit – XVID 2700 Kbit – H.264 270Kbit
    isn’t better than what local processing can do now, it’s worse. people don’t pay more or surrender some of the power they have for worse, they do it for significantly more or not at all

  42. Pishtaco says:

    However good cheap home PCs get at rendering, this technology will still be attractive because it makes piracy much more difficult and makes advertising easier.

    And much more compression should be possible with this than with live video, since you have a complete model of the geometry of the world which you can use to decide how to compress things. Maybe there’s a simplified model running on the client as well.

  43. Jetsetlemming says:

    It won’t uniquely render the world per user – rather, it’ll render the entire world in one instance, or one instance per server shard, and then have users occupy view-points of the world – think of it in a similar manner to local console multiplayer.
    I’m still doubtful, though.
    Also, some of those shots were goddamn creepy. What is this supposed to actually BE, anyway? Horror themed Second Life?

  44. GothikX says:

    First of all: calling it raytracing is an absurd stretch, and I say that without having watched the video or anything. True raytracing needs to be computed for every camera/point of view, which would mean calculating it for every user, which is pretty much impossible beyond a couple users even with a zillion processor machine (well you get the point). Radiosity is much more likely what they meant; that can be calculated once and rendered multiple times (with enough trickery), with relatively less overhead after the initial calculation.

    I was going to go on a big rant on why this won’t work but there’s no point really. If it does work somehow within my life span (and before hybrid quantum/bio computers and hyperspace communications or whatnot), kudos to them.

    Also: “In other words, there needs to be the equivalent rendering power to create the image for each person on a server.” – not really. One of the commenters had it right – once you have the physical/meta aspects of the world computed and the geometry all ready for the pipeline, you can take snapshots of the world from multiple viewpoints without much overhead; that’s actually the one thing that makes it even remotely possible. And since it runs on a (presumably) custom-built special-purpose machine, the comparison with graphics cards isn’t correct; there is some level of resemblance (at least with traditional rendering without raytracing), but saying that you need an 8800 for every client is just wrong.

  45. Tom says:

    Colour me hhhhmmmmmmmmm’d big time.
    I’m into 3D and I’m fairly confident that rendering of that quality, even at low resolutions, then compressing and packaging it up as a vid (that’s 24 frames per second), then sending it across the internet to the client, while all the time being INTERACTIVE, would require colossal amounts of rendering power and bandwidth.
    I could imagine the Pixar render farm techs laughing their asses off at this one.

    “By the time technology has evolved enough to make this idea truly feasible, your common-or-garden rubbish PC will have sufficient processing power to render ultra-realistic worlds anyway, thus eliminating its primary advantage.”

    You may have hit the nail on the head there, Monkfish. This is prototyping/10-year-plan stuff.

    However, I have to admit, I only have experience with offline rendering, so I can only judge from that. All kinds of wizardry goes into real-time rendering, so who knows. But even with that admission this still has hhhmmmmm stamped all over it for me.

  46. ohGr says:

    If they do manage to get this thing working right, Second Life will be crushed. All they need to do is add furries. And it’s goodbye Second Life.

  47. Kurgol says:

    Okay I call b*llsh*t on these guys.

    3min 20 sec into the video you see Optimus Prime – basically copied straight from the same guy who did the opening sequence.

    Now 3:54 into the video a straight rip of more artwork by the same artist!

    Now go take a look at the Otoy screenshots. I don’t believe this video has anything to do with Otoy. The published screenshots on their actual website are vastly inferior to those in this video.

    If you want to look at the original artist’s own website you can find it here. I’m sure there are more examples I have missed.

  48. Nallen says:

    What the shit is this ‘the cloud’ rubbish on TechCrunch. “it couldn’t even be done by the cloud!” “maybe this is possible on the CLOUD” “omg some1 said clawd so now I repeats it”


    It’s as if no one has heard of the concept of distributed computing before.

  49. CJ McFly says:

    Wow I cannot wait to play with the realistic rubbish and fire hydrants. I really doubt that’s a spoof… from the amateur chair it seems like just as much effort to do it as it would to make a fake that good – embedded programs etc!

    Sure hope you can embed something a bit more exciting than paint though…are there any links to a development site or similar?

  50. np says:

    I knew this was coming, thanks for confirming what I already suspected. This is the future folks!

    “So, say, for an online world with 200,000 people connected at any time, you’d need a server stack of 200,000 computers.”

    I don’t quite agree with that. If it uses ray-traced rendering like it says it does in the video, and every light source is traced, then surely you can have lots of different perspectives captured from the same rendered instance, if that makes sense? It would take a hell of a lot of processing, yeah, but not the equivalent flops of 200,000 computers. Of course having multicore parallel processors will be the answer to all of this; I think when I last read, some boffins at MIT are up to 64 cores on a single die.

    I am of course, as you admit yourselves, no expert in this field and don’t pretend to be. Very interesting though!