Unlimited Detail Wants To Kill 3D Cards

By Alec Meer on March 10th, 2010 at 12:05 am.

Jim pointed me at this video earlier, presumably believing that my knowledge of, for instance, how to overclock a processor without setting my house on fire means I can say whether its boasts hold any water. Um. Maybe? Unlimited Detail Technology reckon their clever tech is the biggest leap in realtime 3D graphics in decades. If what they say is true, we’re going to wave goodbye to games rendered in polygons, and hello to games free from cubist edges and limited model counts. Take a look below, and see what you think.

It’s eight minutes long, but they’re worthwhile minutes. Apart from the one PowerPoint slide about why polygons are bad that it shows about 15 times, anyway.

(Before the ‘but that looks rubbish’ comments arrive – the reason Unlimited Detail give for the stuff they’re showing looking a little rustic is that the artwork is created by programmers, not artists.)

Let’s leave aside the fact that the voiceover alternates between chirpy educational TV and a strange creeping menace, and instead concentrate on what it’s promising. To wit, games that can show as much or as little as the creators wish, with no apparent concern as to the hardware they’re running on. It’s done by using points instead of polygons, it runs purely in software, and it can even do its thing on a mobile phone. It sounds amazing. It sounds crazy. Maybe it is – grud only knows we’ve seen plenty of wondrous-sounding technology promises fail to arrive over the years, but let’s hope this one can pull it off. The rough concept behind it is that the game is only ever rendering the pixels you can see at any one time, rather than trying to muster the whole shebang on a constant basis. Or, to use their words:

“Unlimited Detail is basically a point cloud search algorithm. We can build enormous worlds with huge numbers of points, then compress them down to be very small. The Unlimited Detail engine works out which direction the camera is facing and then searches the data to find only the points it needs to put on the screen; it doesn’t touch any unneeded points. All it wants is 1024*768 (if that is our resolution) points, one for each pixel of the screen. It has a few tricky things to work out, like: what objects are closest to the camera, what objects cover each other, how big should an object be as it gets further back. But all of this is done by a new sort of method that we call MASS CONNECTED PROCESSING. Mass connected processing is where we have a way of processing masses of data at the same time and then applying the small changes to each part at the end.”
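
For the code-minded, here’s what that “one point per pixel” claim might look like in toy form. To be clear, this is a guess at the concept rather than UD’s actual algorithm (which they haven’t published): a few lines of Python that march one ray per screen pixel through a made-up set of points and keep only the first hit.

    # Toy sketch of "one point per pixel": march one ray per screen pixel
    # through a small set of stored points and keep only the first hit.
    # NOT Unlimited Detail's actual (unpublished) algorithm; the scene and
    # resolution are invented stand-ins.
    WIDTH, HEIGHT, MAX_STEPS = 64, 48, 100

    # A toy "point cloud": integer (x, y, z) cells mapped to a grey value.
    points = {(x, y, 20): 200 for x in range(-10, 10) for y in range(-10, 10)}

    def first_hit(dx, dy, dz):
        """Walk along one ray from the origin; return the first point's colour."""
        for t in range(1, MAX_STEPS):
            cell = (round(dx * t), round(dy * t), round(dz * t))
            if cell in points:
                return points[cell]   # exactly one point fetched for this pixel
        return 0                      # background: no point touched at all

    image = [[first_hit((px - WIDTH / 2) / WIDTH, (py - HEIGHT / 2) / HEIGHT, 1.0)
              for px in range(WIDTH)] for py in range(HEIGHT)]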

There’s another video on the site – I can’t find an embeddable version yet, but it’s got much more footage of the tech itself in action. It’s the Comparison one at the bottom of the page that you’re after. Oh, be warned: Voiceover Guy gets even stranger in it, possibly due to the insane bell-based soundtrack.

With the released footage only demonstrating static environments rather than an interactive landscape, and anything playable apparently being some 16 months off, it’d be reckless to start shouting “THE FUTURE! THIS IS THE FUTURE!” just yet. The theory seems sound, but the practice can only be complicated. Creating content, for one thing – it’s expensive enough for developers to create a current-gen AAA game, so how do they muster the resources to fill a photo-real world with, ah, unlimited detail? It’d be lovely to see it, of course, but it may take some doing.

Secondly, the tech’s strength seems to be in geometry – real-time animation, lighting and shadowing seems a little skipped over thus far, and may be an area in which UDT lags behind the otherwise more old-fashioned polygon-based rendering system. Again, though, the stuff on show is terribly early and wasn’t created by pro artists. Moreover, even if it can’t ultimately compete with high-end traditionally-rendered games, it could well be a fantastic way to make low-end machines create far more complicated scenes than they’d otherwise be capable of.

Keeping an eye on this one. The potential is incredible, whether or not the industry picks it up.


222 Comments

  1. Snuffy (the Evil) says:

    Neat.

  2. A-Scale says:

    So I can play games with unlimited detail via OnLive on my netbook? I’ll believe it when I see it.

    And for the record, their city looks much less realistic than one in, say, Crysis. Granted, I’m certain they could have done a lot more work on shaders and what have you, but the structures in their tech demo look bubbly and strange to me.

    • Deuteronomy says:

      A-Scale, the whole point of this is that there are no “shaders” per se. I suppose creating the point cloud could be done by converting conventional polygon based data from high resolution renders. There is a lot of merit in the approach these guys are taking, but I’m not sure how this differs from voxels.

      The main issue I see is generating the point cloud on the fly algorithmically, because there’s no way in hell I’m downloading a 5 terabyte game any time soon. Lighting this is going to be a bitch too. You are going to still need a video card for lighting and dynamic scenes.

      Tessellation seems to sidestep the issues they highlight in polygon-based rendering, and is a way of creating a “point cloud” on the fly.

  3. Heliocentric says:

    So… It’s a poor man’s ray tracing?

    • llama says:

      No….watch the video before you post.

    • Heliocentric says:

      Posting from a mobile, in bed no less.

    • Stephen says:

      Cool but you really need to watch the video before you comment on the new graphics technology.

    • Noc says:

      “We have a way — a real rather complicated way — of searching through unlimited points data, and only grabbing the ones that we need. And how many do we need? Well, we only need one for every pixel on the screen.”

      …so they’re working on a more efficient ray-tracing algorithm, and are using voxels. All of that “performing a spatial search” stuff… that’s all a raycast IS. They’re finding out which voxel (oh, I’m sorry, “point”) is in front of a given screen pixel, and have apparently found a way to do this that doesn’t involve sifting through massive amounts of junk data for each search.

      This is pretty cool, honestly, but it’s not like we haven’t already heard about this sort of thing. This isn’t some Crazy Revolutionary New Idea That’s Going To Change Everything That The Man (the “Polygon Companies”) Don’t Want To Hear About.

      It’s…another stab at voxels and raytracing. I’m totally in favor of people working on new sorts of graphics tech, but the theatrics are kind of making them look a little sketchy.

    • Biz says:

      Not a poor man’s raytracing

      a rich man’s raytracing :)

    • Steven Wittens says:

      Noc is right. Look at the debug rendering at the end (green glowing matrix-style)… this is exactly what you’d get if you were raytracing a sparse multi-resolution voxel octree.

      You wouldn’t do strictly classical raytracing, you’d be doing a sort of “fat” raytracing where you actually trace viewing cones with volume instead of infinitely thin rays. This lets you use the octree structure for maximum effect. You would start with a fat ray representing macro-blocks of pixels, only splitting up the fat ray into smaller blocks when it hits something. You stop the process when your rays become pixel-sized, and you never inspect octree nodes smaller than your ray is wide.

      Also notice, for as much as they put down voxels for their memory usage, the only scenes where they have a large viewing distance is where geometry is repeated *perfectly* on a grid. I.e. they’re just wrapping their rays around the same voxel structure, hiding this fact by using a simple rejection test to build those pyramids of monsters.
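
      (A rough Python sketch of the “fat ray” rule Steven describes, with an invented octree node type: the beam only splits when it descends, and nodes smaller than the beam are never opened.)

          # Rough sketch of the "fat ray" rule described above, with an
          # invented octree structure: a screen-tile beam descends the tree,
          # splitting as it goes, and never opens a node smaller than itself.
          class Node:
              def __init__(self, size, children=None, filled=False):
                  self.size = size          # edge length of this cube
                  self.children = children  # up to 8 child Nodes, or None for a leaf
                  self.filled = filled      # leaf occupancy

          def beam_hits(node, beam_width):
              """Does geometry in this subtree cover the beam's screen tile?"""
              if node is None:
                  return False
              if node.children is None:
                  return node.filled
              if node.size <= beam_width:
                  # Node is already smaller than the beam: treat it as one
                  # blob, i.e. "stop when your rays become pixel-sized".
                  return any(c is not None for c in node.children)
              # Otherwise the fat beam splits and descends into the children.
              return any(beam_hits(c, beam_width / 2) for c in node.children)

          leaf = Node(size=1, filled=True)
          root = Node(size=8, children=[leaf] + [None] * 7)
          print(beam_hits(root, beam_width=4))   # True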

  4. Seol says:

    Meh, that’s rubbish. Their main point about the angular look of polygons is rendered moot since DX11 cards can run adaptive continuous level of detail entirely on the GPU thanks to their tessellation stage.

    Also I shudder at the thought of the amount of memory their point cloud data must consume (especially when animated and with complex material parameters, not to mention the cost of creating it), and I wonder if, being a software-based solution, the CPU will have enough power left to, I don’t know, run a game at the same time.

  5. Javier-de-Ass says:

    that’s nice

  6. Axiin says:

    This really excites me! People can eliminate 3d graphics cards, spend more money on processors…. in the process get better graphics AND run bigger Dwarf Fortress worlds!

  7. Wulf says:

    It’s all incredibly static and pre-processed, isn’t it? I mean, you could use this for prototyping terrain, landscapes, and architecture, sure, but it’s still going to be static. And everything in the video seemed to nod toward that, from my limited understanding. And again, from my limited understanding, the more animation/physics that a game has bouncing around, the more limited (ironically) this technology will be.

    Considering that everything is about lifelike this, interactive that, all with physics doodads and procedurally-generated what-have-you’s, this isn’t really going to be of a lot of interest. The thing is, at the end of the day it looks like the challenge is going to be a completely prefabricated and static world with limited animation versus a slightly (almost visibly, but not quite) angular world with complete interactivity borne of everything rendering real-time.

    Unless this can compete on the interactivity front, what chance does it have?

    — Edited to add… —

    Not to mention that something’s been bugging me about this, and it took me a while to figure out what it was. It’s the same problem that some fractal renderers have.

    Basically, with no anti-aliasing, and a potentially low resolution, the entire video reminds me of a 2D game from the mid 90’s, perhaps with a pre-rendered backdrop.

    You know what I mean, it has very obvious pixels that make up the image, it doesn’t have smooth colours or patterns because of that. And I don’t know if it would be able to.

    So potentially we’d all go back to playing games that looked like they were in the 90’s, with almost the same level of pre-processing, it’s just that our computers will be doing the crappy rendering rather than the developer.

    • JuJuCam says:

      Think of it as exploring an alternative timeline from the 90’s in which polygons didn’t take hold the way they did. Yes, we’re back to 90’s graphics, because we have to start over from there and relearn all over again, without polys, the tricks that made polygons look realistic.

    • Down Rodeo says:

      Anti-aliasing wouldn’t be that difficult: if their tech scales well, you could simply render at higher resolutions and resample downwards, which is of course what quite a few games do already. Not sure how they’d do adaptive anti-aliasing though… some kind of edge detection? It looks impressive I suppose, but many people in these comments have already raised issues.

      Also that guy is camp as.
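
      (The resampling idea above, sketched in Python with numpy; the render function is a hypothetical stand-in for whatever the engine actually produces.)

          # Sketch of the supersampling idea: render at 2x resolution and
          # box-filter down. 'render' is a made-up stand-in renderer.
          import numpy as np

          def render(width, height):
              ys, xs = np.mgrid[0:height, 0:width]
              return ((xs // 8 + ys // 8) % 2).astype(float)  # checkerboard

          def supersample(width, height, factor=2):
              big = render(width * factor, height * factor)
              # Average each factor-by-factor block down to one output pixel.
              return big.reshape(height, factor, width, factor).mean(axis=(1, 3))

          aa_image = supersample(320, 240)   # 640x480 rendered, 320x240 shown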

    • Bonedwarf says:

      BRING BACK VOXELS!

    • Wulf says:

      Voxels, yes. Voxels I don’t mind. But everything in that video looks like a grainy, low-res GIF image from the early days of the Internet, or from old games’ pre-rendered backdrops and FMVs. Remember the FMV in stuff like The Pandora Directive? Great game, but that poor-quality video did a number on one’s eyes, and still does (I’ve replayed it recently). I could see much eyestrain coming from this, and even Outcast looks better.

  8. M says:

    Hmm.

    It looks alright; we can emulate this to some extent already, though, by ‘guessing’ what’s between the points on a polygon. We do this all the time to smooth out models.

    I was going along with it until he started to get more technical. Then it seemed a bit weird.

    It sounds like they’re using something a little bit graph-like (not statistics-based graphs, but this kind – http://people.brunel.ac.uk/~mastjjb/jeb/or/gt1.gif) to represent what can and can’t be seen at any one time. The way it renders for “each pixel” despite apparently providing infinite detail sounds a bit like a fractal – they’re mathematical functions and so are infinite in size, but your monitor has a finite number of pixels so it just colours them in accordingly.

    Assuming they’ve got a really nice representation, and they’re able to somehow explore the vast spaces required in order to render thirty scenes in a single second, then it might work. But there are certainly some questions to be asked – like you say, lighting and collision detection would seem to have a different meaning for a system like this. You can’t collide infinitesimally small points – you’d have to put a bounding object around them. And that object would be polygon-based, presumably. The same goes for lighting.

    The other thing I wonder is how it looks in motion, because the computer will frequently try to decide between a bunch of different, equally infinitesimal points to display. There’d need to be some clever checking to stop the world from fuzzing and vibrating horribly.

    Ultimately, the work can’t disappear. Even if we transform a polygon problem into a state space search, computational work is still being done. As you say, sixteen months is a long time, and until we’re looking at a perfectly round tree in Elder Scrolls V, who knows.
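
    (M’s fractal analogy can be made literal in a few lines of Python: the Mandelbrot set below has detail at every zoom level, yet it is only ever evaluated once per pixel, so the cost is fixed by the resolution, not by the amount of detail.)

        # The fractal analogy, literally: infinite detail, but one evaluation
        # per screen pixel, so the work is bounded by the resolution.
        def mandelbrot_pixel(cx, cy, max_iter=50):
            z, c = 0j, complex(cx, cy)
            for i in range(max_iter):
                z = z * z + c
                if abs(z) > 2:
                    return i            # escaped: shade by iteration count
            return max_iter             # assumed inside the set

        WIDTH, HEIGHT = 80, 40
        image = [[mandelbrot_pixel(-2.5 + 3.5 * x / WIDTH, -1.25 + 2.5 * y / HEIGHT)
                  for x in range(WIDTH)] for y in range(HEIGHT)]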

  9. Dave says:

    I can’t suspend my disbelief enough to buy into this, without a better explanation. Also, programmer art aside, what we aren’t seeing:

    — animation
    — lighting
    — alpha blending
    — running in a real game engine where CPU time is consumed by other things.

  10. K.Boogle says:

    Now let’s all pray NVidia and ATI (and even M$) don’t get a whiff of this, or we’re all screwed out of the future =(

  11. JuJuCam says:

    Colour me skeptical till I’ve got it in my own hot little hands, but if it’s true it would be a gamebreaker.

  12. Diziet says:

    Why the hell is it voiced by Graham Norton? That video also appears to assume its own audience is dumb. Also gave me flashbacks to Elixir and Republic: The Revolution.

    • Dominic White says:

      “That video also appears to assume its own audience is dumb.”

      It’s a Youtube video. Have you ever seen Youtube comments? Dumb is an understatement. Mind-crushingly moronic is just about par for the course there.

    • Diziet says:

      @Dominic White

      Good point. I’ll shut up on that count.

      On another note, I want 8 minutes of my life back. I mean, it’s a great idea if it could actually work, but it mentions nothing at all with regard to the following:

      – Great you’ve picked a point from the ‘cloud’ that you know you need to display, where’s the texture details? How the hell do you texture this stuff?
      – Great you’ve picked a point from the ‘cloud’ that you know you need to display, how do we calculate lighting?
      – Great you’ve picked a point from the ‘cloud’ that you know you need to display, oh shit, is that a graphics card built to display polygons? Guess I’ll have to chew up some CPU time that should have been running the game code for you. This one is actually a minor consideration though; if we thought like this we’d never have got decent 3D cards in the first place.

      I know it’s madness, but really the only way to create amazingly real computer graphics – which I’m not even convinced we need, though I do like the artistic aspect of some games – is to simulate reality. This pretty much means ray tracing, no?

    • solipsistnation says:

      I think you can’t think of texturing in the same way as you do for polygons– that is, rather than a texture being an image and a bumpmap projected onto a polygon, each atom (or point or whatever) has a location (replacing the bump map) and a color (replacing the image).

      Lighting would mean computing whether each atom is in shadow or not from each light source in a scene, and adjusting the color of that atom accordingly.

      The CPU question is the important one. This is all stuff that we can do now, clearly, but I think it doesn’t scale very well.

      There are questions that aren’t obvious, too, like, how do you store the location of the atoms? Are they on a grid? They have to be, so you can keep track of where they are, but how big is that grid? How big are the atoms? At various angles, light sources can potentially cast a shadow from one point onto 2 other points. How does that work?
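
      (A toy Python sketch of the lighting scheme solipsistnation describes, with an invented scene, grid and shading rule: each atom walks toward the light and is darkened if another atom blocks the path.)

          # Per-atom lighting as described above, as a toy: walk from each
          # atom toward the light; if another atom blocks the path, darken.
          atoms = {(x, 0, z): 220 for x in range(10) for z in range(10)}  # floor
          atoms[(5, 3, 5)] = 255                                          # blocker
          light = (5, 10, 5)

          def shade(pos, steps=20):
              px, py, pz = pos
              lx, ly, lz = light
              for s in range(1, steps):
                  t = s / steps
                  cell = (round(px + (lx - px) * t),
                          round(py + (ly - py) * t),
                          round(pz + (lz - pz) * t))
                  if cell != pos and cell in atoms:
                      return atoms[pos] // 3    # occluded: in shadow
              return atoms[pos]                 # clear path to the light

          shaded = {pos: shade(pos) for pos in atoms}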

    • Tom Camfield says:

      @ Diziet

      He sounds like he’s been educated in a private school. They tend to pronounce some words a little prissily, for example pronouncing data “daaarta” instead of “dayta” (or, /ˈdɑːtə/ rather than /ˈdeɪtə/, at least I think that’s the IPA). This is part of Received Pronunciation or Oxford English or whatever you’d like to call it.

    • Lemon scented apocalypse says:

      ^ ‘Darta’ is the correct english annunciation. ‘Dayta’ would be american.

      In terms of the new technology: colour me suspicious.

    • Lemon scented apocalypse says:

      pronunciation even. Must have catholics on the brain

    • Nesetalis says:

      From what I gather, he is using point data.. which would contain color, alpha, and any other variables associated with it. Similar to, but not necessarily the same as, voxels.

      If I was doing it, I would create objects from point data, with each point referencing a sub object, so that as you zoom in, you start rendering data from the sub object positioned at the point you are looking at.. or as you zoom out you figure out where the point you are looking at belongs in the grand scheme… and so forth.

      I gather he’s doing something similar, since he said search engine.. it can be done very easily with individual objects in a common database. Objects for shape, perhaps even procedural color applied to them via parent object.. inheritance.

  13. Casimir's Blake says:

    After a few seconds I realised what the visual style of this tech reminds me of.

    The Shamen – Axis Mutatis. William Latham cover art.

  14. fulis says:

    Isn’t this similar to what Carmack has been talking about with virtualizing geometry and essentially removing the polygon budget?

  15. HermitUK says:

    I agree with a lot of the comments here. It’s interesting, but it seems to be ignoring some pretty big parts of games. Reminds me a bit of something like Riven or Myst 3 – something which looks fantastic but isn’t fantastically interactive.

    One interesting thing I did notice is that they’d avoided animation of any kind. “Programmer art” doesn’t prevent them from showing a bit of wind swaying the leaves and grass, or having those pyramids of 2000-odd monstrosities do a little dance.

    It’s entirely possible that there’s potential here, but for now I have to file this in the same place as streaming games – can’t really believe it til I see it on the screen in front of me.

  16. Phydaux says:

    Wow, I think my hoax alarm just exploded.

  17. Mwalk10 says:

    Seems like it would take an infinite amount of hard drive space to hold an infinite amount of detail.

    They say they can compress it into something very small, but then that would make animation difficult when you need to change the state of things.

  18. EBass says:

    I mostly understand the concepts here, but I’m still skeptical. I mean, his assertion is that it searches for the pixels it needs to display and then only displays them? Ok, but so what? We still need a massive amount of these rendered in order for it to actually look good.

    I mean hell even in the demo we get framerate drop.

    • solipsistnation says:

      That’s actually what poly-based rendering does– unless you’re playing Trespasser, the game engine doesn’t render all the polys in the level you’re exploring and all the polys in the items and creatures you encounter– it clips polygons on the backs of things and those outside your field of vision.

      That’s an interesting point, too– if I want to not show the backside of, say, a crate, I have between 3 and 5 polygons I don’t have to render (3 for the straight-at-the-corner view, where you see 3 and 3 are masked, and 5 for the right-up-against-one-side view, where you see 1 and the other 5 are masked). For a crate rendered in points, there are potentially UNLIMITED points that don’t get rendered– that’s a lot more data manipulation than just skipping out on 3-5 polygons. On the other hand, because points are either behind something or not, that check may be much simpler and thus much cheaper as far as compute time– and no having to compute how much of a poly that is partially behind another object should be rendered…

      I think the scale, however, makes computing clipping for a zillion points take enough longer than computing clipping for a handful of polygons to matter, even on zippy computers.

  19. Mwalk10 says:

    One thing they didn’t touch on is the ability to now store the insides of things. Fully deformable environments + characters here we come.

  20. kyrieee says:

    I’ll tell you what isn’t unlimited:
    Disk space and memory access times

    This sounds too good to be true, but I guess time will tell. I don’t see how you could bypass projective geometry using some weird search algorithm, but I don’t know much about search algorithms… I do know a bit about projective geometry though :P

    Also, the issues with LODs and pop-in they’re talking about are completely solved by tessellation, which is almost here now

  21. alinkdeejay says:

    To use an engine like this to the fullest, you’d also have to manually create these actual details. That by itself is a crazy amount of work, too much for the already huge teams of developers from the largest companies. Games like that will simply be too expensive and too much of a risk to develop.

    How would textures work? The good thing about polygons is that you can apply 2D textures on top of them. With these strange dots floating in space, it seems a little …impractical and annoying to apply textures.

    • solipsistnation says:

      Yeah, it would mean a totally different style of tool, since textures would be a combination of different displacements of dots and different colors of dots…

    • Tacroy says:

      It wouldn’t be that different, and might even be easier. For instance, instead of drawing something that looks like a flat brick wall and pasting it on one long flat rectangle, you’d draw a brick and tell the engine that there’s a hundred of them arranged in such-and-such a fashion, and each one should have this much noise added to its appearance so they’re not all exactly the same.
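
      (Tacroy’s brick wall, sketched in Python with an invented data layout: the brick’s points are stored once, and the wall is just placements of that one model plus a little noise.)

          # One brick's points stored once; the wall is placements plus noise.
          # The data layout here is made up for illustration.
          import random

          brick = [(x, y, 0) for x in range(4) for y in range(2)]  # one tiny model

          def wall(rows, cols, jitter=0.05):
              placements = []
              for r in range(rows):
                  for c in range(cols):
                      noise = random.uniform(-jitter, jitter)
                      # Every instance references the same brick data, offset slightly.
                      placements.append((c * 5 + noise, r * 3 + noise, brick))
              return placements

          instances = wall(rows=10, cols=20)   # 200 bricks, one model in memory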

    • Spod says:

      Perhaps it’d be possible to digitise real world items using some sort of “LASER” scanning.

    • Deuteronomy says:

      What is this “LAZER” you speak of?

  22. solipsistnation says:

    Okay, at about 3:50 in the first video, a plant passes really close by and you can see that it is, indeed, made up of loads of floaty points. But at that range, the points are HUGE. It’s like that “Voxelstein 3D” game. Say what you want about textured polys, if the texture is at a high enough resolution and you have filtering and anti-aliasing and so on turned on, if you get that close it’ll still look smooth. Perhaps a bit weird, but not like giant floating matchboxes.

    You can see some of that in the comparison “propaganda2009” video, at about 1:22, when he zooms in close on the ground (“They’re real pebbles.”). Everything that gets close to the screen gets very strangely blocky and it becomes more obvious that it’s a bunch of floaty “atoms.”

    I also note that the horizon (or at least where the view cuts off) is pretty close in most of their demos. I’m a bit skeptical of how well it really scales, although I’m sure there are point-reduction algorithms just as there are poly-reduction algorithms currently…

    So, yeah. It’s interesting, but I don’t think it’s as unlimited as they want to think. I’m curious about the memory usage of those scenes, too.

    And dear lord, they should hire a professional to do the demo videos. And not release a demo video with weird software bugs that the narrator has to explain away.

    • captain fitz says:

      Comment retracted–I just saw one of the other videos. And what’s with the noise on the right side and bottom of the screen in the comparison video? Looks terrible, I hope that’s not being rendered.

  23. Velvet Fist, Iron Glove says:

    The biggest thing that struck me about all the samples they showed was that they had a tiny number of different “models” on the screen. They were instancing the creature in pyramids many many times over, but even the jungle and town scenes were instancing a very small number of models.

    What that suggests about their algorithms’ bandwidth requirements and cache coherency I can’t say, but it doesn’t give a good impression.

    • Tacroy says:

      To me, it suggests that each model is much larger than with traditional methods. You would have to push a lot more data over the bus to the graphics card per model, if you want to have multiple different models.

      This is not necessarily a problem, however; you could get around the samey-ness relatively easily by telling the GPU “here’s our point-cloud model; here’s a function that you can apply on the model to change it with a seed value; now put f(model, seed) here, and f(model, seed+1) there.” – kind of a mixture of this stuff and the procedural content generation you find in some games. After all, it’s not like you need two different models for bad guy one and bad guy two; they’re both people with some slightly different coloration and maybe a couple of accessories.
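
      (The f(model, seed) idea, sketched in Python; the model format and variation rule are made up for illustration, but the key property holds: the same seed reproduces the same variant every frame.)

          # One model, deterministically varied per instance, so two "bad
          # guys" share geometry but not appearance. Details are invented.
          import random

          def vary(model, seed):
              rng = random.Random(seed)   # same seed -> same variant, every frame
              tint = rng.uniform(0.8, 1.2)
              return [(x, y, z, min(255, int(c * tint))) for (x, y, z, c) in model]

          model = [(x, 0, 0, 128) for x in range(8)]   # toy point model with colour
          guy_one = vary(model, seed=1)
          guy_two = vary(model, seed=2)                # same points, different tint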

  24. Mark says:

    Well, speaking with my experience in computer science, using search methods for a problem like this does seem to be an effective way to handle the issue. Some of my colleagues have done some pretty astonishing things with search algorithms. It doesn’t smell too much like snake oil to me.

    Of course, there’s more to rendering than pushing pixels. While an efficient search algorithm can lower the overall requirements needed in comparison to brute force, the algorithm itself will still have pretty high overhead. The complexity of the scene would still represent a limit to what can be rendered in a timely manner on a given system; it’s just that the bottleneck is now more likely to be memory or processing. Even if you’re not rendering all the geometry every frame, you still have to store it and you still have to search through it, which are both tasks that this method makes significantly more demanding.

    Doesn’t have the smell of snake oil to me, though.

    • dantokk says:

      Really? Comparing the problem of visibility “search” to Google and Yahoo search doesn’t smell like snake oil to a computer science guy?

    • Nesetalis says:

      From what I gather of what they are doing, search is exactly the term they should use, and exactly what it is on the package.. so no, to me, it’s not snake oil.

  25. gulag says:

    Hmm, virtual dirt. That’s what it is. Particles stuck together to make stuff.

    I was having this discussion today with some mates. What if we could model soil in a game? Rocks and mountains and cliffs are easy to do with hard-edged polygons, but what about the soft, malleable blanket of soil that lies over an environment? What if you could keep track of enough of it that you could model earthworks, or trenches, or a landslide? Let players change the terrain, or use it as a way to encourage exploration.

    This seems like a step in the right direction, but as commenters have pointed out above: If it doesn’t move, it ain’t much use.

    • A-Scale says:

      I know this isn’t in the sense you’re talking about, but Tiberian Sun had a flame tank that could dig through the soil and Stronghold had sappers that dug tunnels. Interesting concept that, dirt digging.

  26. The Snee says:

    I’m pretty skeptical of this for a number of reasons.

    1) I’m not sure exactly how this point cloud system differs from things like voxels, or how it is more efficient than your current industry standard (a low poly mesh covered with a material that simulates the lighting for a high poly mesh). Because of the way things work now, you’re usually getting a smooth-ish transition between polys on the majority of surfaces, to the extent that it’s hard to actually find the surfaces unless you really know what you’re looking for. And once you get to that point, you start noticing the polys in real life, and commenting on how the texturework on that newly painted bench is fairly shoddy. It happens, trust me.

    2) Someone already mentioned this; the pipeline. At the moment you have loads of programs set up for poly based editing. You make your model, bake out normals, paint on textures, etc etc, import it as a mesh into engine and so on. I’d really like to see their toolset to see what you actually work with. The way textures are applied is a particularly weird one.

    3) Unlimited. Unlimited. The phrase keeps popping up in the video. I find that hard to believe. Even at its smallest possible size, each point in their cloud system will be a few bytes of three-dimensional positional data. There’s no way it could be unlimited; there’s a limited amount of memory on any machine. Instancing will help, as the animal pyramid shows, but that video was a little less than smooth.

    4) The video itself. Its presentation was awful. Artistic choices aside, the video and its narration seemed to be laced with an amount of bitterness and self-absorbed superiority so high that I couldn’t bear to watch the whole thing. It gives me the feeling that while there’s talent there, there’s a bad direction behind the project, and they need someone who knows what they’re doing to take control.

    So in essence, if that video is a representation of their project, I can pretty much disregard it. Until I see some facts, a proper high-resolution tech demo that shows how the environment can be interacted with, and a demonstration of toolsets, I can’t make a proper judgment.

    • dantokk says:

      Yeah, what’s with the whole “unlimited” thing? Despite his claims, the jump to 32-bit colour is not a jump to unlimited, just high enough to be indistinguishable by the eye. I’m sure there are imaging applications that use more than that.

      Similarly, when we finally do get true-to-life visuals they will be of limited precision, just enough to fool the eye and mind.

      The whole video shoots out little fallacies like that at a pace designed to confuse, not inform, and it drives me bonkers!

    • Nesetalis says:

      The highest color depth we usually go to is 48-bit. Most of those colors can’t be displayed on a monitor or seen by the eye.. but they can be very necessary when doing advanced graphics editing.. (to get proper effects from color interaction and with things like anti-aliasing)

      This doesn’t strike me as BS.. personally.. depending on exactly what they are doing, it is very likely conceptualized from voxels.

  27. Mario Figueiredo says:

    This whole thing smells like a prank. A few points to take into consideration:

    – It’s not actually polygon count that is an issue with current polygon-based architecture. It’s textures. If it weren’t for textures, we could increase polygon count to unimaginable levels.

    – The author doesn’t explain very well how they zoom into a point cloud. This is the big issue with voxels. They don’t scale well. This is their bottleneck. And let me tell you, a huge one. The amount of processing needed to recalculate the whole screen is today much higher than that needed to redraw a polygon scene. If the argument is that new processors will be able to handle this, well, they will also be able to handle more and faster polygons.

    – On the issue of zoom, there’s also the problem of voxel data scalability. As you get closer, points start to become evident; in order to avoid this, higher resolution data needs to be stored, which carries more points than the previous resolution. With big enough resolution levels (akin to what we have in modern games) the amount of data needed is huge. I’d wager something like 10x to 100x to store a modern game with half the zoom capabilities of its polygon-based original.

    – A revolutionary new technology is never presented with such amateurish screens and dialog. The issue is not the quality of the graphics. That’s secondary. The issue is programmers showcasing something that really has nothing to do with what they’ve allegedly been programming. Point cloud techniques could better be showcased with morphing scenes, scenes with many moving objects in various directions, image destroy-and-rebuild scenes, discussing textures and how they are achieved, showing bump map effects and how they are calculated, rapid color changes, slow-motion zoom effects, etc. And all in a more adult-like tone. I went to their website and saw nothing of that.

    – The simple mention of the word “unlimited” completely destroys their case. It’s just not serious talk. If they mean unlimited because computer processors have the potential for unlimited development, well, so is polygon-based rendering unlimited, for the exact same reasons. If on the other hand they say unlimited because it’s really already unlimited (like they tried to imply with the pyramids), then it’s complete BS. As there is no such thing as unlimited processing capabilities. A scene rendered on a given resolution may also seem “unlimited” by employing polygons. The limit is not in the amount of polygons in a frame, but the amount of frames per second.

    – Finally… where’s the mandatory FPS readout?

    • Soobe says:

      I think the biggest problem with this idea is that we’ve already started to bump into the realism ceiling. Take Modern Warfare 2. You’re simply not going to find a game with higher production values until the next one comes out, and I have a hard time quantifying how that game would have been better, both in terms of gameplay and quality, had this tech been employed.

      Our problem isn’t polygons, it’s art department budgets.

      Of course, that’s to say nothing of animation. I’m sure any solution would have to be of the reductionist sort anyway, in that you’d have to subdivide and most likely create polygons out of particle clouds!

      Who knows though, let’s wish these guys godspeed!

    • godwin says:

      Excuse me here, but MW2 is hardly the apex of quality whether in terms of gameplay, writing or production.

  28. JuJuCam says:

    Forgive my ignorance, but would it be possible to integrate elements of this model with the polygon model so you could have, for instance, point based static environments but polygonal interactive / animated characters and features? Or are they mutually exclusive graphics engines?

    Something like the first two Wing Commanders, which had a polygon model that was replaced wholesale by a bitmap that changed depending on the player’s perspective, rather than being textured in the now-traditional way. A compromise between old and new technology.

  29. LionsPhil says:

    As noted, this is “just” voxels + raytracing, perhaps with some overblown pixie dust on top.

    Voxels mean lots and lots of drawing frames to do animation. You ever notice how the units in C&C2 and RA2 didn’t really have much at all in the way of animation, just rotation of the entire model, despite being from an era where polygon-based games like TA had spider K-bots scuttling around? It’s a lot easier to animate something that you’ve defined as a set of shapes than as a grid of 3D pixels.

    I don’t think nVidia or ATi need to panic.

  30. noom says:

    I can’t be the only person that found the voice-over guy rather endearing, can I..?

    • SFLegend says:

      I thought that he sounded like Jim Sterling from Destructoid, which is kind of the exact opposite of this.

    • Bhazor says:

      I thought he was great. Just the right measure of smugness and condescension without being unlikable. I imagine that is how Jeeves/Stephen Fry would talk. If they knew what point clouds were.

  31. Pijama says:

    For us gamers with a basic grasp on graphic technology but not enough to understand the finer points of it, WHAT THE HELL ARE YOU GUYS ON ABOUT, please?

    Pretty please?

    • Mario Figueiredo says:

      That it could be cool. But it’s too cool to be true. And our current processors and storage capacity agree it will not be true for a long while.

      In terms of game development, if we had the capacity to render entire voxel-based scenes in modern 3D games in real-time at rates above 30 FPS, the possibilities in terms of art design would be nearly unlimited. Think movies, now think games that look like movies (which for many of us may not be that cool anyway).

    • solipsistnation says:

      Current graphics cards treat everything as a bunch of triangles– a rectangle? Two triangles. Thus a crate is 12 triangles– 2 per side. A tree trunk? A bunch of triangles arranged into a cylinder wider at the bottom than at the top. And so on.

      What these guys do is, instead of taking all those triangles and turning them into pixels which are then displayed on your screen, they start off with the pixel as the basic unit. So while traditional video hardware represents a crate in memory as 12 triangles and a graphics file (a jpeg or .bmp or whatever) wrapped around it, these guys want to represent that crate as a bunch of individual dots. It’s the difference between representing a tabletop as a flat surface and representing it as the atoms that make up that surface.

      The problems come when you decide what to draw. Polygons present one set of problems to solve, and atoms represent another, similar but not identical set of problems. The question is whether the atom problems or the polygon problems are more complicated and take longer for your CPU to work out.

      Currently, polygons are winning because they are relatively fast to work out and draw on your screen– this is partially because it’s how we’ve been doing it for years, and partially because you have a video card that does most of the hard work instead of your CPU doing it. These guys want to take that back from the video card and make the CPU do the work, since CPUs today are much faster (not just in clock speed, but in how rapidly they can process instructions) than they were a few years ago.

      It’s an intriguing idea, but I suspect that in practice it won’t work as well as polygons. It may look better if artists take the time to really get into the system, but it will be slower. Unless of course we had atom-cloud acceleration hardware, video cards that can work with tons of atoms rather than tons of polygons…

  32. Gorgeras says:

    I think you’re all missing the point.

    This is the Matrix. I swears. OMG, that means that voice belongs….to the Architect.

  33. Daniel Klein says:

    Trips all my many bullshit sensors. They say it’s different from Ray Tracing, but they never explain how. From what I can tell (have a point of origin, lines of sight, figure out which points of the geometry are needed) this is EXACTLY ray tracing. But the main reason my BS sense is tingling is that computer graphics is a highly lucrative, highly studied field, and they claim to have just come up with a cold fusion reactor wrapped in a perpetual motion machine. That they use all the usual snake oil peddler tropes (“we showed this to Big Company XX and they didn’t like it because they LIVE IN THE PAST!!! There is a conspiracy to keep us silent just like Big Pharma doesn’t want you to know that you can cure cancer with the power of CRYSTALS!!!”) doesn’t help their cause either.

    Basically, this is bunk.

    • lonkero173 says:

      I don’t think raytracing is usually done with point-based data, so it’s not exactly raytracing as we know it. (Sounds a lot like some raytracing/voxel combination.) Also, like others have pointed out, no animation is shown, and animation is typically the Achilles’ heel of raytracing/voxel-based solutions, as it forces you to (at least partially) reconstruct the data structure you are using. And building the data structure is generally a slow operation, while seeks are extremely fast (albeit they do get slower with larger data structures, usually O(log(n)) complexity). Do take this with a grain of salt; they don’t tell much and I’m not exactly an expert on these things.

      Anyway, looks interesting but I doubt it’ll get anywhere, too many problems. Also the gratuitous use of UNLIMITED is rather frustrating.

  34. shai says:

    It took some time before I noticed I wasn’t on /. after reading this thread

  35. DMJ says:

    More details please. Not just “it’s like Google and it isn’t voxels and it isn’t raytracing even though it sounds awfully like exactly that”.

    We need something meaty and terrifyingly technical, using big words that we can look up on Wikipedia and nod sagely to ourselves as we raise a quizzical eyebrow.

    • xrabohrok says:

      I’m with him on that. I have a feeling that this movie is aimed more towards the venture capitalist crowd, whoever that is. But at the same time, it’s still belittling. I know what a polygon is, dammit!

  36. Deuteronomy says:

    Please go to youtube and type “Atomontage”. Prepare to have your mind blown into little voxels.

  37. u335 says:

    Well, you obviously can’t store an unlimited amount of points in a computer – so the point cloud must be procedurally generated. So, instead of 1 trillion points in a “cloud” actually stored somewhere, you only have 500 equations that produce 1024×1024 points of color. Ok. That’s fine, I’ll go with that.

    Methinks the crux of the matter is having those points INTERACT. Polygons give you useful little units that may interact with one another. It seems to me that you must re-invent this concept of ‘useful little units’ in order to make the point cloud method work for animation…which I would say most gamers are rather interested in. Otherwise, it seems the CPU load will be untenable. Any ideas on this anyone?
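
    (The “equations instead of stored points” half of u335’s comment, sketched in Python: a sphere surface generated on demand at whatever density the current zoom asks for, from one formula and no stored geometry.)

        # Points generated from an equation on demand: density scales with
        # need, storage stays constant. Names invented for illustration.
        from math import cos, sin, pi

        def sphere_points(radius, n):
            """Yield n*n surface points computed from the sphere equation."""
            for i in range(n):
                for j in range(n):
                    theta, phi = pi * i / n, 2 * pi * j / n
                    yield (radius * sin(theta) * cos(phi),
                           radius * sin(theta) * sin(phi),
                           radius * cos(theta))

        coarse = list(sphere_points(1.0, 16))   # far away: few points
        fine = list(sphere_points(1.0, 256))    # zoomed in: more points, same storage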

    • JKjoker says:

      you could also build the world with polygons behind the scenes, but with a much lower count – it would look horrible visually but work perfectly for calculations. a whole new technology always needs some tweaking, new ideas and time to mature

      you could still have a “gpu/physics card” for calculations, but without the need to be as powerful and expensive as they are now

    • u335 says:

      What you say is interesting, joker. I agree that it’s a good idea to keep an eye on this one, but I keep having a nagging feeling that something about this is bullshit. I guess we can maybe think of an example to test the waters: what if someone reprogrammed Asteroids using the point cloud method? Can we intuitively understand how rendering would be just as easy – or easier – using point cloud rather than vectors/polygons?

      Hmmmm…. /puts on thinking cap

  38. JKjoker says:

    sounds interesting, i can buy that a 3d search engine can find 1024×768 pixels faster than what it takes to build a 3d world with polygons, apply whatever they do to make it look nice and show it at that resolution, the cost of building “unlimited detail” worlds makes me a little uneasy tho, but i guess that could be fixed by deciding a quality/budget goal during development

    assuming it actually works i think they should start with portable consoles/pcs/cellphones first where they can make things that look better than anything with polygons, it will take them a while to reach a level where they can compete with the much more mature polygon technology on big consoles/pcs where processing power is available (which is why a lot of posters above are noting the demo doesn’t look better than current games)

    polygons didn’t get where they are now in a day you know, they started out looking like crap. i’ll give these guys a chance, let’s see what they can do

  39. WilPal says:

    I guess collisions could be checked like this:
    Check which layer the player is at in the “Point Cloud”; if he intersects, stop applying gravity/velocity etc.

    Obviously it would be far more complex than that, as that would only give very basic interactions, but you get the idea.

    Now i’m going to bed because that doesn’t make much sense.

    • Scundoo says:

      “if he intersects, stop applying gravity/velocity etc.”

      You are forgetting a little thing called inertia.

    • u335 says:

      Nah, not inertia. If there is an intersection the player character needs to experience a force. Inertia is just a general statement about motion in the absence of force, i.e. you never write an equation with “inertia” as a variable — but that’s all a bit OT.

    • Dave says:

      My guess is you do the traditional thing for collision: low-poly models and/or combinations of geometric primitives. The technology for automatically turning models for rendering into models for collision/physics is pretty well established.
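
      (Dave’s suggestion, sketched in Python with made-up numbers: physics never touches the point cloud, only a cheap proxy shape, here a single axis-aligned box tested against a sphere.)

          # Collide against a cheap proxy, not the render data: the standard
          # sphere-versus-axis-aligned-box overlap test.
          def sphere_vs_aabb(center, radius, box_min, box_max):
              d2 = 0.0
              for c, lo, hi in zip(center, box_min, box_max):
                  if c < lo:
                      d2 += (lo - c) ** 2   # squared distance to nearest face
                  elif c > hi:
                      d2 += (c - hi) ** 2
              return d2 <= radius * radius

          # The million-point statue is, as far as physics cares, just this box.
          hit = sphere_vs_aabb((0.5, 1.0, 0.0), 0.5, (-1, 0, -1), (1, 2, 1))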

  40. bbot says:

    Nothingdamn! Get this guy a PR agency!

    The video was just eye-searingly bad. The voiceover was bad, the script seemed to spend most of its time insulting the audience and the rest of the time skipping over technical details, the editing was lousy, the screen capture was a slideshow, the phrase “unlimited detail technology” is an out-and-out lie, and for some reason this graphics demo was in 480p.

    Technically, it’s interesting. The point cloud geometry would be hard to fake, and the laggy video convinces me that it was actually some twat sitting in front of a keyboard and mouse controlling the thing, rather than an offline renderer. The lighting is realtime, as eagle-eyed viewers may have noted half-way through, when he messed around with it.

    Never saw any of the models move, though.

  41. Scundoo says:

    but that looks rubbish

  42. Bruno Daniel says:

    THE FUTURE! THIS IS THE FUTURE!

    No, seriously, it’s too good to be true. Perfect geometry? Sounds a bit like electric cars: awesome on paper, but simply too expensive for our current economy. But just like electric cars, this thing could very well be the standard in the future…

    I sure as hell can’t wait to see the SDK they mention in the video!

  43. MrBRAD! says:

    This will be suppressed by the big businesses so they can keep selling us expensive cards in the same way that pharmaceutical companies don’t want to find a cancer cure so the only choice for people is expensive chemotherapy.

    IT’S A CONSPIRACY MAAAAAAN

    /tinfoilhat

    • JKjoker says:

      i was thinking the same thing, but then i remembered that both the cpu and gpu grind has been kind of frozen for the last few years, that neither Microsoft nor Sony want to release a new generation of consoles any time soon (which would make software that could breathe new life into them very tempting), and that the fastest growing gaming platforms right now are portable consoles, cellphones and netbooks, which have little power and could make good use of this tech

  44. Justin says:

    Looks terrible. How computationally expensive are collision detection and animation going to be with “unlimited geometry”? Not to mention the hoax factor is at 11. Phantom and OnLive, eat your heart out. Really not even post-worthy, but apparently an effective troll.

  45. Batolemaeus says:

    So, they’re actually using tools for voxel editing to make their stuff…ho-hum.

    I’m interested, since it does seem like a nice improvement on voxels and raytracing/raycasting, somewhat of a blend between them. However, I doubt it will make much of an impact on traditional polygon-based rendering.

    In my opinion, if they stopped pretending this was some groundbreaking stuff that has never been tried before, they’d gain a lot of credibility. The longer demonstration video goes into a bit more detail (not enough for my taste) and he did admit to rushing out the video and having some issues with their system, so apparently they do have something. Mentioning programmer art as an excuse gives them a thumbs up at least.

    However, I’ll wait until they can put some meat on the table. The videos just show a highly detailed voxel world, and that’s just not enough to get me excited.

  46. Tunips says:

    It seems that the good idea here, underneath the questionable mumbo jumbo, is that this display method would have a fixed computational intensity. So as computers get more powerful, there’ll be more time left over to layer on more and more fancy shaders and lighting effects and such, rather than the current GPU race to hurl out more and more polygons.
    It does seem like tomorrow’s tech. One rather suspects it will need the day after’s hardware before it looks as good as yesterday’s.

  47. MWoody says:

    Why do it on the CPU? Why not do it on the GPU, with something like CUDA?

    • namuol says:

      The SIMD (single instruction, multiple data) architectures present in modern GPUs aren’t nearly as versatile as the MIMD (multiple instruction, multiple data) architecture of modern (multi-core) CPUs.

      Basically, SIMD lets you perform the same (single) operation on a huge list of data (for instance, multiplying sets of two numbers together) all at once, since the GPU has many (hundreds) of little “processors” that perform this specific operation, while MIMD allows you to simultaneously perform different operations completely separately; that is, in a 4-core CPU, you could use one core to do such a multiplication while another does division, etc.

      While today’s best CPUs only have about a tenth of the number of “processors” as today’s best GPUs, the ability to perform operations independent of others is very valuable, simply because many rendering techniques (including the raytracing/voxel/”point cloud” varieties) involve lots of conditional branching (better known as “if statements” ;) ).

      It *is* possible to generate “branchy” SIMD code, but you usually end up with a lot of waste, since you have to compute the result of every conditional branch and then discard the ones that are irrelevant.
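
      (That last paragraph, sketched with numpy: the SIMD-style version computes both sides of the branch for every element and then selects with a mask, discarding the unused half of the work.)

          # Branchy code in SIMD style: evaluate BOTH branches everywhere,
          # then keep one result per lane with a mask.
          import numpy as np

          x = np.linspace(-2.0, 2.0, 8)

          # MIMD/scalar style: each value runs only the branch it needs.
          scalar = [xi * xi if xi > 0 else -xi for xi in x]

          # SIMD style: both branches computed for every lane, then selected.
          squared = x * x                            # computed for every lane
          negated = -x                               # ...and so is this
          simd = np.where(x > 0, squared, negated)   # unused results discarded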

  48. Tejlgaard says:

    I call bullshit too.

    Someone above said that you obviously can’t store unlimited points in a computer. No, no you cannot.

    He suggested that the points were somehow stored in equations. Well, uhm, yes, how exactly?

    Because that’s how polygons work. Equations and vectors signify surfaces, that the rasterizer can then populate with pixels by using textures.

    I spent the past hour thinking about this. Let’s say we don’t want unlimited points in a scene. Let’s say, we want a billion. A billion points. And no, you can’t say you want these points to be generated by equations, because you used a _fucking laserscanner_ as he suggests in another video. How are you going to compress that? You’re not. You can’t. The best you can possibly do is to map your points straight to memory addresses, and then have each point defined in relation to a neighbor, which must be the previous point; even if you could do this, that’s _still_ 4-6 bits per point, depending on how clever you are.
    Notice, that’s not even storing the colour with the point (this could, arguably, be done by using a texture, weird as that may sound, so that’s no different than normal, but it does count against the infinite detail idea)

    So no matter how you look at it, to get a billion unique points, you need at least 500 megabytes worth of storage in RAM. And the storage algorithm I outlined above? It would be very hard to code a search algorithm for.

    At the very most, with modern day computers, the upper limit would be 4 billion points. Not unlimited. Well…….

    Supposing you mounted one of those SSDs that cost $5000+ on your PCI Express bus, I suppose you’d have the bandwidth to swap in points fast enough that there’s a theoretical upper bound for a PC maybe 1000 times higher. Presuming they wrote a magical logarithmic search function for what’s an otherwise linear data structure, which is what they claim to have done.

    But a trillion points is mentioned in the video, and he goes on to say that this stuff has unlimited detail. That’s plainly nonsense. No matter what or how you do this, they have not written something that supersedes previous experiments run on supercomputers.

    And from the looks of the video, they don’t actually render unique points; they render the same points, but from different angles. That’s not unlimited detail, that’s unlimited sameness, and I can do that too. It’s called a for-loop, and it’s how they did the pyramid monkey thingies.

    This is not a breakthrough, this is a small time company with a small time owner looking to make a quick buck by being sold to a major player.
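
    (Tejlgaard’s back-of-envelope arithmetic, run as Python; these are his figures from above, not independently verified ones.)

        # A billion unique points at his optimistic 4 bits each is 500 MB
        # before any colour is stored; 32-bit addressing caps you near 4e9.
        points = 10 ** 9
        bits_per_point = 4
        print(points * bits_per_point / 8 / 1e6, "MB")    # 500.0 MB
        print(2 ** 32, "points addressable with 32 bits") # 4294967296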

    • namuol says:

      I think the memory requirements could be a lot lower than you predict since there’s a lot of repeating objects (instances) in these scenes.

    • Tejlgaard says:

      namuol:

      That’s exactly the point; repetition does not detail make. I’m talking about unique points in my analysis, since that’s what the video guy is talking about in his comparisons.

      These guys undoubtedly have _something_ it just isn’t the something they’re saying it is. What they’re saying is _physically impossible_.

    • u335 says:

      I think you nailed it, Tejl. And that was my point (haha!) above. You can come up with algorithms to make “unlimited points” and you can find the points you want using a search algorithm…but it seems to me that once you start animating all this stuff you are going to find yourself coming back to polygons (or, at least clumps of points) to make a game that, as others have pointed out, isn’t essentially like Myst — i.e. you want something interactive.

      This seems like a longwinded and ’round-yer-ass-to-reach-yer-elbow’ way to reinvent polygons. Which I find fairly unexciting. There may be nifty applications to this point cloud business (embedding point clouds in traditionally animated polygon scenes? — don’t people do that already with particle systems?), but I call bullshit on this reinventing games as we know it…not to mention all the other complaints others have made concerning the professionalism of the video, the rigor with which the technology is described, etc.

    • Nesetalis says:

      Let’s say you have object rock.. and object rock is made of points, each referencing an object (material type), and object material_type has points in a crystalline shape that reference atom, and atom references particle… :p
      thus.. every rock could render every single atom and every single particle… but the only data you need is a single instance of each particle, each atom, each material used, and then one rock with its numerous different points…
      for a curious bit of fun.. you could make each particle reference object rock :P suddenly.. infinite detail….. albeit repetitious.

  49. nine says:

    “We have this revolutionary idea, but those mean old game companies are totally ignoring us!”

    This is so obviously a fraudulent or at least delusional product I’m surprised you posted it here.

  50. Berzee says:

    Regardless of the technology, those were some of the most entertaining videos I’ve ever watched. Bruce is a great man!