Euclideon Say “Unlimited Detail” Is Not Dead

By Jim Rossignol on August 2nd, 2011 at 10:49 am.

Woo!
You might remember that a year ago we posted about the Unlimited Detail graphics engine, which is apparently based on a system called “point cloud data”: a step beyond polygon rendering into a miraculous world of infinite geometry. The claims caused quite a stir, but then the technology promptly vanished. Claims that it was vapourware all along are now being addressed by Euclideon, the company behind the idea, who have released a new video in which they declare themselves to be back, Back, BACK!

The new video zooms in to show individually-rendered grains of dirt. The claims get more astonishing from there.


196 Comments »

  1. Lobotomist says:

    Hope it’s not a scam/vaporware.

    The benefits of this technology could be immense, both for players and designers.

    • Snargelfargen says:

      The likelihood that PC games will take significant advantage of this is about as low as that of getting true high-resolution textures in a major game release (nonexistent, since even the next generation of consoles won’t support it).

    • Dhatz says:

      What? We never managed to get effective and precise tools for making game content, damn it! That’s why games are so over-the-top expensive to develop.

  2. Dzamir says:

    How can a modern PC handle the vector graphics? And the animations?

    And... is there really a need to zoom in to the dirt IN AN FPS?????

    • Defiant Badger says:

      Why does it have to be an FPS? What if it were used in, say, R.U.S.E?

    • Ging says:

      You’d still not need to zoom into individual bits of dirt. Unless you’re playing SimAnt, in which case that scale fits.

    • Lord Byte says:

      It’s not about zooming in: if every grain is rendered, it can be realistically shaded. No need for bump mapping anymore, as every particle is properly lit and won’t give you that “it’s just a flat texture” impression.

    • The Colonel says:

      You do realise that was demonstrating the scalable power of the technology, NOT a game that they’ve made? Not impressed by an engine that can render dirt at the same detail as whole buildings? And at the same time?

    • Malibu Stacey says:

      It’s still a static environment. Sure they can look around the static environment but rendering a photo-realistic static environment has been possible since before the advent of 3D games as it’s simply a CPU intensive task (it just takes less time to do it now).

      As others say, making it interactive by adding things like physics effects to the environment & actors within it is where it’ll need to shine if it’s to compete with modern polygon-based engines.

    • Mayjori says:

      Good luck doing destructible environments with that tech…

      Not holding my breath till I see a game using the tech. Big difference…

    • SquidgyB says:

      @ Malibu Stacey

      Yes, it’s *just* a static environment – and what they are showing us is *just* a rendering engine. As said in the vid – they’re not artists, they’re not game designers, and to expand on that, they’re not animators or physics specialists. That’s for game designers to make and show off.

      All they are showing us (and all they intend to show us) is their rather nifty scalable graphics tech. Nothing more, nothing less.

      To put it a different way – it’s not a game engine… it’s a graphics engine, and they’re not selling it as such – it’s just another little piece of tech to license into a game engine.

    • Vagrant says:

      And you thought 3D modelling was hard before! Now you’ve got to render dirt! The real question is: can you develop faster/cheaper?

      Also, how much data would storing the positions of batrilzions of 3D atoms take up?

    • Shuck says:

      @ Vagrant: Yes, exactly. I actually laughed when they said they’d be making friends of artists. Those artists aren’t necessarily going to be super happy about all the detail they’d have to add to objects.

      Also, I can’t help but wonder if the fact that their environments seem to be made up of a couple of different objects instanced a bajillion times isn’t significant. Using multiple instances of poly models is a good way to save on memory… if they can’t scale up to a reasonable number of objects, the engine isn’t much use.

      My previous experience with non-game developers making game engines is that they have no idea what’s necessary for game development, and therefore end up struggling with technical issues that have already been solved by the industry and ignoring issues that are important to the development process.

    • TheGameSquid says:

      Indeed, this will NOT help game developers at all. More detail still requires more work, as the artist’s job remains essentially unchanged; they only need to bring in more detail. Unless I’m mistaken here. I’m not really an expert on computer graphics and modelling. However, personally I’m not interested in better graphics when it becomes even more difficult/cost-intensive to build the game.

      The real gain should be for the consumer, who should in theory require less hardware power for more detail. Whether this actually turns out that way remains to be seen.

      I’m confident that this is real (at least up to the point they’ve shown), but there’s still stuff such as animation that we know nothing about. I’m a little sceptical as to who will be able to actually use this.

      Nevertheless, it’s impressive technology, and it was actually a “what if” scenario I used to dream of as a kid, because I always thought using flat shapes instead of dots was weird, since flat shapes are inherently defined collections of dots/points.

    • zbeeblebrox says:

      “As said in the vid – they’re not artists”

      Yeah see, that’s what bothers me. I would give an artist some slack for having focused so heavily on only a very small sample of objects, knowing the common “IT MUST BE PERFECT” mentality. But these guys are NOT artists, they are engine developers (I would like to think) – and as such, they should know better about how to show off the highlights of their engine in an *accurate* though not necessarily aesthetic manner. They could have placed a bunch of large, very detailed, highly variable, but *ugly* models all over a large, very detailed, highly variable, but *ugly* environment and that would have been convincing.

      Instead they focus on how a tiny sample of highly polished models scale in their engine. Leave the polish to the artists and the scaling to…Katamari I guess. Game devs aren’t looking for those things, they’re looking for ways to input a lot of VARIABLE content very quickly, dynamically, and cheaply (cheap and expensive from a production standpoint, not a money standpoint…although those two things are linked obviously). There is only a small amount of questionable evidence of that ability in this demo. It looks pretty, but it also looks static and expensive. The “lifestyles of the rich and famous” voice doesn’t help either.

    • Josh W says:

      What if I said I could make a much more efficient form of space travel, and all I need is for someone to make infinitely strong wires, but that as I’m not a materials engineer you can’t expect me to do that?

      The problem is that you can make some things easier by offloading your problems into other people’s fields.

      They’ve got an incredibly good occlusion/level-of-detail system, but in order to produce that, they are almost definitely putting weird data structures in place, arranging the data so they can search it effectively. This will put restrictions on the animators, the model designers, and the poor guy in charge of collision detection, which could be pretty considerable, and they’re just not talking about them.

      That’s not to disrespect what they’ve done. Looking at the island they created, for example, it looks like they procedurally filled a larger model with smaller detail. If you add randomisation, that is a great way to add detail to models without memory expansion; basically, you would be able to zoom in on the carpet, and it would randomly pick how to fill in the fibres you were looking at. The carpet would look different every time you zoomed in, but it would mean you weren’t storing the fibres for every carpet across the world.

      Alternatively you could keep track of only those that had been generated, meaning that the world could start out defined only at quite a high level, and build out from you as you explore, similarly to how Minecraft works.
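
      (For the curious, a minimal pure-Python sketch of the coordinate-seeded variant: hash the patch’s world position to seed the generator, so the fibres come out identical on every visit without storing any of them. Every name here is invented for illustration.)

          import random

          def fibres_for_patch(px, py, count=50):
              # Same patch coordinates -> same seed -> same fibres every zoom.
              rng = random.Random(hash((px, py)))
              return [(px + rng.random(), py + rng.random(), rng.uniform(2.0, 5.0))
                      for _ in range(count)]  # (x, y, fibre height)

          # Regenerating the patch gives identical detail, so nothing is stored:
          assert fibres_for_patch(3, 7) == fibres_for_patch(3, 7)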

      This kind of approach means that different artists, with the right pipelines between them, can develop details on each other’s models, that will only appear if the person looks close enough.

      I’m pretty enthusiastic about what demoscene-background people could do with a system like that, applying the same skills they use to get full games into tiny files, and instead helping people to get massive procedural games in more normal sized packages.

      At the same time there are guys working on level of detail for simulations; define the changes in an area really abstractly, and only drill down to the small scale results needed when the player turns up. It’s likely they’d have to do the same sort of thing here, but applied to every change!

  3. Maxheadroom says:

    Spooky, I was just wondering the other day what happened to this

  4. arienette says:

    Give me back my polygons! I hate new things!

  5. pakoito says:

    Apparently they’ve been doing it since at least 2003. With the same demo LOL

    http://imgur.com/g0gXt

  6. MedinaRegal says:

    I think you browse Reddit too much, RPS. :S

  7. SuperJonty says:

    Looks interesting, but couldn’t they have got someone else for the voiceover?

  8. Tori says:

    If this could solve the problem in FPS games where, when you stand too close to a wall, you’re presented with a fugly texture – I’m all in!

  9. Gabe McGrath says:

    Well, it seems pretty amazing.

    But I’d be very interested in hearing from a clever person who understood this, and could explain what this technology is all about.

    • tenseiga says:

      So far it’s been a lot of hype (a LOT of hype) and no hands-on demo or proof of any sort. They haven’t given any technical info yet.

      But apparently it seems like it works the way Minecraft works. What you are looking at is a lot of closely packed voxels (or cubes) that are consolidated into a single one when the camera is far enough away. That’s why animations are hard to do: the models are converted into really small points, and moving them around with respect to a regular character’s bones means moving a lot of precalculated point cloud data. All guesswork though (toy sketch of that guess below).
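
      (A toy sketch of that consolidation guess, assuming the points sit in an octree with a precomputed average per cell; all names below are made up. A cell gets drawn as a single point once it covers less than about a pixel.)

          import math

          class Node:
              def __init__(self, center, size, rep, children=()):
                  self.center = center      # (x, y, z) centre of the cell
                  self.size = size          # edge length of the cell
                  self.rep = rep            # precomputed average point of the cell
                  self.children = children  # up to 8 sub-cells; empty for leaves

          def visible_points(node, eye, angle_per_pixel=0.001):
              # One representative per cell once it looks smaller than a pixel.
              dist = max(math.dist(node.center, eye), 1e-9)
              if not node.children or node.size / dist < angle_per_pixel:
                  return [node.rep]  # consolidated into a single "voxel"
              pts = []
              for child in node.children:
                  pts.extend(visible_points(child, eye, angle_per_pixel))
              return pts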

    • Nesetalis says:

      You won’t find anyone… simply because this is still vaporware. It’s been close to a decade now since they started, and they haven’t got a working demo of the engine? :p If they beat Duke Nukem Forever, I’ll giggle.

    • Cross says:

      Somebody pass this stuff to John Carmack, then we’ll see what happens!

    • Sheng-ji says:

      I’ve studied this video again and again. I have no real evidence of this, but it’s a suspicion I have due to some of the camera angles used in the demo and my previous experience (worked my way through the industry to become a lead character artist at a major games company, for 3 long miserable years, on a game you didn’t buy).

      Their engine converts atom based models into relatively low poly traditional models just before rendering. The really clever bit is that as you get closer it ups the polys in the objects still in view almost seamlessly.

      Those of you out there with any knowledge of 3d engines are probably furrowing brows, spitting out pipes, spilling cocoa and exclaiming “But any 3d engine worth its salt does exactly that”

      What this engine seems to do to retain the illusion of unlimited detail is convert that atom-based architecture into a sophisticated set of texture layers. Thus what you are seeing is a traditional poly-based model which can smoothly increase polys as it becomes more dominant on screen, with a procedurally generated texture, built from the atom-based model the artist created, specifically designed to hide sharp poly edges.

    • Malibu Stacey says:

      Sheng-Ji, you’re using a lot of words to say they’re using LoD.

    • Vorrin says:

      heh, and you’re furrowing brows, spitting out pipes, spilling cocoa :D

    • Sheng-ji says:

      Hehe, yes I am! That’s not really the clever bit, the clever bit is transforming from atoms to polys in real time and procedurally generating the textures and normals in real time!

    • Shivoa says:

      As Sheng-Ji said, it’s more texture generation from the data than traditional LoD. The traditional way is to use the nature of mipmaps to sample the right texture resolution for the appropriate mesh, and use detail textures (colour, spec, normal etc, into the modern realm of displacement and so on) to provide detail beyond the normal level for extreme close-ups (the usual selection rule is sketched just below).
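
      (That mipmap rule as a two-line sketch, names invented: pick the level where one texel covers roughly one pixel, i.e. log2 of the texel footprint.)

          import math

          def mip_level(texels_per_pixel, max_level):
              # 8 texels per pixel -> level 3.0; clamped to the range of the mip chain.
              return min(max(math.log2(max(texels_per_pixel, 1.0)), 0.0), max_level)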

      If this engine provides some very smart tools to take a 3D model and texture/surface system good enough for non-realtime rendering, and flatten that down to a data set from which it can easily pick out the appropriate data to render the scene, then those are worthwhile tools people will be interested to learn more about. At the moment there is a lot of hype and next to no real evidence of solid backing.

      Regarding the claims of the polygon equivalent count in the demo, it is worth considering a DX11 tessellation engine with the representation of a sphere and a ‘fractal/recursive’ texture (the detail textures are the main textures only much smaller in world scale). That demo, a practical thing you can throw together with a modern engine running on current hardware, is technically showing an infinite polygon representation object (with tessellation creating the appropriate concrete polygon count for the rendered section in view) with unlimited texture detail (however far you zoom in the surface will look much the same with recognisable detail and small undulation/reflectivity changes if you use a displacement and specular channel in your texture). For years we’ve been working with this kind of thing (think back to Shiny’s Messiah and Sacrifice for tessellation and manipulation of polygon unwraps to texture surfaces to cope with polygon counts that change every frame – although in that case they limited themselves to splitting triangles in half to make for easier processing, running on much less powerful hardware than even a toybox does today).

    • Vorrin says:

      Yeh, Messiah and Sacrifice are really good examples, such quirky, original engines they had, which really gave them a very different look from their peers (and in the case of Sacrifice at least, projected it ludicrously far ahead, bugs and all, but the graphics were seriously breathtaking).

    • othello says:

      Unfortunately, this is mostly just old technology that doesn’t really have relevance to games. They have an efficient data structure for storing the point cloud data; but that’s sorted and preprocessed. So no animation, no shaders, and you’re limited heavily by memory and disk space.

    • Nielk1 says:

      @Sheng-ji

      Your comment really helped. Here at work I can’t see the video, but you gave me a very clear understanding of how it works. This sounds like the basic idea of combining two systems: dynamic LODing by MRM, and atom-to-polygonal-mesh conversion.

      As you stated, rendering of lower-poly alternatives has existed for a long time, so really this just looks like MRM taken to the extreme. I think it is just a demo of point cloud + MRM, and as such not something to see as amazing, but it is indeed a good idea.

      Personally I would like to see a nice point cloud deterministic physics engine to complement this at some point.

      Do they use textures or do they give the atoms material properties and extrapolate the appearance from that? You would in the end need a texture but it would be generated at run time before being sent to the pipeline.

      All in all this looks cool but generally impractical. Time will tell.

  10. Monkey says:

    I want to believe….

  11. GallonOfAlan says:

    Give me a demo, which includes animation, and I will be convinced. Until then, snake oil.

    • PoulWrist says:

      Well you can do combinations where you have traditional characters in the “unlimited geometry” world.

      My biggest issue would be destruction. If it looks photorealistic, then it’s all the more problematic to have it sit completely untouched by the war raging in it.

    • Cross says:

      I imagine destruction would be easier to do with particles than polygons. Physics, as well.

    • Nevard says:

      Easier perhaps, but no less CPU intensive.
      Having billions upon billions of static, motionless particles on screen could, I suppose, conceivably not be a problem; having to check every one of them every couple of seconds to make sure it’s still where it should be would eat your computer.

    • molten_tofu says:

      @Nevard

      I have no idea what I’m talking about, but it sounds like you could be a lot more efficient than scanning every atom in the environment for required state changes, and instead move to an event model where atoms indicate they require a state change. It would require building some event detection into individual atoms… or something? (Rough sketch below.)
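
      (A very hand-wavy Python sketch of that event idea, every name invented: keep a queue of regions something has actually disturbed, and only touch the atoms inside those, instead of polling billions of static ones.)

          from collections import deque

          awake = deque()  # regions with pending state changes

          def disturb(region):
              # Called by whatever causes change: an explosion, a footstep...
              awake.append(region)

          def physics_tick(update_atoms_in):
              # update_atoms_in(region) moves the atoms there and returns any
              # neighbouring regions it knocked into, which wake up in turn.
              while awake:
                  for neighbour in update_atoms_in(awake.popleft()):
                      awake.append(neighbour)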

  12. CaspianRoach says:

    Hehe, I’d like to see the artist in question find and tranquilize Mario in the woods.

  13. Nathan says:

    It will be interesting to see how this engine can work with dynamic environments. The graphical fidelity of modern games isn’t derived purely from their polygon count.

  14. Traspler says:

    Sadly it does not show any animations or physics… I’m still very sceptical about this.

    • CMaster says:

      Physics would be really, really difficult with a system like this, unless you can package up a lot of “atoms” into single physics objects the same size as current ones. Otherwise, you get into the realm of DEM, where on a typical desktop machine a few milliseconds’ worth of simulation takes weeks to complete for 100,000 elements or so.

    • Chris D says:

      “Physics would be really, really difficult with a system like this, unless you can package up a lot of “atoms” into single physics objects the same size as current ones.”

      I’m not an expert but that doesn’t sound like it should be all that difficult. Isn’t this what we do with polygons anyway?

    • Stuart Walton says:

      Animation and solid-body physics should be pretty straightforward. Boning a model and having the points follow it isn’t far off how it works with polygons, and physics can all be handled by invisible physics meshes, as is currently the norm. The real problem is making it look good.

      You can warp your point cloud as a skeleton moves but that doesn’t mean it looks convincing. Different materials stretch and fold differently. Consider the skin covering an elbow, now consider a sleeve covering an elbow. That sort of soft physics is tough in polygons too, but easier to fudge. For a point cloud solution, it’s really down to what resolution of data points you want to model (and have the rest follow suit) and how much CPU you can devote to that task.

    • Nesetalis says:

      The reason bending and ‘breaking’ polygon models works is that there are only a few points of data.
      With cloud-based rendering, you have billions of data points… and EVERY SINGLE ONE OF THEM now needs to be located at a new position, with a new rotation, and different ones ‘visible’. Then, depending on frame rate, you need to do this 20 to 60 times a second… not going to happen.

    • Stormbane says:

      Animations and physics calculations would work the same way as they do now (that is, if this technology actually exists). Think of it like this: when you see a character wave, you are seeing each pixel that makes up the arm wave, but there are methods of simplifying the calculations so as not to individually work out the position of each and every pixel.

      What it possibly can’t do, however, is lifelike movement.
      With polygon models there is no way of making the skin move differently from the muscle, as it is potentially just one big polygon (think of it as a solid piece of cardboard). With this voxel (I assume that’s what it is) model, however, it is possible to do this, as the skin and muscles translate to individual ‘atoms’. The calculations involved would be enormous, however.

      Then again, if you ask me, what they claim it does sounds like it takes ENORMOUS calculations already.

    • Chris D says:

      More questions for clever people.

      Is animation a significantly more complicated problem than zooming around a static environment? You’re still having to move things in both cases. Are we working under the assumption that you could do static environments but animation would be impractical or are we just assuming the whole video is faked?

      While theoretically you can do all kinds of clever things with skin using this system, we don’t necessarily have to get there all in one go. Presumably you could use this system for the environment and do moving parts with polygons if you wanted to, or am I missing something?

      Presumably at some point the barrier to improved graphics won’t be technical limitations but the expense of having an artist design something in that detail. How far from that point are we at the moment?

    • Petethegoat says:

      @Stuart Walton
      “Boning a model”
      hehehehe

    • IDtenT says:

      I don’t understand why the idea that all particles need to be simulated individually is a requirement for the physics to work. Anyone that’s ever done any statistical physics course knows that’s just not how we model physics – even within highly complex research models. Games? Pfft.

      It’s completely unnecessary and theoretically impossible to do an individual particle physics simulation in a real time environment; unless you build a machine bigger than the environment simulated. (Yay for information theory!)

    • UTL says:

      Animation is more complicated than zooming around, since the latter only needs a good system to adapt the level of detail to work smoothly, while for animations to look good you need a very specific and advanced way of moving the polygons or atoms or whatever around in relation to each other without destroying or distorting the original form. However, this is not significantly harder with infinite level of detail; the maths used for animating any detailed model is in theory infinitely scalable.
      I will try to explain it in simple terms: you describe the position of each point/atom/vertex in relation to only a few points that make up the skeleton, or the bounding box, depending on the method used. Any time you need an absolute position of a point in that model, you simply add the relative position of that point to the absolute position of the skeleton point. If the skeleton point moves during an animation, the relative positions of all those points do not change, but the absolute positions do. There is of course more to it than that, but this is the basic idea (a toy version is sketched below).
      Actual physics will be harder to implement on infinite detail. You can have an infinitely detailed box and calculate physics based on its bounding box (which is also how most physics engines work on normal models, AFAIK), but you won’t see each grain of dirt move individually. I’m definitely no expert here, but what I can imagine is that destruction physics could be vastly improved with this technology, since it should be no problem to break, for example, the elephant statue’s arm or trunk at any point. Usually you would have to add new vertices and polygons to model the break surface, but with infinite detail that would not be necessary.
      I think this engine is real and will be usable for games in the near future, but to use it for everything will be too complicated, so a lot of stuff will be calculated like normal polygons in the background.
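
      (The toy version promised above, in Python with invented names. Every point stores an offset from one skeleton point; to animate, you move the skeleton point and re-add the offsets. Real skinning also rotates, and blends several bones per point, which this deliberately skips.)

          def bind(points, bone):
              # Record each point's position relative to its bone.
              return [tuple(p[i] - bone[i] for i in range(3)) for p in points]

          def pose(offsets, bone):
              # Absolute position = moved bone position + stored offset.
              return [tuple(o[i] + bone[i] for i in range(3)) for o in offsets]

          arm = [(1.0, 2.0, 0.0), (1.2, 2.1, 0.0)]     # two "atoms" of an arm
          offsets = bind(arm, bone=(1.0, 2.0, 0.0))
          waved = pose(offsets, bone=(1.0, 2.5, 0.0))  # bone moves up, arm follows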

    • kyrieee says:

      If they could do animation then they would be showing it. My guess is that they’re using some kind of search algorithm for their rendering so they only need to access very little data per pixel rendered, but that typically requires highly pre-baked and static data

    • aircool says:

      The real world has the best graphics and physics engine I know of. But I don’t see a single CPU/GPU anywhere. Is the Universe just vapourware?

    • IDtenT says:

      The unfortunate thing is that the only thing that will ever be able to simulate the universe would be the universe itself. Man, you got to hate theoretical limits. Damn you Information Theory! D:<

  15. rkb53 says:

    Wow, I hope they allow people to try out that island demo when they finish this.

  16. PatrickSwayze says:

    Not sure if serious…

  17. Xercies says:

    I was really surprised that gaming uses so few polygons when I saw a making-of feature; it’s all in the textures, to be honest. I use more polygons than that! So something like this could definitely be useful.

    • Jams O'Donnell says:

      Yeah, it’s all in the bump/normal/relief/whatever mapping for gaming. It’s pretty sweet what you can do for a low-poly model with something like Zbrush or Mudbox.

    • coledognz says:

      The reason modern games rely heavily on bump maps, normal maps, displacement maps and shaders is because they essentially bridge the gap from a low poly mesh to a high poly mesh while requiring a lot less processing power.
      I want to believe this is real but we’ll have to wait and see. If it is, and is able to run scenes like the one shown comfortably on current generation hardware, then it will be good for the industry.

  18. ArcaneSaint says:

    If this is real… And gets combined with a destructible environment… Then I’d probably need to get a new pc :D

  19. The Sentinel says:

    Polygons are good but very restrictive. I miss lots of the old software tricks clever programmers used before the polygon became the foundation of the industry. IF this tech is kosher – and that’s a very big IF – then it is enormously exciting. It would be amazing to see fully realised worlds and much more realistic environments, instead of hitting those same old illusory walls that your brain has to try to ignore.

    I’d love some idea of the hardware needed to run this little demo, though. And what of GFX cards, built to handle huge amounts of polygons? Can they be used at all for this kind of processing, or are we back to relying on the main CPU?

    • SpinalJack says:

      Voxel graphics are all software-based; there’s not one graphics card in the world that renders voxels. If the industry weren’t so focused on polygons, that might have been different.

    • Malibu Stacey says:

      In the modern era of multi-core CPUs, voxel rendering makes a lot of sense. GPUs were invented 15 or so years ago to take the load of visual rendering off the single-core CPU so it could be used for other stuff like AI, physics etc. This effectively made every system a dual-processor machine, albeit with a very specialized second processor (CPU plus GPU).

      Now dual-core CPUs are effectively system standard, while quad or higher are becoming mainstream rather than just ‘enthusiast’, so it makes sense to look at CPU-intensive rendering like voxels, since you can dedicate a single CPU core to rendering while your other core(s) deal with almost everything else.

      Ironically, in this system GPUs would still be the best choice for running physics code, due to the massive parallelisation they incorporate (possibly AI too, but I don’t know enough about that to be sure & it’s still evolving).

  20. Stuart Walton says:

    It’s simply the evolution of the voxel. If they really want to show it off they should stream in real-time data from a Kinect. If they can’t do that then they are way too far from being able to release anything usable by the games industry.

  21. DukeOFprunes says:

    Hooray, that does not look too good to be true at all!

  22. grasskit says:

    As many have said, the problem of animating all this stuff has not been addressed yet, not to mention dynamic lighting and all the good stuff that makes a world more realistic. Also, their claim of “we’re just tech guys, not artists” is silly; surely there are enough 3D artists who would want to alpha-test this “amazing tech” and make something more interesting than a bunch of rocks. So yeah, vaporware.

  23. Derppy says:

    Very skeptical. I haven’t programmed, or even tried to program, a 3D engine in my life, but I’d still imagine this has a ton of problems with animation and collision detection. Wouldn’t they at least need simple polygon-based collision maps for the objects? Handling collisions with such a huge amount of geometry would take ages on any consumer CPU, or at least I think so.

    When I can run the demo with proper physics and animation, I can believe this is a breakthrough. However, until then I believe this only fits well for presenting static stuff.

    • Kollega says:

      Well, yes, they would need simpler collision maps, but I’m pretty sure that every engine out there uses different shapes for collision maps and visible graphics anyway.

  24. Strech says:

    I wonder how they could add dynamic shadows and animations with such a huge amount of data to move…
    In fact:
    http://www.youtube.com/watch?v=cF8A4bsfKH8
    That’s a 3D sprite, and it’s not real-time rendered.
    I’ll start believing this thing when I see some dynamic objects in their demos…

  25. Fazer says:

    Am I the only one who thought of this?
    UNLIMITED POWER (warning – NSFW language)

  26. Kollega says:

    I don’t know what to make of this. On one hand, the technology itself seems very much plausible. On the other… the idea of voxel graphics has been hanging around for quite some time, but voxels haven’t yet become usable, let alone dominant, which naturally causes skepticism when someone says they have a solution that will make voxel graphics usable in mainstream game development. So I can only quote the G-Man: “We’ll see… about that.”

  27. Kdansky says:

    As someone who actually works with 3D graphics for a living, let me point out the issues:

    - Modelling for point-cloud systems is hard. The fastest way is to either convert polygon models (which means you lose any advantage you had; a toy version of that conversion is sketched after this list), or use mathematical shapes such as spheres (which polygons can fake to look completely round anyway), or, third, use a 3D scanner. There is one two meters from where I’m sitting. It’s barely big enough to put a coffee mug inside, though, and where will you get decent real models for everything that isn’t a garden rock? Building prop houses and scanning them isn’t going to make graphics more affordable. I do concede that better procedural generation might help, but (nearly) nobody bothers with it right now, which is such a shame.

    - Animations. Polygons are very easy to animate. Point clouds are not. Sure, you can do it, but it’s complex and inconvenient. Why do you think the UD guys never show animations? Because they can’t do it well enough.

    - Physics. Yes, you can run Finite Element simulations on point clouds. But not in real time, it’s still just too much number crunching which must happen too fast. Polygons are a ton easier to do, because you can simplify the models and just collide base shapes instead.

    That’s not going to work well for games.
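
    (The conversion route from the first point above, as a minimal Python sketch with invented names: scatter uniform random points over each triangle of an ordinary mesh via barycentric sampling, yielding a point cloud from existing polygon assets.)

        import random

        def sample_triangle(a, b, c, n):
            # Uniformly scatter n points over the triangle (a, b, c).
            pts = []
            for _ in range(n):
                u, v = random.random(), random.random()
                if u + v > 1.0:  # reflect back inside the triangle
                    u, v = 1.0 - u, 1.0 - v
                pts.append(tuple(a[i] + u * (b[i] - a[i]) + v * (c[i] - a[i])
                                 for i in range(3)))
            return pts

        # One triangle's worth of "atoms"; repeat over every triangle in the mesh.
        cloud = sample_triangle((0, 0, 0), (1, 0, 0), (0, 1, 0), n=1000)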

    • Ovno says:

      You don’t lose any advantage at all: to export for a game engine you would normally have to model low-poly, or convert down to it with bump maps and baked textures faking the detail back in. With this you would just use your original high-poly model, surely giving you a massive advantage…

    • identiti_crisis says:

      You’re missing the point. It’s not about how you do your job, but about those who follow you into the industry. You may be too old to learn new tricks, but there are whole generations below you who will learn this stuff first.

      Animation is the same as a mega-high polygon character, at least as far as the designer is concerned. I mean, how hard is it to select by volume? Then again, the animation can be applied to the mesh first, then exported to the point cloud after conversion. These would all be “middleware” tools.

      Only a fool would try to run physics on every single particle, much as you would be a fool to do it per-poly in any current game. Just because you can’t think of a solution doesn’t mean there isn’t one.

      Also: Z-Brush (et al.). I am aware that it renders and exports to polygons, but that’s only because of current convention and hardware.

    • Soon says:

      Your 3D scanner is rubbish. :P
      We scan entire industrial sites with ours, which will range from a few hours’ to a few days’ work. However, we have to convert all the point cloud data into simpler polygon models, because there’s no useful way to manipulate it beyond deleting points. I’m surprised studios aren’t utilising it when they’re modelling real-world environments. No need for surveys or photographs. (Hmm. *Gets onto marketing*)

      Anyway, we’re crying out for something like this. They’re in the wrong industry.

    • Kdansky says:

      I love it when people who have no clue whatsoever on the subject flame those who do. Go get a PhD in Math or Physics, and we’ll talk about how hard it is to “learn new tricks” and solve those “easy” problems like volumetric selection or voxel animations.

  28. Muzman says:

    Boy that voice over doesn’t make it sound like bullshit at all.

  29. jatan says:

    Voxels we’ve all seen before (Outcast, Comanche etc). Isn’t the real issue that the demo is a bit crap/badly presented? Bad voice, slightly smug condescending tone, and the example graphics a bit poor. And the progress for a year is not so great… quite a few others are attacking this (type ‘voxels’ into YouTube etc).

    • Giant, fussy whingebag says:

      Well, the progress isn’t great for a whole year because it took almost that long to render the new video…

  30. CMaster says:

    While I join in with the scepticism over this whole thing, couldn’t you animate in just the same way that you animate polygons – attach each vertex to a bone, and create a skeleton underneath all of it?

    • The Sombrero Kid says:

      You could, but you’d be animating a 3-million-bone skeleton; that’s a lot of translations. It doesn’t have dynamic lighting either, & the most lol part is that all reflections are done by duplicating the data.

    • CMaster says:

      Why on earth would you create a 3-million-bone skeleton?
      Surely you’d just create a normal, sensibly boned one and attach all the points in the cloud to the relevant bone, just like you currently do with polygon vertices. You no more need an individual bone for each “atom” than you do for each polygon in existing tech.

    • The Sombrero Kid says:

      Either way, you still need to do the 3 million transforms on the point cloud, because of the way the renderer works.

  31. The Sombrero Kid says:

    Bear in mind none of this stuff is new & there’s plenty of more impressive research in this field; the one thing this guy has over the others is hyperbole and NON-FICTION!

    • Mike says:

      Well, and he’s not in academia and therefore can sell his ideas to industry more easily/isn’t constrained by the need to publish.

  32. I_have_no_nose_but_I_must_sneeze says:

    Forget about all those boring rocks, trees and cactuses. I want to see the swan-winged tigers in motion. Swagers.

  33. MrSing says:

    This + Valve = HL3

  34. Faceless says:

    I sincerely doubt this isn’t bollocks, but on the other hand it made me wonder whether that isn’t the direction we’ll ultimately move in anyway, or whether we’ll just continue multiplying the polygon count until our eyes don’t register the ‘corners’, as is the case with circular shapes in real life.

  35. Schadenfreude says:

    I always liked the idea of voxels (think of the potential for destructible scenery!) but they seem like they’d be an absolute nightmare to use.

    Is it possible to mix voxels and polygons? E.g. build your big static world in voxels but rely on polygons for the stuff that’s currently impossible to do well with voxels. Or if you tried that, would your render engine emit a high-pitched scream that made your computer bleed?

    Wasn’t Westwood’s Blade Runner voxel-based? The sprites always looked great until you got in that lift and they’d start to pixelate (voxelate?).

    • Stuart Walton says:

      Outcast only used Voxels for the terrain, water and particle effects. The rest was all polygons.

      I see striking similarities between the terrain in Outcast and From Dust; I almost suspected the latter used a voxel-type system.

  36. Teddy Leach says:

    The next step is to obviously scan yourself into Mass Effect. Yes, you too can truly be Shepard!

  37. Stormbane says:

    I guess it took them a year to render that demo?
    I’m calling bullshit. As a Kiwi I can attest that all Australians are lying scumbag criminals.

    • The Sentinel says:

      Whereas Kiwis are all racist, murdering haka-dancing savages? See – I can throw ridiculous stereotypes around, too.

    • Stormbane says:

      you forgot sheep molesting.

    • Voxel_Music_Man says:

      As I am also a New Zealander I will apologise on this guy’s behalf.

      No, I’m pretty sure he’s joking.

      You’re joking, right?

      Also, I have to protest against that sheep-molesting comment. I assure you that our sexual relations with sheep (and cattle) are completely consensual!

    • Stormbane says:

      Yes I was joking. Was I not ridiculous enough or is this the internet?

  38. Davee says:

    I would like to think this is realistic for game development, but with the lack of technical info, the cheap wannabe-epic background music and the over-ambitious presenter as well as the lack of animation or physics… No.

  39. Mike says:

    This guy hasn’t gotten any more likeable, has he?
    “We declined all interviews and then disappeared. Many people said that the technology wasn’t real to begin with.”
    Well, I wonder why, you massive jerk.

    EDIT – Looks lovely, though.

  40. AndrewC says:

    The technology has been used in Medicine, and the Sciences. Medicine, people, and the Sciences.

  41. Ovno says:

    Personally, from a software point of view, I can definitely see the potential in this.

    As far as producing the models goes, you’d just make extremely high-poly models, and rather than then having to produce bump maps and baked textures, you’d just export to this as-is.

    For physics you would have to have actual low-poly models, but that’s not a problem if you’re converting from extremely high-poly models to point cloud anyway, as you could just convert from high poly to low, which is what we do currently.

    Animation-wise I’m not so sure, but I could see it working much like polys there as well: each point has a weighting from each bone, and they are recalculated as you go. Of course, for the silly densities he’s talking about it would be a computing nightmare, but there is no need for 10,000 points per cubic millimetre (or whatever he said); you could reduce that to 100 or 10 with no loss of visual fidelity, especially with the use of interferometry techniques to fill in the gaps if necessary.

    All in all, I’d say it’s still a while off for any practical purpose, but this is a hell of a lot better than plain voxels will ever be…

  42. Kaira- says:

    I’m going to go with Carmack on this – no dice now, maybe ten years from now.

    • grasskit says:

      Bah, Carmack. Who made him the authority on 3D graphics? Oh wait…

    • The Sentinel says:

      Now see, that one comment lends weight to the tech. If one of the most celebrated games-tech designers in the industry DOESN’T appear to automatically think it’s bullshit, then there may be something in this.

    • Ovno says:

      Glad to see some people don’t automatically jump on the “This is bollocks” bandwagon…

    • grasskit says:

      I thought most people are just saying the tech has problems and can’t really be useful in its current state, which is what Carmack is saying, no? And yeah, obviously, years from now things might change.

  43. Danorz says:

    Hmmm. I can’t see any reason this wouldn’t work, but then if it was a vaporware scam, it wouldn’t be a very good one if it wasn’t vaguely believable, would it? Hmmmm.

  44. Lars Westergren says:

    I want better audio in games, and better writing, not better graphics. That’s a lot cheaper to do, too.

    • Rinox says:

      Unfortunately, none of those actually seem to boost sales, and thus they’re of no interest to mainstream gaming. Now, if you’d said ‘more tantalizing boob-jiggle animations’ or ‘more ridiculous Tom Clancy wannabe plots’, I would have thrown my (alas) virtual cash on your virtual stock.

      (but yes, it is quite sad, and I agree wholeheartedly)

    • manveruppd says:

      Hear hear!

    • Donjonson says:

      Exactly. MOAR STORY!

    • MajorManiac says:

      I wish to have better everything in games.

    • gwathdring says:

      Don’t forget better AI. And better visual design. And better procedural generation. And better dynamic-event scripting (if you have enough variety and complexity in the scripting of events and interactions … context sensitive control suddenly doesn’t make you feel trapped and bored anymore). And a more relevant comparison to graphics: more innovative visual design. Really, almost anything else can improve gameplay to a greater extent, even if it doesn’t market as well.

      When we get truly responsive and complete AI in games, both for NPCs and enemies… imagine an enemy AI that can compete against you in a strategy game without cheating. Imagine a companion that doesn’t get stuck on walls and takes flexible voice commands. That’s so much more exciting than “we could make it more pretty.” I have loads of pretty games. I want to use my gaming machine for something that affects my immersion in and enjoyment of games more than the visuals.

  45. Diziet Sma says:

    Why give coverage to these shysters, I wonder… though it makes for some interesting comments. Also, does nobody remember Elixir/Demis Hassabis’s claims for the Totality engine in Republic: The Revolution?

    “Republic: The Revolution uses the most advanced graphics engine ever seen. Designed specifically by Elixir the “Totality” engine is capable of rendering scenes with an unlimited number of polygons in real-time. Zoom smoothly from a satellite-like image of Novistrana (2000km square) to focus on minute details anywhere in the world. In addition, the game will feature special effects that completely surpass the state-of-the-art currently in development including unlimited real time light sources, self-shadowing, and physically based material models. All the above allows for the actions to be realised stunningly with a cinematic quality and feel, the hand-scripted camera moves vividly capturing the key moments of every action.”

    We all know how that turned out.

    • Ovno says:

      I believe the usual reason to give time to people like this is “innocent until proven guilty”, and also because the wisdom of the internet on these matters is not well known for having any basis in fact…

      That, and it’s all fairly plausible, if very, very ambitious.

    • Batolemaeus says:

      That’s not how “innocent until proven guilty” works.
      When you accuse someone of murder, the guy is innocent until you can back it up with proof.
      When someone boasts he has “unlimited detail”, he won’t have it until he backs it up with proof.

      The whole idea behind this train of thought is that you have to back up whatever you say with some good, solid proof.

  46. Big Murray says:

    Why’d they get Tiff Needell’s annoying brother to do the voiceover?

  47. DeanLearner says:

    More importantly, will it support mirrors and will I be able to flush toilets?

  48. Jannakar says:

    It was as if a million art directors cried out and were suddenly silenced

  49. kulik says:

    Well, I’m skeptical about this. …But so was I before the great leap from 2D to 3D games took place.

  50. Atic Atac says:

    Voiceover guy is fail.

    That is all.
