Carmack on 3D: May as well be speaking alien

So, you're saying graphicsability will be improveitudes

While I understand about one word in forty, I do like hearing Carmack speak about what he’s been up to. His recent interview with PC Perspective is a goldmine of Carmack-thinking. For example, this is what Carmack’s been up to recently…

“It involves ray tracing into a sparse voxel octree which is essentially a geometric evolution of the mega-texture technologies that we’re doing today for uniquely texturing entire worlds. It’s clear that what we want to do in the following generation is have unique geometry down to the equivalent of the texel across everything. There are different approaches that you could wind up trying to get that done that would involve tessellation and different levels of triangle meshes and you could conceivably make something like that work but rasterization architecture does really start falling apart when your typical triangle size is less than one pixel.”

I empathise. When triangle size gets less than one pixel around here, it’s complete chaos. Jim starts screaming, Walker just flakes out completely and it’s only Alec who keeps his head together, reassuring us that triangle size will be back at more than a pixel shortly. However, I have managed to decode some stuff.

Well, not me. PC Perspective rounds things up in its conclusion. The most interesting stuff involves the ray-tracing, which I’ve been vaguely aware of. There have been various people – and this is Kieron-o-vision – claiming that traditional polygon-rendering 3D card stuff is going to disappear, in favour of Ray Tracing. You may remember ray-tracing from the nineties, when we used it on our Amigas to make pictures of spheres appear after eight hours of ‘puter-think. Now, apparently, processors are fast enough that they can use that method to start doing in-game stuff. The major proponent is Intel, and the theory is explained a bit here. From our perspective, the biggest change would be that 3D cards would basically become obsolete, as they’re based around throwing around textures. Or, at least, that’s what I can make out. This is all Martian, remember. [We’ll hopefully be exploring ray-tracing a little more in an extended post next week – Ed-who-may-or-may-not-happen-to-be-Alec]

Now, Carmack basically doesn’t think this will happen. He can see specific uses for the technology – which is the tree-tracing stuff he’s going on about – but doesn’t think it’ll win out. He also agrees with PC Perspective that Intel needs to actually show this is possible – the theory is all well and good, but they need to show, not tell – which is why they’ve been buying a load of middleware stuff.

So that’ll be worth watching.

My next post will be an analysis of Ada Lovelace’s notes on Menabrea’s sketch of Babbage’s Analytical Engine.


  1. weegosan says:

    subpixel triangles? It’s good to see Carmack isn’t limited by things such as… well the limits of all existing monitors.

  2. Stylez says:

    The question remains…

    Will this technology be used to power the next Commander Keen?

  3. James T says:

    Look, he’s talking funny-talk!

    Here’s a question: I haven’t played any games employing voxels, as I understand the term (and I don’t think there have been many — that Blade Runner game back in the day was all voxely, right?), so tell me — wouldn’t using ‘3 dimensional pixels’ make everything look like it was made of little blocks? Am I missing something there? It sounds dreadful!

  4. Gregory Weir says:

    James T: Everything is made of little blocks in reality: tiny, tiny little blocks. Atoms. We just can’t see the granularity. Essentially, if you make the voxels small enough, then the graphics don’t suffer. And that’s what Carmack seems to be talking about (I haven’t read the actual interview). If your voxels are smaller than a texel, that is, than a “pixel” in a texture, then they don’t show up as blocky, because your 3D card/monitor blends them together as part of its rasterization.

  5. panga says:

    @ James T

    All the Comanche games used voxels – usually they’re not rendered as squares, but circles. It can be a bit blotchy in some implementations, but also very accurate – medical visualisation systems use voxels for high-density imaging.

    To some degree voxel-like tech is implemented in many modern 3D engines – used for point sampling for global illumination (this is used in the Crytek engines, I think).

    And, in my experience, Carmack’s correct – the use of raytracing is really outdated, as it’s not a fully complete model – it doesn’t implement colour bleed, for example. Far better, more efficient global illumination techs exist, including photon mapping if you want all the bells and whistles.

    Technical whitepapers on request :P

  6. Nick says:

    Summary translation:

    Carmack weighs in on the rasterisation vs raytracing debate. Rasterisation is how game graphics work now – translate geometric shape information to pixel info. It’s more efficient than raytracing, but raytracing lets you do fancier, more realistic effects – CGI movies use raytracing, takes hours to process a single frame etc etc.

    Intel say raytracing is the future. They would, though, because raytracing is an ‘embarrassingly parallel’ problem that looks better and better as you look forward to multi-core CPUs. Hence, Intel stand to gain if the games industry moves towards raytracing. We’re nowhere near that yet, and some say we never will be, for various reasons.

    Carmack says that classical raytracing will not offer enough benefit, but that if we move away from raytracing against classical, triangular geometry, and towards custom-designed data structures more fit for those sorts of calculations, we may get a lot of benefit.

    Sort of.
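    [A toy illustration of the rasterisation-vs-raytracing point above: the core of a raytracer is just firing a ray per pixel and intersecting it with geometry. A minimal Python sketch – nothing here is from Intel’s or id’s actual code:]

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along the ray to the first sphere hit, or None on a miss.

    Solves |origin + t*direction - center|^2 = radius^2 for t,
    i.e. a quadratic a*t^2 + b*t + c = 0.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# One ray, fired straight down the z axis at a unit sphere 5 units away:
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

    (A real renderer then shades the hit point and fires secondary rays for reflections and shadows – which is where the ray counts explode.)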

  7. Alexander says:

    James T: That depends on how big the voxels are, of course; they could be one pixel each, in which case you wouldn’t notice.

    Triangulation in games was probably the first step towards video cards, as these were specifically tailored for the job. However, there are of course many different ways of rendering a game. To make up for the fact that triangular modelling is inevitably a little underpowered for more and more organic shapes, and the fact that it really makes deformation a bitch, we have invented things such as bump/normal mapping, and tons of different ways to cope with the problems that arise. Now voxels could solve that problem in my opinion, because they work completely differently from triangles: they are three-dimensional spatial pixels (volumetric pixels to be pedantic). Here my knowledge ends.

    What we do now (as opposed to scratch scratch, which was ray-casting) is utilise different techniques for the engine to determine what to render. Valve’s Source engine for example uses VIS, which is a pre-ray-cast databank for the client to determine what to show and what not. So all the ray-casting stuff that would be done in real time is done in advance to ease the load on your computer.

    Ray tracing as proposed by Intel would be real time and do the casting work too! It’s feasibly possible at a 640x480 resolution nowadays, because you need only cast about 640x480 rays and a few extra trips for lighting/reflections. However, this grows quickly when you come to resolutions of 1280x1024, or maybe even 2560x1024 when using dual monitors. Your computer will hate you unless you surprise it with superquadteraflopcores on Valentine’s Day.

    Voxels are the shit because they would actually allow you to (for example) blow bits out of things; the voxels should be thought of as individual grains of salt, and depending on the computer’s processing capabilities even smaller. Sparse voxel octree however I cannot decipher. Still.

  8. panga says:

    @ Alex

    Wolfenstein 3D was actually ray-casting, not ray tracing… to be pedantic :P

    ‘Sparse voxel octree’, as he’s using it, as far as I can see refers to an optimised octree structure for holding 3D points. Octrees are mental – each node branches into eight more.

  9. Lunaran says:

    Someone may correct me but this is my understanding of what a voxel octree is:

    Picture a cube. Now picture each face split by a + shape, like one of those beginner’s Rubik’s Cubes. Each face is four smaller squares. Now you have three squares around each corner, so if you picture the interior faces as well then you really have eight cubes. If you turn one of those off, you have a cube with an inset corner. Now, picture each one of those cubes split the same way, and you can turn all those on and off. In this way, it’s represented as a tree with eight branches, and each branch has eight more. An octo-tree. Sauerbraten works roughly this way, using an octree to define game space and geometry rather than a binary tree like Quake.

    Now, you can subdivide this cube all the way down until the ‘leaf’ cubes (the ones you don’t divide any further) are smaller than the pixels on the textures in the world. Then all you need is to assign each one a color, and thus, a volumetric pixel. Yeah, everything would be little blocks, but as Greg said, if they’re small enough you won’t see jagged edges anyway, at least not jagged edges that are any worse than running with no AA.

    I have no idea what makes it ‘sparse’ though. Voids aren’t represented in the data for compression’s sake?
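    [Lunaran’s cube-splitting description maps onto code quite directly. A rough Python sketch of the eight-branch tree – the class and names are invented purely for illustration:]

```python
class OctreeNode:
    """One cube; subdividing it yields eight child cubes, one per corner."""

    def __init__(self, depth):
        self.depth = depth
        self.children = None  # None = an undivided 'leaf' cube
        self.color = None     # leaves eventually carry a colour -> a voxel

    def subdivide(self):
        """Split this cube into 2x2x2 smaller cubes."""
        self.children = [OctreeNode(self.depth + 1) for _ in range(8)]
        return self.children

root = OctreeNode(depth=0)
corners = root.subdivide()   # the eight corner cubes
corners[0] = None            # 'turn off' one corner: a cube with an inset corner
print(sum(c is not None for c in corners))  # 7
```

    (Recursing on each child until the leaves are texel-sized gives the ‘unique geometry down to the equivalent of the texel’ Carmack describes.)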

  10. Alexander says:

    Panga thou art right, made some soup there. scratch scratch

    I ficksered it!!

  11. Naseer says:

    I’ll just scream out “OUTCAST!” and get myself worked down.

    That was an entire game made of voxels, looked great on a good rig.

    Also, Carmack is King.

  12. axel says:

    One problem Carmack doesn’t seem to consider is that rasterisation runs on hardware designed for exactly these kinds of calculations, while raytracing runs on general-purpose processors. Specialised processors would outperform CPU-based raytracing by several orders of magnitude. His argument that Intel is just throwing brute force at the problem is only partly true.

    His claim that raytracing on handhelds is a bad idea is disputable. Read this: link to

    Using sparse voxel octrees for geometric data would have several benefits. A voxel is just a cube. An octree is a cube divided into eight smaller cubes (and so on). The benefit is that the nearer you move to an object, the deeper the rays descend into the octree, and the more detailed the object becomes. That would, for example, solve the problem of rendering very far-away objects by eliminating the need for reduced meshes. Perhaps you could even give every voxel and subvoxel its own colour value. That would eliminate the need for a separate texture. The problem I see is the really high memory demand. You could either throw more memory at it or you could use procedural generation like .kkrieger did.
    Sparse voxel octrees would also help raytracing on handhelds, as they scale very well.
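    [axel’s near-objects-get-more-detail point can be sketched as a level-of-detail rule: stop descending the octree once a voxel would project to less than a pixel. A hypothetical Python illustration – the per-pixel angle is made up for the example:]

```python
import math

def lod_depth(distance, voxel_size, pixel_angle=0.001, max_depth=12):
    """Octree levels to descend before a voxel projects to under one pixel.

    A cube of side s at distance d subtends roughly s/d radians; each level
    halves the cube, so we need about log2(s / (d * pixel_angle)) levels.
    pixel_angle is an assumed per-pixel field of view, for illustration only.
    """
    target = distance * pixel_angle  # world-space size one pixel covers at d
    if voxel_size <= target:
        return 0  # the whole cube is already sub-pixel: no descent needed
    depth = math.ceil(math.log2(voxel_size / target))
    return min(depth, max_depth)

print(lod_depth(distance=10.0, voxel_size=1.0))    # 7: nearby, descend deep
print(lod_depth(distance=2000.0, voxel_size=1.0))  # 0: far away, stay shallow
```

    (This is what eliminates hand-made reduced meshes: distant objects simply stop their traversal at a shallow, coarse level of the same tree.)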

  13. Alexander says:

    There are a lot of funky comments by John Carmack himself on Slashdot
    link to

  14. panga says:

    an octree-based system like this coupled with hw geometry shaders could give some interesting results, too.

    *investigates by candlelight*

  15. Sucram says:

    Ray-tracing might scale very well with scene complexity and give us very nice shadows, but… 1080p at 30fps and let’s say 20 rays per pixel. My desktop calculator says that’s about 1.2 billion rays calculated per second.

    I think current CPUs can do about 100 million. Intel must be doing something rather special if they think that ray-tracing will be a viable alternative in a couple of years. It might be used in some small part, but not exclusively.
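    [Sucram’s envelope maths, checked, with the resolution and ray counts as stated:]

```python
# Rays per second for 1080p at 30 fps with 20 rays per pixel, as stated above.
width, height = 1920, 1080
fps, rays_per_pixel = 30, 20

rays_per_second = width * height * fps * rays_per_pixel
print(f"{rays_per_second:,}")  # 1,244,160,000 -- about 1.2 billion rays/s
```

    (Against the quoted ~100 million rays/s for a CPU of the day, that is still a roughly 12x shortfall before secondary bounces.)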

  16. Ging says:

    axel: from when I read the interview, I seem to recall that he does discuss the issue of hardware in terms of raytracing; it basically comes down to things like the SPUs in Cell. In fact, the SPU is nearly ideal for raytracing, purely because it’s designed to work in parallel and to crunch the right sorts of numbers.

    I also have vague memories of being able to use SIMD instructions to boost performance. Hmm, maybe I should go back and read it again.

  17. Irish Al says:

    I think the Spectrum was better.

  18. Mal says:

    This is quite a good article describing the problems inherent in applying a traditional ray-tracing approach to gaming (written by Dean Calver of the Heavenly Sword team).

    Oh, and if you want to see real-time raytracing at home, this is quite a fun demo.

  19. panga says:

    locality-sensitive hashing ftw

  20. Homunculus says:

    Rock, Paper, Shotgun: Leading the voxel charge!

    (Back to 1999 and Outcast, if we’re lucky).

  21. YouMeanMe says:

    [That was a really interesting comment. A shame you blew it by kicking off with pointless abuse – Admin]

  22. Andrew Farrell says:

    One problem Carmack doesn’t seem to consider is that rasterisation runs on hardware designed for exactly these kinds of calculations, while raytracing runs on general-purpose processors.

    Yeah, but isn’t that the point where you start putting technology optimised for that processing onto specialised “video-cards”?

  23. elias says:

    panga, I’ll take you up on that; I’ve never heard of “photon mapping”. Edit: ah, thanks for explaining, YouMeanMe

    GPUs are specialized to do “smoke and mirrors” 3D & special effects (shaders), compared to raytracing, which could give perfect lighting including reflection, refraction, shadowing, and radiosity. But processors are probably powerful enough now (and especially will be in the near future) to do good-looking raytracing in real time. The real thing standing in the way is that nVidia and ATI (AMD) make a lot of money on GPUs, and wouldn’t want them to go away.

    If they would make GPUs optimized to do raytracing… it could happen in a few years.

    Oh, and as far as I understand, voxels are just like pixels of a 3D image. I suppose they would be sort of like cubes, or dots, but it doesn’t really matter as a GPU won’t render voxels directly. You have to make triangles to represent them when you send it to the GPU, so any games you’ve seen that used voxels wouldn’t really show you what a voxel “looks like”–they can make them look like almost whatever they want (as long as it doesn’t slow down the game too much).

  24. YouMeanMe says:


    Photon mapping is ray tracing. It is implemented using two ray tracing passes, one to generate the photon maps and one to generate the actual render. To say ray tracing is rubbish but photon mapping is great in the same sentence suggests you have a poor understanding of one or both.

  25. axel says:

    Ging and Andrew Farrell:
    Yes, it all comes down to the hardware. Raytracing-optimised video cards will certainly come. ATI and Nvidia might expand their cards’ capabilities in that direction, and Intel is already walking down that path.

    Another bonus of ray tracing is the simplicity of some effects. Shadows and reflections, for example, are quite difficult to program with rasterisation. And the reduction of complexity is (or at least should be) one of the targets while designing a piece of software. Another point for raytracing.

  26. Chaz says:

    I’d like to join this discussion and say something clever, but I don’t have a clue what anyone’s talking about. My knowledge of 3D graphics could be written on the back of a postage stamp with a large worn-out felt tip pen.

    If all this means games will look even better in the future and require less expensive hardware, then hurray! Or something like that.

  27. Down Rodeo says:

    This isn’t a real time thing but it is interesting for a look at how the basic code would work. And it uses PCG (kind of)! The geometry, camera position and target etc. are generated from a pseudo-random ten digit seed. The spheres provide a good example of how you get high-quality reflections for relatively little code (the relevant section is highlighted by a comment in the function &raytrace()).

  28. Del Boy says:

    Honestly, what’s wrong with 2D?

  29. panga says:


    ‘Photon mapping is raytracing’ is as true as saying ‘Physics processing is raytracing’.

    True, photon mapping traces rays, but so does GoldSource! It’s merely a stage in the mapping process. Photon mapping is not ‘defined by’ raytracing at all – it borrows more from radiosity if anything.

    After writing a hash-based photon mapper intended for use on the GPU for my degree, I like to think I’ve got a bit of a grasp on the subject, although I may have been a bit too scathing on RT – it’s in my nature :P

  30. Scandalon says:

    So yes, once we get enough “power” to have voxels small enough to be a “pixel” or smaller, we need something like these octrees everyone’s talking about as a data structure to hold all this info – but able to hold additional data too (i.e. this particular “smidgen” of volume’s other properties: weight/mass, friction, colour, noises it makes when something hits it, etc.) – and we need to be able to parse it fast enough. (I presume some tricks for compressing, approximating and outright faking some of these things will be used initially.)

    Oh, and storage (i.e. hard drive) technology has to undergo a major shift to make accessing all this data fast enough.

    I remember playing the Outcast demo and thinking “where’s my voxel accelerator card?!?” Somebody mentioned something about voxels and 3D cards, but I don’t think any voxel-based game ever released used 3D acceleration of any kind.

    The real question is – once we get through all that, will we then get more power allocated for simulating more interesting things, like A.I.? How about real AI characters, not just better pathfinding? (Though that’s greatly needed too!)

  31. Lh'owon says:

    I empathise. When triangle size gets less than one pixel around here, it’s complete chaos. Jim starts screaming, Walker just flakes out completely and it’s only Alec who keeps his head together, reassuring us that triangle size will be back at more than a pixel shortly.

    Damnit, don’t make me laugh, I’m in a library.

  32. JP says:

    Some fellow wrote a raytracing renderer for ioQuake3, an open source branch of the Quake 3 engine (whose source was released in 2005):

    link to

    As you can see, it’s neat but nothing mind-blowing in the visual department, simply because it’s rendering content (map geometry and textures) that wasn’t designed to take advantage of it.

    Others here have summarized the approach Carmack is researching, but the key phrase is really “unique world detail”. The megatexture stuff in the current gen lets artists paint unique detail on any part of a huge world, so extending that to geometry, an artist can decide to put a wad of chewing gum on the underside of one specific bench in a park, and the engine can handle that from 10,000 feet with minimal overdraw and without running out of memory.

    Also, those nasty sharp-edged shadows we can’t seem to escape this generation would probably go away, without the expensive multisampling approaches we have to settle for now.

    From my perspective it all sounds like we’re getting seriously close to the point of near-total diminishing returns, but Carmack has made 3D graphics the Problem to Solve for his career, and his pursuit of that has been very respectable (even if it doesn’t always produce GotY).

  33. YouMeanMe says:

    A photon mapper is built on top of an existing ray tracing engine. You use a ray tracing engine to generate the photon map which you typically then use to generate your ray traced final image (though technically you could use the photon map with any renderer).

    If you use pregenerated photon maps with a non-ray tracing renderer what you are doing at that point isn’t photon mapping, you are basically just rendering using precomputed light information.

    Your example of the physics engine is way off: most physics engines do not use any form of ray tracing; they use various forms of volume collision checks. The point being, you can implement a physics engine completely without using ray tracing, but you cannot implement photon mapping without having a ray tracing engine. You may have created a photon-map-like effect, but it was not photon mapping.

  34. Wickedashtray says:

    As complex and confusing as his subject matter is, I will still sit in rapt attention the entire time one of his live discussions is being shown.

  35. panga says:

    Let’s not get too detached from the original problem here: we agree that raytracing is part of the process of photon mapping; that was never in dispute.

    What we don’t seem to agree upon is that photon mapping uses raytracing as part of a wider process that produces results of far higher quality, and as it’s becoming possible to run photon mapping programs in real time on GPUs, it seems a waste of time trying to implement a pure raytracer (which, funnily enough, seems to have nothing to do with Carmack’s plan).

    I don’t want to argue on the internet… I’m hungover.

  36. NegativeZero says:


    “I have no idea what makes it ’sparse’ though.”

    A sparse octree only divides its octants up where there is actually something to represent. You have your 3D space. A non-sparse (dense) octree algorithm would divide this space up into octants, and each of those into 8 more, until you reach the maximum depth. A sparse octree instead only makes this subdivision for areas where things exist – areas with nothing in them don’t get subdivided. It’s a little hard to explain without a diagram.
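    [NegativeZero’s dense-vs-sparse distinction in lieu of a diagram – a small Python sketch in which empty octants are simply never created; all names are illustrative:]

```python
class Node:
    """A sparse octree node: absent octants mean empty space."""

    def __init__(self):
        self.children = {}  # octant index (0-7) -> Node

def insert(node, x, y, z, size, depth, max_depth):
    """Subdivide only along the path down to the point (x, y, z)."""
    if depth == max_depth:
        return
    half = size / 2
    # Which of the 8 octants does the point fall in? One choice per axis.
    octant = int(x >= half) + 2 * int(y >= half) + 4 * int(z >= half)
    child = node.children.setdefault(octant, Node())
    insert(child, x % half, y % half, z % half, half, depth + 1, max_depth)

root = Node()
insert(root, 1, 1, 1, size=8, depth=0, max_depth=3)
print(len(root.children))  # 1 -- only the octant containing the point was created
```

    (A dense octree would allocate all 8**3 = 512 leaf cubes at depth 3; the sparse version allocates only the three nodes on the path to the occupied point.)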

  37. elias says:


    The guy working on this is doing voxel-based rendering on the graphics card in real time by creating meshes on the fly and sending them to the GPU. According to him, he uses similar methods to generate geometry for the GPU at his job, where he works on medical visualization software.

  38. vic says:

    +1 Wickedashtray and JP

    Yeah, even though I have no intention of doing any 3D graphics programming, I just can’t pass up a Carmack presentation. He has a way of talking which is like a window into a great mind.

    That it is his life’s work and he shares it unselfishly and publicly makes all the difference. Fascinating to hear him muse on the next experiments he’ll be doing. I’m out of my depth here, but what I took from the talk is that he perceives that uncompressing and accessing the data format is where the logjam is now, rather than in the rendering of surfaces. If he solves that, he knows he can hack other approaches to render surfaces close enough to any ray-traced hardware. And you better believe that rendering gum on a bench is the most banal of possibilities.

  39. Spudd86 says:


    Ray tracing graphics hardware: it’s a research project at a university, so they don’t have real hardware that is all that impressive, but the video that came out of their simulator is awesome.

    Hardware Simulator video

    And the successor project is here

  40. itsme says:

    While I’ve skipped all but the first three or four comments, it appears that there are YouTube-like morons everywhere.