Get With The Programmer: Carmack Speaks

He looks relaxed, doesn't he? Carm, you could say.

Code guru, hobbyist rocket scientist and co-founder of id Software John D. Carmack has been tracked down at E3 by PC Gamer, and they were rewarded with a rather excellent twenty-minute interview. With him, obviously. Want to know what he had to say about consoles holding back id’s work on the PC? Of course you do. It’s after the jump.

CVG’s embed code is neatly borked, so I’ve transcribed one particularly juicy quote for you below, because I love you. Alternatively, watch the full thing here.

Talking about something he wished he’d done differently during Rage’s development, Carmack said this:

“When we started on the game six years ago, I looked at the consoles and said ‘These are as good as the PC’, and our development strategy was to develop live on all the platforms. And now we’re looking at PCs that have ten times the horsepower of the consoles. I’m making a large change in my direction just saying ‘We should be building things efficiently on the PC and then deploying on the consoles.’ And we didn’t make that as crisp of a distinction as we could have.

“My development system now has twenty-four threads and twenty-four gigs of memory, and we can start putting on half a terabyte of solid state drives, and these are the things that are gonna drive the development process on the PC. I’m actually as excited about how we’re developing the titles in this coming generation as the graphic enhancements and things that I’m gonna make.

“…it is unhappily true that we have these consoles here running at sixty frames per second, and we could have these massively more powerful PC systems that struggle sometimes to hold the same framerate because of unnecessary overheads. If we were programming that hardware directly on the metal the same way we do consoles, it would be significantly more powerful.”

Do you know, that might explain why id’s trademark of coupling their games with breathtaking technology isn’t really present in Rage. Did anybody else see that coming? All of you? Never mind.


  1. Bilbo says:

    Speaking as a layman I can’t help but wonder if some manner of gaming-specific OS would be the way forward… I’d forgo multitasking and stuff if it meant I was getting better performance

    • Xocrates says:

      In which case you end up with a console with a mouse and keyboard.

    • Jumwa says:

      Doesn’t sound so bad.

      I’d love a version of Windows without all the BS they make me choke down. Something streamlined and game focused from the base up, but where I could do regular PC stuff. Consoles, after all, have been modded to do it in the past.

      But I’m not even an amateur at this sort of thing, so I might be talking out my butt.

    • Bilbo says:

      @Xocrates A console that I chose all the hardware for. So not really a console at all.

    • Xocrates says:

      It’s not that simple. For a gaming dedicated OS to be able to support any hardware you want, it could easily become very bloated. Alternatively you can make it an open platform and just install the programs and drivers you need, but that would remove the “gaming-specific” part.

      In short, all you’re asking for is games on Linux. Possibly with Steam or equivalent installed.

    • leeder krenon says:

      SteamOS would be rather neat.

    • Optimaximal says:

      In all fairness, Windows does only install the stuff it needs for your hardware… Everyone (well, except Apple) moved away from the ‘load every device driver so it just works’ methodology years ago!

      What you’re really after is essentially a PC Desktop version of the 360 Dashboard/PS3 XMB. Or to keep it in the PC universe, a modern-day Boot Disk!… DOS4GW… *rose-tinted*

    • mwclark4453 says:

      That’s actually an intriguing idea. You could dual-boot to launch a game session and then boot again to jump into ‘work’ mode.

    • Generico says:

      All you really need is a subsystem for Windows that will just play a game straight from a disc and install in the background. That way you just put in the disc and go, no hassle, just like a console. And actually MS was talking about a system like that back before Vista was released. They called it “Tray-n-Play”. Unfortunately, it never really went anywhere.

      As far as a dedicated gaming OS goes, Windows 7 already does a fairly good job of unloading or pushing to the background all of the subsystems that are not necessary when running a full-screen game. I don’t think you’d get any significant benefit from doing something like a dual-boot with a gaming focused OS.

      Consoles don’t squeeze more out of their hardware by having a more optimized OS. They get more out of less because they have a specific hardware configuration that is uniform across the whole platform. That lets developers write much more optimized code and design much more optimized levels. Also, the APIs for interacting with hardware are a lot more optimized too, since they know exactly what they’re working with. You’ll just never get that level of optimization on a platform where hardware configurations are customizable.

    • Bilbo says:

      @Xocrates It’s almost like I’m coming at it from the perspective of a layman… now if only I’d made that clear when I first commented, we could’ve avoided this whole ugly beating-me-over-the-head-with-facts situation

      You took “Gaming-specific OS” and “install the components I want” and went “Hey, Linux would basically do that for you, minus the gaming specific part at least” – you just made it snarky

      Good for you

      Gonna go have a mars bar

    • Jumwa says:

      Friends don’t recommend friends use Linux.

      Poor form there, tsk tsk.

    • bonjovi says:

      double boot?
      win on one boot and steam OS on other.
      too much hassle?

    • Xocrates says:

      @Bilbo: I never intended to come across as snarky, so I apologize if I appeared an asshole. All I was honestly trying to do was provide clarification on why your idea wouldn’t work that well.

      Although to be fair, you can’t really expect that people won’t point out the flaws of a flawed idea. If you say something like “Speaking as a layman, why don’t they make planes out of the black box material?”, I would expect that you’d be aware that people might actually answer you; in fact, I would expect that to be what you want.

      (As a side note, I never intended to come across as defending, or even recommending, Linux)

    • Jumwa says:

      “As a side note, I never intended to come across as defending, or even recommending, Linux”

      Apology accepted, right guys?

      Let us all be brothers again.

    • DrazharLn says:

      Generico has the right of it. The consoles have better performance for their hardware than PCs because of what Carmack calls “direct to metal”, that is, writing code to run directly on the hardware of the consoles.

      This is practical in the world of consoles because that’s a homogeneous hardware environment. The machines that we’re using are heterogeneous, they have differing hardware components. Different components require different machine code or at least differently optimized machine code to run the same programs at the same performance.

      It’s very impractical to write code for every different piece of hardware and impossible to optimize for every different combination of hardware components, so you don’t. Instead we abstract away the detail of those components with uniform interfaces handled by our operating systems and write our programs to use those interfaces instead. The downside is that it’s slower (but not by that much), the upside is that it actually works.
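A toy sketch of that idea in Python (every name here is invented for illustration): the “game” is written once against a uniform interface, and each “hardware” backend handles its own quirks behind it.

```python
# Toy illustration (all names hypothetical): a uniform interface hides
# heterogeneous "hardware" backends, so the program is written once.

class GpuBackend:
    """The uniform interface the OS/driver stack exposes to programs."""
    def draw_triangles(self, count):
        raise NotImplementedError

class VendorA(GpuBackend):
    # Pretend this chip wants work submitted in batches of 4.
    def draw_triangles(self, count):
        batches = [4] * (count // 4)
        if count % 4:
            batches.append(count % 4)
        return batches

class VendorB(GpuBackend):
    # Pretend this chip takes everything in one submission.
    def draw_triangles(self, count):
        return [count]

def render_scene(gpu, triangles):
    # The "game" never knows which vendor it is talking to.
    return sum(gpu.draw_triangles(triangles))

# Same program, different hardware, same result:
assert render_scene(VendorA(), 10) == 10
assert render_scene(VendorB(), 10) == 10
```

The abstraction costs a little indirection on every call, which is exactly the overhead being discussed, but it is what lets one binary run on both "vendors".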

    • Bilbo says:

      Yeah, once it was said, it was a fairly easy thing to follow – but yay, I understood the idea of programming straight onto the GPU too. Did anyone else?

    • PoulWrist says:

      @Xocrates how is it bloated because it supports hardware? Windows takes up a lot of space these days because it has to have a ton of backwards compatibility with different architectures, and because it comes with a lot of drivers out of the box. Take away the printer drivers from your Windows installation and you free up about a gigabyte. Drivers are what make it support what’s there.

    • Baines says:

      On the idea of consoles having better performance because they have known uniform hardware, don’t forget that later revisions to reduce production costs will sometimes break compatibility with games. That happens when trying to create mostly equally functional but cheaper hardware, and not adding anything new or fancy.

      The PS2 lost compatibility with some games with nearly every (if not every) hardware revision. And the problems weren’t consistent. It wasn’t “X is broken in all games that used it”, it was stuff like Game X simply doesn’t start anymore, Game Y has graphical problems, and Game Z locks up when it tries to save.

    • Boozebeard says:

      I’ve always thought it would be great if the consoles ran some kind of boiled-down, modded Windows. Then all you would need to do is develop for PC with as much fidelity as you wanted and just make sure the lowest settings would run on the console hardware.

    • LionsPhil says:


      No no no, no.

      Ok everyone, I’m putting on my Computer Science PhD hat for this because the comments on this are going full-speed-ahead into armchair dumb.

      You are asking to return to DOS. Let me remind, or for the young’uns in the audience, tell you, why this is not a good thing.

      But you want hardware abstraction layers. Here’s the first spoiler: even DOS had a degree of hardware abstraction, inherited from CP/M. It’s the BIOS, which provided ways to talk to storage devices like hard drives and do certain graphics operations. But the BIOS was left to rot unmaintained, and hence we got the boundless joys of having to find the right flavour of Sound Blaster Compatible driver to tell each game about to get sound to work. And if you got a newer card that didn’t do Sound Blaster 16 emulation, or you had an AdLib or something and not all games cared to support that? Too bad. No sound for you. So it turns out you do want Plug’n’Play, and since you want short routes to hardware where they can be formed, you want DirectX, because that’s what DirectX is.

      But you want a protected memory model. A what, you say? A rather expensive, when it comes down to it, way to keep each little application in its own little memory sandbox. Something that Windows has done properly since NT (so, since XP for the desktop), and is the reason why XP and later fall over and die and generally misbehave less than 9x. Maybe rose-tints have made you forget this, but in the big ol’ mash-of-applications memory model of Windows 9x (or MacOS Classic) a single memory corruption bug in any single program can cause completely arbitrary bugs in every other program, and this happened all the freaking time. There’s a reason why several of the old Windows application crash messages suggested saving your work and rebooting: one game crashing to desktop and the whole OS is as stable as a Karma ragdoll. (For those who can’t remember the heady days of early ’00s physics, that means it’ll turn into a mess of wildly thrashing polygons and implode if you look at it funny.)

      Oh, and by the way, you don’t get a choice if you want to use 64-bit code. It’s protected mode or bust. The days of applications dealing directly with physical memory addresses ended long ago (not even Windows 95 worked that way, even if it didn’t keep them properly separated!).

      But you want multitasking. Because you need to run Steam for that game you got in a sale. Because you want Mumble for voice chat, and the weird little daemon you don’t even know exists which makes the programmable buttons on your joystick or fancypants keyboard or gamer mouse work and switch automatically depending on which game you’re playing. Because you want to flip over to a walkthrough in Firefox. Because you want the Steam overlay to work at all (hint: it runs as a separate process to the game). It doesn’t matter if you say “oh, but I don’t need the appearance of running multiple programs, all that system stuff can still happen”, because the cost of multitasking is in all the context switching required to support more than one process running “at once” at all. And in the glorious multi-processor world we live in today, saying “I only ever want one thread of execution” is saying “please ignore at least half of my processor power”.

      (And no, you don’t want “co-operative multitasking”, even if it seems simpler and faster. That’s what Windows 3.1 (and MacOS Classic) had. You know, the one that locked up a lot. Because it only takes one application to go wrong and stop co-operating and the whole system is screwed.)

      And chances are you want ‘all that bloat’. Here’s a quick rule of software development for you: whenever you think “man, I could write a fast-and-light version of Large Product X that only includes the 10% of features people ever use”, you will find that everyone uses a different 10% and by the time you’re done you’re back up to the same size. Here’s another: the “cruft” code accumulates is best known as fixes and workarounds for real-world issues and complexities needed to make it work, and no matter how hard you click your heels and believe in a nice little world of pure simplicity, a rewrite will not work without doing the same things. Useful modern operating systems are complicated because they are solving complicated tasks. I’m not saying literally everything in Windows is good (Christ knows Microsoft have a bad habit of their core OS team doing a stellar job, only to then foist crap like Product Activation or Genuine Advantage on top), but the sheer amount of work to replace it with something useful is literally millions of man-hours for absolutely negligible gains.

      Oh, and Linux? HAHAHAHAHA. I’m sorry, but the ubiquitous graphics system used by desktop Linux, X11, has even more indirection than Windows or MacOS, since it insists on reducing everything to a network protocol, even if locally that short-circuits it a bit to a different type of internal communication. This is matched by a performance hit, as anybody who dual-boots can discover, despite what Linux zealots (who never actually run Windows to compare) will tell you. These days its audio stack is also going through two abstraction layers: ALSA for actual hardware, and PulseAudio because it seemed like fun at the time. Sorry, did I say two? Since most things are still written to ALSA, but PulseAudio is hogging it, PulseAudio then emulates an ALSA interface, effectively being a big pointless S-bend in your audio pathway. On top of this it does all the same costly things like protected memory and multitasking.

      Ok. I think I’m done.

    • Wisq says:

      Whatever happened to the multiple stream software mixer in ALSA? It was supposed to eliminate the need for all these sound daemons and basically do what Windows does.

      Or what Windows does for crap sound hardware, that is. If you run Linux on a system with a _real_ sound card, you can play lots of stuff simultaneously, direct to ALSA, with no issues and no sound daemon. But since 95% of sound hardware out there is the crap variety, most distros just stick you with the sound daemon crap.

      Also, X11 does short-circuit rather tightly when you get into 3D accelerated stuff. These days, it doesn’t always compare very favourably with Windows graphics performance, but there was a time when I could actually run the original Half-Life engine faster on Linux with Wine than I could natively in Windows. Probably because people still cared about OpenGL back then.

      Linux could still be made into a good gaming OS. The preemptive realtime kernel support is (I’ve heard) a big step forward in terms of killing latency and getting it up to speed with the other OSes. Of course, Linux on the desktop is still a big pipe dream IMO, and we’d probably be better off focusing on (say) Mac instead. But if someone were making a “PC Console” for some weird reason (with their own game library etc.), Linux would be the obvious choice, if only for customisability reasons.

    • alice says:


      *Slow Clap*

      It drives me crazy that Carmack knows all of this as well but still talks about “direct to metal” as if it is around the corner.

  2. Bats says:

    Didn’t the fella from AMD say the same thing and then get blasted by a bunch of ‘journalists’ on the web and have to redact the statement after the fact? Some developers *want* to program straight to the GPU instead of having to deal with DirectX/OGL interfaces and whatnot. Hopefully something comes of it, because it’d really be amazing to see what is really possible.

    • Backov says:

      Yes, and the fellow from AMD rightfully got blasted. Mostly by gamedevs. Lots of them did it right here on RPS.

      Sure, the PC would be faster if you could program direct to the metal. It would also be faster if it was magical, because that’s what would be required to get direct to the metal across the entire range of PC hardware.

      Mr Carmack was asked a question, and he answered it. Either that part where he said “but that can’t ever happen” was redacted, or he just thought it was obvious.

    • Optimaximal says:

      I think it was how the AMD guy put it. What I remember of his soundbite was essentially ‘we need to dump OpenGL/DirectX’, ignoring just what they facilitated.

      Programming directly sounds all well and good until you remember that was what made DOS/Windows gaming so bloody hard pre-DirectX…

    • Moni says:

      I think he made a mistake in his wording and it sounded like he was calling out DirectX specifically, when he should have said, “We could do more without the API overheads.”

      Actually, in hindsight, I think it was the Bit-tech article that chopped up his quote a bit, and made it read a bit funny.

    • bglamb says:

      Yeah, that guy was misrepresented I recall.

      But Carmack goes on to say that the reason you can’t do it is also because you would go bankrupt.

      You can’t spend ‘x’ millions making a PC game look the best it can be if the market isn’t there.

    • Moleman says:

      The AMD thing was rightly blasted, because there are exactly three companies that really care about the ability to program direct to metal – id, Epic, and Crytek. The only way to make a graphically impressive game without an engine-development-scale organization would be to license one, and then, basically, your engine devkit becomes your API – so the status quo, but Carmack gets a slightly vaster pile of money.

      Really, just keep in mind that Carmack is justifiably revered, but id also hasn’t been a huge player in the gaming scene in years – well loved, yes, but Tech 4 didn’t exactly set the world on fire. The number of games using it as a platform is in the single digits, and exactly two weren’t either id games or sequels to older id games farmed out to other studios. He can look back on the wild days of the late 90s and early 00s as the time when the Quake 2 and 3 engines strode the earth and he was a living god, but that was also the first rumblings of the current status quo – as soon as folks could get off the PC upgrade carousel for consoles (which were “good enough,” but cheaper), they did. In vast numbers.

    • ResonanceCascade says:

      John Carmack has said repeatedly that he doesn’t develop engines to license them. In the ’90s, people came beating his door down to get their hands on Quake and Quake 2, and licensing them literally just came with x amount of hours of support from Carmack himself. So he told people ‘sure.’ I guess that’s what led to the myth that id is all about engine licensing. Couldn’t be further from the truth.

      As engines become more and more complicated and the support part becomes more important, licensing outside of a small family of developers becomes less and less appealing to guys like Carmack, who really just want to sit down and make new, cool stuff.

      He’s not some corporate mastermind machinating to overthrow Unreal; you can take what he says pretty much at face value.

    • PoulWrist says:

      Indeed he did, and some Crytek fellows did as well. Then there was a backlash of people going “BUT WE NEED IT”, and it’s probably true that the world would otherwise be dominated by sweet-looking proprietary game engines that everyone else would have to license, because only the big and powerful could afford to develop something competitive.

    • soldant says:

      @Optimaximal knows the score. The DOS days of having to know IRQ addresses and the like to get something like sound to work are dead and gone for good reason.

  3. Pete says:

    I’d love it for someone to be more specific about what those overheads are.

    (I don’t write games, but I write very cpu and memory intensive code for a living)

    • Optimaximal says:

      The operating system in the background is consuming memory & cycles, as are the display drivers & the APIs provided by DirectX/OpenGL and equivalents.

      Use of the word ‘needless’ is wrong, because the OS is actively running crucial system management tasks, but it’s all resource that the game cannot use.

    • jetRink says:

      Funny, at my company they are always asking us to write resource efficient code. To each his own, I guess.

    • Pete says:

      Derp. Code with large datasets that requires a lot of processing. There’s always a programmer time vs buy a bigger computer tradeoff.

    • Frye2k11 says:

      I am no graphics expert either, but I don’t think they’re talking about system overhead. I can’t see how that is specific to games. Besides, my CPU use is way below 1% when idle.

      I think what they mean is that you need expensive API calls for even the most basic graphics job. (NERD ALERT! Like hundreds of glMultMatrix calls per frame in OpenGL.)

    • Sagan says:

      For a lot of applications, the most expensive part of graphics programming is talking to the GPU. The problem is that you are talking to DirectX, which is talking to the driver, which is talking to the GPU. So if you are telling DirectX “I want to draw this model with these textures with this shader”, then there is so much going on between that and the image actually appearing on the screen that you don’t want to do more than a few hundred draw calls each frame. Which is not a lot when you have got a ton of detail and multiple render passes and all that fine jazz.
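A back-of-envelope sketch of why draw-call count matters, with entirely invented numbers (real per-call costs vary by API, driver and CPU): each call pays a fixed CPU-side toll before the GPU does any work, so many tiny draws lose to a few batched ones even for identical total geometry.

```python
# Back-of-envelope model (numbers invented for illustration only):
# every draw call pays a fixed API/driver cost on the CPU, plus a
# per-triangle cost on the GPU side.

CALL_OVERHEAD_US = 30    # hypothetical per-call CPU cost, microseconds
PER_TRIANGLE_US = 0.01   # hypothetical GPU cost per triangle

def frame_time_us(draw_calls, triangles_total):
    """Crude estimate of one frame's cost in microseconds."""
    return draw_calls * CALL_OVERHEAD_US + triangles_total * PER_TRIANGLE_US

# Same million triangles, split into many small draws vs a few batches:
many_small = frame_time_us(draw_calls=5000, triangles_total=1_000_000)
batched = frame_time_us(draw_calls=200, triangles_total=1_000_000)

assert batched < many_small
# With these made-up numbers, 5000 calls cost ~160 ms per frame (way over
# a 16.6 ms / 60 fps budget) while 200 batched calls fit at ~16 ms.
```

The specific constants are fiction, but the shape of the trade-off is the point: the overhead term scales with call count, not with scene detail.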

    • Pete says:

      So where is that overhead coming from exactly? Is this the overhead of a round trip on the bus to the card? Too many CPU context switches? Simply going up and down the DirectX call stack isn’t going to be that heavy.

      (One of the things I have learned: be careful of optimising without a profiler. How much of this “overhead” is speculation and how much is measured?)

    • PoulWrist says:

      As mentioned above, it’s the overhead in the APIs, drivers, and other such things. They have to go through those to get to the hardware, whereas on consoles they code directly for the hardware. That’s one reason Unreal Engine 3 is so widely used; developing an engine that looks just as good and does the same things is a costly affair – as Carmack says, they’ve worked on Rage for six years now, and doing the “best” solution for current-level PC hardware would possibly take a decade.
      So, incremental steps, and in the future we’ll likely see better utilisation of our hardware.

    • HeavyStorm says:

      Carmack has never been a friend of multiple abstraction layers. He always calls them “unnecessary overhead”. I remember reading in his .plan many years ago that games wouldn’t have so much lag if the OSI model didn’t impose a lot of, again, unnecessary overhead. But I don’t think the fellow is saying that’s a bad thing; he just would love to live in a world where such overhead simply wasn’t there.

      And to answer the question of what overhead this is: yes, the DX call stack may not be that heavy, but data transformation and remapping are. The whole reason we need DX is to abstract over the different data models that each piece of hardware expects. For example, maybe a texture is a big array of integers starting from the upper left on nVidia chips, while ATI chips think of a texture as an array starting from the bottom-left. Or the center, who knows.

      Of course the example above is absurd, but there are a number of subtleties that DX needs to take care of when talking to different hardware. It presents a very uniform (non-optimized) API to us developers, and then knows what it needs to pass on to the drivers. Of course, I’m lying here. The fact is that DX is a standard and different hardware has to adhere to it, so DX is uniform as well, but it can deal with different behaviours in drivers. Let us remember the issue when shaders came out and we had a lot of shading languages to learn (DX HLSL, nVidia Cg, etc.).
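The row-order remapping described above can be sketched in a few lines of Python (a deliberately trivial stand-in for what a real driver/API layer would do):

```python
# Hypothetical illustration of the row-order remapping described above:
# one chip wants textures top-left first, another bottom-left first, and
# an abstraction layer flips rows so the game never has to care.

def flip_rows(texture):
    """Convert between top-left-origin and bottom-left-origin layouts."""
    return texture[::-1]

top_left = [
    [1, 2],  # first row = top of the image
    [3, 4],
]

bottom_left = flip_rows(top_left)
assert bottom_left == [[3, 4], [1, 2]]

# Flipping twice round-trips back to the original layout.
assert flip_rows(bottom_left) == top_left
```

Trivial here, but done for every texture upload on every frame, this kind of translation work is exactly the per-call cost an abstraction layer quietly adds.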

      Finally, I don’t suppose the overhead Carmack is talking about is inherent to the OS. What I think is that, since he is working on both PCs and consoles, he has written an abstraction layer himself, one that enables his developers to write a single codebase that runs efficiently across very different architectures (PS3, Xbox and Windows PC… and probably Linux, since he usually does that).

  4. Nallen says:

    I wrote a clever comment with links about that AMD guy saying the same redundant stuff about programming ‘directly to the metal’. I even included an analogy.
    It got eaten :(
    Anyway you can’t program straight to the GPU, unless you’re willing to only allow people with that GPU to play your game.

    • TillEulenspiegel says:

      I smell an opportunity for AMD or NVIDIA to give id Software a large pile of money in exchange for adding a render path that’s designed specifically for their flagship GPU. It’d be an interesting experiment. Let’s see what’s really possible.

    • Optimaximal says:

      I take it you missed the whole ‘The Way It’s Meant To Be Played’ marketing campaign from NVidia?

      This has been going on for years, be it drivers written specifically to make games look better/play faster, cheating benchmark programs or just sabotaging rivals using financial clout.

    • bear912 says:

      In reply to comment: If I’m not mistaken, some GPUs from a single manufacturer will be at least partially instruction-compatible.

      Not in reply to comment: While I have little-to-no experience doing any kind of real-time 3D work, I think perhaps GPU manufacturers should take a lesson from the success of x86 hardware. Intel’s architecture has remained successfully backwards compatible for a very long time (see footnote), and yet the architecture and instruction set have also successfully scaled to modern systems, to the point where it is still the single dominant PC architecture. While this approach may have some drawbacks, it at least warrants some consideration. Perhaps it is not the standardized hardware of consoles that PC gaming needs, but rather a standardized architecture.

      In computer science, a specific solution is always faster than a general solution, and so, while abstraction is an important tool for programming, there is sometimes value in removing abstraction. There’s a reason that the vast majority of programmers do not program in assembly language, but there’s also a reason that a few people still do. As it stands, the layers of abstraction put in place to keep programmers from having to deal with numerous architectures, manufacturers, and models have an unfortunate side-effect: overhead (incidentally, the fact that this pseudo-standardization is done in software helps Windows maintain its stranglehold dominance as the OS for PC gaming, whether it deserves it or not).
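To make the "specific beats general" point concrete, here is a tiny Python sketch (an invented example, not from the comment): both routines compute the same dot product, but the general one carries extra machinery to cope with arbitrary memory layouts, much as an abstraction layer must.

```python
# Invented example: a general routine that abstracts over memory layout
# versus a specific routine hard-coded for one layout. Both are correct;
# the specific one simply has less to do per call.

def dot_general(a, b, stride_a=1, stride_b=1):
    # General: supports strided inputs, like an API that must accept
    # whatever layout the hardware or driver hands it.
    n = min(len(a) // stride_a, len(b) // stride_b)
    return sum(a[i * stride_a] * b[i * stride_b] for i in range(n))

def dot_specific(a, b):
    # Specific: assumes contiguous, equal-length inputs -- no stride
    # arithmetic, nothing to check, easier for a compiler to optimise.
    return sum(x * y for x, y in zip(a, b))

a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0]
assert dot_general(a, b) == dot_specific(a, b) == 32.0

# The general version earns its keep only when layouts actually differ:
assert dot_general([1.0, 0.0, 2.0, 0.0], [3.0, 4.0], stride_a=2) == 11.0
```

The flexibility is not free, which is the whole trade-off behind "direct to metal" versus portable APIs.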

      * Footnote: I’m too lazy to check and see how long x86 has been backwards compatible. Perhaps someone else will feel like looking it up…

      tl;dr: Perhaps GPU manufacturers would actually benefit from a standardized architecture and instruction set, but this would require broad cooperation between a number of companies, and will probably not happen for a long time, if ever.

      Still tl:dr: GPUs should take some lessons from CPUs.

      #WallOfText #RantMuch

    • Baboonanza says:

      Well, if you want to go back to the days when you didn’t know if a game would run on your GPU, alt-tabbing crashed everything, and a crash more often than not required a PC reboot, be my guest.

      I really don’t understand this desire to go back to the metal. Modern GPUs are hideously complex beasts, and getting the best out of a card requires careful balancing of tons of different factors for each and every card model. The API is what gives both the developers and the driver coders a target stable enough to write code that works on the wide variety of hardware found in PCs.

      You could make an argument for a slightly less intrusive API, and progress has been taking us that way for some time. There is already a proprietary OpenGL extension that allows much more direct VRAM control, for instance.

    • PoulWrist says:

      You can code for the architecture of each chip generation, but yes, you would be doing a disservice to anyone with a previous- or future-generation chip. Of course, there’ll always be some level of compatibility, but it’d take longer.

  5. Hunam says:

    But he also went on to say that going ahead with all this tech was kinda silly, as games look good enough now that he wishes he’d just kept updating his last engine and got two games out by now.

    • Nallen says:

      Sounds like he barely stopped short of saying ‘Rage was a fucking great big expensive, misdirected waste of time’

      I may be putting words in his mouth now.

    • Moni says:

      Well, Brink and Prey 2 do look quite pretty…

  6. Diziet Sma says:

    This is the takeaway quote for me:

    “If we were programming that hardware directly on the metal the same way we do consoles”

    I.e. get rid of abstraction layers like DX.

    • Pete says:

      But I thought development for the Xbox involved DirectX? Anyone here actually an xbox dev?

    • Optimaximal says:

      Abstraction layers like DirectX and device drivers are what makes the PC such a varied platform. For direct-to-metal programming to work, the game would have to either be patched almost relentlessly for new hardware or all programming would have to be done for a small subset of hardware.
      There’s a reason Apple can do clever things such as live/background GPU switching in Mac OS X and that’s because they know the hardware it’s going to be running on and can write for it.

    • manveruppd says:

      Actually you’ve had that on PCs since before Apple started doing it: do a search for Nvidia Optimus, it’s been around for 2-3 years now. Ironically, Apple started doing it after they switched to ATI graphics on their laptops! :p

    • Shivoa says:

      As Pete said, the 360 requires you to code to the DX (9, with custom 360 extensions) layer.

      Part of the reason there were such issues with backwards compatibility from Xbox to 360 games was that they did not demand that Xbox games be coded through their API (DX), so many titles optimised by going direct-to-metal. That meant code that now needs to be interpreted through a translation layer: from the Pentium 3 x86 CPU to the 360’s IBM CPU, and from the nForce2-era nVidia GPU to the 360’s ATI GPU (without infringing nVidia patents). My experience with coding on the 360 and talking to friends doing the same (which is far from exhaustive, and I welcome more details from someone with more experience who isn’t under NDA) is that you talk to the DX layer and you don’t get much choice about it. This means future compatibility will avoid the myriad of patent issues that the current backwards compatibility layer involved.

  7. Sam Crisp says:

    That picture has quite possibly the worst mouse-over text I have ever read.

  8. Rii says:

    Rage @ 30fps on Sandy Bridge integrated graphics. Impressive.

  9. aircool says:

    Since when did Julian Cope start working for id?

  10. Rii says:

    The take-home quote for me:

    “We no longer think that building the prettiest pictures is the best way to deliver value in games.”
    – John Carmack, 2011

    Of course Satoru Iwata said much the same thing back in 2005 and the success of the Wii largely bears him out. Coming from Carmack, though, well, that’s a whole ‘nother kind of credibility.

    • Optimaximal says:

      The problem is that John’s whole talent lies in delivering great-looking products in the most innovative/clever/efficient way possible. The gameplay is handled by other (potentially less talented) people outside his ‘sector’.

    • Rii says:

      I’m not sure why that’s a problem.

      I have an enormous amount of respect for Carmack and am always willing to listen to what he has to say. We’re gradually – gradually – transitioning from game development as an exercise in technology to one of creativity. It’s the shift from the engineer to the artist, and it’s the engineers who are building the tools to bring that future about. Digging their own graves, to put it melodramatically. And I think that’s a glorious, wonderful future for the industry, but that doesn’t make it any easier on those engineers whose absolutely essential work and impressive talents are by degrees becoming marginalised from the public face of the industry.

      Beyond his abilities as an engineer and programmer, I think Carmack has shown enormous foresight and maturity in accepting, adapting to, and even welcoming these changes and those which are to come. He’s essentially outgrown his fanbase at this point. He’ll always have a place in the annals of gaming history, but his age is over and the attention he still receives is merely the residual echo of past deeds, and I’m sure he knows that. Kudos.

    • bastronaut says:

      “Digging their own graves” is a bit excessive. A better way to think about it is that engineers can spend less time grinding out generic engine components and more time adding new and innovative behaviours and gameplay interactions. They can work more closely with designers to expand the boundaries of what gaming means and what games do, how the environment works, how AI and characters work, and more.

      Environmental realism had its time in games, and while it won’t go away, the gaming world desperately needs to get back to experimenting. Yes, designers and artists will drive this to some extent, but so will new software architectures and algorithms.

    • ResonanceCascade says:

      I think you’ve raised a pretty huge point here, Rii. I think video games are nearing the beginning of a loooong transition similar to what Avid did for video in the late 80s and Adobe has done for photo editing and motion graphics over the last 15 years. Which is getting the tools locked in and simplified to the point where several orders of magnitude more people (artists) can use them effectively. This is a REALLY hard problem, but I think we’ll see it cracked in the next 8-12 years, especially as it becomes more apparent that it’s a vitally necessary step for the market. Very exciting.

    • D3xter says:

      @Rii: The day what you are saying becomes true is the day that both creativity and innovation have died, because literally ALL games for at least a decade will look, feel and play almost exactly the same (see the Gears of War/Unreal Engine dilemma and multiply it). You see, the “creativity” you are talking about is deeply intertwined with the technological part of gaming; as Carmack said, aside from the game design and art side of it, it is more “engineering” than art. Your engine directly influences what and how you will be able to show, and in what ways you will be able to interact with the world you are trying to depict. This is also why different types of games generally use different engines and technology: an engine that can render great racing scenes will not be the best choice for your RPG or shooter.

      What he is referring to is the increased complexity and workload that comes with technology improving and that you simply can’t spend your time rewriting EVERYTHING from the ground up for your next game or two every few years like you used to be able to in the days of yore using 2D and simple algorithms, but that you rewrite certain parts of it.

      I see a bigger push towards outsourcing and the general use of more middleware/specialisation instead (not unlike the movie industry, which does it with special effects, rendering companies, sound, makeup/props etc.). You will probably see more and more middleware logos popping up at startup; that trend has been increasing over the last few years. Starting “Witcher 2”, for instance, I see FMOD for audio, the established Havok for physics, Scaleform as a UI design solution, SpeedTree for realistic foliage and trees, PathEngine as a pathfinding AI solution, and so on. I dare say the amount of middleware used will only increase, with things like lighting, complete AI routines and facial animation handled by it and an increasingly smaller part falling to the actual engine engineers, while the initial “game design” gets easier and comes back to just working with your tools.

  11. Milky says:

    I never realised id is said as id, I had always called it I.D if that makes any sense.

  12. niko86 says:

    I’m sure I read that Windows 8 would, or could, be modular: certain parts of the operating system would only initialise when required.

    Microsoft could create a game-optimised OS profile, say an ‘Xbox mode’. As long as your PC meets the minimum requirements, it’s good enough to be an Xbox. Microsoft could still make and sell an Xbox console and maybe update the specs every few years to keep up with PCs.

    And if your PC is better than the minimum spec, obviously it can do higher resolutions, more image processing, etc.

    • MDevonB says:

      Niko, while that would be a great idea, it wouldn’t be feasible. It’s a matter of architecture: the Xbox 360’s processor is a PowerPC design, while modern Windows runs on x86/x64. You could make an emulator, but we don’t have the processing power for it. bsnes, which does (obviously) SNES emulation, requires a decent PC to use.

      With Windows 8, they’re going to have enough problems juggling 3 ARM architectures on top of x64 (I hope they’ve just given up on x86), so it probably isn’t financially wise either, assuming they can in fact get everything working.

  13. Salt says:

    Really interesting to hear him talking about the focus on improvements to development, rather than slightly better parallax mapping.

    Reminds me of Eskil (creator of Love) discussing his beautiful development environment and tools like procedural texture generation.

  14. MadMatty says:


  15. Quaib says:

    I really like this new far more pro-PC anti-console RPS.
    I feel that in the past you’ve sort of been slightly apologetic for devs developing for consoles primarily and porting to PC, and similar crimes, although RPS is of course PC-only and all that.
    But I like the new aggressive attitude.

  16. bonjovi says:

    Just realised. What he says basically comes down to this:
    We’ve been developing for consoles, now porting to PC, expect shitty ports.

    • FakeAssName says:


      I kinda read it as: “I’m sorry Rage looks like shit on the PC, things would have been fine if we could have released the thing years ago when it was ready to be a high quality PC game for the time (but would have played shitty on a console), the new overlords didn’t like that and made us redo things so it played great on the console … unfortunately you guys are now kinda fucked with a late model PC game and people are blaming Id because I can’t just come out and say that the string Bethesda tied around our balls is tripping us up.”

    • tbradshaw says:

      Positively hogwash.

      The game runs wonderfully on PC. In that section of the interview Carmack’s talking about our development process. He’s describing how id Tech 5 is currently developed live on Xbox, PS3 and PC simultaneously, which slows down the development process. He wants to update the process so that we can take advantage of the vastly more powerful PC during iterative development and then just deploy to consoles for final testing and iteration.

      There’s no “port” for PC. Every platform is developed simultaneously.

    • wazups2x says:

      “There’s no “port” for PC. Every platform is developed simultaneously.”

      Ha! Never heard that before…

  17. Om says:

    No. I’m not going back to the days when id dictated the pace of PC hardware development. I *like* not having to upgrade my machine every six months

  18. Stevostin says:

    “Do you know, that might explain why iD’s trademark of coupling their games with breathtaking technology isn’t really present in Rage”

    I am actually very sensitive to the “megatexture” stuff, the idea that no two walls are the same. I can feel it in every video, and I am surprised it’s never spoken of by the press. I feel it’s a very deep change for all gaming. That said, it seems so heavy to sustain design-wise (they say no hobbyist will make mods with this) that it probably won’t be widely used. Still, lots of potential, especially if they do Fallout 4 with it.

  19. daf says:

    “Bare to the metal”: so instead of Microsoft and each hardware maker creating the drivers and a graphics API for game developers to use, game developers would have to do it all on their own.

    As someone with some experience in programming, I shake my head every time someone hears this and thinks it means we should get rid of DirectX and similar tools. It’s like we’ve all forgotten the days when PowerVR and Voodoo introduced accelerated 3D and you had to have an SGL, a Glide and a software version of your game to support all hardware. Do you really think any sane developer today would like to return to the days of rewriting their render code for each graphics card? Most barely bother writing a decent PC version, let alone the number of different versions needed to support all the popular graphics cards.

    What needs to be done, and what I’m sure Carmack is referring to, is improving driver and DirectX code to be more efficient, not getting rid of it. It’s not uncommon to see “30% increase in ” on driver release notes, and I’m sure when Rage is out we’ll see improvements being made.

    /rant mode off

    • pepper says:

      Hear hear. Optimise DirectX/OpenGL, but please let’s not go back to those days.

    • SoupDuJour says:


      Also, right now, the limiting factor (for sales and critical reception) on most games really isn’t graphics, or processing power, but balls to try something new, or to just make something aesthetically pleasing.

      The focus is shifting toward the aesthetic aspects of game development. The actual design of the gameplay, art direction, characterisation, storytelling, world building… that type of thing. People need to get better at that, and some pretty good improvements are already being made.

      The difference between 10000 and 10000000 polygons for a character model isn’t as big as the difference between retarded dialogue (most games) and brilliant writing (portal etc).

      Anyone else notice that in a lot of (console) games the gameplay is almost an afterthought? Like… some filler between the things the devs clearly found more interesting to do, i.e. cutscenes/quicktime events. Then again, gameplay where the player can actually DO stuff is hard to focus test. And as such “a risk to investment”. ;P

    • bill says:

      I don’t think Carmack was saying that he wants to program bare-to-the-metal on the PC (well, he might enjoy it for fun, but he knows it’s not realistic for actually shipping games); he’s just saying that’s the reason you can get a lot more out of the equivalent hardware on a console than you can on a PC.

      If you imagine a PC with 360 specs, there’s no way it could run something like Skyrim – because there’s no standard hardware and so there are lots of middleware layers and operating systems in between.

  20. HelderPinto says:

    That’s not a pretty pic of Carmack.

  21. bastronaut says:

    Considering Carmack’s words in the context of some of what he’s said in the past about the development of earlier games, I think he would just like the option to adapt key parts of his architecture to the strengths or weaknesses of the particular hardware. I doubt he wants to re-write his whole engine in assembly or anything comparatively extreme. He wants to get as much performance as he reasonably can, so that he can do cool stuff with it.

  22. Neurotic says:

    Wow, the geezer’s looking old! I remember when his hair was golden and his skin was taut and shiny with youthful perspiration.

  23. Wulf says:

    I really like Carmack, always have, even if I haven’t liked all of id’s games. He’s this wild-eyed mad savant when it comes to coding, and he loves to geek out at people in ways that they (or more specifically I) don’t always understand. Of course, these are things that I’m more than happy to read up on, and thus actually learn a thing or two about the development of games.

    It’s fantastic that he typically doesn’t treat people like lay persons. I respect that. Instead of just talking about how great his AAA game is, like every other talking head in this industry, he gets into the nitty gritty of the actual work he does. It makes him not only rather intense and compelling to read/listen to, but it also seems to make him completely immune to the usual industry spiel and spin.

    “Do you know what makes me excited?” [Carmack attacks with vivid diagrams and big words!] [Games journalist is stunned!]

    CliffyB used to have a bit of that about him before he became this worryingly dead eyed zombie who’s always doped up on something, and never intense beyond running around with plastic guns, and never talks about things that he’s passionate about beyond making money (and apparently gay people hitting on him). I think somewhere along the line CliffyB must’ve suffered brain damage, but I digress.

    I’m much more interested in IdTech 5 than I am in Rage, right now. I still remember the incredible lengths he went to to explain the mega-texture technology, I remember reading through a five page document about that in a half-conscious state, with a dozen browser windows open leading me to read up on other things just so I could make sense of all of it. And that’s Carmack for you.

    I want more developers to do that, to be intense about things that are important to them.

    ArenaNet is like that about art, too, which is why I love ArenaNet. When you hear people like Daniel Dociu talk about their work and show it to you, it’s absolutely mesmerising, and it’s so much more enjoyable than just another interview about how violent a game is, or another AAA computer-generated trailer. Though I’m probably not the target audience for those; they probably work for some people, which is why they exist, but meh.

    I want more Carmack. And more Dociu. And people like them. Talking about whatever the hell they want to.

    The thing is that there was a revelation even here, in this Carmack ramble, that got me thinking: how much is Windows holding us back? I mean, what if we had a really light Linux distribution designed solely for gaming, based on the most cutting-edge version of OpenGL (and OpenGL has always been able to do more than DirectX, and has always had the features there first; it’s just that DirectX used to have better tools, though even that’s changing), with the OS stripped of most functionality other than a really light browser, IM client and mail (not unlike the PS3)?

    Now what would you be able to do with an OS like that? Versus, say, the bloated mess that is Windows, coupled with the restrictions of DirectX? Carmack used to be a major OpenGL guy too before the 360 started changing things for developers, and it’s easy to see why. Again, OpenGL will use more of what your graphics card has to offer than DirectX will. And yet we’re using DirectX on Windows. The question is why? Why are we doing this?

    The next question is: How successful would Valve be if they expanded on Steam a little and turned that into a Linux-based OpenGL powered OS?

    That might not be as insane as it sounds. They have a lot of the functionality in Steam right now, they have a browser, and they could expand their IM system to work with other networks, perhaps by making a deal with the Pidgin guys to build in some of their code, and a mail client is easy. Hell, you could even use webmail for that (though it wouldn’t be so elegant). And they could then not only provide Steam as a service, but Steam as a platform. So you’d have the 360, the PS3, Windows, and Steam.

    It’d be a bold venture, but I’d be up for trying it. I mean, it’d be a dual-boot system for when I need the applications that Windows has to offer, but many people I know spend most of their time in Linux anyway (I’ll come back to this), and why do I spend time in Windows? Games. That’s really it. If not for the games, I’d be spending most–if not all–my time in Linux, too.

    Now here’s the fun thing! If the OS was designed modularly, then you could have a system where you could have a gaming OS mode, and a complete OS mode, where you could switch between the two without a full ‘boot time,’ and you could even keep your IM and browsing sessions going. And if it was Steam, then they could make a concerted effort to create a standardised API for that platform and working with hardware, and they could pull hardware developers into line, too (no more shitty drivers!).

    It’s a dream, sure, but… when I look at how successful cloud computing is on netbooks, and even operating systems which are entirely browser based (I’m not kidding about this, look it up!), then I begin to think that maybe there’s a market for this, and maybe this could allow us to unlock the power of the PC again by having a dedicated gaming platform, with games that are designed to play up to the power of that platform.

    No, I can’t see it happening either. But it was worth musing on, wasn’t it?

    • Wulf says:

      Did everyone else enjoy the stuff on input latency as much as I did, too? I learned something new, today!

      Yeah, id really needs to let Carmack have more air time. He’s able to make his games seem far, far more interesting than any marketing or PR talking head ever could. After watching the whole thing, I’ve gone from zero interest in Rage to having a mild interest in Rage, and considering the sort of game it is, that’s one hell of an achievement.

    • Hmm-Hmm. says:

      Yes! Yes! Yes! More of this sort of thing, indeed. Referring to your first post.

    • LionsPhil says:

      Most of your post makes me facepalm, but regardless I feel compelled to namedrop Tim Sweeney as the Epic guy who actually geeks out about his job, or at least used to. How many game developers do you know who gleefully post on programming-language design forums? Some of his writings there about UnrealScript and where he felt that level of language was heading (circa 2006) are great reads.

    • Wisq says:

      Much as I love Linux — for my servers, anyway, having switched to Mac on the desktop — I’m confused how anyone could think that what the PC needs is to become a sort of “console lite”, particularly by using the OS that has the worst desktop adoption rate of them all.

      Dual booting is something Linux users take for granted, and most other users would boggle at. Not in execution, but as a whole concept. Rebooting to switch from my desktop OS to my gaming OS? Christ, we just finally managed to get most new games alt-tabbing correctly so you could actually use your computer for other stuff at the same time, and now you want to make us _reboot_ instead?

      The biggest reason PC gaming will never completely die out is that we already have these machines, and we already run them to do our daily work and our lifestyle stuff like email and browsing. Games were always the next logical step on top of that, and so they happened. But games have always operated the same way as other software, aside from things like fullscreen mode. Firing up StarCraft 2 might take a little longer than Word, but it’s still supposed to be just as easy, and run alongside all your other stuff. Having to reboot and change your system mode to game would be like a return to the days of DOS and boot diskettes. I can’t see how it would be anything but a huge step backwards.

      And that also means we’re probably never going to see Linux as a gaming OS, at least not until it has a major share of the existing desktop market. It’ll always be a nice lightweight choice for those games that explicitly support it, but I don’t see it being used to turn PCs into consoles, and I wouldn’t want it to anyway.

    • Kaira- says:

      A Steam OS would, in theory, be nice, but in reality I personally wouldn’t support it. Even though I like consoles, I also enjoy the freedom of PC, and locking it down to single platform with varying hardware just doesn’t seem right. Plus all the other gripes I have with Steam.

    • Wulf says:


      “Most of your post makes me facepalm”

      That you just say that and don’t bother to say why is enough to make Godzilla facepalm. It just seems like disdainful one-upmanship, done where you think you can get away with it.

      Or is this just PC master race nonsense that leads you to open your post that way, as described by Yahtzee in his latest review? Because I could see that. Either an attack in regards to how I think the PC would work better as a pseudo-console, or how I’ve claimed (quite correctly) that Windows is a bloated mess. That or it’s my statements about OpenGL, which are backed up by Wolfire. Maybe it’s my statement that drivers tend to be shitty, but that’s backed up by so many Steam subforums about games where there’s a thread saying ‘We’re waiting for a driver patch from nVidia/ATi before we can fix this.’ It’s hard to guess, really.

      Really, if you’re going to say something like that, at least say why. Otherwise it just makes you look like a dick. Though if I’m right, saying why would do that, too. I’m getting pseudo-intellectual vibes here, directed at a guy who’s just talking about what he likes.


      Except it doesn’t. Not on netbooks, anyway. My point was that if it was marketed right as a new approach to OSes, one targeted at gamers, with a sort of ‘console mode’ and a ‘full OS mode’ you could switch between without losing your browser, mail and IM sessions, then it could possibly work out well.

      I’m familiar with Linux, I love it, and I’m well aware of its modularity. The modularity of Linux is what could make this work, since you can start up and shut down parts of Linux without losing the rest of it in the process. If you used that modularity cleverly, you could create an OS far better suited to the needs of gamers. As Carmack mentioned, the heavy multitasking of your common-or-garden PC does leave it struggling to run a game as well as a console does. This would be a workable solution to that problem for anyone who wanted to devote their system to gaming (and it works doubly well for those who like Linux better anyway).

      And it’s not rebooting as such; it’s more like ‘starting up’. I just used that term to explain it. See, you can shut down the front-end user interface in Linux and start it up again pretty quickly without a reboot; a similar system could be used for this. I believe recent tablet devices do this too, to switch from a lighter OS to a full one, and they do it much more quickly than a Windows PC boots.

      The rest isn’t worth commenting on because it just comes from unfamiliarity with Linux, which I won’t blame you for. I’m just saying that I can’t stand around all day explaining the intricacies of a good Linux distro.


      Finally, a good point. One that isn’t just ‘your usual RPS insults’ or unfamiliarity with the topic, but an actual point, and a great one. You’re absolutely right. I wouldn’t mind being locked into Steam for my gaming, though, which is why I suggested Linux: they could build it off Linux so that while they ‘own’ the gaming mode, they could leave the OS mode alone for people to do whatever they want with.

      And Linux has always offered more OS freedom than Windows. This isn’t to detract from your point, because I get what you’re saying, but while this approach would remove some freedoms, they’re freedoms we’ve already accepted (would it be incorrect to say that many of us use Steam for most of our games library?). And with a Linux OS you’d have new freedoms to offset that. So for those who use Steam anyway, it’d be full of bonuses and very few negatives.

      But if you aren’t a fan of Steam in the first place, then I can completely see why you wouldn’t support it.

    • LionsPhil says:

      Jesus, Wulf, take a pair of pruning shears to your posts.
      tl;dr – hahaha Linux, see above.

  24. terry says:

    I want John Carmack’s development machine.

  25. MythArcana says:

    “[snip]…consoles holding back iD’s work on the PC…[/snip]” – John Carmack

    This says it all, and from the man who knows what’s going on. I would like to expand on this statement a bit, though.

    Consoles are holding back EVERYTHING on the PC and making us all traverse in the wrong direction. Enough said.

  26. Very Real Talker says:

    Why do people make fun of Gabe Newell for his generousness of body, but not enough people make fun of Carmack for his ghoul-like physique? I think Gabe Newell is healthier, and at least he makes the economy go around with all the money he spends on food. That man is constantly creating wealth. Carmack is so selfish that he doesn’t even want to spend on notmakingmelooklikeaghoul substances, also known as food.

    • FakeAssName says:

      I’m kinda worried about him. Maybe it’s just that haircut in combination with bad lighting, but he kinda looks like he’s on the rebound from cancer.

    • sbs says:

      yeah, i was a little scared when i saw the picture.
      i hope it’s just, you know, age.

  27. killmachine says:

    i could listen to this guy for hours talking about computer games (if its not getting too technical of course ;) )

  28. wodin says:

    So RAGE has been designed really for consoles? I thought it looked damn fine in the trailers…better than a console could spew out anyway…hmmm…will RAGE stay on my to buy list?

  29. afarrell says:

    John Carmack in mistaking the technology for the game absolute and unprecedented shocker!

    • Angel Dust says:

      Did you even watch the video? He spends most of the second half of the video talking about how a lot of the tech stuff he loves and is interested in is NOT the most important thing.

  30. ResonanceCascade says:

    What Carmack basically said in the interview was (I’m summarizing): “If we could develop for high end PCs only AND stay in business, games would look a lot better. That said, if I could do it all over again, I would have aimed a little higher on the PC side, because the game took a really long time to make.”

    What some of you apparently think he said “Rage wuz develipped fer consoal only dats TTLY ghey and suxxorz.”

    • FakeAssName says:

      no, he said:

      “When we started working on Rage, the PC and console tech were rather similar in capacity, so we designed the game to that benchmark. But then we ended up taking too long to get it ready to ship, and while the console tech has remained static, the PC side has made astronomical leaps from where it was; so now, while the game may still be top of the line for consoles, it’s kinda dated in comparison to the current standards for PC games.

      So I’m kinda disappointed on that front, and have made it a point to develop for the PC, because no matter what, consoles will always lag behind, and it’s easier to port down from PC specs than it is to port up from the console.”

      (That’s the sanitised version with no judgmental slant. The game was developed for console capacity, so Rage on the PC is technically a console port. IMO this wouldn’t have happened if they’d shipped the game back in 2009, when the PC version was ready, and then busted out the console versions later on.

      AKA: I blame Bethesda for the delay)

    • ResonanceCascade says:

      FYI, my summary was distilled from the entire 30+ minute interview, not the little snippet up top.

      “the game was developed for console capacity so rage on the PC is technically a console port.”

      That’s not what “port” means. That’s just realistic multiplatform development, no different from what Valve or nearly anyone else does. Halo 2 is a console port. There’s a very big difference.

  31. destx says:

    I think some of you negative nancies really need to watch the whole video.

  32. Scandalon says:

    So, what Carmack really said was that we’re bumping up against the limits of our biological-based selves, and that it’s time to turn our attention to converting ourselves into cyborgs/pure thought so that we can get input latencies down further and the ability to appreciate microscopic details.

    You know, if you ignore all that stuff about artists and soul and all that ridiculous claptrap.

  33. Angel Dust says:

    What a legend. He’s been in the ‘game’ for decades now and he still appears to be as enthusiastic and passionate as he must have been when he started. More of this kind of thing.

  34. max pain says:

    RAGE 2.

  35. vrekman64 says:

    He should go out and talk more. He doesn’t sound like a geek or anything.
    I haven’t met him (and probably never will), but something tells me he is a decent person, honest and composed.
    He is a legend of his field but doesn’t carry himself like a star. I really appreciate him.

  36. manveruppd says:

    I never knew that Carmack’s middle initial was D! That’s why I always come to RPS for the latest, most important news of the day! :p

  37. bill says:

    I for one am grateful to our console overlords and the way they’ve managed to allow me to keep playing games without having to upgrade every 10 minutes.

    I don’t think consoles are holding back PC gaming, I think they’re helping it immensely by vastly increasing the number of potential customers. Instead of your game only being playable by the 1% of PCs that gaming geeks keep on the cutting edge, it’s now playable by up to 50% of PCs that were made in the past few years.

    • bill says:

      plus, as he says, the whole hardware to hardware comparison is dumb anyway… consoles can be programmed directly and are known standard hardware configurations.

  38. sbs says:

    I seriously SO love this guy

  39. Kaldor says:

    A large chunk of Carmack’s brain has probably been completely converted to program code and digital architecture by now.

  40. jiminitaur says:

    It seems to me the best way to tackle the API overhead issue would be to develop a graphics-layer bytecode and let the platform optimise the graphics process through JIT or host-specific compilation, instead of relying on developer foresight and catch-all API drivers.

    I’m especially for the former; I like the idea of games that run better the more I play them.