Powered By At Least 70 Hamsters

By Alec Meer on January 4th, 2008 at 4:45 pm.

We don’t write about PC hardware all that often on RPS, unless it’s something absolutely batshit or doomed to failure. In the parts of my life that don’t involve obsessively checking whether anyone’s said something rude about us in our comments threads, though, I keep a detached eye on what’s new in silicon heaven.

Today, it’s leaked details on the next card from current 3D champs NVIDIA. The GeForce 9800 GX2 is, I suspect, a very silly name. I know several people still just about making do with five-year-old ATI Radeon 9800 cards, so to see that the current top-of-the-range from a rival company sports the same name will doubtless have them weeping hot, salty tears of absolute confusion, should they finally submit to upgrading any time soon. On top of that, there isn’t any major change in the GPU’s capabilities from the current GeForce 8 series, so that 9 at the start is a bit of a red herring. A red herring that could earn a lot of money, of course – we wouldn’t have this long line of deliberately confusing 3D card naming conventions if that wasn’t the case.

Regardless, is this, at last, a card that can run Crysis at its fabled Very High detail settings at a decent resolution? It is, after all, two GeForce 8800 GPUs shoved onto a single board – SLI on just one card. That should be incredible, right?

Apparently not. The mooted performance gain is just 30% over the previous GeForce line-topper, the 8800 Ultra. 30% can be quite a sizeable amount when it comes to framerates, but I fear it won’t be enough to compensate for Crysis’ legendary hunger. Of course, you could always grab a second 9800 GX2 and go for quad-chip joy, if you’ve got (at a guess) around £800 spare all told. You could also buy a hat made of solid gold if you wanted, but I wouldn’t really recommend that either.


Most of the GeForce 8 series is due to be replaced by single-chip 9 series boards later in the year

So is this two-chips-one-card release a one-off (well, two-off, as NVIDIA tried it with the somewhat neglected 7950 GX2 a while back), or the thin edge of a coming dual-GPU wedge? I hope it’s the latter, and the rumour mill’s thrown out a few hints at it. SLI’s still pretty offputting to a casual crowd, what with the perceived complexity of the concept, the expense of two separate cards, and the fact that, currently, NVIDIA nForce motherboards (which is Kieron’s prompt to say ‘motherships!’ in comments) are the only ones to support twin GeForces alongside an Intel Core 2 Duo (with the singular exception of Intel’s upcoming and insanely niche SkullTrail board. On the plus side, ‘SkullTrail’ is an awesomely inappropriate name for a slab of printed circuit board). I considered a move to SLI a little while back, but I’d need to replace my otherwise perfectly adequate Intel P35-based motherboard, and I just plumb can’t be bothered.

So a move to twin GPUs makes an awful lot of sense – it may mean people aren’t locked into buying NVIDIA motherboards to accompany their NVIDIA graphics cards, but it’s a much more commercially viable concept than the still poorly taken-up SLI (only 1.19% of Steam users had multiple GPU systems back in August – and that figure incorporates ATI’s rival Crossfire tech too). I’d love to see a £250 9600 GX2 or something later in the year – it just makes things easier. The nutters can have their Tri and Quad SLI if they want; hopefully the masses will get access to one-size-fits-all twin-GPU fun sometime soon.


37 Comments

  1. Nallen says:

    Urgh, so I have to wait for the 10 series before I upgrade my 8800 now? This is getting as predictable as the Star Trek movie thing with the ‘even number good, odd number bad’ routine.

  2. Meat Circus says:

    Hey, what it means for me is that Quad GPU SLI goodness to those of us with an SLI setup suddenly becomes feasible and almost affordable. Squee!

    Oh, and Alec Meer smells of wee.

  3. Nimic says:

    So does this mean my 8800 GTS is going obsolete?

    *cries blood*

  4. Meat Circus says:

    @Nallen:

    Expect to be waiting a longish (multiyear, maybe) time before NVIDIA upgrades their GPU microarchitecture again. They’ve now got a fully unified shader model GPU running at hair-raising speeds on a 90nm process. And GPUs are inherently parallelizable.

    So, I suspect they plan to spend the next couple of years doing what Intel and AMD have done instead: shrink process sizes (65nm, 45nm) and use the reduction in die size and power consumption to cram more and more cores onto a card.

    Sure, it’ll start with 2 chips, but I reckon it’ll be 4, 8, 16 and 32 over the next couple of years, all using G8 cores and smaller feature sizes.
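
    Rough sums behind that, if anyone fancies them (Python; idealised, and ignoring yields, power and everything else that actually matters):

        # Die area scales roughly with the square of the feature size, so each
        # process shrink frees up room for roughly that many more cores.
        base_nm = 90  # the current G80 process
        for node_nm in (65, 45, 32):
            scale = (base_nm / node_nm) ** 2
            print(f"{node_nm}nm: ~{scale:.1f}x the transistors in the same die area")
        # 65nm: ~1.9x, 45nm: ~4.0x, 32nm: ~7.9x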

    NVIDIA has no need to go to the vast expense of an entirely new GPU architecture for ages, so I reckon it will take its time.

    The 9xxx series designation pretty much confirms it to me. Since there’s no G9 GPU on the way, there’s no harm in borrowing its designation for a family of G8 multicore parts.

    This is all speculation, btw. Somebody from NVIDIA will now post and tell me I’m talking a big load of mouth-toilet.

  5. Cigol says:

    Wouldn’t more GPUs on one board just mean more of a bottleneck? I’m no hardware expert (in fact quite the opposite :D) but it stands to reason, doesn’t it, that the more stuff happening on one board, the more information travelling back and forth? Or is that too simplistic a view?

  6. Muzman says:

    I dunno if this is OT or uh OT, but this Crysis system requirement business has got to be the greatest piece of macho marketing wank since bad fuel economy was a selling point on your ’63 Lincoln Continental or some such.
    “Uh how can we spin poor optimisation, extravagance and inefficiency?”
    We’re ahead of the game. Looking to the future. It’s the luxury engine of tomorrow, today. Like a car too wide for the feeble roads of the present, driven by an engine so powerful it can only be run on high-octane fuel that extraterrestrials will one day provide us.
    Don’t you want to be ready when that day surely comes?

  7. Meat Circus says:

    @Cigol:

    Yes, it would. Fortunately, the upcoming PCIe 3.0 specification is slated to provide buses that can move about 1GB/s per lane, and up to 32 lanes per card. So it should be possible to keep the textures flowing for a few years yet…
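
    Rough each-way numbers for a normal 16-lane graphics slot, purely as illustration (Python):

        LANES = 16                        # a typical x16 graphics slot
        pcie2_gb_per_lane = 0.5           # PCIe 2.0: roughly 500MB/s per lane today
        pcie3_gb_per_lane = 1.0           # PCIe 3.0's mooted ~1GB/s per lane
        print(LANES * pcie2_gb_per_lane)  # ~8 GB/s each way now
        print(LANES * pcie3_gb_per_lane)  # ~16 GB/s each way once 3.0 turns up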

  8. Butler says:

    @Nimic:

    It kinda already is due to the stupidly good price vs performance of the 8800GT. Every man and his virtual dog is buying one (or two for SLI) though, so they are pretty hard to track down.

    In terms of bang for buck, this GeForce 9800 GX2-whatevermajiggy will have to be shit hot to be even considered over 2x GTs.

  9. Sander says:

    Nit: the 7950GX2 was appropriately neglected, because it had the problem that Nvidia now seem to be planning to avoid, namely that the card only runs on some obscure chipsets on the eh, mothership(?). And by the time I could afford that card, mobos with that chipset didn’t support any CPU anyone would want to purchase.

  10. groovychainsaw says:

    So if I tell people I’ve got a 9800 Pro in my machine, they will go from ‘huh’ to impressed now ;-)
    Still not enough games to make me upgrade quite yet, would rather get another console, tbh

  11. Matt says:

    I think I might wait and see if an actual new card comes out instead of two cards stuck together. Curse ATi and their failure to compete.

  12. Max says:

    My current machine consists of an AMD Phenom 9600+ Quad-core CPU which I’ve OC’d to 2.65GHz, 8GB of PC2-6400 DDR2 memory and two ATI Radeon HD 3870 XTs in Crossfire.

    Can I run Crysis on Very High? Hell no. I’ll be interested to see how these new monsters perform, though I’ve always been an ATI man really.

  13. John P (katsumoto) says:

    Well, I got the 320MB 8800 GTS in October, when it was THE BEST CARD EVER. This month’s PCG has already informed me it’s now shit, which is a shame. But I imagine I’ll keep it for another year at least, then see what I can do.

    Another thing we all have to consider when upgrading GPUs is our PSUs. I have a 550W one (very good make supposedly, can’t remember the name though), but I don’t know if it could handle much more than an 8800.

    Do these “Dualcore!” 9800s take up twice the juice then? oof.

  14. Max says:

    When it comes to PSUs, it’s not really about the rated wattage, more about the number of amps on the 12v rail.

    For instance, you could have a 1000W PSU, but if it only had 10A on the 12v, it’d be worthless.

    And I doubt the 9800GX2 will use twice the power of an 8800, since they will use a newer, smaller fabrication process I’d have thought.
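
    To put rough numbers on that (Python; figures purely illustrative – check the label on your actual PSU):

        rail_volts = 12.0
        rail_amps = 10.0               # the hypothetical weedy 12v rail above
        print(rail_volts * rail_amps)  # only 120W actually available on the 12v rail,
                                       # however big the 1000W figure on the box is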

  15. John P (katsumoto) says:

    Aha, okay, cheers! Well, I have no idea how good mine is in that respect and I’m not at home right now, but this surely warrants an investigation.

    This “2 cards in 1” thing is quite intriguing. And, just like we now laugh at Pentium 100s from 1998 and old PCGs which read something like “and this takes up a gargantuan 100mb on your hard drive to install”, I have no doubt we’ll be laughing in 2018 at the idea of having only 2 cores in our processor, and only 2 chips (or whatever) in our graphics cards.

  16. derFeef says:

    ATI will have dual-GPU cards in January… too bad nvidia…

  17. Lacero says:

    So after nearly 18 months (the time for CPU power to double by Moore’s law) they’ve managed a 30% improvement by shrinking the die size and putting two chips on one card.

    The only think I’m less impressed by is ATi.

  18. Lacero says:

    ..and my own spelling. muppet

  19. Meat Circus says:

    @Lacero: Are you aware that time seems to be running at two to three times normal speed in your head?

    The 8800 Ultra was announced in April 2007, which unless you’re doing New Math is not eighteen months.

  20. Max says:

    “The only thing I’m less impressed by is ATI”

    Er… ATI’s 3800 series are an absolute triumph. From a price/performance perspective they’re unbeatable. They’re also the only cards out there with DX 10.1/SM 4.1 support (not a huge feature, I’ll grant) and Crossfire X supports up to quad-card configurations. Why you’d be less impressed with them than with a company that just keeps price gouging with continual revisions of an aging product (the 8800) is beyond me.

  21. Lacero says:

    I was thinking of the original G80 chip, Meat Circus. I know they did make some changes to it for the Ultra, but in the end it’s the same chip at a 10% higher clock speed. But you’re right, we’ve gone ~43% in 18 months, not 30%. I still think this is poor.
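
    Back-of-envelope version of that ~43%, in Python, assuming the Ultra really is about 10% up on a stock G80 and the rumoured 30% over it holds:

        ultra_over_g80 = 1.10   # 8800 Ultra vs the original G80, roughly
        gx2_over_ultra = 1.30   # the rumoured 9800 GX2 gain over the Ultra
        print(ultra_over_g80 * gx2_over_ultra)  # ~1.43, i.e. ~43% over the G80
        # a straight Moore's-law doubling over the same 18 months would be 2.0x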

    I’m less impressed by ATI because they’re the ones allowing the price gouging by not releasing a high end card to compete. I can’t blame NVidia for sitting on their growing pile of money, but I can wish ATI would compete in the part of the market I care about and force NVidia to up its game.

  22. Max says:

    I must say, I agree… I’d love to see another Radeon 9700 Pro come to the market and kick nVIDIA in the teeth… Good times… =P

    And don’t forget, you can use up to three 8800 Ultras at once, so take that into account when you’re thinking about your Moore’s law argument. After all, the only way we can continue doubling CPU power every year and a half is by putting several cores on one die – looks like it’s going that way with GPUs too. =]

  23. Radiant says:

    I can’t read these comments; my brain just span backwards.

    I have a p4 3ghz [ht!] with an ati 1950 card.
    To stop it overheating when I play COD4 [at 800x600] I have to take the case off and stick a CEILING FAN over the top.

    This upgrade-for-pretty-pictures bullshit just makes me run screaming to my consoles, where the pretty pictures in the adverts are the same pretty pictures when I stick it in the machine.

  24. Radiant says:

    Don’t get it twisted though, I do have an HD [1080p] projector.

    I play Bioshock on my wall and playing street fighter is like having a couple of midgets fight it out in your living room.
    And I can read the text on Pro Evo.

  25. malkav11 says:

    At least nVidia’s numbering scheme is semi-quasi-comprehensible – bigger numbers, thus far, = better. I have absolutely no idea whether an ATI card quoted to me is any good or not. I never have. (Of course, I’ve never owned one, either, except for the ATI Rage 128 or whatever the fuck in my ancient relic of a Mac G4 tower.)

  26. Optimaximal says:

    malkav, I’d actually say Nvidia aren’t comprehensible at all. They’re still sticking with this stupid ‘numbers & letters’ system that means absolutely nothing, because the company just sticks whatever numbers it wants on the chipset. Not to mention their next-gen cards being named after a rival chipset that royally pasted their 5th-gen offering.

    AMD just started doing a new system that actually makes sense: the first two digits denote the generation, whereas the second two show the position of that card in the lineup. No XT, no GTS, no AlphaPlus3MegaShinRyuKenBetaZetaMega Edition…

    That’s it!!! NVidia are copying Capcom!

  27. Max says:

    Exactly right, Optimaximal.

    Say you have a Radeon X1950.

    It’s from the X1000 series, and it’s a very high end card in that series (950).

    nVIDIA do the same – an 8500 is a mid-range 8-series card, so the idea that bigger numbers = better card is flawed, since a 7900 will beat the utter crap out of the 8500.

  28. Max says:

    Oh, and I just had a thought: there are so many 8800 cards now. I can think of these (in order of performance-ish):

    8800 Ultra
    8800GTX
    8800GTS 512
    8800GT 512
    8800GT 256
    8800GTS 640
    8800GTS 320
    8800GS

    And there’s also a mobile 8800M GTX, though I don’t really know where that would fit in the table. There’s no way it’s as powerful as a full desktop GTX. Also, I’ve not included all the factory-overclocked cards, which could add to the confusion.

    But still, any normal consumer (i.e. someone who doesn’t obsessively follow the development of these products like myself) is likely to be mighty confused by this.

  29. Monkfish says:

    To be fair, until ATI/AMD decided to change their naming system recently, they were just as bad as Nvidia at tacking suffixes onto the card’s name.

    The X1950 mentioned above, for example, came in XT, XTX, Pro and GT flavours, as well as different memory configurations. They were certainly fond of the letter “X”, that’s for sure. :D

    If you’re trying to decide which card to buy, you could do a lot worse than visit Tom’s Hardware’s VGA Charts (they have one for SLI, too). It’s a great way to get an at-a-glance idea of where each model fits performance-wise, and it eases the confusion brought about by the cryptic naming conventions.

    Anyway, I won’t be trading in my solitary 8800GTS for the new GX2. I’ll hang back ’til the autumn, and see what either ATI or Nvidia cook up. I’ll grab whatever’s best – there’s no brand loyalty here.

  30. po says:

    Nice to see they’ve finally added the Ultra and GT, but the GTS 512 is still missing. I’m guessing from the OC GT score that a GTS overclocked to 800 core / 2200 memory on the stock cooler will be better than an Ultra (which has next to no OC potential). I’m going for GTS 512s in SLI with water cooling :D

  31. Max says:

    The 8800GTS 512 is the same core as the GT, but with faster clocks and more shader units enabled – on the GT some of the shaders are (oddly) disabled. I wonder if there’s a way to unlock them. :/

  32. po says:

    Unlikely these days. Most chip manufacturers seem to have wised up to unlocking/reflashing. It’s even possible that some of the disabled shader cores are defective (fully working core goes into GTS, one with bad shader unit becomes GT).

  33. John O'Kane says:

    The move to two chips on one card is an interesting, but not sustainable, one. I remember 3dfx were doing similar manoeuvres shortly before they went under. The whole point of GPUs is that they are stream processors that parallelise the graphics pipeline (at the instruction, thread and pipeline level) as much as possible to get speed-ups.

    Think of it as a huge 600-stage assembly line, with people doing a different job at each stage, processing incoming materials. While it may take 600 units of time for any one piece of data/material to get rendered/made, once the line is full a finished piece comes out at every step, so on average 600 units of work are being done at any given moment. That means the throughput is high, and this is what makes the cards better than a typical CPU, which – to keep with the analogy – does just a few stages of work in its assembly line. Actually, CPUs have pulled back from this recently with the move to multi-core architectures, hence the drop from 4GHz to 2GHz; the deep pipelining and other implicit parallelising tricks made the 4GHz parts seem faster even though their real rate of work was closer to the 2GHz range. Apologies to the pedantic if I’ve oversimplified or exaggerated for effect.

    In this case the two GPUs just agree that they’ll share the job, scan-line by scan-line across your display resolution. They don’t really work together, and they put extra strain on the CPU (driver) to feed them. That’s a fairly brute-force, power-hungry approach. To refer back to the assembly line from before, we’ve basically got two assembly lines, and they need two factories’ worth of materials. Not very elegant at all. Due to power constraints there are hard limits on this for now. Even if there weren’t, managing the data to get from the CPU to each graphics unit on the card (a full copy of the scene geometry, textures etc. for each unit) is going to be a bottleneck, or at least a synchronisation issue. Which is why this isn’t done in the first place.

    I’d expect future, sustainable speed-ups to come from new techniques like unified shaders (yeah, I know that’s been done, hence the “like”), advanced culling systems, more pipelining (assembly stages), power reduction, smaller circuit boards etc.
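
    A toy illustration of the latency-versus-throughput point, for anyone who likes numbers (Python, all figures made up for the example):

        STAGES = 600    # depth of the imaginary assembly line / pipeline
        ITEMS = 10_000  # pieces of work pushed through it
        latency = STAGES                     # time for any one item to travel the whole line
        total_time = STAGES + (ITEMS - 1)    # once the line is full, one item finishes per time unit
        throughput = ITEMS / total_time
        print(f"latency per item: {latency} units")
        print(f"average throughput: {throughput:.3f} items per unit")  # tends towards 1.0 as ITEMS grows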

  34. bobince says:

    Ugh, what an ugly blighter. Maybe I’m just getting old but I can’t summon any enthusiasm for the way the GPU market is these days. More cores! More cards! More heatsinks! Bigger fans! More power connectors! Shovel more puppies into the furnace! If I give her any more she’ll blow!

    SLI and uber-high-end GPUs are a noisy, power-hungry irrelevance for the vast majority of PC gaming. Let’s have some real innovation, please. It’ll be interesting to see what happens when ATI/AMD start putting CPUs and GPUs on the same die, for example.

  35. Skylance says:

    I don’t think Crysis is really *that* much of a resource hog…

    I’ve got a mildly overclocked Core 2 Duo 6420 at 2.6GHz, one 8800GT, and 2 gigs of PC-800 RAM… Not really the most impressive system in the world, but I still run Crysis at an average of 40fps with everything on “high” and my resolution at 1680×1050. Even on the second-highest setting it’s still (technically, at least) the best-looking title I’ve ever seen.

    I don’t use Vista, so I can’t vouch for the Very High settings exactly, but I’ve tried the DX9 hack and I still get a solid 20-25fps even with shenanigans going on – I really don’t think it would take much more to kick it up into the 30fps range.

    Point is, people bitch about how nasty Crysis’ requirements are, but the truth is that they’re really not that bad. If you max everything out, play at 2500x-whatever resolution, and turn AA and AF up as high as they can go, then sure, you’re gonna take a nasty framerate hit, maybe even into the unplayable range. Since when has that not been the case with PC games? I never expect to be able to ratchet a game’s settings to the hilt unless I’ve got hardware at least a year newer than the game.

  36. Irish Al says:

    Yeah – I get the distinct impression that it’s people trying to play it at stupid resolutions. As long as it shifts at 1280 x 1024, what more do you need?

  37. Max says:

    When you have a large monitor (I myself have a largeish 24″ with a native res of 1920×1200) you can’t play at these lower resolutions without sacrificing massive amounts of fidelity.

    Especially not a 5:4 resolution like 1280×1024 – have you any idea how ugly games look when they’re being stretched into a 16:10 aspect ratio from 5:4?
