
The GTX 680: NVIDIA's New Big-Boy Graphics Card

It's no Voodoo 2 though, is it?

You're probably going to have to be a little patient with me here. I used to talk about graphics cards and processors regularly back in my past life on PC Format magazine, but my technical mojo has diminished sharply in the intervening years. I retain a working knowledge of what's half-decent and what's a big pile of donkey doo-doo, but if you want me to talk dirty numbers to you, you're going to be disappointed. The card does seem jolly good, and I know that I want one in my PC, but I am no Jeremy Laird and RPS has not enjoyed a review unit with which to test NVIDIA's claims in full. So, in reporting the news and details on NVIDIA's new flagship graphics card, formerly codenamed Kepler but released as the GeForce GTX 680, I shall have to report what I was told and leave you to draw your own conclusions.

As regards RPS and hardware coverage, it's something we want to do a little more of - Hard Choices being our vanguard. Obviously we're a gaming site first and foremost, but equally obviously PC gaming requires PC hardware so it's silly to overlook it entirely. We'll try to cover the major events/releases as they happen, but do bear with us while we work out exactly how and to what extent that happens.

Let's kick off with a noisy official introduction video:

Watch on YouTube

Okay. The GTX 680 is the high-end of the new range, and carries an asking price of £429, $499 or 419 Euros. More affordable, mid- and low-end cards will follow (and taking wild bets on the names being stuff like 660 and 620 is eminently reasonable) but haven't been detailed or announced as yet. So, for most of us, the 680 specifically isn't going to be something we'd ever consider buying, and instead acts as a guideline to what this generation of GPUs is going to offer PC games.

Here are some headline numbers for you:

- 1536 cores
- 195 watt power draw, across two 6-pin power connectors (no 8-pin ones required)
- a base clock speed of 1GHz
- 2GB of GDDR5 memory running at 3GHz
- 28 nanometre die
- PCI Express 3.0
- four display outputs

What's interesting to my only semi-informed eye, however, is none of those things. Yes, yes, it's all incremental improvement on both NVIDIA's 500 series cards and ATI's until-now benchmark-leading 7970 board: such is the way of the graphics card market. The main thing for me - a man who spends every day sitting next to a hulking PC chucking out extreme heat, ravaging the environment and racking up fuel bills that make me weep - is that this thing both uses a lot less power and introduces a whole new scaling system, whereby the card drops its performance, and thus its power usage, depending on what your applications actually require.

NVIDIA claim it's "the most efficient graphics card ever" in terms of power draw, and in real terms that means it should drop to a teeny 15 watts when you're just idling on the desktop, with the GPU running at just 324MHz.

Watch on YouTube

Even in demanding games, you can set the card not to go full pelt if there's not going to be any benefit to it. For instance, while it's very nice to know that your PC is running a game at 600 frames per second, unless you're an oddball who spends games scrutinising the image with microscopes and speedometers rather than actually playing, you're unlikely to spot much difference from it running at 60 frames per second. So, you can set your drivers to limit the frame rate to, say, 60, or 120, or whatever helps you sleep at night, and if that requires less than the card's maximum output, it'll scale itself down in 13MHz increments to suit.

All this means a much quieter card too, which is particularly appealing to me - I prefer gaming with speakers rather than headphones, but all my PC's fans going bbbbbrrrrrrr buzzzzzz chitterchutter rather spoils the experience. This card is supposed to offer 41 dBA at peak usage, against the 580's 48-odd decibels. I've only seen the card running in a noisy room so I can't attest to what that means in practice yet, however. Hopefully I can find a way to get hold of one and have a listen myself before I make any real judgements there.

The other thing is that the card can scale itself in the opposite direction too - boosting its clock speed if a game demands it. NVIDIA reckon that, while the base clock is 1006MHz, more typically you're going to see it running at 1058MHz. If you have a well-cooled case, it will keep raising itself above that, until it either hits heat problems and scales back down, or reaches an as-yet undisclosed speed ceiling. I saw Battlefield 3 happily running on a card that had clocked itself about 20% higher without issue, which was promising. Again, new driver settings and third-party control panels enable you to mandate exactly what the card gets up to if you don't trust it to do its own thing, or to demand that it works at full pelt at all times.
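To make all that scaling talk a bit more concrete, here's a toy sketch in Python of how a stepped clock governor along these lines might behave. To be clear, this is my own illustration rather than NVIDIA's actual implementation: the 13MHz step, 1006MHz base clock, 195 watt power draw and 324MHz idle clock are the figures quoted above, while the temperature limit and every name in it are invented for the example.

# Toy model of stepped GPU clock scaling - an illustration of the idea,
# NOT NVIDIA's actual boost logic. The 13MHz step, 1006MHz base clock,
# 195W power figure and 324MHz idle clock are quoted above;
# TEMP_LIMIT_C and all names are invented for this sketch.

BASE_CLOCK_MHZ = 1006
IDLE_CLOCK_MHZ = 324
STEP_MHZ = 13
POWER_TARGET_W = 195
TEMP_LIMIT_C = 98      # hypothetical thermal ceiling

def adjust_clock(clock_mhz, power_draw_w, temp_c, fps, fps_cap=None):
    """Return the next clock speed, moving one 13MHz step at a time."""
    over_budget = power_draw_w > POWER_TARGET_W or temp_c > TEMP_LIMIT_C
    at_frame_cap = fps_cap is not None and fps >= fps_cap
    if over_budget or at_frame_cap:
        # Too hot, too hungry, or already at the frame cap: step down,
        # but never below the desktop idle clock.
        return max(clock_mhz - STEP_MHZ, IDLE_CLOCK_MHZ)
    # Headroom to spare: boost one step above wherever we are now.
    return clock_mhz + STEP_MHZ

# Example: thermal and power headroom, frame cap not yet reached,
# so the card nudges itself one step up from base.
print(adjust_clock(BASE_CLOCK_MHZ, power_draw_w=170, temp_c=70, fps=55, fps_cap=60))  # 1019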

Third-party manufacturers are already offering all manner of tweaked cards that are pre-overclocked to higher speeds and can stand to boost to yet higher, but the downside of all this is that if you have a crappy old, poorly-ventilated case in which your PC runs hotter than a thousand suns, the card is going to downclock itself in order to avoid melting into a pool of horrifyingly expensive grey goo. This may mean it actually underperforms, and your games run like crap.

So, if you're even considering picking up one of these boards, make sure your PC's airflow and cooling are up to scratch if you want to make the best of it. I'm a bit worried that people who aren't all that techy are going to pick up one of these cards cos they've heard it's good, slap it straight into an ancient box coated in dust and grime, and then have a mysteriously bad time - hopefully the message can be spread far and wide, though.

OK, onto features, which is where I'm on even shakier ground than before. The biggest news about the GTX 680, in promotional terms, is that it can run the fabled Unreal Engine 'Samaritan' tech demo in real time on a single card. When Samaritan was shown off to a chorus of coos and wows last year, it was running on three GTX 580s, which were drawing 700 watts and making a hell of a racket. I saw it running live on one 680, drawing 195 watts and being fairly quiet about it. It looked amazing, and I was duly astounded that a single card could render such a scene.

Watch on YouTube

The catch is that Samaritan's newly-efficient performance is as much to do with software and driver optimisations as it is the new hardware. That particular build of Epic's next engine had been tweaked and poked and prodded to specifically run well on this particular card, with all manner of smoke and mirrors employed to ensure its pre-determined scenes looked top-notch.

So, this does not mean that games can necessarily look quite that good on a GTX 680, at least not without devs working incredibly closely with NVIDIA before release. However, that it is at all possible on a PC that doesn't cost an outlandish number of megabucks is exciting stuff indeed.

Of course, with so many high-end PC games still being branches of development that's focused so heavily on increasingly aged consoles, I don't know how many games we're going to see that truly take advantage of the 680's box of tricks. I mean, something like Skyrim can already be made to run smoothly at max settings on a mid-range card from the last year, so upgrading to a 680 isn't going to add something new to the experience.

Unless more Cryses or Battlefields or games of that hyper-graphics ilk are en route, I suspect it's going to be the power-scaling stuff that's the most immediate boon from this new card and whatever follows it. Apparently we're going to see some impressive new physics stuff in Borderlands 2, and Max Payne 3 will look its bestest on the 680 (the screens throughout this post are MP3 on a 680), but what we don't know is whether that will also be possible on older cards from both NVIDIA and ATI.

Similarly, these new tech demos below are terribly impressive, but this is the software specifically designed for the card, running in idealised circumstances with minimal environments. Sure, it can render this stuff, but rendering it as part of a larger game with tons of other stuff going on is an entirely different matter:

Real-time fur:

Watch on YouTube

The possible future of the PhysX stuff - destructible objects that fracture convincingly into ever-smaller parts:

Watch on YouTube

More realistically, there's little-known Chinese body-popping tour de force QQ Dance, which looks like this on a 680:

Watch on YouTube

Some other facts I'm just going to drop in here rather than pretend I can offer entirely useful context about them:

- FXAA, a speedier if less precise take on anti-aliasing which is becoming ever-more prevalent in games, is now a toggle in the driver control panel, so you should be able to force it on even in games which don't seem to offer it. NVIDIA claim FXAA offers roughly the same visual gain as 4x standard anti-aliasing, but running 60% faster.

- DirectX 11 tessellation happens at 4x the speed of the ATI 7970, apparently.

- Power throttling can turn off cores completely, but it will never go all the way down to zero cores.

- The throttling/scaling stuff and the effect of the local environment thereupon means it's more than possible you'll never get the same benchmark result twice.

- The card runs about 80 degrees C normally.

- Another new feature is 'adaptive V-sync', which essentially turns V-sync on and off as required at super-quick intervals, so in theory you'll suffer neither screen-tearing nor that damnable V-sync lag (there's a rough sketch of the idea just after this list).

- Each 680 will ship with a free kitten.

- There's no longer a seperate shader clock. It's all one unified clock, with a longer shader pipeline - so a boost to the main clock should affect shader performance too.

- It's 10.5" long, so will fit into any case that takes a 580 etc. Unsuited to weeny cases, however.

- It can run 3D Vision stuff on a single card, if you're deluded enough to want that.
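Since that adaptive V-sync point is the most mechanical of the lot, here's a rough Python sketch of the idea as described above. Again, this is my own toy illustration rather than NVIDIA's driver code, and the 60Hz refresh rate is simply an assumed example value.

# Toy illustration of the adaptive V-sync idea - not NVIDIA's actual
# driver logic. The 60Hz refresh rate is an assumed example value.

REFRESH_HZ = 60

def vsync_enabled(current_fps):
    # When the GPU can keep up with the display, sync to it: no tearing.
    # When it can't, switch V-sync off so the frame rate isn't forcibly
    # halved (the usual V-sync judder/lag) while the card struggles.
    return current_fps >= REFRESH_HZ

# e.g. 75fps -> True (synced, no tearing); 48fps -> False (no sync lag)
print(vsync_enabled(75), vsync_enabled(48))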

So there you go. The 680 is out now, as are assorted pre-overclocked variants. As far as I can tell from reading reviews and chatting to people who follow this stuff more closely than I do, it's a suitably impressive card if the highest end is where you need to be - but most of us (i.e. those of us who don't have £429 going spare and/or don't rock 27"+ monitors) would be wise to hold out for the mid-rangers based on the same Kepler architecture, or whatever the similarly-priced and specced ATI response turns out to be. Let's hope for those later this year.
