3D Card Update: AMD Fury, How Much Graphics Memory Is Enough, Nvidia's New Budget Graphics
By the Furies, it's big
It's time to catch up with the latest graphics kit and developments as fully unified shader architectures wait for no man. Nvidia has just released a new value-orientated 3D card in the GeForce GTX 950. We're talking roughly £120 / $160 and so entry-level for serious gaming. But could you actually live with it?
Meanwhile, AMD's flagship Radeon R9 Fury graphics card has landed at Laird towers. Apart from being geek-out worthy simply as the latest and greatest from one of the two big noises in gaming graphics, the Fury's weird, wonderful and maybe just a little wonky 4GB 'HBM' memory subsystem poses a potentially critical conundrum. Just how much memory do you actually need for gaming graphics?
So, that new Nvidia board. This week, it's just a heads up for those of you who haven't heard about it. I haven't got my hands on one yet and a quick Google search will furnish you with any number of benchmarks.
I'm more interested in the subjective feel of the thing. The new 950 uses the same GPU as the existing GTX 960, so it's based on Nvidia's snazziest 'Maxwell 2' graphics tech. It's lost a few bits and pieces in the transition from 960 to 950 spec. With 768 of Nvidia's so-called CUDA cores and 32 render outputs, it inevitably falls roughly between the 960 and the old GTX 750 Ti, which remains Nvidia's true entry-level effort.
At the aforementioned £120 / $160, it's only very slightly more expensive than the 750 Ti was at launch and even now it's only about £20 pricier, so it probably makes the older card redundant for the likes of us. The new bare minimum for serious gamers? Probably, and it's cheap enough to actually make sense.
By that I mean that the GTX 960 has always looked a little pricey. If you're going to spend over £150, my thinking is that you may as well go all the way to £200, which buys you a previous-gen high-end board with loads of memory bandwidth.
On that note, my worry with the 950 is the 2GB of graphics memory and the stingy 128-bit memory bus width. But we'll see. I'll hopefully have some hands-on time before my next post.
AMD's R9 Fury
Sapphire's Tri-X cooling tech is a bit bloated this time around...
AMD's Radeon R9 Fury, then. I won't get bogged down in all the specs – we've covered all that here – but suffice to say this is not the water-cooled X variant. It's the standard, air-cooled non-X. Either way, now that I have one in front of me, it's the subjective feel, the performance and the physical bearing of the thing that I'm interested in.
First up, it's a big old beast. Next to its nemesis, Nvidia's GeForce GTX 980 Ti, it's both much longer and significantly thicker. This could be problematic. The length could cause compatibility problems with some cases – it only just fits into a full-sized Fractal R5 box.
The thickness, meanwhile, spills beyond the usual dual-slot envelope, so to speak. Depending on your motherboard and whatever you have plugged into nearby PCI or PCI Express slots, that could cause a clash and also make multi-GPU setups tricky or impossible.
What it's not, however, is noisy. Not if it's a Sapphire Tri-X R9 Fury, at least. Subjectively, this custom-cooled card makes virtually no noise under full 4K workloads. According to the noise app on my phone, what's more, the overall system dB levels under load are identical for this card and MSI's GTX 980 Ti Gaming 6G.
But what about performance? One of the most interesting aspects of the Fury is its memory subsystem. The so-called HBM or high bandwidth memory uses a funky new stacked memory technology and an uber-wide 4,096-bit bus, which is cool.
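For the number-crunchers, the back-of-envelope maths for peak memory bandwidth is just the bus width divided by eight, times the per-pin data rate. A quick sketch in Python using typical published clocks – treat the figures as ballpark rather than gospel:

```python
# Back-of-envelope peak memory bandwidth: bus width (bits) / 8 bits-per-byte
# x effective data rate (Gbps per pin) = GB/s. Clock figures are typical
# published values, so treat them as ballpark rather than gospel.

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak theoretical bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

cards = {
    "R9 Fury (HBM, 4096-bit @ ~1Gbps)":     (4096, 1.0),
    "GTX 980 Ti (GDDR5, 384-bit @ ~7Gbps)": (384, 7.0),
    "GTX 950 (GDDR5, 128-bit @ ~6.6Gbps)":  (128, 6.6),
}

for name, (width, rate) in cards.items():
    print(f"{name}: ~{bandwidth_gbs(width, rate):.0f} GB/s")
```

That uber-wide bus is how the Fury gets to roughly 512GB/s despite HBM's relatively leisurely clocks – and it also shows why the 950's 128-bit bus looked stingy to me earlier.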
The actual Fury circuit board is pretty puny - and beautifully engineered
Less cool is the fact that this puts limitations on the size of the memory pool – in this instance, 4GB. It's not that 4GB is definitely going to be a major issue. But you can't help but notice that AMD's own Radeon R9 390 boards have double that, the Nvidia 980 Ti has an extra 2GB and the Titan X monster has fully 12GB in total.
Memory matters
Before we go any further, it might be worth just squaring away why memory amount matters at all. The main issue is pretty simple. If the graphics data doesn't fit in graphics memory, the GPU will be forced to constantly swap and fetch data over the PCI Express bus, which has a small fraction of the bandwidth of the GPU's own memory bus.
The upshot of that will be a GPU waiting for data to arrive and that typically translates into truly minging frame rates. Now, the higher the resolution you run and the higher the detail settings you enable – particularly settings like texture quality and certain kinds of anti-aliasing (more on the latter in a moment) – the more graphics data the game will generate.
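To put some very rough numbers on that, here's a quick illustrative sketch. The buffer counts and bandwidth figures are ballpark assumptions for the sake of the arithmetic, not measurements from the cards on my desk:

```python
# Rough illustration of why spilling out of graphics memory hurts. The
# figures are ballpark assumptions for the sake of the arithmetic.

WIDTH, HEIGHT = 3840, 2160      # 4K render resolution
BYTES_PER_PIXEL = 4             # 32-bit colour

frame_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL / 1024**2
print(f"One 4K colour buffer: ~{frame_mb:.0f} MB")   # ~32 MB

# A modern renderer keeps lots of these (G-buffer layers, depth buffers,
# post-processing targets) plus hundreds of MB of textures and geometry,
# which is how you creep towards a 4GB ceiling at high settings.

PCIE3_X16_GBS = 16    # approximate peak for PCI Express 3.0 x16
HBM_GBS = 512         # the Fury's on-card memory bandwidth

print(f"Fetching over PCIe is ~{HBM_GBS // PCIE3_X16_GBS}x slower than "
      "the Fury's own memory bus - hence the stutter when data spills over.")
```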
Looks purdy - but all-lit-up means maxxed-out graphics memory
So the question is: when, if ever, is the Fury's 4GB a problem? It's worth revisiting now and then for cards at any price point, but it's particularly critical for a card that's pitched at the high end, with all the expectations of high resolutions and quality settings that come with that.
The simple answer is that in most games, most of the time, 4GB is plenty. I used Witcher 3, GTA 5 and Rome II as my test subjects (I wanted to include Mordor, too, as it's a known graphics memory monster, but my installation corrupted and the re-download stalled with disk errors, so it simply wasn't possible in time, sorry).
By the numbers, the 980 Ti was slightly the quicker of the two in Witcher 3 and Rome II. Subjectively, you wouldn't notice the difference much in Rome, while in Witcher 3 the 980 Ti felt that little bit smoother at 4K, which reflects the fact that it was just keeping its nose above 30 frames per second while the Fury was occasionally dipping into the 20s. However, we're talking about a subtle difference subjectively, and one that's unlikely to reflect the amount of video memory. Witcher 3, even at 4K, runs within 4GB of memory.
Oh, hai, I'm a DP-to-DVI adapter!
GTA V is a little more complicated. For starters, I was suffering some pretty catastrophic crashing issues with the AMD Fury card. Long story short, most settings changes to the video setup in-game triggered a black screen that required a reboot. I know that GTA 5 is buggy, but it did just have to be the AMD card that was prone to crashes, didn't it?
AA, eh?
Anyway, it's a pity because GTA 5 has options for multiple types of anti-aliasing – AA for short - and this is where video memory can become critical. Arguably the best type of AA uses super-sampling techniques. In simple terms, this means rendering the image at a very high resolution and then sampling down to the display resolution.
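If you fancy seeing the principle laid bare, here's a deliberately naive sketch of that downsampling step: generate the frame at twice the resolution in each axis, then average every 2x2 block down to one display pixel. Real supersampling happens on the GPU, of course, and real drivers are cleverer about sample placement – this is just the gist:

```python
import numpy as np

# Naive 2x ordered-grid supersampling: produce the frame at twice the display
# resolution in each axis, then average every 2x2 block down to one output
# pixel. Real SSAA happens on the GPU, but the maths is the same idea.

def downsample_2x(hi_res):
    """Average each 2x2 block of a (2H, 2W, C) image into an (H, W, C) one."""
    h, w, c = hi_res.shape
    return hi_res.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

# Stand-in for a frame rendered at twice 4K in each axis (i.e. 'render at 8K')
rng = np.random.default_rng(0)
hi_res_frame = rng.random((2160 * 2, 3840 * 2, 3), dtype=np.float32)

display_frame = downsample_2x(hi_res_frame)
print(display_frame.shape)   # (2160, 3840, 3) - but 4x the pixels were shaded
```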
There's more than one way to skin the supersampling cat, and newer multisampling techniques cut some of the shading workload while maintaining quality. But needless to say, it's all very performance intensive, as it means in effect running at a higher resolution than your display res. At 4K, that's obviously going to be quite acute. These days, however, most games use an AA tech called FXAA, which stands for 'fast approximate anti-aliasing'.
The shizzle here involves doing the AA in post-processing rather than by downsampling a higher res. The consequence is that it can be done with little and sometimes no performance hit at all. The downside is that it can be a bit hit and miss: it can both soften overall image quality and miss some of the jagged edges. That said, it does have the advantage of being able to smooth out edges within shader effects and transparencies.
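For completeness, here's a toy illustration of the post-process idea. To be clear, this is nothing like the actual FXAA shader – the luminance weights are standard, but the thresholds and the blending are made up purely for illustration – and it exists only to show why working on the finished frame is so cheap and why it can soften things it shouldn't:

```python
import numpy as np

# Toy post-process AA in the spirit of FXAA: work only on the finished frame,
# find high-contrast pixels via luminance and blend them towards the local
# average. The real FXAA shader is far cleverer; this just shows why the
# technique is cheap and why it can soften detail it shouldn't.

LUMA_WEIGHTS = np.array([0.299, 0.587, 0.114])

def toy_post_aa(frame, threshold=0.1, blend=0.5):
    luma = frame @ LUMA_WEIGHTS                      # per-pixel luminance
    # Average of the four direct neighbours (interior pixels only)
    avg = frame.copy()
    avg[1:-1, 1:-1] = (frame[:-2, 1:-1] + frame[2:, 1:-1] +
                       frame[1:-1, :-2] + frame[1:-1, 2:]) / 4
    # Flag pixels whose luminance differs sharply from their surroundings
    edge = np.abs(luma - avg @ LUMA_WEIGHTS) > threshold
    out = frame.copy()
    out[edge] = (1 - blend) * frame[edge] + blend * avg[edge]
    return out

frame = np.random.default_rng(1).random((2160, 3840, 3), dtype=np.float32)
smoothed = toy_post_aa(frame)   # no extra render targets, no extra geometry
```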
Spot the difference - AMD above, Nvidia below...
Whatever, the bottom line is that if a game uses FXAA, the Fury's 4GB is much less likely to cause a problem. GTA 5 offers both types. With FXAA, the game hits 3.5GB of graphics memory at 4K and runs very nicely indeed. However, even 2x multisample AA takes it right up against the 4GB limit. Anything beyond that breaches the 4GB barrier and the game simply won't allow it – via the in-game options, at any rate.
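If you're wondering why multisampling chews through memory so much faster than FXAA, a rough estimate looks something like this. The per-sample costs and buffer counts below are simplified assumptions – real engines and drivers use compression and all sorts of tricks – but the scaling is the point:

```python
# Rough estimate of how MSAA inflates render-target memory at 4K. Per-sample
# costs and buffer counts are simplified assumptions; real engines and drivers
# use compression, so the true numbers vary from game to game.

WIDTH, HEIGHT = 3840, 2160
BYTES_PER_SAMPLE = 4 + 4       # 32-bit colour + 32-bit depth/stencil
RENDER_TARGETS = 4             # a modest deferred-style G-buffer

def msaa_cost_mb(samples):
    return WIDTH * HEIGHT * BYTES_PER_SAMPLE * RENDER_TARGETS * samples / 1024**2

for samples in (1, 2, 4, 8):
    label = "no MSAA" if samples == 1 else f"{samples}x MSAA"
    print(f"{label}: ~{msaa_cost_mb(samples):.0f} MB of render targets")

# FXAA, by contrast, only needs the single-sample frame it post-processes,
# which is why it barely moves the memory needle.
```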
Of course, you can force multi-sample AA in the graphics driver if it's not made available by the game. But whatever card you're using, that's going to come with a pretty big performance hit. Bottom line – if you can't tell what AA tech a given game is using and it doesn't hurt performance perceptibly, it probably isn't any kind of super- or multisample AA. For a really forensic look at the question of graphics memory, this TechReport piece is worth a look.
AMD versus Nvidia all over again
Anyway, the take-home from all this is that the Fury is pretty appealing. You're looking at £450 / $560 for this Sapphire Tri-X versus roughly £575 / $660 for the MSI 980 Ti, so it's tangibly cheaper and most of the time the subjective experience is pretty close. For the record, that includes input lag. I found both cards suffered from similar levels of lag at really high settings in Witcher 3 and GTA 5. Rome II felt snappier all round on both boards.
However, at this price level, you might argue that if you can afford £450, you can probably afford £575 and, personally, I'd make the stretch just to remove as many performance limitations as possible and also maximise my future proofing.
I'd also favour Nvidia on account of stability. That's not always a deal breaker. I've been a big fan of cards like the AMD Radeon R9 290 and would overlook my concerns about stability in return for a strong overall package. But when things are tight, as they are here, it can sometimes swing the decision.
Of course, what makes sense in terms of graphics memory at this level doesn't necessarily apply to cheaper cards like the new Nvidia GTX 950. Until next time, then, hold that thought. Toodle pip.