3D Card Update: AMD Fury, How Much Graphics Memory Is Enough, Nvidia’s New Budget Graphics

It’s time to catch up with the latest graphics kit and developments, as fully unified shader architectures wait for no man. Nvidia has just released a new value-orientated 3D card in the GeForce GTX 950. We’re talking roughly £120 / $160, so it’s entry-level for serious gaming. But could you actually live with it?

Meanwhile, AMD’s flagship Radeon R9 Fury graphics card has landed at Laird towers. Apart from being geek-out worthy simply as the latest and greatest from one of the two big noises in gaming graphics, the Fury’s weird, wonderful and maybe just a little wonky 4GB ‘HBM’ memory subsystem poses a potentially critical conundrum. Just how much memory do you actually need for gaming graphics?

So, that new Nvidia board. This week, it’s just a heads up for those of you who haven’t heard about it. I haven’t got my hands on one yet, and a quick Google search will furnish you with any number of benchmarks.

I’m more interested in the subjective feel of the thing. The new 950 uses the same GPU as the existing GTX 960, so it’s based on Nvidia’s snazziest ‘Maxwell 2’ graphics tech. It’s lost a few bits and pieces in the transition from 960 to 950 spec. With 768 of Nvidia’s so-called CUDA cores and 32 render outputs, it inevitably falls roughly between the 960 and the old GTX 750 Ti, which remains Nvidia’s true entry-level effort.

At the aforementioned £120 / $160, it’s only very slightly more expensive than the 750 Ti was at launch and even now it’s only about £20 pricier, so it probably makes the older card redundant for the likes of us. The new bare minimum for serious gamers? Probably, and it’s cheap enough to actually make sense.

By that I mean that the GTX 960 has always looked a little pricey. If you’re going to spend over £150, my thinking is that you may as well go all the way to £200, which buys you a previous-gen high end board with loads of memory bandwidth.

On that note, my worry with the 950 is the 2GB of graphics memory and the stingy 128-bit memory bus width. But we’ll see. I’ll hopefully have some hands-on time before my next post.

AMD’s R9 Fury
Sapphire’s Tri-X cooling tech is a bit bloated this time around…

AMD’s Radeon R9 Fury, then. I won’t get bogged down in all the specs (we’ve covered all that here), but suffice to say this is not the water-cooled X variant; it’s the standard, air-cooled non-X. Either way, now that I have one in front of me, it’s the subjective feel, the performance and the physical bearing of the thing that I’m interested in.

First up, it’s a big old beast. Next to its nemesis, Nvidia’s GeForce GTX 980 Ti, it’s both much longer and significantly thicker. This could be problematic. The length could cause compatibility problems with some cases – it only just fits into a full-sized Fractal R5 box.

It’s tight in there…

The thickness, meanwhile, exceeds the dual-slot envelope, so to speak. Depending on your motherboard and any peripherals you have plugged into PCI or PCI Express slots, that could cause a clash and also make multi-GPU graphics tricky or impossible.

What it’s not, however, is noisy. Not if it’s a Sapphire Tri-X R9 Fury, at least. Subjectively, this custom-cooled card makes virtually no noise under full 4K workloads. According to the noise app on my phone, what’s more, the overall system dB levels under load are identical for this card and MSI’s GTX 980 Ti Gaming 6G.

But what about performance? One of the most interesting aspects of the Fury is its memory subsystem. The so-called HBM or high bandwidth memory uses a funky new stacked memory technology and an uber-wide 4,096-bit bus, which is cool.

The actual Fury circuit board is pretty puny – and beautifully engineered

Less cool is the fact that this puts limitations on the size of the memory pool, in this instance 4GB. It’s not that 4GB is definitely going to be a major issue. But you can’t help but notice that AMD’s own Radeon R9 390 boards have double that, Nvidia’s 980 Ti has an extra 2GB and the Titan X monster has fully 12GB in total.

Memory matters
Before we go any further, it might be worth just squaring away why memory amount matters at all. The main issue is pretty simple. If the graphics data doesn’t fit in graphics memory, the GPU will be forced to constantly swap and fetch data over the PCI Express bus, which has a small fraction of the bandwidth of the GPU’s own memory bus.

The upshot of that will be a GPU waiting for data to arrive and that typically translates into truly minging frame rates. Now, the higher the resolution you run and the higher the detail settings you enable – particularly settings like texture quality and certain kinds of anti-aliasing (more on the latter in a moment) – the more graphics data the game will generate.
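To put rough numbers on that, here’s a back-of-envelope sketch in Python. The bandwidth figures are approximate published theoretical peaks, and the 500MB ‘spill’ is a purely illustrative number, not a measurement from any particular game:

```python
# Why spilling out of graphics memory hurts: compare how long it takes
# to move texture data over PCI Express versus the GPU's own memory bus.
# Approximate theoretical peak bandwidths, in GB/s.
PCIE3_X16 = 15.75   # PCI Express 3.0 x16
FURY_HBM = 512.0    # R9 Fury's 4,096-bit HBM bus
GTX980TI = 336.5    # GTX 980 Ti's 384-bit GDDR5 bus

def fetch_time_ms(megabytes, gb_per_s):
    """Milliseconds to move `megabytes` of data at `gb_per_s` GB/s."""
    return megabytes / 1024 / gb_per_s * 1000

# Suppose 500MB of textures per frame don't fit in video memory:
spill_mb = 500
print(f"Over PCIe 3.0 x16: {fetch_time_ms(spill_mb, PCIE3_X16):.1f} ms")   # ~31 ms
print(f"From Fury HBM    : {fetch_time_ms(spill_mb, FURY_HBM):.2f} ms")    # ~0.95 ms
print(f"From 980 Ti GDDR5: {fetch_time_ms(spill_mb, GTX980TI):.2f} ms")    # ~1.45 ms
```

A 60fps frame budget is roughly 16.7ms, so a single over-the-bus fetch of that size blows the entire frame on its own – which is exactly where those minging frame rates come from.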

Looks purdy – but all-lit-up means maxxed-out graphics memory

So the question is, when, if ever, is the Fury’s 4GB a problem? It’s worth revisiting now and then for cards at any price point, but it’s particularly critical for a card that’s pitched at the high end, with all the expectations of high resolutions and quality settings that come with that.

The simple answer is that in most games, most of the time, 4GB is plenty. I used Witcher 3, GTA 5 and Rome II as my metrics (I wanted to include Mordor, too, as it’s a known graphics memory monster, but my installation corrupted and the re-download stalled with disk errors, so it simply wasn’t possible in time, sorry).

By the numbers, the 980 Ti was slightly the quicker of the two in Witcher 3 and Rome II. Subjectively, you wouldn’t notice the difference much in Rome, while in Witcher 3 the 980 Ti felt that little bit smoother at 4K, which reflects the fact that it was just keeping its nose above 30 frames per second and the Fury was occasionally dipping into the 20s. However, we’re talking about a subtle difference subjectively and one that’s unlikely to reflect the amount of video memory. Witcher 3 even at 4K runs within 4GB of memory.

Oh, hai, I’m a DP-to-DVI adapter!

GTA V is a little more complicated. For starters, I was suffering some pretty catastrophic crashing issues with the AMD Fury card. Long story short, most settings changes to the video setup in-game triggered a black screen that required a reboot. I know that GTA 5 is buggy, but it did just have to be the AMD card that was prone to crashes, didn’t it?

AA, eh?
Anyway, it’s a pity because GTA 5 has options for multiple types of anti-aliasing – AA for short – and this is where video memory can become critical. Arguably the best type of AA uses super-sampling techniques. In simple terms, this means rendering the image at a very high resolution and then sampling down to the display resolution.

There’s more than one way to skin the supersampling cat and the newer techniques remove some of the workload while maintaining quality. But needless to say, it’s very performance intensive, as it means in effect running at a higher resolution than your display res. At 4K that’s obviously going to be quite acute. These days, however, most games use an AA tech called FXAA, which stands for ‘fast approximate anti-aliasing’.

The shizzle here involves doing the AA in post-processing rather than by downsampling from a higher res. The consequence is that it can be done with little and sometimes no performance hit at all. The downside is that it can be a bit hit and miss: it can both soften overall image quality and miss some of the jagged edges. That said, it does have the advantage of being able to smooth out edges within shader effects and transparencies.
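The memory cost difference between the two approaches is easy to ballpark. A quick sketch, assuming a simplified 8 bytes per pixel (32-bit colour plus 32-bit depth) – real render targets and formats vary, so treat these as illustrative figures only:

```python
# Rough render-target arithmetic: supersampling vs post-process AA.
# Assumes 8 bytes per pixel (32-bit colour + 32-bit depth), which is a
# simplification; real formats and target counts vary.
def framebuffer_mb(width, height, bytes_per_pixel=8):
    """Approximate size of one render target in megabytes."""
    return width * height * bytes_per_pixel / 1024**2

native_4k = framebuffer_mb(3840, 2160)          # what FXAA operates on
ssaa_2x2  = framebuffer_mb(3840 * 2, 2160 * 2)  # 2x2 supersampled target

print(f"Native 4K target: {native_4k:.0f} MB")  # ~63 MB
print(f"2x2 SSAA target : {ssaa_2x2:.0f} MB")   # ~253 MB, four times the cost
```

Four times the pixels means four times the memory (and the shading work), which is why supersampling at 4K eats into a 4GB pool while FXAA, working on the finished frame at display resolution, costs next to nothing.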

Spot the difference – AMD above, Nvidia below…

Whatever, the bottom line is that if a game uses FXAA, the Fury’s 4GB is much less likely to cause a problem. GTA 5 offers both types. With FXAA, the game hits 3.5GB of graphics memory at 4K and runs very nicely indeed. However, even 2x multisample AA takes it right up to the 4GB limit. Anything beyond that breaches the barrier and the game simply won’t allow it, via the in-game options at any rate.

Of course, you can force multi-sample AA in the graphics driver if it’s not made available by the game. But whatever card you’re using, that’s going to come with a pretty big performance hit. Bottom line: if you can’t tell what tech a given game is using, remember that if it doesn’t hurt performance perceptibly, it probably isn’t any kind of super- or multisample AA. For a really forensic look at the question of graphics memory, this TechReport piece is worth a look.

AMD versus Nvidia all over again
Anyway, the take home from all this is that the Fury is pretty appealing. You’re looking at £450 / $560 for this Sapphire Tri-X versus roughly £575 / $660 for the MSI 980 Ti, so it’s tangibly cheaper and most of the time the subjective experience is pretty close. For the record, that includes input lag. I found both cards suffered from similar levels of lag at really high settings in Witcher 3 and GTA5. Rome II felt snappier all round on both boards.

Old enemies meet again…

However, at this price level, you might argue that if you can afford £450, you can probably afford £575 and, personally, I’d make the stretch just to remove as many performance limitations as possible and also maximise my future proofing.

I’d also favour Nvidia on account of stability. That’s not always a deal breaker. I’ve been a big fan of cards like the AMD Radeon R9 290 and would overlook my concerns about stability in return for a strong overall package. But when things are tight, as they are here, it can sometimes swing the decision.

Of course, what makes sense in terms of graphics memory at this level doesn’t necessarily apply to cheaper cards like the new Nvidia GTX 950. Until next time, then, hold that thought. Toodle pip.


  1. MiniMatt says:

    “If you’re going to spend £150 you may as well go to £200” and “if you can afford £475, you can probably afford £575”…

    Umm… I’m not sure money quite works like that.

    • LexW1 says:

      It sort of does, though, once you take into account what you’re getting for your money, and I say that as someone who grimaces at £120 for a graphics card, let alone £200 or GOD HELP US £575 (pretty sure my ol’ Voodoo 2 SLI config cost a lot less than that, even in modern money).

      I mean, if you’re spending £150 on the graphics card, the gain you get over the £100 or £120 would theoretically need to be pretty significant. But it isn’t, really. Whereas the £200 one? Yeah, that’s a massive gain. The point is that cards in the £150 region rarely offer good bang-for-buck. You’re going to be thinking about upgrading a hell of a lot sooner with a £150 card than a £200 – indeed probably around the same time you’d be wanting to upgrade a £100 or £120 one.

      With £450, you’re already spending more than an entire low-end PC on the graphics card (my current PC cost £400 and is capable of playing most current-gen games on medium settings, and games from last year or before on much higher ones). That is huge fucking money, so we know you have significant disposable income. So you could spend £450, which is way, way too much to spend on a graphics card, and get a card that is “almost great”. Or you could realize that if you’re blowing huge amounts of money on this pointless thing, you might as well either go for the gold star and get the £575 model (that’s what, 3 AAA games more?), or realize your folly and drop back to the £200 model. You’re getting godawful bang-for-buck either way, but who wants “almost great”?

      • Asurmen says:

        It sort of doesn’t at all. Just because you can afford it doesn’t make it worthwhile when there’s little to no gain. If those small gains aren’t worth £125, why spend it unless you’re literally rolling in money?

    • TacticalNuclearPenguin says:

      It does when you have to make a sort of lasting investment.

      You can save a bit, or make a slightly bigger sacrifice, but the idea is to purchase what you actually desire instead of compromising too much.

      I don’t know about you, but spending less to be unhappy for me is the same thing as simply throwing money out of the window “just because”.

      • Booker says:

        Lasting investment?!? Are you being serious right now? If you were to make a “lasting investment”, you would NEVER buy a card that is going to lose 50% of its current value in just a year (tops!) to begin with!
        In that case you’d buy a card that costs significantly less than this one, because at much lower prices they don’t lose nearly as much.

        • MattM says:

          The return isn’t in resale price, it’s in the enjoyment provided by the card. Expensive dinners have a negative resale price after a few hours.

        • cpy says:

          They don’t lose that much value anymore, since new generations are faaaar apart and fps gains are in single digit %.

    • MiniMatt says:

      Fair play, clearly a bit more nuanced than I initially gave credit for.

      I’d concede that the performance per pound gained between £150 and £200 is significant and that if you can find that £50 from anywhere then adding it to your GPU budget is likely wise. I’d contend this is very much not the case however in the £450 to £575 bracket.

      And I’m not sure futureproofing is really a concern of those spending ~£500 on a graphics card – those likely to spend that sort of money on a GPU are unlikely to be happy three years down the road with what is by then a low-mid range card; I’d suspect those who spend ~£500 on a GPU are far more likely to be upgrading every 12-18 months because cutting edge graphics are clearly very important to them.

      Perhaps my point was poorly worded. Personal experience is, of late, that I very much have to stick to a budget (whatever the size of that budget) and if I have budgeted £150 or £450 for something then I absolutely cannot simply bung another hundred quid on the pile, as my budget is my budget. Now, like I say, if the performance per pound at a certain price point means that adjusting other budgets down to compensate would make sense then by all means that’s a recommendation I can take on board.

      • MattM says:

        I don’t think the majority of buyers are price insensitive even near the top of the GPU price range. When AMD or NVIDIA release a new fastest GPU they can charge a price premium over the price-performance line set by the rest of the GPU market, but it’s usually only about $50. Mail-in rebates are pretty common even on high-end GPUs. It’s a way to make more sales to people who care about a $10-$20 price difference without lowering the price for others.
        Someone earning the median income in the U.S., Canada or many of the EU countries could reasonably afford a pretty high-end gaming computer but isn’t making enough that they don’t worry about money at all.

      • Ergates_Antius says:

        I think an important point is this: If you can’t afford to spend £575 on a card, then you probably can’t really afford to spend £450 either.

  2. raiders says:

    I’m just a poor lad. I only game at 1080p and plan on staying that way. I’m running two R9 270X GAMING cards in Xfire, which means I’m only running 2GB. So I’m assuming I’m okay by the logic of this article, since I don’t have the problems mentioned here while running on ultra settings. However, with that said, I sure as hell am lookin’ to go Nvidia when the puppies are obsolete.

    • DizzyCriminal says:

      I’m with you. I got a refurb 270X last May to replace my HD5670. It’s running on a Phenom II with 8GB RAM, so I’m thinking that with a new CPU and DX12 reducing overheads I should be able to ride out the next few years. Sure, I won’t be maxing everything out or going beyond FHD, but it’s a better deal than the XboxO or PS4, so that’s good enough for me.

    • Otterley says:

      He’s just a poor boy from a poor family,
      Spare him his life from this monstrosity.

  3. TacticalNuclearPenguin says:

    Long story short, HBM is indeed the future, but it’s not a game changer just yet. The actual chips in current GPUs are not fast enough to run at settings that would require such bandwidth.

    Still, absolutely looking forward to what happens with HBM in 2016/7, especially the future revision with even extra speed and capacity.

    Nvidia will soon start using that too, but as of now they’ve proven that the actual chip is still the biggest bottleneck until it’s possible to make it way faster, and for that we’ll need the future families.

    • The Sombrero Kid says:

      It’s not the chips. Graphics programmers are trained to think of memory fetches as expensive, so we minimize them; if we could bank on HBM we’d be doing far more texture fetches, making lots of things like shadows, blurs and stuff look way more awesome.

      • Bagpuss says:

        ‘Blurs and stuff’ are the first things I turn off in a newly installed game. It futzes with the image too much and is usually distracting to gameplay.

        I have a 980, so I could keep them if I wanted.

  4. Det. Bullock says:

    Just curious, other than Batman: Arkham Knight, are there any games that require (not “work better”, I mean “require”) a better graphics card than a lowly Radeon HD7770 with 1GB?
    I never had problems with any game I threw at it in the last few years. I think the heaviest was Tomb Raider, which only had a few performance hiccups with TressFX and all (I think the fact that my LCD is an old 1280×1024 screen helps a lot), and it would be kind of weird for a graphics card that supports relatively heavy games to become obsolete so fast.

    • TacticalNuclearPenguin says:

      Did you play TW3? I mean, I know it’s kind of lightweight in the VRAM department compared to expectations given the graphics on display, but while it hardly requires anything more than 2GB I seriously doubt it can work with 1.

      There must be many more games by now, but maybe I just don’t get your point. Maybe you mean games not even simply starting up, but I’m sure there are a few.

      • Booker says:

        I’ve played and finished it with an AMD Radeon 6850 with 1 GB VRAM, no problems. Smooth on middle texture quality. Never played a game that looked this good and was this optimized. Tons of games out there that don’t look near as good and run slower at that on the same hardware.

      • Det. Bullock says:

        Unfortunately not, but I always buy games on the cheap (though Pillars of Eternity is tempting me). I played Tomb Raider only because my brother bought it (he buys games rarely, and when he does it’s usually near day one in a physical copy) and he couldn’t be bothered to open his own Steam account, so he registered it on mine.
        I’ll probably be able to test my card on Rise of the Tomb Raider next January.

      • emge28 says:

        Hi, just wanted to say that I play Witcher 3 on high settings with an overclocked Radeon HD 7850 with 1 GB Ram in 1080p, and it runs fine framerate-wise, I guess about 30 fps, maybe sometimes a bit lower, which is enough for me. I don’t notice any stutter or hiccups.

        The occasional pop-up of people seems to happen on stronger cards as well, and very rarely in Novigrad the whole screen gets blurred for half a second, which also seems to happen on stronger cards so doesn’t seem to be related to low VRAM.

        I actually wanted to get a new graphics card specifically for Witcher 3 and the HD 7850 1 GB was always meant as an interim until then, but I can’t really justify a new card when The Witcher 3 runs fine.

        I wonder how Arkham Knight will run once it gets rereleased (and whether I actually want to play it even if it runs well).

    • tehfish says:

      Having loaned my GPU to my brother quite often recently for him to play higher-end games (we’re both broke right now), swapping between ATI 1GB 6850 and 2GB 6950 cards, yes, it makes a huge difference depending on the game.

      The raw GPU horsepower difference isn’t immense, but the difference is extremely noticeable in games that require more than 1GB RAM.

      For older games 1GB is fine, but for recent stuff 2GB is the absolute minimum. If I were to buy a new card now, I’d put 4GB as the minimum spec to go for at the current time.

    • MiniMatt says:

      At 1280*1024 I’d guess it’d still just about run anything. My HD7770 only recently had to go, though I run at 1920*1080.

      Over the last year I’ve upgraded from a first gen i3 + HD7770 to a new i5 + GT960 – with a brief stint inbetween with new i5 + HD7770.

      Old i3+HD7770 really really struggled with Dragon Age Inquisition – though that seemed to be CPU related as new i5 + HD7770 ran ok on low-middling settings and looked ok. Seem to remember it running a bit happier and prettier under Mantle rather than DX.

      Old i3 + HD7770 really failed to run Shadow of Mordor at anything like acceptable levels. New i5 + HD7770 can just about keep it acceptable at lowest settings, doesn’t look that pretty. New i5 + GTX960 runs at middling settings well, looks pretty.

      Witcher 2 ran very well on the old i3 + HD7770 on low-middling settings, and looked very nice (haven’t tried it on anything newer).

      Witcher 3 ran ok with new i5 + HD7770 and looked quite nice too. With new i5 + GTX960 it ran well and looked very nice.

      • Det. Bullock says:

        Well, I have an i5-3470 so I guess that I shouldn’t have issues, and as I always buy games on the cheap via Steam and GOG sales it might take a while before the more demanding titles may be within my budget.

      • Det. Bullock says:

        I should also add that I was able to play The Witcher 2 with everything maxed out and FXAA without any problems that I could discern. That’s why I found it odd that Arkham Knight had such steep minimum requirements; I guess it’s just a badly optimized game.

  5. Andy_Panthro says:

    Am I supposed to be able to see a difference in quality between the two GTAV screenshots? or is the point that they’re both approximately the same?

    • Timbrelaine says:

      If someone bribes me I’ll swear up and down that their pixels are the best. But otherwise, I think they’re the same too.

      • MattM says:

        The Jpeg compression makes it tough to compare the small differences in the AA methods.

    • LetSam says:

      The screenshots at least show that Jeremy didn’t re-calibrate his monitor after swapping cards.

    • Don Reba says:

      Here you go: link to i.imgur.com

      • Ejia says:

        Ooh, that Welcome to Night Vale game is progressing nicely, I see.

  6. aircool says:

    Is there anything worth getting that will be better than my GTX680 but isn’t stupidly expensive?

    I’ve had the 680 for three years now and it still seems to do the job.

    • SuicideKing says:

      970? Though I usually leave three generations between upgrades, so maybe you ought to wait for HBM2-infused Pascal next year.

    • Stirbelwurm says:

      Well, I’ve got a 660 and still don’t see a reason to upgrade. I used to buy every second generation back then, but today that wouldn’t be worth it.
      Just like you, I would always ask myself if an upgrade now would be worth it. But I realised that I was only asking these questions just for the sake of upgrading.

      Long story short, if you don’t have anything that requires more power, why buy it? The longer you wait, the better and more cost-efficient your next card will be.

      • Jediben says:

        I went from dual 680s at 1920×1200 to dual 970s at 2560×1440. The change in resolution demanded the upgrade to maintain 144fps at max settings, and was worth it for that. If the res wasn’t changed I don’t think the improvement would be apparent.

    • MattM says:

      The GTX 970 is ~$330 and is around 50-65% faster. Not a bad upgrade, but I’d wait one more generation (or for the price drops right before a new generation) if you’re looking for a 2x improvement under $400.

    • goettel says:

      I’m on a 660Ti and although I’m tempted by the 950, benchmark-wise I feel it’s fine to skip the 900-series entirely and wait one more year. My triple-As (Witcher 3, GTA5 and BF4) all run well and look great at around mid/high levels (I guess I’m feeling the 2GB limit these days). Frankly, it’s unoptimized early-access stuff that hurts the most, e.g. 7DTD has become noticeably sluggish over the last couple of releases.

      So my 2¢: ride that 680 another year, she’s still great.

  7. TRS-80 says:

    No comments on the first DX 12 benchmarks showing big gains for AMD?

    • mattevansc3 says:

      Because that game was developed for Mantle in conjunction with AMD.

      As DX12 favours in game optimisations over driver optimisations that game by design is biased towards AMD and is not an accurate measure of DX12 performance.

      • Asurmen says:

        You’re going to have to explain how the benchmark shows bias.

        • mattevansc3 says:

          DX12 favours in game optimisations over driver optimisations.

          AoC has been heavily optimised for AMD GPUs.

          The resulting DX12 scores show a huge performance increase on AMD GPUs going from DX11 to DX12. The same benchmarks show a decrease in nVidia performance going from DX11 to DX12.

          The optimisations for AMD GPUs and lack of optimisations for nVidia GPUs within AOC will automatically bias DX12 scores in favour of AMD GPUs.

          The benchmarks don’t show us what unoptimised AMD DX12 performance is like, nor do they show what optimised nVidia DX12 performance is like. Therefore neither set of scores can be compared and they show no real information or trends.

          • Asurmen says:

            That’s just a series of assumptions. It does not show bias at all.

    • SuicideKing says:

      AoS is CPU limited and quite weird (1600p “high” has less FPS than 1080p “high”), so all it really shows is that AMD’s DX11 driver is terrible and Nvidia’s DX12 driver needs some more work.

  8. SuicideKing says:

    Yeah what I got from the TR piece was that 4GB is enough seeing that current day GPUs aren’t really capable of 4K gaming anyway. When more than 0.5% of us have 4K, we’ll have much better GPUs with more VRAM.

    On the other hand, Tom’s Hardware had to lower the effects at 1080p for GTAV and Shadow of Mordor when they tested the 950, so I think 4GB is the minimum one should go for at 1080p.

    Of course, if you can’t spend the extra money that’s fine, but you’ll have to compromise. My GTX 560 can’t keep up anymore with its 1GB needing compromises for Arma 3 and Rome II (at 1080p).

    • mattevansc3 says:

      But what about the middle ground? 21:9 gaming is starting to creep in and after the Digital Foundry video on the subject I’m personally preferring the extended field of view over the increased detail.

      If 4GB is just enough for high end 16:9 1080p is it enough for 21:9 1080p?

      • SuicideKing says:

        Eh? As long as the pixels are the same the aspect ratio doesn’t matter…

        • mattevansc3 says:

          The aspect ratio directly affects the number of pixels.

          A 16:9 1080p image is 1920×1080 which is 2m pixels.

          A 21:9 1080p image is 2520×1080, which is 2.7m pixels.

          Going from 16:9 to 21:9 is a pixel count increase of just over 30%.

          • Jediben says:

            No, the number of pixels and the aspect ratio are independent. You are assuming the pixel count increases with a ratio change, but it could go down and the ratio still be higher mathematically. You could literally have 21 horizontal and 9 vertical pixels!

          • Jediben says:

            I mean rows of pixels. You could have a 21p monitor which is 21:9, using only 189 individual pixels.

          • Jediben says:

            Actually that would be 9p…

          • Jediben says:

            I realise now that your original point doesn’t actually relate to what I thought you said.

          • Jediben says:

            All three of us are right, on three separate matters! Hurrah!

      • Asurmen says:

        Er, yes. The article answers that itself although as pointed out there’s more to memory usage than just resolution.

  9. Freud says:

    There is a hole where a GTX 970 Ti should be. The gap between the $200 and $350 cards seems too big to me.

  10. CookPassBabtridge says:

    I got a bit burned by the VRAM issue. Mainly my own fault – I bought two EVGA 980s, which have 4GB, but at the time “industry insiders” (whom I shall henceforth never listen to again) said there was definitely a high-VRAM version in the offing, as there was with the 780, due for release within a couple of months of the date I bought them. So I bought an EVGA intending to take advantage of their trade-up scheme. Then of course the cards never materialised, but the 980 Ti has 6GB.

    I haven’t managed to run out of VRAM yet, though I’m pretty sure that if I bought Mordor and added the texture pack I might have trouble if I DSR up to some ungodly resolution. Nonetheless, it makes me think twice about listening to industry predictions, and leerier still of the marketing approaches these guys use.

    SLI 980’s should last me a while though.

    • mattevansc3 says:

      One of the stated benefits of DX12 is that it has a combined memory pool, so your two graphics cards in SLI would give you 8GB on DX12 games.

      • CookPassBabtridge says:

        I thought that had been confirmed as misinformation? Would be good if it’s true, but I don’t think it is.

        • Simplex says:

          This is unconfirmed and may never actually happen.

        • mattevansc3 says:

          AMD are saying it’s true… or at least that it will be on their hardware. link to wccftech.com

          • CookPassBabtridge says:

            Not to be an ass, but that’s WCCFtech, and they say all sorts of things that never end up being true. A bit like the 8GB GTX 980’s mentioned above. They seem to be a toned down version of KDramaStars a lot of the time.

  11. The Sombrero Kid says:

    One thing to consider that you haven’t is that the new generation of rendering APIs *seem* to fix all AMD’s driver woes for them, giving them ridiculous performance boosts over nVidia on those new APIs. So if you can handle lower performance today in exchange for what *seems* to be unparalleled future proofing, AMD might be the right bet? I’m not really a risk taker personally, so I’m not sure I will, but it’s worth considering.

  12. The Sombrero Kid says:

    It’s also worth noting that the next-gen GPUs from both companies will have shed loads of HBM and a massive die shrink.

  13. mattevansc3 says:

    No assumptions, just facts;

    The Nitrous Engine powering Ashes of the Singularity was built around Mantle before it supported DX12;
    link to overclock3d.net

    AMD ‘s logo, not nVidia’s is on the game. Every press release mentions AMD and AMD showcased this game at their event.

    DirectX12, like Mantle, allows for games to be coded at a low level.

    So even though this is an AMD sponsored game, based on an AMD low level API designed for AMD GPUs the game in no way favours AMD GPUs?

    • mattevansc3 says:

      That was meant as a reply to Asurman.

    • Asurmen says:

      DX12 isn’t based off Mantle. That’s Vulkan. Different APIs. Both vendors have had access to DX12 for the same amount of time. Merely having a logo doesn’t mean bias either.

      The performance doesn’t mean much either. Saying DX12 depends mainly on game optimisation rather than driver optimisation is false. The game running worse for Nvidia doesn’t mean bias.

      • mattevansc3 says:

        AMD’s own DX12 slides state that Ashes of Singularity is optimised for AMD hardware.
        link to wccftech.com

        AMD say the game is optimised for their hardware on DX12.
        The publisher says the game is optimised for AMD hardware on DX12.
        The developer and creators of the game engine says its optimised for AMD hardware on DX12.

        The game is optimised for AMD hardware on DX12.

        Game optimisations bias the benchmarks in favour of the hardware being optimised for. Therefore this game is biased towards AMD hardware and it shows in the benchmark results.

        • Asurmen says:

          That’s marketing and doesn’t actually mean what you think it does. It’s saying AMD hardware is good at using DX12 capabilities and AoS has been built from ground up using DX12. That STILL isn’t bias.

          There are simpler explanations than bias that you’re ignoring.

  14. Radiant says:

    Was looking for a cheap BUT capable gfx card and saw an R9 270X 4GB going for around 80 quid.
    Worth it, or is it outdated and superseded by better tech?

    Budget is max £100 ish

    • Radiant says:

      Can’t edit [c’mon dawg].

      But how does this Nvidia budget card compare to that R9 270X-290X range?