2016 Will Be Great For Gamers: Part 1, Graphics

This year. Soon to be so last year

Four long, desolate years. Yup, it really was 1,460 sleeps ago, almost to the day, that the very first 28 nanometer graphics chip was launched, allowing card-makers to squeeze billions more transistors into their GPUs – meaning better performance at theoretically lower cost. But here we are and 28nm is still as good as it gets for PC graphics. That's a bummer, because it has meant AMD and Nvidia have struggled to improve graphics performance without adding a load of cost. It's just one reason why 2015 has kind of sucked for PC gaming hardware. But do not despair. 2016 is going to be different.

In fact, it’s not just graphics that’s getting a long overdue proverbial to the nether regions. Next year is almost definitely going to be the best year for PC gaming hardware, full stop, for a very long time. So strap in for what is merely part one of my guide to the awesomeness that will be 2016.

NB: For instant gratification you can find the usual TL;DR shizzle at the bottom.

The 28nm problem

The problem with PC graphics has a name and its name is TSMC. That’s the Taiwanese outfit which, as we speak, knocks out all the high performance graphics chips you can buy. AMD and Nvidia design ’em. TSMC bangs ’em out. We play games.

In theory, TSMC should wheel out a new production process every couple of years at worst. In simple terms, that means the ability to make chips composed of ever smaller components, mainly transistors. Which is good for a number of reasons. With smaller transistors, you can pack more of them into a given chip size. More transistors in a given chip size means more performance for the same money. Or you can make smaller chips that maintain performance but cost less. It’s all good and the basis upon which our PCs keep getting faster and cheaper.
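
To put some very rough numbers on that, here's a back-of-the-envelope sketch. One big caveat by way of assumption: node names like '28nm' and '16nm' are closer to product labels than precise, directly comparable measurements, so treat the outputs as ballpark ratios and nothing more.

```python
# Ballpark transistor density scaling: density goes with area, i.e. the square
# of the linear shrink. Node names are loose labels, so this is illustrative only.

def density_gain(old_nm, new_nm):
    """How many times more transistors fit into the same die area after a shrink."""
    return (old_nm / new_nm) ** 2

print(round(density_gain(40, 28), 1))  # ~2.0x - the 40nm-to-28nm shrink this article opens with
print(round(density_gain(28, 16), 1))  # ~3.1x - TSMC's 16nm jump (Nvidia)
print(round(density_gain(28, 14), 1))  # ~4.0x - GloFo's 14nm jump (AMD)
```

Even on this crude reckoning, in other words, the pending jump is bigger than the one that kicked off the 28nm era.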

Fast today. Slow tomorrow?

Smaller transistors also tend to allow for better power efficiency and / or higher clockspeeds, the benefits of which I shall trust you to tabulate on your own.

Anyway, TSMC hasn’t exactly been nailing its roadmap of late. Prior to the current logjam, it got a bit stuck on 40nm. So, it binned the planned jump to 32nm for graphics chips, said sod everything, and went straight to 28nm. Gillette-stylee but probably without the corporate triumphalism, you might say.

I digress. The same thing has happened again. Only this time it’s been even worse, taken even longer and the jump will be from the existing 28nm node to 16nm.

So, there you have it. Next year’s graphics chips will finally get a new process node and we can expect a really dramatic leap in performance across the whole market, right? Affirmative. But as ever it’s a bit more complicated than that.

No more new but not really new boards like AMD’s 390X in 2016

FinFET fun

For starters, Nvidia will indeed have its next family of graphics chips made by TSMC at 16nm. But not AMD. For the first time, it's going to hand its high performance graphics chips over to the company that makes its CPUs and APUs, namely GlobalFoundries, or GloFo for short.

GloFo, of course, used to be part of AMD before it was spun off during one of the latter's regular cash haemorrhaging fits, sorry, carefully managed rounds of restructuring. And GloFo's competing process will be 14nm, not 16nm. Node names from different manufacturers aren't always directly comparable, so let's just say they're in the same ballpark.

The other interesting bit is that both processes will sport what's known as FinFET tech. This is somewhat complicated and involves the normally flat channel inside each transistor extending upwards into a fin (hence the name) that the gate wraps around, allowing for much better control of channel inversion and, in turn, conduction from source to drain.

I know, tremendous. But seriously, think of FinFETs as 3D transistors that are more efficient and as offsetting some of the problems that pop up as these things shrink ever smaller. The critical point is that the FinFET thing means that this isn’t just a straight die shrink from 28nm to 16nm or 14nm. It’s supposedly going to be even better than that.

This somewhat silly water-cooled AMD Fury contraption is what happens when you get stuck on 28nm

Of course, they always say that. Wasn’t it strained silicon that made the last node or three super special? Or was that SOI? No, sorry, it was high-k metal gate. Wasn’t it? Whatever.

The big question mark in all this is arguably GloFo. It hasn’t made big graphics chips for AMD before. But that’s a story for another day. Let’s just cross our fingers for now because if GloFo cocks it up, we’re all buggered.

Those next-gen cards in full

What we do know is that the manufacturing side should allow for much, much more complex GPUs. In fact, I'll tell you how much more. Roughly twice as much more. Yup, today's top-end GPUs are around the 8-9 billion transistor mark and these next-gen puppies due out in 2016 are going to take that to 15 billion and beyond at the high end. I cannot overemphasise just how staggeringly, bewilderingly big these numbers are. This stuff really is a marvel of modern science and engineering.

AMD's new family of chips will be known as Arctic Islands and the big 15+ billion transistor bugger is codenamed Greenland. Nvidia's lot is Pascal and the daddy is GP100, which is rumoured to pack around 17 billion transistors. Monsters, both, in other words.

4K gaming on a single somewhat affordable GPU in 2016? The affordable bit might be a stretch…

High bandwidth memory comes good

As if that wasn’t enough, both are going to be using the second gen of that HBM or High Bandwidth Memory gunk that AMD used on its Fury boards this year. HBM delivers (unsurprisingly, given its name) loads of bandwidth. But the first generation only allowed for 4GB of graphics memory. That sucked but HBM2 takes that to 32GB, which I think you’ll agree sounds like plenty. Actually, the high end may well top out at 16GB, but the point is that HBM2 doesn’t have a size problem.

Either way, we’re looking at roughly 1TB/s of graphics memory throughput, which technically is known as ‘1CT’ or ‘a craptonne of bandwidth’. Of course, there will be architectural improvements, too. Nvidia is arguably ahead of the curve here with its uber-efficient Maxwell chips. But AMD will be making its first really major revision to the GCN tech that’s been powering its graphics chips for the last four years. GCN 2.0? Most likely.
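
If you're wondering where that craptonne figure (and the 32GB ceiling above) actually comes from, here's a rough back-of-the-envelope sketch. To flag the assumptions: it presumes the standard four-stack layout with a 1024-bit interface per stack, roughly 1Gbps per pin for HBM1 and up to roughly 2Gbps for HBM2, and up to eight 8Gb dies per HBM2 stack.

```python
# Rough HBM arithmetic. Assumptions: four stacks, 1024-bit bus per stack,
# ~1 Gbps/pin for HBM1 and up to ~2 Gbps/pin for HBM2, 8 x 8Gb dies per HBM2 stack.

def bandwidth_gb_s(stacks, bus_bits, gbps_per_pin):
    """Peak bandwidth in GB/s: total pins x per-pin data rate, bits converted to bytes."""
    return stacks * bus_bits * gbps_per_pin / 8

def capacity_gb(stacks, dies_per_stack, gbit_per_die):
    """Total graphics memory in GB."""
    return stacks * dies_per_stack * gbit_per_die / 8

print(bandwidth_gb_s(4, 1024, 1.0))   # HBM1: 512 GB/s (the Fury X figure)
print(bandwidth_gb_s(4, 1024, 2.0))   # HBM2: 1024 GB/s, i.e. roughly 1TB/s
print(capacity_gb(4, 8, 8))           # HBM2: 32 GB ceiling (vs 4GB on HBM1)
```

Real cards may well ship with fewer stacks or slower pins, of course, but that's why 1TB/s is the headline number being thrown around.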

All told, the net result could well be something silly like 80 per cent more performance. Graphics cards that are not far off twice as fast as before, in other words.

If VR is to be the next big thing, we’re going to need much more powerful GPUs…

This isn’t just a high end story, either. The impact of this will extend across the market. Maybe not from day one. But as the whole range of 14/16nm GPUs appears with chips like Nvidia’s GP104 and AMD’s Baffin, we could be looking at not far off double the performance at any given price point by year’s end. So budget cards could mean big performance.

This kind of generational performance leap has happened before. But the long wait is going to make it especially satisfying this time around. As for exactly when this will all kick off, it's not certain. But we should see the first launches by next summer. With all that in mind, 2016 in PC graphics will look like this:

TL;DR

– PC graphics finally moves away from ancient 28nm transistors in 2016
– HBM2 memory will give an insane 1TB/s of bandwidth and support 32GB of graphics memory
– Overall upshot could be a near doubling of gaming graphics performance…
– …or today’s performance at roughly half the cost
– Proper 4K gaming on a single GPU should be possible
– GPUs with enough performance for really high resolution virtual reality (VR) look likely, too
– Nvidia is sticking with TSMC for manufacturing, but question marks remain over AMD’s new graphics chip partner, GloFo

Overall, then, things look very fine for PC graphics next year. And yet that’s just the beginning. 2016 is looking hot in a lot of other areas too. Like CPUs. And SSDs. And displays. And stuff I can’t be arsed to think of right now but hopefully will cross my mind in time. So tune in next time for a whole lot more awesome.

119 Comments

  1. Christo4 says:

    I have contradictory feelings.
    On one hand, I'm happy for these advancements and can't wait to see how games will look and perform.
    On the other hand, I just built a PC a few months ago and expected it to last quite a few years, but now I think I'll have to replace it after a year, what with all these advancements and DX12 with parallel compute coming along.

    • TimRobbins says:

      Nah, aside from some horribly optimized console ports you’ll be playing on max for quite a few years. Game development will take years to be on par with your current hardware potential, unless you plan on playing 4k with VR or something.

      • Doganpc says:

        Pretty much. Until the next generation of consoles comes out, games won't be designed to handle this much graphical power (4 more years of crappy ports!). Honestly, it's going to be a bit underwhelming for a while. Then, after they've worked out the kinks in the hardware on us PC folks, the consoles will get all the marketing hype.

    • TacticalNuclearPenguin says:

      Your PC won’t change though, you’ll just have a range of awesome shiny stuff constantly taunting you.

      Granted, you could argue that the presence of twice-as-fast cards might tempt developers into being even lazier than they are now when it comes to optimization, but I'd hold off on full pessimism just yet.

    • Maxheadroom says:

      I'm in the same boat so don't worry. My feeling has always been that it's better to get the last iteration of the older generation than the very first one out the door of the new generation.

      I’ve no doubt the new ones will be plenty faster, but they’ll likely be big, noisy, hot, power hungry and expensive too. I’d give it a couple of years for the tech to properly bed in and mature.

      just my 2 cents

  2. Captain Joyless says:

    But… but… I just upgraded my graphics card this year. =/

    • Old Rusty Dusty says:

      I did too, but high end cards tend to hold their value… In a lot of cases if you play eBay right, you'd be surprised at how much you can get for a cutting-edge last-gen card. I personally plan to throw my 980Ti on eBay right before or right when the new cards drop, and expect to get most of my money back to cover the cost of the new card. It still costs money to upgrade of course, but if you can sell a $700 card for $500 and then buy a new $700 card, then all of a sudden it only cost you $200 to upgrade, which isn't too shabby. I just know way too many people who are attached to old hardware and sit on it until it depreciates to nothing… you gotta play the upgrade game.

      • melnificent says:

        This is how it should be played. I managed to snag a 290x for around £200 on amazon right before the 300 series hit as people were expecting big improvements so the 200 series stuff dropped. My exact model is currently at £320.

      • paralipsis says:

        I’d be tempted to do the pre-emptive sell off of my current card, but I can’t be too sure that the first generation of the new cards will have all the features that I want. For instance, the later generation 970 and some 980Ti cards offer “semi-fanless” designs. I’m currently using a gen 1 970, and couldn’t justify the upgrade costs when the semi-fanless cards came out, but with a doubling in performance around the corner…

        I’ve currently got way more heatsink than my CPU needs, and I suspect it could well idle without fans. So I’m really tempted to get a graphics card and a new power supply that also don’t use fans on idle. I could potentially end up with a PC where it only uses fans at all when in full gaming mode.

        • steves says:

          That is entirely doable.

          There are already PSUs that’ll not run the fans when under low stress, ditto for modern high-end GPUs.

          There’s craziness like this:

          link to quietpc.com

          for yer CPU, and while there will probably be one or two case fans spinning a *little* bit, there are options that are all but silent at low RPM if you set up the motherboard software right.

          • Person of Interest says:

            I would not recommend those, or any, fanless CPU coolers. They generally can’t keep up with a fully-loaded CPU, and the CPU’s protection circuitry will throttle the clock speed. Silent PC Review is the best source of information on quiet components, and they did a thorough review of the NoFan cooler linked above: link to silentpcreview.com

            You can get a good, fanned cooler, and pair it with a fan spinning at an inaudible 12dB (that’s 12dBA @ 1m actually measured in a hemi-anechoic chamber, not “marketing department 12dB”), which will keep even an overclocked quad-core from overheating and throttling.

          • TacticalNuclearPenguin says:

            Best thing would be passive radiators for custom watercooling, incredibly huge things, but even then overclocking usually doesn’t work.

            Still, it might cool your stock clock setup decently enough and, if you fit enough of them, even handle the GPU in the same loop.

            Passive CPU coolers though, as per the above suggestion, aren’t really great unless they’re meant to at least be part of a waterloop, but if you get the very best one and you find that your CPU can be downvolted at stock speeds, you actually might be in luck.

    • Carra says:

      I’m waiting until the next gen to upgrade my PC. My Geforce 670 still runs everything at a playable framerate/quality.

  3. MiniMatt says:

    The next big leap in graphics performance is always, without fail, 6 months after I buy a new graphics card.

    It’s happened again. Thanks Jeremy. :)

    • Keios says:

      Can you let us know when you next upgrade in that case?

    • LazyAssMF says:

      Yyyyyyyep, same here, always the same, and it's starting to annoy me to tell you the truth… :/

      • Unclepauly says:

        If your card plays the games you like up to your satisfaction who cares?

  4. kwyjibo says:

    With all this horsepower, developers won’t even need to give a shit about optimizing their half-assed console ports! Trebles all around!

  5. JimboDeany says:

    So do I wait to get a new PC? I was going to get one next week, but now I’m unsure what to do. How long do I wait, do I just buy regardless? Do I wait until the prices begin to drop? I am befuddled.

    • TillEulenspiegel says:

      You can always do what I did and hang on to your old video card for another few months. I’ve got a brand new i7 6700K system…with a Radeon 7850.

      Actually, it might be a better idea to just wait, and buy everything after Nvidia releases their new GPUs. It’s moderately likely that Skylake CPU prices will drop a bit by then.

    • Unclepauly says:

      You can safely go Skylake as there won't be anything to surpass it for a really long time. Graphics cards are really about the only thing that is going to keep improving performance-wise.

      • TacticalNuclearPenguin says:

        Although Kaby Lake, as a refresh, will probably fix the slightly underwhelming stock speeds, the higher-than-expected power draw differential with HT on, and hopefully the very thin silicon that people have already managed to bend on more than one occasion.

        As a 2600k user I was sort of hoping for Skylake to be an overclocker's wet dream, but we're still not there yet. Then again, I'm not saying it's not a wonderful proposition already, especially because the associated chipset is great and fully featured.

      • syllopsium says:

        To be picky, there will be Skylake-E, where they bring back the 6/8 core models. Not so useful for games, but good for other things. Possibly a speed bump at that stage, too.

    • Drumclem says:

      I am in the exact same situation, was all ready to go GTX 970 and i5 6500 (Skylake !!!) but this makes me hesitate… I haven’t bought a new rig in YEARS (upgrading by little touches since 2009, can you imagine?) and I’m really impatient. The idea of waiting for another six months really is nearly unbearable, especially with all these Christmas discounts all around. What should I do?

      The horror, the horror!

      • JimboDeany says:

        Also with the Witcher 3 in my “to play” box and Total Warhammer on the horizon can we afford to wait?!? I think I’m just going to go for it and then upgrade when these come along if necessary.

        • Jeremy says:

          Another option is to JUST get a new GPU, which is what I did. Not sure how old your PC is, or if you're someone who needs max settings (I am not), but PCI-e is backward compatible, and the performance difference of a 970 GTX (or other card) on 2.0 vs 3.0 is basically non-existent. As long as the GPU power draw isn't going to set your PSU on fire, it could be a good stop gap. I've got a 5 year old AMD build, and with a 970 in there, I'm able to run everything quite well. The Witcher 3 runs beautifully in particular.

          • JimboDeany says:

            Sadly I am currently using a laptop so it will be a full rig that I buy. I’m going to go for it I think

          • Jeremy says:

            Ahhh yeah… not much you can do about that then unfortunately. The nice thing is, as the person below says, the only reason you would need one of these 2016 cards is for 4k gaming. A new build with a 970 or 980 will get you to Ultra for most games on 1080 without skipping a beat.

          • Unclepauly says:

            Yeah, I don't know how they managed it but Witcher 3 requires almost ZERO CPU powah. 1st generation dual-cores even run the game decently. The latest CPUs that have a problem with the game are Pentium 4s! Talk about optimized?! On the GPU side of things the game isn't the most optimized game in the world, but it's not a dog either.

      • TormDK says:

        If you plan on sticking to 1080p as your go-to resolution for the next couple of years, then there is 0 reason to wait for these new monsters to come out.

        These monsters will only show their worth at 60+ FPS 1440p gaming with everything maxed out and AA turned on, or at 4K.

        • Drumclem says:

          Ok, clearly no reason to wait then since I’ve just bought a 48″ 1080p TV (Samsung UE48H6400, which is awesome and cheap, by the way). You’ve brightened my day, I can now spend a ridiculous amount of money on a brand new rig without any more ado. HUZZAH !

          And thank you both for your answers. How nice this comment section is.

          • Jeremy says:

            Ooo.. I’m excited FOR you. Nothing better than a brand new PC that can chew through games. Good luck and have fun!

  6. stylius says:

    Sorry to bring it up, but a GPU upgrade in 2015 was a bad idea, especially if you won't be able to do it again soon. So far the reason was DirectX 12, but now we have a second one.

    You can still count the DX12 games on the fingers of one hand, but soon that will change. And while current GPUs support the API, they are not built purposely for DX12. That will change too, and with the smaller transistor size I am really excited to see what the future holds.

    • TacticalNuclearPenguin says:

      “but soon this would change”

      Yep, a couple years give or take.

      • Hedgeclipper says:

        Maybe. W10 is still under 30% on the Steam survey, and it's not only free, they've done everything short of sending thugs round to people's homes to beat them until they install it. I'm guessing adoption is going to be slow.

  7. Retne says:

    Thanks for the column, Jeremy.

    Very interesting news, but I also just loved the style.

    I do care about / love all the nerdy stuff, but I very much enjoyed this as a piece of writing, too.

  8. yogibbear says:

    Still holding strong onto my 770GTX waiting for Pascal to drop… but boy oh boy is the performance starting to be really crap and the desire for a 980Ti so strong…. Pascal cannot drop soon enough. Q1 2016 please (even though I know it’ll be late Q2/early Q3).

    • Alfius says:

      Yup, I too am running a 770GTX. It struggles with GTA5 at 1900×1200, was pondering a move to a 980, but I’m better off waiting by the sounds of it.

      • Eleven says:

        Me three. I recently upgraded the rest of my PC except for the graphics card, expecting big things for GPUs to happen in early 2016.

        My 770 can just about run Elite Dangerous in VR on the Oculus Rift, but only just. It’s so cruel having almost the best space-dogfight experience, but having to be patient until the hardware catches up.

  9. caff says:

    My new PC will be based around the new graphics cards when they hit. Right now seems like a bad time to upgrade.

  10. tehfish says:

    Crumbs, this sounds so very odd with all this fuss over 'ancient' 28nm GPUs, considering I'm running a 40nm GPU and still playing brand new games such as Fallout 4 at 1080p*

    But yes, new GPU time soon I think :)

    *ATI6950 2GB. It’s coming under min spec now. But still perfectly playable ;)

    • gunny1993 says:

      Playable is a relative term, I mean good god, do you even get 30 fps with that card?

    • TacticalNuclearPenguin says:

      There are varying standards on what's playable and what's the minimum level of image quality you accept versus the framerate, but let's put it another way.

      You mention Fallout 4. How about if, in 2016, a mid-range card is able to max it at 1440p? Sure, you probably don't care that much, but I'm sure you'll agree it's a monstrous step forward!

    • TormDK says:

      I’m rocking a 980Ti currently, and to be honest, it’s not living up to the expectations I had for it when we are talking about 1440p. I do get some dips below 60FPS from time to time when playing modern games.

      So if we can get even a 20% increase next year (a 50%+ increase in a single generation seems like utopia to me), then I would likely fork over the cash for the top-of-the-line model.

      • TacticalNuclearPenguin says:

        50% seems like utopia now because the last two generations (at least for Nvidia) saw the actual mid-range come first, and the full-fat chips later. The only real big Kepler didn't come with the 680, but with the 780ti, and the same goes for Maxwell.

        This time around (still talking Nvidia) it appears that the full-fat chip will be with us from the start, and it will have HBM2, but alongside they'll sell the midrange with GDDR5+ (the plus meaning it's a new and much improved version).

        • TacticalNuclearPenguin says:

          In practice, what I'm saying is that the 580 should be compared to the 780ti, not the 680, and that the 780ti should go against the Titan X and not the 980.

          Many rumors say this stretching won't happen this time. I understand rumors are just that, but there's some logic to them, since the previous segmentation was caused mainly by the incredible manufacturing hurdles; this time the step to 16nm FinFET seems to have been much smoother.

        • TormDK says:

          Well, we can always dream.

          A 50% increase to what the 980Ti brings to the table right now is pretty massive for a 1440p gamer like myself.

          So unless it's a $999 retail product they are launching (aka a Titan Y), I would likely consider such a purchase a must-have.

          • TacticalNuclearPenguin says:

            That’s actually my fear, that this time around a new Titan won’t be designed with a very close yet far more affordable competitor in mind, thus forcing people to go big or go home.

  11. Jeht says:

    Will these cards have DP 1.3 so I can run 4K at 120Hz+ some day when such monitors exist?

  12. aircool says:

    My GTX680 is still going fine, so when I heard the news of the next-gen GFX cards, I decided to hold onto the money I was saving for a GTX970 and wait for a PS4 in the sales instead.

  13. Sakkura says:

    The jump from 40nm to 28nm was not unusual, it was perfectly normal. A die shrink is normally supposed to be a 30% reduction in feature size, and 40nm x 0.7 = 28nm precisely.

    For historical reasons, the 40nm -> 28 -> 20 -> 14 processes are known as half-nodes and have come to dominate for GPUs. The 32nm -> 22 -> 16 -> 10 processes are known as full-nodes and dominate for CPUs.

    Shrinking from 40nm to 32nm would only be half a step forward, and that’s typically just not worth bothering with. 28nm was the logical step. After 28nm, the next logical step was 20nm (rather than 22nm, as used in eg. Intel’s Ivy Bridge and Haswell CPUs), but TSMC fucked up their 20nm process. It was still used in some mobile chips, like the problematic Snapdragon 810.

    • Jeremy Laird says:

      The reality is that both AMD and Nvidia used the 65, 55, 40 etc nodes. Almost definitely both would have made chips on TSMC 32nm had it not been borked.

      Also, you talk about TSMC's 20nm 'rather than' Intel's 22nm as if this means something when it doesn't. These are ultimately product labels, not measurements made to a shared standard. They're just rough guides to feature sizes, not absolute and directly comparable measures.

      …and most importantly, afraid you don’t understand basic geometry. When calculating 2D feature size, you do not take 28nm as a direct proportion of 40nm. Those are 1D metrics. You need to square them and then compare. In fact, 28nm features are 0.49 the size of 40nm features – ie less than half the size. A 0.7 shrink of 40nm would be in the order of 34nm. Likewise, a shrink from 40nm to 20nm would net features one quarter the size. Hope that is clear!

      • Unclepauly says:

        Oh Lawdy! That was quite the smack down.

      • aircool says:

        I was not aware of that.

      • Sakkura says:

        Yes, that’s exactly why they shrink the feature size (what we’re talking about) by ~30%. Process nodes are described by their 1D dimensions, not their 2D dimensions.

        As for 20nm vs. 22nm, sure it’s somewhat arbitrary since they don’t adhere strictly to the number and there are different features you could measure the size of, but as a target they aim for while developing a process there is obviously a difference. It’s just that it looks like a trivial difference at 20 vs 22, even though it’s just as big as ever – like 65 vs 55 that we saw circa 2008.

        And yes, a few GPUs were made on the 65nm process node, but 55nm was more commonly used. The subsequent 45nm node was not used in GPUs, they went 40nm. Then they went 28nm, right on schedule. The 32nm process node was for CPUs only, and that goes whether you’re talking about Intel’s, GloFo’s, or TSMC’s fabs. TSMC cancelled their 32nm process, specifically because it wasn’t a big enough upgrade over 40nm and there was little demand from customers like AMD and Nvidia.

        link to semiaccurate.com

    • TacticalNuclearPenguin says:

      The point still stands that this time the jump is deeper after a long hiatus, plus it's not just a stupidly huge ramp-up in transistor count but we also need to add FinFET to the equation.

  14. HigoChumbo says:

    Damn… I was hoping the next-gen GPUs would arrive in time for The Division and TW: Warhammer.

    My good old Radeon HD4870 is not up to the task anymore, and there is just no way I'm spending money on the GTX 900 series and their AMD equivalents. Especially not when we are this close to the true next generation.

  15. jorjordandan says:

    While it would be nice to believe that moving to a 16/14nm process will yield double performance, looking at the past doesn't bear this out. The performance gap between the 40nm GTX 580 and the 28nm GTX 680 was actually smaller than the gap between the 680 and the also-28nm 780, although both yielded pretty similar advances, performance-percentage-wise.
    The reason for this is that the tech is hard and expensive to develop, so Nvidia and AMD are incentivized to release the smallest possible advances that will still move product. The performance gains are driven purely by the need to amortize the cost of developing the tech. The highest-end next-gen card will yield about 12-13k in PassMark. Every other tier will move up by approximately 1,000 points. There will probably be a bunch of rebadged 28nm cards still in the mix too, just like the GTX 600 generation.

    But who knows! Maybe I’ll have to eat my words :)

    • Unclepauly says:

      You can eat them now if you like. 580 to 680 was essentially a die shrink and tweak (the 680 had around 15% more transistors). Pascal is a total redesign with around twice as many transistors (the 980Ti has 8 billion, with Pascal rumoured at roughly 17 billion). Yes, it's true the next flagship card will be around twice as fast as the 980Ti. In fact the mid-range cards are probably going to be as fast or faster than the 980Ti.

      • TacticalNuclearPenguin says:

        580 to 680 was also a difference in architecture, and that’s horrendously important let alone the fact that Kepler also started the trend of gimping double precision performance.

        Take the 780ti and the Titan X as an example, both are the full-fat chip of different architectures with the same node, but in many games the difference was around 40-50%.

        • Unclepauly says:

          Oh yeah, you’re right. The 680 is when Nvidia dropped the double precision performance and started their concentration on performance per watt. I still remember them having roughly the same architecture though with certain sections cut out for energy efficiency. My memory isn’t a shining beacon though :(.

  16. GreatBigWhiteWorld says:

    Ordered my new rig yesterday, decided to go with GTX 960 with a view to upgrading down the track. The price leap to the 970 and 980 seemed a bit insane. Not too much of a graphics whore myself.

    The i7-4790K and 500GB SSD, however, will do magical things for my prehistoric single-core vomitrously overmodded Paradox games =D I've heard wonderful things about that CPU and didn't want to miss out.

    • Unclepauly says:

      A few weeks ago they had 970s on sale for around 240-250 USD, which is only 50 bucks more than a 960, but I guess you are right atm. Cheapest ones around are about 300 USD.

      • GreatBigWhiteWorld says:

        I’m battling with the Aus$ which is currently in the shitter =( Only reason I’m remotely spending up is the fact that I won’t be going to Europe next year for similar reasons.

    • daktaklakpak says:

      The 4790K is a nice CPU, but don’t even think about using the stock cooler. Prime 95 will send it into thermal shutdown within seconds (made that mistake myself).

      • GreatBigWhiteWorld says:

        As I was reading an article about how it seldom ever hits 4.4ghz without additional cooling, I got a message saying that they had just stopped using 4790k’s and offered an upgrade to the 6700k. Probably better in the longterm, and free DDR4 RAM upgrade with it.

  17. Ericusson says:

    So is there actually a date for the production of all them shiny new toys?

  18. J. Cosmo Cohen says:

    This is probably a good place to ask. I’m currently in the market for a 970, as that’s my price range. Since these new cards will be coming out soon, will the 970 drop in price relatively soon? Or am I better off grabbing it now?

    Also, which 970 is the new version? I’m terribly confused reading the Amazon reviews. It seems they’ve compacted all the reviews, regardless of the model.

    • Unclepauly says:

      Get a 970 now and enjoy it for half a year. The next-gen GPUs won't be here until summer or later. The 970 is a hell of a card for 300 bucks (or cheaper if you're a frugal shopper).

    • Person of Interest says:

      “Soon” might mean 6-9 months until wide availability of the new products. You can use an historical price tracking website to see how graphics card prices vary after launch. My understanding is: new cards don’t drop drastically in price until a better product launches at a similar price point. So you may as well get a 970 or AMD-equivalent now if you are in dire need of an upgrade, or else hold out for the next generation.

      As far as new card versions: Are you referring to the models that turn off the fans when the card isn’t busy? If so, your best bet is to look at the manufacturer websites and see which specific model numbers are advertised with silent/0dB operation on the desktop. Amazon may cluster the reviews, but the model numbers are usually more specific.

      • J. Cosmo Cohen says:

        Thanks! Amazon doesn’t list the model number on all the 970’s for some reason, but I didn’t think to check individual websites. It wasn’t necessarily about the fan; I thought I had read somewhere the newer ones just ran better all around, but it’s possible it was referring to noise levels.

        • Person of Interest says:

          I haven’t heard anything about later 970 models being better-running, but then, all the reviews I read were written in 2014.

          I have read articles that reveal later revisions (v2.0, v3.0) of the same model number of motherboard may have stripped-down parts, such as fewer power phases or cheaper capacitors. But the reported culprits were budget boards only, so I hope the same thing hasn’t happened to midrange graphics cards!

          I mention the fan speed because when the 970 launched, only Asus Strix and MSI Twin Frozr cards stopped their fans when the card wasn't running a 3D application. Several manufacturers, such as EVGA and Zotac, later updated the BIOS for some of their cards, to allow the fans to either shut off or run more slowly than before.

    • C0llic says:

      I would say unless you are planning on say gaming above 1080 or using VR stuff, just get a 970. The new cards won’t change the current specs of the consoles so that card will be more than adequate for quite some time, at which point you can move up when the time is right.

      Otherwise, maybe wait until the next-gen cards. I'd say for sure, though, this news means getting an absolute top-of-the-line current-gen card is a bit of a waste, so no 980s. In my opinion, anyway.

  19. Atrak says:

    Sadly I am currently using an old Radeon 6870 1GB card which has been showing its age for quite a while now.

    So I've been looking to get a new card (something around a 970 or R9 390), given that I'm unlikely to be able to afford another new card next year (if I buy one now) when all these super new cards come out.

    I've got so many games sitting in a pile waiting for a gfx card that can do them justice (Witcher 3, I am especially eyeing you off), so holding off for another 6 months seems like some kind of torture. I guess that's if those cards are even out in 6 months.

    I knew buying that 2k monitor would come around to bite me in the ass :P

    • Unclepauly says:

      Lower the settings? A 6870 should at least play low/medium, which is still good-looking on Witcher 3 imo.

  20. gritz says:

    Refreshing to see PC graphics covered with such high-quality writing. I stopped keeping up with the hardware arms race news long ago, so I really appreciate this kind of clear summary.

    I look forward to 4k gaming, but I worry that high quality displays just won’t be there. It seems like every panel at the high end (whether you’re talking refresh/sync or resolution) seems to be plagued by quality control issues. I hope that stabilizes in time for this new generation of graphics to take hold.

  21. wxid says:

    I’m hoping this means that I’ll be able to get (the equivalent of) a current high end card in a low profile format.

    Sure it won’t be ‘high end’ when it turns up, but it’ll be better than the current 750ti which seems to be about the highest end of low profile cards these days.

    • mukuste says:

      There are some “GTX 970 Mini” small form factor cards, if that’s what you mean.

    • gritz says:

      To be fair, the 750ti is (was) a great card. At 1080p, I haven’t seen any reason to replace it with some hot, noisy 900 series hunk of silicon.

    • Unclepauly says:

      Look into the Radeon Fury Nano. It beats the nvidia 980 and is really f’n small.

      link to newegg.com

      Sapphire is like the EVGA of AMD cards (awesome customer service). It’s a dual slot card but it’s extremely short. I’m thinking about using one in a mini PC I’m building that’s going to resemble a Nintendo Gamecube.

  22. po says:

    A couple of months ago I was looking into getting an Intel 750 Add-In-Card SSD, the only problem being that to get the full performance out of it you need a PCIe 3.0 slot with four lanes. At present, fitting one would end up disabling every other PCIe slot except the one for the GPU (and I use one for my sound card).

    The only consumer motherboards capable of supporting these things in addition to multiple other non-GPU cards were just starting to show up at trade shows, and were limited to the absolute top end of the market.

    Hopefully that will hold true for the midrange boards that get released this year, and we can get a nice improvement of storage speed over SATA.

  23. goettel says:

    I've been holding off getting a new PC for about a year now, waiting for the Rift and Vive to drop and see what engine they'd like to purr to. Having seen Elite's minimum specs for VR (you know, for when they get 'round to fixing it), I'm happy I did.

    • Replikant says:

      Exactly. I’ve been waiting for the graphics card die-shrink for some time now and have so far refused to upgrade. Granted, my GTX460 is still fine for the games I currently play, but I’ve been avoiding latest-gen games due to their requirements.
      I am really looking forward to VR, however, and am thus very happy that VR and 16/14 nm cards are happening at approximately the same time.

  24. Humppakummitus says:

    Wonder when we’ll get over 256 brightness levels, and decent, non-shimmering moving shadows. And lose that weird banding effect usually seen on characters’ faces.
    Feels like everything else is progressing, but some of the basic things in image quality are stagnant.

    • TacticalNuclearPenguin says:

      Even in 2D tech, 10 bits per channel is reserved for professional cards only, and the advantage is not always clear, since very few monitors that support 10 bit do so natively without dithering, even the ones that cost an absolute fortune.

      • TacticalNuclearPenguin says:

        Also, the gaming world is full of people using things that can't even display the first 5 dark shades and the last light ones.

      • Humppakummitus says:

        I think dithering would be a pretty decent option, though, especially with high resolution displays.

        • TacticalNuclearPenguin says:

          Absolutely. Good dithering does work and many professionals are still "stuck" with it for 10-bit content, though I guess it's still not the most elegant solution.

          Well, that or several thousand bucks for DCI-specced OLED beasts that are used for Hollywood color grading duties.

  25. TheNavvie says:

    Do we think the Pascal Titan will still be $1000? Will there be a need for a Titan?

  26. bobbobob says:

    This has got me excited for ~2018 when I buy something with this tech at human-payable prices.

    • guygodbois00 says:

      Intel Q6600 and GTX260 over here. I guess, I’m good till late 2020s.

  27. jcvandan says:

    Awesome, was just pricing up a new build this morning so I can have a machine sat next to my TV for use with my Steam Controller. I can happily wait a few months if the rewards are potentially this good. I just googled and there's talk of a Q2 release, which ain't so long.

  28. sirdavies says:

    This should be bumped to the top of the page every week through the holidays. I was going to go from a 660 Ti to a 980 this Christmas. Thanks for saving me the frustration. Guess I'll stick to slightly mediocre AAA experiences for now :P

    • goettel says:

      I’m in the same 660Ti boat as well, and it’s still pretty smooth sailing in e.g. GTA-V, Dying Light, Fallout 4, Witcher 3 and Battlefield 4. Just a slider down here or there goes a long way.

  29. Themadcow says:

    Ah, kudos for the Gillette reference. One of the best articles ever on The Onion. link to theonion.com

    • Jeremy Laird says:

      Glad somebody got it! :D

      • fabronaut says:

        that made me smile as well! definitely a stone cold Onion classic.

        excellent overview. I’ve been happily rocking my i5-2500K system with some obnoxiously large aftermarket 120mm tower cooler (some Noctua jobby, iirc?) and an MSI 7950 card with 3 GB of onboard RAM.

        the funniest part is that I bought this rig with the intention of overclocking it, but I haven’t even needed to bother. I’ll prolly get around to that some time in the next few months. Pretty much everything I’ve been playing is well over 60fps at 1080p or 1900×1200, depending on which monitor I’m using.

        I kinda wish I’d just held out and got one of the 9xx series Nvidia GPUs, since the old 8800 GTX I had in this system as a loaner was a damn good card for the price at the time. 7950 was certainly an upgrade, but I think the upgrade I want most is to try out that Freesync / Gsync tech.

        I couldn’t really put my finger on what was bothering me in some game engines in the past, and I think it boils down to being really annoyed by frame tearing in some titles. I mostly left V-sync off in the past since I didn’t always have a computer capable of hitting that magic native ~60fps target, so the input lag and stutter-y feel kinda threw a wrench in the works.

        about the only game I’ve tried recently that pushed it a bit was BF3, and there seems to be no functional difference between high and ultra settings. optimization seems a bit off, but doesn’t bother me that much. I suppose if I were to bother trying BF4 or the Star Wars one, maybe then I could benefit from the extra CPU clockspeed and what not, but mostly I’ve been playing Dota2 or poking at my back catalog, so it seems a bit irrelevant.

        we’re so friggin’ spoiled when it comes to the sheer volume of content at a pittance. it’s pretty staggering when you think about it!

        • Slackar says:

          I’ve been happily rocking my i5-2500K system with some obnoxiously large aftermarket 120mm tower cooler (some Noctua jobby, iirc?).
          Ditto, that’s what I have. Still works a charm. Overclocked it a bit, but nothing major.

          but I think the upgrade I want most is to try out that Freesync / Gsync tech.

          I couldn’t really put my finger on what was bothering me in some game engines in the past, and I think it boils down to being really annoyed by frame tearing in some titles. I mostly left V-sync off in the past since I didn’t always have a computer capable of hitting that magic native ~60fps target, so the input lag and stutter-y feel kinda threw a wrench in the works.
          This. I couldn't quite put my finger on it. This sometimes barely (sometimes more) noticeable but unbelievably annoying input lagging/stuttering/tearing, what have you, in some games, and the constant juggling of vsync on/off. Got myself a gtx970 and one of those gsync 144hz monitors. Man, what a difference. Everything's smooth, no worries about vsync, no more tears or input lag.
          http://www.scan.co.uk/products/24-aoc-g2460pg-144-hz-g-sync-gaming-display-displayport-1920×1080-350cdm2-80m1-1ms-usb-hub

          • Slackar says:

            Damn this editless comment system. The last part is missing some cite tags. Should be like this(fingers crossed it works this time):

            but I think the upgrade I want most is to try out that Freesync / Gsync tech.

            I couldn’t really put my finger on what was bothering me in some game engines in the past, and I think it boils down to being really annoyed by frame tearing in some titles. I mostly left V-sync off in the past since I didn’t always have a computer capable of hitting that magic native ~60fps target, so the input lag and stutter-y feel kinda threw a wrench in the works.

            This. I couldn't quite put my finger on it. This sometimes barely (sometimes more) noticeable but unbelievably annoying input lagging/stuttering/tearing, what have you, in some games, and the constant juggling of vsync on/off. Got myself a gtx970 and one of those gsync 144hz monitors. Man, what a difference. Everything's smooth, no worries about vsync, no more tears or input lag.

            link to scan.co.uk

          • Slackar says:

            I give up ๏̯̃๏

          • fabronaut says:

            I’m just as likely to botch things up in a comment, especially with my Wall O’ Text (TM) methodology. I feel your pain. :)

            The part that made me a bit sad about the whole “hey guys, now with Freesync!” market debut is that I picked up one of those 144 Hz monitors beforehand. (BenQ XL2411 — I thought there was a Z on the end there, but manufacturer product variants are a pain in the ass, so I’m gonna go with what’s on the bezel of the monitor.)

            I got it on sale for ~$250 Canadian (so about $100 – 150 off of MSRP? I think they refresh this model every year, it seems) and the colours seem pretty fantastic to me.

            Weirdly enough, I have it right next to a 1920 x 1200 BenQ PL2411 monitor which is all IPS LED with low input lag and all that goodness, and while it does seem a bit easier on the eyes, for whatever reason, the colours just don’t pop the same way when side by side.

            Granted, I haven’t the faintest idea as to what I’m doing with colour tuning. I just poke around for other people’s calibration settings and maybe download an ICC profile if I can find one. The whole “IPS vs TN” thing seems a bit overblown to me, but like I said, maybe I just didn’t calibrate it right or the anti-glare coating takes out some of that vibrant pop?

            Now I'm on the fence waiting for the next year or two of graphics cards and monitors to drop. I might hurl some money at one of those hilarious 34″ ultra wide IPS curved screens if they're any good, provided it has whichever asynchronous tech I settle on.

            Nvidia drivers seem less shite by comparison and I do like their more reasonable power efficiency, but I tossed a 750W PSU in this system, and I’d be surprised if I’m pulling much more than 450W even with a moderately power hungry AMD card.

            Also, a $200 premium on a monitor for that Nvidia G Sync functionality? HAHAHAHAHA NOOOOPE. That same $150 – 200 premium can buy me another monitor on sale or half of a fairly top shelf graphics card here in Canada. Our dollar is pretty crap compared to the US currency at present, so that doesn’t help, but really Nvidia? You guys are worse than Intel’s trickle of generational improvements, far as dollar value is concerned. </3

  30. Clavus says:

    Wonder if we’ll actually see those huge performance hikes, or if manufacturers will deliberately gimp their designs so they have some breathing space for the next few generations.

  31. RealityJones says:

    I chuckled at the part of the story where the writer claims, “if GloFo messes up, we’re all buggered” (or something like that).

    You meant to say that people who care about AMD cards are buggered. I’ve avoided AMD/ATI videocards for years due to the fact that they always seem to have problems first when new games come out.

    I have no specific loyalties to Nvidia, but fewer issues on a regular basis means I have no ongoing need for AMD hardware really.

    • C0llic says:

      As an Nvidia fanboy I’d agree with the ‘we’re all buggered’ statement. Competition is good. CPUs no longer enjoy that; I want at least two competing companies for graphics cards please !

      • TacticalNuclearPenguin says:

        Yeah, but for that to happen, I'm starting to think AMD would be better off if someone else bought them.

        Samsung as a random example.

    • gritz says:

      Yeah no. I tend to only buy Nvidia myself, but I know the cards I’d be getting from them would be slower and more expensive if AMD wasn’t a legitimate competitor.

    • Jeremy Laird says:

      As C0llic suggests, you miss the point. The point is that if AMD fails to compete with Nvidia, then Nvidia will have less reason to bring new GPUs to market and Nvidia fans then suffer too.

      Competition is good whether you prefer AMD or Nvidia.

  32. mattevansc3 says:

    Wasn't part of the TSMC problem the fact that NVIDIA and AMD are relatively small customers and less of a priority than others, notably Apple and Qualcomm? TSMC has been the sole supplier of A9X chips for the iPad and jointly supplies the A9 chip for the iPhone, both of which are on 16nm fabrication, while Qualcomm made up over 50% of TSMC's 20nm orders. The sheer size of those orders was rumoured to have delayed NVIDIA's and AMD's move to smaller fabrication methods.

    • Jeremy Laird says:

      Yup. And no doubt that has helped to convince AMD to roll the dice with GloFo.

      Next time graphics gets deprioritised for smartphone chips by TSMC, it could be that Nvidia is hit but not AMD. A year or two on a superior prod process would be interesting for AMD, that’s for sure. I would guess AMD is quietly hoping for this scenario to become reality!

  33. Winterborn says:

    Just upgraded this November to a factory overclocked R9 390. Won’t be able to afford to upgrade graphics again till at least 2017. Thought this card would do me well for the next three years or so. My despair. Taste it.

    • ChairmanYang says:

      I’d imagine you’d be totally fine for about three years for cross-platform games at 1080p/60 FPS. Console hardware isn’t going to change until then, and most big games are going to be based on console hardware.

  34. 3DFruitBat says:

    I picked up a GTX 960 for $200 around launch, is it safe not to expect a 16nm replacement until late next year (or 2017)? I’m pretty happy with its performance, since I don’t play many bleeding-edge games.

  35. poohbear says:

    Well, finally something exciting on the hardware end. I felt it stagnated the past couple of years, and for CPUs the past 4-5 years (ever since Sandy Bridge). So it's looking exciting and I can't wait to upgrade my GTX 970 (if the Pascal equivalent does indeed provide double the fps!)

    On a side note, why is the reply box waaaaayyyy at the bottom of all the comments? Shouldn't it be at the top for easy access? Just putting that out there for RPS!

  36. MattM says:

    For the first time in my life I've got disposable income burning a hole in my pocket. It's a combo of getting a professional-level job while still living the life of an underemployed 20-something. So I can't wait for the next gen of GPUs. I'm planning on getting the 980Ti equivalent and using it to drive a new 1440p IPS G-Sync monitor. I'm saving games like Crysis 2 and 3 and The Witcher 3 for this new system. I just wish the new GPUs would come out in early Q1, but it seems like it's probably going to be more like Q2 based on the lack of buzz right now.

  37. Grovester says:

    “2016 is looking hot in a lot of other areas too. Like…displays.”

    Sigh. I'm writing this using a brand new Asus ROG Swift PG279Q. And it's marvellous, but I would be rather annoyed if something game-changing comes along to beat it.

    • fabronaut says:

      If it makes you feel better, I picked up what I think is the last model of 144 Hz refresh monitors before they moved onto tossing Gsync / FreeSync into everything… -_-

      Mind you, I grabbed it on a fairly substantial discount. Trying to future-proof tech upgrades is a bit of a crapshoot though. The monitor you picked up is apparently a pretty fantastic piece of kit. It's fun to drool a bit over the next wave of shiny, but it's nothing like what it used to be even a decade or so ago, let alone in the halcyon days of the early 90s (as far back as my memories go, anyhow). Hardware cycles are fairly extended, so in practical terms it seems to be less about better numbers leading to substantial increases, and more about iteration in implementations that are actually somewhat reasonably priced.

      I assume the functional difference in newer monitors will be things like “now with more curvature!” and “competition and market saturation results in heavily reduced price premiums and increased QC!” And hey, worst case scenario, you can probably sell that monitor for 70% of the list price with a bit of luck, as companies seem to shy away from completely leapfrogging their flagship models in subsequent iterations. Possibly because they can’t produce those improvements at a sane price at scale, or because they don’t want to piss off everyone who just shelled out a heavy premium for the latest and greatest? /peanutgallery

  38. lexarflash says:

    In terms of performance/price ratio, the mid-range cards work well, and the high-end products cater towards higher-res but aren't quite powerful enough to do the job. Also, games are usually built to run well on consoles or common PC specs, rather than on the 1% of fastest machines. Only maybe 10 gaming titles will really tax your machine, so for now the top GPUs come at a premium.

    • Cronstintein says:

      Typically recent games have been designed to operate at 30fps base with frequent dips well below that. Now “run well” is somewhat subjective but if I have a game performing that way, I would certainly not deem it to be running well.

      There are occasional counter-examples (Halo and Batman come to mind) but in general, the consoles have been having a rough time of it lately.