2016 Awesomeness: Nvidia’s New Pascal Graphics

If it was a car it would be a gold-wrapped, kleptocrat-owned Bugatti Veyron ostentatiously double parked outside a Knightsbridge hotel. It’s still bloated, it’s still overly complex and you still can’t afford it. But it’s a graphics chip and a harbinger of things you might actually be able to buy. I give you Nvidia’s new Pascal GP100, a 15.3 billion transistor beast and the beginnings of that 2016 awesomeness I promised for the new year. In other words, if you’re thinking of buying a new graphics card, you might want to hold fire. Meanwhile, Intel has also taken the wraps off a massive new chip you can’t afford and the final piece of the Laird Gaming Dungeon™: Driver Edition has arrived. Yup, I’m liking 2016.

Nvidia’s Pascal, then. It’s a whole new family of graphics chips to replace the incumbent Maxwell crew. The first to break cover is known as GP100, which also happens to be the flagship chip of the family. The snag is that it’s been announced as a Tesla-branded high-performance parallel compute product rather than a gaming video card.

You could argue that’s an academic distinction given that the price of the future gaming card based on GP100 will be out of reach of almost all of us. Instead, what matters is that the new Tesla P100 board confirms that Nvidia’s new Pascal chips are incoming in all their 16nm FinFET glory.

I’ve been through this before, but the short version is that PC graphics chips have stagnated in recent years and 2016 is going to see what amounts to a double-jump in chip manufacturing tech from 28nm transistors to 16nm. And that should translate into a much bigger leap in performance than usual with this new generation.

You can rest assured this is one graphics card that can play Crysis. Well, it could if it was a graphics card. Which it isn’t. Oh well.

For the record, the new GP100 chip is an absolute monster at around 600mm2 and over 15 billion transistors. For context, an eight-core Intel Core i7 is just 356mm2 and Nvidia’s previous biggest graphics chip, as found in the Titan X, rocks in at just eight billion transistors.
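If you fancy sanity-checking that “double-jump” claim, some rough arithmetic does the trick. This is just a sketch: the GP100 figures come from the article itself (around 600mm2, 15.3 billion transistors), while the ~601mm2 figure for the Titan X’s GM200 is the commonly cited die size, so treat all the numbers as approximate.

```python
# Back-of-envelope transistor density comparison, 28nm GM200 vs 16nm GP100.
# Figures are approximate: GP100 numbers from the article, GM200 die size
# (~601mm^2) is the commonly cited value, not an official one.

def density(transistors, area_mm2):
    """Transistors per mm^2, expressed in millions."""
    return transistors / area_mm2 / 1e6

gm200 = density(8.0e9, 601)    # Titan X's GM200 on 28nm
gp100 = density(15.3e9, 600)   # Pascal GP100 on 16nm FinFET

print(f"GM200 density: {gm200:.1f} Mtransistors/mm^2")
print(f"GP100 density: {gp100:.1f} Mtransistors/mm^2")
print(f"Real-world density gain: {gp100 / gm200:.2f}x")

# Naive geometric scaling from the node names alone would predict
# (28/16)^2, i.e. about 3.1x. The shortfall is a reminder that marketing
# node names stopped tracking actual feature sizes a while ago.
print(f"Ideal scaling from node names: {(28 / 16) ** 2:.2f}x")
```

So the shrink delivers roughly double the transistors in the same silicon area, which is exactly why this generation should be a bigger jump than the usual annual refresh.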

Anyway, other highlights include a crazy 4,096-bit memory bus, just like AMD’s Radeon R9 Fury but with second-gen HBM or high-bandwidth memory and 16GB of the stuff instead of the Fury’s arguably problematic 4GB buffer.

Elsewhere, you might argue the GP100’s key metric is its 3,840 eye candy-creating shader thingies, or CUDA cores as Nvidia likes to call them (a few of which are disabled in the Tesla board). That compares with 3,072 from Nvidia’s current gaming monster, the Titan X.

Nvidia’s Pascal GP100, yesterday

However, for Pascal, the internal structure has changed, with each of Nvidia’s so-called Streaming Multiprocessors (SM) specced up with 64 CUDA cores where the outgoing Maxwell family has 128 CUDA cores per SM.

What’s more, Nvidia has loaded this new chip with lots of double-precision compute performance that’s irrelevant to gaming but will soak up some of that epic transistor count. The point being that there’s extensive architectural change here and a direct comparison of the CUDA core count probably makes no sense, especially for GP100.

But here’s the point. Nvidia says the chip is in production right now, as is the fancy new HBM2 memory. One day, maybe late this year but perhaps further out, it will make its way into the GeForce Titan Über Alles, or whatever the hell they call it, and it will be pointlessly expensive and hardly any of us will buy it.

Pathetic 2015 humans and their puny Titan Xs beware, Pascal is coming…

But they’ve made this mega GPU work. And that bodes well for the smaller Pascal chips many of us might buy. I would guesstimate late summer for the first Pascal for us lot, a replacement for the current GeForce GTX 980, with more mainstream members of the Pascal family following swiftly.

Thus, the main moral here is that if you are in the market for a new graphics card, hold on a few months. This kind of technological leap doesn’t happen every year. 2016 is going to be special.

So, at worst whatever you’re planning on buying will be a bit cheaper. At best you may be able to buy dramatically more performance for your money. Watch this space.

As for Intel, it has just released its new Broadwell-E/P CPUs. A bit like Nvidia and the new Pascal chip, Broadwell-E is being released first as a heavy iron product for servers. And just like Pascal, the numbers are bonkers.

This is what a 24-core block diagram looks like…

The top Broadwell-E/P will have 22 cores and 44 threads – count ’em! (actually the top chip has 24 cores in silicon, it’s crazy stuff) – but it’s likely that by the time it makes it into desktop PCs, those numbers will be 10 and 20, with a base clock around 3.0GHz and a Turbo clock of about 3.5GHz.

Of course, as is the case with all of these high end chips, it lags a generation behind the mainstream Skylake Core i3, i5 and i7s in terms of architecture. But the point here is that if AMD actually delivers with its new Zen chips, Intel has a cupboard full of chips up to 22 cores with which to respond.

I’m sure it won’t need to go anywhere near the full 22 core chip on the desktop. But the 12 core, 14 core or even 18 core variants?

And, of course, what with DX12 and Vulkan promising much better threading support for games and VR potentially shaking things up, too, having loads of CPU cores might actually be relevant for gaming. It’ll be fun finding out.

And finally, thanks to Playseat.com I finally have access to a proper driving pod. As they say on the BBC, other makes of driving seat are available, but this is the one currently resident in my house.

I am excited. Are you excited?

Just having the seat sitting there has me genuinely more excited about the whole escapade. I do love a good bucket seat. Anyway, the empirical work of data gathering has begun, so tune back in a few weeks or so from now for what I shall confidently pitch as, but won’t remotely be, the definitive dissertation on modern driving games.


  1. mattevansc3 says:

    It’ll be interesting to see what path NVidia take with their mainstream GPUs.

    With the Fury series, especially the Nano, AMD is trying to improve performance while downsizing the physical aspects of their cards. With the GP100, NVidia has gone the other way: it has kept the physical GPU the same size and used the die shrink to pack more guts into the same space.

    NVidia may well keep the performance crown but AMD might just grab a better share of the mainstream.

    • TacticalNuclearPenguin says:

      Big efforts in sizing are going to be less relevant with HBM tech, however, which already helps cut down a huge chunk.

      What you end up with is a very expensive, enthusiast level card that’s way “smaller” than before, and it’s hard to imagine the typical target customer having real estate problems.

      • Kaldaien says:

        I think you may have the wrong idea here. It’s nothing to do with physical real-estate. The days of super long ISA cards like the AWE32 are over. A top-end enthusiast card occupies the same physical space as NV’s mid-range cards (the low-end are silly little critters not even worth discussing).

        The benefit of a smaller package is efficiency, thermal and electrical. You are not going to capture the mainstream market by producing a GPU with identical TDP as the Titan X for half the price. Not everyone has a PSU capable of powering the more demanding cards.

        I for one would welcome a GPU with the same compute power, but a third the TDP as my 3 GTX 980s. That isn’t the direction NVIDIA is going here :-\

      • mattevansc3 says:

        This card uses HBM2 and uses a GPU with the same die size as the Titan X even though it’s being done on a 16nm manufacturing process as opposed to the GM200’s 28nm process.

    • pillot says:

      Not much hope of this, nvidia has been dominant for so long and nothing indicates this will change. Somewhere in the region of 55% market share vs 25%

  2. AthanSpod says:

    link to reddit.com


    “Update 14: And Tesla P100 is already in volume production – maybe good news for consumer side too? Shipping ‘soon’ for cloud use and ‘consumer’ use by Q1 2017.”

    and I’ve seen 2017 cited for consumer/gaming Pascal GPUs elsewhere (such as link to tomshardware.com ) in the past week as well. So don’t hold your breath if you really do need to upgrade before then.

    • Jeremy Laird says:

      Not clear if that refers to GP100 specifically as a consumer part or Pascal generally.

      There’s some NV form for not bothering to put its ‘big’ chip into a consumer card immediately, thereby allowing for a launch of the mid-sized GPU to much fanfare as the fastest thing ever and then being able to repeat that a bit later with the big chip.

      If they go with the big chip first, the mid-sized can’t be launched as the ‘best thing ever’ as it will be second string from birth.

      But we will see.

      • mattevansc3 says:

        Anandtech have said that a high cost and general low availability of HBM2 memory means NVidia aren’t likely to get a consumer level P100 out until next year.

        • Hobbes says:

          Making my 980Ti the best decision I’ve ever made. By the time Pascal ramps up to numbers where I can warrant going for it, I’ll be able to resell this bugger for £300-350 and be able to upgrade for £150-200. Works for me.

          • RobT says:

            I’d expect Nvidia to drop their retail prices on old 980Ti stock fairly significantly once Pascal comes out. As you can buy a new 980Ti for £500 now, you’ll be doing well to get £300-350 for a used card once the next gen cards are released.

          • TacticalNuclearPenguin says:

            If you plan to always sell and upgrade to the next monster, then yes, buying the big one at every generation is what makes the most sense if you also factor in user’s satisfaction, with Titan being a special scenario that i wouldn’t recommend.

          • Sakkura says:

            They won’t drop the price of the GTX 980 Ti much. It’s on old 28nm, where producing that many transistors is simply expensive. There isn’t much room left for price drops on 28nm hardware in general. Look how cheap you can get a ~350mm^2 GPU today; those used to cost over $500 when 28nm was new.

            The main advantage of new process nodes is that they allow you to put more transistors on a given size chip, and thus reduce the cost per transistor (resulting in more performance per dollar).

    • caff says:

      Absolutely. We are already into Q2 2016, and with no definite consumer announcements from Nvidia, it’s very unlikely we’ll hear of a product launch until next year.

  3. TillEulenspiegel says:

    Can’t wait for the GTX 1080. Might be the first time I actually buy a high-end GPU, albeit not the crazily expensive Titan model. A process shrink also means they can keep the power consumption / heat generation under control.

  4. LarsBR says:

    Are those KRK Rokit studio monitors on your desk?

    Also, this is the most excited I think I’ve ever seen you on this site :-)

  5. Laberheinz says:

    I am NOT excited. No two of the GPUs shown at the Nvidia event were mounted the same way, so it’s quite possible that there is still no working product at all. Not that AMD/ATI has brought more than a few slideshows, but at least they will present their new graphics cards at Computex.

    • RobT says:

      AMD have done a bit better than slide shows, they demoed Polaris at CES at the start of this year.

  6. fish99 says:

    I’d be excited about Pascal if the 970 replacement was likely to arrive this year, which from what I read, it isn’t. I do want to replace my 970 though since I have a dodgy one which resets my PC under extreme load and I CBA to RMA it.

  7. LarsBR says:

    Jeremy, out of interest, I looked up the required and recommended CPUs for Quantum Break and compared them to my 2011 clunker. Absolutely no reason to upgrade. Sad!

    link to cpubenchmark.net

    • Jeremy Laird says:

      Yeah, with the possible exception of VR, no doubt the reality is that any half decent quad-core CPU from the last, I dunno, four or five years will remain good enough for games for a while longer.

      Even if DX 12 and Vulkan do deliver, it’ll take a while for that to feed through into games. Nobody is going to be building a game that needs an eight-core CPU for a very, very long time.

    • TacticalNuclearPenguin says:

      Intel chips from Sandy Bridge onwards are still supposed to be fine for a while, although nowadays I’m starting to see a trend in benchmarks that, in some games, slightly favours the i7 variant with HT, whereas back then an i5 was the same.

      Not that you need an i7 or anything, nor am I implying that there’s an important difference, but it’s a slight hint that things are somewhat changing.

      I’m personally sitting on my 2600k for another good while, waiting for Skylake-E.

      • obowersa says:

        I’m still kicking around on my 1090T and it does its job admirably.

        Will be interesting to see what finally forces me to upgrade away from it

        • Unclepauly says:

          Not to be an a-hole or anything, but there are plenty of games out now where the 1090T fails to keep 60fps while the i5’s and up do.

  8. DavishBliff says:

    I haven’t bought a new video card in five years or so. I’m guessing this means a price drop is coming for 970s/980s? What’s the consensus value card right now, a GTX970? I realize this isn’t exactly the perfect space for buying advice but hey

    • ikehaiku says:

      Yeah, 970 for 1080p, 980 for higher resolutions and/or refresh rates is the kinda sorta ballpark. But that’s on the “safe” side, I guess even a 960 could do wonders in almost all games if you stay away from super-taxing graphical options.

    • OmNomNom says:

      If I were you I’d try and buy a used 980. Plenty of people have upgraded to the Ti and you can get 980’s at a fraction of the sale price. Don’t buy a new one at full whack.

    • RobT says:

      When a new generation of cards comes out all the old top tier cards tend to drop into the middle range with smaller pricing gaps than they currently have. This means the best value graphics card right now is unlikely to be the same as the best value once the next gen are released.

      If you’ve waited 5 years already, I’d wait another couple of months longer and see how all the prices drop. Once you see the new prices you can make an informed decision on what to buy. You might also be able to pick up a used card at a bargain basement price.

    • lglethal says:

      I’m super happy with my Asus GTX 970 STRIX OC running everything at 2560×1440 at 90Hz refresh rate.

      Playing Rise of the Tomb Raider on ultra settings (just dropped anisotropic filtering one notch), and I have not had even a single noticeable drop in fps. Saying that, it could be happening and I’m just not noticing thanks to my G-Sync monitor, but I am still super happy with this card.

      So based on my experiences, go the 970. But of course your mileage may vary! ;)

      • fish99 says:

        970s are super nice cards, I have one, but let’s not pretend it runs everything on full settings at 1440p at 90 fps (dunno why you say 90 Hz refresh rate since that’s irrelevant, it’s framerates that matter, any card can do high refresh rates), because it doesn’t come close. Division, GTAV, Arkham Knight, Dying Light, the new Dragon Age, Witcher 3, Fallout 4, just a few games that will dip below 60 fps at 1080p, and not even on full settings.

        It’s a great card, but exaggerating doesn’t help inform anyone.

        • deanimate says:

          Fully agree. I have a 970 and when people say stuff like that I start wondering if my card is missing a load of cores or made out of mustard.

    • Sakkura says:

      There isn’t much room for price drops for current cards.

      Anyway, an R9 390 is better value than a GTX 970.

  9. cairbre says:

    I’m hoping to do a complete upgrade this year and don’t want to pull the trigger until these cards come out. I was planning on September/October; is this a realistic timeframe? I have a Sandy Bridge processor from 2011 so I think it’s time.

  10. Vesperan says:

    I feel like the article is trolling me, successfully.

    AMD also makes graphics cards!

    We know this because AMD has shown off actual live working graphics cards and teased their power consumption / performance. And we know they’re coming in the next few months.

    Unlike Nvidia, who have only announced a future top end card for consumers in early 2017. The silence on anything mainstream has been near absolute. Either they’re sitting on something stunning, or nothing (surely not?).

    Nvidia will have some fantastic products in 2016. So too will AMD.

    …so lets not forget that AMD/Radeon graphics cards actually exist?

    • Jeremy Laird says:

      I know what you mean. But this news is Nvidia news. When there’s a similar AMD announcement, it will be about AMD. And there’s nothing wrong with that. Except, of course, somebody will complain that a story about AMD GPUs somehow isn’t all about Nvidia GPUs.

      I linked to the story that previewed the upcoming chips from both teams. But I’m not going to just repeat all the AMD info from that link. There is no call to do that.

      Anyway, if AMD beats Nvidia to market by a long way with this gen, more power to them. I’d be surprised if it was going to be that one sided, but we shall see. AMD could do with having the upper hand somewhere for a while.

      We all could, to be honest, because we all need AMD to do well. But I’m still not going to make a story about an Nvidia announcement all about AMD. Sorry!

  11. OmNomNom says:

    It’ll be a great upgrade for sure, but we all know that it’ll never be more than 2x the performance even if it has the potential. There is no business sense in it being super duper awesome.

    • Jeremy Laird says:

      This is true. They will not unleash everything from day one.

      However, if both AMD and Nvidia have lots of performance in hand, then I am sure that we’ll see a fairly rapid sequence of one-upmanship take place.

      I think we’ll see performance scale more rapidly over the next few years than the last few years. Then it will slow down again, as I think Moore’s Law is looking a bit broken and the rate of transistor shrinkage may have slowed for the long term.

      • OmNomNom says:

        This I would hope for, like the old days of AMD having the most raw power but Nvidia having niche performance and features. A little competition is always good for us consumers.

        I really hope AMD can deliver this time round as efforts in recent years have just failed at the high-end really.

  12. 2Ben says:

    Yeah well, my Vive should arrive in May, and my Uber Puny GTX760 isn’t up to scratch by a long shot…
    What to do, chief?

  13. brucethemoose says:

    No, THIS is GP100:

    link to jimcorace.com

    A Bugatti is street legal and “practical”. If you somehow got one, you could use it yourself.

    If you got one of those Tesla GP100 GPUs, you couldn’t stick it in your computer, and you couldn’t game on it. No, it’s more like the trophy truck… It LOOKS like you could use it, and it’s REALLY good at what it does, but the thing can’t even drive down the road. And as an individual, you generally can’t buy one, even if you can afford it.

    • Kaldaien says:

      You would be frankly surprised. The headless compute GPUs that NVIDIA sells are used by more people than I have fingers to count for gaming ;)

      I don’t understand it, but for some a device that must be streamed because it has no display output, and costs an absurd price for no real benefit scratches some sort of impractical hardware itch.

  14. phuzz says:

    I was thinking I’d be upgrading this year, but my R9 290 (not even a 290X) has been handling everything I throw at it just fine so…
    Mind you, I’ve been thinking about getting a new monitor this year, hopefully something a bit bigger than 1080, so I might need new graphics then.

  15. Ericusson says:

    And then have to wait some more for the mobile versions of the chips.
    To be honest, I am almost not convinced waiting for new laptops is such a good idea.

  16. onodera says:

    Looks like your place is bigger than Walker’s. Have you bought a Vive?

  17. Risingson says:

    Bought an asus gtx970 yesterday. Don’t spoil me the hype, Jeremy.

  18. mactier says:

    Shiny cars are probably the worst graphics demo ever. They’ve looked basically the same since 2008, and in mobile games too.

    And here I would need a new graphics card and computer asap. That’s too bad. Maybe I will have to go with onboard graphics as a compromise, playing mostly 2D games, while waiting for the new cards…

    • Llewyn says:

      Basically the same since 2005 in the case of that one (it’s a photograph).

  19. MyKeyboardSucks says:

    Nothing about this excites me; it only fills me with dread and anxiety. Six months ago I spent £400 on a GTX 980 and already I’m being told “it’s old tech”, “recommended: GTX 980 Ti”, “new hardware on the way” etc etc. Six months and I’ve dropped from ultra 1440p to medium-high 1080p with certain releases, Quantum Break being a good example. I just want 60fps! Please don’t make me go back on the game again for the sake of smooth gameplay. Please Nvidia

    • Eight Rooks says:

      I get what you’re saying, but given how horrendously optimized Quantum Break seems to be for pretty much everyone, I don’t think it’s the best benchmark for “Oh, God, I need a new graphics card already?”

  20. CoD511 says:

    I think there’s an important distinction to make, between using the codenames GP100 and P100. I never heard GP100 mentioned once by Nvidia in the conference.

    The P100 chip for example is completely compute focused with no application to gaming; literally in the sense that it’s lacking crucial units we need for rendering, such as ROPs. Nvidia released a diagram of each SM and potentially a whole die layout too, but I’m not sure. Regardless, it’d be difficult for such a chip to be used for graphics at all and I imagine it’s a separate design specifically for the compute segment, a very powerful one with an impressively large die at 610mm2 (largest ever made by them or AMD, I believe), with the majority of SM units enabled, leaving the impression of good yields.

    Despite how useless it’d be for a regular consumer (it doesn’t use PCIe, for starters!), the successful creation of enough chips with almost fully enabled dies to mass produce and supply to customers is a very promising sign, along with the high clock speeds and the fairly moderate, all things considered, 300W power draw.

    But when GP100 eventually does come, after GP104 and after this separate product line of P100 chips, it bodes very well for a large die focused on gaming-specific requirements alone, without having to squeeze significant compute capability into the die, since a different line-up is addressing that. That means lots more space to use to improve performance on a die that could potentially feature 15 billion or more transistors dedicated to maximising gaming potential. Finally, there’s no hampering by having to squeeze the needs of compute plus gaming onto the same card, which could be a seriously big deal. Regardless, my original point: GP100 is likely going to be a different chip layout with specific rendering capabilities entirely aimed at gaming. P100 is a compute-only chip, as evident from its inability to rasterize any pixels. Still in shock at the monster of a chip they achieved, however.

    • Jeremy Laird says:

      P100 isn’t a chip codename, it’s the product name for the compute offering:

      link to nvidia.com

      I haven’t seen a die shot and don’t think there’s one out there, but a block diagram isn’t going to include sections of the chip that aren’t used in a given implementation. So a block diagram of the P100 implementation wouldn’t show ROPs even if they are present in silicon.

      Everything I have seen indicates that Tesla P100 contains a GPU codenamed GP100 and while it’s far from impossible Nvidia has made a pure compute chip here with the raster bits stripped out, there is no particular reason to think that is the case.

  21. Jeremy Laird says:

    I am more optimistic than some here re product launches. Maxwell is already two years old. Nvidia has been selling 28nm GPUs for four years now.

    A Pascal launch next year would make Maxwell three years old. I think Nvidia will be keen to push the new chips out, especially if AMD releases anything competitive.

    I remain hopeful for GP104 this year.

  22. Audiocide says:

    Yeah we get it. You can’t afford it. Jeez.

    Or hold on. Let’s call rich people “kleptocrats,” ’cause you hate that they can buy things. Cute. And very English working class of you.

    • Jeremy Laird says:


      For the record, I’d love a McLaren F1, which would cost much more than a Veyron. Or a 250 Lusso. Again, more money than a Veyron. But both were engineered for adults. The Veyron was built to please manchildren.

      And for the record my attitude is indicative of snobbery directed at new money vulgarity and thus quite the opposite of the working class chippiness you so charitably implied.

      In the meantime, God speed!