Intel’s New Uber CPU And The Future Of PC Gaming

Once upon a time, the launch of a new Intel uber CPU was unambiguously exciting. You’d have the raw appeal of the chip itself, capable of new heights of computational prowess. But you also got a glimpse of the near future for more mainstream CPUs. These days? Not so much. So what to make of the shiny new Intel Core i7-6950X and its 10 mighty cores? Is it remotely relevant to gaming? While we’re on the subject, are CPUs generally all that relevant to gaming now? And what might recent announcements regarding high-performance respins of the Xbox One and PS4 consoles tell us about all this?

In the good old days, Intel had a single CPU socket and, essentially, a single CPU. Often, little separated Intel’s top chip from more mainstream processors beyond clockspeed. That meant that any new uber CPU brought with it an unspoken challenge – can you match me by clocking the twangers off my cheaper siblings?

Sometimes the answer was yes and suddenly £150 and a bit of BIOS butchery bought you gaming frame rates to match a £500 CPU. Now, everything is just so bloody complicated. For starters, Intel now has two sockets for desktop processors – currently LGA1151 for mainstream systems and LGA2011-v3 for high-end rigs.

The latter is arguably more a server and workstation platform re-badged for desktop use. But that’s ultimately academic. What matters is that Intel’s platform bifurcation makes for a total disconnect between the aspirational high-end and what most of us can afford.

Making matters worse is the shift from clockspeed to core count as the enabler of major performance increases. In other words, kiss goodbye to overclocking as a method of making cheap chips perform like their big-ticket siblings. You simply cannot make a quad-core CPU seem like something with double or more the core count with a few BIOS changes.

The good old days of turning a cheap chip into a champ with a bit of overclocking…

Of course, the caveat to all this is that games have not historically scaled well across lots of cores. By that I mean that you can add lots of cores to your CPU, but games usually haven’t been able to make much use of them. The returns tend to diminish pretty rapidly beyond four cores.

The reasons for that are complicated. But it’s certainly true that the incentive for game developers to code for more than four cores is slim to none when Intel’s mainstream platform remains capped at four cores and AMD is failing, for the time being, to offer anything really competitive.

Anyway, that’s the context for the arrival of Intel’s mighty new Core i7-6950X. The headline numbers include 10 cores, 20 threads, a nominal clock speed of 3GHz and a maximum Turbo speed of 3.5GHz. Oh, and it’s part of the Broadwell family, which means it’s made of 14nm bits and pieces but actually last-gen compared to the newest Skylake chips for the LGA1151 socket like the Core i7-6700K.

So, yes, it’s Intel’s first 10-core CPU for desktops. But if you care to peruse some game benchmarks, you’ll find that doesn’t make for gaming greatness. In fact, it’s often beaten by the best quad-core chips.

You could argue that’s great because it means you don’t have to worry about the 6950X’s idiotic $1,723 (about £1,400 in old money) price tag (for the record, Intel has also launched three further Broadwell-E Core i7 chips with eight and six cores, the cheapest of which is almost affordable at $434). On the other hand, it does make you wonder if games might simply be different if the bulk of the installed base of gaming PCs had gone beyond four cores. Would there be exciting new things going on with AI, for instance?

Plenty of pins in an LGA2011, but not that many of ’em are relevant to desktop performance

Which brings us to those new consoles. Needless to say, the specification of games consoles has a huge influence on how game devs go about things. And at the E3 show currently under way, details are emerging of a rather unusual mid-life refresh of the two more performance-orientated lumps, the Xbox One and PlayStation 4.

Both essentially target virtual-reality capability. That means a huge uptick in pure graphics performance instead of the simple slimming down and lower cost that normally accompanies these kinds of refreshes. For instance, the refreshed Xbox, currently codenamed Scorpio, is said to be jumping from 1.23 TFLOPS of pure processing power to over 6 TFLOPS.

On the CPU side for these consoles, things are less clear. The upcoming PS4 Neo seems to be sticking with those awful AMD Jaguar cores, all eight of them, while the CPU spec for Xbox Scorpio is also sticking with an eight-core design but the identity of those cores is unknown.

There has been speculation that Scorpio might receive AMD’s upcoming high-performance Zen cores. But that seems unlikely to me. It would mean eight Zen cores plus a GPU that seems to be roughly on a par with AMD’s newly announced Radeon RX 480, all on the same chip. Not impossible, but it would certainly represent an unprecedented level of performance for a console. And it would be expensive.

Could Xbox Scorpio sport AMD Zen cores? Doubt it!

Whatever the case, if anything would motivate game developers to get good at coding for multiple cores, it’s the current PS4 and Xbox One and their fairly feeble CPU cores, which are based on AMD’s low-power ‘Cat’ architecture and thus closer to Intel’s Atom cores than to high-performance desktop cores. In practice, developers have found this tricky and, generally speaking, games still don’t scale well beyond four cores.

Then there’s Microsoft’s DirectX 12 API and the alternative Vulkan API, both of which supposedly help game performance scale across multiple cores. But the full implications of those have yet to be seen.

If there is cause for hope for significant CPU performance increases for mainstream PCs, it’s AMD’s upcoming Zen CPU which might, just might, shake things up. I’m not expecting it to go on sale for about a year, however, so it’s not exactly just around the corner.

It’s a complicated overall picture, then. Honestly, I don’t have a great feel for the extent to which developers are being held back by CPU performance. But CPU performance has certainly stagnated in mainstream PCs over the last five years and instinctively I’m not comfortable with that. Ultimately, I find it hard to believe it’s a good thing for gaming and game development innovation.

  1. padger says:

    I think most top-end games are held back by low-end GPU requirements rather than the CPU. Very few games are so reliant on the CPU that they will top out (although they do exist). Rather, the fill rate of any given game’s screen is going to depend on how well a GPU can handle all those clever effects that developers can now throw at a game. The faster it can chew through them, the better the game will look. On most gamers’ machines, I’d bet it’s the graphics card that would, if upgraded, give the greatest payoff, especially for a game like The Witcher 3.

    • Windows98 says:

      The only game I can think of that will totally max out a CPU is Dwarf Fortress.

      • TillEulenspiegel says:

        It doesn’t, not really. Your CPU monitor may report 100% usage, but the vast majority of that is actually waiting for data from RAM. And RAM latency hasn’t significantly improved in a very very long time.

        Fast CPUs are fast when they’re cranking through instructions. But when they have to wait for data, your shiny new CPU doesn’t make much of a difference.

        • muro says:

          That’s news to me – is there any documentation on that? I would assume if it’s waiting for e.g. RAM, it doesn’t report 100%.

          • overflow says:

            Well, as a game developer myself I can confirm his statement. The main issue these days is bandwidth, which has not increased at the same rate as CPU or GPU speed. A cache miss is a real thing: the CPU is running at full speed but it’s waiting many cycles to get the data so it can continue. (If I’m not mistaken it was around 21 cycles on an Intel CPU. That means 21 of your 3,000,000,000 cycles per second (3GHz) go down the drain, and this happens many, many times every second. The good coders can mitigate these.)

            link to
            link to

            These are not the ultimate charts, because it’s not only about the MHz, but they get the point across.

            Also, Intel and AMD have both hit a wall in speeding up their CPUs. That’s why they turn to parallelism and put in more cores, because that increases the theoretical (and practical) output. But if your code is not optimised for that, or it is not possible to make your software multi-threaded, then you will be stuck with one core and its speed.

          • muro says:

            Oh, sorry for not being clear – I don’t doubt that the CPU is often stalled waiting, I just find it surprising that the CPU would report being 100% used.

          • BertieDugger says:

            If a CPU core is executing instructions then it’s busy, i.e. not idle. If those instructions operate on data values which are not in the CPU’s cache then the data need to be fetched from RAM, which is slow, and executing those instructions will take longer than the same instructions would take to operate on values that are already in cache. Either way, the core is busy, not idle, because there are instructions being run.

            So a core at “100%” is busy, but not necessarily working optimally, because it could be busy doing things that involve long pauses as part of the work. Running at 100% with no cache misses will get more work done than running at 100% with lots of cache misses. You can’t tell how many cache misses are happening just by looking at the CPU monitor.
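            To make that concrete, here’s a minimal C sketch (purely an illustration, not anything posted in the thread): both loops below keep a core “100% busy”, but the second one chases a shuffled index, so almost every load misses cache and stalls on main memory, and it typically runs several times slower despite summing exactly the same numbers.

            /* Hypothetical illustration of cache-friendly vs cache-hostile access. */
            #include <stdio.h>
            #include <stdlib.h>
            #include <time.h>

            #define N ((size_t)1 << 24)   /* 16M ints, ~64MB: far bigger than any L3 cache */

            int main(void)
            {
                int *data = malloc(N * sizeof *data);
                size_t *idx = malloc(N * sizeof *idx);
                if (!data || !idx) return 1;

                for (size_t i = 0; i < N; i++) { data[i] = (int)i; idx[i] = i; }

                /* Fisher-Yates shuffle so the second pass jumps all over memory. */
                for (size_t i = N - 1; i > 0; i--) {
                    size_t j = (size_t)rand() % (i + 1);
                    size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
                }

                long long sum = 0;
                clock_t t0 = clock();
                for (size_t i = 0; i < N; i++) sum += data[i];        /* sequential: mostly cache hits */
                clock_t t1 = clock();
                for (size_t i = 0; i < N; i++) sum += data[idx[i]];   /* random: mostly cache misses */
                clock_t t2 = clock();

                printf("sequential: %.3fs  random: %.3fs  (sum=%lld)\n",
                       (double)(t1 - t0) / CLOCKS_PER_SEC,
                       (double)(t2 - t1) / CLOCKS_PER_SEC, sum);

                free(data); free(idx);
                return 0;
            }

            A CPU monitor would show the core pegged for both loops; only hardware counters tracking cache-miss events would reveal the difference, which is exactly the point being made above.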

      • hujsh says:

        EU4 (with the M&T mod) will definitely push some CPUs to their limit. Increasing CPU speed is the main thing that can speed that game up, because there are so many nations that each need to make decisions every in-game day, as well as various event probabilities that need to be calculated for those nations.

      • frightlever says:

        It’ll max out one core of your CPU, or, depending on what addons you’re running, maybe even two.

      • Unclepauly says:

        Almost every complex RTS game ever made will max out even the highest-end Intel CPUs. Every single Total War game ever made is CPU-limited.

    • blur says:

      The only exceptions being simulations. I can most definitely remember hearing my processor grind its gears late in Civ IV games, when all players had expanded and the AI needed to make decisions. Likewise, any games that do fluid simulations can start to chug due to a CPU bottleneck.

      • Rob Lang says:

        I had the same! A faster clock and RAM have made all the difference. Civ AI is (unfortunately) a good example of a task that cannot be parallelised and so is stuck on a couple of cores. Fluid dynamics is better, but you tend to have to take some liberties with the accuracy!

  2. yogibbear says:

    The i7-6950X and its ilk are worth it if you do video encoding, compression, editing, etc. They’ve never been targeted at gamers. They do do gaming well, but are obviously never price-competitive at doing so. If you do any sort of heavy video editing or whatever and time is valuable to you, then that’s what these processors are for.

    As for the rest of the article, I have zero faith in the next wave of worst-non-announcements-ever being either honest about their supposed specs or up to date, given all the flip-flopping about whether or not they are “new consoles” or just “4K current-gen consoles”, or whether they’ll be replaced by entirely new console gens very quickly. By the time you can actually buy a Scorpio I fully expect 6 TFLOPS to be uncompetitive. If you needed a reason not to buy an Xbone, an Xbone S, or a Scorpio, you certainly got it from the MS E3 presentation in spades.

    • rommel102 says:

      Not sure that is exactly fair…the Xbox One S is coming out in 2 months and will support HDR gaming, so that is a tangible increase over the standard model. I’m not going to be upgrading to it or anything but new buyers should be happy with it. It will also be the cheapest 4K UHD player on the market. So even non-gamers looking for a player will be tempted to get one.

      Scorpio will be a solid VR-capable console for less than $500, and I strongly suspect that Microsoft will be partnering with one of the prominent PC HMD makers to offer a discounted headset as well. That will get you in the VR door at a mainstream price point. It will also either provide native 4K gaming or Phil Spencer will be out of a job, given the internet outrage that would arise from not following through on that promise.

      • Nitai says:

        The first console to bring high-quality VR to your living room at a sub-$600 price will make money hand over fist, I predict.

      • PenguinJim says:

        “the Xbox One S is coming out in 2 months and will support HDR gaming, so that is a tangible increase over the standard model.”

        It’s only a tangible increase if you also buy an HDR screen, of course.

        (For me, the most exciting thing at E3 was the addition of Bluetooth to Xbone controllers. :/ )

  3. ZIGS says:

    Meanwhile, I’m still getting by with an i5 760 from 2010 (OC’d to 4GHz).

    • SirDeimos says:

      High five! Still getting by with an i5-750 at 4GHz.

    • dangermouse76 says:

      Me too! Go i5 760.
      I ran MSI Afterburner recently just to see what my CPU and GPU (GTX 660) were doing whilst I ran Fallout 4.
      CPU ran at about 60% utilisation and 60 degrees Celsius.
      The GTX at about 80-90% and around 65 degrees Celsius, using 1.8GB max of VRAM.
      That’s Fallout 4 at 1080p.
      Antialiasing: TAA
      Anisotropic Filtering:8 samples
      Texture quality through to Lighting quality: High
      Godrays: Low
      AO and Weapon debris: Off
      Screen Space reflections through to Lens flare: Off
      View distance: 75% across the board
      Distant object detail / Object detail fade ultra.

      I get the occasional drop down to 20-30fps but 60fps most of the time. Pretty happy, to be honest.
      But with a bucketload of tax coming back to me, I see a new build with a GTX 1070 in my future.

    • wu wei says:

      Core 2 Duo from 2008 paired with an R9 290X. Amazed at what it can still run.

      • Warlokk says:

        Yep, I’m still running a 4- or 5-year-old Phenom II 965 Black edition here, and it’s doing just fine… paired up with a GTX960, I’m running all the latest stuff usually at High settings, so I still have no reason to dump money into a new MB/CPU just yet.

    • muro says:


      While I paid too much money, I recently upgraded to a last-gen workstation:
      link to

      Before that I had a Dell workstation (E5-1650) I’d grabbed for $400 on eBay. Buying the newest machines for such small improvements is a complete waste of money.

    • mr.kock says:

      Hell yes. Running the same setup, but a lazy OC to 3.4GHz.
      Gonna push for 4GHz and consider OC’ing my young little 970 =)

    • Chorltonwheelie says:

      i5-3570K bowling along at a rock steady 4.6GHz is still shredding everything thrown at it.

      Haven’t seen anything to tempt me to upgrade my cpu anytime soon.

      The GTX1080 on the other hand….

      • Unclepauly says:

        Dude, an i5 is in the top 5% of gaming performance when it comes to CPUs.

    • Pharaoh Nanjulian says:

      Bring on the willy-waggling! Here it’s an E8400 and 4GB of DDR2. No troubles with Verdun, World of Warships, Heroes and Generals, Red Orchestra 2…

      I saw a school’s throwing out pile of superseded technology today: a room of a hundred perfectly adequate monitors. Keyboards, thin clients, amplifiers, the lot. We are a race of little brain, rushing for the technological imperative at the expense of the world. I am reminded of an American tourist I heard in the archaeological museum at Constanta: “See honey, even then they were striving for the afterlife, and they didn’t even have Christ!”

  4. SuicideKing says:

    I think it’s been conventional wisdom in tech circles since probably 2011 and Sandy Bridge that games really don’t show much improvement beyond a Core i5. Yes, there is the odd game that’s CPU-bound (Arma 3, for example), but even there you want higher clock speeds over MOAR COARS. In recent times, DDR4 and SSDs may make for smoother gameplay in “open world” games that keep streaming data. Probably the few games that could benefit from a standard i7, fast memory and fast storage are simulators.

    X99 is a prosumer platform. Outside of simulators and/or people who like to record gameplay footage, edit, encode and upload it, there’s no real point from a gaming standpoint. Sure you get more cores, but an older platform and lower clock speeds. On top of that, few game engines scale beyond 4 to 6 cores, and from whatever I’ve seen even driver stacks from Nvidia and AMD stop scaling beyond 6 cores. Heck it was probably 6 threads, don’t remember well enough.

    As for Zen, it may match Haswell, and it’s probably not going to come out before Q1 2017 (best case). Kaby Lake from Intel will probably be out sooner or at the same time.

    Finally, it’s not so much “CPUs have held back gaming performance” as “DX11 is single-threaded, parallelisation of algorithms is hard, and physics and engineering are hard”. Sometimes it’s on the developers.

    Anyway, for the most part, you’re going to be GPU-bottlenecked. For the mainstream market (1080p and below, 60Hz refresh rates) this will probably be solved over the course of this year, now that GPUs are being made on 16/14nm processes.

    • Unclepauly says:

      At 1080p, a GTX 980 or better is CPU-bottlenecked in almost every game.

      • Unclepauly says:

        tbf though, that bottleneck usually hits at about 100-120 fps.

        • brkbeatjunkie says:

          I respectfully disagree. As an owner of a 1080p/144Hz panel and a GTX 980, I regularly see frame rates over 144 in Battlefield, Overwatch etc. when vsync is off.

  5. satan says:

    Yeah my last system upgrade was more of a ‘shit… has it been that long since I upgraded?’ than a.. ‘damn that new X game running on that new Y engine looks incredible, better start pricing a new rig’.

    • tnankie says:

      My last two have been “oh, my X has failed” (SSD and GFX card). Though my motherboard might be the next to go, and that might be new CPU time… I think I got this one in 2010 (I really don’t know).

  6. Capt. Bumchum McMerryweather says:

    Most of this is pretty irrelevant, considering there are virtually no games out there that have the remotest impact on a half-decent processor. The 4570, a middle-of-the-road processor from, what, three years ago, still performs incredibly for both gaming and general use.

    It’s going to take at least another 2 – 3 years for the rest of the hardware to catch up with even that processor, let alone the 3 bazillion core, superhypermega-threaded triple die 2nm monster that will inevitably hit the shelves.

    • Unclepauly says:

      RTS games and simulators would like to have a word with you. Also, ARMA 3 has set up a sniper post about 2 klicks from your position.

  7. geldonyetich says:

    I tend to look at single-core speed as the best measure of power, with additional cores only harnessed by specific apps. Yes, I’m looking forward to an API that uses them; it’s quite overdue.

    Funnily enough, the most exciting thing to put on a motherboard right now is an NVMe SSD.

  8. vorador says:

    It’s getting ridiculous, probably because Intel doesn’t have real competition in the mid range, the high end, or server systems. When Skylake was released, it required a new motherboard and new memory, with barely 10% better performance than the equivalent Broadwell model. And prices are slowly but surely increasing over time.

    I wish AMD would get their stuff together and knock it out of the park with Zen so Intel gets some competition, but I pretty much doubt it. And AMD is on life support, losing money quarter after quarter, so things don’t look very good.

    • asthasr says:

      For truly computation-bound workloads, arrays of GPUs and massively multicore ARM clusters are actually eating a lot of the Intel/AMD pie.

    • poohbear says:

      Well, AMD’s stock has skyrocketed nearly 200% since February, so there’s that. They’re not on life support anymore; their market cap has more than doubled, which will give them a lot more funds to play with.

      • lanelor says:

        Skyrocketed after going down, down, down. Sadly, AMD shares are nowhere near the level where the firm can be really influential in either the CPU or GPU markets.

  9. Kaldaien says:

    I like to compile software and run games through debuggers while I am playing them. The typical end-user has no need for additional threads (right now), but if you supply them, I am sure uses WILL pop up.

    Xbox One hints at the possibilities with its virtual machine running apps in a completely different OS session snapped to the side. Just because games themselves do not leverage these extra cores does not necessarily mean gamers will not benefit. Just give it time.

  10. Nitai says:

    For a value hunter like myself, everything is based on the best bang for the buck. If this year’s new processors aren’t so significantly better that I have to replace my Core i5 3570K, then… I simply won’t.

    I’m still running the above-mentioned processor with my R9 290 at 1080p with a 120Hz refresh rate, 16GB of RAM and an SSD. Most games still auto-detect high to ultra settings and I still maintain 60+fps.

    Whatever hardware allows for real-time raytracing at 4K and 120fps in VR with realistic physics is the only thing I’d consider next-gen, to my mind.

  11. Slow Dog says:

    >if anything would motivate game developers to get good at coding for multiple cores…

    Yes, Jeremy. For the only thing difficult about parallel processing is summoning the motivation, in precisely the same way the only thing difficult about journalism is hitting the keys in the right order.

    • OmNomNom says:

      What a bunch of idiot developers! Why didn’t they just make it more betterer in the first place?

    • Capt. Bumchum McMerryweather says:

      Though making an asinine comment is infinitely easier than both, it has emerged.

      Joking aside, there is truth to this statement. There is no question that developing instructions to split across threads without your code collapsing in on itself is difficult. That said, a lot of developers are not going to spend a lot of time and resources working it out, because right now it’s just a sink for time.

      99% of the PC crowd runs on 4 cores or fewer, and midrange processors from five years ago are still keeping up with room to spare. So why bother spending exponentially more time and resources on pointless optimization?

  12. teppic says:

    CPUs matter quite a lot in RTS games and MMOs, where the CPU is kept busy a lot of the time and will hold back high-end GPUs (giving lower frame rates and stutter). You mostly only see the limitations on higher-end systems where you’re running everything at max.

  13. SCdF says:

    As a photographer I’d be excited about this CPU, because more cores means LR / PS runs faster in key situations (eg generating frickin’ previews).

    As a software developer I’d be looking into whether a compiler I use for a significant amount of my time would take advantage of multiple cores, because this could be a CPU I’d be excited about.

    As a gamer I don’t give two shits: games are still not very good at making use of multiple cores, and AFAICT even owning an i7 for gaming is not something that matters, since games tend to be GPU-bound far before they’re CPU-bound.

  14. Holysheep says:

    It’s sad that people code like shit. Not enough games use more than two cores, or shit like hyperthreading and such.

    I expect it to change with the new big ass engines as they get sort of popular though, UE4 seems on its way to do that.

    • SCdF says:

      It is sad that people code like shit. Multi-threaded programming is really fucking hard though, to give them credit. Especially for anything that has to maintain 60 frames per second, because the overheads and edge cases of dealing with shared memory are so costly.

      This is not about game dev, but this paper is cool: link to

      tl;dr: they put reasonable effort into writing a good single-threaded algorithm and got faster (sometimes much faster) results than 128-core high-scalability platforms solving the same problem.

      • TechnicalBen says:

        Totally agree. The “law of diminishing returns” is a real mathematical thing too, which is why we don’t just have a Crossfire/SLI setup that allows 100 GPUs. It’s not that they cannot fit that many slots on a motherboard, but that you get smaller returns for each additional GPU.
        The same applies to CPUs.

        We are going multi-core, but only as the costs, heat and energy requirements come down, so that the extra interconnects and the chips that run the communication channels can fit in without costing more, or taking longer, than a single core.

        Yes, big server farms exist, but only for specific or really large data sets and calculations.
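        To put a number on those diminishing returns, Amdahl’s law is the usual back-of-the-envelope formula: if a fraction p of the work can run in parallel across n cores, the best-case speedup is 1 / ((1 - p) + p / n). Here’s a minimal C sketch (purely an illustration, with an assumed 80% parallel fraction rather than a figure from any real game) showing how quickly extra cores stop helping:

        /* Hypothetical illustration of Amdahl's law and diminishing returns. */
        #include <stdio.h>

        int main(void)
        {
            const double p = 0.80;                  /* assumed parallel fraction of the work */
            const int cores[] = { 1, 2, 4, 6, 8, 10, 16, 64 };
            const int count = sizeof cores / sizeof cores[0];

            for (int i = 0; i < count; i++) {
                int n = cores[i];
                double speedup = 1.0 / ((1.0 - p) + p / n);   /* best-case speedup on n cores */
                printf("%2d cores -> %.2fx speedup\n", n, speedup);
            }
            return 0;
        }

        Even under that generous assumption, ten cores manage roughly a 3.6x speedup and the ceiling is 5x no matter how many cores you add, which is one reason a 10-core chip rarely looks heroic in game benchmarks.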

  15. Carra says:

    My PC is now 4 years old and it’s the first time that upgrading seems like a better option than buying a new one. 16GB is still plenty and my i5 3570K still seems capable of running anything I throw at it.

    Looks like Intel doesn’t want my money; I’ll just buy a new GPU.

  16. Baines says:

    >But CPU performance has certainly stagnated in mainstream PCs over the last five years and instinctively I’m not comfortable with that. Ultimately, I find it hard to believe it’s a good thing for gaming and game development innovation.

    I don’t know. Obviously at some point stagnation will be bad, but placing limits can also force more creative solutions to problems.

    When PCs visibly improved year after year, the solution to nearly everything gradually became to “throw more power at it.” You didn’t have to be as creative, because hardware performance improvements meant your sequel would be able to look better and do more regardless. Code performance became increasingly de-emphasized, with the common belief that hardware improvements made by release date would solve any current code inefficiencies.

    Without the crutch of frequent hardware leaps, coders instead have to become more creative to remain competitive and to do more with relatively the same amount of resources.

    This is somewhat reflected in console history. Each new generation sees devs release games with a visible jump in visual quality and effects, made relatively easy by the visibly increased hardware performance. As the years pass, games continue to improve in visual quality as well as in gameplay quality, despite running on the same hardware. Some of that is from the devs better learning the quirks of the new hardware, but it is also the devs seeking new solutions and new ways to make their games look innovative and improved. Then the next generation comes, and the cycle starts over with new titles that deliver the jump in visual performance the new hardware offers but which can be more underwhelming in gameplay improvements (and which will look increasingly underwhelming in hindsight).

  17. Jetsetlemming says:

    My current PC is based on a Dell desktop that was sold to me for $50 from a university’s surplus warehouse. It has a Core i7 2600 and 8GB of RAM. The only problem was it only had space for a single hard drive, so I replaced the case, but the proprietary Dell motherboard doesn’t have standard ATX case plugs for things like power and fans.
    But I jury-rigged a solution with duct tape for the power, and it turns out my computer runs cool enough without the intake and outtake case fans as long as I keep it clean, so whatever.

    Look forward to maybe buying one of these insanely fancy new CPUs for $50 also at some point in the future. 8)

    • Jetsetlemming says:

      Note: This replaced a PC I built fully from parts for $350 in ~2008, so I’ve been budgeting like a motherfucker this whole time and still playing plenty of PC games. This shit is way cheaper than consoles! :P

  18. Siimon says:

    i5-4690K @ 4.5GHz on all four cores, and both GTA5 and The Division get bottlenecked by the CPU on occasion.

    • Unclepauly says:

      Most complex open-world games and MMO-type games run into CPU bottlenecks here and there, but are still mostly GPU-bound. Example: in GTA 5, when you get into a huge firefight and all the cops/military show up and there’s a ton of pedestrians and bullets, explosions, crashes, etc. going on, a CPU can get hit pretty hard. The rest of the game, though, is GPU-bound.