Week in Tech: Intel Shrinks Desktop Apathy Down To 14nm

22nm apathy to the left, 14nm on the right

Spool up the apathy drive and buckle in for yet another family of unexciting new CPUs from Intel. The 14nm Broadwell generation is nearly upon us and Intel has begun the slow drip feed of info about a CPU hardly anyone will notice or care about in desktop PCs. It’ll be a while yet before we get full speeds and feeds. But we already know enough to say that Broadwell is more of the same. No more cores, barely any additional CPU performance, better graphics and battery life. Deathly dull and disappointing? Yup, except possibly for mobile gaming. It’s all too familiar. Of course, if it’s exciting desktop stuff you crave, Intel’s Haswell-E is, surprisingly, shaping up rather nicely. Pity I can’t tell you any more about that, for now…

Back to Broadwell. As ever, there are two key aspects to any new CPU from Intel. The tech used to manufacture it, and the chip’s actual features. Tech-wise, we’re talking 14nm and long overdue at that.

14nm has apparently been the most difficult production node yet for Intel, with yields still a long way off what the existing 22nm node achieved at the same point in its development cycle. As it happens, this fits in with the broader hunch that Moore’s Law is slowly grinding to a halt.

The figures can’t be denied. If Moore’s Law was fully on track, we’d have cheap chips with 20 or 30 billion transistors by now. Instead, it’s more like two or three.

On the other hand, Intel also says the 22nm node was its most successful and best yielding process ever. As our American cousins are wont to say, go figure. Whatever, 14nm is said to be fit enough for full scale production. Intel has flicked the switch and the first 14nm Broadwell chips will appear in retail boxes of some kind (mobile devices, actually) by the end of the year.

14nm yields closing the gap to 22nm at PRQ. It’s never easy!

It’s also worth noting, if you care about this kind of thing, that 14nm will maintain Intel’s conspicuous advantage over the competition. Everyone is struggling to keep the process-shrink train on time these days.

As for Broadwell’s features, what have we learned from this week’s announcements? We start with the earth-shattering news that the CPU cores will offer something in the region of five per cent better IPC. In other words, for each clock cycle, a Broadwell CPU core will do five per cent more work than the existing Haswell core found in Core i3s, i5s and i7s. Woot.
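If you want that in back-of-the-envelope terms, single-threaded grunt is roughly IPC times clock speed. A quick sketch (the clock figure below is an illustrative placeholder, not Intel's number):

```python
def relative_throughput(ipc, clock_ghz):
    """Crude model: single-core work per second is roughly IPC x clock."""
    return ipc * clock_ghz

# Same clock, ~5 per cent better IPC: Broadwell vs Haswell.
haswell = relative_throughput(ipc=1.00, clock_ghz=3.5)    # baseline
broadwell = relative_throughput(ipc=1.05, clock_ghz=3.5)  # +5% IPC
gain_pct = (broadwell / haswell - 1) * 100
print(f"Broadwell vs Haswell at the same clock: +{gain_pct:.0f}%")
```

In other words, unless the clocks move too, five per cent is all you get.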

At this stage, Intel’s full-fat x86 cores are surely highly honed. And Broadwell is a Tick not a Tock in Intel’s chip-dev nomenclature, which means the emphasis is on the 14nm production node, not the chip’s internal gubbins. But still, the crushing sense of stasis remains.

On the CPU-core side, there’s not much more to report. As for graphics, the same basic GPU architecture remains but with a few tweaks. The partitioning has been revised so that the smallest modular block now contains eight execution units (EUs) instead of 10, and the mainstream GT2 config gets three modules instead of two. So that’s 24 EUs, up from 20. Huzzah.
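For the terminally curious, the module arithmetic works out like so (figures as per the article):

```python
# Haswell GT2: two modules of 10 execution units each.
haswell_gt2_eus = 2 * 10
# Broadwell GT2: three modules of 8 execution units each.
broadwell_gt2_eus = 3 * 8

print(haswell_gt2_eus, broadwell_gt2_eus)  # 20 24
print(f"EU count up {(broadwell_gt2_eus / haswell_gt2_eus - 1):.0%}")  # 20%
```

So a 20 per cent bump in raw unit count, with the rest of the claimed gains coming from those further tweaks.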

The graphics get some further tweaks to boost performance, and overall it looks like GT2 should be good for roughly 30-something per cent more performance than Haswell’s. No word on what the top GT3 config will look like, but 48 or 56 units (Haswell GT3 has 40) are the two most obvious options.

So, not much about actual CPU performance, then…

That still boils down to a relatively incremental improvement. It’s hardly the difference between not really being able to game with integrated graphics and suddenly flicking all the eye candy switches at 1080p.

Where Broadwell does hold some interest, inevitably, is power consumption. The big push is fanless cooling. The idea is to put it into properly thin tablet PCs and have them passively cooled. Like ARM-powered Android and Apple iOS tablets, just with desktop-class CPU cores. Nice.

Our own Fingers McMeer is the proud owner of an MS Surface Pro and I’ve little doubt he would like the idea of a revised model that was thinner, lighter, slightly faster, had better battery life and was passively cooled. I know I would.

It wouldn’t be a killer mobile gaming machine. But it would be a super all-round computing device with a just-about-tolerable turn of speed for casual gaming, so long as you pick and choose your game titles carefully.

For the record, Broadwell mobile CPUs will also see some rebranding, with ‘Core M’ the new epitaph upon the gravestone of our desktop CPU hopes. So look out for that. Just don’t expect any exciting action on the desktop. We’ll still be stuck on four cores and I doubt Intel will crank up the clocks much – given the problematic birth of the 14nm process, big clockspeed boosts may not even be an option.

Core M should make for a pretty super Surface Pro, to be fair

Broadwell CPUs for desktop PCs won’t actually appear until next year and will, I assume, be sold under 5000 series branding. I.e., today’s Core i7-4770K will become a Core i7-5xxx something or other. Thus, by the 5xxx suffix shall ye likely know them.

Finally, I mentioned Haswell-E. It’s the new ultra high-end CPU from Intel that slots into the server-derived LGA2011 socket. Normally, I struggle to get excited about this high end Intel platform.

It’s fairly cynically rebadged Xeon server kit priced in the stratosphere and hobbled in terms of core count. Up to now, we’ve only seen six-core chips where Xeons can be had with 12 or 15 cores and even more due soon. Admittedly, most aren’t necessarily suitable for desktops, but the sense of epic sandbagging remains.

Anywho, what we already officially know is that the next LGA2011 chip for desktop PCs will have up to eight cores for the first time. What I can’t tell you is some other really interesting stuff I learned about overclocking and performance. Because it’s under NDA. I’m not actually under NDA, but the chap who shared the info is and it just wouldn’t be sporting. The truth will out by the end of the month.

To be clear, this LGA2011 clobber will still be catastrophically expensive. But the potential performance leap for this generation is at least interesting in a ‘look, there’s a Ferrari, isn’t it fast!’ kind of way. And that’s better than nothing.


  1. Cockie says:

    Nonono, Intel’s not allowed to bring out better CPUs, I just built my first computer… ;)

    • frightlever says:

      Assuming you’ve gone i7, you’re probably good for the next 5-6 years, possibly longer.

      • Cockie says:

        I went with the i5-4690 (because budget).

      • rei says:

        Great to hear that about my i7 920!

        • kael13 says:

          i7 900 series were a great set of CPUs, but I’ve now decided it’s most definitely worthwhile to upgrade. Will be picking up Haswell-E and an X99 motherboard in a few weeks.

  2. SquareWheel says:

    I find that exciting. 14nm is inconceivably small. It’s incredible we can manufacture on this scale.

    • Hypnotron says:

      not saying it was aliens but…

    • typographie says:

      If it helps with the sense of perspective at all, most virus particles are larger than 14 nm. Bacteria start at about 500 nm. It is indeed mindbogglingly impressive.

    • Grygus says:

      I remember reading years ago that the theoretical best we could achieve would be 9nm, because at that point you start getting into quantum mechanics and instead of being able to say a signal goes from A to B, it’s a smear of probability. Or something like that. Anyway, 14 is really close to as good as it will get unless we have a major breakthrough in circuit design/manufacturing. Impressive, indeed.

      • Rizlar says:

        Bring on the quantum computers!

        It is interesting to think of Moore’s Law slowing down. I would quite like to see what mankind could achieve having reached a stable limit in computing power. Though no doubt some new unforeseen breakthrough would turn everything upside down again…

    • Geebs says:

      Better power efficiency is much more exciting than faster speeds, since things became ‘fast enough’. Longer lasting, cooler laptops which can also game a bit? Fantastic IMHO.

      • P.Funk says:

        Bollocks. So long as there are games that do not handle multi core CPUs properly there will be a need for more powah. I’m looking at you FSX.

        • Baines says:

          No, that is just games that need better programming. Developers have been able to get away with poor programming for decades because the belief was that computers would always be getting faster.

          Even at college I was taught that it was better not to put effort into efficient code; from a code-production standpoint it was more efficient to focus anywhere else, since a year or two of hardware improvements would more than make up for your code anyway.

  3. mtomto says:

    1000 cores don’t mean much if an application/game only uses one core.

    • typographie says:

      That’s true, but developers won’t design consumer apps to make extensive use of 4- and 6-core processors until they start filtering down into the mainstream a bit more. It’s up to Intel to move the bar forward; the installed user base has to come first.

      • snv says:

        The hard step is writing multicored code at all instead of singlecore code.

        If a developer _properly_ gets past that (mainly meaning avoiding synchronization bottlenecks), it shouldn’t matter if it’s two cores or 20 — the code should just scale without modification.

        • jalf says:

          Except that for a lot of problems, it’s not really possible to scale beyond a handful of cores. Not everything is embarrassingly parallel. It’s not a matter of developers just being stupid and lazy.
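To put rough numbers on that point, Amdahl’s law caps overall speedup by the serial fraction of the work; a sketch assuming a made-up 80 per cent parallel fraction:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup is limited by the serial fraction."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A workload that is 80% parallelisable scales nowhere near linearly.
for cores in (2, 4, 8, 20):
    print(cores, "cores ->", round(amdahl_speedup(0.8, cores), 2), "x")
# Even with infinite cores, the ceiling is 1 / 0.2 = 5x.
```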

          • WiggumEsquilax says:

            It also depends on genre. Synchronizing an FPS might be beyond profitability. On the other hand, not using a separate core for A.I. decision making in a turn based game, would typically be difficult to justify.

        • Asdfreak says:

          “If a developer _properly_ gets past that (majorly meaning avoiding synchronization bottlenecks), it shouldn’t matter if its two cores or 20 — the code should just scale without modification.”

          You have no idea how wrong you are. The overhead necessary to add more and more threads grows exponentially with the methods used in production code today. It shouldn’t matter? Are you insane? The sheer number of possible Heisenbugs and indeterminism bugs in game code simultaneously using 20 cores gives me nightmares.
          There are interesting new concepts in programming language research, but they are either

          1) Only applicable in pure functional code, which would be a nightmare to use for games and would require the retraining of virtually every games programmer, because many of them have their brains so hardwired on C and abominations like C++ that you would have to open their skulls and change a few switches.
          Also it would be a nightmare to write a game in PURE functional code, meaning no side effects, which is extremely hard for games and would require ridiculous overhead to work for such grand, complex simulations as games.

          2) Deterministic parallelism would be ridiculously slow when used in a complex simulation with enormous amounts of asynchronous inputs EVERY frame you calculate

          3) Lots of the new fancy stuff is just not ready yet for application in production environments

          4) etc. pp.

          You make it sound like a piece of cake, but I just want to make clear that your assumption that it would scale seamlessly no matter how many cores is just ridiculous. If you invented a system that can scale itself seamlessly like that without problems, I’m sure the research community and virtually every developer on the planet would like to see a generalized and peer-reviewed paper on it.

          Truth is, nobody in the world knows the right way to do multithreading. Using threads and all the other stuff used in production code is extremely easy to get wrong, and most alternatives are not any better.

          • stampy says:

            Totally right! That’s why Google can’t possibly run on massively parallel, cheap, off-the-shelf hardware. It’s why all of Facebook’s analytics are computed on one gigantic, proprietary CPU. It’s why there is no such thing as Hadoop or Cassandra.

            If parallelizing a problem always meant an exponential cost on the parallelization factor, nothing would ever be massively parallel… and the simple fact is that things ARE massively parallel. Yes, some problems are hard to break down into parts. No, not all problems. The post you are replying to did an excellent and clear job of addressing that. All that you succeeded at was being rude on the internet while being quite wrong.

          • Widthwood says:

            stampy, it’s very simple. In games you have to have the result ready at a fixed point in time. Google, Facebook, Hadoop – they don’t have such a requirement.

            Even when you have a massively parallelizable task (which search and analytics certainly are, and games certainly are not), setting a hard deadline means you have to divide your task not into equal chunks, but into equally complex chunks – meaning you have to know in advance how much time a task will take before you even start to calculate it. If you don’t have equally complex chunks, then all your worker threads will wait for the single one that took the longest to complete its chunk.

            Now, for Google that is not a problem – they have millions of other tasks from other users to keep workers busy. Games, on the other hand, must wait until the tick is fully complete before they will do anything else. This leads to under-utilization of your multi-core CPU, and the larger the number of cores, the less utilized they will be. Programmers have to invent their own algorithms to increase utilization, thus leading to further complexity, more bugs, more CPU cycles and RAM spent just keeping these algorithms running, and so on. For now, it’s easier and more efficient to just hardcode a fixed number of CPU cores into the engine and optimize for those specific cases than to try to use some kind of universal, infinitely scaling system that will never perform as well and will waste a lot more resources – that’s why a 15-core CPU would be nearly useless, and it’s not because programmers are bad; it’s actually because they are smart.
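That straggler effect is easy to sketch; the per-chunk timings below are invented purely for illustration:

```python
def frame_utilisation(chunk_times_ms):
    """Fraction of total worker time spent working in one game tick.

    With a hard per-frame deadline, every worker idles until the
    slowest chunk finishes, so utilisation is busy time divided by
    (cores x longest chunk).
    """
    cores = len(chunk_times_ms)
    busy = sum(chunk_times_ms)
    tick = max(chunk_times_ms)  # everyone waits for the straggler
    return busy / (cores * tick)

balanced = frame_utilisation([4.0, 4.0, 4.0, 4.0])  # perfectly even chunks
skewed = frame_utilisation([1.0, 1.0, 1.0, 13.0])   # one heavy chunk
print(f"balanced: {balanced:.0%}, skewed: {skewed:.0%}")
```

Same total work in both cases, but the uneven split leaves three of the four cores twiddling their thumbs for most of the tick.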

    • tehfish says:

      True, but when coding for the new consoles involves writing broadly PC-like code optimized for 8 slowish CPU cores, upcoming games should be heavily multi-threaded by default.

  4. duns4t says:

    Registered just to congratulate you on this headline.


  5. stele says:

    Looks like we’ll have to pull that tooth.
    And the gums look awful!

    • remon says:

      They’ll just have to come up with some Col-Gate transistors.

    • Doganpc says:

      I logged in just to comment on the toothiness of those transistors.

    • elyoungque says:

      Argh! I was going to post a similar comment. Kudos

  6. The Dark One says:

    I’m not that bent out of shape about the Haswell-E chips. Fewer cores means the ones that remain can operate at higher frequencies before toasting the chip. I don’t know if Intel cherry-picks the individual cores that respond best to overclocking, but it’d be the type of thing I wouldn’t be surprised to see on a premium (stupidly expensive) product like this.

    That said, it’s not all good news on the Intel front. They’ve recently admitted that the implementation of their own TSX instruction set is flawed in all Haswell and Broadwell chips manufactured to date. While they’ve announced that the next Broadwell stepping will contain a fix, the only avenue they’ve offered owners of current hardware is a microcode update for the CPU itself, disabling the extensions. This includes the upcoming rebadged Xeons, since the bug was discovered too late.

    • TacticalNuclearPenguin says:

      It’s actually easier than that: a Xeon chip with even just 1 out of 12 cores damaged can’t be sold, but you sure as hell can kill another 3 cores and sell it as an 8-core X processor. That’s what Nvidia did with the Titan as well, turning a profit out of something that would otherwise go in the garbage bin.

      Don’t hope for any binning; i don’t think they ever really cared about testing which were the weakest cores in order to choose which to disable. Sad but true.

    • Sakkura says:

      There is no cherrypicking, since Haswell-E uses an entirely different die from Haswell. You cannot just take a Haswell-E chip, disable 4 cores along with some cache and part of the memory controller etc. and then have a Haswell CPU. It’s not possible.

      You can still disable part of the Haswell-E chip, but then it’s like making a Core i7-4820K (4 cores) from a chip that could have been a Core i7-4960X (6 cores). These are Ivy Bridge-E CPUs, but it would work the same way for Haswell-E.

  7. ivorjetski says:

    It’s a couple of reject toilet mats piled on top of each other, right?

  8. steves says:

    Having literally half an hour ago just built a new PC with the no-brainer 4690K, this news gladdens me. No need to worry about that becoming outdated any time soon.

    Decent cooling and some modest overclocking is where it’s at with processors I reckon – a case full of “bequiet!” fans, one of these bad boys:

    link to overclockers.co.uk

    (pretty cheap liquid cooling, and no risk of getting, er, liquid, everywhere)

    With a dab of Arctic Silver magic, plus the really quite user-friendly software you get with an ASUS ROG motherboard, I’m currently getting a stable 4.4GHz, with barely a whisper, and nothing more than a pleasantly warm breeze!

    Now I need to find something to play with all my new power…

    • phuzz says:

      If your CPU is less than four years old you probably don’t need to upgrade unless you want a new motherboard. An SSD and/or a new graphics card will make much more difference to your gaming for the amount you spend.

  9. TacticalNuclearPenguin says:

    Ok then, i’m not under NDA, so i’ll talk.

    The chip fell into the “right hands”; it got destroyed to demonstrate that it uses fluxless soldering instead of the cheap (now less cheap but still crap, with Devil’s Canyon) thermal paste.

    link to guru3d.com

    Now, this is only a part of the thermal advantage. There’s also the fact that the heatspreader is far larger than on “normal” CPUs because the chip is simply huge and natively supports 12 cores, which in turn will probably also help with the thermal hotspots created by the integrated VRM (which is a stupid idea for performance-minded folks and is being scrapped with Skylake).

    link to overclock.net

    It’s still totally advisable to wait for Skylake, hoping also for a good jump in clocks, let alone the fact that it’ll be a “mainstream” platform with a DDR4 option just like Haswell-E (but far cheaper).

    Obviously, it’s hard to wait for someone like me who gets a little too excited about the enthusiast platform, but the fact that they broke the trend of offering the 600 euro option with the same cores as the 1000 euro one but less cache (as was the case with SB-E and IB-E) leaves me a little turned off.

    Sure, 6 cores are already a lot, but maybe Broadwell-E or Skylake-E will make 8 cores fit into the medium-priced option, and the new technology might allow for better overclocks as well.

    • SuicideKing says:

      Yeah, pretty much. Jeremy sometimes seems to be cynical about Intel because it’s cool.

      • Jeremy Laird says:

        Can’t really win. When Intel was bringing out great chip after great chip a few years back, I was accused of being a sell-out for publishing all the positive reviews. Now they’ve brought out several very boring new CPUs for desktop PCs and apparently I’m trying to be cool. So it goes.

        • SuicideKing says:

          I don’t know, I didn’t read RPS back then…and by no means am I accusing you of bias. It’s just that…there’s nothing particular to complain about, especially given they’re the only ones having success at advancing process tech and putting out better x86 processors.

          Are they charging more for less? Not really. Did they put out exciting CPUs in the form of the i7-4790K and Pentium G3258? Yes. Are they improving their CPUs every year? Yes.

          Will gamers even see much benefit of a 6 GHz chip or a 6 core chip? No.

          Then what’s the complaint/cynicism about?

          Consider the fact that I’m on a Q8400. For me, stepping up to the 4790K would pretty much double (or more) the performance in applications that can use all the resources.

          For applications that can’t use all that computing power, the “fault” doesn’t lie with Intel.

          So, I don’t get it. That’s all I’m saying.

          • Universal Quitter says:

            I recently upgraded from an FX-8120 to the i7-4790k, giving me a similar doubling of performance, and I have no complaints. Well, it idles a little hotter than I’d like, but that’s about it.

            But to your point, I kind of agree that for people who don’t always have the shiniest new PC components, these table scraps from Intel don’t seem so bad.

          • Jeremy Laird says:

            Complaint is pretty simple. Intel could very easily push on mainstream desktop CPU performance if it so chose. But it doesn’t. You never know what software/game developers might do with more performance until you give it to them.

  10. Keyrock says:

    Not excited about Broadwell’s meager gains? Don’t blame Intel, blame AMD. What motivation is there for Intel to improve performance, particularly in the mid-upper end of desktops, when they have zero competition from AMD? Intel is literally competing with itself in the mid to upper desktop market. AMD is so far behind that even their absolute top of the line desktop chip, running at a staggering TDP of 220W and pretty much REQUIRING liquid cooling for anything more taxing than web browsing, can only barely compete with Intel i5s at 84W TDP, much less i7s. It only makes sense that they would devote all their resources into lowering power consumption in their quest to move into the ARM-dominated mobile sector.

    • TacticalNuclearPenguin says:


      Imagine what would happen if Intel decided to develop a 220 TDP CPU too ( Haswell-E is 140 or something ).

      12 cores ( Intel cores ) @4.5 ghz ??

      The gap is so stupid right now that it’s not even funny. AMD really needs a LOT of money and fresh talent right now to throw into the R&D machine. But i really don’t know where they can get that.

  11. racccoon says:

    My comp board is over 4 years old, maybe even more… lost track. It’s kind of crazy to think it’s not evolved to higher levels of greatness like when we used to upgrade on a regular basis. But, on the other hand, it is easy to understand: thinking about our human achievements in anything, we always come to a bridge, and sometimes bridges are harder to cross and sometimes impossible to go beyond.
    As humans we just can’t beat some records, so we have to create new ones in order to help out with our determinations and needs. I, on the other hand, have only upgraded RAM and graphics cards; that’s it!
    So I’ll be pleased with any inventions to make my PC custom building experience like I used to have and enjoy. :)

  12. WiggumEsquilax says:

    From someone who knows almost nothing about computer engineering, will these energy efficiency improvements translate well to GPUs?

    My CPU may not be the biggest energy hog, but my graphics card does mean, mean things to my energy budget.

    • Voice of Majority says:

      Well they would, but the thing is Intel doesn’t like to share and everyone else is lagging far behind them.

      NVIDIA Maxwell architecture was supposed to be done with TSMC’s 20nm process. They had to go and implement the first Maxwell chips (which are still great) on the 28nm node because TSMC’s 20nm was not ready for production. Probably still isn’t.

    • Sakkura says:

      Nvidia’s Maxwell architecture (found only in the GTX 750 and 750 Ti so far) is a very big step forward in reducing power consumption. TSMC, who makes the GPUs for both Nvidia and AMD, have been stuck on 28nm for a long time due to delays in their 20nm node (while Intel was sitting at 22nm and are about to go 14nm). They may even decide to skip it and go straight to 16nm. That should also contribute to increasing efficiency in graphics cards.

      In general, the pace of improvement has been MUCH higher in GPUs than CPUs in recent years. Not least because GPUs handle tasks that are easy to divide up and run in parallel, so you can always just throw more “cores” (shaders etc) into a GPU to make it faster. Adding more cores to a CPU doesn’t always help, because a lot of software (especially games) is bottlenecked by the performance of a single core. The new graphics APIs, like Mantle and DirectX 12, are supposed to help alleviate this.

  13. SuicideKing says:

    I know right! Manufacturing CPUs on a 14nm process is so freaking easy! Intel is just evil.

    I mean, all games in the last 5 years use 8 threads and Intel KNOWS it. They just keep tricking us into thinking the bottleneck is with Direct X or GPU performance.

    It’s not like the i7-4790K starts with a 4 GHz base clock and a 4.4 GHz turbo for the same price as an i7-4770K which is now cheaper.

  14. GenBanks says:

    I wish AMD would make a comeback… Would be nice to actually have to consider them when making a CPU purchase. I found my Opteron 165 in the cupboard the other day…

  15. Sakkura says:

    Haswell-E does NOT use the LGA 2011 socket. It uses the new LGA 2011-3 socket, which is completely incompatible.

    Might want to fix that error, I know it’s a small one but you wouldn’t want people with LGA 2011 motherboards to buy CPUs that don’t fit their boards.