Can AMD Make Gaming CPUs A Two-Horse Race Again?

This. Is. Zen. Probably

The roulette wheel of rumours that is PC hardware news is usually pretty pointless, unless bun fights over shader specs or clock speeds are your bag. But, occasionally, something really significant for the future moves into view. This is one of those times. AMD has been talking about its upcoming PC products and technologies in the last week or two, including a completely new CPU core and some fancy memory technology that might dramatically change the way we all think about integrated graphics and gaming. Is Intel’s stranglehold about to be loosened?

So, the new CPU core. It’s called Zen and it’s due next year. It’s a simple fact that AMD’s track record of launching new CPUs on time has been poor for the last few generations, so I wouldn’t make too many assumptions. But next year would certainly be nice.

Anyway, the thing about Zen is that it’s a traditional x86 core for PCs. I mean traditional in the sense that it’s being pitched as a proper high-performance architecture, not some ultra-mobile or embedded effort destined for the internet of things or whatever the prevailing fad may be. And it’s claimed to be completely new.

In other words, the Bulldozer experiment is dead. Bulldozer is the CPU tech that underpins all of AMD’s current full-power cores. So that’s the crusty old FX chips and the newer, shinier APUs with graphics on-die.

Zen is coming next year. Please, please let it be good.

Anyway, Bulldozer was meant to be a multi-threading beast thanks to a modular architecture that saw a pair of integer units sharing a floating point unit. Think of it as a bit like Intel Hyper-Threading, but with more hardware thrown at it.

It was certainly novel and made the question of what actually qualifies as a CPU core trickier than ever before. But as an actual CPU, things didn’t work out so well. For starters, games still haven’t become as efficiently multi-threaded as expected (though DirectX 12 might change that).

That’s a problem because Bulldozer-derived chips suffer from fairly piss-poor single-threaded performance. It’s not that they’re terrible chips. I’ve been playing around with an FX-8350 lately and it’s really not all that bad.

But for gamers, Intel’s huge single-thread advantage (in the region of 100 per cent sometimes) is just overwhelming. AMD says Zen will boost per-core IPC, or instructions per clock, by 40 per cent, which is pretty epic.

If I’m really honest, even 40 per cent won’t be enough to blow Intel away. But depending on clockspeeds, it should put AMD back in the game, so to speak. AMD is also promising to improve and revise Zen more rapidly and extensively than it ever managed with the Bulldozer family, which has largely stagnated despite the Piledriver and Steamroller iterations.
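For a sense of scale, here’s a back-of-the-envelope sketch. The 100 per cent and 40 per cent figures are the rough ones quoted above; everything is normalised and nothing here is a benchmark.

```python
# Illustrative single-thread arithmetic, normalised so current AMD = 1.0.
# Figures are the rough ratios quoted in the article, not benchmarks.
amd_now = 1.0
intel_now = 2.0           # "in the region of 100 per cent" faster, worst case
amd_zen = amd_now * 1.4   # AMD's claimed 40 per cent IPC uplift, same clocks

remaining_gap = intel_now / amd_zen - 1
print(f"Zen over current AMD: {amd_zen / amd_now - 1:.0%}")      # 40%
print(f"Intel's remaining lead over Zen: {remaining_gap:.0%}")   # 43%
```

So even if Zen delivers the full 40 per cent at matched clocks, Intel’s worst-case lead only shrinks from around 100 per cent to around 43 per cent, which is why 40 per cent alone won’t blow Intel away.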

Zen will also get a new socket, AM4, and I assume some new chipsets. The latter are long overdue for the FX line of desktop CPUs and hopefully will bring plenty of USB 3.0 and PCI Express storage goodness.

AMD says Zen fixes Bulldozer’s rubbish single-threaded performance

Whatever, all the noises AMD is making about Zen are exactly what you’d want to hear as a PC gamer. It’s a proper x86 design and it aims to improve performance and fix platform shortcomings in precisely the areas that matter for games. Bring on 2016.

The other interesting AMD development is a new tech known as HBM, which stands for high bandwidth memory. The shizzle here involves sticking graphics memory into the same package as the GPU or graphics chip itself. The idea is that putting memory near the main chip makes signalling much simpler and allows the bus width to be made much bigger.

The other cleverness with HBM is die stacking, or piling the memory chips atop one another. You, or rather AMD, then connects ’em courtesy of TSVs or through-silicon vias. If that sounds a bit familiar, it’s similar to the tech in the latest 3D-flash-memory SSDs, like the Samsung 850 Evo.

Will AMD’s stacked memory tech finally make integrated graphics a goer for games?

Put the on-package memory together with the die stacking and you get a 1,024-bit bus and hundreds of GB/s of bandwidth. Currently, AMD is pitching HBM as its next memory solution for graphics cards.
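The headline bandwidth figure follows from simple arithmetic. A minimal sketch, assuming the widely reported first-gen HBM numbers of a 1,024-bit bus per stack, a 500MHz double-data-rate interface (1Gb/s per pin) and four stacks on a card; treat those as assumptions rather than confirmed specs:

```python
# Peak bandwidth = (bus width in bytes) x (transfer rate per pin).
bus_width_bits = 1024        # per HBM stack
gbits_per_pin = 1.0          # 500 MHz, double data rate (assumed Gen 1 figure)
stacks = 4                   # assumed stacks per card

per_stack_gbs = bus_width_bits * gbits_per_pin / 8   # bits -> bytes
total_gbs = per_stack_gbs * stacks
print(per_stack_gbs, total_gbs)  # 128.0 512.0
```

That 512GB/s total is where the hundreds-of-gigabytes-per-second claim comes from, despite the relatively modest per-pin speed.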

However – and this is where the rumours kick in – it’s thought Zen CPUs will come with integrated graphics and that on-package HBM memory. If so, that could dramatically change the way we all think about integrated graphics.

Currently, memory bandwidth is a big killer for integrated graphics. CPUs make do with much lower bandwidth than graphics cards, which is a major problem when that limited bandwidth is being shared by graphics, CPU and every other subsystem. In short, it makes proper gaming graphics performance impossible.

However, with HBM, you have the potential for a single-chip PC that looks rather like an Xbox One or Sony PS4. Except it will probably be better and faster by every metric. To be clear, I don’t think this will kill off add-in graphics cards overnight. But it could present entry-level and, eventually, middling gaming PCs with a more affordable option.

There are downsides, of course. With CPU, GPU and even memory on one package, your upgrade options are very limited. But then a lot of people don’t do piecemeal upgrades. If that’s you, Zen-plus-HBM could make a decent gaming PC more affordable than ever. Here’s hoping.

Finally, what I haven’t mentioned so far and what will actually arrive first is a new high-end graphics card from AMD with that HBM memory tech. It’s due out this ‘quarter’, which means it should appear by the end of June. Of course, it’ll cost the earth and hardly any of us will buy it. So it goes.


  1. Wisq says:

    I imagine putting CPU and GPU all on the same small chip is going to result in heat being more of a limiting factor, too. Not really an issue for a system that’s already underpowered to begin with (e.g. an HTPC), but for proper gaming machines, another reason that external graphics cards will be sticking around for a while.

    • Wisq says:

      … but on second thought, applying your own high-efficiency after-market CPU cooler is much more of “a thing” than doing the same for GPUs, so maybe a better cooler can help make up for it.

      • jrodman says:

        Total heat generated should be lower due to efficiencies gained.

        Heat within the one package though.. not so sure how that will work out. I have similar concerns.

        • TacticalNuclearPenguin says:

          Can be mitigated if the package is big and everything is well spread, this also gives more work surface for the heatsink.

          • Phoxtane says:

            Yeah, but you’re also limited by the die size – I remember hearing that each process node has a limit on how big the overall device can be; for example, I believe that the chip used in the Maxwell-based Nvidia cards is pretty much at the size limit for the 28nm process they’re using.

          • inf says:

            No, in the case of Maxwell, Titan X (GM200) for example, it is actually larger: 601mm², vs 398mm² for the GTX 980 (GM204).

            But yes, there is a limit. Product segmentation (among many other things) dictates how close they get to the limit of the process for any specific product.

          • kael13 says:

            The memory and GPU won’t be on the same die; they’re both attached to something called an interposer, which holds the connections between the two.

    • phuzz says:

      But pretty much every Intel CPU already has built in graphics as well, even some of the i7s.
      It’s less the heat that bothers me (clearly they’ve found ways round that), more the waste of money when I’m going to be using a separate graphics card anyway.

      • jrodman says:

        Well, if you’re not using those circuits, I doubt they’re generating heat. That scenario presumes the integrated graphics can’t become superior, which I think is going to be ultimately false at some point.

    • PoulWrist says:

      AMD’s A4, A6, A8 and A10 series are already doing this, with GPUs that are a lot more capable in the graphics department than what Intel has available, in a 95W package for the biggest one. Of course it’s not dedicated-GPU levels of performance, but it’s certainly quite decent for what it costs.

    • SuicideKing says:

      Almost all CPUs since 2011 have had GPUs integrated on die – notable exceptions include Intel’s Extreme Editions and Xeons, AMD’s FX line, and the smaller Atom line – though Atoms are now SoCs so they have a GPU component as well.

    • BlueTemplar says:

      I don’t know where you saw this mentioned. They’re talking about putting *memory* on the same chip as the GPU. And if I’m not mistaken, memory doesn’t use much power – the issue you might have would stem from worse heat dissipation instead, especially if you stack several layers of memory on top of the GPU…

      • Wisq says:

        I’m not referring to anything specific to this product line, just to the notion that on-CPU graphics might someday replace external graphics cards.

        It’s all well and good to put more things on the CPU (for efficiency reasons), but you’re still putting two different hot-running components together, tight against each other and under the same heat sink and fan, when the individual performance of (high-end versions of) those components has already typically been limited by heat.

        That suggests that you won’t be able to go as high-end as if they were separate, and cooled separately. Whether you can outperform the separate solution depends on how important the cross-communication is, compared to the speed (and heat) of each component itself.

        • Wisq says:

          Oh, and another thing (damn you lack of edit):

          Memory typically doesn’t do so great in high-heat situations. As an example, when the “Bitsquatting” researchers wanted to induce RAM bit-flip errors (as compared to background bit-flipping caused by cosmic rays etc.), they put a heat lamp over their RAM and saw several orders of magnitude of increases in bit-flips.

          So I’m a little concerned that putting RAM in with your two hottest components does not seem like the safest thing ever, in terms of bits flipping and then causing either graphical glitches or even crashes.

  2. Stardog says:

    Not as long as they continue to give their CPUs stupid names. i3/i5/i7 + number is much easier.

    • marach says:

      ‘Cause A4/A6/A8/A10 + number are sooo much harder… oh wait!

      • mattevansc3 says:

        That’s their APU range only; you’ve forgotten their FX range. Also, the i3/i5/i7 chips are distinct on features: i3 CPUs are dual core with Hyper-Threading enabled and no Turbo mode; i5 CPUs are quad core with no Hyper-Threading but do have Turbo mode; i7 CPUs are quad core or more, with both Hyper-Threading and Turbo mode enabled. That’s three distinct groups.

        Go on the AMD website and it’s a mess. There’s little rhyme or reason to their naming system. There are A8s that look more powerful than A10s, and some A4s look better specced than the lower-end A6s.

        • jnik says:

          Your universal rule for i5 was true for Lynnfield, most of Ivy Bridge and most of Haswell. Clarkdale, Arrandale, Sandy Bridge, one Ivy Bridge desktop part (and all Ivy Bridge mobile), some Haswell (including all mobile) and Broadwell are HT, 2 cores. Unless Wikipedia is lying to me, which I grant is possible, but it doesn’t seem Intel’s nomenclature is that tidy.

          • mattevansc3 says:

            You are correct in that the U (Ultrabook), M (Mobile) and E (Embedded) series are dual core with Hyper-Threading, but those aren’t desktop CPUs and aren’t, or at least shouldn’t be, available to the general public. I should have been more specific in what I posted, though.

            There is also the 4th-gen i5-4570T that sticks out like a sore thumb, as it’s the only desktop i5 to be a dual-core CPU, according to Intel’s website.

        • carewolf says:

          You are wrong about how Intel separates the chips. What’s in an i3, i5 or i7 depends on the CPU generation, the market (ultrabook, mobile, desktop), the price, the silly number that follows it, and the general direction the wind is blowing.

    • airmikee says:

      *cough* Nehalem, Sandy Bridge, Ivy Bridge, Haswell, Broadwell *cough*

      • TacticalNuclearPenguin says:

        Yeah, well, that’s the codename though, and everyone has stupid ones. Many stores won’t even report this stuff in their listing.

        • airmikee says:

          According to AMD ‘Zen’ is also a codename.

          link to

          “Development of a brand new x86 processor core codenamed “Zen,” expected to drive AMD’s re-entry into high-performance desktop and server markets through improved instructions per clock of up to 40 percent, compared to AMD’s current x86 processor core. “Zen” will also feature simultaneous multi-threading (SMT) for higher throughput and a new cache subsystem.”

          And I agree, all codenames are stupid.

      • SuicideKing says:

        Those are code names – AMD equivalents are Bulldozer, Piledriver, Steamroller, Excavator, Zen. AMD also has separate names for the CPUs – Zambezi, Vishera, Kaveri, Godavari, Llano, etc.

        • airmikee says:

          Newegg still lists most of those terms in the product listings, just as they put AMD’s names in the titles.

          link to
          link to

          Someone avoiding a specific brand of processor because of a stupid codename would have no processors in the world to buy. Almost every codename in almost every industry in the world is stupid, and most actual product names are stupid as well. But who actually makes their purchasing decisions on the codename or name of a product? If it does what you need it to do at a satisfactory price point, the name shouldn’t even register on the list of possible choices when making a purchase.

      • Solidstate89 says:

        Those are code names not SKUs.

        • airmikee says:

          Zen is also a codename, not a SKU, but thanks, CatherineObvious.

      • K_Sezegedin says:

        What’s wrong with the codenames? Intel’s strike me as elegant and AMD’s are kinda over-the-top and funny.

    • Gordon Shock says:

      ALL manufacturers of electronics should use a “user-friendly nomenclature”… for Pete’s sake, monkeys with typewriters could do a better job.

      How about going for simplicity and no longer trying to be geek-friendly only. Something like 15A, 15B, 15C.

      The number being the year the product is released and the A/B/C being the grade or quality range of the product… see how easy that is!

      For your viewing pleasure:

      link to

      • jrodman says:

        The problem is these numbers are in the hands of marketing.

    • PoulWrist says:

      How is FX-6350 any more stupid than i3-4630i or i5-4220s? What do the s, i, t, etc. they put on the end even mean?

      • SuicideKing says:

        No suffix means full TDP, no overclocking.

        -S = lower power, but all cores
        -T = lowest power, usually a dual core HT model
        -K = Overclockable
        -C = Crystalwell derivative, i.e. on-board DRAM cache (L4) – also known as an Iris Pro Graphics part

        That’s all you have on the desktop side. There is no -i suffix.

    • Vayra says:

      Riight. Intel naming schemes are easier.

      We have i3, i5 and i7, but we also have K, U, T and P chips; we have a series of numbers after that which is plainly misleading (higher is not always faster!); and we have the mobile segment, which uses the same i3/i5/i7 but different core setups, basically halving the core count and adding HT. BUT there is also an i7 Q part, which is an actual real quad core.

      Yea, that’s real easy.

      • Vayra says:

        Oh wait, I forgot the Core M/Y, Atom, Xeon…. and some handful of additional ones!

        But that’s easy too!

  3. thedosbox says:

    That “graph” is horrendous marketing fluff. Specifically, the arrow showing the improvement in IPC is 3x the height of the purple arrow. An improvement that’s meant to be 40%.

    • kuangmk11 says:

      The purple line is only the improvement in the last generation, not the total. They are claiming 40% over the total.

      • thedosbox says:

        Let’s assume that’s true. It’s still marketing fluff, as they’ve massively truncated the Y-Axis to emphasize the improvement.

    • skalpadda says:

      Now where’s that picture… ?
      Ah: link to

    • pepperfez says:

      I mean, yeah, obviously it’s marketing fluff; there aren’t even any units. Impressionistically, though, that graph represents pretty well what a 40% performance improvement in one generation feels like — that’s a big jump all at once!

    • horrorgasm says:

      Well… yeah. It’s a marketing graph. Did you think that was a graph from an official scientific study or something for a minute there?

  4. mattevansc3 says:

    AMD can put out the best hardware in the world but as long as their drivers remain a piece of shit there’s no point in contemplating them.

    • jalf says:

      How do you figure drivers are relevant to their CPUs?

      • Asurmen says:

        How does he figure their drivers are a piece of shit? I wish that old myth would die.

        • mattevansc3 says:

          For a start, I can’t use the Catalyst driver AMD supplied to Microsoft for Win8 and Win10, because it tells the Intel driver that all Radeon 5xxx cards and above support Switchable Graphics when they don’t. This causes systems like mine, which have a card that does not support Switchable Graphics, to go into a boot loop. This has been the case since the Win8 Technical Preview.

          Secondly, Eurogamer ran an article not too long ago showing that AMD’s DX11 drivers are so poorly implemented that when a Radeon card is paired with a low-specced CPU, such as an i3 or one of their APUs, performance takes a nosedive. Lower-specced GeForce cards outperformed Radeon cards when paired with AMD APUs.

          It’s not a myth; AMD drivers are that bad.

          • Wisq says:

            I’ll second this by adding my own experience:

            Due to arguably poor hardware choices on my part (a dual-GPU AMD card and two dual-GPU nVidia cards in SLI), I’ve now spent a few years where I was forced to use Crossfire (no way to turn it off at all) and a few years where I was forced to use SLI (no way to turn it off without the system freezing).

            My SLI bugs were typically little niggles. Oh, some shadows are a bit weird. Oh, I’m getting some flickering over there. Oh, that’s a weird colour. In almost every case, it could be solved by setting up an nVidia profile for that game and changing the multi-GPU rendering mode. I only remember one case where I couldn’t, and there it was just shadows, so I just turned them off and I was fine.

            (Oh, and the only reason I was stuck in SLI was my unique (ridiculously overpowered) quad GPU setup; had I just had a single dual-GPU card, I could have turned it off just fine.)

            My Crossfire bugs were usually “this game will not run at all, and furthermore, it will freeze your entire computer”. My only option at that point (after figuring out the issue) was to run the game in windowed mode. The only known workaround was to disable Crossfire, and I literally could not, since I had a dual-GPU card.

            So, anecdotal story, grain of salt, etc., but, nVidia has typically kept me happier in the driver department.

          • Asurmen says:

            So your anecdotal evidence = crap. Got you. As for the DX11, a high end GPU doesn’t get combined with a low end CPU for a reason.

            Still looking like a myth.

          • Geebs says:

            .. but another high-end GPU with the same CPU is fine? As “synthetic” benchmarks of driver performance go, they could have done a lot worse.

            Just to bump up to n=2 for “AMD drivers are a bit crap” – I used to have a 5870 which crashed or glitched in a bunch of games my prior and subsequent nVidia cards were fine with. On the hardware side, the blower on that card died faster than any I’ve had before or since.

        • Faxmachinen says:

        How about that time I tried to connect two rather dissimilar laptops to a TV through HDMI, and in neither case would the sound work until I had uninstalled all the AMD HDMI audio driver versions?

          And while I mention that laptop, I tried to replace the MXM graphics card on it earlier with the same model, but it wouldn’t even pass POST because it had incompatible firmware.

      • mattevansc3 says:

        Because this article is referring to their APUs that have on die Radeon cores and therefore use AMD graphics drivers.

    • newprince says:

      1) What do drivers have to do with CPUs?

      2) The only problem I’ve had with AMD graphics drivers was in Linux, and really it was kind of a pain to get Catalyst Control Center working and playing nice with xorg, but I expect things to be different in a couple years.

  5. PopeRatzo says:

    Having fallen this far behind, AMD’s only chance of success is aggressive pricing.

  6. omegajimes says:

    I’m an AMD fan, so I’m used to, and ready to be, disappointed. Like, super used to and ready to be disappointed. But, THIS TIME MIGHT BE DIFFERENT!?!?!?!?
    Honestly, the thing that gives me the biggest hope is the 2012 return of Jim Keller. Keller led the design of the K8 (Athlon 64) processors; he left and eventually became chief engineer at Broadcom until he was poached by Apple, where he helped design the A4/A5 chips. Keller is a processor wizard, and I have more faith in him than I do in anything else at AMD.

  7. FriendlyFire says:

    There are worrisome elements that go along with this however… Chiefly, the 40% improvement, as far as I know, is not substantiated. It’s a goal. Bulldozer was supposed to be a significant improvement as well. I hope they can hit their mark, but I wouldn’t be surprised if they can’t, and even if they did they’d still trail behind Intel, so they’d need aggressive pricing which they might not be able to afford with the R&D costs of Zen.

    For GPUs, there’s the concern that first-generation HBM is limited to 4GB of RAM. That may not be bad now, but if you buy an enthusiast card you’d expect it to last for a while, and then 4GB might not be enough for UHD or higher-resolution textures. Also, the rumours peg the 390X at $850, which would be suicidal.

    • Asurmen says:

      There are also rumours that there’s no limit and 8GB is entirely possible, so we’ll have to wait and see. However, I don’t see why that price is suicidal. If it can produce Titan X or better performance, they’ve more or less won the top-end enthusiast market, like they did with the 295X2.

      • mattevansc3 says:

        AMD confirmed the cards are limited to 4GB.

        • Asurmen says:

          Care to provide a link to that? Because the internet says otherwise. There’s an Ars Technica article with absolutely no details or links to any evidence.

  8. mattevansc3 says:

    It’s also worth mentioning that AMD is doing another Mantle: rushing a product to market in the knowledge that something better is due to roll out shortly, just to claim “FIRST!”

    The R390X is using Gen 1 HBM, which is capped at a maximum of 4GB. The RAM may have higher bandwidth, but bandwidth is not the barrier to UHD gaming, as images are stored in RAM for consistent reuse and not cleared once they’re drawn.

    Gen 2 HBM is in the process of being finalised and will be available early next year. They’ve already got Gen 2 HBM up to a maximum of 8GB, and nVidia are waiting for this to be available before announcing their HBM cards. Intel are also expected to announce their CPUs using the rival HSA later this year.

    While AMD may be first to market with HBM, that doesn’t mean HBM is ready for the market.

    • mattevansc3 says:

      Just realised I made a typo. Intel will be using HMC not HSA.

    • remon says:

      HBM is AMD’s design. By definition they are “first”.

      • mattevansc3 says:

        No, it’s not. The stacked RAM is a Hynix invention, and Hynix already use stacked DRAM and TSVs in RAM sticks. nVidia announced back in March that they would be using this same technology and setup in their Pascal-based GPUs.

        AMD designed the interposer in the package, but as nVidia have been using their own marketing-fluff terms, we can’t say whether an interposer is unique to AMD’s design or whether the interposer is a requirement for the tech.

        • czerro says:

          I don’t understand your argument. If AMD has Gen 1 HBM now, why wouldn’t they use it? What you’re saying is silly: they should wait for Gen 2 HBM and another year before releasing an HBM-equipped card? There is always something better around the corner; that’s how hardware development works. You’re acting as if AMD is somehow locked into Gen 1 HBM forever. They will just switch to Gen 2 when it’s final and the fab is up on it, for the generation after and/or possibly late entries to the 300 generation. Nothing precludes this, which makes your argument confusing. It’s grasping for something terrible to find. Save the needling for when there’s something concrete and some numbers.

          • mattevansc3 says:

            But the buyer is locked into that product, and that’s the issue. Games like Shadow of Mordor have already broken the 4GB frame buffer limit. The only 4GB card on the market that can run it at its highest settings at 4K resolution is an overclocked GTX 980. The bump from 4GB to 8GB on the R9 290X literally takes it from borderline unplayable to market leader.

            The 390X is going to be sold at a premium-card price, yet there are already serious doubts about how well it will perform as a premium card, and for how long, if we start seeing a trend towards high-end gaming requirements exceeding a 4GB frame buffer.

          • jalf says:

            “Games like Shadow of Mordor have already broken the 4GB frame buffer limit.”

            No they haven’t. The frame buffer is nowhere near 4GB.

            If you’ve got a 4K display, then it’ll be 32MB. And hey, let’s be generous and say you’ve got three monitors, and the game is able to use them all. And let’s say everything is rendered using HDR throughout. We’re still talking about, at the very high end, a few hundred MB used for frame buffers.

            Yes, the game uses lots of GPU RAM for other stuff, so all in all it can end up using 4GB of GPU memory. But it’s not the frame buffer, and it’s not a “4GB limit”. (less VRAM is fine if data can be shuffled in and out of it faster. Increasing the bandwidth between GPU and VRAM is one part of that puzzle although it does little to improve the bandwidth between system RAM and VRAM)

            There is no magic “4GB limit”. There’s a balance between a lot of different factors. And it is silly to pretend that the amount of VRAM matters more than anything else.
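jalf’s arithmetic above is easy to check. A quick sketch (the resolutions are standard; the 32-bit and 64-bit per-pixel formats are illustrative):

```python
# Frame buffer size = width x height x bytes per pixel.
def framebuffer_mib(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / 2**20

print(f"{framebuffer_mib(3840, 2160):.1f} MiB")         # 4K, 32-bit colour: 31.6 MiB
print(f"{framebuffer_mib(3840, 2160, 8):.1f} MiB")      # 4K, 64-bit HDR: 63.3 MiB
print(f"{framebuffer_mib(3 * 3840, 2160, 8):.1f} MiB")  # three 4K HDR monitors
```

Even the extravagant triple-monitor HDR case stays in the hundreds of megabytes, nowhere near 4GB; the rest of the VRAM is textures, buffers and rendertargets.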

        • Asurmen says:

          No, it’s both an AMD and Hynix invention. Also, there’s been absolutely nothing 100% confirming a 4GB limit for Gen 1, and within the last year there have been what, 2-3 games that have gone over 4GB at lower than 4K res with absolutely everything switched on. 4GB, even if that limit does exist, won’t be an issue for quite some time.

          • mattevansc3 says:

            So it’s an AMD invention even though Hynix have been using stacked-memory TSVs on their server-grade DDR4 memory for about a year?

            It’s also not been 100% confirmed that Gen 1 is limited to 4GB, when Hynix, the guys manufacturing it, have explained why it’s capped at 4GB, and AMD, the guys implementing it on their GPUs, have explained why it’s capped at 4GB?

          • Asurmen says:

            AMD and Hynix came up with the memory, so, er, yeah.

            Nothing has confirmed a 4GB limit.

    • jalf says:

      “The R390X is using Gen 1 HBM which is capped at a maximum of 4GB. While the RAM may have higher bandwidth that is not the barrier to UHD gaming as images are stored in RAM for consistent reuse and not cleared once its drawn.”

      That is almost entirely incorrect.
      Memory bandwidth is a major issue on GPUs.

      Even if you were right that once data is loaded into VRAM it is never cleared or modified, it would still need to be *read* almost constantly. That requires bandwidth.

      In reality, though, the data in VRAM certainly isn’t static. Some of it is, sure, but some textures or vertex buffers are shuffled in and out of VRAM (to make room for other data), shader uniforms are constantly being modified, the framebuffer is being redrawn every frame, and usually a bunch of other rendertargets are being drawn as well, rendered to textures which are then read in a subsequent pass. The Z buffer is tested against and updated, and so on.

      Yes, bandwidth between the GPU and VRAM is a huge deal.

      • mattevansc3 says:

        GDDR5 has a 352GB/s transfer rate; HBM is 512GB/s. While this sounds like a lot, to put it in comparison it is less than a 50% improvement. The PS4 has an almost 300% bandwidth advantage over the Xbox One, yet the quality difference, while noticeable, isn’t huge.

        You are also ignoring the fact that games are capping performance based on available VRAM. Shadow of Mordor will not run the ultra-quality textures on a card with less than 6GB VRAM. The 8GB R9 290 is already beating the unreleased 390X on image quality. GTA5 reports a requirement of 14GB VRAM to run ultra quality at 4K resolutions. There are a swathe of games, like Titanfall, Sleeping Dogs and Shadow of Mordor (without the ultra texture pack), that use 3GB+ VRAM on high/ultra settings.

        4GB is already a limiting factor in some games and the trend is that it will become a bigger factor over the next couple of years.
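For reference, the percentage being argued over here pins down easily (the bandwidth figures are the ones quoted in the comment above, not measurements):

```python
# Quoted peak bandwidth figures (GB/s), per the comment above.
gddr5 = 352   # quoted high-end GDDR5 figure
hbm = 512     # first-generation HBM figure

print(f"HBM over GDDR5: {hbm / gddr5 - 1:.1%}")  # 45.5%
```

So “less than a 50% improvement” checks out: it’s roughly 45 per cent.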

  9. Ashrand says:

    The return of Keller means a lot, AMD talking a big game again means a lot, and AMD were playing up shared memory tech in their APUs years before this (and I wouldn’t mind 4GB of RAM right there on the die with graphics-level bandwidth available to the CPU, no sir), so interesting times!

    And from a purely selfish point of view I hope it works out for them. More competition in all kinds of silicon is no bad thing (particularly when there has been so little of it), and I would like to see AMD in the commanding position over their competitors (on top or underdog, AMD has always been the guys pushing standards, not balkanizing proprietary stuff, and a bit more stability is what PC gaming deserves).

  10. montorsi says:

    If I had a dime for every promise AMD has made the past few years, I’d quadruple their earnings as a company.

  11. frenchy2k1 says:

    “If that sounds a bit familiar, it’s similar to the tech in the latest 3D-flash-memory SSDs, like the Samsung 850 Evo.”

    No, this is completely different.
    Samsung is using 3D NAND memory: each CHIP is now made of several layers of NAND cells. Each package can still contain several chips (up to 16), but only a few pins are connected (the chips are basically in parallel on those pins).

    HBM is a stack of chips (same as the stack of NAND chips inside a flash package); the novelty, however, is the *very* wide bus used. This means a new technology called Through-Silicon Vias (TSVs) is used, letting you connect to the upper chips *through* the lower ones. This lets you bring out LOTS of pins and allows for the very wide bus. The competing technology is called Hybrid Memory Cube.

    About AMD claims, the things that go unsaid so far is their jump in manufacturing capability. AMD has been stuck at 28nm, like the rest of the industry bar Intel, and Zen should come hopefully on 16nm FinFet. This alone should provide a very big jump in efficiency. Now to see if Global Foundry can deliver…
    Zen sounds great, but I will wait to see it to believe it. 40% higher IPC is nice, but only if Zen can keep Bulldozer's frequencies (~4GHz), as work done follows IPC × frequency. Bulldozer was AMD's P4, an architecture designed to climb in frequency, which Intel abandoned for a reason (hint: it's not efficient, as power rises far faster than linearly with frequency, while work done rises only linearly).

    Adding HBM on package for APUs may result in great performance, but the price will also probably jump. Remember Intel's Iris Pro chips, with 128MB of eDRAM cache onboard…

    • mattevansc3 says:

      It may also be worth mentioning that stacked memory and TSVs are already being used in server-grade DDR4 modules. Also, it's the interposer that's really enabling the wide bus, as it drastically shortens the distance the interconnects need to travel.

      AMD APUs should be significantly cheaper than Intel's eDRAM-equipped Iris Pro chips. The physical size of the eDRAM chip means you are using more silicon per chip and getting lower yields. Stacked RAM takes up less room, allowing AMD to use smaller manufacturing processes and increase yields, which in turn would bring down costs.
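      The die-size/yield/cost argument can be made concrete with a standard first-order yield model. All the numbers below (defect density, wafer cost, die counts) are made up purely for illustration:

      ```python
      import math

      # Classic Poisson yield model: yield = exp(-die_area * defect_density)
      def cost_per_good_die(die_area_cm2, defects_per_cm2, wafer_cost, dies_per_wafer):
          yield_fraction = math.exp(-die_area_cm2 * defects_per_cm2)
          return wafer_cost / (dies_per_wafer * yield_fraction)

      # A die half the size fits ~2x more copies per wafer AND yields better:
      big   = cost_per_good_die(2.0, 0.5, 5000, 250)
      small = cost_per_good_die(1.0, 0.5, 5000, 500)
      print(big > small)  # True: smaller dies cost less per good chip
      ```

      The cost gap is more than the 2x you'd expect from die count alone, because yield loss compounds with area, which is why shrinking the memory die matters so much.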

    • SuicideKing says:

      14nm FinFET, since GloFo's using Samsung's process.

    • disorder says:

      Manufacturing process is the part that's habitually overlooked in these equations, and it's not minor; it's the critical element. The seemingly great secret is: if you're two years behind on process, you are releasing a two-year-old CPU as new. Heat and clock speed are a matter of physics, and you sidestep that by targeting differently (ARM targets power consumption) – and Bulldozer wasn't a wholly terrible concept for dealing with that (in fact the same thing POWER did, SPARC did (to excess, with Niagara) and so on, even Intel themselves) – but the problem is that going 'wide' doesn't suit a lot of workloads, most definitely including games. Remember how people complained about PS3 development in 2006.

      Intel's processes have been ahead of others' since at least 2000. And while I don't know whether they've pulled further ahead in that regard recently, they've certainly held a process advantage for longer, while everyone else is still pulling up from 28nm (a solid sign that semiconductors aren't the business-as-usual they used to be). There's more than one reason they're the only semiconductor fabricator that doesn't lease out its capacity. And in Core iWhatever, Intel has an architecture that is also very good. That's tough to beat.

      Competitors have caught up – when they've had a design two years better than Intel's. Bearing in mind Intel's half-decade lead in CPU design, that's a tall order. K7 and K8 did it vs the P4. But Intel isn't a badly run company, and they responded. While they certainly seem to be sitting on their butt now, leveraging their process advantage against ARM and just incrementally improving Core, AMD had better count on an Intel response already being mostly cooked (like Banias/Pentium M turned out to be).

      Which would suck for them, and again, in the longer term, for us. The simple thing is, under that inertia I don't see any way for AMD /to/ win. I'd argue they're functionally kept alive by Intel to ward off antitrust scrutiny – until there's some fundamental change.

      • mattevansc3 says:

        I believe Intel owns its fabrication plants, which allows them to push through die shrinks. After AMD spun off their fabrication plants, they've had to compete with other companies for access to the plants using the smaller manufacturing processes. The problem for AMD is that Apple has far more money than them, can order in higher quantities, and is winning first dibs on the new processes for its ARM chips.

        • frenchy2k1 says:

          Yes and no.
          Money is not helping when there is NO competing process on offer.
          Most manufacturers are now concentrating on low power (mobile chips). This is a very different optimization from fast, high-power desktop CPUs.
          The only chips using smaller than 28nm processes are some mobile ones (20nm planar at TSMC for Apple, and 20nm planar plus 16nm FinFET at Samsung for their own mobile chips and some of Apple's).

          Intel is the only one that has optimized for speed/power at sub-20nm, and hence the only one with CPUs at those sizes.

  12. cafeoh says:

    Oh you're right, I didn't even notice. I kind of pictured the whole untruncated graph behind it. I mean, it still says 40% in big bold letters in the middle, so it's not that big of a deal.

    I feel like that graph isn't meant to show an improvement in IPC as much as an improvement in improvement, if you see what I mean. It says 40% more instructions per clock, but it really means 400% better AMD.

  13. SuicideKing says:

    So I was watching the TechReport podcast with David Kanter (of Real World Tech fame), and they brought up an interesting point – it's not clear whether the IPC increase is claimed to be per core or per thread. One core will run two threads, so that 40% could be factoring that in.

    Anyway, they’ll have to stand toe to toe with Skylake from Intel so I’m not extremely expectant. At least they’re putting out a new platform.

    HBM is looking to be really great though, and I hope AMD delivers, though the economics of the situation don’t look favourable until Nvidia also adopts it next year with HBM 2.0.

  14. teppic says:

    AMD's FX CPUs are still pretty good for the money: they can beat the i3 in games that use 4+ threads and often match the i5. They do suffer badly in games that rely heavily on single-thread performance. As mentioned, DX12 and Vulkan could make quite a difference, with scaling up to 8 cores greatly improved.

    For some other uses, like virtualisation, they’re vastly superior to Intel’s price equivalent models.

    • BlueTemplar says:

      Can you actually give examples of games that manage to use 3+ threads (and where those extra threads are loaded heavily enough that they couldn't simply share a core with the 2nd thread)?
      I'm still wondering if you really need anything more than a tri-core to run games (with one of the 3 cores left for the OS and other background programs)…

      • teppic says:

        The Witcher 3 seems to scale very well on four cores. There are a handful of games that do pretty well across multiple cores (e.g. Far Cry 3, BF4, Metro Last Light) but most that can use more than two are still very reliant on one thread. There are also many games, e.g. Tomb Raider, Bioshock Infinite, that the CPU is more than adequate for and so perform just as well as any Intel CPU.

        • BlueTemplar says:

          I see you only cite first- or third-person games, mostly shooters. I fail to see what kind of calculations would overload the CPU in these kinds of games (maybe anything crowd-related, like in the French Revolution Assassin's Creed?).

          I’m more interested in strategy games with a real-time component, where it’s not uncommon to overload the CPU.
          But even Supreme Commander (2007), which boasted of being fully multithreaded, wasn't really so in practice.

          There's also the upcoming Ashes of the Singularity (from Stardock), using the Mantle-based Nitrous engine (from Oxide) and boasting "thousands of units", but I wonder whether they actually managed to multithread the gameplay-affecting calculations (as opposed to just display calculations, like the physics of a spaceship crashing to the ground in StarCraft 2, which have no actual effect on gameplay).

  15. syllopsium says:

    Not a chance. 40% isn't enough, and we've been here before – when AMD's cache architecture was theoretically better, but Intel's 'inferior' implementation was better in reality. Or the time they said their '8 core' chips were better, except they're not proper 8 cores and are slow as shit.

    Most damningly there's a complete lack of imagination – no mention of innovation, no new instructions and no HSA on the FX chips, which suggests it's a complete gimmick and AMD isn't taking it seriously. Remember, even if an FX CPU is designed to be paired with a discrete GPU, there's definitely scope for on-processor GPU resources.

    AMD is OK at the low end, thanks to their integrated GPUs. Everywhere else they're just embarrassing.

    I've got an HD6950. The only reason I'd buy an AMD GPU currently is if I needed to run certain BSD Unixes that can't yet run the binary NVidia drivers. NVidia are clearly innovating more, and they continue to have integrated stereoscopic 3D, which is *still* a third-party option on AMD.

  16. phil9871 says:

    Another Intel fanboy giving a review…

    "If I'm really honest, even 40 per cent won't be enough to blow Intel away"… Have you guys (RPS staff) ever looked up minimum fps benchmarks? Paired with a $380 390X, the i7 vs an A10… the minimum difference is a whopping 3 fucking frames per second… I'm sorry, but Intel doesn't blow AMD away. You guys might blow Intel once in a while in your reviews… I'm tired of reading how bad AMD is at 1/4 the price of an i7 today (more like 1/7 when the i7 came out at $1k), and how they are utter shit for gaming. Anyone who says that is a fucking moron. Period.
    A 3.5-4GHz AMD chip is fine for every game out today and tomorrow for at least a year or two.

    90% of home consumers who do everyday browsing or light (or even heavy) gaming will never have an issue with an AMD vs an Intel for simple gaming, YouTube, web browsing, watching streams including Netflix and YouTube, etc…
    They aren't going to be running benchmarks (just to stare at the results day after day to convince themselves the $1k was worth it), encoding video, or running CAD-type apps, where L3 and floating point performance matter.

    link to

    Yup, a whole whopping 3 fps. Paired with the other video card, the AMD A10 beats the i7.

    All the reviews on this site about AMD vs Intel are LAME, biased BS articles. I joined this site simply to say that.