AMD’s new Ryzen CPU and gaming: Take two

As we saw two weeks ago, AMD’s new Ryzen CPU is excellent in many regards. Hurrah. But its most conspicuous weakness is gaming. Haroo. Ryzen really is awfully important for all PC enthusiasts, so it’s worth a closer look at just what is going on with Ryzen and PC gaming. Be warned, however, for now there aren’t any easy answers.

Part of the problem is the inevitably silly scheduling of big product launches. I understand some of our American cousins got as much as a week with Ryzen before the reviews were published.

Back here in Blighty, I only got my hands on the thing 24 hours before the story went up and I was not alone in that. Moreover, Ryzen is a brand new and largely unknown architecture which makes everything a lot more complicated. At launch, in other words, it just wasn’t possible to have a complete handle on how the thing performed and why.

Anyway, the broad picture from the first round of reviews was that Ryzen was hot stuff for multi-threaded apps and decent in terms of single-threaded performance but suffered from some performance blackspots, many of which just so happened to overlap with games. Bit of a bummer.

The immediate rationalisation was that games simply aren’t optimised for Ryzen. They’re largely developed on Intel platforms, often using tools like Intel’s own compilers. So it’s no surprise to see occasionally mediocre games performance on Ryzen.

Unfortunately, on reflection that argument doesn’t really bear scrutiny. For starters, virtually none (quite possibly actually none) of the benchmarks used in any of the reviews were optimised for Ryzen. Cinebench isn’t optimised for Ryzen. But it absolutely bloody flies.

Likewise, many games are indeed designed first to run on AMD hardware – namely the AMD CPU cores found in the current high-performance console duo. Granted, those low-power AMD cores bear little relation to the new Ryzen chip. But they bear even less relation to any Intel CPU core, on which they pretty much all run very well, it must be said.

An optimisation issue or just basically borked?

So, there really must be something quite specific about Ryzen that’s causing the problem. That isn’t to say that it can’t be solved with optimisations. But as a generic PC-compatible CPU, Ryzen must have a few wrinkles.

Frankly, that shouldn’t come as a surprise. It’s a brand new architecture and AMD simply doesn’t have the manpower and resources to polish its product to quite the same sheen as Intel. Indeed, when some early benchmark numbers leaked out and looked extremely promising, it was just this scenario that prevented me from getting too excited. Ryzen was looking quick. But would it also be a bit, for want of a better word, buggy?

For AMD’s part, in an official statement it very much pointed the finger for perceived gaming performance shortfalls at a lack of optimisation:

“CPU benchmarking deficits to the competition in certain games at 1080p resolution can be attributed to the development and optimization of the game uniquely to Intel platforms – until now. Even without optimizations in place, Ryzen delivers high, smooth frame rates on all “CPU-bound” games, as well as overall smooth frame rates and great experiences in GPU-bound gaming and VR.”

Its immediate response is to seed over 300 Ryzen kits to game developers with a view to supporting a flurry of optimisation work. But I’m not completely comfortable with AMD’s attitude as it stands. Aside from the obvious issue of console ports initially developed for AMD CPU cores, my personal experience simply doesn’t tally with AMD’s statement.

I revisited a few games and while many are indeed subjectively indistinguishable running on Ryzen as opposed to an Intel CPU, that doesn’t apply to all. In subjective terms, to give just one example, Total War: Attila running on Ryzen is tangibly choppier than on a typical Intel processor. There’s no getting away from it.

But why, exactly, might that be given Ryzen’s basic performance is competitive to say the least? Currently, there are a number of candidate problems that may eventually be deemed the prime culprit.

They include the specifics of AMD’s simultaneous multi-threading technology which, like Intel’s HyperThreading, allows a single CPU core to crunch two software threads in parallel. Then there’s the basic architecture of Ryzen which consists, essentially, of two quad-core modules. The talk here is of latencies in communication between the two modules.
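As a rough illustration of why cross-module chatter might matter, here is a minimal Python sketch that ping-pongs between two threads and times the round trip. The core pinning is a Linux-only assumption (`os.sched_setaffinity`), and the specific core indices you would pass are hypothetical; on CPython the absolute numbers mostly reflect GIL handoff rather than raw cache latency, so only the relative same-module vs cross-module comparison is meaningful.

```python
import os
import threading
import time

def pingpong_latency(core_a=None, core_b=None, iters=50_000):
    """Estimate thread-to-thread round-trip latency using two Events.
    If core indices are given (Linux only), each thread is pinned, so
    you can compare a same-module pair against a cross-module pair."""
    ping, pong = threading.Event(), threading.Event()

    def responder():
        if core_b is not None and hasattr(os, "sched_setaffinity"):
            os.sched_setaffinity(0, {core_b})  # pin this thread
        for _ in range(iters):
            ping.wait()
            ping.clear()
            pong.set()

    t = threading.Thread(target=responder, daemon=True)
    t.start()
    if core_a is not None and hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, {core_a})  # pin the main thread
    start = time.perf_counter()
    for _ in range(iters):
        ping.set()
        pong.wait()
        pong.clear()
    elapsed = time.perf_counter() - start
    t.join()
    return elapsed / iters  # seconds per round trip
```

If the cross-module theory holds, a pair of cores split across the two quad-core modules should show a noticeably worse round-trip time than a pair sharing a module.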

See those two big rectangular lumps? Those are the dual quad-core modules…

Compatibility issues with the way Windows 10 schedules software threads, and how that relates to both Ryzen’s multi-threading and its modular architecture, are also in the mix.

For now, however, AMD is dismissing much of that:

“We have investigated reports alleging incorrect thread scheduling on the AMD Ryzen™ processor. Based on our findings, AMD believes that the Windows® 10 thread scheduler is operating properly for “Zen,” and we do not presently believe there is an issue with the scheduler adversely utilizing the logical and physical configurations of the architecture.”


“We have investigated reports of instances where simultaneous multi-threading (SMT) is producing reduced performance in a handful of games. Based on our characterization of game workloads, it is our expectation that gaming applications should generally see a neutral/positive benefit from SMT.”

You can read AMD’s view in more detail here. But the overall upshot of all this is that we probably won’t know exactly how Ryzen’s gaming performance is going to shake out until a good number of game developers have had a crack at optimising their engines.

Until then the same conclusion I drew last time applies. Ryzen is a great chip at a fantastic price. But the one group of PC users for whom Ryzen is probably least immediately compelling are gamers.

  1. Sakkura says:

    The release of the R5 and R3 chips will help clarify what’s going on, as some of them should have one of the modules entirely disabled (removing the latency between them as a variable).

    The R7 chips were never really that gaming-oriented anyway. You don’t need 8 cores for current games.

  2. geldonyetich says:

    One thing I was wondering about this: are AMD (formerly ATI) graphics cards inherently more compatible with AMD CPUs?

    Because currently I have a pair of GeForce 970s running in SLI which are, in all honesty, complete and absolute overkill. However, they are running on an AMD FX-8120 (an earlier revision), which I feel is getting a tad long in the tooth, even though it’s perfectly adequate at running anything that isn’t Planetside 2. Consequently, while my benchmarks with the same cards aren’t as impressive as some, I wonder at the necessity of upgrading.

    So, theoretically, let’s say I want to get the best possible performance out of SLI 970s. Would it matter if my motherboard/CPU is based on Intel architecture or not?

    • Don Reba says:

      I don’t think there is such a thing as CPU-GPU compatibility. Those choices are completely independent.

      • Harvey says:

        While you’re right, it’s worth noting that if you buy an AMD graphics card and want to use your old Nvidia graphics card for PhysX, you cannot, as Nvidia will detect red and disable the feature.

        I recall dimly that Nvidia cards disable PhysX with an AMD chipset mobo too, but as I type this I realize that’s probably not the case.

    • Person of Interest says:

      A recent interview with AMD CEO Lisa Su at Anandtech sort of explained why AMD isn’t trying to bind its CPUs and GPUs too tightly:

      LS: What I’d like to say is (and you’ll appreciate this) that I love CPUs and GPUs equally, and the markets are a little bit different. And so think about it this way: our GPUs have to work with Intel CPUs, and our CPUs have to work with NVIDIA GPUs, so we have to maintain that level of compatibility and clarity.

    • tehfish says:

      I’m really not answering your main question here. But as someone with the same CPU (though in my case overclocked to a 4.2GHz base clock, no turbo) and just one comparably specced GPU (an R9 390): you are without doubt being CPU limited in some modern games.

      In my example, with AMD’s power saving active, the GPU might only be running at 2/3 of its maximum clockspeed, as the CPU can’t keep up to fully load it.

      It still does surprisingly well considering the sheer age of the CPU, but it really is at its limit now for modern games… I’m saving for a Ryzen for when a few more models are out :)

  3. aircool says:

    I’ve used AMD CPU’s and ATI GPU’s before. It always seemed that there would be some problem with one or the other when a game was released. Even now, after a game has released, the forums for that game will have a thread dedicated to some bug or screw up due to AMD hardware.

    These things often get fixed, but, no matter how clever the hardware, I refuse to use anything but Intel/Nvidia, as the CPU and GPU (along with the dedicated mobo) are the most expensive parts of a PC. Too much risk (ah-hah… CPU pun) to throw £500-600 at team Red, even if their hardware performs better for less cost. Programs and games are generally more stable with Intel.

    • Sakkura says:

      Nvidia has always had its share of driver issues, and for the past couple years I honestly think AMD has been considerably better than Nvidia on this point.

      HDMI color bug, the various embarrassing bugs at Pascal release, and so on.

    • Baines says:

      That depends a bit on the games, or more particularly on the publishers and dev studios. I’ve seen several console ports that have Nvidia-specific issues, particularly among titles by Japanese studios. You can also get the occasional bit of software that prefers an AMD CPU (and much more rarely might not even support an Intel CPU) simply because the devs favored using AMD chips over Intel.

      As for forums having a thread dedicated to AMD screw ups, to be fair it seems every game forum these days will have a bug/crash thread regardless of hardware. Maybe it will be AMD, maybe Nvidia, maybe Intel, maybe Win10, maybe controllers, … Something, and as likely several somethings, won’t work for some people. (Last time I looked, the UMvC3 thread had complaints because the fighting game released without support for the d-pad on Xbox 360 controllers. Something that almost no reviews mentioned, and which Capcom seemed to have no interest in addressing.)

      Intel and Nvidia are probably a more stable pairing, simply because the two see so much developer support and user support.

      • xyzzy frobozz says:

        They also have a lot more money to splash on R&D than AMD.

        Which is exactly why you should support AMD where possible. The better they sell, the more competitive the market.

        • Chorltonwheelie says:

          So I should spend my hard earned money on inferior hardware in order to support some fanboi’s favourite multinational corporation in its failing attempts to earn more than another?
          We’d all like to see real competition driving CPU innovation, but wishful thinking and ‘supporting’ a so-called little guy like a football team will just prolong stagnation.

    • Yasha says:

      You know, this is precisely the type of marketing fallout that companies like Intel and Nvidia depend upon to make sales. Truthfully, programs are not “more stable” on Intel and Nvidia. AMD has not had bad driver days in an awfully long time, and the only reason that “AMD = Bad Drivers” still exists in the consumers’ imagination is because of marketing techniques and silly comments like yours which continue to perpetuate lies and fear. You are saying that $500-600 is too much money to drop on “unstable hardware” and thus AMD is a poorer choice. This is just false, and just because people say it all the time does not make it true.

      I should clarify and say that this post is NOT suggesting that you only buy AMD and never buy Intel or Nvidia. What I AM suggesting is that by perpetuating lies derived from corporate marketing, you are doing far more harm than you realize, for the narrative you spin finds a home in many hearts.

      • Chorltonwheelie says:

        Intel and Nvidia’s products are better than AMD’s by a measurable distance at the moment. Try not to lose any sleep over it. If it’s what you can afford or just plain prefer then great… but no need to start telling fibs.

    • a potato. says:

      I gotta say, I used a Radeon HD 5770 and a Phenom II X4 965 Black from 2010 until 2015, when I (finally) upgraded to an i5 6600 and a GTX 970.

      I played every single game I tried without problems.
      Of course the performance wasn’t exactly desirable after, I don’t know, 2013. But they worked flawlessly, both as CPU/GPU and as a great space heater.

      Damn, those things get hot. They worked, though.

  4. thenevernow says:

    “I only got my hands on the thing 24 hours before the story went up”

    Please, take more than 24 hours to review hardware and software. You’re always fashionably late in the big picture anyway, so you might as well take your time.

  5. zarthrag says:

    I don’t get this zero-sum “if you’re not first, you’re last” attitude when it comes to processors. Ryzen wasn’t meant to be the king of the hill. It was meant to be competition. In that, it succeeded. Once programmers learn to keep threads that share data from being split across the CCXs, this flaw can be mostly avoided. Do you think Microsoft won’t optimize the scheduler so it doesn’t move communicating threads across CCX boundaries?

    Even with the flaw, I’m going to get awesome performance for a lot less. I’ve been clinging to my 8350 for a long time – and gaming just fine with it. Next year’s refresh might even take care of this issue anyway – just in time for us to complain about some other issue on a processor that costs half as much as its direct competitor. *eye roll*

    • ColonelFlanders says:

      I agree with this sentiment. What I also find barmy is the notion that any modern processor isn’t MORE than enough for gaming. A lot of people are acting like the flagship Ryzen not being *quite* as good as the flagship i7 makes a shit of a difference, when in fact any processor over £200 will do you for 5+ years of ultra settings. Hell, I have an i5 4570 which is about as entry level a gaming chip as you could get, and it’s still more than adequate for anything (except of course for Planet bastard Coaster, which needs Stanford’s distributed computing network to even bloody run).

    • xyzzy frobozz says:

      Well, technically, with only two companies making x86 chips, if you’re not first you are indeed last.

      But I totally agree with your sentiment. My next build will be AMD purely because the performance is more than acceptable at above 1080p, where I game.

      And, frankly, I’d like to support competition.

  6. ElementalAlchemist says:

    Jeremy if you are doing benchmarks you may want to investigate the effect of memory frequency. From what I have seen, frequency is having an inordinate effect on performance. I think Ryzen tops out at 2666, and it seems like it may be effectively bottlenecked by that in some cases, especially at lower frequencies.

    • RCoon says:

      To echo this, Zen’s Infinity Fabric (the interconnect between the two CCXs) has a clockspeed directly tied to memory frequency. Increases in memory speed significantly improve performance. Basically, for the first time in ages, memory speed actually matters.
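For the arithmetic behind this: a DDR4 kit’s advertised number is a transfer rate (two transfers per clock), so the underlying memory clock is half of it, and reports suggest the Infinity Fabric clock tracks that memory clock 1:1. A trivial sketch of the relationship (the 1:1 coupling is the reported behaviour, not something confirmed here):

```python
def memclk_mhz(ddr_transfer_rate):
    """DDR4 'speed' (e.g. 2666) is in megatransfers/s; DDR means two
    transfers per clock, so the actual memory clock is half that.
    Reports suggest Zen's Infinity Fabric clock tracks this 1:1."""
    return ddr_transfer_rate / 2

for kit in (2133, 2666, 3200):
    print(f"DDR4-{kit}: memory (and reportedly fabric) clock "
          f"~{memclk_mhz(kit):.0f} MHz")
```

So moving from DDR4-2133 to DDR4-3200 would, on this account, raise the fabric clock by roughly 50%, which is why memory speed shows up so strongly in Ryzen benchmarks.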

      • ElementalAlchemist says:

        You think their current frequency limit is hardware related (either CPU or chipset), or will it be able to be raised via a BIOS update in the future? I’m seeing high-end DDR for Z270 boards with freqs of up to 4266MHz, which is almost double. If Ryzen really is memory frequency limited, you’d have to wonder how it would perform with a few sticks of that tasty RAM.

  7. xyzzy frobozz says:

    If the problem is SMT, surely it’s easy enough to test it by disabling it?

    Hell, you’d still have up to eight cores, which is more than enough for pretty much any game out there.

    This should be easy to test and confirm/dismiss, surely?
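One quick way to sanity-check the SMT angle before toggling it in the BIOS is to ask the OS how many logical threads it sees per physical core. A Linux-only sketch (it reads /proc/cpuinfo, so this is an assumption about the platform; on other systems it simply returns None):

```python
def smt_ratio(cpuinfo_path="/proc/cpuinfo"):
    """Rough logical-to-physical core ratio from /proc/cpuinfo
    (Linux only). A ratio of ~2 suggests SMT is enabled; 1 means
    it is off (or the kernel hides the topology)."""
    logical = 0
    physical_cores = set()
    phys_id = core_id = None
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                key = line.split(":")[0].strip()
                if key == "processor":
                    logical += 1
                elif key == "physical id":
                    phys_id = line.split(":")[1].strip()
                elif key == "core id":
                    core_id = line.split(":")[1].strip()
                    physical_cores.add((phys_id, core_id))
    except OSError:
        return None  # not Linux, or /proc unavailable
    if not physical_cores or not logical:
        return None
    return logical / len(physical_cores)
```

On an eight-core Ryzen with SMT on, this should come out at 2.0; rerunning a problem game with SMT disabled in the BIOS and comparing frame times is then exactly the confirm/dismiss test suggested above.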

    • Malcolm says:

      You could also go some way to ruling out the latency between the quad-core modules by setting the thread affinity of the .exe (assuming you can work out which logical cores map to which modules). Right-click the process in the Details tab of Task Manager -> Set affinity.
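The Task Manager approach can also be scripted. A hedged, Linux-flavoured sketch (the idea that logical CPUs 0-7 map to the first module is purely an assumption here; verify the topology on your own machine before trusting it, and on Windows use Task Manager as described):

```python
import os

# Hypothetical mapping: on an 8-core/16-thread Ryzen, logical CPUs
# 0-7 are assumed to be the first quad-core module with SMT. This
# is an illustration only; check the real topology first.
FIRST_MODULE = set(range(8))

def pin_process_to(cores):
    """Restrict the current process to the given logical cores
    (Linux only; returns the resulting affinity set, or None on
    platforms without sched_setaffinity, e.g. Windows/macOS)."""
    if not hasattr(os, "sched_setaffinity"):
        return None  # unsupported platform
    os.sched_setaffinity(0, set(cores))  # 0 = current process
    return os.sched_getaffinity(0)

# e.g. pin_process_to(FIRST_MODULE) before launching a game's
# worker threads keeps everything on one module.
```

If a game’s choppiness disappears when confined to one module, that would point squarely at cross-module latency rather than SMT.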

  8. MajorLag says:

    There are a few things about this that seem strange to me. Firstly, modern games are much more GPU bound in my experience than CPU bound, so to see any significant difference in gaming experience when using such a high-end processor is odd. If I had to guess, it has less to do with the processor than some other part of the architecture that’s married to the processor, something in the motherboard chipset, where it’s impossible to do a straight side-by-side comparison with Intel. Granted, my experience here is very limited since I don’t develop in the AAA game space, nor do I pay a whole lot of attention to the high-end graphical whizbang gaming scene at all, so take that with a grain of salt.

    I think the bit about optimization and console hardware is off the mark too. I’m not familiar with the situation, but I imagine that most developers use some form of game engine as a base that has, over the years, already been optimized for PC (Intel) and consoles (some non-Ryzen architecture, I assume ARM?) in completely separate build paths, so “Optimized for Intel, not Ryzen” is still pretty damn valid as an argument.

    • ElementalAlchemist says:

      some non-Ryzen architecture, I assume ARM?

      No, the PS4 and Xbone both use a customised (each slightly different) version of an 8-core x86 AMD Jaguar CPU. It’s basically a laptop CPU.

  9. pistachio says:

    Competition is always good but I am just too satisfied an Intel customer to switch to AMD.

    The first generation i7 I bought performed so far beyond expectation that I am still using it right now. Overclocked from 2.67 to 3.8GHz and water-cooled, it has never hit over 75 degrees; the mainboard is a mid-range Asus P6, with the best memory available at the time, 5 years ago. The whole thing cost me around 1000 pounds without a graphics card. Yes, quite a lot, but heck, it has been worth it.

    Even though I am on a bit of a tight budget, I am simply not looking for a bargain anymore, that’s how much value I have gotten out of this machine. Just started saving a little earlier so I can do it again right now if I want to. But I don’t need to just yet, it still runs pretty much everything to my satisfaction.

    Happy with Ryzen’s arrival but I just don’t want to risk it.

    • MacPoedel says:

      I also have a first gen Core i7 (a Core i7 860; you probably have a Core i7 920, considering the P6 motherboard and clockspeeds), OC’d to 3.8GHz and with 1800MHz RAM. It still performs perfectly adequately, but the chipset is a bit long in the tooth. No SATA 600, no USB 3; those make no difference for gaming, but for some of the more workstation-y loads (wouldn’t have an i7 if it was only for gaming) I’d like faster IO, so an upgrade is really due. I could add some of the above ports with PCIe cards to my system, but my massive CPU and GPU coolers are blocking most of the PCIe slots, leaving only some PCI slots open, and what use are those?

      Regarding your system, Intel has no real successor for your 920, that was a bargain to get on the high end desktop platform. Right now you’d have to pay twice as much to get the cheapest high end Core i7 (the 6800K hexacore).

      If you’re willing to stay on your current platform and make it last even longer, you could get a Xeon W3670 3.2GHz hexacore, they’re quite cheap on eBay.

  10. waltC says:

    Just as was true for the original Athlon, it will take a bit of time for game developers to optimize for Ryzen and for the proper Ryzen compilers to make the developer circuit. In the beginning it was a mixed bag in gaming for Athlon vs. the original Pentiums, but just a couple of months later you could no longer find a gaming web site that used Intel CPUs at all, and that lasted for almost two years…;) In the coming months look for dramatic game-performance improvements for people restricted to 1080p or less. As we move past 1080p, however, the CPU becomes progressively less important to frame-rate performance, right now. The future is very bright for Ryzen: it’s a much newer architecture than Intel’s iX series, which is already very mature and tapped out in terms of optimizations, etc.

  11. rodan32 says:

    I pondered this a very long time. My poor old i5 2500k (or the mobo it’s in) started getting a little flaky. I’m glad AMD is back in the game; these look like nice chips at a good price. My kids game on APUs (I’ve got three of them; two A6s, and one A10), and I have no complaints at all about AMD quality or performance for the price. But I wound up replacing my i5 with an i3 7350K.

    The reason? Most of the games I play take more advantage of high single-core performance than multiple cores (I’m looking at you, KSP). The i3 7350K is a rocket in single-core stuff.

    I’ll have more AMD hardware in the future, as well as Intel and Nvidia. It’s all about matching up price/performance/need. I’m waiting to see how the R5 chips stack up before I drop on a new CPU for my choreputer (Plex server, game server, Teamspeak server, whatever crap I don’t want on my main machine). It’ll make better use of multiple cores.

  12. uh20 says:

    Definitely picking up an APU when it is available. I have high expectations that it will be the gamechanger.

  13. changeofpants says:

    I keep seeing the intimation that “Ryzen is bad for gaming,” frequently accompanied by accusations that AMD as a company has dropped the ball or something. I get that some gamey gaming gamers in the PC master race are obsessed with the absolute greatest number of foopas, but for the majority of gamers playing at 60Hz or below, the difference between an 1800X and a 7700K will be invisible.

    Ryzen IS inferior for the gaming enthusiast/VR/high refresh rate monitor owners who mostly use their PC to play games, especially considering the $500 price point of the 1800X. However, for the typical person who games at 60Hz and the people who use their computer for other things in addition to gaming, the sub-$300 hexa-core and sub-$200 quad-core chips are compelling. Users will soon have access to an unlocked 4c/8t for under $200, or a 6c/12t chip for about the same price that Intel charges for a 4c/4t chip. Assuming these lower core count Ryzen parts don’t dip below 60Hz in game benchmarks, I bet budget-minded gamers will tend to go for the discernible utility of additional cores over the nigh-imperceptible benefit of 120Hz headroom on their 60Hz displays.
