
Do DirectX 12 & Windows 10 Make Your Games Faster?

Possibly awesome. Definitely free

The mixed blessings of Windows 10 have been ours to experience for a few weeks now, and that means a new gaming API thingy (technical term) in the form of DirectX 12. We've touched on the possible impact of DX12 for all things gaming previously, how it promises to unleash CPU performance for free, bring the PC level with consoles when it comes to reducing overheads and all that jazz. Well, now it's out, some early DX12 software has emerged and there's all kinds of intrigue going on between AMD and Nvidia, the big noises in PC gaming graphics.

So pull up a pew and let's see if DX12 makes games run faster on the graphics card you've already got...

NB: There's a TL;DR at the bottom if you're that way inclined.

We're very much in the early days of DX12. That goes for everything from software availability to the very substance of the technology and how it applies to different hardware.

In other words, DX12 isn't yet a common feature in games. By that I mean it's not actually in any fully available retail game. But whatever the current situation, the one thing you can be sure about is that it's set to be a critical battleground for the big boys in PC gaming hardware.

Before we dig into the details, a quick word on what DirectX 12 actually is and where you'll find it. It's a multimedia API, or application programming interface. That means it's a set of routines and protocols that sit between apps (ie games) and your hardware (ie your CPU and GPU) and define how it all interacts.

DX12 covers all kinds of things from audio to 2D video. For us gamers, it's actually a subset of DX12 that matters most – the Direct3D bit that governs 3D graphics. The big change for the new version of Direct3D is a lower level of hardware abstraction (bear with me) along with a reconfiguration of how the graphics pipeline works.

Sometimes known as running 'closer to the metal', reduced abstraction basically means games can get at the graphics hardware more directly and efficiently. It's a bit like having a game that speaks the same language as your GPU instead of needing an interpreter to translate messages. That should mean games running faster on pretty much everything including your existing PC. Better frame rates on the video card you already have. This promise of free performance is what makes DX12 so exciting.
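
If you fancy seeing what 'closer to the metal' looks like in practice, here's a tiny illustrative sketch. It's entirely my own (the function and variable names are made up, not from any real engine): in DX12 the game itself must announce, via an explicit resource barrier, when the back buffer switches from being displayed to being drawn into. That's bookkeeping the DX11 driver used to do quietly on your behalf.

    #include <d3d12.h>

    // Illustrative sketch only: in DX12 the game explicitly transitions a
    // resource between states; under DX11 the driver tracked this itself.
    void TransitionToRenderTarget(ID3D12GraphicsCommandList* cmdList,
                                  ID3D12Resource* backBuffer)
    {
        D3D12_RESOURCE_BARRIER barrier = {};
        barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
        barrier.Transition.pResource   = backBuffer;
        barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_PRESENT;
        barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_RENDER_TARGET;
        barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
        cmdList->ResourceBarrier(1, &barrier);
    }

Less hand-holding from the driver means less hidden CPU work, which is where that free performance comes from.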

This DX12 stuff has been a long time coming...

Technically, only Nvidia's very latest second generation Maxwell GPUs (including the GTX 970, 980 and later) fully support the entire DX12 spec. However, my understanding is that any AMD GCN card or any Nvidia Kepler card or better will support pretty much all the DX12 goodness that matters. For AMD that means GPUs dating right back to the Radeon HD 7000 series, and thus anything newer qualifies. On the Nvidia side, we're talking GeForce GTX 600 series and newer. In other words, most cards released from 2012 onwards should, in theory, support it.

If there is a downside, it's that with less abstraction comes a greater workload for developers. It's the difference, very broadly speaking, between coding once and letting the API sort out the differences between AMD and Nvidia hardware (and the various generations thereof), and building more distinct code paths for each.
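
To give a flavour of what those distinct code paths might involve, here's a hypothetical sketch. The PCI vendor IDs are genuine, but the per-vendor tuning branches are invented for illustration rather than lifted from any actual game:

    #include <dxgi.h>

    // Hypothetical example: picking a vendor-specific path from the
    // adapter's PCI vendor ID. The IDs are real; the branches are not
    // from any shipping engine.
    const UINT kVendorAMD    = 0x1002;
    const UINT kVendorNvidia = 0x10DE;

    void PickCodePath(IDXGIAdapter1* adapter)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);

        if (desc.VendorId == kVendorAMD)
        {
            // e.g. lean on GCN's asynchronous compute queues
        }
        else if (desc.VendorId == kVendorNvidia)
        {
            // e.g. tune batching for Kepler/Maxwell instead
        }
    }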

The significance of the change to the graphics pipeline, meanwhile, mainly involves a reduction of what's known as the draw call overhead. Yup, more jargon. But it's not actually that complicated a concept.

Better load balancing of software threads is part of the DirectX 12 promise

Draw calls are requests from the CPU to render an object or element in a 3D engine. Each call generates a certain amount of API overhead or load for the CPU. The idea with DX12 is to reduce or remove that API overhead.
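
Here's a rough sketch of why that helps (my names, emphatically not Oxide's code): under DX12, each draw is merely appended to a pre-recorded command list, and the whole lot is then handed to the GPU in one cheap submission, rather than every call taking its own trip through the runtime and driver as in DX11.

    #include <d3d12.h>

    // Illustrative sketch: thousands of draw calls are recorded into a
    // command list with minimal per-call CPU cost, then submitted at once.
    void RecordAndSubmit(ID3D12GraphicsCommandList* cmdList,
                         ID3D12CommandQueue* queue, int unitCount)
    {
        for (int i = 0; i < unitCount; ++i)
        {
            // One draw call per on-screen unit (36 vertices, 1 instance).
            cmdList->DrawInstanced(36, 1, 0, 0);
        }
        cmdList->Close();

        // A single submission carries all the recorded calls to the GPU.
        ID3D12CommandList* lists[] = { cmdList };
        queue->ExecuteCommandLists(1, lists);
    }

Better still, several of those command lists can be recorded on different CPU threads at once, which is where the improved thread load balancing mentioned above comes in.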

Put another way, DX12 could well mean your CPU is rarely the limiting factor in existing games; in future games, that 'spare' performance could in turn be used for things like cleverer AI. We'll see.

Anyway, that's the theory. The big practical questions involve how much of this is going to materialise in reality and whether AMD or Nvidia graphics will have any particular advantages. The question of whether reduced CPU overheads might eventually make a cheapo AMD CPU more viable is interesting, too - currently Intel chips are very much the best bet for a gaming system.

To be frank, final answers to all these questions will take time to emerge. But we do have some early insight now in the form of the very first real-world game benchmark that includes support for all this stuff. Yup, it's Ashes of the Singularity [official site], to the gameplay content of which I am largely oblivious beyond the fairly obvious observation that, as an RTS game, there's potential for a proverbial arse-load of objects, units and ballistics on the screen at any one time. It is thus the kind of game capable of generating a killer CPU load in ye olde DX11.

Lots of objects used to mean a heavy CPU overhead...

Now, I usually don't like doing this, such is my impeccable work ethic. But on this occasion I'm going to cheat a bit and conjure up something of a poll of polls. In other words, I'm pinching performance impressions and numbers from across the web. I'm not convinced running the benchmark on the limited set of hardware I have currently is sufficient to provide a full enough picture. That's my excuse anyway and I'm sticking with it.

If you fancy a look at the raw figures, the likes of PC Perspective and Computerbase (warning: auf Deutsch) are decent places to start. So, switching from crusty old DX11 code to the brave new DX12 kind in this benchmark is what the comparison is all about. Here's the bombshell factoid. For AMD graphics, performance for some cards jumps by anywhere from 60 to 90 per cent. Yes, that's huge.

Nvidia GPUs, meanwhile, do anything from actually losing a little performance to gaining about 25 per cent. The net result of which can see something like an AMD Radeon R9 390X leap from being miles off the pace of an Nvidia GeForce GTX 980 to being on a par or even a bit quicker. It's dramatic stuff.

Look a little closer and some really interesting details emerge. For instance, AMD's Fury X board improves by as much as 94 per cent at 2,560 by 1,600 pixels. So this stuff is clearly relevant for high resolutions. On the other hand, as you move down the GPU stack, AMD's advantage seems to dwindle. The Radeon R7 370 only improves by around 15 per cent with DX12 enabled. Disappointing.

RTS'er Ashes of the Singularity from Oxide is first out of the gate with DX12 support

On the CPU side, you can see that in some situations, a chip like an AMD FX 8370 can go from a rather unplayable frame rate well below 20 to a more tolerable number in the low to mid 30s. It's a similar situation with cheaper Intel Core i3 chips. Flicking the DX12 switch can make those CPUs viable at higher detail settings. That said, the benchmarks also show that faster Intel CPUs still dramatically increase those frame rates at pretty much any setting. In other words, high performance CPUs are not suddenly redundant in DX12.

The problem, of course, is that one game makes for a rather singular data point. One might reasonably conclude there's something significant in all this for the traditionally CPU-heavy RTS genre. Beyond that and in the context of the broader gaming landscape, it gets very complicated awfully quickly.

What's more, the developers of Ashes of the Singularity, Oxide, are arguably somewhat aligned with AMD (you can read their views on all this here). Inevitably, the war of words between AMD and Nvidia has begun. Actually, that's not entirely fair. By Nvidia's standards, its response to the underwhelming performance of its own hardware in Ashes has been an uncharacteristically low-key press release rebuttal. Here's the key passage:

"We believe there will be better examples of true DirectX 12 performance and we continue to work with Microsoft on their DX12 API, games and benchmarks. The GeForce architecture and drivers for DX12 performance is second to none - when accurate DX12 metrics arrive, the story will be the same as it was for DX11."

It's precisely this kind of s***storm that should run better in DX12

So, Nvidia is saying this benchmark does not reflect how its hardware will broadly perform in DX12. Well, it would say that, wouldn't it? My spidey sense, informed partly by the knowledge that AMD has been ahead of the curve in this area courtesy of its own Mantle tech (which is in effect an AMD-only API replacement for DirectX that offers many of the same claimed advantages), tells me AMD may have an early edge with this DX12 shizzle.

Oh, and you will need to install Windows 10 to get access to DX12. It's not available as an upgrade package for earlier operating systems. All of which just leaves space to reiterate the fairly obvious fact that DX12 is essentially non-existent in actual games as I write these words. Indeed, it will likely be years before it's commonplace. That's especially true when you consider the additional workload it creates for game developers.

And yet DX12 does look awfully promising. There's enough potential on view to suggest it could be used to enable some very cool things in previously CPU-limited gaming genres sooner rather than later. DX12 looks like it will be very good for PC gaming. But the full impact is still a few years away.

TL;DR
- DirectX 12 is a new API that, in some situations, dramatically reduces CPU overheads
- That will allow at least some games to run much faster
- It's probably compatible with your graphics card unless it's really, really old
- It's only in Windows 10
- AMD cards benefit more than Nvidia in early DX12 software
- But there aren't yet any finished, shipping games that use DX12
