Week in Tech: AMD And The End Of x86

By Jeremy Laird on May 8th, 2014 at 9:00 pm.

AMD's crown jewels are our gaming tools

Existential crisis alert. AMD has been laying out its vision of the future of CPUs this week. And it calls into question the very meaning of what makes a PC. AMD is proposing parallel development of pin-compatible chips based on x86 and ARM. For most things I do with my PC, whether there’s ARM or x86 inside doesn’t matter much. I’m not bothered whether there’s an ARM or x86 chip underpinning m’Chrome browsing, for instance. But gaming is a very different matter. Whether for good or ill, being a PC with the x86 instruction set, Windows OS and DirectX API definitely means something when it comes to gaming. But if everything goes ARM or at least instruction-set agnostic, what happens to PC gaming? What does PC gaming even mean? Does RPS disappear in a puff of speculative logic?

Wait, I know what some of you are thinking. Not another bloody doom-and-gloom piece about the death of the PC.

Oh, OK. Guilty as charged. My opening ‘existential’ gambit and the ‘end of x86’ thing is perhaps too high on melodrama. But I promise, that’s largely not where I’m coming from. Instead, I’m simply pondering along the lines of, “OK, if that happens, what does it mean for the PC and for gaming?”

I’d rather think about this stuff early doors than wake up one day and suddenly realise it’s too late, even if it will likely take several years to shake out. It’s about keeping up to speed, not prognostications of inevitable doom.

Anyway, a quick précis of what AMD has announced is probably in order. In simple terms, AMD’s plan is to make both ARM and x86 chips from here on.

A certain high performance subset of these chips will be very similar despite the differing instruction sets. They’ll be pin compatible. They’ll offer AMD’s GCN graphics on-die and other shared non-CPU features. They’ll presumably have similar power consumption ratings. But you’ll have the choice of ARM and x86 CPU cores.

For us PC die hards, the most relevant parts of AMD’s announcement are its intention to develop its own high-performance 64-bit ARM core and a new x86 core.

For the first bit, that means AMD won’t just be taking ARM cores off the proverbial shelf (it will do that, just not exclusively). AMD will also license the 64-bit ARMv8 instruction set and design its own custom CPU cores, codenamed K12, which could be used in something that resembles a current x86 gaming rig.

The second confirms AMD at least intends to keep developing new x86 cores. So this new strategy is not the beginning of the end for AMD’s x86 CPUs.

These new ‘ambidextrous’ chips, as AMD calls them, are due out in 2016 according to the announcement. Given AMD’s track record, you’d take that timing with a dollop of scepticism. But it’s the intention that matters, not the details of the schedule.

The upshot of all this is the possibility of various systems and devices that are identical in almost all regards save for the instruction set of their CPU cores. It’s at this point that you begin to wonder why anyone save for gamers would choose x86 in this scenario.

The context here is consumers, not academics or professionals demanding the ultimate in performance. For my money, our game boxes need to be derived from consumer tech, not chips aimed at enterprise and high-end academia. In other words, our game boxes need to be affordable.

In the past, x86 held a really clear performance advantage. That used to boil down to the whole ARM RISC vs x86 CISC thing where the ‘reduced’ ARM instruction set delivered awesome efficiency but modest performance and x86 broadly the opposite.

But in the last few years, ARM and x86 have been converging on common performance and efficiency ground. It’s an increasingly even contest.

The other big factor in favour of ARM is its status as a semi-open platform. Yes, the core instruction set is privately owned and controlled. But ARM’s licensing model is dramatically different from x86’s jealously guarded limitation to just two players, Intel and AMD.

Off the top of my head I can think of at least four outfits that currently design their own ARM cores. That means more competition which usually translates into more innovation, lower prices and the rest. Meanwhile, x86 chips for PCs are stagnating a bit and PCs look pretty pricey.

And you could ask whether a wholesale shift to ARM would be a bad thing even for gaming. Microsoft already does an ARM-compatible version of Windows complete with DirectX and D3D. Who cares about the CPU instruction set?

Well, you only have to look at the current limitations / awfulness of Windows RT to get a notion of how the transition from x86 to ARM for PC gaming is much easier said than done.

Then there’s SteamOS which is currently focussed on x86. Getting a broad library of games running on SteamOS x86 is a tough enough task, let alone thinking about a parallel ARM version.

No, it all gets very messy when you consider moving games across to ARM. The question, then, remains. What happens to PC gaming if the vast majority of non-gaming client devices have gone over to ARM? Can the gaming PC remain the sole x86 consumer device and be affordable enough to be relevant? Or will we have to go through a painful transition period as games too leap the x86-to-ARM divide?

Anyway, now you know what AMD has planned. What do you reckon to the whole ARM vs x86 thang?


65 Comments »

  1. TacticalNuclearPenguin says:

    Well I guess AMD is trying to chase the money somewhere different; there’s little else I can clearly get out of this.

    I hope our usual playground will still be the focus and that they’ll use the newfound money to shake Intel’s throne.

    They’ll need more than that to convince me not to buy the latter anyway, but i wouldn’t mind a new Intel chip that was about something more ambitious than “60000% more power saving, 5% extra performance!”.

    • SuicideKing says:

      Seeing that AMD’s failed to provide much power savings or much of a performance increase since 2009, I think Intel’s excelling at a relatively tough science.

      Those 5% increases aren’t easy. Physics and stuff, you know?

      Anyway, Devil’s Canyon is coming, so is Haswell-E with DDR4, so yeah…

      • james.hancox says:

        Devil’s Canyon is just Intel giving overclockers the TIM they should have had since Ivy Bridge. Not exactly a staggering advance.

        • SuicideKing says:

          No one’s saying it’s an “advance”, but it’s what we’ve been asking for.

          Or is that a problem too, now? That they’re actually making an effort to prioritize the desktop again?

        • HisDivineOrder says:

          The “advance” is that Intel realized they made a mistake and is pivoting to support the desktop CPU market again in a big way, when last year it was “common knowledge” that Intel planned to cut out socketed CPUs altogether. Now they’re saying they’re not only continuing to support socketed CPUs, but they’re tailor-making some for overclockers.

          That’s advancement in attitude.

          • SuicideKing says:

            WTF? It was never “common knowledge” that Intel was going to drop socket support, it was merely a “common misconception” that no one let go of.

            Broadwell was supposed to “lack” socket support because it was supposed to be mobile-only. Socketed CPUs for the desktop were supposed to “return” with Skylake, Haswell’s refresh and Haswell-E were going to happen this year (so was Broadwell, which will still technically happen, but later than expected).

            Devil’s Canyon was unprecedented, and yes, there lies the advancement of attitude you’re talking about.

    • TacticalNuclearPenguin says:

      You’re talking to someone who’s mad enough to consider going for Haswell-E, so don’t worry, I absolutely love Intel and I know they do not slack one bit.

      Still, you see, part of that little increment is also due to their incredibly huge focus on integrated GPUs, which is proving to be a tough battle for them. I understand they need to get their money this way as well; I’m just dreaming of my own “perfect world” in which Intel and AMD are still only producing actual pure CPUs, mobile gaming is a small niche and the new consoles have 16 cores and a dedicated GPU that’s still not available in the PC consumer market.

      Yep, a little unrealistic, I know.

      • SuicideKing says:

        True, I’d love to see a pure CPU from Intel too… and a more competitive AMD, so that Intel at least stops its arbitrary product segmentation.

        I’m not sure 16 cores is profitable for anyone to make at the moment, especially since most software just isn’t there on the consumer side…

  2. KDR_11k says:

    Isn’t the x86 instruction set rather crappy anyway? There has been so much bolted on top of that (MMX, SSE, AMD64, µInstructions, …) that the actual x86 set seems more like a leftover stuck in there. Of course Intel’s approach with IA64 was a faceplant out of the gate too due to lacking compatibility but is it really a bad thing to finally kill that ancient instruction set?

    • Ieolus says:

      Backwards compatibility is one of the biggest draws that made the x86 line of CPUs so successful.

      • waltC says:

        Believe it or not…;)…I wrote my piece before I read your comment…! We are obviously on the same page…!

        • Ieolus says:

          What your piece lacked in brevity, it made up for in historical data. Nice writeup!

          But I still ninja’d you! :)

    • jrodman says:

      IA64 had a lot more problems than lack of backwards compatibility.

      Another arch can totally become useful (and this already happened in phones, tablets, cars, satellites, etc etc), but I’m doubtful about the value of swapping arches for what we think of as PC computing. A lot of people have failed at that one already.

    • jalf says:

      It’s a mess, certainly, but that doesn’t really matter much. It’s a mess that people have invested a lot of money into.

      Starting from scratch with a sane instruction set could objectively speaking yield better, faster, more efficient chips (even if the difference wouldn’t be *huge*, some overhead and inefficiencies could be cut out).

      But first, there would be *a lot* of catching up to do. Intel and AMD have both invested so massively in developing hugely complicated and extremely advanced x86 architectures which are *fast*.

      ARM could be made just as fast, but it took x86 CPUs decades to get to where they are today. It would take a while for ARM chips to be designed to offer similar performance.

      And secondly, it would break compatibility with existing applications.

    • Low Life says:

      The same is true for ARM, though, it’s not like the architecture has remained the same since it was introduced in ’83. There are different versions (like ARMv6, ARMv8) and extensions (such as VFP).

    • bp_968 says:

      Except the older x86 instructions take up so little space on die that it’s not really statistically relevant. Say you removed all instructions from the Pentium 3 and back (it really couldn’t be done like this, but it’s just a maths example). The Pentium 3 had a transistor count of roughly 9 million. If we remove 9 million transistors from an i7 in 2012 then we have removed less than 1% of its total transistor count (1.4 billion).

      The other problem with switching to ARM right now is that plenty of applications still demand strong single-threaded performance, and Intel x86 crushes ARM in that type of performance test. The selection of ARM CPUs right now is pretty impressive though. Go look at Digi-Key at the ARM SoCs available, the MCUs, the CPU/FPGA combos, so many neat toys on that site!

  3. mattevansc3 says:

    Personally, if you are going to write a technical article you need to be more technical in what you are writing. ARM is a complex platform, definitely more complex than the x86 platform, and without highlighting those differences readers who don’t frequent more tech-orientated websites aren’t going to grasp the negatives of the ARM platform and why it’s not already used in desktops beyond power vs efficiency.

    Strictly speaking nothing with ARM is “off the shelf”; all ARM processors are custom built by manufacturers such as Apple, Nvidia, Samsung, etc. to their own specification.

    ARM architecture is very much like Linux in that there are no industry-wide standards for it; it’s why ARM is primarily in fixed-specification consumer electronics, because of the need to tailor your OS, drivers, etc. to the specific hardware. It’s why you see mobile OS performance as iOS->WP8->Android: the more ARM manufacturers you cater for, the less optimised your OS gets and you start noticing huge performance hits.

    AMD chasing the ARM market is to allow them to break into the mobile market, which Nvidia has a heavy presence in and Intel is attempting to breach by getting Android to support x86. AMD offering an architecture-agnostic platform allows them to court OEMs by offering bulk purchase of motherboards and then swapping the processor based on the OS and architecture they want to make.

    • Jeremy Laird says:

      Sorry Matt, but you are wrong about that. ARM processors come in two basic flavours – custom designed by ARM licensees or off-the-shelf.

      ARM offers numerous ready-to-go processor designs. The latest are the A53 and A57 64-bit designs. However, some companies only license the instruction set and design their own cores to suit.

      So, for instance, Nvidia is using the off-the-shelf ARM Cortex-A15 design in Tegra 4, but Apple designed its own CPU cores for the A7 chip in iPhone and iPad.

      What may be confusing you is that the SoCs or ASICs that contain the processor cores with other functionality like graphics, radios etc are custom. But the processor part may be custom or it may be off-the-shelf.

      • mattevansc3 says:

        Rereading what I wrote I can see where you are coming from.

        ARM call their CPUs processors, the SoC manufacturers call their SoCs processors (and the ARM processors CPUs) and everybody calls AMD and Intel chips processors. I was referring to the SoCs as ARM processors as they are ARM-based processors, but in hindsight I should have said ARM-based SoCs.

        Also the “off the shelf” comment was in comparison to current x86 processors (CPUs and APUs) where you can literally just buy them off the shelf, plug them in and they will work. The same is not true for ARM; ARM-based SoCs are unique and it is not a simple case of “here’s your ARM SoC, have fun”, which is what the AMD slides suggest and isn’t picked up in the article. Those slides don’t fit the ARM model nor the standard Windows/Linux desktop model, and much like Mantle it’s just noise without industry-wide support. Are we really going to see the likes of Realtek do ARM-compatible drivers for all its network chips just to accommodate AMD?

        With regard to the doom-and-gloom aspect, even if AMD going down the ARM route were a sign of things to come, both Unity and Unreal Engine 4 support ARM and x86 architecture, Microsoft have created a fairly good set of development tools that allows software to be compiled for ARM and x86, and Intel and Google are working on bringing x86 architecture to Android, so Android apps will likely be both ARM and x86 compatible. Both the PS4 and Xbox One run on x86-64 architecture, and with Sony’s finances the way they are the likelihood of them putting the R&D into a new architecture is slim.

    • SuicideKing says:

      What? No, ARM is an ISA, and the Cortex chips are the “off-the-shelf” parts in question. Also, what Jeremy says above.

  4. Geebs says:

    So… AMD looked at their strengths from recent years – losing money in a race to the bottom, and failing to keep up with a sleep-walking Intel – and decided to combine them. Hmm.

  5. waltC says:

    The cpu is not as important as its software compatibility. If AMD, or anyone else, can develop a cpu which will run 100% of the currently available x86 software (which includes all Windows games) 2x faster than the fastest x86 i7 from Intel, while using 80% or less of the current x86 Intel power requirements, and cost 80% or less of what the fastest current i7 costs–then no one’s going to object to such a cpu in the slightest…;) It would massively succeed almost overnight.

    But if the choice presented businesses (and individuals on a smaller scale) is dumping $10M of software it will cost them $20M to replace with new software for the new cpu, and another $40M to replace all of their current x86-based hardware with the new cpu-based hardware, hardware that only originally cost them $20M–for possibly a 30% IPC/performance gain at most–then you can fuggedaboudit…;) It will never happen that way.

    That’s exactly what Intel presented to the markets with Itanium x64 (IA64) when it attempted to go rdram & Itanium x64 instead of the sdram & AMD64 direction AMD had chosen. Itanium x64 instead of AMD 64 would have meant a *huge* expense for individuals and businesses alike–simply for the sake of transitioning to a new Intel-owned architecture–and accordingly it was soundly rejected as the markets opted for AMD64/sdram instead. And, the x86 emulator for EPIC that Intel had developed for Itanium slowed Itanium down so much that current x86 cpus running the same software blew them away. That’s when Intel *eventually* figured it had best move quickly with an x64 x86 cpu of its own, and it licensed x86-64 from AMD and built its Core2 series of 64-bit desktop cpus.

    The lesson learned is clear: backwards compatibility with existing software (and to an extent, hardware standards) is enormously valuable. The x86 64-bit cpus made by Intel and AMD these days are about 80% RISC and 20% CISC in design and performance, and hardly resemble the venerable 2/3/486 and early Pentium CISC cpus at all. People often refer to the “baggage” inherent in x86 without really understanding how important the software-compatibility issue is economically.

    Yea, probably every single *year* AMD and Intel could come up with brand new architectures physically incompatible with x86 Windows software, architectures that theoretically would be much faster and sip less power at the same time. However, the rub is that they would be practically useless if the giganormous existing Windows software base could not run on them. Fast doorstops and paperweights is what you’d wind up with, as even with efficient x86 software emulators that were as much as 80% compatible running on the new cpu architecture, you still would wind up with something running slower than existing x86 and being less compatible and costing more at the same time. It’s not a viable concept.

    The challenge is twofold: to make a cpu that performs better while sipping less power & while supporting the enormous base of x86 Windows software, and running it faster (processing speed) than is currently possible on the fastest of x86 cpus today. That’s why x86 will never abruptly vanish to be replaced with something brand new and completely incompatible–it will simply fade away over time while being slowly replaced *transparently* by cpu hardware 100% compatible with x86 Windows software but as “non-x86” in nature as it is possible for a cpu to be. The change won’t be abrupt and catastrophic, it will occur slowly, over time, in such a way that most people will hardly notice the eventual paradigm shift. It’s already true–as the current crop of x86 64-bit cpus has almost nothing in common with the original x86 CISC cpus from Intel. So much has changed but it has been so gradual, and backwards compatibility has been maintained, and so few have even noticed the massive changes in x86 cpu designs and architectures.

    • toxic avenger says:

      And would you look at that: out of the hysteria shines the bright light of reason ;) Nice write up, thanks for taking the time!

    • rexx.sabotage says:

      Very helpful and informative, I think I might even understand what’s going on here now :D

    • SuicideKing says:

      Excellent (and correct) write-up! Thanks.

    • frightlever says:

      Yeah, this comment should get the highlight treatment.

    • Jeremy Laird says:

      I can’t agree with all that.

      Already, some consumers have transitioned to ARM-powered devices for their main computing tasks.

      When your average punter buys something 10 years from now to browse the web and do basic productivity work, will they buy an x86 box/laptop/convertible/tablet? Or an ARM-powered device?

      To me, over time, it increasingly feels like the latter is more likely. I think people are becoming more and more attached to / aligned with mobile operating systems and the apps and functionality on their smartphones and tablets. I see no reason for most consumers to stick with x86.

      The exception is gamers. Which is the angle of the post. What happens to PC gaming if x86 no longer dominates client devices?

      • mattevansc3 says:

        Yes but they’ve only transitioned to ARM products because that is primarily what’s being offered.

        One of the main reasons why ARM is excelling in the mobile market and Intel is not is that ARM-based SoCs can offer built-in data connections, while Intel chips currently require an additional LTE chip, which increases cost and size for the mobile device. It’s why the Nokia Lumia 2520 and Surface 2 LTE are the only Windows tablets you will be able to get on a data plan, but Samsung Tabs running on ARM SoCs and Android are being bundled in mobile phone contracts left, right and centre.

      • SuicideKing says:

        You are aware that Intel’s pumping a lot of cash to make Android on x86 happen, right?

        • Jeremy Laird says:

          Intel has pumped a lot of cash into all sorts of things that have conspicuously failed to succeed, so that doesn’t reassure me of anything in particular.

          • bp_968 says:

            Each new node is becoming vastly more expensive and difficult to produce. Intel is already a few years (at worst) ahead of TSMC and GlobalFoundries. ARM can design CPUs/SoCs all they want, but Intel will probably be destroying its competitors who use ARM in price/performance very, very soon, primarily due to the fact that they still operate their own foundries. Just look at some recent articles with Nvidia complaining about the massive price hikes from the foundry to produce the GPUs on the next process “level”.

            I play with ARM a lot in the embedded world and they’re great. That said, Intel is stomping into their playground and it’s something I’m sure they’re not at all happy about.

            On a similar topic, if Intel does someday decide to drop socketed CPUs it *is* possible to remove and install BGA-soldered CPUs (like what’s in most laptops). It takes some practice, and a small investment in tools, but it is doable for a DIY gamer with the necessary resolve.

    • HisDivineOrder says:

      First, Intel didn’t have to “license” x86-64 from AMD. AMD and Intel already have a cross-licensing agreement that allows either company to use the other company’s license… forever. This is why AMD dropped 3dNow! in favor of SSE eventually and has adopted subsequent versions of SSE as they showed up.

      This was part of the agreement IBM compelled Intel to make with AMD to act as a secondary supplier for x86 chips way back in the day. The only real adjustment made to this agreement was a clause that used to be in the deal that said AMD had to fabricate its own chips to retain the x86 cross-license agreement. As part of a one billion dollar payoff…er… settlement with AMD over monopolistic behavior back in the day that AMD was helping the government (and nVidia) sue over, Intel let AMD sell its fabs (which merged with IBM’s old fabs and became GloFo) and outsource its x86 chip fabrication while also retaining their right to the x86 cross-licensing agreement. This is also why an AMD buyout is unlikely to happen. Any company that buys AMD won’t retain the x86 cross-licensing agreement when the company is owned by someone else. AMD took the payoff…er…settlement because selling off their fabs plus a billion was going to keep them from going bankrupt that year.

      As for your comparison to ARM, you (like many others apparently) forget that ARM is also starting to have its own fair share of baggage from older designs.

      Now the transition from x86 to ARM (or MIPS for that matter) will probably not happen in the gradual way of chips gaining support for both or OSes gaining support for both. It’s more likely imo you’ll see “web apps” become a bigger and bigger thing where the web becomes so much more important that eventually a software layer will stand between the apps and the OS to such a degree that apps aren’t usually tailored for a specific CPU. Then you’ll have HTML5 or a successor showing up with the ability to run on any browser (or OS-based browser like Chromebooks that support ARM or Intel chips) with the chip not mattering because the application runs in an architecture-agnostic environment.

      Much like the transition from application-based email to web-based email, you’ll find casual games (which were already at the forefront by supporting Flash) moving into a space where they support any web browser and subsequent applications going farther and farther down the line.

      Look at Microsoft’s advances with Office on the web as the future. Right now, they’re still very early, but Google showed us the way and it’s Google Docs and Office Online. This is why Intel chips and ARM chips can be on Android and not worry about compatibility with either chip architecture.

      Compatibility will be solved away from the chip level, and eventually you’ll just pick the fastest chip or the most efficient chip, and compatibility with software will be a moot point.

      Right now, it’s only a problem in Windows anyway. Microsoft’s the only company still dividing up the two chips into two different products with different application support across the same OS (Windows RT is an ARM port of Windows 8 without some of its features and with the ability to install applications manually disabled).

      In reality, the problem right now is Microsoft’s attitude toward ARM. If MS would treat ARM-ported Windows 8 (currently called RT) as a full version of Windows 8 with all the same rights and privileges, then this divide could begin to be bridged with ARM-compiled executables.

    • patstew says:

      I think there’s a much better chance of ARM making inroads into the PC market than there was for Itanium, for several reasons. Firstly, it’s already very widely used. Secondly, the rise of Linux and other platforms is making it increasingly unacceptable to write non-portable code (I know OS-portability does not imply architecture portability, but paying attention to one during development makes the other much more likely). Thirdly, an ever greater amount of software is only translated to machine code at run time (e.g. everything on the web, Java, .NET, Python etc) and could be trivially moved to any architecture with an appropriate interpreter.

    • LionsPhil says:

      Funny, I thought I’d said “yes, this” to this, yes.

      Yes, this.

  6. vodka and cookies says:

    To date there hasn’t been an ARM PC standard much like the old IBM-compatible PC standard; this is why ARM SoCs are all unique and so fragmented that you can’t just install an OS like you can on a PC.

    However, part of AMD’s plan includes such a standard for its server chip lines, which could easily transition to a home PC standard if the demand should ever arise.

    The only OS which could spur ARM presence in the home market is Google’s Chrome OS; other Linux OSes haven’t got a hope in hell.

    You’re unlikely to ever see Windows on these kinds of chips unless there is some major change at Microsoft, though they did support AMD x64, so it’s possible they might support AMD’s ARM server plans.

    The x86 market is impossible to compete in with Intel looming over it, so I understand why they want to get out.

    • mattevansc3 says:

      ARM already has a huge presence in the home market, it has a bigger presence than ChromeOS does.

      ARM powers a huge amount of consumer electronics and embedded items; they just don’t sell direct to consumers and see no need to. Also ChromeOS isn’t really that big, it’s echoing the netbook fad from about ten years ago. It only accounted for 8% of PC shipments to US retailers between Jan 2013 and Nov 2013, and during Sept 2013 to Jan 2014 only accounted for 0.2% of US and Canadian desktop web traffic (does not include mobile browsing; if it did that percentage would be even lower). This is a device that is designed to be used online, so these figures are an accurate representation of current ChromeOS usage.

      • bp_968 says:

        ARM = CPU architecture, ChromeOS = OS. You can’t really compare their market share any more than you can compare the market share of a Honda Civic and a car stereo (or any other two random items).

  7. jalf says:

    “That used to boil down to the whole ARM RISC vs x86 CISC thing where the ‘reduced’ ARM instruction set delivered awesome efficiency but modest performance and x86 broadly the opposite.”

    Who told you that? It’s nonsense.

    To the extent that CISC vs RISC has anything to do with performance, it’s the other way around. The first thing modern x86 CPUs do with incoming instructions is to translate them into a proprietary RISC format for execution. Because overall, RISC is just easier to execute efficiently. x86’s advantage has been sheer momentum: it was big enough, and had big enough companies supporting it, that billions have been spent developing fast x86 chips. If the same resources had been spent making fast ARM chips, they would be at least as fast as their x86 equivalents.

    The best thing you’ve been able to say about x86 as a CISC instruction set is that the overhead of decoding it is pretty much constant, and so where once a major part of the chip had to be dedicated to just decoding the ridiculously complicated instructions, today, it’s only a tiny corner of the chip that’s wasted on this. But it’s still waste that could be avoided by just using a sane RISC architecture in the first place.

    The only problem with running ARM CPUs on a PC is backwards compatibility. Performance (and Microsoft’s weird blunders with Windows RT) has nothing to do with it.

    • Malcolm says:

      I wonder how feasible (or desirable) it would be to have multiple instruction set translators on the same chip, i.e. one for x86 and one for ARM, given all the work is being done in a proprietary instruction set anyway (I know Intel do this; I don’t know precisely what AMD does in this regard on their recent processors). Or indeed, if the ARM core is suitable for the actual processing, just having an x86 translator and a native ARM core, thereby giving you x86 compatibility when you need it.

      I’m not a hardware engineer (can you tell?) so this may be a completely stupid idea :)

      My understanding was that most of the problem with porting Windows to ARM was not the instruction set, more the hodgepodge of data bus standards that tend to appear on ARM SoCs rather than just having PCIe to rely on.

  8. Wulfram says:

    Hopefully I won’t need to understand what on earth any of that means when I buy my next computer, because if I do I’m in trouble.

    • frightlever says:

      Unless Intel stops making x86 chips anytime soon (SPOILER: they won’t), you shouldn’t have a thing to worry about. As the article and waltC point out above, enterprise is the big spender and they demand backwards compatibility for years.

  9. joa says:

    I don’t buy that moving games over to ARM would be a terribly messy affair. Assuming that the APIs remain compatible and the game’s source code isn’t littered with inline assembly, it should be unproblematic. That’s why people bother with all the abstraction in the first place.

    • jalf says:

      …and assuming that game developers get around to recompiling their games for ARM. Which might happen for a few high-profile games. But unless it leads to additional income for the publisher, they’re not going to do it for 99% of all existing games out there.

      • varangian says:

        Perhaps this is another good reason to hope Valve can make SteamOS a goer. As Linux has supported x86 and ARM (and many other architectures we don’t care about for the purposes of this discussion) for a good while, producing versions of a SteamOS game that run on either architecture shouldn’t be too tricky. Probably less difficult than producing versions for the PS4 vs. the Xbone, for instance.

        • LionsPhil says:

          Changing the underlying OS does nothing to change the architecture portability of the code.

          I would not expect many games at all to be as easy to port to ARM as just changing your compiler options. Massive Hairy Problem #1: proprietary middleware libraries. (And, as noted, Massive Hairy Problem #0: nobody who can do anything about it cares.)

    • Premium User Badge Malcolm says:

      Endianness is often the major problem when porting otherwise “portable” code between architectures. It seems that ARM supports both big- and little-endian operation, so it might not be such an issue in this case.

  10. Gargenville says:

    ARM-based Chromebooks (all the Samsungs and HP’s Chromebook 11, among others) are the most common implementation of ARM CPUs in a traditional PC(ish) system and they’re both dramatically slower and draw more power than their Haswell Celeron counterparts while offering no real pricing incentive (or at least none that’s passed on to the consumer).

  11. mbp says:

    I have thought a lot about the end of x86 over the last few years because I have a large shelf of x86 games dating back to the 1990s. I don’t really care what replaces x86 as long as I can still play the original Deus Ex, Half-Life and all my other old classics. If this has to be done via emulation, so be it.

  12. Wixard says:

    When it comes to games, this console generation is already set in x86 stone. By extension, I think PC games will remain running the old Windows standby for at least as far out as my crystal ball sees.

    AMD DOES need new x86 designs though. There’s a chance that the next next-gen may end up running x86 too (or rather x87/MMX/x86-64/SSE etc.). If AMD wants a chance at securing those designs, they’ll need to come up with new chips that are at least competitive with Intel’s.

    Assuming Sony and MS continue another console generation, there’s a high chance they might want backwards compatibility with current consoles.

    The days of pushing Intel forward are long gone though. Too expensive for too little return.

  13. tehfish says:

    Correct me if I’m wrong*, but is there any reason why, at some point in the future, a combined ARM/x86 multicore CPU could not be produced?

    AMD’s new Beema/Mullins core seems quite close (x86 cores with an ARM security processor on-die). I can’t imagine it being that huge a stretch.

    Performance wouldn’t be perfect, and a significant amount of software messing about would be needed, but it seems actually feasible… And having both cores in hardware fixes any emulation hilarity.

    *please do, I’m quite interested in the more techie view on this :D

    • marach says:

      http://www.arm.com/products/processors/technologies/biglittleprocessing.php Yes, AMD are apparently working on versions where the “big” core is x86.

    • Geebs says:

      The technical answer is: why bother? If you’re not concerned with backwards compatibility, the major OSes all run on either architecture already: Windows/Windows RT, Linux (with or without Android), BSD. x86 chips are becoming more power-efficient but not really getting any faster; ARM is getting faster but not any more power-efficient. Both are increasing the number of cores in order to seem “faster” to the consumer, and neither is getting much better compiler support for concurrency, so multithreading hasn’t actually caught on that well.

      In this situation, combining both x86 and ARM on one chip would give you a much larger, more expensive and necessarily less power-efficient piece of silicon with more parts to go wrong, which wouldn’t be socket-compatible with older designs and wouldn’t really solve any important problems of backwards compatibility. So, why bother?

    • SuicideKing says:

      No, it’s not “quite close”. The ARM core there is a dedicated security processor running its own firmware; the two aren’t running the same programs in the same environment.

      The problem is, programs are broken down into instructions specific to an ISA, and if you’re trying to run something simultaneously on two different architectures, programs have to be compiled for both, in the same binary. Plus you’d have to migrate running code between two different architectures at runtime.

      Even on big.LITTLE configs the process isn’t easy. Making it inter-ISA would be very difficult. Plus, there’s the political side of it too, why would Intel bother? And can AMD pull it off? Not now.

      So, no, won’t happen.

      What will likely happen is that CPUs and GPUs may merge at the core level, though that’s still a few years off.

  14. SuicideKing says:

    What?! WHAT?!

    Jeremy, from what AMD has shown so far, the indicators are that this design is mainly for servers and networking products, not consumers. Pin-compatible doesn’t mean user-replaceable: these could well be soldered on.

    Next, both K12 and the x86 core are likely to be Jaguar/Puma successors, from what I’ve been reading.

    Then, both cores are due in 2016. Intel isn’t going to be twiddling its thumbs, you know.

    Finally, AMD isn’t in a position to do squat. It’s a minority player in the PC market, and this will be too little too late. They’d have to have equivalent performance at a substantially reduced price for this to make a difference to the current picture.

    Seriously, this article was just…not cool, bro. ARM really won’t gain more market penetration in the PC space than Intel will in the smartphone space, which it’s already pulling out of.

    ARM can’t touch x86 in PCs, and x86 can’t touch ARM in smartphones. Tablets are where they’ll battle it out, and with tablet sales slowing, Intel’s playing a long game of attrition.

    EDIT: It may be the end of x86 for AMD, but not of x86 in general. Christ, why would you ever write such a thing? The only player in the ARM space that comes close to Intel’s Core u-arch on performance is Apple’s Cyclone. That’s it. The rest of ARM competes with Atom.

    Might as well get Bay Trail-D if you want an ARM experience, at least you’ll have software compatibility. :/

    EDIT 2: You’ve also not considered the foundry side of the equation. AMD depends on TSMC and GloFo; they have to be on time, and their manufacturing must be free of defects. TSMC’s roadmap has been delayed so far, and they’ll hopefully be able to switch to FinFETs in 2016. GloFo so far has a very bad track record, and has usually been the major factor in AMD’s recent CPU failures (except maybe Bulldozer).

    Intel will be sitting pretty on 14nm’s second (or third) revision by then, or even 10nm if they manage to stick to their roadmaps.

    I mean why for the love of god would anyone want to switch to an AMD ARM/x86 platform, both of which are untested and demand skepticism (due to AMD’s track record)? WHY would you write this article? WHY.

  15. Tei says:

    Maybe this will allow running ARM-compiled drivers, and emulating ARM software, without a speed loss.

  16. bp_968 says:

    Actually, we REALLY want to move to a CPU-agnostic world and, even better, an open-API and open-OS world. Linux and OpenGL will be a world of improvement over being enslaved to DirectX and the whims of Microsoft. I’m super excited about the direction PC gaming is going. We have tons of great indie games coming out, and more and more games are cross-platform (and it’s easier than ever to make them cross-platform). It’s easier than ever to get mods for games to extend their life. It’s a great time to be a PC gamer.

  17. Fallward says:

    I actually couldn’t finish reading the article because of these highly annoying advertisements which now have sound. Instead, I have skipped to the bottom of the page in order to whine about it. Whine complete.

  18. bad guy says:

    O.G. Intel; too hardcore. AMD competes against other CPU playaz.

  19. Premium User Badge Malibu Stacey says:

    Dear Jeremy Laird (if that is indeed your real name),

    could you do those of us RPS regulars who enjoy your tech posts a favour & create yourself a tag so they’re much easier to find please? Something as simple as “hardware” or “technology” would suffice.

    Cheers muchly =)

  20. albertino says:

    So if you have cores in your crown jewels, is that a good thing? I bloody hope so ‘cos I’ve just had mine installed.

  21. BoMbY says:

    PC gaming is not dead; it is rising. Sales of OEM PCs are going down, but gamers were mostly never their customers: only non-gaming and office/desktop sales are falling (to some degree). If you could run the same programs/games on ARM it might work for office and gaming PCs alike, but I don’t see that happening. I wouldn’t bet on AMD with this ARM thing…

  22. OriginOfBob says:

    “What began as a conflict over the transfer of consciousness from flesh to machines has escalated into a war which has decimated a million worlds. The Core and Arm have all but exhausted the resources of a galaxy in their struggle for domination.”