Week in Tech: AMD’s new single-chip console killer

By Jeremy Laird on January 16th, 2014 at 9:00 pm.

Kaveri. Heterogeneous computing. Mantle. What? I just want a decent CPU and graphics card, please. Don’t know about you, but feels to me like you need a master’s in integrated circuit design to keep up with PC processor and graphics tech at the moment. AMD has just outed Kaveri, its latest APU or CPU-GPU thingie. What with all this heterogeneous computing stuff, the promise of Mantle and an integrated graphics core that’s not far off next-gen-console performance parity, Kaveri pulls together the tangled web that is AMD’s current strategy in a single chip and puts a different spin on what’s important in PC processors. It’s also bloody confusing. Is Kaveri any good, what does it all mean, should you care, can you even keep up? Answers of sorts I shall provide. Meanwhile, a quick note on Dell and its alleged 30Hz 4K clanger.

AMD Kaveri, then. Let’s muscle through the headline specs and features really quickly, just so we all understand what we’re dealing with. Then have a stab at explaining what it all means. Some of this is slightly heavy going, but if you haven’t already got your head round things like HSA and Mantle, now’s the time.

Kaveri is the codename for AMD’s latest APU or Accelerated Processing Unit. So, that’s CPU and graphics on a single chip. Of course, all Intel’s mainstream PC processors have on-chip graphics, too, right down to the poverty-spec models. So simply having on-chip graphics maketh not a revolution in PC hardware.

Significantly, however, most of Kaveri is new – or at least new for an APU. The CPU cores are brand spanking new and constitute a debut for AMD’s Steamroller architecture. Then there’s the graphics, which is based on GCN or Graphics Core Next.

The Troy McClure of computing
You may remember GCN from such consoles as the PlayStation 4 and Xbox One. It also stars in all of AMD’s latest Radeon graphics cards for PC. It’s one graphics architecture to rule them all, essentially.

As for speeds and feeds, we’re talking four CPU cores (well, ish) and up to 512 GCN stream processors. The latter is a pretty healthy number when you consider the shiny new Xbox One has 768 of same. Yup, integrated graphics is edging towards console gaming parity.

AMD’s Kaveri: Giving next-gen consoles a kicking. Kinda

Anyway, those new Steamroller CPU cores are claimed to be 20 per cent quicker per thread, per clock. Harrah. Sadly, however, Kaveri is clocked a bit lower than previous APUs – down from a peak of 4.4GHz to 4GHz – so the net result is more like 10 per cent. Haroo.
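
If you want to check my maths, here’s the back-of-envelope version (illustrative numbers only, not a benchmark):

```python
# Back-of-envelope sum: ~20 per cent more work per clock, but peak clock drops
# from 4.4GHz to 4.0GHz. Illustrative arithmetic only, not a benchmark.
old_perf = 4.4 * 1.00   # previous APU: clock speed x relative per-clock throughput
new_perf = 4.0 * 1.20   # Kaveri: lower clock, higher IPC
print(f"net single-thread gain: {(new_perf / old_perf - 1) * 100:.0f}%")  # ~9%
```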

As for spotting Kaveri with a view to actually buying the bloody thing, you need to look out for an AMD Ax-7xxx chip, for instance the A10-7700K or A10-7850K. The previous generation has Ax-6xxx branding. Not exactly memorable, but there you go. Oh, and you’ll need a motherboard with an FM2+ socket. Kaveri breaks compatibility with the older FM2 socket, sadly.

Hello HSA
So, that’s the CPU and GPU covered. Thing is, in some ways it’s how those bits work together in Kaveri that is, in theory, the really interesting part. AMD is calling it the first true HSA or Heterogeneous System Architecture chip for PCs.

What on earth does that mean? In really simple terms, it’s a chip with different internal components, each designed to handle different types of workload. But here’s the critical bit: that happens seamlessly and essentially invisibly to the end user. You run an app or a game, threads are generated and they sail through the most suitable part of the chip.

That ought to mean, for instance, almost any kind of floating point operation running on the massively parallel GPU rather than the CPU. Sounds a lot like general-purpose processing on the GPU, you might think. And there is some overlap.
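
If you fancy a concrete flavour of the kind of work that suits the GPU, here’s a minimal GPGPU sketch of my own using OpenCL via the pyopencl library. Note the explicit copying of data to and from the GPU; that shuffling is exactly the overhead HSA is meant to do away with.

```python
import numpy as np
import pyopencl as cl

# A million floating point additions: trivially parallel, ideal fodder for the GPU's stream processors.
a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)   # explicit copy to GPU memory
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, """
__kernel void add(__global const float *a, __global const float *b, __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)   # one work item per element, run in parallel

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)                   # explicit copy back to the CPU side
```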

Errr…

But with Kaveri, AMD has taken it to the next technical level. As for what that actually involves, well, stuff like giving both CPU and GPU full and shared visibility of the entire memory space and interweaving different instruction types as they queue for execution. To be honest, the details probably don’t matter. Just understand that there’s more to Kaveri than simply sticking CPU and graphics on one chip.
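
One way to picture why that shared memory space matters: offloading work to the GPU normally means paying a copy tax, which rules out lots of smaller jobs. A toy model, with numbers I’ve made up purely for illustration:

```python
# Toy model of offload economics. All numbers are made up for illustration.
COPY_GBPS = 10.0  # assumed effective copy bandwidth when CPU and GPU don't share memory

def worth_offloading(data_gb, cpu_ms, gpu_ms, shared_memory):
    """Is the GPU still faster once any copying to and from its memory is counted?"""
    copy_ms = 0.0 if shared_memory else (data_gb / COPY_GBPS) * 1000 * 2  # there and back
    return gpu_ms + copy_ms < cpu_ms

# A smallish job: 100MB of data, 10ms on the CPU, 2ms on the GPU.
print(worth_offloading(0.1, cpu_ms=10, gpu_ms=2, shared_memory=False))  # False: the copies eat the win
print(worth_offloading(0.1, cpu_ms=10, gpu_ms=2, shared_memory=True))   # True: just hand over a pointer
```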

The snag to all this is the requirement for software support. As a general rule, Kaveri’s supposed HSA awesomeness won’t make the slightest bit of difference to existing games and apps. They’ll all need to be tweaked.

More about Mantle
You could say the same about the other big Kaveri-relevant AMD technology, namely the Mantle API or software interface. We’ve touched on Mantle previously and it remains a tricksy concept to fully pin down. But there are a few key elements to absorb.

Firstly, it’s designed to help game developers code specifically for AMD graphics and simply make games run faster. Part of that means making it easier to cook up cross-platform games for consoles and the PC.

Ummm…

Think about it like this. The consoles now have the same graphics architecture as AMD PC graphics. So wouldn’t it make a lot of sense if all the effort developers put in to get games humming on consoles paid off for the PC, too?

Mantle’s final major feature involves reducing CPU overhead and improving multi-threading in games. It’s complicated stuff, again, but the take-home message involves the CPU cost of draw calls issued through Microsoft’s DirectX Direct3D API.

Put simply, with Mantle you can crank out an order of magnitude more draw calls and maintain playable frame rates. That’s the claim. So, that might mean thousands instead of hundreds of characters being animated on screen. In other words, forget about the performance boost a really quick new CPU might serve up. You’d need a chip 10 times faster to give you what Mantle enables on your existing CPU.
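
To picture the draw call claim, here’s a crude toy model. The per-call costs are invented for illustration; the real figures depend on the engine, the driver and your CPU:

```python
# Toy model: every draw call costs the CPU a fixed slice of time to validate and submit,
# so the number of distinct things you can draw per frame is bounded by the CPU, not the GPU.
FRAME_BUDGET_MS = 16.7        # one frame at 60fps
D3D_CALL_COST_MS = 0.05       # hypothetical per-call CPU cost via DirectX
MANTLE_CALL_COST_MS = 0.005   # hypothetical cost if submission is ten times cheaper

def max_calls_per_frame(cost_ms, budget_ms=FRAME_BUDGET_MS):
    return int(budget_ms / cost_ms)

print(max_calls_per_frame(D3D_CALL_COST_MS))     # ~334 draw calls per frame
print(max_calls_per_frame(MANTLE_CALL_COST_MS))  # ~3340: hundreds become thousands
```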

Well, so the marketing men say. To be fair, early demos of Mantle have looked extremely promising. But just like HSA, the devil will be in the software development. Will enough developers bother to support Mantle to make AMD hardware the default option?

Will any of this actually matter?
In the end, it comes down to a hunch. Personally, I find it hard to believe there won’t be some benefit to having AMD PC graphics when running cross-platform games. What I don’t fully understand is how much overlap there is between developing for consoles and developing for Mantle. If it’s mostly duplication, I doubt Mantle will take off. If much of the work is shared, things will get very interesting indeed.

As for HSA and heterogeneous computing in general, it’s been a long time coming. In the long run, it will have a dramatic impact. I reckon that’s a given. But I suspect we won’t truly begin to feel the effects for a few more chip cycles.

Knee bone connected to the thigh bone?

Oh, as a final note, it’s worth pointing out that Kaveri proves AMD’s entire experiment with the Bulldozer architecture has been a failure. Kaveri’s new Steamroller cores are improved, but not enough to really close the gap to Intel. In pure CPU benchmarks, the fastest quad-core model performs about the same as a similarly-clocked dual-core chip from Intel. Not great.

I’m actually left wondering what Kaveri would look like with some updated Phenom cores with better power gating and maybe a few tweaks to improve performance. Faster than Steamroller, I wager. Despite that, AMD’s hardware in general and Kaveri specifically look promising. If HSA and Mantle come off at least in part, AMD hardware is going to look very attractive over the next few years.

I’ll admit predicting how all this will shake out is a very tough call. If I was buying right now, I’d ignore Kaveri or anything AMD and still go for an Intel CPU. But I’d almost definitely pick AMD graphics. They’re very competitive as things are. And something spectacular might just happen if Mantle takes off.

4K clanger
Anywho, what about that end note re Dell and the 30Hz 4K clanger? Dell is planning to wheel out a new cut-price 4K monitor. The Dell P2815Q is a 28-inch item that will ship for just $699, which at first sounded bargainous-going-on-bonkers.

But then the realisation that it was based on TN panel tech kicked in and it made a bit more sense. As it happens, Lenovo, Asus and Philips (at least; there may be others) have also announced sub-$1,000 (sorry, no UK pricing on any of these at the moment) 28-inch TN ultra-HD screens and it seems likely they all use the same panel.

Anyway, the real killer isn’t the TN tech. I’m open minded about that: TN has come on a lot in the last few years and if this new panel is the step forward it’s claimed to be, it might be very decent indeed.

No, the real problem is the apparent revelation that the P2815Q is capped to 30Hz running at its full 3,840 by 2,160 pixel resolution. Yuck. I still think there’s scope for this being some kind of misquote or cock up concerning the refresh rate running on HDMI 1.4 (only DisplayPort supports that resolution at 60Hz currently). But if it’s true, it’s a very surprising limitation given that Dell is pitching this screen at least in part as a gaming tool.
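
For what it’s worth, the arithmetic backs up the HDMI 1.4 theory. Assuming roughly 20 per cent blanking overhead on top of the visible pixels, a quick sketch:

```python
# Why HDMI 1.4 runs out of puff at 3,840 by 2,160: the pixel clock it would need at 60Hz
# blows past the spec's roughly 340MHz ceiling. Blanking overhead approximated at 20 per cent.
HDMI_1_4_MAX_MHZ = 340

def pixel_clock_mhz(width, height, refresh_hz, blanking=1.2):
    return width * height * refresh_hz * blanking / 1e6

for hz in (30, 60):
    clk = pixel_clock_mhz(3840, 2160, hz)
    verdict = "fits" if clk <= HDMI_1_4_MAX_MHZ else "too fast"
    print(f"{hz}Hz needs roughly {clk:.0f}MHz -> {verdict} over HDMI 1.4")
```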

I’ve had a dig around the other brands who’ve committed to sell similar screens and as yet I can’t find any 100 per cent reliable info re refresh on any of them.

Anyway, this curious episode acts as a reminder of what is actually important in gaming. Is it millions upon millions of pixels? Or is it refresh? In practice, it’s obviously both. But if you offered me 2,560 by 1,600 at 120Hz plus G-Sync or 3,840 by 2,160 at 60Hz (forget 30Hz, it’s a total deal breaker), it would be a very tough choice. I’d probably take the former for gaming and the latter for everything else.

As it happens, I’ve spent a bit more time larking about with 4K gaming in the last few weeks and I’ve come away pretty impressed by how the current top end GPUs cope with 4K. You have to be a bit reasonable about some of the IQ settings. But it’s actually pretty viable with a single GPU already. I’m just not sure it’s actually better than a lower (but still very high) resolution rendered at super-smooth frame rates.


73 Comments »

  1. DrManhatten says:

    Well you get a decent GPU but a lousy CPU I am afraid.

    • phylum sinter says:

      “As it happens, I’ve spent a bit more time larking about with 4K gaming in the last few weeks and I’ve come away pretty impressed by how the current top end GPUs cope with 4K. You have to be a bit reasonable about some of the IQ settings. But it’s actually pretty viable with a single GPU already. I’m just not sure it’s actually better than a lower (but still very high) resolution rendered at super-smooth frame rates.” //article quote

      It really depends on the type of game for me – for a racer/sim (of which i like flying and grid style racers too) those i really crave the detail, and sacrifice resolution to about 720p for some games to run on my 5850, but for fps and strategy games i tend to turn quality down and resolution up. That said, i’m probably going to be upgrading in the next 6 months, probably to a 7970 or r9 card. Til then, my Asus VE278’s 1920×1080 max is good enough for me – though i can see wanting 4k this year too.

    • Kein says:

      I think you misplaced words CPU and GPU, should be vice versa in your sentence.

  2. db1331 says:

    Console killer? How do you kill something that’s already DOA?

    • Billzkrieg says:

      Yea, those 3 million XB1s and 4 million PS4s sold really aren’t encouraging. It’s obvious the next-gen consoles are flops and console gaming is dead.

      /s

      • geldonyetich says:

        Get back to us when they redeem themselves as not being prematurely-released paper weights. Until then, all the sales figures really establish is that there’s a lot of suckers out there.

    • macc says:

      LIKE for this comment.

  3. jrodman says:

    How is 35 bits of addressing the “entire memory space”? Where did they put the other 29 bits?

    • nameroc says:

      Please note that Intel’s newest offerings can only address 32GB of memory as well – Intel and AMD both don’t use the full 64 bits for memory addressing. That’s overkill for the foreseeable future. The following link has more information: http://en.wikipedia.org/wiki/X86_64#Virtual_address_space_details

      • jrodman says:

        I feel the term “memory space” has usually and nearly uniformly for the last couple decades, referred to the complete virtual addressing space. However, even if we restrict ourselves to what sorts of addresses are put on the bus, both implementations of amd64 use 48bit addressing these days.

        A 32GB limit is just a limitation on the current consumer-grade chips. I can assure you we have boxes running intel hardware with 64GB of ram, and some of our customers have boxes with closer to 100GB of ram.

        • Universal Quitter says:

          Not working in the industry, my mind reels at the idea of needing 100GB of RAM. What are your customers doing, running detailed simulations of the universe?

          • vecordae says:

            Most of your RAM usage in gaming comes from the art assets. Higher-fidelity art assets require a matching increase in available RAM if one uses the current computer architecture. Higher fidelity art assets require more memory bandwidth in order to load as quickly as lower-fidelity assets. It all builds upon itself rather quickly.

            On the other hand, as SSDs become cheaper, faster, and more reliable, we may see entry-level computing do away with dedicated RAM hardware entirely.

          • FriendlyFire says:

            My desktop has 16GB of RAM and I get capped doing some of the research I’m currently working on. I’m not doing anything particularly advanced or big, either, so I’m not in the slightest bit surprised that you can fill up 100GB+ of RAM. If you make heavy use of caching (for instance, if whatever you’re caching has a very large CPU cost), it makes a lot of sense to invest in enough RAM so that everything fits in it as opposed to tanking hard when it starts going on disks, even SSDs.

          • jrodman says:

            We make a search engine of sorts. People are poking around through terabytes or petabytes of data (usually distributed across many many nodes). Many (tens to hundreds) of concurrent requests are running at once. The partial answers to these many requests in aggregate may occupy 10-30GB depending upon what is being requested. The rest is effectively just a cache, holding things like bloom filters and so on.

            100GB isn’t strictly required, they could accomplish their goals in perhaps 20GB. However the 100GB may give them 2x-3x the performance and it’s worth it to them.

          • welmog says:

            Simulations of the universe (and not very detailed ones at that) use a whole lot more than 100GB of RAM: http://en.wikipedia.org/wiki/Millennium_simulation#Millennium_XXL

          • Gap Gen says:

            Nah, you can do a simulation of the universe with 100MB or thereabouts. It’s just going to be a small, shitty one.

            But yeah, how much memory you use really depends on how much stuff you’re willing to put in. Note that the Millennium simulation doesn’t even include any visible matter. And even then the dynamic range of the universe is huge – even getting individual stars in simulations of a single galaxy is a big ask for modern computers. Taking it to the absurd level, if you’re going to simulate every atom in the universe you’re going to need a supercomputer bigger than the universe.

          • mukuste says:

            All kinds of scientific computing need tons of memory. You needn’t go as huge as the entire universe; just a cubic centimeter of certain porous materials will give you more data than you can hope to crunch on most systems.

          • Subsuperhuman says:

            I work on an electron microscope when I have the time. It spits out image files that are >10GB in size. If you start running scripts over image sets then the ~100GB of RAM we have on the control/access machine really comes in handy.

          • phuzz says:

            You want lots of memory for running virtual machines. A single server with (for example) 8 CPU cores and 64GB of RAM will be able to run maybe a dozen or two VMs that each would previously have had to run on a separate machine.
            Modern CPUs are so multithreaded that CPU is almost never an issue (most servers sit at less than 10% most of the time), it’s RAM and storage that are the bottlenecks. RAM is the easy one to solve, just throw in another few 16GB sticks; designing storage that can be used by many VMs at once and is fast, cheap and high capacity is, well, how you earn your pay.

        • timzania says:

          Well at this time the virtual address space for x64 implementations is actually 48 bits. The pointers are 64 bits in size but the upper 16 aren’t usable yet, even for swap/etc. This will be easy to increase when we really need to.

          In this case I imagine they are referring to real addresses though. Not sure that the GPU works in virtual space. I’m not an expert on this but it seems like it would be inefficient for all GPU memory accesses to need translation from the MMU over on the CPU. So the CPU probably just hands off real addresses for the GPU to work with.

          • joa says:

            Hmm good point — if the GPU uses only physical addresses and the CPU uses virtual addresses, then this memory sharing will be next to useless, because due to something called “paging”, what may seem like a contiguous bit of memory to the CPU will actually be scattered around physical memory from the GPU’s point of view, so there will be no benefit as the GPU will often have to seek around in memory to find things.

        • nameroc says:

          Don’t get me wrong, I agree there! The slides do seem to be saying both the CPU and the GPU portion can access all of the memory space they can see – which is a reasonable amount compared to other consumer grade chips in the market, and which is what will be used for gaming. Unless your computer is multitasking for gaming, research and manipulating large amounts of data, that much RAM is overkill. :)

  4. Darth Grabass says:

    “Console killer”…except not really.

    • Jeremy Laird says:

      Well, a little artistic licence in the title with a more nuanced discussion in the text.

      Fair to say the graphics part of Kaveri is remarkably close to the XBox One given the latter is brand new…

  5. Moraven says:

    So it’s like a sidegrade to the SoC that is in the PS4/One but for PC?

  6. Detocroix says:

    Kaveri: buddy / friend in Finnish. Nice choice for name :D

  7. Blackseraph says:

    Well that’s interesting, I wonder if there are a few Finnish people making this, based on the name.

  8. Spacewalk says:

    “Thank you, come again”.

  9. Borsook says:

    I use current generation AMD APU as my gaming rig and I couldn’t be happier, best choice I ever made. Really good performance, incredibly cheap compared to Intel CPU and a GPU.

    • mukuste says:

      I’m also thinking that this thing would make for a fantastic, cheap Steam Machine… hopefully some of the hardware vendors will provide something like that.

  10. huldu says:

    Performance wise the “graphic” gpu is pretty horrible, but really you shouldn’t expect more out of it. I don’t understand why they’re wasting time and effort in trying to combine the gpu and cpu. It’s only good for laptops and for idiots that have no idea what a computer even is. Imagine some random guy buying this crap and thinking it has to be good… /sadface.

    • Borsook says:

      I bought one, I’m not a random guy, been building PCs for 20 years now. And it is really good. My middle of the line AMD APU runs all the games I have at 30-40 FPS (which is what i need) and was incredibly cost effective. And believe it or not it matters to many people.

    • dorn says:

      It makes more sense if you think of it the other way actually. The CPU has become so tiny that we shove it onto the GPU. Removing the bus latency outweighs the loss of some space on the die. Intel will do the same thing.

      When they work out the kinks we’ll stop buying CPU’s.

    • Slazia says:

      People have a lot of different needs and resources. If it does the job, why not get one?

    • mukuste says:

      Gotta love how everyone who has different needs from a system is an “idiot”.

  11. ThTa says:

    Makes me wonder what’d happen if Intel decided to give their GPUs as much die space as AMD does for these.

    Purely out of curiosity, though. I much prefer their focus on improving architecture and process, since it serves me much better. (My desktop is going to have a dedicated GPU anyway, so I’d rather they don’t waste too much space on something I won’t use, and their approach really benefits portable gear.)

  12. morbiusnl says:

    okay, so the monitor is UHD not 4k, thanks for clarifying.

    • Jeremy Laird says:

      It’s both, really. There’s no consequential difference between the two and 4K is really an umbrella term for displays with roughly 4,000 horizontal pixels. Whether you’ve got slightly fewer than or slightly more than 4,000 pixels is trivial.

      Really the only difference between 3840 by 2160 and 4096 by 2160 is the aspect ratio and that works in the former’s favour as it’s the same 16:9 ratio as most HDTV content. 4096 by 2160 is roughly 17:9 which is a bit wider than HDTV but taller than most feature films.

      I know 4K cinema projectors are 4096 by 2160, but I can’t remember seeing a feature film in recent memory that wasn’t much, much wider than that. Well, apart from IMAX, which isn’t 17:9 either, so there’s no content to match.

      The difference in image data / detail is too puny to worry about.

      • bit_crusherrr says:

        DCI 4k is the cinemascope 4k iirc.

        • tossrStu says:

          “DCI 4K” sounds like a British remake of Robocop.

          • Shadowcat says:

            I shall now stop reading this comments thread — I’m confident that this is the high point.

    • Bent Wooden Spoon says:

      Unless you work in film or TV you won’t be seeing “4K” at all. As Jeremy touched on, 4K in the true sense (DCI compliant) requires a 17:9 aspect ratio just like you get in the cinema. It’s a standardised technical term that’s been pilfered for use as a marketing buzzword, which is why we’ve suddenly jumped to referring to stuff by horizontal pixel count instead of vertical.

      Unless panel manufacturers start farting around with the now standardised aspect ratio of their screens, UHD is all you’re going to get.

  13. Dr I am a Doctor says:

    It won’t be in the MBA so who cares really

    • Geebs says:

      This; although I’m seriously leaning in the direction of a Retina MacBook Pro next time, I’m doing an unusually large amount of text editing at the moment and I’m completely sick of the eyestrain.

  14. RProxyOnly says:

    So as people who wish to see the PC flourish as opposed to becoming something else, say even more console like, if we ALL agree, right now, not to buy AMD. we can kill the console market stone dead..

    No.. More.. AMD.. c’mon, chant with me.

    On a different, but no less serious note, mantle is bad for everyone.. a proprietary SOFTWARE language.. FUUUU… proprietary hardware is bad enough, but this has the potential to fracture the games market worse than anything we’ve seen yet… Do ANY of you really fancy the possibility of buying a game only for it NOT to run on your GPU of choice?

    Nvidia will NEVER pay to licence Mantle, or even use it if it were free.

    • Borsook says:

      The past shows us clearly that if Mantle takes off Nvidia will pay for it. The same as Intel chips use AMD64 instructions. Nobody can afford to be left behind. Normally Nvidia’s position could guarantee that Mantle fails and they don’t have to consider buying the license but the console situation can change that and force them to bite the bullet.

      • RProxyOnly says:

        No they won’t.. they wouldn’t pay for a licence and then pay to redesign their chips, TBH they wouldn’t have to pay for a licence, AMD have said they can have it if they want to use it.. but they wouldn’t. Mantle isn’t compatible with Nvidia’s chip architecture. Nvidia would have to dump their own existing direction/research and then reverse engineer. Not happening.

        • Bent Wooden Spoon says:

          AMD have already said it isn’t tied to GCN. Check the first point on the second slide down:

          http://wccftech.com/amd-mantle-api-require-gcn-work-nvidia-graphic-cards/

          As with everything around Mantle, how effectively it’ll work remains to be seen, but saying “Mantle isn’t compatible with Nvidia’s chip architecture” is a bit premature given that goes against what AMD themselves have already said.

          • RProxyOnly says:

            Well I wouldn’t have said it if I hadn’t read it, but I’ll have to concede the matter considering I can’t find the article and you’ve backed up your point.

            So given that, my only come back is good.. possibly… dependent upon Mantle performance on Nvidia HW, which is anyone’s guess.

            On a side note, instead of trying to make out that I was simply wrong, it’s a pity you didn’t mention that the article actually stated that the original plans WERE to make it proprietary and they’ve changed their mind, so rather than being wrong, simply because I was wrong, I am wrong because my info is outdated.

            However even my being ‘wrong’ may not be the case, given what the article writer actually said.

            “Of course heres the thing we are not sure about. Mantle was clearly designed with GCN in mind, so when AMD talks about other vendors being able to utilize Mantle does that mean that Mantle will work on their current Architecture? Or will the actual architecture of rival vendors (Nvidia) be need to be modified to support Mantle? If its the later then this is a very subtle move from AMD’s side pushing towards a Red Future. Another thing we dont understand is what was up with all the apparent hints that Mantle will be GCN only. Unless AMD suddenly decided to make Mantle, Multi-Vendor (Unlikely) AMD had been planning this all along yet all information previously pointed towards a GCN Only Mantle API.”

  15. dangel says:

    There’s one snag – ddr3. Currently this limits bandwidth – amd needs ddr4.

    That said I love my two apu systems – and they’re great value for money (htpc/office box). My main rig is i7 though.

  16. Shooop says:

    Mantle is so vastly overhyped it’s not funny.

    AMD’s own PowerPoints don’t show very impressive performance gains over Direct X (about 10%), and because it’s a low-level API it’s more difficult to code for. A LOT more difficult.

    Hell, nVidia even already has their own low-level API, it’s called NVAPI. The reason you don’t hear about it is because they’ve accepted the standards of Direct X and OpenGL. It’s just easier for developers to learn one or two APIs everyone’s agreed to use instead of learning everyone’s proprietary code and writing a game that can run stable on all of them.

    Mantle is a boon only for consoles who need to squeeze out as much performance out of their hardware as possible to stay relevant for three or four years. PC developers will flick it aside unless they’ve got lots of free time on their hands because of the increased difficulty in coding for it.

    Mantle is nothing more than a buzzword to throw around to pretend they can stay competitive in the PC video card market without actually putting any real effort into surpassing their rival by making more powerful hardware.

    The worst news about this offensively bad joke AMD is telling is it’ll probably hurt OpenGL’s chances of being taken more seriously again by taking away the attention it deserves.

    • FriendlyFire says:

      And one thing hasn’t been made clear, as far as I know: can consoles even use Mantle?

      Both Sony and Microsoft have designed their own specific APIs which are closer to the metal, just like Mantle, so I’d be surprised if they’d also allow AMD’s own tech on there. They want control over that.

      If Mantle’s not working on consoles, then that entirely eliminates the supposed multiplatform advantage.

      • AbigailBuccaneer says:

        I believe AMD/Microsoft/Sony have all said they have no intention of seeing Mantle on current-generation consoles, but I’ve not heard anything about the next generation.

        Theoretically, it’s a good fit for consoles – because it’s already so similar to the low-level graphics APIs that console developers already have available to them. Whether it offers any advantage over existing APIs is unclear (to me, at least).

    • Josh W says:

      And even though it’s a little unlikely for new market entrants to come in at this stage in the game, we should be on the side of making OpenGL as kickass as possible, so games will last longer and be usable on a greater variety of hardware.

  17. crinkles esq. says:

    What was the name of that APU in the late 90s..it was CPU, GPU, audio, and networking on one chip. Performance was good, but not anything stellar, and I don’t think this will be any different. The future is using the GPU for parallel processing tasks like Apple is doing with the Mac Pro, utilizing the OpenCL language.

  19. mattevansc3 says:

    There is a load of hyperbole coming from AMD as of late, especially around Mantle.

    Claims about easy porting and closer parity with consoles because of Mantle and the GCN architecture are quite fictitious.

    Firstly Mantle is PC only. While the XboxOne and PS4 have the hardware neither is supporting Mantle as both already have direct to die APIs so the gains you’d see on a PC don’t apply to consoles.

    Secondly if using the same architecture was all that was needed for porting games Macs wouldn’t need boot camp to play games and Metro Last Light wouldn’t look worse on SteamOS. As derided as it is Windows8 has better cross compatibility between ARM and X86/X64 architectures than most dev tools have between OSX and Linux on the X86 architecture.

    And finally lets not beat around the bush here, the reason PC gamers don’t get ports is because we don’t want ports, we want superior games at a lower price. Look at Dark Souls, Namco gave us a straight up console port, same resolution, same menus, same everything and PC gamers were vocal in calling it a lazy port. They wanted higher resolutions, menus that catered for keyboard and mouse, more graphical options, etc and all for the PC RRP of £30. Us PC gamers are generally more hassle than we are worth for console game makers.

    • ChromeBallz says:

      Except that Dark Souls turned out to be hugely successful on the PC, to the point where it’s now being developed with its own proper version. Personally i sunk a lot of time into that game and i haven’t regretted it for a second.

      There’s more than enough overlap to allow for ports, to or from the PC. Games developed for both platforms are also obviously welcome, see Skyrim for example – That game could even be fully modded on PC but not consoles.

  20. james___uk says:

    Impeccable timing RPS! I get this chip in the post tomorrow, doing a new build. Naysayers or not I’m gonna find out what this setup will be like for myself. I plan to get a GPU too but I’m very curious as to how the CPU’s GPU performs.

  21. Colej_uk says:

    I can imagine this kinda thing eventually being popular in steam boxes- cheap with decent performance. Then stream the ultra-intensive games from your powerhouse PC if needed.

    • frightlever says:

      Sounds like the next iteration of the chip might exceed the current consoles and PC gaming was starting to get held back by the need to maintain compatibility with the 360 so you could actually see a Steambox that was pretty cheap and able to play most games at 1080P, perhaps without all the bells and whistles.

      I’m not all that fussed on high fidelity PC gaming and I suspect most PC gamers aren’t either. You’re going to see more PC gaming enthusiasts on a PC gaming blog who do want the highest possible resolution and framerates but you can’t judge people’s attitude to alcohol by looking round an AA meeting. As I mention below, I’m thinking about building a new PC but I aim to make my builds smaller and quieter, while also more powerful than the previous build but I’ve no interest in SLI, over-clocking or whatever. I want a tidy system with a two hundred quid GPU that’ll do me for 2 years.

  22. GoliathBro says:

    Yo Jeremy.

    I’m absolutely loving your articles.

    That’s all.

    • frightlever says:

      Agreed. I’d be lost on current PC tech without them, and I’m getting that itch to build a new PC…

  23. CookPassBabtridge says:

    At what point will kaveri become sentient and launch nuclear missiles against humanity though? I need to know if it’s worth booking a holiday

  24. bstard says:

    All those pictures are confusing, but I understand killing consoles. It increases your chance on 72 homosexual maidens.

  25. otto_ says:

    Color me excited.

    I’ll wait until February when they’ll release the Dual Graphics drivers and, depending on that, build an ITX 7850 / R7 240 Crossfire machine…

  26. Ovno says:

    As far as 30hz goes, as long as the render is asynchronous (so that update code runs as fast as possible and render code is only called when the previous present/flip has completed, without blocking the logic) it will make absolutely no difference to the responsiveness of games and as the eye can only see 24hz anyway it will make no difference to the look of it, so it all comes down to how well any particular games engine is written.

    If I could guarantee that all games I was buying were using said asynchronous render architecture then I would have absolutely no qualms about buying one.

    • Skiddywinks says:

      The eye only seeing 24fps is totally fictional. Tests have shown that people can notice tangible differences in smoothness up to something like 120fps, although obviously they are unable to say what fps they are looking at absolutely. The human eye and brain don’t work the way technology works so attributing one to the other is ignorant at best and intentionally misleading at worst.

  27. P.Funk says:

    “The Troy McClure of computing”

    Best header of the year so far.

  28. Josh W says:

    The thing that most interests me about this is getting all those stream processors for game logic. If this starts to become the norm, all kinds of simulation stuff could become feasible. The dwarf-fortress-alikes basically would love this.