Week in Tech: AMD’s Mostly Not-New Graphics

By Jeremy Laird on October 10th, 2013 at 9:00 pm.


All of this has happened before. And all of it will happen again. AMD has just launched its latest family of ‘new’ graphics boards and I feel like Number Six has been whispering portentous, spacey waffle in my ear. The spec lists for the new boards are shot through with galactic levels of déjà vu. But before you get completely bummed out by what mostly amounts to a major bout of rebranding from AMD, there’s a wildcard in the form of this weird new thing called Mantle. It might – just might – give AMD GPUs, including every Radeon HD 7000 already in existence, an unassailable performance advantage in the bulk of new games over the next few years.

Those ‘new’ AMD chips in full
I’ll keep this bit as snappy as possible. On paper, we have an entirely new family of graphics cards accompanied by new branding. Out goes Radeon HD, in comes Radeon R7 and Radeon R9. From here on, R7 means cheaper mainstream graphics cards, while R9 means pricier performance boards.

As things stand, five GPUs have been announced, two R7s and a trio of R9s. But here’s the thing. Only one of these GPUs is actually new. The rest are rebranded versions of the existing Radeon HD 7000 Series family.

So, it’s dusty old chips cynically rebranded? That’s certainly one way to look at it. But first, let’s get the boring bit out of the way and identify exactly what we’re dealing with here.

There’s a low-end R7 250 we don’t need to worry about. The relevant stuff starts with the R7 260X. It’s based on the Bonaire GPU as found in the Radeon HD 7790. Next up is the R9 270X, which is Pitcairn reborn, otherwise known as the Radeon HD 7870.

Then there’s the R9 280X. Here we’re talking Tahiti and thus the HD 7970. Throughout, the detailed specs appear identical to the outgoing boards’: clocks, shader counts, the works, it all looks very familiar.
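
To recap, the mapping so far:

    R7 260X = Bonaire  (Radeon HD 7790)
    R9 270X = Pitcairn (Radeon HD 7870)
    R9 280X = Tahiti   (Radeon HD 7970)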

7970 begat 280X. Actually, more accurately they’re one and the same.

The only one of the above I’ve had a play with is the 280X and I can confirm it’s a dead ringer for the 7970. Move along, nothing to see.

But wait, there is something new
That leaves us with the Radeon R9 290X. Finally, we have something that’s actually new, a GPU known as Hawaii. The nasty NDA hasn’t lifted on that, so full details aren’t yet in the public domain. But the overwhelming weight of opinion says 2,816 shaders (up from 2,048 in AMD’s previous best), a meaty 512-bit memory bus, a doubling of ROPs compared with the 7970 (or 280X, if you like) and, again, a rough doubling up on tessellation hardware. The 290X is going to be fugging quick, make no mistake.

What the 290X is not is a major step forward in architecture. It’s basically a GCN chip (that’s AMD’s current graphics architecture) with a few detailed tweaks. So, essentially the same core architecture as the 7000 Series and, of course, the two new games consoles. This may well turn out to be a very good thing for reasons we’ll come to in a moment. But even the 290X isn’t a truly new chip. Think of it as a 7970 with some added horsepower.

All of that raises an immediate question. Why? It really boils down to two things. First, there’s no viable new production node available for AMD to use. The TSMC 20nm node apparently isn’t quite ready for beefy graphics chips, so the 290X remains a 28nm effort, as do the rest.

Second, with both the XBone and PS4 being based on GCN, there’s a strong argument for keeping AMD’s PC chips closely aligned with those consoles. Hold that thought.

The sordid matter of money
What this all comes down to, when taking a view on how attractive these ‘new’ boards are, is pricing. With a new flagship GPU and a bunch of rebrands, you’d think the 290X enters up top and everything else falls down a peg in the price list. If that were the case, I’d be cheering the new chips to the rafters.

It’s not like existing games make exhaustive use of GCN’s features. So I’m happy to see it stick around for a while, especially as it’s the target architecture for developers courtesy of the consoles.

AMD carrying over its GCN clobber may not be a complete bummer when you consider the console connection.

So, what I was really hoping for with the new GPUs was a return to AMD’s fleeting strategy of maximum gaming grunt at around £200. That’s what AMD gave us with the old Radeon HD 4870 Series and that’s what it promised to keep delivering.

The 4870 gave you 80-plus per cent of the performance of Nvidia’s best for miles less money. But AMD ditched that promise within a generation and since then performance GPUs have been getting pricier and pricier. It’s now frankly out of control, with Nvidia charging an idiotic £800 for Titans.

As I write this, the typical UK entry price for the 280X looks like roughly £230 to £240, which is at best the same as 7970s have been going for of late, and therefore an utter fail in terms of bringing existing tech down in price. This makes me very unhappy.

As for the 290X, with any luck pricing will undercut both the Nvidia Titan (not hard to achieve) and the Titan-derived GTX 780. At a guess, £400 to £450. Then again, if the 290X turns out to be really quick, it will probably be more expensive.

Of course, it’s worth pointing out that Nvidia did much the same thing as regards rebranding with the GTX 700 series earlier this year. In fact, it didn’t add even one new GPU. It just reconfigured and rebranded. But it did give us more performance for less money, even if its pricing towards the high end remains pretty offensive.

In the end, I can only hope that UK pricing for the 280X adjusts down a bit and in the meantime, point you guys in the direction of the cheap 7970s that are available for as long as stocks last. Scan, for instance, has an Asus 7970 clocked at 1,010MHz for £220. If I had to buy a board today, that’s where my money would go. Remember, it’s the same chip as the ‘new’ 280X. You’re not buying into a defunct architecture.

The console connection
The final ray of hope in all this is that Mantle thing I mentioned earlier. There’s a lot of debate over what exactly Mantle amounts to. AMD is being a bit cryptic. And frankly I don’t want to get bogged down in the details. But what it might be is this: the low-level graphics API pinched from the XBone and brought to the PC.

AMD’s Mantle: Low-level API. High concept stuff.

If it really is just that, the impact could be pretty dramatic. As you guys know, consoles tend to punch above their weight as regards graphics performance. And that’s because developers write low-level code that hooks directly into the hardware. It’s worth the effort for developers because the consoles are a static target that sticks around for years.

On PC, coding tends to be done at a higher, more generic level that works across the broader spectrum of GPUs inside PCs. But with GCN being in both consoles and then sticking around in PCs, you have the tantalising prospect of PC hardware operating with similar efficiency and overheads to consoles. Put another way, the very same low-level code that runs fast on consoles will run fast on PCs, too.

In theory, that could give AMD chips a major advantage with any game that’s developed in parallel on PC and console. One really intriguing possibility is that Mantle might even help AMD’s CPUs. Part of what Mantle is claimed to do is massively reduce the CPU overhead from draw calls. If so, we could be in a situation where CPU performance becomes much less critical for in-game frame rates when running AMD GPUs.
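
To make the draw-call point concrete, here’s a minimal sketch in C++ using plain OpenGL (nothing Mantle-specific, since that API isn’t public yet; it assumes an OpenGL 3.1+ context and the GLEW loader, and the function names are mine). The CPU cost of rendering is largely per-draw-call, so collapsing thousands of calls into one is where the savings come from:

    #include <GL/glew.h>  // assumes a GL 3.1+ context is already current

    // Naive: one draw call per object. Every call is a round trip into
    // the driver, which validates state and builds GPU commands on the
    // CPU. 10,000 objects means 10,000 trips, whatever the GPU's speed.
    void drawNaive(GLuint vao, GLsizei indexCount, int objectCount) {
        glBindVertexArray(vao);
        for (int i = 0; i < objectCount; ++i) {
            // per-object uniforms would be set here
            glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
        }
    }

    // Batched: one instanced call submits the same geometry objectCount
    // times with a single trip through the driver.
    void drawInstanced(GLuint vao, GLsizei indexCount, int objectCount) {
        glBindVertexArray(vao);
        glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                                nullptr, objectCount);
    }

What Mantle is claimed to do is shrink the per-call cost itself, rather than just reduce the number of calls, which is why the draw-call figures being bandied about are so striking.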

I’ll stress that, for now, this is all pretty speculative. And there’s already talk of some things Nvidia can do to counter all this. But we’ll begin to get an idea of just what Mantle will deliver when DICE releases a Mantle-enabled build of Battlefield 4 in December. If it runs like poo off the proverbial, then things will be looking very interesting for AMD.

Battlefield 4 will be the first chance to get the measure of Mantle.

Personally, I hope Mantle delivers on the hype. Not because I want AMD to come out on top. But because there’s a chance it will force a market-wide price adjustment for GPUs. If AMD chips are suddenly faster in every new console port, odds are Nvidia will have to respond with more competitive pricing. And that means we’ll all get more performance for less money whatever graphics card we buy.

__________________

51 Comments »

  1. GamesInquirer says:

    I’m glad, because it hopefully all means a longer supported life for my 7970. I don’t have the funds for the highest-end upgrades all the time like others do. Even this was a stretch, never mind a Titan!

    Not that I expect that radical a difference with MANTLE, nor do I think it needs to provide that to be a successful endeavor, as every bit helps. In my case the Battlefield 4 beta performs fine without MANTLE, outside its own lag and stuttering issues. Not with the solid (vsynced) 60fps of Battlefield 3, but still a more than playable, comfortably-above-30-40fps result in heated action at max settings, nothing like how BF3 ran on my older rig. Not bad considering it’s still a beta and I’m not using the beta drivers for it. I could always reduce some less meaningful settings too (the game in motion is too aliased even with its AA options maxed anyway, so I could disable those, for example).

    They still have the 290X for the ludicrous end of the market; people with that kind of investment in the hobby have always been able to buy incredibly powerful things, so it’s nice to see that those of us who spend less on gaming hardware may be getting extra mileage out of it this time too, even though it likely comes out of necessity rather than intent, as they may have simply needed more time to get their new architecture stabilized and cheaper before using it for lower-end cards. We’ll see.

    I also hope 4K becomes a trend. If the highest-end machines push the possibility of 4K gaming, that could mean our lower-end systems get by longer at the commonly used 1080p, as games would perhaps not increase their visual complexity that much so the new cards can run them at 4K.

    With the GPU front potentially covered for a while, hopefully next-gen console ports won’t be sloppy enough to require 8 CPU cores or whatever, despite the much lower per-core efficiency of the upcoming consoles; that could mean I’d need at least a new motherboard and CPU, as I’m on 1155.

    Edit: I have to say it’s also nice to get some benefit out of exclusive tech like that for once. PhysX quite annoyed me in certain games, as it’s impossible to get it to run well on AMD, so you end up without chunks of a game’s visual flair, even in older games like Borderlands 2, which I recently replayed. While I don’t like closed technologies like that, at least this one doesn’t necessarily screw over the other brand, which will run as it always has in DirectX or whatever developers choose; it just potentially provides a boost for this one. PhysX will probably remain an issue with stuff like Witcher 3’s wolf fur. Yeah, TressFX is a similar approach (I’m not sure how it performs on Nvidia, however), yet being far from a complete physics solution it’s not nearly as important. It would be nice if it actually was that and became interchangeable with PhysX, so that the same games run with PhysX on Nvidia and TressFX on AMD hardware. Kind of like how there’s that HBAO/HDAO deal, though some developers don’t even bother. But TressFX seems like a failed and lost cause at this point sadly, considering the hit just for one character’s hair. Not that I have any brand loyalty anyway; my previous card was a GTX 285, and I still didn’t benefit from PhysX then as my system was too low-end with an E8500. I’d probably have needed a second GTX 285 for it.

    • Clavus says:

      Just a note: they released an update for the BF4 beta earlier today that massively improved performance for a lot of people. My HD 7950 manages 60 fps at high/ultra settings.

      • GamesInquirer says:

        I’ll try it later, I didn’t play today actually, thanks for the heads up!

        Edit: not a solid 60, but I don’t think it goes below 40 in 64-player Conquest. Hard to tell without Afterburner’s counter working, as the game’s fps overlay is a little slow, but it’s nothing that makes me feel bad with all that’s going off.

    • vahnn says:

      Same here, I hope these cards hold strong for some time to come. If my 7970 is still holding up well next year, I’ll just grab a second one. I won’t hold my breath, but it’s a nice thought.

    • Dux Ducis Hodiernus says:

      The 780 is pretty much the Titan at a reasonable price; there’s, like, what, a 5-10% difference in performance at the absolute most. And you can easily overclock the card to go above that. And the Titan is almost twice the price of the 780, so yeah. I’m tired of hearing all about “Titan this” and “Titan that” when there’s so much more bang for your buck in a still very, very high-end card like the 780.

  2. arccos says:

    Interesting. I’ve been using ATI/AMD for a few generations now, but was thinking about switching to Intel/Nvidia for the next one because that seemed to be where more of the speed improvements were going, and AMD seemed to be in a bit of financial trouble. If AMD can pull off at least doubling the performance on their systems, people may start jumping ship in the other direction.

    • borkbork says:

      You mean doubling performance on the CPU side? I wouldn’t hold my breath, since Mantle would only affect game performance. As it stands right now, you don’t need a high-end CPU for gaming, as it won’t really bottleneck performance until you start getting into insane GPU builds like triple Titan SLI, 2x 690, 2x 7990, etc.

      If you’re talking GPUs, doubling performance in one generation would be an insanely (I’d say impossibly) tall order, for either AMD or Nvidia. =P

      • snv says:

        I sadly had to learn that with some game engines the CPU remains the bottleneck, even if it is running at only, say, 20% on each core. This can be misleading.
        Take On Helicopters and ARMA 2, for example, rely more on a fast-responding CPU and not so much on throughput. I haven’t even tried ARMA 3 because of that.

        • borkbork says:

          Huh, interesting.. although I can see a game like ARMA being an exception to a rule like that. Wonder if it has anything to do with those games being relatively heavy on physics computations..

        • Grey Poupon says:

          If a single thread jumps from core to core (as they normally do, to even out heat dissipation), Windows will show 25% load on every core of a quad-core. If you were to disable this, you’d unsurprisingly see one core at 100%. This is something people often don’t take into account.
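
          To make the point concrete, here’s a minimal Windows sketch (C++; the busy loop is just a stand-in for a single-threaded game loop, and none of this comes from any actual engine). Pin the thread to one core and Task Manager shows that core pegged at 100%, instead of ~25% smeared across four:

              #include <windows.h>

              // Stand-in for a single-threaded game loop that saturates one core.
              DWORD WINAPI busyWork(LPVOID) {
                  volatile unsigned long long counter = 0;
                  for (;;) ++counter;
                  return 0;  // never reached
              }

              int main() {
                  HANDLE worker = CreateThread(nullptr, 0, busyWork, nullptr, 0, nullptr);
                  SetThreadAffinityMask(worker, 1);  // bit 0: restrict to logical core 0
                  Sleep(10000);  // watch per-core load in Task Manager for 10 seconds
                  return 0;
              }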

          I haven’t really looked into it, but since ArmA has a lot of stuff on the screen thanks to the insane view distances, with any luck it’s capping out the draw calls of DX. Mantle should be able to do 8 times more and thus help a lot with the performance. Even if this isn’t the case, Mantle takes a bit of workload off of the CPU so it’d help a bit even if the API itself wouldn’t be better for ArmA than DX, though this is highly unlikely since a lower level API is pretty much always better. Doubt we’ll get the level of difference that we had with DX and Glide though.

          • xf11 says:

            So, for you to understand: when an object moves on screen, it’s the processor that changes the three values in a Vector3(x,y,z) object, and THEN the GPU draws that on screen. So when you have a server with 100,000 objects, it’s the CPU that processes each one of them every frame, THEN sends that to the GPU. I can draw a literal 200,000,000-polygon object right now in my 3ds Max and get 100 fps, but it can’t handle 100,000 objects because the sucker still works on one core.
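
            In C++ terms, that’s roughly this (a rough sketch; the types are invented for illustration). The GPU never sees a frame until the CPU has walked every object for that frame, and on one core this loop is the frame-rate cap:

                #include <vector>

                struct Vector3 { float x, y, z; };
                struct Object  { Vector3 position, velocity; };

                // Runs on the CPU once per frame, before any draw call is issued.
                // With 100,000 objects on a single core, this loop sets the cap.
                void updateAll(std::vector<Object>& objects, float dt) {
                    for (auto& o : objects) {
                        o.position.x += o.velocity.x * dt;
                        o.position.y += o.velocity.y * dt;
                        o.position.z += o.velocity.z * dt;
                    }
                }
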
            Why do I even try to explain this, lol.

        • Zenicetus says:

          It’s not just Arma. There are other game types that are CPU-limited, like hardcore flight sims running not just world graphics but multiple, concurrent flight models. X-Plane limits the amount of AI air traffic in your vicinity based on the number of CPU cores and how fast they are. Each AI plane is running the exact same flight model as the player’s plane. I’m not sure, but I think Rise of Flight does that too, for the enemy planes.

          AAA strategy titles can be very heavy on the CPU. Civ 5 is a little better optimized now, but on first release it would slow to a crawl in the late game between turns, crunching all the data for the various factions. There was nothing going on visually for the GPU; it was all CPU data crunching that would bring even high-end computers to a crawl. Total War: Rome 2 is doing that now, although it’s still being optimized. Rome 2 is also probably CPU-throttled in the tactical battles, because it’s modeling all those arrow trajectories and sword hits for thousands of virtual soldiers.

          An FPS game like Crysis may put a heavy load on the graphics card, but there isn’t much to stress the CPU compared to these flight sims and strategy games.

          • Alfius says:

            I’m a pretty hardcore ARMA 2 player looking to upgrade for 3 – I was planning to pull the trigger on a new machine towards the tail end of this year. Now I’m thinking of holding off a while.

            Any other ARMA-ists out there with views on building a rig optimised for CPU intensive gaming? I’m quite the amateur with this sort of thing but I expect that I’ll be wanting a fair chunk of fast RAM? I do get noticeable slowdown in large servers running complicated scenarios (CTI/Warfare) and particularly on Cherno. Anyone else with similar experiences?

  3. thenotsofunnyguy says:

    Starting an article with a BSG reference. Classy.

    Most gamers tend to underestimate the impact that the software driving the hardware has on performance. So I don’t really care that AMD is only fine-tuning the hardware if they can provide greater performance through software changes. What matters is the end result.

    What I find more interesting is that AMD wants to make OpenGL extensions to eliminate the draw-call bottleneck, which is basically what Mantle does; the only reason to use Mantle instead of OpenGL would be the consoles.

  4. Sami H says:

    Mantle picture: “Enables 9X more draw calls”. Compared to what? D3D9 on the PC vs. 360 was a clear win for the 360, but later versions of DX have closed this gap dramatically by rejigging (technical term, honest) the API and driver architecture to cut down on the number of low-level system calls required.

    And then there’s OpenGL, which blows both out of the water. I seem to remember an nVidia presentation referring to the new batch/instancing API in DX10/11 and saying that while the features provided a huge boon for those APIs, when they were added to OGL there was hardly any speed-up because OGL had so little overhead to begin with.

    Considering the success NVIDIA had with the proprietary CUDA API/tools, I’m not surprised to see AMD try and do their own. Saddened (we really don’t need more proprietary APIs), but not surprised.

  5. Wedge says:

    Yeah, the pricing is $50 over what it should be (dunno ’bout all your EU/UK prices). With the lack of a 7950 equivalent, and the 270X coming in around where you can now easily get a still-superior 7950 (and above a basically equivalent 7870), none of the “new” cards makes any sense.

  6. TechnicalBen says:

    “Only one of these GPUs is actually new. The rest are rebranded versions of the existing Radeon HD 7000 Series family.”
    Die. Die horribly, AMD. In fire and smoke and acid. Die like you and Nvidia and Intel and all the rest of you rebranding, “new”-model-lying, price-fixing scum should.

    I moved to AMD because of the GeForce 5000 fiasco. I went ATI because of value and honest branding (I can tell you the spec of an ATI card by the first, second and third digits, as they relate to generation, board and spec… now I can tell you jack all).

    I might be exaggerating here. But I’m fed up with these games the companies play. :P

    • Sharlie Shaplin says:

      They have been sandbagging with their lower-end chips for quite a while. They have barely made any improvements since the 5770.

      • ancienttyrael says:

        That’s actually wrong, as the 7770 has vast improvements in tessellation over the 5770, and it’s a good board for getting into 1080p gaming.
        AMD’s plan in this whole mess is pretty interesting. It’s obvious that they have a limited supply of silicon coming in if there are only two new boards in the series. So far they’ve thrown in tons of support for the old boards, as in reality they’re pretty damn good and have held up against Nvidia for two or three generations (granted, with the renames). The Mantle thing is a good idea, as it makes it easy for AAA devs to code across consoles and PC, and if they can pull off the low-level API on PC it’s going to look great.

        • Sharlie Shaplin says:

          Technical details aside, if you already have a 5770 the actual difference you will see on screen in most games is negligible. Upgrading is not worth the price, imo. In terms of actual performance the 5770 was a big step forward, which I imagine is why they didn’t release a replacement for it for a long time, and even then they just rebranded the 5770 as the 6770 and upgraded the HDMI port.

  7. borkbork says:

    Hmm, from the interviews I’ve read AMD claims that the new TrueAudio feature is a hardware block built directly into the GPU’s die, the idea being that the additional audio processing won’t detract from the GPU’s performance. If this is true, and it’s a feature 280X has and the 7970 doesn’t, it might be a compelling argument to wait for the 280X… if that kind of thing matters to you.

    • Skyhigh says:

      If I recall correctly, the 280X won’t have the TrueAudio feature; only the 290X, the 290 and one of the lower-end cards of the R200 series will.
      It seems, though, that all Graphics Core Next cards will support Mantle later on. I think this includes the 7970.

    • Jeremy Laird says:

      TrueAudio is a feature of the 290X (a new GPU) and the 260X. The 260X is based on Bonaire, one of the newer 7000 series chips. It always had TrueAudio inside; it just wasn’t enabled until the 260X.

      The 280X, 270X and 250 do not have TrueAudio.

  8. realitysconcierge says:

    Edit: Ehhh this was an ill thought out comment.

  9. luieburger says:

    I can’t tell if AMD is fighting tooth and nail with its last gasp of breath, or if it’s turning the tide in its favor. I’ve switched between nVidia and AMD several times during each of my hardware iterations since the 90s. I keep picking whoever has the most bang for the buck. I’m holding on to my 5830 and 550 Ti, squeezing every drop out of them until after SteamOS, Steam Controllers, and Mantle are out in the wild. Then, and only then, will I make my next hardware decision.

    I’m rooting for AMD because I really want them to put the pressure on nVidia and Intel. More competition is always good for gamers. nVidia will get my $$$ if they can boast superb performance with OpenGL extensions on SteamOS/Linux. AMD will get my $$$ if they put Mantle with some decent games on SteamOS/Linux (or make their own OpenGL extensions) and beat nVidia on performance/dollar.

  10. Carra says:

    I’m curious about Mantle. I hope it will prod Nvidia into action. And maybe even use that fancy, expensive graphics card in my PC to the fullest instead of seeing 7 year old console hardware deliver something similar.

    • GamesInquirer says:

      That’s up to game developers, not Nvidia/AMD. If they want a game to run on consoles because of the sales potential, they will obviously make a game that can run on them and then provide incremental cosmetic improvements for the PC version, rather than spend money completely remaking the game in a way that takes full advantage of the hardware. At best you get something like Battlefield, which gives the PC version twice the player count and map size, but clearly that’s only viable for certain games. Of course, even PC-exclusive games don’t necessarily take full advantage of such hardware, not only because they want to support lower-end systems, much like multi-platform games support consoles, but also because of the available budget.

      But hey, the upcoming new consoles mean an upgrade of that low end, so with the cosmetic improvements on top of that I think you’ll be impressed by some games for a while, until PC hardware moves well beyond what those systems can do again in a couple of years (yes, yes, it’s already better, but hardly by the same margin as with the current consoles; the supported feature set is much more similar for now).

  11. neolith says:

    God, I hate it when a rebranding of GPUs happens and I have to re-learn everything. Every time this happens, I am more likely to get my next card from the OTHER company.

  12. Retro says:

    Some random thoughts…
    - Splendid, one more API to code for!
    - Let’s assume Nvidia follows suit: even better, two new APIs!
    - Given my experience with Eyefinity, it will not work like it’s supposed to.
    - Any guesses on how well AMD’s driver team will be able to support ISVs on the PC, with two consoles to serve as well? Again, given the Eyefinity experience…
    - Let’s assume AMD goes under (and there are no more cards being sold, even used ones): what happens to all the ‘Mantle’ games? At least if they’d been coded against the DX (or OGL) API, one could just swap cards.
    - I really don’t see how any PC gamer (and even worse, a Tech Editor) can see this as a Good Thing.

    • Jeremy Laird says:

      You misunderstand what Mantle might be. There’s going to be a low-level API for the consoles. The idea here isn’t adding yet another one. It’s to allow developers to apply the work done for the consoles directly on the PC. So it’s not another API to code for.

      Moreover, Nvidia can’t do anything directly analogous to this, since its hardware is not in either of the new consoles. There are things Nvidia can do re OpenGL extensions that might be similar in terms of adding performance. But they won’t be equivalent in terms of allowing devs to optimise once and hit both console and PC with those optimisations.

      Only AMD will be able to offer that. But as I said above, this remains speculative. It may not turn out to be quite as simple as all that.

      Also, ‘Mantle’ games will still be PC compatible and run on any DX-compliant GPU. They’ll just run faster (or it’s claimed they’ll run faster) on AMD GCN chips.

      • thenotsofunnyguy says:

        I don’t think he misunderstood a thing. It’s just another API to draw stuff. And yes, it is used on consoles but that doesn’t change anything.

        But it’s no problem in my opinion, because there is OpenGL. When you start to write a new engine, you use OpenGL as the renderer but put an abstraction layer on top of it, so you can simply add other APIs later to support or optimize different platforms.
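
        That abstraction might look something like this bare-bones C++ sketch (the class and method names are invented for illustration): game code talks only to the interface, so an OpenGL backend can ship first and a Mantle or D3D backend can be slotted in later without touching the game itself.

            // Game code depends only on this interface, never on a specific API.
            class Renderer {
            public:
                virtual ~Renderer() = default;
                virtual void drawMesh(int meshId) = 0;
            };

            class GLRenderer : public Renderer {
            public:
                void drawMesh(int meshId) override { /* glDraw... calls */ }
            };

            class MantleRenderer : public Renderer {
            public:
                void drawMesh(int meshId) override { /* hypothetical Mantle calls */ }
            };

            // Backends are swappable; the frame loop never changes.
            void renderFrame(Renderer& r) { r.drawMesh(42); }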

        There is one thing that bugs me, however. You could always have programmed directly on the hardware on PC (at least on Linux, and I guess Mac too). The DRM has ioctls that everyone could use, but apparently nobody wants to, so they just use the OpenGL API in Mesa or the proprietary drivers. Turns out it’s too hard to develop directly on the hardware and they want abstraction. Now Mantle has less abstraction and everyone goes crazy. I still believe that OpenGL with the right extensions can get the same performance as Mantle. At least an AMD dev thinks so, too.

        • Deadly Sinner says:

          No, it being used on consoles does change everything. Most big games are released on both consoles and PC. So the idea is that the bulk of the work for most games has already been done.

      • Fataleer says:

        No. I think he just makes a valid point.
        When Voodoo vanished because of subpar performance in the then next gen, people had the choice of using software rendering (which was ugly even then, except in the first Carmageddon) or nothing. It took some time to get OpenGlide, which would translate Voodoo’s Glide to OpenGL. Now, the question connected to that would be: is AMD planning on licensing this to, say, Nvidia?

        While low-level code on PC sounds in theory like a great idea, what will it be good for in two or three generations’ time, when the tech moves on? Unless it is a low-level API which could, in turn, be portable. If it is not going to be, or if it is going to be protected (as GTA V on PC is) by a lot of money from vested interests, I predict the same future for it as for PhysX in terms of actual implementation.

    • LionsPhil says:

      This.

      Abstraction is a good thing. The alternative is DOS. Do you remember DOS? Do you remember having to choose your sound card from a list, and hoping that “Sound Blaster Compatible” worked otherwise?

      You do not want software which is written “directly to the hardware”.

      • Deadly Sinner says:

        I don’t know why you bring this up because every game that will use Mantle will also use DirectX. And every PC exclusive that has to choose one will obviously choose the one that doesn’t cut out half of their potential customers. So I think it’s safe to say that it will be nothing like DOS.

      • Low Life says:

        Generalizing is a bad thing.

        If the benefits of writing to the hardware outweigh the negatives, then I sure as hell want software written to the hardware. In this case, the benefit is that the particular piece of software runs faster if the user has a video card with a specific architecture. Negatives approach zero, because the game will still fall back to the higher level API when needed.

    • GamesInquirer says:

      Word is it’s not really an all-new API, but rather similar to what developers already have to use for the next-generation systems from Microsoft and Sony. If that’s true then it could lead to quite a high adoption rate among multi-platform developers, as it wouldn’t necessarily require much work. Hopefully that’s the case, as of course it has little chance of taking off otherwise. More performance out of the same hardware (of a given brand, yes, but it doesn’t necessarily negatively affect the other brand); I don’t see how any PC gamer can consider that bad. You might wanna attack PhysX before MANTLE if that’s your fear. It could also help Linux and Mac support, mind; I doubt the API will remain Windows-only. But maybe you even think that’s another bad thing, as it’s another OS. Oh well. There’s nothing here about developers abandoning other APIs, mind, just utilizing this, when the situation allows, to boost performance for some.

      Besides, the “what if X goes under” argument works for pretty much everything. What if Microsoft goes under and we end up with PCs on a different OS? Then our DirectX games won’t work, new hardware won’t have drivers for Windows to keep using it, and so on. It’s kind of childish. With that argument at hand, even though it’s not exactly unrealistic, one should never do pretty much anything, ever.

  13. RIDEBIRD says:

    The 290 non-X seems to have been rather forgotten here. Perhaps because of the dumb naming convention?

    If that card manages to land at around 350 europounds and, as it would seem from the on-paper specs and somewhat sketchy leaked graphs, outperforms or equals the 780, you have a card that’s very much worth the price.

    I’ll be getting myself a 290 or 290X depending on cost in December, as that should be a good amount of time to judge whether AMD’s drivers are absolute shit this time around as well, or whether they’ve finally learned to do stuff properly. I’ve read about the 7900 series having decent drivers of late, which seems to be a rather huge improvement, so that bodes well, but I still see AMD lagging behind with drivers for almost every big or semi-big game release.

    Would be nice to go AMD again as it is a lot more card for cash, though that’s pretty pointless if the drivers don’t work as they should.

  14. czerro says:

    Why is everyone surprised by this? Everyone knew this was a rebrand of existing cards to fall in line with the branding scheme of the upcoming Hawaii chips (290/X) and beyond. These aren’t new cards and were never pitched as such. Hawaii is the NEW chip. The NDA should be up at the end of next week and we can see what AMD’s NEW cards can do. Everyone is running the 280X through its paces and coming away saying, “It performs exactly like a 7970, but the price cut is impressive!” Congrats! It is exactly a rebranded 7970 to fit the new SKU scheme, as has been reported for MONTHS from AMD’s own mouth. It almost feels like people are trying to deflate the 290/X release next week (the ACTUAL new cards) by vaguely confusing people who aren’t following the news.

    • Strabo says:

      Nobody is surprised. Everybody is disappointed because the cards are neither noticeably faster than the old versions (which would have been doable by running the more mature chips at higher clocks) nor priced a tier down, as is usually the case (so the 7870 GHz at 250 GBP would become the 280X at 200 GBP); instead they’re priced even higher for what is at best the same performance.

      • czerro says:

        There has indeed been a price cut. The 7970s were reduced to like 350-360 a month ago as a lead-in to the 290X, and the 280X is priced 50 dollars cheaper than that. The remaining 7970s were then priced to match, as they are essentially the same card… a 7970/280X at 299 is a steal even though it’s not a new card. That’s the real story here and the relevant information for the buyer. With the rebrand came the price drop, and there is nothing really close to matching it on price/performance at the moment.

  15. ThatsaMoray says:

    Nvidia’s Titan pricing is anything but idiotic. It is deliberately unaffordable; a completely unnecessary yet aspirational gonk for the masses to hanker after, and for the more-money-than-sense crowd to preen themselves over.

    • Fataleer says:

      Also, yields on that production line are low, so they have to somehow recoup the money for wasted production capacity.
      Low-yield production, high prices. It is actually a very reasonable and common business practice.

      But it is marketed so well, I would believe your explanation quite comfortably.

  16. zeekthegeek says:

    I thought Mantle was platform-agnostic. That is, it will work as well on Nvidia hardware as it does on AMD. Because unlike Nvidia, they don’t see the need to arbitrarily cripple features in a way that makes them exclusive.

    • GamesInquirer says:

      As I understand it, it’s an API designed to better take advantage of specific AMD hardware. Open or not (I don’t think it is, but even if it were), it simply can’t function well, or even function at all, on other hardware it’s not meant to recognize.

      Not even older AMD hardware will be able to take advantage of it, only GCN cards. Not that Nvidia owners should care. If they’ve been happy with running games on other APIs, this won’t change that; no developer will abandon that market share just because they can, with hopefully little extra work, allow people with AMD hardware to get some more out of theirs.

      Nvidia could provide another API for their own architecture, if they can get the compatibility to the same level (that’s the important part: this API needs to function on every new AMD card without problems, which is a tough order, as it will no longer be used only on the single GPU model found in consoles) and not cause such issues for developers or gamers. Nvidia don’t have the console market to help with developer adoption of such an API, but they’ve been able to popularize solutions custom to their hardware in the past anyway, like PhysX, which did screw over owners of the other brand, unlike AMD’s API.

      Hopefully, if both companies provide such things of their own, they will both be easy to use and earn equal treatment from developers, so that how a game runs is strictly down to the power of the hardware chosen by gamers, based on the price/performance ratio that suits them, rather than the intentional lack of proper optimization for a given API. As long as the hardware remains compliant with certain standards, rather than diversifying due to the unchaining from DirectX, it should be viable. Of course it’s not hard to imagine Nvidia diversifying on purpose, given PhysX and the like, but I hope for the best.

  17. Strabo says:

    I wouldn’t get my hopes up for a 400-450 GBP 290X. Rumors are currently putting the price between 550 and 650 dollars, with the lower end more likely than the upper in my opinion (as it would be competitive with the Nvidia 780).

  18. SuicideKing says:

    @Jeremy:

    1. What about Mantle becoming a repeat of Glide? Or a repeat of PhysX?

    2. About the 280X… performance is between the 7970 and 7970 GHz Edition, for about the price of a 7970… and in the US, $50 more than a GTX 760, which performs much worse. So at least compared to Nvidia, pricing is pretty good. Heck, it’s $100 cheaper than a GTX 770.

    “There’s another positive in all of this for AMD: in the process of hacking away at its flagship’s price tag, the company pushed Tahiti into a price band Nvidia doesn’t service. True, R9 280X is slower than the Radeon HD 7970 GHz Edition. But the card is also about $30 cheaper. At $300, there’s a $50 premium over GeForce GTX 760, but that board doesn’t handle QHD resolutions as well with detail settings cranked up. If I was building a PC to game on a 2560×1440 display and wanted to get in the door as inexpensively as possible without sacrificing graphics quality, the 280X would be my card. That value is why I’ll hand the Tahiti-based board our Smart Buy award. There’s certainly something to be said for revisiting a GPU when it’s selling for $200 less than the last time you reviewed it.”
    http://www.tomshardware.com/reviews/radeon-r9-280x-r9-270x-r7-260x,3635.html

  19. Didden says:

    The 290 and 290X specs are online if you know… you type in their names. Looks erm, faster.

  20. sunaiac says:

    Aaah, RPS …
    Singing love songs when the rip-off 680 arrives (slower than a 7970 GHz for just 100€ more), then lamenting that prices are too high :D
    Then comparing the 300€ 280X to the “empty shelves” price of the 7970, when the actual price for a 7970 is 350+ €, in order to say there’s no evolution…

    Aaah, RPS …
