Week in Tech: AMD’s New 285 GPU, NVMe SSDs And Stuff

By Jeremy Laird on August 28th, 2014 at 9:00 pm.

Oh, you silly GPUs. Remember the days when by your names should we know ye? No longer. Increasingly, both AMD and Nvidia appear to be engaged in a game of one-upmanship when it comes to baffling branding. Enter, therefore, the new AMD Radeon R9 285. The nomenclature suggests it should sit above the existing R9 280, but in fact it’s cheaper, less complex and most likely a bit slower. Why not Radeon R9 275? I have no idea. Still, it looks like a promising new option in terms of bang for your buck. Meanwhile, the complete package for next-gen SSD performance is finally coming together as a major new controller chipset with support for NVMe is announced. Yes, NVMe! Oh and on a related note, it now looks like you might want to skip Intel’s upcoming Broadwell architecture / CPU family / platform / whatever and jump straight to Skylake. Details after the break.

AMD’s new graphics card

The new AMD Radeon R9 285 graphics chipset, then. For the record, the new 285 is more important for AMD than it is for you and me. Going by the information released so far, which is admittedly a bit patchy, it’s not a revolutionary new GPU or chip architecture.

Instead, it’s about delivering a similar sort of gaming experience as the existing Radeon R9 280 for a bit less money in terms of retail pricing and – here’s the critical point – presumably at a significantly lower production cost.

The problem for AMD (and Nvidia of late) has been production partner TSMC’s inability to keep the Moore’s Law die shrink thing on schedule. TSMC has been stuck on 28nm for an absolute age.

As a consequence, a little while back AMD found itself launching a new high-end GPU (the Radeon R9 290 series) on the same 28nm process as its incumbent high-end GPU (the Radeon HD 7900 series, which was subsequently rebadged the R9 280).

They’re both big, expensive chips and the new 285 is a stopgap solution to the problem of making loads of really big, expensive chips while everyone waits for TSMC to get a grip – or AMD shifts production to GlobalFoundries, which of course was once part of AMD. Aaaaaaaaanyway, what AMD has essentially done is take a look at the 280 and identify which bits of the chip it can hack out without dramatically borking gaming performance. The result is a leaner, more cost-efficient chip.

Not every detail of the specification has been released, but here’s what we do know. It packs the same stream shader count as the R9 280, thus 1,792 of the tiny number-crunching blighters. As for texture units and ROPs (don’t ask, if you don’t know, you probably won’t care), we’re talking 112 and 32, once again making the 285 a dead ringer for the 280.

Where’s the difference, then? It’s the memory subsystem. The 285 receives a cut-down 256-bit bus compared to the 280’s hefty 384-bit item. So, much less bandwidth, even taking into account the slightly higher 5.5GHz effective data rate of the graphics memory, up from 5GHz.
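For those who like the sums spelled out: peak memory bandwidth is simply bus width (in bytes) multiplied by the effective data rate. A quick back-of-the-envelope sketch using the figures above:

```python
def mem_bandwidth_gbs(bus_width_bits, data_rate_gtps):
    """Peak memory bandwidth in GB/s: bus width in bytes times effective data rate."""
    return bus_width_bits / 8 * data_rate_gtps

r9_285 = mem_bandwidth_gbs(256, 5.5)  # 176.0 GB/s
r9_280 = mem_bandwidth_gbs(384, 5.0)  # 240.0 GB/s
```

So the quicker memory claws back a little of what the narrower bus gives away, but the 285 still ends up well short of the 280’s peak bandwidth.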

Oh, and the standard memory buffer shrinks from 3GB to 2GB, though this last delta is something board makers are free to close up should they choose. The clocks are very marginally down, too, with the boost clock dropping from 933MHz to 918MHz. AMD hasn’t revealed the default clock.

Price-wise, the official number is $249 for the new 285, with ye olde 280 rocking in at $279 and ye olde 280X at $299. UK Radeon R9 280s kick off around £160 and 280Xs around £195, so call that £150 or thereabouts for the new 285 on this side of the Pond.

What kind of value does that represent? As yet, it’s not entirely clear. It’s possible the new 285 packs an exciting new revision of AMD’s GCN architecture that does funky things in terms of performance à la Nvidia’s Maxwell architecture. Could this be GCN 1.2?

If it is, AMD is keeping bloody quiet about it. Moreover, if it was something dramatically new and faster, its sub-R9 280 pricing would be pretty odd. So I’m thinking this is so-called GCN 1.1 and the pricing reflects its performance.

Personally, I’d spend that bit more and get the 280 with its wider bus. It’ll likely hold up better over time at higher resolutions. It’s a proper high-end chip and that tends to make for superior longevity.

One last thing – the 285 goes on sale 2nd September. Next!

Faster SSDs

In other news, Marvell has announced its NVMe-enabled, PCI-E 3.0 x4 88SS1093 SSD controller. Cue much rejoicing. To decode that specgasm: it’s a controller designed for the new PCI Express-based drive interfaces (M.2 and SATA Express, discussed in posts passim), it supports up to four PCI-E gen three lanes and thus up to 4GB/s of bandwidth (real-world bandwidth will be a bit less) and it does the newfangled NVMe thing.
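Where does that 4GB/s figure come from? Each PCI-E 3.0 lane signals at 8GT/s with 128b/130b line encoding, so the rough arithmetic (ignoring protocol overheads beyond the encoding, hence ‘a bit less’ in practice) goes like this:

```python
def pcie3_bandwidth_gbs(lanes):
    """Approximate PCI-E 3.0 bandwidth in GB/s for a given lane count."""
    line_rate_gtps = 8    # per-lane signalling rate for PCI-E 3.0
    encoding = 128 / 130  # 128b/130b encoding efficiency
    bits_per_byte = 8
    return lanes * line_rate_gtps * encoding / bits_per_byte

x4 = pcie3_bandwidth_gbs(4)  # roughly 3.94 GB/s, i.e. "up to 4GB/s"
```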

The latter bit means it doesn’t suffer from the overheads and latencies of the AHCI storage control protocol, which is the standard protocol for SATA drives and was never conceived with solid-state storage in mind. NVMe was designed for SSDs and should be much faster for random access performance, I reckon.
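The numbers behind that claim, for the curious: AHCI supports a single command queue 32 entries deep, while the NVMe spec allows up to 64K queues with up to 64K commands apiece, a far better match for the parallelism of flash and modern multi-core CPUs. A toy comparison of the raw limits:

```python
# Command queuing limits per the AHCI and NVMe specifications
AHCI_QUEUES, AHCI_QUEUE_DEPTH = 1, 32
NVME_QUEUES, NVME_QUEUE_DEPTH = 65_535, 65_536

ahci_outstanding = AHCI_QUEUES * AHCI_QUEUE_DEPTH  # 32 commands in flight, max
nvme_outstanding = NVME_QUEUES * NVME_QUEUE_DEPTH  # over 4 billion, in theory
```

No real drive gets anywhere near the NVMe ceiling, of course, but the point is that the protocol stops being the bottleneck.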

Anyway, Marvell controllers are found in some of the best value drives, including Crucial’s SSDs, so the prospect of an NVMe Marvell controller rolling out to affordable but fast SSDs sounds good to me.

However, the whole PCI-E thing raises issues. To get proper performance, you need to use the PCI-E lanes on the CPU die itself, not any hobbled PCI-E lanes that might be hung off the chipset and share bandwidth with all and sundry.

Now, Intel’s mainstream LGA1150 CPUs only sport 16 lanes on the CPU die, so any lanes used for storage can’t be used for graphics. Arguably, eight lanes of gen three PCI-E is enough for a single GPU. But in an ideal world, you’d have the full 16.

Well, rumour has it that Intel’s upcoming Skylake generation of mainstream CPUs will sport 20 lanes on-die. And thus you can have your 16-lane GPU cake and eat your four-lane NVMe-enabled SSD. Of course, the slight snag is that we’ve still to see any CPUs from Intel’s next-gen Broadwell family actually on sale. Skylake comes after Broadwell and so is fully two generations away.
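To put the lane budget in one place, here’s a trivial sketch (bearing in mind the 20-lane Skylake figure is still only a rumour):

```python
def lanes_fit(cpu_lanes, *device_lanes):
    """True if the CPU die has enough PCI-E lanes to feed every device at full width."""
    return sum(device_lanes) <= cpu_lanes

lanes_fit(16, 16, 4)  # False: LGA1150's 16 lanes can't feed an x16 GPU plus an x4 SSD
lanes_fit(16, 8, 4)   # True: drop the GPU to x8 and the SSD fits
lanes_fit(20, 16, 4)  # True: the rumoured 20 Skylake lanes cover both
```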

However, despite the delay rolling out Broadwell, it looks like Skylake might stick to the original schedule, in which case we should see it on the desktop next year. Whatever, it’s something to factor into your upgrade plans.

If your PC is really chugging along with a magnetic drive, an old SSD, a crap CPU or whatever, ignore all of the above and dive into a Haswell system today using Intel’s existing processors and platforms. But if it’s humming along OK and you’re flirting with the idea of an indulgent upgrade, perhaps wait until it’s clear just when Skylake is coming and what it will offer.


27 Comments

  1. PopeRatzo says:

    Faster SSDs?

    If they get any faster, I’ll be able to play games released on Nov 11 on Nov 10.

    • Amatyr says:

      Only if you’re on the right side of the arbitrary ocean releaseline.

    • neckro23 says:

      No kidding. I’m still getting used to HDD thrashing being a thing of the past (although I still run all my games off spinny metal disks). I just stuck a Samsung 840 in my 3-year-old Macbook (total cost, including funny Mac hardware adapter: $150) and it’s like I just bought a brand-new laptop.

      We’re so spoiled. Lemme tell you about the Bad Old Days and 20-megabyte double-height MFM hard drives, son…

      (And don’t even get me started on the virtual cloud servers you can now rent for a pittance, with stupid fast RAIDed SSD disks in them…)

      • Damien Stark says:

        “Lemme tell you about the Bad Old Days and 20-megabyte double-height MFM hard drives, son…”

        I hear you. I remember my father’s computer, a 286, had a 40 Mb hard drive, and that was SO BIG that we partitioned it into a 10 and 30 Mb partition.

    • Person of Interest says:

      I’d like to see real-world benchmarks that show useful speed gains with faster SSD’s. Hardware reviewers seem to always run IOMeter with 32 operations queued, or measure I/O latency out to 1/2 hour of continuous writes. I assume they do this because they get consistent results, and it produces nice charts that differentiate products. But it seems divorced from the things most people do on computers.

      When Anandtech reviewed the Samsung XP941 SSD over a PCIe 3.0 x4 interface, they fawned over its 1GB/s sustained read and write speeds, quintupling the speed of a SATA 3Gbps SSD. But when they tested something practical, like Photoshop installation on a Mac Pro, the new drive completed installation in only 9% less time (267s vs. 288s). I haven’t seen them publish application or game load times lately, either–I cynically assume they dropped those tests because the numbers stopped improving with newer SSD’s.

      • TacticalNuclearPenguin says:

        Measuring install times ( or game load times ) is even less relevant than IOPS and other stuff.

        The real deal about SSDs is their responsiveness, and I’m incredibly thankful that the SATA interface is still bottlenecking us, because at least manufacturers are forced to improve the side of performance that matters instead of pushing for higher linear speeds, which are already incredibly high anyway, and as such going further is mostly good for marketing only.

        As of now, SSDs are still improving on performance consistency, which is far more important than anything else. You don’t buy an SSD only for load times, you buy it for the “agility” and the incredible feel of the whole PC experience, which is something that’s not possible to have on conventional mechanical drives, even with RAIDed high-RPM ones.

        Try a SATA2-bottlenecked SSD on a crap computer and you’ll probably still fall in love with it. It’s all down to no moving mechanical parts, with many dozens of times faster random performance. That’s where extra optimization is happening right now.

        • Person of Interest says:

          I agree that performance consistency is a huge factor for SSD’s. StorageReview thoughtfully includes maximum latency as well as average latency for some of their benchmarks, but it still does not get enough press attention.

          I want to see reviewers focus on activities that make me wait for my computer. Examples:

          1. Game frame rates.
          2. Tab switching and task switching.
          3. System and application start-up time.
          4. Copying files.
          5. Working with large files: starting/stopping VM’s, skipping through movies, extracting archives.

          Some of those things may only be related to SSD’s in odd ways, for example, the SSD may only affect 1 when the game hitches because it can’t stream assets fast enough. 4 and 5 might be mostly unaffected by the SSD because of the OS’s write-behind cache.

          My point is that “IOMeter 4KB random write QD=32” does not seem like a good proxy for these activities. Even Anandtech’s torture test, which replays 2TB of real-world recorded disk activity, is flawed: my OS cache makes most under-2GB writes “free”, and a 5ms wait doesn’t matter if my screen still redraws within 16.7ms.

          I would love to read an article that catalogs all the things we do with computers that are not perceptually instant; identifies which ones are affected by storage; and establishes synthetic benchmarks whose results closely correspond with performance and responsiveness as perceived by a user.

      • fish99 says:

        For some real world numbers, I replaced the 7200rpm drive in my laptop with an SSD, and this laptop has crappy old hardware (Core 2 tech) that limits the SSD to about half of its desktop top speed, and yet boot and load times for apps were cut in half. When you’re loading 3DSMax in 30 seconds instead of 60, you really notice that, or booting in 18 seconds instead of 40. The difference in access times for all the little reads and writes your OS is constantly making every time you do anything makes everything feel smoother too. The annoying little pauses aren’t there anymore.

        • Damien Stark says:

          This is absolutely true, but I think his point was that going from one SSD to a higher-performance SSD doesn’t make much difference.

          Whether he’s right or not, we can all agree that going from a standard HDD to an SSD *does* make a big difference.

  2. Scrote says:

    Mind doing a spot more of decoding specgasms?
    I’m blurry on what exactly this controller is. How would one make it use the PCI-E lanes on the CPU die? Do motherboards support this now? I guess I’m asking – are much-faster SSDs using this controller lark appearing anywhere near us soon and can I just shove one into an M.2’s hole?

    • nrvsNRG says:

      The upcoming (any day now) X99 motherboards support M.2. Actually I thought maybe Jeremy would do a round-up of the ones that have been shown off so far.

      • Sakkura says:

        Z97 and H97 motherboards already support M.2 and SATA Express.

        • nrvsNRG says:

          Yeah I know that, I’m just excited about Haswell-E and X99 right now. Besides, it will be using PCI-E 3.0 and is capable of much greater speeds on X99.

    • Sakkura says:

      It will use the PCIe lanes available, there’s no way to configure this manually. Current Z97 and H97 motherboards support M.2 and SATA Express interfaces already. Other SSDs using at least M.2 are also already available, but this is one of the first controllers that supports NVMe. Basically just means it can take full advantage of these “next-generation” connections. Consumer-oriented SSDs with this controller inside should arrive next year.

  3. steves says:

    My new-ish PC has an ASRock Z97 Extreme6 board, and one of these dinky little things as a main system disk:

    http://www.overclock3d.net/articles/storage/plextor_m6e_pcie_m_2_ssd_review/1

    These tiny M.2 SSDs (the ones that go faster than the SATA III limit – the form factor is rather confusing, you can also get cheaper, slower ones that are designed for notebooks) mounted flat on the motherboard are great, and not just for the speed – getting rid of ugly cabling is never a bad thing. It’s about 3 seconds booting Win 8.1, and loading bloated hogs like Photoshop, Visual Studio and all the usual offenders is near-instant.

    As someone who remembers loading things from cassette tapes back in the ’80s…I think we have finally arrived.

    That motherboard has another M.2 slot as well (Ultra M.2 x4!), which can theoretically go up to 32 Gbps. But there aren’t any SSDs that come close to that yet. The Samsung XP941 thing used in the review below is about as good as you can get at the moment, but I will wait a while.

    The loss of GPU performance from PCIE3 x16 to x8 isn’t a big deal according to this:

    http://www.anandtech.com/show/8045/asrock-z97-extreme6-review-ultra-m2-x4-tested-with-xp941/11

    1% off 3D performance maybe? I can live with that.

  4. MacTheGeek says:

    File this under AMD’s Confusing Brand Labels: You’re comparing the price of the 285 to the 280X, not the 280.

    Glancing at US Newegg, and ignoring rebates:
    R9 280 – from $210 to $240
    R9 280X – from $260 to $330

    AMD’s launch price of $249 for the 285 fits right in between these ranges. It’s a replacement for the 280 (which will be phased out), not the 280X. I suspect that AMD will launch a 285X in a couple of months to replace the 280X.

    • Jeremy Laird says:

      Sorry, yes, I meant the X re the price comparison, I will update. Thanks!

      Or did I? Generally fell foul of the branding confusion, will fix it in any case!

    • Convolvulus says:

      I’m still using a Radeon 9800 from 2003 because no subsequent card has a higher number.

      • steves says:

        Yeah, but does it have an ‘X’ after it? And if so, what about a ‘T’?

        http://www.trustedreviews.com/ATi-Radeon-9800XT_PC-Component_review

        See, it is ‘trusted’, so it must be true.

        I wish the random numbers and letters bollocks would go away, but I can only assume nVidia & AMD know what they are doing, because they have been doing it for a long time now.

        At least nVidia got with the ‘Titan’ thing, which sort of implies biggest and best. Although where do you go after that? Announce year zero, and start again with ‘midget’?

        And even now, plain old Titan is no longer top dog. Whether or not a Titan Black is better than a Titan Z, or indeed a 780 Ti, I will never know. Or indeed care – I normally buy the quietest card I can find with the 3rd or 4th tier of whatever the ‘best’ GPU is, but I still need to spend ages checking out various tech sites to figure out what the bloody numbers & letters are.

        I’m picking on nVidia here because I wanted G-Sync (an actual thing, with a name that means something!), but AMD are just as guilty, perhaps more so, because they’re still sticking a fucking ‘X’ on the end of stuff!

  5. SuicideKing says:

    Tonga is supposed to be GCN 1.1 only, from what I know. It was released in one of their FirePro cards a week or so ago.

    Narrower bus shouldn’t be much of a concern for 1080p and below, which is arguably the target market for the 285. Nvidia’s got away with a 256-bit bus for quite some time now.

    Memory will apparently come in 2GB and 4GB versions, because with a 256-bit bus they usually keep the memory at a power of 2 (something to do with lane distribution, I think. Did cause some problems with the 660 Ti).

  6. sophof says:

    Another thing to keep in mind is that the 280 series does not support FreeSync for gaming, where newer cards like the 285 probably will (the 290 at least already does). I suspect that’s going to be a big deal once FreeSync monitors hit the market.

  7. Zekiel says:

    I heartily wish graphics cards had more sensible names. I’m looking at replacing mine now, and I just have no idea how to search for “a decent but not too expensive graphics card that will play Metro 2033 Redux” (say). I find it incredibly difficult comparing Radeon cards with other Radeon cards, let alone trying to compare with Nvidia as well. Thank goodness there aren’t five manufacturers.

    I know Tom’s Hardware does an incredibly helpful graphics card hierarchy chart, which is basically what I’m using. But that doesn’t make it tremendously easy to actually search online stores. Every time I find a card I then have to check back on the hierarchy chart to see if it’s better or worse than what I’ve previously found, and check the system specs for one or more games that I want it to be able to run…

    Anyone have a better suggestion of how to go about this?

    Bottom line: I wish they each made one new graphics card per year, which is better than the previous year’s, and called all their cards “Nvidia 2010”, “Nvidia 2011” etc. That would make things MUCH simpler.

  8. Monkey says:

    I’m a little surprised you didn’t wait until the Haswell-E embargo lifts later today before posting. Was looking forward to your thoughts, till next week then…