Hard Choices: Ask AMD…Pretty Much Anything

By Jeremy Laird on April 2nd, 2013 at 5:30 pm.

Bit of an experiment this, but I’m meeting up with some senior AMD suits later this week. So instead of standing around looking plausible and pretending I know stuff, I thought it might be fun to give you lot the chance to put whatever queries you might have to AMD, makers of Radeon graphics cards and FX/Phenom/Athlon processors. It might not, of course, but what with the PS4 and next Xbox (allegedly) going all-AMD and the PC component market in something of a transition – oh and with the very future of AMD in question – well, there’s plenty to ponder.

Whether we get any answers worth having is another matter. But don’t ask, don’t get. Fed up with AMD drivers? Want to know what’s next for the FX CPU? Sound off below and I’ll lay it on Paxman-styleee later in the week.

__________________

130 Comments »

  1. Keyrock says:

    Do you expect APUs to eventually outperform and/or make dedicated GPUs obsolete? If so, how long do you think it will be until that actually happens?

    • Lekker says:

      Apply a little logic here. There’s nothing an APU can do better, in terms of performance, than a dedicated card in your lifetime. There is simply more room to work with in dedicated GPU space. Until we see carbon nanotubes being used on a regular basis, I don’t see this happening.

      • particlese says:

        I think shared memory could be interesting for performance if you have some useful code that is well-suited to parallelization but requires lots of branching. Fractals and other ray-tracing-esque algorithms come to mind, but there are usually approximations and things that can be used for “good enough” applications in games.

        I’d be interested to see if this is something AMD is going after, especially since I’ve heard on the Blender Podcast that AMD’s OpenCL support may be hardware-limited. (Or at least is a royal pain to work with compared to something like CUDA.)
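
        To make the “parallel but branchy” idea concrete, here’s a rough Python sketch of the escape-time loop behind a Mandelbrot render (names are mine, purely for illustration): every pixel is independent, which parallelizes beautifully, but the iteration count varies wildly from pixel to pixel, which is exactly the data-dependent branching GPUs tend to dislike and a shared-memory setup might cope with more gracefully:

          from concurrent.futures import ProcessPoolExecutor

          def escape_time(c, max_iter=256):
              # Data-dependent loop: one point may bail out after 2 iterations,
              # its neighbour may take all 256 - that's the divergent branching.
              z = 0j
              for i in range(max_iter):
                  z = z * z + c
                  if abs(z) > 2.0:
                      return i
              return max_iter

          def render_row(y, width=80, height=40):
              return [escape_time(complex(-2.0 + 3.0 * x / width, -1.2 + 2.4 * y / height))
                      for x in range(width)]

          if __name__ == "__main__":
              # Embarrassingly parallel across rows, branch-divergent within each row.
              with ProcessPoolExecutor() as pool:
                  rows = list(pool.map(render_row, range(40)))
              print(len(rows), "rows rendered")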

      • Keyrock says:

        I disagree. APUs have the advantage of shared memory, which can be a disadvantage right now to some degree because DDR3 is significantly slower than GDDR5, but once DDR4 comes along (next year?) that gap should narrow if not disappear. More importantly, there’s the issue of latency. This is an area where APUs have a clear advantage over a dedicated card. No matter how great a throughput you give the pipeline, electricity takes time to travel from point A to point B, and that will always be an issue for dedicated cards; even if they use fiber optics it will still be an issue (albeit a lesser one). As graphics cards become more and more powerful, latency becomes a larger and larger piece of the performance puzzle (don’t take my word for it, ask John Carmack), and APUs have the distinct advantage of having the CPU and GPU as one physical unit, which minimizes, if not completely eliminates, the latency of information going back and forth between the two processor units.
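
        For a sense of scale, here’s a back-of-envelope sketch – the figures are rough, order-of-magnitude assumptions on my part, not measurements, and most of the discrete-card cost is bus protocol and driver overhead rather than raw signal travel time:

          # All figures below are order-of-magnitude guesses, not benchmarks.
          on_die_hop_ns = 100        # assumed cost of a CPU<->iGPU exchange through shared memory
          pcie_round_trip_us = 5     # assumed cost of a small CPU<->discrete-GPU round trip

          ratio = pcie_round_trip_us * 1000 / on_die_hop_ns
          print(f"Discrete round trip is roughly {ratio:.0f}x the on-die hop with these guesses")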

        • Sakkura says:

          DDR3 is not slower than GDDR5. GDDR5 is simply DDR3 modified to be much faster in bandwidth and much slower in latency. That means DDR3 is much better for a CPU, but GDDR5 is much better for a GPU. Using the same memory for both a CPU and a GPU is a problem, because they require very different things from the memory.

          • evernessince says:

            Well, ya learn something new every day. Why do they call it GDDR5 though? I guess all the praise for the PS4 is more unfounded than everyone believes, and it kinda makes the next-gen consoles that much weaker.

            Here is a big problem I need to ask about: if AMD’s console APU is using memory made for the GPU, and the CPU already has weak per-core power, won’t we see games with afflictions indicative of a weak CPU? For example, weak A.I., poor randomization, and lower numbers of active animations. I can see this being very bad; my CPU currently only reaches 40% on Skyrim super-modded with ENB.

          • Brun says:

            “Graphics Double Data Rate, version 5” = GDDR5.

            As for CPU performance, the theory is that more power can be squeezed out of a lower-power CPU by liberal application of multithreading. Most PC games (and other applications) today are single- or double-threaded at most, and often their multithreading implementation is weak (i.e. running most of the game on a single thread and only splitting off things like sound or input handling to other threads).

            I’m not an expert in CPU/GPU operation, but perhaps increased multithreading – i.e. increased parallelization – will help negate some of the disadvantages of GDDR5 when used with traditional processors, since it’s already designed for use in highly parallelized applications (video cards).
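
            A rough Python sketch of the contrast (placeholder function names, nothing from any real engine): the first pattern keeps everything important on one thread and just pushes audio off to a helper, the second splits the frame into tasks so lots of slower cores can all contribute:

              import concurrent.futures
              import threading

              def update_ai(frame):      pass   # placeholders standing in for real per-frame work
              def update_physics(frame): pass
              def mix_audio(frame):      pass

              # Typical PC pattern: game logic stays serial, only audio gets its own thread.
              def mostly_single_threaded_frame(frame):
                  helper = threading.Thread(target=mix_audio, args=(frame,))
                  helper.start()
                  update_ai(frame)
                  update_physics(frame)
                  helper.join()

              # Many-slow-cores pattern: split the frame into tasks for a worker pool.
              def parallel_frame(frame, pool):
                  tasks = [pool.submit(fn, frame) for fn in (update_ai, update_physics, mix_audio)]
                  concurrent.futures.wait(tasks)

              if __name__ == "__main__":
                  with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
                      for frame in range(3):
                          parallel_frame(frame, pool)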

          • rockman29 says:

            @ever

            360 used GDDR3 RAM, and had no problem with any of the above characteristics in games, FYI :)

          • jalf says:

            DDR3 is not slower than GDDR5. GDDR5 is simply DDR3 modified to be much faster in bandwidth and much slower in latency. That means DDR3 is much better for a CPU, but GDDR5 is much better for a GPU. Using the same memory for both a CPU and a GPU is a problem, because they require very different things from the memory.

            Well, except it’s not that simple.

            The reason for the higher bandwidth is that each module is connected directly to its own memory controller, rather than going via a single shared memory bus to a single shared memory controller. So you get absurdly high bandwidth simply because you have a lot of RAM modules, and each has its own connection to the rest of the system. That actually has very little to do with the RAM itself.
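
            Rough arithmetic to show the point – bandwidth is basically interface width times transfer rate, so a wide GDDR5 setup dwarfs a desktop’s two DDR3 channels regardless of what the individual DRAM chips can do (the two example configurations are my own picks):

              def bandwidth_gb_s(bus_width_bits, transfer_rate_gt_s):
                  # bytes per second = (bits / 8) * transfers per second
                  return bus_width_bits / 8 * transfer_rate_gt_s

              print(bandwidth_gb_s(384, 5.5))  # ~264 GB/s: a 384-bit GDDR5 card at 5.5 GT/s (HD 7970 class)
              print(bandwidth_gb_s(128, 1.6))  # ~25.6 GB/s: dual-channel DDR3-1600 on a typical desktop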

            And I’d love to hear how bad the latency is (and *why* it would be noticeably worse). Anyone know?
            Regurgitating the same old hearsay just isn’t that interesting if no one has any actual facts.

            There’s a lot of latency in other operations associated with the GPU (transferring data between system RAM and GPU RAM takes *a long time*), and there is *always* a lot of latency when reading from RAM, and the GPU is designed to hide this very well. But that doesn’t mean that GDDR has significantly higher latency than other types of memory.

            Most of the sources I can find on Google talk about *low* latency, partly because the GPU is just physically much closer to its RAM than the CPU is to system memory.

            Can anyone provide some actual numbers on the supposed higher latency of GDDR RAM, or at least provide a rationale for why latency would be higher?

          • Sakkura says:

            I am SO disappointed that you didn’t notice my link to just such data in the “build your own steam box hard choices” article.

            http://www.sisoftware.net/?d=qa&f=gpu_mem_latency

            You’ll notice graphs where the latencies run to several hundred cycles. Compared to your average DDR3-1600 with a latency of 9 cycles, you can see the difference is drastic.

          • jalf says:

            Thank you! You’re entirely correct, I missed that link.

            However… Well, let’s start with this:

            You’ll notice graphs where the latencies go into the several hundred cycles. Compared to your average DDR3-1600 with a latency of 9 cycles, you can see the difference is drastic.

            Regular DDR memory has a latency of hundreds of cycles too. You’re probably thinking of the CAS latency, which is just for one step in the process of reading/writing memory. The total latency is vastly greater. When the CPU needs to access data in RAM, it has to wait a few hundred nanoseconds. (The CPU’s L2 cache has a latency around 10 cycles typically. Why would Intel and AMD bother adding this cache if it was as slow as RAM? ;))
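
            Putting rough numbers on that (typical figures and a 3 GHz CPU clock are my assumptions, not data from the article):

              cas_cycles, io_clock_ghz = 9, 0.8      # DDR3-1600, CL9: the oft-quoted "9 cycles"
              cas_ns = cas_cycles / io_clock_ghz     # ~11 ns, and that's only one step of the access
              full_access_ns = 100                   # assumed end-to-end CPU -> DRAM -> CPU trip
              cpu_clock_ghz = 3.0
              print(cas_ns, full_access_ns * cpu_clock_ghz)  # ~11 ns CAS vs ~300 CPU cycles for the full trip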

            But let’s look at the actual data in your article:

            For the sake of simplicity, let’s single out the AMD hardware. They conveniently show an APU, running against DDR3 RAM, and a GPU, running against GDDR5. Just what we need for our comparison.

            And according to the graph (for Global Memory), they seem to be fairly close in terms of latency. Almost identical latency (in cycles) for small blocks of memory, the GDDR setup wins at mid-sized blocks, and the APU gets ahead on large blocks. All in all, I’d call that even.

            However, this is when the latency is measured in cycles. The dedicated GPU is a good 70% faster clocked, meaning that for the same absolute latency it will show a far higher cycle count. Which means that if both show the same number of cycles, the latency in the GPU case is much, much lower in absolute terms.
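
            In other words, divide cycles by the clock to compare like with like – the clock speeds below are assumptions I’ve picked purely to illustrate, not figures from the review:

              def cycles_to_ns(cycles, clock_ghz):
                  return cycles / clock_ghz

              print(cycles_to_ns(600, 0.85))  # ~706 ns: 600 cycles on a discrete GPU at an assumed 850 MHz
              print(cycles_to_ns(600, 0.50))  # 1200 ns: the same cycle count at an assumed 500 MHz APU clock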

            Heck, in the summary table below the graph, they even say outright about the APU that “memory latency is higher, no doubt due to shared DDR3 memory.”

            And they put numbers to it: the GPU clocks in at 703 nanoseconds. The APU at 1110.

            So yes, you’re right I missed this (very interesting) article. Thanks, I’ve bookmarked it for future reference.
            But the data in it doesn’t seem to support your claims at all.

            Looks to me like the latency is *significantly* better in a GPU/GDDR setup than an APU/DDR one.
            Now, like I said before, I personally wouldn’t expect a huge difference in latency for the RAM itself. That ought to be about the same. But the way in which GDDR is connected to the outside world, with dedicated memory controllers per module and a much, much shorter path to the processor is significant.

            Those hundreds of nanoseconds it takes to get data from RAM to CPU aren’t because of the memory. The memory costs perhaps 30-50ns all in all (going from memory+gut feeling. It could be more, could be less), and the rest is for routing data on the long, winding path along the memory bus and through the motherboard. A GPU+GDDR setup can avoid much of that, so I would expect such a setup to have lower latency overall.

            Which… seems to be the case. :) For a CPU (or APU) setup with GDDR ram, I would expect latency to be roughly the same as it is with plain DDR. It could be a bit better, but I wouldn’t expect any major differences.

            I wonder if the confusion has arisen because GPUs are designed very much to *hide* memory latency. Which could be taken as an implication that there’s an unusual amount of memory latency to hide. There isn’t (as far as I know, and as far as your data shows); it’s just that at the rate a GPU is expected to perform, even the usual memory latency is completely unacceptable unless the GPU does a lot to hide it and work around it.

        • Apack990 says:

          Complicated scary tech words!

    • Bremze says:

      Worst case, when moving data onto the die takes more power than the power envelope increase you gain from moving to dedicated in the first place. Best case, when being on die allows for algorithms that are either impossible or infeasible on dedicated units.

  2. Brun says:

    Biggest question I would ask is why aren’t they making a more serious push in the mobile market, both in CPUs and in graphics processing (though those categories are necessarily blurred a bit in mobile).

    Intel has thrown itself headlong into tablets with its Atom processors – while clearly not leading the market, their upcoming Haswell architecture is going to see them gain considerable ground both in performance and TDP. Nvidia has also pursued mobile with Tegra. I know it’s a crowded market, but it has always puzzled me that AMD doesn’t seem to be pursuing mobile at all.

    Given that you’re going to be talking to “suits” I imagine you’ll be able to get a better answer to a more business-oriented question like this.

    • ThTa says:

      They are making a push for mobile with Temash and Hondo, intended for tablets, and they’ve already stated they think they’ve the advantage over Intel there because of their APU strategy. (Meaning better graphical performance in the same envelope.)

      They can’t really make a push for smartphones, though. Their current designs don’t really work on that scale, and they’ve long since sold any branches with that kind of expertise to Qualcomm.

      • Brun says:

        Then perhaps a better question might be why are they doing such a terrible job of marketing those chips? I’d never even heard of either of those and I read plenty of tech websites.

        • Bremze says:

          Because they sell them to OEMs instead of end users. Seems a bit backwards, but I guess they’d rather let the OEMs spend the marketing dollars in their place, a la PS4.

          • Brun says:

            Yeah but even the OEMs don’t advertise these chips – unless they go by other brand names.

        • Nallen says:

          Who should they be marketing to and why should you have heard of them? Anyone making these kinds of purchases would be aware of the solutions available I’m sure.

  3. mr.ioes says:

    Why did they remove the option to read VRAM and will they reintroduce this metric?

  4. Hoaxfish says:

    Ask them how they feel about being commonly portrayed as the people with no drivers, compared to Nvidia’s portrayal as a fire-hazard.

    And I guess their rivalry with Intel, both as CPU and GPU manufacturers. Also, the continuing rise of ARM chips in mobile devices.

  5. serioussgtstu says:

    Ask them whether continually waving their willies in the press actually achieves anything, or if they can appreciate the fact that some people find it slightly pathetic and sad.

    http://www.techradar.com/news/gaming/consoles/amd-on-the-ps4-we-gave-it-the-hardware-nvidia-couldn-t-1141607

    • JoeGuy says:

      Did you forget that Nvidia came straight out and said the PS4 was under-powered and they weren’t willing to deal with Sony at the prices they agreed with AMD?

      AMD just responded to a question about Nvidia’s stroppy teenager attitude to losing out on the console space.

      • serioussgtstu says:

        They both came out looking equally stupid. It’s not about who instigated the argument, that’s irrelevant; the fact that processor manufacturers feel so insecure that they need to get into these pointless debates is what I have a problem with.

        • JoeGuy says:

          It’s not irrelevant when it’s literally a reply to a question about how much willy waving the other people were doing. That’s pretty relevant.

          Someone else instigating and then asking the other people how they feel about it is what happened. They gave a quite reasonable answer to a challenge of their product by an opposing company. That’s pretty relevant lol.

          • serioussgtstu says:

            And so the whole conversation turns into who said what about the other and why AMD think they’re so much better than Nvidia, and that’s boring to listen to.

            Edit: LOL

          • JoeGuy says:

            Yeap. A company saying “well the other company would dis us wouldn’t they but we are happy with what we are making” is the mother of all jerk fests.

            I can’t believe they get away with all that swagger. Chop them down at the knees before their head hits the clouds, I say.

      • Trent Hawkins says:

        that wasn’t the reason Nvidia gave. Nvidia straight up said that they do not expect the PS4 to sell well and would much rather use the production line on something better. In other words they’ll probably be working with Microsoft, which is an obviously smart move.

        • Apocalypse says:

          So far it seems more likely that Nvidia is working on phones and tablets, while AMD gets the Xbox and PS4.

        • jkz says:

          Yeah, we didn’t want them contracts anyways.

  6. Turin Turambar says:

    A year ago or so there were some comments from AMD about the possibility of giving more low-level access “to the metal”, and how the hardware wasn’t really being exploited at 100% on PCs, unlike on consoles, because there are several APIs in the way.
    http://www.bit-tech.net/hardware/graphics/2011/03/16/farewell-to-directx/1

    They talked about it… and nothing happened. Could they actually push for some movement in that area?

    • Paul says:

      Yes this please. Ask them if they are going to work with MS/OpenGL/Valve on getting better access to hardware for developers.

  7. Parge says:

    With AMD’s per-thread performance being way behind Intel’s, do you think that having AMD chips in the new Xbox and PS4 will be beneficial for owners of AMD chips in the future?

    Bonus question: The AMD APU in the new consoles is a very slow chip vs full fat desktop chips from the FX series – do you see increased parallelism being coded into games as a way around this?

  8. Jubaal says:

    The big question I would like to ask is, did you threaten to overrule him?

    • Gap Gen says:

      Forget that, what everyone’s asking is whether you threatened to overrule him.

      • Jamesworkshop says:

        are we allowed to instruct Jeremy Laird

        Jeremy Laird, was not instructed

        • Gap Gen says:

          I suppose one could ask them instead about their latest processor, and then question them repeatedly on whether they threatened to overclock it.

    • Jeremy Laird says:

      I would dearly love to ask that question. 14 times.

  9. Snargelfargen says:

    Have they given up on the desktop CPU horse race with Intel? Seems like AMD’s offerings are only competitive at the intersection of some very specific applications and budgets right now. It would be terrible if Intel became the only game in town.

    • Brun says:

      They aren’t the only game in town, even without AMD. See: everyone who licenses ARM chip designs.

      • Snargelfargen says:

        My question is specifically about desktop cpus. Small part of the market, I know, but relevant to my interests.

        Are ARM CPUs available and viable for Windows desktops? That would be pretty cool.

        • Sakkura says:

          ARM CPUs cannot run Windows, ever. Windows RT, though, is another matter. Basically the software (including OS) needs to be rewritten from the ground up to pass from one ecosystem to the other. It’s like Mac before they ditched PowerPC and went Intel.

          • neongreenie says:

            Microsoft have supported other architectures on Windows before, Itanium being one that comes to mind. As long as Microsoft have kept the kernel of their OS fairly portable since they dropped Itanium support, there shouldn’t be any reason not to support various ARM architectures.

        • Brun says:

          There’s no reason they couldn’t be, especially in low-power applications like office PCs. Obviously OS rewrites would need to happen to make that work, but Windows RT exists, so there you are.

          My point is that it’s not like there wouldn’t be checks on Intel’s power. They may not have a (direct) competitor in the desktop world without AMD, but if Intel started abusing its position and OEMs and OS makers started getting fed up with it there are other places they could turn.

        • Snargelfargen says:

          Thanks for the clarification guys. Sounds like things would have to get pretty bad before somebody could justify investing in the ARM architecture (or even something else) for mainstream desktops.

          Definitely interesting times to be a pc gamer. Seems like the platform’s future will be shaped largely by repurposing innovations in mobile technology.

          • Brun says:

            Sounds like things would have to get pretty bad before somebody could justify investing in the ARM architecture (or even something else) for mainstream desktops.

            They’re already making the necessary investments for mobile, thus they’re acquiring the necessary experience for ARM deployment. If Intel started making trouble the interested parties would be in a much better position because of this than they would have only a few years ago.

      • njursten says:

        But he’s asking about the desktop. ARM is more or less not used at all there, right? I’m kind of guessing though, based on how much you hear about ARM when talking embedded systems.

    • Shivoa says:

      Yep, same question from me.

      With Intel spending their additional transistor budget for each generation (outside of the hex-core enterprise models that are far too expensive for your typical gamer) on removing the low-end discrete GPU market from Nvidia/AMD, and on lowering the power profile to more easily target mobile devices, rather than offering the gamer anything more for their money than a small sliver of increased performance… why haven’t AMD turned up with a competitive $225 model without the iGPU? Even being a node shrink behind Intel’s focus on their fabs shouldn’t prevent AMD from providing a very competitive offering, and yet most gamers are still best served buying an i5-3570K (and before that, a 2500K) and ignoring the wasted money on the iGPU silicon.

      • Bremze says:

        They already do, and it’s Piledriver. Look up Crysis 3 CPU benchmarks to see what I mean. The issue here is that AMD is reliant on game developers using upwards of six threads to be competitive, when most games still use a single thread for game logic with a few threads spawned for input handling or sound. They can’t keep up in single-threaded or floating-point-heavy workloads, because Intel has invested gargantuan piles of money to be where they are.

    • LintMan says:

      @Snargelfargen,

      Yes, that was my question also:

      Has AMD ceded the Desktop CPU performance market entirely to Intel? Intel’s shifting focus away from this market seems like it might provide an opportunity for AMD to catch up or gain the lead.

      • TheManko says:

        Just today I read that their next iteration called Steamroller is coming later this year and will be about 30% faster per clock. That’s a big enough upgrade that they will suddenly look pretty interesting again for gamers, not just for the few 8 core games like Crysis 3. I hope they deliver, as Intel is already acting like a monopoly.

  10. Dana says:

    What are your future plans regarding the price-performance range of your hardware?

  11. Sakkura says:

    How do they feel about the whole FMA3 vs. FMA4 saga? Or related issues of standards, such as OpenCL adoption. Is that coming along as well as they’d like?

    Do they think Piledriver and Steamroller will compare a bit more favorably with Haswell due to common FMA3 support than Bulldozer did against Ivy/Sandy Bridge?

  12. Vandelay says:

    A simple question that I am sure has been asked before, but I don’t recall ever seeing an answer; why does AMD, as well as Nvidia, insist on using a bizarre naming format that only serves to confuse those who do not spend hours researching their products?

    I expect that the confusion is exactly what they are looking for, but it would be interesting to hear what their explanation is.

    • Bremze says:

      OEMs. “What’s that? Ripping out GDDR5 for DDR3 tanks the performance like a champ while saving a few pennies? Full speed ahead, a higher number is the only thing buyers care about anyways.”

    • Abtacha says:

      They don’t. AMD’s nomenclature is actually pretty clear.
      First digit: generation (current: 7)
      Second digit: general performance class (for gaming, 7 is low end, 8 mid range, 9 high end)
      Third digit: order within the class.
      NVIDIA is pretty much the same, though they complicate things slightly with affixes like Ti and, more recently, Boost.
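
      As a toy illustration of that scheme (the function is mine, not anything official from AMD):

        def decode_radeon(model):
            gen, tier_digit, pos = model[0], model[1], model[2]
            tier = {"7": "low end", "8": "mid range", "9": "high end"}.get(tier_digit, "other")
            return f"generation {gen}, {tier}, position {pos} within the tier"

        print(decode_radeon("7850"))  # generation 7, mid range, position 5 within the tier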

  13. Sakkura says:

    Oh, and are they planning to improve (graphics) driver support for Linux now that Valve are doing their Linuxy thing?

    • EstrangedManatee says:

      This is a good question. ATI’s Linux drivers are really terrible. It would be nice if they could stop being terrible.

    • Sagan says:

      Oh yes, I also care about this. Please tell us that you will focus more on Linux now. I would be satisfied if they got a good selection of the titles on Steam to work with their drivers.

  14. PatrickSwayze says:

    Why is your driver support so shoddy?

    Multiple times in recent memory I’ve bought games that are AMD-sponsored yet have had graphical glitching which I’ve had to solve myself (Dawn of War 2 comes to mind).

    Even DiRT 3, which came bundled with AMD video cards, was suffering display driver-related crashes, and still does.

    Far Cry 3 performs terribly on my 6xxx series card, yet lesser AMD cards perform better.

    Why do AMD drivers often perform poorly when games are first released? Why are you so behind in this regard?

    It’s no fun buying a brand new PC game and having glitches or choppiness, knowing people with an Nvidia card aren’t getting these problems, and that I am going to have to wait for drivers while everyone else is enjoying the game.

    I’d much prefer it if you charged as much as Nvidia for your graphics cards and boosted your driver support massively.

    As it stands now, unless I get a good answer here, I will be buying an Nvidia card for my next build.

    Don’t get me wrong, my AMD card is great when it performs, but the grass looks greener on the other side at the minute.

    Also: How are you going to counter PhysX?

    • Sakkura says:

      Far Cry 3 runs surprisingly well on my Radeon HD 6850, so I don’t know what you’re on about.

      And as for drivers performing poorly at release, see Nvidia vs. Tomb Raider.

      • Snargelfargen says:

        A lot of these driver issues seem to be caused by communication problems between game developers and AMD/Nvidia. Patrick seems to think most of the blame lies with the video card companies; that may be true, but I would definitely like to hear AMD’s side of the story.

        It seems like an intrinsically flawed system; game developers and GPU companies need to share drivers and game builds to produce a polished product, but neither party has the power to hold the other accountable if they don’t hold up their end of the bargain.

        • Hahaha says:

          “That may be true, but I would definitely like to hear AMD’s side of the story.”

          This

      • SkittleDiddler says:

        AMD has a long, long history of shoddy GPU driver support. It’s not fair to compare the two.

        • Sakkura says:

          So does Nvidia. It was Nvidia that released a driver that could cause graphics cards to self-destruct.

          • SkittleDiddler says:

            Nvidia’s overall record for bad driver releases is nothing compared to AMD’s, even taking that horrific incident into account.

      • PatrickSwayze says:

        I’m on a 6950.

        I don’t mean to sound so passive-aggressive, but I had a 5950 which actually destroyed my entire PC through its faultiness, then another 5950 which lasted a few months before it began overheating (midway through Deus Ex: HR), and I then got a 6950 from the company I was buying from (Ebuyer – great service) which runs flawlessly, bar driver issues.

        It must have taken from when I got Metro 2033 until I had the 6950 for it to be playable.

        I just don’t feel like I bought a quality product, unlike when I bought an AMD CPU in the early ’00s.

        Drivers are just as important as the kit itself when it comes to graphics cards, and AMD are really letting themselves down.

        • Bremze says:

          A 5950 isn’t a thing; that was either a 5970 (a dual-GPU card) or a 5850. Honestly, it sounds like either you have extremely poor case ventilation, which would also explain the performance issues, or you had some bad luck with RMAs, which has absolutely nothing to do with drivers. I’m not sure what you mean by “Far Cry 3 performs terribly on my 6xxx series card yet lesser AMD cards perform better”, either. Nvidia has their fair share of driver woes, but that usually gets glossed over because Nvidia.

          • Vandelay says:

            I’m running Far Cry 3 on a 6950 at, I believe, maxed settings at 1080p and it runs very nicely. I do have the 2GB version though. Is yours only 1GB?

          • PatrickSwayze says:

            Nope I have good ventilation and cooling, I know how to PC.

            And I understand shitty drivers because I’ve seen both Skyrim and BF3 performance fluctuate ridiculously.

            And it’s a 2GB version.

    • analydilatedcorporatestyle says:

      “How are you going to counter PhysX?” Probably by ignoring it like everyone else!

    • dE says:

      I second the question about driver quality. Well, not in such a passive-aggressive manner, but rather: “What are your plans concerning driver quality and stability, in new games as well as older ones?”

      Because the last time I bought a computer, I had the choice between two nearly identical systems. I didn’t buy the one with an ATI graphics card in it. Reason being: I’d spent too many hours juggling drivers, because version blabla 9 was the one version that would work with THIS game but version blabla 10 was the only version that worked with THAT game.
      Maybe it was my choice of games, but a lot of times when I installed one, I was mentally prepared to faff with drivers for a good two hours to get it to actually run. Switched to Nvidia, haven’t had to do the Driver Dance yet.

  15. EstrangedManatee says:

    In late 2012 I remember AMD announcing their partnership with Dataram to sell their own branded version of RAMDisk. I’ve seen ramdisks being used in business, research, and industrial environments since the DOS days, but I haven’t really heard of any significant usage in the consumer market. Are they hoping to expand sales of Radeon memory to corporate users, or are they trying to develop a demand for ramdisks in the consumer market? Do they expect this technology to compete with solid state drives, hybrid drives, and SSD caching, or are they trying to supplement them?

  16. JoeGuy says:

    - Are you going to enter the mobile market in any substantial way in the near future?

    - Can we expect the recent change to pretty decent drivers to continue?

    - Will you be supporting the 7xxx series with drivers and optimizations for a longer period, given that it has had a substantially longer window than normal as your main line of products, resulting in a larger uptake of this particular series than usual?

    - And obviously, when could we expect an update to the FX-8xxx and HD 7xxx series?

    • Bremze says:

      AMD are holding on to the 7XXX series on the desktop side until around Q4 this year, so it’ll still be the main target for driver optimizations for quite a while. GCN was pretty much a ground-up new architecture compared to VLIW5/4, which explains the higher-than-usual gains from drivers, but we’ll probably see diminishing returns as the driver team gets more accustomed to the new arch.

      As for the FX series, there have been rumors (stemming from the fact that the Piledriver follow-up seems to be missing from some newer AMD roadmaps) that AMD are dropping pure desktop CPUs in favor of APUs, but whatever the case, we’ll see come Q3/Q4, when the Kaveri APUs using Steamroller cores and GCN-based graphics units are released.

  17. captain567 says:

    Bulldozer didn’t do so well, in part due to its unconventional architecture. Is AMD planning on focusing on more conservative designs, or on trying to convince others to support theirs?

  18. Faxmachinen says:

    Ask them about the future of GPGPU and what plans they have for making massive parallelism available to the common coder. Then follow up with questions about the possibility of real-time raytracing pipelines and what other rendering technologies they’re looking into.

  19. FuzzyPuffin says:

    This might be a question for Sapphire, but why does the Mac edition of the 7950 have such a ridiculous markup over the PC version when it’s the same card, just with different firmware?

    • Bremze says:

      I’m guessing it’s due to the small volumes. QA costs are divided over fewer units sold.

    • Sakkura says:

      Apple would have charged twice as much for it.

  20. snorkel says:

    Ask them about their plans to support VR and the Oculus Rift in general.

    • Sheng-ji says:

      Also, if Nvidia are being as open as they claim with the technology behind the Shield, will we see support for that device?

  21. analydilatedcorporatestyle says:

    Will they be soldering future processors to motherboards (like Intel), or will the separate motherboard/CPU upgrade path still be possible?

    • Bremze says:

      I believe they either already do that for some Bobcat SoCs targeting tablets, or are going to with the Jaguar SoCs made for the same purpose. I’ve seen nothing about them doing something similar on desktop chips, though.

      • analydilatedcorporatestyle says:

        I don’t keep up to date with hardware stuff, but I read somewhere (can’t be arsed to find it) that Intel will be shipping PC CPUs hardwired to motherboards in the near future?

  22. particlese says:

    I’d love to hear an update on AMD’s inconsistent frame timing, which some benchmark outfits have been harping on about recently. (I think The Tech Report was the one that got that ball rolling.)

    I applaud them for trying to minimize the delivery time of individual frames (especially with VR becoming a Thing that needs low latency), but I found the resulting so-called “microstutter” in Skyrim supremely distracting/annoying. AMD’s beta driver that targets this problem has worked wonders, so I’m eager to learn more!

    • Sakkura says:

      AMD had a good long chat with Anandtech about that issue just recently.

      http://www.anandtech.com/show/6857/amd-stuttering-issues-driver-roadmap-fraps

      • particlese says:

        Ooo, hadn’t seen that yet. Pretty meaty, so far. Thanks for the link!

    • Bifurk8 says:

      Finally created an account just to ask about microstutter and frame times to find I was already beaten to the punch. This. I’d like to know if AMD is designing their next generation of hardware with these issues in mind and if they plan to address them further in previous generations via driver updates or CAPs.

      Also, does AMD have anything coming down the pipes to compete with Nvidia’s adaptive v-sync tech? I’m tired of screen-tearing in Eyefinity when using a combination of different video outputs.

  23. analydilatedcorporatestyle says:

    As you are being Pax Man (Jeremy Laird) can Cara Ellison be Ms. Pax Man? whot with her beard and all that lippy!!

  24. DigitalSignalX says:

    Does AMD feel the “optimization race” among AAA titles is healthy, or does it fracture the gaming community? Many games perform dramatically differently on same-generation AMD/ATI setups compared to their Intel/Nvidia counterparts, leaving PC owners to struggle in support queues or muck around under the hood trying to tweak their games into working order. My concern is that if some sort of baseline performance spec could be ironed out, the PC would be able to compete better with consoles in terms of launch reliability.

  25. pilouuuu says:

    I want to know if they are releasing a technology similar to PhysX any time soon. I know it’s just a gimmick feature, but every time I decide which GPU to buy, the fact that AMD is lacking it turns me off a little bit. But I have to say that I always end up choosing AMD, and that’s why I’d love them to support effects like that better.

    • Rognik says:

      As someone who invested in two great AMD GPUs for pretty graphics, I would love to see PhysX or something similar. I’ve been playing PlanetSide 2 a lot lately and the PhysX effects are absolutely beautiful. I wish I could turn them on, but using the CPU for them in a game that is already CPU-bottlenecked would be madness.

  26. TT says:

    My number 1 question for years is:

    Why can’t we select the scaling option in Catalyst Control Center / Properties (Digital Flat-Panel) / Image Scaling?

    It has been greyed out for years now. The only way to access it is by changing the monitor resolution – choose your option – and go back. It won’t show your chosen option, but it will work.
    I often choose a lower resolution to play a game at a better frame rate, but not at the cost of having the blurred, stretched version which it defaults to.
    I always prefer to keep the native aspect ratio with black borders around it, which should be the default anyway since CRT monitors have vanished.
    It sounds like nit-picking, but it is so simple to correct, yet…

  27. B0GiE-uk- says:

    I would like you to ask them if they could set up a “DECENT OFFICIAL FORUM” where we can have single game threads with all users’ bugs/glitches posted, and actual official replies from the driver teams on what they are doing about them.

  28. Fazer says:

    Why don’t they do an AMA on Reddit instead?

  29. Megakoresh says:

    I would only like them, and any other component manufacturer, to answer one question, but it’s a big one:

    Hardware development is clearly shrinking the size of the performance steps between GPU and CPU generations. Do you see the future in continuing development of the current type of microchips, in moving on to new fronts (like fully optical computers), or is software optimization holding most of the improvements for us in the near future?

  30. gruia says:

    Is the HD 8850 really as good as advertised?
    20% cheaper than the 7850 and 30% stronger?
    What’s its release date – winter?

  31. Bigmouth Strikes Again says:

    (a) Since you consider the 4 year old HD 4XXX series legacy, will you similarly keep anally raping those who made the mistake of buying other products of yours, such as, in my case, a card from the HD 6XXX series?

    (b) I know hell will freeze before you answer this, but I have to ask: what’s your margin with the new consoles? How much are you making and how significant is it to you (financially)?

    • Brun says:

      For (b), speculation is that the answer is “not that much.” Remember, consoles have to be cheap to succeed, which means that Sony/MS/Nintendo probably beat them up hard on the price of the APUs.

    • Bremze says:

      How exactly are people running 4XXX series cards being “anally raped”? Nvidia releases non-beta drivers on about a quarterly basis, are Nvidia users being “anally raped”?

      • Bigmouth Strikes Again says:

        There’s a lot more to AMD’s legacy drivers than just being released quarterly. I’ll quote Anandtech: “This means that those products will move from receiving monthly driver updates to quarterly driver updates, and at the same time AMD will shift away from working on further performance improvements and new features for those cards, and instead working solely on bug fixes and other critical updates.” (emphasis mine)
        Knowing the standard of their drivers for their, as yet, “fully supported” hardware, I do fear they are going to do the same thing with every generation: 4 years and then good luck.
        And I’m sorry if it offends your sensibility, but the HD 4XXX series was launched in June 2008; the move to legacy happened in May 2012. That’s actually a little under four years of full support. Considering how vital drivers are when it comes to graphics cards, that’s just lame.

  32. Jason Moyer says:

    Will the Pitcairn cards ever get drivers that actually work with DirectX 11, frame latency wise?

    • Bremze says:

      Pcper.com should have frame time benchmarks up for Pitcairn in about two days, but looking at how Tahiti does, I’m guessing it already does.

  33. Sander Bos says:

    How do you, as a hardware maker involved with at least one next generation console, feel about SteamBox as a platform?

  34. james.hancox says:

    When are you going to ship the PS4 chip on a standard mini-ITX motherboard with a bucket of GDDR5? I don’t want to buy into the new consoles, but I want exactly that kind of hardware sat under my TV running Steam.

  35. Jenga says:

    We’ve seen really aggressive game bundles with AMD graphics cards, any hints as to future refreshes to these bundles? Battlefield 4 bundle perhaps?

  36. Dowr says:

    Ask them if they prefer milk or white chocolate.

  37. Jeremy Laird says:

    Some good questions guys. The nice thing is, lots of them are the naggy questions I’d be asking anyway, but I get to blame them on you! Watch this space, etc.

  38. MeestaNob says:

    Ask if they have an easy way of emulating PhysX on an ATI card.

    Because Nvidia made it all but proprietary tech, I’ve not bought one of their cards on principle, even though I suspect there is a software solution…

  39. inspirius says:

    What is the process that you go through when optimising a driver for a game? Do you analyse what calls are made and then disable parts of the driver that aren’t being used, or is it more involved than that?

    I’m a software engineer so happy to read as many gory details as possible.

    • Bigmouth Strikes Again says:

      That’s a good one, I would also love to know that.

  40. Itanic says:

    Forgive me if there is a similar post to this – TL;DR. **Edit: just read all previous posts**
    I am a gamer and a PC enthusiast of 30 years.
    After switching between 486 chips and K6 chips in the early years, and coming across early ATI cards too… it was a time of wonderment when the 9700 Pro came out and blew Nvidia out of the water.
    I positively salivated over owning one of those cards and eventually did! Then came the x64 processors and Intel were left holding the baby for a year or so; I owned several AMD x64 processors and laughed at how Intel fumbled. For the last 6-10 years I have felt dirty about owning both Intel processors and Nvidia video cards, but I’m sorry to say that, being a gamer and PC enthusiast, they have had the edge (not so much on video cards, but certainly on CPUs). Can you ask the guys at AMD when they will be releasing new bleeding-edge tech, or hiring some new people who think outside the box? To once again get people thinking that they could pull the rug from under Intel with new speeds, ideas and technology, as I for one would, as before, like to champion their products to the gaming community I am part of, and to work colleagues, customers and friends of mine. Yours hopefully, Ian

  41. ShineyBlueShoes says:

    I went with an all-AMD system this time around after a long time running Intel and Nvidia, and while I’m generally happy, I do have to say I’m getting fed up with their drivers. I have to manually switch my 7970 CrossFire configuration around based on the game I’m playing, because the 13.1 drivers are awful with these cards, the beta ones are better but don’t work with some games, and the CrossFire profiles are four months old.

  42. ShEsHy says:

    Ask them why all Radeons run so freakin’ hot.
    My Sapphire HD 4870 1GB Vapor-X runs about 30 degrees (Celsius) hotter than any other component in my case (including an overclocked CPU and RAM). And it’s not just me; every Radeon owner I know has the same temperature issues.

    P.S.
    Please don’t try to give me any advice as I’m not asking for help.

  43. andytt66 says:

    CrossFire. Where does that sit in AMD’s plans for the future?

    After spending a thoroughly frustrating weekend trying to get a new HD 7970 x2 CrossFire setup to play The Witcher 2 without crashing after more than 30 seconds, I’ve sworn off dual-card systems for life. I doubt I’m alone; the internet seems full of gamers who have issues with multi-card setups.

    But it *is* a lovely idea. So I’d like to know if there are any plans for anything in the future that might actually approach single-card levels of stability.

  44. sejm says:

    My question is in the same vein as others here – when can we expect to see systems like the PS4’s hardware, with memory shared effectively across CPU and GPU at faster speeds?

    That seems to be the big thing with the PS4 architecture that could potentially allow it to outperform the PC – the shared use of GDDR5 rather than DDR3, coupled with a reduced OS footprint.

    I appreciate that individually fast RAM would be optimal, but an APU that worked in the same way might see a resurgence in PC gaming, which can only be good for the industry.

  45. Apocalypse says:

    Linux drivers. Steambox. Release of the new GDDR5 APU platform for the PC.

    And will we ever see an APU that is actually capable of giving a decent performance boost when in CrossFire with a desktop card?

    For crying out loud, the current APUs are ridiculous in this regard, wasting good space on a GPU that will not do anything once you upgrade your graphics card.
    The PS4’s Jaguar APU sounds like a perfect candidate to get into CrossFire mode with a similarly strong dedicated GPU.

  46. Bootstraps says:

    Ask them the following:

    1) How do AMD feel about the fact that working conditions at one of their major business partners (Foxconn) are so bad that the shop floor staff have resorted to mass suicides as a negotiation tactic? Would they care to comment on the management’s complete, ongoing disregard for them? Do they think that the installation of nets around the sides of the workers’ dorms to catch the frequent jumpers has affected the price of the AMD boards they’ve manufactured, or the stock price of AMD?

    2) What concrete steps have been taken to ensure that none of the rare earth metals used in the manufacture of their products are associated with child labour, slave labour or warfare, given that so called “conflict minerals” are often not properly traced through the supply chain? What is their historical record on this over, say, the last ten years? What is different between the situation then and now?

    3) Does the AMD CEO feel he deserves his $1m salary? How does his salary compare to that of the staff who clean his office? Or to the salary of those actually assembling AMD products overseas?

    4) Following a 60% collapse in stock price last year, AMD upper management has promised a “restructuring” in the last quarter of 2012, with the aim of reducing costs by 25%. How many people will be losing their jobs over the next couple of years to achieve this?

    I mean, you weren’t planning on asking them only easy questions were you, Mr Paxman? ;)

    • riverman says:

      I’d tell you to read a freakin’ book, but I’m sure you won’t, so here’s a podcast for you regarding “point” number one. Conditions are pretty awful in the factories that make the junk we “need”, but Foxconn is actually one of the better factories to work at over there.

      http://www.thisamericanlife.org/radio-archives/episode/460/retraction

      • Aaax says:

        What book?

        • Bootstraps says:

          £5 says it’s “Atlas Shrugged” or Willard Romney’s “No apology”

      • Bootstraps says:

        Goodness me, well if only you could take your podcast back in time and play it to them! I’m sure they’d all see the error of their ways, once they’d had “This American Life” tell them that, actually, their working conditions are just fine. Foxconn could take down the nets too, so it’d be a win-win.

  47. Aaax says:

    Do you sleep with your secretary?

  48. Tams80 says:

    Probably too late to the party, but here goes…

    What are their plans for Hybrid CrossFire with mobile (full-voltage laptop) APUs and GPUs?
    If they have any significant plans, are they going to push them to OEMs?

    I ask this because finding a laptop with anything AMD in it is almost impossible, and with CrossFire they have an opportunity to provide the best full-voltage mobile graphics performance.

  49. FullMetalMonkey says:

    Where does AMD expect graphics technology to be in five to ten years’ time? Are we going to see a decline in dedicated graphics cards (personally I don’t think so) and a rise in dominance from APUs? Also, what sort of graphics technology do you think is going to be the focus in the future? Recently it’s been a lot of lighting effects, volumetric smoke and tessellation.

  50. phanteh says:

    I doubt this’ll see the light of your interview, but could you ask them if/when they’re going to fix their graphics cards’ performance in Hawken? Or are they just in too deep with Nvidia? (I’ve posted this on their support forums, with no answer.)

    What with the tech Intel have been showing off, is the PC gaming market going to become more and more segregated into hardware camps?