Week in Tech: Intel Loves You, VR, $3,000 Graphics

By Jeremy Laird on March 27th, 2014 at 9:00 pm.

The golden age of detachable twangers returns...

Right, then, it’s been an intriguing week or so in PC gaming tech. The virtual reality roadmap just got a rocket up the bum with the news that social network and moneybags megacorp Facebook has snapped up Oculus VR, while Sony has injected additional momentum by showing off its own prototype headset for the PS4. Meanwhile, remember when you could buy a cheap Intel chip and overclock the twangers off it? Those days may be returning. Intel has apparently decided that it cares about us PC enthusiasts after all. Well, kinda. Oh, and Nvidia has another catastrophically expensive video card which you won’t be buying. Same old.

We’ve covered the Facebook-buys-Oculus news elsewhere. And Sony’s Project Morpheus headset for the PlayStation 4 obviously isn’t for PCs. But suddenly it feels like VR just got very serious. A bit of competition can only be a good thing and already it looks like Sony’s entrance might help push things in a more ergonomic direction as regards the design of headsets.

Admittedly, the idea of Facebook taking over Oculus VR makes me feel a bit queasy, and not just because it’s a moderately bizarre acquisition at first glance. Romantic if ultimately unrealistic notions of Oculus VR being a purist tech company with gamers’ interests at heart pretty much go out the window, that’s for sure.

But with Facebook bankrolling the thing, at least you can be fairly sure the project won’t want for investment. As for the Sony angle, on paper even the PS4’s hardware looks a little weedy for high-res VR rendering.

Sony’s ‘open air’ headset is a new take on VR ergonomics

Smooth frame rates and minimal lag are proving hyper-critical for avoiding motion sickness when it comes to VR, and the PS4’s GPU is a relatively modest bit of kit by modern PC standards. But if Sony can make it work at HD resolutions, it suggests big-money hardware may not be entirely necessary for a decent VR experience. Promising.
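
For a sense of the numbers, the frame-time budget is simply 1,000ms divided by the display’s refresh rate, and the oft-quoted comfort target for total motion-to-photon latency is somewhere around 20ms. A quick sketch:

```python
# Frame-time budgets for VR: the renderer must finish each frame
# within 1000/refresh milliseconds; the commonly cited comfort
# target for total motion-to-photon latency is roughly 20ms.
for hz in (60, 75, 90):
    print(f"{hz}Hz -> {1000 / hz:.1f}ms per frame")
# 60Hz -> 16.7ms per frame
# 75Hz -> 13.3ms per frame
# 90Hz -> 11.1ms per frame
```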

So, to Intel. One of my big beefs with Chipzilla in recent years has been its general disdain for PC enthusiasts. We all get that mobile is the big growth market and that has to be where much of Intel’s attention is focused.

But that doesn’t explain things like cynically locking down CPU overclocking to a few premium models, cheaping out on thermal packaging and generally dragging its feet.

My theory on part of this is simple: all Intel CPUs should be unlocked. The overclocking community isn’t huge, so I doubt it would hit Intel terribly hard if a few enthusiasts and gamers bought slightly cheaper chips and overclocked them.

But it would generate good will with a group of people who would surely evangelise Intel products if only they had reason to do so. At worst it’s got to be zero sum.

I have a similar hunch as regards high-end desktop CPUs with more cores. Intel makes much bigger margins on Xeon CPUs than desktop chips, granted. But I doubt the cannibalisation would be that catastrophic should Intel put out some Core i7s with a few more cores. Xeons top out at 15 cores, current desktop Core i7s at just six. It’s pretty pathetic.

And again, some truly exciting desktop product would be good PR. The thermal packaging issue is a bit trickier. I have no idea how much money Intel saves by using cheap paste inside the packaging of its recent chips.

Anyway, the good news is that Intel does appear to be waking from its slumber when it comes to showing enthusiasts a bit of love. It’s recently announced a number of relatively interesting chips.

Cheesy ‘Anniversary’ branding, but a cheapo chip with an unlocked multiplier? Yes please

First up is confirmation of an eight-core Haswell-E chip. OK, it’s an LGA2011 chip, it will cost a bomb, it’s still seven cores short of the top Xeon and I’d rather see Intel adding cores to chips that drop into its mainstream socket. But what the hell, it’s got more cores.

Oh, and it will link in with a new chipset that debuts DDR4 memory support from Intel, which is a welcome development that will eventually filter down to the mainstream.

Next up are some new K-series models based on the existing Haswell quad-core chip with overclocking enhancements. Intel has mentioned improvements to the thermal packaging, though no specifics have been given and it’s not clear what sort of gains we’re looking at.

It’s possible these chips could have higher default clocks, too, which would be very welcome. Here’s hoping.

Another new product involves a chip based on the upcoming 14nm Broadwell die shrink and offering Intel’s fastest Iris Pro integrated graphics in socketed, unlocked desktop format. That puts an end to any doubts that Broadwell would even be available as a socketed chip.

But most intriguing of all is news of a cheapo unlocked ‘Pentium Anniversary’ CPU. Conceived to tie in with the 20th anniversary of the Pentium brand, the chip will likely be a dual-core model and will definitely be fully unlocked.

Bargain basement gaming rig based on a heavily overclocked budget CPU? That’s my kind of system, so fingers crossed.

Again, no prices on any of these, but everything I’ve mentioned will roll out this year. Like I said, I’d rather see Intel unlock the lot. But this is all definitely a step in the right direction, so I’m trying not to be too grumpy.

$3,000? Cheap at twice the price

And finally, Nvidia has announced the Titan Z, a dual-GPU video board with a budget-conscious $2,999 price tag. It’s basically a pair of GK110 Titan Black GPUs in one chassis and thus sports 5,760 stream processors and 12GB of video memory. Cue much rejoicing.

The sharper among you will note that it’s about 50 per cent pricier than a pair of Titan Blacks. Ostensibly, it has the advantage of being a single video card. But the form factor looks to be getting on for triple slot, so any space saving over a dual-board arrangement is pretty moot.
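
If you want to check my maths on that, it’s a one-liner, assuming the $999 launch price of a single Titan Black:

```python
# Premium of the Titan Z over two separate Titan Blacks,
# assuming the $999 launch price per Titan Black.
titan_z = 2999
pair_of_blacks = 2 * 999
print(f"{(titan_z - pair_of_blacks) / pair_of_blacks:.0%}")  # -> 50%
```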

It is thus a halo product: something for Nvidia to shout about, and insurance that the dual-R9 290X Radeon board AMD has been teasing us with lately (but has yet to actually announce) doesn’t claim the fastest-video-card title. It makes very little sense to actually buy a Titan Z.

__________________


51 Comments »

  1. Joshua says:

    After the Core 2 days, this OCable Pentium feels like the first message from a relative you lost contact with long ago. On the one hand, you feel glad that contact is being restored, albeit slowly; on the other, part of you regrets having had such a breakdown in the first place.

  2. bhauck says:

    When someone says “mobile” when talking about Intel, does that include chips for not-tiny laptops? Your standard 15.6-inch desktop-replacement-ish type thing? Or is “mobile” phones and netbooks only?

    • Koltoroc says:

      They talk about mobile as in phones. So far all their attempts have been solutions in search of a problem.

      Intel’s big problem is that x86 is only interesting for backwards compatibility, which is largely irrelevant in the mobile space. There the most important metrics are die size and performance/W, and in both Intel is still playing catch-up despite a few years’ process advantage. It would be different if they dropped x86 in favour of a more efficient µArch, but that is not going to happen.

      There have been only 2 or 3 mobile devices (phones, tablets) with Intel guts so far, all at best marginally successful, if at all.

      • frightlever says:

        In respect of the article I’m pretty sure he means mobile as in laptops, which has been the focus of Intel’s recent releases, with desktop chips essentially being variations on the laptop chips, rather than the other way about, as in days of yore.

      • SuicideKing says:

        What on earth are you on about?

        1. “Mobile” means laptops, even in general. “ultramobile” means smartphones or tablets.

        2. The entire “oh x86 is only here because of backwards compatibility” nonsense is old now, please move on. Intel’s shown enough times that its ISA can lead in absolute performance and compete effectively in perf/W at the same time, and we’ve also seen ARM having to resort to big.LITTLE because efficiency had started to hit a wall.

        3. That’s complete B.S., there have been quite a few smartphones/tablets with Intel SoCs inside, and that number should increase considerably by the end of this year.

        Please don’t spread misinformation, especially given that this is a gaming site, not a tech site.

      • jrodman says:

        I’m a little doubtful that the instruction set matters anymore. I think the efficiency issues are all about intel not having donkey’s years of experience trying to succeed at low power like ARM and company do.

      • Malcolm says:

        In terms of transistor budget the x86 translator is insignificant. The whole RISC/CISC efficiency argument has been dead since at least 1995 when the Intel P6 architecture started translating x86 instructions to RISC-type “micro-op” instructions for execution.

        ARM started in a power efficient space and have been increasing performance, while Intel started in high-performance space and have been increasing power efficiency. They’ve pretty much met in the middle now and it’s all about how small and power-efficient you can manufacture your transistors.

  3. kael13 says:

    I plan to upgrade this year, but I have no idea what I’m going to buy. Broadwell, Haswell-E or these improved Haswell chips. It’s too much choice. As long as I’m spending plenty less than two grand, I should be good though.

    • LVX156 says:

      What’s wrong with AMD? You can get an FX-8350 4.0GHz Black Edition that overclocks VERY well. 4.5GHz air-cooled without upping the voltage is no problem at all, and if you have a really good fan (or water cooling) it’s very stable at 5+GHz.

      And it’s only about £160, which leaves you with an extra £100 you can spend on cooling, or a better video card.

      • kael13 says:

        So I did a little research, as I know nothing about AMD CPUs… You’re heading into 200W+ TDP territory there; I could heat my house with a CPU like that. Although the FX-9590 actually looks quite tempting, with its street price now about a third of the RRP.

        • LVX156 says:

          What the hell are you talking about? The FX-8350 has a 125W TDP. The only ones that are over 200W are the FX-9370 and FX-9590, which are basically the very best 8350s overclocked to 4.4 and 4.7GHz respectively.

          And I thought this was a site about games? There aren’t any single-threaded modern games.

      • SuicideKing says:

        What’s wrong with AMD?

        Well, their single thread performance sucks, and that’s still the most important metric in today’s consumer workloads, including games. It takes a very, very well threaded workload for an FX-8350 to match a Core i7, and even a Core i5 or a Core i3 posts better all-round performance.

        Intel also has better memory controllers, and their chips draw lower power.

        • jrodman says:

          It’s not just consumer. A lot of server workloads work better with higher single-thread performance too. Not everything can be parallelized. Everyone’s going Intel!

          (And don’t even mention the non-x86 contenders, they’re so far behind by now.)

      • Keyrock says:

        What’s wrong with AMD is that the Vishera chips need to run above 5GHz to even come close to competing with the i5s on average, never mind the i7s, and they do so while drawing far more power than their Intel counterparts. AMD’s IPC is light years behind Intel’s. AMD is so far behind in the mid-to-high-end desktop segment that it has essentially given up on that portion of the market for the time being, and has even admitted as much, which is why Intel can sit on its hands making minimal strides in desktop performance while it concentrates on low-power variants and tries to push into ARM-dominated mobile territory.

        It pains me to write that as I was a big fan of AMD and used their chips almost exclusively back in the day, and even when they dropped the ball with the original Phenom, right through to Phenom II (a 945 was my last AMD desktop chip). I miss the good ol’ days of Athlon 64 mopping the floor with the Pentium 4 space heaters.
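
A back-of-envelope way to read Keyrock’s clock-speed argument: performance scales roughly as IPC × clock, so a chip with an IPC deficit has to make it up in frequency. The 1.4× ratio and 3.8GHz clock below are purely illustrative assumptions, not measured figures:

```python
# perf ~ IPC x clock: the clock an IPC-deficient chip needs to match
# a rival. The 1.4x IPC ratio and 3.8GHz clock are illustrative
# assumptions, not benchmark results.
def clock_to_match(rival_ipc_ratio, rival_clock_ghz):
    return rival_ipc_ratio * rival_clock_ghz

print(f"{clock_to_match(1.4, 3.8):.2f} GHz")  # -> 5.32 GHz, hence '5GHz+'
```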

        • Snakejuice says:

          Hear hear! I’ve used AMD in all my past rigs; my last one had a high-end Phenom II and it was just awful! It was bottlenecking me ALL THE TIME, and AMD’s stock CPU fan (what can I say, I’m getting too old and lazy to use 3rd-party cooling) sounds like a chainsaw under just a light load. After that fiasco I quickly built a new Intel i5-3570K-based rig, and I kid you not when I say my performance in CPU-demanding games DOUBLED! Also, Intel’s stock CPU fan is USABLE and QUIET!

          • LVX156 says:

            Come on, modern games are almost all about the GPU. There are very few games where the CPU will be the bottleneck before the video card (unless you have a £350+ card).

            Your performance doubled, you say? Sounds unlikely in games (unless you bought a new video card too): http://www.bit-tech.net/hardware/2012/11/06/amd-fx-8350-review/6 (those are Skyrim benchmarks, and Skyrim is so poorly programmed that the CPU has to work a LOT more than in any other modern game, which is why they picked it for benchmarking; neither Skyrim nor Shogun 2 shows a 100% performance increase).

            If we look at Cinebench and how the 8350 performs with a well-programmed multithreaded program, no i5 beats it: http://www.bit-tech.net/hardware/2012/11/06/amd-fx-8350-review/3

            Sure it’s not as fast as the Intel processors, I never claimed it was. But it’s less than half the price of a good i7, and if gaming is the primary thing you plan on using your computer for, there’s nothing wrong with buying an 8350 and spending the extra money on a better video card.

      • syllopsium says:

        AMD is simply vastly outclassed by Intel hardware to the point where it’s ridiculous. A 4770K is up to a hundred quid more and vastly better. In the scheme of a complete system saving 100 quid isn’t worth it.

        Granted, if all you run is games it’s probably an ok option, but for that you’d probably use a non FX processor anyway. Intel’s CPUs are so much better at compilation benchmarks it’s not funny.

        I’m so looking forward to an eight-core CPU because I want to mess around with virtualisation, and ideally I’d like more than 4-6 cores. If we’re really lucky it’ll support ECC RAM. My only upgrade option from a Core2Quad system with ECC is a series of rather expensive Xeons. Once they go beyond four cores, or into multiple sockets, they’re hideously pricey.

        (I also looked at the AMD server chips. They’re as lacklustre as the desktop offerings and you have to be careful to note they’re a series of NUMA nodes on one chip)

  4. Enterprise2448 says:

    “It makes very little sense to actually buy a Titan Z.”

    It may be branded GTX as in “gaming”, but it is not primarily a gaming GPU (even though people are probably going to buy the shit out of it for that purpose, just for the sake of it). Like the original Titans, it has a much better FP64 performance (a third of FP32), meaning it is very good for scientific CUDA computing. This is why the price is so high, otherwise it would be eating into their Tesla GPGPU market.

    • FriendlyFire says:

      Yep. I’m pretty sure the original Titan did in fact eat into their Quadro market a bit; anyone who doesn’t need the drivers is probably just as well served by Titans as by Quadros, and that can mean a lot more people than you might think. The compute market is growing rapidly.
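
To put rough numbers on the FP64 point: a GPU’s theoretical throughput is shader count times two FLOPs per clock (one fused multiply-add), and Titan-class GK110 parts run FP64 at a third of the FP32 rate. A sketch using the Titan Z’s announced 705MHz base clock (boost clocks run higher, so treat these as floor figures):

```python
# Theoretical Titan Z throughput: each CUDA core retires 2 FLOPs per
# clock (fused multiply-add); Titan-class GK110 runs FP64 at 1/3 the
# FP32 rate. 705MHz is the announced base clock; boost is higher.
shaders = 5760            # both GK110 GPUs combined
base_clock_ghz = 0.705
fp32_tflops = shaders * 2 * base_clock_ghz / 1000
fp64_tflops = fp32_tflops / 3
print(f"FP32 ~{fp32_tflops:.1f} TFLOPS, FP64 ~{fp64_tflops:.1f} TFLOPS")
# -> FP32 ~8.1 TFLOPS, FP64 ~2.7 TFLOPS
```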

  5. Didden says:

    There are still many applications and games that aren’t necessarily faster with more cores – the EVE servers are a good example, with their high-MHz Xeon chips. I still don’t think the software is there yet to really make use of more cores. It isn’t like adding another core makes it 10% faster each time; it’s always diminishing returns.

    • TechnicalBen says:

      True. Though we’d like some improvements. Perhaps they are there, just too incremental to notice right now.

      That and more cores usually mean you can do things like video rendering in the background. Or even, with the amount of extra RAM we have these days, things like “in-house streaming” become more viable with 1 PC and lots of little devices. With 8 cores (4 for your game, 4 for someone else in the house/family) and 16GB RAM… it only leaves the GPU as the bottleneck when running 2 or more games on 1 PC and streaming them out. :P
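
Didden’s diminishing-returns point above is essentially Amdahl’s law: if only a fraction p of a workload can be parallelised, n cores give a speedup of 1/((1 - p) + p/n). A minimal sketch (the p = 0.8 figure is illustrative):

```python
# Amdahl's law: overall speedup from n cores when only a fraction p
# of the work is parallelisable. Each extra core helps less.
def speedup(p, n):
    return 1 / ((1 - p) + p / n)

for n in (2, 4, 8, 16):
    print(f"{n:>2} cores: {speedup(0.8, n):.2f}x")
# -> 1.67x, 2.50x, 3.33x, 4.00x
```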

  6. Sakkura says:

    There are reports that Intel is on the way to launching consumer-oriented CPUs with 8 cores, up from the 6 cores you get in current LGA 2011 Core i7 CPUs. But, honestly, the lack of 8-core CPUs hasn’t been an issue: there has been precious little consumer-oriented software that could utilize the extra cores and in any way needed the extra performance. Over time, software is getting better threaded in general, so it’s probably about the right time for them to step up to 8 cores, and then, in another few years, hopefully make 6 cores standard, with approachable pricing instead of the enthusiast “tax” they currently levy.

    Here’s the story:
    http://www.anandtech.com/show/7874/haswelle-8-cores-x99-ddr4

    • frightlever says:

      The new consoles have 8-core processors, so it’s probably going to become more important as time goes on. But plus ça change, plus c’est la même chose.

      • TechnicalBen says:

        They have like 4+2 and 1/2… well, they have 4 true cores, and 4 additional cores which don’t quite have full resources (less memory access on chip or something). It’s not quite the same as Intel H/T, as that splits 1 existing core; AMD instead adds another cheaper/cut-down core to the existing one (so it’s not a true 100% additional core, but close enough, like 90% of the resources/parts).

        It’s all very complicated: http://www.tomshardware.com/reviews/fx-8150-zambezi-bulldozer-990fx,3043-3.html

        If you put an Intel 4 core with 8 way HT next to a true 8 core, you would see the 8 core was twice the size and twice the chips. HT only allows twice the instructions, it does not speed them up (well, a tiny % quicker).

        If you put AMD or Intel true 8 core next to another true 8 core they would be the same, all have 8 real fully featured cores.

        If you put AMD 8 core Bulldozer (or the likes of the PS4 chip) next to an 8 core CPU, you would notice the AMD was slightly smaller as it had slightly less on chip memory space and shares some chips/resources between cores. Which makes it a tiny bit slower than true 8 cores, but a lot faster than Intel 4 cores with 8x HT.

        • jrodman says:

          Hyperthreading is pretty easy to understand. It makes workloads that involve LOTS of task switching within a fairly sweet spot of thread counts go faster. It doesn’t make 4 compute-bound threads go faster at all, and it is pretty useless for a thousand threads as well. It does help with rapid switching for some thread count near to 8, like 6 or 14.

          This has sometimes been a notable win for desktop and workstation jobs. It usually does not help much for workloads that are finely tuned for the platform, because on a four core chip you should be running four jobs at any time, regardless of the hyperthreads, so for things like very tightly tuned videogames or High Performance Computing clusters it typically is not a big win, and sometimes a small loss.

          HOWEVER, I sure thought the cores on the modern game chips weren’t hyperthreading but limited-functionality cores that simply had limited access to main memory. Am I out of date on the old generation?
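
One way to see the effect jrodman describes is to time a compute-bound job at increasing worker counts and watch where throughput stops scaling; on a 4-core/8-thread chip the curve typically flattens near the physical core count, with only marginal gains from the hyperthreads. A rough sketch (multiprocessing rather than threads, to sidestep Python’s GIL):

```python
# Time a fixed batch of compute-bound jobs at several worker counts.
# On a 4C/8T CPU, expect near-linear speedup up to 4 workers and only
# marginal improvement from the hyperthreads beyond that.
import time
from multiprocessing import Pool

def burn(_):
    total = 0
    for i in range(5_000_000):   # pure integer work, no I/O
        total += i * i
    return total

if __name__ == "__main__":
    for workers in (1, 2, 4, 8):
        start = time.perf_counter()
        with Pool(workers) as pool:
            pool.map(burn, range(16))   # 16 equal jobs
        print(f"{workers} workers: {time.perf_counter() - start:.2f}s")
```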

        • SuicideKing says:

          In practice, i have rarely seen an FX-8000 series chip (4 modules, 8 int ALUs, 4 FPUs) beat a mainstream Core i7 (4C/8T)…

          Though of course you’re oversimplifying, no two cores are equal. How they’re fed, IPC, branch predictors, pipeline length, etc. will make a ton of difference as well.

          Also, Jaguar is a proper core, so the PS4/Xbone actually do have 8 whole cores (but split into two 4-core clusters).

          I don’t remember the other cluster being inferior, just that if a workload is split between them, each time one core needs data in the other cluster’s LLC it’ll suffer a latency hit.

          • TechnicalBen says:

            Yep. I may be wrong, but from what I’ve seen HT benefits non-intensive tasks, while full cores benefit intensive ones, as each task gets the whole core. So say, video rendering with HT gives no benefit, as it needs the entire core to work. But say 16 web pages benefit from HT, as none of them needs the entire workload of the CPU, but they do need more threads.

            If the actual power/GHz and other stuff in the AMD chip make them comparable with an i7, though, probably not. If only Intel mixed both techs in one chip. :P

          • SuicideKing says:

            Well, i do think it has to do with how much a particular task uses a core as well, but it could be the nature of the algorithm that matters more.

            Honestly, i can’t remember any (known) well threaded task that didn’t benefit from HT…the tech only makes sure the cores are kept busy, after all. I’m not really familiar with it on a low level, so maybe Real World Tech would be a good site to visit for this.

            Anyway, benchmarks:
            http://www.anandtech.com/bench/product/836?vs=837
            http://www.anandtech.com/bench/product/836?vs=697

            AMD’s Bulldozer arch gets hit very hard when it comes to floating point operations, because of shared resources and the FPUs not being kept busy enough, but i believe the FPUs themselves are fairly good.

            I believe Intel’s gone back and forth with HT, using it on the second iteration of the Pentium 4 and then they didn’t implement it for Core Duo and Core 2…Then it returned with either Nehalem or Sandy Bridge stuff.

      • SuicideKing says:

        Those consoles have Atom-class cores, and while a large x86 quad-core won’t have many issues running anything that runs well on the consoles, anything that can execute 8 (or more) threads concurrently will likely see a performance boost.

        • TechnicalBen says:

          Atom is Intel, no? So how can they have Atom class on an AMD?

          • Polmansol says:

            i think SuicideKing means performance-wise?

          • SuicideKing says:

            Yup, I meant power and performance wise. Jaguar was an update to Brazos, just like Silvermont was an update to Saltwell…which itself was a die shrink of their older Atom cores…(it’s a bit confusing, Atom used to be the name of the processor family and now it’s the SoC family).

            Anyway, Jaguar and Silvermont are both cores (an Atom branded SoC would have a few silvermont cores), and their target platforms are tablets and netbook class devices. Intel’s also putting Silvermont into phones.

            Jaguar isn’t based on Piledriver/Bulldozer/Excavator like most of the APUs and the FX chips, neither is Silvermont derived from Sandy Bridge or Haswell. Silvermont implements an ISA equivalent to Nehalem, though the actual implementation isn’t like Core.

            Trying to explain all this makes me realise how many code names are involved. :/

    • Keyrock says:

      I would be shocked if we saw a mainstream octa-core desktop chip from Intel before SkyLake (2016?).

  7. biggergun says:

    It’ll be interesting to see Intel compete with AMD in the lower segment. Either poor AMD will die an unpleasant death or… well, or it will not. Personally I hope it lives. Root for the underdog and all that.

    • wodin says:

      Unlikely, considering it powers the PS4 and Xbox thingies…

      • FriendlyFire says:

        I’m not sure the PS4 and Xbone are that big a profit-maker for AMD, really. I’m pretty sure the margins are razor thin to fit within the tight price envelopes the consoles have, so while it DOES sell a lot of AMD APUs, it doesn’t necessarily give AMD as much money as it would’ve made selling as many “normal” CPUs.

    • jrodman says:

      You don’t even need to care about the underdog. Two suppliers is just so much healthier than one.

  8. RPSRSVP says:

    It’s just Nvidia feeling a little nostalgic about the “good old days”: http://i.imgur.com/rl7JLpq.jpg

    I’ve made up my mind. I’ve had the following setup for a while: i5-2400, Intel H67, 2x4GB 1333MHz Ripjaws, 128GB Crucial M4, Antec 520W HCG, MSI 7950 Twin Frozr III, Q270 Catleap, MX Revo, Win8 64.

    I’ll wait till 14nm CPUs and 20nm GPUs to upgrade. By the time those become mainstream, Win9 should have a high adoption rate; maybe even 2160p monitors will enter the 3-digit price range.

  9. Furiant says:

    Congrats to Nvidia on the $6000 they make on those.

  10. edna says:

    Gaming rig on the cheap? Buy a 2500K off eBay for £100 and crank it up to 5GHz (or at least 4.5GHz). I know it isn’t the latest-generation chip, but it’s a fine overclocker and is only 3 years old. Got to be better than a new, albeit relatively cheap for a new chip, dual core.

    • SuicideKing says:

      But when you consider the rest of the platform, you’d want something newer.

      • edna says:

        Not sure I would/do. What do you gain on the newer platforms, really? Not being sarcastic, I honestly don’t know. My 2500k/5GHz outperforms my shiny Xeon PC at work.

  11. Widthwood says:

    Pentium Anniversary really won’t be suited for gaming.
    Dual core, even with fast cores, just won’t cut it any more once everyone optimises their engines for the new consoles.

  12. jrodman says:

    Pentium Anniversary reminds me of when chips had names instead of brands.

  13. particlese says:

    A friend gave me his 166MHz Pentium with MMX when he upgraded back in the day, and my dad and I replaced his 133MHz non-MMX chip with it. The sucker overclocked to 233MHz (board-limited), and it was awesome. I rocked Encarta 95’s MindMaze so hard with that thing. I was similarly delighted when a BIOS update let me turn my 2.8GHz Phenom II x3 into a 3.0GHz Phenom II x4 (or 3.4GHz if I set the fans to “loud” and bumped the voltages up a little) on my current gaming PC. I’m more concerned about noise level these days, but I do love playing around with clock speeds and whatnot. Hurray for Intel allowing a bit more of it!

    Also, is it just me, or does “Titan Z” sound like a Nintendo DS game? That one with all the adorable immortal giants…

  14. Keyrock says:

    Hopefully the “Improved TIM” on the Haswell refresh chips means “soldered on” rather than “we used slightly more cheap-ass paste”, as I’ll be building a new rig in 4 or 5 months, most likely, and will probably wind up getting a Haswell refresh. I don’t expect Broadwell to show up this year in desktop form; it just doesn’t make business sense. I expect to see some low-power variants maybe sneak out the door before year’s end, but I would be shocked if the desktop Broadwells arrive before 2015, and I just can’t wait until next year.

    • SuicideKing says:

      Well, the Iris Pro part is expected this year, and i saw a slide that indicated December 2014, so that seems likely.
