Week in Tech: Proprietary PC Tech and Nvidia

Last week we caught an early glimpse of Nvidia’s latest and greatest GPU design, known as Maxwell. We’ll have to wait a while to see what impact it has on true gaming PCs, but the sheer power efficiency of the new architecture certainly looks promising. Anywho, the Maxwell launch event was a chance to hook up with Nvidia and quiz them on a subject that’s been vexing me of late, namely the rise of proprietary gaming tech – well, mainly graphics tech – for the PC. What with Mantle and HSA from AMD, and G-Sync, 3D Vision and Shield-tethered game streaming from Nvidia, it feels like gaming hardware is becoming increasingly partisan. So what gives? Tom Petersen, Nvidia’s Director of Technical Marketing for GeForce, gave me the lowdown.

Right from the get-go, I’ve not much liked the cut of Nvidia GameStream’s jib. You need an Nvidia GPU in the host PC – fair enough. But not only do you need Nvidia tech in the client, it’s limited to just one Nvidia-produced device, the Shield handheld console. Yuck.

That’s a pity, because there’s some definite goodness in Nvidia GPUs when it comes to reducing streaming latency, in the form of ShadowPlay and the on-die hardware that enables it. If you’ve bought an Nvidia GPU, shouldn’t you be free to choose the client device? What with Valve recently releasing its much more open streaming tech in beta form, you are. And that leaves Nvidia’s locked-down end-to-end solution looking pretty cynical.

“Right now our streaming tech is end-to-end with Shield and a host PC based on Kepler,” says Petersen, “but Shield isn’t a requirement for playback, so who knows what might happen.”

Petersen reckons the current pure-Nvidia-end-to-end setup is actually justified. “We aim to guarantee an experience and the most important characteristic for streaming is latency. We’re using our encoder and decoder technologies to minimise latency. If we don’t control both ends of that – the host and the client – it’s harder to guarantee the experience.”

Nvidia here, Nvidia there, Nvidia bloody everywhere

There’s logic in what Petersen says, for sure. But I still think we should be able to decide just how much we want to invest in Nvidia’s efforts to reduce latency. Take Petersen’s logic to the extreme and Nvidia would only be selling Titan Blacks, because anything less is sub-optimal for gaming.

And what about G-Sync, another end-to-end Nvidia technology? Couldn’t Nvidia open up compatibility on the GPU side and still make money flogging G-Sync boards to monitor makers?

“Firstly, G-Sync is not a gift to humanity,” says Petersen. “But, of course, there’s a natural tension. If we’re doing G-Sync and it only works with our GPUs, isn’t that a barrier to adoption? I totally get that. But we’re carrying the ball with G-Sync. We’ve spent a tonne of money investing in it and developing it and we’re now building the modules with partners. If we don’t get any return for that, we’re less likely to do something like this again.”

“Generally, there are two ways to think about this. Are we trying to develop a business for monitor technology or are we trying to make it so that our GPUs are differentiated versus our competitors? It’s the latter.”

Refreshingly honest, then, and frankly hard to argue with. With that in mind, my remaining objections are cost and choice. G-Sync monitors are looking expensive and they limit your choice. Petersen says six major monitor makers, including Asus, ViewSonic, BenQ and Philips, are on board, so hopefully the choice part of the equation will improve.

As for cost, in so many words Petersen suggested the monitor makers see G-Sync as a nice opportunity to make some margin and that means opening high with pricing and adjusting down if the market won’t swallow it.

Nvidia G-Sync: Seriously smooth at just 40fps

He also said the added cost of G-Sync for a monitor that’s had its scaler board swapped out for a G-Sync board (rather than adding G-Sync alongside the existing board) is around $50. That sounds like a pretty big addition to the bill of materials to me, and that’s a shame, because G-Sync is a seriously nice technology. I had another chance to see it in action that day, and the way it makes gaming at circa 40fps look super slick is very impressive and could actually save you money. No longer do you need a £500 video card kicking out 100fps-plus for perfectly smooth gaming.
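To see why circa 40fps can look smooth with G-Sync but juddery on a fixed 60Hz panel, here’s a toy model (my own illustration, not Nvidia’s figures): with vsync, every finished frame has to wait for the next refresh tick, so a steady 25ms render time gets displayed as alternating ~16.7ms and ~33.3ms intervals, while a variable-refresh display simply scans out each frame as soon as it’s ready.

```python
import math

TICK = 1000 / 60  # 60Hz panel: one refresh every ~16.7ms

def present_intervals(frame_ms, n, variable_refresh):
    """Time (ms) between displayed frames for a steady render time of frame_ms."""
    finishes = [i * frame_ms for i in range(1, n + 1)]
    if variable_refresh:
        shown = finishes  # G-Sync-style: scan out as soon as the frame is ready
    else:
        # vsync on a fixed-refresh panel: wait for the next refresh tick
        shown = [math.ceil(t / TICK) * TICK for t in finishes]
    return [round(b - a, 1) for a, b in zip(shown, shown[1:])]

print(present_intervals(25.0, 6, variable_refresh=False))  # alternating ~16.7/~33.3ms judder
print(present_intervals(25.0, 6, variable_refresh=True))   # steady 25.0ms throughout
```

That alternating 16.7/33.3ms cadence is the stutter you see at 40fps on a normal 60Hz monitor; the variable-refresh column is what G-Sync delivers instead.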

While I had Petersen’s ear, I couldn’t help but poke him with the prospect of Mantle, AMD’s very own proprietary tech. What’s his take on Mantle and the promise of dramatically reduced CPU overheads in-game?

“The idea of reducing the overhead in abstraction layers throughout the software stack – that’s great. But there’s a trade-off between higher-level descriptions or layers and ease of use. Mantle as a concept sounds pretty good, but I don’t think the delivered experience so far has been that great. Microsoft may or may not view it as something to look at. If Microsoft does something with Mantle, we’d certainly support it.”

Anyway, the overall momentum at the moment definitely feels like it’s towards proprietary tech at the cost of open standards. Want Mantle and some general console-port goodness? You need AMD and GCN. Fancy G-Sync or low-latency game streaming? It has to be Nvidia end-to-end. Is that the way Petersen sees it?

“We love gaming and we’re always looking for ways to make our platform better but it has to make business sense. And I don’t think that’s incompatible [with proprietary technology]. People who value what we’re doing are happy to buy our products. If they don’t value what we’re doing enough, they’re not going to buy them.”

Ultra low-latency game streaming? Yes, please. Nvidia Shield? No thanks

That’s not to say there’s any stubbornness involved. “Shield is an example where we did a technology that’s compelling but we’ve changed the pricing as we figure out where it fits – we’re changing and we’re learning. The same thing is going to happen with G-Sync. If it turns out that we’re doing something stupid with our pricing or policy that’s preventing adoption, we’ll adapt.”

Overall, Petersen makes a pretty good case for Nvidia’s broader strategy when it comes to proprietary technologies like G-Sync and GameStream. Put simply, these technologies have to make money. It’s easy to get antsy about the steps companies like Nvidia take to make sure the things they invest in actually turn a profit. But when a guy like Petersen gives you straight, honest answers and doesn’t pretend that technological lock-in and end-to-end solutions are purely about looking after the customer, I tend to have more respect for what Nvidia is doing.

If there is a problem, it probably comes down to a lack of competition. With only two big players remaining in each of the core PC component markets – graphics and CPUs – market distortions are inevitable. But we should still be bloody grateful for what competition there is. Things could be an awful lot worse.


  1. SuicideKing says:

    Actually, Mantle’s days are numbered. Recent indicators are that MS and Khronos are working on lowering the overhead of DX and OpenGL, and Nvidia’s definitely helping out with OpenGL.

    AMD has FreeSync in the works too, and if the method is available to desktop users after DisplayPort 1.3 is implemented, then G-Sync’s days are numbered.

    • L3TUC3 says:

      Even so, Mantle shows that reducing overhead at the API level is a viable way of gaining ground in the fps-crunching game. It’s good to have competition that drives OpenGL and DirectX into reviewing their systems to see if similar gains can be made – it’s felt rather static for the last couple of years. MS has market share to lose if Mantle gains ground and gamers start moving to Linux-based OSes for the performance gain. As a pure gaming platform, Windows has a lot to lose.

      It’s actually quite interesting to observe and see how the market will adapt to the developments. AMD might not come out on top in the end, but they did throw the first punch.

      • FriendlyFire says:

        It’s only felt stale because of the last console generation, which still had DirectX 9-level graphics and APIs. DX11 implemented multi-threaded contexts, which is a HUGE deal, but until most engines are designed for DX10+ rather than ported to it, we’re not going to see much gain from those techniques, because the ancient engine is just getting shoehorned into a more modern API without taking advantage of the new features.

        Now that consoles are DX11-level, we’re going to see a progressive gain in performance and a much greater use of more advanced features such as tessellation.

        • Geebs says:

          I sincerely doubt it. Tessellation generates even more stuff for your video hardware to show, but the parts in the current consoles don’t really have the oomph to benefit from that. On the other hand they do have a unified memory architecture, so they might just push for keeping more detailed geometry in RAM.

          Tessellation makes sense on the PC because PCs have lower-bandwidth interfaces and smaller amounts of video memory, but much better actual graphics processors.

          • TacticalNuclearPenguin says:

            Sorry, but the APUs have nothing like more bandwidth. GDDR5 (the G is important) is not ideal for the CPU, and nothing even close to that 160-190 GB/s would ever be needed by it, especially when the CPU is clocked at a pathetic 1.8 GHz with a crappy core architecture on top.

            Then, moving to the GPU, that very same bandwidth is actually crappy – just when we were finally moving to something that could really use more of it! Too bad.

            Now, I understand my own GPU is maybe overkill (780 Ti), but let’s just use its bandwidth for comparison’s sake: around 350 GB/s when mildly overclocked, basically double the amount.

          • Geebs says:

            Your GPU’s internal memory bandwidth is not the same thing as your PCIe bandwidth, which is about 8GB/s for PCIe 2.1, with high latency. Which is why it’s a good idea to shove a smaller set of geometry to your (ridiculously) fast GPU and then tessellate it, vs. pulling a lot of geometry out of main memory across PCIe.

            Also, the RAM in your 780 Ti is… wait for it… GDDR5, and you only have 3 gigabytes of it.

          • SuicideKing says:

            Assuming you’re talking about the console APUs:
            1. Bandwidth is very important for the integrated GPU. Yes the cores are small but there are 8 of them, so with proper threading and proper coding you could make the CPU matter less. With an integrated GPU and unified memory access, you’re not sending stuff across the PCIe bus and you don’t have to duplicate data in separate address spaces. So, if you can feed the GPU fast enough, you should be good. Of course, the GPU itself should be capable enough to process that data fast enough.

            2. GDDR5 is tuned for high bandwidth, and is needed for the GPU. CPUs will like low-latency DDR3 more, but DDR3 bandwidth is usually too low for GPUs, which is why the Xbone needed an eSRAM (or was it eDRAM?) cache.

            3. On desktop APUs, it depends, really. GPU limited scenarios will be more bandwidth constrained, while at other times the CPU may prove too slow.

            4. The only reason you need LOTS of video RAM is for textures, especially at 1080p and above. Other data doesn’t take that much space.

            5. 8GB/s (in each direction) is for PCIe 3.0, not 2.x, which had/has 4 GB/s in one direction (and a total of 8, yes, if that’s what was being implied).
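
            For what it’s worth, peak per-direction PCIe bandwidth can be worked out from the transfer rate, the line encoding and the lane count – a quick editorial sketch of the arithmetic (an x16 link is assumed here):

            ```python
            def pcie_gb_per_s(transfers_gt, enc_payload, enc_total, lanes=16):
                # GT/s per lane * encoding efficiency -> Gbit/s per lane; /8 -> GB/s; * lanes
                return transfers_gt * enc_payload / enc_total / 8 * lanes

            print(pcie_gb_per_s(5.0, 8, 10))     # PCIe 2.x, 8b/10b encoding: 8.0 GB/s per direction
            print(pcie_gb_per_s(8.0, 128, 130))  # PCIe 3.0, 128b/130b: ~15.75 GB/s per direction
            ```

            Either way, the point both sides agree on stands: any PCIe figure is an order of magnitude below on-card VRAM bandwidth.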

          • Geebs says:

            If geometry was so cheap, why bother with tessellation? Why even bother with normal maps?

            That wasn’t really the point I was arguing though, which was that I don’t imagine that the APUs in the Xbone and PS4 are going to lead to some huge increase in uptake for PC-specific DirectX11 features, because their (at this point, pretty unimpressive) strengths are in other areas.

          • TacticalNuclearPenguin says:

            Geometry is not cheap on GPU processing, it has little to do with how much VRAM you use, which is mostly a matter of textures, resolution and AA. Tessellation is cheaper than doing the same thing with regular methods for the simple fact that it’s a pretty clever new technology which GPUs have to be able to support in the first place.

            Your argument doesn’t hold water, if the new consoles were to use much more raw geometry than a PC they would turn everything into a slideshow, because it simply is a matter of power.

            And all this debate about memory bandwidth vs PCIe’s own bandwidth and latency makes little sense as well, as there have pretty much always been tangible rewards in having more of both.

            Either way, my point was that GDDR5 is utterly terrible for CPUs (not GPUs). Its high numbers are due to its optimisation as VRAM; it’s not something that works properly with a CPU, let alone a terrible one which not only doesn’t have 8 real cores but rather 4 modules, but is also weak when it comes to IPC – and that’s before we even start talking about its clock speed.

            Don’t overestimate multithreading either, remember that gaming is a matter of syncing everything to the CPU’s main thread, which has to wait for everything else.

          • Geebs says:

            I think you’ve missed a couple of points, I’ll see if I can clarify my position a bit. Like I said, this is about whether adoption of DX11 features like tessellation is going to be driven by the new console generation.

            With respect to the importance of PCIe bandwidth – yep, you have a bunch of RAM in your GPU. Once things are on the GPU (the old paradigm of “textures live in GPU memory and you feed the vertices through from system memory” is out of date; these days you’re encouraged to encapsulate your geometry in buffer objects and push them through to VRAM) then things run quickly. However, getting things to and from your GPU is bottlenecked by your PCIe bandwidth and latency, which is orders of magnitude slower than either main RAM or VRAM.

            Geometry is much less expensive in terms of flinging polygons at the screen these days, and you’re right that it doesn’t often take up much of your VRAM in a video game. However, the amount of memory used for geometry isn’t insignificant, and it does scale geometrically. Say you want to represent a kilometre of terrain using only a height map: at a resolution of 1 metre (which will look terrible), and using 16-bit floats to represent position, normals and texture coordinates, you’re looking at about 24MB. Put that up to a resolution of 0.5m (which will still look terrible) and you’re talking about 96MB, because geometry is, well, geometric. At a resolution of 0.25m? About 370MB, and that’s just a heightmap.
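
            Those back-of-envelope numbers can be reproduced with a one-liner, assuming 24 bytes per vertex (twelve 16-bit values – my assumption, since the exact vertex layout isn’t stated) and one vertex per sample step:

            ```python
            def heightmap_mb(res_m, size_m=1000, bytes_per_vertex=24):
                # one vertex per res_m step across a size_m x size_m terrain
                n = int(size_m / res_m)
                return n * n * bytes_per_vertex / 1e6

            print(heightmap_mb(1.0))   # 24.0 MB
            print(heightmap_mb(0.5))   # 96.0 MB
            print(heightmap_mb(0.25))  # 384.0 MB -- quadruples with each halving of step size
            ```

            The last figure lands at 384MB in decimal megabytes, close to the ~370MB quoted above; the point is the quadratic blow-up, which is exactly the problem tessellation sidesteps.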

            That’s the reason why tessellation is a thing in the first place. Assets for video games are generated at higher polygon counts anyway, so why go to the bother of reducing the poly count and then increasing it again through tessellation? Because your lower-poly geometry can fit into VRAM alongside all your textures and what have you, and if you want to swap some geometry out there’s a much lower bandwidth cost for pushing the new geometry to VRAM. Because the PC’s actual graphics processing power is monumental, it works out cheaper to have your graphics card make up new geometry on the fly than to load it all in from system memory.

            i.e. tessellation solves a problem for PCs, not so much for consoles.

      • SuicideKing says:

        True, they helped push Microsoft and Khronos (of course, Nvidia as well, though their DX drivers already have lower overhead than AMD’s*) into action.

        This is what AMD has to say about it:
        “AMD supports potential DirectX update, will outline Mantle’s future at GDC”


        One thing we didn’t expect to see was Nvidia’s Direct3D driver performing so much better than AMD’s. We don’t often test different GPU brands in CPU-constrained scenarios, but perhaps we should. Looks like Nvidia has done quite a bit of work polishing its D3D driver for low CPU overhead.

        Of course, Nvidia has known for months, like the rest of us, that a Mantle-enabled version of BF4 was on the way. You can imagine that this game became a pretty important target of optimization for them during that span. Looks like their work has paid off handsomely. Heck, on the 4770K, the GTX 780 Ti with D3D outperforms the R9 290X with Mantle. (For what it’s worth, although frame times are very low generally for the 4770K/780 Ti setup, the BF4 data says it’s still mainly CPU-limited.)

        link to techreport.com

    • TacticalNuclearPenguin says:

      FreeSync is incredibly different from G-Sync and is reliant on a VESA standard that was created as a means of reducing power consumption by limiting pointless refreshes.

      G-Sync works via mutual communication between the GPU and the monitor’s module, whereas FreeSync is for now only possible on laptops that use their own embedded connection solutions, which are ironically more advanced than what desktop monitors can muster – let alone the fact that desktop scalers still don’t support dynamic VBLANK control.

      Furthermore, this control is done at the software level; it’s a prediction mechanism with its own overhead and complications. Either way, there’s a reason FreeSync was shown on laptops, and also that it didn’t impress that much. The underlying feature was developed before FreeSync and was never meant for gaming, which has its own set of needs and problems that have to be addressed very specifically – with absolutely precise control on a frame-by-frame basis being the most important one.

      • SuicideKing says:

        True, they’re not handled the same way, but there is an indication that the current eDP (embedded DisplayPort) standard (used in those laptops) will make it into the desktop DisplayPort 1.3 spec. Of course, most existing monitors would need a firmware update to support variable VBLANK.

        And I don’t remember reading that the FreeSync demo “didn’t impress much”; on laptops especially it could catch on sooner, seeing that they’re usually more frame-rate limited.

        BTW, I’m not saying AMD invented the tech behind it, just that they found another use for it. But that’s what makes it more feasible than G-Sync: it’s already part of a standard and can be adopted and implemented at negligible cost by anyone.

        Nvidia did good, but there’s also a case to be made that they just took what was already coming, maybe made it a bit better and slapped a proprietary sticker on it.

        It’s in everyone’s interest that OpenGL and “FreeSync” gain wider usage than Mantle (maybe even DX, since MS and PC gaming…) and G-Sync, just like an OpenCL based physics engine would be far more useful than PhysX.

        You could argue that G-Sync won’t split game devs like PhysX and Mantle, as the application is oblivious to its presence, but then if they know they have something like G-Sync, they’d be tempted to let FPS drop below 60 for those users, leading to a PhysX-like situation.

        • TacticalNuclearPenguin says:

          That’s perfectly fine – I mean, I don’t see a single problem with that becoming a reality, especially as I won’t be forced to buy a G-Sync compliant monitor, which would probably be a gaming-oriented one and thus not my first preference.

          Still, considering the important core difference between the two technologies, I am still interested in seeing more comparisons in the future, because I’m sure they’ll stand on very different levels of refinement. My biggest gripe with AMD is their wild claim that they simply developed a free G-Sync, which is untrue.

          Either way, this is a grey area, since given the current situation the only way G-Sync could work NOW – using a purely gaming-oriented, frame-by-frame accurate method – would be for Nvidia to have indeed spent a serious amount of cash.

        • phelix says:

          they just took what was already coming, maybe made it a bit better and slapped a proprietary sticker on it.

          Why does that remind me of Apple?

    • chris1479 says:

      I have been deeply unimpressed with Mantle on the R9 280X. I’ve read good things and middling things and bad things about it, but personally it runs worse than DX11.1, and according to the patch notes it’s still only “optimised” for the 270X and 290X… So yet again AMD leaves people swinging in the wind, waiting interminably for a driver update that actually does what it’s supposed to.

  2. GSGregory says:

    I am just getting tired of proprietary stuff everywhere – too few companies these days in any field, with no real competition.

    • rpsKman says:

      I have nothing against Nvidia, but I don’t need to be part of an ecosystem. Just make something I’ll want without the bullshit.

      “we’re dong enough”

  3. spaced says:

    Whenever a hardware executive starts a sentence with “We love gaming…” my bullshit detector goes off. How many of the people who work at these companies actually play PC games anyway? I’m sure they’re trained in what rhetoric to say to shield themselves from scary questions asked by gaming journalists, but still. If you need to tell everybody how much your company just loves gaming, it doesn’t really instill much confidence. Prove you love it by just making good products instead of trying to win a pissing contest with your competitor over who can make the next needless technology that no one will buy.

    • Metalmickey says:

      Based solely on him being an executive for a company that makes gaming hardware, I’d say that’s a little harsh. While some executives do only have business interests, others have worked their way up within an industry that they have a genuine enthusiasm for. While a tech company is more on the periphery of the gaming industry, it would take more than a statement like that to call BS on what seems to otherwise be a pretty decent interview. I regularly see far worse from other marketing execs anyhow. My friends and I love reading up on new hardware advances, and you could easily argue that the graphics cards are no more ‘needless’ than the games which use them.

    • Unclepauly says:

      I think needless is pretty off the mark. G-sync would be great if everyone could use it. No screen tearing and no latency at 40fps? Ehhh… yes please.

    • joa says:

      Who cares if they aren’t actually interested in PC gaming? Does it matter?
      If they came out and said “actually, most of us don’t care about gaming that much, we just like making money”, I wouldn’t care. To make money you have to make a good product. So the consumer benefits regardless of what the company’s true motivations are.

      • GSGregory says:

        That only works when there are other people to make good products.

        • joa says:

          This is true. However the customer is still the one with all the power (i.e. the power to buy or not buy). If a monopoly grows and is able to charge higher and higher prices, then customers only have themselves to blame.

          • GSGregory says:

            Not exactly. They shouldn’t be allowed to grow that big no matter what.

          • malkav11 says:

            Most monopolistic companies are part of multinational corporations that a) do so much business that they would have to be boycotted globally to feel it, and b) have many revenue streams such that if they care about maintaining control of a market, they don’t have to run a profit to do so. Furthermore, while gaming is a market that is inherently optional, there are plenty that cannot reasonably be avoided. Most people aren’t in a position to produce their own food, refine their own oil for fuel, heat their own homes, haul their own trash, make their own clothes, etc etc etc. One consumer making a stand makes no difference at all, and as we’ve learned with a certain Blizzard ARPG, among other fiascos, consumers at large absolutely cannot be relied upon to make decisions that are in their best long term interests.

            And there’s no advantage to anyone except the corporations in question in allowing them to make moves that shrink options that far, so why let them?

    • RaveTurned says:

      “We love gaming. It helps us ship loads of units!”

    • Warskull says:

      Of course Nvidia loves gaming. It’s what allows their company to continue existing. Without video games, Intel integrated graphics would be more than enough for the vast majority of people.

  4. Metalmickey says:

    “If they don’t value what we’re dong enough […]” Hee hee, you said ‘dong’ :-)

    • Convolvulus says:

      I’m good enough, I’m dong enough, and doggone it, people like me.

  5. JP says:

    “G-Sync is not a gift to humanity”

    This is always how this argument comes out, and it’s always about short-term shareholder value BS. OpenGL wasn’t a gift to humanity either, but it was the Right Thing and ultimately it has created a lot more value than even DirectX, when you consider all the areas where it’s been applied. Plus the fact that it will be around in some form long after MS has lost interest in DirectX, which was only created as an arm of their 1990s era monopoly strategy anyway.

    Companies with a cultural mistrust of the value of open standards are sad, and ultimately I think they’ll either change their tune or be left behind.

    • joa says:

      Is OpenGL really the example you want to pick for why open standards are good?
      It’s a very poor quality graphics API, far inferior to DirectX. It will die off when Linux dies off (i.e. when all the hipsters and idiots move to a real OS). Open source and open standards are passing fads which are completely at odds with business. You cannot build a business around openness. Business depends on innovation, and innovation has no worth if your competitors have access to it too.

      • Pliqu3011 says:

        Not so subtle troll is not so subtle.

        • darkChozo says:

          Nah, man. Open standards are a passing fad! Just like this newfangled Internet dohicky and this IBM computer.

        • TacticalNuclearPenguin says:

          He’s no troll, he really believes what he says. Which is worse, of course.

          Intel made x86 public, and that worked in their favour. By doing so, they ensured the market would keep following their model. Now they’re huge – not because they keep stuff to themselves, but because they build quality stuff.

          Anyway, OpenGL is being used for EVERYTHING bar the majority of PC gaming; the professional market would never touch D3D to save their lives. Microsoft spent a lot of effort pushing their own crap down everyone’s throat to reinforce their hegemony, and that’s the only reason this bloatware exists.

          Still, G-Sync is a different beast, so I don’t think it really fits this debate – for more information check my post near the top.

          • DrManhatten says:

            What bullshit are you talking about? x86 is not public. Maybe it was in the really early days (of the 8088 and 8086), when Intel was not strong enough to keep copy-cat chips out, but now it will unleash its intellectual-property lawyers on you if you try to adopt its instruction set. That’s why neither AMD nor VIA can move it anywhere else, as the only legitimate licensees were IBM and AMD.

          • SuicideKing says:

            AMD and VIA have an x86 licence. Intel has an x86-64 licence. ATX is what you’re thinking of.

      • GSGregory says:

        Phone service, electricity, the internet, HTML, Java, C/C++. Think about how things would be if those were all completely closed.

        • joa says:

          That’s different – it is necessary that these things be open, otherwise nothing would work. If the things you mentioned were closed, everyone would need to reinvent the wheel all the time. We need this baseline to work from. However, notice the things you mention aren’t innovative – they are static and baseline. The innovation is built on top of these things, and if that is open, then there is no longer a product.

          Think of web servers. There is Apache and nginx and so forth out in the open and free. There is no reason on earth anyone would pay for a web server now. So openness in this area has effectively killed off any business. I’m not saying that’s necessarily a bad thing – having a web server available for free is nice. However it’s not really a good thing either.

          • iniudan says:

            What are you talking about? There are still plenty of reasons to pay for a web server: cost, bandwidth, I/O capacity, latency, localisation of service, not having to bother with host configuration and maintenance.

            After all, if you’re going to host a web server externally, why bother renting a VM as a host that you’ll have to manage, when you can just rent a pre-configured solution that fills your needs?

          • darkChozo says:

            Different kind of web server. Web server refers both to the software used to field HTTP requests (like Apache) and the hardware you host that software on. You typically don’t pay for the former and do for the latter.

            Apache is also an open source software group, but that’s a confusing name issue unto itself.

          • malkav11 says:

            Phone service was completely closed for decades, and it thrived and spread. Monopolies aren’t bad for business nor do they stop things from functioning. They’re just really bad for consumers.

            Linux is completely open source and there are several companies that have made successful businesses out of selling it.

          • iniudan says:

            Actually, you’re not fully right, darkChozo – a web server on the software side is more than just an HTTP server. After all, there are plenty of other protocols for delivering content on the internet, and if we start looking at those, there is proprietary software available.

          • GSGregory says:

            The things I mentioned are baseline NOW. When we started using electricity there were two companies, one with AC and one with DC. Imagine today if Thomas Edison Inc. was a thing and only they could make light bulbs.

            I am not saying everything needs to be open, but we are getting to the point where almost nothing is. What happens when you have fewer companies and less openness is less innovation, because why should Nvidia even make new tech when they can just keep selling the same old thing? And if no one else can touch it and play with it, no one can innovate on it.

        • elderman says:

          As GSGregory says, the technologies that every business needs access to otherwise nothing would work (generally) got that way because the tech was widely available. The tech that got trapped behind proprietary paywalls early on didn’t become a platform for later development.

          Without Apache (and/or similar projects) there wouldn’t have been an open internet that in any way resembled the one we’ve had (I’m being careful about my verb tenses there), so yeah, I’m willing to assert that it’s A Good Thing.

          And phone technology was open first. The Master Switch by Tim Wu is a well-researched and accessible short history of development of communications monopolies in the 19th and 20th century United States. Essential reading, IMO.

          I know there’s a clear and compelling argument to be made that open platform hardware is a Good Thing generally speaking so that general computing machines remain general computing machines, and that this applies to graphics chips too, but I’m too tired, and maybe not well informed enough, to make it.

          [Ugh, reply fail. Meant to be down a comment level.]

      • PopeRatzo says:

        It will die off when Linux dies off (i.e. when all the hipsters and idiots move to a real OS).

        Oh no he didn’t…

      • RaveTurned says:

        “Open source and open standards are passing fads which are completely at odds with business. “

        …typed the person on a QWERTY keyboard (sold by a business) attached to his IBM PC (made from parts sold by multiple businesses) by a USB interface (designed and implemented by businesses), powered by mains electricity (generated, distributed and sold by businesses), before the message was sent over a number of TCP/IP links (run by several different businesses) over a variety of network hardware (sold by several different businesses) on the way to another PC (multiple businesses again) on another local internet connection (another business) to an HTTP web server (hosted by a business) running a WordPress-based blog about videogames, run by the good folks here at Rock Paper Shotgun Ltd, who are, in fact, a business.

        You may want to rethink your ideas about what openness really means, what businesses actually do, and the ways in which innovation relates to both of the above.

      • TacticalNuclearPenguin says:

        -wrong post, edited-

      • FriendlyFire says:

        I disagree with everything but your first bit. I really do think OpenGL has greatly suffered from trying to be so many things at once, and it ends up being an abomination of an API. Using it is always painful (not that DirectX is… enjoyable or anything, but it’s dramatically cleaner).

        Seeing the cries for more OpenGL only makes me cringe. I don’t want to touch the thing unless I absolutely have to!

        • malkav11 says:

          I don’t think anyone’s a particular fan of OpenGL in and of itself. But I hate seeing games tie themselves to Microsoft proprietary tech, especially as a former Mac user.

          • GSGregory says:

            Mac, Linux, Android and a ton of others. I have always thought it odd that people don't choose to reach the largest number of clients.

  6. BarryK says:

    Limelight works really well using nVidia's NVENC tech for streaming, and runs on pretty much everything at this stage despite being early in development. Even an Ouya can manage 1080p60 streaming with very little latency, thanks to the real-time hardware decoder in the Tegra 3. nVidia have only released a client for the Shield, but it's not locked to it – not yet, anyway.

    Steam's in-home streaming still requires the Steam client at both ends, so for the time being it's only going to work on a full x86 PC.

    • grundus says:

      Yeah, BarryK drew my attention to Limelight last time this topic came up; I tried it, and bloody hell is it impressive. Provided you're on a wired connection end-to-end and your Android device has a SoC that can do H.264 High Profile (or whatever it is) hardware decoding, like the Tegra 3 – so the Ouya is ideal, since it has both a Tegra 3 and an ethernet port – you can enjoy streaming as it is to the Shield (maybe even faster, since it's wired) without buying a Shield.

      It’s not as good as Steam’s home streaming in terms of image quality but the latency is as good, if not better, and that’s the key to successful streaming.

    • jimbonbon says:

      The NVENC engine was a damn good idea. NVIDIA GPUs were never bad at H.264, but having some dedicated hardware allows them to encode much quicker and without any impact on rendering. At the end of the day, though, it's just an H.264 stream, so it shouldn't be too difficult to create other hardware- or software-based receivers. Having said that, the NVENC engine is quick – NVIDIA claim it can encode 'up to 240 frames per second of 1920×1080 progressive video'.

      Streaming between a Kepler GeForce GPU and Shield is just part of the story though – they are re-using some of the same tech for GRID (both cloud gaming and shared graphics for virtual desktops).
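      As a rough sanity check on that quoted figure (taking the 240 fps claim at face value – this is back-of-the-envelope arithmetic, not a benchmark), a single 1080p60 game stream would only occupy about a quarter of the encoder:

```python
# Back-of-the-envelope numbers for NVIDIA's claimed NVENC throughput:
# "up to 240 frames per second of 1920x1080 progressive video".
width, height = 1920, 1080
claimed_fps = 240          # NVIDIA's quoted peak
stream_fps = 60            # a typical game-stream target

pixel_rate = width * height * claimed_fps   # encoder pixel throughput
headroom = claimed_fps // stream_fps        # how many 1080p60 streams fit

print(f"{pixel_rate / 1e6:.0f} Mpixels/s")  # 498 Mpixels/s
print(f"room for {headroom}x 1080p60")      # room for 4x 1080p60
```

      That headroom is presumably part of why the same silicon can be re-used for GRID, where one card has to encode several sessions at once.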

  7. Pliqu3011 says:

    “Want Mantle and some general console-port goodness? You need AMD and GCN.”
    Well, since Mantle is an open standard, it’s up to nVidia to implement their part – AMD certainly isn’t going to do the work for them. Calling it proprietary is therefore definitely not accurate.

    • darkChozo says:

      If Wikipedia is to be trusted, it’s proprietary at the moment. AMD has said it will be open but they haven’t delivered yet.

      • iniudan says:

        True, it is still closed, as development of it is not yet finished, and I would guess AMD still want to gain some advantage from developing an open standard by only publicly releasing it once version 1.0 is available. After all, that makes them the only one with compatible hardware until the competition adjusts to the requirements.

    • FriendlyFire says:

      The thing is, it’s extremely likely that this API will basically only be usable if you happen to use the GCN architecture. It’d be like Ford open-sourcing their engine API and saying “if you want those features, implement the API yourself!” when their engine API is entirely dependent on how Ford has designed their engines, what components they use, their internal computer, etc.

      It’s a way for AMD to claim they’re taking the high road and not creating a new proprietary thing.

      • GSGregory says:

        Won’t be any different than physx.

        • FriendlyFire says:

          PhysX is also not really popular and generally merely a bullet point. Mantle is positioned such that it cannot be anything like that since it completely replaces DX/OGL.

          • BarryK says:

            Mantle isn’t a DirectX or OpenGL replacement. It works alongside them, not replacing them.

        • iniudan says:

          Actually PhysX is different, as Nvidia have specifically blocked PhysX rendering on their GPU if an AMD GPU is present, since version 186 of the GeForce driver.

    • SuicideKing says:

      Mantle is not an open API; AMD are just "open to it" becoming one. No timeline given. They were very vague. It's even less open than Nvidia's bindless textures – at least the methods to implement those are already part of OpenGL, and I think DirectX too (but they only work with Kepler).

  8. 00000 says:

    I just don’t like nVidia’s approach. It’s terrible PR, and it needn’t be.

    Their plan is to licence or sell G-Sync chips anyway. So why not make them available to everyone, so G-Sync gets higher market penetration and the manufacturers actually mainstream the technology? They would sell fewer GPUs, but they'd get a higher return on G-Sync itself. They're trying to trap consumers, and the consumers can tell. If only they'd just present it as their gift to humanity, they might improve their image as the good guys and sell just as much.

    Their other option is to “pull an Intel”, and buy off all the manufacturers to put GSync in everything.

  9. nrvsNRG says:

    Wondering if I would notice much difference with G-Sync compared to my current setup of a BenQ 144Hz 1ms monitor together with my 780 Ti? If it comes along for the BenQ then I will have it installed, but as things are now it's already buttery smooth and tear-free.

    Jeremy, if you read this, can you tell me what price range you think the upcoming BenQ XL2420G (G-Sync) monitors will be? Also, do you think there will be a G-Sync kit available for the XL2420TE?

    • PopeRatzo says:

      BenQ 144Hz 1ms monitor

      I need one a those. You think you could explain to my wife why it’s more important than replacing our 15 year-old refrigerator? I’m getting nowhere.

    • Baines says:

      It sounds like Nvidia and the monitor makers plan to charge what they think/hope they can get away with, and only reduce the price if they decide not enough people are buying G-Sync monitors.

      In that regard, maybe it is worth waiting an extra year or two before buying a G-Sync monitor. At best, buying early will just have you buying tech before the quirks are worked out. At worst, you’ll be overpaying for it, encouraging companies to keep prices high, and encouraging Nvidia to keep it all locked down.

      • nrvsNRG says:

        Well, we’ve been able to buy the Asus VG248QE with Gysnc for a while now. It costs about £140 extra on top of what Ive already paid, so it really isnt that bad.

    • nimbulan says:

      Unless you are maintaining 144 fps at all times, it is definitely not buttery smooth (though it’s probably very close since microstutter becomes less prominent the higher the refresh rate is) and you will be able to tell the difference. It’s really amazing being able to keep that framerate = refresh rate perfect smoothness at any framerate.
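      A toy simulation makes the microstutter point concrete (the numbers are illustrative, and the vsync model is simplified): with a fixed 144Hz refresh, a frame that misses a refresh boundary waits for the next one, so display intervals occasionally double; with G-Sync-style variable refresh, the display interval simply equals the render interval:

```python
import math

REFRESH_HZ = 144
T = 1000 / REFRESH_HZ      # refresh interval in ms (~6.94 ms)

def display_times(render_ms, vsync):
    """Return when each frame appears, given per-frame render times in ms.

    With vsync, a finished frame waits for the next refresh boundary;
    with variable refresh (G-Sync-style), it is scanned out immediately.
    """
    t, out = 0.0, []
    for r in render_ms:
        t += r                                   # frame finishes rendering
        out.append(math.ceil(t / T) * T if vsync else t)
    return out

frames = [9.0] * 6                               # steady ~111 fps, below 144 Hz

def gaps(times):
    return [round(b - a, 2) for a, b in zip(times, times[1:])]

print(gaps(display_times(frames, vsync=True)))   # [6.94, 6.94, 13.89, 6.94, 6.94]
print(gaps(display_times(frames, vsync=False)))  # [9.0, 9.0, 9.0, 9.0, 9.0]
```

      Raising the refresh rate makes those doubled intervals rarer and shorter, which is exactly why microstutter is less prominent at 144Hz than at 60Hz – but only synced scan-out removes it entirely.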

      • nrvsNRG says:

        You don't need to keep frames over 144; it looks great even when it drops below. The difference coming from a 60Hz display is huge, especially in fast-moving games. I usually take it down to 120Hz and leave LightBoost on, which helps even more.

        • kael13 says:

          But what about the colours, man. The colours! TN panels give me tears!

          • nrvsNRG says:

            Lol, well it's actually pretty good for a TN! You need to add a new profile and mess with the settings to make it look good, but for gaming (which is what I use it for) it's second to none. Of course movies and such will look better on IPS.

    • Jeremy Laird says:

      I don’t know, but my impression is that G-Sync screens will not be cheap – at least not at first. If the market doesn’t swallow high pricing, that will change or the tech will simply disappear.

      • nrvsNRG says:

        Thanks. I assume they will initially be around £440-500 for the TN panel ones (same as the current Asus). I just wondered if you had any info on the BenQ and when it will be released. I know the Asus ROG Swift will be around $800, but that's the 1440p display.

  10. The Sombrero Kid says:

    Competition doesn’t drive innovation. It is however what’s driving these proprietary lock-ins. Bundling a monopolistic product with an uncompetitive one was unheard of 10 years ago. It’s rife now and it’s designed to shortcut the sole virtue of the capitalist model – products competing on merit with each other for consumers financial backing. It’s Utterly disgusting and to see nVidia lauded for brazenly admitting it & proves we’ve no interest in rectifying it.

    • malkav11 says:

      Are you familiar with Glide?

    • Baines says:

      It is the gradual return to the bad old days of computer hardware, and has arguably been happening for a while. It is just that the differences had been a bit less important until now.

      (Though “a bit less important” itself has been driven by game makers. Items like Physx are nice and beneficial, but publishers like to sell to the entire market and not just half of it.)

  11. caff says:

    I wonder if a good “adoption route” for GSYNC or similar technologies could be the Oculus Rift?

    After all, they are both emergent technologies that can benefit one another. Perhaps the Oculus Rift more so for GSYNC than the other way around?

    (As a Rift Dev Kit owner, I can testify that screen tear is an issue, amongst other things.)

    • FriendlyFire says:

      Carmack was at the NVIDIA presentation in Montreal which covered G-Sync, among other things. I’m pretty sure he was asked whether there were plans to integrate it into the Rift, but of course he was rather ambiguous at that point. The fact he was presenting for NVIDIA though is probably a good sign for it.

  12. Wedge says:

    I don’t know why it’s even worth reporting on core PC tech anymore. Nothing relevant has come about in the last three years. It’s just been updates that poke at the margins of efficiency. PC hardware appears to be done as far as I can tell, we’ve finished, we won the game, it’s over let’s go home.

  13. tehfish says:

    Hmmm… there’s propriety and there’s PROPRIETY in my opinion.

    Making a tech that runs well on your own hardware seems fair enough. (AMD)

    But Nvidia has *capslock-worthy* HISTORY of buying out open standards then using every dirty trick in the book to cripple it on any other system.

    So Nvidia has a lot to prove over AMD regarding open standards… Nvidia literally killed PC hardware physics for many years with it’s underhand propriety shenanigans…

    • iniudan says:

      Yes, and it is known that Nvidia specifically blocked PhysX processing on their GPU if an AMD GPU is present in the system, since version 186 of their driver.

      link to ngohq.com

  14. Ratchet says:

    All about the money with NVIDIA, this proves it pretty clearly (not blaming them, they are running a business afterall).

    Can you imagine how ridiculously expensive their stuff would be and how stagnant PC graphics would have become if they had no competition? We’d still be using Geforce hardware from 5 years ago and paying out the nose for it if not for AMD’s existence.

  15. DrManhatten says:

    Nvidia has always been about proprietary tech, right from the early days: it goes way back, from Cg shaders to CUDA, PhysX, and now G-Sync and ShadowPlay. They never learn!

    • DanMan says:

      But if you’re the first one to build stuff, it’s called innovation. Everything else is just politics.

      That’s the thing if you’re one of the biggest players in the market. You don’t have to negotiate, you just go ahead and do it. That may not be the kosher way to do it, but it’s less frustrating for the company doing it.

      • DrManhatten says:

        Except all of them have turned out to be failures. Cg disappeared pretty quickly. PhysX no one really cares about. CUDA is going to lose out to OpenCL. So will G-Sync and ShadowPlay.

  16. khomotso says:

    And so a resurgence in the PC gaming market comes as it veers into proprietary systems for the living room? Why is the PC different again?

  17. SuicideKing says:

    You know, i can’t blame Nvidia. They’ve spent money on the R&D, on people, and those people have invested hundreds if not thousands of hours to make something that works well. Of course it would be annoying for those people to go unpaid for that labour, or Nvidia to not get a return on investment.

    However, I don’t believe that consumers should be penalised for it, though. That said, neither CUDA, or G-Sync, or Shadow Play is standards essential, and as long as they don’t behave like Apple and go around preventing others doing the same thing, i’m fine with it. All of these technologies use specific Nvidia features (except G-sync, arguably), so that’s ok.

    However it splits the community when things like PhysX (or even Gsync) are involved, and going out of your way from preventing others interfacing with your technology isn’t a good thing.