EA’s Project Pica Pica leads new wave of photorealistic ray-tracing graphics demos at GDC 2018

Project Pica Pica

You love games. We love games. We love 2D games, 3D games, pixel games, life-like games, even slightly shonky-looking games. But what about if games looked, I don’t know, even better? Like, cinematic rendering, photorealistic kind of better? Well, Nvidia are on the case, as they’ve just announced their brand-new, not-at-all-incomprehensible “RTX Ray-Tracing” technology at GDC 2018.

RTX ray-tracing is something Nvidia’s been working behind the scenes on for the last ten years, and will be introduced as a new feature in Microsoft’s DirectX 12. I’m not going to attempt to explain exactly how it works (mostly because we’d be here all day), but you can read Microsoft’s thorough explainer over here. Just make sure you come back afterwards.

Put simply, ray-tracing can simulate light and shadow much more effectively than current graphics techniques like rasterization, allowing for more photorealistic graphics in games that look more like the flashy CG movies we’re used to seeing at the cinema. To really understand what that means, take a look at this demo video Nvidia’s put together with Alan Wake creator Remedy Entertainment.

You’ll no doubt be hearing about lots of new demos that employ ray-tracing this week, as Metro devs 4A Games, Epic and EA are all out in force with their ray-tracing-enabled goodies. EA in particular have shown off a new AI-driven experience from their indie program SEED called Project Pica Pica, which you can see below.

It’s not yet clear whether Project Pica Pica will be turned into a full game, but the video shows what kind of benefit ray-tracing can bring to a game’s overall look, producing more natural shadows and lighting as well as realistic reflections and depth-of-field effects.

At the moment, all of the GDC ray-tracing demos are running off Nvidia’s Volta-based GPUs rather than its consumer-orientated Pascal cards, so one would assume the technology still requires a heck of a load of graphical horsepower to actually pull off. Whether that means we’ll all have to buy new graphics cards to take advantage of it – perhaps one of Nvidia’s upcoming Turing graphics cards – remains to be seen, of course. But the fact that it’s coming to DirectX 12 suggests we’ll hopefully see it in some shape or form on the cards we have at the moment, possibly as one of the many additional graphics options available in a game’s settings menu.

That doesn’t mean AMD graphics card owners will be left out in the cold, though, as AMD’s said they’re still collaborating with Microsoft on supporting ray-tracing in DirectX 12 in the future. Indeed, AMD will be delivering a talk on real-time ray-tracing techniques at GDC tomorrow discussing exactly that, describing how it’s developing tools to work with existing renderers that will hopefully get ray-tracing into our hands as fast as possible.


  1. GallonOfAlan says:

    Nice. It’s using raytracing to fill in shadows and reflections on a conventionally rendered frame. We’re a good few years off 4K, 60FPS full raytracing yet I’d say.

    • Excors says:

      I’m not convinced that will ever really happen outside of tech demos. By the time GPUs are fast enough to raytrace an entire frame with today’s level of geometric complexity, they’ll be fast enough to render a far more complex scene with rasterization. People like fancy lighting (which raytracing is good at), but people also like seeing lots of animated high-polygon objects with long draw distances etc (which raytracing is bad at). Also people will always prefer cheaper lower-power hardware, so they’ll want games that look good by using all the available tricks to render as efficiently as possible, rather than by throwing enormous amounts of hardware at pure physically-accurate raytracing.

      It seems more likely that games will continue using a hybrid approach, where most of the geometric stuff is handled by rasterization, and raytracing is used to enhance stuff like shadows and other lighting effects that are very awkward with rasterization. The new hardware and APIs will make the raytracing parts more efficient and more convenient, but won’t fundamentally change the tradeoffs.

      • Stevostin says:

        Agreed, with a possible proviso: cloud gaming services like Shadow Blade will likely put more GPU power on people’s screens. A high-end PC with a state-of-the-art GPU for €360 a year, thanks to shared hardware, is a game changer.

        Also, it’s my understanding that ray tracing’s performance isn’t dependent on scene complexity, with every pixel rendered requiring just about everything to be computed – although maybe that’s an outdated conception of mine now.

      • Paj says:

        The demo looks great, but I agree with you. I’d much rather see the extra power being used for much more realistic physics, like fluid simulations that go beyond water textures or tessellated water surfaces. Stuff like this could improve graphical fidelity and also have big impacts on gameplay too.

        Some of the stuff they produce at SIGGRAPH for example is pretty mindblowing:

  2. Solidstate89 says:

    RTX ray-tracing is something Nvidia’s been working behind the scenes on for the last ten years, and will be introduced as a new feature in Microsoft’s DirectX 12.

    DXR and RTX are not the same thing. RTX is obviously nVidia-specific, but it isn’t becoming a DX12 specification. That would be DXR, which will be available to all GPU vendors over multiple architectural generations. Last I heard, RTX will only be available on nVidia’s new Volta architecture.

    It’s like nVidia’s CUDA compared to DirectCompute. They both accomplish the same thing except one will only run on nVidia hardware.

    • Vodka, Crisps, Plutonium says:

      On a side note: whenever I hear “Nvidia technology” I always remind myself of “Nvidia PhysX” and how great it looked in Borderlands 2 / Pre-Sequel, and how terrible the performance was on the brand-new GTX 1070 I bought specifically to enjoy higher-framerate action (also, how lousy the 2K/Nvidia tech support feedback was – both basically told me to sod off, as they won’t ever fix that).

      Nvidia Technology=Marketing Stunt without any stance for supporting it long-term or even mid-term.

  3. Babymech says:

    Well, I like games. I’m not sure I love them. They’re a little needy. If I fall asleep in the middle of a game and wake up three hours later, it’s still there, staring expectantly at me, waiting for my input. That never happens with a movie.

    • SanguineAngel says:

      Yes, but sometimes I feel… I feel like a movie just doesn’t care if I’m there or not. It’s only interested in me when it’s not getting enough attention – playing trailers at me whilst I’m in the middle of reading something, getting in the way of what I’m trying to look at.

      Games expect a lot more from you, but they sure appreciate you spending the time on them. Well, some breeds – I wouldn’t want to own a clicker. What’s even the point of those yappy little so-and-sos?

  4. Godwhacker says:

    Getting a bit of a ‘Rise of the Robots’ vibe from that trailer. Very pretty, but is it fun?

  5. Carra says:

    That Remedy demo looks really pretty. How long until we can’t tell the difference between a game and reality? Movie CGI is already there.

      • durrbluh says:

        If today’s movie CGI is indiscernible from reality, one needs to get their ass to an optometrist or a psychiatrist to get their prescriptions checked.

        • Lord Byte says:

          Open goal, man… The only CG you recognise is the stuff that doesn’t exist; all the rest is fucking CG biting you on the nose…
          link to fstoppers.com
          Yes, close-ups of living creatures are still extremely tricky, but there’s tons of CG in movies you didn’t even notice or expect.

        • K_Sezegedin says:

          Depending on what’s being shown it can be obvious in context, but anyone who can tell the difference between practical and cgi effects 100% of the time either worked on the film or is lying.

    • Don Reba says:

      I thought Quake 3 was near photo-realistic.

      • Skabooga says:

        In my memory, the CGI cutscenes for Warcraft II were so realistic that they could have been filmed on location with real actors; the only thing giving away their unreality being the orcs and giant turtles.

  6. mattevansc3 says:

    Can the software see the raytracing? It looks pretty but can an AI in a stealth game use the reflections from the demo to spot you? Can a bot spot your dynamic raytraced shadow and throw a grenade in that vicinity?

    Is it just a new particle effect, or will it have game-changing properties?

    • Don Reba says:

      Only if the bot has its own GTX 1080.

    • Excors says:

      From Microsoft’s description, that should be possible – it’s not fundamentally restricted to graphics. The game sends a representation of a 3D environment to the GPU, then asks the GPU to simulate some rays passing through that environment. The GPU works out what the rays hit and passes that information to some user-defined shader code running on the GPU. The shaders can do whatever you want, including writing to CPU-visible memory.

      Instead of sending rays out from the camera, you could send rays out from all the NPCs’ heads. Instead of computing a colour when a ray hits the player model, you could write an “I see the player” flag to memory, and have the AI software check that flag later.

      I don’t know if that approach would actually be sensible, but at least it should be possible; game developers have the opportunity to find clever ways to use the hardware for more than just graphics.
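
      The idea can be sketched in a few lines of CPU-side Python (purely illustrative – a real implementation would run as shader code on the GPU, and the function names and sphere-only scene here are made up for the example):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along a (normalised) ray to the first sphere hit, or None."""
    # Solve |origin + t*direction - center|^2 = radius^2 for the smallest t > 0.
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c  # direction is normalised, so the quadratic's a = 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def npc_sees_player(npc_head, player_pos, obstacles):
    """Cast one ray from the NPC's head towards the player; any obstacle
    hit closer than the player blocks line of sight."""
    delta = tuple(p - n for p, n in zip(player_pos, npc_head))
    dist = math.sqrt(sum(d * d for d in delta))
    direction = tuple(d / dist for d in delta)
    for center, radius in obstacles:
        t = ray_sphere_hit(npc_head, direction, center, radius)
        if t is not None and t < dist:
            return False  # the "I see the player" flag stays unset
    return True
```

      The GPU version would be the same test run over many rays in parallel, with the result written to a buffer the AI code reads back later.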

      • Don Reba says:

        I don’t think this approach would let a bot see the player’s shadow, though.

        • Excors says:

          That just needs a little extra work. When a sight-ray hits a surface, the shader can send another ray towards the light source. If the light-ray intersects the player, and doesn’t intersect anything else, then that point on the surface is in the player’s shadow.
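
          As a toy illustration of that shadow-ray test (illustrative Python, not real shader code – the sphere-based scene and function names are invented for the example):

```python
import math

def first_hit(origin, direction, spheres):
    """Index and distance of the nearest sphere a (normalised) ray hits."""
    best, best_t = None, math.inf
    for i, (center, radius) in enumerate(spheres):
        oc = tuple(o - c for o, c in zip(origin, center))
        b = 2 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4 * c  # direction is normalised, so a = 1
        if disc < 0:
            continue
        t = (-b - math.sqrt(disc)) / 2
        if 1e-6 < t < best_t:  # small epsilon avoids self-intersection
            best, best_t = i, t
    return best, best_t

def point_in_player_shadow(point, light_pos, player_sphere, other_spheres):
    """Shadow ray from a surface point towards the light: the point lies in
    the player's shadow iff the player is the first thing the ray hits."""
    delta = tuple(l - p for l, p in zip(light_pos, point))
    dist = math.sqrt(sum(d * d for d in delta))
    direction = tuple(d / dist for d in delta)
    idx, t = first_hit(point, direction, [player_sphere] + list(other_spheres))
    return idx == 0 and t < dist
```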

    • KenTWOu says:

      “Can a bot spot your dynamic raytraced shadow and throw a grenade in that vicinity?”

      It sure can, but players will find it extremely frustrating.

      Look, I’m a huge fan of stealth games, so I can tell you for sure that shadow detection is still a pipe dream of hardcore stealth fans. The thing hardcore fans don’t understand is that developers of stealth games have already tried to implement this mechanic and realised that ‘shadow management’ is extremely frustrating for players, because it’s too unpredictable, it’s hard to control, and the feedback is very unclear.

      Harvey Smith (Deus Ex, Dishonored), Randy Smith (Thief 1,2,3) and Clint Hocking (Splinter Cell 1, 3) discussed this problem during Q&A at the end of GDC talk ‘Would the Real Emergent Gameplay Please Stand Up?’ (0:56:30).

      Note that we’re talking about games that were released more than ten years ago, when in-game shadows and lighting weren’t that complex.

  7. int says:

    I’m not ready to eat dirt and hair–yet.

  8. automatic says:

    Oh, look, a breakthrough revolution to game graphizzzzzzzzzzzzzzzzzzzzz

  9. The Sombrero Kid says:

    If these demos were looking to convince developers to use ray tracing, they’d show progress towards a solution to the ray tracing problem, that is animating objects, particularly skeletal animation. All I see is the same small, static scenes we’ve always seen. PBR has removed quite a lot of the obtuse complexity of rasterisation, and you can achieve very close approximations of these scenes with PBR on modern hardware with spadeloads of resources left over to further improve the image. So the question remains – can we finally address ray tracing’s weaknesses? These demos don’t provide an answer. I don’t believe a hybrid model is viable: it only adds complexity, reduces the flexibility of the simulation and throws a bunch of new problems into the mix, all to take the lighting approximation up another increment. In exchange for that we’d have to give up the dynamism of the scene, and we’ve already lost far too much in that department in my opinion. So I’m skeptical but still on the fence.

    • Stevostin says:

      I didn’t know RT had an inherent weakness regarding animation. That would explain why no demo uses the go-to move of a first-person view moving through a room with mirrors – a big miss in every FPV game. Care to elaborate?

      • The Sombrero Kid says:

        I should say, the animation problem is solvable, I have my own ideas about solutions to the problem and there have been promising demos in the past but I would’ve expected to see these things in the demos.

        The problem amounts to the fact that ray tracing’s cost is generally sold on the value of divorcing rendering cost from model/scene complexity, theoretically allowing massively increased object detail. For that to be possible you need to move those massively detailed objects into position. This is traditionally done in a vertex shader on the GPU as part of the rasterisation process, and can be done the same way for ray tracing (without the projection), but it will still be fairly expensive if the models are as detailed as you’d hope. Without this, a ray tracing algorithm looks a lot like PBR but is much more expensive.
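
        The “move the vertices into position” step is classic linear blend skinning, which can be sketched in a few lines (illustrative Python, with a 2D rotation-plus-offset standing in for a full bone matrix):

```python
import math

def skin_vertex(v, bones):
    """Linear blend skinning on the CPU: each bone contributes a weighted,
    transformed copy of the vertex. bones = [(weight, angle, (ox, oy)), ...],
    where weights sum to 1 and each bone is a 2D rotation plus translation."""
    x = y = 0.0
    for w, angle, (ox, oy) in bones:
        c, s = math.cos(angle), math.sin(angle)
        x += w * (c * v[0] - s * v[1] + ox)
        y += w * (s * v[0] + c * v[1] + oy)
    return (x, y)
```

        Running this over every vertex of a highly detailed model, every frame, before the ray tracer can even start, is exactly the cost being described.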

      • Excors says:

        I think the basic issue is that you need to convert the scene from a simple list of polygons into a carefully-optimised spatial data structure that can be efficiently searched for ray intersections. That conversion is expensive, and needs to be partially recomputed whenever the list of polygons changes. With animated models you have to recompute on every frame, and that can become a performance bottleneck.
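
        A toy illustration of why: even the simplest acceleration structure – one bounding box per mesh – goes stale the moment vertices move and has to be rebuilt (illustrative Python; a real BVH also sorts and splits these boxes hierarchically, which is what makes per-frame rebuilds expensive):

```python
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box: the simplest spatial structure."""
    lo: tuple
    hi: tuple

def build_accel(meshes):
    """Rebuild the (toy) acceleration structure: one box per mesh,
    computed from its current vertex positions."""
    boxes = []
    for verts in meshes:
        xs, ys, zs = zip(*verts)
        boxes.append(AABB((min(xs), min(ys), min(zs)),
                          (max(xs), max(ys), max(zs))))
    return boxes

def animate(verts, dx):
    """Move a mesh along x; last frame's boxes no longer contain it,
    so build_accel must run again before tracing any rays."""
    return [(x + dx, y, z) for x, y, z in verts]
```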

        • The Sombrero Kid says:

          Yeah but that isn’t too distinct from PBR probe caches, which are used for a lot of the same calculations as ray tracing is.

          • The Sombrero Kid says:

            I should say, out-of-date PBR probe caches only affect shading – global illumination and reflections. An out-of-date ray tracing cache will render the object in the wrong place, so there’s no tolerance for the PBR trick of running cache generation at a much reduced frame rate, decoupling it from the frame rate altogether, or decoupling it from the world entirely in the cheapest implementations (ahem, Unity, ahem).

  10. criskywalker says:

    Now this is getting interesting. Imagine playing Portal with raytracing!

  11. nottorp says:

    Does the demo require an Origin account to view? How much do the loot crates cost?

  12. Aetylus says:

    “But what about if games looked, I don’t know, even better?”

    – My first decade of gaming had games moving from rubbish graphics to okay graphics, and it was wonderful.
    – My second decade of gaming had games moving from okay graphics to excellent graphics, and it was terrible, as the marketing people figured that graphics sold, huge resources were pumped into more graphics, game prices shot up, and game mechanics stagnated as the same game was reskinned year on year to avoid publisher risk while riding the graphics wave.
    – My third decade of gaming had games moving from excellent graphics to okay graphics, and it has been wonderful as it has resulted in development cost dropping to the point where we have the indie renaissance and tsunami of highly varied games arriving each week.

    So would it be better if games looked better?

    It depends.

    • edna says:

      Well, that’s exactly the thing. I agree. Just because we can render beautiful, detailed worlds doesn’t mean that we can make them quickly. It still takes a long time for somebody to model all the complexity necessary to create something convincing and/or amazing. The eye is good at spotting tiny visual hints, so the closer we get to realism, the faster the amount of detail that must be painted/mapped/modelled grows.

      What I find interesting about the subject of this article is that it is primarily about lighting calculations. Anybody who has mucked about with 3D rendering knows that a powerful engine that manages light well can add a huge amount of realism just through the lighting. A scene that used to require 20 carefully placed lamps and a whole lot of fiddling about with materials can look better nowadays with a single area light, such as from a window, and some physically-based materials properties. That’s 20 times less manual effort, because it has effectively been moved to the computer. Seems like the right way round to me.

      The example scenes above are relatively simple to produce (for the artist) and yet are deeply satisfying to watch. I’m thinking about the impact on something like Project Cars or Elite, where the modelling has already been done but better lighting could lift the effect considerably.

      So I’m excited about this development. It feels like it could add visual quality to our games without requiring another hike in production cost.

      How on earth they can make ray-tracing (which is processor intensive relative to all other methods) work quickly enough with a complex scene is beyond me. But if they can do it then I can see some great things coming out of it.

    • MajorLag says:

      Let us not forget that one of the best selling games of all time, let alone this decade, was intentionally designed to look like it was rendered with VGA.

    Graphics shmaphics… forsooth!

  13. Alien says:

    Imagine a Mario game with the graphics of Project Pica Pica…

  14. DatonKallandor says:

    I remember when someone wrote a <1mb FPS game with raytracing, during the era of Half Life 1. Looks like every buzzword and technology comes back eventually.

    Previously: Voxels.
    Next up: Raytracing.

  15. RaymondQSmuckles says:

    Pica Pica + Lego Games sounds like a nice combo to me.