
Nvidia head reckons we’ll have games where AI generates ‘every pixel in real-time’ in under a decade

“ChatGPT is not only practical; in most cases, it's better”

Preparing to have a conversation with an AI-generated chef in Nvidia's Ace tech demo
Image credit: Nvidia

Generative AI is one of the biggest debates raging across not just video games, but art and culture as a whole at the moment. Into that debate has waded the CEO of graphics card giants Nvidia to drop a prediction that can only be described as searingly flammable: we’ll see games where everything seen on-screen is fully generated by AI, in real-time, within the next 10 years.

Jensen Huang dropped the claim during Nvidia’s recent GPU Technology Conference, responding to a question from a journalist during a press Q&A session about how far we are from a world where “every pixel [of a game] is generated at real-time frame rates”. (Thanks, Tom’s Hardware.)

Huang proposed that “we’re probably already two years into” the ‘S curve’ for the widespread adoption and capabilities of generative AI, suggesting that most technologies become “practical and better” within a decade - though it might happen as soon as five years from now.

“And, of course, ChatGPT is not only practical; in most cases, it's better,” Huang continued. “I think it's less than ten years away. In ten years’ time you're at the other end of that S curve.

“So I would say that within the next five to ten years, somewhere in between, it's largely the case."

Nvidia, like many tech - and, indeed, non-tech - companies, are already playing around with the controversial use of AI and machine learning. On the high-concept end, this has included showing off tech demos where players could interact with NPCs in entirely AI-generated conversations, something the company described as “the future of games”. On the lower end of things, AI is already doing work in the GPU maker’s Deep Learning Anti-Aliasing (DLAA) and Deep Learning Super Sampling (DLSS) features: DLSS has the game render at a lower resolution and uses a neural network to reconstruct a sharper, higher-resolution image, while DLAA applies similar AI reconstruction at native resolution to smooth out jagged edges.
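For a loose sense of the problem these upscalers solve: the naive, non-AI approach just copies existing pixels to fill the gaps, with none of the predicted detail a learned model like DLSS adds. The snippet below is purely illustrative and bears no relation to Nvidia's actual (proprietary) implementation:

```python
# Illustrative only: a naive nearest-neighbour upscaler, the kind of
# simple interpolation that learned reconstruction aims to improve on.
def upscale_nearest(image, factor):
    """Upscale a 2D grid of pixel values by an integer factor,
    duplicating each source pixel into a factor x factor block."""
    return [
        [image[y // factor][x // factor]
         for x in range(len(image[0]) * factor)]
        for y in range(len(image) * factor)
    ]

# A 2x2 "render" blown up to 4x4: cheap, but blocky. The gap between
# this and a crisp native image is what an AI model tries to fill in.
low_res = [[0, 255],
           [255, 0]]
high_res = upscale_nearest(low_res, 2)
```

The blockiness here is exactly why a learned approach is appealing: rather than duplicating pixels, the network predicts what the missing detail most plausibly looked like.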


Of course, completely generating a game from a prompt - in the manner of generative AI tools like ChatGPT, with some models having already progressed to producing video clips in a matter of seconds - is very different from using machine learning to simply sharpen up assets that have been carefully crafted by teams of artists and programmers to begin with.

It’s not entirely clear whether Huang’s answer implies that AI tools like DLAA and DLSS will become so commonplace and powerful that AI generation takes over much of the rendering work from traditional GPU pipelines - still displaying top-end graphics built from human-made assets - or whether he means that entire games will be created on the fly in real-time. What is clear is that Huang believes AI is here to stay, and will only play a bigger part in video games in the years to come.

Given generative AI’s current penchant for regurgitating the work of artists in lesser form, without credit or compensation, let’s hope that whatever the next decade of AI looks like, it doesn’t come at the cost of the developers and creators who might otherwise stand to benefit - if this sensitive technology is handled with care.
