Nvidia’s DLSS 5 Clarifications Backfire: It’s Just “Enhancing” a Screenshot, Not the Game


DLSS 5 off vs DLSS 5 on

In a desperate attempt to quell the growing backlash following the announcement of DLSS 5, Nvidia has been forced to reveal the technical guts of its controversial new technology. The answers, provided via email to YouTuber Daniel Owen, have done little to calm the storm. Instead, they have confirmed gamers’ worst fears: DLSS 5 is essentially an AI model looking at a flat screenshot and painting over it, with no real understanding of the 3D world you are actually playing in.

The saga began when Nvidia unveiled DLSS 5 alongside a demo running on two RTX 5090 cards. The tech community, already skeptical after the underwhelming reception of DLSS 4’s frame generation, immediately raised red flags. The promised “enhancements” looked less like optimization and more like a generative AI filter—one that appeared to add hair where there was none, apply makeup to traumatized characters, and generally reinterpret the artistic intent of game developers.

After a public outcry, Nvidia attempted to provide "additional info," but the statements were dripping with the kind of carefully crafted verbiage that left everyone more confused than before. Enter Daniel Owen, a YouTuber known for his technical deep dives. Instead of accepting the PR spin, he went straight to the source, contacting Nvidia’s Jacob Freeman to get a straight answer.

Nvidia chose to respond via email rather than a live conversation, a decision that now seems strategic given the bluntness of the answers.

It’s Just a 2D Frame

Owen started with the question that was burning a hole in the internet. Following hints from the analysis wizards at Digital Foundry, he asked if DLSS 5 was simply taking a 2D frame (a screenshot of what is on your monitor), combining it with motion vectors, and feeding it into a generative AI model to "enhance" it.

Nvidia’s answer was a single, devastating word: “Yes.”

To spell that out: DLSS 5 is not optimizing your game’s engine. It takes the final rendered image, discards the complex 3D data the GPU spent milliseconds calculating, and uses an AI to hallucinate a "better-looking" version of that image. The result is then composited over the game engine’s work.
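Taking Nvidia’s confirmed inputs at face value, the data flow looks roughly like this. A minimal Python sketch: `enhance_frame` and its contrast-stretch body are hypothetical stand-ins for the real (non-public) network; the point is what the function signature does and does not receive.

```python
import numpy as np

def enhance_frame(frame: np.ndarray, motion_vectors: np.ndarray) -> np.ndarray:
    """Stand-in for the generative model. Per Nvidia's emailed answer, the
    ONLY inputs are the rendered 2D frame and per-pixel motion vectors:
    no depth buffer, no geometry, no material data ever reaches it."""
    assert frame.shape[:2] == motion_vectors.shape[:2]
    # Placeholder "hallucination": a simple contrast stretch standing in
    # for whatever the real network infers from the flat image.
    mean = frame.mean()
    return np.clip((frame - mean) * 1.1 + mean, 0.0, 1.0)

# The engine's carefully computed output...
engine_frame = np.random.rand(270, 480, 3).astype(np.float32)
motion_vectors = np.zeros((270, 480, 2), dtype=np.float32)

# ...is simply replaced on screen by the model's reinterpretation.
final_frame = enhance_frame(engine_frame, motion_vectors)
```

Note what is absent from the call: the depth buffer, the G-buffer, the scene graph. Everything the engine knows about the 3D world is thrown away at this boundary.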

No Geometry, No Soul

The next question aimed to clarify the scope of the AI’s intelligence. Can DLSS 5 actually analyze 3D geometry and depth? Or is it just looking at a flat picture and making an educated guess?

Nvidia’s response was pure marketing dodge: *"DLSS 5 is trained end to end to understand complex scene semantics such as characters, hair, fabric and translucent skin along with environmental lighting conditions like front-lit, back-lit or overcast - all by analyzing a single frame."*

Notice the absence of the words "3D geometry." The answer confirms that DLSS 5 has no concept of the 3D space. It is inferring everything—lighting, material, depth—from a flat, 2D image. It is essentially a sophisticated "enhance" button on a photo editor, but running in real-time.

The "More Hair" Problem

If the underlying geometry and textures are unchanged (as Nvidia claims in its official primer), why did the demo footage show a character from Starfield seemingly growing extra hair on his temple?

Owen pushed Nvidia on this. How can the geometry be unchanged if the pixels on the screen are literally different?

Nvidia dodged the question entirely. They reiterated that the geometry is unchanged and reminded everyone that this was a "very early preview." Owen pressed further, arguing that while the geometry data in the engine may be safe, the final output on the screen—the image the gamer actually sees—can be whatever DLSS 5 deems "correct" through inference.

If an AI decides a character should have more hair, or a different nose shape, it will simply draw it.

PBR? More Like P-BS

In the official primer, Nvidia claimed DLSS 5 "enhances PBR properties on materials." PBR (Physically Based Rendering) is the gold standard for modern game graphics. It uses maps created by artists to tell the engine how rough or metallic a surface is.

Owen asked the critical question: Is DLSS 5 reading these artist-created PBR specifications (roughness, normal maps, etc.) from the engine? Or is it just looking at the final image and guessing what the materials should be?

Nvidia’s answer: “DLSS 5 only takes the rendered frame and motion vectors as inputs. Materials are inferred from the rendered frame.”

Owen argues that this clarification makes the original "enhances PBR properties" statement deeply misleading. DLSS 5 isn't enhancing the artist’s work; it is overwriting it with an AI’s best guess.
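The gap Owen describes can be made concrete. A hedged sketch of the mismatch, with all field and function names hypothetical (this is not any engine’s or SDK’s real API): a real renderer consumes artist-authored PBR maps, while, per Nvidia’s answer, none of them are forwarded to DLSS 5.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PBRMaterial:
    """Artist-authored inputs a PBR engine feeds its shaders.
    (Illustrative field names, not a specific engine's format.)"""
    albedo: np.ndarray      # base color texture
    roughness: np.ndarray   # 0 = mirror-smooth, 1 = fully diffuse
    metallic: np.ndarray    # 0 = dielectric, 1 = metal
    normal_map: np.ndarray  # per-texel surface orientation

def dlss5_inputs(rendered_frame, motion_vectors, material: PBRMaterial):
    """Per Nvidia's emailed answer, the material never makes it in:
    only the finished frame and motion vectors are passed along."""
    return {"frame": rendered_frame, "motion_vectors": motion_vectors}

tex = np.zeros((256, 256), dtype=np.float32)
mat = PBRMaterial(albedo=tex, roughness=tex, metallic=tex, normal_map=tex)
frame = np.zeros((256, 256, 3), dtype=np.float32)
mv = np.zeros((256, 256, 2), dtype=np.float32)

inputs = dlss5_inputs(frame, mv, mat)
# The artist's roughness and normal maps are simply absent from `inputs`;
# the model must guess the material from pixels alone.
```

Whatever roughness value the artist painted, the model only ever sees its downstream shading, then overwrites it with a guess.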

Who’s the Artist Here?

This leads to the most loaded question regarding artistic intent. Owen used the example of Grace from Resident Evil: Requiem. In the original scene, a character suffering from trauma appears without makeup, contributing to the somber tone. In the DLSS 5 demo, the AI rendered her with what looks like makeup.

Owen asked if developers can control this. Can they instruct DLSS 5 not to apply makeup? Can they preserve the emotional intent of a scene beyond just turning the effect down?

Nvidia gave a lengthy explanation about color grading, intensity sliders, and the ability to mask objects. But they did not provide a concrete way to prevent the AI from altering specific facial features or adding cosmetic details.

As it stands, developers are limited to blending the AI’s output with the original, or turning it off entirely. They cannot tell the model how to reinterpret a character’s face.
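The controls Nvidia described (a global intensity slider plus per-object masks) amount to a simple linear blend. A hedged sketch of that limitation, with all names hypothetical rather than a real SDK call:

```python
from typing import Optional
import numpy as np

def blend_dlss5(original: np.ndarray, ai_output: np.ndarray,
                intensity: float, mask: Optional[np.ndarray] = None) -> np.ndarray:
    """The only developer controls Nvidia described: blend weight and masks.
    There is no parameter for telling the model HOW to treat a region."""
    weight = float(np.clip(intensity, 0.0, 1.0))
    if mask is not None:
        # mask == 0 shields a region (e.g. a character's face) entirely
        weight = weight * mask[..., None]
    return original * (1.0 - weight) + ai_output * weight

original = np.zeros((4, 4, 3))   # the artist's intended pixels
ai_output = np.ones((4, 4, 3))   # the model's reinterpretation
face_mask = np.ones((4, 4))
face_mask[:2, :2] = 0.0          # exclude the top-left "face" region

blended = blend_dlss5(original, ai_output, intensity=0.5, mask=face_mask)
```

Notice the all-or-nothing shape of the API: a mask can protect a face wholesale, but there is no knob for "enhance this face without adding makeup," which is exactly Owen’s complaint.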

It Doesn’t Even See Ray Tracing

Perhaps the most technically damning revelation came last. Owen asked if DLSS 5 is limited to "screen space"—meaning it only sees what is currently visible on the monitor.

Nvidia confirmed it. DLSS 5 is indeed limited to screen space. This means it is not aware of off-screen light sources, reflections, ambient occlusion, or shadows.

So why would a gamer bother enabling hardware-intensive ray tracing if DLSS 5 is just going to ignore those complex calculations and draw its own version of lighting over the top? The demo, which required two RTX 5090 cards to run, certainly didn't help showcase efficiency.

A Glimmer of Hope?

Finally, Owen asked about the future. With Microsoft introducing DirectML for hardware-agnostic machine learning tasks, will Nvidia ever open-source DLSS to work on any hardware (like AMD or Intel)?

Nvidia’s response: “We have nothing to announce on this one at this time.”

While not a "no," it is a firm "not yet." For now, DLSS 5 remains a proprietary technology that is shaping up to be less about accelerating graphics and more about applying a locked-down, AI-powered Instagram filter to your games.

You can watch Daniel Owen’s full breakdown of the email exchange on his channel.

For the full text of the Q&A and more technical analysis, check out the source at VideoCardz.

As it stands, DLSS 5 is shaping up to be one of the most controversial graphics technologies in recent memory. By admitting it is just a 2D filter, Nvidia has validated the skepticism of the community. The question now is whether developers will actually want to use a tool that undermines their artistic vision and ignores the expensive hardware rendering they spent years optimizing.


