In the fallout of Nvidia’s DLSS 5 showcase, which ended up seeing quite a bit of criticism from journalists and developers alike, the company has now revealed more details. Company evangelist Jacob Freeman answered some questions asked by YouTuber Daniel Owen, and discussed how the technology works, how it was trained, and even how it handles PBR (physically based rendering) materials.
The questions asked by Owen all revolve around the fact that DLSS 5 works by taking 2D frames, along with motion vectors, as the input to then generate an output frame. Freeman confirmed this in his response. The discussion then went into whether DLSS 5 just takes a 2D image of a game and “reinterprets” it, comparing the idea to asking an AI model to enhance any given screenshot from the game.
Freeman said that the technology has been trained by Nvidia to “understand complex scene semantics,” ranging from materials like hair and fabric, to entire entities like characters and background objects, as well as environmental lighting conditions. However, all of this understanding comes from the analysis of a single frame.
On the subject of whether DLSS 5 can actually make significant changes to things like character designs, Owen pointed to a screenshot from the showcase’s Starfield example, noting that the character’s hairline was noticeably altered by DLSS 5. He also brought up how Resident Evil Requiem‘s Grace Ashcroft had make-up added to her face. To this, Freeman simply responded that “the underlying geometry is unchanged. Also worth mentioning this is a very early preview of the tech.” In other words, Nvidia is only addressing the fact that the game’s model geometry isn’t changed, not how DLSS 5 alters characters’ features.
Similarly, DLSS 5 doesn’t appear to take any input from artists about the state of PBR materials. Instead, the technology infers what it can about the materials in a game scene from the frame and motion vectors it receives. “Materials are inferred from the rendered frame,” explained Freeman. Owen noted, however, that Freeman’s claim in an earlier email that the tech “enhances PBR properties on materials (roughness, more realism), with more realistic interaction of light” comes across as a bit misleading, given that no actual material data is involved.
While Nvidia has said that developers and artists will have control over three core aspects of DLSS 5 (intensity, color grading, and masking), it’s currently unknown whether they will have finer-grained control, such as simply preventing additional make-up from being applied to a character. Freeman reiterated the three core methods of controlling DLSS 5, noting that “developers can also mask specific objects or areas to be excluded from enhancement.” In other words, short of masking Grace out of DLSS 5 processing entirely in the Resident Evil Requiem example, developers don’t seem to have options like “disable make-up” available to them.
“We continue to talk to developers to understand all the ways they would like to control the technology,” he said. “Ultimately, we see DLSS 5 as a tool for them to achieve their artistic vision, rather than be limited by the capabilities of traditional real-time rendering.”
Digging further into the idea of DLSS 5 working off a single frame rendered by the game, Owen also asked whether the technology could work on off-screen objects. Unfortunately, Freeman couldn’t answer this, once more reiterating that it only takes the on-screen visuals and motion vectors from a single frame as input.
For more on the DLSS 5 saga, take a look at what an animator for Death Stranding 2: On the Beach has to say about the technology. Also check out a report that indicates that developers were essentially left in the dark about the technology before the showcase went live. In response, Nvidia CEO Jensen Huang has been on the defensive, saying that critics are “completely wrong” about DLSS 5.