Jensen Huang says gamers are 'completely wrong' about DLSS 5 — Nvidia CEO responds to DLSS 5 backlash

https://lemmy.world/post/44405764


> “Well, first of all, they’re completely wrong,” Huang said in response to a question from Tom’s Hardware editor-in-chief Paul Alcorn about the criticism.
> “The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI,” Huang continued.

Just an elongated way to say AI slop.

So this dumb fuck’s own marketing material says this operates on final pixel colour and motion vectors (presumably for temporal stability), which tells me it isn’t working with actual geometry info at all. It probably has a step to infer geometry, but it’s still just a fancy Instagram filter working with limited data and an obviously ill-suited training set.

the previous versions at least need the software to supply motion vectors. otherwise it’s just guesswork. i’m assuming there will be some way to supply lighting information as well.
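for anyone wondering what "supplying motion vectors" actually buys you: the reprojection step walks each pixel back along its motion vector and fetches last frame's colour there, which is what makes the history usable instead of guesswork. a minimal numpy sketch (the function name and the "motion in pixels, frame N-1 to N" convention are my own, not anything from Nvidia's SDK; real implementations also bilinearly filter and reject disoccluded history):

```python
import numpy as np

def reproject_previous_frame(prev_color, motion_vectors):
    """Fetch last frame's colour at each pixel's previous position.

    prev_color:      (H, W, 3) colour buffer from frame N-1
    motion_vectors:  (H, W, 2) per-pixel (dy, dx) screen-space motion
                     from frame N-1 to frame N, in pixels
    Returns an (H, W, 3) history buffer aligned to frame N.
    """
    h, w, _ = prev_color.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Walk each pixel back along its motion vector to where it was
    # last frame, clamping at the screen edges (nearest-neighbour fetch).
    prev_y = np.clip(np.round(ys - motion_vectors[..., 0]).astype(int), 0, h - 1)
    prev_x = np.clip(np.round(xs - motion_vectors[..., 1]).astype(int), 0, w - 1)
    return prev_color[prev_y, prev_x]
```

without the game-supplied vectors you'd have to estimate this motion from the colour output alone, which is exactly the guesswork problem.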

whatever the final product can do, they certainly didn’t show it off in their examples.

Technically, at least on Vulkan, these things can be inferred or intercepted with just an injected layer, though it’s not trivial. If you store a history of depth buffers, you can compute a fairly accurate approximation of the mesh surfaces visible from the camera’s point of view. But that isn’t the same as the real polygons and meshes that the textures and everything map onto, and I’m pretty sure you can’t run that pipeline in real time even with tiled temporal supersampling. It almost certainly works on the output directly, perhaps with some same-frame buffers like motion vectors and depth, which they’ve needed since DLSS 2 anyway. So it’s pretty suspect to claim full polygons unless it’s running with tight integration from the game itself, and even then the frame budgets are crazy tight as it is, never mind running extra passes at that level.
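The "surfaces from depth" step above is just an inverse projection: unproject each depth sample back into view space and you get a point per pixel, which is a surface approximation but nothing like the original polygons. A rough numpy sketch, assuming a Vulkan-style [0, 1] NDC depth range; the function name and conventions are mine:

```python
import numpy as np

def view_pos_from_depth(depth, inv_proj):
    """Unproject a depth buffer to view-space positions.

    depth:    (H, W) NDC depth in [0, 1] (Vulkan convention)
    inv_proj: (4, 4) inverse of the projection matrix
    Returns (H, W, 3) view-space positions, one point per pixel.
    """
    h, w = depth.shape
    # Pixel centres mapped to NDC x/y in [-1, 1].
    xs = (np.arange(w) + 0.5) / w * 2.0 - 1.0
    ys = (np.arange(h) + 0.5) / h * 2.0 - 1.0
    ndc_x, ndc_y = np.meshgrid(xs, ys)
    # Homogeneous clip-space coordinate per pixel, then unproject.
    clip = np.stack([ndc_x, ndc_y, depth, np.ones_like(depth)], axis=-1)
    view = clip @ inv_proj.T
    return view[..., :3] / view[..., 3:4]  # perspective divide
```

The result is a point cloud of the visible front surface only; anything occluded or off-screen is simply not there, which is why this can never recover "full polygons."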
Probably not meshes, since that would be way too expensive. But these guys write the GPU drivers, so of course they have access to the various frame buffers, texture buffers, and light-source data. Just from depth and normal-map data you can get a good representation of the geometry. Deferred rendering, for example, lights the scene from the 2D images in the G-buffer, not from geometry.