Nvidia Announces DLSS 5, and it adds... An AI slop filter over your game

https://piefed.ca/c/games/p/593867/nvidia-announces-dlss-5-and-it-adds-an-ai-slop-filter-over-your-game


![Jyk0L8eLs7jd7es.png](https://media.piefed.ca/posts/Jy/k0/Jyk0L8eLs7jd7es.png) I'm completely speechless. This looks so terrible I thought it…

Isn’t DLSS by definition always an AI slop filter?
Not really. DLSS mostly just renders the game at a lower resolution and then upscales it back up. It does a pretty good job of making the game still look (almost) exactly the same. This, however, completely changes what you’re looking at.
DLSS is short for Deep Learning Super Sampling: it does the upscaling using deep learning, which is what people also call AI. The upscaler has to be trained on images. Depending on how you train it, you either get something that looks almost exactly the same as the game at a higher resolution, or you get AI slop.
I’m aware of how it works, but the results aren’t bad. Worst case scenario is you get some ghosting with DLSS, but it’s far from what I’d call AI slop.
But it literally follows the same process. Why is one slop, but not the other? You’re being hypocritical.
One is upscaling the image while preserving it as much as possible, the other is applying a filter to try and “enhance” it. What’s hard to get?
How is “upscaling while preserving it” not literally the exact same philosophy as “enhance by applying a filter?” You just don’t like the specific filter, it’s very literally the same process.

… How is flying a spaceship different from driving a car? They’re both controlled applications of kinetic energy to move people or objects.

At the end of the day, it’s all a pile of transistors and the only thing that is of import is the intent behind usage.

In one case it’s saying you can use a neural net to take something rendered at resolution A/4 and make it visually indistinguishable from the same render at resolution A.
The other is rendering something and radically changing the artistic or visual style.

Upsampling can be replicated, within some margin, by lowering the frame rate and letting the GPU work longer on each frame. It strives to restore, by guessing, detail that was left out in order to render quicker.
You cannot turn this feature off and get similar results by lowering the frame rate. It aims to add detail that was never present by guessing.

Upsampling methods have been produced that don’t use neural networks. The differences in behavior are in the realm of efficiency, and in many cases you would be hard pressed to tell which is which. The neural network is an implementation detail.
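To illustrate the point that upscaling needs no neural network at all: here is a minimal sketch of classic bilinear interpolation in pure Python, operating on a grayscale image stored as a nested list. (The function name and the tiny 2×2 example image are made up for illustration; real engines use sharper filters like bicubic or Lanczos, but the principle is the same.)

```python
# Minimal non-neural upscaler: bilinear interpolation on a grayscale
# image stored as a nested list of pixel values. No training, no
# network -- just a weighted average of neighboring source pixels.

def bilinear_upscale(img, factor):
    h, w = len(img), len(img[0])
    out_h, out_w = h * factor, w * factor
    out = []
    for y in range(out_h):
        # Map the output row back into source coordinates.
        sy = min(y / factor, h - 1)
        y0, y1 = int(sy), min(int(sy) + 1, h - 1)
        fy = sy - y0
        row = []
        for x in range(out_w):
            sx = min(x / factor, w - 1)
            x0, x1 = int(sx), min(int(sx) + 1, w - 1)
            fx = sx - x0
            # Blend the four surrounding source pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

small = [[0, 100],
         [100, 200]]
big = bilinear_upscale(small, 2)  # 4x4 result, smoothly interpolated
```

A filter like this can only blend detail that is already in the frame; a neural upscaler is trained to guess plausible detail, which is why it can look better (or, depending on the training, like slop).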
In the other case, the changes are broader than non-AI techniques can easily capture. The generative capabilities are central to the feature.

Process matters, but zooming out too far makes everything identical, and the intent matters too. “I want to see your art better” as opposed to “I want to make your art better”.

What…? It’s more like a chemical vs a nuclear rocket. You’re not even comparing the same thing, while these are both neural things with different aims. You don’t like this one, so suddenly it doesn’t meet your arbitrary conditions to be acceptable, so now you’re coming up with incorrect analogies to try and make a point. Great job!

And you didn’t even read past the first sentence I see.

Saying they’re the same because they both use a neural network is roughly equivalent to saying things are the same because they’re both manipulating kinetic energy.