Nvidia Announces DLSS 5, and it adds... An AI slop filter over your game

https://piefed.ca/c/games/p/593867/nvidia-announces-dlss-5-and-it-adds-an-ai-slop-filter-over-your-game


![Jyk0L8eLs7jd7es.png](https://media.piefed.ca/posts/Jy/k0/Jyk0L8eLs7jd7es.png) I'm completely speechless. This looks so terrible I thought it…

Isn’t DLSS by definition always an AI slop filter?
Not really. With DLSS the game renders at a reduced resolution and the upscaler then brings it back up to your target resolution. It does a pretty good job of making the game still look (almost) exactly the same. This, however, completely changes what you’re looking at.
DLSS is short for Deep Learning Super Sampling: it does the upscaling using deep learning, which is what people also call AI. The upscaler has to be trained on images, and depending on how you train it you either get something that looks almost exactly like the game at a higher resolution, or you get AI slop.
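To make the idea above concrete, here's a toy sketch of the render-low-then-upscale pipeline in plain Python. This is a naive nearest-neighbour filter, not anything like the actual DLSS network; it just shows the shape of the process (lose pixels, then reconstruct them), and the example frame is made up:

```python
# Toy illustration (pure Python, not real DLSS): take a frame,
# downscale it 2x, then upscale it back with nearest-neighbour
# interpolation. A learned upscaler plays the same role but
# reconstructs detail far better than this naive filter.

def downscale_2x(img):
    """Keep every other pixel in each dimension (crude 2x downscale)."""
    return [row[::2] for row in img[::2]]

def upscale_2x(img):
    """Duplicate each pixel into a 2x2 block (nearest-neighbour upscale)."""
    out = []
    for row in img:
        wide = [p for p in row for _ in range(2)]
        out.append(wide)
        out.append(list(wide))
    return out

# A 4x4 "frame" with a bright 2x2 block in one corner.
frame = [
    [9, 9, 0, 0],
    [9, 9, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]

restored = upscale_2x(downscale_2x(frame))
```

In this conveniently aligned toy case the round trip restores the frame exactly; real content isn't that forgiving, which is where the trained network earns its keep.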
I’m aware of how it works, but the results aren’t bad. Worst case scenario is you get some ghosting with DLSS, but it’s far from what I’d call AI slop.
But it literally follows the same process. Why is one slop but not the other? You’re being hypocritical.
One is upscaling the image while preserving it as much as possible, the other is applying a filter to try and “enhance” it. What’s hard to get?
How is “upscaling while preserving it” not literally the exact same philosophy as “enhance by applying a filter?” You just don’t like the specific filter, it’s very literally the same process.

Because a pixelated circle being upscaled is a circle, but a pixelated circle being turned into a high-definition pie is no longer a circle, and that’s especially problematic if the circle was just a crosshair or some other random circle-like thing the AI thought was meant to be a pie.

Yes, both things are the same, but that’s like saying that because you were okay with a tiny spider in your house killing mosquitoes, you should be okay with a colony of bats, since they are also animals that eat mosquitoes. Yes, both are “the same,” but the scale and the amount of intrusion are completely different.

If your training data has a pixelated circle as an input and a circle as output, your neural network will “upscale” your pixelated circle to a circle. If your training data has a pixelated circle as input and a high definition pie as output, your neural network will “upscale” your pixelated circle to a high definition pie. It’s the same algorithm in both cases.
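The point above (same algorithm, different training pairs, wildly different behaviour) can be sketched with a deliberately silly stand-in for a neural network. The "model" here is just a lookup table from input patch to training target; the patch names are invented for illustration, but the principle that the training pairs define the output is the same one the comment describes:

```python
# Toy sketch: one "learning" procedure, two training sets, two very
# different upscalers. A real upscaler is a neural network, not a
# lookup table, but the training pairs decide its behaviour either way.

def train(pairs):
    """'Learn' a mapping from low-res input to target output."""
    return dict(pairs)

def upscale(model, patch):
    """Apply the 'trained' model to an input patch."""
    return model[patch]

pixelated_circle = "pixelated circle"  # stand-in for a low-res patch

# Dataset A: targets are faithful high-res versions of the inputs.
faithful = train([(pixelated_circle, "high-res circle")])

# Dataset B: targets are "enhanced" images someone preferred.
sloppy = train([(pixelated_circle, "high-definition pie")])

# Same algorithm, same input; only the training data differs.
print(upscale(faithful, pixelated_circle))
print(upscale(sloppy, pixelated_circle))
```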
Yes, that’s precisely my point. The difference is in what the algorithm is trained to do: traditional DLSS uses the frame rendered at resolution X as the output and the same frame scaled down to X/2 as the input (for example), so it’s trained to upscale images, whereas this new thing uses who knows what as either input or output, and clearly produces something that is not an upscaled version of the frame.
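The training recipe described here (full-resolution render as target, the same render downscaled as input) amounts to building supervised pairs like this. A minimal sketch, with a crude every-other-pixel downscale standing in for whatever filtering a real pipeline would use:

```python
# Sketch of the supervised setup the comment describes: the target is
# the frame rendered at full resolution, and the input is that same
# frame scaled down by 2x. The downscale here is a crude decimation.

def make_training_pair(full_res_frame):
    """Return (input, target) for training an upscaler."""
    low_res = [row[::2] for row in full_res_frame[::2]]
    return low_res, full_res_frame

frame = [[1, 2], [3, 4]]
low, target = make_training_pair(frame)
```

Because input and target come from the same render, the network is only ever rewarded for reproducing the frame, never for "improving" it.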