Nvidia Announces DLSS 5, and it adds... An AI slop filter over your game

https://piefed.ca/c/games/p/593867/nvidia-announces-dlss-5-and-it-adds-an-ai-slop-filter-over-your-game


![Jyk0L8eLs7jd7es.png](https://media.piefed.ca/posts/Jy/k0/Jyk0L8eLs7jd7es.png) I'm completely speechless. This looks so terrible I thought it…

Isn’t DLSS by definition always an AI slop filter?
Not really. DLSS mostly just renders the game at a lower resolution and then upscales it back up. It does a pretty good job of making the game still look (almost) exactly the same. This, however, completely changes what you’re looking at.
DLSS is short for Deep Learning Super Sampling: it does the upscaling using deep learning, which is what people also call AI. The upscaler has to be trained on images. Depending on how you train it, you either get something that looks almost exactly like the game at a higher resolution, or you get AI slop.
I’m aware of how it works, but the results aren’t bad. Worst case scenario is you get some ghosting with DLSS, but it’s far from what I’d call AI slop.
But it literally follows the same process. Why is one slop, but not the other? You’re being hypocritical.
One is upscaling the image while preserving it as much as possible, the other is applying a filter to try and “enhance” it. What’s hard to get?
How is “upscaling while preserving it” not literally the exact same philosophy as “enhance by applying a filter”? You just don’t like the specific filter; it’s very literally the same process.

Current DLSS intent: We can only render this at like 720p with enough frames, so let’s do that and use AI anti-aliasing tricks so that when we present it at 4k, none of the jaggies are visible on-screen like they would be with raw 720p upscaling.

DLSS5 intent: Using our pile of stolen artwork neural net that we can now render at 60fps+ let’s “reimagine” the entire look of the game as we present it on screen, even if it was already running at 4k just fine.

Ideally you’d have a DLSS-like system trained specifically for one game instead of a general system. Then you could train it on 4k at the highest settings, and you should get something that doesn’t mess with the style of the game.
Yep. Maybe it could actually be “modules” that the individual devs submit with their game, essentially.
You’re describing what DLSS 1.0 was, I believe.
Yeah, but they did that for like two games.

… How is flying a spaceship different from driving a car? They’re both controlled applications of kinetic energy to move people or objects.

At the end of the day, it’s all a pile of transistors and the only thing that is of import is the intent behind usage.

In one case it’s saying you can use a neural net to take something rendered at resolution A/4 and make it visually indistinguishable from the same render at resolution A.
The other is rendering something and radically changing the artistic or visual style.

Upsampling can be replicated within some margin by lowering framerate and letting the GPU work longer on each frame. It strives to restore detail left out from working quicker by guessing.
You cannot turn this feature off and get similar results by lowering the frame rate. It aims to add detail that was never present by guessing.

Upsampling methods have been produced that don’t use neural networks. The differences in behavior are in the realm of efficiency, and in many cases you would be hard pressed to tell which is which. The neural network is an implementation detail.
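To illustrate that first claim, here is a minimal sketch of a non-neural upscaler: plain bilinear interpolation over a grayscale image represented as a list of rows of 0–255 values (a toy stand-in, not how any shipping upscaler is implemented).

```python
# Minimal non-neural upscaler: bilinear interpolation, no ML anywhere.

def bilinear_upscale(img, factor):
    """Upscale a 2D grayscale image (list of rows) by an integer factor."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h * factor):
        # Map the output pixel back into source coordinates.
        sy = min(y / factor, h - 1)
        y0 = int(sy)
        y1 = min(y0 + 1, h - 1)
        fy = sy - y0
        row = []
        for x in range(w * factor):
            sx = min(x / factor, w - 1)
            x0 = int(sx)
            x1 = min(x0 + 1, w - 1)
            fx = sx - x0
            # Blend the four surrounding source pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bottom * fy)
        out.append(row)
    return out

tiny = [[0, 255], [255, 0]]
big = bilinear_upscale(tiny, 2)  # 2x2 checker -> 4x4 with smoothed edges
```

Neural upscalers compete with this kind of filter on sharpness and temporal stability, but the goal is the same: reconstruct the frame, not reinvent it.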
In the other case, the changes are more broad than can be captured by non AI techniques easily. The generative capabilities are central to the feature.

Process matters, but zooming out too far makes everything identical, and the intent matters too. “I want to see your art better” as opposed to “I want to make your art better”.

What…? It’s more like a chemical vs. a nuclear rocket. You’re not even comparing the same thing, while these are both neural-network techniques with different aims. You don’t like this one, so suddenly it doesn’t meet your arbitrary conditions to be acceptable, and now you’re coming up with incorrect analogies to try and make a point. Great job!

And you didn’t even read past the first sentence I see.

Saying they’re the same because they both use a neural network is roughly equivalent to saying two things are the same because they’re both manipulating kinetic energy.

Because a pixelated circle being upscaled is still a circle, but a pixelated circle being turned into a high-definition pie is no longer a circle. And that’s especially problematic if the circle was just a crosshair or some other random circle-like thing the AI thought was meant to be a pie.

Yes, both things are the same, but that’s like saying: you had a tiny spider in your house and were okay with it because it killed mosquitoes, so you should also be okay with a colony of bats, since they’re also animals that eat mosquitoes. Yes, both are “the same”, but the scale and the amount of intrusion are completely different.

If your training data has a pixelated circle as an input and a circle as output, your neural network will “upscale” your pixelated circle to a circle. If your training data has a pixelated circle as input and a high definition pie as output, your neural network will “upscale” your pixelated circle to a high definition pie. It’s the same algorithm in both cases.
Yes, that’s precisely my point. The difference is in what the algorithm is trained to do: traditional DLSS uses the image rendered at resolution X as the output and the same image scaled down to X/2 as the input (for example), so it’s trained to upscale images, whereas this new thing uses who knows what as its training data, and clearly outputs something that is not an upscaled version of the frame.
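The point above can be sketched with a toy example (all the names and "images" here are hypothetical stand-ins, and a real upscaler generalizes with a neural network rather than memorizing): the "learning" procedure is identical in both cases, and only the training targets differ.

```python
# Toy sketch: the same training algorithm produces an "upscaler" or a
# "reimaginer" depending only on what targets it was trained against.

def train(pairs):
    """Return a trivial memorizing 'model' mapping inputs to targets."""
    table = {inp: out for inp, out in pairs}
    return lambda x: table[x]

# Same low-resolution input in both training sets...
pixelated_circle = "pixelated circle"

# ...but different targets: a faithful upscale vs. something else entirely.
upscaler = train([(pixelated_circle, "sharp circle")])
reimaginer = train([(pixelated_circle, "high definition pie")])

print(upscaler(pixelated_circle))    # sharp circle
print(reimaginer(pixelated_circle))  # high definition pie
```

Same code, same "architecture"; the training data alone decides whether the output preserves the original or replaces it.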
Not all answers are easy. This new DLSS looks like it was trained on stolen work. Old DLSS had a neural network that was tuned before the plagiarism machine became popular.
Piracy is not stealing.
It is when it’s used by corporations for profit, IMO. Not for individual private enjoyment.

Oh yeah? Well vegatables are both in pig troughs and on dinner plates. Why’s one slop and not the other? They were grown with the same process!

Because one is shitty and the other isn’t.

If the vegetables are the same, they aren’t slop. Pigs aren’t fed fresh vegetables; they’re fed rotten ones. Your analogy doesn’t work, if you actually comprehend the basics of it…

If the vegetables weren’t rotten, then yeah, most people would eat the “slop”, since it’s just vegetables. Would you let good food go to waste just because of the “name” you’re arbitrarily and incorrectly using for all pig feed?

Are you really asking why compressing and uncompressing art made by a human being is different from slop produced by the slop machine?

One exists to reconstruct an image as closely to the original as possible while saving space, the other is meant to insert arbitrary changes to the initial image and produce something else.

I don’t like AI, but Christ, Lemmy is getting annoying lately with kneejerk “slop” claims for anything with the letters AI in it. A lot of this stuff has been used for ages, and yeah, they’re leaning into the current hype, but the overreaction is just ridiculous (see: the “open slop” list of open source projects that includes those that have the audacity to allow developers to use AI line completion).

It genuinely diminishes actual concerns with AI tech when people are losing it over things that existed long before the current bubble but just have AI™️ on the package now.

Deep learning isn’t really the same thing as a large language model. People call LLMs AI.
LLMs were (and are) marketed as “AI”.
I agree. DLSS isn’t, though. It’s not AI. Deep learning is like a close cousin.
LLMs aren’t the only type of AI…
AI isn’t real; I’m just saying what people call AI is pretty much LLMs. No one looks at DLSS and says “that’s AI”.

“Science fiction AI” isn’t real. AI is most definitely a thing. From the Oxford dictionary

artificial intelligence = the study and development of computer systems that can copy intelligent human behaviour

By definition, a chess program is AI.

pretty irrelevant to my point, honestly

AI isn’t real

It is real. Better like this?

Your point being you’re making up your own definition of ai…?
you’re out of touch with your neighbors
DLSS actually uses Machine Learning models to do the upscaling, so in fact there is no AI Slop here.

Not really,

Nvidia just calls everything DLSS…

Like, it’s basically an anthology label at this point. If they think it’s a good idea, they call it DLSS #.

For example DLSS 4 was frame generation, nothing to do with super sampling.

You could call it temporal super sampling.

It does a pretty good job of making the game still look (almost) exactly the same

Isn’t that just displaying the image with extra steps? Why is my PC using all this extra processing power in order to make it look (almost) exactly the same?

I think that’s accurate. It’s making something out of nothing, which will certainly be graphics but not necessarily exactly what the game is supposed to look like.
No, and if that’s your opinion you don’t know what DLSS is
While it may have used machine learning, it was definitely not in the “slop” category. I generally think of slop as things which try to imitate some kind of creative or human element (like the enhancements from DLSS 5), but FSR and earlier DLSS used machine learning to replace anti-aliasing like MSAA, etc., through super-sampling and temporal technologies (frame gen kinda sucked, though). So, to answer your hopefully literal question: DLSS has, in the past, not been an AI slop filter.
Yes, Jensen Huang recently tried to defend it.