Nvidia Announces DLSS 5, and it adds... An AI slop filter over your game
At least 2 layers.
LLMs don’t think. They copy-paste whatever has been found repeatedly in the data they were trained on: statistical probability of words going with other words. Hell, they don’t even know what words are, much less what they mean. So they’re at least two layers removed from the truth: one being the layer you pointed out, and another being an amalgamation (mishmash) of the data they were trained on.
I get that Lemmy hates AI, and I’m not going to try to talk you out of that, but please stop repeating this factually incorrect myth. LLMs are not stochastic parrots, despite what you may have heard. And they do think… to a degree. Note that they’re by no means everything CEOs and tech bros want them to be, but if you’re going to criticize them, please do it accurately.
They do know the meaning of words, but only in relation to other words. It’s how they work. It’s not a statistical thing like word-frequency patterns; they’re not doing the same thing autocomplete does. Instead, they’re doing math on words in a high-dimensional vector space (hundreds to thousands of dimensions in practice), where placement in that space indicates the meaning of the word: one vector direction indicates plurals, another indicates rudeness or politeness, another indicates frog-like, another might indicate related to 1993 Intel Pentium CPUs, and so on.

The model developed this space via training on terabytes of text, but it’s not storing a copy of that text, nor looking it up, nor copying anything from it. It’s defining words based on how they are used, then doing math on those definitions to figure out the most appropriate thing to say next: not the most likely thing according to raw word statistics, but the most meaningful thing based on the definitions of the words it understands.
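To make that concrete, here is a minimal sketch with made-up toy numbers. Nothing in it comes from a real model: the vocabulary, the 3-dimensional vectors, and the context vector are all hypothetical, and a real model puts a trained network between the context and the scores. It only shows the shape of the next-word step:

```python
import numpy as np

vocab = ["frog", "pond", "cpu", "the"]
embeddings = np.array([
    [0.9, 0.1, 0.0],   # frog
    [0.8, 0.2, 0.1],   # pond (near "frog": related meanings)
    [0.0, 0.1, 0.9],   # cpu  (far from "frog")
    [0.3, 0.3, 0.3],   # the
])

# Pretend the model has read "the frog sat by the ..." and summarized
# the meaning so far into this context vector (hypothetical numbers).
context = np.array([0.85, 0.15, 0.05])

# Score every vocabulary item by similarity to the context, then turn
# the scores into a probability distribution with a softmax.
scores = embeddings @ context
probs = np.exp(scores) / np.exp(scores).sum()

for token, p in zip(vocab, probs):
    print(f"{token}: {p:.2f}")  # "frog" and "pond" come out on top
```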
They really do not copy and paste. They do use definitions. They do think about the words in a very real way.
They don’t apply logical consistency and fact-checking. There are hacks to make them talk to themselves so that following the meaningful definitions of words is more likely to lead to fact-checking and logical consistency, but it’s not 100% foolproof.
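As a very rough sketch of what one of those hacks looks like (the `generate` helper below is hypothetical, a stand-in for whatever LLM completion API you use; the point is only the draft/critique/revise loop):

```python
# `generate` is a hypothetical stand-in for any LLM completion call;
# it is not a real library API.
def generate(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API of choice here")

def answer_with_self_check(question: str) -> str:
    # Draft an answer, have the model critique it, then revise. None of
    # this guarantees correctness; it only makes the follow-on text more
    # likely to surface contradictions and factual slips.
    draft = generate(f"Question: {question}\nAnswer:")
    critique = generate(
        "Check this answer for factual errors and logical inconsistencies.\n"
        f"Question: {question}\nAnswer: {draft}\nCritique:"
    )
    return generate(
        f"Question: {question}\nDraft answer: {draft}\n"
        f"Critique: {critique}\nWrite a corrected final answer:"
    )
```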
Having a number that relates words to other words is not understanding words. Stop believing the hype, for fuck’s sake. What they ‘know’ is NOT knowledge. They do not know anything. Period.
There is a reason they start to fail when trained on other models’ slop: because they don’t know what any of it means!
They do build a representation of words and sequences of words and use that representation to predict what should come next.
A simplistic representation is the classic embedding example that shows how, in certain vector spaces, you can relate man/woman/king/queen/royal together.
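A toy version of that example in code, using hypothetical hand-picked vectors rather than embeddings from any real model, just to show the arithmetic:

```python
import numpy as np

# Hypothetical hand-picked vectors, not embeddings from a real model.
emb = {
    "man":   np.array([1.0, 0.0, 0.2]),
    "woman": np.array([1.0, 1.0, 0.2]),
    "king":  np.array([1.0, 0.0, 0.9]),  # "man" plus a "royal" direction
    "queen": np.array([1.0, 1.0, 0.9]),  # "woman" plus the same direction
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# king - man + woman should land near queen.
target = emb["king"] - emb["man"] + emb["woman"]
for word, vec in emb.items():
    print(f"{word}: {cosine(target, vec):.3f}")  # "queen" scores 1.000
```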
The thing is, these are static representations, bound only to the information provided to the model. That means nothing enforces correspondence with the real world; only statistically consistent representations get learned.
They don’t “learn” anything, though. They’re ‘trained’ (still a bad term, but at least the industry uses it) to spit out the correct answer.
People, especially CEOs and advertising firms, need to stop anthropomorphizing them. They do not learn. They do not “know”. They have statistically derived associations, and that’s it. That’s all.
Holy hell, the ELIZA effect is in full swing, and it’s beyond sad.
Imagine saying “it doesn’t draw anything, it’s just a bunch of math” to describe the vector graphics pipelines used to render frames for games.
I’m not actually disagreeing; it’s just really funny seeing decades of engineers’ and mathematicians’ collective output being hand-waved away as “just a bunch of math”.