As a software developer who took an elective in neural networks - when people call LLMs stochastic parrots, that's not criticism of their results.

It's literally a description of how they work.

The so-called training data is used to build a huge database of words and the probabilities of them fitting together.

Stochastic because the whole thing is statistics.
Parrot because the answer is just repeating the most probable word combinations from its training dataset.
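The "most probable word combinations" idea can be sketched with a toy next-token sampler. This is a drastic simplification (real LLMs use learned neural networks, not literal lookup tables, and the corpus here is invented for illustration), but the sampling step is similar in spirit:

```python
import random

# Tiny made-up "training corpus" for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Record which words follow which in the corpus.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def next_word(prev):
    # "Stochastic": pick a follower with probability proportional to
    # how often it appeared after `prev` in the corpus.
    return random.choice(follows[prev])

print(next_word("the"))  # one of: cat, mat, fish
```

"Parroting" here is literal: the sampler can only ever emit words it has seen following `prev` in its corpus.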

Calling an LLM a stochastic parrot is like calling a car a motorised vehicle with wheels. It doesn't say anything about cars being good or bad. It does, however, take away the magic. So if you feel a need to defend AI when you hear the term stochastic parrot, consider that you may have elevated them to a god-like status, and that's why you go on the defensive when the magic is dispelled.

@leeloo on the flipside, I feel like some people use the term "stochastic parrot" or "it just completes the next token" to imply that "therefore it cannot be intelligent" - is that correct reasoning?
@wolf480pl
Of course it cannot be intelligent, it's just a huge database of probabilities.

@leeloo pretty sure that's a fallacy, kinda like "a sculpture is just stone, therefore it can't be beautiful", or "a cell is just a bunch of proteins, therefore it cannot be a living creature".

Now, I'm not saying a huge database of probabilities can be intelligent (I hope it can't), just that I think a better argument is needed why in the case of a database of probabilities, what it's made of prevents it from being intelligent.

@wolf480pl
You would have to redefine intelligence for asking whether a list of numbers is intelligent to even make sense.

And your comparison is completely off. Beauty is not a property of the sculpture; it's, as they say, "in the eye of the beholder". Some people find curves beautiful. Can a stone have curves? Yes, of course. Others may find sharp edges beautiful. Can a stone have sharp edges? Again, yes.

I suggest you consider once again whether you are elevating "AI" to a god-like status.

@leeloo
I guess evil gods are also a thing, but no, I'm not treating them as gods. If anything, more like Frankenstein's monster.

You're right that we'd have to define intelligence, and that'd be quite difficult on its own.

Also, the sculpture was a bad example, but the cell one still stands IMO.

1/

@wolf480pl @leeloo These models aren't intelligent so much as they're auto-completing rules and patterns derived from almost inconceivably huge corpora of example material originally produced by human intelligence. That's interesting and can be very handy for a great many uses. But it's more computational brute force than intelligence.

@lmorchard @leeloo
These specific models - yes, probably.

One plausible argument I heard for it is that there's a common failure mode in ML where the model fails to generalize, but if the validation set overlaps the training set, then data leakage will fool the authors into thinking it generalized.

Another one is that these models were "rewarded" for saying plausible things, not for interacting with a world in a way that doesn't get them killed.

But these arguments are specific.

@lmorchard @leeloo
I don't buy a general "no matrix multiplication will ever be intelligent".
@wolf480pl @lmorchard
That's exactly the magic I'm talking about.
@leeloo @wolf480pl @lmorchard I mean, I believe the human mind is the product of the physical human, largely of the brain (I don't believe in a non-physical soul), and it might indeed be basically an incredibly complex big bunch of matrix multiplications. And yeah I believe that's pretty magical.

@dragonfrog @leeloo @wolf480pl

"Imagine you have two machines. One you can open up and examine all of its workings, and if you give it every picture of a cat on the whole internet, it can reliably distinguish cats from non-cats. The other is a black box and it can also reliably distinguish cats from non-cats if you give it half a dozen pictures of cats, some apple sauce, and a hug. ... I am extremely confident in saying it doesn’t work the same way as the first one."

https://www.todayintabs.com/p/a-i-isn-t-people

A.I. Isn't People

How many Reddit posts does it take to learn to read?

Today in Tabs

@lmorchard @leeloo @wolf480pl good grief now I have to sound like Sam friggin Altman, and there is clearly something very wrong with that man.

But your description ignores that humans need a solid 6 months of "training data" to get object permanence, never mind the concept of categories or species of animals, never mind understanding the category differences between cats and foxes well enough to reliably tell one from the other.

@lmorchard @leeloo @wolf480pl I guess part of it is maybe that I don't think intelligence is some exclusively human thing. LLMs clearly aren't human-like intelligent. I'm personally confident they're not as intelligent as any primate.

But are they as intelligent as a shrimp? I think they've got to be more intelligent than a mosquito.

I wouldn't turn to a shrimp for advice but they're not *without* intelligence.

@dragonfrog @lmorchard @leeloo @wolf480pl

Are the images reflected in a distorted mirror the product of intelligence (of the mirror)?

They are coherent: a literal transform of the input images, reflected, producing a recognizable if distorted and changed version.

A traditional function output. Let's add some noise to make it non-deterministic, a wind blowing through that minutely distorts the surface.

Intelligible output following from the input, but the mirror itself isn't intelligent.

@dragonfrog @lmorchard @leeloo @wolf480pl

The intelligence apparently making the meaning is pre-encoded in the input. Likewise, the vector math is extracting and exposing structure, encoded in language, put there originally by the intelligent humans.

There is no world model or understanding. That's why counting the "r" in strawberry or simply counting to 200 is so challenging.
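The strawberry point comes down to tokenization: a model sees text as integer IDs for multi-character chunks, not as individual letters. A minimal sketch (the token IDs and the split below are invented for illustration; real vocabularies come from byte-pair encoding and may split the word differently):

```python
# Hypothetical vocabulary mapping chunks to made-up integer IDs.
vocab = {"straw": 4207, "berry": 1139}

text = "strawberry"
tokens = [vocab["straw"], vocab["berry"]]  # roughly what the model "sees"

# Counting letters is trivial on the raw string...
assert text.count("r") == 3
# ...but the token IDs carry no letter-level structure at all:
print(tokens)  # [4207, 1139] -- no 'r' anywhere in these numbers
```

The model would have to have memorized letter-level facts about each token from its training text, since the letters themselves are not part of its input.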

The behavior can reasonably be called intelligent, but it's due to borrowed, reformulated, extracted intelligence.

@dragonfrog @lmorchard @leeloo
I think an ML model trained to speedrun a platformer game is intelligent like a mosquito, but LLMs probably aren't.