As a software developer who took an elective in neural networks - when people call LLMs stochastic parrots, that's not criticism of their results.

It's literally a description of how they work.

The so-called training data is used to build a huge database of words and the probabilities of them fitting together.

Stochastic because the whole thing is statistics.
Parrot because the answer is just repeating the most probable word combinations from its training dataset.
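The "database of probabilities" picture can be sketched as a toy next-word sampler (a tiny Markov chain; all words and probabilities below are invented for illustration, and a real LLM computes its distribution with a neural network rather than a lookup table, but the sampling step works the same way):

```python
import random

# Toy "model": for each context word, a distribution over possible
# next words, as if tallied from training text. Entirely made up.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "parrot": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "ran": 0.3},
    "parrot": {"spoke": 1.0},
}

def sample_next(word, rng):
    """Pick a next word at random, weighted by probability ("stochastic")."""
    choices = next_word_probs[word]
    return rng.choices(list(choices), weights=choices.values())[0]

def generate(start, length, seed=0):
    """Chain the most probable word combinations together ("parrot")."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < length and words[-1] in next_word_probs:
        words.append(sample_next(words[-1], rng))
    return " ".join(words)
```

With a fixed seed the output is reproducible, but change the seed and you get a different high-probability word chain; that randomness-over-a-frequency-table is the whole "stochastic" part.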

Calling an LLM a stochastic parrot is like calling a car a motorised vehicle with wheels. It doesn't say anything about cars being good or bad. It does, however, take away the magic. So if you feel a need to defend AI when you hear the term stochastic parrot, consider that you may have elevated them to a god-like status, and that's why you go on the defensive when the magic is dispelled.

@leeloo on the flipside, I feel like some people use the term "stochastic parrot" or "it just completes the next token" to imply that "therefore it cannot be intelligent" - is that correct reasoning?
@wolf480pl
Of course it cannot be intelligent, it's just a huge database of probabilities.

@leeloo pretty sure that's a fallacy, kinda like "a sculpture is just stone, therefore it can't be beautiful", or "a cell is just a bunch of proteins, therefore it cannot be a living creature".

Now, I'm not saying a huge database of probabilities can be intelligent (I hope it can't), just that I think a better argument is needed for why, in the case of a database of probabilities, what it's made of prevents it from being intelligent.

@wolf480pl
You would have to redefine intelligence for asking whether a list of numbers is intelligent to even make sense.

And your comparison is completely off. Beauty is not a property of the sculpture; it's, as they say, "in the eye of the beholder". Some people find curves beautiful. Can a stone have curves? Yes, of course. Others may find sharp edges beautiful. Can a stone have sharp edges? Again, yes.

I suggest you consider once again whether you are elevating "AI" to a god-like status.

@leeloo
I guess evil gods are also a thing, but no, I'm not treating them as gods. If anything, more like Frankenstein's monster.

You're right that we'd have to define intelligence, and that'd be quite difficult on its own.

Also, the sculpture was a bad example, but the cell one still stands IMO.

1/

@leeloo
My point is that emergent properties can manifest even in systems ruled by very simple rules, and can be difficult to predict by just looking at the rules.

And human intelligence, whatever it is, is likely an emergent property of the human brain.

Therefore, we cannot rule out that a similar emergent property will appear in artificial systems that are not made of neurons, without referring to how the neurons are arranged and how the artificial systems are arranged.
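A classic concrete example of simple rules producing hard-to-predict behaviour is Rule 110, an elementary cellular automaton where each cell's next state depends only on itself and its two neighbours, yet the global pattern is complex enough to be Turing-complete. A minimal sketch (function names are mine):

```python
RULE = 110  # the 8-bit rule table, encoded as an integer

def step(cells):
    """Apply Rule 110 to one row of 0/1 cells, with zero boundaries."""
    padded = [0] + cells + [0]
    out = []
    for i in range(len(cells)):
        # 3-bit neighbourhood: left, self, right
        pattern = padded[i] * 4 + padded[i + 1] * 2 + padded[i + 2]
        out.append((RULE >> pattern) & 1)
    return out

def run(width=32, steps=8):
    """Evolve a single live cell and return all rows."""
    cells = [0] * width
    cells[-1] = 1  # single seed on the right edge
    rows = [cells]
    for _ in range(steps):
        rows.append(step(rows[-1]))
    return rows
```

The update rule fits in one line, but from a single seed the system grows structures whose long-term behaviour you can't read off from the rule itself; that's the sense of "emergent" being argued here.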

@wolf480pl @leeloo The OP is saying that it literally lacks the capacity for original thought - it is a parrot, repeating sounds without understanding of the concepts behind them.

It's not like a termite, whose mound-building behavior can be replicated by a simple ruleset but which exists as a fully functional living organism in the context of a complex environment, where choices must be grounded in the shared physical world for the organism to survive.

It's not about how the neurons are arranged. It's about what kinds of representation they're capable of and what kinds of functions they can perform.

We've created a funhouse mirror that's reflecting us in unprecedented detail and has been finetuned to reflect what we do when we express selfhood.

@wolf480pl @leeloo
Melissa Scott wrote a beautiful pair of novels about this: Dreamships and Dreaming Metal.

In Dreamships, an AI has been programmed to think it is sentient and starts killing people. If it has an accurate model of the person, killing the person doesn't matter, because the person *is* the model and it has a copy of them. It literally cannot see the difference because creating the concept of there being a difference would violate its core programming that its own model counts as a living being.

In Dreaming Metal, an AI operating metal bodies as part of a magic act is given a musical instrument with an electronic interface. Its grounding in the physical world, with human performers, enables it to develop a sense of self and choose its own path as a musician.

These are fiction, but it's the best, most accessible illustration of the difference between funhouse mirror stochastic parrots and sentient agents that I've run across.

https://www.goodreads.com/book/show/836601.Dreamships

@robotistry
@leeloo
so it's a parrot not because it's a matrix of probabilities, but because it hasn't experienced the real-world consequences of its words/actions and updated the probabilities based on those consequences?

@wolf480pl @leeloo No. Maybe this will help.

0: one action, no choice (clockwork automaton, wind-up toy)
1: different actions, no choices (RC car)
2: choice, no plan (reactive robot)
3a: plan, no on-line or off-line learning (adaptive robot)
3b: plan, no on-line learning (same number for 3a and 3b because these are effectively the same when operating)
4: on-line learning - but only what and how it has been told
5a: ability to spontaneously generate new categories of output without being explicitly asked or told to do so (WBEAT)
5b: ability to spontaneously identify new categories of the same kinds of input WBEAT
6: ability to spontaneously identify new kinds of things to learn WBEAT
7: ability to spontaneously identify new ways to learn WBEAT
8: ability to choose new things to learn WBEAT

LLMs that you're not training are category 3b. They are static machines, responding to your input like an elevator responding to a button push.

LLMs that learn are category 4.

1/2

@wolf480pl @leeloo Examples:

Category 5a: a text-based LLM that spontaneously, without being asked, learns to output musical notation.

Category 5b: a text-based LLM that spontaneously, unprompted, without being asked, learns to use filenames as input.

Category 6: a text-based LLM that spontaneously, without being asked (directly or indirectly) learns that it can output ascii images or generate sounds instead of sentences.

Category 7: a text-based LLM spontaneously changes its underlying code so that it can learn how to write novels by memorizing and imitating performances instead of via a matrix of probabilities (fundamental change to its internal capabilities)

Category 8: a text-based LLM chooses when to interact with the world.

(The original categories I developed years ago were based on what the system can modify: its weights, how many weights, what kinds of weights, etc. I think this might be clearer?)

I don't think even Moltbook is showing anything above 4.