As a software developer who took an elective in neural networks - when people call LLMs stochastic parrots, that's not criticism of their results.

It's literally a description of how they work.

The so-called training data is used to build a huge database of words and the probability of them fitting together.

Stochastic because the whole thing is statistics.
Parrot because the answer is just repeating the most probable word combinations from its training dataset.
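For illustration, the "probability of words fitting together" idea can be sketched as a toy bigram model. This is a deliberate oversimplification (real LLMs are neural networks over tokens, not a literal lookup table), but it shows the stochastic-parrot mechanism in miniature: record which words follow which, then babble by sampling from what was seen.

```python
# Toy sketch, NOT how a production LLM is built: a bigram "parrot"
# that records which word follows which in its training text, then
# generates by randomly sampling the recorded continuations.
import random
from collections import defaultdict

training_text = "the cat sat on the mat and the cat slept on the mat"
words = training_text.split()

# The "database": for each word, every word observed right after it.
followers = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def parrot(start, length=6, seed=0):
    random.seed(seed)          # "stochastic": sampling, not reasoning
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:        # never saw this word mid-sentence; stop
            break
        out.append(random.choice(options))  # "parrot": only repeats seen pairs
    return " ".join(out)

print(parrot("the"))
```

Every pair of adjacent words the parrot emits was seen verbatim in the training text; the only novelty comes from the random choice among continuations.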

Calling an LLM a stochastic parrot is like calling a car a motorised vehicle with wheels. It doesn't say anything about cars being good or bad. It does, however, take away the magic. So if you feel a need to defend AI when you hear the term stochastic parrot, consider that you may have elevated it to a god-like status, and that's why you go on the defensive when the magic is dispelled.

@leeloo I hadn't thought about it as being something that takes magic away from folks like that. Honestly I always found it an accurate shortcut term for what's genuinely a fascinating but hilariously misused technology.

I think the worst part is when folks hear "statistics" and go "See, this is why it's safe to feed it raw data" and it's like oh my god NO.

@KayOhtie @leeloo honestly it’s safe to feed a model pretty much anything

But where you direct the outputs and how they are acted upon can get incredibly dangerous amazingly quickly. There's a common misconception that if you're careful about inputs, LLMs are safe; and that's almost exactly backwards.

@calcifer @leeloo I meant 'safe' not as in "data leakage", but "getting anything remotely accurate out of it"