As a software developer who took an elective in neural networks - when people call LLMs stochastic parrots, that's not criticism of their results.

It's literally a description of how they work.

The so-called training data is used to build a huge database of words and the probability of them fitting together.

Stochastic because the whole thing is statistics.
Parrot because the answer is just repeating the most probable word combinations from its training dataset.
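To make the "statistics over word combinations" point concrete, here is a toy sketch: a bigram lookup table that can only replay transitions seen in its training text. Real LLMs are neural networks over subword tokens, not literal lookup tables, so this is an illustration of the idea, not of the actual architecture.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": count which word follows which in the
# training text, then sample continuations from those counts.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Build the transition table: word -> list of observed next words.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = transitions.get(words[-1])
        if not options:
            break  # dead end: this word never had a successor
        # "Stochastic": pick a continuation weighted by how often
        # it appeared after the current word in training.
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 8))
```

Every "sentence" this produces is stitched together from word pairs that already exist in the corpus, which is the parrot part of the metaphor.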

Calling an LLM a stochastic parrot is like calling a car a motorised vehicle with wheels. It doesn't say anything about cars being good or bad. It does, however, take away the magic. So if you feel a need to defend AI when you hear the term stochastic parrot, consider that you may have elevated it to a god-like status, and that's why you get defensive when the magic is dispelled.

@leeloo I myself like calling LLMs "glorified autocomplete". Or "Т9 на максималках" ("T9 on steroids") in Russian.

It's surprising just how defensive some people get when I say that, even when they agree with my definition. They keep believing that if you just give this thing more parameters, something magical, something more than the sum of its parts, will emerge: any moment now, just one more model generation, just one more order of magnitude, I promise.

@grishka
The fun part is that the next generation will have the current state of the internet as its training set: an internet flooded with #ai generated content.

The biggest issue these AI companies face at the moment is how to ingest only human-generated content and filter out as much as possible of the AI-generated crap that is out there.

Good luck with that.
@leeloo