While it's quite fun to say “Butlerian Jihad now!”, it’s not actually accurate, because despite what idiots and evangelists would have you believe, LLMs are not actually reasoning entities and are not made in the image of the mind of man.

That doesn’t mean they’re fine, though, especially for what people are doing with them. Ed Zitron is right. Ceterum censeo LLMs esse delenda. (“Furthermore, I maintain that LLMs must be destroyed.”)

#ai #slop #llm #butlerianjihad

@eschaton

> not made in the image of the mind of man.

They're very much made and intended to imitate the human mind (on the surface ...). I think that qualifies. I am not willing to deal in shades of grey here: they are all evil.

@glitzersachen @eschaton
No, they aren't intended or designed to mimic the human mind. The architecture of an LLM is such that, given a prompt, it produces output that is statistically likely to relate to the prompt in word and phrase usage, based on an enormous training set. LLMs use statistics and linear algebra, with literally no understanding of the words and phrases. They are stochastic parrots.
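To illustrate the "statistically likely continuation" point, here is a toy sketch of that idea: a bigram model that emits whatever words tended to follow each other in its training text. This is my own minimal example, not how a real LLM is built (those use transformer networks over subword tokens), but the underlying principle is the same: pick the next token from training-set statistics, with no understanding of meaning.

```python
import random
from collections import defaultdict

# Tiny "training set". A real LLM trains on terabytes of text.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which (the "statistics").
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def parrot(prompt_word, length=8, seed=0):
    """Emit words that are statistically likely to follow the prompt,
    one at a time, sampled from observed continuations."""
    rng = random.Random(seed)
    word, out = prompt_word, [prompt_word]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        word = rng.choice(candidates)  # no semantics, just sampling
        out.append(word)
    return " ".join(out)

print(parrot("the"))
```

The output is locally plausible because every adjacent word pair was seen in training, yet the model "knows" nothing about cats, dogs, or mats.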

That is not how any reasonable human being responds when you talk to them.

@brouhaha @glitzersachen @eschaton "Reasonable" being the key. I've worked with a few managerial types who very much uttered whatever they thought the most influential person in the audience wanted to hear. And no surprise that their cohort are some of the most ardent proponents of GenAI.
@ingram @glitzersachen @eschaton
Even "What they thought someone wanted to hear" is far beyond what LLMs can do, and still requires actual intelligence.
@brouhaha @glitzersachen @eschaton Hmm, not sure there. It felt stochastic most of the time. That can give the appearance of intelligence, just like GenAI does.