This is a view of LLMs that I find unsettling. I get the strong impression from lots of posts about LLMs that people project a personality and opinion onto them.

It reminds me of Joseph Weizenbaum's 1976 book about ELIZA, Computer Power and Human Reason: "extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people". (https://archive.org/details/computerpowerhum0000weiz_v0i3/page/n10/mode/1up?q=realized)

Claude doesn’t have a scathing “take” on anything. It generates output based on its training data.

https://mastodon.social/@stroughtonsmith/116269956782244371

@neilgmacy I really like how I don't know if literally anything in that llm text is true and everyone in the comments is like "sick burn bro!" 🙄
@nerius the next thing I saw (and reposted) after that post was about the Gell-Mann amnesia effect: people give too much credence to writing on topics they don’t know, even though when they do know the topic they can see it’s clearly wrong. I feel that a lot with LLMs.