This is a view of LLMs that I find unsettling. I get the strong impression from many posts about LLMs that people project a personality and opinions onto them.
It reminds me of Weizenbaum's 1976 book about ELIZA: “extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people”. (https://archive.org/details/computerpowerhum0000weiz_v0i3/page/n10/mode/1up?q=realized)
Claude doesn’t have a scathing “take” on anything. It generated output based on patterns in its training data.
