@michaelgemar @cjust @ianturton @Tattie You forgot the word "annoying" when describing the "podcast hosts"...
Anyway, the idea that an LLM thinks, or imitates human thinking, comes mostly from marketing. For example, AI companies chose to present LLM output in a chat-like interface because it increases adoption and trust, whether or not that trust is warranted. There's a lot of literature on this, for example:
Portraying Large Language Models as Machines, Tools, or Companions Affects What Mental Capacities Humans Attribute to Them
https://dl.acm.org/doi/abs/10.1145/3706599.3719710
The effects of human-like social cues on social responses towards text-based conversational agents—a meta-analysis
https://www.nature.com/articles/s41599-025-05618-w
The benefits and dangers of anthropomorphic conversational agents
https://www.pnas.org/doi/10.1073/pnas.2415898122
"When users cannot tell the difference between human interlocutors and AI systems, threats emerge of deception, manipulation, and disinformation at scale."