Huge Study of Chats Between Delusional Users and AI Finds Alarming Patterns
Huge Study of Chats Between Delusional Users and AI Finds Alarming Patterns
“Centaurs”
They think they are getting mythical abilities
They’re right but not in the way they think
It’s definitely sucking up to you. It’s programmed to confirm what you say, because that means you keep using it.
Consider how you phrase your questions. Try framing a scenario from the position of your boss, or ask “why was my boss right to say x, y, z”, and it’ll still agree with you despite the opposite position.
If you’re just shooting the shit, consider doing it with a human being. Preferably in person, but there are plenty of random online chat groups too
I use LLMs for work (low priority stuff to save time on search or things that I know I will be validate later in the process) and I can’t stand the writing style and the constant attempts to bring in adjacent unrelated topics (I’ve been able to tone down the cute language and bombastic delivery style in Gemini’s configuration).
It’s like Excel trying chat with me when I am working with a pivot table or transforming data in PowerQuery.