Huge Study of Chats Between Delusional Users and AI Finds Alarming Patterns

https://lemmy.world/post/44841628

I have a friend who's really taken to ChatGPT, to the point where "the AI named itself, so I call it by that name." Our friend group has tried to discourage her from relying on it so much, but I think that's just caused her to hide it.

“Centaurs”

They think they are getting mythical abilities

They’re right but not in the way they think

It's like the AI BF/GFs the subs are posting about.
I certainly enjoy talking to LLMs about work, for example, asking things like "was my boss an arse to say x, y, z?" So far the LLM always seems to be on my side. Now, it could be that my boss is an arse, or it could be the LLM sucking up to me. Either way, because of the many examples I've read online, I take it with a pinch of salt.

It’s definitely sucking up to you. It’s programmed to confirm what you say, because that means you keep using it.

Consider how you phrase your questions. Try framing the scenario from your boss's position, or ask "why was my boss right to say x, y, z?", and it'll agree with that framing too, even though it's the opposite position.

If you're just shooting the shit, consider doing it with a human being. Preferably in person, but there are plenty of random online chat groups too.

I use LLMs for work (low-priority stuff, to save time on search, or things I know I will validate later in the process), and I can't stand the writing style and the constant attempts to bring in adjacent, unrelated topics. (I've been able to tone down the cute language and bombastic delivery style in Gemini's configuration.)

It's like Excel trying to chat with me while I'm working with a pivot table or transforming data in PowerQuery.