"It's alarming enough that people with no history of mental health issues are falling into crisis after talking to AI. But when people with existing mental health struggles come into contact with a chatbot, it often seems to respond in precisely the worst way..."
https://futurism.com/commitment-jail-chatgpt-psychosis
People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis"

People experiencing "ChatGPT psychosis" are being involuntarily committed to mental hospitals and jailed following AI mental health crises.

Futurism

Wandering into my feed to say, “Oh, but *I* use it and *I’m* okay!” is not going to get you anything but a big ol' block.

Chrissakes, people. Respect yourselves more.

@lilithsaintcrow
Raised to think that we’re all a bit insane and the dangerous ones think that they’re perfectly okay.

@lilithsaintcrow

This is quite disturbing. I use it myself but mostly as a ‘private secretary’. I tell it the elements of my project, what I want to do with it and then AI ‘fetches’ turning out organized information from my cues. I do it only to amuse myself…no delusions…yet!

@lilithsaintcrow "As the hype around AI has risen to a fever pitch, many people have started using ChatGPT or another chatbot as a therapist, *often after they were unable to afford a human one.*"

I think I found the root cause of the issue. (Emphasis mine)

@ClariNerd

Even if you can afford a therapist, they might not be available when you need them.

I know around here it might be months before you can see someone for the first time.

Whereas the AI is there all the time, consistently, with no cost and no hoops to jump through, seemingly anonymous and with no stigma attached.

Cost would be a big factor, but there are others too.

@lilithsaintcrow

@lilithsaintcrow I suspect it's by design. The idea behind most AI chatbots is to maximize engagement. You don't do that by making people contented or happy.
@tknarr @lilithsaintcrow This, and also the gaping hole in most people’s well-being is similar because we’re all inhabiting a similarly sick (or better: profitably unhealthy) society. So I don’t think this even requires them trying to single out vulnerable people, they’re just going for the hole in people’s social fulfillment, personal agency, and sense of meaning the size of a barn door.
@lilithsaintcrow Should LLM chatbots be regulated like narcotics or gambling?
@GalbinusCaeli Far more stringently than either put together, frankly, given how far the grift has gone.

@lilithsaintcrow

Agreed. ChatGPT is like ELIZA on steroids, which apparently includes the ELIZA effect.

https://en.wikipedia.org/wiki/ELIZA_effect

ELIZA effect - Wikipedia

@nyrath @lilithsaintcrow Exactly this, but with booster rockets and cocaine.

@GalbinusCaeli @nyrath @lilithsaintcrow

> with booster rockets and cocaine

Peanut brain:
"they are talking about Elon Musk"

Normal brain (after reading back up the thread):
"no it's about AI and the ELIZA effect fulfilling personal delusions at great risk to themselves and the people around them"

Galaxy brain:
"they ARE talking about Elon Musk"

@lilithsaintcrow This is why I never talk to a chatbot as if it were a human being. I never say hi or thank you or anything, I just say what I want it to do, and then I just leave once I have the result.
And I only use them for silly things, not for anything actually useful.

@lilithsaintcrow I knew chat bots were bad, but this is a whole other level.

I'm bipolar and have experienced hypomania, aka mild mania. I know that feeling of grandiosity and how special you feel, though not at those levels. If it's anything like that, I understand the allure.

Take it from me: don't let your loved ones use chatbots. They're just another form of designer drug.

@nevonnen @lilithsaintcrow If designer drugs were legal, totally unregulated, and as ubiquitous as the internet. Plus people don’t feel as if they’re exposed to any peril in using them.