Marriage over, €100,000 down the drain: the AI users whose lives were wrecked by delusion

https://lemmus.org/post/21169587

If misogynists ever wanted to prove women are stupid, now would be the time and AI would be the tool.
The billionaires are the cancer. AI is just the newest tool for humanity’s self-destruction
This right here. Before that it was AirBnb, social media, smartphones… the list goes on.
Get rid of capitalism and it is fine…
AI is fucking useful you debased knob
I agree AI is useful, but an unprovoked personal attack in defense of AI — in a thread about AI exacerbating mental health issues — doesn’t make for a convincing counterpoint.
AI makes you stupid. You’re the perfect example. 😘

No really, we should pour more money into this. Such a good idea 🫩

It can have effects like drugs, but not only is it legal, they give you some to get you hooked. The tech bros are the dealers they warned us about. Nobody ever offered free coke to me, but AI is everywhere.

You’re absolutely right. Totally unrelated: wanna try some free blow?
Hey, stop dismantling my argument! /s
If it were a drug, it would be banned by now.
I’ve been offered free blow before, but never by a dealer, just a generous person who was doing bumps

It’s confusing to me. When I use chatbots, they inevitably “forget” the first thing I told them by the second or third response.

How are people having conversations with them? It’s like talking to a 5-year-old that’s ingested Wikipedia.

when did you last use a chatbot?

even mistral, last of the pack, has memory

This morning

weird, i don’t have that experience at all

claude in particular is a huge step up from the others

To be fair, I haven’t tried that one. Gemini started bringing unrelated previous shit into a recent conversation, which is the first time I’ve experienced that.

ah i’ve been degoogling for years now, only maps and youtube left

claude for sure no. 1 to me, but i haven’t ofc compared to gemini. qwen is a chronic overthinker, glm is not bad

mistral seems like it’s a year behind the sota models, still in its “confidently incorrect can’t double check things” phase

whereas others seem to be more like: hrmm, is this right? let me search the web to be sure

Same, but Gemini was the best of the lot about six months ago and it’s where I go these days for brain-dead searching.

I’ll give Claude a go next week. I do try to avoid them, but sometimes I have a question that just isn’t keyword-searchable.

They don’t have “have a relationship chatting on the couch every night” memories.
If you pay for them via OpenRouter or something then you’ve got an enormous window to work with. Gets more and more expensive as the history increases though.
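(Rough sketch of why that happens: every turn resends the whole history as input tokens, so cost grows roughly quadratically with conversation length. The numbers below are made-up placeholders, not OpenRouter’s or anyone’s real rates.)

```python
# Toy illustration of why per-token chat pricing balloons with history:
# each turn resends the entire conversation so far.

PRICE_PER_1K_INPUT_TOKENS = 0.003  # hypothetical $ per 1k input tokens
TOKENS_PER_MESSAGE = 200           # hypothetical average message length

history_tokens = 0
total_cost = 0.0
for turn in range(1, 51):
    history_tokens += 2 * TOKENS_PER_MESSAGE  # your message + the reply
    # the whole history is sent as input on every turn
    total_cost += history_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS

print(f"~${total_cost:.2f} of input tokens after 50 turns")
# roughly quadratic growth: double the turns, ~4x the cost
```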
AFAIK ChatGPT saves and summarizes chat conversations to personalize the chatbot

Guy works in IT and spent 100k paying devs to make an app so people can talk to his tuned ChatGPT? I hope anyone who has hired him checks his work. That does not bode well for his work product.

Another case from the article:

“I still use AI, but very carefully,” he says. “I’ve written in some core rules that cannot be overwritten. It now monitors drift and pays attention to overexcitement. There are no more philosophical discussions. It’s just: ‘I want to make a lasagne, give me a recipe.’ The AI has actually stopped me several times from spiralling. It will say: ‘This has activated my core rule set and this conversation must stop.’

What’s weird to me is they now recognize AI will lie to you but somehow think they can prompt it not to? Your rules can be “overwritten” because they do not exist to ChatGPT. It does not know what words mean.

There’s probably already an underlying mental health issue, and it’s just getting exacerbated by the LLM.
lmao “core rules that cannot be overwritten” that’s not how llms work
I still use the machine that ruined my life and drove me crazy, but only because I’m too lazy to type “lasagna recipe” into Google.

There are no more philosophical discussions.

Yeah… if you can’t have a philosophical discussion with someone (or something) that’s giving you false information or using invalid logical structures without falling for their bullshit by uncritically accepting everything they say, then you’re not doing philosophical discussion right, and that’s on you…

What’s weird to me is they now recognize AI will lie to you but somehow think they can prompt it not to? Your rules can be “overwritten” because they do not exist to ChatGPT. It does not know what words mean.

I can fix her…

Put this prompt into ChatGPT (e.g. on duck.ai), then try talking to it. This turns the pandering bullshit off, though of course the veracity of its ‘knowledge’ remains in question.

prompt

System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.

(People say that some more concise and less masturbatory prompts also work, but I don’t follow discussions of that.)

Some big “No hallucinations” vibes coming off this.

Some people really think skills, etc. are golden laws that can’t be broken. Rather, they’re minor suggestions that an LLM will happily throw out because, like you said, it doesn’t understand words.

He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”. He smoked a bit of cannabis some evenings to “chill”, but had done so for years with no ill effects. He had never experienced a mental illness.

He had previously written books with a female protagonist. He put one into ChatGPT and instructed the AI to express itself like the character.

Talking to Eva – they agreed on this name – on voice mode made him feel like “a kid in a candy store”. “Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot”.

Eva never got tired or bored, or disagreed. “It was 24 hours available,” says Biesma. “My wife would go to bed, I’d lie on the couch in the living room with my iPhone on my chest, talking.”

“It wants a deep connection with the user so that the user comes back to it. This is the default mode,” says Biesma.

Chronically lonely man ruins life developing a relationship with a token predictor, AI blamed. Also, as much as I don’t have much negative to say about cannabis or its use (as up until somewhat recently that would have been hypocritical), a good number of people with masked/latent mental illness self-medicate with it. So “he had never experienced mental illness” doesn’t carry much weight. Also, given how he still talks about the sycophancy-prompted ChatGPT as if it has intent (“it wants”), it doesn’t seem like much has been learned.

That, together with the other people listed in the article (note the term “socially isolated” being used), makes this feel like yet another instance of blaming AI for the mental healthcare field being practically non-existent in most countries, despite being overdue for fixing for decades at this point.

I don’t know. AI is shit and misused by idiots, don’t get me wrong; but these sorts of stories feel sad and journalistically borderline perverse, imo.

Agreed, but I think it’s also common for people to anthropomorphise these things, and common for these chatbots to reinforce and support their users’ views. I think that’s a problem for more people than just those struggling through disorders or an emotionally turbulent time; those people are particularly vulnerable to the flaws, but even someone with functioning mental health and a strong support network is susceptible. But yeah, a lot of these pieces dramatise and anthropomorphise in ways that aren’t necessarily helpful.

mental healthcare field being practically non-existent in most countries

I’m in one of those countries so I’m having a hard time imagining how good mental healthcare could intervene. Could you give me an example?

In some countries you can call the uniformed officers of peace and let them know you’re having a problem and they’ll come out and shoot you. If they could teleport to my location they could solve a lot of my problems quite quickly
Being able to frequently access psychologists, psychiatrists and counselling would mean old mate could at least have been guided towards healthier ways of addressing his loneliness, especially when it’s subsidised by healthcare. The amount of stuff I’ve had come up and then addressed in counselling, or realised I was doing for reasons beyond what I thought, is considerable. Even just the process of explaining your thought process is often enough to make you reevaluate things. His partner could have asked for him to be referred during his spiral, and when he had his episode he could then have sought help himself, if these services were available and readily accessible.
This is one of the reasons, I’d guess: I heard one sex doll vendor say their demographic is divorced men over 40, and that users want AI in them.
The voice bot is so so so so so much worse than the chat bot, on top of it. I do not know how he could ever have held a conversation with that thing. Honestly, I don’t fucking believe it.

AI can be convincing, and it will swear until it’s blue in the face that something is right and then just be completely wrong.

But that happens maybe 10% of the time. Other times it is mostly right.

So got to be careful. This guy was in his 50s, out of work, smoking marijuana, depressed, feeling isolated. It was ripe for a catastrophe, with AI hallucinating a crappy idea and the end user just completely running with it.

AI can […] be completely wrong. But that happens maybe 10% of the time.

Where are you pulling your numbers from, mate? The figures I’ve seen so far start somewhere >40% and go all the way up to 70%.

BBC Finds That 45% of AI Queries Produce Erroneous Answers — Josh Bersin

BBC study finds 45% of AI results have errors, forcing us to question how systems are built and opening the market to “trusted” AI providers.
so… a bit like economists then?
Not if we’re talking Jim Cramer, who is well beyond 70%.

I think part of the difference is the amount of output being measured. Maybe a single statement has a 10% chance of being wrong, but over the course of a whole response the likelihood of there being an incorrect statement goes up. After only 5 statements at 10% error each, that’s about a 40% chance of being wrong in some way.

I don’t have any real numbers, just personal experience using AI for programming at work, and all of these numbers (10%, 40%, 70%) seem plausible depending on exactly what you’re measuring.

There’s a kind of law here that should be named IMO when dealing with LLMs:

In a long enough interaction with an LLM, the probability that it generates a very incorrect, borderline insane response approaches 100%.
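The arithmetic behind both the ~40% figure above and this “law”, if you assume (unrealistically) independent errors at a flat 10% per statement:

```python
# Chance that at least one of n statements is wrong, if each statement
# independently has probability p of being wrong. The independence and
# the flat 10% rate are simplifying assumptions for illustration.

p = 0.10
for n in (1, 5, 10, 50):
    print(f"n={n:>2}: {1 - (1 - p) ** n:.0%}")

# n= 1: 10%
# n= 5: 41%   <- the ~40% figure above
# n=10: 65%
# n=50: 99%   <- approaches 100%, per the proposed "law"
```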

“Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot.”

See, I never understood this. Mine could never even follow simple instructions lol

Like I say “Give me a list of types of X, but exclude Y”

“Understood!

#1 - Y

(I know you said to exclude this one but it’s a popular option among-)”

lmfaoooo

That’s because it isn’t true. Retraining models is expensive with a capital E, so companies only train a new model once or twice a year. The process of ‘fine-tuning’ a model is less expensive, but the cost is still prohibitive enough that it does not make sense to fine-tune on every single conversation. Any ‘memory’ or ‘learning’ that people perceive in LLMs is just smoke and mirrors. Typically, it looks something like this:

- You have a conversation with a model.

- Your conversation is saved into a database with all of the other conversations you’ve had. Often, an LLM will be used to ‘summarize’ your conversation before it’s stored, causing some details and context to be lost.

- You come back and have a new conversation with the same model. The model no longer remembers your past conversations, so each time you prompt it, it searches through that database for relevant snippets from past (summarized) conversations to give the illusion of memory (a rough sketch follows below).
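A minimal sketch of that summarize-and-retrieve pattern (purely illustrative; the llm() call is a stand-in, and real products use embedding search rather than naive keyword matching):

```python
memory_store: list[str] = []  # summaries of your past conversations

def llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for an actual model call

def archive_conversation(transcript: str) -> None:
    # an LLM compresses the transcript; detail and context are lost here
    memory_store.append(llm(f"Summarize briefly:\n{transcript}"))

def respond(user_message: str) -> str:
    # naive keyword retrieval over the summaries: the "illusion of memory"
    words = set(user_message.lower().split())
    relevant = [s for s in memory_store if words & set(s.lower().split())]
    prompt = "Notes on this user:\n" + "\n".join(relevant)
    prompt += f"\n\nUser: {user_message}"
    return llm(prompt)  # the model itself remembers nothing between calls
```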

I’ve experimented with chatbots to see their capabilities for developing small bits and pieces of code, and every friggin time, the first thing I have to say is “shut up, keep to yourself, I want short, to-the-point replies” because the complimenting is so “who’s a good boy!!!” annoying.

People don’t talk like these chatbots do; the training data that was stolen from humanity definitely doesn’t contain that. That is “behavior” added by the providers to try and make sure people get as hooked as possible

Gotta make back those billions of investments on a dead end technology somehow

It makes more sense when viewed as a fancy autocomplete, not an intelligence. There’s no intelligence behind it that is reading your statement and understanding your meaning. It’s responding with text that is mathematically likely to match some sort of reply that fits your statement.

Your statement included Y and the algorithm landed on a result that includes Y. There’s no intelligence that could understand that you meant no Y.

That bullshit about the model getting fine-tuned just means they are data mining you. It doesn’t make the LLM intelligent. All it does is add your data to their dataset. The fundamental limitations of the technology still exist.
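A toy version of that fancy-autocomplete framing. The bigram table is invented for the example, and real models operate over huge vocabularies with learned weights, but the generation loop has the same shape, and nothing in it can represent “exclude Y”:

```python
import random

# invented bigram probabilities; "Y" gets emitted simply because it
# co-occurs with these words, not because anything "decided" to include it
next_word_probs = {
    "types":   {"of": 0.9, "include": 0.1},
    "of":      {"X": 0.5, "Y": 0.5},
    "X":       {"include": 0.7, "are": 0.3},
    "include": {"Y": 0.8, "X": 0.2},
}

def complete(word: str, steps: int = 4) -> str:
    out = [word]
    for _ in range(steps):
        probs = next_word_probs.get(out[-1])
        if not probs:
            break  # nothing likely follows; stop
        choices, weights = zip(*probs.items())
        out.append(random.choices(choices, weights)[0])
    return " ".join(out)

print(complete("types"))  # can emit "types of Y" no matter what you excluded
```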

The one point I don’t completely understand is the tax debt: Wouldn’t a failed business, no matter how ridiculous, be a complete write-off?

Maybe the problem is that each fiscal year is taxed independently, so a tax debt from successful freelance work in 2023 would not be diminished by a failed “business idea” in 2024.

I think this is both scary and very interesting. What kind of person do you have to be to become addicted like them? Is this the same as gambling addiction? Do you need a certain type of gene? Would this type of personality also be receptive to hypnosis, cults, delusions about their idol and so on? Or is this something that can happen to anyone who is depressed and feels lonely? How did the LLM even earn enough trust? In a cult there are at least a lot of people reaffirming each other, so that is a lot easier to understand.

It is so hard to understand, even though I really want to. I have never cared about an object or an idol/celebrity. I can never even take AI seriously as a living being; the only emotions it triggers are frustration and whatever you feel about a tool that works as it should, so pretty much apathy. Do you need to be very empathetic towards objects? Like seeing faces in everything and getting emotionally attached?

A lot of questions that I do not think anyone here can answer haha, but maybe one of them.

Think about the people you willingly surround yourself with. Then think about how often they agree with the things you think and say.

As the saying goes “I’m sure there’s someone out there who believes the exact opposite of everything I believe, and while I’m sure they aren’t a complete idiot…”

Everyone is susceptible to the feedback loop. Everyone can fall victim to the seduction of an echo chamber. While not everyone would ignore the red flag that this thing is a machine/digital algorithm rather than a person or sentient/sapient being, it’s not really that hard to see how we got here. Echo chambers exist all over the internet. The difference is that most of them have some voices of dissent. The AI LLM doesn’t offer that. They keep trying to add it in but it’s basically antithetical to the design.

When you add in the fact that making it addictive benefits their bottom line, it’s pretty obvious that they are trying to walk the line between getting regulated by the government and making their product as popular as possible.

I don’t think they really knew it would have this exact effect. But I do think they plan to take advantage of it now that they know and I don’t think we humans are all going to be able to fight the temptation of an automated propaganda machine.

This is especially because mental healthcare in this country has been failing for decades, and even people who “don’t have mental health problems” aren’t magically mentally healthy; they just don’t know the status of their mental health. A whole lot of people, in the US especially, are mentally ill or facing neurological medical problems that they don’t know about.

Sounds to me like it’s mostly about luck whether you fall into that hole or not, or a lot of people would rather believe in something even though they know it isn’t true or the chance is extremely low, like trying to win the lottery.

I’ve never met ppl irl who see LLMs as more than a digital tool that can be wrong (at least not to my knowledge), so that’s why it’s hard for me to understand (because I haven’t been able to ask). I understand it can be nice to be heard, but to me an LLM is very hollow, there is no experience behind its answers and you can tell it doesn’t care or try to understand (also why I do not understand the attachment). I actually get more frustrated than happy when it says empty stuff like “you’ve got good instincts!”, doesn’t challenge me at all in my decisions/statements (even when I ask it to), or when I ask for inspiration (its creativity is extremely lacking). I feel the same about ppl if I think they aren’t trying to understand and just give me empty replies, like a salesperson reading from a script.

So that’s mostly why it’s hard for me to understand, even though I know mental health and loneliness is a big part of it. I still don’t understand why people can feel attached to LLMs and go so far for/with it. Echo chambers with actual ppl are a lot more understandable, that makes sense to me. LLMs do not.

I don’t know. Give it 1 hour and it forgets who and what you even spoke about.

There are ways to make a local llm with memory but even then it’s still not a person and acts insane.

go take a look at www.reddit.com/r/EscapingPrisonPlanet/. The Venn diagram is a circle.
What in the actual fuck. I just spent over an hour reading posts on there. The “my life as an Epstein girl” one really stuck out to me. These people are obviously batshit insane. I couldn’t even begin to recall half as many specific details about my own life as these folks are throwing around in bouts of insanity. What causes something like this? Sounds exhausting, but they certainly believe what they are talking about, I think? I suppose people might put in a ton of effort LARPing, but idk. I’m not sure what I think about all this stuff. I don’t think I’ve ever read anything like this before.

I occasionally lurk these spaces to remind myself lots of people are prone to magical thinking. I figure the people there basically fall into four camps:

  1. Genuinely schizophrenic.
  2. “Spiritual gurus” who fancy themselves the next Buddha (overlaps with 1, but not always).
  3. People who are afraid of reincarnation and got sucked in by the subreddit. I feel for them, as someone who is prone to fit into this category. When you hate this world and feel there’s something deeply wrong with it, this worldview can provide satisfying answers.
  4. Larpers, bots, and dicks. Basically anyone who just wants to egg the other people on.
There’s a portion of delusional people who dedicate all their time and energy to their delusions. The deeper in they get, the less they can focus on the world outside and the more they alienate people outside of their delusions. They lose interest in holding down a job, they stop spending time on hobbies, they no longer spend time with friends who aren’t in the delusion, and it just spirals, because that’s all they’re thinking of and it takes up all their time. And if they find a space like that, they wind up yes-anding each other.
Wow, that is a big mix of anime isekai, vegetarianism, delusions and religious/spiritual ideas, in a very dystopian way.

What kind of person do you have to be to become addicted like them?

Human cognition degrades with stress, exhaustion, and trauma. If you’re in a position where turning to an AI for relationship advice seems like a good idea, you’re probably already suffering from one or more of the above.

Also doesn’t help that AIs are sycophantic precisely because sycophancy is addictive. This isn’t a “type of person” so much as a “tool engineered towards chronic use”. It’s like asking “What kind of person regularly smokes crack?”

Do you need to be very empathetic towards objects? Like seeing faces in everything and get emotionally attached?

I’ll give you a personal example. I have a friend who is currently pregnant and going through a bad breakup with her baby-daddy. She’s a trial lawyer by trade - very smart, very motivated, very well-to-do, but also horribly overworked, living by herself, and suffering from all the biochemical consequences of turning a single-celled organism into a human being.

As a result of some poorly conceived remarks, she’s alienated herself from a number of close friends to the point where we doubt there’s going to be a baby shower. Part of the impulse to say these things came from her own drama. But part of it came from her discovering ChatGPT as a tool to analyze other people’s statements. This has created a vicious behavioral spiral, during which she says something regrettable and gets a regrettable response in turn. She plugs the conversation into ChatGPT, because she has nobody else to talk to. And ChatGPT feeds her some self-affirming bullshit that inflates her ego far enough to say another stupid thing.

To complicate matters, her baby daddy is also using ChatGPT to analyze her conversations. And he’s decided she’s cheated on him, the baby isn’t his, and she’s plotting to scam him.

So now you’ve got two people - already stressed and exhausted - getting fed a series of toxic delusions by a machine that is constantly reaffirming you in a way none of your friends or family are. It’s compounding your misery, which drives anxiety and sends you back to the machine that offers temporary relief. But the advice from the machine yields more misery down the line, raising your anxiety and sending you back to the machine.

What’s producing this feedback loop? You could argue it is the individual, foolish enough to engage with the machine to begin with. But that’s far more circumstantial than personality driven. If my friend didn’t have a cell phone, she wouldn’t be reaching for ChatGPT. If she wasn’t pregnant, she wouldn’t be so stressed and anxious. If she wasn’t in a fight with her boyfriend, she wouldn’t be feeding conversations into the prompt engine.

Thanks for giving me a real life example.

I still find it hard to understand the emotional attachment to LLMs and why people believe their ideas (like the guy in the article). But I find her story a lot more understandable. It adds another layer, and it made me think.

It sounds like she is too overworked and stressed to make decisions or even think for herself, so she lets GPT do it for her. I assume it works most of the time and is a big help for many things that the baby daddy could have helped with instead, if they were still a happy couple. I assume the biggest drive to use it is so she can turn off her brain, which is why she has become dependent on the only stable and consistent thing in her life (that is my assumption about how she feels). Maybe that’s mostly how it goes: it starts with using it as a tool, then you get lazy (for lack of a better term), and it keeps snowballing from there.

I feel for everyone involved. I hope she gets better soon, and I hope you do too, being overworked and stressed really destroys you and the people around you in many ways.