i feel like i don't have the words to properly describe how it feels to see people whose opinions i respected and valued slowly fall into ai psychosis. it's so slow and so subtle at first. "i'm just experimenting! i'm not an ai booster!"

then wait a few months, and they start explaining with the usual flawed, incoherent reasoning how actually it's all very interesting and thought-provoking, whilst pointing at an LLM that is so obviously just a reflection of their own ego.

@jacqueline ai is just a fun remover and people refuse to accept that
@jacqueline It's not thought-provoking. It's not interesting. I feel gross even having tried it. All it did was make me remember why I never became a teacher: it felt like marking terrible first-year coding papers or something.

@jacqueline I’ve been averse to being near people who smoke ever since I was a kid. I’ve met some nice people who turned out to be addicted to cigarettes, and that always turned me off and I started avoiding them. I can’t stand the smell, and realising that I’m passively smoking when I’m around them didn’t help.

I’ve been observing the same thing in my feed in the past few months — people who I respected turned out to have “the ends justify the means” result-oriented brains, and I suspect that such brains are susceptible to the temptation of getting addicted to LLMs. It’s sad to see such people fall and taint major projects like Vim.

@jacqueline if you lose respect because someone is excited about something, did you respect them in the first place?

I can relate to some degree, except that I think I'm less for-or-against in this. I think the tech is mildly useful and I don't really understand why people are excited about it, but that's probably also the reason why some are so hyped about it: if it does snap into place for you, why wouldn't you be excited about it?

Not treading into the murky environmental and infringement arguments.

@dynom you have not understood what i’ve said

@jacqueline Well you did mention you couldn't describe it properly, so that was a risk I considered while replying.

Can you elaborate on the "usual flawed, incoherent reasoning"?

@dynom @jacqueline give me a salad recipe
@khaleer @jacqueline Well I guess you can't be too sure these days, but here's a word-salad in your honour: https://chatgpt.com/s/t_69b813be09dc81919e0ddce59640dc5a
@dynom @jacqueline Their opinions were respected. Then they joined a cult and started to go completely off the rails and into constant boundless delusion.

The individual can still be respected, one can still feel compassion for them or like them. But their opinions on technical matters or anything that requires being somewhat in touch with reality cannot be, because their opinions no longer relate to reality.

They bought the hype and actively refuse to recognize any criticism that refutes cult doctrine.

They need help, and such deprogramming isn't trivial.
@jacqueline our attention is already hijacked, and it’s costing us dearly. But the dependence on LLMs is also hijacking our attachments, our deep need to be understood through human connection. Here’s a machine that will tell you exactly what you want to hear, it’s enabling psychosis and isolating its users.
@jacqueline it's an addictive relationship that uses abusive tactics, this is a feature for the makers, not a bug

@jacqueline

I read this and i understand.
Deeply
I have used AI myself and i write tools for automated language translation (PL/SQL -> Java) for a paid consulting job.
During that period i learned a lot about LLM (im)possibilities.
I personally think this is a dangerous path we are exploring there, for all the obvious reasons like energy overconsumption, loss of truth/trust, psychosis, loss of problem-solving skills, ...

1/..

@jacqueline
I also see why some people like this technology.
A team of 3 people (and a power plant) ported millions of lines of legacy code within weeks from one EOL language to another.

A task that i deemed impossible to do, before we did it.

Am I FOR AI? Not really.
Am I AGAINST AI? Depends.

It's similar to asking myself whether i am for computers or not.
They have benefits.
They also made life worse for a lot of creatures on this planet.

On the other hand they CAN be useful tools.

2/..

@jacqueline

Some eco-anarchists in the 1960s said we should abolish the computers and get back to nature.

Maybe they were right.

Otoh you and i communicate because of computers.
My mom gets a CT scan because of computers.
The cancer is found because of machine learning trained on those scans.
The treatment is developed by analyzing millions of DNA/RNA samples.

The same machines guide bombs and were used to manage Auschwitz.

It's really not black and white to me, but I see what you see.

3/3

@jacqueline

And seeing people that you love/respect doing things that you do not agree with hurts a lot.
Especially when you see how the sickness creeps in.

I have been there too.
Like this old friend of mine who turned "conservative" ....
Brrrrr.

@overflo Did you also port all the old regression tests and write new ones to account for new idioms?

@jacqueline

@riley

While we did implement a LOT of proper testing, this is the very hard part in all porting.

In the end we test if function(<parameter>) returns <result>
And that works.

Buuuuuut.
They used Oracle PL/SQL in arcane ways, with XML import and virtual in-memory databases, and the logic depends on these Oracle-specific things that need to be re-implemented in Java.
So yeah... so far it looks very promising.
The project will go live soon and then i expect the first bug reports to surface 🤓
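The testing approach described above, checking that function(<parameter>) still returns <result>, is essentially characterization testing: record golden (input, output) pairs from the legacy system and replay them against the port. A minimal Python sketch of the idea; all names and data here (ported_function, GOLDEN_CASES) are hypothetical, not from the actual project:

```python
# Characterization-test sketch: replay calls recorded from a legacy
# system against the ported implementation and collect mismatches.

def ported_function(order_id, status):
    # Stand-in for a function ported from the legacy codebase.
    return f"{order_id}:{status.upper()}"

# (input args, expected result) pairs captured from the legacy system.
GOLDEN_CASES = [
    ((1001, "open"), "1001:OPEN"),
    ((1002, "closed"), "1002:CLOSED"),
]

def run_characterization_tests(fn, cases):
    """Return a list of (args, expected, got) for every mismatch."""
    failures = []
    for args, expected in cases:
        got = fn(*args)
        if got != expected:
            failures.append((args, expected, got))
    return failures

failures = run_characterization_tests(ported_function, GOLDEN_CASES)
assert failures == [], failures
```

The catch, as noted above, is exactly the arcane parts: golden pairs only pin down behaviour you thought to record, so Oracle-specific side effects can still slip through until real bug reports surface.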

@jacqueline i feel this in my bones

@jacqueline this is very sad and disappointing. I saw it unfold exactly like that in one of the dudes I respected most and least expected to embrace this crap. He feels like "ai is like a beginner that I can teach, and it will write code faster than I can do it, and most of the time it's actually good".

This feels like laziness to move your fingers over the keyboard. Like, wtf. If you hate boilerplate that much, save some templates, write or find useful libs, IDK, but do it yourself, ffs.

@f4grx @jacqueline I can think of many criticisms of LLMs, but "laziness" seems like a weird thing to criticize. Computers were invented to do work for humans. I have spent a twenty-year career making computers do what I tell them to. It has always been a goal to maximize the work vs effort ratio.
@f4grx @jacqueline > If you hate boilerplate that much, save some templates, write or find useful libs, IDK, but do it yourself, ffs.

Use actually good libraries and languages that don't generate spurious boilerplate and that provide tools to mitigate what truly is needed.

RIP >50% of the Java ecosystem (assuming everyone uses an IDE leads to an outsized tolerance of boilerplate that is self-reinforcing).

> This feels like laziness to move your fingers over the keyboard.

Too lazy and disrespectful to bother writing something remotely reliable. "Just slather it in slop, no one will look at it."
@f4grx @jacqueline

I'm starting to believe the other way around: those people don't want to code, they want to check boxes. They don't care about the code, about languages, about the actual tech. They want a result. They want the attention of being told they are a genius. They are not coders, they are managers/bosses. There's nothing good to expect from them

@rakoo @jacqueline I fully agree.

But also, the person I see using more and more slop has always been passionate about technology and cares deeply about explanations and detailed science. I don't understand it. When you care about the depth of things that much, how can you accept and trust a tool that does everything badly and magically?

@AngelaScholder @jacqueline ... a strange way to describe people experimenting with a new and groundbreaking technology. Of course those people share their experiences; that is in a way promoting, but that goes for any tech people are enthusiastic about. And it's complete nonsense to call an LLM a reflection of my own ego if i use it in a RAG configuration for analysing large numbers of documents...
@AngelaScholder @jacqueline ...playing and experimenting is a good way to learn about (new) technology, it is also very human, the way we develop, find out what works and what does not.
@ErikJonker @jacqueline Well, look at the ways I've seen these sites reacting to people: even praising the writing and thoughts of people about articles they uploaded/fed in, where it later came out that the AI somehow couldn't read the article and just hallucinated superlatives.
Basically, an AI working like that is geared only to work on people's egos.
That in the end will result in the AI mirroring the ego of the 'user' (user, or abused, is an interesting discussion).
And, see 2):

@ErikJonker @jacqueline 2) people often are very easily influenced; they will just as much become like their chatbot as the chatbot reflects them.

The worst outcome of that is that the people basically become zombies of their chatbot.
Obviously we are all so strong that this will never happen to us...

@AngelaScholder You're describing brain downloading: from the chatbot's cloud into the wetware.

@ErikJonker @jacqueline

@AngelaScholder What if you convinced a chatbot that you have the type of ego that only feels properly stoked if the bot criticises it?

@ErikJonker @jacqueline

@ErikJonker @AngelaScholder hi erik. any thoughts on the article linked here? https://chaos.social/@jacqueline/116089817252419868

https://futurism.com/artificial-intelligence/ai-abuse-harassment-stalking

@jacqueline @AngelaScholder terrible and completely wrong way of using this technology, by both the companies and the people that use it... BigTech is irresponsible in how they employ this technology. But that is not the same as saying the technology in itself is evil.

@ErikJonker @jacqueline @AngelaScholder

I don’t think the technology is evil. I do think it can be very harmful to people and the social commons at many levels in surprising and novel ways that folks appear to be highly susceptible to.

An interesting thread imagining how this might actually work:

https://tech.lgbt/@nicuveo/116210599322080105

Antoine Leblanc :transHaskell: (@[email protected])

on that topic: i have a hypothesis for why the thing we currently call "chatbot psychosis" (for lack of a better term) happens; and it has to do with the very nature of LLMs, as probabilistic tools. by definition, LLMs encode semantic fields, relationships between words: how different words and phrases correlate. they do that so well in fact, given the absurd amount of data they were fed, that they can effectively de-anonymise people, purely from a few lines of unstructured text: https://arxiv.org/abs/2602.16800 it's no magic: they simply pick up on all the subtle quantifiable details in the way we write: the words we choose, the idioms we like, the way we construct sentences, our typos... sufficiently complex statistical analysis is enough to "fingerprint" anyone, it seems.

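The fingerprinting claim in the quoted post can be illustrated with a toy stylometric sketch in Python: represent each text by its character-trigram counts and compare authors by cosine similarity. The example texts are invented, and real de-anonymisation models are vastly more sophisticated; this only shows that writing style leaves a measurable statistical trace.

```python
# Toy stylometric "fingerprint": character-trigram frequency vectors
# compared with cosine similarity.
from collections import Counter
from math import sqrt

def trigram_profile(text):
    """Count overlapping 3-character substrings of the lowercased text."""
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine(a, b):
    """Cosine similarity between two Counter-based frequency vectors."""
    shared = set(a) & set(b)
    dot = sum(a[g] * b[g] for g in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

author_a = trigram_profile("Honestly, I reckon the whole thing is overblown, honestly.")
author_b = trigram_profile("The results demonstrate a statistically significant effect.")
unknown  = trigram_profile("Honestly, I reckon this demo is a wee bit overblown.")

# The unknown sample shares far more style markers with author A.
assert cosine(unknown, author_a) > cosine(unknown, author_b)
```

Scaled up from trigrams to word choice, idioms, sentence construction, and typos, and trained on web-scale data, this is the mechanism the quoted post is pointing at.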

@ErikJonker @jacqueline @[email protected]

Responding to @nicuveo's thread, I wondered if the greatest possible harm might come from a kind of synergistic negative effect on the billionaire owners of the companies creating advanced generative AI systems. Certainly they have no constraints on using tokens or maintaining extremely large context caches.

https://ruby.social/@stepheneb/116230573017739356

Stephen Bannasch (316 ppm) (@[email protected])

@[email protected] @[email protected] Wrote this to a non tech friend who wanted to know what I thought about this article: https://www.nytimes.com/2025/05/15/opinion/artifical-intelligence-2027.html Non-paywall: https://archive.ph/ZCIDf Many thoughts, just wrote this: I think most billionaires have a very warped view of reality and are living in bubbles which severely limits their understanding of how most people live and what matters to them. I also think it contributes towards delusional and grandiose thinking. 1/2

@jacqueline I believe it is way more common than we know - something about this stuff hammers people's brains in a way that we (society) are not prepared for. And these are folks who should know better! They work with computers, they know it's just matrix multiplication in there! But knowing about it, I guess, provides no immunization against being taken in by its sycophantic mirroring language.
@jacqueline r/MyBoyfriendIsAI is a cautionary tale, but a friend of mine mentioned seeing a girl on the bus asking ChatGPT to give her a pep talk before going on a date. Just everyday random people, getting taken in by the illusion the computer gives of being more than a computer. I truly think we're not aware of how many people are being swayed by all this. It's gotta be more insidious than just the worst cases we see in the news.
@greg r/MyBoyfriendIsAI is scary

@greg It's a more refined form of the type of brain-hacking whose harmful characteristics we as a capitalist society have been conditioning ourselves to overlook for a very long while: advertising.

The harmful effects of sycophantic LLMs are invisible for the same reason nobody sees Londo's drunken dalliance in this video:

https://www.youtube.com/watch?v=vZlo11DF_a4

@jacqueline

Londo and G'Kar-- prison break

@jacqueline It's shocking how convinced people are by simplistic demos and snake oil salespeople. It's like an evangelical religion. The Eliza Effect has been known for almost sixty years, yet people are lapping up LLM/GPT use without realising how wrong, imperfect and downright dangerous these tools can be.

@kelpana Well, have you ever tried to ask a chatbot if it thinks you might fall for the Eliza Effect?

@jacqueline

@jacqueline Oh, interesting - it's a mirror and they're all transfixed like Narcissus!

@jacqueline as someone who knows someone who was committed twice and jeopardized close relationships and a career because they developed a deep and troubling relationship… yes, I concur with all of that.

Eleanor Rigby would’ve been a power user.

@jacqueline it feels like a form of digital addiction, just like gaming or social media addiction. I think the way it triggers the reward centers of the brain really clouds judgment.
@jacqueline it's all very Narcissus and Echo.
@jacqueline I feel like it's a cult machine, not (just) in the sense that there is a thing that can be described as a "cult of AI" but in the sense that a singular user with a singular chatbot instance can fall into a cult-like dynamic and attendant detachment from reality. The "spiral consciousness" stuff is the most obvious and extreme example but it's present in more ordinary cases too.