i feel like i don't have the words to properly describe how it feels to see people whose opinions i respected and valued slowly fall into ai psychosis. it's so slow and so subtle at first. "i'm just experimenting! i'm not an ai booster!"

then wait a few months, and they start explaining with the usual flawed, incoherent reasoning how actually it's all very interesting and thought-provoking, whilst pointing at an LLM that is so obviously just a reflection of their own ego.

@AngelaScholder @jacqueline ... a strange way to describe people experimenting with a new and groundbreaking technology. of course those people share their experiences, which is in a way promotion, but the same goes for any technology people are enthusiastic about. And it's complete nonsense to call an LLM a reflection of my own ego if i use it in a RAG configuration for analysing large numbers of documents...
@AngelaScholder @jacqueline ...playing and experimenting is a good way to learn about (new) technology, it is also very human, the way we develop, find out what works and what does not.
@ErikJonker @AngelaScholder hi erik. any thoughts on the article linked here? https://chaos.social/@jacqueline/116089817252419868
jacquelines 🌟 (@[email protected])

https://futurism.com/artificial-intelligence/ai-abuse-harassment-stalking

@jacqueline @AngelaScholder terrible and completely wrong way of using this technology, by both the companies and the people that use it... Big Tech is not being responsible in how it employs this technology. But that is not the same as saying the technology in itself is evil.

@ErikJonker @jacqueline @AngelaScholder

I don’t think the technology is evil. I do think it can be very harmful to people and the social commons at many levels in surprising and novel ways that folks appear to be highly susceptible to.

An interesting thread imagining how this might actually work:

https://tech.lgbt/@nicuveo/116210599322080105

Antoine Leblanc :transHaskell: (@[email protected])

on that topic: i have a hypothesis for why the thing we currently call "chatbot psychosis" (for lack of a better term) happens; and it has to do with the very nature of LLMs as probabilistic tools.

by definition, LLMs encode semantic fields, relationships between words: how different words and phrases correlate. they do that so well, in fact, given the absurd amount of data they were fed, that they can effectively de-anonymise people purely from a few lines of unstructured text: https://arxiv.org/abs/2602.16800

it's no magic: they simply pick up on all the subtle quantifiable details in the way we write: the words we choose, the idioms we like, the way we construct sentences, our typos... sufficiently complex statistical analysis is enough to "fingerprint" anyone, it seems.
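The fingerprinting idea in that post can be sketched with a toy stylometric comparison: character trigram frequencies as a crude stand-in for the "subtle quantifiable details" of someone's writing, matched by cosine similarity. This is a deliberately naive illustration, not the linked paper's method, and the author names and text samples are made up:

```python
from collections import Counter
import math

def trigrams(text):
    """Character trigram counts: a crude proxy for stylistic signals
    like word choice, punctuation habits, and typos."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a, b):
    """Cosine similarity between two trigram frequency vectors."""
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical known writing samples (invented for illustration).
known = {
    "alice": "Honestly?? i think it's fine... tbh it's fine, whatever.",
    "bob": "In my considered opinion, the matter warrants further review.",
}

# An "anonymous" snippet whose author we try to guess.
anonymous = "honestly i think it's fine tbh... whatever works."

scores = {name: cosine(trigrams(anonymous), trigrams(sample))
          for name, sample in known.items()}
best = max(scores, key=scores.get)
```

Even this trivial model pairs the anonymous snippet with the stylistically closer author; the point of the post is that an LLM, trained on vastly more text, picks up far subtler versions of the same signals.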


@ErikJonker @jacqueline @[email protected]

Responding to @nicuveo's thread, I wondered if the greatest possible harm might come from a kind of synergistic negative effect on the billionaire owners of the companies creating advanced generative AI systems. Certainly they have no constraints on using tokens or maintaining extremely large context caches.

https://ruby.social/@stepheneb/116230573017739356

Stephen Bannasch (316 ppm) (@[email protected])

@[email protected] @[email protected] Wrote this to a non tech friend who wanted to know what I thought about this article: https://www.nytimes.com/2025/05/15/opinion/artifical-intelligence-2027.html Non-paywall: https://archive.ph/ZCIDf Many thoughts, just wrote this: I think most billionaires have a very warped view of reality and are living in bubbles which severely limits their understanding of how most people live and what matters to them. I also think it contributes towards delusional and grandiose thinking. 1/2
