RE: https://mamot.fr/@pluralistic/116219642373307943

I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me. It's not _wrong_, exactly, but radium paint was also a "normal technology" according to this rubric, and I still very much don't want to get any on me, and especially not in my mouth.

The "critic psychosis" thing is tedious and wrong for the same reasons Cory's previous "purity culture" take was tedious and wrong, a transparent and honestly somewhat pathetic attempt at self-justification for his own AI tool use for writing assistance. It pairs very well with this Scientific American article, which points out that pedestrian "writing AI tools" influence their users in subtle but clearly disturbing ways. https://www.scientificamerican.com/article/ai-autocomplete-doesnt-just-change-how-you-write-it-changes-how-you-think/
"AI autocomplete doesn’t just change how you write. It changes how you think" (Scientific American): AI-powered writing tools are increasingly integrated into our e-mails and phones. Now a new study finds biased AI suggestions can sway users’ beliefs.

Cory also correctly points out that "AI psychosis" is probably going to be gatekept by medical-establishment scicomm types soon, because "psychosis" probably isn't the right word and already carries an unwarranted stigma. And indeed, I think the biggest problem with "psychosis" as a metaphor is going to be that the ways in which AI can warp our minds are mostly NOT going to be catastrophic psychosis, and are not going to have good analogs in existing medical literature.

If I may use another inaccurate metaphor: AI psychosis is the "instant decapitation" industrial accident of this new technology. And indeed, most people having industrial accidents are not instantly decapitated. But they might get a scrape, or lose a finger, or an eye. And an infected scrape can still kill you, but it won't look like the decapitation. It looks like you didn't take very good care of yourself. Didn't wash the cut. Didn't notice it fast enough. Skill issue.

More to the point, though: in this metaphor where you're getting a potentially infected scrape at work, we are living in the pre-germ-theory age of AI. We are aware that it might be dangerous sometimes, but we don't know to whom, or why. We are attempting to combat miasma with bloodletting right now, and putting the miasma-generator in every home before we know what it's actually doing.

For me, this is the body horror money quote from that Scientific American article:

"participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all"

So maybe you can't use it "responsibly", or "safely". You can't even ignore it and choose not to use it once you've seen it.

If you can see it, the basilisk has already won.

Now, for rhetorical effect, I'm obviously putting this fairly dramatically. Cory points out that people have been doing this *to each other*, mediated by technology, in emergent and scary ways, with no need for AI. He shows that people prone to specific types of delusions (Morgellons, Gang Stalking Disorder) have found each other via the Internet, and that the simple availability of global distributed communication has harmed them. But obviously that has benefits, too.

I'm open to a future where we do some research and figure out the limits of how AI influence works, and where the safety valves are, and also the extent to which it's *fine* that AI can influence our views, because honestly many different kinds of stimuli can influence our views, not least of which is each other. But it sure looks right now like it has a bunch of very dangerous feedback loops built in, and it's not clear how to know if you're touching one.

But, as Cory puts it:

"""
It is nuts to deny the experiences these people are having. They're not vibe-coding mission-critical AWS modules. They're not generating tech debt at scale.
"""

I had a very visceral emotional reaction to this particular paragraph, and I find it very important to refute. Here are two points to consider:

1. YES THEY ARE.

They are vibe-coding mission-critical AWS modules. They are generating tech debt at scale. They don't THINK that that's what they're doing. Do you think most programmers conceive of their daily (non-LLM) activities as "putting in lots of bugs"? No, that is never what we say we're doing. Yet, we turn around, and there all the bugs are.

With LLMs, we can look at the mission-critical AWS modules and ask, after the fact: were they vibe-coded? AWS says yes: https://arstechnica.com/civis/threads/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes.1511983/

"After outages, Amazon to make senior engineers sign off on AI-assisted changes" (Ars OpenForum): AWS has suffered at least two incidents linked to the use of AI coding assistants.

2. If it is "nuts" to dismiss this experience, then it would be "nuts" to dismiss mine: I have seen many, many high profile people in tech, who I have respect for, take *absolutely unhinged* risks with LLM technology that they have never, in decades-long careers, taken with any other tool or technology. It reads like a kind of cognitive decline. It's scary. And many of these people are *leaders* who use their influence to steamroll objections to these tools because they're "obviously" so good
The very fact that things like OpenClaw and Moltbook even *exist* is an indication, to me, that people are *not* making sober, considered judgements about how and where to use LLMs. The fact that they are popular at *all*, let alone popular enough to be featured in mainstream media shows that whatever this cognitive distortion is, it's widespread.

Furthermore, it is not "nuts" to dismiss LLM user experiences. In fact, you must dismiss all subjective experience of LLM use as evidence of objective phenomena, even if the LLM user is yourself. Fly by instruments because the cognitive fog is too thick for your eyes to see.

Because the novel thing about LLMs, the thing that makes them dangerous, is that they are—by design—epistemic disruptors.

They can produce symboloids more rapidly than a thinking mind. Repetition influences cognition.

I have ADHD. Which means I am experienced in this process of self-denial. I have time blindness. I run an app that tells me how long I've been looking at other apps, because if I trust my subjective perception, I will think I've been looking at YouTube for 10 minutes instead of 4 hours. Every day I need to deny my subjective feelings about how using software is going, in order to function in society.

This disability gives me a superpower. I'm Geordi with the visor, able to see what everybody else's regular eyes are missing. This is basically where the idea for https://blog.glyph.im/2025/08/futzing-fraction.html originally came from: since I already monitor my time use, I noticed that my time in LLM apps was WAY out of whack, consistently at "hyperfocus" levels of time-use, without any of the subjective impression of engagement or pleasure. Just dull frustration and surprising amounts of wasted time.
The Futzing Fraction: "At least some of your time with genAI will be spent just kind of… futzing with it."

The suggestion that the article makes is all about passive monitoring of the amount of time that your LLM projects *actually* take, so you can *know* if you're circling the drain of reprompting and "reasoning". Maybe some people really *are* experiencing this big surge in productivity that just hasn't shown up on anyone's balance sheet yet! But as far as I know, nobody bothers to *check*!
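
If you want to actually check, here is a minimal sketch of what that kind of passive monitoring could look like, assuming a hypothetical CSV export from a screen-time tracker with one (app name, seconds) row per tracked session. The file name and app labels are illustrative assumptions, not anything from the article:

```python
# Hypothetical sketch: compute the share of tracked time spent in LLM
# tools, from a CSV of (app_name, seconds) rows. The log format, file
# name, and app labels are all assumptions for illustration.
import csv
from collections import defaultdict

LLM_APPS = {"ChatGPT", "Claude", "Copilot"}  # whatever your tracker calls them

def llm_time_fraction(log_path):
    totals = defaultdict(float)
    with open(log_path, newline="") as f:
        for app, seconds in csv.reader(f):
            totals[app] += float(seconds)
    llm = sum(t for app, t in totals.items() if app in LLM_APPS)
    total = sum(totals.values())
    return llm / total if total else 0.0

print(f"LLM share of tracked time: {llm_time_fraction('screentime.csv'):.1%}")
```

The point isn't the specific tooling; it's that the measurement is external to your own subjective sense of how the session went.
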
I don't want to be a catastrophist, but every day I am politely asking: "this seems like it might be incredibly toxic brain poison. I don't think I want to use something that could be a brain poison. Could you show me some data that indicates it's safe?" And this request is ignored. No study has come out showing it *IS* a brain poison, but there are definitely a few showing it might be, and nothing in the way of a *successful* safety test.

Could be sample bias, of course. I only loosely follow the science, and my audience obviously leans heavily skeptical at this point. I wouldn't pretend to *know* that the most dire predictions will come true. I'd much, much rather be conclusively proven wrong about this.

But I'm still waiting.

@glyph I'm honestly wondering just how much undiagnosed long COVID is playing into this.

I'm slowly recovering now, well, as much as I can, but at the time I was painfully aware that weird stuff was happening to my brain, because I got caught in the first wave in March 2020.

So I am wondering if the addictive effects of using these LLMs, along with existing cognitive damage, are a partial cause.

@onepict @glyph I suspect yes, because my non-tech friends who use it more are using it as assistive tech to keep them working through health things…

@crazyjaneway @glyph We had a client use it to give them permission to spam out their new thing, after we'd explained (and their local IT guy also explained) that if they did that on our servers we'd lock their account.

Which we then did. The client said, "ChatGPT said I could do it". The sycophancy combined with overconfidence is utterly frightening.

I don't particularly like it when my friends use it in their communication with me either.

https://dotart.blog/cobbles/ai-and-that-guy-at-the-bar

"AI and that Guy at the bar" (cobbles): In tech we've always had evangelists, whether it's for FOSS, or Blockchain, or now AI. It's a natural thing to do. You have a tech you'r...

@onepict @glyph I've been starting to wonder the *exact* same thing

especially after I saw this earlier in the week: https://pubmed.ncbi.nlm.nih.gov/36819980/

while I don't think it's the only factor (the prompt products being shaped to be as sycophantic as they are is entirely a choice), I'd love to see some more research into it

"New-Onset Hyperreligiosity, Demonic Hallucinations, and Apocalyptic Delusions following COVID-19 Infection" (PubMed): a case report of a patient with no previous psychiatric history who developed severe psychotic symptoms.

@onepict @glyph this has crossed my mind (it took over a year before I could even begin to code again). Who knows how much worse it would be if I relied on LLMs.

@spinnyspinlock @onepict really sorry to hear that happened to you. LC sucks

@onepict @glyph

Yeah, seems to be some overlap of cognitive symptoms between ADHD and Long Covid:

https://pmc.ncbi.nlm.nih.gov/articles/PMC10102822/

I've also spoken to people with LC who find themselves in positions where they are just ferrying emails in and out of chatGPT from bed so they can keep their jobs/health insurance.


@az @onepict truly we live in hell

@glyph Very good analysis, thank you, I'll be passing this around 😁

@glyph this thread needs to be an essay, and then a research hypothesis.

I very much feel like I’m watching the last 35 years of my ever-enshittifying social network exposure, sped up 10x and replayed.

In 1991 I remember having the flash of insight - without the life experience to really go into it deeply then - that the way nascent social network tech constrained and shaped interaction was going to force a mass cognitive adaptation for which we were not ready.

@glyph

In 2021, we were still suffering the consequences of that, and still not sufficiently adapted to have avoided whatever the fuck is now driving our geopolitical dystopia engine.

And then suddenly our devolved capacity for social cognition had to deal with the fact that dealing with any human far enough away that you couldn't *lick* them came with no assurance that there even was a human there.

@glyph excellent thread. To me, studies like the one in that Scientific American article you linked to are evidence that LLM-based “AI” can indeed be toxic brain poison, in ways that are effective even if people know they’re being manipulated. And the toxic brain-poisoning mechanisms chatbots use (in aid of maximizing engagement) that contribute to AI-related psychosis are well known; developers who think they’re immune are just kidding themselves. Just because it’s not resulting in immediate decapitation (great analogy) doesn’t mean it isn’t having an effect.

Anyhow, I also haven’t seen any evidence of the kind you’re looking for, just claims by tech lobbyists that wrongful-death suits are still fairly rare. Which, while true (at least so far), doesn’t exactly inspire confidence.

@glyph i don't know if it's the best analogy at the end of the day, but my brain keeps going to lead pipes and asbestos. if we're not sure it's safe, should we be in such a hurry to put it in everything?

@alys FYI the first health concerns about asbestos were raised in 1907, and yet it was still legal to use it in UK buildings in, wait for it... 1999.

So the lesson with #LLMs is...?

@alys @glyph Careful, you wouldn't want the anti-vaxxers to read that ...

@glyph i've used the term "neural asbestos" before and it feels a lot like that may be the type of thing we're dealing with

@kirakira @glyph

That's good, mine is 'epistemic thalidomide'

@MrBerard @kirakira @glyph

Stochastic Errorism.

@davidtheeviloverlord @MrBerard @kirakira @glyph

What a fantastic thread.
Not black or white, but flavoursome.
Makes you think huh?

Humans as programmable entities.
Does a keyboard feel the fingertips?
Or does it think it's a content creator?

#Ai is a #Cognitivehazard and we don't have a firewall.

@MrBerard @kirakira @glyph Nice. I'm digging the vibe of "mental revigator" myself

And yet Doctorow thinks LLMs are great for him to use for copyediting. Maybe find a less hypocritical person to quote. All Gen AI horrifies me; I visualize environmental destruction with every "prompt."

@kirakira @glyph
https://floss.social/@sstendahl/116220713455956161

@kimcrawley @kirakira @glyph I do the same.

Several times this week I've come across people asking questions, and I'm totally fine with that, but then it's followed with "I asked ChatGPT" and I immediately despise them.

I needed to find an answer to something this week and it took me 10 to 20 minutes to find the exact answer I needed for this fairly obscure problem.

I used my brain and a search engine rather than being a lazy asshole.

@retrosponge @kirakira @glyph

It's so infuriating.

Every single "prompt" contributes more and more to the Earth becoming uninhabitable to humans.

@kimcrawley @kirakira @glyph Yep. It's horrifying. And these idiots just don't seem to grasp that.

Using AI is just sheer abject laziness at the expense of the future of the planet.

@retrosponge @kirakira @glyph

That's why I founded the only anti-Gen-AI political activism organization, Stop Gen AI.

Join us. ❤️

https://stopgenai.com

Stop Gen AI – Mutual Aid and Political Activism

@kimcrawley @kirakira @glyph It's the ultimate summation of a lot of what's wrong with modern society.

Selfishness.

"I want to be lazy and selfish, so I'm not going to do any actual work. Who cares if it hurts anybody?"

@kimcrawley Incidentally, I was on your site just yesterday.

You have my axe, as the saying goes. 😁

@kirakira @glyph "metacognitive sandblaster" is mine
@delta_vee @kirakira @glyph Leaded gasoline.

@bluewinds @delta_vee @kirakira @glyph I don't think the analogies are good, because asbestos is a fantastic insulator, lead is a really helpful additive for petrol, makes fantastic pigments, and is really convenient for piping... and the hidden side effects are the problem. Whereas LLMs _don't_ deliver that primary benefit.

LLMs are more like... cheap laminate flooring, produced with wood pulp harvested unsustainably from old-growth forests and made by grossly exploited factory workers overseas... superficially convenient when remodelling your kitchen, and rapidly ubiquitous, but also quite unsatisfying and a right faff to work around once it's established.

@bluewinds @delta_vee @kirakira @glyph this post is brought to you by our kitchen floor
@jackeric @bluewinds @delta_vee @kirakira heh. I am not sure I 100% agree with your framing but all the analogies fall short (after all I do not think we have GOOD evidence that LLMs do any of these things, just hints) and this is an interesting contribution to the pile. but I definitely was thinking "wow it sounds like jack is thinking about laminate flooring really hard" the whole time I was reading it
@jackeric @bluewinds @kirakira @glyph Cheap laminate floors aren't a cognitohazard though (unless you're in interior design ;)

@jackeric @bluewinds @delta_vee @kirakira @glyph

I think it's crucial for everyone to realize that asbestos and fossil fuels etc. aren't just innocent mistakes, AND that's STILL not a good analogy, because this is WORSE.

This time it's not just a callous cover up, this time it's premeditated and deliberate, from the original foundations of how "AI" has been designed and implemented.

IT'S A WEAPON.

It's a weapon, and we are under enemy attack.

@jackeric @bluewinds @delta_vee @kirakira @glyph Only half seriously, but therefore also not totally *unseriously*:

Zip fuel.

If you've never heard of it, there's good reason. But it did attempt to address a legitimate concern at the time: getting more power out of a given volume of jet fuel. Just put highly reactive boron compounds in it. Specifically, *pyrophoric* boron compounds, which don't even need high heat to ignite.

The fuel did indeed produce more power, but it was very toxic (both in raw form and after combustion), and it seriously corroded jet engine parts, leading to an enormous maintenance headache for any aircraft that tried to use the fuel.

(Maintenance headaches ... sound familiar?)

A very good example of what can happen when "speed of one specific component", whether airplane flights or writing code, overwhelmingly dominates a thought process.

https://en.wikipedia.org/wiki/Zip_fuel

Zip fuel - Wikipedia

@kirakira @glyph A term I was introduced to on mastodon that I've taken to heart for this:

Coghazmat.

@glyph while I am not aware of any study showing the poisonous character of LLMs, two things are already proven:
1. LLMs have a more detrimental effect on software development than they have benefits. Google's DORA report has shown, multiple years in a row now, that LLM use in software development decreases performance and outcomes in most teams.
2. Abuse for malicious intent is rampant, yielding scary propaganda, misinformation, and distraction campaigns, and intensifying the threat from social-engineering attacks.

@nils_berger have you got a link for that report?

@glyph @nils_berger
this study argues that it encourages cognitive outsourcing on a new level, which over the long term could result in getting used to less cognitive activity, at least for certain tasks.

link: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646

"Announcing the 2024 DORA report" (Google Cloud Blog): key takeaways from the 2024 Google Cloud DORA report, which focused on the last decade of DORA, AI, platform engineering, and developer experience.