RE: https://mamot.fr/@pluralistic/116219642373307943

I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me. It's not _wrong_, exactly, but radium paint was also a "normal technology" according to this rubric, and I still very much don't want to get any on me, and especially not in my mouth.

The "critic psychosis" thing is tedious and wrong for the same reasons Cory's previous "purity culture" take was tedious and wrong, a transparent and honestly somewhat pathetic attempt at self-justification for his own AI tool use for writing assistance. It pairs very well with this Scientific American article, which points out that pedestrian "writing AI tools" influence their users in subtle but clearly disturbing ways. https://www.scientificamerican.com/article/ai-autocomplete-doesnt-just-change-how-you-write-it-changes-how-you-think/
Cory also correctly points out that "AI psychosis" is probably going to be gatekept by medical-establishment scicomm types soon, because "psychosis" probably isn't the right word and already carries an unwarranted stigma. And indeed, I think the biggest problem with "psychosis" as a metaphor is going to be that the ways in which AI can warp our minds are mostly NOT going to be catastrophic psychosis, and are not going to have good analogs in existing medical literature.
If I could use another inaccurate metaphor, AI psychosis is the "instant decapitation" industrial accident with this new technology. And indeed, most people having industrial accidents are not instantly decapitated. But they might get a scrape, or lose a finger, or an eye. And an infected scrape can still kill you, but it won't look like the decapitation. It looks like you didn't take very good care of yourself. Didn't wash the cut. Didn't notice it fast enough. Skill issue.
More to the point, though: in this metaphor where you're getting a potentially-infected scrape at work, we are living in the pre-germ-theory age of AI. We are aware that it might be dangerous sometimes, but we don't know to whom or why. We are attempting to combat miasma with bloodletting right now, and putting the miasma-generator in every home before we know what it's actually doing.

For me, this is the body horror money quote from that Scientific American article:

"participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all"

So maybe you can't use it "responsibly", or "safely". You can't even ignore it and choose not to use it once you've seen it.

If you can see it, the basilisk has already won.

Now, for rhetorical effect, I'm obviously putting this fairly dramatically. Cory points out that people have been doing this *to each other*, mediated by technology, in emergent and scary ways, with no need for AI. He shows that people prone to specific types of delusions (Morgellons, Gang Stalking Disorder) have found each other via the Internet, and that the simple availability of globally distributed communication has harmed them. But obviously that has benefits, too.
I'm open to a future where we do some research and figure out the limits of how AI influence works, and where the safety valves are, and also the extent to which it's *fine* that AI can influence our views because honestly many different kinds of stimuli can influence our views, not least of which is each other. But it sure looks right now like it has a bunch of very dangerous feedback loops built-in, and it's not clear how to know if you're touching one.

But, as Cory puts it:

"""
It is nuts to deny the experiences these people are having. They're not vibe-coding mission-critical AWS modules. They're not generating tech debt at scale.
"""

I had a very visceral emotional reaction to this particular paragraph, and I find it very important to refute it. Here are two points to consider:

1. YES THEY ARE.

They are vibe-coding mission-critical AWS modules. They are generating tech debt at scale. They don't THINK that that's what they're doing. Do you think most programmers conceive of their daily (non-LLM) activities as "putting in lots of bugs"? No, that is never what we say we're doing. Yet, we turn around, and there all the bugs are.

With LLMs, we can look at the mission-critical AWS modules and ask, after the fact: were they vibe-coded? AWS says yes: https://arstechnica.com/civis/threads/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes.1511983/

After outages, Amazon to make senior engineers sign off on AI-assisted changes

AWS has suffered at least two incidents linked to the use of AI coding assistants.
2. If it is "nuts" to dismiss this experience, then it would be "nuts" to dismiss mine: I have seen many, many high-profile people in tech, who I have respect for, take *absolutely unhinged* risks with LLM technology that they have never, in decades-long careers, taken with any other tool or technology. It reads like a kind of cognitive decline. It's scary. And many of these people are *leaders* who use their influence to steamroll objections to these tools because they're "obviously" so good.
The very fact that things like OpenClaw and Moltbook even *exist* is an indication, to me, that people are *not* making sober, considered judgements about how and where to use LLMs. The fact that they are popular at *all*, let alone popular enough to be featured in mainstream media, shows that whatever this cognitive distortion is, it's widespread.

Furthermore, it is not "nuts" to dismiss LLM user experiences. In fact, you must dismiss all subjective experience of LLM use as evidence of objective phenomena, even if the LLM user is yourself. Fly by instruments because the cognitive fog is too thick for your eyes to see.

Because the novel thing about LLMs, the thing that makes them dangerous, is that they are—by design—epistemic disruptors.

They can produce symboloids more rapidly than a thinking mind. Repetition influences cognition.

I have ADHD. Which means I am experienced in this process of self-denial. I have time blindness. I run an app that tells me how long I've been looking at other apps, because if I trust my subjective perception, I will think I've been looking at YouTube for 10 minutes instead of 4 hours. Every day I need to deny my subjective feelings about how using software is going, in order to function in society.
This disability gives me a superpower. I'm Geordi with the visor, able to see what everybody else's regular eyes are missing. This is basically where the idea for https://blog.glyph.im/2025/08/futzing-fraction.html originally came from: I already monitor my time use, and I noticed that my time in LLM apps was WAY out of whack, consistently at "hyperfocus" levels of time-use, without any of the subjective impression of engagement or pleasure. Just dull frustration and surprising amounts of wasted time.
The Futzing Fraction: "At least some of your time with genAI will be spent just kind of… futzing with it."

The suggestion that the article makes is all about passive monitoring of the amount of time that your LLM projects *actually* take, so you can *know* if you're circling the drain of reprompting and "reasoning". Maybe some people really *are* experiencing this big surge in productivity that just hasn't shown up on anyone's balance sheet yet! But as far as I know, nobody bothers to *check*!
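To make "bothering to check" concrete, here is a minimal sketch of the kind of passive timing the article is gesturing at. It is not the article's code; the log path, CSV columns, and task label are all invented for illustration.

```python
# Hypothetical sketch: log how long each LLM session *actually* takes.
import csv
import time
from contextlib import contextmanager
from datetime import datetime
from pathlib import Path

LOG = Path("futzing-log.csv")  # invented log location

@contextmanager
def timed_task(label: str):
    """Record how long a task actually took, not how long it felt like it took."""
    start = time.monotonic()
    try:
        yield
    finally:
        elapsed = time.monotonic() - start
        new_file = not LOG.exists()
        with LOG.open("a", newline="") as f:
            writer = csv.writer(f)
            if new_file:
                writer.writerow(["when", "label", "seconds"])
            writer.writerow([datetime.now().isoformat(), label, round(elapsed, 1)])

# Usage: wrap each prompting/re-prompting session, then add up the totals later.
with timed_task("add --verbose flag, via LLM"):
    pass  # ...prompt, review the output, re-prompt, review again...
```

The point is only that the number comes from the clock, not from how the session felt.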

@glyph I like your breakdown in those articles.

I think that some of the more valuable stuff has been not when juniors prompt and don’t get value, but when seniors prompt, go do something else for a bit while the machine churns for a couple of minutes, and then come back to something that is pretty close to a good solution.

Think about a thing that might take you 15 minutes to kinda menially do (add some CLI bool flag that then needs to get passed down 3 layers in some spot, for example).

@glyph lowering of activation energy is how I see that. And while I agree that the futzing is way undercounted (and that, IMO, a lot of this falls over in longer sessions and is just not worth it)… a strong dev who knows exactly what the solution is supposed to look like can get paper cut-y stuff cleaned up. A lot.

The “whine on Slack about a thing being busted” turns into a fix, and for most of that you can just go get a cup of water or review something in the meantime. Cool party trick, at least.

@glyph totally to your point tho… the party trick might just be that. It feels fun to have progress happen while laundry is being folded, but in the end I might end up churning anyways.
@raphael Believe me, I understand the appeal of the hit of dopamine to get moving when one is stuck. I really want a tool that can do that for me, but I would like to know what other effects it has, and whether it's going to be a net detriment.

@raphael @glyph The thing that the LLM is getting you to not think about is that it shouldn't take passing things down three layers (much less more, which is more common) in the first place. This is the boilerplate that everyone hates, and the goal should be to remove the need for it at all, not to produce more of it faster.
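To make that point concrete, here's a hypothetical sketch (mine, not from either post) of the usual alternative to hand-threading a flag through every layer: pass a single options object, so the intermediate layers never change when a flag is added. All names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Options:
    verbose: bool = False
    dry_run: bool = False  # a new flag only touches this class and the code that reads it

def cli(opts: Options) -> None:
    middle(opts)   # layer 1: unchanged when Options grows

def middle(opts: Options) -> None:
    inner(opts)    # layer 2: unchanged when Options grows

def inner(opts: Options) -> None:
    if opts.verbose:
        print("doing the thing, loudly")

cli(Options(verbose=True))
```

Whether that's the right design is beside the point; the point is that "generate the three-layer plumbing faster" and "stop needing the plumbing" are different goals.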

"The least worst way to use an LLM is to do something you already know how to do", now with the addendum that we don't know what we don't know.