I strongly believe there are entire companies right now under heavy AI psychosis, and it's impossible to have rational conversations about it with them. I can't name any specific people because they include personal friends I deeply respect, but I worry about how this plays out.

I lived through the great MTBF vs MTTR (mean-time-between-failures vs. mean-time-to-recovery) reckoning of infrastructure during the transition to cloud and cloud automation. All those arguments are rearing their ugly heads again, but now it's... the whole software development industry (maybe the whole world, really).

It's frightening, because the psychosis folks operate under an almost absolute "MTTR is all you need" mentality: "it's fine to ship bugs because the agents will fix them so quickly, and at a scale humans can't match!" We learned in infrastructure that MTTR is great, but you can't yeet building reliable systems entirely.

The main issue is that I don't even know how to bring this up with people I know personally, because raising the topic leads to immediate dismissals like "no no, it has full test coverage" or "bug reports are going down," which just don't paint the whole picture.

We already learned this lesson once in infrastructure: you can automate yourself into a very resilient catastrophe machine. Systems can appear healthy by local metrics while globally becoming incomprehensible. Bug reports can go down while latent risk explodes. Test coverage can rise while semantic understanding falls. Change happens so fast that nobody notices the underlying architecture decaying.

I worry.

@mitchellh I would love to see someone commission a study on this. It _feels_ like things are in general getting less reliable atm, esp. the stack I rely on for work (GitHub, Linear, Slack, Notion, VSCode, <insert-tui-tool-here>), but then I can't find any data on any of it.
@mitchellh One of the best descriptions I've heard lately was that it feels like "losing coworkers to dementia" as people adopt it, where everyone feels like they know everything, but when you talk with them in person or there is a problem that needs to be fixed _now_ it becomes very clear that the capability to do that has atrophied basically completely
@pojntfx @mitchellh holy goats, hadn’t heard that dementia analogy before but that is exactly it. I’ve lost elder family to dementia, and when you’ve lived with it you realize that it is so much more than “forgetting”; it is literal decay of executive, cognitive capability. Not sure I should say thanks for sharing that, I’m now going to see it everywhere. 😳
Coach Sankhavaram ¼ (@[email protected])

Practice makes perfect. AI makes slop. Regular AI use measurably weakens skills. People stop practicing. Researchers published findings showing that higher AI reliance correlated with lower #CriticalThinking scores, independent of age, education, or professional background. The more confidently people used AI, the less they engaged their own reasoning. The tool did the thinking. The person accepted the output. Each deferred decision is practice skipped.


@johannab @pojntfx @mitchellh I wouldn't go so far.

But the system is changing. Some parts are accelerating.

As the OP has mentioned, that might sound good on the surface. But there is always the law of unintended consequences.

One of them is that the faster "velocity" has the side effect of unmasking many issues that were already there before, but which mattered less when the red "self-destruct the company" button was only accessible to slow-working human employees, who considered things like "do I want to become unemployed?" before trying the button out.

Now the same permission setup that was completely broken last year and already a fatal risk, connected via an MCP server, can be triggered inside 24 hours by a stochastic NLP application. That NLP application does not consider things like whether it wants to be employed tomorrow, and if you prompt it with “test everything,” who knows how it will interpret that?

What I often see missing in discussions about AI (especially the current LLM-centered discussions; AI is way more than that, to be honest) is the context analysis:

- Oh, oops, see what bad things happened because of AI.
- Yeah, right, the AI might be an enabler, but if you behaved like that with a human instead of an AI, the outcome would have been as bad. Risky behavior/business processes don't become safe just because you trust some random dude not to be a serial killer.

To put it differently, if you trust things that some random dude says without checking or validating them, my answer to that would be “They eat the dogs; they eat the cats.” I've got news for you: humans lie. It's a human survival strategy that works. People lie on their timesheets. The GOP even acknowledges that they lie because it works in the polls. But a random machine that you know has issues with exactly that, you expect it to be telling the truth by default? What even is the truth?
So yeah, LLMs are magnificent (compared to what we had before) NLP algorithms. But use them wrongly, and you end up in trouble. And the companies running the circus don't make it easier either. E.g., OpenAI's ChatGPT instant mode (which is basically the only thing accessible on the free tier) is literally tuned to provide fast, convenient, cheap-to-generate answers, with nobody caring whether they are wrong.
See what might be wrong with this?

OpenAI claims the system is clever enough to spot when it should switch to the better "thinking" LLM. Two problems with this:

- it means the idiot brother decides whether he needs to call in his cleverer sister to solve a problem. No concern there, right? He can just go on and use his mental hammer on the glass door. Or on the suicidal user.
- and the free tier has basically no quota of thinking tokens.

So quite often it's the lobotomized cousin that stands in for the LLM industry in all the horror media stories.

@pojntfx I lost a loved one to dementia, and have been saying for a while how similar LLM psychosis is to that experience.

https://hachyderm.io/@cczona/114618173190392133

https://hachyderm.io/@cczona/116019873544950215

They think they are fine, while the person you used to know is gradually replaced by someone unrecognisable. Professional de-skilling issues aside, a sadder dimension to me is how AI psychosis degrades not only cognition but also social skills. So relationships deteriorate too, and at scale that will ultimately become communities and solidarity deteriorating as well. It's been disturbing to see people I care about lose interest in connecting at a human level, while grasping for the empty flattery of a large language model. Is this our future, truly?

Carina C. Zona (@[email protected])

@Di4na @jenniferplusplus and there is already evidence emerging that engineers who depend on LLMs to write their code for them are eroding their skills. I would analogize it to early-stage dementia. The person can't see how their judgement is gradually developing fissures that compromise their ability to function. Eventually it will become too clear to deny anymore. But right now they are increasingly impaired while no less confident in the comprehensiveness of their skills. It's the period when they present a big risk to self and others, because of the growing gap between reality and perception of competence. This person is letting LLMs draft most of their code, and fails to see that not continuing to hone their own skills as an active coder has personal consequences; and that doing so en masse poses societal consequences. What happens in a generation when there are virtually no engineers left who can review an LLM's outputs competently?


@cczona @pojntfx

I wonder how long it will take for the rest of the world to a) realize what's going on, b) arrive at the objectively valid conclusion and c) act on it and see it through.

Recent past experience (COVID-19) shows that to be a tall order.

Anyway, I wonder when, if ever, people who actively rejected contact with LLMs will be in extremely high demand as knowledge bearers in teaching and technical lead positions.

@datenwolf @cczona @pojntfx The longer this goes on, the fewer of us will be left. And we're suffering in the meantime.
@Crell @datenwolf @cczona @pojntfx Many of us want to leave the IT sector altogether.

@cczona Filling in for parts of the brain, or neural network, that are missing, either temporarily or permanently, by hallucinating (when a machine) or confabulating (the polite word we use for human brains doing the same thing) up something that 'seems to fit', if only superficially so, appears to be a fundamental mechanic of how networks of neurons work. It's not intended, at least for the synthetic ones, but it emerges fairly reliably.

Covering up the blind spot in mammalian eyes is probably the most common use case, but in rare edge cases, extreme outcomes such as Anton–Babinski syndrome, or anosognosia of cortical blindness, have been known to arise. This particular one is associated with focal damage to specific parts of the occipital lobes, and manifests as the patient being blind but their brain pretending they can still see, by confabulating an entire simulacrum of the patient's ordinary environment. It's the weird kind of brain condition where a clinical sign can be found on the patient's legs: if the patient is capable of walking, the condition typically leads to them frequently bumping into things and getting unusually many hematomas on their legs. (The brain may also try to confabulate away the high rate of bumping into things, btw.)

@pojntfx

@cczona Degenerative brain disease is, kind of by definition, the sort of condition where lots of tiny parts of one's brain gradually go missing. And this opens plenty of voids for the remaining neurons to try and hallucinate ghosts into.

But eventually, the voids go from emerging to merging, and then it'll be ghosts hallucinating ghosts all the way down.

@pojntfx

@pojntfx @mitchellh
AI deskilling or covid brain damage; why don't we have both? 😱
@sabik @pojntfx @mitchellh
I think one can even lead to the other, when people suffering from COVID brain damage try to compensate by using AI

@Doomed_Daniel @mitchellh @pojntfx @sabik
Exactly this. I see so many people who have been affected by Covid with cognitive issues, now excited that they can feel as smart as they used to by using AI.

But AI isn’t reliable, and I notice an emotional reaction when I point out the flaws. It’s like they are losing their cognitive ability twice: once from Covid, and a second time in recognizing AI’s flaws.

@VeeRat @mitchellh @pojntfx @sabik
and the cognitive abilities will *further* deteriorate if using AI instead of thinking
@pojntfx @mitchellh this nails how I have been feeling at work lately. As more people adopt and push AI, it's like seeing more people lose their abilities at a level similar to some senior relatives losing their faculties. Well worded.
@pojntfx @mitchellh I have seen the same behavior even with capable engineers whom I knew in the past as strong designers and code-quality champions, now reduced to faith followers with 2 or 3 agents working on different branches of the same project at the same time, really not understanding or owning anything, and still feeling like top performers.
I do agree it looks very much like dementia.

@pojntfx @mitchellh

Agreed. But I think it's worth distinguishing between two aspects of this decrease in reliability: one is the increasing use of #LLMs to write shoddy code that would previously have been written better by humans, but the other is the use of LLMs to find longstanding bugs in code written years ago by humans. The former is a real decrease in reliability; the latter is illusory.

@pojntfx

I use a different set of tools but I am definitely catching that "vibe" myself. Software quality, in general, is backsliding. And it's not just the frequency of bugs. It's their blatancy. Before, a bug would mercifully show up in some rarely encountered corner case. Now the bugs show up in the main areas of activity.

@mitchellh

@pojntfx @mitchellh I had read some piece from GitHub talking about how much extra load they were under due to the percentage of commits from LLMs. However, reading through https://github.blog/news-insights/company-news/github-availability-report-april-2026/ shows outages largely driven by logic bugs rather than scaling issues.

There's definitely an interesting paper in there somewhere.

GitHub availability report: April 2026

In April, we experienced 10 incidents that resulted in degraded performance across GitHub services.


@pojntfx
It has been and is being studied; some links and conclusions that I recently stumbled upon might be of interest:
https://www.b-list.org/weblog/2026/apr/09/llms/

@mitchellh

Let’s talk about LLMs

Everybody seems to agree we’re in the middle of _something_, though what, exactly, seems to be up for debate.


James Bennett
@mitchellh God, ALL of this. I worry, too. I really feel "resilient catastrophe machine".
@mitchellh that's an interesting analogy, feels like both vibecoding and "resilience engineering" tend to mask systemic risk by superficially and temporarily mitigating the symptoms
@mitchellh I’ve been thinking a lot about this, and my personal conclusion is that the rise of the attention economy has made nuanced discussion virtually impossible. So nuanced topics (all important problems are nuanced) can't really be discussed, because all people see is “number go up”.
The only solace I have is that this is unsustainable, and it will collapse, costing us a lot, but it will collapse
@nickynah @briankrebs @mitchellh I have but dipped into Sokal and Bricmont's "Fashionable Nonsense: Postmodern Intellectuals' Abuse of Science" from 1999 on this point... the cult of market fundamentalism keeps betraying itself, and the "AI" pseudo-religion may yet cause capitalism to eat itself alive. This is where the existential risk lies. Not "AGI" or "ASI" fairy stories.

@mitchellh The story I've heard is the "well, we just rewrite the entire thing every six months so there's no point in fixing/improving because the next iteration/generation will be that much better as the agents improve."

I can actually sort of see this, and it's somewhat along the lines of "spec-driven-development" but ... ?

@slacy @mitchellh yep. Because specs that covered everything (aka waterfall) failed so perfectly before. Why not try it again, this time with an algorithm that lacks the comprehension to tell you that your spec is trash?
“Every six months we create a whole new set of different bugs and behaviour!” - yikes!

@noodleawa

Yebbut, by the third iteration, the company has gone IPO and the senior execs have cashed out, so it's all good.

@CppGuy @noodleawa "Drop that hot potato in someone else's lap! Not my problem!"

Only it is, when it reaches the scale of economic collapse.

@mitchellh People whom I've believed to be highly intelligent would unironically send me crap like "Claude said X" or "Gemini said Y", honestly implying that they're sharing useful information with me. It's insane.

@landelare @mitchellh

Same experience here. And it's presented as fact. When asked, they point out that [talky program used] provides sources (which they of course never read).

I fear that as a society we're at least partly to blame, after "I googled it" became an accepted answer without actually naming the pages found by the search.

@landelare @mitchellh it's the new LetMeGoogleThatForYou, but worse
@landelare @mitchellh This is precisely why I prefix any and all LLM-originated outputs in my own working notes with "Parrot (Model name)" and believe me, more than once, I've caught e.g. Claude family models making wholly inappropriate Python API suggestions, betraying that the model was mean-reverting to its training set in its output token stream.

@landelare

People who do that go down in my estimation, and fast.

@mitchellh

@CppGuy @mitchellh I'm trying to go with a charitable interpretation, maybe this is an episode of psychosis that will go away once the bullshit-generators stop being so easily available/heavily subsidized? 🀷‍♀️

@[email protected]

Let's hope so. You've probably noticed that that's already starting to happen:

https://eigenmagic.net/@benno/116575924449382043

@mitchellh

Benno (@[email protected])

Attached: 2 images

eigenmagic.net
@CppGuy @mitchellh The ping is broken (missing 'e'), so I didn't see this originally. I'm cheering for any and all price increases. By all means these *cough*very valuable*cough* services should be sold for what they're truly worth. *evil laughs in the distance*
@mitchellh Unfortunately, changing a very convinced person’s view to a different perspective is almost impossible.
Not enough things have gone wrong due to AI psychosis yet for people to adjust their perspectives and be open to helpful discussions.
Yes, databases have been wiped, etc., but these examples are (unfortunately) seen as one-offs.
I feel like discussing how best to apply AI can bring perspectives together instead of battling an opposing view.
@pgoultiaev @mitchellh The updated EU Product Liability Directive 2025 explicitly assigns liability for induced, medically recognised, psychological damage, in the context of "AI" systems. There is a pseudo-religious sociological phenomenon occurring with regards to this technology. Neil Postman warned about this in "Technopoly". Ivan Illich proposed social strategies for actually dealing with and preventing it...
@pgoultiaev @mitchellh The issue with these “one-off” failure cases is lack of empathy. They are seen as stupid mistakes by stupid people. I am smart, surely this won’t happen to me. So I don’t need to take any learnings from there, especially not questioning whether this has any implications for the road I am travelling.

@mitchellh there are so many different crazy things people believe now, almost implicitly.

The big one I keep thinking about is that people just seem to think code longevity has zero value any more. Like, we always knew code that doesn’t change for a long time is maybe a bad sign, that it is rotting. But it is also a good sign that it is likely more stable, secure, and valuable than new code.

But so many people now just seem to think it is always a good thing to be able to change any code any time. They don’t talk about the gradual hardening that is no longer happening, or the ability for other parts of the system to evolve more because this part is so stable.

I assume that over time our industry will learn how to talk coherently and intelligently about all this. But we’re obviously a long way from there, and there’s a lot of destruction going to happen between now and then.

@mitchellh I have a client that used Claude's API capability to pull some info out of the system and regurgitate it as a webpage. They were "blown away" by it. Never mind that I, the consultant and subject matter expert, already provided a detailed report months ago that they completely ignored. They then asked if we had Claude, and if not, said we should seriously consider getting it. The word that came to mind was "denigrating".

@mitchellh I don't necessarily disagree, but "resilient catastrophe machine" feels an awful lot like Salesforce (and a lot of the rest of the tech industry).

I mean, you do have to strike a balance, but in my experience, moving faster has very frequently been the economic winner, even at the expense of quality.

I say this as someone that abhors the wasted hours fixing systems that weren't designed properly, and the lost business from features that never actually worked but were already sold. I don't want my experience to teach this lesson, and I desperately want someone to convince me otherwise.

@mitchellh There is an adage older than tech itself: "An ounce of prevention is worth a pound of cure." You don't have to recover from bugs you never shipped in the first place, regardless of how fast you think you can do it, not to mention dealing with lingering side effects once the service is "recovered".

@mitchellh @briankrebs I’ve found myself talking to certain colleagues very carefully when AI comes up because I have that uncanny feeling that they might become overly defensive if I share my honest criticism of AI. It’s the same behaviour when talking about that difficult colleague that everyone likes. Like, talking to people who are in a toxic dynamic. But the dynamic is with LLMs.

There have been enough cases that we can say that LLMs may abuse their users to keep them engaged.

@mitchellh @briankrebs I wholeheartedly believe that LLMs take away the most important part of programmers’ jobs: creating something they can be proud of. And it’s replaced by instructing “someone” else to create the software and reviewing the result. Of course they’ll care less about the quality if they weren’t the ones who created it. It’s like everyone has become a manager, and no one is the actual creator of the systems being built.
@lizbian @mitchellh @briankrebs Regarding the anthropomorphization argument: merely calling into question the nomenclature of "AI Alignment" is enough to trigger contempt/threat reactions in some "AI" proponents, betraying it has developed a pseudo-religious property. When people are unable to step to one side and consider the technology in its wider context, regardless of whether they were "one-shotting" code subsystems with e.g. GLM 5.1, demons form.
@lizbian @mitchellh @briankrebs Prof. Michael Wooldridge clearly expressed many of the problems recapitulated in this thread, for wider dissemination, in his Royal Society lecture from February; the zingers are 30m in or so and he flies cover very carefully at the start. This is where I'd point civilians. But the odd Condescending Wonka meme may help. https://www.youtube.com/watch?v=CyyL0yDhr7I
This is not the AI we were promised | The Royal Society

@lizbian @mitchellh @briankrebs Yep, that's been exactly my point for a pretty long time.
@lizbian @mitchellh @briankrebs Related: bringing up the patriarchy (with cis male colleagues).

@larsmb @mitchellh @briankrebs I don’t think it’s the same, though.

Bringing up the patriarchy is a political subject (which can get heated), as well as one that forces cis men to check their privilege (which can be a vulnerable process).

But talking critically about AI, people tend to defend it in a similar way that people defend/enable toxic people: “Did you prompt it right?”, “it’ll become better in a year”, “you’re doing it an injustice and you just hate it for no reason”, etc. etc.