I'm worried about AI psychosis. Specifically, I'm worried about the psychosis that makes "capital allocators" spend *$1.4T* on the money-losingest technology in human history, in pursuit of a bizarre fantasy that if we teach the word-guessing program enough words, it will take all the jobs.

--

If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

https://pluralistic.net/2026/04/13/always-great/#our-nhs

1/

@pluralistic

I have been describing AI as a business cult. CEOs signing up to spend mountains of money, jettisoning any analytical discipline. I chalk it up to psychosis born of unchecked monopoly. They're all at the Davos circle jerk getting played by the most sociopathic grifters among them.

What could go wrong?

@jawarajabbi @pluralistic

It's difficult for CEOs to escape AI when their board or investors are asking how they're going to protect the company from a potentially instant devaluation of their codebase. If the AI bet turns out to work, a 10-year codebase could lose 50% of its value because software can now be built in half the time using AI agents. Not to mention the commodification of software features that would occur when the barrier to entry for building software is lowered.

@jawarajabbi @pluralistic

It doesn't help that investors are now being pitched POCs that were built entirely using AI agents, which becomes an incentive to put pressure on existing investments to increase productivity using AI. Of course what's missing from that picture is what happens after the POC becomes a product, and security and compliance enter the picture.

@davidsonsr @jawarajabbi @pluralistic There is anecdata for the beginnings of a pushback from professional risk-control folk, credit reference agencies, and reinsurers.

@bms48 @jawarajabbi @pluralistic

My suspicion is that AI companies will attempt to lobby for reduced compliance requirements with software built using AI, but I'd expect some pushback on that from regulators.

@davidsonsr @jawarajabbi @pluralistic There has already been a pushback in the software industry toward proper security compliance, audit, and assignment of liability, pre-dating and running in parallel with the self-delusion of "vibe coding" (unless you consider throwaway prototypes of web CRUD apps, rather than actual engineering, the end goal): https://cacm.acm.org/practice/the-software-industry-is-still-the-problem/ Reinsurers, credit agencies, and risk-control professionals are starting to push back against GenAI proponents. @bsdphk
The Software Industry Is Still the Problem (Communications of the ACM)

@bms48 @davidsonsr @jawarajabbi @pluralistic

Not to mention the EU's revision of the Product Liability Directive, a revision undertaken only and specifically to apply "no-fault" product liability to software in the consumer market.

Coming to your EU country later this year...

@bsdphk @davidsonsr @jawarajabbi @pluralistic
I snagged a high-level guide to the updated EU directive from... an insurance firm. Told ya! "'Medically recognised damage to psychological health' is now also expressly included here."
I think the EU have identified Meta in their threat model and made it part of their legislative framework.
Right on, Commander.
https://www.hdi.global/globalassets/_local/international/downloads/group_product-liability-directive/HDI_Factsheet_Liability_NewProductDirective_INT_en.pdf