I'm worried about AI psychosis. Specifically, I'm worried about the psychosis that makes "capital allocators" spend *$1.4T* on the money-losingest technology in human history, in pursuit of a bizarre fantasy that if we teach the word-guessing program enough words, it will take all the jobs.

--

If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

https://pluralistic.net/2026/04/13/always-great/#our-nhs

1/

@pluralistic

I have been describing AI as a business cult: CEOs signing up to spend mountains of money, jettisoning any analytical discipline. I chalk it up to psychosis born of unchecked monopoly. They're all at the Davos circle jerk getting played by the most sociopathic grifters among them.

What could go wrong?

@jawarajabbi @pluralistic

It's difficult for CEOs to escape AI when their board or investors are asking how they're going to protect the company from a potentially instant devaluation of its codebase. If the AI bet works out, a 10-year codebase could lose half its value because software can now be built in half the time using AI agents. Not to mention the commodification of software features that would follow once the barrier to entry for building software is lowered.

@jawarajabbi @pluralistic

It doesn't help that investors are now being pitched POCs built entirely with AI agents, which creates an incentive to pressure existing investments to increase productivity using AI. Of course, what's missing from that picture is what happens after the POC becomes a product, and security and compliance enter the frame.

@jawarajabbi @pluralistic

So we have developers implementing AI because they're being pressured to optimize, CEOs implementing AI because they're pressured to protect their companies, and investors buying into AI because they fear the devaluation of their investments.

Some data seem to indicate that companies believe AI adoption is still in its nascent stage, even though their own bets have paid off only marginally, so this looks set to continue for the foreseeable future.

@davidsonsr @jawarajabbi @pluralistic The problem is similar to the one we saw in the 00s and 10s: slapping a "cloud" tag on a proposition to attract investment, then slapping "blockchain" on everything, and now it's GenAI. The investors involved can't be relied on to take a sufficiently well-researched, scientific approach; they reach judgements through mediated experience, as we all do. But it's musical chairs now, given the level of mistruth and misrepresentation in play.

@bms48 @jawarajabbi @pluralistic

I'd say this is different because it's not just a marketing gimmick: it's a variable that promises an unknown amount of cost optimization.

For people who have spent their lives amassing a fortune, the prospect of losing a significant portion of it (or all of it) is an incentive to spend an insignificant fraction of that fortune on AI to hedge the bet.

@bms48 @jawarajabbi @pluralistic

One thing to take into consideration is that companies have been embedding AI into their organizations for a long time now, so it is already replacing some human tasks. Investors extrapolate from those cost savings and buy into the bet that AI will eventually replace significant amounts of organizational work.

@bms48 @jawarajabbi @pluralistic

I don't really know whether this is likely to stop any time soon, but a lot of people are now discussing the environmental effects of AI, which seems to be having some political impact.