I strongly believe there are entire companies right now under heavy AI psychosis, and it's impossible to have rational conversations about it with them. I can't name any specific people because they include personal friends I deeply respect, but I worry about how this plays out.

I lived through the great MTBF vs. MTTR (mean-time-between-failures vs. mean-time-to-recovery) reckoning of infrastructure during the transition to cloud and cloud automation. All those arguments are rearing their ugly heads again, but now it's... the whole software development industry (maybe the whole world, really).

It's frightening, because the psychosis folks operate under an almost absolute "MTTR is all you need" mentality: "it's fine to ship bugs because the agents will fix them so quickly, and at a scale humans can't match!" We learned in infrastructure that MTTR is great, but you can't yeet resilience entirely.
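To make that concrete, here's a toy calculation (all numbers invented for illustration) of how an MTTR-only mindset can keep the availability dashboard green while the failure rate quietly explodes:

```python
# Toy comparison: steady-state availability = MTBF / (MTBF + MTTR).
# All numbers are made up for illustration.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Availability from mean-time-between-failures and mean-time-to-recovery."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A system that rarely fails and takes an hour to recover...
stable = availability(mtbf_hours=1000, mttr_hours=1)

# ...versus one that fails 100x more often but "heals" in ~36 seconds.
churny = availability(mtbf_hours=10, mttr_hours=0.01)

print(f"stable: {stable:.4%}")  # ~99.90% available
print(f"churny: {churny:.4%}")  # ~99.90% available, same dashboard number

# Same availability, but the second system fails 100x as often. Every
# failure is another roll of the dice: data loss, cascading damage, or
# the one incident the auto-fixer can't fix.
```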

The main issue is I don't even know how to raise this with the people I know personally, because bringing it up leads to immediate dismissals like "no no, it has full test coverage" or "bug reports are going down," which just don't paint the whole picture.

We already learned this lesson once in infrastructure: you can automate yourself into a very resilient catastrophe machine. Systems can appear healthy by local metrics while globally becoming incomprehensible. Bug reports can go down while latent risk explodes. Test coverage can rise while semantic understanding falls. Change happens so fast that nobody notices the underlying architecture decaying.

I worry.

@mitchellh I would love to see someone commission a study on this. It _feels_ like things are in general getting less reliable atm, esp. the stack I rely on for work (GitHub, Linear, Slack, Notion, VSCode, <insert-tui-tool-here>), but then I can't find any data on any of it.
@mitchellh One of the best descriptions I've heard lately was that it feels like "losing coworkers to dementia" as people adopt it: everyone feels like they know everything, but when you talk with them in person, or there's a problem that needs to be fixed _now_, it becomes very clear that the capability to do that has almost completely atrophied.
@pojntfx @mitchellh holy goats, hadn't heard that dementia analogy before, but that is exactly it. I've lost elder family to dementia, and when you've lived with it you realize that it is so much more than "forgetting"; it is literal decay of executive, cognitive capability. Not sure I should say thanks for sharing that; I'm now going to see it everywhere. 😳

@johannab @pojntfx @mitchellh I wouldn't go so far.

But the system is changing. Some parts are accelerating.

As the OP has mentioned, that might sound good on the surface. But there is always the law of unintended consequences.

One of them is that the faster "velocity" has the side effect of unmasking many issues that were there before too, but that were less of a problem when the red "self-destruct the company" button was only accessible to slow-working human employees, who considered things like "do I want to become unemployed?" before trying the button out.

Now the same permission setup that was completely broken last year and already a fatal risk, connected via an MCP server, can be triggered within 24 hours by a stochastic NLP application. That NLP application does not consider things like whether it wants to be employed tomorrow; if you prompt it with "test everything," who knows how it will interpret that?
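For what it's worth, the mitigation isn't exotic; it's the same boring gating we already apply to humans. Here's a minimal sketch of a deny-by-default wrapper that makes the red button require a human again. Every name in it is invented for illustration; this is not real MCP SDK code:

```python
# Hypothetical guard around agent tool calls: destructive tools require
# explicit human confirmation before they run. All names are invented.

DESTRUCTIVE_TOOLS = {"delete_project", "drop_database", "rotate_all_keys"}

def run_tool(tool_name: str, args: dict) -> dict:
    """Stand-in for the real tool dispatch."""
    return {"status": "ok", "tool": tool_name, "args": args}

def guarded_call(tool_name: str, args: dict, confirm) -> dict:
    """Run a tool call only if it's harmless or a human approves it.

    `confirm` is any callable that asks a person (e.g. a Slack ping);
    it's injected here so the sketch stays self-contained.
    """
    if tool_name in DESTRUCTIVE_TOOLS:
        if not confirm(f"Agent wants to run {tool_name}({args}). Allow?"):
            return {"status": "denied", "tool": tool_name}
    return run_tool(tool_name, args)

# "test everything" from an agent now stops at the human gate:
print(guarded_call("drop_database", {"env": "prod"}, confirm=lambda q: False))
# {'status': 'denied', 'tool': 'drop_database'}
```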

What I often see missing in discussions about AI (especially the current LLM-centered discussions; AI is way more than LLMs, to be honest) is the context analysis:

- Oh, oops, see what bad things happened because of AI.
- Yeah, right, the AI might be an enabler, but if you had behaved like that with a human instead of an AI, the outcome would have been just as bad. Risky behavior/business processes don't become safe just because you trust some random dude not to be a serial killer.

To put it differently, if you trust things that some random dude says without checking/validating them, my answer to that would be "They eat the dogs; they eat the cats." I've got news for you: humans lie. It's a human survival strategy that works. People lie on their timesheets. The GOP even acknowledges that they lie because it works in the polls. But some random machine that you know has issues with exactly that, and you expect it to tell the truth by default? What even is the truth?
So yeah, LLMs are magnificent NLP algorithms (compared to what we had before). But use them wrongly and you end up in trouble. And the companies running the circus don't make it easier either: e.g. OpenAI's ChatGPT instant mode (which is basically the only thing accessible on the free tier) is literally tuned to provide fast, convenient, cheap-to-generate answers, and nobody cares whether they are wrong.
See what might be wrong with this?

OpenAI claims the system is clever enough to spot when it should switch to the better "thinking" LLM. Two problems with this:

- it means the idiot brother decides whether he needs to call in his cleverer sister to solve a problem. No concern there, right? Or he just goes ahead and uses his mental hammer on the glass door. Or on the suicidal user.
- and the free tier has basically no quota for thinking tokens.

So, quite often, it's the lobotomized cousin that stands in for the LLM industry in all the media horror stories.
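Here's a minimal sketch of why that routing design worries me structurally. The models, the confidence score, and the threshold are all stubs I invented; this is not how OpenAI's router actually works, just the shape of the problem:

```python
# Sketch of the routing flaw: the cheap model's own self-assessment
# gates escalation to the expensive one. Everything here is a stub.
import random

def cheap_model(prompt: str) -> tuple[str, float]:
    """Fast, cheap answer plus the model's *self-reported* confidence."""
    return ("quick answer", random.uniform(0.6, 1.0))

def thinking_model(prompt: str) -> str:
    return "careful answer"

def route(prompt: str, thinking_quota: int) -> str:
    answer, confidence = cheap_model(prompt)
    # Flaw 1: escalation depends on the cheap model being well calibrated
    # about its own mistakes -- exactly the thing it's worst at.
    if confidence < 0.7 and thinking_quota > 0:
        return thinking_model(prompt)
    # Flaw 2: with quota at 0 (the free tier), this branch is all you get.
    return answer

print(route("is this mushroom safe to eat?", thinking_quota=0))
```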