AI psychosis among the C-suite is really high now. I’m seeing it at work, where they validate everything using AI even though they know it screws up. For example, if I tell them a reboot isn't needed for a CVE because we aren’t running the app directly on the server — it's in Docker — they will immediately fact-check me with AI right while we're talking. It’s just one example, but I’ve never seen such bizarre behavior. They treat AI like some divine truth. Has anyone noticed this?

@nixCraft yes. The way it is set up, it creates easily digestible plausible bullshit.

Easily, because there is no social, emotional, or cognitive friction or effort needed. It starts responding immediately and pleasantly.

Digestible, because it is trained on the most frequently occurring sentences, contexts, and words. No new language, no cognitive effort to understand or investigate underlying concepts, no awkward idiosyncratic language from other humans who think, feel, and express themselves differently.

Plausible, because it is a language model, so the grammar, tone, and words fit expectations with high probability.

Bullshit, because the output can be either correct or wrong, and the model has no regard for which — it has no grounding in reality.

Something makes a certain part of society very susceptible to this.