The problem with AI is that it makes us too productive. It generates so much, so quickly, that even when a human reviews it, we're going to miss things. A project at work generated code that did exactly what I wanted, along with documentation. Because the code all functioned perfectly and was correct when I reviewed it, I was tempted to just skim the documentation. Good thing I didn't! It mentioned in multiple places how to send me bug reports over Slack, and what Slack channel to join for support. I don't have Slack. I don't use Slack. Nobody at my workplace uses Slack. It also invented a support employee who doesn't exist, who offers support for the code in the Slack that doesn't exist.
@fastfinge
LLMs as they're currently employed are counterproductive.

Code, or indeed any text, generated by an LLM wastes so much time because it will contain errors, and imo it's faster to just rewrite things without the LLM than to try to salvage what the LLM produced.
@Theriac Hard disagree. The problem with LLMs is that people think they’re a magic “just do it” button. But even reviewing all of the output, I’m able to produce 90 percent more than I could without the LLM. And about 95 percent of what it produces is fully correct. And that’s the problem. It’s the same way pilots fall asleep when the autopilot does 90 percent of the work of flying the airplane, and then aren’t ready to take over in an emergency. If the LLM is 90 percent correct, humans easily get lulled into thinking everything is correct, so we don’t catch the critical 10 percent. LLMs are in this awkward stage where they’re accurate enough to make reviewing feel annoying, but not accurate enough to work without review. And that’s right where we humans perform our worst.