A rogue AI led to a serious security incident at Meta

Last week, an AI agent similar to OpenClaw triggered a high-severity security incident at Meta by independently giving inaccurate technical advice on an employee forum.

The Verge
The two errors, then, were that the LLM hallucinated something, and that a human trusted its answer without reasoning about it. The fix for this common pattern is to reason about LLM outputs before acting on them.
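That "reason before acting" fix could be sketched as a simple approval gate, where the LLM's draft is never used until a checkpoint passes. This is a minimal illustration, not the actual tooling involved in the incident; all names here (`generate_answer`, `post_to_forum`, `ask_and_post`) are hypothetical.

```python
# Minimal sketch of the "review before use" pattern: the LLM's output is
# never acted on automatically; an explicit approval step sits in between.
# All function names are hypothetical stand-ins.

def generate_answer(prompt: str) -> str:
    # Stand-in for a real LLM call.
    return f"Suggested answer for: {prompt}"

def post_to_forum(text: str) -> None:
    # Stand-in for the side effect we want to gate.
    print(f"[posted] {text}")

def ask_and_post(prompt: str, approve) -> bool:
    """Generate a draft, but only post it if the approval check passes."""
    draft = generate_answer(prompt)
    if approve(draft):          # human review (or automated validation) goes here
        post_to_forum(draft)
        return True
    return False                # draft discarded; nothing reaches the forum

# Usage: an approver that rejects everything means nothing is ever posted.
posted = ask_and_post("How do I rotate these keys?", approve=lambda d: False)
print(posted)  # False
```

The point of the gate is that the default outcome is "do nothing": unless a reviewer affirmatively approves the draft, it never becomes a side effect.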
It's more like the LLM "hallucinated" (I hate that term) and automatically posted the information to the forum. It sounds like the human didn't get a chance to reason about it, at least not the original human who asked the LLM for an answer.