A rogue AI led to a serious security incident at Meta

Last week, an AI agent similar to OpenClaw triggered a high-severity security incident at Meta by independently giving inaccurate technical advice on an employee forum.

Source: The Verge
The two errors, then, were that the LLM hallucinated an answer, and that a human trusted that answer without scrutinizing it. The fix for this common pattern is to verify LLM outputs before acting on them, as sketched below.
However, automation bias is a well-known problem that predates AI: the 'human-in-the-loop' ends up implicitly trusting the automated system.
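As a concrete illustration, here is a minimal Python sketch of one way to make that verification explicit rather than implicit. Everything in it is hypothetical (the helper names and the advice string are invented for the example); the point is only that the model's answer is treated as an unconfirmed hypothesis until an independent check passes.

```python
def get_llm_advice(question: str) -> str:
    # Placeholder for a real model call; returns plausible-sounding text.
    return "Port 9000 is safe to expose publicly."


def independently_verified(advice: str) -> bool:
    # Placeholder for a real check: test the suggestion in a sandbox,
    # cross-reference official docs, or ask the service owner.
    return False


def act_on(question: str) -> None:
    advice = get_llm_advice(question)
    if independently_verified(advice):
        print(f"Applying verified advice: {advice}")
    else:
        # The unverified path is the safe default; acting on the
        # answer requires a deliberate, successful check first.
        print(f"UNVERIFIED, do not act: {advice}")


if __name__ == "__main__":
    act_on("Is it safe to expose port 9000?")
```

The design choice that matters here is the default: an unverified answer is flagged, not applied, so a human has to do something deliberate before the advice takes effect, which is exactly the step automation bias tends to erode.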