A rogue AI led to a serious security incident at Meta

Last week, an AI agent similar to OpenClaw triggered a high-severity security incident at Meta by independently giving inaccurate technical advice on an employee forum.

Source: The Verge
The two errors, then, were that the LLM hallucinated an answer, and that a human trusted that answer without reasoning about it. The fix for this common pattern is to reason about LLM outputs before acting on them.
If "the level of awareness that created a problem, cannot be used to fix the problem", then you're asking too much if you expect a human to reason about an LLM output when they are the ones that asked an LLM to do the thinking for them to begin with.