A rogue AI led to a serious security incident at Meta

Last week, an AI agent similar to OpenClaw triggered a high-severity security incident at Meta by independently giving inaccurate technical advice on an employee forum.

The Verge
The two errors, then, were that the LLM hallucinated an answer, and that a human trusted that answer without reasoning about it. The fix for this common pattern is to reason about LLM output before acting on it.
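As a rough illustration of what that can look like in practice, here is a minimal sketch (the allowlist, function names, and approval prompt are all hypothetical, not taken from the Meta incident) in which an LLM-suggested shell command is treated as untrusted input: it has to pass an automated sanity check and an explicit human sign-off before anything is allowed to act on it.

```python
import shlex

# Hypothetical guardrail: never act on LLM output directly.
# The suggestion is parsed, checked against a small allowlist,
# and still requires a human to approve it before anything runs.

ALLOWED_COMMANDS = {"ls", "cat", "grep", "git"}  # assumption: deliberately tiny allowlist


def vet_llm_suggestion(suggested_command: str) -> bool:
    """Return True only if the suggested command passes basic checks."""
    try:
        tokens = shlex.split(suggested_command)
    except ValueError:
        return False  # malformed quoting: reject outright
    if not tokens or tokens[0] not in ALLOWED_COMMANDS:
        return False  # unknown or privileged command: escalate to a human expert
    return True


def confirm_with_human(suggested_command: str) -> bool:
    """A human must read and explicitly approve the command before it is used."""
    answer = input(f"LLM suggests running: {suggested_command!r}. Approve? [y/N] ")
    return answer.strip().lower() == "y"


if __name__ == "__main__":
    suggestion = "rm -rf /var/lib/payments"  # example of advice that should never pass
    if vet_llm_suggestion(suggestion) and confirm_with_human(suggestion):
        print("Approved; hand off to the execution layer.")
    else:
        print("Rejected: do not act on unverified LLM advice.")
```

The specific checks matter less than the shape of the workflow: the model's answer only turns into an action after someone, or something, has reasoned about it first.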

A big problem now, both inside companies and outside them, is that official support channels are being replaced by chatbots, and you often have no option but to trust their output, because a human expert is simply no longer available.

If I post a question to the internal payments team's forum about a critical processing issue and some "payments bot" replies, should I be at fault for trusting its answer?