A rogue AI led to a serious security incident at Meta
https://www.theverge.com/ai-artificial-intelligence/897528/meta-rogue-ai-agent-security-incident
A big problem now, both internally within a company and externally, is that official support channels are being replaced by chatbots, and you have no option but to trust their output because a human expert is simply no longer available.
If I post a question to the internal payment team's forum about a critical processing issue and some "payments bot" replies to me, should I be at fault for trusting the answer?
> The fix for this common pattern is to reason about LLM outputs before making use of them.
That is politics. Not engineering.
Assigning a human to "check the output every time" and blaming them for the faults in the output is just assigning a scapegoat.
If you have to check the AI's output every single time, the AI is pointless. You could just do the check yourself in the first place.
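
To illustrate the point: here is a minimal sketch of what "reasoning about LLM outputs" tends to look like in practice (all names are hypothetical, nothing here is from the article). The review gate is exactly the expert judgment the bot was supposed to replace:

    # Sketch of "reason about LLM outputs before making use of them".
    # Hypothetical names throughout; the point is that the verification
    # step re-does the expert work the bot was meant to save.

    def ask_payments_bot(question: str) -> str:
        """Stand-in for the chatbot; returns a suggested remediation."""
        return "DELETE FROM pending_transactions WHERE status = 'stuck';"

    def expert_review(suggestion: str) -> bool:
        """A human (or a test environment) must validate the suggestion.
        This is the same judgment call the bot was supposed to replace."""
        dangerous = ("DELETE", "DROP", "TRUNCATE")
        return not any(word in suggestion.upper() for word in dangerous)

    suggestion = ask_payments_bot("How do I clear stuck payment transactions?")
    if expert_review(suggestion):
        print(f"Applying: {suggestion}")
    else:
        print(f"Rejected: {suggestion!r} -- needs a human expert anyway.")
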
"A human, however, might have done further testing and made a more complete judgment call before sharing the information"
Because a human would have been fired for posting something that incorrect and dangerous.
I'm concerned that someone had the permissions to make such a change without the knowledge needed to make it.
And there was no test environment to validate the change before it was made.
Multiple process & mechanism failures, regardless of where the bad advice came from.