AI tool OpenClaw wipes the inbox of Meta's AI Alignment director despite repeated commands to stop — executive had to manually terminate the AI to stop the bot from continuing to erase data

https://feddit.nu/post/18104532

I’m sure LLMs can be useful for automation as long as you know what you’re doing, have tested your prompts rigorously on the specific version of the model and agent you’re using, and have put proper guardrails in place.
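By "guardrails" I mean something like this: the model never executes a destructive action directly, it can only request one, and the request is blocked until a human confirms it. A minimal sketch (the action names and dispatch function are hypothetical, just for illustration):

```python
# Hypothetical guardrail around LLM tool use: destructive actions the
# model requests are blocked unless a human has explicitly confirmed them.

DESTRUCTIVE_ACTIONS = {"delete_email", "empty_trash", "delete_file"}

def execute_tool_call(action: str, args: dict, confirmed: bool = False) -> dict:
    """Run a tool call the model requested, gating destructive ones."""
    if action in DESTRUCTIVE_ACTIONS and not confirmed:
        return {"status": "blocked",
                "reason": f"'{action}' requires human confirmation"}
    # ... dispatch to the real tool implementation here ...
    return {"status": "ok", "action": action}
```

With a layer like that, no amount of "repeated commands to stop" is needed, because the deletions never run in the first place.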

Just blindly assuming an LLM is intelligent and will do the right thing is stupid, though. LLMs take the text you give them as input and output predicted text based on statistical patterns. That’s all. If you feed one a pile of text containing a chat history that says your emails were deleted, the text it predicts should come next, statistically, is an apology. You can feed that same pile of text to 10 different LLMs, and they might all “apologize” to you.
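To make the "statistical patterns" point concrete, here's a toy sketch of what next-token generation boils down to: sampling from a probability distribution over tokens. The tokens and probabilities are made up for illustration, but the mechanism is the point:

```python
import random

# Toy next-token distribution after a chat history that says
# "your emails were deleted" (probabilities are invented for illustration).
next_token_probs = {"Sorry": 0.6, "I": 0.25, "Okay": 0.15}

random.seed(0)  # sampling is random; a seed only pins down this toy run
tokens = [
    random.choices(list(next_token_probs),
                   weights=list(next_token_probs.values()))[0]
    for _ in range(5)
]
```

Notice there is no "understanding that emails were deleted" anywhere in this process, only a distribution that makes an apology the likely continuation.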

Because of the way LLMs work, they are inherently bad for automation. The most important property of automation is deterministic results, and LLMs cannot guarantee deterministic results. It is simply not a suitable application of the technology.