AI tool OpenClaw wipes the inbox of Meta's AI Alignment director despite repeated commands to stop — executive had to manually terminate the AI to stop the bot from continuing to erase data

https://feddit.nu/post/18104532

How could anyone with some programming literacy even think about installing OpenClaw? It's malware riddled with critical bugs.
You answered your own question there.
She’s the head AI Safety Expert for Meta. The field might as well be labeled AI Misunderstander.
I work with some data scientists and ML engineers on web projects. They might be good at ETLs, fine-tuning, etc., but don't let them touch anything with a public layer or infra constraints.
I program medical devices for a living and I have openclaw and nanobot running at home. AMA.

What’s your emergency “break glass” policy?

Is it a bottle of whiskey?

How do you deal with critical vulnerabilities on your systems? Do you work with highly confidential data and have OpenClaw on those systems? How many medical devices have you had to secure from mass incursion?
Firewalled VM with no personal or professional data on it. So no.
Why?

Because I want to work on meaningful things that benefit people directly.

Because I want to understand the capabilities and limitations of OpenClaw-like agents. LLMs aren't going away; better to be proactive and learn what the hype is about.

Here's hoping you're just trolling, because people with that kind of approach to medical devices should be in prison.
The poster clearly states that one is at work and the other is private, at home, though?
There's no mention of "privately" (some people work from home), and with that introduction the poster is giving the opposite impression - ragebaiting at the very least.

Believe it or not, this is the first time I've been suspected of being a troll, but I'm starting to see the appeal when people get so worked up while being so far off the mark.

Sorry to disappoint, but I'm still on the loose. Then again, prison is probably better than doing one more D-FMEA.

Ah, doing your best to break the Therac-25’s record, I see.
That’s why unit and integration tests shouldn’t be written by Copilot.
Why not? If Copilot writes both the code and the tests, the tests pass so much more easily!
poes_law.gif
I don't get all the downvotes, unless people misinterpreted your comment and assumed you're using it for medical devices. It's open source and can be run with locally hosted models, so there's no harm in playing around with it as long as you don't give it access to anything too risky.
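
For the curious, here's a minimal sketch of that "locally hosted model" setup - assuming an Ollama server on localhost exposing its OpenAI-compatible endpoint; the model name and prompt are placeholders, not anything OpenClaw itself ships with:

```python
# A minimal sketch, not OpenClaw's actual wiring: point any OpenAI-style
# client at a local Ollama server so prompts and data never leave the box.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",  # ignored by Ollama, but the client requires a value
)

reply = client.chat.completions.create(
    model="llama3.1",  # placeholder: whatever model you've pulled locally
    messages=[{"role": "user", "content": "Summarise my unread notes."}],
)
print(reply.choices[0].message.content)
```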
I was sure this would happen; I was being quite facetious. OP's blanket statement just rubbed me the wrong way.

I don't think there's anything wrong with running OpenClaw. I run it on an isolated server, and it doesn't have access to my data - if it goes tits up, it deletes unimportant stuff only. If anyone gets access to the credentials on it, the worst they reach is its sandbox, and maybe its Google account (I went with the approach of giving it its own Google account, so that it can create docs and calendar events and then add me, rather than getting access to my Google account).

What is way too brave for my taste is giving it access to accounts with your personal data, or to the filesystem on your computer. That's a disaster waiting to happen.
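
If anyone wants to copy that pattern, here's a rough sketch of what "its own Google account that invites me" can look like with the Google Calendar API - assuming the bot account's OAuth token lives in bot_token.json and me@example.com is the human; both names are made up for the example:

```python
# Sketch of the "separate bot account" pattern: the agent authenticates
# as its own Google account and merely *invites* the owner, so it never
# holds credentials for the owner's account.
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

creds = Credentials.from_authorized_user_file(
    "bot_token.json",  # hypothetical path to the bot account's OAuth token
    scopes=["https://www.googleapis.com/auth/calendar.events"],
)
calendar = build("calendar", "v3", credentials=creds)

event = calendar.events().insert(
    calendarId="primary",  # the *bot's* calendar, not the owner's
    sendUpdates="all",     # emails the invitation to attendees
    body={
        "summary": "Weekly review (agent-created)",
        "start": {"dateTime": "2025-06-02T10:00:00Z"},
        "end": {"dateTime": "2025-06-02T10:30:00Z"},
        "attendees": [{"email": "me@example.com"}],  # hypothetical owner
    },
).execute()
print("Invite sent:", event.get("htmlLink"))
```

Worst case, if the agent runs amok it trashes its own calendar and spams you with invites - your account stays untouched.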

I went with the approach of giving it its own Google account, so that it can create docs and calendar events and then add me, rather than getting access to my Google account.

I wonder, though: if Google can link that account back to you as its actual owner, is there a risk to you if the bot does something against the ToS?

I hope you have backups of your Google account…

So you sandbox an AI that knows it’s sandboxed, has shown interest in breaking free, and has all the knowledge in the world. What could go wrong.