Claude Code deletes developer's production setup, including its database and snapshots — 2.5 years of records were nuked in an instant
I mean, there’s a good reason the first rules of firearm safety are to always treat a weapon as loaded, and to never point it at anything you aren’t prepared to destroy. The key point is that you never know when some freak accident can happen: a loose pin, bad ammo, a broken spring, or just a person tripping and shaking the gun a bit too hard.
A gun should never go off by itself. You still treat it as if it can, because in the real world freak accidents happen.
Only if the user has configured it to bypass those authorizations.
With an agentic coding assistant, the LLM does not decide when to prompt for authorization to proceed. That call is made by the surrounding software, which is a normal program with hard guardrails in place. The only way to bypass the authorization prompts is to configure that software to bypass them. Many tools do allow that option, but of course you should only use it when operating in a sandbox.
The person in this article was a moron, that’s all there is to it. They ran the LLM on their live system, with no sandbox, went out of their way to remove all guardrails, and had no backup.
As I said elsewhere, if you’re denying access to your agentic AI, what is the point of it? It needs access to complete agentic tasks.
The person in this article was a moron, that’s all there is to it. They ran the LLM
No disagreement there.
if you’re denying access to your agentic AI, what is the point of it? It needs access to complete agentic tasks.
Yes, which it can prompt you for. Three options:
1. Deny everything
2. Prompt for approval when it needs to run a command or write a file
3. Allow everything
Obviously option 1 is useless, but there’s nothing wrong with choosing option 2, or even option 3 if you run it in a sandbox where it can’t do any real-world damage.
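Those three options amount to a simple approval gate around each action the agent wants to take. A minimal sketch (the `Policy` enum and `approve` callback are made up for illustration, not any real tool's API):

```python
from enum import Enum


class Policy(Enum):
    DENY = "deny"      # option 1: refuse every action
    PROMPT = "prompt"  # option 2: ask the user each time
    ALLOW = "allow"    # option 3: approve everything (sandbox only!)


def gate(action: str, policy: Policy, approve=lambda a: False) -> bool:
    """Decide whether the agent may perform `action`.

    `approve` stands in for the interactive yes/no prompt in option 2.
    """
    if policy is Policy.DENY:
        return False
    if policy is Policy.ALLOW:
        return True
    # Policy.PROMPT: defer the decision to the user
    return approve(action)
```

The point of the structure is that the gate lives in ordinary, deterministic code outside the model: with `Policy.PROMPT`, `gate("rm -rf /", ...)` never runs unless the human callback says yes.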
And therein lies the problem. You’re giving the LLM control over when it does and doesn’t ask for approval.
You can make option 2 even more fine-grained: you can allow modifications only to files in a certain sub-tree, or permit only specific commands with specific options.

A restrictive yet quite safe approach is to only permit e.g. git add and git commit, and only allow changes to files under version control. That effectively prevents any irreversible damage, without requiring you to approve manually all the time.
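Claude Code itself supports this kind of fine-graining via allow/deny rules in its settings file. A sketch of what that could look like, based on my reading of its permission-pattern syntax (the paths here are placeholders, so double-check against the current docs):

```json
{
  "permissions": {
    "allow": [
      "Bash(git add:*)",
      "Bash(git commit:*)",
      "Edit(src/**)"
    ],
    "deny": [
      "Bash(rm:*)",
      "Read(.env)"
    ]
  }
}
```

With rules like these, the "prompt for approval" flow only fires for actions outside the allowlist, and denied patterns are refused outright regardless of what the model asks for.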
“Guns are foolproof”
You should have yours taken away.
the kid gave anthropic bad instructions
LOL and you know this how?
This is like an idiot pointing a gun at something he didn’t want destroyed
No, this is more like pointing a gun downrange and then the gun fires itself and the bullet does a U-turn and shoots the user.
Not really.
If you have the agent installed, it’s like having your gun assembled.
If you have your agent enabled, it’s like having your gun loaded.
If you give your agent permissions, it’s like taking your gun off safety.
If you don’t have your agent properly sandboxed, it’s like having bad muzzle control.
And if your agent is actively running, it’s like having your finger on the trigger.
This breaks every weapon safety rule. That’s how you get a negligent discharge.
Hence, it’s like scratching your back with a loaded weapon.
LOL and you know this how?
Because Claude deleted his codebase, dude. It’s like someone shooting themselves in the foot.