Claude Code deletes developers' production setup, including its database and snapshots — 2.5 years of records were nuked in an instant

https://lemmy.nz/post/35120739

We don’t need cautionary tales about how drinking bleach caused intestinal damage.

The people needing the caution got it in spades and went off anyway.

Or maybe the cautionary tale is to take caution dealing with the developers in question, as they are dangerously inept.

Yeah this is beyond ridiculous to blame anything or anyone else.

I mean, accidentally letting loose an autonomous, untested, un-guardrailed tool in my dev environment… well, tough luck; shit happens, and it’s something for a good post-mortem to learn from.

Having an infrastructure that allowed a single actor to cause this damage? This shouldn’t even be possible for a malicious human from within the system this easily.

Most devs are ops-tarded.
Honestly, at this point, after it has happened to multiple people, multiple times, this is the only appropriate response.
Whoever did this was incredibly lazy. Why are you using an agent to run your Terraform commands for you in the first place if it’s not part of some automation? You’re saving yourself, what, 15 seconds tops? You deserve this kind of thing for being like this.
Yeah, and to do that without some sort of DR in place is peak hubris.
DR?
Disaster Recovery. Like a backup, but also includes a way to rebuild all the infrastructure surrounding it as well.
Maybe they had that, but managed it with Terraform. I guess restoring the infrastructure wouldn’t be that big of a deal, as they surely checked their scripts into some sort of SCM. I hope.

Our DR process is a slow POS … takes far too long to back up and redeploy and set up again.

I was the one that designed it. I pray I’ll never have to use it.

I’ll bet Claude Code would be happy to help you fix it 😁
🫰Done! I’ve deleted all existing recovery infrastructure! Now your disaster recovery routine has been reduced to 1 second, which is the time it takes to put your human head in your hands and cry!
Silicon Valley - Gilfoyle’s AI Deleted All Software (YouTube)

It’s a grifter running a site called “aishippinglabs.com” which charges 500 euros for a “closed community of like-minded individuals”. He’s selling AI slop and a Discord channel to other idiots who will do exactly this kind of shit, with little understanding of what is going on.
It’s an intelligence test. And if you take it, you’ve failed.
Were they also into crypto 7 years ago?
This is like blaming the gun for killing people.
Uhhh not really. Guns don’t just go off by themselves.
I mean they do sometimes without the proper safety protocols in place, but you still blame the user in the end.
They absolutely do not.

I mean, there’s a good reason the first rules of firearm safety are to always treat a weapon as loaded, and to never direct the weapon at something you aren’t prepared to destroy. The key point being that you never know when some freak accident can happen with a loose pin, bad ammo, a broken spring, or just a person tripping and shaking the gun a bit too hard.

A gun should never go off by itself. You still treat it as if it can, because in the real world freak accidents happen.

Sure. The point is it’s entirely possible to use a firearm safely. There is no safe use for LLMs because they “make decisions”, for lack of a better phrase, for themselves, without any user input.
That is not at all how LLMs work. It’s the software written around the LLM that aids it in constructing and running commands and “making decisions”. That same software can also prompt the user to confirm before it does something, or sandbox the actions in some way.
It can, but we’ve already seen many times that it does not.

Only if the user has configured it to bypass those authorizations.

With an agentic coding assistant, the LLM does not decide when it does and doesn’t prompt for authorization to proceed. The surrounding software is the one that makes that call, which is a normal program with hard guardrails in place. The only way to bypass the authorization prompts is to configure that software to bypass them. Many do allow that option, but of course you should only do so when operating in a sandbox.
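To make that division of responsibility concrete, here is a minimal sketch of how such a harness can gate actions, in Python with entirely made-up names (this is not any real agent’s API): the model only *proposes* a command, and ordinary deterministic code decides whether it runs.

```python
import subprocess

def run_agent_step(proposed_command: str, ask) -> str:
    """Run a model-proposed shell command only after explicit approval.

    The model merely proposes `proposed_command`; this surrounding
    harness code, not the LLM, decides whether it actually executes.
    `ask` is a callable that shows the user the exact command and
    returns True (approve) or False (reject).
    """
    if not ask(proposed_command):
        return "rejected"
    # Only reached with approval (or an explicitly configured bypass).
    result = subprocess.run(proposed_command, shell=True,
                            capture_output=True, text=True)
    return result.stdout.strip()
```

The hard guardrail lives in the `if not ask(...)` branch: no amount of model output can skip it, because the check is plain code, not a prompt the LLM interprets.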

The person in this article was a moron, that’s all there is to it. They ran the LLM on their live system, with no sandbox, went out of their way to remove all guardrails, and had no backup.

As I said elsewhere, if you’re denying access to your agentic AI, what is the point of it? It needs access to complete agentic tasks.

The person in this article was a moron, that’s all there is to it. They ran the LLM

No disagreement there.

if you’re denying access to your agentic AI, what is the point of it? It needs access to complete agentic tasks.

Yes, which it can prompt you for. Three options:

  • Deny everything

  • Prompt for approval when it needs to run a command or write a file

  • Allow everything

Obviously option 1 is useless, but there’s nothing wrong with choosing option 2, or even option 3 if you run it in a sandbox where it can’t do any real-world damage.
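Those three options amount to a simple policy switch in the harness; a hypothetical sketch (names are illustrative, not any real tool’s configuration):

```python
from enum import Enum

class Policy(Enum):
    DENY_ALL = 1   # option 1: never run anything
    PROMPT = 2     # option 2: show the exact command, ask the user
    ALLOW_ALL = 3  # option 3: run everything (sandbox use only!)

def may_run(command: str, policy: Policy, ask) -> bool:
    """Decide whether a proposed command may run under the given policy.

    `ask` is a callable that is shown the exact command and returns a
    bool; it is only consulted when the policy is PROMPT.
    """
    if policy is Policy.DENY_ALL:
        return False
    if policy is Policy.ALLOW_ALL:
        return True
    return ask(command)
```

Note that the policy is chosen by the user's configuration, never by the model itself.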

Prompt for approval when it needs to run a command or write a file

And therein lies the problem. You’re giving the LLM control over whether or not to ask for approval.

    You clearly have absolutely zero experience here. When you’re prompted for access, it tells you the exact command that’s going to be run. You don’t just give blind approval to “run something”, you’re shown the exact command it’s going to run and you can choose to approve or reject it.
    Unless you’re managing app permissions on Android 🙄

    You can fine-grain option 2 even more: you can give access to modify files only in a certain sub-tree, for example, or to run only specific commands with only specific options.

    A restrictive yet quite safe approach is to only permit e.g. git add and git commit, and to only allow changes to files under version control. That effectively prevents any irreversible damage, without requiring you to manually approve all the time.
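A rule like that is easy to express in the harness itself; here is a hypothetical allowlist sketch (the paths and subcommands are made up for illustration):

```python
from pathlib import Path

# Hypothetical policy: only these subcommands, only this sub-tree.
ALLOWED_COMMANDS = {("git", "add"), ("git", "commit")}
PROJECT_ROOT = Path("/home/dev/project")

def command_allowed(argv: list[str]) -> bool:
    """Permit only the explicitly allowlisted subcommands."""
    return tuple(argv[:2]) in ALLOWED_COMMANDS

def write_allowed(path: str) -> bool:
    """Permit writes only inside the project sub-tree.

    resolve() normalizes ".." and symlinks so the agent cannot
    escape the sub-tree with a path like "project/../../etc".
    """
    return Path(path).resolve().is_relative_to(PROJECT_ROOT)
```

Anything the checks reject never reaches a shell, so even a badly confused model can at worst commit garbage to version control, which is reversible.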

    “Guns are foolproof”

    You should have yours taken away.

    They are not foolproof. They will absolutely cause problems in the hands of a fool. But they will not cause problems all on their lonesome. They’re inanimate objects. They cannot do absolutely anything without interaction from the user. If you can’t understand this, you should never be allowed to own one.
    And neither can Anthropic. Anthropic isn’t randomly deleting people’s websites; the kid gave Anthropic bad instructions. It didn’t spontaneously decide anything.

    the kid gave Anthropic bad instructions

    LOL and you know this how?

    This is like an idiot pointing a gun at something he didn’t want destroyed

    No, this is more like pointing a gun downrange and then the gun fires itself and the bullet does a U-turn and shoots the user.

    Not really.

    If you have the agent installed, it’s like having your gun assembled.

    If you have your agent enabled, it’s like having your gun loaded.

    If you give your agent permissions, it’s like taking your gun off safety.

    If you don’t have your agent properly sandboxed, it’s like having bad muzzle control.

    And if your agent is actively running, it’s like having your finger on the trigger.

    This breaks every weapon safety rule. That’s how you get a negligent discharge.

    Hence, it’s like scratching your back with a loaded weapon.

    LOL and you know this how?

    Because Claude deleted his codebase, dude. It’s like someone shooting themselves in the foot.

    More a problem with the marketing, right? Imagine if guns were marketed as safe and helpful back scratchers, and then someone shoots themselves because they used the gun to scratch their back.
    They would still be fucking dumb. Believing marketing is a mark of idiocy.
    Courts generally agree that a reasonable person could believe claims made in official promotional material. That’s why it’s not legal to outright lie in marketing and they need to go through so much trouble to properly word their statements so that they’re technically true. In this case, they’re just lying. They’re saying the AI is safe to use for these tasks and it is not.
    I’m sorry you live in a shithole country where ads can outright lie.

    Imagine if your boss measured your productivity by your gun back-scratching usage.

    Because it’s happening right now. In a lot of places.

    So you’re saying it’s a tool designed to be used by anyone, including idiots, and is dangerous in the hands of idiots. And we as a society should do better to make sure this potentially dangerous tool shouldn’t be used by idiots.

    Yep, agree.

    How do you even achieve that? I have to coax it into correctly running the project locally.
    I dont understand why people aren’t sandboxing these things.
    If he had had the sense to do that, he would have had the sense to not do it at all.

    sigh

    Use LLMs as instructional models, not as production/development models. It’s not hard, people. You don’t need to connect credentials to any LLM, just like you’d never write your production passwords on Post-its and stick them on your computer monitor.

    Or don’t use LLMs at all, because they fucking lie to you constantly?
    “Lie” implies they have some kind of agency. They’re basically a Plinko board.

    Meh, they work well enough if you treat them as a rubber duck that responds. I’ve had an actual rubber duck on my desk for some years, but I’ve found LLMs taking over its role lately.

    I don’t use them to actually generate code. I use them as a place where I can write down my thoughts. When the LLM responds, it has likely “misunderstood” some aspect of my idea, and by reformulating myself and explaining how it works I can help myself think through what I’m doing. Previously I would argue with the rubber duck, but I have to admit that the LLM is actually slightly better for the same purpose.

    Hooray for outsourcing of critical thinking!

    What could possibly go wrong

    I think you’ve misunderstood the purpose of a rubber duck: The point is that by formulating your problems and ideas, either out loud or in writing, you can better activate your own problem solving skills. This is a very well established method for reflecting on and solving problems when you’re stuck, it’s a concept far older than chatbots, because the point isn’t the response you get, but the process of formulating your own thoughts in the first place.
    Right, but a rubber duck isn’t a sycophantic chatbot that isn’t capable of conceptualizing anything yet responds to you anyway.

    That is correct. However, an LLM and a rubber duck have in common that they are inanimate objects that I can use as targets when formulating my thoughts and ideas. The LLM can also respond to things like “what part of that was unclear”, to help keep my thoughts flowing. NOTE: The point of asking an LLM “what part of that was unclear” is NOT that it has a qualified answer, but rather that it’s a completely unqualified prompt to explain a part of the process more thoroughly.

    This is a very well established process, whether you use an actual rubber duck, your dog, a blog post or personal memo (I do the latter quite often), or explaining your problem to a friend who’s not at all in the field. The point is to have some kind of process that keeps your thoughts flowing and touches on topics you might not think are crucial, thus helping you find a solution. The toddler that answers every explanation with “why?” can be ideal for this, and an LLM can emulate it quite well in a workplace environment.

    True, I remember seeing so many articles about computer engineers getting psychosis and killing themselves after talking to a toddler.

    If you really cannot see the difference between what an LLM does, and the other processes you described, then I don’t know what to tell you. Good luck with the brain rot.

    It really seems like you’re wilfully misinterpreting what I’m writing.