AWS deleted all data despite redundancy, backups, and a dead man's switch. This is why you need to keep an offline copy of your data. The 3-2-1 backup rule is a good data protection strategy: keep 3 copies of your data, store them on 2 different types of storage media, and keep 1 copy offsite, even if that's under your bed or in your office. Don't trust your hosting company's backup service.
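The 3-2-1 rule above can be sketched in a few lines of shell. This is a minimal demo, not a production backup script; it uses throwaway directories, and in real use the three paths would be your live data, a second drive or NAS, and a drive you physically rotate offsite:

```shell
#!/bin/sh
# 3-2-1 sketch: the live data plus two backups on different media.
# Demo uses throwaway directories; in real use SRC is your data,
# DISK2 a second drive/NAS, and OFFSITE a drive kept out of the building.
set -eu

WORK=$(mktemp -d)
SRC="$WORK/data"        # copy 1: live data
DISK2="$WORK/disk2"     # copy 2: different storage medium
OFFSITE="$WORK/offsite" # copy 3: kept offline, away from the others

mkdir -p "$SRC" "$DISK2" "$OFFSITE"
printf 'important\n' > "$SRC/notes.txt"

cp -a "$SRC/." "$DISK2/"
cp -a "$SRC/." "$OFFSITE/"

# Count how many independent copies of the file now exist.
echo "copies: $(ls "$SRC" "$DISK2" "$OFFSITE" | grep -c notes.txt)"
```

The point is that each copy is written by a separate step to a separate target, so no single provider, account, or disk failure can take out all three.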

https://www.seuros.com/blog/aws-deleted-my-10-year-account-without-warning/

#cloud #aws #sysadmin #IT

AWS deleted my 10-year account and all data without warning

After 10 years as an AWS customer and open-source contributor, they deleted my account and all data with zero warning. Here's how AWS's 'verification' process became a digital execution, and why you should never trust cloud providers with your only copy of anything.

Seuros Blog

@nixCraft Don't trust anyone but your own hardware that you manage yourself. Anything else is asking for disaster.

Also, he totally lost any sympathy when he blamed "1995-era Java parameter parsing" for the failure.

@derek @nixCraft It's worse than that. Your own hardware can suddenly die or be stolen, so you need multiple hardware units in a few locations.
@nixCraft one of my dreams is a distributed platform that enables redundant hosting across multiple providers, so if any one provider deletes your account it won’t have much effect. This is basically why
@nixCraft If you use someone else's computer, you have to trust that they manage it and keep doing it.

@nixCraft Nowadays I would move to a more complex 33-22-11-00-Backup.

Keep at least 3 copies of the data
and regularly test 3 of them.

Use 2 different types of storage media
managed by 2 fully independent software packages or service providers.

Keep at least 1 copy at a different location,
but also keep at least 1 of them at a location you own yourself.

Every service you haven't thoroughly checked yourself counts as 0 valid copies. Likewise, every copy that will be automatically overwritten without forced retention (RAID, most live replication/geo-redundancy) counts as 0 copies.
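The "regularly test them" part of the scheme above is the step most setups skip. A minimal restore-test sketch in shell (throwaway paths, tar as a stand-in for whatever your real backup job produces): restore into a scratch directory and diff against the live data, instead of trusting that the backup job exited 0.

```shell
#!/bin/sh
# Restore-test sketch: a backup only counts once you have restored
# from it and verified the contents. Demo uses throwaway directories.
set -eu

WORK=$(mktemp -d)
SRC="$WORK/data"; RESTORE="$WORK/restore-test"
mkdir -p "$SRC" "$RESTORE"
printf 'payload\n' > "$SRC/file.txt"

# "Backup" as a tarball, the way a simple job might produce it.
tar -C "$SRC" -czf "$WORK/backup.tar.gz" .

# Restore into a scratch dir and compare against the source.
tar -C "$RESTORE" -xzf "$WORK/backup.tar.gz"
if diff -r "$SRC" "$RESTORE" >/dev/null; then
    echo "restore test: OK"
else
    echo "restore test: FAILED" >&2
    exit 1
fi
```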

@adlerweb @nixCraft The “1 copy at a different location” has always bothered me. The advice should always have been to keep the data in at least two totally separate locations and providers. E.g., AWS US-East-1 and Azure US-East-1 are separate providers, but not separate locations; they’re down the street from each other.

Sure, it’s *ideal* for one of the providers to be you, but there are a lot of small companies which don’t have the expertise to manage that. Turnkey solutions like NAS units from major vendors tend to have nightmarish security flaws.

@nixCraft

Wow! What. A. Story. A lot of comments about better backup strategies etc., but this is not really that kind of warning story; it's more one about getting algorithmically nuked, for no reason. And then no redress, no recovery, because of poor (even hostile) commercial/business practices. đŸ˜ŹđŸ€Ź

This especially resonated:
“ You might be thinking, “What are the odds they target me?” But that’s the wrong question. I thought the same thing—with my level of exposure and contributions, surely they could just write my name down and not bother me with stupid verification requests about whether I exist.

“But you’re not being targeted—you’re being algorithmically categorized. And if the algorithm decides you’re disposable, you’re gone.

“Doesn’t matter if you’re a verified open-source contributor. Doesn’t matter if you’ve been a customer for a decade. If you don’t fit the revenue model, if you don’t engage with support regularly, if your usage patterns look “suspicious” to a poorly trained ML model—you’re just another data point to be optimized away.”

So sorry! MENA AWS truly sucks. 😠

@Su_G @nixCraft that's Terry Gilliam's Brazil in the era of cloud
@nixCraft too expensive to maintain, and Jeff has to pay the bills for the wedding in Venice!
@nixCraft Google and Apple also capriciously terminate accounts, leaving you stranded. Plan accordingly.
@fazalmajid yeah, in the case of Google there is no human tech support. You literally talk with bots, or community forums where once in a while a Google employee may read your case
I believe that if you pay for Google One, you get exclusive human support 24/7 from them


But, who uses Google One?

@nixCraft if you have a developer account and have some dispute with them, they will also delete your personal account, that sort of thing.

Not having customer service humans can be a feature. There are no gullible humans for hackers to social-engineer into taking over your Google Voice number, for instance.

@nixCraft
I was on board until I read this bit:

"I was alone. Nobody understood the weight of losing a decade of work. But I had ChatGPT, Claude, and Grok to talk to. Every conversation revealed I wasn’t alone in being targeted by AWS—especially MENA."

Finding it hard to sympathise with someone who gets their reality from grok and chatgpt.

@nixCraft I have a lot of sympathy for their struggles with a big cloud provider and the very poor treatment they got. When a digital service company or org acts this poorly it's so infuriating; it gives you the urge to just build something so you never have to rely on them again.

Having this in mind, I'm not sure I get the idea of building a tool to migrate from one shitty provider to another. I mean, Google, MS, and especially Oracle (there are so many similar cases with Oracle Cloud) aren't much better; when they delete your account, no appeal will help you (unless you are in the EU with the DSA). The issue isn't specific to AWS, but to this style of business. I don't get why doing the same thing with another company conducting business in the same style would lead to a different end.

Why not make tools to set up the same stuff at home instead, on a personal server or any kind of infra, meaning infra that can be built anywhere?

@nixCraft I totally get why this person is salty (particularly with the "AWS developers regularly contact me for help"), but that blog post shows little sign of learning from their own mistakes, instead blaming it all on AWS. They clearly failed to understand what the 3-2-1 rule for backups _actually_ means, and as a result of a provider screwup they've lost everything. The provider shouldn't screw up, but screwups happen, and any professional should plan accordingly - and test that plan often.

@nixCraft the more I think about this, the more this feels like a CYA blog post ("they screwed up and I'm not taking it any more!") because a customer who realises their on-retainer tech person thinks "everything in one provider linked to one account" is a valid interpretation of the 3-2-1 practice for backup retention might well have questions that prompt them to seek out second opinions on that tech person's work...

Hell, even a monthly rsync to an external drive would have been enough here.

@nixCraft Ooof, learning the 3-2-1 backup rule the hard way I guess?

I've also lost 10+ years of data, many years ago. But it was my own mess-up. I tried to duplicate my backup but overwrote my data with the empty drive. A timeless classic.

@nixCraft A local "cloud" provider lost ALL client data last year due to hacking. There was no media coverage and social posts on a local dev group were quickly removed. Obviously the owners had thick connections. We found out only because a client of ours used the "cloud" provider services and lost a lot of data.
@nixCraft I don't want to sound mean, but is there any evidence other than this article?
@nixCraft Maybe if we get AWS out of the government, Jeff Bozos will let The Washington Post post the truth again.
@nixCraft The biggest failing of today's tech world is the fact that nobody can speak with people who actually know what's going on at any of the mega corporations.

@nixCraft

Nice post; however, moving to Google Cloud is not better. Use NextCloud or self-host it.

@nixCraft - I had a similar experience with OwnCube, which about 4 years back I was using to host a remote NextCloud instance. Shortly after purchasing for a one time fee what was supposed to be a perpetual subscription, they had a massive failure and lost all my data, and basically closed out my account when they finally restored service after a couple week outage. Luckily I had everything synced on a local drive, but suffice it to say I take my business elsewhere.

@nixCraft

Years ago when having the Benefits-of-Cloud argument someone threw at me the "Sysadmins just don't like having their pets taken away and replaced with cattle" line.

I thought about that a bit and gave the response: "I'll gladly give up the pets for the resources a herd of cattle bring. But on my ranch, not some spread I have no control over."

That's the point. If you do not have your IP on your own hardware then you don't actually have any IP worth a dime.

@nixCraft The "cloud" is just someone else's computer. The take away is not "AWS is bad", it is backup on your own media. Also: test your backups regularly - sometimes formats and processes change and things go awry.
Also: if all that is true, I think he has a legal case for restitution. Which conversely makes me wonder if he is perhaps exaggerating the damage in order to get more out of AWS.
@nixCraft Note the follow-up: it was escalated to AWS management, the account was restored, and AWS is putting in place an internal never-do-this-again fix. However, my own AWS backup approach is to have a separate S3-only account for archives and mirror that account into Google Data Store.
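That separate-account mirror is roughly two sync jobs. A sketch of the commands involved, with everything hypothetical: the `archive` AWS CLI profile, the bucket names, and an rclone remote named `gcs` are all assumed to be set up beforehand, so these lines are a fragment to adapt rather than something runnable as-is:

```shell
# Mirror the primary bucket into a bucket owned by a separate,
# S3-only AWS account (credentials under the hypothetical
# "archive" profile, which must be able to read the source too).
aws s3 sync s3://primary-bucket s3://archive-bucket --profile archive

# Second, independent copy outside AWS entirely, via rclone.
# "archive" and "gcs" are remotes assumed to exist in rclone.conf.
rclone sync archive:archive-bucket gcs:mirror-bucket
```

Splitting the archive into its own account is the key move: a closure or compromise of the primary account can't touch credentials it never had.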

@nixCraft Yeah, even if you have the data, code, pipelines and everything for their vendor lock-in services and they decide to do this you are screwed.

Your story is just noise to them, and their only worry now is that it will blow up somewhere publicly.

AWS Restored My Account: The Human Who Made the Difference

The untold story of how one AWS employee turned a 20-day nightmare into a lesson in corporate accountability. Sometimes all it takes is one person who actually gives a damn.

Seuros Blog