A single point of failure triggered the Amazon outage affecting millions
We also need more individuals paying for “business” Internet connections at home. We need self-hosters to be able to feel comfortable running public services from their homes. And so we need a set of practices and recipes to follow, so a self-hoster can feel confident that, if one thing gets broken into, the other few dozen things they’re hosting will stay safe.
The “family nerd” hosting things for the family needs to be a thing again. Sorry, friends, I know family tech support sucks. It’ll suck so much more when it’s a web site down and nobody can reach their kid’s softball team page, and there’s a game next weekend, etc. But we’ve seen what happens when we abdicate our responsibilities and let for-profit companies handle it for us.
(I wish so hard that I had a solution ready, a corporate LAN in a box, that someone can just install and use. I’m working on something, but I’m pretty sure I over-complicated it. It doesn’t need to be Fort Knox, it just needs to be pretty good. And I suck at ops stuff.)
Well the rest (0.1%) needs good upload speed for their home servers. /s
Companies with lots of traffic certainly need fast uplinks.
So upload speeds are not irrelevant and need to keep pace with download speeds, for the health of the Internet infrastructure; the benefit trickles down to households eventually.
Tell me more about your thing!
I’m working on a decentralised sharing protocol, and I’d love getting likeminded people together.
Homes should come with a static IP address.
Web Advertising companies, data harvesting companies, oppressive governments have entered the chat “I fully support this proposal. Every home should have a static IP address.”
You’re right to be frustrated. Mine is the same way. It’s ok to be passionate about that, and to refuse to reward greedy ISPs by paying extra for a business account. (In many cases you might even need both connections: if you worry about occasional denial-of-service attacks, for example, you need to be sure attackers can’t also knock out your ability to work from home.)
I think there’s a compelling argument in favor of protecting diversity of hosting and preventing a monoculture or a monopoly. It’s not super compelling, but it’s out there.
Aren’t those even less reliable? I participated on MetaFilter for a long time, a website running on a server in a guy’s closet. It was up and down, up and down. It became the bane of his existence. It was slow. I’ve heard other similar stories over the years.
So I’m genuinely curious - how would this solve anything?
We need to democratize the internet again, every generation there’s a ma bell pretending they own the internet. Current Gen is Google, AWS, Azure and the like, with ISPs just making sure they get their cut.
I don’t have an issue with these services existing, but in such a way that everything depends on a couple companies? Dangerous for everyone.
“There’s a monopoly” — proceeds to list 3 separate providers. Don’t forget there’s also Akamai, now we’re up to 4.
The issue is more with the companies that choose to use cloud providers. They’re the ones attempting to cheap out because they don’t want to pay infrastructure costs. There’s also a lack of knowledge among engineers about how to build redundant, reliable systems.
Not everything on the internet went down. There’s plenty that was just fine. So I really don’t know what “democratizing” it would gain, or how.
It’s not the responsibility of the cloud providers to democratize the internet, I don’t know why you thought anyone was making that argument.
Cloud providers however are responsible for their negligence given their role in the current internet.
“There’s a monopoly” — proceeds to list 3 separate providers. Don’t forget there’s also Akamai, now we’re up to 4. Oh, and Cloudflare… so that’s 5.
That’s called a cartel, and a cartel can fucking monopolize shit, dumbass.
proceeds to list 3 separate providers
Just don’t look too hard at the market share or the client composition, sure.
The issue is more so with companies that choose to use cloud providers. They’re the ones attempting to cheap out because they don’t want to pay infrastructure costs.
I mean, do you tell people they’re cheaping out because they hire a plumber rather than spending eighteen months learning to DIY every pipe in their house? There’s nothing fundamentally wrong with outsourcing to cloud services on its face. A couple big warehouses at strategic points in town specifically designed to operate as central hubs for digital traffic makes far more sense than every single office building having a dozen different floors with two IT guys of dubious quality in a badly ventilated closet manning cobbled together rack space.
For anyone downvoting, I’d love to hear what “democratizing” the internet means, how it would work, or be functional.
One of the more successful American models for publicly owned and operated data infrastructure:
For starters: thank you for a thought-out response. It feels like most people are missing the core point and just blaming the provider.
Even if there were a “public” public cloud, the underlying issue I’m getting at is with the companies that are using it. AWS has multiple regions. There are multiple cloud providers such as GCP and Azure too. Yet the companies are the ones defaulting to a single region, single provider configuration, which as we all know is still a SPOF, no matter what redundancy is built in.
To that point, nowhere am I saying that you can’t democratize things.
Monopolies exist exactly like this: not competing fairly, and coordinating with one another so as not to encroach on each other’s territory.
Ever wonder why despite there being dozens of ISPs in the country, you’ve only ever got an option for like a main one, and an intentionally shitty one to make the main one look better?
It’s all a rigged game.
My main point, which may have been buried in my quickness to type things, is that it is on the individual companies to choose how they design and architect their systems. This was only a problem in us-east-1. They could have used other AWS regions, they could have used Azure or GCP. They could have used a multi-cloud or hybrid solution, and none of this would have had an impact.
AWS is offering infrastructure, but it’s still on the companies to decide how they’ll use it. The ire should be placed on them, just as much, if not more, for taking the easy way out.
Even if you were to have a co-op-owned cloud solution (democratized, as it were), if companies choose to host in only one datacenter/region, it’s squarely on them.
A lot of these big names that went down have very poor infrastructure practices if a single region of a single provider took them out. It’s definitely not for lack of money on their part.
You’re right, though. AWS has far more data centers/regions. Even if a company only uses AWS, they can set up High Availability/Disaster Recovery solutions that replicate across AWS regions.
But they won’t because:
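The cross-region failover idea above can be sketched in a few lines. This is an illustrative, provider-agnostic sketch, not real AWS tooling; the region names, endpoints, and health-check logic are all hypothetical placeholders:

```python
# Illustrative sketch of client-side multi-region failover.
# Region names, endpoints, and health semantics are hypothetical,
# not tied to any real provider API.

def choose_endpoint(regions, is_healthy):
    """Return the endpoint of the first healthy region, in priority order."""
    for name, endpoint in regions:
        if is_healthy(name):
            return endpoint
    raise RuntimeError("no healthy region available")

regions = [
    ("us-east-1", "https://app.us-east-1.example.com"),
    ("eu-west-1", "https://app.eu-west-1.example.com"),
]

# Pretend us-east-1 is down, as in the outage being discussed:
down = {"us-east-1"}
print(choose_endpoint(regions, lambda r: r not in down))
```

In practice this decision usually lives in DNS (health-checked failover records) rather than application code, but the priority-ordered fallback is the same idea.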
Reminder to everyone, if you aren’t necessarily worried about uptime too much, and have a spare device at home, you can host personal websites and various services that might be useful for yourself or friends and family. To keep it simple, all you would really need is a domain name, Dynamic DNS (or a static IP), and port forwarding on your router.
Keep your device and router updated and reboot it every once in a while to load the updated kernel. Then just install some web server software or whatever on your device and point your domain to it.
Together, we can decentralize the web a little bit 🙂
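As a minimal sketch of the “install some web server software” step: Python’s standard library can serve a directory of static files on its own. The port is a placeholder, and anything public-facing should really sit behind TLS, but this is enough to see the moving parts:

```python
# Minimal static file server using only the Python standard library.
# Serves the current directory; point your router's port forward at it.
from http.server import HTTPServer, SimpleHTTPRequestHandler

def serve(port=8080):
    httpd = HTTPServer(("0.0.0.0", port), SimpleHTTPRequestHandler)
    print(f"Serving on port {port} - Ctrl+C to stop")
    httpd.serve_forever()

if __name__ == "__main__":
    serve()
```

For a real deployment you would swap this for nginx, Caddy, or similar, but the principle is identical: one process listening on a forwarded port.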
It will totally depend on the equipment you plan on using, but in general, your router’s manual/documentation should say whether it supports Dynamic DNS, how to configure your firewall, and how to enable port forwarding.
From there, your device’s operating system should have documentation on how to perform maintenance, and the web server software you plan on using should have guides on how to get it running on your OS of choice.
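Dynamic DNS updates usually boil down to periodically hitting an HTTP endpoint with your hostname and current public IP. The endpoint URL and parameter names below are hypothetical placeholders; check your DDNS provider’s documentation for the real ones:

```python
# Sketch of building a Dynamic DNS update request. The endpoint and
# query parameter names are hypothetical; real providers document
# their own update APIs.
from urllib.parse import urlencode

def build_update_url(base, hostname, ip, token):
    """Compose a DDNS update URL (provider-specific in practice)."""
    query = urlencode({"hostname": hostname, "myip": ip, "token": token})
    return f"{base}?{query}"

url = build_update_url(
    "https://ddns.example.com/update",   # hypothetical endpoint
    "home.example.com", "203.0.113.7", "secret-token",
)
print(url)
# A cron job (or the router itself) would fetch this URL whenever
# the public IP changes.
```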
That’s why I suggested an up-to-date router that isn’t end-of-life. If you keep your router firmware updated, your firewall on, and your “server” updated, then you are as protected as any VPS that has ever been deployed.
Tailscale is centralized and prevents you from accessing your devices if it goes down, which is what the OP points out. If we want some decentralization, we can configure our current equipment to do so. It’s not so difficult if you spend some time reading your router’s documentation and keep everything behind it updated. NAT routing is pretty good at keeping bad things out.
What kind of services?
I’m having trouble imagining what’s possible and worth hosting for friends and family.
You are likely to get away with this if your website gets little traffic.
But too much and your ISP is likely to tell you to knock it off, or just cancel your subscription.
We need to put Amazon in the cloud.
Cuz, you know, the cloud never goes down 👎. /s
The retort to the old axiom “The cloud is just someone else’s computer” is “Yes, duh, that’s how you get economies of scale”.
In-housing would mean an enormous increase in demand for physical hardware and IT technical services with a large variance in quality and accessibility. Like, it doesn’t fix the problem. It just takes one big problem and shatters it into a thousand little problems.
I think some of you younger folks really don’t know what the Internet was like 20 years ago. Shit was up and down all the time.
I worked on a project back in 2008 where I had to physically haul hardware from Houston to Dallas just to keep a second rate version of a website running until we got power back at the original office. Latency at the new location was so bad that we were scrambling to reinvent the website in real time to try and improve performance. We ended up losing the client. They ended up going bankrupt. An absolute nightmare.
Getting screamed at by clients. Working 14 hour days in a cramped server room on something way outside my scope.
Would have absolutely killed for something as clean and reliable as AWS. Not like it didn’t even exist back then. But we self-hosted because it was cheaper.
I certainly don’t miss dealing with air conditioning, dry fire protection, and redundant internet connections.
I also don’t miss trying to deal with aging servers out and bringing new hardware in.
That work is still being done by someone in a data centre. But all these jobs went from in-house positions to the centres.
The difference is scale. When in-house, the person responsible for managing the glycol loop is also responsible for the other CRACs, possibly the power rails, and likely the fire suppression. In a giant provider, each one of those is its own team with dozens or hundreds of people that specialize in only their area. They can spend 100% on their one area of responsibility instead of having to wear multiple hats. The smaller the company, the more hats people have to wear, and the worse the overall result, because people are spread too thin.
We need to ditch cloud entirely and go in-house again.
For many many companies that would be returning to the bad-old-days.
I don’t miss getting an emergency page during the Thanksgiving meal because there’s excessive temperature being reported in the in-house datacenter. Going into the office and finding the CRAC failed and it’s now 105 degrees F. And you knew the CRAC preventive maintenance was overdue and management wouldn’t approve the cost to get it serviced even though you’d been asking for it for more than 6 months. You also know that with this high-temp event, you’re going to have an increased rate of hard drive failures over the next year.
No thank you.
There’s a huge gulf between pub clowd and shitty on-prem. My daytime contract is with an organization almost completely on-prem for privacy, although on-prem to them means priv-cloud. Space has been rented. Redundant everything piped in. Redundant everything set up. We run VMs by terraform. Wheeeeee
Point is, posing shitty on-prem as the alternative to the clowd is moving the goalposts a bit.