An angry admin shares the CrowdStrike outage experience

https://lemm.ee/post/37448621

IT administrators are struggling to deal with the ongoing fallout from the faulty CrowdStrike update. One spoke to The Register to share what it is like at the coalface.

Speaking on condition of anonymity, the administrator, who is responsible for a fleet of devices, many of which are used within warehouses, told us: “It is very disturbing that a single AV update can take down more machines than a global denial of service attack. I know some businesses that have hundreds of machines down. For me, it was about 25 percent of our PCs and 10 percent of servers.”

He isn’t alone. An administrator on Reddit said 40 percent of servers were affected, along with 70 percent of client computers stuck in a bootloop, or approximately 1,000 endpoints.

Sadly, for our administrator, things are less than ideal. Another Redditor posted: "They sent us a patch but it required we boot into safe mode.

"We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.

Pity the administrators who dutifully kept a list of those keys on a secure server share, only to find that the server is also now showing a screen of baleful blue.

Lol, can you imagine? It empathetically hurts me even thinking of this situation. Enter that brave hero who kept the fileshare decryption key in a local keepass :D

Seems like an argument for a heterogeneous environment, perhaps a solid and secure Linux server to host important keys like that.
Linux can shit the bed too. You need to maintain a physical copy.
CS did take down Linux a few years back… I forget the exact details.
Sounds like we may have an easier conclusion to draw here

Yes, but has it taken both OSes out at the same time? It hasn’t, and while it could happen, the chances are far lower. There’s obvious risk mitigation in mixing vendors in infrastructure, for both hardware and software, in the enterprise.

If your enterprise lost some critical services last time until RH updated their kernel, then you could have benefited from also running those services on Windows. Now the reverse is true. You could, for example, run another DC via Samba on Linux in your forest so you still have an AD. The same goes for file share servers, intermediate certificate servers (hopefully your Root CA is not always on the network) and pretty much most critical services.

Most enterprises run a lot of services off a hypervisor and have headroom to scale (or they are already on a sinking ship), so you can just spin up VMs to do that. It isn’t as if it is unreasonably labor-intensive compared to other, similar risk mitigation implementations. Any sane CCB (obviously there are edge cases, but we are talking in general here) will even let you get away without a vendor support contract for those, since they are just for emergency redundancy and not anywhere near critical unless the critical services have already shit the bed.

Sure, but the chances of your Windows and Linux machines shitting the bed at the same time are lower than if everything is running Windows. It’s exactly the same reason you keep a physical copy (which, after all, can break or burn down): more baskets to spread your eggs across.
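
To put rough numbers on that "more baskets" point, here is a back-of-the-envelope Python sketch with made-up failure rates, assuming the two stacks fail independently (shared dependencies in the real world can break that assumption):

  # Back-of-the-envelope illustration with invented numbers, not real failure data.
  p_windows_fleet_down = 0.01  # hypothetical chance per year of a fleet-wide Windows outage
  p_linux_fleet_down = 0.01    # hypothetical chance per year of a fleet-wide Linux outage

  # All-Windows shop: one bad vendor update takes out everything.
  p_total_outage_monoculture = p_windows_fleet_down

  # Mixed shop with critical services duplicated on both stacks: everything is down
  # only if both stacks fail in the same window (assuming independent failures).
  p_total_outage_mixed = p_windows_fleet_down * p_linux_fleet_down

  print(p_total_outage_monoculture)  # 0.01
  print(p_total_outage_mixed)        # 0.0001, i.e. 100x less likely under these assumptions
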
Very few businesses are going to spend the money running redundant infrastructure on two different operating systems. Most of them won’t even spend the money on a proper DR plan.
Then they get to suffer the consequences when shit like this happens

Then they get to suffer the consequences when shit like this happens

Oh, they are.

Their point is not that linux can’t fail, it’s that a mix of windows and linux is better than just one. That’s what “heterogeneous environment” means.

You should think of your network environment like an ecosystem; monocultures are vulnerable to systemic failure. Diverse ecosystems are more resilient.

Hey Ralph can you get that post-it from the bottom of your keyboard?

That’s why the 3-2-1 rule exists:

  • 3 copies of everything on
  • 2 different forms of media with
  • 1 copy off site

For something like keys, that means:

  • secure server share
  • server share backup at a different site
  • physical copy (USB, a printout in a safe, etc.)

Any IT pro should be aware of this “rule.” Oh, and periodically test restoring from a backup to make sure the backup actually works.
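
For illustration, here is a minimal Python sketch of that 3-2-1 layout for an exported key file. The three destination paths are placeholders for a secure share, an off-site replica and removable media, and the hash check at the end is the cheapest possible "does the backup actually work" test:

  # Hypothetical 3-2-1 copy of an exported key file. All paths are placeholders;
  # point them at your secure share, off-site replica and removable media.
  import hashlib
  import shutil
  from pathlib import Path

  SOURCE = Path("recovery-keys.csv")                        # placeholder key export
  DESTINATIONS = [
      Path("/mnt/secure-share/keys/recovery-keys.csv"),     # copy 1: secure server share
      Path("/mnt/offsite-replica/keys/recovery-keys.csv"),  # copy 2: share at another site
      Path("/media/usb-vault/recovery-keys.csv"),           # copy 3: physical/removable media
  ]

  def sha256(path: Path) -> str:
      return hashlib.sha256(path.read_bytes()).hexdigest()

  original = sha256(SOURCE)
  for dest in DESTINATIONS:
      dest.parent.mkdir(parents=True, exist_ok=True)
      shutil.copy2(SOURCE, dest)
      # Cheap verification: the copy must hash identically to the original.
      assert sha256(dest) == original, f"backup at {dest} does not match the original"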

    We have a cron job that once a quarter files a ticket with whoever is on-call that week to test all our documented emergency access procedures to ensure they’re all working, accessible, up-to-date etc.
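
    A hypothetical sketch of what such a job could look like, run from cron on a quarterly schedule. The ticket endpoint and on-call rota file below are invented placeholders, since the actual tooling wasn't shared:

      # Hypothetical quarterly DR-drill ticket filer, e.g. run from cron as
      # "0 9 1 1,4,7,10 *". The ticket API and rota file are made-up placeholders.
      import json
      from pathlib import Path
      from urllib import request

      TICKET_API = "https://tickets.example.internal/api/issues"  # placeholder endpoint
      ROTA_FILE = Path("/etc/oncall/current.json")                # placeholder rota export

      on_call = json.loads(ROTA_FILE.read_text())["this_week"]

      payload = json.dumps({
          "assignee": on_call,
          "title": "Quarterly drill: test documented emergency access procedures",
          "body": "Walk through every documented break-glass procedure and confirm "
                  "it still works, is accessible, and the docs are up to date.",
      }).encode()

      req = request.Request(TICKET_API, data=payload,
                            headers={"Content-Type": "application/json"})
      request.urlopen(req)
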
    Sounds like the best time to unionize
    Any time is a good time to unionize
    Agreed, it’s just that here they have them by the metaphorical balls.

    I’m in. This world desperately needs an information workers union. Someone to cover those poor fuckers in the help desk and desktop support as well as the engineers and architects that keep all of this shit running.

    Those of us that aren’t underpaid are treated poorly. Today is what it looks like if everybody strikes at once.

    This dude here coming in hot with a name, Information Workers Union (IWU). Love it

    Soo are you gonna create the community or am I?

    Nah.

    Bureau of Information & Technology Servicers.

    To preface, I want to see a tech workers union so, so bad.

    With that said, I genuinely don’t believe that most tech workers would unionize. So many of them are brainwashed into thinking that a union would dictate all salaries, would force hiring to be domestic-only, or would ensure jobs for life for incompetent people. Anyone that knows what a union does in 2024 knows that none of that has to be true. A tech union only needs to be a flat fee every month, guaranteed access to a lawyer with experience in your cases/employer, and the opportunity to strike when a company oversteps. It’s only beneficial.

    Even if you could get hundreds of thousands of signatories, the recent layoffs have shown that tech companies at the highest level would gladly fire a sizable number of employees if it meant stamping out a union. As someone who has conducted interviews in big tech, at peak the number of people who had applied for some roles was higher than the number of active employees in the whole company. In theory, Google could terminate everyone and replace them with brand-new workers in a few months. It would be a fucking mess, but it (in theory) shows that if a Google or Apple decided that it wanted no part of unions, they could just dig into their fungible talent pool, fire a ton of people, promote the people that stayed, and fill roles with foreign or under-trained talent.

    I feel you with this. They do not see themselves as workers. Thank you for the preface.
    Agreed. Sadly, many still view tech as a meritocracy and believe they’re in FAANG because of their hard work above all else, so fuck everyone else. Naturally, many change their tune once their employer enacts regressive policies, but it’s surprising how many people just have zero understanding of what a union does. They see cop shows or The Wire and assume it’ll be like the unions there…
    All true… All sad. Time to snap some fools to reality I guess

    Lemmy appears to be weathering the storm quite well…

    …probably runs on linux

    It runs on hundreds of servers. If any of them ran Windows they might be down, but unless your account is on one of them you’d be fine with the rest. That’s the whole point of federation.
    I’m so proud of this community!
    The overwhelming majority of webservers run Linux (it’s not even close, like high 90 percent range)
    I wonder if any Lemmy servers run on Windows without WSL. I can’t think of any hard dependencies on Linux, so it should be possible.
    I doubt many Lemmy servers are running enterprise level antivirus.

    If you have EC2 instances running Windows on AWS, here is a trick that works in many (not all) cases. It has recovered a few instances for us:

    • Shut down the affected instance.
    • Detach the boot volume.
    • Move the boot volume to a working instance in the same availability zone (us-east-1a or whatever).
    • Remove the file(s) recommended by CrowdStrike.
    • Detach the volume and move it back to the original instance.
    • Boot the original instance.

    Alternatively, you can restore from a snapshot taken before the bad CrowdStrike update went out. But that is not always ideal.

    A word of caution: I’ve done this over a dozen times today, and I did have one server whose bootloader was wiped after I attached its volume to another EC2 instance. Always make a snapshot before doing the work, just in case.
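
    For anyone scripting this at scale, here is a rough sketch of those steps using boto3 (the AWS SDK for Python). The instance IDs, volume ID and device names are placeholders, and the actual file removal still has to be done by hand on the rescue instance:

      # Hypothetical sketch of the volume-swap recovery above, using boto3.
      # IDs and device names are placeholders; adapt to your environment.
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      AFFECTED = "i-0123456789abcdef0"    # placeholder: instance stuck in the BSOD loop
      RESCUE = "i-0fedcba9876543210"      # placeholder: healthy instance in the same AZ
      BOOT_VOL = "vol-0123456789abcdef0"  # placeholder: affected instance's boot volume

      # Snapshot first, per the word of caution above.
      ec2.create_snapshot(VolumeId=BOOT_VOL, Description="pre-recovery safety snapshot")

      # Shut down the affected instance.
      ec2.stop_instances(InstanceIds=[AFFECTED])
      ec2.get_waiter("instance_stopped").wait(InstanceIds=[AFFECTED])

      # Detach the boot volume and attach it to the rescue instance as a data disk.
      ec2.detach_volume(VolumeId=BOOT_VOL, InstanceId=AFFECTED)
      ec2.get_waiter("volume_available").wait(VolumeIds=[BOOT_VOL])
      ec2.attach_volume(VolumeId=BOOT_VOL, InstanceId=RESCUE, Device="xvdf")

      # Manual step: log in to the rescue instance and delete the file(s) named
      # in CrowdStrike's guidance from the attached volume, then continue.

      # Move the volume back and boot the original instance.
      ec2.detach_volume(VolumeId=BOOT_VOL, InstanceId=RESCUE)
      ec2.get_waiter("volume_available").wait(VolumeIds=[BOOT_VOL])
      ec2.attach_volume(VolumeId=BOOT_VOL, InstanceId=AFFECTED, Device="/dev/sda1")
      ec2.start_instances(InstanceIds=[AFFECTED])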

    Lmao this is incredible

    Another Redditor posted: "They sent us a patch but it required we boot into safe mode.

    "We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.

    “Most of our comms are down, most execs’ laptops are in infinite bsod boot loops, engineers can’t get access to credentials to servers.”

    N.B.: Reddit link is from the source

    I hope a lot of c-suites get fired for this. But I’m pretty sure they won’t be.

    Fired? I hope they get class-actioned out of existence as a warning to anyone who skimps on QA

    C-suites fired? That’s the funniest thing I’ve heard yet today. They aren’t getting fired - the sheer scale of this is its own ass-coverage. How can they be to blame when all these other companies were hit as well?

    I guess this is a good week for me to still be laid off.

    Our administrator is understandably a little bitter about the whole experience as it has unfolded, saying, "We were forced to switch from the perfectly good ESET solution which we have used for years by our central IT team last year.

    Sounds like a lot of architects and admins are going to get thrown under the bus for this one.

    “Yes, we ordered you to cut costs in impossible ways, but we never told you specifically to centralize everything with a third party, that was just the only financially acceptable solution that we would approve. This is still your fault, so we’re firing the entire IT department and replacing them with an AI managed by a company in Sri Lanka.”

    Stupid argument though; honestly it’s just chance that CrowdStrike was the vendor to shit the bed. It might as well have been ESET. You should still have procedures for this.
    At least no mission critical services were hit, because nobody would run mission critical services in Windows, right?

    RIGHT??

    We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.

    Someone never tested their DR plans, if they even have them. Generally locking your keys inside the car is not a good idea.

    We also back up our BitLocker keys with our RMM solution for this very reason.
    I hope that system doesn’t have any dependencies on the systems it’s protecting (auth, mfa).
    It’s outside the primary failure domain.

    I remember a few career changes ago, I was a back room kid working for an MSP.

    One day I get an email to build a computer for the company, cheap as hell. Basically just enough to boot Windows 7.

    I was to build it, put it online long enough to get all of the drivers installed, and then set it up in the server room, as physically far away from any network ports as possible. IIRC I was even given an IO shield that physically covered the network port for after it updated.

    It was our air-gapped encryption key backup.

    I feel like that shitty company was somehow prepared for this better than some of these companies today. In fact, I wonder if that computer is still running somewhere and just saved someone’s ass.

    They also don’t seem to have a process for testing updates like these…?

    This seems to show some really shitty testing practices at a ton of IT departments.

    Apparently, from what I was reading, these are forced updates from CrowdStrike; you don’t have a choice.
    I’ve heard differently. But if it’s true, that should have been a non-starter for the product, for exactly this kind of reason. This is basic stuff.
    Companies use CrowdStrike so they don’t need internal cybersecurity. Not having automatic updates for new cyber threats sorta defeats the purpose of outsourcing cybersecurity.
    Not bothering to do basic, minimal testing - and other mitigation processes - before rolling out updates is absolutely terrible policy.
    Automatic updates should still have risk mitigation in place, and the outage didn’t only affect small businesses with no cyber security capability. Outsourcing does not mean closing your eyes and letting the third party do whatever they want.

    Outsourcing does not mean closing your eyes and letting the third party do whatever they want.

    It shouldn’t, but when the decisions are made by bean counters and not people with security knowledge things like this can easily (and frequently) happen.

    Unfortunately, the pace of attack development doesn’t really give much time for testing.
    More time than the zero time that companies appear to have invested here.
    I was just thinking about something similar. I can understand wanting to get a security update out as quickly as possible, but it still seems like some kind of rolling update could have mitigated something like this. When I say rolling, I mean, for example, splitting all of your customers into 24 groups and pushing the update to another group once an hour. If it causes a massive fuck-up, it hits only some (or at worst most) of them, but not all.
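
    Something like that staged rollout is simple to sketch. Here is a rough, purely illustrative Python version (not how CrowdStrike actually ships updates): customers are hashed into 24 cohorts and each cohort picks up the update one hour later, so a bad push can be halted once early groups start reporting problems:

      # Illustrative cohort-based rollout, not any vendor's real mechanism.
      import hashlib
      from datetime import datetime, timezone

      NUM_GROUPS = 24  # one cohort per hour over a day

      def rollout_group(customer_id: str) -> int:
          """Deterministically bucket a customer into one of NUM_GROUPS cohorts."""
          digest = hashlib.sha256(customer_id.encode()).hexdigest()
          return int(digest, 16) % NUM_GROUPS

      def update_is_live_for(customer_id: str, released_at: datetime) -> bool:
          """The update reaches one more cohort each hour after release."""
          hours_since = (datetime.now(timezone.utc) - released_at).total_seconds() // 3600
          return rollout_group(customer_id) <= hours_since

      # Example: a customer hashed into group 5 only sees the update five hours after
      # release, leaving time to pull it if earlier groups start blue-screening.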