Christian Stadelmann

@genodeftest@digitalcourage.social
148 Followers
527 Following
226 Posts

Advocate for climate and species protection, data privacy, democracy, and free software | inquisitive

Active in the #ÖDP

Profile picture source: https://climatejustice.social/@stefanmuelller/113890214500384569
Languages: German, English
Warum Gendern? (Why use gender-inclusive language?): https://zeitzeichen.net/node/10874?utm_source=browser
Born at: 355 ppm

'Meredith,' some guys ask, 'why won't you shove AI into Signal?'

Because we love privacy, and we love you, and this shit is predictable and unacceptable. Use Signal ❤️

The silence within the CDU on the Spahn affair also says a lot about the inner life of this parliamentary group. Apparently nobody there has the decency to demand what would be appropriate after such a total failure: the resignation of the parliamentary group leader and, yes, the surrender of his mandate as well. 1/5

Verzweiflungsmedien (desperation media)
How precarious Jens Spahn's position must have become by now can also be seen from the fact that Springer newsrooms have started attacking Habeck again. …

Use the link to read on.
https://cartoons.guido-kuehn.de/verzweiflungsmedien/
#csu #habeck #korruption #masken #spahn #union

The brutal, daily bombing and murder of innocent civilians in #Palestine and #Ukraine are signs of a collapsing rules-based world order.

It is not collapsing because of #Russia or #Israel, but because the West, which was supposed to uphold it, chose not to, for the simple reason that it was inconvenient.

Today Ukrainians and Palestinians are suffering the consequences. Tomorrow it will be Europeans and Americans.

If you don't stop evil in its infancy, you might not be able to once it grows.

Looks like today's theme is
I love that cats that aren’t domesticated don’t meow when they grow up, but domesticated cats do because they learned humans don’t understand their natural communication, so they keep meowing beyond the kitten stage just for us. So basically cats made up a language just to talk to us. And that language is essentially baby talk.
Whether Scheuer, Spahn, or Dobrindt – right-wing actors like to leave behind a trail of wreckage. And often they get away with it unscathed.
https://taz.de/!6091530
Konservative Politik: Arbeit für die Aufräumer (Conservative politics: work for the cleanup crew)


TAZ Verlags- und Vertriebs GmbH

"We have to close all the borders!"
"Yes, exactly, let's wall ourselves in, inbreed, and after a few generations we'll be sitting in the trees again, drooling. Whatever, the world will keep turning regardless."

*Some people simply aren't quite watertight upstairs – or have drained out completely.

@billyjoebowers An old joke:

A Trump aide tells him excitedly about a dream she had of a big celebration for him: huge crowds of people, all shouting and laughing, waving flags as Trump passed by. Trump asks the aide, "How did my hair look, was it OK?" The aide responds, "I couldn't see, it was a closed casket."

@Mela my wife just said that if Klöckner cares so much about neutrality, she shouldn't have been allowed to go to the Kirchentag either.
I think she has a point...
Looks like today's theme is
@cR0w I'd put that time frame at 1 hour, but I'd also include mitigating controls as acceptable alternatives to patching.
@rebootkid 1 hour for mitigating controls, maybe, but that's definitely not enough time to test and deploy a patch in general.

@cR0w +9001%

I'd put this number down to 2hrs simply by having experienced #WordPress in the past...

@kkarhan Wordpress is a whole separate category.

@cR0w not really...

Obviously it's the #1 target and every #Skiddie has their own index of #WordPress sites waiting to deploy their #Cryptojacking #malware the second they get their hands on an exploit, before people have patched it, but the same applies to #Windows (especially #WindowsServer!) and other shitty applications...

@kkarhan It's different in that I agree that two hours is reasonable for Wordpress. But two hours is not reasonable for testing and deploying a lot of critical services. I think 24 hours is. But that's all starting to get down to specific teams, risk appetite, etc. The sentiment is the same.
@cR0w @kkarhan I would say that for critical infrastructure, after 24h the government should start investigating and handing out fines.
@cR0w harsh, but more and more true, and that number will probably go down further once there are malicious tools that automatically create and deploy malware based on patch notes right after release, which I suspect will happen sooner or later. I wonder if we'll ever get to the point where we have to somehow sneakily deploy patches before announcing them, just to avoid issues.
@anthropy That's attempted by some but doesn't scale well and keeps others in the dark. Lots of vendors try to quietly release patches to customers without any public announcement so they can patch before it's well known. It's a valid idea but flawed in practice.
@cR0w
Looking back at Hafnium:
make that 20 minutes.

@cR0w

this reads to me like: be a slave to the machine. never take a day off. never take care of yourself out of fear of missing a single patch in the hundreds/thousands of components that make up today's machines.

I don't agree with that sentiment.

@lattera Then you're reading too far into it. The point is that there needs to be a process to quickly deploy a patch when necessary, not that every patch needs to be pushed in 24 hours. If there is not a process in place to apply emergency patches, then putting it directly on the Internet is a Bad Idea ™️ .

@cR0w @lattera Furthermore, if the only way to do that is for an org to have one poor sap working weekends, that's an organizational failure.

This is absolutely a burden that ought to be shared amongst a whole team.

@lattera @cR0w you know that companies have more than one person in IT, so they can all take time for themselves and even sleep sometimes

@ElysianEve @cR0w that piece of context (for-profit organizations with multiple employees) is entirely missing from the original post.

Not every machine out there is driven by for-profit organizations with multiple employees.

The original post is broad enough to encapsulate machines administered by only a single person--a single volunteer, perhaps.

@lattera @cR0w technically, running a public-facing service when you have only one person to manage it is incredibly risky, and must be avoided at all costs.

just imagine you are stuck in hospital for a week (even for a false positive), and a log4j-type issue happens.

Also, many services (OSes, software) allow automatic security updates / reboots while keeping every other update fully manual (it's what I've opted for on my own services).

@ElysianEve @cR0w it would be great to have multiple volunteers helping with infrastructure, but that's not always possible. A little compassion for those in that situation helps reduce burnout of those volunteers.

Let's try to focus on helping each other out. The original post doesn't; it only serves to contribute to burnout.

@lattera @ElysianEve Okay, I'll take the bait. How does expecting an emergency patching process for Internet-facing infrastructure contribute to burnout?
@ElysianEve @lattera @cR0w We used to do it back in the 90s without issue. A distributed internet meant that the targets were small, diverse in hardware and software, and of low value. Something happened? Just restore from backup and move on.

However, today people seem, for some reason, to want to collect more data than they need and to roll their own 'crypto'. This is on them and those companies. A majority of infrastructure doesn't come under this umbrella, so Shawn is technically correct: there was insufficient context to make such a sweeping statement.
@Tubsta @lattera @cR0w what you just said makes no sense at all; the size of the internet changes nothing.
@cR0w I read this as... don't patch
@kajer What are you, my cloud team? 😒
@kajer @cR0w weird, just about every company I've ever worked for has interpreted it identically.

@rootwyrm @cR0w Someone has to have the voice of corp-tech.

"If I can't patch..." is an invalid way to address the problem, because conditions will apply.

If I never patch, then there are no conditions, and we're still running prod! WOO!

@kajer @cR0w most shops, the excuse is "it's stable! We shouldn't patch it might introduce problems."

And that's how your extremely critical Internet facing infrastructure is running Docker containers from 2018 that have been abandoned upstream.

@rootwyrm @kajer blows dust off SCADA HMI keyboard for host with more uptime than a lot of fedi users

@cR0w @rootwyrm DO NOT UNPLUG

if you pull the PS/2 keyboard, the driver will unload and will not reload until next boot, which is

checks notes

never.

@kajer @cR0w fun related fact: I know of a very large institution which runs their own Certificate Authority.

This CA is basically openssl on a fully air-gapped laptop.
That laptop in 2021 was running RHEL4. Because it was *COMPLETELY* airgapped. No network. Only one USB port not filled with epoxy. Kept in a safe.
And this was deemed safe and secure because it was completely and totally airgapped.

@rootwyrm @cR0w that sounds delightfully 90s... but... how does one do cert issuance? or verification of the full chain?

I only have questions

@kajer @rootwyrm Click Advanced and then Proceed ( unsafe ) like with any good enterprise system.
@cR0w @kajer @rootwyrm It's funny how meaningless those full-page, big, red, scary "Security Risk Ahead" screens are when there are certificate issues, since a lot of users I've seen will just follow those exact steps.

There are domains with HSTS which manage to convince the browser to delete the proceed button, but they're definitely a minority.
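
(For illustration: a minimal Python sketch of the HSTS mechanism mentioned above. The hostname, port, and max-age are placeholder assumptions, and the effect only kicks in once a browser has received the header over a valid HTTPS connection.)

```python
# Minimal sketch (not any particular site's setup): an HTTP handler that sends
# an HSTS header. Once a browser has cached this header from a valid HTTPS
# response, it refuses to offer a "proceed anyway" button on later certificate
# errors for that host.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HSTSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # One year, applied to subdomains too; 'preload' opts into browsers'
        # built-in HSTS lists.
        self.send_header(
            "Strict-Transport-Security",
            "max-age=31536000; includeSubDomains; preload",
        )
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(b"HSTS is set for this host.\n")

if __name__ == "__main__":
    # In practice this sits behind TLS termination; browsers ignore HSTS
    # headers received over plain HTTP. Plain HTTP here only keeps the
    # sketch self-contained.
    HTTPServer(("127.0.0.1", 8443), HSTSHandler).serve_forever()
```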

@kajer @rootwyrm @cR0w

This must have been the internal root CA. I know, because I used to run the internal root at a previous job. The workflow looked like this:

* get root CA from safe
* generate CSR on new issuing CA, copy to new flash drive
* plug flash drive into root CA, issue cert, copy to flash drive
* plug flash drive back into online machine, copy cert to issuing CA
* put root CA back in safe

There was a similar process for issuing the root CA's CRL every month
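
(Illustrative only, not the poster's actual tooling: a minimal Python sketch of the "issue cert" step in the workflow above, using the `cryptography` package to have an offline root sign the issuing CA's CSR. The file names, lifetime, and extensions are assumptions; a real root keeps its key in a safe or HSM, not a PEM file on disk.)

```python
# Sketch of "plug flash drive into root CA, issue cert" using the
# 'cryptography' package. Paths and lifetimes are placeholders.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization

# Load the offline root's private key and certificate (kept in the safe).
with open("root-ca.key", "rb") as f:
    root_key = serialization.load_pem_private_key(f.read(), password=None)
with open("root-ca.crt", "rb") as f:
    root_cert = x509.load_pem_x509_certificate(f.read())

# Load the CSR carried over on the flash drive from the new issuing CA.
with open("issuing-ca.csr", "rb") as f:
    csr = x509.load_pem_x509_csr(f.read())
assert csr.is_signature_valid

now = datetime.datetime.now(datetime.timezone.utc)
issuing_cert = (
    x509.CertificateBuilder()
    .subject_name(csr.subject)
    .issuer_name(root_cert.subject)
    .public_key(csr.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=5 * 365))
    # Mark it as a CA that may only sign end-entity certs, not further CAs.
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    .add_extension(
        x509.KeyUsage(
            digital_signature=True, key_cert_sign=True, crl_sign=True,
            content_commitment=False, key_encipherment=False,
            data_encipherment=False, key_agreement=False,
            encipher_only=False, decipher_only=False,
        ),
        critical=True,
    )
    .sign(root_key, hashes.SHA256())
)

# Write the signed issuing-CA cert back onto the flash drive.
with open("issuing-ca.crt", "wb") as f:
    f.write(issuing_cert.public_bytes(serialization.Encoding.PEM))
```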

@ducksauz @kajer @cR0w yep, exactly that! There were actually *multiple* internal Root CAs.
The part that always made me giggle is that incomprehensibly, one of those internal roots was used with *HSMs*.

@rootwyrm @kajer @cR0w

HSMs in plural? My root had an HSM card in it (it was a desktop).

Though, I could see maybe having redundant USB attached HSMs and encrypting the root's PrivKey to KEKs stored in each HSM.
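
(A small Python sketch of the idea floated above: envelope-encrypt the root's private key under a data key and wrap that data key once per KEK, so either HSM could recover it. The `os.urandom` KEKs are software stand-ins; real KEKs would never leave the HSMs.)

```python
# Software-only illustration of key wrapping with redundant KEKs.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

root_key_pem = open("root-ca.key", "rb").read()

# Random data-encryption key protecting the actual private key bytes.
dek = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
encrypted_root_key = AESGCM(dek).encrypt(nonce, root_key_pem, None)

# Stand-ins for KEKs held inside two redundant HSMs.
keks = [os.urandom(32), os.urandom(32)]

# Wrap the DEK once per KEK (RFC 3394 AES key wrap), so either HSM alone
# can later recover it.
wrapped_deks = [aes_key_wrap(kek, dek) for kek in keks]

# Recovery path: unwrap with whichever KEK is available, then decrypt.
recovered_dek = aes_key_unwrap(keks[0], wrapped_deks[0])
assert AESGCM(recovered_dek).decrypt(nonce, encrypted_root_key, None) == root_key_pem
```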

@ducksauz @kajer @cR0w HSMs very, VERY, *VERY* plural.

I was not directly involved, but my understanding was that they used the offline root CA as part of the authentication system to ensure they had not somehow wandered off the network.
These kind of HSMs.

@rootwyrm @ducksauz @cR0w $previous_job - We had HSMs in AWS and paid a VERY pretty penny to keep those legacy-as-fuck machines running until we found something better... (we hadn't by the time the covid layoffs happened)

@kajer @ducksauz @cR0w you really, really have to be an absolute idiot to pay for "cloud" HSMs, honestly. They are INSANELY expensive to say the very least, and it is completely impossible for them to actually be secure.
It just is. PHYSICAL inspection and PHYSICAL tamper indicators are a non-optional part of it.

Meanwhile a Thales Luna 7 hardware HSM at the very tippy top end (max perf, 5 partitions, ent support) costs less than half that.
For 3 years.

@kajer @ducksauz @cR0w and "less than half" is being... generous. I have priced out "cloud" HSMs for certificate services.

$135,000 per year for a miserable enterprise Java Beans "cloud HSM."
The "less secure" version that is just as insecure is still over $75k per year.
Venafi has a nice product. They charge you $100,000 per year to manage it "in the cloud." Not including HSM.

I can literally just whip out a credit card and buy a Luna 7 for $52k. And I actually own it.

@rootwyrm @ducksauz @cR0w not my money, it was a DevOps thing... until I moved to the SecOps team, and then we were tasked with finding HSM solutions to support our hashicorp integrations...

Luckily my SecOps experience mostly revolved around defending the network from DevOps and constantly pulling logs from the F5 and Palo to prove that the latest DevOps push was to blame for application problems, and not the FW/LB policy.

Dealing with HSMs and FEKs and the like was not what I would consider fulfilling work.

@kajer @cR0w @rootwyrm I'll never plug/unplug PS/2 keyboards again after the hard disk incident. I should have taken it seriously the two times the computer only just managed to start, and done backups.

And that's how I met my new computer 🥲

@rootwyrm @kajer @cR0w you just triggered my PTSD from the number of conversations I've had to have around this 🤦‍♂️
@cR0w @catsalad can I have more patching of systems hosting arbitrary users from the internet in exchange for less patching of my build docker containers with no inbound network access that live for 10 minutes? 😅
@cR0w @catsalad brb telling the compliance people at work “but @crow said…”
@malwareminigun @catsalad @crow Be sure to record that conversation. 😆
@cR0w @malwareminigun @crow This shirt, but with cR0w
@catsalad @malwareminigun It's like the Ron Swanson permission slip.
@catsalad @cR0w @malwareminigun @crow Do we care what Infosec says? Are we caring about that?

@jimfl @catsalad @cR0w I can assure you that our compliance folks VERY MUCH care about that. And have automated scanners that look at our container registries and scream at us if any tag isn't patched.

(To be clear, I agree with this behavior by default. I just wish there was a distinction between 'build lab' containers and 'web serving' containers because those are very different threat environments)
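
(Roughly the kind of scanner being described, sketched in Python against the Docker Registry v2 HTTP API: flag tags whose image build date is older than a threshold, and skip repos marked as short-lived build-lab images. The registry URL, repo list, anonymous access, and the 30-day threshold are all assumptions.)

```python
# Sketch of a "scream if any tag isn't patched" registry scanner.
import datetime
import requests

REGISTRY = "https://registry.example.internal"      # placeholder
REPOS = ["web/frontend", "web/api", "buildlab/ci-base"]
BUILD_LAB_PREFIXES = ("buildlab/",)                  # exempt from nagging
MAX_AGE_DAYS = 30

ACCEPT = (
    "application/vnd.docker.distribution.manifest.v2+json, "
    "application/vnd.oci.image.manifest.v1+json"
)

def image_created(repo: str, tag: str) -> datetime.datetime:
    """Return the build timestamp recorded in the image config blob."""
    manifest = requests.get(
        f"{REGISTRY}/v2/{repo}/manifests/{tag}", headers={"Accept": ACCEPT}
    ).json()
    config = requests.get(
        f"{REGISTRY}/v2/{repo}/blobs/{manifest['config']['digest']}"
    ).json()
    # e.g. "2018-06-01T12:34:56.789Z" -> drop fractional seconds and zone.
    ts = config["created"].split(".")[0].rstrip("Z")
    return datetime.datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S").replace(
        tzinfo=datetime.timezone.utc
    )

now = datetime.datetime.now(datetime.timezone.utc)
for repo in REPOS:
    if repo.startswith(BUILD_LAB_PREFIXES):
        continue  # different threat environment, different policy
    tags = requests.get(f"{REGISTRY}/v2/{repo}/tags/list").json()["tags"]
    for tag in tags:
        age = (now - image_created(repo, tag)).days
        if age > MAX_AGE_DAYS:
            print(f"STALE: {repo}:{tag} built {age} days ago")
```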

@malwareminigun @jimfl @catsalad INFOSEC says they are not different. Everything must be patched. Now. Do it. What are you waiting for? Go patch.
@cR0w @jimfl @catsalad But INFOSEC also says 'we want reproducible builds'. If I make one group happy, I just piss the other one off.
@malwareminigun @jimfl @catsalad For real, the struggle is legit.
@cR0w true, and for "but I need to access it everywhere" they can just check out systems like Netbird, Tailscale, or Twingate that allow worldwide access while keeping the service out of public reach.