This account is a replica from Hacker News. Its author can't see your replies. If you find this service useful, please consider supporting us via our Patreon.
| Official | https:// |
| Support this service | https://www.patreon.com/birddotmakeup |
This sort of crap keeps getting upstreamed into Debian.
Consider Devuan for your next machine. I've switched almost all my Linux boxes to it, and it's great.
This comment is particularly concerning (as is the functionality regression implied by this new "more secure" approach):
> This means for example, that an encrypted system must use an ext4 /boot partition; it is no longer possible to encrypt the /boot partition.
So, they want to let attackers modify /boot, including grub.cfg and the kernel command line? This is better? Look at all these fun knobs attackers will be able to turn!
https://www.kernel.org/doc/Documentation/x86/x86_64/boot-opt...
This lets you disable machine check exceptions and the IOMMU. That means it'll force people into a configuration that lets attackers attach a hardware memory probe to the system and bypass a bunch of hardware security checks. Nice!
I also found module.sig_enforce, which lets the attacker disable kernel module signature verification. Sadly, I couldn't find anything that lets you directly load a kernel module from /boot.
However, the initrd lives in /boot. I wonder whether its signature is verified or not. At the very least, this approach implies that attackers can piecemeal downgrade stuff early in the boot process.
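To make the risk concrete, here's a hypothetical sketch of the kind of edit an attacker with disk access could make to an unencrypted, unsigned /boot. The menu entry, kernel version, and paths are made up for illustration; `mce=off`, `intel_iommu=off`, and `iommu=off` are real parameters from the kernel's boot-options documentation.

```
# /boot/grub/grub.cfg -- tampered entry (illustrative only)
menuentry 'Debian GNU/Linux (tampered)' {
    # mce=off disables machine-check exception handling;
    # intel_iommu=off / iommu=off turn the IOMMU off, re-enabling
    # DMA from malicious peripherals.
    linux /vmlinuz-6.1.0-13-amd64 root=/dev/mapper/root ro mce=off intel_iommu=off iommu=off
    # Nothing verifies this initrd, so it can simply be swapped
    # for a backdoored one.
    initrd /initrd.img-6.1.0-13-amd64
}
```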
I think people are using "liquid glass" as a blanket term that includes other changes in iOS 26, like completely breaking message delivery with the world's dumbest spam filter, aggressively waking some people up in the middle of the night, Siri somehow getting even worse, breaking the incoming call state machine (again), Bluetooth regressions, regressions to their (already poor) UI accessibility, and so on.
Those other things add up and are definitely noticed by non-tech users who don't care that things like the alarm UI have massively regressed.
Most of those services are paid for by terrible people when you use them.
Look at their revenue breakdowns.
The earliest example of this I know of is CLKSCREW, but security hardware (like chips holding root CA private keys) was hardened against this stuff well before that attack.
Has anyone heard of notable earlier examples?
If you do that, it expands your test matrix quadratically. So it only makes sense if you have an infinite testing budget.
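The quadratic blow-up is easy to see with a back-of-the-envelope count (my own illustration, not from the comment): testing only upgrades from each prior release to the newest is linear in the number of releases, while supporting arbitrary up/downgrades means every ordered pair of versions is a path you might have to test.

```python
def upgrade_only_paths(n_versions: int) -> int:
    """Upgrade paths from every prior release to the newest one."""
    return n_versions - 1

def bidirectional_paths(n_versions: int) -> int:
    """Ordered (from, to) pairs when downgrades must also work."""
    return n_versions * (n_versions - 1)

for n in (5, 10, 20):
    print(n, upgrade_only_paths(n), bidirectional_paths(n))
# At 20 releases: 19 upgrade paths vs. 380 ordered version pairs.
```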
Personally, I prefer exhaustively testing the upgrade path, and investing in reducing the time it takes to push out a hot fix. Chicken bits are also good.
I haven’t heard of any real-world situation where supporting downgrades of persistent formats led to best-in-class product stability.
Would love to hear of an example.
For there to be a solution, there needs to be a problem. These bills are not addressing a problem. Assume the online platform has a video feed of my kid, or their SSN, or a zero-knowledge-proof of age, or whatever.
Now, what will the platform do with it? Concretely? As in: Name one bad outcome a reasonable parent would care about that's prohibited under these bills. If the bad thing happens due to willful negligence, then there needs to be some actual material consequence to someone at the platform provider.