Not really the same. There are proposals to require OEMs to install driver monitoring, but it’s usually IR-camera based rather than blow-in-a-tube fuel-cell based. These systems will probably be a mess, but the technology isn’t really comparable to DUI interlock devices, and the unreliability of those systems is an orthogonal issue.
Irrelevant to this issue - the devices didn’t get bricked over the air; rather, they have a “calibration” time lock which must be reset at a service center, and the service centers have been hit by ransomware.
It's not new - fault injection as a vulnerability class has existed since the beginning of computing, as a security bypass mechanism (clock glitching) since at least the 1990s, and crowbar voltage glitching like this has been widespread since at least the early 2000s. It's extraordinarily hard to defend against, but mitigations are also improving rapidly; for example, this attack only works on early Xbox One revisions where more advanced glitch protection wasn't enabled (although the author speculates that, since the glitch protection can be disabled via software / a fuse state, one could glitch out the glitch protection itself).
In terms of fault injection as a security attack vector in general (vs. just a test vector, where it of course dates back to the beginning of computing), satellite TV cards were attacked with clock glitching dating back to at least the 1990s, like the "unlooper" (1997). There were also numerous attacks against various software RSA implementations that relied on brownout or crowbar glitching like this - I found https://ieeexplore.ieee.org/document/5412860 right off the bat, but I remember using these techniques before then.
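
For context on why software RSA was such a popular glitching target: a single faulted CRT exponentiation is enough to factor the private key. Below is a minimal, hedged sketch of the standard Boneh-DeMillo-Lipton ("Bellcore") construction with toy parameters; it isn't taken from the linked paper or any specific attack above.

```python
# Toy sketch of the Boneh-DeMillo-Lipton ("Bellcore") fault attack on RSA-CRT
# signing, the kind of bug brownout/crowbar glitches are used to induce.
# All values here are tiny illustrative parameters, not real key material.
from math import gcd

p, q = 61, 53                      # toy primes (real keys are 1024+ bits)
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent
m = 42                             # message representative to be signed

# Normal CRT signing: compute the signature mod p and mod q, then recombine.
dp, dq = d % (p - 1), d % (q - 1)
q_inv = pow(q, -1, p)

def crt_combine(sp, sq):
    h = (q_inv * (sp - sq)) % p
    return (sq + h * q) % n

sp, sq = pow(m, dp, p), pow(m, dq, q)
s_good = crt_combine(sp, sq)
assert pow(s_good, e, n) == m      # a correct signature verifies

# A glitch during the mod-q exponentiation corrupts only that half...
sq_faulty = (sq + 1) % q
s_bad = crt_combine(sp, sq_faulty)

# ...so the faulty signature is still correct mod p but wrong mod q, and a
# single gcd against the public modulus recovers the secret factor p.
leaked = gcd((pow(s_bad, e, n) - m) % n, n)
print(leaked, leaked == p)         # -> 61 True
```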

I don't think the ancient nature of the exploit chain has much bearing on the origin. I think it points away from the actual 2025 campaigns being USG-attached, but I don't think anyone was suggesting that to start with - the Google report makes it pretty clear that they believe the same code was resold to several parties, either in parallel or sequentially, around this time frame.

I think the notion here is that either:

* There's a shared upstream origin or author between this toolkit and the Operation Triangulation toolkit, predating its use in Operation Triangulation (i.e., someone sold this chain to both the Operation Triangulation authors and a third party). I actually think that the use of specifically structured internal code names and the overall structure of the codebase described in the Google writeup make this theory less likely; building an exploit toolkit while using these practices to cosplay as a US-government-affiliated engineer would be clever and fun, but it's not something we've really seen before.

* This toolkit originated with the same actor responsible for Operation Triangulation, and was subsequently leaked, compromised, or resold.

It also makes a lot of really useful features work: on-device OCR, captions, voice isolation, temporal antialiasing in MetalFX, an enormous host of things in the Apple pro apps, etc.

The article's deep dive into the math does it a disservice, IMO, by making this seem like an arcane and complex issue. This is an EC Cryptography 101-level mistake.
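
To make the "101-level" framing concrete: the textbook guard in this class of bug is validating that an incoming point actually lies on the intended curve before using it in ECDH. The sketch below is illustrative only - it uses NIST P-256 in short-Weierstrass form, not CIRCL's code or curve, and isn't necessarily the exact check at issue in the article.

```python
# Illustrative on-curve validation for ECDH inputs (NIST P-256 parameters used
# purely as an example; this is not CIRCL's code, curve, or exact bug).
P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
A = P - 3
B = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

def on_curve(x: int, y: int) -> bool:
    """Check that (x, y) satisfies y^2 = x^3 + Ax + B (mod P) and is in range."""
    if not (0 <= x < P and 0 <= y < P):
        return False
    return (y * y - (x * x * x + A * x + B)) % P == 0

def ecdh_shared_secret(secret_scalar: int, peer_point: tuple) -> bytes:
    # The "101" mistake is skipping this check: a point on a different, weaker
    # curve can then leak the secret scalar through invalid-curve queries.
    if not on_curve(*peer_point):
        raise ValueError("peer public key is not on the curve")
    raise NotImplementedError("scalar multiplication omitted from this sketch")
```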

Reading the actual CIRCL library source and README on GitHub (https://github.com/cloudflare/circl) makes me see it as just fundamentally unserious, though; there's a big "lol don't use this!" disclaimer, no elaboration on the considerations applied to each implementation to avoid common pitfalls, no mention of third- or first-party audit reports, and really nothing else I'd expect to see from a cryptography library.

> We need Good Samaritan laws that legally protect and reward white hats.

What does this even mean? How is a government going to do a better job valuing and scoring exploits than the existing market?

I'm genuinely curious about how you suggest we achieve

> Rewards that pay the bills and not whatever big tech companies have in their couch cushions.

So far, the industry has tried bounty programs. High-tier bugs are impossible to value and there is too much low-value noise, so the market converges to mediocrity, and I'm not sure how having a government run such a program (or set reward tiers, or something) would make this any different.

And the industry and governments have tried punitive regulation - "if you didn't comply with XYZ standard, you're liable for getting owned." To some extent this works, as it increases pay for in-house security and creates work for consulting firms. This notion might be worth expanding in some areas, but just like financial regulation, it is a double-edged sword - it also leads to death-by-checkbox audit "security" and predatory nonsense "audit firms."