I don't think the ancient nature of the exploit chain has much bearing on the origin. I think it points away from the actual 2025 campaigns being USG-attached, but I don't think anyone was suggesting that to start with - the Google report makes it pretty clear that they believe the same code was resold to several parties, either in parallel or sequentially, around this time frame.
I think the notion here is that either:
* There's a shared upstream origin or author between this toolkit and the Operation Triangulation toolkit, predating its use in Operation Triangulation (i.e., someone sold this chain to both the Operation Triangulation authors and a third party). I actually think the internal use of specifically structured code names and the overall structure of the codebase described in the Google writeup make this theory less likely; building an exploit toolkit while adopting these practices to cosplay as a US-government-affiliated engineer would be clever and fun, but it's not something we've really seen before.
* This toolkit originated from the same actor who was responsible for Operation Triangulation, whether it was leaked, compromised, or resold.
The article's deep dive into the math does it a disservice IMO, by making this seem like an arcane and complex issue. This is an EC Cryptography 101 level mistake.
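For context on why this class of bug is "101 level": the article's specific flaw isn't named here, but the canonical beginner mistake in EC cryptography is skipping point validation, i.e., accepting an attacker-supplied (x, y) pair without checking that it actually satisfies the curve equation. A minimal sketch of that check for NIST P-256 (standard curve constants; illustrative only, not a claim about the exact CIRCL bug):

```python
# NIST P-256: y^2 = x^3 + ax + b (mod p), with a = -3 mod p.
# Constants are from the published standard.
p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
a = p - 3
b = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

def on_curve(x: int, y: int) -> bool:
    """Return True only if (x, y) is a valid affine point on P-256.

    Libraries that skip this check before scalar multiplication are
    vulnerable to invalid-curve attacks that can leak the private key.
    """
    if not (0 <= x < p and 0 <= y < p):
        return False
    return (y * y - (x * x * x + a * x + b)) % p == 0

# The standard P-256 base point passes; a perturbed point must not.
Gx = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296
Gy = 0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5
```

A serious implementation performs this (plus subgroup/cofactor checks where the curve requires them) on every externally supplied point before using it.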
Reading the actual CIRCL library source and README on GitHub (https://github.com/cloudflare/circl) makes it look fundamentally unserious to me, though: there's a big "lol don't use this!" disclaimer, no discussion of the considerations applied to each implementation to avoid common pitfalls, no mention of first- or third-party audit reports, nor really anything else I'd expect to see from a cryptography library.
> We need Good Samaritan laws that legally protect and reward white hats.
What does this even mean? How is a government going to do a better job valuing and scoring exploits than the existing market?
I'm genuinely curious about how you suggest we achieve
> Rewards that pay the bills and not whatever big tech companies have in their couch cushions.
So far, the industry has tried bounty programs. High-tier bugs are nearly impossible to value, and there is too much low-value noise, so the market converges on mediocrity; I'm not sure how having a government run such a program (or set reward tiers, or something) would make this any different.
And the industry and governments have tried punitive regulation: "if you didn't comply with XYZ standard, you're liable for getting owned." To some extent this works, since it increases pay for in-house security and creates work for consulting firms. The notion might be worth expanding in some areas, but just like financial regulation, it's a double-edged sword: it also leads to death-by-checkbox audit "security" and predatory nonsense "audit firms."