NSA and IETF, part 3: Dodging the issues at hand
D. J. Bernstein is very well respected, and for very good reason. I don't have firsthand knowledge of the background here, but the blog posts about the incident are written in a kind of weird voice that makes me feel like I'm reading about the US Government suppressing evidence of Bigfoot or something.
Stuff like this
> Wow, look at that: "due process".... Could it possibly be that the people writing the law were thinking through how standardization processes could be abused?
is both accusing the other party of bad faith and also heavily using sarcasm, which is a sort of performative bad faith.
Sarcasm can be really effective when used well. But when a post is dripping with sarcasm and accusing others of bad faith it comes off as hiding a weak position behind contempt. I don't know if this is just how DJB writes, or if he's adopting this voice because he thinks it's what the internet wants to see right now.
Personally, I would prefer a style where he says only what he means without irony and expresses his feelings directly. If showing contempt is essential to the piece, then the Linus Torvalds style of explicit theatrical contempt is probably preferable, at least to me.
I understand others may feel differently. The style just gives me crackpot vibes, and that may color reception of the blog posts for people who don't know DJB's reputation.
Sure! First, while I’m in no position to judge cryptographic algorithms, the success of ChaCha and Curve25519 speaks for itself. More prosaically, Patricia/crit-bit trees and his other tools are the right thing, and were foresighted. He’s not just smart, but also prolific.
However, he’s left a wake of combative controversy his entire career, of the “crackpot” type the parent comment notes, and at some point it’d be worth his asking, AITA? Second, his unconditional support of Jacob Appelbaum has been bonkers. He’s obviously smart and uncompromising but, despite having been in the right on some issues, his scorched earth approach/lack of judgment seems to have turned his paranoia about everyone being out to get him into a self-fulfilling prophecy.
Very very incorrect.
EDIT: Adding more to my post here because it would be hypocritical for you to complain:
1. I feel like given how I can make accurate predictions about Henry’s sphere of influence, that might gain me a little credibility: https://news.ycombinator.com/item?id=45495180
2. The reason I insulted you is because I know for a fact that when the mob came and demanded you shun and persecute someone, you caved.
de Valence accuses Bernstein of specific academic misconduct and you come back with this Encyclopedia Dramatica stuff? Why bother commenting at all?
I don't think "I insulted you because" is ever a good way to start an HN comment, for what it's worth, but thanks for laying your cards on the table.
Because Bernstein addresses this:
> There is a committee at TU/e charged by law with ensuring proper grading, and I have recently learned that claims by Mr. de Valence related to this topic have been formally investigated and rejected by that committee. Now that Mr. de Valence has issued public accusations, it would seem that a public resolution will be necessary, starting with Mr. de Valence making clear what exactly his accusations are.
He also points out that de Valence is himself likely guilty of academic misconduct based on his own admissions.
We have two people making contradictory statements. The only ways to resolve it are facts (which were presumably reviewed by the committee) and credibility. You clearly think de Valence is more credible because he’s one of your feline friends, and because your other feline friends accused Appelbaum of sexual crimes, and you hate that Bernstein worked with Appelbaum because in your mind a sexual abuse accusation is as good as guilt of sexual abuse.
de Valence chose the same credibility-destroying path as Lovecruft, Honeywell, et al. did: make serious accusations in the public sphere instead of letting the public institutions charged with addressing this type of accusation do their job. Wise people realize that you can’t be criminally charged for publishing a smear campaign online, but you can be criminally charged for filing a false police report, and evaluate accordingly.
It's very simple.
ECC is well understood and has not been broken over many years.
ML-KEM is new, and hasn't had the same scrutiny as ECC. It's possible that the NSA already knows how to break this, and has chosen not to tell us, and NIST plays the useful idiot.
NIST has played the useful idiot before, when it promoted Dual_EC_DRBG and the US government paid RSA to make it the default CSPRNG in the BSAFE libraries it sold to everyone else... but eventually word got out that it's almost certainly an NSA NOBUS special, and everyone started disabling it.
Knowing all that, and planning for a future where quantum computers might defeat ECC -- it's not defeated yet, and nobody knows when in the future that might happen... would you choose:
Option A: do the key exchange with both ECC and the new, unproven algorithm (a hybrid; see the sketch below), or
Option B: throw out ECC and use only the new, unproven algorithm?
NIST tells you option B is for the best. NIST told you to use Dual_EC_DRBG. W3C adopted EME at the behest of Microsoft, Google and Netflix. Microsoft told you OOXML is a valid international standard you should use instead of OpenDocument (and it just so happens that only one piece of software, made by Microsoft, correctly reads and writes OOXML). So it goes on. Standards organisations are very easily corruptible when their members are allowed to have conflicts of interest and to politick and rules-lawyer the organisation into adopting their pet standards.
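To make Option A concrete: a hybrid key exchange derives the traffic key from both shared secrets, so the new algorithm can only add security, not subtract it. Below is a minimal sketch in Go, assuming a placeholder for the post-quantum shared secret (in a real exchange it would come from an ML-KEM encapsulation/decapsulation); the variable names and HKDF label are mine, not any standard's.

```go
// Minimal sketch of a hybrid ("Option A") key exchange: derive one traffic key
// from BOTH an X25519 shared secret and a post-quantum KEM shared secret, so
// an attacker has to break both.
package main

import (
	"crypto/ecdh"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"io"

	"golang.org/x/crypto/hkdf"
)

func main() {
	curve := ecdh.X25519()

	// Classical half: ordinary X25519 ECDH (error handling elided for brevity).
	alice, _ := curve.GenerateKey(rand.Reader)
	bob, _ := curve.GenerateKey(rand.Reader)
	ecdhSecret, err := alice.ECDH(bob.PublicKey())
	if err != nil {
		panic(err)
	}

	// Post-quantum half: placeholder bytes standing in for the ML-KEM shared
	// secret a real library would return from Encapsulate/Decapsulate.
	pqSharedSecret := make([]byte, 32)
	if _, err := rand.Read(pqSharedSecret); err != nil {
		panic(err)
	}

	// Combine: feed the concatenation of both secrets through a KDF. If either
	// component stays unbroken, the derived key stays unpredictable.
	ikm := append(append([]byte{}, ecdhSecret...), pqSharedSecret...)
	kdf := hkdf.New(sha256.New, ikm, nil, []byte("hybrid key exchange sketch"))
	trafficKey := make([]byte, 32)
	if _, err := io.ReadFull(kdf, trafficKey); err != nil {
		panic(err)
	}
	fmt.Printf("derived hybrid traffic key: %x\n", trafficKey)
}
```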
20+2 (conditional support) versus 7.
22/29 ≈ 76% in some form of "yea"
That feels like "rough consensus"
See https://news.ycombinator.com/item?id=46035639
A consensus isn’t always 100%
Within the IETF it’s not 100%.
See section 3.3 of one of their RFCs for proof.
https://www.rfc-editor.org/rfc/rfc2418.html#section-3.3
> Working groups make decisions through a "rough consensus" process. IETF consensus does not require that all participants agree although this is, of course, preferred. In general, the dominant view of the working group shall prevail. (However, it must be noted that "dominance" is not to be determined on the basis of volume or persistence, but rather a more general sense of agreement.) Consensus can be determined by a show of hands, humming, or any other means on which the WG agrees (by rough consensus, of course). Note that 51% of the working group does not qualify as "rough consensus" and 99% is better than rough. It is up to the Chair to determine if rough consensus has been reached.
Amongst the numerous reasons why you _don't_ want to rush into implementing new algorithms: even the _reference implementation_ (and most other early implementations) of Kyber/ML-KEM included multiple timing side-channel vulnerabilities that allowed for key recovery.[1][2]
djb has been consistent in his view for decades that cryptography standards need to consider how foolproof they are to implement, so that a minor implementation mistake specific to the timing of specific instructions on specific CPU architectures, or to specific compiler optimisations, etc., doesn't break the implementation (a minimal sketch of this class of bug follows the references below). See for example the many problems of the NIST P-224/P-256/P-384 ECC curves, which djb has been instrumental in fixing through widespread deployment of X25519.[3][4][5]
[1] https://cryspen.com/post/ml-kem-implementation/
[2] https://kyberslash.cr.yp.to/faq.html / https://kyberslash.cr.yp.to/libraries.html
[3] https://en.wikipedia.org/wiki/Elliptic_curve_point_multiplic...
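For readers who haven't chased the KyberSlash links: the actual bug there was a secret-dependent division, but the general class of problem is simply "execution time depends on secret data." Here's a minimal Go sketch of that class, not the Kyber code itself; the function names are mine.

```go
// Minimal sketch of the *class* of bug behind timing side channels (not the
// KyberSlash division issue itself): an early-exit comparison whose running
// time depends on secret data, next to the constant-time version from
// crypto/subtle that does the same amount of work regardless of the inputs.
package main

import (
	"crypto/subtle"
	"fmt"
)

// leakyEqual returns at the first mismatching byte, so its timing reveals how
// long a prefix of an attacker-controlled guess was correct.
func leakyEqual(secret, guess []byte) bool {
	if len(secret) != len(guess) {
		return false
	}
	for i := range secret {
		if secret[i] != guess[i] {
			return false // timing depends on secret data
		}
	}
	return true
}

// constantTimeEqual always inspects every byte, so timing reveals nothing
// about where a mismatch occurs.
func constantTimeEqual(secret, guess []byte) bool {
	return subtle.ConstantTimeCompare(secret, guess) == 1
}

func main() {
	secret := []byte("correct horse battery staple")
	fmt.Println(leakyEqual(secret, []byte("correct horse battery staple")))
	fmt.Println(constantTimeEqual(secret, []byte("wrong guess entirely here!!!")))
}
```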
This logic does not follow. Your argument seems to be "the implementation has security bugs, so let's not ratify the standard." That's not how standards work though. Ensuring an implementation is secure is part of the certification process. As long as the scheme itself is shown to be provably secure, that is sufficient to ratify a standard.
If anything, standardization encourages more investment, which means more eyeballs to identify and plug those holes.
Reconcile this claim with, for instance, aes_ct64 in Thomas Pornin's BearSSL?
I'm familiar with Bernstein's argument about AES, but AES is also the most successful cryptography standard ever created.
Okay, I should've said: implementing AES in C without a timing side channel, performantly enough to power TLS for a browser running on a shitty ARMv7 phone, is basically impossible. Also, if only Thomas Pornin can correctly implement your cipher without assembly, that's not a selling point.
I'm not contesting AES's success or saying it doesn't deserve it. I'm not even saying we should move off it (especially now that even most mobile processors have AES instructions). But nobody would put something like an S-box in a cipher created today.
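For anyone wondering what's actually wrong with an S-box in software: a table lookup indexed by secret data touches a cache line that depends on the secret, and cache-timing attacks can observe that. Here's a hedged sketch of the difference; the scan-and-select trick is just a stand-in for what real constant-time AES (e.g. BearSSL's aes_ct64) achieves via bitslicing, and the names and the toy table are mine.

```go
// Minimal sketch of the S-box problem: a lookup indexed by secret data loads
// from a secret-dependent address, which cache-timing attacks can observe.
// Constant-time code avoids secret-dependent addresses, here by scanning the
// whole table and selecting the wanted entry arithmetically.
package main

import (
	"crypto/subtle"
	"fmt"
)

// leakyLookup: which cache line is loaded depends on secretIdx.
func leakyLookup(table *[256]byte, secretIdx byte) byte {
	return table[secretIdx]
}

// constantTimeLookup: reads every entry, keeps only the one at secretIdx.
func constantTimeLookup(table *[256]byte, secretIdx byte) byte {
	var out byte
	for i := 0; i < 256; i++ {
		mask := byte(subtle.ConstantTimeByteEq(uint8(i), secretIdx)) // 1 or 0
		out |= mask * table[i]
	}
	return out
}

func main() {
	var sbox [256]byte
	for i := range sbox {
		sbox[i] = byte(255 - i) // stand-in table, not the real AES S-box
	}
	fmt.Println(leakyLookup(&sbox, 42) == constantTimeLookup(&sbox, 42))
}
```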
If your point is "reference implementations have never been sufficient for real-world implementations", I agree, strongly, but of course that cuts painfully across several of Bernstein's own arguments about the importance of issues in PQ reference implementations.
Part of this, though, is that it's also kind of an incoherent standard to hold reference implementations to. Science proceeds long after the standard is written! The best/safest possible implementation is bound to change.
I don't think it's incoherent. On one extreme you have web standards, where it's now commonplace to not finalize standards until they're implemented in multiple major browser engines. Some web-adjacent IETF standards also work like this (WebTransport over HTTP3 is one I've been implementing recently).
I'm not saying cryptography should necessarily work this way, but it's not an unworkable policy to have multiple projects implement a draft before settling on a standard.
Look at the timeline for performant non-leaking implementations of Weierstrass curves. How long are you going to wait for these things to settle? I feel like there's also a hindsight bias that slips into a lot of this stuff.
Certainly, if you're going to do standards adoption by open competition the way NIST has done with AES, SHA3, and MLKEM, you're not going to be able to factor multiple major implementations into your process.
This isn’t black and white. There’s a medium between:
* Wait for 10 years of cryptanalysis (specific to the final algorithm) before using anything, which probably will be relatively meager because nobody is using it
* Expect the standardization process itself to produce a blessed artifact, to be set on fire as a false god if it turns out to be imperfect (or more realistically, just cause everybody a bunch of pain for 20 years)
Nothing would stop NIST from adding a post-competition phase where Google, Microsoft, Amazon, whoever the hell is maintaining OpenSSL, and maybe Mozilla implement the algorithm in their respective libraries and kick the tires. Maybe it’s pointless and everything we’d expect to get from cryptographers observing that process for a few months to a year has already been suitably covered, and DJB is just being prissy. I don’t know enough about cryptanalysis to know.
But I do feel very confident that many of the IETF standards I’ve been on the receiving end of could have used a non-reference implementation phase to find practical, you-could-technically-do-it-right-but-you-won’t issues that showed up within the first 6 months of people trying to use the damn thing.
I don't know what you mean by "kick the tires".
If by that you mean "perfect the implementation", we already get that! The MLKEM in Go is not the MLKEM in OpenSSL is not the MLKEM in AWS-LC.
If instead you mean "figure out after some period of implementation whether the standard itself is good", I don't know how that's meant to be workable. It's the publication of the standard itself that is the forcing function for high-quality competing implementations. In particular, part of arriving at high-quality implementations is running them in production, which is something you can't do without solving the coordination problem of getting everyone onto the same standard.
Here it's important to note that nothing we've learned since Kyber was chosen has materially weakened the construction itself. We've now had 3 years of sustained (urgent, in fact) implementation and deployment (after almost 30 years of cryptologic work on lattices). What would have been different had Kyber been a speculative or proposed standard, other than it getting far less attention and deployment?
("Prissy" is not the word I personally would choose here.)
I mean have a bunch of competent teams that (importantly) didn’t design the algorithm read the final draft and write their versions of it. Then they and others can perform practical analysis on each (empirically look for timing side channels on x86 and ARM, fuzz them, etc.).
> If instead you mean "figure out after some period of implementation whether the standard itself is good", I don't know how that's meant to be workable.
The forcing function can potentially be: this final draft is the heir apparent. If nothing serious comes up in the next 6 months, it will be summarily finalized.
It’s possible this won’t get any of the implementers off their ass on a reasonable timeframe - this happens with web standards all the time. It’s also possible that this is very unlikely to uncover anything not already uncovered. Like I said, I’m not totally convinced that in this specific field it makes sense. But your arguments against it are fully general against this kind of phased process at all, and I think it has empirically improved recent W3C and IETF standards (including QUIC and HTTP2/3) a lot compared to the previous method.
Again: that has now happened. What have we learned from it that we needed to know 3 years ago when NIST chose Kyber? That's an important question, because this is a whole giant thread about Bernstein's allegation that the IETF is in the pocket of the NSA (see "part 4" of this series for that charming claim).
Further, the people involved in the NIST PQ key establishment competition are a murderers' row of serious cryptographers and cryptography engineers. All of them had the know-how and incentive to write implementations of their constructions and, if it was going to showcase some glaring problem, of their competitors'. What makes you think that we lacked implementation understanding during this process?
I don’t think IETF is in the pocket of the NSA. I really wish the US government hadn’t hassled Bernstein so much when he was a grad student, it would make his stuff way more focused on technical details and readable without rolling your eyes.
> Further, the people involved in the NIST PQ key establishment competition are a murderers' row of serious cryptographers and cryptography engineers.
That’s actually my point! When you’re trying to figure out if your standard is difficult to implement correctly, that everyone who worked on the reference implementations is a genius who understands it perfectly is a disadvantage for finding certain problems. It’s classic expert blindness, like you see with C++ where the people working on the standard understand the language so completely they can’t even conceive of what will happen when it’s in the hands of someone that doesn’t sleep with the C++ standard under their pillow.
Like, would anyone who developed ECC algorithms have forgotten to check for invalid curve points when writing an implementation? Meanwhile among mere mortals that’s happened over and over again.
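For reference, the check being described is tiny, which is exactly why it's so easy to omit. A minimal sketch using Go's standard library (names and error handling are illustrative, not any particular TLS stack's code):

```go
// Minimal sketch of invalid-curve-point validation: before doing ECDH on a
// Weierstrass curve like P-256, verify the peer's point actually lies on the
// curve. Implementations that skip this enable classic invalid-curve attacks.
package main

import (
	"crypto/elliptic"
	"errors"
	"fmt"
	"math/big"
)

func checkPeerPoint(curve elliptic.Curve, x, y *big.Int) error {
	if x == nil || y == nil {
		return errors.New("peer sent a malformed point or the point at infinity")
	}
	if !curve.IsOnCurve(x, y) {
		return errors.New("peer sent a point that is not on the curve")
	}
	return nil
}

func main() {
	// (1, 1) is not on P-256; a correct implementation must reject it.
	x, y := big.NewInt(1), big.NewInt(1)
	if err := checkPeerPoint(elliptic.P256(), x, y); err != nil {
		fmt.Println("rejected:", err)
	}
}
```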
I don't think this has much of anything to do with Bernstein's qualms with the US government. For all his concerns about NIST process, he himself had his name on a NIST PQC candidate. Moreover, he's gotten into similar spats elsewhere. This isn't even the first time he's gotten into a heap of shit at IETF/IRTF. This springs to mind:
https://mailarchive.ietf.org/arch/msg/cfrg/qqrtZnjV1oTBHtvZ1...
This wasn't about NSA or the USG! Note the date. Of course, had this happened in 2025, we'd all know about it, because he'd have blogged it.
But I want to circle back to the point I just made: you've said that we'd all be better off if there was a burning-in period for implementors before standards were ratified. We've definitely burnt in MLKEM now! What would we have done differently knowing what we now know?
> What would we have done differently knowing what we now know?
With the MLKEM standard? Probably nothing; Bernstein would have done less rambling in these blog posts if he were aware of something specifically wrong with one of the implementations. My key point here was that establishing an implementation phase during standardization is not an incoherent or categorically unjustifiable idea, whether or not it makes sense for massive cryptographic development efforts. I will note that something not getting caught by a potential process change is a data point that it's not needed, but it isn't dispositive.
I do think there is some baby in the Bernstein bathwater that is this blog post series, though. His strongest specific point in these posts is that the TLS working group adding a cipher suite with an ML-KEM-only key exchange this early is an own goal (but that's of course not the fault of the ML-KEM standard itself). That's an obvious footgun, and I'll miss the days when you could enable all the standard TLS 1.3 cipher suites and not stress about it. The arguments to keep it in are legitimately not good, but in the area director's defense, we're all guilty of motivated reasoning when we're talking to someone who will inevitably accuse us of colluding with the NSA to bring about 1984.
No, the argument is that the algorithm (as specified in the standard) is difficult to implement correctly, so we should tweak it/find another one. This is a property of the algorithm being specified, not just an individual implementation, and we’ve seen it play out over and over again in cryptography.
I’d actually like to see more (non-cryptographic) standards take this into account. Many web standards are so complicated and/or ill-specified that trillion dollar market cap companies have trouble implementing them correctly/consistently. Standards shouldn’t just be thrown over the wall and have any problems blamed on the implementations.
> No, the argument is that the algorithm (as specified in the standard) is difficult to implement correctly, so we should tweak it/find another one.
This argument is without merit. ML-KEM/Kyber has already been ratified as the PQC KEM standard by NIST. What you are proposing is that the NIST process was fundamentally flawed. This is a claim that requires serious evidence as backup.
DJB has specific (technical and non-conspiratorial) bones to pick with the algorithm. He’s as much an expert in cryptographic implementation flaws and misuse resistance as anybody at NIST. Doesn’t mean he’s right all the time, but blowing him off as if he’s just some crackpot isn’t even correctly appealing to authority.
I hate that his more tinfoil hat stuff (which is not totally unjustified, mind you) overshadows his sober technical contributions in these discussions.
> I hate that his more tinfoil hat stuff (which is not totally unjustified, mind you) overshadows his sober technical contributions in these discussions.
Currently he argues that NSA is likely to be attacking the standards process to do some unspecified nefarious thing in PQ algorithms, and he's appealing to our memories of Dual_EC. That's not tinfoil hat stuff! It's a serious possibility that has happened before (Dual_EC). True, no one knows for a fact that NSA backdoored Dual_EC, but it's very very likely that they did -- why bother with such a slow DRBG if not for this benefit of being able to recover session keys?
No it didn't. The problem with Dual EC was published in a rump session at the next CRYPTO after NIST published it. The widespread assumption was that nobody was actually using it, which was enabled by the fact that the important "target" implementations (most importantly RSA BSAFE, which I think a lot of people also assumed wasn't in common use, but I may just be saying that because it's what I myself assumed) were deeply closed-source.
None of this applies to anything else besides Dual EC.
That aside: I don't know what this has to do with anything I just wrote. Did you mean to respond to some other comment?
You can't be serious. "The standard was adopted, therefore it must be able to be implemented in any or all systems?"
NIST can adopt and recommend whatever algorithms they might like using whatever criteria they decide they want to use. However, while the amount of expertise and experience on display by NIST in identifying algorithms that are secure or potentially useful is impressive, there is no amount of expertise or experience that guarantees any given implementation is always feasible.
Indeed, this is precisely why elliptic curve algorithms are often not available, in spite of a NIST standard being adopted like 8+ years ago!
In context, this particular issue is that DJB disagrees with the IETF publishing an ML-KEM only standard for key exchange.
Here's the thing: the existence of a standard does not mean we need to use it for most of the internet. There will also be hybrid standards, and most of the rest of us can simply ignore the existence of ML-KEM-only. However, NSA's CNSA 2.0 (commercial cryptography you can sell to the US Federal Government) does not envisage using hybrid schemes, so there's some sense in having a standard for that purpose. Better developed through the IETF than forced on browser vendors directly by the US, I think. There was rough consensus to do this. Should we have a single-cipher kex standard for HQC too? I'd argue yes, and no, the NSA doesn't propose to use it (unless they've updated CNSA).
The requirement of the NIST competition is that all standardized algorithms are secure against both classical and quantum attackers. Some have said in this thread that lattice crypto is relatively new, but it actually has quite some history, going back to Ajtai in '97. If you want paranoia, there are always code-based schemes going back to the '70s. We don't know what we don't know, which is why there's HQC (code-based) waiting on standardisation and an additional on-ramp for signatures, plus the expense (size and sometimes statefulness) of hash-based options. So there's some argument that single-cipher is fine, and we have a whole set of alternative options.
This particular overreaction appears to be yet another in a long-running series of... disagreements with the entire NIST process, including "claims" around the security level of what we then called Kyber, insults to the NIST team's security-level estimation in the form of suggesting they can't do basic arithmetic (given that we can't factor anything bigger than 15 on a real quantum computer and simply don't have hardware anywhere near breaking RSA, estimates are exactly what these are), and so on.
Except when the government then starts mandating a specific algorithm.
And yes. This has happened. There’s a reason there’s only the NIST P Curves in the WebPKI world.
The standard will be used, as it was the previous time the IETF allowed the NSA to standardize a known weak algorithm.
Sorry that someone calling out a math error makes the NIST team feel stupid. Instead of dogpiling the person for not stroking their ego, maybe they should correct the error. Last I checked, a quantum computer wasn't needed to handle exponents, a whiteboard will do.
ML-KEM and ML-DSA are not "known weak". The justification for hybrid crypto is that there might be classical cryptanalytic results we aren't aware of; lattice problems do come with worst-case-to-average-case hardness reductions, while for RSA and discrete log we only have conjectured hardness. Hybrid is reasonable as a maximal-safety measure, but it comes with additional cost.
Obviously the standard will be used. As I said in a sibling comment, the US Government fully intends to do this whether the IETF makes a standard or not.
While it's true that six others unequivocally opposed adoption, we don't know how many of those object to the chairs' claim that they have consensus. This may be a normal ratio for moving forward with adoption; you'd have to look at past IETF proceedings to get a sense of that.
One other factor that comes into play: some people can't stand his communication style. When disagreed with, he tends to dig in his heels and write lengthy responses that question people's motives, as in this blog post and others. Accusing the chairs of corruption may have influenced how seriously his complaint was taken.