@filippo While I see the savings for CT logs, using the unsigned certs in a TLS handshake (what Google seems to want) means clients will need up-to-date metadata, right?

Which would result in clients either needing to contact logs live during the handshake or having access to an update infrastructure (as browsers do for CRLs now).

If I got this right, won't this split browser/non-browser clients further apart?

https://datatracker.ietf.org/doc/draft-ietf-plants-merkle-tree-certs/

Merkle Tree Certificates

This document describes Merkle Tree certificates, a new form of X.509 certificates which integrate public logging of the certificate, in the style of Certificate Transparency. The integrated design reduces logging overhead in the face of both shorter-lived certificates and large post-quantum signature algorithms, while still achieving comparable security properties to traditional X.509 and Certificate Transparency. Merkle Tree certificates additionally admit an optional signatureless optimization, which decreases the message size by avoiding signatures altogether, at the cost of only applying to up-to-date relying parties and older certificates.

IETF Datatracker

@icing Contacting CAs (== logs) during the handshake is not an option.

If you have update infrastructure, you signal what you have to the server, and get served a small Merkle proof-based certificate. If not, or if the certificate is too new, you get served the signed certificate.

Yes, this is another reason (like CRLite, CT, and intermediate preloading) non-browser clients need to figure out how to get updates not to get left behind. I am hopeful about upki.

@filippo Thanks for the explanation. I already suspected it was a ClientHello extension to signal acceptance of a shorter cert.

So that requires the client to remember which CA the last cert it saw came from, I suspect. More state on the client... and more ways to fingerprint it.

upki is a good project. But the last I knew, it relies on Mozilla infrastructure. That org is not currently filling everyone with joy and confidence.

@icing @filippo No, that's still not right. The optimization does not rely on per-site state. It relies on site-independent information that you get from your update service, just like CRLSets/CRLite/upki. (Indeed, we've been the ones trying to hold the line there *because* it's a better privacy story.)

And if you don't get that update, for whatever reason, the standalone certificates work just fine. Servers are expected to have both. (Even in browsers, we cannot assume 100% of all clients are up-to-date with component updates.)

@davidben @icing Re: upki, there is no hard dependency on Mozilla or on any specific infrastructure.

We are planning to work with them to specify the data format, so that *how* you get that data is up to you: maybe your distro has its infra, like Ubuntu will, maybe you spin up your own, maybe someone will run public benefit endpoints.

Mozilla is currently the original data source for CRLite, but the server software is open source, anyone can run it.

@filippo @icing Likewise for MTCs. The blob can even be authenticated on the receiving client if you send down the signatures and consistency proofs, though that does make the blob a bit bigger.

@davidben @filippo #curl is running on a few more devices than Chromium, and with a greater variety. Many of them will not magically get a service that provides all the new miracles.

That's not to say it can't be done, but there are restrictions in place that may prohibit a regular/frequent "blob" update service. So we have to think a bit harder here about what this all means.

Again, thanks for your explanations.

@icing @filippo Indeed. Like I said, this design does not need the blob to function. We did think of this. 🙂 The standalone certificates work, as the name says, standalone. They are comparable in size to what you'd have gotten if we hadn't done anything at all.

It would be great if upki lets many curl installs get the optimization. But if some curl installs cannot (just as some Chrome installs cannot), it is still fine.

@icing I really recommend reading the spec. If you have feedback on it from the point of view of curl, would love to hear it!

As @davidben said, the updates are an optional optimization. But anyway, I think we (including Go) have been too complacent in letting our verifiers fall behind (or even hold back) the browsers. In some places we *can* build infrastructure, and we should.

There will be places that truly can't do better, but they are only a subset, and we are currently failing the rest.

@filippo @icing +1! Feel free to send any thoughts you have here, to us directly, on the GitHub, on the PLANTS list, or wherever you find most comfortable. (I hope we can keep PLANTS chill, at least by IETF standards, but I know those can sometimes be daunting.)

@filippo @icing I should say, it really is important to me that this works for things like curl. There's more we need to fill into the spec right now, but we'll have a really hard time moving to this if you can't *at least* slot the basic standalone-cert, CA-signature-only verification behavior into all the Web-adjacent places today.

(I also hope we can get broader transparency enforcement than today, and broad deployment of the optimization, but as you say, I'm sure not every install will be able to do everything.)