
> In both cases, you need to start moving now and gain little from trying to time the switchover.

There are a number of "you"s here, including:

- The SDOs specifying the algorithms (IETF mostly)

- CABF adding the algorithms to the Baseline Requirements so they can be used in the WebPKI

- The HSM vendors adding support for the algorithms

- CAs adding PQ roots

- Browsers accepting them

- Sites deploying them

This is a very long supply line and the earlier players do indeed need to make progress. I'm less sure how helpful it is for individual sites to add PQ certificates right now. As long as clients will still accept non-PQ algorithms for those sites, there isn't much security benefit so most of what you are doing is getting some experience for when you really need it. There are obvious performance reasons not to actually have most of your handshakes use PQ certificates until you really have to.

Yes, though we do know how to solve this problem by using hash-based timestamping systems. See: https://link.springer.com/article/10.1007/BF00196791

Of course, the modern version of this is putting the timestamp and a hash of the signature on the blockchain.
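The linked-timestamping idea from that paper can be sketched in a few lines. This is a toy model, not the paper's exact construction: each entry's hash commits to the previous entry, so back-dating a document would require rewriting every later link, and only document hashes are submitted, so the documents themselves stay private.

```python
import hashlib
import time

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class TimestampLog:
    """Toy linked timestamping service in the Haber-Stornetta style.
    Each link hash commits to the previous link, the time, and the
    submitted document hash."""

    def __init__(self):
        self.entries = []  # list of (time, doc_hash, link_hash)

    def stamp(self, doc_hash: bytes) -> bytes:
        # The first entry chains from an all-zero "genesis" value.
        prev = self.entries[-1][2] if self.entries else b"\x00" * 32
        t = int(time.time())
        link = h(prev + t.to_bytes(8, "big") + doc_hash)
        self.entries.append((t, doc_hash, link))
        return link
```

Anyone holding an earlier link hash can detect tampering: recomputing the chain from that point must reproduce every later link.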


As a practical matter, revocation on the Web is handled mostly by centrally distributed revocation lists (CRLsets, CRLite, etc. [0]), so all you really need is:

(1) A PQ-secure way of getting the CRLs to the browser vendors.
(2) A PQ-secure update channel.

Neither of these requires broad-scale deployment.
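The centrally distributed model above reduces browser-side revocation checking to set membership against a vendor-pushed list. A minimal sketch (the data and function names are illustrative, not any browser's real API; real deployments like CRLite use compressed probabilistic structures rather than a plain set):

```python
# Toy model of a browser-side pushed revocation list, in the spirit
# of CRLSets/CRLite: the vendor aggregates CA CRLs and ships a
# compact set of (issuer, serial) pairs over its signed update
# channel. As long as that aggregation and the update channel are
# PQ-secure, the check itself needs no new cryptography.

revoked = {
    ("Example CA", "04:2a:f1"),
    ("Other CA", "19:00:7b"),
}

def is_revoked(issuer: str, serial: str) -> bool:
    """Check a certificate against the locally installed list."""
    return (issuer, serial) in revoked
```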

However, the more serious problem is that if you have a setting where most servers do not have PQ certificates, then disabling the non-PQ certificates means that lots of servers can't do secure connections at all. This obviously causes a lot of breakage and, depending on the actual vulnerability of the non-PQ algorithms, might not be good for security either, especially if people fall back to insecure HTTP.

See: https://educatedguesswork.org/posts/pq-emergency/ and https://www.chromium.org/Home/chromium-security/post-quantum...

[0] The situation is worse for Apple.


What's the incentive for individual sites or browsers to do this?

From the site's perspective, they're going to need to have a WebPKI certificate for the foreseeable future, basically until there is no appreciable population of WebPKI-only clients, which is years in the future. So DANE is strictly more work.

From the browser's perspective, very few sites actually support DANE, and the current situation is satisfactory, so why go to any additional effort?

In order for technologies to get wide deployment, they usually need to be valuable to individual ecosystem actors at the margin, i.e., they have to get value by deploying them today. Even stipulating that an eventual DANE-only system is better, it doesn't provide any benefit in the near term, so it's very hard to get deployment.

This isn't correct.

There are two authentication properties that one might be interested in:

1. The binding of some real world identity (e.g., "Google") to the domain name ("google.com").
2. The binding of the domain name to a concrete Web site/connection.

The WebPKI is responsible for the second of these but not the first, and ensures that once you have the correct domain name, you are talking to the right site. This still leaves you with the problem of determining the right domain name, but there are other mechanisms for that. For example, you might search for the company name (though of course the search engines aren't perfect), or you might be given a link to click on (in which case you don't need to know the binding).

Yes, it is useful to know the real world identity of some site, but the problem is that real world identity is not a very well-defined technical concept, as names are often not unique, but instead are scoped geographically, by industry sector, etc. This was one of the reasons why EV certificates didn't really work well.

Obviously, this isn't a perfect situation, but the real world is complicated and it significantly reduces the attack surface.

I agree with you about the category error.

In all fairness, though, there are quite a few application protocols which are built directly on top of UDP with no explicit intermediate transport layer. DNS, RTP, and even sometimes SIP come immediately to mind.
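To make the "directly on top of UDP" point concrete: a DNS query is just one hand-built datagram, with no handshake or stream framing underneath. A minimal sketch (the resolver address in the comment is illustrative):

```python
import struct

def build_dns_query(name: str, txid: int = 0x1234) -> bytes:
    """Build a minimal DNS A-record query by hand (RFC 1035 wire
    format). The whole request fits in a single UDP datagram."""
    # Header: ID, flags (RD set), QDCOUNT=1, AN/NS/ARCOUNT=0.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

# Sending it is a single sendto() on a datagram socket, e.g.:
# import socket
# s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# s.sendto(build_dns_query("example.com"), ("8.8.8.8", 53))
```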