I have recently been asked by @panoptykon if it was possible to create an online age verification system that would not be a privacy nightmare.

I replied that yes, under certain assumptions, this is possible, and provided a rough sketch of such a system.

But privacy is not the only issue with systems like that:
https://rys.io/en/178.html

#Privacy #AgeVerification #Web


@rysiek @panoptykon I disagree that your protocol would not be a privacy nightmare. IMHO just the fact that your proposal leaks the fact that a verification was attempted to the government disqualifies it.

An additional problem that jumps out at me on a very cursory reading is that it allows the requesting website to link the browser loading it to the verifying device, e.g. linking the device fingerprints of a laptop and a phone.

(1/n)

@rysiek @panoptykon Additionally, the verifying device has no way of verifying that the verification request actually comes from the website in question. For instance, a malicious news website could display a verification prompt that it proxies from, say, a porn website that the user never visited. Afterwards, the porn website holds a non-repudiable token showing that someone visited it, and the signature provider could link that token to a user identity.
@rysiek @panoptykon Finally, traffic analysis attacks undermine the whole secrecy idea of the system from the perspective of a network observer in any setting where verification is a rare occurrence: think e.g. the home router in a family home of four people, one of whom is an LGBT teen looking for resources on sexuality. Using something like Tor provides nothing in this setting, since the traffic will still stand out badly.
@rysiek @panoptykon that’s just what I figured out about that scheme while shopping for groceries just now. The reason people call schemes like this a privacy nightmare is that they are unavoidable once implemented, while simultaneously having myriad possibly critical corner cases, each of which can have grave consequences up to the death of people (think for instance someone committing suicide after being outed to their family). It’s just not something to fuck around with.
@rysiek @panoptykon I think in this setting analysis using any sort of straw man construction is largely meaningless since the whole difficulty of the setting lies exactly in the sort of detail one would leave out in a straw man construction.

@jaseg @panoptykon

> I disagree that your protocol would not be a privacy nightmare.

We don't have to agree where "nightmare" begins and ends.

However, we are talking about public websites (porn or otherwise), used by regular non-techie users.

In that particular context, schemes like "scan your face to prove your age" – like the scheme being rolled out by Discord in the UK currently – are a privacy nightmare in ways that I hope we can agree my protocol sketch is not:
https://www.theverge.com/news/650493/discord-age-verification-face-id-scan-experiment


@jaseg @panoptykon

> IMHO just the fact that your proposal leaks the fact that a verification was attempted to the government disqualifies it.

In the blogpost I provide some examples of how this could be mitigated, at least partially.

For example, if more services used this kind of system, the trusted app could add multiple irrelevant questions to the request to the e-ID service, such that it would be difficult for the e-ID service to know which question is the relevant one.
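A rough sketch of what that padding could look like (the question names, decoy count, and function name below are all made up; only the shuffle-in-decoys idea matters):

```python
import random

# Hypothetical attribute queries the trusted app could ask the e-ID
# service; only one of them is the question the website actually needs.
DECOY_POOL = [
    "is_over_18",
    "is_over_21",
    "is_resident",
    "has_drivers_license",
    "is_student",
    "is_over_65",
]

def build_padded_request(real_question: str, n_decoys: int = 3) -> list:
    """Mix the real question with random decoys and shuffle, so the
    e-ID service cannot tell which answer the relying website wanted."""
    decoys = random.sample(
        [q for q in DECOY_POOL if q != real_question], n_decoys
    )
    questions = decoys + [real_question]
    random.shuffle(questions)
    return questions

request = build_padded_request("is_over_18")
# The e-ID service answers all the questions; the trusted app forwards
# only the answer to the real one back to the website.
```

The obfuscation is only as good as the decoy pool, of course: if only one service on the market ever asks "is_over_18", the padding fools nobody.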

@jaseg and just to be very clear, I am not advocating for that, as I am not advocating for age verification in the first place!

But that discussion is happening, that kind of legislation is happening, and in most places the systems being deployed are horrendously bad.

I see this as harm reduction, not a solution. What I want to achieve is that if these systems become mandated, at least they are not Discord-face-scanning-level shit.

@panoptykon

@jaseg

> it allows the requesting website to link the browser loading it to the verifying device, e.g. linking the device fingerprints of a laptop and a phone.

That's a fair concern. However, as I note in my blogpost, there is no reason why the trusted app could not run on the same device the website is being visited from, at which point there is no additional IP address exposure to the website. To improve the privacy of the system, this could even be mandated in the protocol.
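As a sketch of what "same device" could mean in practice: the trusted app could listen on a loopback port, the way some native OAuth clients do, so the verification request never leaves the device and the website sees no second IP address. The port number and parameter names below are made up:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Toy sketch: the website sends the browser to
# http://127.0.0.1:7391/verify?callback=...&nonce=...
# and the trusted app handles it locally.

class VerifyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        callback = params.get("callback", ["?"])[0]
        nonce = params.get("nonce", ["?"])[0]
        # A real app would show the confirmation dialog here, talk to
        # the e-ID service, then redirect the browser back to `callback`
        # with the signed token.
        self.send_response(200)
        self.end_headers()
        msg = f"would verify for {callback}, nonce {nonce}"
        self.wfile.write(msg.encode())

def run():
    # Binding to 127.0.0.1 guarantees only local processes can reach it.
    HTTPServer(("127.0.0.1", 7391), VerifyHandler).serve_forever()
```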

@panoptykon

@jaseg

> verifying device has no way of verifying that the verification request actually comes from the website in question.

I do mention that the person using the trusted app would get a confirmation dialog each time verification is requested. That dialog would obviously have to contain the domain name of the website, taken from the URL where the response is to be sent.

This is not perfect – the malicious proxy service could still rely on people blindly clicking stuff.
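A sketch of how that dialog text could be derived, assuming the callback URL is the one thing the trusted app actually knows about the requester (the function name is made up):

```python
from urllib.parse import urlsplit

def confirmation_prompt(callback_url: str) -> str:
    """Build the confirmation dialog text from the URL the signed
    response will be sent to."""
    parts = urlsplit(callback_url)
    if parts.scheme != "https" or not parts.hostname:
        raise ValueError("refusing non-HTTPS or malformed callback URL")
    return f"Confirm age verification for {parts.hostname}?"

confirmation_prompt("https://news.example/verify-cb")
# -> 'Confirm age verification for news.example?'
```

Note that in the malicious-proxy scenario this dialog would name the proxying news site, not the porn site, which is exactly the blind-clicking caveat above.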

@panoptykon

@jaseg

But okay, say the porn site now has a non-repudiated token, issued via proxy, that could be linked to user identity.

Traffic patterns do not corroborate that token, so the only thing linking it to a user is the nonce, assuming the porn website or the proxy service, and the e-ID provider, kept it.

The e-ID provider can only link the user to this particular request if the porn site or the proxy provides them with the nonce. Correct?

I'm sure it could be mitigated in some way. I'll have a thunk.
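To make the linkage concrete, here is a toy model of the attack (all names and structures made up): if the e-ID provider retains even a hash of the nonce alongside the user identity, anyone who later hands them the nonce lets them recover who it was.

```python
import hashlib

def h(nonce: str) -> str:
    return hashlib.sha256(nonce.encode()).hexdigest()

# What the e-ID provider might retain per issued token: not the nonce
# itself, but enough (here, its hash) to recognise it later.
provider_log = {}

def issue_token(user_id: str, nonce: str) -> str:
    provider_log[h(nonce)] = user_id
    return f"signed(over-18, {h(nonce)})"  # stand-in for a real signature

def link_attempt(nonce_from_website: str):
    """The attack: the website (or its proxy) hands the provider the
    nonce, and the provider maps it back to a user identity."""
    return provider_log.get(h(nonce_from_website))

issue_token("alice", "n-1234")
link_attempt("n-1234")  # linkage succeeds once the nonce is shared
```

The mitigation would have to ensure the provider cannot recognise the nonce at all, even when shown it later.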

@panoptykon

@jaseg

> Finally, traffic analysis attacks undermine the whole secrecy idea of the system from the perspective of a network observer in any setting where verification is a rare occurrence

No doubt. But again, we are talking about publicly available websites, visited without the use of tools like Tor. If we are worried about a global network observer, then the mere fact of that IP address visiting that website at that time is already where the jig is up.

@panoptykon

@jaseg

The protocol I sketched out does not add to that problem. Or am I missing your point here?

> Using something like tor provides nothing in this setting since the traffic will still stand out badly.

Which traffic, between the user device and the website, or between the user device and e-ID provider? Or both?

Observed from where, the ISP level or the home WiFi network level?

How is the sketched protocol adding to the global observer's ability to link a person to a visit?

@panoptykon

@jaseg

> The reason people call schemes like this a privacy nightmare is that they are unavoidable once implemented while simultaneously having myriad possibly critical corner cases that each can have grave consequences up to the death of people

I am not a proponent of these systems. But I am absolutely terrified by the schemes – like the Discord one I mentioned – that are already being rolled out. So I wanted to show that it is possible to design a better scheme.

@panoptykon

@jaseg

> I think in this setting analysis using any sort of straw man construction is largely meaningless since the whole difficulty of the setting lies exactly in the sort of detail one would leave out in a straw man construction.

This is a valid way of looking at this of course. And I appreciate your poking at it.

I did miss the malicious proxy issue. I'm also bothered by the nonce. And I kinda feel both of these could be solved in one fell swoop with some funky crypto. 👀
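For what it's worth, one classic candidate for that funky crypto is a blind signature: the e-ID provider signs a token without ever seeing it, so a retained nonce links to nothing. A toy RSA (Chaum-style) sketch with textbook-sized numbers, emphatically not production crypto:

```python
# Toy RSA blind signature: the signer signs a blinded value and never
# sees the actual token. Tiny numbers, illustration only.
p, q = 61, 53
n = p * q                           # RSA modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

token = 1234   # the value we want signed (e.g. the nonce)
r = 7          # blinding factor, coprime with n

blinded = (token * pow(r, e, n)) % n   # user blinds the token
blind_sig = pow(blinded, d, n)         # signer signs blindly
sig = (blind_sig * pow(r, -1, n)) % n  # user unblinds the signature

assert pow(sig, e, n) == token   # signature verifies on the real token
assert blinded != token          # ...yet the signer never saw it
```

Whether this can be bolted onto an e-ID scheme without breaking its other requirements is exactly the kind of detail that needs real cryptographers, not a Mastodon thread.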

@panoptykon

@rysiek @jaseg @panoptykon maybe this problem is not to be solved? I think we all could benefit from less surveillance and more of the '90s web.

@uint8_t I would not mind that at all. I hope you're involved in pushing back against age verification online.

@jaseg @panoptykon