I have recently been asked by @panoptykon if it was possible to create an online age verification system that would not be a privacy nightmare.

I replied that yes, under certain assumptions, this is possible. And provided a rough sketch of such a system.

But privacy is not the only issue with systems like that:
https://rys.io/en/178.html

#Privacy #AgeVerification #Web

By the way, this is my first blogpost in a long time that is in both PL and EN.

I don't use any LLMs to translate my bloggo, so you know that this is all organic, hand-made, artisan translation – and any and all bullshit is mine and mine alone. 

@rysiek I tend to think there’s a pretty big hazard of people under 18 getting locked out of things for no actual reason beyond people being ‘careful’.

@lightspill that hazard extends way beyond just people under 18, and reasons for such exclusion extend way beyond "people being careful".

I dive a bit into that in the blogpost.

@rysiek you mean David Chaum & Jan Camenisch 2002, right?
@rigo oh I should read that paper! thanks!

That protocol is simplified to the point where it makes no sense and thus we cannot really evaluate its security, but yeah, I was glad to see that you do get around to semi-acknowledging that a scheme like that would need to rely on some kind of service like Tor to begin to provide any semblance of privacy. Even then, the central authority would have a record of every time each of us hit an age gate, which is valuable metadata to be giving away, whether or not it's just pornography.

Lots of people seem eager to claim that it can feasibly be done in a privacy-safe way but I still have yet to be convinced of it.

And it's all just to set up a new system of oppression with the other problems you mention. It seems utterly ridiculous.

One particular way in which the story makes no sense to me: The website wants to ask a question about "the visitor." How does it identify the visitor in its message to the central authority? If nothing prevents it, said visitor could simply pass the question on to Charlie's Web Age Verifier Bypass Service down the road, which is in possession of an age-appropriate keypair, and relay the response in an automated fashion. How does one prevent that?

I mean it's not as if people wouldn't do it. Borrowing an older kid's ID to buy beer was commonplace when I was younger. Imagine if it could be done automatically, instantly, on a large scale. Shutting down The Pirate Bay is already nigh-impossible for the powers of law and order, it seems. Imagine if every kid had to use it if they wanted to connect to Instagram.

@kbal @panoptykon the website does not identify the visitor to the e-ID service, that's what the trusted app on the visitor's device is for.

The website provides a question and a URL to the trusted app. The trusted app sends a request containing the question, signed with the visitor's key, to the e-ID service. The e-ID service responds with a signed response also containing the question, to the trusted app. The trusted app then forwards that response to the website, using the URL.
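
For illustration only, here is a minimal sketch of that flow in Python. The message format, the field names, and Ed25519 (via the "cryptography" package) standing in for the actual signature scheme are all my assumptions, not anything from a real e-ID system:

```python
# Minimal sketch of the flow described above. Everything here (field names,
# JSON messages, Ed25519 via the "cryptography" package) is an illustrative
# assumption, not part of any real e-ID spec.
import json
import secrets

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

eid_service_key = Ed25519PrivateKey.generate()  # e-ID service long-term key
visitor_key = Ed25519PrivateKey.generate()      # visitor key held in the trusted app

# 1. Website -> trusted app: a question, a nonce, and a callback URL.
question = {"query": "is_older_than_18", "nonce": secrets.token_hex(16)}
callback_url = "https://example-site.example/age-check/callback"  # kept by the app

# 2. Trusted app -> e-ID service: the question, signed with the visitor's key.
question_blob = json.dumps(question, sort_keys=True).encode()
signed_request = {
    "question": question,
    "visitor_signature": visitor_key.sign(question_blob).hex(),
}

# 3. e-ID service -> trusted app: a signed response echoing the question.
answer_blob = json.dumps({"question": question, "answer": True}, sort_keys=True).encode()
signed_response = {
    "question": question,
    "answer": True,
    "eid_signature": eid_service_key.sign(answer_blob).hex(),
}

# 4. Trusted app -> website: POST signed_response to callback_url.
print(callback_url, json.dumps(signed_response, indent=2))
```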

@kbal @panoptykon the website knows it's a response related to this particular visit thanks to the nonce. And then verifies the signature on the response against a well-known long-term public key of the e-ID service.
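
Roughly, the website-side check in the same sketch could look like this (same assumptions as above):

```python
# Website-side check for the sketch above: the nonce ties the response to this
# visit, the signature ties it to the e-ID service's well-known public key.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_response(signed_response: dict, expected_nonce: str,
                    eid_public_key: Ed25519PublicKey) -> bool:
    # 1. The response must carry the nonce this website issued for this visit.
    if signed_response["question"].get("nonce") != expected_nonce:
        return False
    # 2. The signature must verify against the e-ID service's long-term key.
    answer_blob = json.dumps(
        {"question": signed_response["question"], "answer": signed_response["answer"]},
        sort_keys=True,
    ).encode()
    try:
        eid_public_key.verify(bytes.fromhex(signed_response["eid_signature"]), answer_blob)
    except InvalidSignature:
        return False
    return signed_response["answer"] is True
```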

Obviously the e-ID service is not just any e-ID service. Perhaps it's government-run. Perhaps it is run by institutions that are somehow "anointed" by the government.

Okay, there is a nonce. Presumably it is negotiated somehow to prevent the Website from hiding any info in it. But then the question for the ID server is simply "Does a user who knows this nonce have access to a keypair indicating the right age range?" The user (i.e. the "trusted app" that is in their control) can then simply send that question off to Charlie or whoever and get the desired answer to relay to the Website without revealing to anyone any secrets of their own. The ID server has no way to know it was proving the age of the wrong person, the Website doesn't know who it actually got an age for, and neither can identify the actual user.

I think the people implementing these age verification schemes do want to try and defend against that sort of thing, because both the ones I've seen so far in reality (the one from Spain and some other thing a couple years ago that was closer to your idea) seem to have willingly sacrificed any semblance of privacy in their efforts to prevent it.

@kbal

> The ID server has no way to know it was proving the age of the wrong person

Did you actually read the post?

The e-ID server knows who it is providing a response about, as the request from the trusted app is signed with the key associated with that person.

That key is authenticated by logging into that application through the e-ID service.

The ID server knows who its response is about, but it does not know if they're the person using the website.
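
Continuing the same sketch, the e-ID service side could look something like this; the key registry and the age-question format are placeholders for whatever a real provider would use:

```python
# e-ID service side, same assumptions as before. The registry of public keys,
# populated when people log in through the trusted app, is a placeholder for
# however a real e-ID provider would bind keys to identities.
import datetime
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Hypothetical registry: raw public key bytes -> date of birth.
REGISTERED: dict[bytes, datetime.date] = {}


def answer(signed_request: dict, sender_key: Ed25519PublicKey) -> dict | None:
    raw = sender_key.public_bytes(serialization.Encoding.Raw,
                                  serialization.PublicFormat.Raw)
    birthdate = REGISTERED.get(raw)
    if birthdate is None:
        return None  # nobody ever logged in with this key: refuse to answer

    question = signed_request["question"]
    blob = json.dumps(question, sort_keys=True).encode()
    try:
        sender_key.verify(bytes.fromhex(signed_request["visitor_signature"]), blob)
    except InvalidSignature:
        return None  # the request was not signed by the holder of that key

    # The service knows exactly whose age it is attesting...
    age = (datetime.date.today() - birthdate).days // 365  # rough, fine for a sketch
    # ...but the request contains no URL, so it does not learn which website asked.
    return {"question": question, "answer": age >= 18}
```
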
@kbal @panoptykon just like when an adult passes any other kind of age verification we could imagine – even the most intrusive ones, with fingerprints and whatnot – and then hands over the laptop to a kid. Your point?
I'm not imagining a friendly adult signing in for their child, I'm thinking of a completely automated service that instantly gets past age verification for anyone who signs up for it by sharing a pool of stolen, purchased, fraudulently obtained, or willingly shared IDs.
Maybe that could be kept under control by making the keys valuable: hard to replace, and maybe even the same keypair that's used for age-verifying your mastodon login also being used for more important things. But key management is probably going to be a nightmare. Just ask the cryptocurrency guys.
@kbal @panoptykon this is a problem solved well enough for this purpose by any e-government service or e-banking app out there.

Okay I can imagine it being similar to a yubikey, albeit one with a totally new protocol along the lines of what you described. But it occurs to me that people wanting to share their age credentials with friends or strangers wouldn't actually need to give up the keys to anyone. People could run rate-limited age verifier relay services using their legit keypairs without having to compromise their ownership of those keys, similar to the way some people are willing to run bittorrent clients today.

If we assume it's not a problem that anyone who puts in a little effort can bypass this elaborate system, because it's only meant to deter people who can't be bothered, I guess it only remains to design the Tor-like service over which this will run and make it resistant to traffic analysis.

Anyway, thank you for being patient with my questions.

@kbal @rysiek Hey. Thanks for the discussion.
Indeed, the point of the government seems to be to protect younger kids who wander around the internet without supervision and may be harmed by unintentionally accessing adult content. We don't think they are fooling themselves that a 100% airtight system can be deployed.
Also, our position as a human rights watchdog is that age verification, if introduced, must be limited to porn websites; we strongly oppose it being stretched to other services, such as IG.
@panoptykon
My position (as a non-Pole and non-EU citizen) is that parents and guardians should be more active in their children's lives in order to protect them. If they don't have the time for that, then that's something we should try to fix.
@kbal @rysiek
@light @kbal @rysiek Consumers alone can't stop climate change with their choices; similarly, parents alone can't protect their children from all harms caused by internet companies (incl. platforms with porn). Actually, we believe a combo of activities by states, businesses, schools, parents, and society as a whole is necessary. We elaborated on it on the occasion of politicians discussing banning smartphones in schools. Here, if you are interested, in Polish though: https://panoptykon.org/zakaz-smartfonow-w-szkolach-jak-wyrwac-dzieci

@kbal

> the Website doesn't know who it actually got an age for

The website knows it got a response related to a specific request based on the nonce in the response. And that the response was signed by the e-ID service.

Can an adult let a kid into an age-gated website by scanning the code themselves in their own trusted app, and getting a response for themselves? Sure. But any other age verification system would allow an adult to just pass their physical device to a kid after age verification.

@rysiek @panoptykon I enjoyed reading that write-up; it clearly laid out your system.

But age is (and has been used as) a proxy. Should you be able to drive at 16? Certainly age isn't really the best measure for this, but it's easy to implement.

Maybe we should be trying to unravel what we're really after when we do age verification and try to implement that instead (more of a legal question enabled by technology than a technology question.)

@scerruti and if you click through to @panoptykon's piece on that (first link in the post; in Polish, but auto-translation should work well), you will find a discussion of exactly these deeper issues:
https://panoptykon.org/dzieci-weryfikacja-wieku

I was asked about a specific thing, I provided the response (quoted in their piece), and then I wrote a blog post to dive deeper into this particular side of things.


@rysiek @panoptykon if we can't decide these issues: what is pornographic content, and does it have to be the purpose of the site?

Could we blacklist the entire Internet unless you were age-verified? And then could we somehow allow parents to whitelist specific sites for their children, but still do it anonymously?

@rysiek @panoptykon Chelsea Jarvis at Strathclyde is doing a PhD on privacy preserving age verification and has some good results.
@DanielRThomas Can I ask what you mean by [good](https://doi.org/10.3390/children11091068)?
> We find that governments are applying a responsibilization strategy, which has led to widespread deployment of privacy-invasive or ineffective age verification. The former violates the privacy of underage users, with the latter undermining the overarching aims of the legislation. We have also found general disengagement and a lack of trust in the government amongst the public with regards to new online age verification laws within the UK. To conclude, despite governments globally looking to put more robust online age verification mechanisms in place, there remains a general lack of privacy preservation and affordable technological solutions. Moreover, the overarching aims of the online safety and age verification legislative changes may not be satisfied due to the general public stakeholder group’s disengagement and lack of trust in their government.
@DavyJones
"We are studying this topic and we are making good progress in our understanding of the problem" VS "Policies as they are implemented now are good"
@DanielRThomas
@j_bertolotti @DavyJones Yes, a good understanding of the problem (which is bad), and also some good work on solutions; the latter is, I think, not yet published.
EU Digital Identity Wallet Home - EU Digital Identity Wallet

@sirobsidian @panoptykon maybe? it's an app that runs on the user's device, so it *could* implement a protocol like this. Is it open-source? Has it been audited?

Anyway, the point of the post wasn't to list all possible existing apps like that – only to give an example of what shape such a system could take.

@rysiek @panoptykon

Switzerland has an interesting proposal for a privacy-preserving e-ID implementation: https://www.eid.admin.ch/en/

I hope people will understand the benefits of such a system and support it when voting. Unfortunately, a referendum campaign (led by anti-vaxxers, etc.) has successfully collected 50k signatures against it.

#eid #switzerland #privacy

Digital identity e-ID

The digital identity (e-ID) allows Swiss citizens and people with a Swiss residence permit to prove their identity online. The federal government will start issuing the e-ID in the swiyu wallet app in the third quarter of 2026 at the earliest. The Trust Infrastructure also allows other authorities and private individuals to issue electronic credentials.

@rysiek @panoptykon
I saw this talk a while back on electronic IDs at 38c3. https://youtu.be/PKtklN8mOo0?si=SqbVfmew2Q5-6Oni
At 15:55 they talk about a solution with ZKProofs.
Does your solution have advantages I'm missing over this one?

@Kroppeb no, my "solution" is a thought experiment for @panoptykon so that they can explain to non-techies that such a system can be made in a privacy-preserving way. I mention that better systems could be made using zero-knowledge proofs very early in my post.

Also, a more privacy-friendly link to the video:
https://media.ccc.de/v/38c3-eu-s-digital-identity-systems-reality-check-and-techniques-for-better-privacy


@rysiek Bit surprised by your assumption later in the piece that there was only one e-ID provider. I'd think that in an international context there'd need to be multiple acceptable providers. Perhaps even multiple providers for one person: e.g., government and bank(s).

@panoptykon

@edavies @panoptykon yeah, this was written in the Polish context. You're right of course, in the international context there would be many e-ID providers.

@rysiek @panoptykon I disagree that your protocol would not be a privacy nightmare. IMHO just the fact that your proposal leaks the fact that a verification was attempted to the government disqualifies it.

Additional problems that jump out to me on a very cursory reading are that it allows the requesting website to link the browser loading it to the verifying device, e.g. linking the device fingerprints of a laptop and a phone.

(1/n)

@rysiek @panoptykon Additionally, the verifying device has no way of verifying that the verification request actually comes from the website in question. For instance, a malicious news website could display a verification prompt that it proxies from say, a porn website that the user never visited. Afterwards, the porn website has a non-repudiated token that someone visited it, and the signature provider could link that token to a user identity.
@rysiek @panoptykon Finally, traffic analysis attacks undermine the whole secrecy idea of the system from the perspective of a network observer in any setting where verification is a rare occurrence, think e.g. the home router in a family home of four people, one of whom is an LGBT teen looking for resources on sexuality. Using something like tor provides nothing in this setting since the traffic will still stand out badly.
@rysiek @panoptykon that’s just what I figured out about that scheme while shopping for groceries just now. The reason people call schemes like this a privacy nightmare is that they are unavoidable once implemented while simultaneously having myriad possibly critical corner cases that each can have grave consequences up to the death of people (think for instance someone committing suicide due to being outed to their family). It’s just not something to fuck around with.
@rysiek @panoptykon I think in this setting analysis using any sort of straw man construction is largely meaningless since the whole difficulty of the setting lies exactly in the sort of detail one would leave out in a straw man construction.

@jaseg @panoptykon

> I disagree that your protocol would not be a privacy nightmare.

We don't have to agree where "nightmare" begins and ends.

However, we are talking about public websites (porn or otherwise), used by regular non-techie users.

In that particular context, schemes like "scan your face to prove your age" – like the scheme being rolled out by Discord in the UK currently – are a privacy nightmare in ways that I hope we can agree my protocol sketch is not:
https://www.theverge.com/news/650493/discord-age-verification-face-id-scan-experiment


@jaseg @panoptykon

> IMHO just the fact that your proposal leaks the fact that a verification was attempted to the government disqualifies it.

In the blogpost I provide some examples of how this could be mitigated, at least partially.

For example, if more services, of different kinds, used that kind of a system, then the trusted app could add multiple irrelevant questions to its request to the e-ID service, such that it would be difficult for the e-ID service to know which question is the relevant one.
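
A rough sketch of that mitigation in the trusted app, with the decoy question types invented for the example:

```python
# The trusted app bundles the real question with decoys drawn from other
# (invented) question types, so the e-ID service cannot tell which one the
# visit was actually about.
import random
import secrets

DECOY_QUERIES = [
    "is_older_than_16",   # e.g. a social network sign-up
    "is_older_than_21",   # e.g. online alcohol sales
    "is_resident_of_EU",  # e.g. some geo-gated service
]


def bundle_with_chaff(real_question: dict, n_decoys: int = 3) -> list[dict]:
    batch = [real_question]
    for query in random.sample(DECOY_QUERIES, k=min(n_decoys, len(DECOY_QUERIES))):
        batch.append({"query": query, "nonce": secrets.token_hex(16)})
    random.shuffle(batch)  # position should not give the real question away
    return batch
```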

@jaseg and just to be very clear, I am not advocating for that, as I am not advocating for age verification in the first place!

But that discussion is happening, that kind of legislation is happening, and in most places the systems being deployed are horrendously bad.

I see this as harm reduction, not a solution. What I want to achieve is that if these systems become mandated, at least they are not Discord-face-scanning-level shit.

@panoptykon

@jaseg

> it allows the requesting website to link the browser loading it to the verifying device, e.g. linking the device fingerprints of a laptop and a phone.

That's a fair concern. However, as I note in my blogpost, there is no reason why the trusted app could not run on the same device the website is being visited from, at which point there is no additional IP address exposure to the website. To improve the privacy of the system, this could even be mandated in the protocol.

@panoptykon

@jaseg

> verifying device has no way of verifying that the verification request actually comes from the website in question.

I do mention that the person using the trusted app would get a confirmation dialog each time a confirmation is requested. That dialog would obviously have to contain the domain name of the website, taken from the URL where the response is to be sent.

This is not perfect – the malicious proxy service could still rely on people blindly clicking stuff.
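
Still, for what it's worth, a sketch of that confirmation step, with a console prompt standing in for a real dialog:

```python
# Before signing anything, the trusted app shows which site is asking, using
# the host part of the callback URL. A console prompt stands in for a real
# UI dialog.
from urllib.parse import urlsplit


def confirm_with_user(question: dict, callback_url: str) -> bool:
    domain = urlsplit(callback_url).hostname or "an unknown site"
    prompt = (f"{domain} asks to confirm: {question['query']}\n"
              f"Send a signed age confirmation to {domain}? [y/N] ")
    return input(prompt).strip().lower() == "y"
```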

@panoptykon

@jaseg

But okay, say the porn site now has a non-repudiated token, issued via proxy, that could be linked to user identity.

Traffic does not match that, so the only thing linking it is the nonce, as long as the porn website or the proxy service, and the e-ID provider, kept it.

The e-ID provider can only link the user to this particular request if the porn site or the proxy provides them with the nonce. Correct?

I'm sure it could be mitigated in some way. I'll have a thunk.

@panoptykon

@jaseg

> Finally, traffic analysis attacks undermine the whole secrecy idea of the system from the perspective of a network observer in any setting where verification is a rare occurrence

No doubt. But again, we are talking about publicly available websites, visited without the use of tools like Tor. If we are worried about a global network observer, then the mere fact of that IP address visiting that website at that time is already where the jig is up.

@panoptykon

@jaseg

The protocol I sketched out does not add to that problem. Or am I missing your point here?

> Using something like tor provides nothing in this setting since the traffic will still stand out badly.

Which traffic, between the user device and the website, or between the user device and e-ID provider? Or both?

Observed from where, the ISP level or the home WiFi network level?

How is the sketched protocol adding to the global observer's ability to link a person to a visit?

@panoptykon

@jaseg

> The reason people call schemes like this a privacy nightmare is that they are unavoidable once implemented while simultaneously having myriad possibly critical corner cases that each can have grave consequences up to the death of people

I am not a proponent of these systems. But I am absolutely terrified by the schemes – like the Discord one I mentioned – that are already being rolled out. So I wanted to show that it is possible to design a better scheme.

@panoptykon

@jaseg

> I think in this setting analysis using any sort of straw man construction is largely meaningless since the whole difficulty of the setting lies exactly in the sort of detail one would leave out in a straw man construction.

This is a valid way of looking at this of course. And I appreciate your poking at it.

I did miss the malicious proxy issue. I'm also bothered by the nonce. And I kinda feel both of these could be solved in one fell swoop with some funky crypto. 👀

@panoptykon

@rysiek @jaseg @panoptykon maybe this problem is not to be solved? I think we all could benefit from less surveillance and more of the '90s web.

@uint8_t I would not mind that at all. I hope you're involved in pushing back against age verification online.

@jaseg @panoptykon

@rysiek @panoptykon Any "trusted" app is a privacy nightmare. And given the spread of residential-proxy malware I'd expect that any age gate that depends on an app will promptly have a malware-backed bypass service.

@AMS can you please offer a specific scenario of such an attack?

@panoptykon

@rysiek @panoptykon Something like https://www.humansecurity.com/learn/blog/satori-threat-intelligence-alert-proxylib-and-lumiapps-transform-mobile-devices-into-proxy-nodes/ but instead of selling proxy service it sells ID verification clearance: it passes ID verification requests to the victim's trusted app as though they had visited the site.
Satori Threat Intelligence Alert: PROXYLIB and LumiApps Transform Mobile Devices into Proxy Nodes

HUMAN's Satori Threat Intelligence team uncovered a group of 28 apps that turned user devices into residential proxy nodes.

HUMAN Security

@AMS @panoptykon sure, but the user of the trusted app would still get a notification asking them if they want to confirm their age to a given website.

And for that trusted app to even be able to issue requests to the e-ID provider, they would need to log in to the e-ID provider using this app and verify the long-term key held in that trusted app.

I'm sure there are ways to improve on that, but it's not like that kind of proxy service could operate in a clandestine manner.
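
For concreteness, a sketch of that enrolment step; how a real e-ID provider would actually authenticate the person is hand-waved into a placeholder comment:

```python
# Enrolment sketch: the keypair is generated on the device, the person logs in
# to the e-ID provider through the app (placeholder step), and the provider
# records which identity the public key belongs to.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def enrol(eid_registry: dict, person_id: str) -> Ed25519PrivateKey:
    # 1. Long-term key is generated locally and never leaves the device.
    key = Ed25519PrivateKey.generate()
    public_raw = key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )
    # 2. The person authenticates to the e-ID provider inside the app.
    #    (Placeholder: in reality this is a full login / identity-proofing flow.)
    # 3. The provider binds the public key to the authenticated identity;
    #    only then will it answer requests signed with this key.
    eid_registry[public_raw] = person_id
    return key
```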

@rysiek Specifically for the "above the age of X" question I'd like to see a way to have a long-lived attestation without needing to go to the eID provider for each request -- after all, they don't need to know at which times I like to browse porn. That of course gets difficult because then I could just use my older brother's attestation for illegal horniness...

@cm yeah, long-term-ish attestation is definitely one way to improve the privacy of the system.

Another way is for the trusted app to ask for age verification at random intervals, to create noise such that the e-ID service cannot easily tell which requests were chaff and which were related to an actual visit.
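
A sketch of that decoy traffic, reusing the request machinery assumed in the earlier sketches:

```python
# The trusted app fires off verification requests at random times on its own,
# so real visits do not stand out in the e-ID provider's logs. send_request()
# is a placeholder for the signed request from the earlier sketches.
import random
import secrets
import threading


def send_request(question: dict) -> None:
    ...  # sign and submit to the e-ID service, as sketched earlier


def fire_decoy() -> None:
    send_request({"query": "is_older_than_18", "nonce": secrets.token_hex(16)})
    schedule_decoy()  # keep the chaff coming


def schedule_decoy(min_delay_s: float = 600, max_delay_s: float = 6 * 3600) -> None:
    delay = random.uniform(min_delay_s, max_delay_s)
    timer = threading.Timer(delay, fire_decoy)
    timer.daemon = True
    timer.start()
```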

There are many ways this could be improved. Again, the point was to show that a system like this is, technically, possible.

@rysiek I was thinking they could add a code to lottery scratch cards in ink that fades on exposure to light.

The ink will have faded on old used cards so if anyone under age gets hold of one it'll be of no use to them.

@panoptykon