System76 on Age Verification Laws

https://discuss.tchncs.de/post/56088364

I and many others I know who grew up with unrestricted internet access (before and after the corporatization of the internet) were exposed to terrible shit. Like, I grew up with unusually tech-savvy parents who were able to protect me from the worst of it, but even I have been somewhat traumatized by accessing graphic content I shouldn’t have. I personally know people with less attentive parents who grew up browsing shock/gore websites and who were repeatedly groomed and abused by pedophiles.

Honestly, I don’t really get the backlash to this legislation, beyond that it’s perhaps being applied to devices it shouldn’t be. While yes, freedom is important, we’re talking about providing the option to limit access to mature content, not preventing them from downloading Python or using the internet. There is a justified reason for wanting this, and this seems like the ideal way to do it.

even I have been somewhat traumatized by accessing graphic content I shouldn’t have

Why did you access it if it made you feel bad? It is (and has been for as long as I can remember) very difficult to accidentally run across anything shocking on the Internet.

Because I was a stupid kid and didn’t realize that watching combat footage might be a bad idea. I thought I was just learning about military history. Same way kids don’t realize they’re being groomed or don’t realize that watching graphic horror movies might be a bad idea. Kids are dumb - and to be clear, I know you can’t shield them from everything and parents are still the primary solution. Still, a local flag for age range seems like exactly the sort of tool that would help a parent to moderate access without limiting privacy or freedom.
No one ever linked you to lemonparty, huh? No escalating chains of “hot singles in your area” ads? No, you know, human tendency to explore and pursue novel experiences?

OK, if someone actively links me to it, then yes, but there’s also no solution to that because they could just send it (or a screenshot of it) directly to me and circumvent any filters there might be.

I’ve never clicked on a “hot singles in your area” ad, so no idea what that is about.

The entire Internet is of course IMHO about exploring and pursuing novel experiences; but how quickly do you imagine children can get from websites actively recommended by parents to shocking websites? Not very, I think?

It didn’t take me long! I learned the shortcuts to hide what I was doing from them and was pretty quickly the one being asked for tech tips. Plotting revolution and pirating media in IRC while mom thought I was playing “Where In The World Is Carmen San Diego?” >:)

Kids are way smarter than a lot of people want to admit. I would say more intelligent than adults on average, balanced out by lack of experience of course. That’s why I’m so against government measures to limit their exposure and experience, whatever the pretext. They are our future and they will surpass our capabilities; we’re fucked as a species if they don’t. They deserve our support, not disingenuous constraints or to be weaponized for fear mongering.

I definitely agree with all of that.

But if you “learned the shortcuts to hide” what you were doing, then you were clearly accessing things you actively wanted to see, which was my entire point.

Not like alt-tab is rocket surgery :p

What I wanted to see was “the world”, you know? That drive to explore and pursue novelty we talked about? Think it’s a pretty universal experience, and one companies have absolutely learned to prey on. I don’t think yearning to know the unknown is quite equivalent to actively wanting to see anything specific, and you seem like a smart enough guy to be aware of the ways companies abuse that curiosity. That people, children or not, are only shown things they actively want to see is measurably, provably not true. We go down rabbitholes and off on tangents and towards intensity and in all kinds of directions, and all kinds of people have all kinds of motivations for influencing that.

Alright, I agree with you that modern “social media recommendation algorithms” are a bad thing that shouldn’t have been invented, if that is what you’re getting at.

This won’t fix that.

we’re talking about providing the option to limit access to mature content, not preventing them from downloading Python or using the internet.

We’re talking about stopping adults from using a computer without surrendering their privacy. Whatever excuses you make about that, will not last. This is a flying leap down a slippery slope, and it won’t even fucking work.

The California law is a local flag for age range. It’s not a law that requires ID, or tracking, or anything else like that. Given that it’s set by the user optionally, and from my understanding illegal to use for anything but age verification, I don’t understand how this is that negative for privacy or freedom.

And it stops here. Yeah? Today is the end of history. Nevermind any resemblance to rampant demands for facial scans and government ID, just to use a website; this demand for every computer to be 18+ will never cause problems.

Have you ever taken a hint in your life.

This is a slippery slope fallacy. Just because the option is provided to self-identify age doesn’t mean that it will be replaced with more complex data collection later - especially considering that if it’s based on this law, it would be literally impossible. 4a bans the collection of data from your system besides age, and the fact that it is all handled locally and sharing it is prohibited means that it would be impractical to implement anything fancier than a text box to collect data. If anything, this looks like a way to be seen “doing something” without having to change anything for most users. Hell, if California wanted to implement a law for data collection, why would they have implemented the CCPA, why would they have written this law to ban the sharing of data, and why wouldn’t they just write the data collection law instead, given (as you said) there is already significant backing for the idea?

The worst-case scenario is already happening - aforementioned facial scans are not theoretical. Only their scope has been limited, and suddenly we’re talking about legally-mandated age gating at an OS level.

Pattern recognition is a requirement for survival.

Many abuses start small so that people like you will let it happen. Some caveats only exist for you to point to while bickering with critics, and when you’re not looking, they quietly vanish. Others were just empty words the whole time.

This law is not some compromise over widely-demanded change. It would be a pointless intrusion even if, by some miracle, it stopped right here. It will not stop here. Be serious. You lived through last year; you know the general state of everything. These exact companies have been spying on you. These governments sure aren’t stopping them, for some mysterious reason. Scoffing about blindingly obvious expectations is a choice of comforting fantasy over worthwhile argument.

Okay, but should we not oppose laws about data collection and facial recognition in that case, rather than a law that implements an entirely separate, optional, user-driven approach? Saying this is bad because those are bad is not an argument, any more than saying the CCPA and GDPR are bad because the government wants to collect data. Your argument isn’t against this law, or even the concept of having age verification in general. It’s against government overreach as a broad concept. You’re again relying on the slippery slope fallacy to say that because I’m okay with this one specific form of age gating, I’m okay with every other one, which I have repeatedly made clear is not true.

Mandatory OS integration is not separate, optional, or user-driven.

I have explicitly argued against this law in itself, for its own sake.

Under the other submission, I am even arguing against age verification in general.

But sure, let’s talk about this on its merits, in a vacuum, like there’s nothing else happening. What the fuck is it for? You endlessly insist it’s super minor, barely an inconvenience, and obviously any idiot can bypass it. That is your defense. If you freely acknowledge all of the others went too far and didn’t work, why is this one worth trying? How is this encroachment on all operating systems not a waste of time, at best?

Liberty has costs, but it’s worth it.

If you’re going to reference the slippery slope fallacy so much, you should probably read where and when it actually applies.

From the wikipedia entry:

When the initial step is not demonstrably likely to result in the claimed effects, this is called the slippery slope fallacy.

You yourself just acknowledged that the worst-case is already happening, so the assumption that the worst case will continue to happen is reasonable.

Unless you wish to argue that :

The worst-case scenario is already happening

followed by you saying

Okay, but

isn’t an acknowledgement ?

The fallacy isn’t assuming that it will happen. Clearly, there is a significant push towards it, and it’s something we need to be fighting against. The reason it’s a slippery slope fallacy is the assumption that this law is a direct attempt to implement those systems, in spite of the fact that AB1043 implements a system that would be redundant with AI or ID based methods, technically doesn’t offer any good way to transition into an AI or ID based system (since it all has to be done locally), and legally, imposes additional data protection laws that are likely to interfere with AI-based age verification.

The problem with AI and ID age verification isn’t the age verification. It’s the data collection, limits on personal freedom, and to some, the inconvenience. So far as I can tell, AB1043 doesn’t have a significant impact on data collection (it does add another metric that could be used for fingerprinting, but also adds stricter regulation on data collection when this flag is used,) or personal freedoms - especially not when compared to what is already the existing standard of asking the user for their age and/or if they’re over 18.

The fallacy is the expectation that following escalating events would arise from the event in question.

It’s only a fallacy if it’s unreasonable to expect the subsequent steps to occur or in this case, be attempted.

Does that mean it’s a guarantee? Of course not - just that the fallacy doesn’t apply.

The intention or plan for escalating steps doesn’t have to be laid out perfectly to draw the parallels between this and previous similar events that were then subsequently used as foundations for greater reach.

Your reasoning around the technical implementation of such escalation isn’t applicable here (in the conversation about whether or not the fallacy applies)

If you want to argue that they won’t escalate, or it’s not possible , go right ahead, but raising a fallacy argument when it doesn’t apply isn’t a good start.

If you want, I can address your arguments around implementation directly, as a separate conversation?

My interpretation was that slippery slope was more about the event in question (AB1043) being predicted to directly lead to escalation (AI/ID verification). As per your Wikipedia quote, “to result in the claimed effects”. I don’t see any reason to predict that this law will directly influence their decision to escalate or not. That said, perhaps it’s a disagreement on how much cultural influence a law like this would have, and how separate a parent/user-managed system of age verification is from a government-managed one technically.

I would be interested to hear your argument for technical implementation, however.

Ah, I think I see where the difference in opinion is; claiming this event leads directly to (as in the very next step is) AI/ID verification could be considered an unreasonable jump, I suppose.

In my case I was interpreting the argument as: this event will almost certainly lead to further encroachment events into privacy, one of which would probably be the AI/ID verification.

To me this is a reasonable assumption because it’s what has happened in pretty much all of the recent instances of similar events occurring, and therefore not a slippery slope fallacy.

TL;DR

On further examination, the technical things you mention seem to be correct if you assume that this bill alone is the vector for privacy encroachment, but they don’t pan out at all if it is assumed that other steps will follow; which, given precedent, is highly likely to happen.

On the technical implementation:

The reason it’s a slippery slope fallacy is the assumption that this law is a direct attempt to implement those systems, in spite of the fact that AB1043 implements a system that would be redundant with AI or ID based methods,

As an aside, I’m not sure anyone is claiming that this bill is a direct attempt at a hard AI/ID verification system; rather, they are claiming that this is another step in a series of encroachments that will lead to escalating requirements and enforcement, AI/ID verification being an obvious step in that series.

From a technical standpoint you are correct, it outright states that photo ID upload isn’t required, yet.

Opinion: A cynic might see this as an indication that the politicians understand that political and public appetite for full photo ID requirements is less than optimal, so this is just a small step in shifting the Overton window on this subject.

technically doesn’t offer any good way to transition into an AI or ID based system (since it all has to be done locally),

That is only correct in a very narrow set of circumstances, that local requirement isn’t set in stone at all.

All that needs to happen to go from this to full ID checks is to mandate they use a “trusted” service for verification. It wouldn’t need to be an always online thing either, think of how the bullshit online verification systems that already exist work, i.e. you need to go online every x days or your system/service/app will stop working.

opinion: I fully expect any “trusted” service they designate to be something that serves the governmental and corporate desire for as much data as they can get away with, this isn’t even a stretch, just look at the service discord was trying to implement, the one with deep ties to palantir
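To illustrate why that hop is so short: the “verify every x days” pattern is trivial to bolt on top of a local flag. A hypothetical sketch, assuming a remote verifier that signs tokens with an expiry window (the key, field names, and 30-day window are all made up; no real service’s protocol is implied):

```python
import hashlib
import hmac
import json

SECRET = b"verifier-signing-key"  # held by the hypothetical "trusted" service
MAX_AGE = 30 * 24 * 3600          # forced re-verification every 30 days

def issue_token(user_id: str, now: float) -> str:
    """What the remote verifier would hand back after an ID check."""
    payload = json.dumps({"user": user_id, "issued": now})
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def token_is_fresh(token: str, now: float) -> bool:
    """Offline check an OS could run: valid signature, inside the window."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return now - json.loads(payload)["issued"] < MAX_AGE
```

Once something like token_is_fresh() gates the OS flag, the “local only” property is gone in practice, because the verifier decides who gets a token and sees every re-check.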

and legally, imposes additional data protection laws that are likely to interfere with AI-based age verification.

This isn’t wrong so much as it seems naive; we are talking about bills that change laws, and any law introduced can be revoked, superseded, or have “exceptions” carved out, such as the current favourite “think of the children” thin veneer they are using.

It wouldn’t take much to move from “all data is protected” to “all data is protected, unless we need it to protect the children”

That’s not even taking into account that the laws are only as good as the system upholding them; the current US system is sketchy AF, and other countries have similar issues with uneven application of laws.

Not to say we should throw our hands up, say “what’s the point?” and just do nothing, but pretending that these laws aren’t susceptible to the same issues affecting everything else doesn’t help anyone either.

The problem with AI and ID age verification isn’t the age verification. It’s the data collection, limits on personal freedom, and to some, the inconvenience.

Agreed.

So far as I can tell, AB1043 doesn’t have a significant impact on data collection (it does add another metric that could be used for fingerprinting, but also adds stricter regulation on data collection when this flag is used,) or personal freedoms - especially not when compared to what is already the existing standard of asking the user for their age and/or if they’re over 18.

Mostly agreed.

The points I’d raise are that the whole idea of age verification is an encroachment upon personal freedoms for some, so there’s an aspect of subjectivity to that.

In addition, relying on data collection regulations at this point is almost dangerously naive; corporations and governments alike have shown that they will basically ignore them outright or make up some exception. This isn’t conjecture, this is something easily searchable: think Flock, Ring cameras, Stingray, PRISM, anything Palantir is involved in, Cambridge Analytica, broad warrantless data requests, etc.

There is absolutely no reason to give the benefit of the doubt to parties that have repeatedly proven to be doing sketchy shit.

By the sound of it, the disagreement is mostly in how direct an impact AB1043 will have on government plans for data collection and authoritarianism.

Like, as you said, laws can be changed or removed, but the fact that it would be necessary to do so to implement AI/ID suggests to me that this isn’t that, and is instead a disconnected route. On a legal level, having this does nothing but add a speedbump to future authoritarianism - one they are likely to cross, but it doesn’t advance their goals, legally.

Technically, I have no doubt that the government will continue to push for more data collection and more control, but it seems that a local value that the user can access/edit (even if they were to use an online verification system that issues tokens) isn’t going to be secure or enforceable enough to achieve their goals. Anyone can copy, modify, share, reverse-engineer, etc.

Similarly with the Overton window: it has been standard practice for over a decade to have an “are you at least 18?” popup, and for every single service to ask you your age, if not more. We absolutely need more data protections for systems such as this (ideally an outright ban on saving this information) but this doesn’t seem to make it worse.

Basically, from my understanding, this isn’t a step towards data collection or authoritarianism, and provides no significant benefit to either of those causes - it’s effectively a technical standard. Like, if this age-verification flag was proposed by the Linux Foundation, and agreed to by others, would the backlash be this big? Similarly, I don’t see any contradiction between wanting a ban on storage/sharing of user data and the implementation of a flag like this - even if we are able to ban all storage of user data, this law would be unaffected. That’s what I’m trying to figure out - how do people think that this leads towards those end goals? How would blocking it improve anything?

Is it just a difference in opinion about the significance of the Overton window?

Is there a technical aspect I’m missing?

Is there some legal advantage this provides to surveillance that I’ve missed?

Right now, it seems like everyone is arguing against a strawman, implying that I support the idea of government/corporate surveillance and censorship, but given how unanimous it is, I’m guessing I’m missing something?

By the sound of it, the disagreement is mostly in how direct an impact AB1043 will have on government plans for data collection and authoritarianism.

That’s not really the original disagreement I was referencing, nor is it a position I’ve taken; we agree that the local-only bill isn’t the big bad.

You twice referenced the slippery slope fallacy when replying to comments clearly describing future actions; I was pointing out that it doesn’t meet those criteria because there is a reasonable assumption that the described escalation will occur.

Your original responses, to which I was referring:

This is a slippery slope fallacy. Just because the option is provided to self-identify age, doesn’t mean that it will be replaced with more complex and direct data collection (which I am against, if it wasn’t clear) later

You’re again relying on the slippery slope fallacy to say that because I’m okay with this one specific form of age gating, I’m okay with every other one, which I have repeatedly made clear is not true.

The first one is the main issue I was pointing out; the second one isn’t how the fallacy is applied at all.

As no one is taking the position that AB1043 is the actual danger, most of what you are arguing doesn’t really apply.

Similarly with the Overton window: it has been standard practice for over a decade to have an “are you at least 18?” popup, and for every single service to ask you your age, if not more. We absolutely need more data protections for systems such as this (ideally an outright ban on saving this information) but this doesn’t seem to make it worse.

Emphasis mine.

Hard disagree, moving the responsibility of this from individual websites to the OS is a big jump in scope.

The same kind of jump as making it the ISP’s responsibility if they serve illegal content from individual websites (as has been suggested).

Aside from that it centralises the surface area for future changes and enforcement.

Basically, from my understanding, this isn’t a step towards data collection or authoritarianism, and provides no significant benefit to either of those causes - it’s effectively a technical standard.

This is the disagreement: I (and obviously many others) are pointing at the long and comprehensive list of similar initiatives, both recent and historic, that were stepping stones to further encroachment and saying “oh look, another small step in the continued and provable encroachment upon privacy”, and you seem to be advocating for the benefit of the doubt.

Like, if this age-verification flag was proposed by the Linux Foundation, and agreed to by others, would the backlash be this big?

If the Linux Foundation had the same history of shenanigans, then yes.

Similarly, I don’t see any contradiction between wanting a ban on storage/sharing of user data, and the implementation of a flag like this - even if we are able to ban all storage of user data, this law would be unaffected. That’s what I’m trying to figure out - how do people think that this leads towards those end goals? How would blocking it improve anything?

Ignore the technical implementation of this one step, nobody is saying this is the endgame big bad.

Think of it as a prevention measure, a single ant in the kitchen isn’t a problem in and of itself, but it’s almost certainly an indication of a larger potential future problem.

You are arguing it’s not a problem because the ant only has 5 legs; everyone else is saying the leg count doesn’t matter, it’s still an ant.

Is it just a difference in opinion about the significance of the Overton window?

See above

Is there a technical aspect I’m missing?

Not necessarily; it’s just that you are arguing a single technical issue in a conversation about perceived intentionality.

Is there some legal advantage this provides to surveillance that I’ve missed?

See above

Right now, it seems like everyone is arguing against a strawman, implying that I support the idea of government/corporate surveillance and censorship, that I don’t expect that they’ll continue to be evil, or simply saying it’s bad because it’s cosmetically similar to laws that do impede on freedoms. Given how unanimous the backlash is, I must be missing something?

That you are using a point nobody disagrees with to imply correctness in a context where said point doesn’t really apply makes it seem like you are coming at this in bad faith.

When bad faith is assumed, people look for underlying reasons.

It’s a local, safe option for reducing child access to things they shouldn’t access.

With the proposed measures in place, any app can know exactly which devices children are using, something no one can do now.

When you implement a feature, there’s no way in the world you can guarantee only “good people” can use it, and malicious individuals are way more interested in getting info about children than anyone else.

That doesn’t protect children; it puts them in even more danger than they are now.

I mean, from my understanding, this would be both hyper-illegal and extremely impractical. You’d need to have a large enough site to lure users in, and collect identifying information and republish it, but you can’t draw enough attention to become a target for data poisoning (given that this flag is freely set by the user) or for law enforcement. It seems like this would be unlikely enough that the benefit gained from having this flag would far outweigh the risks, especially on the modern, hyper-corporate internet.

There is no benefit.

You can’t glibly assert that people can just lie, so it’s not a big deal - and then pretend it’ll do the thing it’s for. Which again, is a bad idea anyway, which this approach would not achieve, if it even worked. It’s fractally stupid. It is dangerous bullshit, at every scale.

There is no benefit.

This is obvious hyperbole and you know it. Kids are stupid and vulnerable, and measures to protect them aren’t useless. That said, I am open to the idea that this law isn’t worth the cost. Basically every other age verification law (especially those based on user ID or AI) is very clearly not. I just haven’t seen a compelling argument as to why this one isn’t.

You can’t glibly assert that people can just lie, so it’s not a big deal - and then pretend it’ll do the thing it’s for. Which again, is a bad idea anyway, which this approach would not achieve, if it even worked. It’s fractally stupid. It is dangerous bullshit, at every scale.

Okay, but why? You keep repeating that it’s dangerous, limits freedoms, and causes privacy issues, but so far, the only argument I’ve seen is that it can help kids identify themselves, and given that it’s handled locally and is unreliable, I don’t see this being usable on any meaningful scale. Setting up a “free candy” website or app is going to be way less effective and way more dangerous than just creating a Roblox account. Is there something I’m missing?

Companies shouldn’t even be allowed to demand more than a username and password, on any machine I could pick up and throw. Making anything beyond that a legal requirement is intolerable, in itself. My age is not this object’s business. It sure isn’t this website’s business.

Stop excusing these intrusions against adult life, for the sake of children who will bypass them anyway. You know they will. You use the flimsiness of this alleged protection as an excuse for enabling it. There is literally no benefit if it doesn’t fucking work. Even pretending the immediate goal is something you should want - this won’t do that.

Companies shouldn’t even be allowed to demand more than a username and password, on any machine I could pick up and throw. Making anything beyond that a legal requirement is intolerable, in itself. My age is not this object’s business. It sure isn’t this website’s business.

Stop excusing these intrusions against adult life, for the sake of children who will bypass them anyway. You know they will. You use the flimsiness of this alleged protection as an excuse for enabling it. There is literally no benefit if it doesn’t fucking work. Even pretending the immediate goal is something you should want - this won’t do that.

I do know they will. The whole reason I’m even okay with this idea is because it is completely optional for the user. I don’t see how it’ll impact adult life. That is why I’m so confused at the backlash. It’s asking for an option to increase user control and user choice over their experience. Hell, from my understanding, this would provide a means for users to make it actually illegal to collect any user data, but I need to re-read the CCPA to confirm this. It seems that the benefits of user choice provided by this option far outweigh the loss of having one more fingerprinting metric - let alone one that is illegal to share.

If I had to take a photo of my genitals to sign into my own computer, promises against storage or sharing are not addressing my complaints about privacy. Asking my age is a lot less personal - but it’s still information about me, which this object does not need.

‘I’m only okay with this idea because I know it won’t work’ is, just, why are we even talking? What is the function of an argument when you’re not listening to yourself?

If I had to take a photo of my genitals to sign into my own computer, promises against storage or sharing are not addressing my complaints about privacy. Asking my age is a lot less personal - but it’s still information about me, which this object does not need.

If you’re that concerned, leave the field at its default value, or (since it’s your PC and there will absolutely be a way to) set it to a null value. Or set it based on the amount of legal protections you want on your data, because that also appears to work.

‘I’m only okay with this idea because I know it won’t work’ is, just, why are we even talking? What is the function of an argument when you’re not listening to yourself?

Saying it can be bypassed doesn’t mean it doesn’t work. Like most safety and security measures, the point is to disincentivise and prevent errors of convenience - especially since children particularly lack impulse control. In the same way, a railing or fence on a cliff won’t prevent people from passing, but it will make them think twice. That doesn’t mean having the railing/fence is pointless.

Or set it based on the amount of legal protections you want on your data

… do you ever step back and wonder if civilization was a mistake?

would be both hyper-illegal and extremely impractical

Has that ever stopped criminals before?

illegal

Yes, in that they can be stopped if noticed. Police are incompetent, but if something is that bad, and draws enough attention, the person will generally be arrested.

extremely impractical

Yes, all the time. That’s why safes, passwords, and similar measures exist. Or, more relevant in this case, the adage that the best way to avoid a break-in is to be a less appealing target than your neighbors. Roblox, Minecraft, Discord, and other platforms where kids gather and regularly self-identify are still going to exist, and they are far safer and far more appealing for targeted abuse of children. On the other hand, setting up a public website/app and trying to lure children to it is expensive, risky, and unlikely to succeed on the modern internet.

On the other hand, setting up a public website/app and trying to lure children to it is expensive, risky, and unlikely to succeed on the modern internet.

Right, when has any website become a platform where kids gather and regularly self-identify?

You’re completely ignoring my argument. How many of these websites where children gather and self-identify are created and maintained by paedophiles specifically to prey on children? So far as I know, there has never been a site like this on the modern internet, let alone one that has remained up and running for an extended period. I don’t see any reason to expect this to change.

How many of these websites where children gather and self-identify are created and maintained by paedophiles specifically to prey on children?

In light of the Epstein files I would hesitate to say that number is zero. Nevermind that most such platforms are smaller than the giants you mentioned. Or that anyone working for or with kid-filled sites of any size could make it incidentally about preying on said kids. Apparently people manage when they’re just anonymous users.

Or that anyone working for or with kid-filled sites of any size could make it incidentally about preying on said kids. Apparently people manage when they’re just anonymous users.

But like, that’s exactly my point. It’s platforms like Roblox that predators seek out to prey on children. They don’t create their own. An age verification law will have no effect on that. A hidden backend value that’s illegal to share doesn’t make it significantly easier for predators. Even if they did have unrestricted access to user data, wouldn’t a hundred other variables better identify vulnerable users, like use of voice chat and past text messages? Hell, I would expect children with the age flag not set to be more vulnerable, given that it would likely mean the parent is less tech-savvy and/or less likely to be paying attention to their child.

‘This law is fine because it won’t affect child predators’ is a brave argument.

What is it for? You’ve found so many ways to say it’s toothless, optional, trivially dodged. So why fucking bother? Critics seem to agree, it’s a foot in the door for all of the other privacy-defeating efforts going on, now running in protection ring zero. What does this nonsense do, besides set off those red flags? What impact do you honestly expect, versus telling websites to have an ‘18+ only’ click-through?

‘This law is fine because it won’t affect child predators’ is a brave argument.

This obviously isn’t the argument I’m making. This law obviously isn’t meant to stop predators. It’s meant to provide a parental control option for parents to limit their own children’s access to potentially harmful or mature material.

Critics seem to agree, it’s a foot in the door for all of the other privacy-defeating efforts going on, now running in protection ring zero. What does this nonsense do, besides set off those red flags?

This huge uproar is the point of my confusion. You and others in the field seem certain that this is a direct first step towards ID and AI data collection. Meanwhile, before this, I occasionally saw the same approach proposed as a good option in privacy-related blogs/communities, specifically because it was optional and entirely handled by the users.

What impact do you honestly expect, versus telling websites to have an ‘18+ only’ click-through?

More convenience for adults (not having to click “yes” every time), and having a more effective way of slowing down children accessing content that might be dangerous. For example, if I was a parent who had access to this, I’d likely set up two accounts for my kids: one set to 18+ for when I’m directly supervising them, and one set to under 18 for when I’m supervising them less thoroughly.

Software freely adding an option to somehow report ‘this user is underage’ is unavoidably distinct from the government mandating any form of requesting, storing, or sharing the user’s age.

Even if you honestly believe there’s no connection to states demanding ID collection before looking at porn - how can you not understand the people recoiling at this? ‘I get it but you’re mistaken’ would see a polite argument. Your apparent bewilderment is inexplicable. ‘Microsoft legally requires your birthdate before you boot up and the internet will work differently based on that’ must be a dark aside in some Cory Doctorow story. How is it our actual reality, which some people think is normal?

Well, from a privacy/freedom standpoint, how is this different from a website requiring you to enter your age and/or asking you to confirm that you’re 18? They record your age, store it with your data, then let you continue. What baffles me is that this is widely accepted as standard practice, and not a significant privacy concern, while an account-level flag that does the exact same thing isn’t. Like, is it because it’s managed by the browser/OS/app store? In that case, why isn’t there the same backlash against the existence of things like system theme flags, user agents, and even usernames?

As if there’s no backlash for those things! No popular culture reflecting the baby boom on January 1st, 1900. No widespread browser plugins to make e-mail nags and sign-in pop-ups fuck off.

As if legally mandatory age reporting is in any way the same thing as haphazard adoption of a Dark Mode flag. Wikipedia’s not even smart enough to make Automatic the default.

On some level, a website named Porn Hub needing an interstitial that says ‘btw, this is porn’ is the original sin of the internet. It’s borne of the same puritanical horseshit that tried banning pornography entirely. It’s not about children. They’re the excuse. This ongoing moral panic uses them in a widespread and not entirely unsuccessful effort to deny adult-ass adults the things that most of them want. This has been happening my entire life, and yours, and is why I cannot respect the hair-splitting insistence that forcing your OS to report your age is - somehow! - totally unrelated, utterly disconnected, having nothing to do with the many conservative governments who want to track every video you ever jerked off to.

For the children.

I’m trying to give you the benefit of the doubt, but at this point you seem to be increasingly resorting to insults, and arguing against strawmen, to the point where I’m having trouble even understanding what you’re saying. I’m doing my best to remain respectful and civil, but you aren’t returning the favour. That said, I am trying to give you a chance, and want to be open to being convinced. So…

If I understand what you’re trying to say, you think there should never be any prompt, warning, or other safety measure on any content? Not gore videos, not dating sites, not shock sites? Am I understanding you correctly, and if not, can you please restate your argument more clearly?

I don’t think I’ve said shit about you, as a person, beyond ‘your arguments are bad and you should feel bad,’ with an abundant side of ‘and here’s why.’ You’re getting the toned-down version of reflexive sarcasm at some baffling things you continue to say. By all means, let loose, because blunt honesty might get us closer to sharing the same reality.

I’ve already linked to where I said, content warnings good, age gating bad. You think this should replace all ‘I am 18’ prompts.

I’ve belabored the distinction between freely adopted implementation and any form of state enforcement. Like, there’s plenty wrong with user-agent strings, but even a simple requirement to accurately report browser version would be quietly horrifying. Robbing software developers of the ability to say ‘that was a bad security decision, let’s just not do it,’ is intrinsically fucked.

If you need it restated:

I despise the idea of my own damn machine needing to know my birthdate. Largely, but not entirely, because that points toward verification demands which you agree would be intolerable. The internet should not work differently based on who you are.

I don’t think this law will achieve anything worthwhile, and I’m not convinced you do either. Your defense of it is full of things I would say as condemnation.

I fully expect this to get worse, based on all recent visible trends. Countries are banning young people from using entire categories of website. Glorified chatrooms are asking to see your driver’s license. The last thing a liberated internet needs is more personal information.

System76 on Age Verification Laws - sh.itjust.works

Liberty has costs, but it’s worth it.

even a simple requirement to accurately report browser version would be quietly horrifying

Maybe this is where the confusion comes from. The reason I think this is an acceptable idea is specifically that there is no requirement for it to be accurate, and technically, it doesn’t seem possible to tack on a more intrusive system after the fact (since everything is stored locally). In effect, it seems to just be a “filtering level” flag: something a user can choose to use (or not) to filter different types of content. This seems like it’s happening in parallel with government/corporate surveillance, rather than in service to it.

Robbing software developers of the ability to say ‘that was a bad security decision, let’s just not do it,’ is intrinsically fucked.

Actually, this is the part I have the biggest issue with, especially because I don’t agree with some of the implementation details, like the requirement that the original input be a numerical/date input field labeled as age, rather than a bracket selection or something else clearer and more granular. At the same time, I think there is something to be said for government intervention in areas where private companies have failed to innovate/standardize, USB-C being the prime example.

That said, honestly, thinking about how suboptimal this is even as a content filtering system… I think you’re right that this is the wrong approach. Something like flags for “hide sexual content”, “hide gore”, and “hide potentially disturbing content” would make far more sense than a set of unified age brackets. So, at least as a technical standard, consider me convinced that it shouldn’t be implemented.
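To sketch what I mean by category flags instead of age brackets (all the flag and tag names here are made up purely for illustration, not from the law or any real spec):

```python
# Hypothetical sketch: per-category content filter flags a parent could set
# locally, instead of a single unified age bracket. All names are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class ContentFilters:
    hide_sexual_content: bool = True
    hide_gore: bool = True
    hide_disturbing_content: bool = True


def should_show(item_tags: set, filters: ContentFilters) -> bool:
    """Return True if no active filter matches the item's content tags."""
    blocked = {
        "sexual": filters.hide_sexual_content,
        "gore": filters.hide_gore,
        "disturbing": filters.hide_disturbing_content,
    }
    return not any(blocked.get(tag, False) for tag in item_tags)


# A child's profile keeps the restrictive defaults; an adult's disables them.
child = ContentFilters()
adult = ContentFilters(False, False, False)
print(should_show({"gore"}, child))  # → False (blocked)
print(should_show({"gore"}, adult))  # → True
```

The point being that granular flags express *what* to hide directly, instead of making every site guess what a given age bracket should or shouldn’t see.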

Individual sites will have their data leaked and aggregated by data brokers. Those data brokers both sell the aggregated data and experience data leaks themselves. The data keeps moving from actor to actor, and the aggregation continues, until it eventually finds its way into a public repo or security researchers’ data sets.

This is a compelling argument, but do you think it’s really a significant attack vector? It’s already illegal to share or leak this data, even unintentionally, and from my understanding, if you choose to set your age to a lower bracket via this process, companies sharing (also collecting? Currently unclear on this.) this data would also violate CCPA and possibly COPPA, and the companies are required to provide additional data privacy measures under the California Civil Code.

Yes, these laws will be broken, but will it be on a significant enough scale, and with reliable enough information, to be worthwhile? Like, since this bans the use of data from those who set their age low, wouldn’t this likely reduce the data collection pool overall, not to mention incentivize adults to poison this data? For those who do illegally collect this data anyway, is it that much of an advantage compared to just asking the user’s age upon reaching the site, as most sites currently do? Beyond that, when these sites operating illegally do leak their data, will that data be a realistic attack vector? Like I said to another commenter, collating data in this way seems extremely impractical and unreliable for predators. Wouldn’t those who want to seek out children just go to existing spaces where they can connect directly, like Roblox or Discord?

I think it is one vector that can contribute to identification through fingerprinting. While the data brokers are aggregating data from this vector, they are also aggregating data from all other vectors within their capability. The data sets from each vector are cross-referenced to create unique fingerprint IDs for each individual believed to be found in the data. Every vector the brokers are able to add increases the overall accuracy of the model they use to connect those IDs to real-world people. These data sets don’t take a lot of resources to store while they gain monetary and strategic value over time, so they will be duplicated across many actors. If all the brokers were getting access to is this single data point, that would not be an issue, but it’s the sum of all data points being provided to them that brings growing risk. This isn’t the first or last attempt to add mandatory data collection. Each time we add a mandatory data point, we’re extending the runway for brokers to get their operations off the ground. The threat actors were already headed to Roblox and Discord, but now the tools available to them are made slightly more sophisticated, increasing the chances of their success.

Providing false data for your age would contribute to reducing the reliability of the data for data brokers, but I believe it would take collective action to make this significant. Most people are going to provide accurate data, so the number of people trying to poison it is low enough that the brokers still get good data, along with new data showing who wants to poison broker data.

I separate the legal effects from real-world effects. Online devices are exposed to all jurisdictions worldwide at once. Laws in those jurisdictions are subject to constant change and interpretation, while the data can move between jurisdictions in a moment. Data brokers accept the risk of breaking laws when the risk/reward calculation looks favorable to them, the same as publicly traded corporations do. This is the same reason they will continue to collect data on minors even if the law tells them not to. It just takes one event for a targeted individual to have their life changed forever. The law may try to punish the broker, but it will rarely restore the victim. State and other large actors are going to collect the data regardless of what the law says. They can fall back on differing interpretations, employee incompetence claims, fall guys, or just saying big oops if they’re ever caught.

Friend, thank you for the dialogue as well. You’re getting downvoted because the votes reflect our community’s emotions on the topic, regardless of the quality or relevance of the comment.

Honestly, I re-read the legislation, and while I’m still not convinced that something like this is a bad idea, all the specifics of it are.

Like, ultimately, it’s a user-set flag, stored locally, and would provide users more choice in content filtering.

Most people are going to provide accurate data so the amount of people trying to poison is low enough that the brokers still get good data along with new data showing who wants to poison broker data.

You’re right, and the design of this law basically ensures that. I was thinking of it being implemented (at least in user-facing UI) as a dropdown showing the four provided age brackets. Instead, it is required to be a numeric or date-of-birth input, seemingly without allowing a default value, which means users are more likely to enter accurate data. Similarly, stored age information isn’t required to use the brackets provided. This means that a lazy or immoral developer will store the exact age, rather than abstracting it as the law suggests. I had misinterpreted 1798.500(b) and thought that abstracting the age data as suggested was required.
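For what it’s worth, the abstraction I had assumed was required would be trivial to implement; something like this (the bracket boundaries here are my own guess at the four brackets, not quoted from the law):

```python
# Illustrative sketch: collapsing an exact age into a coarse bracket at the
# point of entry, so only the bracket (never the raw age) is ever stored or
# exposed. Bracket boundaries are assumptions, not taken from the statute.
def age_bracket(age: int) -> str:
    if age < 13:
        return "under-13"
    elif age < 16:
        return "13-15"
    elif age < 18:
        return "16-17"
    return "18+"


print(age_bracket(14))  # → "13-15"
```

Doing this once, locally, and discarding the exact age would remove most of the fingerprinting value of the data point; the problem is that nothing in the text actually forces developers to do it.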

If something like this is to be implemented, it needs to use a more abstracted format (ideally with a default value), and if it’s going to be written into law, it should use a better content-filtering system than a simple age-based metric.