System76 on Age Verification Laws
I and many others I know who grew up with unrestricted internet access (before and after the corporatization of the internet) were exposed to terrible shit. Like, I grew up with unusually tech savvy parents who were able to protect me from the worst of it, but even I have been somewhat traumatized by accessing graphic content I shouldn’t have. I personally know people who grew up with worse parents who grew up browsing shock/gore websites and who were repeatedly groomed and abused by pedophiles.
Honestly, I don’t really get the backlash to this legislation, beyond that it’s perhaps being applied to devices it shouldn’t be. While yes, freedom is important, we’re talking about providing the option to limit access to mature content, not preventing kids from downloading Python or using the internet. There is a justified reason for wanting this, and this seems like the ideal way to do it.
It’s a local, safe option for reducing children’s access to things they shouldn’t access.
With the proposed measures in place, any app can know exactly which devices children are using, something no one can do now.
When you implement a feature, there’s no way in the world you can guarantee only “good people” can use it, and malicious individuals are way more interested in getting info about children than anyone else.
That doesn’t protect children, it puts them in even more danger than they are now.
There is no benefit.
You can’t glibly assert that people can just lie, so it’s not a big deal - and then pretend it’ll do the thing it’s for. Which again, is a bad idea anyway, which this approach would not achieve, if it even worked. It’s fractally stupid. It is dangerous bullshit, at every scale.
There is no benefit.
This is obvious hyperbole and you know it. Kids are stupid and vulnerable, and measures to protect them aren’t useless. That said, I am open to the idea that this law isn’t worth the cost. Basically every other age verification law (especially those based on user ID or AI) very clearly isn’t. I just haven’t seen a compelling argument as to why this one isn’t.
You can’t glibly assert that people can just lie, so it’s not a big deal - and then pretend it’ll do the thing it’s for. Which again, is a bad idea anyway, which this approach would not achieve, if it even worked. It’s fractally stupid. It is dangerous bullshit, at every scale.
Okay, but why? You keep repeating that it’s dangerous, limits freedoms, and causes privacy issues, but so far, the only argument I’ve seen is that it can help kids identify themselves, and given that it’s handled locally and is unreliable, I don’t see this being usable at any meaningful scale. Setting up a “free candy” website or app is going to be way less effective and way more dangerous than just creating a Roblox account. Is there something I’m missing?
Companies shouldn’t even be allowed to demand more than a username and password, on any machine I could pick up and throw. Making anything beyond that a legal requirement is intolerable, in itself. My age is not this object’s business. It sure isn’t this website’s business.
Stop excusing these intrusions against adult life, for the sake of children who will bypass them anyway. You know they will. You use the flimsiness of this alleged protection as an excuse for enabling it. There is literally no benefit if it doesn’t fucking work. Even pretending the immediate goal is something you should want - this won’t do that.
Companies shouldn’t even be allowed to demand more than a username and password, on any machine I could pick up and throw. Making anything beyond that a legal requirement is intolerable, in itself. My age is not this object’s business. It sure isn’t this website’s business.
Stop excusing these intrusions against adult life, for the sake of children who will bypass them anyway. You know they will. You use the flimsiness of this alleged protection as an excuse for enabling it. There is literally no benefit if it doesn’t fucking work. Even pretending the immediate goal is something you should want - this won’t do that.
I do know they will. The whole reason I’m even okay with the idea is because it is completely optional for the user. I don’t see how it’ll impact adult life. That is why I’m so confused at the backlash. It’s asking for an option to increase user control and user choice over their experience. Hell, from my understanding, this would provide a means for users to make it actually illegal to collect any user data, but I need to re-read the CCPA to confirm this. It seems that the benefits of user choice provided by this option far outweigh the loss of having one more fingerprinting metric, and one that is illegal to share at that.
If I had to take a photo of my genitals to sign into my own computer, promises against storage or sharing are not addressing my complaints about privacy. Asking my age is a lot less personal - but it’s still information about me, which this object does not need.
‘I’m only okay with this idea because I know it won’t work’ is, just, why are we even talking? What is the function of an argument when you’re not listening to yourself?
If I had to take a photo of my genitals to sign into my own computer, promises against storage or sharing are not addressing my complaints about privacy. Asking my age is a lot less personal - but it’s still information about me, which this object does not need.
If you’re that concerned, leave the field at its default value, or (since it’s your PC, and there will absolutely be a way to) set it to a null value. Or set it based on the amount of legal protections you want on your data, because that also appears to work.
‘I’m only okay with this idea because I know it won’t work’ is, just, why are we even talking? What is the function of an argument when you’re not listening to yourself?
Saying it can be bypassed doesn’t mean it doesn’t work. Like most safety and security measures, the point is to disincentivise and prevent errors of convenience, especially since children particularly lack impulse control. In the same way, a railing or fence on a cliff won’t prevent people from passing, but it will make them think twice. That doesn’t mean having the railing/fence is pointless.
Or set it based on the amount of legal protections you want on your data
… do you ever step back and wonder if civilization was a mistake?
would be both hyper-illegal and extremely impractical
Has that ever stopped criminals before?
illegal
Yes, in that they can be stopped if noticed. Police are incompetent, but if something is that bad, and draws enough attention, the person will generally be arrested.
extremely impractical
Yes, all the time. That’s why safes, passwords and similar exist. Or, more relevant in this case, the adage that the best way to avoid a break-in is to be a less appealing target than your neighbors. Roblox, Minecraft, Discord, and other platforms where kids gather and regularly self-identify are still going to exist, and they are far safer and far more appealing for targeted abuse of children. On the other hand, setting up a public website/app and trying to lure children to it is expensive, risky, and unlikely to succeed on the modern internet.
On the other hand, setting up a public website/app and trying to lure children to it is expensive, risky, and unlikely to succeed on the modern internet.
Right, when has any website become a platform where kids gather and regularly self-identify?
How many of these websites where children gather and self-identify are created and maintained by paedophiles specifically to prey on children?
In light of the Epstein files I would hesitate to say that number is zero. Nevermind that most such platforms are smaller than the giants you mentioned. Or that anyone working for or with kid-filled sites of any size could make it incidentally about preying on said kids. Apparently people manage when they’re just anonymous users.
Or that anyone working for or with kid-filled sites of any size could make it incidentally about preying on said kids. Apparently people manage when they’re just anonymous users.
But like, that’s exactly my point. It’s platforms like Roblox that predators seek out to prey on children. They don’t create their own. An age verification law will have no effect on that. A hidden backend value that’s illegal to share doesn’t make it significantly easier for predators. Even if they did have unrestricted access to user data, wouldn’t a hundred other variables better identify vulnerable users, like use of voice chat and past text messages? Hell, I would expect children with the age flag not set to be more vulnerable, given that it would likely mean the parent is less tech-savvy and/or less likely to be paying attention to their child.
‘This law is fine because it won’t affect child predators’ is a brave argument.
What is it for? You’ve found so many ways to say it’s toothless, optional, trivially dodged. So why fucking bother? Critics seem to agree, it’s a foot in the door for all of the other privacy-defeating efforts going on, now running in protection ring zero. What does this nonsense do, besides set off those red flags? What impact do you honestly expect, versus telling websites to have an ‘18+ only’ click-through?
‘This law is fine because it won’t affect child predators’ is a brave argument.
This obviously isn’t the argument I’m making. This law obviously isn’t meant to stop predators. It’s meant to provide a parental control option for parents to limit their own children’s access to potentially harmful or mature material.
Critics seem to agree, it’s a foot in the door for all of the other privacy-defeating efforts going on, now running in protection ring zero. What does this nonsense do, besides set off those red flags?
This huge uproar is the point of my confusion. You and others in the field seem certain that this is a direct first step towards ID and AI data collection. Meanwhile, before this, I actually saw this occasionally proposed as a good option in privacy-related blogs/communities specifically because it was optional and entirely handled by the users.
What impact do you honestly expect, versus telling websites to have an ‘18+ only’ click-through?
More convenience for adults (not having to click “yes” every time), and having a more effective way of slowing down children accessing content that might be dangerous. For example, if I was a parent who had access to this, I’d likely set up two accounts for my kids: one set to 18+ for when I’m directly supervising them, and one set to under 18 for when I’m supervising them less thoroughly.
Software freely adding an option to somehow report ‘this user is underage’ is unavoidably distinct from the government mandating any form of requesting, storing, or sharing the user’s age.
Even if you honestly believe there’s no connection to states demanding ID collection before looking at porn - how can you not understand the people recoiling at this? ‘I get it but you’re mistaken’ would see a polite argument. Your apparent bewilderment is inexplicable. ‘Microsoft legally requires your birthdate before you boot up and the internet will work differently based on that’ must be a dark aside in some Cory Doctorow story. How is it our actual reality, which some people think is normal?
As if there’s no backlash for those things! No popular culture reflecting the baby boom on January 1st, 1900. No widespread browser plugins to make e-mail nags and sign-in pop-ups fuck off.
As if legally mandatory age reporting is in any way the same thing as haphazard adoption of a Dark Mode flag. Wikipedia’s not even smart enough to make Automatic the default.
On some level, a website named Porn Hub needing an interstitial that says ‘btw, this is porn’ is the original sin of the internet. It’s borne of the same puritanical horseshit that tried banning pornography entirely. It’s not about children. They’re the excuse. This ongoing moral panic uses them in a widespread and not entirely unsuccessful effort to deny adult-ass adults the things that most of them want. This has been happening my entire life, and yours, and is why I cannot respect the hair-splitting insistence that forcing your OS to report your age is - somehow! - totally unrelated, utterly disconnected, having nothing to do with the many conservative governments who want to track every video you ever jerked off to.
For the children.
I’m trying to give you the benefit of the doubt, but at this point you seem to be increasingly resorting to insults and arguing against strawmen, to the point where I’m having trouble even understanding what you’re saying. I’m doing my best to remain respectful and civil, but you aren’t returning the favour. That said, I am trying to give you a chance, and want to be open to being convinced. So…
If I understand what you’re trying to say, you think there should never be any prompt, warning, or other safety measure on any content? Not gore videos, not dating sites, not shock sites? Am I understanding you correctly, and if not, can you please restate your argument more clearly?
I don’t think I’ve said shit about you, as a person, beyond ‘your arguments are bad and you should feel bad,’ with an abundant side of ‘and here’s why.’ You’re getting the toned-down version of reflexive sarcasm at some baffling things you continue to say. By all means, let loose, because blunt honesty might get us closer to sharing the same reality.
I’ve already linked to where I said, content warnings good, age gating bad. You think this should replace all ‘I am 18’ prompts.
I’ve belabored the distinction between freely adopted implementation and any form of state enforcement. Like, there’s plenty wrong with user-agent strings, but even a simple requirement to accurately report browser version would be quietly horrifying. Robbing software developers of the ability to say ‘that was a bad security decision, let’s just not do it,’ is intrinsically fucked.
If you need it restated:
I despise the idea of my own damn machine needing to know my birthdate. Largely, but not entirely, because that points toward verification demands which you agree would be intolerable. The internet should not work differently based on who you are.
I don’t think this law will achieve anything worthwhile, and I’m not convinced you do either. Your defense of it is full of things I would say as condemnation.
I fully expect this to get worse, based on all recent visible trends. Countries are banning young people from using entire categories of website. Glorified chatrooms are asking to see your driver’s license. The last thing a liberated internet needs is more personal information.
even a simple requirement to accurately report browser version would be quietly horrifying
Maybe this is where the confusion comes from. The reason I think this is an acceptable idea is specifically because there is no requirement for it to be accurate, and technically, it doesn’t seem possible to tack on a more intrusive system after the fact (owing to the fact that everything is stored locally). In effect, it seems to just be a “filtering level” flag, something a user can choose to use (or not) to filter different types of content. This seems to be happening in parallel with government/corporate surveillance, rather than in service to it.
Robbing software developers of the ability to say ‘that was a bad security decision, let’s just not do it,’ is intrinsically fucked.
Actually, this is the part I have the biggest issue with, especially because I don’t agree with some of the implementation details, like the requirement that the original input be a numerical/date input field labeled as age, rather than a bracket selection or something else clearer and more granular. At the same time, I think there is something to be said for government intervention in areas where private companies have failed to innovate/standardize, USB-C being the prime example.
That said, honestly, thinking about how suboptimal this is even as a content filtering system… I think you’re right that this is the wrong approach. Something like flags for “hide sexual content”, “hide gore”, and “hide potentially disturbing content” would make far more sense than a set of unified age brackets. So, at least as a technical standard, consider me convinced that it shouldn’t be implemented.
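For illustration, a minimal sketch of what per-category flags could look like; the flag names and the tag-matching scheme are my own invention, not anything in the bill:

```python
from dataclasses import dataclass

# Hypothetical user-set content preferences, stored locally.
@dataclass(frozen=True)
class ContentFilter:
    hide_sexual_content: bool = True
    hide_gore: bool = True
    hide_disturbing_content: bool = True

def is_allowed(tags: set[str], prefs: ContentFilter) -> bool:
    """Return True if content with the given tags passes the user's filter."""
    if prefs.hide_sexual_content and "sexual" in tags:
        return False
    if prefs.hide_gore and "gore" in tags:
        return False
    if prefs.hide_disturbing_content and "disturbing" in tags:
        return False
    return True

print(is_allowed({"news"}, ContentFilter()))  # passes the default filter
print(is_allowed({"gore"}, ContentFilter()))  # blocked by the default filter
```

Unlike a single age number, each flag says exactly what it filters, so an adult who only wants to avoid gore doesn’t have to pretend to be a child.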
This is a compelling argument, but do you think it’s really a significant attack vector? It’s already illegal to share or leak this data, even unintentionally, and from my understanding, if you choose to set your age to a lower bracket via this process, companies sharing (also collecting? I’m currently unclear on this) this data would also be breaking CCPA and possibly COPPA, and the companies would be required to provide additional data privacy measures under the California Civil Code.
Yes, these laws will be broken, but will it be on a significant enough scale, and with reliable enough information, to be worthwhile? Like, since this bans the use of data from those who set their age low, wouldn’t this likely reduce the data collection pool overall, not to mention incentivizing adults to poison this data? For those who do illegally collect this data anyway, is it that much of an advantage compared to just asking the user’s age upon reaching the site, as most sites currently do? Beyond that, when these sites operating illegally do leak their data, will that data be a realistic attack vector? Like I said to another commenter, collating data in this way seems extremely impractical and unreliable for predators. Wouldn’t those who want to seek out children just go to existing spaces where they can connect directly, like Roblox or Discord?
I think it is one vector that can contribute to identification through fingerprinting. While the data brokers are aggregating data from this vector, they are also aggregating data from all other vectors within their capability. The data sets from each vector are cross-referenced to create unique fingerprint IDs for each individual believed to be found in the data. Every vector the brokers are able to add increases the overall accuracy of the model they use to connect those IDs to real-world people. These data sets don’t take a lot of resources to store while they gain monetary and strategic value over time, so they will be duplicated across many actors.

If all they were getting access to were this single data point, that would not be an issue, but it’s the sum of all data points being provided to brokers that brings growing risk. This isn’t the first or last attempt to add mandatory data collection. Each time we add a mandatory data point, we’re extending the runway for brokers to get their operations off the ground. The threat actors were already headed to Roblox and Discord, but now the tools available to them are made slightly more sophisticated, increasing the chances of their success.
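A back-of-the-envelope way to see why each added vector matters, under the simplifying assumption that attributes are independent and evenly distributed (real data only approximates this): bits of identifying entropy add, so even a four-value age signal measurably sharpens a fingerprint. The bucket counts below are made-up illustrative figures.

```python
import math

# Rough model: each attribute splits the population into some number of
# buckets; assuming independence, the bits of identifying entropy add up.
def identifying_bits(bucket_counts: list[int]) -> float:
    return sum(math.log2(n) for n in bucket_counts)

# Hypothetical vectors a broker might already hold: timezone (~24 buckets),
# browser/version (~100), screen size (~50)...
without_age = [24, 100, 50]
with_age = without_age + [4]  # ...plus a four-value age bracket

print(round(identifying_bits(without_age), 1))  # ~16.9 bits
print(round(identifying_bits(with_age), 1))     # ~18.9 bits
```

Two extra bits sounds small, but each bit halves the size of the crowd a user can hide in, which is why the sum of many “harmless” data points is the real risk.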
Providing false data for your age would contribute to reducing the reliability of the data for data brokers, but I believe it would take collective action to make this significant. Most people are going to provide accurate data, so the number of people trying to poison it is low enough that the brokers still get good data, along with a new data point showing who wants to poison broker data.
I separate the legal effects from real-world effects. Online devices are exposed to all jurisdictions worldwide at once. Laws in those jurisdictions are subject to constant change and interpretation, while the data can move between jurisdictions in a moment. Data brokers accept the risk of breaking laws when the risk/reward calculation looks favorable to them, the same as publicly traded corporations do. This is the same reason they will continue to collect the data of minors even if the law tells them not to. It just takes one event for a targeted individual to have their life changed forever. The law may try to punish the broker, but rarely will it restore the victim. State and other large actors are going to collect the data regardless of what the law says. They can fall back on differing interpretations, employee incompetence claims, fall guys, or just saying “big oops” if they’re ever caught.
Friend, thank you for the dialogue as well. You’re getting downvoted because the votes reflect our community’s emotions on the topic, regardless of the quality or relevance of the comment.
Honestly, I re-read the legislation, and while I’m still not convinced something like this is a bad idea in principle, all the specifics are.
Like, ultimately, it’s a user-set flag, stored locally, and it would provide users more choice in content filtering.
Most people are going to provide accurate data, so the number of people trying to poison it is low enough that the brokers still get good data, along with a new data point showing who wants to poison broker data.
You’re right, and the design of this law basically ensures that. I was imagining it being implemented (at least in the user-facing UI) as a dropdown showing the four provided age brackets. Instead, it is required to be a numeric or date-of-birth input, seemingly without allowing a default value, which means users are more likely to enter accurate data. Similarly, stored age information isn’t required to use the brackets provided. This means that a lazy or immoral developer will store the exact age, rather than abstracting it as the law suggests. I had misinterpreted 1798.500(b) and thought that the suggested abstraction of age data was required.
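For comparison, a sketch of the bracket abstraction being described; the boundaries below are my reading of the four brackets discussed in this thread, not a quote from the statute:

```python
# Hedged sketch: collapsing an exact age into one of four brackets, the kind
# of abstraction the law suggests but (as noted above) does not require.
def age_bracket(age: int) -> str:
    if age < 13:
        return "under_13"
    if age < 16:
        return "13_to_15"
    if age < 18:
        return "16_to_17"
    return "18_plus"

# A developer who stores only the bracket cannot later recover the exact age.
print(age_bracket(12))  # under_13
print(age_bracket(17))  # 16_to_17
print(age_bracket(42))  # 18_plus
```

If storing the bracket (rather than the raw birthdate) were mandatory, the stored value would carry two bits of information instead of a full date of birth.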
If something like this is to be implemented, it needs to use a more abstracted format (ideally with a default value), and if it’s going to be written into law, it should be a better content filtering system than one based simply on an age metric.