System76 on Age Verification Laws
I and many others I know who grew up with unrestricted internet access (before and after the corporatization of the internet) were exposed to terrible shit. My parents were unusually tech savvy and shielded me from the worst of it, but even I was somewhat traumatized by graphic content I shouldn’t have seen. I personally know people with less attentive parents who spent their childhoods browsing shock/gore websites and who were repeatedly groomed and abused by pedophiles.
Honestly, I don’t really get the backlash to this legislation, beyond that it’s perhaps being applied to devices it shouldn’t be. Yes, freedom is important, but we’re talking about providing the option to limit access to mature content, not preventing kids from downloading Python or using the internet. There is a justified reason for wanting this, and this seems like the ideal way to do it.
It’s a local, safe option for reducing children’s access to things they shouldn’t access.
With the proposed measures in place, any app can know exactly which devices children are using, something no one can do now.
When you implement a feature, there’s no way in the world you can guarantee only “good people” can use it, and malicious individuals are way more interested in getting info about children than anyone else.
That doesn’t protect children; it puts them in even more danger than they are in now.
This is a compelling argument, but do you think it’s really a significant attack vector? It’s already illegal to share or leak this data, even unintentionally, and from my understanding, if you choose to set your age to a lower bracket via this process, companies sharing (also collecting? Currently unclear on this.) this data would also be breaking CCPA and possibly COPPA, and they are required to provide additional data privacy measures under the California Civil Code.
Yes, these laws will be broken, but will it be on a significant enough scale, and with reliable enough information, to be worthwhile? Since this bans the use of data from those who set their age low, wouldn’t it likely shrink the data collection pool overall, not to mention incentivizing adults to poison the data? For those who do illegally collect it anyway, is it that much of an advantage compared to just asking the user’s age upon reaching the site, as most sites currently do? Beyond that, when these sites operating illegally do leak their data, will it be a realistic attack vector? Like I said to another commenter, collating data in this way seems extremely impractical and unreliable for predators. Wouldn’t those who want to seek out children just go to existing spaces where they can connect directly, like Roblox or Discord?
I think it is one vector that can contribute to identification through fingerprinting. While the data brokers are aggregating data from this vector, they are also aggregating data from every other vector within their capability. The data sets from each vector are cross-referenced to create unique fingerprint IDs for each individual believed to be found in the data, and every vector the brokers are able to add increases the overall accuracy of the model they use to connect those IDs to real-world people. These data sets don’t take a lot of resources to store, while they gain monetary and strategic value over time, so they will be duplicated across many actors. If all the brokers were getting is this single data point, it would not be an issue; it’s the sum of all data points being provided to them that brings growing risk. This isn’t the first or last attempt to add mandatory data collection, and each time we add a mandatory data point, we’re extending the runway for brokers to get their operations off the ground. The threat actors were already headed to Roblox and Discord, but now the tools available to them are made slightly more sophisticated, increasing the chances of their success.
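To make the cross-referencing concrete, here is a minimal sketch of how two independently collected data sets sharing a common key can be merged into a more specific profile. All field names, device IDs, and records here are hypothetical illustrations, not anything from an actual broker.

```python
# Hypothetical sketch: two data "vectors" collected separately,
# keyed by the same device identifier. All values are invented.

# Vector A: age-bracket signals reported per device
age_signals = [
    {"device": "dev-1", "bracket": "13-15", "region": "CA"},
    {"device": "dev-2", "bracket": "18+",   "region": "CA"},
]

# Vector B: browsing-habit records for the same device IDs
browsing = [
    {"device": "dev-1", "top_site": "roblox.com"},
    {"device": "dev-2", "top_site": "news.example"},
]

def cross_reference(a, b, key="device"):
    """Merge records from two vectors that share a key.

    Each additional vector merged this way makes the combined
    profile more specific -- which is the fingerprinting risk."""
    index = {rec[key]: rec for rec in b}
    return [{**rec, **index[rec[key]]} for rec in a if rec[key] in index]

profiles = cross_reference(age_signals, browsing)
# Each merged profile now carries an age bracket AND browsing data,
# even though neither data set alone contained both.
```

The point of the sketch: neither data set is especially revealing on its own, but the join produces records that are, and each further vector tightens the fingerprint.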
Providing false data for your age would contribute to reducing the reliability of the data for data brokers, but I believe it would take collective action to make this significant. Most people are going to provide accurate data, so the number of people trying to poison it is low enough that the brokers still get good data, along with new data showing who wants to poison broker data.
I separate the legal effects from real-world effects. Online devices are exposed to all jurisdictions worldwide at once. Laws in those jurisdictions are subject to constant change and interpretation, while the data can move between jurisdictions in a moment. Data brokers accept the risk of breaking laws when the risk/reward calculation looks favorable to them, the same as publicly traded corporations do. This is the same reason they will continue to collect data on minors even if the law tells them not to. It just takes one event for a targeted individual to have their life changed forever; the law may try to punish the broker, but it rarely restores the victim. State and other large actors are going to collect the data regardless of what the law says. They can fall back on differing interpretations, claims of employee incompetence, fall guys, or just saying big oops if they’re ever caught.
Friend, thank you for the dialogue as well. You’re getting downvoted because the votes reflect our community’s emotions on the topic, regardless of the quality or relevance of the comment.
Honestly, I re-read the legislation, and while I’m still not convinced something like this is a bad idea, all the specifics are.
Like, ultimately, it’s a user-set flag, stored locally, and it would provide users more choice in content filtering.
“Most people are going to provide accurate data, so the number of people trying to poison it is low enough that the brokers still get good data, along with new data showing who wants to poison broker data.”
You’re right, and the design of this law basically ensures that. I was thinking it would be implemented (at least in a user-friendly UI) as a dropdown showing the four provided age brackets. Instead, it is required to be a numeric or date-of-birth input, seemingly without allowing a default value, which means users are more likely to enter accurate data. Similarly, stored age information isn’t required to use the brackets provided, which means a lazy or immoral developer will store the exact age rather than abstracting it as the law suggests. I had misinterpreted 1798.500(b) and thought the abstraction of age data was required.
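For illustration, here is a minimal sketch of the abstraction being discussed: collapsing an exact date of birth down to a coarse bracket before anything is stored or shared. The bracket boundaries below are assumed for the sake of the example, not taken from the statute’s actual text.

```python
from datetime import date

# Hypothetical four-bracket scheme; boundaries are assumed, not the
# statute's actual definitions.
BRACKETS = [
    (0, 12, "under 13"),
    (13, 15, "13-15"),
    (16, 17, "16-17"),
    (18, 200, "18+"),
]

def bracket_for(birth_date: date, today: date) -> str:
    """Abstract an exact birth date to a coarse bracket, so apps
    never see the precise age -- the abstraction the law suggests
    but (as discussed above) does not require."""
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    for low, high, label in BRACKETS:
        if low <= age <= high:
            return label
    raise ValueError("age out of range")

# An app would query the bracket, never the raw date of birth:
label = bracket_for(date(2011, 6, 1), date(2025, 1, 1))
```

Storing only the bracket (ideally with a safe default) is strictly less revealing than storing the exact age or birth date, which is the distinction the comment above draws between what the law suggests and what it actually requires.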
If something like this is to be implemented, it needs to use a more abstracted format (ideally with a default value), and if it’s going to be written into law, it should use a better system of content filtering than a simple age-based metric.