So, I have actually read the text of California law CA AB1043 and, honestly, I don't hate it. It requires operating systems to let you enter a birth date when you create a user account, and requires a way for software to get a coarse-grained approximation of that date: either 'over 18' or one of three age ranges for under-18s. Importantly, it doesn't require:

  • Remote attestation.
  • Tamper-proof storage of the age.
  • Any validation of the age.

In short, it's a tool for parents: it allows you to set the age of a child's account so that apps (including web browsers, which can then expose it via JavaScript or whatever) can ask questions about what features they should expose.

In a UNIX-like system, this is easy to do with a tiny amount of new userspace machinery:

  • Define four groups for the four age ranges (ideally, standardise their names!).
  • Add a /etc/user_birthdays file (or whatever the name ends up being) that stores username (or uid) and birthday pairs.
  • Add a daily cron job that checks the above file and updates group membership.
  • Modify user-add scripts / GUIs to create an entry in the above file.
  • Add a tool to create an entry in the above file for existing user accounts.
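The daily cron job above can be sketched in a few lines of Python. Everything here is an assumption for illustration: the group names, the bracket boundaries, and the file format are not specified by the law or any existing standard.

```python
import datetime

# Hypothetical group names and bracket boundaries (an assumption;
# ideally these would be standardised, as the post suggests).
BRACKETS = [
    (13, "age_under13"),
    (16, "age_13to15"),
    (18, "age_16to17"),
]
ADULT_GROUP = "age_18plus"


def bracket_group(birthday: datetime.date, today: datetime.date) -> str:
    """Map a birthday to the group naming the user's current age bracket."""
    # Subtract one if this year's birthday hasn't happened yet.
    age = today.year - birthday.year - (
        (today.month, today.day) < (birthday.month, birthday.day)
    )
    for limit, group in BRACKETS:
        if age < limit:
            return group
    return ADULT_GROUP


def update_groups(entries, today):
    """entries: iterable of (username, birthday) pairs, as parsed from the
    hypothetical /etc/user_birthdays file.  Returns the desired
    username -> bracket-group mapping; a real cron job would then
    reconcile this against /etc/group (e.g. via usermod or gpasswd)."""
    return {user: bracket_group(bday, today) for user, bday in entries}
```

The point of the sketch is that the whole mechanism is just a birthday-to-group mapping recomputed once a day; the file parsing and the actual group-database edits are ordinary distro plumbing.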

This doesn't require any kernel changes. Any process can already query the set of groups that a user is in.
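As a sketch of the querying side (the group names again being an assumed convention, not anything standardised), a process could discover its bracket with nothing more than the ordinary group APIs:

```python
import grp
import os

# Hypothetical standardised bracket group names (an assumption).
BRACKET_GROUPS = ("age_under13", "age_13to15", "age_16to17", "age_18plus")


def bracket_from_gids(gids, name_of_gid):
    """Return the bracket group among `gids`, or None if there isn't one.
    `name_of_gid` maps a gid to a group name; passing it in keeps the
    logic testable without touching the real group database."""
    for gid in gids:
        try:
            name = name_of_gid(gid)
        except KeyError:
            continue  # gid with no entry in the group database
        if name in BRACKET_GROUPS:
            return name
    return None


def current_bracket():
    """Query the calling process's supplementary groups via os.getgroups()."""
    return bracket_from_gids(os.getgroups(), lambda g: grp.getgrgid(g).gr_name)
```

Nothing here is privileged: the query works for any unmodified process, which is exactly why the design needs no kernel support.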

If a parent wants to give their child root, they can update the file and bypass the check. And that's fine, that's a parent's choice. And that's what I want.

I like this approach far more than things that require users to provide scans of passports and other toxically personal information to be able to use services. If we had this feature, then the Online Safety Act could simply require that web browsers provide a JavaScript API to query the age bracket, and that age-restricted services refuse to work unless it returned 'over 18'.

@david_chisnall

Pretty sure the law **requires** all apps (not just web browsers) to query for a signal; otherwise the dev is in violation.

I don't see the requirement being limited to apps that actually show age-inappropriate content to minors. Even a completely kid-friendly app that didn't query would violate the law.

@cava

It's not clear (and probably should be clarified), but that's not how I read 1798.501(b). I interpreted it as 'if a law requires you to do some age-related blocking, you must use this API and not something else', which seems to be a laudable intent (in particular, it prohibits asking for passports and so on for age verification). In particular, 1798.501(b)(4) seems to indicate that this was the intent.

EDIT: Note that, in my proposed groups-based approach, it would be trivial for CRT initialisation to query group membership. That would automatically meet your interpretation of the requirement (being required to query it but not being required to do anything with the data is largely indistinguishable from not being required to query it). An OS could even put these values in the ELF auxiliary vector to make sure that every application 'queries' the data, if that's how a judge would interpret it.

But also note that the law provides penalties for operating systems that do not provide the API, but no penalties for applications that do not comply. This, I presume, is because the intent is for those to be delegated by other laws that require age verification for specific purposes (some of which already exist).

@david_chisnall
I am not a legal expert, but both 1798.501(a) and (b) seem to use the same language to me. I don't find (b)(4) incompatible with requiring a request to be made.

As for the penalties, are they not set out in 1798.503(a)? There it says "a person that violates", while (b), which covers good-faith exceptions, clearly spells out OS and app store providers.

I suppose it could also be a clarification not a contradiction.

It's good that there could be a mechanism to protect FOSS developers.

@david_chisnall

Of course, my POV is one of deep suspicion about the intentions and goals of such initiatives in the first place.