Online Safety “Duty of Care” is failing to launch in the USA? Good.
Key to the Online Safety Act was a novel idea from UK academics Lorna Woods & William Perrin: that Online Harm is like Physical Harm, so platforms, like physical venues, should owe “duties of care” to prevent harm occurring amongst their punters. In short: platforms should police user speech for “safety”, and the state should demand that they do so.
Surprise: this is fundamentally illegal in some countries. But they knew that.
The whole point of the “duty of care” approach was to knowingly attempt an end-run around the US First Amendment (and its application to platform content via Section 230), which otherwise electrically grounds the entire Internet with a huge nation, full of huge platforms, full of speech that upsets … many people, especially certain kinds of academic.
Here’s commentary on the First Amendment (via Section 230) from Baroness Kidron’s “5 Rights Foundation” which was instrumental in shaping the UK Online Safety Act. Note especially the framing that this is all a matter of “big tech profits against the little people” — we shall return to that:
With the ongoing debate in the US about Section 230 it is more important than ever that the protection of children online is put above the commercial interests of big tech firms. Section 230 is already controversial and has been criticised for giving tech firms the latitude to ignore the law and the needs of users. In Canada, the free trade agreement between the United States, Canada and Mexico saw the inclusion of Section 230-style protections for tech firms. Canada is the base for Pornhub, the largest pornography site in the world. When Pornhub was found to be monetising child rape and child sexual abuse material, the Canadian Government representative in the Senate, Senator Marc Gold, had to admit that “there are provisions in the North American Free Trade Agreement that make it difficult to deal with a company like Pornhub.”
Encouragingly, both Republicans and Democrats want change, and the US Supreme Court has criticised the way Section 230 lets online services off the hook for promoting illegal content, and for refusing to police their own platforms. While US Congress is likely to consider reform, it is not a given that the new administration will act swiftly or in the UK’s interests. The voice of the tech lobby is powerful, and it is vital that the principle of non-regression is applied to the protections for children contained in the Online Safety Bill.
https://5rightsfoundation.com/wp-content/uploads/2024/10/Ambitions_for_the_Online_Safety_Bill.pdf
Ignoring for the moment the question of whether online harms really are the same as real-world harms — in a warzone, perhaps, some tweets are as problematic as AK47s — clearly the British thinkers at the 5 Rights Foundation were confident that America could be swung round to their way of thinking. After all, the problem was “the voice of the tech lobby”, right? Not to mention the billionaire robber barons of the tech industry, failing to protect the little users. And everyone hates billionaires.
Alas, the problem was not “the voice of the tech lobby”
Unfortunately for the thinkers behind the online safety “duty of care”, the critical issues being raised were not ones of tech company profits; they were legitimate and well-worn issues of free speech. Hence this posting at ITIF from June 2024, flushing some of them out, including a straight pot-shot at the contentious American Kids Online Safety Act (KOSA; a familiar acronym?):
NetChoice has sued Arkansas, Ohio, and Utah over their social media age verification laws, arguing that they violate the First Amendment by requiring users to hand over sensitive personal information in order to access online communication tools. Meanwhile, the Free Speech Coalition, a trade association representing the adult industry, has sued Louisiana, Utah, and Texas over their adult website age verification laws for similar reasons.
…
While age-appropriate design provides a useful set of guiding principles for online services with underaged users, enforcing these standards the way the CAADCA does would cause more harm than good. Requiring companies to act in the best interests of children—or face fines up to $7,500 per affected child—is an incredibly broad and ill-defined standard that is difficult, if not impossible, for online services to perfectly follow. Additionally, as NetChoice outlined in its lawsuit, the CAADCA may also violate the First Amendment by giving the government of California power to dictate online services’ editorial decisions.
…
Finally, KOSA would establish a “duty of care” for any online service that is reasonably likely to be used by a minor. Specifically, these online services would have a duty to ensure their design features prevent and mitigate harm to minors. This provision has caused the most controversy of any part of the bill, with critics arguing that the language is vague and undefined by existing case law, which would complicate compliance. Online services may overcorrect and make it more difficult for minors, or potentially all users, to access helpful content related to mental health, suicide, addiction, eating disorders, sexuality, and more. The duty of care provision may even violate the First Amendment, as the government cannot dictate an online service’s editorial decisions, which could include design features.
https://itif.org/publications/2024/06/03/how-to-address-childrens-online-safety-in-united-states
See also the reference to CAADCA, the contentious Californian attempt to import (again) the 5 Rights Foundation-informed Age-Appropriate Design Code (AADC) thinking. Others in the UK have previously made fine critiques of the AADC, but as with the rest of the Online Safety Act, the entirety of Government, regulation, and policy wonkery has been so high on its own supply of goodliness and anticapitalism that nobody was going to pay attention to a few dissenting experts on technology and global law.
So where are we now?
All of this is roundabout background to suggest you go read this Verge piece on the current state of KOSA, also archived:
One of the biggest flashpoints for internet regulation, the Kids Online Safety Act, is poised for a revival — but possibly without the central feature that’s kept people fighting over it for the past three years.
…
[…] KOSA could return to the House of Representatives with the duty of care provision removed. The rumored changes could amount to KOSA’s core provision going out with a whimper, even as lawmakers are rumored to be planning a package of several kids safety bills soon after the government reopens from the shutdown.
Meanwhile, for some longtime opponents of KOSA, removing the duty of care could resolve a central concern they have with the bill: that it could incentivize social media companies to remove helpful and potentially lifesaving resources for kids from marginalized communities. But the overall kids safety package could make that a Pyrrhic victory, placing the gutted KOSA alongside bills with potentially similarly troubling implications for online speech.
Yeah, it looks like the lights have finally come on. I strongly suspect that KOSA remains a mostly illiberal and misconceived proposal even with Duty of Care removed… but I would welcome a few dramatic story arcs being aired, pointing out what a damnfool idea it was in the first place.
Some of them even got awards for thinking it up.
#dutyOfCare #kosa #onlineSafety #onlineSafetyAct