Or, given these headaches, maybe they’ll just avoid the UK entirely. That’s especially likely if they don’t have the financial resources to compete with the Big Tech incumbents (or are just nonprofits). Because there is no straightforward way to comply with the Bill as it stands. None at all. The House of Lords *has* to fix that. [14/14]
Guess what: platforms will be pressured to either be scofflaws or (more likely) to over-moderate lawful content. And to strip members of the public of their content moderation roles. Or... [13/14]
Have a scroll through Schedule 7 of the #OnlineSafetyBill. All 4-5 pages of it. And then, just for fun, pick some of its offences. Go on. At random. Read up on them. Then think how hard it would be to operate proactive content moderation for some of these (“assisting unlawful immigration”, anyone?). Think how especially hard it would be when you have a global platform of users and moderators (or if, like Reddit and Wikipedia, you rely on moderation by members of the public). [12/14]
There’s no offence under s4A *if* both persons are inside a dwelling (the same one or different ones) at the time the words are said/displayed! Oh, and s4A doesn’t apply in Scotland at all. So if you’re in Scotland, or in a dwelling, the fleshy or silicon moderator needs to consider different laws. Let’s hope the platform’s content moderation algorithm knows exactly where you and all your audience are, at all times! Privacy (and common sense) be damned. [11/14]
Going back to (3) in my earlier list: consider, for example, the Online Safety Bill’s obligation to prevent you, the user, from seeing any content amounting to use of words (…) with intent to cause alarm or distress to another person, pursuant to s4A of the Public Order Act 1986. But, dear human or robotic moderator, don’t forget the nuance! Because as any fule kno: [10/14]
This will include: (1) whether UK law is relevant to something; (2) whether it’s harmful to children in any way (unless they’ve age-gated everything – see my last thread); (3) whether it’s illegal under a list of UK laws that turn on info the platform won’t know; and (4) whether any other OSB provisions apply, e.g. the crap ones about “recognised news content”, designed to protect Jeremy Clarkson but not you (nor any non-UK organisations). [9/14]
Algorithms and human moderators will now have to study everything that’s uploaded or said, in any language (to the extent “proportionate…”), and then make impossibly hard calls on whether to leave it up (or rather, to allow UK users to see it). [8/14]
Second, for the legal scholars amongst you: recognise just what a massive step away this is from notice & takedown. While N&T is not the *only* approach to platform moderation – platforms routinely screen for CSAM, for example, or for copyright-infringing materials – the analytics and pre-emptive moderation we’re talking about here would go way, way beyond any of that. [7/14]
First, let’s pause to recognise just how patronising that is. This is the construction of a digital Nanny State, through delegation to Big Tech (and to all the other platforms the Bill applies to, from Mumsnet to Pensionersforum.co.uk; though if they’re brave, they’ll hide behind s. 9(2)’s use of the word “proportionate” – which can mean exactly what you, or rather Ofcom and courts of entirely unpredictable friendliness, will want it to mean). [6/14]
Of course, there’s always more than one way to handle risk. It’s why we allow humans to have relationships, but have police forces, healthcare and counselling in case of abuse. It’s why we walk our kids to school on icy days, but hold their hand in case it gets slippery. Etc. But DCMS and the House of Commons, somewhat amazingly, reached for the most draconian option: obligations on platforms to prevent you from *seeing things*, in case that’s an intolerable risk. [5/14]