This profile of me in *The New Yorker* came out really well, if I do say so myself:

https://www.newyorker.com/culture/the-new-yorker-interview/cory-doctorow-wants-you-to-know-what-computers-can-and-cant-do

@woozle Excellent bit at the end of Doctorow's New Yorker profile on content moderation:

I worry that, because of the attacker’s advantage, the people who want to break the rules are always going to be able to find ways around them, and that we’re never going to be able to make a set of rules that is comprehensive enough to forestall bad conduct. We see this all the time, right? Facebook comes up with a rule that says you can’t use racial slurs, and then racists figure out euphemisms for racial slurs. They figure out how to walk right up to the line of what’s a racial slur without being a racial slur, according to the rule book. And they can probe the defenses. They can try a bunch of different euphemisms in their alt accounts; they can see which ones get banned or blocked, and then they can pick one that they think is moderator-proof.

Meanwhile, if you’re just some normie who’s having racist invective thrown at you, you’re not doing these systematic probes—you’re just trying to live your life. And they’re sitting there trying to goad you into going over the line. And as soon as you go over the line they know chapter and verse. They know exactly what rule you’ve broken, and they complain to the mods and get you kicked off. And so you end up with committed professional trolls having the run of social media and their targets being the ones who get the brunt of bad moderation calls. Because dealing with moderation, like dealing with any system of civil justice, is a skilled, context-heavy profession. Basically, you have to be a lawyer. And, if you’re just a dude who’s trying to talk to your friends on social media, you always lose.

https://www.newyorker.com/culture/the-new-yorker-interview/cory-doctorow-wants-you-to-know-what-computers-can-and-cant-do

I think Doctorow's touching on a universal truth: any rules-based system ultimately ends up being a sort of barristered hell. It's why content moderation is so damned context-sensitive, and also why and how extremists on both sides of a divide can drive out moderates and give rise to a highly partisan shriekfest. Closely related to SSC's "Toxoplasma of Rage":

https://slatestarcodex.com/2014/12/17/the-toxoplasma-of-rage/

@pluralistic

#CoryDoctorow #NewYorker #ContentModeration #Lawyering #ToxoplasmaOfRage

@dredmorbius @woozle @pluralistic I’m hopeful that the social norms building up on the fediverse will limit this: if someone says something intolerable, don’t engage; report and block.

Over time, this should “push down” offenders, limiting their voice rather than amplifying it. Which is what we want, I think.

@leadegroot There are a few counterarguments, not all of which I agree with, but which I'll mention here:

  • Norms modelling. Telling people unambiguously "this is not OK" may change minds; if not the mind of the person the comment is directed at, then the minds of those listening in. (These discussions we're having are public; there are many silent participants.) I've done this, had it done to me, and observed it in interactions between others. I see some merits. The modelling may be followed by a block or ban. Hacker News's moderator, dang, practices this often, and his approach is instructive to study: https://news.ycombinator.com/threads?id=dang

  • Persuasion. Above and beyond norms, there's actual rational argument. My faith in its capacity has been profoundly shaken over the past decade or so...

  • Echo chambers. If two groups A & B sever all or most ties, then you end up with two separate communities with little interaction. There are those who suggest that this may not be a Bad Thing...

  • Some speech is directly threatening. This is mostly what Doctorow's passage refers to: open/public discourse tends to be dominated by the most aggressive and repressive elements. This is especially true under a major state or non-state regime of oppression. Examples of the former: say, North Korea, Iran, Syria, and Russia. Examples of the latter: narcoterrorists, racial/religious supremacists, and organised crime, as in Latin America, the United States, India, and offshore-banking locales. In practice, the distinction between state and non-state may be distinctly indistinct.

  • Specific communities may face greater threats. The rules for moderating, say, a private school's intranet discussion might be quite different from those for a service frequented by children or teens in an area strongly influenced by gang activity.

  • Disempowered groups both need and are threatened by open communications. There's a history going back millennia of slang and in-group language used to discuss issues in a way that the broader community can't understand or has difficulty following. That this might translate to the Internet is hardly surprising. Groups need to communicate, but also to protect themselves from surveillance, censorship, manipulation, and propaganda. That these needs are inherently in conflict is simply part of the landscape. A concern I've had with the Fediverse is that many people have been suggesting that it is safe, in ways that I strongly suspect it is not. It's been protected to some extent by its small scale and obscurity, and those defences are melting away like fog under a hot sun as we speak. (#AlexStamos has commented on this recently, as I mentioned a few days ago.) #BlackMastodon (and other groups) have been increasingly vocal about the abuse directed at them, and they're not the only group with this issue.

I don't think that the problem can ultimately be solved just through moderation, though that's one tool. Ultimately there need to be political, legal, institutional, and cultural defences and remedies. But moderation can be a part of that.

Put another way: all cultures have limits on free speech and on privacy, and cases in which the State can, will, and should investigate individuals, demand information, and sanction both action and speech. It's the ones that do so in a principled way, protecting the least privileged and strengthening the #CommonWeal (see my pinned toots on that topic), that seem to me to best serve their inhabitants and themselves. And when that value breaks down ... nothing can save you. Certainly not individual initiative and technological fixes.

@woozle @pluralistic



@dredmorbius @leadegroot @pluralistic

I'll just note that this is one reason... well, okay, actually two... that moderation in the fediverse works:

  • There's a much lower user-to-moderator ratio, so we can afford to use personal judgement rather than some kind of algorithmically enforceable (or perhaps quickly-evaluatable-by-humans) ruleset. (Tootcat: <800 active users, 2 active mods. I don't know how many moderators Twitspace!prelon or FB have, but at that ratio they'd have to have something on the order of a million moderators; see the back-of-the-envelope sketch after this list.)

  • Each moderator is not representing a multibillion-dollar company that controls the entire venue and has to look good for advertisers and investors (maybe that's actually 2 reasons); they are instead representing the interests of a labor of love (ish) that doesn't depend on the goodwill of anyone except its users and peer instances.

  • (okay, one more): The moderators are also known accounts that can interact on their own behalf, rather than being anonymous actors who can only read and decide.

  • Just as a convenient example, yesterday I took the time to hash out an entire discussion with a free speech troll -- giving him plenty of rope to hang himself -- before suggesting that others might want to block him (which, based on the ample evidence, they did not hesitate to do).

    He did his best, but you can't rules-lawyer your way out when a moderator is actually paying close attention to the content and context of what you're saying.
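
A quick back-of-the-envelope check on the ratio point above. The instance figures come from the post itself; the platform user counts are rough assumed round numbers, not figures from this thread:

```python
# Back-of-the-envelope: moderators a large platform would need
# at a small instance's user-to-moderator ratio.
instance_users = 800   # from the post: <800 active users
instance_mods = 2      # from the post: 2 active mods
ratio = instance_users / instance_mods  # ~400 users per moderator

# Assumed, approximate user counts for illustration only.
for platform, users in [("Twitter", 250e6), ("Facebook", 3e9)]:
    print(f"{platform}: ~{users / ratio:,.0f} moderators needed")
# Twitter: ~625,000 moderators needed
# Facebook: ~7,500,000 moderators needed
```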

    @woozle @dredmorbius @leadegroot @pluralistic For now, the growth rate of the network will test this concept. I actually think this is why no one has tried to write the tool I'm trying to write now: too much belief in human moderation. Your use case works, for now. But I can see how this is going to get much harder in the near future.

    @d3cline @dredmorbius @leadegroot @pluralistic

    I do agree that better tools and a better underlying design would make this more effective, and that they're likely to become more necessary as the network scales up.

    I do think it's an error, however, to try to design a system that doesn't involve sentient decisionmaking.

    @woozle @dredmorbius @leadegroot @pluralistic The tool I'm writing now can only produce a report for an admin to do the final work, but hopefully it will allow thresholds to be set for some automatic action. There's actually a lot of useful information in the DNS RDAP system we can use. That, and PyTorch.
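
A purely speculative sketch of what the RDAP half of such a tool might look like, not the author's actual code: the rdap.org bootstrap endpoint is a real public RDAP redirector, but the `domain_age_days` helper, the 30-day threshold, and the example domain are all assumptions for illustration, and the PyTorch scoring side isn't shown.

```python
# Hypothetical sketch: query RDAP (RFC 9083) via the rdap.org
# bootstrap redirector for a domain's registration date, and flag
# very young domains for an admin report rather than auto-acting.
from datetime import datetime, timezone

import requests


def domain_age_days(domain):
    """Return the domain's age in days via RDAP, or None if unknown."""
    resp = requests.get(f"https://rdap.org/domain/{domain}", timeout=10)
    if resp.status_code != 200:
        return None
    # RDAP responses carry an "events" list; the "registration"
    # event holds the creation date (RFC 9083).
    for event in resp.json().get("events", []):
        if event.get("eventAction") == "registration":
            registered = datetime.fromisoformat(
                event["eventDate"].replace("Z", "+00:00"))
            return (datetime.now(timezone.utc) - registered).days
    return None


# Assumed threshold: domains registered within the last 30 days get
# surfaced in the report for a human to make the final call.
REVIEW_THRESHOLD_DAYS = 30

age = domain_age_days("example.social")  # hypothetical instance domain
if age is not None and age < REVIEW_THRESHOLD_DAYS:
    print(f"flag for admin review: domain is only {age} days old")
```

Any real version would likely need to batch lookups and cache results, since public RDAP servers tend to rate-limit aggressively.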