Moderation is a hard problem.

It is an •intrinsically• hard problem. It’s not hard because of design choices or tech choices or incentives or laws or capitalism. It’s hard because humans.

Ignoring it doesn’t magically make it easy.

Grand edicts don’t magically make it easy.

AI doesn’t magically make it easy.

And federation doesn’t magically make it easy.

It’s •work•. Real work. And when that work doesn’t happen, it causes harm.

@mekkaokereke
https://hachyderm.io/@mekkaokereke/109387434218657900

Content warning: Racism, death threats, anti-Semitism, homophobia
@mekkaokereke Replies to concerns like mekka’s with anything of the form “it’s easy!” or “you just have to…” indicate that the person replying does not yet understand the problem.

Fellow software developers, does anything make you want to start punching faces faster than questions from your stakeholders that start with “Can’t you just…?”

Don’t be that person. You know that other people think your problems are small because they’re at a distance, because they don’t even understand what the problem is. This principle applies to you too.

Every time “Can’t you just” starts to come out of your mouth, that is a cue to:

1. STOP
2. repeat to yourself, “I do not understand”
3. and then listen.

Practice the habit in advance, so you’re ready in the moment. I’ve certainly found •I• have to. It is so hard to tame this unhelpful reflex! But it’s also possible.

The general principle is “respond with curiosity instead of judgement.”

I’ve found this to be life-changing.

@inthehands I've found that people generally don't listen in order to understand; they listen in order to reply.
@inthehands This is so very true. I should really get better at that. Thanks! 🌺
@Heike_Naumann I don’t think there’s a human alive who doesn’t need to work on this.
@inthehands Yes, the phrase "How hard can it be?"

@brian @inthehands What if I've already made self-driving cars and reusable rockets... it's bound to be easy then... right?

I ran an aggregator site for years and feel like most of the problems are the ones in gray areas. You can kick off the people who advocate violence and the racists - that isn't hard and could probably be automated.

At issue are the people who poison discussion more subtly, and slowly drag everything down to their level. They pull up the noise and drown out the signal.

@grahamsz @brian @inthehands One issue that admins face is where to draw the line. Most of these problems are not black and white. Sure, someone who threatens people is obviously not acceptable. But what about two people with a disagreement? When does that stop being discussion and become unacceptable behavior?

@BertL @brian @inthehands Plus often you really need to look at the totality of the interactions, which is totally overwhelming.

Take Lauren Boebert: she's repeatedly advocated against red flag laws and stoked hatred toward the Colorado LGBTQ+ community, but will surely point to her tweets condemning the most recent shooting as evidence that she's opposed to violence.

I think the notion that you can assess a post in isolation is deeply flawed, even if I once believed otherwise.

@grahamsz @BertL @brian Yes. It’s all context and nuance and judgement. Thus my calling it an intrinsically hard problem. Everything difficult you describe is part of humans and human interaction, not just a particular platform or particular product approach.

@inthehands @BertL @brian I think niche communities work well. Subreddits for baking and photography are largely troll-free because it's easier to have bright-line rules.

New sites also work well because the sheer optimism and energy of the userbase can overcome the will of the trolls, but I'm skeptical that the average hobbyist operator of a Mastodon instance will have the energy to outlast the kind of troll who makes it their full-time job.

@grahamsz @inthehands @brian Good point. As far as the Fediverse goes, the admins will silence an infected site, leaving the trolls talking to themselves if they cannot be stopped individually. I am concerned that this sudden influx of new people fleeing Twitter will bring some of Twitter's problems with them.
@grahamsz This is definitely a drawback of federation. Shared blocklists and limited scale can help, but it’s something that the larger metacommunity needs to take seriously. Small admins are going to need support: ready-to-use resources and processes, and guidance.
@inthehands @mekkaokereke I'm still new to Mastodon but my sense is the mods are incredibly swamped with the huge influx of users. Not excusing them, just saying this probably isn't their finest hour for moderation...

@homebrewer @mekkaokereke Oh, yeah, 1000%. I really feel for the folks who’ve been running quiet little Mastodon villages for years, and now are in •the thick• of it, with exiles swarming in and Nazis banging at the gates.

I have much less patience for the people saying, “What, it’s easy! Quit complaining about the harassment you’re experiencing! This is your problem!”

@inthehands @mekkaokereke Monitoring any resource is notoriously difficult because it represents a second-order public goods problem. Yes, it's absolutely necessary, but it's also costly, which creates incentives to shortcut, neglect, or just ignore monitoring altogether.

The more you can lower the costs of monitoring, the more likely you'll be successful, but that is absolutely not an easy task.

@inthehands @mekkaokereke I want to experience a communication platform that implements a web of trust filtering mechanism. I think this would make moderation much less difficult. Let me assign my follows a trust rating, 0-100%. Multiply those with their ratings of their follows, and so on. Show me only posts that there's a 25%-or-better trust chain to, and if I don't want to see something then show me the trust chain that led to it so I can break it.
@sparr Approaches like this are interesting (I’ve heard mention of some ideas about using Bayesian models along these lines), but it seems foolish to me to imagine that this makes things less difficult until/unless we have something empirical to base that on. Systems always work better when they’re hypothetical.
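For concreteness, the trust-chain idea @sparr describes could be sketched roughly like this. Everything here is illustrative: the names, the in-memory trust graph, and the 25% threshold are assumptions for a toy model, not any real federation API.

```python
import heapq

def best_trust(me: str, author: str,
               trust: dict[str, dict[str, float]]) -> tuple[float, list[str]]:
    """Find the strongest trust chain from `me` to `author`.

    `trust[a][b]` is how much `a` trusts `b`, in [0.0, 1.0].
    A chain's strength is the product of the ratings along it.
    Returns (strength, path), or (0.0, []) if no chain exists.
    """
    # Dijkstra-style search, maximizing a product instead of
    # minimizing a sum (valid because every rating is <= 1.0).
    best = {me: 1.0}
    paths = {me: [me]}
    heap = [(-1.0, me)]
    while heap:
        neg, node = heapq.heappop(heap)
        strength = -neg
        if node == author:
            return strength, paths[node]
        if strength < best.get(node, 0.0):
            continue  # stale heap entry
        for nxt, rating in trust.get(node, {}).items():
            s = strength * rating
            if s > best.get(nxt, 0.0):
                best[nxt] = s
                paths[nxt] = paths[node] + [nxt]
                heapq.heappush(heap, (-s, nxt))
    return 0.0, []

def should_show(me: str, author: str,
                trust: dict[str, dict[str, float]],
                threshold: float = 0.25) -> tuple[bool, list[str]]:
    """Show a post only if a strong-enough trust chain exists.

    Returning the chain itself lets the reader inspect (and break)
    the link that surfaced an unwanted post, as proposed above.
    """
    strength, chain = best_trust(me, author, trust)
    return strength >= threshold, chain
```

For example, if I trust alice at 90% and alice trusts bob at 50%, bob's posts reach me through a 45% chain and clear a 25% bar; the returned path tells me that alice is the link to sever if I want bob gone.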
@inthehands The closest I ever saw was on Slashdot. The default view had a score filter, with different moderation votes affecting post scores. But there was also a score bonus/malus for people you friend or "foe", two degrees out. So you could effectively mute your foes, foes of your friends, and friends of your foes, and prioritize your friends and friends of friends (and foes of foes!).
I think it was very useful and effective, but few people used it fully.
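A rough sketch of how that Slashdot-style friend/foe modifier might work, two degrees out. The +1/-1 values, the precedence of the rules, and the data shapes are guesses for illustration, not Slashdot's actual scoring.

```python
def relationship_modifier(viewer: str, author: str,
                          friends: dict[str, set],
                          foes: dict[str, set]) -> int:
    """Score bonus/malus for `author`'s posts as seen by `viewer`.

    Direct relationships win; otherwise look one more degree out:
    friends of friends and foes of foes get a bonus, while foes of
    friends and friends of foes get a malus.
    """
    if author in friends.get(viewer, set()):
        return +1   # my friend
    if author in foes.get(viewer, set()):
        return -1   # my foe
    for f in friends.get(viewer, set()):
        if author in friends.get(f, set()):
            return +1   # friend of a friend
        if author in foes.get(f, set()):
            return -1   # foe of a friend
    for e in foes.get(viewer, set()):
        if author in friends.get(e, set()):
            return -1   # friend of a foe
        if author in foes.get(e, set()):
            return +1   # foe of a foe
    return 0

def visible(post_score: int, viewer: str, author: str,
            friends: dict[str, set], foes: dict[str, set],
            threshold: int = 1) -> bool:
    """Apply the personal modifier before the viewer's score filter."""
    return post_score + relationship_modifier(viewer, author,
                                              friends, foes) >= threshold
```

The effect is what the post describes: foes (and foes of friends) sink below the default filter and are effectively muted, while friends of friends float above it.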
@inthehands @mekkaokereke I couldn't agree more. Moderation is hard, often thankless, but necessary for a healthy space. I've been on forums dating back to the dial-up BBS days in the early 1980s. I've seen the same patterns repeat over & over again. One consistent pattern is positive, solid moderation keeps places alive. The lack of it kills them in a social sense, like Lord of the Flies Online. Being inclusive involves affirmative, proactive work. I'm hopeful; I like the potential I see here.
@inthehands @mekkaokereke The problem of moderation only becomes salient when you design for its absolute absence — like an individual broadcasting tool that lets people say whatever they want with impunity. You can’t put a bandaid on the problems that result because they are deep-seated, the result of embracing an opposing value. What if we *started with* the problem of moderation and community responsibility instead? What would we build? #criticaltech

@inthehands @mekkaokereke Federation makes it harder, I think. (Example: If I block you, that works on my instance. But you can probably still see my posts on yours.)

Which is not to say that federation is bad…

@andy_twosticks Federation certainly complicates things. The “collective action” aspect for whole instances blocking other whole instances creates dynamics that may be good or bad or both, but are •fascinating• without a doubt.
@inthehands @mekkaokereke I know that’s right. I had the “pleasure” of being a moderator once; never again.
@mekkaokereke @cjcrew Hats off to all who do the work. It’s a tough job.
@inthehands @mekkaokereke It's easy to generalize about opposing Nazis, but there are a lot of moderation issues that are harder to call. In many cases, moderators may not have the cultural awareness to spot antisemitic or racist or misogynist or antimuslim tropes. For example, in the past several years there were multiple cases of people passing around actual 1930s Nazi antisemitic propaganda art claiming it was just anticapitalist or anti-elite. Moderation is hard.

@richard_merren @inthehands Agreed, moderation is super hard. And many of the cases are nuanced.

But we're failing the easy, straightforward cases too. E.g., if a toxic instance exists and explicitly and publicly says, "We exist to be as racist as possible, and to cause maximum harm to the following groups [group list]," then we should at least be able to ensure that we don't recommend new users join an instance that does zero to shield its users from that harm.

@mekkaokereke @richard_merren Yup. Nuance, scaling, and basic anti-racist good faith are distinct problems — and various instances are failing at •all three•.