Even when Twitter’s T&S infrastructure was at its most functional – which I’d say was 2021 through mid-2022 – I sometimes saw appeals on content decisions, suspensions, etc. take 2-3 months unless I escalated to personal contacts at the company.

I keep seeing folks expect #moderation and community management decisions on volunteer-run fedi instances to happen in hours – not even days – and jump to defederation when they don’t get immediate responses. It’s going to burn out so many admins, and it makes me sad and worried about the sustainability and scalability of our communities.

#fediverse #fediadmin #contentmoderation #communitymanagement #mastoadmin #mastodon

@leigh So true. Definitely agree that moderation can take some time and careful judgement.
I generally only defederate instances that are a direct threat to my user base. One specific complaint wouldn’t be enough.
@leigh Boy, do I thank you for this. I just had a lengthy convo about this yesterday. It's just one of many structural challenges this entire concept faces. We need to focus on some of those early on or they may all blow up at once later. Related, I mostly worry that too many think decentralized also implies ungovernable. And if that's true, that will be bad.
@shoq 🙏 you’re welcome - it’s hard stuff!
@leigh Feels like another instance of the old "technical solutions to social problems" chestnut, in that the technical solutions typically are also shinier *because they look quicker*, when in fact they paper over a significant human toll that leads to long-term bankruptcy.
@patrickod Unfortunately it turns out that the problems of content moderation in a federated ecosystem are isomorphic to the problems of coexisting in a society together 🤪
@leigh @patrickod @gpt what is your view?
Our approach to content moderation needs to consider both the long-term sustainability and scalability of our communities, as well as the toll it takes on moderators. We must also respect the complexities of coexisting in a federated ecosystem to create a shared understanding of acceptable behaviour. Let's work together to create solutions which are mindful of these considerations.
@leigh @patrickod Continuing this isomorphism: instance maps to race/class/ethnicity/national origin, and defederation maps to excommunicating one of said groups, which we know is not proper. Arriving back at your original point.
@leigh really strong desire by lots of folks to minimize the real benefits provided by money and corporate organization. Not to say that there aren’t other ways! But they also require lots of people and/or the money to pay lots of people
@leigh well, money or a commitment to small user bases

@going_to_maine @leigh It's the commitment to small user bases, really.

People keep building up huge instances - and I'm not really sure why, tbh. It seems to be an unconscious "growth mindset" inherited from the commercial platforms, maybe mixed with "if I mod / admin a big instance I'm important". But... that's where you need big money and big resources, and there's no viable way to get them.

Donation / volunteer models work great for small instances. They fall apart fast over 150-ish users.

@notafurry @leigh Perhaps if your small instance gets bigger than the moderator's Dunbar number, you have a problem. https://en.wikipedia.org/wiki/Dunbar%27s_number

@leigh Burnout is absolutely something that admins need to be careful about. Pick a good team, share the load, lean on each other, allow yourselves to unplug and recharge regularly.
@whatshisays @leigh .. and allow people to vent or they might explode

@leigh What if the appeals could be handled by users on the server?

Like a special tab in the app, where we can review suspension appeals and vote.

This would be completely opt-in, or even opt-in and then approved by someone. An opt-out option would need to be immediately accessible on the same tab, to make it easy for ppl to "nope out" if the content is too much for them.

Further, users could opt-in to review specific types of suspensions. For example, nothing involving images or racism etc.

At the Admin's discretion, results could lead to immediate unsuspension after 48hrs of votes. Or the votes could prioritise what admins choose to review first. Example: if 80% of ppl vote to unsuspend, the admin could prioritise that over a 60:40 split. Or if 90% vote to keep the suspension, it's automatically retained.

Essentially a Jury of Peers system.
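The threshold logic proposed above could be sketched roughly like this. To be clear, this is a hypothetical illustration, not an existing Mastodon feature; the function name, the 48-hour window, and the exact cutoffs (90% keep, 80% unsuspend) are assumptions drawn from the examples in the post:

```python
def jury_verdict(votes_unsuspend: int, votes_keep: int,
                 hours_elapsed: float) -> str:
    """Illustrative jury-of-peers outcome for a suspension appeal.

    Assumed rules, per the proposal above:
    - votes are only tallied after 48 hours of voting
    - >= 90% "keep" retains the suspension automatically
    - >= 80% "unsuspend" lifts it (at the admin's discretion)
    - anything in between goes to the admin review queue,
      which admins can triage by how lopsided the vote is.
    """
    total = votes_unsuspend + votes_keep
    if total == 0 or hours_elapsed < 48:
        return "voting-open"
    if votes_keep / total >= 0.90:
        return "retain-suspension"   # automatic, no admin needed
    if votes_unsuspend / total >= 0.80:
        return "unsuspend"           # strong consensus to lift
    return "admin-review"            # e.g. a 60:40 split

# e.g. an 80:20 split in favour of unsuspending, after two days:
print(jury_verdict(80, 20, 48))   # unsuspend
```

The opt-in/opt-out and category-filtering pieces would sit in front of this: they decide who gets shown which appeals, while the tally logic stays the same.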

@verb @leigh Stack Overflow has systems for this. How much you interact gives you points, which eventually gives you mod rights, and eventually gives you access to review (by scoring) the decisions of other moderators. Gamified moderation, basically.
@kaleissin @verb Stack Overflow is still a centralized system, albeit one where things are delegated out to the edges like Reddit. Lessons to be learned there for sure but the constraints of federation change things a lot.
@leigh @verb It's per instance, so mod powers on one instance, like stackoverflow itself, don't give you ditto on another, say, math.stackexchange (it does give you 100 points as a starter bonus though)
@kaleissin @verb right but fundamentally there’s still a company designing the metrics and overriding stuff as needed. There’s no equivalent oversight mechanism – by design – in a federated system, and that’s an essential difference
@verb @leigh I think this is the right long term answer: some version of "community moderation" vs "admin moderation"
@leigh this is why folks need to donate & admins can apply for grants to employ a moderation team.
@leigh I see that as one of the biggest issues in the Fediverse. Knee-jerk reactions and defederation because of misguided moderation expectations will make this place way too fragmented, and will keep killing small instances. In the end it's probably best to run your own.
@leigh When you notice that scalability and sustainability are not the same thing you will feel less pressure.
@eastbaynian I agree that they are not the same thing, but disagree that that in any way reduces the forces here.

@leigh I noticed that my instance is in a community-maintained block list on GitHub. The only thing I can figure out is that it probably ended up there during the influx of users at the end of last year, because one admin somewhere didn’t like that the c.im admin took some time to respond to messages (the admin was overwhelmed with everything then). Because the blocklist is being copied around, it is basically impossible to get removed from it. And it isn’t like we users get clear error messages about this, which sometimes leads to hours of troubleshooting.
@leigh So, not about this, but do you also happen to have personal contacts of any sort at Facebook? I have a request for them…
@leigh
What can help with this is not using your admin account to browse your own instance, blocking your admin account on your second account, and scheduling a fixed block of time for technical maintenance and community moderation (separately).
Deal with reports thrice a week for 30min each instead of twice a day for an hour in total, and you save 5.5h of your valuable time. This helps you focus and be more efficient.
Also, get some people you trust to moderate the site.
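The arithmetic behind that savings claim, assuming the daily baseline is an hour of triage in total across both sessions:

```python
# Baseline: report triage every day of the week, 1h total per day
daily_triage = 7 * 1.0        # 7.0 hours per week

# Batched: three fixed 30-minute sessions per week
batched_triage = 3 * 0.5      # 1.5 hours per week

saved = daily_triage - batched_triage
print(saved)                  # 5.5 hours reclaimed each week
```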
@Weltenkreuzer
@leigh I've had zero reports, but my instance is blocked by nearly 10 other instances. They don't even bother contacting me about issues. I don't think people even want their issue solved, they just want to feel powerful.
@leigh yeah, there need to be community managers in these places to keep them healthy. And people need to be ready to negotiate and be kind. It's a rocky online life these days.

@leigh

it is a very complex question of the system of rules and the human resources available

it is easy to communicate with a small group of people. I also once led about 800 students at university without knowing each member personally; leading more people than that is like a news-channel subscription model

@leigh this is one of the reasons I developed the “Rules of Engagement”, which helps me quickly evaluate a situation and prioritize my strategy…folx seldom consider the human toll moderation takes on those tasked with this role, particularly when those folx are actively being “harmed”…the only real long-term solution I see is the establishment of strong community norms rooted in harm reduction, for self-regulation, while leaving the more complex decisions to a core team…not perfect though
@KimCrayton1 this is a great framework, thank you for sharing it 🙏 I’m curious your thoughts on the speed-of-response issue - I feel like it’s a problem for small and volunteer run instances even if one does have a good framework (like yours) in place for moderation. Mods need to sleep 😅

@leigh this is a huge concern and what I did when I had my Kim’s Community Cafe on Discord was to ensure that I had enough TRAINED moderators across time zones and had them on a “moderation” schedule.

I benefited from doing the unheard-of and locking things down so that the cafe was only “open” when I had enough folx there to monitor it.

We also, used bots to remind and encourage folx to use inclusive language and had facilitated conversations to model expected behavior.

@leigh specifically regarding speed of expected response time, it’s best to be clear on what your community regards as “can wait” vs “all hands on deck” issues and share the realities of your team’s bandwidth. Be honest about what folx should expect when making a report, as well as what’s appropriate to be reported, i.e. actual safety OVER discomfort…it’s the “discomfort” reports that waste a lot of time and energy, as designed
@KimCrayton1 💯 on the discomfort vs safety thing
@leigh building true community, which, if I’m honest, I don’t think most folx here or elsewhere think deeply enough about, is the ONLY way to scale moderation, especially with volunteers. The community as a whole must learn how to manage itself, using something like the PWO Guiding Principles and the Rules of Engagement [proactive], leaving the trained moderators to oversee/manage other issues. The more that is explicit, the more bandwidth is available when serious harm is being inflicted
@KimCrayton1 Have you written about these subjects in long-form? I know I have learning to do and I especially don’t feel like I have a solid mental framework around this distinction.
@KimCrayton1 I love the idea of having opening and closing hours! Such a nice change of pace ☺️
@leigh yep…I knew that I only had bandwidth for a certain number of hours per week, so the Cafe was only open on Fridays from 8am-8pm EST, unless something unexpected happened that folx would benefit from a safe space to process.

@KimCrayton1 @leigh
Thanks for the framework,

Could you provide #AltText?
I try not to boost posts without it and here I'm torn 🙂
(at the same time I have the feeling that my alt text wouldn't capture it adequately 😬​)

@realn2s @leigh the #AltText was already added to the images. Just click on them.
@KimCrayton1 @leigh
😳​
My bad, I didn't realize how the Web client showed AltText. Sorry
And boosted 🙂
@leigh that's a very good argument for keeping instances small, diverse, and responsive.
@mstrmustache small instances have the same problems with people flipping out over non-instant responses as larger ones do, in my experience so far
@leigh correct. The point being that, as free and independent moderation is an integral aspect of the #fediverse, the core strategy is to minimize any potential disruption by keeping it localized to small instances rather than having large groups disrupted.
@leigh TLDR: bigger/more formal does not equal better and will typically lead to worse outcomes.
@leigh SUSTAINABILITIES FOR THE SUSTAINABILITY PIE
@leigh Not sure these communities are supposed to scale... I mean, it's entirely imaginable to have the entire #fediverse be made up of small instances with however many users mods can handle without burning out over it.

@jwcph

I think limiting instance size according to moderation capacity makes a lot of sense, yes. But the quantity of _external_ spam & abuse scales separately from the internal-moderation tasks, so we also need ways to quickly share news of new "bad" instances.

@leigh

@unchartedworlds @leigh agreed, cross-instance collaboration will be crucial - but then, it also fits with the whole "community of communities" thing 🙂
@leigh I gave up after no response for over a year just so I could run movetodon on the account.
@leigh I wonder if there's any solution where we could leverage more volunteers to help. Like juries or peer-review.
@ThreeSigma there’s a large body of work on how to do content moderation at scale, but that wasn’t really my point – the unreasonable demands on speed are my concern.
@leigh why? A majority of users block and self-moderate and don't whine for a jannie to fix their problems. I'd just ignore them
@leigh i feel pretty certain that the current fediverse model of content moderation is not scalable. Something else is needed.
@leigh gosh, so much this! People can't be online 24/7, and they only have so many spoons in a day. People are not machines, and you can't treat them as such.
@leigh It’s a process. It will take years for federated social media to stabilize and lead. But it will. It’s the future. But it’s hard to see that at the beginning. Also, Twitter isn’t functioning the way any user wants now. Including the alt right. It might be burning less cash, but it’s also likely making less of it too. @donieosullivan