How (racist/sexist/whatever) harassment on Mastodon works:

1. Harasser replies to their target's post, with the reply set to "followers only", saying the most vile stuff you can imagine.

2. All the harasser's followers join in on the harassment, posting more vile stuff.

3. Nobody but the target and the harassment crew can see the vile stuff that was said.

4. Target is traumatized. Nobody else can see why.

5. Everybody says "I don't see it so it's not happening."

https://community.hachyderm.io/blog/2024/08/12/hachyderms-introduction-to-mastodon-moderation-part-1/

Hachyderm's Introduction to Mastodon Moderation: Part 1

The first post in a series about Mastodon moderation tooling. This post focuses on context for the upcoming posts.

Hachyderm Community

@dave

I really wish there was a more sophisticated way to share moderation examples. E.g. individuals and servers could "trust" each other and then automatically share lists of reported domains and users *including* the offending posts and comments about context.

Right now the software lets you report a user to their home server, and this is almost always a terrible idea since their whole server is probably terrible too; it just makes more slur-filled posts show up.
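A minimal sketch of the kind of structured, trust-scoped report sharing described above. Every name and field here is hypothetical; nothing like this exists in Mastodon's actual API today. The key point is that the full evidence goes only to mutually trusting peers, never to the reported account's own (likely hostile) home server:

```python
from dataclasses import dataclass

@dataclass
class ModerationReport:
    """Hypothetical shared report: the decision plus the evidence behind it."""
    reported_actor: str          # e.g. "@troll@badserver.example"
    reported_domain: str         # home server of the reported account
    offending_posts: list[str]   # copies of the actual posts, as evidence
    moderator_context: str       # human-written explanation of the call
    reporting_server: str        # which server made this decision

def share_with_trusted_peers(report: ModerationReport,
                             trusted_peers: set[str]) -> dict[str, ModerationReport]:
    """Deliver the full report, evidence included, only to trusted servers."""
    return {peer: report for peer in trusted_peers}

report = ModerationReport(
    reported_actor="@troll@badserver.example",
    reported_domain="badserver.example",
    offending_posts=["<followers-only harassment reply>"],
    moderator_context="targeted harassment in followers-only replies",
    reporting_server="goodserver.example",
)
deliveries = share_with_trusted_peers(report, {"peer1.example", "peer2.example"})
```

The point of the sketch is the shape of the record, not the transport: because the offending posts and the moderator's reasoning travel with the block decision, a receiving server can audit the call instead of taking it on faith.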

@futurebird @dave the big challenge is to design it to limit the extent to which any shared moderation decision list becomes a tool for harassment and control and silencing communities
@ireneista @futurebird @dave exactly. i'm on some fuckass bluesky blocklist for a reason i still am not entirely sure of :P

@GroupNebula563 @ireneista @dave

This is why I refused to simply adopt long block lists where I couldn't look up why the names were on the list. Most of those lists were well-intentioned and probably useful for cutting down the volume of spammy, slur-filled garbage, but they were also often imprecise. I don't judge anyone for using them, since there isn't a better system.

My home server sauropods.win is on such a list because we didn't want to just block everyone on the shared spreadsheet.

@futurebird @GroupNebula563 @ireneista @dave
Those block lists have the same “vibe” as credit check agencies.
They delegate “decision making” responsibilities to some anonymous delegate without evidence, only authority: “do it because I said so!”
@futurebird @dave we have thoughts on how to do that, but it's not an easy problem

@ireneista @dave

There isn't a simple technology silver bullet here. It's about servers and people having a reputation for making good moderation calls and being reasonable.

Certain servers and groups would develop a reputation for quality and they would have the most subscribers to their block lists.

And there is the need for different levels of moderation to meet different needs.

@futurebird @dave oh absolutely. even so, though, the medium is the message... the affordances of the technology do end up steering the cultural patterns that grow around it, at least somewhat. so we do think it makes sense to think about ways in which some technologies are structurally antisocial, while others are less so
@futurebird @dave in particular some of these things are better than others at spreading malicious gossip and silencing anyone who questions it