I've been participating in the fediverse for about 8.5 years now, and have run infosec.exchange as well as a growing number of other fediverse services for about 7.5 of those years. While I am generally not the target of harassment, as an instance administrator and moderator, I've had to deal with a very, very large amount of it. Most commonly that harassment is racism, but to be honest we get the full spectrum of bigotry here in different proportions at different times. I am writing this because I'm tired of watching the cycle repeat itself, I'm tired of watching good people get harassed, and I'm tired of the same trove of responses that inevitably follows. If you're just in it to be mad, I recommend chalking this up to "just another white guy's opinion" and moving on to your next read.

The situation nearly always plays out like this:

A black person posts something that gets attention. The post and/or person's account clearly designates them as being black.

A horrific torrent of vile racist responses ensues.

The victim expresses frustration with the amount of harassment they receive on Mastodon/the Fediverse, often pointing out that they never had such a problem on the big, toxic commercial social media platforms. There is usually a demand for Mastodon to "fix the racism problem".

A small army of "helpful" fedi-experts jumps in with replies to point out how Mastodon provides all the tools one needs to block bad actors.

Now, more exasperated, the victim exclaims that it's not their job to keep racists in check - this was (usually) cited as a central reason for joining the fediverse in the first place!

About this time, the sea lions show up in replies to the victim, accusing them of embracing the victim role, trying to cause racial drama, and so on. After all, these sea lions are just asking questions, since they don't see any of what the victim is complaining about anywhere on the fediverse.

Lots of well-meaning white folk usually turn up about this time to shout down the sea lions and encourage people to believe the victim.

Then time passes... People forget... A few months later, the entire cycle repeats with a new victim.

Let me say that the fediverse has both a bigotry problem that tracks with what exists in society at large and a troll problem. The trolls will manifest as racist, anti-trans, anti-gay, anti-women, anti-furry, and whatever else suits their fancy when the opportunity presents itself. The trolls coordinate, cooperate, and feed off each other.

What has emerged on the fediverse, in my view, is a concentration of trolls onto a certain subset of instances. Most instances do not tolerate trolls, and with some notable exceptions, trolls don't even bother joining "normal" instances any longer. There is no central authority that can prevent trolls from spinning up fediverse software on their own servers, using their own domain names, and doing their thing on the fringes. On centralized social media, people can be ejected, suspended, or banned, and unless they keep trying to make new accounts, that is the end of it.

The tools for preventing harassment on the fediverse are quite limited, and the specifics vary by software. For example, some software, like Pleroma/Akkoma, lets administrators filter out certain words, while Mastodon, which is what the vast majority of the fediverse uses, allows both instance administrators and users to block accounts and block entire domains, along with some things in the middle like "muting" and "limiting". These are blunt instruments.
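
To make these "blunt instruments" concrete, here is a minimal sketch of the user-level controls using Mastodon's REST API. The instance URL, access token, and account ID are placeholders, and exact endpoints and parameters can vary by Mastodon version, so treat this as illustrative rather than definitive.

```python
# Illustrative sketch of Mastodon's user-level moderation tools.
# INSTANCE, TOKEN, and ACCOUNT_ID are placeholders, not real values.
import requests

INSTANCE = "https://example.social"   # your home instance (placeholder)
TOKEN = "your-access-token"           # OAuth token for your account (placeholder)
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
ACCOUNT_ID = "123456"                 # the account to act on (placeholder)

# Block a single account: you stop seeing them and they cannot follow you.
requests.post(f"{INSTANCE}/api/v1/accounts/{ACCOUNT_ID}/block", headers=HEADERS)

# Mute instead: their posts drop out of your feeds without a hard block.
requests.post(f"{INSTANCE}/api/v1/accounts/{ACCOUNT_ID}/mute", headers=HEADERS)

# Block an entire domain, for your account only.
requests.post(f"{INSTANCE}/api/v1/domain_blocks",
              headers=HEADERS, data={"domain": "troll-instance.example"})
```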

To some extent, the concentration of trolls works in the favor of instance administrators: we can block a few dozen or few hundred domains and solve 98% of the problem. There have been some solutions implemented, such as block lists of "problematic" instances that people can use; however, those block lists often become polluted with the politics of the maintainers, or at least that is the perception among some administrators. Other administrators come into this with the view that people should be free to connect with whomever they like on the fediverse, and delegate to the user the responsibility for deciding whom to block.
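
For illustration, here is a hedged sketch of what applying a shared blocklist instance-wide could look like via Mastodon's admin API on a recent version. The file name, token, and severity value are assumptions for the example, not an endorsement of any particular list.

```python
# Sketch: apply a community-maintained blocklist instance-wide via the admin API.
# The token scope, file name, and severity are assumptions for this example.
import requests

INSTANCE = "https://example.social"
ADMIN_TOKEN = "token-with-admin-domain-block-scope"   # placeholder
HEADERS = {"Authorization": f"Bearer {ADMIN_TOKEN}"}

# One domain per line; lines starting with "#" are treated as comments.
with open("shared-blocklist.txt") as f:
    domains = [line.strip() for line in f
               if line.strip() and not line.startswith("#")]

for domain in domains:
    requests.post(f"{INSTANCE}/api/v1/admin/domain_blocks",
                  headers=HEADERS,
                  data={"domain": domain, "severity": "suspend"})
```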

For this and many other reasons, we find ourselves with a very unevenly federated network of instances.

With this in mind, if we take a big step back and look at the cycle of harassment I described above, it looks like this:

A black person joins an instance that does not block many (or any) of the troll instances.

That black person makes a post that gets some traction.

Trolls on some of the problematic instances see the post, since they are not blocked by the victim's instance, and begin sending extremely offensive and harassing replies. A horrific torrent of vile racist responses ensues.

The victim expresses frustration with the amount of harassment they receive on Mastodon/the Fediverse, often pointing out that they never had such a problem on the big, toxic commercial social media platforms. There is usually a demand for Mastodon to "fix the racism problem".

Cue the sea lions. The sea lions are almost never on the same instance as the victim. And they are almost always on an instance that blocks those troll instances I mentioned earlier. As a result, the sea lions do not see the harassment. All they see is what they perceive to be someone trying to stir up trouble.

...and so on.

A major factor in your experience on the fediverse is the instance you sign up to. Despite what the folks on /r/mastodon will tell you, you won't get the same experience on every instance. Some instances are much better at keeping the garden weeded than others. If a person signs up to an instance that is not proactive about blocking trolls, they will almost certainly be exposed to the wrath of trolls. Is that the Mastodon developers' fault for not figuring out a way to more effectively block trolls through their software? Is it the instance administrator's fault for not blocking troll instances/troll accounts? Is it the victim's fault for joining an instance that doesn't block troll instances/troll accounts?

I think the ambiguity here is why we continue to see the problem repeat itself over and over - there is no obvious owner nor solution to the problem. At every step, things are working as designed. The Mastodon software allows people to participate in a federated network and gives both administrators and users tools to control and moderate who they interact with. Administrators are empowered to run their instances as they see fit, with rules of their choosing. Users can join any instance they choose. We collectively shake our fists at the sky, tacitly blame the victim, and go about our days again.

It's quite maddening to watch it happen. The fediverse prides itself on being a much more civilized social media experience, providing all manner of control to users and instance administrators, yet here we are once again wrapping up the "shaking our fist at the sky and tacitly blaming the victim" stage in this most recent episode, having learned nothing and solved nothing.

@jerry 100%.

One interesting idea I've seen floated recently is a "known-good" list(s), so a new instance can federate *only* with those on some known good list(s). Then someone joining a server can see if their server is part of the "X-approved list" and decide to join or not.

Obviously not a complete solution, but are we maybe at the size where it's a part of the picture? Make new instances prove they're good, rather than wait for them to prove they're bad?
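
A minimal sketch of the allowlist idea, with a made-up list and function name: inbound federation is accepted only from instances that already appear on a vetted "known-good" list, rather than rejected instance by instance after the fact.

```python
# Hypothetical allowlist-only federation check (names and domains invented).
KNOWN_GOOD = {"example.social", "another-good.example", "community-run.example"}

def accept_activity(sender_domain: str) -> bool:
    """Accept inbound activities only from instances on the vetted allow-list."""
    return sender_domain in KNOWN_GOOD

print(accept_activity("example.social"))      # True
print(accept_activity("troll-farm.example"))  # False
```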

@Crell it's antithetical to what the fediverse is intended to be, but it is a reasonable solution to this problem

@jerry Sadly, I think the preponderance of evidence suggests that a "wild west libertarian self-organizing environment" (the dream of the early-90s Internet) will devolve into a Nazi troll farm 100% of the time with absolute certainty.

It's a wonderful idea, but doomed.

The barrier to the accept-list could be low (eg, do they have a halfway decent TOS/CoC), but I don't think we have an alternative.

cf: https://peakd.com/community/@crell/why-you-can-t-just-ignore-them

@Crell On #Freenet (now #Hyphanet ¹) that problem already struck 20 years ago and the solution was to add propagating visibility where visibility grows slowly while you interact and dies almost instantly once you harass. It actually works in keeping communication civil even though Freenet doesn’t only attract idealists but also horrible people.
There are PR’s to enable similar systems in Mastodon:

https://github.com/mastodon/mastodon/pull/28958
https://gitlab.com/babka_net/mastodon-babka/-/merge_requests/22

¹ sad reasons: https://www.hyphanet.org/freenet-renamed-to-hyphanet.html
@jerry

@Crell The main advantage of the system in #Freenet / #Hyphanet is that visibility scales worse than losing it: interactions increase it slowly, being reported destroys it quickly: a person connected to you by propagating interaction reports them as harasser. This report is public, which works in Hyphanet to make people lose visibility for false claims, too.

Background:
https://www.draketo.de/software/decentralized-moderation

The prototype linked there (wispwot) can be linked into Mastodon with the mentioned PRs.

@jerry
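
A toy illustration of the asymmetry described above, not the Hyphanet/wispwot implementation: interactions raise visibility slowly and linearly, while a report from an account that is itself visible cuts it down sharply, and the report is recorded publicly. All constants and names here are invented.

```python
# Toy model (not Hyphanet/wispwot code) of asymmetric visibility:
# slow gain per interaction, sharp loss when reported by a visible account.

INTERACTION_GAIN = 1    # small, linear gain per interaction (invented constant)
REPORT_PENALTY = 25     # multiplied by the reporter's own visibility (invented)

class Account:
    def __init__(self, name):
        self.name = name
        self.visibility = 0
        self.public_reports = []   # reports are visible to everyone

def interact(account):
    account.visibility += INTERACTION_GAIN

def report(reporter, target, reason):
    # The penalty scales with how visible the reporter already is.
    target.visibility -= REPORT_PENALTY * max(reporter.visibility, 0)
    target.public_reports.append((reporter.name, reason))

troll = Account("troll")
for _ in range(50):
    interact(troll)           # fifty interactions -> visibility 50

witness = Account("witness")
for _ in range(10):
    interact(witness)         # an established, visible account

report(witness, troll, "harassment")
print(troll.visibility)       # 50 - 25 * 10 = -200: effectively invisible
```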

@ArneBab @jerry Interesting. What's the mechanism for preventing hostile brigading, or a bunch of trolls reporting someone just to downvote them? (Where "troll" is defined as "activist I disagree with", because it's not only the racist right that does those things.)

@Crell When they do, they expose their downvoting, so they lose visibility which they had to build up over time by actual interaction.

A few years ago we had an attack on this by a group of organized Neonazis — which luckily failed. I took that chance to record a dataset of the network:
https://figshare.com/articles/dataset/The_Freenet_social_trust_graph_extracted_from_the_Web_of_Trust/4725664?file=7715467

That also shows a weakness of that system: this information must be public.

The important point: we know the attackers, so it’s possible to use this to check moderation systems.
@jerry

@ArneBab @jerry Hm, so like, it burns karma to report someone?

@Crell It burns karma to report someone *falsely*.

Because if most people agree with your reporting, then they won’t reduce your visibility for that.
@jerry

@Crell One other reason why this works pretty well in Hyphanet is that you can always create a new ID — you can always start fresh — but this also starts with low visibility.

And I think it works as moderation, because it operates solely on the value spammers and trolls care about: visibility.

You gain visibility by interaction (which is roughly gratis for humans).

You lose it when you get reported by someone who has visibility.

The strength of reporting is reduced by social distance.
@jerry
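
Here is a sketch of one plausible way to reduce report strength by social distance: halve the weight for each hop between the viewer and the reporter in the interaction graph. This is my own simplification for illustration, not wispwot's actual algorithm.

```python
# Sketch (my simplification, not wispwot's algorithm): a report counts fully for
# direct contacts of the reporter and fades with each additional hop.
from collections import deque

def social_distance(graph, start, goal):
    """Breadth-first search over the interaction graph; returns hop count or None."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None

def report_weight(graph, viewer, reporter, base_weight=16):
    """How strongly the reporter's report counts from this viewer's perspective."""
    dist = social_distance(graph, viewer, reporter)
    if dist is None:
        return 0                     # no path: the report carries no weight for you
    return base_weight / (2 ** dist)

# Example interaction graph: alice <-> bob <-> carol
graph = {"alice": ["bob"], "bob": ["alice", "carol"], "carol": ["bob"]}
print(report_weight(graph, "alice", "alice"))  # 16.0 (your own report)
print(report_weight(graph, "alice", "bob"))    # 8.0  (one hop away)
print(report_weight(graph, "alice", "carol"))  # 4.0  (two hops away)
```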

@Crell Conceptually this gives up the idea of a global truth about who should be visible — similar to how different domain blocks on different Mastodon instances give up this idea, but down to individual decisions, while still scaling.

That’s how it avoids the problem of centralized power.

Using it in the Fediverse could combine both concepts:
Initially see (and therefore trust) everyone on your home instance, but everyone outside it only gets visibility incrementally by interaction.
@jerry
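
A small sketch of how that combination might look, purely as an assumption rather than an existing Mastodon feature: accounts on the home instance start fully visible, remote accounts start at zero and earn visibility with each interaction.

```python
# Assumed combination of instance trust and propagating visibility (not a real feature):
# local accounts start at full visibility, remote accounts earn it incrementally.

HOME_INSTANCE = "example.social"   # placeholder
FULL_VISIBILITY = 100
INTERACTION_GAIN = 1

def initial_visibility(handle):
    """Full trust for accounts on the home instance, none for remote ones."""
    domain = handle.rsplit("@", 1)[-1]
    return FULL_VISIBILITY if domain == HOME_INSTANCE else 0

visibility = {}

def on_interaction(handle):
    current = visibility.get(handle, initial_visibility(handle))
    visibility[handle] = min(current + INTERACTION_GAIN, FULL_VISIBILITY)

on_interaction("friend@example.social")      # stays at the local maximum of 100
on_interaction("newcomer@far-away.example")  # starts at 0, now at 1
print(visibility)
```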

@ArneBab @Crell @jerry This is pretty fascinating, and aligns with stuff I’ve been thinking about a lot.

How would this system cope with some person or group who needs visibility urgently? I’m thinking of marginalized groups in the midst of an emergency: police or government crackdowns, loss of connection due to nation-state interference, etc.

I suppose highly visible folks could boost the information somehow? I’m kinda thinking out loud…

@sesamecreek yes, they’ll need to get well-connected people to see them and to explicitly endorse them (set them as trusted).

Pretty similar to real life: talk to people other people trust and if they consider your cause important, they can signal-boost you.

If they signal boost falsely, they likely lose their ability to signal-boost (at least for many people) and also a lot of visibility.
@Crell @jerry
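
A toy sketch of an endorsement along those lines, my own illustration rather than anything in Hyphanet or Mastodon: endorsing passes on a share of the endorser's visibility, and if the community later reports the endorsed account, the endorser loses a chunk of their own standing.

```python
# Toy endorsement model (invented numbers): boosting someone stakes your own visibility.

ENDORSE_SHARE = 0.5   # fraction of the endorser's visibility passed on
BACKLASH = 0.5        # fraction the endorser forfeits if the endorsement was false

visibility = {"well_connected": 80, "newcomer": 2}
endorsements = {}     # endorsed -> endorser

def endorse(endorser, endorsed):
    visibility[endorsed] = visibility.get(endorsed, 0) + ENDORSE_SHARE * visibility[endorser]
    endorsements[endorsed] = endorser

def community_report(endorsed):
    # A false endorsement costs the endorser a large chunk of their own visibility.
    visibility[endorsed] = 0
    endorser = endorsements.get(endorsed)
    if endorser:
        visibility[endorser] *= (1 - BACKLASH)

endorse("well_connected", "newcomer")   # newcomer jumps from 2 to 42
community_report("newcomer")            # well_connected drops from 80 to 40
print(visibility)
```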

@ArneBab @Crell @jerry How would that work? That is, how would a relatively unknown person get well-connected people to see them? They can contact well-connected people despite not having “high visibility” or being well connected?

In real life, this is a problem, too. How do you talk to people other people trust?

Is there some way to establish trust quickly? (1/3)

I get that this is a hard problem. I’ve thought about it a lot, but coming from a privileged position, I’m trying to come up with some somewhat naive ideas about how it would work for minorities and marginalized folks. (2/3)
I like the system you describe a lot. It would certainly work very well for folks who have the time to be patient and establish their reputation. And, of course, any method you come up with to short-circuit the reputation system for emergencies could be exploited. Still just thinking out loud, sorry. (3/3)

@sesamecreek If you want to try how different scenarios work out, you can try the implementation in wispwot:
https://hg.sr.ht/~arnebab/wispwot/browse/HOWTO.org?rev=4d004e58c26e#L14

It operates on a simple plain text store (folders with text-files you can inspect easily) and exposes a rest interface besides being usable from the command line.

If you find scenarios that enable corrupting the calculations, those may be things to fix.

There’s already one part I fixed compared to the original in Hyphanet:
https://hg.sr.ht/~arnebab/wispwot/browse/README?rev=4d004e58c26e#L97

@Crell @jerry

@Crell also, everyone the downvoted person already interacted with will still see them (local trust wins over propagated trust).

Measures to make this stronger would be to have stronger indicators if propagated trust disagrees strongly with your local trust — and where in the network that originates (via graph algorithms; there’s a bunch of different ones that should be cheap enough).
@jerry
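
Finally, a sketch of the "local trust wins" rule and the disagreement indicator suggested above. This is my own reading of the idea, not the wispwot code, and the threshold is invented.

```python
# Sketch (invented threshold, not wispwot code): local trust overrides propagated
# trust, and a large disagreement between the two is flagged for a closer look.

DISAGREEMENT_THRESHOLD = 50

def effective_trust(local, propagated):
    """Local trust wins whenever the user has interacted with the account directly."""
    return local if local is not None else propagated

def disagreement_flag(local, propagated):
    """Flag accounts where your own experience strongly contradicts the network's view."""
    if local is None:
        return False
    return abs(local - propagated) >= DISAGREEMENT_THRESHOLD

# Someone you interact with regularly, but whom the wider network downvotes:
local, propagated = 75, -25
print(effective_trust(local, propagated))    # 75: you still see them
print(disagreement_flag(local, propagated))  # True: worth checking where the downvotes come from
```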