I've been participating in the fediverse for about 8.5 years now, and have run infosec.exchange as well as a growing number of other fediverse services for about 7.5 of those years. While I am generally not the target of harassment, as an instance administrator and moderator I've had to deal with a very, very large amount of it. Most commonly that harassment is racism, but to be honest we get the full spectrum of bigotry here, in different proportions at different times. I am writing this because I'm tired of watching the cycle repeat itself, I'm tired of watching good people get harassed, and I'm tired of the same trove of responses that inevitably follows. If you're just in it to be mad, I recommend chalking this up to "just another white guy's opinion" and moving on to your next read.

The situation nearly always plays out like this:

A black person posts something that gets attention. The post and/or person's account clearly designates them as being black.

A horrific torrent of vile racist responses ensues.

The victim expresses frustration with the amount of harassment they receive on Mastodon/the Fediverse, often pointing out that they never had such a problem on the big, toxic commercial social media platforms. There is usually a demand for Mastodon to "fix the racism problem".

A small army of "helpful" fedi-experts jumps in with replies to point out how Mastodon provides all the tools one needs to block bad actors.

Now, more exasperated, the victim exclaims that it's not their job to keep racists in check - this was (usually) cited as a central reason for joining the fediverse in the first place!

About this time, the sea lions show up in replies to the victim, accusing them of embracing the victim role, trying to cause racial drama, and so on. After all, these sea lions are just asking questions since they don't see anything of what the victim is complaining about anywhere on the fediverse.

Lots of well-meaning white folks usually turn up about this time to shout down the sea lions and encourage people to believe the victim.

Then time passes... People forget... A few months later, the entire cycle repeats with a new victim.

Let me say that the fediverse has both a bigotry problem that tracks with what exists in society at large and a troll problem. The trolls will manifest as racist, anti-trans, anti-gay, anti-women, anti-furry, or whatever else suits their fancy when the opportunity presents itself. The trolls coordinate, cooperate, and feed off each other.

What has emerged on the fediverse, in my view, is a concentration of trolls on a certain subset of instances. Most instances do not tolerate trolls, and with some notable exceptions, trolls don't even bother joining "normal" instances any longer. There is no central authority that can prevent trolls from spinning up fediverse software on their own servers, using their own domain names, and doing their thing on the fringes. On centralized social media, people can be ejected, suspended, or banned, and unless they keep trying to make new accounts, that is the end of it.

The tools for preventing harassment on the fediverse are quite limited, and the specifics vary between types of software. For example, some software, like Pleroma/Akkoma, lets administrators filter out certain words, while Mastodon, which the vast majority of the fediverse uses, allows both instance administrators and users to block accounts and block entire domains, along with some middle-ground measures like "muting" and "limiting". These are blunt instruments.

To some extent, the concentration of trolls works in favor of instance administrators. We can block a few dozen or a few hundred domains and solve 98% of the problem. Some solutions have been implemented, such as shared block lists of "problematic" instances that people can use; however, those block lists often become polluted with the politics of their maintainers, or at least that is the perception among some administrators. Other administrators take the view that people should be free to connect with whomever they like on the fediverse, and delegate the responsibility for deciding whom to block to the user.

For this and many other reasons, we find ourselves with a very unevenly federated network of instances.

With this in mind, if we take a big step back and look at the cycle of harassment I described above, it looks like this:

A black person joins an instance that does not block many (or any) of the troll instances.

That black person makes a post that gets some traction.

Trolls on some of the problematic instances see the post, since they are not blocked by the victim's instance, and begin sending extremely offensive and harassing replies. A horrific torrent of vile racist responses ensues.

The victim expresses frustration with the amount of harassment they receive on Mastodon/the Fediverse, often pointing out that they never had such a problem on the big, toxic commercial social media platforms. There is usually a demand for Mastodon to "fix the racism problem".

Cue the sea lions. The sea lions are almost never on the same instance as the victim. And they are almost always on an instance that blocks those troll instances I mentioned earlier. As a result, the sea lions do not see the harassment. All they see is what they perceive to be someone trying to stir up trouble.

...and so on.

A major factor in your experience on the fediverse is the instance you sign up to. Despite what the folks on /r/mastodon will tell you, you won't get the same experience on every instance. Some instances are much better at keeping the garden weeded than others. If a person signs up to an instance that is not proactive about blocking trolls, they will almost certainly be exposed to the wrath of trolls. Is that the Mastodon developers' fault for not figuring out a way to more effectively block trolls through their software? Is it the instance administrator's fault for not blocking troll instances/troll accounts? Is it the victim's fault for joining an instance that doesn't block troll instances/troll accounts?

I think the ambiguity here is why we continue to see the problem repeat itself over and over - there is no obvious owner nor solution to the problem. At every step, things are working as designed. The Mastodon software allows people to participate in a federated network and gives both administrators and users tools to control and moderate who they interact with. Administrators are empowered to run their instances as they see fit, with rules of their choosing. Users can join any instance they choose. We collectively shake our fists at the sky, tacitly blame the victim, and go about our days again.

It's quite maddening to watch it happen. The fediverse prides itself as a much more civilized social media experience, providing all manner of control to the user and instance administrators, yet here we are once again wrapping up the "shaking our fist at the sky and tacitly blaming the victim" stage in this most recent episode, having learned nothing and solved nothing.

@jerry 100%.

One interesting idea I've seen floated recently is a "known-good" list(s), so a new instance can federate *only* with those on some known good list(s). Then someone joining a server can see if their server is part of the "X-approved list" and decide to join or not.

Obviously not a complete solution, but are we maybe at the size where it's a part of the picture? Make new instances prove they're good, rather than wait for them to prove they're bad?

@Crell it's antithetical to what the fediverse is intended to be, but it is a reasonable solution to this problem

@jerry Sadly, I think the preponderance of evidence suggests that a "wild west libertarian self-organizing environment" (the dream of the early-90s Internet) will devolve into a Nazi troll farm 100% of the time with absolute certainty.

It's a wonderful idea, but doomed.

The barrier to the accept-list could be low (eg, do they have a halfway decent TOS/CoC), but I don't think we have an alternative.

cf: https://peakd.com/community/@crell/why-you-can-t-just-ignore-them

@Crell @jerry I think the idea that an otherwise terrible person had like 20 years ago holds up pretty well: paying for initial access results in you having an investment in a service that encourages you to follow the rules to protect that investment. You can see this with how Something Awful has turned into a stable and mature forum with varied subforums and at least one thread for anything you can think of.

Of course the downside to that is that if the person setting the rules is terrible then the culture will be terrible and require a coup to fix, but... that seems to be a universal part of the human condition.

@teknogrot @jerry "The culture of an organization is defined as the worst behavior its leadership is willing to tolerate."

No amount of federation will change that dynamic.

@Crell I think it does change it, but not for the better. As @jerry pointed out, the nature of the fediverse can hide the behaviour from some people resulting in a de-facto tolerance of behaviour worse than the leadership (in this case again @jerry) would actually accept, while denying them the tools to do something about it.

Federation may actually not be a good idea at all for social media.

@teknogrot @Crell @jerry
Metafilter has (or had?) a $5 one-time entry fee that served the same purpose pretty well.

@Crell @jerry Jerry, firstly, thank you for the thoughtful, nuanced take. As a person who does somewhat high profile activism, I appreciate that your efforts have resulted in me experiencing very little harassment here.

The problem with having a list of "approved instances" is that it makes personal/tiny instances untenable.

This really reminds me of issues with email hosting and spam control - I run a personal email server and I have problems with providers assuming everyone is a spammer unless they have a history of sending non-spam.

How to establish that history if you can't send, though? If you're a business, you can pay protection money to certain companies that will bootstrap your reputation, but I can't afford that.

APIs for publishing opinions on other instances could help, if consumed "web of trust" style - you'd have two values: how much you trust the instance itself, and how much you trust its trust decisions. These values might be negative. I'm not sure how well this would work in practice.
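The two-value scheme could be sketched roughly like this (a minimal illustration with hypothetical data structures and names; no such API exists in Mastodon today):

```python
# Hypothetical sketch of two-value instance trust, "web of trust" style.
# our_trust[peer]: how much we trust the peer instance itself (-1.0 to 1.0).
# our_meta[peer]: how much we trust that peer's *trust decisions*.

def propagated_trust(our_trust, our_meta, peer_opinions, target):
    """Combine our direct opinion of `target` with opinions relayed by
    peers, each weighted by how much we trust that peer's judgment."""
    if target in our_trust:
        return our_trust[target]  # a local opinion always wins
    score, weight = 0.0, 0.0
    for peer, opinions in peer_opinions.items():
        meta = our_meta.get(peer, 0.0)
        if meta > 0 and target in opinions:
            score += meta * opinions[target]
            weight += meta
    return score / weight if weight else 0.0  # 0.0 = no opinion either way
```

So an instance we've never encountered inherits a (possibly negative) score from the instances whose judgment we already trust, while our own explicit decisions are never overridden.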

@Crell @jerry Meanwhile, yesterday someone went out of their way on the birdsite to tag me in a post calling me an assortment of slurs.

@Crell @jerry speaking of the birdsite, before the API got locked down, I spent a fair amount of effort building network analysis tools to proactively identify and block bigots. Turns out assholes like to follow each other.

It was deeply satisfying when news about me came out and a bunch of people who had never interacted with me and weren't on any shared blocklists were complaining about being blocked by me.

@Crell @jerry I also had an IFF (identify friend or foe) script that would pull following/follower data, compare it against my own block, mute, following, and follower lists, and compute a score.
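The scoring idea reads roughly like this (a reconstruction for illustration only; the weights and names are my assumptions, not the original script's):

```python
# Score an unknown account by how its social graph overlaps with lists
# I already curate: following people I block is a bad sign, following
# people I follow is a good one.

WEIGHTS = {"blocked": -2.0, "muted": -1.0, "following": 1.5, "follower": 0.5}

def iff_score(their_contacts, my_lists):
    """their_contacts: set of accounts the unknown account follows or is
    followed by. my_lists: category name -> set of account IDs."""
    score = 0.0
    for acct in their_contacts:
        for category, members in my_lists.items():
            if acct in members:
                score += WEIGHTS[category]
    return score  # negative = probable foe, positive = probable friend
```

A sufficiently negative score can then trigger a preemptive block, which is how "assholes like to follow each other" becomes actionable.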
@ryanc @jerry @Crell perhaps there’s a way to make this available to members so they can implement it when they sign up?
@Rickd6 @jerry @Crell It's not clear to me that it would work here. Part of the issue is it sounds like the trolls often spin up new disposable instances for trolling purposes and wouldn't have useful data.
@Crell @ryanc @jerry is it possible to ‘fight fire with fire’ in that when someone identifies that they are receiving harassment a group of individuals- obviously a prearranged group- can be contacted who will respond overwhelmingly to the harassing individual to call them out? Sounds childish when said out loud and may make them dig in further but …..
@Rickd6 @ryanc @jerry "My gang is bigger than your gang" is the approach used in a failed society.
@ryanc @Crell I used to work with someone who had this saying "the operation was a success, unfortunately the patient died". I feel like it's that sort of situation - we could indeed solve the problem by killing the patient.

@ryanc @jerry The spam analogy is very apt, I think, given Fediverse is often analogized to email.

And the wild-west-anyone-runs-anything approach is largely a failure there, too. I also used to run a personal mail server. It only worked if I proxied every message through my ISP's mail server.

A similar network-of-trust seems the only option here, give or take details.

@ryanc @jerry In the abstract sense, we're dealing with the scaling problems of the tit-for-tat experiment dynamics. Reputation-building approaches to social behavior only work when the # of actors is small enough that repeated interactions can build reputation. The Internet is vastly too big for that, just like society at large.
@Crell @jerry there are several PhD-thesis-level problems to solve here
@ryanc @jerry True dat.
@Crell @jerry My big concern with the web of trust model is that it's complicated, and has lots of nontrivial decisions to make. An effective tool would probably have to distill the decision to trust/neutral/distrust and have a standard scoring algorithm, and notify admins of conflicting data.
@Crell @jerry I do think keyword/regex filters as a quarantine/alert-admin thing would be helpful, but as mentioned upthread, part of the problem is people unknowingly joining instances that don't protect their users from harassment and not understanding why that's a problem. The guides saying "instance doesn't matter much" don't help.
@ryanc @jerry Yeah, the onboarding experience is definitely still a sore point. Like, I'd like to get my brother or the NFP I work with onto Mastodon, but I don't know what server to send them to. Mine isn't appropriate for them, mastodon.social isn't a good answer, and the alternative is... *citation needed*
@Crell @jerry Yeah, I've absolutely no idea what "general but friendly to members of frequently harassed groups" instances exist. This instance is really nice, as I've always been a hacker first and foremost. Yes, I'm queer on several dimensions and open about it, but most of the time I don't want to focus on that.

@ryanc @Crell @jerry

>„APIs for publishing opinions on other instances could help, if consumed "web of trust" style - you'd have two values, how much you trust the instance itself, and how much you trust it's trust decisions. These values might be negative. I'm not sure how well this would work in practice.”

Fediseer may be something like this (created on Threadiverse because of Lemmy spam wave): https://gui.fediseer.com

@74, Dziesiony is the first one under scrutiny. 😂
@AubreyDeLosDestinos @74 the alphabet doesn't discriminate
@74 @74, letters versus numbers. I've been reading this whole thread since this morning, and quite a few people actually have a fairly sober view of it. Setting emotions aside, it's a very interesting question: how the Fedi is being forged through growing pains.

@Crell @jerry I wonder, isn't "creating an instance" a barrier? In the sense that a new troll instance, after some time, should be blocked by most other instances and blocklists. Then they have to set up a new one, and so on, and so on.

Also, I guess new people either know people on Mastodon they trust or they go to joinmastodon.org and pick a server. That's kind of an accept-list, no? The barrier to get there is relatively low: https://joinmastodon.org/covenant

@skaphle @jerry Too low, apparently.

And with tools like Mastohost, setting up an attack instance is quite easy.

Always assume that malicious actors are willing to put in more work than benevolent actors. They're more motivated.

@Crell On #Freenet (now #Hyphanet ¹) that problem already struck 20 years ago and the solution was to add propagating visibility where visibility grows slowly while you interact and dies almost instantly once you harass. It actually works in keeping communication civil even though Freenet doesn’t only attract idealists but also horrible people.
There are PRs to enable similar systems in Mastodon:

https://github.com/mastodon/mastodon/pull/28958
https://gitlab.com/babka_net/mastodon-babka/-/merge_requests/22

¹ sad reasons: https://www.hyphanet.org/freenet-renamed-to-hyphanet.html
@jerry

@Crell The main advantage of the system in #Freenet / #Hyphanet is that gaining visibility scales worse than losing it: interactions increase it slowly, while being reported destroys it quickly - a person connected to you through propagated interactions reports them as a harasser. The report is public, which in Hyphanet makes people lose visibility for false claims, too.

Background:
https://www.draketo.de/software/decentralized-moderation

The prototype linked there (wispwot) can be linked into Mastodon with the mentioned PRs.

@jerry

@ArneBab @jerry Interesting. What's the mechanism for preventing hostile brigading, or a bunch of trolls reporting someone just to downvote them? (Where "troll" is defined as "activist I disagree with", because it's not only the racist right that does those things.)

@Crell When they do, they expose their downvoting, so they lose visibility which they had to build up over time by actual interaction.

A few years ago we had an attack on this by a group of organized Neonazis — which luckily failed. I took that chance to record a dataset of the network:
https://figshare.com/articles/dataset/The_Freenet_social_trust_graph_extracted_from_the_Web_of_Trust/4725664?file=7715467

That also shows a weakness of that system: this information must be public.

The important point: we know the attackers, so it’s possible to use this to check moderation systems.
@jerry

@ArneBab @jerry Hm, so like, it burns karma to report someone?

@Crell It burns karma to report someone *falsely*.

Because if most people agree with your reporting, then they won’t reduce your visibility for that.
@jerry

@Crell One other reason why this works pretty well in Hyphanet is that you can always create a new ID — you can always start fresh — but this also starts with low visibility.

And I think it works as moderation, because it operates solely on the value spammers and trolls care about: visibility.

You gain visibility by interaction (which is roughly gratis for humans).

You lose it when you get reported by someone who has visibility.

The strength of reporting is reduced by social distance.
@jerry
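The dynamic described above can be modeled in toy form (the constants here are illustrative assumptions, not Hyphanet's actual values):

```python
# Toy model of the visibility dynamics: interactions add visibility
# slowly; a report from someone with visibility removes it quickly,
# attenuated by social distance (each extra hop halves its strength).

INTERACTION_GAIN = 1.0
REPORT_MULTIPLIER = 10.0

def after_interaction(visibility):
    """Each interaction adds a small, fixed amount of visibility."""
    return visibility + INTERACTION_GAIN

def after_report(visibility, reporter_visibility, social_distance):
    """A report burns visibility in proportion to the reporter's own
    visibility, weakened by how socially distant the reporter is."""
    penalty = REPORT_MULTIPLIER * reporter_visibility / (2 ** (social_distance - 1))
    return max(0.0, visibility - penalty)
```

The asymmetry is the point: visibility built up over fifty interactions can be wiped out by a single report from a nearby, visible account, while the same report from a distant one only dents it.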

@Crell Conceptually this gives up the idea of a global truth about who should be visible — similar to how different domain blocks on different Mastodon instances give up this idea, but down to individual decisions, while still scaling.

That’s how it avoids the problem of centralized power.

Using it in the Fediverse could combine both concepts:
Initially see (and therefore trust) everyone on your home instance, but everyone outside it only gets visibility incrementally by interaction.
@jerry

@ArneBab @Crell @jerry This is pretty fascinating, and aligns with stuff I’ve been thinking about a lot.

How would this system cope with some person or group who needs visibility urgently? I’m thinking of marginalized groups in the midst of an emergency: police or government crackdowns, loss of connection due to nation-state interference, etc.

I suppose highly visible folks could boost the information somehow? I’m kinda thinking out loud…

@sesamecreek yes, they’ll need to get well-connected people to see them and to explicitly endorse them (set them as trusted).

Pretty similar to real life: talk to people other people trust and if they consider your cause important, they can signal-boost you.

If they signal boost falsely, they likely lose their ability to signal-boost (at least for many people) and also a lot of visibility.
@Crell @jerry

@ArneBab @Crell @jerry How would that work? That is, how would a relatively unknown person get well-connected people to see them? They can contact well-connected people despite not having “high visibility” or being well connected?

In real life, this is a problem, too. How do you talk to people other people trust?

Is there some way to establish trust quickly? (1/3)

I get that this is a hard problem. I’ve thought about it a lot, but coming from a privileged position, I’m trying to come up with some somewhat naive ideas about how it would work for minorities and marginalized folks. (2/3)
I like the system you describe a lot. It would certainly work very well for folks who have the time to be patient and establish their reputation. And, of course, any method you come up with to short-circuit the reputation system for emergencies could be exploited. Still just thinking out loud, sorry. (3/3)

@sesamecreek If you want to try how different scenarios work out, you can try the implementation in wispwot:
https://hg.sr.ht/~arnebab/wispwot/browse/HOWTO.org?rev=4d004e58c26e#L14

It operates on a simple plain text store (folders with text-files you can inspect easily) and exposes a rest interface besides being usable from the command line.

If you find scenarios that enable corrupting the calculations, those may be things to fix.

There’s already one part I fixed compared to the original in Hyphanet:
https://hg.sr.ht/~arnebab/wispwot/browse/README?rev=4d004e58c26e#L97

@Crell @jerry

@Crell also, everyone the downvoted person has already interacted with will still see them (local trust wins over propagated trust).

Measures to make this stronger would be to have stronger indicators if propagated trust disagrees strongly with your local trust — and where in the network that originates (via graph algorithms; there’s a bunch of different ones that should be cheap enough).
@jerry