I've been participating in the fediverse for about 8.5 years now, and have run infosec.exchange as well as a growing number of other fediverse services for about 7.5 of those years. While I am generally not the target of harassment, as an instance administrator and moderator, I've had to deal with a very, very large amount of it. Most commonly that harassment is racism, but to be honest we get the full spectrum of bigotry here in different proportions at different times. I am writing this because I'm tired of watching the cycle repeat itself, I'm tired of watching good people get harassed, and I'm tired of the same trove of responses that inevitably follows. If you're just in it to be mad, I recommend chalking this up to "just another white guy's opinion" and moving on to your next read.

The situation nearly always plays out like this:

A black person posts something that gets attention. The post and/or person's account clearly designates them as being black.

A horrific torrent of vile racist responses ensues.

The victim expresses frustration with the amount of harassment they receive on Mastodon/the Fediverse, often pointing out that they never had such a problem on the big, toxic commercial social media platforms. There is usually a demand for Mastodon to "fix the racism problem".

A small army of "helpful" fedi-experts jumps in with replies to point out how Mastodon provides all the tools one needs to block bad actors.

Now, more exasperated, the victim exclaims that it's not their job to keep racists in check - not having to do so was (usually) cited as a central reason for joining the fediverse in the first place!

About this time, the sea lions show up in the victim's replies, accusing them of embracing the victim role, trying to stir up racial drama, and so on. After all, these sea lions are "just asking questions", since they don't see any of what the victim is complaining about anywhere on the fediverse.

Lots of well-meaning white folks usually turn up about this time to shout down the sea lions and encourage people to believe the victim.

Then time passes... People forget... A few months later, the entire cycle repeats with a new victim.

Let me say that the fediverse has both a bigotry problem that tracks with what exists in society at large and a troll problem. The trolls will manifest as racist, anti-trans, anti-gay, anti-women, anti-furry, and whatever else suits their fancy when the opportunity presents itself. The trolls coordinate, cooperate, and feed off each other.

What has emerged, in my view, on the fediverse is a concentration of trolls onto a certain subset of instances. Most instances do not tolerate trolls, and with some notable exceptions, trolls don't even bother joining "normal" instances any longer. There is no central authority that can prevent trolls from spinning up fediverse software on their own servers, using their own domain names, and doing their thing on the fringes. On centralized social media, people can be ejected, suspended, or banned, and unless they keep trying to make new accounts, that is the end of it.

The tools for preventing harassment on the fediverse are quite limited, and the specifics vary by software. For example, some software, like Pleroma/Akkoma, lets administrators filter out certain words, while Mastodon, which is what the vast majority of the fediverse uses, allows both instance administrators and users to block accounts and block entire domains, along with some things in the middle like "muting" and "limiting". These are blunt instruments.

To some extent, the concentration of trolls works in instance administrators' favor: we can block a few dozen or few hundred domains and solve 98% of the problem. Some solutions have been implemented, such as shared block lists of "problematic" instances, but those lists often become polluted with the politics of their maintainers, or at least that is the perception among some administrators. Other administrators take the view that people should be free to connect with whomever they like on the fediverse, and delegate the responsibility for deciding whom to block to the user.

For this and many other reasons, we find ourselves with a very unevenly federated network of instances.

With this in mind, if we take a big step back and look at the cycle of harassment I described above, it looks like this:

A black person joins an instance that does not block many (or any) of the troll instances.

That black person makes a post that gets some traction.

Trolls on some of the problematic instances see the post, since they are not blocked by the victim's instance, and begin sending extremely offensive and harassing replies. A horrific torrent of vile racist responses ensues.

The victim expresses frustration with the amount of harassment they receive on Mastodon/the Fediverse, often pointing out that they never had such a problem on the big, toxic commercial social media platforms. There is usually a demand for Mastodon to "fix the racism problem".

Cue the sea lions. The sea lions are almost never on the same instance as the victim. And they are almost always on an instance that blocks those troll instances I mentioned earlier. As a result, the sea lions do not see the harassment. All they see is what they perceive to be someone trying to stir up trouble.

...and so on.

A major factor in your experience on the fediverse has to do with the instance you sign up to. Despite what the folks on /r/mastodon will tell you, you won't get the same experience on every instance. Some instances are much better at keeping the garden weeded than others. If a person signs up to an instance that is not proactive about blocking trolls, they will almost certainly be exposed to the wrath of trolls. Is that the Mastodon developers' fault for not figuring out a way to more effectively block trolls through their software? Is it the instance administrator's fault for not blocking troll instances/troll accounts? Is it the victim's fault for joining an instance that doesn't block troll instances/troll accounts?

I think the ambiguity here is why we continue to see the problem repeat itself over and over - there is no obvious owner nor solution to the problem. At every step, things are working as designed. The Mastodon software allows people to participate in a federated network and gives both administrators and users tools to control and moderate who they interact with. Administrators are empowered to run their instances as they see fit, with rules of their choosing. Users can join any instance they choose. We collectively shake our fists at the sky, tacitly blame the victim, and go about our days again.

It's quite maddening to watch it happen. The fediverse prides itself as a much more civilized social media experience, providing all manner of control to the user and instance administrators, yet here we are once again wrapping up the "shaking our fist at the sky and tacitly blaming the victim" stage in this most recent episode, having learned nothing and solved nothing.

@jerry 100%.

One interesting idea I've seen floated recently is a "known-good" list(s), so a new instance can federate *only* with those on some known good list(s). Then someone joining a server can see if their server is part of the "X-approved list" and decide to join or not.

Obviously not a complete solution, but are we maybe at the size where it's a part of the picture? Make new instances prove they're good, rather than wait for them to prove they're bad?

@Crell it's antithetical to what the fediverse is intended to be, but it is a reasonable solution to this problem

@jerry @Crell

I really appreciate your top post - it clarified a lot for me.

I'm a total noob to the Fediverse, so I don't know what core tenet goes against using allow lists as opposed to deny lists. Is there an easy answer you can give me?

@jztusk @Crell I think this reply is a very good example of why that would be a problem: https://mk.aleteoryx.me/notes/9wexilu5kwnb05ot

Basically, the fediverse is premised on the idea of many people running their own personal instance, and in adopting an allow-list model, we effectively make it difficult or impossible for these individual instances to participate.

Frog Dorothy Haze (Powered by a DFC-72-F chassis) (@admin)

@[email protected] @[email protected] this is problematic for anyone like me, who hosts a personal instance. it would be an obscene increase in the barrier-to-entry RE: @[email protected] 100%. One interesting idea I've seen floated recently is a "known-good" list(s), so a new instance can federate *only* with those on some known good list(s). Then someone joining a server can see if their server is part of the "X-approved list" and decide to join or not. Obviously not a complete solution, but are we maybe at the size where it's a part of the picture? Make new instances prove they're good, rather than wait for them to prove they're bad? RE: ...


@jerry @jztusk @Crell

Why not both? Some servers can run open federation, some can run allowlist-only, some can run in quarantine-first mode, and over time I'm sure we'll see shared lists, reputation signals, and trusted upstream servers to help manage the onboarding/allowing.

"Disallow all, but allow all servers already allowed by x, y and z" is one way to approach.

Almost none of the asks I've seen are either/or propositions, they are generally admin options to enable or not.

@jaz @jerry @jztusk @Crell
@jsit was talking about this the other day, and I keep feeling like I shot this idea down too soon...

https://social.coop/@jsit/112876102135328617

...but maybe that would be a good plan for some of these new and small instances, especially the ones that are trying to be safe spaces for minority groups. Get some momentum going, get some connections with other servers, get some contact with other server staffs, maybe eventually open it up.

Yeah, I think a federated whitelist would be a good idea.

Still, I'm looking at how many of these groups making block lists purport to be going after bigotry and harassment or whatever, but then you see them blocking a bunch of queer instances or black instances or something, and I wonder who might actually be trusted with this sort of thing. I can even imagine TechHub and Infosec showing up because someone with list access doesn't like the "techbros" or whatever...

Jay (@[email protected])

I'm beginning to wonder if the only solution to hate speech and harassment on the Fediverse might be allowlist-only instances.


@jaz @jerry @jztusk @Crell @jsit
It also occurs to me that this can't be run by the instances using it, because they won't be able to see new instances to whitelist, which means you're going to need a few large servers to be the "Canary in the coal mine" for these instances. I feel like Tech Hub, with our somewhat squeamish block policies, could be a really useful server here, and I'd be happy to help maintain such a list.

What I think we need is some framework for how this list is put together and maintained, without too much overhead. We would need to account for the fact that such a list needs to be absolutely huge, and that while it should prioritize safety, there is an ethical obligation to get as many servers on it as possible.

As I told Jsit, it might be useful for someone to make this list now, just so we can see what it looks like.

@Raccoon @jaz @jerry @jztusk @Crell The refrain of “allowlists/blocklists are bad because it means you won’t hear from me” misses the point: This is why they are GOOD.

People don’t have a “right” to talk to your instance, this is a privilege that should be EARNED. And the protection of vulnerable people on social media is more important than my ability to make sure they can see my dumb posts.

This is not antithetical to the Fediverse. Choosing which instances to federate with is central to it!

@Raccoon @jaz @jerry @jztusk @Crell Because I am not among a group that is a frequent target of abuse, I have the privilege of enjoying the benefits of being on an “open” instance without having to worry about the drawbacks. I will probably always prefer to be on an instance that is blocklist-based instead of allowlist-based. But many people do not have that privilege.

@jsit @jaz @jerry @jztusk @Crell
> "Because I am not among a group that is a frequent target of abuse, I have the privilege of enjoying the benefits of being on an “open” instance without having to worry about the drawbacks."

But here's the flip side of that, one of the main things that makes people a bit squeamish about this: because you're not a member of a marginalized group, you haven't been on a server that has been brigaded with false reports trying to get the mainstream to block you, and then suddenly find a bunch of other marginalized groups' servers have blocked you without checking up on those reports. This is one of the things we keep seeing between queer fedi and black fedi.

What's to stop a member of one group, bigoted towards another, from getting in here and keeping servers that should be on the list off of it?

It then becomes a question of who will bell the cat: who will take on the responsibility, and thus open themselves up to abuse, of maintaining this?

@jsit @jaz @jerry @jztusk @Crell
And this post here also summarizes the big problems we've seen with FediBlock and The Bad Space.

We have people posting marginalized group instances on FediBlock, misrepresenting or exaggerating or even fabricating issues with those instances, and then suddenly finding that like 10% of the network has blocked them because no one is vetting these posts. I recently even appeared on there for attempting to vet some of those posts.

Meanwhile, every issue surrounding The Bad Space has basically turned into a timeline nightmare for its creators. Yeah, TBS has a problem with the number of instances it calls out for "racism" that no one else can find, and we could always make the argument that its maintainers could respond differently, but some people go absolutely insane about the people running it.

With a whitelist it would be even worse, because simply not including a server is doing a very real harm to its connections, and someone is going to answer for that.

@Raccoon Yes, who decides what to put on an allowlist/blocklist and what are the criteria they use continues to be a fraught problem with no simple solution.

But I was countering the claim a lot of people make that shared allowlists/blocklists in principle -- even if "perfectly curated" -- are antithetical to the Fediverse, which I think isn't true.

Some people bristle at the idea of these lists not because they think they might not be perfect, but because they want a nearly 100% open Fedi.

@jsit
I think you're talking about people who aren't in the conversation though: everyone who would be involved in this thread maintains a substantial block list, even if we have different standards for it. No one here is going to suggest a 100% open Fedi.

Our issue is the number of new and marginalized instances that are going to find a chunk of the network cut off by this sort of thing. We want new servers to be made, and we want those servers to thrive, because new servers add new life to the network, and a very important part of all of that is that good posts need to be able to spread far, wide, and fast.

The Content Must Flow.

How does one create a new marginalized instance in an environment where it is going to be cut off from instances with great content from marginalized groups for however long it takes to get on the list? How do we assure people on these new instances that more content will come, and why would they join a server that's blocked off?

@Raccoon I think maybe part of my confusion is not fully understanding how allowlists work. Can someone on a LIMITED_FEDERATION_MODE instance be *followed by* someone on a non-allowlisted instance?

For instance (heh), limited.example is in limited federation mode with only safe.example in its allowlist.

Someone on unknown.example wants to follow @ user @ limited.example. Can they do this?

#MastoAdmin #FediblockMeta

@jsit
As someone who doesn't deal with that directly, I forget that we have options like that. That is a good question, because if that's the case, it changes the nature of how disconnected these instances would be.

@Raccoon I have a test instance that I will enable limited federation on.

I would love to know if there are any big instances that do this already.

@Raccoon Yep, I can confirm. If you turn on LIMITED_FEDERATION_MODE, not only can’t you see posts from other instances, but users on other instances can't see *your* posts, either. (Unless you add their instance to your allowlist.)

#MastoAdmin
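For admins curious to try this themselves: in Mastodon, limited federation mode is switched on in the server's environment configuration, and allowed domains are then managed explicitly. This is a minimal sketch; the exact commands and management options vary by Mastodon version, and the allowlist can also be managed from the admin web interface:

```shell
# .env.production (Mastodon server configuration)
# When enabled, the server federates ONLY with explicitly allowed domains.
LIMITED_FEDERATION_MODE=true

# After restarting, domains can be added to the allowlist, e.g.
# (command availability may vary by Mastodon version):
#   bin/tootctl domains allow add safe.example
```

Everything not on the allowlist is invisible in both directions, which is exactly the behavior confirmed above.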

@jsit
This sounds like a feature request: a mode that treats all instances as Silenced unless whitelisted.
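The requested mode can be sketched as a tiny policy function. This is purely illustrative, not actual Mastodon code, and all the names here are made up; the point is only to show how "silenced unless whitelisted" differs from today's all-or-nothing limited federation mode:

```python
from enum import Enum

class Treatment(Enum):
    FULL = "full"            # normal federation
    SILENCED = "silenced"    # hidden from public timelines; follows still possible
    SUSPENDED = "suspended"  # no interaction at all

def treatment_for(domain: str, allowlist: set[str], blocklist: set[str]) -> Treatment:
    """Hypothetical silence-by-default policy: unknown servers are
    limited rather than cut off entirely."""
    if domain in blocklist:
        return Treatment.SUSPENDED
    if domain in allowlist:
        return Treatment.FULL
    # The proposed default: treat any unknown instance as Silenced,
    # instead of refusing to federate with it at all.
    return Treatment.SILENCED
```

Under this sketch, a brand-new personal instance could still follow and reply to people who opt in, while staying off everyone's public timelines until it earns a spot on the allowlist.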

@jsit @Raccoon

To put it in perspective, Truth Social, Trump's social site, is a LIMITED_FEDERATION_MODE Mastodon instance, running Mastodon 3.4.1. https://truthsocial.com/api/v1/instance