I've been participating in the fediverse for about 8.5 years now, and have run infosec.exchange as well as a growing number of other fediverse services for about 7.5 of those years. While I am generally not the target of harassment, as an instance administrator and moderator I've had to deal with a very, very large amount of it. Most commonly that harassment is racism, but to be honest we get the full spectrum of bigotry here, in different proportions at different times. I am writing this because I'm tired of watching the cycle repeat itself, tired of watching good people get harassed, and tired of the same trove of responses that inevitably follows. If you're just in it to be mad, I recommend chalking this up to "just another white guy's opinion" and moving on to your next read.

The situation nearly always plays out like this:

A black person posts something that gets attention. The post and/or the person's account clearly identifies them as black.

A horrific torrent of vile racist responses ensues.

The victim expresses frustration with the amount of harassment they receive on Mastodon/the Fediverse, often pointing out that they never had such a problem on the big, toxic commercial social media platforms. There is usually a demand for Mastodon to "fix the racism problem".

A small army of "helpful" fedi-experts jumps in with replies to point out how Mastodon provides all the tools one needs to block bad actors.

Now, more exasperated, the victim exclaims that it's not their job to keep racists in check - not having to do so was (usually) cited as a central reason for joining the fediverse in the first place!

About this time, the sea lions show up in replies to the victim, accusing them of embracing the victim role, trying to cause racial drama, and so on. After all, these sea lions are just asking questions, since they don't see any of what the victim is complaining about anywhere on the fediverse.

Lots of well-meaning white folk usually turn up about this time to shout down the sea lions and encourage people to believe the victim.

Then time passes... People forget... A few months later, the entire cycle repeats with a new victim.

Let me say that the fediverse has both a bigotry problem that tracks with what exists in society at large and a troll problem. The trolls will manifest as racist, anti-trans, anti-gay, anti-women, anti-furry, and whatever else suits their fancy when the opportunity presents itself. The trolls coordinate, cooperate, and feed off each other.

What has emerged, in my view, on the fediverse is a concentration of trolls onto a certain subset of instances. Most instances do not tolerate trolls, and with some notable exceptions, trolls don't even bother joining "normal" instances any longer. There is no central authority that can prevent trolls from spinning up fediverse software on their own servers using their own domain names and doing their thing on the fringes. On centralized social media, people can be ejected, suspended, or banned, and unless they keep trying to make new accounts, that is the end of it.

The tools for preventing harassment on the fediverse are quite limited, and the specifics vary by type of software. For example, some software, like Pleroma/Akkoma, lets administrators filter out certain words, while Mastodon, which is what the vast majority of the fediverse uses, allows both instance administrators and users to block accounts and block entire domains, along with some things in the middle like "muting" and "limiting". These are blunt instruments.
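
To make the bluntness concrete, here is a minimal sketch, in Python and purely illustrative (it is not any server's actual code), of what those two mechanisms amount to. The domain and word lists are hypothetical placeholders:

```python
# A minimal, hypothetical sketch of the two "blunt instruments" above:
# domain-level blocks and word filters. Not any real server's code.

BLOCKED_DOMAINS = {"troll.example", "harassment.example"}  # hypothetical
BLOCKED_WORDS = {"slur1", "slur2"}  # hypothetical placeholders

def should_reject(activity: dict) -> bool:
    """Return True if an incoming post should be dropped entirely."""
    # Simplified: treat the actor as a user@domain handle.
    author_domain = activity["actor"].split("@")[-1]
    if author_domain in BLOCKED_DOMAINS:
        # Drops every account on the domain, good actors included.
        return True
    text = activity.get("content", "").lower()
    if any(word in text for word in BLOCKED_WORDS):
        # Also drops quoted, academic, or reclaimed uses of a word.
        return True
    return False
```

Note what is missing: no review queue, no context, no middle ground between "deliver" and "drop".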

To some extent, the concentration of trolls works in instance administrators' favor: we can block a few dozen or a few hundred domains and solve 98% of the problem. Some solutions have been implemented, such as shared block lists of "problematic" instances that people can use; however, those block lists often become polluted with the politics of their maintainers, or at least that is the perception among some administrators. Other administrators come into this with the view that people should be free to connect with whomever they like on the fediverse, and delegate the responsibility for deciding whom to block to the user.

For this and many other reasons, we find ourselves with a very unevenly federated network of instances.

With this in mind, if we take a big step back and look at the cycle of harassment I described above, it looks like this:

A black person joins an instance that does not block many (or any) of the troll instances.

That black person makes a post that gets some traction.

Trolls on some of the problematic instances see the post, since they are not blocked by the victim's instance, and begin sending extremely offensive and harassing replies. A horrific torrent of vile racist responses ensues.

The victim expresses frustration with the amount of harassment they receive on Mastodon/the Fediverse, often pointing out that they never had such a problem on the big, toxic commercial social media platforms. There is usually a demand for Mastodon to "fix the racism problem".

Cue the sea lions. The sea lions are almost never on the same instance as the victim. And they are almost always on an instance that blocks those troll instances I mentioned earlier. As a result, the sea lions do not see the harassment. All they see is what they perceive to be someone trying to stir up trouble.

...and so on.

A major factor in your experience of the fediverse is the instance you sign up to. Despite what the folks on /r/mastodon will tell you, you won't get the same experience on every instance. Some instances are much better at keeping the garden weeded than others. If a person signs up to an instance that is not proactive about blocking trolls, they will almost certainly be exposed to the wrath of trolls. Is that the Mastodon developers' fault for not figuring out a way to block trolls more effectively through their software? Is it the instance administrator's fault for not blocking troll instances/troll accounts? Is it the victim's fault for joining an instance that doesn't block troll instances/troll accounts?

I think the ambiguity here is why we continue to see the problem repeat itself over and over: there is no obvious owner of, nor solution to, the problem. At every step, things are working as designed. The Mastodon software allows people to participate in a federated network and gives both administrators and users tools to control and moderate whom they interact with. Administrators are empowered to run their instances as they see fit, with rules of their choosing. Users can join any instance they choose. We collectively shake our fists at the sky, tacitly blame the victim, and go about our days again.

It's quite maddening to watch it happen. The fediverse prides itself on being a much more civilized social media experience, providing all manner of control to users and instance administrators, yet here we are once again wrapping up the "shaking our fist at the sky and tacitly blaming the victim" stage of this most recent episode, having learned nothing and solved nothing.

NB: I am far, far from perfect, both as a person and as a moderator/administrator. I love this place we've built, and it breaks my heart to see what people go through here.

@jerry Pretty sure several people have reported this user on your instance by now. Perhaps you could contribute more than just a lot of words?

https://infosec.exchange/@SearingTruth/112879577133879921

@jerry Thanks for this! So the shorter version is that people on instances that block the nasties.... think that there aren't nasties on the platform. (People like me. Mathstodon.xyz ) Except they are. What's a better script? What's a new category of people who can say "go to this instance where the trolls can't go?" Is that possible?

@jerry As a longtime moderator and community owner, this is a brilliantly cogent description of what happens and the challenges in managing it.

I have also learned that, having been blocked, a surprisingly large number of hateful trolls will spend an inordinate amount of time and energy to continue to poison the well, even to the point that people recognize them instantly and ban them. I call these OCD trolls.

@jerry not to minimize, but I thought this was exactly the kind of problem that federation was meant to solve. People can react to this by moving to instances with stricter defederation policies, no? Or even, if they are so inclined, MAKE instances that are as strict as they need/want. Also, many instances are small enough that we know our admins by first name, and some might be willing to adjust their policies if asked.
@alberto_cottica that presumes people want to do more than just sign up/sign in and start posting, which is what almost all people I observe in this situation seem to be doing. As I mentioned in the original post, people don't want to/don't think they should have to be the ones to "take out the trash".

@jerry I see. When I signed up, I spent a bit of time researching how to choose an instance, and stayed well away from anyone with "free speech" in the description. I might need to leave my current instance depending on how the federation with Threads evolves.

In fairness, even on the birdsite my experience was quite OK. I left not because of harassment or similar, but because of mounting enshittification.

@jerry I am frequently reminded how fortunate I am that I hitched my horse to this specific corner of the fediverse
@jerry I greatly appreciate all your efforts at vile pest control, Jerry. I don’t believe anyone has the right to harass others, and blocking early and often at the most effective level is the only way to protect those who are most likely to be harassed. Keep up the good work. ✊🏽

@jerry
I agree with everything you observe, the cycle is both predictable and all too frequent.

What concerns me the most (and I will pick on Mastodon here as the predominant platform) is that the devs do not sufficiently consider safety a priority, nor seemingly a factor in their design decisions. It feels like it would take a fork to properly implement safety mechanisms to counter the apparent race to "help build engagement".

@doug @jerry I'm going to stand up for the devs here and say that they absolutely do factor in these things, just not always in the ways that are most apparent. There are a number of features that don't get added (at least as quickly as folks demand) specifically because of their impact on user privacy, safety, security, etc. (Quote toots, for example.)

There's a triad of engagement, safety, and accessibility that has to be factored into everything. Then there's the question of how those features are maintained going forward.

@vmstan @doug Additionally, I am not sure what additional safety mechanisms are missing, to be honest. Perhaps making block lists more frictionless? Allowing admins to block certain words? (Which, btw, would cause its own set of backlash for filtering out legitimate uses of some words)...
@jerry word-based filtering has many, many issues, as server blocklists do. Before building tools that reinforce this, we want those tools to be visible to users and to provide some auditing. Not doing so, in our experience, creates very bad experiences for users.
Add the fact that being a federated network makes most of these things much more difficult to implement properly.
@vmstan @doug
@jerry and this is also why we introduced the severed relationship mechanism, as well as the (still needing improvements) filtered notification system. Now that we have those, which allow more auditing and decision visibility, we will be able to add more tools, like blocklist syncing.
@vmstan @doug
@renchap mind, pleroma implemented things like MRF years ago (there's a helpful thread from ariadne conill that lists the pleroma moderation/security features); mastodon frequently ignored calls for implementation of such features or delayed them for years, and rochko managed to alienate many potential contributors by doing things like dropping already reviewed pull requests with implemented features because something irritated him.
@jerry @vmstan @doug
Ariadne Conill (@[email protected]):

"things i would like to see in mastodon that pleroma has been able to do for years:
- the ability to defederate an instance except for *explicitly approved* accounts (pleroma has supported this since the beginning of MRF in 2018)
- the ability to defederate a hashtag (pleroma has supported this since 2019)
- the ability to quarantine unknown instances until they are approved by the admin (pleroma has supported this through a combination of multiple features since 2019)"
@renchap even now you clearly prioritise ios app development over security and moderation features, and it's not the first time people bring up the sorry state of mastodon moderation tooling. @jerry @vmstan @doug
@renchap @jerry @vmstan @doug One useful tool I could think of would be a list of words that moderators could set at the instance level, flagging posts for review before they appear to users. If it's a legitimate use of the word, a simple checkbox can allow the post to appear/federate.
This won't catch all abuse, of course, but it'll at least offer some protection before harm is caused, especially for newer instances that may not have a large blocklist yet.
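
A rough sketch of that flag-for-review idea, with invented pattern names and return values (no fediverse server exposes exactly this interface):

```python
# A hypothetical sketch of keyword-based "hold for review" triage,
# as opposed to the reject-outright filters that exist today.
import re

# Hypothetical instance-level review patterns set by moderators.
REVIEW_PATTERNS = [re.compile(p, re.IGNORECASE)
                   for p in (r"\bslur1\b", r"\bslur2\b")]

def triage(post_text: str) -> str:
    """Return 'deliver' or 'hold_for_review' rather than accept/reject."""
    if any(p.search(post_text) for p in REVIEW_PATTERNS):
        # Held posts go to a moderation queue; a moderator's checkbox
        # releases legitimate uses instead of silently dropping them.
        return "hold_for_review"
    return "deliver"
```
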
@jerry @vmstan @doug Something that might help would be allowing individuals to subscribe to curated block lists, not just admins. Not sure how disruptive that would be to the fediverse.
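
For a sense of what user-level subscription could look like today, here is a sketch that applies a curated domain list through Mastodon's existing user-level domain block endpoint (POST /api/v1/domain_blocks); the instance, token, and blocklist URL are placeholders:

```python
# Sketch: subscribe one user account to a curated domain blocklist.
# The endpoint is Mastodon's real user API; the URLs are hypothetical.
import requests

INSTANCE = "https://mastodon.example"   # your home instance (placeholder)
TOKEN = "user-access-token"             # needs the write:blocks scope
BLOCKLIST_URL = "https://example.org/curated-blocklist.txt"  # hypothetical

def subscribe_to_blocklist() -> None:
    domains = requests.get(BLOCKLIST_URL, timeout=10).text.split()
    for domain in domains:
        # Blocks the domain for this user only, not the whole instance.
        requests.post(
            f"{INSTANCE}/api/v1/domain_blocks",
            headers={"Authorization": f"Bearer {TOKEN}"},
            data={"domain": domain},
            timeout=10,
        )
```

The missing piece is the "subscription" part: re-running this as the list updates, and handling removals. An instance-wide version would presumably go through the analogous admin-level domain block API instead.
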
@adamrice @jerry @vmstan @doug https://www.blockpartyapp.com/#blockpartyclassic
^ This was a winning model, back when twitter's API was open

@jerry
I think there are a lot of marginalised people - users, mods and admins - who would have a lot to say about additional safety features, and would appreciate being consulted in design and testing before they're released.
@vmstan
@jerry As we all know, Trust and Safety is hard, and a challenge is that when it fails it hits some users far harder than others. The idea that it's unduly onerous on those users to block trolls is new to me - I'm not a domain expert. But I want to hear Black voices, so their problem is, to an extent, my problem. Could a mitigation be curated block lists? I have a foggy recollection of such a facility being available on a certain legacy microblogging platform.

@jerry @vmstan @doug I've seen several people asking for some means to sync block lists cooperatively between instances. Not for post content, but for accounts etc.

Does that seem like a reasonable ask?

@draeath @jerry @vmstan @doug There is also a danger to this. I remember Twitter had this via 3rd-party clients, and the trolls would try to get their victims onto those lists. If they succeeded, the victim would then have to convince some person they don't know that they are trustworthy, even though many complaints had been made against them by the trolls, whom the maintainer doesn't know are trolls.
@yeep @draeath @jerry @vmstan @doug I believe that was 'Block Together'. I recall that being a whole thing

@jerry @vmstan @doug

I think opt out is a bad model for federation.

@jerry @vmstan @doug I've never run an instance, so I know things are much more complicated than I imagine, but it seems to me that the current model of "fully trusted unless action is taken" is never going to provide the level of safety necessary for some at a reasonable level of admin effort.

I can imagine many flavors of cooperative tools for gradually increasing trust as instances participate and show that they are worthy of that trust. In most aspects of society we don't give everybody full access by default and only limit reactively; there are scales of acceptance in order to limit damage from intentional bad actors.

@jerry @vmstan @doug incredibly, reply-gating seems to be a much-valued feature whose development has stalled on the fediverse. My impression is that developers are overcomplicating how permission mechanisms should propagate across the fediverse, when the primary objective is simply that the author "doesn't want to see" responses.

Ref to this
https://qoto.org/@mapto/112641375519863118

@vmstan
I have utmost respect for the hard work of the devs, but I read the public roadmap and see barely any feature that relates to safety or accessibility.

I don't doubt it is going to be an aspect of some of the work, but read the original post of this thread: where do we think anything is being actively worked on or planned that could alleviate the problems, for users or admins?

@jerry

@vmstan @doug @jerry Every Black person I've heard express an opinion has said that the lack of quote toots makes the Fedi *less safe* for them. And I think one of their big complaints is that when they express that opinion they are told that their opinion is incorrect.

@tacertain @vmstan @doug @jerry

I was going to say the same thing. Patronizing devs can mistakenly think they know what's best. Perhaps listen to a more diverse set of users?

Black Twitter, quoting, and white views of toxicity on Mastodon: "Does quoting really cause toxicity?" (The Nexus Of Privacy)
@vmstan @doug @jerry What's funny is your example actually illustrates the opposite of the point you're trying to make: not adding quotes (as a first-class feature, because we have it now as a harassment mechanism, it just sucks to use casually) while ignoring the things minority folks are requesting shows they're deliberately *not* listening to the needs of the people who have it worse, and that their goal is preventing the harassment of - or hostility towards - folks like themselves.
@jerry I have to say, somewhat tangentially related, I am very grateful for your moderation work. I know the vile shit that is out there on the internets, and I have no reason to believe that it wouldn't be on Mastodon either. The fact that I rarely see it means that you, directly or indirectly, have already filtered it out.

@jerry 100%.

One interesting idea I've seen floated recently is a "known-good" list (or lists), so a new instance can federate *only* with those on some known-good list. Then someone joining a server can see if their server is part of the "X-approved list" and decide to join or not.

Obviously not a complete solution, but are we maybe at the size where it's a part of the picture? Make new instances prove they're good, rather than wait for them to prove they're bad?
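
In code terms, the proposal inverts today's default. A toy sketch, with hypothetical names:

```python
# Deny-by-default federation: peers must be on a curated allowlist,
# inverting the usual "trusted until blocked" model. Names invented.
APPROVED = {"friendly.example", "well-moderated.example"}

def accept_federation(peer_domain: str) -> bool:
    # Unknown instances must prove themselves *before* federating,
    # rather than being trusted until they misbehave.
    return peer_domain in APPROVED
```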

@Crell it's antithetical to what the fediverse is intended to be, but it is a reasonable solution to this problem

@jerry Sadly, I think the preponderance of evidence suggests that a "wild west libertarian self-organizing environment" (the dream of the early-90s Internet) will devolve into a Nazi troll farm 100% of the time with absolute certainty.

It's a wonderful idea, but doomed.

The barrier to the accept-list could be low (eg, do they have a halfway decent TOS/CoC), but I don't think we have an alternative.

cf: https://peakd.com/community/@crell/why-you-can-t-just-ignore-them


@Crell @jerry I think the idea that an otherwise terrible person had like 20 years ago holds up pretty well: paying for initial access results in you having an investment in a service that encourages you to follow the rules to protect that investment. You can see this with how Something Awful has turned into a stable and mature forum with varied subforums and at least one thread for anything you can think of.

Of course the downside to that is that if the person setting the rules is terrible then the culture will be terrible and require a coup to fix, but... that seems to be a universal part of the human condition.

@teknogrot @jerry "The culture of an organization is defined as the worst behavior its leadership is willing to tolerate."

No amount of federation will change that dynamic.

@Crell I think it does change it, but not for the better. As @jerry pointed out, the nature of the fediverse can hide the behaviour from some people resulting in a de-facto tolerance of behaviour worse than the leadership (in this case again @jerry) would actually accept, while denying them the tools to do something about it.

Federation may actually not be a good idea at all for social media.

@teknogrot @Crell @jerry
Metafilter has (or had?) a $5 one-time entry fee that served the same purpose pretty well.

@Crell @jerry Jerry, firstly, thank you for the thoughtful, nuanced take. As a person who does somewhat high profile activism, I appreciate that your efforts have resulted in me experiencing very little harassment here.

The problem with having a list of "approved instances" is that it makes personal/tiny instances untenable.

This really reminds me of issues with email hosting and spam control - I run a personal email server and I have problems with providers assuming everyone is a spammer unless they have a history of sending non-spam.

How to establish that history if you can't send, though? If you're a business, you can pay protection money to certain companies that will bootstrap your reputation, but I can't afford that.

APIs for publishing opinions on other instances could help, if consumed "web of trust" style: you'd have two values, how much you trust the instance itself, and how much you trust its trust decisions. These values might be negative. I'm not sure how well this would work in practice.
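
A toy sketch of that two-value model, entirely hypothetical, with made-up instance names and weights:

```python
# Web-of-trust sketch: combine my direct opinion of an instance with
# opinions published by peers, weighted by how much I trust each
# peer's *judgment*. All names and values are invented.

direct_trust = {"friendly.example": 0.9, "sketchy.example": -0.5}
# How much I trust each peer's judgment about third parties:
meta_trust = {"friendly.example": 0.8, "sketchy.example": 0.0}
# Opinions each peer publishes about other instances:
peer_opinions = {
    "friendly.example": {"newcomer.example": 0.6, "troll.example": -1.0},
    "sketchy.example": {"troll.example": 1.0},
}

def trust_score(target: str) -> float:
    """My direct opinion plus meta-trust-weighted peer opinions."""
    score = direct_trust.get(target, 0.0)
    for peer, opinions in peer_opinions.items():
        score += meta_trust.get(peer, 0.0) * opinions.get(target, 0.0)
    return score

# trust_score("troll.example") == 0.8 * -1.0 == -0.8 -> likely defederate
```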

@Crell @jerry Meanwhile, yesterday someone went out of their way on the birdsite to tag me in a post calling me an assortment of slurs.

@Crell @jerry speaking of the birdsite, before the API got locked down, I spent a fair amount of effort building network analysis tools to proactively identify and block bigots. Turns out assholes like to follow each other.

It was deeply satisfying when news about me came out and a bunch of people who had never interacted with me and weren't on any shared blocklists were complaining about being blocked by me.

@Crell @jerry I also had an IFF (identify friend or foe) script that would pull following/follower data, compare it against my own block, mute, following, and follower lists, and compute a score.
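
Something in the spirit of that script might look like this; the weighting and set names are invented for illustration (the original worked against Twitter API data that is no longer available):

```python
# Hypothetical IFF scoring: rate an unknown account by how its social
# graph overlaps with my own lists. The weights are arbitrary choices.

def iff_score(their_follows: set[str], their_followers: set[str],
              my_follows: set[str], my_blocks: set[str]) -> float:
    """Positive: likely friend. Negative: likely foe."""
    graph = their_follows | their_followers
    if not graph:
        return 0.0  # no data, e.g. a freshly spun-up troll account
    friendly = len(graph & my_follows)
    hostile = len(graph & my_blocks)
    # "Assholes like to follow each other": blocklist overlap counts
    # against an account more heavily than friendly overlap helps it.
    return (friendly - 2.0 * hostile) / len(graph)
```
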
@ryanc @jerry @Crell perhaps there’s a way to make this available to members so they can implement it when they sign up?
@Rickd6 @jerry @Crell It's not clear to me that it would work here. Part of the issue is it sounds like the trolls often spin up new disposable instances for trolling purposes and wouldn't have useful data.
@Crell @ryanc @jerry is it possible to ‘fight fire with fire’ in that when someone identifies that they are receiving harassment a group of individuals- obviously a prearranged group- can be contacted who will respond overwhelmingly to the harassing individual to call them out? Sounds childish when said out loud and may make them dig in further but …..
@Rickd6 @ryanc @jerry "My gang is bigger than your gang" is the approach used in a failed society.
@ryanc @Crell I used to work with someone who had this saying "the operation was a success, unfortunately the patient died". I feel like it's that sort of situation - we could indeed solve the problem by killing the patient.

@ryanc @jerry The spam analogy is very apt, I think, given Fediverse is often analogized to email.

And the wild-west-anyone-runs-anything approach is largely a failure there, too. I also used to run a personal mail server. It only worked if I proxied every message through my ISP's mail server.

A similar network-of-trust seems the only option here, give or take details.

@ryanc @jerry In the abstract sense, we're dealing with the scaling problems of the tit-for-tat experiment dynamics. Reputation-building approaches to social behavior only work when the # of actors is small enough that repeated interactions can build reputation. The Internet is vastly too big for that, just like society at large.
@Crell @jerry there are several PhD-thesis-level problems to solve here
@ryanc @jerry True dat.
@Crell @jerry My big concern with the web of trust model is that it's complicated, and has lots of nontrivial decisions to make. An effective tool would probably have to distill the decision to trust/neutral/distrust and have a standard scoring algorithm, and notify admins of conflicting data.
@Crell @jerry I do think keyword/regex filters as a quarantine/alert-admin feature would be helpful, but as mentioned upthread, part of the problem is people unknowingly joining instances that don't protect their users from harassment and not understanding why that's a problem. The guides saying "instance doesn't matter much" don't help.
@ryanc @jerry Yeah, the onboarding experience is definitely still a sore point. Like, I'd like to get my brother or the NFP I work with onto Mastodon, but I don't know what server to send them to. Mine isn't appropriate for them, mastodon.social isn't a good answer, and the alternative is... *citation needed*
@Crell @jerry Yeah, I've absolutely no idea what "general but friendly to members of frequently harassed groups" instances exist. This instance is really nice, as I've always been a hacker first and foremost. Yes, I'm queer on several dimensions and open about it, but most of the time I don't want to focus on that.

@ryanc @Crell @jerry

> "APIs for publishing opinions on other instances could help, if consumed 'web of trust' style: you'd have two values, how much you trust the instance itself, and how much you trust its trust decisions. These values might be negative. I'm not sure how well this would work in practice."

Fediseer may be something like this (created on the Threadiverse because of a Lemmy spam wave): https://gui.fediseer.com
