I've been participating in the fediverse for about 8.5 years now, and have run infosec.exchange as well as a growing number of other fediverse services for about 7.5 of those years. While I am generally not the target of harassment, as an instance administrator and moderator, I've had to deal with a very, very large amount of it. Most commonly that harassment is racism, but to be honest we get the full spectrum of bigotry here in different proportions at different times. I am writing this because I'm tired of watching the cycle repeat itself, I'm tired of watching good people get harassed, and I'm tired of the same trove of responses that inevitably follows. If you're just in it to be mad, I recommend chalking this up to "just another white guy's opinion" and moving on to your next read.

The situation nearly always plays out like this:

A black person posts something that gets attention. The post and/or person's account clearly designates them as being black.

A horrific torrent of vile racist responses ensues.

The victim expresses frustration with the amount of harassment they receive on Mastodon/the Fediverse, often pointing out that they never had such a problem on the big, toxic commercial social media platforms. There is usually a demand for Mastodon to "fix the racism problem".

A small army of "helpful" fedi-experts jumps in with replies to point out how Mastodon provides all the tools one needs to block bad actors.

Now, more exasperated, the victim exclaims that it's not their job to keep racists in check - in fact, not having to do that is usually cited as a central reason for joining the fediverse in the first place!

About this time, the sea lions show up in replies to the victim, accusing them of embracing the victim role, trying to cause racial drama, and so on. After all, these sea lions are just asking questions, since they don't see any of what the victim is complaining about anywhere on the fediverse.

Lots of well-meaning white folk usually turn up about this time to shout down the sea lions and encourage people to believe the victim.

Then time passes... People forget... A few months later, the entire cycle repeats with a new victim.

Let me say that the fediverse has both a bigotry problem that tracks with what exists in society at large and a troll problem. The trolls will manifest as racist when the opportunity presents itself, or as anti-trans, anti-gay, anti-women, anti-furry, and whatever else suits their fancy at the time. The trolls coordinate, cooperate, and feed off each other.

What has emerged on the fediverse, in my view, is a concentration of trolls onto a certain subset of instances. Most instances do not tolerate trolls, and with some notable exceptions, trolls don't even bother joining "normal" instances any longer. There is no central authority that can prevent trolls from spinning up fediverse software on their own servers, using their own domain names, and doing their thing on the fringes. On centralized social media, people can be ejected, suspended, or banned, and unless they keep trying to make new accounts, that is the end of it.

The tools for preventing harassment on the fediverse are quite limited, and the specifics vary by software. For example, some software, like Pleroma/Akkoma, lets administrators filter out certain words, while Mastodon, which the vast majority of the fediverse uses, allows both instance administrators and users to block accounts and block entire domains, along with some things in the middle like "muting" and "limiting". These are blunt instruments.
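
To make the bluntness concrete, here is a minimal sketch (in Python, with invented data shapes - this is not Mastodon's actual internals) of how a domain block and a word filter act on incoming posts: either the whole domain disappears or the post is dropped on a keyword match, with nothing in between.

```python
# Hypothetical sketch of the "blunt instruments": a domain block hides
# everything from an instance; a word filter drops posts containing a term.
# Field names and data shapes are illustrative only.

def visible(post, blocked_domains, filtered_words):
    """Return True if a post survives instance-level moderation."""
    author_domain = post["author"].split("@")[-1]
    if author_domain in blocked_domains:
        return False  # domain block: everything from that instance vanishes
    text = post["text"].lower()
    if any(word in text for word in filtered_words):
        return False  # word filter (Pleroma/Akkoma-style)
    return True

posts = [
    {"author": "alice@good.example", "text": "Hello fediverse!"},
    {"author": "troll@bad.example", "text": "vile stuff"},
]
timeline = [p for p in posts if visible(p, {"bad.example"}, {"slur"})]
```

Note there is no middle ground in this model: a single bad actor on an otherwise fine domain either gets through or takes the whole domain down with them.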

To some extent, the concentration of trolls works in the favor of instance administrators. We can block a few dozen or a few hundred domains and solve 98% of the problem. Some solutions have been implemented, such as shared block lists of "problematic" instances, however those block lists often become polluted with the politics of their maintainers, or at least that is the perception among some administrators. Other administrators take the view that people should be free to connect with whomever they like on the fediverse, and delegate the responsibility for deciding whom to block to the user.

For this and many other reasons, we find ourselves with a very unevenly federated network of instances.

With this in mind, if we take a big step back and look at the cycle of harassment I described above, it looks like this:

A black person joins an instance that does not block many (or any) of the troll instances.

That black person makes a post that gets some traction.

Trolls on some of the problematic instances see the post, since they are not blocked by the victim's instance, and begin sending extremely offensive and harassing replies. A horrific torrent of vile racist responses ensues.

The victim expresses frustration with the amount of harassment they receive on Mastodon/the Fediverse, often pointing out that they never had such a problem on the big, toxic commercial social media platforms. There is usually a demand for Mastodon to "fix the racism problem".

Cue the sea lions. The sea lions are almost never on the same instance as the victim. And they are almost always on an instance that blocks those troll instances I mentioned earlier. As a result, the sea lions do not see the harassment. All they see is what they perceive to be someone trying to stir up trouble.

...and so on.

A major factor in your experience on the fediverse has to do with the instance you sign up to. Despite what the folks on /r/mastodon will tell you, you won't get the same experience on every instance. Some instances are much better at keeping the garden weeded than others. If a person signs up to an instance that is not proactive about blocking trolls, they will almost certainly be exposed to the wrath of trolls. Is that the Mastodon developers' fault for not figuring out a way to more effectively block trolls through their software? Is it the instance administrator's fault for not blocking troll instances/troll accounts? Is it the victim's fault for joining an instance that doesn't block troll instances/troll accounts?

I think the ambiguity here is why we continue to see the problem repeat itself over and over - there is no obvious owner nor solution to the problem. At every step, things are working as designed. The Mastodon software allows people to participate in a federated network and gives both administrators and users tools to control and moderate who they interact with. Administrators are empowered to run their instances as they see fit, with rules of their choosing. Users can join any instance they choose. We collectively shake our fists at the sky, tacitly blame the victim, and go about our days again.

It's quite maddening to watch it happen. The fediverse prides itself as a much more civilized social media experience, providing all manner of control to the user and instance administrators, yet here we are once again wrapping up the "shaking our fist at the sky and tacitly blaming the victim" stage in this most recent episode, having learned nothing and solved nothing.

@jerry 100%.

One interesting idea I've seen floated recently is a "known-good" list(s), so a new instance can federate *only* with those on some known good list(s). Then someone joining a server can see if their server is part of the "X-approved list" and decide to join or not.

Obviously not a complete solution, but are we maybe at the size where it's a part of the picture? Make new instances prove they're good, rather than wait for them to prove they're bad?

@Crell it's antithetical to what the fediverse is intended to be, but it is a reasonable solution to this problem

@jerry Sadly, I think the preponderance of evidence suggests that a "wild west libertarian self-organizing environment" (the dream of the early-90s Internet) will devolve into a Nazi troll farm 100% of the time with absolute certainty.

It's a wonderful idea, but doomed.

The barrier to the accept-list could be low (eg, do they have a halfway decent TOS/CoC), but I don't think we have an alternative.

cf: https://peakd.com/community/@crell/why-you-can-t-just-ignore-them


@Crell @jerry I think the idea that an otherwise terrible person had like 20 years ago holds up pretty well: paying for initial access results in you having an investment in a service that encourages you to follow the rules to protect that investment. You can see this with how Something Awful has turned into a stable and mature forum with varied subforums and at least one thread for anything you can think of.

Of course the downside to that is that if the person setting the rules is terrible then the culture will be terrible and require a coup to fix, but... that seems to be a universal part of the human condition.

@teknogrot @jerry "The culture of an organization is defined as the worst behavior its leadership is willing to tolerate."

No amount of federation will change that dynamic.

@Crell I think it does change it, but not for the better. As @jerry pointed out, the nature of the fediverse can hide the behaviour from some people resulting in a de-facto tolerance of behaviour worse than the leadership (in this case again @jerry) would actually accept, while denying them the tools to do something about it.

Federation may actually not be a good idea at all for social media.

@teknogrot @Crell @jerry
Metafilter has (or had?) a $5 one-time entry fee that served the same purpose pretty well.

@Crell @jerry Jerry, firstly, thank you for the thoughtful, nuanced take. As a person who does somewhat high profile activism, I appreciate that your efforts have resulted in me experiencing very little harassment here.

The problem with having a list of "approved instances" is that it makes personal/tiny instances untenable.

This really reminds me of issues with email hosting and spam control - I run a personal email server and I have problems with providers assuming everyone is a spammer unless they have a history of sending non-spam.

How to establish that history if you can't send, though? If you're a business, you can pay protection money to certain companies that will bootstrap your reputation, but I can't afford that.

APIs for publishing opinions on other instances could help, if consumed "web of trust" style - you'd have two values, how much you trust the instance itself, and how much you trust its trust decisions. These values might be negative. I'm not sure how well this would work in practice.
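
A minimal sketch of that two-value idea, with all instance names and numbers invented: a peer's published opinions are weighted by how much you trust its judgment (the second value), while your own direct opinion always wins.

```python
# Toy web-of-trust combiner. "my_trust" holds direct opinions about
# instances; "my_meta_trust" holds how much I trust each peer's *judgment*;
# "published" holds each peer's published opinions about other instances.

def effective_trust(target, my_trust, my_meta_trust, published):
    """Combine my direct opinion with peers' published opinions."""
    if target in my_trust:
        return my_trust[target]  # direct opinion wins outright
    score, weight = 0.0, 0.0
    for peer, meta in my_meta_trust.items():
        opinion = published.get(peer, {}).get(target)
        if opinion is not None and meta > 0:
            score += meta * opinion  # weight opinion by judgment-trust
            weight += meta
    return score / weight if weight else 0.0

my_trust = {"friendly.example": 1.0}
my_meta_trust = {"friendly.example": 0.8, "sketchy.example": 0.1}
published = {
    "friendly.example": {"new.example": 0.5, "trollfarm.example": -1.0},
    "sketchy.example": {"trollfarm.example": 1.0},
}
```

With these numbers, "trollfarm.example" ends up negative because the heavily weighted peer condemns it, even though a low-weight peer endorses it.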

@Crell @jerry Meanwhile, yesterday someone went out of their way on the birdsite to tag me in a post calling me an assortment of slurs.

@Crell @jerry speaking of the birdsite, before the API got locked down, I spent a fair amount of effort building network analysis tools to proactively identify and block bigots. Turns out assholes like to follow each other.

It was deeply satisfying when news about me came out and a bunch of people who had never interacted with me and weren't on any shared blocklists were complaining about being blocked by me.

@Crell @jerry I also had an IFF (identify friend or foe) script that would pull following/follower data, compare it against my own block, mute, following, and follower lists, and compute a score.
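
A toy version of such an IFF scorer might look like the following - the weights and account names are invented, and a real script would pull the follow lists from an API rather than hard-coding them:

```python
# Toy "identify friend or foe" scorer built on the observation upthread
# that bigots tend to follow each other: score an unknown account by the
# overlap of who it follows with my own follow and block lists.

def iff_score(their_follows, my_follows, my_blocks):
    friendly = len(their_follows & my_follows)
    hostile = len(their_follows & my_blocks)
    return friendly - 2 * hostile  # arbitrary choice: weight hostility more

my_follows = {"alice", "bob", "carol"}
my_blocks = {"troll1", "troll2", "troll3"}

stranger_a = {"alice", "bob", "dave"}     # mostly friendly overlap
stranger_b = {"troll1", "troll2", "eve"}  # follows known bad actors
```

A negative score would flag the account for a preemptive block, which matches the experience described above of blocking people who had never directly interacted.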
@ryanc @jerry @Crell perhaps thereโ€™s a way to make this available to members so they can implement it when they sign up?
@Rickd6 @jerry @Crell It's not clear to me that it would work here. Part of the issue is it sounds like the trolls often spin up new disposable instances for trolling purposes and wouldn't have useful data.
@Crell @ryanc @jerry is it possible to 'fight fire with fire' in that when someone identifies that they are receiving harassment, a group of individuals - obviously a prearranged group - can be contacted who will respond overwhelmingly to the harassing individual to call them out? Sounds childish when said out loud and may make them dig in further but...
@Rickd6 @ryanc @jerry "My gang is bigger than your gang" is the approach used in a failed society.
@ryanc @Crell I used to work with someone who had this saying "the operation was a success, unfortunately the patient died". I feel like it's that sort of situation - we could indeed solve the problem by killing the patient.

@ryanc @jerry The spam analogy is very apt, I think, given Fediverse is often analogized to email.

And the wild-west-anyone-runs-anything approach is largely a failure there, too. I also used to run a personal mail server. It only worked if I proxied every message through my ISP's mail server.

A similar network-of-trust seems the only option here, give or take details.

@ryanc @jerry In the abstract sense, we're dealing with the scaling problems of the tit-for-tat experiment dynamics. Reputation-building approaches to social behavior only work when the # of actors is small enough that repeated interactions can build reputation. The Internet is vastly too big for that, just like society at large.
@Crell @jerry there are several PhD-thesis-level problems to solve here
@ryanc @jerry True dat.
@Crell @jerry My big concern with the web of trust model is that it's complicated, and has lots of nontrivial decisions to make. An effective tool would probably have to distill the decision to trust/neutral/distrust and have a standard scoring algorithm, and notify admins of conflicting data.
@Crell @jerry I do think keyword/regex filters as a quarantine/alert-admin thing would be helpful, but as mentioned up thread, part of the problem is people unknowingly joining instances that don't protect their users from harassment and not understanding why that's a problem. The guides saying "instance doesn't matter much" don't help.
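
A quarantine-style keyword/regex triage could be as simple as the sketch below; the patterns are placeholders standing in for a moderator-curated list:

```python
# Sketch of a quarantine filter rather than a hard block: matching posts
# are held for admin review instead of delivered. The patterns here are
# harmless placeholders; a real list would be curated by moderators.
import re

PATTERNS = [re.compile(p, re.IGNORECASE) for p in [r"\bslur1\b", r"\bslur2\b"]]

def triage(post_text):
    """Return 'quarantine' if any pattern matches, else 'deliver'."""
    if any(p.search(post_text) for p in PATTERNS):
        return "quarantine"  # alert admins, hold from timelines
    return "deliver"
```

The point of quarantining rather than dropping is that admins see the attempted harassment even when users don't, which addresses the visibility gap described in the top post.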
@ryanc @jerry Yeah, the onboarding experience is definitely still a sore point. Like, I'd like to get my brother or the NFP I work with onto Mastodon, but I don't know what server to send them to. Mine isn't appropriate for them, mastodon.social isn't a good answer, and the alternative is... *citation needed*
@Crell @jerry Yeah, I've absolutely no idea what "general but friendly to members of frequently harassed groups" instances exist. This instance is really nice, as I've always been a hacker first and foremost. Yes, I'm queer on several dimensions and open about it, but most of the time I don't want to focus on that.

@ryanc @Crell @jerry

> "APIs for publishing opinions on other instances could help, if consumed 'web of trust' style - you'd have two values, how much you trust the instance itself, and how much you trust its trust decisions. These values might be negative. I'm not sure how well this would work in practice."

Fediseer may be something like this (created on Threadiverse because of Lemmy spam wave): https://gui.fediseer.com


@74, Tens first under scrutiny. 😂
@AubreyDeLosDestinos @74 the alphabet doesn't discriminate
@74 @74, letters versus digits. I've been reading this whole thread since the morning, and all in all quite a few people have a pretty level-headed take on it. Setting emotions aside, it's a very interesting question - how the Fedi is being forged through growing pains.

@Crell @jerry I wonder, isn't "creating an instance" a barrier? In the sense that a new troll instance, after some time, should be blocked by most other instances and blocklists. Then they have to set up a new one, and so on, and so on.

Also, I guess new people either know people on Mastodon they trust or they go to joinmastodon.org and pick a server. That's kind of an accept-list, no? The barrier to get there is relatively low: https://joinmastodon.org/covenant


@skaphle @jerry Too low, apparently.

And with tools like Mastohost, setting up an attack instance is quite easy.

Always assume that malicious actors are willing to put in more work than benevolent actors. They're more motivated.

@Crell On #Freenet (now #Hyphanet¹) that problem already struck 20 years ago, and the solution was to add propagating visibility, where visibility grows slowly while you interact and dies almost instantly once you harass. It actually works in keeping communication civil, even though Freenet doesn't only attract idealists but also horrible people.
There are PRs to enable similar systems in Mastodon:

https://github.com/mastodon/mastodon/pull/28958
https://gitlab.com/babka_net/mastodon-babka/-/merge_requests/22

¹ sad reasons: https://www.hyphanet.org/freenet-renamed-to-hyphanet.html
@jerry

Add follow/mute/block.removed webhook events by CSDUMMI · Pull Request #28958 · mastodon/mastodon

@Crell The main advantage of the system in #Freenet / #Hyphanet is that gaining visibility scales worse than losing it: interactions increase it slowly, being reported destroys it quickly. A person connected to you by propagating interaction reports them as a harasser. This report is public, which works in Hyphanet to make people lose visibility for false claims, too.

Background:
https://www.draketo.de/software/decentralized-moderation

The prototype linked there (wispwot) can be linked into Mastodon with the mentioned PRs.

@jerry

The path towards decentralized moderation

Verstreute Werke von ((ฮป()'Dr.ArneBab))

@ArneBab @jerry Interesting. What's the mechanism for preventing hostile brigading, or a bunch of trolls reporting someone just to down-vote them? (Where "troll" is defined as "activist I disagree with", because the racist right aren't the only ones who do those things.)

@Crell When they do, they expose their downvoting, so they lose visibility which they had to build up over time by actual interaction.

A few years ago we had an attack on this by a group of organized Neonazis - which luckily failed. I took that chance to record a dataset of the network:
https://figshare.com/articles/dataset/The_Freenet_social_trust_graph_extracted_from_the_Web_of_Trust/4725664?file=7715467

That also shows a weakness of that system: this information must be public.

The important point: we know the attackers, so it's possible to use this to check moderation systems.
@jerry

The Freenet social trust graph extracted from the Web of Trust
@ArneBab @jerry Hm, so like, it burns karma to report someone?

@Crell It burns karma to report someone *falsely*.

Because if most people agree with your reporting, then they won't reduce your visibility for that.
@jerry

@Crell One other reason why this works pretty well in Hyphanet is that you can always create a new ID - you can always start fresh - but this also starts with low visibility.

And I think it works as moderation, because it operates solely on the value spammers and trolls care about: visibility.

You gain visibility by interaction (which is roughly gratis for humans).

You lose it when you get reported by someone who has visibility.

The strength of reporting is reduced by social distance.
@jerry
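
As a rough illustration of the dynamic described above (slow gains from interaction, fast losses from reports weighted by the reporter's visibility and attenuated by social distance), here is a toy model with arbitrary constants - not Hyphanet's actual algorithm:

```python
# Toy model of the Hyphanet-style visibility dynamic: visibility grows
# slowly and boundedly with interaction, but collapses quickly when
# someone who themselves has visibility files a report. All constants
# are invented for illustration.

class Account:
    def __init__(self):
        self.visibility = 0.0

    def interact(self):
        # slow, bounded growth from ordinary interaction
        self.visibility = min(1.0, self.visibility + 0.05)

    def reported_by(self, reporter, social_distance=1):
        # a report burns visibility in proportion to the reporter's own
        # visibility, attenuated by social distance
        penalty = reporter.visibility / social_distance
        self.visibility = max(0.0, self.visibility - penalty)

troll, regular = Account(), Account()
for _ in range(10):
    troll.interact()    # a troll builds visibility only slowly...
    regular.interact()
troll.reported_by(regular)  # ...and one report from a peer wipes it out
```

The asymmetry is the whole point: trolls must spend time earning the one resource they care about, and spend it faster than they can earn it whenever they misbehave.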

@Crell Conceptually this gives up the idea of a global truth about who should be visible - similar to how different domain blocks on different Mastodon instances give up this idea, but down to individual decisions, while still scaling.

That's how it avoids the problem of centralized power.

Using it in the Fediverse could combine both concepts:
Initially see (and therefore trust) everyone on your home instance, but everyone outside it only gets visibility incrementally by interaction.
@jerry

@ArneBab @Crell @jerry This is pretty fascinating, and aligns with stuff I've been thinking about a lot.

How would this system cope with some person or group who needs visibility urgently? I'm thinking of marginalized groups in the midst of an emergency: police or government crackdowns, loss of connection due to nation-state interference, etc.

I suppose highly visible folks could boost the information somehow? I'm kinda thinking out loud...

@sesamecreek yes, they'll need to get well-connected people to see them and to explicitly endorse them (set them as trusted).

Pretty similar to real life: talk to people other people trust and if they consider your cause important, they can signal-boost you.

If they signal boost falsely, they likely lose their ability to signal-boost (at least for many people) and also a lot of visibility.
@Crell @jerry

@ArneBab @Crell @jerry How would that work? That is, how would a relatively unknown person get well-connected people to see them? Can they contact well-connected people despite not having "high visibility" or being well connected?

In real life, this is a problem, too. How do you talk to people other people trust?

Is there some way to establish trust quickly? (1/3)

I get that this is a hard problem. I've thought about it a lot, but coming from a privileged position, I'm trying to come up with some somewhat naive ideas about how it would work for minorities and marginalized folks. (2/3)
I like the system you describe a lot. It would certainly work very well for folks who have the time to be patient and establish their reputation. And, of course, any method you come up with to short-circuit the reputation system for emergencies could be exploited. Still just thinking out loud, sorry. (3/3)

@sesamecreek If you want to try how different scenarios work out, you can try the implementation in wispwot:
https://hg.sr.ht/~arnebab/wispwot/browse/HOWTO.org?rev=4d004e58c26e#L14

It operates on a simple plain-text store (folders with text files you can inspect easily) and exposes a REST interface besides being usable from the command line.

If you find scenarios that enable corrupting the calculations, those may be things to fix.

There's already one part I fixed compared to the original in Hyphanet:
https://hg.sr.ht/~arnebab/wispwot/browse/README?rev=4d004e58c26e#L97

@Crell @jerry

@Crell also everyone the downvoted person already interacted with will still see them (local trust wins over propagated trust).

Measures to make this stronger would be to have stronger indicators if propagated trust disagrees strongly with your local trust - and where in the network that originates (via graph algorithms; there's a bunch of different ones that should be cheap enough).
@jerry

@jerry @Crell In the early days of IRC (I wasn't there for it), my understanding was that EFnet was meant to be similar - allow any server to join - and hence their name Eris Free Net. But they've since changed their policy given the risks, and I think that's one of few reasonable approaches. Increasing the friction for everyone sucks, but it disproportionately hurts trolls, so I guess it may be worthwhile?

@jerry @Crell

I really appreciate your top post - it clarified a lot for me.

I'm a total noob to the Fediverse, so I don't know what core tenet goes against using allow lists as opposed to deny lists. Is there an easy answer you can give me?

@jztusk @Crell I think this reply is a very good example of why that would be a problem: https://mk.aleteoryx.me/notes/9wexilu5kwnb05ot

Basically, the fediverse is premised on the idea of many people running their own personal instance, and in adopting an allow-list model, we effectively make it difficult or impossible for these individual instances to participate.

Frog Dorothy Haze (@admin): this is problematic for anyone like me, who hosts a personal instance. it would be an obscene increase in the barrier-to-entry

@jerry @jztusk @Crell

Why not both? Some servers can run open federation, some can run allowlist-only, some can run in quarantine-first mode, and over time I'm sure we'll see shared lists, reputation signals, and trusted upstream servers to help manage the onboarding/allowing.

"Disallow all, but allow all servers already allowed by x, y and z" is one way to approach it.

Almost none of the asks I've seen are either/or propositions, they are generally admin options to enable or not.
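
The "allow all servers already allowed by x, y and z" idea can be sketched as simple set operations over published allow lists (instance names invented):

```python
# Sketch of bootstrapping a new instance's allow list from the published
# allow lists of admins you trust. Strict mode requires every trusted
# admin to allow a server; permissive mode requires any one of them.

def bootstrap_allowlist(published_lists, require_all=True):
    """Intersection or union of trusted instances' allow lists."""
    sets = [set(lst) for lst in published_lists.values()]
    if not sets:
        return set()
    if require_all:
        return set.intersection(*sets)  # strict: unanimously allowed
    return set.union(*sets)             # permissive: allowed by anyone

published = {
    "x.example": ["a.example", "b.example", "c.example"],
    "y.example": ["b.example", "c.example"],
    "z.example": ["b.example", "d.example"],
}
```

This is exactly the kind of per-admin option described above: the same mechanism yields a cautious or a generous starting point depending on one flag, and the admin can edit the result afterward.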

@jaz @jerry @jztusk @Crell
@jsit was talking about this the other day, and I keep feeling like I shot this idea down too soon...

https://social.coop/@jsit/112876102135328617

...but maybe that would be a good plan for some of these new and small instances, especially the ones that are trying to be safe spaces for minority groups. Get some momentum going, get some connections with other servers, get some contact with other server staffs, maybe eventually open it up.

Yeah, I think a federated whitelist would be a good idea.

Still, I'm looking at how many of these groups making block lists purport to be going after bigotry and harassment or whatever, but then you see them blocking a bunch of queer instances or black instances or something, and I wonder who might actually be trusted with this sort of thing. I can even imagine TechHub and Infosec showing up because someone with list access doesn't like the "techbros" or whatever...

Jay (@[email protected]): I'm beginning to wonder if the only solution to hate speech and harassment on the Fediverse might be allowlist-only instances.

@jaz @jerry @jztusk @Crell @jsit
It also occurs to me that this can't be run by the instances using it, because they won't be able to see new instances to whitelist, which means you're going to need a few large servers to be the "Canary in the coal mine" for these instances. I feel like Tech Hub, with our somewhat squeamish block policies, could be a really useful server here, and I'd be happy to help maintain such a list.

What I think we need is some framework for how this list is put together and maintained, without too much overhead. We would need to account for the fact that such a list needs to be absolutely huge, and that while it should prioritize safety, there is an ethical obligation to get as many servers on it as possible.

As I told Jsit, it might be useful for someone to make this list now, just so we can see what it looks like.

@Raccoon @jaz @jerry @jztusk @Crell The refrain of "allowlists/blocklists are bad because it means you won't hear from me" misses the point: This is why they are GOOD.

People don't have a "right" to talk to your instance, this is a privilege that should be EARNED. And the protection of vulnerable people on social media is more important than my ability to make sure they can see my dumb posts.

This is not antithetical to the Fediverse. Choosing which instances to federate with is central to it!

@Raccoon @jaz @jerry @jztusk @Crell Because I am not among a group that is a frequent target of abuse, I have the privilege of enjoying the benefits of being on an "open" instance without having to worry about the drawbacks. I will probably always prefer to be on an instance that is blocklist-based instead of allowlist-based. But many people do not have that privilege.

@jsit @jaz @jerry @jztusk @Crell
> "Because I am not among a group that is a frequent target of abuse, I have the privilege of enjoying the benefits of being on an 'open' instance without having to worry about the drawbacks."

But here's the flip side of that, one of the main things that makes people a bit squeamish about this: because you're not a member of a marginalized group, you haven't been on a server that has been brigaded with false reports trying to get the mainstream to block you, and then suddenly find a bunch of other marginalized groups' servers have blocked you without checking up on those reports. This is one of the things we keep seeing between queer fedi and black fedi.

What's to stop a member of one group, bigoted towards another, from getting in here and keeping servers that should be on the list off of it?

It then becomes a question of who will bell the cat: who will take on the responsibility, and thus open themselves up to abuse, of maintaining this?

@jsit @jaz @jerry @jztusk @Crell
And this post here also summarizes the big problems we've seen with FediBlock and The Bad Space.

We have people posting marginalized group instances on FediBlock, misrepresenting or exaggerating or even fabricating issues with those instances, and then suddenly finding that like 10% of the network has blocked them because no one is vetting these posts. I recently even appeared on there for attempting to vet some of those posts.

Meanwhile, basically every issue The Bad Space has had has turned into a timeline nightmare for its creators. Yeah, TBS has a problem with the number of instances it calls out for "racism" that no one else can find, and we could always make the argument that they could respond differently, but some people go absolutely insane about the people running it.

With a whitelist it would be even worse, because simply not including a server is doing a very real harm to its connections, and someone is going to answer for that.

@Raccoon Yes, who decides what to put on an allowlist/blocklist and what criteria they use continues to be a fraught problem with no simple solution.

But I was countering the claim a lot of people make that shared allowlists/blocklists in principle -- even if "perfectly curated" -- are antithetical to the Fediverse, which I think isn't true.

Some people bristle at the idea of these lists not because they think they might not be perfect, but because they want a nearly 100% open Fedi.

@jsit
I think you're talking about people who aren't in the conversation though: everyone who would be involved in this thread maintains a substantial block list, even if we have different standards for it. No one here is going to suggest a 100% open Fedi.

Our issue is the number of new and marginalized instances that are going to find a chunk of the network cut off by this sort of thing. We want new servers to be made, and we want those servers to thrive, because new servers add new life to the network, and a very important part of all of that is that good posts need to be able to spread far and wide and fast.

The Content Must Flow.

How does one create a new instance for a marginalized group in an environment where it will be cut off from the instances with great content from those same groups for however long it takes to get on the list? How do we let people on these new instances know more content will come, and why would they join a server that's blocked off?

@Raccoon I think maybe part of my confusion is not fully understanding how allowlists work. Can someone on a LIMITED_FEDERATION_MODE instance be *followed by* someone on a non-allowlisted instance?

For instance (heh), limited.example is in limited federation mode with only safe.example in its allowlist.

Someone on unknown.example wants to follow @ user @ limited.example. Can they do this?

#MastoAdmin #FediblockMeta

@jsit
As someone who doesn't deal with that directly, I forget that we have options like that. That is a good question, because if that's the case, it changes the nature of how disconnected these instances would be.

@Raccoon I have a test instance that I will enable limited federation on.

I would love to know if there are any big instances that do this already.