@mjausson @welshpixie

Yes. New admins who set up instances often don't block the instances that are well known in the admin community for existing solely to harass Black people.

It was a design choice/trade-off not to make it easy for new admins to default into a "moderation provider." This thinking is changing, though, so yay for Mastodon. 👍🏿

@mekkaokereke @mjausson We often say in our fedi admins chat that new admins should be presented with a selection of blocklists from the get-go and be able to choose one to import, rather than having to find out the hard way that they should be importing one, then having to find one, check whether it's trustworthy, etc. Right now the process involves too much 'find out the hard way' or 'already been on fedi long enough to know you need one', and that's not great for new people.
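A sketch of the import step being described, assuming Mastodon's domain-block CSV export format (`#domain` and `#severity` columns); the helper function itself is hypothetical, not part of Mastodon:

```python
import csv
import io

# Hypothetical helper: parse a Mastodon-style domain blocklist CSV
# and return the domains marked for full suspension.
def domains_to_suspend(csv_text):
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["#domain"] for row in reader if row.get("#severity") == "suspend"]

# Tiny made-up sample in the export format (domains are placeholders).
sample = """#domain,#severity
bad.example,suspend
iffy.example,silence
"""

print(domains_to_suspend(sample))  # ['bad.example']
```

A setup wizard could run something like this over a curated list the new admin picks, instead of leaving them to discover blocklists after the fact.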
@welshpixie @mekkaokereke These responses seem irresponsible to me. 'Trust, but verify' is far too permissive when it comes to social media. Since the earliest days of BBS moderation, I have practiced "deny all, and allow only after trust has been proven, and only to specific instances." As I have watched the numbers grow, I have been shocked at the open federation policies of many new instances. A federated moderation system like 'fediblock' will rely on a trust framework that doesn't exist yet. That is a very challenging task. It's a great idea, but in the meantime, maybe we can all advocate a locked-down instance as a default?
@imklg @mekkaokereke Yeah, an allowlist instead of a denylist is also something we talk about a lot. But for those of us who have been here since 2016, there is definitely a trust framework - we know the bad actors. We have a catalogue of them, ranging from the worst of the worst to 'your mileage may vary', in blocklists like @oliphant 's. Almost every time something appears in fediblock, it's already known to us, and that encounter could have been prevented if they had used a blocklist -
@imklg @mekkaokereke @oliphant - but of course they need to know about the blocklists first, and often, sadly, people only realise their worth after having a run-in with one of the instances on there. The system isn't good enough.
@imklg @mekkaokereke @oliphant and, yes, for us instance admins who have been here a while, we're always yelling about open federation *and* open membership.
@imklg @mekkaokereke @oliphant Also also, we have been asking Eugen for literal years for stuff like this to address safety, and only within the last few months has it even been possible to import blocklists through the UI. These are things responsible instance admins have been clamoring for since 2016. People are too quick to respond with 'just fork your own with the features you want', and that is absolutely not viable/practical/possible for most people; the main branch should be better.
@welshpixie @mekkaokereke @oliphant What you describe is an informal trust association, not a framework. We have general criteria for membership concerning appropriate speech and behavior at the instance level, and fairly robust tools for enforcing those criteria, but no framework for trust federation within the protocol. Yes, this is a difficult technical problem and runs into free speech vs. hate speech issues, but the issue of trust ought to be crystal clear. An informal trust association needs to become a framework built into the protocol.
@welshpixie @mekkaokereke @mjausson As the admin of a small, friends-owned instance, I would be really happy to have that kind of list already available, indeed... Thanks for advocating for that!
@welshpixie is there an available blocklist that you would recommend (I have been manually adding to my fediblock list when I see things come up)?
[Link preview: "Blocklists — Mastodon Blocklists (For Download). Do you want a blocklist? Because this is how you get a blocklist." — The Oliphant]
@welshpixie @oliphant thanks, have bookmarked for when I am on an actual computer later

@welshpixie @mekkaokereke @mjausson
So what we need is for the instance to host a central blocklist that can be subscribed to and handled upstream.

Personal blocklists would work as aggregation sources for the central one. E.g. if 1% of individual listings are all blocking a user, that triggers the name to move into the central list, and everyone on that instance gains a central block.

So a targeted group gets herd protection by doing what they are already doing. This seems relatively easy to do. Experts, how do I submit this?
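The 1% threshold rule described above can be sketched in a few lines; the function name, data schema, and example targets are all made up for illustration:

```python
from collections import Counter

# Sketch of the aggregation rule: promote a blocked target to the
# instance-wide list once enough individual users have blocked it.
def targets_over_threshold(user_blocks, total_users, threshold=0.01):
    """Return blocked targets that at least `threshold` of users have blocked.

    user_blocks: list of (user_id, blocked_target) pairs (hypothetical schema).
    """
    counts = Counter(target for _, target in user_blocks)
    cutoff = total_users * threshold
    return sorted(t for t, n in counts.items() if n >= cutoff)

# 2 of 200 users (exactly 1%) block the same account, so it crosses the line.
blocks = [
    (1, "spammer@bad.example"),
    (2, "spammer@bad.example"),
    (3, "meh@other.example"),
]
print(targets_over_threshold(blocks, total_users=200))  # ['spammer@bad.example']
```

In practice the threshold and the promotion step would need review by a moderator rather than firing automatically, to avoid coordinated false reports gaming the count.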

[Link preview: mastodon/mastodon — "Your self-hosted, globally interconnected microblogging community" — GitHub]

@welshpixie @mekkaokereke @mjausson Makes total sense. This is similar to installing a new operating system: there are certain preferences where intelligent defaults should be, and are, presented at install time. Blocking racism, white supremacy, TERFs, et al. should be offered at setup time.

Similarly, I would hope that these lists are in fact subscriptions. That poses risks of de facto centralized moderation, but if it resulted in notifications to the admin with an opt-in on each update, it could work. It requires admins to own moderation, but any responsible admin should, right?
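The opt-in-per-update flow suggested above could look something like this; the class, method names, and example domains are hypothetical:

```python
from dataclasses import dataclass, field

# Sketch of a blocklist subscription where updates queue for admin approval
# instead of applying automatically (avoiding de facto centralized moderation).
@dataclass
class BlocklistSubscription:
    applied: set = field(default_factory=set)
    pending: list = field(default_factory=list)

    def receive_update(self, domains):
        # New entries are queued and the admin is notified, not auto-applied.
        new = [d for d in domains if d not in self.applied and d not in self.pending]
        self.pending.extend(new)
        return new  # in a real system, this would trigger an admin notification

    def approve(self, domain):
        # The admin opts in to each entry individually.
        if domain in self.pending:
            self.pending.remove(domain)
            self.applied.add(domain)

sub = BlocklistSubscription()
sub.receive_update(["bad.example", "worse.example"])
sub.approve("bad.example")
print(sorted(sub.applied), sub.pending)  # ['bad.example'] ['worse.example']
```

The point of the design is that the upstream list only proposes; each admin still owns the final moderation decision.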