@mjausson @welshpixie

Yes. New admins who set up instances often don't block the instances that are well known in the admin community for existing solely to harass Black people.

It's a deliberate design choice/trade-off not to make it easy for new admins to default into following a "moderation provider." That thinking is changing though, so yay for Mastodon. 👍🏿

@mekkaokereke @mjausson We often say in our fedi admins chat that new admins should be presented with a selection of blocklists from the get-go, so they can choose one to import rather than having to find out the hard way that they should be importing one, then having to find one, check whether it's trustworthy, etc. Right now the process involves too much 'find out the hard way' or 'already been on fedi long enough to know you need one', and that's not great for new people.
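(For context on what "importing one" involves: Mastodon's admin interface can import domain blocks from a CSV file, the same format it uses for export. A minimal sketch of such a file is below; the domain names and comments are made-up placeholders, not entries from any real blocklist.)

```csv
#domain,#severity,#reject_media,#reject_reports,#public_comment,#obfuscate
harassment-example.invalid,suspend,true,true,"Dedicated harassment instance",false
spam-example.invalid,silence,true,false,"Spam, unmoderated signups",false
```

Severity is typically `suspend` (full defederation) or `silence` (limit visibility); the public comment lets other admins see why an entry is there, which is part of how trust in a shared list gets evaluated.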
@welshpixie @mekkaokereke These responses seem irresponsible to me. 'Trust, but verify' is far too permissive when it comes to social media. Since the earliest days of BBS moderation, I have practiced "deny all, and allow only after trust has been proven, and only to specific instances." As I have watched the numbers grow, I have been shocked at the open federation policies of many new instances. A federated moderation system like 'fediblock' will rely on a trust framework that doesn't exist yet. That is a very challenging task. It's a great idea, but in the meantime, maybe we can all advocate for a locked-down instance as the default?
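(Mastodon does already have a switch for roughly this "deny all by default" posture: limited federation mode, which makes the server federate only with explicitly allowed domains. A hedged sketch of the server config, assuming a standard Mastodon deployment:)

```shell
# .env.production (excerpt) — hypothetical example
# With this enabled, the instance is allowlist-only: it federates solely
# with domains an admin has explicitly approved in the admin interface.
LIMITED_FEDERATION_MODE=true
```

It is not the default, which is the point being argued here; enabling it trades reach for safety, so it suits the "allow only after trust has been proven" stance rather than open federation.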
@imklg @mekkaokereke Yeah, an allowlist instead of a denylist is also something we talk about a lot. But for those of us who have been here since 2016 there is definitely a trust framework: we know the bad actors. We have a catalogue of them, ranging from the worst of the worst to 'your mileage may vary', in blocklists like @oliphant's. Almost every time something appears in fediblock it's already known to us, and that encounter could have been prevented if the instance had used a blocklist.
@welshpixie @mekkaokereke @oliphant What you describe is an informal trust association, not a framework. We have general criteria for membership concerning appropriate speech and behavior at the instance level, and fairly robust tools for enforcing those criteria, but no framework for trust federation within the protocol. Yes, this is a difficult technical problem and runs into free-speech-versus-hate-speech issues, but the issue of trust ought to be crystal clear: an informal trust association needs to become a framework built into the protocol.