Someone should set up a third-party tool that mastodon/fediverse users can report abuse to. It should be staffed by people who are skilled at evaluating this stuff and who are paid to do it.

Recommendations/instructions to block instances/individuals could then be sent out to instance admins or even individual users (in an automated way), either as a collective or by subscription.

Or, rather, is anyone doing this already? #federatemoderation

@blaine The instinct to centralize is common, but I find it surprising coming from you.

Some sort of web-of-trust distributed thing is what the infrastructure needs. I'll trust instance B as much as my trust in A times A's trust in B.
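That transitive rule could be sketched roughly like this. Everything here is hypothetical: the instance names, the trust scores, and the one-hop delegation are just an illustration of "my trust in B = my trust in A times A's trust in B".

```python
# Hypothetical sketch of one-hop delegated trust.
# direct_trust: my own trust in instances I know directly.
# peer_trust: each known instance's published trust in its peers.
direct_trust = {"A": 0.9}
peer_trust = {"A": {"B": 0.7}}

def derived_trust(me_to: dict, peers: dict, target: str) -> float:
    """Best trust score for `target`: either my direct trust, or the
    best delegated path (trust multiplies along the chain)."""
    best = me_to.get(target, 0.0)
    for mid, t_mid in me_to.items():
        best = max(best, t_mid * peers.get(mid, {}).get(target, 0.0))
    return best

print(derived_trust(direct_trust, peer_trust, "B"))  # ≈ 0.9 * 0.7 = 0.63
```

Multiplying along the path means trust decays with distance, which matches the intuition that a friend-of-a-friend's judgement counts for less than a friend's.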

Coz the thing is, the "third-party" that "users can report abuse to" will immediately become corrupted.

Especially since they will need to be funded.

You can have a marketplace of censors, but it will be hard to stop free-riding: I can just copy the blocklist.

I think it would have to be some kind of delegated trust, with a network of weights between all the nodes, rather than some overseer service or even a marketplace of them.


@pre totally! Apologies if I appeared to imply that this should be centralized. I imagine something like this to be opt-in, and that there would be many such organizations, not just one. They might cooperate to weed out e.g. phishing attacks and other overt/automated/malicious spam, but social norms will undoubtedly vary.

@blaine Yeah, a moment's further thought and I realized you probably meant a standard for many of them to operate under: that marketplace of censors.

Standards for passing around blocklists sound good, and then you'll probably apply your local weighting to them, especially if they come with error bars and you already have a trust metric on their source.
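Combining those pieces, a local weighting step might look something like this. All the source names, trust scores, confidence values, and the threshold are invented for illustration; the idea is just to weight each source's verdict by your trust in that source and by the confidence ("error bar") the source attached to it.

```python
# Hypothetical sketch: merge blocklist recommendations from several
# moderation sources, weighted by local trust in each source.
my_trust = {"modsA": 0.8, "modsB": 0.3}

# Each source publishes (instance, confidence that it is abusive).
reports = {
    "modsA": {"spam.example": 0.95, "edgy.example": 0.4},
    "modsB": {"spam.example": 0.9},
}

def blocklist(trust: dict, reports: dict, threshold: float = 0.5) -> set:
    """Block an instance when the trust-weighted average confidence,
    across the sources that mention it, crosses the threshold."""
    scores, weights = {}, {}
    for src, verdicts in reports.items():
        w = trust.get(src, 0.0)
        for inst, conf in verdicts.items():
            scores[inst] = scores.get(inst, 0.0) + w * conf
            weights[inst] = weights.get(inst, 0.0) + w
    return {i for i in scores if weights[i] and scores[i] / weights[i] >= threshold}

print(sorted(blocklist(my_trust, reports)))  # ['spam.example']
```

Here "spam.example" is blocked because both sources flag it with high confidence, while "edgy.example" stays below the threshold: one source's 0.4 confidence isn't enough on its own, which is the local-weighting behaviour described above.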