The root problem with a lot of Fediverse moderation is one that is well known in the reputation-system literature:

If the cost of creating a new identity is zero then a reputation system cannot usefully express a lower reputation than that of a new user.

A malicious actor can always create an account on a different instance, or spin up a new instance on a throwaway domain. The cost is negligible. This means that any attempt to find bad users and moderate them is doomed from the start: unless detecting a bad user is instant, there is always a gap between a fresh identity existing in the system and it being marked as such.

A system that expects to actually work at scale has to operate in the opposite direction: assume new users are malicious and provide a reputation system for allowing them to build trust. Unfortunately, this is in almost direct opposition to the desire to make the onboarding experience frictionless.

A model where new users are restricted from the things that make harassment easy (sending DMs, posting in other users’ threads) until they have established a reputation (other people in good standing have boosted their posts or followed them) might work.
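As a minimal sketch of that model (the thresholds and field names here are illustrative assumptions, not a proposed standard), capability checks could gate DMs and replies on endorsements from accounts in good standing:

```python
# Sketch of reputation-gated capabilities for new accounts.
# Thresholds and attribute names are hypothetical, for illustration only.
from dataclasses import dataclass


@dataclass
class Account:
    followers_in_good_standing: int = 0
    boosts_from_trusted: int = 0


def can_send_dm(a: Account) -> bool:
    # DMs unlock once a few established users have followed the account
    return a.followers_in_good_standing >= 3


def can_reply_to_strangers(a: Account) -> bool:
    # Posting in other users' threads requires some endorsement:
    # either a boost from a trusted account or a handful of followers
    return a.boosts_from_trusted >= 1 or a.followers_in_good_standing >= 3


new = Account()
print(can_send_dm(new))  # False: new accounts start fully restricted
```

the key property is that a brand-new identity starts with zero capabilities, so churning accounts buys an attacker nothing.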

@david_chisnall this is from the perspective of managing individual user identities, but what about taking this logic to the *instance* level? Don't automatically open-federate with any instance, because the reputation of its moderation is not established; only allow federation between instances with compatible moderation policies. Create tools for federations of instance operators to monitor their population of instances and moderate them out of the federation if they can't/won't adhere to shared moderation standards. That does make the cost of approving a personal single-user instance higher: you can't participate, or are severely rate-limited, until the existing federation's governance body approves, which is a similar validation cost for 1000 users or 1.

@raven667 @david_chisnall

allowlist federation with optional blocklist could work.

the idea is that every instance has a small list of instances with which it federates, and you can adjust the "n-umbra" ( https://neuromatch.social/@jonny/116067489838079786) to get each instance's graph.

0-umbra (depth 0) = only that instance's content

umbra (depth 1) = that instance's content + their federations

penumbra (depth 2) = that instance's content + their federations + the federations' federations

n-umbra (depth n) = that instance's content + their federations + the federations' federations + ... + etc.

if one instance starts federating with a lot of questionable instances, then you can simply cut that instance from your allowlist or reduce the "n-umbra" of that instance, effectively cutting off the bad content in one fell swoop.
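the n-umbra expansion above is just a breadth-first traversal of each instance's allowlist, bounded at depth n. a rough sketch (the graph here is hypothetical, and real instances would exchange allowlists over ActivityPub rather than a dict):

```python
# Compute the "n-umbra" of an instance: all instances reachable within
# `depth` hops through federation allowlists. depth=0 is just the
# instance itself, depth=1 its umbra, depth=2 its penumbra, and so on.
from collections import deque


def n_umbra(start: str, allowlists: dict[str, list[str]], depth: int) -> set[str]:
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        instance, d = frontier.popleft()
        if d == depth:
            continue  # don't expand past the requested depth
        for peer in allowlists.get(instance, ()):
            if peer not in seen:
                seen.add(peer)
                frontier.append((peer, d + 1))
    return seen


# hypothetical federation graph for illustration
allowlists = {
    "a.example": ["b.example", "c.example"],
    "b.example": ["d.example"],
    "c.example": [],
    "d.example": ["e.example"],
}

print(sorted(n_umbra("a.example", allowlists, 0)))  # ['a.example']
print(sorted(n_umbra("a.example", allowlists, 1)))  # adds b and c
print(sorted(n_umbra("a.example", allowlists, 2)))  # adds d
```

cutting a bad instance is then just deleting one key from your allowlist: everything that was only reachable through it drops out of your n-umbra on the next traversal.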

you could also share allowlists and blocklists between instances, or use something like the server covenant to find known good instances to start federating with.