The root problem with a lot of Fediverse moderation is one that is well known in the reputation-system literature:

If the cost of creating a new identity is zero then a reputation system cannot usefully express a lower reputation than that of a new user.

A malicious actor can always create an account on a different instance, or spin up a new instance on a throw-away domain. The cost is negligible. This means that any attempt to find bad users and moderate them is doomed from the start. Unless detecting a bad user is instant, there is always a gap between a fresh identity appearing in the system and its being marked as bad.

A system that expects to actually work at scale has to operate in the opposite direction: assume new users are malicious and provide a reputation system for allowing them to build trust. Unfortunately, this is in almost direct opposition to the desire to make the onboarding experience frictionless.

A model where new users are restricted from the things that make harassment easy (sending DMs, posting in other users’ threads) until they have established a reputation (other people in good standing have boosted their posts or followed them) might work.
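
A minimal sketch of what such a gate might look like. Everything here (the thresholds, the field names, what counts as "trusted") is a hypothetical placeholder, not a concrete proposal:

```python
from dataclasses import dataclass

# Hypothetical thresholds; real values would need tuning.
FOLLOWS_NEEDED = 3  # follows from accounts in good standing
BOOSTS_NEEDED = 5   # boosts from accounts in good standing

@dataclass
class Account:
    follows_from_trusted: int = 0
    boosts_from_trusted: int = 0

    @property
    def established(self) -> bool:
        """An account counts as established once enough accounts in
        good standing have vouched for it by following or boosting."""
        return (self.follows_from_trusted >= FOLLOWS_NEEDED
                or self.boosts_from_trusted >= BOOSTS_NEEDED)

def may_send_dm(sender: Account) -> bool:
    # DMs are the easiest harassment vector, so gate them hardest.
    return sender.established

def may_reply_to_stranger(sender: Account) -> bool:
    # Posting into other users' threads is gated the same way.
    return sender.established

newcomer = Account()
print(may_send_dm(newcomer))       # False: no reputation yet
newcomer.boosts_from_trusted = 5
print(may_send_dm(newcomer))       # True: vouched for by trusted users
```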

@david_chisnall I'm seeing a lot of talk about reputation systems at the moment, applying to open source contributing and social media.

Every time, I'm reminded of how awful it was getting started on Stack Overflow.

I had an account for years before I ground through the painful process of building a reputation.

I'm not surprised that they're dying; it's not just AI. If you build walls in front of new users, they'll give up and go somewhere else.

Much of my angst was that I'd put in the work elsewhere, but there was seemingly no means of transferring that reputation.

But there will always be new people trying to start from scratch, and somehow we need to welcome them whilst keeping out the abusers.

@cpswan @david_chisnall So, either you have a system with anonymity and abuse, or you have a system where new users struggle. It's very naive to believe once again that technology could solve such a NON-technical, social dilemma. Good technology can optimize/minimize these issues, and it should. But it cannot make them go away.

@david_chisnall it couldn't work here, but I like SomethingAwful's approach: you pay a one-time nominal fee ($10USD) to get to post there.

It stops all but the most determined, demented bad actors (there is one specific lunatic who keeps re-registering accounts with names that are all numbers, but apart from him the system works pretty well).

@david_chisnall
Would love your thoughts on moderation @shlee, because here’s a possible shortcut to keeping out bad actors from the beginning: crowdfund a new instance, so costs are covered and users have skin in the game from the start. Anyone done that?

@gusseting @david_chisnall @shlee

social.coop implements this idea, though more out of democratic co-op intent than for moderation purposes

https://join.social.coop/home.html

@david_chisnall

Admins and moderators themselves are often ignored as being part of the threat model.

A key difference between one large instance and a federation of many small instances is that the "social pressure" to ensure good moderation decisions is a lot smaller too.

It isn't always malicious. The long tail of small instances is run by people who are tech enthusiasts first, not trained regulators or PhDs in the contentious topics being moderated.

But the sum of many small such biased decisions leads to a large effect.

Kinda like how money laundering breaks large sums into smaller sums to pass under the regulatory filters.

This is just my opinion, formed by experience, I may be wrong.

@david_chisnall how do new users get discovered if they can't even comment in other threads?

@david_chisnall does it have to be at instance level? Can we let users turn on a rainbow of 'trusted by N community-trusted users', or 'boosted by someone I follow', or 'banned by less than N users I personally trust' or 'lives on an instance known for strict moderation'... filters individually?

Is this a path towards community moderation? What would be an efficient set of filters to implement and update?
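
As a rough illustration, each of those could be a simple predicate the user toggles; all field names, thresholds, and the "every enabled filter must pass" semantics below are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PostContext:
    """Everything a filter needs to judge one incoming post.
    All fields are hypothetical stand-ins for real instance data."""
    author_trusted_by: int       # community-trusted users who trust the author
    boosted_by_a_followee: bool  # someone I follow boosted this post
    banned_by_my_trusted: int    # users I personally trust who banned the author
    strict_instance: bool        # author's instance is known for strict moderation

Filter = Callable[[PostContext], bool]

def trusted_by_at_least(n: int) -> Filter:
    return lambda p: p.author_trusted_by >= n

def banned_by_fewer_than(n: int) -> Filter:
    return lambda p: p.banned_by_my_trusted < n

def boosted_by_followee(p: PostContext) -> bool:
    return p.boosted_by_a_followee

def on_strict_instance(p: PostContext) -> bool:
    return p.strict_instance

def visible(post: PostContext, enabled: list[Filter]) -> bool:
    # Show a post only if it passes every filter the user switched on;
    # "any enabled filter passes" would be an equally defensible choice.
    return all(f(post) for f in enabled)

# One user's personal pick from the rainbow of filters:
mine = [trusted_by_at_least(2), banned_by_fewer_than(3)]
post = PostContext(author_trusted_by=4, boosted_by_a_followee=False,
                   banned_by_my_trusted=0, strict_instance=True)
print(visible(post, mine))  # True: passes both enabled filters
```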

@TomBerend @david_chisnall this has been tried - web of trust. The biggest problem is that trust is subjective. It varies not only by person, but by person+topic. Furthermore, as cliques form it becomes harder for an authentic newcomer to enter a circle of trust. Conflict resolution is hard too: I trust 100 people who trust person A, but I don't trust A. Does person B who trusts me trust A because the 100 people I trust outweigh me? Finally, there's the matter of dealing with account compromise: a highly trusted person's account becomes a sought-after target for threat actors.
@TomBerend @david_chisnall I think any system which depends solely on individual action to block individuals isn't scalable and puts the greatest burden on socially marginalized people, who will end up catching most of the flak. Of course individuals should be able to block whoever they like for whatever reason, and instance moderators can use the number of individual blocks as a signal when reviewing reports, but individuals shouldn't _have_ to do more than be a member of an instance in a federation to take advantage of moderation policies. Telling someone getting hit with abuse that they need to "just get gud" and use a different blocklist or whatever isn't effective or kind.
@david_chisnall a downside of the negative-starting-reputation model is the privacy posture erosion, as it encourages users to stay on the same account by putting a barrier on starting a new account. This is particularly risky for marginalized groups where (pseudo-)anonymity can be a matter of life and death.

@david_chisnall

moderation is always essentially a game of defense, nothing is going to change that

i fear what you're saying will just turn new users off

i could see a posting limit for new accounts though

and it shouldn't be "after 7 days the limits are off", it should be "at the moment they first post, the number of posts they can make in the next hour/day/whatever has a ceiling", because otherwise spammers will just create new accounts and sit on them until they are able to firehose
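
A minimal sketch of that ceiling, with the clock starting at the first post rather than at registration (all limits and names made up for illustration):

```python
import time

# Hypothetical ceilings for a brand-new account's first active period.
CEILING_PER_HOUR = 5
CEILING_PER_DAY = 20

class NewAccountLimiter:
    """The clock starts at the *first post*, not at registration,
    so a spammer gains nothing by letting accounts age quietly."""

    def __init__(self):
        self.first_post_at = None
        self.post_times = []

    def try_post(self, now=None) -> bool:
        now = now if now is not None else time.time()
        if self.first_post_at is None:
            self.first_post_at = now  # quiet aging before this is ignored
        hour_ago, day_ago = now - 3600, now - 86400
        in_hour = sum(1 for t in self.post_times if t > hour_ago)
        in_day = sum(1 for t in self.post_times if t > day_ago)
        if in_hour >= CEILING_PER_HOUR or in_day >= CEILING_PER_DAY:
            return False  # over the ceiling; reject or queue the post
        self.post_times.append(now)
        return True

limiter = NewAccountLimiter()
print([limiter.try_post(now=1000.0 + i) for i in range(7)])
# First 5 posts succeed; the rest are rejected within the same hour.
```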

@david_chisnall even collecting reputation over time is not going to help. Reddit is the best example of that. Many bot accounts lurk around and contribute mediocre reposts and comments for years before being used for something like smear or astroturfing campaigns (thus completely negating account age or reputation filters)...
@grepe @david_chisnall fighting against inauthentic accounts is a constant battle which must be fought but can never be categorically won, but that doesn't mean it's not worth fighting or that better isn't better

@david_chisnall

This feels like one of the problems that contributed to Stack Overflow's decline. They put up high barriers to entry and many people (myself included) never bothered to overcome them.

@david_chisnall How significant a problem is this? It's easy to block a user or a site. It's easy to post follower-only messages.

I've never thought of myself as an anarchist, but I'd like to see us succeed better with our system of offline laws before we start imposing them on the Fediverse. Look how good age verification is going.

I see the Fediverse as being the next iteration of Usenet, only with lots more pictures and dramatically less useful topical organization. (I really regret that lack. It would be better to be able to organize by topic as well as poster, and to have a working notion of posts already seen.) Moderation on Usenet was much the same: killfiles and voluntary association.

I feel like we're in a minimally-acceptable place, and that we risk injuring the system by imposing restrictions on it. Instead, we might develop more user-selectable tools for navigating content.

@mason Read some of the threads from folks receiving harassment discussing how it’s enabled by the current systems.
@david_chisnall Yeah, the description of how people use follower-only replies was dismaying. How prevalent is that kind of toxic behaviour on the Fediverse?
@david_chisnall this is from the perspective of managing individual user identities, but what about taking this logic to the *instance* level? Don't just automatically federate with any instance, because the reputation of its moderation is not established, and only allow federation between instances with compatible moderation policies. Create tools for federations of instance operators to monitor their population of instances and moderate them out of the federation if they can't/won't adhere to shared moderation standards. That does make the cost of approving a personal single-user instance higher: you can't participate, or are severely rate limited, until the existing federation membership governance body approves, which is a similar validation cost for 1000 users or 1.
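
A rough sketch of such an instance-level gate; the registry, statuses, and probation cap below are all hypothetical:

```python
from enum import Enum

class FederationStatus(Enum):
    APPROVED = "approved"    # vetted by the federation's governance body
    PROBATION = "probation"  # new instance, severely rate limited
    REJECTED = "rejected"    # failed or dropped out of shared standards

# Hypothetical registry maintained by the federation's governance process.
REGISTRY = {
    "big.example": FederationStatus.APPROVED,
    "solo.example": FederationStatus.PROBATION,
}

PROBATION_POSTS_PER_DAY = 10

def inbound_policy(instance: str, posts_today: int) -> bool:
    """Accept a post from a remote instance only if the federation
    has vetted it; unknown instances are treated as unvetted."""
    status = REGISTRY.get(instance, FederationStatus.REJECTED)
    if status is FederationStatus.APPROVED:
        return True
    if status is FederationStatus.PROBATION:
        return posts_today < PROBATION_POSTS_PER_DAY
    return False

print(inbound_policy("big.example", 500))    # True: vetted instance
print(inbound_policy("solo.example", 3))     # True: under probation cap
print(inbound_policy("random.example", 0))   # False: never approved
```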

@david_chisnall

We solved that problem years ago, in online forums.

New users don't get the privilege of creating new threads until they've contributed meaningfully to already existing conversations, effectively gaining reputation.

Stack Overflow did something similar.

I'm sure we can find ways to make it work for the fediverse.

@david_chisnall What about an instance reputation system as an alternative, or in addition to a user reputation system?

IMHO, the only viable moderation-at-scale system relies on building communities/instances that self-moderate and/or cross-moderate.

@whyrl It’s definitely an interesting direction but it has problems at both extremes:

  • For small instances, it's roughly analogous to a per-user reputation.
  • Very large instances like mastodon.social have a terrible informal reputation but are too big to block.

@david_chisnall my initial reaction on reading was "oh no"...

But then it became "oh yes".

Solid point.

That as part of the onboarding mechanism makes sense.

And you pulled that out of me with a good argument.🙃

@david_chisnall I have a deeply unpopular opinion. You know how when you get a library card at your local, public library, it allows access to many things? Not only can you loan out books, but they also have DVDs, periodicals, computers you can log into and use the internet, a communal printer, and there's even a crafts room where free googly eyes are supplied. It's kind of amazing: the free-feeling access to lots of things. Why can't a Mastodon account be one more of those things? Let some librarians - who are great with people, and organizing things - be Mastodon moderators as part of their jobs.
#library #librarians