In the interests of starting a more productive dialogue than yesterday's main character was interested in, let's make a #brainstorm thread about design changes to ActivityPub and/or client UI that could actually help address drive-by (often racist) harassment on the fediverse.

Feel free to discuss pros/cons but don't feel an idea needs to be perfect to suggest it. Also since this is a brainstorm don't worry about complexity/implementation cost. If you have a great-but-hard-to-implement idea someone else may think of a way to simplify it.

Note that the underlying problem *is* a social one, so there won't be a purely technological fix! But tech changes can make social remedies easier/harder.

I've got some to start:

1. Have a "protected mode" that users can voluntarily turn on. Some servers might turn it on by default. In protected mode, users whose accounts are less than D days old and/or who have fewer than F followers can't reply to or DM you. F and D could have different values for same-server vs. different-server accounts, and could be customized by each user. Obviously a dedicated harasser can get around this, but it raises the activation energy for block evasion and pile-ons a bit. It would be interesting to review moderation records to estimate how helpful this might or might not be. There could also be a setting to require "follows-from-my-server," although that might be too limiting on private servers. The restriction would be turned off for people you mention within a thread, and could be set to unlimit anyone you've ever mentioned. Would this lock new users out of engagement entirely? If everyone had it on via a default, you'd have to post your own stuff until someone followed you (assuming F=1). One could add "R non-moderated replies" and/or "F favorites" options to soften things; those experiencing more harassment could set higher limits. When muting/blocking/reporting someone who replied to your post, protected mode could be suggested with settings that would have filtered the post you're reporting.
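To make the protected-mode gate concrete, here's a minimal sketch of the check a server might run before delivering a reply or DM. All names and thresholds here are hypothetical illustrations, not anything in ActivityPub or Mastodon today:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical sketch of the protected-mode check described above.
# D and F, the local/remote split, and the "ever mentioned" escape
# hatch are all illustrative settings, not an existing API.

@dataclass
class ProtectedModeSettings:
    min_account_age: timedelta = timedelta(days=7)    # D for remote accounts
    min_followers: int = 1                            # F for remote accounts
    # Looser thresholds for accounts on the same server:
    local_min_account_age: timedelta = timedelta(days=1)
    local_min_followers: int = 0
    ever_mentioned: set = field(default_factory=set)  # always allowed

def may_reply(settings, replier_id, replier_server, my_server,
              replier_created_at, replier_followers, now=None):
    """Return True if the replier passes the protected-mode gate."""
    now = now or datetime.utcnow()
    if replier_id in settings.ever_mentioned:
        return True  # restriction lifted for people you've mentioned
    if replier_server == my_server:
        min_age = settings.local_min_account_age
        min_followers = settings.local_min_followers
    else:
        min_age = settings.min_account_age
        min_followers = settings.min_followers
    account_age = now - replier_created_at
    return account_age >= min_age and replier_followers >= min_followers
```

The per-user customization in the idea above would just mean each user carries their own `ProtectedModeSettings` instead of a server-wide default.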

2. Enable some form of public moderation info to be displayed when both the moderator and the local server opt in. Obviously each server would be able to ignore federated public tags. I'm imagining "banned from server X for reason R (optional link to evidence)" appearing on someone's profile, plus an icon on their avatar in each post viewed by someone on server Y, *if* the mods of server X decide it's appropriate *and* server Y opts in to displaying such tags from server X specifically. Alliances of servers with similar moderation preferences could then have moderation action on one server result in clear warning propagation to others, without the other mods needing to decide whether to also take action immediately. In some cases different moderation preferences would mean you wouldn't take action yourself but would keep the notice up for your users to consider. Obviously the "Scarlet Letter" vibe ain't great, but in some cases it's deserved, and when there's disagreement between servers about that, mods on server Y could either disable a specific tag or disable federation of mod tags from that server in general. Even better shared moderation tools are of course possible.
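The double opt-in described above (server X publishes the tag, server Y chooses whether to display it, with per-tag overrides) could be sketched like this; the field names and config shape are invented for illustration:

```python
# Illustrative sketch of the opt-in display logic for federated
# moderation tags; names are hypothetical, not an existing API.

def should_display_tag(tag, viewer_server_config):
    """Decide whether the viewer's server shows a moderation tag.

    tag: dict like {'issuing_server': 'x.example', 'tag_id': 'ban-123',
                    'reason': 'harassment', 'evidence_url': None}
    viewer_server_config: dict with an opt-in allowlist of tag sources
                          and per-tag overrides.
    """
    cfg = viewer_server_config
    # Server Y must opt in to server X's tags specifically.
    if tag['issuing_server'] not in cfg.get('trusted_tag_sources', set()):
        return False
    # Mods on Y can disable an individual tag they disagree with.
    if tag['tag_id'] in cfg.get('disabled_tags', set()):
        return False
    return True
```

Disabling federation from a server entirely is just removing it from `trusted_tag_sources`; disagreeing about one case is adding that tag to `disabled_tags`.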

3. Different people/groups have different norms around boosting, but currently we only have a locked/public binary. Without any big protocol changes, adding a "prefers boosts/doesn't" setting, which would warn in the UI before a viewer boosts if the preference is "doesn't," could help. This could be set per-post, but could also have defaults, with different values for same-server vs. other servers, or for particular servers. For example, I could say "default to prefer boosts from users on my server but not from users on other servers" or "default to prefer boosts on all servers except mastodon.social." That last option might be harder to implement, I guess.
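The layered defaults in idea 3 (per-post override, then per-server setting, then same-server/other-server defaults) amount to a small resolution order. A hypothetical sketch, with all field names invented:

```python
# Hypothetical resolution of a "prefers boosts" setting: a per-post
# override falls back to per-server, then same/other-server defaults.

def boost_preference(post, author_prefs, booster_server):
    """Return 'prefers' or 'discourages' for a would-be booster's server."""
    if post.get('boost_pref') is not None:          # per-post override
        return post['boost_pref']
    per_server = author_prefs.get('per_server', {})
    if booster_server in per_server:                 # e.g. mastodon.social
        return per_server[booster_server]
    if booster_server == author_prefs.get('home_server'):
        return author_prefs.get('same_server_default', 'prefers')
    return author_prefs.get('other_server_default', 'prefers')

def should_warn_before_boost(post, author_prefs, booster_server):
    # The UI warns only when the resolved preference is "discourages".
    return boost_preference(post, author_prefs, booster_server) == 'discourages'
```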

#ActivityPub #Meta #Harassment

@tiotasram Great thoughts! Your first thought made me wonder about bot farms: an opponent wishing to defame a person could overcome all of the defenses you suggest simply by hiring the services of a bot farm.

Unfortunately I see no defense against that.

@aeveltstra

Bot farms would become known pretty quickly and could be blocked in a shared blocklist, no?

@tiotasram

@alessandro @aeveltstra yeah, that's a hard problem to solve, especially if they do something like use LLMs to break CAPTCHAs and automatically sign up on mastodon.social or some other instance with open sign-ups. But presumably very few of the racist harassers on here are that dedicated. It's definitely not an outright solution to the problem, but it might be able to help a bit?

@alessandro @aeveltstra there is interesting research on this (it's called a Sybil attack in general), but it's a very hard problem.

Might be interesting to be able to define interaction rules based on open-signup vs closed-signup servers and have each server publish which category they're in.

I do think "the perfect is the enemy of the good" here as well. If something like this could make even a few percent dent in racist harassment it might be worth it?
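The open-vs-closed-signup idea above could work by having each server publish its signup mode and letting users apply stricter thresholds to open-signup servers. A toy sketch, with the mode table and threshold values entirely made up:

```python
# Hypothetical: gate interactions on whether the replier's server
# advertises open or closed signups, a field servers would publish.
# The table and thresholds below are invented for illustration.

SERVER_SIGNUP_MODE = {            # as self-reported by each server
    'open.example': 'open',
    'invite.example': 'closed',
}

def reply_threshold_days(replier_server, open_threshold=30, closed_threshold=3):
    """Stricter account-age requirement for open-signup servers."""
    # Treat unknown servers as open-signup, the more cautious default.
    mode = SERVER_SIGNUP_MODE.get(replier_server, 'open')
    return open_threshold if mode == 'open' else closed_threshold
```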

@tiotasram @alessandro @aeveltstra

Another, easier harassment vector is public (or follower-only) data extraction from the fediverse to learn PII about fedizens and spread their opinionated quotes as red meat in extremist (usually right-wing) circles. The festering anger in those crowds then does the work, and fedizens get harassed in all the channels they are known to communicate in.

@smallcircles @alessandro @aeveltstra

This is a much harder harassment vector to defend against, but it might be less common than direct racist replies? You really do need something like locked mode (or better) to defend against this type of stuff, which then comes with the tradeoff that your social circle shrinks considerably... Most people don't want to be forced into locked mode all the time, I suspect.

@tiotasram @alessandro @aeveltstra

Yes, good protection would ideally be built into a social network whose core mechanics are designed to support it, rather than starting from the fediverse we have, which follows well-known (traditional), microblogging-heavy social media models, and trying to bolt on safeguards and (top-down) governance after the fact.

@tiotasram @alessandro @aeveltstra

What is ironic is that we model online social networks in all kinds of ways, except after our own offline social networks. 😅

SX introduces "personal social networking," which gives more attention to that..

https://coding.social/blog/reimagine-social/#personal-social-networking

@tiotasram @alessandro @aeveltstra

If you want to ponder how to #ReimagineSocial then my blog post on #SX and #SocialCoding may be intriguing. It contains a brainstorm section and encouragement to chime in on that too..

https://social.coop/@smallcircles/116379158584600016