In the interests of starting a more productive dialogue than yesterday's main character was interested in, let's make a #brainstorm thread about design changes to ActivityPub and/or client UI that could actually help address drive-by (often racist) harassment on the fediverse.

Feel free to discuss pros/cons but don't feel an idea needs to be perfect to suggest it. Also since this is a brainstorm don't worry about complexity/implementation cost. If you have a great-but-hard-to-implement idea someone else may think of a way to simplify it.

Note that the underlying problem *is* a social one, so there won't be a technological fix! But tech changes can make social remedies easier/harder.

I've got some to start:

1. Have a "protected mode" that users can voluntarily turn on. Some servers might turn it on by default. In protected mode, users whose accounts are less than D days old and/or who have fewer than F followers can't reply to or DM you. F and D could have different values for same-server vs. different-server accounts, and could be customized by each user. Obviously a dedicated harasser can get around this, but it ups the activation energy for block evasion and pile-ons a bit. Would be interesting to review moderation records to estimate how helpful this might or might not be. Could also have a setting to require "follows-from-my-server", although that might be too limiting on private servers. The restriction would be turned off for people you mention within that thread, and could be set to unlimit anyone you've ever mentioned. Would this lock new users out of engagement entirely? If everyone had it on via a default, you'd have to post your own stuff until someone followed you (assuming F=1). One could add "R non-moderated replies" and/or "F favorites" options to soften things; those experiencing more harassment could set higher limits. When muting/blocking/reporting someone who replied to your post, protected mode could be suggested with settings that would have filtered the post you're reporting.
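To make the idea concrete, here's a minimal sketch of the per-user check a server might run before delivering a reply or DM. Everything here is illustrative: the field names (`same_server`, `followers`, `created_at`) and the settings structure are assumptions, not part of ActivityPub or any existing implementation.

```python
from datetime import datetime, timedelta, timezone

def reply_allowed(sender, settings, mentioned_by_op=False):
    """Return True if `sender` may reply/DM under these protected-mode settings."""
    if mentioned_by_op:
        return True  # restriction lifted for people the OP mentioned in the thread
    # Separate D/F thresholds for same-server vs. different-server accounts.
    key = "local" if sender["same_server"] else "remote"
    min_days = settings[key]["min_account_age_days"]
    min_followers = settings[key]["min_followers"]
    age = datetime.now(timezone.utc) - sender["created_at"]
    return age >= timedelta(days=min_days) and sender["followers"] >= min_followers

# Example per-user configuration: stricter about remote accounts.
settings = {
    "local":  {"min_account_age_days": 1, "min_followers": 0},
    "remote": {"min_account_age_days": 7, "min_followers": 1},
}
new_remote = {"same_server": False, "followers": 0,
              "created_at": datetime.now(timezone.utc)}
print(reply_allowed(new_remote, settings))  # brand-new remote account is filtered
```

A dedicated harasser can still age accounts in advance, which is exactly the "ups the activation energy" point: the check raises cost rather than making evasion impossible.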

2. Enable some form of public moderation info to be displayed when both moderator and local server opt-in. Obviously each server would be able to ignore federated public tags. I'm imagining "banned from X server for R reason (optional link to evidence)" appearing on someone's profile & an icon on their PFP in each post viewed by someone on server Y *if* the mods of server X decide it's appropriate *and* server Y opts in to displaying such tags from server X specifically. Alliances of servers with similar moderation preferences could then have moderation action on one server result in clear warning propagation to others without the other mods needing to decide whether to also take action immediately. In some cases different moderation preferences would mean you wouldn't take action yourself but would keep the notice up for your users to consider. Obviously the "Scarlet Letter" vibe ain't great, but in some cases it's deserved, and when there's disagreement between servers about that, mods on server Y could either disable a specific tag or disable federation of mod tags from that server in general. Even better shared moderation tools are of course possible.
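The double opt-in (server X publishes a tag, server Y chooses to display it) could look something like the sketch below. The tag and policy structures are hypothetical; nothing like this exists in the protocol today.

```python
def visible_tags(profile_tags, local_policy):
    """Filter federated moderation tags to those the local server will display."""
    shown = []
    for tag in profile_tags:
        origin = tag["origin_server"]
        policy = local_policy.get(origin)
        if policy is None or not policy["federate_tags"]:
            continue  # server Y hasn't opted in to tags from this server
        if tag["id"] in policy["disabled_tags"]:
            continue  # local mods disabled this specific tag after review
        shown.append(tag)
    return shown

tags = [
    {"id": "t1", "origin_server": "x.example", "reason": "harassment",
     "evidence_url": "https://x.example/mod/t1"},
    {"id": "t2", "origin_server": "z.example", "reason": "spam",
     "evidence_url": None},
]
# This server only federates tags from x.example.
policy = {"x.example": {"federate_tags": True, "disabled_tags": set()}}
print([t["id"] for t in visible_tags(tags, policy)])  # only x.example's tag shows
```

The two `continue` branches correspond to the two escape hatches mentioned above: disabling a specific tag, or disabling federation of mod tags from a server entirely.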

3. Different people/groups have different norms around boosting. Currently we only have a locked/public binary. Without any big protocol changes, adding a "prefers boosts/doesn't" setting which would warn in the UI before a viewer chooses to boost if the preference is "doesn't" could help. This could be set per-post, but could also have defaults and could have different values for same-server or not, or for particular servers. For example, I could say "default to prefer boosts from users on my server but not from users on other servers" or "default to prefer boosting on all servers except mastodon.social." Last option might be harder to implement I guess.
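As a sketch of how a client might resolve that preference before showing a warning: per-post setting wins, then any per-server override, then the same-server/remote default. All field names here are made up for illustration.

```python
def boost_preference(post, author_prefs, viewer_server):
    """Resolve the author's boost preference as seen from `viewer_server`."""
    if post.get("boost_pref") is not None:
        return post["boost_pref"]           # explicit per-post setting wins
    per_server = author_prefs.get("per_server", {})
    if viewer_server in per_server:
        return per_server[viewer_server]    # e.g. "not from mastodon.social"
    if viewer_server == author_prefs["home_server"]:
        return author_prefs["default_local"]
    return author_prefs["default_remote"]

prefs = {"home_server": "a.example", "default_local": "prefers",
         "default_remote": "doesn't", "per_server": {"b.example": "prefers"}}
print(boost_preference({}, prefs, "c.example"))  # falls through to remote default
```

If the resolved value is "doesn't", the UI would show a confirmation dialog before the boost goes through; nothing is enforced at the protocol level.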

#ActivityPub #Meta #Harassment

@tiotasram here's some thoughts from a couple of years ago. https://privacy.thenexus.today/steps-towards-a-safer-fediverse/

FIRES is a mechanism that could be useful for your point #2. https://fires.fedimod.org/

@jdp23 not having good knowledge of the spec, I wonder how hard it would be to create an opt-in mechanism for people to post as a "moderated thread" such that the OP could remove replies from their own thread directly (but only for this special type of thread, and the reply posts would still exist; they just wouldn't be displayed as part of the thread by default).

People could always quote or link to interact externally without permission (usual instance moderation would be the recourse for abuse of that), but it would give people who want it more immediate control to respond to harassment/trolling without having to wait for a mod team response. Those who think this gives too much power to OP could simply not engage with such threads or with people they feel are abusing them.

The ability to remove replies from threads is certainly something that's been requested. It's challenging because Mastodon doesn't really have a concept of "thread" (as opposed to Lemmy, Piefed, NodeBB, and other "threadiverse" apps). In fact today there isn't even the ability to remove a reply to a post, let alone something several posts down in a thread.

(In principle the same kind of mechanism that's used to revoke authorization for a quote post could be used for removing replies to posts. But, there's a challenge that replies on Mastodon today are implemented without checking for any authorization (so there isn't anything to revoke). Mastodon is planning on implementing interaction controls on posts in general (not sure of the time frame), and last I heard they were planning on reusing the same mechanism they use for quote posts ... if so, then it seems to me like the ability to remove replies to posts will be there in general. But that by itself doesn't help address the thread aspects you're talking about.)

@tiotasram

@jdp23 I guess if you wanted to do it right now you'd have to provide metadata about user moderation decisions attached to the OP and let clients try to respect it as they assemble threads. Then it would only work for clients that implemented the feature, which defeats most of the point.
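That client-side workaround might look like this: the OP's post carries a list of reply IDs they've hidden, and a cooperating client filters them while assembling the thread. The `removedReplies` property is hypothetical, which is exactly the problem; non-cooperating clients just ignore it.

```python
def assemble_thread(op_post, replies):
    """Return replies to display, honoring the OP's (hypothetical) removal list."""
    hidden = set(op_post.get("removedReplies", []))
    return [r for r in replies if r["id"] not in hidden]

op = {"id": "https://a.example/posts/1",
      "removedReplies": ["https://b.example/posts/9"]}
replies = [{"id": "https://b.example/posts/9", "content": "troll reply"},
           {"id": "https://c.example/posts/3", "content": "ok reply"}]
print([r["id"] for r in assemble_thread(op, replies)])  # hidden reply filtered out
```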

@tiotasram yeah, making any changes like this in a federated system is really tricky -- you have to take other platforms as well as clients into account.