I gotta admit, I am loving how little of the conversation is just "BlueSky bad! Mastodon good! 🤡" and how much of it is "BlueSky is not ideal for Black users, but let's be for real, neither is Mastodon. We don't have control over BlueSky, but we do have some agency with Mastodon. How can we make Mastodon better? Where are we with improving the issues that make this place unwelcoming for Black users? Clearly, more Black users chose BlueSky than Mastodon. Have we addressed the reasons why?" ♥️🥹

Seriously, I count ~5 conversations in the improvement framing direction. I love to see it! Shame on me for having lower expectations.

I'm unapologetically backing improvements across ActivityPub and ATProto. I back Hachyderm/Mastodon and BlackSky. You can just back both teams! Nothing in the rules says you can't do that!

@mekkaokereke

On the subject of improving Mastodon: this may be an opportunity to rekindle developer attention on the 'Followers Only' dogpiling harassment vector. It felt like some progress on the issue was made back in November, but I don't know where it stands now.

cc: @stefan

@mastodonmigration

I wonder if @scottjenson might be interested in connecting with
@mekkaokereke, that is, if he'd like to share some thoughts.

(Unless you two already spoke, in which case, please disregard!)

@mastodonmigration But yes, that particular issue, I have not heard/seen any updates either.

@scottjenson @mekkaokereke

@stefan @mastodonmigration
Yes, @mekkaokereke and I spoke about how best to present Quote Posts, and his advice had a direct impact on what we shipped. We're about to reach out for another round of discussions with a wide range of people (but I don't think we've contacted Mekka just yet).

It's so tempting to take the engineering approach and think "this feature will do it!" when we likely need to back up and talk about bigger issues such as culture and moderation.

@scottjenson @stefan @mekkaokereke

It is great to see this conversation take off. You did a fabulous job with quote posts, and it would be wonderful if this issue could get the same kind of careful attention. Completely agree that a proper requirements-driven approach is warranted. Thank you.

@mastodonmigration always happy to chat

@scottjenson
@mekkaokereke
@stefan

Great. Just to be really clear. What seems to be the issue is a type of hidden dogpiling or 'brigading.'

A tight group of folks whose purpose is to harass someone follow each other: 'the brigade'.

One of them composes a harassing post specifically targeting someone, who they @ mention, and posts it with "Followers Only" visibility.

The rest of the 'brigade' piles on.

The post is only seen by the targeted person(s) and the harassers.

@scottjenson @mekkaokereke @stefan

There may be other similar issues, but this one clearly seems to be a problem that is often cited.

@mastodonmigration @scottjenson @mekkaokereke @stefan

I think I have client-side improvements for this that effectively hide the harassment.

- https://pachli.app/pachli/2024/11/28/2.9.0-release.html#anti-harassment-controls-for-notifications
- https://pachli.app/pachli/2025/02/28/2.10.0-release.html#anti-harassment-controls-for-conversations-private-mentions

I haven't received much feedback about either, so if you have any, or know anyone who would benefit from these changes, please let me know.

@nikclayton @mastodonmigration @scottjenson @mekkaokereke @stefan That looks like a very good idea. Though, I'd wish for an option to whitelist whole instances (which I trust). And I assume it only blocks notifications about comments, not favourites and boosts?

@audunmb @mastodonmigration @scottjenson @mekkaokereke @stefan

No, it filters notifications about favourites and boosts too. It always allows notifications about:

- posts you interacted with (voting in a poll, a post *you* boosted or favourited has been edited)
- moderation reports you made
- broken follower relationships (e.g., moderators blocked a server with accounts you follow)
- moderation actions on your account
- notifications you get if you're a server admin
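A minimal sketch of that allow-list logic (type names and structure here are illustrative, not Pachli's actual code):

```python
# Notification kinds that are always allowed through, mirroring the
# list above (the string values are illustrative, not exact API values).
ALWAYS_ALLOWED = {
    "poll",                    # a poll you voted in has ended
    "update",                  # a post you boosted/favourited was edited
    "report",                  # moderation reports you made
    "severed_relationships",   # broken follower relationships
    "moderation_warning",      # moderation actions on your account
    "admin.sign_up",           # notifications you get as a server admin
}

def should_show(notification_type: str, sender_passes_filters: bool) -> bool:
    """Privileged notification types always show; everything else
    (including boosts and favourites) must pass the account filters."""
    return notification_type in ALWAYS_ALLOWED or sender_passes_filters
```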

https://github.com/pachli/pachli-android/blob/2a1743b93fb690cf2193ae0210368f2d0612ece5/app/src/main/java/app/pachli/components/notifications/NotificationHelper.kt#L670-L690


@audunmb @mastodonmigration @scottjenson @mekkaokereke @stefan

Worth noting this comes with a startup cost, because Pachli has to fetch details about all the accounts you follow to implement this. If you follow thousands of people (as I was surprised to discover some accounts do) this can easily take multiple seconds.
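To illustrate where that cost comes from, a rough sketch of the client-side pagination (hypothetical helper, not Pachli's actual code):

```python
# Sketch of the startup cost: the client has to page through the full
# "following" list to learn relationships up front.
def fetch_all_following(get_page):
    """get_page(max_id) returns (accounts, next_max_id): one API call
    per page of (typically) 40 accounts; next_max_id is None at the end."""
    accounts, max_id = [], None
    while True:
        page, max_id = get_page(max_id)
        accounts.extend(page)
        if max_id is None:
            return accounts

# Following 4,000 accounts at 40 per page means ~100 sequential requests,
# which is where the multi-second startup delay comes from.
```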

https://github.com/mastodon/mastodon/issues/33066 on the Mastodon side would make this significantly easier.


@audunmb @mastodonmigration @scottjenson @mekkaokereke @stefan

Filtering notifications about boosts and favourites blocks a possible harassment vector: someone signs up with an account whose name is a slur, then starts boosting/favouriting posts from targeted accounts so they see the notification containing the slur.

@nikclayton @mastodonmigration @scottjenson @mekkaokereke @stefan

This is a good step, although not enough by itself. E.g., the 30-day account-age filter raises the cost to abusers by forcing them to create and age accounts before using them, which creates a window for mods to detect patterns of sock-puppet creation and bulk-suspend them.

Do mods have tools for detecting those patterns of account creation?
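The age check itself is simple; an illustrative sketch (not the actual Pachli implementation):

```python
from datetime import datetime, timedelta, timezone

# Illustrative minimum-account-age filter: notifications from accounts
# younger than the threshold are hidden, forcing abusers to create and
# "age" sock puppets before they can be used.
MIN_AGE = timedelta(days=30)

def passes_age_filter(created_at: datetime, now: datetime) -> bool:
    """True if the sending account is at least MIN_AGE old."""
    return now - created_at >= MIN_AGE
```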

@nikclayton @mastodonmigration @scottjenson @mekkaokereke @stefan It would be helpful to support these sorts of filters server side, in a way that can be easily reused by other Fediverse server codebases as well, and/or client-side in other clients.

@david42 @mastodonmigration @scottjenson @mekkaokereke @stefan

True. But also, richer information in the API results would make it easier for clients to innovate around this functionality.

And even without it, it is possible for clients to do this, as Pachli demonstrates. I started this last year because a lot of the online discourse about it seemed to be about how anti-harassment features could only work server-side, which struck me as a significant failure of imagination.

@mastodonmigration @scottjenson @mekkaokereke @stefan

Thanks for the explanation. That looks like really vile bullying tactics.

@mastodonmigration @stefan
Can you help me understand how followers-only posts are harder for moderation to catch? I understand they are not public, but they can still be reported? I'm trying to tackle this problem from the moderation angle, as a server block helps so many more people (if we can pull it off)

@mastodonmigration @stefan My other question is accounts like this seem likely to get blocked from your server for other reasons. They would have to use this trick 100% of the time to avoid detection.

I'm NOT saying this isn't happening. I'm just trying to understand how these accounts behave so we can find, I hope, an even better way of shutting them down.

@scottjenson @mastodonmigration @stefan followers-only posts require the *victim* to report the attack. Depending on the volume and ferocity of the harassment, the victim may not be in a position to do this (either due to harassment across several channels, or unawareness of reporting and moderation options).

As an example, I jumped into this thread to help out, but I wouldn't have seen it at all if it were "followers only".

I can see the positive value in being able to restrict a discussion, but it seems like "all my friends plus one more" might be a dangerous model.

Take all this with a grain of salt, as I haven't actually been subject to this kind of abuse, and am privileged in a bunch of ways which probably shield me from having to consider the worst of it.

@evana @mastodonmigration @stefan This is very helpful, thank you. The workaround suggested is to have a new filter that blocks all followers-only posts that also include you. For this to be effective, it would need to default to ON, which might rub many the wrong way. Defaulting to OFF means victims need to find and turn this on (which seems unlikely)

I'm trying to brainstorm other solutions that offer more protection (but I'm coming up short). Are there any others?

@evana @mastodonmigration @stefan One additional thought. If we default this filter to on and it DOES fire, this could be a moderator visible event?

The other solution is just to somehow flag any followers-only posts that @ include you in a way that makes reporting it a one-click event for the victim.

@scottjenson Not sure if I understand the question myself. Do you mean whether someone posting a followers-only + 1 post would automatically flag that post for moderation?

That's a tricky one. Now that I think about it, I might've actually received replies to my posts that were followers-only+1 (me). No abuse, just regular replies, I suppose the person wanted a bit more privacy?

@evana @mastodonmigration

@stefan @evana @mastodonmigration Exactly, that's why I asked the second question: make it a simple one-click for the target to report it.

The filter (if it defaults to off) isn't good enough. Most people just won't know how to turn it on.

@scottjenson

I know we're very early into the conversation, and I'm sure more ideas will come up, but so far everything is just telling me that followers-only+1 posts should not be possible and rejected as "+1 is not a follower".

The workarounds are getting confusing.

@evana @mastodonmigration

@stefan @evana @mastodonmigration I agree! But you're pointing out one of the pros/cons of the fediverse. Restricting followers-only to not have a +1 is a client limitation, something that could be avoided with a custom client.

Repeat after me: "Federation makes everything harder"

@scottjenson Right, but could the message get rejected by the server when it sees a "followers only" visibility, and the recipient is not a follower?

Almost like a quick, temporary auto-block of the sender.

@evana @mastodonmigration
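That server-side check could be sketched like this (hypothetical code, not Mastodon's actual implementation; Mastodon's API calls followers-only visibility "private"):

```python
def non_follower_mentions(visibility: str, mentioned: set[str],
                          followers: set[str]) -> set[str]:
    """For a followers-only ("private") post, return the mentioned
    accounts that are NOT followers. A server could reject the post,
    or skip delivery to those accounts, if this set is non-empty."""
    if visibility != "private":
        return set()
    return mentioned - followers
```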

@stefan @evana @mastodonmigration yes, if this is a server feature and not a client one, then my concern goes away.

But I can 100% guarantee you that there is a small group of people that do this for very positive and supportive reasons that will be quite miffed if we do this (which just might be necessary!)

This is why I'm trying to find other ways of looking at this problem. I want to solve it! Just trying to find the right lever.

@scottjenson It just sounds like we might need to turn the conversation around and instead of asking how to mitigate this feature's potential for abuse, a better question might be, why is this useful?

If it's to limit a post's visibility, maybe "quiet public" is a better option?

@evana @mastodonmigration

@scottjenson

Just trying to imagine this playing out IRL. Someone pulls me to the side to talk to me, surrounds me with their buddies. Now, they might all be very nice people. But this situation just sounds inherently threatening.

@evana @mastodonmigration

@stefan @evana @mastodonmigration These are indeed the harder questions to ask! I'm glad you're asking them

@scottjenson @mastodonmigration @stefan I don't have good ideas yet, but a couple probably-obvious observations:

* New and less-technical users are probably more likely to completely exit the platform due to harassment
* Experienced and technical users will probably have connections and better ability to bring tools into play
* Followers-only specifically cuts the participants off from any network other than the original poster's. This probably needs to be communicated _really clearly_
* I can see followers-only as a good solution for sensitive discussions, but you want the recipients to understand that the information is sensitive so they don't allude to it/repost it without that privacy
* There's a tension between privacy defaults and broadening the web of social connection and discovery. The most private default would remove a lot of the social network value, so you'll rarely get a clear "win" without at least some damage to other cases

@evana @mastodonmigration @stefan Agree with your points, but we're still circling around the question of how likely this is to happen (and how)

I DON'T want to imply I don't believe people who say it happens; I'm just trying to understand the broader flow, i.e. how can a brigade operate in secrecy? It seems very fragile, as they likely do other things that get them banned. Have we seen a large-scale brigade that worked this way for a while? What causes them to trip up? Let's focus on that.

@scottjenson @evana @mastodonmigration @stefan

> they likely do other things that get them banned

not necessarily? think of a messaging app that supports group messages. you create a group chat with your buddies and one other person. the person being added can:

- not accept the invite
- remove themselves from the group
- block people in the group
- report messages in the group

in the last scenario, mods do not have full context. the user has to attach any relevant context.

@scottjenson @evana @mastodonmigration @stefan but because there is a private aspect, you would be free to act differently than you would otherwise act in public, and your only avenue for consequences would be *if* the added person reports y'all.

so the gap here is that people aren't being made aware that they can/should report such harassment. i don't think doing away with private posts solves anything.

one thing that could be done is to filter followers-only like mentioned-only, but...

@scottjenson @evana @mastodonmigration @stefan ...such a change might be unexpected if not communicated appropriately ahead-of-time. in effect, it would collapse the "public"/"followers"/"direct" into just "public"/"not public".

you'd probably also want the filter to be a bit smarter about what counts as "unsolicited", because even public mentions can be "unsolicited".

and of course i'd be remiss to leave out my usual advocacy for allowing people to create explicit contexts which they control!

@scottjenson @mastodonmigration @stefan I suspect in some cases, the brigades happen on servers with loose moderation (whether that's intentional, understaffed mods, cultural gaps, etc).

And, as mentioned elsewhere, if the target is new or a frequent target of harassment, they may not trust the moderation team. Right now, I think moderation teams generally don't have visibility into this activity -- this means they probably handle it less effectively, *increasing* the targets' distrust of moderation. (And, if we assume the targets are generally more marginalized, they may have a learned distrust of authority to begin with.)

I'm not sure that removal is the only option (and I don't understand which bits are client and server here), but it seems like both a client "do you want to participate in this conversation" and a moderator "user X was added to a conversation in this pattern" as default behaviors would be a starting point.

@[email protected] @[email protected] @[email protected] @[email protected] would it help if selecting "followers only" didn't also include any mentioned people who are not followers? So mentioned people are only shown the post and notified if you post publicly or if they already follow you.

@mayel @evana @scottjenson @mastodonmigration @stefan It would only make it a little more difficult. The 'brigading' is now a two-step process for every participant.

The vector is really the 'followers-only' option. It is what allows the secret preparation. If we want to keep it, there's no way around sending a mandatory copy to a designated 'brigading moderator'.

@scottjenson @stefan

This was in another thread discussing the issue. Not sure if what it reports is accurate vis-à-vis moderator limitations. Could it be a GDPR issue?

"This technique is insidious in another way too. As a moderator, you can't look at non-public posts unless someone specifically reports them, so your ability to understand the context is severely limited. Sometimes you literally can't see the harassment even when you go looking for it."

https://sfba.social/@EverydayMoggie/115330983420918538


@mastodonmigration @stefan The "math" checks out. I don't deny this is happening. At the same time, none of us in this thread say we've ever experienced it. We can't fight a problem we don't properly understand.

What would help is talking to people/moderators who have had to deal with this. I have to assume this is a fairly common problem, so finding people to talk to should be fairly easy, I would hope.