Lead Lemmy developer [email protected] Appears to Have Had Their Account Compromised After Moderation Actions Raise Serious Concerns

https://lemmy.world/post/43978041


Cross-posted here as it was highlighted that the individual is a lead Lemmy developer, raising serious concerns about the direction of Lemmy, a leading Fediverse platform, and the impact on future user adoption.

Hi,

There have been some rather concerning actions taken by an admin of the [email protected] community, [email protected]. Based on recent moderation decisions and a complete lack of communication, it seems their account may have been compromised, or, even more concerning, these actions are deliberate.

1. **Erroneous Rule 4 Enforcement, First Instance:** a guide posted to [email protected], despite receiving many positive votes and comments, was removed under Rule 4:

   > If you have a question, please try searching for previous discussions, maybe it has already been answered

   However, this post was a guide and not a question, so Rule 4 does not apply. Attempts were made to reach out for clarification, but there has been no response, despite the account showing recent activity.

2. **Erroneous Rule 4 Enforcement, Second Instance:** an on-topic informational video, also posted to [email protected] and also receiving many positive votes, was again removed under Rule 4. This post was again not asking a question, so again Rule 4 does not apply. Again, no explanation has been given.

3. **User Bans in Completely Unrelated Communities:** user bans of over a month have been applied not only in [email protected] but in several completely unrelated communities:

   - [email protected]
   - [email protected]
   - [email protected]
   - [email protected]
   - [email protected]
   - [email protected]
   - [email protected]
   - [email protected]
   - [email protected]
   - [email protected]
   - [email protected]

   This is especially concerning given that the above posts have no relation to these communities, and no recent activity has been made in any of them, meaning none of their rules could have been broken. Moreover, a public track record of positive contributions across Lemmy has been established, with many positive votes and comments received over a sustained period. Given all of this, the bans appear highly disproportionate, only adding to the growing concerns around moderation practices.

4. **Repost with Disclaimer Removed:** a repost of the guide, with a disclaimer explaining that the original removal appears to be in error and that attempts to contact the admin have failed, again received many positive votes and constructive comments, yet was once again removed. Again, no explanation has been given.

Given all of this, it's hard to avoid the conclusion that something is not right. Mistakes in moderation happen, but the complete lack of communication, the disproportionate actions, and the ongoing bans from unrelated communities raise serious concerns. It seems the account is most likely compromised, or, even more concerning, these actions are deliberate.

Has anyone else experienced similar issues with this admin, or does anyone have more insight into what might be happening?
TL;DR: Admin [email protected] of [email protected] appears to be making seriously concerning moderation decisions, including erroneous enforcement of Rule 4 in at least two separate instances, failing to respond to messages, and applying user bans in completely unrelated communities, despite the affected user's long track record of positive contributions across Lemmy. This has led to speculation that the account is likely compromised, or, even more concerning, that these actions are deliberate. Any thoughts or similar experiences would be appreciated.

Cross post with https://lemmy.world/post/43944126

This is literally how he always acts. There’s a reason that many instances do not federate with lemmy.ml, his instance.

He’s incredibly pro-communist, pro-Russia, and pro-China, and will ban you for even mentioning that something could be foreign propaganda.

At the end of the day, hopefully your instance builds its server code from source and inspects it to make sure there’s nothing nefarious. It would be nice to use a decentralized Reddit-like platform not built by him, but no one else has the time, resources, ability, and dedication to step up.

> It would be nice to use a decentralized Reddit-like platform not coded by him, but no one else has the time, resources, ability, and/or dedication to step up.

You literally mentioned piefed in your next sentence.

Tbf Piefed also does have opinionated moderation literally hardcoded into the source code.

It’s pretty easy to modify since it’s Python and not Rust, but still not great.

Ok, what opinionated moderation?

There are some filters built into PieFed, disabled by default, that the main dev uses on their instance. It’s not really even remotely the same thing, or as controversial, but it’s the closest thing PieFed has, and it gets brought up regularly because of it.

Honestly I would consider hardcoded shadowbanning just as bad.

Just because I agree with the PieFed dev’s opinions a little more doesn’t mean that I’d support shadow-banning someone because the trivially evaded checks caught a false positive in the crossfire. PieFed’s auto-moderation/social scoring is pretty much the textbook definition of security by obscurity: the second anyone knows how it works, it’s useless. It will pretty much exclusively catch people who just wanted to post a harmless meme or something.

At least (for now) Dessalines isn’t hardcoding his tankie beliefs into Lemmy’s source code.

Piefed doesn’t shadowban:

The reputation system doesn’t shadowban content. You don’t get comments silently autoremoved for having a low reputation. You don’t get throttled either.

https://lemmy.zip/post/58102975/24342240

Piefed admin settings that allow to enable or disable content filters (they are disabled by default, see body for details) - Lemmy.zip

Edit about the 4chan image blocking, I asked Rimu directly:

> I wrote a long message about how that checkbox only notifies about federated posts.
> So the difference is for local posts it blocks the creation of the post entirely, but for federated posts it just notifies the admin.

https://chat.piefed.social/#narrow/channel/3-general/topic//near/10529

Original message: https://codeberg.org/rimu/pyfedi/src/commit/b168820a089ff6e835059f0d806f81b612987a79/app/models.py#L3513

A few people in the other thread assumed that it was required to fork the code to disable those filters. That’s not the case: the filters can be configured, and are off by default.

To hide the reputation system, here’s a line of CSS that admins can add in the admin area to hide it for every user: https://piefed.social/c/piefed_css/p/1722358/hide-red-triangle-warnings-on-accounts-with-bad-reputation

That CSS line can also be used by any user wanting to hide the score at the user level.
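For what it’s worth, the local-versus-federated split Rimu describes could be sketched roughly like this. All function and field names here are hypothetical illustrations, not pyfedi’s actual code:

```python
def handle_filter_match(post: dict, notify_admin) -> bool:
    """Sketch of the described behavior: return True if the post may be created.

    `post` and `notify_admin` are hypothetical stand-ins, not PieFed's real API.
    """
    if post.get("is_local"):
        # Local posts: a filter match blocks creation of the post entirely.
        return False
    # Federated posts: the post goes through, but the admin gets notified.
    notify_admin(f"Federated post {post.get('ap_id')} matched a content filter")
    return True
```

So the checkbox doesn’t censor remote content; it only surfaces it for admin review, while the same match is a hard block for local submissions.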

Thanks for clarifying, I guess I misremembered the shadowbanning part. I think I was mixing together the fact that reputation isn’t really transparent (a user’s reputation can change even by attempting to upload an image that gets flagged, and the vague error means they’ll probably try multiple times without realizing they’re being moderated) and the fact that communities can auto-ban any user whose global reputation is low enough.

I still think the security-by-obscurity approach to moderation is inherently flawed though, and I hate to imagine how the dev approaches actual account security if that’s their approach to moderation.

The code is open source. Nothing is obscured. The main objective is to identify trolls and toxic users who won’t bother looking at the code.

It’s not a silver bullet, it’s just supposed to grab the low hanging fruit, but it’s fine for me

The code is open source. Nothing is obscured.

“Security-by-obscurity” is a phrase used for any measure that is useless once you know how it works. In this case it’s hoping that a troll doesn’t know about the specific hardcoded rules. None of the rules in PieFed actually work if you are at all aware of them.

Yet I still regularly see toxic users being flagged as such, with 95% accuracy. Either they don’t care, or they don’t know enough about the system to bypass it.

There were a few. This isn’t exhaustive, since it’s been a few months since I looked through the source code; some of this might have changed, and there are also a few other checks that I’m forgetting:

  • 4chan screenshots (specifically, anything that OCR identified as containing “Anonymous #(number)”) were banned. Honestly, this one is fine as a toggle, but I think for a while it was just on by default in the code.
  • Any community whose name contained specific words was blocked at the instance level. I think “meme” was there, along with a few swear words and a few carryover Reddit meme community names (196, I think nottheonion was also there, anything with “shitpost” in the name, etc.).
  • There’s a hidden karma/social-credit score, based on a user’s interactions and net total karma, that is kept from the user and gets impacted by any moderation action, including some of the automated hardcoded ones (e.g. even trying to upload an image that gets flagged by the hardcoded checks).
  • Users with a low enough net score get shadow-banned without being informed.
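The kinds of checks listed above could be sketched roughly like this. Every name, pattern, and threshold below is a hypothetical illustration for discussion, not PieFed’s actual implementation (which, as noted elsewhere in the thread, is configurable and off by default):

```python
import re

# Illustrative blocklist fragments, not the real list from pyfedi.
BLOCKED_NAME_PARTS = {"meme", "shitpost"}
# Matches OCR text like "Anonymous #84512903", as described above.
FOURCHAN_PATTERN = re.compile(r"Anonymous\s+#\d+")

def looks_like_4chan_screenshot(ocr_text: str) -> bool:
    """Check OCR output of an uploaded image against the 4chan pattern."""
    return bool(FOURCHAN_PATTERN.search(ocr_text))

def community_is_blocked(name: str) -> bool:
    """Block any community whose name contains a listed word."""
    return any(part in name.lower() for part in BLOCKED_NAME_PARTS)

class User:
    """Toy model of a hidden reputation score with a shadow-ban threshold."""

    SHADOW_BAN_THRESHOLD = -10  # illustrative, not a real value

    def __init__(self) -> None:
        self.reputation = 0  # never shown to the user

    def penalize(self, amount: int = 1) -> None:
        # Any moderation action, including automated ones such as a
        # flagged image upload attempt, lowers the hidden score.
        self.reputation -= amount

    @property
    def shadow_banned(self) -> bool:
        return self.reputation <= self.SHADOW_BAN_THRESHOLD
```

This also illustrates the security-by-obscurity complaint from earlier in the thread: once the pattern and the blocklist are known, trivially renaming a community or cropping the “Anonymous #…” header defeats both checks.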