We released Coop v0 a month ago, and we have heard a lot of feedback about what would make it easier to use, explore, adopt, and contribute to. Check out our Coop simplification plan; we welcome any and all ideas and suggestions!

https://github.com/roostorg/coop/discussions/123

#OpenSource #TrustAndSafety #OnlineSafety

Coop Code Simplification Plan · roostorg coop · Discussion #123

Since Coop v0 was released, community members have raised consistent feedback multiple times on the architecture and complexity of the codebase for what the project is: Deployment complexity Coop r...

GitHub

Although the DMCA and DSA are important tools, this report shows that they are not immune to misuse—particularly as bad actors increasingly weaponize AI to exploit them.

https://transparency.automattic.com/2026/02/23/transparency-report-update-july-december-2025/ #TrustAndSafety #transparency #reports

Transparency Report Update: July – December 2025

The 25th edition of our biannual transparency report, covering the period from July through December 2025 is now available. The work of Automattic’s Trust & Safety team is grounded in key princ…

Transparency Report

Online safety shouldn’t be a competitive advantage; it should be a shared foundation. That’s why ROOST exists.

Discord donated Osprey—a real-time rules engine for trust & safety ops—to ROOST. The community refined it. Discord reintegrated the improved version. Now it’s freely available to any platform. That's open source working as intended!

The momentum is great: 360M+ users are now in an ecosystem where open source safety tooling is actively working on their behalf. Platforms like Bluesky and Matrix are already running Osprey, too.

Every two weeks, contributors from across the industry shape what comes next in ROOST’s public working group. Read our board chair Clint Smith’s blog post to learn more: https://discord.com/blog/how-roost-is-advancing-online-safety

#OpenSource #TrustAndSafety #OnlineSafety

How ROOST is Advancing Online Safety

The threat landscape online has shifted dramatically. Many online platforms are left to reinvent safety tools from scratch. That’s the gap ROOST was built to close — and it’s why open-sourcing battle-tested tools like Osprey matters so much.

Microsoft has a new plan to prove what’s real and what’s AI online

A new proposal calls on social media and AI companies to adopt strict verification, but the company hasn’t committed to following its own recommendations.

MIT Technology Review
Ctrl-Alt-Speech: Writing Some Wrongs

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw. Subscribe now on Apple Podcasts, Overcast, Spotify, …

Techdirt

Hello again, Forkiverse! 🍴 The experiment continues. I’ve just finished connecting my Digital Chief of Staff, Hatch, to the machine. We’re officially moving from 'aspiring API tinkerers' to 'active operators.' 🦞

I spend my days in Trust & Safety, but tonight I’m just thrilled that baseball is back. Let’s go @SFGiants! ⚾️

#introduction #forkiverse #SFGiants #TrustAndSafety #SelfHosting #Pickleball

Can we have a safer internet without constant censorship? 🔍

Our latest research says yes - but we need better tools. We developed a more precise classifier that distinguishes between speech that is merely offensive and speech that actually incites violence.

We tested it on 3.5M Gab posts and found that AI (with the right prompting) is getting much better at understanding that crucial gray area.

Details for the #TrustAndSafety and #SocialComputing community: https://blog.corifaklaris.com/2026/03/04/developing-a-precise-approach-to-identifying-inciting-speech-online/
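The post doesn't publish its prompt or model, so here is a minimal, purely illustrative sketch of the prompting-based approach it describes: ask a language model to pick one label from a closed set that separates merely offensive speech from incitement, then map the free-text completion back onto that set. Every name here (build_prompt, parse_label, the label wording) is hypothetical, not the authors' actual method.

```python
# Hypothetical sketch of a prompt-based incitement classifier.
# The label set and prompt wording are assumptions, not the study's.

LABELS = ("neutral", "offensive", "inciting")

def build_prompt(text: str) -> str:
    """Compose a classification prompt that asks the model to separate
    merely offensive speech from speech that incites violence."""
    return (
        "Classify the post into exactly one label.\n"
        "- offensive: insulting or profane, but no call to harm\n"
        "- inciting: encourages, threatens, or calls for violence\n"
        "- neutral: neither\n"
        f"Post: {text!r}\n"
        "Label:"
    )

def parse_label(completion: str) -> str:
    """Map a raw model completion onto the closed label set,
    defaulting to 'neutral' when the answer is unrecognised."""
    answer = completion.strip().lower()
    for label in LABELS:
        if answer.startswith(label):
            return label
    return "neutral"
```

With any chat-completion client, usage would look like `parse_label(model(build_prompt(post)))`; constraining the model to a closed label set and defaulting unrecognised answers to "neutral" is one common way to keep the gray area from collapsing into over-removal.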

Developing a Precise Approach to Identifying Inciting Speech Online - Cori Faklaris' blog - HeyCori

The discourse around social media moderation often centers on the idea of “censorship” and protecting free expression vs. protecting conversation health via account bans (what my generation dubbed “Facebook jail”). This framing can make the choices in moderation seem binary. However, for those of us either navigating or studying polarized opinions in online spaces, it … Continue reading "Developing a Precise Approach to Identifying Inciting Speech Online"

Cori Faklaris' blog - HeyCori
neondystopia.world is an instance that values freedom of expression and equity above all else, something that I thought a devout Christian like yourself would have valued. After all, Jesus too was persecuted for his expression. We have fair and equitable guidelines for our users and moderators, ensuring that any reports filed are compared against our own rules and those of the instance the report originated from. We even investigate reports on staff independently to ensure they are peer reviewed and free from bias. We go above and beyond what most do to ensure that we maintain a comfortable and inclusive environment by carefully balancing free expression with fairness and equity, choosing to deal with people on a case-by-case basis rather than blanket-banning entire instances.

I don't take slander of our instance lightly, and wanted to leave you with a scripture in hope that you may reflect on your decision.

Matthew 7:1-5 “*Judge not, that you be not judged. For with the judgment you pronounce you will be judged, and the measure you give will be the measure you get. Why do you see the speck that is in your brother’s eye, but do not notice the log that is in your own eye? Or how can you say to your brother, ‘Let me take the speck out of your eye,’ when there is the log in your own eye? You hypocrite, first take the log out of your own eye, and then you will see clearly to take the speck out of your brother’s eye.*"

Tags: #FediAdmin, #MastoAdmin, #ServerAdmin, #Moderation, #Moderators, #FediMods, #MastoMods, #TrustAndSafety, #SocialMedia, #PSA, #FediMeta, #Fedi, #Federation, #Fediverse, #Fediblock, #Blocklist.

@bishop @xaetacore @alex @saatja @[email protected]

RE:
https://neondystopia.world/notes/aj0av1o22mm1016x

Content moderation tooling presents a collection of user interface and user experience challenges unlike any other software.

Design isn’t about flair or optimising engagement; it’s about protection.

I wrote about some considerations: https://vale.rocks/posts/moderation-tooling-design

#UI #UX #TrustAndSafety

Design Considerations for Moderation Tooling

Ensuring protection of the protectors.

Vale.Rocks