We released Coop v0 a month ago, and we have heard a lot of feedback about what would make it easier to use, explore, adopt, contribute to, etc. Check out our Coop simplification plan; we welcome any and all ideas and suggestions!
Although the DMCA and DSA are important tools, this report shows that they are not immune to misuse—particularly as bad actors increasingly weaponize AI to exploit them.
https://transparency.automattic.com/2026/02/23/transparency-report-update-july-december-2025/ #TrustAndSafety #transparency #reports
Online safety shouldn’t be a competitive advantage; it should be a shared foundation. That’s why ROOST exists.
Discord donated Osprey—a real-time rules engine for trust & safety ops—to ROOST. The community refined it. Discord reintegrated the improved version. Now it’s freely available to any platform. That's open source working as intended!
The momentum is great: 360M+ users are now in an ecosystem where open source safety tooling is actively working on their behalf. Platforms like Bluesky and Matrix are already running Osprey, too.
Every two weeks, contributors from across the industry shape what comes next in ROOST’s public working group. Read our board chair Clint Smith’s blog post to learn more: https://discord.com/blog/how-roost-is-advancing-online-safety
Ctrl-Alt-Speech: Writing Some Wrongs
https://fed.brid.gy/r/https://www.techdirt.com/2026/03/12/ctrl-alt-speech-writing-some-wrongs/
Hello again, Forkiverse! 🍴 The experiment continues. I’ve just finished connecting my Digital Chief of Staff, Hatch, to the machine. We’re officially moving from 'aspiring API tinkerers' to 'active operators.' 🦞
I spend my days in Trust & Safety, but tonight I’m just thrilled that baseball is back. Let’s go @SFGiants! ⚾️
#introduction #forkiverse #SFGiants #TrustAndSafety #SelfHosting #Pickleball
Can we have a safer internet without constant censorship? 🔍
Our latest research says yes - but we need better tools. We developed a more precise classifier that tells the difference between someone being offensive and someone actually inciting violence.
We tested it on 3.5M Gab posts and found that AI (with the right prompting) is getting much better at understanding that crucial gray area.
Details for the #TrustAndSafety and #SocialComputing community: https://blog.corifaklaris.com/2026/03/04/developing-a-precise-approach-to-identifying-inciting-speech-online/

The discourse around social media moderation often centers on the idea of “censorship” and pits protecting free expression against protecting conversation health via account bans (what my generation dubbed “Facebook jail”). This framing can make moderation choices seem binary. However, for those of us either navigating or studying polarized opinions in online spaces, it …
Content moderation tooling presents user interface and user experience challenges unlike any other software.
Design isn’t about flair or optimising engagement; it’s about protecting people.
I wrote about some considerations: https://vale.rocks/posts/moderation-tooling-design