MIT study on decentralized moderation as a model for countering disinformation:

"This work shows that a decentralized approach to moderation can lead to higher content reliability on social media," says Jahanbakhsh. "This approach is also more efficient and scalable than centralized moderation schemes, and may appeal to users who mistrust platforms," she adds.

#Mastodon

https://news.mit.edu/2022/social-media-users-assess-content-1116

Empowering social media users to assess content helps fight misinformation

MIT researchers built a prototype social media platform to study the effects of giving users more agency to assess content for accuracy and control the posts they see based on accuracy assessments from others. Users were able to make accurate assessments, despite having no prior training, and they valued and utilized the assessment and filtering tools they designed.

MIT News | Massachusetts Institute of Technology
@tchambers Reminds me of Slashdot, where the users are all the moderators, and part of the site's 'social agreement' is that you'll help moderate when and where you can.
@erswippe @tchambers thanks for reminding me that slashdot is where I first encountered this moderation structure.

@andrewlinke @erswippe @tchambers For a very good paper on the Slashdot distributed moderation model, see

‘Slash(dot) and Burn: Distributed Moderation in a Large Online Conversation Space’

Cliff Lampe, Paul Resnick
School of Information, University of Michigan
(2004 preprint of an ACM CHI 2004 paper)

http://www.presnick.people.si.umich.edu/papers/chi04/LampeResnick.pdf

@Roundtrip @andrewlinke @erswippe @tchambers when I was paying attention (a very long time ago), Slashdot worked remarkably well, at least in the sense that you could go in and read with the filter set to hide posts scored below, say, 4 or 5, and get a decent read of the room: "here are some curated, most-popular / consensus reactions to the topic at hand." I'm a little surprised no one else has (apparently) tried this technique since.
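The threshold filter described above is simple enough to sketch. This is a minimal illustration, not Slashdot's actual code; the field names and sample comments are made up for the example. On Slashdot, moderators nudge each comment's score within a fixed range (roughly -1 to 5), and each reader picks a personal threshold:

```python
# Hypothetical comment data: each comment carries a moderation score
# accumulated from user-moderators (Slashdot scores run about -1 to 5).
comments = [
    {"text": "Insightful analysis of the protocol", "score": 5},
    {"text": "Me too", "score": 1},
    {"text": "Off-topic flamebait", "score": -1},
    {"text": "Informative link with context", "score": 4},
]

def filter_by_threshold(comments, threshold):
    """Return only comments whose moderation score meets the reader's threshold."""
    return [c for c in comments if c["score"] >= threshold]

# Reading at threshold 4 hides everything but the highly-moderated comments,
# giving the curated "read of the room" described above.
visible = filter_by_threshold(comments, 4)
print([c["text"] for c in visible])
```

The key design point is that the platform stores one shared set of scores but every reader applies their own cutoff, so filtering is a per-user view rather than a global removal decision.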