MIT study on decentralized moderation as a model for countering disinformation:

"This work shows that a decentralized approach to moderation can lead to higher content reliability on social media," says Jahanbakhsh. "This approach is also more efficient and scalable than centralized moderation schemes, and may appeal to users who mistrust platforms," she adds.

#Mastodon

https://news.mit.edu/2022/social-media-users-assess-content-1116

Empowering social media users to assess content helps fight misinformation

MIT researchers built a prototype social media platform to study the effects of giving users more agency to assess content for accuracy and control the posts they see based on accuracy assessments from others. Users were able to make accurate assessments, despite having no prior training, and they valued and utilized the assessment and filtering tools they designed.

MIT News | Massachusetts Institute of Technology
@tchambers this is very interesting and I wonder how it applies “in the wild”. For example, on this platform, in what ways do we already “provide and view structured accuracy assessments on posts, indicate others they trust to assess posts, and use filters to control the content displayed in their feed.”?
@tchambers with all due respect, this is a classic example of a press release that hyped things up a little too much... The paper is based on a non-representative sample and on a prototype that involved 14 people. How much of the social dynamics that regulate the information practices of millions of users could have been missed?

@tchambers Yeah. It's fundamentally about increasing the overall moderation *bandwidth* compared to the tsunami of content that gets posted every day. For that to work there needs to be more opportunity to organize information, i.e. a shared data layer where platforms are essentially clients instead of silos. You should take a look at Satellite.

https://satellite.earth/

Satellite | Powered by Nostr

satellite.earth
@tchambers
👆🏾 that's a dunk on birdsite fr

@tchambers

I have some doubts about the validity of how the participants were chosen.

Previous studies also showed that humans are bad at judging #misinformation.

That said, I like the rest of their methodology & I think this contributes to this area of research in ways most papers fail to.

(Yes, I have published research in this area)

@tchambers Reminds me of Slashdot where the users are all of the moderators and part of the 'social agreement' of the site is you'll help moderate when/where you can.
@erswippe @tchambers thanks for reminding me that slashdot is where I first encountered this moderation structure.

@andrewlinke @erswippe @tchambers For a very good paper on the Slashdot distributed moderation model, see

‘Slash(dot) and Burn: Distributed Moderation in a Large Online Conversation Space’

Cliff Lampe, Paul Resnick
School of Information University of Michigan
(2004 preprint of an ACM CHI 2004 paper)

http://www.presnick.people.si.umich.edu/papers/chi04/LampeResnick.pdf

@Roundtrip @andrewlinke @erswippe @tchambers when I was paying attention (a very long time ago), slashdot worked remarkably well, at least to the extent that you could go in and read with the filter set to hide posts below a score of, say, 4 or 5, and get a decent read of the room - “here are some curated most popular / consensus reactions to the topic at hand”. I’m a little surprised no one else has (apparently) tried this technique since.
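That threshold-filtering idea is simple enough to sketch. A minimal illustration, assuming a hypothetical `Post` record whose `score` field stands in for Slashdot's aggregate moderation score (roughly -1 to 5):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    score: int  # aggregate moderation score, Slashdot-style
    text: str

def filter_by_threshold(posts, threshold=4):
    """Keep only posts whose moderation score meets the reader's threshold."""
    return [p for p in posts if p.score >= threshold]

posts = [
    Post("alice", 5, "insightful take"),
    Post("bob", 1, "low-effort comment"),
    Post("carol", 4, "informative reply"),
]
print([p.author for p in filter_by_threshold(posts)])  # ['alice', 'carol']
```

Each reader picks their own threshold, so the same comment pool yields anything from "show me everything" (threshold -1) to "consensus highlights only" (threshold 5).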
@erswippe @tchambers I remember being so excited to get to moderate and meta moderate. That place was such a well run community.

@tchambers

Very interesting indeed. Thank you for sharing!

@tchambers Hello! Will def check this out! I'm only several days old on Mastodon, so looking for people to follow. Nice to so quickly find others knowledgeable and sharing on this platform.
@tchambers I’ve always been a fan of the concept that members of a social platform are required to serve as jury. If a post is flagged, X number of random active accounts which are not directly associated with the flagged poster (and have not been polled in the last Y days) are polled on whether the content is objectionable. Perfect? No. But fairly hard to game and completely open.
(Obviously CSAM reports should be handled more cautiously and gore probably should be as well.)
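The jury-selection rule described above (X random active accounts, not associated with the flagged poster, not polled in the last Y days) can be sketched in a few lines. The account schema and field names here are hypothetical; X and Y become `jury_size` and `cooldown_days`:

```python
import random
from datetime import datetime, timedelta

def select_jury(accounts, flagged_author, jury_size, cooldown_days, now=None):
    """Pick a random jury of active accounts, excluding the flagged poster,
    their associates, and anyone polled within the cooldown window.

    `accounts` maps account id -> {"active": bool, "associates": set,
    "last_polled": datetime or None}  (hypothetical schema).
    """
    now = now or datetime.now()
    eligible = [
        acct for acct, info in accounts.items()
        if info["active"]
        and acct != flagged_author
        and flagged_author not in info["associates"]
        and (info["last_polled"] is None
             or now - info["last_polled"] > timedelta(days=cooldown_days))
    ]
    # If fewer eligible accounts exist than the requested jury size,
    # return everyone who qualifies.
    return random.sample(eligible, min(jury_size, len(eligible)))
```

Randomness plus the cooldown is what makes this hard to game: an attacker can't predict or exhaust the juror pool, and no single account gets polled often enough to dominate outcomes.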
The Man Behind Mastodon, Eugen Rochko, Built It for This Moment

People fleeing Twitter have turned to Eugen Rochko’s alternative. He says social networks can support healthy debate—without any one person in control.

WIRED
@tchambers feels like each subreddit could have the option of standing up as a Mastodon server, and voilà: the Fediverse gets immediate scale and Reddit adds direct social functions.