@JMMaok @pensato @futurebird

@jeffjarvis @jayrosen_nyu

J, really good start; needs politics re known, well-understood, provably unreliable sources and disinformation spreaders; need help from the Trust Project and the Journalism Trust Initiative from Reporters Without Borders

@craignewmark @JMMaok @pensato @futurebird @jeffjarvis @jayrosen_nyu

New here, so learning, & have a question to clarify. I understand you are hoping to formalize an overall minimal standard for all instances, & that would mean enforcement at some point. Which I assume would mean universally having the same moderating body & list, or something similar? Also, want to note that whatever happens, the fact that moderation with a fair, open face is what happens here is an achievement & makes a difference. Ty

@PBruce @craignewmark @JMMaok @futurebird @jeffjarvis @jayrosen_nyu this is part of why I'm suggesting a model similar to Creative Commons. It would allow instances to self-select from a menu and post the appropriate moderation label/badge somewhere public-facing. People could follow the link to where the detailed moderation paper exists (universally), which saves time and creates consistency. If there are exceptions or specifics on implementation, the moderator can post that.
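A rough sketch of how such a Creative Commons-style menu might work in practice, with an instance composing a public-facing badge from chosen policy components. All component names and the URL scheme here are illustrative assumptions, not an existing standard:

```python
# Hypothetical sketch: compose a moderation "badge" from a menu of
# policy components, the way Creative Commons composes license clauses.
# Component codes and the policy URL scheme are invented for illustration.

MENU = {
    "NH": "no hate speech",
    "NM": "misinformation labeled or removed",
    "HB": "human review before bans",
    "AP": "appeals process available",
}

def make_badge(components):
    """Return a short badge code plus a link to the shared policy text."""
    unknown = [c for c in components if c not in MENU]
    if unknown:
        raise ValueError(f"not on the menu: {unknown}")
    code = "-".join(sorted(components))
    # One canonical policy document per combination, hosted in one place,
    # so every instance's badge links to the same detailed text.
    return code, f"https://moderation.example/policy/{code}"

code, url = make_badge(["NM", "NH", "AP"])
print(code)  # AP-NH-NM
print(url)   # https://moderation.example/policy/AP-NH-NM
```

The point of sorting the codes is that two instances choosing the same components always end up linking to the same canonical document, which is what creates the consistency described above; instance-specific exceptions could then be posted alongside the badge.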
@pensato @PBruce @craignewmark @JMMaok @futurebird @jeffjarvis @jayrosen_nyu there's no reason to tie this to the instance. Moderation is just a way of labeling content---just like boosting. Anyone should be able to offer "moderation" and everyone should be able to choose their own moderators.
@pensato @PBruce @craignewmark @JMMaok @futurebird @jeffjarvis @jayrosen_nyu we've designed and tested one prototype of this approach focused on misinformation, and are now building another, more general-purpose one.
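The decoupling described above can be sketched in a few lines: labels are just (moderator, post) assertions living alongside the content, and each reader filters their own feed by whichever moderators they trust. Everything here (names, label vocabulary) is hypothetical, not the prototype's actual design:

```python
# Hypothetical sketch of moderation decoupled from the instance:
# any account can publish labels on posts, and each reader filters
# their own feed by the moderators they personally choose to trust.

labels = {}  # (moderator, post_id) -> label, e.g. "misinfo"

def publish_label(moderator, post_id, label):
    labels[(moderator, post_id)] = label

def filter_feed(feed, trusted_moderators, hide=frozenset({"misinfo"})):
    """Keep posts unless a trusted moderator gave them a hidden label."""
    def flagged(post_id):
        return any(labels.get((m, post_id)) in hide
                   for m in trusted_moderators)
    return [p for p in feed if not flagged(p)]

publish_label("alice", "post-1", "misinfo")
publish_label("mallory", "post-2", "misinfo")

feed = ["post-1", "post-2", "post-3"]
# A reader who trusts only alice is unaffected by mallory's labels.
print(filter_feed(feed, {"alice"}))  # ['post-2', 'post-3']
```

Because labels are attributed rather than global, "moderation" becomes another layer of user choice, the same way following and boosting already are.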

@karger @pensato @PBruce @JMMaok @futurebird @jeffjarvis @jayrosen_nyu

Not really sure I follow, but I'm guessing I should.

More importantly, are you, at the least, working with the folks at the Harvard Berkman Klein Center like @Zittrain? Thanks!

@craignewmark @pensato @PBruce @JMMaok @futurebird @jeffjarvis @jayrosen_nyu @Zittrain I've had some interaction with the Berkman folks (and my student @axz was a fellow there) but not yet on this specific project. Here's an MIT news article on it: https://news.mit.edu/2022/social-media-users-assess-content-1116
Empowering social media users to assess content helps fight misinformation

MIT researchers built a prototype social media platform to study the effects of giving users more agency to assess content for accuracy and control the posts they see based on accuracy assessments from others. Users were able to make accurate assessments, despite having no prior training, and they valued and utilized the assessment and filtering tools they designed.


@karger @craignewmark @pensato @PBruce @JMMaok @futurebird @jeffjarvis @jayrosen_nyu @Zittrain @axz

Interesting! I have a student looking at countering misinformation this quarter, and he has a small grant from the Koret Foundation to do it.

@dangrsmind @craignewmark @pensato @PBruce @JMMaok @futurebird @jeffjarvis @jayrosen_nyu @Zittrain @axz if your student wants to build prototypes, we have (open source) infrastructure of various sorts that might help them save time---feel free to reach out

@karger @craignewmark @pensato @PBruce @JMMaok @futurebird @jeffjarvis @jayrosen_nyu @Zittrain @axz

I definitely will mention this to him. We have an upcoming Zoom call in the next week or so. Thanks for the reply!