@JMMaok @pensato @futurebird

@jeffjarvis @jayrosen_nyu

J, really good start; needs policies re known, well-understood, provably unreliable sources and disinformation spreaders; we need help from the Trust Project and the Journalism Trust Initiative from Reporters Without Borders

@craignewmark @JMMaok @pensato @futurebird @jeffjarvis @jayrosen_nyu

New here, so still learning, & have a question to clarify. I understand you are hoping to formalize an overall minimal standard for all instances, & that would mean enforcement at some point. Which I assume would mean universally having the same moderating body & list, or something similar? Also, want to note that whatever happens, the fact that moderation with a fair, open face is what happens here is an achievement & makes a difference. Ty

@PBruce @craignewmark @JMMaok @futurebird @jeffjarvis @jayrosen_nyu this is part of why I'm suggesting a model similar to Creative Commons. It would allow instances to self-select from a menu and post the appropriate moderation label/badge somewhere public-facing. People could follow the link to where the detailed moderation paper exists (universally), which saves time and creates consistency. If there are exceptions or specifics on implementation, the moderator can post that.
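
To make that concrete, a machine-readable version of the menu could look something like this rough sketch (every field name, label code, and URL below is invented for illustration, not an existing standard):

```python
# Hypothetical sketch: a "moderation label" manifest an instance might publish,
# in the spirit of Creative Commons license badges. All names are invented.

MODERATION_MANIFEST = {
    "label": "MOD-HS-SPAM",  # invented shorthand, analogous to "CC BY-NC"
    "policy_url": "https://example.social/about/moderation",  # the shared, detailed policy text
    "menu": {  # items the instance self-selected from the common menu
        "hate_speech": "remove",
        "spam": "limit",
        "misinformation": "label",
        "nudity": "content-warning",
    },
    "exceptions": "Satire accounts are reviewed case by case.",
}


def badge_html(manifest: dict) -> str:
    """Render a small public-facing badge linking to the detailed policy."""
    return (
        f'<a href="{manifest["policy_url"]}" rel="moderation-policy">'
        f'{manifest["label"]}</a>'
    )


if __name__ == "__main__":
    print(badge_html(MODERATION_MANIFEST))
```

The point is only that picking from a shared menu and linking to one common policy document keeps the badge short and the details consistent.
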
@pensato @PBruce @craignewmark @JMMaok @futurebird @jeffjarvis @jayrosen_nyu there's no reason to tie this to the instance. Moderation is just a way of labeling content---just like boosting. Anyone should be able to offer "moderation" and everyone should be able to choose their own moderators.
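
A minimal sketch of what that could mean in practice, assuming labels are just data attached to posts and a "moderator" is just a feed of labels you can opt into (everything here is hypothetical):

```python
# Minimal sketch of "choose your own moderators": moderation is just another
# stream of labels attached to posts. Label format and feed structure are
# invented for illustration.

from dataclasses import dataclass, field


@dataclass
class Post:
    post_id: str
    text: str
    labels: set[str] = field(default_factory=set)  # labels merged from the moderators you follow


# Each "moderator" is simply a mapping from post IDs to labels they applied.
MODERATOR_FEEDS = {
    "factcheck.example": {"p1": {"misinformation"}},
    "civility.example": {"p2": {"harassment"}},
}


def apply_chosen_moderators(posts: list[Post], chosen: list[str]) -> list[Post]:
    """Merge labels only from the moderators this user has opted into."""
    for post in posts:
        for moderator in chosen:
            post.labels |= MODERATOR_FEEDS.get(moderator, {}).get(post.post_id, set())
    return posts


def visible(posts: list[Post], hide_if: set[str]) -> list[Post]:
    """Each reader decides what the labels mean: hide, collapse, or ignore."""
    return [p for p in posts if not (p.labels & hide_if)]
```

The filtering decision stays with the reader; the moderator only supplies labels.
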
@karger @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu
Exactly the structure I've been dying for: pick your own moderation. @Zittrain tried to convince Facebook to offer this years ago; they didn't listen, sadly.
@jeffjarvis @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu @Zittrain this would be platform-killing for Facebook; I can understand why they wouldn't pick it up.
@karger @jeffjarvis @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu @Zittrain the trust network for fact-checkers as an aspect of moderation would require FB to navigate the AOL Community Leader & Mavrix v. LiveJournal precedents on volunteers vs. labour, & the “publisher” implications of “at the direction of the service” created by paid fact-checkers suppressing user-created misinfo.
Social media corps see that as a liability landmine.
@karger @jeffjarvis @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu @Zittrain None of the social media corporations want to be the test case for “your AUP enforcement is biased against free speech / Republicans / isn’t covered by Section 230’s language / breaches your DMCA Safe Harbour / makes you a publisher” litigation / legislation. Every aspect of moderation they can push off, outsource, or sidestep, they do.
@PennyOaken @jeffjarvis @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu @Zittrain from that perspective, empowering individuals as moderators could help platforms shed some of the moderation burden they are currently shouldering (badly) and get *out* of the crosshairs of those complaining about moderation choices.

@karger

To talk about Mastodon in particular: the moderation system is OK. I would like to see a ticket system where user reports create a ticket that could be shared across servers (including notes and links to posts). I'd like to see an *option* to inform users who make reports about what happened.

I'd also like a true shadow-ban option -- limiting is close, but I want a way to mute a user across a whole server. (I've been dealing with people who keep making new accounts.)
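
Roughly what I'm picturing for such a shared ticket and a server-wide mute, sketched with made-up field names (none of this is Mastodon's actual data model):

```python
# Rough sketch of a report "ticket" that could be shared between servers, plus
# a server-wide mute list. Field names are hypothetical, not Mastodon's API.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ReportTicket:
    ticket_id: str
    reported_account: str                  # e.g. "@spammer@bad.example"
    post_urls: list[str]                   # links to the offending posts
    notes: list[str] = field(default_factory=list)        # moderator notes, appended by each server
    shared_with: list[str] = field(default_factory=list)  # other servers the ticket was sent to
    reporter_notified: bool = False        # the *optional* "tell the reporter what happened" flag
    opened_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


# A true server-wide mute: the account's posts stay up but become invisible to
# everyone on this server, without notifying the account (a shadow-ban).
SERVER_WIDE_MUTES: set[str] = set()


def mute_server_wide(account: str) -> None:
    SERVER_WIDE_MUTES.add(account)
```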

@futurebird yeah there are lots of opportunities for improvement in the moderation system.

@karger moderation's more than just labeling content. It's also about de-escalating situations before they turn into trashfires, protecting people and communities from bad actors, and reinforcing positive norms. People on an instance that prohibits hate speech shouldn't be able to choose "freeze peach" absolutists as their moderators. @jeffjarvis I assume @Zittrain's pitch to FB addressed this?

@jdp23 @futurebird @jeffjarvis @Zittrain I agree all these things are important, but they should be enforced at the community level rather than the instance level. Take Gmail for example---is that a "community"? Should Google be making enforcement decisions about what kinds of email to deliver? They don't; instead, many different communities with different norms share the same Gmail infrastructure for communication. Social media should be similar: many communities on common infrastructure.

@karger

Twitter was like one big, massive instance and had moderation. I chose to leave when they pulled away from what I consider the bare minimum -- not because I care if I see that stuff personally, but because I don't want to be part of a server without those kinds of minimum standards.

I wouldn't want to be on an instance that also hosted nazis --

@futurebird but I doubt that you have abandoned gmail, even though there are plenty of nazis sending their hate speech through it.
@futurebird @karger This is a very poor example. Email is a one-to-one system. It is not a social network, which is built to form communities.
@Tupp_ed @futurebird communities existed long before social media; they used alternative distribution channels such as mailing lists but the issues are the same.

@karger @futurebird OK but your analogy is bunk.

Phone lines, fax machines, and email do not create communities, though they do require network-level moderation (to prevent spam, harassment, etc.).

Social media also requires active moderation to set and maintain community standards.

@Tupp_ed @futurebird *communities* require active moderation to set and maintain community standards. Our infrastructure should empower communities to make their own choices about that, not force them into one-size-fits-all moderation.

@karger @futurebird Again, all the infrastructure, especially email, is subject to moderation at a full network level.

I mean, you may not be aware of it, but it is there, and it is crucial for that infrastructure to continue to provide value.

Possibly your primary point is valid (though I’m unpersuaded), but your analogy to support it isn’t.

@Tupp_ed @futurebird Yes, as I mentioned before, there is "moderation" at the network level. Content with forged sender headers is blocked, and illegal content such as child porn may be detected and blocked. But it's very limited. At the next level down, things like spam are *labeled* as such but delivered anyway, so the end user can decide what to do about them.
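
For example, a filter like SpamAssassin adds an X-Spam-Flag header when it classifies a message as spam; the delivery side can then route rather than drop. A toy sketch (the folder names and policy are my own, not any particular mail system's):

```python
# Sketch of the "label, don't block" layer: an upstream filter such as
# SpamAssassin has already added X-Spam-Flag / X-Spam-Status headers, and the
# recipient side decides what to do with labeled mail. Folder names invented.

import email
from email.message import Message


def route(raw_message: bytes) -> str:
    """Return the mailbox a message should land in; nothing is deleted."""
    msg: Message = email.message_from_bytes(raw_message)
    if msg.get("X-Spam-Flag", "").strip().upper() == "YES":
        return "Spam"   # labeled and still delivered; the user can look anyway
    return "Inbox"
```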

@karger @futurebird Ah here. Projects such as Spamhaus ensure that literally billions of spam messages a day are blocked before they ever reach an inbox.

Sure lookit, go on. I won’t bother you further.

@Tupp_ed @futurebird Yes; I'm familiar, as I've published work on spam and spam blocking. Spamhaus focuses on malware, forgery, and phishing---things that violate the infrastructure contracts. Meanwhile, spam like my opportunity to save 40% on socks, the opportunity to open a franchise, or the plea for money from a political party is delivered just fine, labeled as spam or social so I can decide what to do with it.