@JMMaok @pensato @futurebird

@jeffjarvis @jayrosen_nyu

J, really good start, needs politics re known, well-understood, provably unreliable sources and disinformation spreaders; need help from the Trust Project and the Journalism Trust Initiative from Reporters Without Borders

@craignewmark @JMMaok @pensato @futurebird @jeffjarvis @jayrosen_nyu

New, so learning, & have a question to clarify. I understand you are hoping to formalize an overall minimal standard for all instances & that would mean enforcement at some point. Which I assume would be universally having the same moderating body & list, or something similar? Also, want to note that whatever happens, the fact that moderation here happens with a fair, open face is an achievement & makes a difference. Ty

@PBruce @craignewmark @JMMaok @futurebird @jeffjarvis @jayrosen_nyu this is part of why I'm suggesting a model similar to Creative Commons. It would allow instances to self-select from a menu and post the appropriate moderation label/badge somewhere public-facing. People could follow the link to where the detailed moderation paper exists (universally), which saves time and creates consistency. If there are exceptions or specifics on implementation, the moderator can post that.
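(A minimal sketch, in Python, of what a machine-readable version of that menu-and-badge idea might look like. The clause names, badge format, and policy URL below are invented for illustration; no such standard exists today.)

```python
# Hypothetical Creative Commons-style "menu" of moderation clauses an instance can pick from.
from dataclasses import dataclass, field

MODERATION_MENU = {
    "no-hate-speech",
    "no-harassment",
    "no-spam",
    "cw-required-for-violence",
    "bots-must-be-flagged",
}

@dataclass
class ModerationBadge:
    instance: str
    clauses: set = field(default_factory=set)   # clauses selected from the shared menu
    policy_url: str = ""                        # link to the universal, detailed moderation text
    local_exceptions: str = ""                  # instance-specific notes on implementation

    def label(self) -> str:
        """Short public-facing badge string, e.g. 'MOD: no-hate-speech+no-spam'."""
        return "MOD: " + "+".join(sorted(self.clauses))

badge = ModerationBadge(
    instance="example.social",                                 # placeholder instance
    clauses={"no-hate-speech", "no-spam"},
    policy_url="https://example.org/moderation-policy",        # placeholder URL
    local_exceptions="Reports handled within 48h by volunteer mods.",
)
print(badge.label())      # MOD: no-hate-speech+no-spam
print(badge.policy_url)
```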
@pensato @PBruce @craignewmark @JMMaok @futurebird @jeffjarvis @jayrosen_nyu there's no reason to tie this to the instance. Moderation is just a way of labeling content---just like boosting. Anyone should be able to offer "moderation" and everyone should be able to choose their own moderators.
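(A rough sketch, again in Python, of moderation decoupled from the instance: independent labelers publish labels on post URLs, and each reader chooses which labelers to trust and which labels to act on. All names and URLs here are hypothetical.)

```python
# Labels published by independent moderators, keyed by post URL.
from collections import defaultdict

moderator_labels = {
    "mods.example/alice": {
        "https://social.example/@spammer/1": "spam",
        "https://social.example/@troll/7": "harassment",
    },
    "mods.example/bob": {
        "https://social.example/@troll/7": "ok",
    },
}

class Reader:
    def __init__(self, trusted_moderators, hidden_labels):
        self.trusted = trusted_moderators   # which labelers this reader follows
        self.hidden = hidden_labels         # which labels cause a post to be hidden

    def labels_for(self, post_url):
        out = defaultdict(list)
        for mod in self.trusted:
            label = moderator_labels.get(mod, {}).get(post_url)
            if label:
                out[label].append(mod)
        return dict(out)

    def should_hide(self, post_url):
        # Hide only if a moderator *this reader* trusts applied a label the reader filters on.
        return any(l in self.hidden for l in self.labels_for(post_url))

reader = Reader(trusted_moderators=["mods.example/alice"], hidden_labels={"spam", "harassment"})
print(reader.should_hide("https://social.example/@troll/7"))    # True
print(reader.should_hide("https://social.example/@spammer/1"))  # True
```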
@karger @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu
Exactly the structure I've been dying for: pick your own moderation. @Zittrain tried to convince Facebook to offer this years ago; they didn't listen, sadly.
@jeffjarvis @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu @Zittrain this would be platform-killing for Facebook; I can understand why they wouldn't pick it up.
@karger @jeffjarvis @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu @Zittrain the trust network for fact checkers as an aspect of moderation would require FB to navigate AOL Community Manager & Mavrix v LiveJournal precedents for volunteer vs labour & the “publisher” implications of “at the direction of the service” created by paid fact checkers suppressing user-created misinfo.
Social media corps see that as a liability landmine.
@karger @jeffjarvis @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu @Zittrain None of the social media corporations want to be the test case for “your AUP enforcement is biased against free speech / Republicans / isn’t covered by Section 230’s language / breaches your DMCA Safe Harbour / makes you a publisher” litigation / legislation. Every aspect of moderation they can push off, outsource, or sidestep, they do.
@PennyOaken @jeffjarvis @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu @Zittrain from that perspective empowering individuals as moderators could help platforms shed some of the moderation burden they are currently shouldering (badly), and get *out* of the crosshairs of those complaining about moderation choices.

@karger

To talk about Mastodon in particular, the moderation system is OK. I would like to see a ticket system where user reports create a ticket that could be shared across servers, including notes and links to posts (a rough sketch of such a ticket is below). I'd like to see an *option* to inform users who make reports about what happened.

I'd also like a true shadow-ban option -- limiting is close, but I want a way to mute a user across a whole server. (Been dealing with people who keep making new accounts.)
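(A minimal sketch, in Python, of the shareable report ticket described in the post above, with an opt-in flag for notifying the reporter. Every field and status value is invented for illustration; Mastodon has no such cross-server ticket today.)

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReportTicket:
    ticket_id: str
    reported_account: str                                  # e.g. "@troll@social.example"
    post_links: List[str] = field(default_factory=list)   # links to the reported posts
    notes: List[str] = field(default_factory=list)        # moderator notes, shared across servers
    shared_with: List[str] = field(default_factory=list)  # other servers this ticket was sent to
    status: str = "open"                                   # open / resolved / dismissed
    notify_reporter: bool = False                          # the *option* to tell the reporter what happened

    def resolve(self, outcome_note: str):
        self.notes.append(outcome_note)
        self.status = "resolved"

ticket = ReportTicket(
    ticket_id="2023-0042",
    reported_account="@troll@social.example",
    post_links=["https://social.example/@troll/7"],
    notify_reporter=True,
)
ticket.shared_with.append("other.instance")
ticket.resolve("Account limited server-wide; reporter notified.")
print(ticket.status, ticket.notes[-1])
```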

@futurebird yeah there are lots of opportunities for improvement in the moderation system.

@karger moderation's more than just labeling content. It's also about de-escalating situations before they turn into trashfires, protecting people and communities from bad actors, and reinforcing positive norms. People on an instance that prohibits hate speech shouldn't be able to choose "freeze peach" absolutists as their moderators. @jeffjarvis I assume @Zittrain's pitch to FB addressed this?

@jdp23 @futurebird @jeffjarvis @Zittrain I agree all these things are important, but they should be enforced at the community level rather than the instance level. Take Gmail, for example---is that a "community"? Should Google be making enforcement decisions about what kinds of email to deliver? They don't; instead, many different communities with different norms share the same Gmail infrastructure for communication. Social media should be similar: many communities on common infrastructure.

@karger @futurebird @jeffjarvis @Zittrain Instances are currently the primary mechanism for community in the fediverse so I'm not sure about the distinction you're making.

And Google actually does make decisions about what email to deliver and what to moderate by labeling it as social or spam.

@jdp23 @futurebird @jeffjarvis @Zittrain moderation currently conflates labeling and delivery. I'm all in favor of Gmail continuing to label mail as spam or social---because they let *me* decide what to do about those labels, rather than invisibly deleting it.

For contrast, they *do* drop mail with forged sender info immediately, and I think that's the right choice because it violates the *infrastructure* contract (identifiable senders) rather than a particular community norm.
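(A toy Python illustration of that layering argument: messages that violate the delivery contract are rejected outright, while content judgments only attach labels the recipient can act on. The checks below are placeholders, not real SPF/DKIM validation or spam classification.)

```python
def sender_is_forged(message) -> bool:
    # Placeholder for real SPF/DKIM/DMARC verification.
    return message.get("auth") == "fail"

def classify(message) -> list:
    # Placeholder content classifier; a real one would be far more involved.
    labels = []
    if "unsubscribe" in message["body"].lower():
        labels.append("promotional")
    if "wire transfer" in message["body"].lower():
        labels.append("likely-spam")
    return labels

def handle(message):
    if sender_is_forged(message):
        return ("rejected", [])                  # infrastructure layer: drop outright
    return ("delivered", classify(message))      # community/user layer: label only, deliver anyway

print(handle({"auth": "fail", "body": "hello"}))
print(handle({"auth": "pass", "body": "Send a wire transfer today! Unsubscribe here."}))
```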

@karger @futurebird @jeffjarvis @Zittrain it sounds like you think the infrastructure contract can't be about values. So do you think instances are wrong to defederate from Gab and Nazi instances?

@jdp23 I guess the turnabout question is: if a Nazi group sets up their own mail server, should other mail servers refuse to exchange email with it?

Given the current state of affairs with Mastodon, since there is too little control at the individual level, defederation is the best of bad choices.

But I think we'd be far better off with a social network infrastructure modeled on the email one---a reliable delivery layer *on top of which* communities can internally manage norms.

@karger Defederation *is* a community decision: "We on this instance don't want anything to do with Nazi servers." That sounds like a good choice to me (not just the best of bad choices), but I guess we see it differently.

As for email servers, they already refuse to exchange email with servers and IP addresses known to be spammers, or sites with DKIM etc. set up wrong. I agree that they're more tolerant of Nazis than spam, but that's just a question of which values they prioritize.

@jdp23 defederation isn't a community decision. It's entirely in the hands of the server operator.

Re: email servers, I don't think it's a question of values so much as layers. Sources that violate the email delivery contract by forging headers or sending too much volume get blocked, but there is rarely blocking based on *content*.

@karger Great conversation, thanks for taking the time.

I'd say defederation is a community decision taken by the admins (who may or may not solicit input or provide transparency) on behalf of the community. No argument that there's a lot of room for improvement in instance governance! But the same's true of email list moderation, which it sounds like you do consider to be at the community level.

On email servers, CSAM and malware are blocked based on content.