@JMMaok @pensato @futurebird

@jeffjarvis @jayrosen_nyu

J, really good start; it needs policies regarding known, well-understood, provably unreliable sources and disinformation spreaders. We need help from the Trust Project and the Journalism Trust Initiative from Reporters Without Borders.

@craignewmark @JMMaok @pensato @futurebird @jeffjarvis @jayrosen_nyu

New here, so still learning, and I have a question to clarify. I understand you are hoping to formalize an overall minimal standard for all instances, and that would mean enforcement at some point, which I assume would mean universally having the same moderating body and list, or something similar? Also, I want to note that whatever happens, the fact that moderation here happens with a fair, open face is an achievement and makes a difference. Ty

@PBruce @craignewmark @JMMaok @futurebird @jeffjarvis @jayrosen_nyu this is part of why I'm suggesting a model similar to Creative Commons. It would allow instances to self-select from a menu and post the appropriate moderation label/badge somewhere public-facing. People could follow the link to where the detailed moderation paper exists (universally), which saves time and creates consistency. If there are exceptions or specifics on implementation, the moderator can post that.
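A minimal sketch of what that Creative Commons-style menu might look like as data, assuming a hypothetical shared registry of policy codes; none of the field names or the "MOD-HS-SP" code below are an existing standard:

```python
# Hypothetical sketch of a CC-style moderation badge an instance could
# publish. The field names and the "MOD-HS-SP" code are invented for
# illustration; no such registry exists yet.
from dataclasses import dataclass

@dataclass
class ModerationBadge:
    code: str              # menu selection, e.g. "MOD-HS-SP" (hate speech + spam removal)
    policy_url: str        # link to the universal, detailed moderation paper
    local_notes: str = ""  # instance-specific exceptions or implementation details

badge = ModerationBadge(
    code="MOD-HS-SP",
    policy_url="https://moderation-commons.example/policies/MOD-HS-SP",
    local_notes="Reports reviewed within 48 hours by volunteer moderators.",
)
print(f"{badge.code}: {badge.policy_url}")  # what the public-facing badge links to
```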
@pensato @PBruce @craignewmark @JMMaok @futurebird @jeffjarvis @jayrosen_nyu there's no reason to tie this to the instance. Moderation is just a way of labeling content---just like boosting. Anyone should be able to offer "moderation" and everyone should be able to choose their own moderators.
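A minimal sketch of that label-based model, with all names invented for illustration; this is not an existing Mastodon or ActivityPub API, just one way labels from reader-chosen moderators could filter a timeline client-side:

```python
# Hypothetical sketch of moderation as content labeling, decoupled from
# the instance. Nothing here is an existing Mastodon/ActivityPub API.
from dataclasses import dataclass

@dataclass
class Label:
    post_id: str
    moderator: str  # whoever issued the label; anyone can offer moderation
    verdict: str    # e.g. "spam", "harassment", "misinfo"

def visible(posts, labels, my_moderators, my_blocked_verdicts):
    """Hide a post only if a moderator *this reader* trusts labeled it
    with a verdict this reader chose to filter out."""
    hidden = {
        l.post_id
        for l in labels
        if l.moderator in my_moderators and l.verdict in my_blocked_verdicts
    }
    return [p for p in posts if p["id"] not in hidden]

posts = [{"id": "1", "text": "hello"}, {"id": "2", "text": "buy now!!!"}]
labels = [Label("2", "modco.example", "spam")]

print(visible(posts, labels, {"modco.example"}, {"spam"}))  # trusts modco: post 2 hidden
print(visible(posts, labels, set(), {"spam"}))              # trusts no one: sees everything
```

The same posts and labels yield different timelines for different readers, because trusting a moderator is the reader's choice rather than the instance's.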
@karger @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu
Exactly the structure I've been dying for: pick your own moderation. @Zittrain tried to convince Facebook to offer this years ago; they didn't listen, sadly.
@jeffjarvis @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu @Zittrain this would be platform-killing for Facebook; I can understand why they wouldn't pick it up.
@karger @jeffjarvis @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu @Zittrain the trust network for fact checkers as an aspect of moderation would require FB to navigate the AOL Community Leader and Mavrix v. LiveJournal precedents on volunteer work vs. labour, and the “publisher” implications of “at the direction of the service” created by paid fact checkers suppressing user-created misinfo.
Social media corps see that as a liability landmine.
@karger @jeffjarvis @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu @Zittrain None of the social media corporations want to be the test case for “your AUP enforcement is biased against free speech / Republicans / isn’t covered by Section 230’s language / breaches your DMCA Safe Harbour / makes you a publisher” litigation / legislation. Every aspect of moderation they can push off, outsource, or sidestep, they do.
@PennyOaken @jeffjarvis @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu @Zittrain from that perspective empowering individuals as moderators could help platforms shed some of the moderation burden they are currently shouldering (badly), and get *out* of the crosshairs of those complaining about moderation choices.

@karger

I think a lot of people make the mistake of thinking that moderation is just about "giving people what they want" -- and that when there are conflicts about moderation decisions, we can make everyone happy by giving everyone their own personalized system of blocks and filters.

But are we just making information delivery services or are we making communities? Conflicts about moderation are important conversations and a chance to strengthen that sense of community.

@karger @futurebird
I would always prioritize community building/strengthening over information delivery. Social networks vs Social media…

@futurebird @karger
Being considerate means staying away from language that may upset others
It's not PC or woke to have empathy

If it may be objectionable, drop it. People will soon understand what's acceptable if they want to be in the conversation
#woke

@futurebird I think the distinction between information delivery services and communities is an important one. But it's not either or. We need both. We need good information delivery services to act as infrastructure for our communities. But the place to impose norms is within the communities, not the information delivery services. Because different communities will have different norms that may conflict. They shouldn't be forced to adopt different information delivery services because of that.
@karger @futurebird So, let me see if I have correctly understood your position. Users of social media should be allowed to abuse and harass other people however they prefer, and it is incumbent upon the target(s) to either manage that abuse or flee the service. Is that what you envision? Because that's what it sounds like.
@knottedthreads @futurebird in fact we published a whole paper about how platform-level moderation *fails* to protect people from harassment and abuse, and showed how it is necessary to give individuals the power to define harassment and how they want it handled. https://homes.cs.washington.edu/~axz/squadbox.html

@karger @futurebird

I find it interesting that your proposed solution is to absolve platforms of moderation work and force harassment targets to outsource that work to their friends and family, so that the interplay of guilt and perceived obligation reduces the number of complaints against the service and about the moderation. I'm not sure that shoving such a toxic burden squarely into the relationships a target most depends on should be regarded as a panacea.

@knottedthreads @futurebird my counterargument is that, whether we absolve them or not, the platforms are simply incapable of doing it well; see our paper on Squadbox: https://homes.cs.washington.edu/~axz/pub_details.html?id=squadbox . And there is no platform-level, one-size-fits-all moderation solution. Do you think liberals will ever want conservatives setting moderation policies? Or vice versa?