@JMMaok @pensato @futurebird

@jeffjarvis @jayrosen_nyu

J, really good start, needs policies re known, well-understood, provably unreliable sources and disinformation spreaders; need help from the Trust Project and the Journalism Trust Initiative from Reporters Without Borders

@craignewmark @JMMaok @pensato @futurebird @jeffjarvis @jayrosen_nyu

New here, so still learning, and I have a question to clarify. I understand you are hoping to formalize an overall minimal standard for all instances, and that would mean enforcement at some point, which I assume would mean universally having the same moderating body and list, or something similar? Also, I want to note that whatever happens, the fact that moderation with a fair, open face is what happens here is an achievement and makes a difference. Ty

@PBruce @craignewmark @JMMaok @futurebird @jeffjarvis @jayrosen_nyu this is part of why I'm suggesting a model similar to Creative Commons. It would allow instances to self-select from a menu and post the appropriate moderation label/badge somewhere public-facing. People could follow the link to where the detailed moderation paper exists (universally), which saves time and creates consistency. If there are exceptions or specifics on implementation, the moderator can post that.
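
A rough sketch of how such a menu-and-badge scheme could look (the attribute codes, badge format, and everything else here are invented purely for illustration; no such standard exists yet):

```python
# Hypothetical Creative-Commons-style moderation "badge" for an instance.
# The menu codes below are made up for this sketch.

MENU = {
    "HS": "removes hate speech",
    "MI": "labels misinformation",
    "CW": "requires content warnings on sensitive media",
    "SP": "blocks spam at the server level",
}

def badge(selected: list[str]) -> str:
    """Compose a public-facing badge string from standard menu selections."""
    unknown = [code for code in selected if code not in MENU]
    if unknown:
        raise ValueError(f"not in the standard menu: {unknown}")
    return "MOD " + "-".join(selected)

# An instance enforcing hate-speech and spam rules would display:
print(badge(["HS", "SP"]))  # -> "MOD HS-SP"
# ...with the badge linking to the universal page that defines each code.
```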

@pensato @craignewmark @JMMaok @futurebird @jeffjarvis @jayrosen_nyu

I'm just delighted the conversation is openly occurring and don't have much of an opinion yet, as I don't have enough familiarity/info. But in my case I had to seek the rules out, and ended up on the server/instance I did by a fluke; all fine. Readily available info is always a plus. Perhaps having to read/agree to server rules at sign-up might help, for future growth?

@pensato @PBruce @craignewmark @JMMaok @futurebird @jeffjarvis @jayrosen_nyu there's no reason to tie this to the instance. Moderation is just a way of labeling content---just like boosting. Anyone should be able to offer "moderation" and everyone should be able to choose their own moderators.
@pensato @PBruce @craignewmark @JMMaok @futurebird @jeffjarvis @jayrosen_nyu we've designed and tested one prototype of this approach focused on misinformation, and are now building another, more general-purpose one.
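
A minimal sketch of that labeling model, with hypothetical names and structures (this is not the actual design of the prototypes mentioned above):

```python
# Sketch of "anyone can moderate, everyone picks their own moderators".
# Moderation here is just another annotation attached to a post, like a boost.

from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    labels: dict[str, str] = field(default_factory=dict)  # moderator -> label

def add_label(post: Post, moderator: str, label: str) -> None:
    """Any account can publish a moderation label; nothing is deleted."""
    post.labels[moderator] = label

def visible(post: Post, my_moderators: set[str], hidden: set[str]) -> bool:
    """Hide a post only if a moderator *I* chose applied a label I chose to hide."""
    return not any(
        label in hidden
        for mod, label in post.labels.items()
        if mod in my_moderators
    )

p = Post("alice", "some claim")
add_label(p, "factcheck.example", "misleading")
print(visible(p, my_moderators={"factcheck.example"}, hidden={"misleading"}))  # False
print(visible(p, my_moderators={"other.example"}, hidden={"misleading"}))      # True
```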

@karger @pensato @PBruce @JMMaok @futurebird @jeffjarvis @jayrosen_nyu

Not really sure I follow, but I'm guessing I should.

More importantly, are you, at the least, using the folks at Harvard Berkman Klein Center like @Zittrain? Thanks!

@craignewmark @pensato @PBruce @JMMaok @futurebird @jeffjarvis @jayrosen_nyu @Zittrain I've had some interaction with the Berkman folks (and my student @axz was a fellow there) but not yet on this specific project. Here's an MIT news article on it: https://news.mit.edu/2022/social-media-users-assess-content-1116
@karger @pensato @PBruce @JMMaok @futurebird @jeffjarvis @jayrosen_nyu @Zittrain @axz
Looks good, appreciated! How might you prevent bad actors from responding with fake fact checks, etc? (Okay to take question as rhetorical.)
@craignewmark @pensato @PBruce @JMMaok @futurebird @jeffjarvis @jayrosen_nyu @Zittrain @axz besides the idea of everyone-is-a-moderator, the other critical component is a trust network. you won't see fact checks from people you don't trust.
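
One way such a trust network could gate fact checks, sketched with hypothetical names and structure (not the actual prototype design): a fact check is shown only if its author is reachable within a few hops of who-trusts-whom edges.

```python
# Sketch: I see a fact check only if its author is within `max_hops`
# of me in the trust graph, so fake checks from strangers never surface.

from collections import deque

def trusted(trust: dict[str, set[str]], me: str, author: str, max_hops: int = 2) -> bool:
    """Breadth-first search over who-trusts-whom edges."""
    frontier, seen = deque([(me, 0)]), {me}
    while frontier:
        user, hops = frontier.popleft()
        if user == author:
            return True
        if hops < max_hops:
            for nxt in trust.get(user, set()) - seen:
                seen.add(nxt)
                frontier.append((nxt, hops + 1))
    return False

trust = {"me": {"ana"}, "ana": {"ben"}}
print(trusted(trust, "me", "ben"))    # True: a friend-of-friend's check is shown
print(trusted(trust, "me", "troll"))  # False: a bad actor's "fact check" is not
```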

@karger @craignewmark @pensato @PBruce @JMMaok @futurebird @jeffjarvis @jayrosen_nyu @Zittrain @axz

Interesting! I have a student looking at countering misinformation this quarter, and he has a small grant from the Koret Foundation to do it.

@dangrsmind @craignewmark @pensato @PBruce @JMMaok @futurebird @jeffjarvis @jayrosen_nyu @Zittrain @axz if your student wants to build prototypes, we have (open source) infrastructure of various sorts that might help them save time---feel free to reach out

@karger @craignewmark @pensato @PBruce @JMMaok @futurebird @jeffjarvis @jayrosen_nyu @Zittrain @axz

I definitely will mention this to him. We have an upcoming Zoom call in the next week or so. Thanks for the reply!

@karger @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu
Exactly the structure I've been dying for: pick your own moderation. @Zittrain tried to convince Facebook to offer this years ago; they didn't listen, sadly.
@jeffjarvis @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu @Zittrain this would be platform-killing for Facebook; I can understand why they wouldn't pick it up.
@karger @jeffjarvis @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu @Zittrain the trust network for fact checkers as an aspect of moderation would require FB to navigate AOL Community Manager & Mavrix v LiveJournal precedents for volunteer vs labour & the “publisher” implications of “at the direction of the service” created by paid fact checkers suppressing user-created misinfo.
Social media corps see that as a liability landmine.
@PennyOaken @jeffjarvis @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu @Zittrain indeed there's an important framing issue between "you are moderating for the platform" (so of course you should be compensated) vs "you are using the platform to communicate your moderation opinions to your friends" (so you should pay the platform, via subscription or ad models). On top of that you could also have paid moderation services (which I think is not unrelated to journalism)
@karger @jeffjarvis @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu @Zittrain None of the social media corporations want to be the test case for "your AUP enforcement is biased against free speech/Republicans / isn't covered by Section 230's language / breaches your DMCA Safe Harbour / makes you a publisher" litigation/legislation. Every aspect of moderation they can push off, outsource, or sidestep, they do.
@PennyOaken @jeffjarvis @pensato @PBruce @craignewmark @JMMaok @futurebird @jayrosen_nyu @Zittrain from that perspective empowering individuals as moderators could help platforms shed some of the moderation burden they are currently shouldering (badly), and get *out* of the crosshairs of those complaining about moderation choices.

@karger

I think a lot of people make the mistake of thinking that moderation is just about "giving people what they want" -- and when there are conflicts about moderation decisions, we try to make everyone happy with the idea that everyone can have their own personalized system of blocks and filters.

But are we just making information delivery services or are we making communities? Conflicts about moderation are important conversations and a chance to strengthen that sense of community.

@karger @futurebird
I would always prioritize community building/strengthening over information delivery. Social networks vs Social media…

@futurebird @karger
Being considerate means staying away from language that may upset others
It's not PC or woke to have empathy

If it may be objectionable, drop it. People will soon understand what's acceptable if they want to be in the conversation
#woke

@futurebird I think the distinction between information delivery services and communities is an important one. But it's not either or. We need both. We need good information delivery services to act as infrastructure for our communities. But the place to impose norms is within the communities, not the information delivery services. Because different communities will have different norms that may conflict. They shouldn't be forced to adopt different information delivery services because of that.
@karger @futurebird So, let me see if I have correctly understood your position. Users of social media should be allowed to abuse and harass other people however they prefer, and it is incumbent upon the target(s) to either manage that abuse or flee the service. Is that what you envision? Because that's what it sounds like.
@knottedthreads @futurebird in fact we published a whole paper about how platform-level moderation *fails* to protect people from harassment and abuse, and showed how it is necessary to give individuals the power to define harassment and how they want it handled. https://homes.cs.washington.edu/~axz/squadbox.html

@karger @futurebird

I find it interesting that your proposed solution is to absolve platforms of moderation work and force harassment targets to outsource that work to their friends and family, so that the interplay of guilt and perceived obligation reduce the number of complaints against the service and about the moderation. I'm not sure that shoving such a toxic burden squarely into the relationships that a target most depends on should be regarded as a panacea.

@knottedthreads @futurebird my counterargument is that whether we absolve them or not, the platforms are simply incapable of doing it well---see our paper on Squadbox https://homes.cs.washington.edu/~axz/pub_details.html?id=squadbox . And there is no platform-level one-size-fits-all moderation solution. Do you think liberals will ever want conservatives setting moderation policies? Or vice versa?

@karger

Going back to the list of layers that form this social network, there are types of moderation that will occur at each layer: from filtering spam and "system abuse" (anything that tries to break or exploit the way the network was meant to work), to filtering obviously disgusting content (think goatse and spamming the n-word), to the less obvious and more subjective calls that may vary from one community to the next.

@karger

To talk about Mastodon in particular, the moderation system is OK. I would like to see a ticket system where user reports create a ticket that could be shared across servers (including notes and links to posts). I'd also like to see an *option* to inform users who make reports about what happened.

I'd also like a true shadow-ban option -- limiting is close, but I want a way to mute a user across a whole server. (I've been dealing with people who keep making new accounts.)

@futurebird yeah there are lots of opportunities for improvement in the moderation system.

@karger moderation's more than just labeling content. It's also about de-escalating situations before they turn into trashfires, protecting people and communities from bad actors, and reinforcing positive norms. People on an instance that prohibits hate speech shouldn't be able to choose "freeze peach" absolutists as their moderators. @jeffjarvis I assume @Zittrain's pitch to FB addressed this?

@jdp23 @futurebird @jeffjarvis @Zittrain I agree all these things are important, but they should be enforced at the community level rather than the instance level. Take gmail for example---is that a "community"? should google be making enforcement decisions about what kinds of email to deliver? They don't; instead many different communities with different norms share the same gmail infrastructure for communication. Social media should be similar; many communities on common infrastructure.

@karger

Twitter was like one big massive instance and has moderation. I chose to leave when they pulled away from what I consider the bare minimum -- not because I care if I see that stuff personally, but because I don't want to be a part of a server without those kinds of minimum standards.

I wouldn't want to be on an instance that also hosted nazis --

@futurebird but I doubt that you have abandoned gmail, even though there are plenty of nazis sending their hate speech through it.
@futurebird @karger This is a very poor example. Email is a one to one system. It is not a social network, which is built to form communities.
@Tupp_ed @futurebird communities existed long before social media; they used alternative distribution channels such as mailing lists but the issues are the same.

@karger @futurebird OK but your analogy is bunk.

Phone lines, fax machines and email do not create communities, though they do require network level moderation (to prevent spam, harassment etc).

Social media also requires active moderation to set and maintain community standards.

@Tupp_ed @futurebird *communities* require active moderation to set and maintain community standards. our infrastructure should empower communities to make their own choice about that, not force them into one-size-fits-all moderation.

@karger @futurebird Again, all the infrastructure, especially email, is subject to moderation at a full network level.

I mean, you may not be aware of it, but it is there and crucial for that infrastructure to continue to give value.

Possibly your primary point is valid (though I’m unpersuaded) but your analogy to support it isn’t.

@Tupp_ed @futurebird yes, as I mentioned before there is "moderation" at the network level. Content with forged sender headers is blocked, illegal content such as child porn may be detected and blocked. But it's very limited. At the next level down, things like spam are *labeled* as such but delivered anyway so the end user can decide what to do about them.

@karger @futurebird Ah here. Projects such as Spamhaus ensure that literally billions of spam messages a day are blocked before they ever reach an inbox.

Sure lookit, go on. I won’t bother you further.

@Tupp_ed @futurebird Yes; I'm familiar as I've published work on spam and spam blocking. Spamhaus focuses on malware, forgery, phishing---things that violate the infrastructure contracts. Meanwhile, spam like my opportunity to save 40% buying socks, or the opportunity to open a franchise, or the plea for money from the political party, are delivered just fine. Labeled as spam or social so I can decide what to do with them.
@futurebird @karger The Twitter share icon on the bottom of many publications and posts needs to be replaced by a Mastodon share icon.
@futurebird I also agree that a platform with *no* moderation is a disaster. But I think that a platform with personalized moderation would be better than one with centralized moderation.

@karger @futurebird @jeffjarvis @Zittrain Instances are currently the primary mechanism for community in the fediverse so I'm not sure about the distinction you're making.

And Google actually does make decisions about what email to deliver and what to moderate by labeling it as social or spam.

@jdp23 @futurebird @jeffjarvis @Zittrain moderation currently conflates labeling and delivery. I'm all in favor of gmail continuing to label mail as spam or social---because they let *me* decide what to do about those labels, rather than invisibly deleting it.

for contrast, they *do* drop mail with forged sender info immediately, and I think that's the right choice because it violates the *infrastructure* contract (identifiable senders) rather than a particular community norm.
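
a small sketch of that two-layer split, with invented names (the classifier is a stand-in for illustration, not a real spam filter):

```python
# Layer 1: infrastructure-contract violations are dropped outright.
# Layer 2: content judgments become *labels*, and the recipient decides.

def looks_like_spam(body: str) -> bool:
    # Stand-in content classifier, purely for illustration.
    return "save 40%" in body.lower()

def handle(message: dict):
    # Infrastructure contract: identifiable senders only.
    if not message.get("sender_verified"):  # e.g. failed an SPF/DKIM-style check
        return None                         # dropped; never delivered
    # Community/content judgment: label it, but deliver it anyway.
    message["labels"] = ["spam"] if looks_like_spam(message["body"]) else []
    return message

print(handle({"sender_verified": False, "body": "hi"}))        # None (dropped)
print(handle({"sender_verified": True, "body": "Save 40%!"}))  # delivered, labeled "spam"
```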

@karger @futurebird @jeffjarvis @Zittrain it sounds like you think the infrastructure contract can't be about values. So do you think instances are wrong to defederate from Gab and Nazi instances?

@jdp23 I guess the turnabout question is, if a nazi group sets up their own mail server, should other mail servers refuse to exchange email with it?

Given the current state of affairs with mastodon, since there is too little control at the individual level, defederation is the best of bad choices.

but i think we'd be far better off with a social network infrastructure modeled on the email one---a reliable delivery layer *on top of which* communities can internally manage norms

@karger Defederation *is* a community decision: "We on this instance don't want anything to do with Nazi servers". That sounds like a good choice to me (not just the best of bad choices) but I guess we see it differently.

As for email servers, they already refuse to exchange email with servers and IP addresses known to be spammers, or sites with DKIM etc set up wrong. I agree that they're more tolerant of Nazis than spam, but that's just a question of which values they prioritize enforcing.

@jdp23 defederation isn't a community decision. it's entirely in the hands of the server operator.

r.e. email servers, I don't think it's a question of values so much as layers. sources that violate the email delivery contract by forging headers or sending too much volume get blocked, but there is rarely blocking based on *content*.

@karger Great conversation, thanks for taking the time.

I'd say defederation is a community decision taken by the admins (who may or may not solicit input or provide transparency) on behalf of the community. No argument that there's a lot of room for improvement in instance governance! But the same's true of email list moderation, which it sounds like you do consider to be at the community level.

On email servers, CSAM and malware are blocked based on content.

@jdp23 @futurebird @jeffjarvis @Zittrain as for the distinction i'm drawing, it's basically the usual one that computer scientists draw between the physical and logical architecture. consider email again: a particular email server might host many mailing lists, but moderation is generally considered a job for each mailing list to tackle itself, not something the email server does uniformly to all of them.

@karger @futurebird

This is analogous to how Twitch moderation works. There are service level expectations (no slurs, organized harassment, etc.) and then on a per channel basis (think instances) there are varying behavior expectations enforced by moderators for that channel (swearing? gameplay suggestions? talking about current events? sharing links?)

Service level expectations are enforced automatically when possible, but channel moderators are also responsible for enforcement.

@drewww @futurebird yes, reddit also does this, with light moderation done at the platform level and individual subreddits empowered to choose their own communal moderation standards. It's close to what I think we should have, but I think that further layers of delegation should be possible.

@karger @drewww to @futurebird’s comment about channels vs instances… as far as I can tell, an instance is not a very meaningful center of community.

I have conversations about music, design, television programs, politics, etc., with different groups of people, and I don’t imagine centering those conversations around any particular instance.

Is there an equivalent of a subreddit (a place to have a conversation around a particular topic) in the fediverse?

@karger @drewww @futurebird to be clear, I don’t know if it’s a good idea to have different places for different topics. I like having all my conversations mashed together which is why I previously gravitated toward places like twitter.

@skuwamoto @karger @futurebird

This does seem like a big difference to me. Twitch channels have a single person as an organizing point. Subreddits have a topic. That is helpful for establishing norms and decision-making.

It's clear you're on "their" territory, as opposed to the way people often say "my page" to refer to their social media presence. It does seem to me like the metaphors clash in a tricky way here.

@skuwamoto

Probably the best way is by following and using hash tags.

I'm on an instance with a lot of paleo-art people and paleontologists.

I care about math and bugs mostly and was worried for a bit I couldn't find the bug people. But I made a post that was just a long list of hash tags, then clicked them all, followed a bunch of the people I found AND followed the hash tags.

Last, tag your posts.

(Though I still could use more #ants people but that was thin on twitter too.)

@futurebird thanks!

My previous comment was in the context of moderation. People keep talking about how cool it is that different instances can have different moderation rules and it doesn’t make a lot of sense to me.

To me, picking an instance is like picking an email server.

And my question is: if I follow a topic hashtag, why would I want different people on that topic to have different moderation rules based on what server they picked?

@skuwamoto

Well, that's the missing level of moderation between the fediverse and individual servers. In practice there isn't a whole lot of difference between fediverse servers right now, although a new wave of expectations is spreading due to new users and changes in troll activity.

If you aren't a mod or an admin, beyond picking a server with a decent reputation, IDEALLY *you* shouldn't need to think much about it.

Beyond maybe reporting something like once a month.