Some quick notes on how we might build some of the essential infrastructure and governance processes that will be needed if #Mastodon is really going to be sustainable and viable as a mass-adoption social network (1/n):
a) We need to scale content moderation. A LOT. Corporate social network sites (SNS) do this by using armies of poorly paid, outsourced contractors. The #Fediverse should do it better. Perhaps by organizing a worker-owned content moderation cooperative?
a2) Smaller instances can self-moderate / use volunteer labor / whatever. But large instances will need to be able to scale, so a content mod coop (or a federated network of multiple such coops) that can be hired/contracted by larger instances would be amazing.
a3) Another option is for larger instances to hire more mods directly. Hopefully, some of the larger instances can themselves be organized as cooperatives. Probably some combination of in-house moderation & contracts w/coops would work well.
b) Funding mechanisms. It is going to take money to scale: for hosting, development, ongoing improvements, #a11y, localization, UX improvements, security, and perhaps most of all, to pay content moderators just wages.
b2) Currently, most of the money is in the form of small recurring donations to the German nonprofit that runs the largest instance. Every instance running its own Patreon is part of the puzzle, but it probably can't be the whole thing.
b3) probably there will be a mix, with large donations from individuals, private foundations, and perhaps increasingly some state actors (for example, municipalities, libraries, state agencies, etc) providing contracts. There will also be some companies that want to donate (and contribute coding time, etc).
b4) All that money flowing in, mostly to the largest instances, ideally should be governed at least in part through participatory budgeting mechanisms. Alternately (or in addition), there should be formalized governance mechanisms (elections for the board of the mastodon non-profit? liquid democracy? sortition? stakeholder board members?) to truly democratize resource allocation.
c) Now that the 'don is taking off, from DIY small community to wider adoption, intentional bad actors are in the mix at scale. We will need to take this seriously, and invest HEAVILY in various approaches to minimizing harm, constantly working to block and limit bad actors, defederate the worst instances, and ... create our wildest dreams in terms of care, follow-up, and support for community members after troll attacks!
c2) we control the fediverse, not the market, the state, or the billionaires, not surveillance capitalism, not ad markets, so why would we limit our dreams of how to create community safety to content moderation alone? Let's dream bigger. We can create (and resource) new tools, implement shared banlists, provide resources for rapid response teams and after-attack processing support, and so much more!
(pause for now as I'm heading to a budget meeting, hope to return soon with more).
@schock Great to see some intelligent forward thinking about the future of this amazing social experiment Sasha. And I love you calling it the 'don👌

@schock

Great thread. I've been concerned that *governance* is the part of the solution that will determine if #Mastodon remains viable, scalable and equitable.

The dialogue is focused so much on the work of unpaid mods, which is unfair and doesn't scale.

Really looking forward to seeing the different models that evolve here in response to community priorities and needs.

@schock I'm pretty late to this thread but this is a subject that I've been thinking about since I joined. Has anyone considered making this part of ISPs' standard service offerings, like they used to offer Usenet (another "federated" service) and email? It wouldn't stop users from picking their own servers, just like we can use external email services, but having it as part of the base offering would relieve a lot of stress on free servers, and ISPs would have a lot more control over abuse.
@dermoth @schock I want to like it, but other than, like, municipal ISPs there aren't many we can trust
@dermoth @schock I don't necessarily agree that an access provider would have any better tools for combating abuse. I think there's far less motivation for them to provide any sort of social media service, much less moderation.
@dermoth @schock That sounds like going right back to the path of being in the hand of giant, faceless, unresponsive corporations. We know already that doesn't work out very well.

@lakelady @schock I'm not sure about the US, but in Canada there are lots of smaller ISPs; most lease the lines from the big players (it's regulated by the CRTC)... Pretty sure it's very similar in Europe... France at least (you can even choose your electricity provider there!)

You always have the choice to pick your own instance, and even run one yourself. It's not a lock-in but I think it could help with mass adoption...

@schock I’d like to have a third-party moderation service which can be contracted by any social media platform, with tools to review content and take action to enforce community rules. Actions could be reversed with an appeal process. And I believe actions taken by humans could be used to train ML models to scale moderation. I’d also like to be able to directly hire a “bot” which would handle moderation for me. It should also train an ML model. cc @cd24 @seb
@schock @cd24 @seb I also think governments could run their own official Mastodon services much like any .gov website is run. Large companies could also do this for their own accounts which provide customer service and marketing. Let them fund what they use. They can also fund their own moderation.
@brennansv @schock @cd24 @seb yes this! I keep pinging usds hoping they'll do this. A fair few govts in Germany are. https://vis.social/@nrchtct/109388940172606478
Marian Dörk (@[email protected])

@[email protected] @[email protected] the privacy commissioner of the state Baden-Württemberg is hosting an instance open only for state institutions: https://bawü.social/ there are also quite a few other instances referring to cities, regions and states, but they are often independently run: https://nrw.social https://berlin.social/ https://ruhr.social/ https://freiburg.social/


@brennansv @schock @cd24 @seb yes (need to catch up on the whole thread) - I think hosting an instance and moderation may be separate services and both likely candidates for a government contractor (and as an offering for other orgs and businesses).

It will take more than just setting up the servers. Governments likely need a plan for archiving & record retention (as will any regulated businesses), and for managing shared accounts & access/security.

@brennansv @schock @cd24 @seb and if it wasn’t clear from my reply I’m seriously looking into what it would take to start such a business (and have ideas for additional features). It may need to start with a different ActivityPub platform (or may need to support a range of them) but I think it will be hugely valuable for businesses, governments, schools, nonprofits and orgs like unions to run their own social media on the Fedisphere
@Rycaut Do it. I know we need it. Many have had to leave social media because it became such a bad experience for them. Several actresses have gone through terrible experiences. If they could have used a service to moderate it maybe they could still engage with fans in a positive way.
@Rycaut I want all social media platforms to support a moderation API. One example, if I run a YouTube channel I could choose a moderation service to moderate the comments with my policies. Maybe there would be a selection of community rules I could choose to apply.
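A minimal sketch of what such a moderation API contract could look like. Everything here is hypothetical (no such standard exists yet): a channel owner picks a policy, the service returns reviewable decisions, and an appeal path can reverse an action, with the human-reviewed log available as future training data.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    HIDE = "hide"
    REMOVE = "remove"

@dataclass
class Decision:
    comment_id: str
    action: Action
    reason: str
    reversed: bool = False  # set True if an appeal succeeds

class ModerationService:
    """Hypothetical third-party moderator, configured with the channel
    owner's chosen policy (a banned-term list here, standing in for a
    real set of community rules)."""
    def __init__(self, banned_terms):
        self.banned_terms = {t.lower() for t in banned_terms}
        self.log: list[Decision] = []  # reviewed decisions, usable as ML training data

    def review(self, comment_id: str, text: str) -> Decision:
        hit = any(t in text.lower() for t in self.banned_terms)
        d = Decision(comment_id,
                     Action.REMOVE if hit else Action.ALLOW,
                     "matched banned term" if hit else "ok")
        self.log.append(d)
        return d

    def appeal(self, comment_id: str) -> None:
        # Reversal path: a human reviewer can undo any prior action.
        for d in self.log:
            if d.comment_id == comment_id and d.action != Action.ALLOW:
                d.action, d.reversed = Action.ALLOW, True
```

The point is the shape of the contract, not the keyword matching: any platform exposing `review`/`appeal` could delegate to a service like this.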
@brennansv That could be nice but is also immensely complicated (YouTube, for example, has to adhere to rules from many different countries: EU privacy rules, German-specific rules, etc.), so I'd imagine it might be a baseline they have to do + additional options chosen by the creator. The challenge also is how to do this without abusing the moderators & without encoding various problems via biased AI, etc. Not an easy challenge in the least.
@Rycaut MKBHD made a video about this. There is a script he can run to clean up his comments section. https://youtu.be/1Cw-vODp-8Y

@brennansv @schock @cd24 @seb

It's already happening. Some early forays (the Canadian one is an NGO):

cira.ca/newsroom/corporate/cira-teams-mastodon-canada-support-canadian-digital-communities

https://social.network.europa.eu/explore

https://social.bund.de/explore

@brennansv @schock @cd24 @seb Thinking even more globally than social media (or with a very broad definition of what it is), what could be awesome would be to have an independent, grassroots-led #identity certification of internet users. Presently, the big tech companies are in effect the ones offering this service, e.g. through Google accounts.
The idea is a service that would be non-governmental, not-for-profit, and that could link together the different activities of a single person.
@brennansv @schock @cd24 @seb
Then the likes of Airbnb, Amazon, ... wouldn't be the ones directly checking your identity but would ask this third party to check the credentials one gives.
That would allow us to have both safety (since being banned somewhere would be reported to the ID provider) and privacy, if the identity service is to be trusted. Now this ID provider could itself be a federated service, and each person could choose which instance of it they trust.
@brennansv @schock @cd24 @seb
Privacy comes from the fact that the social medium/company only gets the info they need certified (e.g. being the same person == identity, or which city you live in) and not what they don't need (e.g. passport number, street number, DoB...).
The right to error would come on top of that, from the fact that you can build a new identity if one is banned... but safety comes from the hassle of setting up a new ID, since you need to start over a new life everywhere.
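The selective-disclosure idea above can be sketched very roughly. This is a toy model under stated assumptions (the class names, the two supported claims, and the ban-notification hook are all invented for illustration; a real system would use signed credentials, not a shared dict):

```python
from dataclasses import dataclass

@dataclass
class Identity:
    # The full record, held only by the ID provider the user chose.
    handle: str
    birth_year: int
    city: str
    passport_no: str

class IdProvider:
    """Hypothetical federated identity provider: relying parties
    (a marketplace, a social instance) ask it to certify specific
    claims and never see the underlying record."""
    def __init__(self):
        self.records: dict[str, Identity] = {}
        self.banned: set[str] = set()

    def register(self, ident: Identity) -> None:
        self.records[ident.handle] = ident

    def certify(self, handle: str, claim: str, current_year: int = 2022) -> dict:
        if handle in self.banned:
            return {"claim": claim, "ok": False, "reason": "banned"}
        ident = self.records[handle]
        if claim == "over_18":
            return {"claim": claim, "ok": current_year - ident.birth_year >= 18}
        if claim == "city":
            return {"claim": claim, "ok": True, "value": ident.city}
        return {"claim": claim, "ok": False, "reason": "unsupported claim"}

    def notify_ban(self, handle: str) -> None:
        # Safety side: a ban reported by one service becomes visible to all.
        self.banned.add(handle)
```

Note that the answer to `over_18` never exposes the birth year or passport number, which is the privacy property being argued for.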
@jocelyn_etienne Apple is working with a few US states on a standard for a digital ID. If you go into a bar it would just confirm you’re over the age limit and if you get pulled over in traffic it only gives the police what is required. I’d like to leverage this standard. https://www.apple.com/newsroom/2021/09/apple-announces-first-states-to-adopt-drivers-licenses-and-state-ids-in-wallet/
@brennansv Funny how things feel different on either side of the Atlantic. In France, and I guess in the UK and all of Europe, people would rather allow the police to access their details than private companies.
That said, it's also an issue here to ensure that a specific government body accesses only what's relevant to it.
@jocelyn_etienne I believe one of the advantages of the digital ID is it will provide the necessary details for the situation. For a traffic stop your photo, name and proof of insurance could all be provided.
I expect once the standard starts to catch on it will spread quickly like Apple Pay has for years. I now use it nearly everywhere.
@brennansv I share your views on the usefulness of having a digital ID that provides just the right info to the person entitled. Where I am much more cautious is about who manages my ID data: I don't want Apple or Google to do that for me.
Same for payments: I'm not using Apple Pay or similar services, and for small purchases (and I do most of my everyday purchases at tiny shops and market stalls) I use cash rather than card. My bank doesn't have to handle all my life.
@jocelyn_etienne @brennansv One idea I had the other day on funding a moderation co-op. I suspect there might be stiff resistance to this idea, but here goes: For large instances with moderation challenges, how about a system where new, unvetted, unaffiliated users have to post a bond guaranteeing good behavior? Say, $5 or something. If the user violates the rules, the bond is forfeited, funding moderation. A bonded User A could invite User B, and User B is covered under User A's bond. This can chain.
@jocelyn_etienne @brennansv One nice thing about this system is that instances can individually opt-in to this scheme and it could literally start with one instance.
@jocelyn_etienne @brennansv A no-money version of this system is possible as well (but it doesn't help solve the problem of funding moderation). An instance could become invite-only. If I invite a bad actor, it has repercussions on me (perhaps offender gets a 10-day suspension, and I get a fraction of that). I'm guessing these ideas have been floated somewhere; I'm a latecomer to this conversation.
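The bond-chaining mechanics can be sketched in a few lines. This is purely illustrative (the class name, the $5 default, and the forfeiture rule are assumptions from the posts above, not an existing system):

```python
class BondRegistry:
    """Hypothetical good-behavior bonds: a new user either posts a
    bond or is invited by (and covered under) a bonded user. A rule
    violation forfeits the covering bond into the moderation fund."""
    def __init__(self, bond_amount: int = 5):
        self.bond_amount = bond_amount
        self.covered_by: dict[str, str] = {}  # user -> bond holder
        self.mod_fund = 0

    def post_bond(self, user: str) -> None:
        self.covered_by[user] = user  # self-covered

    def invite(self, inviter: str, invitee: str) -> None:
        if inviter not in self.covered_by:
            raise ValueError("inviter has no bond coverage")
        # Chaining: B invited by A is covered under A's bond holder,
        # however long the invite chain gets.
        self.covered_by[invitee] = self.covered_by[inviter]

    def report_violation(self, user: str) -> str:
        holder = self.covered_by.pop(user)  # violator loses coverage
        self.mod_fund += self.bond_amount   # bond funds moderation
        return holder
```

One design question this makes visible: with chaining, a single bad invite deep in the chain costs the original bond holder, which is exactly the accountability pressure (and the adoption friction) the idea trades on.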
@jocelyn_etienne @brennansv One case where identity, credentials, conflicts of interest, etc. really matter is moderating on matters of fact. A primary goal with my experimental OpenCheck system is to enable a diverse set of experts to weigh in on matters of fact, at scale, in a bottom-up, global, non-authoritarian manner. Then somehow connect that with mis/disinformation spreading across a system. Hard problem!
@brennansv @schock @cd24 @seb I would love to participate in something like this. It could be a remarkable opportunity to bring people like me who struggle in a traditionally structured environment back by means of worker cooperatives.
@brennansv @schock @cd24 @seb I like this idea, but I also worry about false positives from AIs. One way to leverage/scale moderation decisions without as many downsides of false-positives: compute an embedding for every user using the follow+boost graph, sanitized using previous moderation actions. You'd use something like node2vec for this. Then, you only show in my feed posts from the nearest X people (10k, 1M, 10M -- depends on risk tolerance of a particular user).
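A toy version of that filter, using raw follow-vectors and cosine similarity as a stand-in for a learned node2vec embedding (function names, and the idea of ranking by follow-overlap, are illustrative; a real system would train an embedding on the full follow+boost graph):

```python
import math

def follow_vector(user, users, follows):
    # One-hot vector over who each user follows; a real system would
    # substitute a learned graph embedding (e.g. node2vec) here.
    return [1.0 if (user, u) in follows else 0.0 for u in users]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def nearest(user, users, follows, k):
    """Return the k users closest to `user` in follow-space; only their
    posts would be shown, with k set by the reader's risk tolerance."""
    me = follow_vector(user, users, follows)
    scored = [(cosine(me, follow_vector(u, users, follows)), u)
              for u in users if u != user]
    scored.sort(reverse=True)
    return [u for _, u in scored[:k]]
```

Because the cutoff k is per-user, this avoids hard moderation decisions (and their false positives) entirely for readers who choose a small k, at the cost of a narrower feed.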
@beatty @schock @cd24 @seb The scoring system would collect data from all kinds of signals automatically: mentions, favorites, posts, etc. How much you interact, and who you interact with, influences your status, just like in real life. For me, anyone whose posts I have replied to and liked should have a higher score relative to me.
@beatty @schock @cd24 @seb If I could select filtering settings to hide mentions from anyone I have not met it could improve my experience if I were being targeted for abuse as I know many have been. The strongest signal is following someone. Either I follow someone or those I follow are following someone who wants to interact with me. Someone entirely outside my network simply would have a lower score.
@beatty @schock @cd24 @seb I suppose to make this system familiar we would translate it into a common concept. I'd define this threshold as “have we met”, which would be a useful filter for engaging in conversations. The network could maintain a list of “who I've met”, which is defined by the signals from mentions, favorites and the follow graph. I want it to be automatic because this would be difficult to maintain manually.
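A rule-based sketch of that “have we met” score, combining the signals named in the posts above. The weights and the threshold are invented for illustration; only the signals themselves (direct follow, friend-of-friend, mentions, favorites) come from the thread:

```python
def have_we_met(me, other, mentions, favorites, follows):
    """Combine interaction signals into a familiarity score.
    `mentions`/`favorites` map (me, other) -> count; `follows` is a
    set of (follower, followee) pairs. Weights are illustrative."""
    score = 0.0
    if (me, other) in follows:
        score += 3.0  # strongest signal: I follow them directly
    # friend-of-friend: someone I follow follows them
    if any((me, f) in follows and (f, other) in follows
           for f in {a for a, _ in follows}):
        score += 1.5
    score += 1.0 * mentions.get((me, other), 0)
    score += 0.5 * favorites.get((me, other), 0)
    return score

def allow_mention(me, other, mentions, favorites, follows, threshold=1.0):
    # Filter: hide mentions from accounts entirely outside my network,
    # which have no signal at all and so score below the threshold.
    return have_we_met(me, other, mentions, favorites, follows) >= threshold
```

An account with zero signals scores 0.0 and falls below any positive threshold, which is the abuse-filtering behavior described: strangers can't land in your mentions until some connection exists.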

@schock This is an important thread. As a newcomer to this community, it seems to me that a lot of the guidance for prospective new admins is purely technical — what is there to guide admins through the social, legal and governance questions? Having ways to support admins, including options for shared institutions, seems key.

“Looking at running a social network as running software” isn’t that distant from one of the issues the new Twitter leadership has.

How to run a small social network site for your friends

This document exists to lay out some general principles of running a small social network site that have worked for me. These principles are related to community building more than they are related to specific technologies.

@bobkopp @schock @darius (not to say that more resources, spaces and institutions are not needed! Just sharing a useful resource along those lines)
@cameralibre @schock @darius Seems useful, but targeted at very small instances — I don’t see much there about governance (do we really want all our shared online spaces to be benevolent dictatorships?), moderating at scale, or legal protection (DMCA for US-based instances, etc).
@bobkopp @cameralibre @schock fwiw I am working on followups for all three of those (3 different projects)
@schock I agree the issue is critically important, and I am listening, particularly to the points about resources, barriers to access, and challenges of moderation. What troubles me is that it sounds as though you are proposing to impose a massive, expensive, central bureaucracy on what is designed to be a decentralized system, in which single instances do not scale well socially or technologically, and which would replicate the problem of subjecting people to the whims of a single admin.

@schock The idea that we're going to need some form of governance to manage the fediverse* is something I've been standing in the corner and frantically waving my arms about since 2017 or so ;-)

I have written a lot of conceptual design stuff, but not a whole lot of code yet. I'm definitely interested in being part of this discussion, in any case.

* ...and really, it goes way beyond that... [resists temptation to jump on hobby-horse and ride off into the sunset lecturing]

cc: @dredmorbius

@woozle @schock @dredmorbius gotta add to the core protocol a way to make #fediblock easier to distribute
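A distribution mechanism like that could be as simple as instances subscribing to shared JSON feeds and merging them into their local domain policy. This is a sketch, not the actual #fediblock or Mastodon format; the domain-to-severity mapping and the severity names ("silence"/"suspend") are borrowed from Mastodon's admin vocabulary as an assumption:

```python
import json

def merge_blocklists(local: dict, feeds: list) -> dict:
    """Hypothetical shared-blocklist merge: each feed is a JSON
    document mapping domain -> severity ('silence' or 'suspend').
    Merging keeps the most severe action seen for each domain."""
    rank = {"silence": 1, "suspend": 2}
    merged = dict(local)
    for feed in feeds:
        for domain, severity in json.loads(feed).items():
            if rank[severity] > rank.get(merged.get(domain), 0):
                merged[domain] = severity
    return merged
```

Escalate-only merging means a subscribed feed can never quietly *un*-block a domain the local admins chose to block, which keeps final authority with each instance.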
@schock it would definitely be great to have @woozle and other long-time fedi people from sites that have taken an anti-oppressive approach involved
@schock gosh, I'm still trying to figure out how to use the hashtags thing. But yes, I'm getting the sense that this place has potential.
@schock exactly! A lot of the large scale harms that we see in platforms like Twitter and fb are related to the emphasis and deliberate design choices to grow as quickly and seamlessly as possible. We don't have that here, but that won't stop all the abuse. We definitely need cross-instance TOE (a thing we came up with at #buytwitter - Terms of Engagement over TOS). We also need a focus on those who are harmed, their needs, and on transforming community cultures that enable harm.
@schock content mod will never be enough. I wrote abt what an alternative based on community care might look like for Logic: https://logicmag.io/care/do-no-harm/
We also have work where my PhD student Sijia Xiao took restorative justice training and did pre-conference interviews to learn what people who are hurt online need: safety, support, retribution, transformation https://applexiao.com/CHI22.pdf
@schock We need a lot more work looking at specific vulnerable groups. For instance in some of our upcoming work at CSCW we spoke to visibly online Muslim Americans (journalists, activists, politicians, etc.) and found that the scale and duration of the harm they experienced was so much that they had given up on reporting things to platforms. We need ways to protect people when they're a victim of a mass harassment campaign and that requires cross-instance coordination.
@schock also @amyadele and @ntnsndr 's recent paper on subsidiarity in governance is really interesting in the context of Mastodon https://journals.sagepub.com/doi/full/10.1177/20563051221126041
@Niloufar @schock thanks for posting, this is dense and I’m