Some quick notes on how we might build some of the essential infrastructure and governance processes that will be needed if #Mastodon is really going to be sustainable and viable as a mass-adoption social network (1/n):
a) We need to scale content moderation. A LOT. Corporate social network sites (SNS) do this by using armies of poorly paid, outsourced contractors. The #Fediverse should do it better. Perhaps by organizing a worker-owned content moderation cooperative?
a2) Smaller instances can self-moderate / use volunteer labor / whatever. But large instances will need to be able to scale, so a content mod coop (or a federated network of multiple such coops) that can be hired/contracted by larger instances would be amazing.
a3) Another option is for larger instances to hire more mods directly. Hopefully, some of the larger instances can themselves be organized as cooperatives. Probably some combination of in-house moderation & contracts w/coops would work well.
b) Funding mechanisms. It is going to take money to scale: for hosting, development, ongoing improvements, #a11y, localization, UX improvements, security, and perhaps most of all, to pay content moderators just wages.
b2) Currently, most of the money comes in as small recurring donations to the German nonprofit that runs the largest instance. Every instance running its own Patreon is part of the puzzle, but it probably can't be the whole thing.
b3) Probably there will be a mix, with large donations from individuals and private foundations, and perhaps increasingly some state actors (for example, municipalities, libraries, state agencies, etc.) providing contracts. There will also be some companies that want to donate (and contribute coding time, etc.).
b4) All that money flowing in, mostly to the largest instances, ideally should be governed at least in part through participatory budgeting mechanisms. Alternatively (or in addition), there should be formalized governance mechanisms (elections for the board of the Mastodon nonprofit? liquid democracy? sortition? stakeholder board members?) to truly democratize resource allocation.
c) Now that the 'don is taking off, from DIY small community to wider adoption, intentional bad actors are in the mix at scale. We will need to take this seriously, and invest HEAVILY in various approaches to minimizing harm, constantly working to block and limit bad actors, defederate the worst instances, and ... create our wildest dreams in terms of care, follow-up, and support for community members after troll attacks!
c2) we control the fediverse, not the market, the state, or the billionaires, not surveillance capitalism, not ad markets, so why would we limit our dreams of how to create community safety to content moderation alone? Let's dream bigger. We can create (and resource) new tools, implement shared banlists, provide resources for rapid response teams and after-attack processing support, and so much more!
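c3) To make the shared-banlist idea concrete, here is a minimal sketch, assuming a community-maintained blocklist published as CSV in Mastodon's domain-block export format and applied through Mastodon 4.x's admin domain-blocks endpoint. The blocklist URL and admin token are placeholders, not real infrastructure:

```python
# Sketch: pull a community-maintained blocklist and apply it via the
# Mastodon admin API. BLOCKLIST_URL and ADMIN_TOKEN are placeholders.
import csv
import io

import requests

BLOCKLIST_URL = "https://example.org/shared-blocklist.csv"  # hypothetical list
INSTANCE = "https://your.instance"
ADMIN_TOKEN = "..."  # OAuth token with admin scope

resp = requests.get(BLOCKLIST_URL, timeout=30)
resp.raise_for_status()

for row in csv.DictReader(io.StringIO(resp.text)):
    # Mastodon's domain-block export uses columns like #domain and
    # #severity (one of: noop, silence, suspend).
    requests.post(
        f"{INSTANCE}/api/v1/admin/domain_blocks",
        headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
        json={
            "domain": row["#domain"],
            "severity": row.get("#severity", "silence"),
            "public_comment": "imported from shared blocklist",
        },
        timeout=30,
    )
```

In practice an instance admin would want to review each entry rather than apply a list blindly, but the plumbing really is this small.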
@schock I’d like to have a third-party moderation service that can be contracted by any social media platform, with tools to review content and take action to enforce community rules. Actions could be reversed through an appeal process. And I believe actions taken by humans could be used to train ML models to scale moderation. I’d also like to be able to directly hire a “bot” that would handle moderation for me. It should also train an ML model. cc @cd24 @seb
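A hedged sketch of that “humans label, model scales” idea: train a simple text classifier on past (reported text, action taken) pairs, use it only to triage, and feed appeal reversals back in as corrected labels. The data file and label set here are hypothetical; the pipeline is standard scikit-learn:

```python
# Sketch: learn a triage model from past human moderation decisions.
# mod_actions.csv (hypothetical) has columns: text, action
# where action is e.g. "none", "limit", "remove".
import csv

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts, actions = [], []
with open("mod_actions.csv", newline="") as f:
    for row in csv.DictReader(f):
        texts.append(row["text"])
        actions.append(row["action"])

# TF-IDF features + logistic regression: a deliberately simple baseline.
model = make_pipeline(
    TfidfVectorizer(min_df=2),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, actions)

# The model only triages: anything not clearly "none" goes to a human,
# and every appeal reversal becomes a new, corrected training example.
probs = model.predict_proba(["some newly reported post"])
print(dict(zip(model.classes_, probs[0].round(3))))
```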
@brennansv @schock @cd24 @seb Thinking even more globally than social media (or with a very broad definition of what it is), what could be awesome would be an independent, grassroots-led #identity certification of internet users. Presently, the Big Tech companies are in effect the ones offering this service, e.g. through Google accounts. The idea is a service that would be non-governmental and not-for-profit, and that could link together the different activities of a single person.
@brennansv @schock @cd24 @seb Then the likes of Airbnb, Amazon, ... wouldn't be the ones directly checking your identity but would ask this third party to check the credentials one gives. That would allow us to have both safety (since being banned somewhere would be reported to the ID provider) and privacy, if the identity service is to be trusted. Now this ID provider could itself be a federated service, and each person could choose which instance of it they trust.
@brennansv @schock @cd24 @seb Privacy comes from the fact that the social platform/company only gets the info it needs certified (e.g. being the same person == identity, or which city you live in) and not what it doesn't need (e.g. passport number, street address, DoB...). A right to error would come on top of that, from the fact that you can build a new identity if one is banned... but safety comes from the hassle of setting up a new ID, since you need to start your life over everywhere.
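One way to make "certify only what's needed" concrete: the ID provider signs a narrow claim (e.g. "same person as alice@example.social" or "lives in Berlin") rather than handing over the underlying documents. A minimal sketch with PyNaCl's Ed25519 signatures; the claim format and party names are made up for illustration:

```python
# Sketch: selective-disclosure attestation with Ed25519 (PyNaCl).
# The ID provider signs only the claim a relying party needs, never
# the underlying documents (passport number, DoB, street address...).
import json

from nacl.signing import SigningKey

# --- ID provider side (keys would be long-lived and published) ---
provider_key = SigningKey.generate()
provider_pub = provider_key.verify_key

claim = json.dumps(
    {"subject": "alice@example.social", "claim": "city=Berlin"},
    sort_keys=True,
).encode()
attestation = provider_key.sign(claim)

# --- Relying party side (e.g. a rental platform) ---
# It learns the certified city and nothing else, trusting only the
# provider's public key (which could itself be served federatedly).
verified = provider_pub.verify(attestation)
print(json.loads(verified))
```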