Some quick notes on how we might build some of the essential infrastructure and governance processes that will be needed if #Mastodon is really going to be sustainable and viable as a mass-adoption social network (1/n):
a) We need to scale content moderation. A LOT. Corporate social network sites (SNS) do this by using armies of poorly paid, outsourced contractors. The #Fediverse should do it better. Perhaps by organizing a worker-owned content moderation cooperative?
a2) Smaller instances can self-moderate / use volunteer labor / whatever. But large instances will need to be able to scale, so a content mod coop (or a federated network of multiple such coops) that can be hired/contracted by larger instances would be amazing.
a3) Another option is for larger instances to hire more mods directly. Hopefully, some of the larger instances can themselves be organized as cooperatives. Probably some combination of in-house moderation & contracts w/coops would work well.
b) Funding mechanisms. It is going to take money to scale: for hosting, development, ongoing improvements, #a11y, localization, UX improvements, security, and perhaps most of all, to pay content moderators just wages.
b2) Currently, most of the money comes in the form of small recurring donations to the German nonprofit that runs the largest instance. Every instance running its own Patreon is part of the puzzle, but it probably can't be the whole thing.
b3) Probably there will be a mix: large donations from individuals, private foundations, and perhaps increasingly some state actors (for example, municipalities, libraries, state agencies, etc.) providing contracts. There will also be some companies that want to donate (and contribute coding time, etc.).
b4) All that money flowing in, mostly to the largest instances, ideally should be governed at least in part through participatory budgeting mechanisms. Alternately (or in addition), there should be formalized governance mechanisms (elections for the board of the mastodon non-profit? liquid democracy? sortition? stakeholder board members?) to truly democratize resource allocation.
c) Now that the 'don is taking off, from DIY small community to wider adoption, intentional bad actors are in the mix at scale. We will need to take this seriously, and invest HEAVILY in various approaches to minimizing harm, constantly working to block and limit bad actors, defederate the worst instances, and ... create our wildest dreams in terms of care, follow-up, and support for community members after troll attacks!
c2) We control the fediverse: not the market, the state, or the billionaires, not surveillance capitalism, not ad markets. So why would we limit our dreams of community safety to content moderation alone? Let's dream bigger. We can create (and resource) new tools, implement shared banlists, provide resources for rapid response teams and after-attack processing support, and so much more!
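As a rough illustration of the shared-banlist idea above, here is a minimal Python sketch. The CSV shape loosely mirrors Mastodon's domain-block export format, but the consensus rule (`min_votes`) and all the domain names are invented for illustration, not an existing feature:

```python
# Hypothetical sketch: merging shared blocklists from peer instances.
# The CSV columns (domain, severity) loosely mirror Mastodon's
# domain-block export; the min_votes consensus rule is an assumption.
import csv
import io
from collections import Counter

def merge_blocklists(csv_texts, min_votes=2):
    """Defederate a domain only if at least `min_votes` trusted
    peers suspend it -- a simple consensus rule that guards
    against a single instance's false positives."""
    votes = Counter()
    for text in csv_texts:
        for row in csv.DictReader(io.StringIO(text)):
            if row.get("severity") == "suspend":
                votes[row["domain"].strip().lower()] += 1
    return sorted(d for d, n in votes.items() if n >= min_votes)

# Made-up example blocklists from three hypothetical peer instances.
peer_a = "domain,severity\nspam.example,suspend\nedgy.example,silence\n"
peer_b = "domain,severity\nspam.example,suspend\ntroll.example,suspend\n"
peer_c = "domain,severity\ntroll.example,suspend\n"

print(merge_blocklists([peer_a, peer_b, peer_c]))
# ['spam.example', 'troll.example'] -- each has two suspend votes;
# edgy.example is only silenced, so it never reaches defederation.
```

A rapid-response team could maintain the set of trusted peers and tune `min_votes` to trade off speed against false positives.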
(pause for now as I'm heading to a budget meeting, hope to return soon with more).
@schock I’d like to have a third-party moderation service that could be contracted by any social media platform, with tools to review content and take action to enforce community rules. Actions could be reversed with an appeal process. And I believe actions taken by humans could be used to train ML models to scale moderation. I’d also like to be able to directly hire a “bot” that would handle moderation for me. It should also train an ML model. cc @cd24 @seb
@brennansv @schock @cd24 @seb I like this idea, but I also worry about false positives from AIs. One way to leverage/scale moderation decisions without as many of the downsides of false positives: compute an embedding for every user using the follow+boost graph, sanitized using previous moderation actions. You'd use something like node2vec for this. Then, you only show in my feed posts from the nearest X people (10k, 1M, 10M, depending on the risk tolerance of a particular user).
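A toy sketch of the nearest-neighbors filtering step described above, assuming the embeddings were already produced by something like node2vec over the follow+boost graph. The 2-d vectors and usernames here are invented to show only the filtering mechanics:

```python
# Toy sketch of the nearest-X feed filter. Real embeddings would come
# from node2vec over the follow+boost graph; these 2-d vectors are
# made up to illustrate the ranking step only.
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest(embeddings, me, k):
    """Return the k users closest to `me` in embedding space --
    the only accounts whose posts would appear in my feed."""
    others = [(cosine(embeddings[me], vec), user)
              for user, vec in embeddings.items() if user != me]
    return [user for _, user in sorted(others, reverse=True)[:k]]

embeddings = {            # hypothetical node2vec output
    "me":      (1.0, 0.1),
    "friend":  (0.9, 0.2),
    "mutual":  (0.7, 0.5),
    "distant": (-0.8, 0.9),
}
print(nearest(embeddings, "me", 2))  # ['friend', 'mutual']
```

The risk-tolerance knob from the post above is just `k`: a targeted user might set it to their closest 10k neighbors, while others open the feed up to millions.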
@beatty @schock @cd24 @seb The scoring system would collect data from all kinds of signals automatically: mentions, favorites, posts, etc. How much you interact, and who you interact with, would influence your status, just like in real life. For me, anyone whose posts I have replied to and liked should have a higher score relative to me.
@beatty @schock @cd24 @seb If I could select filtering settings to hide mentions from anyone I have not met, it could improve my experience if I were being targeted for abuse, as I know many have been. The strongest signal is following: either I follow someone, or those I follow are following someone who wants to interact with me. Someone entirely outside my network would simply have a lower score.
@beatty @schock @cd24 @seb I suppose, to make this system familiar, we would translate it into a common concept. I'd define this threshold as “have we met,” which would be a useful filter for engaging in conversations. The network could maintain a list of “who I've met,” defined by the signals from mentions, favorites, and the follow graph. I want it to be automatic, because this would be difficult to maintain manually.
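A hypothetical sketch of that “have we met” score, built from the interaction signals described in the last few posts. The weights and threshold are invented for illustration; following is weighted highest, as suggested above:

```python
# Hypothetical "have we met" score. The signal weights and the
# threshold are made-up values; following is deliberately weighted
# highest, per the thread's suggestion that it is the strongest signal.
WEIGHTS = {"follow": 5, "reply": 2, "favourite": 1, "mention": 1}

def have_we_met(interactions, threshold=5):
    """interactions: list of (signal, count) pairs between me and
    another account. Returns True if their weighted sum clears the
    threshold, so their mentions reach my notifications."""
    score = sum(WEIGHTS.get(sig, 0) * n for sig, n in interactions)
    return score >= threshold

# Someone I follow: instantly "met".
print(have_we_met([("follow", 1)]))                   # True
# A stranger with a single mention: filtered out.
print(have_we_met([("mention", 1)]))                  # False
# Repeated friendly interaction accumulates past the threshold.
print(have_we_met([("reply", 2), ("favourite", 2)]))  # True
```

Because the score is computed automatically from signals the server already has, the “who I've met” list needs no manual upkeep, which matches the goal stated above.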