Mastodon's federated design and lack of moderation are huge weaknesses. I'm white and male, and therefore don't get much abuse.

But Mastodon's idea of how to deal with abuse is literally 'self help', which is not scalable and not viable long term. Some niche communities may successfully migrate, but overall it won't work.

Lot of other architectural problems too.

My recommendation: keep your Twitter account; when it finally goes Chapter 11 and competent management takes over, move back.

@ncweaver Can you elaborate on what you mean by 'self help' not being scalable?
@ncweaver Agreed. Musk firing all the content moderation/T&S people, and mastodon relying only on volunteers to do the same job are effectively the same thing. Both trivialize the problem.
@ncweaver Also, you’re not talking much about racism, racial/social justice, and such, nor are you an activist in those areas. Mastodon does have a history of being a safe space for some marginalised peoples, but IMO the expectation is that those people stay quiet & talk amongst themselves rather than advocate, educate others, agitate, etc.
@ncweaver So far, I am not seeing visibly overt racist, misogynistic, homophobic, transphobic, antisemitic, or other bigoted toots or hashtags. (I cannot search toot contents, only hashtags. Also, I don’t know what’s DMed.) So instance moderation might work in one way… BUT in other ways it can silence those talking & working against bigotry, hatred, etc. via content moderation, CW tagging insistence, and the like. Sorta like telling peaceful protesters to whisper & cover up their signs.

@JonAbolins @ncweaver I agree. I think also the thematic communities on different instances help. Those immediately in your surroundings tend to be more similarly minded and with similar sensibilities to your own.

And because most people are decent, each instance tends to be populated by decent people. This may force toxic users to aggregate in faraway, easy-to-block instances.

Also, for this reason I think the Mastodon model may have a naturally higher resilience to toxicity, and hence lower requirements on moderation effort.

@ncweaver That may well be, but moderation is in some ways social media's hard problem anyway. When I used FB's tools to report a Greek neo-Nazi for blatant antisemitism, I was told I could mute or unfriend him. When I've reported people on Twitter for similar things, I'm not sure I've even gotten a response.
@ncweaver Finally, people who are seeking virality to be heard are going to be severely disappointed. To catch journalistic attention, need to DM them & have them follow you. A mixed deal. Also, researchers who want to study Mastodon environs will have challenges.
@JonAbolins Crawling it is not going to be a huge problem; it's just that Twitter gave researchers "Easy Mode", where you just ask nicely and get the firehose...
@ncweaver Yes, crawling instances and/or hashtags is an avenue. Not so much a technical problem as a social one on Mastodon. Twitter users with public feeds are more accustomed to “it’s publicly visible, therefore it can be cited & quoted without explicit consent”. For Mastodon norms, it’s often seen as a significant breach.
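To illustrate the "crawling is not a huge technical problem" point: Mastodon instances expose a public timeline over a documented REST endpoint (`GET /api/v1/timelines/public`), so a researcher can pull public toots with nothing but the standard library. This is only a sketch; `mastodon.social` is just an example instance, and the social-norms concern above still applies to anything you collect.

```python
import json
import urllib.request


def timeline_url(instance: str, limit: int = 20) -> str:
    """Build the public-timeline endpoint URL for a given instance."""
    return f"https://{instance}/api/v1/timelines/public?limit={limit}"


def summarize(statuses: list) -> list:
    """Keep only a few fields a researcher might cite; drop the rest."""
    return [
        {
            "id": s["id"],
            "account": s["account"]["acct"],
            "created_at": s["created_at"],
            "url": s["url"],
        }
        for s in statuses
    ]


if __name__ == "__main__":
    # Live network fetch; any instance with an open public timeline works.
    with urllib.request.urlopen(timeline_url("mastodon.social")) as resp:
        for row in summarize(json.load(resp)):
            print(row)
```

Note there is no firehose here: you would have to repeat this per instance (or per hashtag), which is exactly the extra effort the thread is describing.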

@ncweaver Hard disagree, Mastodon moderation is already much better than Twitter’s ever was.

Usually it’s thrown around that you need at least 1 full time moderator per 100k active users

Based on 2020 numbers Twitter had around 1500 FTE in moderation, that’s around one for every 150k DAU.

Mastodon has 7.5 million users atm, and thousands of instances, but let’s say for argument’s sake 3000. That’s one volunteer moderator for every 2500 users.

Volunteers have already out-scaled Twitter.

@ncweaver assuming a volunteer mod is only maybe 10% as effective as a full time employee, it’s still 25k vs 150k. And this assumes only 3k mods for 7.5m accounts, where there are a.) more than 3k instances b.) more mods per instance than 1.
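The back-of-the-envelope arithmetic in the two posts above can be checked in a few lines, using only the figures quoted in this thread (the 3,000-instance and 10%-effectiveness numbers are the thread's own deliberately conservative assumptions):

```python
# Figures as stated in the thread, not independently verified.
users_per_twitter_mod = 150_000     # ~1500 FTE mods for Twitter's 2020 DAU
mastodon_users = 7_500_000
mastodon_mods = 3_000               # conservative: one volunteer per instance

# Raw coverage: users per volunteer moderator on Mastodon.
users_per_volunteer = mastodon_users / mastodon_mods        # 2,500

# Even if a volunteer is only 10% as effective as a paid full-timer,
# the effective coverage is still far better than Twitter's.
effective_users_per_fte = users_per_volunteer / 0.10        # 25,000

print(users_per_volunteer, effective_users_per_fte, users_per_twitter_mod)
```

So under these (rough) numbers, a volunteer "FTE-equivalent" covers 25k users versus Twitter's 150k, a 6x advantage even after heavily discounting volunteer effectiveness.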
@ncweaver big instances might suck in moderation, but small - medium sized ones are absolutely better on average than Twitter ever was.
@ncweaver I’m hedging by being in both places.
@ncweaver Distributed email evolved into the current duopoly of Google and Microsoft, mostly because dealing with spam is too hard. Mastodon does have built-in defenses: you mostly only get toots from people whom you follow. But there are loopholes, such as direct messages. You can use Settings → Notifications to only receive DMs from people you follow, but that closes the bubble, so it cannot be the default. Facebook's default of "friends of friends" is a better compromise. Maybe Mastodon could adopt something like that?
@huitema @ncweaver I would suggest there was a practical cost issue underlying the centralization of email. Google and Microsoft made it difficult for institutions to justify maintaining their own. I saw upfront how this helped expedite the ongoing erosion of IT within higher ed for example.
@jtk @ncweaver Indeed there is a cost issue. (And yes, enterprises should not try to run Exchange on premises -- because Microsoft really wants to handle all your email in the cloud.) But the cost is largely due to the difficulty of handling spam, avoiding being listed on spam blacklists, the increased complexity of the software, etc. The root is email's promise that anyone can send email to anyone on the Internet.
@ncweaver Why does everybody just assume that moderators on Mastodon must by definition be unpaid volunteers? Just because you are not the product and your instance doesn't sell your data or raise VC funds doesn't mean it can't pay mods. If enough people donate to their instance, this could be achievable. (Also, not every instance has to be free, but that's not my point here.)

@ncweaver respectfully, I disagree. I don't think that content moderation is more viable by a single central actor making decisions on behalf of everyone in the world. Enforcement is certainly more efficient, but as demonstrated by Twitter, it is impossible to provide moderation that is satisfactory for all users. Rather, I think localized, context-specific, and diverse approaches could lead to better quality moderation. Same for account authentication.

Additionally, I think the potential for Fediverse service providers to collaborate is generally overlooked and underestimated. For example, several nodes may choose to maintain lists of well moderated hosts (and rogue hosts). Each provider does not necessarily need to reinvent the wheel.
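The "don't reinvent the wheel" idea already has a hook in the software: Mastodon 4.x instances can publish their moderation blocklist at `GET /api/v1/instance/domain_blocks` (when the admin makes it public). A sketch of how one instance could compare its blocks against a trusted peer's, assuming that peer publishes its list:

```python
import json
import urllib.request


def fetch_domain_blocks(instance: str) -> list:
    """Fetch an instance's published blocklist.

    Uses GET /api/v1/instance/domain_blocks, which only returns data
    if that instance has chosen to make its blocklist public.
    """
    url = f"https://{instance}/api/v1/instance/domain_blocks"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


def missing_blocks(peer_blocks: list, local_domains: set) -> set:
    """Domains a trusted peer blocks that we don't (yet) block locally."""
    return {b["domain"] for b in peer_blocks} - local_domains


if __name__ == "__main__":
    # Hypothetical trusted peer; substitute an instance you actually trust.
    peer = fetch_domain_blocks("mastodon.social")
    print(missing_blocks(peer, {"already-blocked.example"}))
```

Whether to auto-apply a peer's blocks or just surface them for human review is a policy choice; the point is only that federated moderation data can be shared cheaply.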

Ultimately, I think it's (collectively) lazy for us to delegate this to some SF corporation. Besides, their leadership is purposefully eliminating moderation, so why exactly do we want to stay over there?