One important question is whether users should leave a platform because they disagree with its other users.
If somebody likes the platform, has never seen the alleged posts, and has not been affected by them in any way, I would not expect them to leave without having seen any evidence of the abuse.
I have heard rumors of another platform being a haven for a certain class of illegal material. I haven't seen evidence that the rumors are true. I don't want to search for the illegal material or ask how to find it. But that means I'll likely never see any proof of illegal material being present on the platform.
If such rumors were enough for everybody to leave the platform, then it would be too easy to kill off a platform with a false rumor.
People do not agree on what level of moderation is right. One platform might choose to allow anything as long as it is legal. Another platform may post a set of rules and moderate content as it sees fit. The latter approach sometimes causes innocent content to be censored, and still doesn't prevent abusive content. For example, I have multiple times experienced Facebook removing some of my posts for no apparent reason, and I have also frequently seen it decline to remove obviously abusive content.
If somebody publishing content has a choice between a platform that allows anything as long as it's legal and another platform that will censor content based on an arbitrary interpretation of some vaguely stated rules, then I can understand why some people may choose the platform where anything goes. They will be more likely to make that choice if they have previously been a victim of baseless censorship.