I wrote a bit about an effect that I'd seen for a while but had difficulty explaining: we call it "Content Moderation Survivor Bias", and it's an effect that can muck up social media analyses and lead to dubious conclusions.

I define it as follows: in a retrospective sample from a moderated social media platform, ToS-violating or inauthentic content tends to appear most prevalent in the immediate past. This appearance is misleading, however: older violating content has had more time to be detected and removed, so what survives in the sample skews recent.

https://cyber.fsi.stanford.edu/io/news/content-moderation-survivor-bias


@det Would a spam campaign like this occur on Mastodon? I doubt it; what are your thoughts?