This article paints such a clear picture of how internet trolls are parasitic:
https://www.reuters.com/investigates/special-report/usa-trump-truth-social/
Trolls can't operate without access to an audience that someone else has built, because they're too unlikeable to build their own. They want to leech off YOUR audience, and understand "free speech" to mean "free web hosting and a free, pre-built audience."
I just saw a twitter screenshot where someone used the slur that rhymes with "maggot," and hoo boy holy shit I knew twitter was bad but this is 101-level stuff they've neglected here.
Even the most amateurish phpBB forum in 2001 knew that there are slurs you shouldn't allow to be posted on your website, because they can't be used for anything constructive or useful. Seriously, twitter, this is absolutely rudimentary stuff.
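(For anyone who's never run a forum: this really is 101-level. Here's a minimal sketch of the kind of word filter every phpBB-era board shipped with. The BANNED_TERMS set is a placeholder standing in for the actual list, and the comments flag the classic pitfall.)

```python
import re

# Placeholder terms; a real forum's list would hold the actual slurs.
BANNED_TERMS = {"slur1", "slur2"}

# Undo the common "numbers or symbols in place of letters" dodge.
LEET_MAP = str.maketrans({"1": "i", "3": "e", "4": "a", "0": "o", "@": "a", "$": "s"})

def contains_banned_term(text: str) -> bool:
    normalized = text.lower().translate(LEET_MAP)
    # Strip everything but letters so "s.l.u.r" and "s l u r" still match.
    collapsed = re.sub(r"[^a-z]", "", normalized)
    # Substring matching is crude (see: the Scunthorpe problem), so real filters
    # pair this with an allowlist of innocent words that contain banned strings.
    return any(term in collapsed for term in BANNED_TERMS)

def handle_submission(text: str) -> str:
    # Reject at posting time; never publish first and clean up later.
    return "rejected" if contains_banned_term(text) else "published"
```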
Quote from a toot posted elsewhere, regarding blocklists:
"How is this list to be regulated? By the number of votes? What if 99% of the submissions agree to ban a certain religion, or vegans, or economists who wear yellow shirts on Tuesday?"
This is a type of user you should ban straight away without engaging. Every single time I've seen this sort of post, it's from someone who gets banned from places a lot for being exhausting.
Nice little post here to add to this thread:
https://thagomizer.com/blog/2017/09/29/we-don-t-do-that-here.html
The "here" part of "we don't do that here" has special Culture Juice in it. You're not trying to change the whole world, that's hard; you're trying to make a nice little space in a website, and people can fit into the culture or they can not and go somewhere else instead. You're just trying to make a nice little bit of positive culture. Yoghurt pot sized like.
Remember earlier in this thread I talked about dramaslurping? Get your straws out and mosquito your way over to this deliciously festering puddle of How The Hell Did We Let This Get So Bad
https://cohost.org/staff/post/124903-community-guidelines
The gist: people were posting drawn child porn on cohost, and rather than banning those people immediately, cohost wrote a very long post about "working to implement a system to allow us to get user input on this area of policy"
hey folks, we’ve got a couple big trust and safety updates coming today, including some changes to the community guidelines [https://cohost.org/rc/content/community-guidelines]. we wanted to go over everything here for transparency about what we’re doing and why.

first off, the community guidelines. we’ve gotten a lot of questions and reports on content that, while we considered to be borderline but permitted, was absolutely in a gray area in the written community guidelines. we had internally developed a set of policies that we were applying to the small number of cases that came up, but had not publicly announced the policies we were applying because of some open questions we still had. this was a bad call, and moving forward we’re going to be more transparent about areas of uncertainty and indecision in our policy. here’s a summary of the changes:

* we’ve added clarifications to the section regarding child sexual exploitation material, and how it pertains to non-realistic depictions of minors, in an attempt to provide clarity and consistency for enforcement.
* internally, we had been drawing the line at the prevailing legal definition of “realistic depictions,” which includes photographs/videos of actual human minors, or content difficult to distinguish from actual photographs/videos.
* policy around non-realistic depictions, such as lolicon/shotacon, has not yet been finalized. we don’t want to implement a policy that the majority of users would feel uncomfortable with. we are currently working to implement a system to allow us to get user input on this area of policy. until such time, please refrain from posting it; up to this point, we have been asking people posting it to remove it pending a final policy decision.
* we’ve added a new section clarifying and adding new rules around content warnings.
* previously, content warnings were only strongly recommended for posts containing potentially sensitive content. in most cases, this is still true. however, we are now requiring CWs for certain types of content.
* this policy change is accompanied by a technical change that prevents these CWs from showing up in unrelated tag pages. these posts will still show up on your dashboard (if you are following the poster), profile pages, and tag searches for any of the terms on the list.
* the full list of mandatory content warnings can be found on our support site [https://help.antisoftware.club/support/solutions/articles/62000226150-mandatory-content-warnings]. this page is also linked from the community guidelines.
* repeated failure to add mandatory content warnings, as well as attempts to circumvent the filtering system (such as by using numbers or symbols in place of letters), are considered bannable offenses. we don’t want to ban you so please be normal about this.

the tag page change is live now. our motivation in this change is not to censor any types of allowed content, but to prevent certain types of sensitive content from showing up in large, more general tags. while we may make changes to this list in the future, all changes will come with a notice, as well as a grace period for users to start adding CWs to their posts.

our goal is to provide a robust set of tools that allow everyone to customize their own experience to their level of comfort and safety. to support this, we are actively working on a system with which you will be able to completely hide posts that include CWs you never want to see and skip the clickthrough on CWs you do not need a warning for. these tools are being worked on in addition to general tag filtering tools. above all, we believe that you know your own preferences, limits, and triggers better than anyone else; our intent with these changes is to help you see the posts you want to see and none of what you don’t.

we also want to clarify that, thus far, we have not received any reports for content that, under the new rules, would require a mandatory content warning but did not already have one. we really appreciate that people are using the content warning system correctly, even before we had rules in place. the purpose of these rules isn’t to change anyone’s behavior, but to codify behavior we already saw, as well as to make our job moderating easier.

we are, as always, open to feedback on these policy and technical changes. this is a tricky, sensitive area to work in, and we’re making sure to act deliberately and with consideration. this is not a sudden decision; we have been thinking over these changes for well over a month now. (related: having weekly hours long conversations with your coworkers about lolicon kind of sucks and we would recommend against being in a position where that’s necessary.)

that’s all for now. please let us know if you have any questions or feedback and, as always, thanks for using cohost!
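(Sidebar for fellow admins: the per-user CW tooling described at the end of that post is the conceptually straightforward bit. Here's a minimal sketch of what that filtering logic usually looks like, with hypothetical names throughout, since cohost never published theirs.)

```python
from dataclasses import dataclass, field

@dataclass
class CwPreferences:
    """Hypothetical per-user settings: which CWs to never see, which to auto-expand."""
    hide_entirely: set = field(default_factory=set)  # posts with these CWs vanish from feeds
    auto_expand: set = field(default_factory=set)    # skip the clickthrough for these

def present_post(post_cws: set, prefs: CwPreferences) -> str:
    """Decide how a CW'd post should appear for one user."""
    if post_cws & prefs.hide_entirely:
        return "hidden"  # "CWs you never want to see" win outright
    if post_cws and not post_cws <= prefs.auto_expand:
        return "behind_clickthrough"  # any CW the user hasn't opted out of still gates the post
    return "shown"

# e.g. someone who never wants gore but doesn't need warnings for food:
prefs = CwPreferences(hide_entirely={"gore"}, auto_expand={"food"})
assert present_post({"food"}, prefs) == "shown"
assert present_post({"food", "gore"}, prefs) == "hidden"
assert present_post({"spiders"}, prefs) == "behind_clickthrough"
```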
They then solicited comments about whether to allow drawn child porn - and the sort of abusers* who draw child porn - on their website (which would of course turn the site into The Loli Website until it got disabled by its host/registrar), or whether that'd be 😰CENSORSHIP😰
The thread is predictably full of Very Too Much Online people
(* over nearly 15 years and a quarter of a million players, not one person on my game who ever even talked about loli was not a serial abuser)
Look, there's a lot of nuance in online community management. There are very few easy black-or-white decisions. Most of the decisions you make over what to delete/ban will be difficult, agonizing even, and will end up with you getting yelled at.
This isn't one of those. This is forehead-slappingly obvious, and cohost somehow managed not just to dither over whether to be The Nonce Site, but to do it *publicly.* That should be a big red flag for anyone thinking of being associated with them.
Comments on cohost, and a comment a new fediverse admin in their early 20s left on this thread, reminded me that there's a very specific failure state some Very Online people get into when they're adminning a website, and that is to mix up government with website operation.
Websites aren't countries, they're Literally Just Websites. Your users can leave and go to one of the over 300 other websites on the internet, whenever they like.
(if they can't, your website shouldn't exist)
If you're seriously feeling the need to run a website to the same standard as an imagined Ideal Country, then you're gonna have a Really Bad Time, because the two have absolutely nothing in common.
It's not necessary to have convoluted discussions about censorship when the people who want to look at the thing you're about to ban can still look at it by typing in a different web address.
You're not a government. You just run one website, out of many.
Another thing on the Cohost Nonce Implosion: the arguments made for "allow everything, but also allow people to filter out posts they don't want to see."
A really seductive argument, one that appeals to twentysomething technolibertarians who haven't had to deal with abusers. Loliposters are manipulators and abusers, every time, and it's a mistake to make them feel welcome, and a further mistake to let them hide their posts from people who might feel differently about them if they saw them.
Oh no I didn't even know there was an update post to the Cohost Nonce Implosion, this is a real treat, getting my dramaslurping straw out
https://cohost.org/staff/post/125826-hi-there-we-wanted
"we have been refraining from moderating the comments because we don’t want to be seen as censoring discussion," SCHLURRRRRP
Earlier on in this thread I said that watching drama on other websites makes you a better admin but a worse person and right now I am 100% trash goblin schluurp yum yum
hi there. we wanted to clarify some things, in light of the community guidelines post [https://cohost.org/staff/post/124903-community-guidelines] yesterday:

* it is currently against site policy to post lolisho and it will be removed.
* while this was unwritten gray area before yesterday, this has been the case since day 1.
* we made technical changes to the site to attempt to ensure that people would have as little accidental contact with this material as possible, even in the instances where it got posted by users in ignorance of site policy.
* the inclusion of lolicon/shotacon in the public mandatory content warning list was to provide full transparency around what we are hiding; as we have said many times, we do not want a mysterious algorithm governing what you see. it was not intended to suggest that the final decision about this policy would be to allow it in contradiction of user wishes.
* the reaction yesterday has made it obvious to us that a large number of people consider anything short of a total ban to be personally unacceptable to them.
* regardless of this, we do not appreciate the tenor of some of the discussion on the original post, and in our e-mails and support tickets. we have been refraining from moderating the comments because we don’t want to be seen as censoring discussion, but the feedback we’ve gotten has caused immense stress to a small team, hence this emergency post.

we are currently working on final policy wording. we had wanted to get structured feedback before making any decisions, but the community response has been loud enough that we are fast-tracking the process. jae will be out tomorrow for yom kippur, but we’ll try and have something out by the end of the week. thank you again for your feedback.

Aidan, Colin, and Jae

EDIT 10/7/22, 8:15am PDT: In an attempt to reduce the amount of unconstructive nastiness and name calling in this comment thread, we are going to be removing comments (both "on our side" and not) that detract from actual conversation. Please note: due to the sheer load, we will not be sending emails to users whose comments were removed. These removals will not be held against you in any future reports. This is a special situation for many reasons. If you have any questions, you can email us at [email protected]
I cannot believe I got this far into the thread without saying: if you've got DMs on your website, then you also need a notice in the DM interface saying don't delete abusive DMs.
When people get abusive messages they feel bad/spooked and erase them. They're not thinking "oh hey I need this to show the admin so they know there's an abuser who needs banning," they're thinking "Ew get that away, delete."
You might have a feature that forwards abusive DMs to staff; if you do then that button needs to be right next to the erase button. Like, erase | flag and erase. If you don't have that feature, you need the "Don't erase abusive DMs until the admin's had a look" notice. Heck maybe have that notice anyway, it does reassure people that you take things seriously.
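If you're building that flag-and-erase button, the crucial detail is ordering: preserve the evidence for staff before it leaves the victim's inbox. A rough sketch, with hypothetical names, since every site's storage layer differs:

```python
from dataclasses import dataclass

@dataclass
class DirectMessage:
    id: int
    sender_id: int
    recipient_id: int
    body: str

def flag_and_erase(dm: DirectMessage, mod_queue: list, inboxes: dict) -> None:
    """One button, two effects: staff keep the evidence, the user stops seeing it."""
    # Copy the full message (sender and all) into the moderation queue FIRST...
    mod_queue.append(dm)
    # ...and only then remove it from the recipient's inbox. The user's "ew, get
    # that away" instinct is satisfied without destroying the report.
    inboxes[dm.recipient_id].remove(dm)
```

The plain erase button stays as-is; the point is that the path of least resistance for a spooked user still leaves the admin something to act on.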
Remember: People don't report. It's harder to get people to report abuse than it is to get them to give you money.
Musk taking control was one thing, but sacking half the people who run the site and overworking the other half into swift (like, days-or-weeks-scale) burnout is quite another.
At first I figured twitter would change into an unmoderated space and die the slow toxic writhing death of every other unmoderated space, but at this rate honestly I think a catastrophic technical failure will happen first. Like, the site crashes and just doesn't come back up again. Soon. Like, in 2022.
Addendum to this big long thread, prompted by toot.community's decision to host a publicly transphobic columnist, and to a lesser extent by mastodon.scot's hosting of cops: how much of your users' outside-of-your-website lives should you consider when deciding whether to spend money hosting them?
This is why I'll never be done with this thread. Y'all keep giving me new material.
At first glance this position (judging users only by what they do on your website) seems... well, not ideal, but if we're feeling charitable we can call it at least pragmatic.
* You can't do a full background check on everyone who signs up, it's just not practical.
* If you start kicking people off based on what they've done on other websites, where do you stop?
* In deciding to moderate based on off-site behaviour, you're signing yourself up for a big job that never ends (folk who've read this thread will recognise a theme in this point).