Bluesky is saying that torture and self-harm posts are acceptable. That's the end of Bluesky as far as I'm concerned. They don't have a clue what they're letting themselves in for.

@lauren I'm still trying to confirm whether this is their actual policy: that no user or server admin in Bluesky can actually ban or delete content, and only end users can choose to see it or not.

So far, from what I can see, it might be the latter scenario.
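
If that is the model, "moderation" reduces to a per-viewer visibility check at render time. A minimal TypeScript sketch of that purely client-side scheme, assuming the policy works as described above (all types and names here are hypothetical illustrations, not the actual AT Protocol API):

```typescript
// Hypothetical sketch: content is never deleted server-side; each
// client filters posts against the viewer's own label preferences.

type Label = "gore" | "self-harm" | "spam" | "hate" | "impersonation";

interface Post {
  author: string;
  text: string;
  labels: Label[];
}

type Visibility = "show" | "warn" | "hide";

// Each user keeps their own preference map; in this model there is
// no admin-level override that applies to anyone else.
type Preferences = Record<Label, Visibility>;

function visibleTo(post: Post, prefs: Preferences): boolean {
  // A post is hidden only if *this viewer* chose to hide one of its labels.
  return !post.labels.some((l) => prefs[l] === "hide");
}

// Example: the viewer hides gore and spam but opts to show self-harm posts.
const prefs: Preferences = {
  gore: "hide",
  "self-harm": "show",
  spam: "hide",
  hate: "warn",
  impersonation: "warn",
};

const post: Post = { author: "alice", text: "...", labels: ["self-harm"] };
console.log(visibleTo(post, prefs)); // true — the server never removed anything
```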

@tchambers That control panel image seems explicit. It has a SHOW option for torture and self-harm, etc. Q.E.D., I would think.

@lauren I was kinda amazed that any social network would launch with what in essence was a "Show bloody gore, spam and hate, and fake account content" toggle, too.

I was HOPING that, as in the Fediverse, each admin of a Bluesky service could mute, block, or ban such content for everyone. But I'm not sure that's so, yet.

@tchambers @lauren I don't believe "instance" admins have any say over what content federates and what doesn't.

As in, instances (or "nodes" in BS parlance, I think?) are just account/data storage. Admins, as far as I know, have no agency and barely any power in the system.

@rysiek @tchambers If so, this could expose them to significant legal risk. Claiming they didn't know the content was there is unlikely to convince most regulators or courts.

@lauren @tchambers The whole Bluesky shtick strikes me as "implement decentralization in a way that the hype can be exploited, while making sure any risks and costs are externalized to node operators, and that none of the power in the system is actually shared with them".