I hope everyone is on board with the idea that, one day, social media - of pretty much every kind - is going to be a lot more peer-to-peer than it is today.

Given that - are folks having conversations about what moderation and community delineation looks like in that context? Let's imagine it was one-click and free to spin up a Fediverse instance - what would need to change about our approach to moderation? Can we do that today, so we're ready, at least conceptually, in the future?

My biggest questions are:

- is there a global namespace of some kind? If so, how is it governed? If not, what's the discoverability story?
- how do people who do know each other find each other? fedi's story is username search and globally unique IDs; do we keep that, expand it, throw it out?
- how do people who don't know each other get together to talk about things? if there are topic-based groups, how are they named? how are they governed?
- without inherent chokepoints at which moderation can be applied, how do we avoid new users being exposed to torrents of raw sewage when they first sign on? if there is built-in moderation of some kind, how will you appease the type of people who get mad about instance defederation? how will you solve the (very real) issues they bring up?

As far as I know, the state of the art really isn't there on any of these.

For instance, a.gup.pe provides topic-based groups for enhanced discoverability. Guppe does not have a good moderation story; pretty much anyone can @ the group, and they'll get boosted to everyone who follows it.

My pie-in-the-sky vision for this is having a kind of a la carte menu of moderation, community, and discoverability feeds. If we assume someone solves the awfulness of PKI (a prerequisite for P2P social media, imo), I'd like users to be able to opt in to "follow" the moderation, discoverability, and community decisions of other users, as well as to provide feedback on those bundles of connections and disconnections. Bootstrapping is still an issue, but I think it's doable with some kind of introductory subscription when people are invited to the network.

For instance, let's imagine that a user (Urist) follows four moderation providers (Amy, Bob, Cat, and Don).

Urist follows Amy's block decisions at override strength; without an explicit instruction not to block someone, if Amy publishes a block, Urist will block that person automatically. If Amy publishes a block for someone Urist follows, Urist will get a notification and can decide to either accept the block or add an override.

Urist follows Bob, Cat, and Don at consensus strength; if Bob, Cat, and Don all block someone, Urist will block that person automatically. Urist also follows Bob and Cat at advisory strength; if just Bob, just Cat, or both publish a block for someone, Urist will see a warning label near their name, and their posts will be collapsed by default (again, in the absence of an override).
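To make the above concrete, here's a minimal sketch of how a client might evaluate those subscriptions. Everything here is hypothetical (the `Verdict` names, the function signature, the idea that overrides and follows are plain sets); it's one possible shape for the logic, not a spec.

```python
from enum import Enum, auto

class Verdict(Enum):
    BLOCK = auto()   # hide the account entirely
    WARN = auto()    # label and collapse posts, but don't block
    REVIEW = auto()  # notify the user and wait for a decision
    NONE = auto()    # no action

def evaluate(target, *, follows, overrides, override_providers,
             consensus_group, advisory_providers, published_blocks):
    """Decide what a client does about `target` given its subscriptions.

    published_blocks maps provider name -> set of accounts they block.
    An explicit user override always wins; a follow turns an
    override-strength block into a notification instead of an
    automatic block.
    """
    if target in overrides:
        return Verdict.NONE
    blocked_by = {p for p, blocks in published_blocks.items()
                  if target in blocks}
    # Override strength: one provider is enough, unless the user
    # follows the target, in which case they're asked first.
    if blocked_by & set(override_providers):
        return Verdict.REVIEW if target in follows else Verdict.BLOCK
    # Consensus strength: every provider in the group must agree.
    if consensus_group and set(consensus_group) <= blocked_by:
        return Verdict.BLOCK
    # Advisory strength: any one advisory provider triggers a warning.
    if blocked_by & set(advisory_providers):
        return Verdict.WARN
    return Verdict.NONE
```

Urist's setup above would be `override_providers={"Amy"}`, `consensus_group={"Bob", "Cat", "Don"}`, and `advisory_providers={"Bob", "Cat"}`; the interesting part is that the same published blocklists produce different verdicts for different subscribers.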

@noracodes
i've had a few ideas about recursive shared lists of blocks and endorsements

my most important idea is that a follow overrides someone being on one of your imported blocklists, so being added to a blocklist can limit your reach, but never sever existing connections.

i wonder if you could perhaps use a rating system like RetroShare does.
@Qyriad

@lily @noracodes @Qyriad I think Nora's idea for "override strength" is good. For example, someone may maintain a list of accounts that have been hijacked by bad actors. Or people that were once okay but became bad actors.
@eksb
I'll be honest, that sounds pretty isomorphic to my idea?
@noracodes @Qyriad
@noracodes Great questions! I am very interested in this conversation.
@noracodes very well said and important to talk about
@noracodes perhaps putting more moderation decisions directly in the hands of individual users, with an emphasis on ease of use? One thing that led me to create my own instance was not liking the idea that an admin should get to decide on my behalf what I get to see and interact with.

@noracodes Right now most services leave the door open by default - anyone who is not blocked can comment. I don’t think this is sustainable at scale.

The only reason things are remotely usable right now is the broadsword of defederation. If suddenly you have to block accounts individually rather than by instance, the amount of work to keep people you don't like from coming in the open door is way too much.

I’ve seen some talk here and there about not having an open door but instead using a more trust-based system. It sounds nice to me, but I haven’t seen any detailed proposals that help me understand how that might look.