@jeffjarvis Here are my thoughts on opt-in decentralized verification and moderation networks:
more: you could choose this when you post.
so if you sometimes speak for an organization with your personal account, you can choose that verification as relevant to your post -- employers already have these social media rules anyway.
but you don't have to associate your professional credential with a post on something that isn't relevant to it, and it won't be shown there, as it shouldn't. IRL this is solved with context that isn't available on social media, i.e. "which hat you're wearing".
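to make the "which hat" idea concrete, here's a rough python sketch of per-post credential selection -- all the names here (Credential, Post, compose) are made up for illustration, not part of any real mastodon API:

```python
# hypothetical data model: an author holds several verified credentials and
# picks which "hat" (if any) to attach when composing a post.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Credential:
    issuer: str   # org that verified you, e.g. "press-guild.example"
    role: str     # the "hat" this credential represents

@dataclass
class Post:
    text: str
    credentials: list[Credential] = field(default_factory=list)

def compose(text: str, available: list[Credential], wear: set[str]) -> Post:
    """Attach only the credentials whose 'hat' the author chose for this post."""
    return Post(text, [c for c in available if c.role in wear])

hats = [Credential("press-guild.example", "journalist"),
        Credential("chess-club.example", "member")]
work_post = compose("our new investigation is out", hats, wear={"journalist"})
hobby_post = compose("great endgame tonight", hats, wear=set())
```

the point being: the credential is attached per post, so the professional hat simply never appears on the hobby post.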
also, my apologies if this already exists, has already been attempted, or i'm missing something that would make it infeasible. i don't know many in the mastodon community.
is anyone working on something like this @mmasnick?
an app or a server could lay all this on top of the mastodon protocol, and focus on a great UX, trust-based search, and user-selectable algorithms for feed and content discovery, not to mention federated groups/chats.
they wouldn't be burdened by doing moderation and verification themselves.
I would pay for that app.
to me this fixes both verification and moderation by putting the costs and burdens onto those who benefit from the verification and moderation.
and it creates smaller, less centralized servers.
and recognizes the fact that "notoriety" is a useless concept when there are over 300,000 youtube channels with over 100,000 subscribers. it's a boomer concept.
it's expensive for social media companies to do this, so they sell you ads and they sell your data.
at the server level, each server could choose to adopt those moderation providers they deem qualified, and trust other servers or users on the same basis.
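a rough sketch of what that server-level adoption could look like -- trust a peer server if it shares at least one moderation provider with you. the provider and server hostnames are invented for illustration:

```python
# hypothetical: the providers this server has adopted as qualified moderators
ADOPTED = {"mod-coop.example", "press-guild.example"}

# hypothetical: what each peer server advertises it has adopted
PEERS = {
    "arts.example": {"mod-coop.example"},
    "spam.example": set(),
    "news.example": {"press-guild.example", "other.example"},
}

def trusted_peers(adopted: set[str], peers: dict[str, set[str]]) -> set[str]:
    """Trust a peer server iff it has adopted at least one provider we adopted."""
    return {host for host, providers in peers.items() if providers & adopted}
```

here `trusted_peers(ADOPTED, PEERS)` would admit arts.example and news.example and leave spam.example out, with no central authority involved.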
this could enable other very useful crowdsourced labeling schemes and webs of trust.
split up the idea of federation to allow for federated verification. or federated anything -- federated features if you will -- that hook into your mastodon client.
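e.g. here's a client-side sketch of one such federated feature -- pulling content labels from the providers a user has opted into and annotating the feed with them. all the provider names and the label store are hypothetical:

```python
# hypothetical: labels each federated provider has published, keyed by post id
LABELS = {
    "factcheck.example": {"post-1": "disputed"},
    "nsfw-labeler.example": {"post-2": "adult"},
}

def annotate(feed: list[str], chosen: list[str]) -> dict[str, list[str]]:
    """Return, per post in the feed, labels from only the providers
    this user opted into -- labels from unchosen providers are ignored."""
    out: dict[str, list[str]] = {pid: [] for pid in feed}
    for provider in chosen:
        for pid, label in LABELS.get(provider, {}).items():
            if pid in out:
                out[pid].append(label)
    return out
```

the client stays thin: it just merges whatever label feeds the user subscribed to, which is the "federated features" hook in a nutshell.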
after all *credentials are good*. credentialism is bad.
this scheme allows moderation but preserves freedom of speech.
it's all opt-in.
if you don't want to opt in, fine. but many of us won't see your content.
if you're willing to verify you're a real human, i'm more willing to listen.
in the moderation example, only those verified by that org could submit a violation to that org, and they would put their own bond at risk for false reports.
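a toy sketch of those bond mechanics -- the amounts, the escrow-then-slash rule, and the function names are all made up here, just to show the incentive shape:

```python
BOND = 100             # hypothetical bond, in the org's local currency
STAKE_PER_REPORT = 10  # hypothetical stake put at risk per report

def file_report(bonds: dict[str, int], reporter: str, verified: set[str]) -> bool:
    """Accept a report only from a member verified by this org who still
    has enough bond left to stake; the stake is held pending review."""
    if reporter not in verified or bonds.get(reporter, 0) < STAKE_PER_REPORT:
        return False
    bonds[reporter] -= STAKE_PER_REPORT
    return True

def resolve(bonds: dict[str, int], reporter: str, upheld: bool) -> None:
    """Return the stake if the report was upheld; forfeit it if it was false."""
    if upheld:
        bonds[reporter] += STAKE_PER_REPORT
```

so spamming false reports drains your own bond, while honest reporters get their stake back -- the cost lands on the bad actor, not the org.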
this also scales financially, as country-specific groups would be doing this in the local currency.