Fucking Christ the @protocol is the most obtuse crock of shit I've ever looked at. It is complex solely for the sake of being complex and still suffers from *all* of the same problems as Mastodon.

Your server goes down? Sorry, all of your followers are lost. Account portability is no better than Mastodon. 'DIDs' serve literally no purpose. And none of the API code that Bluesky uses in their own app validates ANY of the crypto they're doing on the server. NONE OF IT.

Not only that, but instead of just... making a simple REST API spec, they REINVENTED THE FUCKING WHEEL and made 'XRPC' and 'Lexicon', AKA shittier, less flexible versions of OpenAPI and JSON Schema (respectively) that work with absolutely NO existing tooling.

The actual protocol itself is confused and incredibly badly designed. Because of the useless bullshit crypto they're putting into it, it requires you to write a server that's strongly consistent with other servers. THAT IS EXTRAORDINARILY HARD TO DO!! And BLUESKY'S OWN SERVERS DON'T HANDLE THE EDGE CASES!!!

It uses pull-based federation instead of push-based like Mastodon. You have to write a separate 'indexer' that has your 'feed' on it. That requires a LOT more resources.
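To make the contrast concrete, here's a toy sketch of the two delivery models. The function names and data shapes are made up for illustration; neither is the real ActivityPub or atproto API.

```python
# Push (ActivityPub-style): the origin server delivers each new post
# directly to every follower's inbox; followers store only what arrives.
def push_post(post, follower_inboxes):
    for inbox in follower_inboxes:
        inbox.append(post)  # stands in for an HTTP POST to the inbox URL

# Pull (atproto-style): an indexer must poll every repo it cares about
# and keep its own cursor per repo, which costs far more state and traffic.
def pull_updates(repos, cursors):
    updates = []
    for repo_id, posts in repos.items():
        updates.extend(posts[cursors.get(repo_id, 0):])
        cursors[repo_id] = len(posts)
    return updates
```

In the push model the sender does the work once; in the pull model every single indexer pays the polling and storage cost for the whole slice of the network it wants to see.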

And because things must be strongly consistent, AND because any user can REWRITE THEIR OWN FUCKING HISTORY AT ANY TIME, you have to, as an indexer, account for lots of different edge cases where your recorded history diverges.
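The reconciliation problem reduces to something like this sketch, where each repo is treated as an ordered list of record hashes (an assumption for illustration; the real protocol uses a Merkle Search Tree):

```python
def find_divergence(recorded, fetched):
    """Return the first index where the indexer's recorded history stops
    matching a repo whose owner has rewritten it."""
    i = 0
    while i < min(len(recorded), len(fetched)) and recorded[i] == fetched[i]:
        i += 1
    return i
```

Everything past that index has to be thrown away and re-indexed, and the indexer has to do this for every repo, on every rewrite.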

The protocol for federation is built to make federation as difficult and painful as possible. It is built so that Bluesky, the private company that makes the protocol, is the only 'indexer', the only one with a whole view of the network.

WHY DOES A FEDERATION PROTOCOL NEED USER AUTH API METHODS IN IT???? WHY DOES A FEDERATION PROTOCOL NEED A CONCEPT OF 'INVITE CODES'??????

Oh wait, BECAUSE IT ISN'T A FEDERATION PROTOCOL!!! It's literally JUST the API for Bluesky. That's it.

It's quite literally impossible to use this in more flexible environments. You just can't. You cannot build anything else on top of it, because it is so poorly designed and isn't generic enough.

And I went into this with an open mind. I was like "I'll just make a simple alternative to the Bluesky server in Elixir". But it CAN'T be a simple implementation like ActivityPub can be, because it is extraordinarily complex and requires you to make guarantees about your storage and how your application works.

It turns out that using a Git-like model, which is almost always used with a centralized 'remote', to do federation, which needs to be weakly consistent, IS A BAD IDEA!!!!!

What this comes back to is... who cares about any of the crypto bullshit? Having a private key and signing everything with it proves nothing because that private key must have a reputation.

You can verify a domain on Mastodon. You can point a domain to a Mastodon server. You can do that with Pleroma. You can make your own alternative to Mastodon that works exactly like how Bluesky works with domains, but it would take a fourth of the time because ActivityPub is simple to implement!!!!

The ONE thing that this protocol brings to the table is the idea of strong consistency in federation. The only issue is, it makes that strong consistency so resource intensive and so hard to implement that it decreases community servers' ability and ease of federation!!!!

And also, NOBODY CARES ABOUT STRONG CONSISTENCY IN SOCIAL NETWORKS!!! Social networks are built on the idea that we all have a different view of things. We care about seeing stuff from our friends, not seeing EVERYTHING.

The 'account portability' piece is bullshit! The way 'account portability' works is by having two separate keys, one for signing and one as a 'recovery' key. You're supposed to be able to use the 'recovery' key to rewrite history if your account gets hacked or some shit.

WE HAVE THE ABILITY TO DO THAT AS SERVER ADMINS!!! MASTODON HAS THIS ALREADY!!

Additionally, if a Bluesky server goes down, their way of keeping access to your data is by STORING ALL OF IT ON YOUR DEVICE!!!!

Imagine if I had to store the 50k+ tweets I've made on Twitter on my device, and upload ALL of them to a new server whenever a community server went down. Imagine being a server admin having to deal with people uploading tons of JSON data and media a whole bunch at a time. And if you're implementing the protocol correctly, EACH JSON BLOB REQUIRES VERIFYING THE SIGNATURE!! So you'd have to do 50K SIGNATURE VERIFICATIONS! WHICH IS CPU INTENSIVE!!! AND SLOWS DOWN THE SERVER!!!
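A quick back-of-envelope version of that cost. The per-signature time here is an assumption (single-core elliptic-curve verification, no batching), not a measured benchmark:

```python
# Rough cost of re-verifying one migrated archive on import.
posts = 50_000
verify_ms = 0.2  # assumed cost of one signature verification (assumption)
cpu_seconds = posts * verify_ms / 1000
print(cpu_seconds)  # ~10 CPU-seconds per migrating user
```

Ten-ish CPU-seconds per user sounds survivable until a popular server dies and thousands of users all migrate to you in the same afternoon.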

Additionally, the domain name is NOT your identifier. If you have a custom domain, that is NOT your identifier. Instead you have a 'DID:PLC', which is a kind of 'DID' (invented by, not a surprise, CRYPTO PEOPLE).

There is NOTHING FUNDAMENTALLY USEFUL ABOUT THIS FOR FEDERATION. This DID is never made visible to the user, it is not human readable (it's a hash), and it doesn't do anything!!!

And if you're wondering why this senseless overcomplication reeks of crypto, it's because THE CEO IS A CRYPTO PERSON!!!!

The entire protocol just layers on top of a pile of useless, bullshit standards that the crypto community built.

@sam I had the opposite reaction. There is huge long-term potential in having a large infrastructure of globally unique identifiers mapped to portable human names. This opens up the potential for strong public key verification and real authenticity for e2e communication.

DIDs potentially allow for better portability, since you're only changing the human-readable name, despite the problem of your archive. Yes, it is overkill for a social network, but it allows the AT protocol to be used for so much more. Imagine it combined with WhatsApp's new key transparency.

Yes, the AT protocol completely dodges the recovery problem, but so what? The recovery problem is hard and unsolved.

https://tech.facebook.com/engineering/2023/4/strengthening-whatsapp-end-to-end-encryption-key-transparency/

@elijah @sam That might be true if they’d actually implemented the DID feature they said was absolutely essential, required for truly nomadic user accounts and the reason why they couldn’t just adopt ActivityPub. They never got proper distributed DIDs to work and instead have a “placeholder” implementation. Pretty sure it’s a central directory operated by Bluesky. I presume they issue new ones to people with invite codes?

@MetalSamurai @elijah This is correct, DID:PLC is basically DNS for DIDs. Then they layer DNS (domain names) on top of that. That is now two ways of doing effectively the same thing.
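The double indirection looks roughly like this sketch. The record names and document fields are illustrative stand-ins, not the exact schemas; the real system uses `_atproto` DNS TXT records and the plc.directory service run by Bluesky:

```python
def resolve_handle(handle, dns_txt):
    """Layer 1: DNS maps the human-readable handle to a DID."""
    return dns_txt[f"_atproto.{handle}"].removeprefix("did=")

def resolve_did(did, plc_directory):
    """Layer 2: a central directory maps the DID to keys and a PDS host."""
    return plc_directory[did]

# Hypothetical records for illustration only.
dns_txt = {"_atproto.alice.example": "did=did:plc:abc123"}
plc = {"did:plc:abc123": {"signingKey": "<pubkey>", "pds": "https://pds.example"}}
doc = resolve_did(resolve_handle("alice.example", dns_txt), plc)
```

Two lookups, two namespaces, two failure modes, to answer the one question DNS already answers on its own.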

This is a pattern with the entire thing. Instead of using an OpenAPI spec, they invent XRPC (objectively useless). Instead of using JSON schemas, they invent Lexicon.

Each layer does barely anything and exists for no good reason. Its sole function is to increase confusion and make the protocol harder to implement.

@sam @elijah It looks like it was really important for everyone’s data to be in an append only public ledger so it could be easily scraped/indexed/searched, but nobody seems to be complaining about the privacy implications beyond vaguely wondering when they’ll get DMs (never) and realising their blocklist is public.
@MetalSamurai @elijah It's not exactly like this. An 'indexer' is basically just like Google. The issue is that sometimes entire trees of conversation might disappear at random, because it's literally a cryptographic chain of links that can break at any time. It's incredibly inconvenient, and Mastodon's / ActivityPub's way of dealing with that is much easier: if you delete a post, all servers are notified of the delete. That's it.
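For comparison, the ActivityPub deletion flow is just one fanned-out message shaped roughly like this (field values here are illustrative; the `Delete` + `Tombstone` pattern itself is from the ActivityPub spec):

```python
# Rough shape of an ActivityPub Delete activity with a Tombstone left behind.
delete_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Delete",
    "actor": "https://example.social/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {"type": "Tombstone", "id": "https://example.social/notes/1"},
}
```

Each remote server that receives it simply drops its copy of the note. No hash chain to repair, no history to re-verify.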
@sam @elijah I don’t have an invite, so I haven’t tried playing with any of the sample code, just reading the docs. The PDS merkle tree is just repurposed blockchain stuff, right? So it’s easy to add records, but not remove them, or are you saying anyone can just delete the tree any time and start again? Or just with a recovery key?
It also looks like a hypothetical AT/ActivityPub gateway would have to construct a PDS for every AP user passing through.
@MetalSamurai @elijah It's not actually, it's sorta copied from IPFS. Just about the only thing IPFS is good for is for making it easier to pirate stuff, which is completely good by me. They tried to make it into something more "Web3" but it doesn't work that well. The largest use of it right now is LibraryGenesis, which makes it easy and fast to download any book you want. You can also contribute storage to the pool (Anna's Archive is looking for volunteers).
@sam @elijah Ah, ok. Still looks like they’re prioritising syncing big blobs of data around, rather than sending individual messages. Maybe their secret plans for monetising this rely on it.
All feels backwards - a very specific data store and commands to manipulate it, rather than a message passing protocol and letting implementations store that state internally however you like.
@sam @MetalSamurai @elijah funny cause I use IPFS pretty often and I've never even heard of the one thing you say it's useful for...
@pg @sam @elijah It looks like it’s really good for quickly checking you’ve got a good copy of synchronised data, just by comparing the hashes at the top of the tree.
That’s only really useful if that’s the sort of thing you’re doing a lot. Which you aren’t in most social networks, but makes sense if you’re keeping your own copy for data mining.