A topic I once tweeted about but didn't really pursue, that's now looking more interesting/relevant, is what I call a lightweight "nocoin" notary chain/web. Some thoughts on what that would be, and why it's interesting. A 🧵
This idea started from looking at the definition of "blockchain" (merkle tree + consensus protocol) and asking whether such a thing necessarily facilitates running stupid toy money ponzi schemes on top of it.
In particular, can you make an underlying structure that can't represent the concept of "double spend" (and thus can't guard against it) so that it can't be used to trade virtual assets with parties you distrust? If so, is that useful?
The key property to achieve that seems to be making membership in the chain efficiently testable but not iterable. For example, storing only hashes. This is sounding good, because it also suggests you can make it really light.
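A minimal sketch of "testable but not iterable" membership, purely illustrative (the names and the in-memory `set` are my invention, not a real protocol):

```python
import hashlib

# Hypothetical sketch: the notary's ledger stores only digests. Holding the
# ledger lets you *test* whether a known datum was notarized, but not
# *enumerate* or interpret entries -- no balances, no transferable assets,
# so "double spend" isn't even representable.
ledger = set()

def notarize(data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()
    ledger.add(digest)
    return digest

def is_notarized(data: bytes) -> bool:
    # Efficient O(1) test -- but only if you already possess the data.
    return hashlib.sha256(data).hexdigest() in ledger

notarize(b"hello")
print(is_notarized(b"hello"))    # True
print(is_notarized(b"goodbye"))  # False
```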

From the user side, something like:

Submit at time T a small fixed size (hash) piece of data X you want notarized.

Get back something you can present to others to prove X existed at time T.

(In practice the data likely involves signatures, and I'm glossing over how you use them at this layer.)

Now, suppose we want to make a system like this scale without needing a gigantic shared ledger.

The "get back something" above is potentially doing some heavy lifting. You can make the user responsible for carrying a fairly large number of entries as part of their proof, up to the next waypoint, where the notary gets one or more other, lower-traffic notaries (ones it's probably paying) to notarize its state.

In this way you can build up a distributed, hierarchical merkle tree where no one party even has a complete picture (unless they work really hard at archiving it, and even then it's only hashes), but proofs work and editing things into history isn't possible.
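The "entries the user carries" are essentially a merkle inclusion path: sibling hashes from their leaf up to a root the notary had co-notarized. A self-contained sketch (assuming a power-of-two leaf count for brevity):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    """Root over a power-of-two list of leaf digests."""
    level = leaves
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes the user carries; each entry is (sibling, sibling_is_left)."""
    proof, level = [], leaves
    while len(level) > 1:
        sib = index ^ 1
        proof.append((level[sib], sib < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf, proof, root):
    """Replay the path: no one needs the other leaves, only the siblings."""
    acc = leaf
    for sibling, is_left in proof:
        acc = h(sibling + acc) if is_left else h(acc + sibling)
    return acc == root

leaves = [h(bytes([i])) for i in range(8)]
root = merkle_root(leaves)
p = inclusion_proof(leaves, 5)
print(verify_inclusion(leaves[5], p, root))  # True
```

The proof is O(log n) hashes, which is why no party ever needs the full picture.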
So, since someone asked how you'd bind human-memorable names to key identities with something like this... there are lots of ways, but yes, DNS is one!
In particular, a notarized timestamp together with a complete DNSSEC signature chain from the root establishes the validity of a particular DNS record (RRset) at that time, and can serve as a 100% machine-verifiable source of "domain at time" identity.
Of course if you were building identity on top of a notary system, you'd probably want to have intermediary layers of protocol that let you do things like change your keys, recover from lost keys, etc. where an identity is a chain of notarized events signed with valid-at-time keys.
One can think of this as an analog of on-chain computation, except that instead of executing on miners, it just executes on a VM evaluating whether to accept a key as corresponding to the identity.
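A toy sketch of that "VM": an identity is an event chain, and a verifier replays it to decide which key is currently valid. Signature checking is reduced to a string comparison here purely to keep the sketch short; event field names are hypothetical.

```python
import hashlib, json

def evaluate(events, genesis_key):
    """Replay key-rotation events; return the currently valid key, or None."""
    current, prev_hash = genesis_key, ""
    for ev in events:
        if ev["prev"] != prev_hash:
            return None  # broken chain of notarized events
        if ev["signed_by"] != current:
            return None  # not signed with the valid-at-time key
        if ev["type"] == "rotate":
            current = ev["new_key"]
        prev_hash = hashlib.sha256(
            json.dumps(ev, sort_keys=True).encode()).hexdigest()
    return current

e1 = {"type": "rotate", "new_key": "K2", "signed_by": "K1", "prev": ""}
h1 = hashlib.sha256(json.dumps(e1, sort_keys=True).encode()).hexdigest()
e2 = {"type": "rotate", "new_key": "K3", "signed_by": "K2", "prev": h1}
print(evaluate([e1, e2], "K1"))  # K3
```

A real system would add recovery events, quorum rules, etc., but the point is the evaluation is a pure function over notarized history, not a computation miners run.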

Why is a machine-evaluable identity system interesting to begin with?

For me, it goes back to my thread on the end of Twitter, what is being lost, and what we eventually need to build in its place.

https://twitter.com/RichFelker/status/1585402718524706819


Particularly, the value Twitter had as a unified public social graph of curated trust-as-source-of-information relationships.

The same kind of trust-as-source-of-information has come up in #reprobuilds and software provenance fields.

@dalias
I wrote my master's thesis on Reproducible Builds and transparency logs.

There are several papers published on this that I can find if it's of interest :)

@Foxboron Very possibly! There was an epic thread with taviso (the one where he blocked me and a bunch of other folks for not agreeing with him that they're useless) a long time ago on birdsite over the value of #reprobuilds and webs of trust, where a lot of this came up.
@dalias
I was one of the people arguing with Tavis on that thread :) He did not block me though!

@dalias
The formative paper from the Reproducible Builds community is by Benjamin Hof: https://arxiv.org/abs/1711.07278

My paper takes the same ideas and applies them to Reproducible Builds and rebuilders with in-toto attestations:
https://bora.uib.no/bora-xmlui/handle/1956/20411

A few of the ideas from my thesis ended up in the transparency-log implementation that underpins sigstore, described in a recently published paper:
https://dl.acm.org/doi/10.1145/3548606.3560596

Software Distribution Transparency and Auditability

A large user base relies on software updates provided through package managers. This provides a unique lever for improving the security of the software update process. We propose a transparency system for software updates and implement it for a widely deployed Linux package manager, namely APT. Our system is capable of detecting targeted backdoors without producing overhead for maintainers. In addition, in our system, the availability of source code is ensured, the binding between source and binary code is verified using reproducible builds, and the maintainer responsible for distributing a specific package can be identified. We describe a novel "hidden version" attack against current software transparency systems and propose as well as integrate a suitable defense. To address equivocation attacks by the transparency log server, we introduce tree root cross logging, where the log's Merkle tree root is submitted into a separately operated log server. This significantly relaxes the inter-operator cooperation requirements compared to other systems. Our implementation is evaluated by replaying over 3000 updates of the Debian operating system over the course of two years, demonstrating its viability and identifying numerous irregularities.

@dalias
There's more stuff here, but the sigstore paper also has a great number of relevant citations, as @sangy has been working on this for a very long time :)
We can't all be the experts in everything or know who the experts in everything are. But we can share with one another knowledge of who we trust for what purposes. Doing that with decentralized protocols seems appealing.

@dalias The other related question I've wondered about is whether it's possible to create a "proof of *useful* work" chain.

Can we take some compute that had to be done anyway, and convince cryptobros to throw computing power at it?

Think protein folding or similar.

@dalias Every time I look at magic internet money I think about the amount of power and engineering work being wasted on generating heat bruteforcing SHA256s when we have real problems to solve.

If we can't stop them from burning all that power, can we at least do something useful with it?

@azonenberg @dalias I had an idea about that a long while ago, on training neural networks, but there are some problems with that. Especially with over-fitting and consensus.
@azonenberg @dalias sadly not really - the requirements for PoW schemes are, basically, (a) easy/cheap for a third party to verify, and (b) many small tries for miners (i.e. there's no progress/long-running computation). Both exclude protein folding, and I've never seen anyone suggest something truly useful and credible. The best anyone has done is primes of specific structure, but (1) it's still basically useless and (2) it's not very good PoW either (time/memory tradeoffs hurt (b)).

@azonenberg @dalias

the original idea was Hashcash, an anti-spam feature requiring users to spend CPU time to post
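A minimal Hashcash-style sketch, assuming SHA-256 and a hex-prefix difficulty target (the function names are my invention). It shows properties (a) and (b) from the reply above: one hash to verify, many tries to mint.

```python
import hashlib
from itertools import count

def mint(msg: bytes, zero_hex_digits: int = 4) -> int:
    """Costly: try nonces until the hash starts with enough zeros."""
    target = "0" * zero_hex_digits
    for nonce in count():
        digest = hashlib.sha256(msg + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def check(msg: bytes, nonce: int, zero_hex_digits: int = 4) -> bool:
    """Cheap: a single hash verifies the stamp."""
    digest = hashlib.sha256(msg + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * zero_hex_digits)

n = mint(b"post@example")      # ~2^16 tries on average at 4 hex digits
print(check(b"post@example", n))  # True
```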

@dalias I don't see how we ever solve the problem of binding human-memorable names to random keys without signed merkle + consensus, unless we depend on a central point of trust like DNS (which can be ok sometimes)
@elijah I'm getting there. 😁
@dalias $dayjob built one called Rekor :)
@ariadne Will look! Yes, CT-like things are a great use for notary chain. Not sure how close it is to what I have in mind but we'll see.