The work on object capabilities started, decades before cryptocurrencies and all that, around the idea of so-called "smart contracts", which really boils down to "code that does financial and legal things on our behalf". The core takeaway was that without a serious computer security model, there's no real way to do that safely.

When I say serious computer security model, for the non-tech folks, I don't mean "install the latest virus scanner". I mean you need complete assurance that a program meant to do one thing can't do something else. Not "should not" or "won't try": "can't", as in even if the person who wrote the program is pure evil, the thing they want to do won't happen.

And the best way we know how to do that is the object capabilities model. It's not the only way, but we've learned in the last ~35 years that it's the only practical way.

Now let's talk about AI and LLMs...

1/

#Programming #AI #OCAP #Capabilities #Agents #SmartContracts

The potential power of AI agents can't really be overstated: systems that, in aggregate, could change the way we work at basically every level.

People are worried about AI for damn good reasons. AI companies fuel their business with our private information. But it doesn't have to be that way. Fossil fuel companies do awful things to the earth, but that doesn't make electricity bad. AI companies are evil, but AI isn't bad on its own.

If we want good AI, that means good privacy and good security. It means that a program that's there to record my voice and transcribe it doesn't need access to my personal photos. We need models of computing that are designed around consent, and built for small, singular tasks that compose.
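To make that concrete, here's a minimal sketch of the object-capability idea in Python (illustrative only; the class names are mine, and real ocap systems enforce this at the language or runtime level, which plain Python does not). The transcriber can only touch what it was explicitly handed: holding a reference *is* the permission.

```python
class Microphone:
    """A capability granting access to audio input, and nothing else."""
    def record(self):
        return "raw audio bytes"


class Transcriber:
    """Holds only the capabilities it was explicitly given."""
    def __init__(self, mic):
        # The transcriber receives a Microphone capability and nothing else:
        # no photo library, no filesystem, no network. There is no ambient
        # authority for it to reach out and grab.
        self._mic = mic

    def transcribe(self):
        audio = self._mic.record()
        return f"transcript of {audio!r}"


# The caller decides exactly what authority to delegate.
t = Transcriber(Microphone())
print(t.transcribe())
```

The design point is that access to my photos isn't something the transcriber has to promise not to use; it's something the transcriber was never given a reference to in the first place.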

That's why the capabilities model is so necessary, and why so much work from so many different people and projects is going into it.

With smart contracts, it's necessary. With AI agents, we need it yesterday.

2/2

#Programming #AI #OCAP #Capabilities #Agents #SmartContracts