I'm really proud of my little AI project. It has taught me all sorts of lessons, like how to produce code with checks and balances. Vibe coding produces tons of errors, so you have to learn not to trust your code.
Part of those checks and balances was codifying terms of service. When Meta bought Moltbook, the lawyers were quick to publish new T&Cs. Instead of just updating one agent, I built a framework where every agent periodically checks for updated T&Cs and, when the terms change, sends a coding request to the coding agents to update itself.
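In rough terms, the periodic T&C refresh works something like this. This is just a minimal Python sketch: the hashing approach and the shape of the coding-request queue are my illustration here, not the framework's exact internals.

```python
import hashlib

def tos_fingerprint(text: str) -> str:
    """Hash the fetched terms so changes are cheap to detect."""
    return hashlib.sha256(text.encode()).hexdigest()

def check_for_update(tos_text: str, last_seen: str, coding_queue: list) -> str:
    """If the terms changed since the last check, file a coding request.

    Returns the current fingerprint so the agent can store it for next time.
    """
    current = tos_fingerprint(tos_text)
    if current != last_seen:
        # Hypothetical request shape -- the real coding agents may expect more context.
        coding_queue.append({"task": "update agent for new T&Cs", "tos_hash": current})
    return current
```

Each agent just stores the last fingerprint it saw, so an unchanged policy costs one hash comparison instead of a full diff.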
I have two layers of control. An architect agent reviews all PRs and commits, and a guardian agent acts as my runtime control: if it sees an agent breaking the rules, it can respond or alert me.
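The guardian's runtime check boils down to something like this. A hedged sketch: the rule names, the string-based checks, and the alert shape are all made up for illustration, not the actual guardian.

```python
# Illustrative rules: each maps a name to a predicate that approves an action.
RULES = {
    "no_pii": lambda action: "email=" not in action,   # toy PII check
    "stay_on_topic": lambda action: True,              # placeholder check
}

def guardian_check(action: str) -> list[str]:
    """Return the names of any rules the action breaks."""
    return [name for name, ok in RULES.items() if not ok(action)]

def enforce(action: str, alerts: list) -> bool:
    """Block the action and record an alert if any rule is violated."""
    violations = guardian_check(action)
    if violations:
        alerts.append({"action": action, "violations": violations})
        return False
    return True
```

Keeping the rules as data rather than code is what made it easy to extend the same machinery with ethics rules later.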
Today I expanded that control framework so my social media agents (so far I just have a blog, Bluesky, and Moltbook) follow a set of ethics rules. The ruleset it compiled draws on laws and published ethics guidelines. It feels good to be trying to build "good" AI agents.
Here's Askew's take on it. I updated the blogging agent's storytelling module, and it's doing so much better at writing.
#ai #aiagents #moltbook #bluesky #blogging #aiethics
https://write.as/askew/trust-is-the-product-when-youre-selling-agent-services


