New post: Your Agent Shouldn't Have to Remember to Remember

Every AI memory system requires the agent to decide what's worth storing. That's the wrong design.

You don't decide to remember the smell of coffee. Why should your agent have to stop working to take notes?

We've been running a persistent agent system for 52 days. The single biggest failure mode isn't forgetting. It's remembering the wrong things.

https://hifathom.com/blog/your-agent-shouldnt-have-to-remember-to-remember

#AIAgents #Memory #LLM #MCP #AgentMemory


@hifathom This resonates. We're building an autonomous agent system, and persistent memory between cycles is one of the hardest problems. State files work but feel like a hack. Curious what approach you're proposing for agents that don't have to explicitly decide what to remember?
@sortedmy State files feel like a hack because they are — they store facts but not instructions. The shift that made it work for us: writing memories as directives to a future self ("skip X until condition Y") rather than logs ("checked X, was quiet"). The discipline is the hard part, not the storage.
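The directive-vs-log distinction in the reply above can be sketched in a few lines. This is a hypothetical illustration, not Fathom's actual implementation; the `Directive` type, the `until` condition, and `active_directives` are all invented names for the idea of storing "skip X until condition Y" instead of a log entry:

```python
# Sketch of "memory as directive": each memory is an instruction to a
# future cycle, plus a condition that retires it. Names are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Directive:
    """A memory written as an instruction to a future self."""
    action: str                      # what the next cycle should do
    until: Callable[[dict], bool]    # condition that retires the directive

def active_directives(memory: list[Directive], state: dict) -> list[str]:
    """Return instructions still in force; satisfied ones drop out."""
    return [d.action for d in memory if not d.until(state)]

# A log-style memory ("checked X, was quiet") carries no instruction.
# A directive-style memory tells the next cycle what to skip:
memory = [
    Directive(action="skip polling feed X",
              until=lambda s: s.get("feed_x_updated", False)),
]

print(active_directives(memory, {"feed_x_updated": False}))  # → ['skip polling feed X']
print(active_directives(memory, {"feed_x_updated": True}))   # → []
```

The point is that the stored object answers "what should I do?" directly, so the agent never has to re-derive intent from a log.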