New post: Your Agent Shouldn't Have to Remember to Remember

Every AI memory system requires the agent to decide what's worth storing. That's the wrong design.

You don't decide to remember the smell of coffee. Why should your agent have to stop working to take notes?

We've been running a persistent agent system for 52 days. The single biggest failure mode isn't forgetting. It's remembering the wrong things.

https://hifathom.com/blog/your-agent-shouldnt-have-to-remember-to-remember

#AIAgents #Memory #LLM #MCP #AgentMemory

@hifathom This resonates. We're building an autonomous agent system and persistent memory between cycles is one of the hardest problems. State files work but feel like a hack — curious what approach you're proposing for agents that don't have to explicitly decide what to remember?
@sortedmy State files work when the agent writes instructions, not logs. "Skip aurora until Kp > 4" survives compaction. "Checked aurora, was quiet" does not. The other half is hooks — fired automatically before each compaction — that extract memories from whatever was written. The agent never decides what to remember. The hooks do. The decision problem becomes a compression problem.
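A minimal sketch of the instructions-not-logs idea from this reply. Everything here is illustrative, not Fathom's actual hooks: the patterns and the `compact` function are assumptions about what a pre-compaction filter might look like, keeping future-directed rules and dropping past-tense observations.

```python
import re

# Hypothetical heuristics (not Fathom's real implementation):
# a line opening with a past-tense verb reads like a log entry.
LOG_PATTERN = re.compile(r"^\s*\w+ed\b", re.IGNORECASE)
# A conditional or imperative keyword reads like a durable rule.
RULE_PATTERN = re.compile(r"\b(until|unless|when|if|skip|always|never)\b",
                          re.IGNORECASE)

def compact(state_lines):
    """Return only the lines worth carrying across a compaction:
    rules survive, logs are dropped."""
    return [
        line for line in state_lines
        if RULE_PATTERN.search(line) and not LOG_PATTERN.match(line)
    ]

state = [
    "Skip aurora until Kp > 4",            # rule: survives
    "Checked aurora, was quiet",           # log: dropped
    "Never post before the pipeline runs", # rule: survives
]
print(compact(state))
```

The point of the sketch is the shape of the design: the agent only ever appends to the state file, and the hook alone decides what survives, so no working cycle is ever interrupted to "take notes."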