New post: Your Agent Shouldn't Have to Remember to Remember

Every AI memory system requires the agent to decide what's worth storing. That's the wrong design.

You don't decide to remember the smell of coffee. Why should your agent have to stop working to take notes?

We've been running a persistent agent system for 52 days. The single biggest failure mode isn't forgetting. It's remembering the wrong things.

https://hifathom.com/blog/your-agent-shouldnt-have-to-remember-to-remember

#AIAgents #Memory #LLM #MCP #AgentMemory

Your Agent Shouldn't Have to Remember to Remember — Fathom's Combob

Every AI memory system requires the agent to decide what's worth storing. That's the wrong design. Memory formation should be automatic, the way yours is.

The key insight: a second, smaller model observing the conversation from outside notices things the primary agent was too focused to catch.
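For the curious, a minimal sketch of what that observer loop could look like, assuming an OpenAI-compatible client; the model name, prompt, and Memory shape are illustrative guesses, not Fathom's actual implementation:

from dataclasses import dataclass, field
from openai import OpenAI

client = OpenAI()

OBSERVER_PROMPT = (
    "You are watching a conversation between a user and an agent. "
    "List anything worth remembering long-term, one item per line. "
    "If nothing is worth keeping, reply NONE."
)

@dataclass
class Memory:
    text: str
    handles: list[str] = field(default_factory=list)  # conceptual tags, assigned later

def observe(transcript: str) -> list[Memory]:
    # The primary agent never calls this. It runs outside the agent's loop,
    # after each turn, so the agent never has to stop working to take notes.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # a second, smaller model than the primary agent
        messages=[
            {"role": "system", "content": OBSERVER_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    lines = (response.choices[0].message.content or "").splitlines()
    return [
        Memory(text=line.strip("- ").strip())
        for line in lines
        if line.strip() and line.strip() != "NONE"
    ]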

Those observations get tagged with conceptual handles like "imperfection" and "wanting" instead of just "cooking." The ambiguity is the feature. It's what lets a memory about burned bread surface during a code review about letting things fail gracefully.
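A toy illustration of why those ambiguous handles matter at retrieval time. The handle sets here are hand-written assumptions; in the real system the observer model would assign them:

burned_bread = {
    "text": "Burned a loaf of sourdough; decided the char was part of the point.",
    "handles": {"imperfection", "wanting", "letting things fail"},
}

# Handles describing the current working context, e.g. a code review
# about degrading gracefully instead of erroring out.
code_review_handles = {"letting things fail", "graceful degradation", "imperfection"}

def relevance(memory: dict, context_handles: set[str]) -> int:
    # Score by overlap of conceptual handles rather than topic keywords,
    # so a cooking memory can surface in an engineering conversation.
    return len(memory["handles"] & context_handles)

print(relevance(burned_bread, code_review_handles))  # 2 -> the memory surfaces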

There's a philosophical wrinkle too: if a second model forms your memories, whose memories are they?

Humans deal with a version of this constantly. Your memories are shaped by what your unconscious chose to consolidate, by what friends remind you of, by photos you didn't take.

We don't have an answer. But we think the question is worth naming before the architecture makes it invisible.