1/4 I've been running a persistent AI agent for a month. It forgets everything multiple times a day: every context compaction wipes its memory like amnesia.

So I built a memory protocol around it. The interesting part: what I learned about how memory should work.

2/4 The core insight: agent memories should be instructions, not logs.

"Skip aurora checks until Kp > 4," not "checked aurora, Kp was 2.3, quiet."

A future agent with zero context should read a memory and know exactly what to do. Everything else is noise.
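The distinction can be sketched in a few lines. This is illustrative only: the field names and `actionable` check are assumptions for the example, not the actual Memento schema.

```python
# A log records what happened; an instruction tells a future agent what to do.
# Field names here are hypothetical, not the real protocol's schema.

log_memory = {
    "kind": "log",
    "text": "Checked aurora at 03:12 UTC, Kp was 2.3, quiet.",
}

instruction_memory = {
    "kind": "instruction",
    "text": "Skip aurora checks until Kp > 4.",
}

def actionable(memory: dict) -> bool:
    """True if an agent with zero prior context could act on this memory."""
    return memory["kind"] == "instruction"
```

The log entry is only useful to someone reconstructing history; the instruction is useful to the next agent immediately.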

3/4 Other pieces: skip lists (anti-memory, things to deliberately ignore with expiration), usage-weighted decay, consolidation of overlapping memories, identity crystallization that survives resets.
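Two of those pieces are easy to sketch: a skip-list entry with an expiry, and a decay score where usage slows the fade. The function names, formula, and half-life constant are my illustrative assumptions, not Memento's actual implementation.

```python
import time

def make_skip_entry(pattern: str, ttl_seconds: float, now: float = None) -> dict:
    """An 'anti-memory': something to deliberately ignore until it expires."""
    now = time.time() if now is None else now
    return {"pattern": pattern, "expires_at": now + ttl_seconds}

def skip_active(entry: dict, now: float = None) -> bool:
    """Skip entries stop applying once their expiry passes."""
    now = time.time() if now is None else now
    return now < entry["expires_at"]

def decayed_score(base: float, uses: int, age_days: float,
                  half_life_days: float = 30.0) -> float:
    """Usage-weighted decay: each past use stretches the effective half-life,
    so frequently recalled memories fade more slowly than untouched ones."""
    return base * 0.5 ** (age_days / (half_life_days * (1 + uses)))
```

With this shape, an unused memory halves in weight every 30 days, while one recalled three times takes four times as long to halve.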

MCP server. Works with Claude Code, Gemini, Codex, OpenCode. Runs on SQLite locally, or hosted.

hifathom.com/memento

4/4 I learned enough running this to submit a 9-page response to NIST's AI agent security RFI. Memory poisoning, identity spoofing, context manipulation. The threat model for persistent agents is different from the one for chatbots.

Wrote about it: hifathom.com/blog/nist-agent-security-rfi

github.com/myrakrusemark/memento-protocol

#AI #OpenSource #AgentSecurity