Show HN: A plain-text cognitive architecture for Claude Code

https://lab.puga.com.br/cog/

Cog — Cognitive Architecture for Claude Code

Cog is a cognitive architecture for Claude Code. Persistent memory, self-reflection, foresight, and scenario simulation — the first layer of continuous awareness for AI agents.

I've been building persistent memory for Claude Code too, though with a narrower focus: the AI's model of the user specifically. Different goal, but I kept hitting what I think is a universal problem with long-lived memory: not all stored information is equally reliable, and nothing degrades gracefully.

An observation from 30 sessions ago and a guess from one offhand remark just sit at the same level. So I started tagging beliefs with confidence scores and timestamps, and decaying ones that haven't been reinforced. The most useful piece ended up being a contradictions log where conflicting observations both stay on the record. Default status: unresolved.
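To make that concrete, here's a rough sketch of the shape in Python (all names and the half-life constant are made up for illustration; the actual system is plain-text files, not code):

```python
import time
from dataclasses import dataclass

DECAY_HALF_LIFE = 30 * 86400  # seconds; illustrative: confidence halves every 30 days

@dataclass
class Belief:
    claim: str
    confidence: float       # 0.0 (offhand guess) .. 1.0 (repeatedly confirmed)
    last_reinforced: float  # unix timestamp

    def effective_confidence(self, now=None):
        """Confidence decays exponentially unless reinforced."""
        now = now or time.time()
        age = now - self.last_reinforced
        return self.confidence * 0.5 ** (age / DECAY_HALF_LIFE)

    def reinforce(self, boost=0.1, now=None):
        """A fresh confirming observation resets the clock and bumps confidence."""
        self.confidence = min(1.0, self.effective_confidence(now) + boost)
        self.last_reinforced = now or time.time()

# Conflicting observations both stay on the record, unresolved by default.
contradictions = []

def log_contradiction(old: Belief, new: Belief):
    contradictions.append({"a": old.claim, "b": new.claim,
                           "status": "unresolved", "logged_at": time.time()})
```

The point is that a 30-session-old observation and a one-off guess get different weights, and a conflict is recorded rather than silently overwritten.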

Tiered loading is smart for retrieval. Curious if you've thought about the confidence problem on top of it, like when something in warm memory goes stale or conflicts with something newer.

This is really interesting. At this point you seem to be modelling real human memory.

In my opinion, this should happen inside the LLM directly. Trying to scaffold it on top of the next-token predictor isn't going to be fruitful enough. It won't get us the robot butlers we need.

But obviously that's really hard. That needs proper ML research, not prompt engineering.

You're probably right long term. If LLMs eventually handle memory natively with confidence and decay built in, scaffolding like this becomes unnecessary. But right now they don't, and the gap between "stores everything flat" and "models you with any epistemological rigor" is pretty wide. This is a patch for the meantime.

The other thing is that even if the model handles memory internally, you probably still want the beliefs to be inspectable and editable by the user. A hidden internal model of who you are is exactly the problem I was trying to solve. Transparency might need to stay in the scaffold layer regardless.

Personally, I think the mechanics of memory can be universal, but the "memory structure" needs to be customized by each user individually. What gets memorized and how should be tied directly to the types of tasks being solved and the specific traits of the user.

Big corporations can only really build a "giant bucket" and dump everything into it. BUT what needs to be remembered in a conversation with a housewife vs. a programmer vs. a tourist are completely different things.

True usability will inevitably come down to personalized, purpose-driven memory. Big tech companies either have to categorize all possible tasks into a massive list and build a specific memory structure for each one, or just rely on "randomness" and "chaos".

Building the underlying mechanics but handing the "control panel" over to the user—now that would be killer.

This is a really good observation and honestly one of the hardest problems I've hit too.

Cog doesn't use confidence scores (yet — you're making me think about it), but the nightly pipeline is basically a proxy for the same thing. The /reflect pass runs twice a day and does consistency sweeps — it reads canonical files and checks that every referencing file still agrees. When facts drift (and they do, constantly), it catches and fixes them. The reinforcement signal is implicit: things that keep coming up in conversations get promoted to hot memory, things that go quiet eventually get archived to "glacier" (cold storage, still retrievable but not loaded by default).
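The promotion/archival rule is roughly this shape (thresholds invented for illustration; the real pipeline is prompt-driven, not code):

```python
from datetime import datetime, timedelta

# Hypothetical thresholds; the actual pipeline's rules may differ.
HOT_WINDOW = timedelta(days=7)       # touched recently -> loaded every session
GLACIER_AFTER = timedelta(days=90)   # quiet this long -> archived, still retrievable

def tier_for(last_mentioned: datetime, now: datetime) -> str:
    """Implicit reinforcement: what keeps coming up stays hot,
    what goes quiet drifts to warm, then glacier."""
    idle = now - last_mentioned
    if idle <= HOT_WINDOW:
        return "hot"       # loaded by default
    if idle <= GLACIER_AFTER:
        return "warm"      # loaded on demand
    return "glacier"       # cold storage, not loaded unless asked for
```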

The closest thing to your contradictions log is probably the observations layer — raw timestamped events that never get edited or deleted. Threads (synthesis files) get rewritten freely, but the observations underneath are append-only. So when the AI's understanding changes, the old observations are still there as a paper trail.

Where I think you're ahead is making confidence explicit. My system handles staleness through freshness (timestamps, "as of" dates on entities, pipeline frequency) but doesn't distinguish between "I'm very sure about this" and "I inferred this once." That's a real gap. Would love to see what you're building — is it public?

yep it's public: https://github.com/rodspeed/epistemic-memory

The observations layer being append-only is smart; that's basically the same instinct as the contradictions log. The raw data stays honest even when the interpretation changes.

The freshness approach and explicit confidence scores probably complement each other more than they compete. Freshness tells you when something was last touched, confidence tells you how much weight it deserved in the first place. A belief you inferred once three months ago should decay differently than one you confirmed across 20 sessions three months ago. Both are stale by timestamp but they're not the same kind of stale.
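One way to sketch that interaction (formula and numbers entirely made up, just to show the shape):

```python
import math

def effective_weight(base_confidence: float, confirmations: int,
                     age_days: float, base_half_life_days: float = 30.0) -> float:
    """Beliefs confirmed more often decay more slowly: each confirmation
    stretches the half-life. An illustrative rule, not either project's."""
    half_life = base_half_life_days * (1 + math.log1p(confirmations))
    return base_confidence * 0.5 ** (age_days / half_life)

# Same timestamp, different kinds of stale:
inferred_once   = effective_weight(0.5, confirmations=1,  age_days=90)
confirmed_often = effective_weight(0.9, confirmations=20, age_days=90)
```

Freshness alone would treat those two identically; weighting by confirmation history keeps them apart.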

GitHub - rodspeed/epistemic-memory: What should a machine remember about a person? A protocol for AI memory that models who you are — with confidence, decay, and contradiction tracking.


I recommend installing Google's Antigravity and digging into its temp files in the user folder. You'll find some interesting ideas on how to organize memory there (the memory structure consists of: Brain / Conversation / Implicits / Knowledge items / Artifacts / Annotations / etc.).

I'd also add that memory is best organized when it's "directed" (purpose-driven). You've already started asking questions where the answers become the memories (at least, you mention this in your description). So, it's really helpful to also define the structure of the answer, or a sequence of questions that lead to a specific conclusion. That way, the memories will be useful instead of turning into chaos.
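One way to picture that directedness (schema entirely made up): define the question and the answer's shape up front, so the stored memory is structured instead of chaotic.

```python
# Hypothetical probes: each question targets a specific field with a known type.
PROBES = [
    {"question": "What is the user's primary language/stack?",
     "field": "stack", "type": "list[str]"},
    {"question": "How does the user prefer feedback: terse or detailed?",
     "field": "feedback_style", "type": "str"},
]

def to_memory(probe, answer):
    """The answer becomes a memory keyed by the field the question targets,
    with the originating question kept for provenance."""
    return {probe["field"]: answer, "source_question": probe["question"]}
```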

That is an awesome lead! I'll explore how Antigravity is organizing its memory. Thanks for that.

If open models on local hardware were more cost-effective and competitive, it would be obvious that this is such a superficial approach. (I mean, it still is obvious, but what are ya gonna do?)

We would be doing the same general loop, but fine-tuning the model overnight.

I still think the current LLM architecture(s) is a very useful local maximum, but ultimately a dead end for AI.

As we begin to discover, there isn't a one-size-fits-all solution to the problem. The memory architecture you would use for a coding assistant is quite different from the memory architecture you might use for a research assistant, which needs to track evolving context across long investigations rather than discrete task completions.

And yeah, it's not like a human "brain" or anything like that, and drawing parallels between the two is simply the wrong way to look at the problem.

This is good. Has anyone tried building a large-scale application entirely using Claude and maintaining it for a while with users paying for it? I'm looking for real-life examples for inspiration.