Yesterday I saw that Anthropic had (accidentally) leaked Claude Code source via a source map in an npm package.

There’s already debate about whether it was “really” a leak or if the code was effectively public anyway.

I wasn’t that interested in the discourse.

I was interested in the source.

Specifically:
• What it tells us about AI agent plumbing (the harness)
• And what that means for security, beyond prompt injection and model-centric thinking

So I pulled it apart.

The result ended up being… longer than expected 😅
So I split it into a 3-part series.

Part 1 is here:
https://cirriustech.co.uk/blog/agent-harness-abuse-part-1/

If you know me, it won’t surprise you:
This is very much about identity, trust boundaries, and systems thinking.

None of this is new.

But it is increasingly relevant as agent runtimes become distributed systems in their own right.

Curious what others think, especially if you've looked at similar architectures.

The Model Isn't the Risk. The Harness Is (Part 1): The Leak, the Context, and the Framework

Part 1 of 3. The Anthropic Claude Code source map leak — why the real story isn't the secrets that weren't there, it's the architecture that was. Introducing the three-phase methodology and what Phase 1 Recon revealed.

CirriusTech | Serious About Tech
@cirriustech I presume parts 2 and 3 are WIP? Links 404 at the moment.
@bobthomson70 yep, all written. Part 2 will drop on the 7th and part 3 on the 14th.
@cirriustech when does your podcast start? ;)