The Claude Code Leak

What the accidental Claude Code source code leak tells us about the real value of code, product-market fit, and why integration is what actually makes software great.

> But then the clean room implementations started showing up. People had taken Anthropic’s source code and rewritten Claude Code from scratch in other languages like Python and Rust.

Seems like the phrase "clean room" is the new "nonplussed"... how does this make any sense?

I think it means you write a spec from the implementation. Then you write a new implementation from the spec. You might go so far as to do the second part in a "clean" room.
Right, but that's not what people are doing here, at all.

in a typical clean-room design, the person writing the new implementation is not supposed to have any knowledge of the original, they should only have knowledge of the specification.

if one person writes the spec from the implementation, and then also writes the new implementation, it is not clean-room design.

I believe the argument is that LLMs are stateless. So if the session writing the code isn't the same session that wrote the spec, it's effectively a clean room implementation.

There are other details of course (is the old code in the training data?) but I'm not trying to weigh in on the argument one way or the other.
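The two-phase flow the parent describes can be sketched in a few lines of Python. Here `call_llm` is a hypothetical stand-in for any stateless model API; the point is purely the information barrier between the two calls, not any particular provider:

```python
def clean_room_rewrite(original_source: str, call_llm) -> str:
    """Sketch of a two-session 'clean room' pipeline.

    call_llm: a hypothetical stateless model API (prompt -> completion).
    Each invocation sees only its own prompt; no session state is shared.
    """
    # Phase 1: this call sees the original code and emits only a spec.
    spec = call_llm(
        "Write a behavioral specification for this program. "
        "Include no code.\n\n" + original_source
    )
    # Phase 2: a fresh call sees only the spec, never the original source.
    return call_llm(
        "Implement a program that satisfies this specification:\n\n" + spec
    )
```

Whether this legally counts as clean-room is exactly the open question above: the barrier only holds if the model's weights don't already contain the original code, which a fresh session does nothing to guarantee.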

Heh, the original being entirely vibed had me thinking of an interesting problem: if you used the same model to generate a specification, then reset the state and passed that specification back to it for implementation, the resulting code would by design be very close to the original. With enough luck (or engineering), you could even get the same exact files in some cases.

Does this still count as clean-room? Or what if the model wasn't the same exact one, but one trained the same way on the same input material, which Anthropic never owned?

This is going to be a decade of very interesting, and probably often hypocritical lawsuits.

Heya, post author here. I think I was just wrong about this assertion. I got into a discussion with a copyright lawyer over on Bluesky[^1] after I wrote this and came away reasonably convinced that this wouldn’t be a valid example of a clean room implementation.

[^1]: https://bsky.app/profile/mergesort.me/post/3mihhaliils2y

Joe Fabisevich (@mergesort.me), from that thread:

> Gotcha, that makes sense to me! I think I was conflating the specifics here with the normal case, and I suspect if there was only one trial to be had then your interpretation would win in court.

The most fitting method would be to train an LLM on the Claude Code source code (among other data).

Then use Anthropic's own argument that LLM output is original work and thus not subject to copyright.

> Many software developers have argued that working like a pack of hyenas and shipping hundreds of commits a day without reading your code is an unsustainable way to build valuable software, but this leak suggests that maybe this isn’t true — bad code can build well-regarded products.

The product hasn't been around long enough to decide whether such an approach is "sustainable". It is currently in a hype state and needs more time for that hype to die down and the true value to show up, as well as to see whether it becomes the 9th circle of hell to keep in working order.

Hey there, author of the post here. I actually agree with this! That is in fact why I used the word maybe — my comment really was meant to be more speculative than definitive.

I think one thing that goes unmentioned is that maybe code quality is really not that important for trivial things, because they can be trivially reproduced if need be. I would argue Claude Code is exactly such a project; coding agents are incredibly simple and rewriting CC wouldn't be much of a problem.

Non-trivial things tend to be much more sensitive to code quality in my experience, and will by necessity be kept around for longer and thus be much more sensitive to maintenance issues.

> maybe code quality is really not that important for trivial things

I hear this narrative being pushed quite a bit, and it makes my spidey senses tingle every time.

Secure programs are a subset of correct programs, and to write and maintain correct programs you need to have a quality mindset.

A 0-day doesn't care if it's in a part of your computer you consider trivial or not.

I wonder what happened to the person that wrote "Coding as Creative Expression" (https://build.ms/2022/5/21/coding-as-creative-expression/)?

I'm not (just) being glib. That earlier article displays some introspection and thoughtful consideration of an old debate. The writing style is clearly personal, human.

Today's post is not so much. It has LLM fingerprints on it. It's longer, there are more words. But it doesn't strike me as having the same thoughtful consideration in it. I would venture to guess that the author tried to come up with some new angles on the news of the Claude Code leak, because it's a hot topic, and jotted some notes, and then let an LLM flesh it out.

Writing styles of course change over time, but looking at these two posts side by side, the difference is stark.

Coding As Creative Expression

Exploring whether coding is art or science, and how programming serves as a creative medium for solving problems and expressing ideas through code.

Hey there, author of the post here! I actually wrote this piece myself on my phone while I was out for a walk this morning. It was initially meant to be a quick note more than a full blog post, whereas Coding As Creative Expression took me a couple of days to write.

I made a commitment to write more this year and put my thoughts out quicker than I used to, so that’s likely the primary reason it’s not as deep of a piece of writing as the post you’re referencing. But I do want to note that this wasn’t written using AI, it just wasn’t intended to be as rich of a post.

The reason it came out longer is that I’ve honestly been thinking about these ideas for a while, and there is so much to say about this subject. I didn’t have any particular intention of hopping on a news cycle, but once I started writing the juices were flowing and I found myself coming up with five separate but interrelated thoughts around this story that I thought were worth sharing.

Reminds me of the classic quote, often attributed to Mark Twain (though it traces back to Blaise Pascal): "Apologies, I didn't have time to write a short letter, so I wrote a long one."

Have you noticed that comments like "this post seems written with AI" are now appearing on all posts, even those written without AI?

We're starting to become wary due to the abuse of AI and the proliferation of sloppy content, but also because we often have trouble distinguishing authentic content from AI-generated slop.

Another feature of this AI era that I hate.

What changed is you, the reader. In 2026 we treat the smallest signs as evidence of LLM writing. Too long? LLM. Too short? LLM. Too grammatically correct? Must be LLM.

I personally found it really amusing how they weaponized the legal system to DMCA all the Claude Code source code repositories. Code ingested into the model is supposedly not copyrightable, yet the produced code apparently is, even though by their own legal definition computer-generated code cannot be copyrighted, and that's one of their primary arguments in legal cases.

> It should serve as a warning to developers that the code doesn’t seem to matter, even in a product built for developers.

Code doesn't matter IN THE EARLY DAYS.

This is similar to what I've observed over 25 years in the industry. In a startup, the code doesn't really matter; the market fit does.

But as time goes on your codebase has to mature, or else you end up using more and more resources on maintenance rather than innovation.

Alternatively, the code can go the way of "fast fashion", or even "3D-print your garments in the morning according to your feelings and the weather, then recycle them at the end of the day".

If the functionality is splittable into microfeatures/microservices, then anything you need right now can potentially be vibe-coded, even on the fly (and deleted afterwards). Single-use code.

> But as time goes on your codebase has to mature, or else you end up using more and more resources on maintenance rather than innovation.

That's a tremendous resource sink in enterprise software. Solving it, or even just making it avoidable (maybe Anthropic goes that way and leads the others), would be a huge revolution.