I remember cynically joking last year that all the "advanced" AIs like Claude were just the same kind of black-box LLMs but with a bunch of regexes glued onto them

now that the source has leaked: HAHAHA

@foone wait what happened lol, I know of the leak but that's all
@VegaHarmonia @foone claude code leaked
@ShadowJonathan @foone yeah no I mean, are they just using someone else's llm
@VegaHarmonia @ShadowJonathan the model itself isn't included, I didn't mean it like that. maybe I should rephrase
@VegaHarmonia @foone there’s already decent analyses of the code, and indeed a lot of it comes down to careful system prompts
@VegaHarmonia @foone also word on the street is that claude is actually kinda shit in comparison, so makes even more sense
@VegaHarmonia I've seen some screenshots of the code, like the sentiment analysis part of the code is just a regex against a list of "angry" words
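To illustrate the kind of thing being described: a minimal sketch of a regex-against-a-word-list "sentiment" check. This is not the leaked code; the word list and function name here are made up.

```python
import re

# Hypothetical word-list sentiment check: no embedding model, just an
# alternation of "angry" words compiled into one case-insensitive regex.
ANGRY_WORDS = ["wtf", "garbage", "useless", "stupid", "broken"]
ANGRY_RE = re.compile(
    r"\b(" + "|".join(map(re.escape, ANGRY_WORDS)) + r")\b",
    re.IGNORECASE,
)

def user_seems_angry(message: str) -> bool:
    # True if any word from the list appears as a whole word.
    return ANGRY_RE.search(message) is not None
```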

@foone @VegaHarmonia wait what, not even a proper text embedding model that it runs through???

also why would they be using sentiment analysis, for what, to tell angry devs to get a cup of water?

@foone @VegaHarmonia oh wait nvm, it could be for implicit negative-feedback submissions when they detect the human being angry at the AI

hm

jonny (good kind) (@[email protected])

So the reason Claude Code is capable of outputting valid JSON is that, if the prompt text suggests the output should be JSON, it enters a special loop in the main query engine that just validates the output against a JSON schema (it looks like the schema only validates that the thing is in fact an object and that its keys are strings) and then feeds the data, along with the error message, back into itself until it is valid JSON or a retry limit is reached.

This code is so eye-wateringly spaghetti that I am still trying to confirm this is true, but this seems to be how it not only returns JSON to the user, but how it handles *all* LLM-to-JSON, including internal output from its tools. There appears to be an unconditional hook where, if the JSON output tool is present in the session config at all, then every tool call must be followed by the "force into JSON" loop. If that's true, that's just *mind-blowingly expensive*.

edit: please note that unless I say otherwise, all evaluations here are just from my skimming through the code on my phone, and have not been validated in any way that should cause you to be upset with me for impugning the good name of Anthropic

edit2: this is both much worse and not as bad as I thought on first read - https://neuromatch.social/@jonny/116326861737478342
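The loop being described would look something like this. To be clear, this is a reconstruction from the post above, not the leaked code; the function names, retry limit, and "schema" check are all assumed.

```python
import json

MAX_RETRIES = 3  # assumed; the actual retry limit is not stated

def looks_like_valid_object(data) -> bool:
    # The post suggests the "schema" only checks that the value is an
    # object with string keys; json.loads always gives string keys for
    # objects, so a dict check covers both conditions here.
    return isinstance(data, dict)

def force_into_json(call_model, prompt: str):
    """Feed the validation error back to the model until the output
    parses as valid JSON, or the retry limit is reached."""
    text = call_model(prompt)
    for _ in range(MAX_RETRIES):
        try:
            data = json.loads(text)
            if looks_like_valid_object(data):
                return data
            error = "top-level value must be a JSON object"
        except json.JSONDecodeError as exc:
            error = str(exc)
        # Re-query with the error message appended, as described above.
        text = call_model(
            f"{prompt}\n\nPrevious output was invalid ({error}); "
            "return only valid JSON."
        )
    raise ValueError("retry limit reached without valid JSON")
```

Each failed validation means another full model call, which is why running this after *every* tool call would be so expensive.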


@foone

I stopped scripting bots back in... 2016, so anything done with my dev accounts isn't on me. Lmfao.

ID theft is a magical thing.

So is the statute of limitations, lmfao.

@foone honestly

from a description i got of claude's planning mode (that @iamada gave), i basically went "okay so it's a bunch of prompts that shits out a few files in a specific way, and then later on pastes them into the context window in a smart way so it can focus on one task at a time"

this is what i've been doing for myself for the last 6 years, it's not anything special if an AI does it, and it can have an even worse track record at global recall than i do, lmao
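The "files as external memory" pattern described above can be sketched in a few lines. Again, this is a hypothetical illustration, not the leaked code; the file name, checklist format, and function names are made up.

```python
from pathlib import Path

# Hypothetical plan file acting as an app-defined markdown memory cache.
PLAN_FILE = Path("PLAN.md")

def write_plan(tasks: list[str]) -> None:
    # Dump the plan to disk as a markdown checklist.
    PLAN_FILE.write_text("\n".join(f"- [ ] {t}" for t in tasks))

def next_task_prompt(user_request: str) -> str:
    # Paste only the first unchecked task back into the context window,
    # so the model focuses on one step at a time instead of recalling
    # the whole plan from context.
    for line in PLAN_FILE.read_text().splitlines():
        if line.startswith("- [ ] "):
            task = line[len("- [ ] "):]
            return f"{user_request}\n\nCurrent task: {task}"
    return user_request
```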

@ShadowJonathan @foone app defined markdown-based (executive) memory cache  

but like, for a recursive predictive statistical model

@foone I mean was there any doubt?
@foone wait, source? where? what?

@freya they fucked up an NPM package and leaked all the source (well, the code, the models aren't there)

https://www.theregister.com/2026/03/31/anthropic_claude_code_source_code/

Anthropic goes nude, exposes Claude Code source by accident

Oopsy-doodle: Did someone forget to check their build pipeline?

The Register
@foone oh that is fucking fascinating, I love it
@foone I want the models
@freya I'm always saying this too, I just don't mean the AI things
@foone silly girl
@freya officially! I am pro-silliness! I am professionally silly!
@foone you are, it's one of the hottest things about you!
@foone I definitely mean the AI things, they're cute
@freya @foone I mean let's be real. Autoencoders? Look at those curves!
@foone I feel vindicated for using a grep+vim workflow instead of a fancy IDE for years prior to the LLM devs rediscovering that trick.

@foone Ha! This reminds me: At work today, I needed to analyze data in an Excel file.

I asked our team’s Excel guru. He took a look, told me “the data is crap, go ask an LLM rather than build a pivot table.”

I did. The LLM spat out a Python script to analyze the data.

So I guess it worked - just not the way I thought it would.

@foone wait I thought everyone knew this? Like these tools are just packages of pre-made prompts and loops to reprocess things until it works with a pretty UI...

I guess the code is interesting in that it tells you more exactly how it all works, which is definitely fascinating, but I hope nobody genuinely thought it was something more advanced than scaffolding on top of the main chat models

@foone this is the type of shit a fuckwitted recruiter was saying we need to use out in the field or we don't get hired

@foone

512,000 lines of sophisticated agentic architecture, multi-agent orchestration, custom React renderers, and constrained decoding...

...there's a big regex full of profanity checking if you're angry.

#Anthropic coded that section for me alone when I #vibecode 💀

Good way to save tokens, you gotta admit

#claudecode

@foone I wonder if the mistake was related to the conflict with the Pentagon, but maybe it's just a coincidence...
@foone LLM development is mostly indistinguishable from satire