Reading analysis of the Claude Code leak (not reading the code itself, of course) is evidence towards what I had kind of suspected: the whole thing is a giant magic trick, not only in the straightforward LLMentalist way but also in the sleight-of-hand way of making you think that this pile of regexes and JSON schema validation loops is *actually* the LLM doing LLM things.
Like, you don't need LLMs. The tools that work, that work well, and that have worked well for decades are all there, being called by the chatbots... you could just use those directly, without 500k lines of spaghetti code and markdown files tricking you into thinking that the JSON parser is alive and has feelings.

@xgranade the worst part?

It occurred to me that we can already easily tokenize code, and know if a string of tokens is valid.

So they could just have "start json" and "end json" tokens and not pick invalid tokens in the middle

@astraluma It continues to be incredibly strange to me that llmbros keep limiting their approach to in-band signaling.
@xgranade @astraluma clearly they have learned nothing from people with blueboxes.....
@freya @astraluma Yes, though I might also submit that "clearly they have learned nothing" is true even more generally.
@xgranade @astraluma you're not wrong. and I say this as a girlie who uses LLMs on the regular for accessibility stuff. even I, a girl about as far from an outright no-AI girlie as you can find, think these fucking techbros are incredibly, stunningly fucking useless