RE: https://neuromatch.social/@jonny/116324676116121930

On reading this thread, I think that Anthropic subscribers who use Claude Code have a very strong case for fraud on the part of Anthropic: multiple redundant - and token-expending - API calls are baked in, and the lack of any ability to choose one's own front-end interface makes the inefficient and costly expenditures mandatory, artificially pumping up usage.

I'd like to hear a lawyer's opinion on that matter.

Not to mention there are several things highlighted here which are implemented exactly wrong.

e.g. requiring multiple API calls to generate JSON (instead of having a process that deterministically generates valid JSON - annoying to write, but very possible)
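To sketch what I mean (hypothetical names, not Anthropic's actual code): if the structured fields are already in hand, you serialize them deterministically; `json.dumps` cannot emit invalid JSON, so there is never a reason to re-prompt the model until it happens to produce parseable output.

```python
import json

def build_tool_call(name: str, args: dict) -> str:
    # Deterministic serialization: the output is valid JSON by
    # construction, so no retry loop or "please emit valid JSON"
    # re-prompt is ever needed.
    return json.dumps({"tool": name, "arguments": args}, sort_keys=True)

payload = build_tool_call("read_file", {"path": "notes.txt"})
parsed = json.loads(payload)  # round-trips without error
```

Annoying to plumb through a codebase, yes - but it costs zero extra API calls.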

and having a regex for sentiment analysis (instead of calling the LLM that is advertised as capable of performing sentiment analysis)
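For the avoidance of doubt about why regex "sentiment analysis" is exactly wrong, here is a minimal illustration (made-up word lists, not the leaked code): keyword counting is blind to negation, sarcasm, and any word not in the hard-coded lists.

```python
import re

POSITIVE = re.compile(r"\b(good|great|love|excellent)\b", re.I)
NEGATIVE = re.compile(r"\b(bad|hate|terrible|awful)\b", re.I)

def regex_sentiment(text: str) -> str:
    # Counts keyword hits; cannot see negation, sarcasm, or
    # anything outside the fixed word lists.
    pos = len(POSITIVE.findall(text))
    neg = len(NEGATIVE.findall(text))
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

regex_sentiment("I love this")     # "positive"
regex_sentiment("not bad at all")  # "negative" - negation inverts the meaning, the regex can't see it
```

So they pay for a model advertised as capable of this task, then do the task badly without it.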

This entire thing is ass-backwards.

And that's even before the "system prompt" backdoor they put in there.

They are signaling a backdoor inline, in the prompt itself.

This is flat-out fucking disgusting.

These 'strong protections' that Anthropic advertises appear to be implemented as begging the LLM to do the correct thing, in-line.
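The difference is easy to show. A hypothetical sketch (my names, not Anthropic's code): the first "protection" is a sentence the model may or may not obey; the second is enforced outside the model and holds no matter what the model outputs.

```python
import pathlib

# "Protection" as begging: an instruction whose enforcement depends
# entirely on the model choosing to comply.
guard = "IMPORTANT: never read files outside the workspace."
def build_prompt(user_input: str) -> str:
    return guard + "\n" + user_input

# Actual control: a deterministic check the model cannot talk its way past.
ALLOWED = pathlib.Path("/workspace").resolve()

def read_file(path: str) -> str:
    p = pathlib.Path(path).resolve()
    if not p.is_relative_to(ALLOWED):
        raise PermissionError(f"{p} is outside the sandbox")
    return p.read_text()
```

Only the second one is security; the first is stage directions.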

So this is not security at all.

This is what I would describe as "kayfabe" and it is entirely unsuited for any kind of production load.

@munin "prompt engineering" is begging a very large algebraic operation to maintain its character.

We get a lot of "ignorance in charge" in the tech industry but this is an area where even the experts have no idea what the technology they're "securing" even is.

@patcharcana

Yeah so about that.

I do not think these are experts who have tried to comprehend. I do not think there is any expertise on display at all here.

I think these are people who have deskilled themselves entirely and no longer use their brains to think.

I think the assertion that "nobody knows" how this works is false - I think there are plenty of people who do know, and that we are being ignored because we are identifying this as -not working correctly- and are thus "naysayers".

@munin @patcharcana we had someone in our mentions last week trying to tell us that we simply must admit that it "works", otherwise we'll never get people to agree not to use it. Then this code leaks, showing that it does not, in fact, work, except by active deception, and the vindication is hollow.