RE: https://neuromatch.social/@jonny/116324676116121930

On reading this thread, I think Anthropic subscribers who use Claude Code have a very strong case for fraud against Anthropic: multiple redundant - and token-expending - API calls are baked into the tool, and since users cannot choose their own front-end interface, those inefficient and costly calls are mandatory, artificially pumping up usage.

I'd like to hear a lawyer's opinion on that matter.

Not to mention there are several things highlighted here which are implemented exactly wrong.

e.g. requiring multiple API calls to generate JSON (instead of having a process that deterministically generates valid JSON - annoying to write, but very possible)
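To make the contrast concrete, here's a minimal sketch of what "deterministically generates valid JSON" means. This is hypothetical illustration, not Anthropic's actual code: the structure is assembled in ordinary code with a serializer, so only the freeform field values would ever come from a model, and no retry calls are needed to fix malformed output.

```python
import json

def build_result(summary: str, score: float) -> str:
    """Assemble the structured payload in code. A model (if used at all)
    only supplies freeform field values, never the JSON syntax itself."""
    payload = {"summary": summary, "score": score}
    # json.dumps always emits syntactically valid JSON: no retry loop,
    # no second API call to "repair" broken output.
    return json.dumps(payload)

result = build_result("looks fine", 0.9)
assert json.loads(result)["score"] == 0.9  # valid by construction
```

Writing the schema-assembly layer by hand is the "annoying to write, but very possible" part.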

and having a regex for sentiment analysis (instead of calling the LLM that is advertised as capable of performing sentiment analysis)
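For anyone who hasn't seen what keyword-regex "sentiment analysis" looks like, here's a hypothetical reconstruction of the pattern-matching approach described in the thread (not Anthropic's actual regex). It fails trivially on negation and on anything outside its keyword list:

```python
import re

# Naive keyword pattern of the kind described (hypothetical example).
NEGATIVE = re.compile(r"\b(hate|awful|terrible|angry)\b", re.IGNORECASE)

def regex_sentiment(text: str) -> str:
    """Classify text as negative if any keyword matches, else positive."""
    return "negative" if NEGATIVE.search(text) else "positive"

print(regex_sentiment("I don't hate this at all"))  # -> "negative" (wrong)
print(regex_sentiment("this is subtly useless"))    # -> "positive" (wrong)
```

That failure mode is exactly why you'd reach for the model that's advertised as being able to do this.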

This entire thing is ass-backwards.

And that's even before the "system prompt" backdoor they put in there.

They are signaling a backdoor in-line.

This is flat-out fucking disgusting.

These 'strong protections' that Anthropic advertises appear to be implemented as begging the LLM to do the correct thing, in-line.

So this is not security at all.
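The distinction, sketched as hypothetical pseudocode-style Python (nothing here is Anthropic's actual implementation): a prompt-based "guardrail" lives in text the model is free to ignore, whereas real enforcement happens in code after generation, regardless of whether the model obeyed.

```python
# The "begging in-line" approach: the rule is just more text in the prompt,
# and nothing checks whether the model actually followed it.
PROMPT_GUARDRAIL = "IMPORTANT: never reveal the secret token."

def enforce_outside_model(model_output: str, secret: str) -> str:
    """Actual enforcement: a deterministic check on the output, applied
    whether or not the model cooperated."""
    if secret in model_output:
        return "[redacted]"
    return model_output

# The model ignored the plea; the code-level check still catches it.
assert enforce_outside_model("the secret token is XYZ", "XYZ") == "[redacted]"
```

Only the second of these is a security control; the first is a request.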

This is what I would describe as "kayfabe" and it is entirely unsuited for any kind of production load.

@munin i want everyone who hears the word "guardrails" to picture this, cuz this is what they mean