RE: https://neuromatch.social/@jonny/116324676116121930

On reading this thread, I think that Anthropic subscribers who use Claude Code have a very strong case for fraud on the part of Anthropic, given there are multiple redundant - and token-expending - calls to the API baked into this, combined with a lack of ability to choose one's own front-end interface, thus mandating the inefficient and costly expenditures, artificially pumping up usage.

I'd like to hear a lawyer's opinion on that matter.

Not to mention there are several things highlighted here which are implemented exactly wrong.

e.g. requiring multiple API calls to generate JSON (instead of having a process that deterministically generates valid JSON - annoying to write, but very possible)

and having a regex for sentiment analysis (instead of calling the LLM that is advertised as capable of performing sentiment analysis)

This entire thing is ass-backwards.

And that's even before the "system prompt" backdoor they put in there.

They are inline signaling a backdoor.

This is flat-out fucking disgusting.

These 'strong protections' that Anthropic advertises appear to be implemented as begging the LLM to do the correct thing, in-line.

So this is not security at all.

This is what I would describe as "kayfabe" and it is entirely unsuited for any kind of production load.

This is what is taking up billions of dollars and causing the firing of thousands of people to replace them?

Fucking disgusting.

@munin yep! the entire thing is built upon fraud and deception! and i am screaming
@munin i blame a lovely combination of humanity's tendency to anthropomorphize whenever possible, and engineering rigor in software having outright gone into the fucking _toilet_. it's a fucking scam and we're falling for it as a profession

@pikhq

I think it's flat-out fucking psychosis.

@munin i think that's mean to people who suffer psychosis

@pikhq

Yes, I think it's very cruel to them that these people have chosen to implement a gaslighting machine that they then use as a drug to generate psychosis in themselves and others, and it creates a world much more hostile to those who have to deal with that kind of challenge who aren't interested in this.

@munin "prompt engineering" is begging a very large algebriaic operation to maintain its character.

We get a lot of "ignorance in charge" in the tech industry but this is an area where even the experts have no idea what the technology they're "securing" even is.

@patcharcana

Yeah so about that.

I do not think these are experts who have tried to comprehend. I do not think there is any expertise on display at all here.

I think these are people who have deskilled themselves entirely and no longer use their brains to think.

I think the assertion that "nobody knows" how this works is false - I think there are plenty of people who do know, and that we are being ignored because we are identifying this as -not working correctly- and are thus "naysayers"

@munin No, you're actuially entirely correct.

The experts on these technologies aren't working at companies like Anthropic because they actually do understand these systems and what they can and cannot be expected to be useful for.

The pushers, on the other hand, are part of the Cult of Roko's Basalisk and think that throwing enough math at an LLM will give the damn thing a theory of mind.

Which is why they're "securing" it through prompting; they actually think a thing is present which can follow instructions.

@munin It's not even that this is "not working correctly" it is "you cannot ask a forklift to turn screws, that's not what his machine does"
@munin @patcharcana we had someone in our mentions last week trying to tell us that we simply must admit that it "works", otherwise we'll never get people to agree to not use it, and then this code leaks, showing that it does not, in fact, work, except by active deception, and the vindication is hollow.
@munin i want everyone who hears the word "guardrails" to picture this, cuz this is what they mean