this is nearly as dumb as elon’s “show me your 5 best lines of code” shit while he was, er, downsizing twitter. What are you supposed to do when a code review flags some bad code? Fondle your prompts repeatedly until that part gets fixed? Sounds like a solution that will often be much less efficient than making edits by hand. Maybe they just don’t do code reviews now, that would be cool.
It seems clear that every single company that makes money off of software is or will soon be in a race to the bottom on software quality and that’s just amazing, i love it for everyone. I choose to laugh rather than cry.
i don’t know if it’s a convention even in the “serious” AI research industry to use anthropomorphic jargon, but it drives me up a wall to see shit like this:
17.6 Theory of Mind Limitations in Agentic Systems
Agentic systems don’t have “theory of mind”; they cannot infer mental states. They are probabilistic word generators operating within non-deterministic frameworks. They can have a system prompt that tells them to generate text that appears to be an interpretation of another entity’s “mental state”, and they can even be directed to refer to it as context, but that is not theory of mind, and the entity they’re generating text about may not have a mind at all.
I wish there were some way to stop these dorks from stealing the imprimatur of cognitive science.
the answer is definitely not to sanction and attempt to destabilize them on behalf of your two equally evil regional client states. The corollary to that is that you cannot produce the necessary conditions for future prosperity by destroying their economy in a way that harms the average person more than the elites.
And that’s assuming that we (the west) even want them to prosper or care about their future as a nation. Perhaps in an alternate universe, that would be the motivation for regime change but that is not and has never been the case.
you’re right, Amodei and others have published a lot of criti-hype, shameless hype, and delusional anthropomorphization in the past few years. While i was looking for other examples of their bullshit I found this article which was published just after my comment, with a nice sneer:
x.com/MrinankSharma/status/2020881722003583421
Anthropic safety research lead quits the field entirely to write poetry, with a somewhat cryptic note. Trying to read between the lines here, the most likely explanation (IMO) is that he developed a guilty conscience and Anthropic doesn’t actually give a shit about any of the human harms created by the technology. Ah well, nevertheless they persisted.
They’re not cutting jobs because their financials are in the shitter
Their financials are not even in the shitter! Except insofar as their increased AI capex isn’t delivering returns, so they need to massage the balance sheet by doing rolling layoffs to stop the feral hogs from clamoring and stampeding on the next quarterly earnings call.