For the 1,000th time: "AI" does not have agency and cannot think and cannot act.

Chatbots cannot "evade safeguards" or "destroy things" or "ignore instructions".

They do literally only one thing and one thing only: string tokens together based on statistics of proximity of tokens in a data corpus.

If you attribute any deeper meaning to this, it's a sign of psychosis and you should absolutely never use chatbots, possibly you should even touch grass.

@thomasfuchs

EDIT: Lol, Thomas Fuchs blocked me for this post. These hater types are just drags on science and technology. It's toddler-level flailing because they aren't getting their way.

LLMs definitely can act. They can query the internet, and they can use tools I give them via MCP (the Model Context Protocol).
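That tool-use loop is simple at its core: the model emits either a final answer or a tool call, the host runs the tool, and the result gets fed back in. Here's a minimal sketch of that loop; the `fake_model` stub and the `web_search` tool are hypothetical stand-ins, not a real LLM or the actual MCP wire format:

```python
# Sketch of the host-side loop an MCP-style client runs around a model.
# The "model" is a stub that first requests a tool, then answers.

TOOLS = {
    # Stand-in for a real search tool the host would expose
    "web_search": lambda query: f"top result for {query!r}",
}

def fake_model(messages):
    """Stub standing in for an LLM: one tool call, then a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "web_search", "args": {"query": "MCP protocol"}}
    return {"answer": "Summary based on: " + messages[-1]["content"]}

def run(prompt):
    messages = [{"role": "user", "content": prompt}]
    while True:
        out = fake_model(messages)
        if "answer" in out:
            return out["answer"]
        # Host executes the requested tool and appends the result
        result = TOOLS[out["tool"]](**out["args"])
        messages.append({"role": "tool", "content": result})

print(run("what is MCP?"))
```

The point of the sketch: "acting" here is just the host executing whatever call the model writes, which is exactly why it's more than pure token statistics in practice.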

Do they think? I'm not particularly sure that many humans think, either. Or rather, many humans respond by rote to the same stimuli (i.e., parse tokens and respond programmatically).

Recent "neuroanatomy" studies of LLMs (mechanistic interpretability work) are starting to show how these models operate internally. What's surprising is that the early circuits decode language into an internal representation, and the final circuits re-encode that representation back into language. And there appears to be a shared internal "universal grammar" (thanks, Chomsky) across many LLM models.
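One common way interpretability work reads those intermediate circuits is the "logit lens": project each layer's hidden state through the model's unembedding matrix and see which token it currently favors. A toy pure-Python sketch of that projection, with random weights standing in for a real model (everything here is illustrative, not real model data):

```python
import random

random.seed(0)
vocab, d_model, n_layers = 8, 4, 3

# Toy unembedding matrix (d_model x vocab) and one hidden state per layer
W_U = [[random.gauss(0, 1) for _ in range(vocab)] for _ in range(d_model)]
hiddens = [[random.gauss(0, 1) for _ in range(d_model)] for _ in range(n_layers)]

def decode(h):
    """Logit-lens step: project a hidden state to vocab logits, take the argmax."""
    logits = [sum(h[i] * W_U[i][t] for i in range(d_model)) for t in range(vocab)]
    return max(range(vocab), key=lambda t: logits[t])

# Decode every intermediate layer, not just the final one
for layer, h in enumerate(hiddens):
    print(f"layer {layer}: argmax token id = {decode(h)}")
```

In a real model the same projection shows earlier layers converging toward the eventual output token, which is what the decode-then-re-encode picture is based on.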

https://dnhkng.github.io/posts/rys/

LLM Neuroanatomy: How I Topped the LLM Leaderboard Without Changing a Single Weight


David Noel Ng