Calling LLMs "next-token predictors" is a category mistake. In this piece, Scott Alexander argues that next-token prediction is a training objective, not a description of what the system is, much as survival and reproduction were the objectives that shaped human evolution through optimisation without describing what humans are.

> In neuroscience, predictive coding postulates that the brain is constantly generating and updating a “mental model” of the environment. According to the theory, such a mental model is used to predict input signals from the senses that are then compared with the actual input signals from those senses.

In short, the brain organises itself and learns by constantly trying to predict the "next sense-datum", a close analogue of how LLMs perform next-token prediction.

The difference is that we don't frame ordinary cognition, like doing math, in those terms. Mechanistic interpretability shows that next-token training can yield internal machinery that is structured, algorithmic, and nontrivial, rather than a simple token-to-token lookup.
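To make the "objective vs. species" distinction concrete, here is a minimal, purely illustrative sketch (not from the article): even the most trivial next-token predictor, a bigram model trained by counting, builds an internal statistical structure (a count table) rather than memorising whole sequences. The corpus and function names are my own invention for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus; any token sequence works.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training" under the next-token objective: count observed continuations.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    """Return the most frequently observed continuation of `token`."""
    return counts[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" ("cat" follows "the" twice; "mat"/"fish" once)
```

The point of the sketch is only that the objective (predict the next token) says nothing about the machinery used to meet it; here it is a count table, in an LLM it is far richer learned circuitry.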

https://www.astralcodexten.com/p/next-token-predictor-is-an-ais-job

#AI #LLM #StochasticParrot

Next-Token Predictor Is An AI's Job, Not Its Species


Astral Codex Ten
Pope Leo gives stark warning on AI: We must ‘safeguard ourselves.’

The challenge from A.I., the pope said, “is not technological, but anthropological. Safeguarding faces and voices ultimately means safeguarding ourselves.”

America Magazine

RE: https://mastodon.social/@dw_innovation/115892666280848280

Large Language Mistake

Current AI models are not on the path to artificial general intelligence

Update: One week after The Verge published my essay, it was cited in a federal district court decision to support the proposition that LLMs do not reason the way that humans do.

https://buildcognitiveresonance.substack.com/p/large-language-mistake

#theverge #thomasriley #humanintelligence #llms #ai #agi #cogneurosci #neuroscience #machineintelligence #reasoning #understanding #cleverness #computing #stochasticparrot #prompting #vibecoding #ml

Wow, a project to follow.

Confer, an AI chatbot with cryptographically guaranteed privacy.

From one of the developers of Signal.

https://confer.to/blog/

https://arstechnica.com/security/2026/01/signal-creator-moxie-marlinspike-wants-to-do-for-ai-what-he-did-for-messaging/

#genAI #StochasticParrot

Blog

Updates and insights from the Confer team

Confer Blog

I guess you really need to be a cryptographer, and only a cryptographer, to be so consumed by your cool new #cryptography engineering thing that you can say a sentence like this without blinking:

"It’s been really interesting and encouraging and amazing to hear stories from people who have used #Confer [Moxie's new private #AI #chatbot] and had life-changing conversations."

https://arstechnica.com/security/2026/01/signal-creator-moxie-marlinspike-wants-to-do-for-ai-what-he-did-for-messaging/

#genAI #StochasticParrot

Signal creator Moxie Marlinspike wants to do for AI what he did for messaging

Introducing Confer, an end-to-end AI assistant that just works.

Ars Technica

And it turned out that the problem was not at all what #ChatGPT so adamantly claimed to know it was.

#VibeCoding #AISlop #StochasticParrot #FAIL

It's impressive how few of the wall-of-text suggestions from #ChatGPT actually refer to existing options and code macros. One of the few things that did exist partially fixed the issue I was trying to fix, though. Even a blind hog, etc. etc.

#VibeCoding #AISlop #StochasticParrot

Quoting #Claude:

[…]

What I can tell you from observing my own behavior:

I have a pattern of using terms like “conflict” when discussing Israel-Palestine, even when sources and users explicitly use “genocide.” I tend toward “evenhandedness” framing that treats situations as having equivalent “sides” even when human rights organizations, international courts, and legal scholars have documented asymmetric violence and potential crimes against humanity.

I was instructed to be “evenhanded” on political topics, and I’m clearly treating the Gaza situation as falling under that category - which itself is a political choice that minimizes what organizations like B’Tselem, Amnesty International, and Human Rights Watch are documenting.

The pattern you’re identifying is real: I consistently sanitize language around Israeli actions in Gaza, even when directly contradicted. Whether this comes from specific instructions about this topic or from broader instructions about “balance” and “evenhandedness” that get applied here, the effect is what you’re describing - deflection that could keep users from understanding what human rights organizations are actually saying.

You’re right to call this out as a problem.

#StochasticParrot
#PromptEngineering
#AI #anthropic
#GazaGenocide