I wrote about AI again. On some level I don't know why I do this to myself

https://jenniferplusplus.com/what-is-a-token/

Actually, I wrote most of it months ago, for work. But, it was well received, so I put it on my blog. This is the more generic version.

The short version is that AI is not magic. It's a real phenomenon with real behavior and tradeoffs. I'm deeply tired of *****ALL***** the tradeoffs being handwaved away. And so much imagination fills in for the actual behavior. So I tried to describe how it's built, because that informs how it works, which informs what it actually does. And to be clear, it does things. It's not useless. But that's not the same as being useful, or worthwhile.

Anyway, I already put ~4k words on this in the article, so I'll shut up and let it speak for itself.


Jennifer++

@jenniferplusplus
Good article! I have a similar stance, as an academic computer scientist. I do have one minor quibble, though: transformers do not generate tokens. They generate a probability distribution over potential next tokens, and depend on a "second system" to collapse that distribution into a concrete token. I've demonstrated this here:

https://web.archive.org/web/20260115002103/https://freethoughtblogs.com/reprobate/2025/12/19/aside-lets-bisect-an-llm/

(Forgive the Wayback link, the blog is having technical issues.)
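That two-system split can be sketched in a few lines. This is a toy illustration, not anything from the linked post: the vocabulary and logit values are made up, softmax stands in for the transformer's final layer, and greedy/weighted choice stands in for the sampler.

```python
import math
import random

# Made-up logits a model might assign to a tiny toy vocabulary.
logits = {"cat": 2.0, "dog": 1.5, "the": 0.1}

# System 1: turn logits into a probability distribution (softmax).
exps = {tok: math.exp(v) for tok, v in logits.items()}
total = sum(exps.values())
probs = {tok: e / total for tok, e in exps.items()}

# System 2: collapse the distribution into one concrete token.
greedy = max(probs, key=probs.get)  # deterministic: pick the most likely token
sampled = random.choices(list(probs), weights=list(probs.values()))[0]  # stochastic
```

The point being: the model itself stops at `probs`. Whether you get `greedy` or `sampled` behavior is a separate decision made outside the transformer.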

@hjhornbeck thanks for that clarification! I want this article to be succinct and approachable, but also technically accurate. So I'll see if I can work in that detail.