Laypeople are genuinely surprised when I explain that LLMs, or what's called "AI" these days, are just brute-force statistics.
> OpenAI's large language models process text using tokens, which are common sequences of characters found in a set of text. The models learn to understand the statistical relationships between these tokens, and excel at producing the next token in a sequence of tokens.
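The idea in that quote can be illustrated with a toy bigram model: count how often each token follows each other token in a corpus, then "generate" by emitting the statistically most likely next token. This is a deliberately minimal sketch (a real LLM uses a neural network over trillions of tokens, not raw counts), and the corpus and function names here are made up for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus; tokens here are just whitespace-separated words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Bigram statistics: how often each token follows each other token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    # Return the most frequent continuation seen in the corpus.
    counts = following[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" most often above
```

An LLM does essentially this, except the "counts" are compressed into billions of learned parameters that generalize to sequences never seen verbatim in training.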
