Mapping the Mind of a Large Language Model
https://lemmy.world/post/15661170

I often see people with an outdated understanding of modern LLMs. This is
probably the best interpretability research to date, from the leading
interpretability research team. It's worth a read if you want a peek behind the
curtain on modern models.
This is a really good science communication article. It describes their work in clear terms (finding structures that correspond to abstract concepts, seeing when they are activated, and how strengthening and weakening them modifies outputs) and goes into the implications; a toy sketch of the steering idea is below. I'm probably going to save this link as a rebuttal for people who claim LLMs just predict the next word and have no concepts embedded in them.
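To make the "strengthening and weakening" part concrete, here is a minimal numpy sketch of the general idea of steering along a feature direction. Everything here is made up for illustration (the real work trains sparse autoencoders on a production model's activations to find the features); this just shows the mechanic of measuring a feature's activation and nudging it up or down:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16

# Hypothetical hidden-state vector from one layer of a toy model.
hidden = rng.normal(size=d_model)

# Hypothetical unit-norm direction standing in for a learned "feature"
# (in the article's terms, something like the Golden Gate Bridge feature).
feature = rng.normal(size=d_model)
feature /= np.linalg.norm(feature)

def feature_activation(h, f):
    """How strongly the feature fires on this hidden state (dot product)."""
    return float(h @ f)

def steer(h, f, coeff):
    """Strengthen (coeff > 0) or weaken (coeff < 0) the feature by
    adding a scaled copy of its direction to the hidden state."""
    return h + coeff * f

print("before:  ", feature_activation(hidden, feature))
print("boosted: ", feature_activation(steer(hidden, feature, 5.0), feature))
print("dampened:", feature_activation(steer(hidden, feature, -5.0), feature))
```

In the actual research the steered hidden state flows through the rest of the model, which is what changes the outputs.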
I doubt anyone is saying that LLMs calculate the next word based solely on the previous sequence. It's still statistics, regardless of complexity.
Yes, but people forget that our brains, and therefore our minds, are also “simply” statistics, albeit very complex.
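For what "statistics" means mechanically here: at each step the model turns the previous sequence into scores over its vocabulary, converts them to a probability distribution, and samples from it. A minimal sketch; the vocabulary and logits below are invented for illustration, not taken from any real model:

```python
import numpy as np

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    z = logits - logits.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical scores a model might assign to a tiny vocabulary
# for the next token, given everything seen so far.
vocab = ["bridge", "gate", "park", "fog"]
logits = np.array([2.1, 1.3, 0.2, -0.5])

probs = softmax(logits)
next_token = np.random.default_rng(0).choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```

The interpretability findings sit on top of this picture: the interesting question is what internal structure produces those scores, not whether sampling happens at the end.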