For whoever looks for adjacent info in pithy comments: see this thread, where a little detail on the poverty-of-stimulus argument is shared, plus some musings on how that could inform how we grok what LLMs can and cannot do, and how metaphors frame how we think about machines.
#LLMs are brute-forcing their way through absurd amounts of data to generate, for any given input, an autocomplete output that approximates what a human might say instead. They lack a few distinct properties of human cognition, language among them, that more brute force alone cannot compensate for, because they can only ever internalize and compute *intra*textual context. Humans, incidentally, need far less input(!) to learn language, probably because they can contextualize across domains. 🧵
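To make "*intra*textual" concrete, here's a toy sketch: a word-bigram autocomplete, a deliberately crude stand-in for an LLM (the corpus and function names are made up for illustration). All it can ever learn is which tokens follow which *inside the text it was fed*; nothing outside the text exists for it.

```python
from collections import Counter, defaultdict

# Toy stand-in for an LLM: a word-bigram "autocomplete".
# It can only learn statistics within the text it sees;
# no grounding in perception, action, or cross-domain context.

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which (intra-textual context only).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(prompt_word, steps=4):
    """Greedily pick the most frequent continuation, step by step."""
    out = [prompt_word]
    for _ in range(steps):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))  # -> "the cat sat on the"
```

Real models condition on far longer contexts with far richer statistics, but the boundary is the same: the only context they can compute over is the text itself.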