Stubsack: weekly thread for sneers not worth an entire post, week ending 1st March 2026
Good news! We’ve solved consciousness.
Kolmogorov complexity:
So we should see some proper definitions and basic results on Kolmogorov complexity, like in modern papers, right? We should at least see a Kt or a pKt thrown in there, right?
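(For anyone wondering what's being asked for: these are the standard textbook definitions, with U a fixed universal Turing machine — a sketch of the notation, not anything from the paper being sneered at. pKt is the probabilistic variant of Levin's Kt.)

```latex
% Plain Kolmogorov complexity: length of the shortest program producing x
K(x) = \min_{p} \{\, |p| : U(p) = x \,\}

% Levin's time-bounded variant Kt: penalize running time logarithmically
Kt(x) = \min_{p,\,t} \{\, |p| + \log t : U(p) \text{ outputs } x \text{ within } t \text{ steps} \,\}
```

K itself is uncomputable (no algorithm computes it for all x), which is the actual "basic result" you'd expect a paper invoking it to state.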
Understanding IS compression — extracting structure from data. Optimal compression is uncomputable. Understanding is therefore always provisional, always improvable, never verifiably complete. This kills “stochastic parrot” from a second independent direction: if LLMs were memorizing rather than understanding, they could not generalize to inputs not in their training data. But they do. Generalization to novel input IS compression — extracting structure, not regurgitating sequences.
Fuck!
@lagrangeinterpolator can you understand without generalizing? arguably yes. can you generalize without understanding? also, arguably yes. how else can a mathematical theory of physics give “right answers” in novel physical circumstances?
you could say, I suppose, that it's the humans doing the calculations who are doing the generalization, but one can do the calculations without understanding them.