@HalvarFlake


I do math. And was once asked by R. Morris Sr.: "For whom?"

Accidental two-time founder. Mathematician by education. Infosec luminary (has-been?).

Current events:

Today's experiment: What will a deep neural network learn if I train it on the very sparse set of points on the left, sampled from the shape on the right?

Will it recognize a circle-ish shape? What shape will it learn?
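The post does not say what architecture or labels are used, so here is a minimal sketch of one plausible setup (assumptions: the "shape" is a unit circle, points carry inside/outside labels, and the network is a small tanh MLP trained by full-batch gradient descent):

```python
import numpy as np

rng = np.random.default_rng(0)

# Very sparse training set: a handful of 2D points labeled by the true shape.
X = rng.uniform(-2, 2, size=(40, 2))
y = (np.linalg.norm(X, axis=1) < 1.0).astype(float)  # 1 = inside the circle

# Tiny two-layer MLP; all hyperparameters here are illustrative guesses.
W1 = rng.normal(0, 1.0, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1.0, (16, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid output probability
    g = (p - y[:, None]) / len(X)           # gradient of BCE loss wrt logits
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (1 - h**2)              # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def predict(pts):
    z = np.tanh(pts @ W1 + b1) @ W2 + b2
    return (1 / (1 + np.exp(-z))).ravel()

# To see what decision boundary the network actually learned, evaluate it
# on a dense grid and compare against the true circle.
grid = np.stack(np.meshgrid(np.linspace(-2, 2, 50),
                            np.linspace(-2, 2, 50)), -1).reshape(-1, 2)
learned_inside = predict(grid) > 0.5
```

Plotting `learned_inside` over the grid is the interesting part: with this few samples, the learned region need not look circular at all, which is exactly the question the experiment poses.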

Incompetent even at incompetence.
Today's insanity:
I feel like my early-2023 math hunch about NN training and Kolmogorov complexity will age pretty well; some folks have published a paper that confirms the hunch in many ways: https://arxiv.org/abs/2412.09810
The Complexity Dynamics of Grokking

We investigate the phenomenon of generalization through the lens of compression. In particular, we study the complexity dynamics of neural networks to explain grokking, where networks suddenly transition from memorizing to generalizing solutions long after over-fitting the training data. To this end we introduce a new measure of intrinsic complexity for neural networks based on the theory of Kolmogorov complexity. Tracking this metric throughout network training, we find a consistent pattern in training dynamics, consisting of a rise and fall in complexity. We demonstrate that this corresponds to memorization followed by generalization. Based on insights from rate-distortion theory and the minimum description length principle, we lay out a principled approach to lossy compression of neural networks, and connect our complexity measure to explicit generalization bounds. Based on a careful analysis of information capacity in neural networks, we propose a new regularization method which encourages networks towards low-rank representations by penalizing their spectral entropy, and find that our regularizer outperforms baselines in total compression of the dataset.

arXiv.org
No words.
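The abstract's "spectral entropy" penalty can be sketched as follows. This is one plausible reading, not the paper's exact formulation: treat a weight matrix's normalized singular values as a probability distribution and penalize its Shannon entropy, which is small for low-rank (spiky-spectrum) matrices and large for flat spectra.

```python
import numpy as np

def spectral_entropy(W, eps=1e-12):
    """Shannon entropy of the normalized singular-value spectrum of W.

    Low-rank matrices concentrate mass in few singular values and thus
    have low entropy; adding this as a loss term would push weights
    toward low-rank representations (hedged sketch of the idea only).
    """
    s = np.linalg.svd(W, compute_uv=False)  # singular values, descending
    p = s / (s.sum() + eps)                 # normalize to a distribution
    return float(-(p * np.log(p + eps)).sum())

# A rank-1 matrix has (near-)zero spectral entropy...
low_rank = np.outer(np.arange(1, 5), np.arange(1, 5)).astype(float)
# ...while the identity (flat spectrum) attains the maximum, log(n).
flat = np.eye(4)
```

In a training loop this quantity, summed over the layers' weight matrices and scaled by a coefficient, would be added to the task loss, analogous to how weight decay is applied.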