🚀 "Oh wow, look at arXiv with its tiny-brained 'TinyLoRA' trying to solve world problems with a whopping 13 parameters! 😂 Meanwhile, the rest of us are learning to reason with at least 14 parameters and a cup of coffee. ☕ #CuttingEdgeTech #ArxivComedy"
https://arxiv.org/abs/2602.04118 #TinyLoRA #MachineLearning #Humor #HackerNews #ngated
Learning to Reason in 13 Parameters

Recent research has shown that language models can learn to reason, often via reinforcement learning. Some work even trains low-rank parameterizations for reasoning, but conventional LoRA cannot scale below the model dimension: even a rank-1 adapter must train one full input-dimension vector and one full output-dimension vector. We question whether even rank-1 LoRA is necessary for learning to reason and propose TinyLoRA, a method for shrinking low-rank adapters to as few as one trainable parameter. With our new parameterization, we train the 8B-parameter variant of Qwen2.5 to 91% accuracy on GSM8K with only 13 trained parameters in bf16 (26 total bytes). This trend holds more broadly: we recover 90% of the performance improvements while training 1000x fewer parameters across a suite of harder learning-to-reason benchmarks such as AIME, AMC, and MATH500. Notably, we only achieve such strong performance with RL: models trained with SFT require 100-1000x larger updates to reach the same performance.
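For intuition, here is a minimal, hedged sketch of how an adapter could drop below rank-1 LoRA's parameter floor. Rank-1 LoRA trains A with d_out entries and B with d_in entries, i.e., d_in + d_out parameters per adapted matrix; the sketch below instead freezes k random rank-1 directions and trains only k scalar coefficients. The class name TinyLoRALinear, the random-direction design, and the scaling are illustrative assumptions, not the paper's verified parameterization.

```python
# Hedged sketch, not the paper's verified method: one way to push a LoRA-style
# update below the rank-1 floor of d_in + d_out trainable parameters. Freeze k
# random rank-1 directions u_i v_i^T and train only the k scalar coefficients.
import torch
import torch.nn as nn

class TinyLoRALinear(nn.Module):  # hypothetical name, for illustration only
    """Frozen linear layer plus a k-parameter update:
    delta_W = sum_i s_i * u_i v_i^T, with u_i, v_i frozen random vectors."""

    def __init__(self, base: nn.Linear, k: int = 13, seed: int = 0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)          # backbone stays frozen
        d_out, d_in = base.weight.shape
        g = torch.Generator().manual_seed(seed)
        # Frozen random directions (buffers, so they receive no gradients).
        self.register_buffer("U", torch.randn(k, d_out, generator=g) / d_out**0.5)
        self.register_buffer("V", torch.randn(k, d_in, generator=g) / d_in**0.5)
        # The ONLY trainable parameters: k scalars. Zero init means the
        # adapted layer starts out identical to the base layer, as in LoRA.
        self.s = nn.Parameter(torch.zeros(k))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x @ delta_W^T computed without materializing delta_W:
        # project x onto each v_i, weight by s_i, expand along u_i.
        coeffs = torch.einsum("...i,ki->...k", x, self.V) * self.s
        delta = torch.einsum("...k,kd->...d", coeffs, self.U)
        return self.base(x) + delta

# Only 13 scalars receive gradients; in bf16 that is 13 * 2 = 26 bytes.
layer = TinyLoRALinear(nn.Linear(4096, 4096), k=13)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 13
```

Under this reading, k = 13 reproduces the abstract's 13-parameter / 26-byte figure (bf16 stores 2 bytes per value), and k = 1 gives the one-parameter extreme; the paper's actual construction may distribute or share these scalars differently.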
