At high task complexity, Large Reasoning Models (LRMs) fail to apply explicit algorithms and reason inconsistently across puzzles.

https://machinelearning.apple.com/research/illusion-of-thinking

#lrm #llm #towersofhanoi

The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity

Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes…

Apple Machine Learning Research
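
The paper's headline finding concerns puzzles like the Tower of Hanoi, where a short deterministic procedure is known yet LRM accuracy still collapses as disk count grows. As a concrete illustration of what "explicit algorithm" means here, below is a minimal sketch of the classic recursive solution in Python; the function name and move format are illustrative, not taken from the paper.

```python
def hanoi(n, source, target, spare, moves):
    """Classic recursive Tower of Hanoi: move n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the top n-1 disks out of the way
    moves.append((n, source, target))           # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)  # restack the n-1 disks on top of it

moves = []
hanoi(10, "A", "C", "B", moves)
# The optimal solution always takes 2^n - 1 moves, so the trace length
# grows exponentially with disk count -- the complexity axis the paper varies.
assert len(moves) == 2**10 - 1
print(f"10 disks solved in {len(moves)} moves; first move: {moves[0]}")
```

Notably, the paper reports that supplying such a procedure verbatim in the prompt does not prevent the collapse: the models fail to execute it consistently past a complexity threshold.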

A knockout blow for LLMs?

LLM “reasoning” is so cooked they turned my name into a verb

Marcus on AI