Do reasoning models actually “think”?
A new Apple research paper shows that Large Reasoning Models (LRMs) such as Claude and DeepSeek can reason, but only up to a point. As complexity rises, accuracy collapses, and, surprisingly, the models think less as problems get harder.
📄 https://machinelearning.apple.com/research/illusion-of-thinking
#AI #Reasoning #LLM #GenerativeAI #CognitiveLimits #Claude3 #DeepSeek
The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity

Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes…

Apple Machine Learning Research

"You being overwhelmed is the goal. The flood of 200+ #executiveorders in Trump's first days exemplifies #NaomiKlein's '#shockdoctrine' - using #chaos and #crisis to push through #radicalchanges while people are too disoriented to effectively resist. This isn't just #politicsasusual - it's a strategic #exploitation of #cognitivelimits."

Read more of Jennifer Walter on:
https://www.threads.net/@itsjenniferwalter/post/DFIu3Q2q-5P
