Apple’s new paper on the limits of AI reasoning is the most grounded research I’ve read in a while. Past a certain complexity threshold, LRMs don’t degrade gracefully. They collapse. If you’re building on top of LRMs or LLMs, give this a read and let me know what you think. https://leotsem.com/blog/the-illusion-of-thinking/