Apple did the research: LLMs cannot do formal reasoning. Results change by as much as 10% when something as basic as the names in a problem changes.

https://garymarcus.substack.com/p/llms-dont-do-formal-reasoning-and

LLMs don’t do formal reasoning - and that is a HUGE problem

Important new study from Apple

Marcus on AI
Not too surprising, but a heavyweight like Apple standing behind this should shift sentiment further and further
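To make the "names change, results change" finding concrete, here is a minimal sketch of the kind of surface-level perturbation the study describes: the same arithmetic wrapped in different names and numbers. The template and name list below are hypothetical illustrations, not the study's actual benchmark items.

```python
import random

# Hypothetical GSM-style problem template (assumed for illustration;
# the study's real benchmark templates are not reproduced here).
TEMPLATE = "{name} has {a} apples and buys {b} more. How many apples does {name} have?"

NAMES = ["Sophie", "Omar", "Mei", "Daniel"]  # assumed example names

def make_variant(seed):
    """Generate one surface-level variant: same arithmetic, different names/numbers."""
    rng = random.Random(seed)
    a, b = rng.randint(2, 20), rng.randint(2, 20)
    question = TEMPLATE.format(name=rng.choice(NAMES), a=a, b=b)
    # The correct answer is unchanged by the renaming; only the surface form moves.
    return question, a + b

question, answer = make_variant(0)
```

A system doing formal reasoning should score identically across all such variants; the study's point is that measured accuracy shifts with these cosmetic changes.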

@ShadowJonathan

This is not surprising at all and I don't understand why anyone had to waste time and resources on demonstrating a self-evident fact that was known before the research even started.

@lulu @ShadowJonathan yeah, I don't know much about LLMs really, but I thought this was fundamental. There's no reasoning and no understanding of meaning; they just make sentences by stringing words together based on probabilities.
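The "based on probabilities" intuition can be illustrated with a toy bigram sampler: pick each next word in proportion to how often it followed the previous one. This is a drastic simplification for illustration only; real LLMs use learned neural networks over tokens, not raw co-occurrence counts.

```python
import random
from collections import defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then sample the next word proportionally to those counts.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length, seed=0):
    """Emit up to `length` words by repeatedly sampling a plausible successor."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = following.get(words[-1])
        if not options:
            break  # no observed successor; stop
        words.append(rng.choice(options))  # proportional to empirical frequency
    return " ".join(words)

sentence = generate("the", 6)
```

The output is always locally plausible word-by-word, yet nothing in the procedure checks whether the sentence as a whole is true or logically coherent, which is the commenter's point.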

@levi @lulu @ShadowJonathan

Yes, the problems posed to the #LLMs in this study are mathematical in nature or logic problems – why are systems that are trained to produce text expected to produce any meaningful results here?

@feliz @levi @lulu because that's how they're being sold: as a replacement for humans
@ShadowJonathan @feliz @lulu Are they, though? I don't know enough about LLMs and their implementations to refute you, but it seems as though anyone who has ever entered a prompt and read the response could see that there is no reasoning.

@ShadowJonathan @feliz @levi

Yes. That's why so many people called it out for the lie that it is. LLMs are nothing like their marketing. They are not even AI. It's nothing but autocorrect powered by stolen intellectual property and enough energy to destroy our planet. So yeah, very advanced autocorrect (for an unacceptably high price) but not even slightly resembling AI.