No shit: “this decline [in the capability of LLMs to perform logical reasoning through multiple clauses] is due to the fact that current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data.”

Paper from Apple engineers showing that genAI models don’t actually understand what they read; they just regurgitate patterns from their training data, like a puppy wanting to please its owner:
https://arxiv.org/pdf/2410.05229

@CatherineFlick maybe LLMs need to be combined with reasoning systems https://en.wikipedia.org/wiki/Reasoning_system

@mpeg2tom or how about we just give up, because it won't ever happen in a sustainable way? Even the guard rails are essentially brute-forced, and new ones will always be needed. This hodge-podge of different systems, slapped together in the hope that it might suddenly start fulfilling the promise of AI, will just cause even more trouble than it currently does!