Reasoning skills of large language models are often overestimated

MIT CSAIL researchers developed an evaluation framework that tests large language models on counterfactual variants of familiar tasks. They found that LLMs can recite memorized answers but struggle to reason abstractly when a task deviates from the conditions they were trained on.

MIT News | Massachusetts Institute of Technology