Apple did the research: LLMs cannot do formal reasoning. Results change by as much as 10% when something as basic as the names in a problem changes.
https://garymarcus.substack.com/p/llms-dont-do-formal-reasoning-and
@dalias @ShadowJonathan @anderspuck no, never reliable enough. This stems from how they are designed.
For example, they are incapable of asking for help when they don’t understand a passage; instead, they write down something hallucinated*.
*) I’m aware that this is not a good term for it, but I don’t have a better one handy before coffee.