My latest Substack post:

Can Large Language Models Reason?

https://aiguide.substack.com/p/can-large-language-models-reason

What should we believe about the reasoning abilities of today’s large language models? As the headlines above illustrate, there’s a debate raging over whether these enormous pre-trained neural networks have achieved humanlike reasoning abilities, or whether their skills are in fact “a mirage.”

AI: A Guide for Thinking Humans
@melaniemitchell DRH told us that intelligent machines will be bad at math. (I guess I should note that the inference doesn’t go in the other direction.)
@melaniemitchell Betteridge's Law working overtime on this one.
@melaniemitchell More and more, I believe all of the terms we use regularly for AI fall on a continuum (understand, reason, judge, consciousness, concept). What would be interesting is to pose the question: which of these terms are absolutes, and on what grounds? I feel like that is where you are headed here anyway. It also seems like there is an unavoidable need to trade off structure and function, as in the works of Antonio Lieto (his structure/function ratio). All of these issues compound.
@melaniemitchell are you familiar with this type of effort at reasoning? More trickeration claimed to improve reasoning. https://reasoning-tokens.ghost.io/reasoning-tokens/
Self-Reasoning Tokens, teaching models to think ahead.

What is the mathematical formulation of reasoning? How can we make LLMs like ChatGPT think before they speak? And how can we bake that into the model so it can learn to think in a self-supervised way, without having to "explain it step by step" (or another famous prompt)?

Reasoning Tokens
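
The linked post only hints at the mechanism, so here is a minimal sketch of one plausible reading of "reasoning tokens" (the class name, dimensions, and training setup below are my own assumptions, not the method from the post): a few learnable embeddings are inserted between the prompt and the answer so the model gets extra computation steps before it must emit output, and the loss is applied only to the answer tokens, so the reasoning positions are shaped indirectly rather than through an explicit "explain it step by step" supervision signal.

```python
# Hypothetical sketch of "reasoning tokens" -- not the implementation from the linked post.
import torch
import torch.nn as nn

class ReasoningTokenWrapper(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, n_reasoning=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Learnable "reasoning" embeddings inserted between prompt and answer.
        self.reasoning = nn.Parameter(torch.randn(n_reasoning, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)
        self.n_reasoning = n_reasoning

    def forward(self, prompt_ids, answer_ids):
        b = prompt_ids.size(0)
        reasoning = self.reasoning.unsqueeze(0).expand(b, -1, -1)
        # Sequence layout: [prompt tokens | reasoning tokens | answer tokens]
        x = torch.cat(
            [self.embed(prompt_ids), reasoning, self.embed(answer_ids)], dim=1
        )
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        h = self.encoder(x, mask=mask)
        logits = self.lm_head(h)
        # Predict each answer token from the position just before it; the loss
        # covers only the answer span, so the reasoning slots are trained
        # indirectly through whatever helps the answer prediction.
        start = prompt_ids.size(1) + self.n_reasoning
        pred = logits[:, start - 1 : start - 1 + answer_ids.size(1), :]
        return nn.functional.cross_entropy(
            pred.reshape(-1, pred.size(-1)), answer_ids.reshape(-1)
        )

# Usage: one training step on toy data.
model = ReasoningTokenWrapper()
prompt = torch.randint(0, 1000, (2, 8))
answer = torch.randint(0, 1000, (2, 5))
loss = model(prompt, answer)
loss.backward()
```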
@melaniemitchell I teach math at the university level. Occasionally I meet students who are able to think at an abstract level. But I would be happy enough if they can apply patterns (methods) for solving a group of problems to new instances in that group. Perhaps, to a very large extent, human intelligence just means pattern-matching.