What are your favourite projects investigating if / how different large #foundational #models do #logical #reasoning? Or how their "next token prediction mechanism" emulates reasoning.

Still trying to make my mind up whether the internal dynamics of these models are worth investigating.

Very curious to hear people's thoughts!

#NLProc #LLM #genAI #AI #ML #logic @cogsci @cognition @neuroscience #neuroscience #cognition

Linking @taylorwwebb's paper on "Emergent Analogical Reasoning in Large Language Models" here as it might be of interest to people seeing the thread:

https://arxiv.org/abs/2212.09196

Emergent Analogical Reasoning in Large Language Models

The recent advent of large language models has reinvigorated debate over whether human cognitive capacities might emerge in such generic models given sufficient training data. Of particular interest is the ability of these models to reason about novel problems zero-shot, without any direct training. In human cognition, this capacity is closely tied to an ability to reason by analogy. Here, we performed a direct comparison between human reasoners and a large language model (the text-davinci-003 variant of GPT-3) on a range of analogical tasks, including a novel text-based matrix reasoning task closely modeled on Raven's Progressive Matrices. We found that GPT-3 displayed a surprisingly strong capacity for abstract pattern induction, matching or even surpassing human capabilities in most settings. Our results indicate that large language models such as GPT-3 have acquired an emergent ability to find zero-shot solutions to a broad range of analogy problems.


@achterbrain This one is my favorite because the task is well defined: logical reasoning = automata theory, AKA the classical CS definition of algorithms.

https://arxiv.org/abs/2210.10749

Anything delivered in the form of natural language seems like a confounded experimental setup to me. And I haven’t seen a convincing set of controls yet.

Transformers Learn Shortcuts to Automata

Algorithmic reasoning requires capabilities which are most naturally understood through recurrent models of computation, like the Turing machine. However, Transformer models, while lacking recurrence, are able to perform such reasoning using far fewer layers than the number of reasoning steps. This raises the question: what solutions are learned by these shallow and non-recurrent models? We find that a low-depth Transformer can represent the computations of any finite-state automaton (thus, any bounded-memory algorithm), by hierarchically reparameterizing its recurrent dynamics. Our theoretical results characterize shortcut solutions, whereby a Transformer with $o(T)$ layers can exactly replicate the computation of an automaton on an input sequence of length $T$. We find that polynomial-sized $O(\log T)$-depth solutions always exist; furthermore, $O(1)$-depth simulators are surprisingly common, and can be understood using tools from Krohn-Rhodes theory and circuit complexity. Empirically, we perform synthetic experiments by training Transformers to simulate a wide variety of automata, and show that shortcut solutions can be learned via standard training. We further investigate the brittleness of these solutions and propose potential mitigations.


@darsnack This is such a good reference, thanks! Will have to look into this in more detail but this seems like a super valuable analysis. I was wondering about shortcuts as well, especially with regard to reasoning steps. Glad that somebody already had a good look at this.

What do you think about the "Out-of-distribution shortcomings of shortcut solutions" that the authors discuss as a route towards more generalisable solutions? Perhaps not a perfect control, but it seems valuable to me.

@achterbrain I think there are two views you could take:

(a) algorithms as we understand them are fundamentally recurrent, so no feedforward model will generalize (unless you over-constrain the task like they do in the paper)

(b) the scaling is so good (10^6 iterations in 6 layers for any FSM!) that we can get away with loop unrolling + shortcuts

For any model (or brain circuit) we need to disambiguate between (a) and (b). So I think you are correct that this gives us a reasonable control.
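For anyone skimming the thread: the core trick behind view (b) can be sketched in a few lines. This is my own toy illustration, not the paper's construction: because transition functions of a finite automaton compose associatively, the T sequential steps can be collapsed by a tree reduction, which takes O(log T) rounds if each round runs in parallel (one round ≈ one layer).

```python
# Toy sketch: simulate T steps of a finite automaton in O(log T)
# parallel rounds via tree reduction of transition maps.

def compose(f, g):
    """Compose two transition maps: apply f first, then g."""
    return {s: g[f[s]] for s in f}

def run_logdepth(transitions, symbols, start):
    """transitions: symbol -> {state: state}. Tree-reduce the per-symbol
    maps (log-depth if each pass runs in parallel), then apply the
    collapsed map to the start state."""
    maps = [transitions[c] for c in symbols]
    while len(maps) > 1:  # each pass = one parallel "layer"
        maps = [compose(maps[i], maps[i + 1]) if i + 1 < len(maps) else maps[i]
                for i in range(0, len(maps), 2)]
    return maps[0][start]

# Example automaton: parity of '1's (state flips on '1', stays on '0').
PARITY = {'0': {0: 0, 1: 1}, '1': {0: 1, 1: 0}}
```

This only shows the generic O(log T) prefix-composition shortcut; the paper's O(1)-depth results rely on much deeper structure (Krohn-Rhodes theory), which this sketch doesn't capture.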

@achterbrain another form of generalization that is probably out of scope for the paper is related to memory. The key difference going from finite automata => push-down automata => Turing machines is more sophisticated forms of memory (no memory => stack memory => infinite tape memory). Being able to simulate recurrent control + memory with good scaling in a feedforward network would be a very surprising result!
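A concrete illustration of that memory gap (my example, not from the paper): balanced brackets can be nested to unbounded depth, so no finite automaton recognizes the language, but a stack (push-down memory) handles it trivially.

```python
# Balanced-bracket recognition: needs a stack, not just finite state,
# because the nesting depth to track is unbounded.

def balanced(s):
    stack = []
    for c in s:
        if c == '(':
            stack.append(c)     # push on open
        elif c == ')':
            if not stack:
                return False    # close with nothing open
            stack.pop()         # match the most recent open
    return not stack            # accept iff every open was closed
```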
@achterbrain In fact, since reading this paper, I’ve been in neuro talks about cognitive maps where this exact control was the first question that popped into my mind!
@darsnack This all makes sense. One thing I have been thinking about, related to your point (a): is this still true for diffusion models, where inputs are denoised in multiple steps, which allows recurrent computation to come back into the picture? It seems to me this should bring us closer to the "computational depth" achieved by recurrent networks.
@achterbrain if you consider a vanilla RNN as an MLP that’s recurrently applied, then diffusion models are U-Nets recurrently applied. So I think that there’s an under-explored space that tries to balance between spatial and temporal integration.
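A toy sketch of that structural analogy (hypothetical names, nothing from either paper): both an RNN and a diffusion sampler are one shared step function applied repeatedly; what differs is the step and what the iteration converges to.

```python
# Shared-step view: RNN = cell applied once per token;
# diffusion sampler = denoising step applied once per timestep.

def iterate(step, state, inputs):
    """Apply the same step function once per input, threading state."""
    for x in inputs:
        state = step(state, x)
    return state

# "RNN": a tiny running-sum cell over a token sequence.
rnn_cell = lambda h, x: h + x

# "Diffusion-like" step: contract toward a fixed point (here, 0),
# mimicking iterative denoising converging to an attractor.
denoise = lambda h, _: 0.5 * h
```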

@achterbrain Though I think it’s important to distinguish between the different types of recurrence here. What’s in the linked paper and referenced in automata theory is discrete recurrence (i.e. loops). They describe repeated logical steps for N repetitions.

Diffusion models are more like a continuous process converging to a fixed point. In practice we run it with discrete steps, but this is an approximation.

@achterbrain These two types operate on different timescales and data types, so they have different underlying dynamics. Mechanistically we might have feedback connections to implement both, but I don't think the feedback will be operating in the same way.

@achterbrain @cogsci @cognition @neuroscience

Maybe of interest:

"How" in the sense of compared to human reasoning: https://arxiv.org/abs/2207.07051

A hypothesis on the "how" on a technical level: https://arxiv.org/abs/2212.07677

Language models show human-like content effects on reasoning tasks

Reasoning is a key ability for an intelligent system. Large language models (LMs) achieve above-chance performance on abstract reasoning tasks, but exhibit many imperfections. However, human abstract reasoning is also imperfect. For example, human reasoning is affected by our real-world knowledge and beliefs, and shows notable "content effects"; humans reason more reliably when the semantic content of a problem supports the correct logical inferences. These content-entangled reasoning patterns play a central role in debates about the fundamental nature of human intelligence. Here, we investigate whether language models, whose prior expectations capture some aspects of human knowledge, similarly mix content into their answers to logical problems. We explored this question across three logical reasoning tasks: natural language inference, judging the logical validity of syllogisms, and the Wason selection task. We evaluate state of the art large language models, as well as humans, and find that the language models reflect many of the same patterns observed in humans across these tasks: like humans, models answer more accurately when the semantic content of a task supports the logical inferences. These parallels are reflected both in answer patterns, and in lower-level features like the relationship between model answer distributions and human response times. Our findings have implications for understanding both these cognitive effects in humans, and the factors that contribute to language model performance.


@lpag
Super interesting refs, thanks so much!

About the second ref: do you know to what degree re-feeding interim predictions of the model back to the model would be equivalent to adding additional layers of self-attention? So whether a 5-layer setup with the input re-fed once is equivalent to a 10-layer setup? Just thinking of this in the context of iterative diffusion processes.

@achterbrain Re-feeding would be equivalent to parameter sharing across layers. I’m sure you can find something on that topic.

My intuition says that different layers are supposed to learn different things, i.e. a low layer that gets the raw word embeddings might not work as well when you supply more high-level representations. Sure, you can train them this way, but then you would compromise model capacity.
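For what it's worth, the equivalence is easy to see in a toy setting (my own 1-d sketch, hypothetical parameters): re-feeding the output of a k-layer stack once computes exactly the same function as a 2k-layer network whose upper half ties its parameters to the lower half.

```python
import math

def layer(x, w, b):
    """A 1-d "layer" for illustration: tanh(w*x + b)."""
    return math.tanh(w * x + b)

def stack(x, params):
    """Apply a sequence of layers, one (w, b) pair each."""
    for w, b in params:
        x = layer(x, w, b)
    return x

params = [(0.7, 0.1), (-1.2, 0.3)]        # a 2-"layer" stack
x0 = 0.5
refed = stack(stack(x0, params), params)   # 2 layers, output re-fed once
tied = stack(x0, params + params)          # 4 layers with tied parameters
```

The two computations are identical operation for operation, which is why re-feeding is exactly parameter sharing; an untied 4-layer stack would have twice the free parameters and can learn layer-specific features.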

@lpag Yes, you are right, that would be parameter sharing. Generally speaking, I have seen quite a bit of work where this works without performance loss. Here, though, they state that they don't tie parameters across layers, so they might rely on layer-specific setups, although I don't see them actively discussing it.
@achterbrain Yes, I guess it's a simplification they make. They have some discussion at the end of page 8, though, and a comparison in Fig. 3. They call their parameter-shared variant 'recurrent'.