What are your favourite projects investigating if / how different large #foundational #models do #logical #reasoning ? Or how their "next token prediction mechanism" emulates reasoning.

Still trying to make my mind up whether the internal dynamics of these models are worth investigating.

Very curious to hear people's thoughts!

#NLProc #LLM #genAI #AI #ML #logic @cogsci @cognition @neuroscience #neuroscience #cognition

@achterbrain @cogsci @cognition @neuroscience

Maybe of interest:

"How" in the sense of compared to human reasoning: https://arxiv.org/abs/2207.07051

A hypothesis on the "how" on a technical level: https://arxiv.org/abs/2212.07677

Language models show human-like content effects on reasoning tasks

Reasoning is a key ability for an intelligent system. Large language models (LMs) achieve above-chance performance on abstract reasoning tasks, but exhibit many imperfections. However, human abstract reasoning is also imperfect. For example, human reasoning is affected by our real-world knowledge and beliefs, and shows notable "content effects"; humans reason more reliably when the semantic content of a problem supports the correct logical inferences. These content-entangled reasoning patterns play a central role in debates about the fundamental nature of human intelligence. Here, we investigate whether language models (whose prior expectations capture some aspects of human knowledge) similarly mix content into their answers to logical problems. We explored this question across three logical reasoning tasks: natural language inference, judging the logical validity of syllogisms, and the Wason selection task. We evaluate state-of-the-art large language models, as well as humans, and find that the language models reflect many of the same patterns observed in humans across these tasks: like humans, models answer more accurately when the semantic content of a task supports the logical inferences. These parallels are reflected both in answer patterns, and in lower-level features like the relationship between model answer distributions and human response times. Our findings have implications for understanding both these cognitive effects in humans, and the factors that contribute to language model performance.


@lpag
Super interesting refs, thanks so much!

About the second ref: do you know to what degree re-feeding the model's interim predictions back into the model would be equivalent to adding additional layers of self-attention? That is, would a 5-layer setup with the input re-fed once be equivalent to a 10-layer setup? Just thinking of this in the context of iterative diffusion processes.

@achterbrain Re-feeding would be equivalent to parameter sharing across layers. I’m sure you can find something on that topic.

My intuition says that different layers are supposed to learn different things, i.e. a low layer that gets the raw word embeddings might not work as well when you feed it higher-level representations. You could train them that way from the start, but then you would compromise model capacity.
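The equivalence can be sketched with a toy model (a hypothetical numpy stand-in for transformer blocks, not the paper's actual architecture): re-feeding the output of a 5-layer stack through the same stack computes exactly what a 10-layer stack would compute if its second five layers shared parameters with the first five, whereas an untied 10-layer stack is a different function.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(dim=8):
    # toy stand-in for a transformer block: affine map + nonlinearity
    W = rng.standard_normal((dim, dim)) * 0.1
    b = rng.standard_normal(dim) * 0.1
    return lambda x: np.tanh(x @ W + b)

def forward(x, stack):
    for layer in stack:
        x = layer(x)
    return x

# a "5-layer model"
layers = [make_layer() for _ in range(5)]

x = rng.standard_normal((3, 8))  # batch of 3 toy token representations

# re-feeding the output once through the same 5 layers...
refed = forward(forward(x, layers), layers)

# ...is exactly a 10-layer stack whose halves share (tie) parameters
tied_10 = layers + layers
assert np.allclose(refed, forward(x, tied_10))

# an untied 10-layer stack has fresh parameters in its second half,
# so it computes a different function (with twice the capacity)
untied_10 = layers + [make_layer() for _ in range(5)]
assert not np.allclose(refed, forward(x, untied_10))
```

The same reasoning carries over to real transformer layers: re-feeding buys depth without new parameters, which is why it matches the parameter-shared ("recurrent") variant rather than a genuinely deeper model.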

@lpag Yes, you are right, that would be parameter sharing. Generally speaking, I've seen quite a lot of work suggesting this often works without performance loss. Here, though, they state that they don't tie parameters across layers, so they might rely on layer-specific setups, although I don't see them actively discussing it.
@achterbrain Yes, I guess it's a simplification they make. They have some discussion at the end of page 8, though, and a comparison in Fig. 3. They call their parameter-shared variant "recurrent".