The unreasonable effectiveness of pattern matching

We report on an astonishing ability of large language models (LLMs) to make sense of "Jabberwocky" language in which most or all content words have been randomly replaced by nonsense strings, e.g., translating "He dwushed a ghanc zawk" to "He dragged a spare chair". This result addresses ongoing controversies over how best to think of what LLMs are doing: are they language mimics, databases, blurry versions of the Web? The ability of LLMs to recover meaning from structural patterns speaks to the unreasonable effectiveness of pattern matching. Pattern matching is not an alternative to "real" intelligence, but rather a key ingredient.

arXiv.org

@RefurioAnachro You are right that some responses to Wigner seem to have gone further than his original argument.

I think that such responses may have been at least *partly* due to the use of mathematical aesthetic judgement as one way of evaluating fundamental physical theories that go beyond what can currently be empirically tested.

People before Wigner had drawn attention to the uncanny way in which ever-more-abstract mathematics found applications in physics (e.g. Einstein in a 1921 lecture on ‘Geometry and Experience’, Whitehead in his 1925 book ‘Science and the Modern World’). But, although it was perhaps only incidental to his argument, Wigner seems to have been the first person to draw attention to the mystery that mathematics pursued for at least partly *aesthetic* reasons turned out to be useful in physics, and several of the people who responded to him used aesthetic arguments.

Shameless advertisement: for more on this, see chs 24+25 of my #OpenAccess book ‘Form & Number: A History of Mathematical Beauty’ [https://archive.org/details/cain_formandnumber_ebook_large] :-)

#Wigner #UnreasonableEffectiveness #PhilSci

Form & Number: A History of Mathematical Beauty (Ebook, large format) : Alan J. Cain

This book offers a history of beauty in mathematics and of the study of beauty in mathematics. Its intention is to examine the historical development of the...
🥱 "The Unreasonable Effectiveness of the Fourier Transform" – where we dive into the thrilling realm of slide PDFs and expired patents. 🎉 Spoiler alert: it's just as riveting as it sounds! 📈🔧
https://joshuawise.com/resources/ofdm/ #UnreasonableEffectiveness #FourierTransform #SlidePDFs #ExpiredPatents #DataScience #HackerNews #ngated
The Unreasonable Effectiveness of the Fourier Transform
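The linked slides are about OFDM, where the FFT does the heavy lifting: data symbols are placed on orthogonal subcarriers by an inverse FFT, transmitted as a time-domain waveform, and recovered by a forward FFT. A minimal round-trip sketch (the subcarrier count and QPSK mapping are illustrative assumptions, not taken from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n_subcarriers = 64

# Random QPSK symbols, one per subcarrier.
bits = rng.integers(0, 2, size=(n_subcarriers, 2))
symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)

# Transmitter: the inverse FFT turns frequency-domain symbols
# into a time-domain OFDM waveform.
tx = np.fft.ifft(symbols)

# Receiver (ideal, noiseless channel): the forward FFT recovers
# the symbols exactly, because the subcarriers are orthogonal.
rx = np.fft.fft(tx)

assert np.allclose(rx, symbols)
```

Real OFDM adds a cyclic prefix and per-subcarrier equalization, but the FFT pair above is the core of the scheme.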

Beyond Semantics: The Unreasonable Effectiveness of Reasonless Intermediate Tokens

Recent impressive results from large reasoning models have been interpreted as a triumph of Chain of Thought (CoT), and especially of the process of training on CoTs sampled from base LLMs in order to help find new reasoning patterns. In this paper, we critically examine that interpretation by investigating how the semantics of intermediate tokens (often anthropomorphized as "thoughts" or reasoning traces, and claimed to display behaviors like backtracking and self-verification) actually influence model performance. We train transformer models on formally verifiable reasoning traces and solutions, constraining both intermediate steps and final outputs to align with those of a formal solver (in our case, A* search). By constructing a formal interpreter of the semantics of our problems and intended algorithm, we systematically evaluate not only solution accuracy but also the correctness of intermediate traces, allowing us to test whether the latter causally influences the former. We find that, despite significant improvements over the solution-only baseline, models trained on entirely correct traces still produce invalid reasoning traces when arriving at correct solutions. To further show that trace accuracy is only loosely connected to solution accuracy, we then train models on noisy, corrupted traces that have no relation to the specific problem each is paired with, and find that performance not only remains largely consistent with models trained on correct data, but in some cases improves upon it and generalizes more robustly on out-of-distribution tasks. These results challenge the assumption that intermediate tokens or "Chains of Thought" induce predictable reasoning behaviors and caution against anthropomorphizing such outputs or over-interpreting them (despite their mostly correct forms) as evidence of human-like or algorithmic behaviors in language models.

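For context on the abstract above: A* is the formal solver the authors align their training traces with. A toy version on a 4-connected grid (the grid and unit step costs here are illustrative assumptions, not the paper's setup):

```python
import heapq

def astar(grid, start, goal):
    """A* search on a grid of strings where '#' is a wall.
    Returns the shortest path as a list of (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]   # (f = g + h, g, cell, path)
    best_g = {start: 0}
    while frontier:
        _, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (pos[0] + dr, pos[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] != '#'
                    and g + 1 < best_g.get(nxt, float('inf'))):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # goal unreachable

grid = ["....",
        ".##.",
        "...."]
path = astar(grid, (0, 0), (2, 3))
```

The paper's point is that a transformer can be trained to emit traces shaped like this search's expansions, yet the correctness of those traces turns out to be only loosely coupled to the correctness of the final answer.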
The Unreasonable Effectiveness of an LLM Agent Loop with Tool Use

How a simple loop enables powerful AI assistants

sketch.dev
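The loop the post describes is small enough to sketch: call the model, execute any tool it requests, feed the result back as a message, and repeat until the model answers. The model stub and tool registry below are hypothetical stand-ins, not sketch.dev's actual API:

```python
def fake_model(messages):
    """Stand-in for an LLM call: requests a tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"answer": f"The sum is {messages[-1]['content']}"}

# Tool registry: name -> callable. Real agents expose shell, file, and
# web tools here; a single arithmetic tool keeps the sketch testable.
TOOLS = {"add": lambda a, b: a + b}

def agent_loop(user_prompt, model=fake_model, max_steps=10):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = model(messages)
        if "answer" in reply:                 # model is done; return its answer
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])   # run the requested tool
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not finish within max_steps")

print(agent_loop("What is 2 + 3?"))   # prints "The sum is 5"
```

Everything an LLM-based assistant does beyond this (structured tool schemas, retries, context management) is elaboration on the same loop.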

"The Unreasonable Ineffectiveness of" ...

1. Mathematics
2. Deep learning
3. Machine learning
4. Macroeconomics
5. Security
6. Philosophy
7. Mathematics in economics
8. Fisherian tests in biology
9. Mathematics in the natural sciences
10. Considering things harmful
11. Factoring
12. Macroeconomics in political science
13. Mathematics education
14. Mathematics in cognitive science
15. Mathematics in biological science
16. Philosophy in physics

#UnreasonableEffectiveness
#UnreasonableIneffectiveness