In case you missed it: https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cycles.pdf

Anyone know of a good farming 101 tutorial?

@civodul this looks like the exact same use case as the OpenAI press release about "a new discovery in physics". Which, AFAICT, seems similar to the protein-folding problem that LLMs proved effective at tackling.

Basically, all of these problems appear to reduce to searching the language space for a general solution pattern that can be validated against a bunch of specific results. LLMs seem quite effective at converging on valid solutions compared to simple brute forcing.
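To make the "search, then validate" framing concrete, here is a minimal toy sketch: enumerate candidate general patterns and keep the first one that reproduces every known specific result. The candidate expressions and test values are invented for illustration; an LLM's advantage would be proposing far better candidates than blind enumeration.

```python
# Toy "generate and validate" loop: candidate general patterns are
# checked against known specific results, and the first candidate
# that reproduces all of them is accepted.  All names and data here
# are hypothetical, purely to illustrate the shape of the search.

candidates = [
    ("n + n",          lambda n: n + n),
    ("n * n",          lambda n: n * n),
    ("n*(n+1)//2",     lambda n: n * (n + 1) // 2),  # triangular numbers
]

# Known specific results the general solution must match.
known = {1: 1, 2: 3, 3: 6, 4: 10}

def search(candidates, known):
    """Return the name of the first candidate valid on all known results."""
    for name, fn in candidates:
        if all(fn(n) == want for n, want in known.items()):
            return name
    return None

print(search(candidates, known))  # -> n*(n+1)//2
```

The validator is cheap and exact; the hard part is generating plausible candidates, which is where an LLM beats brute force.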

@siddhesh_p @civodul right, LLMs are known to be good at pattern matching: https://arxiv.org/abs/2601.11432
The unreasonable effectiveness of pattern matching

We report on an astonishing ability of large language models (LLMs) to make sense of "Jabberwocky" language in which most or all content words have been randomly replaced by nonsense strings, e.g., translating "He dwushed a ghanc zawk" to "He dragged a spare chair". This result addresses ongoing controversies regarding how to best think of what LLMs are doing: are they a language mimic, a database, a blurry version of the Web? The ability of LLMs to recover meaning from structural patterns speaks to the unreasonable effectiveness of pattern-matching. Pattern-matching is not an alternative to "real" intelligence, but rather a key ingredient.

@siddhesh_p @civodul The important point every enthusiast is missing is that Claude provided a "code solution" that worked for "a limited number of use cases" (my words, not Knuth's). Knuth wrote the actual formal solution himself, based on the piece of code, which he rewrote. This is very different from saying "Claude solved a mathematical problem". It also took 31 iterations before producing a valid solution, with undisclosed computing resources.