@fchollet do you think it's possible that LLMs can be successful at the ARC challenge?
@guyreading From first principles, this seems highly unlikely. This is circumstantially confirmed by the fact that, when ARC problems are translated to sequences, the largest LLMs out there (not just GPT-3, but much larger ones as well) don't work at all.
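(For context: "translating ARC problems to sequences" means flattening each task's colored grids into text an LLM can consume. The exact serialization used in those experiments isn't stated; the sketch below is one plausible, hypothetical scheme, with digits 0-9 standing for grid colors.)

```python
# Hypothetical sketch: flatten an ARC task (demonstration pairs plus a
# test input) into a single text prompt for an LLM. This serialization
# format is an assumption for illustration, not the method referenced
# in the tweet above.

def grid_to_seq(grid):
    """Render a 2D color grid as one line of digits per row."""
    return "\n".join("".join(str(c) for c in row) for row in grid)

def task_to_prompt(train_pairs, test_input):
    """Concatenate demonstration pairs and the test input into a prompt."""
    parts = []
    for i, (inp, out) in enumerate(train_pairs):
        parts.append(f"Example {i + 1} input:\n{grid_to_seq(inp)}")
        parts.append(f"Example {i + 1} output:\n{grid_to_seq(out)}")
    parts.append(f"Test input:\n{grid_to_seq(test_input)}")
    parts.append("Test output:")
    return "\n".join(parts)

# Toy task: the hidden rule swaps colors 1 and 2.
train = [([[1, 2], [2, 1]], [[2, 1], [1, 2]])]
prompt = task_to_prompt(train, [[1, 1], [2, 2]])
print(prompt)
```

The model would then be asked to continue the prompt with the test output grid; the claim above is that even very large LLMs fail at this completion.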
@fchollet appreciate your perspective! Yeah, bottom-up it's just predicting the next word, so no reasoning in that (?). Top-down, looking at some of the coding problems it can solve, and its explanations of multi-step processes, it at least gives the illusion of reasoning. What is an ocean but a multitude of drops (what is a sentence explaining a process but a sequence of individual words)? I guess success at ARC could be a way of validating whether it really does reason somehow, or whether it's just a Chinese Room...