The main reason I expect roughly the current paradigm (multimodal large language models with tools and an inner monologue, trained via self-supervised learning and reinforcement learning) to go all the way isn’t just the fact that it’s currently achieving tasks that, only a few years ago, basically everyone familiar with the field would have called “AI-complete” (i.e. if you can do them, you’ve basically solved AI). 1/n
It’s the fact that, so far as I’m aware, the connectionist school of cognitive science that birthed this paradigm is the only game in town theoretically, in the sense that it has a fairly complete, convincing, mechanistic story of how our own minds work: it explains how world models form, how systematicity and combinatorial reasoning arise from grounded, pattern-based thought, how the brain achieves its immense plasticity, and how 2/n

it does credit assignment for reward signals and learns to achieve goals. I’m not aware of any other paradigm that can do all of that, theoretically.

To be clear, I’m of course not claiming that we know (or could ever hope to know) every detail of how we perform every cognitive task (and I’m also setting aside the hard philosophical problems of mind, such as qualia, in this discussion). The situation is analogous to that of the theory of evolution: 3/n

We do not and cannot know every contingent fact of evolutionary history or every biochemical detail of every cellular process. But evolution gives us an in-principle explanation of how all of that came to be, makes copious well-tested predictions, and offers a framework for discovering specific details in specific cases. No alternative framework can claim to do the same. The same seems to hold here. 4/4