One way to understand why #LLM #chatbots are precursors to #AGI is that we have already been very, very good for about a decade at making insanely intelligent #AI models.

They have just been very specialized, as opposed to general. They beat humans at Go, or fold all the known proteins.

We weren't very good at making general AIs. Now, with large deep reinforcement learning models and large language models, we have separately and analogously achieved generality.

LLMs in particular can do almost anything: chat, control robots, make evaluations and decisions, act as autonomous agents. Who cares if they aren't great at those tasks yet? We already know how to make them great; that's a solved problem!

Combine the narrow superintelligence we already know how to make with this somewhat smart general intelligence we've got, do the fusion dance, and what do we get?

AGI.

@tero There will be a flourishing of formal methods and classical software, as LLMs generate specs/proofs/test cases/etc. The reliability problems of generated code seem solvable with current LLMs if you chain calls together and/or fine-tune on particular domains. Combine that with LLM labeling of data to train specialized ML models that are more computationally efficient. That will certainly be superhuman in many varied and important domains, even if it's not "AGI".
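The call-chaining idea above can be sketched as a generate-check-retry loop: one call produces code, the generated test cases are executed against it, and failures are fed back into the next prompt. This is a minimal illustration, not a real implementation; `call_llm` is a hypothetical stand-in for whatever LLM API you actually use (here it is stubbed to return a fixed answer so the loop terminates).

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stub: a real version would query an LLM service.
    # It always "returns" a correct add function for demonstration.
    return "def add(a, b):\n    return a + b"

def generate_with_checks(task: str, test_code: str, max_rounds: int = 3) -> str:
    """Generate code, run the test cases against it, and feed any failure
    back into the next prompt -- the chained-calls loop from the post."""
    prompt = task
    for _ in range(max_rounds):
        candidate = call_llm(prompt)
        namespace: dict = {}
        try:
            exec(candidate, namespace)   # load the generated code
            exec(test_code, namespace)   # run the generated test cases
            return candidate             # all checks passed
        except Exception as err:
            prompt = f"{task}\nPrevious attempt failed with: {err}\nFix it."
    raise RuntimeError("no candidate passed the checks")

code = generate_with_checks("Write add(a, b).", "assert add(2, 3) == 5")
print("def add" in code)
```

The same skeleton generalizes: swap the `exec`-based checks for a proof checker or a domain-specific test harness, and you get the spec/proof/test-case pipeline described above.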