> As long as there is a gap between AI and human learning, we do not have AGI.

Back in the '90s, Scientific American ran an article on AI - I believe this was around the time Deep Blue beat Kasparov at chess.

One AI researcher's quote stood out to me:

"It's silly to say airplanes don't fly because they don't flap their wings the way birds do."

He was saying this with regard to the Turing test, but I think the sentiment is equally valid here. Just because a human can do X and an LLM can't doesn't negate the LLM's "intelligence", any more than an LLM doing a task better than a human negates the human's intelligence.

> As long as there is a gap between AI and human learning, we do not have AGI.

Don't read the statement as a human dunk on LLMs, or even as philosophy.

The gap is important because of its special and devastating economic consequences. When the gap becomes truly zero, all human knowledge work is replaceable. From there, with robots, it's a short step to all work being replaceable.

What's worse, closing the gap is sufficient for this outcome but not even necessary. Just as planes can fly without flapping, the economy can be destroyed without full AGI.

If you’re concerned about the economic impact, then whether a model is AGI or not doesn’t matter. It really is more of a philosophical thing.

There’s no “gap that becomes truly zero” at which point special consequences happen. By the time we achieve AGI, the lesser forms of AI will likely have replaced a lot of human knowledge labor through the exact “brute-force” methods Chollet is trying to factor out (which is why many people are saying that doing so is unproductive).

AGI is like an event horizon: it does mean something, and it is a real point in space, but you don't notice yourself passing through it; the curvature increases smoothly the whole way.