When this "AI" bubble pops, the men pretending they weren't pushing the hype, like "critics" whose position is "AGI is real but LLMs aren't the way," who were in eugenicist and "AI existential risk"🙄 circles, will get specials discussing what they saw coming, when it's the women who actually told you so.

And then it will be rinse and repeat, onto the next grift.

@timnitGebru
Aren't you mixing things?

AGI will be real some day, but LLMs are not the way (LeCun), hence the current trend IS indeed a bubble.

@darkphysics @timnitGebru What makes you say "AGI will be real" instead of "AGI might be real"?

@skaphle @timnitGebru

Just quoting LeCun on X.

My personal view is the same: current LLMs are lacking many human skills.

@darkphysics @timnitGebru Ah okay. I wanted to ask whether there was a deeper argument behind that. Obviously that person is who Timnit addressed, so she sees it different. Two different and opposing arguments need to be checked outside and you can't use one side to say the other is wrong. Does that make sense? Maybe, if you follow what this LeCunn person says, could you rephrase the argument they make? Or what brought you to that same view?

@skaphle @timnitGebru

Good point on the argumentation, but LeCun is the creator of CNNs and has contributed to many other AI architectures; that's why I wanted Timnit to give a more detailed explanation.

https://en.wikipedia.org/wiki/Yann_LeCun

What is his (deeper) view on the subject? I can only guess:
LLMs in their current state are sequence-to-sequence generators: no evolving nature, no "new" knowledge, no ability to set their own goals, forgetful, no continuous learning, etc., etc.

As I said, just guessing.


@darkphysics @timnitGebru Thanks for the clarification. Your question had appeared to me more like a challenge to her conclusion than a request for an explanation.

I understand the limits of LLMs, and that there's probably no AGI coming out of them. I think that's the consensus here. As I understand it, the diverging views are over whether other methods, like CNNs, can lead to AGI. My position is that I don't know, but I'd like to see the reasoning behind either confident claim. It seems to me that Timnit is so sure it's questionable that she considers people who argue for it lying grifters. And it seems to me that you, or LeCun, are so sure that you state there *will* be AGI.

If you don't want to elaborate, that's perfectly fine too.