It’s basically impossible to create consciousness when we don’t even fully understand what consciousness is or how it works.

I’m not saying they’re conscious; not fully understanding what consciousness is precludes saying that. But it equally precludes saying it’s “impossible” that they are conscious.

Consciousness and AGI, however, are two different things. I believe my cat is conscious, but it’s not even close to being intelligent. AGI is, you know, a thing. I’m quite certain this dude’s LLM isn’t AGI because, if nothing else, it’s not “his” LLM. It’s based on a black-box public model he knows nothing about, one that very likely changes frequently on the back end without his knowledge.

Intelligence cannot be reduced to producing speech or complex reasoning, which is why calling LLMs AI was always disingenuous.

Intelligence is an extremely complex, multi-factor phenomenon. Your cat is intelligent; some ML models are very intelligent. But so are certain blobs of fungal rhizome. A cluster of neurons in a petri dish and a few hyper-specific automation scripts can also be intelligent. An LLM can display intelligence, but that doesn’t mean it is conscious, or that it is AGI, or that it can be classified as a person.

Those are all entirely different things.