It’s basically impossible to create consciousness when we don’t even fully understand what consciousness is or how it works.
Well… People fuck around and seem to have been doing so for a while…
Any woman can make a whole new consciousness all by herself, with just a little help from a friend.

I’m not saying they’re conscious; not fully understanding what consciousness is precludes saying that. But it also precludes saying it’s “impossible” they are conscious.

Consciousness and AGI, however, are two different things. I believe my cat is conscious, but it’s not even close to being intelligent. AGI is, you know, a thing. I’m quite certain this dude’s LLM isn’t AGI because, if nothing else, it’s not “his” LLM. It’s based on a black-box public model he knows nothing about, and which very likely changes frequently on the back end without his knowledge.

I bet your cat is more intelligent than some people…

Intelligence is not reducible to producing speech or complex reasoning. That’s why calling LLMs AI was always disingenuous.

Intelligence is an extremely complex, multi-factor phenomenon. Your cat is intelligent; some ML models are very intelligent. But so are certain blobs of fungal rhizome. A cluster of neurons in a petri dish, or a few hyper-specific automation scripts, can also be intelligent. An LLM can display intelligence. But that doesn’t mean it is conscious, or that it is AGI, or that it can be classified as a person.

Those are all entirely different things.

I disagree here. Things can happen by accident. Doubtful, but possible. Certainly nothing I have seen has seemed conscious to me.

I agree, and it’s all a matter of definition. What makes an LLM different from us? To an all-knowing being, are we humans not just deterministic walking machines?

I find it hard to even arrive at a definition of consciousness.

… and this wasn’t made by accident; it was deliberately engineered to develop emergent behavior. Quite a lot of money has been spent hiring a variety of experts to make it do this.

Hasn’t worked. Almost certainly will never work, with this particular kind of network. But we would not have known that just by looking at diagrams and going ‘naaahhh.’

If we don’t understand it, how can we say whether something is or is not conscious?

You don’t need a culinary degree to identify if your cake is burned, or if it was frosted with feces instead of actual frosting.

We’re nowhere near that being a remotely valid concern.

Sure, because we understand cake, and we can construct one from scratch. We know what makes cake cake; we don’t know what makes something conscious.

To be clear, I absolutely believe LLMs do not have consciousness. They are statistical prediction machines.

But then, animals are also just really complex chemical processes. I don’t know what the differentiating factor is.

To be fair to Kent, he’s only the best engineer in the world, not the best philosopher.