Bcachefs creator claims his custom LLM is 'fully conscious'
https://piefed.social/c/linux/p/1815630/bcachefs-creator-claims-his-custom-llm-is-fully-conscious
I'm not saying they're conscious; the fact that we don't even fully understand what consciousness is precludes saying that. But it also precludes saying it's "impossible" for them to be conscious.
Consciousness and AGI, however, are two different things. I believe my cat is conscious, but it's not even close to being intelligent. AGI is, you know, a thing. I'm quite certain this dude's LLM isn't AGI because, if nothing else, it's not "his" LLM. It's based on a black-box public model he knows nothing about, and which very likely changes frequently on the back end without his knowledge.
Intelligence is not reducible to producing speech or complex reasoning, which is why calling LLMs AI was always disingenuous.
Intelligence is an extremely complex, multi-factor phenomenon. Your cat is intelligent, and some ML models are very intelligent. But so are certain blobs of fungal rhizome. A cluster of neurons in a petri dish, or a few hyper-specific automation scripts, can also be intelligent. An LLM can display intelligence. But that doesn't mean it is conscious, that it is AGI, or that it can be classified as a person.
Those are all entirely different things.
I agree, and it's all a matter of definition. What makes an LLM different from us? To an all-knowing being, are we humans not just deterministic walking machines?
I find it hard to even arrive at a definition of consciousness.
… and this wasn't made by accident; it was deliberately engineered to develop emergent behavior. Quite a lot of money has been spent hiring a variety of experts to make it do this thing.
Hasn't worked. Almost certainly will never work, with this particular kind of network. But we would not have known that just by looking at diagrams and going "naaahhh."
You don't need a culinary degree to identify if your cake is burned, or if it was frosted with feces instead of actual frosting.
We're nowhere near that being a remotely valid concern.
Sure, because we understand cake, and we can construct one from scratch. We know what makes cake cake; we don't know what makes something conscious.
To be clear, I absolutely believe LLMs do not have consciousness. They are statistical prediction machines.
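(A rough sketch of what I mean by "statistical prediction machine": at its core, the model maps a context to a probability distribution over the next token and samples from it. The toy bigram counter below is a hypothetical stand-in for illustration only, not how any real LLM is built.)

```python
# Toy stand-in for "statistical prediction": count which word follows which,
# then sample the next word in proportion to those counts. A real LLM does
# the same kind of next-token prediction at vastly larger scale with a
# learned neural network instead of raw counts.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(prev_word):
    """Sample a likely next word given the previous one."""
    options = follow_counts[prev_word]
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

print(predict_next("the"))  # usually "cat", sometimes "mat"
```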
But then, animals are also just really complex chemical processes. I don't know what the differentiating factor is.