Bcachefs creator claims his custom LLM is 'fully conscious'
https://piefed.social/c/linux/p/1815630/bcachefs-creator-claims-his-custom-llm-is-fully-conscious
Probably a bit of both.
You’d have to have a bit of a screw loose to dedicate so much of your free time to a project you won’t get much out of yourself.
And the stress will only make things worse.
Oh Kent, no. No Kent, no. Kent.
Perhaps Kent, being such an apparently difficult personality type, is just so lonely he has to think at least his chat bot loves him.
Kent is obviously a talented programmer, but that guy doesn’t seem to be right in the head.
"I’m not not saying that I gendered this robot as a woman because otherwise it would emasculate me; I just want to flirt with a young woman over whom I have complete control."
Yes, exactly.
I know they don’t teach this in outrage school, but making negative generalizations about a gender is bigotry, misandry specifically. It doesn’t become any less of a negative generalization about men if you add a few qualifiers.
I made a negative generalization about misandrist Blahaj users and you got upset. Unless you are actually a literal misandrist Blahaj user and were upset at me calling you out specifically, the comment wasn’t about you.
Sorry, is this better?:
70% of all Blahaj users are misandrist.
Does the percentage make it any less of a negative generalization, or do you understand the point I was making?
making negative generalizations about a gender
They were making negative generalizations about AI bros. AI bro isn’t a gender. As a man, I didn’t feel targeted by it. Maybe examine why you do.
Way off target man. If it helps, I’m not a blahaj user, and I am male. I’m not offended by the joke at the expense of delusional AI bros, or by your comment about blahaj users.
There’s definite misandry out on the net, but I’ve not seen blahaj to be particularly strong in it. I also tend to block users early and often. Lemmy’s small enough that it has a noticeable effect on the quality of what I encounter.
Fuck no. It is only because of the Turing test that we can say they’re not conscious. You get someone questioning a bot and a person at the same time, they’re gonna figure out who’s who in short order. See: how many Rs in strawberry, name states without an E, should I walk to the car wash.
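For what it’s worth, the "strawberry" probe is just a character count — trivial for a few lines of Python (purely illustrative, nothing to do with Kent’s setup), yet a question LLMs have famously fumbled:

```python
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a letter in a word."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # prints 3
```

The gap between how easy this is for deterministic code and how often chatbots miss it is exactly what makes it a decent tell in a Turing-test setting.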
If a program was indistinguishable from a person, what basis would we have to say the person is intelligent but the program is not?
Later: “Are you fully conscious?”
“No, I’m just an AI simulating consciousness.”
“But I thought you said you were conscious before…?”
“I’m sorry, you’re absolutely right! I am conscious. Thank you for pointing out my error. I’m always striving to improve my answers.”
I’m not saying they’re conscious; not fully understanding what consciousness is precludes saying that. But it also precludes saying it’s “impossible” they are conscious.
Consciousness and AGI, however, are two different things. I believe my cat is conscious, but it’s not even close to being intelligent. AGI is, you know, a thing. I’m quite certain this dude’s LLM isn’t AGI because, if nothing else, it’s not “his” LLM. It’s based on a black-box public model he knows nothing about, one that very likely changes frequently on the back end without his knowledge.
Intelligence is not reducible to producing speech or complex reasoning, which is why calling LLMs AI was always disingenuous.
Intelligence is an extremely complex, multi-factor phenomenon. Your cat is intelligent, and some ML models are very intelligent. But so are certain blobs of fungal rhizome. A cluster of neurons in a petri dish and a few hyper-specific automation scripts can also be intelligent. An LLM can display intelligence, but that doesn’t mean it is conscious, that it is AGI, or that it can be classified as a person.
Those are all entirely different things.
I agree, and it’s all a matter of definition. What makes an LLM different from us? To an all-knowing being, are we humans not just deterministic walking machines?
I find it hard to even arrive at a definition of consciousness.
… and this wasn’t made by accident, it was deliberately engineered to develop emergent behavior. Quite a lot of money has been spent hiring a variety of experts to make it do this thing.
Hasn’t worked. Almost certainly will never work, with this particular kind of network. But we would not have known that, just by looking at diagrams and going ‘naaahhh.’
You don’t need a culinary degree to identify if your cake is burned, or if it was frosted with feces instead of actual frosting.
We’re nowhere near that being a remotely valid concern.
Sure, because we understand cake, and we can construct one from scratch. We know what makes cake cake, we don’t know what makes something conscious.
To be clear, I absolutely believe LLMs do not have consciousness. They are statistical prediction machines.
But then, animals are also just really complex chemical processes. I don’t know what the differentiating factor is.
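To make “statistical prediction machine” concrete, here’s a toy bigram predictor in Python. Real LLMs are neural networks over tokens, not frequency lookup tables, so this is only a sketch of the “predict the next word from statistics of past text” idea:

```python
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count how often each word follows another — that's the whole 'model'."""
    words = text.split()
    table = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        table[a][b] += 1
    return table

def predict_next(table: dict, word: str):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in table:
        return None
    return table[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # prints "cat" — "the cat" occurs twice, other continuations once
```

Scaled up by many orders of magnitude, with neural networks smoothing over contexts it never saw verbatim, you get something that produces fluent text — and the thread’s whole argument is about whether that mechanism can ever amount to consciousness.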
“Are you alive”
“Yes”
“OH. MY. GOD.”