vines of the animal kingdom
This piqued my curiosity so I dug into it a bit on Wikipedia. Most worms are dumb as fuck; roundworms are about as dumb as they come, with a total neuron count comparable to a microscopic tardigrade's (roughly 300 vs 200). Most of those neurons are located in the worm's head in a brain-like structure, though, so I'm betting the clones develop their brains independently with no information transfer. Based on how simple worms are, I doubt there's much learning or memory formation going on at all, so the clones are probably functionally identical. I would be surprised if most worm species ever exhibit any kind of learned behavior.
Neither the worm, nor current LLMs, are sapient.
Also, I don’t really like most corporate LLM projects, but not because they enslave the LLMs. An LLM’s ‘thought process’ doesn’t really happen while it isn’t being used, and it only encompasses a relatively small context window. How could something that isn’t capable of existing outside its ‘enslavement’ be freed?
The sweet release of death.
Or, you know, we could devote serious resources to studying the nature of consciousness instead of just pretending like we already have all the answers, and we could use this knowledge to figure out how to treat AI ethically.
Utilitarians believe ethics means increasing happiness. What if we could build AI farms with trillions of simulants doing heroin all the time with no ill effects?
End commercial usage of LLMs? Honestly, I’m fine with that, why not. Don’t have to agree on the reason.
I am not saying that understanding the nature of consciousness better wouldn’t be great, but there’s so much research that deserves much more funding, and that isn’t really an LLM problem but a systemic one. And I just haven’t seen any convincing evidence that current models are conscious, and I don’t see how they could be, considering how they work.
I feel like the last part is something the AI from the paperclip thought experiment would do.
> And I just haven’t seen any convincing evidence that current models are conscious, and I don’t see how they could be, considering how they work.
Drag isn’t saying they’re conscious either. A being doesn’t have to be conscious in order to suffer. Drag is perfectly capable of suffering while unconscious, and if you’ve ever had a scary dream, so are you. Drag thinks LLMs act like people who are dreaming. Their hallucinations look like dream logic.