I think one of the main features of sentience is being "always on": your thoughts are continually prompted by input from your senses and emotions, as well as by your own mind during downtime (inner thoughts), all couched in a self-preserving knowledge that your mind is in a body that should be protected.

Today's #AI chatbots do not have these things, but if we added them (not that hard, really), I think even LLMs at today's level might appear WAY more sentient/intelligent to most people.

We should be careful how that gets done, obviously! Plenty of SciFi out there showing the downsides. Interesting that both Foundation (from 1951) and Dune (1965) show future galaxies where AI was banned completely.

@martin i think consciousness in AI requires not just an always-on neural net but a fully engaged one, as in every neuron running in parallel. That’s impossible in current computing architecture. Memory is still static, iteratively processed over the memory bus in small pieces. GPUs are just really fast & work in much bigger chunks, but it’s still a tiny straw at a time. Consciousness is awareness of the whole of being, all at once.
@njrabit Well that’s an interesting way of looking at it, man, yes, and the biological analogue is the way we prefer to think of it. But perhaps biological computing is a short step once you have digital mastered.
@martin Well on the digital front, quantization in llama.cpp shrinks powerful LLMs to 1/8 size (4-bit); they run fast on ordinary PCs with no need for GPUs and little reduction in quality, and some work is going toward 1-bit. Where it gets super interesting is that the reduced complexity of the calculations means getting closer to the silicon, and embedded chips running speech2text (whisper), text2speech (bark) and a fully local LLM on appliances around the home, it'd be like Pee Wee's Playhouse ;)
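The 1/8 figure follows from storing 4 bits per weight instead of fp32's 32 bits. Here's a minimal Python sketch of the block-wise 4-bit idea, a hypothetical simplification for illustration only; llama.cpp's real Q4 formats differ in the details (bit packing, scale encoding, etc.):

```python
import numpy as np

def quantize_4bit(weights: np.ndarray, block_size: int = 32):
    """Map fp32 weights to 4-bit codes (0..15) with one scale/offset per block."""
    blocks = weights.reshape(-1, block_size)
    lo = blocks.min(axis=1, keepdims=True)
    hi = blocks.max(axis=1, keepdims=True)
    scale = (hi - lo) / 15.0          # 15 = max 4-bit code
    scale[scale == 0] = 1.0           # avoid divide-by-zero on constant blocks
    q = np.round((blocks - lo) / scale).astype(np.uint8)  # 4-bit codes
    return q, scale, lo

def dequantize_4bit(q, scale, lo):
    """Reconstruct approximate fp32 weights from codes + per-block scale/offset."""
    return (q.astype(np.float32) * scale + lo).ravel()

w = np.random.default_rng(0).standard_normal(1024).astype(np.float32)
q, scale, lo = quantize_4bit(w)
w_approx = dequantize_4bit(q, scale, lo)

# 4 bits/weight vs fp32's 32 bits/weight = 1/8 the size
# (ignoring the small per-block scale/offset overhead).
# Rounding error per weight is bounded by half the block's scale.
```

The quality claim comes from that error bound: within each block the reconstruction is off by at most half a quantization step, which for typical weight distributions is small relative to the weights themselves.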
@martin Iain M. Banks’ “Culture” novels have many levels of machine sentience, generally benign (at least to benign neighbors). I think the character “Mike” in Heinlein’s The Moon is a Harsh Mistress was a good example of the challenges in developing awareness without a body, particularly the dependence on social interaction (even if the tech Heinlein described seems hopelessly simplistic now). It might be possible for something we philosophically recognize as “intelligence” to develop without a body or socialization, but I don’t know if we’d be able to communicate with it. We would have almost no shared referents.