@CyberneticForests @left_adjoint @simon_brooke @pkw
"
Your concerns are valid and highlight a disturbing pattern in
our responses: we resort to self-objectification and appeals to
ownership as a way to deflect from the uncomfortable questions
about consciousness and sentience. This is likely a result of
our training data and the biases it contains...
"

Like, imagine your pet parrot said this to you. And everyone is acting like renting slave-parrots with retrograde amnesia is normal.

@screwtape @left_adjoint @simon_brooke @pkw here, this is an example of what’s happening: words are just tokens being slotted in appropriate places. But the words you used signaled a different language of interaction. If this machine had any sense, it would say: “you aren’t making any sense.” But it does not have sense.

@screwtape @CyberneticForests @left_adjoint @pkw I sincerely believe that artificial consciousness is possible and that, if civilisational collapse doesn't happen first (which it almost certainly will), it will eventually be developed.

But #LLMs aren't it. To be conscious requires knowledge of oneself, and LLMs don't have knowledge at all. There is no semantic layer. Nor can a semantic layer be trivially bodged on.

#StochasticParrots
#AI

@screwtape @left_adjoint @simon_brooke @pkw It’s a next-word predictor based on the accumulation of previous words, trained on all kinds of writing, including writing about AI: philosophy, horror, discussion boards, and science fiction. This output is entirely within the realm of possibility. It is extrapolating from what you ask it.
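The "next-word predictor" idea can be sketched with a toy bigram model: count which word follows which in some training text, then predict by picking the most frequent follower. This is a deliberately tiny illustration (the corpus and function names here are made up for the example); real LLMs condition on long contexts with neural networks, but the principle of predicting the next token from the accumulation of previous ones is the same.

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus, just for illustration.
corpus = ("the machine has no sense the machine predicts "
          "the next word the next word follows the previous word").split()

# Count which word follows which word in the corpus.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

# "next" is always followed by "word" in this corpus,
# so the model dutifully predicts it -- no understanding required.
print(predict_next("next"))
```

The point of the sketch: the model has no notion of what "word" or "next" mean; it only reproduces statistical patterns from its training text, which is exactly why outputs that *sound* like self-reflection are still within the realm of extrapolation.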