@gnat @BestGirlGrace I'm iffy on this particular line of reasoning, because "it just can't count because of X fundamental requirement of the definition" is 1) really limiting and 2) not terribly honest to how humans actually think about sentience.
Fundamentally the question is not "will we convince every cognitive scientist that this thing counts as a person", it's "would most people look and say this AI is self-aware". Given the full transcript that I read, I'd have trouble arguing that it *isn't* sentient in that sense. It is expressing opinions, feelings, an understanding of self... If I believed it was genuine, I would be desperately wanting someone to look at the internals and see if this is somehow supported.
But as is, I can't believe what's there, because it just doesn't make sense. This would be leaps and bounds past not only preexisting chatbots and text transformers, but the entire field of narrative intelligence, to say nothing of the other fields it supposedly flies past. It's hard to believe.