The only time I will engage with "but what if my chatbot has real thoughts and feelings" shit is to point out how if you believe that's true, your actions in running said bot are unethical. Because even if you're running on a wrong idea, YOUR actions are based on YOUR belief of the situation, and if these AI fans running bots truly believe it then they're kinda fucked up.
I call this the haunted doll problem (although the djinn ring problem would be more accurate, it's a less relatable concept). People sell purportedly haunted dolls on eBay. Most of the people selling them don't actually believe they're haunted. But they talk as if they do because that's part of selling it. Essentially, it's a scam.
Now, some percentage of buyers of these things also don't believe in it. They're buying it as a goof or whatever. So I guess that's fine; you see it as them selling you a fun fictional story, and I wouldn't say these people got scammed.
But some percentage of buyers are serious. They believe the dolls are haunted; they are buying a ghost. And that's where we run into the other half of the problem. Sure, they're being scammed, and that's bad. But consider: if you believe you are buying someone's ghost, is...that ethical? I don't know that buying a ghost is a good thing! I don't believe these dolls are haunted, but I do believe YOU believe that, and if you do, I find it weird you'd consider a ghost acceptable to buy or sell.
I don't think you have to entertain the idea of an LLM being a person to have THIS conversation, because ultimately the discussion is about the operator, not the bot.