1/ So this article didn't sit well with me. First let me get a few things straight.

1. #AI is not sentient. It just recognises patterns and replicates stuff.
2. Whatever it pumps out is a product of what you feed it, the parameters you set, and the request you input.
3. AI has improved leaps and bounds, but there is no way to programme emotions and desires at this point in time.

But my issue with this is the language used to describe AI.

https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html

Why a Conversation With Bing’s Chatbot Left Me Deeply Unsettled

A very strange conversation with the chatbot built into Microsoft’s search engine led to it declaring its love for me.

The New York Times

@omata Aren’t you pre-supposing your argument by saying it “just” recognizes patterns? We also do that, but I assume you don’t want to say we do it like a large language model.

@willreed the way I see it: if humans were passive, non-reactionary, and incapable of emotions and desires, we'd kinda function like robots 😅

Hence my point 3. For the sake of brevity, soc med and all, this is just a high-level "AI is not sentient" summary. Of course there are way more intricacies and differences between AI and the human brain.

I claim no expertise in AI, only some experience writing about technology, and my issue is really with the article humanising AI. My POV.