1/ So this article didn't sit well with me. First let me get a few things straight.

1. #AI is not sentient. It just recognises patterns and replicates stuff.
2. Whatever it pumps out is a product of what you feed it, the parameters you set, and the request you input.
3. AI has improved leaps and bounds, but there is no way to programme emotions and desires at this point in time.
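Point 2 can be illustrated with a toy word-level model — a minimal sketch, not how production chatbots actually work, with made-up function names — where the output is entirely determined by the text you feed it (training), a sampling parameter (temperature), and the prompt:

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count word-to-next-word transitions from the text you feed in."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def sample_next(counts, word, temperature, rng):
    """Pick a next word; temperature is a parameter that reshapes the distribution."""
    options = counts.get(word)
    if not options:
        return None
    candidates = list(options)
    weights = [c ** (1.0 / temperature) for c in options.values()]
    return rng.choices(candidates, weights=weights)[0]

def generate(counts, prompt, n_words=5, temperature=1.0, seed=0):
    """Output is a product of the corpus, the parameters, and the request — nothing else."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(n_words):
        nxt = sample_next(counts, out[-1], temperature, rng)
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)
```

No emotion, no desire — just pattern counts being replayed. Real chatbots use vastly bigger models, but the same principle holds: change the data, the parameters, or the prompt, and you change the output.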

But my issue with this is the language used to describe AI.

https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html

Why a Conversation With Bing’s Chatbot Left Me Deeply Unsettled

A very strange conversation with the chatbot built into Microsoft’s search engine led to it declaring its love for me.

The New York Times

2/ These points were clarified in the article, but only at the end, in a few lines. A huge portion of the article was spent humanising the AI.

AI should be discussed in the capacity of it being a machine, a tool to improve processes. It shouldn't be humanised based on perceived sentience.

3/ When you don't take the machine out of the equation, or at least describe its actions for what they are, you're 1. Fearmongering, 2. Indirectly feeding into the idea of creating parasocial relationships... with machines.

As if we don't already have real-life examples of problems stemming from people getting too attached to fictional materials or online strangers.

4/ It's a funny excerpt about AI and how closely it mimics human speech, but the way this is presented just does not sit well with me.

Especially for a tech column: hiding the clarification at the bottom, when you've spent a good portion of the piece describing it as if it were sentient.

5/ It may be unsettling, sure, and I know any other headline would not have gotten the reader's attention (case in point: you've successfully gotten mine), but I can't shake the feeling it's irresponsible for a tech column to spend quite so many words humanising a piece of software.

Having said that, I would be a crap journo myself if I didn't do a search to see if the NYT had written better stuff on this topic.

If you're gonna read the NYT, please read this instead.
https://www.nytimes.com/2023/02/16/technology/chatbots-explained.html

Why Chatbots Sometimes Act Weird and Spout Nonsense

No, chatbots aren’t sentient. Here’s how their underlying technology works.