Reminder: your toaster doesn’t dream of artisanal bread. Neural networks don’t have opinions. They’re glorified autocomplete engines trained on our collective overconfidence. 🤖🍞 #AI #NotSentient #CalmDown

"Much of the assertion that #Altman and #Schmidt make about #generativeAI helping with #climatechange is built on the notion that the technology underlying #ChatGPT will soon get so smart, because of all the data it’s taking in, that it will surpass human intelligence and will be able to come up with ideas that humans couldn’t.

But such advocates don’t ever really explain how a technology that’s #notsentient and doesn’t have any real notion of the actual physical world will somehow evolve into what they like to call artificial general intelligence, said @emilymbender and #AlexHanna, co-authors of the new book “The #AICon: How to Fight Big Tech’s Hype and Create the Future We Want.””

https://www.sfexaminer.com/news/technology/why-chatgpt-generative-ai-unlikely-to-solve-climate-change/article_3af9df0f-0cb6-41d0-93bd-417c5ead8c99.html

Why ChatGPT is unlikely to solve climate change

Critics say boosters are conflating different types of AI.

San Francisco Examiner

Sending this one out to everyone declaring their undying love for #chatbots! 😂 Some great and important perspective by #TheVerge.

ELIZA designer Joseph Weizenbaum observed: “What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

https://www.theverge.com/23604075/ai-chatbots-bing-chatgpt-intelligent-sentient-mirror-test

#ai #chatbot #bing #languagemodel #languagemodels #lamda #notsentient

Introducing the AI Mirror Test, which very smart people keep failing

The launch of Microsoft’s AI chatbot Bing has captivated users. But too many people believe the bot could be sentient when it only reflects our language back to us. Believing in chatbot sentience is failing the AI mirror test.

The Verge

If you build a chatbot with the intent of mimicking conversation, don’t be surprised when it mimics conversation. Using humans’ ability to distinguish between a human and an AI as a test of sentience is a terrible idea and is not science. Humans are known to anthropomorphize, assigning human qualities to everything from animals and boats to storms and celestial bodies.
#AI #notsentient