David Weinberger on AI

David Weinberger joins the Plutopia podcast to weigh AI’s real strengths, especially pattern recognition, against its major dangers: hallucinations, bias, corporate power, and energy costs. He’s less focused on sci-fi doom than on how AI reshapes how we think about knowledge and ourselves. We dig into surveillance and facial recognition failures, “human-in-the-loop” debates in medicine and justice, job disruption, and whether copyright is the right tool for regulating training data.

https://media.blubrry.com/plutopia_news_network/plutopia.io/wp-content/uploads/2026/02/David-Weinberger.mp3

Podcast: Play in new window | Download

David Weinberger:

I am less concerned, but I may just be wrong about this — I am less concerned about machine learning AI becoming conscious and consciously hostile to us and subjugating us. I cannot evaluate the risk of it in a non-malignant way, taking over for us. I mean, there’s some popular scenarios from very knowledgeable and responsible people saying, you know, this conceivably could… even if we tell it, do no harm to humans, only do good, do what’s good for humans… that it could come to very bad conclusions about what’s good for humans and get us into a situation that we don’t want to be in.

YouTube video version:

#ai #artificialIntelligence #largeLanguageModels #llm #technology

Check out this fascinating and slightly scary conversation about the future of #AI on the #GadgetLab #Podcast, featuring Joshua Rothman speaking with Geoffrey Hinton, a pioneer of the neural-network research behind today's large language models:

‘It’s Far Too Late’ to Stop Artificial Intelligence
https://overcast.fm/+RJ19AH9AQ

#NewYorker #RadioHour #AI #ArtificialIntelligence #Wired #JoshuaRothman #tech #GeoffreyHinton #LLM #LargeLanguageModels #GadgetLabPodcast

Geoffrey Hinton: ‘It’s Far Too Late’ to Stop Artificial Intelligence — Gadget Lab: Weekly Tech News from WIRED

Artificial intelligence has made headlines all year long, but the turn of events this week was extraordinary. OpenAI was thrown into chaos with the firing and eventual rehiring of CEO Sam Altman. There was a shakeup in the company’s board of directors and fierce debates about how much influence ethics should have on the company’s direction. That uncertainty about how to philosophically approach artificial intelligence will keep casting a shadow over the tech industry even after the dust settles around the OpenAI drama. Researchers, proponents of ethical AI, and corporate customers of these new generative AI tools will continue to ask how these technologies are going to shape our future, and what influence they will have over our lives. This week, we’re bringing you an episode of The New Yorker Radio Hour podcast in which New Yorker writer Joshua Rothman talks to Geoffrey Hinton, the so-called godfather of AI, about how rapidly AI has advanced and how it may alter the future of humanity. Show Notes: This episode…

I'm posting here for the Local feed.

Have you heard of #HumanityInFiction? It's an advocacy group of authors, publishers, and others concerned with how #LargeLanguageModels are being used in writing fields.

HiF has put together a #survey to help writers, editors, and readers better see how "AI" is perceived in our creative processes. The survey takes approximately five minutes to complete.

https://sy5qybdoqjl.typeform.com/to/lZdyZvjJ

More info is available from the HiF homepage: https://humanityinfiction.org/.

Initial AI/LLM Survey

Online Language Generation and Assistance Tools: Impact on the Speculative Fiction Market.