What does “good” coverage of AI mean to you? I wrote about how the disparate views that very smart people have about existential risk are making it hard to calibrate how to cover advancements in artificial intelligence.

https://www.platformer.news/p/why-im-having-trouble-covering-ai

Why I'm having trouble covering AI: If you believe that the most serious risks from AI are real, should you write about anything else? (Platformer)

@caseynewton I’ve been having similar issues reporting on AI for Ars. Trying to walk a middle path is difficult given the wide range of viewpoints. That may owe to how nebulous a term “AI” is

Also, since the “AI doom” issue is entirely speculative, the topic has moved from tech into politics: a realm where people disagree over opinions and beliefs

We’ve been increasingly covering AI as a policy issue at Ars and I don’t think it’s a coincidence. So you’re a political reporter now and you didn’t know it 😁

@benjedwards @caseynewton AI has always been a technopolitical issue, and its specific form in this phase (deep learning) has specific political implications. But many of the aforementioned “smart” people are blind to concrete social dynamics, in a way that’s widespread among white men with an engineering mindset