What does “good” coverage of AI mean to you? I wrote about how the disparate views that very smart people have about existential risk are making it hard to calibrate how to cover advancements in artificial intelligence.

https://www.platformer.news/p/why-im-having-trouble-covering-ai

Why I'm having trouble covering AI

If you believe that the most serious risks from AI are real, should you write about anything else?

Platformer

@caseynewton I’ve been having similar issues reporting on AI for Ars. Trying to walk a middle path is difficult with such a wide range of viewpoints. It may stem from how nebulous “AI” is

Also, since the “AI doom” issue is 100% speculative, the topic has moved from tech into politics—a realm where people disagree over opinions and beliefs

We’ve been increasingly covering AI as a policy issue at Ars and I don’t think it’s a coincidence. So you’re a political reporter now and you didn’t know it 😁

@caseynewton In a way, “AI” is so nebulous that its definition varies widely with personal opinion. So I have come to see AI more as the subjective application or social perception of machine learning research

Because of that, and because everyone grew up on sci-fi, AI is more culture and feeling than science. So if you’re reporting about “AI,” you are reporting about various conflicting belief systems: what people think it is, what they think it should be. Like I said, political reporter 😁

@benjedwards @caseynewton I’ve decided to try to stop calling them “AI” or “artificial intelligence.” It’s not incorrect for some meanings, but it gives the wrong idea that LLMs are “intelligent.” And there’s also the cultural baggage, like you say. I’m trying to stick to LLM, ML, machine learning, etc