What does “good” coverage of AI mean to you? I wrote about how the disparate views that very smart people have about existential risk are making it hard to calibrate how to cover advancements in artificial intelligence.

https://www.platformer.news/p/why-im-having-trouble-covering-ai

Why I'm having trouble covering AI

If you believe that the most serious risks from AI are real, should you write about anything else?

Platformer

@caseynewton my experience is that if the article doesn't come out and clearly state that AI is a silly term, that everything we see today is actually machine learning, and that there's nothing intelligent about it... it's not going to be a very good article.

Machine learning has some neat applications and possibilities, but everyone who's gung ho for "AI" is the same kind of person who thinks cryptocurrency will save the world. They're just refusing to see that the emperor is wearing no clothes.