What does “good” coverage of AI mean to you? I wrote about how the disparate views very smart people hold on existential risk make it hard to calibrate coverage of advances in artificial intelligence.

https://www.platformer.news/p/why-im-having-trouble-covering-ai

Why I'm having trouble covering AI

If you believe that the most serious risks from AI are real, should you write about anything else?

Platformer

@caseynewton I'm not worried about robots taking over; I'm worried about LLM lies being taken as fact and distorting or ruining everything they touch. That's where I'd like the coverage to focus: on the real problems we're going to see right now and in the near future.

Another aspect I'm interested in: the many issues with training datasets (bias, unauthorized use of copyrighted material, etc.).