What does “good” coverage of AI mean to you? I wrote about how the widely divergent views that very smart people hold about existential risk make it hard to calibrate coverage of advances in artificial intelligence.

https://www.platformer.news/p/why-im-having-trouble-covering-ai

@caseynewton There are two problems with most coverage, including your piece: (1) a lack of informative descriptions of how these machines are built and how they work; and (2) a focus on positive functionality to the exclusion of any well-ordered discussion of their limitations.

Both angles are tough to cover in the absence of transparency, but that makes them more important, not less. Both would also blunt the buzz around the topic over time, which I hope is not a reason for gliding past them.

@fgbjr Appreciate the note, but (2) seems wrong to me: I read far more about risk than anything else, including in mainstream pubs.

On the tech stuff: how would more detailed descriptions improve the coverage, in your view?

@caseynewton It was an impressionistic note; I'm not widely read enough to dig in my heels. But the two impressions are related. The talk of risk I've seen from the most prominent (non-tech) outlets is often premised on software becoming "too good," or developing intelligence beyond human capacity. Although there's a critical difference between mimicry and the sentience of a trusted mind, it's easy to slip across that boundary; and that's where accessible technical description can help.
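
To make the mimicry point concrete, here's a toy, hypothetical sketch: a bigram model in Python. It's nothing like the neural networks behind real systems in scale or architecture (the corpus, names, and parameters are all made up for illustration), but it shares the same statistical framing: learn which tokens tend to follow which in training text, then sample continuations. The fluency is borrowed from the training text; there is no understanding behind it.

```python
import random
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# corpus, then sample continuations from those counts. Real systems use
# neural networks over subword tokens, but the framing is the same:
# predict the next token from statistical patterns in training text.
corpus = (
    "the model predicts the next word the model repeats patterns "
    "the text sounds fluent because the patterns come from fluent text"
).split()

next_words = defaultdict(Counter)  # e.g. next_words["the"]["model"] == 2
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start, length=8):
    """Sample a continuation by repeatedly drawing a likely next word."""
    words = [start]
    for _ in range(length):
        counts = next_words.get(words[-1])
        if not counts:
            break  # no observed continuation for this word
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# Possible output: "the model predicts the next word the model repeats"
```

A plain-language version of that last comment is the kind of description I mean: the output sounds right because the input sounded right, which is a different claim than the software knowing anything.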