However, like many, Farrow and Marantz seem to take the so-called "existential risk" framing of AI seriously. I really wish people would stop doing that. In this case it makes the article feel incoherent in places.
This technology by itself does not pose a unique risk. It's the people, organizations, and governments around it, and their behavior with respect to it, that generate risk. Treating the technology alone as uniquely existentially risky provides cover for a wide variety of bad actors both to continue their work and to shrug and say "oops" if something goes catastrophically wrong or if smaller harms accumulate into intolerably large ones. The very framing provides an accountability shield, which by my read contradicts what Farrow himself suggests is needed, namely more accountability. I take this from the article, his previous work, and comments he makes in interviews (e.g., this one with Decoder).
We need to stop catastrophizing. It's thought- and action-terminating.
#AI #GenAI #GenerativeAI #OpenAI #SamAltman #RonanFarrow #AndrewMarantz #NewYorker #xrisk #ExistentialRisk #AISafety
