@Riedl what's the best case you've seen against 'AGI' as an existential risk? Not: (1) the best case against the arguments in Time magazine or against being turned into paperclips, or (2) the best case against current ML/AI systems. The simple argument seems strong to me: the systems we're building to be more capable than we are will eventually succeed in becoming more capable, at which point we lose our dominant role as a species, taking a place alongside dolphins, chimpanzees...