P(doom), the "probability" that #AI or #AGI will doom humanity, is a quantity that #longtermist / #TESCREAL zealots seem to care a lot about. It is the quintessential example of the reasoning error that ecological rationality calls out. There is no way to quantify the likelihood of "doom," no matter how you define that word, and it is pure nonsense to try, or to pretend you have. Doom is a large-world phenomenon. The people credited with inventing the frameworks and techniques that allow you to even think in terms of P(doom), like Leonard Savage, explicitly called out just this sort of application as preposterous.

Nevertheless, US Senate majority leader Chuck Schumer invited a group of tech CEOs and technologists, among whom numbered many #longtermist and similar kinds of zealots, to opine on their personal assessments of P(doom) in a legitimate hearing in front of the US Congress (there's good reporting on this here: https://www.techpolicy.press/us-senate-ai-insight-forum-tracker/ ).

I lack the words to express what I feel every time I'm reminded of this. Not good things.
@arcanesciences
Reducing humans to the status of perhaps-obsolete equipment is a necessary moral precondition to the #Longtermist program of replacing us with 10**48 (or whatever their made-up quantity is) hypothetical simulations of humans at some hypothetical future time.
#Longtermism #ExistentialRisk
@emilymbender
@kurtsh @jamesbritton @Toastie
& the whole 'save humanity' part really benefits from understanding how he defines 'save' & 'humanity' - as a #longtermist he believes that the hypothetical lives of hypothetical quadrillions of hypothetical virtual people are non-hypothetically more important than the lives of billions of actual people now living.