/1 https://aisnakeoil.substack.com/p/is-avoiding-extinction-from-ai-really… New substack from @random_walker writing in conjunction w/ @sethlazar & Jeremy Howard
/2 Subtitle says it all: "The history of technology suggests that the greatest risks come not from the tech, but from the people who control it."
/3 "2023 has seen a leap...in AI capabilities, which undoubtedly brings new risks... But we are not convinced that mitigating risk [of rogue AI wiping out humanity] is a global priority. Other AI risks are as important, and are much more urgent."
/4 "What about risks posed by people who negligently, recklessly, or maliciously use AI systems? Whatever harms we are concerned might be possible from a rogue AI will be far more likely at a much earlier stage as a result of a “rogue human” with AI’s assistance."
/5 History suggests "that the greatest risks come not from technology itself, but from the people who control the technology using it to accumulate power and wealth. The AI industry leaders who have signed this statement are precisely the people best positioned to do just that."
/6 "[I]n calling for regulations to address the risks of future rogue AI systems, they have proposed interventions that would further cement their power. We should be wary of Prometheans who want to both profit from bringing the people fire, and be trusted as the firefighters."
/7 "And why focus on extinction in particular?...We’re still in the middle of a global pandemic"
Ongoing Russian aggression.
"Catastrophic climate change, not mentioned in the statement, has very likely already begun.
Is the threat of extinction from AI equally pressing?"

/8 Nice intervention here:

"Do the signatories believe" their AI systems "might wipe us all out? If they do...[they] should immediately shut down their data centres and hand everything over to national governments."

/9 "The researchers should stop trying to make existing AI systems safe" & "instead call for their elimination."

"We think that...most signatories to the statement believe that runaway AI is a way off yet" & "will take a significant scientific advance" we can't anticipate.

/10 "If this is so, then at least two things follow." 1) Attend to more urgent concerns now, by mitigating inequality and concentrated power.
/11 2) "[I]nstead of alarming the public w/ ambiguous projections about the future...we should focus less on what we should worry about, & more on what we should do."

/12 Love this...

The future of AI "perhaps more so than of pandemics, nuclear war, and climate change—is fundamentally within our collective control."

/13 "This doesn’t just mean soliciting input on what rules god-like AI should be governed by. It means asking whether there is, anywhere, a democratic majority for creating such systems at all"
/14 It "means empowering voices and groups underrepresented on this AI power list—many of whom have long been drawing attention to societal-scale risks of AI without receiving so much attention."
[L]et’s focus on the things we can study, understand and control..."

/15 "—the design and real-world use of existing AI systems, their immediate successors, and the social and political systems of which they are part."

Nicely done!