There's yet another "AI will kill us all! It poses a risk of extinction!" letter going around, and I just… Y'all, I am just so fucking tired.

CAPITALISM poses risk of extinction (climate change, right the fuck now).

WHITE SUPREMACY poses risk of extinction (genocide, eugenics).

HEGEMONY poses risk of extinction (nuclear FUCKING WAR).

And whatever "risk of extinction" "AI" poses, it poses because it is BUILT FROM THOSE EXTREMELY HUMAN VALUES.

Even if you stopped every "AI" project running, RIGHT THIS SECOND, those values would still kill us. And no matter how long you "pause" your "AI" projects, if you don't address those values? Then when you start your "AI" back up? You'll KEEP BUILDING THOSE SAME VALUES IN.

This is not hard. At this point, as much as it pains me to say it, it's not even novel. And yet you're still not fucking getting it.

I'm so goddam tired.

@Wolven My main issue is that they won't say WHY it poses an extinction risk. Are they going to cite a short story by Ray Bradbury or something? Will it be Idiocracy where we all forget the recipe for ice and start sprinkling our crops with Gatorade? What?
@dianarajchel Even if we take them at their word that they really believe this, literally anything they could point to that "AI" might "decide" to do to kill us all, it would only "think" to do because we, humans, modeled that to it as something to learn from and emulate. And they genuinely refuse to grapple with that fact.
@Wolven @dianarajchel so many of them mention it, but they all seem to take it as a fundamental characteristic of the world rather than imagine a world without those things, or a way to avoid training those things into AI.
@Wolven @dianarajchel It's like we're on the Titanic with water up to our ankles and all the rich passengers are worried about the crime rate in New York.

@dianarajchel @Wolven Two of the founders of the "Center for AI Safety", Dan Hendrycks and Oliver Zhang, are apparently affiliated with the "LessWrong" apocalyptic AI cult, whose leadership has advocated nuclear war over "AI" under the pretext of some imaginary superintelligence that could turn the Earth into self-replicating grey goo.

Judge their nonsensical claims about "existential risk" and "AI" accordingly.

@michael_w_busch @Wolven Thank you for that context. Also wheee a new cult to examine! (Some people watch true crime. Me, it's cults.)

@dianarajchel @Wolven One misfortune of my time living in Silicon Valley was encountering a couple of members of the LessWrong cult.

They have a very distinctive vocabulary: https://rationalwiki.org/wiki/LessWrong (that review is some years old now and so does not include Eliezer Yudkowsky of LessWrong calling for nuclear war over AI in a Time magazine piece on March 29, 2023).

LessWrong (RationalWiki):

LessWrong is a community blog focused on "refining the art of human rationality." To this end, it focuses on identifying and overcoming bias, improving judgment and problem-solving, and speculating about the future. The blog is based on the ideas of Eliezer Yudkowsky, a research fellow for the Machine Intelligence Research Institute (MIRI; previously known as the Singularity Institute for Artificial Intelligence, and then the Singularity Institute). Many members of LessWrong share Yudkowsky's interests in transhumanism, artificial intelligence (AI), the Singularity, and cryonics.
@michael_w_busch @Wolven Somehow, I managed to avoid that, but then when I lived in San Francisco I struggled to leave the house!