There's yet another "AI will kill us all! It poses a risk of extinction!" letter going around, and I just… Y'all, I am just so fucking tired.

CAPITALISM poses risk of extinction (climate change, right the fuck now).

WHITE SUPREMACY poses risk of extinction (genocide, eugenics).

HEGEMONY poses risk of extinction (nuclear FUCKING WAR).

And whatever "risk of extinction" "AI" poses, it poses because it is BUILT FROM THOSE EXTREMELY HUMAN VALUES.

Even if you stopped every "AI" project running, RIGHT THIS SECOND, those values would still kill us. And no matter how long you "pause" your "AI" projects, if you don't address those values? Then when you start your "AI" back up? You'll KEEP BUILDING THOSE SAME VALUES IN.

This is not hard. At this point, as much as it pains me to say it, it's not even novel. And yet you're still not fucking getting it.

I'm so goddamn tired.

@Wolven My main issue is that they won't say WHY it poses an extinction risk. Are they going to cite a short story by Ray Bradbury or something? Will it be Idiocracy, where we all forget the recipe for ice and start sprinkling our crops with Gatorade? What?
@dianarajchel Even if we take them at their word that they really believe this, literally anything they could point to that "AI" might "decide" to do to kill us all is something it would only "think" to do because we, humans, modeled it as behavior to learn from and emulate. And they genuinely refuse to grapple with that fact.
@Wolven @dianarajchel So many of them mention it, but they all seem to take those values as a fundamental characteristic of the world rather than imagining a world without them, or a way to avoid training them into AI.