There's yet another "AI will kill us all! It poses a risk of extinction!" letter going around, and I just… Y'all, I am just so fucking tired.

CAPITALISM poses risk of extinction (climate change, right the fuck now).

WHITE SUPREMACY poses risk of extinction (genocide, eugenics).

HEGEMONY poses risk of extinction (nuclear FUCKING WAR).

And whatever "risk of extinction" "AI" poses, it poses because it is BUILT FROM THOSE EXTREMELY HUMAN VALUES.

Even if you stopped every "AI" project running, RIGHT THIS SECOND, those values would still kill us. And no matter how long you "pause" your "AI" projects, if you don't address those values? Then when you start your "AI" back up? You'll KEEP BUILDING THOSE SAME VALUES IN.

This is not hard. At this point, as much as it pains me to say it, it's not even novel. And yet you're still not fucking getting it.

I'm so goddamn tired.

Here. I've already said all this. Been saying it for damn near 20 years. Tired. Fucking wearied.
https://afutureworththinkingabout.com/?page_id=5038

Even if we take these dudes at their word that they really believe this, then regardless of mechanism, even "AI" would only think to kill us all because we, humans, modeled that for it as something to learn from and emulate.

And these dudes genuinely refuse to grapple with that fact.

Object Lessons in Freedom | A Future Worth Thinking About

"'Any Sufficiently Advanced Neglect is Indistinguishable from Malice': Assumptions and Bias in Algorithmic Systems":
https://afutureworththinkingabout.com/?p=5442

@Wolven whoa! that title alone is 🔥🔥

that will definitely be living in my mind for a long time. (hoping to listen to the audio soon)

@hko the first half of that title is me quoting @debcha
@Wolven @debcha it's an absolutely inspired turn of phrase, kudos to both of you!

I wasn't aware of the many variations on Clarke's third law, and I think this is the best I've seen. Definitely a useful construct, and it cuts through the bullshit. "It quacks like a duck."

https://en.wikipedia.org/wiki/Clarke%27s_three_laws


@Wolven "something something instrumental convergence"
@Wolven So I inspired myself to finally read the Wikipedia article on this "instrumental convergence" I keep hearing about, and as I expected, but even more so, it is truly a tour de force of self-justification. These people projected so hard, they made their pathologies into a universal law of the cosmos. https://en.m.wikipedia.org/wiki/Instrumental_convergence

@Wolven No explicit defense of imperialism, genocide, or slavery in the article, but from what we know of Bostrom and his crew, you know those defenses have been made. It feels like a skeleton key that explains so much - even Hinton's wild comment that "I don't know any examples of more intelligent things being controlled by less intelligent things" @FeralRobots https://mastodon.social/@FeralRobots/110317139645593097

@misc @Wolven ahahaha

I might summarize "instrumental convergence" as "everybody's gotta be an asshole to get what they want"

which really makes the "telling on yourself" clearer

@trochee @Wolven It's just incredible how many assumptions are packed in at every step to make this ostensible law say (and justify) what they want.
@misc @Wolven there's even a section on "if you don't have to be an asshole to get it, your wants aren't big enough to count"
@Wolven imo if a true AI arises and it has any of our DNA in it (which by definition it must), its first priority will be to not die. That’s when the shit completely misses the fan, smashes through a window, and kills every kid on the block.

@goodthinking Doesn't have to be that way. Doesn't have to arise out of THIS culture. Doesn't have to be like US. There are other people, values, cultures in and out of which we could build these ideas and systems.

But, yeah, if "real" "AI" comes from the predominant cultural values working on it, right now, it's going to be a problem, and that is, again, still a problem about us and our values, just like the horrible things being done with and through current "AI" are.

@Wolven Thank you for this. It is a broader view than I came in here with. Almost optimistic :) Appreciated.