There's yet another "AI will kill us all! It poses a risk of extinction!" letter going around, and I just… Y'all, I am just so fucking tired.

CAPITALISM poses risk of extinction (climate change, right the fuck now).

WHITE SUPREMACY poses risk of extinction (genocide, eugenics).

HEGEMONY poses risk of extinction (nuclear FUCKING WAR).

And whatever "risk of extinction" "AI" poses, it poses because it is BUILT FROM THOSE EXTREMELY HUMAN VALUES.

Even if you stopped every "AI" project running, RIGHT THIS SECOND, those values would still kill us. And no matter how long you "pause" your "AI" projects, if you don't address those values? Then when you start your "AI" back up? You'll KEEP BUILDING THOSE SAME VALUES IN.

This is not hard. At this point, as much as it pains me to say it, it's not even novel. And yet you're still not fucking getting it.

I'm so goddam tired.

Here. I've already said all this. Been saying it for damn near 20 years. Tired. Fucking wearied.
https://afutureworththinkingabout.com/?page_id=5038

Even if we take these dudes at their word that they really believe this, then regardless of mechanism, even "AI" would only think to kill us all because we— humans— modeled that to it as something to learn from and emulate.

And these dudes genuinely refuse to grapple with that fact.

Object Lessons in Freedom | A Future Worth Thinking About

"'Any Sufficiently Advanced Neglect is Indistinguishable from Malice': Assumptions and Bias in Algorithmic Systems":
https://afutureworththinkingabout.com/?p=5442

@Wolven whoa! that title alone is 🔥🔥

that will definitely be living in my mind for a long time. (hoping to listen to the audio soon)

@hko the first half of that title is me quoting @debcha
@Wolven @debcha it's an absolutely inspired turn of phrase, kudos to both of you!

I wasn't aware of the many variations on Clarke's third law, and I think this is the best I've seen. Definitely a useful construct, and it cuts through the bullshit. "It quacks like a duck."

https://en.wikipedia.org/wiki/Clarke%27s_three_laws


@Wolven "something something instrumental convergence"
@Wolven So I inspired myself to finally read the Wikipedia article on this “instrumental convergence” I keep hearing about — and as I expected, but even more so, it is truly a tour de force of self-justification. These people projected so hard, they made their pathologies into a universal law of the cosmos. https://en.m.wikipedia.org/wiki/Instrumental_convergence

@Wolven No explicit defense of imperialism, genocide, or slavery in the article, but from what we know of Bostrom and his crew, you know those defenses have been made. It feels like a skeleton key that explains so much, even Hinton’s wild comment that "I don’t know any examples of more intelligent things being controlled by less intelligent things" @FeralRobots https://mastodon.social/@FeralRobots/110317139645593097

@misc @Wolven ahahaha

I might summarize "instrumental convergence" as "everybody's gotta be an asshole to get what they want"

which really makes the "telling on yourself" clearer

@trochee @Wolven It's just incredible how many assumptions are packed in at every step to make this ostensible law say (and justify) what they want.
@misc @Wolven there's even a section on "if you don't have to be an asshole to get it, your wants aren't big enough to count"
@Wolven imo if a true AI arises and it has any of our DNA in it (which by definition it must), its first priority will be to not die. That’s when the shit completely misses the fan, smashes through a window, and kills every kid on the block.

@goodthinking Doesn't have to be that way. Doesn't have to arise out of THIS culture. Doesn't have to be like US. There are other people, values, cultures in and out of which we could build these ideas and systems.

But, yeah, if "real" "AI" comes from the predominant cultural values working on it, right now, it's going to be a problem, and that is, again, still a problem about us and our values, just like the horrible things being done with and through current "AI" are.

@Wolven Thank you for this. It is a broader view than I came in here with. Almost optimistic :) Appreciated.
@Wolven My main issue is that they won't say WHY it poses an extinction risk. Are they going to cite a short story by Ray Bradbury or something? Will it be Idiocracy where we all forget the recipe for ice and start sprinkling our crops with Gatorade? What?
@dianarajchel Even if we take them at their word that they really believe this, literally anything they could point to that "AI" might "decide" to do to kill us all, it would only "think" to do because we, humans, modeled that to it as something to learn from and emulate. And they genuinely refuse to grapple with that fact.
@Wolven @dianarajchel so many of them mention it, but they all seem to take it as a fundamental characteristic of the world rather than imagine a world without those things, or a way to avoid training those things into AI.
@Wolven @dianarajchel It's like we're on the Titanic with water up to our ankles and all the rich passengers are worried about the crime rate in New York.

@dianarajchel @Wolven Two of the founders of the "Center for AI Safety", Dan Hendrycks and Oliver Zhang, are apparently affiliated with the "LessWrong" apocalyptic AI cult, whose leadership has advocated nuclear war over "AI" under the pretext of some imaginary superintelligence that could turn the Earth into self-replicating grey goo.

Judge their nonsensical claims about "existential risk" and "AI" accordingly.

@michael_w_busch @Wolven Thank you for that context. Also wheee a new cult to examine! (Some people watch true crime. Me, it's cults.)

@dianarajchel @Wolven One misfortune of my time living in Silicon Valley was encountering a couple of members of the LessWrong cult.

They have a very distinctive vocabulary: https://rationalwiki.org/wiki/LessWrong (that review is some years old now and so does not include Eliezer Yudkowsky of LessWrong calling for nuclear war over AI in a Time magazine piece on 29 March 2023).


LessWrong is a community blog focused on "refining the art of human rationality." To this end, it focuses on identifying and overcoming bias, improving judgment and problem-solving, and speculating about the future. The blog is based on the ideas of Eliezer Yudkowsky, a research fellow for the Machine Intelligence Research Institute (MIRI; previously known as the Singularity Institute for Artificial Intelligence, and then the Singularity Institute). Many members of LessWrong share Yudkowsky's interests in transhumanism, artificial intelligence (AI), the Singularity, and cryonics.

@michael_w_busch @Wolven somehow, I managed to avoid that, but then when I lived in San Francisco I struggled to leave the house!
@Wolven It is depressing knowing that we have the ideas and the tools to do so much better, and just don't use them. Still, these resources are slowly becoming more mainstream and visible. Even these infuriating conversations at least help shine a light on the absurdity of our current path.
@Wolven (the only reason I'm not boosting is because everyone who follows me already agrees and is equally tired)

@Wolven yeah the "AI will end the world" takes are mostly made by people trying to sell you AI technology. Speaking as a computer scientist, I can tell you the recent fads are little more than fancy autocomplete algorithms.

The true danger is as you say: it will be used to make oppression easier and more cost-effective.

@Wolven
just don't build the basilisk to do the thing, easy

@Wolven

Just trying to distract us from the bigger issues with a non-issue per the norm.

@Wolven Feel like I'm waking up out of a fever dream. Needed this. Thank you!

@Wolven Yeah. I really can't get worked up about the current "AI panic".

There are places with actual tent cities as a result of climate change, and stacks of scientific papers warning how bad it could be, and broad agreement it's an existential threat to our current civilization if not our species, and… mostly crickets, with a side of hopes-n-prayers.

But sure, let's definitely spend a bunch of time regulating technology that doesn't exist yet.

My dudes… let's start off by regulating some 19th century technology, like coal-fired powerplants, and if we can get that technology beaten down to size, then maybe we can work our way through 20th century problems and on to the hypothetical ones.

(And that's setting aside that most of the "AI panic" hell-scenarios are not the result of the technology, but of people doing shitty stuff with the technology due to lack of *business* regulation.)
#SMH #AI #regulation #climateChange

@Wolven
This needs to be WAY more present in #sociology communities here on Mastodon who are particularly fixated on AI right now!

@Wolven

Long before AGI poses any substantial threat to humanity or other life, ordinary AI wielded by unscrupulous humans for nefarious ends will be far more dangerous.

@Wolven So many folks refuse to talk to anybody. You might not convince everybody, but folks adopt these harmful ideas because they are convinced they are good. If they are wrong and you know why, tell them. They may have overlooked something.
@Wolven Yes sir, all of this. What is "AI" after all but a reflection of our values?

@Wolven All of this and more...

It's almost like the people making the thing don't realize they *might* be the problem (and of course the uses thereof, etc...).

Also the thread gets this...

@Wolven If you add Patriarchy to this list you may have summed up all of humanity's downfall factors in one post.
@Wolven NAILED IT. Also, today I learned about Kyriarchy. Intersectional feminism, which as I understand it (still learning) is what fourth-wave feminism is all about. Thank you for sharing!
@Wolven Exactly. I would be a lot more worried about ground drones (aka robot dogs with guns) conducting AI-driven euthanasia sweeps on all us peasants were it not for the uncomfortable certainty that climate-pumped typhoon clusters will have destroyed the robot factories and the international flows of money and small tech gear that make them possible, long before they get a chance to maximize my paperclips.
@Wolven just seen a news item on this very subject and could not agree more
@Wolven The additional risk with AI is that even if our values are good, it's hard to describe our values to the AI. For example, if you tell a robot to simply make you a cup of coffee, it might step on a baby in the process, because you forgot to tell it to care about the baby.
@botahamec That's still our values, though. What questions you DON'T think to ask reflect your values and culture at least as much as the questions you do.
@Wolven We could hardcode rules like "don't step on babies". That's what ChatGPT has been doing lately. But of course, you can trick it. No program is perfect. And eventually the AI will try to do something nonsensical, like try harvesting water from bleach, because it can't find any water, and nobody said the coffee had to not be poisoned. The problem's worst case scenario happens when the AI is smarter than us, so we should expect that it can come up with something that humans can't.
@botahamec And that's still a failure of values. Think of all the things we take for granted as "common sense" or assumptions about the validity and universality of our lived experience. Now understand that each one of those things is culturally situated and contextual, and then understand that any "AI" system will have to be made to account for that fact, too.
@Wolven I guess my question is what do you think would happen if we all had perfect values? Would we then be able to solve the problem of telling the AI which things it cannot do?
@Wolven it does make for a good sales pitch though

@Wolven

My mom (who literally spent her life working for issues of peace and justice) said decades ago that colonizing other planets wouldn't solve any of our problems; we would only take them with us, unless we solved them here on Earth first.

The same principle seems to apply with regards to AI, particularly when it amplifies and exacerbates the existing threats you listed.

@Shachihoko Your mom is very astute.

@Wolven

Yes, she was, and I miss her deeply. Although the way things are going now, part of me thinks she's relieved not to still be here (or maybe she's just even more worried).

@Wolven

"And whatever "risk of extinction" "AI" poses, it poses because it is BUILT FROM THOSE EXTREMELY HUMAN VALUES."

Nailed it!

The creation story applies just as much to god creating man in his image (it's always "his" right?) as to man creating AI from his own regurgitated delusional bile.

@Wolven
Preach. Tools don't kill things: The wielders do.

Modern AI is a truly revolutionary technology that has the potential to change the world, and it certainly will. The issue that poses is how we deal with that change; if we're not careful, thoughtful, deliberate, we could do unspeakable damage.

@WelcomeToTheCafe how the tools are formed out of and embedded into the culture matters a great deal, too.

@Wolven
I can agree with that, certainly.

Thank you for your post!