There's yet another "AI will kill us all! It poses a risk of extinction!" letter going around, and I just… Y'all, I am just so fucking tired.

CAPITALISM poses risk of extinction (climate change, right the fuck now).

WHITE SUPREMACY poses risk of extinction (genocide, eugenics).

HEGEMONY poses risk of extinction (nuclear FUCKING WAR).

And whatever "risk of extinction" "AI" poses, it poses because it is BUILT FROM THOSE EXTREMELY HUMAN VALUES.

Even if you stopped every "AI" project running, RIGHT THIS SECOND, those values would still kill us. And no matter how long you "pause" your "AI" projects, if you don't address those values? Then when you start your "AI" back up? You'll KEEP BUILDING THOSE SAME VALUES IN.

This is not hard. At this point, as much as it pains me to say it, it's not even novel. And yet you're still not fucking getting it.

I'm so goddamn tired.

@Wolven The additional risk with AI is that even if our values are good, it's hard to fully describe them to the AI. For example, if you tell a robot to simply make you a cup of coffee, it might step on a baby in the process, because you forgot to tell it to care about the baby.
@botahamec That's still our values, though. What questions you DON'T think to ask reflect your values and culture at least as much as the questions you do.
@Wolven We could hardcode rules like "don't step on babies". That's what ChatGPT has been doing lately. But of course, you can trick it. No program is perfect. And eventually the AI will try something nonsensical, like harvesting water from bleach, because it can't find any water and nobody said the coffee had to not be poisoned. The worst-case scenario happens when the AI is smarter than us, so we should expect it to come up with something humans can't.
@botahamec And that's still a failure of values. Think of all the things we take for granted as "common sense" or assumptions about the validity and universality of our lived experience. Now understand that each one of those things is culturally situated and contextual, and then understand that any "AI" system will have to be made to account for that fact, too.
@Wolven I guess my question is: what do you think would happen if we all had perfect values? Would we then be able to solve the problem of telling the AI which things it cannot do?