You can see what the “let’s pause new AI work because AI is too dangerous” letter is for when you look at mainstream and local news coverage. It’s all about hypothetical harms from theoretical future products. It has sucked all the oxygen out of the room for discussion of current harms from existing tech.

(Local news here in Iceland fell especially hard for this nonsense.)

@baldur @JoeGermuska Yeah, it was a bit of a strategic error on Yudkowsky's part not to point at concrete, existing harms from unaligned and unfriendly AI. But I think that can somewhat be excused when you consider he's in a bubble, looking at higher-capacity AI tech like GPT-4 hooked up to programming interfaces and the Internet, not lower-capacity pervasive applications like face recognition and driving.
@benlk @baldur I've seen it argued that it was strategic intent—not error—to control the narrative
@JoeGermuska @baldur to what end?
@benlk keep regulators from meddling. Basically what @baldur said
@JoeGermuska That doesn't make sense to me, unless you believe that Yudkowsky's editorial is written in bad faith. The editorial's worries are consistent with things he's been saying since founding MIRI in 2000. He's genuinely worried about unfriendly AI, and the Time editorial invites regulators to step in.
@baldur

Pause Giant AI Experiments: An Open Letter (Future of Life Institute)

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

@JoeGermuska @baldur Oh! Most discussion I've seen has been around the more-extreme proposals in https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

The Open Letter on AI Doesn't Go Far Enough (TIME)

“One of the earliest researchers to analyze the prospect of powerful Artificial Intelligence warns of a bleak scenario”