Okay, so that AI letter signed by lots of AI researchers calling for a "Pause [on] Giant AI Experiments"? It's just dripping with AI hype. Here's a quick rundown.

First, for context, note the URL: the Future of Life Institute is a longtermist operation. You know, the people who are focused on maximizing the happiness of billions of future beings who live in computer simulations.

https://futureoflife.org/open-letter/pause-giant-ai-experiments/

#AIhype

>>

Pause Giant AI Experiments: An Open Letter - Future of Life Institute

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.


For some context, see: https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo

So that already tells you something about where this is coming from. This is gonna be a hot mess.

>>

Why longtermism is the world’s most dangerous secular credo | Aeon Essays

It started as a fringe philosophical theory about humanity’s future. It’s now richly funded and increasingly dangerous


There are a few things in the letter that I do agree with; I'll try to pull them out of the dreck as I go along.

So, into the #AIhype. It starts with "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]".

>>

Footnote 1 there points to a lot of papers, starting with Stochastic Parrots. But we are not talking about hypothetical "AI systems with human-competitive intelligence" in that paper. We're talking about large language models.

https://faculty.washington.edu/ebender/stochasticparrots/

And the rest of that paragraph. Yes, AI labs are locked in an out-of-control race, but no one has developed a "digital mind" and they aren't in the process of doing that.

>>

Emily M. Bender, Professor and Director, Professional MS in Computational Linguistics, Department of Linguistics, University of Washington.
