Okay, so that AI letter signed by lots of AI researchers calling for a "Pause [on] Giant AI Experiments"? It's just dripping with AI hype. Here's a quick rundown.

First, for context, note the URL: the Future of Life Institute is a longtermist operation. You know, the people focused on maximizing the happiness of billions of future beings who live in computer simulations.

https://futureoflife.org/open-letter/pause-giant-ai-experiments/

#AIhype

>>

Pause Giant AI Experiments: An Open Letter - Future of Life Institute

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.


For some context, see: https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo

So that already tells you something about where this is coming from. This is gonna be a hot mess.

>>

Why longtermism is the world’s most dangerous secular credo | Aeon Essays

It started as a fringe philosophical theory about humanity’s future. It’s now richly funded and increasingly dangerous


There are a few things in the letter that I do agree with; I'll try to pull them out of the dreck as I go along.

So, into the #AIhype. It starts with "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]".

>>

Footnote 1 there points to a lot of papers, starting with Stochastic Parrots. But we are not talking about hypothetical "AI systems with human-competitive intelligence" in that paper. We're talking about large language models.

https://faculty.washington.edu/ebender/stochasticparrots/

And the rest of that paragraph. Yes, AI labs are locked in an out-of-control race, but no one has developed a "digital mind" and they aren't in the process of doing that.

>>

Emily M. Bender, Professor and Director, Professional MS in Computational Linguistics, Department of Linguistics, University of Washington.

And could the creators "reliably control" #ChatGPT et al.? Yes, they could --- by simply not setting them up as easily accessible sources of non-information poisoning our information ecosystem.

And could folks "understand" these systems? There are plenty of open questions about how deep neural nets map inputs to outputs, but we'd be much better positioned to study them if the AI labs provided transparency about training data, model architecture, and training regimes.

>>

@emilymbender I want to see 100% "open" AI research -- in the sense of #freeknowledge and #foss ideals -- assuming it's even possible to do this ethically. (I think 100% open & ethical is possible, but it's worth saying that's a big assumption.) ♥️💔

I don't have a good intuition for how 100% open can work, given the tension with your suggestion that we not put these things into the "wild", where they're used to amplify misinformation and cause other harms?

Enjoying your posts here! Thank you.