Okay, so that AI letter signed by lots of AI researchers calling for a "Pause [on] Giant AI Experiments"? It's just dripping with AI hype. Here's a quick rundown.

First, for context, note that URL? The Future of Life Institute is a longtermist operation. You know, the people who are focused on maximizing the happiness of billions of future beings who live in computer simulations.

https://futureoflife.org/open-letter/pause-giant-ai-experiments/

#AIhype

>>

Pause Giant AI Experiments: An Open Letter - Future of Life Institute

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

For some context, see: https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo

So that already tells you something about where this is coming from. This is gonna be a hot mess.

>>

Why longtermism is the world’s most dangerous secular credo | Aeon Essays

It started as a fringe philosophical theory about humanity’s future. It’s now richly funded and increasingly dangerous

There are a few things in the letter that I do agree with; I'll try to pull them out of the dreck as I go along.

So, into the #AIhype. It starts with "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]".

>>

Footnote 1 there points to a lot of papers, starting with Stochastic Parrots. But we are not talking about hypothetical "AI systems with human-competitive intelligence" in that paper. We're talking about large language models.

https://faculty.washington.edu/ebender/stochasticparrots/

And the rest of that paragraph. Yes, AI labs are locked in an out-of-control race, but no one has developed a "digital mind" and they aren't in the process of doing that.

>>

And could the creators "reliably control" #ChatGPT et al.? Yes, they could: by simply not setting them up as easily accessible sources of non-information poisoning our information ecosystem.

And could folks "understand" these systems? There are plenty of open questions about how deep neural nets map inputs to outputs, but we'd be much better positioned to study them if the AI labs provided transparency about training data, model architecture, and training regimes.
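To make that concrete, here's a minimal sketch (in PyTorch, using a toy stand-in network, not any lab's actual model) of the kind of input-to-output probing that access to weights and architecture makes possible, and that an API-only black box forecloses:

```python
# Minimal sketch: with access to a model's weights, researchers can probe
# how inputs map to outputs, e.g. via gradient-based saliency. The tiny
# network below is a hypothetical stand-in, not any lab's actual model.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A toy "open" model: architecture and parameters are fully inspectable.
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)

x = torch.randn(1, 8, requires_grad=True)  # a probe input
model(x).sum().backward()

# The input gradient shows which input dimensions the output is sensitive
# to: the kind of analysis a closed, API-only system rules out.
print("saliency:", x.grad.abs().squeeze().tolist())
```

With API-only access, none of this is possible: you can't inspect parameters, take gradients with respect to inputs, or relate behavior back to the training data.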

>>

@emilymbender I've come across one benefit. Bing Chat crushes various articles into a single answer. I imagine you can see the most prominent articles it's crushing right there in the search results. But if you open those articles yourself, you're inundated with clickbait and various other inducements to pollute your attention.

Whereas Bing Chat gets straight to the point as requested.

Not that it's trustworthy, but at least it's a minimal answer to consider. If you trace through the search results, you can see what it used to formulate the answer. It becomes a synopsis, almost.

@rood @emilymbender and then other ML models will be used to poison the inputs to manipulate the summaries. Like SEO 2.0.