Okay, so that AI letter signed by lots of AI researchers calling for a "Pause [on] Giant AI Experiments"? It's just dripping with AI hype. Here's a quick rundown.

First, for context, note that URL? The Future of Life Institute is a longtermist operation. You know, the people who are focused on maximizing the happiness of billions of future beings who live in computer simulations.

https://futureoflife.org/open-letter/pause-giant-ai-experiments/

#AIhype

>>

Pause Giant AI Experiments: An Open Letter - Future of Life Institute

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

Future of Life Institute

For some context, see: https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo

So that already tells you something about where this is coming from. This is gonna be a hot mess.

>>

Why longtermism is the world’s most dangerous secular credo | Aeon Essays

It started as a fringe philosophical theory about humanity’s future. It’s now richly funded and increasingly dangerous

Aeon

There are a few things in the letter that I do agree with; I'll try to pull them out of the dreck as I go along.

So, into the #AIhype. It starts with "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]".

>>

Footnote 1 there points to a lot of papers, starting with Stochastic Parrots. But we are not talking about hypothetical "AI systems with human-competitive intelligence" in that paper. We're talking about large language models.

https://faculty.washington.edu/ebender/stochasticparrots/

And the rest of that paragraph. Yes, AI labs are locked in an out-of-control race, but no one has developed a "digital mind" and they aren't in the process of doing that.

>>

Emily M. Bender

Emily M. Bender, Professor and Director, Professional MS in Computational Linguistics, Department of Linguistics University of Washington.

And could the creators "reliably control" #ChatGPT et al.? Yes, they could --- by simply not setting them up as easily accessible sources of non-information poisoning our information ecosystem.

And could folks "understand" these systems? There are plenty of open questions about how deep neural nets map inputs to outputs, but we'd be much better positioned to study them if the AI labs provided transparency about training data, model architecture, and training regimes.

>>

Next paragraph. Human-competitive at general tasks, eh? What does footnote 3 reference? The speculative fiction novella known as the "Sparks paper" and OpenAI's non-technical ad copy for GPT-4. ROFLMAO.

>>

@[email protected] on Mastodon on Twitter

“Remember when you went to Microsoft for stodgy but basically functional software and the bookstore for speculative fiction? arXiv may have been useful in physics and math (and other parts of CS) but it's a cesspool in "AI"—a reservoir for hype infections https://t.co/acxV4wm0vE”

Twitter

I mean, I'm glad that the letter authors & signatories are asking "Should we let machines flood our information channels with propaganda and untruth?" but the questions after that are just unhinged #AIhype, helping those building this stuff sell it.

>>

Okay, calling for a pause, something like a truce amongst the AI labs. Maybe the folks who think they're really building AI will consider it framed like this?

>>

Just sayin': We wrote a whole paper in late 2020 (Stochastic Parrots, 2021) pointing out that this head-long rush to ever larger language models without considering risks was a bad thing. But the risks and harms have never been about "too powerful AI".

Instead: They're about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources).

>>

They then say: "AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal."

Uh, accurate, transparent and interpretable make sense. "Safe", depending on what they imagine is "unsafe". "Aligned" is a codeword for weird AGI fantasies. And "loyal" conjures up autonomous, sentient entities. #AIhype

>>

Some of these policy goals make sense:

>>

Yes, we should have regulation that requires provenance and watermarking systems. (And it should ALWAYS be obvious when you've encountered synthetic text, images, voices, etc.)

Yes, there should be liability --- but that liability should clearly rest with people & corporations. "AI-caused harm" already makes it sound like there aren't *people* deciding to deploy these things.

>>

Yes, there should be robust public funding, but I'd prioritize non-CS fields that look at the impacts of these things over "technical AI safety research".

Also "the dramatic economic and political disruptions that AI will cause". Uh, we don't have AI. We do have corporations and VCs looking to make the most $$ possible with little care for what it does to democracy (and the environment).

>>

Policymakers: Don't waste your time on the fantasies of the techbros saying "Oh noes, we're building something TOO powerful." Listen instead to those who are studying how corporations (and governments) are using technology (and the narratives of "AI") to concentrate and wield power.

Start with the work of brilliant scholars like Ruha Benjamin, Meredith Broussard, Safiya Noble, Timnit Gebru, Sasha Costanza-Chock and journalists like Karen Hao and Billy Perrigo.

@emilymbender Sounds like "let's hinder the competition from training their LLMs after we trained ours" re-framed as "Oh noes, our AIs become too mighty".
@mhartle @emilymbender To me it reads more like "we've overextended, so let's chill competition while we consolidate our gains by presenting our questionable claims as unalterable axioms". Or maybe we're saying the same thing?
@emilymbender It seems like most of the bad parts of the letter are rhetorical (dog whistles to the AGI cult), and I am willing to swallow that if it's what it took to assemble a united front for measures that could be very welcome. We need more oversight and a general slowdown. Then we fight over the implementation.
@misc It's not dog whistles, exactly, so much as it is accepting wholesale the rhetorical framework, if not the epistemological framework, of the people who are selling access to the black box LLMs. Genuine skeptics of either the achievements or the intentions claimed by those people should discuss their wares in terms that do not aid their marketing, and that they would not like.
@chrislay Fair. I hope they write their own letter, and get a lot of the same people to sign onto it!
@misc @emilymbender Sure, but when joining a revolution, it's important to know who is trying to become its leader, and these aren't the people that should be leading it. It should be society-focussed, not technology/VC-focussed.
@compthink @emilymbender That's fair, and united fronts are hard. But I hope that the question can become, how can we productively join forces toward a common goal of slowing things down in the name of oversight - while establishing and preserving clear independence for the fight over how to implement this stuff.
@emilymbender Right. The technology as such is not the issue; the issue is (state) capitalism. The narratives about gizmos and gadgets are just a smokescreen to distract us from the actual questions.
@emilymbender How about an open letter to pause giant capitalistic experiments contributing to our imminent self destruction?
@emilymbender Moreover, the "we've built something too powerful" rhetoric will fuel the concentration of power, so that only "reputable" orgs will be allowed to develop it.

@emilymbender

> I'd prioritize non-CS fields that look at the impacts

Thank you for this important observation. ICYMI, I wrote about this in WIRED this week https://www.wired.com/story/tech-governance-public-health/

To Hold Tech Accountable, Look to Public Health

The field of public health has transformed medicine, yet failed the most vulnerable. This trajectory can be avoided.

WIRED
@emilymbender and if we’re talking about VCs, they don’t even care about a business model or long-term profitability. We don’t even get innovation or employment out of their money grab. ALL of the externalities are negative.
@emilymbender The seed investors and Series A investors have to make their money back. As much as possible. The Series C and Series D people can end up underwater. It's like with WeWork: the people at the top of the triangle cashed out at a personal profit IIRC.

@emilymbender This is the central point: we don’t have AI, and it’s not in any way close! I wrote an explainer about this last week.

https://adsei.org/2023/03/12/theres-no-such-thing-as-artificial-intelligence/?amp=1

There’s no such thing as Artificial Intelligence

There truly is no such thing as Artificial Intelligence. But, but ChatGPT, you cry! But self-driving cars! But The Algorithm! None of these are even close to intelligent. But how do we know?

Australian Data Science Education Institute

@emilymbender with a followup about bias: Machines are unbiased, and other bedtime stories.

https://adsei.org/2023/03/27/machines-are-unbiased-and-other-bedtime-stories/?amp=1

Machines are unbiased, and other bedtime stories

When we use machine learning for really important things like recruitment and health, we need to be immensely cautious and rationally sceptical of the results that we get.

Australian Data Science Education Institute

@emilymbender
As IBM's training material stated *back in the '70s*:

"A computer can never be held accountable, therefore a computer must never make a management decision."

The corollary being that the person who decided to let the computer make decisions must be the one held accountable.

But that's, ultimately, why there's so much hype here. People are falling over each other to get computers to make major management and resource-allocation decisions, notably for other people, so that they can essentially sell virtual consulting services without actually having to do anything.

This is rent-seeking as a service.

@emilymbender "you put a system into production that you didn't fully understand and can't fully explain, and you're 'shocked that this frivolous law suit' has been brought against you?" Hopefully some judge in a couple of years
@Emily_S @emilymbender kinda reminds me of the W. Bush era bank deregulation that allowed massive bundling of bad mortgages with robo-signings, as the insiders bought insurance against 'em because they knew what they were selling would fail. Then Lehman Bros failed & $13 trillion in US home equity vanished. Of course this time it's not finance but nearly every aspect of communication & behavior. Propaganda & misinformation & forgery will never be the same!

@emilymbender is watermarking an actual technology, or just something we still have to figure out? While I can imagine it for audio and video, I fail (though that's probably my limitation) to imagine it for text. Perhaps long-form text can have detectable patterns inside that point to an LLM generating it? But wouldn't that signal be invisible in small snippets?

I like the idea of being able to identify the source of LLM created text.

@signaleleven @emilymbender there are already some schemes for watermarking text. They usually work better for longer texts.

It works by messing with next-word probabilities according to some pre-determined scheme; see https://www.nytimes.com/interactive/2023/02/17/business/ai-text-detection.html for more info, and the toy sketch at the end of this thread.

How ChatGPT Could Embed a ‘Watermark’ in the Text It Generates

An arms race is underway to build more advanced artificial intelligence models like ChatGPT. So is one to build tools to determine whether something was written by A.I.

The New York Times
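
To make "messing with next-word probabilities" concrete, here is a minimal toy sketch in Python of the kind of "green list" scheme the NYT piece describes: the generator deterministically splits the vocabulary based on the previous token and nudges sampling toward the "green" half, and a detector that knows the seeding scheme counts how often tokens land in their green list. All function names and parameters here are illustrative assumptions for the sketch, not any real library's API.

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    # Deterministically split the vocabulary in half, seeding the shuffle
    # on the previous token. Generator and detector share this function.
    seed = int(hashlib.sha256(prev_token.encode("utf-8")).hexdigest(), 16)
    shuffled = sorted(vocab)
    random.Random(seed).shuffle(shuffled)
    return set(shuffled[: len(shuffled) // 2])

def bias_next_token_probs(probs: dict[str, float], prev_token: str,
                          boost: float = 2.0) -> dict[str, float]:
    # The watermarking step: upweight "green" tokens before sampling,
    # then renormalize. The text still reads normally; the bias only
    # shows up statistically.
    green = green_list(prev_token, list(probs))
    weighted = {t: p * (boost if t in green else 1.0) for t, p in probs.items()}
    total = sum(weighted.values())
    return {t: w / total for t, w in weighted.items()}

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    # Detector: unwatermarked text lands in its green lists ~50% of the
    # time by chance; watermarked text scores noticeably higher.
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev, vocab))
    return hits / max(1, len(tokens) - 1)
```

This is also why short snippets are a problem, as @signaleleven suspected: detection is a statistical test on the green fraction, and with only a handful of tokens the gap between ~50% (chance) and the boosted rate disappears into the noise, which is why these schemes work better on longer texts.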