"Dr. Hinton said he has quit his job at Google, where he has worked for more than decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work."

https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html

‘The Godfather of AI’ Quits Google and Warns of Danger Ahead

For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.

The New York Times

Many women who were pushed out of Google and/or left have responded with agreement and frustration to Hinton's statements after his departure. As @Mer__edith has written:

"Where were these guys when we spent months + thousand$ on lawyers? Where were they when we were organizing to stop it before it reached this point? Where were they when Sundar lied about us & diminished the risks we demonstrated? I'm not interested in dissent without solidarity."

https://twitter.com/mer__edith/status/1653103878471049241


Within tech, there's a script of (especially) men seeking fame from hyping a system to gain resources/power, then warning people about its dangers once they face criticism over the dangers.

Back in (checks) 2019, I called this the "evilbrag":

When a powerful man makes a hairshirt apology in a national magazine to manage reputational risks and acquire even more resources, even though he helped create the problem in the first place.

It's like a humblebrag for the harms you caused

For some reason guys in tech who cause serious harms seem to only understand the idea of failing upward.

The public evilbrag is a basic stepping stone in that upward mobility.

Joseph Weizenbaum, who developed the first widely-known AI chat system in 1966, authored one of the classic evilbrags in the field.

By expressing worry about the risks of the AI systems he created (and of computers and the Internet more broadly), he was catapulted to stardom (and made some important early critiques).

https://www.wsj.com/articles/SB120553421433837797

MIT Professor's Work Led Him To Preach the Evils of Computers

MIT professor Joseph Weizenbaum created a beguiling artifact of early computing called Eliza. But after test subjects said the program empathized with their problems, he spent decades preaching the computer apocalypse.

The Wall Street Journal

Why do people fall in love with evilbrags from people who created the problems they now decry?

Facing the failures of technocentrism, people worry that the solutions might also be technocentric (rather than social or political) and conclude that only the experts who caused the problem can help manage it.

The result? Imagined solutions reproduce exactly the same blindspots that created the problem in the first place.

If you're ever tempted to evilbrag about something terribly harmful that you created, what else could you do?

1. open your eyes and notice people who were faster than you to notice the problems
2. acknowledge their contributions and apologize to them
3. ask (and compensate) people to help you identify your blindspots, and invest the time to understand them
4. leverage your influence to uplift the actual pioneers and support people who've been doing the work in the meantime

I should observe that there's a huge public good from people questioning the harms of the things they helped create — it's not an easy choice, it's a very clarifying thing for the public to see, and journalists do us a service by telling those stories.

The challenge for the person (and for society) is what we do next.

I was wondering if I was being too hard on this particular group of AI leaders, and then I read this interview in the MIT Tech Review.

When someone's theory of democracy is the movie "Don't Look Up" we have a problem.

https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/

Geoffrey Hinton tells us why he’s now scared of the tech he helped build

“I have suddenly switched my views on whether these things are going to be more intelligent than us.”

MIT Technology Review

@natematias Agreed. Deeply disappointing. I'm all for incorporating "more philosophical work" into AI development and policy (as well as political science, sociology, etc, etc).

I wish leading CS folks had the humility to recognize their math knowledge doesn't necessarily generalize to other fields, and they should instead, maybe, TALK to experts in these areas before making massive unsupported claims...

@natematias Not sure if Weizenbaum really fits into this category.. I mean, was he warned of it before (I honestly don't know)? Could he really have expected something as simple as Eliza to be taken seriously?
Seems quite different from people who create problems while actively ignoring any criticism (including the decades-old warnings from Weizenbaum) nowadays

@Doomed_Daniel that's a fair point. I don't know the full details of Weizenbaum's story, so it's possible he and others may not have anticipated the problems of Eliza.

I do think the story of his elevation to public intellectual has continued to provide a script to others who probably do know better.

@natematias
Part of the problem might be the general inability to learn from other people's mistakes.
Related: Men who suddenly start caring about sexism/gender equality/... when they have a daughter
@Doomed_Daniel @natematias I was thinking similar. The way he describes it in Computer Power and Human Reason he became worried when he saw how people responded to Eliza, their confiding in it. Then alarmed when people exaggerated its abilities, suggesting it could replace human therapists. That led him to write the book, which apparently took a couple of years, ending late 1975.
@[email protected] @natematias "My own shock was administered not by any important political figure espousing his philosophy of science, but by some people who insisted on misinterpreting a piece of work I had done."

@brad this is a very fair point. Thank you for bringing up this very clarifying context.

I tend to think about Weizenbaum as a person who showed that there's a possible pathway from innovator to concerned ethicist. It's nice to hear more of the story: that he was acting in good faith, whether or not those inspired by him are.

@natematias yep. That's the story behind _The Social Dilemma_, basically
@ntnsndr Yep. Thinking about writing a post in which I cite multiple cases, including The Social Dilemma.
@natematias "I alone was smart enough to do this, and to trick you into using it, and so I alone understand how to fix it"

@natematias Jack "Twitter never should have been a company" Dorsey

Various warmongers like Kissinger and Rumsfeld (K just wrote a book on AI??)

The peak is Sam "I am building Skynet but don't worry I won't let it destroy the world" Altman, because the repentance precedes the evil act, which is itself potentially investor-facing vaporware

Niebuhr on cheap grace comes to mind

@ntnsndr Such a good point!

Minor note: wasn't it Bonhoeffer who developed the idea of "cheap grace" in The Cost of Discipleship? I mentally pair that with Niebuhr because of the serenity prayer, which is my canonical example of cheap grace.

@natematias Oh ha—I stand corrected!

@ntnsndr @natematias Perhaps this is a corollary to the classic "Stages of a Bubble". One might look at that and consider that those seen as Cassandras are those in the Stealth and Awareness phases.

So an "evilbrag" would be someone who goes public with their criticism during the Mania Phase, between Greed and Delusion. A bunch of people will then try to jump on the evilbrag bandwagon but they'll be too late.
https://transportgeography.org/contents/chapter3/transportation-and-economic-development/bubble-stages/

@natematias it's actually also a method frequently used by CEOs: driving a business into the ground (economically and/or ethically) and then being the first to question the current course and speed in order to stay afloat.

@natematias
The original evilbrag:

“Now I am become Death, the destroyer of worlds”

Robert Oppenheimer

@natematias “evilbrag” 👍 didn't make it into Oxford's yet but should be considered
@natematias this is about the google AI guy, but it applies equally to Yoel Roth. :)
The Prodigal Techbro

Prodigal tech bro stories skip straight from the past, when they were part of something that—surprise!—turned out to be bad, to the present, where they are now a moral authority on how to do good…

The Conversationalist
@natematias it doesn't even require any actual harms, or even the perception of them (cf. Bill Joy)
@natematias smells like robert oppenheimer
@natematias @Mer__edith @maria you asked in chat about Hinton’s comments, and this thread from @natematias is immediately what came to mind—didn’t seem like there was much new that others hadn’t already said at much greater personal cost. But I admit I didn’t dive particularly deep.