"Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work."

https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html

‘The Godfather of AI’ Quits Google and Warns of Danger Ahead

For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.

The New York Times

Many women from Google who were pushed out and/or left have commented with agreement and frustration at Hinton's statements after departing Google. As @Mer__edith has written:

"Where were these guys when we spent months + thousand$ on lawyers? Where were they when we were organizing to stop it before it reached this point? Where were they when Sundar lied about us & diminished the risks we demonstrated? I'm not interested in dissent without solidarity."

https://twitter.com/mer__edith/status/1653103878471049241


Within tech, there's a familiar script: (especially) men seek fame, resources, and power by hyping a system, then warn the public about its dangers once they face criticism over those dangers.

Back in (checks) 2019, I called this the "evilbrag":

When a powerful man makes a hairshirt apology in a national magazine, to manage reputational risks and also acquire even more resources, even though he helped create the problem in the first place.

It's like a humblebrag for the harms you caused.

For some reason guys in tech who cause serious harms seem to only understand the idea of failing upward.

The public evilbrag is a basic stepping stone in that upward mobility.

Joseph Weizenbaum, who developed the first widely known AI chat system in 1966, authored one of the classic evilbrags in the field.

By expressing worry about the risks of the AI systems he created (and of computers and the Internet more broadly), he was catapulted to stardom (and made some important early critiques).

https://www.wsj.com/articles/SB120553421433837797

MIT Professor's Work Led Him To Preach the Evils of Computers

MIT professor Joseph Weizenbaum created a beguiling artifact of early computing called Eliza. But after test subjects said the program empathized with their problems, he spent decades preaching the computer apocalypse.

The Wall Street Journal

Why do people fall in love with evilbrags from people who created the problems they now decry?

Facing the failures of technocentrism, people worry that the solutions might also be technocentric (rather than social or political) and conclude that only the experts who caused the problem can help manage it.

The result? Imagined solutions reproduce exactly the same blindspots that created the problem in the first place.

If you're ever tempted to evilbrag about something terribly harmful that you created, what else could you do?

1. open your eyes and notice people who were faster than you to notice the problems
2. acknowledge their contributions and apologize to them
3. ask (and compensate) people to help you identify your blindspots, and invest the time to understand them
4. leverage your influence to uplift the actual pioneers and support people who've been doing the work in the meantime

I should observe that there's a huge public good from people questioning the harms of the things they helped create — it's not an easy choice, it's a very clarifying thing for the public to see, and journalists do us a service by telling those stories.

The challenge for the person (and for society) is what we do next.

I was wondering if I was being too hard on this particular group of AI leaders, and then I read this interview in the MIT Tech Review.

When someone's theory of democracy is the movie "Don't Look Up" we have a problem.

https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/

Geoffrey Hinton tells us why he’s now scared of the tech he helped build

“I have suddenly switched my views on whether these things are going to be more intelligent than us.”

MIT Technology Review

@natematias Agreed. Deeply disappointing. I'm all for incorporating "more philosophical work" into AI development and policy (as well as political science, sociology, etc, etc).

I wish leading CS folks had the humility to recognize that their math knowledge doesn't necessarily generalize to other fields, and to instead, maybe, TALK to experts in these areas before making massive unsupported claims...