"Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work."

https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html

‘The Godfather of AI’ Quits Google and Warns of Danger Ahead

For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.

The New York Times

Many women from Google who were pushed out and/or left have commented with agreement and frustration at Hinton's statements after departing Google. As @Mer__edith has written:

"Where were these guys when we spent months + thousand$ on lawyers? Where were they when we were organizing to stop it before it reached this point? Where were they when Sundar lied about us & diminished the risks we demonstrated? I'm not interested in dissent without solidarity."

https://twitter.com/mer__edith/status/1653103878471049241


Within tech, there's a script of (especially) men seeking fame by hyping a system to gain resources/power, then warning people about its dangers once they face criticism over those dangers.

Back in (checks) 2019, I called this the "evilbrag":

When a powerful man makes a hairshirt apology in a national magazine, to manage reputational risks and also acquire even more resources, even though he helped create the problem in the first place.

It's like a humblebrag for the harms you caused.

For some reason guys in tech who cause serious harms seem to only understand the idea of failing upward.

The public evilbrag is a basic stepping stone in that upward mobility.

Joseph Weizenbaum, who developed the first widely known AI chat system in 1966, authored one of the classic evilbrags in the field.

By expressing worry about the risks of the AI systems he created (and of computers and the Internet), he was catapulted to stardom (and made some important early critiques).

https://www.wsj.com/articles/SB120553421433837797

MIT Professor's Work Led Him To Preach the Evils of Computers

MIT professor Joseph Weizenbaum created a beguiling artifact of early computing called Eliza. But after test subjects said the program empathized with their problems, he spent decades preaching the computer apocalypse.

The Wall Street Journal

@natematias Not sure if Weizenbaum really fits into this category... I mean, was he warned about it beforehand (I honestly don't know)? Could he really have expected something as simple as Eliza to be taken seriously?
It seems quite different from people nowadays who create problems while actively ignoring any criticism (including the decades-old warnings from Weizenbaum).

@Doomed_Daniel that's a fair point. I don't know the full details of Weizenbaum's story, so it's possible he and others may not have anticipated the problems of Eliza.

I do think the story of his elevation to public intellectual has continued to provide a script to others who probably do know better.

@natematias
Part of the problem might be the general inability to learn from other people's mistakes.
Related: men who suddenly start caring about sexism/gender equality/... when they have a daughter.