Here's what I wrote in 2015 when our OpenAI overlords were announced, and after I had a terrible experience at NIPS (a conference now named NeurIPS after a number of people, mostly women, demanded a name change and received death threats for doing so). It's interesting to see my reflections from ~8 years ago; there isn't much I'd change except for certain phrases I probably wouldn't use now (e.g. "implicit bias").

I struggle to understand Silicon Valley’s libertarians’ allergic reaction to discussing problems caused by the extreme homogeneity of the research circle. The same is true for many in the AI research community of which I am a part. As this thoughtfully written letter to SCOTUS from physicists explains,

The implication that physics or “hard sciences” are somehow divorced from the social realities of racism in our society is completely fallacious.

The exclusion of people from physics solely on the basis of the color of their skin is an outrageous outcome that ought to be a top priority for rectification.

The rhetorical pretense that including everyone in physics class is somehow irrelevant to the practice of physics ignores the fact that we have learned and discovered all the amazing facts about the universe through working together in a community.

The benefits of inclusivity and equity are the same for physics as they are for every other aspect of our world.

The statement holds true for AI and any type of “ism”. One would think that the people trying to “stop AI from harming society” would pay attention to this sort of stuff.

However, amidst the wide acclaim given to Elon Musk and others’ announcement of a “non-profit” venture to “Stop AI from destroying humanity,” only one potential problem has been raised: the fact that all the researchers are working on deep learning.
A White tech tycoon born and raised in South Africa during apartheid, along with an all-White, all-male set of investors and researchers, is trying to stop AI from "taking over the world," and the only potential problem we see is that "all the researchers are working on deep learning"?
Google recently came out with a computer vision algorithm that classified Black people as Apes. AS APES. Some try to explain away this mishap by stating that the algorithm must have picked out color as an essential discriminator in classifying humans.
If there was even one Black person in the team, or just someone who thinks about race, a product classifying Black people as apes would not have been released. Either the dataset would have been sufficiently augmented, or more research into the algorithm would have been mandated. But a product with this type of misclassification would not have been released. Imagine an algorithm that regularly classifies White people as non-human.

No American company would call this a production ready person detection system.

At this point in these types of conversations, people usually mention that there aren't qualified deep learning researchers from this-or-that group. I can name at least 10 extremely qualified female researchers in my sleep--including one who left the field due to exclusion--and swathes of them can be found here (LINK).

When you ask someone to "recommend" potential researchers in this extremely segregated and homogeneous world, they will recommend other researchers who are their friends, read the same Hacker News every day, and talk about the same things. They will recommend other boys in the very tight boys' network.
Funnily enough, the same men feel too uncomfortable to participate in events encouraging women in CS simply because the room might be more than 70% women. Go to the conferences, talk to people, attend the parties, and you will see.
As the night proceeds, you will be transported to the Mad Men era, where drunk old men--professors or company representatives from places such as Google--start groping, unsolicited, some of the handful of women present at the conference parties. And the women are either completely assimilated (a well documented method of survival for minorities) or too uncomfortable to speak up.
But that type of overt caricature of sexism is nothing compared to the covert one bestowed upon us by people who believe they are too objective, rational and intelligent to have biases. Keep in mind they are researchers--researchers who have the skills and resources to learn about most subjects they deem to be worthy of their time. But they are simply not interested in an attempt to understand their implicit biases, read literature or ask questions about it.
These are the people creating AI. And the tasks, values and priorities of the AI agent are reflections of those of its creators. People harm society. We don’t have to project into the future to see AI’s potential adverse effects. It is already happening. However, Silicon Valley does not speak out on atrocities such as the US drone warfare that is illegally killing people.
Because presumably the intelligence possessed by this select group of men (many speak as though they belong to a different species with superior intelligence and rationality) cannot be bothered to think of mundane day-to-day activities such as war. Even the nuclear bomb was not created by physicists whose day-to-day lives were completely divorced from the complex realities of the world we live in.
They were very much aware of politics and history, and actively engaged in a capacity higher than the abstract, futuristic intellectual discussions in a vacuum that AI researchers of today seem to be engaged in. By the way, these discussions, once again, are conducted amongst a highly homogeneous group of people: see for example the participants in a NIPS panel on the societal impacts of machine learning (http://www.doc.ic.ac.uk/~mpsha/NIPS_Symposium_2015.html).

I am very concerned about the future of AI. Not because of the risk of rogue machines taking over. But because of the homogeneous, one dimensional group of men who are currently involved in advancing the technology.

Concerned AI researcher

@timnitGebru

I am concerned for both reasons, because one promotes the other with regard to bias, etc.

But what also has me worried is that even though some #AI luminaries have seen the light and called for a moratorium, it is for the wrong reasons.

Legislation will never catch up with the exponential evolution of #GAI, not even in the #eu

Concerned observer and commentator

@HistoPol @timnitGebru

« Legislation will never catch up with the exponential evolution of … »

Probably not, if things keep going the way they have so far.

Isn’t that (forecasted) exponential evolution dependent on (or directly related to) exponential growth in primary energy demand?

🤔

@leadb @HistoPol @timnitGebru The "forecasted" exponential growth is just propaganda with no plausible mechanism proposed.

@dalias @HistoPol @timnitGebru

Propaganda is certainly (a large part of) the Silly-Con-Valley marketing mechanism, shamelessly called self-fulfilling prophecies 😏

Yet it has kind of worked for many local gurus to become globally influential enough to redirect a large part of the global primary energy demand, and all the forms of capital relying on it (finance, intelligence/skills, infrastructures), towards their own personal goals/profits.

And it looks like they're up for a new try.

🤞everyone