This piece by Nico Grant covers how @Mahdi was censored at Google over publishing this work that I shared yesterday: https://arxiv.org/abs/2209.15259
https://www.nytimes.com/2023/04/07/technology/ai-chatbots-google-microsoft.html
On the Impossible Safety of Large AI Models

Large AI Models (LAIMs), of which large language models are the most prominent recent example, showcase some impressive performance. However, they have been empirically found to pose serious security issues. This paper systematizes our knowledge about the fundamental impossibility of building arbitrarily accurate and secure machine learning models. More precisely, we identify key challenging features of many of today's machine learning settings. Namely, high accuracy seems to require memorizing large training datasets, which are often user-generated and highly heterogeneous, with both sensitive information and fake users. We then survey statistical lower bounds that, we argue, constitute a compelling case against the possibility of designing high-accuracy LAIMs with strong security guarantees.

arXiv.org
First, he covers how El Mahdi El Mhamdi was censored. He was the last person we hired into our team at Google (he has since resigned). Within like 2 weeks of him joining, I (his manager) was fired; then Margaret Mitchell, his subsequent manager, was fired.
Mahdi is one of the most brilliant scientists I have worked with, because he is incredibly thorough in both his computer science work and his journalism work. His journalist friends are imprisoned in Morocco, and he has been getting constant threats. Of course, none of these tech companies care about how their platforms (e.g. Facebook, Twitter, YouTube) are the biggest vectors for this type of harassment.
Mahdi wrote a paper and "used mathematical theorems to warn that the biggest A.I. models are more vulnerable to cybersecurity attacks and present unusual privacy risks because they’ve probably had access to private data stored in various locations around the internet." You can read it here: https://lnkd.in/gdCH9yqF
"Though an executive presentation later warned of similar A.I. privacy violations, Google reviewers asked Dr. El Mhamdi for substantial changes. He refused and released the paper through École Polytechnique."
"He resigned from Google this year, citing in part ‘research censorship.’ He said modern A.I.’s risks ‘highly exceeded’ the benefits. ‘It’s premature deployment,’ he added."

Then you have: "In March, two reviewers from Ms. Gennai’s team submitted their risk evaluation of Bard. They recommended blocking its imminent release, two people familiar with the process said. Despite safeguards, they believed the chatbot was not ready.

Ms. Gennai changed that document. She took out the recommendation and downplayed the severity of Bard’s risks, the people said."

When you have ZERO regulation, the CEOs are basically in a middle-school MUST HAVE BIGGEST CHATBOT race, and the "AI ethics" team is under the SVP in charge of covering the company's ass, lobbying, and harassing employees, what does one expect?

Then we have, at Microsoft:

"Despite having a “transparency” principle, ethics experts working on the chatbot were not given answers about what data OpenAI used to develop its systems, according to three people involved in the work. Some argued that integrating chatbots into a search engine was a particularly bad idea, given how it sometimes served up untrue details, a person with direct knowledge of the conversations said."

"In the fall, Microsoft started breaking up what had been one of its largest technology ethics teams. The group, Ethics and Society, trained and consulted company product leaders to design and build responsibly. In October, most of its members were spun off to other groups, according to four people familiar with the team."

Again, what do we expect?

@timnitGebru Shit. This is damning stuff.

@timnitGebru "Microsoft started breaking up what had been one of its largest technology ethics teams ..."

... back in 2000.

FTFY

@timnitGebru sure, why not? Everything is just gleefully spiralling around the drain. Let’s everyone stop caring about anything and just be evil. That’s the vibe I’m getting from so many directions
@timnitGebru
The end of the Information Age. The Disinformation Age is here.