"The trusted internet-search giant is providing low-quality information in a race to keep up with the competition," --- this phrasing makes it starkly clear that it's a race to nowhere good.

https://www.bloomberg.com/news/features/2023-04-19/google-bard-ai-chatbot-raises-ethical-concerns-from-employees

From @daveyalba

>>

Google Bard AI Chatbot Raises Ethical Concerns From Employees

The search giant is making compromises on misinformation and other harms in order to catch up with ChatGPT, workers say

Bloomberg

“The group working on ethics that Google pledged to fortify is now disempowered and demoralized, the current and former workers said.”

>>

Google’s leaders decided that as long as it called new products “experiments,” the public might forgive their shortcomings, the employees said.

➡️We don’t tolerate “experiments” that pollute the natural ecosystem and we shouldn’t tolerate those that pollute the information ecosystem either.

>>

“Silicon Valley as a whole is still wrestling with how to reconcile competitive pressures with safety.”

➡️Are they though? It seems to me that those in charge (i.e. VCs and C-suite execs) are really only interested in competition (for $$).

“But ChatGPT’s remarkable debut meant that by early this year, there was no turning back.”

➡️False. We turned back from lead in gasoline. We turned back from ozone-destroying CFCs. We can turn back from text synthesis machines.

>>

“On the same day, [Google] announced that it would be weaving generative AI into its health-care offerings.”

➡️ 🚨🚨🚨

“Employees say they’re concerned that the speed of development is not allowing enough time to study potential harms.”

➡️Employees are correct.

>>

“One former employee said they asked to work on fairness in machine learning and they were routinely discouraged — to the point that it affected their performance review.”

➡️Not a good look, Google.

“But now that the priority is releasing generative AI products above all, ethics employees said it’s become futile to speak up.”

➡️And it shows…

>>

“When Google’s management does grapple with ethics concerns publicly, they tend to speak about hypothetical future scenarios about an all-powerful technology that cannot be controlled by human beings”

➡️So tempting to focus on fictional future harms rather than current real ones.

Thank you, @daveyalba for this reporting.

@emilymbender @daveyalba

If they shift the focus to an all-powerful technology that cannot be controlled by human beings, it's easier to distract from the truth - a flawed, specialized technology with applications that are mostly undesirable, controlled by a clique of privileged and disconnected tech oligarchs.

This will not end well.

Thanks @daveyalba for the article. Not much here is surprising, but it sure is disheartening.

@emilymbender @daveyalba Pichai's swan song is just bizarre. If AI harms are there, isn't it Google's responsibility to fix them? He's not some underfunded NGO but the chief of one of the wealthiest corporations in human history.

@chmps @emilymbender @[email protected] They do not care one bit.

If you look at it through the lens of "the folks who make the big decisions never once think about potential harm," every last thing makes sense.

We need to stop asking "Why don't they do anything?" and start asking "What do /we/ start doing right now about it?"

@emilymbender
🤔 hype becoming a stupidity storm,
the way a large fire becomes a firestorm 😒
@emilymbender Seems like the FDA should be looped in on this.
@emilymbender 🤨
in nearly all other tech segments, you need to prove that a product does no harm, and a company producing or even just trading the product is accountable, down to the engineer;
why doesn't that apply to software products?
... even with piles of red flags up ...
@wobweger @emilymbender this has been in the culture of the software industry for a long time. In most software EULAs you will find clauses disclaiming all liability for consequences arising from the use or misuse of the product. Software vendors routinely refuse to provide any guarantees about the correctness of their product. It's rotten and it should change.

@emilymbender

Open the pod bay doors Hal...

#tallship


@emilymbender I'm not even turning TOWARD them. I have so far avoided even playing with them (and usually, I'm all about new toys).

@emilymbender Safety for most of these companies, is at best a tick box on various legal compliance forms.

#TrustAndSafety is a ridiculous name for the function, when there's zero #trust in any of these companies and the #safety aspect mostly depends on who designed and manages the triage systems or policy nuances.

So much of my role as a T&S leader has been fighting with execs who are actively & wilfully ignorant about safety or #risk. They only ever briefly care when there are bad headlines.

@emilymbender Fun fact: Lead in gasoline and ozone-destroying CFCs were invented by the same guy, Thomas Midgley Jr.
@emilymbender It's not that I disagree (I definitely think the toll on the environment alone isn't worth it), but wouldn't reverting put the US at a severe disadvantage next to China--and possibly Russia?
@humansbgone @emilymbender The main thing LLMs have been shown to be able to do is to mostly-accurately echo what everybody already says about any given thing. The New York Times can handle that for us quite adequately.
@mjfgates @emilymbender They've also been facilitating a lot of programming workflows.

@emilymbender @RuthStarkman Turning trains from death machines (more annual deaths/maimings than Civil War at peak) into “mere” titans of the economy also has resonance.

A big question on my mind: what are the best levers to make this period of literal lawlessness short?

@luis_in_brief @emilymbender @RuthStarkman I have recently been thinking about the environmental Kuznets curve (thanks to @jfleck ), which turns out not necessarily to be descriptive of what actually happens: regulation is unfortunately not inevitable

https://www.intelligenteconomist.com/kuznets-curve/

The Environmental Kuznets Curve - Intelligent Economist

The Environmental Kuznets Curve is used to graph the idea that as an economy develops, market forces begin to increase and economic inequality decreases.

Intelligent Economist
@emilymbender How do you get hostile state actors to agree not to use such systems? How do you enforce that agreement?
@emilymbender And they still complain in the boardrooms about that.