"The trusted internet-search giant is providing low-quality information in a race to keep up with the competition," --- this phrasing makes it starkly clear that it's a race to nowhere good.
From @daveyalba
>>
“The group working on ethics that Google pledged to fortify is now disempowered and demoralized, the current and former workers said.”
>>
Google’s leaders decided that as long as it called new products “experiments,” the public might forgive their shortcomings, the employees said.
➡️We don’t tolerate “experiments” that pollute the natural ecosystem and we shouldn’t tolerate those that pollute the information ecosystem either.
>>
“Silicon Valley as a whole is still wrestling with how to reconcile competitive pressures with safety.”
➡️Are they though? It seems to me that those in charge (i.e. VCs and C-suite execs) are really only interested in competition (for $$).
“But ChatGPT’s remarkable debut meant that by early this year, there was no turning back.”
➡️False. We turned back from lead in gasoline. We turned back from ozone-destroying CFCs. We can turn back from text synthesis machines.
>>
“On the same day, [Google] announced that it would be weaving generative AI into its health-care offerings.”
➡️ 🚨🚨🚨
“Employees say they’re concerned that the speed of development is not allowing enough time to study potential harms.”
➡️Employees are correct.
>>
“One former employee said they asked to work on fairness in machine learning and they were routinely discouraged — to the point that it affected their performance review.”
➡️Not a good look, Google.
“But now that the priority is releasing generative AI products above all, ethics employees said it’s become futile to speak up.”
➡️And it shows…
>>
“When Google’s management does grapple with ethics concerns publicly, they tend to speak about hypothetical future scenarios about an all-powerful technology that cannot be controlled by human beings”
➡️So tempting to focus on fictional future harms rather than current real ones.
Thank you, @daveyalba for this reporting.
If they shift the focus to an all-powerful technology that cannot be controlled by human beings, it's easier to distract from the truth: a flawed, specialized technology with applications that are mostly undesirable, controlled by a clique of privileged and disconnected tech oligarchs.
This will not end well.
Thanks @daveyalba for the article. Not much here is surprising, but it sure is disheartening.
@chmps @emilymbender @[email protected] They do not care one bit.
If you look at it through the lens of "the folks who make the big decisions never once think about potential harm," every last thing makes sense.
We need to stop asking "Why don't they do anything?" and start asking "What do *we* start doing right now about it?"
@emilymbender Safety, for most of these companies, is at best a tick box on various legal compliance forms.
#TrustAndSafety is a ridiculous name for the function, when there's zero #trust in any of these companies and the #safety aspect mostly depends on who designed and manages the triage systems or policy nuances.
So much of my role as a T&S leader has been fighting with execs who are actively & wilfully ignorant about safety or #risk. They only ever briefly care when there are bad headlines.
@emilymbender @RuthStarkman Turning trains from death machines (more annual deaths/maimings than Civil War at peak) into “mere” titans of the economy also has resonance.
A big question on my mind: what are the best levers to make this period of literal lawlessness short?
@luis_in_brief @emilymbender @RuthStarkman I have recently been thinking about the environmental Kuznets curve (thanks to @jfleck), which turns out not necessarily to be descriptive of what happens: regulation is unfortunately not inevitable.