"The trusted internet-search giant is providing low-quality information in a race to keep up with the competition." That phrasing makes it starkly clear that it's a race to nowhere good.

https://www.bloomberg.com/news/features/2023-04-19/google-bard-ai-chatbot-raises-ethical-concerns-from-employees

From @daveyalba

>>

Google Bard AI Chatbot Raises Ethical Concerns From Employees

The search giant is making compromises on misinformation and other harms in order to catch up with ChatGPT, workers say

Bloomberg

“The group working on ethics that Google pledged to fortify is now disempowered and demoralized, the current and former workers said.”

>>

Google’s leaders decided that as long as it called new products “experiments,” the public might forgive their shortcomings, the employees said.

➡️We don’t tolerate “experiments” that pollute the natural ecosystem and we shouldn’t tolerate those that pollute the information ecosystem either.

>>

“Silicon Valley as a whole is still wrestling with how to reconcile competitive pressures with safety.”

➡️Are they though? It seems to me that those in charge (i.e. VCs and C-suite execs) are really only interested in competition (for $$).

“But ChatGPT’s remarkable debut meant that by early this year, there was no turning back.”

➡️False. We turned back from lead in gasoline. We turned back from ozone-destroying CFCs. We can turn back from text synthesis machines.

>>

@emilymbender Safety, for most of these companies, is at best a tick box on various legal compliance forms.

#TrustAndSafety is a ridiculous name for the function when there's zero #trust in any of these companies, and the #safety aspect mostly depends on who designs and manages the triage systems and policy nuances.

So much of my role as a T&S leader has been fighting with execs who are actively & wilfully ignorant about safety or #risk. They only ever briefly care when there are bad headlines.