"The trusted internet-search giant is providing low-quality information in a race to keep up with the competition." That phrasing makes it starkly clear: this is a race to nowhere good.

https://www.bloomberg.com/news/features/2023-04-19/google-bard-ai-chatbot-raises-ethical-concerns-from-employees

From @daveyalba

>>

Google Bard AI Chatbot Raises Ethical Concerns From Employees

The search giant is making compromises on misinformation and other harms in order to catch up with ChatGPT, workers say

Bloomberg

“The group working on ethics that Google pledged to fortify is now disempowered and demoralized, the current and former workers said.”

>>

Google’s leaders decided that as long as it called new products “experiments,” the public might forgive their shortcomings, the employees said.

➡️We don’t tolerate “experiments” that pollute the natural ecosystem and we shouldn’t tolerate those that pollute the information ecosystem either.

>>

@emilymbender
In healthcare there are oversight bodies for experiments on humans, and people get to decide whether or not to participate and take the risk. Harms are anticipated and documented. An experiment also measures some outcome, and at the end there's an evaluation of the data and results. This sounds like an unconsented, unsupervised, and unscientific nightmare.
@BeneCal @emilymbender
🎯 Well said.