MichaelG

@MichaelGasser
Ugh -- agreed to speak to a journalist about whether people (actually women in particular, because it's a women's magazine type publication) should use chatbots for health information. I said in no uncertain terms that they should not, but they found someone else to quote too saying it could be beneficial blah blah blah. And then the article ends with a quote from me saying "Don't do this" followed by a "quote" from ChatGPT --- i.e. more synthetic text published as news. Grr.

This is the real, human cost of mass surveillance of everyone's private digital communications.

If we actually care about keeping people safe, we need more end-to-end encryption not less.

@willknight @emilymbender
Am so so so thankful we (people reading about AI) have Emily's input to shed light on our blind spots. Global understanding of AI systems is fundamentally more informed because of how well she can see things and describe what she sees. 😍 (/end)
These are the people creating AI. And the tasks, values and priorities of the AI agent are reflections of those of its creators. People harm society. We don’t have to project into the future to see AI’s potential adverse effects. It is already happening. However, Silicon Valley does not speak out on atrocities such as the US drone warfare that is illegally killing people.
But that type of overt caricature of sexism is nothing compared to the covert one bestowed upon us by people who believe they are too objective, rational and intelligent to have biases. Keep in mind they are researchers--researchers who have the skills and resources to learn about most subjects they deem to be worthy of their time. But they are simply not interested in an attempt to understand their implicit biases, read literature or ask questions about it.
Because presumably the minds of this select group of men (many speak as though they belong to a different species with superior intelligence and rationality) cannot be bothered to think about mundane day-to-day realities such as war. Even the nuclear bomb was not created by physicists whose day-to-day lives were completely divorced from the complex realities of the world we live in.

I am very concerned about the future of AI. Not because of the risk of rogue machines taking over, but because of the homogeneous, one-dimensional group of men who are currently involved in advancing the technology.

Concerned AI researcher

The Eritrean and Ethiopian refugees who have been in Sudan for generations are in a dire situation right now, and apparently getting ZERO protection from UNHCR. People said that, as usual, what UNHCR advertises vs what they do on the ground are diametrically opposed. This was what @refugeesinlibya had been raising awareness about as well. The institutions that were created to protect refugees are themselves often the problem.
All of us impacted by US militarism must stand up, connect our struggles, and work to build a new society that prioritizes our people and environment. @nodutdol
https://magazine.scienceforthepeople.org/vol25-3-killing-in-the-name-of/the-sacrifice-of-human-health-and-environment-in-south-korea-under-us-military-occupation/
The Sacrifice of Human Health and Environment in South Korea Under US Military Occupation • SftP Magazine

The US neocolonial agenda steals Korean land, pollutes the environment, harms people's health, and continues to obstruct peace on the peninsula.

Science for the People Magazine

This article by Tekendra Parmar is heartbreaking. Makes my blood boil that people like Yann LeCun still go around gaslighting us about "94% of hate speech taken down by AI" nonsense.
https://www.businessinsider.com/facebooks-local-partners-say-hate-speech-stays-on-the-platform-2023-4

"One of the things that Facebook said was, 'We are not arbiters of truth.' ... I remember asking, wouldn't it be better if Facebook was taking down posts, [rather] than having posts stay on the platform that could hurt people," the partner said.

Facebook's local partners say hate speech stays on the platform

Facebook's "trusted partners" in Ethiopia say the platform was slow to reply to urgent hate speech warnings and allowed hateful content to stay online.

Insider