Screen oracles can become folk school and a spiritual dope

Artificial intelligence is advancing rapidly, but it can damage the environment and threaten culture and freedom.

In our latest blog post, MSt in Practical Ethics student Tom de Kok questions the use of AI in healthcare by discussing "AI-trogenic" harm: a novel term for the unintentional harm caused by artificial intelligence (AI) in healthcare. To learn more, read the article here:
https://shorturl.at/uv8Xr

#practicalethics #ethics #AI #ethicsandAI #medicalethics

Practical Ethics in the News Blog


Another example from OpenAI showing how it weighs ethics against product-growth decisions.

https://www.theverge.com/2024/8/4/24213268/openai-chatgpt-text-watermark-cheat-detection-tool

#OpenAI #ethicsAndAI #genai #ai

OpenAI won’t watermark ChatGPT text because its users could get caught

OpenAI is reportedly divided internally over whether to watermark ChatGPT's text output, worrying that despite the benefits, watermarking could turn off users.

The Verge

Excited to share my new article, “The Harms of Terminology: Why We Should Reject So-Called ‘Frontier AI’” — now available and open access:

https://link.springer.com/article/10.1007/s43681-024-00438-1

#AI #AIhype #EthicsAndAI

The harms of terminology: why we should reject so-called “frontier AI” - AI and Ethics

In mid-2023, promoters of artificial intelligence (AI) as an "existential risk" coined a new term, "frontier AI," which refers to "highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety." Promoters of this new term were able to disseminate it via the United Kingdom (UK) government's Frontier AI Taskforce (formerly the Foundation Models Taskforce) as well as the UK's AI Safety Summit, held in November 2023.

I argue that adoption of the term "frontier AI" is harmful and contributes to AI hype. Promoting this new term is a way for its boosters to focus the public conversation around the AI-related risks they think are most important, namely "existential risk": a scenario in which AI is able to bring about the destruction of humanity. Simultaneously, "frontier AI" is a re-branding exercise for the large-scale generative machine learning (ML) models that have been shown to cause severe and pervasive harms (including psychological, social, and environmental harms). Unlike "existential risk," these harms are actual rather than theoretical, and the term "frontier AI" moves our collective focus away from actual harms toward hypothetical doomsday scenarios.

Moreover, "frontier AI" as a term invokes the colonial mindset, further reinscribing the harmful dynamics between the handful of powerful Western companies who produce today's generative AI models and the people of the "Global South" who are most likely to experience harm as a direct result of the development and deployment of these AI technologies.

SpringerLink

Who Said The AI ML Was Fair?

A look at fairness in AI and ML, and at F.A.I.R. data