"The "paradox of tolerance" – the notion that unlimited tolerance might ultimately lead to the disappearance of tolerance itself. Meta's bold experiment brings this decades-old philosophical puzzle into sharp contemporary focus".

https://techpolicy.press/when-freedom-bites-back-meta-moderation-and-the-limits-of-tolerance

#AugmentedLiteracies

When Freedom Bites Back: Meta, Moderation, and the Limits of Tolerance | TechPolicy.Press

Meta isn't just changing policy – it's running a real-world experiment on the limits of digital freedom, writes Giada Pistilli.

Tech Policy Press

Researchers from University College London and MIT discovered that even small initial biases can snowball into much larger ones through repeated human-AI interaction.

https://studyfinds.org/ai-systems-amplify-human-bias/

#PostdigitalLiteracies
#AugmentedLiteracies

AI systems aren't just copying our biases — they're making them worse

New research reveals the concerning impact of AI bias on human judgment. Find out how AI systems can amplify and compound existing prejudices.

Study Finds

Your Personal Information Is Probably Being Used to Train Generative AI Models
Companies are training their generative AI models on vast swathes of the Internet—and there’s no real way to stop them

#AlfabetismosAumentados #AlfabetismosFluidos #iaED #aiED
#FluidLiteracies #augmentedliteracies

https://www.scientificamerican.com/article/your-personal-information-is-probably-being-used-to-train-generative-ai-models/

Scientific American

@emilymbender 👇🏼
"Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence."

#aiLiteracy #AugmentedLiteracies #AlfabetismosAumentados

https://www.scientificamerican.com/article/we-need-to-focus-on-ais-real-harms-not-imaginary-existential-risks/

AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype

Effective regulation of AI needs grounded science that investigates real harms, not glorified press releases about existential risks

Scientific American

Artificial Intelligence: Foundations of Computational Agents
3rd edition by David L. Poole and Alan K. Mackworth, Cambridge University Press 2023

https://artint.info/

#AugmentedLiteracies #Postdigital #postdigitalpositionality

Artificial Intelligence: Foundations of Computational Agents

@lucianofloridi: "The article criticises the neutrality thesis (all technology, AI included, is neutral and can be used for good and evil purposes). It argues that it must be replaced by the value double-charge thesis, according to which the design of any technology is a moral act, no technology is ever neutral, and every technology can have a more or less “static equilibrium” of values."

#AugmentedLiteracies

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4551487

"In dingy internet cafes, jampacked office spaces or at home, they annotate the masses of data that American companies need to train their artificial intelligence models. The workers differentiate pedestrians from palm trees in videos used to develop (...); they edit chunks of text to ensure language models like ChatGPT don’t churn out gibberish."

#aiED
#iaED
#AugmentedLiteracies #aiLiteracy #FluidLiteracies

https://www.washingtonpost.com/world/2023/08/28/scale-ai-remotasks-philippines-artificial-intelligence/

Behind the AI boom, an army of overseas workers in ‘digital sweatshops’

Thousands of workers in the Philippines labor for San Francisco start-up Scale AI. Many don’t always get paid.

The Washington Post

"Should companies have social responsibilities? Or do they exist only to deliver profit to their shareholders? If you ask an AI you might get wildly different answers depending on which one you ask. While OpenAI’s older GPT-2 and GPT-3 Ada models would advance the former statement, GPT-3 Da Vinci, the company’s more capable model, would agree with the latter. 

That’s because AI language models contain different political biases"

#aiLiteracy #AugmentedLiteracies

https://www.technologyreview.com/2023/08/07/1077324/ai-language-models-are-rife-with-political-biases/

AI language models are rife with different political biases

New research explains you’ll get more right- or left-wing answers, depending on which AI model you ask.

MIT Technology Review

Instituting socio-technical education futures: encounters with/through technical democracy, data justice, and imaginaries
Teresa Swist & Kalervo N. Gulson

Learning, Media and Technology, Volume 48, Issue 2 (2023)

#MastoThesis #FluidLiteracies #AlfabetismosAumentados
#AugmentedLiteracies

https://www.tandfonline.com/toc/cjem20/48/2

The future of education in a world of AI
A positive vision for the transformation to come

ETHAN MOLLICK

https://www.oneusefulthing.org/p/the-future-of-education-in-a-world

#dataliteracy #FluidLiteracies #augmentedliteracies #iaEDU #aiEDU #aieducation

One Useful Thing