@CriticalAI

1.1K Followers
129 Following
2.2K Posts

Critical AI's second issue is out! Read it at the link in our profile.

Critical AI began as a new interdisciplinary initiative at Rutgers University, organized and led by a steering committee with support from the Center for Cultural Analysis and the Rutgers Center for Cognitive Science. To learn more, see our website below.

For more information about the journal, or to share ideas, please email [email protected].

To reach our editor directly, email [email protected].

Rutgers Website: https://sites.rutgers.edu/critical-ai/
Events and Archive: https://sites.rutgers.edu/critical-ai/event-details/
Critical AI Blog: https://criticalai.org/
Second issue of Critical AI Journal: https://read.dukeupress.edu/critical-ai

Luke Munn, Liam Magee, Vanicka Arora, and Awais Hameed Khan introduce “Unmaking AI,” a framework for critically evaluating generative AI image models beyond surface-level bias metrics—focusing instead on business ecosystems, training data, and generative outputs in #criticalAI 3.2

Read the article here: https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-12095973/406214/Unmaking-AI-A-Framework-for-Critical-Investigation

Nicolas Malevé and Katrina Sluis review THE BIRTH OF COMPUTER VISION (Dobson), highlighting how the book reconstructs the early history of computer vision and its ties to military funding, computational methods, and changing ontologies of the image.

Link: https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-11700282/401265/The-Birth-of-Computer-Vision

In this book review, Aarthi Vadde examines AI SNAKE OIL (Narayanan and Kapoor) and CO-INTELLIGENCE (Mollick), highlighting how both books sift through noise to assess AI’s real capacities, limits, and social impacts.

Link: https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-11700273/401269/AI-Snake-Oil-What-AI-Can-Do-What-It-Can-t-and-How

In "Rethinking Error," historian Johan Fredrikzon goes to the very heart of a large language model's incapacity to "know": a problem the industry likes to call hallucinations, but which Fredrikzon calls "epistemological indifference."

Link: https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-11700255/401267/Rethinking-Error-Hallucinations-and

Jakko Kemper examines how generative AI makes aesthetic production seem frictionless while relying on extractive infrastructures, linking everyday creative work to the “imperial mode of living.”

Link: https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-11700273/401269/AI-Snake-Oil-What-AI-Can-Do-What-It-Can-t-and-How

#PurnimaMankekar examines how information, Big Data, and algorithms are shaped by postcolonial histories and development projects, focusing on the Aadhaar biometric ID system in #india.

A nuanced look at how data becomes a tool of governance and world-making.

Link: https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-11700264/401271/Genealogies-of-Knowledge-Production-Information

In "Endangered Judgment," political theorist Luke Fernandez discusses the imperiousness of instrumental reason. Returning to Joseph Weizenbaum's distinction between calculation and judgment (which was borrowed from Hannah Arendt), Fernandez shows that when we treat machine logic as if it can replace human judgment, we risk everything that matters in decision-making: from contextual understanding to our responsibility to others.

Three artists from the Algorithmic Resistance Research Group ("Cultural Red Teaming: ARRG! and Creative Misuse of AI Systems") take on AI by flipping it on its head: they turn its logic into a playful critique of creativity and control—a sharp read for anyone into art, tech, or resistance.

Ask an Expert: Evaluating LLM "Research Assistants" and their Risks for Novice Researchers

Welcome to our new ASK AN EXPERT feature, a partnership between Critical AI and Critical AI @ Rutgers’ NEH-funded DESIGN JUSTICE LABS network. In response to an anthropology professor who asked abo…

Critical AI