Luke Munn, Liam Magee, Vanicka Arora, and Awais Hameed Khan introduce “Unmaking AI,” a framework for critically evaluating generative AI image models beyond surface-level bias metrics—focusing instead on business ecosystems, training data, and generative outputs in #criticalAI 3.2
Read the article here: https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-12095973/406214/Unmaking-AI-A-Framework-for-Critical-Investigation
Nicolas Malevé and Katrina Sluis review THE BIRTH OF COMPUTER VISION (Dobson), highlighting how the book reconstructs the early history of computer vision and its ties to military funding, computational methods, and changing ontologies of the image.
Link: https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-11700282/401265/The-Birth-of-Computer-Vision
In this book review, Aarthi Vadde examines AI SNAKE OIL (Narayanan and Kapoor) and CO-INTELLIGENCE (Mollick), highlighting how both books sift through noise to assess AI’s real capacities, limits, and social impacts.
Link: https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-11700273/401269/AI-Snake-Oil-What-AI-Can-Do-What-It-Can-t-and-How
In "Rethinking Error," historian Johan Fredrikzon goes to the very heart of a large language model's incapacity to "know": a problem the industry likes to call hallucinations, but which Fredrikzon calls "epistemological indifference."
Link: https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-11700255/401267/Rethinking-Error-Hallucinations-and
Jakko Kemper examines how generative AI makes aesthetic production seem frictionless while relying on extractive infrastructures, linking everyday creative work to the “imperial mode of living.”
Link: https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-11700273/401269/AI-Snake-Oil-What-AI-Can-Do-What-It-Can-t-and-How
#PurnimaMankekar examines how information, Big Data, and algorithms are shaped by postcolonial histories and development projects, focusing on the Aadhaar biometric ID system in #india.
A nuanced look at how data becomes a tool of governance and world-making.
Link: https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-11700264/401271/Genealogies-of-Knowledge-Production-Information
In "Endangered Judgment," political theorist Luke Fernandez discusses the imperiousness of instrumental reason. Returning to Joseph Weizenbaum's distinction between calculation and judgment (which was borrowed from Hannah Arendt), Fernandez shows that when we treat machine logic as if it can replace human judgment, we risk everything that matters in decision-making: from contextual understanding to our responsibility to others.
Three artists from the Algorithmic Resistance Research Group (“Cultural Red Teaming: ARRG! and Creative Misuse of AI Systems”) flip AI on its head, turning its logic into a playful critique of creativity and control: a sharp read for anyone interested in art, tech, or resistance.

Ask an Expert: Evaluating LLM “Research Assistants” and their Risks for Novice Researchers
Welcome to our new ASK AN EXPERT feature, a partnership between Critical AI and Critical AI @ Rutgers’ NEH-funded DESIGN JUSTICE LABS network. In response to an anthropology professor who asked abo…