RE: https://mastodon.social/@nixCraft/116188939207308679

“It [Oracle] is also exploring an arrangement called “bring your own chip” (BYOC), where new customers will be required to supply their own hardware, shifting capital requirements off Oracle’s books.”

#hardware #CriticalAIStudies

Hi AoIRistas, after having to sit @AoIR conferences out for many years (😥), I am reeeaaallllllly planning on attending #AoIR2026 🤗🤞 Papers are already in preparation, but I'm also happy to plan/join roundtables, fishbowls, and pre-conferences. Feel free to reach out, notably re:
#STS #GenAIuse #AIgovernance #AIpolicy #criticalAIstudies

The #CriticalAI Guest Forum welcomes an op-ed from Ricky D. Crano, a Film and Media Studies Lecturer at UC Irvine, on disturbing “AI”-related institutional changes in higher ed

https://criticalai.org/2025/03/12/guest-forum-ricky-d-cranos-uc-irvines-school-of-humanities-is-shutting-down-critical-ai-research/

#amWriting #criticalAIstudies

GUEST FORUM: RICKY D. CRANO’S “UC IRVINE’S SCHOOL OF HUMANITIES IS SHUTTING DOWN CRITICAL AI RESEARCH”

Critical AI’s Guest Forum welcomes writers on topics of potential interest to our readers inside and outside of the academy. Below, an op-ed from Ricky D. Crano (Film & Media Studies, UC Irvin…

Critical AI
Karen Hao (@_KarenHao) on X

I have an investigation coming out tomorrow over a year in the making, related to the climate impacts of AI and a tech company's hypocrisy. Stay tuned 🙏

X (formerly Twitter)

Today the real JANE ROSENZWEIG talked about the future of undergraduate research in the era of Generative AI! Thread below about the event.

#criticalAI #CriticalAIStudies

Many thanks to Duke University Press over on Twitter, who have published this Q&A with our editor Lauren M. E. Goodlad on their blog: https://dukeupress.wordpress.com/2024/02/22/qa-with-lauren-m-e-goodlad-editor-of-critical-ai/

#CriticalAI #CriticalAIStudies

Q&A with Lauren M. E. Goodlad, editor of Critical AI

Lauren M. E. Goodlad is Distinguished Professor of English and Comparative Literature at Rutgers University as well as a faculty affiliate of the Center for Cultural Analysis (CCA), the Rutgers Bri…

Duke University Press News

Another important article today by @billyperrigo. Check it out here: https://time.com/6684266/openai-democracy-artificial-intelligence/ Some commentary follows.

A thread 🧵 /1

#CriticalAIStudies #AI #CriticalAI

Inside OpenAI’s Plan to Make AI More ‘Democratic’

The company wants to align its AI to ‘human values.’ But whose values should AI reflect? And who should get to decide?

Time

All of these tests in the medical domain share a striking similarity: an LLM does better on a standardized test than first-year med students, or a bot does better in a texting scenario than real doctors, who don't actually practice medicine that way.

More grist for the industry's mill as it makes the case that this flawed, exploitative, and environmentally irresponsible technology will benefit humanity. /3

#CriticalAI #CriticalAIStudies #AI

A Call For A Systemic Dismantling: These Women Refuse To Be Hidden Figures In The Development Of AI

This convergence of events, the week at OpenAI and the dreaded New York Times article, sheds light on a disconcerting reality: the increasing marginalization of women in artificial intelligence, and a glaring lack of recognition and respect for their work that is emblematic within both industry and media.

Forbes

From @Mer__edith, posted initially on X/Twitter:

"This paper is really important, presenting empirical evidence of the imbrication bet. AI & the surveillance biz model. This is notable particularly given that most production surveillance tech is proprietary, its existence and use hidden from the public."

Read it here: https://arxiv.org/abs/2309.15084?ref=404media.co

#CriticalAIStudies #CriticalAI #AI

The Surveillance AI Pipeline

A rapidly growing number of voices argue that AI research, and computer vision in particular, is powering mass surveillance. Yet the direct path from computer vision research to surveillance has remained obscured and difficult to assess. Here, we reveal the Surveillance AI pipeline by analyzing three decades of computer vision research papers and downstream patents, more than 40,000 documents. We find the large majority of annotated computer vision papers and patents self-report their technology enables extracting data about humans. Moreover, the majority of these technologies specifically enable extracting data about human bodies and body parts. We present both quantitative and rich qualitative analysis illuminating these practices of human data extraction. Studying the roots of this pipeline, we find that institutions that prolifically produce computer vision research, namely elite universities and "big tech" corporations, are subsequently cited in thousands of surveillance patents. Further, we find consistent evidence against the narrative that only these few rogue entities are contributing to surveillance. Rather, we expose the fieldwide norm that when an institution, nation, or subfield authors computer vision papers with downstream patents, the majority of these papers are used in surveillance patents. In total, we find the number of papers with downstream surveillance patents increased more than five-fold between the 1990s and the 2010s, with computer vision research now having been used in more than 11,000 surveillance patents. Finally, in addition to the high levels of surveillance we find documented in computer vision papers and patents, we unearth pervasive patterns of documents using language that obfuscates the extent of surveillance. Our analysis reveals the pipeline by which computer vision research has powered the ongoing expansion of surveillance.

arXiv.org