Will Beason

UT Austin iSchool PhD student studying ethical, sustainable AI. (he/him) 🏳️‍🌈

Horrid new report from @972mag on an "AI" system called Lavender and another called "Where's Daddy?", both used by the IDF. Lavender has a 10% error rate, and acceptable "collateral" ranged from 15 to 100 civilians per target.

This is sick, and it's the future of AI warfare for US Empire.

https://www.972mag.com/lavender-ai-israeli-army-gaza/

‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza

The Israeli army has marked tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human oversight and a permissive policy for casualties, +972 and Local Call reveal.

+972 Magazine
"In this account, a charismatic technology derives its power experientially and symbolically through the possibility or promise of action: what is important is not what the object is but how it invokes the imagination through what it promises to do." - The Charisma Machine by Morgan G. Ames
"They think if people can possess enough things they will be content to live in prison. But I will not believe that. I want the walls down. I want solidarity, human solidarity." - Shevek, The Dispossessed by Ursula K. Le Guin
Not only is this full dice-roll policing, it also threatens the rights, freedom, or even the life of whoever is unlucky enough to look a little bit like that artificial face. https://www.eff.org/deeplinks/2024/03/cops-running-dna-manufactured-faces-through-face-recognition-tornado-bad-ideas
Cops Running DNA-Manufactured Faces Through Face Recognition Is a Tornado of Bad Ideas

In keeping with law enforcement’s grand tradition of taking antiquated, invasive, and oppressive technologies, making them digital, and then calling it innovation, police in the U.S. recently combined two existing dystopian technologies in a brand new way to violate civil liberties. A police force...

Electronic Frontier Foundation
It's FAccT rejection day!
Well, God Emperor of Dune sure was ... something
"I find the philosophy that sees human beings as unknowable black boxes and machines as transparent deeply troubling. It seems to me a worldview that surrenders any attempt at empathy and forecloses the possibility of ethical development. The presumption that human decision-making is opaque and inaccessible is an admission that we have abandoned a social commitment to try and understand each other." - Virginia Eubanks, Automating Inequality

"Atrocity has no excuses, no mitigating argument. Atrocity never balances or rectifies the past. Atrocity merely arms the future for more atrocity. Whoever commits atrocity also commits those future atrocities thus bred." - Muad'Dib, Children of Dune

Naturally I post this quote *completely unrelated* to current events.

So it looks like my interview with TIME Magazine is available online… but seemingly only if you have Apple News+ (different from regular Apple News), and thus an Apple *product*? But hey! Better than nothing, while I try to get them to put it up everywhere, right? 😉
https://apple.news/A21yoP44dRRWLK41gzu7t9A
Worst Practices: Bias in the System — TIME

AIs, like people, can absorb prejudices and exacerbate problems instead of solving them. In this Q&A, Damien P. Williams, an expert on social justice, explains the issues and what we can do to push back.

So, about five days ago, people on Bsky and Twttr started highlighting Elsevier science papers with GPT/LLM hallmark phrases riddled throughout them. [Dozens and dozens (at least)] of peer-reviewed papers.

As I said then, and as I discussed in my dissertation, knowledge-making and expertise are always tricky processes, and they need deep, intentional confrontation and reform:
https://media.proquest.com/media/hms/PRVW/1/twSaS?_s=yIAhHtzhif4xd76I%2BihtcJJXTPw%3D

Anyway, now it looks like @404mediaco has dug into this and found *Even More of It*, and I am genuinely and completely struggling against despair at what the future of being an educator, researcher, and writer will even mean over the next 5 years and beyond.
https://www.404media.co/scientific-journals-are-publishing-papers-with-ai-generated-text/

Quite frankly, this should genuinely a) be the death of peer review as we know it (Again: AS WE KNOW IT), and b) lead to a complete reformulation of the knowledge-making and expertise processes, but it won't, and that terrifies and saddens me.