Hey Philly folks --- SEPTA is piloting a system that supposedly uses "AI" to call the cops when the "AI" detects a gun.

This is terrifying. What kind of civil oversight do you all have going on out there?

https://tpinsights.com/philadelphia-is-allocating-hundreds-of-millions-of-dollars-to-address-mounting-gun-violence/

>>

Philadelphia is Allocating Hundreds of Millions of Dollars to Address Mounting Gun Violence

KEY INSIGHTS: Philadelphia has allocated over $200 million to violence prevention this year to address soaring gun violence; there were over 2,000 shooting…

The Plug

@alex is quoted raising key points. There's zero transparency about how the system is evaluated and it's pretty predictable what harms are going to happen --- and to whom.

>>

And can you spot the GLARING omission in this evaluation plan? (Answer in next post, for those who aren't sure.)

>>

@emilymbender

The complete lack of evaluation of false positives in a machine learning system is just flabbergasting.

@Leszek_Karlik @emilymbender OMG the First Rule of Metrics is to outline your counter-metric

(said differently: figure out how to Monkey's Paw perversely game your metric, measure to be sure you're not doing that)

@Leszek_Karlik @emilymbender and the obvious Monkey's Paw move here is to call in the guns EVERY TIME

that is the homicidal logic we see from "mad AIs" in SF all the time

"i will reduce human suffering by removing all the humans"

@trochee @Leszek_Karlik @emilymbender based on what i can tell these are not CV/AI experts who necessarily know all of the best practices. These are domain experts who have built a tactical tool, like building a tourniquet out of sticks and a t-shirt. It has a very old-school tinkerer/hacker vibe in a way. They are building something to solve a problem they are familiar with, and probably cobbled together just enough coding and ML to make it work -- coming from the complete opposite direction from a computer scientist who understands the intricacies of the tools and techniques they are using.

Maybe they would be willing to hire you for some consulting.

@DenialShown @Leszek_Karlik @emilymbender it's not a matter of best practices.

I'm objecting to the application of AI for these purposes altogether, especially because the existing AI systems are widely known to be amplifiers of cultural, race, gender, and language bias.

More like building a garrote than a tourniquet TBH.

@trochee @Leszek_Karlik @emilymbender i understand what you are saying (at least i hope i do). I don't necessarily disagree, but my own thinking isn't baked enough to go hard in any direction. What I'm offering is both an attempt at a richer understanding of the path that led them to do what they did and an analysis of the rhetoric.

In terms of the former, these are people who are trying to address a real, known, and tangible problem. They are attempting to bring innovative solutions, and working in the adjacent possible space of technologies that are immediately available. They are thinking at a very immediate and tactical level.

Along comes a group of academics throwing stones at their work, yet offering no proposals to address the original problem. No evidence has been presented of harm from the innovation, only speculation and accusations. "How dare they look down on us from their ivory towers, when we are at least trying to solve real, known, and tangible problems!?" I'm reminded of a Benjamin Sisko line from ST:DS9 "Well, it's easy to be a saint in Paradise…".

Again, the original concerns about the system under test might be valid. My only feedback, offered humbly, is that as i see it the question is largely being framed as "better the devil you know than the one you don't". As far as i can tell it's not a binary choice, though. Engaging with the creator and/or the transit agency offers a third or fourth way, where the tool is made better or the folks in charge can call off its use if it's truly bad.

To stretch that field-tourniquet-turned-garrote metaphor: teach them how to build the former instead of the latter, or observe the dressing closely enough to tell the field commander when to order "stop!" Telling someone in the next foxhole over that tourniquets are no good is generally not well received when they've got people already dying all around them.

@DenialShown @Leszek_Karlik @emilymbender I'm not convinced i understand everything you're saying, but I'm pretty sure that your perspective is not taking existing systems of power and control into account.

Legislative & "criminal justice" systems in the US and Europe (at least) are already full of friction and loopholes that reinforce existing power dynamics.

So are most AI systems -- and they punch down along the same axes.

@DenialShown @Leszek_Karlik @emilymbender

These "tinkerers/hackers" are not "punching up", they're playing "on easy mode" -- rich people can already get off parking tickets by throwing lawyers at the court until everybody backs down -- it's why SBF isn't currently awaiting trial behind bars

@DenialShown @Leszek_Karlik @emilymbender

I'd be interested in applications of AI that ran _counter_ to those systems of oppression, but i don't believe -- given the current provenance of the LLM training data and the curation of the models and their training -- that this is possible.

I don't share your optimism that these are scrappy underdogs punching up against entrenched power. That was the "disrupt" line that sold us into the hands of Uber and the "gig economy".