Hey Philly folks --- SEPTA is piloting a system that supposedly uses "AI" to call the cops when the "AI" detects a gun.

This is terrifying. What kind of civil oversight do you all have going on out there?

https://tpinsights.com/philadelphia-is-allocating-hundreds-of-millions-of-dollars-to-address-mounting-gun-violence/

>>

Philadelphia is Allocating Hundreds of Millions of Dollars to Address Mounting Gun Violence

KEY INSIGHTS: Philadelphia has allocated over $200 million to violence prevention this year to address soaring gun violence; there were over 2,000 shooting…

The Plug

@alex is quoted raising key points. There's zero transparency about how the system is evaluated and it's pretty predictable what harms are going to happen --- and to whom.

>>

And can you spot the GLARING omission in this evaluation plan? (Answer in next post, for those who aren't sure.)

>>

@emilymbender

No evaluation of false positives in a machine learning system is just flabbergasting.

@Leszek_Karlik @emilymbender OMG the First Rule of Metrics is to outline your counter-metric

(said differently: figure out how to Monkey's Paw perversely game your metric, measure to be sure you're not doing that)

@Leszek_Karlik @emilymbender and the obvious Monkey's Paw move here is to call in the guns EVERY TIME

that is the homicidal logic we see from "mad AIs" in SF all the time

"i will reduce human suffering by removing all the humans"

@trochee @Leszek_Karlik @emilymbender and the first law of humans is to see where the incentives lie. Are incentives for finding error ? Or just incentives for not finding errors? And of course, follow the money.
@KatLS @Leszek_Karlik @emilymbender yep. Understanding the reward surface is key to understanding system behavior, even if the system is people