Hey Philly folks --- SEPTA is doing a pilot of a system that supposedly uses "AI" to call the cops when the "AI" detects a gun.
This is terrifying. What kind of civil oversight do you all have going on out there?
>>
@alex is quoted raising key points. There's zero transparency about how the system is evaluated and it's pretty predictable what harms are going to happen --- and to whom.
>>
And can you spot the GLARING omission in this evaluation plan? (Answer in next post, for those who aren't sure.)
>>
No evaluation of false positives in a machine learning system is just flabbergasting.
@Leszek_Karlik @emilymbender OMG the First Rule of Metrics is to outline your counter-metric
(said differently: figure out how a Monkey's Paw would perversely game your metric, then measure to be sure you're not doing that)
@Leszek_Karlik @emilymbender and the obvious Monkey's Paw move here is to call in the guns EVERY TIME
that is the homicidal logic we see from "mad AIs" in SF all the time
"i will reduce human suffering by removing all the humans"
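The "call in the guns EVERY TIME" failure mode above can be sketched numerically. This is a hypothetical toy, not anything from SEPTA's actual pilot: a degenerate detector that always alarms scores perfectly on recall (it never misses a gun) while generating a flood of false positives, which is exactly why a counter-metric like precision has to be measured too.

```python
# Toy illustration (hypothetical numbers): a detector that alarms on
# every frame games a recall-only metric.
def always_alarm(frame):
    return True  # degenerate "detector": reports a gun in every frame

# Toy ground truth: 1 frame out of 1000 actually contains a gun.
frames = [{"has_gun": i == 0} for i in range(1000)]
preds = [always_alarm(f) for f in frames]

true_pos = sum(p and f["has_gun"] for p, f in zip(preds, frames))
false_pos = sum(p and not f["has_gun"] for p, f in zip(preds, frames))
actual_pos = sum(f["has_gun"] for f in frames)

recall = true_pos / actual_pos                  # 1.0 -- looks perfect
precision = true_pos / (true_pos + false_pos)   # 0.001 -- 999 bogus police calls

print(f"recall={recall:.3f} precision={precision:.3f} false_alarms={false_pos}")
# → recall=1.000 precision=0.001 false_alarms=999
```

Reporting recall alone would make this detector look flawless; the counter-metric (precision, or simply counting false alarms) is what exposes the 999 unnecessary armed responses.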