After 9 months, the New Orleans Police Department’s use of facial recognition has resulted in zero arrests and multiple false positives
14 out of 15 requests were of black people. Facial recognition is notoriously bad with darker skin tones.
Racial Discrimination in Face Recognition Technology …harvard.edu/…/racial-discrimination-in-face-reco…
While true, it is important to remember that New Orleans is a predominantly black city: black residents make up ~58% of the population, compared to ~12% nationally. So it does make sense there would be more requests of black people than you might otherwise expect. Maybe not as high as 14/15, but about 9 out of 15 would be expected (if we assume all races commit crimes at the same rate).
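Rough back-of-the-envelope math, treating each request as an independent draw at the city's ~58% base rate (a simplification, obviously, since requests are not random samples of residents):

```python
from math import comb

p = 0.58   # assumed share of black residents in New Orleans
n = 15     # total facial recognition requests

# Expected number of requests of black people under the base rate alone
print(f"Expected: {n * p:.1f} of {n}")   # ~8.7, i.e. roughly 9 out of 15

# Probability of 14 or more out of 15 if requests really were independent
# draws at that base rate (binomial tail)
tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in (14, 15))
print(f"P(14+ of 15) ≈ {tail:.3f}")      # ≈ 0.003, i.e. about 0.3%
```

So even granting the demographic base rate, 14 out of 15 is a pretty big outlier.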
Yeah, but statistics is a b*tch.
We had a similar technology on a test run some years ago at a train station in Berlin, capital of Germany and largest city in the EU with 3.8M people.
The results the government happily touted as a success were devastating. They had a true positive rate of 80% (and this was already cooked since they tested several systems but only reported the best results), which is really not that good to start with.
But they were also extremely proud of the false positive rate, which was below 0.1%. That doesn't sound too bad, does it?
Well, let’s see…
True positive means you actually identified the people you were looking for. Now, I don't know the number of people Berlin's police is actively looking for, but it's not that many. And the chances of one of them actually passing through that very station are even lower. And out of those, 20% go undetected. That's one out of five. Great. If I were a terrorist, I would happily take that chance.
So now let's have a look at the false positive rate, which means you incorrectly identified a totally harmless person as a terrorist/infected/whatever. The population for that condition is: everyone passing through that station.
Let's assume there are 100k people on any given day (which IIRC is roughly half of what that station in Berlin actually has). 0.1% of 100k is 100 people, every day, who are mistakenly reported as "terrorists". Yay.
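Put the two rates together and the alerts are almost all noise. A minimal sketch of that math; the 100k passengers, 0.1% false positive rate and 80% true positive rate are the numbers from above, while the count of wanted people actually passing through per day is a made-up assumption:

```python
daily_passengers = 100_000
false_positive_rate = 0.001   # 0.1% of harmless passers-by get flagged
true_positive_rate = 0.80     # 80% of wanted people actually get detected
wanted_per_day = 5            # assumed, purely for illustration

false_alarms = daily_passengers * false_positive_rate    # 100 per day
true_hits = wanted_per_day * true_positive_rate          # 4 per day

precision = true_hits / (true_hits + false_alarms)
print(f"False alarms per day: {false_alarms:.0f}")
print(f"True hits per day:    {true_hits:.0f}")
print(f"Share of alerts that are real: {precision:.1%}")  # ≈ 3.8%
```

So even under generous assumptions, roughly 96% of the people flagged would be completely innocent.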
I am asking a group of scientists, who should be very well-versed in statistics and in weights (you know, one of the biggest components of a machine learning model), to account for how biased their data is when engineering their model.
It’s really not a hard ask.
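For what "accounting for the bias" could look like in practice, inverse-frequency sample weighting is one standard, if blunt, tool. This is a generic sketch with placeholder group labels, not anything a facial recognition vendor actually ships:

```python
from collections import Counter

groups = ["A"] * 800 + ["B"] * 150 + ["C"] * 50   # an imbalanced training set
counts = Counter(groups)
n_samples, n_groups = len(groups), len(counts)

# weight = n_samples / (n_groups * count_of_that_group), so under-represented
# groups count for more during training instead of being averaged away
group_weight = {g: n_samples / (n_groups * c) for g, c in counts.items()}
sample_weights = [group_weight[g] for g in groups]

print(group_weight)   # {'A': ~0.42, 'B': ~2.22, 'C': ~6.67}
# Most training APIs accept per-sample weights (e.g. scikit-learn's
# fit(..., sample_weight=...)); whether weighting alone fixes the bias is
# another story.
```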
It can be an imported bias/discrimination. I still think that word's fair.
Do you have a more accurate word?
Who remembers the HP computer that was unable to identify black people? One of my favorite "oooph, that's not a good look" tech fails of all time. At least the people in that video were having a good laugh about it.
www.youtube.com/watch?v=t4DT3tQqgRM
Holy hell, that was 13 years ago.
More recently, there was also Google Photos mistaking a photo of a black couple as “gorillas”, back in 2015.
www.bbc.com/news/technology-33347866
On a funnier note, there was also the AI tool turning a pixelated photo of Barack Obama into that of a white man.
Haha. He looks like Mike Nelson.