Racist algorithms exacerbate the prosecutor’s fallacy: a high p(match|guilty) will still yield a very small p(guilty|match) if you dragnet.
In this case, it sounds like not even p(match|guilty) was high.
*Ban police use of facial recognition.* https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html?smid=tw-share
Think about the purpose of using facial recognition in police work. If you have a reasonable number of suspects, humans can do the job as well as or better than algorithms.
The purpose has to be to screen faces at scale.
When you screen faces at scale, the prior probability that any one face is guilty is very low. That means that even with a low false positive rate, the chance that a matched person is actually guilty will also be low.
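To see how fast the base rate dominates, here is a minimal sketch of the Bayes calculation with hypothetical numbers (the population size, false positive rate, and true positive rate below are illustrative assumptions, not figures from the case):

```python
# Hypothetical numbers to illustrate the base-rate problem:
# screen 1,000,000 faces looking for 1 true suspect.
population = 1_000_000
true_suspects = 1      # prior: 1 in a million
fpr = 0.001            # p(match | innocent), a generously low 0.1%
tpr = 0.99             # p(match | guilty), a generously high 99%

# Expected counts after screening everyone.
false_matches = fpr * (population - true_suspects)  # ~1000 innocent matches
true_matches = tpr * true_suspects                  # ~1 guilty match

# Bayes: p(guilty | match) = true matches / all matches.
p_guilty_given_match = true_matches / (true_matches + false_matches)
print(f"p(guilty | match) ~ {p_guilty_given_match:.4f}")  # ~ 0.001
```

Even with a 0.1% false positive rate and near-perfect sensitivity, roughly a thousand innocent people match for every guilty one, so p(guilty|match) sits around 0.1%.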
A match does not even rise to the level of reasonable suspicion.