Racist algorithms exacerbate the prosecutor’s fallacy: a high p(match|guilty) will still yield a very small p(guilty|match) if you dragnet.
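By Bayes’ rule, p(guilty|match) = p(match|guilty)·p(guilty) / [p(match|guilty)·p(guilty) + p(match|innocent)·p(innocent)]. When you dragnet, p(guilty) is tiny, so the false-positive term dominates the denominator.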

In this case, it sounds like not even p(match|guilty) was high.

*Ban police use of facial recognition.* https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html?smid=tw-share
Think about the purpose of using facial recognition in police work. If you have a reasonable number of suspects, humans can do it as well as or better than algorithms.

The purpose has to be to screen faces at scale.
When you screen faces at scale, the prior probability that any one person is guilty is very low. That means that even with a low false positive rate, the probability that a matched person is actually guilty will also be low.
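To make the base-rate problem concrete, here is a minimal Bayes-rule sketch in Python. The numbers are hypothetical assumptions for illustration, not figures from the article: a database of one million faces containing exactly one culprit, and a matcher with a 99% true positive rate and a 1% false positive rate.

```python
# Hypothetical dragnet: 1,000,000 faces screened, 1 actual culprit among them.
n_faces = 1_000_000
p_guilty = 1 / n_faces          # prior probability any screened face is the culprit
p_match_given_guilty = 0.99     # assumed true positive rate
p_match_given_innocent = 0.01   # assumed false positive rate

# Bayes' rule: p(guilty | match) = p(match | guilty) * p(guilty) / p(match)
p_match = (p_match_given_guilty * p_guilty
           + p_match_given_innocent * (1 - p_guilty))
p_guilty_given_match = p_match_given_guilty * p_guilty / p_match

print(f"p(guilty | match) = {p_guilty_given_match:.6f}")  # ~0.0001, i.e. ~1 in 10,000
```

Under these assumptions the matcher flags roughly 10,000 innocent people for every real culprit, so a match leaves p(guilty|match) around one in ten thousand.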

A match does not even rise to the level of reasonable suspicion.