if we consider humans rationally formulating bad decision rules to be an instance of algorithmic failure, then the only negative outcomes that can't be blamed on algorithms are those produced by opaque holistic human judgment. i don't think we want to encourage more of that
concerns about the growing role of AI in day-to-day decision-making deserve to be taken seriously. but the problem is *not* that humans should be making more of the decisions; it's that humans should be thinking more carefully about how to evaluate and regulate algorithms
put differently, the question "should we trust algorithms?" carries the soft implication that we were already doing okay trusting human experts. we weren't. human decision-makers regularly make terrible, hugely biased decisions. we should trust neither humans nor algorithms
if anything, the benefit of algorithmic decisions is that they can (at least in principle) be interrogated in a way that human decisions can't. you will never know for sure why your Uber driver suddenly swerved and crashed; but Tesla *might* be able to explain an Autopilot crash
one irony of the backlash against algorithmic decision-making is that many of the biases people object to were already present—often to a far greater extent—in human decisions, and this didn't seem to bother nearly as many people
is it bad if a racist algorithm gives some groups disproportionately long jail sentences? of course it's bad. but it's also bad when racist human judges do the same—and somehow there hasn't been the same movement to evaluate and regulate human decision-making in this context
to be clear, i'm not saying "if you didn't complain about racist judges, you shouldn't complain about racist algorithms"; you should complain about both! the point is that the root problem is not the use of algorithms, it's the lack of appropriate testing/regulation regimes.