So. We hear a lot about bias these days, especially about bias in "AI". But while bias in "AI" systems is real and a problem, the bias narrative hides a bigger issue by framing it as something to solve with technical fixes.
"AI" is it is understood today is basically a form of bureaucracy. Let me explain.
Bureaucracies are systems of sometimes opaque rules that stem from past experiences and that often don't really make coherent sense. These rules are applied to the world whether they fit or not. Things outside the rules are broken until they fit.
This is exactly what machine learning systems do: take the past, grind it into statistics, and apply these crystallized pasts to the world.
When we talk about bureaucracies and bias, we talk about actual discrimination; we talk about changing processes to ensure more fairness and transparency. We never reasonably claim that yet another form will fix things.
But in "AI" that is what we do. A lot of the work on bias treats it like a bug that needs a simple fix: Oh black faces are not detected properly? Add more black people to the training data.
But maybe Black people who are already threatened by the police don't want the facial recognition systems the police use to get better. The bias isn't the issue. The consequences in the material world are.
(And of course some biases are also very good. Removing all bias isn't just impossible but sometimes harmful.)
That's why I am not convinced that the whole "bias in AI" discourse is all that helpful: it hides the fact that these systems have materially harmful effects by pretending we just need to do "AI" better.
This should be a blog post with a clearer argument, but I don't know when I'll have the time. So here's this.