Like @cfiesler, I've gotten a lot of "It's not the algorithm, it's the training data" explanations/responses over the years (including IRL when I give talks), and I want to explain the problem with these 1/ [THREAD] https://twitter.com/cfiesler/status/1288446851000217606?s=20
As @cfiesler covers in her thread, it's patronizing to assume women in CS need ML explained to them. It's a form of gatekeeping 2/ https://twitter.com/cfiesler/status/1288449803748356096?s=20
I get these responses when I point out a real-world application that is harming real people. They are a distraction from the harm being caused, which should be the central focus. 3/
This response is also a way of deflecting responsibility: it suggests that there are no concerns or risks with machine learning itself, just that pesky training data (which is assumed to be someone else's responsibility/fault) 4/
It's indicative of a narrow framing of bias as solely a training-data problem, one that can be easily fixed by swapping in different data. This framing ignores crucial questions, such as whether the task should exist at all and who is deploying it on whom: 5/ https://twitter.com/math_rachel/status/1275515335492304896?s=20
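A toy sketch (my illustration, not from the thread) of why "just fix the data" is too narrow: even with balanced, representative training data, a single modeling choice, here one global decision threshold, can still produce unequal error rates across groups. All group labels and numbers below are hypothetical.

    # Toy sketch: bias from a decision rule, not from the data.
    # Two hypothetical groups; ground-truth labels are balanced and
    # "representative", but the score measures one group more noisily.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    group = rng.integers(0, 2, size=n)        # 0 or 1, hypothetical groups
    label = rng.integers(0, 2, size=n)        # ground truth, balanced
    noise = np.where(group == 0, 0.2, 0.5)    # group 1 scored more noisily
    score = label + rng.normal(0.0, noise)    # model's risk score

    threshold = 0.5                           # one global cutoff for everyone
    pred = (score > threshold).astype(int)

    for g in (0, 1):
        mask = (group == g) & (label == 1)
        fnr = np.mean(pred[mask] == 0)        # false-negative rate in group g
        print(f"group {g}: false-negative rate = {fnr:.2%}")

At this threshold, group 1's false-negative rate comes out around 16% versus under 1% for group 0, a disparity that comes from the decision rule and the measurement process, not from how the training data was balanced.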
Suggesting that data collection & ML are separate, isolated silos feeds into the hierarchy in which algorithm development has been disproportionately glorified above all else. 6/ https://twitter.com/math_rachel/status/1135709270928961536?s=20
The machine learning community often fails to critique the origin, motivation, platform, or potential impact of the data we use, and this is a problem that we in ML need to address. It's not someone else's problem. 7/
For more on this, see this paper by @unsojo & @timnitGebru: https://arxiv.org/abs/1912.10389 8/ https://twitter.com/math_rachel/status/1223799130180349953?s=20
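One concrete practice along these lines is dataset documentation in the spirit of "Datasheets for Datasets" (Gebru et al., 2018). The minimal sketch below is my own illustration, not an API from the thread or the paper; every field name and example value is hypothetical.

    # A minimal sketch of recording provenance questions alongside a
    # dataset, so origin, motivation, and potential impact are critiqued
    # up front rather than after harm. Field names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Datasheet:
        name: str
        origin: str                 # who collected the data, from what platform
        motivation: str             # why it was collected in the first place
        funded_by: str              # who paid for collection, and to what end
        intended_uses: list[str] = field(default_factory=list)
        known_risks: list[str] = field(default_factory=list)  # impact on people represented

    # Hypothetical example entry:
    sheet = Datasheet(
        name="example-faces-v1",
        origin="scraped from a photo-sharing platform without subject consent",
        motivation="benchmark face recognition accuracy",
        funded_by="unspecified",
        intended_uses=["academic benchmarking"],
        known_risks=["misidentification harms", "surveillance repurposing"],
    )
    print(sheet.known_risks)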