Training models on the data produced by a racist society teaches them to emulate the biases of that society.

There's a temptation to look at these models as objective. And they are, in a sense, but not about the truth: they are an objective reflection of the biases that went into their training. https://twitter.com/hankgreen/status/1353784705989046272
What this result tells us is who the mass of data it was trained on talks about as people, and who it talks about as merely humans. And it's not smart enough to filter what it says, so it can be made to say the quiet parts out loud if you ask it the right questions.
Treating this kind of model as an arbiter of truth just launders the bias already present in our society. But this kind of algorithm does have interesting things to tell us about what our society, as it stands right now, says and believes.
It's imperative that those of us involved in machine learning, whether as researchers or practitioners, be honest with ourselves about what our models have been trained to do and resist attempts to make them something they are not.
Providing "answers" to the general public, this model is an automated perpetuator of the biases it was trained on. In another context, it could perhaps be a window into examining those biases. -Ben