Thanks to @jovialjoy, I'm reading through this #FairFace paper. There's a lot to unpack, but I want to point out a couple of fundamental problems with the way the authors think about gender. https://openaccess.thecvf.com/content/WACV2021/papers/Karkkainen_FairFace_Face_Attribute_Dataset_for_Balanced_Race_Gender_and_Age_WACV_2021_paper.pdf
The authors developed a new face image dataset and trained a model that they claim performs better than previous models on 'race, gender, and age classification.' However, even putting aside (for now) ethical problems, they made a number of fundamentally incorrect assumptions:
False assumption 1: there are only two genders.

The authors don't even provide a caveat, such as 'for the purposes of this study we reduced gender to a binary.'

I'm not going to waste time here with this one other than to say, look it up. https://en.wikipedia.org/wiki/Gender 
False assumption 2: humans can correctly classify gender from photos (to provide 'ground truth' for model testing). They took a 'best 2 out of 3 Turker guesses' approach.
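For concreteness, the "best 2 out of 3" labeling protocol amounts to a simple majority vote over annotator guesses. A minimal sketch (hypothetical code, not the authors' implementation; note that majority agreement among annotators does not make a label correct):

```python
from collections import Counter

def aggregate_majority(guesses):
    """Return the label at least 2 of 3 annotators agreed on, else None.

    This mirrors a '2 out of 3 Turker guesses' scheme: the majority
    guess is promoted to 'ground truth', regardless of whether any
    annotator (or the majority) is actually right.
    """
    label, count = Counter(guesses).most_common(1)[0]
    return label if count >= 2 else None

# Three crowdworker guesses for one image:
print(aggregate_majority(["Female", "Female", "Male"]))  # → Female
```

The sketch makes the problem visible: agreement is treated as truth, so systematic annotator errors (which differ across races, ethnicities, and ages) are baked directly into the "ground truth" labels.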
False assumption 3: humans can correctly classify gender from photos ACROSS ALL RACES, ETHNICITIES, AGES, & GENDERS with equal error rates.

This is especially problematic because unequal annotator error rates across groups undermine the whole point of the paper: you can't claim balanced performance when the labels themselves are less reliable for some groups than others.
False assumption 4: it's OK for either humans or machines to classify people's gender without consent.

It's not OK to do this. Don't do it. You need to ask people if you want to know their gender, and get informed consent if you want to use that information to train your model.
You can follow @schock.