I’m not in #recruiting, but I often get cold-emailed by startups selling #AI-based resume screening. So I’ve started responding by asking them to show me what due diligence they’ve done to make sure system bias isn’t causing significant harm. The responses are illuminating.
The most terrifying thing is when they tell me no one has ever asked them this question before.
The second most terrifying thing is when they say they haven’t measured their bias or its effects but are “improving as they go.” That suggests they had no baseline for minimum harm when deciding their product was minimally viable. And that they’re using their customers as guinea pigs.
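For concreteness, here is a minimal sketch of the kind of pre-launch bias check one might expect a vendor to run before calling a product “minimally viable.” The data, column names, and the 0.8 threshold (the EEOC’s four-fifths rule of thumb) are my own assumptions for illustration, not anything the vendors described.

```python
# Illustrative only: a minimal baseline bias check on screening decisions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, passed_screen) pairs."""
    passed = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        passed[group] += int(ok)
    return {g: passed[g] / total[g] for g in total}

def adverse_impact_ratios(decisions):
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes: (demographic group, passed resume screen)
    outcomes = [("A", True)] * 60 + [("A", False)] * 40 \
             + [("B", True)] * 35 + [("B", False)] * 65
    for group, ratio in adverse_impact_ratios(outcomes).items():
        flag = "OK" if ratio >= 0.8 else "POTENTIAL ADVERSE IMPACT"
        print(f"group {group}: impact ratio {ratio:.2f} -> {flag}")
```

Even something this crude gives you a number to track before and after shipping, which is the baseline these vendors say they don’t have.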
They love to claim that getting more data will always correct for bias over time, but I don’t think that’s what really happens in systems like these.
Why do you think your future data isn’t biased too? How can your system correct itself if it doesn’t even KNOW when it was wrong? (e.g., it failed to match a position with a candidate who was in fact qualified.)
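A toy simulation makes the feedback-loop problem concrete: if the system only ever observes outcomes for candidates it screens in, the qualified people it screens out never generate a correction signal, so retraining on “more data” just reinforces the original bias. Everything in this sketch (the score offset, thresholds, loop structure) is a made-up illustration, not a model of any real product.

```python
# Toy illustration of the "selective labels" feedback loop in resume screening.
import random

random.seed(0)

def true_quality():
    return random.random()          # how qualified the candidate actually is

def biased_score(quality, group):
    # Assume the initial model systematically under-scores group "B".
    return quality - (0.3 if group == "B" else 0.0)

passed_labels = []                  # the only data the vendor ever collects
missed_qualified = {"A": 0, "B": 0} # false negatives nobody ever observes

for _ in range(10_000):
    group = random.choice(["A", "B"])
    q = true_quality()
    if biased_score(q, group) >= 0.5:
        # Screened in: the vendor eventually learns how they performed.
        passed_labels.append((group, q))
    elif q >= 0.5:
        # Qualified but screened out: no interview, no outcome, no label.
        missed_qualified[group] += 1

print("labels collected per group:",
      {g: sum(1 for grp, _ in passed_labels if grp == g) for g in ("A", "B")})
print("qualified candidates silently rejected:", missed_qualified)
# Retraining on `passed_labels` can never reveal these misses, so "more data"
# never tells the system it was wrong about group B.
```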
I honestly don’t know whether #AI-based recruiting systems like automated resume screening can ever be made fair, because fairness is a genuinely difficult concept. But it’s certainly a lost cause if the folks building them don’t even try, and the folks buying them don’t even ask.