Here's my 3-minute opening statement from #AIDebate2 [as a thread].
My lab @BerkeleyPsych studies how humans form beliefs and build knowledge. In particular, we focus on how humans navigate the vast sea of possible information they could try to make sense of in the world. https://www.kiddlab.com/
The thing I’d like to emphasize today is that algorithmic bias is problematic not only for the direct harms it causes, but also for the cascading harms of its impact on human beliefs.
Algorithmic bias is problematic because these systems interface with people every day, embedded seamlessly into our lives. They drive human beliefs in sometimes destructive, likely irreparable ways.
What our research has shown is that:
1. People don’t learn very deeply about most things in the world.
2. People have to make up their minds quickly in order to act.
3. Once a person makes up their mind, cognitive mechanisms dissuade them from revisiting those topics.
Content-serving algorithms on news and social media recommend content based on likelihood of user engagement—thus leading users to see content that espouses homogeneous (sometimes rather wild) beliefs. https://www.reuters.com/article/us-alphabet-google-research-focus-idUSKBN28X1CB
This is problematic when users come to these sources undecided, looking to collect the information they’ll use to make up their minds. These systems are likely to push users toward strong, incorrect beliefs that—despite our best efforts—are difficult to correct.
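The feedback loop described above can be sketched in a few lines. This is a toy model under loud assumptions—no real platform's ranking system looks like this—but it shows how ranking purely by predicted engagement can narrow what a user sees and pull their belief along with it:

```python
import random

random.seed(0)

# Hypothetical toy model, not any real platform's recommender.
# Items have a "stance" in [-1, 1]; we assume a user engages more with
# items near their current belief, and that their belief drifts toward
# the content they are served.

def predicted_engagement(user_belief, item_stance):
    # Engagement falls off with distance between belief and stance.
    return 1.0 - abs(user_belief - item_stance) / 2.0

def recommend(user_belief, inventory, k=5):
    # Rank purely by predicted engagement -- the core of the loop.
    ranked = sorted(inventory,
                    key=lambda s: predicted_engagement(user_belief, s),
                    reverse=True)
    return ranked[:k]

belief = 0.1  # user starts nearly neutral
inventory = [random.uniform(-1, 1) for _ in range(500)]

for _ in range(50):
    feed = recommend(belief, inventory)
    # Belief drifts toward the average stance of what was served.
    belief += 0.2 * (sum(feed) / len(feed) - belief)

# After many rounds the feed is homogeneous: every served item
# clusters tightly around the user's (now reinforced) belief.
spread = max(feed) - min(feed)
```

Even with a diverse inventory, the served slice ends up narrow—the user never learns how unrepresentative their feed is.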
Here’s another example: @LinkedIn and @Amazon have both been caught employing technologies that promoted men and filtered out qualified women job candidates. But the harms went beyond just women candidates in these particular pools. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
Biased recruitment AI almost certainly impacted the beliefs of the recruiters using the systems. If their searches didn’t turn up qualified women, they likely concluded that qualified women don’t exist—when, in truth, it was just a bias in the application system.
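A toy example makes the mechanism concrete. The Reuters report found Amazon's scrapped tool penalized résumés containing the word "women's"; the sketch below is a deliberate exaggeration (a hard filter on a hypothetical candidate pool, not the actual system) to show how the recruiter's sample gets distorted:

```python
# Hypothetical candidate pool; names and resumes are invented.
candidates = [
    {"name": "A", "resume": "captain of the women's chess club, CS degree"},
    {"name": "B", "resume": "CS degree, five years backend experience"},
    {"name": "C", "resume": "women's robotics team lead, ML publications"},
    {"name": "D", "resume": "bootcamp graduate, no degree"},
]

def biased_screen(pool):
    # The bias, exaggerated to a hard filter: silently drop any
    # resume that mentions "women's".
    return [c for c in pool if "women's" not in c["resume"]]

visible = biased_screen(candidates)
# The recruiter never sees candidates A or C. From their vantage
# point, highly qualified women simply "don't exist" in the pool --
# the real cause, the filter, is invisible to them.
```

The recruiter's inference is locally reasonable given the evidence they see; the harm is that the system controls what evidence exists.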
I want to close by saying that this is a terrifying time for ethics in AI. The termination of @timnitGebru from @Google marks a dark turn. https://www.nytimes.com/2020/12/03/technology/google-researcher-timnit-gebru.html
Even after #MeToo and the #BlackLivesMatter protests of 2020, it is clear that private interests will not support diversity, equity, and inclusion.
https://techcrunch.com/2020/12/03/googles-co-lead-of-ethical-ai-team-says-she-was-fired-for-sending-an-email/
It should horrify us that control of the algorithms that drive so much of our lives remains in the hands of a homogeneous, narrow-minded minority.
https://uploads-ssl.webflow.com/5f2876f679889c3267ee6dee/5fdd9622618da2c43dd7fffb_support_gebru_ethical_ai.pdf
What @TimnitGebru experienced at @GoogleAI is the norm. It’s hearing about it that is unusual. https://www.technologyreview.com/2020/12/16/1014634/google-ai-ethics-lead-timnit-gebru-tells-story/
It is also, unfortunately, the norm that people who speak inconvenient truths to power are discarded. They are quietly pushed out by institutions like Google, who, if caught, pretend that people like Timnit did something wrong.
https://dynamic.uoregon.edu/jjf/institutionalbetrayal/
This response manipulates everyone’s beliefs into thinking that underrepresented people are underrepresented because they cause trouble—not because the institutions themselves discriminate.
But you should listen to @timnitGebru—and countless others—about what the environment at Google was like.
@JeffDean should be ashamed.
The rest of us have a responsibility to see it for what it is, and insist that it stop.