Here's my 3-minute opening statement from #AIDebate2 [as a thread].
My lab @BerkeleyPsych studies how humans form beliefs and build knowledge in the world. In particular, we focus on how humans navigate the vast seas of all the possible information they could try to make sense of. https://www.kiddlab.com/
The thing I’d like to emphasize today is that algorithmic bias is not only problematic for the direct harms it causes, but also for the cascading harms of how it impacts human beliefs.
Algorithmic bias is problematic because these systems interface with people every day, embedded seamlessly into our lives. They drive human beliefs in sometimes destructive, likely irreparable ways.
Our research has shown that:

1. People don’t learn very deeply about most things in the world.

2. People have to make up their minds quickly in order to act.

3. Once a person makes up their mind, cognitive mechanisms dissuade them from revisiting those topics.
This is problematic when users, still unsure, go to these sources to collect the information they’ll use to make up their minds. These systems are likely to push users to strong, incorrect beliefs that, despite our best efforts, are difficult to correct.
Biased recruitment AI almost certainly impacted the beliefs of the recruiters using the systems. If their searches didn’t turn up qualified women, they likely concluded that qualified women don’t exist—when, in truth, it was just a bias in the application system.
Even after #MeToo and the #BlackLivesMatter protests of 2020, it is clear that private interests will not support diversity, equity, and inclusion.

https://techcrunch.com/2020/12/03/googles-co-lead-of-ethical-ai-team-says-she-was-fired-for-sending-an-email/
It should horrify us that control of the algorithms that drive so much of our lives remains in the hands of a homogeneous, narrow-minded minority.

https://uploads-ssl.webflow.com/5f2876f679889c3267ee6dee/5fdd9622618da2c43dd7fffb_support_gebru_ethical_ai.pdf
It is also unfortunately the norm that people who speak inconvenient truths to power are discarded. They are quietly pushed out by institutions like Google, which, if caught, pretend that people like Timnit did something wrong.
https://dynamic.uoregon.edu/jjf/institutionalbetrayal/
This response manipulates everyone’s beliefs, suggesting that underrepresented people are underrepresented because they cause trouble, not because the institutions themselves discriminate.
But you should listen to @timnitGebru—and countless others—about what the environment at Google was like.

@JeffDean should be ashamed.
The rest of us have a responsibility to see this for what it is, and to insist that it stop.
You can follow @celestekidd.