I had a busy week and finally got some time to tweet about the recent incident: Google fires prominent AI ethicist Timnit Gebru.

I'll briefly introduce the background and state the seriousness of this incident. (1/n)
Mainstream AI models rely on data, and since there is almost no such thing as unbiased data, their results are inevitably biased. Here is an example of error rates for people with different skin tones: (2/n) https://twitter.com/math_rachel/status/976235105973714949
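To make the point concrete, here is a minimal sketch (in Python, using made-up numbers rather than data from the linked study) of how a model's error rate can be broken down by demographic group, which is how this kind of gap is usually reported:

# Minimal sketch with hypothetical (group, true_label, predicted_label) records.
# A model can look accurate overall while performing much worse on one group.
from collections import defaultdict

predictions = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, truth, pred in predictions:
    totals[group] += 1
    if truth != pred:
        errors[group] += 1

for group in totals:
    # Per-group error rate = misclassified samples / total samples in that group.
    print(f"{group}: error rate = {errors[group] / totals[group]:.0%}")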
AI models are both powerful and full of flaws. Most research teams at Google publish papers focusing on advances in AI models and how powerful they are. In contrast, Google's ethical AI team reveals the ethical issues in these models. (4/n)
Timnit Gebru (co-founder of Black in AI) was the co-lead of Google's ethical AI team and had many impactful publications focusing on AI ethics. Here is a nice thread on her contributions and publications by @math_rachel : (5/n) https://twitter.com/math_rachel/status/1334545393057599488
Recently, Google ordered Dr. Gebru to retract a research paper without even disclosing the exact process that led to this order or the people involved in the decision. The full news: (6/n) https://www.bbc.com/news/technology-55187611
Dr. Gebru replied and asked for more information on the retraction order, which is a pretty reasonable request from a co-lead of a research team: (7/n) https://twitter.com/timnitGebru/status/1334900391302098944
Dr. Gebru emphasized the importance of this information for the team, stating, "if the conditions are not met, then I can work on a last date." (8/n) https://twitter.com/timnitGebru/status/1334343577044979712
Google's reply was shockingly short. It simply interpreted this email as a resignation, without even trying to communicate with Dr. Gebru: (9/n) https://twitter.com/timnitGebru/status/1334364732480958467
According to the email from Jeff Dean, head of Google AI, the paper was submitted only a day before the deadline, too late for the internal review process. He also mentioned that the paper did not cite some recent work on reducing biases. (10/n) https://twitter.com/JeffDean/status/1334953632719011840
However, some Google employees pointed out that the paper had received feedback from 28 people and cited 128 papers; these numbers are unusually high for a conference paper. (11/n) https://www.bbc.com/news/technology-55164324
Some questions worth thinking about: Why is the review process for the ethical AI team so different from that of other teams? Why did Google choose to fire a co-lead instead of trying to discuss the issue? Why is it so hard to formulate a transparent review process? (12/n)
This incident is serious in two aspects: 1. If Google can retract any paper without giving a reason, it can censor work that reveals issues that might hurt Google's revenue. 2. It is irresponsible to dismiss Dr. Gebru's reasonable request without even trying to communicate. (13/n)