The news came to light after @timnitGebru said Google fired her; she had questioned an order not to publish research finding that AI which mimics speech could disadvantage marginalized communities.
This summer, a senior Google manager told authors of a draft paper on content recommendation tech to “take great care to strike a positive tone,” internal correspondence shows. The authors then “updated to remove all references to Google products.”
When must @Google scientists flag research for “sensitive topics” reviews? When examining Google services for bias, or topics including face/sentiment analysis, race or gender categorization, the oil industry, COVID-19, China, location data, religion, driverless cars, telecoms, home security & more
In senior Google scientist @mmitchell_ai’s words: “If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship.”
Google declined to comment for the story. SVP @JeffDean said earlier this month that Google supports AI ethics scholarship and is “actively working on improving our paper review processes, because we know that too many checks and balances can become cumbersome.”
You can follow @JLDastin.