. @Google this year tightened control over its scientists’ papers by launching a “sensitive topics” review, and in 3 cases told authors to refrain from casting its tech in a negative light, docs & interviews show. Story w/ @peard33 https://www.reuters.com/article/us-alphabet-google-research-focus-idUSKBN28X1CB
The news came to light after @timnitGebru said Google fired her; she had questioned an order not to publish research saying AI that mimics speech could disadvantage marginalized communities.
This summer, a senior Google manager told authors of a draft paper on content recommendation tech to “take great care to strike a positive tone,” internal correspondence shows. The authors then “updated to remove all references to Google products.”
When are @Google scientists supposed to flag research for “sensitive topics” reviews? If it examines Google services for bias, face/sentiment analysis, race or gender categorization, the oil industry, COVID-19, China, location data, religion, driverless cars, telecoms, home security & more
In senior Google scientist @mmitchell_ai’s words: “If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship.”
Google declined to comment for the story. SVP @JeffDean said earlier this month that Google supports AI ethics scholarship and is “actively working on improving our paper review processes, because we know that too many checks and balances can become cumbersome.”