2020 has been a crazy year for many reasons, but one issue that has come to the fore is bias, ethics, and censorship in algorithms.
A few stories from my experience working in big tech and where things need to go from here /thread.
After grad school, my first job was at Microsoft, working on Bing (called MSN Search at the time), where we were building a Google-killer product. Microsoft had all the resources in the world (an army of engineers, researchers in MSR, and a ton of cash) to beat Google. /2
Our team learned very quickly that Google had one big competitive advantage over us: their search logs, which they had been collecting for years, and which MSN Search and Internet Explorer hadn't bothered to keep in their early days. /3
Even though Bing and Microsoft Research invented cutting-edge algorithms such as neural networks for search ranking, we came in second because Google had better data to address long-tail search quality. /4
https://icml.cc/2015/wp-content/uploads/2015/06/icml_ranking.pdf
This was the first lesson I learned about the value of collecting and processing big data for algorithms to work effectively. https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf /5
Later on, I spent a decade working at social networking companies like Twitter, Pinterest, and Facebook. The one common theme that stayed true in my jobs was the importance of collecting, analyzing, and processing large volumes of data to make our products better. /6
Things started taking a turn after the 2016 Elections, which I consider a historic moment in this journey. People started noticing the macro side effects on society. We saw the emergence of several issues like fake news, misinformation, data privacy, algorithmic bias, etc. /7
I was at Facebook when we bore the brunt of these issues, and the feeling was that we had been caught off-guard. We started putting guardrails on our AI/ML algorithms, checking for data integrity, and building debugging and diagnostic tooling. /8
One such tool my team worked on was called “Why am I seeing this?” which shows human-readable explanations for news recommendations. /9 https://about.fb.com/news/2019/03/why-am-i-seeing-this/
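To make the idea concrete, here is a toy sketch (not Facebook's actual system, and all signal names and templates are hypothetical) of how an explanation surface like "Why am I seeing this?" can work: take the strongest signals behind a ranked post and map them to human-readable reasons.

```python
# Hypothetical ranking-signal names mapped to reason templates.
# Illustration only; a real system derives these from its ranking model.
REASON_TEMPLATES = {
    "friend_interaction": "You interact with {actor} often.",
    "group_membership": "This was posted in a group you belong to: {group}.",
    "topic_affinity": "You've engaged with posts about {topic} before.",
}

def explain_ranking(signals):
    """Return human-readable reasons for the strongest ranking signals."""
    # Sort signals by their contribution to the post's score, highest first.
    top = sorted(signals, key=lambda s: s["weight"], reverse=True)
    reasons = []
    for s in top[:2]:  # surface only the top few reasons, as real tools do
        template = REASON_TEMPLATES.get(s["name"])
        if template:
            reasons.append(template.format(**s["args"]))
    return reasons

signals = [
    {"name": "topic_affinity", "weight": 0.31, "args": {"topic": "hiking"}},
    {"name": "friend_interaction", "weight": 0.52, "args": {"actor": "Alice"}},
]
print(explain_ranking(signals))
# ['You interact with Alice often.', "You've engaged with posts about hiking before."]
```

The key design point is that explanations are generated from the same signals the ranker actually used, so the transparency is tied to the algorithm's real behavior rather than a separate narrative.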
Algorithmic integrity and transparency became so critical that it was one of the projects tracked by Chief Product Officer Chris Cox. Large teams worked on tools to detect and flag misinformation in news and to diagnose why a story was making it to the top of News Feed. /10
Tools like "Why am I seeing this?" started to bring much-needed algorithmic transparency, and thereby accountability, to News Feed for both internal and external users. /11
It's not only Facebook. In the last 4 years, several complaints have cropped up about bias in the algorithms behind products from Apple, Amazon, and Microsoft. Watch the chilling documentary @CodedBias to fully grok the impact of these issues on society. /12
The reality is that humans are at the center of technology design and humans have a history of making product design decisions that are not always in line with everyone's needs. /13
For example, female drivers were 47 percent more likely to be severely injured in an automobile accident until 2011 because automobile manufacturers weren’t actually required to use crash-test dummies that represented the female body. /14
Big tech companies started Ethical AI and Responsible AI initiatives to address this issue. While that is commendable — it is not clear if all is well there and if there is a fundamental alignment of incentives.
We've all seen what happened with @timnitGebru recently. /15
The question is: can we depend on a large corporation's ethical branding strategy to course-correct itself?
We need accountability, independent 3rd parties, external regulators, and stricter laws.
We need Congress to pass the Algorithmic Accountability Act in 2021. /16
Meanwhile, algorithms are also empowering big tech to more actively and preemptively determine which speech should be permitted and which should be suppressed, often according to their own criteria, which are likely influenced by commercial considerations. /17
Unlike human moderation, with algorithmic censorship, social platforms can, in theory, intervene to suppress any content their algorithms deem prohibited according to the platform’s criteria. /18
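A minimal sketch of why this scales so differently from human review: with algorithmic moderation, flipping one policy entry suppresses every matching post at once. The topic names, posts, and keyword "classifier" below are hypothetical stand-ins for illustration.

```python
# One entry in this set silences an entire category of speech platform-wide.
BLOCKED_TOPICS = {"topic_x"}

def classify_topic(post):
    """Stand-in for a real topic classifier; simple keyword match here."""
    return "topic_x" if "topic_x" in post.lower() else "other"

def moderate(posts):
    """Drop every post whose predicted topic is on the blocked list."""
    return [p for p in posts if classify_topic(p) not in BLOCKED_TOPICS]

posts = ["Thoughts on topic_x today", "Weekend hiking photos", "More topic_x news"]
print(moderate(posts))  # ['Weekend hiking photos']
```

The point of the sketch: the suppression decision lives in a config entry, not in thousands of individual human judgments, which is exactly the "click of a button" power described below.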
I have had at least one CXO of a big social platform tell me that they are afraid of potential misuse of this power, because if a social platform wants to silence an entire community or a political discourse, it can do that with the click of a button today. /19
Therefore it is critical that we have regulations that ensure transparency into the kind of censorship these platforms are attempting. @chamath, @DavidSacks, and @friedberg talk about this at length in this podcast: /20 https://podcasts.apple.com/us/podcast/e10-twitter-facebook-botch-censorship-again-publisher/id1502871393?i=1000494932673
Unless we foster transparency, fairness, and accountability in this new decade, we can't ensure algorithmic justice for all.
This is the reason, we founded @fiddlerlabs to work towards a mission of Building Trust in AI and we need all your support! /end https://krishnagade.medium.com/algorithmic-justice-for-all-5a2d3abb123a