Thread coming up in the morning. What's clear is that there still isn't enough transparency about Facebook's decision-making about hate speech https://www.wsj.com/articles/facebook-hate-speech-india-politics-muslim-hindu-modi-zuckerberg-11597423346?redirect=amp#click=https://t.co/Rqg4wCXblA
The @alexstamos quote about the company's structure in other countries is key.
See:
"A Facebook spokesman, Andy Stone, acknowledged that Ms. Das had raised concerns about the political fallout that would result from designating Mr. Singh a dangerous individual, but said her opposition wasn’t the sole factor in the company’s decision"
He says 'her opposition', but what input does Facebook actually seek from local staff? Threat to business? Threat to staff? Does the company reclassify the speech based on a non-expert's input?
The report says that Ms. Das oversees a team that decides 'what content is allowed on the platform'. Is that under Indian law (applied only in India) or under the community standards? I thought the community standards were applied by a global team.
Much of the hateful content does violate Indian law, so it's true that the India team could choose to block it locally. But the global team is also supposed to exercise its own judgment about hate speech that violates the platform's standards.
That's two different decision-making systems to look at:
1. How does the India office apply hate speech law to politically sensitive speech?
2. How does the global team involve local staff in its decision-making about politically sensitive dangerous speech?
Oh, and I want to remind you all that the Oversight Board only reviews content that is taken down. So if local staff around the world plead for hate speech or dangerous speakers to stay on Facebook, those decisions cannot be appealed to the Oversight Board.
I wrote this piece years ago about Facebook's enthusiastic removal of content about Kashmir https://medium.com/berkman-klein-center/rebalancing-regulation-of-speech-hyper-local-content-on-global-web-based-platforms-1-386d65d86e32
And this other piece raising concerns about the potential for politically skewed application of WhatsApp electoral misinformation policies https://www.washingtonpost.com/opinions/2019/04/25/india-could-see-next-whatsapp-election-stakes-couldnt-be-higher/
The WSJ's reporting shows that these were all valid concerns, and that we should keep worrying about these questions.
So let me remind you once again that the local office can choose to share WhatsApp metadata with the government pretty easily. How much do you trust it?