Thank you, members of the Judiciary Committee, for the opportunity to speak with the American people about Twitter, your concerns around censorship and suppression of a specific news article, and, more generally, what we saw in the 2020 US election conversation.
We were called here today because of an enforcement decision we made against the @NYPost, based on a policy we created in 2018 to prevent Twitter from being used to spread hacked materials. This resulted in us blocking people from sharing a @NYPost article, publicly or privately.
We made a quick interpretation, with no other evidence, that the materials in the article were obtained through hacking and, in accordance with that policy, blocked them from being spread. Upon further consideration, we admitted this action was wrong and corrected it within 24 hours.
We informed the @NYPost of our error and policy update, and how to unlock their account by deleting the original violating tweet, which freed them to tweet the exact same content and news article again. They chose not to, instead insisting we reverse our enforcement action.
We did not have a practice around retroactively overturning prior enforcement. This incident demonstrated that we needed one, and so we created one we believe is fair and appropriate. https://twitter.com/twittersafety/status/1322298208979197955?s=21
I hope this illustrates the rationale behind our actions, and demonstrates our ability to take feedback, admit mistakes, and make changes, all transparently to the public. We acknowledge there are still concerns around how we moderate content, and specifically around §230.
Three weeks ago we proposed three solutions to address the concerns raised, and they all focus on services that decide to moderate or remove content. They could be expansions to §230, new legislative frameworks, or a commitment to industry-wide self-regulation best practices.
Requiring 1) moderation processes and practices to be published, 2) a straightforward process to appeal decisions, and 3) best efforts around algorithmic choice would address the concerns we all have going forward. All three are achievable in short order.
It’s critical that, as we consider these solutions, we optimize for new startups and independent developers. Doing so ensures a level playing field and increases the probability that competing ideas emerge to help solve problems going forward. We mustn’t entrench the largest companies further.
Finally, before I close, I wanted to share some reflections on what we saw during the US Presidential election. We focused on addressing attempts to undermine civic integrity, providing informative context, and product changes to encourage greater consideration.
We updated our civic integrity policy to address misleading or disputed information that undermines confidence in the election, causes voter intimidation, suppression, or confusion about how to vote, or misrepresents affiliation or election outcomes.
More than a year ago, the public asked us to offer additional context to help make potentially misleading information more apparent. We did exactly that, applying labels to over 300k tweets from Oct 27-Nov 11, which represented 0.2% of all US election-related tweets.
We also changed how our product works in order to help increase context and encourage more thoughtful consideration before tweets are shared broadly. We’re continuing to assess the impact of these product changes to inform our long term roadmap.
Thank you for the time, and I look forward to a productive discussion focused on solutions.