What do platforms do when the law tells them to take down illegal user content? They take down a bunch of other stuff, too. How do we know? Well, here's an updated list of empirical studies documenting the problem:
http://cyberlaw.stanford.edu/blog/2021/02/empirical-evidence-over-removal-internet-companies-under-intermediary-liability-laws
There is every reason to expect this over-removal problem to have disparate impact on marginalized groups. Studies show patterns of bias from automated tools, and logic suggests we should expect it from human moderators as well: https://twitter.com/daphnehk/status/994598247384547330
Here's one of the studies about disparate impact from automated content moderation tools: https://homes.cs.washington.edu/~msap/pdfs/sap2019risk.pdf
Any CDA 230 "reform" legislation that introduces takedown obligations without attempting to grapple with this problem is, in my opinion, intellectually and morally unserious. That conspicuously includes the current @MarkWarner bill.
Grappling with the problem would mean, at minimum, adopting the procedural protections for users that have been utterly standard in human rights literature and civil society discussion for a decade now. (Appeals, penalties for bad faith allegations against lawful speech, etc.)