Yesterday I did a radio interview about Section 230, and the main line of questioning was along the lines of "why do we have 230, which allows social media companies to censor their users?" I've walked through this before, but one more time:
Most important point: social media companies are not bound by the First Amendment because they are not state actors. Courts repeatedly have held this, often relying on Justice Kavanaugh's majority opinion in Manhattan Community Access Corp. v. Halleck. To hold otherwise would require SCOTUS to reverse longstanding First Amendment doctrine.
Conversely, the First Amendment protects private companies from government-imposed requirements that they distribute someone else's speech.
The only plausible moderation-related source of liability I can envision is a platform violating its own terms of service. But ... the platform could just change its terms of service. So I don't see this as a very likely source of liability.
Now, 230 does have two provisions that allow platforms to get moderation-related claims dismissed early. Sometimes they are unnecessary because the court dismisses the case on 1A merits. Other times, they are used to efficiently dispose of a case that would fail on 1A grounds.
I think people get tripped up about 230 because they don't know its history. It was a response to a 1995 case, Stratton Oakmont v. Prodigy, in which Prodigy was held liable for all of its users' posts because it had previously engaged in some moderation. Congress thought this made no sense because...
Congress wanted the platforms to feel free to moderate content without fearing liability for everything. (I think the Prodigy case was wrongly decided, but it is what Congress was responding to). Congress also did not want to stifle the Internet with regulation and lawsuits.
Section 230 is "responsible" for moderation only in the sense that it is responsible for the Internet we know today (and often take for granted) -- one with many, many websites and apps that allow user content, with different rules and different user bases.
Without 230, there likely would be far *more* moderation, because platforms suddenly would fear liability for user content that might be on the margins of defamation. Or there simply would be fewer platforms that allow instantaneous user posts.
Platforms also would be much more willing to take down user content after receiving complaints, because they probably lack the interest or resources to litigate the merits in court.
I think some of the debate really is along the lines of: "We think that social media companies discriminate based on particular viewpoints/politics, so we think it's unfair that they receive liability protection for their user content."
That's a reasonable normative debate to have, but I think that we have to dissociate cause and effect. (And also recognize that all platforms -- not just a few big social media companies -- benefit from 230).
You can follow @jkosseff.