Section 230 is very short, yet it is consistently misunderstood. That's partly because people are unaware of the history that led to its passage. Today's 230 thread examines the court cases that prompted Congress to pass 230, in an effort to help folks understand its purpose.
The tl;dr of this thread is: Congress passed Section 230 because it wanted to encourage online services to develop their own policies and practices for moderating legal but objectionable content, and it did not want to stifle free speech and innovation on the Internet.
In other words, encourage the private sector to determine the types of moderation that users demand, and keep government regulators away from online speech. The early court cases made it hard to achieve these goals without a law like 230.
Anyone who has heard me talk about Section 230 knows that I'm going to start out by talking about bookstores. You can't understand why we have 230 until you understand the early cases in which bookstores were possibly liable for the books they distributed.
In 1959, the Supreme Court reversed the conviction of Eleazar Smith, a Los Angeles bookstore owner who was convicted under a city ordinance that banned the sale of obscene material, regardless of whether the seller knew the material was obscene.
Obscenity is not constitutionally protected, so the government can regulate it. But the Court concluded that this ordinance was unconstitutional because it likely would chill speech that *is* constitutionally protected: if booksellers faced liability regardless of what they knew, they would restrict their shelves to books they had personally inspected.
The Court declined to define exactly what mental state is necessary for a distributor to be held liable for content it did not create, but it emphasized that there must be *some* showing of a state of mind.
Lower courts would apply this not only in criminal obscenity cases, but in defamation cases against bookstores and newsstands. The general common law rule in defamation cases became that distributors are liable if they knew or had reason to know of the defamatory material.
These rules were first applied to online services in 1991, when CompuServe was sued for an allegedly defamatory article posted in one of its forums. CompuServe had been unaware of the article.
A federal judge in NY agreed with CompuServe that it was a distributor like a newsstand, and therefore did not face liability because it neither knew nor had reason to know of the alleged defamation.
Although the judge dismissed the case, he threw a bit of a curveball when he focused on the fact that CompuServe had "little or no editorial control" over the newsletter, while acknowledging that it could decline to carry the newsletter altogether.
This focus on editorial control became a problem a few years later, when Prodigy was sued for a defamatory statement made on a user forum. Like CompuServe, Prodigy was unaware of the statement.
But Prodigy, at least at one point, had exercised what some might view as "editorial control" over its services, setting detailed community standards and employing contract moderators to enforce its rules. This was an effort to market Prodigy as a family-friendly service.
Those efforts led a N.Y. state court judge to conclude that unlike CompuServe, Prodigy was not entitled to "distributor" protections, but was rather a "publisher" who faced the same liability as the post's author.
I disagree with this decision, and the general focus on editorial control. As the CompuServe judge acknowledged, even newsstands can exercise *some* editorial control by declining to carry publications.
As a sidenote, the NY state court judge who decided the Prodigy case was not exactly a model jurist. A few years earlier, he was censured for making racist remarks to an attorney. http://www.scjc.state.ny.us/Determinations/A/Ain.Stuart.L.1992.09.21.DET.pdf
But the Prodigy case received widespread media attention, particularly for the proposition that a platform could significantly increase its liability by making its sites more family-friendly and enforcing community standards.
This was in 1995, when people were totally freaked out about children accessing pornography on this new Internet thing.
The 26 words in Section 230 were intended to prevent this disincentive to moderation: providers of interactive computer services shall not be treated as publishers of third-party content, even if they moderate.
Indeed, the conference report accompanying Section 230 (passed in February 1996) explicitly states that the law was intended to overrule the Prodigy case.
During House floor debate on Section 230, the bill's co-author, then-Rep. Chris Cox, explained why the Prodigy case led to a bad result, and how he intended 230 to fix it.
This is a very long way of saying that when people argue that 230 is contingent on platforms not moderating, they fail to address the history of 230, which says otherwise.
Of course, Section 230 has been interpreted more broadly than merely preventing platforms from being subjected to strict "publisher" liability. Courts also have interpreted it to preclude "distributor" liability, so platforms are not liable even if they knew or had reason to know of the third-party content.
Later this week I'll examine how courts have reached that conclusion. As always, I will continue to write these threads until everyone on Twitter understands Section 230.
You can follow @jkosseff.