What happens on the day after Section 230 is repealed? I'm going to prognosticate a bit in today's Section 230 thread. The tl;dr is that we'll likely see a lot less speech, especially if the speech even borders on being controversial.
Recall that Section 230's core immunity says that platforms shall not be treated as the publishers of third-party content. So unless an exception applies, a social media site, for instance, cannot be successfully sued for a user's post.
The user who posted it always could be sued, but not the platform. This provides platforms with great flexibility to set their own moderation standards.
Recently, many people have criticized platforms for preferring particular political viewpoints in these moderation standards, and for not being very transparent about how they develop and implement those standards.
We can set those arguments aside for another day (basically every day for the rest of my life). What I want to talk about right now is how moderation would change on the day after Section 230 is repealed.
As I've described in other threads and congressional testimony, we don't entirely know what liability standards courts would provide platforms in a 230-free world, because we have had 230 on the books since 1996.
Our best -- though imperfect -- prediction comes from cases involving physical bookstores, which distribute speech and are not covered by 230. They are liable if they know or have reason to know of illegal/defamatory content.
Assuming that online platforms in a 230-free world face a liability standard anywhere similar to what bookstores face, you could reasonably predict that the thumb will rest much more heavily on the scale in favor of removing user content.
Let's say that there is a heated political debate on Twitter and it gets personal (I know this rarely happens on a site as civil as Twitter, but just humor me). Twitter currently has a number of guidelines that govern whether the content comes down or the user is banned.
Without 230, I'd expect those guidelines to tighten up significantly and result in more frequent speech takedowns. Twitter's concern will no longer merely be about creating a safe and responsible environment. It will also be concerned about its own liability for that content.
Now you might say, there are a lot of protections like the opinion privilege, the falsity requirement, actual malice, etc. that make it unlikely for the standard political debate to result in a successful defamation case, even without 230. You're probably correct. But...
Twitter most likely does not want to litigate the merits of many, many defamation cases. Litigation is time-consuming and expensive. Removing any remotely controversial content would be in Twitter's business interests in a 230-free world.
Repealing 230 also might give the subject of negative user content the ability to force the removal of that content. Don't like a tweet? Complain to Twitter, and then they "know" of the alleged defamation and could face as much liability/litigation expense as the author.
Bookstores also could be liable if they have "reason to know" of the defamatory content they distribute. If applied to online platforms, this could trigger some sort of requirement to pre-screen material if the general nature of it is known to be defamatory.
Who knows, social media companies might decide that they will just take on the increased risk and refuse to change their moderation standards. That's theoretically possible, at least until we know for certain how courts would define the liability standards. But it's unlikely.
In 2018, within days of Congress passing FOSTA, which created a sex trafficking and prostitution facilitation exception to Section 230, Craigslist took its entire personals section offline, posting a notice that it could not risk the new liability without jeopardizing its other services.
Platforms are businesses. Businesses consider risks when developing their products and services. Section 230 is a big part of the risk equation for these platforms.
So this is a long way of saying that there would be some irony in repealing Section 230 because we are upset that platforms engage in too much moderation. The result of repeal is very likely going to be more aggressive moderation. And I'm not talking about Alanis irony.
I'm sure that I'll get responses about the unfairness of platforms censoring certain users while being covered by Section 230. That is all a valid debate to engage in, but this thread seeks to answer a very specific question: what would platforms do in response to 230 repeal?