Okay. We found our notes from when we did that close read of the DoJ's draft legislation around Section 230 a couple months back. We had hoped this would not be relevant; at the time there was no obvious path to the bill being introduced, let alone passed.
The Verge is reporting that the GOP is refusing to pass the House's $2000 stimulus payment bill in the Senate, and is floating a Section 230 repeal as what they want in exchange. https://www.theverge.com/2020/12/29/22204976/section-230-senate-deal-stimulus-talks-checks
The Verge is also rightly noting that there are a lot of open questions about what this would even mean. There is *still* not any formally proposed text to comment on.
Our notes, and the remainder of this thread, are about a draft that the DoJ prepared at POTUS's request a while back. We think it's instructive to look at that to understand what the GOP is likely to want, but please understand none of this is final.
This draft, if passed, would be a very significant form of censorship. It is ironic, but predictable, that the executive order telling the DoJ to write it was titled the "Executive Order on Preventing Online Censorship".
Orwell's "freedom is slavery" thing was simply an observation of how fascists talk - they couch their horrors in language that makes them sound good, until you actually look at what they mean by it.
For those who aren't used to reading legal text, a redline is a document that shows *changes* in an existing text. If you're a programmer, it's like a diff. In this case it's a diff to the law.
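To make the analogy concrete, here's a toy illustration in Python. The two snippets of "statutory text" below are invented for the example - they are not the actual wording of Section 230 or of the DoJ draft - but the output is exactly the kind of thing a redline shows.

```python
# Toy illustration of the redline-as-diff analogy.
# The "statutory text" here is invented for the example; it is NOT the
# actual wording of Section 230 or of the DoJ draft.
import difflib

current_law = [
    "No provider shall be treated as the publisher of",
    "information provided by another information content provider.",
]

draft_revision = [
    "No provider shall be treated as the publisher of",
    "information provided by another information content provider,",
    "except as provided in subsection (d).",
]

# A redline shows what the draft would add, remove, or change relative to
# the text already on the books, which is exactly what a unified diff shows.
for line in difflib.unified_diff(
    current_law, draft_revision, fromfile="current law", tofile="DoJ draft", lineterm=""
):
    print(line)
```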
Every possible consequence we're going to outline is highly speculative. Some of it might hold up in court, some might not. We're not lawyers, and there might be existing precedents about some of this wording; we tried to note the ones we're aware of.
Oh - one last note before we start. It's ironic for us to be defending Section 230. When the Communications Decency Act passed in 1996, we were upset because, compared to the status quo, it indeed seemed like censorship. We protested. (Yes, we were intense teenagers.)
We would support thoughtful reform to the CDA. However, that is not what's on the table right now.
Okay, so first off: This entire draft is written as if it primarily applies to social media platforms. However, it is our understanding that Section 230 applies to all forms of online communication, including chat rooms, email, ... everything.
Some portions of the draft seem nonsensical when applied to things that aren't social media. In particular, (d)(4) is very strange if you imagine somebody filing a takedown notice about an email - what would that even mean?
Note that, although this draft is pitched as a "repeal", it actually creates substantial new processes and procedures. It is a repeal in the sense that the protections for free communication online are being repealed, and replaced with various oppressive bullshit.
It is not a repeal in the sense that there wouldn't be a Section 230 after it took effect. There would, but its meaning would be the opposite of what's there now.
Okay, now let's go section by section in the redline. The newly-added section (c)(1)(C) creates an "objectively reasonable belief" test for content moderation decisions. This test would be really, really bad.
Right now, under the law, if you run a chat room, you are allowed to kick people from it, and they can't do anything about it. Under this draft, it's likely that section (c)(1)(C) would allow them to sue and argue that your reason for kicking them was not objectively reasonable.
This was probably motivated by POTUS wanting to sue Twitter, but realizing that Twitter's actions are allowed by the law. In the Twitter case, ironically we might be on POTUS's side, although we would probably formulate a weaker test than this.
However, that's because we believe that social media platforms are special by virtue of the fact that a tweet is visible to *millions of people*. Any real reform would have to draw a clear distinction such that it *only* applies to the megacorps.
Moving forward. There are some small changes in section (c)(2) which basically just go along with the change we just described. They give it teeth by defining more specifically what counts as a moderation action ("moderation action" is our term, not the draft's).
Previously this wording was pretty loose because this section was saying that people doing this kind of moderation work were *shielded* from liability. The draft now establishes that people *are* liable for it.
Okay, moving on. Section (d) is entirely new, created by this draft bill. It's creating an exclusion from the previously-established immunity; in other words, it's making people liable for stuff.
Section (d)(1), interestingly, seems like it mostly affects hate sites. We say that because those are the only places, in practice, that we see people trying to tempt other people into saying illegal things, which is what this wording is about.
It's quite broad wording; if enforced fully, it would, in our opinion, amount to a prior restraint on speech. We think it would be bad. It's important, when discussing any sort of change to rules or laws, to think through what it would do long-term, not just what it would do right now.
That means that sometimes people wind up defending their political opponents, or hurting their political allies, because the rule change that would get them what they want right now would have bad effects down the road. It's important to be willing to do that; that's what it means to have laws.
Even though its proximate effect would be to limit the activities of hate sites, we think that section (d)(1) is too strong. It says that you cannot "promote, solicit, or facilitate material or activity" that somebody else might do which might violate the law.
That could mean, for example, that the Lockpicking Lawyer could no longer post lock-picking tutorials to YouTube, as he presently does. It could mean that lawyers couldn't post legal advice at all.
Lots of things *could* violate the law; it's important to be able to talk about them anyway.
For example, it's important to be able to talk about whether they *do* violate the law.
This seems like quite an extreme consequence, so hopefully SCOTUS wouldn't uphold it in that strongest form, but... like we said, we're outlining the plain meaning of the draft, as best we understand it. How a court would handle it is a larger question.
Okay, we're only about a third of the way through our notes but we need to take a break. We'll be back in about two hours to finish the thread. Thanks for reading!
Sorry, one thing we realized we should clarify: hate sites are the *clear-cut* cases, we'd say - the ones where a court could reasonably conclude that the intent of the speech on them is to encourage unaffiliated strangers to commit crimes.
The other examples we mentioned are illustrative of what a *stronger* interpretation could mean.
Thanks to @luisbruno for linking to this summary of what Section 230 does right now. Great point, we hope our readers take the time to absorb that. https://twitter.com/luisbruno/status/1344071445157195777
Okay, we're going to resume going through our notes on this draft legislation. As we noted at the start of the thread: This is a draft from a couple months ago by the DoJ, that has not been introduced in Congress.
The only relevance of this analysis to current events is that the draft gives insight into what the GOP is talking about when they say "repeal section 230".
So. Section (d)(2)(C) of the draft appears to require sites to both delete disputed posts and also retain them. That looks like a contradiction, but "delete" here mostly means "cease to distribute"...
... but if this were implemented naively, such that disputed posts can only be seen by site admins, there are questions about whether they are still, in a sense, distributing the post to their admins.
The draft doesn't provide guidance on what type of mechanism would actually satisfy these requirements. It might be that it's impossible to.
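To make the tension concrete, here's a minimal sketch of the kind of soft-delete mechanism a site might reach for. Everything in it - the field names, the admin carve-out - is our own assumption about one plausible implementation; the draft itself specifies no mechanism at all.

```python
# Minimal sketch of one way a site might try to satisfy "delete but retain":
# stop distributing a disputed post while keeping the record around.
# The field names and the admin carve-out are our assumptions about one
# plausible implementation; the draft specifies no mechanism at all.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Post:
    author: str
    body: str
    withdrawn_at: Optional[datetime] = None  # set when a notice arrives

def receive_notice(post: Post) -> None:
    """Mark the post as withdrawn the moment any notice of illegality arrives."""
    post.withdrawn_at = datetime.now(timezone.utc)

def render(post: Post, viewer_is_admin: bool) -> Optional[str]:
    # Ordinary viewers no longer see the post ("cease to distribute")...
    if post.withdrawn_at is not None and not viewer_is_admin:
        return None
    # ...but admins still can, which is exactly the open question above:
    # is showing it to admins still "distribution"?
    return post.body
```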
There's a more worrying point, also about (d)(2)(C). The delete-and-retain obligation kicks in immediately following *any notice* that material is illegal.
That might seem feasible, but in reality, any such notice is only an opinion until a court has ruled on it. So the effect of this provision is to turn "hey, this might be illegal" into something sites must act on IMMEDIATELY, even before any court proceedings.
Basically, this gives concern trolling the force of law.
Any site owner attempting to exercise critical thinking about such demands would be legally liable for using their brain.
Moving on. Section (d)(4). This section would create a process analogous to DMCA notices, but for "defamatory or unlawful material or activity as described in Subsections (d)(2) and (3)".
Generally speaking, we think that DMCA notices have had positive effects but they have also had chilling ones, making people afraid to criticize copyrighted material online. We can reasonably expect that this would do the same, but for criticism of the law.
There's also a bit of wording in (d)(4) which we find worrying for reasons unrelated to how it's applied here: "if it designs or operates its service to avoid receiving actual notice [...] or the ability to comply with the requirements".
This is worrying because it appears to say: you can't escape these rules simply by building a service that carries communication without inserting yourself in a way that gives you the power to censor things.
For example, Signal would run afoul of these rules, as would any end-to-end encrypted platform.
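To see why, here's a bare-bones sketch of an end-to-end encrypted relay. This is our own simplification for illustration, not Signal's actual design; the point is just that the operator only ever handles ciphertext.

```python
# Bare-bones sketch of why an end-to-end encrypted relay can't act on a
# takedown notice: the operator only ever handles opaque ciphertext, so it
# has no way to tell which blob (if any) contains the disputed material.
# This is our own simplification for illustration, not Signal's actual design.
from typing import Dict, List

class EncryptedRelay:
    def __init__(self) -> None:
        self.mailboxes: Dict[str, List[bytes]] = {}

    def deliver(self, recipient: str, ciphertext: bytes) -> None:
        # The relay stores and forwards bytes it cannot read.
        self.mailboxes.setdefault(recipient, []).append(ciphertext)

    def handle_takedown_notice(self, description_of_content: str) -> None:
        # Without the recipients' keys there is nothing meaningful to do here.
        # Complying would mean redesigning the service so the operator CAN
        # read messages, which is exactly the worry in this thread.
        raise NotImplementedError("cannot inspect end-to-end encrypted content")
```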
This wording frightens us because it provides a template for similar provisions in other legislation. This fight is brewing, people, and we're not going to avoid it. We have to assert a strong moral right to have private conversations that the state can't intrude into.
One last note about section (d)(4). As written, it might also wind up affecting search engines. Presently, the US is not a jurisdiction in which suing a search engine will get you very far; there are other democracies where it will.
We don't claim to know what's best for society there, but we strongly suspect that NEITHER the hard-line censorship position NOR the hard-line preservation of information position is fully correct.
The status quo is the result of many years of legal conflict between those positions. This draft - and once again, it's only a draft, there isn't a bill yet - might shift the balance.
One more thing to note before we move on from section (d). Defamation claims are one of the categories of notice that would require takedown. Defamation suits are very popular in SLAPP situations where litigious plaintiffs try to silence their critics.
In the US, in particular, defamation suits are even more common than in other common-law jurisdictions, due to some quirks in the law which make them easier to assert.
Defamation law is important, but a proactive notice-and-takedown system goes much, much too far with it, just as the DMCA system goes too far with copyright. Even when a law is *good*, it should also have *humility*.
Legislatures should be *humble* when enacting laws, and should not create enforcement systems that eliminate even the possibility of dissent. Everyone is wrong sometimes, and even in a democracy, sometimes we're all collectively wrong together.
This humility framing is original to us, by the way, and we'd love to discuss it further with anyone who enjoys it.
Okay, moving on to the remainder of the draft. The remaining comments we have on it all relate to section (g), the definitions section.
Section (g)(5)(A) mandates, in effect, that every site must have a code of conduct. Fine. It also mandates that moderators can never take any action that isn't described, in advance, in the CoC.
Furthermore, it mandates that the CoC can't be based on general principles. It must be specific. So for example, you couldn't say that site members must be civil, without defining civility in highly fact-specific ways.
As people who have been involved in online community ownership for decades, we can say from our own experience that the reality of content moderation is that very nearly every situation that arises is novel in some way.
Even with those decades of experience, we would be unable to write any single set of rules that, if people simply followed them, would result in everyone getting along. That should be obvious, when we say it like that!
Furthermore, we actually believe that rules that are framed as appeals to general principle are BETTER than specific rules - at least, in the specific context of private communities where people come together voluntarily to hang out.
For Twitter and Facebook and YouTube, we believe the opposite: the rules need to be written so that you can KNOW when you're violating them.
One of the most profound evils of this draft legislation is that it tries to treat a chat room with five friends in an identical fashion to how it treats Facebook.
That is simply not appropriate. There is no world in which the same procedural obligations and high caps for liability should apply to five people hanging out as to a multinational corporation with billions of dollars of revenue.
Any legislation that attempts to treat those situations identically can only ever result in making it illegal for people to congregate online anywhere that *isn't* run by a multinational corporation with thousands of lawyers on staff.