I’ve been reading lots recently about the interaction between First Amendment law and free speech principles with respect to online services in light of the events of the last few weeks.

And I have thoughts (MY OWN). So, I’m sorry ... a thread 1/25
One of the main reasons I think users are best served by a recognition that social media services have 1st Amendment rights to curate the content on their sites is because many users want filtered content, whether by topic, by behavior, or otherwise. 2/
So online services should have the right to do this filtering, and to give their users the tools to do so too. For more detail see our Prager U amicus brief https://www.eff.org/document/prager-university-v-google-eff-amicus-brief 3/
So, I disagree with my friends (and others) who say that every online service should apply First Amendment rules, even though they cannot be required to do so. There are both practical and policy reasons why I don’t like this. 4/
Most obviously, the 1st Amendment reflects only one national legal system when this is inherently an international issue. So it’s politically messy, even if you think a 1st Amendment-based policy will be most speech-protective (though probably only for non-sexual speech). 5/
And even when the 1st Amendment provides clear categorical rules, they are really hard to apply without the full consideration of evidence & context that courts are well equipped to undertake, but which social media companies are not, especially at scale. 6/
And that doesn’t even consider speech that may be restricted under strict or intermediate scrutiny schemes. We should not expect, or want, even large well-resourced companies to take on a quasi-judicial role for those types of ends-means balancing tests. 7/
And no – automation does not solve this problem. There is no magic robot that distinguishes protected speech from unprotected speech. And I doubt there ever will be. (And yes, I know about CSAM hashing, but that only detects images after they are known to be unprotected.) 8/
On a policy level, most online services, even those that spout lofty free speech rhetoric, don’t want a no-holds-barred service and don’t think their users do either. 9/
Some, for example, don’t want their users to be harassed on their sites or to harass others, even if it is protected speech. That doesn’t mean it’s easy or pleasant to define harassment. But they want to be able to try, at least sometimes. 10/
Parler, as an example, holds itself out as a “free speech platform” but bans nudity and “indecent” speech. It has said it follows "FCC rules," though it seems to only mean the indecency regime (but without the 10 PM to 6 AM safe harbor). And those are hardly clear 'rules.' 12/
So, I disagree with those who urge a rethinking of 1st Amendment doctrine so that online services will have to remove more speech. They all already remove a lot of legal speech. You just want them to remove different legal speech. That doesn’t require rethinking the doctrine. 13/
Of course, users are also best served when such curatorial policies are clear, predictable, & consistently applied, & when decisions can be effectively appealed. & users are even better served when there are options: when users can choose among multiple curatorial styles. 14/
But it's bad that a few sites have so much power to control speech globally. I understand that allowing sites to moderate means there will be a lot of ‘censorship’ of protected speech. And historically silenced voices will inevitably encounter further silencing. 15/
This creates a huge (and of course unfunded) burden on these speakers and on civil society to advocate to challenge unjust moderation decisions and call out unjust policies and practices. 16/
Efforts, like the Santa Clara Principles, to establish a human rights framework for content moderation can help to address this. 17/ https://santaclaraprinciples.org/ 
And any single service’s decisions would be less consequential if users had more options and more control–if social media, for example, was a more federated and interoperable system, & if users had tons of tools they could use to control their own online experiences. 18/
The Trump decisions scarily showed the power sites like Twitter & Facebook wield. That raises serious free speech concerns in almost every other context. But as I’ve said, I am more concerned about state censorship of non-state speech, than the other way around. 21/
It was odd to see marketplace of ideas fans (of which I am not) bemoan the Parler decisions when in some ways it was an example of “inferior” ideas being culled from the marketplace by widespread non-state condemnation of those ideas. 23/
The market model assumes that some ideas get rejected. The problem, of course, is that these decision makers don’t adequately represent “the market.” 24/
Now that I’ve got that off my chest, I can concentrate on eye-rolling in response to those who spent 4 years complaining about the wrong-headed bothsidesism of media coverage yet now want to see a new Fairness Doctrine for online media. THE END.
You can follow @davidgreene.