One of many, many problems with the 230 discourse is that a lot of it seems to assume there’s some obvious bright line for what speech is criminal or tortious, such that (for instance) it’s unproblematic to hold platforms liable once they’ve been “notified” that content crosses it.
But that’s very clearly not the case. Whether speech is protected or prosecutable (or subject to civil liability) very often depends on things like the speaker’s knowledge & intent.
Courts—via mechanisms like discovery obligations & compelled testimony—can sometimes ferret out details of knowledge & intent. Did the speaker know their claims were false? Did they foresee & intend the consequences of their speech? At trial, maybe you can prove those things.
Social media platforms with billions of users all over the globe are not well positioned to hold trials. They typically won’t know whether a given post is true or false, let alone what the user’s mental state was when they posted it.
Yet it’s common to hear proponents of axing 230 breezily suggest that platforms should be liable once they’ve been “made aware” of bad content, as though there’s some simple algorithm they can apply to distinguish protected from unprotected speech.
These are questions it often takes courts months or years to settle with the aid of trained lawyers, paid investigators, and compulsory process to obtain documents and testimony. There is no app for that.