Someone I know has had some quite useful COVID-related posts removed from Medium, LinkedIn, and Nextdoor—they've been deemed "COVID misinformation". (It's not what the WHO endorses!)
I think his posts overstate the case for the treatment he’s arguing for. They aren’t without justification, however: he has considerable scientific evidence on his side (including a small RCT).
This year, we’ve come to better appreciate the fallibility and shortcomings of numerous well-established institutions (“masks don’t work”)… while simultaneously entrenching more heavily mechanisms that assume their correctness (“removing COVID misinformation”).
False claims about COVID (or any disease) are, of course, undesirable. But, leaving aside the merits of content removal as the response, have we really figured out a sensible applied epistemology for operationalizing such designations?
And, more broadly, aren’t his efforts something close to a paradigmatic example of where society should benefit from the internet’s broadly participatory nature? Is the equilibrium where that’s stifled really the optimum?
Science is not a coherent, monolithic edifice. It changes daily and scientists don’t all agree with each other. A lot of ex post correct views first show up as claims that look ridiculous by the standards of the time. https://nintil.com/discoveries-ignored/
These platforms have tough jobs, no doubt. But I’m worried that the embrace of “misinformation” as a newly illegitimate category may have costs that are considerably greater than what’s apparent at the outset.