The real threat to Section 230's future comes not from people who are angry about being censored, but from people who are angry that platforms were weaponized and did not do enough to reduce the harms.
And that's a debate that we need to have. I think there are a lot of challenges in addressing these problems by simply changing Section 230, but it's reasonable to examine how the liability framework affects the distribution of misinformation and other harmful content.
But to have this debate, we have to operate from a factual record, which has largely been missing from the debate around 230 for the past few years. What types of moderation are possible? How does moderation affect other speech? What do platforms do in response to changes to 230?
What protections would platforms have under the First Amendment and common law without 230? Can federal criminal laws (which are exempt from 230) be changed to improve accountability? How do we prevent the dominant platforms from becoming even more dominant?
More than a year ago, I called for a congressionally created commission to examine 230 and platform moderation. I think that's more vital than ever: https://www.theregreview.org/2019/10/10/kosseff-understand-internets-most-important-law-before-changing-it/