For my part, I think we need to have honest public policy conversations about prioritization in moderation. Major companies could & should devote more resources to identifying risks of offline violence in countries around the world and developing mitigation/prevention strategies.
(This is much more like intelligence work than standard content moderation, something we’ve learned from various companies’ efforts to grapple with coordinated election interference these past 4 years. Challenging, resource-intensive, and a complement to regular moderation.)
But even major companies will face a limit, eventually, to the resources they can devote to contextual analysis in moderation (there’s a finite number of potential mods in the world & “AI” isn’t a magic solution). Smaller competitors face that limit pretty much immediately.
So: prioritization. It seems pretty clear that lots of US-based tech companies are prioritizing the violence and threats to democracy in the US right now, which makes sense for a lot of reasons. But what other issues deserve this level of scrutiny and attention to context?
That’s not a rhetorical question. There are many potential answers: Incitement to violence by other world leaders/political figures/celebrities/regular folks. Hate speech that can lay the groundwork for eventual violence. Disinformation that warps our shared sense of reality. &c.
It’s long been clear that a contextualized response is, in some circumstances, absolutely necessary to prevent violence, & that it’s crucial for discerning between speech that incites and speech that educates/documents/informs. But that contextualized response is not replicable at scale.
In that sense, Trump’s suspensions aren’t really generalizable to the broader debates about moderation & intermediary liability. There, we need to grapple with systemic and scalable impacts. The kind of analysis Twitter did here will never undergird every moderation decision.
(More relevant to the Section 230 & DSA discussions are the parallel stories about Apple & Google kicking Parler out of their app stores for failing to moderate sufficiently. Moderation by infrastructure-ish providers raises much more significant free expression concerns.)
(But that's a topic for a different thread! Thanks for making it to the end of this one.)