The Trump ban has got the world (and her grandmother) talking about online content: who has the right to regulate it, and what role Big Tech platforms should play in policing the web.

So far, so good, no?
In Europe, this discussion has been raging for years (more than in the US, imho), with different countries pushing varying degrees of regulation onto platforms -- and a big dose of self-regulation to boot.

It stands as a lesson for how the rest of the world *may* respond.
So let's go through the options. Be warned: none of them are perfect, none of them will "fix" the problem (whatever that is), and all come with downsides.

You wish you hadn't got into this quagmire, amirite?
First stop: Germany and its NetzDG hate speech law. What's not to love, right? Hefty(ish) fines, 24-hour takedowns. Take that, online content! Yet the rules *only* applied to already-illegal content under German law, like Nazi propaganda, & didn't touch the waves of disinformation out there.
More importantly, the German post-WWII context is very, very specific, and it's impossible that, say, the US would follow suit b/c of its First Amendment tradition.

So it's a good start, but not really hitting it out of the park.
To make matters more complicated, Germany then went ahead and asked the companies to proactively inform law enforcement when illegal content popped up, saying ppl were still not safe. So even these rules are a work in progress.
Now, let's scooch over to Brussels, where @EU_Commission, in 2018, unveiled its own "code of practice" on disinformation: a voluntary regime that all the companies quickly signed up to, and which involved providing greater transparency on what was happening on their networks.
But it quickly fell apart b/c, surprise surprise, voluntary regimes are only as good as their enforcement powers. The companies were more transparent about what was going on, but it really didn't move the needle. Still, importantly, it was a first foray into disinformation regulation.
Then let's slide over to France, home to some of the nastiest disinformation out there (hooray!). But even there, Paris shied away from going beyond policing already-outlawed content like terrorist propaganda. The French talk a good game, but in the end, they too stuck to existing rules.
Yet France's constitutional court recently struck down proposals that would have forced social media companies to remove hateful and terrorist content within hours. Why? Because they went against freedom of speech principles. (Hey America, you're not the only one w/ those, jfyi) https://www.politico.eu/article/french-constitutional-court-strikes-down-most-of-hate-speech-law/
Then there's the UK. Problem: London fudged its proposals, basically limiting enforcement to existing illegal content and then asking the companies to define what that illegal content was. All this while leaders like @MattHancock claimed, in a post-Jan. 6 world, that more had to be done.

Sure, sure.
Back in Brussels, the EU's newly proposed Digital Services Act takes a broader swing. But there, too, officials mostly limited their proposals to removing "illegal content," not wider disinformation. There were improvements, though: mandatory auditing of social media companies, greater data access for researchers and beefed-up regulatory enforcement.
For those looking for a wider swing at disinformation, hopes now rest on how this clause will be implemented. Check out 1(c): "intentional manipulation of (social media) service... w/ negative effect... on the electoral process."
That's a pretty wide scope, potentially allowing regulators to hold social media companies to account for widespread disinformation campaigns, the kind of accountability that *could* have changed how the Jan. 6 riots went down. I'm skeptical, but that's how things are shaking out.
The key issue is this: everyone wants online content to be regulated. And on the most heinous stuff like terrorist propaganda, there's already a consensus on takedowns (albeit the platforms don't do very well at identifying non-English posts, imho).
But 90% of the content that ppl want regulated actually isn't illegal. Yes, you might not like what ppl are saying online, but what law is it actually breaking?
So far, politicians have outsourced those decisions to the companies, only to (irony klaxon) turn around, dismayed, when platforms ban the likes of Trump.

What do you want them to do when you have not really identified/regulated the actual problem at hand?
And just in case you think these are nation-specific issues, that what happens in France doesn't affect the US, or that Germany's NetzDG isn't relevant to the UK, I give you this: hate speech campaigns are now global, running off the same playbook https://www.politico.eu/article/us-nationalists-far-right-europe-elections-digital-facebook/
Now let's jump over to the US where "let's do something about Section 230!" has become, for good reason, part of the response to the Jan. 6 riots.
Let's leave aside the question of whether digital policymaking is at the center of the Biden administration's to-do list (it is not). But say it were: what do you want to do about it, and would it solve the problem?
I appreciate the US context is different from Europe's (though not as different as many would say it is). But is the goal to "remove" illegal content? To clamp down on harmful, but legal, content? To get the companies to self-regulate, or to fall under a federal supervisory system?
All are legitimate questions (albeit the Dems & Reps aren't exactly going to see eye-to-eye on any of that). But, as Europe has shown, none of this is easy, none of it can be done overnight, and none of it should be seen as a "win" for democratic principles.
There's so much good work being done on this, and none of it says a kneejerk response to content regulation is a good thing. For more, see these threads: https://twitter.com/alexstamos/status/1347942615509946374?s=20 & https://twitter.com/daphnehk/status/1347958005816381443?s=20
Weirdly, the UK House of Lords (yes, unelected, blah blah) did a good report on this https://publications.parliament.uk/pa/ld5801/ldselect/lddemdigi/77/7702.htm. Ditto @Graphika_NYC & @ISDglobal, who have been doing excellent work flagging disinformation campaigns. You should really check them out.
Rant over. Thoughts appreciated.