I've been thinking a lot lately about the parallels between law school and content moderation.

Specifically, the way I think about the law--the rules that govern both our offline and online worlds--has shifted dramatically over the course of three years.

Long 🧵
To me, law always seemed straightforward. Don't murder. Don't break a contract. Don't defame people. Don't invade others' privacy.

I remember the days of proclaiming something illegal, looking up a law online, pointing to a random code section, and feeling so sure of myself.
You see this a lot on Twitter. "You're defaming me" in response to an insult-laden tweet. "You're violating xyz law" with a link to a state code that's outside both users' jurisdictions.

The false platform/publisher distinction that comes up in #Section230 debates is another example.
The most mind-blowing aspect of law school was realizing that the law isn't so black and white. Decades of litigation, judicial interpretation, and convoluted, ever-changing legislation make the law anything but straightforward.
Murder, something that should be just about as clear as day to all of us, was one of the hardest topics to wrap my head around 1L year. The tiniest yet crucial nuances when it comes to knowledge and intent make a world of difference.
Not to mention, each state has its own definitions of murder, exceptions, degrees of murder, manslaughter, etc.

The same goes for defamation. I thought someone defamed me every time they told a lie about me or slandered my good name. Until I took Internet law...
...and I learned about the several elements that go into proving defamation, and burden shifting, and listener perception, and the many MANY defenses and exceptions to defamation that make it so incredibly difficult to be successful in court (esp. if you're a public figure).
Law school drew back the curtain and exposed me to a wishy-washy, totally gray, distorted, deceptive mess of a legal landscape.

After three years, I'm walking away with the realization that there is so much about the law I don't know and will probably never understand.
Truthfully, I find that to be unfortunate. The laws that govern our livelihoods have become so incredibly inaccessible that we resort to paying legal professionals astronomical fees to cut through the noise (most of which they created in the first place).

But I digress.
In the same way law school blew my mind about the law, my professional and academic experiences w/Trust and Safety continue to blow my mind about content moderation.

Again, I thought it was straightforward. If you break the community guidelines, you get banned.
And again, you see this a lot on Twitter. Users rightfully question how content that seems like bullying, harassment, or hate speech isn't being taken down from the service. Or we wonder how content that we're absolutely sure was well within the guidelines came down.
To us users, it seems like the line between rule-breaking and legitimate content couldn't be brighter.

And yet, just like our offline laws, that line couldn't be more opaque...or in some cases, non-existent.
The content you and I might perceive as "hate speech" might be subject to an elaborate system of processes and flows that result in different outcomes depending on the moderator, their world experiences, cultural upbringing, biases, and more.
Not to mention, that one little 280-character tweet might be subject to other considerations such as the public interest, various evaluations pertaining to human rights, the Santa Clara Principles, criteria for preserving democratic participation, social justice, and more.
And just like each state has its own definition of murder, each website has its own definitions when it comes to bullying, harassment, threats, political content, misinformation, hate speech, etc.

Take a moment to consider the content you can get away with on Twitter versus Roblox.
And just like the law is subject to ever-changing judicial interpretations, the same goes for content moderation. As services learn more about their users, and as society (and content) evolves, so too must those definitions and the elaborate processes and flows.
That's what makes content moderation hard. You can't quite understand it until someone draws back the curtain.

And just as the law greatly suffers from that inaccessibility problem, so too does content moderation. Both can be mitigated w/more transparency.
The good news is unlike the law, you don't need a fancy degree to understand content moderation.

My moment of realization came from attending content moderation conferences, where I got to hear experts speak at length about their roles in online community building.
You can follow @jess_miers.