Of the 20 most-engaged Facebook posts containing the word "election" over the past week, all of them (literally 100%!) are from Trump and have labels indicating that they're false or misleading.
What are the labels doing, exactly?
Meanwhile, YouTube videos containing false voter fraud allegations have been viewed *rubs eyeballs* 138 million times according to one estimate. https://www.nytimes.com/2020/11/18/technology/election-misinformation-often-evaded-youtubes-efforts-to-stop-it.html
Voter fraud conspiracy theories pushed by a sitting president are a bigger problem than social media companies alone can solve, but man, ranking information based on how interesting it is has consequences. https://twitter.com/_cingraham/status/1329515194121289733
The problem with tech companies "fighting" misinformation is that false things are generally way more interesting than true things. If your system is built around an engagement-ranked feed, you can label and fact-check all you want and it won't move the needle much.
Sidenote: this argument (that engagement rises as content gets worse, barring manual intervention) was made a few years ago by a promising young data scientist, "M. Zuckerberg." https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-enforcement/10156443129621634/