Today @GordPennycook & I wrote a @nytimes op ed
"The Right Way to Fix Fake News"
https://www.nytimes.com/2020/03/24/opinion/fake-news-social-media.html
tl;dr: Platforms must rigorously TEST interventions, b/c intuitions about what will work are often wrong
In this thread I unpack the many studies behind our op ed
1/
"The Right Way to Fix Fake News"
https://www.nytimes.com/2020/03/24/opinion/fake-news-social-media.html
tl;dr: Platforms must rigorously TEST interventions, b/c intuitions about what will work are often wrong
In this thread I unpack the many studies behind our op ed
1/
Platforms are under pressure to do something about misinformation. It would be simple to rapidly implement interventions that sound like they would be effective.
But just because an intervention sounds reasonable doesn’t mean that it will actually work: Psychology is complex!
2/
For example, it's intuitive that emphasizing a headline's publisher (ie its source) should help people tell true from false. Low-quality publisher? Question the headline.
But in a series of experiments, we found publisher info to be ineffective!
Details: https://twitter.com/niccdias/status/1217473772166381573?s=20
3/
What about warnings on articles that fact-checkers mark as false? Seems like that should reduce belief - and it does!
The problem: Most false headlines never get checked (fact-checking doesn't scale) & users may see the lack of a warning as implying verification!
https://twitter.com/DG_Rand/status/1236102072795308033?s=20
4/
Another example: General warnings to "Watch out for fake news!" Should help keep users on their toes, right?
But this can lead to people not just disbelieving false headlines, but also rejecting TRUE headlines (ie being generally suspicious)
https://link.springer.com/article/10.1007%2Fs11109-019-09533-0
5/
These are cases where intuitively compelling interventions may actually be problematic. It's essential for platforms to test whether the results from these experiments generalize to actual behavior on-platform
But also, intuitively UNappealing interventions may actually work well!
6/
Take crowdsourcing: When Facebook announced they would promote content from news outlets that users said they trusted, everyone thought it was a terrible idea!
But it turns out layperson source ratings actually agree quite well with fact-checkers:
https://twitter.com/DG_Rand/status/1089999404898095105
7/
Crowdsourcing is also robust against "gaming":
1) Poll random/selected users rather than letting anyone volunteer an opinion - this prevents coordinated attacks
2) Knowing that ratings will influence ranking doesn't produce gamed responses - most people don't care enough about politics
https://psyarxiv.com/z3s5k/
8/
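Point 1 above can be illustrated with a toy simulation (all numbers hypothetical): under open voting, a coordinated group self-selects into the rater pool and can dominate it, while under random polling an attacker is sampled only at their base rate in the user population.

```python
# Toy sketch (hypothetical numbers): open voting vs random polling
# when a coordinated group tries to boost a low-quality outlet.
import random

random.seed(0)
USERS = 1_000_000      # platform user base
ATTACKERS = 5_000      # coordinated group, all eager to vote
SAMPLE = 1_000         # raters polled at random

# Open voting: attackers self-select, honest users mostly don't bother.
organic_voters = 2_000
open_share = ATTACKERS / (ATTACKERS + organic_voters)

# Random polling: attackers appear only at their population base rate.
# (User ids 0..ATTACKERS-1 represent the attackers.)
sampled = random.sample(range(USERS), SAMPLE)
poll_share = sum(1 for u in sampled if u < ATTACKERS) / SAMPLE

print(f"attacker share, open voting:    {open_share:.1%}")
print(f"attacker share, random polling: {poll_share:.1%}")
```

The attackers go from a large majority of open voters to roughly their base rate (~0.5%) of a random sample - which is why the polling design matters.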
And of course, sometimes experiments find that interventions DO work the way intuition suggests
For example, when people think more carefully, they are less likely to believe false headlines (but not less likely to believe true headlines)
https://twitter.com/BenceBago/status/1220099034465144838?s=20
9/
Similarly, nudging people to think about the concept of accuracy makes them less likely to share misinformation
This is the case in survey experiments (eg looking at sharing intentions for false and true headlines about COVID-19)
https://twitter.com/DG_Rand/status/1240010913270370305?s=20
10/
...and also in an actual field experiment on Twitter where we sent an accuracy nudge message (asking them to rate the accuracy of a random headline) to over 5k users and found an increase in the quality of the news they subsequently shared
https://twitter.com/DG_Rand/status/1196171145227251712?s=20
11/
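A rough sketch of the outcome measure in a study like this: score each shared link by the quality rating of its domain, then compare a user's average before vs after the nudge. The domain scores and URLs below are hypothetical placeholders, not the study's actual ratings.

```python
# Hedged sketch: average quality of news domains a user shares,
# before vs after an accuracy nudge. All scores/URLs are hypothetical.
from urllib.parse import urlparse

# Hypothetical trust scores for news domains (0 = low, 1 = high quality)
DOMAIN_QUALITY = {
    "reliablenews.com": 0.9,
    "oknews.com": 0.6,
    "fakenews.biz": 0.1,
}

def mean_quality(shared_urls):
    """Average quality score over the rated domains a user shared."""
    scores = [DOMAIN_QUALITY[d]
              for d in (urlparse(u).netloc for u in shared_urls)
              if d in DOMAIN_QUALITY]
    return sum(scores) / len(scores) if scores else None

before = ["https://fakenews.biz/a", "https://oknews.com/b"]
after = ["https://oknews.com/c", "https://reliablenews.com/d"]
print(mean_quality(before), "->", mean_quality(after))
```

An increase in this average after the nudge is the kind of "quality of news shared" effect the tweet describes.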
TAKE-HOME
Platforms need to do rigorous tests - and if they can show they are doing so, the public needs to be patient
The key: Platform transparency about evaluations they conduct internally, and collaboration with outside independent researchers who publish
12/
My group, together with @j_a_tucker's and Paul Resnick's groups, is having a great experience in such a collaboration with Facebook around crowdsourcing https://www.axios.com/facebook-fact-checking-contractors-e1eaeb8b-54cd-4519-8671-d81121ef1740.html
I hope FB, and other platforms, will do more of these!
13/
Finally, if you want to learn more, below is an always-updating doc with links to ALL of the papers @GordPennycook and I have written about misinformation / fake news (most of which also have Twitter thread summaries)
https://docs.google.com/document/d/1k2D4zVqkSHB1M9wpXtAe3UzbeE0RPpD_E2UpaPf6Lds/edit?usp=sharing
end/