Today @GordPennycook & I wrote a @nytimes op ed

"The Right Way to Fix Fake News"
https://www.nytimes.com/2020/03/24/opinion/fake-news-social-media.html

tl;dr: Platforms must rigorously TEST interventions, b/c intuitions about what will work are often wrong

In this thread I unpack the many studies behind our op ed

1/
Platforms are under pressure to do something about misinformation. It would be easy to rapidly implement interventions that sound like they would be effective.

But just because an intervention sounds reasonable doesn’t mean that it will actually work: Psychology is complex!

2/
For example, it's intuitive that emphasizing a headline's publisher (i.e. its source) should help people tell true from false. Low-quality publisher? Question the headline.

But in a series of experiments, we found publisher info to be ineffective!

Details: https://twitter.com/niccdias/status/1217473772166381573?s=20

3/
What about warnings on articles that fact-checkers mark as false? Seems like that should reduce belief, and it does!

The problem: Most false headlines never get checked (fact-checking doesn't scale) & users may see the lack of a warning as implying verification!

https://twitter.com/DG_Rand/status/1236102072795308033?s=20

4/
These are cases where intuitively compelling interventions may actually be problematic. It's essential for platforms to test whether the results from these experiments generalize to actual on-platform behavior.

But also, intuitively UNappealing interventions may actually work well!

6/
Take crowdsourcing: When Facebook announced they would promote content from news outlets that users said they trusted, everyone thought it was a terrible idea!

But it turns out that layperson source ratings actually agree quite well with fact-checkers' ratings:
https://twitter.com/DG_Rand/status/1089999404898095105

7/
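Rough sketch (hypothetical outlets & scores, NOT the study's data) of what "agree quite well" means here: average layperson trust per news domain, correlated with fact-checker ratings of the same domains:

```python
# Illustrative only: made-up trust scores for a few hypothetical domains.
import numpy as np

domains = ["mainstream1", "mainstream2", "hyperpartisan1", "fakenews1"]
layperson_mean = np.array([0.81, 0.74, 0.35, 0.12])    # avg rating across many laypeople
factchecker_mean = np.array([0.90, 0.85, 0.30, 0.05])  # avg rating from fact-checkers

# Agreement summarized as a simple correlation across domains
r = np.corrcoef(layperson_mean, factchecker_mean)[0, 1]
print(f"layperson vs fact-checker correlation: r = {r:.2f}")
```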
Crowdsourcing is also robust against "gaming":
1) Poll random/selected users rather than letting anyone volunteer an opinion. This prevents coordinated attacks.
2) Knowing that ratings will influence ranking doesn't produce gamed responses: most ppl don't care that much about politics.
https://psyarxiv.com/z3s5k/ 

8/
And of course, sometimes experiments find that interventions DO work the way intuition suggests

For example, when people think more carefully, they are less likely to believe false headlines (but not less likely to believe true headlines)
https://twitter.com/BenceBago/status/1220099034465144838?s=20

9/
Similarly, nudging people to think about the concept of accuracy makes them less likely to share misinformation

This is the case in survey experiments (e.g. looking at sharing intentions for false and true headlines about COVID-19)

https://twitter.com/DG_Rand/status/1240010913270370305?s=20

10/
...and also in an actual field experiment on Twitter, where we sent an accuracy-nudge message (asking users to rate the accuracy of a random headline) to over 5k users and found an increase in the quality of the news they subsequently shared

https://twitter.com/DG_Rand/status/1196171145227251712?s=20

11/
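Rough sketch (made-up domains & scores, NOT our actual data) of the outcome measure: the average quality of the news domains a user shares, before vs. after the nudge:

```python
# Illustrative only: hypothetical domain-quality scores (e.g. from fact-checker ratings)
domain_quality = {"reputable.com": 0.9, "clickbait.net": 0.3, "fakenews.biz": 0.1}

def mean_shared_quality(shared_domains):
    """Mean quality score of the rated domains a user shared."""
    scores = [domain_quality[d] for d in shared_domains if d in domain_quality]
    return sum(scores) / len(scores) if scores else None

before = ["clickbait.net", "fakenews.biz", "reputable.com"]
after = ["reputable.com", "reputable.com", "clickbait.net"]

print("before nudge:", mean_shared_quality(before))  # lower average quality
print("after nudge:", mean_shared_quality(after))    # higher average quality
```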
TAKE-HOME

Platforms need to do rigorous tests, and if they can show they are doing so, the public needs to be patient

The key: platform transparency about the evaluations they conduct internally, and collaboration with outside independent researchers who publish their findings

12/