NEW PAPER

"Measuring Misperceptions?"

The one-word answer is "no." There is a substantial gap between the beliefs researchers describe and the beliefs surveys measure.

Paper: https://m-graham.com/papers/Graham_measuringMisperceptions.pdf
Presenting today (Tu 2/17) at 2 pm Eastern: https://jawspolisci.network/

[1/n]
To measure how meaningful beliefs are, attitudinal survey researchers have long examined response stability.

If a survey response reflects what one really believes, one should — at bare minimum — express the same belief when asked the same question a second time.
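To make this concrete, here is a minimal sketch of a test-retest stability check in Python. The data and the two summary statistics are illustrative assumptions, not taken from the paper:

# Hypothetical 0-100 certainty ratings from the same respondents,
# asked the same question in two survey waves. (Made-up data.)
from statistics import correlation  # Python 3.10+

wave1 = [100, 100, 70, 50, 100, 30]
wave2 = [100, 60, 80, 40, 90, 50]

# Two simple stability summaries: the wave-1/wave-2 correlation and
# the share of respondents who repeat their answer exactly.
print(round(correlation(wave1, wave2), 2))
print(sum(a == b for a, b in zip(wave1, wave2)) / len(wave1))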

[2/n]
Evidence on response stability has been absent from research on misperceptions and misinformed beliefs. This is conspicuous given the received wisdom. Journal articles describe deep, firm, steadfast, confidently held beliefs. Pop pieces spotlight the truest of believers.

[3/n]
The paper shows that surveys don't come close to measuring beliefs of this kind. Asking people their confidence level can do some good. At best, those who claim to be 100 percent certain of falsehoods exhibit moderate temporal stability, kind of like a "miseducated guess."

[4/n]
The same pattern of moderate stability appears on general knowledge questions with plausible but incorrect answers.

For example, claims to be 100 percent certain that vaccines cause autism are about as stable as claims to be certain that electrons are larger than atoms.

[5/n]
The most successful misperception Q was on COVID-19's origin. Those who said they were 100 percent sure that the virus was created in a lab were, on average, 86 percent sure when asked again. Their betting behavior suggests somewhat less self-assurance (see paper).
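The 86 percent figure is just a conditional average: restrict to respondents who reported 100 percent certainty in the first wave, then average the certainty they report in the second wave. A sketch with made-up numbers:

# Among those who were "100 percent sure" in wave 1, average their
# stated certainty in wave 2. (Made-up numbers, not the paper's data.)
wave1 = [100, 100, 100, 100, 60]
wave2 = [100, 90, 70, 85, 60]

repeat = [w2 for w1, w2 in zip(wave1, wave2) if w1 == 100]
print(sum(repeat) / len(repeat))  # 86.25 with these made-up numbers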

[6/n]
Other questions were less successful. Climate change deniers were about as stable as continental drift deniers.

(The vaccine and climate questions were copied verbatim from the 2019 ANES pilot; electrons and continental drift, from the GSS.)

[7/n]
This isn't to say that misperceptions & misinformed beliefs aren't a problem.

Instead, the key message of the paper is that taking these problems seriously requires serious attention to measurement. Existing methods dull our sense of the problem by finding it everywhere.

[8/n]
Lessons for measurement:
- Provide hard evidence for strong interpretations of survey measures.
- Default to a cautious posture: guesses & mistaken inferences, not misinformation.
- Don't substitute theory for evidence. Splitting my results by party doesn't change the story.

[9/n]
Broader lessons:
- Partisan response differences aren't evidence of misperceptions.
- We know less about the prevalence and predictors of misperceptions than we think.
- "Correction" interventions teach us more about reducing ignorance than about reducing misperceptions.

[10/n]
If you don't like my results, great news: my framework can be used to prove that they don't hold for your favorite Q or measurement technology.

Survey researchers are smart people. We know that there could be problems. We just haven't had a framework for assessing them.

[11/n]
Let's hope that someday research provides positive proof that a survey has measured misperceptions and misinformed beliefs as they are conventionally defined.

Until then, be careful what lessons you take from work that purports to measure beliefs of this kind.

[12/n]