[CW: Sack up, we're going long.]

So, important bits of pre-election polling sucked.

Why?

Well, I don't run a polling organisation, but I have run a few experiments, and I do spend a lot of time thinking about bullshit and misrepresentation in the presentation of numbers.

So!
NONRESPONSE BIAS

I'm wondering about the handling of people who don't respond to overtures for information.

"Hi, I'm from Big Media Company You Hate, and this is an extra long phonecall about your opinion on the presid..."

*click*
Are nonresponses recorded and balanced? If they are, can they be matched to areas with known demographic/voting patterns? Hope so.

But a lot of the language of polling seems to imply that pollsters rely only on the information they got, not on imputation of the information they didn't get.
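To make that concrete, here's a minimal sketch in Python - every number in it is invented for illustration - of how differential nonresponse skews a raw estimate, and how reweighting to known population shares can pull it back:

```python
# Toy model of nonresponse bias. All numbers are invented.
import random

random.seed(1)
POP_SHARE     = {"A": 0.5, "B": 0.5}    # true shares of the electorate
TRUE_SUPPORT  = {"A": 0.55, "B": 0.40}  # P(supports incumbent) by group
RESPONSE_RATE = {"A": 0.60, "B": 0.20}  # group B keeps hanging up

responses = []
for _ in range(100_000):
    group = random.choices(["A", "B"], weights=[POP_SHARE["A"], POP_SHARE["B"]])[0]
    if random.random() < RESPONSE_RATE[group]:     # did they answer at all?
        responses.append((group, random.random() < TRUE_SUPPORT[group]))

# Raw estimate: average whoever happened to answer.
raw = sum(s for _, s in responses) / len(responses)

# Post-stratified estimate: reweight respondents to known population shares.
n = {g: sum(1 for grp, _ in responses if grp == g) for g in POP_SHARE}
weighted = sum(POP_SHARE[g] * sum(s for grp, s in responses if grp == g) / n[g]
               for g in POP_SHARE)

truth = sum(POP_SHARE[g] * TRUE_SUPPORT[g] for g in POP_SHARE)
print(f"truth {truth:.3f} | raw poll {raw:.3f} | reweighted {weighted:.3f}")
```

The catch: reweighting only rescues you if nonresponse is ignorable *within* each group. If the B-voters who answer differ from the B-voters who hang up, no demographic weight can fix that.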
INTENTION vs. ACTION

How are we doing with the procedural and mechanical factors that sit between "I am answering a poll with intent" and "I am performing a later action consistent with that intent"?

This covers a lot. Voter suppression on one end, and plain old forgetting on the other.
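A toy bit of arithmetic (invented figures, not anyone's actual likely-voter model) shows how differential follow-through can flip a polling lead:

```python
# Toy intent-vs-action arithmetic. Invented figures, not a real turnout model.
intent       = {"X": 0.52, "Y": 0.48}   # stated support in the poll
turnout_prob = {"X": 0.70, "Y": 0.80}   # P(a supporter actually casts a vote)

votes = {c: intent[c] * turnout_prob[c] for c in intent}
total = sum(votes.values())
for c in votes:
    print(f"{c}: intent {intent[c]:.0%} -> vote share {votes[c] / total:.1%}")
# X "leads" 52-48 in the poll but loses 48.7-51.3 at the ballot box.
```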
SPECIFICATION

Obviously, scientists (who aren't crap) are quite keen on you 'showing your working' in what you publish - reporting method sections fully. How did that experiment *actually* work? Are all the steps specified?
The analogue to polling here is straightforward: the less you say, the less we can identify sources of bias. Helps if we know what they are, too.

I wonder if there are methodological details left on the table when polls are quality graded.
JEEZ THERE'S A LOT OF THEM

I saw so many names among the polls reported - dozens, it seems. I get the impression that anyone with a computer can just start hoovering up responses and claiming to be 'nationally representative' these days. And that brings us to...
METHOD OF CONTACT

Landline. Mobile.
Person on phone. Robocall.

In person.
Location. Method of approach.

Online.
... well, everything. Email. Header ad. Sidebar ad. Bleh.

And stitching them together - a mailed pamphlet that says go online, etc. etc. What a mess.
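Toy numbers again, but here's the coverage problem in miniature: if each contact mode reaches a different slice of the electorate, two perfectly executed polls can disagree wildly before a single weighting decision is made.

```python
# Hypothetical coverage bias by contact mode. None of these figures come from
# a real poll: landlines skew old, online panels skew young, and the two age
# groups vote differently.
POP     = {"young": 0.5, "old": 0.5}     # true electorate shares
SUPPORT = {"young": 0.60, "old": 0.40}   # P(supports the candidate)
REACH   = {                              # who each mode can actually reach
    "landline": {"young": 0.1, "old": 0.9},
    "online":   {"young": 0.8, "old": 0.2},
}

def mode_estimate(mode):
    # Composition of the achieved sample under this mode, then its estimate.
    raw = {g: POP[g] * REACH[mode][g] for g in POP}
    total = sum(raw.values())
    return sum(raw[g] / total * SUPPORT[g] for g in POP)

truth = sum(POP[g] * SUPPORT[g] for g in POP)
print(f"truth: {truth:.3f}")
for mode in REACH:
    print(f"{mode:<8} estimate: {mode_estimate(mode):.3f}")
```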
SMALL POWERFUL EFFECTS vs. BIG WEAK EFFECTS

A subpopulation with a strong preference, hidden within a big sample, gets undersampled - then shows up big time later on.

I didn't see much in the way of really granular local-level polls, or about their overall effects.
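Back-of-envelope version, with invented shares: a small subgroup with an intense preference, reached half as often as everyone else, shifts the topline by under a point - invisible nationally, decisive in a close state.

```python
# Back-of-envelope: small intense subgroup, undersampled. All shares invented.
pop_share   = {"general": 0.96, "subgroup": 0.04}
support     = {"general": 0.50, "subgroup": 0.90}  # P(supports the candidate)
sample_rate = {"general": 1.00, "subgroup": 0.50}  # subgroup half as reachable

truth = sum(pop_share[g] * support[g] for g in pop_share)

achieved = {g: pop_share[g] * sample_rate[g] for g in pop_share}
total = sum(achieved.values())
estimate = sum(achieved[g] / total * support[g] for g in pop_share)

print(f"truth {truth:.1%} vs poll {estimate:.1%} -> "
      f"a {truth - estimate:.1%} miss, in races decided by less")
```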
COMPETING PERCEPTION PROBLEMS

It's pretty easy to poll Wyoming and DC. It's much, much harder to poll ... well, all these states suffering from an inability to add up at present. (Note: might not be their fault.)

So, the few high-profile goofs leading people to claim "polls suck" are from the most heavily contested places. But the counterclaim of "90% of our polls were in the right direction" is fatuous AF when ~70% of them were very, very easy. So maybe we can't even have a truly representative and honest post-mortem about previous representativeness.
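The arithmetic of that fatuousness, with made-up proportions:

```python
# Why a high overall "called it right" rate can be hollow. Invented split:
# 70% of state polls are in safe states (always called right),
# 30% are in contested states (called right only half the time).
easy_share, easy_hit = 0.70, 1.00
hard_share, hard_hit = 0.30, 0.50

overall = easy_share * easy_hit + hard_share * hard_hit
print(f"overall hit rate: {overall:.0%}")              # 85% - sounds great
print(f"hit rate where it mattered: {hard_hit:.0%}")   # a coin flip
```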
MORE COMPLICATIONS

Thanks, corona - with sudden changes in mail-in and early voting habits, you have a whole new series of variance sources to consider. Perhaps some of this is plague fear, but with registration drives, some of it is also undoubtedly convenience.
Someone comes to your house to help you fill in the papers, and in some places will even collect them for you? Easy street. I wouldn't line up for hours on a Tuesday to vote. I have a *job*.
And the legality vs. practice of managing this as a macro-process (seems it's allowable in some places, not others) is something that individual people can't tell you. Partisan organisations might be able to, but I doubt if they would. No point tipping off the opposition.
After thinking about it for a few minutes, I do NOT envy pollsters their jobs. This is hard as shit.

And it's likely that the clever ones have thought of most - maybe all! - of the above. They're the pros.

I'm just not convinced there's anything they can do about some of it.
At least, not with the present methods. Perhaps there are young, scruffy renegade pollsters who know all this, but The Man (Big Poll?) isn't a fan of looking stupid, and won't let them thrive in this methodological space.

THAT happens in science. A lot.
Perhaps let this stand as a testament to the fact that measuring the opinion of 300M+ people is really hard when a few hundred thousand somewhere that you missed can show up and make you look a right donkey.

If you've got links to anything sensible about this, pony up.