Seems like the 2020 prediction error *across all states* is similar to that in 2016.

FiveThirtyEight's MSE was 45 in 2016 and 48 so far this time.

For 12 key states, the MSE was 17 in 2016 and 33 so far this time. (If Biden ends up winning PA by 1-2 points, it would be 26ish).
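
For anyone wondering what I mean by MSE here: it's the average of the squared gaps between the predicted and actual state margins. A quick sketch of the arithmetic, with made-up margins (not the real 2020 numbers):

```python
# Minimal sketch of the state-level MSE calculation.
# Margins are in percentage points; these numbers are
# placeholders, not the actual forecasts or results.
predicted = {"PA": 4.7, "WI": 8.3, "MI": 7.9}   # final forecast margin
actual    = {"PA": 1.2, "WI": 0.6, "MI": 2.8}   # certified (or current) margin

errors = [predicted[s] - actual[s] for s in predicted]
mse = sum(e ** 2 for e in errors) / len(errors)
print(f"MSE over {len(errors)} states: {mse:.1f}")
```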
Most notably, the predictions are off by a lot in the Midwest, and all in the same direction.

MI -7, WI -8, OH (90% of votes in) -7

What I'm struggling with is this: if the polls fed into the model are nowhere near the eventual outcome, how could you have done any better?
The RealClearPolitics simple average of polls did better than all the other models in the 12 key states, but for the wrong reason.

Lots of troll/junk polls got included, with the same weight as the more carefully and transparently conducted ones.
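
For concreteness, here's a toy contrast between a simple equal-weight average (RCP-style) and a quality-weighted one. The pollsters, margins, and weights are all made up, just to show how the two aggregates can diverge:

```python
# Toy sketch: equal-weight vs. quality-weighted poll averaging.
# All polls and weights below are hypothetical.
polls = [
    {"pollster": "Pollster A", "biden_margin": 1.0, "quality_weight": 1.0},
    {"pollster": "Pollster B", "biden_margin": 8.0, "quality_weight": 2.5},
    {"pollster": "Pollster C", "biden_margin": 2.0, "quality_weight": 0.5},
]

# Simple average: every poll counts the same, junk or not.
simple_avg = sum(p["biden_margin"] for p in polls) / len(polls)

# Weighted average: higher-quality polls pull the estimate harder.
weighted_avg = (
    sum(p["biden_margin"] * p["quality_weight"] for p in polls)
    / sum(p["quality_weight"] for p in polls)
)

print(f"simple average:   {simple_avg:+.1f}")
print(f"weighted average: {weighted_avg:+.1f}")
```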
Would Ann Selzer have anything to teach us? She's the only pollster who got IA right (Trump +7) when almost everyone else showed Trump +1 or Biden +1.

Does this mean we give the fundamentals another look? (The economy, presidential approval, incumbency status, etc.)
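
For reference, a bare-bones fundamentals model is just a regression of the incumbent party's vote share on those variables. A sketch with illustrative placeholder numbers (not estimates from any real model):

```python
# Minimal fundamentals-only sketch: regress the incumbent party's
# two-party vote share on GDP growth, net approval, and whether the
# incumbent is running. All numbers are illustrative placeholders.
import numpy as np

# One row per past election: [GDP growth %, net approval, incumbent running?]
X = np.array([
    [2.5,  10, 1],
    [1.0, -15, 0],
    [3.2,   5, 1],
    [0.5, -20, 1],
    [2.0,   0, 0],
])
# Incumbent party's two-party vote share in those elections (%)
y = np.array([52.0, 46.5, 53.5, 47.0, 50.0])

# Ordinary least squares with an intercept
X1 = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Forecast for a hypothetical election year
x_new = np.array([1.0, -3.0, -10, 1])   # intercept, growth, approval, incumbent
print(f"predicted incumbent-party vote share: {x_new @ beta:.1f}%")
```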