I think we probably need to get rid of probabilistic election forecasting, not because the models aren't sound in their own right, but because they're made to play a role in election coverage far beyond their actual usefulness.
Like, Nate Silver was correct four years ago that Trump had a pretty decent shot at winning, and he'll also be correct this year if Trump wins, because 1-in-10 events happen with a fair amount of frequency. But then what have we actually learned from this exercise?
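(To put a rough number on "fair amount of frequency," here's a quick back-of-the-envelope sketch; this is my own illustration, not anything from the forecasters, and the 10% win probability and five-cycle horizon are just assumptions.)

```python
# Minimal sketch: how often does a "1 in 10" forecast outcome actually happen?
# The 0.10 underdog win probability and 5-cycle horizon are illustrative assumptions.
import random

random.seed(0)
trials = 100_000
underdog_win_prob = 0.10

# Each trial is one election cycle where the model gives the underdog a 10% chance.
underdog_wins = sum(random.random() < underdog_win_prob for _ in range(trials))
print(f"Underdog wins in {underdog_wins / trials:.1%} of simulated cycles")

# Chance the "unlikely" outcome happens at least once across, say, 5 such cycles.
cycles = 5
at_least_once = 1 - (1 - underdog_win_prob) ** cycles
print(f"Probability of at least one underdog win in {cycles} cycles: {at_least_once:.1%}")
# ~41%: roughly a coin flip that the "wrong" candidate wins at least once in five tries.
```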
The role is actually pretty binary when you get down to it: the models tell you whether a race is competitive or not. But once a race is competitive, the model struggles to tell you much beyond that.
As the model people say, "you wouldn't get on a plane that had a 1 in 10 chance of crashing." Which is true, but you also wouldn't board that plane any sooner than one with a 1 in 2 chance of crashing. For the decision at hand, the difference is essentially meaningless.
I just feel like if we returned to 20th-century punditry, where people would say "well, polls seem to say Biden has the edge, but nothing's guaranteed," we'd be just as correct as we are now and save ourselves a lot of grief.
I think that's totally fair, but the various modeling sites are relentlessly self-promotional about the models, so 🤷‍♂️ https://twitter.com/jalexa1218/status/1323121239330820098
Like, I think it would be really helpful if Silver, the Economist, etc. came out and said "we have a model for this cycle, but we're not going to make it public because you're not going to use it right." But obviously they don't do that; they talk about how great the models are.