Not a hot take, but a mildish take with some social science and journalism nuance about everyone dumping on polls and models today. I’ll start with the proposition that polling journalism isn’t good for us. BUT. (thread)
The controversy about @FiveThirtyEight’s model in particular is kind of strange to me. I think it’s pretty clear people misunderstood what the model was saying. That’s partly a content problem but also a public problem.
I personally like 538 from a poll science perspective because it provides a hot read on uncertainty. But I am reading this as someone trained in social science methods. I have a lot of understanding about how error, upper/lower bounds, and uncertainty work in stats.
But that’s kind of the problem. I read it as someone trained in this. The public by and large does not. So that’s a problem.
BUT. That nuance is there if you consume 538’s content on the whole. The site does include caveats and qualifiers to help you read their content.
But caveats get lost. People don’t consume everything. They don’t read all the stories, listen to the podcasts. They look at the pretty model page, share the big-picture headlines and easy-to-digest visuals.
This is the problem with taking complex models and making them into content. In a setting like an academic conference, someone who horribly misreads a model gets pushback from people who know their stats.
Who’s pushing back on the public misread?
Again, the nuance is all there. I just don’t think the public is nearly sophisticated enough or has nearly enough time to get it all. It’s not that people are dumb. They just aren’t trained, and they are busy.
It’s like retweeting without reading the article, in that sense. It’s easy to share headlines. What 538 has constructed is mammoth and interesting. It makes good content for someone like me. I’m not sure it is helpful to the public writ large.
Again, I think polling journalism in general is bad for us.
But here’s the thing I’ll say that’s positive. Poll models have done a LOT the past few cycles in educating journalists about polling methods. I’ve noticed some subtle shifts in reporting analysis over the years.
Most of the reporting is still bad. Wolf Blitzer freaking out on CNN last night about early returns is much more representative of the level of understanding too many journalists have of polling methods.
But it has helped things somewhat. I think that education has been useful.
But does that education need to be public? If 538’s chief value is making polling journalism better, wouldn’t it be best as a consultancy? Maybe! But that’s not their product.
(Also, newsrooms are too cheap to pay for that kind of consultancy)
I don’t know how much of this can be fixed at the edges. Maybe tinker with presentation, though I have no suggestions on that. What they had on the model page this year was much better than years past; it had a lot more nuance and context. But again, I don’t think people read it.
The big problem is 538 really can’t sell you a product built around the idea that their model could be wildly wrong. (And some caution here: we haven’t counted nearly all the ballots yet, and the results relative to the polls in general, and their model, might improve.)
So I don’t have a whole lot of suggestions on how to move forward here, just wanted to offer some mild pushback that people really slamming the Nates today are probably laying too much blame at their feet. Some of this is on the public.
The Needle, of course, can go to hell.