There's a strange pattern in the political science literature: people do observational causal inference but avoid causal language, opting instead for the language of predictive modelling. But then they don't use the methods of predictive modelling, such as test/train splits and cross-validation.
Is "A predicts B" interesting?

Yes if "predicts" is causal but we avoid claiming that probably because it's not really credible.

Yes, if A provides some unique, parsimonious, or more easily observable basis for prediction. But does A really predict B? How well? Compared to what?
It's this last question, "compared to what?", that I find really troubling.

"A predicts B" is useful if:

(1) B is important,

(2) B is difficult to observe or measure,

(3) we also have some ground truth about A and B (think verified turnout), and

(4) A is cheap or easy to observe.
But often A and B are just survey-measured psychological constructs: ideology and partisanship, personality and behavioral intentions, and so on. In those cases A and B are equally easy to measure, so why is the predictive power of A for B useful? It's not really clear.
I think there are a lot of opportunities for a more developed predictive modelling literature in political science beyond turnout and vote choice. But then we have to take prediction seriously, not just borrow its language when we aren't confident enough to claim causality.
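To make the "take prediction seriously" point concrete, here is a minimal sketch of what the thread is asking for: held-out evaluation via k-fold cross-validation, and an explicit answer to "compared to what?" in the form of a no-information baseline. The data, the threshold rule, and the variable names (A as a survey-measured predictor, B as a binary outcome) are all hypothetical, invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: A is a hypothetical survey-measured predictor,
# B is a noisy binary outcome partly driven by A.
n = 1000
A = rng.normal(size=n)
B = (A + rng.normal(scale=2.0, size=n) > 0).astype(int)

def cv_accuracy(A, B, k=5):
    """k-fold cross-validated accuracy of a simple threshold rule:
    the threshold is 'fit' on the training folds (here, the training
    mean of A) and evaluated only on the held-out fold."""
    idx = rng.permutation(len(A))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        thresh = A[train].mean()              # fit on training data only
        pred = (A[test] > thresh).astype(int) # predict on held-out data
        scores.append((pred == B[test]).mean())
    return float(np.mean(scores))

model_acc = cv_accuracy(A, B)
# "Compared to what?": always guessing the majority class.
baseline_acc = float(max(B.mean(), 1 - B.mean()))
print(f"CV accuracy: {model_acc:.2f} vs baseline: {baseline_acc:.2f}")
```

The point of the baseline line is the whole argument in miniature: "A predicts B" only means something once we report how much better than a trivial guess the held-out predictions are.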
You can follow @thosjleeper.