A few thoughts on the piece by @DeatonAngus which culminates in: "RCTs have no unique advantages or disadvantages over other empirical methods in economics." Disclaimer: I've run RCTs for a living for three years, but wrote my master's thesis on structural estimation before that. A thread:
I wholeheartedly agree that there are good and bad studies, randomized or otherwise. But somehow this article blurs the line between what randomization does and what bad study design does.
Early on @DeatonAngus suggests that @JPAL (and by extension other organizations that run RCTs in the development space) neglect their mandate of poverty alleviation by focusing on RCTs. But just take a moment to imagine the opposite...
Imagine @JPAL ran an RCT today, calibrated a large-scale DSGE model tomorrow, discovered a love for overlapping-generations models of the interplay between climate change and the economy on Saturday, and by Sunday merrily spread musings on auction design and OTC security trading...
No offense, but most of that research would almost certainly be garbage. There is immense value to specializing in methods. That's one of the longest-standing tenets of economics.
Most of these methods require years of study and practice, and the incentives in both academia and industry do not exactly align with all of us getting three PhDs so we can be perfectly versed in everything.
One can definitely do good science and even get at causality without experiments, but that requires clever and complicated models, huge time investments to learn them, and a level of technical discourse that I haven't seen (m)any industry/gov clients tolerate.
And so if there is demand for simple study designs and low complexity, by all means PLEASE RUN EXPERIMENTS rather than trying to get at causality some other way, OR be super explicit that the study is descriptive/correlational.
I have read too many papers in my life that clearly didn't tease out any causality yet pretended they did, and that in my eyes is MUCH worse. Instead of faulting RCTs for the generalizability challenges they share with most other research, ...
... let's educate research consumers to understand the value of replication studies, of studies at large scale, of a profound qualitative understanding of the context under study, and not least of squaring different types of evidence.
I think my main peeve with the piece is this last point: The strawman RCT that's being run in complete isolation from other evidence. Sure, it happens and results in bad studies, but most good RCTs I know corroborate their findings with observations from outside the experiment.
Or try to explain non-experimentally observed patterns or solidify common sense. As last year's @NobelPrize winners in economics stressed, RCTs more often than not force researchers to leave their cozy offices. That's a plus!
The piece enumerates a number of reasons why we shouldn't trust RCTs, but they mostly reduce to RCTs with small samples, selective attrition/selection into treatment, clustering standard errors at the wrong level, failure to replicate findings, etc. Those are just bad studies.
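To make the clustering point concrete, here's a minimal sketch (the village-level setup and variable names are mine, not from the piece): when treatment is assigned at the village level, standard errors have to be clustered at the village level too, or they come out overconfident.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_villages, n_per = 40, 25
village = np.repeat(np.arange(n_villages), n_per)
treat = np.repeat(rng.integers(0, 2, n_villages), n_per)  # assigned per village
village_shock = np.repeat(rng.normal(0, 1, n_villages), n_per)
y = 0.2 * treat + village_shock + rng.normal(0, 1, n_villages * n_per)
df = pd.DataFrame({"y": y, "treat": treat, "village": village})

naive = smf.ols("y ~ treat", data=df).fit()  # ignores the clustering
clustered = smf.ols("y ~ treat", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["village"]})  # clusters correctly
print(f"naive SE: {naive.bse['treat']:.3f}, "
      f"clustered SE: {clustered.bse['treat']:.3f}")  # clustered SE is much larger
```

Running this shows the naive standard error badly understating the uncertainty: a correctable analysis mistake, not an indictment of randomization.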
Angus Deaton makes it sound like people invoke sample balance based purely on randomization – and completely skips the law of large numbers part of the argument. I wouldn't have gotten my RA job had I left that out in my interview... Strawman again.
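A quick simulation of that law-of-large-numbers argument (a toy setup of my own): randomization doesn't guarantee balance in any single draw, but the expected imbalance in a baseline covariate shrinks toward zero as the sample grows.

```python
import numpy as np

rng = np.random.default_rng(1)
for n in (20, 200, 2000, 20000):
    gaps = []
    for _ in range(500):                 # 500 hypothetical experiments
        x = rng.normal(size=n)           # baseline covariate, e.g. income
        t = rng.permutation(n) < n // 2  # randomly assign half to treatment
        gaps.append(abs(x[t].mean() - x[~t].mean()))
    print(n, round(float(np.mean(gaps)), 4))  # average imbalance falls with n
```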
Fishing for high t-stats/low p-values is also not a symptom of RCTs but of bad research. Same with ignoring outliers or jumping into regressions before looking carefully at the data you are working with.
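To illustrate why fishing is a research problem rather than an RCT problem, a toy example (assumptions mine): test enough unrelated outcomes against a null treatment and, at the 5% level, some will look "significant" by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, n_outcomes = 500, 20
treat = rng.permutation(n) < n // 2
false_positives = 0
for _ in range(n_outcomes):
    y = rng.normal(size=n)               # outcome with zero true effect
    _, p = stats.ttest_ind(y[treat], y[~treat])
    false_positives += p < 0.05          # expect about 1 in 20 by chance
print(false_positives, "of", n_outcomes, "null outcomes look 'significant'")
```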
At one point, Deaton writes "There is a great attraction of being able to make policy recommendations without having to construct models." At the risk of sounding petty, I don't know if I prefer making recommendations based only on models. Why not just do both? Strawman again...
A critical point, and one with which I largely agree, is the argument about participants' vulnerability. But that's followed up by this: "Institutional review boards in the US have special protection for prisoners, whose autonomy is compromised; ...
... there appears to be no similar protection for some of the poorest people in the world." That's simply not true. The principles of non-coercion and beneficence very clearly work together to balance study incentives in a way that ensures participants are empowered ...
... to withdraw from a study without any disadvantage to themselves. You also can't run an RCT in Kenya with only a US IRB. Again, ignoring those principles makes for an unethical study, not a general rule about RCTs. Yet I agree that the ethics review in most institutions needs more teeth.