7 years ago I gave a training on commissioning, interpreting and using evidence, frustrated that colleagues were using true-but-bullshit killer facts rather than serious research. Aim: to be comfortable asking for and using evidence, rather than convenient stats. I used memes.
The first difficulty in using research is asking the right questions - this is true whether you're commissioning it or searching what's already been published. I'm always surprised by how hard policymakers find it to articulate exactly what they're asking. It has consequences.
The second point was to make sure that the evidence they're using/commissioning makes sense for the final decision they want to inform. If the objective is change at scale, evidence of efficacy alone is not enough. And we need to know that when we're buying or using research.
That means "good" or "useful" with respect to research derives partly from the question and partly from the method and execution. So a policymaker should be using descriptive work, RCTs, modelling or qualitative work, depending on what exactly they're trying to do.
But once you've narrowed in on a question, a sensible method for answering it and the range of research available, you should start from a position of scepticism, and make the research convince you. It often falls at a pretty early hurdle.
If it doesn't fall down on the method, it may fall down on whether it actually matters. Always ask about effect sizes in a unit that has some concrete meaning.
And there is never a bad time to ask about the data, though it may shatter your illusions.

Even if a piece of research clears that first bar of scepticism, you're not home free. How does it fit with the broader picture emerging from the community of people who study this thing all the time?
If the purpose is to achieve some change in the world through policy, even a credible, robust estimate of the effect size is not enough. You need to understand why it worked, including implementation detail. The causal chain is much longer than the segment studied in the paper.
All of this can seem daunting, especially for a non-specialist, but the good news is you don't have to know everything yourself - one of the best ways to understand a piece (or body) of research better is to talk about it.
Most of all, you need to do these things systematically - even when you think you know the answer. The great temptation is to pursue what's convenient or easy by pretending you've been guided by evidence. This happens regularly.
The training ended with a little exercise to see how easily people updated their beliefs (spoiler: not very). I have loads of these old trainings I delivered as a civil servant on my computer. Will see if there are any other fun ones...