I recently sat on an NIH review panel for R01 proposals, and a large majority of the applications had issues with their power analyses. Here are some common problems I saw. A thread. /1
Lots of grants said something like “We did a power analysis and we’re properly powered” w/o providing more info. This is not enough. The reviewer shouldn’t have to trust that the authors did the analysis properly. Provide enough info so someone can evaluate what you did. /2
Lots said something like “With n=200, alpha=.05, power=.80, we can detect an effect of d = .25” but didn’t provide ANY evidence that they expected a d >.25. This is not ok. For an R01 (i.e., LOTS of $), people need to have some expectation of how big their effects will be. /3
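To see what those numbers imply, here is a minimal sketch of the underlying calculation, assuming a two-sample t-test with equal group sizes (n = 200 per group) and the usual large-sample normal approximation (the grant's exact design may differ):

```python
from math import sqrt
from statistics import NormalDist

def detectable_d(n_per_group: int, alpha: float = 0.05, power: float = 0.80) -> float:
    """Smallest standardized effect (Cohen's d) detectable with the given
    per-group n, two-sided alpha, and power, for a two-sample t-test.
    Uses the standard normal approximation, which is close for large n."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for a two-sided test
    z_power = z.inv_cdf(power)           # quantile matching the desired power
    return (z_alpha + z_power) * sqrt(2 / n_per_group)

# With 200 participants per group, the minimum detectable effect is about
# d = 0.28 -- so the proposal needs some evidence the true effect is that big.
print(round(detectable_d(200), 2))  # → 0.28
```

The point of stating the calculation this explicitly in a grant is that a reviewer can check every input, and the applicant still has to justify why the expected effect clears the detectable-d threshold.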
A huge # of grants powered on a different test than what they proposed in their analytic plan. For example, one powered on an ANOVA for Aim 1, but then proposed to analyze the data from Aim 1 with a multilevel growth curve model. /4
Don’t do this. Power on an analysis as close as possible to the one you will actually conduct, because (for example) a multilevel growth curve model estimates a different number of parameters than an ANOVA. /5
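When the planned model has no simple closed-form power formula (as with a multilevel growth curve model), the standard move is simulation-based power analysis: generate data under the assumed effect, fit the planned analysis, and count how often it reaches significance. A minimal sketch below uses a simple two-group z-test as a stand-in for the planned model; the effect size, n, and test are illustrative assumptions, and a real grant would put its actual model inside the loop:

```python
import random
from math import sqrt
from statistics import NormalDist, fmean

def simulated_power(d: float, n_per_group: int, reps: int = 2000,
                    alpha: float = 0.05, seed: int = 42) -> float:
    """Monte Carlo power estimate: repeatedly simulate data under the
    assumed effect size d, run the planned test on each simulated dataset,
    and return the fraction of replications that reach significance.
    Here the 'planned test' is a z-test on two group means; substitute
    the analysis you actually propose (e.g., the growth curve model)."""
    rng = random.Random(seed)
    phi = NormalDist().cdf
    hits = 0
    for _ in range(reps):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(d, 1.0) for _ in range(n_per_group)]
        se = sqrt(2 / n_per_group)        # known-variance standard error
        z = (fmean(b) - fmean(a)) / se
        p = 2 * (1 - phi(abs(z)))         # two-sided p-value
        hits += p < alpha
    return hits / reps

# d = 0.28 with n = 200 per group should land near 80% power.
print(simulated_power(0.28, 200))
```

Because the same simulation machinery works for any model you can fit, there is no excuse for powering on an ANOVA and then analyzing with something else: swap the fitted model into the loop and the power estimate matches the proposed analysis.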
I was honestly very surprised to see these issues come up so frequently, often from very well-established researchers. And I had to wonder why it was happening. Is it a lack of training? /6
I was totally “that person” during the mtg who brought up the power analyses over and over again. But to me that is one of the most critical parts of a grant. If a study isn’t properly powered, should we really invest hundreds of thousands or even millions of dollars in it? /7
You can follow @LisaJaremka.