
We can hopefully help with that

@StefanGruijters led this useful (we hope) project. It's Open Access and available at https://doi.org/10.1080/08870446.2020.1841762

Brief summary in this thread:

1/19
To plan a study, you need to know how many participants you'll need to recruit. One of the most salient things this depends on is the strength of the effect you're studying.
Getting this right is hard, or, in @lakens' words, "Your power analysis sucks": https://twitter.com/lakens/status/1329728971899105282?s=19
2/19
We discuss four ways to establish this effect size, explaining why you cannot, or should not, use the first three.
3/19
The first is to use benchmarks or rules of thumb, for example Cohen's small (d = 0.2, r = .1), medium (d = 0.5, r = .3) and large (d = 0.8, r = .5) labels. However, these benchmarks have no empirical basis, so they can hardly justify sample size planning:
4/19
The second is to use effect sizes from previous research. This would work if the world were just. However, it is not: the literature suffers from publication bias, making this A Very Bad Idea. For example, look at this:
5/19 https://twitter.com/rlmcelreath/status/1334181189750501376?s=20
These are z-scores from over a million effects from MedLine studies. See the gap? Those are the nonsignificant findings. Basically, published effect sizes cannot be trusted, and adjusting for this bias isn't straightforward.
6/19
There's a bit more to this, but the paper is Open Access for a reason, so see there.
On to the third strategy: using theoretical significance. This would be the best approach. However, it's also often not feasible, since many theories in psychology are qualitative.
7/19

As in, they don't make quantitative predictions. As such, theories (e.g. the TPB, the HAPA, the TTM, etc etc) can't tell you which effect to expect (e.g. "d=0.43").
So, that eliminates the third approach (in most cases).
So what's left?
8/19
Practical importance!
So, how do you know what is a meaningful change in practice?
That's where @StefanGruijters and I try to help: we describe a way to derive that meaningful change definition when your outcome variable is continuous.
9/19
The problem is that you need a way to decide whether a Cohen's d of .43 is worth detecting, whether you can base your sample size estimates on a d of .67, or whether (god forbid) you're even interested in an effect of d = .22.
10/19
We propose 5 steps to estimate your required sample size.
STEP 1: define the threshold that dichotomizes your continuous variable. For example, guidelines recommend a minimum of 150 minutes of weekly exercise. Such guidelines are usually based on societal/political input.
11/19
STEP 2: estimate the control event rate, or base rate: how many people currently meet the standard from step 1? If this isn't yet available (e.g. because the relevant behavior isn't monitored in your country), we provide the sample sizes required to estimate it.
12/19
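To get a feel for the sample sizes involved in estimating such a base rate, here's a minimal sketch using the standard Wald approximation for a proportion's confidence interval. The function name and defaults are my own for illustration; the paper provides its own, more careful, estimates.

```python
import math

def n_for_proportion(p_guess, margin, z=1.96):
    """Approximate n needed to estimate a proportion to within +/- margin
    (95% confidence by default), via the standard Wald approximation
    n = z^2 * p * (1 - p) / margin^2. A generic sketch, not the paper's method."""
    return math.ceil(z ** 2 * p_guess * (1 - p_guess) / margin ** 2)

# Worst case (p = .5), estimated to within 5 percentage points:
print(n_for_proportion(0.5, 0.05))  # 385
```

Note that the worst-case guess p = .5 maximizes p(1 − p), so it yields a conservative (largest) sample size when the base rate is unknown.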
STEP 3: decide how much you want to increase this proportion. Do you want or expect a 5% improvement? Or would you already consider your intervention sufficiently effective with a 1% improvement?
13/19
STEP 4: estimate the Smallest Effect Size Of Interest (SESOI) based on the base rate (e.g. 47%) and the meaningful change definition (e.g. 5%). We make this available as a @jamovistats module and as the #RStats package {behaviorchange}.
14/19
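The core idea can be sketched in a few lines: if the continuous outcome is (standard) normally distributed, the Cohen's d that moves the event rate from the base rate (CER) to CER + MCD is the difference between the corresponding normal quantiles. This is a hedged sketch of the logic only, assuming normality; the {behaviorchange} dMCD function is the authoritative implementation.

```python
from statistics import NormalDist

def d_from_change(cer, mcd):
    """Cohen's d implied by raising the event rate from cer to cer + mcd,
    assuming a normally distributed continuous outcome.
    Sketch only; see behaviorchange::dMCD for the real implementation."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF (quantile function)
    return z(cer + mcd) - z(cer)

# Base rate 47%, desired improvement of 5 percentage points:
d = d_from_change(0.47, 0.05)
```

With a 47% base rate and a 5-point improvement this lands around d ≈ 0.13, illustrating how modest proportion changes can translate into quite small standardized effects.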
The Cohen's d you'll end up with depends on both the base rate (the control event rate) and how much change you want. Of course, there is no single 'desirable' answer: a large Cohen's d means a smaller sample size suffices, but also requires a more powerful intervention.
15/19
STEP 5: compute the required sample size. This is probably the step most people are familiar with. We include a table with the Cohen's d and required sample sizes for both NHST (null hypothesis significance testing) and AIPE (accuracy in parameter estimation) approaches, for a range of CERs and MCDs:
16/19
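For the NHST route, the familiar normal-approximation formula gives a quick ballpark of the per-group n for a two-sided two-sample comparison. This is a rough sketch under standard assumptions, not the paper's table; dedicated power software (e.g. G*Power, or the pwr R package) gives slightly more exact numbers.

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n to detect Cohen's d in a two-sided
    two-sample test: n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2.
    Normal-approximation sketch; exact power analysis differs slightly."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)

print(n_per_group(0.5))  # the classic 'medium' effect -> 63 per group
```

Because n scales with 1/d², halving the effect size roughly quadruples the required sample, which is why the SESOI from step 4 matters so much.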
The behaviorchange @jamovistats module is available from the jamovi library, and the {behaviorchange} R package from CRAN (manual page at https://r-packages.gitlab.io/behaviorchange/reference/dMCD.html).
18/19
The scripts we use to produce the tables and figures in the manuscript are available from https://osf.io/xrs23/ (or the @gitlab repo at https://gitlab.com/matherion/meaningful-change-definitions).
We hope this helps behavior change intervention developers (and evaluators) with planning their studies!
19/19