A somewhat technical thread about measuring vaccine efficacy.

We're used to the notion that certain properties of tests for disease depend on prevalence: positive and negative predictive value do, for example, whereas sensitivity and specificity do not.
My colleague @evokerr pointed out that estimates of vaccine efficacy also depend on prevalence. The logic is pretty simple. Efficacy is defined as one minus the risk ratio of treatment to placebo.
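To make that concrete, here is a minimal sketch in Python (the case counts are made up for illustration, not from any real trial):

cases_placebo, n_placebo = 90, 10_000   # hypothetical placebo arm
cases_vaccine, n_vaccine = 9, 10_000    # hypothetical treatment arm

risk_placebo = cases_placebo / n_placebo
risk_vaccine = cases_vaccine / n_vaccine

# efficacy = 1 minus the risk ratio of treatment to placebo
efficacy = 1 - risk_vaccine / risk_placebo
print(f"estimated efficacy: {efficacy:.0%}")   # 90%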

The key observation is that you can only get COVID once.
So as exposure risk increases, the number of cases in a group increases sub-linearly. Suppose half of the group contracts the disease during the study, and then you double the exposure. Assuming homogeneity and independence, etc., you'd now expect about 3/4 of the group to get it: the half who escaped the first round of exposure face roughly the original risk again, so about half of them get infected too.
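A quick sketch of that arithmetic, assuming a simple model in which each of N independent exposures causes infection with probability p (both numbers below are made up for illustration):

def attack_rate(p, N):
    # probability of at least one infection over N independent exposures
    return 1 - (1 - p) ** N

p, N = 0.01, 69
print(attack_rate(p, N))       # ~0.50: about half the group infected
print(attack_rate(p, 2 * N))   # ~0.75: doubling exposure does not double cases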
With a good vaccine, the treatment group effectively "sees" a lower rate of exposure than the placebo group. As the disease gets more prevalent, the case count in the placebo group starts to saturate first, while the case count in the treatment group continues to increase nearly linearly.
Of course you can work this all out analytically for various models of exposure risk. Here's a little illustration that @evokerr put together for a vaccine that blocks 90% of infections. You can think of N as measuring the number of close contacts with people in the community.
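Since the original figure isn't reproduced here, here's a rough sketch of the same kind of calculation under the simple independent-exposures model above; the per-contact risk p_contact is an arbitrary choice for illustration, not a value from the figure:

def attack_rate(p, N):
    return 1 - (1 - p) ** N

p_contact = 0.01    # assumed per-contact infection risk, unvaccinated
blocking = 0.90     # vaccine blocks 90% of would-be infections

for N in (10, 50, 100, 200, 400):
    risk_placebo = attack_rate(p_contact, N)
    risk_vaccine = attack_rate(p_contact * (1 - blocking), N)
    apparent_ve = 1 - risk_vaccine / risk_placebo
    print(f"N={N:3d}  placebo risk={risk_placebo:.2f}  apparent efficacy={apparent_ve:.2f}")

As N grows and the placebo arm saturates, the apparent efficacy drifts downward even though the vaccine's true blocking effect never changes.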
Most of the time, this probably doesn't matter much if infection rates are low even in the placebo group. But I'm surprised I haven't seen people talk about it.

I assume this is a well-known result? Any concern about comparing efficacy estimates across time and place?