As an @AmJEpi social media editor, this month I’ve picked “Commentary: Surprise!” by Stephen R. Cole, Jessie K. Edwards, and Sander Greenland. https://academic.oup.com/aje/advance-article/doi/10.1093/aje/kwaa136/5869593
To start us off, how many of you have heard of the Shannon information value (S value)?
If you are wondering what my answer would have been before reading this article, I would have said that I’d vaguely heard of it. But I like learning new things, so I’m intrigued.
The authors start by reminding us of the definition of a p-value. “A P value represents the chance of observing a data summary (test statistic) as extreme as or more extreme than what was seen, under a test hypothesis and auxiliary (background) assumptions.”
Which of the following is true about p-values? The null p-value is the probability of:
They then give us an example to illustrate the difference between a p-value and an S value, using a trial of a drug for SARS-CoV-2. The risk difference in 28-day mortality was −5.8%. The 1-sided p-value is 0.16. The trial authors say there was no benefit.
Is the authors’ assessment of “no difference” a fair assessment of the data in your view?
Does your assessment of the risk difference change when you find out the 95% CI goes from −17.3% to 5.7%? Is the authors’ assessment of “no difference” a fair assessment of the data in your view?
The S value reimagines this problem by framing evidence as a coin-tossing experiment. Say we want to see if a coin is fair, so we toss it s times and see how many are heads. If we observe heads on every toss, how surprising would that be if this were a fair coin?
The S value for this experiment is simply −log2(0.5^s), which works out to exactly s. The units are described as “bits,” or binary digits.
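The coin-toss version is easy to check in a couple of lines. A minimal sketch (the function name is mine, not from the paper):

```python
import math

def coin_surprisal(s):
    """S value, in bits, for seeing s heads in s tosses of a fair coin.

    The probability of that exact run is 0.5**s, so
    -log2(0.5**s) simplifies to exactly s bits.
    """
    return -math.log2(0.5 ** s)

print(coin_surprisal(3))  # 3.0 — three heads in a row carries 3 bits of surprise
```

Each extra head doubles the improbability, which is why the bits just count the tosses.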
So let’s go back to the SARS-CoV-2 example. The null p-value can be converted to an S value as −log2(0.16) ≈ 2.6. How surprising would it be to you if you tossed a fair coin 3 times and got heads all 3 times (rounding up)?
Well, you can use the S value to reinterpret the results of the trial. “The observed p = 0.16 is less surprising than seeing 3 heads in a row in 3 fair tosses…” For me, that’s a pretty intuitive way to think about the data without needing an arbitrary cutoff.
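If you want to try the conversion on your own results, it’s the same one-liner applied to any p-value (again, a sketch with names of my choosing):

```python
import math

def s_value(p):
    """Convert a p-value into Shannon surprisal in bits: S = -log2(p)."""
    return -math.log2(p)

# The trial's 1-sided null p-value from the thread:
s = s_value(0.16)
print(round(s, 1))  # 2.6 — a bit less surprising than 3 heads in 3 fair tosses
```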
We can generate a p-value for any hypothesis, not just the null, as @ken_rothman reminds us. The figure below shows the p-value function for these data (for the RR, not the RD, but you get the idea) from Ken’s episheet. http://krothman.org/Episheet.xls
In this case, the p-value for the hypothesis that the RD is 11.6% is the same as the p-value for the null. So the data are equally surprising under the hypothesis that there is no effect and under the hypothesis that the effect is 11.6%.
I found this to be a useful figure, which converts the p-value to an S value. The authors note that p-values of 0.10, 0.05, 0.01, and 0.005 are about as surprising as seeing 3, 4, 7, or 8 heads in a row from fair coin-tossing.
With this conversion, you can judge how surprising your data are under the null or any other hypothesis.
Of course, this is subject to all the usual caveats. The p-value assumes no systematic error in our study, so it doesn’t account for all sources of error. Still, there is something intriguing about reframing the p-value this way.
So I can’t say I’m going to start using this tomorrow, but I’m interested in the idea and am curious to see where it goes.
So, for those of you who had not heard of the S value before: will you consider using it now?
You can follow @ProfMattFox.