In clinical studies of X vs Y, much more is going on than the difference between X and Y: numerous other effects go unaccounted for.
The difference between studies may be larger than the difference between X and Y in any single study.
You see this in reviews of multiple studies of vitamin D or HCQ as treatments for COVID, for example.
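A toy random-effects simulation can illustrate the point. All the numbers below are made up for illustration, not taken from any real trial: each study gets its own baseline response rate, while the true advantage of X over Y is a fixed, smaller effect.

```python
import random

random.seed(0)

# Hypothetical numbers: X beats Y by 5 percentage points within any
# single study, but baseline response varies between studies with a
# standard deviation of 10 points.
true_effect = 0.05   # within-study X-vs-Y difference
study_sd = 0.10      # between-study spread in baseline response

# Simulate 20 studies, each with its own baseline, and record the
# response rate observed for X in each.
baselines = [random.gauss(0.40, study_sd) for _ in range(20)]
x_rates = [b + true_effect for b in baselines]

# The spread across studies dwarfs the X-vs-Y difference within a study.
spread = max(x_rates) - min(x_rates)
print(f"within-study X-vs-Y effect: {true_effect:.2f}")
print(f"between-study spread in X's observed rate: {spread:.2f}")
```

With these assumed parameters, the range of X's observed response rate across studies is several times the true X-vs-Y effect, which is why pooling or comparing single studies naively can mislead.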
You can't put much stock in any one study because you can't isolate variables in a medical study the way you can in a physics experiment.
See this paper estimating household transmission rates for COVID. Skim the plots in the PDF. The consensus conclusion at the bottom of each plot is an average of widely varying results from different studies. This is typical. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2774102
I first became aware of this kind of variation when I was working at MD Anderson Cancer Center.
A colleague showed me two response curves and asked me which treatment was better.
One was clearly better, but they were the same drug in two different trials.
That is why clinical trials need active controls, i.e. you need to randomize subjects between the new treatment and the standard of care within the same trial.
You can't just assume you know what will happen in the control group, because you don't know how standard of care will work in a new trial.
Maybe in some contexts you can predict better how the control group will respond, but I haven't seen that in oncology or with COVID.