A thread based on my recent experience with post-publication peer review and its publicization. It raises questions about incentivizing Open Science, which I wholeheartedly support. A recently updated @RetractionWatch story is here: https://retractionwatch.com/2021/01/21/sleuths-scrutinize-high-profile-study-of-ultra-processed-foods-and-weight-gain/
On Jan 19, @ivanoransky from @RetractionWatch emailed me: “A few researchers have scrutinized a 2019 paper from your group and are planning to publish posts with their critiques (attached) later this week…Will they prompt a reconsideration of your paper?”
No scientist wants to receive such an email from @RetractionWatch, an influential website primarily known for reporting major errors or scientific misconduct that often lead to peer-reviewed publications being retracted, reputations being tarnished, or worse.
While I knew we had not engaged in scientific misconduct, even the best scientists make errors, and it’s very important to correct the scientific record. We do this regularly, but it’s uncomfortable when others claim to have found errors or, worse, evidence of misconduct.
We immediately worked to address the bloggers' questions. I responded that Jan 20 was a federal holiday and many of our NIH team weren't available, so I asked for more time to fully respond. This request was denied, and publication of the @RetractionWatch story would proceed on Jan 21.
Under immense time pressure, we provided simple explanations for most of the issues by Jan 21, before the blog posts and the original @RetractionWatch story were published at noon. One blogger incorporated our response into his updated blog post.
The next day, Jan 22, we provided an updated response, which was uploaded as a link on the @RetractionWatch story along with acknowledgement from the bloggers that we had responded. We fully addressed their questions, and there were no errors in our publication requiring correction.
This is a good example of Open Science leading to a better understanding of a published study and identifying simple explanations for seemingly suspicious patterns in the data. It also identified a minor software bug that did not affect the main results in the publication.
However, this was unnecessarily stressful given the time pressures to respond before publication of the @RetractionWatch story. I was also concerned that many readers might still consider our study to be under a cloud of suspicion even with the updated blogs & our full response.
The first bloggers had incorporated our initial response into their original Jan 21 post (near the end), saying, “having communicated with the authors, we now think that … most of these [seemingly suspicious] patterns can be explained”
Five days later, the other blogger updated his post saying, “I believe that these responses adequately address all of the points that I made in the original post,” and tweeted that our responses “answer all of my doubts about the numbers that I found in the dataset”: https://twitter.com/sTeamTraen/status/1354205759152783364?s=20
The next day, the NIH press office asked @ivanoransky to update the @RetractionWatch story to prominently indicate that we had adequately addressed the issues raised by the bloggers. He refused.
The main reason @ivanoransky provided for not updating the @RetractionWatch story was: “I'm still not seeing clear evidence [that the bloggers] have now acknowledged that Dr. Hall and the study authors have adequately addressed the issues they raised."
This confirmed my fear that the first bloggers' post was insufficiently clear that we had adequately answered their questions. If @ivanoransky wasn’t convinced by the blog post that we had addressed the concerns, neither would most readers be.
After repeated email attempts over the past week, the first bloggers finally responded to my request for clarification as follows: “We feel that you have adequately addressed the major concerns we raised in our post. We think this is clear from how the post is written.”
Finally, 8 days after the original @RetractionWatch story was posted, @ivanoransky agreed to update the story to reflect that we had fully answered the questions posed by the bloggers: https://retractionwatch.com/2021/01/21/sleuths-scrutinize-high-profile-study-of-ultra-processed-foods-and-weight-gain/
The work done by @RetractionWatch and the self-described “data thugs” is very important for identifying errors as well as exposing fraud and negligence. “Science Fictions” by @StuartJRitchie is a great book on this overall topic and should be required reading for all scientists.
While our study was eventually vindicated, I worry about the potential damage done during the week when the @RetractionWatch story could have easily been interpreted as casting a cloud of suspicion on our research. Reputations take years to build, but only an instant to destroy.
Why was @RetractionWatch in such a hurry to publish? I worry that such a respected site publicizing the mere investigation of a study by self-described “data police” without allowing scientists adequate opportunity to fully respond will end up disincentivizing Open Science.