This is a very important read for people doing bioimage analysis. The authors do a good job of laying out potential problems and, even more useful, solutions. https://twitter.com/embojournal/status/1352595619177783298
A couple of thoughts: we spend a lot of time worrying about inappropriate image manipulation ending up in figures. The reality is that while such manipulation is very bad, conclusions should never rest on a single image, but on quantification across many images.
The bigger problem, as these authors correctly identify, is systematic error in the workflow that can bias the results of quantification. We should focus on these types of errors, as they're much more damaging.
The use of the ImageJ macro recorder to record and publish a workflow is a great idea, and it is accessible to anyone doing image analysis.
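For illustration, here is a minimal sketch of what a scripted, publishable quantification step can look like. It uses Python with scikit-image as a stand-in for a recorded ImageJ macro, not the authors' actual workflow; the file names, the Otsu thresholding choice, and the measured properties are assumptions.

```python
# Minimal sketch of a scripted quantification step (stand-in for a recorded
# ImageJ macro). File names and parameter choices are illustrative only.
from skimage import io, filters, measure
import pandas as pd

img = io.imread("cells.tif")                 # hypothetical input image
thresh = filters.threshold_otsu(img)         # the exact threshold used is recorded in the script
labels = measure.label(img > thresh)         # segment objects above the threshold
props = measure.regionprops_table(
    labels,
    intensity_image=img,
    properties=("label", "area", "mean_intensity"),
)
pd.DataFrame(props).to_csv("cells_measurements.csv", index=False)
```

Because every parameter lives in the script, anyone can rerun the same analysis on the same images and get the same numbers, which is exactly what publishing a recorded macro buys you.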
Of course, full reproducibility and open science require (and I will keep saying this until the day I die, and then have it carved on my tombstone) using software that is freely available and open source.
Lastly, there is still an issue with how we deal with manual quantification of objects (say, counting protrusions or classifying phenotypes). Not everything can be automated, and researchers often have to quantify complex outputs manually.
This is another large potential source of error, and it's hard to include such (sometimes unavoidable) steps in a reproducible image analysis workflow. How to deal with this is a complex issue.
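One partial mitigation is to blind and randomize the images before manual scoring and to log every decision in a machine-readable file. A minimal sketch of that idea is below; it is my illustration rather than anything proposed in the thread, and the folder name, scoring prompt, and column names are assumptions.

```python
# Minimal sketch: blind and shuffle images for manual scoring, and log each
# decision to a CSV so the manual step is at least auditable and repeatable.
import csv
import random
from pathlib import Path

images = sorted(Path("raw_images").glob("*.tif"))  # hypothetical input folder
random.seed(42)                                    # fixed seed makes the blinding order reproducible
order = random.sample(images, k=len(images))       # shuffled presentation order

with open("manual_scores.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["blinded_id", "original_file", "score"])
    for i, path in enumerate(order):
        # In practice the rater views the blinded image here (e.g. in Fiji)
        # without seeing its original name, then enters a score by hand.
        score = input(f"Score for image {i:03d}: ")
        writer.writerow([f"img_{i:03d}", path.name, score])
```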