I was recently asked to peer-review a scientific paper that introduced a novel platform for scientists to use in their work. Very important work, but I found the review tricky: the paper does not fit the standard intro - methods - results - discussion - conclusion mold that we force academic papers into. As an earth scientist, not a computer scientist, I could easily judge that the platform introduced in the paper is going to be useful for many scientists, but I couldn't clearly indicate what the "results" were. The journal guidelines asked: "are the conclusions supported by the results" and "are the methods explained clearly enough for a trained professional to repeat the work"... Neither is really helpful for this type of paper.

      Having worked with research software engineers from the Netherlands eScience Center over the last few years on building our own eWaterCycle platform, I've had quite a few discussions with them and gotten tips on how to review software. I am a so-called 'domain scientist', i.e. a scientist who uses computers intensively but is not a computer scientist by training or vocation. Based on my experience in reviewing academic papers on software made by and for other domain scientists, I decided to write a checklist of questions to answer when reviewing software papers. The checklist can be found here on Github. I'm sharing the list with three goals in mind:

      • It might help others when reviewing (or writing) papers about research software.
      • Sharing this checklist helps with expectation management: by referring to it, reviewers can clearly indicate what they did.
      • By adopting this checklist as (part of) their reviewer guidelines, journals communicate clearly to other scientists, but also to science journalists and to society at large, what it means for a piece of software to be 'peer-reviewed'.

      Finally, I've published the checklist on Github and not, for example, here on this blog, because others might have great suggestions and additions to make it better, and Github provides a good environment to improve the checklist collaboratively as a 'living document'.

      If you want to use the checklist (when reviewing a paper, or in any other way) please cite it as:

      Hut, Rolf, Drost, Niels, Kalverla, Peter, & Aerts, Jerom. (2023). On the difficulties in reviewing academic software papers in the earth sciences: a helpful checklist (v1.0.0). Zenodo. https://doi.org/10.5281/zenodo.8168785


      If you want to host the checklist as a post on your own website, or maybe even add it as a guideline to the guide for reviewers of your journal, I'd be happy to facilitate! Email me!

      Many thanks to Niels Drost and Peter Kalverla from the Netherlands eScience Center and to Jerom Aerts from Delft University of Technology for constructive feedback on the checklist. Also thanks to Jeff Horsburgh, editor of Environmental Modelling & Software, for encouraging me to publish the checklist.