Earlier today, someone shared the following with me:
an editor's perspective on the peer review process. In the piece, she offers feedback to peer reviewers (including favoring internal criticism, which suggests additional literature or analyses to strengthen the paper, over external criticism, which compares the paper's topic to other topics and assesses its validity and importance), to authors (on how to navigate the peer review process), and to fellow editors. I found her recommendations for data collection and research on the peer review process especially interesting:
I recognize that there are biases in the journal peer review process. One thing that surprised me in my career was how the baseline probability for publishing varied dramatically across different research areas. I worked in some areas where R&R or conditional acceptance was the norm and in other research areas where almost every piece was rejected. [...]
I also think that journal editors have a collective responsibility to collect data across research areas and determine if publication rates vary dramatically. We often report on general subfield areas in annual journal reports, but we do not typically break down the data into more fine-grained research communities. The move to having scholars click on specific research areas for reviewing may facilitate the collection of this information. If reviewers’ recommendations for R&R or acceptance vary across research topics, then having this information would assist new journal editors in making editorial decisions. Once we collect this kind of data, we could also see how these intra-community reviewing patterns influence the long-term impact of research fields. Are broader communities with lower probabilities of publication success more effective in the long run in terms of garnering citations to the research? We need additional data collection to assess my hypothesis that baseline publication rates vary across substantive areas of our discipline.
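Her hypothesis is, at bottom, a test of independence between research area and editorial decision. As a minimal sketch of how a journal office might check it, here is a chi-squared test on a decision-by-area contingency table; the counts, area labels, and decision categories are all invented for illustration, not real journal data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of editorial decisions by research area.
# Rows: research areas; columns: reject, R&R, accept.
# All numbers are illustrative placeholders.
decisions = np.array([
    [120, 30, 10],   # area A
    [ 80, 55, 25],   # area B
    [140, 20,  5],   # area C
])

# Chi-squared test of independence: do decision rates
# differ across research areas more than chance predicts?
chi2, p, dof, expected = chi2_contingency(decisions)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Descriptive per-area rates of favorable outcomes (R&R or accept),
# the "baseline probability for publishing" she describes.
favorable = decisions[:, 1:].sum(axis=1) / decisions.sum(axis=1)
for area, rate in zip("ABC", favorable):
    print(f"area {area}: {rate:.1%} R&R-or-accept")
```

Nothing in this sketch requires more than the decision tallies journals already keep; the fine-grained part is the row labels, which is exactly what the reviewer-selected research areas she mentions would supply.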