I recognize that there are biases in the journal peer review process. One thing that surprised me in my career was how dramatically the baseline probability of publication varied across research areas. I worked in some areas where a revise-and-resubmit (R&R) or conditional acceptance was the norm, and in other research areas where almost every piece was rejected. [...]
I also think that journal editors have a collective responsibility to collect data across research areas and determine whether publication rates vary dramatically. We often report on general subfield areas in annual journal reports, but we do not typically break down the data into more fine-grained research communities. The move toward having scholars select specific research areas when signing up to review may facilitate the collection of this information. If reviewers' recommendations for R&R or acceptance vary across research topics, then having this information would assist new journal editors in making editorial decisions. Once we collect this kind of data, we could also examine how these intra-community reviewing patterns influence the long-term impact of research fields. Are broader communities with lower probabilities of publication success more effective in the long run at garnering citations to their research? We need additional data collection to assess my hypothesis that baseline publication rates vary across substantive areas of our discipline.