Monday, August 15, 2016

Open Source Publishing, Peer Review, and a What-If Scenario

On the Reviewer 2 Must Be Stopped Facebook page, someone posted an interesting scenario:

[Embedded screenshot of the Facebook post: a proposal for a publishing site where, instead of peer review, users up/down-vote articles and leave comments.]

Some of the responses included references to other types of social media: e.g., "It would be like Facebook" or "It would be like blogging". But what this user is proposing is somewhat different from that. True, I could publish my research on my blog - which would in many cases preclude me from publishing it elsewhere - but you would either have to know about my blog to find it or it would have to come up during a web search. Same thing with Facebook: you'd have to know me to see my posts. This scenario, on the other hand, involves putting articles from different authors in one place. So a user would only have to know about the journal to access the articles.

Of course, that doesn't make this a good scenario. Let's, for the sake of argument, say there are multiple such sites for different subjects - that deals with the problem of having articles on so many different topics that the site fails to be readable. After all, to make this like a regular journal, it would need aims and scope: a description of what the journal is about, so that authors know whether their article would be a good fit. But we already have a potential failure point - authors may be very bad at determining whether their article fits, or they may be such poor writers that they fail to show that it fits. Part of what happens during review is that the editor and reviewers determine whether the article is a good fit. So now you have articles on, say, the strength of different concrete mixtures next to an article about college students' social media behaviors.

Obviously, if this is an online journal, people can search the articles. People who search for articles on concrete shouldn't find articles on social media behavior. But once again, we have a failure point: who checks those keywords to make sure they accurately reflect the subject of the article? In the current state of publishing, authors do generate their own keywords, often using specific standards. Certain keywords are more likely to be searched for than others, so authors might be tempted to pad their keywords with more common headings - even headings that are only somewhat relevant - to increase the chances their article is found. Fortunately, editors can double-check those keywords and drop ones that don't fit. But with the proposed system, there is no quality control.

Two major issues, and we haven't even gotten to the posting or user comments yet. And before you say, "The post said to drop reviewers, not the editor," remember that the proposal was a publishing source where instead of review, users up/down-voted articles and left comments. If editors can decide what does and does not get published, and can control (to some extent) the content, you still have the same system as you do now, where a small number of people control the flow of information. For this to work as the poster intended, you can't really have editors.

Now for the key portion of the proposal: users get to rate and comment on articles. This is where the similarities to Facebook are strongest. What posts do you "like" on Facebook? Often, ones you agree with. You would have the same danger here: that people would up-vote the articles whose results they agree with, even if the study is methodologically flawed. When you evaluate scientific research, you have to evaluate the methods. If the methods are sound, the results are presumed to be valid, even if you disagree with them. Without that standard, the rating system would be driven by opinion rather than scientific validity. Sure, some people would evaluate the strength of the methods to generate their rating, but their voices would probably be drowned out by ratings based purely on opinion. And if you have many non-scientists visiting the articles and giving ratings, the gap between the two would be even wider. Once again, there is no quality control over who does the rating and whether they have the necessary knowledge, as there would be if an article were peer-reviewed.

This system would really only work if you assume that everyone using it - authors and readers alike - does so honestly and with the best of intentions. I try to see the best in people individually (because the way one person will behave is an unknown), but here, we're talking about the patterns of groups, which are far more predictable. As much as I want to like this idea - because peer review can be unfair and problematic in its own ways - it would likely be chaos.

What do you think, readers?
