I see a lot of discussion these days about the value of peer review. Are journals too selective? Are acceptance decisions arbitrary? Does peer review actually catch scientific mistakes or fraudulent practices? Wouldn’t it be better to just put everything out there, say on preprint servers, and separate the wheat from the chaff in post-publication review? I’m not quite ready to give up on pre-publication peer review. I think it serves a useful purpose, one I wouldn’t want to do away with. In the following, I discuss four distinct services that peer review provides and assess the value I personally assign to each of them.
Peer review screens out nonsense and pseudoscience
It’s important that somebody screen all potential scientific publications for actual scientific content. I don’t mind publishing null results, replication studies, or studies that present only a very minor advance. All of these works contain real science, and they may find some use at some point in the future. However, we must never mix science with pseudoscience. Someone has to ensure that whatever gets published in a scientific journal is not complete nonsense. Even the preprint archive arxiv.org has some sort of screening and filtering system in place to hold back the crackpots. In most cases, nonsensical papers would be caught by the editor and not even sent out to review. Nevertheless, we can consider filtering out nonsense an essential service of the pre-publication review process.
Peer review catches major mistakes and/or fraud
Many people seem to think that it is the reviewers’ job to catch major mistakes and/or fraud. And when they fail to do so, that is taken as evidence that peer review doesn’t work. I don’t think we can put such a high burden on the reviewers. Ultimately, the burden of producing correct and genuine results lies with the author. Peer review operates under the assumption that fundamentally the authors are honest and reasonably capable scientists. If peer review does happen to catch a major issue with a paper, that’s great, but generally I think that post-publication review is the much better venue to address major flaws or scientific misconduct.
Peer review assesses novelty, potential impact, and fit with the journal’s scope
Whether reviewers (or editors) should consider novelty and impact, and whether journals should be selective at all, is probably the most contentious issue in peer review. Traditionally, selectivity has always been part of peer review. However, several journals now explicitly state that review should assess only scientific soundness (e.g. PLOS ONE or PeerJ). I think there are valid arguments on both sides. On the one hand, it is imperative that we have publishing venues that will publish any scientifically sound study. Nobody benefits if a valid study is suppressed just because some reviewers didn’t find it interesting. If there’s no obvious scientific flaw, put it out there and let the readers (and Google) sort it out.
On the other hand, I think that more selective journals can provide value as well. In my mind, where science has gone off track is that the most selective journals (which are also considered the most prestigious ones, e.g. Nature, Science, Cell, PLOS Biology, PNAS) employ arbitrary selection criteria based primarily on the subjective goal of publishing “the best science.” As a consequence, whether I can publish in such journals depends much more on my marketing skills than on my scientific skills, and also on whether I’m working on a sexy study system.
By contrast, the next lower tier of selective journals usually employs more objective selection criteria, and that arguably provides useful value. For example, I’m an Associate Editor for PLOS Computational Biology, a fairly selective journal. The main requirement for publication in PLOS Computational Biology is that you have produced high-quality computational work that yields a novel biological insight. In my mind, it is fairly straightforward to determine whether or not a paper satisfies that requirement. I also think that any capable computational biologist can clear that bar. As a consequence, I feel that we’re providing useful selectivity without generating excessive artificial scarcity or making highly arbitrary decisions. If I see that somebody has a couple of PLOS Computational Biology papers on their CV, I can reasonably assume that they are doing consistent, high-quality computational work leading to novel insights into biological systems.