Is there an avalanche of low-quality research, and if so, must we stop it?

Putting artificial limits on output is never a good idea.

Update: It turns out the article in the Chronicle is not recent; I misread the date on the page. (The Chronicle displays two dates on each page: today’s date and the article’s publication date.) I stand by everything else I say, though.

A recent article in the Chronicle of Higher Education argues that “we must stop the avalanche of low-quality research.” The authors decry the rapid growth of the scientific literature, which (as they argue) puts increasing strain on readers, reviewers, and editors without producing much benefit. They argue that this growth is driven by mounting pressure on scientists to publish more, and that the result is ever more low-quality publications. To address this pressure, they propose three fixes, of which one is OK and two are positively inane. Maybe what we really have to stop is the avalanche of low-quality, non-reviewed opinion pieces published on web pages?

Reading through the article, I found it difficult not to wonder whether the authors had ever heard of the internet or of modern information-processing technology (e.g., Google). Now, to be fair, none of the authors work in the natural sciences; their fields are English, mechanical engineering, medicine, management, and geography, areas I don’t really know. My own work is in biology, and I’m also somewhat familiar with the publishing cultures in physics and in computer science. So everything they say may make sense in their fields, but it doesn’t in mine. I’m not convinced we have a major crisis, and I certainly don’t think their proposed fixes are any good. Let’s take a look at their proposed solutions first.

Limit number of papers submitted for job applications or promotions

The first fix, to limit the number of papers that applicants are allowed to submit for job applications or promotions, is actually somewhat reasonable. Applicants should be judged on the quality of their work, not on the mere quantity of their output. However, I am strongly opposed to saying applicants are not even allowed to mention anything beyond their key 3-5 papers. Why should productivity be punished? What if they wrote 10 important papers? Should Ed Witten be limited to listing only 5 papers? (To date, he has written over 30 papers with over 1000 citations each!) While there are negative outliers in academia (people who produce huge amounts of mindless drivel), I definitely see a correlation between quantity and quality: the most interesting and influential papers are generally written by the most productive researchers. I have previously given arguments for why we would expect such a correlation to exist.

Most job-search and promotion processes that I am aware of have already found a solution to this problem: they ask applicants to submit both (i) a full list of all publications and (ii) the 3-5 most important papers, possibly with a statement explaining their impact. This is good practice. It strikes a balance between quality and quantity, it allows applicants to showcase both how good they are and how consistently productive they are, and, most importantly, it is already common. So point 1 is a non-issue, from where I stand.

Evaluate researchers by impact factors

Evaluating researchers by impact factor is such an absurd and untimely suggestion that I can’t help but wonder whether the authors have been living under a rock for the last 10 years. It’s particularly ironic that the Chronicle of Higher Education would publish this statement a mere 11 days after Nobel-prize winner Randy Schekman publicly proclaimed that luxury (i.e., high impact-factor) journals such as Nature, Cell, and Science “are damaging science.” Did the authors really not see this article, or the widespread outrage it caused for containing a cheap plug for a different luxury journal? If there is one problem we have in science right now, at least in the biomedical field, it’s an over-reliance on impact factors and publications in high-profile journals. The outcry over Schekman’s article shows how sensitive an issue this is, and how many scientists are concerned about the growing pressure to publish in only the highest-impact journals. Schekman himself addresses this in his response to the criticism he received. Scientists should be judged on the quality of their work, not on whether or not they published in Nature.

Limit the length of papers published

I don’t see how imposing page limits connects at all to the issue at hand. Surely, if we want fewer but higher-quality publications, papers should be longer, not shorter. I also strongly object to the split model of a brief (4-6 page) main article (i.e., an advertisement) accompanied by longer supporting materials. Invariably, the supporting materials are not written as carefully as the main article, and the quality of the paper as a whole suffers. Notably, PNAS just went in the other direction, and now allows papers of up to 10 pages in length in its online-only PNAS Plus edition. This was a very welcome change, I think. The 6-page limit of PNAS was often too constraining, whereas most articles fit comfortably within 10 pages.

Is there too much pressure to publish?

While there is pressure to publish, frankly I don’t see that this pressure is excessive. From what I see, for example in conversations with colleagues, the common expectation is reasonable productivity, both in terms of quantity and in terms of quality. In terms of quantity, reasonable usually means somewhere between 1 and 10 papers per year. Publish less, and people start wondering whether you’re working consistently, and in particular whether you’ll keep working in the future. Publish much more than 10 papers per year, and people start looking at you suspiciously. I once sat on a grant-review panel where one applicant claimed his previous 3-year NSF grant had led to ~100 publications. People were very suspicious of this claim, and the grant did not get good reviews, even though the science seemed reasonable. (I’m not saying the proposal would have been funded if the applicant had had fewer publications, but the high number certainly didn’t help; if it had any effect, it was a negative one.) In most areas of biology, I think 2-3 papers a year will be considered perfectly reasonable for anybody but a senior PI running a large lab. (This count includes all papers with your name on them, not just first-author papers.)

In terms of quality, I stick to my earlier recommendation: publish at least one paper a year that has some real substance. Where exactly that paper is published is secondary, I believe. Publishing the occasional high-profile article in a luxury journal can’t hurt, but I hope that we as scientists can collectively learn to pay a little less attention to where something is published and pay more attention to the content. We shouldn’t hire somebody without having carefully read at least one or two of their papers, and I think the more diligent search committees operate like that already.

With regard to the excessive workload for editors and reviewers, I think several things could be done relatively easily:

  1. Institute a system of reviewing credits, where you receive one credit for each article you review and have to spend a number of credits (e.g., 6) to submit an article. This would ensure that everybody who publishes carries their fair share of the reviewing load. (A minimal sketch of what such a credit ledger could look like follows this list.)

  2. Have more graduate students and postdocs review papers. Not every paper needs to be reviewed by three members of the NAS. In fact, I often find that graduate students write better reviews than senior scientists do, because they take the job much more seriously and put far more effort into it than an established scientist normally would.

  3. Have less stringent reviewing criteria, and don’t judge impact. Much of the excessive reviewing load actually comes from the pressure to publish in highly selective journals. Thus, many articles make the mandatory trek from Science to Nature to PNAS to PLOS Genetics to PLOS ONE, possibly undergoing four or more separate rounds of review. It’s not uncommon for me to review the same article several times for different journals. And in the end, everything gets published anyway, somewhere. If it were the reviewers’ job to look only for major scientific flaws, most articles could be published after 1-2 rounds of review, cutting the total review burden way down.

  4. Improve tools for post-publication evaluation of articles. At present, all we have is citations and word-of-mouth. (“Have you seen the latest paper by X in PNAS? It’s really not very good.”) I’m sure we can do better than that, and over time we’ll find ways to put modern computing power and crowd-sourcing ideas to good use. NCBI’s PubMed Commons is a first step in this direction. I’m sure over the next 10-20 years we’ll see many more innovative ideas to evaluate the quality of scientific work post publication.
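
To make the reviewing-credit idea from point 1 concrete, here is a minimal sketch of what such a ledger might look like. To be clear, everything in it, from the class name to the one-credit-per-review rule and the 6-credit submission cost, is my own illustrative assumption; no such system exists, and a real one would also need to handle co-authors, shared reviews, and researchers just entering the field.

```python
# A minimal sketch of a hypothetical reviewing-credit ledger. All names
# and the 6-credit submission cost are illustrative assumptions, not
# part of any existing system.

class CreditLedger:
    SUBMISSION_COST = 6  # credits required to submit one article

    def __init__(self):
        self.balances = {}  # reviewer name -> current credit balance

    def record_review(self, reviewer):
        """Award one credit for each completed review."""
        self.balances[reviewer] = self.balances.get(reviewer, 0) + 1

    def submit(self, author):
        """Deduct the submission cost, or refuse the submission."""
        balance = self.balances.get(author, 0)
        if balance < self.SUBMISSION_COST:
            raise ValueError(
                f"{author} has {balance} credits; "
                f"{self.SUBMISSION_COST} are required to submit."
            )
        self.balances[author] = balance - self.SUBMISSION_COST


# Example: a reviewer earns six credits, then spends them on a submission.
ledger = CreditLedger()
for _ in range(6):
    ledger.record_review("A. Reviewer")
ledger.submit("A. Reviewer")  # succeeds; balance drops to 0
```

The interesting design question is the exchange rate: if every submission triggers roughly three reviews, then a cost of a few credits per paper, perhaps split across co-authors, should keep the ledger balanced overall, but the right number is ultimately an empirical question.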

Claus O. Wilke
Professor of Integrative Biology
