Having spent the last couple of days dealing with pure woo, such as germ theory denialism and naturopathic quackery, I think now’s as good a time as any to move on to a more serious topic.
One of the most important aspects of science is the publication of scientific results in peer-reviewed journals. This publication serves several purposes, the most important of which is to communicate experimental results to other scientists, allowing them to replicate, build on, and in many cases find errors in the results. In the ideal situation, this communication results in the steady progress of science, as dubious results are discovered and sound results replicated and built upon. Of course, scientists being human and all, the actual process is far messier than that. In fact, it’s incredibly messy. Contrary to popular misconceptions about science, it doesn’t progress steadily and inevitably. Rather, it progresses in fits and starts, and most new scientific discoveries go through a varying period of uncertainty, with competing labs reporting conflicting results. Achieving consensus about a new theory can take anywhere from relatively little time (for example, the less than a decade it took for Marshall and Warren’s hypothesis that peptic ulcer disease is largely caused by H. pylori to be accepted, or the relatively rapid acceptance of Einstein’s Theory of Relativity) to much longer.
One of the pillars of science has traditionally been the peer review system. In this system, scientists submit their results to journals for publication in the form of manuscripts. Editors send these manuscripts out to other scientists, who review them and decide if the science is sound, if the methods are appropriate, and if the conclusions are justified by the data presented. This step of the process is very important, because if editors don’t choose reviewers with the appropriate expertise, then serious errors in review may occur. Also, if editors choose reviewers with biases so strong that they can’t be fair, then science that challenges those biases may never see print in their journals. The same thing can occur with grant applications. At the NIH, for instance, the scientists running study sections must be even more careful in choosing scientists to sit on their study sections and review grant applications, not to mention picking which scientists review which grants. Biases in reviewing papers are one thing; biases in reviewing grant applications can result in the denial of funding to worthy projects in favor of less worthy projects that happen to correspond to the biases of the reviewers.
I’ve discussed peer review from time to time, although perhaps not as often as I should. My view tends to be that, to paraphrase Winston Churchill’s famous quip about democracy, peer review is the worst way to weed out bad science and promote good science, except for all the others that have been tried. One thing’s for sure: if there’s a sine qua non of an anti-science crank, it’s that he will attack peer review relentlessly, as HIV/AIDS denialist Dean Esmay did. Indeed, in the case of Medical Hypotheses, the lack of peer review let the cranks run free to the point where even Elsevier couldn’t ignore it any more. Peer review may have a lot of defects and blind spots, but the lack of peer review is even worse. No wonder cranks of all stripes loved Medical Hypotheses.
None of this means that the current system of peer review is sacrosanct or that it can’t be improved. In the 25 years or so I’ve been doing science, particularly in the 20 years since I began graduate school, I’ve periodically heard lamentations asking, “Is peer review broken?” or demanding that the peer review system be radically altered or even abolished. Usually these lamentations surface every two or three years, make the rounds of scientific circles for a while, and then fade away, like the odor of a particularly stinky fart. It looks as though it’s time yet again, as evidenced by a rather amusingly titled article in The Scientist, I Hate Your Paper: Many say the peer review system is broken. Here’s how some journals are trying to fix it:
Twenty years ago, David Kaplan of the Case Western Reserve University had a manuscript rejected, and with it came what he calls a “ridiculous” comment. “The comment was essentially that I should do an x-ray crystallography of the molecule before my study could be published,” he recalls, but the study was not about structure. The x-ray crystallography results, therefore, “had nothing to do with that,” he says. To him, the reviewer was making a completely unreasonable request to find an excuse to reject the paper.
Kaplan says these sorts of manuscript criticisms are a major problem with the current peer review system, particularly as it’s employed by higher-impact journals. Theoretically, peer review should “help [authors] make their manuscript better,” he says, but in reality, the cutthroat attitude that pervades the system results in ludicrous rejections for personal reasons–if the reviewer feels that the paper threatens his or her own research or contradicts his or her beliefs, for example–or simply for convenience, since top journals get too many submissions and it’s easier to just reject a paper than spend the time to improve it. Regardless of the motivation, the result is the same, and it’s a “problem,” Kaplan says, “that can very quickly become censorship.”
I daresay pretty much every scientist has submitted a paper (probably several), only to have outrageously unreasonable reviewer comments returned, similar to those Kaplan describes above. I myself have experienced this phenomenon on multiple occasions. Most recently, it took me multiple submissions to four different journals to get a manuscript published. It took nearly a year and a half and more hours of writing and rewriting and doing more experiments than I can remember. But “censorship”? I’m half tempted to respond to Dr. Kaplan: Censorship. You keep using that word. I do not think it means what you think it means. In fact, I just did.
No, incompetent or biased peer review is not “censorship.” It’s incompetent or biased peer review, and it’s a problem that needs to be dealt with wherever and whenever possible. As for “rejecting papers for convenience,” perhaps Dr. Kaplan could tell us what a journal editor should do when he or she gets so many submissions that it’s only possible to publish 10 or 20% of them. Peer reviewers aren’t paid; with the proliferation of journals, the appetite of the scientific literature for peer reviewers is insatiable. Moreover, reviewing manuscripts is hard work. That’s why higher impact journals not infrequently use a triage system, in which the editor does a brief review of each submitted manuscript to determine whether it is appropriate for the journal or has any glaring deficiencies and then decides whether to send it out for peer review.
I have the same problem with another complaint in the article, that of Keith Yamamoto:
“It’s become adversarial,” agrees molecular biologist Keith Yamamoto of the University of California, San Francisco, who co-chaired the National Institutes of Health 2008 working group to revamp peer review at the agency. With the competition for shrinking funds and the ever-pervasive “publish or perish” mindset of science, “peer review has slipped into a situation in which reviewers seem to take the attitude that they are police, and if they find [a flaw in the paper], they can reject it from publication.”
He says that as though that were a bad thing. There is no inherent right to publish in the scientific literature, and papers with major flaws should be rejected. How major or numerous the flaws have to be to trigger rejection comes down to the policies of each peer reviewed journal. Don’t get me wrong. I’m not all Pollyannaish, thinking that our current peer review system is the best of all possible worlds. Improvement in the system can only be good for science, if true improvement it is, and there are some good suggestions for improving peer review in the article.
Perhaps the most pernicious problem in peer review is that of reviewers with a bias or an axe to grind. To attack this problem, some journals are trying to eliminate anonymous peer review. The idea is that everything becomes completely open and transparent, with the peer reviews being “part of the record,” so to speak. I can see the appeal of this change. A reviewer is less likely to “be a dick” if he or she knows that the review will be in the public record, for all to see, or at least that the manuscript authors know who the peer reviewers are. Personally, I have a problem with this, mainly because I think the downsides of getting rid of reviewer anonymity outweigh the potential good. For example, I rather suspect that a lot of reviewers would be reluctant to be too hard on manuscripts submitted by big names in their field if they knew their names would be on the review. You don’t want to piss off the big Kahunas in your field. These are the people who organize conferences, invite outside speakers, and sit on study sections. In general, it’s not a good idea to get on their bad side, particularly if you’re still young and struggling to make a name for yourself in the field. For example, I’m a breast surgeon, and I know I would be reluctant to apply even deserved respectful insolence to a paper by, say, Monica Morrow or Armando Guliano (two very big names in the field) if I knew they would learn who was reviewing their papers, even if the paper I was reviewing was obviously crap.
Personally, I like the idea expressed here:
Frontiers journals are trying to find a balance by maintaining reviewer anonymity throughout the review process, allowing reviewers to freely voice dissenting opinions, but once the paper is accepted for publication, their names are revealed and published with the article. “[It] adds another layer of quality control,” says cardiovascular physiologist George Billman of The Ohio State University, who serves on the editorial board of Frontiers in Physiology. “Personally, I’d be reluctant to sign off on anything that I did not feel was scientifically sound.”
As would I.
Another idea I’ve proposed before is to go for full anonymity. In other words, reviewers are anonymous to the authors of manuscripts, and–here’s the change–the authors are anonymous to the reviewers. One advantage to such an approach is that it would tend to alleviate any effect of personal dislikes or even animosity, and it would “take the glow” off of big names submitting papers, hopefully making it less likely that reviewers would give a weak paper a pass because it came from a big name lab. On the other hand, in small fields, everyone knows what everyone else is doing; so anonymizing the manuscript authors would often not hide the identity of the authors.
The last two problems with peer review discussed in this article are highly intertwined:
- Peer review is too slow, affecting public health, grants, and credit for ideas
- Too many papers to review
The first of the two problems above is largely a function of the second. As I pointed out above, the appetite of journals for peer reviewers is insatiable, and peer reviewers are not paid. They’re expected to do it out of the goodness of their hearts, as service back to the community of science. True, peer review activity counts when it comes time to be considered for promotion and tenure, but it’s a lot of work for very little reward, not to mention that it makes reviewers the target of articles like the one under discussion, in which seemingly no one can get it right. Oddly enough, there was one suggestion that I didn’t see anywhere in this article: pay reviewers for their hard work. Apparently the financial model of journal publishing won’t support it.
Be that as it may, one proposed solution is to go to a model like that of PLoS ONE:
An alternative way to limit the influence of personal biases in peer review is to limit the power of the reviewers to reject a manuscript. “There are certain questions that are best asked before publication, and [then there are] questions that are best asked after publication,” says Binfield. At PLoS ONE, for example, the review process is void of any “subjective questions about impact or scope,” he says. “We’re literally using the peer review process to determine if the work is scientifically sound.” So, as long as the paper is judged to be “rigorous and properly reported,” Binfield says, the journal will accept it, regardless of its potential impact on the field, giving the journal a striking acceptance rate of about 70 percent.
“The peer review that matters is the peer review that happens after publication when the world decides [if] this is something that’s important,” says Smith. “It’s letting the market decide–the market of ideas.”
This approach has also proven successful, with PLoS ONE receiving their first ISI impact factor this June–an impressive 4.4, putting it in the top 25 percent of the Biology category. And with a 6-fold growth in publication volume since 2007, Binfield estimates that “in 2010, we will be the largest journal in the world.” Since its inception in December 2006, the online journal has received more than 12 million clicks and nearly 21,000 citations, according to ISI.
I realize that my experience is anecdotal, but among the worst reviewer experiences I ever had was submitting a manuscript to PLoS ONE. In my case, at least, the reviewers were every bit as brutal as any I have ever experienced, which I found odd, because the manuscript I submitted had been all but accepted at an excellent cancer journal. The sole reason it wasn’t accepted is that the reviewers wanted animal studies, and I didn’t have them, nor did I want to delay publication to do them. So I submitted to PLoS ONE, believing its mantra that it’s all about the scientific merit of the paper, only to have my manuscript rejected with extreme prejudice. I later reformatted it for another journal and got it accepted, after one round of revisions, at a journal with an impact factor significantly higher than PLoS ONE’s. Maybe my experience was anomalous, but I don’t buy that PLoS ONE represents the savior of anything, or even that much of an improvement over traditional publication methods. Certainly, I don’t plan on submitting any more of my work to PLoS ONE for a long time, if ever. In fact, I doubt I’ll submit anything to PLoS ONE ever again.
More intriguing is the concept of letting authors take their peer reviews with them when they resubmit their manuscript to a different journal after rejection. When I first heard of this concept, I was quite skeptical. After all, if your paper was rejected, chances are that the reviews weren’t that positive or that they were, at best, lukewarm. Personally, I can say unequivocally that after I’ve had a paper rejected by a journal, the last thing I want is to have to show the next journal the crappy reviews I got the first time around. Why on earth would anyone want that? I want a fresh start; that’s why I resubmit the manuscript in the first place! Peer reviews from a journal that rejected my manuscript are not baggage I want to keep attached to the manuscript as I submit it to another journal.
In the end, peer review is the mainstay of scientific publishing. While it has a great deal of difficulty detecting fraud, it can generally detect bad science. No one claims that the current system is perfect or even that it doesn’t have a lot of problems, some of them serious. However, the cries that “peer review is broken” strike me as a perennial complaint without that much substance. As scientists, we can and should do whatever is feasible to shore up the peer review process, and we shouldn’t be afraid of trying out new models of peer review, such as some of the models described in this article. Just don’t throw the baby out with the bathwater. Peer review may have significant problems, but it works surprisingly well, given its ad hoc nature, and it’s incumbent upon those who would overthrow it to show that the systems that vie to replace it would result in better science being published.