I was very happy with NEWSWEEK recently, specifically because of its lengthy exposé of Oprah Winfrey and her promotion of pseudoscience, mysticism, and quackery on her talk show. However, I haven’t always been that thrilled with NEWSWEEK’s coverage of medicine and science. For example, NEWSWEEK’s science columnist Sharon Begley has gotten on my nerves on more than one occasion, most recently when she castigated doctors for not enthusiastically embracing comparative effectiveness research, making the unjustified slur against physicians that they “hate science.” Indeed, she even managed to irritate Steve Novella with her slur, and that’s not nearly as easy a thing to do as it is to irritate me sufficiently to want to lay down a heapin’ helpin’ of not-so-Respectful Insolence or to irritate PZ enough to type up a slapdown.
Oops, she did it again.
Last week, Begley produced an article that, quite frankly, annoyed the crap out of me, called From Bench To Bedside: Academia slows the search for cures. It never ceases to amaze me how some pundits can take an enormously flawed idea as to why a problem exists and run right off the cliff with it.
Begley begins by pointing out that President Obama has not yet appointed a Director of the NIH. That’s a fair enough criticism. Personally, I’m not happy that there’s no permanent NIH Director. I’d like to think, as Begley hopes, that it’s because Obama realizes how important this pick is and wants to get it right. But that’s about all I agree with Begley on. After that introduction, she runs straight off the cliff:
NIH has its work cut out for it, for the forces within academic medicine that (inadvertently) conspire to impede research aimed at a clinical payoff show little sign of abating. One reason is the profit motive, which is supposed to induce pharma and biotech to invest in the decades-long process of discovering, developing and testing new compounds. It often does. But when a promising discovery has the profit potential of Pets.com, patients can lose out. A stark example is the work of Donald Stein, now at Emory University, who in the 1960s noticed that female rats recovered from head and brain injuries more quickly and completely than male rats. He hypothesized that the pregnancy hormone progesterone might be the reason. But progesterone is not easily patentable. Nature already owns the patent, as it were, so industry took a pass. “Pharma didn’t see a profit potential, so our only hope was to get NIH to fund the large-scale clinical trials,” says Stein. Unfortunately, he had little luck getting NIH support for his work (more on that later) until 2001, when he received $2.2 million for early human research, and in October a large trial testing progesterone on thousands of patients with brain injuries will be launched at 17 medical centers. For those of you keeping score at home, that would be 40 years after Stein made his serendipitous discovery.
Whenever I see a story like this, I always wonder exactly why it took so long to move an idea from concept to clinical trial to clinical use. A while back, I wrote about John Ioannidis’ study that showed that it takes between 14 and 44 years for an idea to make it “from bench to bedside.” In any case, when in doubt, do a PubMed search to see what the person describing his research has published. So I did just that for Dr. Stein. He has a healthy publication record (162 publications), as well as a number of publications from the late 1960s on brain injury in rodent models. Clearly, Dr. Stein has been a successful and well-funded researcher. However, when I searched his name and “progesterone,” I didn’t find a single publication until 2006. So I dug a little deeper, and the first paper I could find by him postulating a sex difference in healing after head injuries was published in 1987. In 1986, he coauthored a review in Nature on the pharmacological attenuation of brain injury after trauma that didn’t once mention progesterone. The point here is not to cast doubt on Dr. Stein’s contention that he first noticed this finding in the 1960s, but rather to point out that it must not have been a high priority in his career, because he didn’t publish on it for 20 years and didn’t really start doing a lot of work on it until the last few years, with a flurry of publications since 2006.
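For readers who want to run this kind of check themselves, PubMed exposes its search through NCBI’s E-utilities interface. Here’s a minimal sketch of building such a query programmatically; the author/keyword terms are hypothetical examples of the kind of search described above, not the exact queries I ran:

```python
from urllib.parse import urlencode

# NCBI E-utilities ESearch endpoint for PubMed queries.
ESEARCH_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(term, retmax=20):
    """Build an ESearch URL that returns PubMed IDs matching `term`."""
    params = {"db": "pubmed", "term": term, "retmax": retmax}
    return ESEARCH_BASE + "?" + urlencode(params)

# An illustrative query combining an author field with a keyword,
# like searching a researcher's name plus "progesterone":
url = pubmed_search_url("Stein DG[Author] AND progesterone")
print(url)
```

Fetching that URL (e.g., with `urllib.request.urlopen`) returns XML containing a `Count` element and the matching PMIDs, which is enough to see when an author first published on a topic.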
The other point is, as I have said time and time again, that a scientist can’t just jump straight to human studies (unless, of course, one believes animal rights activists who deny that animal research contributes anything to medical advancements). There has to be solid preclinical evidence. In other words, there has to be a lot of cell culture, biochemical, and animal work that all supports your hypothesis, and it can take a minimum of several years to develop that evidence. Medical ethics and the Declaration of Helsinki demand it. Moreover, the sort of preclinical work that would have been needed to lay the groundwork for clinical trials of progesterone as a neuroprotective agent in trauma is exactly the sort of research that the NIH has funded all these years. One wonders why Dr. Stein, who clearly has a well-funded lab, didn’t divert a bit of that funding earlier to do some pilot experiments to use to pursue NIH funding. Maybe he didn’t have enough extra funds lying around or couldn’t find a way to relate the project to one of his existing projects sufficiently to justify doing so. In any case, at the risk of sounding too harsh, I will say that the whole big pharma thing struck me as very self-serving. Whatever the case was, I strongly suspect that the full story is far more complicated than the “big pharma won’t fund it because it can’t patent it” hyperbole that Begley is laying down (and that sounds very much like the same sorts of excuses purveyors of “natural” therapies use to justify why they don’t do any research to show that their “cures” work).
But that’s not what irritated me the most about Begley’s article. This is:
The desire for academic advancement, perversely, can also impede bench-to-bedside research. “In order to get promoted, a scientist must publish in prestigious journals,” notes Bruce Bloom, president of Partnership for Cures, a philanthropy that supports research. “The incentive is to publish and secure grants instead of to create better treatments and cures.” And what do top journals want? “Fascinating new scientific knowledge, [not] mundane treatment discoveries,” he says. Case in point: in research supported by Partnership for Cures, scientists led by David Teachey of Children’s Hospital of Philadelphia discovered that rapamycin, an immune-suppressing drug, can vanquish the symptoms of a rare and sometimes fatal children’s disease called ALPS, which causes the body to attack its own blood cells. When Teachey developed a mouse model to test the treatment, he published it in the top hematology journal, Blood, in 2006.
A brief aside: Wow. Surgeon that I am, I didn’t know that Blood was such a top tier journal. The reason I’m amazed is that I published in Blood last year. If Blood will take one of my manuscripts, it can’t be that awesome, can it? (Cue false modesty.) Now, back to Begley:
But the 2009 discovery that rapamycin can cure kids with ALPS? In the 13th-ranked journal. The hard-core science was already known, so top journals weren’t interested in something as trivial as curing kids. “It would be nice if this sort of work were more valued in academia and top journals,” Teachey says. Berish Rubin of Fordham University couldn’t agree more. He discovered a treatment for a rare, often fatal genetic disease, familial dysautonomia. Given the choice of publishing in a top journal, which would have taken months, or in a lesser one immediately, he went with the latter. “Do I regret it?” Rubin asks. “Part of me does, because I’m used to publishing in more highly ranked journals, and it’s hurt me in getting NIH grants. But we had to weigh that against getting the information out and saving children’s lives.”
My brain hurts from the concentrated ignorance on display here.
Let’s boil down Begley’s thesis here. The cool basic science stuff appeared in the top hematology journal, but the first report of the application of that basic science to treat patients appeared only in the 13th-ranked journal. Obviously journals value basic science over clinical science! Those bastards! They don’t care about curing children! To them curing kids is “trivial.”
And Begley’s full of crap.
Begley seems blissfully ignorant of two things: how journal rankings work and the fact that different scientific journals fill different niches. Rankings of scientific and medical journals are in general based on something called the “impact factor” (IF). The IF is often used as a proxy for the importance of a journal in its field, with higher numbers considered better. Although the citation data behind the IF are proprietary, the calculation itself is straightforward: the number of citations received in a given year by the papers a journal published during the two preceding years, divided by the number of citable items it published in those two years. In general, higher-IF journals are viewed as more desirable to publish in. Thus, what makes the IF a proxy for a journal’s importance is the presumption that more citations of its articles equate to more interesting science and novel findings that more scientists cite. This may or may not be a valid assumption. Finally, one aspect of the IF is that journals designed for a more general readership tend to have higher IFs than subspecialty journals. In other words, Cell, Nature, and Science have high IFs. Within a field, Cancer Research or Clinical Cancer Research has a higher IF than Breast Cancer Research and Treatment (an example that actually is in line with using IFs, as CR and CCR are definitely better journals than BCRT).
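To make the arithmetic concrete, here is a minimal sketch of the two-year impact factor calculation described above. The journal and all the numbers are made up purely for illustration:

```python
def impact_factor(citations_received, citable_items):
    """
    Two-year impact factor for year Y:
    citations received in Y to papers published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2.
    """
    total_citations = citations_received["Y-1"] + citations_received["Y-2"]
    total_items = citable_items["Y-1"] + citable_items["Y-2"]
    return total_citations / total_items

# Hypothetical journal: in 2009 it received 700 citations to its 2008
# papers and 500 to its 2007 papers, having published 220 and 180
# citable items in those years, respectively.
if_2009 = impact_factor(
    {"Y-1": 700, "Y-2": 500},
    {"Y-1": 220, "Y-2": 180},
)
print(if_2009)  # 1200 citations / 400 items = 3.0
```

Note how the two-year window favors fields that cite recent work quickly, which is part of why general journals and fast-moving fields tend toward higher IFs than subspecialty journals.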
Here’s where niches come in. Different journals have different niches. For example, the journal Begley mentions, Blood, is not primarily a clinical journal. True, it does publish some clinical trial results, but its main emphasis is clearly on basic and translational research. It’s simply silly to get all worked up because Blood didn’t publish a small pilot study with six patients and conclude that journals don’t value clinical research. They do, just not journals that are primarily basic and translational science journals. Publishing clinical trials is not their raison d’être. However, I think I know why Teachey’s second study was not viewed as being as interesting as his first study. A mouse model that provided proof of principle that rapamycin can treat a rare blood condition, complete with a scientific mechanism, is indeed interesting for a wide range of researchers: basic, translational, and clinical. A small pilot study tends to be less so.
Let’s look at Teachey’s BJH article. It’s a nice study, but clearly a very preliminary pilot study. Such pilot studies do not generally make it into the top tier journals, no matter how interesting the science is, because, well, they’re so preliminary and small (and thus could be wrong). Begley seems to think that not considering such studies to be top tier is akin to considering curing children of deadly diseases to be “trivial.” She also seems to think that not placing such a study in a top tier journal will fatally delay the application of such cures. However, no treatment is going to be approved on the basis of such a small pilot study; at a minimum, a larger phase II study would still have to be done, and that is the study that would be likely to show up in the higher tier journals, particularly if it were well-designed to include some cool correlative science studies that confirmed the mechanism in humans. In any case, Begley doesn’t make a good case that Teachey’s study not being published in Blood has somehow delayed the fruits of his research from reaching sick children. Much work still needs to be done before Teachey’s discovery becomes common practice.
Begley is closer to the mark (albeit still exaggerating) when she discusses how the importance of IFs can distort how and where scientists decide to publish. In brief, scientists tend to want to publish in the highest impact journals because articles in such journals are viewed as being more meaningful than those in lesser journals. Where she goes off the mark is her assumption that those horrible basic scientists, with their insistence on knowing molecular mechanisms, are the ones keeping clinical research in the ghetto of lower-tier journals and thereby keeping teh curez from teh sick babiez!!!! For instance, after lionizing Berish Rubin for having chosen to publish in the lesser journal rather than keep teh curez from teh babiez, she castigates an unnamed scientist:
Not all scientists put career second. One researcher recently discovered a genetic mutation common in European Jews. He has enough to publish in a lower-tier journal but is holding out for a top one, which means identifying the physiological pathway by which the mutation leads to disease. Result: at least two more years before genetic counselors know about the mutation and can test would-be parents and fetuses for it.
This is so vague as to be useless. “A genetic mutation common in European Jews”? What mutation? What is the significance of this mutation in carriers? To what disease or defect does it predispose? Begley doesn’t say. I realize she’s probably doing so in order not to give a huge clue as to who this evilly careerist scientist who doesn’t care about patients may be, but without that information I have no idea whether this discovery is so potentially important to patients that delaying its publication until he figures out how this mutation does its dirty work would actually harm anyone. In any case, validating a new genetic mutation as a risk factor to the point where a screening test for it can be developed is an incredibly difficult task, requiring clinical trials and validation studies. The process of FDA approval for a new genetic test is not trivial. In any case, all we’re left with is a bunch of self-serving anecdotes to support her dislike of basic science. (I’m half tempted to ask, with great satisfaction, why does Sharon Begley hate basic science?)
There’s a deeper problem, though, with Begley’s essay. Having both an MD and a PhD and doing translational research myself, I think I have some perspective on this. The problem is that Begley seems to buy into the Magic Bullet model of scientific progress, a.k.a. the “big breakthrough.” While it’s true that big breakthroughs do sometimes occur (think Gleevec, for instance), the vast majority of science and scientific medicine is incremental, each new advance being built upon prior advances. It’s also very frequently full of false starts, dead ends, and research that looks promising at first and then peters out. If a big breakthrough could be conjured by willpower and risky research, we’d have the cure for cancer by now. These disease processes are incredibly complex, and sometimes the research needed to understand and treat them is even more complex.
But it’s more than that. Begley may have a point when she mentions that clinical researchers are often stymied when their grants are reviewed by basic scientists, but I can tell you that this goes both ways. If you’re a basic scientist and want to get funded by the NIH, your project had better have a practical application to human disease. Just studying an interesting biochemical reaction or a fascinating gene because it is fascinating science is not enough. If you can’t show how it will result in progress towards a treatment for a disease, it is incredibly unlikely that your grant will be funded by the NIH.
Discoveries can’t be mandated or dictated, no matter how much Begley seems to think that just changing the emphasis of the NIH to more translational research or funding riskier projects would do it. Don’t get me wrong; there’s no doubt that the NIH has often been far too conservative in what grants it funds, and that risk averseness becomes worse the tighter its budget gets and the tighter the paylines it can fund. However, the NIH is also the steward of taxpayer money. Fund too many risky projects, and it is likely that nothing will come of the vast majority of them. As in everything, there needs to be balance. Ideally there should be a portfolio of research that is balanced between the solid, but not radical, science that is likely to reliably lead to incremental progress and riskier projects with a higher potential payoff but a much higher risk of producing nothing.
Finally, Begley doesn’t appear to understand that, without basic science, there can be no translational science. Translational research depends upon a constant flow of new observations and new discoveries in basic science. Sometimes, it can’t be predicted where those new discoveries will come from. Sometimes they come right out of left field. I know; a project I’m working on is just such a project. It resulted from a serendipitous discovery by my collaborator and has the potential to result in a great new treatment for not just breast cancer but melanoma as well. It was not the sort of discovery that could have been foretold, and it may never have been noticed if it hadn’t been for a basic scientist following curiosity where it led. Although there’s no doubt that the NIH can use some improvement, I hope that, whoever the next director of the NIH is, he or she does not succumb to the sort of temptation for quick fixes that Begley seems to think necessary to “fix” the NIH and medical academia.