When I wrote about the Trial to Assess Chelation Therapy (TACT) last week, little did I suspect that I would be revisiting the topic again so soon. For those of you not familiar with TACT, it was a trial designed to test a favorite quack treatment for cardiovascular disease, chelation therapy. It is, as I have described many times in the past, an incredibly implausible therapy based on the hugely simplistic notion that, because calcium accumulates in atherosclerotic lesions, chelation could remove the calcium and thereby shrink the lesions. Chelation therapy is a favorite treatment recommended by naturopaths, and the claims made for it border on the absurd. It’s frequently referred to as a “Roto-Rooter” for the arteries that is a “safe and effective” alternative to angioplasty or coronary artery bypass.
The first time I wrote about the results of TACT, its principal investigator Gervasio Lamas, MD, a professor of clinical medicine at the Columbia University Division of Cardiology and Chairman of Medicine at Mount Sinai Medical Center, had just presented part of the trial’s results at the American Heart Association’s annual meeting back in November. As I noted at the time, the results were at best underwhelming, particularly given the methodological flaws. Basically, a statistically significant difference between groups was detected only on subgroup analysis of diabetics, and there was no detectable difference in quality of life no matter how much Lamas tried to slice and dice the data. The next time around was a mere two weeks ago, when, at the American College of Cardiology meeting, Lamas, apparently in pursuit of grinding out as many minimally publishable units (MPUs) as he could (or maybe I should say minimal presentable units), presented the results of the part of the study dealing with the high-dose multivitamin and mineral solution that naturopaths so frequently like to administer with their chelation brew. As I explained again, the results were similarly underwhelming. Then, earlier this week, TACT was published online in JAMA. So underwhelming were the results yet again that I hadn’t planned on blogging about the study, given how extensively I’ve already written about it.
Then I saw a post over on Forbes by Harlan Krumholz entitled Chelation Therapy: What To Do With Inconvenient Evidence, and, oh, Lordy, I realized that I had no choice but to jump back into the breach and discuss the study some more because Dr. Krumholz’s post was in essence a broadside against science and all those nasty skeptics who, to him, won’t accept valid scientific results. It was painful to read and a big disappointment for Forbes given that my good bud Peter Lipson blogs at Forbes. Of course, my good bud Peter immediately (and correctly) took Dr. Krumholz to task for his misguided bloviation about TACT and us supposedly “close-minded” skeptics who won’t accept “inconvenient” evidence. How could he resist? After all, Dr. Krumholz begins with a massive straw man:
What do we do with inconvenient evidence? Imagine studying a seemingly absurd practice that is used to an alarming extent by those who believe in it despite the lack of evidence – and finding that the intervention improves outcomes. And imagine that the people conducting that trial are famous scientists with impeccable credentials who have extensive experience with this type of investigation. Imagine that the practice is so out of the mainstream that the investigators cannot even posit how the treatment could reduce patient risk?
We live in a world of evidence-based medicine, where we are urged to base our medical recommendations and decisions on clinical studies. We base our guidelines on the medical literature and evaluate our practices by how well we adhere to the evidence. But what should we do with inconvenient evidence?
What indeed? The implication is that critics of TACT are questioning and rejecting the results because they are having trouble dealing with the results of a trial that seems to support a therapy that they find absurd. The problem, of course, is that this is a simplification so massive that it’s either intentional or reveals that Dr. Krumholz is almost completely unfamiliar with TACT, the frankly unethical design of the study, and the well-described problems with many of the sites at which the study was carried out, which were described in detail by Dr. R. W. Donnell in his most excellent Magical Mystery Tour of NCCAM Chelation Study Sites (part 1, part 2, part 3, part 4, part 5, part 6, part 7). I urge Dr. Krumholz to read all seven parts. As Dr. Donnell points out, only 12 of the 110 TACT study sites were academic medical centers. Many of the study sites were highly dubious clinics touting highly dubious therapies, including heavy metal analysis for chronic fatigue, intravenous infusions of vitamins and minerals (I could never figure out how infusing minerals could be reconciled with chelation therapy to remove minerals, but that’s just me), antiaging therapies, assessment of hormone status by saliva testing, and much more. Dr. Donnell also points out that the blinding of the study groups to local investigators was likely to have been faulty. So right off the bat, this study was dubious for so many reasons, not the least of which was that some of its site investigators were felons, a problem blithely dismissed by the NIH as being, in essence, irrelevant to whether the study could be done safely.
OK, OK, I get it. Just because several key investigators weren’t exactly the sort of people who had demonstrated a high level of dedication to scientific rigor, ethics, or even honesty doesn’t necessarily mean the results of the trial aren’t valid, but it sure as hell makes me wonder, particularly given how marginal the statistical significance of the detected differences was.
Then there was the result of the FDA inspection of the highest-accruing TACT site. It’s brutal. In fact, it’s more brutal than the Form FDA 483 that I just discussed with respect to Stanislaw Burzynski. I kid you not. It’s that bad. Read it and note the inspectors’ observations:
- Failure to conduct the investigation in accordance with the signed statement and investigational plan. Several examples were given of shoddy procedures, prefilled forms, and failure to train personnel.
- Failure to report promptly to the IRB all unanticipated problems involving risk to human subjects or others. Examples are given, including failure to report the deaths of patients on the study in a timely fashion (in one case the death wasn’t reported to the IRB until four months later; in another case it was never reported at all). In other cases, adverse event reports were not submitted to the IRB.
- Failure to prepare or maintain adequate case histories with respect to observations and data pertinent to the investigation.
- Failure to maintain adequate investigational drug disposition records with respect to dates, quantity, and use by subjects.
In other words, the trial was a total mess at that site. One wonders what it was like at other sites, for instance the Marino Center.
It’s probably worth looking at the paper itself a bit at this point. I didn’t see anything there that made me change my original detailed assessment of the study from four months ago. You can go to the link for the full deconstruction. Every word applies to the published study, but let’s look at some key points again. First, the primary endpoint (i.e., the aggregate of serious cardiovascular events) did indeed show a modest difference, namely 30% of placebo subjects versus 26.5% of the EDTA chelation subjects (hazard ratio 0.82 for chelation). However, one notes that the result is just barely statistically significant, p = 0.035, with the 95% confidence interval for the hazard ratio ranging from 0.69 to 0.99. (The predetermined level for statistical significance for purposes of this study was 0.036, so this is statistically significant by the barest of margins.) More importantly, if you look at the individual endpoints that make up that aggregate, there was no statistically significant difference in death, myocardial infarction, stroke, coronary revascularization, or hospitalization for angina. Subgroup analysis (always a questionable analysis that requires replication, even when preplanned, as in TACT) purported to show a much greater benefit for diabetics, with a hazard ratio of 0.61 (p = 0.002), while patients without diabetes showed no statistically significant difference in any of the outcome measures, including the aggregated total of bad outcomes.
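Just how thin that margin is can be seen by back-calculating the test statistic implied by the reported hazard ratio and confidence interval. This is a minimal sketch, assuming a normal approximation on the log-hazard scale and treating the reported interval as a conventional 95% CI; it only approximately recovers the published p-value because the published figures are rounded:

```python
import math

# Reported TACT primary-endpoint figures: HR 0.82, CI 0.69-0.99
hr, lo, hi = 0.82, 0.69, 0.99
z_crit = 1.959964  # two-sided 95% critical value (assumption: conventional 95% CI)

# Standard error of log(HR), recovered from the CI width on the log scale
se = (math.log(hi) - math.log(lo)) / (2 * z_crit)
z = math.log(hr) / se
# Two-sided p-value from the normal CDF (via math.erf)
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p:.3f}")  # z = -2.15, p = 0.031
```

With a prespecified threshold of 0.036 rather than 0.05, the result clears the bar, but only just: nudging the upper confidence limit from 0.99 to 1.00 would erase the finding entirely.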
One question that came up last time had to do with other ingredients in the chelation mixture, specifically procaine and heparin, either of which could conceivably have had an effect on cardiovascular outcomes, particularly when given intravenously over the course of months. Another question that came up was how there could have been a better outcome in diabetics. One notes that the placebo solution contained 1.2% glucose in order to match the osmolarities of the control and experimental solutions. That could conceivably have contributed to slightly worse outcomes in the control group even in the absence of a therapeutic effect due to chelation. Whatever the case, one notes that in nondiabetic patients no statistically significant benefit due to chelation therapy was detected. Finally, only 65% of subjects finished all infusions, and only 76% finished at least 30. That’s a high dropout rate. Moreover, 17% withdrew consent, resulting in missing data. The investigators tried to correct for this in an online supplement, but these issues remain serious. They might not have been serious enough to call the reported effect into doubt if there had been a much more convincing treatment effect, but when you get equivocal results such as these, such issues loom much larger.
In fact, so messed up was this trial that it’s hard to fathom the decision of JAMA’s editors to publish it. Indeed, Kimball Atwood made a compelling case that, given all the ethical problems involved with this trial, any journal that published its results would be violating ethical norms established through the Uniform Requirements for Manuscripts Submitted to Biomedical Journals, promulgated by the International Committee of Medical Journal Editors.
JAMA’s editors seem to know this in that they wrote an accompanying editorial justifying their decision to publish this trial using some of the lamest reasoning I’ve ever seen. First, they claim that they were really, really, really careful in reviewing the article to the point that they even read the Office of Human Research Protections (OHRP) reports, stating:
Moreover, we recognize that publication of research reports in influential journals can do harm. For instance, the debacle involving the study reporting an association between the measles-mumps-rubella vaccine and autism3 and the adverse effects that article had on immunization rates is an important reminder for all medical journal editors about the influence of their work on the attitudes, behaviors, and decisions of physicians and the nonphysician public.
Despite the limitations of the trial by Lamas et al and the continuing controversy surrounding TACT,4 once the scientific issues had been addressed satisfactorily, the decision to publish this report in JAMA involved consideration of several important factors. First, this NIH-sponsored study had been approved by institutional review boards at 2 academic medical centers, was conducted in compliance with federal regulations, and the OHRP investigation had determined that the corrective actions that had been taken were such that patient protection was not at risk.
Second, despite numerous setbacks, criticisms, and concerns, the funding agencies and the investigators (who include one of the preeminent cardiovascular researchers and one of the most respected statisticians) demonstrated courage and persistence in continuing this trial to its completion.
“Courage and persistence”? Give me a break. Dr. Lamas and his co-investigators might have demonstrated many things throughout the long and winding $30 million road of TACT, but courage and persistence were not among them. OK, maybe persistence, but let’s not forget that a huge grant was at stake, and no investigator who’s the PI of such a huge grant can afford to let it go and let the study crash and burn, although it would have been better for taxpayers and patients if TACT had been allowed to crash and burn. In fact, I get the feeling that the JAMA editors deep, deep down know that, too, as they published an editorial by Steven Nissen, who was utterly blistering in his criticisms of TACT:
Differential dropout in TACT suggests unmasking, but the problem of intentional unblinding is more concerning. The sponsors of the trial, the National Heart, Lung, and Blood Institute (NHLBI) and the National Center for Complementary and Alternative Medicine (NCCAM), were unblinded throughout the trial. The National Institutes of Health policy unwisely allows the sponsor access to unblinded trial data, and both organizations sent observers to the closed sessions of the data monitoring committee. This gave them access to confidential data during each of the 11 interim analyses. The unblinding of the study sponsor represents a serious deviation from acceptable standards of conduct for supervision of clinical trials. If a pharmaceutical company sponsoring a trial were allowed access to actual outcome data during the study, there would be major objections. Like any sponsor, the NHLBI and NCCAM cannot be considered unbiased observers. These agencies made major financial commitments to the trial and may intentionally or inadvertently influence study conduct if inappropriately unblinded during the study.
Dr. Nissen also notes many other limitations in the design of TACT that undermine its reliability, including the observation that the primary endpoint should have included only the most objective and reliable components (death, stroke, and myocardial infarction) in its composite but instead also included “softer” endpoints (coronary revascularization and hospitalization for angina), which accounted for 318 of the 483 events reported as primary end point events. He further noted that “if any unblinding occurred, investigator biases could potentially influence the decision to hospitalize or revascularize individual patients.”
Dr. Nissen concluded:
Given the numerous concerns with this expensive, federally funded clinical trial, including missing data, potential investigator or patient unmasking, use of subjective end points, and intentional unblinding of the sponsor, the results cannot be accepted as reliable and do not demonstrate a benefit of chelation therapy. The findings of TACT should not be used as a justification for increased use of this controversial therapy.
I couldn’t have said it better myself.
Unfortunately, Dr. Krumholz sees it almost exactly the opposite:
The irony is that if a drug manufacturer had gotten this result, they would have celebrated. We have billion dollar drugs like niacin and fenofibrate and ezetimibe that have less evidence than chelation therapy has now. None of those drugs has contemporary outcomes studies showing benefit – and 2 of them (niacin and fenofibrate) have 2 recent negative trials.
If we have little faith in chelation therapy, then it is hard to turn 180 degrees with a positive result and suddenly completely believe in it and recommend its use. Any trial can give an anomalous result and we need to be careful about jumping to a new position with each new piece of evidence. However, we cannot on one hand promote evidence-based medicine and on the other hand ignore what we do not like.
This is, as I’ve discussed extensively above, not what skeptics and critics are doing. Nor are they being hypocritical, as Dr. Krumholz implies insultingly. The ethical and scientifically rigorous conduct of clinical trials is a key component of evidence-based medicine. Trials that are sloppy in execution, carried out in large part at centers full of quacks (yes, quacks), and unethical are not good evidence-based medicine.
It’s highly disappointing that Dr. Krumholz took the results of TACT at face value. As an academic cardiologist, he should know better, but it appears that he didn’t even bother to read the paper. He didn’t know that there was heparin in the chelation solution, and he didn’t seem to have a problem with the addition of “soft” outcomes to the more typical “triple” aggregate outcome used in cardiology studies, consisting of myocardial infarction, death, and stroke. In fact, as a commenter pointed out, even the triple composite outcome is not a patient-important outcome. Indeed, given that the individual endpoints making up the composite endpoint showed no statistically significant differences, the composite endpoint can best be looked at as a way of trying to manufacture a statistically significant result by adding together endpoints that are not independent and hoping that in aggregate they produce a statistically significant difference. I also note, as I have for defenders of Stanislaw Burzynski, that saying other investigators do it too (or, as I like to call it, the “They do it too!” defense) is not a compelling retort, except perhaps among six-year-olds. Apparently both Dr. Krumholz and Forbes’ Matthew Herper find such a retort compelling, because they both use it in comments after Peter Lipson’s post.
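The arithmetic behind that concern is easy to illustrate. Here is a toy example, with entirely hypothetical event counts (not TACT’s actual data), using a simple two-proportion z-test: three component endpoints, none individually significant, pooled into a composite that comfortably crosses p < 0.05. For simplicity the events are assumed disjoint, so each patient contributes at most one event:

```python
import math

def two_prop_p(e1, n1, e2, n2):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    p1, p2 = e1 / n1, e2 / n2
    pooled = (e1 + e2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n = 1000  # hypothetical patients per arm
# Hypothetical (placebo, treatment) event counts for three component endpoints
components = {"death": (100, 80), "MI": (90, 75), "revascularization": (80, 65)}

for name, (placebo, treated) in components.items():
    print(f"{name}: p = {two_prop_p(placebo, n, treated, n):.3f}")  # each > 0.10

total_placebo = sum(p for p, _ in components.values())
total_treated = sum(t for _, t in components.values())
print(f"composite: p = {two_prop_p(total_placebo, n, total_treated, n):.4f}")  # < 0.01
```

None of the three differences stands on its own, yet the composite looks impressive, which is why the choice of which endpoints go into a composite, especially “soft,” judgment-dependent ones like revascularization, matters so much.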
The bottom line is not, as Dr. Krumholz argues, that proponents of EBM are reflexively rejecting valid clinical trial results. It is rather that TACT is a trial testing a highly implausible therapy using methodology that was incredibly unlikely to produce a useful result. Worse, it was incompetently carried out at many sites and so riddled with problems that Dr. Nissen is quite correct to declare it useless and uninformative. Worse, it endangered patients without offering a reasonable likelihood of helping. If ever there was a dubious trial that is the poster child for using a Bayesian approach to clinical trials, it’s TACT.
And if you’re in the U.S., as I am, you paid for it to the tune of $30 million. That’s $30 million that could have gone to actual, useful biomedical research. It’s very sad that apparently neither Dr. Krumholz nor Matthew Herper can see that. It’s sadder still that JAMA published this tripe. In that respect, JAMA is every bit as guilty as The Lancet was in 1998 when it published Andrew Wakefield’s antivaccine nonsense. I can (sort of) accept the argument that all clinical trials should be published. However, that doesn’t mean a clinical trial so riddled with scientific and methodological flaws should be published in JAMA. If published at all, TACT should have appeared in some crappy, bottom-feeding journal, because that’s all it deserves. In a world where medical publishing worked properly, no journal in the top or middle tier would have touched this toxically bad manuscript with the proverbial ten-foot pole.
Shame on JAMA! Shame on NCCAM and the NHLBI for funding this nonsense! And, yes, shame on all the shruggie cardiologists who are apparently unwilling or unable to look beyond the hype.