So-called “complementary and alternative medicine” (CAM) or, as it’s now just as frequently called, “integrative medicine” (IM) represents a hodge-podge of remedies that are mostly based on prescientific concepts about how the human body works and how disease attacks it. Homeopathy, for example, rests on two principles of sympathetic magic: its concept of “like cures like” and its law of infinitesimals. The former is, in essence, a manifestation of the magical concept that “like produces like.” Similarly, the law of infinitesimals, in which a remedy serially diluted to the point that not a single molecule of the original substance is likely to remain is postulated to retain, and even increase, its potency, is a manifestation of the magical law of contagion, which postulates that there is a lasting connection between things that were once in contact. What is the homeopath’s prized “memory of water” but the claim that water that has been in contact with a remedy retains a connection with (a “memory” of) that remedy? And don’t even get me started on the various forms of “energy” healing, which postulate imbalances in a mystical life force that science can’t detect and further claim that humans can somehow manipulate this life force to healing effect. Reiki, therapeutic touch, and even acupuncture are forms of “energy healing,” and many of these “energy” modalities are, when stripped to their core, nothing more than faith healing that substitutes Eastern mysticism for Christian beliefs. One of the only forms of CAM/IM with any scientific plausibility is the use of herbal remedies, which substitute impure active ingredients, at variable concentrations and with variable contaminants, for a pure active ingredient.
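The dilution claim, incidentally, is easy to quantify. A common homeopathic potency is 30C, meaning thirty successive 1:100 dilutions. A quick back-of-the-envelope calculation (generously assuming we start with a full mole of the remedy substance) shows why no molecules survive:

```python
# Expected number of remedy molecules remaining after a 30C homeopathic dilution.
# Generous assumption: we start with one full mole of the active substance.
AVOGADRO = 6.022e23          # molecules per mole
DILUTION_FACTOR = 100 ** 30  # 30C = thirty successive 1:100 dilutions = 10^60

starting_molecules = 1 * AVOGADRO
expected_remaining = starting_molecules / DILUTION_FACTOR

print(f"{expected_remaining:.1e}")  # ~6.0e-37, i.e., effectively zero molecules
```

In other words, you would need roughly 10^36 such preparations before you could expect to find even one molecule of the original substance in any of them.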
Given all this, it is not surprising that, when tested in rigorous scientific settings in well-designed clinical trials, the vast majority of CAM modalities fail miserably to show any efficacy greater than that of a placebo. Yet so strong is the belief in these modalities that physicians who should know better cling to them even after clinical trial after clinical trial fails to show any efficacy beyond that of a placebo. Even worse, all too frequently they argue that, if randomized clinical trials can’t show efficacy for these modalities, it must be because controlled clinical trials are somehow inappropriate or inadequate to the challenge of testing these treatments, usually because they’re so “personalized.” Sadly, this nonsense keeps cropping up even today in mainstream medical journals whose editors should also know better than to think this sort of “appeal to other ways of knowing” belongs in scientific medicine. The most recent journal to succumb to this sort of “reasoning” is BMJ, which published earlier this week a depressing exercise in pseudoscientific prestidigitation in the form of an editorial entitled “Closing the evidence gap in integrative medicine.” Written by Hugh MacPherson, David Peters (professor of integrated healthcare), and Catherine Zollman, from the University of York, the University of Westminster, and the Nightingale Valley Practice of Bristol, respectively, it’s an exercise in how not to think about CAM right from the very start:
Integrative medicine was recently defined as “medicine that reaffirms the importance of the relationship between practitioner and patient, focuses on the whole person, is informed by evidence, and makes use of all appropriate therapeutic approaches, healthcare professionals and disciplines (conventional and complementary) to achieve optimal health and healing” (www.imconsortium.org).
First off, this is a steaming, stinking, slimy pile of fetid dingo’s kidneys. “Integrative” medicine is nothing more than a “rebranding” of “complementary and alternative medicine” that removes the word “alternative” and seeks to slap a patina of scientific respectability on modalities that have no science behind them. It’s nothing more than “integrating” pseudoscience with effective medicine. There’s no proven benefit, but IM promoters like to try to convince you that there is, and they’re willing to do anything (other than acknowledging the science, of course) to accomplish this. Even worse, all this blather about “holistic” medicine, “reaffirming the importance of the relationship between the practitioner and the patient,” and focusing on the “whole person” describes things that are part of all good medical practice. This is nothing more than another example of the “bait and switch” of CAM/IM, in which woo-meisters appropriate perfectly valid parts of the practice of science-based medicine and act as though they and they alone have discovered their importance. They did it with diet and exercise, which are not in any way “alternative” or “integrative,” and they do it here with “holistic” medicine.
So let’s see what MacPherson et al think. After standard boilerplate about how chronic diseases are difficult to deal with, account for 78% of health care expenditures, and aren’t always handled as well as we would like by scientific medicine, the authors whine about how the evidence base for IM is just not what advocates would like it to be. This is the key sentence of the entire paper:
Yet when it comes to deciding whether an intervention, and which type of intervention, might be helpful for a particular patient, a worrying gap exists between the perceived potential for using integrative approaches in areas of poorly met clinical need and the availability of supporting evidence derived from good research.
Note the phraseology: the perceived potential for using integrative medicine in areas of poorly met clinical need. That’s what it is, perception, and what MacPherson et al are in essence admitting is that there is a gap between their perception of how awesome IM is for chronic diseases and the reality of the largely negative clinical trial data that are available. I can’t help but point out that real scientists, when faced with a gap between their perception and the cold, hard evidence of science, generally close the gap by losing or adjusting their perception, ditching therapies that don’t stand up to scientific scrutiny and moving on to others. True, it may be a messy process. It may take a lot longer than we would like it to. It may be very contentious. But in science, ultimately evidence wins out, and scientists and science-based physicians ultimately adjust their perceptions to align with the evidence. As you might imagine, however, that’s not what MacPherson et al do. Oh, no, not at all. For them, if there is a gap between perception and reality when it comes to CAM/IM, it’s time to measure reality differently until reality appears to align itself with their perception:
Integrative interventions tend to involve potentially synergistic, multimodal, and complex interactions that are often dependent on the relationship between practitioner and patient, and on patients’ preferences, expectations, and motivations. For example, the motivation, compliance, and response of a patient undertaking dietary or other lifestyle changes, or practising relaxation exercises, will depend greatly on how they feel about their practitioner. Consequently, a randomised placebo controlled trial aiming to study components of integrative interventions in isolation may actually distort the very thing it is investigating. Moreover, many patients who seek integrative medicine in routine care would often be excluded from entry into a trial because they have chronic diseases, multiple pathologies, strong preferences, or are using concurrent treatments. Therefore, the extent to which findings from randomised controlled trials can be generalised to these patients is far from clear.
The limitations of making systematic reviews and meta-analyses of randomised double blind placebo controlled trials the pinnacle of an evidence hierarchy were recently stressed by Sir Michael Rawlins, who expressed his concern that, “Hierarchies attempt to replace judgment with an over-simplistic, pseudo-quantitative, assessment of the quality of the available evidence” and that “hierarchies of evidence should be replaced by accepting–indeed embracing–a diversity of approaches.” Similarly, the translational research movement suggests using a “multiplicity of tactics.”
First, note the standard CAM-speak about “synergistic” and “complex” interactions that depend on the relationship between the practitioner and patient. If there were a more blatant admission that CAM relies on placebo effects, I haven’t seen it. No doubt MacPherson didn’t mean it that way, but perhaps it’s a Freudian slip.
Now, in fairness, I’ll point out that part of the reason I’ve come to embrace science-based medicine (SBM) over evidence-based medicine (EBM) is due to shortcomings in the EBM paradigm, but my reasons do not primarily concern what’s at the top of the EBM hierarchy of evidence but rather what’s at the bottom: basic science studies and prior probability based on science. The problem with EBM is that it values clinical trials over every other form of evidence, even when testing remedies as patently ridiculous as homeopathy. However, I also have to point out that SBM already uses a “multiplicity of tactics.” It’s a frequent canard of the CAM crowd that, if the evidence isn’t from a double-blinded, randomized controlled clinical trial, it’s worthless. That’s not true at all. There are lots of clinical questions that don’t always lend themselves to an RCT (many surgical procedures come to mind). That doesn’t mean there’s no evidence. As I said, we do use a “multiplicity of tactics” and often end up using a confluence of evidence from different sources that converge on an answer, and often those sources are not RCTs. However, RCTs are in general the strongest form of clinical evidence and are to be preferred when it is possible to do them. And few are the CAM modalities that can’t be subjected to RCTs. It’s just that the woo-meisters don’t like the results; so they label EBM hierarchies as “simplistic” devices that preclude independent thought, which, presumably, they have in abundance, not being the pharma shill science drones who can’t see the “whole patient,” as CAMsters can.
So what, according to MacPherson, is the answer? This:
What sort of diversity or multiplicity might better reflect the complex causality of the real world? To give some examples, pragmatic randomised controlled trials are increasingly used to collect evidence from typical populations receiving treatment in ways that reflect normal practice. Within pragmatic trials it is possible to optimise rather than constrain patient-practitioner interactions, and by incorporating patient preferences into trial design, the effects of synergies between treatment and choice can be captured. Observational studies might help target treatments and frame future research questions more effectively. More basic science research could help identify mechanisms of action, and meta-regression could better explain variability in response. Evidence from different sources can be combined using decision-analytical modelling and can be used for economic evaluations. Overall, research should aim to serve both practice and policy development.
This is more arrant nonsense. First off, “pragmatic” trials are usually performed after a treatment has already been shown to be efficacious, in order to see how that treatment performs in the “real world,” where medicine is practiced far more messily than in the regimented paradigm of the RCT. The reason is not that effects are expected to be greater than in an RCT but that the less rigorous application of the treatment, and the broadening of its use to patients beyond the inclusion criteria of an RCT, usually result in a decrease in observed efficacy, not the increase in efficacy that MacPherson appears to expect to see if CAM is studied this way. As for “synergy” between “treatment and choice,” I have no idea what the hell that means. It’s woo-speak bordering on Lionel Milgrom’s homeopathy articles.
Next, observational studies lack the randomization and blinding of RCTs, which leaves them prone to all sorts of confounders and biases that can be devilishly difficult to control for, which is why they produce all sorts of seemingly “positive” results (i.e., false positive results) that quite often don’t hold up in later, better-designed prospective randomized trials. Moreover, the implication that SBM doesn’t deal with such studies is nonsense. The literature is chock full of observational studies, which are often the first step (and, on occasion, the only step) in finding evidence that influences practice. The Women’s Health Initiative, for example, is a study that followed thousands of women and correlated health outcomes with lifestyle. It’s the study that demonstrated, among many other things, that hormone replacement therapy does not prevent heart disease in postmenopausal women and is associated with an increased risk of breast cancer.
Finally, as for basic science research, I rather think that basic science doesn’t mean what MacPherson thinks it means. The reason is that basic science considerations alone are enough to dismiss the vast majority of CAM as being so scientifically implausible as to be not worth spending a lot of money studying. Examples include homeopathy and most, if not all, “energy healing” modalities. MacPherson should be careful what he asks for on this score; he might actually get it someday, and when that day comes I rather suspect he won’t like the results. Indeed, I’m working to see that he does get what he claims to want when it comes to basic science.
MacPherson concludes, after denigrating RCTs yet one more time as having “non-typical patients and artificially standardised interventions,” that the only way to “close the evidence gap” is to “broaden” the range of evidence used to study CAM/IM. What he really means is to substitute lower quality trials and evidence for the highest quality RCTs, because high quality RCTs almost inevitably fail to find efficacy for most CAM/IM. What MacPherson fails to acknowledge is that most IM therapies are perfectly amenable to RCT methodology and that, whenever they are tested by this methodology, they almost invariably fail to demonstrate any effects that are not easily attributable to either placebo effects or random chance. Unfortunately, MacPherson tacitly assumes that, if only more research were done, an evidence base supporting the efficacy of CAM/IM would somehow magically emerge from these “broadened” sources of data. It’s an excellent example of what can only be called wishful thinking, and at some point science needs to say no to further special pleading. If CAM/IM advocates want to play by the rules of science, then they should play by the rules of science and quit trying to change the rules when they can’t win under them.