I hate to end the week on a bit of a downer, but sometimes I just have to. At least, it’s depressing to anyone who is a proponent of science-based cancer care as the strategy most likely to decrease the death rate from cancer and improve quality of life for cancer patients. Unfortunately, in enough ways to disturb me, oncology is actually going in the exact opposite direction. I’m referring, of course, to the phenomenon of “integrative oncology,” a form of quackademic medicine that is proliferating and insinuating itself into academic medical programs like so much kudzu. The concept behind “integrative medicine” is that it is somehow the “best of both worlds,” in which the very best of science-based medicine is combined with the very best of “alternative medicine.” Sometimes, in a rather racist construct, it’s portrayed as combining the “very best” of “Western medicine” with the best of “Eastern medicine,” as though only “Western” medicine is scientific and “Eastern medicine” is mystical and magical. In reality, what integrative oncology involves is “integrating” quackery with medicine, pseudoscience with science, and woo with reality-based treatment, and I have yet to see any evidence that diluting the scientific basis of medicine with pseudoscience does medicine any more good than dilution does for the remedies used as the basis of homeopathy. Certainly, it won’t make oncology stronger, only woo-ier.
But how widespread is the phenomenon? Pretty freakin’ widespread, I’m afraid. In fact, I was just reminded of how widespread it is by a recent systematic review out of the Ottawa Integrative Cancer Centre, the Canadian College of Naturopathic Medicine in Toronto, and the Ottawa Hospital Research Institute that appeared a month or two ago in Current Oncology. Basically, a naturopath named D. M. Seely and colleagues surveyed the “integrative oncology” landscape and found that there’s a lot out there. The purpose of the review was to summarize the research literature describing integrative oncology programs, and to do so Seely et al combed the medical literature and conference abstracts, looking for programs reporting combining “complementary and alternative medicine” (CAM) care and conventional cancer care. The results were summarized in the abstract thusly:
Of the 29 programs included, most were situated in the United States ( n = 12, 41%) and England ( n = 10, 34%). More than half ( n = 16, 55%) operate within a hospital, and 7 (24%) are community-based. Clients come through patient self-referral ( n = 15, 52%) and by referral from conventional health care providers ( n = 9, 31%) and from cancer agencies ( n = 7, 24%). In 12 programs (41%), conventional care is provided onsite; 7 programs (24%) collaborate with conventional centres to provide integrative care. Programs are supported financially through donations ( n = 10, 34%), cancer agencies or hospitals ( n = 7, 24%), private foundations ( n = 6, 21%), and public funds ( n = 3, 10%). Nearly two thirds of the programs maintain a research ( n = 18, 62%) or evaluation ( n = 15, 52%) program.
OK, it’s pretty dry stuff, but it indicates an unfortunately robust integrative medicine presence in oncology, and not just in the U.S. The disturbing aspect of this article is not so much the data contained in the abstract, but the commentary, which buys into every trope, exaggeration, and bit of spin used to sell “integrative oncology” to academics and thence to the masses. For instance:
The goals of integrative oncology are to reduce the side effects of conventional treatment, to improve cancer symptoms, to enhance emotional health, to improve quality of life, and sometimes to enhance the effect of conventional treatments [6–8]. Sagar and Leis describe integrative oncology as both a science and a philosophy that recognizes the complexity of care for cancer patients and that provides a multitude of evidence-based approaches to accompany conventional therapies and to facilitate health [9].
Uh, no. Integrative oncology might proclaim those goals, but to call “integrative oncology” science reveals a gross ignorance of what actually falls under its rubric, which can include a hodge-podge that ranges from the seemingly reasonable and science-based (such as diet, exercise, and other lifestyle interventions), to interventions that are at best highly questionable (such as acupuncture), to interventions that are based on nothing more than magical thinking (homeopathy, reiki). In fact, Figure 2 shows the frequency of different interventions found in integrative oncology programs. It’s a depressing figure to look at, as nearly half the programs offer reflexology and reiki, while 20% of programs offer homeopathy. I kid you not. Then there are tables listing the various programs, and they’re even more depressing to look at. Lots of big names are there, including Memorial Sloan-Kettering and M.D. Anderson, which are listed side-by-side with the woo-peddling Cancer Treatment Centers of America as though they were equivalent.
One notes also that these programs seem averse to doing something that pretty much every science-based program normally does: measure patient outcomes as a means of improving the program’s offerings:
Half the programs ( n = 16, 55%) in our sample reported consistently measuring patient outcomes as a means to evaluate the program. Of the remaining programs, 2 (7%) specifically reported not conducting program evaluations, and 11 (38%) were silent on that issue. A range of outcomes are assessed across the programs, including quality of life, cancer- and cancer treatment–related symptoms, well-being, survival, patient-identified concerns and benefits, and descriptions of patient experiences within the program. Some programs rely on researcher-developed questionnaires to assess patient outcomes; others rely on standardized measures. Most commonly, a baseline assessment is made when a patient is first referred to the program, with follow-up occurring after treatment or after a predetermined amount of time.
For evaluation purposes, 3 programs (10%) reported collecting data other than patient outcome data, including clinic volume, therapies used, reasons for referral, financial assistance requests, and client feedback on aspects of the program they liked or would like to see changed. Results of the evaluation programs are used to improve the treatment approach or to develop a case for expansion of the program; they are sometimes published in academic journals or presented at scientific conferences.
One would think, wouldn’t one, that it would be closer to 100% of these programs measuring patient outcomes. That’s what real cancer programs do: Engage in continual quality improvement, examining their outcomes and figuring out how to improve them. For instance, I’m involved in state- and nation-wide quality improvement initiatives for cancer care in general, and for breast cancer care specifically, in which our cancer center tracks adherence to evidence-based guidelines and tries to improve it. Of course, my cancer center is also involved in cutting-edge research, but it’s understood that not all cancer centers can do that. Most cancer centers will be focused on providing the best clinical care to cancer patients that they can.
Be that as it may, another telling indication of where these “integrative oncology” centers are coming from can be found in this passage:
The decision to offer specific complementary therapies is most commonly made based on evidence ( n = 12, 41%) and patient demand ( n = 10, 34%). Other reasons include clinical experience ( n = 3, 10%), recommendation from a conventional health care practitioner ( n = 2, 7%), recommendation from a complementary health care practitioner ( n = 1, 3%), availability of practitioners ( n = 1, 3%), and the ability to easily integrate a therapy into a hospital setting ( n = 1, 3%). The stated goals of all the included integrative oncology programs were closely aligned, collectively identifying common principles within the field, such as “whole-person,” “patient-centred,” “collaborative,” “empowerment,” and “evidence-based.” Further, each of the included programs had framed their goals in terms of providing high-quality supportive care alongside, and not in place of, conventional care.
Only 41% cited evidence? It should be 100%! Why spend the money to add a program to your cancer center’s portfolio of clinical programs if you don’t believe that there’s compelling evidence that it would make cancer care better and result in better outcomes for patients? One thing that did surprise me, though, is that only 34% cited patient demand. As I’ve written before with respect to the Bravewell Consortium and the Samueli Institute, it’s usually much higher, like 85%.
Of course, the growth of “integrative oncology” is driven far more by perceived patient demand than it is by science. Same as it ever was. This review provides only more evidence of the intellectual bankruptcy at the heart of this emerging “specialty.” These days, it is quackademic medicine triumphant.