Several years ago, Harriet Hall coined a term that is most apt: Tooth fairy science. The term refers to clinical trials and basic science performed on fantasy. More specifically, it refers to doing research on a phenomenon before it has been scientifically established that the phenomenon exists. Harriet put it this way:
You could measure how much money the Tooth Fairy leaves under the pillow, whether she leaves more cash for the first or last tooth, whether the payoff is greater if you leave the tooth in a plastic baggie versus wrapped in Kleenex. You can get all kinds of good data that is reproducible and statistically significant. Yes, you have learned something. But you haven’t learned what you think you’ve learned, because you haven’t bothered to establish whether the Tooth Fairy really exists.
There’s a lot of tooth fairy science out there right now, and it’s been increasing in quantity over the last two decades, ever since the rise of so-called “complementary and alternative medicine” (CAM), now rebranded as “integrative medicine.” “Energy healing,” acupuncture, homeopathy, craniosacral therapy, reflexology, even faith healing: there’s no pseudoscience too ridiculous to be excluded from pointless clinical trials. What all these clinical trials have in common is that they are tooth fairy science. They study a phenomenon without its ever having been established that the phenomenon actually exists. Worse, because of the vagaries of the clinical trial process, bias, and even just the random noise in clinical trial results that produces seemingly positive trials by chance alone, advocates of these pseudoscientific treatments can always point to evidence that their treatment “works.” The overall body of existing research on a treatment like homeopathy is negative, but homeopaths can always cherry pick individual studies and sound convincing doing it.
Here’s another one that I’ve become aware of. It’s a couple of months old, but better late than never. Unfortunately, it was funded by the American Cancer Society, which really should know better, and touted on its website under the headline “Acupressure May Ease Breast Cancer-Related Pain, Fatigue.” Yes, it’s a study of auricular point acupressure (APA) in breast cancer patients, and it’s based on prescientific ideas with no basis in anatomy and physiology:
APA therapy is a form of traditional Chinese medicine (TCM) based on a concept called the meridian theory. It proposes that how you feel is governed by the flow of energy, or qi, through a network of invisible pathways that connect different organs in the body. Specific points on the ear correspond to specific areas of the brain, and these areas have a reflex connection with specific parts of the body. Stimulating the ear points can signal the brain to prompt reactions in the body to relieve symptoms, such as breast cancer-related pain.
“We have all the points on the ear that correspond to our body parts,” says Yeh. “That means I can always find a point on the ear to deliver treatment.”
Here we go again. Qi is a vitalistic concept; there is no such thing. Science has never been able to detect it, nor has any acupuncturist or practitioner of traditional Chinese medicine (TCM) been able to demonstrate that she can detect or manipulate it. Take acupuncture, for example, which is based on the fantasy that sticking thin needles into “meridians” through which qi supposedly flows will “unblock” that flow, with healing effect. Studies have consistently shown that it doesn’t matter where you stick the needles (thus invalidating meridians) and that, in fact, it doesn’t even matter whether you actually stick the needles in at all. The result is the same and can be explained by placebo effects. And no, there are not points on the ear that correspond to different body parts; this is nothing more than a variation on reflexology.
So what about the study itself? Let’s take a look. Before I get to the study design itself, I can’t resist pointing out a particularly silly bit of pseudoscience in the introduction of the article:
Auricular point acupressure (APA) involves attaching a few very small plant seeds (eg, Vaccaria segetalis) with a small amount of adhesive tape to the outer ear and ear lobe of an individual to treat symptoms (eg, pain) throughout the body. Auricular point acupressure is a well-established treatment strategy in traditional Chinese medicine (TCM). In TCM, particular points on the ear are related to all parts of the human body, including each of the internal organs, and all meridians have reference points on the ear. In 1972, Dr Paul Nogier, a French neurosurgeon, retheorized that the outer ear represents an inverted fetus within the womb and therefore provides the acupressure points that correspond to all parts of the human body, including the internal organs. Nogier’s mapping and distribution of these specific auricular points—or acupoints—on the outer ear have since been widely used by therapists worldwide. Moreover, the World Health Organization considers auricular medicine as a form of microacupuncture that has therapeutic effects on the entire body.
The ear represents an “inverted fetus”? Give me a break. Acupuncturists will believe anything. Or so it would seem. One wonders if they’ll do a clinical trial based on meridians mapping to the butt. Why not? Others have believed this enough to be perfectly willing to accept an abstract for presentation. Maybe they used this map:
The first thing I noticed about this trial is how small it was. It involved only 31 patients, who were divided into two groups. One group received what was described as “active APA,” “featuring acupoints related to symptoms—seeds taped onto the designated acupoints for pain, fatigue, and sleep.” How were these acupoints selected? This you need to read:
The Chinese Standard Ear-Acupoints Chart was used as a guide to locate the active ear points.34 A systematic auricular diagnostic procedure was used to identify reactive acupoints for treatment.35 Identification of acupoints includes 3 steps: (1) query the participants about where they were experiencing pain in the body; (2) visually inspect the ear to see if there is any discoloration or deformity on the auricle; (3) utilize the electronic point finder to identify acupoints. The electronic point finder used in this study, manufactured by Auricular Medicine and International Research and Training Center (Hooner, Alabama), measures auricular cutaneous resistance to identify ear acupoints. In most cases, auricular acupoints on both ears were identified for treatment; however, if the participant’s pain was located on 1 side of the body, then only that side was tested and treated. The number of points treated and their specific locations on the ears of each patient varied slightly because each patient experienced pain in different body locations and the different pain projected onto different corresponding points according to somatic topography. Between 8 and 12 total acupoints were used for each participant.
Here we go again with more tooth fairy science. Once again, it has not been established that these auricular acupoints actually exist, much less that this woo machine can detect them by measuring auricular cutaneous resistance.
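To return to the trial’s size for a moment: a back-of-the-envelope power calculation shows just how little a 31-patient trial can reliably detect. This is a sketch under assumptions I’m supplying for illustration (a two-sample comparison of means, roughly 15 patients per arm, a “moderate” standardized effect size of d = 0.5, and a normal approximation), not numbers taken from the paper:

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_sample_power(d: float, n_per_group: int) -> float:
    """Approximate power of a two-sided, two-sample z-test at alpha = 0.05.

    d: standardized effect size (Cohen's d); n_per_group: patients per arm.
    Normal approximation; fine for a back-of-the-envelope estimate.
    """
    z_crit = 1.959964  # critical z for alpha = 0.05, two-sided
    noncentrality = d * sqrt(n_per_group / 2.0)
    return normal_cdf(noncentrality - z_crit)

# Illustrative assumptions: ~15 patients per arm, moderate effect d = 0.5.
power = two_sample_power(0.5, 15)
print(f"Power: {power:.0%}")  # well under the conventional 80% target
```

Under those (hypothetical) assumptions, the power comes out below 30%, meaning a real moderate-sized effect would more likely than not be missed, and any “significant” findings that do emerge from such a small trial are disproportionately likely to be flukes.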
What about the controls? Basically, for the controls investigators used what they described as acupoints unrelated to the patients’ symptoms. Specifically:
Control participants had Vaccaria seeds taped onto the stomach, mouth, duodenum, and eye acupoints that were not related to the symptom cluster of pain, fatigue, and sleep disturbance.
I suppose that’s as good a control as any. Yes, I’m being sarcastic. I find it hard to restrain my…Insolence…when reading a methods section like this. In any event, the seeds were stuck to the ear acupoints with surgical tape, and the subjects were instructed to apply pressure to the seeds with their thumb and forefinger three times per day for three minutes each time, even if they were not experiencing any symptoms.
One thing leapt out at me right away. Well, maybe that’s the wrong word. One thing was conspicuous by its absence. I looked and looked but didn’t find it. What am I referring to? I didn’t find any mention of a blinding procedure. Not surprisingly given that, there was no mention of assessing the adequacy of blinding. (I suppose that’s rather hard to do if there’s no blinding.) So I have to assume that this study was not blinded, particularly given this description of the placement of seeds on the acupoints:
During the APA treatment, participants were asked to sit in a comfortable chair in an outpatient clinic. Acupoints on each ear were identified using both an electronic acupoint locator and systematic auricular strategies, which include visual inspection (ie, identifying palpation on the ear) and probing for tenderness. The acupoint locator has 2 probes: one was held by the participant, and the other was used by the PI to locate the acupoints. The locator makes a sound when the probe makes contact with acupoints corresponding to (1) particular target symptoms and/or (2) pain in particular parts of the body. When the locator sounded, participants were asked if they were experiencing pain in the particular part of the body corresponding to that acupoint or asked to describe the symptom they were experiencing. After acupoints were identified, the PI placed seeds on the acupoints for each participant using tape; this procedure took 5 to 10 minutes. The PI demonstrated the technique for applying pressure to the acupoints with the thumb and index finger and then asked the participants to perform the technique to verify that they understood the technique.
The ultimate machine that goes ping! In any case, although the paper never says so explicitly, it certainly sounds as though the PI (principal investigator, for those of you who don’t know the lingo) knew who was in the control group and who was receiving “real” APA right from the beginning. If that’s the case, then it almost doesn’t matter what the results were; they’re meaningless. Unblinded acupuncture studies are basically even more worthless than the usual acupuncture study. Oh, what the heck? I’ve written this much already. I might as well finish it.
Participants recorded their symptoms at baseline using the M.D. Anderson Symptom Inventory (MDASI), which uses a 0-10 numerical rating scale, with 0 meaning “not present” and 10 meaning “as bad as you can imagine.” It’s a widely used scale to measure cancer symptoms. They also completed the World Health Organization Quality of Life questionnaire (WHOQOL-BREF), a 26-item instrument used to assess general quality of life in terms of physical, psychological, social, and environmental factors. These were assessed at the end of the trial period as well, and differences in various measures examined. What the investigators found is this:
After 4 weeks of APA, participants in the active APA treatment had reported a reduction of 71% in pain, 44% in fatigue, 31% in sleep disturbance, and 61% in interference with daily activities. The control APA group experienced some moderate reduction in these symptoms.
That’s from the abstract. Reading that, my first question was this: Why didn’t they provide the numbers for the control group as well? “Some moderate reduction”? What does that even mean? There’s also a key bit of information in the actual text that tells a lot more. For one thing, the data are presented in an enormous chart with differences between baseline and end of intervention for the experimental group, culminating with columns showing the difference between APA and control. It’s a very difficult-to-read method of presenting the findings, and I was puzzled as to why they didn’t just directly show changes in the various measures in the controls next to the changes observed in the APA group in graphical form of some sort. Be that as it may, here’s the key:
After the 4-week APA treatment, the mean scores for pain, fatigue, sleep, lack of appetite, distress, dry mouth, sadness, and numbness displayed decreases that were clinically significant (ie, defined as symptom decreases of ≥30%; data available upon request). In addition, Table 5 lists outcomes of interferences and quality of life. Participants in the active APA group had higher improvement of interferences and better quality of life than did those in the control APA group; however, the difference of the improvement was not statistically significant.
So basically, most of the differences were not statistically significant. Worse, there are 18 different measures being examined over two time periods each, baseline to the end of the intervention and baseline to the one-month follow-up. That’s 36 comparisons. I see no evidence that any correction was made for multiple comparisons, although I could be wrong. (Again, the methods section of this paper really stinks in terms of providing the key details needed to evaluate the study.) So is it a surprise that a handful of measures are borderline statistically significant? No, it is not.
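The arithmetic behind that complaint is worth making explicit. If you run 36 tests at the usual α = 0.05 threshold when no real effect exists, the chance of at least one “significant” result by luck alone is roughly 84%. (This little calculation assumes the tests are independent, which correlated symptom measures won’t strictly be, but the qualitative point stands either way.)

```python
# Familywise error rate for m independent tests at significance level alpha:
# P(at least one false positive) = 1 - (1 - alpha)^m
alpha = 0.05
m = 36  # 18 measures x 2 time windows, as described above

p_any_false_positive = 1 - (1 - alpha) ** m
print(f"Chance of >=1 spurious 'significant' result: {p_any_false_positive:.1%}")

# A simple Bonferroni correction would instead demand p < alpha / m per test:
bonferroni_threshold = alpha / m
print(f"Bonferroni per-test threshold: {bonferroni_threshold:.5f}")
```

In other words, with 36 uncorrected comparisons, a few borderline “hits” are the expected outcome even if APA does nothing at all; a Bonferroni-style correction would require each p-value to clear roughly 0.0014 before anyone got excited.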
Basically, between the lack of blinding and the apparent lack of correction for multiple comparisons, this is a negative study. Yet it’s being promoted as a positive study and used as the basis for further grant applications. Such is the way it works with the tooth fairy science that is “integrative medicine.”