So I finally made it to the Society of Surgical Oncology Annual Symposium. Thanks to the snowstorm that apparently wasn’t (at least, I don’t see any snow around), my arrival was delayed by a day, as all flights to the Washington, DC area were canceled on Wednesday. But I did finally get here, and, although I missed most of the first day, I did at least get to see a talk given by a friend of mine late in the day and I had a chance to hang out for a while with an old friend.
I also got the chance, after I got back to my hotel room, to be highly amused by a “response” to criticism of an acupuncture meta-analysis published last year, written by the author of that meta-analysis. Part of my amusement came from the whininess on display. More importantly (because it is always about me), what really amused me is that Orac got a whole paragraph’s worth of mention in the article. I kid you not. These acupuncture apologists were so upset at criticism of their meta-analysis in the skeptical blogosphere that they actually wrote a response to it, and that response was actually published in the “peer-reviewed” literature! I use scare quotes because the journal in which the response appeared isn’t exactly what I’d call a decent journal. That’s true by definition, because it’s an acupuncture journal. Specifically, it’s Acupuncture in Medicine, which is published by BMJ. What BMJ is doing publishing an acupuncture journal, I don’t know. When puzzled about a company’s motivation, look for the profit potential, I guess. Be that as it may, the journal exists, and it appears to exist for the amusement of Orac.
The article, by Andrew J. Vickers of the Department of Epidemiology and Biostatistics at Memorial Sloan-Kettering Cancer Center, is fresh off the press in the March 2013 issue and entitled “Responses to the Acupuncture Trialists’ Collaboration individual patient data meta-analysis.” Right from the abstract, you can tell that Dr. Vickers is pissed. Real pissed. In fact, I can’t say that I’ve ever seen an abstract that reads quite like it, and, because the journal is behind a paywall and you can’t read the article for yourself beyond what I quote of it, I think the abstract is worth quoting in its entirety:
In September 2012 the Acupuncture Trialists’ Collaboration published the results of an individual patient data meta-analysis of almost 18,000 patients in high quality randomised trials. The results favoured acupuncture. Although there was little argument about the findings in the scientific press, a controversy played out in blog posts and the lay press. This controversy was characterised by ad hominem remarks, anonymous criticism, phony expertise and the use of opinion to contradict data, predominantly by self-proclaimed sceptics. There was a near complete absence of substantive scientific critique. The lack of any reasoned debate about the main findings of the Acupuncture Trialists’ Collaboration paper underlines the fact that mainstream science has moved on from the intellectual sterility and ad hominem attacks that characterise the sceptics’ movement.
Methinks Dr. Vickers doth protest too much. “Little argument about the findings in the scientific press”? That’s probably because the scientific press didn’t pay much attention to the meta-analysis, not because it agreed with the results. Also, at only six months after publication, it’s rather hard to say how much the scientific press agrees or disagrees; you can’t tell that until there has been time for some citations of Vickers’ work to see how it is used by other scientists. The lay press, however, ate it up, which, of course, it always does whenever there is a study that purports to find that some form of “alternative medicine” allegedly works. I’m also amused by the annoyance Dr. Vickers has at “anonymous criticism.” I’m amused because my identity is one of the worst-kept secrets in the medical blogosphere, so much so that if he had just clicked on a certain link on this blog he would have found out who I am. Even more amusing, so concerned about my anonymity am I that I published almost exactly the same post under my real name on my other blog under a very similar title. In fact, if you Google the title, both versions of the post will pop up. Sadly, so will a bunch of stolen versions of the post, in which link dumps basically steal content wholesale. Very annoying, but a side issue compared to Dr. Vickers’ apparent laziness or lack of Google skills (take your pick). Mad Google skillz, he haz dem not, as an LOL Cat might say.
In particular, he was annoyed at my asking “Can we finally just say that acupuncture is nothing more than an elaborate placebo? Can we?” and by a post by askeptic entitled Acupuncture study reveals new desperation on the part of NCCAM. Oddly enough, he was also annoyed by a post on the study by Steve Novella, which just goes to show that being a nice guy will still get you lumped in with us “militant” skeptics if the “victim” of one of your deconstructions is annoyed enough. I mean, seriously. How can you compare Steve’s nearly always calm deconstruction of such papers to a work of Orac-style Insolence? The dude must have some seriously thin skin. One wonders how he reacts when someone asks him a critical question at a scientific conference after he presents his work.
The other thing that people like Dr. Vickers seem not to understand is just what an ad hominem attack is. It’s not simply being critical. It’s arguing against an argument based on the person making the argument rather than the argument itself. Were I to say that Dr. Vickers was wrong because he’s an acupuncturist, that would be an ad hominem. I didn’t do that. I deconstructed why I thought he was wrong based on the way the meta-analysis was done, how it was written, and the methodological “issues” I found. Let’s just put it this way. Calling someone stupid (which I did not do) is not an ad hominem attack. Saying that someone is wrong because he is stupid is an ad hominem attack.
Let’s see what else is bothering Dr. Vickers. Here’s a hint. These are not ad hominems:
In a typical blog post, the study authors were accused of displaying ‘considerable pro-acupuncture bias’;3 a comment on another blog described senior author Klaus Linde as a ‘homeopath’.4 One poster, who claims that the study shows that the ‘desperation of NCCAM’ and the ‘gullibility’ of the media, opined that ‘Dr Vickers … needs to go back and take an introductory course on statistics’ and warned ‘like loaded guns, some people shouldn’t be left alone with a statistical software program’.5
Oh. My. God! A commenter actually had the temerity to mention that Klaus Linde is a homeopath! Note that I didn’t mention a thing about the study’s connection to homeopathy in the post itself. At this point it would appear useful to mention another thing. Not all ad hominems are inappropriate. If a study author is a homeopath, that tells you something right there about the source. It tells you that at least one of the authors embraces pseudoscience with a big, enthusiastic bear hug. It’s a piece of the puzzle that is entirely worth mentioning. If it’s the “meat” of your argument, then your argument is basically a logical fallacy, an ad hominem, but it is not inappropriate to mention information that reflects on the reliability of the source in the context of a broader criticism of a meta-analysis like this.
Here’s another hint. Reading a paper and finding that it shows “considerable pro-acupuncture bias” is not an ad hominem attack either. It’s a statement of Steve’s opinion, a conclusion based upon reading the article. Here’s the full quote in more context:
I took a close look at the study and find that the authors display considerable pro-acupuncture bias in their analysis and discussion. They clearly want acupuncture to work. That aside, the data are simply not compelling, and the authors, in my opinion, grossly overcall the results, which are compatible with the conclusion that there are no specific effects to acupuncture beyond placebo.
Sorry, Dr. Vickers. That’s just not an ad hominem attack. It’s a statement of opinion based on analysis. Steve says he thinks the study shows pro-acupuncture bias, and then he describes why, both in the rest of the paragraph and the rest of the article. You, sir, are a whiny baby whose spine could use a bit of stiffening. No, that’s not an ad hominem attack either. It’s a statement of my opinion based on your commentary.
Even the seemingly vicious bit about needing remedial statistics education is not an ad hominem attack. It’s sarcastic and insulting, yes, but the order is wrong for an ad hominem attack. It would be an ad hominem attack if askeptic had said that Vickers was wrong because “he needs to go back and take an introductory course on statistics,” instead of concluding that Vickers “needs to go back and take an introductory course on statistics” because of the content of his meta-analysis. Thus endeth the lesson for Dr. Vickers. I hope he takes it to heart. In the meantime, what else is eating at Dr. Vickers?
A lot, it turns out:
One post made a direct accusation of statistical misconduct: ‘The whole thing looks like a number the authors pulled out of their nether regions and then plugged into their meta-analysis software in order to see if it would affect anything.’4
Yes, that was me. And it did look as though that’s what the authors did. But let’s put the whole thing in context. I hate quoting extensively from my own work when I can just link to it, but I think it’s appropriate here:
Finally, there’s the issue of publication bias. Publication bias, as most of my readers probably know, is the tendency for published studies to be more likely to be positive than studies that remain unpublished. That’s because scientists don’t like publishing negative studies (they seem like “failures”) and journals don’t like publishing them either (because editors don’t consider them very interesting). That’s why it’s essential that a meta-analysis include an analysis looking for publication bias. One very common way of doing this is a funnel plot. Yet there is no funnel plot included that I could find (I couldn’t get access to the supplemental material because I had to have someone e-mail the study to me and forgot to ask). Instead, they talk about looking at effect sizes in small studies and large studies and then calculate that “only if there were 47 unpublished RCTs with n = 100 patients showing an advantage to sham of 0.25SD would the difference between acupuncture and sham lose significance.” How they calculated this number is not described. I must say, I’ve never seen this sort of analysis in a meta-analysis before, which is why it stuck out like the proverbial sore thumb, as did the lack of a description of how this estimate was calculated. Modeling? Why 47 unpublished RCTs of 100 subjects and not a smaller number of larger RCTs? The whole thing looks like a number the authors pulled out of their nether regions and then plugged into their meta-analysis software in order to see if it would affect anything. In fact, I have a sneaking suspicion that they probably tried a lot of combinations in order to find the one that would make it look as though it would take a whole boatload of studies going the other way to eliminate the statistical significance of their results. Is it unfair of me to say so? Well, the authors have no one to blame but themselves, and if I missed the description of how that was calculated I’ll take my lumps.
You’ll see that there is a lot more there than Dr. Vickers acknowledges. One notes that Dr. Vickers accuses me of not having read the paper. Here and now I say to Dr. Vickers: I read the paper. Oh, did I read the paper! I suffered mightily reading the paper! What I didn’t read was the supplemental material because I didn’t have access to it then. I do now and have read it over. It doesn’t change my conclusions. The authors do have no one to blame but themselves if they gave the impression that they fiddled around with the statistics software. Because they didn’t describe in detail how they calculated that statistic, it came across as the statistical equivalent of hand waving.
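To see why the “47 unpublished RCTs” figure invites exactly this kind of suspicion, it helps to sketch what such a calculation might look like. The following is a toy fixed-effect sensitivity analysis of my own devising, not the Collaboration’s prespecified method; the summary effect, standard error, and variance approximation are all hypothetical numbers chosen for illustration:

```python
import math

def pooled_z(effects):
    """Fixed-effect inverse-variance pooling; returns the z statistic.

    `effects` is a list of (effect_size, variance) tuples.
    """
    weights = [1.0 / var for _, var in effects]
    pooled = sum(w * e for (e, _), w in zip(effects, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled / se

# Hypothetical published evidence collapsed into one summary estimate:
# effect 0.20 SD with standard error 0.03 (variance 0.0009).
published = [(0.20, 0.03 ** 2)]

# Each hypothetical unpublished trial: n = 100 (50 per arm), effect
# -0.25 SD favouring sham. The variance of a standardized mean
# difference with equal arms is roughly 4/n, i.e. about 0.04.
unpublished_trial = (-0.25, 0.04)

# Keep adding unpublished null trials until significance is lost.
k = 0
while pooled_z(published + [unpublished_trial] * k) >= 1.96:
    k += 1
print(f"{k} unpublished null trials would erase significance")
```

The point of the exercise is that the answer depends entirely on the inputs: change the assumed summary effect, trial size, or sham advantage and you get a very different count, which is precisely why the calculation needed to be described in the paper.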
One of the big criticisms that many of us made was that the trials used in the analysis were not blinded. Vickers’ response to that criticism is—shall we say—less than convincing. He argues that Ernst is wrong to criticize the meta-analysis for including unblinded studies because post-study assessments in many of the blinded trials showed that the blinding was adequate. That’s all well and good, but it doesn’t mean that the unblinded studies weren’t prone to the bias that results from inadequate blinding. Basically, Vickers seems to think that lack of blinding introduces bias only at the patient level, when in fact it also works at the researcher level, introducing observation bias. That this is true is so well accepted in general that it is rather shocking that Vickers would even argue against it. His evidence? A single study comparing laser acupuncture with “real” acupuncture.
Among the critics, Orac gets a full paragraph, on top of the unhappiness quoted earlier:
In an extensive critique published anonymously online, ‘Orac’ makes several points.4 The collaboration was accused of ‘comparing apples and oranges’ due to our ‘mixing studies that compare acupuncture to no treatment [with those that compare acupuncture] to sham treatment’. This is false: comparisons between acupuncture versus sham and acupuncture versus no acupuncture were kept entirely separate. With respect to our analysis for publication bias, Orac asks ‘Why 47 unpublished RCTs of 100 subjects and not a smaller number of larger RCTs?’ and then accuses us of pulling numbers out of our ‘nether regions’. The answer to the question ‘why … RCTs of 100 subjects?’ is that it was prespecified in the protocol which was previously published and referenced in the paper.10 Orac claims that our failure to report I2 was ‘sloppy’ (in fact, we chose not to cite this statistic because we believe it to be invalid) and criticises the lack of a funnel plot (highly underpowered for the number of trials in our analysis). Orac also complains about our characterisation of the study results, stating that ‘it’s uncommon to have a 50% reduction in pain scores’. But in fact we chose 50% precisely because it was close to what was reported in the trials.11
One notes that Vickers concentrates on what are actually rather minor parts of my original criticism. For instance, what did Vickers do if not mix studies with different methodologies? Sure, the comparisons of the two different control types were kept separate, but that doesn’t change the problematic nature of building a meta-analysis from studies with different controls. As for the issue of the “47 trials,” as I said, I’d take my lumps if I missed something. I didn’t miss anything. If Vickers had simply said in the paper what he said above, I probably would have still complained about the equivalent of statistical handwaving, but I would probably not have wondered whether they fiddled with the software to find just the right number. I’m not a mind reader. If a clarification, particularly for a calculation as unusual as this one, is not in the part of the paper in which the calculation is reported, it’s the authors’ fault if readers wonder. Finally, one notes that Dr. Vickers is trying to have it both ways. He says that the paper followed PRISMA methodology for high-quality meta-analyses, but then he dismisses a key statistic that PRISMA suggests reporting.
Overall, Dr. Vickers sidesteps what is the key criticism of his analysis, one that every single critic he lambastes makes note of, namely that the effect size is so small that it’s almost certainly not clinically significant. Let’s briefly revisit that argument. Vickers et al try to argue that a change of 5 on a 0-100 pain scale, a subjective scale, is noticeable by patients. As I pointed out, it’s probably not. In fact, in light of Vickers’ “response,” it is probably worth revisiting the concept of “minimally clinically important difference” (MCID), which is defined as “the smallest difference in score in the domain of interest which patients perceive as beneficial and which would mandate…a change in the patient’s management.” A recent review looking at minimal detectable and clinically relevant changes in pain scores in arthritis found a range in absolute terms between 6.8 and 19.9. Tubach et al assessed only the improvement aspect of the MCID and defined the minimal clinically important improvement (MCII) as the minimum improvement in the pain score reported by 75% of osteoarthritis patients ranking their response as “good” and reported that the MCII was -15.3 for hip osteoarthritis and -19.9 for knee osteoarthritis. A difference of -5 (the difference between sham acupuncture and “real” acupuncture found in the Vickers meta-analysis) is not clinically significant.
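To make the comparison concrete, here is a trivial sketch of the MCID check described above. The function and dictionary names are mine; the threshold values are the MCII figures from Tubach et al cited in the text, and -5 is the acupuncture-versus-sham difference from the Vickers meta-analysis:

```python
# MCII thresholds (absolute change on a 0-100 pain scale) from Tubach et al.
MCII = {"hip_osteoarthritis": 15.3, "knee_osteoarthritis": 19.9}

def is_clinically_important(improvement, condition):
    """True if the magnitude of improvement meets the MCII threshold."""
    return abs(improvement) >= MCII[condition]

# The -5 difference between "real" and sham acupuncture falls well short
# of the threshold for either condition.
for condition in MCII:
    print(condition, is_clinically_important(-5, condition))
```

Nothing about this calculation is subtle, which is rather the point: by the field’s own published thresholds, the observed difference does not clear the bar.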
As I pointed out, too, Vickers et al labored mightily to try to convince readers that this tiny effect, if it exists apart from bias, is not just statistically significant but clinically significant. They failed, and it’s no surprise that Vickers doesn’t even address this issue, except obliquely, in his “response.” Instead, he concentrated on Edzard Ernst’s assessment and claimed that Ernst was trying to dismiss the effect by saying that bias would eliminate it. This is a bit of a straw man. The argument based on the small change in pain score is not primarily that any bias could easily make it disappear; the argument is that the difference, if it actually exists, is so small as to be clinically insignificant by Vickers’ own results and arguments. It is well below the MCID.
Having failed to make substantive rebuttals to this criticism, which would still be a valid criticism even if Vickers’ responses to all the other more peripheral criticisms were absolutely valid, Vickers is left with vitriol, which I quote extensively because I know most of you don’t have access to the paper:
The Acupuncture Trialists’ Collaboration meta-analysis was published during the presidential campaigns of 2012 and it is remarkable how closely the debate about our paper mirrored the election. Contemporary politics now seems characterised by anonymous blog posts, press releases, phony expertise (how many political commentators really understand how health insurance works?), ad hominem attacks and the attempt to fight data with opinion, something that culminated in the bizarre spectacle of a leading Republican denying live on TV that Obama had won.
After a paragraph on what Vickers thinks would be appropriate debates in acupuncture, he finishes with a flourish:
However, these are not debates that many self appointed ‘acupuncture sceptics’ want to have, appearing to prefer instead the comfort of nay-saying and the thrill of adversarial campaigning. It is far less work to make a comment about a researcher’s ‘nether regions’ than to spend the time getting to grips with a complex paper, and it is clearly more fun to make a cutting remark about another scientist’s supposed statistical cluelessness than, say, to write a thoughtful critique of different approaches to handling the problem of publication bias. The lack of any reasoned debate about the main findings of the Acupuncture Trialists’ Collaboration paper underlines that mainstream science has moved on from the intellectual sterility and ad hominem attacks that characterise the sceptics’ movement.
I do so love being compared to FOX News analysts and political flacks like Karl Rove. Notice, also, how Vickers tries to dismiss criticism by claiming that his critics didn’t spend enough time trying to understand him. I can’t speak for Steve Novella or askeptic, but I’ve spent probably far more time than I should trying to come to grips with papers like that of Vickers et al, hours and hours, all for my own education and the ability to educate and entertain my readers. I suppose it’s easier to dismiss criticism than it is to spend the time coming to grips with the actual criticisms. I still don’t think that our friend Dr. Vickers knows what the MCID is, why it’s relevant to his meta-analysis, or how obvious it was that he was doing contortions of language, logic, and science to try to convince you that a statistically significant difference in pain scores so ephemeral that even a little unaccounted-for bias would make it disappear into placebo is nonetheless a clinically significant difference. Dr. Vickers should take a long look in the mirror, as he appears to be projecting his own shortcomings onto the “sceptics’ movement.”