These things always seem to happen on Friday. Well, not really. It’s probably just confirmation bias, but it seems that a lot of things I’d like to blog about happen on a Friday. That leaves me the choice of either breaking my unofficial rule not to blog on the weekend or waiting until Monday, when the news tidbit might not be quite so…timely anymore. This time around, I decided to wait because, well, we’re getting into grant season again, and I could use the time to work on grants.
It is, however, good news. Very good news indeed. Remember Brian Hooker’s absolutely incompetent “reanalysis” of a paper from ten years ago, DeStefano et al, that failed to find a correlation between age at MMR vaccination and autism? I’m not going to go through just what’s wrong with Hooker’s “statistical” analysis, at least not in detail. If you want the details, I provided them a month and a half ago. In brief, Hooker’s biggest foul-up was to analyze case-control data as though they came from a cohort study. That was a design mistake that doomed his study from the very beginning, guaranteeing that its results, whatever they were, would have no validity whatsoever. The next big error is that Hooker used inappropriate statistical tests and, as I like to say, tortured the data until they confessed what he wanted them to confess, namely a correlation between MMR and autism. Even then, all he could get the data to confess to make the pain stop was an elevated risk of autism in African-American boys who received the MMR between certain ages. As I put it at the time, Hooker basically confirmed the results of DeStefano et al and proved Wakefield wrong once again, given that his result supposedly showed a correlation between the MMR vaccine and autism in no one other than African-American boys, and even that result was very questionable indeed. I also wondered why the antivaccine movement was so quick to jump on this “reanalysis,” given that the vast majority of its leaders are not African-American and Hooker’s reanalysis, like the original DeStefano et al analysis, found no correlation between MMR and autism for any race other than African-American.
It was a paper so bad that even a brand new journal like Translational Neurodegeneration couldn’t stand behind it. Less than a month after the study was published, the editor posted this message:
This article has been removed from the public domain because of serious concerns about the validity of its conclusions. The journal and publisher believe that its continued availability may not be in the public interest. Definitive editorial action will be pending further investigation.
The Editor and Publisher regretfully retract the article as there were undeclared competing interests on the part of the author which compromised the peer review process. Furthermore, post-publication peer review raised concerns about the validity of the methods and statistical analysis, therefore the Editors no longer have confidence in the soundness of the findings. We apologise to all affected parties for the inconvenience caused.
Ouch. That one’s going to leave a mark. Of course, I would argue that this is one time where the “post-publication peer review” largely consisted of bloggers like myself and several others pointing out the gaping holes in Hooker’s methodology, particularly his incompetent use of statistics. Let’s just put it this way. In this video, Hooker discusses how he reanalyzed the DeStefano et al dataset using a “very, very simple statistical technique” and brags that to him in statistics “simplicity is elegance.” He then follows up by saying that he’s “not really that smart” and therefore “likes easy things rather than much more intellectually challenging things.” So he did the “simplest, most straightforward analysis.”
Here’s a hint: In statistics, the simplest analysis is often not the correct analysis, and, boy, was this the case for Hooker’s reanalysis of the DeStefano et al dataset.
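To see concretely why analyzing case-control data as if they were cohort data is invalid, here’s a minimal Python sketch using entirely made-up numbers (not the actual DeStefano et al data). In a case-control design the investigator fixes the ratio of cases to controls, so any “risk” read off the 2×2 table is an artifact of the sampling; the odds ratio is the valid effect measure, and a cohort-style risk ratio is not.

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Valid effect measure for case-control data: cross-product of the 2x2 table."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

def naive_risk_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Cohort-style risk ratio; meaningless when the case/control split is set by design."""
    risk_exposed = exposed_cases / (exposed_cases + exposed_controls)
    risk_unexposed = unexposed_cases / (unexposed_cases + unexposed_controls)
    return risk_exposed / risk_unexposed

# Hypothetical 2x2 table: the investigator sampled 100 cases and 100 controls.
a, b, c, d = 40, 60, 25, 75  # exposed cases, unexposed cases, exposed controls, unexposed controls

print(odds_ratio(a, b, c, d))        # 2.0  -- unaffected by the sampling ratio
print(naive_risk_ratio(a, b, c, d))  # ~1.38 -- depends on how many controls were sampled

# Doubling the number of controls (same exposure distribution among them) leaves
# the odds ratio untouched but shifts the "risk ratio" -- proof it is an artifact.
print(odds_ratio(a, b, 2 * c, 2 * d))        # still 2.0
print(naive_risk_ratio(a, b, 2 * c, 2 * d))  # ~1.56
```

The counts here are invented purely for illustration; the point is that the “simple” cohort-style calculation changes with an arbitrary design decision, while the odds ratio does not.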
More interesting to me, however, is the other part of the justification for retracting Hooker’s paper: “undeclared competing interests on the part of the author which compromised the peer review process.” I wonder what that means. I can’t be sure, but I can speculate based on my experience on both sides of the peer review process, as a peer reviewer and as an author submitting manuscripts to journals. First, note the specific wording. The editor and publisher are saying that some sort of undeclared conflict of interest somehow compromised the peer-review process. Let’s take a look at the checklist for submissions to this particular journal:
Have you prepared a covering letter for your submission, explaining why we should publish your manuscript and elaborating on any issues relating to our editorial policies detailed in the instructions for authors, and declaring any potential competing interests? This should be provided using the ‘cover letter’ section of the submission process. And do you have the contact details (including email addresses) of at least two potential peer reviewers for your paper, which you will need at the same time? These should be experts in your field of study, who will be able to provide an objective assessment of the manuscript’s quality. Any peer reviewers you suggest should not have recently published with any of the authors of your manuscript and should not be members of the same research institution. You will be asked to select the one or two Editorial Board members whose interests most closely match the subject of your manuscript.
Most journals instruct authors to suggest potential peer reviewers. The editor, of course, is free to accept or reject the suggestions, and an author has no way of knowing whether the editor picked one or more of his suggestions. Obviously, the editor knows that an author is going to suggest reviewers whom he views as sympathetic (or at least not hostile) to his work. Having had to do this many times before, let me tell you that it can be a difficult process, mainly because any potential peer reviewer I can think of is usually someone with whom I’ve worked or even published before. That generally leaves people whom I don’t know and, because some of my work is a bit specialized, not very many of them to choose from.
Let me guess what might have happened. Take a look at Hooker’s recent publication record. Since 2013, he’s published four papers on vaccines and autism. All of them except for his retracted Translational Neurodegeneration paper are co-authored with Mark and David Geier. (I’m sure you remember them, don’t you? Mark and his son David have published a number of truly horrible papers and are advocates of chemical castration as a treatment for autism.) Also in this list are Janet Kern, an RN who used to publish on secretin and nearly all of whose publications are with the Geiers, and Lisa Sykes (an antivaccine activist whom we haven’t seen in a while on this blog but one who is closely associated with the Geiers and is known for harassing critical bloggers).
Now, I have no way of knowing if this is what happened, but if true it would fit what the editor and publisher said about an undisclosed conflict of interest that compromised peer review. My guess is that Hooker probably named one or more of his previous co-authors, such as Mark Geier or Janet Kern, as suggested reviewers, and that the editor used one or more of them. Now, I have no idea whether the editor would have double-checked to see if Hooker had recently published with these suggested reviewers. Editors deal with a lot of manuscripts and might not do PubMed searches to verify whether any of the suggested reviewers have published with the author recently. Basically, it is the honor system for the most part. You just don’t suggest a reviewer for whom reviewing your manuscript would represent an obvious conflict of interest, nor do you agree to review an article for which you have a conflict of interest. Heck, just last month I was asked to review a grant that was very similar to the sort of work I’m doing. I turned it down, because seeing the grant could easily have tipped me off to what a competitor is doing and, as much as I might have tried to be objective, I would have had a very hard time being so. So I wouldn’t review it, and I told the committee that sent it to me why.
Yes, I think something like what I just described above is very likely what happened. Hooker probably picked one or more of his antivaccine cronies as peer reviewers, and the editor, not knowing any better, sent the manuscript to one or more of them to review. We’ll probably never know for sure, although we do know that the editor can’t have sent the manuscript to a statistician to review; it never would have passed muster unless the reviewer was the most incompetent statistician in the world.
Speaking of incompetent statisticians, amusingly our old buddy Jake Crosby is very, very, very unhappy that his good bud and mentor Brian Hooker has been slapped down so publicly and has taken to expressing his rage on his blog. Of course, he gets it wrong. Note how he focuses on the term “undeclared conflicts of interest” but ignores the part about how those undeclared conflicts of interest compromised the peer review process. Watching Jake spar in the comments with people pointing out his flawed reasoning and then watching him defend Hooker’s utterly incompetent reanalysis of case-control data as cohort data is worth a chuckle, though.
When I first heard late Friday that the retraction of this paper had been made official, I was satisfied. I really thought this paper deserved retraction. Of course, as I warned before, retracting this paper will likely feed the conspiracy theorists of the antivaccine movement in a way that almost nothing else could, particularly now that it’s official. On the other hand, I don’t really care much about changing the minds of people like Jake Crosby or the antivaccine zealots at wretched hives of scum and quackery like Age of Autism or The Thinking Moms’ Revolution, because they are such cranks that their minds really can’t be changed. What I do care about is persuading the general public, particularly the fence sitters, and a retraction of a scientific paper sends a powerful message to the public about a study.
Science wins this time. For now.