On the blogging front, I started out this week with a part facetious, part serious, part highly detailed analysis of a new study of interest, the kind you’ve come to know and love (or hate). The study was Price et al, and it was yet another nail in the coffin of the scientifically discredited notion that mercury in vaccines causes autism, a notion whose coffin already had so many nails in it that Price et al probably had a hard time finding even a tiny area of virgin wood into which to pound even the tiny nail of a study published in an impact factor one journal, much less the spike that their study in Pediatrics represented. Yet, pound it in they did, and, if the thimerosal-autism hypothesis weren’t dead, dead, dead, at least from a scientific viewpoint, it’s certainly pining for the fjords now.
But like the pet shop owner in that famous Monty Python sketch, the anti-vaccine movement can’t admit that the parrot is not pining but rather that he’s passed on. Like the parrot in the Monty Python sketch, this hypothesis is no more. It has ceased to be. It’s expired and gone to meet its maker. It’s a stiff. Bereft of life (not to mention scientific support), it’s pushing up the daisies. Its metabolic processes are now history. It’s off the twig. It’s kicked the bucket. It’s shuffled off this mortal coil, run down the curtain and joined the bleedin’ choir invisible.
It is an ex-hypothesis.
Of course, cranks like our friends over at that anti-vaccine crank blog Age of Autism can never let it go. How could they? After all, Generation Rescue, the organization for which AoA is the propaganda blog, once confidently proclaimed that autism is a “misdiagnosis for mercury poisoning” and blamed the mercury in vaccines for it. Even though even AoA has backed away from the mercury-autism ex-hypothesis, it will never, ever, let it go. Oddly enough, though, AoA was not first off the mark with the expected counterattack on Price et al. It did, however, publicize what was the long expected response from Sallie Bernard and SafeMinds. The response itself, predictably, resembles the previously debunked sour grapes response that Sallie Bernard published in response to the precursor study, Thompson et al, a response I discussed extensively when it was released and that was rebutted quite effectively by the authors themselves.
Now it looks like Sallie is back for more. The only thing that surprised me was that it took her nearly four whole days to come up with such a lame response. Given a response so lame, I figured she’d have been ready right off the mark with it on Monday, the day Price et al was published. I mean, come on! Couldn’t Bernard come up with a better way to start off her critique than the pharma shill gambit?
This study was funded by CDC and conducted by several parties with an interest in protecting vaccine use: CDC staff involved in vaccine research and promotion; Abt Associates, a contract research organization whose largest clients include vaccine manufacturers and the CDC’s National Immunization Program; America’s Health Insurance Plans, the trade group for the health insurance industry; and three HMOs which receive substantial funding from vaccine manufacturers to conduct vaccine licensing research.
Lame, lame, lame, particularly given that the funding and conduct of this study is pretty much transparent. If you can’t attack the design, execution, and conclusions effectively, then attack the funding source. I wondered why Bernard decided to lead with the pharma shill gambit. Then I read the rest of the critique, and I wondered no more.
Remember how I pointed out that in some of the measures, there was a small, statistically significant finding that thimerosal in vaccines appeared to be protective against autism? For instance, for exposure from birth to 7 months, the hazard ratio was 0.60 (95% confidence interval: 0.36 – 0.99) and for exposure from birth to 20 months it was 0.60 (95% confidence interval: 0.32 – 0.97). The authors quite properly pointed out that they did not know of any mechanism that could account for such a result, and they most definitely did not state that thimerosal is protective against autism. However, that doesn’t stop SafeMinds from making that result the centerpiece of its criticism.
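For readers not used to parsing hazard ratios, the convention is simple: a result is deemed statistically significant when the 95% confidence interval excludes 1.0 (a hazard ratio of exactly 1.0 meaning no effect either way). Here’s a trivial sketch of that check, using the two intervals quoted above plus a made-up straddling interval for contrast:

```python
def ci_excludes_one(lower: float, upper: float) -> bool:
    """True if a 95% confidence interval excludes the null hazard ratio of 1.0."""
    return upper < 1.0 or lower > 1.0

# The two intervals from Price et al quoted above: both sit entirely
# below 1.0, hence the (nominally significant) "protective" direction.
print(ci_excludes_one(0.36, 0.99))  # True
print(ci_excludes_one(0.32, 0.97))  # True

# A hypothetical interval straddling 1.0 would be consistent with no effect.
print(ci_excludes_one(0.80, 1.20))  # False
```

Note that 0.99 sits just barely below 1.0, which is part of why the authors treated the result so cautiously.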
The stage is set thusly:
There are two primary deficiencies in the study methodology which would lead to the curious finding of a protective, rather than a harmful effect of early thimerosal exposure found in the study. The first deficiency concerns the variables used for stratification and the second concerns the low participation rate leading to sample bias. The stratification scheme would bias the results to the null; the sampling bias would swing the results to show a lower autism rate among those highly exposed. Had these deficiencies been addressed through a better study design, it is equally likely that the results would have in fact shown a harmful effect from early thimerosal exposure.
First off, it is not equally likely that the results would have shown a harmful result from early thimerosal exposure, as I hope you will soon see. Let’s run through some of SafeMinds’ complaints:
The study sample did not allow an examination of an exposed versus an unexposed group, or even a high versus a low exposed group, but rather the study mostly examined the effect of timing of exposure on autism rates. There were virtually no subjects who were unvaccinated and few who were truly less vaccinated; rather, the low exposed group was mostly just late relative to the higher exposed group, ie, those vaccinating on time.
This criticism is so wrong that it’s not even wrong. Ms. Bernard is, in essence, criticizing a case-control study for not being a different kind of study. Here’s how case-control studies work. Basically, you take a population and identify the cases (or a random subset of cases if you can’t examine all of them). Then you look at the rest of the population and randomly select people who do not have the condition you are studying, matching them on as many relevant demographic parameters as you can that might confound the measurement. Then you look for differences between the two groups. If the cases, for instance, have a higher exposure to the substance under study, then the conclusion is that exposure to the substance is associated with the condition and therefore the substance might cause or contribute to it. If the exposure to the substance under study is lower in the case group than in controls, then the conclusion is that the substance might be protective. If the exposure is the same between the groups, then the conclusion is that the substance probably has no relationship to the condition under study, which is what this case-control study more or less concluded (further elaboration later on the somewhat anomalous result of thimerosal seeming protective against autism).
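To make that logic concrete, here’s a minimal sketch of the exposure comparison at the heart of a case-control design. The counts are invented purely for illustration; they are not data from Price et al, and the measure shown is the simple odds ratio rather than the conditional models the study actually used:

```python
# Toy illustration of case-control logic. All counts below are
# made up for illustration -- NOT data from Price et al.
def odds_ratio(exposed_cases, unexposed_cases,
               exposed_controls, unexposed_controls):
    """Odds ratio from a 2x2 table: (a/b) / (c/d)."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# Cases and controls with the same exposure odds: OR = 1, no association.
print(odds_ratio(40, 60, 40, 60))  # 1.0

# Higher exposure among cases: OR > 1, a possible risk factor.
print(odds_ratio(60, 40, 40, 60))  # 2.25

# Lower exposure among cases: OR < 1, an apparently "protective" association.
print(odds_ratio(30, 70, 40, 60))  # ~0.64
```

The third case is exactly the pattern behind the anomalous “protective” hazard ratios discussed above: an association below 1 falls out of the arithmetic whenever cases happen to be slightly less exposed than controls, mechanism or no mechanism.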
It’s not just the concept of a case-control study, though, that Ms. Bernard fails to comprehend. The concept of dose-response also seems to elude her, as does the concept of susceptibility windows. By the antivaxers’ own arguments, the idea is that there is a time window of susceptibility. What it is, they rarely say exactly, but they do generally seem to place it below age 2, because that’s when the majority of childhood vaccines are administered. Moreover, if there really were a link between thimerosal in vaccines and autism, there would be a dose-response curve. This complaint is no different from the complaint against the Italian study that (surprise, surprise!) also found no relationship between vaccines and autism. Bernard’s criticism aside, this is how a case-control study is done. You look at cases and compare them to randomly selected controls matched as well to the cases as you can manage. The ranges of exposures are what they are. Comparing dose exposures in cases versus controls is how nearly all environmental risk research is done, because for most substances thought to be linked to disease it is impossible to find people who have had zero exposure. For example, all of us have had exposure to environmental mercury; it would not be possible to find anyone with zero exposure. Does that mean case-control studies examining environmental mercury exposure as a risk factor for various conditions should not be undertaken? Of course not.
Besides, the variable being examined was thimerosal, not vaccines themselves. Calling for more completely unvaccinated children is a red herring. For purposes of studying the hypothesis in this study, a child fully vaccinated with all the vaccines on the CDC schedule but with thimerosal-free vaccines would count as zero exposure. Bernard is intentionally confusing the issue by bringing up unvaccinated children. Finally, this particular criticism of the study depends on the notion that any amount of mercury, no matter how small, will increase the risk of autism, so that every exposure is above the dose at which the risk attributable to thimerosal plateaus. There’s no evidence that this is so, unless mercury in vaccines somehow magically behaves differently than mercury in the environment when it comes to dose-response, in which case, when it comes to autism at least, thimerosal would be as potent as Botox.
Except that even Botox exhibits a safe dose.
The next complaint is just plain silly. Bernard complains that the matching of controls to cases was done by birth year. This is an utterly standard way of doing case-control studies, particularly those involving children, because it minimizes the variation between cases and controls that might be due to being raised in different years, going to the same schools in different years, or having different exposures related to different years. In other words, it’s good practice to match based on birth year. Not that that stops Bernard from writing:
Each of the three HMOs would buy in bulk the same vaccines for all its patients and the promotion of a new vaccine would tend to be uniform across an HMO, so that within an HMO, exposure variability is lessened. Additionally, the recommended vaccines, the formulations offered by manufacturers, and the uptake rate of new vaccines varied by year, so that within a given year, exposure variability is further reduced. The effect is that children in a given year in a given HMO would tend to receive the same vaccines.
She writes this as though it were a bad thing for the study! After all, the study variable is thimerosal, not vaccines. If you want to concentrate on thimerosal, then naturally you’d want to eliminate as many of the other variables as possible. Matching by birth year is one way to help accomplish that. She also constructs a rather bizarre “what if” scenario:
The variables of time and place (HMO) are correlated with the exposure variable. Statistically, the correlation would reduce the effect of the exposure variable, as the two matching variables compete with the exposure variable to explain differences in the autism outcome. For example, say for simplicity that HMO A used vaccines in 1994 which exposed all enrolled infants up to 6 months of age with 75 mcg of mercury; the rate of ASD for 1994 births in HMO A was found to be 1 in 150. In 1995, HMO A used vaccines which exposed all enrolled infants up to 6 months of age to 150 mcg of mercury; the rate of ASD for these children rises to 1 in 100. By stratifying by year for this HMO, those children born in 1994, whether or not they had an ASD, would show identical exposures. Those with an ASD born in 1995 in HMO A would also have the same exposures as those born in 1995 in HMO A without an ASD. The association between the increased exposure and the increase in ASD can only be detected by removing the birth year variable, which otherwise masks the effect of exposure on outcomes.
This is one of those claims that sounds superficially plausible–if it weren’t for all the other correlations being tested in the various multivariate models: measures of birth weight, household income, maternal education, marital status, maternal and paternal age, birth order, breast feeding duration, and child birth conditions, including Apgar score and indicators for birth asphyxia, respiratory distress, and hyperbilirubinemia; measures of maternal tobacco use, alcohol use, fish consumption, and exposure to non-vaccine mercury sources, lead, illegal drugs, valproic acid, folic acid, and viral infections during pregnancy; and measures of child anemia, encephalitis, lead exposure, and pica. Moreover, when more than one site is used for a study, it is customary to compare the characteristics of the subjects enrolled at each site in order to make sure that they are comparable and can thus be used in the study. Add to that all the other subject characteristics examined, and Bernard’s complaint becomes just another smokescreen, especially since other results not reported in the Pediatrics paper suggest that autism prevalence was stable during the six years covered. In fact, if you look at the technical report, you’ll find that the authors checked the influence of HMO:
Were overall results driven by results from one particular HMO? To address this question we fit models separately to the data from the two largest HMOs and compared the results to the overall results. The exposure estimates from each of the two large HMOs are similar in direction and magnitude to the overall results. However, they were seldom statistically significant due to the smaller sample sizes obtained when modeling separately by HMO. We conclude that the overall results were not primarily driven by the results in one particular HMO.
They also controlled for study area:
Controlling the geographic area within the HMO coverage could increase the comparability of the cases, as well as make the data collection more concentrated and therefore less expensive. During creation of the sampling frame, children that were known to live more than 60 miles from an assessment clinic were excluded from the sampling frame.
Finally, they did several statistical tests to determine if the results were driven primarily by one subgroup:
In order to assess whether the results were sensitive to the influence of one or a few highly influential observations within a single matching stratum, we tried re-fitting the analysis model for the AD outcome to sequential subsets of data where, in each subset, all data from a single stratum were omitted. For example, if one or a few highly influential observations were in Stratum “2”, then results from a model where the data were omitted from that stratum would be very different from the results when the data from the stratum are included.
Bummer, Sallie. Next time, read the full technical report. I realize that your deluded anti-vaccine fans won’t bother to check these things, but I will.
The next complaint can be dismissed quickly:
The participation rate in the study was quite low: among the cases, it was 48.1% and among the controls, only 31.7%. Controls were more likely than cases to be unable to locate and to refuse participation. The standard for minimal response is 60% and higher. This does not represent a probability sample.
Who says the standard for minimal response is 60% and higher? In any case, the authors’ responses to this complaint will do quite nicely. Same complaint. Same response. Plus mine, of course. The authors accounted for the lower response rate and, in fact, pointed out that their response rate ended up being higher than they had expected.
Finally, Bernard comes back to the apparent protective effect from thimerosal. Amazingly (well, not so amazingly), she doesn’t note that the authors acknowledged and discussed this result. She then constructs a scenario designed to “demonstrate” that shifts in participation in key groups in such a study can change the results. No kidding. Here’s the problem. Although Bernard does show that differences in participation rates among the controls, depending on whether they were late vaccinators or not, could skew the control group toward a higher percentage of on-time (higher-exposure) vaccinators, this is yet another smokescreen. For one thing, she envisions identical participation rates between late and on-time vaccinators in the ASD group, while in the non-ASD group she envisions 40% participation of on-time vaccinators and only 15% participation of late vaccinators. This is, to say the least, a highly artificial and unlikely construct, but that’s what it took for her to make the numbers work. To justify these numbers, she cited a paper in which the response rate for subjects with no thimerosal exposure was 48% and the rate for those with “full exposure” was 65%. That is not a nearly three-fold difference.
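To see just how much work those assumed participation rates are doing, here’s a quick back-of-the-envelope sketch. The pool sizes are invented for illustration (nothing here comes from the actual study); only the 40% and 15% participation rates are Bernard’s own assumed figures:

```python
# Toy sketch of Bernard's selection-bias scenario. Pool sizes are
# invented for illustration; 0.40 and 0.15 are her assumed rates.
def participating_fraction_on_time(n_on_time, n_late,
                                   rate_on_time, rate_late):
    """Fraction of *participating* controls who vaccinated on time."""
    p_on = n_on_time * rate_on_time
    p_late = n_late * rate_late
    return p_on / (p_on + p_late)

# Suppose the eligible control pool is half on-time, half late vaccinators.
# Equal participation (as she assumes for the ASD group): no distortion.
print(participating_fraction_on_time(500, 500, 0.40, 0.40))  # 0.5

# Her non-ASD assumption: 40% vs. only 15% participation. The participating
# controls are now heavily skewed toward on-time (higher-exposure) children.
print(participating_fraction_on_time(500, 500, 0.40, 0.15))  # ~0.727
```

The skew only appears when the participation rates diverge that sharply, which is precisely the point: the 48% versus 65% gap in the paper she cites is nowhere near the 15% versus 40% gap her scenario requires.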
In other words, Bernard had to make up a highly artificial hypothetical situation in which she came up with differences far beyond what is justified in order to make the numbers in her scenario work. Nowhere does she show that there’s any reason to suspect such a huge difference in response rates. Certainly, I could find no indication that would lead me to suspect such huge reporting differences. If that’s the best she could come up with, Price et al is a better study than I thought the first time around.
When it comes to the notion that thimerosal causes autism (I refuse to dignify it with the term “hypothesis” anymore), it’s clear to me that Sallie Bernard and SafeMinds are getting desperate. This attack on Price et al is even thinner than the usual gruel. In fact, I’d say it’s pathetic, particularly given that it apparently took Bernard over three days to come up with it. True, there is an even more pathetic response out there, but demolishing that one is left as an exercise for the reader. (Hint: It involves the bogus claims about mercury excretion in autistic children.)
In the meantime, I don’t know whether to shake my head in embarrassment that Sallie Bernard was ever allowed anywhere near a study like this (the CDC took her on as an external consultant in an ill-advised attempt to co-opt her; she turned on them) or to laugh with unrestrained hilarity that anyone can be so incompetent at analyzing science. Maybe a little of both.