I write about vaccines a lot here at Respectful Insolence, and for a very good reason. Of all the medical interventions devised by the brains of humans, vaccines have arguably saved more lives and prevented more disability than any other medical treatment. When it comes to infectious disease, vaccination is the ultimate in preventive medicine, at least for diseases for which vaccines can be developed. We also know that when vaccination rates fall, the door opens for once-controlled diseases to come roaring back. We saw this phenomenon a year ago with the Disneyland measles outbreak. We’ve seen it around the country, with measles outbreaks occurring in areas where many antivaccine and vaccine-averse parents live. Perhaps the most spectacular example occurred in the UK, where, prior to Andrew Wakefield’s fraudulent case series in The Lancet that was used to link the MMR vaccine to autism, measles was under control; it came roaring back as MMR uptake plummeted in the wake of the publicity his research engendered. By 2008, ten years after Wakefield’s case series was published, measles was again endemic in the UK, and measles outbreaks flourished. Although MMR uptake is improving again in the UK, there remains a reservoir of unvaccinated children aged 10-16 who can transmit the virus.
Fortunately, Wakefield has been relegated to sharing the stage with crop circle chasers, New World Order conspiracy theorists, sovereign citizen cranks, and other antivaccine cranks like Sherri Tenpenny. Unfortunately, the damage he has done lives on and has metastasized all over the developed world. Given the persistence of the antivaccine movement, which fuels concerns about vaccines among parents who are not themselves antivaccine but are predisposed to the antivaccine message because they distrust government and/or big pharma or have a worldview that overvalues “naturalness,” I was quite interested in an article that appeared in The BMJ last week. Basically, it asked the question: “Is the timing of recommended childhood vaccines evidence based?”
Who are the debaters?
Not surprisingly, on the “yes” side of the BMJ debate were Kathryn M Edwards, Yvonne Maldonado, and Carrie L Byington, all from the American Academy of Pediatrics Committee on Infectious Diseases, which has a major role in consultation with the Centers for Disease Control and Prevention (CDC) in developing the US childhood vaccination schedule. Given that they are part of the committee tasked with developing vaccine recommendations for the US, they provide a robust defense of the evidence base for their decision-making process.
On the “no” side were Tom Jefferson and Vittorio Demicheli. We’ve met Tom Jefferson before on multiple occasions. Let’s just say that neither Mark Crislip nor I have been particularly impressed with him, thanks to the bad arguments his methodolatry leads him to make and his tendency to provide sound bites beloved by the antivaccine movement. It also didn’t help that he’s appeared on Gary Null’s radio show.
Basically, Jefferson strikes me as an earnest expert who likes being an iconoclast a bit too much. Although I’ve read Jefferson’s writings on multiple occasions, I had never heard of Vittorio Demicheli (or at least don’t remember having heard of him before), but he’s listed as being with the Cochrane Acute Respiratory Infections Group, Jefferson’s longtime stomping ground. Judging from his publications, though, he appears to be of a similar mindset to Jefferson. Indeed, he was featured in a news story in The BMJ just last November, in which he opposed adding the human papillomavirus (HPV) vaccine for males, the rotavirus vaccine for infants, and the herpes zoster and pneumococcal polysaccharide vaccines for elderly people to the Italian national schedule as too costly and in some cases ill-advised. In particular, he accused the health authorities who made the decision to add these vaccines to the national schedule of being in the pocket of big pharma, leading the health officials to respond with threats of legal action for libel.
So basically, this debate boils down to two sets of people: those with the real world responsibility of taking the messy evidence base that exists for vaccines and turning it into actual policy recommendations versus those who are focused on evidence-based medicine and are clearly suspicious of big pharma. Not surprisingly, both sides likely have their own biases and experiences that color their opinions, which is, in general, a good thing in a discussion like this. Perhaps surprisingly, the difference between the two positions is smaller than the headline might lead the reader to believe.
What constitutes an “evidence-based” vaccine schedule?
Before we dive into the two positions, I want to consider briefly a question that’s critical to the whole debate: What, exactly, does it mean for a vaccine schedule to be “evidence-based”? A frequent attack on vaccine schedules by antivaccinationists is that they can’t be evidence-based because they differ so much from nation to nation. Of course, what constitutes an “evidence-based” recommendation is not nearly as simple a question as it might appear at first glance. Different groups of people charged with making such recommendations can disagree about the strength of the evidence for different vaccines, and different parts of the world can easily have different conditions and different risks of various diseases, which can lead to different interpretations of the same evidence base and thus to different recommendations. Then, of course, there are practical concerns. As much as we hate to admit it, cost is a factor. If, as a public health official, you have to prioritize a list of diseases to vaccinate against, cost can’t help but be a consideration. Finally, unfortunately, politics can’t be left out. Here in the US we have an ignorant group of politicians who conflate vaccine refusal with “freedom” and parental choice. Again, vaccine recommendations can’t help but take that into account.
Then there’s the question of what evidence to consider, which is also not as straightforward a question as you might imagine. Here’s what the “yes” contingent (Edwards et al.) says in the point-counterpoint:
Data from clinical trials represent only a portion of the evidence considered in determining vaccination schedules.5 Burden of disease, immunogenicity, and efficacy studies enable countries to select vaccines and schedules appropriate for their populations, as shown by the recent infographic in The BMJ.6 Vaccine schedules are further refined by considerations such as timing and efficiency of access to the target population to optimise uptake. For childhood vaccines, integration with existing local or national well child visit schedules is a critical consideration. This concept was summarised well in the US Institute of Medicine (IoM) report on the childhood immunisation schedule: “Each new vaccine is approved on the basis of a detailed evaluation of both the vaccine itself and the immunization schedule.” The IoM further stated that randomised controlled trials in which children “would receive less than the full immunization schedule or no vaccines would not be ethical because they would be exposed to a greater risk for the development of diseases and community immunity would be compromised.”5
Once vaccines are in general use local surveillance is generally conducted to evaluate their effect on disease burden. Comprehensive surveillance systems are also maintained by the Centers for Disease Control and Prevention in the United States, Eurosurveillance in Europe, and the World Health Organization expanded programme on immunisation (EPI).7 8 9
The infographic referred to by Edwards et al can be found here. It’s interactive and lets the reader examine which vaccines are required when in Canada, France, Germany, Italy, Japan, Russia, the UK, and the US. If you look at the MMR vaccine, you’ll see a pretty striking similarity in the recommended schedules among most of the countries included in the infographic. With varicella, however, there is less agreement. For tuberculosis, only two nations (Japan and Russia) require vaccination at all.
Basically, producing an evidence-based schedule requires weighing the clinical trial evidence of efficacy and safety against the risk and burden of each disease being vaccinated against, while taking into account economic considerations, politics, and parental acceptance of new vaccine mandates. It’s way more than just looking at a new vaccine, seeing that it is effective and safe, and deciding that it should be added to the recommended vaccine schedule.
Yes, vaccine schedules are evidence-based
When it comes to answering the question of how evidence-based vaccine schedules are, there’s only one answer: It’s complicated. It’s complicated as hell. That’s why expert advisory bodies are required to synthesize all the lines of evidence and come up with a reasonable set of recommendations that will, based on what is known at the time, maximize benefit and minimize potential harm, as Edwards et al. also point out, with examples:
In nearly every jurisdiction, decisions regarding vaccine schedules are made by formal advisory bodies consisting of experienced practitioners, public health officials, vaccinologists, and epidemiologists. Available data are reviewed, burden of disease assessed, and practical considerations for vaccine delivery evaluated to produce an appropriate schedule for each country. Thus, expert advisory bodies may develop differing recommended schedules, based on local, regional, or national considerations. For example, the second dose of MMR vaccine is routinely given in Germany at 15-23 months of age, while in the US it is administered at 4 to 6 years. Strong trial generated evidence shows that two doses separated by at least 28 days and the first dose administered on or after the first birthday will produce measles immunity in 99% or more of people. The timing of the second dose varies in each country based on the ability to provide the earliest possible second dose that will minimise the burden of measles. Ongoing surveillance of measles cases ensures that the timing of doses remains appropriate to the epidemiology of disease.
Contrasted to, for example, Africa:
Consider also the primary vaccination schedule for infants. The EPI schedule recommends immunisation at 6, 10, and 14 weeks in central Africa based on the early burden of vaccine preventable diseases and the need for efficient vaccine delivery when infants are most accessible. In contrast, the primary schedule in North America and much of Europe is 2, 4, and 6 months; in these populations, the lower risk of acquisition of many infectious diseases and better access to care permit vaccination to be incorporated into established well child visits through the first six months of life.
So, in other words, it’s important in central Africa to get children fully immunized as early as is practical, both because they are at greater risk early in life and because infants there may not be available for well child visits at 2, 4, and 6 months. These are the sorts of local considerations that result in differences in vaccine schedules, even though all the public health officials responsible for producing these schedules are looking at more or less the same scientific evidence. Antivaccine warriors never seem to understand this and try to paint any differences in vaccine schedules between nations as evidence of how “unscientific” the process is. This is, of course, nonsense. The process is no more “unscientific” than the rest of science-based medicine. Selecting vaccines and deciding on their best timing is a process based in science, but this isn’t a perfect world, which means it can’t be based only in science. Other considerations, as I have discussed, come into play and are inextricably linked to the science. The overall goal is to produce the most scientifically rigorous and defensible vaccine schedule possible given the other constraints that impinge on the decision-making process.
Moreover, the process is never over. It is fluid. As Edwards et al. note, ongoing monitoring is essential and optimizes protection. They discuss a specific example from the UK to illustrate this point: the Haemophilus influenzae type b (Hib) vaccine. After implementation of an Hib conjugate vaccine schedule at 2, 3, and 5 months, an increase in Hib cases was observed, leading health officials to change the schedule by moving the 3 month dose to 12-13 months, which reduced the burden of Hib disease. They also point to the introduction of maternal Tdap vaccination in the US and many European countries to reduce the rate of pertussis in infants too young to be vaccinated, a strategy that appears to have been effective.
Not surprisingly, Edwards et al. conclude:
In summary, vaccine schedules are evidence based, safe, and highly effective in reducing the global burden of infectious diseases. Evidence to develop and maintain these schedules involves a multifactorial and robust process carried out worldwide. The real world effectiveness is shown by the millions of children spared annually from the morbidity and mortality of vaccine preventable infections.
Which strikes me as rather hard to argue with, although that doesn’t stop Jefferson and Demicheli from trying.
Methodolatry strikes back
It occurred to me as I was writing this that I never actually described what “methodolatry” means, although I did link to definitions. Now seems as good a time as any to rectify that. Basically, methodolatry in medicine is the “profane worship of the randomized controlled clinical trial (RCT) as the only valid means of investigation.” In other words, if there’s not enough RCT evidence, methodolatrists will nearly always conclude that the evidence base is insufficient to support a clear conclusion, which is a problem in vaccine studies because it is not always possible to do an RCT for every question. Come to think of it, it’s a problem in much of medicine, where there will always be questions for which RCT data are lacking or thin. Sometimes that does mean we can’t draw conclusions; other times, the preponderance of evidence from other study types is enough to support some fairly confident conclusions. Medicine, whether you call it evidence-based or science-based, is messy and involves synthesizing many forms of evidence. That’s not to deny that, in general, an RCT is the most rigorous form of evidence; rather, it’s to say that the RCT is not the be-all and end-all of medical evidence.
You can see a hint of methodolatry right here in the very first paragraph of Jefferson and Demicheli’s response:
If taken literally, the answer to the question is a simple no. No field trials have compared the effectiveness and harms of all vaccines used according to various schedules listed in the recent BMJ infographic.6 12 The time for such studies is ethically and logistically past.
No, even if the question of whether vaccine schedules are evidence-based is taken literally, the answer is not a “simple no.” The answer can only be a “simple no” if you consider one kind of evidence as trumping all others: RCTs. Note how Jefferson and Demicheli bemoan the fact that no “field trials” have compared the effectiveness and harms of all the vaccines in the various combinations used. It’s clear that when they write “field trials” they mean “RCTs,” because in the very next sentence they point out how the time for such trials is ethically past. In fact, we study vaccines in the field all the time, as given according to the various vaccination schedules. Such studies are called epidemiological studies, and I can cite a bunch of them; indeed, I’ve discussed a number of them over the years. Clearly, Jefferson and Demicheli don’t consider such studies to be sufficient evidence. Indeed, one of the respondents, Richard J. Roberts, Head of the Vaccine Preventable Disease Programme in Wales, drily observed:
The opening lines of Jefferson and Demicheli’s argument provide an insight into their view of what constitutes evidence. They state the simple answer to the question ‘Is the timing of recommended childhood vaccines evidence based?’ is ‘no’, because no field trials have been conducted. In doing so they consign all other evidence to the category non-evidence. Such a view is an unaffordable luxury for those who have to make real world decisions on vaccine policy. It also goes some way to explaining why the conclusions of the Cochrane Vaccines Field on vaccine efficacy have on occasion so clearly differed from vaccination policy advised by national expert advisory groups, who are not only free but ethically constrained to consider the totality of the evidence base and not just that provided by trials.
Exactly. And they do this again and again.
Vaccine schedules are evidence-based, but there is always room for improvement
That’s not to say that I disagree with everything they write. They have a point when they say that the full evidence base needed to make such decisions is “seldom fully available when vaccination schedules are devised.” On the other hand, one could just as well say that the full evidence base, at least as Jefferson and Demicheli define it, might never be available! Making decisions in the face of incomplete or uncertain evidence is not an uncommon problem for public health officials. Sometimes they have the luxury of waiting for more evidence. Sometimes they don’t. There will always be some level of uncertainty.
Jefferson and Demicheli also go astray in framing the question in a way that is, quite simply, bordering on ludicrous. After pointing out that they can find no evidence of harm from multiple vaccines being given at a session despite concerns about “overloading” infants’ immune systems, they ask: “So does this mean we should vaccinate all newborn children with all available vaccines against all targetable diseases?” This is, of course, a question that no one really asks. It is a straw man.
They then answer their own question and proceed to an argument that posits that it is the disease threat that matters above all:
No. The main evidence that should be used to guide the development of vaccine schedules is the threat that the targeted diseases pose in the first years of life. The threat assessment should include potential morbidity, mortality, and disability from the disease in question, as well as the risk of exposure to the disease. This type of evidence could even be more important in ascertaining the net benefit of a vaccine than detailed knowledge of efficacy.
Even if the threat of disease is remote, vaccination would still be warranted if the disease is associated with an unacceptable risk of morbidity and disability, as in the case of polio in rich countries. Assessment of the threat posed by the targeted disease should be based on public health surveillance, but surveillance has often been of low quality and there may be no reliable incidence data for a disease targeted by a new vaccine.
For most of the vaccines in The BMJ infographic,6 the evidence of efficacy is apparently good. However, because detailed reports for most clinical trials of vaccines are not available, and have not been independently reviewed, we cannot be certain of vaccines’ harms profiles.
Yes, it is reasonable to determine which diseases should be vaccinated against based on threat assessment. They’re also correct about why it’s still a good idea to keep vaccinating against polio even though there hasn’t been a case of polio in the US for a long time. We also know from historical examples (e.g., the UK after Wakefield) that, if we were to stop vaccinating against measles, it would come back in short order, and that low MMR uptake results in outbreaks. That’s why the MMR vaccine is still very important. Be that as it may, as I read the article I was rather curious what Jefferson and Demicheli mean when they say that there may be no reliable incidence data for a disease targeted by a new vaccine. That is the sort of broad, general statement that cries out for illustrative examples. Specifically, for which vaccine-preventable diseases did public health officials lack good incidence data at the time the vaccine was under consideration for addition to the recommended vaccine schedule?
It’s also a prime example of methodolatry to say that we don’t have evidence of the vaccines’ harm profiles just because we don’t always have full results of vaccine clinical trials. Are RCTs the only source of information regarding adverse events due to vaccines? Of course not! There are many large databases that track vaccine reactions. In the US, for example, we have the Vaccine Safety Datalink (VSD), which continuously monitors vaccine safety. We also have the Vaccine Adverse Event Reporting System (VAERS), which, although not reliable for estimating incidence of adverse reactions because reports can be entered by anyone and because it’s been gamed by trial lawyers, does nonetheless serve a useful purpose as an early warning system for possible new adverse events from vaccines. Other countries, like Canada, have similar databases. The fact is that we have a very good idea of what harms vaccines can cause from epidemiological studies, which have the advantage of being able to survey many times the number of patients as any RCT, up to millions in some cases. That’s how we know the incidence of serious adverse events from vaccines on the CDC recommended schedule is so low. To argue otherwise is methodolatry.
So is their conclusion:
In summary, the vaccine schedule is a function of different interventions, contexts, and values. The evidence base used in designing schedules is incomplete. So how can we improve current practice? We should start by carrying out a more accurate assessment of the magnitude of disease threats. Those vaccines not targeting impending or credible threats should then be phased out or delayed. We also need randomised trials comparing different vaccination schedules to provide good quality data on the potential harms of single or multiple vaccinations. All aspects of vaccination should be monitored and assessed by independent studies.
Of course, no one disagrees with the last sentence. Who could or would? However, surely even Jefferson and Demicheli must know that it is impractical to conduct RCTs on many different vaccination schedules. It might not even be ethical in some cases if some children are left unprotected against specific vaccine-preventable diseases. Also, how does one estimate the magnitude of disease threats in the wake of phasing out certain vaccines or delaying them? Herd immunity is very important for many vaccine-preventable diseases; reducing herd immunity changes conditions so much that diseases that were under control because of mass vaccination can easily become a threat again if vaccine coverage falls.
Jefferson and Demicheli aren’t even consistent in their own demand to stick to strict evidence, either:
Balancing the age at first dose with the number of doses should ideally be based on the families’ perception of threat. Even if the threat of a particular disease is of low level or unknown, the possibility of some diseases may trigger alarm and anxiety in some families. If governments decide to offer a vaccine but many families refuse it the policy may be ineffective.
So wait a minute. Now they’re saying that RCT evidence isn’t the be-all and end-all of decision-making about vaccine schedules? That health policy makers also need to take into account families’ fear of some diseases and their willingness, or lack thereof, to accept certain vaccines? Isn’t that exactly what public health officials already do: make their recommendations as evidence-based as possible while taking non-scientific factors into account where necessary?
That’s exactly what public health officials do. It’s what they should do.
From my perspective, the answer to the question posed by the BMJ debate is clear. Vaccine schedules and timing are indeed evidence-based, if by “evidence” you go beyond just RCTs and look at the totality of the science: RCTs, microbiology, immunology, and epidemiology. They are not, however, perfectly so. They can never be perfectly so. They can, however, be improved, but doing so takes more than just RCTs. It takes looking at the evidence—dare I say it?—holistically.