Sadly (with regards to vacation) and not-so-sadly (with regards to the events of last week), it’s time to dive headlong back into the “real world” at work, starting with clinic today. It also means it’s time to get back to my favorite hobby (blogging) in a much more regular way, although I will say that a relatively prolonged break from the blog was good, and my traffic only suffered mildly for it. I may have to do it more often, if only to keep things fresher.
One of the tasks that confronted me this weekend as I got ready to face a full week back at work was to try to catch up on all the literature that I had been ignoring for nearly three weeks. Fortunately, PubMed now lets you set up customized RSS feeds for any search you might want to set up. Unfortunately, thanks to that, approximately a thousand results were waiting for me when I finally got up the nerve to fire up NetNewsWire and let it download the results of all the feeds that I had set up. Faced with such abundance, instead of my usual practice of skimming the titles and the abstracts, I ended up just skimming the titles and marking anything that didn’t immediately catch my attention and hold it as “read.” There was, however, one article that did catch my attention and hold it almost immediately, an article by McAlister et al in PLoS Medicine entitled “How Evidence-Based Are the Recommendations in Evidence-Based Guidelines?”
Excellent! Something right up my alley and the perfect topic to start out my first full week back.
One of the consistent themes of this blog ever since it began as an itty-bitty ego trip on Blogspot back in late 2004 has been to emphasize evidence-based medicine and to advocate applying the same standards of evidence to alternative medicine that we expect of “conventional” or “evidence-based” medicine. Indeed, I’ve tended to resent the entire term “alternative medicine,” mainly because it’s becoming more and more clear to me that “alternative medicine” is nothing more than a politically correct term used by its advocates to describe a large body of non-evidence-based medicine and frame this description in such a way as to downplay the lack of evidence for efficacy of these treatments or, in some cases, the evidence against their efficacy. These days, my preference has been simply to refer to evidence-based versus non-evidence-based medicine or, alternatively, “scientific” versus “non-scientific” medicine. (How’s that for “re-framing” the term “alternative medicine”?) Any “alternative” (a.k.a. “non-evidence-based”) medicine that becomes “evidence-based” ceases to be “alternative” and is added to the armamentarium of scientific medicine, just as many medicines derived from plants and herbs have been over the last couple of centuries.
Admittedly, however, it’s often unclear exactly what is meant by “evidence-based” medicine. After all, anecdotes are “evidence,” albeit a very weak form of evidence prone to a number of confounding biases, and critics invoking postmodernism have even gone so far as to refer to an insistence on evidence-based medicine as inherently “fascist” in nature. That’s why, before delving into the article, I’ll review one commonly used definition of “evidence-based medicine”:
- Evidence-based medicine is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.
- Evidence-based medicine is neither old-hat nor impossible to practice.
- Evidence-based medicine is not “cook-book” medicine.
One common misconception about evidence-based medicine is that randomized clinical trials (RCTs) are the only form of evidence that matters. Sometimes this takes the form of a misrepresentation used as a straw man argument by advocates of alternative medicine to claim that a large proportion of “scientific medicine” is not evidence-based because there are no RCTs supporting it. (You’ll often hear the claim bandied about on altie websites that only “10%” of conventional medicine is supported by RCTs.) It’s true that RCTs are considered the strongest form of evidence (i.e., the “gold standard”), which is as it should be, but they are not the only acceptable form of evidence:
Evidence-based medicine is not restricted to randomised trials and meta-analyses. It involves tracking down the best external evidence with which to answer our clinical questions. To find out about the accuracy of a diagnostic test, we need to find proper cross-sectional studies of patients clinically suspected of harbouring the relevant disorder, not a randomised trial. For a question about prognosis, we need proper follow-up studies of patients assembled at a uniform, early point in the clinical course of their disease. And sometimes the evidence we need will come from the basic sciences such as genetics or immunology. It is when asking questions about therapy that we should try to avoid the non-experimental approaches, since these routinely lead to false-positive conclusions about efficacy. Because the randomised trial, and especially the systematic review of several randomised trials, is so much more likely to inform us and so much less likely to mislead us, it has become the “gold standard” for judging whether a treatment does more good than harm. However, some questions about therapy do not require randomised trials (successful interventions for otherwise fatal conditions) or cannot wait for the trials to be conducted. And if no randomised trial has been carried out for our patient’s predicament, we follow the trail to the next best external evidence and work from there.
The problem with the vast majority of alternative medicine is that there is either (1) no solid clinical evidence that it works (most alternative medicine); (2) no plausible scientific reason to think that it should work (e.g., homeopathy, Reiki therapy); (3) worst of all, evidence that it doesn’t work (e.g., laetrile, chelation therapy for cardiovascular disease); or (4) combinations of #1, 2, and 3 (homeopathy, chelation therapy, high dose vitamin C, etc.). Given this backdrop, it is of interest to know how much of the content of “evidence-based” practice guidelines is in actuality truly evidence-based. Part of the need for such studies is that what is meant by “evidence-based” may be interpreted differently by those who write such guidelines and those who use them. McAlister et al note:
There has been a rapid expansion in the number of clinical practice guidelines over the past decade and, as a result, clinicians are frequently faced with several guidelines for treatment of the same condition. Unfortunately, recommendations may differ between guidelines, leaving the clinician with a decision to make about which guideline to follow. While it is easy to say that one should follow only those guidelines that are “evidence based,” very few guideline developers declare their documents to be non-evidence based, and there is ambiguity about what “evidence based” really means in the context of guidelines. The term may be interpreted differently depending on who is referring to the guideline–the developer, who creates the guidelines, or the clinician, who uses them. To their developers, “evidence-based guidelines” are defined as those that incorporate a systematic search for evidence, explicitly evaluate the quality of that evidence, and then espouse recommendations based on the best available evidence, even when that evidence is not high quality. However, to clinicians, “evidence based” is frequently misinterpreted as meaning that the recommendations are based solely on high-quality evidence (i.e., randomized clinical trials [RCTs]).
The authors decided to evaluate the most recent guidelines for the management of diabetes mellitus, dyslipidemia, and hypertension, focusing on the evidence base for cardiovascular risk management interventions only and leaving out the evidence for other recommendations in those guidelines. They rated the quality of the evidence behind each recommendation using the CHEP scheme, an online tool based on the work of the GRADE working group and the AGREE instrument, which they outlined here.
Hearteningly, two-thirds of the cardiovascular risk management therapeutic recommendations were found to be based on evidence from RCTs. Less hearteningly, only about one-half of these RCT-based recommendations were rated as being of “high quality.” This does not mean that the studies used to support these recommendations were not of high quality. Rather, half of them were downgraded from “high” quality, when analyzed in the context of the recommendations, because of applicability. Most frequently, an RCT designed to answer one particular question was being generalized to justify a recommendation in a different clinical scenario; alternatively, results of studies carried out in narrowly defined populations were being used to support recommendations in a more general population. In other words, although a high quality RCT can be the basis for several recommendations, the evidence from a single RCT will not support all of the recommendations derived from it equally well, and sometimes developers of guidelines are forced to extrapolate beyond what the RCTs say simply because there is no better evidence available.
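To make the arithmetic behind these proportions explicit (a back-of-the-envelope sketch using the round figures above, not the paper’s exact counts), the two numbers multiply out to the overall fraction of recommendations resting on high-quality, directly applicable RCT evidence:

```python
# Back-of-the-envelope arithmetic for the McAlister et al. figures
# (round fractions from the text, not the paper's exact counts).
rct_based = 2 / 3      # fraction of recommendations backed by RCT evidence
high_quality = 1 / 2   # fraction of those rated "high quality" after
                       # downgrading for applicability

overall_high_quality = rct_based * high_quality
print(f"{overall_high_quality:.2f}")  # prints 0.33, i.e. about one-third
```

That one-third figure is the number discussed in the rest of the post.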
The bottom line is that, in this one area at least, if you believe this study, only around one-third of the recommendations in a set of consensus guidelines about how to manage cardiovascular risk in three different conditions are based on “high quality” RCT evidence. The study does have significant limitations, such as the small number of guidelines examined and its restriction to therapeutic interventions, but it’s probably not all that far from the truth, at least as far as it goes. I’m surprised that I haven’t yet seen this study trumpeted on websites like NewsTarget or Whale.to as “evidence” that “evidence-based” medicine is not really evidence-based, the implication being that so-called “alternative” medicine does just as well.
Does this study demonstrate that “evidence-based” medicine is nothing of the sort? Of course not! What it really demonstrates is the difficulty inherent in practicing evidence-based medicine and coming up with truly evidence-based guidelines. There are just too many holes in our clinical evidence ever to obviate the need for “filling in the gaps,” and, because of the rapid advances (or, to the more cynical, “changes”) in treatments over the years, there will always be such gaps. In the case of evidence-based guidelines, this “filling in” of the gaps usually involves extrapolating studies further than would normally be done, to encompass wider study populations or clinical scenarios that might not exactly match the questions asked in the RCTs from which the evidence supporting the guideline was drawn. It also involves considering the impact of other comorbid conditions, something that is not in general well done in these sorts of guidelines. In the case of individual clinicians, this is where clinical experience and acumen come into play, specifically the ability to synthesize the evidence from “evidence-based medicine” and apply it to the treatment of individual patients.
Now for the second question: Does this study give comfort to advocates of alternative medicine that their modalities are just as “evidence-based” as those of scientific medicine?
In a word, no.
Alties may take some pleasure in pointing a finger at this study and claiming that “only” 33% of “evidence-based” treatments are in fact evidence-based, but that would be a misrepresentation. In fact, all of the treatments in evidence-based guidelines are based on some evidence; it is only that 33% of them are based on high quality evidence that is precisely applicable to the clinical situation for which the recommendation is being made. Given the biological variability of disease and between patients, that’s actually not too bad. Certainly we can and should work to do better, and the authors’ suggestion that future guidelines explicitly grade the quality of evidence for each individual recommendation is a good one. Indeed, I’m starting to see just that in the literature more and more often; for example, in the recent American Cancer Society guidelines for the use of MRI in screening for breast cancer, each recommendation was graded as “based on high quality evidence,” “based on expert consensus opinion,” or “insufficient evidence for or against.”
Contrast this to “alternative” medicine. Indeed, “evidence-based” guidelines in alternative medicine, until recently, have been nonexistent. Those that I’ve come across that claim to be “evidence-based” guidelines for the use of alternative medicine have virtually all been of extremely poor quality, with little or no attempt to base them on high quality RCT evidence. And, of course, the thought of even applying the term “evidence-based” guidelines to woo such as homeopathy, Reiki, various “detoxification” regimens, or applied kinesiology causes an intense urge in me to break out in hysterical laughter, an urge that I sometimes cannot resist given how often I come across articles by alties claiming the RCT is not an appropriate study design to identify therapeutic effects due to alternative medical modalities. It may be a legitimate criticism of scientific medicine that not enough of its recommendations are based on high quality RCT evidence and that we could do better, but in the alternative medicine world, there is an antiscientific attitude that downplays the value of evidence-based medicine in general and the RCT in particular.
If you think about it, this makes perfect sense, given how poor the evidence base is behind most alternative medicine modalities (particularly the ones not based on herbs from which actual pharmaceutical medicines might be derived). For these, the larger and higher quality RCTs almost invariably show much smaller or, more commonly, nonexistent effects compared with the usual panoply of poorer quality studies touted by alternative medicine aficionados as evidence for their woo. In other words, I’ll put the evidence base and results supporting “evidence-based medicine” up against the evidence base supporting “alternative” medicine any day of the week.