There are certain claims, certain statistics, that I like to refer to as "slasher statistics" or "slasher stats" (although sometimes I also call them "zombie statistics"). The reason is simple. Like the slashers of horror films, iconic characters like Michael Myers, Jason Voorhees, and Freddy Krueger, these slasher stats never truly die. No matter how many times they appear to have been killed with science, reason, and data at the end of one installment, they always show up in the next installment to wreak havoc, death, and destruction again. One such slasher stat is the oft-repeated claim that "medical errors" are the third leading cause of death in the United States. I first wrote about this particular abuse of statistics in 2016, when a surgical oncologist named Dr. Marty Makary co-authored an "Analysis" published in The BMJ entitled—you guessed it!—Medical error—the third leading cause of death in the US, in which he and Michael Daniel estimated that medical errors account for a quarter of a million deaths per year, making them, yes, the "third leading cause of death" in the US. It's a bogus factoid that never fails to raise my blood pressure, and I'm sorely tempted to start refuting it anew whenever I see it pop up.
Unsurprisingly, quacks, antivaxxers, and other medical conspiracy theorists quickly and gleefully embraced this "study" (which, by the way, was not a study at all but an "analysis" of some very flawed studies that didn't even qualify as a meta-analysis or systematic review), and the talking point that medical errors are the "third leading cause of death" quickly found its way into the national zeitgeist, propelled by a "Johns Hopkins University study." Worse, a lot of physicians, journalists, and scientists who really, really should know better seem to accept this figure as Gospel. No matter how many times I and others have tried to demonstrate that this particular statistic is bogus, derived as it was from bad studies coupled with extreme innumeracy, and that the true figure is likely at least an order of magnitude smaller (albeit admittedly still too high), this slasher stat lives on, just as slashers do after having apparently been killed at the end of one horror movie, only to be resurrected in the next.
The slasher stat about medical errors returns
To be honest, in the middle of a global pandemic, I hadn't expected STAT News, of all media outlets, to provide the digital ink to repeat this slasher stat, but so it did in the form of a "First Opinion" op-ed by two law professors, Michael Saks and Stephan Landsman, entitled Use systems redesign and the law to prevent medical errors and accidents. Naturally, they have a book to sell, Closing Death's Door: Legal Innovations to End the Epidemic of Healthcare Harm, and the "third leading cause" trope fits in very nicely with the message of the book.
Let’s just say that the article does not start well:
This summer, surgeons at University Hospitals in Cleveland transplanted a donor kidney into the wrong patient, while the patient the kidney had been destined for had to go back on the waiting list for another one to become available.
The most surprising thing about the story is not that a serious medical error occurred, but that it found its way into the news.
Injury or illness caused by the healer is called iatrogenic harm. It’s so widespread, so frequent, so massive, and so continuous that it rarely makes headlines. And unlike a plane crash or a building collapse, the vast majority of iatrogenic deaths can be kept under wraps — and they are.
The fact that this incident happened at the hospital where the general surgery residency that trained me is based piqued my interest. Granted, I've been gone from University Hospitals of Cleveland for around a quarter of a century now, so it's unlikely that anyone I used to know there from the transplant service, particularly the surgeons (who have probably retired by now), is still working there, but you never know. Don't get me wrong: this is a bad error. I did a little searching and found this local news report with more:
We have learned there were two kidney transplants happening at UH on July 2. The health system confirms a kidney meant for one patient was mistakenly transplanted into the wrong person. Fortunately, the person who received the wrong kidney seems to be accepting it and recovering, according to UH. Sources inside the hospital said the blood types were compatible.
Now we’re told the mistake wasn’t noticed until the second operation. UH won’t confirm how far along the surgery was when the transplant team realized they had the kidney intended for the first patient. UH said the second patient is back on the transplant list awaiting another organ.
Two “caregivers” — UH would not disclose if they are doctors, nurses, or other staff — are off the job pending an investigation.
This is, of course, the sort of error that should not happen. It could have caused immediate harm to the second patient, and it did cause harm in that the second patient had to wait longer on dialysis for a transplant. Moreover, as horrible as this particular error is, it is quite a rare kind of error, given the number of transplants that occur every year:
A quick Google search shows there have been problems around the country with transplants in the past, including with a kidney at the University of Southern California 10 years ago that put transplants on hold there.
The United Network for Organ Sharing that manages the national organ transplant system wrote a statement in response to News 5’s questions about the UH kidney issues.
It wrote in part, “…policies include verification processes meant to prevent errors such as the one reported (at UH) and they are exceedingly rare.”
So right away in Saks and Landsman’s article, I sense a bait-and-switch. They start with a striking but rare type of medical error that did not lead to any deaths and then pivot to this:
Death by medical error or accident is the nation’s leading cause of accidental death, exceeding all other causes of accidental death combined. Medical error and accidents kill approximately as many people each month in the U.S. as Covid-19 did before vaccines became available.
Yet there’s no Operation Warp Speed for preventing medical errors, no national investment of billions of dollars to develop solutions, and no national urgency about solving the problem.
The pedant in me can’t help but note that COVID-19 was the third leading cause of death in the US last year, pushing the slasher statistic of medical errors down to the fourth leading cause, but no one ever points that out—well, not “no one,” but very few people. In any event, I will give the authors credit for using another tried-and-not-so-true trope lamenting how there is no “Operation Warp Speed” for a chronic problem while ignoring the rather substantive difference between a chronic problem that’s been going on for years and a global pandemic that had just closed down the country a month or so before the program was named. (And, no, I’m not defending Operation Warp Speed, whose name I lambasted when it was first announced and blame as a significant contributor to vaccine hesitancy about the COVID-19 vaccines that were so quickly developed and tested.) The point is that Saks and Landsman are comparing apples and oranges the way that a lot of “integrative medicine” boosters like to do when they ask why there has never been an “Operation Warp Speed” for obesity. Again, I am not downplaying the significance of medical errors; rather I am pointing out that to address the problem of medical errors requires accurate, not hugely inflated, estimates of how common medical errors actually are.
Saks and Landsman are nowhere near finished, though:
Studies to determine the incidence of errors leading to injuries and deaths in hospitals began in the early 1970s. A meta-analysis of such studies concluded that the average annual death rate from such errors in the first decade of the 2000s was in the neighborhood of 250,000. That's more than enough to make medical care gone awry the number three cause of death in the U.S., after heart disease and cancer.
No, no, no, no, no! The article by Marty Makary and Michael Daniel to which they refer was not a meta-analysis! Not even its authors claimed that it was! (Seriously, this claim alone tells me that Saks and Landsman are in way over their heads.) This article wasn't even close to a meta-analysis! It was billed as an "analysis" but in reality was just a medical op-ed. I deconstructed this awful "analysis" in detail back when it was first published. Let me paraphrase my first impression of the article from then, starting with how it wasn't a fresh study at all, or even a meta-analysis. I described it instead as a regurgitation of already existing data, no matter how many news outlets referred to it as a "study." Basically, all Makary did was pool existing data to produce a point estimate of the death rate among hospitalized patients reported in the literature, extrapolated to the reported number of patients hospitalized in 2013, based on four major existing studies published since the Institute of Medicine (IOM) report "To Err Is Human" in 1999. In reality, it's more an op-ed calling for better reporting of deaths from medical errors (something I wholeheartedly support), with extrapolations based on studies with small numbers.
As Kaveh Shojania and Mary Dixon-Woods put it in a commentary published in BMJ Quality & Safety:
Though the paper by Makary and Daniel was widely cited as ‘a study’, it presented no new data nor did it use formal methods to synthesise the data it used from previous studies. The authors simply took the arithmetic average of four estimates since the publication of the IOM report, including one from HealthGrades,5 a for-profit company that markets quality and safety ratings, a report from the US Office of the Inspector General (OIG)6 and two peer-reviewed articles (table 1).7 ,8 The paper did not apply any established methodology for quantitative synthesis nor did it include a discussion either of the intrinsic limitations of the studies used or of the errors associated with the extrapolation process. To bolster their claims, Makary and Daniel did highlight the agreement between their estimates and that of a similar analysis published a few years ago by James.9 The apparent consensus is not, however, surprising, since they use mostly the same studies (listed in table 1, together with a more recent analysis commissioned by the Leapfrog group10).
The same article details the many serious errors in Makary and Daniel’s estimates and is well worth reading.
There were a number of problems that I myself discussed, including Makary's definition of medical errors, which was so broad as to include a large number of deaths that were not due to any specific error or errors, but rather to the frequency of expected complications from medical procedures. I also noted that the claim of over 251,000 deaths in hospitals as a result of medical errors per year was innumerate. Given that, according to the CDC at the time, only 715,000 of the 2.6 million deaths that occur every year in the US occur in hospitals, if Makary and Daniel's numbers were to be believed, then some 35% of inpatient deaths would be due to medical errors. Or, as Ben Mazer and Chadi Nabhan noted, citing the upper end of Makary and Daniel's estimates for the number of deaths due to medical errors per year in the US:
Assuming 440,000 were an accurate portrayal of annual preventable deaths that occur in hospitals, the context in which these studies were conducted and where about 715,000 people die annually,18 this implies 62% of all hospital deaths are caused by preventable medical errors. Taking the 251,454 estimate, almost 34% of hospital deaths would be due to medical errors. We do not believe most physicians could reconcile such a high percentage of hospital deaths being caused by preventable medical error. The estimates' authors propose even these are fewer than the actual figures. Makary and Daniel believe their estimate "understates the true incidence of death due to medical error." James doubled his estimate to account for hypothetical underreporting but claimed even this "is probably an underestimate," suggesting a factor of three might be better, although doing so would likely have placed preventable errors as the leading cause of death in the USA. It has been said these calculations lead to a "bottomless well of medical error."15
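This arithmetic is easy to check for yourself. Here is a back-of-the-envelope sketch, using only the figures quoted above (the ~715,000 annual in-hospital deaths and the two Makary and Daniel estimates); it is an illustration of the implied proportions, not a new analysis:

```python
# Back-of-the-envelope check: what share of ALL in-hospital deaths would
# Makary and Daniel's estimates imply? Figures are the ones quoted above.
annual_hospital_deaths = 715_000  # approximate annual US in-hospital deaths (CDC, at the time)

for error_deaths in (251_454, 440_000):  # Makary and Daniel's lower and upper estimates
    share = error_deaths / annual_hospital_deaths
    print(f"{error_deaths:,} error deaths -> {share:.0%} of all hospital deaths")
```

In other words, roughly a third to nearly two-thirds of every death in a US hospital would have to be caused by a preventable medical error, which is exactly the implausibility that Mazer and Nabhan highlight.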
I have in the past grimly joked about how there seems to be an “arms race” to come up with the highest number (a “bottomless well,” if you will) for medical error. On quack websites, for instance, the number is even higher. For instance, über-quack Gary Null teamed with Carolyn Dean, Martin Feldman, Debora Rasio, and Dorothy Smith to write a paper “Death by Medicine,” which estimated that the total number of iatrogenic deaths is nearly 800,000 a year, which would be the number one cause of death, if true (cancer and heart disease don’t kill that many per year) and nearly one-third of all deaths in the US. Basically, when it comes to these estimates, it seems as though everyone is in a race to see who can blame the most deaths on medical errors. It wouldn’t surprise me if one day I see a quack estimate of over a million deaths a year in the US due to medical error.
Oh, wait. Saks and Landsman are heading there:
But hospitals are not the only place where health care is delivered. Vastly more patient contacts occur outside of hospitals, where the error profile is different, dominated by diagnostic and medication errors. The limited data that exist suggest that the number of deaths caused by iatrogenic harm outside of hospitals is roughly equal to the number that occur inside hospitals.
Interestingly, no citations are given for this figure, but if you take at face value Makary's vastly inflated figure of 250,000-440,000 deaths per year in US hospitals due to medical errors and buy Saks and Landsman's estimate, you end up with 500,000-880,000 deaths per year in the US due to medical error, which is getting into Gary Null territory and close to the one-million-deaths-per-year territory I grimly joked about. Remember, again, that, according to the latest CDC statistics, there are now roughly 2.85 million deaths per year in the US population of 330 million, or 869.7 deaths per 100,000 population. 880,000 deaths per year would be 31% of all deaths in the US every year. Again, however urgent and serious the problem of medical errors in the US is, it is rank innumeracy to think that medical error causes 18-31% of all deaths every year in the US, possibly more given the propensity of those citing these numbers to claim that they are underestimates.
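To make the innumeracy concrete, here is a quick sketch of the percentages implied by doubling the in-hospital range, using the round numbers from the paragraph above:

```python
# If out-of-hospital error deaths roughly equal in-hospital ones, as Saks
# and Landsman suggest, doubling Makary's range gives the implied share of
# ALL US deaths. Figures are the round numbers cited in the text above.
total_us_deaths = 2_850_000  # approximate annual US deaths (CDC)

for in_hospital in (250_000, 440_000):
    combined = 2 * in_hospital  # in-hospital + (claimed equal) out-of-hospital
    print(f"{combined:,} deaths -> {combined / total_us_deaths:.0%} of all US deaths")
```

That is where the 18-31% figure comes from: nearly one in five to nearly one in three of every death in the country, from any cause, attributed to medical error.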
Saks and Landsman then pivot back to anecdotes:
The ease with which medical errors can occur is striking. To perform a bronchoscopy to remove a sunflower seed that went down a 2-year-old’s airway instead of his esophagus, a doctor in New Mexico inadvertently sedated the boy with an adult dose of morphine, which caused him to stop breathing and led to severe permanent brain damage. A lab in New York state mislabeled a tissue sample, causing a woman who did not have breast cancer to get a double mastectomy while cancer kept growing inside the woman who had the disease. Surgeons still sometimes get left and right confused, and it’s not uncommon for patients to get the wrong medication or the wrong dose, as happened to Boston Globe health reporter Betsy Lehman, who died from an overdose of chemotherapy drugs that were miscalculated.
No one denies that these are horrible medical errors that should not occur. The question is: How frequently do errors like these occur? Estimating that turns out to be a lot more difficult than it might seem on the surface. Unsurprisingly, Saks and Landsman are fans of a common method for estimating such errors, trigger tools, which are known for hugely overestimating the prevalence of medical error. Here we go:
Because hospital medical records often do not list incidents of iatrogenic harm, novel methods have been developed to detect it. The Institute for Healthcare Improvement created a technique known as the Global Trigger, which scours medical records for subtle indications that a patient suffered unexpected harm. A 2013 meta-analysis of Global Trigger studies found 10 times as many adverse events as found by conventional records reviews, with deaths numbering as many as 440,000 per year. Other studies, using on-scene observers, have found comparable numbers of incidents.
Here’s the problem. Trigger tools are very blunt instruments that do not distinguish between potentially preventable harms due to error and complications from procedures that occur at known rates. I discussed this 2013 study myself when I discussed Makary’s “study.” Once again, Saks and Landsman do not know what is and is not a meta-analysis; the study by John T. James that they cite is not one. Don’t believe me? Just look at the methods from the source for estimating the frequency of preventable adverse events (PAEs):
The approach to the problem of identifying and enumerating PAEs was 4-fold: (1) distinguish types of PAEs that may occur in hospitals, (2) characterize preventability in the context of the Global Trigger Tool (GTT), (3) search contemporary medical literature for the prevalence and severity of PAEs that have been enumerated by credible investigators based on medical records assessed by the GTT, and (4) compare the studies found by the literature search.
That’s it. Nothing about systematic review or meta-analysis methods. Seriously, given that Saks and Landsman seem not to know what the hell a meta-analysis is, much less what constitutes a good or a bad one, I have a hard time taking them seriously, and STAT really dropped the ball in terms of editorial quality by not even noticing this glaring and repeated error.
The problem with trigger tools
Trigger tools, including the Global Trigger Tool, are way too sensitive. As I am wont to do, I will cite Mark Hoofnagle on this topic:
Exactly. All these huge estimates come from counts of occurrences that are used as proxies for medical error, whether or not they are actually good proxies. Moreover, even the errors that are detected are unlikely to have been a direct cause of patient death. At most, they “contributed,” and arguably many might not even have done that. Shojania and Dixon-Woods explain more:
Some of the widely quoted estimates of deaths due to medical error, including the IOM estimates,1 Makary and Daniel4 and James,9 are based on studies that in fact did not set out to estimate the rate of mortality linked to medical error. Instead, these primary studies sought to measure the prevalence of harm from medical care (ie, adverse events).
Consistent with their primary purpose, these studies included no methodology for making judgements about the degree to which adverse events played a role in any deaths that subsequently ensued. For instance, a patient admitted to the intensive care unit with multisystem organ failure from sepsis might develop a drug rash from an antibiotic to which he has exhibited a past allergic reaction. This patient has certainly experienced a preventable adverse event. But, if the patient eventually dies of progressive organ dysfunction a week after the antibiotic was changed, the medical error probably did not cause the death. An error that has occurred close to a death is not a sufficient basis for concluding that the error is the cause of death. Yet these studies do not have an explicit methodology for handling this situation—for distinguishing deaths where error is the primary cause from deaths where errors occurred but did not cause a fatal outcome.
A further problem with basing estimates on studies that use adverse event and trigger tools of the type used by Makary and Daniel (and in the similar review by James9) is that they typically involve very small numbers of deaths. For instance, one study used a trigger tool approach to review 100 charts per quarter from each of 10 hospitals in North Carolina from January 2002 to December 2007.7 This study sought to detect any decline in adverse events that might have occurred as a result of patient safety efforts. In passing, the authors report that 14 adverse events were judged to have ‘caused or contributed to a patient’s death’. These 14 deaths represented 0.6% of the total patients in the study. Similarly, one US government report included three preventable deaths;11 another reported 12.6 One of the widely quoted peer-reviewed studies identified nine deaths.8 Any extrapolation that generalises from so few deaths (14 or fewer) to so many (200 000–400 000)4 ,9 surely warrants substantial scepticism.
The innumeracy is epic here too, in that these estimates of 250,000-440,000 deaths due to medical error per year are based on rather tiny numbers extrapolated unjustifiably to millions of people. So what is the likely number of such deaths? Unsurprisingly, it’s much lower (albeit, again, still too high!):
The need for scrutiny is particularly important because when studies are designed specifically to identify preventable deaths, they typically report low rates. Studies that have reviewed inpatient deaths and asked physician reviewers to judge preventability have reported proportions under 5%, typically in the range of 1%–3%.12–15 The largest and most recent of these studies13 reported that trained medical reviewers judged 3.6% of deaths to have at least a 50% probability of being avoidable.
As I discussed before, a recent systematic review and meta-analysis published just before the COVID-19 pandemic hit suggests that the true number is just over 22,000 per year. Again, that’s still too high, but it is at least an order of magnitude lower than the commonly cited numbers.
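For a sense of scale, a quick sketch comparing the commonly cited range to that meta-analytic estimate, using only the numbers already given in the text:

```python
# How many times larger are the "slasher stat" estimates than the roughly
# 22,000/year figure from the systematic review and meta-analysis?
meta_estimate = 22_000  # preventable deaths per year, per the meta-analysis

for slasher in (250_000, 440_000):  # the commonly cited range
    print(f"{slasher:,} is ~{slasher / meta_estimate:.0f}x the meta-analytic estimate")
```

A factor of roughly 11 to 20, which is why "at least an order of magnitude" is, if anything, a conservative way to put it.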
Slasher stat or meme: The “arms race” to inflate the death toll
How can these overestimates happen? Shojania and Dixon-Woods explain some more, contrasting how Makary and Daniel estimated deaths due to medical errors with how it should be done. First, they do it in response to arguments like this:
Note the disingenuous conflation of medical errors with drug and device complications and surgical deaths. Another form of this argument is mentioned by Shojania and Dixon-Woods:
In listservs and blogs discussing the controversy over deaths due to medical error, we have encountered responses to any criticisms of the estimated death toll that take the form: “But those numbers don’t even include…deaths due to unnecessary care, diagnostic errors, excessive radiation from overuse of radiologic investigations …”. In other words, the argument amounts to, “Even if the analysis did have some problems, it didn’t include other important types of deaths due to medical error. So, the number is probably still about right”.
It’s the “arms race” again. If medical errors alone don’t get you to a huge number of deaths, then conflate and add complications, unnecessary care, or other types of death and complications. Unfortunately, as Shojania and Dixon-Woods note:
That said, this is a very different approach in estimating deaths due to medical error from that of extrapolating from adverse event studies. This approach starts with identifying all the important types of medical errors that we can think of—diagnostic errors, underuse of beneficial therapies (eg, failure to follow guidelines for the management of coronary artery disease), overuse of non-beneficial ones and so on. Then, to generate a total, it combines the frequency of these errors with estimates of how often each causes death. Even putting aside the speculative nature of many of the inputs to such an estimate, this approach almost certainly hugely overestimates mortality attributable to error. A patient can have a diagnostic error in connection with one aspect of their care, a medication safety problem with another, and not receive guideline-concordant care for yet another condition. Each of these categories of medical error may have an associated attributable mortality. Yet, the patient can only die once. Adding up the attributable mortalities for every type of error will substantially overestimate deaths due to errors.
Another problem with “But we didn’t even include A, B, and C when we counted up all the deaths due to medical error” is that this approach is unevenly applied. The same reasoning is not so assiduously pursued for other leading causes of death—arguing, for example, that many deaths from heart disease, stroke and kidney failure include cases of diabetes, which would therefore make it the leading cause of death.
The result is an “arms race” to come up with the largest estimates for the number of deaths attributable to medical error, which is how we get articles like Makary and Daniel’s. Why? Because the more dire you can portray the problem as being, the more attention the problem is likely to get, no matter how inflated the statistics. It comes as no surprise to me that, since the pandemic, Marty Makary has become a COVID-19 contrarian, downplaying the severity of the pandemic. These days, he’s been reduced to saying things like this:
The answer is: Yes. As a surgeon, I am embarrassed by Dr. Makary, who really should know better but does not, just as I was embarrassed when he published his 2016 paper that basically created the “third leading cause of death” myth. It is the same phenomenon that led me so long ago to repeatedly invoke a recurring joke about wanting to put a paper bag over my head every time I came across a surgeon denying evolution and spewing creationist pseudoscience. In the age of COVID-19, I’m seriously thinking of resurrecting that long-abandoned recurring joke.
In fairness, their touting of hugely inflated death counts aside, Saks and Landsman do make some good points about strategies to decrease the number of medical errors through systems approaches. That said, their approval of denial-of-payment programs by the Centers for Medicare and Medicaid Services, which refuse to pay for “avoidable” care such as treatment for serious hospital-acquired conditions, is somewhat divorced from reality. None of the hospital-acquired conditions so penalized (such as catheter-related infections) is 100% avoidable, and penalizing hospitals this way can create perverse incentives not to provide care to the patients at greatest risk for these complications. Unfortunately, their good points are seriously undermined by their reliance on dubious statistics regarding “death by medical errors” to sell their message.
Worse, these dubious statistics do real harm. Again, as Shojania and Dixon-Woods note, these sorts of estimates are so innumerate and divorced from the reality that clinicians find themselves in that “most healthcare professionals will strain to believe that their efforts to help patients in fact account for one-third of all hospital deaths” (or even nearly two-thirds, if the highest estimates are to be believed), and parading “dubious statistics instead has the effect of disengaging clinicians from what may appear to be a field lacking in credibility, damaging their confidence in interventions intended to improve safety and threatening professional-patient relationships.” Unfortunately, these dubious statistics are slasher stats.
Or maybe I should call them “memes,” as Mazer and Nabhan do:
Why have these recent estimates been accepted so quickly and widely, despite presenting re-analyses of older data? One way to understand the success of these estimates is to view them as potent cultural “memes.” Biologist Richard Dawkins coined the term to describe ideas that, like replicating genes, propagate “by leaping from brain to brain.”16 Memes, according to Dawkins, may be subject to many of the same evolutionary pressures as their genetic counterparts, and thus certain characteristics make ideas more fit to spread throughout the “meme pool” of society.
Whatever you call these estimates, memes or slasher stats, they are apparently unkillable, no matter how desperately they need to die. They serve no one, least of all patients and the clinicians who are actually dedicated to working to increase the quality of care and decrease the number of medical errors.