
Is medical error really the third most common cause of death in the US?

The claim that medical errors are the third leading cause of death in the US has always rested on very shaky evidence; yet it has become common wisdom that is cited as though everyone accepts it. But if estimates of 250,000 to 400,000 deaths due to medical error are way too high, what is the real number? A recently published study suggests that it’s almost certainly a lot lower.

I say this at the beginning of nearly every post that I write on this topic, but it bears repeating. It is an unquestioned belief among believers in alternative medicine and even just among many people who do not trust conventional medicine that conventional medicine kills. Not only does exaggerating the number of people who die due to medical errors fit in with the world view of people like Gary Null, Mike Adams, and Joe Mercola, but it’s good for business. After all, if conventional medicine is as dangerous as claimed, then the quackery peddled by the likes of Null, Adams and Mercola starts looking better in comparison. Unfortunately, there are a number of academics more than willing to provide quacks with inflated estimates of deaths due to medical errors.

The most famous of these is Dr. Martin Makary of Johns Hopkins University, who published a review (not an original study, as those citing his estimates like to claim) estimating that the number of preventable deaths due to medical error is between 250,000 and 400,000 a year, thus cementing the common (and false) trope that “medical error is the third leading cause of death in the US” into the public consciousness and thereby doing untold damage to public confidence in medicine. As I pointed out at the time, if this estimate were correct, it would mean that between 35% and 56% of all in-hospital deaths are due to medical error and that medical error causes between 10% and 15% of all deaths in the US. The innumeracy required to believe such estimates beggars belief.
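The arithmetic behind that innumeracy check is easy to reproduce. A minimal sketch, using rough US figures for the period that I'm assuming here (about 715,000 in-hospital deaths and 2.6 million total deaths per year; these denominators are not from Makary's paper):

```python
# Sanity-check the "third leading cause of death" estimates.
# ASSUMED denominators (rough US figures for the period, not from the paper):
hospital_deaths = 715_000    # approximate annual in-hospital deaths
total_deaths = 2_600_000     # approximate annual deaths from all causes

for claimed in (250_000, 400_000):
    print(f"{claimed:,} claimed deaths -> "
          f"{claimed / hospital_deaths:.0%} of hospital deaths, "
          f"{claimed / total_deaths:.0%} of all US deaths")
```

In other words, the inflated estimates require roughly a third to over half of everyone who dies in a hospital to have been killed by a medical error, which is the implausibility being pointed out.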

Of course, even with academics providing them with hugely inflated estimates of deaths due to medical error, quacks remain unsatisfied. Perhaps the most famous estimate written by quacks is Gary Null’s Death by Medicine, each new version of which increases the estimate of the number of people who die because of medical errors and “conventional medicine,” to the point where his estimate approaches 800,000 deaths per year, or more than one third of all deaths in the US. (I strongly suspect that Null will find a way to get that estimate up over one million before too long.) That’s why it was refreshing to read a new meta-analysis (PDF) written by investigators at Yale University and published last week. It provides an estimate that’s significantly larger than that in the last paper on the topic that I discussed, but more than ten-fold lower than the inflated “third leading cause of death” numbers.

Before I discuss the new Yale paper, I will, as I always do, provide a bit of history. The attempt to quantify how many deaths are attributable to medical error began in earnest in 1999 with the Institute of Medicine’s To Err Is Human, which estimated that the death rate due to medical error was 44,000 to 98,000 a year, roughly one to two times the death rate from automobiles. (This is the estimate to which the Yale investigators, led by Craig Gunderson with first author Benjamin Rodwin, compare their estimates.) In response to the report, the quality improvement (QI) revolution began. Every hospital began implementing QI initiatives. Indeed, I was co-director of a statewide QI effort for breast cancer patients for three years. Also, as I mentioned above, the estimates for “death by medicine” seemingly never do anything but keep increasing. They went from 100,000 to 200,000 and now as high as 400,000. Basically, when it comes to these estimates, it seems as though everyone is in a race to see who can blame the most deaths on medical errors, and each time a larger estimate is published the press gobbles it up uncritically. In contrast, each time a study publishes a more reasonable estimate, all we hear are crickets.

How did we get here? As Mark Hoofnagle pointed out, much of the problem comes down to methodology, in particular the use of the Institute for Healthcare Improvement’s Global Trigger Tool, which is arguably far too sensitive. Also, as I explained in my deconstruction of the Johns Hopkins paper, the authors conflated unavoidable complications with medical errors, didn’t consider very well whether the deaths were potentially preventable, and extrapolated from small numbers. Many of these studies also used administrative databases, which are primarily designed for insurance billing and thus not very good for other purposes.

The Yale paper

If the estimates between 200,000 and 400,000 are way too high, what is the real number of deaths that can be attributed to medical error? Here’s where the meta-analysis by Rodwin et al comes in, estimating the number of preventable deaths at just over 22,000 per year. That’s not even in the top ten causes of death. In fact, preventable deaths due to medical error represent less than 1% of all deaths. That number is, of course, still too high, and efforts to decrease it should and will continue. (It can never be zero, given that medicine is a system run by human beings, who are inherently imperfect and sometimes make mistakes.) However, it’s nowhere near the third leading cause of death in the US.

How did Rodwin et al derive their estimate? First, here’s their rationale:

In 1999, the Institute of Medicine (IOM) published its seminal report on medical errors, To Err Is Human: Building a Safer Health System.1 This widely cited analysis extrapolated from two studies of adverse events in hospitals and concluded that between 44,000 and 98,000 Americans die annually due to preventable medical error. The two referenced studies evaluated deaths from medical error by first determining the frequency of adverse events in hospitals and then separately deciding whether the adverse event was preventable and whether the adverse event caused harm.2, 3 More recently, a report including several additional studies concluded that medical error causes more than 250,000 inpatient deaths per year in the USA, making it the third leading cause of death behind only cancer and heart disease.4

Studies that review series of admissions and determine whether adverse events occurred, whether the events were preventable, and what harms resulted have been criticized for indirectness when used to estimate the number of deaths due to medical error.5, 6 In contrast, studies of inpatient deaths offer a more direct way of estimating the rate of preventable deaths. We undertook a systematic review and meta-analysis of studies that reviewed case series of inpatient deaths and used physician review to determine the proportion of preventable deaths.

To examine the question of how many deaths per year are preventable and possibly due to medical error, the authors carried out a systematic review and meta-analysis and took care to make separate estimates for patients with less than a three-month life expectancy and those with more than a three-month life expectancy. (Spoiler alert: They found that the vast majority of preventable deaths occur in patients with less than a three-month life expectancy.) They also only included studies in which the included cases were reviewed by physicians to determine if the death was preventable:

All studies of case series of adult patients who died in the hospital and were reviewed by physicians to determine if the death was preventable were included. Non-English studies were included and translated using Google Translate, which has been shown to be a viable tool for the purpose of abstracting data for systematic reviews.10 Studies which evaluated a series of inpatient admissions to determine if there was a preventable adverse event, and then determined if that adverse event contributed to death, such as those included in the 1999 Institute of Medicine report, were excluded. We primarily searched for studies of consecutive or randomly selected inpatient deaths, but also included studies that used cohorts with selection criteria but analyzed these separately. Studies limited to specific populations such as pediatric, trauma, or maternity patients were excluded because our primary research question was to determine the overall rate of preventable mortality in hospitalized patients and these populations are less generalizable.

The winnowing process to select the studies resulted in sixteen studies from a variety of countries that fit the inclusion criteria, eight of which were of random or consecutive groups of patients and eight of which were of cohorts with selection criteria, the latter of which were analyzed separately. Four of the studies examined data from multiple hospitals. Of the eight studies that could be included in a quantitative meta-analysis (the ones analyzing random or consecutive groups of patients), all defined preventable deaths as those rated as having a greater than 50% chance of having been preventable. Seven of these studies used a Likert scale to rate preventability, while one used a scale of 0–100%. Five studies used multiple reviewers; three of these used consensus to arbitrate differences of opinion, while one used a third reviewer and one used latent class analysis. Six of the studies included adverse events prior to admission.

The results were as follows for the percentages of hospital deaths deemed more likely than not to have been preventable:

The overall pooled rate was 3.1% (95% CI 2.2–4.1%). Individual studies ranged from 1.4 to 4.4% preventable mortality with statistically significant evidence for heterogeneity (I2 = 84%, p < 0.001). The eight studies with selection criteria reported rates of preventable mortality ranging from 0.5 to 26.9%. One study from 1988 reported that 26.9% of 182 deaths for myocardial infarction, stroke, or pneumonia were > 50% likely to have been preventable.23 A study which evaluated 124 patients from the Emergency Department who died within 24 h of admission found that 25.8% of these deaths could have been prevented.29 Another study from 1994 reported that 21.6% of 22 deaths from certain diagnostic groups were at least “somewhat likely” to have been preventable.28 A large recent study from the Netherlands reported 9.4% of 2182 deaths as “potentially preventable.” The remaining studies with selection criteria reported rates of 0.5–6.2% preventable deaths.
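As an aside, the pooled rate and the I² heterogeneity statistic quoted above come from standard meta-analytic formulas. Here’s a minimal sketch of simple inverse-variance fixed-effect pooling with hypothetical study counts (the numbers below are illustrations, not the actual studies from the paper, and the paper’s exact model may differ):

```python
# Sketch of how a pooled proportion and the I² heterogeneity statistic
# are computed. The study counts below are HYPOTHETICAL illustrations,
# not the actual studies from Rodwin et al; this is simple
# inverse-variance fixed-effect pooling, and the paper's model may differ.

def pool_proportions(events, totals):
    """Return (pooled proportion, I²) via inverse-variance weighting."""
    props = [e / n for e, n in zip(events, totals)]
    variances = [p * (1 - p) / n for p, n in zip(props, totals)]  # p(1-p)/n
    weights = [1 / v for v in variances]
    pooled = sum(w * p for w, p in zip(weights, props)) / sum(weights)
    # Cochran's Q, then I² = (Q - df) / Q, floored at 0
    q = sum(w * (p - pooled) ** 2 for w, p in zip(weights, props))
    df = len(props) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, i2

# Three hypothetical studies: preventable deaths / total reviewed deaths
pooled, i2 = pool_proportions([30, 80, 45], [1000, 2000, 3000])
print(f"pooled rate = {pooled:.1%}, I² = {i2:.0%}")
```

A high I² like the paper’s 84% means the studies’ rates differ by considerably more than chance alone would predict, which is why the individual studies range so widely around the pooled estimate.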

And overall:

Overall, our systematic review found eight studies of hospitalized patients that reviewed case series of consecutive or randomly selected inpatient deaths and found that 3.1% of 12,503 deaths were judged to have been preventable. Additionally, two studies reported rates of preventable deaths for patients with at least 3 months life expectancy and reported that between 0.5 and 1.0% of these deaths were preventable. If these rates are multiplied by the number of annual deaths of hospitalized patients in the USA, our estimates equate to approximately 22,165 preventable deaths annually and up to 7,150 preventable deaths among patients with greater than 3 months life expectancy.31

I note that that latter estimate of ~7,000 deaths a year among patients with more than a three-month life expectancy is pretty close to the estimate of ~5,000 preventable deaths per year noted in a study from last year that I discussed.
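For those who want to check the multiplication in the passage quoted above, a minimal sketch. The inpatient-death denominator below (~715,000 per year) is back-calculated from the paper’s own arithmetic (22,165 / 0.031); the actual figure the authors used comes from their reference 31:

```python
# Reproducing the Yale estimates from the pooled rates.
# ASSUMPTION: ~715,000 annual US inpatient deaths, back-calculated from
# the paper's numbers rather than taken from its reference 31.
annual_inpatient_deaths = 715_000

overall_preventable_rate = 0.031   # pooled rate, all inpatient deaths
rate_gt_3mo_expectancy = 0.010     # upper bound, >3 months life expectancy

print(round(annual_inpatient_deaths * overall_preventable_rate))  # 22165
print(round(annual_inpatient_deaths * rate_gt_3mo_expectancy))    # 7150
```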

So what, specifically, were the errors that led to preventable hospital deaths? I don’t know why the authors buried the table in the supplemental materials, but I dug it out and examined the main causes. (The numbers in parentheses are the ranges of percentages of preventable deaths between the studies examined.) The main causes are:

  • Clinical monitoring or management (6-53%)
  • Diagnostic error (13-47%)
  • Surgery/procedure (4-38%)
  • Drug or fluid-related (4-21%)
  • Other clinical (4-50%)
  • Infection or antibiotic related (2-14%)
  • Supervision (24%, there being only one study citing this as a cause)
  • Technical problem (6-9%)
  • Inpatient fall (6.5%, only one study again)
  • Transition of care (3.2%, only one study again)

Clearly, the range is wide, depending on the hospital and country. The top three don’t surprise me either, although, as I’ve pointed out before, for surgical procedures it’s not always easy to tell if a surgical mistake versus a known complication from the surgery is the cause of death. Even when carried out by expert hands, surgical procedures can cause significant complications (such as bleeding) in some patients and even death in a handful. This is true for even seemingly very low risk procedures. Similarly, diagnostic errors are tricky as well, as the error often only becomes apparent in retrospect. Nonetheless, this analysis does provide an idea of the sorts of medical errors that can result in potentially preventable deaths. Moreover, because the standard was simply that a death was more likely than not to have been due to medical error and thus preventable, the figure of 22K deaths/year is likely an overestimate, given that it includes a lot of deaths whose cause might not have been medical error.

So how do Rodwin et al account for the huge difference between their estimate and the Institute of Medicine’s estimate of 44,000-98,000 preventable deaths due to medical error per year and, in particular, the ludicrously inflated estimates of greater than a quarter of a million deaths that produced the “third leading cause of death”? It’s mainly because they didn’t use trigger tools to look for complications and then make estimates of how likely those complications were to be preventable and to have resulted in the death of the patient:

These results contrast with earlier estimates of medical error which reported higher rates of preventable mortality. The IOM report as well as similar subsequent reviews has reported much higher estimates.4 Numerous authors have criticized these prior estimates for varied methodologic reasons,5, 6 including poorly described methods for determining preventability and causality for death, as well as for indirectness—these studies have in common that they primarily attempt to define the incidence of adverse events in series of hospitalized patients and then secondarily estimate the likelihood that the adverse event was preventable and the likelihood that the adverse event, rather than underlying disease, caused the patient’s death. The studies we reviewed have the advantage of both using as their denominator a series of inpatient deaths rather than admissions and directly assessing the deaths for preventability.

This study is not without limitations, however. For one thing, the studies included rely only on physician judgment to determine whether a given death was preventable. Given that there is no agreed-upon standard for determining whether a death was preventable, this methodology introduces potential biases, such as hindsight bias after poor outcomes. This particular bias, sometimes called the “knew-it-all-along” phenomenon, is very common after traumatic events or poor outcomes and describes the human tendency, when examining an event that has already happened, to view the outcome as more predictable than it actually was at the time the decisions leading to it were being made. Also, all determinations were made by retrospective chart review, and anyone who’s ever taken care of patients in a hospital knows that the medical record often lacks important information regarding management and death. Perhaps that’s why the inter-rater reliability between doctors reviewing these charts was consistently in the fair to moderate range in these studies. In any event, hindsight bias would tend to increase the estimate of preventable deaths, as the doctors reviewing the chart, knowing the outcome, might be overconfident about how predictable that outcome was.

Another factor that tends to inflate the estimates is that six of the eight studies included medical errors from prior admissions or outpatient care in their analyses, which could potentially lead to an overestimation of the number of preventable deaths due to care during the hospitalization itself. Only one study tried to separate out the two, and it found that 25% of preventable deaths were related to prior outpatient events. On the other hand, I’d argue that a medical error is a medical error, regardless of when it happened. If a doctor made an error that harmed the patient in the outpatient setting and the patient died in the hospital after being admitted for the harm caused by that error, that’s still a death due to medical error.

There was also an interesting quirk:

A limitation of our study is also the limited geographic representation due to a lack of studies from the USA. The eight studies included in the meta-analysis are from Europe and Canada. The three studies from the USA were not included in the meta-analysis since they used selected cohorts of patients with an oversampling of specific conditions, and thus per protocol were not pooled with studies of consecutive or randomly selected cohorts.

Why do American studies use a selected cohort methodology that oversamples specific conditions, instead of an approach that’s more directly applicable to coming up with good estimates of preventable hospital mortality? Who knows? (Maybe someone out there does.)

Implications of a lower estimate of medical errors

The bottom line is that, if this study is an accurate reflection of the true number of preventable deaths due to medical error (and I think it’s very good), only around 7,150 people with a life expectancy of more than three months die preventable deaths from medical error each year, and the vast majority of such deaths occur in people not expected to live more than three months. We’re talking estimates more than an order of magnitude smaller than the “one third of all deaths” trope. This has implications.

For instance:

“We still have work to do, but statements like ‘the number of people who die unnecessarily in hospitals is equal to a jumbo jet crash every day’ are clearly exaggerated,” said corresponding author Benjamin Rodwin, assistant professor of internal medicine at Yale.

More importantly, after noting that recent high estimates of preventable deaths are not plausible given that only a small fraction of hospital deaths are preventable, and that such inflated estimates undermine the credibility of the patient safety movement and divert attention from other important patient safety priorities, Rodwin et al write:

Another important implication of our study relates to the use of hospital mortality rates as quality measures. Overall hospital mortality rates and disease-specific mortality rates continue to be reported in many countries in Europe and the USA.32, 33 In the USA, overall hospital mortality rates are reported by the Veterans Health Administration and disease and procedure-specific mortality rates are used by the Centers for Medicare and Medicaid Services (CMS). Disease-specific mortality rates are also used to determine hospital reimbursement as part of CMS’ Hospital Value-Based Purchasing Program. Our results show that the large majority of inpatient deaths are not due to preventable medical error. Given this finding, variation in hospital mortality rates is more likely due to variation in disease severity and non-disease-related factors that affect the location of a patient’s death. Although disease severity is taken into account through the reporting of adjusted mortality rates, numerous critiques have pointed out the limitations of this approach.34,35,36,37

Even if disease patterns and severity were uniform, however, there would likely be variation in hospital mortality rates because of variation in the use of hospitals at the end of life.28, 37 If it is assumed that the vast majority of hospital deaths are unavoidable, then variation in inpatient mortality should be seen as a measure of where patients die, rather than whether they die. Numerous studies have found that many non-disease-related factors affect location of death, including referral to palliative care, home support, living situation, functional status, and patient and family preferences.38

Elsewhere, the authors note that in Norway there is no hospice system and therefore patients are often admitted to the hospital for end-of-life care, an observation that surprised me. They further point out that this could be why the study from Norway that they included in their meta-analysis reported the lowest rate of preventable mortality. Patients admitted for end-of-life care were counted as unpreventable deaths, inflating the denominator and thus diluting the percentage of preventable deaths relative to hospitals in countries with hospice systems. In other words—surprise! surprise!—hospital mortality rates are a poor measure of quality for inpatient hospital care.

More importantly, if we’re truly going to improve quality of care and patient safety, it’s important to focus our efforts where they will do the most good. To do that, we need accurate data. Innumerate and highly implausible estimates that result in the “third leading cause of death” trope credulously bandied about by the press and amplified by quacks are actually antithetical to improving quality of care.

By Orac

Orac is the nom de blog of a humble surgeon/scientist who has an ego just big enough to delude himself that someone, somewhere might actually give a rodent's posterior about his copious verbal meanderings, but just barely small enough to admit to himself that few probably will. That surgeon is otherwise known as David Gorski.

That this particular surgeon has chosen his nom de blog based on a rather cranky and arrogant computer shaped like a clear box of blinking lights that he originally encountered when he became a fan of a 35 year old British SF television show whose special effects were renowned for their BBC/Doctor Who-style low budget look, but whose stories nonetheless resulted in some of the best, most innovative science fiction ever televised, should tell you nearly all that you need to know about Orac. (That, and the length of the preceding sentence.)

DISCLAIMER: The various written meanderings here are the opinions of Orac and Orac alone, written on his own time. They should never be construed as representing the opinions of any other person or entity, especially Orac's cancer center, department of surgery, medical school, or university. Also note that Orac is nonpartisan; he is more than willing to criticize the statements of anyone, regardless of political leanings, if that anyone advocates pseudoscience or quackery. Finally, medical commentary is not to be construed in any way as medical advice.

To contact Orac: [email protected]

23 replies on “Is medical error really the third most common cause of death in the US?”

Orac guesses correctly: Null has been flirting with the one million a year figure recently, which he expands to “20 million over 20 years”. I imagine he’ll find ways to expand that figure to include, perhaps, the half-dead and the almost dead, making the figure over 100 million. He considers age 27 “old age”, so anything is possible

“I imagine he’ll find ways to expand that figure to include, perhaps, the half-dead and the almost dead.”
Not to mention the living dead, the walking dead, the undead, and the Grateful Dead.
Seriously though, it’s really hard to make those kinds of comparisons.
Is it possible that a patient is more likely to survive an error in a big voluntary teaching hospital? Is there a chance that one, or one kind of, institution may be better placed to cover up errors, or to catch them while consequences can be averted? And how many errors are covered up for medicolegal reasons? How many harmless errors occur? So many possible confounders in either direction!
Simply putting out statistics however low or high means little. What counts is what hospitals do to catch errors, to learn from them, and avoid them going forward. All the hospitals I have worked for have had aggressive quality control processes in place. Reportable incidents that are not reported are a serious violation that when caught mean big trouble for the violators and the hospital, and they do get caught.
For me, my clogs on the linoleum experience tells me that the number of fatal errors is definitely on the lowest end of the scale.

My 4.5 year old thinks his 9.5 year old brother is “old.”

You have to wonder what Null’s point of reference is for that.

@ Denice Walter

What we really need is a really well-done study of the number of people, e.g., cancer patients, infectious disease patients, heart disease patients, etc., who died from “treatable” conditions after opting for CAM. Since CAM is not usually reimbursed by insurance companies, not even that data is available. We do have a number of anecdotes, some representing reasonably sized case series; but . . .

This blog (RI) reviewed some articles on the effect of CAM treatment on the risk of dying from cancer.

@ Dorit:

About the age 27 meme:
it may have something to do with repair of cells slowing down then, or POSSIBLY it may be when the chi starts escaping from the dan tien like air from a punctured bicycle tire, or some other altie fantasy**.
IIRC, there used to be old literature about aging (Bromley) that estimated that the late twenties were the best time for intellectual work, but other studies showed continuance far beyond that

However, we DO know that executive functioning continues to develop into the mid-20s ( not that it develops in EVERYONE or that Null would recognise ex fx if it hit him over the head with a brick)

** he also holds that everyone has a chronological age ( birth certificate) and a biological age ( their level of health etc) so of course, his biological age is perhaps only a third of his chronological age, so he’s now 25. Snark

I hear you.
One of the problems is that their survivors might be ashamed about their bad choices and not sign up to assist with information.
There are always testimonials from altie success stories but no talk about the failures.
The most famous study might be when they tested Gonzalez’s treatment for pancreatic cancer vs SBM.
Total phail.

20 million over 20 years? Why, that’s 100 million over 100 years. A thousand million over a thousand years…..

What I want to know, and what is not addressed as far as I can see in this post, is the accuracy of reporting of these kinds of events. I’m much more worried about that than about number massaging (though it is definitely an issue…).

Personally, I do not believe that I’d go for quacks just because I believe medicine is not as good as it claims to be (which is my position…). I understand that I’m an exception, though…

But what I’m worried about is that this kind of fear of painting medicine in a bad light seems to be the rationale for this kind of regulatory axiom from my country’s medical board:

Article R.4127-31: “Any MD must abstain, even outside the scope of the exercise of his profession, of any act that may bring disrepute on the practice of medicine”

Which I find quite ambiguous when it comes to adverse events… (and I must say I’ve witnessed quite a number of violations of the articles in the aforementioned document.)

Moreover, this kind of behavior does not have my favor at all:

Blacklisting of Renaloo

So all apologies, but I’m having trouble taking figures of adverse events, even medical deaths, at face value.

Even if the statistic is true, I wonder how many of those people would have died anyway if they hadn’t sought medical treatment at all. I don’t think that more than an infinitesimal fraction of deaths attributed to medical error by any account are of people with minor illnesses who sought treatment and died due to gross medical error. Something like that still makes the news I think, rather than just winding up in some statistic.

And is the situation really any better with so-called “alternative medicine” (S.C.A.M.)? Deaths in hospitals are generally well-reported. Not so much those people who rely on S.C.A.M. Furthermore, most of the people who so relied on S.C.A.M. often wind up with conventional medical treatment in the end. And science-based medicine still gets the blame for that when it fails to clean the mess made by S.C.A.M. The S.C.A.M.mers really got a swell deal there!

I wonder how many of those people would have died anyway if they hadn’t sought medical treatment at all.

Indeed. May 13 2007 is burned into my memory. It was my maternal grandmother’s 86th birthday. My uncle phoned with the tragic news. During her birthday party she’d started feeling unwell and was taken to hospital to be checked out. While at the hospital, she suffered a massive and fatal heart attack.
I daresay these quacks would label her death as due to medical error. In reality, she would have died no matter what.

I’m sure he could include any medical advice provided as well. Were deaths in cars preventable if only a medical adviser had known the number of newtons a person could withstand without being injured to the point of internal bleeding? Was a regulation from OSHA not strict enough because of incorrect medical advice, leading to preventable death from something like noxious inhalation? 1 million deaths by medicine is rookie numbers. I think we could balloon this to 100 million and at the same time steal numbers from other top categories.

@ Kao Valin

Actually there exists documented evidence that companies pressured and lied to avoid stricter regulations. Our coal industry is a prime example. And the Ford Pinto with its exploding gas tanks. The company knew from internal documents from its own engineers how foolish it was to put the tanks outside the frame; but on discovery it was found that Ford had decided it was cheaper to settle claims, sealing them, than to retrofit the cars. And we know that the reduction over the past decade in using coal for electricity prevented at least 150,000 deaths. I could go on and on; but it wasn’t the medical advisers at fault; rather, they were ignored in most cases. And watch the movie Tucker. In the late 1940s Tucker put safety into cars: windshields, collapsing steering wheels, etc. Other auto manufacturers put him out of business and didn’t make such changes for decades. Not medical error; but evidence that profit TRUMPS people’s lives and limbs. For those who believe in small government, it’s a myth. Either government works for us or it is in the pocket of industry and the wealthy. PBS has a great documentary in its American Experience series entitled “The Poison Squad” about food in the later 19th century literally killing many and sickening even more. It shows how, mainly, the Republican Party fought against any regulations, how the food barons didn’t care, and how they attacked Harvey Wiley, ad hominem, when he scientifically documented just how bad food was. Sound familiar? I HIGHLY RECOMMEND WATCHING IT.

However, if a doctor advises a parent that it isn’t necessary to vaccinate a child and the child suffers a preventable disease, or if a doctor treats with CAM, I think most people on this blog would consider that some form of medical error.

@ Julian Frost

One anecdote; but what about other cases where someone with a heart attack might have been saved? My maternal grandfather may have died unnecessarily. He began having numbness on one side of his face, etc. My grandmother rushed him to the hospital. They said he was fine. Within hours of returning home he had a massive stroke. Whether he could have been saved or not, there are interventions for strokes. There is more to the story; but that is enough. I’m only giving an anecdote because you gave one. Anecdotes aren’t science and don’t “prove” anything either way. And I can’t speak to your grandmother’s case; but in similar cases it is possible that someone decided the odds weren’t good because of age and didn’t do what might have saved the person’s life. People in their 80s are treated for heart attacks, cancer, etc., and some live 5–10 years more. On the other hand, sometimes interventions do nothing but prolong suffering.

ADDENDUM @ Kao Valin

I should have added that such absolutely absurd slippery-slope thinking goes against any rational dialogue. I guess, if one uses your illogic, that we should have absolutely NO regulations, laws, etc. regarding medical errors because, according to you, the sky is the limit.

Whatever the valid numbers are, as Ben Goldacre wrote: “Problems in medicine do not mean that homeopathic sugar pills work; just because there are problems with aircraft design, that doesn’t mean that magic carpets really fly.”

Despite everything, I think accurate reporting of potential errors is deficient, which doesn’t mean the numbers come even close to the high-end claims. In the U.S. we perform far fewer autopsies than in years past, and far fewer compared with several European nations today, partly because autopsies cost money and our for-profit health care system doesn’t like spending money. Some studies based on autopsies performed find diagnostic errors in about 25% of cases. This doesn’t, of course, mean that a correct diagnosis would have changed the outcome; but without autopsies doctors don’t learn and improve, since some of the diagnostic errors certainly had negative results. I personally have in a file 15 peer-reviewed papers discussing the lack of autopsies.

In addition, Public Citizen, a superb consumer advocacy group founded by Ralph Nader, has documented year after year how few doctors are ever disciplined in the U.S., let alone lose their licenses, despite clearly bad treatment/incompetence. Go to:

And what about hospital-based infections? For instance, some studies find for-profit hospitals have far more. Is a hospital-based infection that may have been prevented a medical error?

And studies also find that for-profit hospitals have twice the morbidity and mortality. Shouldn’t this be called medical error?

And what about unnecessary medical interventions? Redding Medical Center performed over 1,000 heart bypass surgeries on healthy hearts, one on a young man with gastroesophageal reflux disease. See article at:

If the surgeries were carried out correctly, then, I guess, no medical error was involved? The above is just the tip of the iceberg. It is difficult to discover unnecessary treatments: if the patient does well, there is no lawsuit and no discovery. In fact, Redding Medical Center got away with it for years. So, how many unnecessary interventions occur in the U.S.?

And what about people who suffer, develop disabilities, or die from lack of care due to our for-profit health care system? I guess that isn’t medical error either.

So, I agree with Orac: whatever the level of error in medicine, which is based on science (though also on some grandfathered-in procedures), going to CAM, which is based on anecdote and hype with no credible scientific basis, is quite foolish. My suggestion is to always seek a second and even a third opinion, and when you do, don’t tell them what a previous doctor said. Jerome Groopman’s excellent book, “How Doctors Think,” describes cases where, once a diagnosis had been made, second opinions seldom differed, somehow locked into a narrow range.

@ Joel Harrison

“I think the accurate reporting of potential errors is deficient which doesn’t mean even close to the high end claims.”

Thank you! You’re my hero!

You should change your nickname to “One Punch Man” or something similar.


I left out that health insurance companies, etc. often include mandatory arbitration in their contracts. Not only have studies found that the arbitration companies, being employed by the insurers, hospitals, etc., often rule against the plaintiff; but the entire proceeding is sealed, so we don’t know whether medical errors were involved or not.


Published studies have found a medical culture adverse to admitting/recording errors. And even excellent doctors are reluctant to turn in a colleague.

Published studies have clearly documented that during the long shifts worked by residents and regular hospital employees, errors increase as the hours drag on. The European Union has strict, enforced limits on work hours that are much shorter than those in the United States, and international comparative studies find no decrement in the competence of doctors and lower rates of errors. And where should we count traffic accidents caused by medical staff driving home after long shifts? Is our health care system not to blame? See, for instance, the Wikipedia articles “Medical resident work hours” and “Medical error.”

And no one seems to discuss errors that prolong suffering, e.g., longer hospital stays and rehabilitation, and long-term or permanent disabilities. Errors don’t just cause deaths.

Again, I agree that the claims made by CAM proponents are grossly exaggerated; but I’m not convinced by the Yale study.

One last thought: adverse events do occur despite the best medicine, and these are NOT errors.

Bottom line, we really don’t have the data currently to come up with a valid estimate (plus or minus, say, 10%), and such an estimate should include both deaths and suffering/disabilities.
