
Unforgivable medical errors

And now for something completely different.

I’ve been on a bit of a tear the last few days beating on Mike Adams, someone who arguably deserves the title of Woo–meister Supreme, but it’s important to remember that defending science-based medicine is more than just having a little fun every now and then slapping down quacks. It’s also about turning the same skeptical eyes that recognize the woo that people like Adams, Mercola, and the anti-vaccine movement promote onto scientific medicine when appropriate. That’s because, at its best, science-based medicine is always trying to improve treatments based on science and evidence. To do that, we practitioners of science-based medicine must have the desire and courage to look at our own practices and objectively evaluate whether they are the best we can do.

In my field of surgery, there are some unforgivable errors. Although some of us may disagree on the exact identity of some of them, most surgeons would agree on a handful of them. Certainly one of them would be to amputate the wrong limb or remove the wrong organ. This happens far more often than any of us would like to admit. Over the last couple of decades, checklists meant to prevent such occurrences have risen to the fore and become standard practice at most hospitals. We surgeons ridicule them (myself included, at least until recently), but they work, as an increasing amount of scientific and clinical literature is showing. Another unforgivable error is to leave a sponge or surgical instrument behind during an uncomplicated elective case. I qualify that because it’s understandable that occasionally a sponge will be left behind in a trauma case or when an elective case goes bad. In both situations, things get crazy, and everyone is frantically trying to save the patient. But in an elective case, leaving a sponge or surgical instrument behind should in essence never happen. The tedious ritual of counting the sponges, needles, and instruments before and after the case is highly effective in preventing it–when surgeons listen to the nurse telling them that the counts aren’t correct. The third unforgivable error is to operate on the wrong patient, which has occasionally happened in the past. Again, checklists make such a spectacular mistake much less likely. At my own hospital, for instance, the nurses are required to ask each patient who he or she is, what operation he or she is having, who the surgeon is, and, if it’s appropriate for the operation, which side is being operated on. The surgeon is required to mark the body part and the side with his or her initials. Sure, it sounds silly and pointless, but it’s clear that such systems reduce wrong-site surgery markedly.
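Seen from a systems standpoint, the pre-op verification described above is simply a gate that refuses to let the case proceed until every item has been affirmatively confirmed. A toy sketch of the idea (the item names are invented for illustration, not any hospital’s actual protocol):

```python
# Toy pre-op verification gate. The required items mirror the checks
# described above; the field names are invented for illustration.
REQUIRED_CHECKS = (
    "patient_identity_confirmed",   # who the patient is
    "procedure_confirmed",          # what operation is being done
    "surgeon_confirmed",            # who is operating
    "operative_site_marked",        # side/site initialed by the surgeon
)

def missing_checks(checklist):
    """Return the items not yet confirmed; an empty list means cleared."""
    return [item for item in REQUIRED_CHECKS if not checklist.get(item)]
```

The point of such a gate is not that any single check is hard; it is that the case cannot proceed while the list is non-empty, no matter who is in a hurry.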

It’s becoming increasingly clear that most medical errors of these types are in actuality system problems. As much as surgeons like to think of themselves as incapable of making such errors, the fact is that we all are. The key to reducing such errors is to make the system such that it is more difficult to make such mistakes or, when mistakes are made, they are highly likely to be caught before a patient is injured. There are other areas of medicine where this is also true. One in particular came to national prominence in a story published in the New York Times over the weekend entitled The Radiation Boom: Radiation Offers New Cures, and Ways to Do Harm. It is a hugely disturbing story of errors in radiation therapy that caused significant harm to many patients, including deaths. Here are the two deaths.

Here’s death #1:

As Scott Jerome-Parks lay dying, he clung to this wish: that his fatal radiation overdose — which left him deaf, struggling to see, unable to swallow, burned, with his teeth falling out, with ulcers in his mouth and throat, nauseated, in severe pain and finally unable to breathe — be studied and talked about publicly so that others might not have to live his nightmare.


This is the first in a series of articles that will examine issues arising from the increasing use of medical radiation and the new technologies that deliver it.

Sensing death was near, Mr. Jerome-Parks summoned his family for a final Christmas. His friends sent two buckets of sand from the beach where they had played as children so he could touch it, feel it and remember better days.

Mr. Jerome-Parks died several weeks later in 2007. He was 43.

A New York City hospital treating him for tongue cancer had failed to detect a computer error that directed a linear accelerator to blast his brain stem and neck with errant beams of radiation. Not once, but on three consecutive days.

Here’s death #2:

But on the day of the warning, at the State University of New York Downstate Medical Center in Brooklyn, a 32-year-old breast cancer patient named Alexandra Jn-Charles absorbed the first of 27 days of radiation overdoses, each three times the prescribed amount. A linear accelerator with a missing filter would burn a hole in her chest, leaving a gaping wound so painful that this mother of two young children considered suicide.

Ms. Jn-Charles and Mr. Jerome-Parks died a month apart.

The article points out that Americans receive more medical radiation than ever before. Indeed, it was less than a month ago that I wrote about this very topic, and this article points out that the average lifetime dose of diagnostic radiation for Americans has increased sevenfold since 1980 and that half of all cancer patients receive radiation therapy. It should also be noted that radiation therapy is increasingly used for non-cancerous conditions, including severe thyroid eye disease, pterygium, pigmented villonodular synovitis, treatment of keloid scar growth, and prevention of heterotopic ossification. The reason is that our understanding of radiation effects and, more importantly, the equipment, technologies, and protocols to deliver the radiation have become increasingly sophisticated.

Radiation oncology is a lot like surgery in that it treats local areas of the body, as opposed to intravenous medications like chemotherapy, which treat the body systemically. Like a surgeon wielding a scalpel, radiation therapy can treat a part of the body only as precisely as its beam can be focused, targeting the cancer or other diseased tissue while hitting as little normal tissue as possible. With better technology, better equipment, and better software, radiation oncologists have become better than ever at doing just that. For the most part, gone are the days of routinely frying large amounts of bowel when treating pelvic tumors or hitting significant quantities of lung or heart during radiation therapy for breast cancer. It just doesn’t happen anymore, except rarely. These days, it’s possible to treat organs and tumors in much smaller, tighter anatomical spaces, even when they are surrounded by easily injured healthy tissue. The ability to aim the beams is just that good.

Unfortunately, like many technological advances, the advances in radiation oncology have added layers of complexity to the procedure that weren’t there before, adding opportunities for error. The latest thing in radiation oncology, intensity-modulated radiation therapy, or IMRT, requires very sophisticated algorithms to calculate the best dosage and treatment plan. Again, the more complex the system, the easier it is for error to creep in. Indeed, the 621 errors in radiation therapy between 2001 and 2008 examined in the NYT investigation sure look like systemic errors:

Because New York State is a leader in monitoring radiotherapy and collecting data about errors, The Times decided to examine patterns of accidents there and spent months obtaining and analyzing records. Even though many accident details are confidential under state law, the records described 621 mistakes from 2001 to 2008. While most were minor, causing no immediate injury, they nonetheless illuminate underlying problems.

The Times found that on 133 occasions, devices used to shape or modulate radiation beams — contributing factors in the injuries to Mr. Jerome-Parks and Ms. Jn-Charles — were left out, wrongly positioned or otherwise misused.

On 284 occasions, radiation missed all or part of its intended target or treated the wrong body part entirely. In one case, radioactive seeds intended for a man’s cancerous prostate were instead implanted in the base of his penis. Another patient with stomach cancer was treated for prostate cancer. Fifty patients received radiation intended for someone else, including one brain cancer patient who received radiation intended for breast cancer.

Radiation intended for someone else? Radiation intended for a different organ? How is this any different from operating on the wrong patient or removing the wrong organ or the wrong limb? It’s not. Surgery, for all its faults and for how far it still has to go to reduce medical errors, appears to be far ahead of radiation oncology in that respect. Many of the errors listed above likely could have been prevented by changes in the system, one of which might be the implementation of checklists not unlike what we do in surgery. Nowhere in the NYT article did I see any mention of checklists, which, as I’ve pointed out before, have made a lot of news in surgery, thanks to Dr. Atul Gawande.

One thing that very well might make radiation oncology as a specialty more prone to errors is its heavy dependence on software. Indeed, there was one thing I learned in this article that completely shocked me: specifically, how Mr. Jerome-Parks got such an overdose of radiation. It turned out that the software facilitated it:

The software required that three essential programming instructions be saved in sequence: first, the quantity or dose of radiation in the beam; then a digital image of the treatment area; and finally, instructions that guide the multileaf collimator.

When the computer kept crashing, Ms. Kalach, the medical physicist, did not realize that her instructions for the collimator had not been saved, state records show. She proceeded as though the problem had been fixed.

“We were just stunned that a company could make technology that could administer that amount of radiation — that extreme amount of radiation — without some fail-safe mechanism,” said Ms. Weir-Bryan, Mr. Jerome-Parks’s friend from Toronto. “It’s always something we keep harkening back to: How could this happen? What accountability do these companies have to create something safe?”

This is another source of systemic error: poorly designed or unnecessarily complicated software. More importantly, with something like a linear accelerator, there didn’t appear to be a warning that (1) the collimator that controlled the radiation beam was wide open or that (2) the dose or area of radiation programmed was too high. Yes, apparently the computer did show that the collimator was open, but for something like that there needs to be a drop-dead stop: a warning that does not allow the technician to proceed until it is addressed, and a failsafe mechanism that requires the technician to jump through many “Are you sure?” hoops before delivering doses that are at a dangerous level. (Sometimes it may be medically indicated in certain short radiation protocols to administer large doses at once, but it should not be easy to do so; it should require multiple confirmations before the instrument will do it.) Eventually the manufacturer did release new software with a failsafe, but it took a spectacular error resulting in death to get it to do so.
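The kind of drop-dead stop described above can be sketched in a few lines. This is a hypothetical illustration of the logic, not any vendor’s actual interface; the 300 cGy ceiling and the two-confirmation rule are invented for the example:

```python
MAX_ROUTINE_DOSE_CGY = 300  # assumed per-fraction ceiling, purely illustrative

class InterlockError(Exception):
    """Raised when a treatment setup fails a hard safety check."""

def clear_to_treat(dose_cgy, collimator_configured, confirmations=0):
    """Allow treatment only if the setup is safe or explicitly confirmed.

    A wide-open (unconfigured) collimator is a hard stop with no
    override; an unusually high dose requires two separate
    'Are you sure?' confirmations before it will proceed.
    """
    if not collimator_configured:
        raise InterlockError("collimator not configured: hard stop")
    if dose_cgy > MAX_ROUTINE_DOSE_CGY and confirmations < 2:
        raise InterlockError(
            "dose %d cGy exceeds routine maximum; %d more confirmation(s) required"
            % (dose_cgy, 2 - confirmations))
    return True
```

The key design point is that the dangerous path is not merely warned about; it raises and halts until the operator does something deliberate.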

Reducing medical errors that harm patients is about more than just physicians. It’s about the whole system. In surgery we have been discovering this (and struggling with it) over the last decade or so. It’s not enough just to target the physicians. In my specialty and in the operating room, it’s necessary that everyone be involved, from the nurse who sees the patient when he comes in, to the physicians who do the surgery, to the scrub techs counting instruments, to the scrub nurse verifying the surgical site–in essence, everyone involved with the care of the patient from the moment he shows up for surgery to the moment he either goes home or is admitted to the hospital. Radiation oncology has at least as many people involved in the care of the patient, if not more: nurses, radiation physicists, radiation oncologists, technicians operating the machinery. Moreover, because, unlike surgery, radiation is often given in small fractions over many visits, there are many more opportunities for error than in surgery. After all, you have surgery once; typical radiation therapy regimens for breast cancer involve 33 doses of radiation, each with the potential for errors both small and large. It took only three such errors to kill Mr. Jerome-Parks, and a single uncaught error at the beginning of treatment to kill Ms. Jn-Charles.

We had a hard time learning this lesson in surgery. In fact, we’re still having a lot of trouble learning it, and there is still a lot of resistance. It is human nature. However, as systems become more complicated, the potential grows not just for human error but for errors that derive from interactions within the system itself, even when each person involved makes no mistakes. While we as health care practitioners should always strive to do our best and make as few mistakes as possible, mistakes do happen. They are inevitable. We have to be more like the airline industry and build systems that are designed to catch these errors before they can harm patients and to minimize the harm done when they do slip through. We have a long way to go, unfortunately.

By Orac

Orac is the nom de blog of a humble surgeon/scientist who has an ego just big enough to delude himself that someone, somewhere might actually give a rodent's posterior about his copious verbal meanderings, but just barely small enough to admit to himself that few probably will. That surgeon is otherwise known as David Gorski.

That this particular surgeon has chosen his nom de blog based on a rather cranky and arrogant computer shaped like a clear box of blinking lights that he originally encountered when he became a fan of a 35-year-old British SF television show whose special effects were renowned for their BBC/Doctor Who-style low budget look, but whose stories nonetheless resulted in some of the best, most innovative science fiction ever televised, should tell you nearly all that you need to know about Orac. (That, and the length of the preceding sentence.)

DISCLAIMER: The various written meanderings here are the opinions of Orac and Orac alone, written on his own time. They should never be construed as representing the opinions of any other person or entity, especially Orac's cancer center, department of surgery, medical school, or university. Also note that Orac is nonpartisan; he is more than willing to criticize the statements of anyone, regardless of political leanings, if that anyone advocates pseudoscience or quackery. Finally, medical commentary is not to be construed in any way as medical advice.

To contact Orac: [email protected]

93 replies on “Unforgivable medical errors”

This is another source of systemic error: Poorly designed software or unnecessarily complicated software.

I object!

Proper users of the technology should not be reliant on software. They need to know how the instrument works, and what they are doing when they use it. Sure, good software makes things more convenient to use, but I have huge concerns about attempts to make potentially dangerous technology a “black box” that anyone can use.

Could you, Orac, be comfortable with a surgery robot, without having a competent surgeon standing by in case there is an issue? As long as it has good, user friendly software?

If the software is too complicated for the users, then the users should not be using the instrument.

@Pablo: I don’t think that anybody is suggesting that medical software be designed to drool-proof standards for complete idiots. Barring the invention of strong AI or very expert expert systems, the user will still need to know what inputs to give.

However, the thing with most complex software-driven systems is that the user is almost completely dependent on the software for information with which to act. Complex computer systems are always black boxes, unless you have some logic probes and serious hardware chops. The software interface is the user’s only way to manipulate the box. If the software sucks, even the best user will be liable to serious error.

See this article for a detailed review of similar radiation treatment disasters almost two decades ago, including discussion of the unwise replacement of physical interlocks with software interlocks (and of the effects of simple typing errors on system performance). It sounds like lessons that should have been learned in the 1990’s were not learned.

As much as surgeons like to think of themselves as incapable of making such errors, the fact is that we all are.

I wonder how much of your education and training was directed at hammering that point home? I know that mine wasn’t, at all — and the stakes are generally lower in engineering. If I learned the lesson the hard way, at least there weren’t lives on the line.

I’m trying to correct that in the next generation: my kids and junior engineers get a steady dose of “I know I’ve screwed this up somewhere, and human nature being what it is I’m the last person to see it. Please check this so it doesn’t get into product.”

I wish more people studied the basics of information theory, because it provides some wonderful metaphors (and methods) for other parts of our lives.

Pablo–the point isn’t that the software does everything. The point is that the systems are complicated enough that there has to be software, and therefore it’s important that the software be done right. If we were discussing contaminated medical supplies, and the error was traced to a mismanaged or broken autoclave, you wouldn’t be objecting on the grounds that a surgeon shouldn’t hand off responsibility to an autoclave tech or manufacturer.

One scary thing about this round of errors is that people aren’t learning from experience. There were similar fatal errors decades ago, for lack of proper checking and interlocks: google Therac-25.

Anyone interested in this subject–which ought to be a lot of people–would do well to look at old issues of the RISKS Forum, on Risks to the Public from Computers and Related Systems, aka comp.risks (yes, Usenet, it’s that old). The same kinds of systems problems that turn up in medical machines turn up in all sorts of other systems, from traffic light systems to electrical grids to all the Y2K-related stuff to air traffic control to power plants.

I know this is off-topic, but Mercola has taken the lead in the Shorty Awards’ health category. Just figured I’d give a heads-up in a current thread.

Well stated Orac. Per the norm you are the most fair and balanced among the high priests of medicine.

While the systems are much different, one model worth looking at for safety improvements is aviation. Checklists are used widely, because procedures are too complicated to remember without references. Critical incidents are examined or evaluated by outside authorities to determine the cause or chain of events that led to the incident. The results are published; training or hardware may be changed to make occurrences less likely.

It’s not perfect – pilots still take off without flaps or land with gear up – but not very often. But systematic errors generally get weeded out quickly and thoroughly.

Crew Resource Management is one aviation concept that could likely benefit surgery and intensive patient care. I’m sure lots of its philosophy is already there but making it more formal and more widespread probably would be good.

Instructions getting lost amid a computer crash, as in the example given, happens all the time when working with my robots. Most often, you can’t tell it has happened until you turn the machine on, unless there’s some fail-safe to verify its instructions. No big deal for small rolling robots. Big deal for big kicking robots. Huge deal for radiation-blasting robots.

Not having a fail-safe to verify large doses with the radiologist is the fault of the software… and a really really big fault at that.

@Pablo: in principle you’re right, but in the end this is just an extension of the old calculators in math class issue. I certainly agree that the operators should know the basic principles of the machines they’re operating, but as the principles become fantastically complex, that becomes harder and harder to do. At some point you have to ask yourself – do I want the operator to spend more time learning and doing work that could be better done by the computer, or on other areas where computers can’t help as much, like bedside manner in dealing with patients?

That said, I also can’t help but wonder if the software managing the radiation came with the typical software warranty disclaimer, which boils down to “Warning: this software not suitable for use.” As long as software manufacturers get to pretend they don’t have any liability for their products, progress in this area is going to be painfully slow.

Proper users of the technology should not be reliant on software. They need to know how the instrument works, and what they are doing when they use it. Sure, good software makes things more convenient to use, but I have huge concerns about attempts to make potentially dangerous technology a “black box” that anyone can use.

This is overly idealistic. No matter how deep the user’s understanding is, they are STILL entirely reliant on the software. Indeed, better understanding of the instrument would not have addressed the cited cases in any way. Kalach most likely fully understood the function and importance of the collimator. But the software’s design (presuming the article’s account is accurate) fooled her into thinking that her settings had taken effect when, in fact, they had not.

There is no way at all to consider this anything BUT a software problem. The only additional knowledge on Kalach’s part that would have helped would be if she’d known about this specific design flaw in the SOFTWARE. Additional cross-checks and confirmations could have helped, but no amount of instrumental knowledge would have been relevant, and ensuring that the software provides proper feedback on exactly what it’s doing is CRUCIAL, not an attempt to reduce anything to a black box.
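To make the point concrete: the generic defense against exactly this failure is write-then-verify. After saving safety-critical settings, re-read them from where they were actually stored and compare against what was intended, refusing to proceed on a mismatch. A hedged sketch in Python (file-based persistence is assumed purely for illustration; a real system would also read back from the hardware itself):

```python
import json
import os
import tempfile

def save_plan(path, plan):
    """Persist plan atomically, then read it back and verify it stuck.

    os.replace is atomic, so a crash mid-write leaves the old file
    intact rather than a half-written one; the read-back catches the
    case where the save silently never happened.
    """
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(plan, f)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, path)
    with open(path) as f:
        on_disk = json.load(f)
    if on_disk != plan:
        raise RuntimeError("read-back mismatch: plan was not saved")
    return on_disk
```

None of this requires the operator to know anything more about collimators; it requires the software to prove that what it displays matches what it stored.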

You also can’t brush it off by essentially saying that “the user should know about the design flaw”. The only way they CAN know is if it’s a documented behavior, and if the supplier knows about the behavior they should FIX it!

Personally, I work in software, and an equivalent issue with one of our products would be considered clearly, and properly, OUR fault and not the user’s.

So sorry to say, but you’re completely off base here.

Geez — they teach Therac-25 in computer science classes routinely, and software developers are still getting out who repeat its mistakes???

There were many things wrong with the Therac-25 software, but the biggest were two philosophical problems which are unfortunately endemic to software engineering:

1) The software doesn’t actually deliver the radiation; therefore, it is not safety-critical. I have seen this philosophy in one of my colleagues, in software designed to display the state of an autonomous weapon and issue commands to it (like “go kill that tank”). Because it was not actually in the weapon itself, he did not consider it safety-critical. (“I’ve worked with safety-critical systems. Don’t talk to me about safety-critical.”) This attitude did get caught and corrected in this instance, but as these radiation therapy incidents show, it’s not unique and probably not uncommon.

2) A tendency to identify with the software. Engineers come to think of the code as an extension of themselves; in this, the movie “Tron” was accurately reflecting how programmers think of this stuff. It’s common for people developing different modules to say to one another “I receive the packet from you, and then I validate it before attempting to process it”. While it’s a convenient shorthand, it reflects a tendency to identify with the software which can make them a) overestimate the intelligence of the software, b) have a harder time spotting deficiencies in it (just as we have trouble spotting our own deficiencies), and c) major pride issues. “I don’t have bugs! How can I have bugs? I worked hard on this.”

The general solution for problem #2 is peer review, but it’s complicated by the fact that the peer reviewers are necessarily less intimate with the code than the original developer. So they may not understand it well enough to notice the problems. The overall problem for both philosophical issues is well designed testing, but there’s a problem there too: testing is the last thing you do, so if there are delays in design and implementation and a hard deadline, then test will suffer. In the end, the effectiveness of your test often ends up being inversely proportional to the number of schedule overruns earlier in the program.

I’m not sure what the answer to this is. Part of the reason our world is permeated by crappy code is the fact that it’s all driven by competition. This is good — it encourages innovation. But it also encourages cutting corners to get your product out ahead of the other guy, and the easiest engineering corners to cut are peer review and test. They are not always the cheapest, in the long run, but software companies can rarely afford to look to the long run, because they live and die in the short run.

It is inexcusable, though, and I hope the software companies involved face some repercussions.

BTW, the Therac-25 scandal’s direct root cause was a race condition — one of the classic bugs, and one which is easily prevented with some care. That brings up another endemic problem which has only been getting worse lately: developers acting as if timing will always be the same. It’s a very easy mistake to make, but the implications can be catastrophic.
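For readers who don’t write software: the fix for this class of race is to make the check and the act atomic, so nothing can change the state in between. A toy Python sketch of the idea follows; it is a generic illustration, not the actual Therac-25 code (which was PDP-11 assembly), and the mode/target names are invented:

```python
import threading

class BeamConfig:
    """Toy model of the hazard: in x-ray mode the beam is far more
    powerful, so the beam-flattening target must be in place.
    Validation and firing happen under one lock, so an operator edit
    cannot land between the check and the act."""

    def __init__(self):
        self._lock = threading.Lock()
        self._mode = "electron"
        self._target_in_place = False

    def set_mode(self, mode, target_in_place):
        with self._lock:
            self._mode = mode
            self._target_in_place = target_in_place

    def fire(self):
        with self._lock:  # check and act are atomic
            if self._mode == "xray" and not self._target_in_place:
                raise RuntimeError("interlock: x-ray mode without target")
            return self._mode
```

Without the lock, an edit arriving between the `if` check and the beam-on would reproduce exactly the Therac-style window.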

Is software not certified by the FDA as a medical device? If not, why not? It clearly needs to be held to a higher standard than it is presently. It is patently obvious that software companies are not going to voluntarily meet this standard, so someone is going to have to impose it upon them.

RE. software issues:

Surely the system should be validated before it is used on the public–including simulating crashes, etc. These problems should have been highlighted long before the system was used on patients, either by the manufacturer or by the hospital.

The use of checklists is familiar to people in many regulated industries, where procedures that cannot be made fail-safe are instead wrapped in other physical checks to prevent errors from occurring. Orac is quite right to take the manufacturers and users of this equipment to task for not performing simple stress testing of the process. These deaths were due to negligence, not misfortune.

In case you didn’t notice the article today, this is part of a series. I hope checklists will be mentioned in a later article.

As far as the software goes, there are some things for which the user can be blamed, but a program that regularly crashes and loses key planning information without informing the user is not acceptable. Software that allows radiation doses far beyond the range of a realistic medical treatment is not acceptable (i.e., it would allow radiation doses even when all the dose-regulating shields were left wide open).

The core problem with software for life-critical devices is that it is written by programmers, not domain experts, and that state of affairs is unlikely to change.

Programming is a very complex activity. It is almost impossible to predict in advance conditions like the ones referenced above, where hardware crashes put the machine into an unexpected state. Software development is still too immature an industry to have developed a reliable set of practices that help us figure out when the domain experts haven’t told us enough. On the other hand, the complexity of the device means that the domain experts aren’t going to be able to determine when we’ve missed something.

The unforgivable thing about this state of affairs is that few individuals or organizations are working towards improving it. Instead, we’re focusing on the accelerating pace of technological change.

To be honest, I’m glad I don’t work in the medical, automotive or aviation software industries. The stakes are too high, and the techniques are simply not good enough.

I should be absolutely clear that I’m not offering excuses for this type of thing. I’m only pointing out that we are very far away from a solution, and very few are actually working on it.

One more thing I meant to say….

I discussed where software engineering goes wrong, but only briefly mentioned test (mainly in how it suffers due to schedule slips). Preventing occurrence of a bug is only half of the solution; the other half is catching the ones that occur anyway. That’s the job of the test engineers. Their needs are frequently underestimated, making their job much more difficult, but even when adequately funded and given plenty of schedule, problems can escape. This is usually because the test environment did not adequately model the real environment.

Mars Pathfinder is a good example here. It performed flawlessly during acceptance testing, except for one failure which they just couldn’t ever duplicate or even characterize. They eventually assumed it had been overcome by other changes, as does sometimes happen, and moved on. Flash forward a year, to the spacecraft touching down on Mars. It bounced to a halt, opened up its petals, began generating electricity, set up its communications systems, and then promptly crashed. It rebooted, and crashed again. It eventually loaded in safe mode, but when controllers told it to reboot normally, it promptly crashed again. The unreproducible fault was now happening 100% of the time, and because it impacted power management, it was threatening to end the mission by wearing out the batteries.

Fortunately, engineers did work out the problem: they had grossly underestimated the number of commands the mission controllers would try to send to it, trying to squeeze out every possible teeny bit of science they could. This threw off the timing, triggering a priority inversion that caused the reboots. A software patch was quickly written, tested as well as they could, and uploaded to Mars. Pathfinder worked fine for the rest of the mission, but it was a close thing.

Bottom line: if your test tests the wrong things, you’ll never find the bug until it’s too late. So you need to design tests very carefully.
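One concrete habit that follows from this: make overload a loud, testable failure instead of a silent one, and write at least one test that drives the system past the load your nominal tests assume. A minimal sketch (the class and its capacity are invented for illustration):

```python
from collections import deque

class CommandQueue:
    """Toy bounded command queue: overflow raises instead of silently
    dropping or corrupting commands, so an overload test can catch it."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._q = deque()

    def push(self, cmd):
        if len(self._q) >= self.capacity:
            raise OverflowError("command queue full (capacity %d)" % self.capacity)
        self._q.append(cmd)

    def pop(self):
        return self._q.popleft()
```

A test that pushes capacity + 1 commands will fail loudly on the ground, which is a lot cheaper than failing quietly on Mars.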

To clarify, I wasn’t talking so much about software failure, but more to the “overly complicated” software.

I say again, if the software is too complicated for the person using the instrument, then that person should not be doing it.


Agreed, if the person can’t understand the software, they shouldn’t be operating it. That said, software design plays a huge role, and the maker should make every effort to make the software as easy and intuitive to use as possible, which I think is Orac’s point.

I object!

Proper users of the technology should not be reliant on software. They need to know how the instrument works, and what they are doing when they use it. Sure, good software makes things more convenient to use, but I have huge concerns about attempts to make potentially dangerous technology a “black box” that anyone can use.

Straw man argument. That is not at all what I was arguing.

My point was that, for dangerous doses of radiation, there needs to be a failsafe warning that makes it crystal clear that what the user is about to do is potentially dangerous. Also remember this: these machines are, in general, programmed for around 30 doses and keep track of the cumulative dose. If a dose is being set up that is very different from all the previous doses the patient has had (as in the collimator being completely open), then there should be a warning that makes it crystal clear what is going on and prevents the operator from proceeding without jumping through some hoops. This is very basic and has nothing to do with making the machine a “black box” that anyone can operate.
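Since the machine already tracks the patient’s prior fractions, the check I’m describing amounts to comparing the planned fraction against the patient’s own history. A hypothetical sketch (the 20% tolerance is an invented number for illustration, not a clinical standard):

```python
def fraction_anomalous(planned_dose, previous_doses, tolerance=0.20):
    """Flag a planned fraction that deviates sharply from this
    patient's own treatment history; an anomalous fraction should
    force explicit operator review before the machine proceeds."""
    if not previous_doses:
        return False  # first fraction: rely on plan-level checks instead
    mean = sum(previous_doses) / len(previous_doses)
    return abs(planned_dose - mean) > tolerance * mean
```

A dose triple the patient’s usual fraction, as in the Jn-Charles case, would trip such a check on day one.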

Does it make a 747 a “black box” that anyone can fly to have such warnings and checklists that pilots have to heed?

Another important difference between aviation and medicine is how mistakes, incidents, and accidents are treated. Aviation has had a fairly open process to review these things and come up with corrections. Crews do not usually face much punitive action beyond retraining and sometimes loss of pilot’s license. Civil litigation is rare and criminal charges rarer still, although recent cases seem to be bucking this trend.

Having an open, generally non-punitive environment and not criminalizing mistakes (as opposed to negligence) allows those who have made the mistakes to be open about them to the benefit of the industry and the travelling public.

Looking at it from the outside, the medical community seems much more closed and insular. If the TV dramas are to be believed, there are medical review boards in some cases, but what level of retraining or discipline comes out of these? Is the information used in the wider medical community to prevent similar mistakes? It seems like the first thing to happen after a medical mistake is for everyone to clam up and hire a lawyer.

If medicine is to make the sort of progress aviation has made with regard to mistakes, the litigious culture around medical errors needs to change.

The general solution for problem #2 is peer review, but it’s complicated by the fact that the peer reviewers are necessarily less intimate with the code than the original developer. So they may not understand it well enough to notice the problems.

That’s because the reviewers are doing review as a side-line to their “real” work, and they’re not evaluated on their performance of reviews. PJ Plauger identified the solution at Whitesmiths almost 30 years ago: have a reviewer for each programmer, as immersed in finding problems as the programmer is in creating them. Contrary to expectations, this does not reduce per capita productivity — it increases it, because much less time and effort is spent on hunting bugs at later stages.

If you consider engineering the systematic application of methods to produce reliable designs despite unavoidable errors (including human ones) it makes sense to start incorporating error detection and correction into the methodology according to known methods for statistical quality assurance.

All basic Deming stuff. One might wonder what would happen were Deming’s methods applied to the practice of medicine?


It’s interesting that you used the analogy to the complexity of the 747. There’s a split in the aviation community (dramatically oversimplified here) between the Boeing and Airbus philosophies. Airbus planes are ‘fly by wire’ where all inputs from the pilots still go through software before reaching the physical controls. There are circumstances where the software will simply refuse an input it deems to be incorrect. Boeing planes for the most part use direct physical connections to the control surfaces, and the pilot always has final say as to what the aircraft will do. *All* of my friends who are commercial pilots prefer to fly the Boeings.

As for failsafes in software that controls devices, they are useless if the same device implements the control program and the failsafe. If I were designing something life critical like a machine that doses a patient with radiation, I would have a second device that monitors the actual radiation output completely independently of the device that controls it. Both would have physical interlocks to shut down the radiation emission if any discrepancy is detected.
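The independent-monitor idea can be sketched as a comparison between the commanded output and a second, independent reading, with the interlock tripping toward the safe state on any discrepancy. The 2% tolerance, the function shape, and the string states are assumptions for illustration only:

```python
def interlock(commanded_dose_rate, independent_reading, tolerance=0.02):
    """Compare the controller's commanded dose rate with a reading from a
    physically independent monitor; any discrepancy beyond tolerance, or
    any implausible reading, trips the interlock.

    Returns "BEAM_ON" only when both channels agree.
    """
    if commanded_dose_rate <= 0 or independent_reading <= 0:
        # Missing or nonsensical data: fail toward the safe state.
        return "BEAM_OFF"
    deviation = abs(independent_reading - commanded_dose_rate) / commanded_dose_rate
    return "BEAM_ON" if deviation <= tolerance else "BEAM_OFF"
```

Note that the default outcome on any surprise is "BEAM_OFF"; the beam stays on only when every check passes, which is the original fail-safe sense of the word.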

The basic point that I’m trying to convey is that there isn’t anything else in the world like software, where a tiny fault anywhere in the system can have huge effects on any other part of the system. If a system does anything useful, it quickly becomes so complex that it isn’t even theoretically possible to define in advance all the possible failure modes and develop tests for them.

If we accept this premise, then we have to treat the development of life-critical software the same way we treat aviation mishaps and surgical errors. *Every* time an unexpected condition is detected, we would have to investigate it and either issue a process to correct it, or be very explicit about why it is not necessary to do so in this particular case.

Even in these circumstances, sometimes errors will slip through undetected until they result in a death. That is unspeakably sad and unforgivable, but also inevitable.

As a radiation therapy medical physicist, I have to speak up. I’m afraid the AAPM has not yet made a response to this article, but ASTRO has, and it can be seen here: And now, I speak for myself:

In no way do I wish to dismiss the tragedy of the accounts given in the article. They are truly horrific, and never should have happened. Jerome-Parks was a victim of ignorance, and Jn-Charles was a victim of negligence.

That being said, whenever there is an upgrade to treatment planning software, there is an upheaval in the clinic, particularly when that upgrade enables the clinic to treat with a new technology, as IMRT was at the time of Jerome-Parks’ treatment. Physicians are typically excited, and there is pressure to use new capabilities quickly. In the first case, it sounds as if IMRT was commissioned and implemented before a rigorous QA process was in place. These days, I would be shocked at a clinic treating a patient with an IMRT plan that had not yet been QAed.

Pablo is concerned about the reliance on software for treatment planning, but it is an unavoidable reliance for all but the simplest treatments. We could go back to the old-fashioned methods of taking patient contours, calculating doses without accounting for the heterogeneity of tissues in the body, and so on, but this would also result in poor dosimetry and unnecessary dose to healthy tissues.

Medical physicists and dosimetrists are quite aware that trusting one piece of software alone is hazardous. That is why QA of an IMRT plan *minimally* involves an actual measurement of the fluence map delivered by each treatment field, either with film or a diode array, plus hand (or independent software) calculations of dose to a point, corresponding to the dose to a point given in the treatment plan summary.
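As a rough illustration of that kind of cross-check (not any vendor’s actual QA software), here is a pointwise comparison of a planned fluence map against a measured one. Real clinics use more sophisticated metrics such as gamma analysis; the 3% tolerance and list representation are simplifying assumptions:

```python
def fluence_qa(planned, measured, dose_tol=0.03):
    """Compare a planned fluence map against a measured one (film or
    diode array), both given as equal-length lists of relative intensity.

    Returns the fraction of points agreeing within dose_tol of the peak;
    a clinic would require a pass rate (e.g. 95%) before approving the plan.
    """
    if len(planned) != len(measured):
        raise ValueError("Maps must have the same number of points")
    peak = max(planned)
    agree = sum(
        1 for p, m in zip(planned, measured)
        if abs(p - m) <= dose_tol * peak
    )
    return agree / len(planned)
```

The key idea is that the measurement is an independent channel: the treatment-planning software cannot confirm its own output.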

Yes. Mistakes in physics and treatment planning happen, but the opportunity for mistakes does not stop with the software, or in the treatment planning process. Therapists have a number of physical and software interlocks to assure they treat the right patient, using the right devices in the beam at the correct times. However, I challenge anyone to treat 40+ patients a day, day in and day out, and never accidentally put a 45 degree wedge in instead of a 60 degree wedge, or something like that. Mistakes like these, if they are made, are typically not repeated on the same patient over and over. If it happens at all, it’s usually just for one treatment out of 6-8 weeks worth of treatments. Such mistakes have minimal effect on the overall dose delivery of the treatment.

There are a number of other problems with the article that I will not address in detail now. But it is riddled with convenient omissions regarding the regulation of this field.

As a community, those involved with the planning and treatment of patients with therapeutic doses of radiation absolutely must acknowledge and address what happened to Mr. Jerome-Parks and Ms. Jn-Charles. However, the presentation of their cases in the NYT will likely have more ill effect than the 621 radiation mistakes it cites. I have no doubt that the article will cause some patients to either refuse or discontinue treatment, and that makes me just as angry as the poor treatment of Ms. Jn-Charles.

One scary thing about this round of errors is that people aren’t learning from experience. There were similar fatal errors decades ago, for lack of proper checking and interlocks: google Therac-25.

Bingo bango wrongo. Therac-25 was not any doctor’s fault. That was a race condition, and the problem lay with the engineers and designers. A much more interesting scenario is the Panama City deaths. By law the doctors were supposed to double-check the dosages. They didn’t, and ended up killing several people. What makes it even more interesting is that the doctors were charged with murder.

However, I challenge anyone to treat 40+ patients a day, day in and day out, and never accidentally put a 45 degree wedge in instead of a 60 degree wedge, or something like that.

Let’s see if that sounds the same if I apply it to my specialty: I challenge any surgeon to operate on several patients a day, day in, day out, year after year, and never accidentally leave a sponge or instrument in a patient, operate on the wrong limb or wrong side, or remove the wrong organ.

These are very similar to the sorts of challenges that surgeons used to make when checklists were being introduced to reduce wrong site surgeries.

Don’t get me wrong here; I’m not bashing radiation oncologists per se. I love radiation oncologists; one of my most cherished research mentors is a radiation oncologist, and for a while I seriously considered leaving surgery to become a radiation oncologist. However, this story reminds me that surgery isn’t the only specialty that tends to downplay such errors or shrug its shoulders and say, “Well, shit happens.” Clearly, the attitude is pervasive in many specialties–and it persists in surgery. It’s just that surgery has been forced to do something about it sooner, thanks to lawsuits and bad publicity over wrong site surgeries.

@23 “As for failsafes in software that controls devices, they are useless if the same device implements the control program and the failsafe. If I were designing something life critical like a machine that doses a patient with radiation, I would have a second device that monitors the actual radiation output completely independently of the device that controls it. Both would have physical interlocks to shut down the radiation emission if any discrepancy is detected.”

Linacs have two monitor chambers, two timers, and physical limits on dose rate. The monitor chambers assure the uniformity of the beam and the dose rate, and will shut down the machine if the dose rate is incorrect or uniformity slips out of the 2% tolerance. The timer assures the machine shuts down after the correct time to reach the number of monitor units for the given field, and the backup timer is there in case the first one doesn’t work. However, short of putting physical dosimeters IN the patient and monitoring them during treatment, only so much can be done. (Internal dosimeters are now being used for some breast and prostate patients across the country, by the way, but they are checked either before or after a treatment, not during.)

I want to stress that, in general, the software isn’t trusted. That is why we have so much QA, why we take physical measurements, why we do physicist double checks of plans, why we do weekly chart checks, and why we do independent dose calculations. Problems arise when people grow lax about these QA requirements. Argue about the software all day, but the software will never be completely right. Problems arise when people trust the software, and don’t do the proper QA.

I remember when doing undergrad physics labs, we did a bit of X-ray diffraction — zapping beams of X-rays at table salt crystals to study the ordering of the atoms as the X-rays scatter off of them. Part of that lab was learning how the safety devices built in worked — mostly they were designed so that if the shielding was out of place, the machine was most likely to refuse to start firing X-rays. It would take deliberate action to make the machine do something that was dangerous to its operators, besides dropping the radiation shield on your lab partner’s hand*.

So putting in some basic confirmations that would require a doctor to sign off when something obviously wrong was happening (very high or very wide doses, for one) would help — or a physical interlock that doesn’t let the radiation come out when the aperture is fully dilated, without a manual override. It probably wouldn’t help prevent doing the right thing to the wrong person or the wrong organ, though. (A checklist might be simpler to implement there.)

* The laser labs were the same way — the lab accidents we got into were ‘banging your head on the optical table’ and ‘cutting yourself with the razor used to prepare samples’, rather than eye-burning. Probably because those were familiar things, rather than the equipment we got safety lectures on.

@26 I’m not sure if this particular comparison to your field is apt. Mistakes like the one I mentioned have little effect on the overall treatment. Having the wrong organ removed seems much more radical.


I’m not proposing anything at all about the devices or cases in question. My only point is that the ‘proper QA’ really isn’t even theoretically possible with devices as complex as this.

This is not meant to excuse the designers or developers or test engineers. It is simply a fact that we all have to work with.

I don’t develop medical software, so I can’t make any explicit statements about the processes they already use. I can only put forward the notion that you need at least two plans of attack with devices of this nature.

The first is exactly what you stated. Build and rigorously use the best QA you possibly can, and recalibrate it with every incident you learn about.

The second is to put the framework in place to make sure you learn about those incidents. This is the hard one because the operators of the device may not know that a particular failure is important or different from what they’ve already seen, or may be made to feel (by their bosses, co-workers or the vendor) that they don’t know enough to report an issue.

Who writes acceptance and training criteria for these machines? I’ve had custom automation built for biology research, and clear acceptance criteria and testing (at several stages of development plus when the machines were complete AND after they were installed on site) were part of the proposal. Typically these were written by the scientists (i.e. the users) and the engineers. Training manuals and initial training were part of the deliverables. And every machine has a large, red, easily accessible emergency stop button – when triggered, physical interlocks stop the machine. As I tell the staff, I’d rather have a busted machine than a busted person – when in doubt about what the machine is doing, hit the e-stop.

I do have to credit the engineers (including the software engineers) who built our machines. There are physical sensors that essentially require the user to set up the machines properly before they function; there are mistakes the user can’t make. The machines will also throw an error and refuse to function when you ask them to do certain things that are dangerous. There is no override: the user is required to repair the faults before the machines will accept any further commands.
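That no-override behavior can be sketched as a fault latch: once any fault is raised, every command is refused until all faults have been cleared. This is a minimal sketch with invented names, not the actual control software being described:

```python
class Machine:
    """Fault-latching controller: faults accumulate, and no command is
    accepted while any fault is present. There is deliberately no
    override path; the only way forward is to clear every fault."""

    def __init__(self):
        self.faults = set()

    def raise_fault(self, name):
        self.faults.add(name)

    def clear_fault(self, name):
        self.faults.discard(name)

    def run(self, command):
        if self.faults:
            # Refuse all work until the operator repairs the condition.
            raise RuntimeError(f"Faults present: {sorted(self.faults)}")
        return f"executing {command}"
```

The design choice worth noting is the absence of any `force=True` parameter: an override path, once it exists, will eventually be used routinely.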

Even with all that, the importance of QA procedures, including training, retraining, and checklists, cannot be overstated.

hat, did you read the follow-up article:

If this is a one-off negligence, why did 30% of centers who thought they would be able to participate in a clinical study fail a basic test on the accuracy of their radiation dosing?
The article also discussed a QA system in Florida which only compared the system to its calibration at installation, not to an empirical standard. Since the initial calibration was wrong, the overdosing error was propagated through dozens of treatments.

What do you think about the lack of standards for reporting and investigating errors?

The articles seem to make as clear as possible that thousands of lives are being saved by this technology, but to downplay the problems by saying it might scare some people away from treatment does a great disservice to the many people who have been harmed.

the stakes are generally lower in engineering. If I learned the lesson the hard way, at least there weren’t lives on the line.

Lives aren’t on the line in engineering? I don’t know what sort of engineering you do, but it seems to me that if the engineer screws up the plane crashes, building falls down, software over-irradiates people…not really minor failure modes.

… the lady got higher doses of radiation because the linac control software crashed? What the fuck? Crashy software is simply unacceptable when it’s being used to control millions of dollars of medical hardware. These hospitals should have had some sort of contractual guarantee that the software would work – this isn’t Windows running on your desktop, people’s lives are actually in danger here.

D C Sessions @ 22:

That’s because the reviewers are doing review as a side-line to their “real” work, and they’re not evaluated on their performance of reviews. PJ Plauger identified the solution at Whitesmiths almost 30 years ago: have a reviewer for each programmer, as immersed in finding problems as the programmer is in creating them. Contrary to expectations, this does not reduce per capita productivity — it increases it, because much less time and effort is spent on hunting bugs at later stages.

That would be a good job for a quality assurance engineer. If the company is bothering to invest in them, that is. Even though it doesn’t reduce overall per capita efficiency over time, it does reduce it *now*. And program management tends to be very focused on *now*, which is a major problem throughout the field, affecting many specialties. The recession hasn’t helped; if you’re in a real cost bind and have to lay off somebody, which one will it be: the guy writing code, or the guy reviewing it? Especially if you know you can get another code-writer to review code in his spare time? I know a QA guy who is covering all engineering (systems, hardware, software, components, test) for five different programs; you just can’t do a thorough job in that situation.

Of course, checklists got mentioned in the OP. These are useful not just on the runway or in the operating room, but also in peer review. Checklists can help make sure some of the common screwups aren’t missed, and can help mitigate some of the resource problems that I discussed above, by making it possible to do a useful (though not exhaustive) review even if you don’t know the system like the back of your hand.

Proper users of the technology should not be reliant on software. They need to know how the instrument works, and what they are doing when they use it. Sure, good software makes things more convenient to use, but I have huge concerns about attempts to make potentially dangerous technology a “black box” that anyone can use.

This is naive. How do you imagine that radiotherapy was administered prior to computer software? Do you imagine that every operator opened up the control panel and made sure that they understood the wiring? Do you suppose that they took apart the radiation source to verify that it was correctly constructed?

The real problem is that software control has not reached the level of maturity of older technologies for instrument control, and design habits that were developed for consumer devices end up being inappropriately carried over to critical medical instrumentation.

For one thing, a computer that crashes should never be used for instrument control. You would not use a piece of equipment on which the controls or meters did not work correctly. It is one thing for a consumer computer to crash; the computer manufacturer cannot anticipate what software the owner will choose to run, and the purchaser may not want to pay for the level of quality control required to prevent an occasional reboot. But an instrument is a different thing. It runs a fixed suite of software, and there is a fixed range of possible inputs. If the hardware and software are working correctly, it should never crash. If it does, that means the device is working incorrectly and its behavior cannot be predicted. I suspect that people are so used to having to reboot their home computers that they don’t realize that a computer crash in a medical instrument is a major red flag. If such a device crashes once, it should be immediately taken out of service, and not used until the problem is identified and corrected.

However, I challenge anyone to treat 40+ patients a day, day in and day out, and never accidentally put a 45 degree wedge in instead of a 60 degree wedge, or something like that.

Mistakes will happen; that is not the problem. The problem is that, knowing mistakes will happen, sufficient procedures to detect and rectify those mistakes are not put in place and/or observed.

“the stakes are generally lower in engineering. If I learned the lesson the hard way, at least there weren’t lives on the line.”

One never knows. A (long) while ago I was having a “discussion” with a VP about a possible flaw in an image capture board.
His position of “it’s not like it could hurt someone” was weakened considerably when the VP of sales walked in bragging about a new customer that made laser eye surgery devices.

It’s somewhat ironic that the use of “directed energy” has become a medical staple while it remains speculative in military applications. It’s daunting to consider the problems of introducing this hardware to front-line combat. I wrote a dialogue gag for this scenario:
“What happens if this overheats?”
“You tell me, you read the manual!”

I am reminded of Akin’s Second Law of Spacecraft Design:

2. To design a spacecraft right takes an infinite amount of effort. This is why it’s a good idea to design them to operate when some things are wrong.

I have to object to the crowd denouncing the radiation physicist, and the operator of the machines in question. And, for the record, I AM a software developer. I have come to the realization that software that is badly designed, either in its operation or user interface, is ultimately a human problem. It boils down to the famous Tire Swing cartoon, which I first saw attributed to the University of London Computer Centre Newsletter, 1972. This illustrates a multiple-step system failure which began with the first end-user contact.

This issue (of errant radiation treatments) was first highlighted a couple days ago in a post in the wonderful IT blog Freedom to Tinker, moderated by Dr. Ed Felten of Princeton University (who earned fame by divorcing Win98 from IE, which Microsoft had declared “impossible”), who correctly identified the primary cause to be poor software and user interface.

As software developers, we need to recognize that these failings are ours to own. We need to understand that the users of our software are not typically computer savvy, and sometimes not even computer literate. Well-designed software should prevent a user from doing things as profound as irradiating a cancer patient with high-energy ionizing radiation without being absolutely clear about what exactly the machine is going to do, and making sure that the operator approves of it.

And while we’re at it, we should also make sure that the damned thing doesn’t crash.

Many of the most common errors in hospitals are not high-tech at all.

Hospital infections like staph, MRSA, and VRE are out of control, and may be getting worse, leading to even worse super-bugs. It would be nice if everybody washed their hands and surfaces in hospital rooms.

And there are far too many mix-ups of meals and medications. In my own experience, I have been given meals that were not appropriate for someone with my diabetes, and I have been given medications which my records clearly showed I previously had a bad reaction to.

Everybody needs to be more careful, not just the high-tech folks.



Lives aren’t on the line in engineering? I don’t know what sort of engineering you do

That’s why I wrote “stakes are generally lower in engineering.” In my case, it’s analog bits-and-pieces for microcontrollers, or in a previous job memory interfaces for custom integrated circuits.

I sweat over the possible errors, but in general the worst risks are economic, and the great majority of those are to $EMPLOYER’s profits. The stress level is nothing like that of when I’m doing e.g. a spinal immobilization, or even a simple field dressing for a laceration.

I’m not sure software fail-safes and warning dialogues are exactly analogous to surgery checklists.

It’s been abundantly demonstrated time and time again, that users don’t read text, and that they become habituated to clicking OK on error messages. Ever seen the “I’m a Mac/I’m a PC” commercial poking fun at Windows Vista prompts? All those prompts were designed to act as speed bumps to warn you that you’re doing something dangerous. How often do YOU actually read the text, or give more than half a second to consider whether you’re doing the right thing?

Your brain is really, really good at “seeing” what it expects to see, which is why medication dosage errors still happen despite procedures designed to catch them. If you don’t have to take some kind of complex action to verify the data, the risk of errors goes way up. Initialing a form or medication vial, or reading an error message and clicking “OK”, doesn’t cut it, because it’s too easy to do that on autopilot. Counting sponges works a lot better, because it takes longer and requires physical movement. However, this is hard to do with software, because your interaction capabilities are so limited. You could have the user re-enter data that’s outside normal parameters, but that presupposes some external processes that the programmer can’t anticipate, such as having a paper copy of the data for reference.
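One way to force a deliberate action, rather than a reflexive OK click, is to require independent re-entry of any out-of-range value. A minimal sketch, with an assumed `confirm_fn` callback standing in for whatever re-entry prompt the interface provides:

```python
def accept_dose(entered, expected_range, confirm_fn):
    """Accept a dose entry; values outside the expected range must be
    independently re-entered (via confirm_fn) and must match exactly.

    confirm_fn is a zero-argument callable that prompts the operator to
    type the value again and returns what they typed.
    """
    low, high = expected_range
    if low <= entered <= high:
        return entered  # In range: no extra friction.
    # Out of range: demand a second, independent entry, not a click.
    reentered = confirm_fn()
    if reentered != entered:
        raise ValueError("Confirmation mismatch: entry rejected")
    return entered
```

Re-typing the value engages the operator’s attention in a way that clicking “OK” does not, which is exactly the point made above about complex actions versus autopilot.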

So, let’s see:

1. the software engineers should have written in better handling of recovery after a crash.

2. The software team management should have chosen an operating system that doesn’t crash so much and that does not have email capability built in by default.

3. The technicians should have more conscientiously followed the protocols surgical teams follow to double check things before starting a procedure.

4. The hospital should not have put the techs under time pressure.

But most importantly: the existence of a safeguard at one level DOES NOT MEAN that you can be lax about safety at another level. Software safeguards do not mean the tech can be lax. Vigilant techs do not mean the software engineers can be lax. That’s the big lesson here.

Emma B @ 44:

It’s been abundantly demonstrated time and time again, that users don’t read text, and that they become habituated to clicking OK on error messages.

This is very true, and yet the “click to confirm” model has insinuated itself into every field of software. It’s useless; ask the user for confirmation every time, and they simply ignore it. That doesn’t mean confirmation doesn’t work, of course — just that it should be reserved for extraordinary cases.

Some cases are particularly ridiculous. The timecard application I use at work has a screen where it asks for confirmation that Friday time has been entered correctly, complete with dire warnings of what may happen if it is not entered correctly. (We have kind of an odd shift arrangement on Fridays, so there is significant potential for screwup.) It prompts for confirmation — but doesn’t give you the chance to see what you entered! You have to go back a screen to see if it was correct, then hit “Submit” again, where you get the same stupid prompt again. It’s the most useless version of these that I’ve seen (outside of The Daily WTF, anyway).

BTW, I do highly recommend The Daily WTF for examples of code gone wrong, usually presented in a humorous way.

On a related note, I’m interested to see how well surgery checklists will be thought to work ten years from now. I suspect that a large part of their current success is because people follow new procedures much more carefully, whatever the new procedure might happen to be. Over time, I think the autopilot effect will reassert itself.

I also wonder just how many errors still occur in aviation, despite its widespread use of checklists. How often do pilots notice something that they overlooked in their pre-flight routines? We only hear about such errors when they actually cause major incidents, but I imagine they occur more frequently than that. For example, pilots don’t actually land with gear up very often, and the FAA knows about it when they do — but how often do they overlook the gear-down item on the checklist, but catch the error in time and lower the gear?

It’s not likely anyone knows except the pilots themselves, but it’s directly relevant to the usefulness metrics of checklists — the landing-gear situation would be an overall aviation success, but a checklist failure. If the mechanics of flying offer more built-in feedback than those of surgery (i.e. the plane handles differently if the gear is up), medical checklists ultimately won’t be as effective in reducing the overall occurrence of problems, and may even create a false sense of security in surgeons who rely too heavily on following the checklist.

Proper users of the technology should not be reliant on software. They need to know how the instrument works, and what they are doing when they use it.

Software is part of the instrument since it’s used to aim and control the machines. If you’re working on an interface for a machine that could potentially kill a human with a massive dose of radiation, you need more than a confirmation message when something goes wrong. The whole screen needs to be blocked off by a giant warning with very big letters warning that the beam is now lethal.

Many of today’s complex tools require engineers to maintain and fix, and it’s simply unrealistic to expect doctors to also become experts in medical engineering. The software is there to tell the doctors and techs what’s wrong, where and what should be done to fix it. That’s its function.

A lot of good ideas have been suggested for improving the safety and effectiveness of using computerized medical treatment devices such as radiation imaging and treatment systems. For once, I tend to agree with most of them.

One term that has been mentioned several times is “fail-safe”. This term has been redefined and misused so much in the last 40 or 50 years that it has probably irretrievably lost its original meaning. However, I think that original concept is germane to this discussion, so I would like to review it.

Fail-safe was a design philosophy that, I think, was originally developed in the 1940s and 1950s to guide the design of the mechanical and electrical components of nuclear weapons systems. Far from meaning fool-proof, error-free, and 100% reliable, which seems to be the current understanding, at least for the lay person, fail-safe meant that systems were not perfect and sooner or later something was going to fail. Because nuclear weapons were inherently so dangerous, the designers wanted to be absolutely sure that unless everything worked correctly, the system would not work. So, if any component or subsystem failed or did not work correctly, the weapon would not detonate and would not give a significant nuclear yield.

So, in the Jerome-Parks case, if the system had been designed to operate in a fail-safe manner, when the computer did fail and crash after the operator entered the step one choice for the collimator, the software would not just assume some default value and go on to step two. It would make the operator go back to step one and reenter the value. This happens to me quite often when I try to fill out forms online to register software or make a payment or something like that, so at least it is possible to implement.
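The fail-safe behavior described, never substituting a default value and discarding partial state after a crash, might look like this in outline. The field names and class shape are invented for illustration, not taken from any real treatment-planning system:

```python
class PlanEntry:
    """Fail-safe treatment-plan entry: a crash or missing field never
    falls back to a default; the workflow refuses to advance until every
    step has been explicitly entered."""

    REQUIRED = ("collimator", "dose", "field_size")

    def __init__(self):
        self.values = {}

    def enter(self, field, value):
        if field not in self.REQUIRED:
            raise KeyError(f"Unknown field: {field}")
        if value is None:
            # No silent defaults: the operator must supply every value.
            raise ValueError(f"{field} must be entered explicitly")
        self.values[field] = value

    def recover_from_crash(self):
        # Fail-safe recovery: discard partial state rather than trust it.
        self.values.clear()

    def ready(self):
        return all(f in self.values for f in self.REQUIRED)
```

After `recover_from_crash()`, the operator is forced back to step one, which is exactly the behavior argued for above: the system cannot proceed on an assumed collimator setting.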

An intimate understanding of the underlying software running the system by the operator is probably unnecessary and way too expensive to even hope to achieve. However, certain things should be red flag items for the operator.

If you as a surgeon were getting ready to operate on a patient and the overhead light that illuminates the operating area kept going out every five minutes, I am sure that you would not just press on with the surgery. Instead, you would insist that the light be replaced or fixed first.

Similarly, if the computer that was running the linear accelerator kept repeatedly crashing, the operator should have taken that as a red flag that something was wrong and insisted that it get fixed. When the overall work environment, including financial pressures, management philosophy and operator training, encourages the user to press on in the face of adversity, we only encourage further errors.

As these two cases show, some of those errors will be deadly.

That would be a good job for a quality assurance engineer. If the company is bothering to invest in them, that is. Even though it doesn’t reduce overall per capita efficiency over time, it does reduce it *now*.

Nope — that’s the funny thing. More QA actually reduces time to market, because the effort spent up-front in getting it right the first time is more than saved in reduced bughunting prior to release.

There’s boatloads of data on this point, so any project manager who pretends otherwise is (IMNSHO) at best incompetent and at worst negligent. “Thinking with the wrong head” kind of mistake: drawing conclusions from emotional biases rather than facts and reasoning.

Calli, I’m a programmer too, and I fight against this with my own clients all the time, because there’s nothing more useless than yet-another-popup. When possible, it’s always better to let the user recover from the action than to “warn” them in advance.

Part of the problem is that programmers are not your average users, and interface design strategies which seem perfectly reasonable to us don’t actually work so well for, well, normal people. Any of you non-programmers remember the old DOS WordPerfect, where you had to use all the function keys? Well, we voluntarily use text editors which make WordPerfect look seamlessly intuitive (and have Very Strong Opinions about whether it’s better to interact with said text editors by whacking Escape vs Control). Being good at the mechanics of programming has nothing to do with being a good designer — I work with a guy whose code is a thing of beauty and joy, but whenever possible, we keep him strictly away from anything display-related. His code is bulletproof and passes automated acceptance tests flawlessly, but his design skills are so bad that it can actually encourage users to make data-entry mistakes.

Too many software companies make the mistake of letting the programmers design the interface, rather than involving usability specialists (which are NOT the same thing as graphic designers, for that matter).

My grandfather is a physicist who was an early patient of a medical accelerator that he helped design. Apparently, that was an educational experience for all involved. I don’t know all the details, but I do know that some of the procedures were tweaked a little.

Not that I’m saying people need to be treated with their own machines, but that there was at least a little bit of a gap between the ways of the pure research lab guys and actual clinical practice. It seems like that gap can leave room for tragedy.

D. C. Sessions:

Nope — that’s the funny thing. More QA actually reduces time to market, because the effort spent up-front in getting it right the first time is more than saved in reduced bughunting prior to release.

I’m not talking time-to-release, even. I’m talking time to full implementation. At least from what I’ve seen, there is a tendency to focus on getting to full implementation as fast as possible, and worry about the bug hunting later. “Let’s just get it working, and deal with the details later.” Admittedly, some organizations are better about this than others. When I was working on satellite applications, they were extremely aware of the need to find bugs as early as possible. After all, it’s a product which will disappear from human view (tucked away inside a spacecraft which will fly out above the atmosphere) before it even *begins* to carry out its designed function; there is no “later” in which to hunt bugs. And yet it is still possible to get a spacecraft from first concept to flight in just a year.

But in other areas, there is too much tendency to lean on the “later”. The result is far less efficient, and since program management is rarely happy about slipping the release schedule, what ends up getting trimmed is the bug hunt. The “later” goes away — so not only have they arranged for the bug hunting work to take longer (because of all the escaped defects that need to be corrected), but they have also reduced the amount of time available in which to do it. It’s a deep irony.

Emma B — I *totally* agree with you about having specialists design user interfaces. The place where I had my biggest argument over safety was a UI. The engineer didn’t believe it could be critical. Yet this was a weapon. Usability could determine who lives and who dies, without a terribly large stretch of the imagination.

Calli, I sympathize. Please see previous comments about “incompetent” and “negligence.”

> When the computer kept crashing, […] did not realize that her instructions for the collimator had not been saved, state records show. She proceeded as though the problem had been fixed.

As a programmer this strikes me as an incredible flaw. Many computer systems are quite capable of ensuring that data is consistent when crashes occur (filesystems, databases). Either they lose the last input but recognise this, or they write the data out such that it is obvious to the system that an error has occurred.

The system should have ensured that each step could be taken if and only if the previous inputs were valid.
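
The standard technique for this is well known: write the settings to a temporary file, flush to disk, atomically rename into place, and verify a checksum on load so a torn or stale file is detected instead of silently trusted. A minimal sketch (filenames and the record format are invented for illustration):

```python
# Crash-consistent settings storage: a half-written or corrupted file is
# detected on load, so the caller must re-prompt the operator rather than
# proceed with a default. Illustrative only, not any real device's scheme.

import hashlib
import json
import os

def save_settings(path, settings):
    payload = json.dumps(settings, sort_keys=True)
    record = {"payload": payload,
              "sha256": hashlib.sha256(payload.encode()).hexdigest()}
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(record, f)
        f.flush()
        os.fsync(f.fileno())   # force the bytes to disk before the rename
    os.replace(tmp, path)      # atomic rename: readers see old or new, never half

def load_settings(path):
    # Returns (settings, ok). ok is False when the file is missing,
    # unparseable, or fails its checksum.
    try:
        with open(path) as f:
            record = json.load(f)
    except (OSError, ValueError):
        return None, False
    if not isinstance(record, dict) or "payload" not in record or "sha256" not in record:
        return None, False
    if hashlib.sha256(record["payload"].encode()).hexdigest() != record["sha256"]:
        return None, False
    return json.loads(record["payload"]), True
```

Nothing here is exotic; filesystems and databases have used write-then-rename and checksums for decades, which is what makes its absence in a radiation therapy console so hard to excuse.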

Secondly, a branch of computer science called “formal methods” can be used to validate the correctness of software. It’s required for the highest EAL (Evaluation Assurance Level) certification under the Common Criteria, a standard for computer security.
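
One family of formal methods is model checking: exhaustively exploring every reachable state of a system and verifying a safety property in all of them. Real tools work on far larger state spaces, but the idea fits in a few lines. Here is a toy sketch (the interlock model and event names are invented) that proves, by enumeration, that a small state machine can never have the beam on while the collimator is unset:

```python
# Brute-force model check of a toy beam/collimator interlock.
# State = (collimator_set, beam_on); both booleans.

EVENTS = ["set_collimator", "clear_collimator", "beam_start", "beam_stop"]

def step(state, event):
    """Transition function; events the interlock rejects leave the state unchanged."""
    collimator_set, beam_on = state
    if event == "set_collimator" and not beam_on:
        return (True, beam_on)
    if event == "clear_collimator" and not beam_on:
        return (False, beam_on)
    if event == "beam_start" and collimator_set:
        return (collimator_set, True)
    if event == "beam_stop":
        return (collimator_set, False)
    return state

def reachable_states(initial):
    # Exhaustively enumerate every state reachable from `initial`.
    seen, frontier = {initial}, [initial]
    while frontier:
        state = frontier.pop()
        for event in EVENTS:
            nxt = step(state, event)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def beam_never_on_without_collimator(initial=(False, False)):
    # Safety property, checked in every reachable state.
    return all(collimator or not beam
               for collimator, beam in reachable_states(initial))
```

Unlike testing, which samples behaviors, this checks all of them, which is why certification regimes lean on it for the most critical software.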

Finally, software can still be affected by hardware glitches. Everything from bad ground lines to flaky capacitors to failing memory modules can result in software malfunctions. Those types of errors are generally a lot harder to handle in software.

They may be harder to handle, but the software should in most cases be able to either cope or detect the faulty condition. That’s easier said than done, of course. My field is primarily in embedded computing, where it’s easier to assure that. One of the downsides of using general purpose computers (e.g. off-the-shelf Windows boxes) is that the software engineers have an impossible task in determining all possible hardware failure modes. But such computers are much cheaper to develop, reducing the cost of the finished product. Give and take. The trend for code reuse can also be a problem; though I’m very much in favor of not reinventing the wheel, it can propagate problems, especially if reused code is not reviewed or tested as thoroughly, on the dangerous assumption that it’s been reviewed and tested already on another platform.
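
One classic embedded-style defense against the memory failures mentioned above is to keep a redundant, bit-inverted copy of every safety-critical value, and check that the copies still agree before the value is used. This is an illustrative sketch, not any particular device’s scheme:

```python
# Detect memory corruption of a critical value by storing its bitwise
# complement alongside it; a flipped bit in either copy breaks the match.
# Assumes non-negative integer values that fit in 32 bits.

class MemoryFault(Exception):
    pass

class GuardedValue:
    def __init__(self, value):
        self.store(value)

    def store(self, value):
        self._value = value
        self._shadow = ~value & 0xFFFFFFFF  # bit-inverted shadow copy

    def load(self):
        # Refuse to return the value unless the two copies still agree.
        if (~self._value & 0xFFFFFFFF) != self._shadow:
            raise MemoryFault("critical value corrupted in memory")
        return self._value
```

It can’t repair the value (schemes like ECC or triple redundancy can), but detection alone is enough to stop the machine instead of delivering a garbage dose.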

I would recommend the book “Set Phasers on Stun” about radiation overdoses due to poorly designed machine/human interfaces.

What happened to good old fashioned human factors experiments with this equipment? Sorry, I haven’t read the comments and don’t know if anyone else mentioned this.

Sometimes it doesn’t even matter if the radiation is targeted correctly and given in the right amounts. Ask my mother, who was hospitalized (after receiving her 19th dose of 25) for 12 days with a high fever (102.5) and blood counts so shockingly low that the hospital staff was amazed she stayed alive. She even went back to finish the radiation two months after recovering from her radiation treatment. The radiation oncologist was insistent on her finishing treatment, but was much more careful in monitoring her bone marrow. Whether she needed the radiation or not is a matter of some contention, too, as they found no cancer cells in CAT or PET scans done after both surgery and chemo.

The upshot is that radiation can kill anyway, and that software is written without failsafes or verification unless you’ve got a government contract. I’ve been in the business, and the last thing ever considered is heavy bug testing before release. Verifying the software and checking all areas of safety takes as long as writing it. Too much time/money lost doing it. It’s like the Pinto: a certain number of deaths are acceptable, the money’s got to roll in first, the fix comes later.

I’m not talking time-to-release, even. I’m talking time to full implementation. At least from what I’ve seen, there is a tendency to focus on getting to full implementation as fast as possible, and worry about the bug hunting later. “Let’s just get it working, and deal with the details later.”

This results in a product with some important features which are so buggy as to be entirely unusable. In other words, something which is not really a full implementation, but a partial implementation buoyed up by delusion.

As an RN who has worked my entire career in ICUs I am floored. But on reflection I shouldn’t be. Some of the (non-critical) machines, when unplugged, need to have two or three switches turned off before they stop alarming…but when you unplug a ventilator (which I HAVE done accidentally)…nothing. No alarm, nothing. You would think there would be a fail-safe alarm for such an important piece of equipment, but there is not. Fortunately, I noticed right away that I had unplugged the wrong piece of equipment and plugged it back in again with no harm done to my patient.
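
The fail-safe alarm that ventilator lacks is usually built as a watchdog: the device sends a periodic heartbeat to an independently powered monitor, and silence, for any reason, including a pulled plug, raises the alarm. A minimal sketch with invented names and timings:

```python
# Watchdog monitor: alarms when heartbeats stop arriving. The fail-safe
# default is that "no heartbeat ever received" already counts as an alarm,
# so a monitor that was never connected cannot sit silently green.

class Watchdog:
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_heartbeat = None

    def heartbeat(self, now_s):
        # Called by the monitored device every cycle while it has power.
        self.last_heartbeat = now_s

    def alarm_active(self, now_s):
        if self.last_heartbeat is None:
            return True  # never heard from the device: alarm
        return now_s - self.last_heartbeat > self.timeout_s
```

The point is that the alarm condition is the absence of a signal rather than the presence of one, so losing power to the device is exactly the case the design catches.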

I am a radiation oncologist, and when I read the NYT article I was horrified. Mr. Bogdanich does a great job capturing the suffering of two people in NYC who received radiation treatment that went terribly wrong. But then he offers zero context, and his eloquence covers up his lack of investigative depth.

If the article were balanced, it would not lead with these two tragedies, then let readers assume all 621 errors were identical, and then randomly mention that radiation injuries (unspecified type) occur in 5% of patients.

While I agree with many concerns expressed by both Orac and commenters, I am surprised that all of your comments neglect to ask an important question: what were those 621 errors, and in how many people did these errors occur? The Times claims to have looked through thousands of documents in this exposé. They did not publish these numbers. So if you want them, type ‘new york state cancer incidence’ into Google or Bing and you can get them. I’ll save you time: the 621 errors occurred among ~470,000 NY cancer patients from 2001-2008.

Is that what you took away from this article? An error rate of 0.1%? I doubt it.

I do think it’s a cautionary tale about rapidly adopting new technologies, but this happens in surgery too. Robotic surgery doesn’t make it better. It’s still a medical procedure performed and interpreted by humans, with the potential for error. I think the vast majority of radiation oncologists fully appreciate the importance of QA with checklists and safeguards. Part of the radiation oncology board exams includes knowing the definition of a radiation therapy misadministration and the requirement to report it within 24 hours of discovery.

There is another distinct difference between radiation oncology and surgery. Because it’s a noninvasive treatment delivered multiple times (over 40 treatments for prostate cancer), it allows adjustments to the treatment plan should even a serious treatment delivery problem arise. This does not excuse making any mistake, but it should put into perspective that not every one of those 621 errors results in patient harm, let alone lethal harm like that suffered by poor Mr. Jerome-Parks and Ms. Jn-Charles.

I have shared the article with my staff, partly because it will scare the hell out of my current patients but also to reinforce the importance of communication and identifying errors. But when an article confuses radiation exposure from diagnostic CT scans with radiation therapy, and fails to provide context, I feel obligated to also share with contributors and other readers interested in this topic some sense of perspective, because the NY Times doesn’t.

llewelly — that is an eloquent way of putting it, and if I am ever in the position of being able to argue against the practice on the spot, I think I will borrow it.

mary podlesak:

What happened to good old fashioned human factors experiments with this equipment? Sorry, I haven’t read the comments and don’t know if anyone else mentioned this.

I alluded to it, but not specifically; mostly folks have been talking about design defects and inadequate review. I talked a bit about how a poorly designed test will fail to catch some critical failures. Human factors are very easily overlooked, especially because most of the time, the formal testing is performed by test engineers who are familiar with the equipment. Thus, they may not anticipate the sort of improper usage which may occur in the field. They’ll anticipate a lot of it — that’s their job. But it’s hard to anticipate how an end user might unwittingly abuse an unfamiliar system. People can be quite creative when they’re earnestly trying to get their job done.

Matthew Katz,
You’re playing games with denominators. Not every cancer patient gets this type of treatment. It’s the equivalent of calculating the rate of problems in open heart surgery relative to all people who ever had a cardiac arrest.

Also, as the follow-up article makes abundantly clear, there is a patchwork of rules regarding radiation overdoses, with many states and the federal government requiring no centralized reporting of accidents for many of these cases. The article noted that many of the cases the Times uncovered came to light only because they entered the malpractice legal system; the regulatory agencies didn’t even know about some of them until the NY Times came calling. How can you claim there are good numbers on accidents when no agency is even consistently collecting them?

As for on-site QA, in the follow-up article did you read of the device that was calibrated at installation, after which all QA merely confirmed that the calibration hadn’t changed? Since the initial calibration was off, every patient was receiving higher than prescribed doses. Do any of your machines have that type of QA?

Therac-25 entered the case study books after 2 deaths. This is already a bigger problem, and to downplay its seriousness is to delay a solution.

As for on-site QA, in the follow-up article did you read of the device that was calibrated at installation, after which all QA merely confirmed that the calibration hadn’t changed? Since the initial calibration was off, every patient was receiving higher than prescribed doses. Do any of your machines have that type of QA?

For a more famous but less deadly example of this type of problem, I submit the Hubble Space Telescope.

Hubble’s primary mirror was ground with a super-sophisticated laser-guided system to ensure it would be the most precise mirror of this size. Afterwards, it went through a meticulous validation process to ensure correct grinding.

Problem? The same instrument was used to test the mirror as was used to calibrate the grinding machine. And though the instrument was perfectly fine, there was a tiny paint fleck on its mount which reflected just a tiny little bit of light, skewing both the calibration and the validation in exactly the same way.

Really bad problem: they didn’t figure this out until after Hubble saw first light on-orbit. And though it’s designed for on-orbit servicing, there’s no way you can change out a massive mirror, even if you could assure it would be transportable. In theory, the Hubble could be retrieved and returned to Earth, but the cost would be prohibitive. So instead, the same calibration instrument with the same paint fleck was used to create a device with the exact *inverse* problem: a lens, installed in a device called COSTAR, which could be installed in the light path between the mirror and the cameras, precisely correcting the defect.

COSTAR was installed on the first Hubble servicing mission, and then deactivated on the second, when the cameras themselves were upgraded. A later mission brought it back to Earth, and it is now at the Smithsonian Institution. It’s not needed anymore, because all of Hubble’s cameras now correct for the defect internally. It’s awesome — but if they hadn’t used the same instrument for validation as for calibration, it wouldn’t have been necessary in the first place. (Mind you, there was a reason they used the same instrument; it’s very specialized equipment and so it’s tough to get more than one.)

1) A large percentage of patients receive IMRT treatments (what Mr. Jerome-Parks had). Nearly all the rest of the patients who receive external beam treatments get treatments similar to those Ms. Jn-Charles had. Your comparison is not apt. Even if one assumed all 621 errors were made in IMRT treatments (which is ridiculously unlikely), and assuming that IMRT treatments made up 40% of the 13.6 million treatments in the state of New York in the time cited (this is conservative), errors still only occurred in considerably less than 1% of treatments.

2) Many treatment centers participate in calibration verifications, both by comparing output of their machines with that at other institutions, and by working with NIST or one of the three accredited secondary calibration labs in the country. Not to mention the regular internal beam output verification procedures. Despite how the NYT is portraying things, most centers are neither full of villains nor idiots.

Here’s a page that says 50-60% of cancer patients receive radiation therapy:
Here’s a website that says a center gives IMRT to 1/3 of its patients:

That already knocks your 0.1% to 0.2-0.3%. For a complex condition, one in 333 patients getting an adverse side effect might be reasonable, but for something completely preventable it is not. For pregnancy, which is much more complex, the US maternal death rate is around 1/9000, and every death still prompts a major review in a hospital.

Downplaying completely preventable deaths, particularly when multiple centers are making the same mistakes does a disservice to both science and your patients.

On unforgivable medical errors, Andrew Wakefield might lose his license:

“Dr Wakefield faces being struck off the medical register after the panel decided the allegations against him could amount to serious professional misconduct, which will be decided at a later date.”

Wow, 3 different items I was thinking about mentioning, and they’re all here in this post and comments!

1. Medical errors and checklists: My dad had surgery Tuesday (doing fine so far, thanks). Doppler had found a clot in his left arm, so they hung a sign on the chart in his room saying no BP (blood pressure tests) on the left arm. As he was being readied to be wheeled down for surgery, my dad (who is still unbelievably sharp at 91) asked my sister to attach the no-BP sign to the gurney, so the people in the OR would be reminded. The nurse dismissed this, saying she’d already told the people in the OR. My sister went ahead and did as my dad asked, and made a new no-BP sign for his room.

Just human nature to say “It’s all right, I took care of that already,” instead of making absolutely sure. When you can say “I know you remember, but procedure says we have to do this checklist anyhow,” it gets past egos and ensures people are reminded of what they need to know.

2. “Non-critical” engineering: At a company I worked for, I had to review a 180-page contract (it was a contract with General Motors, and they packed every clause you can think of in there, including one saying you had to provide your own porta-potties so your people didn’t slip and fall in their bathrooms). The last page was an exhibit showing the control panel for the equipment we were installing, a system to take old paint off the hooks carrying car doors to the spray room.

A paint removal system isn’t anyone’s idea of life and death, of course. But I noticed that the way the panel was designed, there were two white indicator lights positioned together in the lower left corner of the panel. One showed the system was on and ready to operate. The other showed there was a ground fault, indicating that if you touched the metal control panel there was a good chance you’d die of electrocution.

I asked our engineers to redesign the panel so the “System On” and “Sudden Death” lights were different colors in different locations.

3. Calli Arcale writes re Hubble:

Problem? The same instrument was used to test the mirror as was used to calibrate the grinding machine. And though the instrument was perfectly fine, there was a tiny paint fleck on its mount which reflected just a tiny little bit of light, skewing both the calibration and the validation in exactly the same way.

Really bad problem: they didn’t figure this out until after Hubble saw first light on-orbit.

Years ago I used to do a lot of flying back and forth between the eastern U.S. and western Canada, long plane trips that left me with plenty of time to read. On one trip where I’d exhausted all my own reading material and had to resort to the airline’s magazine (might have been Delta?), I came across an article on the building of the Hubble. This article interviewed the fellow at the contractor who headed the construction. He related a heart-stopping moment in the last stages of mirror grinding and testing where he was called to the plant and given readings that apparently showed there had been an error in the way the mirror was ground, making months and months of careful work worthless. They would have to pretty well start over.

Luckily, after frantic searching and checking, they found the source of the “spurious” errors and were able to continue on and finish the mirror.

Of course later, after the telescope was in orbit and they found the errors weren’t spurious at all, I really wished I could get hold of that airline magazine article.

So, I haven’t read the comments, so this point might be redundant: I seem to remember when I read the Times article, that they had a chart of the causes of errors, and software was pretty low down on the list — below computer hardware failure even, if I recall. As a software engineer, that made me feel somewhat vindicated (especially because I am currently trying to convince the business group that a rash of errors observed in the field is most likely due to faulty hardware rather than bugs in the software, hahaha).

Still, as you say, it’s absolutely unacceptable for equipment like this to have buggy or non-failsafe software. Unconscionable.

When my wife was giving birth, the labor took longer than expected and in some of the downtime, being the geek that I am, I asked the midwife about interpreting some of the data on the fetal monitor. Interesting stuff. In addition, part of the reason the labor took so long is that she got induced — we believe unnecessarily — due to a postdates ultrasound that supposedly showed indications of problems that all turned out to be negative when the boy finally jumped out. Again being the geek that I am, I had been paying close attention to what the ultrasound operator was doing, and in retrospect I believe I know the error she made that led to at least one of the false positives — and better software might have prevented it.

The reason I bring all this up is that, also combined with what I saw from scoping out the IV controls, etc., I suspect that there may be an unconscious desire on the part of the operators of medical devices for those devices to be a little inscrutable. I’m sure nobody thinks this consciously, but… imagine you are an ultrasound technician, not the sonographer who actually interprets it and gets the big bucks, but you just run the equipment. Knowing how to work this arcane machine might just give you a sense of empowerment that would otherwise be missing in such a job.

I dunno, just a thought. I definitely know people around where I work who are more proud of their privileged ability to understand a complicated system than they are concerned about unnecessary complexity in said system; hell, I’ve fallen victim to that myself on occasion.

I don’t think the systems are designed to be deliberately inscrutable. If you talk to the designers of such systems, they usually express amazement that anybody finds them puzzling at all (since it would be an affront to their pride if people found them difficult to use). They’ll do this even when it’s plainly obvious that the equipment *is* hard to use, because their pride is involved.

The real source of inscrutability is a combination of several factors: 1) this is a young field, and hasn’t worked out the best way to organize these interfaces, 2) the machines really *are* complicated (there is a lot of information that needs to be presented somehow), and 3) the designers of the interface do not use it, and so will not perfectly anticipate the user’s needs.

Jud: that’s fascinating, and if you do find the article, I’d be very interested in reading more! I always like learning more of the details of spacecraft. 😉

@James Sweet

I wonder, do reiki healers have similar checklists to make sure they don’t accidentally move the “ki” on the wrong side of the patient?

One reason I heard from a proponent of reiki at Spaulding Rehabilitation Hospital (part of the Partners HealthCare network that includes Massachusetts General Hospital), is that there are no side effects from it because the energy only flows as needed.

I’d missed that chart.
In NY from 2001-2009:
284 cases of “missed all or part of intended target”
255 cases of “wrong dose given”
50 cases of “wrong patient treated”!?!?!?!??!
30 cases other
The chart notes that many cases in NY go unreported so this is a minimum incidence list.

I assume wrong patient treated means they loaded up the wrong dose protocol. It’s probably easier to do than operating on the wrong person, but still completely unacceptable.
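
An identity interlock of the kind that would block those “wrong patient treated” cases is conceptually trivial: the console refuses to load a treatment plan unless the ID scanned from the patient’s wristband matches the ID embedded in the plan. A minimal sketch (the field names and ID format are invented):

```python
# Refuse to load a treatment plan whose patient ID does not match the
# scanned wristband ID. Illustrative only; real systems use barcode or
# RFID scans plus a second human check.

class PatientMismatch(Exception):
    pass

def load_plan_for_patient(plan, scanned_id):
    # Require an exact match after normalizing whitespace and case, so
    # trivial entry differences don't cause false alarms.
    plan_id = plan["patient_id"].strip().upper()
    wristband_id = scanned_id.strip().upper()
    if plan_id != wristband_id:
        raise PatientMismatch(
            f"plan is for {plan_id}, wristband reads {wristband_id}"
        )
    return plan["beam_settings"]
```

The hard part isn’t the code, it’s the workflow discipline to make the scan mandatory rather than skippable when the clinic is busy.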

I also now notice more articles listing some of the case studies:
and regarding the lack of oversight in the inspection process

Calli @#73: I’ve tried all the free and relatively sedentary means of locating the article that I can think of, and unfortunately no dice. 🙁 ISTM airline magazines must be a fairly specialized niche of the publication world.

At this point, my guess is that the only people who can still get hold of it are the author and his/her mother.

Guesstimates based on what I can still recall, for anyone interested in doing more legwork:

(1) Date – This would have been within a year prior to the Hubble launch, but definitely prior, not after.

(2) Airline – Very possibly Delta, maybe American, less likely United, almost certainly not USAir or Northwest. I was also flying Canadian Airlines in those days, but I think it was a U.S. carrier.

(3) There is no #3. 😉

Not much to go on, I know – sorry.

Apologies for going on a bit about a tangential (but fascinating) subject. Here’s a link to NASA’s report of the Hubble screwup, describing how indications of flaws in the mirror from a less sensitive testing instrument were eventually discounted in light of the fact that the mirror passed tests from a more sensitive instrument (which itself turned out to be flawed):

It’s that sequence of alarm by Perkin-Elmer management at the indications of problems, followed by relief at passing what were viewed as more reliable tests, that was portrayed rather dramatically in the airline magazine article I read.

bsci, I agree with you in many respects. Safety protocols should be designed for zero error rate, which is the goal but rarely achievable. And my point was simply that the current reported error rate should have been presented, not that errors are acceptable. But to say that I’m downplaying it by presenting the denominator doesn’t make sense. Already, at least one blogger took away that the error rate was 5% based upon the first article.

What should not be a surprise is that the three main professional societies for therapists, physicists and radiation oncologists all support quality standards. In fact, there is a bill in the House, HR 3652, supporting minimum quality and credentialing standards.

I just made a Facebook page to join if you want to support the bill as well.

bsci, rather than personal attacks on me why don’t you join me in trying to accomplish something productive?

While there is quite a bit of software problem attention here, I think a few commenters “get it” – the system failure in the practice of medicine is the culture and our human system, not the technology. Aviation safety culture evolved amid rancorous objections from aviators to the imposition of mundane procedures, of which checklists are but a small part, but those procedures are now ingrained. Those who depart from safety patterns are identified and either corrected or removed from the system. NOTHING like that happens in medicine now in most venues.

The CRM link far above to WIKI seems to have met a sad fate. Whether due to human or software error, I can’t say. Without embedding a link, I’ll offer to those that are interested in one approach to applying aviation safety to medicine: take a gander at

This is a program developed by pilots and flight surgeons and it works. There are other folks hawking similar programs, but I warn you that the ones that simply impose a checklist are NOT implementing the entire system changes needed. Orac’s own personal experience with checklists and history of attitude towards them shows his institution did not attempt a systemic change, but a band-aid approach.

Encouraging feedback from all crew members AND paying attention to it is another key to aviation safety. To get back to the radiation accidents: I wonder if anyone in the software development, implementation and use process noticed potential for errors and did not speak up or was ignored, or if all involved were encouraged to look for problems and speak up daily as part of their work process.

I didn’t intend to make a personal attack. I was criticizing your explanation. Sorry if it seemed personal.
That said, safety protocols can’t eliminate all errors, but certain types can and should be eliminated and every single error of that type is a big deal. Masking them in percentages downplays that point. An error where someone gets the treatment aimed for someone else is never acceptable. An error where radiation is delivered with no shielding is never acceptable.

As for people developing safety protocols, that’s all well and good, but it shouldn’t have taken this long, and it seems this field is decades behind other areas of medicine regarding basic safety procedures (i.e., making sure the right patient gets treatment). True, there are special safety issues in the field, but its practitioners seem to be behind everyone else in terms of common safety procedures.

Was the interface running Windows?

I know that the original classic radiation machine error was not Windows, but this sure sounds like it. Yet another example of how COTS machinery kills and does NOT solve the problems that your manager tells you it does. Unless your job is to provide cheap word processing for a long hallway of secretaries, in which case it’s fine.

It’s long past time we boycott windows as a process control system of any kind.

I’m curious how much formal risk assessment (PRA/PSA, FMEA) is done on these devices, especially on the software side. My experience in the nuclear industry leads me to view QA as too often an exercise in bureaucracy and compliance paperwork which often adds little but cost and delay. While domain experts must be involved in product design and acceptance testing, I don’t believe they add to code quality, if only because expecting someone to be good at both (say) radiation oncology and software development is a tall order. If my experience drafting QA packages for code written by very talented engineers is typical, domain-expert-written code is functional but gruesome. A lot may have to do with the expressiveness of the underlying language (in this case Fortran 77 with scattered bits of Fortran 90/95); the engineers are experienced coders, but they haven’t changed their design methodology in about 20 years, and that has an effect on what gets produced.
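
For readers unfamiliar with FMEA: each failure mode is scored on severity, occurrence, and detectability (conventionally 1-10 each), and the scores are multiplied into a Risk Priority Number used to rank what to fix first. A small sketch (the failure modes and scores below are invented examples, not from any real device analysis):

```python
# Toy FMEA ranking: RPN = severity * occurrence * detection.
# Higher detection scores mean the failure is HARDER to detect,
# so silent failures rank worse.

def risk_priority(severity, occurrence, detection):
    for score in (severity, occurrence, detection):
        assert 1 <= score <= 10, "FMEA scores are conventionally 1-10"
    return severity * occurrence * detection

failure_modes = [
    # (description, severity, occurrence, detection)
    ("software crash loses collimator setting silently", 10, 4, 8),
    ("beam output drifts from initial calibration",       9, 3, 7),
    ("operator loads plan for wrong patient",            10, 2, 4),
]

# Rank failure modes, worst RPN first, to prioritize mitigation work.
ranked = sorted(failure_modes,
                key=lambda fm: risk_priority(*fm[1:]),
                reverse=True)
```

Note how a silent crash outranks the wrong-patient case here even though both have maximum severity; the multiplication deliberately punishes failures that are frequent and hard to detect, which is exactly the Jerome-Parks pattern.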

I believe “software engineering” is a somewhat dangerous misnomer. It is quite unlike traditional engineering, where there are well-established principles of conservative and standard design (e.g. the ASME boiler and pressure vessel codes, the National Electric Code, &c.) and a whole host of natural phenomena which can be relied upon to not randomly fail (e.g. gravity.) Software systems are inherently brittle and no level of Capability Maturity Model or formal design methods will change that.

Yet therapeutic radiation machines are vital to people’s health and they require complex software to control; despite the risks, society is better off with them than without them. Every human endeavor involves some risk and those who create and use these devices should have some understanding of what the risks are (failure modes, probability, consequences) and should work to minimize them, preferably early in a product’s lifecycle. I don’t believe risk assessment alone is a ‘silver bullet’ (h/t Fred Brooks) but I do believe it should be required as part of licensing of these devices (assuming it isn’t already.) It will take a combined effort between developers, domain experts, usability experts, software test/QA pros, and risk assessors to make these sorts of devices as safe and effective as reasonably achievable.

I’ve taken a recent interest in life-critical software, and one of the most interesting works I’ve read is the official NASA history of computers in spaceflight. The technological challenges seem insurmountable, but NASA has fielded some very impressive systems over the years and has encountered and overcome some equally impressive obstacles. You should be able to find NASA’s story online; it’s an accessible read and covers organizational and political issues as well as issues of hardware and software.

Hi bsci:
Thanks for the explanation. Maybe it just feels personal because I take it very seriously. What I think the Times didn’t portray is that many good centers have already made significant adjustments after hearing about the two terrible cases in last Sunday’s article. Support for the C.A.R.E. bill has preceded these articles — by years.

I agree that no error is acceptable, and that there should be more stringent requirements that facilities be run by qualified staff. But the professional societies for radiation therapists, medical dosimetrists, medical physicists and radiation oncologists have all supported legislation to make these changes, as have I. We already use checklists, as some have suggested, and where I work all IMRT cases are QA’d before any treatment is delivered at all. If there are areas of medicine that you think are decades ahead, please feel free to direct me toward them.
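A pre-treatment gate of the kind described above is conceptually tiny: treatment simply cannot proceed until every required item has been independently signed off. This sketch (the item names are invented for illustration, not an actual clinical checklist) shows the "hard gate" pattern, where the gate also reports exactly which items remain outstanding:

```python
# A minimal "hard gate" pre-treatment checklist: nothing proceeds
# until every required item is signed off, and the gate reports
# exactly which items are still missing.
REQUIRED_CHECKS = (
    "patient identity confirmed",
    "plan matches prescription",
    "physics QA of IMRT plan complete",
    "machine interlocks verified",
)

def ready_to_treat(completed):
    """Return (ok, missing): ok is True only if every check is done."""
    missing = [c for c in REQUIRED_CHECKS if c not in completed]
    return (not missing, missing)

# Example: only two of four checks done, so the gate stays closed
# and the operator sees the two unfinished QA items by name.
ok, missing = ready_to_treat({"patient identity confirmed",
                              "plan matches prescription"})
```

The design point is that the default answer is "no": the software refuses to treat unless positively satisfied, rather than treating unless someone notices a problem.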

The second article in the series raises an important point about new technologies in medicine generally. There is often a rush to adopt new technologies because they’re exciting, but safety protocols and adequately trained staff should be in place before, not after, a center implements a new technology. An excellent example in surgery is DaVinci robotic surgery, which is being hyped and rapidly adopted by many urologists for cancer care even though, so far, it may result in worse QOL outcomes for many patients than the standard operation. I would like to see surgeons support measures like these, just as radiation oncology already has.

The second article also gives a sense of the variability in standards and penalties for medical errors. I do wonder whether states without the certificate-of-need (CON) process have these problems more often than CON states; in states without it, a radiation center can be put literally across the street from another in the name of unrestrained free-market principles. If you don’t want medical errors, then support the C.A.R.E. bill and consider the possible benefits of implementing CON regulations.

And I hate that comments can’t show tone or inflection. I’m serious: I’m open to any suggestions on how to improve from other disciplines. So either join the Facebook page or email me at [email protected]

I know this is a late post, but I bounced here from a more recent post.

When a user on our operating system issues a command to format a disk drive, the system asks in 3 different ways if the user is SURE he wants to take this action and is reminded of the consequences of the action (All data on this disk drive will be lost).

I believe that the writers of this software are careful because they understand the consequences of the action and want to protect, not only others, but THEMSELVES, from the consequences of a mistake.

I wonder if it should be standard practice for designers and developers of medical software to be willing to undergo a “treatment” from the software that they produce. It might make them a little less trusting of the operators of the device and a little more careful about unintended consequences.

RogerR – our data acquisition software brings up a warning when we try to abandon data without saving it. It asks TWICE if that is what we really want to do.

You know what we do? We just hit Yes twice. In fact, it is such a common thing that we do it without thinking, and I have, on a few occasions, lost data because I thoughtlessly hit Yes twice. So much for protections!

The problem is that even multiple protections are not all that useful if they can be by-passed mindlessly and out of habit. If error or warning messages are so common that you get into the habit of moving on without considering them, then they are happening way too often, in places that don’t need warnings (if you are routinely not heeding them). It’s the boy who cried wolf problem. If the instrument gives you a warning for every stinking little thing you do, before long you don’t pay attention to any of them, including the ones that matter.

If you are going to give warning messages all the time, you need to make sure they are clearly distinguished by the type of error that has occurred or the danger that is present. And just having different text in the pop-up box is not enough; it has to be something that catches the user’s attention (like the difference between Windows telling you it needs to stop a program and hitting a BSOD: clearly NOT the same problems causing those two things).

For example, the real key to your format-prevention prompt is not just that it asks three times, but that merely overwriting a file asks only once. Because the warning for a reformat goes well beyond the warning for something less critical, like overwriting a file, you don’t treat the two the same way.
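The tiered-warning idea in the comments above can be sketched in a few lines. The prompts and severity split here are invented for illustration; the point is that a destructive action should demand a qualitatively different acknowledgment (retyping the target’s name) rather than one more identical Yes button that can be clicked on autopilot:

```python
def confirm(action, target, destructive=False, reader=input):
    """Ask for confirmation proportionate to the action's severity.

    Routine actions get a one-keystroke y/n prompt; destructive
    actions force the user to retype the target's name, which
    cannot be done mindlessly the way hitting Yes twice can.
    """
    if not destructive:
        return reader(f"{action} {target}? [y/N] ").strip().lower() == "y"
    print(f"WARNING: {action} will destroy ALL data on {target}.")
    return reader(f"Type '{target}' to proceed: ").strip() == target

# Routine overwrite: one keystroke suffices.
#   confirm("Overwrite", "report.txt")
# Destructive format: the user must type "sda1" back, so habit
# alone ("yes, yes") can never complete the action.
#   confirm("Format", "sda1", destructive=True)
```

The `reader` parameter exists only so the prompt source can be swapped out; the design choice being illustrated is that severity changes the *kind* of interaction, not just the number of repetitions.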

I have been atypically damaged by diagnostic radiation from a CT scanner (Toshiba Aquilion), which no one will recognise or admit because ‘it doesn’t happen’ – but it has.

My abdomen started stinging and sizzling within one hour of a 3-part, 64-slice scan (1594 mGy cm2; no abnormalities revealed). I felt like I was plugged into the mains power and it felt toxic; this sizzling lasted for more than 10 days.

Internally my abdomen still feels dreadful 10 months later. The connective tissue supporting the visceral organs feels ‘cooked’ – stretched and without elasticity or spring-back – with the organs bellowing and splashing around, visibly and clearly audibly. Docs will not accept that radiation could have caused molecular changes which have not changed back.

I have no disease and take no drugs except for the pain caused by this imaging, and I cannot stand up for any normal period of time before the dragging weight under my diaphragm becomes unbearable.

The circulation in my hands and arms, which were very close to the scanner’s beam ports for the duration, became scorched and scalded inside within days of the scan, and this set of dramatic circulation changes is again visible in both arms and hands: blood drains down into them instantly as they hang at my side, then they drain and blanch pale as soon as they are elevated. This is not a normal or usual state for my body. I know that I was burnt by levels of radiation that do not normally cause deterministic effects.

I am a semi-professional athlete and a qualified physiologist. I know that this is not a normal reaction, and equally that these are feelings and symptoms I did not have prior to the scan! When my face, neck etc. started to turn tan and look like leather, I knew the scanner had done this and was causing the internal changes I can feel, hear and see. I have not been near any other energy source, nor been in any sunlight.

They know something odd has happened but are still trying to give usual/normal explanations for some of the sudden symptoms. There are no usual explanations, however, and they twist my accounts to try to make them fit!

I know that it is serious and I am out here on my own with it – any helpful observations, please. I am a normally very healthy, sensible professional with no health hang-ups or previous sensitivities.

FEB 5/2009 09:15AM

Medicine is no longer what it was originally intended to be. I don’t argue that there are many well-meaning doctors and nurses out there, but I have come across far too many doctors who are prepared to deliberately falsify medical records and commit perjury for me to know that there are also many doctors who ought to be behind bars.
Doctors are always going to make mistakes, and we ought to operate procedures under which the sooner they admit an error, the less severe any ‘punishment’.
If a previously good surgeon ends up making an error that costs the life of a patient and owns up, admitting the error, I see no reason why he should not be back at work the next day if he feels up to it.

Adrian Peirson:

Doctors and nurses out there, but I have come across far too many doctors who are prepared to deliberately falsify medical records and commit perjury for me to know that there are also many doctors who ought to be behind bars.

Like Andrew Wakefield?

I looked at that website you linked to; it is also a candidate for Scopie’s Law.

The American Medical Association recently released a study showing a concentration of medical malpractice claims in outpatient settings. In the four-year span between 2005 and 2009, more damages were awarded for outpatient claims than for inpatient hospital care. Moreover, major injury or death accounted for the great majority of outpatient claims, most of which were related to failures of diagnosis, while inpatient claims were generally related to surgical errors. In the last decade, however, most government and private-sector efforts to prevent medical malpractice have focused on inpatient safety rather than outpatient issues.

I think you summed up the problem really well, surgeons ridicule checklists, despite the fact that they work.

I disagree that the problem is in the adoption of and adherence to a particular methodology, though.

The problem is in the sheer arrogance of the medical profession in general. It’s somehow beneath them.

My plumber is more thorough and trustworthy: he doesn’t need to be told to write things down, ask questions, and know about the job he’s doing. He just does it, because he knows better.

Also, WHY do so many surgeons still make so many mistakes, even with checklists? The big elephant in the room is the drug-abuse issue, but doctors don’t want the public to worry about THAT. After all, they’re just high on LEGAL drugs.

After hearing some testimonials about medical mistakes and malpractice in a new audiobook – and the exhaustive resources provided so you can prevent what you can – I feel that this is an industry in which you NEED to know how to do your homework.
