
Unforgivable medical errors, revisited

About six months ago, I applied my usual brand of not-so-Respectful Insolence to what I termed unforgivable medical errors. These are errors that are so obviously harmful and lethal that there is no excuse for not putting systems into place to prevent them or so egregiously careless that there is, quite simply, no excuse for them to occur. As I mentioned before, there are a handful of such “unforgivable” errors in surgery. Although not all surgeons would necessarily agree on the specific identities of all of them, there are some upon which nearly all surgeons would agree. Examples such as removing the wrong organ, removing the wrong breast, or amputating the wrong limb come to mind. Such errors, although relatively uncommon, are so horrific that even one occurrence is arguably unacceptable.

The last time I discussed such medical errors, it was in the context of the increasing amount of medical radiation to which patients are being exposed, an issue that affects all of us, given that virtually every single one of us is or will be a patient at some time. No matter how young or how healthy you might think you are, the odds are that some day you'll be on the receiving end of some medical radiation for some test or another. It might be for trauma or an injury, where you need a CT scan or some other test. It might be that you suffer some chest pain and undergo coronary angiography. Or you might have a severe headache or other neurological symptom and need a head CT, or even a CT brain perfusion scan. You might be like Alain Reyes, whose injury begins a story in Saturday's New York Times entitled After Stroke Scans, Patients Face Serious Health Risks:

When Alain Reyes’s hair suddenly fell out in a freakish band circling his head, he was not the only one worried about his health. His co-workers at a shipping company avoided him, and his boss sent him home, fearing he had a contagious disease.

Only later would Mr. Reyes learn what had caused him so much physical and emotional grief: he had received a radiation overdose during a test for a stroke at a hospital in Glendale, Calif.

Here's a hint: whenever you see hair fall out in such a perfectly symmetrical band, look for an iatrogenic cause. The pattern of this hair loss lines up almost perfectly with how a CT scanner encircles the head. In any case, one question about this entire incident is: how could such a screw-up have happened? Another question is: how pervasive was this error? The answer to the latter question is, unfortunately, far too pervasive: thirty-seven patients at Providence Saint Joseph Medical Center in Burbank; 269 at Cedars-Sinai Medical Center in Los Angeles; and dozens at a hospital in Huntsville, Alabama. Overall, at least 400 such patients have been identified at eight hospitals, and, worse, many more cases are expected to be discovered as investigators look more closely into the radiation overdoses.

Part of the problem is that CT brain perfusion scans use a lot of radiation even when performed properly. Basically, the scan quantitatively evaluates blood flow in the brain by generating maps of cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT). After the scan, the scanner's software employs complex deconvolution algorithms to produce the perfusion maps. These maps look like this:

[Image: CT perfusion maps showing cerebral blood flow, cerebral blood volume, and mean transit time.]
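
For the technically inclined, here is a toy, grossly simplified sketch of the deconvolution step, in the spirit of the truncated-SVD approach described in the imaging literature. Nothing here reflects any vendor's actual implementation; the function name, cutoff, and scaling are all illustrative:

```python
import numpy as np

def perfusion_voxel(aif, tissue, dt, svd_cutoff=0.2):
    """Toy SVD deconvolution for one voxel of a CT perfusion study.

    aif:    arterial input function (contrast concentration over time)
    tissue: contrast concentration curve measured in the voxel
    dt:     sampling interval in seconds
    """
    n = len(aif)
    # The tissue curve is modeled as CBF * (AIF convolved with the
    # residue function R); build the discrete convolution matrix.
    A = dt * np.array([[aif[i - j] if i >= j else 0.0
                        for j in range(n)] for i in range(n)])
    # Deconvolution is ill-posed, so regularize via truncated SVD.
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > svd_cutoff * s.max(), 1.0 / s, 0.0)
    flow_residue = Vt.T @ (s_inv * (U.T @ np.asarray(tissue)))

    cbf = flow_residue.max()            # cerebral blood flow (unscaled)
    cbv = np.sum(tissue) / np.sum(aif)  # blood volume: ratio of areas
    mtt = cbv / cbf                     # mean transit time, seconds
    return cbf, cbv, mtt
```

A real scanner runs something like this, with far more careful modeling and scaling to physiological units, for every voxel in the imaged slab. The raw data require many repeated exposures of the same tissue over time, which is why the dose adds up so quickly.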

The last time I wrote about radiation overdoses from medical radiation, the problem occurred during radiation oncology treatments for cancer. When treating patients for cancer, one might argue that it is acceptable to take a bit more risk than one would for diagnostic imaging, because actual therapy is being administered. While it's true that a proper diagnosis can be a matter of life or death, in general less risk is acceptable in diagnostic imaging than in treatment.

We know the scope of the problem is huge, but how did it happen in the first place? To read the NYT story is to see everyone pointing fingers at everyone else. Basically, the more radiation a CT scanner uses, the better the image quality. The problem, of course, is that it's very easy to use more radiation, but over the last decade the medical community has come to appreciate that the cumulative effect of all the various sources of medical radiation to which patients are subjected can pose real risks. Because of that, the GE scanner used in these cases has a built-in capability of automatically adjusting the dose according to the patient's size and the part of the body being scanned. According to GE, it's a technical feature designed to reduce radiation dose.

However, there was a problem, and it turned out to be a big one. Under some conditions, using certain settings governing image clarity, the automatic feature behaved in a counterintuitive fashion: instead of decreasing the amount of radiation used, the machine increased it, in some cases to as much as eight times what it should have been. Unfortunately, neither the CT techs nor the radiologists who oversaw them appeared to be aware of this aspect of the machine's behavior. And so the finger-pointing began:

GE says the hospitals should have known how to safely use the automatic feature. Besides, GE said, the feature had “limited utility” for a perfusion scan because the test targets one specific area of the brain, rather than body parts of varying thickness. In addition, experts say high-clarity images are not needed to track blood flow in the brain.

GE further faulted hospital technologists for failing to notice dosing levels on their treatment screens.

But representatives of both hospitals said GE trainers never fully explained the automatic feature.

In a statement, Cedars-Sinai said that during multiple training visits, GE never mentioned the “counterintuitive” nature of a feature that promises to lower radiation but ends up raising it. The hospital also said user manuals never pointed out that the automatic feature was of limited value for perfusion scans.

A better-designed CT scanner, safety experts say, might have prevented the overdoses by alerting operators, or simply shutting down, when doses reached dangerous levels.

Does this sound familiar? It should. A lack of safety features to block radiation dosages far beyond what is considered safe was the same problem in the equipment that delivered serious overdoses to cancer patients, as reported in January. If you'll recall, the software in that case required that three essential programming instructions be saved in sequence: first the dosage, then a digital image of the treatment area, and then the instructions to guide the multileaf collimator that controlled the dose. When the computer repeatedly crashed, the medical physicist didn't realize that the instructions to the collimator hadn't been saved. In both cases, there was no fail-safe mechanism to shut the machine off before it administered too high a dose of radiation.

I don't know about you, but personally I find it shocking that modern equipment could be designed with such lax safety features, yet it would appear to be a common problem in devices that deliver large amounts of medical radiation. In machines designed to deliver radiation to treat cancer, the margin for error is lower because the intended dose is higher to begin with, but this case shows that it isn't just therapeutic radiation that can pose an unacceptable risk of harm, and it's not just the cumulative dosage of diagnostic radiation that can cause harm. The radiation requirements of some diagnostic radiology tests have become so high that for them the margin of error is smaller than it should be.

I also have to wonder whether an excessively complex user interface contributed to the errors in Alabama and California that led to the bald rings around patients' heads. For tests that can administer an excessive dose of radiation, with complications such as hair loss or worse, I will once again emphasize that there has to be some sort of fail-safe, drop-dead, impossible-to-ignore warning that forces the operator to approve the dose at least twice before the dose can be given (a sketch of what such a check might look like appears after the FDA excerpt below). In addition, someone should be recording the dose of radiation. Believe it or not, the hospital in Huntsville, Alabama didn't routinely do so:

Huntsville Hospital officials said they did not routinely record radiation dose levels before 2009. Mr. Ingram, the spokesman, said the hospital did keep information needed to calculate the dose, but he declined to say whether officials had gone back to determine doses for all patients who had brain perfusion scans.

The form letter Huntsville sent to overdose patients appears to play down the damage that high doses can inflict. The hospital told patients that hair loss and skin redness might occur but would go away. “At this time, we have no recommendations for you to have any follow-up treatment,” the letter said.

In actuality, the risks of excessive radiation to the brain include injury to the eyes (for example, cataracts), cancer, and brain damage. Remember, this is not cell phone radiation, whose almost certainly nonexistent or minimal risk so many people exaggerate. It is real, ionizing radiation, at doses of up to 4 Gy, which is an astonishing dose for a diagnostic test. For breast cancer, we typically treat women with 50-60 Gy total over the course of 30 to 35 fractions of less than 2 Gy each. The dose some of these patients received was thus on the order of two full therapeutic fractions for breast cancer, and hundreds of times the dose of "simpler" diagnostic imaging studies, such as chest X-rays and mammography. Depressingly, the FDA has to write something like this in its report:

FDA encourages every facility performing CT imaging to review its CT protocols and be aware of the dose indices normally displayed on the control panel. These indices include the volume computed tomography dose index (abbreviated CTDIvol, in units of “milligray” or “mGy”) and the dose-length product (DLP, in units of “milligray-centimeter” or “mGy-cm”).

For each protocol selected, and before scanning the patient, carefully monitor the dose indices displayed on the control panel. To prevent accidental overexposure, make sure that the values displayed reasonably correspond to the doses normally associated with the protocol. Confirm this again after the patient has been scanned.

Which should be routine and something the machine makes very easy.
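
To make the point concrete, here is a minimal sketch of the kind of pre-scan sanity check the FDA describes, combined with the double-approval Orac calls for above. The protocol names and reference ranges are invented for illustration:

```python
# Invented reference ranges; a real table would come from the
# facility's own protocol review, as the FDA recommends.
EXPECTED_CTDI_VOL_MGY = {
    "routine_head": (40.0, 80.0),
    "brain_perfusion": (100.0, 300.0),
}

def pre_scan_check(protocol, displayed_ctdi_vol_mgy, approvals=0):
    """Allow the scan only if the console's displayed CTDIvol falls in
    the range normally associated with the protocol, or if the operator
    has explicitly approved the out-of-range dose at least twice."""
    low, high = EXPECTED_CTDI_VOL_MGY[protocol]
    if low <= displayed_ctdi_vol_mgy <= high:
        return True
    return approvals >= 2  # the impossible-to-ignore warning path
```

Under a scheme like this, an eightfold overdose would fail loudly instead of proceeding silently.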

The problem of radiation overdoses and excessive radiation from medical imaging tests can be addressed by technology. Problems of this magnitude, affecting this many patients, also go far beyond simple human error. They are systemic in nature, and in this case they seem to result from a most unfortunate confluence of poor communication between the equipment manufacturer and the hospitals, a lack of documentation of a rather important feature of the CT scanner, and apparently lax monitoring of the radiation doses administered to patients at some hospitals. Because this is a systemic problem, a systemic solution is required, one that will need input from GE, the manufacturer of the CT scanners; the FDA; and the hospitals.

However, this problem highlights once again an even more important issue in medicine. That issue is not so much how to improve the safety of tests like the cerebral perfusion scan, although that is very important. Rather, it's determining when it is appropriate to use such powerful technology and when it is not. Reviewing the cases, there is a very real and legitimate question of whether the patients who suffered complications from radiation overdoses due to cerebral perfusion scans actually needed the scans in the first place. If there's one medical "sin" to which we physicians are very prone, it's becoming so enamored of the latest technology that we order it at the drop of a hat. The article points out that many of the patients who suffered radiation overdoses were relatively young, so young that a stroke would normally seem a relatively unlikely diagnosis. Did these patients have actual neurological symptoms suggesting a stroke? Or were their symptoms equivocal? Would an "old-fashioned" CT have provided the necessary information at a much lower dose of radiation? In which patients does the potential benefit of these scans outweigh the risks?

Unfortunately, we have far less information to answer these questions than we should.

By Orac

Orac is the nom de blog of a humble surgeon/scientist who has an ego just big enough to delude himself that someone, somewhere might actually give a rodent's posterior about his copious verbal meanderings, but just barely small enough to admit to himself that few probably will. That surgeon is otherwise known as David Gorski.

That this particular surgeon has chosen his nom de blog based on a rather cranky and arrogant computer shaped like a clear box of blinking lights that he originally encountered when he became a fan of a 35-year-old British SF television show whose special effects were renowned for their BBC/Doctor Who-style low-budget look, but whose stories nonetheless resulted in some of the best, most innovative science fiction ever televised, should tell you nearly all that you need to know about Orac. (That, and the length of the preceding sentence.)

DISCLAIMER: The various written meanderings here are the opinions of Orac and Orac alone, written on his own time. They should never be construed as representing the opinions of any other person or entity, especially Orac's cancer center, department of surgery, medical school, or university. Also note that Orac is nonpartisan; he is more than willing to criticize the statements of anyone, regardless of political leanings, if that anyone advocates pseudoscience or quackery. Finally, medical commentary is not to be construed in any way as medical advice.

To contact Orac: [email protected]

46 replies on “Unforgivable medical errors, revisited”

I realize this is a very serious issue, and having seen firsthand how doctors interact with technology (both in and out of the clinic), it amazes me that this actually doesn't happen as often as it could.

But no post that starts talking about incorrect organ removal can go by w/out a mention of Weird Al’s “He Got the Wrong Foot Amputated”

The doctor had compassion
He tried to cure my disease

Man, you must be kidding me
Take a look! (He got the wrong foot amputated)
Hey!!!… You shoulda cut they other knee
Take a bow! (You got the wrong foot amputated)

Hey-ay, that leg was fine
You mean to tell me
that this stuff happens all the ti-ime
Hey-ay! This ain’t my day!

Don’t wanna hear no excuses
Its already too late
Don’t try to say I got a two for one sale

Thanks for sharing this, Orac. The timing is quite fortuitous. This story also highlights that people should not blindly trust automated systems. It’s far too easy to become complacent.

An almost pointless remark: apart from the sin of over-application of diagnostic CT, would an MR scan have been an alternative? MR is usually better at distinguishing soft tissue (and fluids), and I know it can do perfusion imaging. So why use CT in the first place?

To me this sounds like a typical “expert writing for experts” error at the software maker. The radiation specialists at GE probably knew that the software might increase intensity, but “no one in their right mind would use the automatic feature for that kind of scan”, so it wasn’t an issue.
But that’s why we have child locks on cars, because there’s always someone who wants to try to get out of the car on the interstate at 70 mph.

I especially agree with your final point, Orac. Physicians tend to have a very low threshold for using large amounts of ionizing radiation, especially when the provisional diagnosis is stroke, pulmonary embolus, or intracranial bleed secondary to head trauma. Even though there are several evidence-based guidelines describing when CT is necessary in all of these situations, docs are still (understandably) very afraid of missing a catastrophic diagnosis. Often you'll find that a physician who "got burned" in the past will act this way. It's really a problem of putting our own experiences ahead of the literature.

Hence the old adage: medicine is to do as much nothing as possible.

Physicians should be reminded on a daily basis that every intervention is inherently dangerous. As an intern I was given the freedom to order any test, X-ray, et cetera, that I wanted. The only thing my supervisor asked of me was to explain what I was going to do with the result. If I could not offer a reasonable argument, the whatever-I-wanted was not done. Case in point: an X-ray for a suspected fracture of the collarbone. What is the consequence if it is a fracture? At that time (around 1995) I had just been told: fracture or no fracture, the treatment is the same. No operation, no plaster, just a sling.

Anyway, since that time I always ask the same question: is this really necessary?

@Antares

MRI is harder to access than a CT and the time to complete the test is longer. In an emergency situation (like suspected stroke), CT is still the modality of choice.

I had the same thoughts as Mu @4. It’s so obvious to the experts doing the design and manual generation that they fail to understand how the very nature of the machine interface obscures something that would otherwise be obvious. I know this has been said before, but the aircraft industry would be an excellent resource for managing some of these problems.

@Antares

I had the same thought – PET and CT scans should be becoming less common, since MRI can accomplish much the same with less risk and better resolution, but I guess the cost hasn’t come down enough for them to replace CT scans.

And yeah, MRI does take a while to get results back (less for anatomical than functional, but still).

For a possible stroke, time is of the essence, and minutes can make a difference. MRI is way too slow. At least, the current generation of machines is.

Of course, when time is of the essence and the test needs to be done in a hurry, mistakes are more likely, making idiot-proof safeguards even more important.

Thank you, Jack, Sivi and Orac, for the enlightening input. I was not aware of the time constraints. It seems obvious now.

I’m so glad you blogged about this, Orac. Not only is it a clear explanation of the source of the problem and straightforward solutions, but it demonstrates once again that you give no more quarter to ‘allopathic’ errors than you do to the various medically unsound forms of woo you routinely excoriate. It’s important to point out, as you have done on this and many other occasions, that we’re not engaged in an ideological battle between ‘conventional’ and ‘alternative’: Medical negligence and incompetence should not be tolerated regardless of who is responsible.

The aircraft industry isn't immune to these problems either, but they do seem to be more aware of them at least. I'm just appalled to hear of ANOTHER case where an automated system ended up applying too much radiation. How many times does this have to happen, especially when one of the instances has been a textbook example used in computer science classes for years? There is simply no excuse. GE's "it works in a counterintuitive way" is simply butt-covering. If it's counterintuitive sometimes but intuitive other times, it's badly designed and possibly not even compliant with the design in the first place. It's either a bug or a bad design, but either way, it should not be tolerated or excused. The machine purports to limit radiation; therefore, it should limit radiation. GE screwed up and doesn't want to admit it.

The operator isn’t blameless either, of course. The machine tells you how much radiation you’re using; you need to be paying attention to that. “Trust, but verify.” And there shouldn’t be a couple hundred instances before someone realizes that maybe something’s going wrong somewhere. The only conclusion one can reach from that is that the hospitals and clinics came to believe that nothing could go wrong with this system, and so they were not looking for signs of a problem. That’s unacceptable. No system is perfect; all systems can fail. Therefore, you must always be looking for signs of a problem.

One other thing from the article: the Alabama official who said that there had been no overdoses. That wasn’t a denial of the actual medical reports from the Alabama hospital (though it probably encouraged the hospital to delay and minimize those reports). It was self-referential “don’t look behind the curtain” thinking:

Alabama has no standard for radiation dosages, therefore the state claims that there is no such thing as a radiation overdose. Not “we aren’t qualified to determine overdoses, you’ll have to talk to the patients’ doctors” (which is buck-passing, but sane), but a claim that by definition none of the patients had been given excessive amounts of radiation.

I think I see how we can declare victory in the War on Some Drugs and go home: there is no state-determined quantity of heroin or cocaine that constitutes an overdose, therefore there are no overdoses, so there's no problem.

It sounds like the obvious thing to do is to make it so that there’s a maximum radiation dose the scanning machine can give, operator error or not.

Or are the machines too multipurpose, so that the wild overdoses here are the normal dose for something else it does?

I don’t know about you, but personally I find it shocking that modern equipment could be designed with such lax safety features, but it would appear to be a common problem in devices that deliver large amounts of medical radiation.

Look at the numbers on how many websites are estimated to be vulnerable to SQL injection attacks, even though all modern SQL APIs provide you with facilities that make it ridiculously easy to injection-proof your code. Just because medical equipment safety is far more important than website data security, that doesn’t mean that the average person developing the software magically became less dumb.
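
For readers who don't write code, the point about how easy the fix is can be shown in a few lines of Python using the built-in sqlite3 module. This is a minimal illustration, not production code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (name TEXT)")

hostile_input = "x'; DROP TABLE patients; --"

# Vulnerable pattern: string formatting splices attacker-controlled
# text directly into the SQL statement.
#   conn.execute(f"SELECT * FROM patients WHERE name = '{hostile_input}'")

# Safe pattern: the placeholder keeps the input strictly as data,
# never as SQL, with essentially zero extra effort from the developer.
rows = conn.execute("SELECT * FROM patients WHERE name = ?",
                    (hostile_input,)).fetchall()
```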

The reason why the equipment needs to be idiot-proofed is the same as the reason why it’s so damn hard to get the equipment to be idiot-proofed: All the jobs are being done by homo sapiens. D’oh…

The military also has extensive experience with human-machine interfaces (HMI) of both the hard- and software kind, and has issued a number of HMI standards.

“Counterintuitive” is shorthand for “we knowingly released a crap interface.” Facebook can get away with that, maybe, but this is General Frickin’ Electric; they really don’t have an excuse.

I had the same thoughts as Mu @4. It’s so obvious to the experts doing the design and manual generation that they fail to understand how the very nature of the machine interface obscures something that would otherwise be obvious. I know this has been said before, but the aircraft industry would be an excellent resource for managing some of these problems.

@Mr. Sweet (#16): It has long been my observation that as soon as I idiot-proof something, someone goes and builds a better idiot.

Mu@4 and all in agreement,
this is a plausible explanation for the root of the problem, but if true it reveals a shocking lack of pre-market testing, which seems like it *should be* a routine prerequisite for things this expensive and potentially hazardous. I am admittedly naive about the stages between the invention of such a device and its implementation in medical practice, but the absence or inadequacy of scenario modeling or consumer testing sufficient to discover this kind of defect is deeply unsettling.

From a professional standpoint, I’m curious — does anybody have any technical details on just what it is that GE is describing as “counterintuitive”?

Just finished reading the linked article.

Manufacturers say they will address some of these issues in newer models.

Address *some* of the issues? In newer models? Not even current models still rolling off the factory line? WTF? This is probably something that can be addressed in a software patch, since it’s a feature of the machine’s automated system. I realize they need to make money to pay for the work, but surely they can sell a software patch and/or field upgrade kit or even, for a fee, send out a technician to perform the field upgrade. There should be a way to fix this short of junking the fielded units and replacing them.

Not to excuse GE for what seems like a problem with their design, but my experience as an MRI researcher suggests that the medical imaging manufacturers generally err on the side of caution. We are always trying to push the envelope of what an MRI scanner can do and running into software restrictions that don’t allow it. Often, this can be corrected in a research environment simply by commenting out one line of code that says “don’t allow this.” (In a clinical environment, this is not done.)

Calli Arcale @13: Yes, unfortunately, it's true that the aircraft industry is not immune either. In fact, that's part of why it's such a good place to look for solutions: similar failures in both fields have such extreme consequences. It's just that the public's fascination with plane crashes, and their wide reporting in the press, tends to force the aircraft industry to address its problems sooner than many other industries.

As for the post @18 that copies my earlier post verbatim: is that the laziest spam bot ever?

The invocation of the aircraft industry is apt. There are a few key differences to note. First, aircraft accidents tend to be public and spectacular (not in a good way). Medical accidents tend to be one person at a time, out of the public eye and often with delayed outcome. There is, therefore, a greater public pressure to fix things in aviation. Just look at the speed with which Congress has pushed new aviation rules through in response to the Colgan crash in New York.

Doctors bury their mistakes, pilots are buried in theirs.

Aviation also has a much more open culture with regard to mistakes and accidents, and a less litigious one. There are people whose job is to determine the cause of an accident and make safety recommendations, and who are completely separated from the enforcement side of the business. Discussions with safety investigators cannot be used for any enforcement actions. It's like doctor/patient confidentiality.

There is a system (Aviation Safety Reporting System) in which flight crew can report their own mistakes without fear of enforcement action so long as criminal activity or an aircraft accident is not involved. This allows safety trends to be followed and corrective steps to be taken without involvement of the judicial system.

I have no idea whether there are any analogous structures in medicine.

This is why there are hospitals with computers that are over ten years old. The risk of errors associated with upgrading software is high. Unfortunately the risk of not updating software can be high too.

On the question of MRI vs. CT for perfusion imaging: the duration of an MRI perfusion scan will come down, but CT is cheaper technology and more prevalent. Few hospitals have MRI machines sitting idle, ready for whatever potential stroke patient gets rolled through the door. You just can't have emergency patients waiting an hour for an open slot on an MRI machine.

There's another major issue with MRI. You really, really, really don't want to put someone in an MRI who has contraindicated conditions (e.g., a pacemaker or metal in the body). If an emergency patient with a lowered level of consciousness comes through the door, you cannot rely on verbal confirmation that they can safely go in an MRI. That's a recipe for some real disasters.

On a related note, MRI has its own safety limits. There are limits to how fast the magnetic fields can fluctuate over short periods of time. In addition, the specific absorption rate (SAR) measures how much heat is deposited in the tissue over a short time window. In both cases the hardware can exceed those limits, but the controls are in the software: the software is designed to simply stop the scanner, or refuse to start, if these limits are reached. (SAR is a function of body size, so grossly mistyping the patient's weight is the only opening for human error.) Still, these limits are set much lower than levels that might actually cause problems, so that's another safety factor built into the system.
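
As a rough illustration of where such a software control sits, a pre-scan gate might look like the sketch below. The 2 W/kg figure matches the commonly cited whole-body limit for normal operating mode, but treat all the numbers and names here as examples:

```python
def sar_gate(predicted_rf_power_w, patient_weight_kg, limit_w_per_kg=2.0):
    """Refuse to start the scan if the predicted whole-body SAR exceeds
    the limit. As noted above, SAR depends on body size, so a grossly
    mistyped patient weight is the main way to defeat this check."""
    sar_w_per_kg = predicted_rf_power_w / patient_weight_kg
    return sar_w_per_kg <= limit_w_per_kg
```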

“Counterintuitive” is shorthand for “we knowingly released a crap interface.”

jfb, I am so stealing this. I’ve said many a time that if I were 30 years younger, I’d be an interface designer. I can tell you why when someone is racing toward the elevator you’re in, you almost always push the wrong button and close the door in their face.

I have to play contrarian here, and respectfully disagree with Mu@4 and others claiming this usability problem would have been easy to find had somebody just bothered to look for problems. What we have here is basically a mismatch between the Design Model and the User Model:

Based on GE’s statement, I’d guess the Design Model (the conceptual model of the system in the head of the designer) was: “This feature automatically calculates the minimal dosage that achieves the requested image quality based on the size of the patient and the thickness of the body part being scanned.”

Now, based on the hospital's statement, it seems that the User Model (the conceptual model of the system in the head of the user) was: "This feature automatically reduces the dosage where possible based on the size of the patient and the thickness of the body part being scanned."

From the designer's point of view the system was working perfectly: the user requested a high-quality image and got one. From the user's point of view the system was completely borked: they asked the system to reduce the dosage and it increased it.
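
To make the mismatch concrete, here is a toy version of the Design Model's logic. Every number is invented, since the real algorithm is proprietary; the only physics assumed is that quantum noise falls roughly with the square root of tube output:

```python
def auto_mAs(noise_target, ref_noise=10.0, ref_mAs=100.0):
    """Design Model: find the tube output (mAs) that achieves the
    REQUESTED image quality. Noise scales roughly as 1/sqrt(mAs),
    so halving the noise target quadruples the required output."""
    return ref_mAs * (ref_noise / noise_target) ** 2

auto_mAs(noise_target=10.0)  # default clarity -> 100 mAs
auto_mAs(noise_target=3.5)   # maximum clarity -> ~816 mAs, about 8x
```

An operator holding the User Model selects maximum clarity, expects the "dose-reduction" feature to protect the patient, and the machine obediently multiplies the output severalfold, which is at least consistent with the eightfold overdoses reported in the article.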

Sadly, these types of problems are not easy to find at all. I don’t know if any kind of usability evaluation was performed on this device, but I could easily see this problem slipping past even if standard usability testing protocols were followed. Firstly, much of the time both these models agree. If you didn’t happen to test one of the scenarios where they conflict, you wouldn’t find out about the mismatch. Secondly, how the User Model became this divergent from the Design Model might have more to do with marketing and training materials than the specifics of the user interface. Proper experiment design can reduce the impact of these issues, and there are other methods besides testing you can use, but no matter what you do, these kinds of problems remain hard to catch.

I’m not saying that we shouldn’t try to identify and correct as many usability problems as possible, and I’m not necessarily saying this problem wasn’t preventable. I don’t know the specifics, so I don’t know for sure what it would have taken to identify it, and considering that we’re talking about some inherently dangerous equipment, more safeguards might have been in order. Yet, as a very-soon-to-be usability expert, I’d call this a forgivable error, and I’d wager that even the best in the field might have missed this one.

I was always taught CT was better for blood and MRI for infarction. When you’re considering giving tPA to break open a clot and have less than three hours, CT will tell you there’s no bleeding so you can consider it.

That said, I can’t say this surprises me. There is a disconnect between those who write the software and UI and those who actually use it. Any EMR is another example. I also wonder how much of this has to come up through experience. The future is hard to predict.

@Aapo L

Actually, good quality assurance should be able to catch this. I worked at a dot-com trying to bust up websites, literally doing things that I thought no one would possibly do. You'd be surprised by how many mistakes you can make with UI systems. Then you run into the old adage that "every new solution has its own set of problems": I find a bug, it gets fixed, and the fix creates a new bug.
I think that the counterintuitive processes could be prevented with a simple lockout mechanism like the one Orac described, and these can be lockouts written in code or actual physical lockouts. The operator enters the diagnostic settings, the machine calculates the required Gy, and then the operator has to move a simple handle to the desired level. The machine could then warn: "The level of radiation for this task is no longer in the diagnostic range but in the therapeutic range. This machine has been set up to perform diagnostic, not therapeutic, applications. To proceed please enter…" It may be cumbersome, but saving lives isn't always speedy.
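
In software, the lockout described here might amount to little more than this sketch; the diagnostic/therapeutic boundary value is purely hypothetical:

```python
DIAGNOSTIC_LIMIT_GY = 0.5  # hypothetical boundary, illustration only

def arm_scanner(calculated_dose_gy, confirmations):
    """Block the exposure when the calculated dose leaves the
    diagnostic range, unless the operator explicitly confirms twice."""
    if calculated_dose_gy <= DIAGNOSTIC_LIMIT_GY:
        return True
    print("WARNING: the level of radiation for this task is in the "
          "therapeutic, not the diagnostic, range. This machine is "
          "configured for diagnostic applications. Confirm to proceed.")
    return confirmations >= 2
```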

It can be done, should be done, and hopefully will be done. If not, GE could learn some lessons from the person who isn't the smartest in the room, which in this case would be the radiation tech?

I won’t wait for the “Orac never writes anything critical of the medical Business” sneer, I’ll just point to this article.

I had something similar happen, not life threatening but life altering.

A few years back I had surgery and I-131 for an advanced CA of the thyroid. Neither the physician, Nuclear Medicine or the consulting Health Physicist asked if my wife and I were planning on having children. Afterwards, when my sperm count was knocked down to zero and I quoted chapter and verse from the literature and a number of standard handbooks they said “Oops”.

Yes, it knocked out the cancer. I’m officially a five year cure this month. But they were way too nonchalant and didn’t ask the important next question in their zeal to complete the treatment. A couple deposits at the sperm bank would have made a world of difference.

And before anyone asks, yes, we were demonstrably mutually fertile before then.

Disclaimer: I work with industrial x-ray CT

I think there may be a too-quick attempt to blame the manufacturer on this one. I am not a huge fan of GE, but this sounds like a marketing-gimmicky add-on. My bigger concern is the lack of training and understanding of radiation by the radiologists and x-ray techs themselves.

I have been absolutely shocked by the lack of knowledge of basic physics by almost every radiologist I’ve ever spoken to. It’s downright criminal. I had one, in an emergency situation, try to explain to me that glass would not show up on an x-ray because it’s clear. I was actually considering taking the x-rays in my lab, and bringing them back to the ER. I would’ve gotten the dose right, at least 🙂

In short, you have finally posted on my biggest pet peeve in the medical industry. Admittedly, I’m a bit demanding on my own techs, but there is absolutely no excuse for this kind of ignorance in an area that may easily cause great harm.

Unfortunately, this is nothing new. A popular science magazine ran a story on this between 10 and 20 years ago.
The story hammered on the need for very strict quality control of the user interface of medical equipment, using the example of a machine used to treat cancer with radiation. It gave very informative errors like "Error 14." The device gave about a dozen of these each day (and would not proceed until the error was cleared), generally warning that the operator had forgotten a step in the procedure. This got to the point where the operators didn't bother to look up the error in question and simply dismissed the error messages.
As a result, people died of radiation poisoning, because one of the errors meant that the part regulating the amount of radiation wasn't properly placed, so the patient would be hit with the maximum dose the machine could generate.

Being in software, I have gained an appreciation for the awfulness of niche-market software, especially if it's developed by the same people who make the hardware. After all, the machine's quirks are normal to them… why are you having problems?!
I would wager that some of these CT scan software companies don't have a big testing department, have no one in charge of UI design (just let the programmer do it!), and have a skeleton crew of engineers, none of whom has an architectural plan for the project. The niche software that is good tends to be in areas where the volume and quality of competition force developers to get a plan or die.
That's no excuse, but if you are wondering why your retail store/oil well/airport/post office/CT scan software is so shoddy, that's probably why.
As sure as I am that the software probably sucks on multiple levels, the radiologists need a heaping serving of blame pie, too. One would expect a radiologist to understand and be able to use the machines of their trade: to know which parts of the machine are controlled by which UI options, and how the machine works to deliver radiation. I expect a trained radiologist to know enough about the theory of operation behind a CT scanner that they do not need a Microsoft Bob-style UI to hold their hand.
It seems to me that the people who were administering these high doses of radiation were not being careful, educated, or competent at their jobs.

Unfortunately, I have many of GE Healthcare’s products for the biotech market. They uniformly suck ass–from the nanodrop to the disposable reactors, they are overpriced (competitors regularly price at about 60% of GE quotes), crash-prone and it’s a miracle every time you finally get the thing working. They do not have field technicians to repair things when they are broken, and replacement parts are like unicorns: we have one pricey reactor that has been collecting dust for going on three years, because GE does not see fit to send us a replacement part. If you can’t repair the machine yourself, you are stuck with a very expensive paperweight. For the past two years, there has been a recall on several of their reactor and flow controller units, and they neither notified customers (and I quote, “we lost all the customer contact information and had to go from current sales records!” even though they had current sales records in abundance of both me and my colleagues, along with many angry emails) nor bothered to repair the units, despite the fact that these units can cause deadly shocks through routine use. Our engineers are still unable to comprehend how such a machine could get a CE certificate.

GE Healthcare’s quality is so lousy that I personally do not use any product of theirs if I can possibly avoid it. They are the Bergholt Stuttley “Bloody Stupid” Johnson of the engineering world.

Error messages are a particularly tricky design question. In my experience, the typical responses when a user is presented with even a good error message are:

1. If there’s a “go ahead anyway” button, click it.
2. Try again without changing anything and hope it works.
3. Change something at semi-random; repeat until the error message goes away.
4. Call/email the vendor.

Far, far, down the list of probable responses comes

5. Read the error message.

There’s only so much the vendor can do. This doesn’t excuse the fact that many vendors don’t actually do what they can, but “it is impossible to make anything foolproof, because fools are so ingenious.”

Tamakazura:

Being in software, I have gotten an appreciation for the awfulness of niche market software, especially if it’s developed by the same people who make the hardware. After all, the machine’s quirks are normal to them…why are you having problems?!

This is very, very true. Problems can arise when a company that specializes in hardware and maybe some firmware decides to write the application software for the device as well but does not invest enough effort in that software. It's a serious development effort in its own right, but if the bigwigs just see the software as something that makes their pride-and-joy box go, they'll tend to dismiss it as not requiring the same level of effort as the box itself. Or, worse, they'll just delegate the job to a few of the EEs who happen to know a bit of Visual Basic.

I agree also with Scott and others who have said that you can’t make something foolproof. “Never underestimate the ingenuity of complete fools.” But at the same time, there are things you can do to limit the damage a complete fool is likely to cause. More importantly, though, is how you react to a problem like this out in the field. Clearly, this is something which could be prevented by a software change. (It could also be prevented by a procedural change, of course, but that does not appear to have been very effective thus far. Really, if you represent your equipment as able to limit radiation and it fails to do so, that is a problem, even if it’s because it lets you select “minimum radiation” and “maximum image quality” at the same time.)

Orbital Sciences builds satellites and launch vehicles (like the oh-so-awesome Pegasus air-launched rocket). Satellites are a great example of those niche applications where the control software is typically developed by the manufacturer, but they have an added twist — the computer will be placed hundreds and perhaps thousands of miles away, never to be seen by human eyes again. You can’t swap out a faulty computer, and software upgrades will be limited by the fact that you will have to upgrade the software while you are using it, and if anything goes wrong and the computer’s memory gets hosed, you have absolutely no recourse. Consequently, there’s a bit more attention paid to reliability in this market. But things still slip through.

Orbital Sciences built a commsat for PanAmSat (now Intelsat) called Galaxy 15, and it was placed into geosynchronous orbit to serve as a "bent pipe" communications transceiver. "Bent pipe" satellites are called that because they act a bit like a bent pipe: stuff you throw up at them is sent right back down to you without any attempt by the spacecraft to make sense of it, unless the signal carries certain instructions for the spacecraft, which is important for actually commanding the spacecraft to do things like move to a different orbital slot or upload new software. Galaxy 15 had some sort of unexpected event occur, possibly as trivial as a cosmic-ray strike flipping the wrong bit. The spacecraft stopped responding to controllers and stopped stationkeeping at its assigned slot in geosynchronous orbit. As a result, though it is still functioning well enough to maintain its lock on the Sun and charge its batteries, it is drifting freely in space. It has been dubbed "Zombiesat" and has already caused headaches for other commsats in its vicinity. Its bent-pipe design means that if it picks up any signals meant for another spacecraft, it will dumbly relay those, potentially causing interference and loss of service for users of the other spacecraft. So far, the spacecraft it has approached have managed to keep operating without a disruption in service. But it's been expensive for both Intelsat and Orbital Sciences, as well as embarrassing to the manufacturer.

Nobody is likely to die or get sick because of this problem. But if Orbital Sciences does not fix it, they will lose a great deal of future business. It's a mission-critical issue, and honestly, in commsats, that tends to mean even more than safety-critical does. These things cost hundreds of millions of dollars to build and launch, and you don't spend that kind of money with a manufacturer if you think the spacecraft might go rogue like this. Orbital has not been able to determine the cause of the problem, but they have worked out a software patch to prevent it happening again.

Here’s where their response differs from GE’s. They are installing the patch on all spacecraft built on the same bus, with the same command & control subsystems. They are also hoping to install it even on Galaxy 15 and return the spacecraft to service; their window of opportunity will come when its reaction control wheels become saturated (as in zombie-state they can’t order it to perform a desaturation maneuver) and it loses its lock on the sun. Once the batteries run low enough, it will trigger a fault condition which will safe the spacecraft until, on the next orbit, it gets enough sunlight to reboot the computer, detect the fault state, stop retransmitting, and wait for instructions from the ground.

I’m cynical enough about these things to know that the difference is not that Orbital Sciences is more disciplined than GE. It’s that a Star-2 spacecraft costs a hell of a lot more than a CT scanner, and it’s a much more limited market. GE will not face the kind of financial repercussions that Orbital Sciences would. It all comes down to money.

Radiation overdoses due to naive use of a machine which ostensibly takes the planning away have happened far too many times in the past few decades. Clearly, companies don’t fear the repercussions enough to take seriously the threat that their machine might kill somebody. It’s easy to get cocky in software; it does what you tell it, right? And you’re not going to tell it to do something stupid. That would be stupid. But actually, yes, you might tell it to do something stupid, inadvertently, because computers are quite profoundly stupid.

“The trouble with computers is they’re very sophisticated idiots. They do exactly what you tell them at amazing speed.”
— the Doctor

There’s the rub, most of the time. It really is *exactly* what they’re told, not what you think you told them.

I really hate this damn machine,
I wish that they would sell it.
It never does what I want it to,
but only what I tell it.

“When the computer repeatedly crashed,…”

As a software designer and developer I have to say I find that piece alone unacceptable in a medical device.

And GE prides itself on its huge investment in Six Sigma training, a quality-control program that is supposed to reduce errors in any process down to the parts-per-million range.

They are more interested in turning out Six Sigma Champions than they are in solving this CT scanner problem.

The company is awful to work for.

I was overdosed at Huntsville Hospital with 6 gray and am still suffering after almost a year. I am sure there are others who have not been informed and don't understand what is causing their symptoms. There appears to be no integrity when it comes to Huntsville Hospital.

I am 76 and hence have grown up and been educated throughout most of the period of development of nuclear physics.

What amazed me when I read the original overdose stories, as well as the comments above, was that nobody, after the first few hair-loss reports (let alone the reports of a uniform pattern), immediately put out warning notices seeking to stop all use of such machines until the issue was addressed.

What seems quite apparent to me is that hair loss in the vicinity of radiation does not trigger immediate concern about radiation overdose. For as long as I can remember, hair loss has been the classic concomitant of overdose.

Is it possible that the medical technicians, to whose ministrations I have been subjected numerous times in recent years, are not instantly aware that hair loss is a warning of radiation overdose?

This is not just a technical machine problem but apparently a social one, in which the use of radiation is so ingrained that its practitioners have lost basic awareness of the warning signs.

Again!

Hair loss around radiation and no all-out general warnings?

j.

Some observations from a radiologist with a degree in engineering:

GE did not call their own software’s function counterintuitive. Representatives of Cedars-Sinai did.

I too am shocked that so many people were injured.

Possible factors:

1. Hair loss is not immediate after a radiation overdose. The radiologist and technologist would generally not see the patient at the time of the hair loss. Follow-up care is delivered by the patient's neurologist or primary care practitioner, who may or may not have the training to recognize the cause. Also, it would have taken some time for any one physician to see enough patients with the same problem to recognize a pattern. Now that it is in the news, the answer is obvious.

2. It is not so obvious how much radiation is being delivered. None of the scanners I use report the actual dose to the patient. The most you get is the DLP or dose-length product. To convert to effective dose, you need to multiply by a region-specific normalized dose. The reported conversion factors in the literature vary considerably.

3. Even if the scanner did report the delivered or absorbed radiation, the units would be problematic for me and other radiologists of my vintage (40s and up). I was trained at a time when radiation exposures and absorbed doses (which are related but not identical) were expressed in rads and rems, and I therefore have an intuitive feel for those measurements. The reporting standards have shifted to grays and sieverts, creating issues for understanding how much is too much. I always have to look up the conversions (summarized at the end of this comment).

4. Much of the versatility of a CT scanner comes from its flexibility. In my experience, this is how new uses for existing equipment develop: A researcher will write a paper on, for instance, cerebral perfusion CT. This will be read by a neurologist at your institution who will ask if your scanner can do the same for her patients. You look at the technique section of the paper, which may or may not include enough detail to design a protocol. You check with the “applications specialist” at the equipment manufacturer, who will suggest some protocol parameters. You pass the information along to your technologist. If you are not specifically told how to set a certain feature (like the dose reduction feature that reduces radiation for less dense organs like the lungs), then this choice will be made by the technologist, based on her past training. The same process occurs at every hospital. No, this is not the aviation industry.

I offer these observations, not as an excuse, but to try to give some insight into how things could have gone so wrong.
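
For readers of the same vintage, the conversions mentioned above are below. The unit definitions are standard SI; the head k-factor is a commonly cited literature value and, as the commenter notes, published factors vary:

```python
# Standard SI unit definitions.
RAD_PER_GY = 100.0   # 1 gray = 100 rad (absorbed dose)
REM_PER_SV = 100.0   # 1 sievert = 100 rem (effective dose)

# Approximate adult-head conversion factor, in mSv per mGy*cm;
# published values vary by source and by patient age.
K_HEAD = 0.0021

def effective_dose_msv(dlp_mgy_cm, k=K_HEAD):
    """Rough effective dose estimated from the console's reported
    dose-length product (DLP), as described in observation 2 above."""
    return k * dlp_mgy_cm
```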

