
When human subjects protection stifles innovation, part II

Back in late December, I came across an op-ed piece in the New York Times written by Dr. Atul Gawande, general and endocrine surgeon and author of Complications: A Surgeon’s Notes on an Imperfect Science and Better: A Surgeon’s Notes on Performance, that struck me as a travesty of what our system for protecting human subjects should be, as it did fellow ScienceBlogger Revere.

In brief, the article described an action by the U.S. Department of Health and Human Services' Office for Human Research Protections that, on its surface, appeared to be a case of bureaucracy hewing to the letter of the law while totally ignoring its spirit. The case involved a quality improvement program implemented by Johns Hopkins University to reduce the rate of catheter infections in intensive care units throughout Michigan. The incredibly subversive and dangerous measure (yes, I'm being sarcastic) was to formalize the implementation of a checklist to be completed before central venous catheters were inserted, which, among other incredibly reckless measures, required that the physician inserting the catheter wash his hands, don a sterile gown, carefully prep the patient's skin with standard antiseptics such as iodine or chlorhexidine, and drape the area with sterile drapes. The result, reported in the New England Journal of Medicine1, was a massive and nearly immediate reduction in the rate of catheter-associated sepsis, from 2.7 infections per 1,000 catheter-days to zero (that's right, zero), where it remained for 18 months. Given that approximately 80,000 catheter-associated infections occur in U.S. ICUs each year, resulting in as many as 28,000 deaths and costing at least $2 billion, and given that results this dramatic are rare in medicine, you'd think that studying how altering the system by which large organizations (hospitals) work to make sure that best practices are rigorously followed can improve outcomes would be just the sort of thing the NIH would want to encourage.
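(As an aside, for those unfamiliar with the unit, "infections per 1,000 catheter-days" is just the infection count normalized by the total time patients spent with central lines in place. A quick illustration of the arithmetic, with made-up counts rather than figures from the study:

```latex
% Catheter-associated infection rate, normalized per 1,000 catheter-days.
% The counts below are illustrative only, not taken from the study.
\[
\text{rate} \;=\; \frac{\text{number of infections}}{\text{total catheter-days}} \times 1000,
\qquad \text{e.g.,} \quad \frac{27}{10\,000} \times 1000 \;=\; 2.7 .
\]
```

The denominator matters: it lets hospitals with very different numbers of patients and line durations be compared on the same scale.)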

Not in this case.

What actually happened is that the OHRP received a complaint about this research, launched an investigation, and shut the project down. At the time the story was reported, Revere considered this an example of the risk-averseness and "corporate legal" mentality inherent in a government bureaucracy, while I considered it an example of how bureaucracies tend to evolve over time to interpret the law, and the regulations derived from it, in the most rigid way possible. The key issue appeared to be the definition of what constitutes "human subjects research." Indeed, the OHRP originally ruled that, because this intervention constituted "research," it required full institutional review board (IRB) approval and informed consent from every patient involved in the study. This ruling differed from that of the IRB at Johns Hopkins, which had held that the study was exempt from IRB oversight. The first problem was that, on a strict reading of the regulations, the OHRP was correct. The second problem was that the OHRP was correct only because the investigators had bothered to study the results of their quality improvement (QI) intervention. In essence, the ludicrous implication of this ruling was that it's acceptable to implement well-accepted QI interventions such as the checklist the Hopkins researchers used, but that it becomes human subjects research the moment the hospital tries to figure out whether the QI intervention actually did what it was expected to do. To boil it down: make systemic changes that are likely to improve patient care, but try to find out whether they actually do, and you will be subject to the full weight of the government's regulations and protections for human research subjects. Or, as Ben Goldacre put it:

You can do something as part of a treatment program, entirely on a whim, and nobody will interfere, as long as it's not potty (and even then you'll probably be alright). But the moment you do the exact same thing as part of a research program, trying to see if it actually works or not, adding to the sum total of human knowledge, and helping to save the lives of people you'll never meet, suddenly a whole bunch of people want to stick their beaks in.

Ben is a little more sarcastic about it than I am, as I understand from professional experience the reasons for the rules and the importance of oversight to protect human subjects. There really does need to be a "whole bunch of people" sticking their beaks in. It is the implementation of those rules in recent years that I have a problem with, as well as the confusing and often contradictory way in which "human research" is defined for regulatory purposes.

Last Thursday, the New England Journal of Medicine weighed in on the controversy with two commentaries, Quality Improvement Research and Informed Consent by Frank G. Miller and Ezekiel J. Emanuel, and Harming Through Protection by Mary Ann Baily (Hat tip: Revere). What these two articles make clear is that our regulations and rules for human subjects protection are screwed up and in dire need of an overhaul. The reason is that, for quality improvement and other research with zero or minimal risk to research subjects, the regulations can be incredibly onerous. As Dr. Baily puts it:

The case demonstrates how some regulations meant to protect people are so poorly designed that they risk harming people instead. The regulations enforced by the Office for Human Research Protections (OHRP) were created in response to harms caused by subjecting people to dangerous research without their knowledge and consent. The regulatory system helps to ensure that research risks are not excessive, confidentiality is protected, and potential subjects are informed about risks and agree to participate. Unfortunately, the system has become complex and rigid and often imposes overly severe restrictions on beneficial activities that present little or no risk.

In my experience, part of the problem is a combination of the risk-averseness of IRBs, which, even when they concede that a proposal qualifies for an exemption from IRB review under federal guidelines, are too cautious to rule that way for fear of sanctions if the ruling turns out to be wrong, and "mission creep," in which IRBs insert themselves into research that was never intended to be considered "human subjects research," to the point of stifling such research, a problem I've complained about before. Indeed, at my institution, this mission creep was not confined to the IRB; it extended to the scientific review board (SRB), whose purpose is to screen human subjects research protocols for scientific merit, not human subjects protection, and to make sure that the institution has the resources to perform the research. Even so, our SRBs have a distressing tendency to start picking at the human subjects protections in protocols, something that is not their job. Uncertainty about federal regulations and how the OHRP will interpret them likely contributes to this mission creep, and that uncertainty is well described by Dr. Baily (emphasis mine):

The investigators studied the effect on infection rates and found that they fell substantially and remained low. They also combined the infection-rate data with publicly available hospital-level data to look for patterns related to hospital size and teaching status (they didn’t find any). In this work, they used infection data at the ICU level only; they did not study the performance of individual clinicians or the effect of individual patient or provider characteristics on infection rates.

After the report by Pronovost et al. was published, the OHRP received a written complaint alleging that the project violated federal regulations. The OHRP investigated and required Johns Hopkins to take corrective action. The basis of this finding was the OHRP’s disagreement with the conclusion of a Johns Hopkins institutional review board (IRB) that the project did not require full IRB review or informed consent.

The fact that a sophisticated IRB interpreted the regulations differently from the OHRP is a bad sign in itself. You know you are in the presence of dysfunctional regulations when people can’t easily tell what they are supposed to do. Currently, uncertainty about how the OHRP will interpret the term “human-subjects research” and apply the regulations in specific situations causes great concern among people engaged in data-guided activities in health care, since guessing wrong may result in bad publicity and severe sanctions.

Moreover, the requirements imposed in the name of protection often seem burdensome and irrational. In this case, the intervention merely promoted safe and proven procedures, yet the OHRP ruled that since the effect on infection rates was being studied, the activity required full IRB review and informed consent from all patients and providers.

If you want to get an idea of how complex it can be to determine whether research is considered "human subjects research" under federal law, all you have to do is head to this page and peruse the decision charts on, for example, whether a study is human subjects research or under what conditions the requirement for informed consent can be waived. It's no wonder that a conservative interpretation of the regulations led the OHRP to rule that this was indeed human subjects research. The problem is not entirely the OHRP; it's also the rules. Although a case could be made that the research was exempt from IRB review, under a strict interpretation of the rules that case would be weak, and there's the problem. Moreover, not all cases of QI research are as clear-cut as this one with regard to minimal risk.
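To make concrete how the branches of those charts stack up, here is a loose sketch, in code, of the kind of decision logic they encode. The predicate names, their simplification into single flags, and their ordering are mine, chosen for illustration; they are not the actual regulatory language:

```python
# A loose, illustrative paraphrase of the kind of decision logic the OHRP
# charts encode, NOT the actual regulations. Predicate names and ordering
# are my simplification.

def review_path(systematic_investigation: bool,
                generalizable_knowledge: bool,
                human_subjects_involved: bool,
                minimal_risk: bool,
                fits_expedited_category: bool) -> str:
    """Return a rough review category for a proposed activity."""
    # The regulations define "research" as a systematic investigation
    # designed to develop or contribute to generalizable knowledge.
    if not (systematic_investigation and generalizable_knowledge):
        return "not research; the regulations do not apply"
    # "Human subjects" means living individuals about whom an investigator
    # obtains data through intervention or interaction, or identifiable
    # private information (collapsed here into a single flag).
    if not human_subjects_involved:
        return "research, but not human subjects research"
    # Minimal-risk research fitting one of the enumerated categories may
    # qualify for expedited review rather than full-board review.
    if minimal_risk and fits_expedited_category:
        return "human subjects research; eligible for expedited review"
    return "human subjects research; full IRB review required"

# The trap in the Hopkins case: merely measuring whether the checklist
# worked flipped the very first branch from "not research" to "research."
print(review_path(True, True, True, True, True))
```

Even this cartoon version has four branch points; the real charts have far more, which is exactly why a sophisticated IRB and the OHRP could read them differently.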

Drs. Miller and Emanuel emphasize a different aspect of this case in their commentary. The fundamental question is, in reality, whether this was systems research (how changing a system can improve the quality of care) or human subjects research (which is designed to discover generalizable knowledge that can be used to improve care). They state the ethical conundrum thusly:

Such [quality improvement] research, however, poses an apparent ethical conundrum: it is often impossible to obtain informed consent from patients enrolled in quality-improvement research programs because interventions must be routinely adopted for entire hospitals or hospital units. When, for instance, research on a quality-improvement initiative that affects routine care is conducted in an intensive care unit (ICU), surgical suite, or emergency room, individual patients have no opportunity to decide whether or not to participate. Can it be ethical to conduct such research without informed consent?

They argue, and quite correctly in my opinion:

To judge whether quality-improvement research can be ethical without informed consent, it is necessary to examine particular studies in light of the ethical purposes of informed consent. Informed consent is meant to protect people from exposure to research risks that they have not agreed to accept, as well as to respect their autonomy. None of the quality-improvement interventions in this case were experimental. They were all safe, evidence-based, standard (though not always implemented) procedures. Consequently, patients were not being exposed to additional risks beyond those involved in standard clinical care. Using a protocol to ensure implementation of these interventions could not have increased the risks of hospital-acquired infection. Moreover, the participating hospitals could have introduced this quality-improvement protocol without research, in which case the general consent to treatment by the patients or their families would have covered these interventions. The only component of the project that constituted pure research — the systematic measurement of the rate of catheter-related infections — did not carry any risks to the subjects. Thus, the research posed no risks.

Although informed consent for research participation was not, and could not have been, obtained, the absence of such consent did not amount to any meaningful infringement of patients’ autonomy. Consequently, there could be no reasonable or ethical grounds for any patient to object to being included in the study without his or her consent.

I agree that the Hopkins research was clearly about as close to zero risk as human research can get. In fact, I'd argue that it was "negative risk," in that it's almost impossible to conceive of how a patient could be harmed by requiring, through a checklist, that well-established infection control methods be followed. Moreover, as both commentaries point out, there is already a mechanism in place by which the research performed by the Hopkins team in Michigan hospitals could have been approved by its IRB without the requirement of informed consent from each patient whose data were included in the study. It's a process known as "expedited review," which can be applied to minimal-risk research, and the Hopkins study clearly met its criteria of "collection of data through noninvasive procedures (not including anesthesia or sedation) routinely employed in clinical practice" and "research including materials (data, documents, records, or specimens) that have been collected or will be collected solely for nonresearch purposes (such as medical treatment or diagnosis)." Unfortunately, many IRB chairs are so risk-averse and so unsure of how the OHRP will interpret the rules that they take the safest path: requiring full IRB review and approval. Moreover, even meeting the criteria for expedited review will not necessarily absolve investigators of the requirement to obtain informed consent; that is a separate question. And people wonder why fewer physicians remain interested in doing clinical research.

This story does have somewhat of a happy ending, although only because the OHRP caved to the negative publicity the story generated, issuing a rather bizarre retraction. The introduction to the Miller and Emanuel article points out that the OHRP has ruled that the Hopkins QI project can start up again:

The Office for Human Research Protections (OHRP) — part of the U.S. Department of Health and Human Services — has concluded that Michigan hospitals can continue implementing a checklist to reduce the rate of catheter-related infections in intensive care unit settings (ICUs) without falling under regulations governing human subjects research. Dr. Kristina C. Borror, director of the office's Division of Compliance Oversight, sent separate letters to the lead architects of the study, Johns Hopkins University and the Michigan Health & Hospital Association, outlining findings and offering researchers additional guidance for future work.

[…]

OHRP noted that the Johns Hopkins project has evolved to the point where the intervention, including the checklist, is now being used at certain Michigan hospitals solely for clinical purposes, not medical research or experimentation. Consequently, the regulations that govern human subjects research no longer apply and neither Johns Hopkins nor the Michigan hospitals need the approval of an institutional review board (IRB) to conduct the current phase of the project.

In other words, as Ben Goldacre put it, “now – since it turns out the research bit is over, and the hospitals are just putting the ticklist into practise – they may tick away unhindered.” I couldn’t have put it better. It just doesn’t get any more Through the Looking Glass than that.

Having thought about this case for a while, I wouldn't be quite as hard on the OHRP as Revere is, although I do believe that what seems to have afflicted the OHRP is a hidebound bureaucratic mindset whose hypercautiousness didn't even allow for the possibility of suggesting to the researchers that there was a mechanism under federal regulations by which the research could continue without being submitted for full IRB review and without requiring informed consent from every patient whose infection data were tracked for the study. I also have to wonder who complained about the study to the OHRP. Now there's someone who really needs a lesson in common sense and critical thinking.

By Orac

Orac is the nom de blog of a humble surgeon/scientist who has an ego just big enough to delude himself that someone, somewhere might actually give a rodent's posterior about his copious verbal meanderings, but just barely small enough to admit to himself that few probably will. That surgeon is otherwise known as David Gorski.

That this particular surgeon has chosen his nom de blog based on a rather cranky and arrogant computer shaped like a clear box of blinking lights that he originally encountered when he became a fan of a 35-year-old British SF television show whose special effects were renowned for their BBC/Doctor Who-style low budget look, but whose stories nonetheless resulted in some of the best, most innovative science fiction ever televised, should tell you nearly all that you need to know about Orac. (That, and the length of the preceding sentence.)

DISCLAIMER: The various written meanderings here are the opinions of Orac and Orac alone, written on his own time. They should never be construed as representing the opinions of any other person or entity, especially Orac's cancer center, department of surgery, medical school, or university. Also note that Orac is nonpartisan; he is more than willing to criticize the statements of anyone, regardless of political leanings, if that anyone advocates pseudoscience or quackery. Finally, medical commentary is not to be construed in any way as medical advice.

To contact Orac: [email protected]
