When last I wrote about ivermectin, the anthelmintic (anti-worm) medication that COVID-19 conspiracy theorists have been portraying (and some selling) as a miracle cure for COVID-19, I compared the drug to acupuncture. Specifically, I noted how, now that large quantities of high-quality clinical evidence from various studies have failed to find even a whiff of a hint of a signal that it works, ivermectin advocates have pivoted to citing lower-quality studies. I further pointed out one study in particular being touted by ivermectin believers, an observational study that I used to illustrate how, now that a number of randomized controlled trials (RCTs) have failed to show a benefit of ivermectin in treating COVID-19, RCTs trump uncontrolled observational evidence. Again, I used that study to make an analogy with acupuncture, where, as higher-quality evidence increasingly shows that it is nothing more than a theatrical placebo, advocates increasingly cite lower-quality evidence.
Almost in passing, I also pointed to an RCT mentioned in a Wall Street Journal article. The reason that I mentioned this study only in passing was that it hadn't been published yet and the only description came from a newspaper article. Even so, I couldn't resist tweaking ivermectin cultists with it, mainly because it's yet another study that shows no benefit from ivermectin in treating COVID-19. Of course, as I've described many times, the prior plausibility that ivermectin would work against COVID-19 was always very, very low, or, as I liked to put it, not homeopathy-level low, but pretty damned low. Why so low? Because the in vitro (cell culture) studies in which ivermectin was demonstrated to show antiviral activity against SARS-CoV-2, the coronavirus that causes COVID-19, required concentrations over 50-fold higher than what can safely be achieved in the human bloodstream. If ivermectin were to work in humans, presumably something other than the mechanism by which it inhibited the virus in cell culture must be operative. (I've encountered this very same problem with another drug that I've tried to repurpose to treat cancer, albeit not so dramatically as an order-of-magnitude difference between what is needed in vitro and what is achievable in vivo.) The bottom line, as I like to put it, is:
Very low prior plausibility
Equivocal clinical studies
Drug doesn’t work for the proposed indication
Here we are, a couple of weeks after the WSJ article, and the study has finally been published—in The New England Journal of Medicine (NEJM), yet! Known as the TOGETHER trial, this study was a double-blind, randomized, placebo-controlled, adaptive platform trial involving symptomatic SARS-CoV-2–positive adults recruited from twelve public health clinics in Brazil. In brief, patients who had had symptoms of COVID-19 for up to 7 days and who also had at least one risk factor for disease progression were randomly assigned to receive ivermectin (400 μg per kilogram of body weight) once daily for 3 days or placebo. The primary endpoint was a composite outcome of either hospitalization due to COVID-19 or an emergency department visit due to clinical worsening of COVID-19 symptoms (defined as the participant remaining under observation for >6 hours), in either case within 28 days after randomization. What is reported here are the results for one drug, as the TOGETHER trial looked at other treatments for COVID-19, according to this schema:
It also turns out that the two groups were well-matched:
Before I go on, I'll be honest right here. I've criticized composite outcomes (outcomes that lump more than one endpoint together) before, particularly in trials looking at cardiovascular outcomes, where, for example, a composite outcome of death, MI, stroke, coronary revascularization, and hospitalization for angina struck me as a rather transparent method of taking a bunch of outcomes that weren't significantly different between placebo and treatment groups and combining them into a measure that did achieve statistical significance. So seeing a composite outcome in this trial gave me brief pause, but the investigators justified their choice well enough to convince me that it was not unreasonable to choose this composite outcome measure:
Because many patients who would ordinarily have been hospitalized were prevented from admission because of limited hospital capacity during peak waves of the Covid-19 pandemic, the composite outcome was developed to measure both hospitalization and a proxy for hospitalization, observation in a Covid-19 emergency setting for more than 6 hours. This region of Brazil implemented mobile hospital-like services in the emergency settings (i.e., temporary field hospitals) with units of up to 80 beds; services included multiple-day stays, oxygenation, and mechanical ventilation. The 6-hour threshold referred only to periods of time that were recommended for observation by a clinician and was discounted for wait times. The event-adjudication committee, whose members were unaware of the randomized assignments, judged the reason for hospitalization or prolonged observation in the emergency department as being related or unrelated to the progression of Covid-19. Guidance for the validity of composite outcomes indicates that outcomes should have a similar level of patient importance.14
If patients who would normally have been admitted to the hospital were sent home because hospitals were overcapacity, it isn’t unreasonable to count patients who were observed in the emergency room this way. I’m not thrilled with it, as it makes the study less “pure” than I would like, but I understand it and applaud the researchers for trying to make this composite as rigorous as such composite outcomes can be.
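To make the definition concrete, here is a minimal sketch in Python of how such a composite endpoint might be derived from its components. The field names and toy records are mine for illustration; they are not the trial's actual data model.

```python
# Hypothetical patient records; field names are illustrative, not from the trial.
patients = [
    {"hospitalized": True,  "er_observation_hours": 0, "covid_related": True},
    {"hospitalized": False, "er_observation_hours": 8, "covid_related": True},
    {"hospitalized": False, "er_observation_hours": 8, "covid_related": False},
    {"hospitalized": False, "er_observation_hours": 4, "covid_related": True},
]

def had_primary_outcome(p):
    """Composite endpoint: COVID-19 hospitalization, OR a COVID-related
    emergency-department visit with more than 6 hours of observation."""
    if not p["covid_related"]:
        return False
    return p["hospitalized"] or p["er_observation_hours"] > 6

events = sum(had_primary_outcome(p) for p in patients)
print(events)  # 2: one hospitalization plus one >6-hour COVID-related ER stay
```

Note that the adjudication step (deciding whether a hospitalization or long observation was actually due to COVID-19 progression) was done by a blinded committee; the boolean here stands in for that judgment.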
But what about the adaptive trial design? Adaptive trials are trials that are set up such that they can be modified while under way and still provide valid results. More specifically, adaptive studies are designed so that they can utilize results accumulating in the trial to modify the trial’s course in accordance with pre-specified rules. These changes can include (but aren’t limited to) refining the sample sizes, abandoning treatments or doses, changing the allocation ratio to trial arms, identifying patients most likely to benefit and concentrating on recruiting them, or stopping the whole trial at an early stage for lack of efficacy or for adverse reactions. For example, the TOGETHER Trial protocol had prespecified criteria for altering or stopping the study based on discontinuing interventions for futility, stopping owing to superiority of an intervention over placebo, or adding new interventions.
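As an illustration of the kind of pre-specified interim rule an adaptive platform trial can use, here is a simple Bayesian monitoring sketch: given interim event counts, estimate the posterior probability that the treatment beats placebo and compare it against decision thresholds. This is a generic toy, not the TOGETHER trial's actual statistical plan, and the thresholds are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_superiority(events_t, n_t, events_c, n_c, draws=200_000):
    """Posterior probability that the treatment arm's event rate is lower
    than placebo's, using independent Beta(1, 1) priors on each arm."""
    p_t = rng.beta(1 + events_t, 1 + n_t - events_t, draws)
    p_c = rng.beta(1 + events_c, 1 + n_c - events_c, draws)
    return float(np.mean(p_t < p_c))

# Counts matching the published primary outcome (100/679 vs. 111/679).
p_sup = prob_superiority(100, 679, 111, 679)

# Hypothetical pre-specified rules (thresholds invented for illustration):
stop_for_superiority = p_sup > 0.99   # overwhelming evidence of benefit
continue_or_stop_futility = p_sup < 0.99  # nowhere near the superiority bar
```

With the reported counts, the posterior probability of any benefit at all comes out somewhere around 0.8: far short of anything a trial would call superiority, which is consistent with the credible interval comfortably spanning 1.0.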
In addition, the authors note:
The trial began recruitment for its first investigational groups on June 2, 2020. The evaluation that is reported here involved patients who had been randomly assigned to receive either ivermectin or placebo between March 23, 2021, and August 6, 2021. The initial trial protocol specified single-day administration of ivermectin, and we recruited 77 patients to this dose group. On the basis of feedback from advocacy groups, we modified the protocol to specify 3 days of administration of ivermectin. Here, we present data only on the patients who had been assigned to receive ivermectin for 3 days or placebo during the same time period. The full trial protocol was approved by local and national research ethics boards in Brazil and by the Hamilton Integrated Research Ethics Board in Canada. The CONSORT (Consolidated Standards of Reporting Trials) extension statement for adaptive design trials guided this trial report.12 All the patients provided written informed consent.
We responded to feedback from advocacy groups regarding this administration schedule and adapted the duration of ivermectin administration to 3 days at a relatively high dose as compared with most other trials of this drug.
So the authors actually used a higher dose of ivermectin than was used in most other studies of the drug, and they did so based on the feedback of ivermectin advocates.
So what were the results?
In the intention-to-treat population, 100 patients (14.7%) in the ivermectin group had a primary-outcome event, as compared with 111 (16.3%) in the placebo group (relative risk, 0.90; 95% Bayesian credible interval, 0.70 to 1.16) (Table 2). For those not familiar with the term, “intention to treat” means analyzing all subjects who were randomized, regardless of whether they completed their treatment or not or whether they dropped out of the study or not. Similar results were observed in the modified intention-to-treat population (relative risk, 0.89; 95% Bayesian credible interval, 0.69 to 1.15) and the per-protocol population (relative risk, 0.94; 95% Bayesian credible interval, 0.67 to 1.35). In this study, the modified intention-to-treat analysis included “all the patients who received ivermectin or placebo for at least 24 hours before a primary-outcome event (i.e., if an event occurred before 24 hours after randomization, the patient was not counted in this analysis),” while the per-protocol analysis looked at “all the patients who reported 100% adherence to the assigned regimen.” Basically, this is as negative as a negative study can be, given that all the confidence intervals for the relative risks of the specified outcome (in this case the Bayesian credible intervals) overlapped 1.0—and not by a little.
The negative results continued for secondary outcomes as well:
There were no significant differences between the ivermectin group and the placebo group with regard to viral clearance at day 7 (relative risk, 1.00; 95% Bayesian credible interval, 0.68 to 1.46) (Fig. S3). The 14-day restricted mean survival time difference17 was 0.11 days (95% Bayesian credible interval, −0.23 to 0.48). There were no significant between-group differences with regard to the risk of hospitalization for any cause (relative risk, 0.83; 95% Bayesian credible interval, 0.63 to 1.10), the time to hospitalization (hazard ratio, 0.83; 95% Bayesian credible interval, 0.61 to 1.13) (Fig. S1), and the number of days in the hospital (mean difference of the log-transformed values, 1.00 days; 95% Bayesian credible interval, 0.80 to 1.25).
There were also no significant between-group differences in the time to clinical recovery (Fig. S2) (hazard ratio, 1.05; 95% Bayesian credible interval, 0.88 to 1.24), the risk of death (relative risk, 0.88; 95% Bayesian credible interval, 0.49 to 1.55), the time to death (hazard ratio, 0.88; 95% Bayesian credible interval, 0.47 to 1.67), or the number of days with mechanical ventilation (mean difference, 1.06 days; 95% Bayesian credible interval, 0.63 to 1.75). There was no evidence of between-group differences in the PROMIS Global-10 physical-component score as measured on day 28 (mean difference, −0.4 points; 95% Bayesian credible interval, −1.4 to 0.6) or mental-component score (mean difference of the squared values, 6.1 points; 95% Bayesian credible interval, −104.1 to 116.7). With regard to adverse events, there were no important between-group differences in the incidence of adverse events during the treatment period (Table S6).
Again, all the intervals here overlap 1.0 (no increase or decrease in relative risk or hazard ratio). There were no statistically significant differences in any of the secondary outcomes in this study, either. Subgroup analyses also failed to find a benefit from ivermectin in subgroups defined according to patient age, body-mass index, cardiovascular or lung disease status, sex, smoking status, or time since symptom onset, nor was a benefit observed with ivermectin as compared with placebo among patients who began the trial regimen within 3 days after symptom onset.
So we have here yet another study, another RCT, that resoundingly failed to find even a hint of a signal that might indicate a benefit to treating COVID-19 with ivermectin. Quelle surprise! So how are the ivermectin advocates who claim that they are just “following the science” reacting? Not well. Not well at all.
Here are a couple of examples:
Those two Tweets, however, are easy to dismiss. What about more “serious” critiques of the study? One was published on—where else?—Robert F. Kennedy Jr.’s website Children’s Health Defense just yesterday. It’s by Madhava Setty, MD, billed as senior science editor for The Defender, RFK Jr.’s news blog. (How far does one have to have fallen as a medical professional to agree to be the senior science editor for a large antivaccine blog?) In any case, in his article Dr. Setty accuses the New York Times‘ Carl Zimmer—Carl Zimmer, one of the best science journalists out there!—of misleading the public about the TOGETHER trial.
First, demonstrating that he didn’t bother to read the actual protocol, Dr. Setty is incensed about the composite outcome:
Also of note, the investigators chose to include emergency room visits with hospitalizations for COVID. Clearly, six hours of observation in an ER is a significantly different outcome than a hospitalization that may last a night or much longer.
When excluding the ER visits from the primary outcome and examining only hospitalizations, the ivermectin cohort had even less risk of an outcome, i.e. the relative risk was 0.84 vs 0.9 when ER visits and hospitalization were grouped together.
Seriously, as I discussed above in commenting on my misgivings about composite trial designs, the authors addressed why they chose this particular outcome. You can disagree with it, or not, but they accounted for it completely in the statistical plan for the study and explained exactly why they chose this particular composite outcome. If you’re going to disagree, fine, but address what the authors actually did and their rationale for it, rather than your pathetic straw man version. Also, how far ivermectin advocates have fallen if they’re trying to disparage this study by claiming that it shows only a 10% benefit in terms of hospitalizations due to the drug. What happened to that miracle cure? Even if there were a 10% benefit in this study due to the drug (and there most definitely is not), that is not what I’d call a miracle cure—or even a highly effective treatment!
Next up, Dr. Setty thinks that this is a slam-dunk criticism:
The NEJM study took place in Brazil between March 23 and Aug. 6, 2021. The study examined 1,358 people who expressed symptoms of COVID-19 at an outpatient care facility (within seven days of symptom onset), had a positive rapid test for the disease and had at least one of these risk factors for severe disease:
- Age over 50
- Hypertension requiring medical therapy
- Diabetes mellitus
- Cardiovascular disease
- Lung disease
- Organ transplantation
- Chronic kidney disease (stage IV) or receipt of dialysis
- Immunosuppressive therapy (receipt of ≥10 mg of prednisone or equivalent daily)
- Diagnosis of cancer within the previous 6 months
- Receipt of chemotherapy for cancer.

Young and healthy individuals were not part of this study.
Note the moving of the goalposts. If ivermectin only works in “young and healthy” people, then it wouldn’t be much of a miracle cure, would it? Of course, the TOGETHER trial intentionally studied people with symptomatic COVID-19 who were at the highest risk of progression to severe disease and death. If ivermectin worked, it should work in this population, and it wouldn’t be much good if it only worked in the “young and healthy,” who, as ivermectin cultists and antivaxxers so frequently remind us (to the point of perseverating about it), are at the lowest risk of hospitalization and death from COVID-19 to begin with. Also, limiting the study to those at higher risk allowed the use of fewer subjects, because in a study in which the expected rate of the primary outcome is very low, a much larger number of subjects is required to see a statistically significant difference between the treatment and placebo groups.
This part made me laugh out loud:
The study’s authors wrote:

100 patients (14.7%) in the ivermectin group had a primary-outcome event (composite of hospitalization due to the progression of COVID-19 or an emergency department visit of >6 hours that was due to clinical worsening of COVID-19), as compared with 111 (16.3%) in the placebo group (relative risk, 0.90; 95% Bayesian credible interval, 0.70 to 1.16).

In other words, a greater percentage of placebo recipients required hospitalization or observation in an emergency department than those who received ivermectin.
Seriously, one wonders how Dr. Setty got through medical school if he couldn’t pass a basic biostatistics course. Look at those confidence intervals! Again, as I pointed out, they overlap 1.0. Of course, Dr. Setty probably didn’t flunk biostatistics. Rather, he falls back on that favorite excuse used to explain a negative study, namely, “You didn’t use enough subjects”:
As is demonstrated in nearly every subgroup, the Ivermectin recipients fared better than those who received the placebo.
However, these data were not statistically significant given the size of the study.
This is how the authors were able to conclude there was no benefit to ivermectin use in preventing hospitalization in high-risk patients in their study.
Funny how apologists like Dr. Setty always assume that more subjects would translate into a statistically significant difference rather than the more likely outcome that it would not. Really, the only time this sort of appeal to “more subjects would’ve resulted in a positive study” is valid is when the results as reported are close to statistical significance. Such is not the case with the TOGETHER trial.
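It’s worth putting numbers on that. Using the standard normal-approximation sample-size formula for comparing two proportions (a back-of-the-envelope sketch, not the trial’s actual power calculation), you can ask how many patients per arm would be needed to detect the observed difference (16.3% vs. 14.7%) with 80% power:

```python
import math

def n_per_arm(p1, p2, z_alpha=1.959964, z_beta=0.841621):
    """Normal-approximation sample size per arm for comparing two
    proportions (two-sided alpha = 0.05, power = 0.80)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Primary-outcome event rates observed in the TOGETHER trial.
needed = n_per_arm(0.163, 0.147)
print(needed)  # roughly 8,000 per arm, vs. the ~680 per arm actually enrolled
```

In other words, even if the observed 10% relative risk reduction were a real effect (and the credible interval gives no reason to think it is), it would take a trial more than ten times the size of TOGETHER to reliably detect it. That is not what a “miracle cure” looks like.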
Dr. Setty continues to demonstrate that he can’t science by then arguing:
Only 288 of 679 participants randomized to receiving the placebo reported 100% adherence to the study protocol. Nearly 400 didn’t.
Why not? We asked Dr. Meryl Nass, an internist and member of the Children’s Health Defense scientific advisory committee. Nass told The Defender:

Presumably they knew the difference between ivermectin and placebo, and the placebo subjects went out and bought ivermectin or something else … but whatever they did, they didn’t bother with the pills they were given. So, it was not actually a double-blinded trial. Yet the 391 people who didn’t take the placebo but did something else were included in two of the three calculations of ivermectin efficacy anyway.

So, was this the definitive answer proclaimed by mainstream sources? Nass thinks otherwise:

I would say that instead, it was a failed trial due to the 391 placebo recipients who admitted they did not follow protocol versus the 55 in the ivermectin arm.
Dr. Nass isn’t exactly what one would call a reputable source. But what about this criticism? This is an example of Dr. Nass and Dr. Setty ignoring the intention-to-treat analysis and modified intention-to-treat analysis to focus on this one analysis. In fact, best practice in clinical trials is to include both intention-to-treat analyses and per-protocol analyses:
The use of ITT analysis ensures maintenance of comparability between groups as obtained through randomization, maintains sample size, and eliminates bias. In addition, results obtained in such analysis more closely represent clinical practice, dealing with “effectiveness” of the intervention rather than “efficacy.” In view of these advantages, ITT is today considered as a defacto standard for analysis of clinical trials, though a minority school of thought believes that this approach is too conservative.
In contrast, per-protocol (PP) analysis refers to inclusion in the analysis of only those patients who strictly adhered to the protocol. The PP analysis provides an estimate of the true efficacy of an intervention, i.e., among those who completed the treatment as planned. However, as discussed above, its results do not represent the real life situation and it is likely to show an exaggerated treatment effect.
The CONSORT guidelines for reporting of “parallel group randomized controlled trials” recommend that both ITT and PP analyses should be reported for all planned outcomes to allow readers to interpret the effect of an intervention.
In fact, contrary to what Drs. Setty and Nass argue, the completely negative per-protocol analysis is likely more damning to ivermectin, because per-protocol analyses tend to exaggerate positive results. But leave it to cultists to try to point to a sign of more rigor in a study as evidence that the study shows the opposite of what it shows. I also can’t help but note that one criticism of this study is that ivermectin was available over-the-counter in Brazil at the time of the study, meaning that those in the placebo group could have been using ivermectin obtained over-the-counter. The per-protocol analysis excludes those people, who would have violated the protocol. Indeed, I’d be willing to bet that the reason so many people in the placebo group dropped from the protocol was that they took ivermectin in addition to placebo, given how available it was, but the authors would have to comment.
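To see why per-protocol analyses tend to flatter a treatment, consider a toy Monte Carlo simulation (entirely made up, not data from this trial) in which the drug does nothing at all, but sicker patients in the treatment arm are less likely to adhere. The intention-to-treat estimate stays honest at a relative risk of about 1.0, while the per-protocol estimate drifts toward a spurious “benefit” simply because the analysis quietly drops high-risk patients from the treatment arm:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000  # per arm; large so the point estimates are stable

# Baseline risk of a bad outcome, identical in both arms (the drug is null).
risk_drug = rng.uniform(0.0, 0.4, n)
risk_placebo = rng.uniform(0.0, 0.4, n)

# Assumption for illustration only: sicker patients on the drug are less
# likely to keep taking it, while placebo adherence is unrelated to risk.
adheres_drug = rng.random(n) < (1.0 - risk_drug)
adheres_placebo = rng.random(n) < 0.9

# Outcomes depend only on baseline risk; the treatment has zero effect.
event_drug = rng.random(n) < risk_drug
event_placebo = rng.random(n) < risk_placebo

itt_rr = event_drug.mean() / event_placebo.mean()
pp_rr = event_drug[adheres_drug].mean() / event_placebo[adheres_placebo].mean()

print(f"ITT RR ~ {itt_rr:.2f}, per-protocol RR ~ {pp_rr:.2f}")
```

The direction of the bias depends on who stops adhering and why, which is exactly why intention-to-treat is the primary analysis, and why pointing at per-protocol non-adherence as the trial’s fatal flaw gets things backwards.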
Then there is the adaptive trial design, which means that different treatment groups in the overall trial “shared” placebo control groups, one of which took placebo for three days:
The per-protocol population included all the patients who reported 100% adherence to the assigned regimen. Although all the participants who had been assigned to the 3-day and 14-day placebo regimens were included in the intention-to-treat population, only those who had been assigned to the 3-day placebo regimen were included in the per-protocol population.
It’s clear that either Drs. Setty and Nass don’t understand adaptive trial design, or they do understand it and are ignoring it as a potential explanation for why the per-protocol analysis lost so many placebo-group subjects, because they know their readers won’t notice. Of course, their readers very likely don’t understand clinical trial design or proper study analysis in general, and RFK Jr. and his sycophants, toadies, and lackeys (like Dr. Setty) know that.
Dr. Setty concludes by JAQing off (“just asking questions,” for those not familiar with the term):
Rather than pounding the final nail in the coffin around ivermectin’s utility in treating COVID, the NEJM study raises more questions.
- What would the effect have been if a higher dose shown to be effective were administered?
- What would be the benefit of this medicine in patients with no risk factors?
- How statistically significant would the results have been if more participants were enrolled?
- Why weren’t more participants enrolled as the study progressed given the emerging benefit of the drug and the absence of adverse events?
- Why did the investigators define a primary outcome with such different real-world implications (ER visits vs hospitalizations)?
- With less than 50% of the placebo arm adhering to the study protocol, why were their outcomes included in the analysis?
- What effect did vaccination status have on outcome? If this is the primary means endorsed to prevent hospitalization, why wasn’t vaccination status mentioned as a confounder?
- Did the investigators choose to limit the study as it became clear that an Ivermectin benefit would be too big to ignore?
These questions are all handwaving, although the question about vaccination status might be a valid criticism of the study. My guess as to why it wasn’t included in the final analysis, given that vaccinated people were allowed to enroll, is speculation, but, I hope, knowledgeable speculation. Given the dates of the study, my guess is that the number of vaccinated patients was so small as to be insignificant. If you look at the statistics, at the beginning of the trial only a very small number of people in Brazil were even partially vaccinated, and by the end only about one-fifth were fully vaccinated and one-half partially vaccinated. Given that the Delta variant only arose late in the study and the Omicron variant hadn’t arisen at all yet, it’s likely that the number of breakthrough infections in the study population was very small by the end of the study in August 2021. Still, it would have been helpful if the authors had explicitly discussed this issue, in order to preempt this criticism.
And, of course, Dr. Setty claims that the dose was too low, citing the Frontline COVID-19 Critical Care Alliance (FLCCC) protocol (remember the FLCCC?), which recommends 600 μg/kg for five days. Never mind that at the time this study was designed the authors increased the duration of treatment based on what advocates recommended.
Basically, critics are doing this:
Not that that stopped ivermectin believers:
Also, one notes the…selectivity…with which ivermectin fans apply their criticisms to the TOGETHER study:
And, relevant to my discussion of the per-protocol analysis above:
The bottom line is that this study is yet another nail in the coffin of ivermectin as a treatment for COVID-19. At this point I will, as is my wont, refer to a famous Monty Python sketch and note that ivermectin is “pining for the fjords,” or, to paraphrase:
Ivermectin for COVID-19 is no more! It has ceased to be! It’s expired and gone to meet its maker! It’s a stiff! Bereft of life, it rests in peace!…It’s kicked the bucket, shuffled off its mortal coil, run down the curtain and joined the bleedin’ choir invisible!! THIS IS AN EX-COVID-19 TREATMENT!!
Picture the shopkeeper as Drs. Setty and Nass (and the entire FLCCC) and me as John Cleese, with the parrot being ivermectin for COVID-19:
My prediction is that ivermectin fans will keep responding in the same way as the shopkeeper in the sketch did and keep stubbornly denying that the parrot is, in fact, dead and insisting that it’s “pining for the fjords.”