A few weeks ago, I first wrote about how ivermectin is the new hydroxychloroquine. What did I mean by that? Last year, as the coronavirus pandemic washed over the world for the first time, the antimalarial drug hydroxychloroquine was touted as a near-miraculous treatment for COVID-19 despite an incredible lack of anything resembling rigorous scientific or clinical evidence. Ultimately, studies were done and hydroxychloroquine was shown to be ineffective, but it remained what I like to call the “Black Knight of COVID-19 treatments” (in a nod, of course, to that famous character from Monty Python and the Holy Grail) in that, no matter how many limbs were hacked off by the emerging scientific evidence, its proponents would respond, “It’s just a flesh wound” and continue promoting it. Ivermectin has developed very much that same vibe, complete with quacks, grifters, and conspiracy theorists promoting it, every bit as much as French “brave maverick doctor” Didier Raoult, America’s quack Dr. Mehmet Oz, quacks like Vladimir Zelenko, and, of course, Donald Trump promoted hydroxychloroquine. That’s not even considering the astroturf campaigns promoting the drug.
The “hydroxychloroquine” vibe behind ivermectin was very strong, but I still resisted writing about it for several months, mainly because of an equally strong sense of “been there, done that.” Ivermectin, you might recall, is a drug commonly used to treat worm infestations in animals and some worm infections in humans as well. Like hydroxychloroquine, it showed some antiviral activity in cell culture experiments, leading some scientists to think that it might be repurposed as a COVID-19 treatment. Going beyond hydroxychloroquine, though, ivermectin proponents have pointed to at least two meta-analyses as strong evidence that ivermectin works against COVID-19, seemingly forgetting that the principle of “garbage in, garbage out” (GIGO) applies to meta-analyses every bit as much as it does to computer programs. Make no mistake, either, most of the studies used in these meta-analyses have been very poor, and the highest quality studies of ivermectin, such as they are, have all been negative, except (supposedly) one.
Guess what? That one study appears to be fraudulent, as demonstrated in three articles: one news report and two blog posts, one of which is from a familiar source. First, these two:
- Huge study supporting ivermectin as Covid treatment withdrawn over ethical concerns (The Guardian)
- Why Was a Major Study on Ivermectin for COVID-19 Just Retracted? (Jack Lawrence)
Then, an old friend on Twitter:
Remember the study Elgazzar et al 2020? It was a study from Egypt that figured prominently in the meta-analysis by the ivermectin-promoting BIRD Group in the UK and in previous attempts at “meta-analyses” by the ivermectin-promoting Frontline COVID-19 Critical Care Alliance (FLCCC). Despite its having been published only on a preprint server, both the BIRD Group and the FLCCC rated it as much higher quality than it warranted, and it exerted a major pull in making their meta-analyses of ivermectin positive. As I pointed out, using Gideon Meyerowitz-Katz’s reanalysis, without Elgazzar 2020, the BIRD Group meta-analysis that was widely touted as slam-dunk evidence that ivermectin reduces death from COVID-19 by over 60% turns into a negative meta-analysis that demonstrates no benefit. I also pointed out elsewhere that a better meta-analysis that did exclude Elgazzar 2020 was a negative meta-analysis.
So what’s the deal? First, there was a conflict of interest:
The efficacy of a drug being promoted by rightwing figures worldwide for treating Covid-19 is in serious doubt after a major study suggesting the treatment is effective against the virus was withdrawn due to “ethical concerns”.
The preprint study on the efficacy and safety of ivermectin – a drug used against parasites such as worms and headlice – in treating Covid-19, led by Dr Ahmed Elgazzar from Benha University in Egypt, was published on the Research Square website in November.
It claimed to be a randomised control trial, a type of study crucial in medicine because it is considered to provide the most reliable evidence on the effectiveness of interventions due to the minimal risk of confounding factors influencing the results. Elgazzar is listed as chief editor of the Benha Medical Journal, and is an editorial board member.
Oh, dear. What would all those groups and people pushing ivermectin say if, for instance, there were a similar conflict of interest involving someone promoting, say, one of the COVID-19 vaccines and the pharmaceutical company making the vaccine? Actually, we already know, because a lot of the people promoting ivermectin love to conspiracy monger and pull the “pharma shill” gambit about anyone saying anything positive about COVID-19 vaccines and negative about ivermectin. Indeed, ever since I wrote my first post about ivermectin last month, I’ve been subject to exactly that, even though I have nothing to do with Pfizer, Moderna, Johnson & Johnson, etc.
Then, there appears to have been plagiarism:
A medical student in London, Jack Lawrence, was among the first to identify serious concerns about the paper, leading to the retraction. He first became aware of the Elgazzar preprint when it was assigned to him by one of his lecturers for an assignment that formed part of his master’s degree. He found the introduction section of the paper appeared to have been almost entirely plagiarised.
Let’s go to the tape, so to speak, and look at Lawrence’s article:
Grftr News also detected significant levels of plagiarism in the Elgazzar paper. With one or two minor exceptions, the entirety of the paper’s introduction appears to be copied from various sources, including several other studies, press releases, and letters to the editor from other journals. (Click here to see the evidence for yourself).
Where the copying is not verbatim, the authors appear to have employed techniques more commonly used by students to disguise plagiarism, for example, by using synonyms or changing one or two words. This is how “severe acute respiratory syndrome” becomes “extreme intense respiratory syndrome” in one sentence in the paper, despite the fact that “Severe Acute Respiratory Syndrome” is part of the exact full name of COVID-19 (hence the name of the virus, SARS-CoV-2), and no scientist would paraphrase that sequence of four words regardless of how many times they had previously appeared in their article. Another example is “The coronavirus has been a known pathogen in animals since the early 1970s”, which in Elgazzar et al.’s preprint becomes “Coronavirus has been a recognised pathogen in animals in early 1960s”.
I must admit that I laughed out loud when I read that passage, as I had only skimmed that ivermectin paper when it was still on the preprint server. I had noticed some of the—shall we say—weird phraseology, but I had attributed it not to plagiarism but simply to English not being the authors’ first language. I’ve seen this sort of thing many times reviewing papers for journals; papers from non-English-speaking countries often have prose that reads strangely simply due to a nonnative speaker trying to write in English. When I see this, my usual recommendation is that the authors get someone skilled in writing English to edit the paper before resubmitting. No judgment. I only suggest it to make such papers more easily readable.
Of course, as Meyerowitz-Katz observed, the results of the study alone raised a lot of red flags. Elgazzar 2020, if you take the authors at their word, enrolled over 400 people with COVID-19 and 200 close personal contacts and allocated them either to ivermectin or placebo groups, reporting that ivermectin treatment decreased mortality from COVID-19 by a whopping 90%. As Meyerowitz-Katz observed, if this were true, that would make ivermectin the “most incredibly effective treatment ever to be discovered in modern medicine.” While as a physician I might quibble about that a bit (we do have treatments that are greater than 90% effective at eliminating the diseases or conditions that they treat, especially a number of vaccines), he is correct if you restrict this to antiviral drugs. If this study’s results were accurate and generalizable, ivermectin would be the most incredibly effective antiviral treatment ever to be discovered. That result alone should have raised a number of red flags, and it did among authors doing meta-analyses who were not ivermectin advocates from the BIRD Group or the FLCCC, which is why they excluded it from their analyses. There were also methodological reasons that I’ve mentioned before, including no good description of how patients were randomized or other key information necessary to judge the quality of a study being considered for a meta-analysis. And that’s before even getting to the statistical problems noted by Meyerowitz-Katz:
However, even at first glance there are some problems. The authors used the wrong statistical tests for some of their results — for technical people, they report chi-squared values for continuous numeric data — and their methodology is filled with holes. They don’t report any allocation concealment, there are questions over whether there was an intention-to-treat protocol or people were shifted between groups, and the randomization information is woefully inadequate. As a study, it looks very likely to be biased, making the results quite hard to trust.
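To make concrete why “chi-squared values for continuous numeric data” is a red flag: a chi-squared test compares observed category counts against expected counts, so it has no meaningful interpretation for continuous measurements like age or lab values. Here is a minimal sketch with entirely hypothetical numbers (not the Elgazzar data) showing the kind of 2×2 count table the test is actually for:

```python
# A chi-squared test applies to COUNTS in categories (e.g., died vs. survived
# in two trial arms), not to continuous measurements. All numbers here are
# hypothetical and purely illustrative.

def chi2_2x2(a, b, c, d):
    """Chi-squared statistic (no continuity correction) for the 2x2 table
    [[a, b], [c, d]], e.g. [[deaths_A, survivors_A], [deaths_B, survivors_B]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical arms: 4/100 deaths vs. 0/100 deaths
stat = chi2_2x2(4, 96, 0, 100)
```

For a continuous outcome such as days to recovery, the appropriate comparison would instead be something like a t-test or a rank-based test on the group values themselves.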
There were other issues as well, as reported by Lawrence:
One sign of poor research design – though not fraud – was the authors’ decision to only register their trial on a clinical trials registry after completing their study and publishing their first draft. Meyerowitz-Katz explained that while this is not optimal it’s still common. The purpose of registering a trial in advance is to avoid the authors changing the questions or analysis they perform once the trial is complete. Such behaviours would be considered bad practice, but not registering a trial isn’t proof that they occurred.
More problematic is the authors’ decision to provide conflicting information about their trial start date. In their trial registry information, within the paper, and in replies to comments on the paper, the authors claim to have received ethical approval and commenced the trial on the 8th of June 2020. However, according to their original data, the authors recruited and treated several patients before this date. Moreover, almost half of the patients who died during the trial died before this date. Both Meyerowitz-Katz and Sheldrick confirmed that this is a problem. If the authors started their study before they had ethical approval, this would be a major ethics violation. Additionally, the authors claim to have conducted their trial on 18-80-year-olds, but the original data contains records for four patients younger than 18.
Then there are glaring discrepancies, according to Lawrence:
When opening what the authors claim is their original data the first thing that any reader notices is that it’s remarkably complete. In many columns data for all patients are fully listed. The second thing the reader will likely notice is that the original data do not match the authors’ public results. In three of the four study arms measuring patient death as an outcome, the numbers between the paper and original data differ.
In their paper, the authors claim that four out of 100 patients died in their standard treatment group for mild and moderate COVID-19. According to the original data they uploaded, the number was 0 (the same as the ivermectin treatment group). In their ivermectin treatment group for severe COVID-19, the authors claim two patients died – the number in their raw data is four. Grftr News put these findings to the authors however has not received any reply.
The original data suggest that efforts to randomise patients between different groups either failed or were not attempted – despite claims to the contrary by the authors. Every patient in the severe COVID-19 group receiving standard care was an ICU patient, while the patients with severe disease in the ivermectin group were mixed between wards and ICU. The experts Grftr News spoke to confirmed this is extremely unlikely to happen by chance.
It also appears that some (or all) of the data for this paper were fabricated. Nick Brown wrote a long blog post about the problems with and discrepancies in the data as presented in the paper if you want the details. The sheer number of discrepancies points strongly to probable fraud, including:
At several points in the Excel file, there are instances where the values of an ostensibly random variable are identical in two or more sequences of 10 or more participants, suggesting that ranges of cells or even entire rows of data have been copied and pasted.
- In cells B150:B168 and B184:B202, the patient’s initials are either identical at each corresponding point (e.g., cells B150/B184) or, in almost all the remaining cases, differ in only one letter.
- Cells C150:C168 are identical to cells C184:C202.
- Cells D150:D168 are identical—with one exception out of 19 cells—to cells D184:D202.
- Cells I150:I167 are identical to cells I184:I201.
- Cells S150:S165 are identical—with one exception out of 14 cells—to cells S184:S199.
- Cells U150:U168 are identical to cells U184:U202.
- Cells V150:V168 are identical to cells V184:V202.
- Cells W150:W168 are identical—with three exceptions out of 19 cells—to cells W184:W202.
- Cells AA150:AA168 are identical to cells AA184:AA202.
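The kind of check Brown describes—comparing two disjoint ranges of rows within a column cell for cell—is simple to sketch. The data below are entirely hypothetical (not the actual Elgazzar spreadsheet); the point is that for a genuinely patient-specific variable, two runs of ~19 rows matching almost exactly is wildly improbable by chance:

```python
# Sketch of a duplicate-block check on one spreadsheet column, using
# hypothetical data. Rows 10-14 of this made-up column were "copy-pasted"
# to rows 20-24, mimicking the pattern Brown reports.

def compare_ranges(column, start_a, start_b, length):
    """Count cell-for-cell matches between two row ranges of a column."""
    block_a = column[start_a:start_a + length]
    block_b = column[start_b:start_b + length]
    matches = sum(x == y for x, y in zip(block_a, block_b))
    return matches, length

col = [1, 5, 3, 8, 2, 9, 4, 6, 7, 0,
       42, 17, 88, 23, 55, 3, 1, 9, 6, 2,
       42, 17, 88, 23, 55, 8, 4, 0, 7, 5]

matches, n = compare_ranges(col, 10, 20, 5)  # the duplicated blocks: 5 of 5 match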
The errors and discrepancies listed in the articles by Lawrence, Brown, and Meyerowitz-Katz just scream data fabrication to me. They just do. The most charitable explanation is that Elgazzar and coauthors made a number of cut and paste errors transferring their data from SPSS to an Excel spreadsheet, although that raises the question of why they bothered to put the data into an Excel spreadsheet instead of just providing the SPSS file. If Elgazzar and coauthors can produce the original SPSS dataset that they used to come up with their results, there might be a non-fraud explanation. I doubt it, though.
“So what?” you might say. “It’s just one study.” That is, of course, true, but this one study, even though it has never gotten past peer review and has only been published on a pre-print server, has had outsized influence. Here’s where Meyerowitz-Katz comes in, and I’m going to quote somewhat liberally (you should read the whole thing, though):
The problem is, if you look at those large, aggregate models, and remove just this single study, ivermectin loses almost all of its purported benefit. Take the recent meta-analysis by Bryant et al. that has been all over the news — they found a 62% reduction in risk of death for people who were treated with ivermectin compared to controls when combining randomized trials.
However, if you remove the Elgazzar paper from their model, and rerun it, the benefit goes from 62% to 52%, and largely loses its statistical significance. There’s no benefit seen whatsoever for people who have severe COVID-19, and the confidence intervals for people with mild and moderate disease become extremely wide.
Moreover, if you include another study that was published after the Bryant meta-analysis came out, which found no benefit for ivermectin on death, the benefits seen in the model entirely disappear. For another recent meta-analysis, simply excluding Elgazzar is enough to remove the positive effect entirely.
This is a huge deal. It means that if this study is fraudulent it has massive implications not just for people who’ve relied on it but on every piece of research that has included the paper in their analysis. Until there is a reasonable explanation for the numerous discrepancies in the data, not to mention the implausible numbers reported in the study, any analysis that includes these results should be considered suspect. Given that this is currently the largest randomized trial of ivermectin for COVID-19, and most analyses so far have included it, that is a really worrying situation for the literature as a whole.
Basically, if you remove Elgazzar 2020 from the mix of studies of ivermectin to treat COVID-19 that have been subjected to meta-analyses, you will see that current best studies show a pretty consistent lack of benefit, with one or two small trials as the exceptions.
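The mechanics of why one study can swing a whole meta-analysis are worth seeing concretely. Below is a minimal fixed-effect, inverse-variance sketch with entirely made-up studies (Bryant et al. used different data and more sophisticated methods); it simply shows how a single large, extreme trial—precise enough to carry a big weight—can dominate a pooled risk ratio, and how the apparent benefit evaporates when that trial is dropped:

```python
import math

# Fixed-effect inverse-variance pooling of log risk ratios (RR < 1 means
# the treatment looks beneficial). All studies here are HYPOTHETICAL,
# chosen only to illustrate how one extreme outlier dominates the pool.

def pooled_rr(studies):
    """studies: list of (risk_ratio, standard_error_of_log_rr) tuples."""
    weights = [1 / se ** 2 for _, se in studies]          # weight = 1/variance
    log_rrs = [math.log(rr) for rr, _ in studies]
    pooled_log = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
    return math.exp(pooled_log)

# Three small, essentially null trials:
trials = [(0.95, 0.40), (1.05, 0.35), (0.90, 0.45)]
# One large, extreme outlier: a ~90% mortality reduction with a small SE,
# so it carries more weight than any other single trial.
outlier = (0.10, 0.30)

with_outlier = pooled_rr(trials + [outlier])      # pooled RR well below 1
without_outlier = pooled_rr(trials)               # pooled RR near 1: no benefit
```

With these made-up inputs, dropping the single outlier moves the pooled risk ratio from a seemingly large mortality benefit to essentially no effect—the same qualitative pattern Meyerowitz-Katz describes for removing Elgazzar 2020.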
Moreover, this study should humble a lot of scientists. It certainly humbled Meyerowitz-Katz:
We are also left with a monumental reckoning. It should not take a Masters student/investigative journalist looking at a study to notice that it is potentially fraud. This study was reviewed by dozens of scientists including myself, and while I said it was extremely low quality even I didn’t notice the issues with the data.
Even if the paper’s authors end up providing an innocent explanation for all this it would be puzzling why it took them so long to notice their error. Whether the final story is one of purposeful fabrication or a series of escalating mistakes involving training or test datasets, this research group has still screwed up in a big way.
Although science trends towards self-correction, something is clearly broken in a system that can allow a study as full of problems as the Elgazzar paper to run unchallenged for seven months. Thousands of highly educated scientists, doctors, pharmacists, and at least four major medicines regulators missed a fraud so apparent that it might as well have come with a flashing neon sign. That this all happened amid an ongoing global health crisis of epic proportions is all the more terrifying. For those reading this article, its findings may serve as a wake-up call. For those who died after taking a medication now shown to be even more lacking in positive evidence, it’s too late. Science has corrected, but at what cost?
Now here’s where I’ll go where Lawrence doesn’t really go and Meyerowitz-Katz only touches upon. There’s a reason this paper persisted for so long, and not all of that reason is likely to be innocent, and Dr. Avi Bitterman is on the right track about this:
Yes, I would say that any meta-analysis that included Elgazzar 2020 should be fair game for retraction, especially if it classified Elgazzar 2020 as having a low risk of bias and/or clear randomization methods, inclusion criteria, and the like. As you can see from Meyerowitz-Katz’s analysis, this one study can, depending upon how it’s used in a meta-analysis, make the difference between a meta-analysis of ivermectin for COVID-19 being positive and negative.
There is a vast disinformation campaign about COVID-19, public health interventions to slow its spread, and especially vaccines. Back when the pandemic was new, hydroxychloroquine was promoted as a treatment in part because, if there were a highly effective treatment for COVID-19, advocates could argue that masks, social distancing, and “lockdowns”—and even vaccines—were unnecessary. As the evidence finally convincingly showed that hydroxychloroquine doesn’t work, the same antimaskers and antivaxxers pivoted to ivermectin. Again, I suspect that there’s a reason why the FLCCC and the BIRD Group always included Elgazzar 2020 in their meta-analyses and why they not only didn’t exclude it for lacking so much of the information necessary to judge its quality and rigor but even took the authors’ word for it and rated the study as much higher quality (i.e., “low risk of bias”) than it ever deserved. They wanted evidence to show that ivermectin is very effective against COVID-19. Whether subconsciously or not, and whether Elgazzar 2020 turns out to be fraudulent or just incompetently performed and analyzed, I suspect that they ignored its glaring problems and included it in their meta-analyses anyway because, without it, their ivermectin meta-analyses became, at best, much less impressive, with results that were no longer statistically significant, and, at worst, completely negative, showing no benefit in COVID-19 whatsoever from treatment with ivermectin.
I entitled this series, “Ivermectin is the new hydroxychloroquine.” However, hydroxychloroquine advocates never went this far. Perhaps future posts should be entitled, “When it comes to grift and astroturfing, ivermectin was once but the learner. Now it is the master.”