Last week, an antivaxer “challenged” me to look over a paper purporting to show that aluminum adjuvants in vaccines cause inflammation of the brain and therefore contribute to autism, a paper that she would be “citing frequently.” Being someone who lives by the motto, “be careful what you wish for,” I looked it over in detail. Not surprisingly, my conclusion was that the experiments were poorly done using obsolete and not very quantitative methodology and that the results do not support the conclusions made by the authors. I was not alone in this conclusion. Skeptical Raptor was, if anything, even harsher on the paper than I was.
The paper in question came out of the lab of Christopher Shaw and Lucija Tomljenovic in the Department of Ophthalmology at the University of British Columbia. As I note every time I examine a paper by these two warriors for antivaccine pseudoscience, both have a long history of publishing antivaccine “research,” mainly falsely blaming the aluminum adjuvants in vaccines for autism and, well, just about any health problem children have and blaming Gardasil for premature ovarian failure and all manner of woes up to and including death. Shaw was even prominently featured in the rabidly antivaccine movie The Greater Good. Not surprisingly, they’ve had a paper retracted, as well.
Given the authors’ history and a paper that I and others found completely consistent with that history of publishing bad science in the service of antivaccine views, you might reasonably ask: Why am I writing about it again? It turns out that I was indeed far too kind the first time around. You see, I didn’t look at all the DNA gels and Western blot films closely enough. I confess that sometimes I don’t, particularly when the images provided by the journal online are relatively low resolution. Fortunately, however, there are others with a much sharper eye for photos of DNA gels and films of Western blots than I have, and, if what these people are saying is correct, I rather suspect that Shaw and Tomljenovic might well be cruising for their second retracted paper. Before I explain why, it’s necessary for me to briefly explain two things for nonscientists not familiar with the methodology used.
In last week’s post, I complained that the authors had basically ground up mouse brains and used semiquantitative PCR to measure the level of messenger RNA for each immune cytokine examined. There’s no need for me to go into how this method is only roughly quantitative or how there are much better methods available now. I did that last time. What I do need to point out is that, after the PCR is run, the products (the DNA fragments amplified by the reaction) are separated by placing them in an agarose gel and running an electrical current through it. This gel electrophoresis works because DNA migrates towards the positive electrode, and the solidified agarose forms a porous matrix that separates the DNA fragments by size. The gel can then be stained with ethidium bromide, whose fluorescence allows visualization of the bands, which can be assessed for size and purity. Photos of the gel can be taken and subjected to densitometry to estimate how much DNA is in each band relative to the other bands.
To measure protein, Western blots work a little differently. Basically, isolated cell extracts or protein mixtures are subjected to polyacrylamide gel electrophoresis (PAGE) with a denaturing agent (SDS). Again, like DNA, protein migrates towards the positive electrode, and the gel forms pores that impede the process, allowing separation by size and charge. The proteins are then transferred to a membrane (the Western blot) and visualized by using primary antibodies to the desired protein, followed by a secondary antibody with some sort of label. In the old days, we often used radioactivity. These days, we mostly use chemiluminescence. Blots are then exposed to film or, more frequently today, to a phosphorimager plate, which provides a much larger linear range for detecting the chemiluminescence than old-fashioned film. Just like DNA gels, the bands can be quantified using densitometry. In both cases, it’s very important not to “burn” (overexpose) the film, which pushes the band intensity out of the linear range, or to underexpose it (noise can cause problems). It’s also important how the lines are drawn around the bands using the densitometry software and how the background is calculated. More modern software can do it fairly automatically, but there is almost always a need to tweak the outlines chosen, which is why I consider it important that whoever is doing the densitometry should be blinded to experimental group, as bias can be introduced in how the bands are traced.
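To give a concrete sense of what densitometry ultimately produces, here is a minimal sketch of how band intensities are typically reduced to the “fold change” numbers reported in figures like these. The function name and all the intensity values are hypothetical, made up purely for illustration; real analyses come from imaging software, but the arithmetic is the same: normalize each band to its lane’s loading control (here, actin), then express treated lanes relative to the control mean.

```python
# Hypothetical sketch of densitometry arithmetic; all values are made up.

def fold_change(target, loading_control, control_idx, treated_idx):
    """Normalize each target band to its lane's loading control,
    then express treated lanes relative to the mean of control lanes."""
    normalized = [t / lc for t, lc in zip(target, loading_control)]
    control_mean = sum(normalized[i] for i in control_idx) / len(control_idx)
    return [normalized[i] / control_mean for i in treated_idx]

# Made-up band intensities (arbitrary units): 3 control lanes, 3 treated lanes
tnf_bands   = [120.0, 110.0, 130.0, 480.0, 510.0, 450.0]
actin_bands = [300.0, 290.0, 310.0, 295.0, 305.0, 300.0]

changes = fold_change(tnf_bands, actin_bands,
                      control_idx=[0, 1, 2], treated_idx=[3, 4, 5])
print([round(c, 2) for c in changes])  # prints [4.07, 4.18, 3.75]
```

Note that every number here depends on how the band and background boxes were drawn in the first place, which is exactly why blinding the person doing the tracing matters.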
So why did I go through all this? Hang on, I’ll get to it. First, however, I’d like to point out to our antivaccine “friends” that peer review doesn’t end when a paper is published. Moreover, social media and the web have made it easier than ever to see what other scientists think of published papers. In particular, there is a website called PubPeer, which represents itself as an “online journal club.” More importantly, for our purposes, PubPeer is a site where a lot of geeky scientists with sharp eyes for anomalies in published figures discuss papers and figures that seem, well, not entirely kosher. It turns out that some scientists with sharp eyes have been going over Shaw and Tomljenovic’s paper, and guess what? They’ve been finding stuff. In fact, they’ve been finding stuff that to me (and them) looks rather…suspicious.
One, for instance, took figure 1C of the paper and adjusted the background and contrast to accentuate differences in tones:
It was immediately noted:
- A clear and deliberate removal of the Male 3 Control TNF result. This isn’t an unacknowledged splice, as there is no background pattern from a gel contiguous with either band, left or right.
- Removal of the left half of the Male 1 Control IFN-g. Dubious also about Male 3 Control IFN-g, as the contrast highlight shows boxing around the band.
- What appears like an unacknowledged splice in ACHE blot, between AI Animal 2, Control Animal 3
Comparing this representative blot to the densitometry accompanying it, they note that the IFN-g fold change from control to AI (relative to actin), scored from 5 independent experiments, averages 4.5, with an SEM ranging from ~2.7 to 6.5. This seems too good to be true.
Look at the band. It’s the second from last band. It looks as though the band has been digitally removed. There is an obvious square there. The edges are clear. Now, this could be a JPEG compression artifact. Indeed, one of the commenters is very insistent about reminding everyone that compression artifacts can look like a square and fool the unwary into thinking that some sort of Photoshopping had occurred. However, I do agree with another of the PubPeer discussants that this is enough of a problem that the journal should demand the original blot.
On this one, I’ll give Shaw and Tomljenovic the benefit of the doubt. (Whether they deserve it or not, you can judge for yourself.) That might be a compression artifact. Other problems discovered in the gels are not so easily dismissed. For instance, there definitely appears to be the ol’ duplicated and flipped gel bands trick going on in Figure 2A:
Spotting these takes a little bit of skill, but look for distinctive parts of bands and then look to see if they show up elsewhere. It’s also necessary to realize that there could be multiple different exposures of the same band, such that the same band can appear more or less intense and mirror-imaged. You have to know what to look for, and I fear that some readers not familiar with looking at blots like these might not see the suspicious similarities, even when pointed out. Still, let’s take a look. There are more examples, for instance, these two bands in Figure 4C:
And Figures 4B and 4D, where bubbles on the gels serve as markers:
You can look at the rest of the PubPeer images for yourself and decide if you agree that something fishy is going on here. I’ve seen enough that I think there is, as is pointed out near the end:
Great to see such rapid progress being made: Band duplications firmly established for gels in Figs. 2 and 4. Perhaps we can add some RT-PCR from Fig. 1 too? In Fig. 2, seek out the band marked above that looks like a sailing boat with mast and forestay. Now look for it in Fig. 1A. And then perhaps check for any other duplications?
Others note that Shaw and Tomljenovic have engaged in a bit of self-plagiarism, too. Figure 1 in the 2017 paper is identical (and I do mean identical, except that the bars in the older paper are blue) to a paper they published in 2014. Basically, they threw a little primary data into one of their crappy review articles trying to blame “environment” (i.e., vaccines) for autism, this one published in 2014 in OA Autism. Don’t take my word for it. Both articles are open-access, and you can judge for yourself.
Some comments from PubPeer:
As far as I can see figure 1 is identical in the two papers? But in the 2014 paper histograms are described as means +/- SEM from three independent experiments and in 2017 as means +/- SEM of five independent experiments? http://www.oapublishinglondon.com/article/1368
Brazen self-plagiarism of the open access 2014 paper’s Fig. 1 is a key find by the human commentator. Especially since it is not in PubMed (though it is Ref. 166 here). This means that they have used certain elements of a single gel four times in three years: Nice work if you can get it.
Here is the direct link to 2014 Fig. 1
The licence for the 2014 paper states “Creative Commons Attribution License (CC-BY)”. Unfortunately, the 2017 recycling of Fig. 1 is neither creative nor is it attributed.
What this means is that Elsevier were misled regarding the copyright situation and the originality of the work. So this finding surely gives the 2017 publisher a get out of jail card. If they choose to play it, they can now unilaterally withdraw this embarrassing Anti-vaxxer concoction on these grounds alone.
Don’t forget to archive the two papers for your records: They might disappear from the publishers’ web sites at some point.
But there are six other key points that limit what conclusions can be drawn from this paper:
- They selected genes based on old literature and ignored newer publications.
- The method for PCR quantification is imprecise and cannot be used as an absolute quantification of expression of the selected genes.
- They used inappropriate statistical tests that are more prone to giving significant results, which is possibly why they were selected.
- Their dosing regime for the mice makes assumptions on the development of mice that are not correct.
- They gave the mice far more aluminum sooner than the vaccine schedule exposes children to.
- There are irregularities in both the semi-quantitative RT-PCR and Western blot data that strongly suggest that these images were fabricated. This is probably the most damning thing about the paper. If the data were manipulated and images fabricated, then the paper needs to be retracted and UBC needs to do an investigation into research misconduct by the Shaw lab.
Taken together, we cannot trust Shaw’s work here and if we were the people funding this work, we’d be incredibly ticked off because they just threw away money that could have done some good but was instead wasted frivolously. Maybe there’s a benign explanation for the irregularities that we’ve observed, but until these concerns are addressed this paper cannot be trusted.
I note that they go into even more detail about the problems with the images that have led me (and others) to be suspicious of image manipulation, concluding:
These are some serious concerns that call into question the credibility of this study and can only be addressed by providing full-resolution (300 dpi) scans of the original blots (X-ray films or the original picture files generated by the gel acquisition camera).
There has been a lot of chatter on PubPeer discussing this paper and many duplicated bands and other irregularities have been identified by the users there. If anyone is unsure of how accurate the results are, we strongly suggest looking at what has been identified on PubPeer as it suggests that the results are not entirely accurate and until the original gels and Western blots have been provided, it looks like the results were manufactured in Photoshop.
I agree. Oh, and I agree with their criticism of the use of statistics. I myself brought up their failure to control for multiple comparisons, and, on top of that, Shaw and Tomljenovic used a test that is appropriate for a normal distribution when their data obviously did not follow a normal distribution.
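To illustrate why the multiple-comparisons point matters: when you test many cytokines, each at alpha = 0.05, some will come up “significant” by chance alone. Holm’s step-down procedure is one standard correction; the sketch below uses made-up p-values (not values from the paper) to show how findings that look significant individually can evaporate under even a basic correction.

```python
# Illustrative sketch with made-up p-values; Holm's step-down procedure
# is one standard way to control for multiple comparisons.

def holm_significant(p_values, alpha=0.05):
    """Return booleans marking which hypotheses survive Holm's correction."""
    order = sorted(range(len(p_values)), key=lambda i: p_values[i])
    m = len(p_values)
    significant = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            significant[i] = True
        else:
            break  # step-down: once one comparison fails, all larger p-values fail
    return significant

# Hypothetical p-values for five cytokine comparisons:
pvals = [0.004, 0.03, 0.04, 0.20, 0.51]

# Uncorrected at alpha = 0.05, three of these look "significant"...
print([p < 0.05 for p in pvals])   # prints [True, True, True, False, False]
# ...but only one survives Holm's correction:
print(holm_significant(pvals))     # prints [True, False, False, False, False]
```

This is the mildest of the statistical sins alleged here, but it compounds the others: an imprecise assay, a test that assumes normality, and no correction for the number of comparisons all push in the same direction, toward spurious “significant” findings.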
So, my dear readers, it turns out that Orac, as Insolent as he can be when slapping down bad science by antivaxers, was not nearly Insolent enough in this case. Mea culpa. I should have known better, given Shaw and Tomljenovic’s history. Not only do we have poorly done and analyzed experiments, but we also have self-plagiarism and, quite possibly, scientific fraud. Only releasing the full resolution original images from the original experiments (which are now probably four years old) can put these questions to rest.
Science matters. I hate to see it abused like this, particularly when experimental animals are killed in the service of such awful science.