
Overconfidence as a contributor to science denial among physicians and scientists

The pandemic has given a disturbing level of influence to scientists who have rejected science with respect to COVID-19 public health measures. Recent research suggests reasons why, as well as which members of the public susceptible to such misinformation remain persuadable.

If there’s one thing that the COVID-19 pandemic has brought into sharp relief, it’s something that the vast majority of physicians and scientists either didn’t appreciate or downplayed as unimportant: just how easily people, including highly educated professionals such as physicians and scientists, can be led into believing in conspiracy theories and science denial such as antivaccine pseudoscience. In the slightly less than two and a half years since COVID-19 was declared a pandemic, at this blog alone we’ve written about many examples whom, before the pandemic, we had never heard of, such as Peter McCullough, Michael Yeadon, Robert Malone, Simone Gold, and Geert Vanden Bossche. Others were academics with whom I had not been familiar, such as the Great Barrington Declaration (GBD) authors Martin Kulldorff, Jay Bhattacharya, and Sunetra Gupta, all of whom had been well respected before their pandemic disinformation heel turn, first to COVID-19 contrarianism and then to outright antivaccine disinformation. There were also scientists with whom we were familiar but whose turn to contrarianism and science denial actually shocked me, such as John Ioannidis, and ones whose heel turn with respect to science surprised me less, such as Vinay Prasad. Throughout the pandemic, I was never surprised when scientists and physicians who had been antivaccine prior to the pandemic (e.g., James Lyons-Weiler, Paul Thomas, Sherri Tenpenny, Andrew Wakefield, and Joe Mercola) immediately pivoted to COVID-19 minimization and antivaccine disinformation weaponized against COVID-19 vaccines, but some of the others genuinely puzzled me with how easily they had “turned” into spreaders of misinformation.

Looking over the list of these people and others, one can find a number of commonalities shared by some, but not all, of the contrarians listed above. First, most of them have an affinity for at least libertarian-leaning politics. Second, few of them have any significant expertise in infectious disease or infectious disease epidemiology. One exception is Sunetra Gupta, co-author of the GBD, who is a British infectious disease epidemiologist and a professor of theoretical epidemiology at the Department of Zoology, University of Oxford. She’s a modeller, but early in the pandemic her group’s modeling, which concluded that by March 2020 more than half of the UK population might have been infected with the novel coronavirus and that “natural herd immunity” was therefore within reach, disagreed markedly with the scientific consensus, which might have played a role in her heel turn. (Instead of wondering where she might have gone wrong, Gupta appears to have doubled down.) However, most of the others above have little or no directly relevant experience in infectious disease, infectious disease epidemiology, virology, immunology, vaccine development, or molecular biology. Yet during the pandemic they became famous, or at least Internet influencers, and a number of them (I’m looking at you, Kulldorff and Bhattacharya) have shown up fairly regularly on Fox News to attack public health interventions and mandates targeting the pandemic.

So what is the common denominator? A study published in Science Advances last month suggests at least one reason. Steve Novella wrote about it on his personal blog the week that it was published, but I wanted to add my take and integrate it into a more general discussion of how people—including some who, one would think, should know better based on their education and profession—can so easily fall prey to science denial and conspiracy theories. As Steve noted in his post, for most people other than experts in relevant fields, a very good “first approximation of what is most likely to be true is to understand and follow the consensus of expert scientific opinion.” As he put it, that’s just probability. It doesn’t mean that the experts are always right or that there’s no role for minority—or even fringe—opinions. Rather, as Steve put it:

It mostly means that non-experts need to have an appropriate level of humility, and at least a basic understanding of the depth of knowledge that exists. I always invite people to consider the topic they know the best, and consider the level of knowledge of the average non-expert. Well, you are that non-expert on every other topic.

It is that humility that is lacking in these people (or at least some of them), as suggested by the title of the study: Knowledge overconfidence is associated with anti-consensus views on controversial scientific issues.

Overconfidence versus scientific consensus

I like to think that I’ve consistently emphasized the need for humility in approaching science (if not in dealing with hardcore science deniers). I would add that this need is not limited to science in which one is an expert. For instance, ostensibly I’m an expert in breast cancer, which means that I know an awful lot about the biology of breast cancer and its clinical behavior. However, I am a surgical oncologist, not a medical oncologist; I would not presume to tell my medical oncology colleagues which chemotherapy or targeted therapy drugs they should give my patients after I operate on them, and, I would hope, they don’t presume to tell me who is a surgical candidate, what operation to perform, or how to perform it. Similarly, based on education, experience, and publications, I can be considered an expert in some areas of basic and translational science, such as breast cancer biology related to a certain class of glutamate receptors, tumor angiogenesis, and other areas in which I’ve done research and published.

Finally, after nearly 20 years of studying anti-science misinformation, the antivaccine movement, and conspiracy theories in general, I like to think that I have a certain level of expertise in these areas. That’s why, when COVID-19 first hit, I was careful about making declarations and tended to stick with the scientific consensus unless new evidence suggested that I should do otherwise. However, I did immediately recognize the same sorts of science denial, antivaccine tropes, and conspiracy theories that I had long written about before the pandemic being applied to the pandemic, which is what I wrote about, such as early claims that SARS-CoV-2 is a bioweapon and misuse of the Vaccine Adverse Event Reporting System (VAERS) database. Many of us had predicted the last of these before COVID-19 vaccines received emergency use authorizations (EUAs) from the FDA, and I wrote about it within weeks of the rollout of the vaccines, as antivaxxers were already doing what they had long done with respect to vaccines and autism, miscarriages, and sudden infant death syndrome (SIDS). I recount these incidents mainly to demonstrate how, if you don’t know the sorts of conspiracy theories that have long spread among antivaxxers, you wouldn’t know that everything old is new again and that antivaxxers tend simply to recycle and repurpose old conspiracy theories for new vaccines, like COVID-19 vaccines. If many of the people above, most of whom self-righteously proclaim themselves “provaccine,” had known these tropes beforehand, I like to think that some of them might not have turned antivax.

Let’s move on to the study, which comes from investigators from Portland State University, the University of Colorado, Brown University, and the University of Kansas. From the title of the study, you might think that this is primarily about the Dunning-Kruger effect, a cognitive bias well known among skeptics in which people wrongly overestimate their knowledge or ability in a specific area. While there are criticisms of the Dunning-Kruger model, it has generally held up pretty well as one potential explanation of how people come to believe anti-consensus views.

In this particular study, the authors make one important point before describing their methods and results:

Opposition to the scientific consensus has often been attributed to nonexperts’ lack of knowledge, an idea referred to as the “deficit model” (7, 8). According to this view, people lack specific scientific knowledge, allowing attitudes from lay theories, rumors, or uninformed peers to predominate. If only people knew the facts, the deficit model posits, then they would be able to arrive at beliefs more consistent with the science. Proponents of the deficit model attempt to change attitudes through educational interventions and cite survey evidence that typically finds a moderate relation between science literacy and pro-consensus views (9–11). However, education-based interventions to bring the public in line with the scientific consensus have shown little efficacy, casting doubt on the value of the deficit model (12–14). This has led to a broadening of psychological theories that emphasize factors beyond individual knowledge. One such theory, “cultural cognition,” posits that people’s beliefs are shaped more by their cultural values or affiliations, which lead them to selectively take in and interpret information in a way that conforms to their worldviews (15–17). Evidence in support of the cultural cognition model is compelling, but other findings suggest that knowledge is still relevant. Higher levels of education, science literacy, and numeracy have been found to be associated with more polarization between groups on controversial and scientific topics (18–21). Some have suggested that better reasoning ability makes it easier for individuals to deduce their way to the conclusions they already value [(19) but see (22)]. Others have found that scientific knowledge and ideology contribute separately to attitudes (23, 24).

Recently, evidence has emerged, suggesting a potentially important revision to models of the relationship between knowledge and anti-science attitudes: Those with the most extreme anti-consensus views may be the least likely to apprehend the gaps in their knowledge.

This is, as Steve described it, a “super Dunning-Kruger.” The authors, however, describe existing research and what they think that their research contributes to general knowledge thusly:

These findings suggest that knowledge may be related to pro-science attitudes but that subjective knowledge—individuals’ assessments of their own knowledge—may track anti-science attitudes. This is a concern if high subjective knowledge is an impediment to individuals’ openness to new information (30). Mismatches between what individuals actually know (“objective knowledge”) and subjective knowledge are not uncommon (31). People tend to be bad at evaluating how much they know, thinking they understand even simple objects much better than they actually do (32). This is why self-reported understanding decreases after people try to generate mechanistic explanations, and why novices are poorer judges of their talents than experts (33, 34). Here, we explore such knowledge miscalibration as it relates to degree of disagreement with scientific consensus, finding that increasing opposition to the consensus is associated with higher levels of knowledge confidence for several scientific issues but lower levels of actual knowledge. These relationships are correlational, and they should not be interpreted as support for any one theory or model of anti-scientific attitudes. Attitudes like these are most likely driven by a complex interaction of factors, including objective and self-perceived knowledge, as well as community influences. We speculate on some of these mechanisms in the general discussion.

The authors do this through five studies estimating opposition to the scientific consensus, as well as objective knowledge and subjective knowledge, on these topics:

In studies 1 to 3, we examine seven controversial issues on which there is a substantial scientific consensus: climate change, GM foods, vaccination, nuclear power, homeopathic medicine, evolution, and the Big Bang theory. In studies 4 and 5, we examine attitudes concerning COVID-19. Second, we provide evidence that subjective knowledge of science is meaningfully associated with behavior. When the uninformed claim they understand an issue, it is not just cheap talk, and they are not imagining a set of “alternative facts.” We show that they are willing to bet on their ability to perform well on a test of their knowledge (study 3).

The key part of the study is portrayed in Figure 1 of the paper, which plots the overall relationship between opposition to the scientific consensus and both subjective and objective knowledge.

Because there could be differences based on topic, the authors then broke their results down by individual contentious area:

Fig. 2. The relationship between opposition and subjective and objective knowledge for each of the seven scientific issues, with 95% confidence bands. In general, opposition is positively associated with subjective knowledge and negatively associated with objective knowledge, but not for all issues.

The authors note that the relationship between opposition to the scientific consensus and objective knowledge was negative and significant for all issues other than climate change, while the relationship between opposition and subjective knowledge was positive for all issues other than climate change, the Big Bang, and evolution, for which the relationship failed to achieve statistical significance.

The authors also noted one interesting effect, specifically that of political polarization, reporting that, for more politically polarized issues, the relation between opposition and objective knowledge is less negative than for less polarized issues and that the relation between opposition and subjective knowledge is less positive. One way to interpret this sort of result is that advocates who take on an anti-consensus view go out of their way to learn background information about the subject that they oppose. In this case, however, knowledge doesn’t lead to understanding, but rather to more skilled motivated reasoning; i.e., picking and choosing information and data that support one’s preexisting beliefs.

Another issue is that study subjects with different levels of opposition to a scientific consensus might interpret their subjective knowledge differently. The authors note that “opponents may claim that they understand an issue but acknowledge that their understanding does not reflect the same facts as the scientific community” and that this “could explain the disconnect between their subjective knowledge rating and their ability to answer questions based on accepted scientific facts.” To test this, the authors developed a measure of knowledge confidence designed to remove this ambiguity by incentivizing participants to report their genuine beliefs. Specifically, subjects were given an opportunity to earn a bonus payment by betting on their ability to score above average on the objective knowledge questions or to take a smaller guaranteed payout, with betting indicating greater knowledge confidence.

The results were as the authors had predicted:

As opposition to the consensus increased, participants were more likely to bet but less likely to score above average on the objective knowledge questions, confirming our predictions. As a consequence, more extreme opponents earned less. Regression analysis revealed that there was a $0.03 reduction in overall pay with each one-unit increase in opposition [t(1169) = −8.47, P< 0.001]. We also replicated the effect that more opposition to the consensus is associated with higher subjective knowledge [βopposition = 1.81, t(1171) = 7.18, P < 0.001] and lower objective knowledge [both overall science literacy and the subscales; overall science literacy model βopposition = −1.36, t(1111.6) = −16.28, P < 0.001; subscales model βopposition = −0.19, t(1171) = −10.38, P < 0.001]. Last, participants who chose to bet were significantly more opposed than nonbetters [βbet = 0.24, t(1168.7) = 2.09, P = 0.04], and betting was significantly correlated with subjective knowledge [correlation coefficient (r) = 0.28, P < 0.001], as we would expect if they are related measures. All effects were also significant when excluding people fully in line with the consensus (see the Supplementary Materials for analysis).
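To make the reported pattern concrete, here is a toy simulation; the data are entirely synthetic and the coefficients are invented merely to mirror the signs of the study’s findings (including a roughly $0.03 drop in pay per unit of opposition), so this is an illustration of the analysis shape, not the authors’ actual data or code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1200  # on the order of the study's sample for the betting analysis

# Hypothetical 7-point opposition scale (1 = fully accepts the consensus)
opposition = rng.integers(1, 8, size=n).astype(float)

# Toy model mirroring the reported pattern: higher opposition tracks
# higher subjective (self-assessed) knowledge but lower objective knowledge
subjective = 0.5 * opposition + rng.normal(0, 1.0, n)
objective = -0.4 * opposition + rng.normal(0, 1.0, n)

# Earnings fall with opposition because overconfident opponents bet and lose;
# the -0.03 slope echoes the study's reported $0.03-per-unit pay reduction
earnings = 1.0 - 0.03 * opposition + rng.normal(0, 0.05, n)

# Ordinary least squares recovers the negative pay-vs-opposition slope
slope, intercept = np.polyfit(opposition, earnings, 1)
print(f"estimated slope: {slope:.3f}")
```

On data of this shape, a simple regression of pay on opposition recovers a slope near -0.03, which is the point: the pattern the authors describe is detectable with the most basic of linear models once the data exist.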

Finally, the authors applied similar methods to the question of whether or not study subjects would take the COVID-19 vaccine (study #4) and to attitudes towards COVID-19 public health interventions (study #5). Study #4 replicated the results of the previous studies. There was an interesting curveball in study #5, though: questions on how much study participants thought that scientists know about COVID-19. The results were telling:

To validate the main finding, we split the sample into those who rated their own knowledge higher than scientists’ knowledge (28% of the sample) and those who did not. This dichotomous variable was also highly predictive of responses: Those who rated their own knowledge higher than scientists’ were more opposed to virus mitigation policies [M = 3.66 versus M = 2.66, t(692) = −12, P < 0.001, d = 1.01] and more noncompliant with recommended COVID-mitigating behaviors [M = 3.05 versus M = 2.39, t(692) = −9.08, P < 0.001, d = 0.72] while scoring lower on the objective knowledge measure [M = 0.57 versus M = 0.67, t(692) = 7.74, P < 0.001, d = 0.65]. For robustness, we replicated these patterns in identical models controlling for political identity and in models using a subset scale of the objective knowledge questions that conservatives were not more likely to answer incorrectly. All effects remained significant. Together, these results speak against the possibility that the relation between policy attitudes and objective knowledge on COVID is completely explained by political ideology (see the Supplementary Materials for all political analyses).

This could also suggest why certain individuals who self-identified as liberal or progressive have fallen for COVID-19 contrarianism, and, yes, it involves overconfidence:

Results from five studies show that the people who disagree most with the scientific consensus know less about the relevant issues, but they think they know more. These results suggest that this phenomenon is fairly general, although the relationships were weaker for some more polarized issues, particularly climate change. It is important to note that we document larger mismatches between subjective and objective knowledge among participants who are more opposed to the scientific consensus. Thus, although broadly consistent with the Dunning-Kruger effect and other research on knowledge miscalibration, our findings represent a pattern of relationships that goes beyond overconfidence among the least knowledgeable. However, the data are correlational, and the normal caveats apply.

Before I move on to the general public, I can’t help but wonder if these results have particular relevance in suggesting how scientists and physicians who should know better could come to hold anti-consensus views so strongly. For example, let’s look at the cases of John Ioannidis and Vinay Prasad. Ioannidis made his name as a “meta-scientist,” or critic of science. His publication record documents deficiencies in the scientific evidence across a broad range of subject areas, from the effect of nutrition on cancer to, well, pretty much all clinical science. Not long after the pandemic hit, when he started abusing science to attack scientists holding consensus views about COVID-19 as “science Kardashians” and reacting very badly to criticism of his attacks, I started wondering what the heck had happened to him. In truth, I had already become uncomfortable with his overconfidence and his apparent attitude that he was more knowledgeable than everyone about everything, as evidenced by his arguing that the NIH rewards “conformity” and “mediocrity” when awarding research grants. I would speculate—perhaps even argue—that the overconfidence in one’s own knowledge described in this study had already infected Ioannidis before the pandemic.

Similarly, before the pandemic Prasad had made his name criticizing the evidence base for oncology interventions. (He is an academic medical oncologist.) In particular, he was known for documenting what he referred to as “medical reversals,” in which an existing medical practice is shown to be no better than a “lesser” therapy. Both Steve and I discussed this concept at the time, and both of us agreed that, while Prasad’s work produced some useful observations, it also missed a fair amount of nuance. By just before the pandemic, Prasad had taken to criticizing those of us who were combatting pseudoscience in medicine as, in essence, wasting our medical skills, denigrating such activities as being beneath him, like a pro basketball star “dunking on a 7′ hoop.” As with Ioannidis, I would speculate—perhaps even argue—that the overconfidence in one’s own knowledge described in this study had already infected Prasad before the pandemic. One could easily speculate that all of the “COVID contrarian doctors” who went anti-consensus—some of whom even became antivaccine—likely shared this overconfidence before the pandemic.

One thing this study doesn’t address, though, is something I’ve been wondering about: the role of social affirmation in reinforcing anti-consensus views and even radicalizing such “contrarian doctors” further into anti-science views. Vinay Prasad and many of the other physicians and scientists I listed have large social media presences and legions of adoring fans. Ioannidis, although he frequently humblebrags about not being on social media, does receive lots of invitations to be interviewed by more conventional media from all over the world, thanks both to his pre-pandemic fame as one of the most published living scientists and to his COVID-19 contrarian views. Then, for many of these doctors, there are the financial rewards. A number of them have Substacks in which they monetize their contrarian and science-denying views.

These would be good topics for other studies. In the meantime, let’s move on to the general public that consumes—and is influenced by—their misinformation.

Who is persuadable?

The Science Advances study notes something that skeptics have known for a long time. It isn’t (just) lack of information that drives science denial. It’s more than that, which is why, in and of itself, trying to drive out bad information with good information generally doesn’t work very well:

The findings from these five studies have several important implications for science communicators and policymakers. Given that the most extreme opponents of the scientific consensus tend to be those who are most overconfident in their knowledge, fact-based educational interventions are less likely to be effective for this audience. For instance, The Ad Council conducted one of the largest public education campaigns in history in an effort to convince people to get the COVID-19 vaccine (43). If individuals who hold strong antivaccine beliefs already think that they know all there is to know about vaccination and COVID-19, then the campaign is unlikely to persuade them.

Instead of interventions focused on objective knowledge alone, these findings suggest that focusing on changing individuals’ perceptions of their own knowledge may be a helpful first step. The challenge then becomes finding appropriate ways to convince anti-consensus individuals that they are not as knowledgeable as they think they are.

This is not an easy task. I frequently point out that hard core antivaxxers, for instance, are generally not persuadable. Indeed, I liken their antivaccine views—or other anti-consensus views—to religion or political affiliation, beliefs intrinsic to their identities and every bit as difficult to change as religion or political orientation. In other words, it’s possible to change such views, but the effort required and failure rate are both too high to make these people a target of science communication. As a corollary to this principle, I’ve long argued that it is the fence sitters who are the most likely to have their minds changed, or to be “inoculated” (if you will) against misinformation.

Last week, a study from authors at the Santa Fe Institute was published in the same journal suggesting that I should be rather humble myself, as I might not be entirely on the right track: people do tend to shape their beliefs according to their social networks and existing moral belief systems. The authors note, as I frequently do:

Skepticism toward childhood vaccines and genetically modified food has grown despite scientific evidence of their safety. Beliefs about scientific issues are difficult to change because they are entrenched within many interrelated moral concerns and beliefs about what others think.

Again, the reason many people gravitate to anti-consensus views with respect to specific scientific conclusions (e.g., vaccine safety and effectiveness) involves social and moral beliefs and concerns and personal ideology. Information alone, although a necessary precondition to changing minds, is usually not sufficient to change them. I would also argue that it’s equally important to identify whose minds are changeable with achievable effort, which is why the authors sought to determine who is most susceptible to belief change.

To boil this research down, the authors looked at two scientific issues, vaccines and genetically modified organisms (GMOs), using this rationale:

In this paper, we consider attitudes toward GM food and childhood vaccines as networks of connected beliefs (7–9). Inspired by statistical physics, we are able to precisely estimate the strength and direction of the belief network’s ties (i.e., connections), as well as the network’s overall interdependence and dissonance. We then use this cognitive network model to predict belief change. Using data from a longitudinal nationally representative study with an educational intervention, we test whether our measure of belief network dissonance can explain under which circumstances individuals are more likely to change their beliefs over time. We also explore how our cognitive model can shed light on the dynamic nature of dissonance reduction that leads to belief change. By combining a unifying predictive model with a longitudinal dataset, we expand upon the strengths of earlier investigations into science communication and belief change dynamics, as we describe in the next paragraphs.

In brief, the investigators constructed a cognitive belief network model to predict how the beliefs of a group of almost 1,000 people who were at least somewhat skeptical about genetically modified foods and childhood vaccines would change as the result of an educational intervention. They used a nationally representative longitudinal study of beliefs about GM food and vaccines carried out at four different times over three waves of data collection (once in the first and third waves and twice in the second wave, before and after an intervention). During the second wave, the authors presented subjects with an educational intervention on the safety of GM food and vaccines, quoting reports from the National Academies of Sciences. Participants were divided into five experimental groups for the GM food study and four experimental groups for the study of childhood vaccines, with one control condition in each study in which participants did not receive any intervention. All experimental conditions received the same scientific message about safety, but with different framing. The results were then analyzed using the cognitive network model to see how beliefs changed in response to the educational intervention.

To sum it up, those who had a lot of dissonance in their interwoven network of beliefs were more likely to change their beliefs after viewing the messaging, although not necessarily in the direction the message intended. Overall, the authors found that people are driven to lower their cognitive dissonance through belief change. As the authors put it in an interview for their press release, commenting on how people with little dissonance showed little change in their beliefs after the intervention while those with more dissonance were more likely to show more change:

“For example, if you believe that scientists are inherently trustworthy, but your family and friends tell you that vaccines are unsafe, this is going to create some dissonance in your mind,” van der Does says. “We found that if you were already kind of anti-GM foods or vaccines to begin with, you would just move more towards that direction when presented with new information even if that wasn’t the intention of the intervention.”

What this study suggests is that targeting such people with science communication can be a two-edged sword in that people with high dissonance will try to reduce their dissonance. Unfortunately, reducing that dissonance might not go in the direction that you think it will. They might reduce their dissonance by moving towards accepting scientific consensus with respect to scientific issues that are contentious among the public, such as vaccines and GM foods, or they might move further into conspiracy:

All in all, we found that network dissonance, belief change, and interdependence relate to each other over time, in line with our model assumptions. Interventions aimed at changing people’s beliefs led to a reconfiguration of beliefs that allowed people to move to lower network dissonance states and more consistent belief networks. However, these reconfigurations were not always in line with the objective of the intervention and sometimes even reflected a backlash.


Individuals are motivated to reduce the dissonance between beliefs and reconfigure their beliefs to allow for lower dissonance. Such a reconfiguration can be, but is not necessarily, in line with the aim of the intervention. The direction in which individuals change their beliefs depends not only on the intervention but also on the easiest way for individuals to reduce their dissonance. This finding also goes beyond the classic finding that inducing dissonance leads to belief change (41–43) by showing that providing individuals with new information interacts with dissonances in their belief network. Individuals with low dissonance are unlikely to change at all, whereas individuals with high dissonance can change in both directions.
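This dynamic can be illustrated with a minimal, Ising-style sketch of a belief network, in the spirit of the statistical-physics framing the authors describe. The nodes, edge weights, and belief values below are invented purely for illustration; the study’s actual model is far richer:

```python
# Hypothetical three-node belief network. Beliefs are +1 (endorse) or
# -1 (reject); positive edge weights mean two beliefs "want" to agree,
# negative weights mean they conflict.
weights = {
    ("trust_scientists", "vaccines_safe"): 1.0,   # trusting science supports the safety belief
    ("peers_oppose", "vaccines_safe"): -1.0,      # peer opposition conflicts with the safety belief
    ("trust_scientists", "peers_oppose"): -0.5,   # trusting science conflicts with deferring to skeptical peers
}

def dissonance(beliefs):
    """Ising-style energy: each satisfied edge lowers dissonance,
    each violated edge raises it."""
    return -sum(w * beliefs[a] * beliefs[b] for (a, b), w in weights.items())

# A conflicted state: trusts scientists and accepts vaccine safety,
# but one's peers oppose vaccines
conflicted = {"trust_scientists": 1, "vaccines_safe": 1, "peers_oppose": 1}

# Two ways to reduce dissonance: discount the peers' opposition...
toward_consensus = dict(conflicted, peers_oppose=-1)
# ...or abandon both trust in scientists and the safety belief
away_from_consensus = dict(conflicted, trust_scientists=-1, vaccines_safe=-1)

print(dissonance(conflicted))           # high-dissonance starting point
print(dissonance(toward_consensus))     # lower dissonance, pro-consensus
print(dissonance(away_from_consensus))  # equally low dissonance, anti-consensus
```

In this toy network, both reconfigurations land at the same, lower dissonance, which is exactly the paper’s point: perturbing a high-dissonance belief network with new information cannot by itself guarantee which direction the resolution takes.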

So the real question, if this research holds up, is: How do we identify people with a high degree of dissonance who won’t respond to a pro-science message by moving deeper into antiscience beliefs and conspiracy theories? More importantly, how do we craft messages that make it more likely that these people will adjust their anti-consensus beliefs in the direction that we want, rather than doubling down? Again, it is clear that those who are the most certain and have the least dissonance are the least likely to change their views in response to science communication.

Putting it all together

How does one put the results of these two studies together and combine them with what is already known? From my perspective, I see a couple of very important takeaway points. First, as Steve has said, humility is key. One is tempted to quote Richard Feynman, who famously said, “The first principle is that you must not fool yourself and you are the easiest person to fool.” My colleagues who “go crank” almost all have forgotten Feynman’s warning.

Indeed, those who follow my Twitter feed might have noticed me saying things like this lately:

I won’t name those skeptics here, but the warning above about overconfidence applies especially to people like John Ioannidis, who seems to think that, having spent over a quarter century documenting problems with biomedical research, he is now immune to the same errors in thinking that lead others astray. Then there’s Vinay Prasad, who is so brimming with overconfidence that he not only doesn’t think that such errors in thinking are important enough to trouble his massively brilliant (and overconfident) brain with, but has expressed outright contempt towards those of us who do and who act upon that belief by combatting medical misinformation. It is not difficult to see how two men who built their academic careers on “meta-science” and, in essence, commenting on deficiencies in science, might come to think that they know better than the scientists with relevant topic-specific expertise, rather than more general expertise about scientific methodology.

Ideology also plays a role. Martin Kulldorff is an excellent example of this. He almost certainly had preexisting libertarian views that regarded government-led collective action as anathema. I say this based on how easily Jeffrey Tucker of the “free market” think tank American Institute for Economic Research enticed him to come to its headquarters, where, having found his people, Kulldorff wanted to hold the conference at which the GBD was birthed more urgently than even Tucker did. Ever since then, Kulldorff, along with his fellow GBD author Jay Bhattacharya, has been slipping further and further from science and deeper and deeper into conspiracy theory and antivaccine advocacy.

As for the rest, most of them weren’t academics, and those who were tended to be either adjunct clinical faculty or already flirting with crankdom. It isn’t hard to believe that what drew them into COVID-19 contrarianism was the ego boost from the attention that they got for promoting views that went against the mainstream. I’ll reiterate right here that there is nothing inherently wrong about challenging the mainstream. What matters is that you have the goods—the evidence and/or logic—to back up that challenge. That’s what differentiates legitimate scientists expressing nonmainstream views from the cranks.

Then, of course, the role of grift cannot be overemphasized. As I like to say, ideology is key, but it’s also often about the grift. Just look at America’s Frontline Doctors if you don’t believe me.

As for reaching the public, what these studies suggest is that science communication is complex and far more difficult than most people appreciate. While it’s undeniably true that the more certain an antivaxxer is, the less likely anyone will be able to change their mind, predicting who will be persuadable, who won’t react to science communication by moving deeper into conspiracy, and crafting messages to minimize that possibility are all very difficult. The more I learn about this area, the less confident I feel that I understand it well. The key, however, is being willing to change one’s views and approach based on new evidence. Applying the findings from these two studies, and from many more that not infrequently conflict with each other, is the biggest challenge for science communicators. It doesn’t help that we’re in the middle of a pandemic that has resulted in the spread of antiscience disinformation and conspiracy theories, much of it not spreading innocently but intentionally promoted by actors with agendas, which only makes the problem worse.

I don’t mean to finish on a low note. The situation is not hopeless. In fact, it is research like this that is most likely to provide public health officials and science communicators with the tools to counter antiscience messages.

By Orac

Orac is the nom de blog of a humble surgeon/scientist who has an ego just big enough to delude himself that someone, somewhere might actually give a rodent's posterior about his copious verbal meanderings, but just barely small enough to admit to himself that few probably will. That surgeon is otherwise known as David Gorski.

That this particular surgeon has chosen his nom de blog based on a rather cranky and arrogant computer shaped like a clear box of blinking lights that he originally encountered when he became a fan of a 35 year old British SF television show whose special effects were renowned for their BBC/Doctor Who-style low budget look, but whose stories nonetheless resulted in some of the best, most innovative science fiction ever televised, should tell you nearly all that you need to know about Orac. (That, and the length of the preceding sentence.)

DISCLAIMER: The various written meanderings here are the opinions of Orac and Orac alone, written on his own time. They should never be construed as representing the opinions of any other person or entity, especially Orac's cancer center, department of surgery, medical school, or university. Also note that Orac is nonpartisan; he is more than willing to criticize the statements of anyone, regardless of political leanings, if that anyone advocates pseudoscience or quackery. Finally, medical commentary is not to be construed in any way as medical advice.

To contact Orac: [email protected]

74 replies on “Overconfidence as a contributor to science denial among physicians and scientists”

I’m not sure how to phrase this but bear with me:
wouldn’t what we already know about the personality differences seen in CT believers- including anti-vaxxers- be relevant in predicting who is still reachable or not? Researchers like Hornsey and Douglas have identified characteristics such as not accepting hierarchies of expertise, having narcissistic or paranoid leanings, and being concerned more with personal outcomes than with those that affect others.

Amongst the trolls who visit RI, it seems that many have trouble evaluating self and others’ abilities, seeing themselves as equals or superior to experts: insulting Dr Fauci, calling Orac names. Social cognitive abilities along these lines usually develop alongside formal operational thought during adolescence, but not everyone reaches adult levels. Similarly, development in executive function re knowledge about one's own skills and their usage occurs then too.

People get mad when Joel insults trolls, but I think that he (and also Bacon, Narad, etc.) are seeing relevant qualities expressed by those commenters.

The finding above about scoffers who assiduously seek out confirming ( CT) sources is telling: they work really hard at this if you read their writing.

I see a serious theoretical problem there, paradigmatic of how IMHO “social science” gets this stuff wrong. Lately Orac has observed that the “information deficit model” of science denial/resistance just doesn’t account for what we observe, yet too many conceptual frameworks are stuck in it. I.E. the idea that trolls lack the cognitive ability to evaluate expertise, they don’t know what their skills are. IOW, they don’t have accurate information about abilities in their conscious minds, so they err.

I don’t see any way someone could arrive at that hypothesis outside of an absurd confirmation bias (based on their own ideas about their own supposed ‘rationality’) and naivete about human psychology. IOW, narcissism, e.g., is not an information deficit problem. A far more probable hypothesis (and one I believe you’ve often floated yourself) is that trolls are acutely aware at some level that experts know more than they do, which makes them feel small and weak, which leads them to project a false assessment of their own capacities as compensation. IOW, it’s an expression of lizard-brain/sub-conscious/emotions into the realm of cognition and facts.

This all makes me think of Foucault’s declaration (going beyond the old “knowledge is power” adage) that power and knowledge are functionally the same thing, fungible in either direction. Why do COVID trolls scoff at Fauci’s knowledge? Because they understand him as having power over them. Feeling disempowered, then, many will intuitively react and resist by professing knowledge equal to or greater than that of the experts who personify for them the socio-political power of the establishment.


I wrote a looonngg comment on my problems with the studies covered in the OP under the earlier version published last week at SBM. I’ll not copy it here, but just drop a link:

Let me elaborate:
I believe that there are several classes of trolls/ alties we are dealing with both here and at woo sites/ social media:
— pure contrarians who oppose whatever the consensus / expert view says. To themselves, they may admit that Orac and other SBM supporters know more than they do about a topic although they will not admit it out loud
— true believers who follow altie/ woo ideas and believe themselves ahead of standard science. This may be a defensive position to save face and lift self esteem- e.g. anti-vax mothers of severely autistic kids
— political opponents who oppose any sort of authority such as expert opinion or governmental regulation so they scoff at them all

People can be notoriously bad at self-evaluation ( metacognition) when they have emotional issues. Political advocates, salesmen and other manipulators know how to exploit this tendency to their advantage. If you paint experts as monsters, you may discourage people from listening to them or seeing their contributions as valuable: note what has been done to Drs Fauci, Offit and Hotez- shameful!

-btw- sadmar, I was in your town 10 days ago: I waved hello

An additional note: the information-deficit / poor-metacognition models are based on the state of the individual. They assume that if people scoff at experts, they must think they are smart and informed. But these are social phenomena. The scoffing reflects how these people are representing themselves to others, how they want to be seen. This is not conscious strategy in most cases, more like instinct. So we can say that among the people they are trying to present a front of knowledge to are themselves.

All three categories you noted strike me as, in part anyway, doing the same thing: attempting to perform power by declaring “you can’t make me agree, much less behave.”

Among other things, this explains why no amount of debunking has any effect on our resident trolls. The exchanges aren’t about “truth” but about authority. Every critique they receive gives them an opportunity to perform invulnerability to criticism, by replying with continued defiance.

@rs: Funny enough, I just made the exact same “Terrible Twos” connection myself down below. You are literally correct. Toddlers are narcissists: at 2 years old, their entire universe consists of Self and Servant.

Most young children grow beyond it and develop some sort of empathy and healthy theory of mind. Narcissists do not, and never will. Their early nurture has instead hardwired that model into them for life. It is all they are. It is all they have.

I won’t call such nurture nor its product “maladapted” though, because while it is a maladaptation from a societal perspective, the narcissist herself is not “abnormal” or “ill”. She is perfectly 100% normal for Her.

It is just that her “her” is a creature that is something distinctly different to the rest of us; an animal that is mostly but not fully human.

She is, and has, in Herself everything she could ever need, with just the one exception: More.

In practice, her not-quite-humanness may express as a social disadvantage, especially if she is low-functioning, lacking the impulse control to keep her most disruptive, destructive behaviors safely out of full public view. But it can also be a serious survival advantage for an individual. As in psychopathy, narcissism’s [almost not-quite identical] twin, being without wired-in social limitations, such as the emotional and intellectual unwillingness/distaste/pain at harming others which the rest of us feel, can be highly empowering.

Lacking such conventional constraints, these empathy-less individuals are free to exploit certain individual survival strategies, strategies most of us would never dream of employing due to their high, horrific costs to other individuals and to society overall. They are free to game societal systems. They are free to game each one of us. And they will do so, the instant the benefit to them of doing so exceeds the risk to them.

A vampire is not maladapted. It simply sees us as its food. And it is not wrong. The only risk it faces is, that if its behavior is widely exposed and condemned, at some point it may find itself staked through the heart.

(And even that assumes it is a high-function intelligent reasoning creature, not just a low-functioning animal wholly driven by instinct alone.)

I suspect I start to relate to these most toxic of Type 2 Cluster B Personality Disorders, because while I might not be personality disordered myself (although the first doctor to diagnose me did wonder if I was schizoid at first) I have enjoyed several decades of severe, incurable mental illness to twist me up inside. Exceedingly nasty. Thus, while I certainly wasn’t short of empathy in later childhood, and don’t think I am without it now, I have “learned” to ration it, hard, as a survival technique. Feeling too much of other people’s pain is 100% guaranteed to tip me into a major depressive spiral, from where I’m no bloody use to myself nor anyone else, being dead. So if I have to play the part of an elective psychopath to survive long enough to make that empathy actually count for something positive, then so it goes.

If I “temporarily turn off” my own empathetic connection towards you or sadmar or Denice or our lovely host, I will feel gross and revolting afterward.

When I turn off that same empathy toward the Four Horsemen of the Asspocalypse here, I feel nothing at all.

Except a gleeful pleasure that the black evil transgressive bits of me are allowed to run off the leash, completely guilt-free, at least until they’ve filled up and I have to chain them and get back to doing something valuable again.

(And yeah, this is unhealthy for me: I just suicide by tiny degrees, instead of dramatically all at once. Won’t bother me, I only need to buy a few more years. Dexter-ing chuds for blood sport is just an ugly release valve in the meantime.)

So when I say, “throw out all your tested, working metacognition models” upon encountering NPDs (and APDs), I mean it. Those models work brilliantly for predicting you and your fellow humans. They don’t work for predator vampires. Mis-applying those same models to non-humans is deadly dangerous—to you. You will get eaten, doing that.

These animals’ thinking is fundamentally different to ours. You need to construct brand new models from scratch, just for them; models uncolored by all your own past preconceptions, prejudices, and that original image of your own Self which, having been your original Model #0, the very first you model began to construct in your early years, has inevitably become the founding template for all your other models to follow.

A psychopath may know he’s a psychopath. (Intellectually if he’s high-functioning; instinctively if low.) He just doesn’t care is all.

A narcissist does not know she is a narcissist. Furthermore she cannot know it. The disorder itself makes her incapable of recognizing it.

That’s why I tell everyone that lecturing the Four Horsemen here is 100% wasting absolutely everybody’s lives except for theirs. Because all four tick the boxes for full-blown narcissism. You feed them, receiving nothing in return. You strengthen them by sacrificing your self, on their altar.

A narcissist is incapable of change.

A narcissist is incapable of error.

A narcissist is incapable of “trying to see things from your perspective”—because there is no you. You don’t exist, outside of being a potential source of narc food. There is only Self, and the insatiable ravening appetite of Self.

Once you understand what this new, alien thing you are now dealing with actually is, you can start to learn how to deal with it effectively (and ineffectively), and select strategies which work.

But as long as all you humans here permit your own Ego to rewrite your own perception of what’s standing right in front of you, staring you in the face, to make it look more like… well, you, then the only thing you have is them right where they want you.


@sadmar: “Every critique they receive gives them an opportunity to perform invulnerability to criticism, by replying with continued defiance.”

Yup. Narcissistic power, to be precise. The raw pure bounty of making you drive your own self to insanity, while expending no energy of theirs.

Your life, gushing down their throat. Idiotic. You might as well shove a bamboo trochar in your neck and cry “Fill yourselves up!” At least that will be both quick and final, after which they will be forced to hunt elsewhere.

Eh, not my funeral. Why should I care.

@ sadmar:

I agree: most of these trolls are unreachable: if they are involved enough to actually seek out SBM supporters to argue with or read articles. I imagine some of Orac’s frequent flyers define themselves as being ‘contrary’ or ‘ahead of science’. You should read some of the autism anti-vax mothers ( via AoA, CHD or even here).

When we read about alties’ conversion to SBM, usually something happened that enlightens them ( see Dr Laidler, Britt Hermes). I don’t think that these are average alties either.

Instead, consider people who have been influenced by altie BS but perhaps are NOT so involved and do not have personality issues: those who read an intriguing article or saw something on social media and started to doubt what they see on the news. It means that they still are somewhat grounded in reality.

“Amongst the trolls who visit RI, it seems that many have trouble evaluating self and others’ abilities”

As I noted further down:

The narcissist is incapable of recognizing she is a narcissist. The disorder itself ensures it.

Accepting this simple inviolate reality is your key first step towards not peeling all of the paint off your own damn walls. Embarrassingly, I’ve forgotten the reference for this observation, but I expect I’ll run back over it again some other time so will try to remember to hang onto it when I do.

I should add:
A long time ago, Dr Barrett of Quackwatch discussed why medical professionals (usually in a clinical setting) go woo. They may become disillusioned about their effectiveness in treating serious illness, especially if they are in a position without much authority, such as assistants, paraprofessionals, et al.
So I imagine that experts in adjacent areas might feel similarly: a cardiologist or veterinarian takes on infectious diseases and vaccines.

Certainly one of the trends in contrarian scientists is a form of narcissistic behaviour. These people believe that science is not giving them the adulation that is due to them. When they head over to the crankosphere, they get all the adulation they need.

“Certainly one of the trends in contrarian scientists is a form of narcissistic behaviour.”

+100%. Except, there’s no “a form of”. It is narcissistic behaviour, period. And dollars to donuts, the reason for that is simple: They are narcissists; either full-blown clinically diagnosable NPD (not that they would ever seek a diagnosis) or well up on that spectrum.

Here’s a quote via spoof @VpwndF, which is an absolute corker about Anthony Fauci as diagnosed by The Prasad himself:

Narcissists and psychopaths attacking their critics as “narcissists” and “psychopaths” is absolutely SOP for narcissists and psychopaths.

Like Denice, IANAPsych, but I’ve been reading around long enough to recognize a lot of the “tells” when they do come up. Not only are narcs 5% of the general population, they cluster where there is food. The Four Horsemen of the Asspocalypse here, for example, all tick a pile of boxes. As do Prasad, Raoult, and a bunch of other grandiose douches. And, of course, The Donald is globally famous as the Emperor amongst Emperors.

It is also something of a truism (and possibly backed by hard research, though I haven’t looked) that education and medicine are two of the professions especially coveted by [more intelligent] narcissists, precisely because these are 1. very high-status positions with 2. fabulously rich and endless supplies of highly vulnerable people who are at the total mercy of their practitioners. These careers are the ultimate all-you-can-eat buffet for the narcissistic abuser (and all NPDs are, by their nature, obligate abusers and bottomless pits, incapable ever of achieving satiation), so you can and should expect to find a rich supply of narcissists in frontline medicine, because that is where the richest narc food supplies are to be found.

What bothers me is how so few MOPs (members of public) seem aware or able to recognize Narcissists for exactly what they are. Everyone knows about “psychopaths”—the Gacys, the Lecters; they’re all over popular culture and the frontpage news. Narcs, if they are noticed at all, are most commonly played for the LOLs. Funny, rite? Now go spend a day on r/JustNoMIL and r/raisedbynarcissists, you won’t be laughing. You will want to vomit. And then rip their spines out.

Narcs are essentially born addicts, made inside the first 5 years of life primarily by nurture, neurally-wired into eternal toddlerhood; rats behaviorally locked into forever pushing that damn button to get yet more food. Those behaviors are regular and predictable though, once you understand how they work, which makes them controllable/containable, to a degree. You won’t—can’t—stop them predating other people; you might as well ask Dracula to kindly stop sucking on innocent virgins’ blood. However, you can train everyone else in society to recognize these vampire parasites by their behavior and, while not stake them in the heart, at least part them from their easy, naive, unprotected food.

Operant conditioning chambers work both ways: the rat will quickly learn behaviors that reward it, but the rat can also be trained to avoid certain behaviors too. Behavioral reinforcement is your best means of limiting the harms that high-functioning narcissists and psychopaths do.

(Low-functioning NPDs/APDs are typically managed by criminal justice systems, absent the necessary impulse control to keep their behaviors from being loudly noticed.)

The alternative to containment and control is simply to shoot all of the narcissists and psychopaths in a human society, as and when they pop up. Which is undeniably effective but somewhat unpalatable to those of us humans who are possessed of empathy, ethics, and other such limiters on our own behaviors. Such aggressive remedies are also, more importantly, disagreeable to society at large, which values “never rock the boat” above almost all else.

Which is exactly why these abusers so often operate for so long, unimpeded: society doesn’t want to face the demons in its own midst, especially when those demons look just like us. Plus, of course, greed, ego, selfishness, indifference, and cruelty are in everyone; and in challenging these individuals effectively we must be unafraid to confront those same (if lesser) negative aspects within ourselves. And we are generally cowards; whereas they are wired without guilt and shame, which gives them the general upper hand in playing the game against us when we are not rigorously trained in it. The rest of us want to look away; thus the smart abuser gives us just what we wish, then continues her meal at her leisure.

All this stuff is known, documented, understood. Again, those Reddits are really good as self-help groups: highly accessible collections of real-life horror stories for anyone who reads [who isn’t herself an NPD/APD], with links to other, more formal reading resources.

@ has:

Sceptics need to focus upon reachable people.
Anti-vax mothers who alighted here and proselytised, scolded, and attacked for (seemingly) years do so to deny that their child's low-functioning ASD originated from any internal sources (genes, prenatal events) rather than, as they claim, external ones (vaccines, toxins, evil drs). They are UNREACHABLE.
So are people who make money off of anti-vax.

We know that some people are not amenable to education or therapy. It seems to me that people who are not actively involved but who have been influenced by anti-vax nonsense are the most likely targets for change: people who read and don’t comment, or who follow the news on television or social media. Orac has a lot more readers than commenters. Social media has been negligent about allowing misinformation on its channels.

Social media has been negligent about allowing misinformation on its channels.

Disagree. Social media’s objective is engagement.

If customers of these services select misinformation over information, that is the customer’s choice. SM providers are just there to make their own coin by ensuring all of the transactions they facilitate run frictionless.

They are very good at that. Thus their platforms are working exactly as designed.

If those transactions also happen to foment mass deaths, civil war, etc., all that stuff is somebody else’s problem, for as long as the platform itself continues to function, producing revenue as normal.

Not that I’m letting SM vendors off the hook: they have their choices too, and the choices they make speak of their own character.

But, ultimately, if you do not hold the billions of [mis-]users of social media directly responsible for their own [mis-]use of information channels, who provides the conduit is pretty irrelevant. PEBKAC.

People buy lies because people want to be lied to.

The first fix is always free. Every fix after that is 100% on the mark.

@ has:

Social media are businesses, first and foremost BUT they are also vulnerable to criticism and public opinion. They don’t want a bad look or visible governmental investigation. They made ( slight) efforts to clean up the worst misinformation and violent rhetoric.

The loons I survey now despise FB, Twitter, You Tube, Insta etc and either seek out less controlling outlets like Gab, Telegram, Parler, Bitchute, Substack or create their own, like Brighteon.

@Denice: “Social media are businesses, first and foremost BUT they are also vulnerable to criticism and public opinion. They don’t want a bad look or visible governmental investigation.”

Bingo. Like I say, SM’s objective is engagement. That’s what drives it.

If SM’s existing customers start walking away en masse, the SM vendors aren’t getting the engagement, so they will revise their technology (and business) algorithms until they do.

Thus, once again, responsibility lies with customers, not vendors. Let them know by action, not words, the product you want from them and which you do not, and your mass engagement—or mass withdrawal of it—will do the work.

Everyone’s culpable.

Most people just don’t want to give up their own engagement. And there are real rational fears (“our walking away means the abusers win”), and practical matters of efficacy (“if we don’t all walk away as one, vendors won’t even feel it”), as well as just vanity driving that. That’s no excuse.

“The loons I survey now despise FB, Twitter, You Tube, Insta etc and either seek out less controlling outlets like Gab, Telegram, Parler, Bitchute, Substack or create their own, like Brighteon.”

Allow them! I imagine Gab, Parler, et al are even more controlling, whether it’s the vendor itself and/or mob rule of users who shut down “disruptive individuals”. The fascists can’t help it. Encouraging them to go rot in their own hellholes, which fleece and feed them, isn’t nice, but we can’t do anything else without being as repressive as them—it won’t work, and it’d twist us up too.

You can’t save them. Only they can choose to save themselves, and they have already made a choice not to; because that choice feeds something in them. So ostracise them utterly in the online world; be rigidly, zero-tolerance courteous in the street. Don’t allow them hooks into anything/anyone, except themselves. That’s all we can really do: the damage containment.

And if part of our “accepting our own responsibilities” means organizing to mass-deprive SM of our own foot traffic, so be it. The extremists have already proved it is 100% technically achievable, by creating their own more distributed networks, and network effects, and using those first.

What will the majority of moderates, so acclimated to doing nothing themselves as “somebody else’s problem”, do, I wonder?

We should not only relish in the extremists’ failures; we should learn humbly from all their successes as well. And then do all those things better—and for the better—than them too, for ourselves.

(yes because I suspected you would ignore the underlying cite – meaning your not doing science)

I doubt it. He could have skipped to the final paragraph:
“the risk of hospital admission or death from myocarditis is greater after SARS-CoV-2 infection than COVID-19 vaccination and remains modest after sequential doses including a booster dose of BNT162b2 mRNA vaccine”.

It feels funny to find myself agreeing with a post by johnlabarge, even if he probably doesn’t. I think that few people ignore the data regarding myocarditis in young men. Except, maybe, for @johnlabarge.

Your discussion completely ignores the role of money in shaping the official “scientific consensus”. The people to whom you attribute Dunning-Kruger effect are persuaded by data. When the ostensible “scientific consensus” ignores such data they are appropriately skeptical. Have you read the Dunning-Kruger paper? There are actually two trends: overestimation of ability by those who are non-experts and UNDERestimation of ability by actual experts. I’m going to go out on a limb and infer that as a cancer surgeon, you have treated fewer patients for Covid than individuals such as Drs. Kory and McCullough. Guess which Dunning-Kruger category that puts you in? (Me? I know I’m a non-expert, so my job is to spot the experts. I’ll go with the ones who are following the data instead of the money.)

@ Chaos Infusion

First, besides being a cancer surgeon, Orac has a PhD in immunology and over 60 peer-reviewed journal articles and millions of dollars in grants for his research.

Second, Peter McCullough is a cardiologist, without training or experience in immunology, infectious disease, or other relevant fields. He has had a number of articles retracted due to methodological problems. His case studies on, for instance, hydroxychloroquine, are ludicrous. Since the vast majority of those testing positive for COVID-19 (e.g., by nasopharyngeal swab) never develop any symptoms of disease, or develop only mild ones, if he hadn’t given them hydroxychloroquine the outcome would have been the same, etc.

And we now have a number of well-done placebo-controlled randomized double-blind studies of hydroxychloroquine and ivermectin, with results: they didn’t help, and hydroxychloroquine could cause some problems. These studies were done by medical experts around the world.

As for Dunning-Kruger, again, Orac is both a cancer surgeon and an immunology researcher. Perhaps you don’t understand that immunology is the basis of vaccine science. So he certainly has the expertise as compared with Peter McCullough.

Speaking of Dunning-Kruger, what is your background that you base your comments on???

This argument you make re hydroxychloroquine can easily be made for the vaccines, particularly wrt the less virulent variants.

There are many well run placebo controlled vaccine trials. Results are not what you say they are

There are many well run placebo controlled vaccine trials. Results are not what you say they are

Link to the ones with omicron and delta.

I shouldn’t have to point this out, but there’s a difference between vaccine trials and disease trials.

Moreira ED Jr, Kitchin N, Xu X, Dychter SS, Lockhart S, Gurtman A, Perez JL, Zerbini C, Dever ME, Jennings TW, Brandon DM, Cannon KD, Koren MJ, Denham DS, Berhe M, Fitz-Patrick D, Hammitt LL, Klein NP, Nell H, Keep G, Wang X, Koury K, Swanson KA, Cooper D, Lu C, Türeci Ö, Lagkadinou E, Tresnan DB, Dormitzer PR, Şahin U, Gruber WC, Jansen KU; C4591031 Clinical Trial Group. Safety and Efficacy of a Third Dose of BNT162b2 Covid-19 Vaccine. N Engl J Med. 2022 May 19;386(20):1910-1921. doi: 10.1056/NEJMoa2200674. Epub 2022 Mar 23. PMID: 35320659; PMCID: PMC9006787.

“I know I’m a non-expert, so my job is to spot the experts.”

You’re doing a miserable job of it. Peter McCullough is a cardiologist whose “expertise” is in promoting failed remedies for Covid-19. And in citing Pierre Kory, you’re ignoring the vast majority of his colleagues in critical care medicine, who don’t espouse his beliefs and consider him and his fellow “frontline” doctors a disgrace.

“I’ll go with the ones who are following the data instead of the money.”

Don’t forget to follow the money.

The people to whom you attribute Dunning-Kruger effect are persuaded by data

Well no, they are not. Locally, you certainly aren’t. labarge has no idea what reliable data is. ginny — well, she’s never met a conspiracy-based lie she doesn’t like.

@ldw: You are wrong, though your analysis is itself correct. Chaotic Infarction is right: She and her ilk are absolutely persuaded by data!

You will note, too, that CI says absolutely nothing about whether this data needs to be accurate or not. (Never mind proactively tested to verify its accuracy or destroy it otherwise.) This is not an oversight.

To the paranoid, narcissistic mind, Truth is measured by how closely it correlates to and comforts the narcissist’s own knowledge of what they already know is True. And what is True is that which the narcissist states to be True. This is itself self-evident, as the narcissist is inherently incapable of error.

Understand: They are functioning correctly.† The error here is entirely your own.

To wit, as Francis Bacon put it: “Knowledge Itself is Power.”

He never said it had to be right.

† For them.

I’m a non-expert

True — probably the first true thing you’ve ever posted.

so my job is to spot the experts.

If it really were your job, you’d be fired immediately, since you never quote experts, only cranks.

I’ll go with the ones who are following the data instead of the money.

Nope — you don’t follow people who cite data. You refer to losers like yourself.

about “spotting the experts”

That’s a huge problem for the public: unless you’ve studied a particular area, you may not be able to tell who is science-based (SB) or not. This is especially true when we’re discussing topics that are based on new technologies and recent discoveries.

Nearly every week, I watch someone attempting to raise funds for public television or sell a product that “enhances brain function” or memory. Most of this is altie BS, not based on research. Some of these salesmen are actual doctors, and some of the ads discuss research, but there is little to support their claims.

In more general woo, similarly, claims are made about particular supplements or foods improving medical conditions or preventing/curing illnesses when there is no evidence that any of this is true. There are general nutritional guidelines, BUT they are not the powerful cures being touted.

What Orac and his followers discuss above (credentials and relevant education/training in a particular area) often is not considered by the general public: they see that these dudes are doctors and take that as enough (although often, the relevance of the medical establishment itself is questioned by alties) because they agree with them on some issue like anti-vax.

Amongst the woo-meisters and altie icons that I survey, presenting oneself as an expert is the coin of the realm, yet if you look into their backgrounds, you will find very little standard education, training or experience that would qualify them as experts, even as they simultaneously insult and ridicule real experts. It is difficult to teach sceptics how to discern who is real and who isn’t because of the amount of new information the general public must ingest in order to spot crappy research and its purveyors. They misrepresent both their subject matter and themselves.

I am not a biologist, MD or medical researcher BUT I can assure you that I have studied far more biology, physiology, research design/ statistics et al than the people I survey and I RECOGNISE THIS EASILY. This is why I frequently try to show how little they know in more general subjects so readers can discover their ignorance for themselves.

There’s a topic for Orac and Co:
how can one tell who is an actual expert on a particular subject
( and I know that one of the correct answers has a standard- bad- response from alties)

I believe your focus is incorrect. The problem is discovering the truth, not finding experts. As Orac has said, the prior probability of an expert being correct is higher than non-experts. However it can never be 1. Overconfidence, as stated, is one reason. Another is when we’re on the bleeding edge and no expert knows the truth, such as early in the pandemic. Experts too often tried to offer assurances about methods to stay safe which may have been wrong, inadequate, excessive or speculative. The more reliable experts (re-established high prior) knew to alter their opinions as data accumulated. Human nature is such that some previously reliable experts dug in their heels and became unreliable (plummeting prior). True cranks and grifters have a perpetually low prior because they focus on contrarianism, fame, money and other non-science factors. Many are “not even wrong.”

Sometimes a non-expert is better at ferreting out the truth. Science is a human enterprise, and those with a strong inclination in favor of truth and with the expertise and experience to navigate the human landscape are more reliable at getting at the truth. Witness Deer vs. Wakefield.

Identifying experts has utility but is no panacea. As with investing: the trend is your friend, until it isn’t. E.g. Ioannidis.

“Experts too often tried to offer assurances about methods to stay safe which may have been wrong”

Bill Nye. I’ve been enjoying his/Seth MacFarlane’s disaster porn, The End is Nye, but it blew up last night.

His ‘act of cow’ for why the grid is not protected is that we’re not putting transformers in Faraday cages.

Of course, it’s the long runs of wire connecting the transformers: they pick up the induced low-frequency currents, which saturate the transformers so that the 60 Hz is free to cook them. Hence the ‘saturation switch,’ as used with aircraft landing lights and such. The correct answer is resistive low-pass filters.

He’s always been a fun science communicator for the general public, but I believe that episode really was too watered down, with a stupid proposed solution. The real ‘act of cow’ is that we’re not putting low-pass filtering on long runs of transmission line.

I agree with most of what you say; however, most people don’t have the same qualifications as Brian Deer: he’s terribly smart and persistent, and he worked for a major newspaper, which provided experts, data and legal assistance. He could consult with profs and medical experts, and he did.

Regular readers need to be able to differentiate “experts” who actually have reasonable knowledge from “guests” on altie sites, popular podcasts and right-wing channels. Unfortunately, some people listen to the latter.

I looked up the educational backgrounds of two successful podcasters: same as the average guy/girl spouting off at the bar/pub. Same problem choosing trustworthy sources.

“Regular readers need to be able to differentiate “experts” who actually have reasonable knowledge from “guests” on altie sites, popular podcasts and right wing channels.”

(I had a reply earlier today that the blog software ate and discarded, so I’ll try to regurgitate something vaguely similar.)

How is the average person supposed to differentiate? Both old and new media tend to select for photogenic con artists because they look good and they sound good, and they know how to pander to the audience. Experts rarely sound as persuasive.

The fact that fake experts are invited to appear gives them an aura of authority; that is, their appearance implies the media considers them an expert qualified to talk on the issue at hand. Does anybody, other than the rare skeptic, dig into their backgrounds? Not likely. Often the journalist that solicited their appearance is no better. Instead they may interview two “experts” and we’re left with a “he said, she said” muddle.

Determining who is truly an expert requires an expert or a motivated and diligent non-expert.

@ rs:

That’s precisely the problem **.
Average people are vulnerable to scams, misguidance and the premeditated sales techniques of charlatans and miscreants. I was aware of altie/ new age BS being promoted in the 1990s and started there. In the early 2000s, I discovered quackwatch and later, Orac and other sceptical doctors who wrote about these issues.

The average person is frequently NOT able to discern whether material is truly cutting-edge SBM, recycled woo or entirely made-up crap.
This is made worse by standard media’s assistance, which you describe so well, new media’s search for followers, and political opportunists taking advantage of the public’s rising distrust of authorities.

What I also consider important is that parts of the general audience may resent experts and instead identify with brave mavericks who rose from the working class through their brilliance and hard work and now SHOW all those elitists the Truth! This is a common theme amongst those I survey. They worked on a family member’s farm; they didn’t have enough money for elite schools; they were rejected by authorities in standard science and medicine. Similarly, podcasters present brave mavericks who challenge the powers-that-be: two current It boys are university dropouts who made their names in comedy and martial arts.

When confronted by sceptics’ analysis of alties’ mistakes, they merely point out that “experts have been wrong before”, “they are all paid off” and “they are criminal, anyway”.

I try to point out, rather than the OBVIOUS SB errors, the many general information errors they make*** and how much they benefit monetarily.

** and why we can’t have nice things EVER
*** the other day, PRN’s medical, art and US history expert reminisced about sitting on a riverbank in WV as a child, viewing the island where Hamilton dueled! If you know anything about history or SAW THE PLAY, you’ll know that this is confabulation: Hamilton is associated with NY and NJ, and he dueled and died there.
He’s told this tale many times.

I should add:
there’s a recent trend of disparaging university education for most people, although the disparagers may brag about their own endlessly. I hear this from alt med folk, rightist political people and even entertainment people (Bill Maher).

Kids should learn a trade, become farmers or follow traditional roles “in the home” ** – not be a “careerist” or “professional”.
IOW, expand their audience in the next generation.

** Kinder, Küche, Kirche?

Focus on evaluating the advice/claim first.

Then examine the qualifications and experience of the “expert” and whether his/her stated evidence/beliefs are shared by those with equal or better qualifications and experience.

If the claim is shaky or unsubstantiated, while the expert consensus is far different and backed by good evidence, then it doesn’t matter whether the original “expert” has glowing credentials.

Your discussion completely ignores the role of money in shaping the official “scientific consensus”. The people to whom you attribute the Dunning-Kruger effect are persuaded by data. When the ostensible “scientific consensus” ignores such data, they are appropriately skeptical.

And this salient point by Chaos turns what Orac is spinning on its head. It is often the skeptics who are risking everything by going against the “consensus”. They will be shunned, castigated and in general experience their careers going down the craphole. They are the ones with the humility who knowingly risk everything to do what they believe is right.

You are not Galileo just because you disagree. You must be right, too. Besides, COVID contrarians do not lose anything. They have good political connections to protect them.

“You are not Galileo just because you disagree.”

I also don’t recall Galileo ever being able to stick his head so far up his ass he could suck himself off in full public view to the adulation of legions of fans all none the wiser. So Gerg & chums have a leading advantage there.

“knowingly risk everything to do what they believe is right.”

So do suicide bombers.

So do suicide bombers.

Right, and just look at how successful they all are: No-one can ever tell them they are wrong ever again!

It is often the skeptics who are risking everything by going against the “consensus”. They will be shunned, castigated and in general experience their careers going down the craphole. They are the ones with the humility who knowingly risk everything to do what they believe is right.

It is very sad that someone could write that and believe it. You aren’t talking about skeptics in this anti-science crusade you’re supporting, you’re talking about people who have no understanding of the things they’re dismissing. If they had a shred of humility they’d recognize that and do a little work to educate themselves. Apparently that’s too difficult for you, labarge, CI, ginny, and others of your ilk.

Well CI you seem to have missed the point completely.

You aren’t siding with the people who follow the data, you’re siding with the people who say what you believe. Lowering your cognitive dissonance as it were.

That explains Thabo Mbeki’s AIDS denialism and his support of bad leaders like Robert Mugabe very well. Mbeki earned a Master’s degree from the University of Sussex aged 24. He is perhaps the most intelligent man ever to lead South Africa, and his blind spots cost some 343,000 lives and worsened the situation in Zimbabwe.

Orac writes,

“What matters is that you have the goods—the evidence and/or logic—to back up that challenge.”

MJD says,

Tell that to the FDA when a safety challenge (e.g., citizen petition) is denied wherein the outcome is heavily influenced by “economic impact.” In other words, economic impact as a contributor to science denial among public health officials.

@ Orac,

On a high note, one can always vent their frustration here at RI. Right?

To be fair, if money were the ONLY benefit then I’d agree.

Now prove that money has negated all the vaccine safety data since data collection began.

If you want a campaign to stop price gouging by big Pharma then I reckon an awful lot of the regulars would support you.

@ Everyone

I’ve posted a similar comment umpteen times. Antivaxxers criticize vaccines because companies make a profit on them. Profit says absolutely nothing about the value of a product. In a supermarket one can buy soft drinks, potato chips, and donuts, or one can buy fresh fruit, veggies, whole grain breads, etc. All are sold for a profit. Pharmaceutical companies sell insulin, albuterol, etc., as well as vaccines. Profit doesn’t say whether a product is highly beneficial, harmful, or anything in between. So, antivaxxers attacking vaccines because companies make a profit is ludicrous. And actually pharmaceutical companies make much better profits on items such as statins that have to be taken every day for life. Historically, because vaccines are only used a few times (e.g., MMR), we went from almost 40 companies manufacturing vaccines back in the 1960s to only a handful today. Take the mRNA Covid vaccine: entire worldwide sales are far less than the cost of hospitalizations, doctor visits for those sick, economic losses due to mitigation, etc.; but antivaxxers downplay the seriousness of the pandemic. Oh well. Do they live in a parallel universe???

However, one can attack companies, not only pharmaceutical companies but especially them, for extortionist profits. Insulin is a primary example. Its price has gone up around 10-fold over the past two decades, while production/distribution costs are maybe a few percent of that. Diabetics are suffering; some have even died because they couldn’t afford it. Democrats tried to put a cap on insulin prices, but Republicans, party of the corporations and super rich, by the corporations and super rich, and for the corporations and super rich, defeated it. Overall costs of pharmaceuticals in the US are much higher than elsewhere. Companies claim they need to cover research costs, but studies have shown they spend more on marketing than research and often tweak a drug so they can prolong its patent. If one checks Fortune 500 companies, pharmaceuticals are at the top and their CEOs draw exorbitant salaries. Even worse, studies have shown that 90% of basic research is funded by government, and around 50% of applied research. Unfortunately, our bought-and-paid-for Congress, Republicans more, but at least some Democrats also, refuses to deal with pharmaceutical prices.

Agreed. Pharma can have a great product and overcharge for it for massive profit. That doesn’t mean the product is bad. The whole health system is designed for profit and presented, by Republicans primarily, as a free-market system while it most certainly is not. It’s designed to always increase in cost until the breaking point. Insurance and large hospital groups, along with pharmaceutical companies, are essentially a tax. A low-income area in a particular state will generally pay for the costs associated with large city areas. Massive state-of-the-art hospitals in Atlanta, for instance, are paid for by people at the other end of the state who will never use them. Insurance companies have a profit percentage cap. This incentivizes insurance companies to want higher medication and procedure costs via the 80/20 rule. Crude example: if a procedure is $100, the max for profit and admin costs is $20. If the same thing is $1,000, the max for profit and admin is $200. They raise rates accordingly. And they know exactly what each individual costs per year, and that is why you have a deductible: to cover the one or two minor procedures, an MRI, a sleep study and other ancillary procedures.
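The 80/20 incentive described above can be sketched with toy numbers. This is an illustrative back-of-the-envelope calculation under the assumption that the commenter's figures stand in for premium revenue; the function name and rate are illustrative, not actual insurer accounting:

```python
# Toy sketch of the "80/20" medical loss ratio rule: profit plus
# administrative costs are capped at 20% of premium revenue, so the
# absolute dollar cap grows as underlying costs (and hence premiums)
# grow, even though the percentage stays fixed.

def max_profit_and_admin(premium: float, cap: float = 0.20) -> float:
    """Maximum dollars available for profit + administration under the cap."""
    return premium * cap

# A $100 procedure priced into premiums allows at most $20 for
# profit/admin; a $1,000 procedure allows $200, ten times more.
print(max_profit_and_admin(100))    # 20.0
print(max_profit_and_admin(1000))   # 200.0
```

The point of the sketch is only that a percentage cap creates no incentive to shrink the base it is a percentage of.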

@ EmSc16

And rural Whites vote Republican, unaware that it is Republican support for our for-profit system that, among other things, has resulted in increasing rates of closure of rural hospitals. Our taxes pay for health care in the US, but it is turned over to a private sector that takes 30 cents on the dollar for unnecessary, counter-health bloated bureaucracies, profits, and incredible CEO salaries.

You might find two papers I wrote of interest, just cut and paste titles:

Paying More, Getting Less: How much is the sick U.S. health care system costing you? May/June 2008

The Case for A Non-Profit Single-Payer Healthcare System. August 10, 2018

There are states that want to follow the rest of the world on single-payer healthcare, free college education for all, high-speed rail, and more housing for the homeless, but they can’t because they are being blocked by Trump states. Sure, California wants to do these things, but it needs federal approval and funding to do so.

@Joel: “Antivaxxers criticize vaccines because companies make a profit on them.”

More precisely, because the ones making the profit aren’t them.

Exhibit #0, of course.

And Orac throws around “scientific consensus” as if it were articles of faith on every scientific matter, enshrined in literature — and, no doubt, such volumes are kept in pharma’s library. So, Orac (or anyone, for that matter), how does the book on the “scientific consensus” on Covid read as it relates to such things as masking, lockdowns, social distancing, and the safety and effectiveness of Covid vaccination and alternative treatments?

Still being written.

There’ll be a whole chapter entitled “We tried to make people avoid infection by keeping their distance and they whined about freedom”.

Beats hell out of “articles of faith” held by conspiracy-mongering loons, and the trolls who love them.

I like to alert readers about altie/ anti-vax events and marketing:

in two weeks, there will be a symposium in Philadelphia/virtual space given by the Activities for Daily Living, featuring many well-known subjects of Orac’s attention: Andy Wakefield, Mary Holland, Brian Hooker, James Lyons-Weiler, Dr Palevsky and Katie Wright. They will premiere 3 films, including Andy’s Infertility. Only 79.99 USD!** They address “brain damaged” people, which to them includes people with autism.

Andy has been operating the Crystal Clear Film Foundation in Austin, a non-profit, to showcase his …. art, presenting The Act a while ago. They took in about a half a million USD recently. Their legal data is easy to find on the net.

Films seem to be popular funding sources for alties: they’re easy to make, followers will pay to attend or watch, old material can be re-used, the technology is cheap, etc. Null puts out a few each year and hosts “premieres” in theatres as well as virtual events. All for money. Old ones are re-cycled as free fodder for new recruits.

** I’m not sure if the price is for live attendance, virtual or both.

That sounds like a real toe-tapper. Ohhhuu (as in an expression of alarm, surprise, excitement, and auuhhrggg). Let me get my pen.

thx for the heads-up.

“Films seem to be popular funding sources for alties”

Building brand awareness is what it’s all about.

The content itself isn’t important: it’s getting your identity seen and talked about. That’s the real message—the one that actually counts.

Mike Adams is a brand. So are Andrew Wakefield, Null, Oz, and so on.

That professional marketing and personal narcissism share much common ground is a lovely bonus for these successful, effective, industry Thought Leaders. When it doesn’t even feel like working, that’s the best job of all.

Films aren’t easy to make. Even a crappy propaganda documentary requires more work to create than you might imagine unless you’ve actually done location shoots, editing, etc. etc. What has changed from the ‘good old days’ is the cost involved has dropped dramatically with HD digital video and desktop post-production tools. So getting a production out is basically a question of marshaling the labor power capable of generating a product at whatever level of competence your audience requires.

Even so, I wouldn’t expect Andy’s or Null’s cinematic works to generate profit directly. The way almost ALL kinds of non-Hollywood filmmaking works is closer to has’s concept of “brand awareness,” that is, as a form of advertising. For example, there’s no income to be had from the exhibition of avant garde films, but getting films seen will translate into other monetizable opportunities for the maker who acquires a “name” as a result — personal appearances, teaching gigs, consulting…

Thus, for Andy, it’s likely not about his cut of the $79, but putting stuff out that keeps the contributions to his “non-profit foundation” (excuse me while I vomit) flowing. I’d guess Null’s releases are likewise pegged to support of his main revenue streams rather than direct profit.

If this stuff does bring in any “box office” $$$, that’s just gravy. Advertising works. No doubt AJW and Null have figured out the value of their media in terms of the larger enterprise, and are investing a fair amount on competent production work in anticipation of longer-term return.

I appreciate how difficult films are, even with all the fancy-schmancy editing tools now available to me. I have access to a cheap educator-discounted license for Adobe Creative Suite through my school, for instance, and that gives me access to Adobe Premiere Pro. It’s one piece of software that I’ve never really figured out. My attempts to make videos featuring the puppies that we’ve fostered have been amateurish at best. Indeed, the learning curve for shooting, editing, and producing video is the primary reason why I’ve never really tried to do YouTube videos. It’s just too much time, and it would take me too long to get halfway decent at it.

“Indeed, the learning curve for shooting, editing, and producing video is the primary reason why I’ve never really tried to do YouTube videos. It’s just too much time, and it would take me too long to get halfway decent at it.”

Rest assured, “amateurishness” is no barrier at all to making teh puppeh filmz for teh YouTubes. If anything, it is a powerful sales advantage: part of the whole sincere openly emotional defenseless charms of such works. Lean into it, why not.

(Assuming there’s still plenty time in t’schedule for the Regular Insolence, of course.;)

That said, Premiere Pro is into semi-pro territory at least. I dunno how it stacks up for amateur use against, say, Apple’s iMovie, but I have on occasion bodged out a vid on the latter and, while it’s still more effort than I’d like, it’s certainly doable on minimal skills. Adobe’s stuff really isn’t geared to the amateur auteur.

Beyond that, it’s like any tool. You learn the buttons to push, you can work it. What differentiates amateur ass from competent no-budget filmmaker is knowing the Why of when to push and not push. 80% of that is probably just a rote set of rules one can memorize out of a rulebook, and never go too far wrong. (It’s amazing how many low-budget directors fail at that basic due diligence.) The rest is rigorous unflinching self-criticism, i.e. kicking the crap out of your own sad attempts (until you give up, or stop sucking as much), which is the fundamental feedback loop any scientist must run as standard so I’m sure you have that part pat.

I’m sure a Plan 9 from Outer Space DVD† and a copy of Katharine Coldiron’s eponymous book cannot harm either.

Pup 9 from Outer Space—Tell me that there title alone is not YouTube instagold, or my name ain’t t’Amazing Criswell. 🙂

† I believe Tommy Wiseau’s The Room is nowadays the solid contender for “Worst Picture Ever”, although I am a tad old-school to keep up with all this modern nonsense. I’m sure our resident narcs would enjoy it though, like a mirror.

Got thrown a bit there by the posted figure from the paper, where “Homeopathic medicine” has the same slope as the other subjects. However, going to the paper itself, the X-axis is opposition to the scientific consensus, not the subject itself. So luckily the study still supports my own suppositions :), but they could have labeled the figure a bit more clearly to prevent confusion!
