Yesterday, I explained why a study that purports to show that psychotic patients tended to vote for President Bush in the 2004 election (and that is presently making the rounds, to much snarky gloating, through the left-wing blogosphere) is so utterly flawed that it almost certainly does not mean what the author claims it does, given the data dredging, the small sample size, and the failure even to consider alternative hypotheses to explain the observations. In my discussion, I complained that I had found only one skeptical take on the study amid all the credulous acceptance and use of it to imply (or outright state) that Bush supporters are mentally ill.
Now I’ve found another skeptical take by Alon Levy. He notes the small sample size and emphasizes, more than I did, the nonrandom sampling of the large population about which the study purports to make observations. He also goes on to discuss the problems with metastudies (the kind that are often presented as further evidence of a link between conservatism and various mental pathologies):
The buzzword in areas of social science that generate numerous studies is “meta-study.” It’s easy to botch these too, but when something draws enough buzz for there to be a hundred different data sets about it, a political hack will be able to use five that show the correlation is statistically significant.
In fact, usually there will be many more than five, because published studies have an existing bias in favor of data sets that show significant correlation. “There’s no link between these two things” won’t get you published unless it’s a real hot-button issue like racial IQ differences, and even that only diminishes that bias but does not eliminate it.
This is the same sort of problem we have with meta-analyses in medicine, and it is known as publication bias. The problems with meta-analyses are a topic I’ve been meaning to blog about for a while now. Maybe next week. But in the meantime, I’ll share a quote I heard from renowned burn surgeon Dr. Basil Pruitt while listening to SESAP 11:
“I have viewed metanalysis and its frequent use as sort of a means of turning a lady of the night into a Vestal Virgin.”
I don’t mean to say that meta-analyses can’t produce valid results (many do), but physicians tend to forget that, per the time-honored computing principle of GIGO (“garbage in, garbage out”), a meta-analysis is no better than the studies upon which it is based. They need to remember that and evaluate meta-analyses accordingly. So does anyone else who reads and cites such studies.
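Levy’s point about a hack being able to cherry-pick five “significant” data sets out of a hundred isn’t hyperbole; it’s just the expected false-positive rate at work. At the conventional p < 0.05 threshold, roughly one in twenty studies of a completely nonexistent effect will look “significant” by chance, so a hundred null data sets hand you about five publishable correlations for free. A minimal simulation sketching this (all numbers hypothetical; a plain Welch t statistic with |t| > 2 stands in for p < 0.05):

```python
import math
import random

random.seed(1)

def t_statistic(a, b):
    """Welch two-sample t statistic for two lists of observations."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def run_studies(n_studies=100, n_per_group=30):
    """Simulate studies of a NONEXISTENT effect (both groups drawn
    from the same distribution) and count how many look 'significant'."""
    significant = 0
    for _ in range(n_studies):
        group_a = [random.gauss(0, 1) for _ in range(n_per_group)]
        group_b = [random.gauss(0, 1) for _ in range(n_per_group)]
        if abs(t_statistic(group_a, group_b)) > 2.0:  # roughly p < 0.05
            significant += 1
    return significant

print(run_studies())  # typically around 5 of 100 null studies look "significant"
```

Pool only those five into a meta-analysis while the ninety-five negative data sets sit unpublished in file drawers, and GIGO does the rest: the pooled result looks rock-solid precisely because the garbage was selected before it went in.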