Just when I start to think that maybe, just maybe, I could stop worrying and learn to love the National Center for Complementary and Alternative Medicine (NCCAM, with apologies to Stanley Kubrick and Peter Sellers), damn if it doesn’t go and do something that renews my cynicism about the entire Center. This time around, Dr. RW has turned me on to a proposed project that leaves me scratching my head, Omics and Variable Responses to CAM: Secondary Analysis of CAM Clinical Trials.
Given the nature of the woo to be studied, my first inclination was to start making light of the whole “oooommmmm”-ics angle of the title as being quite appropriate to woo, but that would be too easy and a cheap shot.
Which is exactly why I just did it, of course.
The idea behind this project is to correlate why some people “respond” to alternative medicine and some don’t with “omic” profiles, specifically genomics, proteomics, and metabolomics. You know, I’m getting rather tired of scientists tacking the suffix “-omics” onto a term and calling it a new science. But, leaving that aside, what is being proposed seems so utterly unlikely to find any useful information that, even by the standards of NCCAM-sponsored research, it startles even me:
This initiative will be used to stimulate omics analysis by CAM investigators.
Multiple NCCAM clinical studies have appropriately banked patient samples from a variety of tissues. Therefore, this initiative will encourage use of these already acquired samples from well-designed clinical studies for identification of genomic, proteomic and metabolomic variants that may be markers or classifiers for the level of response to CAM interventions. Trials need not exhibit significant differences between arms. This analysis may be applied to studies with negative results as a function of high patient variability in the treatment arm, which may represent responder/nonresponder phenotypes.
In other words, it’s a massive fishing expedition. As much as I like the new technology that can produce such copious amounts of data, I’m not a big fan of such studies even in conventional medical trials, where the treatment studied has clear efficacy, a molecular mechanism of action, and thus a reason to think that there might be “omics” profiles that correlate with the magnitude of response. For one thing, such studies are not hypothesis-driven, and investigators will not have much of an idea of what they might expect to find, if they find anything at all. Thus, in the absence of clear criteria for what constitutes a meaningful finding, there is a real risk of all sorts of spurious findings being labeled “significant.” In addition, such studies are not cheap. For example, gene chips for genomics studies cost between $1,000 and $1,500 apiece, and a high degree of specialized statistical expertise is needed to analyze the data. Moreover, because the tissue and blood samples were collected for another purpose, the most such studies can produce is a retrospective data set, with all the attendant pitfalls and difficulties that entails, the worst of which is that correlation does not necessarily equal causation. And, of course, if researchers look for a large enough number of correlations, they will inevitably find some on the basis of random chance alone, even if they are not true indicators of causation. Such analyses are in essence post hoc exercises; it is far better to build them into a study prospectively from the very beginning. The data is cleaner that way, and the statistical analysis has more power.
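The multiple-comparisons point is easy to make concrete. Here is a minimal simulation (purely illustrative; the marker count is a made-up stand-in for the number of probes on a typical expression array): under the null hypothesis, p-values are uniformly distributed, so screening thousands of markers at p < 0.05 guarantees hundreds of “hits” by chance alone.

```python
import random

random.seed(0)

N_MARKERS = 20_000   # hypothetical number of probes on an expression chip
ALPHA = 0.05

# Under the null hypothesis (no real responder/nonresponder difference),
# each marker's p-value is uniform on [0, 1].  Simulate one p-value per
# marker and count how many clear the conventional 0.05 threshold.
p_values = [random.random() for _ in range(N_MARKERS)]

false_hits = sum(p < ALPHA for p in p_values)
print(f"'Significant' markers with NO real effect: {false_hits}")
# Expect roughly ALPHA * N_MARKERS = 1000 spurious hits.

# A Bonferroni correction shrinks the per-test threshold accordingly,
# which is why such screens demand specialized statistical expertise.
bonferroni_hits = sum(p < ALPHA / N_MARKERS for p in p_values)
print(f"Hits surviving Bonferroni correction: {bonferroni_hits}")
```

On the order of a thousand markers come out “significant” despite there being, by construction, nothing to find, which is exactly the trap awaiting an uncorrected post hoc omics trawl.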
That being said, I will concede that for some clinical trials, post hoc analysis of the tissue specimens can be useful as a means of generating hypotheses. Specifically, if there are clear-cut “responders” and “nonresponders,” subjecting samples from these patients to “omics” analyses can provide clues to help investigators determine why some subjects respond and some do not, and useful hypotheses to test can be generated from such correlations. In fact, this is the very reason why I find the NCCAM proposal so pointless: precious few of the modalities tested in NCCAM-funded grants have shown even a glimmer of efficacy. Given that, it’s a waste of resources, specifically precious grant money that could go to fund far more worthy endeavors. This sort of approach may be somewhat useful for various herbal remedies, mainly because such remedies are drugs (impure drugs with many components, but drugs nonetheless), but it would be utterly useless for practically every other CAM modality. For example, does anyone imagine that genomics, proteomics, or any other “omics” would identify “responders” and “nonresponders” when it comes to Reiki therapy? I don’t think so–unless there is an “omics” profile for credulity, personal belief structure, or the placebo effect.
On second thought, watching the fundamentalists go crazy over such a finding might be worth it–no, not really.
I’ve written about the myth of “personalizing” treatments in alternative medicine and how it’s a sham that is only invoked when it is convenient to do so, particularly when justifying failure. This entire project feeds into that mindset perfectly. An NCCAM-funded study shows no effect for your favorite woo? No problem! Apply for more money to do analyses on the specimens to see if you can find differences between patients on the study that you can correlate to that lack of effect! If you find a correlation, well, then, you’ve hit the jackpot, because obviously a prospective study will have to be done to verify whether the correlation is real. The potential for a serious gravy train is impressive indeed:
The analyses of CAM studies likely will involve omics differences and generation of class predictors. They are unlikely to be of sufficient scale for class discovery. Review criteria will emphasize a credible hypothesis for expectation of a genetic component or basis for the observed differences in CAM effects. Clear objectives, including patient selection criteria, are necessary to inform the appropriate study design and analysis strategy. The studies will be assessed based on their design and analytical strategy. By necessity, these studies will require assembly of multi-disciplinary teams combining clinical researchers, CAM expertise, disease-specific expertise, geneticists, bioinformatics expertise, and statisticians. Relevant tissue samples and facilities must be available. It is of interest to use clinical samples already acquired from complete and ongoing NCCAM-funded human studies. Applications are envisioned as correlative studies to complete and ongoing funded research projects.
Yep, big grants will be needed, all in the service of funding this:
This indicates that it may be possible to identify populations that are especially responsive to CAM interventions (subgroup genotyping), which may enhance the ability to establish efficacy in a clinical trial. There are additional reports of variations in reactivity to dietary components such as the relationship between equol production and responses to soy. As an example of protein expression influences, in the mind/body realm the effect of expectancy on pain may be dependent on the level of expression of the mu-opioid receptor.
The whole idea of doing omics on tissue samples from NCCAM-funded studies is about as good an example of putting the cart before the horse as I can think of. There is no doubt that individual biological variability is problematic for all medicine and is the main reason that clinical trials of sufficient size and power are needed to separate random biological variation from real treatment effects. However, if a treatment is efficacious in a significant part of the population, it should demonstrate efficacy in a properly designed clinical trial, and if there are differences in response large enough to produce two classes (responders and nonresponders), that should show up as well. Studying “omics” in the absence of clear evidence that these two conditions apply (clinical efficacy and definite groupings of responders) is about as likely to produce useful information as EneMan is to recommend that you forego colonoscopy if you have rectal bleeding.
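The point about power bears this out even in the scenario the initiative invokes, a hidden responder subgroup. Here is a toy simulation (all the numbers are invented for illustration): even when only half the treated patients “respond,” a decently sized trial still detects the diluted average effect most of the time, no omics required.

```python
import math
import random

random.seed(1)

def trial(n_per_arm, effect, responder_frac):
    """Simulate one two-arm trial with normal outcomes (unit variance).

    Only a fraction of the treated arm actually responds (gets the
    effect); the rest behave like controls.  Returns the z statistic
    for the difference in arm means.
    """
    control = [random.gauss(0, 1) for _ in range(n_per_arm)]
    treated = [
        random.gauss(effect if random.random() < responder_frac else 0, 1)
        for _ in range(n_per_arm)
    ]
    diff = sum(treated) / n_per_arm - sum(control) / n_per_arm
    se = math.sqrt(2 / n_per_arm)  # rough: assumes unit variance in both arms
    return diff / se

# Hypothetical trial: 200 patients per arm, a 0.5 SD effect present in
# only half of the treated patients (so the average effect is diluted
# to 0.25 SD).  Count how often the trial still reaches z > 1.96.
N_SIMS = 500
detected = sum(trial(200, 0.5, 0.5) > 1.96 for _ in range(N_SIMS))
print(f"Trials detecting the diluted effect: {detected}/{N_SIMS}")
```

A real effect, even one confined to a responder subgroup, leaves a detectable signal in a properly powered trial; a string of flatly negative trials is not evidence of hidden responders waiting to be found by genotyping.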
The really frustrating aspect of this whole initiative is that, in these constrained budgetary times, any money devoted to this project is money diverted from more scientifically worthy projects. A justification can be made for such retrospective analyses when there is definite evidence of efficacy and definite evidence of classes of responders and nonresponders, as is found in many drug trials, but, barring that, it’s far more likely that good money will be thrown down the pit after bad to find out whether there is an “omic” profile for woo, and I have just the term.