If there’s one thing that confounds advocates of so-called “complementary and alternative medicine” (CAM), it’s the placebo effect. That’s because, whenever such remedies are studied using rigorous clinical trial design with properly constituted placebo controls, they almost always end up showing effects no greater than placebo. That’s the main reason why CAM advocates frequently suggest that all those rigorous, carefully constructed randomized placebo-controlled clinical trials aren’t really the best way to investigate their woo after all. To them, it’s much better to do “pragmatic” trials, which are not always even randomized and often don’t control for placebo effects, mainly because such trials are more likely to produce false positives due to bias and placebo effects. Of course, part of the reason for this dislike of placebos among CAMsters is that some very good placebo controls have recently been developed for modalities once thought not to be amenable to placebo-controlled trials. Foremost among these modalities is undoubtedly acupuncture, for which placebos have been devised in the form of sham needles that do not actually penetrate the skin but reliably prevent the patient (and sometimes even the practitioner) from knowing which treatment is being administered. Studies using these shams have demonstrated quite conclusively that acupuncture effects are placebo effects.
What brought these thoughts to mind is a study that I saw a few days ago in the Annals of Internal Medicine on placebos in randomized clinical trials, coming out of the University of California San Diego and the University of Oxford. The study is entitled, provocatively enough, What’s in Placebos: Who Knows? Analysis of Randomized, Controlled Trials. What Golomb et al did in this study was something incredibly simple. They asked: How often do published randomized clinical trials (RCTs) provide sufficient detail about the composition of the placebo used that another investigator could replicate it? The answer was, in essence: disturbingly less often than I would have expected. I filed the article away as something I should consider blogging about and then, thanks to the other things going on this week, forgot about it.
Until, that is, I saw Mike Adams at that even more wretched hive of scum and quackery than The Huffington Post, NaturalNews.com, decide to argue on the basis of this study that the entire scientific basis of science-based medicine is now in doubt. I kid you not. That’s what he is arguing in an article he entitled Placebo fraud rocks the very foundation of modern medical science; thousands of clinical trials invalidated. It’s a screed of flaming stupid so over the top that only Mike Adams could have produced it, firing napalm into the sky to form the words, “I’m an idiot” in letters big enough for the state of Texas to see.
Before we get to Mike Adams’ rather fevered interpretation of Golomb et al, let’s settle down a moment and look at the study itself. To get at the answer of how many trials adequately describe their placebos, Golomb et al screened four clinical journals with high impact factors from January 2008 to December 2009, looking for placebo-controlled RCTs. Articles were eligible if they were RCTs and used a non-active control (i.e., placebo). They excluded articles whose primary findings were cited as having been published elsewhere, in other words, papers that reported additional findings of a clinical trial that had previously been reported. In addition, they compared how often placebo ingredients were disclosed for pills versus injections. Their findings are summarized well in the abstract:
Background: No regulations govern placebo composition. The composition of placebos can influence trial outcomes and merits reporting.
Purpose: To assess how often investigators specify the composition of placebos in randomized, placebo-controlled trials.
Data Sources: 4 English-language general and internal medicine journals with high impact factors.
Study Selection: 3 reviewers screened titles and abstracts of the journals to identify randomized, placebo-controlled trials published from January 2008 to December 2009.
Data Extraction: Reviewers independently abstracted data from the introduction and methods sections of identified articles, recording treatment type (pill, injection, or other) and whether placebo composition was stated. Discrepancies were resolved by consensus.
Data Synthesis: Most studies did not disclose the composition of the study placebo. Disclosure was less common for pills than for injections and other treatments (8.2% vs. 26.7%; P = 0.002).
Limitation: Journals with high impact factors may not be representative.
Conclusion: Placebos were seldom described in randomized, controlled trials of pills or capsules. Because the nature of the placebo can influence trial outcomes, placebo formulation should be disclosed in reports of placebo-controlled trials.
Primary Funding Source: University of California Foundation Fund 3929–Medical Reasoning
Personally, I don’t find it surprising at all that the ingredients of injections were disclosed more often than those of pills. After all, injections are more invasive than taking a pill. Even so, placebo content should be disclosed in both cases.
In the discussion, Golomb et al take up the issues behind placebos. What has to be considered is that there is no such thing as a perfect placebo. Consider this: If a drug has a characteristic taste or smell and the placebo used in a study doesn’t share it, it’s quite possible that patients in the control group will figure out that they’re receiving a placebo. Consequently, it’s often necessary to add something to the placebo to make it taste and/or smell like the real drug. The authors use the specific example of a drug that leaves a fishy aftertaste, meaning that, to make a convincing placebo, something producing a fishy aftertaste would have to be added. It’s also important that the placebo match the drug’s color and texture, or at least come as close as possible.
More problematic for drug trials is that some ingredients used in placebos can actually have physiological effects. As the authors explain, the results can be misleading:
However, negative, positive, or same-direction effects of a placebo can result in the misleading appearance of positive, negative, or null effects of the experimental drug (7). For instance, olive oil and corn oil have been used as the placebo in trials of cholesterol-lowering drugs (7, 10, 11). This may lead to an understatement of drug benefit: The monounsaturated and polyunsaturated fatty acids of these “placebos,” and their antioxidant and antiinflammatory effects (12, 13), can reduce lipid levels and heart disease (13, 14). In one of these studies (11), the authors commented that “The lack of any overall effect in patients with myocardial infarction might be related to the unexpectedly low mortality rate in the placebo group.” The possibility that the placebo composition may have influenced this “unexpectedly low mortality” was apparently not considered.
Another example cited by the authors is that of megestrol acetate for anorexia associated with cancer, a trial in which an unexpected benefit for megestrol in gastrointestinal symptoms was found. It turns out that the placebo control contained lactose. It further turns out that lactose intolerance is prevalent in cancer patients, exacerbated by chemotherapy and radiation, leading to speculation that the lactose in the placebo might have produced the appearance of benefit for adverse GI symptoms. Was the amount of lactose in the placebos enough to have actually provoked or worsened GI symptoms in the lactose intolerant? Who knows? It’s a possibility, but only that: a possibility. Certainly Golomb cites nothing that demonstrates it to be more than just a possibility. In fact, I looked up the references, and they don’t demonstrate that this is what actually happened, which irritated me a bit. All they showed was that, yes, cancer patients probably have a higher than normal incidence of lactose intolerance that might be related to chemotherapy and radiation. The other references cited included the megestrol study itself, which didn’t even speculate about the lactose in the placebo, and a bioethics paper in which Golomb discussed the possibility of active ingredients in placebos. In any case, nothing she presented shows that the small amount of lactose that was probably in the placebo used in the megestrol study was even likely to have contributed to GI complaints that could have made megestrol look better by comparison.

Moreover, this study was not without weaknesses. For instance, it focused on only four journals. True, they may be high impact journals, but that’s a limited sample nonetheless, and “high impact” does not necessarily equal “more rigor.” Basically, what we have here is an interesting preliminary study that might have found a potential problem with placebo-controlled clinical trials.
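To make concrete how a placebo with its own physiological activity could distort a trial’s apparent result, here is a toy simulation. Every number in it is hypothetical and for illustration only; none comes from any of the trials cited above.

```python
import random

random.seed(0)

def apparent_benefit(placebo_effect, n=5000):
    """Simulate the drug-vs-control comparison from one hypothetical trial.

    Outcome: reduction in some score (arbitrary units). The drug
    "truly" reduces it by 30 on average; the control arm's mean
    reduction equals whatever effect the placebo itself has.
    All numbers are made up for illustration.
    """
    drug = [random.gauss(30, 25) for _ in range(n)]
    control = [random.gauss(placebo_effect, 25) for _ in range(n)]
    return sum(drug) / n - sum(control) / n

# A truly inert placebo: the apparent benefit comes out close to the true 30.
print(round(apparent_benefit(0), 1))

# An "active" placebo (say, an oil that itself lowers the outcome by 10):
# the drug now appears only ~20 better than control, understating its effect.
print(round(apparent_benefit(10), 1))
```

The point of the sketch is only that the measured drug effect is always drug minus control, so anything the “placebo” does on its own gets silently subtracted from (or, if it worsens outcomes, added to) the drug’s apparent benefit.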
Worse, Dr. Golomb clearly has at the very least a bit of naïveté about how cranks use her work to try to justify their pseudoscience. For example, in June Dr. Golomb appeared in an interview on Joe Mercola’s YouTube channel.
Still, even given Dr. Golomb’s apparent affinity for (or at least inability to recognize) cranks, coupled with the preliminary nature of her study, I have a hard time not agreeing with her when she advocates that high impact journals change their CONSORT (Consolidated Standards of Reporting Trials) guidelines to include new rules on the reporting of placebos. At the same time, I wish Dr. Golomb would be a bit more careful about whom she grants interviews to. Let’s just put it this way: Giving an interview to Joe Mercola is not a good way to burnish your credibility. As for the guidelines, Golomb et al propose adding a section requiring the specification of placebo ingredients that answers these questions:
- Was the (placebo) control treatment described in detail?
- If a chemical compound, were its full constituents given (by weight)?
- Were its appearance and any differences from the test drug described (or absence of differences stipulated)?
- Was it stated what other factors might render the experience of the control distinctive from the test agent (or absence of other factors stipulated)?
All of these are reasonable questions. Overall, I was left with the impression that there might be a problem, but I wasn’t convinced that it was as huge a problem as Golomb et al were trying to argue. Even so, I thought that their proposal to modify the CONSORT guidelines was a good first step in correcting the situation. After all, as Golomb et al argue, it can only increase the usefulness and rigor of reporting clinical trials if placebo ingredients are reported.
Obviously, über-quack Mike Adams has different ideas. In fact, I’m going to jump to the hugest howler in a howling sea of stupidity right now. As he is wont to do, Adams leaps on one thing Golomb et al mentioned, namely that there is no government regulation of the contents of placebos. Because of that (get a load of this), Adams concludes:
You see, if there are no regulations or rules regarding placebo, then none of the placebo-controlled clinical trials are scientifically valid.
It’s amazing how medical scientists will get rough and tough when attacking homeopathy, touting how their own medicine is “based on the gold standard of scientific evidence!” and yet when it really comes down to it, their scientific evidence is just a jug of quackery mixed with a pinch of wishful thinking and a wisp of pseudoscientific gobbledygook, all framed in the language of scientism by members of the FDA who wouldn’t recognize real science if they tripped and fell into a vat full of it.
Big Pharma and the FDA have based their entire system of scientific evidence on a placebo fraud! And if the placebo isn’t a placebo, then the scientific evidence isn’t scientific.
Step back a minute. Step back and cover your head to try to protect yourself against the waves of burning stupid washing over you. I’m sorry I didn’t warn you. In retrospect, I really should have, so that you could have gone to your kitchens for some tin foil to construct a hat to keep the conspiracy waves, mixed with the waves of burning stupid, from frying your neurons. Adams’ post is truly a black hole of neuron-apoptosing stupid. I can’t make up my mind whether he really believes it when he argues that a problem with the reporting of placebo ingredients means that all placebo-controlled RCTs are invalidated, that they’re all pseudoscience and quackery, or whether Adams’ contempt for his audience and erstwhile customers is so great that he thinks they’ll believe anything, even whoppers like the one he just told. I’m also quite sure that he has no idea that the contents of placebos for these trials are known and could be discovered. Records are kept; the IRB application for a clinical trial has to describe the placebo as well. If anyone has reason to question any individual clinical trial, it is quite possible to go back to the trial records and find out what placebo was used. True, this is not ideal, being far more difficult than simply reading it in the journal article describing the results of the trial, but the fact that too many clinical trials don’t adequately report the placebo used does not automatically invalidate every clinical trial that uses a placebo.
Think about what Adams is trying to get his marks to believe. He’s trying to argue that, just because a low percentage of studies report the ingredients of their placebos, all science-based medicine is not just wrong but a fraud. It’s hard for me not to chuckle, of course, at Adams’ thinking that the lack of FDA regulation of placebos is such a horrible thing, given that he has frequently likened the FDA to Nazis and worse on his website. After all, any time the FDA tries to rein in quackery, Adams is so fast off the mark in attacking the FDA as a bunch of jack-booted thugs that he shatters windows with his sonic booms. Yet here he is, railing that the FDA doesn’t regulate the contents of placebos.
More hilarious is how Adams represents this issue as a massive conspiracy involving–who else?–big pharma to manipulate placebos in placebo-controlled RCTs in order to get the results they want. Amusingly, in doing so, he can’t even get it right about what statistical significance means:
As the key piece of information on its regulatory approval decisions, the FDA wants to know whether a drug works better than placebo. That’s the primary requirement! If they work even 5% better than placebo, they are said to be “efficacious” (meaning they “work”).
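Adams is conflating effect size with statistical significance. A toy simulation (hypothetical numbers, Python standard library only) shows the difference: whether a drug-placebo comparison comes out “statistically significant” depends on the p-value, which in turn depends heavily on sample size, not on whether the drug is “5% better”:

```python
import math
import random

random.seed(1)

def two_sided_p(a, b):
    """Welch-style two-sample test statistic; with reasonable sample
    sizes the normal approximation to the t distribution is adequate."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    z = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# The same true effect (drug mean 48 vs. placebo mean 50, sd 10,
# i.e. a modest ~4% difference), at two different sample sizes.
big_drug = [random.gauss(48, 10) for _ in range(2000)]
big_placebo = [random.gauss(50, 10) for _ in range(2000)]
small_drug = [random.gauss(48, 10) for _ in range(15)]
small_placebo = [random.gauss(50, 10) for _ in range(15)]

print(two_sided_p(big_drug, big_placebo) < 0.05)  # prints True: significant
print(two_sided_p(small_drug, small_placebo))     # same effect, usually not
```

With 2,000 patients per arm even this modest difference is overwhelmingly significant; with 15 per arm it usually is not. In other words, “significant” says nothing by itself about how big the effect is, which is exactly the distinction Adams misses.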
Is it just me, or is Adams confusing the convention that statistical significance requires a p-value of less than 0.05 (i.e., 5%), meaning that, if the drug truly had no effect, a difference at least as large as the one observed would arise by chance alone less than 5% of the time? (I realize I’m simplifying a lot, but this is Mike Adams we’re dealing with here.) Instead, Adams misstates the concept as meaning that the FDA just requires a 5% difference between drug and placebo. Someone who can’t even state with reasonable accuracy what a statistically significant difference means shouldn’t be lecturing anyone about how to do science. Yet lecture scientists (more like harangue them) is exactly what Adams does, with this broadside against those evil skeptics:
It really makes you wonder about so-called “skeptics,” doesn’t it? If they’re skeptical of homeopathy, tarot cards, psychic mediums and people who claim they can levitate, I can at least understand the urge to ask tough questions about all these things. I ask tough questions, too, especially when people tell me they’ve seen ghosts or spirits coming back from the dead or other unexplained phenomena. (And I’ve already publicly denounced so-called “psychic surgery” which is quite obviously little more than sleight-of-hand trickery combined with animal blood.)
But most conventional skeptics never step out of bounds of their “safety zone” of popular topics for which skepticism may be safely expressed. They won’t dare ask skeptical questions about the quack science backing the pharmaceutical industry, for example. Nor will they ask tough questions about vaccines, or mammography, or chemotherapy. And you’d be hard pressed to find anything more steeped in outright fraudulent quackery than the pharmaceutical industry as operated today (and the cancer branch of it in particular).
That’s why I’m skeptical about the skeptics. If a skeptic doesn’t question the loosey goosey pseudoscience practiced by Big Pharma, then they really have no credibility as a skeptic. You can’t be selectively skeptical about some things but then a fall-for-anything fool on other scams just because they’re backed by drug companies.
This passage, more than any other, amuses the hell out of me. Why? First, it amuses me because Adams pats himself on the back for being skeptical of paranormal phenomena and for having actually figured out that psychic surgery is nothing more than sleight-of-hand. Wow! That took a lot of effort and knowledge! Apparently, though, he used up all his skepticism on psychic surgery, because there’s scarcely a form of quackery that Adams won’t defend. Indeed, his website is a one-stop shop for all things quackery. More amusingly, even according to Adams’ self-serving criteria, I’m still a skeptic. After all, I’ve attacked the pharmaceutical industry when it’s misstepped. I’ve written about mammography and its shortcomings extensively and given realistic appraisals of what chemotherapy can and can’t do.
Adams parodies skeptics so that he can dismiss them and then pat himself on the back for doing so, but he doesn’t realize that most skeptics who write about the sort of quackery he so loves don’t fit his caricature. Certainly I don’t. Neither does Steve Novella. Neither does Mark Crislip. Ben Goldacre, whom I met last week for the first time, criticizes homeopathy and “alt-med,” but in reality his true notoriety comes from criticizing pharmaceutical companies. Indeed, last week I heard him accuse the pharmaceutical companies of lying to him about their drugs. No, the real difference between skeptics and pseudoskeptics is that skeptics base their skepticism on science and evidence. Pseudoskeptics like Mike Adams base theirs on ideology.
Of course, skeptics have different interests, and there is nothing wrong with that. Some of us are more interested in the claims of alt-med than others, and some aren’t very interested in the misdeeds of pharmaceutical companies. I’ve heard some argue that there are plenty of watchdogs who criticize pharmaceutical company misdeeds but not so many taking on the claims of alt-med. Again, there is nothing wrong with that. More importantly, we are consistent in our application of evidence, regardless of whether we’re applying our skepticism to Mike Adams’ quackery or to the perfidy of pharmaceutical companies.
One last point: For all the railing Mike Adams does against science and science-based medicine, remember this. It was not Mike Adams who discovered this potential problem with placebos. It was not one of Mike Adams’ merry band of quacks who discovered it. I daresay it was not even one of Mike Adams’ readers. No! It was scientists, examining the methods of science-based medicine, who did the hard work of carrying out the study. In essence, it was science criticizing itself.
The contrast with Mike Adams and his ilk couldn’t be more stark.
Golomb BA, Erickson LC, Koperski S, Sack D, Enkin M, & Howick J (2010). What’s in placebos: who knows? Analysis of randomized, controlled trials. Annals of internal medicine, 153 (8), 532-5 PMID: 20956710
37 replies on “What’s in a placebo? Mike Adams certainly doesn’t know.”
You’re quoting Mike Adams. You don’t need to warn us about the heap-o-burnin’ stoopid. You need to warn us if it’s not stupid.
Which is how science works. Using this to attack science is the same basic manoeuvre as criticising real medicine because it abandons stuff that doesn’t work, while CAM never abandons anything (or as they tend to put it “has survived the test of time”).
Mike Adams is so stupid, I have trouble even reading the insolence because it makes me want to crawl through the computer screen and strangle him. He’d put P.T. Barnum to shame, that’s for sure.
OT: Is it just me, or is SBM down? I can’t get the page to load.
Here, the regulations on clinical trials specify that minimally a qualitative list of ingredients in the placebo be provided, but I agree that more information would be better. It’s a little surprising that guidance documents don’t insist on more, but thankfully a good portion of clinical trials do present much more thorough descriptions of their placebos.
Thanks for this. I suppose I knew that a placebo is not just a sugar pill, but I needed a reminder. Now I see why olive oil could be needed in a placebo, and be a problem. Overall, your explanation is a lot more balanced than Adams’ black-and-white stance that “placebos are not inert”.
Not that this is surprising me. A lot of things are more balanced than the Health Ranger.
Well, most of us humans do have a safety zone, so I suppose I should thank him for reminding me of my own limitations.
But will Mike Adams realize that he is just describing his own behavior?
Some of the cases mentioned in the article seem to me to suggest that the problem is broader than just reporting of placebo ingredients. The lactose bit, for instance. That sort of thing makes it appear that the lack of reporting is simply a symptom of a broader issue of lack of thinking about it. Not surprisingly, I suppose. It would be all too natural to just say “oh, this is a standard placebo”.
Requiring reporting of the ingredients won’t be enough, since there’s a very real risk that it would be reported without thought by the writers, and skimmed over without notice by the readers. When really what’s needed is for everyone to make it part of their standard thinking to ask, “might this choice of placebo actually have some biological activity relevant to the study?”, “how easy would it be for the subjects to distinguish this placebo from the active treatment?”, and so on.
I can’t help but think that, if such thinking were as common as it really ought to be, placebo composition would be reported already.
Re placebo contents in general: It seems to me that the best placebo would be the “filler” in the real pill. If the real drug’s preparation uses olive oil, lactose, and dextrose, why not make the placebo out of the same ingredients (in the same proportions)? That would avoid the basic problem, and if everyone did better than expected from historic controls, maybe it’s time to start investigating the “inactive” ingredients. (Isn’t that how valproic acid was found to have anti-epileptic properties?)
Well, it seems like this problem would be greater in situations where (a) a small number of studies were performed and (b) the effect of the drug is smaller. The larger the number of studies, the more likely it is that many different placebos were used. Similarly, a non-“inert” placebo will be more of a problem if the drug does slightly better than the placebo than if the drug is many times as effective as the placebo.
There was an interesting article in Wired about how the placebo effect seemed to be getting stronger:
I wonder if the ingredients used in the placebo were a factor.
Shouting “Fraud!” when it’s clearly nothing of the sort would really undermine Adams’ real argument. If he had one, that is.
That’s often how placebos are made. Placebos for vaccines, for example, can be the formulation without the antigen. This only becomes problematic if the active ingredient is somehow perceptible (an odour, colour, sensation…) in which case one may have to add something to duplicate the attribute.
A double-blinded test (with effective blinding) of the effectiveness of capsaicin/menthol creams would be difficult without a way to simulate the smell/sensation.
The amount of excipient contained in standard placebos is truly unimportant. It’s close to homeopathy to claim that a hundred milligrams of cornstarch, methylcellulose, or vegetable oil (standard excipients) is likely to have a significant physiological effect. These ingredients are generally recognized as inert at the doses provided. That’s why the publications don’t mention the excipients. They don’t matter. The publications also don’t mention the color of ink the protocol is printed in, or whether the investigators part their hair on the right or left. It doesn’t matter.
Even so, the protocols themselves do specify the excipients, and those are reviewed by the relevant regulatory agency (such as FDA) and by the relevant ethics committees. If there was a risk to the study from the composition of the placebo, those guys would object.
All that does matter is if the treatment separates from placebo. Active drugs do. “Alternative treatments” generally don’t.
Disclosure: I work for a pharma.
The capacity for self-criticism, among other abilities, is a higher mental function which usually develops around adolescence, from which Mikey has apparently not yet progressed. But seriously, while I am certainly amused by his antics, and those of Mercola and Null, I am truly upset by the *reach* of these web woo-meisters. While I don’t believe their wildly inflated audience estimates for an instant, I know that they affect many unsuspecting people who follow their advice, buy their products, and use social media to “spread the word”. Their counselling goes far beyond simple dietary advice: they frighten patients about standard SBM treatments for serious illness, they cultivate distrust of medical professionals and pharmaceuticals, and above all, they waste valuable time. These medical fabulists, encouraged by the support of their faithful followers, stride boldly into new areas of inexpertise like psychology and economics. People take their bad advice. At the depths of the recession (March 2009), Null encouraged listeners to “get out of the market, take your money out of the banks” (“Sell low!”); he continues to give financial advice. Mercola sells EFT; Adams champions the CCHR. I don’t expect them to disappear while they continue to financially succeed.
While he lived in Ecuador, Mike believed that the Spanish-speakers at the market always complimented him by saying that he spoke like a “local”. What he didn’t realize is that “loco” does not mean “local”.
Having read the Golomb et al paper, I was disappointed that they didn’t take the next step and contact the authors (most papers I’ve read have a “corresponding author” for just that purpose) and ask them what they used for a placebo.
Perhaps that didn’t occur to them or perhaps it would have negated some of the “impact” of their study.
What Mr. Adams conveniently forgets to mention is that Golomb et al only looked at whether the composition of the placebo was mentioned in the paper (I don’t believe they mentioned whether they looked in supplemental data), not whether the placebo was “valid” or not.
Another point is that “high impact” journals also tend to have the tightest requirements on the number of words in a published paper. That’s why journals like Nature, Science, PNAS, Cell, etc. have such massive on-line supplemental data sections. When looking where to trim “excess words”, one obvious choice is in the description of the placebo. After all, if someone is truly interested in what the authors used as a placebo, they can e-mail the corresponding author.
Which, apparently, is something that Golomb et al didn’t bother to do.
A real problem, however, is that when drug effects are marginal, and side effects significant, then placebo effects may be more beneficial than drug treatment.
This appears, unfortunately, to be the case for many drugs, especially anti-depressants. (Unfortunately, placebo effects don’t endure, so you need to keep changing the treatment.) If acupuncture works without notable side effects, then it may be the right answer. And perhaps it’s more durable than many other treatments. (That would require studies that I’m not aware of.)
Just because a treatment is based around a placebo effect doesn’t mean it’s not the optimal treatment.
Additionally, I’m not convinced that most drug trials control sufficiently to eliminate the placebo effect from the drug that they’re trying to sell. In which case most of the drug action may be due to placebo effect, and the side effects may merely be things that cause the placebo effect to be durable by proving to the participant that something is really being done.
I’ll grant that I’m no expert in this area, but I’ve read many (popularized) reports that seem to back up this opinion.
N.B.: This doesn’t apply when you can actually show a mechanism that is validly reasonable for why the drug in question should work. But often that isn’t the case. And even sometimes when it is, the explanation is faulty. (And, of course, this only applies to drugs with marginal effects.)
But when a drug company is trying to sell you a product, they aren’t an unbiased party. And they have often been caught either suppressing negative reports, or in outright lies. And the expense of the drug is, itself, a negative effect. (Not that acupuncture is free, of course.)
Today I’m very upset at drug companies because a generic drug that I’ve been taking has been taken off the market and the same thing is now only available under a trade name at a vastly increased price. Today I wouldn’t put much of anything past those companies. So this may not be an unbiased post.
I skipped all the Mike Adams stuff. In any case, it sounds like this is probably not too big of a problem, but it seems fair enough to suggest that it ought to be standard practice to always record the contents of the placebo, even in a footnote or something. It does not seem like it would really be a resource drain on researchers, and that way it’s there and you have closer to full disclosure. In the unlikely event that the choice of placebo was an important factor, now you have it recorded and that can be tested later if the suspicion arises.
Just for comparison, a glass of milk contains about 12 grams of lactose. So even if somebody was lactose intolerant, they would be unlikely to be affected by the small amount of lactose that will fit into a pill.
While I agree with Orac that the composition of the placebo is important enough to be specified in publications, the likelihood that reactions to the “inert” components of a placebo are a significant confound to interpretation of placebo-controlled trials is pretty close to zero.
Excellent point Prometheus, thanks for bringing that to attention.
I work in government regulation of pharmaceuticals. My approach will thus tend toward transparency and completeness of information.
If they are standard placebos it should be easy to list the composition. If it’s simply the drug product formulated without the drug substance, that’s easily noted as well. It’s easier to simply provide the information for all clinical trials than to have us request it when we want to see it, and have delays while the regulatory staff at the drug company contacts the scientific staff to try to determine what the placebo was.
Generally non-medicinal ingredients are inert, but in the interest of completeness the contents should be reported, and our guidance documents do specify that a list of ingredients be provided for placebos.
Now, for academic research (rather than approved clinical trials) it would be nice to have the ingredients, but with space limitation I can see why it might not be feasible.
To paraphrase a quote upon reading Adams’s “report.”
THE STUPID!! IT BURNS!!!!!
Honestly, does Adams not get basic statistics?
Ha, I apparently wasn’t the only one with this reaction.
But then, it is a lot easier to fight a potential bogeyman than to investigate it properly to learn that you are making a big stink over nothing.
Huh? If they are standard placebos, then why bother stating the composition? Why not just say, “We used the standard placebo”?
It’s when you use NON-STANDARD stuff that you need to provide details. When we report chemicals that we use in our studies, we say, “They were obtained from commercial suppliers and used without purification.” When that applies. It is when we make up something unusual that we provide details of how we got it. “Substance X was prepared by the following procedure…”
“I was disappointed that they didn’t take the next step and contact the authors…”
“Purpose: To assess how often investigators specify the composition of placebos in randomized, placebo-controlled trials.”
I’m disappointed that the authors didn’t send me a million dollars, but then they never set out to do that. The purpose of the paper was to review disclosure in published papers, not to determine validity of the conclusions of the papers based on possible influence of placebo composition.
My disappointment was not that they didn’t achieve their stated purpose, but that they were so close to having meaningful results and failed to put in the effort to reach them.
As their study stands, they have only shown that – in the high-impact journals they chose to examine – including the composition of the placebo in the article was not a high priority of either the authors or the editors and reviewers. Not a very useful bit of information.
However, if they had taken the next step and checked to see what the placebos were – which, I admit, would have required significantly more effort than simply reading a number of studies – they would (potentially) have had findings of real significance. Instead, they settled for “exposing” a minor editorial quibble.
Mike Adams – in his infinite ignorance – saw this and made the leap of illogic to “placebo-gate”. I’d be disappointed in him, too, if I had any expectations that he was a reasonable person.
Charles, that’s why drug trials are placebo-controlled. It is assumed that the drug under investigation will have a “placebo effect”, so they compare it to a “placebo”, which has no physiological effect but which the subjects cannot distinguish from the drug being tested (if they do it right).
While the “real” drug may have detectable side-effects (and non-side-effects, i.e. desired effects), subjects receiving placebos also report “side-effects”, including dizziness, sedation, euphoria, nausea etc. That’s another good reason to use a placebo-controlled study: you can also determine which of the side-effects are from the “placebo effect”.
On the placebo effect: I once got a friend “drunk” by making her G&Ts with water poured out of a gin bottle for a prank. She was really quite tipsy, giggling, swaying and bright pink after several generous measures of tap water. The prank was revealed when she asked me to cut her off because she was starting to feel a bit ill.
Thanks for this. I often doubt that placebos are truly indistinguishable from the ‘active’ prep. Most recently, I saw a study on elderberry extract where I wondered how their placebo mimicked the color and flavor of the real thing. They did not explain how they did that, and I doubt they could.
[email protected] –
As you know, working in gov’t regulation, the Clinical Study Reports submitted to FDA are far more complete than the publications submitted to journals. Every CSR I’ve ever written contains a list of the placebo excipients. And at the time drugs are reviewed for approval, module 3 of the eCTD contains all manufacturing information, in boringly exquisite detail. With over a dozen submissions under my belt, I can say that no regulatory reviewer has ever had to request such information; it’s always provided.
But the article in question wasn’t looking at deficiencies in information provided to regulators. It was focused entirely on journal publication. Most journals have word length limits and using up precious words listing unimportant parts of the study would be a waste.
You mentioned “standard placebos.” Sadly, there can’t be such things, though they would make life easier. Different medicines require different excipients and different formulation processes in order to ensure proper dissolution and absorption. The placebos are generally identical except for the active ingredient (sometimes with an inert component such as cornstarch added to make the weights match). But because the formulation of a pill is dictated by the chemical properties of the medicine studied, there will never be a “Standard Placebo.”
Well, what if they give a sugar pill as a placebo, but the sugar pills were really a homeopathic remedy for the condition being treated?
Now what’s Adams going to say?
Homeopathy being tested against the big pharma drug!
heavens to murgatroyd!
I think it’s funny that you’ve spent so much time and energy attacking Mike Adams. If he is such a quack, why even acknowledge his article? If what he’s saying is so bizarre and requires the use of a tin foil hat to read, then why are you bringing so much attention to his article?
I certainly don’t speak for Orac, but note that Mike Adams apparently has a following. These people seem unconvinced that he is a quack, and may believe he has some amount of reason and evidence on his side. There are also likely others who repeat Adams’ comments or make similar ones.
If Orac or someone like him doesn’t explain just why Adams is wrong, there is no counter to his erroneous statements. Without a counter, some people might believe him – and this would not be a good thing.
Why speak about Adams?
Mikey is hilarious. It’s like easily mined comedy gold. Right there on the surface.
No problem here. In a well conditioned Pavlovian response, whenever I read the name Mike Adams, the head protection goes on immediately.
In fact, I keep one (personally molded into four foil layers!) in a handy spot under my desk especially for this purpose.
Why would anyone bother to inform potential customers of the potentially dangerous defects regularly found in cars sold at “Conman X’s Quality Used Cars”?
Why inform the consumers about toy designs which pose a choking risk to the age group they were designed for?
All those mass media warnings of badly designed, dangerous, or ineffective products? Why?
Why on the FSM’s Drunken Creation do so many people spend so much time informing the potentially uninformed about any number of scams currently doing the rounds?
Why do they even use lactose as a pill-binder? Contrary to the conventional wisdom (“Just for comparison, a glass of milk contains about 12 grams of lactose. So even if somebody was lactose intolerant, they would be unlikely to be affected by the small amount of lactose that will fit into a pill.” at #17) there are plenty of people who are lactose intolerant enough to be bothered by milligram levels of lactose; I’m one of them. I sure as hell can’t drink a glass of milk; I’d likely wind up in the hospital, no word of a lie. If the lactose intolerance didn’t do it, the casein allergy would. I do so love breaking out in itchy spots.
Seriously, can’t they find something more inert to use in pills, something that half the world isn’t intolerant of? I’m given to understand that lactase persistence is still in the minority, genetically speaking.
That most skeptics who write about quackery of the sort he so loves… what?
> Why do they even use lactose as a pill-binder?
Because it works well as a pill-binder. #captainobvious
Also, you’ll note if you do ever sign up for a trial using these, or get an actual medication, that you’re warned that it’s there.
It seems like this is the typical “we could do this better” article, and its suggestion will likely be implemented if it makes the science more easily reproducible, which is what I took to be the authors’ main point.
We have similar problems in optical physics. When specific experiments are dependent on a myriad of parameters, one finds that rarely, if ever, are those parameters effectively reported in the literature, making reproduction a real chore. In trying to figure out some effects in my own data, I turned to a paper that cited an effect due to the amount of laser power in a specific area of the sample. Unfortunately for the audience, the authors reported neither the quantity of power nor the amount of area they illuminated! Reproduction becomes quite difficult in such a case. I don’t know how that one slipped by the editors.
A specific question for the medical scientists here though. It seems that the placebo effect described in the context of the studies discussed in this paper is due to there being a positive or negative effect on the same variable that the trial drug attempted to affect. Is that where the placebo effect usually comes from? Or is there some psychological aspect to it as well in which there is a longer route between cause and effect?
Thanks. It’s a very interesting aspect of medical research to me.
The “placebo effect” traditionally refers to people’s symptoms appearing to improve because they think they’re supposed to be feeling better — a psychological effect — but it’s not limited to this. It also encompasses observer bias (the primary source of placebo effect in veterinary and plant medicine) and what you refer to: an actual effect in the control substance which unfortunately acts on the same system as the study substance.