New research shows that physicians in industry-sponsored trials are more captured by pharmaceutical companies than physicians in unsponsored ones.
Economists love health care because it is a classic example of “information asymmetry”: patients do not know what treatment they need, how much it should cost, or which tests will properly diagnose them. Instead, patients have to trust their physicians to prescribe, treat, diagnose, and prevent their afflictions.
Yet, even if physicians act entirely for their patients’ wellbeing, their main source of information on new treatments is clinical trials, and trial results are shaped by who sponsored the trial.
In new research, I find that the magnitude of sponsorship bias in clinical trials is statistically significantly positively correlated with how distinct the side effects of the drug are. From this result, I conclude that sponsorship influences physicians overseeing a clinical trial. Sponsorship incentivizes physicians to show that the sponsor’s drug is more effective, and when physicians can more clearly identify which drug a patient is taking (based on side effects), this bias grows.
Clinical trials are how new pharmaceuticals are tested. They are used to evaluate whether a drug is safe and effective, as well as how to dose it properly. Most clinical trials are sponsored by pharmaceutical companies. This sponsorship is a conflict of interest because after spending an average of $430 million on developing a new drug, the pharmaceutical company is now tasked with evaluating the efficacy of its own drug.
The data show that sponsored clinical trials are much more likely to find the sponsor’s drug superior. This is called sponsorship bias. Sponsorship bias is quantified by the difference between sponsored and unsponsored clinical trial results. Joel Lexchin, a professor at York University in Toronto, finds that clinical trials sponsored by a drug company are four times as likely to find the sponsor’s drug superior. Comparisons between drugs likewise favor the drug manufactured by the trial’s sponsor.
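As a rough sketch of this measure, with purely hypothetical effect sizes, sponsorship bias can be computed as the gap between the average result reported in sponsored trials and the average result in unsponsored trials of the same drug:

```python
# All numbers are illustrative, not real trial data.
# Reported effect sizes (e.g., standardized improvement scores)
# for the same drug, split by who funded each trial.
sponsored = [0.45, 0.52, 0.48, 0.50]
unsponsored = [0.30, 0.35, 0.28, 0.33]

def mean(xs):
    return sum(xs) / len(xs)

# Sponsorship bias: how much larger the drug's measured effect
# looks when the trial is funded by its manufacturer.
bias = mean(sponsored) - mean(unsponsored)
print(bias)
```

In this toy example the sponsored trials report an effect roughly 0.17 larger than the unsponsored ones; the real analysis works the same way, just across many drugs and trials.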
For example, consider head-to-head trials of Sertraline and Venlafaxine, two drugs that treat depression. The trial sponsored by Sertraline’s producer found that the two drugs are equally effective but concluded that Sertraline was preferable due to its lower side-effect burden. The makers of Venlafaxine found Venlafaxine to be significantly more effective.
For my research, I chose to explore antidepressants because patient improvement is more difficult to quantify than for many other drug classes. Not only is the reported efficacy of antidepressants more subject to physicians’ interpretation of patients’ moods, but antidepressants also come with a heavy side-effect burden. Moreover, major depression is estimated to cost the US economy $210 billion annually. Finally, previous research has found prominent sponsorship bias in clinical trials for antidepressants.
Pharmaceutical companies create sponsorship bias by influencing results in several ways. The most prevalent way is publication bias: by sponsoring the trial, pharmaceutical companies can choose how data gets published, where it is published, and which trial results are published. For example, Merck sponsored clinical trials of the drug Vioxx, and hid that the drug significantly increased patients’ risk of stroke or heart attack.
Industry-sponsored trials can also be cut short, or their results withheld, if they are not going to show the sponsor’s drug superior. For example, GlaxoSmithKline did not report the results of four trials showing that its antidepressant was both ineffective and associated with an increased risk of suicidal tendencies.
Finally, articles summarizing sponsored clinical trials can spin neutral results in a positive way, such as the Sertraline findings above.
Other potential explanations for this sponsorship bias include trial design and methodology concerns. For example, pharmaceutical companies typically run larger clinical trials; a larger patient population enables more precise efficacy estimates and is more likely to produce statistically significant results. However, researchers dispute whether pharmaceutical industry-sponsored trials are necessarily poorer-quality trials.
Another potential contributor to sponsorship bias is captured physicians. Captured physicians are not necessarily corrupt physicians. Capture, a theory originating with University of Chicago economics professor and Nobel laureate George Stigler, holds that regulatory agencies come to be influenced by the corporations they are supposed to regulate.
All regulatory agencies experience some level of capture due to working with industry researchers and the revolving door between government and industry work. However, it is important to recognize when capture is significant enough to sway legislators.
In this instance, physicians are the regulators of drug trials. They are charged with reporting the efficacy of a drug on their patients. However, these supposedly unbiased physicians are paid by pharmaceutical companies that have a huge financial incentive for the drug to work. So, physicians are under pressure to show the drug works.
To prevent capture, clinical trials are typically double blind: neither the physician nor the patient knows whether the patient is taking a drug or a placebo. Likewise, in a drug comparison trial, neither the physician nor the patient knows which drug the patient is taking. The purpose of these double blind trials is to protect clinical trial results from the biases of participating physicians.
However, since many patients will come in complaining of drug-related side effects, the physician can infer which patients are taking the drug and which are taking the placebo. This is called a broken blind.
Thus, more side effects mean more broken blinds. The connection between broken blinds and patients’ reported improvement has been the subject of previous research. Irving Kirsch, associate director of placebo studies at Harvard Medical School, finds that experiencing side effects is 96 percent correlated with experiencing mood improvement from antidepressants.
A patient who expects the drug to work, and believes themselves to be on it due to side effects, is more likely to report that the drug works, a phenomenon known in psychology as “expectation bias.”
Whereas Kirsch explored the impact of broken blinds on patients, I explored how broken blinds influence physicians.
Using data from Tamar Oostrom, an MIT researcher, I found a statistically significant positive relationship between the magnitude of sponsorship bias and how common and distinctive the side effects of the drug were. This suggests that physicians in sponsored trials are more captured—recall, all regulators face some level of capture—than those in unsponsored trials. Moreover, it suggests that a captured physician’s ability to influence the results of a clinical trial is limited to patients for whom the blind has been broken.
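To illustrate the shape of this analysis (not the actual data or method of the underlying study), here is a minimal sketch: given a hypothetical score for how distinctive each drug’s side effects are, and a hypothetical measured sponsorship bias for each drug, a Pearson correlation tests whether the two move together:

```python
# All values are hypothetical, for illustration only.
# One entry per drug: how distinctive its side effects are (0-1)
# and the measured sponsorship bias for that drug's trials.
distinctiveness = [0.1, 0.3, 0.5, 0.7, 0.9]
bias_magnitude = [0.02, 0.05, 0.09, 0.12, 0.18]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(distinctiveness, bias_magnitude)
print(r)
```

With these made-up numbers the correlation is strongly positive: drugs whose side effects are easier to recognize show more sponsorship bias, which is the pattern the research finding describes.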
Due to all of the biases outlined in this essay, public funding is crucial to preserving the integrity of clinical trials as indicators of drug efficacy.
Because public funding comes from the government, publicly sponsored trials lack strong incentives to produce a particular result and are thus less biased. This makes them an alternative to private funding, which carries a strong financial incentive for a specific outcome.
Yet the budget of the National Institutes of Health—the most prominent source of public funding for health research—has not even kept pace with inflation. This has troubling implications for the future of clinical trials. Industry-sponsored clinical trials are a concerning conflict of interest, and publicly funded clinical trials through the NIH are crucial to preventing bias.
In order for physicians to compare pharmaceuticals in a comprehensive way, they must have a source of unbiased clinical trials. Moreover, if the private sector becomes the only source of funding for clinical trials, pharmaceutical companies may face incentives to develop drugs with more distinctive side effects, the better to let captured physicians break blinds and bias trials.