Financial ties of principal investigators and randomized controlled trial outcomes: cross sectional study, by Rosa Ahn, Alexandra Woodbridge, Ann Abraham, Susan Saba, Deborah Korenstein, Erin Madden, W John Boscardin, and Salomeh Keyhani. BMJ 2017;356:i6770
Objective: To examine the association between the presence of individual principal investigators’ financial ties to the manufacturer of the study drug and the trial’s outcomes after accounting for source of research funding.

Design: Cross sectional study of randomized controlled trials [RCTs].

Setting: Studies published in “core clinical” journals, as identified by Medline, between 1 January 2013 and 31 December 2013.

Participants: Random sample of RCTs focused on drug efficacy.

Main outcome measure: Association between financial ties of principal investigators and study outcome.

Results: A total of 190 papers describing 195 studies met inclusion criteria. Financial ties between principal investigators and the pharmaceutical industry were present in 132 [67.7%] studies. Of 397 principal investigators, 231 [58%] had financial ties and 166 [42%] did not. Of all principal investigators, 156 [39%] reported advisor/consultancy payments, 81 [20%] reported speakers’ fees, 81 [20%] reported unspecified financial ties, 52 [13%] reported honorariums, 52 [13%] reported employee relationships, 52 [13%] reported travel fees, 41 [10%] reported stock ownership, and 20 [5%] reported having a patent related to the study drug. The prevalence of financial ties of principal investigators was 76% [103/136] among positive studies and 49% [29/59] among negative studies. In unadjusted analyses, the presence of a financial tie was associated with a positive study outcome [odds ratio 3.23, 95% confidence interval 1.7 to 6.1]. In the primary multivariate analysis, a financial tie was significantly associated with positive RCT outcome after adjustment for the study funding source [odds ratio 3.57, 1.7 to 7.7]. The secondary analysis controlled for additional RCT characteristics such as study phase, sample size, country of first author, specialty, trial registration, study design, type of analysis, comparator, and outcome measure. These characteristics did not appreciably affect the relation between financial ties and study outcomes [odds ratio 3.37, 1.4 to 7.9].

Conclusions: Financial ties of principal investigators were independently associated with positive clinical trial results. These findings may be suggestive of bias in the evidence base.
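The unadjusted odds ratio in the abstract can be reproduced directly from the reported counts [103/136 positive studies with a financial tie, 29/59 negative studies with one]. A minimal sketch in Python, using the standard Woolf (log) normal-approximation confidence interval — a reasonable assumption for how the crude interval was computed:

```python
import math

# Counts reported in the abstract:
# positive-outcome studies: 103 of 136 had a PI financial tie
# negative-outcome studies:  29 of  59 had a PI financial tie
tie_pos, no_tie_pos = 103, 136 - 103   # 103, 33
tie_neg, no_tie_neg = 29, 59 - 29      # 29, 30

# Unadjusted odds ratio: odds of a tie among positive studies
# divided by odds of a tie among negative studies
odds_ratio = (tie_pos / no_tie_pos) / (tie_neg / no_tie_neg)

# Woolf (log) method for an approximate 95% confidence interval
se_log_or = math.sqrt(1/tie_pos + 1/no_tie_pos + 1/tie_neg + 1/no_tie_neg)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {lo:.1f} to {hi:.1f})")
# → OR = 3.23 (95% CI 1.7 to 6.1)
```

This matches the paper’s unadjusted estimate [3.23, 1.7 to 6.1]; the adjusted odds ratios [3.57 and 3.37] come from the multivariate models and cannot be recovered from the abstract alone.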
We found that more than half of principal investigators of RCTs of drugs had financial ties to the pharmaceutical industry and that financial ties were independently associated with positive clinical trial results even after we accounted for industry funding. These findings may raise concerns about potential bias in the evidence base.
Possible explanations for findings
The high prevalence of financial ties observed for trial investigators is not surprising and is consistent with what has been reported in the literature. One would expect industry to seek out researchers who develop expertise in their field; however, this does not explain why the presence of financial ties for principal investigators is associated with positive study outcomes. One explanation may be “publication bias.” Negative industry funded studies with financial ties may be less likely to be published. The National Institutes of Health [NIH]’s clinicaltrials.gov registry was intended to ensure the publication of all trial results, including both NIH and industry funded studies, within one year of completion. However, rates of publication of results remain low even for registered trials…

Other possible explanations for our findings exist. Ties between investigators and industry may influence study results by multiple mechanisms, including study design and analytic approach. If our findings are related to such factors, the potential solutions are particularly challenging. Transparency alone is not enough to regulate the effect that financial ties have on the evidence base, and disclosure may compromise it further by affecting a principal investigator’s judgment through moral licensing, which is described as “the unconscious feeling that biased evidence is justifiable because the advisee has been warned.” Social experiments have shown that bias in evidence is increased when conflict of interest is disclosed.

One bold option for the medical research community may be to adopt a stance taken in fields such as engineering, architecture, accounting, and law: to restrict people with potential conflicts from involving themselves in projects in which their impartiality could be potentially impaired. However, this solution may not be plausible given the extensive relationship between drug companies and academic investigators.
Other, incremental steps are also worthy of consideration. In the past, bias related to analytic approach was tackled by a requirement for independent statistical analysis of major RCTs. Independent analysis has largely been abandoned in favor of the strategy of transparency, but perhaps the time has come to reconsider this tool to reduce bias in the analysis of RCTs. This approach might be especially effective for studies that are likely to have a major effect on clinical practice or financial implications for health systems. Another strategy to reduce bias at the analytic stage may be to require the publishing of datasets. ICMJE recently proposed that the publication of datasets should be implemented as a requirement for publication. This requirement is increasingly common in other fields of inquiry such as economics. Although independent analyses at the time of publication may not be feasible for journals from a resource perspective, the requirement to release the dataset to be reviewed later if necessary may discourage some forms of analytical bias. Finally, authors should be required to include and discuss any deviations from the original protocol. This may help to prevent changes in the specified outcome at the analytic stage…
Is the Randomized Controlled Trial [RCT] a good way to determine clinical usefulness?
In 1962, the FDA was charged with requiring two Randomized Controlled Trials [RCTs] demonstrating statistical efficacy and all human usage data demonstrating safety in order to approve a drug for use. It’s a weak standard, designed to keep inert potions off the market. It was presumed that the medical profession would have a higher standard and determine clinical usefulness. That made [and makes] perfect sense. The FDA primarily ensures safety and keeps Swamp Root and other patent medicines out of our pharmacopeia, but clinical usefulness should be determined by the medical profession and our patients. Not perfect, but I can’t think of a better system for approval. However, approval doesn’t necessarily correlate with clinical usefulness, or for that matter, long term safety. And then something unexpected happened. The Randomized Controlled Trial became the gold standard for everything – called Evidence Based Medicine. Randomized Clinical Trials are hardly the only form of valid evidence in medicine. That was a reform idea that kept people from shooting from the hip, but it was also capable of throwing the baby out with the bathwater.
This structured procedure, designed to dial out everything else and isolate the drug effect, became a proxy for the much more complex and varied thing called real life. RCTs have small cohorts of recruited [rather than help-seeking] subjects in short-term trials. Complicated patients are eliminated by exclusionary criteria. The metrics used are usually clinician-rated rather than subject-rated. And the outcome is measured by statistical significance instead of by the strength of the response. Blinding and the need for uniformity eliminate any iterative processes in dosing or identifying target symptoms. It’s an abnormal situation on purpose, suitable for the yes/no questions of approval, but not the for-whom information of clinical experience.
Is it ever going to be possible to create a system that ensures that industry sponsors will openly report on their RCTs without exaggerating efficacy and/or understating toxicity?
These RCTs were designed for submission to the FDA for drug approval. The FDA reviewers have access to the raw data and have regularly made the right calls. But then those same studies are written up by professional medical ghost writers, signed onto by KOL academic physicians with florid Conflicts of Interest, and submitted to medical journals to be reviewed by peer reviewers who have no access to the raw data. The journals make money from selling reprints back to the sponsors for their reps to hand out to practicing doctors. These articles are where physicians get their information, and discrepancies between the FDA version and the journal versions are neither discussed nor even easy to document.

So it’s not the FDA Approval that’s the main problem. It’s the glut of journal articles that have been crafted from those studies and become the substrate for advertising campaigns that has caused so much trouble. The basic Clinical Trials that were part of the Approval have been glamorized. And many trials that were unsuccessful attempts at indication creep have been spun into gold. It seems that every time there’s an attempt to block the fabulation of such trials, there have been countermoves that render the reform attempts impotent. So far, it’s been a chess game that never seems to get to checkmate.
Structured RCTs may well be the best method for our regulatory agencies to use to evaluate new drugs. They cost a mint to do, and about the only people who can fund them are the companies who can capitalize on success – the drug companies. But medicine doesn’t need to, and shouldn’t, buy into the notion that they’re the only way to evaluate the effectiveness of medicinal products. As modern medicine has become increasingly organized and documented, there are huge caches of data available. And it’s not just patient data or clinic data. What about the pharmacy data that’s already being used by PHARMA to track physicians’ prescribing patterns? And where are the departments of pharmacology and the schools of pharmacy in following medication efficacy and safety? Or the HMOs? Or the Health Plans? The VAH? What about the waiting room questionnaires? I’d much rather they ask about the medications the patient is on than be used to screen for depression. It’s really the ongoing data after a drug is in use that clinicians need anyway – more important than the RCT that gets things started.