Differences in Reporting of Analyses in Internal Company Documents Versus Published Trial Reports: Comparisons in Industry-Sponsored Trials in Off-Label Uses of Gabapentin
by S. Swaroop Vedula, Tianjing Li, and Kay Dickersin
January 29, 2013
[full text on-line]
Background: Details about the type of analysis [e.g., intent to treat [ITT]] and definitions [i.e., criteria for including participants in the analysis] are necessary for interpreting a clinical trial’s findings. Our objective was to compare the description of types of analyses and criteria for including participants in the publication [i.e., what was reported] with descriptions in the corresponding internal company documents [i.e., what was planned and what was done]. Trials were for off-label uses of gabapentin sponsored by Pfizer and Parke-Davis, and documents were obtained through litigation.

Methods and Findings: For each trial, we compared internal company documents [protocols, statistical analysis plans, and research reports, all unpublished] with publications. One author extracted data and another verified, with a third person verifying discordant items and a sample of the rest. Extracted data included the number of participants randomized and analyzed for efficacy, and the types of analyses for efficacy and safety and their definitions [i.e., criteria for including participants in each type of analysis]. We identified 21 trials, 11 of which were published randomized controlled trials that provided the documents needed for planned comparisons. For three trials, the research report and the publication disagreed on the number of randomized participants. Seven types of efficacy analyses were described in the protocols, statistical analysis plans, and publications, including ITT and six others. The protocol or publication described ITT using six different definitions, resulting in frequent disagreements between the two documents [i.e., different numbers of participants were included in the analyses].

Conclusions: Descriptions of analyses conducted did not agree between internal company documents and what was publicly reported.
Internal company documents provide extensive documentation of methods planned and used, and trial findings, and should be publicly accessible. Reporting standards for randomized controlled trials should recommend transparent descriptions and definitions of analyses performed and which study participants are excluded.
In our sample of industry-supported trials of off-label uses of gabapentin, we observed discrepancies between what was reported in trial publications and what was described in internal company research reports. In this regard, we found that the trial publication was not a transparent or accurate record (presuming that the research report truly describes the facts) of the numbers of participants randomized and analyzed for efficacy. In three of ten trials in our sample, the number of participants randomized in the trial, as specified in the “main publication”, was not the same as that described in the research report. The “main publication” was a full-length journal article for two of the three trials with a disagreement in the number of participants randomized, and a conference abstract describing preliminary results for the third. In one case, the description in the publication did not include data from 40% of the participants actually randomized in the trial. There was such wide variation in describing participant flow in the included trials, even among documents for the same trial, that we were unable to summarize what we found.
In addition, we observed extensive variability, both among documents for the same trial and across trials, in the types of analyses specified for efficacy and safety, as well as in the criteria for including participants in each type of analysis; this is consistent with Chan’s findings comparing protocols and publications. Our study extends comparisons of the protocol and publication to comparisons of analyses presented in internal company research reports and analyses presented publicly.

We are concerned that, even for commonly used types of analysis such as ITT, a number of different definitions were used across trials included in our sample. Trial findings may be sensitive to the alternative definitions used for a type of analysis, i.e., analyses using different definitions can include different subsets of participants, thereby leading to different findings. Because internal company documents are not generally available, our study provides a unique snapshot of how well a publication may reflect what is known by the company. Since doctors rely on publications for information about a drug’s effectiveness for off-label indications, our study raises particularly important questions that may apply more broadly to other drugs like gabapentin, which has been prescribed so frequently for off-label indications.
When I look back on my own wanderings in this literature, I am surprised at what a slow learner I’ve been. Right about two years ago, after a few years of reading and writing about what others said about the corruption in the pharmaceutical companies’ presentation of their data, I took the plunge, began to haunt clinicaltrials.gov, PubMed, and the FDA site, and started reading the clinical trials literature myself in the original. I started with Seroquel [seroquel II [version 2.0]: guessing…]. I was pretty naive, so it took me a while to realize how widespread the pseudoscience really was. The more I looked, the more I found. I discovered what those who had gone before had known for years, but I guess I’m glad it unfolded slowly. I’m not sure I could’ve registered it all at once. With as much time as I’ve spent looking, and as many times as I’ve been frustrated by not having full access to the data itself, it was only four or five months ago that it occurred to me that the solution was simple – the raw data for all studies has to be made publicly available. And it took my actually running across the raw data for one study [Paxil Study 329] to bring on my belated awakening [a movement…], because I saw how easy it was to spot the distortions.
For reasons I can’t even reconstruct right now, I had passively gone along with the idea that the companies owned that data, without really thinking about it. That now seems crazy to me. There are no trade secrets in the data. The companies themselves publish articles that are supposed to be accurate reflections of that data, so why should the data itself be secret? Right now, I can’t even mount an argument for why it should’ve been allowed to be secret in the first place. As I finally figured out five months ago, the only reason I can think of for keeping raw clinical trial data secret is to make distortion possible.