more distortions…

Posted on Saturday 2 February 2013

And by the way, speaking of synchronicity, guess who has an op-ed piece in the New York Times today? Ben Goldacre [Health Care’s Trick Coin]. Now for a chain of hat tips:
I guess when it rains it pours. I’ve been writing about Ben Goldacre, studies gone missing, dodgy studies, and then along comes this article in PLoS Medicine:

Background: Details about the type of analysis [e.g., intent to treat [ITT]] and definitions [i.e., criteria for including participants in the analysis] are necessary for interpreting a clinical trial’s findings. Our objective was to compare the description of types of analyses and criteria for including participants in the publication [i.e., what was reported] with descriptions in the corresponding internal company documents [i.e., what was planned and what was done]. Trials were for off-label uses of gabapentin sponsored by Pfizer and Parke-Davis, and documents were obtained through litigation.
Methods and Findings: For each trial, we compared internal company documents [protocols, statistical analysis plans, and research reports, all unpublished], with publications. One author extracted data and another verified, with a third person verifying discordant items and a sample of the rest. Extracted data included the number of participants randomized and analyzed for efficacy, and types of analyses for efficacy and safety and their definitions [i.e., criteria for including participants in each type of analysis]. We identified 21 trials, 11 of which were published randomized controlled trials that provided the documents needed for planned comparisons. For three trials, there was disagreement on the number of randomized participants between the research report and publication. Seven types of efficacy analyses were described in the protocols, statistical analysis plans, and publications, including ITT and six others. The protocol or publication described ITT using six different definitions, resulting in frequent disagreements between the two documents [i.e., different numbers of participants were included in the analyses].
Conclusions: Descriptions of analyses conducted did not agree between internal company documents and what was publicly reported. Internal company documents provide extensive documentation of methods planned and used, and trial findings, and should be publicly accessible. Reporting standards for randomized controlled trials should recommend transparent descriptions and definitions of analyses performed and which study participants are excluded.
In dodginess…, I was reporting on an article that compared the Clinical Trial data as reported to the FDA with the data as reported in the published version of the study in the peer-reviewed literature – specifically in the form of the effect sizes. There were discrepancies, always in the direction favorable to the drug’s efficacy. That suggests more than jury-rigging the presentation or spin. It suggests either changing the data or miscalculating on purpose. In either case, they’re stepping out of the range of plausible deniability into the domain of just plain fraud – AKA lies. But the explanations for the differences weren’t clear in that article. In this report, we find something similar. The researchers in the Department of Epidemiology at Johns Hopkins compared company documents obtained through litigation from Clinical Trials for off-label uses of gabapentin sponsored by Pfizer and Parke-Davis with the published version in the peer-reviewed literature. I’m not going to detail their findings because the full-text study is available [and it’s nicely explained in the Science News article] but instead quote from their discussion:
In our sample of industry-supported trials in off-label uses of gabapentin, we observed discrepancies between what was reported in trial publications and what was described in internal company research reports. In this regard, we found that the trial publication was not a transparent, or accurate (presuming that the research report truly describes the facts), record for the numbers of participants randomized and analyzed for efficacy. In three of ten trials in our sample, the number of participants randomized in the trial, as specified in the “main publication”, was not the same as that described in the research report. The “main publication” was a full-length journal article for two of the three trials with a disagreement in the number of participants randomized in the trial, and a conference abstract describing preliminary results for the third trial. In one case, the description in the publication did not include data from 40% of participants actually randomized in the trial. There was such wide variation in describing the participant flow in the included trials, even among documents for the same trial, that we were unable to summarize what we found.

In addition, we observed extensive variability, both among documents for the same trial and across trials, in the types of analyses specified for efficacy and safety as well as in the criteria for including participants in each type of analysis; this is consistent with Chan’s findings comparing protocols and publications. Our study extends comparisons of the protocol and publication to comparisons of analyses presented in internal company research reports and analyses presented publicly.

We are concerned that, even for commonly used types of analysis such as ITT, a number of different definitions were used across trials included in our sample. Trial findings may be sensitive to the alternative definitions used for the type of analysis, i.e., analyses using different definitions of the type of analysis can include different subsets of participants, thereby leading to different findings. Because internal company documents are not generally available, our study provides a unique snapshot of how well a publication may reflect what is known by the company. Since doctors rely on publications for information about a drug’s effectiveness for off-label indications, our study raises particularly important questions that may be applicable more broadly to other drugs like gabapentin, which has been prescribed so frequently for off-label indications.
Frankly, their discussion, damning as it is, soft-pedals what they found. Pfizer and Parke-Davis lied to us [us being doctors and patients] to get Neurontin sold for off-label uses. And it worked. In a clinic in a plenty rural place in the South, the number of people on Neurontin is remarkable. The patients are reluctant to stop it because they were put on it for "pain" – just about any kind of "pain" you can imagine. They’re afraid to stop it because they think they’ll hurt more. Once off of it, neither they nor I can tell a great deal of difference one way or another. But the take-away point in the PLoS Medicine article is that the published articles that justified and popularized these off-label uses were FUBARed for publication – the published numbers and the numbers in the company documents were widely discrepant along a number of axes. In this case, they’ve mangled basic definitions and obviously shopped around for analyses that come out in their favor.
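To make that last point concrete, here’s a toy simulation of my own [invented numbers and assumptions, not the gabapentin data]: a trial in which the drug has no real advantage, but non-responders drop out more often on drug than on placebo. Two reasonable-sounding definitions of "ITT" – count every randomized patient with dropouts carried forward as "no change," or count only patients with a post-baseline measurement – include different subsets of patients and return different drug-placebo differences from the very same trial:

```python
import numpy as np

rng = np.random.default_rng(0)
n_per_arm = 2000  # big enough that the contrast isn't just sampling noise

def simulate_arm(dropout_rate_if_no_benefit):
    """Symptom change scores [negative = improvement]; some non-responders drop out."""
    true_change = rng.normal(loc=-6.0, scale=8.0, size=n_per_arm)  # identical in both arms
    non_responder = true_change >= -6.0
    dropped = non_responder & (rng.random(n_per_arm) < dropout_rate_if_no_benefit)
    # Definition A will impute dropouts as "no change"; Definition B will exclude them.
    imputed_change = np.where(dropped, 0.0, true_change)
    return imputed_change, ~dropped

# Hypothetical assumption: non-responders are likelier to quit the trial on drug.
drug_change, drug_has_postbaseline = simulate_arm(dropout_rate_if_no_benefit=0.40)
plac_change, plac_has_postbaseline = simulate_arm(dropout_rate_if_no_benefit=0.10)

# Definition A: "ITT" = every randomized participant, dropouts carried as no change.
effect_a = drug_change.mean() - plac_change.mean()

# Definition B: "ITT" = only participants with a post-baseline measurement.
effect_b = (drug_change[drug_has_postbaseline].mean()
            - plac_change[plac_has_postbaseline].mean())

print(f"Definition A [all randomized]:      {effect_a:+.2f}")
print(f"Definition B [post-baseline only]:  {effect_b:+.2f}")
# More negative = drug looks better; the two definitions disagree on the same data.
```

With these made-up assumptions, the post-baseline-only definition flatters the drug, because the drug arm quietly sheds more of its non-responders before anyone gets counted. Choose your definition after the fact and you’ve chosen your finding – which is what shopping around for analyses means.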

When I look back on my own wanderings in this literature, I am surprised what a slow learner I’ve been. Right at two years ago, after a few years of reading and writing about what others said about the corruption in the pharmaceutical companies’ presentation of their data, I took the plunge and began to haunt clinicaltrials.gov, PubMed, the FDA site, and started reading our literature on the Clinical Trials myself in the original. I started with Seroquel [seroquel II [version 2.0]: guessing…]. I was pretty naive, so it took me a while to realize how widespread all the pseudoscience really was. The more I looked, the more I found. I discovered what those who had gone before had known for years, but I guess I’m glad it unfolded slowly. I’m not sure I could’ve registered it all at once. With as much time as I’ve spent looking and as many times as I’ve been frustrated by not having full access to the data itself, it was only four or five months ago that it occurred to me that the solution was simple – that the raw data for all studies had to be made publicly available. And it took actually running across the raw data for one study [Paxil Study 329] for me to have my belated awakening [a movement…], because I saw how easy it was to see the distortions.

For reasons that I can’t even think of right now, I had passively gone along with the idea that the companies owned that data without ever really questioning it. That now seems crazy to me. There are no trade secrets in the data. They publish articles that are supposed to be accurate reflections of that data themselves. Why should the data be a secret? Right now, I can’t even mount an argument for why it should’ve been allowed to be secret in the first place. As I finally figured out five months ago, the only reason I can think of for keeping the raw data from clinical trials secret is to make distortion possible.

By the way, have I mentioned the AllTrials petition and movement?…
  1. Steve Lucas
     February 2, 2013 | 10:14 AM

    In the first stat class I ever took as an undergraduate the instructor looked around, closed the door and said: “The first question you ask is: How do you want the numbers to turn out?”

    Steve Lucas

  2. February 2, 2013 | 10:27 AM

    I respect and admire your zeal in this pursuit for truth and transparency. But, I see you swimming against a tide of such combined forces as I have spoken of prior.

    In the end, people are going to have to “touch the stove and get burned” to finally realize that effective care is not just by meds alone.

    And, if my working hypothesis had any merit, that antisocial cretins are more entrenched in positions of power, change can only come with legal consequences. It is what it is at the end of the day.

  3. berit bj
     February 2, 2013 | 2:46 PM

    Thank you, dr Mickey! The blog at http://www.radstats.org.uk (Radical Statisticians) is also helpful. There’s a link to a podcast in the Guardian, a one-hour interview with dr Goldacre, in case someone missed it…
    And research is being done in Europe, gauging increases in contamination from skin care products and pharmaceuticals in lakes, streams, ground- and coastal waters. Fish caught in proximity to some cities are said to contain traces of prescription drugs, as far north as outside Tromsö.

  4. berit bj
     February 3, 2013 | 7:52 AM

    Change will also come from a more knowledgeable public, patients, carers, professionals who pitch in by talking and writing, spreading the message and turning away in disgust from corrupt producers of pharmaceuticals that do lots of harm and little good, contaminating people and environments locally and on a global scale. US media, bloggers, journalists, lawyers and honest doctors have been main sources of information for years. Many brooks have contributed to what I think is an unstoppable flood towards transparency and healthy health care.
    I was reviewing 60 Minutes the other day, the interview with prof Irving Kirsch on the placebo effect of medical interventions and the so-called antidepressants. Jim Gottstein, Judi Chamberlain and David Oaks have all been to Norway, Robert Whitaker is on a grand European tour, soon coming to a place nearby, Gothenburg in Sweden, Ben Goldacre in the NYT! I’m delighted. Thanks to everyone, and to the formidable blogger/docs at 1BOM and HealthCareRenewal. The tide is turning…
