gone missing…

Posted on Friday 1 February 2013

Dr. Ben Goldacre is an entertaining speaker [something of value…, another Ben/TED talk…], a journalist [Bad Science: The Guardian], and the author of several books [Bad Science: Quacks, Hacks, and Big Pharma Flacks, Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients]. He’s a major force behind AllTrials [upper left], the petition calling for full data transparency in ALL Clinical Trials. On this blog, I’ve focused my attention on the ways in which the scientific data in Clinical Trials have been manipulated to distort both efficacy and safety in the service of commercial gain within the non-ethic of plausible deniability. And while Dr. Goldacre is a master-sleuth at exposing that kind of shenanigans, which he calls dodgy studies, he’s also on another tack that I haven’t sufficiently emphasized – studies gone missing. He uses an example that goes something like this: "If I flip a coin a hundred times and withhold half the data, I can convince you that I have a two-headed coin." He calls this method of distorting science publication bias. It’s simple: just publish the positive studies. It gets around the tool of meta-analysis, since the negative studies simply aren’t available to be vetted. So the AllTrials campaign is not just asking for the raw data in published Clinical Trials to look at the hanky-panky used to create dodgy studies; they are also asking that All Trials be published as a check on publication bias – eliminating the gone missing phenomenon.
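Goldacre’s coin analogy is easy to run for yourself. Here’s a minimal sketch in Python [mine, purely illustrative, not from Goldacre or from any of the trials discussed here] that flips a fair coin a hundred times, “publishes” only the heads, and shows how the reported record ends up looking like a two-headed coin:

    import random

    random.seed(2013)  # fixed seed so the illustration is reproducible

    # Flip a fair coin 100 times.
    flips = ["heads" if random.random() < 0.5 else "tails" for _ in range(100)]

    # Publication bias: only the "positive" results (heads) get reported.
    published = [f for f in flips if f == "heads"]

    print(f"all flips:      {flips.count('heads')}/{len(flips)} heads")
    print(f"published only: {published.count('heads')}/{len(published)} heads")
    # The published record is 100% heads: a fair coin made to look two-headed.

Swap “heads” for “drug beat placebo” and you have the gone missing problem in a nutshell.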

In an earlier blog [at least that much…], I reviewed a 2012 study of publication bias in the articles published on the Clinical Trials of the Atypical Antipsychotics, but I didn’t follow the trail far enough to their earlier study of the Antidepressants [2008]. Dr. Goldacre mentions that earlier study in the TED Talk below [another Ben/TED talk…]. Here it is:
Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy
by Erick H. Turner, Annette M. Matthews, Eftihia Linardatos, Robert A. Tell, and Robert Rosenthal
New England Journal of Medicine. 2008 358:252-260.
[full text on-line]

Background: Evidence-based medicine is valuable to the extent that the evidence base is complete and unbiased. Selective publication of clinical trials — and the outcomes within those trials — can lead to unrealistic estimates of drug effectiveness and alter the apparent risk–benefit ratio.
Methods: We obtained reviews from the Food and Drug Administration [FDA] for studies of 12 antidepressant agents involving 12,564 patients. We conducted a systematic literature search to identify matching publications. For trials that were reported in the literature, we compared the published outcomes with the FDA outcomes. We also compared the effect size derived from the published reports with the effect size derived from the entire FDA data set.
Results: Among 74 FDA-registered studies, 31%, accounting for 3449 study participants, were not published. Whether and how the studies were published were associated with the study outcome. A total of 37 studies viewed by the FDA as having positive results were published; 1 study viewed as positive was not published. Studies viewed by the FDA as having negative or questionable results were, with 3 exceptions, either not published [22 studies] or published in a way that, in our opinion, conveyed a positive outcome [11 studies]. According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that 51% were positive. Separate meta-analyses of the FDA and journal data sets showed that the increase in effect size ranged from 11 to 69% for individual drugs and was 32% overall.
Conclusions: We cannot determine whether the bias observed resulted from a failure to submit manuscripts on the part of authors and sponsors, from decisions by journal editors and reviewers not to publish, or both. Selective reporting of clinical trial results may have adverse consequences for researchers, study participants, health care professionals, and patients.
First off, this article only looks at the studies submitted to the FDA. The FDA rule is that the sponsor is required to submit at least two studies showing significant efficacy, but must submit all of the studies conducted so that safety can be evaluated. I guess the FDA still carries its original mandate front and center – safety. Efficacy came later – a point for a later blog. So here’s what they found when they looked at whether the studies submitted to the FDA were published in the peer-reviewed literature:
The “Published, conflicts with FDA decision” category refers to studies that the FDA judged negative or questionable but that were published as positive [dodgy studies]. The “Not published” category refers to the studies gone missing. They go on to show us which pharmaceutical companies were the worst offenders:
So that’s a 45% [(11 dodgy + 22 gone missing) ÷ 74 total trials] fudge factor. That’s totally horrible! Grand Jury horrible! If you look at only the published studies, that’s still a 22% [11 dodgy ÷ 51 published trials] fudge factor. Even that’s pretty horrible! And these are only the trials submitted to the FDA, so these numbers are low estimates. There’s another indictment in this article, another kind of publication bias [effect size inflation], but I’ll save that for another post. Right now, let me just say that this article is available full text, is clearly presented, and has other visuals that drive its powerful point home. And if you haven’t signed the AllTrials petition, read the article, then reconsider signing it and doing anything else you can figure out to do. As Ben Goldacre preaches, this is Bad Science from Bad Pharma…
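A postscript for anyone who wants to check the arithmetic: the percentages above fall straight out of the trial counts quoted in the abstract. Here’s a minimal sketch in Python [mine, not the authors’] that reproduces them:

    # Trial counts as quoted in the Turner et al. abstract above
    total_trials = 74
    positive_published = 37            # FDA-positive and published
    positive_unpublished = 1           # FDA-positive but never published
    dodgy = 11                         # FDA-negative/questionable, published as positive
    negative_published_accurately = 3  # the three exceptions
    gone_missing = 22                  # FDA-negative/questionable, never published

    published = positive_published + dodgy + negative_published_accurately  # 51

    # Apparent positive rate (journals) vs. actual positive rate (FDA)
    apparent = (positive_published + dodgy) / published                  # 48/51
    actual = (positive_published + positive_unpublished) / total_trials  # 38/74
    print(f"published literature: {apparent:.0%} positive")  # ~94%
    print(f"full FDA data set:    {actual:.0%} positive")    # ~51%

    # The fudge factors above
    print(f"dodgy + gone missing vs. all trials: {(dodgy + gone_missing) / total_trials:.0%}")  # ~45%
    print(f"dodgy vs. published trials only:     {dodgy / published:.0%}")                      # ~22%

The point of the exercise is just that the 94% figure is an artifact of what got published, not of what got studied.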
  1.  
    February 1, 2013 | 10:33 AM

    Mickey, you are a hero of mine for the amazing work you do on this blog. I’ve been meaning to post a comment for some time now and your “gone missing…” post offers a perfect opportunity. I’ve been following the publication bias issue closely, and for all the shenanigans of studies like 329 and Gibbons’ recent papers, the suppression of the huge volume of negative studies may pose a bigger threat to the validity of the published literature. The Turner study exposed this with respect to antidepressants, and the findings are, as you say, “totally horrible!”

    There is a third form of publication bias beyond misreporting findings in published studies and burying negative studies that similarly infects the drug trials literature. I am referring to the phenomenon of duplicate publication, in which data from the same trial are published in multiple articles, often without acknowledgement that the data have already been published elsewhere. Melander and colleagues demonstrated the extraordinary extent of this practice here (see Figure 1): http://www.bmj.com/content/326/7400/1171.

    Unfortunately, Turner and colleagues didn’t examine this issue in their meta-analysis, instead deciding that, “When the results of a trial were reported in two or more primary publications, we selected the first publication.” Ironically, their scandalous findings substantially underestimate the selective publication problem in the antidepressant literature. This situation seems to fit the general rule when it comes to pharma-related corruption of science: no matter how bad it seems, the full truth is much worse.

    Warm Regards,

    Brett

  2.  
    February 1, 2013 | 5:10 PM

    Plus, there’s the amplification effect when these dodgy studies are recycled into review articles and Cochrane — garbage in, garbage out, but the aggregated opinions are even more convincing to physicians.
