at least that much…

Posted on Wednesday 21 March 2012

Hat Tip to Ed Silverman for always being on top of this kind of article 

Background: Publication bias compromises the validity of evidence-based medicine, yet a growing body of research shows that this problem is widespread. Efficacy data from drug regulatory agencies, e.g., the US Food and Drug Administration [FDA], can serve as a benchmark or control against which data in journal articles can be checked. Thus one may determine whether publication bias is present and quantify the extent to which it inflates apparent drug efficacy.
Methods and Findings: FDA Drug Approval Packages for eight second-generation antipsychotics—aripiprazole, iloperidone, olanzapine, paliperidone, quetiapine, risperidone, risperidone long-acting injection [risperidone LAI], and ziprasidone—were used to identify a cohort of 24 FDA-registered premarketing trials. The results of these trials according to the FDA were compared with the results conveyed in corresponding journal articles. The relationship between study outcome and publication status was examined, and effect sizes derived from the two data sources were compared. Among the 24 FDA-registered trials, four [17%] were unpublished. Of these, three failed to show that the study drug had a statistical advantage over placebo, and one showed the study drug was statistically inferior to the active comparator. Among the 20 published trials, the five that were not positive, according to the FDA, showed some evidence of outcome reporting bias. However, the association between trial outcome and publication status did not reach statistical significance. Further, the apparent increase in the effect size point estimate due to publication bias was modest [8%] and not statistically significant. On the other hand, the effect size for unpublished trials [0.23, 95% confidence interval 0.07 to 0.39] was less than half that for the published trials [0.47, 95% confidence interval 0.40 to 0.54], a difference that was significant.
Conclusions: The magnitude of publication bias found for antipsychotics was less than that found previously for antidepressants, possibly because antipsychotics demonstrate superiority to placebo more consistently. Without increased access to regulatory agency data, publication bias will continue to blur distinctions between effective and ineffective drugs.
As opposed to Dr. Gibbons’ recent meta-analyses, this paper is clear and to the point. They took the 24 trials of atypical antipsychotics that had been submitted to the FDA and compared them to the published versions. I’ve spent some time schlepping around in the FDA material, and it’s grueling work, so I’m doubly impressed with their stick-to-it-iveness. Rather than just look at routine statistics [p-values], they looked at Effect Size, which measures the strength of the drug-placebo difference [in standard deviation units] rather than just its statistical significance. I have redrawn their graphs to simplify them [click on the graphic to see the original].
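
For readers who want to see the arithmetic behind an Effect Size, here is a minimal sketch in Python. The function and the numbers in it are mine, invented purely for illustration; nothing below comes from the paper’s data or from any FDA review.

```python
# A minimal sketch of the effect-size idea (Cohen's d / standardized mean
# difference). All numbers are hypothetical, for illustration only.
import math

def cohens_d(mean_drug, sd_drug, n_drug, mean_placebo, sd_placebo, n_placebo):
    """Standardized mean difference between drug and placebo groups."""
    # Pooled standard deviation across the two groups
    pooled_sd = math.sqrt(
        ((n_drug - 1) * sd_drug**2 + (n_placebo - 1) * sd_placebo**2)
        / (n_drug + n_placebo - 2)
    )
    return (mean_drug - mean_placebo) / pooled_sd

# Hypothetical symptom-scale improvements (larger = more improvement)
d = cohens_d(mean_drug=12.0, sd_drug=10.0, n_drug=150,
             mean_placebo=8.0, sd_placebo=10.0, n_placebo=150)
print(round(d, 2))  # 0.4 -- in the "small-to-medium" range discussed here
```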

The ‘forest plot’ on the right shows the Effect Size for the twenty-four clinical trials of atypical antipsychotics submitted to the FDA as part of NDA [New Drug Application] Packages. First off, notice that three of the four studies that were not published were ‘dogs.’ Second, these are certainly not "high" effect sizes, particularly those for Abilify, Fanapt, Seroquel, and Geodon [which fall below 0.5, the conventional threshold for a "medium" effect]. In the text and tables, there were plenty of other things wrong with some of these studies, things like inferior performance to conventional drugs or QT prolongation, problems that are well summarized by the authors.
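
If you’ve never built one, a forest plot is just each trial’s effect size and confidence interval stacked on a common axis. Here is a toy version in Python; the trial names, effect sizes, and intervals are invented placeholders rather than values from the paper, and matplotlib is simply my choice of plotting library.

```python
# A toy forest plot, to show how a figure like this is constructed.
# The labels, effect sizes, and confidence intervals below are invented.
import matplotlib.pyplot as plt

trials = ["Trial A", "Trial B", "Trial C", "Trial D (unpublished)"]
effect = [0.55, 0.42, 0.30, 0.10]    # hypothetical effect sizes
lo     = [0.35, 0.22, 0.05, -0.15]   # hypothetical 95% CI lower bounds
hi     = [0.75, 0.62, 0.55, 0.35]    # hypothetical 95% CI upper bounds

y = range(len(trials))
err = [[e - l for e, l in zip(effect, lo)],
       [h - e for e, h in zip(effect, hi)]]

plt.errorbar(effect, y, xerr=err, fmt="s", color="black", capsize=3)
plt.axvline(0.0, linestyle="--", color="gray")  # no drug-placebo difference
plt.axvline(0.5, linestyle=":", color="gray")   # conventional "medium" effect
plt.yticks(y, trials)
plt.xlabel("Effect size (drug vs. placebo)")
plt.tight_layout()
plt.show()
```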

They also calculated the Effect Size for both the FDA version and the published version of each study, shown on the left. Geodon, Abilify, Zyprexa, and Risperdal looked a lot better between the glossy journal covers than in the FDA NDA applications. Things like not publishing negative studies or somehow dolling up the results in the published version are what is meant by publication bias. The authors noted that the publication bias in these studies of the atypical antipsychotics was less than it had been with the wave of antidepressants that preceded them.
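
The "how much better did they look" question can be put in numbers the way the authors do: compare a pooled effect size from the journal articles with the one from the FDA reviews. The two figures below are placeholders I picked so the output lands near the paper’s reported ~8% increase; they are not the actual pooled estimates.

```python
# Expressing publication bias as apparent effect-size inflation.
# These two numbers are hypothetical placeholders, not the paper's estimates.
es_fda = 0.44       # pooled effect size from FDA data (hypothetical)
es_journal = 0.475  # pooled effect size from journal articles (hypothetical)

inflation_pct = (es_journal - es_fda) / es_fda * 100
print(f"Apparent inflation due to publication bias: {inflation_pct:.0f}%")
```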

Implications: Selective reporting of research results undermines the integrity of the evidence base, which ultimately deprives clinicians of accurate data for prescribing decisions. With further studies investigating publication bias in other drug classes, a more accurate evidence base can emerge. To that end, increased access to FDA reviews has been advocated. At the present time, the FDA is not as transparent with its clinical trial data as it could be. For example, we pointed out in 2004 that the reviews for several antipsychotic drugs were posted on the FDA web site, but only for the original indication of schizophrenia and not for bipolar mania. More than seven years later, the mania reviews remain inaccessible. On the other hand, it is encouraging that the FDA has convened a Transparency Task Force. If the agency fulfills its mission to increase transparency, the public health will surely benefit.
I would make two points about this meta-analysis. First, it just is what it is. I didn’t find anything in it that made me think they were putting any spin on the ball in either direction, unlike Dr. Gibbons’ recent outings. Second, I’ve now spent a lot of time fishing around in the FDA data that’s available on the Internet. I’m glad it’s there, but it’s spotty and rarely in a format that makes it easy to get at. The raw data is almost never there. I think that one of the best things we could do is have the FDA post the raw data in some kind of regular format for every study that they receive. There are plenty of us around who would access it, re-analyze it, and perform a watchdog function much more refined than is even possible at this point. I think the same thing about clinicaltrials.gov. The sponsors have begun to hide the names of the clinical research centers by naming them things like Forest Site #3 [available by calling …]. There’s no reason for that. They rarely post their results. If they want us to believe their drugs are safe and effective, put out the facts where they can be independently vetted. They owe us at least that much…
  1.  
    March 22, 2012 | 6:29 AM
     

    This meta-analysis may be good, but the problem with all these studies is that the placebo group isn’t a true placebo. These studies take people stabilized on antipsychotics, then drop half of them suddenly onto nothing and call them a “placebo group.” They’re not “placebo,” they’re people with acute medication withdrawal symptoms. Most people on any kind of antipsychotic feel a ton better than people in acute medication withdrawal, hence the larger effect sizes than in the antidepressant studies. The biochemical mechanism for this is the increase in dopamine D2 receptor density and the conversion of D2 receptors from the low-affinity state to the high-affinity state. Yes, antipsychotics cause psychosis. Check out Philip Seeman’s work. This is also the reason that most antipsychotic Phase I studies are done on people with a schizophrenia label instead of the general population – this effect shows up way too clearly in chronically normal people. (This is Dan Fisher’s term for the psychiatrically undiagnosed.)

    John Bola has written a number of articles on the ethics of using a true placebo group to study antipsychotics, and he says this would be MORE ethical than our current paradigm of medicating everyone with psychosis. It turns out that much of psychosis can come from trauma or existential crises and is fairly self-limited. If you help someone work through their initial freak-out with peer support, hope, encouragement, and reassurance, then people don’t become ill for the rest of their lives. Often the diagnoses and labels are self-fulfilling prophecies. Then using medications that increase the propensity for further illness can turn a short-term crisis into lifetime disability. By medicating everyone, we miss a large group of people who might have much better outcomes if their brains had never been exposed to antipsychotics. Not to mention the shrinkage of frontal lobes documented by Andreasen lately, I think in Archives of General Psychiatry.
