outcome switching…

Posted on Monday 4 January 2016

In our reanalysis of Paxil Study 329 [Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence], we challenged the fact that the published paper reached its conclusions about efficacy based on an analysis of outcome variables that were not included in the a priori protocol:
There were four outcome variables in the CSR and in the published paper that were not specified in the protocol. These were the only outcome measures reported as significant. They were not included in any version of the protocol as amendments [despite other amendments], nor were they submitted to the institutional review board. The CSR … states they were part of an “analysis plan” developed some two months before the blinding was broken. No such plan appears in the CSR, and we have no contemporaneous documentation of that claim, despite having repeatedly requested it from GSK…

… although investigators can explore the data however they want, additional outcome variables outside those in the protocol cannot be legitimately declared once the study is underway, except as “exploratory variables”—appropriate for the discussion or as material for further study but not for the main analysis. The a priori protocol and blinding are the bedrock of a randomised controlled trial, guaranteeing that there is not even the possibility of the HARK phenomenon [“hypothesis after results known”]
Their claim was that these variables were declared before the blind was broken, ergo they were a priori. We had no evidence that was true, but even if it had been, we would’ve challenged using those variables anyway. When billions of dollars in commercial profit are on the line, taking a claim like that on faith is irrational. We couldn’t prove they “peeked” along the way, but the burden of proof is on their shoulders, not ours. The only guarantee that the final analysis is not jury-rigged is to insist on the analysis as declared in the a priori protocol. But what if they made a mistake in the protocol? Too bad. Do another trial. The whole point of an a priori protocol is to eliminate even the possibility of outcome switching [as it has come to be called]:
Vox: Science and Health
by Julia Belluz
December 29, 2015

For years, the drug company GlaxoSmithKline illegally marketed paroxetine, sold under the brand name Paxil, as an antidepressant for children and teenagers. It did so by citing what’s known as Study 329 — research that was funded by the drug company and published in 2001, claiming to show that Paxil is "well tolerated and effective" for kids. That marketing effort worked. In 2002 alone, doctors wrote 2 million Paxil prescriptions for children and adolescents. Years later, after researchers reanalyzed the raw data behind Study 329, it became clear that the study’s original conclusions were wildly wrong. Not only is Paxil ineffective, working no better than placebo, but it can actually have serious side effects, including self-injury and suicide.

So how did the researchers behind the trial manage to dupe doctors and the public for so long? In part, the study was a notorious example of what’s called "outcome switching" in medical research. Before researchers start clinical trials, they’re supposed to pre-specify which health outcomes they’re most interested in…

… "In Study 329," explains Ben Goldacre, a crusading British physician and author, "none of the pre-specified analyses yielded a positive result for GSK’s drug, but a few of the additional outcomes that were measured did, and those were reported in the academic paper on the trial, while the pre-specified outcomes were dropped."

These days, it’s easy to see whether researchers are engaged in outcome switching because we now have public clinical trials registries where they’re supposed to report their pre-specified outcomes before a trial begins. In theory, when journals are considering a study manuscript, they should check to see whether the authors were actually reporting on those pre-specified outcomes. But even still, says Goldacre, this isn’t always happening.

So with his new endeavor the Compare Project, Goldacre and a team of medical students are trying to address the problem. They compare each new published clinical trial in the top medical journals with the trial’s registry entry. When they detect outcome switching, they write a letter to the academic journal pointing out the discrepancy, and then they track how journals respond. I spoke to Goldacre to learn more.

"When we get the wrong answer, in medicine, that’s not a matter of academic sophistry — it causes avoidable suffering"

Julia Belluz: Why does outcome switching matter?

Ben Goldacre: This is an interesting example of a nerdy problem whose importance requires a few pages of background knowledge, and that’s probably why it’s been left unfixed for so long. But in short: Switching your outcomes breaks the assumptions in your statistical tests. It allows the "noise" or "random error" in your data to exaggerate your results [or even yield an outright false positive, showing a treatment to be superior when in reality it’s not].

We do trials specifically to detect very modest differences between one treatment and another. You don’t need to do a randomized trial on whether a parachute will save your life when you jump out of an airplane, because the difference in survival is so dramatic. But you do need a trial to spot the tiny difference between one medical intervention and another. When we get the wrong answer, in medicine, that’s not a matter of academic sophistry — it causes avoidable suffering, bereavement, and death. So it’s worth being as close to perfect as we can possibly be…
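Goldacre’s point about noise is easy to demonstrate. The following is a minimal sketch of my own (not anything from the Vox piece or from Study 329; the function names, eight outcomes, and sample sizes are all hypothetical): give a drug that truly does nothing eight outcome measures per trial, and reporting whichever outcome happened to reach significance turns a nominal 5% false-positive rate into roughly a third of trials.

```python
import math
import random

random.seed(329)

def two_sample_p(a, b):
    """Two-sided p-value for a difference in means
    (normal approximation, adequate for n = 100 per arm)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))

def run_null_trial(n_outcomes=8, n_per_arm=100):
    """Both arms are drawn from the same distribution:
    the drug truly does nothing."""
    ps = []
    for _ in range(n_outcomes):
        drug = [random.gauss(0, 1) for _ in range(n_per_arm)]
        placebo = [random.gauss(0, 1) for _ in range(n_per_arm)]
        ps.append(two_sample_p(drug, placebo))
    return ps

n_trials = 2000
honest = switched = 0
for _ in range(n_trials):
    ps = run_null_trial()
    honest += ps[0] < 0.05       # analyze only the pre-specified outcome
    switched += min(ps) < 0.05   # report whichever outcome happened to "work"

print(f"pre-specified outcome significant: {honest / n_trials:.1%}")   # ~5%
print(f"best-of-8 outcome significant:     {switched / n_trials:.1%}") # ~1 - 0.95**8, about a third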

First off, three cheers for Ben Goldacre’s Compare Project! I’m glad to see our Paxil Study 329 findings put into general use in this regard. Of course I didn’t believe the GSK explanation for outcome switching in that study the first time I read it, but the RIAT Initiative wasn’t intended to be a critique or an indictment – rather an accurate republication. So we didn’t rest our argument on innuendo or inference – we stuck to the facts. I would recommend Ben’s army do the same. And Godspeed! I wrote them and suggested they include any protocol change in their project, including Income Switching [see the next post]…
  1. 1boringyoungman
     January 4, 2016 | 9:47 PM

    How do any of the original Study 329 authors justify the JAACAP paper’s failure to report the 2° end points that were written into the original protocol? No matter one’s take on adding some additional ones prior to breaking the blind. “But I think it is too bad because the paper presents everything…” How can this be true when the majority of 2° end points written into the initial protocol were not reported in the published paper?
    http://www.browndailyherald.com/2014/04/02/controversial-paxil-paper-still-fire-13-years-later/

  2. 1boringyoungman
     January 5, 2016 | 5:11 AM

    “The secondary outcomes were decided by the authors prior to the blind being broken. We believe now, as we did then, that the inclusion of these measures in the study and in our analysis was entirely appropriate and was clearly and fully reported in our paper.” “In other words, the disagreement on treatment outcomes rests entirely on the arbitrary dismissal of our secondary outcome measures.”

    Neither Dr. Klein, nor any other Study 329 author, has ever spoken to the non-arbitrary exclusion of reporting of secondary outcome measures in the JAACAP paper.

  3. 1boringyoungman
     January 5, 2016 | 9:48 AM

    It’s a testament to the state of medical journals that the COMPARE project is necessary. I was about to ask if anyone was going to set up a parallel project for our journals, then I wondered who in the world was going to have the time to do this (Ben and a group of motivated medical students looking at the 5 top journals, in part for laudable PR purposes, is not necessarily a reproducible paradigm).

    Then I wondered why we have editors in the first place. Isn’t it, at least in part, so we don’t have to do this ourselves?

    As long as a journal does not suffer in standing among its readership for failing in its mission, we’re left to rely on the sporadic generosity of hobbyists.
