this tawdry era…

Posted on Saturday 14 May 2016

For the last week, I’ve been unable to focus on anything very far from a single article [see the jewel in the crown…, why is that?…, the hope diamond…, and the obvious irony…]. And it’s been frustrating that the article has been behind a pay-wall. But now, the International Journal of Risk & Safety in Medicine has generously published it Open Access. And while the authors’ background notes are not yet on-line, they allowed me to post them here [see Update below]. So you can decide for yourself, with a full deck, whether my monomania is justified:
by Jureidini, Jon N., Amsterdam, Jay D., and McHenry, Leemon B.
International Journal of Risk & Safety in Medicine, 2016 28[1]:33-43.

OBJECTIVE: Deconstruction of a ghostwritten report of a randomized, double-blind, placebo-controlled efficacy and safety trial of citalopram in depressed children and adolescents conducted in the United States.
METHODS: Approximately 750 documents from the Celexa and Lexapro Marketing and Sales Practices Litigation: Master Docket 09-MD-2067-[NMG] were deconstructed.
RESULTS: The published article contained efficacy and safety data inconsistent with the protocol criteria. Procedural deviations went unreported imparting statistical significance to the primary outcome, and an implausible effect size was claimed; positive post hoc measures were introduced and negative secondary outcomes were not reported; and adverse events were misleadingly analysed. Manuscript drafts were prepared by company employees and outside ghostwriters with academic researchers solicited as ‘authors’.
CONCLUSION: Deconstruction of court documents revealed that protocol-specified outcome measures showed no statistically significant difference between citalopram and placebo. However, the published article concluded that citalopram was safe and significantly more efficacious than placebo for children and adolescents, with possible adverse effects on patient safety.
While this is only one example of many similarly misreported Clinical Trials, access to the internal industry documents allowed these authors to leave nothing to our imagination. They prove that it was ghostwritten; that it was framed by industry executives for commercial gain before any academic author got near the data; that it was deliberately written to hide its failings; and that it was a negative Clinical Trial presented as positive and subsequently used to gain FDA Approval. Those points and more are abundantly clear in this easy-reading article.

I wanted to go through just one of their many examples to illustrate why it’s imperative that these RCT reports adhere to the pre-registered a priori Protocols and Statistical Analysis Plans, so clearly explained in Adriaan de Groot’s 1956 paper [see the hope diamond…]. In this case, the a priori Protocol was among the archived documents examined by Jureidini et al:

from the a priori Protocol [page 23]


12.5.1 Primary Efficacy Parameter
  Change from baseline in CDRS-R score at Week 8 will be used as the primary efficacy parameter. Descriptive statistics will be calculated by visit. Comparison between citalopram and placebo will be performed using three-way analysis of covariance [ANCOVA] with age group, treatment group, and center as three factors, and the baseline CDRS-R score as covariate.
12.5.2 Secondary Efficacy Parameter[s]
  The secondary efficacy parameters are:
    1. CGI-Improvement subscale score [CGI-I].
    2. Change from baseline in CGI-Severity score [CGI-S].
    3. Change from baseline in K-SADS-P [depression module] score.
    4. Change from baseline in CGAS score.
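For readers who want to see concretely what that primary analysis calls for, here is a minimal sketch of the protocol’s three-way ANCOVA – the week-8 change in CDRS-R modeled on treatment, age group, and center, with baseline CDRS-R as the covariate. It is my own illustration in Python with statsmodels, using assumed column names and a hypothetical data file, not the trial’s actual code or data:

    # Minimal ANCOVA sketch (hypothetical data; column names are illustrative only)
    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per randomized patient:
    #   cdrs_change   = change from baseline in CDRS-R at week 8 (primary parameter)
    #   treatment     = 'citalopram' or 'placebo'
    #   age_group     = 'child' or 'adolescent'
    #   center        = study site identifier
    #   cdrs_baseline = baseline CDRS-R score (covariate)
    df = pd.read_csv("cit_md_18_like_data.csv")  # placeholder file name

    # Three-way ANCOVA: treatment, age group, and center as factors,
    # baseline CDRS-R as covariate -- per protocol section 12.5.1
    model = smf.ols(
        "cdrs_change ~ C(treatment) + C(age_group) + C(center) + cdrs_baseline",
        data=df,
    ).fit()
    print(model.summary())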

However, in the published article [A Randomized, Placebo-Controlled Trial of Citalopram for the Treatment of Major Depression in Children and Adolescents], the parameters were not-so-subtly changed. A ‘Response’ criterion appears that is nowhere defined in the Protocol, and the K-SADS-P and CGAS were simply dropped:

from Wagner et al [page 1080]


The primary outcome measure in this study was the change from baseline in score on the Children’s Depression Rating Scale – Revised at week 8 or upon termination. The Children’s Depression Rating Scale – Revised was administered at each study visit. Response was defined as a score of ≤ 28 [indicating minimal residual symptoms]. Secondary measures included Clinical Global Impression [CGI] improvement and severity ratings [25].

Then, in their terse Results section, they included Response [erroneously called "prospectively defined"], a non-Protocol Effect Size [wrongly calculated], and again just left out the K-SADS-P and CGAS altogether:

from Wagner et al [page 1081]


Citalopram treatment showed statistically significant improvement compared with placebo on the Children’s Depression Rating Scale – Revised as early as week 1 [F= 6.58, df=1,150, p<0.05], which persisted throughout the study. At week 8, the effect size on the primary outcome measure, Children’s Depression Rating Scale – Revised [last observation carried forward], was 2.9. Additionally, at endpoint more citalopram-treated patients [36%] met the prospectively defined criterion for response than did placebo-treated patients [24%], a difference that was statistically significant [χ²=4.178, df=1, p<0.05]. The proportion of patients with a CGI improvement rating ≤ 2 at week 8 was 47% for the citalopram group and 45% for the placebo group [last observation carried forward values]. For the CGI severity rating, baseline values were 4.4 for the citalopram group and 4.3 for the placebo group, and endpoint values [last observation carried forward] were 3.1 for the citalopram group and 3.3 for the placebo group.

Not to mention that the reported CDRS-R result failed to follow the Protocol-directed exclusions, which invalidated the claimed significance, or that the added-in Response had a trivial NNT [8.3]. So by deviating from the a priori Protocol in a variety of ways, they were able to cherry-pick among parameters to give the illusion of efficacy.
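To put that NNT in perspective: the published response rates were 36% on citalopram and 24% on placebo, an absolute difference of 0.12, and 1/0.12 ≈ 8.3 – meaning roughly eight children would need to be treated for one additional ‘response’ over placebo. A trivial sketch of that arithmetic (my own illustration, derived only from the percentages quoted above):

    # Number needed to treat (NNT) from the response rates reported by Wagner et al
    # (36% citalopram vs 24% placebo) -- purely illustrative arithmetic
    response_citalopram = 0.36
    response_placebo = 0.24

    absolute_risk_difference = response_citalopram - response_placebo  # 0.12
    nnt = 1 / absolute_risk_difference

    print(f"Absolute difference in response rates: {absolute_risk_difference:.2f}")
    print(f"Number needed to treat (NNT): {nnt:.1f}")  # ~8.3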

But the even bigger revelation in the documents is the amount of effort the industry handlers and doctors put into controlling the process and actively hiding the true results of the Clinical Trial:

from Jureidini et al [section 3.2.2]


Wagner et al. failed to publish two of the protocol-specified secondary outcomes, both of which were unfavourable to citalopram. While CGI-S and CGI-I were correctly reported in the published article as negative, the Kiddie Schedule for Affective Disorders and Schizophrenia-Present [depression module] and the Children’s Global Assessment Scale [CGAS] were not reported in either the methods or results sections of the published article.
In our view, the omission of secondary outcomes was no accident. On October 15, 2001, Ms. Prescott wrote: “I’ve heard through the grapevine that not all the data look as great as the primary outcome data. For these reasons [speed and greater control] I think it makes sense to prepare a draft in-house that can then be provided to Karen Wagner [or whomever] for review and comments.” Subsequently, Forest’s Dr. Heydorn wrote on April 17, 2002: “The publications committee discussed target journals, and recommended that the paper be submitted to the American Journal of Psychiatry as a Brief Report. The rationale for this was the following: … As a Brief Report, we feel we can avoid mentioning the lack of statistically significant positive effects at week 8 or study termination for secondary endpoints.”
Instead the writers presented post hoc statistically positive results that were not part of the original study protocol or its amendment [visit-by-visit comparison of CDRS-R scores, and ‘Response’, defined as a score of ≤28 on the CDRS-R] as though they were protocol-specified outcomes. For example, ‘Response’ was reported in the results section of the Wagner et al. article between the primary and secondary outcomes, likely predisposing a reader to regard it as more important than the selected secondary measures reported, or even to mistake it for a primary measure.

There’s nothing speculative here. The points are illustrated with verbatim quotations from the perpetrators’ own internal emails. And yet the authors had one hell of a time getting it published [also well documented in their background notes, with emails from journal editors].

As with the Paxil Study 329 article, the list of contributors stretches well beyond the listed authors – the subjects in the studies, the kids prescribed the medication, the litigation that released these documents, the library that archived them, etc. But major credit goes to these authors, who spent countless hours doing tedious, unfunded research, wrote the paper, and then persisted until they found a journal that would accept the article as it should’ve been written. And while we’re at it, the International Journal of Risk & Safety in Medicine deserves credit for rising to the occasion – both by publishing it and by making it Open Access.

I think it’s now our job to ensure that all this dedicated work is rewarded with a wide readership, one that helps us move closer to putting this tawdry era behind us…
  1.  
    Edmund C. Levin, M.D.
    May 14, 2016 | 2:55 PM

    Dear Mickey,
    BRAVO to Jureidini, Amsterdam and McHenry for their excellent analysis of the 2004 paper by Wagner et al, which touted citalopram neither wisely nor well. And, similarly, bravo to you–both for your critique of their paper and for your seeing that it got as much circulation as possible.
    Yours, ed

  2.  
    WDM
    May 15, 2016 | 1:48 AM

    Wagner nth-authored a study of (arguably manipulable) patient-level predictors of success in SSRI trials using young subjects

    “Attributional Style, Hope, and Initial Response to Selective Serotonin Reuptake Inhibitors in Youth Psychiatric Inpatients” (2005)

    http://link.springer.com/article/10.1007%2Fs10608-005-9633-x

    When did placebo response start hitting 35%?

  3.  
    May 15, 2016 | 10:25 AM

    And yet, this just comes out this week?:

    http://ajp.psychiatryonline.org/doi/abs/10.1176/appi.ajp.2016.15111444

    I think psychiatry needs to re-examine what was going on with the FDA trying to hamper Citalopram sales when Forest was trying to sell Lexapro like mad.

    And why I personally see more complications with high dose Lexapro than I do with Citalopram. But, now that both are generic, not a topic of interest to most? Sort of like the same BS going on with Pristiq of late, versus venlafaxine…

  4.  
    1boringyoungman
    May 22, 2016 | 3:48 AM

    Have all the details in the April 2005 “Dr. Wagner and colleagues reply” in AJP been vetted with the new information from the JRSM paper?
