paxil in adolescents: “five easy pieces”…

Posted on Tuesday 15 October 2013

I try not to be boring and tedious, but sometimes it just can’t be helped. This post isn’t yet another commentary on Paxil Study 329, but it’s in the mix. It’s a timeline about all the studies SKB and GSK did on Paxil in adolescents. I think it’s of interest, but it may put others to sleep. So there – you’ve been warned.

It would be a cloistered person indeed who didn’t know about Paxil Study 329, a paradigm for the jury-rigging of Clinical Trial data that still stands unretracted in the Journal of the American Academy of Child and Adolescent Psychiatry now in its second decade. If you’ve been living in a cave and don’t know the story, here’s a reference that will get you up to speed [the lesson of Study 329: an unfinished symphony…]. It was a decidedly negative study published as positive in 2001. But it was only the first of three studies done on Paxil in adolescents by SmithKlineBeecham [that became GlaxoSmithKline in the middle of things]:
Efficacy of Paroxetine in the Treatment of Adolescent Major Depression: A Randomized, Controlled Trial
by MARTIN B. KELLER, NEAL D. RYAN, MICHAEL STROBER, RACHEL G. KLEIN, STAN P. KUTCHER, BORIS BIRMAHER, OWEN R. HAGINO, HAROLD KOPLEWICZ, GABRIELLE A. CARLSON, GREGORY N. CLARKE, GRAHAM J. EMSLIE, DAVID FEINBERG, BARBARA GELLER,  VIVEK KUSUMAKAR, GEORGE PAPATHEODOROU, WILLIAM H. SACK, MICHAEL SWEENEY,  KAREN DINEEN WAGNER, ELIZABETH B. WELLER, NANCY C. WINTERS, ROSEMARY OAKES, AND JAMES P. MCCAFFERTY
Journal of the American Academy of Child and Adolescent Psychiatry, 2001, 40[7]:762–772.
Conclusions: Paroxetine is generally well tolerated and effective for major depression in adolescents.
An International, Multicenter, Placebo-Controlled Trial of Paroxetine in Adolescents with Major Depressive Disorder
by Ray Berard, Regan Fong, David J. Carpenter, Christine Thomason, and Christel Wilkinson
Journal of Child and Adolescent Psychopharmacology. 2006 16[1-2]:59–75.
Conclusions: No statistically significant differences were observed for paroxetine compared with placebo on the two prospectively defined primary efficacy variables. Paroxetine at 20–40 mg/day administered over a period of up to 12 weeks was generally well tolerated.
Paroxetine Treatment in Children and Adolescents With Major Depressive Disorder: A Randomized, Multicenter, Double-Blind, Placebo-Controlled Trial
by GRAHAM J. EMSLIE, KAREN DINEEN WAGNER, STAN KUTCHER, STAN KRULEWICZ, REGAN FONG, DAVID J. CARPENTER, ALAN LIPSCHITZ, ANDREA MACHIN, AND CHRISTEL WILKINSON
Journal of the American Academy of Child and Adolescent Psychiatry. 2006 45[6]:709-719.
Conclusions: Paroxetine was not shown to be more efficacious than placebo for treating pediatric major depressive disorder.
As we all know, the first study [329] was a negative trial that got reported as positive in the Journal of the American Academy of Child and Adolescent Psychiatry in 2001. The other studies [377 and 701] were decidedly negative, but weren’t published, at least initially. When you do a clinical trial, the variables that are key to success or failure are declared beforehand in the protocol. These next tables show the results of these Primary Efficacy Variables for each trial:

So in the three studies, none of the five primary efficacy variables [my "five easy pieces"] reached significance [last column, P]. Now notice the column that says either Odds Ratio or Effect Size:

Strength of Effect
We all learned about p values – probability, that ubiquitous statistical measure that tells us the observations in a comparison are probably not due to chance. In clinical trials, separation from placebo is the gold standard, but it tells us nothing about the strength of a drug’s effect. There are other indices that estimate strength of effect – the odds ratio and the effect size. They’re often missing from the published articles, which stick with p, but they ought to be there. The odds ratio is for categorical variables like response rate. A value of 1 means no advantage over placebo; we’d like to see an odds ratio of around 3 to call a drug strong. The effect size is for continuous variables like the fall in scores on a rating index. The usual cut-off is 0.25; anything smaller than that is trivial.
I’ve calculated the strength of effect for the five primary efficacy variables with their 95% Confidence Intervals. These numbers should be present in every Clinical Trial article. They’re easy to calculate and clinically meaningful. So Take-Home number one from this post is to calculate them yourself if they’re not provided in the article. As you can see, even if Paxil had separated from placebo, the more important fact is that Paxil doesn’t have a strength of effect in adolescents that would suggest clinical usefulness. These strength of effect values are the ones used in meta-analyses like those done by the Cochrane Collaboration.
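Take-Home number one is easy to act on. Here is a minimal sketch of both calculations in Python – the function names and the illustrative numbers are mine, not figures from the trials:

```python
import math

def odds_ratio_ci(resp_drug, n_drug, resp_plc, n_plc, z=1.96):
    """Odds ratio (drug vs placebo) with a 95% CI, for a categorical
    outcome like responder/non-responder."""
    a, b = resp_drug, n_drug - resp_drug    # drug: responders / non-responders
    c, d = resp_plc, n_plc - resp_plc       # placebo: responders / non-responders
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

def cohens_d_ci(m1, sd1, n1, m2, sd2, n2, z=1.96):
    """Effect size (Cohen's d) with an approximate 95% CI, for a
    continuous outcome like the fall in a depression rating score."""
    # pooled standard deviation of the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, d - z * se, d + z * se
```

For example, 60/100 responders on drug versus 50/100 on placebo gives an odds ratio of 1.5 – well short of the 3 we’d want. If the confidence interval straddles 1 (for an odds ratio) or 0 (for an effect size), the result is compatible with no effect at all.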

I’ve previously said more than enough about why Paxil Study 329 was reported as positive [again see the lesson of Study 329: an unfinished symphony…], but I’ve never taken the time to look at the timeline as it unfolded. The figure below is the simplest version I could come up with:

Paxil came to market after FDA Approval for Adult Major Depressive Disorder in 1992 [chasing Prozac and Zoloft]. Not long after, SmithKlineBeecham began the process of getting approval for its use in children and adolescents with Study 329, which started in April 1994. Because of recruitment problems, they doubled the number of centers and finished the trial in March 1997. Study 329 was a US trial. There was also an International Trial [377] begun a year later. When the results came in, they were disappointing [from their memo]:

But they pressed ahead and published the infamous JAACAP article in July 2001. Study 377 stayed in the file drawer. Notice on the timeline that they commissioned a third trial [701] while Study 329 was being written up [led by three of the Study 329 authors]. I see that as "one for the road," hoping they could get a positive to submit with 329 to the FDA. But it was a bust [placebo beat Paxil]. Shortly before Study 329 was published, Glaxo Wellcome merged with SmithKlineBeecham and GlaxoSmithKline [GSK] came into being:

SmithKlineBeecham became GlaxoSmithKline
It’s tempting to say that GSK inherited the Paxil problems from SKB [see comment]. All of these trials were started by SKB. SKB decided to publish Study 329 and quash Study 377. But when I look at that timeline, SKB may have started the ball rolling, but it gathered no moss in the hands of GSK. GSK knew that 377 and 701 were busts. They couldn’t have not known that 329 was a jury-rigged mess. In case they didn’t know, the FDA told them when they submitted it for a Pediatric Extension; Jon Jureidini told them in his 2003 letter to the JAACAP; and Eliot Spitzer, the NY Attorney General, told them when he successfully sued them and forced them to post the results for all three studies in 2004. And efficacy wasn’t the only problem: there was a strong signal for suicidality as an Adverse Effect in these studies, something GSK published at the end of the drug’s patent life, not as soon as they knew it [Black Box Warning]. And they still dragged their heels on publishing 377 and 701 until their patent was at its end. No wounded victims at GSK – they were right in there squeezing every Paxil sale out of the market right up to the end. That is Take-Home point number two of this post – GSK was guilty and earned the $3 B fine that included Paxil among their transgressions.

My final point is subjective rather than objective. There’s currently a debate about how transparent data transparency needs to be. In a recent article, a GSK President and Iain Chalmers, a champion of data transparency, propose that patient-level data isn’t required, that full study reports should be enough [see the wisdom of the Dixie Chicks…, trojan horse or real reform? the jury’s still out…, just that simple…]. Iain Chalmers recently commented on this blog, saying:
The Lancet piece is about the analyses based on individual patient data [IPD] from similar trials after trial completion. We noted that among other advantages, access to IPD facilitates “more thorough data checking” and identification of “missing information”. This includes “checking on whether trialists are cheating”, and some IPD analyses have detected this. In deciding how best to make IPD available for data and other checks and reliable re-analyses the logical starting place is the substantial experience acquired during nearly three decades of IPD analyses done by various collaborative trialists’ groups. That experience is important not only in illustrating the advantages of IPD analyses, but also in showing how the privacy of individual patients can be assured.

He is suggesting, I think, that the trialists – people like himself and those at the Cochrane Collaboration, like Ben Goldacre and his AllTrials collaborators, epidemiologists like Peter Doshi and Tom Jefferson, campaigners like David Healy, etc. – should perhaps be the ones to have access to the individual patient data because of [1] their proven expertise and [2] their track record at protecting subject confidentiality. That should reassure those of us who want the raw data open to anyone who wants it [like me], and we should stand down. It’s a tempting offer. Lord knows, before I started reading their work, I didn’t know what an effect size was, how to calculate an odds ratio, or what forest and funnel plots were. In fact, the people in this paragraph are my current heroes of the realm, in particular Iain Chalmers:

an n=1 opinion
Since medical school, I have venerated the medical literature. The articles come from our highest centers of learning and are written by the leaders in the field – department heads, section chiefs, acclaimed researchers, our best and brightest. The journals are edited by luminaries and the submissions peer-reviewed by expert volunteers. I believed in that process and trusted it. Many of the names on the articles at the beginning of this post come from those upper levels of the profession.
And yet when I look over the story in this post, I see no shining signs of integrity glistening back. This is just a story about turning a study with a diminutive glimmer into a shining light for no reason except to sell a product – one that was of little to no help, was toxic to many, and fatal to some. All my former confidence in the process was for naught. I never imagined that such a thing could actually happen – but it did. The main author on Study 329 was the Chairman of Psychiatry at Brown University, one of our best. There are plenty of studies from the current era that are flawed in the same way – plenty.
So as much as I respect the trialist pioneers in the previous paragraph, who knows what the future may bring? Who can guarantee their successors will be any less vulnerable than the academicians on those articles above? So I’m sticking to my guns here. Nobody is going to wander up this mountain to get my vote, but I can tell you what it would be if they did. The only way to assure integrity into the future is to put the raw numbers out there for anyone who wants to look at them. As our critics are fond of pointing out, it’s the physicians who write the prescriptions. So it’s the physicians and the scientists and the patients who have the right and the need to know those numbers. Of course we’ll turn to the experts for opinions. But if I’m responsible for giving a medicine to a sick person, I should be able to see the actual unprocessed data myself. Learning to analyze it is my responsibility. Since I can’t count on the literature to produce the "five easy pieces" I need, I should be able to derive them myself. I think the threat that we can and will do that will keep PHARMA in line. That’s my Take-Home number three of this post – an opinion…
  1.  
    Steve Lucas
    October 16, 2013 | 9:21 AM
     

    As I participate in two other blogs I am surprised at how cloistered other doctors are, either through desire or just being overwhelmed by their working schedule. The math here is extraordinary, but does show that this type of analysis is possible by one person.

    Doctors also seem to be oblivious to the cascade effect of adding more and more medication and we now deal with the polypharmacy effect in all aspects of medicine.

    KOL’s, COI’s are old topics on some blogs while other blogs are just now realizing that their leadership is in the pocket of pharma.

    More doctors need to drop the white wall and look at the system they participate in and demand changes for themselves and their patients. Data transparency is a start in this process of regaining the patients trust and returning us to a system where medicine is for the good of all, not just the profit of a few.

    Steve Lucas

  2.  
    October 16, 2013 | 1:11 PM
     

    Okay, so GSK as well as SKB are all psychopaths. I can go with that.

  3.  
    wiley
    October 16, 2013 | 6:12 PM
     

    Off topic, but where else could I share this:

    http://www.theatlantic.com/health/archive/2013/10/sleeptexting-is-the-new-sleepwalking/280591/

    This is a DISORDER— “…it’s being classified as a parasomnia, putting it in the same class of sleep disorders as sleepwalking, night terrors, and bedwetting.”

    “The line is blurring between wakefulness and sleep,” Gelb explains. “So, you’ll be texting one second and the next second you’re asleep, but then you get a ping and the ping awakens you. It’s becoming more of a trend because the line is really being blurred between being awake and being asleep.”

    Really? Doesn’t everyone just drop suddenly into a deep sleep then regain wide-eyed consciousness when the alarm goes off? How long has this line been blurred? Why is it getting blurrier? Just how blurry can it get? Are they suggesting that people really just drift in and out of sleep and are at times roused by noises into a semi-waking state? That’s madness!

    Someone needs to kick this one square in the n*ts, to put it bluntly.

  4.  
    Sarah
    October 16, 2013 | 8:14 PM
     

    1BOM, I have found your blog very informative particularly your near obsessive hunt for the truth of the fraudulent Paxil studies that resulted in FDA approval for use with children and adolescents. I would like to know if the approval process for Celexa (for children and adolescents) was any more trustworthy. I ask because my 20 yr old daughter was a suicide victim of our GP’s belief that Cipramil (citalopram, celexa) was the safest antidepressant available. The GP also assured us that if she didn’t have depression it would do nothing (presumably she meant no harm). Just over 3 years later on her second withdrawal attempt she felt bad, visited the GP and was told that it was the depression returning and she may need to be on antidepressants for the rest of her life. That life ended the following day. I now believe that the withdrawal method recommended was completely wrong and that my daughter did not in fact suffer from depression at all. I also believe that GPs dish out prescriptions in misguided good faith. Why?

  5.  
    wiley
    October 16, 2013 | 9:46 PM
     

    Mercy, Sarah, I’m so sorry for your loss.

    Have you checked out the RxISK website?

    https://www.rxisk.org/Research/DrugInformation.aspx?DrugID=4332&ProductDrugID=-5605&ProductName=Cipramil#18_0_0_0_0

  6.  
    wiley
    October 16, 2013 | 9:55 PM
     

    All the SSRI’s come with the suicide warning for people under 24.

  7.  
    October 16, 2013 | 11:00 PM
     

    Sarah,

    Such a painful story!

    Actually, the only SSRI ever approved for children and adolescents was Prozac, and that was before the Black Box warning came along in 2004. Celexa isn’t the worst offender, but difficulty coming off these drugs [withdrawal] can happen with any of the SSRIs. And the misinterpretation of withdrawal symptoms as the “depression coming back” is all too common.

    I think the “why” is that the GPs have been misinformed and think these drugs are more benign than they are. Wiley’s right. David Healy’s Rxisk site is very informative. SurvivingADs is another good resource [Altostrata’s site]…

  8.  
    AA
    October 17, 2013 | 8:20 AM
     

    Sarah,

    My deepest sympathy for your loss.

    To answer your question, I think it is also extreme arrogance. On another board, someone went to a PC and had symptoms of sleep apnea, including a report by the bed partner that the person snored, that were begging for a referral to a sleep study. Instead, the bleeping professional diagnosed depression and wrote a prescription for an antidepressant. Fortunately, the person is seeking another opinion and did not fill the prescription.

    This incident and the tragic situation with your daughter is another reason why I feel the gross overprescription of psych meds transcends psychiatry.

  9.  
    October 17, 2013 | 1:46 PM
     

    Yes, the symptoms of antidepressant withdrawal syndrome can be severe and persistent. People experiencing them might become desperate and sometimes resort to suicide. (David Foster Wallace killed himself after 1.5 years of Nardil withdrawal syndrome.) This is widely denied by medicine.

    I agree, Sarah, please contribute your case to Rxisk.org.

  10.  
    Sarah
    October 17, 2013 | 6:38 PM
     

    Mickey, Wiley, AA, Altostrata, Thank you all. I did fill a Rxisk report some time ago. My daughter was prescribed Celexa (Cipramil in UK and Ireland) in 2004. At that time we had faith in our GP’s judgement. No black box warning existed in this part of the world. A suicide warning crept in around May 2007 but neither my daughter nor I were alerted to the change by GP or pharmacist. She died 4 months later.