This is a lot for a blog post. I’ve tried to only show the essence of things, but it’s still a lot of words and some confusing tables. There are a number of sources: a recent lawsuit, a twenty-five-year-old FDA drug approval, and an eighteen-year-old journal article. While a lot of the information is old, the issues raised are as much on the front burner today as they should have been in the past. So while the post is dense, I think the information adds important details and underscores why the AllTrials campaign is an essential ingredient in any chance we have for effective Clinical Trial reform. I have no dog in this hunt that relates to the lawsuit itself – that’s for the courts. But it’s through discovery in suits like this that we gain valuable documentation of the story behind what we’ve previously been able to see as physicians, observers, and patients. As is apparent from my title [zoloft: the approval I…], there’s more coming…
Pfizer submitted its new drug application ["NDA"] to the FDA in 1990. As part of the application, six placebo-controlled trials were presented to the FDA. Of the six clinical trials, four showed that Zoloft was no more effective than placebo in treating depression and two indicated that Zoloft had a slight positive impact on depression. The two studies that showed that Zoloft was more effective than placebo in treating depression, however, were severely flawed.

In the first trial that supposedly demonstrated efficacy, researchers enrolled 369 patients in a double-blind trial to test the efficacy of Zoloft at 50mg, 100mg, and 200mg against placebo. Within the treatment groups, i.e., those taking Zoloft and not placebo, about 50% of the patients quit before the trial was completed: 22% because of side effects, 18% because it was not effective, and 10% for unexplained reasons. This large drop-out rate reduced the available patient population to 191. For the remaining 50%, i.e., the population that did not quit Zoloft, the trial tracked patient changes in depression based on the Hamilton Rating Scale for Depression ["HAM-D"] over the course of six weeks. The HAM-D scale is a multiple-item questionnaire used to measure the severity of a person’s depression. It is usually composed of 17-29 questions where the rater scores specific areas on a 0-5 point scale. A person’s HAM-D rating can be anywhere between 0 and 70 depending on the version of the scale used.

The trial revealed only a slight improvement in those taking Zoloft over those taking placebo. During the first four weeks of treatment, the study did not show any statistically significant difference in HAM-D scores between those taking Zoloft and those taking a placebo. Then, during weeks 5 and 6, the data showed a small statistically significant difference for the 50mg treatment group, although there was no significance in the 100mg or 200mg groups.
The study showed that, on average, a person taking Zoloft had a HAM-D scale improvement of about 2.3 points above those taking placebo after six weeks, which, depending on the scale being used, means that Zoloft, with its many documented adverse side effects, appeared to be better than placebo by 1-5% after six weeks. This is an extremely small treatment effect and it was not associated with dosage…
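The arithmetic behind that rough percentage is worth making explicit. A minimal sketch, with the caveat that the scale maxima below are illustrative assumptions [HAM-D versions vary in item count and maximum score]:

```python
# Expressing the 2.3-point drug-placebo difference as a fraction of the
# HAM-D scale's maximum score. The two maxima are illustrative examples
# of shorter and longer versions of the scale, not figures from the trial.
difference = 2.3  # mean HAM-D improvement of Zoloft over placebo at week 6

for max_score in (52, 70):
    pct = 100 * difference / max_score
    print(f"On a 0-{max_score} scale, 2.3 points is about {pct:.1f}%")
```

However the scale maximum is chosen, the difference lands in the low single digits of the full range – which is the point.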
This is from the fixed dose trial they mentioned – a table showing the drop-out rates and reasons for dropping out:
And this is the HAM-D data table with my graph made from that table:
Sertraline safety and efficacy in major depression: a double-blind fixed-dose comparison with placebo.
by Fabre LF, Abuzzahab FS, Amin M, Claghorn JL, Mendels J, Petrie WM, Dubé S, and Small JG.
Biological Psychiatry. 1995; 38:592-602.
In a 6-week, randomized, double-blind, multicenter trial, sertraline 50 mg, 100 mg, or 200 mg, or placebo, was administered once daily to 369 patients with DSM-III-defined major depression. Efficacy variables included changes from baseline scores for total Hamilton Rating Scale for Depression [HAMD], HAMD Bech Depression Cluster, Clinical Global Impressions [CGI] Severity, CGI Improvement, and Profile of Mood States Depression/Dejection Factor. For the evaluable-patients analysis, all sertraline groups showed significantly [p < 0.05 or better] greater improvements in all efficacy variables except one when compared with the placebo group. For the all-patients analysis, all efficacy variables in the 50 mg group were statistically significantly [p < 0.05] better than placebo. Side effects increased with increasing dosage but were usually mild and well tolerated. The results of this study show that sertraline 50 mg once daily is as effective as higher dosages for the treatment of major depression with fewer side effects and therapy discontinuations.
When you look at my HAM-D graph and the one in the published paper, it’s kind of hard to imagine they’re from the same study. And then when you read the F.D.A. reviewer’s comment about a dose response curve…
… and compare it to the published article’s version, "Side effects increased with increasing dosage but were usually mild and well tolerated. The results of this study show that sertraline 50 mg once daily is as effective as higher dosages for the treatment of major depression with fewer side effects and therapy discontinuations," you have to uncross your eyes and wonder, "How did they do that?" The answer is in the table’s footnote and the graph’s parentheses. It’s the phrase "evaluable patients." The published paper mentions "evaluable patients" sixteen times. This is the most pertinent instance:
A total of 178 patients [48%] discontinued prematurely from this study [Table 2]. Side effects and lack of efficacy were the most frequently cited reasons for discontinuation. The number of patients in each group who discontinued because of side effects prior to study day 11 and who were thus excluded from the evaluable-patients efficacy analysis was 4 [4%] in the sertraline 50 mg/day group, 10 [11%] in the 100 mg/day group, 23 [25%] in the 200 mg/day group, and 2 [2%] in the placebo group.
The version we have from the F.D.A. N.D.A. is not the primary data and is suspect itself because of the ubiquitous L.O.C.F. correction factor [Last Observation Carried Forward]. The justification for using it is the assumption that drop-outs are random and equal among groups. That is clearly not the case here [see either version of the tables above or the comments about those first eleven days]. The validity of throwing out the first eleven days and basing conclusions on "evaluable patients" in the published study is only justifiable if you want to monkey with the results, as is the averaging ["Sertraline Combined"]. What they gained is obvious. They brought the insignificant responses in the higher doses into significance, and spun them into justifying the unjustifiable conclusion, "The results of this study show that sertraline 50 mg once daily is as effective as higher dosages for the treatment of major depression…"
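For readers who haven’t run into L.O.C.F. before, it can be sketched in a few lines. This is a generic illustration with a hypothetical patient, not data from the trial:

```python
# A minimal sketch of Last Observation Carried Forward (LOCF), assuming
# a list of weekly HAM-D scores where None marks visits after drop-out.
def locf(scores):
    """Fill each missing visit with the last score actually observed."""
    filled, last = [], None
    for s in scores:
        if s is not None:
            last = s
        filled.append(last)
    return filled

# A hypothetical patient who improves slightly, then quits after week 3
# because of side effects. LOCF treats the week-3 score as if it held
# through week 6.
patient = [24, 21, 19, None, None, None]
print(locf(patient))  # [24, 21, 19, 19, 19, 19]
```

The method only washes out if drop-outs are random and balanced across groups. When one arm loses a quarter of its patients to side effects in the first eleven days, carrying their early scores forward systematically distorts the comparison – which is the heart of the objection above.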
There’s plenty more to be gleaned from these documents, but I’m going to take a break. This is enough to show the extent of the jury-rigging of a Clinical Trial report from twenty-five years ago, the dawn of time for the era of new psychopharmacology in psychiatry. To be honest, today, the South is in a deep freeze. It was 19° in the sun on my front porch the last time I looked, and the little gas heater in my office is no competition for the roaring fire in the living room [and, the next episode of Helen Mirren’s Prime Suspect is singing from Netflix]. I’ll have to say that I didn’t expect this much sleight of hand that long ago, but it is what it is as they say. And what it is is really bad. Much more later…
Nabbed this one for my website, Dr. Mickey. Found a really nice picture to go with it. Am looking forward to more.
Stay warm.
Baum Hedlund may have at last struck gold. Any antidepressant manufacturer can be similarly sued for consumer fraud.
I remember the NIH study in 2002 that showed St. John’s Wort was no more effective than placebo in major depression, but neither was Zoloft, a finding that was minimized by the study’s principal investigator, Duke’s Jonathan Davidson, whose career has been bankrolled by Big Pharma.