This is an advance on-line release of a new meta-analysis of SSRIs in pediatric depression. Be warned that I’m not going to try to vet all their analytic techniques [as they do some pretty heady mathematical pyrotechnics to derive their various curves]. I’ll just say that insofar as I understood it, it appeared legit to me. They looked at the results of thirteen pediatric trials and compared them to forty adult trials:
Systematic Review and Meta-Analysis: Early Treatment Responses of Selective Serotonin Reuptake Inhibitors in Pediatric Major Depressive Disorder
by Varigonda AL, Jakubovski E, Taylor MJ, Freemantle N, Coughlin C, Bloch MH
Journal of the American Academy of Child & Adolescent Psychiatry 2015, published on-line in advance.
Objective: Selective serotonin reuptake inhibitors [SSRIs] are the first-line pharmacological treatment for pediatric major depressive disorder [MDD]. We conducted a meta-analysis to examine [1] the time-course of response to SSRIs in pediatric depression, [2] whether higher doses of SSRIs are associated with an improved response in pediatric depression, [3] differences in efficacy between SSRI agents; and [4] whether the time-course and magnitude of response to SSRIs is different in pediatric and adult patients with MDD.
Method: We searched PubMed and CENTRAL for randomized controlled trials comparing SSRIs to placebo for the treatment of pediatric MDD. We extracted weekly symptom data from trials in order to characterize the trajectory of pharmacological response to SSRIs. Pooled estimates of treatment effect were calculated based on standardized mean differences between treatment and placebo group.
Results: Meta-analysis included 13 pediatric MDD trials with a total of 3,004 patients. A logarithmic model indicating the greatest benefits of SSRIs occurred early in treatment best fit the longitudinal data {log [week] = 0.10 [95% CI, 0.06 to 0.15, p<0.0001]}. There were no significant differences based on maximum SSRI dose or between particular SSRI agents. SSRIs were demonstrated to have a smaller benefit in pediatric compared to adult MDD.
Conclusion: Treatment gains in pediatric MDD are greatest early in treatment and are on average minimal after 4 weeks of SSRI pharmacotherapy in pediatric MDD. Further research is needed using individual patient data to examine the power of early SSRI response [e.g. 2-4 weeks] to predict outcome in short-term pharmacological trials.
[reformatted to fit]
The SMD [Standardized Mean Difference] is the Effect Size [the drug–placebo difference in mean change from baseline, divided by the pooled Standard Deviation – essentially Cohen’s d] using the study’s main depression scale [HAM-D, MADRS, CDRS-R, or K-SADS-P] from the Observed Cases.
"The larger the effect size, the greater the difference between treatment groups in the outcome measure. There are no universally accepted standards for describing values of d. However, it has been suggested that an effect size of 0.8 (8/10ths of a standard deviation) is “large,” a value of 0.5 (half a standard deviation) is “medium,” and a value of 0.2 (one fifth of a standard deviation) is small."from the BMJ Endgames by Philip Sedgwick
The graph on the left [above] is the raw weighted compendium of all thirteen studies and shows that the major change is in the first several weeks. The graph on the right compares the response of adults and children [this time they used curve fitting mathemagic]. The point is obvious – the pediatric response is dramatically smaller, less than half that of adults. By the classic interpretation of Effect Sizes, it falls in the small or weak range.
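For the curious, the logarithmic model in the abstract amounts to regressing the weekly drug–placebo SMD on log[week]. Here’s a back-of-the-envelope sketch of that kind of fit; the weekly values are made up to follow such a curve and are not the paper’s actual data:

```python
import numpy as np

# Hypothetical weekly drug-placebo SMDs, invented to follow a log-shaped curve
weeks = np.array([1, 2, 3, 4, 6, 8, 10, 12])
smd = np.array([0.02, 0.09, 0.13, 0.16, 0.20, 0.23, 0.25, 0.27])

# Fit SMD = beta * log(week) + intercept. A logarithmic curve climbs quickly
# at first and then flattens, i.e. most of the separation from placebo
# appears in the first few weeks.
beta, intercept = np.polyfit(np.log(weeks), smd, 1)
print(f"beta = {beta:.2f}, intercept = {intercept:.2f}")  # beta = 0.10 for these made-up points

# Under the fitted model, the three weeks from week 1 to week 4 add more
# drug-placebo separation than the eight weeks from week 4 to week 12.
early_gain = beta * (np.log(4) - np.log(1))
late_gain = beta * (np.log(12) - np.log(4))
print(f"gain weeks 1-4: {early_gain:.2f}; gain weeks 4-12: {late_gain:.2f}")
```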
[recolored and reformatted to fit]
This second set of graphs is dramatic – the therapeutic effect evaporates as we come forward in time. I think the effect demonstrated is real, but I want to chase their mathemagic a bit before commenting too much here. This is the elusive phenomenon that has been variously attributed to an increasing placebo effect, differing patient populations, the coming of the CROs – but no explanation has received universal support. And these investigators, who seem skilled and competent, were hampered by having to use indirect methods because they didn’t have full data access.
There are interesting findings scattered throughout this meta-analysis. The choice of drug or dose didn’t seem to matter [think about all the time clinicians expended listening to the comparative hype spread under our noses by the pharmaceutical reps and KOLs].
I wanted to give credit where credit is due. This may be the first time I’ve read the part in red below stated so matter-of-factly, particularly in the JAACAP. Also, it was "reviewed under and accepted by deputy editor John T. Walkup, MD" rather than Editor Andrés Martin [for those of you who follow the retract Paxil Study 329 efforts].
"Systematic reviews and meta-analyses regarding SSRI pharmacotherapy in pediatric depression have been common. These meta-analyses have demonstrated that SSRI pharmacotherapy is effective for pediatric depression. SSRIs as a class provide around a 25% greater chance of responding over the short-term when compared to placebo and have a number needed to treat [NNT]=10. Meta-analysis of response rates in pediatric antidepressant trials are high [61%], but so is the response rate to placebo [50%]. Systematic reviews have demonstrated that treatment estimates of SSRI efficacy were previously exaggerated by publication bias and time-lag bias in the distribution of negative trial results."
And the article lays out the AACAP’s practice guidelines for SSRIs. That last line in red is absolutely correct. In light of the findings of the much lower efficacy of these drugs in adolescents, it seems ludicrous to have the same guidelines for both adults and children given their near inertness in the latter:
"Regarding the use of SSRIs in pediatric depression, the American Academy of Child and Adolescent Psychiatry [AACAP] Practice Parameter recommends that “patients should be treated with adequate and tolerable doses for at least 4 weeks. Clinical response should be assessed at 4-week intervals, and if the child has tolerated the antidepressant, the dose may be increased if a complete response has not been obtained. At each step, adequate time should be allowed for clinical response, and frequent, early dose adjustments should be avoided. However, patients who are showing minimal or no response after 8 weeks of treatment are likely to need alternative treatments. Furthermore, by about 12 weeks of treatment, the goal should be remission of symptoms, and in youths who are not remitted by that time, alternative treatment options may be warranted.” These AACAP recommendations mimic American Psychiatric Association [APA] Practice Guidelines in adults with MDD."
Another finding of interest – that the bulk of the effect was in the first week [or two]. That’s in stark contrast to the reports in adults or my experience using them. They want to propose shorter clinical trials or something like that. I wondered if it wasn’t evidence of the response to an active placebo. Whatever the case, I’m reporting this study because of its documentation of the low efficacy in adolescents compared to adults, and will get back to these other findings after some reflection. It feels like a study worth looking into more deeply…
I haven’t yet read the full paper, but I scanned the list of included studies and found Study 329, as well as others known for spun findings, falsified data, and exceptionally poor methodology (e.g., Emslie et al., 2002). The quality of this research is so poor that I am skeptical of meta-analytic conclusions based on it.
Brett,
Thanks for that comment. It brings up something I actually spent time thinking about while writing this. As you probably know, the Study 329 Data is posted on the Internet and I had an amateur’s look at it back in 2012 [a movement… etc]. Since then, I’ve been part of a team that has had full access to the 329 data, so I’ve seen it all. It wasn’t the data or study that was the problem. It was what they did to/with it. I haven’t seen the information from the other trials, but my guess is that there too, the problem isn’t the data, it’s in the deceptive presentation and analyses. We won’t know that for sure until we see it all, but my guess is that that’s how things will play out. I don’t think PHARMA would risk outright fraud, and these studies do go to the FDA in some detail. This study makes an attempt to infer the raw data and move from there. But that’s why I equivocated about some of their analyses where I’m not sure what they used. However, in the basic efficacy story, I felt like they were on solid ground. I plan to look further to see if I can say this with more conviction.
So I take your point, but am less skeptical based on what I’ve seen so far. Thanks for your comment. I hope I can get closer to nailing down your point in the near future.
Mickey, I share your interest in trying to understand the actual safety and efficacy of “antidepressants.” This job is not for the faint of heart! I have waded through this literature and have found it a challenge to digest, in part because of publication bias and problematic design and reporting practices in clinical trials. One reasonable solution is to try to cut through the spin by analyzing the raw data oneself, as you have done with Study 329. For me, this solution is still imperfect because of two problems. The first is that antidepressant trials are often designed in such an overtly biased manner that it is difficult for me to construe their results as a valid scientific test. The second issue is that there is reason to believe the raw data available in places like the GSK clinical study register may not perfectly correspond to the “actual” raw data.
Regarding the second point, we know that in the infamous Study 329 (Keller et al., 2001), investigators misclassified suicidal ideation among children taking paroxetine as “emotional lability.” GSK paid a $3 billion fine to the US Department of Justice in 2012 for healthcare fraud, and Study 329 was cited as an example of this. Regarding trial details provided to the FDA, it appears these don’t always correspond to the data published in journal articles. Jonathan Leo (2006; http://www.baumhedlundlaw.com/media/ssri/PaxilConsumerFraudClassAction/SSRI%20Trials.pdf) discovered that two suicide attempts among children taking fluoxetine reported in the FDA medical review were not mentioned in the Emslie et al. (1997) article. Leo also reported that some suicidal events on sertraline reported to the FDA were omitted in the Wagner et al. (2003) article. All three of these published studies were among the 13 articles meta-analyzed in the Varigonda et al. analysis in Journal of the American Academy of Child & Adolescent Psychiatry. Participants from these three studies account for 637 of the 3,004 patients (21.2%) in the meta-analysis. Notably, the authors used data reported in published articles, not obtained from the FDA.
I’m sure you recall this post of yours in 2013: (http://1boringoldman.com/index.php/2013/01/18/at-the-end-of-the-day/). Discovery documents obtained by David Healy show that Eli Lilly investigators were pressured to reclassify suicidal events among children on fluoxetine as overdose (http://www.healyprozac.com/Trials/CriticalDocs/cbouchy131190.htm).
Based on these reports, I think there is reason to question the extent to which the available raw data – even that obtained from the FDA, let alone data from published studies – correspond to the “actual” results of these trials. It seems in some cases, the raw data themselves have already been “massaged” prior to statistical analysis. If this is true, the problem cuts deeper than mere deceptive presentation and analysis.
Regarding the first point, there are serious design flaws in industry-funded antidepressant trials that are all but ignored in the scientific community. The double-blind is almost never assessed, which means we cannot rule out the possibility that the apparent advantage of antidepressants over placebo (which is small and inconsistent) is the product of unblinding. The few studies in the antidepressant literature to assess the double blind reliably find that it is easily penetrated by patients and study personnel. Antidepressant trials always employ placebo washout periods, some use “drug run-in” periods in which drug non-responders are excluded from the trial before it begins, and some simply “replace” early drug non-responders with early drug responders. These and other features stack the deck in favor of the drug over placebo. And all of these issues translate into potential problems with the validity of the raw data per se, not the manner in which it is analyzed and spun.
I don’t offer these observations to argue anything in particular about the new article that was the subject of your blog post. Rather, what inspired my comment was learning of yet another meta-analysis that included results from studies known to be “dodgy,” as they say in Australia. Mickey, I always appreciate your critical analysis and efforts to keep us abreast of important developments in the literature. I look forward to reading more about your thoughts on these issues.
Best,
Brett
I have a strong suspicion that children show lower response rates to SSRIs because they are less likely to break the blind. All the data support this and it also makes logical sense.
With children, information on side effects is relayed to and understood mostly by their parents, so you are one person removed from a real understanding of side effects. The parents themselves will also be less likely to break the blind because they are not personally experiencing the side effects. Children probably also have less understanding of the RCT process and so are more likely to assume they are on the active drug.
As for the data, we see higher placebo response rates with children. This is exactly what you would expect if more children (and parents) taking placebo thought they were taking the active drug.
I would love to see trials examining the extent of blind-breaking in both children and adults. This data is so easy to collect that I find it hard to believe it is not being collected, given the controversy over blind-breaking and how long this has been known. I consider the lack of this data in studies performed over the last ten years to indicate either incompetence or deliberate deception.
I don’t have much information about effectiveness, but I can tell you that 10 or 20 years down the road, those who have been on antidepressants since childhood have a very, very difficult time going off them; don’t have any idea of their “true” personalities; and often mourn the sacrifice of their sexuality to this questionable drug experiment.
I was just wondering: when they talk about children in these studies, are they including adolescents in this description, or describing the 12-and-under crowd? Are there studies that show how adolescents fare with SSRIs – more like the children or more like the adults?
Mickey,
Isn’t there a problem underlying this research in that the category of major depressive disorder has very poor reliability/validity? I remember reading about the reliability of MDD in the DSM-5 field trials, and it was so bad (a kappa close to 0.2) that it was just embarrassing. Hopefully reliability was better in the past, but how much better…