and…

Posted on Thursday 14 March 2013

Years ago I joked, "What they might as well do is make a table with all the DSM diagnoses down the side and all the psychiatric meds across the top and then just fill in the blanks." I was making fun of the endless clinical trials that were choking our journals. One of my more astute partners laughed as she said, "You’re sooooo far behind. They’ve already done that. Now they’re doing them two drugs at a time. Much bigger table!" But of all the combo drug strategies that came along, the one I least understood was adding atypical antipsychotics to antidepressants. That made some sense with severe cases of psychotic depression, but those are not common and most people being given the combination treatment didn’t seem to have those symptoms. What was the rationale for antipsychotics? Beats me. The drugs are anxiolytic, but there are plenty of much safer things to do to treat anxiety. So I’ve generally been suspicious that the atypical augmentation was a marketing ploy. Here, we are presented with a new meta-analysis of the studies to date that met solid study criteria. These are all industry funded studies with one being a joint industry/NIMH study:

Adjunctive Atypical Antipsychotic Treatment for Major Depressive Disorder: A Meta-Analysis of Depression, Quality of Life, and Safety Outcomes
PLoS Medicine
by Glen I. Spielmans, Margit I. Berman, Eftihia Linardatos, Nicholas Z. Rosenlicht, Angela Perry, and Alexander C. Tsai.
March 13, 2013
[full text on-line]

Background: Atypical antipsychotic medications are widely prescribed for the adjunctive treatment of depression, yet their total risk–benefit profile is not well understood. We thus conducted a systematic review of the efficacy and safety profiles of atypical antipsychotic medications used for the adjunctive treatment of depression.
Methods and Findings: We included randomized trials comparing adjunctive antipsychotic medication to placebo for treatment-resistant depression in adults. Our literature search [conducted in December 2011 and updated on December 14, 2012] identified 14 short-term trials of aripiprazole, olanzapine/fluoxetine combination [OFC], quetiapine, and risperidone. When possible, we supplemented published literature with data from manufacturers’ clinical trial registries and US Food and Drug Administration New Drug Applications. Study duration ranged from 4 to 12 wk. All four drugs had statistically significant effects on remission, as follows: aripiprazole [OR, 2.01; 95% CI, 1.48–2.73], OFC [OR, 1.42; 95% CI, 1.01–2.0], quetiapine [OR, 1.79; 95% CI, 1.33–2.42], and risperidone [OR, 2.37; 95% CI, 1.31–4.30].
The number needed to treat [NNT] was 19 for OFC and nine for each other drug. All drugs with the exception of OFC also had statistically significant effects on response rates, as follows: aripiprazole [OR, 2.07; 95% CI, 1.58–2.72; NNT, 7], OFC [OR, 1.30, 95% CI, 0.87–1.93], quetiapine [OR, 1.53, 95% CI, 1.17–2.0; NNT, 10], and risperidone [OR, 1.83, 95% CI, 1.16–2.88; NNT, 8]. All four drugs showed statistically significant effects on clinician-rated depression severity measures [Hedges’ g ranged from 0.26 to 0.48; mean difference of 2.69 points on the Montgomery–Asberg Depression Rating Scale across drugs]. On measures of functioning and quality of life, these medications produced either no benefit or a very small benefit, except for risperidone, which had a small-to-moderate effect on quality of life [g = 0.49].
Treatment was linked to several adverse events, including akathisia [aripiprazole], sedation [quetiapine, OFC, and aripiprazole], abnormal metabolic laboratory results [quetiapine and OFC], and weight gain [all four drugs, especially OFC]. Shortcomings in study design and data reporting, as well as use of post hoc analyses, may have inflated the apparent benefits of treatment and reduced the apparent incidence of adverse events.
Conclusions: Atypical antipsychotic medications for the adjunctive treatment of depression are efficacious in reducing observer-rated depressive symptoms, but clinicians should interpret these findings cautiously in light of [1] the small-to-moderate-sized benefits, [2] the lack of benefit with regards to quality of life or functional impairment, and [3] the abundant evidence of potential treatment-related harm.
I will quickly admit that the ins and outs of meta-analyses are new to me, but they’ve become something of major importance, especially in psychiatry, where there has been so much outside interference, conflict of interest, and scientific misbehavior in our literature [also so many clinical trials]. The point of a meta-analysis is to look at all the studies and separate the ‘wheat from the chaff’ – to come up with something that can be believed. I think this is a good one, so at the risk of living up to my moniker [one boring old man], I’m going to assume that the reader is as meta-analysis-naive as I was and go through this one in a bit of detail – at least the efficacy part [I do have something in mind to say besides writing a meta-analysis primer, but it’s at the end].

The whole point of a randomized placebo-controlled clinical trial is to see if a medication has the desired clinical effect compared to placebo on some predefined outcome, with three usual measures of success: statistics, strength of effect, and overall improvement. The outcomes for the first two focus on some objective parameter [i.e., a rating scale] and measure the percent achieving response or remission, or they measure the magnitude of difference in the actual outcome scores. The strength of the drug’s effect on the non-parametric [yes-no] parameters of response or remission is quantified by the Odds Ratio [OR] or the Number Needed to Treat [NNT]. The strength of the parametric [continuous scale] magnitude of difference is measured by the effect size. If you already know this stuff, this paragraph is boring and simplified. If you don’t know it, it may still be Greek, but maybe it will help below.

  • Statistical: Statistical significance is all you need to know if the outcome is not significant. In that case, anything you see in the numbers is just number noise to be ignored. In a clinical trial, if a difference is statistically significant, all you know is to look further. The differences observed may be meaningful or trivial, but the p value [probability] doesn’t discriminate between the two. Something can be very significant but clinically irrelevant.
  • Strength of Effect: In these studies, the investigator specifies in advance the change in the outcome measure that will count as response or remission. There are two ways to express the results – Odds Ratio and Number Needed to Treat, both calculated from the same things – the percentages of response or remission compared between placebo and drug. So if 5% respond to placebo and 25% respond to the drug, the Odds Ratio is [25 ÷ 75] ÷ [5 ÷ 95] ≈ 6.3 and the Number Needed to Treat is 1 ÷ [0.25 − 0.05] = 5 [there’s a small worked sketch of these calculations right after this list].
    Obviously, the higher the OR and the lower the NNT, the better the response. The NNT is the easiest to say in plain language: you need to treat 5 patients to get one subject who does better on the drug than on placebo. The Effect Size is a bit more complicated and requires more information: it’s the difference between the mean outcome scores on drug and on placebo, divided by the pooled standard deviation.
    The Effect Size varies with the difference in the means and with the measure of variability [the overall variance in the sample]. Obviously, the most accurate computation comes from having the raw data in hand, but there are any number of other ways and formulas to calculate it indirectly, depending on what information you can get. Here’s what the authors said about their computations:
    For continuous outcomes, effect sizes were computed from means and standard deviations when possible. When these were not provided, effect sizes were computed based on means and p-values, or p-values only.
    … meaning that they did the best they could with what they had to work with [the second sketch after this list walks through this arithmetic].

    In this meta-analysis, the focus was on four atypical antipsychotics and their efficacy as adjuncts in depressed patients who had not responded to antidepressants [treatment-resistant depression]. Here’s a greatly truncated version of the forest plots of the Odds Ratios for Response and Remission with the four drugs Aripiprazole [Abilify], Olanzapine/Fluoxetine Combo [Zyprexa + Prozac], Quetiapine [Seroquel], and Risperidone [Risperdal] just to show the variability within and among the studies:


    [forest plots not reproduced here: Odds Ratios with 95% Confidence Intervals]

    Now here are the OR, NNT, and Effect Size composite results [shown without confidence limits]. The summary table isn’t reproduced here, but the numbers are the ones quoted in the abstract above.

    So what do the numbers mean? There’s the rub. What’s a good Odds Ratio? a strong NNT? a robust Effect Size? Click on the links to discover that even Wikipedia won’t directly answer the question. But there are a couple of other things to mention before returning to interpretation.
  • Overall Improvement: The Strength of Effect measures are based on rating scales with specific items related to depression, scored by external raters [HAM-D, MADRS, etc]. There are also self-rating scales and quality-of-life measures completed by the patients themselves. Here’s what the authors said:
    Continuous measures of quality of life included the Quality of Life Enjoyment and Satisfaction Questionnaire [Q-LES-Q] and the Short Form 36 Health Survey [SF-36]. The only continuous measure of functional impairment employed in these trials was the Sheehan Disability Scale [SDS]. As measures of quality of life and functional impairment varied across studies, we pooled such measures together to create an omnibus effect size for each drug, and across all drugs.
    The results were shown in a table [not reproduced here], but they hastened to add:
    With regards to quality of life and functioning, adjunctive quetiapine, aripiprazole, and OFC produced effect sizes that were either not statistically significant or small and clinically negligible in magnitude. Adjunctive risperidone was more efficacious than adjunctive placebo on quality of life/functioning, with a small-to-moderate effect size. The pooled effect across quality of life/functioning measures varied significantly across treatments [QB = 6.88, p = 0.003], with risperidone [g = 0.49] yielding a higher effect than the other three drugs combined [g = 0.11], which did not differ significantly from each other [QB = 4.02, p = 0.13]. However, the effect of aripiprazole on quality of life/functioning was small [g = 0.22] and statistically significant [p = 0.001], whereas the effects of OFC [g = 0.04, p = 0.74], and quetiapine [g = 0.05, p = 0.53] were both not statistically significant and of quite small magnitude. The effect of aripiprazole on quality of life/functioning should be interpreted with caution, as the effect for the drug on the SDS was very small and no longer statistically significant when patients who violated study protocol were excluded from analysis [g = 0.12, p = 0.08]. Similarly, the effect of risperidone on quality of life/functioning should be interpreted tentatively since it is largely driven by post hoc analyses.
    These authors concluded that, from the subjects’ point of view, their lives weren’t particularly improved by atypical antipsychotic augmentation [which, by the way, is the point of treatment in the first place].
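For anyone who wants to see the arithmetic behind the 5%-versus-25% example above, here’s a minimal sketch in Python – just the textbook formulas, not anything from the paper:

```python
# Sketch: Odds Ratio and Number Needed to Treat from response rates.
# The 5% placebo vs 25% drug figures are the hypothetical example used above.

def odds_ratio(p_drug: float, p_placebo: float) -> float:
    """Odds of response on drug divided by odds of response on placebo."""
    return (p_drug / (1 - p_drug)) / (p_placebo / (1 - p_placebo))

def number_needed_to_treat(p_drug: float, p_placebo: float) -> float:
    """Reciprocal of the absolute difference in response rates."""
    return 1 / (p_drug - p_placebo)

if __name__ == "__main__":
    p_placebo, p_drug = 0.05, 0.25
    print(f"OR  = {odds_ratio(p_drug, p_placebo):.2f}")               # about 6.33
    print(f"NNT = {number_needed_to_treat(p_drug, p_placebo):.0f}")   # 5
```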
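And here’s the effect-size arithmetic in the same spirit – the standard Cohen’s d with the small-sample correction that turns it into Hedges’ g, plus the simplest [fixed-effect] way of pooling several effect sizes into one omnibus number, as the authors describe doing for the quality-of-life measures. The group means, standard deviations, and sample sizes below are invented for illustration; this is not the authors’ code, and their actual pooling method may differ:

```python
import math

def hedges_g(mean_drug, mean_placebo, sd_drug, sd_placebo, n_drug, n_placebo):
    """Standardized mean difference between the two arms, with Hedges' small-sample correction."""
    # Pooled standard deviation across the two arms
    pooled_sd = math.sqrt(((n_drug - 1) * sd_drug ** 2 + (n_placebo - 1) * sd_placebo ** 2)
                          / (n_drug + n_placebo - 2))
    d = (mean_drug - mean_placebo) / pooled_sd            # Cohen's d
    correction = 1 - 3 / (4 * (n_drug + n_placebo) - 9)   # Hedges' correction factor
    return d * correction

def pooled_effect(effects, variances):
    """Inverse-variance [fixed-effect] pooling of several effect sizes into one omnibus estimate."""
    weights = [1 / v for v in variances]
    return sum(w * g for w, g in zip(weights, effects)) / sum(weights)

if __name__ == "__main__":
    # Hypothetical MADRS improvement: drug arm improves 2.7 points more than placebo,
    # roughly the mean difference reported in the abstract, with an invented SD of 9.
    g = hedges_g(mean_drug=11.0, mean_placebo=8.3,
                 sd_drug=9.0, sd_placebo=9.0,
                 n_drug=180, n_placebo=180)
    print(f"Hedges' g = {g:.2f}")   # about 0.30, in the range the abstract reports

    # Pooling three invented quality-of-life effect sizes into one omnibus number
    print(f"pooled g = {pooled_effect([0.05, 0.22, 0.49], [0.01, 0.02, 0.03]):.2f}")
```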
They did something else that’s an addition to meta-analysis that I didn’t know about and couldn’t tell you how to do, but there’s a nice explanation in Ben Goldacre’s TED Talk [at 10:22]. They looked for publication bias using the "trim and fill" method and a funnel plot. They concluded that there were some missing studies, three to be exact. This is a major problem in meta-analyses. Having only what’s been published when the negative studies are in a file drawer is like a batting average based only on the good days. The paper’s funnel plot shows the imputed missing studies in black.
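The plot itself isn’t reproduced above, but the idea is easy to sketch: each study’s effect estimate goes on the horizontal axis and its precision [standard error] on the vertical, and a lopsided scatter – a hole where the small negative studies should be – suggests unpublished results. The numbers below are invented for illustration; this is not the authors’ data or their trim-and-fill code:

```python
import matplotlib.pyplot as plt

# Invented per-study log odds ratios and standard errors, purely for illustration.
log_or = [0.8, 0.6, 0.9, 0.4, 1.1, 0.7, 0.5]
se = [0.20, 0.35, 0.15, 0.45, 0.30, 0.25, 0.40]

plt.scatter(log_or, se)
plt.gca().invert_yaxis()                                   # convention: most precise studies at the top
plt.axvline(x=sum(log_or) / len(log_or), linestyle="--")   # crude [unweighted] summary estimate
plt.xlabel("log odds ratio")
plt.ylabel("standard error")
plt.title("Funnel plot sketch")
plt.show()
```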
So what about all of this? My read: Zyprexa+Prozac? A total bust. The others? Statistically significant but of only slight clinical significance, particularly when you look at the side-effect burden. You have to treat around nine patients before you get one who does better on the drug than on placebo, and on the self-rating scales the patients couldn’t really tell you that they noticed. There’s nothing to write home about here.
The reason I went through parsing this article and dumbing it down was to make a simple point. This is a thorough meta-analysis that represents untold hours of work on the part of the authors. They had to wrench their numbers out of these articles using indirect and ponderous conversion formulas and doing some reading between the lines. In fact, there’s a new generation of scientists going over our medical literature using some pretty fancy detection methods, like the funnel plots to detect hidden studies. I guess they’re our CSI [crime scene investigation] types. But the problem is that they can’t investigate until they’re already working on "cold cases." I’m not complaining – they’re a great addition. But look at the years these studies were published: the patent life of the drugs in question is behind us. The profits are in the bank or long ago distributed to CEOs and stockholders. Augmenting antidepressant failures with atypical antipsychotics was a multi-billion-dollar business. It’s still going on.
So I’m glad we have this meta-analysis and I appreciate all the work that went into it – but…
  • if we had the raw data as these studies were published, they could’ve been checked in a timely fashion.
  • and if we had the raw data as the studies were published, this meta-analysis could’ve been done without all the detective work it took to excavate the facts.
  • and if we’d had the raw data when it should’ve been available, a huge cohort of people wouldn’t have taken unnecessary and expensive medications that made some of them sick and didn’t help very many.
  • and if we’d had the raw data up front, the drug companies and the KOLs wouldn’t have made so much ill-gotten profit and there would be fewer Mercedes in the parking lots.
  • and if we had the raw data from the start, the medical literature could’ve stayed scientific like it was supposed to be.
  • and…
  1.  
    March 14, 2013 | 10:48 PM
     

    Allow me to point out that, in addition to their own adverse effects, “adjuncts” are prescribed when an antidepressant isn’t doing anything positive. The patient then bears the burden of TWO drugs for all-but-invisible improvement: An antidepressant that has been demonstrated useless and a questionable atypical antipsychotic.

    On what planet is this good medicine?

  2.  
    Brett Deacon, PhD
    March 14, 2013 | 11:09 PM
     

    1BOM, thanks for another excellent post. I have just one point to add: as you have discussed previously, the raw data from industry trials isn’t necessarily THE raw data. Recall the early Prozac trials in which the investigators were pressured, against their better judgment, to spin suicide attempts as “overdose.” Study 329 employed a similar misclassification strategy of suicide attempts as “emotional lability.” Even assuming we had our wish and got access to patient-level data from all industry trials, the data has already been “massaged” in a product-friendly manner.

    Industry trials use every design tweak under the sun to find an advantage of the active drug over placebo. They employ drug run-in periods, assess adverse effects in ways designed to minimize them, and assess efficacy almost exclusively using clinician rating scales that show stronger drug effects than self-report measures. In my lecture on Wednesday, I told my undergraduate abnormal psychology students about the use of placebo washout periods in antidepressant trials and the entire class spontaneously erupted in derision. Yet this is what passes for “state of the art” drug trial methodology. I would be laughed out of my profession if I studied psychotherapy in such a manner.

    You wrote, “The whole point of a randomized placebo controlled clinical trial is to see if a medication has the desired clinical effect compared to placebo on some predefined outcome.” With all due respect, this is not the point of industry trials, and you know it. The point of industry trials is to *demonstrate* an advantage of the active medication over placebo, without regard to the actual clinical effects of the medication. The point of a clinical trial conducted for the sake of science is as you said, but industry trials are conducted for marketing purposes. And when they are published in scientific journals, they are, as David Healy has described, wolves in sheep’s clothing.

    The problem for meta-analysts like Glen Spielmans is that it is impossible to tell what proportion of the available information is marketing v. science. And even unfettered access to patient-level industry data does not solve the problem. Or, if we are to believe Robert Gibbons, perhaps it does.

  3.  
    March 15, 2013 | 12:12 AM
     

    Alto and Brett,

    Both points well made and taken. I was sort of alluding to the limits of raw data in my last post. I say raw data is better than no raw data, but there’s more, sure enough. And as for Gibbons, his ploy is “I get the raw data cause I’m so special. You don’t cause you’re not me.”

  4.  
    Bernard Carroll
    March 15, 2013 | 4:07 AM
     

    The very idea of giving antipsychotic drugs to NONPSYCHOTIC depressed patients is so foolish, it can only be explained by academics who should know better buying in to the commercial agenda. Let’s name KOL names: Charles Nemeroff from Emory/Miami (PubMed ID 16760927; 16336032); Mark Rapaport from Cedars Sinai/Emory (PubMed ID 16760927); Charles DeBattista from Stanford (PubMed ID 19453199); George Papakostas from MGH/Harvard (PubMed ID 22550279; 19687129); J. Craig Nelson from UCSF (PubMed ID 19183784; 19192475; 19287552; 19687129); Maurizio Fava from MGH/Harvard (PubMed ID 18344725; 21731833; 19192475; 19287552; 19656577; 19573476; 22286203); Madhukar Trivedi from UT Southwestern (PubMed ID 19192475; 18344725; 19287552); Michael Thase from U Penn (PubMed ID 18344725; 19287552; 19192475); Martin Keller from Brown (PubMed ID 16760927). And that’s just for starters….!

  5.  
    March 15, 2013 | 4:58 AM
     

    If it looks like subjective guess-work, and sounds like subjective guess-work…

    Duane

  6.  
    March 15, 2013 | 5:01 AM
     

    And how brilliant is it to use a dopamine antagonist as an adjunct to treat depression?!

    A person does *not* need to be a doctor to come to the stark realization that this *makes no sense*.

    Beam me up!

    Duane

  7.  
    berit bj
    March 15, 2013 | 5:17 AM
     

    Reviews of society’s costs of medication errors, overmedication, adverse reactions, iatrogenic illnesses, early death, have found that these costs “may be underestimated”, to quote from a review undertaken by researchers at The Nordic School of Public Health at Gothenburg, http://www.nhv.se
    When personal losses, environmental damage and the rampant corruption of science, politics and ethics are on the scales, the benefit to risk ratio weighs heavily against Big Pharma et al in academia, the medical professions and politics.

  8.  
    berit bj
    March 15, 2013 | 5:30 AM
     

    … and if we’d had responsible governmental oversight and regulation of the industry, the clinical trial literature could have prevented an ocean of injury and death and sorrow, cover-ups and litigation by grieving, angry mothers and defrauded states.

    Thank you 1BOM for one more excellent exposition!

  9.  
    wiley
    March 15, 2013 | 2:20 PM
     

    Mickey, instead of using the term “dumbed down”, may I suggest “putting this in lay terms”? On my website I’m working on posts to put in lay terms many facets of psychiatry, psychology, sociology, and health that most of the unwitting victims of harmful psychotropics are unaware of while being given dangerous medications and having their minds colonized by the marketing and popular portrayals of “mental illness” and “treatment”. I’m linking to and posting an excerpt to this post because it’s a short and sweet introduction.

    Successfully educating oneself, in order to make truly informed decisions about diagnoses and treatments is key to taking control of one’s mental life and health and maintaining autonomy. Unfortunately, for most persons there is a tsunami of bad information, false information, and marketing images that need to be evaluated in order to dismiss their impacts.

    Trusting authority is what most working poor, and much of the middle-class has been taught to do. Betrayal is hard to deal with and too many people have learned the lessons of bio-psychiatry the hard way, on top of having learned to blame themselves for their problems and difficulty or inability to solve them. Individuals are being yoked with the full weight of dysfunction and pathology in their societies and communities, and are being locked into a vicious circle—

    1) If you’re suffering overwhelming emotions and mental confusion it’s because your brain is defective.

    2) If you’re not getting diagnosed and medicated, you’re being irresponsible and ignorant or stupid.

    3) If you reject what you are told by mental health professionals, you’re “anti-psychiatry,” a Luddite, a yokel, and possibly a Scientologist.

    4) If you don’t respond to treatment, you are “treatment resistant” (and oh, aren’t you a problem child).

    5) And if a drug is having bad effects— even if the effects are worse than what the patient wanted to have treated, it’s likely that the prescriber will raise the doses, and/or add more drugs and/or prescribe a more severe class of drug.

    6) If a patient declines a drug or treatment and wants to discontinue the medication, see 2 because 1 and/or 3.

    If a patient is hurt by a diagnosis and/or drugs they have little to no recourse or redress and can be forced by a judge to take the medications that are hurting them while their stresses are magnified because the drug(s) hamper their ability to deal with their problems and cause additional problems.

    It’s all an insidious abuse of civil rights, and is medical malpractice that pays big for a select few and costs everyone else.

    Life is too short for this. Our relationships with ourselves and others is too primary to allow unhelpful or harmful chemical disruption, marketing bullshit, and professional lies to threaten us when we are most vulnerable and/or gullible and to create for us a false identity that denies our experience and awareness that can be used against us legally.

    Asking WTF(?!) is a healthy beginning to regaining agency and self-understanding.

    If I got it right, you should be getting a ping-back, Mickey.

  10.  
    wiley
    March 15, 2013 | 2:22 PM
     

    Oh yeah, “lacking insight” needs to be on that list.

  11.  
    wiley
    March 15, 2013 | 2:25 PM
     

    Becky, I also posted a link to your article “jon mcclellan’s tes­ti­mony to u.s. sen­ate ” and an excerpt. My contact information is on the top bar, if you have any questions or issues with it. Hope you got a ping-back.

  12.  
    Melody
    March 15, 2013 | 5:07 PM
     

    Wiley–

    Absolutely brilliant comment. When you say “when we are most vulnerable and/or gullible “, I think you speak not only to ‘mental illness’, but to most illness. Because the doctor/patient relation is founded on asymmetrical information, it is also a cause of rising healthcare costs. Trusting authority–believing our doctors would not order tests or prescribe medicines NOT in our best interest (but rather for financial gain)–is one of the things that keeps us overmedicating and overpaying. WTF! indeed is a good beginning . . . but it is incredibly difficult to get others to join the choir.

  13.  
    March 15, 2013 | 11:14 PM
     

    Thank you wiley, I appreciate that.

  14.  
    Steve
    March 15, 2013 | 11:50 PM
     

    Here’s something that I see in my practice a lot: patients are on stimulants like Concerta AND an antipsychotic.

    So, they’re on a drug that blocks dopamine and one that liberates it. Someone please explain that one to me.

  15.  
    Secuti
    March 16, 2013 | 8:05 AM
     

    Seems this strategy of augmentation somehow makes the treating physicians feel better about things (by 2 – 3 pts on the scale!) without any subjective improvement in the patient. Studies should be titled…

    “In Treatment Resistant Depression, adjunctive Antipsychotics improve the psychiatrist’s outlook, without meaningfully altering disease course and with the cost of some harm”

  16.  
    Secuti
    March 16, 2013 | 12:40 PM
     

    Perhaps this disparity in perceived benefit (drs vs patients) is due to the raters picking up the sedative effects of antipsychotics, to which these rating scales are sensitive. You could easily pick up an extra 2-3 pts with this – a side effect which is not necessarily beneficial for patients. As Dr Healy has quipped, a couple of glasses of merlot would produce similar results… with less long-term damage.

  17.  
    March 16, 2013 | 7:07 PM
     

    Wasn’t there a movement a couple of years ago to rebrand the “antipsychotic” category to make the drugs more palatable for wider prescription?
