smoke and mirrors…

Posted on Friday 27 May 2016

When I began to look at Clinical Trial reports a few years back, I had to relearn how to approach our literature. The studies are now largely industry funded, and that turns out to mean that the pharmaceutical companies control every step of the process, producing distorted and unreliable efficacy and side-effect profiles. One might think that with all of the focused attention and attempts at reform, things would have changed more. But the beat just goes on.

The newest antidepressant on the block, Vortioxetine [Brintellix®, now Trintellix®], is sticking to the same old formulas. The publications’ authors are all either employees or otherwise tainted. A ghosted review article has an army of KOLs on the byline [see the recommendation?…]. They made a yeoman’s attempt at indication creep, including an all day KOL-rich Institute of Medicine production and a special FDA re-hearing trying [and failing] to create a new indication, Cognitive Dysfunction in Depression [see more vortioxetine story… and a parable…]. There have been a number of independent meta-analyses and critiques:
Now we have yet another Vortioxetine meta-analysis, this time by the manufacturers [Takeda/Lundbeck], again with their [everyman’s] KOL, Michael Thase, and four company employees on the byline:
Thase ME, Mahableshwarkar AR, Dragheim M, Loft H, and Vieta E.
European Neuropsychopharmacology. Mar 25, 2016 [Epub ahead of print]
2014/2015 Impact Factor 4.369

The efficacy and safety of vortioxetine, an antidepressant approved for the treatment of adults with major depressive disorder (MDD), was studied in 11 randomized, double-blind, placebo-controlled trials of 6/8 weeks' treatment duration. An aggregated study-level meta-analysis was conducted to estimate the magnitude and dose-relationship of the clinical effect of approved doses of vortioxetine [5-20mg/day]. The primary outcome measure was change from baseline to endpoint in Montgomery-Åsberg Depression Rating Scale [MADRS] total score. Differences from placebo were analyzed using mixed model for repeated measurements [MMRM] analysis, with a sensitivity analysis also conducted using last observation carried forward. Secondary outcomes included MADRS single-item scores, response rate [≥50% reduction in baseline MADRS], remission rate [MADRS ≤10], and Clinical Global Impressions scores. Across the 11 studies, 1824 patients were treated with placebo and 3304 with vortioxetine [5mg/day: n=1001; 10mg/day: n=1042; 15mg/day: n=449; 20mg/day: n=812]. The MMRM meta-analysis demonstrated that vortioxetine 5, 10, and 20mg/day were associated with significant reductions in MADRS total score [Δ-2.27, Δ-3.57, and Δ-4.57, respectively; p<0.01] versus placebo. The effects of 15mg/day [Δ-2.60; p=0.105] were not significantly different from placebo. Vortioxetine 10 and 20mg/day were associated with significant reductions in 9 of 10 MADRS single-item scores. Vortioxetine treatment was also associated with significantly higher rates of response and remission and with significant improvements in other depression-related scores versus placebo. This meta-analysis of vortioxetine [5-20mg/day] in adults with MDD supports the efficacy demonstrated in the individual studies, with treatment effect increasing with dose.
The article has a number of figures, forest plots of the parameters they compiled from the various clinical trials. The two below are from Figures 2B and 4B. Take a look at the versions in the paper first. In the ones below, I’ve removed some of the columns that were irrelevant to the points I wanted to make, and I’ve sorted them differently – first by region [US, mixed, non-US] and then by dose [my apologies for the "waviness" – an artifact of my graphic capabilities]. In both of my versions, I see nothing of the "dose response" effect they advertise in the text. Instead, heterogeneity seems to be the order of the day. Another thing that appears obvious is that there’s a big difference between the US and non-US sites in both tables.

In the MADRS Total Score Differences [above], the forest plot abscissa is plotted as the raw difference. While that’s a legitimate way to show Effect Size, the units are unfamiliar, at least to me. In the far right column, they show the more familiar Standardized Effect Size [Mean Difference ÷ Standard Deviation], roughly scaled as 0.25=weak, 0.50=moderate, and 0.75=strong. Only one of the US sites reaches something that might remotely be clinically significant.
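For readers who want to see the arithmetic, the conversion is simple division. A minimal sketch in Python – the pooled standard deviation here [9.5 MADRS points] is my own illustrative assumption, not a figure from the paper; only the 10mg/day mean difference [Δ-3.57] comes from the abstract:

```python
# Standardized effect size (Cohen's d): the raw mean difference
# divided by the pooled standard deviation of the outcome measure.

def cohens_d(mean_diff, sd_pooled):
    """Scale a raw difference into standard-deviation units."""
    return mean_diff / sd_pooled

# 3.57 is the pooled 10mg/day MADRS difference from the abstract;
# an SD of 9.5 points is an assumed, illustrative value.
d = cohens_d(3.57, 9.5)
print(round(d, 2))  # prints 0.38 -- "weak" by the 0.25/0.50/0.75 rule of thumb
```

The point of the exercise: even the drug’s best pooled raw difference only translates into a small standardized effect once it’s scaled by a plausible SD.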

The MADRS Remission data is similar: Dose response curve? Not so much. And in the US sites, nothing achieves significance [p<0.05] or has an NNT or an Odds Ratio that is in the range of a clinically solid antidepressant.
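Both of those metrics fall straight out of the two remission proportions. A hedged sketch – the remission rates below are hypothetical numbers chosen only to show the calculation, not values taken from the paper’s tables:

```python
# NNT and odds ratio from two remission proportions.
# The proportions used here are illustrative, not from the paper.

def nnt(p_drug, p_placebo):
    """Number needed to treat = 1 / absolute risk difference."""
    return 1.0 / (p_drug - p_placebo)

def odds_ratio(p_drug, p_placebo):
    """Odds of remission on drug versus odds on placebo."""
    return (p_drug / (1 - p_drug)) / (p_placebo / (1 - p_placebo))

p_drug, p_placebo = 0.30, 0.25            # hypothetical remission rates
print(round(nnt(p_drug, p_placebo)))      # prints 20
print(round(odds_ratio(p_drug, p_placebo), 2))  # prints 1.29
```

A 5% absolute difference means treating twenty patients to gain one extra remission – nowhere near the single-digit NNTs one expects from a clinically solid antidepressant.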

Is it fair to imply, as I have done here, that the US data is more reliable? That’s certainly what I think. But my real point in redoing these figures is to show that the way the data is presented can be [and often is] used to guide the reader towards some preferred conclusion. Omission is another way to lead the reader. What’s missing here is that six of these studies had active comparators, included in some of the other papers listed above. In this paper, they explain in the text why they left them out…
This meta-analysis centers on the comparison between vortioxetine and placebo in the 11 individual studies and does not evaluate differences between vortioxetine and the active references [duloxetine and venlafaxine XR]. The results of the active references can be found in the publications for the individual studies. In two of the previous meta-analyses of the vortioxetine data, direct comparisons between vortioxetine and the active reference were included. Direct comparison of vortioxetine and the active reference is not appropriate, as the individual studies were not designed or powered to enable this comparison. Rather, the rationale for including an active reference in these studies was for the internal validation of the study design [i.e., assay sensitivity]. To evaluate the efficacy of vortioxetine relative to another antidepressant would require a study that is specifically designed for that purpose, that is, an active-comparator study. In addition, in the six studies that include an active reference, patients were excluded – for ethical reasons – if they had known hypersensitivity or a history of lack of response to previous treatment with the active reference, which introduces the potential for bias in favor of the active reference.
… which is baloney. This plot of those comparisons from Cosgrove et al [see publication bias III – close encounters of the second kind…] is the likely explanation for why the active comparators were omitted. Vortioxetine just doesn’t come out very well:

It’s sort of unusual to see an industry-produced meta-analysis of their own clinical trials. I’ve given just a few examples where the way things are displayed or omitted falsely inflates the overall picture of how the drug performs. There are others, but even without the window dressing, this is a weak antidepressant on the best of days [if that]. I would speculate that this industry-created meta-analysis is intended to neutralize the less than enthusiastic findings of the independent meta-analyses.

We’re so used to these industry-prepared, ghost-managed papers that there are some aspects of these articles that we don’t even notice. All along the way, when there’s something that looks negative, the narrative explains it away, as in the omission of the active comparators. They’re written and presented aiming towards a conclusion rather than trying to clearly present the facts and allowing the reader to reach their own conclusions. Also, it’s worth noticing that this article is published in European Neuropsychopharmacology, which is the publication of the European College of Neuropsychopharmacology, a scientific organization – a place where one might expect the most rigorous of scientific papers to appear rather than what looks to be a ghost-managed commercial advertisement…
    May 29, 2016 | 1:40 AM
    could this really be true about severity of symptoms?

    Bernard Carroll
    May 29, 2016 | 12:35 PM

I think Peter Kramer is half right here. There’s a conflation of severity with subtype. Since DSM-III came along, major depression was defined as melancholia lite, while the definition of melancholia itself was dumbed down. When Kramer writes about generic major depression he can’t distinguish mild or early melancholia (which does respond to antidepressants) from non-melancholic depression, usually only mild to moderate in severity, which in my book doesn’t respond to antidepressant drugs – excepting atypical depression, that is, which responds to MAOIs.

    James OBrien, M.D.
    May 29, 2016 | 6:26 PM

    He’s been overly optimistic before. I’d like to see some prevalence studies under the old definition before and after.

If the supposed decline in melancholia is due to getting an SSRI somewhere along the line, then we should also see a decline in suicides over thirty, or in the demographic firmly outside the black box warning. Obviously what we see is the opposite, so I am skeptical.

    Bernard Carroll
    May 30, 2016 | 1:19 AM

    Oh, I didn’t mean to suggest that melancholia responds to SSRI drugs. And that’s related to the failure of suicide rates to fall despite 25% of the population receiving them.

    James OBrien, M.D.
    May 30, 2016 | 10:54 AM

    I concur. Another reason his theory is probably wrong is because of how much treatment settings have changed for the worse in forty years. If you’re not working in a jail or prison, you’re not seeing the most seriously ill patients.
