more vortioxetine story…

Posted on Monday 29 February 2016

In the last post [indications…], I started out by objecting to the FDA Advisory Panel’s endorsement of an indication for Vortioxetine [Brintellix®] in Cognitive Dysfunction in Major Depressive Disorder on the grounds of obvious Indication Creep [the well known marketing strategy of adding indications to allow misleading advertising]. But as I looked into the articles themselves, my objection broadened. So besides objecting to Indication Creep, I think their analysis is scientifically flawed. Those articles are available on-line and fully referenced in indications….

The FDA Advisory Committee meeting on February 3rd that voted to support the new indication is referenced on-line here. Unfortunately, some of the links don’t work, but there’s one in particular that does work and that I found helpful: Slides for the February 3, 2016 Meeting of the Psychopharmacologic Drugs Advisory Committee [PDAC]. In the public hearing, the morning session was devoted to the general topic of the Cognitive Deficit in MDD, and the afternoon was focused on the specific Vortioxetine application for an indication. The slides tell the story of the presentations:
The FDA presentation discusses the FDA’s original position that this indication is an example of pseudospecificity and gives an impressive list of reasons. In February 2015, there was a Workshop at the Institute of Medicine entitled Enabling Discovery, Development, and Translation of Treatments for Cognitive Dysfunction in Depression: A Workshop, moderated by Tom Insel [NIMH] and Thomas Laughren [formerly the FDA’s director of psychiatry products, with a history of COI with industry: see Top FDA Officials, Compromised by Conflicts of Interest]. It was loaded with other dignitaries, including Madhukar Trivedi, Maurizio Fava, and Richard Keefe, the author of one of the Vortioxetine papers. According to the FDA presentation being discussed here, this workshop moved the FDA from "No" to "Maybe." The NIMH presentation was neuroscience-heavy, but notably concluded [by my reading] that the tests in these papers do not define Cognitive Dysfunction in Depression.

The second FDA presentation needs a look. It documents the extreme persistence of Takeda/Lundbeck in pursuing this indication [working with the FDA]. And towards the end, it has a summary of the specifics with the Takeda/Lundbeck papers. In my opinion, they’re off the mark, but it’s a good description of the mark I think they missed. Their argument hinges on the results of the DSST [Digit Symbol Substitution Test] from the two studies shown in this summary slide from the FDA presentation:

[click image for the original]

In the first study [FOCUS, 2014], there were three groups: Placebo and two different doses of Vortioxetine. The One Way Analysis of Variance [Omnibus ANOVA] is significant at p<0.001. The pairwise comparisons are significant for both doses at p<0.001, with Cohen’s d=0.487 and 0.479 for each dose, both in the moderate range. The pairwise comparison of the two doses is not significant [p=0.945]. However, in the second study [CONNECT, 2015], things were not so rosy. The Omnibus ANOVA gave p=0.062, which is not significant. The pairwise comparisons are p=0.021 for Vortioxetine versus Placebo, p=0.104 for Duloxetine versus Placebo, and p=0.463 for Vortioxetine versus Duloxetine. The Cohen’s d Strength of Effect was 0.250 for Vortioxetine versus Placebo [weak] and 0.173 for Duloxetine versus Placebo [nil]. In both published analyses, they skipped the Omnibus ANOVA, which is a prerequisite to validate even running the pairwise comparisons. So the second study did not reach statistical significance when fully analyzed AND the Strength of Effect was near trivial. In addition, using another method appropriate for datasets with more than two groups, Tukey’s HSD [Honestly Significant Difference] test, no significance was found: PBO vs VTX p=0.055, PBO vs DLX p=0.235, and VTX vs DLX p=0.743. The graphic on the right compares the DSST Mean Differences with 95% Confidence Intervals from the Tukey HSD test in these trials.
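For readers who want to see what "skipping the Omnibus ANOVA" means in practice, here is a minimal sketch of the by-the-book procedure: run the omnibus test across all three arms first, and only look at pairwise comparisons if it rejects. The DSST change scores below are simulated stand-ins, not the actual FOCUS or CONNECT data, and the means and SDs are invented for illustration.

```python
# Sketch of gating pairwise tests on an omnibus one-way ANOVA.
# All numbers below are simulated -- NOT the FOCUS/CONNECT trial data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
placebo = rng.normal(2.0, 8.0, 150)  # DSST change scores (hypothetical)
vortiox = rng.normal(4.0, 8.0, 150)
dulox = rng.normal(3.0, 8.0, 150)

# Step 1: the omnibus test across all three groups at once.
f, p_omnibus = stats.f_oneway(placebo, vortiox, dulox)
print(f"omnibus ANOVA: F={f:.2f}, p={p_omnibus:.3f}")

# Step 2: pairwise comparisons are only interpretable if step 1 rejects.
if p_omnibus < 0.05:
    for name, grp in [("VTX", vortiox), ("DLX", dulox)]:
        t, p = stats.ttest_ind(grp, placebo)
        print(f"{name} vs PBO: p={p:.3f}")
else:
    print("omnibus not significant -- pairwise p-values should not be reported")
```

Reporting the p=0.021 pairwise comparison from CONNECT while the omnibus p sat at 0.062 is exactly the shortcut this gate is meant to prevent.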

Things like skipping the omnibus ANOVA, outcome switching, or failing to correct for multiple variables are all too common in the clinical trials of pharmaceuticals [all three were present in our examination of Paxil Study 329]. While you can even find articles that support such practices, that’s not what’s in the statistics books, and you sure don’t want to do that on your statistics final exam if you want to pass. In fact, my insistence on playing by the book opens me up to criticism of bias. My take on these debates is that they tell us how close to the wire many of these trials have been, in spite of the fact that statistical significance is the weakest of our tools to evaluate drugs. Effect size, whether measured by the Standardized Mean Difference [Cohen’s d, Hedges’ g] or simply by the Difference in the Means as in the figure above right, is a better choice for approximating clinical significance. Actually, simple visual inspection itself isn’t half bad. Both the smallness of the MEAN DIFFERENCE and the difference between the studies are readily apparent [see 4. and 5. in this comment]. The graph also shows something else. One thing we’ve learned from the clinical trials over and over is that replication is perhaps our most powerful tool for evaluating efficacy. And from the graph, it’s clear that the CONNECT study did not replicate the DSST findings from the FOCUS trial.
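For the curious, the Standardized Mean Difference mentioned above is simple arithmetic: the difference in group means divided by the pooled standard deviation. A minimal version [the classic pooled-SD Cohen’s d, without the small-sample Hedges’ correction]:

```python
import math

def cohens_d(group, control):
    """Cohen's d: difference in means over the pooled standard deviation."""
    n1, n2 = len(group), len(control)
    m1 = sum(group) / n1
    m2 = sum(control) / n2
    # Sample variances (n - 1 denominator), then the pooled SD.
    v1 = sum((x - m1) ** 2 for x in group) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

For instance, `cohens_d([4, 5, 6], [1, 2, 3])` returns 3.0: the means differ by 3 and the pooled SD is 1. On this scale, the 0.250 and 0.173 values from CONNECT sit well below the conventional 0.5 threshold for a moderate effect.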

The weaknesses in this application are particularly important because the Takeda/Lundbeck sponsors are not only asking for approval of their product, they’re asking the FDA to create an entire new indication for just that purpose – Cognitive Dysfunction in Depression. So why did 8 out of 10 members of the Advisory Group vote for approval? I wasn’t there so I don’t know, but I can speculate that they bought the glitz. The sponsors have made a five-year-long full court press as described in the afternoon FDA presentation above [FDA Presentations]. The National Academy of Sciences Workshop at the Institute of Medicine was loaded to the gills with PHARMA-friendly and Translational Medicine-promoting figures, the author of the CONNECT study being the sole presenter on the Effects of Pharmacological Treatments on Cognition in Depression. And then be sure to take a look at the slides from the loaded FDA hearing presentations of a KOL-supreme [Madhukar Trivedi, M.D. Presentation] and the sponsors [Takeda Presentations]. Finally, in spite of the generally skeptical review presented by the FDA [FDA Presentations], the final slide was surprisingly conciliatory [my objections are marked in red, particularly the third one that was not statistically significant, even by their own testing – p=0.463]:

[click image for the original]

My assumption is that the sponsors have put all this effort [and treasure] into this approval for commercial reasons. Vortioxetine is a late arrival to the flooded antidepressant market. In an article soon to be available by Cosgrove et al [Under the Influence: the Interplay among Industry, Publishing, and Drug Regulation], they use the original NDA of Vortioxetine as a [negative] example for their discussion. The paper contains a meta-analysis of all of its trials versus any comparator drugs. Vortioxetine usually comes up short and is never superior. So I presume the sponsors think this cognitive dysfunction indication would give them a needed commercial advantage. If they succeed in getting it in the coming month, the approval won’t hinge on anything scientific that I can see, but will rather be a testimonial to their persistence, some deep pockets, and their spin.
