the empty pipeline…

Posted on Thursday 3 March 2016

Last night, I spent some time writing a letter to the FDA. It was about Vortioxetine [Brintellix®] and their application for an approval for Cognitive Dysfunction in Major Depressive Disorder [see indications… and more vortioxetine story…]. This morning I got a confirmation that it had been passed along to "the Division" [which I suppose is better than the alternative]. My letter was a less "bloggy" version of indications… and more vortioxetine story…. While I’m hardly holding my breath, it felt kind of good to actually send a letter to the place that ought to hear it rather than just post it to the universe at large on a blog at the edge of the galaxy.

I see this as something of a dangerous time. We’re at the end of a pipeline period in psychopharmacology – one that lasted for a really long time considering that there were only a couple of classes of drugs involved. But then the pipeline dried up. The approval of brexpiprazole [Rexulti®] as an Adjunct to Antidepressants in Treatment Resistant Depression is an example of why it’s a dangerous time. They presented two trials with the amazing pixie dust of changing the outcome definitions in the middle of the trial to bring the second study into statistical significance. The FDA reviewer didn’t buy it…

7.4.1. The Sponsor conducted two adequate and well-controlled trials to assess the efficacy of brexpiprazole for the adjunctive treatment of MDD. Based on the prespecified statistical analysis plan, only one of these trials [Study 331-10-228] was positive. The Sponsor acknowledges that Study 331-10-227 was not positive based on the pre-specified plan, but provides a number of arguments to support the concept that brexpiprazole should nonetheless be approved for this indication.

Approval of the adjunctive MDD indication would have to rest on one of two possible scenarios: 1. Study 331-10-227 can be considered a positive trial based on the retrospective application of Amendment 3 criteria and use of the per protocol population instead of the intent-to-treat population for the primary analysis. 2. Study 331-10-228 can be considered “strongly positive” and, thus, approval can be based on this single study. Study 331-10-227 would then be viewed as “supportive evidence” for this approval.
but then ended with…
One can also reasonably consider Study 331-10-228 a “strongly positive” study — in a population of individuals with a history of multiple failed antidepressant trials and prospectively demonstrated suboptimal response to an additional antidepressant, the addition of 2 mg/day of brexpiprazole yielded an average additional symptom improvement of just over 8 points on the MADRS. This was 3 points beyond the improvement in placebo — a difference that was highly statistically significant at p=0.0001. These results are both clinically and statistically meaningful.

Thus, with one strongly positive trial and supportive evidence from two additional trials, there is adequate evidence of efficacy to approve this product for the adjunctive treatment of MDD.
That’s a peculiar conclusion. The FDA standards are low, actually shamefully low. Two positive trials out of as-many-as-you-want-to-do. And positive means statistically significant rather than clinically significant. For example, this drug for this indication flunked clinical significance by any objective measure I know – Subject-Rated Scales and Effect Sizes. But I’ve said that so much I might get thrown off the Internet for terminal monotony. My point this time is that the reviewer’s logic is upside down. Given what we know about Clinical Trials, a better interpretation would be that the second study did not replicate the first. Replication is something of a gold standard, and it should be. That’s what we do in practice – count on the fact that the results of a clinical trial will be replicated in our patients. So I’m concerned that in the face of the empty pipeline, the FDA will do things like this – stretch their already too-low standards even lower – something they did in this case.

The story with this Brintellix® application for approval is the same issue – a failed replication. Their second study [CONNECT] did not reproduce the results of the earlier trial [FOCUS]. While there are lots of other questions about approval for this indication, they’re in the background to this non-replication. So it’s a dangerous time because here’s another situation where they’re about to give a drug the benefit of the doubt if they follow the recommendations of the advisory committee [see more vortioxetine story…]. There’s no reason to do that. Brintellix® has never beaten another antidepressant head-to-head – and this indication [Cognitive Dysfunction in Major Depressive Disorder] seems both contrived and unproven.

The only result from this approval will be to allow Takeda/Lundbeck to create misleading advertisements. It’s not going to help any of our patients in any substantive way…
Mickey @ 10:07 PM

a must read…

Posted on Tuesday 1 March 2016

Some commentaries need no introduction other than the link. Psychiatry With And Without A Conscious State by George Dawson of Real Psychiatry is an example of such a must-read.
Mickey @ 9:28 PM

words and music…

Posted on Tuesday 1 March 2016

It is either fashionable or required to end articles with a few comments about the limitations of the study. In this case, I’m going to start with the limitations; then summarize my previous musings on the topic; then say what the topic actually is that has these limitations and musings:
Limitations
Several limitations of this study require comment. First, the conditions of a randomized trial like RAISE-ETP may not be generalizable to real-world practice since all sites that volunteered for RAISE-ETP were capable and motivated to successfully implement a comprehensive, integrated care FEP program with existing non-research sources of funding. As a result, CC sites most likely offered a level of FEP care that was superior to usual FEP treatment in the US, thus minimizing observed differences between NAV and usual treatment. In several other RCTs of FEP programs, rates of hospitalization among control groups were 37% to 71% over 12 months, 1.5 to 3.5 times greater than the 20% seen in the first 12 months for CC in this study. This difference suggests that the lack of more favorable NAV-CC differences in inpatient care and costs may reflect an exceptionally good performance at keeping hospital utilization low at CC sites in this study. If CC subjects in the present study had performed similarly to control groups in previous trials cited above, the differentially greater costs associated with NAV might have been reduced to zero or might even have been reversed to as much as $7000 in savings. Generalizability of these results is thus uncertain as RAISE-ETP may have artificially increased CC effectiveness and reduced CC costs….

Translation? My first encounter with the NIMH RAISE [Recovery After an Initial Schizophrenia Episode] study was Dr. Insel’s blog touting the fact that SAMHSA had announced the availability of Block Grants to States for setting up early intervention programs in Schizophrenia, and was using the methodology of the ongoing NIMH RAISE study as a template for that effort. Dr. Insel saw that as moving research from the bench to the bedside – an oft quoted motto for Translational Medicine [Director’s Blog: From Research to Practice].
Opportunity knocks: What I concluded after exploring around was that Dr. Insel’s version was more spin than not. When the Funds became available with ARRA, he jumped on the opportunity to fund a study on the treatment of First Episode Psychosis [and good for him!]. It had a rocky start with two different programs – John Kane’s and Jeffrey Lieberman’s. Kane’s study survived – a community-based program evaluated against matched treatment-as-usual controls. Then along came another program – Block Grants for the States for FEP, and the NIMH again jumped and passed along their Protocol [again, good for them!]. I say Protocol because the study wasn’t completed and there weren’t yet any results. So this wasn’t exactly a story about the wonders of Translational Medicine so much as a story about taking advantage of the breaks that come your way. I suspected that there was a back story, but sometimes the back story isn’t something bad. In the course of things, I got interested in the actual RAISE program itself, focusing on one arm – the Individual Resiliency Training.
Resiliency? Training? I won’t reiterate my complaints about the Individual Resiliency Training here. I am indebted to Dr. Sandra Steingard, who was connected with the study through her center and reassured me that it was only meant as a guide. We so badly need an effort for these patients that anything that gets a funded program going can evolve into something good. So I sat quietly and awaited the results [which were slow in coming]. Then they published something:
More Confusion: The paper finally came out [Comprehensive Versus Usual Community Care for First-Episode Psychosis: 2-Year Outcomes From the NIMH RAISE Early Treatment Program], and it was confusing at best. The New York Times reported that this study showed that their program worked [New Approach Advised to Treat Schizophrenia] – the patients were treated with "talk therapy" and required less antipsychotic medication [which is exactly what we all wanted to hear]. But that’s not what the paper itself said. I would summarize what it said as "We did it! We got a program going for First Episode Psychotic illness!" and not much else. It took Discover Magazine’s best-ever-blogger, Neuroskeptic, to clear things up [sort of] [Medication for Schizophrenia: Less is More?]. His finding was that we don’t know the outcome just yet. So again I sat patiently and awaited the results, thinking once more that there was a buried back story that would surely come out in time…

I have a little text file of things I keep up with that I check each month. It’s something of a pet peeve of mine that the news cycle is so rapid that many things that matter just get lost in the whirrs and buzzes of life. This is on that list as "RAISE RESULTS?" So yesterday [the end-of-the-month list check], I saw that annotation and went looking. I got a hit!
by Robert Rosenheck, Douglas Leslie, Kyaw Sint, Haiqun Lin, Delbert C. Robinson, Nina R. Schooler, Kim T. Mueser, David L. Penn, Jean Addington, Mary F. Brunette, Christoph U. Correll, Sue E. Estroff, Patricia Marcy, James Robinson, Joanne Severe, Agnes Rupp, Michael Schoenbaum, and John M. Kane.
Schizophrenia Bulletin. Advance Access 01/31/2016.

This study compares the cost-effectiveness of Navigate [NAV], a comprehensive, multidisciplinary, team-based treatment approach for first episode psychosis [FEP] and usual Community Care [CC] in a cluster randomization trial. Patients at 34 community treatment clinics were randomly assigned to either NAV [N = 223] or CC [N = 181] for 2 years. Effectiveness was measured as a one standard deviation change on the Quality of Life Scale [QLS-SD]. Incremental cost effectiveness ratios were evaluated with bootstrap distributions. The Net Health Benefits Approach was used to evaluate the probability that the value of NAV benefits exceeded its costs relative to CC from the perspective of the health care system. The NAV group improved significantly more on the QLS and had higher outpatient mental health and antipsychotic medication costs. The incremental cost-effectiveness ratio was $12 081/QLS-SD, with a .94 probability that NAV was more cost-effective than CC at $40 000/QLS-SD. When converted to monetized Quality Adjusted Life Years, NAV benefits exceeded costs, especially at future generic drug prices.
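For readers who want the arithmetic behind that abstract: the ICER is just the difference in costs divided by the difference in effectiveness, and the Net Health Benefits approach asks how often [willingness-to-pay × ΔEffect − ΔCost] comes out positive across bootstrap replicates. Here’s a toy sketch in Python – the per-patient numbers are simulated stand-ins of my own invention, not the study’s data, and the simulation stands in for bootstrap resampling of the real patient-level records we don’t have:

    import numpy as np

    rng = np.random.default_rng(0)
    n_boot, n_pat = 5000, 200

    # simulated per-patient differences [NAV minus CC], standing in for
    # bootstrap resamples of real patient-level data:
    # effect in QLS-SD units, cost in dollars
    d_effect = rng.normal(0.30, 1.8, size=(n_boot, n_pat)).mean(axis=1)
    d_cost   = rng.normal(3600, 9000, size=(n_boot, n_pat)).mean(axis=1)

    icer = d_cost.mean() / d_effect.mean()    # incremental cost-effectiveness ratio
    wtp  = 40_000                             # willingness to pay per QLS-SD
    nhb  = wtp * d_effect - d_cost            # monetized net health benefit
    print(f"ICER ~ ${icer:,.0f} per QLS-SD")
    print(f"P(cost-effective at ${wtp:,}/QLS-SD) = {(nhb > 0).mean():.2f}")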
That’s the abstract of the study whose limitations I quoted at the top [they sit at the end of the full paper]. And this press release from the NIMH goes with it [it’s on-line and deserves a full reading if this is a topic of interest]:
NIH-funded study shows early intervention is more cost-effective than typical care
NIMH: Press Release
February 1, 2016

New analysis from a mental health care study shows that “coordinated specialty care” [CSC] for young people with first episode psychosis is more cost-effective than typical community care. Cost-effectiveness analysis in health care is a way to compare the costs and benefits of two or more treatment options.  While the team-based CSC approach has modestly higher costs than typical care, it produces better clinical and quality of life outcomes, making the CSC treatment program a better value. These findings of this study, funded by the National Institute of Mental Health, part of the National Institutes of Health, will help guide mental health professionals in their treatment for first episode psychosis.

This new analysis, published online today by Schizophrenia Bulletin, was led by Robert Rosenheck, M.D., professor of psychiatry and public health at Yale University. It is part of the Recovery After an Initial Schizophrenia Episode initiative also funded by the National Institute of Mental Health. This paper reported on the cost-effectiveness of CSC treatment in the RAISE Early Treatment Program, a randomized controlled trial headed by John M. Kane, M.D., professor and chairman, Department of Psychiatry at The Hofstra North Shore-LIJ School of Medicine and The Zucker Hillside Hospital.

Coordinated specialty care for first episode psychosis is a team-based treatment program tailored to each individual that involves more specialty care from mental health providers than typical care. Dr. Rosenheck and colleagues focused on a specific CSC program, called NAVIGATE, which featured a team of specialists offering recovery-oriented psychotherapy, low-dose antipsychotic medications, family education and support, case management, and work or education support…

I wondered why I hadn’t seen this. Looking back, it was sort of all over the Internet, but not in the places I usually follow. And it didn’t make it to the big news outlets. As far as I can tell, it didn’t even make it to Psychiatric News or to Mad in America. Anyway, for whatever reason, I missed it.

Well, those are all the words, at least the ones I know about. Here’s the music. Anybody reading this knows that we badly need stable programs for people with psychotic illness, particularly First Episode Psychosis. I expect that most people reading this feel that the skillful use of antipsychotic medication is part of that treatment for many. I expect most would agree that there needs to be a place in those programs for people who won’t or don’t need to take medication. Many would agree that there needs to be a provision for actively psychotic people who are out of control or dangerous other than prison. And most accept that the care needed is a humanitarian responsibility of society. I expect that people who think it’s all a medication issue or all a psychotherapy issue aren’t reading this.

So as much as I am a critic of Dr. Insel and his reign at the NIMH, of the particular Individual Resiliency Training protocol in RAISE, of Dr. Kane’s PHARMA connections, of their unwillingness to publish the RAISE results directly, of spinning serendipity as Translational Medicine, I’ve got to admit that they "did it! [they] got a program going for First Episode Psychotic illness!" They saw some opportunities and did something with them. I’ll bet that the program cost more than they wanted it to and that the medication use was more than they hoped and that the results weren’t as robust as they’d hoped [how often have you heard "with a .94 probability" instead of p=0.06?], so they published their cost-effectiveness data and slid the study results into the spaces and into the press release rather than publishing a separate straight-shooting paper. In spite of all of that, if this statement turns out to be even slightly true, this may turn out to be the NIMH’s best outing of the Insel Era [but I still want them to just tell us outright what they found]:
“This scientific work is having an immediate impact on clinical practice in the United States and is setting a new standard of care,” added Heinssen. “We’re seeing more states adopt coordinated specialty care programs for first episode psychosis, offering hope to thousands of clients and family members who deserve the best care that science can deliver.”
So, I guess on rare occasion, I can go for "the ends justify the means…" And, by the way, I buy that limitations comment. They picked their initial sites from among the ones that were working well. There are many, many more that aren’t…
Mickey @ 4:18 PM

more vortioxetine story…

Posted on Monday 29 February 2016

In the last post [indications…], I started out by objecting to the FDA Advisory Panel’s endorsement of an indication for Vortioxetine [Brintellix®] in Cognitive Dysfunction in Major Depressive Disorder on the grounds of obvious Indication Creep [the well known marketing strategy of adding indications to allow misleading advertising]. But as I looked into the articles themselves, my objection broadened. Besides objecting to the Indication Creep, I now think their analysis is scientifically flawed. Those articles are available on-line and fully referenced in indications….

The FDA Advisory Committee meeting on February 3rd that voted to support the new indication is referenced on-line here. Unfortunately, some of the links don’t work, but there’s one in particular that does work that I found helpful: Slides for the February 3, 2016 Meeting of the Psychopharmacologic Drugs Advisory Committee [PDAC]. In the public hearing, the morning session was devoted to the general topic of the Cognitive Deficit in MDD, and the afternoon was focused on the specific Vortioxetine application for an indication. The slides tell the story of the presentations:
The FDA presentation discusses the FDA’s original position that this indication is an example of pseudospecificity and gives an impressive list of reasons. In February 2015, there was a Workshop at the Institute of Medicine entitled Enabling Discovery, Development, and Translation of Treatments for Cognitive Dysfunction in Depression: A Workshop moderated by Tom Insel [NIMH] and Thomas Laughren [formerly FDA’s director of psychiatry products with a history of COI with industry: see Top FDA Officials, Compromised by Conflicts of Interest]. It was loaded with other dignitaries including Madhukar Trivedi, Maurizio Fava, and Richard Keefe, the author of one of the Vortioxetine papers. In the FDA presentation being discussed here, this workshop moved the FDA from "No" to "Maybe." The NIMH presentation was neuroscience-heavy, but notably concluded that the tests in these papers do not define Cognitive Dysfunction in Depression [by my reading].

The second FDA presentation needs a look. It documents the extreme persistence of Takeda/Lundbeck in pursuing this indication [working with the FDA]. And towards the end, it has a summary of the specifics with the Takeda/Lundbeck papers. In my opinion, they’re off the mark, but it’s a good description of the mark I think they missed. Their argument hinges on the results of the DSST [Digit Symbol Substitution Test] from the two studies shown in this summary slide from the FDA presentation:


[click image for the original]

In the first study [FOCUS, 2014], there were three groups, Placebo and two different doses of Vortioxetine. The One Way Analysis of Variance [Omnibus ANOVA] is significant at p<0.001. The pairwise comparisons are significant for both doses at p<0.001 with Cohen’s d=0.487 and 0.479 for each dose, both in the moderate range. The pairwise comparison of the two doses is not significant at p=0.945. However, in the second study [CONNECT, 2015], things were not so rosy. The Omnibus ANOVA p-value was 0.062, which is not significant. The pairwise comparisons are p=0.021 for Vortioxetine versus Placebo, p=0.104 for Duloxetine versus Placebo, and p=0.463 for Vortioxetine versus Duloxetine. The Cohen’s d Strength of Effect was 0.250 for Vortioxetine versus Placebo [weak] and 0.173 for Duloxetine versus Placebo [nil]. In both published analyses, the sponsors skipped the Omnibus ANOVA, which is the prerequisite that validates even running the pairwise comparisons. So the second study did not reach statistical significance when fully analyzed AND the Strength of Effect was near trivial. In addition, using another method appropriate for datasets with more than two groups, Tukey’s HSD [Honestly Significant Difference] test, there was no significance found: PBO vs VTX p=0.055, PBO vs DLX p=0.235, and VTX vs DLX p=0.743. The graphic on the right compares the DSST Mean Differences with 95% Confidence Intervals from the Tukey HSD test in these trials.
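For the record, those checks don’t require the raw data. Here’s a minimal sketch [Python, SciPy ≥ 1.7] of the omnibus ANOVA and the Tukey-Kramer comparisons run from summary statistics alone – the means and standard errors are CONNECT’s DSST change scores as reported [quoted in indications… below]; the per-arm n of ~190 is my assumption, which is why the p-values land near, rather than exactly on, the ones above:

    import numpy as np
    from scipy.stats import studentized_range, f as f_dist

    groups = {                     # mean change, SE, assumed n per arm
        "VTX": (4.60, 0.53, 190),
        "DLX": (4.06, 0.51, 190),
        "PBO": (2.85, 0.54, 190),
    }
    means = np.array([m for m, se, n in groups.values()])
    sds   = np.array([se * np.sqrt(n) for m, se, n in groups.values()])  # SD = SE*sqrt(n)
    ns    = np.array([n for m, se, n in groups.values()])

    k, N = len(ns), ns.sum()
    df_w = N - k
    mse  = np.sum((ns - 1) * sds**2) / df_w     # pooled within-group variance

    # omnibus one-way ANOVA from summary data
    grand  = np.sum(ns * means) / N
    f_stat = (np.sum(ns * (means - grand)**2) / (k - 1)) / mse
    print(f"omnibus F={f_stat:.2f}, p={f_dist.sf(f_stat, k - 1, df_w):.3f}")

    # Tukey-Kramer pairwise comparisons [studentized range distribution]
    names = list(groups)
    for i in range(k):
        for j in range(i + 1, k):
            se_ij = np.sqrt(mse / 2 * (1 / ns[i] + 1 / ns[j]))
            q = abs(means[i] - means[j]) / se_ij
            print(f"{names[i]} vs {names[j]}: p={studentized_range.sf(q, k, df_w):.3f}")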

Things like skipping the omnibus ANOVA, outcome switching, or failing to correct for multiple comparisons are all too common in the clinical trials of pharmaceuticals [all three were present in our examination of Paxil Study 329]. While you can even find articles that support such practices, that’s not what’s in the statistics books, and you sure don’t want to do that on your statistics final exam if you want to pass. In fact, my insistence on playing by the book opens me up to criticism of bias. My take on these debates is that they tell us how close to the wire many of these trials have been, in spite of the fact that statistical significance is the weakest of our tools to evaluate drugs. Effect sizes, whether measured by the Standardized Mean Difference [Cohen’s d, Hedges g] or simply the Difference in the Means as in the above right figure, are a better choice for approximating clinical significance. Actually, simple visual inspection itself isn’t half bad. Both the smallness of the MEAN DIFFERENCE and the difference between the studies are readily apparent [see 4. and 5. in this comment]. The graph also shows something else. One thing we’ve learned from the clinical trials over and over is that replication is perhaps our most powerful tool for evaluating efficacy. And from the graph, it’s clear that the CONNECT study did not replicate the DSST findings from the FOCUS trial.
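And since I lean on them so heavily, the effect-size arithmetic itself – Cohen’s d is nothing more than the raw difference in means re-expressed in pooled-standard-deviation units. A sketch using the same reconstructed summary numbers [SDs backed out of the reported standard errors, per-arm n again assumed]:

    import numpy as np

    def cohens_d(m1, sd1, n1, m2, sd2, n2):
        """Standardized mean difference using the pooled SD."""
        pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        return (m1 - m2) / pooled

    n = 190                                    # assumed per-arm n, as before
    d = cohens_d(4.60, 0.53 * np.sqrt(n), n,   # vortioxetine
                 2.85, 0.54 * np.sqrt(n), n)   # placebo
    print(f"Cohen's d = {d:.2f}")              # ~0.24, near the 0.250 reported above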

The weaknesses in this application are particularly important because the Takeda/Lundbeck sponsors are not only asking for an approval of their product, they’re asking the FDA to create an entire new indication for just that purpose – Cognitive Dysfunction in Depression. So why did 8 out of 10 members of the Advisory Group vote for approval? I wasn’t there so I don’t know, but I can speculate that they bought the glitz. The sponsors have made a five-year-long full-court press as described in the afternoon FDA presentation above [FDA Presentations]. The National Academy of Science Workshop at the Institute of Medicine was loaded to the gills with PHARMA-friendly and Translational-Medicine-promoting figures, the author of the CONNECT study being the sole presenter on the Effects of Pharmacological Treatments on Cognition in Depression. And then be sure to take a look at the slides from the loaded FDA hearing presentations of a KOL-supreme [Madhukar Trivedi, M.D. Presentation] and the sponsors [Takeda Presentations]. Finally, in spite of the generally skeptical review presented by the FDA [FDA Presentations], the final slide was surprisingly conciliatory [my objections are marked in red, particularly the third one that was not statistically significant, even by their own testing – p=0.463]:


[click image for the original]

My assumption is that the sponsors have put all this effort [and treasure] into this approval for commercial reasons. Vortioxetine is a late arrival to the flooded antidepressant market. In an article soon to be available by Cosgrove et al [Under the Influence: the Interplay among Industry, Publishing, and Drug Regulation], they use the original NDA of Vortioxetine as a [negative] example for their discussion. The paper contains a meta-analysis of all of its trials versus any comparator drugs. Vortioxetine usually comes up short and is never superior. So I presume the sponsors think this cognitive dysfunction indication would give them a needed commercial advantage. If they succeed in getting it in the coming month, the approval won’t hinge on anything scientific that I can see, but will rather be a testimonial to their persistence, some deep pockets, and their spin.
Mickey @ 11:58 AM

indications…

Posted on Saturday 27 February 2016

I suppose every discipline has its jargon, words or phrases that have specific meanings that aren’t necessarily found in the general language. If you walk into the waiting room and see someone with bilateral exophthalmos [bulging eyes], you immediately think "Graves’ Disease" [hyperthyroidism]. It’s a pathognomonic sign:
    pa·thog·no·mon·ic [puh-thog-nuh-mon-ik]
    adjective

    1. Medicine/Medical. characteristic or diagnostic of a specific disease.
There are other words that are everyday words, but in Medicine have come to have something of an idiosyncratic meaning – for example:
    in·di·ca·tion [in-di-key-shuhn]
    noun

    1. anything to indicate or point out, as a sign or token.
    2. Medicine/Medical. a special symptom or the like that points out a suitable remedy or treatment or shows the presence of a disease.
    3. an act of indicating.
    4. the degree marked by an instrument.
It means something like in this situation, do this. It’s not really as imperative as that sounds – maybe it’s more like this is what is usually done in this situation, so be sure and think about it – and if you don’t do it, have a good reason. At least that’s what I always took it to mean.

I didn’t see detail men in practice, but my partners did. So occasionally, I would get caught. In one such captivity, the sales rep looked at me meaningfully and said with emphasis, "We’ve just found out that we have a new indication for whatever-drug-it-was in whatever-condition-it-was" [as if that had a very special, near mystical significance]. I had never heard it used that way [my ignorance a consequence of chronic avoidance of sales reps]. Later, I asked one of my partners, who explained that the FDA approved drugs for specific diseases or situations. I supposed that was a reasonable idea, a way of communicating what they looked at when they put the drug on the market. I later asked her, "Have they always done that?" She laughed at my by-then legendary naiveté in such matters and said, "I don’t know, but what it really means is that they can advertise it for that indication" [truth comes in many forms, and that’s an example].

So I guess there’s another definition for the word:
    in·di·ca·tion [in-di-key-shuhn]
    noun

    1. Medicine/Medical. a special symptom or the like the FDA certifies for a pharmaceutical company’s marketing department to legalize their advertising it as a use for their drug.
I’m wiser now. This is why I’ve been so absolutely beside myself that the FDA approved Brexpiprazole for Augmentation in Treatment Resistant Major Depressive Disorder. Drugs like Risperdal, Seroquel, or Abilify didn’t make it to the top of the charts in sales because there was an epidemic of psychotic illness. They got there by getting other indications for more prevalent conditions. And industry plans for such things well in advance, like with their clinical trials [see extending the risk…]. As Tone Jones said at the TMAP Trial, "You can’t be a billion dollar drug in a 1% market."

Which brings me to this report I ran across on MEDPAGE TODAY. It’s about Vortioxetine [Brintellix®] and a recent long day at the FDA:
Panel debates vortioxetine for treating cognitive dysfunction in major depression
MEDPAGE TODAY
by Kristina Fiore
02/03/2016

An FDA advisory committee voted 8-to-2 to give the antidepressant vortioxetine [Brintellix] a new indication for cognitive dysfunction in major depressive disorder [MDD]. The decision followed an unusual meeting format for the Psychopharmacologic Drugs Advisory Committee, with the first half of the day dedicated to discussing whether or not cognitive dysfunction in MDD is a suitable target for development, and if so, how exactly it should be studied…

Historically, the FDA’s division of psychiatry products within the Center for Drug Evaluation and Research has taken the position that cognitive dysfunction in MDD was a pseudo-specific drug target, "meaning that this claim would be considered artificially narrow and related to the overall disorder of MDD," the agency wrote in review documents posted ahead of the meeting. But the organization changed its stance as more research has suggested that it can be considered a distinct problem that hasn’t been evaluated in drug trials.

Starting in 2011, Lundbeck, the original developer of vortioxetine, began submitting data on the drug’s efficacy in cognitive dysfunction to the FDA. In 2014, Takeda and Lundbeck met with FDA to further discuss the potential indication, and the agency ultimately decided on a public hearing to discuss both the possibility of such an indication and the efficacy of the drug on the same day…

In one of the approval trials, called ELDERLY, vortioxetine offered better results on the digit symbol substitution test [DSST] than duloxetine, prompting the companies to open two other trials focused on cognitive impairment in MDD: FOCUS and CONNECT, both of which enrolled 602 patients for an 8-week trial period. FOCUS looked at a composite of DSST plus Rey Auditory Verbal Learning Test [RAVLT] learning and memory for its primary endpoint, while CONNECT looked only at DSST — and both showed an effect, albeit a small one, according to FDA clinical reviewer Wen-Hung Chen, PhD…

Should the FDA decide that cognitive dysfunction in MDD is indeed a valid therapeutic target and that Takeda has demonstrated that vortioxetine improves the condition, the two groups will work together to define the exact terms of the labeling. The agency is not obliged to follow recommendations from its advisory committees, but it usually does.

Vortioxetine was a latecomer to the antidepressant scene [2013] and I haven’t paid much attention to it since that bizarre review article in the Journal of Clinical Psychiatry [see the recommendation?…]. This piece caught my attention because of the indication creep, but I also wondered what an FDA Advisory group had seen that would have them voting 8:2 in favor of this new indication. Then, after I scanned the abstracts, I found something else that piqued my curiosity. First, here are the papers:
by McIntyre RS, Lophaven S, and Olsen CK.
International Journal of Neuropsychopharmacology. 2014 17[10]:1557-1567.
Clinical Trial NCT01422213

The efficacy of vortioxetine 10 and 20 mg/d vs. placebo on cognitive function and depression in adults with recurrent moderate-to-severe major depressive disorder [MDD] was evaluated. Patients [18-65 yr, N = 602] were randomized [1:1:1] to vortioxetine 10 or 20 mg/d or placebo for 8 wk in a double-blind multi-national study. Cognitive function was assessed with objective neuropsychological tests of executive function, processing speed, attention and learning and memory, and a subjective cognitive measure. The primary outcome measure was change from baseline to week 8 in a composite z-score comprising the Digit Symbol Substitution Test [DSST] and Rey Auditory Verbal Learning Test [RAVLT] scores. Depressive symptoms were assessed using the Montgomery-Åsberg Depression Rating Scale [MADRS]. In the pre-defined primary efficacy analysis, both doses of vortioxetine were significantly better than placebo, with mean treatment differences vs. placebo of 0.36 [vortioxetine 10 mg, p < 0.0001] and 0.33 [vortioxetine 20 mg, p < 0.0001] on the composite cognition score. Significant improvement vs. placebo was observed for vortioxetine on most of the secondary objectives and subjective patient-reported cognitive measures. The differences to placebo in the MADRS total score at week 8 were -4.7 [10 mg: p < 0.0001] and -6.7 [20 mg: p < 0.0001]. Path and subgroup analyses indicate that the beneficial effect of vortioxetine on cognition is largely a direct treatment effect. No safety concern emerged with vortioxetine. Vortioxetine significantly improved objective and subjective measures of cognitive function in adults with recurrent MDD and these effects were largely independent of its effect on improving depressive symptoms.
by Atul R. Mahableshwarkar, John Zajecka, William Jacobson, Yinzhong Chen, and Richard S.E. Keefe
Neuropsychopharmacology. 2015 40: 2025–2037.
Clinical Trial NCT01564862

This multicenter, randomized, double-blind, placebo-controlled, active-referenced [duloxetine 60 mg], parallel-group study evaluated the short-term efficacy and safety of vortioxetine [10-20 mg] on cognitive function in adults [aged 18-65 years] diagnosed with major depressive disorder [MDD] who self-reported cognitive dysfunction. Efficacy was evaluated using ANCOVA for the change from baseline to week 8 in the digit symbol substitution test [DSST]-number of correct symbols as the prespecified primary end point. The patient-reported perceived deficits questionnaire [PDQ] and physician-assessed clinical global impression [CGI] were analyzed in a prespecified hierarchical testing sequence as key secondary end points. Additional predefined end points included the objective performance-based University of San Diego performance-based skills assessment [UPSA] [ANCOVA] to measure functionality, MADRS [MMRM] to assess efficacy in depression, and a prespecified multiple regression analysis [path analysis] to calculate direct vs indirect effects of vortioxetine on cognitive function. Safety and tolerability were assessed at all visits. Vortioxetine was statistically superior to placebo on the DSST [P < 0.05], PDQ [P < 0.01], CGI-I [P < 0.001], MADRS [P < 0.05], and UPSA [P < 0.001]. Path analysis indicated that vortioxetine’s cognitive benefit was primarily a direct treatment effect rather than due to alleviation of depressive symptoms. Duloxetine was not significantly different from placebo on the DSST or UPSA, but was superior to placebo on the PDQ, CGI-I, and MADRS. Common adverse events [incidence ≥ 5%] for vortioxetine were nausea, headache, and diarrhea. In this study of MDD adults who self-reported cognitive dysfunction, vortioxetine significantly improved cognitive function, depression, and functionality and was generally well tolerated.
I highlighted the part I was curious about. What was this path analysis that would lead them to claim that the effect on cognition was a direct effect, rather than secondary to improvements in depression? On a first pass through these articles, I couldn’t follow their path analysis well enough to understand it, so I linked all the reference material for a rainy day or in case there’s someone out there quicker than I who can explain it to us right now. According to the MEDPAGE TODAY article quoted above, the FDA will consider this new indication next month: "The agency is expected to make a decision on this new sNDA by March 28, according to the press release." While I didn’t follow their path analysis, the overall analysis of effect sizes was standard fare – and like the two dissenting FDA panelists, I thought they weren’t at all impressive:
Primary Analysis
Based on the ANCOVA analysis, the change from baseline [mean±SE] to week 8 in DSST performance score was 4.60±0.53 for vortioxetine, 4.06±0.51 for duloxetine, and 2.85±0.54 for placebo. The difference from placebo was significant for vortioxetine [Δ +1.75, 95% CI: 0.28, 3.21; P=0.019; ANCOVA, OC], with a standardized effect size of 0.254. The difference from placebo was not significant for the duloxetine group [Δ +1.21, 95% CI: −0.23, 2.65; P=0.099], with a standardized effect size of 0.176.
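As an aside, numbers like these can be sanity-checked in a few lines: under a normal approximation [my sketch, not their ANCOVA], the reported 95% Confidence Interval pins down the standard error, and the standard error reproduces the reported p-value:

    from scipy.stats import norm

    delta, lo, hi = 1.75, 0.28, 3.21    # vortioxetine vs placebo on the DSST
    se = (hi - lo) / (2 * 1.96)         # the half-width of a 95% CI is 1.96*SE
    z  = delta / se
    print(f"SE={se:.3f}  z={z:.2f}  p={2 * norm.sf(z):.3f}")   # p ~ 0.019, as reported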
I’m skeptical about this claim of an independent effect on cognition myself, a skepticism born from both experience and their data. However, the meaning to the Sponsors is clear. If the FDA approves this new indication, we can easily envision that the Cohen’s d of 0.254 [weak] will turn into the kind of difference that’s a marketing department’s dream come true. "Depressed? Having trouble thinking? Brintellix® has been shown to …" all the while hoping to create a blockbuster in spite of the fact that the antidepressant effect was less than that of the active comparator Duloxetine.
Depression Outcome
The study was validated because both vortioxetine and duloxetine demonstrated a statistically significant change from baseline in mood symptoms compared with placebo at the end of week 8, as measured by change in MADRS [vortioxetine, Δ − 2.3, 95% CI: − 4.3, − 0.4; P < 0.05; duloxetine, Δ − 3.3, 95% CI: − 5.2, − 1.4; P < 0.001; MMRM, FAS].
With 54 clinical trials of vortioxetine on clinicaltrials.gov – 6 of them directly addressing cognition, alongside the other fishing expeditions looking for indications – the dance between the FDA and industry that has been a major feature of psychopharmacology drug trials continues to play out on a stage of p-values, weak effect sizes, and small differences, just like it has for the last 30 years. Where are the grown-ups?

NOTE: from Table 2 in Keefe et al [2015]. Since we don’t have the actual data, the only way we can do the Omnibus ANOVA is from summary data. Nevertheless, in the DSST analysis, it is not significant. A strict interpretation of that finding invalidates any differences in the pair-wise comparisons. Also note that in both the DSST and MADRS analyses, Vortioxetine and Duloxetine are not statistically different.
 
Mickey @ 8:59 AM

the doctor and the computer…

Posted on Thursday 25 February 2016

"Today the U.S. Senate voted in support of the confirmation of Dr. Robert Califf, M.D. to be Commissioner of U.S. Food and Drug Administration. Dr. Califf has demonstrated a long and deep commitment to advancing the public health throughout his distinguished career as a physician, researcher, and leader in the fields of science and medicine.  He understands well the critical role that the FDA plays in responding to the changes in our society while protecting and promoting the health of the public, across the many areas we regulate – and I am confident that our public health and scientific contributions will further grow under his exceptional leadership."

At some other point, we might not have paid a lot of attention or even noticed the appointment of a new FDA Commissioner. For that matter, I wonder if we would have been aware that the position of Director of the NIMH has just been vacated. But right now, both have been on the front burner. Besides their obvious centrality in the ‘hot topic’ of pharmaceuticals, a prime subject in our daily news, there’s  something else that connects them – their relationship to the information age and its big data. Both are intrigued by and involved in the application of these new technologies to how we do medical research and specifically how they relate to the clinical trials we use to evaluate medications.

I called Insel a breakthrough freak, and I think I’m right about that. He was always in the future, aiming for the next big thing, constantly grumbling about the present. And that kind of thinking might sometimes be a real asset in a researcher. In his case, it didn’t seem balanced by the problems of the present – and so his blogs, speeches, and programs were always aiming for home runs instead of just trying to get on base. The other thing that bothered me was that he never practiced medicine – and it showed. His thinking often reminded me of my own when I was a medical student – uninformed by the clinical experience that builds a grounded medical intuition, something I thought he lacked.

I can’t levy that kind of criticism on Dr. Califf. He’s an academic cardiologist who built a very successful Clinical Research program at Duke before moving to the FDA a year ago. He couldn’t have done that without having a close relationship with the pharmaceutical industry. That was a red flag picked up on by the senators who opposed his nomination and anyone else who pays attention to these matters. He was a Principal Investigator on some big studies himself, most recently Xarelto®, the blockbuster anticoagulant that was approved on the strength of his study. His COI declaration is, indeed, more populated than I want it to be:
Dr Califf currently holds the post of Deputy Commissioner for Medical Products and Tobacco, US Food and Drug Administration. Prior to holding this post, Dr Califf received grant funding from the Patient-Centered Outcomes Research Institute, the National Institutes of Health, the US Food and Drug Administration, Merck, Roche, Aterovax, Bayer, Janssen Pharmaceuticals, Eli Lilly & Company, and Schering-Plough; grants and personal fees from Novartis, Amylin, Scios, and Bristol-Myers Squibb/Bristol-Myers Squibb Foundation; and personal fees from WebMD, Kowa Research Institute, Nile, Parkview, Orexigen, Pozen, Servier International, Bayer Healthcare, Bayer Pharma AG, CV Sight, Daiichi Sankyo/Lilly, Gambro, Gilead, Heart.org–Bayer, Medscape, Pfizer, Regeneron, TMC, GlaxoSmithKline, Genentech, Heart.org–Daiichi Sankyo, and Amgen. Dr Califf also reported holding equity in Nitrox/N30 and Portola. A full listing of disclosure information for Dr Califf for this interval is available at https://www.dcri.org/about-us/conflict-of-interest.
He’s certainly well represented in our medical literature. His name is on over 1100 articles indexed in PubMed, going back to 1978 when he was young [64-38=26]. The ordinate [y axis] on the chart represents his articles/year [articles/year!!!]. He was involved in starting the Duke Databank for Cardiovascular Disease in 1983 and was the founding director of the Duke Clinical Research Institute in 1996. He authored a particularly telling short editorial way back at the beginning of that graph [1981] that’s worth a read in that it seems to presage his subsequent career – The Doctor and the Computer [for reference: that’s the year the IBM PC was introduced]. The point being that he’s a computer/data guy who got on that bus from his earliest of days.

He’s a Cardiologist, a specialty that’s unrecognizable from my time as a medical resident in the late 1960s. Along with the cardiovascular surgeons, they’ve revolutionized the treatment of heart disease. And he’s been a part of that, to his credit. I personally think they’re too quick to jump in with some of their implantable devices, and their recommendations about Statins are way out of line. I actually feel the same way about Xarelto®. As often as it’s touted as a breakthrough, I see it as only a drug of convenience, and if I develop Atrial Fibrillation like so many of my current peers, I think I’ll stick with Warfarin for myself. Granted, there are dietary and medication restrictions, and it needs to be monitored, but you can turn it off. You can’t turn Xarelto® off. One car wreck, and you’re in deep trouble, or a GI bleed, or just being old [right now, I have a 95-year-old colleague in the hospital for dangerous spontaneous bleeding on one of Xarelto®‘s competitors]. Monitoring Warfarin just isn’t that hard. But in the main, the progress in cardiovascular medicine is impressive and then some.

But oddly, none of that is what really bothers me about his appointment. Right now there are two opposing critiques of the FDA. Perhaps the loudest is the cry for more drugs, with the FDA seen as a bottleneck in that process. PHARMA is pushing for a more streamlined process to make approval faster and easier. And the world has a voracious appetite for new drugs and is pulling for the same thing – more, faster, easier. One of the ways people are thinking about doing that is to utilize the wealth of Electronic Medical Record information and the techniques of Big Data to evaluate pharmaceuticals. Dr. Califf has been in the center of that move, recently as the editor of an issue of Clinical Trials devoted to pragmatic clinical trials, a version of that genre. Who hasn’t had the fantasy of doing clinical trials as part of ongoing medical care using these newer resources and techniques? I certainly have, particularly in monitoring long term efficacy and safety.

But, what about the CNS drugs? What about all the RCTs done on the antidepressants and antipsychotics that have such shaky science? What about approving Brexpiprazole for treatment resistant depression, or for that matter, any drug for treatment resistant depression? What about that misleading ad I can’t stop talking about? What about HHS and SAMHSA pushing for Behavioral Health moving to primary care with Collaborative Care as a backup – psychiatrists suggesting drug regimens for patients they haven’t even seen? What about waiting room screening – sure to escalate the inappropriate overmedication of patients? The FDA has certainly had its part in colluding with the pharmaceutical industry, whether on purpose or inadvertently, in all of these things. It took them seventeen years to add a black box warning that was needed after only a few.

The charge of the FDA is to ensure the safety and effectiveness of our pharmacopeia, and in my corner of the world [psychiatry], the net effect has been disappointing at times, and scandalous at others. Many of us have asked for something we should have had all along – Data Transparency – the right to look independently at the raw data from clinical trials and reach our own conclusions instead of the fiction of the ghost writers. Guido Rasi and others at the European Medicines Agency have worked at moving us in that direction, but the FDA has not, except for occasional rhetorical bursts that invariably fade. And we know that in spite of all of our complaints about short term RCTs, we still need them. Population and "naturalistic" studies of harms and efficacy haven’t worked out for us so far. There’s just no way to say that the FDA has provided effective mechanisms that guarantee safety and efficacy with our medications. They’ve stuck with minimal statistical significance for Efficacy and short term Adverse Event analysis – failing to roll up their sleeves and address the obvious problems that are still with us. The legal record of PHARMA penalties and settlements is in part a monument to those failings.

So my concern is that Robert Califf is coming to the FDA to modernize and streamline things and will follow the path he got on back at the dawn of his career with The Doctor and the Computer. He’s no breakthrough freak like Tom Insel, but I worry that the very real issues of data transparency, ongoing monitoring with careful attention to the adverse effects of drugs that are symptomatic rather than life-saving, the duplicity of the DTC ads, etc. will fall by the wayside. I’m afraid that he won’t get how important it is to put a definitive end to the notion of PHARMA’s proprietary ownership of clinical trial data. I’m a computer/data guy too. And there’s little question that’s an important part of our future. But one size doesn’t fit all, and our corner of the world needs some real, focused attention…
Mickey @ 6:55 PM

intended and unintended consequences…

Posted on Wednesday 24 February 2016


Pharmalot @ STAT
What a good compromise. Is it possible that Congressmen might wake up from their long sleep and do something so clearly sensible as this? As I read it, I thought, "And maybe require that they have to add the price per pill to the ad." The pharmaceutical companies moan about all the hoops they have to jump through, but I have a hard time generating any sympathy for their complaints. They can’t imagine the burden they’ve added to the practice of medicine with the ubiquitous "ask your doctor if ____ is right for you." Besides the direct effect of specific requests, it seems to me like it’s part of a change in many patients’ approach to medicine in general.

In the past, I was used to patients presenting their symptoms with a question mark. "Is something wrong?" "If so, what’s wrong?" "What can be done about it?" These days, many patients present with an agenda. While that’s just an impression with no p-value, it’s a strong impression. One gets used to the drug seeking patient whose symptoms and presentation are steering you toward the inescapable conclusion that they should be prescribed some particular controlled substance. People often use the term sociopath to describe such people. I’m talking about something different – the person who has seen some ad and thinks "I’d like to try that." As the doctor, you are then the hurdle between the patient and Cymbalta® or Rexulti® [but you don’t know that]. Occasionally, that’s fine and good, but it’s not the rule. It’s one of the reasons I’ve become more fluent in drug prices, a language I have never been drawn to particularly.

One might see this state of affairs as including the patient in the decision about treatment. I’m fine with including the patient in decisions about treatment. But I’m not at all fine with those ads’ ability to create patients, or to suggest that there’s magic in antipsychotics for people who tried to find magic in the antidepressants that came before them, when the actual problem is in the "psycho-social" domain. I don’t like thinking these thoughts particularly, but this "agenda trend" is very real and it’s coming from someplace. The ads may well be aimed at creating a market for specific drugs, but they reinforce the idea that the solution to life’s woes comes in a potion of some kind. And I recurrently wonder how much that contributes to things like this article in my local paper.

That’s not an idle thought. I’ve heard too many times people in drug trouble say, "I tried those antidepressants and they didn’t do the job. But the Meth did." Whatever the case, these are not thoughts for a practitioner; they’re for policy makers who don’t seem to realize that all this talk of integrating behavioral health with primary care is almost guaranteed to accelerate the CNS drug epidemic. The most recent HHS bulletin to appear in my box combined that story line with one about doing the same thing with substance abuse – offering webinars for both. I had the fantasy of countering with a webinar on the dangers of unintended consequences…
Mickey @ 12:58 PM

the verb “to follow”…

Posted on Tuesday 23 February 2016

About six months ago, I saw a patient on an outrageous medication regimen – one that rendered her literally unable to think. I mentioned her first in blitzed…, later in some truths are self-evident…, and most recently in a story: getting near the ending[s]… shortly after Christmas. I obviously had to get her off the medication to get her mind back. I was worried about withdrawal states, but that turned out not to be the problem – it was the masked Tardive Dyskinesia. It only showed when I tapered the Seroquel. It was severe, disfiguring, and maddening – restless muscles, hand wringing, and constant back-and-forth jaw movements. Finally, after six months, it has mercifully begun to clear, at least enough to be tolerable. Her diagnosis? Personality disorder, passive dependence, life’s woes, etc. Certainly not Schizophrenia or Bipolar Disorder or anything like that. The mental health center has another more tolerable telepsychiatrist now who isn’t going to be doing that kind of medicating [and they are definitely afraid of me now]. Why mention her when I am writing about Schizophrenia? To remind us all that Tardive Dyskinesia is very real, still out there, and can be just as bad with Atypical Antipsychotics as with the older drugs. It’s not a virtual threat – it’s painfully real, a show-stopper. Treatment? Don’t get it. Otherwise, symptomatic care and tightly crossed fingers. The good news? In some cases, it very gradually abates with time off meds.
Psychiatric Times
by Allen Frances
February 17, 2016
[note the comment by Sandra Steingard]
Psychiatric Times
by Ronald Pies
February 22, 2016
[note the comment by Bernard Carroll]
Those are four very solid citizens highlighted in red, each an expert in their own right. Their opinions differ, and they’re each candid and clear in what’s written in these two articles and comments. No matter which side of the fence you are on, you’ll know more after reading them than you did before. What they’re talking about is a dilemma – a genuine dilemma. It’s a dilemma for people with psychotic illnesses and those of us who care for them. People who talk about this dilemma tend to polarize and can have a field day with simplification and ad hominem attacks on each other, and in what you will read up there, that tendency is held in check. I congratulate all four for that.

I was in training as the Community Mental Health Movement was running out of money and power. The services were disappearing at an alarming rate. The streets of Atlanta were visibly populated with chronic mental patients and the traditional benevolent agencies were stretched beyond their capacities. We did the best we could, but it wasn’t close to good enough. I left training feeling much like Sandra talks about in her comment. I thought that the best approach to the dilemma about antipsychotics was to use them acutely, then aim for a medication-free life, with medicine always available for exacerbations. I thought that if the resources were available, I could bring that off.

I’m an Internist still, and I learned in that role that one of the more important functions of a physician is following patients with chronic diseases of any kind. You don’t know how important the little ways you can help along the way are until you do it. It doesn’t necessarily have to be frequent, but patients with chronic disease need an anchor – some place where their story is known and they can go even if it’s for a referral elsewhere. It’s a mutually rewarding enterprise. It’s what doctors have done forever, and it’s still the right thing to do. I thought that if I followed my patients with psychotic illness, I could manage medication-free. I also thought, and still think, that psychosis isn’t the only problem these patients have, there’s an underlying disability that often comes with these conditions. There’s a particular brand of supportive psychotherapy that can really help many of them negotiate their lives. I still believe that.

I couldn’t bring it off – the medication-free part.  The recurrences were too unpredictable, too disruptive, too expensive, too disheartening. I was game but surprisingly, in time, the patients weren’t. I did do drug holidays, primarily to be sure TD wasn’t lurking underneath like in the case above. I had some patients who were mostly medication free, but it was way down the line. So personally, I was a person who came to agree that maintenance treatment was the best path, with careful following. What would I do with a patient who refused medications? As Sandra says, It’s not my decision. I’d still follow the patient if they were willing and it wasn’t detrimental.

So what I would add to this discussion is just that – following. We know that many psychotic people have a hard time forming and maintaining long term affiliations, and so success isn’t guaranteed. But I’m also sure that living with the long term effects of these illnesses is usually bigger than just the psychotic episodes, and that periodic contact with someone who knows the condition in general and in the specific case can be a factor for a lot of good. We follow heart disease, diabetes, cancer, hypertension. This is perhaps an even greater imperative. Managed Care doesn’t put "to follow" on the table because its benefits either haven’t been or can’t be measured, but that doesn’t mean they’re not there. And as to the differences among the four authors above – differences are how dilemmas work until they’re no longer dilemmas. We’re not there yet…
Mickey @ 11:42 PM

self-rated metrics…

Posted on Monday 22 February 2016

With all our graphs and tables, we can lose sight of the nuts and bolts of clinical medicine – signs and symptoms. Symptoms are those things patients report, and signs are things a clinician can see. In studying mental illness, we mostly have to rely on subjective reports of symptoms since the objective signs, like a depressive countenance, are under voluntary control and can be feigned by actors  [as demonstrated above and on our television sets more often than I can sometimes bear]. Objective biomarkers in the main elude us. And so we take a big hit from critics about the subjectivity of mental illness and its diagnosis. I’ve lived on both sides [Internal Medicine and Psychiatry], and I am little moved by that criticism. Just because something’s hard doesn’t mean it can’t be done. It’s simply the challenge of clinical experience and training. In the clinical trials of drug effects on mental symptoms, we put a lot of faith in the available clinimetrics – rating scales generated by raters who are blinded to the treatment group. And how are these raters competent to perform this function? That’s an encyclopedic question. Here’s just a sample:
Evaluating rater competency for CNS clinical trials
by SD Targum
Journal of Clinical Psychopharmacology. 2006 26[3]:308-310.

Clinical trials rely on ratings accuracy to document a beneficial drug effect. This study examined rater competency with clinical nervous system rating instruments relative to previous clinical experience and participation in specific rater training programs. One thousand two hundred forty-one raters scored videotaped interviews of the Hamilton Anxiety Scale [HAM-A], Hamilton Depression Scale [HAM-D], and Young Mania Rating Scale [YMRS] during rater training programs conducted at 9 different investigator meetings. Scoring deviations relative to established acceptable scores were used to evaluate individual rater competency. Rater competency was not achieved by clinical experience alone. Previous clinical experience with mood-disordered patients ranged from none at all [18%] to 40 years in 1 rater. However, raters attending their first-ever training session [n = 485] were not differentiated on the basis of clinical experience on the HAM-A [P = 0.054], HAM-D [P = 0.06], or YMRS [P = 0.66]. Alternatively, participation in repeated rater training sessions significantly improved rater competency on the HAM-A [P = 0.002], HAM-D [P < 0.001], and YMRS [P < 0.001]. Furthermore, raters with clinical experience still improved with rater training. Using 5 years of clinical experience as a minimum cutoff [n = 795], raters who had participated in 5 or more training sessions significantly outperformed comparably experienced raters attending their first-ever training session on the HAM-A [P = 0.003], HAM-D [P < 0.001], and YMRS [P < 0.001]. The findings show that rater training improves rater competency at all levels of clinical experience. Furthermore, more stringent criteria for rater eligibility and comprehensive rater training programs can improve ratings competency.
So, how are these raters competent to perform this function? The short answer is training. It’s their ratings on the HAM-D, CGI, MADRS, CDRS-R, PANSS, etc. that get turned into the tables, graphs, p-values, and odds ratios that populate the clinical trial reports filling our journals – numeric proxies for the subjects of study. Parenthetically, I’m kind of impressed at how well they do, judged by consistency and inter-rater reliability [better than the clinicians in the DSM-5 field trials of diagnosis]. But that’s not my point here. I’m thinking about some other rating scales, the subject self-rating scales that are included in many clinical trials, either by requirement or convention.
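Since I leaned on inter-rater reliability just now, it’s worth being concrete about what that number usually is. One standard version is Cohen’s kappa, agreement between two raters corrected for chance. This is a minimal sketch of the statistic itself; the ratings below are invented [they come from no study, and the Targum paper used scoring deviations rather than kappa]:

```python
from collections import Counter

# Cohen's kappa for two raters - toy data, invented for illustration.
# Each entry: (rater A's call, rater B's call) on the same subject.
pairs = [("responder", "responder")] * 30 + \
        [("responder", "non")] * 5 + \
        [("non", "responder")] * 5 + \
        [("non", "non")] * 60

n = len(pairs)
observed = sum(a == b for a, b in pairs) / n  # raw agreement: 0.90

# Agreement expected by chance alone, from each rater's marginal rates
a_counts = Counter(a for a, _ in pairs)
b_counts = Counter(b for _, b in pairs)
expected = sum(a_counts[c] * b_counts[c] for c in a_counts) / n**2

kappa = (observed - expected) / (1 - expected)
print(f"observed = {observed:.2f}, chance = {expected:.2f}, kappa = {kappa:.2f}")
```

With 90% raw agreement and about 55% expected by chance, kappa lands near 0.78, the sort of figure trained trial raters can reach, and well above the kappas reported for many diagnoses in the DSM-5 field trials.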

There are roughly two levels for evaluating the outcome of clinical trials of psychopharmacologic treatments. One is statistics. That’s the FDA standard. Their charge is to make sure that a medicine has medicinal properties – that it isn’t inert like many of the patent medicines of old. And in most cases, that’s that for the FDA – p < 0.05. Clinical significance isn’t their job. A second level might be thought of as the way the Cochrane Collaboration approaches evaluation – not just is it medicinal, but how strong is it? They display and report the Effect Sizes – things like Cohen’s d, Hedges’ g, the Standardized Mean Difference, the Odds Ratio, NNT, NNH. Then they combine these strength-of-effect measures with the 95% Confidence Intervals [a probability measure] in their familiar forest plots, which I find invaluable. But what about what the subjects say? Many of the Observer-Rated Metrics have Subject Self-Rated versions that cover the same ground [HAM-D-SR, IDS-SR, QIDS-SR, etc.]. And there are others that focus on other areas of subjective experience.
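For anyone who wants to see what those strength-of-effect numbers actually are, here’s a minimal sketch of the continuous-outcome arithmetic. The summary statistics are invented [chosen only to resemble the ballpark of drug-placebo separations these trials report], and the formulas are the standard textbook ones, not anything specific to Cochrane:

```python
import math

# Hypothetical summary statistics - invented, not from any actual trial.
# Values are mean symptom improvement from baseline, in rating-scale points.
n_drug, mean_drug, sd_drug = 200, 8.4, 9.0
n_pbo,  mean_pbo,  sd_pbo  = 200, 5.4, 9.2

# Pooled standard deviation across both arms
sp = math.sqrt(((n_drug - 1) * sd_drug**2 + (n_pbo - 1) * sd_pbo**2)
               / (n_drug + n_pbo - 2))

# Cohen's d: the drug-placebo difference in standard-deviation units
d = (mean_drug - mean_pbo) / sp

# Hedges' g: d with the usual small-sample correction
g = d * (1 - 3 / (4 * (n_drug + n_pbo) - 9))

# Approximate 95% confidence interval for d [large-sample standard error]
se = math.sqrt((n_drug + n_pbo) / (n_drug * n_pbo)
               + d**2 / (2 * (n_drug + n_pbo)))
print(f"d = {d:.2f}, g = {g:.2f}, "
      f"95% CI [{d - 1.96 * se:.2f}, {d + 1.96 * se:.2f}]")
```

A 3-point separation against a standard deviation around 9 gives d ≈ 0.33, a small effect by Cohen’s conventions no matter how impressive the accompanying p-value looks.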

When we sit in our offices, all we have to go on is what our patients have to say about what the medications are doing and how they look when they walk in the door. The scale is simple: "It really helped," "I think it might be helping," "It’s not helping." But the subject as self-rater isn’t so prominently mentioned in the published clinical reports [unless it’s a positive report]. Take for example the recent clinical trials of Brexpiprazole [Rexulti®] in treatment resistant depression that I can’t seem to stop talking about. Remember that there are two sets of efficacy data – a jury-rigged set and the real data [in an appendix]. Here’s some summary info from the real data – primary outcome on top [graphs] and secondary outcomes below [table]:

The lonely IDS-SR [Inventory of Depressive Symptomatology – Self-Report] didn’t make the grade. It wasn’t mentioned in the Results of the 2 mg study article [on the right]. It happened that in the other article [1 mg and 3 mg], the jury-rigged data being reported came out with the IDS-SR having a p-value of 0.0251, so it was included:
"Brexpiprazole 3 mg showed greater efficacy than placebo (P < .05) on MADRS-defined response rate, CGI-I–defined response rate, and CGI-I at week 6 and in mean change from baseline at week 6 in CGI-S, HDRS-17, HARS, and IDS-SR."
Having developed a late-life hobby of looking at these Clinical Trials, I notice that Effect Sizes are rarely included in the published studies [do-it-yourself guide @ john henry’s hammer: continuous variables II… and john henry’s hammer: continuous variables III…]. But unlike the FDA, clinicians are interested in the robustness of response. And as in the example here, the subject-rated metrics, if included at all, are either passed over entirely or mentioned only in passing. If you even remotely accept my premise that these industry-funded clinical trials are more product testing than scientific research, they’ve got things backwards. They show us the outcome that is most sensitive [statistical significance] and leave out the measures of robustness [Effect Sizes] and the subjects’ experiential ratings [self-rated metrics]. We wouldn’t accept that for toothpaste. Why should we accept it for our medications? Are we expected to believe that augmenting with Rexulti® in patients who don’t respond to antidepressants is a good idea if those patients don’t report that their symptoms are any different from those of the subjects who got placebo?
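For the categorical outcomes, the response rates and the NNTs and Odds Ratios built from them, the do-it-yourself arithmetic is simpler still. The counts below are invented for illustration, not taken from the Rexulti® papers:

```python
# Hypothetical response counts - invented, not from any published trial.
responders_drug, n_drug = 70, 300   # drug + antidepressant arm
responders_pbo,  n_pbo  = 50, 300   # placebo + antidepressant arm

p_drug = responders_drug / n_drug   # ~23.3% response
p_pbo  = responders_pbo / n_pbo     # ~16.7% response

# Absolute risk difference and Number Needed to Treat:
# how many patients must get the drug for one extra responder
risk_diff = p_drug - p_pbo
nnt = 1 / risk_diff

# Odds Ratio, the other staple of the forest plot
odds_ratio = (p_drug / (1 - p_drug)) / (p_pbo / (1 - p_pbo))

print(f"risk difference = {risk_diff:.1%}, NNT = {nnt:.0f}, OR = {odds_ratio:.2f}")
```

An NNT of 15 means treating fifteen patients to get one response beyond what the antidepressant alone would have produced, the kind of plain-language number the published reports could easily include and mostly don’t.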

As prescribing physicians, we have access to more information than our patients. All they get is what they see in the media [the actors]. We at least have the papers, but these days we have to do more than just read what’s handed to us. Accepting the deceptive and selective reporting in the published articles just can’t be justified in the climate of our current literature. So it behooves practitioners to go the extra mile: to make the simple calculations that regularly go missing, and to take note of metrics like the subject self-ratings that may be mentioned in the Methods but don’t make it to the Results except buried in a table.

Otherwise, we’re giving our patients no more than the paid actors in the now ubiquitous commercials…
Mickey @ 8:28 PM

’twas brillig…

Posted on Monday 22 February 2016

Well, I did it. I posted links to the special issue of Psychophysiology on the elusive RDoC [RDoC…], and I read some of the articles. I get what they’re aiming to do, but only that – I’m left with no personal understanding of how this will get them there. There are things that I know I’ll never understand, and this is one of them – like the others in my little collage. There were snatches along the way that made sense and seemed like productive avenues, but then I was lost at sea once again. At times I thought, "No wonder Insel went to Google"; at other times, "promising." Also, I was looking for some explanation of what he meant when he said:
"Five years ago, the NIMH launched a big project to transform diagnosis. But did we have the analytical firepower to do that? No. If anybody has it, companies like IBM, Apple or Google do – those kinds of high-powered tech engines…"
I have no real idea of what he was talking about – how a high-tech engine might be used at this point. They take a shot at that in the last article [the Data-Web], but it wasn’t altogether clear what they were collecting for their ‘big data’. Most of what I read seemed like think-tank talk. But I’ve done my due diligence now. I wish them luck and am moving on. I sure didn’t find anything suggesting that the RDoC has reached a point where it deserves to be a basis for future NIMH Grants. To borrow a line from a friend: like the DSM-III that preceded it, the RDoC isn’t ready for prime time. That’s about all I can say with any confidence…
Mickey @ 11:01 AM