a cul de sac I…

Posted on Saturday 14 February 2015


The international Study to Predict Optimized Treatment in Depression (iSPOT-D): outcomes from the acute phase of antidepressant treatment
by Saveanu R, Etkin A, Duchemin AM, Goldstein-Piekarski A, Gyurak A, Debattista C, Schatzberg AF, Sood S, Day CV, Palmer DM, Rekshan WR, Gordon E, Rush AJ, Williams LM.
Journal of Psychiatric Research. 2015 61:1-12.

We aimed to characterize a large international cohort of outpatients with MDD within a practical trial design, in order to identify clinically useful predictors of outcomes with three common antidepressant medications in acute-phase treatment of major depressive disorder (MDD). The international Study to Predict Optimized Treatment in Depression has presently enrolled 1008 treatment-seeking outpatients [18 – 65 years old] at 17 sites [five countries]. At pre-treatment, we characterized participants by symptoms, clinical history, functional status and comorbidity. Participants were randomized to receive escitalopram, sertraline or venlafaxine-extended release and managed by their physician following usual treatment practices. Symptoms, function, quality of life, and side-effect outcomes were assessed 8 weeks later. The relationship of anxiety to response and remission was assessed by comorbid Axis I diagnosis, presence/absence of anxiety symptoms, and dimensionally by anxiety symptom severity. The sample had moderate-to-severe symptoms, but substantial comorbidity and functional impairment. Of completers at week 8, 62.2% responded and 45.4% reached remission on the 17-item Hamilton Rating Scale for Depression; 53.3% and 37.6%, respectively on the 16-item Quick Inventory of Depressive Symptoms. Functional improvements were seen across all domains. Most participants had side effects that occurred with a frequency of 25% or less and were reported as being in the “none” to minimal/mild range for intensity and burden.

Outcomes did not differ across medication groups. More severe anxiety symptoms at pre-treatment were associated with lower remission rates across all medications, independent of depressive severity, diagnostic comorbidity or side effects. Across medications, we found consistent and similar improvements in symptoms and function, and a dimensional prognostic effect of comorbid anxiety symptoms. These equivalent outcomes across treatments lay the foundation for identifying potential neurobiological and genetic predictors of treatment outcome in this sample.
The third paragraph of the paper says:
    Recent efforts have focused on identifying clinical or laboratory-based measures that help to precisely target treatments for specific patients. While several neurobiological markers have been investigated, none have been of sufficient clinical value to be incorporated into treatment guideline recommendations….

That paragraph deserves a little more introduction:

BACKGROUND


As the decade following the coming of Prozac drew to a close, it was apparent that the new antidepressants were no panacea for symptomatic depression – at least not when prescribed in the chaotic way they were being used. Some thought we should give the medications in a systematic way, with guidelines, with objective measurement data. This thread is about the some who thought that was the problem. We needed algorithms for our treatment. Who knows the right way to give these medications? The Experts [the some] – that’s who. So we’ll create algorithms for the drugs used to treat Schizophrenia, Bipolar Disorder, and Major Depressive Disorder [MDD], by Expert Consensus, for our clinicians to follow as a guide to treatment. And where shall we do this? In the largest public mental health system there is – the State of Texas. Thus, in 1996, the Texas Medication Algorithm Project [TMAP] came into existence – generously supported by Foundations, the State of Texas, and multiple pharmaceutical companies, all coordinated by the psychiatrists of the University of Texas system. The algorithms spread to multiple States, and to the Federal Government when the Texas Governor became the US President.

The idea of applying systematic study using clinical trials of these algorithms appealed to the NIMH, and there followed a period of large acronymed Clinical Trials in Depression, Schizophrenia, Bipolar Disorder, and other disorders. The largest, STAR*D [Sequenced Treatment Alternatives to Relieve Depression], was run by the TMAP team and used a complex, sequential algorithm in which non-responders were switched to another drug or an augmentation scheme. In MDD, there was a side project, IMPACTS [algorithmic psychiatry: the fallacy…], that computerized the algorithms. It had to be scrapped because the clinicians wouldn’t use it if left to their own devices. Then came CO-MED, which combined multiple antidepressants – no help. I think it’s reasonable for me to say that all of these efforts generated much ado and many papers, and spent tons of money, but not much changed in the response rates of MDD to antidepressants [if you’re not up to speed, just look up any acronym using the search box at the bottom of this page].

That pretty much covers the completed efforts in Recent efforts. Now for the Recent part. Since all of this started, there have been some new kids on the block: Genomics, Proteomics, Functional Neuroimaging, Cognitive Testing. Well, not new kids, but at least more prominent shiny new objects in neuroscience. And in the rest of medicine, Personalized Medicine [picking a specific treatment based on unique biomarkers] had become a hot new area. Well, not a new area, but at least a more prominent shiny new object. So Personalized Medicine began to be bandied about as a possible exploratory area for picking an antidepressant – the scientific rationale being «fill in the blank?». And a new character entered the ring – Evian Gordon, a brain-training Australian [BrainNet] – and his colleague, Leanne Williams [Brain Resources]. They gathered a who’s who of KOLs [list] for their Personalized Medicine Action Group in D.C. in October 2009 to kick off a campaign to personalize antidepressant treatment [The Mayflower] [which is a must-see to understand this line of thinking].

In the year before this conference [2008], Senator Grassley’s congressional investigation had reshuffled the careers of the people he exposed as having unreported PHARMA income. John Rush [of TMAP, STAR*D, and CO-MED] left UT and went to Duke in Singapore. Alan Schatzberg, APA President, stepped down as Chairman at Stanford. Charlie Nemeroff, Boss of Bosses, was removed at Emory and went to Miami as chair. The emptiness of the PHARMA new-drug pipeline was looming. Out of that matrix, two large Personalized Medicine studies came into being. iSPOT-D was financed by Evian Gordon’s BrainNet/Brain Resources [see personalized medicine: the Brain Resources company II…] and added the Grassley-investigated people to its byline…
International Study to Predict Optimized Treatment for Depression (iSPOT-D), a randomized clinical trial: rationale and protocol
by Williams LM, Rush AJ, Koslow SH, Wisniewski SR, Cooper NJ, Nemeroff CB, Schatzberg AF, Gordon E.
Trials. 2011 12:4.
Meanwhile, Dr. Madhukar Trivedi, still at UT Southwestern, started a second Personalized Medicine study, EMBARC [Establishing Moderators and Biosignatures of Antidepressant Response in Clinical Care], with a large NIMH grant, mentioning many of the STAR*D veterans as resources in his grant proposals [see the race for biomarkers… and Godzilla vs. Ghidorah…]. This second Clinical Trial is still recruiting.

SCIENCE


The scientific premise behind this line of thinking and this series of trials was questionable from the outset. The notion that Major Depressive Disorder as defined represents a distinct, unitary disease entity was a conjecture that has since moved rapidly into the realm of fantasy. The evidence for an overriding biological etiology was equally scant, and has traveled in the same direction. Likewise, by any ongoing reading of the accumulating information, the therapeutic action of the antidepressant drugs is a non-specific, symptomatic effect – not something determined by the kind of precise or controllable biological mechanisms hypothesized in these studies – and certainly nothing tied to etiology.

COMMERCE


The entrepreneurial background in this story should really be considered more a part of the foreground. TMAP was exposed by whistle-blower Allen Jones as a conduit for PHARMA to introduce in-patent drugs into the public sector without evidence of efficacy justifying their use, facilitated by under-the-counter payoffs to officials. TMAP was shut down, and the States have tried to retrieve their considerable losses with varying levels of success. To be fair, the payola flowed between State employees and PHARMA, not the academics. But the academics in all of these trials have been far too tied into industry across the board. These studies have generated a lot of money for participating universities and centers – over $50 M from the NIMH alone. In Academia, there’s a commerce in published articles, and STAR*D probably hit an all-time high with well over 100 papers, each with way too many authors [see infectious numerology…]. Then in the iSPOT byline, we see some legendary psychiatrist-entrepreneurs [Nemeroff, Schatzberg, Debattista]. And there’s Evian Gordon’s more up-front private enterprise, hoping to develop commercial tests to screen patients and pick an antidepressant in advance, as in this BrainNet pitch [see personalized medicine: beyond blockbusters…]:

Note: this document is now gone from the Internet [see it currently here through the Wayback Machine].

DESIGN


These studies have not been designed like the usual RCTs. For example, none had a placebo group, so the response/remission rates were uncorrected for the usual improvements seen in antidepressant trials from inert treatment [placebo]. As a result, the strength of the drug effect [NNT, NNH, Odds Ratios, Effect Size] simply could not be calculated and remained unknown. None of them followed the usual double-blinding, settling instead for partial schemes of one sort or another. They were described as "naturalistic" – meaning more like the treatment one might receive in an office setting than a strictly controlled trial [and it showed]. They had high drop-out rates, and there’s a lot of confusion about the various rating instruments used, particularly with STAR*D [see recalculating… and still recalculating…]. Thus far, the only completely reported study was CO-MED [a negative trial]. In spite of a flood of offshoot papers, the final report for STAR*D never appeared, and the results that were reported strained credibility.
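Just to make the missing arithmetic concrete, here’s a minimal sketch [in Python; the placebo rate is hypothetical, since these trials measured none] of how a placebo arm enters the basic drug-effect calculations:

```python
# Minimal sketch: why drug-effect statistics need a placebo arm.
# The placebo rate below is HYPOTHETICAL -- none of these trials measured one.

def nnt(drug_rate: float, placebo_rate: float) -> float:
    """Number Needed to Treat = 1 / absolute risk reduction."""
    arr = drug_rate - placebo_rate
    if arr <= 0:
        raise ValueError("no absolute benefit; NNT is undefined")
    return 1.0 / arr

def odds_ratio(drug_rate: float, placebo_rate: float) -> float:
    """Odds of response on drug relative to odds of response on placebo."""
    return (drug_rate / (1 - drug_rate)) / (placebo_rate / (1 - placebo_rate))

drug_rate = 0.622      # iSPOT-D's reported week-8 HRSD-17 responder rate
placebo_rate = 0.40    # hypothetical -- iSPOT-D had no placebo arm

print(f"NNT        = {nnt(drug_rate, placebo_rate):.1f}")         # 4.5
print(f"Odds ratio = {odds_ratio(drug_rate, placebo_rate):.2f}")  # 2.47
# Delete placebo_rate and neither number can be computed at all --
# which is the point: 62.2% response, by itself, measures nothing.
```

With a plausible placebo rate, a 62.2% response is unremarkable; without any placebo rate, it isn’t even interpretable.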

I started this post intending to comment on the preliminary iSPOT study reported above, but thought I ought to remind us about the history of these two studies [iSPOT and EMBARC] that are part of a quest to find biomarkers that will predict response to antidepressants. What I really wanted to do was describe the rationale for thinking that such biomarkers might exist, but looking back over the literature, I can’t find one. As with the earlier outings by these investigators – the algorithms and guidelines exploring sequenced, combined, or augmented antidepressant regimens – the scientific rationale is missing. Certainly those outings [TMAP, STAR*D, IMPACTS, and CO-MED] have yielded nothing of note. And yet there are two major ongoing studies, with authors familiar to all of us, hotly pursuing this line of investigation. Perhaps that’s a more interesting question than the actual studies themselves: Why do they persist in going down this road? Why do they keep trying to find a way to make these drugs more effective than they are, with no scientific clues that it’s possible? Why are these efforts funded?

Maybe I’ll take a shot at those questions, but first I’m going to stick to my guns and report on the iSPOT study in the next post, as I planned in this one [before I got wordy with my skepticism], because this report deserves some clarification of its own…
  1.  
    Laura Henze Russell
    February 14, 2015 | 9:31 AM
     

    Functional medicine provides a fruitful approach: screen people for genetic defects in methylation and detoxification pathways, and screen them for heavy metals and toxins that can wreak havoc on healthy brain chemistry, electrical and neurotransmitter function. If precision or personalized medicine goes down this route, it can be helpful. If it only looks for targeted Rx to address symptoms and not root causes, not so much. Two instructive books worth reading are Anxiety: Hidden Causes, and Toxic Metal Syndrome. Approaches that “restore” and “reset” brain function to a healthier, more normal state are helping people recover health, and get off a lifetime of drugs.

  2.  
    Catalyzt
    February 14, 2015 | 11:58 AM
     

    Any thoughts on DeBattista’s work on referenced EEG? I thought it was kind of interesting that the anticonvulsants and stimulants seemed to do better than “antidepressants” in the 2011 study. And I couldn’t help wondering if part of the reason eating disorder clinics are using it is that they’re trying to find a back-door way to justify getting their patients off SSRIs. This is a total crackpot idea, absolute wild speculation on my part– I have no idea if that’s what they’re doing or why– but I can imagine that the SSRIs would complicate treatment enormously for patients with ED, at least for a responsible clinician.

  3.  
    Joseph Arpaia
    February 14, 2015 | 9:34 PM
     

    My opinion of the use of rEEG or qEEG for ED patients is unprintable, perhaps because I had a severe anorexic come to me who had been treated for months with amphetamine based on her qEEG. She was near death (<60% of ideal body weight and that with pitting edema to her knees) and needed to be hospitalized for several weeks.

    I have also had a number of patients who had gone through tests to determine what the right medication was for them based on EEG, or genomics, or brain scans. My main recollection is that most seemed to respond for awhile and then relapse.

    My take on all these methods for determining the "right" treatment is that the studies need to control for an enhanced placebo response. Since some of the placebo response is related to the expectation of benefit, when you tell a patient that some whiz-bang test has determined that X is the medication for them, then the probability of X causing a placebo response is greater. This needs to be controlled for.

    Controlling for this is difficult, but could be done. For example, you would have three separate groups of clinicians. Clinician group 1 would administer the whiz-bang test (WBT). Clinician group 2 would do a diagnostic interview on the patients; for half the patients they would have the results of the WBT, and for the other half they would not. Group 2 would write up their medication recommendations. Clinician group 3, blind to whether the recommendations used the WBT, would then communicate these medication recommendations to the patients, informing all of them that the recommendations used the WBT results. This would effectively blind the clinicians and the patients and allow one to measure the effect of the WBT over a diagnostic interview. (Since all patients would be receiving prescriptions based on a diagnostic interview, the control group would be receiving adequate treatment.)
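
    A toy simulation of that scheme [all numbers hypothetical] shows why equalizing expectancy across the two arms isolates the test’s real informational value:

    ```python
    # Toy simulation of the three-group WBT design sketched above.
    # All response probabilities are hypothetical.
    import random

    random.seed(42)
    N = 1000

    BASE = 0.45        # response to interview-guided treatment, incl. ordinary placebo response
    EXPECTANCY = 0.10  # extra boost from being TOLD a whiz-bang test guided the choice
    TRUE_WBT = 0.00    # real informational value of the WBT -- assumed zero here

    # Every patient is told the WBT was used (group 3's script), so the
    # expectancy boost applies to both arms; only half actually received
    # WBT-informed recommendations (group 2's blinding).
    wbt_arm     = [random.random() < BASE + EXPECTANCY + TRUE_WBT for _ in range(N // 2)]
    control_arm = [random.random() < BASE + EXPECTANCY for _ in range(N // 2)]

    rate = lambda arm: sum(arm) / len(arm)
    print(f"WBT-informed arm: {rate(wbt_arm):.1%}")
    print(f"Control arm:      {rate(control_arm):.1%}")
    # With expectancy equalized, any surviving difference estimates the
    # test's real value rather than an enhanced placebo response.
    ```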

    Another problem with evaluating such tests is that when you are running correlations between a clinical outcome and a large number of variables, the usual statistical methods give unreliable results. The 2011 paper by DeBattista et al. reports p values that look low, but given the number of correlations that are possible, those would be expected. That is why so few "statistically significant" results in these types of studies are replicated: the researchers are using the wrong statistics. Papers on this subject tend to make my head hurt, but an approachable one is A Dirty Dozen: Twelve P-Value Misconceptions, Goodman S. Semin Hematol 45: 135-140 (2008).
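
    To see the scale of the problem, a short simulation [pure noise standing in for both outcome and biomarkers] reproduces those low-looking p values under a null where nothing is real:

    ```python
    # Toy illustration of the multiple-comparisons problem: the outcome and
    # all "biomarkers" are random noise, so every hit below is a false positive.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n_patients, n_biomarkers = 100, 200

    outcome = rng.normal(size=n_patients)                     # e.g., symptom change
    biomarkers = rng.normal(size=(n_patients, n_biomarkers))  # e.g., EEG features

    p_values = [pearsonr(biomarkers[:, j], outcome)[1] for j in range(n_biomarkers)]
    hits = sum(p < 0.05 for p in p_values)

    print(f"{hits} of {n_biomarkers} correlations 'significant' at p < 0.05")
    # Expect ~10 (5%) by chance alone -- the same low-looking p values that
    # vanish on replication unless corrected (Bonferroni, FDR, etc.).
    ```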

  4.  
    Catalyzt
    February 16, 2015 | 1:13 AM
     

    Thanks so much for that – and fantastic article, I’ll be gnawing away on that bone for a while, I imagine. “Approachable” is the right word, though for me it’s more like asymptotic – I come closer and closer to understanding, but never quite get there. #3 seems like one that so many writers blow right past when they’re summarizing research – the idea about the effect size, but also the reminder that the outcomes measured might not be clinically important. I remember stumbling across a similar idea when I was trying to digest research about chemotherapy, realizing that rates of tumor recurrence don’t always correlate with what’s important to patients – usually lifespan or quality of life.

    Horrifying story about your patient, terrifying that any clinician would follow that kind of medication advice, no matter where it came from.

    Very interesting idea for controlling for enhanced placebo response. As for the genomic angle, my psychiatrist told me that his general impression is that it just doesn’t work, and has trouble understanding why some of his colleagues thought it might.

  5.  
    wiley
    February 17, 2015 | 5:46 PM
     

    I read recently that subjects in studies have a more powerful placebo response when told that the drug they may or may not be taking is very expensive.

    Sad, but true.
