latter day STAR*D I…

Posted on Monday 26 January 2015

The initial fanfare with the SSRIs introducing the Antidepressant Age [Prozac 1988] was followed by a period of disillusionment as the lower than hoped [hyped] response rates became apparent. While the logic behind what came next isn’t totally clear to me, the non-responding patients were seen as having Treatment Resistant Depression, as if this were some variant of Major Depressive Disorder, and the concept developed that the efficacy of the SSRIs could be enhanced by sequencing, combining, or augmenting the therapeutic power of these drugs using some algorithm or treatment guideline. I don’t know how this idea came into being, but I know where. It was in Texas at UT Southwestern, and was implemented as TMAP [the Texas Medication Algorithm Project], followed by a series of NIMH-funded Clinical Trials [and others] looking for ways to improve the efficacy of these drugs.
These studies cost the NIMH well over $50 M, produced hundreds of articles, are mentioned in 125 posts on this blog, and came to naught. Here’s the conclusion to my last post on STAR*D [retire the side…]:
    These attempts to turn clinical psychiatry into algorithms for medications driven by symptoms, often gathered by questionnaires, have yielded nothing. Worse, they trivialize both the human experience of patients and the practitioners’ efforts to help them. Let’s hope that the chart up there has finally run out of iterations and we can retire the side…
The STAR*D dataset is available for other uses from the NIMH, and recently several independent investigations have appeared that make interesting use of the data from this large cohort:
by Eiko I. Fried and Randolph M. Nesse
Journal of Affective Disorders. 2014 172C:96-102.

Background: The DSM-5 encompasses a wide range of symptoms for Major Depressive Disorder [MDD]. Symptoms are commonly added up to sum-scores, and thresholds differentiate between healthy and depressed individuals. The underlying assumption is that all patients diagnosed with MDD have a similar condition, and that sum-scores accurately reflect the severity of this condition. To test this assumption, we examined the number of DSM-5 depression symptom patterns in the “Sequenced Treatment Alternatives to Relieve Depression” [STAR*D] study.
Methods: We investigated the number of unique symptom profiles reported by 3703 depressed outpatients at the beginning of the first treatment stage of STAR*D.
Results: Overall, we identified 1030 unique symptom profiles. Of these profiles, 864 profiles [83.9%] were endorsed by five or fewer subjects, and 501 profiles [48.6%] were endorsed by only one individual. The most common symptom profile exhibited a frequency of only 1.8%. Controlling for overall depression severity did not reduce the amount of observed heterogeneity.
Limitations: Symptoms were dichotomized to construct symptom profiles. Many subjects enrolled in STAR*D reported medical conditions for which prescribed medications may have affected symptom presentation.
Conclusions: The substantial symptom variation among individuals who all qualify for one diagnosis calls into question the status of MDD as a specific consistent syndrome and offers a potential explanation for the difficulty in documenting treatment efficacy. We suggest that the analysis of individual symptoms, their patterns, and their causal associations will provide insights that could not be discovered in studies relying on only sum-scores.
It’s not easy to see what they did from the abstract and the full paper isn’t on-line, but here’s the gist of it. They looked at the intake QIDS-16 screening metric [Quick Inventory of Depressive Symptomatology] and fractionated it into 12 symptoms that correlate with the DSM-IV Major Depressive Disorder criteria.
Each item on the QIDS-16 is scored 0 to 3. They coded each symptom as absent [score 0 or 1] or present [score 2 or 3]. That gave them a way to reclassify the 3703 subjects by their twelve-symptom profiles. Here’s the punchline:
There was a striking heterogeneity of symptom profiles among this large cohort of patients diagnosed as having the DSM-IV Major Depressive Disorder. Striking! To my way of thinking, this simple study is totally brilliant. My only question about this paper is: why hasn’t someone done this before? They only used the intake QIDS-16, so they avoided the mess STAR*D became as it progressed, when they started bouncing from metric to metric. And it showed something that many of us already think, in a simple yet convincing way: Major Depressive Disorder is not a unitary diagnostic entity – far from it.
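The profile-counting step they describe is simple enough to sketch in a few lines of Python. The data here are random stand-ins, not the STAR*D scores, so the printed numbers are illustrative only; what matters is the dichotomize-then-count logic:

```python
from collections import Counter
import random

random.seed(0)

# Random stand-ins for the STAR*D intake data: 3703 subjects, each with
# 12 QIDS-derived symptoms scored 0 to 3 (the real scores are not reproduced here).
N_SUBJECTS = 3703
N_SYMPTOMS = 12
subjects = [[random.randint(0, 3) for _ in range(N_SYMPTOMS)]
            for _ in range(N_SUBJECTS)]

# Dichotomize as in the paper: 0-1 counts as absent, 2-3 as present,
# turning each subject into a 12-bit symptom profile.
profiles = [tuple(int(score >= 2) for score in scores) for scores in subjects]

counts = Counter(profiles)
singletons = sum(1 for c in counts.values() if c == 1)
top_profile, top_freq = counts.most_common(1)[0]

print("unique profiles:", len(counts))
print("profiles endorsed by only one subject:", singletons)
print("most common profile frequency: %.1f%%" % (100.0 * top_freq / N_SUBJECTS))
```

With 12 binary symptoms there are only 4096 possible profiles, yet even that small space is enough for a cohort of 3703 to scatter into hundreds of one-off patterns, which is the paper’s point.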
I would have probably reached a slightly different conclusion – one best articulated by historian Dr. Edward Shorter in his book, Before Prozac:

    "Bottom Line: Major Depression doesn’t exist in Nature. A political process in psychiatry created it…"
    January 26, 2015 | 10:57 AM

    It’s satisfying to see this dataset being put to some constructive use. I guess all “data mining” isn’t bad. I love the way you combine history and quantitative analysis – keep it up.

    Bernard Carroll
    January 26, 2015 | 11:42 AM

    This heterogeneity is an inevitable consequence of the DSM-III decision to use diagnostic criteria in a disjunctive format. Disjunctive means either A or B or C, etc. At my last count there were 19 possible symptoms in DSM-IV for major depressive disorder, with only 5 needed for the diagnosis. You do the math. Once MDD became reified, de facto, then research that, like STAR*D, relied on this disjunctive formulation was doomed to irrelevance. Kiss goodbye to $50 million of scarce research funds.
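Carroll’s “you do the math” can be made concrete. Taking his figures at face value – 19 possible symptoms, any 5 sufficient – the arithmetic looks like this (a sketch only; the exact counts depend entirely on how the compound DSM criteria are decomposed):

```python
from math import comb

N_SYMPTOMS = 19  # Carroll's count of possible DSM-IV symptoms for MDD
REQUIRED = 5     # symptoms needed for the diagnosis

# Distinct minimal symptom pictures: ways to pick exactly 5 of the 19.
minimal = comb(N_SYMPTOMS, REQUIRED)

# Symptom sets that meet or exceed the threshold: 5 or more of the 19.
qualifying = sum(comb(N_SYMPTOMS, k) for k in range(REQUIRED, N_SYMPTOMS + 1))

print(minimal)     # 11628 distinct five-symptom combinations
print(qualifying)  # 519252 qualifying symptom sets in all
```

Over eleven thousand minimal presentations all earning the same label is exactly the disjunctive-format problem he is describing.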

    January 26, 2015 | 4:40 PM

    My colleagues and I have studied this issue as well, albeit not with the STAR*D data (we used one private data set in addition to the National Comorbidity Study):

    It’s possible not only to examine how many profiles (symptom combinations) exist and their frequency, but also to quantify measures of heterogeneity that are comparable across disorders, as well as to examine ‘disjoint’ profiles — cases where two individuals both meet diagnostic criteria yet share no symptoms in common. We examined MDD and PTSD in empirical datasets, and many other disorders on a theoretical level. I invite you to take a look!
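The “disjoint profiles” idea mentioned above reduces to a one-line check on binary symptom vectors. Here is a minimal sketch with made-up profiles (the symptom positions and the 5-of-10 rule are hypothetical, not taken from either dataset):

```python
def share_no_symptoms(p, q):
    """True if two binary symptom profiles have no present symptom in common."""
    return not any(a and b for a, b in zip(p, q))

# Two hypothetical 10-symptom profiles, each with 5 symptoms present;
# both could satisfy a "5 of 10" rule while overlapping on nothing.
patient_a = (1, 1, 1, 1, 1, 0, 0, 0, 0, 0)
patient_b = (0, 0, 0, 0, 0, 1, 1, 1, 1, 1)

print(share_no_symptoms(patient_a, patient_b))  # True
print(share_no_symptoms(patient_a, patient_a))  # False
```

Applied pairwise over a cohort’s unique profiles, this is enough to count how often two people carry the same diagnosis with zero shared symptoms.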

    James O'Brien, M.D.
    January 27, 2015 | 11:38 AM

    I’ve been saying for years that disjunction and a la carte menu Major Depression were responsible for the high placebo effect because many of these cases are self-limiting conditions. Antidepressant studies used to be done on inpatients who were seriously depressed and melancholic.

    It’s hard to trust any antidepressant study after 1980 even in honest hands for this reason.

    If I were doing antidepressant research now, I’d up the criteria to 7/9 or something like that. It would be nice if the patients at least had a few symptoms in common. I’ve seen some studies using the PHQ-9 as the measurement tool, which is just pathetic.

    This has been part of psychiatry’s disastrous development of National Association of Realtors-type marketing to the broad population instead of really focusing on and helping the severely mentally ill.

    I would cringe if there were any discussion of developing an MMPI-3, even though there are some obsolete questions (such as reading a newspaper daily). I’m sure the results would be more watered down and less valid and reliable than past versions.

    January 28, 2015 | 11:28 AM

    Bernard, thanks for discussing the study! To elaborate on three points raised above:
    1) We dichotomized symptoms into absent and present because the symptom scale between 0 and 3 allows for a huge number of possible symptom combinations that may not be meaningfully different from each other.
    2) Instead of the HRSD and the IDS, which were also assessed in STAR*D, we focused on the QIDS because it features exactly and only DSM-5 criterion symptoms (so nobody can say we included irrelevant symptoms in our count).
    3) You mentioned that the PDF is not freely available on the Journal’s website—you can download it directly from this University website.

    Charles, I hadn’t been aware of your paper—thanks for sharing!
