Recursive subsetting to identify patients in the STAR*D: a method to enhance the accuracy of early prediction of treatment outcome and to inform personalized care.
by Kuk AY, Li J, and Rush AJ
Journal of Clinical Psychiatry. 2010 71(11):1502-8.
OBJECTIVE: There are currently no clinically useful assessments that can reliably predict, early in treatment, whether a particular depressed patient will respond to a particular antidepressant. We explored the possibility of using baseline features and early symptom change to predict which patients will and which patients will not respond to treatment.
METHOD: Participants were 2,280 outpatients enrolled in the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study who had complete 16-item Quick Inventory of Depressive Symptomatology-Self-Report (QIDS-SR16) records at baseline, week 2, and week 6 (primary outcome) of treatment with citalopram. Response was defined as a ≥ 50% reduction in QIDS-SR16 score by week 6. By developing a recursive subsetting algorithm, we used both baseline variables and change in QIDS-SR16 scores from baseline to week 2 to predict response/nonresponse to treatment for as many patients as possible with controlled accuracy, while reserving judgment for the rest.
RESULTS: Baseline variables by themselves were not clinically useful predictors, whereas symptom change from baseline to week 2 identified 280 nonresponders, of which 227 were true nonresponders. By subsetting recursively according to both baseline features and symptom change, we were able to identify 505 nonresponders, of which 403 were true nonresponders, to achieve a clinically meaningful negative predictive value of 0.8, which was upheld in cross-validation analyses.
CONCLUSIONS: Recursive subsetting based on baseline features and early symptom change allows predictions of nonresponse that are sufficiently certain for clinicians to spare identified patients from prolonged exposure to ineffective treatment, thereby personalizing depression management and saving time and cost.
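Just so we're clear about what's actually on offer here: as best I can tell from the abstract, the "recursive subsetting algorithm" amounts to hunting, within subgroups defined by baseline features, for the most inclusive week-2 improvement cutoff that still labels non-responders at the pre-set accuracy [a negative predictive value of 0.8], while reserving judgment on everyone else. The little Python sketch below is my own reconstruction for illustration only; the column names, the baseline splits, and the details are invented, not taken from the paper.

```python
import numpy as np
import pandas as pd

TARGET_NPV = 0.8  # the accuracy level quoted in the 2010 abstract


def flag_nonresponders(pct_change, responded, target_npv=TARGET_NPV):
    """Return a boolean Series marking predicted non-responders: the most
    inclusive early-improvement cutoff whose flagged group still has a
    negative predictive value >= target_npv (all False if none qualifies)."""
    best = pd.Series(False, index=pct_change.index)
    for cut in np.sort(pct_change.unique()):
        flagged = pct_change <= cut            # little or no early improvement
        npv = (~responded[flagged]).mean()     # true non-responders among the flagged
        if npv >= target_npv and flagged.sum() > best.sum():
            best = flagged
    return best


def recursive_subset(df, baseline_vars, change_col="pct_change_wk2",
                     outcome_col="responded_wk6", target_npv=TARGET_NPV):
    """Flag what can be flagged in the whole group, then split on each baseline
    feature in turn and recurse, so subgroups that meet the NPV target add to
    the flagged set; anyone never flagged has judgment reserved."""
    flags = flag_nonresponders(df[change_col], df[outcome_col], target_npv)
    if baseline_vars:
        parts = [recursive_subset(sub, baseline_vars[1:], change_col,
                                  outcome_col, target_npv)
                 for _, sub in df.groupby(baseline_vars[0])]
        flags = flags | pd.concat(parts).reindex(df.index).fillna(False)
    return flags


# Hypothetical usage: 'patients' is a DataFrame with one row per subject, the
# percent drop in QIDS-SR16 by week 2, and the week-6 responder flag (>= 50%
# reduction); the baseline splits here are invented for illustration.
# predicted_nonresponse = recursive_subset(patients, ["anxious", "chronic", "sex"])
```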
A practical approach to the early identification of antidepressant medication non-responders.
by Li J, Kuk AY, and Rush AJ
Psychological Medicine. 2011 Jul 25:1-8. EPub
BACKGROUND: The aim of the present study was to determine whether a combination of baseline features and early post-baseline depressive symptom changes has clinical value in predicting non-response in depressed out-patients after 8 weeks of medication treatment.
METHOD: We analysed data from the Combining Medications to Enhance Depression Outcomes study for 447 participants with complete 16-item Quick Inventory of Depressive Symptomatology-Self-Report (QIDS-SR16) ratings at baseline and at treatment weeks 2, 4 and 8. We used a multi-time point, recursive subsetting approach that included baseline features and changes in QIDS-SR16 scores from baseline to weeks 2 and 4, to identify non-responders (<50% reduction in QIDS-SR16) at week 8 with a pre-specified accuracy level.
RESULTS: Pretreatment clinical features alone were not clinically useful predictors of non-response after 8 weeks of treatment. Baseline to week 2 symptom change identified 48 non-responders (of which 36 were true non-responders). This approach gave a clinically meaningful negative predictive value of 0.75. Symptom change from baseline to week 4 identified 79 non-responders (of which 60 were true non-responders), achieving the same accuracy. Symptom change at both weeks 2 and 4 identified 87 participants (almost 20% of the sample) as non-responders with the same accuracy. More participants with chronic than non-chronic index episodes could be accurately identified by week 4.
CONCLUSIONS: Specific baseline clinical features combined with symptom changes by weeks 2-4 can provide clinically actionable results, enhancing the efficiency of care by personalizing the treatment of depression.
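And the 2011 follow-up, as best I can tell, applies the same kind of rule twice, once with the week-2 change and once with the week-4 change against the week-8 outcome, pooling whoever gets flagged at either point, this time at the 0.75 accuracy level quoted above. In terms of my sketch from earlier, again with invented column names:

```python
# Same toy sketch applied at two time points and pooled; 'patients' is the
# hypothetical DataFrame from above, and 'pct_change_wk4' / 'responded_wk8'
# are invented column names for the week-4 change and the week-8 outcome.
flags_wk2 = recursive_subset(patients, ["chronic", "anxious"],
                             change_col="pct_change_wk2",
                             outcome_col="responded_wk8", target_npv=0.75)
flags_wk4 = recursive_subset(patients, ["chronic", "anxious"],
                             change_col="pct_change_wk4",
                             outcome_col="responded_wk8", target_npv=0.75)
predicted_nonresponse = flags_wk2 | flags_wk4
```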
I suppose I could talk about how Rush took off for a new job in Singapore shortly after being fingered by Senator Grassley for Conflicts of Interest, around the time that the Whistleblower suit against J&J/TMAP, a program Rush headed, heated up. Or I could talk about the mammoth publishing machine, unheralded in my experience, that Rush manages, turning out articles month after month [from only one or two studies]. Maybe I could mention how, now somewhat out of favor with the NIMH, he’s partnered up with Brain Resources in Australia to get into the Personalized Medicine business along with some other Grassley alumni. But you already know all of those things.
I think I’d rather mention his perseveration on a single theme. His theme is that one should use antidepressants "vigorously," as mentioned in the last two posts. There I talked about Rush’s hypothesis that if you dosed hard, or kept at it, or changed drugs, or added drugs, you could get more mileage out of the antidepressants. His watchword is "treat to remission." By my reading, none of these massive studies have borne out his prediction. Neither sequencing nor polypharmacy made the results much more impressive than before. But he is undaunted. He now has the Singapore Biostatisticians in the game, trying to predict response failure early [using his well-worn data] so you can change drugs sooner, or add drugs sooner, or something like that. Who cares that neither of those strategies actually panned out in his own studies? Meanwhile, recruitment for iSpot, the Personalized Medicine study with Brain Resources, Rush, Nemeroff, and Schatzberg, continues, aiming to find biomarkers to predict response to drugs in advance, racing against Madhukar Trivedi, his old colleague in Texas, who is doing a similar NIMH-funded study.
It is a sign of the times that this sort of stuff can appear in respectable journals. STAR*D was bad enough as a descriptive, pragmatic study, with outcomes assessed over the telephone. However, to piggyback on STAR*D data with secondary analyses like these, and also with genome-wide genetic marker analyses, is more than problematic – it is plain stupid. Why? Because STAR*D lacked the necessary design to give meaningful data for these secondary analyses. Chiefly, STAR*D lacked placebo controls at any level of the study.
In the 2010 report that you discussed, the overall response rate to citalopram at 6 weeks was just 46%. The default assumption has to be that most of these were placebo responders. Applying Occam’s razor, then, all one can conclude from the data is that patients who don’t look like placebo responders at 2 weeks won’t look like placebo responders at 6 weeks either. One cannot conclude anything about whether the drug was actually contributing a pharmacological benefit.
Notice also the impoverished sampling of baseline variables as possible predictors of outcome – gender, anxiety, chronicity, and comorbid general medical problems. Wouldn’t it be nice to see some genuine phenomenology here, too? The classic study of baseline predictors of antidepressant response appeared 35 years ago [http://www.ncbi.nlm.nih.gov/pubmed/793564]. The extent of dumbing down since those days is remarkable.
I have read multiple peer-reviewed articles showing that SSRIs, except in cases of SEVERE intractable depression, are no better than placebo.