algorithmic psychiatry – the algorithms…

Posted on Monday 30 May 2011

So I set out to see if any of the authors had conflicts of interest with the drugs used in these studies. First I found which ones were available as generics in the years of the studies and colored them green. Then I started with Madhukar Trivedi, looking up conflicts of interest and planning to color them red. When I got through, I’d covered every non-generic drug in all three studies. It was the easiest piece of investigative reporting I’ve ever done [notice that they specify XR/SR for the drugs where available – longest exclusivity rights]:

[table: the drugs from the three studies, color-coded – generics in green, drugs with author conflicts of interest in red]

I felt like I was being lazy, just doing Trivedi’s COI, so I went back and looked at John Rush. Also easy. Same thing [except for fluvoxamine – Solvay]. But the point is made. These were three studies funded by the NIMH. In every case, the two lead researchers [including the Principal Investigator of each] had financial ties with the pharmaceutical companies whose drugs were being studied. One might say, well, since they had ties with all of them, they were impartial [but if you said such a thing, you should wash your mouth out with soap]. The point is that they shouldn’t have financial ties with any of them. One might say, well, everyone was doing it in those years [back to the bar of soap for you]. A conflict of interest is what it is, epidemic or not.

As I look at the algorithms themselves, I wonder where they really came from. They talk about them as if they have a meaning – some kind of science. Yet they are nothing more than the authors’ own thoughts – maybe based on some experience, probably colored by their industry ties, more like tokens pushed around on a game board than steps with a discernible scientific rationale. In a later paper about his computerized algorithms, Trivedi muses:
    "The capacity to base treatment decisions on state-of-the-art knowledge alone appears to be insufficient to motivate behavior change among clinicians."
His justification for "state-of-the-art" must be the earlier TMAP study where the clinics using algorithmic treatment surpassed clinics using treatment as usual. While that was true, a more accurate report would say the responses were less bad – since none of their results were even slightly exciting. The "state-of-the-art" label is more unsubstantiated declaration than anything else. And certainly, the great mess of STAR*D offers no clarification. There was a study from the University of Washington in 1995 using a different algorithm [Agency for Health Care Policy and Research – AHCPR] that also compared treatment as usual to a structured intervention. It was mentioned in the TMAP article references as support for the notion of algorithmic treatment [in that study, they separated Major and Minor depression based on a revised interpretation of the DSM]:
Collaborative Management to Achieve Treatment Guidelines
Impact on Depression in Primary Care

by Wayne Katon, MD; Michael Von Korff, ScD; Elizabeth Lin, MD, MPH; Edward Walker, MD; Greg E. Simon, MD, MPH; Terry Bush, PhD; Patricia Robinson, PhD; and Joan Russo, PhD
JAMA. 1995;273:1026-1031

Abstract
Objective. —To compare the effectiveness of a multifaceted intervention in patients with depression in primary care with the effectiveness of "usual care" by the primary care physician.
Design. —A randomized controlled trial among primary care patients with major depression or minor depression.
Patients. —Over a 12-month period a total of 217 primary care patients who were recognized as depressed by their primary care physicians and were willing to take antidepressant medication were randomized, with 91 patients meeting criteria for major depression and 126 for minor depression.
Interventions. —Intervention patients received increased intensity and frequency of visits over the first 4 to 6 weeks of treatment (visits 1 and 3 with a primary care physician, visits 2 and 4 with a psychiatrist) and continued surveillance of adherence to medication regimens during the continuation and maintenance phases of treatment. Patient education in these visits was supplemented by videotaped and written materials.
Main Outcome Measures. —Primary outcome measures included short-term (30-day) and long-term (90-day) use of antidepressant medication at guideline dosage levels, satisfaction with overall care for depression and antidepressant medication, and reduction in depressive symptoms.

Results. —In patients with major depression, the intervention group had greater adherence than the usual care controls to adequate dosage of antidepressant medication for 90 days or more (75.5% vs 50.0%; P<.01), were more likely to rate the quality of the care they received for depression as good to excellent (93.0% vs 75.0%; P<.03), and were more likely to rate antidepressant medications as helping somewhat to helping a great deal (88.1% vs 63.3%; P<.01). Seventy-four percent of intervention patients with major depression showed 50% or more improvement on the Symptom Checklist-90 Depressive Symptom Scale compared with 43.8% of controls (P<.01), and the intervention patients also demonstrated a significantly greater decrease in depression severity over time compared with controls (P<.004). In patients with minor depression, the intervention group had significantly greater adherence than controls to adequate dosage of antidepressant medication for 90 days or more (79.7% vs 40.3%; P<.001) and more often rated antidepressant medication as helping somewhat to helping a great deal (81.8% vs 61.4%; P<.02). However, no significant differences were found between the intervention and control groups in the percentage of patients who were satisfied with the care they received for depression (94.4% vs 89.3%), in the percentage who experienced a 50% or more decrease in depressive symptoms, or in the decrease of depressive symptoms over time.
Conclusion. —A multifaceted intervention consisting of collaborative management by the primary care physician and a consulting psychiatrist, intensive patient education, and surveillance of continued refills of antidepressant medication improved adherence to antidepressant regimens in patients with major and with minor depression. It improved satisfaction with care and resulted in more favorable depressive outcomes in patients with major, but not minor, depression.
I found that difference underwhelming at best. I did, however, trust the authors of this earlier study. My own takeaway from the algorithmic studies is that they really proved very little, including STAR*D. They were done badly, as I’ve mentioned. But I doubt that a well-done version would’ve helped us much more. I think they’ve proven not only that they’re ethically challenged, but also that they’re barking up the wrong tree [casting NIMH money to the wind in the process]…
  1.  May 30, 2011 | 10:14 AM

    What is striking is Rush & Trivedi’s persistence in advocating for their algorithmic/measurement-based care in the absence of evidence supporting it. They recently published “Measurement-Based Care in Psychiatric Practice: A Policy Framework for Implementation” (http://www.ncbi.nlm.nih.gov/pubmed/21295000), which “provides a policy top-10 list for implementing MBC into standard practice.”

    One way of reading STAR*D is as a continuous infomercial extolling the virtues of their ‘measurement-based’ system of care. This is seen in the title of the first article, “Evaluation of Outcomes With Citalopram for Depression Using Measurement-Based Care in STAR*D: Implications for Clinical Practice,” in each of the ensuing step 2-4 articles, and in the summary article, where they equated ‘high quality care’ with their system, stating: “Finally, high quality of care was delivered (measurement-based care)…Consequently, the outcomes in this report MAY EXCEED those that are presently obtained in daily practice wherein neither symptoms nor side-effects are consistently measured and wherein practitioners vary greatly in the timing and level of dosing.” Such claptrap is common in many of the other 100+ articles.

    Besides STAR*D’s outcomes being awful, their algorithmic MBC system likely made the outcomes WORSE than they would otherwise have been, because responder patients who had not achieved remission were encouraged to enter the “next-step” phase in pursuit of the elusive goal of remission, even though drug intolerance and dropout increased in each succeeding treatment phase (e.g., going from 30% in step-2 to 60% in step-4).

    While Rush & company argue that STAR*D’s follow-up data support their contention that remission (not mere response) should be the goal of treatment (& therefore, if you don’t succeed in your first trial, try another), the actual 12-month survival rates of remitted versus responder patients were not that different (7.1% vs 3.3%).

    The risk of drug intolerance and dropout for responder patients in each succeeding antidepressant trial far outweighed any additional benefit from the ever-decreasing likelihood of obtaining remission (with remission rates falling from 25.1% in step-2 to 10.1% in step-4).
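    To make that arithmetic concrete [a minimal sketch in Python – purely illustrative, using only the rates cited in this comment; the dropout-to-remission ratio is just one way of framing the risk-benefit comparison]:

```python
# Rates cited in this comment (STAR*D steps 2 and 4):
# intolerance/dropout roughly 30% -> 60%, remission 25.1% -> 10.1%.
rates = {
    "step-2": {"dropout": 0.30, "remission": 0.251},
    "step-4": {"dropout": 0.60, "remission": 0.101},
}

for step, r in rates.items():
    # Chances of intolerance/dropout run per chance of remission
    ratio = r["dropout"] / r["remission"]
    print(f"{step}: dropout-to-remission ratio = {ratio:.1f}")

# Output:
# step-2: dropout-to-remission ratio = 1.2
# step-4: dropout-to-remission ratio = 5.9
```

    By that yardstick, a patient entering step-4 runs roughly six chances of intolerance or dropout for every chance of remission – the point of the preceding paragraphs in a single number.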

    Perhaps the fact that Rush holds the copyright to the QIDS, which is key to their measurement-based system of care, has influenced this myopic quest to impose their system for the treatment of depression—now, even coming up with a “policy top-10 list for implementing MBC into standard practice.” It all seems Orwellian to me.
