In 1996, John Rush, Madhukar Trivedi, and others at the University of Texas Southwestern developed a set of protocols and algorithms for the treatment of mental illness in the Texas state public medical system, called TMAP [the Texas Medication Algorithm Project]. There are two stories about TMAP. One is about selecting expensive drugs from companies [ones they had ties with] for the huge cohort of patients in the Texas system [Medicare, Medicaid, institutions]. That part will be settled in a court someday [it’s not a pretty story].
A second TMAP story is about the studies they published using their treatment algorithms – the ones that ultimately led up to STAR*D and CO-MED.
The first article of note [Texas Medication Algorithm Project, phase 3 (TMAP-3): rationale and study design], published in 2003, appeared three years after the study was completed, but discussed only the study design, not the results. There were two kinds of clinics in the study: TAU [Treatment As Usual] clinics and ALGO [ALGOrithm-driven disease management] clinics. There were two algorithms [a seven-stage one for non-psychotic MDD cases (~90%) and a different five-stage one for psychotic MDD cases (~10%)]. This pattern of publishing an "about" paper before the results continues to the present. The patients were from poverty-level clinics.
From my perspective, this was a "faith-based" study, in that reading the paper will not allow you to parse out the various treatments used; all you can see are the overall means. They report significance at p < 0.001.
There was a second results paper [One-Year Clinical Outcomes of Depressed Public Sector Outpatients: A Benchmark for Subsequent Studies (first author, John Rush)]. This paper was a reanalysis of the data looking at only the non-psychotic, algorithm-treated patients whose scores put them in the range expected in efficacy trials. That eliminated a lot of patients [350 → 118]. Their predefined endpoint was a 50% drop in the clinician-rated Inventory of Depressive Symptomatology [IDS] scale. Their results were not up to their expectations.
With the predefined response parameter being a drop in score of > 50%, they had 31/118 = 26%. And no matter how they massaged the data [matched pairs, observed cases (OC), last observation carried forward (LOCF)], the best they could do was ~30% – less than reported in efficacy studies for many of the drugs.
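For readers unfamiliar with the analysis jargon, the response-rate arithmetic and the OC-versus-LOCF distinction can be sketched in a few lines. The per-patient scores below are invented for illustration; only the 31/118 ≈ 26% headline figure comes from the paper:

```python
def responded(baseline, score):
    # Predefined endpoint: a drop of at least 50% from the baseline IDS score.
    return score <= 0.5 * baseline

# Hypothetical patients: (baseline IDS, follow-up IDS scores, completed study?)
patients = [
    (40, [35, 22, 18], True),
    (38, [36, 30], False),     # dropout; last observed score 30
    (44, [40, 21], False),     # dropout; last observed score 21
    (36, [34, 20, 17], True),
]

# OC (observed cases): analyze only completers, at their final visit.
oc = [(b, s[-1]) for b, s, done in patients if done]
# LOCF (last observation carried forward): analyze everyone with any
# post-baseline data, treating the last observed score as the endpoint.
locf = [(b, s[-1]) for b, s, done in patients if s]

def rate(rows):
    return sum(responded(b, s) for b, s in rows) / len(rows)

print(f"OC: {rate(oc):.0%}, LOCF: {rate(locf):.0%}")
print(f"Paper's headline figure: 31/118 = {31/118:.0%}")
```

The point of the jargon is simply that the denominator changes: dropping dropouts (OC) or freezing them at their last score (LOCF) shifts the rate, and in this study neither maneuver moved it much past ~30%.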
Remember, this was after using an algorithmic treatment program that had everything but the kitchen sink available.
Much of the discussion in this last paper hypothesized about why the response rate was so low: "Why are the response and remission rates in the present study so poor as compared with the usual efficacy RCT findings? There are several possible explanations including a) adherence, b) sociodemographic features of the patient population, c) poor treatment delivery, d) the high rates of concurrent general medical, Axis II, or substance abuse/dependence disorders, e) inclusion of more chronically depressed subjects, and f) inclusion of patients with treatment-resistant depression."
Just glancing over their algorithm, you see all the strategies we’re used to hearing about: monotherapy, changing monotherapies, augmentation, combinations of antidepressants, adding ECT. And since they were making the drug recommendations, they had access to all the latest drugs. Their point that they were treating a chronic population with longstanding depression is well taken, but even at that, the efficacy wasn’t very exciting in spite of all of their resources. The patterns seen in later studies [IMPACTS, STAR*D, CO-MED] were already established:
- studies without a control group
- multiple articles from a single study
- analysis and reanalysis of the same data
- opacity that obscures the actual primary data
- a tendency to explain away odd design features ‘on the fly’
- an unshakable faith in the algorithmic treatment approach and methodology
These studies were done with financing from the State of Texas, institutional Pharma grants, and some NIMH money. This post is an introduction to the next one, which is about Madhukar Trivedi’s move outside the Texas public system into an NIMH-financed study with an added twist – computerization of their algorithm.
BRIEF HISTORY OF TMAP
Late 1990s-early 2000s: Texas mental health official Dr. Steven Shon travels around the country speaking about TMAP. Sixteen other states eventually adopt the protocol.
2003: The President’s New Freedom Commission on Mental Health [headed by Michael Hogan, who had connections to the drug companies that developed TMAP] recommends TMAP.
2004: Groups like NAMI publicly endorse TMAP. Darrel Regier, director of research at the APA, lauded TMAP and called for increased funding of it. He is Executive Director of the APA’s “non-profit” research group APIRE [the American Psychiatric Institute for Research and Education], whose Scholars in Research Program receives grants from Janssen and Eli Lilly, two of the drug companies that funded development of TMAP.
2004: After questioning drug company payments to state officials, whistleblower Allen Jones was fired from his job as an investigator at the Pennsylvania Inspector General’s office.
2004: Because the drug protocol used by many states originated in Texas, Jones filed a lawsuit in Travis County District Court against Johnson & Johnson and some subsidiaries. The lawsuit was sealed from public view because of protections that whistleblowers such as Jones are granted.
October 2006: Shon was forced by superiors to retire from the Texas health department after officials learned of the findings of a Texas Attorney General investigation into whether drug companies had unduly influenced him.
December 2006: The Texas Attorney General joined Jones’ lawsuit, which was then opened to the public.
The Texas AG’s Office also suspended a similar program tailored for children, called CMAP, because of the allegations that drug companies had influenced researchers.