mud + mud = mud…

Posted on Friday 6 May 2011

Do we believe that this category, Major Depressive Disorder, is a unity or a collage [those tables are pieced from the National Comorbidity Survey]? I’m obviously in the collage camp – in fact, I can’t imagine otherwise. I guess I’m still arguing with the DSM III [which is now 31 years old]. And when I read those "MDD is a growing public health problem … the World Health Organization predicts …" introductory paragraphs in articles, I hear a sales pitch – little more. These days, when I read those things, I look in the Acknowledgments for a ghost-writer and for who funded the study. But, for this moment at least, pharmaceutical marketing practices aren’t why I’m writing about this.

I’ve kind of wound down from obsessively going back through articles about these drugs. I’ve learned what I set out to learn [or maybe confirm]. There has been a lot of pseudo-science, just like the rest of you have been saying all along. But when I was reading through Dr. Trivedi’s write-up for his upcoming NIMH-funded personalized medicine study, I felt bad for him because he doesn’t stand much of a chance.

In STAR*D:
    All participants provided written informed consent at study entry and at entry into each level and the follow-up phase. Only outpatients seeking medical care were eligible [i.e., symptomatic volunteers were excluded]. Participants met DSM-IV criteria for nonpsychotic major depressive disorder at study entry as determined by clinical diagnosis and confirmed with a DSM-IV checklist by the clinical research coordinator.
In CO-MED:
    Potential participants were screened at each clinical site with each site’s standard procedure [variable across sites]. Most sites used two to nine questions from the Patient Health Questionnaire. Patients identified by screening saw their study clinicians and clinical research coordinator to determine study eligibility following written informed consent. Broad inclusion and minimal exclusion criteria ensured a reasonably representative participant group. The outpatient enrollees were 18–75 years old and met the DSM-IV-TR criteria for either recurrent or chronic [current episode lasting at least 2 years] major depression according to a clinical interview and confirmed with a DSM-IV-based symptom checklist completed by the clinical research coordinator. Eligible participants had to have an index episode lasting at least 2 months and had to score at least 16 on the 17-Item Hamilton Depression Rating Scale [HAM-D].
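Restated as a toy eligibility check [my paraphrase of the quoted CO-MED entry criteria, not study code – the Candidate type and its field names are invented for illustration]:

    from dataclasses import dataclass

    # Toy restatement of the quoted CO-MED entry criteria -- a sketch,
    # not actual study code; the field names are invented.
    @dataclass
    class Candidate:
        age: int
        meets_dsm_iv_mdd: bool        # recurrent/chronic MDD per interview + checklist
        episode_duration_months: int  # length of the current index episode
        ham_d_17: int                 # 17-item Hamilton Depression Rating Scale score

    def eligible(c: Candidate) -> bool:
        return (
            18 <= c.age <= 75
            and c.meets_dsm_iv_mdd
            and c.episode_duration_months >= 2
            and c.ham_d_17 >= 16
        )

    # Example: a 42-year-old with a 6-month index episode and a HAM-D of 19 qualifies.
    print(eligible(Candidate(age=42, meets_dsm_iv_mdd=True,
                             episode_duration_months=6, ham_d_17=19)))  # True

Note how little is required beyond a checklist diagnosis and a single rating-scale cutoff – which is the point of what follows.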
Here are the study locations: STAR*D and CO-MED. In spite of the number of résumés padded by STAR*D, I personally know of no enduring findings, nor do I expect any from CO-MED. I suppose that they designed the studies, turned them over to the Clinical Research Centers, and awaited the data reports:
    Clinical sites were selected on the basis of our prior experience and their performance in the Sequenced Treatment Alternatives to Relieve Depression trial to ensure [1] adequate patient flow, [2] committed administrative support, [3] adequate minority representation, and [4] adequate representation of both primary and psychiatric care sites.
Not mentioned is careful clinical evaluation. The result was two studies with big drop-out rates and missing data they had to bury in the write-ups – and in one study, outcome measures changed along the way. They and their friends have created a Clinical Research Industry that is an imprecise data mill. My guess is that’s one reason why their non-compliance and drop-out rates are so high. Likewise, it doesn’t sound like their intake procedures are very exhaustive [or engaging].

I recently read a chapter Dr. Bernard Carroll wrote, entitled Diagnostic Validity and Laboratory Studies: Rules of the Game, in The Validity of Psychiatric Diagnosis [edited by Lee N. Robins and James Barrett, Raven Press Ltd., New York, 1989]. Here’s one of the rules:
    Rule #2: No biologic measure can in principle do better than the clinical independent variable against which it is compared. This rule is a simple point of logic. It follows that if the "gold standard" clinical diagnosis is flawed, then the interpretation of laboratory measures will be compromised. In other words, laboratory measures can never "outperform" clinical diagnoses; they can only look worse…
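To make Carroll’s rule concrete, here’s a minimal simulation [my sketch, not Carroll’s – the prevalence and accuracy figures are made up]: a biomarker that tracks the true disease state perfectly still scores no better than the flawed clinical diagnosis it is judged against.

    import random

    random.seed(42)

    N = 100_000        # simulated subjects
    PREVALENCE = 0.3   # assumed true rate of the disorder
    DX_ACCURACY = 0.8  # clinical diagnosis matches the truth 80% of the time

    agree = 0
    for _ in range(N):
        truly_ill = random.random() < PREVALENCE
        # The clinical "gold standard" is right 80% of the time, wrong 20%.
        clinical_dx = truly_ill if random.random() < DX_ACCURACY else not truly_ill
        # The biomarker is PERFECT -- it always reflects the true state.
        biomarker = truly_ill
        agree += biomarker == clinical_dx

    print(f"Apparent accuracy of a perfect biomarker: {agree / N:.3f}")
    # Prints ~0.800 -- the biomarker can never look better than the
    # standard it is scored against; any real imperfection in the
    # biomarker only drags the apparent accuracy lower.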
So loose subject evaluation and engagement, plus a loose, heterogeneous diagnostic category, in a study meant to explore objective correlations with laboratory tests – that’s a formula for failure if there ever was one. Any research worth doing is worth personally watching over very carefully. When the data come back mushy, it’s too late to do anything about the fog, and Trivedi’s recent outings have been just that – judging from the obscurity in the write-ups. It seems like his work fits the computer adage, "garbage in, garbage out." Pity…
  1.  
    May 7, 2011 | 12:07 PM

    Couldn’t have said it better myself. (Although that won’t stop me from trying!! haha)

    It is so hard to take psychiatric research seriously. Even if a study is well designed, flawlessly executed, and yields highly significant findings, the premises on which it rests are suspect at best.

    This is a divergence we need to resolve quickly. The more money we pump into grand, sweeping studies like STAR*D and CO-MED without aggressively and accurately defining whom we’re looking at and what we’re looking for, the more psychiatry becomes the laughingstock of medicine.

  2.  
    Rob Purssey
    May 8, 2011 | 10:39 AM

    The problem is the inherent dualism of mentalism/cognitivism, whence a behavioral approach was born. The terribly named “radical” (meaning consistent, or “to the root”) behaviorism was a scientific approach designed to be MONISTIC, and aontological, which sidesteps such problems. Skinner’s REALLY terribly named 1950 paper “Are Theories of Learning Necessary?” speaks to this point, and can be understood NOT by reading it, but by hearing this lecture – http://contextualpsychology.org/canonical_works (it’s the Canonical Works 2 MP3) – then reading it if you happen to be dead keen.

    This comment may seem superfluous/off point, but it really speaks to the issue. A science of pharmacology, AND a functional contextual neuroscience, can arise out of the very different scientific strategy that has been developed over the last 60 or so years – http://contextualpsychology.org/contextualism – but this is inherently slow going, scientifically conservative, and pretty hard to sell to inherently biologically reductive types (e.g., Tom Insel!)

    cheers, and thanks enormously for your wonderful blog, and hard work.

    Rob Purssey
