a depressometer?…

Posted on Thursday 8 December 2011

Thinking about a magnificent obsession forced me to actually read some things on the DSM-5 website that I’d formerly only scanned:

[quoted here: the DSM-5 proposal for Cross-Cutting Dimensional Assessments]
I don’t like it, but I have to admit that I may have a conflict of interest in this area, because I didn’t like it before I even read it through. My reaction was visceral rather than logical, so it’s suspect. It’s not that I don’t get the point. In medicine, once you make a diagnosis and initiate treatment, you need something to follow to see if the treatment is working. With infections, that might be body temperature or white blood cell count. With hypertension, it’s the blood pressure. As an Internist, I loved that stuff. People used to tease me because my patients’ charts often had graphs that my fellow house officers called <my last name> + "grams." I remain a graphing fool even today – obvious to anyone reading this blog. There’s nothing more satisfying than a chart of some parameter with little arrows pointing to places where something happened. So I ought to be in love with the idea of objectifying subjective psychiatric symptoms. But I’m not – not at all.

I have the same reaction when I look at the graphs of the rating scales used in the clinical trials of drug responses that flourish in the psychiatric journals. They don’t have the same meaning to me as a graph of a patient’s white blood cell count or temperature in response to having an infection treated with an antibiotic. Coulter Counters and Thermometers have no subjectivity. They don’t remember their last readings. They just measure what they measure – right now. And they don’t care what you want to hear. They’re objective. Questionnaires and rating scales are themselves subjective – and remembered.

I’m not knocking rating scales. They’re the best we can do for our clinical trials. There’s no dumb objective machine available to measure depression. But that does not mean that the HAM-D, the MADRS, the QIDS, the BPRS, and the CGI are objective. They’re just as close as we can get under the circumstances. The simplest example I can give is the placebo response in clinical trials of psychiatric drugs. Here’s one now:

[graph: the placebo response curve from a clinical trial of a psychiatric drug]

We’re so used to looking at these graphs that the obvious argument they contain against the Cross-Cutting Dimensional Assessments doesn’t even occur to us. Let’s assume that your depressed patient had a graph of whatever Dimensional Assessment you were using from the DSM-5 that looked like the placebo curve above. You’d say, "He’s better." I looked back over the numerous drug response graphs I’ve posted on this blog and I can’t find a single one where the placebo response is a horizontal line, or even close. We assume that the difference between the placebo curve and the drug curve measures drug effect [but even that assumption is in question when we consider the method used to correct for drop-outs].
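
For the sake of illustration – a minimal sketch with made-up numbers, nothing from any actual trial – here’s roughly how such curves get assembled: each arm’s weekly scale scores are averaged, and one common way of handling drop-outs, last-observation-carried-forward [LOCF], simply patches the missing visits with each subject’s last recorded score:

    # Illustrative only: fabricated weekly depression-scale scores.
    # None marks a missed visit after drop-out.
    placebo = [
        [24, 21, 19, 17, 16, 15],          # completer
        [25, 22, None, None, None, None],  # dropped out after week 1
    ]
    drug = [
        [23, 19, 15, 12, 10, 9],           # completer
        [26, 20, 16, None, None, None],    # dropped out after week 2
    ]

    def locf(scores):
        """Last observation carried forward: fill missed visits
        with the subject's last recorded score."""
        filled, last = [], None
        for s in scores:
            last = s if s is not None else last
            filled.append(last)
        return filled

    def arm_mean(arm):
        """Mean score at each week, after LOCF imputation."""
        filled = [locf(subject) for subject in arm]
        return [sum(week) / len(week) for week in zip(*filled)]

    print("placebo mean by week:", arm_mean(placebo))
    print("drug mean by week:   ", arm_mean(drug))
    # The reported "drug effect" is the gap between these two falling
    # curves - both of which fall, drug or no drug.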

And the only reason we accept these graphs is that they are compiled from the mean values of a whole lot of patients. There’s no way that we could look at the data from a single patient and say anything meaningful. This is what raw data looks like [a particular subset from STAR*D]:

[graph: raw rating-scale scores from a subset of STAR*D patients]

A single patient’s graph would be unintelligible. I could go on and on about this, and am having to fight the impulse to get really boring, so I’ll limit myself to just a snippet. People have all kinds of explanations of the placebo effect, but the phenomenon transcends the explanations. It’s just in the nature of the repetitive use of rating scales and questionnaires – whether rated by a clinician or the patient.
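
If it helps to see why, here’s a small simulation – random numbers standing in for rating-scale scores, and nothing more than that – of the same point: a single patient’s week-to-week trajectory bounces around too much to read, while the average over a few hundred such patients comes out as a smooth, convincing decline:

    import random

    random.seed(1)
    WEEKS, N_PATIENTS = 8, 300

    def one_patient():
        """A fabricated rating-scale trajectory: a gradual drift
        downward buried in week-to-week noise."""
        return [max(0, round(24 - 1.5 * wk + random.gauss(0, 4)))
                for wk in range(WEEKS)]

    patients = [one_patient() for _ in range(N_PATIENTS)]

    single = patients[0]
    group_mean = [round(sum(p[wk] for p in patients) / N_PATIENTS, 1)
                  for wk in range(WEEKS)]

    print("one patient:", single)      # jagged, hard to interpret
    print("group mean: ", group_mean)  # smooth, steady decline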

It’s hard to argue against the morality used to justify the Cross-Cutting Dimensional Assessments – AKA Measurement-Based or Evidence-Based methodologies. It sounds like you’re arguing against a pillar of medicine – "Follow your patients carefully!" – something we were taught early in medical school. It’s a sermon I would preach myself if given a pulpit, just like I’ve preached it many times before. So is my visceral revulsion to Cross-Cutting Dimensional Assessments an argument against following patients? Hardly. Some of it is my suspicion that it’s more fodder for the maddening Managed Care people, but my primary objection is more rational.

Psychometric creation is a complex science. The likelihood that the collective resources of the American Psychiatric Association can create a simple psychometric that, given repeatedly, would objectify improvement and accurately discriminate between drug and placebo effect [or discriminate at all] in a single patient is zero. No one has ever done it before and it’s unlikely to occur in this century. My objection is simple. Scales like the ones they propose tend to become objects in and of themselves, become reified, take on a meaning they don’t remotely deserve. These guys are researchers. Surely they know that themselves. If they could do what they propose, they wouldn’t have to do these huge clinical trials.
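
To put some hypothetical numbers on that – assumptions chosen for illustration, not figures from any particular trial – suppose both arms improve substantially, the drug adds a couple of points of extra improvement on a depression scale, and a single patient’s measured change is noisy by several times that margin. A quick simulation shows how little that separation tells you about any one patient:

    import random

    random.seed(2)

    # Assumed, purely illustrative numbers:
    PLACEBO_IMPROVEMENT = 8.0   # mean scale-point drop on placebo
    DRUG_IMPROVEMENT    = 10.0  # mean scale-point drop on drug
    WITHIN_PATIENT_SD   = 6.0   # noise in one patient's measured change

    def observed_change(mean_improvement):
        """One patient's measured improvement: arm mean plus noise."""
        return random.gauss(mean_improvement, WITHIN_PATIENT_SD)

    # How often does a patient on drug actually show more measured
    # improvement than a patient on placebo?
    trials = 100_000
    wins = sum(
        observed_change(DRUG_IMPROVEMENT) > observed_change(PLACEBO_IMPROVEMENT)
        for _ in range(trials)
    )
    print(f"drug patient looks more improved: {wins / trials:.0%} of the time")
    # With these assumptions, the answer lands not far above a coin flip -
    # no score sheet can tell you, for one patient, whether the improvement
    # came from the drug or would have happened anyway.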

So how does a clinician follow a depressed patient on treatment? It’s an answer neither the DSM-5 Task Force nor the Managed Care people are going to like. Clinical experience is the answer. Depression itself is more subjective than the DSM criteria imply. A clinician’s experience with depressed people is also subjective. "Following patients" in psychiatric disorders is principally subjective. I told you they weren’t going to like the answer. Too bad, because that is the real answer. When I walk into the waiting room, I can often tell whether the patient is better or not before they say a word. And if you ask me how I know that, I have to think for a minute to answer the question. I objectify my subjective impression after the fact. The Managed Care people and hard core researchers don’t like that kind of thing at all because it’s neither evidence-based nor measurement-based in their way of thinking [but it is].

I’m going to forgo a discussion of clinical skills, or intuition, or lattice formation in the construction of mental structures – because we’d go to sleep. I just want to say that anyone who doesn’t know why Cross-Cutting Dimensional Assessments are such a naive and bad idea shouldn’t be on the DSM-5 Task Force in the first place. This is just one of my objections to the Cross-Cutting Dimensional Assessments, but it’s enough for now. There’s ample time to talk about the other objections later…
  1.  
    Angus
    December 8, 2011 | 12:36 PM
     

    I’m up for a discussion about clinical skills, intuition or lattice formation in the construction of mental structures (whatever that might be) if you have the inclination and promise not to go to sleep!

    Thanks for the effort you put into your writings.

  2.  
    Bernard Carroll
    December 8, 2011 | 9:53 PM
     

    Maybe a good term for the cross cutting, dimensional, measurement based assessment instruments is McLuhanesque. The medium has become the message. Rating scales and structured interviews are all well and good, even necessary for research studies, but they are only derivative ways of recording the primary experiences of patients and the primary impressions of clinicians. They cannot substitute for these primary data sources.

    Things go off track when the derivative instruments are taken for the primary data, as in industrial scale clinical research projects like antidepressant registration trials and STAR*D.

  3.  
    Gad Mayer
    December 9, 2011 | 3:37 AM
     

    I’m a psychiatrist from Israel, and an avid reader of your blog. I share your “visceral” aversion to Measurement Based Care in psychiatry. When I try to analyze the reasons for my objection, I think of the many facets of information that are lost in the process, such as the clinical observation in the waiting room that you mention, and of how these questionnaires tend to be falsely considered “objective”.
    I do have a question about your argument (if I understand it correctly) about the impossibility of distinguishing between a placebo response and a treatment response in a specific individual: do you think that seasoned clinicians are better at it than questionnaires? I’m not sure about that.
    Thank you for the important work you are doing,
    Gaddy

  4.  
    aek
    December 9, 2011 | 7:36 AM
     

    Having been on both the giving and receiving ends of treatment (as opposed to care – see Jean Watson’s theory of caring in the nursing literature), I have a couple of reactions:

    a) In mental illness treatment, assessment is overwhelmingly conflated with care. When reading the literature, a large majority of clinical topics start and stop with assessment.

    b) Repeated assessment without effective care can give the assessment topics salience and can lead to acting on them. Suicidality, for example, leading to a suicide attempt, requires three elements according to Joiner’s interpersonal theory of suicide: the perceptions of thwarted belongingness and burdensomeness, and acclimation to performing the act. Repeated assessment of suicidality without treating the causative distressors logically leads to increased acclimation and enhances mental rehearsal.

    Much of what passes as treatment is harmful and perceived as dreadful by patients. Is it any wonder that patients vote with their feet and leave/avoid continuing and repeating their exposure to it?

    When searching for successful outcomes, without fail, patients who do well are those with very broad and deep social supports. Treatment within the healthcare system – non-system, really – barely registers.

    Shouldn’t that be investigated by psychiatrists and other licensed professionals, according to the professional power and control they have been invested with by the social contract?
