“but this is ridiculous”…

Posted on Wednesday 31 October 2012

Reading this blog, it might be easy to forget that I’m a psychiatrist, because I’m so often criticizing something about the current state of affairs in psychiatry. But this particular criticism is more fundamental. In the usual blog, I’m looking for scientific misbehavior driven in most instances by misguided alliances. In this case, an official Task Force of the American Psychiatric Association didn’t misrepresent their numbers, they just moved the scale. This isn’t the work of some KOL with conflicts of interest, or a ghost-written article from a hired medical writer. They didn’t omit the downside. They just moved the whole scale to fit their desired outcome. It’s right there in their published articles for the world to see. They downgraded psychiatric diagnosis:

The forest plot on the left gives the DSM-5 Field Trial results as they would have been graded in 1980 [DSM-III] or 1994 [DSM-IV]. On the right is what they said in the articles finally published yesterday.

kappa is the index of inter-rater concordance devised by Dr. Spitzer and his statistician colleagues to be our standard for reliability, our fall-back position in the absence of objective markers for validity. In 1980, the real issue was legitimacy, and Spitzer’s use of kappa carried the day – psychiatric diagnosis was not arbitrary. A kappa of zero means observers agree only as often as chance alone would predict. A kappa of one means there’s complete agreement. The gradations between zero and one were worked out thirty years ago.
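For readers who want to see the statistic itself, Cohen’s kappa for two raters is just observed agreement corrected for chance agreement: κ = (p_o − p_e)/(1 − p_e). The little sketch below is only an illustration of that formula – the actual Field Trials used more elaborate multi-site designs and intraclass kappas – and the rater labels and diagnoses are hypothetical:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same cases:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of cases where the raters concur.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two clinicians diagnosing the same ten patients (made-up data):
a = ["MDD", "MDD", "GAD", "MDD", "GAD", "MDD", "GAD", "MDD", "MDD", "GAD"]
b = ["MDD", "MDD", "GAD", "GAD", "GAD", "MDD", "GAD", "MDD", "GAD", "GAD"]
print(round(cohens_kappa(a, b), 2))  # → 0.62
```

Note what the chance correction does here: the raters agree on 8 of 10 cases (80%), but because chance alone would produce agreement about 48% of the time with these marginals, the kappa is only 0.62 – which is why raw percent agreement was abandoned in favor of kappa in the first place.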

In 1974, Spitzer published a meta-analysis of studies of the DSM-II using the kappa coefficient and found reliability lacking:

He followed that with a study of his revised RDC criteria showing much improved kappa values:

These are the kappa coefficients for some of the resultant DSM-III, ICD-9, and DSM-IV Field Trials:

And this is what we’re being offered from the DSM-5 Task Force:

These values are similar to the values Spitzer rejected almost forty years ago in his 1974 meta-analysis! The reliability shown by kappa was the essential reason for the wide acceptance of the radical revision of our diagnostic system in 1980. Even worse, we can’t even tell whether these low reliability numbers [for MDD! for GAD!] are because of bad criteria or a bad set of Field Trials. The variability of the kappas for several diagnoses across the different centers is unacceptably great [see Dr. Carroll's comment]. Dr. Frances’ reaction was appropriately titled [and scathing as well]:
DSM-5 Field Trials Discredit the American Psychiatric Association
Huffington Post
by Allen Frances
October 31, 2012

He concludes with…

It is sad that the American Journal of Psychiatry agreed to publish this sleight of hand interpretation of the remarkably poor DSM-5 field trial results. Clearly, AJP has been forced into the role of a cheerleading house organ, not an independent scientific journal. AJP is promoting APA product instead of critically evaluating it. Scientific journals all have some inherent conflicts of interest – but this is ridiculous.

The DSM-5 field trial fiasco and its attempted cover-up is more proof (if any were needed) that APA has lost its competence and credibility as custodian for DSM. A diagnostic system that affects so many crucial decisions in our society cannot be left to a small professional association whose work is profit driven, lacking in scientific integrity, and insensitive to public weal.

I started with, "Reading this blog, it might be easy to forget that I’m a psychiatrist." I could’ve added, "and proud of it." But the American Psychiatric Association is making my pride hard to find. They really can’t publish this DSM-5 in its current form and expect support from the medical community, the mental health community at large, or psychiatrists of conscience. It has come down to that. If they proceed on schedule, they’ve lost sight of their whole reason for being…
  1.  
    November 1, 2012 | 6:33 PM

    Perhaps this reflects a realistic acknowledgement that diagnoses don’t matter — all you need is a code to put in the box — and some arbitrary sequence of drug prescription will follow regardless of the diagnosis.

  2.  
    Gail Mizsur, NP
    November 3, 2012 | 1:44 AM

    I used to believe it was essential to be clear about the diagnosis before prescribing, but now I prescribe for symptoms and watch closely to make sure the medication is helpful. The patients are comorbid more often than not, they and their families are inconsistent historians, and some of the bipolars do well on antidepressants alone (contrary to all the recommendations in favor of mood stabilizers). These are complex disorders and even genetic testing doesn’t reveal what will help. It usually just yields some suggestions about which medications should be avoided… so I put a code in the box and hope the medication is not counterfeit or a badly made generic. A psychologist once told me that if a stimulant doesn’t help a child and Zoloft does, that means the diagnosis is NOT ADHD. So the differing interpretations of how to use the DSM and the research data occasionally leave me with the impression that the DSM is handy for students but not so valuable for prescribing. And when it comes to satisfying an insurer in order to have medications approved, the clerk taking the information barely understands and passes a third of it along to a cost-cutter who denies the request anyway.
