as he wished it had been…

Posted on Friday 6 September 2013

Did you ever notice how quickly resolutions melt in the face of temptation? I had resolved that I wouldn’t lie in wait for everything Dr. Lieberman writes so I can attack it as a way to protest the way his APA is running these days. I did pretty well until I got to the paragraphs below, then watched as my resolve evaporated into the ether as I clipped them for this post:
From the President
Psychiatric News
by Jeffrey Lieberman, M.D.
August 29, 2013

… In 1952, the Diagnostic and Statistical Manual of Mental Disorders was created to complement the World Health Organization’s International Classification of Diseases. DSM-I and DSM-II [1968] had no systematic organization and were both relatively short books. Studies in the 1960s and 1970s showed that diagnostic reliability among clinicians using the DSM was poor; the likelihood of any two clinicians agreeing on a particular patient’s diagnosis appeared no better than flipping a coin.

To remedy the problem, APA radically revised its diagnostic approach and published DSM-III in 1980. This edition dramatically expanded the number of disorders and provided detailed descriptions of their defining criteria, with precise lists of symptoms and behaviors. DSM-III revolutionized diagnostic precision in mental health care and provided clinicians with a common language to facilitate communication among themselves and with their patients. In the revisions that will follow DSM-5, which was released in May, we anticipate that psychiatric diagnoses will move beyond descriptive phenomenologic criteria to measures of pathophysiology and etiology and that they will involve laboratory tests to identify lesions and disturbances in specific anatomic structures, neural circuits, or chemical systems, as well as susceptibility genes—the kinds of tests that routinely inform the diagnosis of infection, cardiovascular disease, cancer, and most neurological disorders. The research that occasions these developments may not just enhance our ability to make diagnoses, but may fundamentally redefine the nosology of mental disorders…
He’s still writing about the coming of the DSM-III in 1980 as the moment that led us out of the wilderness. He starts with reliability, perhaps forgetting that the reliability of the DSM-5 reported by the Field Trials was lower than that of the dreaded DSM-II that got revised in the first place [humility…]. He also seems to have forgotten the outcry surrounding the work of the DSM-5 Task Force, which hardly reached a consensus that the DSMs “revolutionized diagnostic precision in mental health care” but rather condemned it almost universally. But then Dr. Lieberman lapses into his standard cheerleader mode, launching his tired sermon about the coming wonders of research and biomarkers, sounding just like Drs. Kupfer and Regier did in 2002 with their Research Agenda for the DSM-V [irrational exuberance…]. If those things weren’t enough, he makes the fundamental error of continuing to speak as if all mental illness has some biological substrate, something no one believes, making himself and the rest of the profession a target for articles like Gary Greenberg’s latest in the New Yorker [The Psychiatric Drug Crisis]. I extracted these paragraphs from an article that proposes to introduce a series preparing us for a changing future:
We should prepare ourselves now for our future and how it will be formed. Toward that end, subsequent articles will preview the health care reform process; how it will affect models of care, professional roles, and methods of reimbursement for psychiatrists; and how we anticipate scientific advances will transform psychiatry’s understanding and treatment of mental disorders.
I think Dr. Lieberman would be better placed to open with a realistic picture of the past as it has actually been, rather than as he wishes it had been…
  1.  
    wiley
    September 6, 2013 | 8:25 PM
     

    I posted this at the end of a previous long thread, but want to put it here, too. It’s about gene variants previously believed to signal a vulnerability to mental illness.

    Kaufman looked first to see whether the kids’ mental health tracked their SERT variants. It did: The kids with the short variant suffered twice as many mental-health problems as those with the long variant. The double whammy of abuse plus short SERT seemed to be too much.

    Then Kaufman laid both the kids’ depression scores and their SERT variants across the kids’ levels of “social support.” In this case, Kaufman narrowly defined social support as contact at least monthly with a trusted adult figure outside the home. Extraordinarily, for the kids who had it, this single, modest, closely defined social connection erased about 80 percent of the combined risk of the short SERT variant and the abuse. It came close to inoculating kids against both an established genetic vulnerability and horrid abuse.

    Or, to phrase it as Cole might, the lack of a reliable connection harmed the kids almost as much as abuse did. Their isolation wielded enough power to raise the question of what’s really most toxic in such situations. Most of the psychiatric literature essentially views bad experiences—extreme stress, abuse, violence—as toxins, and “risk genes” as quasi-immunological weaknesses that let the toxins poison us. And abuse is clearly toxic. Yet if social connection can almost completely protect us against the well-known effects of severe abuse, isn’t the isolation almost as toxic as the beatings and neglect?

    http://www.psmag.com/health/the-social-life-of-genes-64616/

    Also, I’ve just started reading a report from the DOJ called “Defending Childhood” that addresses the effects of violence on children and the mental health problems that come with violence and trauma. From what I’ve read so far, I’m thinking it doesn’t recommend drugs, but rather reliable and well-trained help and support for the victims of abuse and neglect.

    Holder also wants to change the way juvenile offenders are treated, so that support, counseling, and help are the primary response for most of the kids instead of incarceration. He wants to stop the practice of trying kids as adults.

    http://www.justice.gov/defendingchildhood/index.html

    A lot can be done to make drugs irrelevant and relationships primary. It’s possible that it won’t be long before epigeneticists come to the conclusion that in order to be well, we need to live well with each other, and our gene expression will follow. And perhaps we could just gradually leave the Liebermans of the world in the dust.

  2.  
    TinCanRobot
    September 6, 2013 | 8:33 PM
     

    Dr. Lieberman seems as keen to distort the past as the present. What happened to his “That was then, this is now” speech? I wouldn’t feel too bad about reacting to his statements. Perhaps all reasonable people should.

    Judy Stone, an author on a Scientific American blog, responded critically to Dr. Lieberman’s response to ‘AntiPsychiatry,’ even though she did not support that position. She said: “Although masquerading as a reasoned critique, it is anything but that. Rather, the piece is self-promotional and condescending.”
    http://blogs.scientificamerican.com/molecules-to-medicine/2013/05/24/anti-psychiatry-prejudice-a-response-to-dr-lieberman/

    Slightly off topic: I always wondered why the “Kappa” term was used for the DSM field trials. I’d never run across a ‘Kappa’ in the statistical analysis I was familiar with.

    I did some research into it and found that “Cohen’s kappa coefficient is a statistical measure of inter-rater agreement or inter-annotator agreement for qualitative (categorical) items”.
    http://acl.ldc.upenn.edu/J/J96/J96-2004.pdf

    Kappa was invented for this purpose for psychology, psychiatry, and similar subjective applications.

    Statistical Analysis for the scientific method is a means of calculating the Confidence Level that a given sample of observations did not occur by chance. This allows one to calculate, with a specified degree of precision, the likelihood that a sample represents the larger array of observations it was expected to be a part of.
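
    Just to pin down what I mean by that, here’s a toy calculation [made-up numbers, and assuming pure guessing between two equally likely diagnoses would agree half the time]:

        # Toy example of a conventional significance calculation, with made-up numbers:
        # two clinicians agree on 14 of 20 patients, and we assume pure guessing between
        # two equally likely diagnoses would agree 50% of the time. What is the
        # probability of seeing 14 or more agreements just by chance?
        from math import comb

        n, k, p = 20, 14, 0.5
        p_value = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
        print(f"P(>= {k} agreements by chance) = {p_value:.3f}")  # about 0.058

    That kind of calculation tells you how surprising the observed agreement would be if it were pure chance, which is a different question from the one Kappa answers.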

    Why then invent and use Kappa in place of conventional statistical analysis?

    Kappa basically calculated the degree of total agreement that a particular observation occurred at all between two samples. Thus if 10% and 10% of two equally sized populations of patients both received a diagnosis of “Bipolar Disorder,” this would produce a high Kappa value, whereas if it were 10% and 30%, there would be a lower Kappa value.

    Apparently Kappa doesn’t calculate whether the *same* 10% of the same patient group received the same diagnosis from both interviewing psychiatrists. Kappa also has other problems that can affect the end value, and additional calculations are required to create a range of error (not used in the DSM trials as far as I could tell, but they’re behind a paywall).
    http://ptjournal.apta.org/content/85/3/257.long

    Kappa appears to have been invented because psychiatric diagnosis was unable to produce a reasonable Confidence Level that a sample of observations did not occur by chance using conventional Statistical Analysis. At least I can’t see any reason to use Kappa in place of what Statistical Analysis was designed to perform, unless it was solely a ‘statistical trick’ to beat the system.

    What’s your view on that, Mickey? I don’t have access to the full papers for the DSM-5 trials so I can’t really see what they did. I really doubt there’s a Confidence Interval range in there, though.

  3.  
    Mickey
    September 6, 2013 | 9:11 PM
     

    I’m certainly no Kappa expert. As I understand it, Kappa came from a collaboration between Dr. Spitzer [RDC & DSM-III] and the Columbia Statisticians when they were gearing up for the DSM-III revision [see box scores and kappa…, self-evident…, what price, reliability?…]. It compares two raters of the same patient. The Kappa value scales the observed agreement so that chance agreement comes out at 0 and perfect agreement at 1.0. The actual implementation has varied in each revision. As always, the Devil is in the details.
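
    For anyone who wants to see the arithmetic, here’s a minimal sketch in Python of Cohen’s Kappa for two raters diagnosing the same patients [the diagnoses below are made up purely for illustration, not taken from any Field Trial]:

        # Cohen's kappa for two raters scoring the same set of patients.
        # The diagnoses are invented for illustration only.
        from collections import Counter

        rater_a = ["MDD", "MDD", "Bipolar", "MDD", "GAD", "Bipolar", "MDD", "GAD"]
        rater_b = ["MDD", "Bipolar", "Bipolar", "MDD", "MDD", "Bipolar", "MDD", "GAD"]
        n = len(rater_a)

        # Observed agreement: proportion of patients given the same diagnosis by both raters.
        p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

        # Chance agreement: product of each rater's marginal proportions, summed over categories.
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        p_chance = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(rater_a) | set(rater_b))

        kappa = (p_observed - p_chance) / (1 - p_chance)
        print(f"observed = {p_observed:.2f}, chance = {p_chance:.2f}, kappa = {kappa:.2f}")

    With this toy data the raters agree on 6 of 8 patients [0.75], the chance term works out to 0.375, and Kappa comes to 0.60. How that chance term gets estimated is one of the details where the Devil lives.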

    Spitzer, R., Cohen, J., Fleiss, J. L., and Endicott, J. (1967). “Quantification of agreement in psychiatric diagnosis”. Archives of General Psychiatry 17: 83–87.

    Spitzer, Robert L.; Fleiss, Joseph L. (1974). “A re-analysis of the reliability of psychiatric diagnosis”. British Journal of Psychiatry 125(4): 341–347. doi:10.1192/bjp.125.4.341. PMID 4425771.

    The DSM-5 Field Trials: see finally…, “but this is ridiculous”…, and OMG!…. I thought the Field Trial articles were online for a while, but I can’t find them now [my forest plots in those posts have the 95% CI]…

  4.  
    TinCanRobot
    September 7, 2013 | 12:34 AM
     

    Mickey, Thanks for the response,

    There seem to be a number of different types of Kappa, such as ‘weighted kappa’ and ‘kappa maximum,’ which calculate agreement in different ways; for example, kappa maximum takes its theoretical maximum value of 1 “only when both observers distribute codes the same”.

    I see Kappa is for pairwise agreement; I made a mistake reading through a paper, thanks for correcting that. Although if the people working on the DSM basically created and/or adopted it, that makes me even more suspicious. They were under great pressure to create something.

    I had more time to look into this Kappa calculation, and I think I found exactly what I suspected.

    I found a nice fancy publication from a company that makes software for statistical analysis, which uses mathematical proofs to demonstrate the limitations of Kappa calculations for determining ‘reliability’ between several raters. Software simulations were also used. The paper came to the conclusion that I suspected. A better mathematical solution than Kappa was also derived and demonstrated in the conclusion.

    “However, it is the expression used to compute the probability of agreement by chance that is inappropriate.”

    The title of the paper is:
    “Kappa Statistic is not Satisfactory for Assessing the Extent of Agreement Between Raters”
    http://www.agreestat.com/research_papers/kappa_statistic_is_not_satisfactory.pdf
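
    For what it’s worth, here’s a quick numerical illustration [mine, not from that paper] of how the chance-correction term can drag Kappa down when one category dominates, even though the raw agreement is identical:

        # Same raw agreement (90%), very different Kappa, because skewed base rates
        # inflate the chance-agreement term. The 2x2 tables of counts
        # (rows = rater A's category, columns = rater B's) are invented for the example.
        def cohens_kappa(table):
            n = sum(sum(row) for row in table)
            p_observed = sum(table[i][i] for i in range(len(table))) / n
            row_totals = [sum(row) for row in table]
            col_totals = [sum(col) for col in zip(*table)]
            p_chance = sum((r / n) * (c / n) for r, c in zip(row_totals, col_totals))
            return (p_observed - p_chance) / (1 - p_chance)

        balanced = [[45, 5],
                    [5, 45]]   # both diagnoses used about equally
        skewed   = [[85, 5],
                    [5, 5]]    # one diagnosis dominates

        print(cohens_kappa(balanced))  # 0.80
        print(cohens_kappa(skewed))    # roughly 0.44

    Both tables show 90% raw agreement, but the skewed one comes out with roughly half the Kappa of the balanced one, which is the kind of base-rate sensitivity these critiques keep pointing at.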

    There’s not much published on Kappa in general, but what little there is gets cited a ridiculous number of times on both sides (that it is useful, or that it’s not satisfactory).

    I don’t think it would surprise anyone that if the APA couldn’t prove diagnosis was acceptably accurate with conventional statistical analysis, they would have to invent something with worse limitations to get around that. The part that surprises me more is that they had the balls to publish mental disorders that didn’t even reach a remotely reasonable value on their own invented Kappa scale. They treat tens of millions of people with that manual; it’s really unscrupulous to do that in full view of the public! Wow.

    The DSM-5 field trial links on the APA’s websites are broken, but the articles have only moved here (abstracts available, but the full text is behind paywalls):
    http://ajp.psychiatryonline.org/article.aspx?articleid=1387935
    http://ajp.psychiatryonline.org/article.aspx?articleid=1387906
    http://ajp.psychiatryonline.org/article.aspx?articleid=1387907
