comparin’…

Posted on Saturday 23 July 2016

Note: I am pleased to announce that Google has moved me from This site may have been hacked to This site is not mobile friendly status. I see this as a real step up in the world and will look into the mobile-friendly issue the next time I’ve got nothing else to do. I do appreciate that Google is now monitoring such things. It’s going to mean a safer, more useful Internet – a good thing…

Figure A1. A continuum of experimental exploration and the corresponding continuum of statistical wonkiness. On the far left of the continuum, researchers find their hypothesis in the data by post hoc theorizing, and the corresponding statistics are “wonky”, dramatically overestimating the evidence for the hypothesis. On the far right of the continuum, researchers preregister their studies such that data collection and data analyses leave no room whatsoever for exploration; the corresponding statistics are “sound” in the sense that they are used for their intended purpose. Much empirical research operates somewhere in between these two extremes, although for any specific study the exact location may be impossible to determine. In the grey area of exploration, data are tortured to some extent, and the corresponding statistics are somewhat wonky.

Sometimes, a good cartoon can say things better than volumes of the written word. This one comes from the explanatory text accompanying a republication and translation of Adrian de Groot‘s classic paper on randomized trials explaining why they must be preregistered [see The Meaning of “Significance” for Different Types of Research, the hope diamond…, Why we need pre-registration, For Preregistration in Fundamental Research]. It’s the central point in the proposal suggested by Dr. Bernard Carroll’s Healthcare Renewal blog [CORRUPTION OF CLINICAL TRIALS REPORTS: A PROPOSAL].

Our journals are filled with articles where the data has been tortured [center above] or the hypothesis has been moved to fit the data [left above]. But RCTs [Randomized Clinical Trials] are intended to test an already defined hypothesis, not make one up. They’re like Galileo’s famous experiment [right above]. Define the conditions in advance, then do the experiment to see if those conditions are met. And the only way to ensure that a trial follows, and is analyzed by, its preregistered conditions is to publicly declare them before the experiment is done and afterwards publicly post the analyses done by the preregistered methods. Anything else ends up in wonky·land. Comes now this…
The COMPare Trials Project.
Ben Goldacre, Henry Drysdale, Anna Powell-Smith, Aaron Dale, Ioan Milosevic, Eirion Slade, Philip Hartley, Cicely Marston, Kamal Mahtani, Carl Heneghan.
We know Ben Goldacre from his books, his TED talk, and his AllTrials campaign, but I think his finest achievement is his current enterprise – The COMPare Project. The idea is simple: compare the a priori Protocol-defined outcome variables with those in a published journal article. I personally discovered the importance of that working on our Paxil Study 329 article, and ever since I have gone looking for protocols on the Clinical Trials that have come along. Sometimes they’re listed on clinicaltrials.gov [and sometimes not]. But even if they’re there, there’s rarely enough to do the proper protocol-defined analysis. I’ve never found a full a priori protocol except in cases where it has been subpoenaed in litigation. So I wondered how Goldacre’s group was getting them. Here’s what he says:
"Our gold standard for finding pre-specified outcomes is a trial protocol that pre-dates trial commencement, as this is where CONSORT states outcomes should be pre-specified. However this is often not available, in which case, as a second best, we get the pre-specified outcomes from the trial registry entry that pre-dates the trial commencement. Where the registry entry has been modified since the trial began, we access the archived versions, and take the pre-specified outcomes from the last registry entry before the trial began." He explains this further in the FAQ on their website…

He has some hard-working medical students and volunteer faculty on his team, and they checked all the trials in five top journals over a four-month period last winter, comparing protocol-defined outcomes against published outcomes. Here’s what they found:

TRIALS CHECKED   TRIALS WERE PERFECT   OUTCOMES NOT REPORTED   NEW OUTCOMES SILENTLY ADDED
      67                   9                    354                        357

And when they wrote letters to the editors about the discrepancies, only 30% of those letters were published. And when authors do respond, they are sometimes combative or defensive [sometimes the COMPare guys get it wrong and apologize, but that’s not often]. I won’t go on and on. The web site is simple and has all the info clearly presented. Epidemic Outcome Switching! Check and Mate!
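
In mechanical terms, the check COMPare performs is a set comparison: outcomes prespecified in the protocol or registry versus outcomes actually reported. Here’s a minimal sketch in Python – the outcome names are hypothetical, and COMPare’s real checks are of course done by hand against protocols and archived registry entries:

    # A toy version of the COMPare comparison [hypothetical outcome names].
    prespecified = {"HAM-D change at week 8", "CGI response", "self-harm events"}
    reported     = {"HAM-D change at week 8", "HAM-D responders", "CGI improvement"}

    not_reported   = prespecified - reported   # prespecified outcomes that vanished
    silently_added = reported - prespecified   # new outcomes with no flag to readers

    print("not reported:  ", sorted(not_reported))
    print("silently added:", sorted(silently_added))

    # And the aggregate arithmetic from the table above:
    trials, perfect = 67, 9
    print(f"trials reported exactly as prespecified: {perfect / trials:.0%}")  # 13%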

They haven’t published yet, but we look forward to what’s coming. I personally think the COMPare Project has landed on the center of the problem. We’ve complained about not being able to see the data itself, but to have this much distortion of the specified outcome variables is even more basic. There is no justification for this level of scientific misconduct…
Mickey @ 12:36 AM

modern times…

Posted on Thursday 21 July 2016

So Charlie Chaplin tried to warn us about technology in his 1936 classic, Modern Times. The French philosophers Jean-Paul Sartre, Simone de Beauvoir, and Maurice Merleau-Ponty even named their journal, Les Temps modernes, after the film. But neither Chaplin nor the Existentialists warned us about this…

While the site was operating normally, this message appeared on a Google search of 1boringoldman.com. I contacted my hosting service as soon as I was informed. They ran a scan and found malware, so they shut the site down on discovery. A quick scan of my computer came up clean, but a full scan found this:

which I removed:

For a fee, my hosting service professionally scrubbed my site on the server and it’s back online now. The four posts marked as suspicious [all ancient history] have been deleted, along with my most recent post. Apparently, the hacker attacked my site through an unused website [8 years obsolete!] that had an old version of WordPress. All WordPress versions are now updated and the unused sites deleted. I’m currently involved in getting Google to recrawl the site [which ain’t easy]. Insofar as I know, I’m squeaky clean, but I’m not going to post anything until Google’s crawler wanders through and the This site may be hacked message disappears.

I would suggest a full scan of your computer. I am profoundly sorry if this bit of modern times affected you too…
Mickey @ 2:40 PM

listening to placebo…

Posted on Thursday 14 July 2016

"Another issue with pediatric clinical trials is that 61 percent of youth respond to the drugs, but 50 percent respond to placebo, compared with 30 percent among adults, making it hard to separate effects."
                Karen Dineen Wagner

Paxil Study 329: HAM-D difference from baseline

The Placebo Effect in Clinical Trials is more than a philosophical matter. It’s a practical consideration, something that is regularly subtracted from the overall effect in presenting the results, yet it’s poorly understood itself. For some, it’s seen as the sampling phenomenon referred to as regression to the mean [see in the land of sometimes[4]…]; for others, it’s suggestibility and expectation; and there’s also a literature exploring a genetic/neurobiological component. But just looking at the graphs, one might well conclude that simply being in a trial is itself a therapy. The placebo effect is certainly a prominent effect in the adolescent antidepressant trials that have frequently graced these pages. Here’s an interesting deconstruction specifically attempting to parse out the role of expectation of benefit in these trials:
by Bret R. Rutherford M.D., Joel R. Sneed Ph.D., Jane M. Tandler H.S., David Rindskopf Ph.D., Bradley S. Peterson M.D. and Steven P. Roose M.D.
Journal of the American Academy of Child & Adolescent Psychiatry. 2011 50[8]:782-95.

Objective: This study investigated how study type, mean patient age, and amount of contact with research staff affected response rates to medication and placebo in acute antidepressant trials for pediatric depression.
Method: Data were extracted from nine open, four active comparator, and 18 placebo-controlled studies of antidepressants for children and adolescents with depressive disorders. A multilevel meta-analysis examined how study characteristics affected response rates to antidepressants and placebo.
Results: The primary finding was a main effect of study type across patient age and contact amount, such that the odds of medication response were greater in open versus placebo-controlled studies [odds ratio 1.87, 95% confidence interval 1.17–2.99, p = .012] and comparator studies [odds ratio 2.01, 95% confidence interval 1.16–3.48, p = .015] but were not significantly different between comparator and placebo-controlled studies. No significant main effects of patient age or amount of contact with research staff were found for analyses of response rates to medication and placebo. Response to placebo in placebo-controlled trials did significantly increase with the amount of therapeutic contact in older patients [age by contact; odds ratio 1.08, 95% confidence interval 1.01–1.15, p = .038].
Conclusions: Although patient expectancy strongly influences response rates to medication and placebo in depressed adults, it appears to be less important in the treatment of children and adolescents with depression. Attempts to limit placebo response and improve the efficiency of antidepressant trials for pediatric depression should focus on other causes of placebo response apart from expectancy.
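
As a refresher on the statistics quoted in the abstract, an odds ratio and its 95% confidence interval can be computed directly from response counts. A minimal sketch in Python with hypothetical counts, loosely echoing Wagner’s 61% drug versus 50% placebo figures [the paper’s own estimates come from a multilevel model, not this simple arithmetic]:

    # Odds ratio and 95% CI from a 2x2 response table [hypothetical counts].
    from math import exp, log, sqrt

    resp_med, nonresp_med = 61, 39   # responders/non-responders on medication
    resp_pbo, nonresp_pbo = 50, 50   # responders/non-responders on placebo

    odds_ratio = (resp_med / nonresp_med) / (resp_pbo / nonresp_pbo)
    se_log_or = sqrt(1/resp_med + 1/nonresp_med + 1/resp_pbo + 1/nonresp_pbo)
    lo = exp(log(odds_ratio) - 1.96 * se_log_or)
    hi = exp(log(odds_ratio) + 1.96 * se_log_or)
    print(f"OR = {odds_ratio:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
    # OR = 1.56, 95% CI [0.89, 2.74] - the interval crosses 1

With these toy numbers the confidence interval crosses 1, which is exactly the "hard to separate effects" problem Wagner describes: an 11-point response difference can still fail to reach significance at realistic sample sizes.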
This topic has been in my mind since our RIAT Project on Paxil Study 329. I had three versions of the Protocol. The 1993 and 1996 a priori versions said:
"Medical Management

Psychotherapy Experience in protocols in depressed adolescents suggest that patients and families expect psychotherapy and are reluctant to consider a course of medication treatment alone, especially where the medication may be solely placebo. On the other hand, a provision of treatment with a psychotherapy which, in retrospect, turned out to be extraordinarily efficacious might well preclude the demonstration of a real, significant, and clinically meaningful medication effect. There are currently several research groups beginning the process of examining different specific psychotherapies [e.g. cognitive behavioral and interpersonal] for adolescent depression. As of yet, however, there are no completed controlled studies which would suggest a "reference" psychotherapy treatment. The present study will include supportive psychotherapy, similar to the management as described by Fawcett in Appendix G.

Weekly visits will consist of a 45 minute visit with the therapist. In unusual circumstances, emergency contact of greater duration is permitted. Duration of all contact including phone calls will be systematically documented."
And the version in the Full Acute Study Report [page 35] for that study said:
"3.5.4 Other Protocol-specified Therapy


Supportive psychotherapy for the depressive episode was provided in a manner similar to that described by Fawcett and coworkers in the Adolescent Depression Collaborative Research Group.[10] Psychotherapy was intended to provide the psychosocial interaction between the patient and the therapist that would permit observation of any pharmacotherapeutic effect of the study medication. Therefore, the sessions were to focus on providing supportive therapy rather than implementing interpersonal or cognitive/behavioral strategies. At each weekly visit, the patient had a 45-minute visit with the therapist. However, emergency contact of greater duration was permitted under unusual circumstances."
As for the Fawcett document explicitly spelling out the supportive psychotherapy mentioned, it’s in Appendix A of the 329 Study Report, freely available online here [Note: see below]. It goes on for 20 plus pages, ending with a list of DOs and DONTs that captures the essence of the recommendations:
DOs:
  • Speak about current experiences.
  • Inquire about feelings.
  • Acknowledge understanding of feelings.
  • Inquire about events not spontaneously reported.
  • Inquire about the patient’s thoughts about solving problems.
  • Express sympathy if misfortunes occur.
  • Communicate shared pleasure at positive events.
  • Congratulate patient for success.
  • Give the patient hope of the likelihood of his/her getting better.
DONTs:
  • Relate current conflict or attitudes to earlier experiences.
  • Draw analogies between behavior toward some people and others, such as toward parents and toward friends, siblings, teachers, etc.
  • Challenge the patient’s view of self or others.
  • Give specific suggestions for resolving conflict.
  • Bring up childhood experiences.
  • Bring to the patient’s attention that his/her behavior appears to represent specific difficulties, such as fear of failure, fear of rejection, etc.
  • Bring to the patient’s attention that his/her behavior has intents that he/she is not acknowledging [i.e., punishing parents, getting revenge on friends, trying to prove he/she is generous, etc.].
If it’s not yet obvious where I’m headed with this, it’s pretty simple. Every subject in Paxil Study 329 had weekly psychotherapy as described above. The DONTs on that list, while intended to rule out psychoanalytic interventions, are actually a list of the kinds of interventions traditionally avoided in therapy with children and adolescents because they are perceived as criticisms. And the DOs are pretty close to the effective alternatives.

Paxil Study 329: HAM-D difference from baseline and Response Rates

So in this example of Paxil Study 329 …

  1. the magnitude of the change can hardly be explained simply by regression to the mean
  2. it is highly unlikely that the change is due simply to expectations or suggestion
  3. this cohort had been depressed for over a year on average before the study, so natural course of disease also seems an unlikely explanation
  4. the supportive psychotherapy of 8 weekly sessions fits traditional parameters for therapy with depressed adolescents
… it’s reasonable to conclude that the supportive psychotherapeutic approach was an effective intervention for these depressed adolescents. And in this instance, I feel solid with our conclusion that Paxil did not have any significant impact on the outcome [see Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence].

So, 1boringoldman, are you suggesting that the placebo effect is in fact an artifact of the clinical trial itself? and the way the study was conducted? and that the magnitude of the effect might correlate with how the adolescents were approached? and that Study 329 was a particularly effective version? Sure, that’s what I think, but I can’t go further than speculate or hypothesize because I don’t have any other protocols. And beyond that, I have no way of knowing what was actually done in the studies themselves. But a supportive, non-confrontational psychotherapeutic stance has been effective with adolescents in my own experience, so of course that’s what I think even though it’s an anecdotal opinion [don’t discount it until you read Fawcett].

I have no question that SSRIs can be helpful in cases of OCD or generalized anxiety disorders in adolescents, though they should be used with caution, close observation, and as a therapeutic trial rather than routinely. And I wouldn’t generalize these comments to adults, where my experience says that these drugs can be helpful with depressive symptoms. But with depressed teens? Let’s talk…


Note: The version of Fawcett et al in the 329 Appendix A has been modified. The original can be read here. It’s actually a fascinating 12-page article from 1987.
Mickey @ 6:00 AM

a remembrance of things past…

Posted on Wednesday 13 July 2016

    early days…

Practicing as an Internist in a Military Hospital back when the world was young, fresh out of a residency training program, I had a period I think of as on-the-job training. I learned that the majority of patients referred to me as an Internist didn’t have medical diseases. They had some kind of symptoms that worried them, but they weren’t from physical diseases. For most, reassurance was all that was required. But for more than I would’ve guessed, it wasn’t enough.

The first thing I learned with those cases was to say what I thought up front before ordering tests to prove the absence of underlying pathology. "I don’t think you have the ___ that you’re worried about, but let’s make sure and do this-and-that". That way, when they returned, I could say "Good news, the tests look okay." If I didn’t do that, they often left thinking, "He couldn’t find it," still worried that it was there, just not found. Those were easy things to learn. What was harder was how to ask, "What’s going on in your life?" to try to look at why they were having symptoms, probably psychosomatic symptoms. The reason it was harder is obvious. Any such questions can be heard as discounting, invalidating, saying "It’s just in your head."

But I learned, and then they started talking. I had no psychotherapy experience, no psycho-anything for that matter. I was hearing the narratives behind the stress and the symptoms, but I had no idea what to say or do with what I was hearing. So I just listened. In those days, neither soldiers nor their families would go to the mental health clinic for fear it would damage the soldier’s career. So all I had to offer was a sympathetic ear. But a lot of them got better, actually figured things out, solved some problems. I was surprised. It wasn’t from anything I did because I wasn’t doing anything, at least not anything I knew I was doing.

    is the fact that they got better the placebo effect?

Usually, the placebo effect is thought of as sort of a mind trick. Give people something they think might get them better, and they do – an expectations cure. Others attribute it to the mathematical regression to the mean [see in the land of sometimes[4]…]. I suspect that both things are factors, but the amount of change frequently seen probably rules out a major role for either or both.
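
For the curious, regression to the mean is easy to demonstrate with a toy simulation: enroll only people whose score crosses a severity threshold, re-measure with no treatment at all, and the average falls. A minimal sketch in Python [invented numbers, purely illustrative]:

    # Regression to the mean with no treatment whatsoever [toy numbers].
    import random

    random.seed(0)
    entry_scores, retest_scores = [], []
    for _ in range(10_000):
        trait = random.gauss(20, 4)            # stable underlying severity
        screen = trait + random.gauss(0, 3)    # noisy screening measurement
        if screen >= 24:                       # enrolled only if scored high
            entry_scores.append(screen)
            retest_scores.append(trait + random.gauss(0, 3))  # untreated retest

    mean = lambda xs: sum(xs) / len(xs)
    print(f"mean at entry:  {mean(entry_scores):.1f}")
    print(f"mean at retest: {mean(retest_scores):.1f}  # lower, with no treatment")

The selected group’s noisy high scores drift back toward their true means on retest – a real effect, but a modest one, which is why I doubt it accounts for the magnitude of change seen in these trials.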

    latter days…

As a retiree, I started volunteering at a local charity clinic several times a month. Each time I’m there, I see about twenty patients in five hours [15 min/each]. As it turns out, many are returns for refills and checking in, so I can take more time with new or difficult patients. I obviously can’t do the psychotherapy I practiced. What I can do is use medications rationally, listen attentively, make comments when I know what to say, and remember the patients when they return.

My presence has been a raging success and people get better, just like the early days. I’m sure I do more now than in my beginning, but not a lot more in the circumstances. So the same question is still appropriate, "is the fact that they get better the placebo effect?"

    or…

I would now say that in those early somewhat clueless days, I was encountering something basic about human psychology that I just didn’t yet understand. Emotions aren’t just internal signals that something’s awry; they are communications to others as well. Mom knows the minute her child walks in from a bad day at school [and won’t settle for "fine" as the answer to "how’d it go at school today?"]. Years ago, Bibring hypothesized that depression signaled helplessness or powerlessness, which remains a useful rule of thumb. And the help sought is often simply having the communication received and the story heard. The act of narrating the situation is itself cathartic. Likewise, in telling such a story, one is forced to put it into language, and there may well be elements you hear for the first time yourself. Clarifying comments, questions, or further understanding are in the range of "gravy" on the meat of human contact [I’d better shut up, or I’ll begin to sound like a psychoanalyst].

The point being that a receptive and benevolent presence is at the root of any and all psychotherapy – therapeutic, in and of itself. All roads start at the same place. So what? Coming soon to a blog near you…
Mickey @ 9:27 PM

this other thing…

Posted on Saturday 9 July 2016

After reading Karen Dineen Wagner’s 2013 deposition [author·ity…], I looked back at that recent PSYCHIATRICNEWS article about her presentation at the May APA meeting [see a blast from the past…]. This time through, there were several things about it that got my attention. So here it is again for review:
PSYCHIATRICNEWS
by Aaron Levin
June 16, 2016

… As for treatment, only two drugs are approved for use in youth by the Food and Drug Administration [FDA]: fluoxetine for ages 8 to 17 and escitalopram for ages 12 to 17, said Wagner. “The youngest age in the clinical trials determines the lower end of the approved age range. So what do you do if an 11-year-old doesn’t respond to fluoxetine?”

One looks at other trials, she said, even if the FDA has not approved the drugs for pediatric use. For instance, one clinical trial found positive results for citalopram in ages 7 to 17, while two pooled trials of sertraline did so for ages 6 to 17. Another issue with pediatric clinical trials is that 61 percent of youth respond to the drugs, but 50 percent respond to placebo, compared with 30 percent among adults, making it hard to separate effects.

When parents express anxiety about using SSRIs and ask for psychotherapy, Wagner explains that cognitive-behavioral therapy [CBT] takes time to work and that a faster response can be obtained by combining an antidepressant with CBT. CBT can teach social skills and problem-solving techniques as well. Wagner counsels patience once an SSRI is prescribed.

A 36-week trial of a drug is too brief, she said. “The clock starts when the child is well, usually around six months. Go for one year and then taper off to observe the effect.” Wagner suggested using an algorithm to plot treatment, beginning with an SSRI, then trying an alternative SSRI if that doesn’t work, then switching to a different class of antidepressants, and finally trying newer drugs.

“We need to become much more systematic in treating depression,” she concluded.
And here’s the drug patent timeline for reference:

• as mentioned, this is TMAP talk [early 90s] all over again …
The reason her presentation felt so "old" was easy to figure out. It was ["old"]. It was the recommendation that started with TMAP/TCMAP 20+ years ago. After the warm glow that initially came with the SSRIs [Listening to Prozac] wore off and it became apparent that the response rate was less than hoped, there were attempts to improve the results. In Texas, John Rush, Madhukar Trivedi, Graham Emslie, Karen Dineen Wagner, and others initiated what I call "algorithmic medicine." The idea was that the tepid response was because the drugs weren’t given right. They advocated pushing the dose, making sure patients stay on the drugs long enough, following an algorithm to sequence different antidepressants or augment in treatment failures, and treating to remission. We mostly know of TMAP because it was the PHARMA-funded scheme to require using in-patent drugs in public clinics that almost bankrupted Texas Medicaid, but it was also a path intended to improve response. This notion led to a series of NIMH studies:

Wagner’s 2016 APA presentation sounded like a direct transcript from those days – in spite of the fact that none of the studies in between really bolstered the idea that this path led much of anywhere. STAR*D claimed to support sequencing, but it had methodologic problems, a huge dropout rate, and was never fully reported [see a thirty-five million dollar misunderstanding…]. She mentioned none of this and, at least in the PSYCHIATRICNEWS report, didn’t present her argument with evidence that her recommendations got results.

• but there’s this other thing…
The things that nagged at me and had me looking back at this report were first, her reference points, and then, what it didn’t say.

She referred to the FDA-approved drugs ["fluoxetine for ages 8 to 17 and escitalopram for ages 12 to 17"] and recommended not feeling limited to just these two – pointing out several clinical trials that were positive ["one clinical trial found positive results for citalopram in ages 7 to 17, while two pooled trials of sertraline did so for ages 6 to 17"]. She’s referring to studies from 2003 and 2004 where she is the first author, one recently heavily criticized [see The citalopram CIT-MD-18 pediatric depression trial: Deconstruction of medical ghostwriting, data mischaracterisation and academic malfeasance] and another also widely questioned [at least she left out Paxil Study 329]. So based on her own ghostwritten, industry-funded studies from twelve+ years ago, she is encouraging full speed ahead with the aggressive TMAP guidance.

But, to be honest, that’s not what actually got my attention. It was what she didn’t mention. The most obvious thing is the Black Box Warning of potential suicidality in children and adolescents, present since 2004. Then she refers to her short-term clinical trials from the early days, and nothing since then! Both trials come up in the deposition, where her involvement was slim to none and she remembers nothing much about them. She discounts the FDA, evokes these two early ?? trials, and doesn’t mention anything else. There’s no "In our clinical experience…" or "In follow-up long-term studies…" or "In the Galveston clinic, we’ve seen…"

• and another thing…
I wasn’t in love with her recommendation to talk skeptical parents out of their concerns about using SSRIs in kids either. Based on the evidence, Wagner should be concerned too. And that just got added to the list of things that made me wonder why she was picked to give this update on the treatment of depression in kids at the APA, how she achieved the status of Key Opinion Leader…
Mickey @ 9:40 PM

author·ity…

Posted on Thursday 7 July 2016

Being deposed in a legal case is one of the more odious experiences of adult life. The lawyer asking the questions has pored over whatever you’re there to talk about, looking for ways to discredit you, then hammers away trying to get you to admit your sins [whether you’ve committed them or not]. No one looks very good in a deposition transcript. But in this post, I’m only going to talk about snippets from a deposition where Karen Dineen Wagner is not being brow-beaten and simply answers in a matter-of-fact manner. While there are plenty of contentious segments in this deposition, they are for another time.

Karen Dineen Wagner is something of an enigma to me. She is a listed author on four heavily contested ghost-written Clinical Trial Reports and was investigated by the US Senate for unreported income from pharmaceutical companies. And yet she’s the head of Child and Adolescent Psychiatry at UTMB; was the longstanding Deputy Chairman of Psychiatry there until recently, when she was promoted to full Chairman; and she is the President-Elect of the American Academy of Child and Adolescent Psychiatry.
A lot of the information in the recent paper by Jon Jureidini, Jay Amsterdam, and Leemon McHenry about the 2004 Celexa Clinical Trial came from the discovery process in a court case [Celexa and Lexapro Marketing and Sales Practices Litigation: Master Docket 09-MD-2067-(NMG)].

In addition to the internal documents from the case mentioned in their article, there was also a deposition of Dr. Wagner. Here are a couple of snippets from that deposition posted on the Drug Industry Document Archive [DIDA]. First, what she had to say when asked about her authorship on the 2001 Paxil Study 329 article:

DEPOSITION OF KAREN DINEEN WAGNER, M.D., Ph.D.
by Michael Baum, Esq., of Baum, Hedlund, Aristei & Goldman
on Tuesday, July 16, 2013, page 61 [pdf page 16]


QUESTION Do you recognize this document we’ve marked as Exhibit 2c?
  ANSWER Yes.
QUESTION And you were a contributing author to this article?
  ANSWER I was an add-on site to this study. I was not involved in the writing of the manuscript.
QUESTION Did you review the manuscript before it was submitted for publication?
  ANSWER I don’t remember reviewing it. I assume — I don’t remember reviewing it. I would guess all of the authors saw a copy before it went out. I just don’t remember.
QUESTION At the time it was issued, did you believe it to be an accurate and truthful statement of the results of the 329 study?
  ANSWER Yes.
QUESTION … just based on your recollection now, do you recall that there’s some data that’s come out or some indication since the publication of what’s now marked as 2c in July of 2001 that indicated that Study 329 was actually a negative result?
  ANSWER There has been some controversy with that. It depends what outcome measure you look at. And again, I was an add-on site to this study.
QUESTION You didn’t have anything to do with the publication of inaccurate information about Study 329, correct?
  ANSWER I didn’t have anything to do with the design of the study, the analyses of the study. I provided subjects, some subjects, for the study. I think on — you can count what my authorship — one, two, three, four, five, six, seven, eight, nine, 10, 11, 12, 13, 14, 15, 16, 17, 18 — I’m the 19th author in this multicenter publication.
QUESTION You were low on the totem pole?
  ANSWER I wasn’t involved in the design or the — this study was designed well before I was involved. The primary outcome measures were chosen before I was involved. All of it was done and they needed some more sites for enrollment.
QUESTION Why is your name on the paper, then?
  ANSWER Because I provided subjects for the study.
QUESTION Okay. Do you know whether or not this paper is listed on your CV?
  ANSWER It would be listed on my CV because my name is on it.

She is being truthful in saying that she was brought into the Paxil Study 329 Clinical Trial late, after they realized that recruitment was flagging and they needed to add more sites [Sites 007-012 were added one year into the study]:

Table 7. Number of Patients Who Were Randomized (R) to Each Treatment Group
and Who Completed (C) the Acute Phase of Treatment at Each Center

Center  Investigator        Site                   Paroxetine   Imipramine   Placebo
                                                    R     C      R     C      R    C

001     Geller              St. Louis, MO           7     3      5     1      6    4
002     Keller              Providence, RI          9     6     11     6     10   10
003     Klein               New York, NY           10     8     14     9     11   10
004     Papatheodorou       Toronto                 5     3      4     1      4    2
005     Ryan                Pittsburgh, PA         16    14     15    10     14   13
006     Strober             Los Angeles, CA         4     3      2     1      3    2

007     Wagner              Galveston, TX           9     5      7     1      5    4
008     Clarke              Portland, OR            5     5      6     5      3    3
009     Emslie              Dallas, TX             17    13     18    13     18    9
010     Weller              Columbus, OH            3     2      2     2      4    3
011     Carlson             Stony Brook, NY         2     1      5     4      4    3
012     Kusumakar/Kutcher   Halifax, Nova Scotia    6     4      6     4      5    3

        Total                                      93    67     95    57     87   66
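
As an aside, Table 7 also shows how differently the arms held onto their subjects; the completion rates fall right out of the totals. A quick tally in Python, using the Table 7 numbers above:

    # Acute-phase completion rates by arm, from the Table 7 totals above.
    randomized = {"paroxetine": 93, "imipramine": 95, "placebo": 87}
    completed  = {"paroxetine": 67, "imipramine": 57, "placebo": 66}

    for arm in randomized:
        rate = completed[arm] / randomized[arm]
        print(f"{arm:10s} {completed[arm]}/{randomized[arm]} completed [{rate:.0%}]")
    # paroxetine 67/93 completed [72%]
    # imipramine 57/95 completed [60%]
    # placebo    66/87 completed [76%]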

But the rest of her claims of passivity and non-involvement are beyond suspect. In 1999, she was the main speaker at a SmithKline Beecham sales roll-out meeting based on this study [if you haven’t seen this report, it’s worth a look]. And by the time of this deposition, the literature was full of Paxil Study 329 references, GSK had settled the suit brought by New York Attorney General Eliot Spitzer, the Black Box Warning was on every antidepressant package insert, Paxil Study 329 was a major part of a $3B GSK settlement, and Wagner had herself been deposed twice about Paxil Study 329:

DEPOSITION OF KAREN DINEEN WAGNER, M.D., Ph.D.
by Michael Baum, Esq., of Baum, Hedlund, Aristei & Goldman
on Tuesday, July 16, 2013, page 12 [pdf page 4]


QUESTION Have you had your deposition taken before?
  ANSWER Yes.
QUESTION Do you know how many times?
  ANSWER I think I’ve been deposed three times.
QUESTION Okay. One was in the Paxil litigation? Does that ring a bell?
  ANSWER Yes.
QUESTION And once in the Celexa/Lexapro securities litigation? Does that sound right?
  ANSWER I don’t know what it was called, but there was a Celexa deposition.
QUESTION Okay. And do you know what the third one was?
  ANSWER I think that there were — I think with Paxil there may have been two, but I’m not certain. It was a long time ago.

So it’s hard to buy her professed innocence and unfamiliarity as authentic. But beyond that, she’s stating outright that she didn’t review the data and may not even have read the paper before it was submitted with her name on the byline – that the only reason she was included was that she was a site director. That is a damning interpretation of the meaning of the word "author." And it goes downhill from there. This next snippet comes from the questioning about the 2004 Citalopram Clinical Trial:

DEPOSITION OF KAREN DINEEN WAGNER, M.D., Ph.D.
by Michael Baum, Esq., of Baum, Hedlund, Aristei & Goldman
on Tuesday, July 16, 2013, page 28 [pdf page 8]


QUESTION Okay. So do you recall whether you had access to patient level data when you were working on this publication?
  ANSWER No. We have access — well, as an individual investigator, you have access to your patients. But the individual patient data from other sites, usually when the data is presented, it’s put together. So I don’t — I just don’t recall if I saw individual — individual data.
QUESTION When you say "put together," does that refer to the pharmaceutical company compiling information and providing it to you?
  ANSWER The data is the property of the pharmaceutical company.
QUESTION And so they collect it and provide some form of summary of it to you?
  ANSWER Correct.
QUESTION And except for the patient level data that you had from your own particular site, you relied upon the information conveyed to you by the pharmaceutical company regarding the other sites. Is that correct?
  ANSWER In multicenter studies, each individual investigator has their own data and then it depends who sponsors the study. This was a Forest-initiated and Forest-sponsored study, so all of the data from the sites go to Forest.
QUESTION Then they compiled it and then did statistical evaluations of it?
  ANSWER Yes.
QUESTION Did you do any of the statistical evaluations yourself?
  ANSWER No.
QUESTION It was essentially provided to you by Forest statisticians?
  ANSWER Correct. I’m not a statistician.

In this Clinical Trial, Wagner was anything but an "add-on"; she was the Principal Investigator [PI], and yet her answers are the same. She only saw the actual data from her own site, and she didn’t do or check the statistical analysis, accepting the evaluation of Forest Laboratories’ statisticians. After all, "The data is the property of the pharmaceutical company." So she neither reviewed the data nor involved herself in the analysis. And, by the way, she wasn’t involved in drafting the paper either. The internal documents discussed in The citalopram CIT-MD-18 pediatric depression trial: Deconstruction of medical ghostwriting, data mischaracterisation and academic malfeasance simply confirm something discovered by the AJP editors after the original paper was published: it was ghost-written. And further, it omitted mentioning that a European Lundbeck trial had been negative and reported significant harms [outlined in my collusion with fiction… and published in the American Journal of Psychiatry. 2009 166:942-943].

And so to return to my initial comment, "Karen Dineen Wagner is something of an enigma to me." She is known as an authority on psychopharmacology in children and adolescents – presenting CME courses at the APA and AACAP meetings and speaking and writing widely on the topic. These early articles appear to have launched her career. Yet in these examples she is, at best, a signifier, in large measure a placeholder for the work of others – others with tainted motives at that. So the story raises some very big questions like "what is an author?" and "what constitutes authority?"
Mickey @ 3:19 PM

the streams III – and a river runs through it…

Posted on Tuesday 5 July 2016

    A review of the streams:
  • a preference for case studies
  • the consequences of the DSM-III unitary depression
  • being questioned about focusing on "old studies"
  • Peter Kramer’s snappy response to Ed Shorter’s review
  • … and then the thread about Lewis, Kiloh, and Parker on Lewis’ cases

Based on his MD Thesis cases, Aubrey Lewis couldn’t confirm the classic separation between neurotic and melancholic depression. I read the Lewis papers [200+ pages], and I couldn’t confirm the diagnostic dichotomy either. So I can’t fault him for his conclusion. But it’s easy to see why his dataset might not be the best choice for evaluating the distinction. These were inpatient cases admitted to the Maudsley Hospital, and they were severely ill – a skewed sample of the general cohort of depressed patients by any read [the hard cases]. So I can fault him for generalizing from this sample, and for sticking to his conclusion throughout his influential career. I think he made a mistake.

In my experience, the clinical differentiation of Melancholia from other depressive states is not so difficult as it sounds in Aubrey Lewis‘ papers. In political sabbatical…, I provided links to articles with diagnostic criteria, including a follow-up article by Gordon Parker [furthering the use of latent-class analysis] and Bernard Carroll‘s article on diagnosis [that also puts it to verse]:

So Dr. Kramer’s Letter to the Editor [Book reviewer promotes a controversial theory about depression] saying that Dr. Shorter was pressing some idiosyncratic idea of his own by mentioning Melancholia,
"Shorter’s critique amounted to little more than a complaint that I have disregarded a controversial theory he favors. He used the review space to give a hobbyhorse a ride."
seems way, way off base to me. Whether by intention or not, Kramer perpetuates the "all depression = brain disease" agenda of the KOLs, whose motives are certainly suspect.

But whether his conclusions turned out to be mistaken or not, Lewis’ articles are absolute classics – particularly the last one [Melancholia: prognostic studies and case-material, 1936] which has his 61 case narratives with almost a full journal page each [56 pages ÷ 61 cases = 0.92 pages/case]. You know they’re classics because two different solid investigators were able to find what they needed to do their sophisticated reanalyses using Lewis’ case reports. It was as if those cases had waited patiently [pun intended] for a half century for the statistical methodology and the computing power to come along that could finally analyze them.

Reading Lewis’ commentary and particularly his patient narratives just reminded me of how much I missed case reports. For physicians, it’s the plight of people taken one at a time that matters. We collect them in groups of shared syndromes to see what we can learn from their similarities, but their differences are important too. And we may not be able to find the most important things just yet, but if the narratives endure, maybe some future reader can see what we missed, or bring some new pair of glasses that finds what we were looking for. And that’s exactly what happened here. So the next time I’m asked why I’ve continued to look at "old" studies, I’m going to say, "If the data’s faithfully recorded and maintained, there’s no such thing as an old study" and start talking about the Lewis cases and their reanalysis fifty years later.
    I had a related experience almost 50 years ago myself. When I started a fellowship in rheumatology in a former life, my boss was in the process of happening onto a significant finding. Using an electron microscope, he was studying the capillary morphology in rheumatologic diseases in biopsies from uninvolved muscle. And in Scleroderma [Progressive Systemic Sclerosis], the technicians couldn’t find any capillaries to look at. So he developed a technique to quantify the capillary density, and sure enough, it was dramatically diminished – no small observation for a disease characterized by generalized scarring. So my first project was the literature review of previous reports of vascular problems in Scleroderma. I doubted I’d find anything. In those pre-computer-search days, it was no small task, and I spent months in the dusty library stacks. The results were amazing. It was everywhere: vascular lesions recorded in the meticulous old reviews of pathology in journals from every specialty, every organ system. They didn’t realize the significance of what they were seeing, but they had written down what they saw for me to find years later. They just didn’t have our electron microscope to move them that final inch. We ended up having to radically trim our reference list to only the most pertinent papers.
And that’s it, a river that runs through it. As much as I’d enjoy another good rant about why the distinction between Melancholia [Depression] and the heterogeneous category depression matters, the real lesson in this story is that it’s the patients and their data that are important. I think Aubrey Lewis did the best he could with the data he had. I wish he’d taken a later look at a more representative cohort, but that doesn’t detract from what he did do. His legacy, in this instance, is the careful case histories he passed on to Kiloh and Parker. And the impressive thing is that even with this skewed cohort, they were able to convincingly detect the two populations that it represents. And that’s probably the river that runs through this whole blog and others like it. It’s the patients, their narratives, and the data generated in its rawest form that endure, not the interpretations of its contemporary custodians. Just imagine how much we could learn if we had this kind of data from all the subjects in the clinical trials of antidepressants that we write about.


[Norman Maclean, author of “A River Runs Through It”]

Mickey @ 9:47 PM

the streams II…

Posted on Tuesday 5 July 2016

Leslie Kiloh [1917-1997] was a British psychiatrist, well known for his studies in the classification of depressive disorders and the EEG. In 1962, he became the chair of psychiatry at the new Medical School at the University of New South Wales in Australia – a position he held for 20 years until retiring. He studied depressive cohorts in England and later Australia much as Lewis had done, but used more modern methods [multivariate factor analysis]. The internal workings of factor analysis are way beyond my skill set. Suffice it to say that they record many variables [~35] and the computers whirr all night. While the methodology is clearly too sophisticated to be "bloggable," the results look impressive to me [in multiple studies]. Here’s a representative picture of the kind of separations shown in his studies.


[colored and separated for clarity]
on review, the circled cases were misdiagnosed

Kiloh’s group concluded from his own multiple studies and those of others:
"The analysis of data obtained in this study supports the view that ‘psychotic’ or endogenous depression is a condition with a restricted range of clinical manifestations, consistent with an imputed genetic or biochemical basis, whilst so-called neurotic depression is a diffuse entity encompassing some of the ways in which the patient utilizes his defence mechanisms to cope with his own neuroticism and concurrent environmental stress."
Then in 1977, he published a paper with a colleague that was, in my opinion, a brilliant stroke. Using Aubrey Lewis’ detailed descriptions of the original 61 cases from his 1928/1929 dataset, Kiloh rated them on a number of factors and employed his more modern analytic techniques – techniques not available in Lewis’ day:

  • Kiloh, LG & Garside, RF. Depression: a multivariate study of Sir Aubrey Lewis’s data on melancholia.
    Australian and New Zealand Journal of Psychiatry [1977]. 11:149-156.
Again, the vicissitudes of multivariate factor analysis are well beyond the capabilities of this mere mortal [namely me], but it identified two distinct clusters that had little overlap, as shown in the graphic representation of their main results table from the reanalysis of Lewis’ data on the right. They had this to say about Aubrey Lewis’ papers:
"As a result of the clarity with which the two papers are written, few difficulties were experienced in the scoring, but inevitably an occasional decision was necessary."
and this about the Lewis analyses:
"One must agree with Lewis that ordinary scrutiny of his data and the comparison of the clinical features with prognosis shows no discernible patterns and that he was correct, using the methods available at that time, in concluding that he could find no evidence of any qualitative distinction between “melancholia and mild neurasthenic depression” in his material. Nevertheless, more refined techniques now available show that this conclusion was incorrect, and in view of the tremendous influence that these papers have exerted, it is felt that this present analysis was worth carrying out."
And finally, their conclusion:
"Thus, it may be concluded that, once again, a multivariate study has indicated that endogenous depression, though varying in severity, is a categorical condition. A patient either suffers from it or does not. Indeed, as has been pointed out by Kiloh et al. [1972], the dichotomy of depressive illness demonstrated in so many published studies is determined solely by the presence or absence of endogenous features. In other words, neurotic depression is defined by the absence of endogenous features, and this has given to this group of cases the illusion of being an entity. Both Paykel [1971] and Kiloh et al. [1972] have put forward evidence indicating that, when scrutinised by multivariate analysis, neurotic depression consists of a number of separable syndromes."
Gordon Parker followed Kiloh as psychiatry chair at the University of New South Wales [1983-2002] where he became a noted researcher in mood disorders and founded the Black Dog Institute, an organization focused on the treatment of depressive illnesses. In 1993, he and a colleague took yet another look at the Aubrey Lewis dataset.
by GORDON PARKER and DUSAN HADZI-PAVLOVIC
Psychological Medicine. 1993 23:859-870.

Sir Aubrey Lewis studied 61 depressives in considerable detail, principally cross-sectionally but also by reviewing progress. He concluded that he could find no qualitative distinctions between the depressed patients and thus established himself as a strong and influential advocate of the unitary view of depression [i.e. that depression varies dimensionally, not categorically]. Subsequently, Kiloh & Garside [proponents of the binary view of two depressive ‘types’] coded the Lewis data and undertook a principal components analysis. They claimed success in distinguishing ‘endogenous’ and ‘neurotic’ depressive types within Lewis’ sample. In this paper we re-analyse the data set using both a latent class categorical approach and mixture analyses. We suggest that any demonstration of sub-types was limited by relative homogeneity of the sample [in that up to 80% had probable or possible psychotic conditions], and by Lewis rating a number of important features [e.g. delusions] dimensionally rather than categorically. Nevertheless, we identify one categorical class [essentially an agitated psychotic depressive condition] and a residual [presumably heterogeneous] class. The presence of those two classes was supported by demonstrating bimodality in composite scores derived from the fourteen differentiating clinical features [and not evident when all clinical features were considered], and formally confirmed by mixture analyses. Membership of the categorical class was determined principally by psychotic features [delusions and hallucinations] and by objectively-judged psychomotor disturbance, and we consider the nature of that ‘class’. Lewis’ data set is unusual [in having self-report and observationally rated data], and historically important in demonstrating that conclusions may depend on the choice of variables examined and analytical approaches.

Dr. Parker kindly responded with this comment about the paper:
"Leslie Kiloh scored all of Lewis’ subjects on depressive symptoms and analysed the data to show separation.  We then used latent class analysis … to analyse the Kiloh-rated data set evaluating Kendell’s argument that a unimodal distribution would argue for depression differing only by severity and that a bimodal distribution would be required to support the division of melancholic and non-melancholic depression.   When we included all of Aubrey Lewis’ items there was a unimodal distribution.  When we limited analyses to central markers of melancholia [essentially psychomotor disturbance] there was a distinct bimodal one – and we made the point that including non-differentiating items [ie DSM MDE in the main, and most endogeneity symptom lists] in any analysis can ‘swamp’ the true capacity of the contrasting  depressive groups to differentiate. I’m preparing a paper at the moment where we analyse data using the SMPI [Sydney Melancholia Prototype Index] and again we demonstrate distinctive bimodality.  I hope this assists."
In following the fate of Aubrey Lewis’ cases, I’ve left out the many other contributions of these three investigators. I’ve also skipped over the work of Dr. Bernard Carroll and the Dexamethasone Suppression Test and the many others who are catalogued on the by-line of this must-read editorial, Issues for DSM-5: Whither Melancholia? The Case for Its Classification as a Distinct Mood Disorder. You could also put melancholia in the search box below to find my 105 blog posts that mention it. So enough prequel, it’s time to see where these streams converge [see the streams III – and a river runs through it…].
Mickey @ 6:00 PM

the streams I…

Posted on Tuesday 5 July 2016

These are some thoughts that seemed related, but it wasn’t immediately clear why. When that happens, I usually just write them down, and in the process, the unifying idea becomes obvious. This time, I made the list of streams, but it was something from the outside that said a river runs through it.

  • For as much as we can learn in medicine from the study of syndromatic groups, my own natural inclination is towards in-depth case studies. I lament that our journals rarely, if ever, publish them anymore. Of course, the best of both worlds is when we get both. I noticed that when we gained access to the data from Paxil Study 329 – even though what we had were various forms – I found myself looking at the marginalia and written comments, trying to see the individual behind the form. The point being that not all data comes in graphs and tables. Sometimes it’s whole people, with stories. That’s hardly a surprising comment from a guy who left a hard science career to practice as a psychotherapist.

  • In my last post I was, once again, going on about what I see as a fundamental flaw in the DSM-III carried forward to today. My persistence in stressing the categorical distinction between Melancholic Depression and depression as a symptom was originally born from my own clinical experience but has come to have other determinants. I suspect that the "lumping" of all depression into Major Depressive Disorder [MDD] was originally a move to get the analysts out of the picture [eliminating depressive neurosis] and a concession to the insurers who have no interest in paying for symptoms arising from "life." But it had pervasive and ominous consequences. The emerging Academic·Industrial alliance co-opted the research being done on Melancholic Depression that was beginning to edge towards a neuro·biologic causality [a promising bio·marker, sleep abnormalities, response to biological treatment, the genetic component, etc] and teleported those findings to all depression [chemical imbalance, Clinical Neuroscience, etc]. It helped sell a lot of unnecessary drugs, hurting some in the process, basically shut down a lot of research along productive lines, and wasted untold research dollars chasing neuro-whatever pipe dreams with the wrongest of cohorts.

  • We spent a lot of time on Paxil Study 329 [2001], and I still hear "Why? It’s so old!" – including from Ben Goldacre in a recent Q&A. It took 12 years, a ton of work by a number of people, and a few lucky breaks to finally get hold of the complete dataset for that study. And it took another two-plus years to analyze and republish it. That effort wasn’t just about Paxil itself. There had been enough major settlements [in part based on that study] to get the point across that the published positive conclusion was simply wrong, based on their own study. But it was vitally important, at least to me, to demonstrate how important Data Access [Data Transparency] is in evaluating the reported results of any clinical trial. Study 329 was a well designed and well implemented clinical trial that was misreported [I think on purpose] using a variety of sleight-of-hand maneuvers in the analysis. While we suspect that there are many such papers, in this one we could document it in action. The same thing is true in the recent deconstruction of the Citalopram CIT-MD-18 study [2004]. They didn’t have the trial data, but they had the internal documents that proved our suspicions that it was actively distorted. So the focus on these older datasets is because they are some of the very few that are available. They’re rare as the proverbial hen’s teeth.

  • Dr. Peter Kramer’s 1993 "Listening to Prozac" was a game-changing best-seller with its enthusiasm for future possibilities for biological interventions – "replacing the couch with the capsule." It’s a different time now, and the antidepressants are under attack. Kramer returns with a new book, "Ordinarily Well," that again makes a case for the antidepressants. In a review/opinion piece in the Washington Post, historian Edward Shorter commented favorably on the book, but made the argument that Kramer saw depression as a unitary condition rather than recognizing the distinction between Depression and depression [Are antidepressants the answer for depression?] and followed it up with a blog post [The Big Divide in US Psychiatry]. His argument was similar to mine a couple of paragraphs up. Peter Kramer responded with a Letter to the Editor [Book reviewer promotes a controversial theory about depression], implying that Shorter was pressing some idiosyncratic idea of his own: "Shorter’s critique amounted to little more than a complaint that I have disregarded a controversial theory he favors. He used the review space to give a hobbyhorse a ride."

In an email thread discussing this last stream, I read a piece of unfamiliar history that seemed to point me towards a river.

Aubrey Lewis [1900-1975] was a major figure in the growth of Psychiatry in England after WWII. He was originally from Adelaide, South Australia, where he took his medical degree and trained in psychiatry. Receiving a Rockefeller Scholarship, he traveled first to the US, working at the Phipps Clinic under Adolf Meyer, then to Germany, and finally ended up in London, taking a position at the Maudsley Hospital in 1928. By 1936, he was appointed Clinical Director at the Maudsley. And when it was reorganized after the War, he was appointed Chairman of Psychiatry in the newly created Institute of Psychiatry at the University of London – a position he held until retiring in 1966.

But it’s not Lewis’ long and influential career that matters here. It’s back where it started, when he began at the Maudsley in 1928. Apparently, a doctorate of medicine at Adelaide required a thesis, and Lewis submitted his in 1931 on 61 patients he’d seen with Affective Disorders in his first two years at the Maudsley. Those cases and his views were later published as three papers in the Journal of Mental Science, which later became the British Journal of Psychiatry [so they’re still in my university’s e-library].

  • Lewis, AJ. Clinical and Historical Survey of Depressive States Based on the Study of 61 Cases.
    M.D. Thesis: University of Adelaide [1931].
  • Lewis, AJ. Melancholia: a historical review.
    Journal of Mental Science [1934]. 80:1-42.
  • Lewis, AJ. Melancholia: a clinical survey of depressive states.
    Journal of Mental Science [1934]. 80:277-378.
  • Lewis, AJ. Melancholia: prognostic studies and case-material.
    Journal of Mental Science [1936]. 82:488-558.
I read the latter three papers feeling the same awe I always feel reading medical classics – the detail, careful clinical observations, logic trains that skip no steps, highlighted areas of confusion, and overall thoroughness – over 200 pages for the three articles. As to the question of reactive versus endogenous [constitutional] depression, an issue as alive in 1930 as it is today, he could find no clear cleavage in his review of his cases. As Lewis said in successive editions of a textbook that spanned his career,
“It is probable that all the tables and classifications in terms of symptoms are nothing more than attempts to distinguish between acute and chronic, mild and severe”.

Aubrey Lewis was an influential figure on both sides of the pond [and the channel], so his opinion became something of a dogma for many of his students and colleagues [but his opinion is not the river that runs through these posts]. There were plenty of equally prestigious people who were on the other side of this argument, investigators who saw Melancholia as distinct and an object of some interest.

This isn’t really the end of this post – or, for that matter, of anything. It’s just a stopping place because this is getting long. Based on his careful examination of his cases, Lewis couldn’t confirm that there were "two depressions." Robert Spitzer reached something of the same conclusion in the lead-up to the DSM-III, though he based his decision on something else – inter-rater reliability – leading him to the unitary category Major Depressive Disorder that has since been the cause of innumerable problems. In some ways, Aubrey Lewis and Spitzer’s interviewers were in the same boat: they couldn’t reliably discriminate a difference, so they concluded that there wasn’t one. But there’s so much more to say, coming in the next post – the streams II…
Mickey @ 3:00 PM

…and sealing-wax…

Posted on Friday 1 July 2016

    “The time has come,” the Walrus said,
    “To talk of many things:
    Of shoes – and ships – and sealing-wax –
    Of cabbages – and kings –
    And why the sea is boiling hot –
    And whether pigs have wings.”
    The Walrus and the Carpenter
    Lewis Carroll, 1832 – 1898

It seems like only yesterday, but it’s been three years since the DSM-5 was released [May 18, 2013]. Unlike Spitzer’s 1980 DSM-III, Kupfer and Regier’s DSM-5 was hardly cause for celebration. I was just glad to have it off the front page. While the debates and harangues had gone on for years, the substantive questions about its basic structure were never really addressed.

So when I saw the abstract below, I perked up. Dr. Kendler was part of the DSM-5 Task Force, and I thought maybe he was finally getting around to examining the flawed MDD category. However, in the days leading up to the DSM-5, Dr. Kendler had written a report explaining the move to drop the Bereavement Exclusion from the DSM criteria for Major Depressive Disorder – one I thought was an ill-advised rationalization at best. My take is cataloged in depressing ergo-mania…
The Phenomenology of Major Depression and the Representativeness and Nature of DSM Criteria
American Journal of Psychiatry
by Kenneth S. Kendler

How should DSM criteria relate to the disorders they are designed to assess? To address this question empirically, the author examines how well DSM-5 symptomatic criteria for major depression capture the descriptions of clinical depression in the post-Kraepelin Western psychiatric tradition as described in textbooks published between 1900 and 1960. Eighteen symptoms and signs of depression were described, 10 of which are covered by the DSM criteria for major depression or melancholia. For two symptoms [mood and cognitive content], DSM criteria are considerably narrower than those described in the textbooks. Five symptoms and signs [changes in volition/motivation, slowing of speech, anxiety, other physical symptoms, and depersonalization/derealization] are not present in the DSM criteria. Compared with the DSM criteria, these authors gave greater emphasis to cognitive, physical, and psychomotor changes, and less to neurovegetative symptoms. These results suggest that important features of major depression are not captured by DSM criteria. This is unproblematic as long as DSM criteria are understood to index rather than constitute psychiatric disorders. However, since DSM-III, our field has moved toward a reification of DSM that implicitly assumes that psychiatric disorders are actually just the DSM criteria. That is, we have taken an index of something for the thing itself. For example, good diagnostic criteria should be succinct and require minimal inference, but some critical clinical phenomena are subtle, difficult to assess, and experienced in widely varying ways. This conceptual error has contributed to the impoverishment of psychopathology and has affected our research, clinical work, and teaching in some undesirable ways.
I found this abstract confusing. I couldn’t quite land on what he was getting at. I thought the idea of surveying the textbooks historically for their take on the symptoms of depression was clever. At the end, I couldn’t agree more that people have reified the DSM Disorders, and that there’s a categorical error in the woodpile with MDD. But his main point eluded me, so I read the whole article. Alas, neither of those things is what Dr. Kendler seems to be getting at here.

What I think he’s saying is that the fully nuanced clinical picture of Major Depressive Disorder may include many of the features he found in his textbook review that are now being overlooked – symptoms like depersonalization or derealization. He tells us that the DSM criteria are simply an index to the Disorder, not the Disorder itself – ergo, there’s nothing wrong with the DSMs per se. The problem is conceptual, and it lies with the users; I guess he thinks we’ve gotten sloppy. And he apparently buys that there is a unitary Disorder behind these various symptoms. That’s hardly the serious look at Major Depressive Disorder I had hoped to read.

The categorical error that I see is that Major Depressive Disorder [MDD] isn’t a category in the first place, and never has been. When I think back to 1980, when the DSM-III first arrived, that’s what I thought on first reading. Gone was the psychiatric disease Depression, AKA Melancholia, AKA Endogenous Depression, AKA Endogenomorphic Depression – the stuff of psychiatry proper that has been with us since the dawn of recorded history. And then there was something else – a heterogeneous collection of patients of widely varying severity who had the affective symptom, depression, but not the illness Depression. The DSM[-I] had used the term depressive reaction. The DSM-II gathered them together under the term depressive neurosis. In the DSM-III, they were all included under MDD [my mistake – there were other categories included, but they never caught on because they felt too made-up].

It’s easy to see the problem. There are really no boundaries on the much larger second group. I personally thought at the time that this second group had problems in their relationships, in their lives, carried over from their pasts, or in the basic structure of their personalities, and that depression was a symptom – a signal to them and to the world that something wasn’t right. Most of my internal medicine colleagues looked into the symptoms that brought these patients to a doctor’s office. Finding no underlying physical cause, they reassured them and sent them on their way. I did that too, and for many, that was enough. But for a sizable number, it wasn’t, and I got interested in working with those cases [I still am]. So in that group, the gamut runs from unhappy in a lousy marriage to structuralized lifelong personality disorder. No clear borders. And that drives actuaries and third-party payers crazy. So I suppose that conflating the Depressions and the severe depressions looked like a solution to many. Major Depressive Disorder then became a way of certifying or validating illness. That’s the only explanation I could come up with at the time for the faux category.

We all know what happened. Instead of tightening a boundary, the DSM-III loosened it [maybe better said, destroyed it]. So the valuable research on Melancholia was stymied by dilution, and the huge number of symptomatically depressed people became fair game for the pharmaceutical industry and the [carpetbagger] KOLs who jumped at the chance to annex them as an eager market for the antidepressants. The scientifically sound ideas that were developing about Melancholic Depression [that it has a biologic basis, that it responds to medications or ECT, that it is a brain disease, that it has a genetic component] flowed into the whole population of people with depressive symptomatology, who were told they had a chemical imbalance or a brain disease. And what flowed out was untold billions of dollars in sales of largely unnecessary and sometimes dangerous medications. And in the mix, millions of dollars of unnecessary and unproductive research depleted the funds available for the research pathways that might have clarified some more focused piece of the puzzle. In the process, progress in the psychological and social treatments also ground to a halt. We all went backwards.

Dr. Kendler’s offering seems to be an attempt to help us be more forgiving about the shortcomings of the DSM and its Major Depressive Disorder category. What I’m able to follow of his idea seems trivial and off the mark – more "cabbages and kings". I found it particularly annoying that this is one of the few commentaries on the DSM-5 from an official, and that it continues to skirt the real problems. In my opinion, as shepherds of the DSM, it would behoove him and his colleagues to take another tack: fix the DSM rather than us, and restore the boundaries to more reasonable scientific domains, so we can pick up where we left off 36 years ago and bring some much-needed clarity to the current sea of misinformation. And as a corollary, the place of medication in the symptomatic treatment of depression can only be clarified by clinical trials conducted without the epidemic corruption of our current era. The fact that the pharmaceutical industry, the clinical trial industry, and the third-party carriers like things just the way they are right now is really not our concern.

So like Lewis Carroll’s Walrus, I think "the time has come … to think of many things." But right now, we’re sure not thinking about the right ones. We’re in the "whether pigs have wings" range. Somewhere on the other side of the morass of commercial interests, ideological differences, guild wars, and a sea of other biases, there’s some system that will deliver the best we’ve got with the resources available. And there’s some path that allows productive researchers to work in an environment that optimizes progress. Neither of those things is likely to happen without a sensible classification system that fits a lot better than the one we have now, without an insistence on honesty in the science we bring to bear on the problems, and without an oversight function that ensures we never allow what’s happened here to repeat.
Mickey @ 5:50 PM