a new kid on the block…

Posted on Thursday 28 July 2016

National Institutes of Health
July 28, 2016

National Institutes of Health Director Francis S. Collins, M.D., Ph.D., announced today the selection of Joshua A. Gordon, M.D., Ph.D., as director of the National Institute of Mental Health [NIMH]. Dr. Gordon is expected to join NIH in September. “Josh is a visionary psychiatrist and neuroscientist with deep experience in mental health research and practice. He is exceptionally well qualified to lead the NIMH research agenda to improve mental health and treatments for mental illnesses,” said Dr. Collins. “We’re thrilled to have him join the NIH leadership team”…

Dr. Gordon will join NIH from New York City, where he serves as associate professor of Psychiatry at Columbia University Medical Center and research psychiatrist at the New York State Psychiatric Institute. In addition to his research, Dr. Gordon is an associate director of the Columbia University/New York State Psychiatric Institute Adult Psychiatry Residency Program, where he directs the neuroscience curriculum and administers the research programs for residents.

Since joining the Columbia faculty in 2004, Dr. Gordon has focused his research on the analysis of neural activity in mice carrying mutations of relevance to psychiatric disease. The lab studies genetic models of these diseases from an integrative neuroscience perspective and across multiple levels of analysis, focused on understanding how a given disease mutation leads to a particular behavior. To this end, the lab employs a range of neuroscience techniques including neurophysiology, which is the study of activity patterns in the brain, and optogenetics, which is the use of light to control neural activity. His work has direct relevance to schizophrenia, anxiety disorders and depression, and has been funded by grants from NIMH and other research organizations. Dr. Gordon maintains a general psychiatric practice, caring for patients who suffer from the illnesses he studies in his lab.

Dr. Gordon pursued a combined M.D./Ph.D. degree at the University of California, San Francisco. Medical school coursework in psychiatry and neuroscience convinced him that the greatest need, and greatest promise, for biomedical science was in these areas. During his Ph.D. thesis, Dr. Gordon pioneered the methods necessary to study brain plasticity in the mouse visual system. Upon completion of the dual degree program at UCSF, Dr. Gordon went to Columbia University for his psychiatry residency and research fellowship…
I’m about to leave for a vacation, so this week has been filled with that getting-things-done activity that I’d rather have than catching-up-on-things when I come back. Part of that was working extra in the clinic to maintain some continuity of care. Tuesday, I saw a young man [20 y/o] that I’ve seen a few times. He is, on first meeting, a classic case of what has been called Asperger’s – the upper end of the Autism Spectrum. Tuesday, his mother was with him, and though she didn’t say much, there was a pleading look in her eyes. We have no services for him where I live in the mountains, and I had written the Autism Center at Emory in Atlanta in hopes that I could refer him. I had gotten a very helpful response with offers of a variety of just-what-he-needed services – a medication clinic, socialization training, vocational rehabilitation, etc. Everything I had hoped for. On a hunch, instead of talking to the two of them, I scheduled a further meeting with his mother today. She obviously wasn’t saying what she needed to say when he was in the room.

It was a pretty good hunch. This young man had been well cared for by a family devoted to him. He was teased in school, and quit in the ninth grade. He lives at home, and has been unable to find any place where he fits in. They’ve taken him to therapists and doctors throughout his life, but had never been given a diagnosis – in spite of his having been hospitalized after an angry outburst at a kid who was tormenting him. He had been started on courses of most psychiatric drugs along the way, but none ever helped. So I spent about an hour today explaining the diagnosis and the possibilities up ahead to his mother. Her relief was palpable. It was as if twenty years of her tension began to melt. She had harbored the fear that he had "schizophrenia" [a condition she had a very distorted understanding of]. She left with a page of phone numbers to call for appointments, and I predict good things.

Driving home, I was thinking about Dr. Insel, who was involved with that same Autism Center when he was at Emory before going to the NIMH. And I was thinking about why I have been such a persistent critic of his. In person, I had liked him – personable, committed, obviously bright. He was not, nor had he ever been, a clinician. I don’t remember what he said at the reception at that Autism Center so long ago where I met him that made me think it, but I left thinking that his zeal for scientific discovery wasn’t tempered by a focus on clinical reality. I remembered that encounter some years later when I felt the same thing as I developed an interest in his NIMH activities – something I called his future-think – an almost desperate race to hit a home run instead of aiming to get on base.

So it was ironic to walk in and read an email that his replacement had been named. I know nothing of Dr. Gordon. I’m encouraged that he has a practice and works with residents. I hope he’s a clinician who can bring some needed balance to the NIMH. In my view, Medicine is a clinical science that has no intrinsic meaning outside of its focus on the patients we see. In my book, the kinds of services I hope my patient can get at the Autism Center are on a par with the work of the lab guys who are looking to find out how neural networks might have something to do with the Autism he and his family struggle with. We need both, and that’s not what we have had at the NIMH for a very long time.

I hope Dr. Gordon brings a new vision to the NIMH. And I hope that his first act in September is to disentangle the grant award process from all of the pet projects so important to his predecessor [things like Translational this-and-that; the RDoC; etc.] and give our scientists the latitude to compete for funding based on their own creative ideas rather than fitting into boxes preferred by the Director.
Mickey @ 8:06 PM

what’s it going to take?…

Posted on Sunday 24 July 2016

At this summer’s American Psychiatric Association meeting, Karen Dineen Wagner, president-elect of the American Academy of Child and Adolescent Psychiatry, discussed the treatment of depressed children and adolescents, saying:
…only two drugs are approved for use in youth by the Food and Drug Administration [FDA]: fluoxetine for ages 8 to 17 and escitalopram for ages 12 to 17, said Wagner. The youngest age in the clinical trials determines the lower end of the approved age range. So what do you do if an 11-year-old doesn’t respond to fluoxetine? One looks at other trials, she said, even if the FDA has not approved the drugs for pediatric use. For instance, one clinical trial found positive results for citalopram in ages 7 to 17…
She’s referring to a 2004 Citalopram study [in which she was the Principal Investigator and first author]:
A Randomized, Placebo-Controlled Trial of Citalopram for the Treatment of Major Depression in Children and Adolescents
by Wagner KD, Robb AS, Findling RL, Jin J, Gutierrez MM, Heydorn WE.
American Journal of Psychiatry. 2004 161[6]:1079-1083.
Meanwhile, based on their recent deconstruction of that same study, Jon Jureidini, Jay Amsterdam, and Leemon McHenry were waiting for word from the American Journal of Psychiatry [AJP] editor Robert Freedman about their request that this same article be retracted. Dr. Freedman has now responded by email:
RE: Am J Psychiatry 2004; 161:1079–1083
We are not retracting this article.
Robert Freedman MD


Had I read this story in college while I was thinking about what comes next, I wonder if I might have been so disillusioned that I would’ve made other choices. Or later, when I was thinking about switching my career direction from Internal Medicine to Psychiatry, would I have made the change? It involves the president-elect of the main organization for child and adolescent psychiatry and the editor of the official journal of the professional organization for all of psychiatry. What if I knew then that this is not just an isolated incident, but rather something more like business as usual? I can’t answer those questions, but I know for sure it would’ve mattered, just like it matters now…

The truth about this article has emerged more slowly than for its cousin, Paxil Study 329. Shortly after it was published, people noticed that it had a big error in reporting an effect size [see more Wagner et al]. But in 2009 [shortly after this study was an essential ingredient in FDA approval for use in adolescents], in a suit alleging off-label marketing, discovery documents revealed that it was ghostwritten – that the listed authors weren’t chosen until the study was over and the article was already drafted. It also failed to mention another completed study that was negative [see collusion with fiction…]. That suit was settled for $149M. While the AJP published a stern note, they did not retract the article [see the jewel in the crown…].

Now, in the wake of yet another legal suit, more internal documents make it abundantly clear that just about everything about the study was jury-rigged – including the outcome switching mentioned in wonky science…. It’s all detailed in this recent publication by Jureidini et al:
The citalopram CIT-MD-18 pediatric depression trial: Deconstruction of medical ghostwriting, data mischaracterisation and academic malfeasance
by Jureidini, Jon N., Amsterdam, Jay D., and McHenry, Leemon B.
International Journal of Risk & Safety in Medicine, 2016 28[1]:33-43.

OBJECTIVE: Deconstruction of a ghostwritten report of a randomized, double-blind, placebo-controlled efficacy and safety trial of citalopram in depressed children and adolescents conducted in the United States.
METHODS: Approximately 750 documents from the Celexa and Lexapro Marketing and Sales Practices Litigation: Master Docket 09-MD-2067-[NMG] were deconstructed.
RESULTS: The published article contained efficacy and safety data inconsistent with the protocol criteria. Procedural deviations went unreported imparting statistical significance to the primary outcome, and an implausible effect size was claimed; positive post hoc measures were introduced and negative secondary outcomes were not reported; and adverse events were misleadingly analysed. Manuscript drafts were prepared by company employees and outside ghostwriters with academic researchers solicited as ‘authors’.
CONCLUSION: Deconstruction of court documents revealed that protocol-specified outcome measures showed no statistically significant difference between citalopram and placebo. However, the published article concluded that citalopram was safe and significantly more efficacious than placebo for children and adolescents, with possible adverse effects on patient safety.

Their difficulties getting this paper published are discussed in the background notes. And in addition to the internal emails, Dr. Wagner’s deposition in the case makes it clear that she never even personally reviewed the data or the statistical analyses that were published under her name [see author·ity…]. So this article is just an advertisement, deceptively written by a drug company [Forest Laboratories], not a science-based report for physicians and their patients.

Why would Dr. Freedman, editor of the AJP, send this hostile email with only six words ["We are not retracting this article"] instead of taking the mountain of information about this article seriously? Why does Dr. Wagner continue to quote this article as clinically meaningful [even promoting off-label use in children] in the face of it being so thoroughly discredited? What’s it going to take to put an end to this kind of irresponsible, deceitful behavior at the upper levels of psychiatry? Whatever it is, it’s way past due…
Mickey @ 7:34 PM

wonky science…

Posted on Saturday 23 July 2016

Note: I am pleased to announce that Google has moved me from This site may have been hacked to This site is not mobile friendly status. I see this as a real step up in the world, and will look into the mobile-friendly issue the next time I’ve got nothing else to do. I do appreciate that Google is now monitoring such things. It’s going to mean a safer, more useful Internet – a good thing…


Figure A1. A continuum of experimental exploration and the corresponding continuum of statistical wonkiness. On the far left of the continuum, researchers find their hypothesis in the data by post hoc theorizing, and the corresponding statistics are “wonky”, dramatically overestimating the evidence for the hypothesis. On the far right of the continuum, researchers preregister their studies such that data collection and data analyses leave no room whatsoever for exploration; the corresponding statistics are “sound” in the sense that they are used for their intended purpose. Much empirical research operates somewhere in between these two extremes, although for any specific study the exact location may be impossible to determine. In the grey area of exploration, data are tortured to some extent, and the corresponding statistics are somewhat wonky.

Sometimes, a good cartoon can say things better than volumes of written words. This one comes from the explanatory text accompanying a republication and translation of Adrian de Groot‘s classic paper explaining why randomized trials must be preregistered [see The Meaning of “Significance” for Different Types of Research, the hope diamond…, Why we need pre-registration, For Preregistration in Fundamental Research]. It’s the central point in the proposal suggested by Dr. Bernard Carroll on the Healthcare Renewal blog [CORRUPTION OF CLINICAL TRIALS REPORTS: A PROPOSAL].

Our journals are filled with articles where the data have been tortured [center above] or the hypothesis has been retrofitted to the data [left above]. But RCTs [Randomized Clinical Trials] are intended to test an already-defined hypothesis, not make one up. They’re like Galileo’s famous experiment [right above]: define the conditions in advance, then do the experiment to see if those conditions are met. And the only way to ensure that the trial follows and is analyzed by those preregistered conditions is to publicly declare them before the experiment is done, and afterwards to publicly post the analyses done by the preregistered methods. Anything else ends up in wonky·land. Comes now this…
The COMPare Trials Project.
Ben Goldacre, Henry Drysdale, Anna Powell-Smith, Aaron Dale, Ioan Milosevic, Eirion Slade, Philip Hartley, Cicely Marston, Kamal Mahtani, Carl Heneghan.
We know Ben Goldacre from his books, his TED talk, and his AllTrials campaign, but I think his finest achievement is his current enterprise – The COMPare Project. The idea is simple: compare the a priori, protocol-defined outcome variables with those in the published journal article. I personally discovered the importance of that working on our Paxil Study 329 article, and ever since I have gone looking for protocols on the Clinical Trials that have come along. Sometimes they’re listed on clinicaltrials.gov [and sometimes not]. But even if they’re there, there’s rarely enough to do the proper protocol-defined analysis. I’ve never found a full a priori protocol except in cases where it has been subpoenaed in litigation. So I wondered how Goldacre’s group was getting them. Here’s what he says:
"Our gold standard for finding pre-specified outcomes is a trial protocol that pre-dates trial commencement, as this is where CONSORT states outcomes should be pre-specified. However this is often not available, in which case, as a second best, we get the pre-specified outcomes from the trial registry entry that pre-dates the trial commencement. Where the registry entry has been modified since the trial began, we access the archived versions, and take the pre-specified outcomes from the last registry entry before the trial began." He explains this further in the FAQ on their website…

He has some hard-working medical students and volunteer faculty on his team, and they checked all the trials in five top journals over a four-month period last winter, comparing protocol-defined outcomes against published outcomes. Here’s what they found:

TRIALS     TRIALS WERE   OUTCOMES NOT   NEW OUTCOMES
CHECKED    PERFECT       REPORTED       SILENTLY ADDED
   67          9             354             357
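The comparison itself is mechanical enough to sketch in code. Here’s a minimal Python sketch of the idea – a hypothetical toy with invented outcome names, not COMPare’s actual tooling – tallying the same two quantities counted above: pre-specified outcomes that go unreported, and new outcomes that appear silently.

```python
# Toy sketch of a COMPare-style check [hypothetical data, not their tooling]:
# compare the outcomes pre-specified in the protocol/registry entry
# against the outcomes reported in the published article.

prespecified = {"HAM-D change at 8 weeks", "CGI-I response", "dropout rate"}
published = {"HAM-D change at 8 weeks", "CGI-I response at 4 weeks", "remission"}

not_reported = prespecified - published      # pre-specified, but never reported
silently_added = published - prespecified    # reported, but never pre-specified

print("pre-specified outcomes not reported:", sorted(not_reported))
print("new outcomes silently added:", sorted(silently_added))
```

In practice, the hard part is the one Goldacre describes above – finding a protocol or registry entry that genuinely pre-dates the trial – not the comparison itself.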

And when they wrote to the editors about the discrepancies, only 30% of their letters were published. And when authors do respond, they are sometimes combative or defensive [sometimes the COMPare guys get it wrong and apologize, but that’s not often]. I won’t go on and on. The web site is simple and has all the info clearly presented. Epidemic Outcome Switching! Check and Mate!

They haven’t published yet, but we look forward to what’s coming. I personally think the COMPare Project has landed on the central problem. We’ve complained about not being able to see the data itself, but this much distortion of the specified outcome variables is even more basic. There is no justification for this level of scientific misconduct…
Mickey @ 12:36 AM

modern times…

Posted on Thursday 21 July 2016

So Charlie Chaplin tried to warn us about technology in his 1936 classic, Modern Times. The French philosophers Jean-Paul Sartre, Simone de Beauvoir, and Maurice Merleau-Ponty even named their journal, Les Temps modernes, after the film. But neither Chaplin nor the Existentialists warned us about this…

While the site was operating normally, Google searches for 1boringoldman.com were returning a This site may be hacked warning. I contacted my hosting service as soon as I was informed. They ran a scan, found malware, and shut the site down on discovery. A quick scan of my computer came up clean, but a full scan found an infected file, which I removed.

For a fee, my hosting service professionally scrubbed my site on the server and it’s back on-line now. The four posts marked as suspicious have been deleted [all ancient history], along with my most recent post. Apparently, the hacker attacked my site through an unused website [8 years obsolete!] that had an old version of WordPress. All WordPress versions are now updated and the unused sites deleted. I’m currently involved in getting Google to recrawl the site [which ain’t easy]. Insofar as I know, I’m squeaky clean, but I’m not going to post anything until Google’s crawler wanders through and the This site may be hacked message disappears.

I would suggest a full scan of your computer. I am profoundly sorry if this bit of modern times affected you too…
Mickey @ 2:40 PM

listening to placebo…

Posted on Thursday 14 July 2016

"Another issue with pediatric clinical trials is that 61 percent of youth respond to the drugs, but 50 percent respond to placebo, compared with 30 percent among adults, making it hard to separate effects."
                Karen Dineen Wagner
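A back-of-the-envelope calculation shows what those numbers mean in practice. Using the response rates Wagner quotes, the number needed to treat [NNT] – how many patients have to get the drug for one to benefit beyond placebo – comes out to about nine in youth:

NNT = 1 ÷ [0.61 − 0.50] ≈ 9

If adults responded to the drugs at that same 61% rate [an assumption on my part – she only quotes the adult placebo rate], the 30% adult placebo response would give an NNT of about three. That shrinking drug-placebo gap in kids is what "hard to separate effects" means in practice.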

[Figure: Paxil Study 329 – HAM-D difference from baseline]

The Placebo Effect in Clinical Trials is more than a philosophical matter. It’s a practical consideration, something that is regularly subtracted from the overall effect in presenting the results, yet it’s poorly understood itself. For some, it’s seen as the sampling phenomenon referred to as the regression to the mean [see in the land of sometimes[4]…]; for others, it’s suggestibility and expectation; and there’s also a literature exploring a genetic/neurobiological component. But just looking at the graphs, one might well conclude that simply being in a trial is itself a therapy. The placebo effect is certainly prominent in the adolescent antidepressant trials that have frequently graced these pages. Here’s an interesting deconstruction specifically attempting to parse out the role of expectation of benefit in these trials:
by Bret R. Rutherford M.D., Joel R. Sneed Ph.D., Jane M. Tandler H.S., David Rindskopf Ph.D., Bradley S. Peterson M.D. and Steven P. Roose M.D.
Journal of the American Academy of Child & Adolescent Psychiatry. 2011 50[8]:782-95.

Objective: This study investigated how study type, mean patient age, and amount of contact with research staff affected response rates to medication and placebo in acute antidepressant trials for pediatric depression.
Method: Data were extracted from nine open, four active comparator, and 18 placebo-controlled studies of antidepressants for children and adolescents with depressive disorders. A multilevel meta-analysis examined how study characteristics affected response rates to antidepressants and placebo.
Results: The primary finding was a main effect of study type across patient age and contact amount, such that the odds of medication response were greater in open versus placebo-controlled studies [odds ratio 1.87, 95% confidence interval 1.17–2.99, p = .012] and comparator studies [odds ratio 2.01, 95% confidence interval 1.16–3.48, p = .015] but were not significantly different between comparator and placebo-controlled studies. No significant main effects of patient age or amount of contact with research staff were found for analyses of response rates to medication and placebo. Response to placebo in placebo-controlled trials did significantly increase with the amount of therapeutic contact in older patients [age by contact; odds ratio 1.08, 95% confidence interval 1.01–1.15, p = .038].
Conclusions: Although patient expectancy strongly influences response rates to medication and placebo in depressed adults, it appears to be less important in the treatment of children and adolescents with depression. Attempts to limit placebo response and improve the efficiency of antidepressant trials for pediatric depression should focus on other causes of placebo response apart from expectancy.
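To make those odds ratios concrete: an odds ratio of 1.87 means that if 50% of patients respond in placebo-controlled studies [odds of 1.0 – roughly the placebo rate Wagner quotes], the corresponding open-study response is about 65% [odds of 1.87, and 1.87 ÷ 2.87 ≈ 0.65]. Knowing you might be on placebo, it seems, dampens the response.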
This topic has been in my mind since our RIAT Project on Paxil Study 329. I had three versions of the Protocol. The 1993 and 1996 a priori versions said:
"Medical Management

Psychotherapy

Experience in protocols in depressed adolescents suggest that patients and families expect psychotherapy and are reluctant to consider a course of medication treatment alone, especially where the medication may be solely placebo. On the other hand, a provision of treatment with a psychotherapy which, in retrospect, turned out to be extraordinarily efficacious might well preclude the demonstration of a real, significant, and clinically meaningful medication effect. There are currently several research groups beginning the process of examining different specific psychotherapies [e.g. cognitive behavioral and interpersonal] for adolescent depression. As of yet, however, there are no completed controlled studies which would suggest a "reference" psychotherapy treatment. The present study will include supportive psychotherapy, similar to the management as described by Fawcett in Appendix G.

Weekly visits will consist of a 45 minute visit with the therapist. In unusual circumstances, emergency contact of greater duration is permitted. Duration of all contact including phone calls will be systematically documented."
And the version in the Full Acute Study Report [page 35] for that study said:
"3.5.4 Other Protocol-specified Therapy


Supportive psychotherapy for the depressive episode was provided in a manner similar to that described by Fawcett and coworkers in the Adolescent Depression Collaborative Research Group.[10] Psychotherapy was intended to provide the psychosocial interaction between the patient and the therapist that would permit observation of any pharmacotherapeutic effect of the study medication. Therefore, the sessions were to focus on providing supportive therapy rather than implementing interpersonal or cognitive/behavioral strategies. At each weekly visit, the patient had a 45-minute visit with the therapist. However, emergency contact of greater duration was permitted under unusual circumstances."
As for the Fawcett document explicitly spelling out the supportive psychotherapy mentioned, it’s in Appendix A of the 329 Study Report, freely available online here [Note: see below]. It goes on for 20-plus pages, ending with a list of DOs and DON’Ts that captures the essence of the recommendations:
DOs:
  • Speak about current experiences.
  • Inquire about feelings.
  • Acknowledge understanding of feelings.
  • Inquire about events not spontaneously reported.
  • Inquire about the patient’s thoughts about solving problems.
  • Express sympathy if misfortunes occur.
  • Communicate shared pleasure at positive events.
  • Congratulate the patient for success.
  • Give the patient hope of the likelihood of his/her getting better.
DON’Ts:
  • Relate current conflicts or attitudes to earlier experiences.
  • Draw analogies between behavior toward some people and others, such as toward parents and toward friends, siblings, teachers, etc.
  • Challenge the patient’s view of self or others.
  • Give specific suggestions for resolving conflict.
  • Bring up childhood experiences.
  • Bring to the patient’s attention that his/her behavior appears to represent specific difficulties, such as fear of failure, fear of rejection, etc.
  • Bring to the patient’s attention that his/her behavior has intents that he/she is not acknowledging [i.e., punishing parents, getting revenge on friends, trying to prove he/she is generous, etc.].
If it’s not yet obvious where I’m headed with this, it’s pretty simple. Every subject in Paxil Study 329 had weekly psychotherapy as described above. The DON’Ts on that list, while intended to rule out psychoanalytic interventions, are actually a list of the kinds of interventions traditionally avoided in therapy with children and adolescents because they are perceived as criticisms. And the DOs are pretty close to the effective alternatives.

[Figure: Paxil Study 329 – HAM-D difference from baseline and response rates]

So in this example of Paxil Study 329 …
  1. the magnitude of the change can hardly be explained simply by regression to the mean
  2. it is highly unlikely that the change is due simply to expectations or suggestion
  3. this cohort had been depressed for over a year on average before the study, so natural course of disease also seems an unlikely explanation
  4. the supportive psychotherapy of 8 weekly sessions fits traditional parameters for therapy with depressed adolescents
… it’s reasonable to conclude that the supportive psychotherapeutic approach was an effective intervention for these depressed adolescents. And in this instance, I feel solid with our conclusion that the Paxil did not have any significant impact on the outcome [see Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence].

So, 1boringoldman, are you suggesting that the placebo effect is in fact an artifact of the clinical trial itself? and of the way the study was conducted? and that the magnitude of the effect might correlate with how the adolescents were approached? and that Study 329 was a particularly effective version? Sure, that’s what I think, but I can’t go further than speculate or hypothesize because I don’t have any other protocols. And beyond that, I have no way of knowing what was actually done in the studies themselves. But a supportive, non-confrontational psychotherapeutic stance has been effective with adolescents in my own experience, so of course that’s what I think even though it’s an anecdotal opinion [don’t discount it until you read Fawcett].

I have no question that SSRIs can be helpful in cases of OCD or generalized anxiety disorders in adolescents, though they should be used with caution, close observation, and as a therapeutic trial rather than routinely. And I wouldn’t generalize these comments to adults, where my experience says that these drugs can be helpful with depressive symptoms. But with depressed teens? Let’s talk…


Note: The version of Fawcett et al in the 329 Appendix A has been modified. The original can be read here. It’s actually a fascinating 12 page article from 1987.
Mickey @ 6:00 AM

a remembrance of things past…

Posted on Wednesday 13 July 2016

    early days…

Practicing as an Internist in a Military Hospital back when the world was young, fresh out of a residency training program, I had a period I think of as on-the-job training. I learned that the majority of patients referred to me as an Internist didn’t have medical diseases. They had some kind of symptoms that worried them, but they weren’t from physical diseases. For most, reassurance was all that was required. But for more than I would’ve guessed, it wasn’t enough.

The first thing I learned with those cases was to say what I thought up front before ordering tests to prove the absence of underlying pathology. "I don’t think you have the ___ that you’re worried about, but let’s make sure and do this-and-that". That way, when they returned, I could say "Good news, the tests look okay." If I didn’t do that, they often left feeling, "He couldn’t find it, but worried that it was there but just not found." Those were easy things to learn. What was harder was how to ask, "What’s going on in your life?" to try to look at why they were having symptoms, probably psychosomatic symptoms. The reason it was harder is obvious. Any such questions can be heard as discounting, invalidating, saying "It’s just in your head."

But I learned, and then they started talking. I had no psychotherapy experience – no psycho-anything, for that matter. I was hearing the narratives behind the stress and the symptoms, but I had no idea what to say or do with what I was hearing. So I just listened. In those days, neither soldiers nor their families would go to the mental health clinic, because of a fear it would damage the soldier’s career. So all I had to offer was a sympathetic ear. But a lot of them got better – actually figured things out, solved some problems. I was surprised. It wasn’t from anything I did, because I wasn’t doing anything, at least not anything I knew I was doing.

    is the fact that they got better the placebo effect?

Usually, the placebo effect is thought of as sort of a mind trick. Give people something they think might get them better, and they do – an expectations cure. Others attribute it to the mathematical regression to the mean [see in the land of sometimes[4]…]. I suspect that both are factors, but the amount of change frequently seen probably rules out a major role for either or both.
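Regression to the mean, at least, is easy to demonstrate. Below is a minimal simulation sketch – invented, loosely HAM-D-like numbers, not data from any real trial. Patients are enrolled because they score above a severity threshold on a noisy measurement; on re-measurement, the group mean falls with no treatment at all:

```python
import numpy as np

rng = np.random.default_rng(0)

# A stable "true" severity for each patient, plus independent
# measurement/state noise at each visit [all numbers invented].
true_severity = rng.normal(loc=18, scale=3, size=100_000)
screening = true_severity + rng.normal(scale=4, size=true_severity.size)
followup = true_severity + rng.normal(scale=4, size=true_severity.size)

# Trials enroll the patients who score above an entry threshold at screening.
enrolled = screening >= 20

print(f"screening mean [enrolled]: {screening[enrolled].mean():.1f}")
print(f"follow-up mean [enrolled]: {followup[enrolled].mean():.1f}")
# The follow-up mean is several points lower with no treatment at all:
# scores that were partly "high by chance" at screening drift back
# toward each patient's true level - regression to the mean.
```

The point of the sketch is the size of the artifact: a few points on the scale, not the large changes seen in the trial graphs – which is why I doubt it carries the whole placebo effect.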

    latter days…

As a retiree, I started volunteering at a local charity clinic several times a month. Each time I’m there, I see about twenty patients in five hours [15 min/each]. As it turns out, many are returns for refills and checking in, so I can take more time with new or difficult patients. I obviously can’t do the psychotherapy I once practiced. What I can do is use medications rationally, listen attentively, make comments when I know what to say, and remember the patients when they return.

My presence has been a raging success and people get better, just like the early days. I’m sure I do more now than in my beginning, but not a lot more in the circumstances. So the same question is still appropriate, "is the fact that they get better the placebo effect?"

    or…

I would now say that in those early somewhat clueless days, I was encountering something basic about human psychology that I just didn’t yet understand. Emotions aren’t just internal signals that something’s awry, they are communications to others as well.  Mom knows the minute her child walks in from a bad day at school [and won’t settle for "fine" as the answer to "how’d it go at school today?"]. Years ago, Bibring hypothesized that depression signaled helplessness or powerlessness which remains a useful rule of thumb. And the help sought is often simply having the communication received and the story heard. The act of narrating the situation itself is cathartic. Likewise, in telling such a story, one is forced to put it into language and there may well be elements you hear for the first time yourself. Clarifying comments, questions, or further understanding are in the range of "gravy" on the meat of human contact [I’d better shut up, or I’ll begin to sound like a psychoanalyst].

The point being that a receptive and benevolent presence is at the root of any and all psychotherapy – therapeutic, in and of itself. All roads start at the same place. So what? Coming soon to a blog near you…
Mickey @ 9:27 PM

this other thing…

Posted on Saturday 9 July 2016

After reading Karen Dineen Wagner’s 2013 deposition [author·ity…], I looked back at that recent PSYCHIATRICNEWS article about her presentation at the May APA meeting [see a blast from the past…]. This time through, there were several things about it that got my attention. So here it is again for review:
PSYCHIATRICNEWS
by Aaron Levin
June 16, 2016

… As for treatment, only two drugs are approved for use in youth by the Food and Drug Administration [FDA]: fluoxetine for ages 8 to 17 and escitalopram for ages 12 to 17, said Wagner. “The youngest age in the clinical trials determines the lower end of the approved age range. So what do you do if an 11-year-old doesn’t respond to fluoxetine?”

One looks at other trials, she said, even if the FDA has not approved the drugs for pediatric use. For instance, one clinical trial found positive results for citalopram in ages 7 to 17, while two pooled trials of sertraline did so for ages 6 to 17. Another issue with pediatric clinical trials is that 61 percent of youth respond to the drugs, but 50 percent respond to placebo, compared with 30 percent among adults, making it hard to separate effects.

When parents express anxiety about using SSRIs and ask for psychotherapy, Wagner explains that cognitive-behavioral therapy [CBT] takes time to work and that a faster response can be obtained by combining an antidepressant with CBT. CBT can teach social skills and problem-solving techniques as well. Wagner counsels patience once an SSRI is prescribed.

A 36-week trial of a drug is too brief, she said. “The clock starts when the child is well, usually around six months. Go for one year and then taper off to observe the effect.” Wagner suggested using an algorithm to plot treatment, beginning with an SSRI, then trying an alternative SSRI if that doesn’t work, then switching to a different class of antidepressants, and finally trying newer drugs.

“We need to become much more systematic in treating depression,” she concluded.
And here’s the drug patent timeline for reference:

• as mentioned, this is TMAP talk [early 90s] all over again …
The reason her presentation felt so "old" was easy to figure out. It was ["old"]. It was the recommendation that started with TMAP/TCMAP 20+ years ago. After the warm glow that initially came with the SSRIs [Listening to Prozac] wore off and it became apparent that the response rate was less than hoped, there were attempts to improve the results. In Texas, John Rush, Madhukar Trivedi, Graham Emslie, Karen Dineen Wagner, and others initiated what I call "algorithmic medicine." The idea was that the tepid response was because the drugs weren’t given right. They advocated pushing the dose, making sure patients stayed on the drugs long enough, following an algorithm to sequence different antidepressants or augment in treatment failures, and treating to remission. We mostly know of TMAP because it was the PHARMA-funded scheme to require using in-patent drugs in public clinics that almost bankrupted Texas Medicaid, but it was also a path intended to improve response. This notion led to a series of NIMH studies [STAR*D among them].

Wagner’s 2016 APA presentation sounded like a direct transcript from those days – in spite of the fact that none of the studies in between really bolstered the idea that this path led much of anywhere. STAR*D claimed to support sequencing, but it had methodologic problems, a huge dropout rate, and was never fully reported [see a thirty-five million dollar misunderstanding…]. She mentioned none of this and, at least in the PSYCHIATRICNEWS report, didn’t present her argument with evidence that her recommendations got results.

• but there’s this other thing…
The things that nagged at me and had me looking back at this report were first, her reference points, and then, what it didn’t say.

She referred to the FDA-approved drugs ["fluoxetine for ages 8 to 17 and escitalopram for ages 12 to 17"] and recommended not feeling limited to just these two – pointing out several clinical trials that were positive ["one clinical trial found positive results for citalopram in ages 7 to 17, while two pooled trials of sertraline did so for ages 6 to 17"]. She’s referring to studies from 2003 and 2004 where she is the first author, one recently heavily criticized [see The citalopram CIT-MD-18 pediatric depression trial: Deconstruction of medical ghostwriting, data mischaracterisation and academic malfeasance] and the other also widely questioned [at least she left out Paxil Study 329]. So based on her own ghostwritten, industry-funded studies from twelve-plus years ago, she is encouraging full speed ahead with the aggressive TMAP guidance.

But, to be honest, that’s not what actually got my attention. It was what she didn’t mention. The most obvious thing is the Black Box Warning of potential suicidality in children and adolescents, present since 2004. Then she refers to her short-term clinical trials from the early days, and nothing since then! Both trials come up in the deposition; her involvement was slim to none, and she remembers nothing much about them. She discounts the FDA, evokes these two early trials, and doesn’t mention anything else. There’s no "In our clinical experience…" or "In follow-up long-term studies…" or "In the Galveston clinic, we’ve seen…"

• and another thing…
I wasn’t in love with her recommendation to argue down skeptical parental concerns about using SSRIs in kids either. Based on the evidence, Wagner should be concerned too. And that just got added to the list of things that made me wonder why she was picked to give this update on the treatment of depression in kids at the APA, and how she achieved the status of Key Opinion Leader…
Mickey @ 9:40 PM

author·ity…

Posted on Thursday 7 July 2016

Being deposed in a legal case is one of the more odious experiences of adult life. The lawyer asking the questions has pored over whatever you’re there to talk about looking for ways to discredit you, then hammers away trying to get you to admit your sins [whether you’ve committed them or not]. No one looks very good in a deposition transcript. But in this post, I’m only going to talk about snippets from a deposition where Karen Dineen Wagner is not being brow-beaten and simply answers in a matter-of-fact manner. While there are plenty of contentious segments in this deposition, they are for another time.

Karen Dineen Wagner is something of an enigma to me. She is a listed author on four heavily contested ghostwritten Clinical Trial Reports and was investigated by the US Senate for unreported income from pharmaceutical companies. And yet she’s the head of Child and Adolescent Psychiatry at UTMB; was the longstanding Deputy Chairman of Psychiatry there until recently, when she was promoted to full Chairman; and she is the President-Elect of the American Academy of Child and Adolescent Psychiatry.
A lot of the information in the recent paper by Jon Jureidini, Jay Amsterdam, and Leemon McHenry about the 2004 Celexa Clinical Trial came from the discovery process in a court case [Celexa and Lexapro Marketing and Sales Practices Litigation: Master Docket 09-MD-2067-(NMG)].

In addition to the internal documents from the case mentioned in their article, there was also a deposition of Dr. Wagner. Here are a couple of snippets from that deposition posted on the Drug Industry Document Archive [DIDA]. First, what she had to say when asked about her authorship on the 2001 Paxil Study 329 article:

DEPOSITION OF KAREN DINEEN WAGNER, M.D., Ph.D.
by Michael Baum, Esq., of Baum, Hedlund, Aristei & Goldman
on Tuesday, July 16, 2013, page 61 [pdf page 16]


QUESTION Do you recognize this document we’ve marked as Exhibit 2c?
  ANSWER Yes.
QUESTION And you were a contributing author to this article?
  ANSWER I was an add-on site to this study. I was not involved in the writing of the manuscript.
QUESTION Did you review the manuscript before it was submitted for publication?
  ANSWER I don’t remember reviewing it. I assume — I don’t remember reviewing it. I would guess all of the authors saw a copy before it went out. I just don’t remember.
QUESTION At the time it was issued, did you believe it to be an accurate and truthful statement of the results of the 329 study?
  ANSWER Yes.
QUESTION … just based on your recollection now, do you recall that there’s some data that’s come out or some indication since the publication of what’s now marked as 2c in July of 2001 that indicated that Study 329 was actually a negative result?
  ANSWER There has been some controversy with that. It depends what outcome measure you look at. And again, I was an add-on site to this study.
QUESTION You didn’t have anything to do with the publication of inaccurate information about Study 329, correct?
  ANSWER I didn’t have anything to do with the design of the study, the analyses of the study. I provided subjects, some subjects, for the study. I think on — you can count what my authorship — one, two, three, four, five, six, seven, eight, nine, 10, 11, 12, 13, 14, 15, 16, 17, 18 — I’m the 19th author in this multicenter publication.
QUESTION You were low on the totem pole?
  ANSWER I wasn’t involved in the design or the — this study was designed well before I was involved. The primary outcome measures were chosen before I was involved. All of it was done and they needed some more sites for enrollment.
QUESTION Why is your name on the paper, then?
  ANSWER Because I provided subjects for the study.
QUESTION Okay. Do you know whether or not this paper is listed on your CV?
  ANSWER It would be listed on my CV because my name is on it.

She is being truthful in saying that she was brought into the Paxil Study 329 Clinical Trial late after they realized that recruitment was flagging and they needed to add more sites [Sites 007-012 added one year into the study]:

Table 7  Number of Patients Who Were Randomized (R) to Each Treatment Group
and Who Completed (C) Acute Phase of Treatment at Each Center

Center  Investigator         Site                    Paroxetine   Imipramine   Placebo
                                                       R    C       R    C     R    C
001     Geller               St. Louis, MO             7    3       5    1     6    4
002     Keller               Providence, RI            9    6      11    6    10   10
003     Klein                New York, NY             10    8      14    9    11   10
004     Papatheodorou        Toronto                   5    3       4    1     4    2
005     Ryan                 Pittsburgh, PA           16   14      15   10    14   13
006     Strober              Los Angeles, CA           4    3       2    1     3    2

007     Wagner               Galveston, TX             9    5       7    1     5    4
008     Clarke               Portland, OR              5    5       6    5     3    3
009     Emslie               Dallas, TX               17   13      18   13    18    9
010     Weller               Columbus, OH              3    2       2    2     4    3
011     Carlson              Stony Brook, NY           2    1       5    4     4    3
012     Kusumakar/Kutcher    Halifax, Nova Scotia      6    4       6    4     5    3

        Total                                         93   67      95   57    87   66

But the rest of her claims of passivity and non-involvement are beyond suspect. In 1999, she was the main speaker at a SmithKline Beecham sales roll-out meeting based on this study [if you haven’t seen this report, it’s worth a look]. And by the time of this deposition, the literature was full of Paxil Study 329 references, GSK had settled the suit brought by New York Attorney General Eliot Spitzer, the Black Box Warning was on every antidepressant package insert, Paxil Study 329 was a major part of a $3.3B GSK settlement, and Wagner had herself been deposed twice about Paxil Study 329:

DEPOSITION OF KAREN DINEEN WAGNER, M.D., Ph.D.
by Michael Baum, Esq., of Baum, Hedlund, Aristei & Goldman
on Tuesday, July 16, 2013, page 12 [pdf page 4]


QUESTION Have you had your deposition taken before?
  ANSWER Yes.
QUESTION Do you know how many times?
  ANSWER I think I’ve been deposed three times.
QUESTION Okay. One was in the Paxil litigation? Does that ring a bell?
  ANSWER Yes.
QUESTION And once in the Celexa/Lexapro securities litigation? Does that sound right?
  ANSWER I don’t know what it was called, but there was a Celexa deposition.
QUESTION Okay. And do you know what the third one was?
  ANSWER I think that there were — I think with Paxil there may have been two, but I’m not certain. It was a long time ago.

So it’s hard to buy her professed innocence and unfamiliarity as authentic. But beyond that, she’s stating outright that she didn’t review the data and may not have even read the paper before it was submitted with her name on the byline – that the only reason she was included was that she was a site director. That is a damning interpretation of the meaning of the word "author." And it goes downhill from there. This next snippet comes from the questioning about the 2004 Citalopram Clinical Trial:

DEPOSITION OF KAREN DINEEN WAGNER, M.D., Ph.D.
by Michael Baum, Esq., of Baum, Hedlund, Aristei & Goldman
on Tuesday, July 16, 2013, page 28 [pdf page 8]


QUESTION Okay. So do you recall whether you had access to patient level data when you were working on this publication?
  ANSWER No. We have access — well, as an individual investigator, you have access to your patients. But the individual patient data from other sites, usually when the data is presented, it’s put together. So I don’t — I just don’t recall if I saw individual — individual data.
QUESTION When you say "put together," does that refer to the pharmaceutical company compiling information and providing it to you?
  ANSWER The data is the property of the pharmaceutical company.
QUESTION And so they collect it and provide some form of summary of it to you?
  ANSWER Correct.
QUESTION And except for the patient level data that you had from your own particular site, you relied upon the information conveyed to you by the pharmaceutical company regarding the other sites. Is that correct?
  ANSWER In multicenter studies, each individual investigator has their own data and then it depends who sponsors the study. This was a Forest-initiated and Forest-sponsored study, so all of the data from the sites go to Forest.
QUESTION Then they compiled it and then did statistical evaluations of it?
  ANSWER Yes.
QUESTION Did you do any of the statistical evaluations yourself?
  ANSWER No.
QUESTION It was essentially provided to you by Forest statisticians?
  ANSWER Correct. I’m not a statistician.

In this Clinical Trial, Wagner was anything but an "add-on" – she was the Principal Investigator [PI] – and yet her answers are the same. She only saw the actual data from her own site, and she didn’t do or check the statistical analysis, accepting the evaluation of Forest Laboratories’ statisticians. After all, "The data is the property of the pharmaceutical company." So she neither reviewed the data nor involved herself in the analysis. And, by the way, she wasn’t involved in drafting the paper either. The internal documents discussed in The citalopram CIT-MD-18 pediatric depression trial: Deconstruction of medical ghostwriting, data mischaracterisation and academic malfeasance simply confirm something discovered by the AJP editors after the original paper was published. It was ghostwritten. And further, it omitted mentioning that a European Lundbeck Trial had been negative and reported significant harms [outlined in my collusion with fiction… and published in the American Journal of Psychiatry. 2009 166:942-943].

And so to return to my initial comment, "Karen Dineen Wagner is something of an enigma to me." She is known as an authority on psychopharmacology in children and adolescents – presenting CME courses at the APA and AACAP meetings and speaking and writing widely on the topic. These early articles seem to have launched her career. Yet in these examples she is, at best, a signifier – in large measure a placeholder for the work of others, others with tainted motives at that. So the story raises some very big questions like "what is an author?" and "what constitutes authority?"
Mickey @ 3:19 PM

the streams III – and a river runs through it…

Posted on Tuesday 5 July 2016

    A review of the streams:
  • a preference for case studies
  • the consequences of the DSM-III unitary depression
  • being questioned about focusing on "old studies"
  • Peter Kramer’s snappy response to Ed Shorter’s review
  • … and then the thread about Lewis, Kiloh, and Parker on Lewis’ cases

Based on his MD Thesis cases, Aubrey Lewis couldn’t confirm the classic separation between neurotic and melancholic depression. I read the Lewis papers [200+ pages], and I couldn’t confirm the diagnostic dichotomy either. So I can’t fault him for his conclusion. But it’s easy to see why his dataset might not be the best choice for evaluating the distinction. These were inpatient cases admitted to the Maudsley Hospital, and they were severely ill – a skewed sample of the general cohort of depressed patients by any read [the hard cases]. So I can fault him for generalizing from this sample, and for sticking to his conclusion throughout his influential career. I think he made a mistake.

In my experience, the clinical differentiation of Melancholia from other depressive states is not so difficult as it sounds in Aubrey Lewis‘ papers. In political sabbatical…, I provided links to articles with diagnostic criteria, including a follow-up article by Gordon Parker [furthering the use of latent-class analysis] and Bernard Carroll‘s article on diagnosis [that also puts it to verse].

So Dr. Kramer’s Letter to the Editor [Book reviewer promotes a controversial theory about depression] saying that Dr. Shorter was pressing some idiosyncratic idea of his own by mentioning Melancholia,
"Shorter’s critique amounted to little more than a complaint that I have disregarded a controversial theory he favors. He used the review space to give a hobbyhorse a ride."
seems way, way off base to me. Whether by intention or not, Kramer perpetuates the "all depression = brain disease" agenda of the KOLs whose motives are certainly suspect.

But whether his conclusions turned out to be mistaken or not, Lewis’ articles are absolute classics – particularly the last one [Melancholia: prognostic studies and case-material, 1936] which has his 61 case narratives with almost a full journal page each [56 pages ÷ 61 cases = 0.92 pages/case]. You know they’re classics because two different solid investigators were able to find what they needed to do their sophisticated reanalyses using Lewis’ case reports. It was as if those cases had waited patiently [pun intended] for a half century for the statistical methodology and the computing power to come along that could finally analyze them.

Reading Lewis’ commentary and particularly his patient narratives just reminded me of how much I missed case reports. As physicians, it’s the plight of people taken one at a time that matters. We collect them in groups of shared syndromes to see what we can learn from their similarities, but their differences are important too. And we may not be able to find the most important things just yet, but if the narratives endure, maybe some future reader can see what we missed, or bring some new pair of glasses that finds what we were looking for. And that’s exactly what happened here. So the next time I’m asked why I’ve continued to look at "old" studies, I’m going to say, "If the data’s faithfully recorded and maintained, there’s no such thing as an old study" and start talking about the Lewis cases and their reanalysis fifty years later.
    I had a related experience almost 50 years ago myself. When I started a fellowship in rheumatology in a former life, my boss was in the process of happening onto a significant finding. Using an electron microscope, he was studying the capillary morphology in rheumatologic diseases in biopsies from uninvolved muscle. And in Scleroderma [Progressive Systemic Sclerosis] the technicians couldn’t find any capillaries to look at. So he developed a technique to quantify the capillary density, and sure enough, it was dramatically diminished – no small observation for a disease characterized by generalized scarring. So my first project was the literature review of previous reports of vascular problems in Scleroderma. I doubted I’d find anything. In those pre-computer-search days, it was no small task, and I spent months in the dusty library stacks. The results were amazing. It was everywhere – vascular lesions recorded in the meticulous old reviews of pathology in journals from every specialty, every organ system. They didn’t realize the significance of what they were seeing, but they had written down what they saw for me to find years later. They just didn’t have our electron microscope to move them that final inch. We ended up having to radically trim our reference list to only the most pertinent papers.
And that’s it, a river that runs through it. As much as I’d enjoy another good rant about why the distinction between Melancholia [Depression] and the heterogeneous category depression matters, the real lesson in this story is that it’s the patients and their data that’s important. I think Aubrey Lewis did the best he could with the data he had. I wish he’d taken a later look at a more representative cohort, but that doesn’t detract from what he did do. His legacy, in this instance, is the careful case histories he passed on to Kiloh and Parker. And the impressive thing is that even with this skewed cohort, they were able to convincingly detect the two populations that it represents. And that’s probably the river that runs through this whole blog and others like it. It’s the patients, their narratives, and the data generated in its rawest form that endures, not the interpretations of its contemporary custodians. Just imagine how much we could learn if we had this kind of data from all the subjects in the clinical trials of antidepressants that we write about.


[Norman Maclean, author of “A River Runs Through It”]

Mickey @ 9:47 PM

the streams II…

Posted on Tuesday 5 July 2016

Leslie Kiloh [1917-1997] was a British psychiatrist, well known for his studies in the classification of depressive disorders and the EEG. In 1962, he became the chair of psychiatry at the new Medical School at the University of New South Wales in Australia – a position he held for 20 years until retiring. He studied depressive cohorts in England and later Australia much as Lewis had done, but used more modern methods [multivariate factor analysis]. The internal workings of factor analysis are way beyond my skill set. Suffice it to say that they record many variables [~35] and the computers whirr all night. While the methodology is clearly too sophisticated to be "bloggable," the results look impressive to me [in multiple studies]. Here’s a representative picture of the kind of separations shown in his studies.


[Figure: a representative scatter plot from Kiloh’s studies, colored and separated for clarity; on review, the circled cases were misdiagnosed]
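For readers who want a feel for what that kind of separation involves, here’s a toy Python sketch – emphatically not Kiloh’s data or procedure, just invented numbers. Two groups of patients are generated from different latent symptom profiles across ~35 rated features, and a principal components analysis [the technique Kiloh & Garside later applied to Lewis’ cases] recovers the separation:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_items = 35  # roughly the number of clinical features rated

# Two invented symptom profiles standing in for "endogenous" vs
# "neurotic" patterns [hypothetical, for illustration only].
profile_a = rng.uniform(0, 1, n_items)
profile_b = rng.uniform(0, 1, n_items)

# Each patient is their group's profile plus individual variation.
group_a = profile_a + rng.normal(scale=0.35, size=(60, n_items))
group_b = profile_b + rng.normal(scale=0.35, size=(60, n_items))
X = np.vstack([group_a, group_b])

# Project the 35-item ratings down to their principal components.
scores = PCA(n_components=2).fit_transform(X)

# The first component separates the two latent groups.
print(f"group A mean on PC1: {scores[:60, 0].mean():+.2f}")
print(f"group B mean on PC1: {scores[60:, 0].mean():+.2f}")
```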

Kiloh’s group concluded from his own multiple studies and those of others:
"The analysis of data obtained in this study supports the view that ‘psychotic’ or endogenous depression is a condition with a restricted range of clinical manifestations, consistent with an imputed genetic or biochemical basis, whilst so-called neurotic depression is a diffuse entity encompassing some of the ways in which the patient utilizes his defence mechanisms to cope with his own neuroticism and concurrent environmental stress."
Then in 1977, he published a paper with a colleague that was, in my opinion, a brilliant stroke. Using Aubrey Lewis’ detailed descriptions of the original 61 cases from his 1928/1929 dataset, Kiloh rated them on a number of factors and employed his more modern analytic techniques – techniques not available in Lewis’ day:
  • Kiloh, LG & Garside, RF. Depression: a multivariate study of Sir Aubrey Lewis’s data on melancholia.
    Australian and New Zealand Journal of Psychiatry [1977]. 11:149-156.
Again, the vicissitudes of multivariate factor analysis are well beyond the capabilities of this mere mortal [namely me], but it identified two distinct clusters that had little overlap, as shown in the graphic representation of the main results table from their reanalysis of Lewis’ data. They had this to say about Aubrey Lewis’ papers:
"As a result of the clarity with which the two papers are written, few difficulties were experienced in the scoring, but inevitably an occasional decision was necessary."
and this about the Lewis analyses:
"One must agree with Lewis that ordinary scrutiny of his data and the comparison of the clinical features with prognosis shows no discernible patterns and that he was correct, using the methods available at that time, in concluding that he could find no evidence of any qualitative distinction between “melancholia and mild neurasthenic depression” in his material. Nevertheless, more refined techniques now available show that this conclusion was incorrect, and in view of the tremendous influence that these papers have exerted, it is felt that this present analysis was worth carrying out."
And finally, their conclusion:
"Thus, it may be concluded that, once again, a multivariate study has indicated that endogenous depression, though varying in severity, is a categorical condition. A patient either suffers from it or does not. Indeed, as has been pointed out by Kiloh et al. [1972], the dichotomy of depressive illness demonstrated in so many published studies is determined solely by the presence or absence of endogenous features. In other words, neurotic depression is defined by the absence of endogenous features, and this has given to this group of cases the illusion of being an entity. Both Paykel [1971] and Kiloh et al. [1972] have put forward evidence indicating that, when scrutinised by multivariate analysis, neurotic depression consists of a number of separable syndromes."
Gordon Parker followed Kiloh as psychiatry chair at the University of New South Wales [1983-2002] where he became a noted researcher in mood disorders and founded the Black Dog Institute, an organization focused on the treatment of depressive illnesses. In 1993, he and a colleague took yet another look at the Aubrey Lewis dataset.
by GORDON PARKER and DUSAN HADZI-PAVLOVIC
Psychological Medicine. 1993 23:859-870.

Sir Aubrey Lewis studied 61 depressives in considerable detail, principally cross-sectionally but also by reviewing progress. He concluded that he could find no qualitative distinctions between the depressed patients and thus established himself as a strong and influential advocate of the unitary view of depression [i.e. that depression varies dimensionally, not categorically]. Subsequently, Kiloh & Garside [proponents of the binary view of two depressive ‘types’] coded the Lewis data and undertook a principal components analysis. They claimed success in distinguishing ‘endogenous’ and ‘neurotic’ depressive types within Lewis’ sample. In this paper we re-analyse the data set using both a latent class categorical approach and mixture analyses. We suggest that any demonstration of sub-types was limited by relative homogeneity of the sample [in that up to 80% had probable or possible psychotic conditions], and by Lewis rating a number of important features [e.g. delusions] dimensionally rather than categorically. Nevertheless, we identify one categorical class [essentially an agitated psychotic depressive condition] and a residual [presumably heterogeneous] class. The presence of those two classes was supported by demonstrating bimodality in composite scores derived from the fourteen differentiating clinical features [and not evident when all clinical features were considered], and formally confirmed by mixture analyses. Membership of the categorical class was determined principally by psychotic features [delusions and hallucinations] and by objectively-judged psychomotor disturbance, and we consider the nature of that ‘class’. Lewis’ data set is unusual [in having self-report and observationally rated data], and historically important in demonstrating that conclusions may depend on the choice of variables examined and analytical approaches.

Dr. Parker kindly responded with this comment about the paper:
"Leslie Kiloh scored all of Lewis’ subjects on depressive symptoms and analysed the data to show separation.  We then used latent class analysis … to analyse the Kiloh-rated data set evaluating Kendell’s argument that a unimodal distribution would argue for depression differing only by severity and that a bimodal distribution would be required to support the division of melancholic and non-melancholic depression.   When we included all of Aubrey Lewis’ items there was a unimodal distribution.  When we limited analyses to central markers of melancholia [essentially psychomotor disturbance] there was a distinct bimodal one – and we made the point that including non-differentiating items [ie DSM MDE in the main, and most endogeneity symptom lists] in any analysis can ‘swamp’ the true capacity of the contrasting  depressive groups to differentiate. I’m preparing a paper at the moment where we analyse data using the SMPI [Sydney Melancholia Prototype Index] and again we demonstrate distinctive bimodality.  I hope this assists."
In following the fate of Aubrey Lewis’ cases, I’ve left out the many other contributions of these three investigators. I’ve also skipped over the work of Dr. Bernard Carroll and the Dexamethasone Suppression Test, and the many others who are catalogued on the by-line of this must-read editorial, Issues for DSM-5: Whither Melancholia? The Case for Its Classification as a Distinct Mood Disorder. You could also put melancholia in the search box below to find my 105 blog posts that mention it. So enough prequel – it’s time to see where these streams converge [see the streams III – and a river runs through it…].
Mickey @ 6:00 PM