didn’t get very far…

Posted on Monday 16 February 2015

“If one is given a puzzle to solve one will usually, if it proves to be difficult, ask the owner whether it can be done. Such a question should have a quite definite answer, yes or no, at any rate provided the rules describing what you are allowed to do are perfectly clear. Of course the owner of the puzzle may not know the answer. One might equally ask, ‘How can one tell whether a puzzle is solvable?’, but this cannot be answered so straightforwardly. The fact of the matter is that there is no systematic method of testing puzzles to see whether they are solvable or not. If by this one meant merely that nobody had ever yet found a test which could be applied to any puzzle, there would be nothing at all remarkable in this statement. It would have been a great achievement to have invented such a test, so we can hardly be surprised that it has never been done. But it is not merely that the test has never been found. It has been proved that no such test ever can be found.” [from Alan Turing’s Solvable and Unsolvable Problems, 1954]

Last week, I was on a road trip and did some pleasure reading along the way in Seeing Further: The Story of Science, Discovery, and the Genius of the Royal Society, about the scientists and the science that rate the term ‘genius.’ I also saw the Academy Award-nominated The Imitation Game about Alan Turing [speaking of British genius]. Back at home, I watched the older UK Channel 4 movie, Codebreaker [on Netflix], also about Alan Turing – a docudrama that covered his life as seen through the eyes of his psychiatrist. It was as riveting as the recent movie. I even read some of Turing’s papers [at least I saw the words with occasional glimpses of understanding]. The quote above is almost a random sample, just something to let me say, “Who thinks about things like that?” Then I left the geniuses behind and settled back into my home life. I had obviously wanted to say something about the iSPOT paper [a cul de sac I, II, III, IV…], so I didn’t get around to looking over my usual sites to see what I’d missed last week until this morning. I didn’t get very far…

PsychiatricNews
by Philip R. Muskin, M.D. and Paul Summergrad, M.D.
Feb 12, 2015

APA’s 2015 annual meeting in the cosmopolitan city of Toronto promises to be an unforgettable educational experience. The breadth of the scientific program is impossible to capture in a brief article. The highlights contained here and throughout this issue of PsychiatricNews are but a small sample of what you can expect as we bring together some of the best minds in psychiatry to present compelling clinical, research, and practice-related sessions in one dynamic meeting.
Making the meeting even more timely, Dr. Summergrad has planned a series of presidential symposia to address topics that are particularly relevant. For example, one is ’21st-Century Psychiatry at the Interface of Genetics, Neurobiology, and Clinical Science’ with Charles Nemeroff, M.D., Ph.D., Daniel Weinberger, M.D., Karl Deisseroth, M.D., Ph.D., and David Rubinow, M.D.
One of the meeting’s most popular formats is the interactive sessions, in which meeting attendees can engage directly with experts. This year’s meeting will have 14 interactive sessions, and among their leaders are Dr. Summergrad, Dr. Mayberg, Dr. Nemeroff, Melissa Arbuckle, M.D., Barbara Coffey, M.D., Glen Gabbard, M.D., Otto Kernberg, M.D., Russell Lim, M.D., John Oldham, M.D., Alan Schatzberg, M.D., Nora Volkow, M.D., and Stuart Yudofsky, M.D.
I’ll have to say that after a week of reading about some of the scientific high points in history and the Royal Society, this article felt like a splash of ice water. I’m not in the APA and it’s hardly for me to say how the organization presents itself, but headlining Drs. Nemeroff and Schatzberg seems kind of bizarre. Were I to list entrepreneurial psychiatrists, they’d occupy the top two positions. Both stepped down prematurely from chairmanships [Emory and Stanford] in the wake of Senator Grassley’s Congressional Investigation of undisclosed PHARMA payments [with Dr. Nemeroff moving on to chair in Miami]. Both have been guest authors for a ghost-written book and numerous articles – Schatzberg as recently as December [the recommendation?…] and both are part of the Brain Resources iSPOT enterprise [a cul de sac I, II, III, IV…]. They’ve led the league in industry connections by any measurable dimension, and everybody knows that. So why they’re showcased in this article about this May’s APA meeting is beyond my faculties. Maybe my choice of Turing’s SOLVABLE AND UNSOLVABLE PROBLEMS paper wasn’t so random after all, because I sure don’t have a solution to explain this…
Mickey @ 12:49 PM

a cul de sac IV…

Posted on Sunday 15 February 2015

So in a cul de sac I… and a cul de sac II… I looked at this new? iSPOT paper [The International Study to Predict Optimized Treatment in Depression [iSPOT-D]: Outcomes from the acute phase of antidepressant treatment], but I kept having a nagging feeling, like a déjà vu – like I’d heard it before somewhere. Then looking back into an old post in June 2012 [entrepreneurialism prior to investigation…], I found this that jogged my memory:
NASDAQ
by Zacks Equity Research
June 13, 2012

This will be the catalyst to driving growth of the personalized medicine business, Brain Resource’s major focus and the area that is expected to be the impetus to the company’s long-term revenue and earnings growth. Findings from the iSPOT trial will be used to develop these depression and ADHD biomarkers. iSPOT is the world’s largest clinical trial to predict treatment response in depression and ADHD. iSPOT-D [for depression] has enrolled over 1,700 patients and analysis started on the first 1,000 patients.  Data from iSPOT-D was presented at two major U.S. medical conferences during 2011, including an invitation-only presentation at The American Conference of Neuropsychopharmacology in December and more recently formed a panel of presentations at NCDEU in Arizona…  Brain Resource is planning to submit study outcomes to the FDA for approval [likely via PMA] of a depression and an ADHD test in the near-to-mid term [discussions with the FDA regarding the regulatory approval pathway are ongoing]…
So, as Dr. Carroll hypothesized in his comment, something’s awry. This paper is old news. The 1008 subjects were signed, sealed, and delivered four years ago [2011]. And nosing around the Brain Resources web site, there’s more. For example, on this page, we learn that…
iSPOT-D “Test” cohort (n=1008) complete. First “Replication” cohort (n=700) is locked. 
FDA meeting on first biomarker outcomes from the iSPOT studies. View Report
a report that says…
3 July 2012
FDA meeting on BRC’s Depression Treatment Test
BRC met with the FDA [June 28] to discuss our Pre-IDE submission in regards to the company’s test to predict optimized treatment for Depression based on our international iSPOT study. This was a positive meeting which addressed the issues previously raised by the FDA. Significantly, the FDA considered a de novo pathway as a possible pathway for our submission. The implication of this being typically less complexity and shorter approval times as compared to a PMA submission. As such, we continue to believe we are on track to develop and obtain FDA marketing clearance in the United States for a personalized predictive test in Depression. Our next steps include filing a Pre-IDE supplement to clarify the pivotal validation study [expected within the next two months], this laying the ground for us to begin working on our final submission.
And here, we learn that Dr. Rush’s publication record is, indeed, part of the grand plan:
iSPOT-D Publication Management Team
The Publication Management Team [PMT] is the administrative body that provides support and structure to ensure a large volume of high-caliber papers are produced from the data. The iSPOT-D PMT follows a structured publication plan and has been formed with John Rush, MD as chairman. John Rush, MD was the PI of the first-of-its-kind STAR*D Depression research study that produced over 120 papers [published across more than 20 peer-reviewed journals internationally].
And there have been some changes along the way [from the recent paper]:

  • "Dr. Williams has previously received fees as a consultant for Brain Resource Ltd and in the last 3 years and was a stockholder in Brain Resource Ltd."
  • "iSPOT-D is sponsored by Brain Resource Company Operations Pty Ltd. Dr Williams was the Academic Principal Investigator for iSPOT-D from 2008 to 2013."
  • Dr. Williams is now at Stanford working on the NIMH RDoC, iSPOT, and PTSD…
  • And speaking of a change, Dr. Charlie Nemeroff was a prominent member of the Brain Resources team. However, in the recent paper, first author Radu Saveanu, his Director of Education, represented the University of Miami instead [detoxifying the by-line?].
What does all of this mean? What came of that pivotal validation study? the FDA Submission? John Rush’s structured publication plan? Why did Williams move to Stanford? or why was this old data published now? Has this commercial venture fizzled? or is there some personalized predictive test in Depression about to come bursting onto the scene?

I haven’t a clue about the answer to any of these questions, but I do know this. This entire personalized medicine scenario has been heavily colored by a quest to find a marketable pick-the-antidepressant test. It has played out among the usual suspects of entrepreneurial psychiatry in high places, spread through the pages of academic peer-reviewed journals, and been nurtured in the departments of psychiatry at some prestigious universities. In fact, this whole story going back to TMAP could be folded into that last sentence – much ado about not very much. Haven’t we explored this cul de sac long enough?

Mickey @ 1:53 PM

a cul de sac III…

Posted on Saturday 14 February 2015

Speaking of cul de sacs, we interrupt this narrative for this utterly brilliant and pertinent commentary from John Oliver…

Mickey @ 8:29 PM

a cul de sac II…

Posted on Saturday 14 February 2015


The International Study to Predict Optimized Treatment in Depression [iSPOT-D]: Outcomes from the acute phase of antidepressant treatment
by Saveanu R, Etkin A, Duchemin AM, Goldstein-Piekarski A, Gyurak A, Debattista C, Schatzberg AF, Sood S, Day CV, Palmer DM, Rekshan WR, Gordon E, Rush AJ, Williams LM.
Journal of Psychiatric Research. 2015 61:1-12.

We aimed to characterize a large international cohort of outpatients with MDD within a practical trial design, in order to identify clinically useful predictors of outcomes with three common antidepressant medications in acute-phase treatment of major depressive disorder [MDD]. The international Study to Predict Optimized Treatment in Depression has presently enrolled 1008 treatment-seeking outpatients [18 – 65 years old] at 17 sites [five countries]. At pre-treatment, we characterized participants by symptoms, clinical history, functional status and comorbidity. Participants were randomized to receive escitalopram, sertraline or venlafaxine-extended release and managed by their physician following usual treatment practices. Symptoms, function, quality of life, and side-effect outcomes were assessed 8 weeks later. The relationship of anxiety to response and remission was assessed by comorbid Axis I diagnosis, presence/absence of anxiety symptoms, and dimensionally by anxiety symptom severity. The sample had moderate-to-severe symptoms, but substantial comorbidity and functional impairment. Of completers at week 8, 62.2% responded and 45.4% reached remission on the 17-item Hamilton Rating Scale for Depression; 53.3% and 37.6%, respectively on the 16-item Quick Inventory of Depressive Symptoms. Functional improvements were seen across all domains. Most participants had side effects that occurred with a frequency of 25% or less and were reported as being in the “none” to minimal/mild range for intensity and burden.

Outcomes did not differ across medication groups. More severe anxiety symptoms at pre-treatment were associated with lower remission rates across all medications, independent of depressive severity, diagnostic comorbidity or side effects. Across medications, we found consistent and similar improvements in symptoms and function, and a dimensional prognostic effect of comorbid anxiety symptoms. These equivalent outcomes across treatments lay the foundation for identifying potential neurobiological and genetic predictors of treatment outcome in this sample.

First, to state the obvious, this is a commercial study funded by Brain Resources. The authors in the by-line in red have received research support from Brain Resources. The ones in blue are employees of or principals in Brain Resources. And the underlined authors are the editor-in-chief and assistant editor of this very journal [Journal of Psychiatric Research]. So at a time when questions of conflict of interest have finally made it to the front page in medical research, these people don’t seem to have gotten the message. There are other anachronisms. Listed author AJ Rush was Director of TMAP and Principal Investigator for STAR*D and CO-MED – studies that notoriously churned out a seemingly endless stream of published papers. This article is, at best, an interim report, likely signalling yet another flood of articles pouring out of this iSPOT enterprise. And medical writer Jon Kilner, who has been ghost?/editorial assistant? throughout the studies mentioned in the last post [a cul de sac I…], is still around doing whatever he does once more in this publication.

As to this study itself, it’s bare bones. There’s no placebo control group and neither subject nor rater blinding – just testing on entry and at 8 weeks with phone checkups every couple of weeks for side effects in between. The point was to develop a cohort of responders and remitters to each of three antidepressants [Lexapro, Zoloft, and Effexor XR] while collecting multiple parameters to search for biological predictors of outcome.
… iSPOT-D follows current usual care setting clinical practice in the prescription of antidepressant medications, coupled with collection of a broad range of potential predictor measures [e.g. genetics, neurobiological, psychophysiological etc]. This was done in order to arrive at a battery of tests that can be used in future prospectively-designed validation studies that will test the proposed individual patient-level treatment optimization algorithm built on these predictors.
Although it’s not a standard clinical trial, it’s presented as if that’s what it is. This table shows the results of the primary outcome variables [HRSD17 and QIDS-SR16]. To their credit, they showed their response and remission results in two ways: as the percent of completers, and as the percent of all participants [the latter being what one might see in an office]. Those latter numbers [1/3 respond, 1/4 remit] seem kind of right – remember, that’s with no placebo control:
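The completer-versus-all-participants arithmetic is worth making concrete. A minimal sketch, where the completer count is an assumption I picked so the intent-to-treat figure lands near the "1/3 respond" the paper implies [the enrollment and the 62.2% completer response rate are the paper's; the rest is hypothetical]:

```python
# Hypothetical illustration of "percent of completers" vs "percent of all
# participants" [intent-to-treat]. Only enrolled and the 62.2% completer
# response rate come from the paper; the completer count is assumed.
enrolled = 1008          # iSPOT-D acute-phase sample [from the paper]
completers = 540         # hypothetical number finishing week 8
responders = round(completers * 0.622)   # 62.2% response among completers [HRSD17]

completer_rate = responders / completers
itt_rate = responders / enrolled         # dropouts counted as non-responders

print(f"completer response rate: {completer_rate:.1%}")
print(f"all-participant response rate: {itt_rate:.1%}")
```

The same 336 responders read as 62% of those who finished but only about a third of everyone who walked in the door, which is why reporting both denominators matters.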

They were less forthcoming with the side effects, showing only those of completers [which were themselves pretty impressive]:

So here we are, three decades after the first SSRI and two decades beyond the hypothesis that there’s some special way these drugs can be used to increase their efficacy, looking at a study that says that they’re all the same. And we know what’s coming – a string of studies sifting through all things neuro looking for something that will point to one or the other drug [with no real rationale that such a test might exist]. We have here a study financed by a commercial firm aiming for a future commercial product, authored by its employees, and published in a journal edited by some of the authors who are also involved with the company. The ticket into a peer-reviewed medical journal is presumably the academic credentials of some of those authors.

This article doesn’t answer any questions that haven’t already been answered. It only highlights an ongoing question, "Why is this mis-use of our medical literature for commercial purposes still tolerated?"
Mickey @ 7:30 PM

a cul de sac I…

Posted on Saturday 14 February 2015


The International Study to Predict Optimized Treatment in Depression [iSPOT-D]: Outcomes from the acute phase of antidepressant treatment
by Saveanu R, Etkin A, Duchemin AM, Goldstein-Piekarski A, Gyurak A, Debattista C, Schatzberg AF, Sood S, Day CV, Palmer DM, Rekshan WR, Gordon E, Rush AJ, Williams LM.
Journal of Psychiatric Research. 2015 61:1-12.

We aimed to characterize a large international cohort of outpatients with MDD within a practical trial design, in order to identify clinically useful predictors of outcomes with three common antidepressant medications in acute-phase treatment of major depressive disorder (MDD). The international Study to Predict Optimized Treatment in Depression has presently enrolled 1008 treatment-seeking outpatients [18 – 65 years old] at 17 sites [five countries]. At pre-treatment, we characterized participants by symptoms, clinical history, functional status and comorbidity. Participants were randomized to receive escitalopram, sertraline or venlafaxine-extended release and managed by their physician following usual treatment practices. Symptoms, function, quality of life, and side-effect outcomes were assessed 8 weeks later. The relationship of anxiety to response and remission was assessed by comorbid Axis I diagnosis, presence/absence of anxiety symptoms, and dimensionally by anxiety symptom severity. The sample had moderate-to-severe symptoms, but substantial comorbidity and functional impairment. Of completers at week 8, 62.2% responded and 45.4% reached remission on the 17-item Hamilton Rating Scale for Depression; 53.3% and 37.6%, respectively on the 16-item Quick Inventory of Depressive Symptoms. Functional improvements were seen across all domains. Most participants had side effects that occurred with a frequency of 25% or less and were reported as being in the “none” to minimal/mild range for intensity and burden.

Outcomes did not differ across medication groups. More severe anxiety symptoms at pre-treatment were associated with lower remission rates across all medications, independent of depressive severity, diagnostic comorbidity or side effects. Across medications, we found consistent and similar improvements in symptoms and function, and a dimensional prognostic effect of comorbid anxiety symptoms. These equivalent outcomes across treatments lay the foundation for identifying potential neurobiological and genetic predictors of treatment outcome in this sample.
The third paragraph of the paper says:
    Recent efforts have focused on identifying clinical or laboratory-based measures that help to precisely target treatments for specific patients. While several neurobiological markers have been investigated, none have been of sufficient clinical value to be incorporated into treatment guideline recommendations….

That paragraph deserves a little more introduction:

BACKGROUND


As the decade following the coming of Prozac drew to a close, it was apparent that the new antidepressants were no panacea for symptomatic depression – at least not prescribed in the chaotic way they were being used. Some thought that we should give the medication in a systematic way, with guidelines, with objective measurement data. This thread is about the some who thought that chaos was the problem. We needed algorithms for our treatment. Who knows the right way to give these medications? The Experts [the some] – that’s who. So we’ll create algorithms for the drugs used to treat Schizophrenia, Bipolar Disorder, and Major Depressive Disorder [MDD], by Expert Consensus, for our clinicians to follow as a guide to treatment. And where shall we do this? In the largest public mental health system there is – the State of Texas. Thus, in 1996, the Texas Medication Algorithm Project [TMAP] came into existence – generously supported by Foundations, the State of Texas, and multiple pharmaceutical companies, all coordinated by the psychiatrists of the University of Texas system. The algorithms spread to multiple States, and to the Federal Government when the Texas Governor became the US President.

The idea of applying systematic study using clinical trials of these algorithms appealed to the NIMH, and there followed a period of large acronymed Clinical Trials in Depression, Schizophrenia, Bipolar Disorder, and other disorders. The largest, STAR*D [Sequenced Treatment Alternatives to Relieve Depression], was run by the TMAP team and had a complex, sequential algorithm in which non-responders were changed to another drug or augmentation scheme. In MDD, there was a side project, IMPACTS [algorithmic psychiatry: the fallacy…], that computerized the algorithms. It had to be scrapped because the clinicians wouldn’t use it if left to their own devices. Then came CO-MED, which combined multiple antidepressants – no help. I think it’s reasonable for me to say that all of these efforts generated much ado and many papers, spent tons of money, but not much changed in the response rates of MDD to antidepressants [if you’re not up to speed, just look up any acronym using the search box at the bottom of this page]:

That pretty much covers the completed efforts in Recent efforts. Now for the Recent part. Since all of this started, there have been some new kids on the block: Genomics, Proteomics, Functional Neuroimaging, Cognitive Testing. Well, not new kids, but at least more prominent shiny new objects in neuroscience. And in the rest of medicine, Personalized Medicine [picking specific treatment based on unique biomarkers] had become a hot new area. Well, not a new area, but at least a more prominent shiny new object. So Personalized Medicine began to be bandied about as a possible exploratory area for picking an antidepressant – the scientific rationale being «fill in the blank?». And a new character entered the ring – Evian Gordon, a brain-training Australian [BrainNet] – and his colleague, Leanne Williams [Brain Resources]. They gathered a who’s who of KOLs [list] for their Personalized Medicine Action Group in D.C. in October 2009 to kick off a campaign to personalize antidepressant treatment [The Mayflower] [which is a must-see to understand this line of thinking].

In the year before this conference [2008], Senator Grassley’s congressional investigation had reshuffled the people it exposed with unreported PHARMA income. John Rush [of TMAP, STAR*D, and CO-MED] left UT and went to Duke in Singapore. Alan Schatzberg, APA President, stepped down as Chairman at Stanford. Charlie Nemeroff, Boss of Bosses, was removed at Emory and went to Miami as chair. The emptiness of the PHARMA new-drugs pipeline was looming. Out of that matrix, two large Personalized Medicine studies came into being. iSPOT-D was financed by Evian Gordon’s BrainNet/Brain Resources [see personalized medicine: the Brain Resources company II…] and added the Grassley-investigated people to its byline…
by Williams LM, Rush AJ, Koslow SH, Wisniewski SR, Cooper NJ, Nemeroff CB, Schatzberg AF, Gordon E.
Trials. 2011 12:4.
Meanwhile, Dr. Madhukar Trivedi, still at UT Southwestern, started a second Personalized Medicine study, EMBARC [Establishing Moderators and Biosignatures of Antidepressant Response in Clinical Care], with a large NIMH grant, mentioning many of the STAR*D veterans as resources in his grant proposals [see the race for biomarkers… and Godzilla vs. Ghidorah…]. This second Clinical Trial is still recruiting.

SCIENCE


The scientific premise behind this line of thinking and series of trials was questionable from the outset. The notion that Major Depressive Disorder as defined represents a distinct clinical entity, a unitary disease entity, was conjecture that has now rapidly moved to the realm of fantasy. The evidence for an over-riding biological etiology was equally scant, and has traveled in the same direction. Likewise, by any ongoing reading of the accumulating information, the therapeutic action of the antidepressant drugs is a non-specific, symptomatic effect – not something determined by the kind of precise or controllable biological mechanisms hypothesized in these studies – certainly nothing tied to etiology.

COMMERCE


The entrepreneurial background in this story should really be considered more a part of the foreground. TMAP was exposed by whistle-blower Allen Jones to be a conduit for PHARMA to introduce in-patent drugs to the public sector without evidence of efficacy justifying their use, facilitated by under-the-counter payoffs to officials. TMAP was shut down and the States have tried to retrieve their considerable losses with varying levels of success. To be fair, the payola was between State employees and PHARMA, not the academics. But the academics in all of these trials have been way too tied into industry across the board. These studies have generated a lot of money for participating universities and centers – over $50 M from the NIMH alone. In Academia, there’s a commerce in published articles, and STAR*D probably hit an all-time high with well over 100 papers, each with way too many authors [see infectious numerology…]. Then in the iSPOT byline, we see some legendary psychiatrist entrepreneurs [Nemeroff, Schatzberg, Debattista]. Also, there’s Evian Gordon’s more up-front private enterprises hoping to develop commercial tests to screen patients to pick an antidepressant in advance, as in this BrainNet pitch [see personalized medicine: beyond blockbusters…]:

Note: this document is now gone from the Internet [see it currently here through the Wayback Machine].

DESIGN


These studies have not been designed like the usual RCTs. For example, none had a placebo group, so the response/remission rates were uncorrected for the usual improvements seen in antidepressant trials from inert treatment [placebo]. As a result, the strength of the drug effect [NNT, NNH, Odds Ratios, Effect Size] simply could not be calculated. None of them followed the usual double-blinding, settling for partial schemes of one sort or another. They were described as "naturalistic" – meaning more like the treatment one might receive in an office situation than a strictly controlled trial [and it showed]. They had high drop-out rates, and there’s a lot of confusion about the various rating instruments used, particularly with STAR*D [see recalculating… and still recalculating…]. Thus far, the only completely reported study was CO-MED [a negative trial]. In spite of a flood of offshoot papers, the final report for STAR*D never appeared, and the results that were reported strained credibility.
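Those missing effect-size measures are simple arithmetic once a comparator exists, which is exactly the point. A minimal sketch, with the placebo rate an assumed stand-in since the trial had none [the 62.2% drug figure is iSPOT-D's completer response rate]:

```python
# Why a placebo arm matters: NNT and the odds ratio are defined relative to
# a comparator. The placebo rate below is assumed [roughly typical of
# antidepressant RCTs], not measured in iSPOT-D.
drug_response = 0.622        # from the paper [HRSD17, completers]
placebo_response = 0.40      # assumption, for illustration only

arr = drug_response - placebo_response       # absolute risk reduction
nnt = 1 / arr                                # number needed to treat
odds = lambda p: p / (1 - p)
odds_ratio = odds(drug_response) / odds(placebo_response)

print(f"NNT ≈ {nnt:.1f}")                    # patients treated per extra responder
print(f"Odds ratio ≈ {odds_ratio:.2f}")
```

Change the assumed placebo rate and every one of these numbers changes with it – without that arm, the drug effect is simply an unknown.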

I started this post intending to comment on the preliminary iSPOT study reported above, but thought I ought to remind us about the history of these two studies [iSPOT and EMBARC] that are part of a quest to find biomarkers that will predict response to antidepressants. What I really wanted to do was describe the rationale for thinking that such biomarkers might exist, but looking back over the literature, I can’t find any rationale. As with these investigators’ earlier outings that looked at various algorithms and guidelines – exploring sequencing, combining, or augmenting antidepressant regimens – the scientific rationale is missing. Certainly those outings [TMAP, STAR*D, IMPACTS, and CO-MED] have yielded nothing of note. And yet there are two major ongoing studies with authors familiar to all of us, all hotly pursuing this line of investigation. Perhaps that’s the more interesting question than the actual studies themselves: Why do they persist in going down this road? Why do they keep trying to find a way to make these drugs more effective than they are, with no scientific clues that it’s possible? Why are these efforts funded?

Maybe I’ll take a shot at those questions, but first I’m going to stick to my guns and report on the iSPOT study in the next post like I planned in this one [before I got wordy with my skepticism], because this report deserves some clarification of its own…
Mickey @ 8:00 AM

political sabbatical…

Posted on Monday 9 February 2015

My first thumbing through the DSM-III in 1980 landed me in Major Depressive Disorder [MDD], and it’s where I’ve remained. I’ve written about it enough here to thoroughly earn my 1boringoldman moniker [see a mistake…, further thoughts on the mistake…, and yet another mistake… for just one version]. In their zeal to bring psychiatric classification more into the medical realm, they eliminated the most medical of all psychiatric diagnoses – Melancholic Depression. As I’ve said before here often, in a former time the word depression had different connotations than it does today. It referred to a felt emotion, something all of us are capable of feeling – a biological given. A major clinical distinction was between depression, the emotion, and Depression, the illness known clinically as Melancholic Depression, which is both experienced and observed as something very different from depression [see melancholia…]. Several years back, Dr. Bernard Carroll put Melancholia to verse [see Bringing Back Melancholia]. On the road to the DSM-5, a Who’s Who from the ranks of psychiatry lobbied in vain to have this diagnosis returned to the official classification [Note: that list of authors included Dr. Robert Spitzer himself, the person who had eliminated it some thirty years before].
by Gordon Parker, M.D.; Max Fink, M.D.; Edward Shorter, Ph.D.; Michael Alan Taylor, M.D.; Hagop Akiskal, M.D.; German Berrios, M.D.; Tom Bolwig, M.D.; Walter A. Brown, M.D.; Bernard Carroll, M.B.B.S.; David Healy, M.D.; Donald F. Klein, M.D.; Athanasios Koukopoulos, M.D.; Robert Michels, M.D.; Joel Paris, M.D.; Robert T. Rubin, M.D.; Robert Spitzer, M.D.; and Conrad Swartz, M.D.
American Journal of Psychiatry 2010 167:745-747.

Melancholia, a syndrome with a long history and distinctly specific psychopathological features, is inadequately differentiated from major depression by the DSM-IV specifier. It is neglected in clinical assessment [e.g., in STAR*D] and treatment selection [e.g., in the Texas Medication Algorithm Project]. Nevertheless, it possesses a distinctive biological homogeneity in clinical experience and laboratory test markers, and it is differentially responsive to specific treatment interventions. It therefore deserves recognition as a separate identifiable mood disorder.
Melancholia is a lifetime diagnosis, typically with recurrent episodes. Within the present classification it is frequently seen in severely ill patients with major depression and with bipolar disorder. Melancholia’s features cluster with greater consistency than the broad heterogeneity of the disorders and conditions included in major depression and bipolar disorder. The melancholia diagnosis has superior predictive validity for prognosis and treatment, and it represents a more homogeneous category for research study. We therefore advocate that melancholia be positioned as a distinct, identifiable and specifically treatable affective syndrome in the DSM-5 classification.
I recently read a remarkable paper by the lead author above, Dr. Gordon Parker of The Black Dog Institute in New South Wales Australia, that came at this distinction in a unique way:
by Parker G, Paterson A, and Hadzi-Pavlovic D.
Acta Psychiatrica Scandinavica 2015 Jan 6. [Epub ahead of print]

OBJECTIVE: We sought to determine whether putative depressive diseases could be differentiated categorically from clinical depressive disorders and non-clinical mood states.
METHOD: We interviewed volunteers who reported or denied any lifetime depressive mood state and analyzed data from the former group reporting on their ‘most severe’ depressive episode. We employed latent class analysis [LCA] to determine whether a two-class solution was supported and the contribution of individual variables to class allocations.
RESULTS: All variables were significant predictors of class allocation. LCA-assigned Class I participants reported more depressive symptoms, had more distressing episodes and more lasting consequences, were more likely to view their depression as ‘like a disease’, and as being both disproportionately more severe and persistent in relation to any antecedent stressor. Validation involved comparison of LCA assignment with DSM-IV diagnosis for their most severe depressive episode. Of those assigned to Class I, 89% had a DSM diagnosis of melancholic, psychotic or bipolar depression. Class II had all those failing to meet criteria for a depressive episode and the majority of those with a non-melancholic depressive condition.
CONCLUSION: Despite not including individual depressive symptoms, study variables strongly differentiated putative depressive diseases from a composite of clinical depressive conditions and subclinical depressive states.
The point here is that those unique symptoms we’ve always seen as defining Melancholia moved with this study from prose and poetry to an objective instrument. And complex cluster analysis of the results of that instrument in a mixed cohort of subjects with and without a previous depressive episode dramatically separated them into two relatively homogeneous groups – melancholic versus something else. Further, cross-checking those clusters against conventional diagnostic criteria [DSM-IV] showed a surprisingly good fit. I hope this paper becomes freely available; gets presented in more detail elsewhere; and is scheduled for replication trials forthwith. So far, Melancholic depression separates from the other depressions as a unique entity as cleanly as one could hope from the technology available for such an analysis. Melancholia is a thing of its own just as it has always been – not a feature of something else which never was. Maybe we can now pick up this thread that went on political sabbatical in 1980, and figure out what Melancholia is after all…
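For the curious, the two-class latent class analysis [LCA] at the heart of this kind of study works roughly as sketched below – a toy demonstration in Python, fitting a two-class Bernoulli mixture by expectation-maximization to synthetic binary "symptom" data. This is emphatically not the authors’ code, model, or data; every number here is made up to show the mechanics:

```python
# Toy two-class latent class analysis (LCA) via EM on binary indicators.
# Synthetic data only -- NOT the Parker et al. dataset or software.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: 300 "Class I" subjects endorse 8 items often (p=0.8),
# 300 "Class II" subjects rarely do (p=0.2).
X = np.vstack([rng.random((300, 8)) < 0.8,
               rng.random((300, 8)) < 0.2]).astype(float)

def fit_lca(X, k=2, iters=200):
    n, d = X.shape
    pi = np.full(k, 1.0 / k)               # class prevalences
    theta = rng.uniform(0.3, 0.7, (k, d))  # item-endorsement probabilities
    for _ in range(iters):
        # E-step: posterior probability of class membership per subject
        log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        log_post = np.log(pi) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: re-estimate prevalences and item probabilities
        pi = post.mean(axis=0)
        theta = (post.T @ X) / post.sum(axis=0)[:, None]
        theta = np.clip(theta, 1e-6, 1 - 1e-6)
    return pi, theta, post.argmax(axis=1)

pi, theta, labels = fit_lca(X)
```

With well-separated groups like these, the recovered class assignments line up almost perfectly with the true generating classes – which is the flavor of result the paper reports, though on real interview variables rather than a simulation.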
Mickey @ 12:01 AM

a drawer full?…

Posted on Friday 6 February 2015

Will I ever let an article about retroactive Data Transparency pass without commenting on it? I kind of doubt it, at least for the present. It’s the arguments pro and con that have my attention at the moment. The current focus is on patient privacy and consent forms. There’s something very wrong with the way this is framed, and I’m having trouble putting my finger on it. Here’s Stuart Buck from the BMJ talking about the issue and the IOM report:
BMJ: Blogs
by Stuart Buck
30 Jan, 15

… But for all the good that ethics review boards do, today they often block undeniably valuable research from going forward. The re-analysis of clinical trial data is a recent case where specious ethics objections are used to stymie good research into the effectiveness of drugs given to patients. The reason to re-analyze clinical trials is simple: far too many clinical trials are misreported or hidden from public view entirely. One recent study strikingly found that only 22 percent of trials obeyed the federal law requiring results to be reported publicly within a year of completion. To make matters worse, published clinical trials can still be highly misleading, because articles are often selectively written so as to highlight a desired finding. As a result, we are misled into overestimating the benefits, and underestimating the risks, of drugs.

The best way to ensure accurate information about all drug benefits and side effects is for all NIH-funded or FDA-regulated clinical trials to share patient level data with independent researchers. It is encouraging that the Institute of Medicine [IOM] recently issued a massive report recommending just that. In the IOM’s words, “Limited data sharing prevents maximum utilization of knowledge gained. …Greater data sharing could enhance public well-being by accelerating the drug discovery and development process, reducing redundant research, and facilitating scientific innovation.” Right on. Unfortunately, the IOM’s recommendations apply only to future clinical trials—2015 and beyond—not to what it calls “legacy” trials. To put this in context, as of the end of 2014, an all-time total of 1,496 drugs had been approved by the FDA, yet only 41 of those approvals occurred in 2014.
Nobody seems to be comfortable just saying that the point of retrospective Data Transparency is to right a previous wrong, and remove incorrect medical information from the medical literature resulting from inappropriate analyses of the raw data from Randomized Clinical Trials. What other reason could there be? All the stuff about Data Sharing for further learning is fine and good – like the study of diagnostic heterogeneity using the STAR*D dataset [see latter day STAR*D I…]. But it’s the conclusions from the studies as designed that need to be set right. In psychiatry, there’s more than ample evidence that many of those conclusions are either greatly exaggerated, minimized, or altogether wrong:
Thus, data sharing for future trials is good, but it is not enough. Virtually all drugs prescribed by doctors were studied in clinical trials prior to the IOM committee’s report in January 2015. Shouldn’t independent scholars be free to re-analyze those trials and see whether drugs are actually as effective and safe as promised? It seems like a no-brainer, but here’s where ethics committees rear their heads, reasoning that it would actually be unethical to re-analyze data unless the original patients sign another consent form. As an IOM committee member told The Wall Street Journal, “The primary challenge is the issue of informed consent. Patients who participated in trials in the past were most likely in trials that did not include [a provision for] sharing data publicly. So if we want to now share data, ethically, investigators should go back to get informed consent from the participants.
«NOTE TO SELF» Is this correct? or is it the "something wrong with the way this is framed"? This point requires future study…
Remember, no one is talking about re-doing a trial and exposing new patients to new drugs. The only question is whether the underlying data should be independently re-analyzed to see how well the drug actually worked. For anyone worried about privacy, there is no more reason to be worried about the independent analyst than about whoever did the original trial, and it is trivially easy to protect privacy in the same way that the original trialists did — sometimes even more so, now that we have software that permits someone to analyze data without ever downloading or seeing an individual patient’s records. Requiring patients to consent a second time is therefore unnecessary and irrelevant…
While I agree with his argument here, it’s premature. What do those consent forms actually say? verbatim? Most consent forms I sign in hospitals and doctors’ offices cover their asses, not mine. I expect they say that the data generated can be used for research purposes, but do they specify by whom? We need to get hold of a bunch of those consent forms and see what they actually say. It wouldn’t be beyond PHARMA to let us prattle on about consent forms that don’t specify by whom at all and not clue us in.
More importantly, what value will it be to the surviving patients to have to bother with yet another consent form? They already consented to be in a study, and they surely never meant for their data to be misused and misrepresented, as Ben Goldacre documents in his book Bad Pharma. No one has identified a single patient, in all of history, who has stated anything like the following:
    When I signed up for that clinical trial, I was putting my health at risk and exposing my life to examination only for the benefit of one pharmaceutical company’s sales or one professor’s publication record. I never intended for any independent scholars to be able to double-check whether the drug actually works.
If, as the IOM report suggests, some patients previously signed consent forms that actually ruled out future data-sharing, the data should still be subject to sharing. This is because those consent forms are what should be deemed unethical. After all, it is hard to imagine that any pharmaceutical company told patients:
    You’re signing a form that guarantees that our paid analysts and ghostwriters will write about this data only if it is in our financial interest. Thanks to your signature, we will be able to convince “ethicists” that you didn’t want anyone to double-check our analysis, and we will thus be able to misrepresent the trial more effectively.
Indeed, the very fact that anyone made patients sign such consent forms in the first place is a sign that the trial’s data could be especially in need of re-analysis. To be sure, it can still be difficult to re-analyze older clinical trials, due to software and formatting challenges. But when trial sponsors rely on the putative interests of patients in order to argue that data should be restricted, that is a direct obstruction to open science. Remember why patients made the sacrifice of participating in clinical trials in the first place: to generate accurate information about ways to cure disease. It is perverse to use those patients’ purported wishes as an excuse to allow misinformation to stand unchecked.
Obviously what we want is the raw data from the Clinical Trial – the same thing that the pharmaceutical company has to work with. In the good old days, only one agency was involved. Nowadays, the study sites [CRC – Clinical Research Centers] are coordinated and managed by a Clinical Research Organization [CRO] that collects the data and passes it on to the PHARMA. The CRO/PHARMA analyze it and pass the summary to the writers. Once it’s drafted, the guest authors come into the picture and send it to a journal. If there is an FDA NDA submission, it comes from the PHARMA, prepared by some mixture of the CRO, writers, and PHARMA. What we see on the FDA web site is the FDA’s Reviewer report about the submission, but not the contents of the submission.

I don’t actually know what the NDA [New Drug Application] contains – summary data? the individual participant data [IPD]? case report forms [CRF]? is it the clinical study report [CSR]? But for the moment, the question is how does whatever consent the subject gives pass to the FDA for enforcement? Where does the authority of the FDA to withhold the data they analyzed come from? We all know that the claim of privacy for the subjects is bogus, but the actual process of the system, the content of the consents, the authority of the FDA, and who has what is murky – at least to me. We’re mounting arguments, but they’re blowing in the wind because we [at least I] don’t know who we’re arguing with about what. I have the feeling the people in the Institute of Medicine are in the same boat. It seems to me that the first order of business is to get hold of those consent forms and see what they really say. And what about that «NOTE TO SELF»? Did the consent have a provision to give the data to the FDA? How is making the data available for research purposes to the PHARMA Sponsors or the FDA different from making the data available for research purposes to independent researchers? Is the consent to the CRC? the CRO? the PHARMA Sponsor? any researcher? Many questions…

How do we query these things? Anybody got a drawer full of consent forms or know the answer?
Mickey @ 7:00 AM

do what george says…

Posted on Thursday 5 February 2015

I just read George Dawson’s excellent advice to psychiatry residents on Real Psychiatry [Advice To Residents and Advice To Residents – continued]. As a former residency director, I can say that, as usual, he’s right on the mark. There are some things I might add or come at from a different angle, but those are just editorial comments. His posts are full of pearls. Some thoughts:

The Suicidal [or Homicidal] Patient: These patients are, indeed, the ones where we need to do things right. And the thing I saw over and over was that lethal patients frightened the resident and the interview turned into an is-he-going-to-do-it-? or a what-do-I-need-to-do-now-? agenda driven interview. People who become suicidal have plenty of things rattling around in their minds. In Man Against Himself, Karl Menninger said a suicidal person wants to kill [rage], wants to die [a communication], and wants to be dead [relief]. So they have lots of things they need to say and they don’t need some nervous interviewer who is focused on not making mistakes. They need someone who will deeply understand their suicidality from the inside. If you get there, you’ll know what to do. If you don’t, that tells you something too. There’s plenty of time at the end to deal with the agendas they obviously raise.

Crisis Intervention: This was the outgrowth of World War II that inspired the Community Mental Health Movement – how to manage patients who are in crisis states filled with emotion and unable to think clearly or make rational decisions. All they can think of is feeling un-panic. It’s an essential skill with simple principles. Go to the library and look it up. Those dusty books from the fifties and sixties are full of wisdom. It’s a skill to never leave home without. It’s part of preventive psychiatry, because the goal is to head off the huge mistakes they make and bad patterns they learn acting in the crisis mode.

Neuroscience: I completely agree with George when he says…
    There are many excellent psychiatrist-researchers in this area already and I encourage reading their research and some of their popular works as a starting point.  There are any number of Luddites out there who seem to think that psychiatry needs to remain stagnated in the 1950s to provide any value.   I don’t think there is a shred of evidence to support that contention or that neuroscience will never be of value to psychiatrists…
…[plus, who wants to be a Luddite?] But I’d add that his admonishment to not get stuck in the past has an important corollary, Don’t get stuck in the future either. A lot of the recent psychiatric focus has been on our neuroscience future, but our patients are sick in the very-much-right-now. There was a massive backlash against psychoanalysis, existential psychiatry, the biopsychosocial perspective, group therapy and group dynamics, cultural and social psychiatry, etc. after the coming of the DSM-III in 1980. So my advice would be to learn all the neuroscience you can, and everything else too. Don’t let teachers who have lived through these last 35 polarizing years transfer their dogmas and attitudes along with their knowledge. And seek out mentors from all sides of the realm – brain, mind, society, culture, etc. You’ll have plenty of time for skepticism down the road.

Identity: I’ve never been smarter or more competent than the day I finished my Internal Medicine Residency. I hadn’t been brought down to size by the whips and scorns of medical life, made my mistakes, been dwarfed by the afflictions that beset mankind. But my identity as a doctor was as solid as a rock. When I finished my psychiatry residency I felt anything but smart and competent. And that’s not vastly changed even now in retirement. I realized that I only feel like a psychiatrist when I’m being one. Otherwise, I’m just a guy with a bunch of diplomas and a ton of facts. And I noticed that was how my senior residents felt as they approached graduation – wary. I think it’s still true for them, even with their heads filled with neuroscience and evidence-based medicine, because you never know what’s going to walk in that door and our classification systems are hardly maps that lead straight to gold. I suggested that the residents start with a part time job doing something they already knew how to do – clinic, mental health center, etc. And let their practice grow slowly as they get in touch with how much they really had learned and how competent they really had become. We talk about how hard it is without biomarkers to guide our work. Internal markers are equally hard to find, and hard to hold on to once located.

But those are just some other perspectives on George’s advice. I did medicine multiple ways along the road. And as much as I talk about problems in psychiatry in this blog, that doesn’t detract from the fact that I can’t imagine having followed any different path. Psychiatry is about the people that come to see us, not the books and conferences along the way. I wouldn’t have missed having my career in psychiatry for anything I can think of.
Mickey @ 4:19 PM

mental health workers requiring salary need not apply…

Posted on Thursday 5 February 2015


WHNT News 19 at 5:00 p.m.
by David Kumbroch
February 4, 2015

HUNTSVILLE, Ala. [WHNT] – Alabama Psychiatric Services confirms it will end services on February 13, 2015.  It cites a decrease in funding from Blue Cross/Blue Shield and a change in how the insurance company covers behavioral health services as the reason for the closure. An employee at the Madison facility confirmed the closure earlier today, as did several patients.  The Madison employee tells us the closure of APS’ statewide offices could impact as many as 200,000 patients. In north Alabama, APS also has facilities in Cullman, Decatur and Florence.  The Madison location is on Lanier Road off Hughes Road. The company posted this message on apsy.com on Wednesday afternoon:
    “After over thirty years of service to our clients for their behavioral health needs, APS will be ending services on Friday, February 13, 2015. It has been our privilege to have offered services to our clients across Alabama, and to have been a critical part of the behavioral health service delivery system across our great state. At the request of Blue Cross/Blue Shield, we opened offices throughout Alabama. Unfortunately, due to a decrease in funding from Blue Cross/Blue Shield of Alabama and a change in its model of providing behavioral health, we are not able to continue our mission. We would have liked to have given both our patients and our employees more notice of our closure, but this was not possible under the circumstances. APS is making every effort to provide care and transition patients to other providers and our own providers who join or develop their own practices. APS is cooperating fully with other organizations to facilitate the resolution of this intense period.”
APS did not mail out letters.  We’re told there were several copies of this two-page letter available for patients to read in the Madison facility [page 1, page 2]. WHNT News 19 is told several doctors at the Madison branch will form their own practice.
A major resource lost from my wife’s hometown area. This is the inevitable consequence of the state of medical reimbursement for mental health services these days. I’m surprised it’s taken so long to arrive, particularly in poor states with ultra-conservative legislatures and impoverished Medicaid and third-party systems. Programs training mental health workers in these areas [like where I live now] would do well to only accept applicants who are independently wealthy do-gooders…
Mickey @ 1:32 PM

a hole in the system…

Posted on Thursday 5 February 2015


Institute for Safe Medication Practices
QuarterWatch
Monitoring FDA MedWatch Reports
January 28, 2015

Executive Summary
The U.S. Food and Drug Administration’s Adverse Event Reporting System [FAERS] — based on MedWatch reports – is the government’s primary safety surveillance system designed to identify harms from therapeutic drugs. For the last six years these vital data have formed the core of ISMP’s QuarterWatch™ drug safety reports. In this issue we decided to look closely at the system itself. Our conclusion: it seems clear that this drug safety monitoring system is in need of modernization. It suffers from a flood of low quality reports from drug manufacturers and has not yet been updated for the changing environment in which drugs are marketed to health professionals and consumers. We discuss key problems below and offer some recommendations as an organization that relies heavily on data collected through FAERS. This issue of QuarterWatch includes two recently released calendar quarters of FAERS data, from 2013 Q4, and 2014 Q1. To provide a broader perspective, the main analysis focuses on the 12 months ending with 2014 Q1, and includes all adverse event reports received by the FDA in that one-year period. Previous issues of QuarterWatch have focused on a subset of these case reports, those with a serious outcome and reported by patients in the United States…
The recent announcement that the NIH and FDA intend to beef up clinicaltrials.gov and insist that it be used as intended has been welcome news and we’re eager to see how their resolve is implemented [see ]. But this is equally, if not more important. The original charge of the FDA was SAFETY – Adverse Events cum Serious Adverse Events in modern parlance. And that’s as it should be. An individual doctor can’t possibly discover adversity until it happens, and the incidence of dangerous toxicity is often low enough to pop up late in a practitioner’s use of the drug. Clozaril is a good example. It’s a hell of a fine antipsychotic, the best there is. I’ve only seen it used a few times in desperately ill cases that were unresponsive to any usual measures, and it lived up to expectations in all of the cases I’ve seen. But one case of fatal agranulocytosis is enough to remove it from a doctor’s personal formulary. Fortunately, the frequent monitoring of blood counts seems to give ample warning for preventive discontinuation, but the possibility of an ominous outcome is still always in mind when the drug is used. There are innumerable examples throughout medicine where infrequent cataclysmic toxicity is what rules drug usage. EFFICACY was an FDA add-on in the 1960s after Thalidomide, an irony in that Thalidomide is, by report, one of the most effective antiemetics around and is still used where there’s no chance of fetal exposure. In summary, there’s no possibility of a risk/benefit ratio without both a numerator [SAFETY] and a denominator [EFFICACY]. And an accurate risk/benefit ratio is part of every prescription written, whether it’s conscious or not. It’s part of the medical auto-pilot, or at least it should be.

One of the important dimensions of the risk/benefit equation is TIME. If you are rendered totally miserable by a case of poison ivy, I can radically [and safely] improve your lot with a short, but generous tapering course of corticosteroids – the powerful suppressor of inflammation. But I’ve got to be sure you don’t have a condition like a Herpes eye infection that can be exacerbated by even a short course. The same with an asthma attack that doesn’t respond to bronchodilators. You might say time is on your side, because the diseases themselves are time-limited. But if you show up with a generalized case of Rheumatoid Arthritis, the corticosteroids are a double-edged sword. The disease is going to be around for a while – long enough for the prodigious downside of chronic steroid treatment to raise its ugliest of heads. So the job is to look for a long term treatment from Day 1, and only use steroids up front, if at all. The same thing is true of anti-anxiety drugs or narcotic analgesics. Benzodiazepines are the cat’s meow for a crisis state with unmanageable anxiety, but danger, danger for the patient with a lifelong anxiety disorder because of tolerance and addiction ["if one is good, two is better"]. The same goes for narcotics. Just right for a heart attack or a kidney stone, but danger, danger, danger for some chronic back and other pain problems. Once you know they work, it’s harder and harder not to take them, and then you enter the nightmare of addiction.

And even if our short term Clinical Trials are conducted and analyzed thoroughly and honestly, they tell us little about long term use. That’s a real problem for psychoactive drugs [all of them]. It’s also a problem for many treatments of medical conditions that stick around for a long time [or forever]. So we need to know about the long term toxicity, and our Clinical Trials can’t tell us – emphasis on can’t. So we need a long term monitoring system to give us the information we need, and we just don’t have it. The current FDA MedWatch Report System just doesn’t seem to be it. Good for them that they noticed.

Besides a reporting system that works, identification of adverse effects is often a problem in and of itself, particularly in a poly-medicated world. You can’t report a side effect to a drug if you don’t know it’s a side effect. I was once called to see a patient in the hospital to be worked up for a neuromuscular problem with periodic extreme weakness. A morning muscle biopsy had been aborted in the face of him having a panic attack. He was a fit male lying in the bed with a wet towel on his forehead in an obvious terror state – very hard to interview. He reported episodes of extreme weakness, mostly at work, worsening over several months. He worked in a factory that made electrical cables, the big kind that are strung from tower to tower across the landscape. He ran the last machine in the process that wrapped the twisted cable onto huge spools for storage and transport. It was all done in a big un-airconditioned warehouse that was like an oven in the Georgia summers. The punchline was, "Doc. I feel like if I could just sweat, I’d be all right!" Flash back to a routine physical the winter before when he had mild hypertension and was started on medications. After trying several different drugs he couldn’t tolerate, he saw a specialist and was put on a new drug, Inderal, brand new at the time. It’s a Norepinephrine Beta-Blocker that incidentally turns off the sweating process. So he was right. And his panic disorder? As a boy, he had witnessed the slow demise of a favored uncle from Lou Gehrig’s Disease, which is what he privately thought he had. His Blood Pressure? "White Coat Syndrome." It was normal when taken by his wife at home. Once we knew what was wrong, the cure was simple. Another example close at hand is the frequent misinterpretation of antidepressant withdrawal symptoms as a recrudescence of the original depression or the emergence of an anxiety syndrome, and the problem medication is restarted. Unfortunately, that maneuver works and perpetuates the cause. So side effects, adverse events, are difficult to spot.

In a world where pharmaceutical companies minimize adverse effects, third party payers pay for drive-by doctors’ visits, and beautiful people parade across our television screens during the mumbled warnings, collecting adverse events with an accurate numerator, denominator, and time-marker is no trivial task. David Healy has modeled a shot at this with his RxISK site [DavidHealy.org], but it, too, has a voluntary IN and OUT and lacks official conduits to the Agencies that matter. But it seems to me that the FDA has fallen down on this part of their mandate. I think a place to start a pilot project would be adding a pharmacy prescription-based system, periodically querying ongoing prescriptions and those that go unrefilled for selected medications. But this is not close to my area of expertise. I know we need to make it both easy and desirable for doctors and patients to report their adverse experiences without adding to the CYA or silly screening burden of doctors’ visits.
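To make the pharmacy-based pilot idea above a little more concrete, here is one hypothetical sketch in Python – an invented data model, not any real pharmacy or FDA system – of the simplest version: flagging prescriptions that go unrefilled past their expected days of supply, as candidates for a follow-up query about why the patient stopped:

```python
# Hypothetical sketch of a pharmacy refill-gap monitor. The data model
# (a dict of patient id -> fill dates) is invented for illustration.
from datetime import date

def flag_lapsed(fills, days_supply=30, grace_days=15, today=None):
    """Return ids whose latest fill is older than the days of supply
    plus a grace period -- i.e., prescriptions that went unrefilled."""
    today = today or date.today()
    lapsed = []
    for pid, dates in fills.items():
        last_fill = max(dates)
        if (today - last_fill).days > days_supply + grace_days:
            lapsed.append(pid)
    return lapsed

fills = {"A": [date(2015, 1, 2), date(2015, 2, 1)],  # refilling on time
         "B": [date(2014, 11, 5)]}                   # stopped refilling
print(flag_lapsed(fills, today=date(2015, 2, 5)))    # -> ['B']
```

A real system would obviously need the drug name, the actual days-supply from the prescription, and a channel back to the prescriber or patient – the point is only that the raw signal [a refill that never came] is already sitting in pharmacy records.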
    The last time I had my blood drawn, I was asked if I’d heard voices telling me to hurt myself or others in the last three weeks [among other things]. I was tempted to say, "No, it’s been well over a month."
A doctor’s medical auto-pilot will hold a lot of information, but it needs to be programmed. The task ahead on this problem is a big [and important] job for the FDA. It’s what CME [Continuing Medical Education] was originally designed to pass on…
Mickey @ 12:09 PM