meta meta meta meta meta meta meta…

Posted on Saturday 6 September 2014

Last time around [note to self…], I mentioned a Cochrane Review [also published in the Lancet] of head to head studies of the antidepressants. This same group also did a meta-analysis for each of the major antidepressants. This is the one they did for Zoloft® [Sertraline]:
by Cipriani A, La Ferla T, Furukawa TA, Signoretti A, Nakagawa A, Churchill R, McGuire H, and Barbui C.
Cochrane Database Systematic Reviews. 2010 [4]:CD006117.

BACKGROUND: The National Institute for Health and Clinical Excellence clinical practice guideline on the treatment of depressive disorder recommended that selective serotonin reuptake inhibitors should be the first-line option when drug therapy is indicated for a depressive episode. Preliminary evidence suggested that sertraline might be slightly superior in terms of effectiveness.
OBJECTIVES: To assess the evidence for the efficacy, acceptability and tolerability of sertraline in comparison with tricyclics [TCAs], heterocyclics, other SSRIs and newer agents in the acute-phase treatment of major depression.
SEARCH STRATEGY: MEDLINE [1966 to 2008], EMBASE [1974 to 2008], the Cochrane Collaboration Depression, Anxiety and Neurosis Controlled Trials Register and the Cochrane Central Register of Controlled Trials up to July 2008. No language restriction was applied. Reference lists of relevant papers and previous systematic reviews were hand-searched. Pharmaceutical companies and experts in this field were contacted for supplemental data.
SELECTION CRITERIA: Randomised controlled trials allocating patients with major depression to sertraline versus any other antidepressive agent.
DATA COLLECTION AND ANALYSIS: Two review authors independently extracted data. Discrepancies were resolved with another member of the team. A double-entry procedure was employed by two reviewers. Information extracted included study characteristics, participant characteristics, intervention details and outcome measures in terms of efficacy [the number of patients who responded or remitted], acceptability [the number of patients who failed to complete the study] and tolerability [side-effects].
MAIN RESULTS: A total of 59 studies, mostly of low quality, were included in the review, involving multiple treatment comparisons between sertraline and other antidepressant agents. Evidence favouring sertraline over some other antidepressants for the acute phase treatment of major depression was found, either in terms of efficacy [fluoxetine] or acceptability/tolerability [amitriptyline, imipramine, paroxetine and mirtazapine]. However, some differences favouring newer antidepressants in terms of efficacy [mirtazapine] and acceptability [bupropion] were also found. In terms of individual side effects, sertraline was generally associated with a higher rate of participants experiencing diarrhoea.
AUTHORS’ CONCLUSIONS: This systematic review and meta-analysis highlighted a trend in favour of sertraline over other antidepressive agents both in terms of efficacy and acceptability, using 95% confidence intervals and a conservative approach, with a random effects analysis. However, the included studies did not report on all the outcomes that were pre-specified in the protocol of this review. Outcomes of clear relevance to patients and clinicians were not reported in any of the included studies.
Looking at the full meta-analysis, I was awed by the detail and precision [as I always am with a Cochrane Systematic Review]. Everything possible is documented. It’s an all-volunteer army of independent scientists and, as far as I’m concerned, it’s on the side of truth, justice, and the scientific way. As I looked it over, I started making some notes and got a bit carried away – producing my first ever 1boringoldman meta-meta-analysis. I make no claim of thoroughness. They had 59 studies and looked at everything possible. I dropped the ones on drugs I’d never heard of and ended up with only 42. I only looked at one parameter, the Odds Ratio comparing "Responders" in each study [in the OR column, a value <1 favors Zoloft® and a value >1 favors the Comparator drug]. The p column is the significance of the OR for that study. The Funding column is who funded the study – if it could be determined. That’s it [maybe I ought to call it a mini-meta]:
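The arithmetic behind the OR and p columns can be sketched in a few lines of Python – the responder counts below are hypothetical, not taken from any of the 42 studies:

```python
import math

def odds_ratio(sert_resp, sert_non, comp_resp, comp_non):
    """Odds ratio of response, comparator vs sertraline, so a value < 1
    favors sertraline. Two-sided p from a Wald z-test on the log OR.
    Counts are hypothetical -- for illustration only."""
    or_ = (comp_resp / comp_non) / (sert_resp / sert_non)
    se = math.sqrt(1/sert_resp + 1/sert_non + 1/comp_resp + 1/comp_non)
    z = math.log(or_) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return or_, p

# e.g. 60/100 responders on sertraline vs 50/100 on a comparator:
or_, p = odds_ratio(60, 40, 50, 50)   # OR ~ 0.67 (favors sertraline), p ~ 0.16
```

Even a 10-point difference in response rate in a 200-patient trial can miss p<0.05 – which turns out to be the shape of most of the rows in the table.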

So, first off, why did I do a mini-meta-analysis? In order of importance: It rained all afternoon. The one television program we’d scheduled to tape didn’t [tape]. And I wanted to chase my questions from before – "Am I suggesting that the ghostwritten literature on Zoloft® is extensive enough or distorted enough to skew a meta-analysis of all published head-to-head clinical trials? I don’t know that, so I guess the answer is currently unknown." and "There has to be some explanation for the discrepancy between the lackluster FDA Approval data and its glowing performance in the literature [and in the marketplace]". So my plan was simple – I would look at the industry funded studies compared to the others to see if that explained the difference. Pretty clever in my opinion. The problem was that there wasn’t any other category. That was a myth in my mind. My tally was:
Pfizer 15/42, "?" 13/42, "industry" 5/42, Lilly 4/42, GSK 2/42, Forest 2/42, "none" 1/42
The "?"s were ones where neither the authors nor I could figure out the funding. The ones that say "industry" were ones that the authors had copies of but I couldn’t get to. A lot of the "?"s were in the Journal of Clinical Psychiatry so I think it would be a good guess they were industry funded [but I was being good like the Cochrane people]. Reading the abstracts of the "?"s and "industry"s, one could easily guess the funding source [but again, I was being good]. So one thing I learned from doing this is that my fantasy that at some magic year in the past they started requiring declarations of funding source was more wishful thinking than reality. Nowadays, if the study is registered on, you can figure out funding. But many of these trials antedate

On my first run through, I didn’t have the p column. But even though the Sertraline studies won the OR face-off 64% of the time, those weren’t very impressive numbers. So I went back and added the p values of the Odds Ratios. Well, that was instructive:
3/42 were significant [p<0.05], 8/42 were unknown, and 31/42 were NOT significant
I was kind of surprised. I guess that’s why Cipriani et al said, "a trend in favour of sertraline over other antidepressive agents." I didn’t compile the Adverse Events or look much at the other efficacy measures. But I’d had enough. The biggest lesson in my little exercise is right there at the end of the abstract: "Outcomes of clear relevance to patients and clinicians were not reported in any of the included studies." 42 studies with eight or nine thousand subjects, and they were jockeying for market share [apparently successfully]…
Mickey @ 12:59 PM
Filed under: politics
note to self…

Posted on Friday 5 September 2014

My last post about Zoloft® and its approval [an echo that needs to keep reverberating…] got me thinking about a number of things. In the UPDATE, I finally found that Laura A. Plumlee et al. v. Pfizer had been denied on a technicality for the second time just this week. I also found a Louisiana suit filed on the same grounds:
Courthouse News
October 29, 2013

Pfizer defrauded the public about its blockbuster antidepressant Zoloft by writing its own articles about it for medical journals and paying medical researchers to put their names on them, in a brazen campaign of "fraudulent and wanton marketing, selling and labeling," Louisiana’s attorney general claims in court. Attorney General Buddy Caldwell claims Zoloft is barely more effective at treating depression than a placebo, but Pfizer has persuaded doctors and consumers otherwise. In its lawsuit in East Baton Rouge Parish Court, the state claims Pfizer engaged in "false, misleading, unfair, and deceptive acts in the marketing, promotion, and sale" of Zoloft, affecting the elderly, disabled and "most needy" Louisiana citizens covered by the state’s Medicaid program…
Long before Zoloft was approved by the FDA, Pfizer knew it had "serious issues with efficacy" because in early Zoloft trials, the placebo group actually had better results, the state claims. "These early trials showed that ‘placebo still seems to be the most effective group’ and that "there is still no striking evidence of beneficial drug effect with placebo often being the superior treatment,’" the complaint states. "Nonetheless Pfizer chose to go forward in attempting FDA approval."

The attorney general claims that to do this, Pfizer published only information that pertained to Zoloft efficacy, and suppressed conflicting studies. Pfizer then engaged in a "ghostwriting program to misleadingly enhance Zoloft’s credibility," the lawsuit states. "Ghostwriting is a process where someone with a vested interest in an article, like Pfizer, that does not want their association with the article to be known, provides a written draft to an author who then publishes the article under that author’s name," the complaint states. "The published article contains no express or implied association with the interested person – Pfizer’s involvement in drafting the article is unknown to the public. Not surprisingly, ghostwritten articles tout the benefits and efficacy of the drug in question."

In fact, the state claims, Pfizer realized it could ensure Zoloft’s success through "manufacturing ‘research’ and articles that enhance Zoloft’s safety and credibility." Pfizer, or a company hired by Pfizer, would write a study specifically designed to showcase Zoloft’s effectiveness, and Pfizer would then pay prominent members of the medical field to put their name on the articles, and to "ultimately conceal all Pfizer involvement," the complaint states. "Publication of clinical findings is the ultimate basis for treatment decisions; thus Pfizer’s misleading publications regarding Zoloft efficacy are a key component of its fraudulent scheme," the attorney general says.

"An internal Pfizer document demonstrates its ghostwriting and selective publication scheme in full effect," the complaint states. "First, the document clearly reveals the intent to manipulate inefficacy results in a published manuscript: ‘… but now we need some help in dealing with the most important issue … i.e. the huge placebo response in the continuation phase which wiped out the significant superiority of Zoloft at six weeks.’ "The email goes on to list a number of ways to deal with the placebo response, including ‘using less stringent criteria for relapse’ and the suggestion that ‘Table III certainly must be deleted.’ Lastly, the email requests ‘the list of French investigators identifying the proposed authors. [Emphasis added.]

"Pfizer’s ghostwriting operation and its selective publication of data, prevented healthcare providers, consumers, and ultimately the State of Louisiana from obtaining accurate information regarding the efficacy of Zoloft. Pfizer’s scheme directly influenced the prescribing practices of healthcare providers through its misleading and inaccurate information bolstering Zoloft’s efficacy"…
I don’t know if Louisiana v. Pfizer was piggy-backed onto Plumlee v. Pfizer, or whether it too is blowing in the wind [Note to self: Find out]. But moving right along, George Dawson of Real Psychiatry commented on my last post, bringing up a large meta-analysis published in 2009 that looked at head-to-head studies of the antidepressants [studies comparing multiple drugs] and picked Zoloft® as the first-line drug [published after Zoloft's patent expired]:
by Cipriani A, Furukawa TA, Salanti G, Geddes JR, Higgins JP, Churchill R, Watanabe N, Nakagawa A, Omori IM, McGuire H, and Tansella M, and Barbui C
Lancet. 2009 373[9665]:746-58.
BACKGROUND: Conventional meta-analyses have shown inconsistent results for efficacy of second-generation antidepressants. We therefore did a multiple-treatments meta-analysis, which accounts for both direct and indirect comparisons, to assess the effects of 12 new-generation antidepressants on major depression.
METHODS: We systematically reviewed 117 randomised controlled trials [25 928 participants] from 1991 up to Nov 30, 2007, which compared any of the following antidepressants at therapeutic dose range for the acute treatment of unipolar major depression in adults: bupropion, citalopram, duloxetine, escitalopram, fluoxetine, fluvoxamine, milnacipran, mirtazapine, paroxetine, reboxetine, sertraline, and venlafaxine. The main outcomes were the proportion of patients who responded to or dropped out of the allocated treatment. Analysis was done on an intention-to-treat basis.
FINDINGS: Mirtazapine, escitalopram, venlafaxine, and sertraline were significantly more efficacious than duloxetine [odds ratios [OR] 1.39, 1.33, 1.30 and 1.27, respectively], fluoxetine [1.37, 1.32, 1.28, and 1.25, respectively], fluvoxamine [1.41, 1.35, 1.30, and 1.27, respectively], paroxetine [1.35, 1.30, 1.27, and 1.22, respectively], and reboxetine [2.03, 1.95, 1.89, and 1.85, respectively]. Reboxetine was significantly less efficacious than all the other antidepressants tested. Escitalopram and sertraline showed the best profile of acceptability, leading to significantly fewer discontinuations than did duloxetine, fluvoxamine, paroxetine, reboxetine, and venlafaxine.
INTERPRETATION: Clinically important differences exist between commonly prescribed antidepressants for both efficacy and acceptability in favour of escitalopram and sertraline. Sertraline might be the best choice when starting treatment for moderate to severe major depression in adults because it has the most favourable balance between benefits, acceptability, and acquisition cost.
It is an extensive review – in fact, part of a family of publications in journals and the subject of several Cochrane Systematic Reviews by this group analyzing the head-to-head antidepressant studies. It seems to be well conducted by a credible team. But there’s another side to the story. It relies on published papers. And a whole lot of those published papers were funded by Pfizer and prepared by Current Medical Directions, a medical writing firm:

BACKGROUND: Changes in the character of medical authorship.
AIMS: To compare the impact of industry-linked and non-industry-linked articles.
METHOD: We compared articles on sertraline being coordinated by a medical writing agency with articles not coordinated in this way. We calculated numbers of Medline-listed articles per author, journal impact factors, literature profiles and citation rates of both sets of articles.
RESULTS: Non-agency-linked articles on sertraline had an average of 2.95 authors per article, a mean length of 3.4 pages, a mean Medline listing of 37 articles per author [95% CI 27-47] and a mean literature profile of 283 per article [95% CI 130-435]. Agency-linked articles on sertraline had an average of 6.6 authors per article, a mean length of 10.7 pages, a mean Medline listing of 70 articles per author [95% CI 62-79] and a mean literature profile of 1839 per article [95% CI 1076-2602]. The citation rate for agency articles was 20.2 [95% CI 13.4-27.0] and for non-agency articles it was 3.7 [95% CI 3.3-8.1].
CONCLUSIONS: The literature profiles and citation rates of industry-linked and non-industry-linked articles differ. The emerging style of authorship in industry-linked articles can deliver good-quality articles, but it raises concerns for the scientific base of therapeutics.
Am I suggesting that the ghostwritten literature on Zoloft® is extensive enough or distorted enough to skew a meta-analysis of all published head-to-head clinical trials? I don’t know that, so I guess the answer is currently unknown. But after reading the Louisiana suit and this article, I’m curious [Note to self: Look for the emails and documents referenced above]. There has to be some explanation for the discrepancy between the lackluster FDA Approval data and its glowing performance in the literature [and in the marketplace]…
Mickey @ 2:47 PM
Filed under: politics
an echo that needs to keep reverberating…

Posted on Thursday 4 September 2014

Dr. Roy Poses of Healthcare Renewal often writes about a concept – the anechoic effect [see themes…]. We all know about it. Some big story comes along and there’s a big reaction, outrage all around, but then interest peters out and it’s forgotten – worse, nothing is done about it. It happens all the time. And in keeping up with the antics of the pharmaceutical companies, it’s the rule rather than the exception. There ought to be a registry of things to keep on the front burner. In my case, the registry is some phrases scratched on the back of a coffee-stained envelope pinned to the wall. One of them says "Plumlee – Zoloft?" Back in the beginning of 2013, I ran across a suit, Laura A. Plumlee et al. v. Pfizer [see a wide net…], that was intriguing. It alleged that Zoloft didn’t work and asked that Pfizer refund the money to those taken in by the drug’s ads. At first, it seemed far-fetched, but not after I read the case. So I went looking for the NDA on the FDA site, but it wasn’t there. So I submitted an FOIA request to the FDA, and when it showed up, it gave me plenty to write about:

They submitted six studies [only one made the grade, and it was a very weak showing]:

Placebo Controlled Clinical Trials
protocol 103 – outpatient, fixed dose – questionable
protocol 101 – inpatient, fixed dose
protocol 310 – inpatient, fixed dose
protocol 104 – outpatient, titrated dose
protocol 315 – outpatient, titrated dose
protocol 320 – outpatient, open label, relapse whatever

The FDA reviewers did not recommend approval. The committee was on its way to denying approval when the director of the FDA’s neuropharmacology division [Dr. Paul Leber], who had assured Pfizer he could get it approved, entered the discussion with a speech and, as he predicted, he got "it through." It was hardly an exemplary day for the FDA. Reading it, I could easily see why the suit was filed. The plaintiff was right on target. Oh, by the way – it had already been turned down in Europe, but the FDA committee didn’t know that [because Pfizer didn't tell them].

This graph shows the number of prescriptions written per year. Zoloft passed Prozac in 2000 [about halfway through its patent life], and it continued to dominate market share until going generic in 2006 [when it was replaced at the top by generic sertraline]. It was a $30 B drug:
I’ve checked along the way occasionally, but true to the anechoic effect’s power, I missed this report in March and only saw it on a back-of-the-envelope registry inventory this morning [sorry about the length, but I thought it deserved a full read]:
Lawyers and Settlements
by Gordon Gibb
March 24, 2014

A proposed Zoloft class-action lawsuit alleging Zoloft is a defective drug because it offers little more efficacy than a placebo, or so it is alleged, was recently tossed by a federal judge due to a time-barring issue and other legal implications. However all is not lost; the presiding magistrate left the door open a crack for a possible continuation of the complaint, with some revisions.

In Plumlee v. Pfizer Inc., Case No. 5:13-cv-00414, in the US District Court for the Northern District of California, plaintiff Laura Plumlee took Zoloft manufacturer Pfizer to task for marketing a drug that was alleged to be ineffective, with questionable efficacy, due to a claim that most clinical trials found that Zoloft was no more effective than a placebo, or so Plumlee claimed. Her lawsuit alleges that Pfizer purposely omitted, in Zoloft labeling, any studies that showed Zoloft to be ineffective, while favoring studies that showed Zoloft was, indeed, more effective than a placebo. Plumlee also alleged that Pfizer’s marketing and advertising was also misleading in touting Zoloft, an antidepressant, as effective.

However, Plumlee’s claim was dismissed not on her argument of effectiveness, but due to time barring. It has been reported that Plumlee brought her defective drug lawsuit under two statutes observed by the state of California: that of the Unfair Competition Law, and the Consumer Legal Remedies Act and False Advertising Law.
Was plaintiff’s claim time-barred?
The two aforementioned statutes, under California law, carry limitations of four years and three years, respectively. In her ruling dismissing the plaintiff’s claim, US District Judge Lucy Koh ruled that Plumlee’s complaint went beyond the limitation boundaries, given the plaintiff’s claim that she last used Zoloft in 2008 but waited until January 2013 to bring her lawsuit.

Plumlee challenged that such limitations were tolled until 2012, the point at which Plumlee first discovered that Zoloft had been misrepresented. The judge, however, held that Plumlee’s claim to discovering Zoloft’s inadequacies in “early 2012” was too general a frame of time. Judge Koh also was not satisfied with the detail supporting the time and surrounding circumstances of her discovery.

To that end, the judge pointed to the existence of various scientific articles – cited by the plaintiff – that had been published long before Plumlee brought her drug defects lawsuit, and thus did not accept the plaintiff’s claim. However, the judge left the door open.
Hope springs eternal…
All is not lost for this Zoloft defective medical products action
In dismissing the plaintiff’s claim, Judge Koh is allowing Plumlee to amend her complaint going forward. It is telling, as well, that the California judge ruled that Pfizer has the freedom to access certain aspects of the plaintiff’s medical history. Plumlee had sought to block Pfizer’s access to her medical records. A previous magistrate’s ruling that allowed Pfizer access was supported by Judge Koh on grounds that Plumlee had waived any privilege of protecting her medical history when she argued that the statutes of limitations were tolled due to her learning of Zoloft’s alleged deficiencies only in early 2012.

Plumlee, according to various reports, had sought to represent a proposed class of plaintiffs who may have used Zoloft from the point at which it was introduced to market in 1991, through to present day. However, the judge suggested that Plumlee may not be typical of the class, given that she claims to have used Zoloft for a period of three years even though it did not appear to be working for her. Records also demonstrated that the lead plaintiff relied more upon Zoloft marketing and advertising, than the advice of her doctor.

Pundits suggest that in leaving the door open, the judge feels the proposed class-action lawsuit may have merit, in spite of deficiencies exhibited by Plumlee’s claim. The potential, thus, is for Plumlee to amend her claim that satisfies time-barred limitations and other deficiencies as articulated by the presiding judge. Could the proposed class-action lawsuit proceed with a different lead plaintiff?

Harmful drugs are often shown to carry risks, in spite of the position of the US Food and Drug Administration (FDA) that holds that a drug’s benefits outweigh the risks for the class or constituency of patients to which the drug is targeted. In the same vein, however, drug defects can also include deficiencies that suggest a drug is not worth the financial outlay, either by an individual or group, in exchange for potentially limited effectiveness.

The aforementioned Zoloft lawsuit alleges Zoloft does not live up to its promises. The proposed class action, alleging defective medical products (Zoloft, as ineffective), could continue with amendments – but perhaps not in its present form.
I can’t find anything else. I’ll write Baum·Hedlund·Aristei·Goldman, the firm handling the case, to see what I can find out. This is an echo that needs to keep reverberating…

UPDATE: Nosing around looking for email addresses, I ran across this:
By Sindhu Sundar
September 02, 2014

Pfizer Inc. on Friday defeated for the second time allegations that it greatly exaggerated the efficacy of its antidepressant Zoloft, when a federal judge in California ruled the proposed class action claims are time-barred and dismissed the suit with prejudice.

U.S. District Judge Lucy H. Koh, who in February had dismissed Laura Plumlee’s suit but allowed the plaintiff to amend her suit to address the court’s timeliness concerns, granted Pfizer’s motion to dismiss the suit with prejudice Friday…

Plumlee last bought Zoloft or its generic equivalent in 2008, and by the time she brought her suit in 2013, she had exceeded by at least seven months the statutes of limitations under the various California consumer protection laws she invoked, Judge Koh ruled.

"The court finds that each of plaintiff’s claims is time-barred and that despite being granted an opportunity to amend her complaint, plaintiff has still not met her burden of showing that the statutes of limitations have been tolled by the delayed discovery rule," Judge Koh said in her opinion.

Plumlee, who had filed her original suit in January 2013, claimed that she did not learn about Pfizer’s alleged over-representations about Zoloft’s effectiveness until she watched a "60 Minutes" segment in May 2012, according to the order.

"We are pleased with the decision and believe the court applied California law correctly in ruling to dismiss the case with prejudice," Pfizer spokesman Steven Danehy said in a statement Tuesday. "Pfizer has always believed that the plaintiff’s amended complaint fails to adequately address the deficiencies of the original complaint, which was previously dismissed."

An attorney for Plumlee could not immediately be reached for comment Tuesday.

Plumlee had sought to represent a proposed class of patients who used Zoloft made by Pfizer between the drug’s launch date in 1991 through the present. She claimed that Zoloft’s labeling failed to mention the studies showing it to be ineffective, that Pfizer favored researchers who showed Zoloft to be effective, and that the company’s advertisements misleadingly touted the drug as effective, among other allegations.

She claimed for instance, that Pfizer buttered up doctors with blandishments including ski trips and "fancy" meals, to encourage them to prescribe Zoloft, according to court documents.
Damn! And look at the date! I must’ve heard it in my sleep last night. Back to the drawing board…
Mickey @ 12:08 PM
Filed under: politics
along the road…

Posted on Wednesday 3 September 2014

Irving Kirsch published an article in 2008 [Initial Severity and Antidepressant Benefits: A Meta-Analysis of Data Submitted to the Food and Drug Administration] that concluded:
Drug–placebo differences in antidepressant efficacy increase as a function of baseline severity, but are relatively small even for severely depressed patients. The relationship between initial severity and antidepressant efficacy is attributable to decreased responsiveness to placebo among very severely depressed patients, rather than to increased responsiveness to medication.
[see first rate madness…]. His 2009 book [The Emperor's New Drugs: Exploding the Antidepressant Myth] suggested that the antidepressants are simply powerful placebos and offered a strong critique of the "chemical imbalance theory."

This new article capitalizes on the 2004 court order that GSK must post all of its clinical trials on a publicly available web site [GSK Clinical Study Register]. The amount of information on each study is highly variable – from the complete CSR and IPD for Study 329 [the proband for the court order] to short summaries for many other Clinical Trials. But still, we know all of the trials, so the unpublished-studies problem evaporates. While the variable amount of data limits what can be done, this article looks at every RCT done by GSK on Paroxetine that used the Hamilton Rating Scale for either Anxiety [HRSA] or Depression [HRSD], and limits its scope to efficacy [not adverse events]. The key parameter, Effect Size [Cohen's d], was calculated from reported means, standard deviations, and numbers of subjects – not derived from full data sets. In spite of these limitations, the study seems well done and has plenty to say:
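The effect-size calculation they describe – Cohen's d from the reported summary statistics rather than from patient-level data – amounts to something like this [the numbers below are made up for illustration, not taken from the paper]:

```python
import math

def cohens_d(mean_drug, sd_drug, n_drug, mean_pbo, sd_pbo, n_pbo):
    """Cohen's d from summary statistics: the difference in mean change
    scores divided by the pooled standard deviation.
    All values here are hypothetical -- for illustration only."""
    pooled_sd = math.sqrt(((n_drug - 1) * sd_drug**2 + (n_pbo - 1) * sd_pbo**2)
                          / (n_drug + n_pbo - 2))
    return (mean_drug - mean_pbo) / pooled_sd

# e.g. HRSD improves 10.5 points on drug vs 8.0 on placebo, SD 8, 150/arm:
d = cohens_d(10.5, 8.0, 150, 8.0, 8.0, 150)   # d ~ 0.31, a "modest" effect
```

A 2.5-point Hamilton difference against an 8-point spread is the kind of thing "modest" means in practice.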

A Meta-Analysis of Change on the Hamilton Rating Scales
by Michael A. Sugarman, Amy M. Loree, Boris B. Baltes, Emily R. Grekin, and Irving Kirsch
PLoS·ONE. 08/27/2014 DOI:10.1371/journal.pone.0106337

Background: Previous meta-analyses of published and unpublished trials indicate that antidepressants provide modest benefits compared to placebo in the treatment of depression; some have argued that these benefits are not clinically significant. However, these meta-analyses were based only on trials submitted for the initial FDA approval of the medication and were limited to those aimed at treating depression. Here, for the first time, we assess the efficacy of a selective serotonin reuptake inhibitor [SSRI] in the treatment of both anxiety and depression, using a complete data set of all published and unpublished trials sponsored by the manufacturer.
Methods and Findings: GlaxoSmithKline has been required to post the results for all sponsored clinical trials online, providing an opportunity to assess the efficacy of an SSRI [paroxetine] with a complete data set of all trials conducted. We examined the data from all placebo-controlled, double-blind trials of paroxetine that included change scores on the Hamilton Rating Scale for Anxiety [HRSA] and/or the Hamilton Rating Scale for Depression [HRSD]. For the treatment of anxiety [k = 12], the efficacy difference between paroxetine and placebo was modest [d = 0.27], and independent of baseline severity of anxiety. Overall change in placebo-treated individuals replicated 79% of the magnitude of paroxetine response. Efficacy was superior for the treatment of panic disorder [d = 0.36] than for generalized anxiety disorder [d = 0.20]. Published trials showed significantly larger drug-placebo differences than unpublished trials [d’s = 0.32 and 0.17, respectively]. In depression trials [k = 27], the benefit of paroxetine over placebo was consistent with previous meta-analyses of antidepressant efficacy [d = 0.32].
Conclusions: The available empirical evidence indicates that paroxetine provides only a modest advantage over placebo in treatment of anxiety and depression. Treatment implications are discussed.
First, the graphs [which beg for a bit of clarification]. This one is from the studies of Paroxetine in Anxiety States that relied on the Hamilton Rating Scale for Anxiety [HRSA]. Their outcome variable, Cohen’s d, measures the strength of the drug effect, not simply its statistical significance, and plots it against the baseline severity of the HRSA:

In this case, the red and blue lines showing the changes in drug and placebo over the course of the study, and their significant change with severity, are consistent with the regression-to-the-mean error [you're on your own here]. For my purposes, ignore them. The bottom green line shows the strength of effect for the drug. It is not significantly related to the severity of the Anxiety State. The mean Cohen’s d is 0.27 [I've marked it with a horizontal arrow] [recall that a rough interpretation of Cohen's d is: 0.25 = weak effect, 0.50 = moderate effect, and 0.75 = strong effect]. From this graph, we can conclude that Paroxetine has a definite anxiolytic effect, but it’s nothing to write home about. It’s not inert, but it’s nowhere even close to a wonder drug.

Now to the meta-analysis of the Paroxetine trials in depressed patients. The opposite effect of severity on the pre-post drug and placebo response [red and blue lines] was not clear to me, but unlike the meta-analysis in 2008 of FDA Approval studies, there was not a significant effect of severity on response. The mean Cohen’s d in depression is 0.32 [I've again marked it with a horizontal arrow].

The only study in adolescents [the infamous Study 329] is also marked. The other GSK adolescent trials [published after the patent expired] used other rating systems [MADRS, K-SADS-L and CDRS-R] and were both decidedly negative [see paxil in adolescents: “five easy pieces”…].

They also assessed the impact of the trial length on the strength of the effect and found no significance:

And there’s more [as they say on those television ads]. They also looked at whether there was a difference in Cohen’s d between those studies done before FDA Approval or afterwards, and whether there was a difference between published and unpublished studies. While there were differences in the means in both cases, neither achieved statistical significance:
…the mean paroxetine-placebo effect size did not differ significantly as a function of approval status [Q[1] = 3.27, p = .077], although there was a trend towards a greater drug-placebo benefit in pre-approval trials [Pre-Approval: d = 0.41 [95% CI: 0.30,0.53]; Post-Approval: d = 0.29 [95% CI: 0.22,0.36]].
The weighted mean difference between paroxetine and placebo was not significantly different between published and unpublished trials [Q[1] = 1.50, p = .221]. Published trials [k = 16] had a weighted mean effect size of d = 0.36 [95% CI: 0.27,0.44] and unpublished trials [k = 11] had an effect size of d = 0.28 [95% CI: 0.20,0.37].
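For the curious, the Q statistic quoted above is a standard meta-analytic test of whether two pooled effect sizes differ by more than chance. A rough sketch of the computation, using only the rounded point estimates and confidence intervals reported in the excerpt [so it recovers the paper's Q[1] = 1.50 only approximately – the authors worked from unrounded study-level data]:

```python
def se_from_ci(lower, upper, z=1.96):
    """Recover a standard error from a reported 95% confidence interval."""
    return (upper - lower) / (2 * z)

def q_between(d1, se1, d2, se2):
    """Cochran's Q with 1 df for the difference between two pooled estimates."""
    return (d1 - d2) ** 2 / (se1 ** 2 + se2 ** 2)

# Reported subgroup results: published trials d = 0.36 [0.27, 0.44],
# unpublished trials d = 0.28 [0.20, 0.37].
q = q_between(0.36, se_from_ci(0.27, 0.44), 0.28, se_from_ci(0.20, 0.37))
print(round(q, 2))  # ~1.7 from the rounded CIs; the paper reports Q(1) = 1.50
```

Either way, the value sits well below the ~3.84 cutoff for significance at p = .05 on one degree of freedom, which is why the published-versus-unpublished difference doesn't reach statistical significance.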
I obviously liked this study. They used the GSK Clinical Study Register to produce some solid conclusions. Paroxetine has antidepressant qualities but is nowhere near being a powerhouse. They used the term modest to describe its effect [I guess modest is between weak and moderate], and that fits my clinical experience with the SSRIs. I thought it was surprising how consistent all of these studies really were. Eliminating publication bias gives us a fuller picture of the drug than we had before. They have a thorough discussion of the difference between statistical significance and clinical significance in the paper and I recommend it. This can’t be called a thorough vetting of Paroxetine’s efficacy, however, because they were working with GSK-derived means and variances rather than the actual Individual Participant Data [IPD] [see it matters…].

They didn’t look at the critical Adverse Event data, so this meta-analysis only addresses one side of the risk-benefit equation. They have a nice discussion of the Adverse Events in the discussion, but it is not data-based from their own work. This article shows what a little bit of Data Transparency can tell us, but it’s just not enough. There’s the company machine operating between most of this data and the raw clinical trial. Considering where we’ve lived for decades, I think it’s a landmark article, but only a marker along the road to where we need to be.

Parenthetically, I find myself thinking that this efficacy data holds for this class of drugs. I don’t prescribe Paroxetine, not because of efficacy differences, but because of its high propensity for withdrawal symptoms, reported here as 66%. So it appears to me that there are differences among these drugs in that area, but it’s just my impression and what I’ve read of the impressions from others. I look forward to the day when we have compilations we can trust on the Adverse Events for this drug [and all of our other medications]. I don’t think we really have that information in an accurate form for any of the SSRIs, SNRIs, or the Atypical Antipsychotics…
Mickey @ 9:17 PM
Filed under: politics
that maturity…

Posted on Tuesday 2 September 2014

    We shall not cease from exploration
    And the end of all our exploring
    Will be to arrive where we started
    And know the place for the first time…
    Little Gidding  T.S. Eliot 1942

I hate being so repetitive with my quotes. This one has had many re-runs here. But I guess that’s the way it is with the good ones. Maybe next time I’ll use the story of the Holy Grail or the Wizard of Oz to say the same thing. Or maybe it’s just part of the experience of being an old man, to begin to see how cyclic human life can be. I came to psychiatry interested in psychotherapy at a time of transition. The psychiatry of the time was operating at its most "eclectic," or so I thought. There were models galore – psychodynamic, biological, medical, existential, social, behavioral, etc. Come one, come all. I thought that was great, myself. And then things changed dramatically and one was apparently supposed to choose – specifically choose biomedical. So those of us who didn’t moved to the side [because there was no place else to go]. At least that’s how it seemed. But that’s ancient history, albeit my own. I sure didn’t start writing well into retirement to rehash those days. I started writing because I woke up to the fact that a dominant paradigm in psychiatry throughout my career – psychopharmacology – had been invaded by industry and was more corrupt than I could’ve imagined. So I exhumed skills from a former career in hard-science-medicine and began to look at what I consider the carnage that resulted from an academic-pharmaceutical alliance that has afflicted a too-big sector of psychiatry.

It appears I came along at another time of transition. Now, there are real moves to clean up some of the side effects that came with the neoKraepelinian revolution that swept through the specialty of psychiatry in my early days. Industry had its day in the sun and seems for the moment to be moving on to greener pastures. The Clinical Trial world, seat of some of the major corruption, is under a microscope and the target of a growing movement for Data Transparency. And people at least say "bio-psycho-social" frequently now – a term that has long been only whispered. Then today, I look over the new American Journal of Psychiatry and read this:
by Kendler KS
American Journal of Psychiatry. 2014 May 16. [Epub ahead of print]

This essay addresses two interrelated questions: What is the structure of current psychiatric science and what should its goals be? The author analyzed all studies addressing the etiology of psychiatric disorders in the first four 2013 issues of 12 psychiatry and psychology journals. He classified the resulting 197 articles by the risk factors examined using five biological, four psychological, and three environmental levels. The risk factors were widely dispersed across levels, suggesting that our field is inherently multilevel and already practicing empirically based pluralism. However, over two-thirds of the studies had a within-level focus. Two cross-level patterns emerged between 1) systems neuroscience and neuropsychology and 2) molecular or latent genetic factors and environmental risks. The author suggests three fundamental goals for etiological psychiatric research. The first is an eclectic effort to clarify risk factors regardless of level, including those assessed using imaginative understanding, with careful attention to causal inference. An interventionist framework focusing on isolating causal effects is recommended for this effort. The second goal is to clarify mechanisms of illness that will require tracing causal pathways across levels downward to biological neuroscience and upward to social factors, thereby elucidating the important cross-level interactions. Here the philosophy of biology literature on mechanisms can be a useful guide. Third, we have to trace the effects of these causal pathways back up into the mental realm, moving from the Jasperian level of explanation to that of understanding. This final effort will help us expand our empathic abilities to better understand how symptoms are experienced in the minds of our patients.

Conclusion: …A vigorous debate between different scientific perspectives on psychiatric illness is to be valued. More problematic has been our tendency to develop “fervent monism.” This position, at times strongly advocated by psychoanalysis, early biological psychiatry, social psychiatry, and most recently, molecular psychiatry, is that their approach was the only valid one. Fervent monism, especially when applied to the field of human behavior, reflects epistemic hubris. It is helpful, in concluding, to revisit an old but central question: Is there a single “best” level at which to address the causes of psychiatric illness? Do we expect that over time one specific level of explanation for psychiatric illness will “win” the scientific competition and beat out all other kinds of explanations? I think that the mere posing of this question illustrates its implausibility. We are “stuck” with the dappled causal world for psychiatric disorders. In the introductory epigraph to this essay, Chang makes a point worth re-emphasizing. It is only the immature fields of science that advocate monism. Tolerance for diversity and humility come with scientific maturity.
Saying that "risk factors were widely dispersed across levels, suggesting that our field is inherently multilevel and already practicing empirically based pluralism" may well be a bit of an exaggeration, if he’s referring to psychiatric practice or research. As an aging pluralist, it has felt and still feels pretty monistic to me – even with a break in the clouds. My point is only that it has been a long time since I’ve seen an essay in the American Journal of Psychiatry that acknowledges the "dappled causal world for psychiatric disorders" and adds "It is only the immature fields of science that advocate monism. Tolerance for diversity and humility come with scientific maturity". We could use some of that maturity…
Mickey @ 8:31 PM
Filed under: politics
it matters…

Posted on Monday 1 September 2014

I wrote this less than four years ago [selling seroquel I: background…]:
This email response to a researcher who was requesting funding from Zeneca several months after the F.D.A.’s approval of Seroquel might seem odd or even Machiavellian to a Basic Scientist, a Practicing Clinician, or a patient-to-be, but if your business is selling the product, it makes perfect sense:
Zeneca had poured years and a lot of money into getting their drug approved. Now it was time to focus on reaping the benefits of their hard work…
I remember feeling kind of shocked when I ran across it. I guess I was a "newbie" – naive to a fault. But that was just the beginning of a long series of similar disillusionments in the interim. Like most, the more I read and looked into things, the more my attention was drawn to Clinical Trials, primarily Industry-Funded Clinical Trials. When I think about it now, I can’t imagine being so gullible as to believe that Industry-Funded Clinical Trials would be on the up-and-up. I guess I thought, after all, that they were written by the upper levels of academic psychiatry, oblivious to how the game had come to be played [I really was a "newbie" back then]. Over time I realized that others already knew all of this, and I joined their growing outcry. We’re approaching a potential landmark – the European Medicines Agency will release its policy – and we’ll find out whether they will stick to their initial promise of full Data Transparency or whether they will cave in like it appeared they were going to do a few months ago. So, I thought I’d do a review of the scorecard to use when they announce that policy in early October. For many, this will be old hat. For others, it will be TMI [too much information]. But if it’s not something you know, the EMA decision will be undecipherable:
    The PROTOCOL is a formal document that lays out in detail how the study is to be conducted. It has multiple functions, but for our purposes, it’s important because there are a number of ways that a study can be skewed in favor of a drug – like picking a dose for the comparator drug that is either too low to be effective or too high and likely to cause a number of side effects and discontinuations. It’s important because it lays out the primary and secondary variables and how they will be analyzed. All of this is a priori, before the study is done. One reason is that, given enough creativity, one can frequently find significance after the fact by running tests on any and everything. By making a priori declarations of both target and technique, the PROTOCOL assures us that we’re not being taken down some after-the-fact garden path. The PROTOCOL is an essential element for Data Transparency.

    The CRFs [CASE REPORT FORMS] are the primary raw data source, and may be hundreds of pages long for each research subject. They are the various forms filled out by the study coordinators and contain intake information, the actual forms of the subjects’ tests, the trial staff’s recording of adverse events, the medication records, etc. All of this information is recorded before the blind is broken. Things like adverse reactions are recorded in plain English rather than coded by some system. This is the data in its rawest available form [see below]. This information would be required for any serious vetting of adverse events, for reasons stated below.

    The IPD [INDIVIDUAL PARTICIPANT DATA] is the compilation of the data from the CRFs in tabular form – spreadsheets in one format or another – data transposed from the CRFs and ready for analysis [separated by treatment, i.e. after the blind has been broken]. For an analysis of the efficacy data, this is all that would be required. The adverse event data is also usually in tabular form, either abbreviated or encoded in some standardized way. If there are no questions about adverse events, the IPD tables would be fine for a reanalysis. But if adverse event data is in question, the actual CRFs are required reading to avoid lost-in-translation errors, as tedious as that might be.

    The CSR [CLINICAL STUDY REPORT] is a long narrative document that tells the story from start to finish, including the results. Sometimes it contains the actual data [IPD] as an Appendix and sometimes it has only summary tables. It’s a version of the published paper in long form – typically several hundred pages. Depending on what questions are being asked and how thoroughly it includes the IPD, it might be adequate for checking a study – or it may not. It’s the report that industry would like to pass for Data Transparency, but it’s usually not close enough to the raw data to suffice.

    The published ARTICLE is what we’re used to seeing. There was a time in my lifetime when we thought that what we read in our journals was the real deal, but that time has passed. The many ways that data has been manipulated, misrepresented, jury-rigged, or seen through rose-colored glasses have become a source of daily amazement to me in my retirement years. As I’ve often said, never in my wildest dreams. There are so many examples that industry really doesn’t have much of a legitimate argument for their claim of proprietary ownership of Clinical Trial data. Their record of gross abuse of that privilege is self-indicting.

I know this is all repetitive and tedious. If you already know it, it’s boring. If you don’t know it, it’s still boring. But the European Medicines Agency’s decision and Canada’s Vanessa’s Law [see doing the right thing…] are the two concrete markers for the cause of Data Transparency. There are lots of questions about Clinical Trials and their ultimate value, but right now the point is anterior to those considerations – basic honesty in scientific reporting. The tools of statistics, analysis, and presentation have been grossly perverted too frequently, for too long.

Just having the raw data available is of no value if someone doesn’t analyze published studies that are in question. Having done some of that in the years between my naivete four years ago and the present, I can assure you that it’s not for the faint of heart. People who take on the task of vetting someone else’s data need to be well versed in modern analytic techniques, or at least willing to take the time to get up to speed. If reading this post is tedious and boring, it’s a piece of cake compared to doing a careful analysis of a Clinical Trial. But the evidence of the last quarter century dictates that there is really no other choice if physicians are to be accurately informed about the medications they prescribe and patients are to know what they’re taking. So when the EMA policy comes out next month, read it carefully to see what is actually going to be available. It matters, and the devil is in the details…

To wit:

by Ed Silverman
Aug 28, 2014

Amid ongoing debate over the extent to which clinical trial data should be divulged, a new survey finds that an overwhelming majority of members of the Royal College of Physicians in the U.K. believe that such information should be disclosed and accessible. To wit, 95 percent say all trials should be registered; 89% say increased publication of results, including those that are negative, will lead to better medicines and patient healthcare; 81% agree that a “moral duty” exists for drug makers to make completed data available to trial participants, the public and the scientific community; and 87% say increased scrutiny of data will lead to better science and research.

At the same time, 10% believe increased publication and dissemination of clinical trial results will harm commercial interests of drug makers and only 18% say that increased access to trial data will harm commercial interests. Just 5% believe companies should not be required to release clinical trial data into the public, and only 27% say publication of completed data should be linked to market authorization.

“The world has changed,” writes Keith Bragman, president of the Royal College’s Faculty of Pharmaceutical Medicine, a standards-setting body at the Royal College, in remarks accompanying the survey results. “Society now demands greater transparency in clinical trials.”

Data disclosure, you may recall, has been a contentious topic following scandals over safety or effectiveness data that was not publicly shared. The survey, which queried 430 of the faculty’s 1,500 members, comes as regulators, academic researchers and drug makers dicker over policies for releasing trial data. The issue has been particularly fraught in Europe, where the European Medicines Agency has repeatedly delayed the release of its new policy amid criticism the agency has backtracked on a previous commitment to new-found openness. Last month, the regulator indicated formal adoption is scheduled to take place at an October board meeting…
Mickey @ 10:51 PM
Filed under: politics
justification for “what they’re for”…

Posted on Monday 1 September 2014

This is an extension of the last post and its comments, specifically:
  1. "and this graph suggests that the patients essentially moved next door into our prisons"
  2. " If there is ever a place where the parable of the blind men and the elephant fits like a glove, this is it"
  3. "I begin to wonder about diagnosis. How many have psychotic illnesses? Are these the homeless chronic patients who have been picked up for minor crimes? How many are primarily substance abusers?"
  4. "the time for decrying, blaming, or ignoring this has passed"
  5. "This effort should be led by the National Institute of Mental Health and the Substance Abuse and Mental Health Services Administration"
We see the world through the lens of our own experience, our biases, our desires. What possible other sources can we rely on? So when we accuse the pharmaceutical industry of being driven by the wish to sell drugs, we are only stating the obvious. We are given to simplifying the motives of others while thinking that our own are complex, nuanced, well considered. It’s just what we do – over and over. Simplifying, discounting, blaming, even demonizing – all part of being human. And we enjoy nothing more than finding a like-minded cohort so we can all do it together. The cynical view I just expressed is itself an example of what it describes. We apply it to others, but rarely to ourselves [see #2].

I posted a couple of graphics [what they’re for…] that paint the picture that chronic mental patients are being warehoused in correctional facilities because there’s no place else for them to go in the post-deinstitutionalization era. I believe that, and I don’t think it’s a good thing. But I don’t really know how accurate those figures are, and I really don’t know what the people who compiled them consider chronic mental illness. How much is substance abuse? How much is chronic psychosis? The figures I found are mostly compiled by the correctional institutions and subject to the bias of their feeling overwhelmed. In the comments to the last post, those questions were raised. We need accurate information, which is the only way to really understand the magnitude of this problem. And insofar as I can see, we don’t have it. And because of the variability of our states and state governments, it has to be a national information-gathering effort for any accuracy.

When I read about it, there’s way too much "it has been estimated…" This is something we need to know, not estimate. And it’s science – epidemiology is what it’s called. And who should gather that kind of information? The CDC? The NIMH? SAMHSA? I expect they all have some version of that information, but it lags, and if it’s in a usable form, I can’t locate it. That’s why I say, "This effort should be led by the National Institute of Mental Health and the Substance Abuse and Mental Health Services Administration" [see #5]. Our NIMH has chased the very shaky World Health Organization data about the prevalence of mental illness and dire predictions of the future. But they’ve largely ignored this problem. And they haven’t taken the first step: defining the magnitude and nature of the problem. They’ve spent their time preoccupied with neuroscience and the monocle of psychopharmacology. The state of chronic mental health care in the US is the number one scientific question on the mental health table, and prison is the logical place to start. In the comments, George Dawson says that the NIMH is a bad choice because it is a basic science organization. I happen to think epidemiology is a basic science par excellence. Likewise, the CDC, our traditional infectious disease tracker, needs to join in the gathering and tracking, mainly because of proven expertise. The fact that we don’t know the magnitude and nuances of this problem is to our shame – all of us – thus Dr. Frances’ title "The Hall of Shame – Who Is Failing the Severely Ill?" [see #1, #3, #5].

The essence of science is to find out the answers, or the best answers, to things we don’t know. We [human-kind] don’t know how to deal with chronic psychosis effectively, and we never have. We thought we had at least separated the problem of chronic psychosis from antisocial behavior and criminality, but the charts above suggest that even that was not a solid conclusion. So I personally think we need to start where we are rather than indulge our natural propensity to ignore problems we don’t know what to do with, to blame the state of affairs on each other, or to self-righteously decry how things are without taking action [see #4].

George Dawson suggests that this problem is the result of the influence of Managed Care, or perhaps of government agencies dropping the ball. Sandra Steingard suggests the surge of substance abuse problems and an adherence to the medical model might be part of the problem. DJ Jaffe implies that the organizations that should be involved are off dealing with lesser, more lucrative issues. I happen to think all of those things, and that our governmental agencies have become playgrounds for ideologues. But it doesn’t really matter what we think is the cause of the problem [see #2]. All that actually matters is that what already looks like a massive problem is getting worse, and we don’t have a solid handle on the details needed to understand it. Maybe I’m in left field thinking existing agencies can figure out how to get us the information in a detailed and unbiased form and put us on the road to a best-case solution; maybe we need a Task Force, a Manhattan Project, a NASA…
Mickey @ 12:23 PM
Filed under: life
what they’re for…

Posted on Saturday 30 August 2014

Ever since I ran across this graph of the rates of institutionalization, I’ve been mulling over the plight of the severely mentally ill during my time in psychiatry [that faint line above the abscissa marks when I was directly involved]. Writing about it a week or so ago [functional improvement…], I called it Transinstitutionalization – a term from those days predicting that this is what would happen. It does seem naive in retrospect to think that one could Deinstitutionalize the patients in our massive State Hospital system simply by shutting it down. The planned Community Mental Health system was never fully realized, and this graph suggests that the patients essentially moved next door into our prisons:

Looking around to find the magnitude of the problem, I ended up on the National Institute of Corrections web site where I found this:

Mentally Ill Persons in Corrections

Mentally ill persons increasingly receive care provided by corrections agencies. In 1959, nearly 559,000 mentally ill patients were housed in state mental hospitals. A shift to "deinstitutionalize" mentally ill persons had, by the late 1990s, dropped the number of persons housed in public psychiatric hospitals to approximately 70,000. As a result, mentally ill persons are more likely to live in local communities. Some come into contact with the criminal justice system.

In a 2006 Special Report, the Bureau of Justice Statistics estimated that 705,600 mentally ill adults were incarcerated in State prisons, 78,800 in Federal prisons and 479,900 in local jails. In addition, research suggests that "people with mental illnesses are overrepresented in probation and parole populations at estimated rates ranging from two to four times the general population." Growing numbers of mentally ill offenders have strained correctional systems.

There are so many ways to think about this, most of them suffused with cynicism. When I’m in a cynical mood, I can look at these numbers and cast blame in all directions, but then I come up short, because I’m not really sure what to do about it either. If there is ever a place where the parable of the blind men and the elephant fits like a glove, this is it:

Everyone’s looking at the part that affects them, and the big picture gets lost in the shuffle. Like most mental health types, I tend to accept the MAD vs BAD distinction and want to separate out the mental patients and get them out of the prisons and into the community – the battle cry of my era and the Community Mental Health Movement. Allen Frances and DJ Jaffe have an excellent post up that somewhat takes that perspective [see The Hall of Shame - Who Is Failing the Severely Ill?]. I agree with their every word, but always worry that it will fall on deaf ears like it has for such a long time. I was actually impressed with some of the information and policy discussions on the National Institute of Corrections web site, as well as a report I found there [Improving Outcomes for People with Mental Illnesses under Community Corrections Supervision] focused on the parole system. They’re much more mental illness savvy than I realized.

Again, like most mental health types, I look at that pie graph up there and after I get over the magnitude of the problem, I begin to wonder about diagnosis. How many have psychotic illnesses? Are these the homeless chronic patients who have been picked up for minor crimes? How many are primarily substance abusers? I haven’t been able to find those numbers yet, but I’m still looking.

Looking at that graph at the top, I have to remind myself that it is not populations in jail or mental hospitals, it’s the rate of institutionalization. These aren’t the patients from the days of Deinstitutionalization; they’re a new generation. And it looks to me as if we have a surprisingly fixed rate of removing people from our society for one reason or another. The graph itself is from a study of violent crime – homicide [An Institutionalization Effect: The Impact of Mental Hospitalization and Imprisonment on Homicide in the United States, 1934–2001]. And what it shows is that there was a dramatic increase in the homicide rate in the 1970s and 1980s, when Institutionalization had its big dip. It’s a complex legal article, and you’ll have to read it yourself to figure out what they make of their findings:

Right now, Law Enforcement is having to carry the ball for the most important mental problems in our country. While they seem to be doing a credible job under the circumstances, it’s not what their system was designed to do. Here’s what Dr. Frances and DJ Jaffe have to say:
Dr Jaffe writes:
The bipartisan Helping Families in Mental Health Crisis Act (HR3717) has wide support among those who advocate for the 5 percent of the population with the most serious mental illnesses. But there are parts of the mental health industry that ignore the seriously ill. Over 500,000 of the most seriously ill are incarcerated or homeless, largely because the mental health industry focuses on all others.
  • Substance Abuse and Mental Health Services Administration: SAMHSA distributes over $400 million in mental health block grants to states and tells them how to spend it. But as Representative Tim Murphy noted, "SAMHSA has not made the treatment of the seriously mentally ill a priority… It’s as if SAMHSA doesn’t believe serious mental illness exists." SAMHSA encourages states to spend block grants on the highest functioning. It wants to replace the scientific medical model with their internally invented recovery model, and creates its own "illnesses" — bullying and trauma being the most recent.
  • Consumer Groups: The National Coalition for Mental Health Recovery (NCMHR) is the umbrella organization for SAMHSA-funded consumer groups like the National Empowerment Center and National Mental Health Consumers Self Help Clearinghouse. Rather than advocating for the seriously ill, they advocate for anyone with "lived experience." They believe everyone should self-direct their own care, thereby ignoring those too sick to do so.
  • Mental Health Lawyers: The Bazelon Law Center, ACLU, the National Disability Rights Network (NDRN) and State Disability Rights organizations not only ignore the most seriously ill, their actions cause harm. These non-profit law centers fight against Assisted Outpatient Treatment and creation of hospital beds for the most seriously ill thereby making incarceration inevitable for many.
  • Mental Health America: Mental Health America is a trade association for service providers. Rather than serious mental illness, MHA is "dedicated to helping all Americans achieve wellness." MHA of Essex County New Jersey is one of the few chapters that does try to help the most seriously ill.
  • National Council for Community Behavioral Health: This organization represents behavioral healthcare conglomerates. They mainly lobby for funding the Mental Health First Aid (MHFA) classes they sell. MHFA is based on the false premise that the mentally ill are so asymptomatic that special training is needed to identify them, and that once identified, services are available to refer them to. MHFA is not proven to help the seriously mentally ill.
  • National Alliance on Mental Illness: Historically, NAMI did focus on serious mental illness because it was founded by families of the very seriously ill. In 1993, NAMI argued for parity for people with severe mental illness. In 1995, NAMI endorsed various forms of involuntary treatment when needed. Cut to today. Instead of the 14 million who are most seriously ill, NAMI National now claims to represent 60 million people with any mental health issue. Some brave state and local chapters like NAMI/NYS have refused to follow their lead and they still focus on helping people with serious mental illness.
  • American Psychiatric Association: The APA represents psychiatrists and publishes the Diagnostic and Statistical Manual, which determines what is and isn’t a mental health problem and therefore gets a billing code. It is in the APA’s interest to have everyday problems declared disorders so members can be reimbursed for treating them. A subset of psychiatrists do treat the seriously ill, and the immediate past president, Dr. Jeffrey A. Lieberman, has gone out of his way to increase the visibility of serious mental illness, but serious mental illness is still only a small part of the APA’s focus.
  • American Psychological Association: This APA represents "130,000 researchers, educators, clinicians, consultants and students." The most popular subjects for their members are addiction, bullying, marriage and divorce, personality, sexual abuse, and depression, not serious mental illness.
  • Celebrity Centric Advocacy Organizations: None of the 29 events sponsored by The Rosalynn Carter Symposium on Mental Health Policy focused on serious mental illness. Patrick Kennedy’s One Mind for Research is primarily involved in post-traumatic stress disorder, traumatic brain injury, and stigma education, not schizophrenia and bipolar. He has used The Kennedy Forum on Mental Health to call for an end to the IMD Exclusion, but has not spoken out on important initiatives like implementing Assisted Outpatient Treatment or criticized the CMHCs created by his uncle for refusing to serve the most seriously ill.
  • Law Enforcement: Ironically, this is the one bright spot. Law enforcement organizations like the National Sheriffs’ Association and the New York State Association of Chiefs of Police have stepped in to fill the void left by the mental health industry’s abandonment of the most seriously ill. They’ve become powerful advocates for increasing hospital beds for the seriously ill and are working to force the mental health system to stop ignoring them. Law enforcement is vigorously supporting Rep. Tim Murphy’s Helping Families in Mental Health Crisis Act, and working with families of the seriously ill has helped it gain 95 cosponsors from both parties. Those who want to help people with serious mental illness should ask their Representative to support this bill.
From Dr. Frances:
And I would add one more name to DJ’s shame list. The National Institute Of Mental Health devotes almost all of its enormous research budget to glamorous, but very long shot, biological research that over the last four decades has contributed exactly nothing to the treatment and lives of the severely ill. Surely, biological progress will eventually be made, but at best it will take decades to have any impact on the current real world problems of the mentally ill.
The only thing I would add at this point to their synopsis is that the time for decrying, blaming, or ignoring this has passed. It’s time for the mental health agencies and the professional organizations to turn their attention to where our most-in-need patients actually live – our jails and prisons. And I would amplify Dr. Frances’ point. This effort should be led by the National Institute of Mental Health and the Substance Abuse and Mental Health Services Administration. That’s what they’re for. Psychiatry came into being to care for these specific patients, and we’ve abandoned them…
Mickey @ 10:15 PM
Filed under: politics

Posted on Tuesday 26 August 2014

My last post [and wasted research dollars…] led me to Dr. Nemeroff’s 1984 paper announcing that Corticotropin-Releasing Factor [CRF] is significantly elevated in the CSF [cerebrospinal fluid] of patients with Major Depressive Disorder – a thirty year old observation that has figured heavily in his research ever since, culminating in the clinical trial listed in the last post. It’s a short paper in Science and it’s been open on my desktop for several days. The graph haunts me when I look at it. Here’s the abstract and that one figure from the paper, followed by a description of the analytic methods from the paper:
by Nemeroff CB, Widerlöv E, Bissette G, Walléus H, Karlsson I, Eklund K, Kilts CD, Loosen PT, Vale W.
Science. 1984 Dec 14;226(4680):1342-4.

The possibility that hypersecretion of corticotropin-releasing factor (CRF) contributes to the hyperactivity of the hypothalamo-pituitary-adrenal axis observed in patients with major depression was investigated by measuring the concentration of this peptide in cerebrospinal fluid of normal healthy volunteers and in drug-free patients with DSM-III diagnoses of major depression, schizophrenia, or dementia. When compared to the controls and the other diagnostic groups, the patients with major depression showed significantly increased cerebrospinal fluid concentrations of CRF-like immunoreactivity; in 11 of the 23 depressed patients this immunoreactivity was greater than the highest value in the normal controls. These findings are concordant with the hypothesis that CRF hypersecretion is, at least in part, responsible for the hyperactivity of the hypothalamo-pituitary-adrenal axis characteristic of major depression.

"The results (see Fig. 1) were statistically analyzed by both parametric [analysis of variance (ANOVA) and Student-Newman-Keuls test] and nonparametric (Mann-Whitney U test) methods."

One of the most frustrating things about papers like this is that the raw data isn’t available, even if one has the time to go over it in detail. Once again, I find myself looking at a graph that I’m told is meaningful, significant, has something important to say about a major psychiatric syndrome. And what I see looks like a trivial difference that is probably meaningless, and that I doubt is even statistically significant. So I did something that I’ve been tempted to do many times. I opened the figure in a graphics program and reconstituted the data by measuring the pixel count to the center of each data point, then using that table, the baseline, and the ordinate scale to reproduce the values. I wouldn’t recommend this on a Nobel Prize application or even in a paper, but I thought I’d give it a shot because I don’t believe the analysis is correct, or correctly done [the next paragraphs are only for the hardy].
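For the curious, the digitizing step is nothing exotic – once you know the pixel row of the baseline and of one labeled axis tick, every data point is a linear interpolation. A minimal sketch in Python, with entirely hypothetical pixel coordinates [not measurements from the actual Science figure]:

```python
# Sketch of reconstructing plotted values from pixel positions, as described
# above. All coordinates here are hypothetical illustrations, not measurements
# from the actual figure.

def pixels_to_values(pixel_ys, baseline_px, ref_px, ref_value):
    """Map pixel rows to data values by linear interpolation.

    baseline_px : pixel row of the y-axis zero line
    ref_px      : pixel row of a known axis tick
    ref_value   : data value at that tick
    (Pixel rows grow downward, so points above the baseline
    have smaller row numbers.)
    """
    scale = ref_value / (baseline_px - ref_px)  # data units per pixel
    return [(baseline_px - y) * scale for y in pixel_ys]

# Hypothetical example: baseline at row 400, a "100 pg/ml" tick at row 200,
# and three data points whose centers sit at rows 300, 250, and 150.
values = pixels_to_values([300, 250, 150], baseline_px=400, ref_px=200, ref_value=100.0)
print(values)  # → [50.0, 75.0, 125.0]
```

The weak link is locating the center of each point to the nearest pixel – which is part of why a reconstruction like this supports rank-order conclusions more confidently than exact magnitudes.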

So armed with my little made-up table, I proceeded to the analysis. The paper says they used an ANOVA, which treats the numbers as a continuous variable. In an ANOVA with four groups, first you check the whole dataset to see if there is any significance to the grouping. If there is, you then test the groups against each other to locate the significant difference. But with a small dataset like this, where the assumptions of ANOVA [normal distribution] are questionable, it is more accurate to use a non-parametric statistic that considers only the ranking of each value, not its magnitude. With four groups, the drill is the same. First you test the whole dataset to see if the grouping is significant [Kruskal-Wallis]. If it is, you test the groups against each other to find the significant differences [Mann-Whitney]. If you read their paragraph [in italics], it’s hard to figure out exactly what they did, but it looks like some steps were skipped. Here’s my version using the R statistical package.
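For those without R handy, the same omnibus-then-pairwise logic can be sketched in plain Python – computing only the test statistics, with invented numbers standing in for my reconstructed table [R or scipy would supply the p-values from the reference distributions]:

```python
# Sketch of the two-stage procedure described above: an omnibus rank test
# (Kruskal-Wallis H) before any pairwise tests (Mann-Whitney U). The data
# are invented for illustration; only the test statistics are computed here.

def ranks(xs):
    """1-based midranks, averaging over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of tied positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def kruskal_wallis_h(groups):
    """Omnibus H statistic over all groups (no tie correction)."""
    pooled = [x for g in groups for x in g]
    r, n = ranks(pooled), len(pooled)
    total, start = 0.0, 0
    for g in groups:
        total += sum(r[start:start + len(g)]) ** 2 / len(g)
        start += len(g)
    return 12.0 / (n * (n + 1)) * total - 3 * (n + 1)

def mann_whitney_u(a, b):
    """Pairwise U statistic: rank-sum of a minus its minimum possible value."""
    r = ranks(list(a) + list(b))
    return sum(r[:len(a)]) - len(a) * (len(a) + 1) / 2

groups = [[4, 7, 9], [12, 15, 18], [5, 6, 8], [10, 11, 14]]
print(round(kruskal_wallis_h(groups), 2))    # → 8.95
print(mann_whitney_u(groups[0], groups[1]))  # → 0.0 (group 1 entirely below group 2)
```

The point of the two stages is discipline: the pairwise U tests are only consulted if H is significant against a chi-square distribution with [number of groups − 1] degrees of freedom.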

The top green value [p = 0.007656] says that the ANOVA is significant [p<0.05]. But in the table of pairwise comparisons, it’s not the difference between NORMAL and MDD that achieves significance [p = 0.10858]. In the non-parametric test, the overall Kruskal-Wallis test of the table is not significant [0.1485]. There’s nothing there. Whether my crude method is valid or not, it sure doesn’t say this:
"The CSF concentration of CRF-LI was significantly increased (by both methods of statistical analysis) in patients with major depression compared to either the normal controls or the patients with schizophrenia or senile dementia."
My point in playing this little game is that we deserve access to raw data for this very reason. This 30 year old study has been rehashed and discussed for years and has been central to several grant requests, including the Clinical Trial in the previous post. It looks like a thorough vetting thirty years ago might well have put it to rest. I can’t find further studies that confirm this finding, and nothing that suggests this compound has any solid connection with PTSD. If you haven’t figured it out yet: I think this whole line of research is based on unsubstantiated speculations.

As you may recall, when we looked at Dr. Nemeroff’s NYU Grand Rounds and London lecture to the Institute of Psychiatry, we were alerted to a study reported as positive that Dr. Nemeroff himself had acknowledged was based on an error that erased its significance, yet he presented it as a valid study in those presentations [see has to stop…]. And the best predictor of future behavior is past behavior. Now we have GSK, the VAH, and the NIMH chasing some new drug as a treatment for PTSD based on the very shakiest of speculations. Shame on him. Shame on them. And shame on journals that don’t vet questionable studies like this.

Maybe we ought to say shame on me too for using a pixel count to get my numbers. But instead of that – why not support Data Transparency so I don’t have to resort to extreme measures to confirm my reaction to that graph. Like I said, this kind of silliness has to stop…

Whoops: [for the even more hardy] I left out this plot from the R package. The upper and lower borders of the "boxes" mark the 25th and 75th percentiles of the points. The fact that the medians [bold horizontal lines] aren’t centered in the boxes points to a skewing of the data [not normally distributed], again suggesting that the ANOVA is not the best choice of statistics and that the non-parametric test [Kruskal-Wallis] is the more appropriate one. My method of data capture is also more likely to be accurate when only the rank order of the points matters.
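If you want to check for this kind of skew without drawing a boxplot, the quartile asymmetry is easy to compute directly. A quick sketch with invented right-skewed numbers:

```python
# Sketch of the skewness check described above: in a symmetric sample the
# median sits midway between the quartiles, and the mean matches the median.
# A median pulled toward one quartile (or a mean well above the median)
# suggests skew and argues for a rank-based test. The data are invented.
from statistics import mean, median, quantiles

sample = [3, 4, 4, 5, 5, 6, 8, 12, 20, 35]      # right-skewed: long upper tail
q1, q2, q3 = quantiles(sample, n=4)             # quartiles

print(q2 - q1, q3 - q2)               # lower vs upper half-spread: very unequal
print(mean(sample) > median(sample))  # → True for right-skewed data
```

When the upper half-spread dwarfs the lower one like this, the normal-distribution assumption behind the ANOVA is hard to defend.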

Mickey @ 8:12 PM
Filed under: politics
and wasted research dollars…

Posted on Saturday 23 August 2014

It’s unlikely that anyone reading this blog or following the peculiar trajectory of academic and organized psychiatry doesn’t know a lot about Charlie Nemeroff and his fall from "Boss of Bosses" as Chairman at Emory. He’s become a paradigm for so many things – ghost writing, conflicts of interest, speaker’s bureaus, advisory boards, wheelings-and-dealings, etc. After the fall in 2008, he landed on his feet as Chairman in Miami by 2009, and by 2012 he got himself back on the Grand Rounds circuit with the topic The Neurobiology of Child Abuse: Treatment Implications and as an NIMH grantee with PROSPECTIVE DETERMINATION OF PSYCHOBIOLOGICAL RISK FACTORS FOR PTSD [speechless…].

During Dr. Nemeroff’s time as Chairman at Emory, I had already left the Department there as full time faculty during the revolution in psychiatry in the 1980s, remaining on the clinical faculty. One interest was PTSD – the psychological part – but I didn’t even know that the chairman of my department was following this other path, one he still follows. I don’t personally believe PTSD has anything to do with neurobiology or psychobiology, but what I think is not what this post is about. It’s about something known as grantsmanship, and wasted research dollars.

I’m not the only person who follows the Travels with Charlie. Carl Elliott of Fear and Loathing in Bioethics points us to Dr. Nemeroff’s coming visit to the Department of Psychiatry at his University of Minnesota.

Grand Rounds – 2014-2015

September 3, 2014 Grand Rounds: TBD
Presenter: Charles B. Nemeroff, MD, Professor and Chair, Dept. of Psychiatry and Behavioral Sciences and Director, Center on Aging, University of Miami

I don’t know if the topic will be The Neurobiology of Child Abuse: Treatment Implications like it was at NYU or in London. The video of the NYU version has unfortunately been taken down, but here’s a synopsis of his closing slides:

I don’t happen to believe any of the speculative parts of that [3, 4, & 5] are known or even likely, but like I said, what I think is not what this post is about. Here’s a piece of that grant write-up for orientation:

Although the majority of trauma victims experience the cardinal symptoms of re-experiencing, avoidance and hyperarousal, for the large majority of such individuals, these symptoms do not become chronic nor do they develop syndromal PTSD. It is important to identify the large minority of trauma victims with a high likelihood of developing PTSD because of the very significant medical and psychiatric morbidity and mortality associated with this disorder. There is already considerable evidence that the likelihood of developing PTSD after trauma exposure is due to a combination of genetic and environmental factors. This two-site, linked R-01 application seeks to utilize state-of-the art advances in genomics, transcriptomics and epigenetics, coupled with comprehensive clinical and psychological measures, to address this seminal unanswered question in PTSD clinical service and research…

I don’t happen to believe there is "already considerable evidence that the likelihood of developing PTSD after trauma exposure is due to a combination of genetic and environmental factors" either, but…

So to the grant itself. So far, we’re into it for a bit over a million NIMH dollars. This is the second time around for this project. Last time, they recruited subjects from hospital waiting rooms and ads on rapid transit [MARTA]. How did they get funded to do it again? I’m not sure, but I think it’s that they’re taking different measurements and using different analyses [?]. What I really think, though, is that Dr. Nemeroff is a master of grantsmanship.

To achieve this goal, 500 trauma-exposed subjects will be recruited at the University of Miami Ryder Trauma Center and the Emory University affiliated Grady Memorial Hospital and followed at regular intervals for one year. This focused, hypothesis-driven study will scrutinize previously identified psychological and biological risk factors. Genetic risk factors include polymorphisms of the ADCYAP1R1, FKBP5, DAT, BDNF, COMT, CRFR1, 5HTTLPR, RGS2, GABA2 and 5HT3R genes, novel genetic and epigenetic risk factors and most importantly, the primary downstream effects of these genomic and epigenetic findings by the use of conventional and newer statistical modeling methods.

We were all mystified that he got an NIMH grant at all with his track record [speechless…], and particularly with this topic – a tired remnant from the days when the biology-is-everything mantra was king. I doubt that anyone much thinks that anything will come from this study. So why would he be so quickly rehired after being definitively discredited, and how did he get a grant for this of all topics? That is what this post is about. Back in the day, Dr. Nemeroff became the paradigmatic insider. He knew all the people in power [some of whom he’d helped to get there]. And he was an expert at parlaying his influence into raising money from the pharmaceutical companies and the NIMH. He got away with some outrageous antics because he brought home the money to his university and department, so people looked the other way. Even in disgrace, he was still an effective power broker – thus landing on his feet. And what’s the point? He’s still bringing home the bread. And, oh look, three fifths of the way through this grant’s life, what has been charged to it?

It’s pretty easy to see that these articles don’t have anything to do with the PROSPECTIVE DETERMINATION OF PSYCHOBIOLOGICAL RISK FACTORS FOR PTSD. But that’s not to say that the study isn’t going on. I expect at the end we’ll be treated to slides of findings added to those from the other time around. But it’s highly unlikely that the results will add anything to our understanding of biology or PTSD. At best, they will become references for a further grant application. Over the course of the years, Dr. Nemeroff has been PI on ~$45M worth of NIMH Grants. To my knowledge, none have produced anything that is a lasting addition to the scientific record [note the Senator Grassley Gap 2008-2011]:

When I hear the criticisms of the modern bio-bio-bio psychiatry, while I often agree, I add something else in my mind – motives. The upper layer of academic psychiatry is populated predominantly by people selected by their medical schools because they can do some version of what’s described in this post – bring home the bacon from the NIMH, industry, foundations, et cetera. And for thirty plus years, they’ve talked about little other than biological research and pharmaceutical studies, selecting their future academic colleagues from the like-minded pool [that got us where we are today]. Dr. Nemeroff isn’t an exception, he’s just bolder, more reckless – reckless enough to have been busted for a time. And that’s just the NIMH story. The financing from pharmaceutical companies was probably even more impressive, and flexed the same muscles as the NIMH grantsmanship. He’s just one among many. I recently described a $50M version from UT Southwestern [retire the side…] – equally expensive, with equally non-memorable results. And there are too many more examples.

Update: Oh yeah. This seems related – an example of using both the NIMH and industry…
by Boadie W Dunlop, Barbara O Rothbaum, Elisabeth B Binder, Erica Duncan, Philip D Harvey, Tanja Jovanovic, Mary E Kelley, Becky Kinkead, Michael Kutner, Dan V Iosifescu, Sanjay J Mathew, Thomas C Neylan, Clinton D Kilts, Charles B Nemeroff, and Helen S Mayberg
Trials. 2014; 15: 240.

Funding for the study is provided from a grant from the National Institute of Mental Health, U19 MH069056 (BWD, HM). Additional support was received from K23 MH086690 (BWD) and VA CSRD Project ID 09S-NIMH-002 (TCN). GlaxoSmithKline contributed the study medication and matching placebo, as well as funds to support subject recruitment and laboratory testing. GSK is uninvolved in the data collection, data analysis (excepting some pharmacokinetic analysis), or interpretation of findings. The GSK561679 compound is currently licensed by Neurocrine Biosciences, which will also perform pharmacokinetic analyses.
Going back thirty years!
by Nemeroff CB, Widerlöv E, Bissette G, Walléus H, Karlsson I, Eklund K, Kilts CD, Loosen PT, Vale W.
Science. 1984 Dec 14;226(4680):1342-4.

The possibility that hypersecretion of corticotropin-releasing factor (CRF) contributes to the hyperactivity of the hypothalamo-pituitary-adrenal axis observed in patients with major depression was investigated by measuring the concentration of this peptide in cerebrospinal fluid of normal healthy volunteers and in drug-free patients with DSM-III diagnoses of major depression, schizophrenia, or dementia. When compared to the controls and the other diagnostic groups, the patients with major depression showed significantly increased cerebrospinal fluid concentrations of CRF-like immunoreactivity; in 11 of the 23 depressed patients this immunoreactivity was greater than the highest value in the normal controls. These findings are concordant with the hypothesis that CRF hypersecretion is, at least in part, responsible for the hyperactivity of the hypothalamo-pituitary-adrenal axis characteristic of major depression.
hat-tip to James O’Brien
Some things never change…
Mickey @ 6:40 PM
Filed under: politics