in praise of monotonous clinical trial reports…

Posted on Thursday 11 September 2014

I’ve noticed that there’s a pattern in this blog [and in your responses]. I write a post, and something sticks in my mind, and I chase it in the next. That often goes on for a couple of iterations. It’s apparently how I think. Readers tend to comment the first time around, then run out of juice as I perseverate on things for a while. That seems reasonable. This is one of those perseveration posts – I’m still on the Hickie/Rogers review of agomelatine [which is beginning to smell like the "Study 329" of review articles]. If you’re over it, you might want to skip this one [and maybe the next one too]. I come by my 1boringoldman moniker honestly…

Without my really planning it, my last three posts have relied on the work of the same authors, Drs. Andrea Cipriani and Corrado Barbui, who were at the University of Verona at the time these articles were written. Dr. Cipriani is now at the University of Oxford; he is Editor in Chief of Evidence-Based Mental Health [the journal of the article below], a member of the Editorial Board of Lancet Psychiatry, and one of the Editors of the Cochrane Depression, Anxiety and Neurosis Group. They have been involved with many of the Cochrane Reviews of the Antidepressants [and may deserve the title clinical trial gurus in a different way than Dan Sfera of South Coast Clinical Trials].

In my last post [in praise of anonymous, contactable members of the public], I found that I had missed an article in the thread, perhaps the most important one of the lot. In the timeline, it came after the Hickie and Rogers Review [Novel melatonin-based therapies: potential advances in the treatment of major depression] but before the British Journal of Psychiatry article [Agomelatine efficacy and acceptability revisited: systematic review and meta-analysis of published and unpublished randomised trials] and the Cochrane Collaboration Systematic Review. And to my point, it came before they had the unpublished agomelatine trials in hand. So they were in the same boat the rest of us were in when the Hickie and Rogers review came out in 2011 [see my of sound and fury…]:
by Corrado Barbui and Andrea Cipriani
Evidence-Based Mental Health. 2012 15:2-3.

Introduction: … This essay aims at raising awareness on the need to set a standard for reporting clinical data in articles dealing with basic science issues.

Claims of efficacy in review articles on the pharmacology of agomelatine: The most recent review article on the pharmacology of agomelatine was published by Hickie and Rogers. In this report, a table summarised the placebo-controlled and active comparator trials of agomelatine in individuals with major depression, and in the accompanying text it is clearly stated that agomelatine has clinically significant antidepressant properties. It is reported that it is more effective in patients with more severe depression, and that agomelatine is similarly effective in comparison with some comparator antidepressants, and is more effective than fluoxetine. In terms of tolerability, Hickie and Rogers reported a safe profile, and concluded that agomelatine might occupy a unique place in the management of some patients with severe depression…

The content of these reports is in line with the content of most recently published reports on the chronobiotic effect of agomelatine and other melatonin agonists. A PubMed search [June 17th 2011 back to January 1st 2009] using the terms ‘agomelatine’ and ‘depression’ identified 73 hits of which 34 were review articles on agomelatine as a new option for depression treatment. On the basis of the abstract, we noted that 80% of these reports made claims of efficacy of agomelatine as an antidepressant, 41% reported a safe tolerability profile and only four reports mentioned that agomelatine is probably hepatotoxic.

The problem: narrative-based approach to evidence synthesis: We note that these articles make claims of efficacy that are based on narrative rather than systematic reviews of the evidence base. In addition, lack of a methodology to summarise the results of each clinical trial, and lack of overall treatment estimates, make data interpretation rather difficult. As a consequence, often claims of efficacy are not consistent with the efficacy data that are presented. A re-analysis of the efficacy of agomelatine versus placebo, carried out applying standard Cochrane methodology to the data reported in table 3 and table 4 of the Hickie and Rogers report, revealed that agomelatine 25 mg and agomelatine 50 mg have minimal antidepressant efficacy, with no dose-response gradient [table 1]. In comparison with placebo, acute treatment with agomelatine is associated with a difference of 1.5 points at the Hamilton scale. No research evidence or consensus is available about what constitutes a clinically meaningful difference in Hamilton scores, but, as reported by some authors of agomelatine clinical trials, it seems unlikely that a difference of less than 3 points could be considered clinically meaningful. In addition, a bias in properly assessing the depressive core of major depression may exist with the Hamilton scale, leaving the possibility that a 1.5 difference may only reflect a weak effect on sleep regulating mechanisms rather than a genuine antidepressant effect…

Re-analyses of the efficacy of agomelatine versus control antidepressants revealed that agomelatine is not more effective than fluoxetine [table 1] and might be less effective than paroxetine, although results are heterogeneous and of borderline statistical significance [table 1]. In the abstract of the Hickie and Rogers article the results of the comparisons between agomelatine and some control antidepressants are mentioned, but the comparison between agomelatine and paroxetine is omitted. The authors reported similar efficacy between agomelatine and venlafaxine, and better efficacy in comparison with sertraline. However, these results, based only on one study for each comparator antidepressant, are not placed in perspective, and no attempt has been made to critically contextualise their conclusions with what is already known. For example, the pattern of comparative efficacy of agomelatine does not fit with a recent network meta-analysis which showed sertraline and venlafaxine on the top of the efficacy ranking and paroxetine among the least effective antidepressants.

In terms of tolerability, most reviews did not mention the potential relationship between agomelatine and hepatic problems. According to the European Medicine Agency, increases in liver function parameters were reported commonly in the clinical documentation [on 50 mg agomelatine] and in general, more often in agomelatine treated subjects than in the placebo group, leading to the cautionary note that agomelatine is contraindicated in patients with hepatic impairment…

In terms of long-term data, according to these reports the protective role of agomelatine is considered a well-established finding. However, in the Hickie and Rogers article three long-term studies are described, two negative and one positive comparisons between agomelatine and placebo. For one of the two negative studies, efficacy data are not available, and therefore re-analysis was not possible. In the text of the article and in the abstract, only the positive study is mentioned.

Expected consequences: Expected consequences are that these review articles will be highly disseminated by pharmaceutical representatives, keen to extol the virtue of their product, and clinicians’ behaviour will be moulded by the clinical data presented and formularies restructured based on their conclusions…
So here’s why I think it’s the most important paper in the thread: it makes a point that generalizes to the whole of the Clinical Trials literature. This is not a formal Cochrane mega-meta-analysis. They’re working from the tables as presented by Hickie and Rogers, using the standard Cochrane protocols for representing those studies, and their conclusions are dramatically different from those of the original paper containing those tables. Their methods are more sophisticated than mine. All I did was turn the tables into graphs [green = non·significant, red = significant]:
They calculated the Standardized Mean Differences [SMD, AKA Cohen’s d] and the Mean Differences [MD]:
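A quick aside: if you want to see what those two measures actually compute, here’s a minimal sketch – not their code, and the numbers are invented, chosen only to echo the ~1.5 Hamilton-point drug–placebo difference discussed above:

    # Illustrative only: effect measures from the summary statistics a trial
    # report gives you [means, SDs, group sizes]. All numbers are made up.
    import math

    def mean_difference(m_drug, m_placebo):
        """Raw mean difference on the rating scale [e.g., Hamilton points]."""
        return m_drug - m_placebo

    def cohens_d(m_drug, sd_drug, n_drug, m_placebo, sd_placebo, n_placebo):
        """Standardized mean difference: raw difference / pooled SD."""
        pooled_sd = math.sqrt(
            ((n_drug - 1) * sd_drug**2 + (n_placebo - 1) * sd_placebo**2)
            / (n_drug + n_placebo - 2)
        )
        return mean_difference(m_drug, m_placebo) / pooled_sd

    # Hypothetical trial: drug arm drops 15.0 Hamilton points [SD 8],
    # placebo arm drops 13.5 [SD 8], 120 subjects per arm
    md = mean_difference(-15.0, -13.5)               # -1.5 points
    d = cohens_d(-15.0, 8.0, 120, -13.5, 8.0, 120)   # about -0.19
    print(f"MD = {md:.2f} points, SMD [Cohen's d] = {d:.2f}")

The point: an MD of 1.5 points sounds like something until it’s standardized against the spread of the scores, where it turns out to be a small fraction of a standard deviation.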
 
But we both came to the same conclusion: agomelatine is not a clinically significant antidepressant. However, that’s not the point of this post. The point is that these clinical trials, and the meta-analyses of these clinical trials, aren’t that complicated or that different from one another. And yet each one is written up in a different way, as if it required some kind of unique narrative to be understood. That’s actually an illusion. One could essentially come up with a few templates for analyzing and presenting the results – a protocol of sorts. It would make for monotonous reading, but it would cut through the ways authors use narratives and idiosyncratic presentations to distort the truth. Corrado Barbui and Andrea Cipriani are meta-analysis types, so they suggest a standard developed for meta-analyses. But one could use the same standards for individual trials as well. Here’s their version of the argument:
What can be done – quickly, easily and with no additional costs: Innovation in clinical psychopharmacology is of paramount importance, as current pharmacological interventions for mental disorders suffer from several limitations. However, we argue that reporting of clinical trial results, even in review articles that are focused on new biological mechanisms, should follow a standard of reporting similar to that required from authors of systematic reviews. We recently applied this reasoning to the documents released by the European Medicine Agency, as this standard should be considered a term of reference irrespective of the nature of the report being disseminated. If the clinical data presented in the review articles on the pharmacology of agomelatine had been analysed with standard Cochrane methodology, conclusions would have been that the efficacy of agomelatine is minimal. Comparisons with other antidepressants are only partly informative, because there were only few trials and because of the high between-study heterogeneity. Long-term data are inconclusive. Clearly, different doctors may interpret the same evidence base differently and, for example, the difference between agomelatine and placebo might be described as modest rather than minimal, the bottom line being basically the same and radically different from the take-home messages of these reviews.

Most medical journals require adherence to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). It is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses. Adherence to PRISMA is not required in review articles dealing with basic science issues as these articles are not focused on clinical trials. In practice, however, the agomelatine case indicates that clinical data are regularly included and reviewed with no reference to the rigorous requirements of the PRISMA approach. These articles have in this way become a modern Trojan horse for reintroducing the brave old world of narrative-based medicine into medical journals.

We argue that medical journals should urgently apply this higher standard of reporting, which is already available, easy to implement and inexpensive, to any form of clinical data presentation. Lack of such a standard may have negative consequences for practising doctors who may be erroneously induced to prescribe drugs that may be less effective, or less tolerable, than others already in the market, with additional acquisition costs for patients and the wider society. This risk is particularly high when medical journals with a high reputation are involved.
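Their "standard of reporting" could be made entirely mechanical. As a concrete sketch of my own [not anything Barbui and Cipriani propose – the field names are invented], every trial report could fill in the same prespecified record:

    # A sketch of a rote, prespecified trial-report record: every trial fills
    # in the same slots, analyzed the same prespecified way. Hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class OutcomeResult:
        name: str            # e.g., "HAM-D change at week 6"
        prespecified: bool   # declared in the protocol before unblinding?
        effect: float        # point estimate [e.g., mean difference]
        ci_low: float
        ci_high: float
        p_value: float

    @dataclass
    class TrialReport:
        trial_id: str
        primary: list = field(default_factory=list)     # OutcomeResult items
        secondary: list = field(default_factory=list)   # OutcomeResult items
        adverse_events: dict = field(default_factory=dict)
        # adverse_events maps event name -> (count on drug, count on placebo)

    report = TrialReport(trial_id="hypothetical-001")
    report.primary.append(
        OutcomeResult("HAM-D change at week 6", True, -1.5, -2.3, -0.7, 0.04))
    report.adverse_events["elevated transaminases"] = (12, 3)

Monotonous by design – there’s no place in a record like that for a narrative to hide.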
Such good sense from Barbui and Cipriani. Clinical Trials aren’t creative endeavors. If anything, they’re the opposite – tightly controlled exercises to evaluate the efficacy and toxicity of drugs. Why are they written up like creative writing projects? Like the weekly themes in a composition class? What’s needed? A table with the results of the prespecified primary and secondary outcome variables analyzed in a pre·specified way. Another couple of tables with the adverse events and serious adverse events tallied and analyzed in some pre·specified way. Notice, likewise, that Barbui and Cipriani imply that the responsibility for ensuring that kind of reporting and analysis lies with the journal. That makes sense to me too. So what we have here is a case of 1 boring old man advocating that clinical trial reports need to become boring, monotonous, rote. The same goes for meta-analyses. Calling a meta-analysis "a review" doesn’t change the obligation for integrity. If you’ve ever actually read one of the Cochrane Collaboration Systematic Reviews, that’s exactly what they are – boring, monotonous, rote [and long]. They usually come in several versions [for that very reason?]:

 

The more I look at this Hickie/Rogers review of agomelatine, the worse it looks. I expect that the article reported here led the group of authors to consider doing the Systematic Review [where they came across the unpublished studies that made the review look so weak].
Mickey @ 8:00 AM

in praise of anonymous, contactable members of the public

Posted on Tuesday 9 September 2014

Ed Silverman reports here on the end of a long story that has appeared in fragments for several years. It’s about Agomelatine, a melatonin-like compound, and a review article published in the Lancet by Ian Hickie and Naomi Rogers in 2011. Here’s the Pharmalot post:
Pharmalot
By Ed Silverman
September 8, 2014

As the pharmaceutical industry battles with regulators over the extent to which clinical trial information should be released, one drug maker has run afoul of U.K. industry rules for disclosure. The Association of British Pharmaceutical Industry has ruled that Servier breached several codes by failing to publish trial results for its Valdoxan antidepressant. Specifically, there were five trials involving U.K. patients between 2008 and 2011 in which Servier did not publish results promptly and three other trials in which results had not been disclosed.

As a result, the drug maker breached three codes – bringing discredit upon, and reducing confidence in, the pharmaceutical industry; failing to maintain high standards and failing to meet the required timeframe in which to disclose clinical trials, according to the ABPI. The trade group monitors a voluntary industry code and late last week publicized the completed findings. Disclosing trial results has been a contentious topic that has immersed drug makers, regulators and researchers following scandals over safety or effectiveness data that was not publicly shared. The debate has mostly centered on the extent to which proprietary trade information, as well as confidential patient data, may be released.

As we wrote previously, the issue has been particularly troublesome in Europe, where the European Medicines Agency has repeatedly delayed the release of a new policy amid criticism the agency backtracked on a previous commitment to new-found openness. Recently, the regulator indicated formal adoption would take place at a scheduled board meeting next month. The case against Servier emerged after the ABPI last fall published a survey suggesting there was greater industry transparency concerning clinical trial data. An unidentified member of the public, however, suspected otherwise and, after reviewing the results, reported possible violations to the industry trade group.

For its part, Servier “strongly refuted” that it discredited the industry or failed to maintain high standards, and acknowledged only a technical breach concerning a  failure to meet the required timeframe in which to disclose trial results, according to the case report. We asked the drug maker if an appeal is planned and will update you accordingly. Meanwhile, the ABPI findings were publicized in the form of advertisements that are running in three publications that are widely read in the U.K. medical industry – BMJ, the Pharmaceutical Journal and The Nursing Standard.

Here’s the time-line. Three years ago, Dr. Ian Hickie and a colleague published this review of Agomelatine in depression in the Lancet, I think by invitation from the journal itself. It was published on-line in May 2011 and in print in August 2011:
by Ian B Hickie and Naomi L Rogers
Lancet 2011 378: 621–631.

Major depression is one of the leading causes of premature death and disability. Although available drugs are effective, they also have substantial limitations. Recent advances in our understanding of the fundamental links between chronobiology and major mood disorders, as well as the development of new drugs that target the circadian system, have led to a renewed focus on this area. In this review, we summarise the associations between disrupted chronobiology and major depression and outline new antidepressant treatment strategies that target the circadian system. In particular, we highlight agomelatine, a melatonin-receptor agonist and selective serotonergic receptor subtype [ie, 5-HT2C] antagonist that has chronobiotic, antidepressant, and anxiolytic effects. In the short-term, agomelatine has similar antidepressant efficacy to venlafaxine, fluoxetine, and sertraline and, in the longer term, fewer patients on agomelatine relapse [23·9%] than do those receiving placebo [50·0%]. Patients with depression treated with agomelatine report improved sleep quality and reduced waking after sleep onset. As agomelatine does not raise serotonin levels, it has less potential for the common gastrointestinal, sexual, or metabolic side-effects that characterise many other antidepressant compounds.
Dr. Hickie may have been impressed with Agomelatine, but a veritable international army of credible critics wasn’t, and they wrote letters to the Lancet, published in January 2012, saying why:
I wasn’t very impressed myself with Hickie’s review [of sound and fury…, it’s about time…]. Even the Lancet Editor, Richard Horton, had his say on Twitter:
    [1] Tomorrow, we are very heavily criticised for publishing a review on melatonin-based drugs for depression. Biased and overstated, say many.
    [2] The bias in this paper is very disturbing – it might be fine to argue your case in a Viewpoint or letter. But…
    [3] …this paper purported to be an unbiased review of a new drug class. Peer review improved it, yet not enough.
    [4] As troubling is the fact that one author took part in speaking engagements for the company making one of these drugs.
    [5] It is this kind of complicity that damages any hopes of a positive partnership between medicine and industry.
In spite of fairly clear evidence of extensive COI problems [long overdue…] and the weakness of the review itself, Dr. Hickie mounted a loud campaign against his critics [a bad sign…, if you can’t do the time, Ian Hickie: on Twitter, The Lancet and my critics]. Then in September 2013, some two years after Hickie’s review was published, there was a breath of fresh air…, a meta-analysis that had access to the unpublished Agomelatine trials:
by Markus Koesters, Giuseppe Guaiana, Andrea Cipriani, Thomas Becker, and Corrado Barbui
The British Journal of Psychiatry. 2013 203:179-187.

Background: Agomelatine is a novel antidepressant drug with narrative, non-systematic reviews making claims of efficacy.
Aims: The present study systematically reviewed published and unpublished evidence of the acute and long-term efficacy and acceptability of agomelatine compared with placebo in the treatment of major depression.
Method: Randomised controlled trials comparing agomelatine with placebo in the treatment of unipolar major depression were systematically reviewed. Primary outcomes were (a) Hamilton Rating Scale for Depression (HRSD) score at the end of treatment (short-term studies) and (b) number of relapses (long-term studies).
Results: Meta-analyses included 10 acute-phase and 3 relapse prevention studies. Seven of the included studies were unpublished. Acute treatment with agomelatine was associated with a statistically significant superiority over placebo of –1.51 HRSD points (99% CI –2.29 to –0.73, nine studies). Data extracted from three relapse prevention studies failed to show significant effects of agomelatine over placebo (relative risk 0.78, 99% CI 0.41–1.48). Secondary efficacy analyses showed a significant advantage of agomelatine over placebo in terms of response (with no effect for remission). None of the negative trials were published and conflicting results between published and unpublished studies were observed.
Conclusions: We found evidence suggesting that a clinically important difference between agomelatine and placebo in patients with unipolar major depression is unlikely. There was evidence of substantial publication bias.
Hickie and Rogers’ initial review reported on a number of Clinical Trials of Agomelatine against placebo and active comparators. They’re summarized in of sound and fury…, and they’re pretty lame – thus all of the criticism. But this later meta-analysis also had the unpublished Clinical Trials – and it was a textbook example of publication bias at its most obvious [see a breath of fresh air…]:
 
The forest plot on the left shows the effect sizes separated by publication status, and the funnel plot on the right compares study size with effect size. The bias introduced by non-publication of negative studies is readily apparent. The same group also published a Cochrane Systematic Review, which commented:
Selective reporting
Study protocols were not available for all the studies. Data from CL3-022; CL3-023; CL3-024 were limited, as we could not gain access to the full dataset because the studies were unpublished; attempts to obtain data from the pharmaceutical company manufacturing agomelatine [Servier] were unsuccessful. Loo 2002a, CL3-022, CL3-023 and CL3-024 were not registered, thus increasing risk of selective reporting…
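Back to that funnel plot for a moment, in case the device is unfamiliar: each trial is plotted as its effect size against its precision. With no publication bias the points scatter symmetrically around the pooled effect; when the small negative trials go unpublished, one corner of the funnel is simply empty. A minimal sketch [all numbers invented, purely to show the shape]:

    # Sketch of a funnel plot: effect size vs standard error per trial.
    # Small trials [large SE] scatter widely; missing small unfavorable
    # trials leave a telltale hole in one corner. Numbers are invented.
    import matplotlib.pyplot as plt

    published   = [(-0.45, 0.10), (-0.40, 0.15), (-0.50, 0.20), (-0.35, 0.12)]
    unpublished = [(-0.05, 0.18), (0.02, 0.22), (-0.10, 0.16)]

    for data, marker, label in [(published, "o", "published"),
                                (unpublished, "s", "unpublished")]:
        xs, ys = zip(*data)
        plt.scatter(xs, ys, marker=marker, label=label)

    plt.gca().invert_yaxis()          # convention: most precise trials on top
    plt.axvline(0, linestyle="--")    # line of no effect
    plt.xlabel("effect size [SMD]")
    plt.ylabel("standard error")
    plt.legend()
    plt.show()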
Meanwhile there’s another, shorter timeline to consider. The pharmaceutical industry and its professional organizations [EFPIA – European Federation of Pharmaceutical Industries and Associations; PhRMA – Pharmaceutical Research and Manufacturers of America; ABPI – Association of the British Pharmaceutical Industry; etc.] are under increasing pressure to clean up their act, not just in the area of CNS drugs but industry-wide. In July 2013, EFPIA and PhRMA endorsed a joint set of Principles for Responsible Clinical Trial Data Sharing: Our Commitment to Patients and Researchers [adopted by the ABPI].

  • Patient-level clinical trial data, study-level clinical trial data, full clinical study reports, and protocols from clinical trials in patients for medicines approved in the United States and European Union will be shared with qualified scientific and medical researchers upon request and subject to terms necessary to protect patient privacy and confidential commercial information. Researchers who obtain such clinical trial data will be encouraged to publish their findings.
  • Companies will work with regulators to provide a factual summary of clinical trial results to patients who participate in clinical trials.
  • The synopses of clinical study reports for clinical trials in patients submitted to the Food and Drug Administration, European Medicines Agency, or national authorities of EU member states will be made publicly available upon the approval of a new medicine or new indication.
  • Biopharmaceutical companies have also reaffirmed their commitment to publish clinical trial results regardless of the outcome. At a minimum, results from all phase 3 clinical trials and clinical trial results of significant medical importance should be submitted for publication.
In November, the ABPI commissioned, funded, and published a survey reporting a positive trend of increasing levels of disclosure for industry-sponsored clinical trials. But someone, known as an anonymous, contactable member of the public, called foul and pointed to Servier and the long-unpublished studies mentioned in the BJP article above. The ABPI [to their credit] investigated and just released their findings – guilty as charged. They published announcements in major journals, as mentioned above in Ed Silverman’s Pharmalot piece:



What’s missing [not to their credit] is the name of the drug or anything that would link it up with the story I’ve outlined here. But it’s definitely a positive step for mankind. Servier, of course, protests and admits to no wrongdoing, a standard PHARMA response that’s wearing beyond thin.

It may not seem like much, a blog post in the WSJ, a pdf on a rarely visited web site, a cryptic ad in the BMJ, but it’s big to me because it’s the pharmaceutical industry censuring a drug company for not publishing negative clinical trial results, thereby falsifying their drug’s efficacy. And look at what it took: a brace of well-respected watchdogs from the Cochrane Collaboration, Healthy Skepticism, Dr. Carroll; candid comments from the Lancet editor; an article in the British Journal of Psychiatry; a Cochrane Systematic Review; echoes from Senator Grassley, Paul Thacker, David Healy’s work, Ben Goldacre’s books, AllTrials, Fiona Godlee, Peter Doshi, Tom Jefferson, the irreplaceable Ed Silverman, and the list goes on and on – a huge effort by a growing throng of contributors including even an anonymous, contactable member of the public, all focused on getting that one simple ad published in a few medical journals.

So, was all that energy worth it for such a small yield? Of course it was, because it couldn’t have even happened before this year, or without that much pressure. Next time will be easier. And a message is on the books that says that the drug industry [and its KOLs] can’t get away with falsifying and spinning Clinical Trial data like they’ve done for decades. There’s a growing cottage industry out here that will dog them to the ends of the earth…
Mickey @ 12:06 PM

note to self…

Posted on Friday 5 September 2014

My last post about Zoloft® and its approval [an echo that needs to keep reverberating…] got me thinking about a number of things. In the UPDATE, I finally found that Laura A. Plumlee et al. v. Pfizer had been dismissed on a technicality for the second time just this week. I also found a Louisiana suit filed on the same grounds:
Courthouse News
By SABRINA CANFIELD
October 29, 2013

Pfizer defrauded the public about its blockbuster antidepressant Zoloft by writing its own articles about it for medical journals and paying medical researchers to put their names on them, in a brazen campaign of "fraudulent and wanton marketing, selling and labeling," Louisiana’s attorney general claims in court. Attorney General Buddy Caldwell claims Zoloft is barely more effective at treating depression than a placebo, but Pfizer has persuaded doctors and consumers otherwise. In its lawsuit in East Baton Rouge Parish Court, the state claims Pfizer engaged in "false, misleading, unfair, and deceptive acts in the marketing, promotion, and sale" of Zoloft, affecting the elderly, disabled and "most needy" Louisiana citizens covered by the state’s Medicaid program…
Long before Zoloft was approved by the FDA, Pfizer knew it had "serious issues with efficacy" because in early Zoloft trials, the placebo group actually had better results, the state claims. "These early trials showed that ‘placebo still seems to be the most effective group’ and that ‘there is still no striking evidence of beneficial drug effect with placebo often being the superior treatment,’" the complaint states. "Nonetheless Pfizer chose to go forward in attempting FDA approval."

The attorney general claims that to do this, Pfizer published only information that pertained to Zoloft efficacy, and suppressed conflicting studies. Pfizer then engaged in a "ghostwriting program to misleadingly enhance Zoloft’s credibility," the lawsuit states. "Ghostwriting is a process where someone with a vested interest in an article, like Pfizer, that does not want their association with the article to be known, provides a written draft to an author who then publishes the article under that author’s name," the complaint states. "The published article contains no express or implied association with the interested person – Pfizer’s involvement in drafting the article is unknown to the public. Not surprisingly, ghostwritten articles tout the benefits and efficacy of the drug in question."

In fact, the state claims, Pfizer realized it could ensure Zoloft’s success through "manufacturing ‘research’ and articles that enhance Zoloft’s safety and credibility." Pfizer, or a company hired by Pfizer, would write a study specifically designed to showcase Zoloft’s effectiveness, and Pfizer would then pay prominent members of the medical field to put their name on the articles, and to "ultimately conceal all Pfizer involvement," the complaint states. "Publication of clinical findings is the ultimate basis for treatment decisions; thus Pfizer’s misleading publications regarding Zoloft efficacy are a key component of its fraudulent scheme," the attorney general says.

"An internal Pfizer document demonstrates its ghostwriting and selective publication scheme in full effect," the complaint states. "First, the document clearly reveals the intent to manipulate inefficacy results in a published manuscript: ‘… but now we need some help in dealing with the most important issue … i.e. the huge placebo response in the continuation phase which wiped out the significant superiority of Zoloft at six weeks.’ "The email goes on to list a number of ways to deal with the placebo response, including ‘using less stringent criteria for relapse’ and the suggestion that ‘Table III certainly must be deleted.’ Lastly, the email requests ‘the list of French investigators identifying the proposed authors. [Emphasis added.]

"Pfizer’s ghostwriting operation and its selective publication of data, prevented healthcare providers, consumers, and ultimately the State of Louisiana from obtaining accurate information regarding the efficacy of Zoloft. Pfizer’s scheme directly influenced the prescribing practices of healthcare providers through its misleading and inaccurate information bolstering Zoloft’s efficacy"…
I don’t know if Louisiana v. Pfizer was piggy-backed onto Plumlee v. Pfizer, or whether it too is blowing in the wind [Note to self: Find out]. But moving right along, George Dawson of Real Psychiatry commented on my last post, bringing up a large 2009 meta-analysis that looked at head-to-head studies of the antidepressants [studies comparing multiple drugs] and picked Zoloft® as the first-line drug [published after Zoloft’s patent expired]:
by Cipriani A, Furukawa TA, Salanti G, Geddes JR, Higgins JP, Churchill R, Watanabe N, Nakagawa A, Omori IM, McGuire H, and Tansella M, and Barbui C
Lancet. 2009 373[9665]:746-58.
BACKGROUND: Conventional meta-analyses have shown inconsistent results for efficacy of second-generation antidepressants. We therefore did a multiple-treatments meta-analysis, which accounts for both direct and indirect comparisons, to assess the effects of 12 new-generation antidepressants on major depression.
METHODS: We systematically reviewed 117 randomised controlled trials [25 928 participants] from 1991 up to Nov 30, 2007, which compared any of the following antidepressants at therapeutic dose range for the acute treatment of unipolar major depression in adults: bupropion, citalopram, duloxetine, escitalopram, fluoxetine, fluvoxamine, milnacipran, mirtazapine, paroxetine, reboxetine, sertraline, and venlafaxine. The main outcomes were the proportion of patients who responded to or dropped out of the allocated treatment. Analysis was done on an intention-to-treat basis.
FINDINGS: Mirtazapine, escitalopram, venlafaxine, and sertraline were significantly more efficacious than duloxetine [odds ratios [OR] 1.39, 1.33, 1.30 and 1.27, respectively], fluoxetine [1.37, 1.32, 1.28, and 1.25, respectively], fluvoxamine [1.41, 1.35, 1.30, and 1.27, respectively], paroxetine [1.35, 1.30, 1.27, and 1.22, respectively], and reboxetine [2.03, 1.95, 1.89, and 1.85, respectively]. Reboxetine was significantly less efficacious than all the other antidepressants tested. Escitalopram and sertraline showed the best profile of acceptability, leading to significantly fewer discontinuations than did duloxetine, fluvoxamine, paroxetine, reboxetine, and venlafaxine.
INTERPRETATION: Clinically important differences exist between commonly prescribed antidepressants for both efficacy and acceptability in favour of escitalopram and sertraline. Sertraline might be the best choice when starting treatment for moderate to severe major depression in adults because it has the most favourable balance between benefits, acceptability, and acquisition cost.
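The phrase "both direct and indirect comparisons" is worth a word. The simplest building block of a network [multiple-treatments] meta-analysis is the adjusted indirect comparison: if drug A and drug B have each been compared with placebo but rarely or never head-to-head, A versus B can still be estimated through the common comparator. A minimal sketch of that one step [numbers invented; the full Cipriani analysis pools many such loops together with the direct evidence]:

    # Bucher-style adjusted indirect comparison on the log-odds-ratio scale:
    # A-vs-B is estimated as the difference of the A-vs-placebo and
    # B-vs-placebo log odds ratios; the variances add. Numbers invented.
    import math

    def indirect_comparison(log_or_a, se_a, log_or_b, se_b):
        """Estimate A vs B from two drug-vs-placebo results."""
        log_or_ab = log_or_a - log_or_b
        se_ab = math.sqrt(se_a**2 + se_b**2)   # uncertainty accumulates
        return log_or_ab, se_ab

    # Hypothetical: A vs placebo OR = 1.8, B vs placebo OR = 1.3
    log_or_ab, se_ab = indirect_comparison(math.log(1.8), 0.15,
                                           math.log(1.3), 0.20)
    lo, hi = (math.exp(log_or_ab - 1.96 * se_ab),
              math.exp(log_or_ab + 1.96 * se_ab))
    print(f"A vs B: OR = {math.exp(log_or_ab):.2f} [95% CI {lo:.2f}-{hi:.2f}]")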
It is an extensive review – in fact, part of a family of publications in journals and the subject of several Cochrane Systematic Reviews by this group analyzing the head-to-head antidepressant studies. It seems to be well conducted by a credible team. But there’s another side to the story. It relies on published papers. And a whole lot of those published papers were funded by Pfizer and prepared by Current Medical Directions, a medical writing firm:
by David Healy and Dinah Cattell
The British Journal of Psychiatry. 2003 183:22-27.

BACKGROUND: Changes in the character of medical authorship.
AIMS: To compare the impact of industry-linked and non-industry-linked articles.
METHOD: We compared articles on sertraline being coordinated by a medical writing agency with articles not coordinated in this way. We calculated numbers of Medline-listed articles per author, journal impact factors, literature profiles and citation rates of both sets of articles.
RESULTS: Non-agency-linked articles on sertraline had an average of 2.95 authors per article, a mean length of 3.4 pages, a mean Medline listing of 37 articles per author [95% CI 27-47] and a mean literature profile of 283 per article [95% CI 130-435]. Agency-linked articles on sertraline had an average of 6.6 authors per article, a mean length of 10.7 pages, a mean Medline listing of 70 articles per author [95% CI 62-79] and a mean literature profile of 1839 per article [95% CI 1076-2602]. The citation rate for agency articles was 20.2 [95% CI 13.4-27.0] and for non-agency articles it was 3.7 [95% CI 3.3-8.1].
CONCLUSIONS: The literature profiles and citation rates of industry-linked and non-industry-linked articles differ. The emerging style of authorship in industry-linked articles can deliver good-quality articles, but it raises concerns for the scientific base of therapeutics.
Am I suggesting that the ghostwritten literature on Zoloft® is extensive enough or distorted enough to skew a meta-analysis of all published head-to-head clinical trials? I don’t know that, so I guess the answer is currently unknown. But after reading the Louisiana suit and this article, I’m curious [Note to self: Look for the emails and documents referenced above]. There has to be some explanation for the discrepancy between the lackluster FDA Approval data and its glowing performance in the literature [and in the marketplace]…
Mickey @ 2:47 PM

an echo that needs to keep reverberating…

Posted on Thursday 4 September 2014

Dr. Roy Poses of Healthcare Renewal often writes about a concept – the anechoic effect [see themes…]. We all know about it. Some big story comes along and there’s a big reaction, outrage all around, but then interest peters out and it’s forgotten – worse, nothing is done about it. It happens all the time. And in keeping up with the antics of the pharmaceutical companies, it’s the rule rather than the exception. There ought to be a registry of things to keep on the front burner. In my case, the registry is some phrases scratched on the back of a coffee-stained envelope pinned to the wall. One of them says "Plumlee – Zoloft?" Back in the beginning of 2013, I ran across a suit, Laura A. Plumlee et al. v. Pfizer [see a wide net…], that was intriguing. It alleged that Zoloft didn’t work and asked that Pfizer refund the money to those taken in by the drug’s ads. At first, it seemed far-fetched, but not after I read the case. So I went looking for the NDA on the FDA site, but it wasn’t there. So I submitted an FOIA request to the FDA, and when it showed up, it gave me plenty to write about:

They submitted six studies [only one made the grade, and it was a very weak showing]:

Placebo Controlled Clinical Trials


Study          Site        Type                  Outcome
protocol 103   outpatient  fixed dose            questionable
protocol 101   inpatient   fixed dose            negative
protocol 310   inpatient   fixed dose            negative
protocol 104   outpatient  titrated dose         positive
protocol 315   outpatient  titrated dose         negative
protocol 320   outpatient  open label, relapse   whatever

The FDA reviewers did not recommend approval. The committee was on its way to denying approval when the head of the FDA’s Division of Neuropharmacological Drug Products [Dr. Paul Leber], who had assured Pfizer he could get it approved, entered the discussion with a speech and, as he predicted, got it through. It was hardly an exemplary day for the FDA. Reading it, I could easily see why the suit was filed. The plaintiff was right on target. Oh, by the way – it had already been turned down in Europe, but the FDA committee didn’t know that [because Pfizer didn’t tell them].

This graph shows the number of prescriptions written by year. Zoloft passed Prozac in 2000 [about halfway through its patent life], and it continued to dominate the market until going generic in 2006 [when it was replaced at the top by its own generic, sertraline]. It was a $30 B drug:
I’ve checked along the way occasionally but, true to the anechoic effect’s power, I missed this report in March and only saw it on a back-of-the-envelope registry inventory this morning [sorry about the length, but I thought it deserved a full read]:
Lawyers and Settlements
by Gordon Gibb
March 24, 2014

A proposed Zoloft class-action lawsuit alleging Zoloft is a defective drug because it offers little more efficacy than a placebo, or so it is alleged, was recently tossed by a federal judge due to a time-barring issue and other legal implications. However all is not lost; the presiding magistrate left the door open a crack for a possible continuation of the complaint, with some revisions.

In Plumlee v. Pfizer Inc., Case No. 5:13-cv-00414, in the US District Court for the Northern District of California, plaintiff Laura Plumlee took Zoloft manufacturer Pfizer to task for marketing a drug that was alleged to be ineffective, with questionable efficacy, due to a claim that most clinical trials found that Zoloft was no more effective than a placebo, or so Plumlee claimed. Her lawsuit alleges that Pfizer purposely omitted, in Zoloft labeling, any studies that showed Zoloft to be ineffective, while favoring studies that showed Zoloft was, indeed, more effective than a placebo. Plumlee also alleged that Pfizer’s marketing and advertising was also misleading in touting Zoloft, an antidepressant, as effective.

However, Plumlee’s claim was dismissed not on her argument of effectiveness, but due to time barring. It has been reported that Plumlee brought her defective drug lawsuit under two statutes observed by the state of California: that of the Unfair Competition Law, and the Consumer Legal Remedies Act and False Advertising Law.
Gulp!
Was plaintiff’s claim time-barred?
The two aforementioned statutes, under California law, carry limitations of four years and three years, respectively. In her ruling dismissing the plaintiff’s claim, US District Judge Lucy Koh ruled that Plumlee’s complaint went beyond the limitation boundaries, given the plaintiff’s claim that she last used Zoloft in 2008 but waited until January 2013 to bring her lawsuit.

Plumlee challenged that such limitations were tolled until 2012, the point at which Plumlee first discovered that Zoloft had been misrepresented. The judge, however, held that Plumlee’s claim to discovering Zoloft’s inadequacies in “early 2012” was too general a frame of time. Judge Koh also was not satisfied with the detail supporting the time and surrounding circumstances of her discovery.

To that end, the judge pointed to the existence of various scientific articles – cited by the plaintiff – that had been published long before Plumlee brought her drug defects lawsuit, and thus did not accept the plaintiff’s claim. However, the judge left the door open.
Hope springs eternal…
All is not lost for this Zoloft defective medical products action
In dismissing the plaintiff’s claim, Judge Koh is allowing Plumlee to amend her complaint going forward. It is telling, as well, that the California judge ruled that Pfizer has the freedom to access certain aspects of the plaintiff’s medical history. Plumlee had sought to block Pfizer’s access to her medical records. A previous magistrate’s ruling that allowed Pfizer access was supported by Judge Koh on grounds that Plumlee had waived any privilege of protecting her medical history when she argued that the statutes of limitations were tolled due to her learning of Zoloft’s alleged deficiencies only in early 2012.

Plumlee, according to various reports, had sought to represent a proposed class of plaintiffs who may have used Zoloft from the point at which it was introduced to market in 1991, through to present day. However, the judge suggested that Plumlee may not be typical of the class, given that she claims to have used Zoloft for a period of three years even though it did not appear to be working for her. Records also demonstrated that the lead plaintiff relied more upon Zoloft marketing and advertising, than the advice of her doctor.

Pundits suggest that in leaving the door open, the judge feels the proposed class-action lawsuit may have merit, in spite of deficiencies exhibited by Plumlee’s claim. The potential, thus, is for Plumlee to amend her claim that satisfies time-barred limitations and other deficiencies as articulated by the presiding judge. Could the proposed class-action lawsuit proceed with a different lead plaintiff?

Harmful drugs are often shown to carry risks, in spite of the position of the US Food and Drug Administration (FDA) that holds that a drug’s benefits outweigh the risks for the class or constituency of patients to which the drug is targeted. In the same vein, however, drug defects can also include deficiencies that suggest a drug is not worth the financial outlay, either by an individual or group, in exchange for potentially limited effectiveness.

The aforementioned Zoloft lawsuit alleges Zoloft does not live up to its promises. The proposed class action, alleging defective medical products (Zoloft, as ineffective), could continue with amendments – but perhaps not in its present form.
I can’t find anything else. I’ll write Baum·Hedlund·Aristei·Goldman, the firm handling the case, to see what I can find out. This is an echo that needs to keep reverberating…

UPDATE: Nosing around looking for email addresses, I ran across this:
Law360
By Sindhu Sundar
September 02, 2014

Pfizer Inc. on Friday defeated for the second time allegations that it greatly exaggerated the efficacy of its antidepressant Zoloft, when a federal judge in California ruled the proposed class action claims are time-barred and dismissed the suit with prejudice.

U.S. District Judge Lucy H. Koh, who in February had dismissed Laura Plumlee’s suit but allowed the plaintiff to amend her suit to address the court’s timeliness concerns, granted Pfizer’s motion to dismiss the suit with prejudice Friday…

Plumlee last bought Zoloft or its generic equivalent in 2008, and by the time she brought her suit in 2013, she had exceeded by at least seven months the statutes of limitations under the various California consumer protection laws she invoked, Judge Koh ruled.

"The court finds that each of plaintiff’s claims is time-barred and that despite being granted an opportunity to amend her complaint, plaintiff has still not met her burden of showing that the statutes of limitations have been tolled by the delayed discovery rule," Judge Koh said in her opinion.

Plumlee, who had filed her original suit in January 2013, claimed that she did not learn about Pfizer’s alleged over-representations about Zoloft’s effectiveness until she watched a "60 Minutes" segment in May 2012, according to the order.

"We are pleased with the decision and believe the court applied California law correctly in ruling to dismiss the case with prejudice," Pfizer spokesman Steven Danehy said in a statement Tuesday. "Pfizer has always believed that the plaintiff’s amended complaint fails to adequately address the deficiencies of the original complaint, which was previously dismissed."

An attorney for Plumlee could not immediately be reached for comment Tuesday.

Plumlee had sought to represent a proposed class of patients who used Zoloft made by Pfizer between the drug’s launch date in 1991 through the present. She claimed that Zoloft’s labeling failed to mention the studies showing it to be ineffective, that Pfizer favored researchers who showed Zoloft to be effective, and that the company’s advertisements misleadingly touted the drug as effective, among other allegations.

She claimed for instance, that Pfizer buttered up doctors with blandishments including ski trips and "fancy" meals, to encourage them to prescribe Zoloft, according to court documents.
Damn! And look at the date! I must’ve heard it in my sleep last night. Back to the drawing board…
Mickey @ 12:08 PM

along the road…

Posted on Wednesday 3 September 2014

Irving Kirsch published an article in 2008 [Initial Severity and Antidepressant Benefits: A Meta-Analysis of Data Submitted to the Food and Drug Administration] that concluded:
Drug–placebo differences in antidepressant efficacy increase as a function of baseline severity, but are relatively small even for severely depressed patients. The relationship between initial severity and antidepressant efficacy is attributable to decreased responsiveness to placebo among very severely depressed patients, rather than to increased responsiveness to medication.
[see first rate madness…]. His 2009 book [The Emperor’s New Drugs: Exploding the Antidepressant Myth] suggested that the antidepressants are simply powerful placebos and offered a strong critique of the "chemical imbalance theory."

This new article capitalizes on the 2004 court order that GSK must post all of its clinical trials on a publicly available web site [GSK Clinical Study Register]. The amount of information on each study is highly variable – from the complete CSR and IPD for Study 329 [the proband for the court order] to short summaries for many other Clinical Trials. But still, we know all of the trials, so the unpublished-studies problem evaporates. While the variable amount of data limits what can be done, this article looks at every RCT done by GSK on Paroxetine that used the Hamilton Rating Scale for either Anxiety [HRSA] or Depression [HRSD], and limits its scope to efficacy [not adverse events]. The key parameter, Effect Size [Cohen’s d], was calculated from reported means, standard deviations, and numbers of subjects – not derived from full data sets. In spite of these limitations, the study seems well done and has plenty to say:

The Efficacy of Paroxetine and Placebo in Treating Anxiety and Depression: A Meta-Analysis of Change on the Hamilton Rating Scales
by Michael A. Sugarman, Amy M. Loree, Boris B. Baltes, Emily R. Grekin, and Irving Kirsch
PLoS·ONE. 08/27/2014 DOI:10.1371/journal.pone.0106337

Background: Previous meta-analyses of published and unpublished trials indicate that antidepressants provide modest benefits compared to placebo in the treatment of depression; some have argued that these benefits are not clinically significant. However, these meta-analyses were based only on trials submitted for the initial FDA approval of the medication and were limited to those aimed at treating depression. Here, for the first time, we assess the efficacy of a selective serotonin reuptake inhibitor [SSRI] in the treatment of both anxiety and depression, using a complete data set of all published and unpublished trials sponsored by the manufacturer.
Methods and Findings: GlaxoSmithKline has been required to post the results for all sponsored clinical trials online, providing an opportunity to assess the efficacy of an SSRI [paroxetine] with a complete data set of all trials conducted. We examined the data from all placebo-controlled, double-blind trials of paroxetine that included change scores on the Hamilton Rating Scale for Anxiety [HRSA] and/or the Hamilton Rating Scale for Depression [HRSD]. For the treatment of anxiety [k = 12], the efficacy difference between paroxetine and placebo was modest [d = 0.27], and independent of baseline severity of anxiety. Overall change in placebo-treated individuals replicated 79% of the magnitude of paroxetine response. Efficacy was superior for the treatment of panic disorder [d = 0.36] than for generalized anxiety disorder [d = 0.20]. Published trials showed significantly larger drug-placebo differences than unpublished trials [d’s = 0.32 and 0.17, respectively]. In depression trials [k = 27], the benefit of paroxetine over placebo was consistent with previous meta-analyses of antidepressant efficacy [d = 0.32].
Conclusions: The available empirical evidence indicates that paroxetine provides only a modest advantage over placebo in treatment of anxiety and depression. Treatment implications are discussed.
First, the graphs [which beg for a bit of clarification]. This one is from the studies of Paroxetine in Anxiety States that relied on the Hamilton Rating Scale for Anxiety [HRSA]. Their outcome variable, Cohen’s d, measures the strength of the drug effect, not simply its statistical significance; the graph plots it against the baseline severity on the HRSA:

In this case, the red and blue lines showing the changes in drug and placebo over the course of the study, and their significant change with severity, are consistent with the regression-to-the-mean error [you’re on your own here]. For my purposes, ignore them. The bottom green line shows the strength of effect for the drug. It is not significantly related to the severity of the Anxiety State. The mean Cohen’s d is 0.27 [I’ve marked it with a horizontal arrow] [recall that a rough interpretation of Cohen’s d is: 0.25 = weak effect, 0.50 = moderate effect, and 0.75 = strong effect]. From this graph, we can conclude that Paroxetine has a definite anxiolytic effect, but that it’s not anything to write home about. It’s not inert, but it’s nowhere even close to a wonder drug.
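One way to get a feel for a number like d = 0.27 is to translate it into the probability that a randomly chosen drug patient ends up better off than a randomly chosen placebo patient [a standard translation assuming roughly normal outcomes – nothing specific to this paper]:

    # "Common language effect size": P(random drug patient improves more
    # than random placebo patient) = Phi(d / sqrt(2)) under a normal model.
    # A d of 0 gives exactly 50%, i.e., a coin flip.
    from math import erf, sqrt

    def prob_superiority(d):
        x = d / sqrt(2)
        return 0.5 * (1 + erf(x / sqrt(2)))   # Phi(x) via the error function

    for d in [0.27, 0.32, 0.50, 0.75]:
        print(f"d = {d:.2f} -> {prob_superiority(d):.0%}")
    # d = 0.27 -> 58%; d = 0.32 -> 59%; d = 0.50 -> 64%; d = 0.75 -> 70%

On that scale, d = 0.27 means barely better than a coin flip – consistent with "not anything to write home about."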

Now to the meta-analysis of the Paroxetine trials in depressed patients. The opposite effect of severity on the pre-post drug and placebo responses [red and blue lines] was not clear to me, but unlike Kirsch’s 2008 meta-analysis of the FDA Approval studies, there was no significant effect of severity on response. The mean Cohen’s d in depression is 0.32 [I’ve again marked it with a horizontal arrow].

The only study in adolescents [the infamous Study 329] is also marked. The other GSK adolescent trials [published after the patent expired] used other rating systems [MADRS, K-SADS-L and CDRS-R] and were both decidedly negative [see paxil in adolescents: “five easy pieces”…].

They also assessed the impact of trial length on the strength of the effect and found no significant relationship:

And there’s more [as they say on those television ads]. They also looked at whether there was a difference in Cohen’s d between those studies done before FDA Approval or afterwards, and whether there was a difference between published and unpublished studies. While there were differences in the means in both cases, neither achieved statistical significance:
…the mean paroxetine-placebo effect size did not differ significantly as a function of approval status [Q[1] = 3.27, p = .077], although there was a trend towards a greater drug-placebo benefit in pre-approval trials [Pre-Approval: d = 0.41 [95% CI: 0.30,0.53]; Post-Approval: d = 0.29 [95% CI: 0.22,0.36]].
The weighted mean difference between paroxetine and placebo was not significantly different between published and unpublished trials [Q[1] = 1.50, p = .221]. Published trials [k = 16] had a weighted mean effect size of d = 0.36 [95% CI: 0.27,0.44] and unpublished trials [k = 11] had an effect size of d = 0.28 [95% CI: 0.20,0.37].
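The Q statistics quoted there are just a between-subgroup heterogeneity test: the difference between two subgroup effect sizes, squared, over the sum of their variances, compared against chi-square with 1 df. Here’s a rough reconstruction from the reported intervals [the SEs are backed out of the 95% CIs, so it only approximately reproduces the paper’s Q = 1.50]:

    # Subgroup-difference test: Q = (d1 - d2)^2 / (se1^2 + se2^2),
    # compared against chi-square with 1 df [3.84 at p = .05].
    def se_from_ci(low, high):
        """Standard error recovered from a 95% confidence interval."""
        return (high - low) / (2 * 1.96)

    def subgroup_q(d1, se1, d2, se2):
        return (d1 - d2) ** 2 / (se1**2 + se2**2)

    # Published d = 0.36 [0.27, 0.44] vs unpublished d = 0.28 [0.20, 0.37]
    q = subgroup_q(0.36, se_from_ci(0.27, 0.44), 0.28, se_from_ci(0.20, 0.37))
    print(f"Q(1) = {q:.2f}")   # ~1.7 here vs the paper's 1.50 [CI rounding]
    # Q < 3.84 means p > .05: no significant published-vs-unpublished
    # difference, consistent with the quoted p = .221.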
I obviously liked this study. They used the GSK Clinical Study Register to produce some solid conclusions. Paroxetine has antidepressant qualities but is nowhere near being a powerhouse. They used the term modest to describe its effect [I guess modest is between weak and moderate], and that fits my clinical experience with the SSRIs. I thought it was surprising how consistent all of these studies really were. Eliminating publication bias gives us a fuller picture of the drug than we had before. They have a thorough discussion of the difference between statistical significance and clinical significance in the paper, and I recommend it. This can’t be called a thorough vetting of Paroxetine’s efficacy, however, because they were working with GSK-derived means and variances rather than the actual Individual Participant Data [IPD] [see it matters…].

They didn’t look at the critical Adverse Event data, so this meta-analysis only addresses one side of the risk-benefit equation. They have a nice discussion of the Adverse Events in the paper, but it is not data-based from their own work. This article shows what a little bit of Data Transparency can tell us, but it’s just not enough. There’s the company machine operating between most of this data and the raw clinical trials. Considering where we’ve lived for decades, I think it’s a landmark article, but only a marker along the road to where we need to be.

Parenthetically, I find myself thinking that this efficacy data holds for this class of drugs. I don’t prescribe Paroxetine, not because of efficacy differences, but because of its high propensity for withdrawal symptoms, reported here as 66%. So it appears to me that there are differences among these drugs in that area, but it’s just my impression and what I’ve read of the impressions from others. I look forward to the day when we have compilations we can trust on the Adverse Events for this drug [and all of our other medications]. I don’t think we really have that information in an accurate form for any of the SSRIs, SNRIs, or the Atypical Antipsychotics…
Mickey @ 9:17 PM

that maturity…

Posted on Tuesday 2 September 2014

    We shall not cease from exploration
    And the end of all our exploring
    Will be to arrive where we started
    And know the place for the first time…
    Little Gidding  T.S. Eliot 1942

I hate being so repetitive with my quotes. This one has had many re-runs here. But I guess that’s the way it is with the good ones. Maybe next time I’ll use the story of the Holy Grail or the Wizard of Oz to say the same thing. Or maybe it’s just part of the experience of being an old man, to begin to see how cyclic human life can be. I came to psychiatry interested in psychotherapy at a time of transition. The psychiatry of the time was operating at its most "eclectic," or so I thought. There were models galore – psychodynamic, biological, medical, existential, social, behavioral, etc. Come one, come all. I thought that was great, myself. And then things changed dramatically and one was apparently supposed to choose – specifically choose biomedical. So those of us who didn’t moved to the side [because there was no place else to go]. At least that’s how it seemed. But that’s ancient history, albeit my own. I sure didn’t start writing well into retirement to rehash those days. I started writing because I woke up to the fact that a dominant paradigm in psychiatry throughout my career – psychopharmacology – had been invaded by industry and was more corrupt than I could’ve imagined. So I exhumed skills from a former career in hard-science-medicine and began to look at what I consider the carnage that resulted from an academic-pharmaceutical alliance that has afflicted a too-big sector of psychiatry.

It appears I came along at another time of transition. Now there are real moves to clean up some of the side effects that came with the neoKraepelinian revolution that swept through the specialty of psychiatry in my early days. Industry had its day in the sun and seems, for the moment, to be moving on to greener pastures. The Clinical Trial world, seat of some of the major corruption, is under a microscope and the target of a growing movement for Data Transparency. And people at least say "bio-psycho-social" model frequently now – a term that for a long time was only whispered. Then today, I look over the new American Journal of Psychiatry and read this:
by Kendler KS
American Journal of Psychiatry. 2014 May 16. [Epub ahead of print]

This essay addresses two interrelated questions: What is the structure of current psychiatric science and what should its goals be? The author analyzed all studies addressing the etiology of psychiatric disorders in the first four 2013 issues of 12 psychiatry and psychology journals. He classified the resulting 197 articles by the risk factors examined using five biological, four psychological, and three environmental levels. The risk factors were widely dispersed across levels, suggesting that our field is inherently multilevel and already practicing empirically based pluralism. However, over two-thirds of the studies had a within-level focus. Two cross-level patterns emerged between 1) systems neuroscience and neuropsychology and 2) molecular or latent genetic factors and environmental risks. The author suggests three fundamental goals for etiological psychiatric research. The first is an eclectic effort to clarify risk factors regardless of level, including those assessed using imaginative understanding, with careful attention to causal inference. An interventionist framework focusing on isolating causal effects is recommended for this effort. The second goal is to clarify mechanisms of illness that will require tracing causal pathways across levels downward to biological neuroscience and upward to social factors, thereby elucidating the important cross-level interactions. Here the philosophy of biology literature on mechanisms can be a useful guide. Third, we have to trace the effects of these causal pathways back up into the mental realm, moving from the Jasperian level of explanation to that of understanding. This final effort will help us expand our empathic abilities to better understand how symptoms are experienced in the minds of our patients.

Conclusion: …A vigorous debate between different scientific perspectives on psychiatric illness is to be valued. More problematic has been our tendency to develop “fervent monism.” This position, at times strongly advocated by psychoanalysis, early biological psychiatry, social psychiatry, and most recently, molecular psychiatry, is that their approach was the only valid one. Fervent monism, especially when applied to the field of human behavior, reflects epistemic hubris. It is helpful, in concluding, to revisit an old but central question: Is there a single “best” level at which to address the causes of psychiatric illness? Do we expect that over time one specific level of explanation for psychiatric illness will “win” the scientific competition and beat out all other kinds of explanations? I think that the mere posing of this question illustrates its implausibility. We are “stuck” with the dappled causal world for psychiatric disorders. In the introductory epigraph to this essay, Chang makes a point worth re-emphasizing. It is only the immature fields of science that advocate monism. Tolerance for diversity and humility come with scientific maturity.
Saying that "risk factors were widely dispersed across levels, suggesting that our field is inherently multilevel and already practicing empirically based pluralism" may well be a bit of an exaggeration, if he’s referring to psychiatric practice or research. To this aging pluralist, it has felt and still feels pretty monistic – even with a break in the clouds. My point is only that it has been a long time since I’ve seen an essay in the American Journal of Psychiatry that acknowledges the "dappled causal world for psychiatric disorders" and adds "It is only the immature fields of science that advocate monism. Tolerance for diversity and humility come with scientific maturity". We could use some of that maturity…
Mickey @ 8:31 PM

it matters…

Posted on Monday 1 September 2014

I wrote this less than four years ago [selling seroquel I: background…]:
This email response to a researcher who was requesting funding from Zeneca several months after the F.D.A. approved Seroquel might seem odd or even Machiavellian to a Basic Scientist, a Practicing Clinician, or a patient-to-be, but if your business is selling the product, it makes perfect sense:
Zeneca had poured years and a lot of money into getting their drug approved. Now it was time to focus on reaping the benefits of their hard work…
I remember feeling kind of shocked when I ran across it. I guess I was a "newbie" – naive to a fault. But that was just the beginning of a long series of similar disillusionments in the interim. Like most, the more I read and looked into things, the more my attention was drawn to Clinical Trials, primarily Industry-Funded Clinical Trials. When I think about it now, I can’t imagine being so gullible as to believe that Industry-Funded Clinical Trials would be on the up-and-up. I guess I thought that, after all, they were written by the upper levels of academic psychiatry, oblivious to how the game had come to be played [I really was a "newbie" back then]. Over time I realized that others already knew all of this, and I joined their growing outcry. We’re approaching a potential landmark – the European Medicines Agency will release its policy – and we’ll find out whether they will stick to their initial promise of full Data Transparency or whether they will cave in like it appeared they were going to do a few months ago. So, I thought I’d do a review of the scorecard to use when they announce that policy in early October. For many, this will be old hat. For others, it will be TMI [too much information]. But if it’s not something you know, the EMA decision will be undecipherable:
    The PROTOCOL is a formal document that lays out in detail how the study is to be conducted. It has multiple functions, but for our purposes, it’s important because there are a number of ways that a study can be skewed in favor of a drug – like picking a dose for the comparator drug that is either too low to be effective or too high and likely to cause a number of side effects and discontinuations. It’s important because it lays out the primary and secondary variables and how they will be analyzed. All of this is a priori, before the study is done. One reason is that given enough creativity, one can frequently find significance after the fact by running tests on any and everything. By making a priori declarations of both target and technique, the PROTOCOL assures us that we’re not being taken down some after-the-fact garden path. The PROTOCOL is an essential element for Data Transparency.

    The CRFs [CASE REPORT FORMS] are the primary raw data source, and may be hundreds of pages long for each research subject. They are the various forms filled out by the study coordinators and have the intake information, the actual forms of the subjects’ tests, the trial staff’s recording of adverse events, the medication records, etc. All of this information is recorded before the blind is broken. Things like adverse reactions are recorded in plain English rather than coded by some system. This is the data in its rawest available form [see below]. This information would be required for any serious vetting of adverse events, for reasons stated below.

    The IPD [INDIVIDUAL PARTICIPANT DATA] is the compilation of the data from the CRFs in tabular form – spreadsheets in one format or another, data transposed from the CRFs and ready for analysis [separated by treatment, e.g. after the blind has been broken]. For an analysis of the efficacy data, this is all that would be required [see the sketch after this list]. The adverse event data is also usually in tabular form, either abbreviated or encoded in some standardized way. If there are no questions about adverse events, the IPD tables would be fine for a reanalysis. But if adverse event data is in question, the actual CRFs are required reading to avoid lost-in-translation errors, as tedious as that might be.

    The CSR [CLINICAL STUDY REPORT] is a long narrative document that tells the story from start to finish, including the results. Sometimes it contains the actual data [IPD] as an Appendix and sometimes it has only summary tables. It’s a version of the published paper in the long form – typically several hundred pages. Depending on what questions are being asked and how thoroughly they include the IPD, it might be adequate for checking a study – or it may not. It’s the report that industry would like to count as Data Transparency, but it’s usually not close enough to the raw data to suffice.

    The published ARTICLE is what we’re used to seeing. There was a time in my lifetime when we thought that what we read in our journals was the real deal, but that time has passed. The many ways that data has been manipulated, misrepresented, jury-rigged, or seen through rose-colored glasses have become a source of daily amazement to me in my retirement years. As I’ve often said, never in my wildest dreams. There are so many examples that industry really doesn’t have much of a legitimate argument for their claim of proprietary ownership of Clinical Trial data. Their record of gross abuse of that privilege is self-indicting.
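To make the IPD point concrete, here’s a minimal sketch of what a reanalysis of an efficacy endpoint might look like once the participant-level table is in hand. The file name, the column names, and the endpoint [change in HAM-D score] are hypothetical stand-ins, not any sponsor’s actual format:

```r
# Minimal sketch of reanalyzing a primary efficacy endpoint from an IPD table.
# The file name and column names below are hypothetical.
ipd <- read.csv("trial_ipd.csv")   # one row per participant

# primary endpoint: change in HAM-D score from baseline to endpoint
ipd$change <- ipd$hamd_endpoint - ipd$hamd_baseline

# a quick per-arm summary to eyeball the effect
aggregate(change ~ arm, data = ipd, FUN = mean)

# drug versus placebo, tested exactly as the PROTOCOL specified a priori
t.test(change ~ arm, data = ipd)
```

The point is that the analysis itself is a few lines once the IPD exists in usable form – the hard part is getting the table at all, and trusting that it faithfully reflects the CRFs.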

I know this is all repetitive and tedious. If you already know it, it’s boring. If you don’t know it, it’s still boring. But the European Medicines Agency’s decision and Canada’s Vanessa’s Law [see doing the right thing…] are the two concrete markers for the cause of Data Transparency. There are lots of questions about Clinical Trials and their ultimate value, but right now the point is anterior to those considerations – basic honesty in scientific reporting. The tools of statistics, analysis, and presentation have been grossly perverted too frequently, for too long.

Just having the raw data available is of no value if someone doesn’t analyze the published studies that are in question. Having done some of that in the years between my naivete four years ago and the present, I can assure you that it’s not for the faint of heart. People who take on the task of vetting someone else’s data need to be well versed in modern analytic techniques, or at least willing to take the time to get up to speed. If reading this post is tedious and boring, it’s a piece of cake compared to doing a careful analysis of a Clinical Trial. But the evidence of the last quarter century dictates that there is really no other choice if physicians are to be accurately informed about the medications they prescribe and patients are to know what they’re taking. So when the EMA policy comes out next month, read it carefully to see what is actually going to be available. It matters, and the devil is in the details…


To wit:

Pharmalot
by Ed Silverman
Aug 28, 2014

Amid ongoing debate over the extent to which clinical trial data should be divulged, a new survey finds that an overwhelming majority of members of the Royal College of Physicians in the U.K. believe that such information should be disclosed and accessible. To wit, 95% say all trials should be registered; 89% say increased publication of results, including those that are negative, will lead to better medicines and patient healthcare; 81% agree that a “moral duty” exists for drug makers to make completed data available to trial participants, the public and the scientific community; and 87% say increased scrutiny of data will lead to better science and research.

At the same time, 10% believe increased publication and dissemination of clinical trial results will harm commercial interests of drug makers and only 18% say that increased access to trial data will harm commercial interests. Just 5% believe companies should not be required to release clinical trial data into the public, and only 27% say publication of completed data should be linked to market authorization.

“The world has changed,” writes Keith Bragman, president of the Royal College’s Faculty of Pharmaceutical Medicine, a standards-setting body at the Royal College, in remarks accompanying the survey results. “Society now demands greater transparency in clinical trials.”

Data disclosure, you may recall, has been a contentious topic following scandals over safety or effectiveness data that was not publicly shared. The survey, which queried 430 of the faculty’s 1,500 members, comes as regulators, academic researchers and drug makers dicker over policies for releasing trial data. The issue has been particularly fraught in Europe, where the European Medicines Agency has repeatedly delayed the release of its new policy amid criticism that the agency has backtracked on a previous commitment to new-found openness. Last month, the regulator indicated formal adoption is scheduled to take place at an October board meeting…
Mickey @ 10:51 PM

justification for “what they’re for”…

Posted on Monday 1 September 2014

This is an extension of the last post and its comments, specifically:

  1. "and this graph suggests that the patients essentially moved next door into our prisons"
  2. " If there is ever a place where the parable of the blind men and the elephant fits like a glove, this is it"
  3. "I begin to wonder about diagnosis. How many have psychotic illnesses? Are these the homeless chronic patients who have been picked up for minor crimes? How many are primarily substance abusers?"
  4. "the time for decrying, blaming, or ignoring this has passed"
  5. "This effort should be lead by the National Institute of Mental Health and the Substance Abuse and Mental Health Services Administration"
We see the world through the lens of our own experience, our biases, our desires. What possible other sources can we rely on? So when we accuse the pharmaceutical industry of being driven by the wish to sell drugs, we are only stating the obvious. We are given to simplifying the motives of others while thinking that our own are complex, nuanced, well considered. It’s just what we do – over and over. Simplifying, discounting, blaming, even demonizing – all part of being human. And we enjoy nothing more than finding a like-minded cohort so we can all do it together. That cynical view I just expressed is itself an example of what it describes. We apply it to others, but rarely to ourselves [see #2].

I posted a couple of graphics [what they’re for…] that paint the picture that chronic mental patients are being warehoused in correctional facilities because there’s no place else for them to go in the post-deinstitutionalization era. I believe that, and I don’t think it’s a good thing. But I don’t really know how accurate those figures are, and I really don’t know what the people who compiled those figures consider chronic mental illness. How much is substance abuse? How much is chronic psychosis? The figures I found are mostly compiled by the correctional institutions and subject to the bias of their feeling overwhelmed. In the comments to the last post, those questions were raised. We need accurate information, which is the only way to really understand the magnitude of this problem. And insofar as I can see, we don’t have it. And because of the variability of our states and state governments, it has to be a national information-gathering effort for any accuracy.

When I read about it, there’s way too much "it has been estimated…" This is something we need to know, not estimate. And it’s science – epidemiology is what it’s called. And who should gather that kind of information? The CDC? The NIMH? SAMHSA? I expect they all have some version of that information, but it lags, and if it’s in a usable form, I can’t locate it. That’s why I say, "This effort should be led by the National Institute of Mental Health and the Substance Abuse and Mental Health Services Administration" [see #5]. Our NIMH has chased the very shaky World Health Organization data about the prevalence of mental illness and dire predictions of the future. But they’ve largely ignored this problem. And they haven’t taken the first step, defining the magnitude and nature of the problem. They’ve spent their time preoccupied with neuroscience and the monocle of psychopharmacology. The state of chronic mental health care in the US is the number one scientific question on the mental health table, and prison is the logical place to start. In the comments, George Dawson says that the NIMH is a bad choice because it is a basic science organization. I happen to think epidemiology is a basic science par excellence. Likewise, the CDC, our traditional infectious disease tracker, needs to join in the gathering and tracking, mainly because of its proven expertise. The fact that we don’t know the magnitude and nuances of this problem is to our shame, all of us – thus Dr. Frances’ title "The Hall of Shame – Who Is Failing the Severely Ill?" [see #1, #3, #5].

The essence of science is to find answers, or the best available answers, to things we don’t know. We [human-kind] don’t know how to deal with chronic psychosis effectively, and we never have. We thought we had at least separated the problem of chronic psychosis from antisocial behavior and criminality, but the charts above suggest that even that was not a solid conclusion. So I personally think we need to start where we are rather than indulge our natural propensity to ignore problems we don’t know what to do with, to blame the state of affairs on each other, or to self-righteously decry how things are without taking action [see #4].

George Dawson suggests that this problem is the result of the influence of Managed Care, or perhaps government agencies dropping the ball. Sandra Steingard suggests the surge of substance abuse problems and an adherence to the medical model might be part of the problem. DJ Jaffe implies that the organizations that should be involved are off dealing with lesser, more lucrative issues. I happen to think all of those things, and that our governmental agencies have become playgrounds for ideologues. But it doesn’t really matter what we think is the cause of the problem [see #2]. All that actually matters is that what already looks like a massive problem is getting worse, and we don’t have a solid handle on the details needed to understand it. Maybe I’m out in left field thinking existing agencies can figure out how to get us the information in a detailed and unbiased form and put us on the road to a best-case solution. Maybe we need a Task Force, a Manhattan Project, a NASA…
Mickey @ 12:23 PM

what they’re for…

Posted on Saturday 30 August 2014

Ever since I ran across this graph of the rates of institutionalization, I’ve been mulling over the plight of the severely mentally ill during my time in psychiatry [that faint line above the abscissa marks when I was directly involved]. Writing about it a week or so ago [functional improvement…], I called it Transinstitutionalization – a term from those days predicting that this is what would happen. It does seem naive in retrospect to think that one could Deinstitutionalize the patients in our massive State Hospital system simply by shutting it down. The planned Community Mental Health system was never fully realized, and this graph suggests that the patients essentially moved next door into our prisons:

Looking around to find the magnitude of the problem, I ended up on the National Institute of Corrections web site where I found this:


Mentally Ill Persons in Corrections

Mentally ill persons increasingly receive care provided by corrections agencies. In 1959, nearly 559,000 mentally ill patients were housed in state mental hospitals. A shift to "deinstitutionalize" mentally ill persons had, by the late 1990s, dropped the number of persons housed in public psychiatric hospitals to approximately 70,000. As a result, mentally ill persons are more likely to live in local communities. Some come into contact with the criminal justice system.

In a 2006 Special Report, the Bureau of Justice Statistics estimated that 705,600 mentally ill adults were incarcerated in State prisons, 78,800 in Federal prisons and 479,900 in local jails. In addition, research suggests that "people with mental illnesses are overrepresented in probation and parole populations at estimated rates ranging from two to four times the general population." Growing numbers of mentally ill offenders have strained correctional systems.


There are so many ways to think about this, most of them suffused with cynicism. When I’m in a cynical mood, I can look at these numbers and cast blame in all directions, but then I come up short, because I’m not really sure what to do about it either. If there is ever a place where the parable of the blind men and the elephant fits like a glove, this is it:

Everyone’s looking at the part that affects them, and the big picture gets lost in the shuffle. Like most mental health types, I tend to accept the MAD vs BAD distinction and want to separate out the mental patients and get them out of the prisons and into the community – the battle cry of my era and the Community Mental Health Movement. Allen Frances and DJ Jaffe have an excellent post up that somewhat takes that perspective [see The Hall of Shame – Who Is Failing the Severely Ill?]. I agree with their every word, but always worry that it will fall on deaf ears like it has for such a long time. I was actually impressed with some of the information and policy discussions on the National Institute of Corrections web site, as well as a report I found there [Improving Outcomes for People with Mental Illnesses under Community Corrections Supervision] focused on the parole system. They’re much more mental illness savvy than I realized.

Again, like most mental health types, I look at that pie graph up there and after I get over the magnitude of the problem, I begin to wonder about diagnosis. How many have psychotic illnesses? Are these the homeless chronic patients who have been picked up for minor crimes? How many are primarily substance abusers? I haven’t been able to find those numbers yet, but I’m still looking.

Looking at that graph at the top, I have to remind myself that it is not populations in jail or mental hospitals, it’s the rate of institutionalization. These aren’t the patients from the days of Deinstitutionalization, they’re a new generation. And it looks to me as if we have a surprisingly fixed rate of removing people from our society for one reason or another. The graph itself is from a study of violent crime – homicide [An Institutionalization Effect: The Impact of Mental Hospitalization and Imprisonment on Homicide in the United States, 1934–2001]. And what it shows is that there was a dramatic increase in the homicide rate in the 1970s and 1980s when Institutionalization had its big dip. It’s a complex legal article, and you’ll have to read it yourself to figure out what they make of their findings:

Right now, Law Enforcement is having to carry the ball for the most important mental health problems in our country. While they seem to be doing a credible job under the circumstances, it’s not what their system was designed to do. Here’s what Dr. Frances and DJ Jaffe have to say:
Dr Jaffe writes:
The bipartisan Helping Families in Mental Health Crisis Act (HR3717) has wide support among those who advocate for the 5 percent of the population with the most serious mental illnesses. But there are parts of the mental health industry that ignore the seriously ill. Over 500,000 of the most seriously ill are incarcerated or homeless, largely because the mental health industry focuses on all others.
  • Substance Abuse and Mental Health Services Administration: SAMHSA distributes over $400 million in mental health block grants to states and tells them how to spend it. But as Representative Tim Murphy noted, "SAMHSA has not made the treatment of the seriously mentally ill a priority… It’s as if SAMHSA doesn’t believe serious mental illness exists." SAMHSA encourages states to spend block grants on the highest functioning. It wants to replace the scientific medical model with its internally invented recovery model, and creates its own "illnesses" — bullying and trauma being the most recent.
  • Consumer Groups: The National Coalition for Mental Health Recovery (NCMHR) is the umbrella organization for SAMHSA-funded consumer groups like the National Empowerment Center and National Mental Health Consumers Self Help Clearinghouse. Rather than advocating for the seriously ill, they advocate for anyone with "lived experience." They believe everyone should self-direct their own care, thereby ignoring those too sick to do so.
  • Mental Health Lawyers: The Bazelon Law Center, ACLU, the National Disability Rights Network (NDRN) and State Disability Rights organizations not only ignore the most seriously ill, their actions cause harm. These non-profit law centers fight against Assisted Outpatient Treatment and creation of hospital beds for the most seriously ill thereby making incarceration inevitable for many.
  • Mental Health America: Mental Health America is a trade association for service providers. Rather than serious mental illness, MHA is "dedicated to helping all Americans achieve wellness." MHA of Essex County New Jersey is one of the few chapters that does try to help the most seriously ill.
  • National Council for Community Behavioral Health: This organization represents behavioral healthcare conglomerates. They mainly lobby for funding Mental Health First Aid (MHFA) classes they sell. MHFA is based on the false premise that the mentally ill are so asymptomatic that special training is needed to identify them, and that once identified, services are available to refer them to. MHFA is not proven to help the seriously mentally ill.
  • National Alliance on Mental Illness: Historically, NAMI did focus on serious mental illness because it was founded by families of the very seriously ill. In 1993, NAMI argued for parity for people with severe mental illness. In 1995, NAMI endorsed various forms of involuntary treatment when needed. Cut to today. Instead of the 14 million who are most seriously ill, NAMI National now claims to represent 60 million people with any mental health issue. Some brave state and local chapters like NAMI/NYS have refused to follow their lead and they still focus on helping people with serious mental illness.
  • American Psychiatric Association: The APA represents psychiatrists and publishes the Diagnostic and Statistical Manual that determines what is and isn’t a mental health problem and therefore gets a billing code. It is in the APA’s interest to have everyday problems declared a disorder so members can be reimbursed for treating them. A subset of psychiatrists do treat the seriously ill, and the immediate past president, Dr. Jeffrey A. Lieberman, has gone out of his way to increase the visibility of serious mental illness, but serious mental illness is still only a small part of the APA’s focus.
  • American Psychological Association: This APA represents "130,000 researchers, educators, clinicians, consultants and students." The most popular subjects for their members are addiction, bullying, marriage and divorce, personality, sexual abuse, and depression, not serious mental illness.
  • Celebrity Centric Advocacy Organizations: None of the 29 events sponsored by The Rosalynn Carter Symposium on Mental Health Policy focused on serious mental illness. Patrick Kennedy’s One Mind for Research is primarily involved in post-traumatic stress disorder, traumatic brain injury, and stigma education, not schizophrenia and bipolar. He has used The Kennedy Forum on Mental Health to call for an end to the IMD Exclusion, but has not spoken out on important initiatives like implementing Assisted Outpatient Treatment or criticized the CMHCs created by his uncle for refusing to serve the most seriously ill.
  • Law Enforcement: Ironically, this is the one bright spot. Law enforcement organizations like The National Sheriffs Association and the New York State Association of Chiefs of Police have stepped in to fill the void left by the mental health industry’s abandonment of the most seriously ill. They’ve become powerful advocates for increasing hospital beds for the seriously ill and are working to force the mental health system to stop ignoring them. Law enforcement is vigorously supporting Rep. Tim Murphy’s Helping Families in Mental Health Crisis Act, and working with families of the seriously ill helped it gain 95 cosponsors from both parties. Those who want to help people with serious mental illness should ask their Representative to support this bill.
From Dr. Frances:
And I would add one more name to DJ’s shame list. The National Institute Of Mental Health devotes almost all of its enormous research budget to glamorous, but very long shot, biological research that over the last four decades has contributed exactly nothing to the treatment and lives of the severely ill. Surely, biological progress will eventually be made, but at best it will take decades to have any impact on the current real world problems of the mentally ill.
The only thing I would add at this point to their synopsis is that the time for decrying, blaming, or ignoring this has passed. It’s time for the mental health agencies and the professional organizations to turn their attention to where our most-in-need patients actually live – our jails and prisons. And I would amplify Dr. Frances’ point. This effort should be led by the National Institute of Mental Health and the Substance Abuse and Mental Health Services Administration. That’s what they’re for. Psychiatry came into being to care for these specific patients, and we’ve abandoned them…
Mickey @ 10:15 PM

pixels?…

Posted on Tuesday 26 August 2014

My last post [and wasted research dollars…] led me to Dr. Nemeroff’s 1984 paper announcing that Corticotropin-Releasing Factor [CRF] is significantly elevated in the CSF [cerebrospinal fluid] of patients with Major Depressive Disorder – a thirty year old observation that has figured heavily in his research since then, culminating in the clinical trial listed in the last post. It’s a short paper in Science and it’s been open on my desktop for several days. The graph haunts me when I look at it. Here’s the abstract and that one figure from the paper, followed by a description of the analytic methods from the paper:
by Nemeroff CB, Widerlöv E, Bissette G, Walléus H, Karlsson I, Eklund K, Kilts CD, Loosen PT, Vale W.
Science. 1984 Dec 14;226(4680):1342-4.

The possibility that hypersecretion of corticotropin-releasing factor (CRF) contributes to the hyperactivity of the hypothalamo-pituitary-adrenal axis observed in patients with major depression was investigated by measuring the concentration of this peptide in cerebrospinal fluid of normal healthy volunteers and in drug-free patients with DSM-III diagnoses of major depression, schizophrenia, or dementia. When compared to the controls and the other diagnostic groups, the patients with major depression showed significantly increased cerebrospinal fluid concentrations of CRF-like immunoreactivity; in 11 of the 23 depressed patients this immunoreactivity was greater than the highest value in the normal controls. These findings are concordant with the hypothesis that CRF hypersecretion is, at least in part, responsible for the hyperactivity of the hypothalamo-pituitary-adrenal axis characteristic of major depression.

"The results (see Fig. 1) were statistically analyzed by both parametric [analysis of variance (ANOVA) and Student-Newman-Keuls test] and nonparametric (Mann-Whitney U test) methods."

One of the most frustrating things about papers like this is that the raw data isn’t available, even if one has the time to go over it in detail. Once again, I find myself looking at a graph that I’m told is meaningful, significant, has something important to say about a major psychiatric syndrome. And what I see looks like a trivial difference that is probably meaningless, and that I doubt is even significant. So I did something that I’ve been tempted to do many times. I opened it in a graphics program and reconstituted the data by measuring the pixel count to the center of each data point, and then using that table, the baseline, and the ordinate scale to reproduce the data. I wouldn’t recommend this on a Nobel Prize application or even in a paper, but I thought I’d give it a shot because I don’t believe the analysis is correct, or correctly done [the next paragraphs are only for the hardy].
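For the hardy, the pixel-to-value conversion is just a linear mapping. Here’s a minimal sketch in R – the pixel coordinates are hypothetical placeholders for whatever one actually measures in the graphics program:

```r
# Minimal sketch of converting measured pixel positions back into data values.
# All pixel coordinates here are hypothetical placeholders.
# Calibration: pixel rows of two known values on the ordinate.
y_px_baseline <- 480   # pixel row of the zero line
y_px_ref      <- 80    # pixel row of a known reference gridline
ref_value     <- 100   # data value [pg/ml] at that reference gridline
units_per_px  <- ref_value / (y_px_baseline - y_px_ref)

# measured pixel rows at the center of each data point
points_px <- c(455, 430, 390, 310, 270)

# the image origin is at the top, so distance is measured up from the baseline
values <- (y_px_baseline - points_px) * units_per_px
round(values, 1)
```

The absolute values are only as good as the pixel measurements, but the rank order of the points – which is all the non-parametric tests use – survives the conversion intact.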

So armed with my little made-up table, I proceeded to the analysis. It says that they used an ANOVA. That means treating the numbers as a continuous variable. In an ANOVA with four groups, first you check the whole dataset to see if there is any significance to the grouping. If there is, then you test the groups against each other to locate the significant difference. But with a small dataset like this, where the assumptions of ANOVA [normal distribution] are questionable, it is more accurate to use a non-parametric statistic that only considers the ranking of each value, not its magnitude. With four groups, the drill is the same. First one tests the whole dataset to see if the grouping is significant [Kruskal-Wallis]. If it is, you test the groups against each other to find the significant differences [Mann-Whitney]. If you read their paragraph [in italics], it’s hard to figure out exactly what they did, but it looks like some steps were skipped. Here’s my version using the R statistical package.
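For anyone who wants to run the same drill, here’s a minimal sketch of the R commands involved – assuming the reconstituted values sit in a data frame d with columns crf and group, which are my hypothetical names, not anything from the paper:

```r
# Sketch of the omnibus-then-pairwise drill described above.
# Assumes a data frame 'd' with columns 'crf' [pg/ml] and 'group'
# [NORMAL, MDD, SCHIZ, DEMENTIA] -- hypothetical names.
d$group <- factor(d$group)

# Parametric route: omnibus ANOVA first...
summary(aov(crf ~ group, data = d))
# ...then pairwise comparisons only if the omnibus test is significant
pairwise.t.test(d$crf, d$group, p.adjust.method = "holm")

# Non-parametric route: omnibus Kruskal-Wallis first...
kruskal.test(crf ~ group, data = d)
# ...then pairwise Mann-Whitney [Wilcoxon rank-sum] tests if warranted
pairwise.wilcox.test(d$crf, d$group, p.adjust.method = "holm")
```

The paper’s Student-Newman-Keuls step isn’t in base R, so the pairwise tests above stand in for it; the logic – omnibus test first, pairwise comparisons only if it passes – is the point.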

The top green value [p = 0.007656] says that the ANOVA is significant [p<0.05]. But in the table of pairwise comparisons, the difference between NORMAL and MDD does not achieve significance [p = 0.10858]. In the non-parametric test, the overall Kruskal-Wallis test of the table is not significant [0.1485]. There’s nothing there. Whether my crude method is valid or not, it sure doesn’t say this:
"The CSF concentration of CRF-LI was significantly increased (by both methods of statistical analysis) in patients with major depression compared to either the normal controls or the patients with schizophrenia or senile dementia."
My point in playing this little game is that we deserve access to raw data for this very reason. This 30 year old study has been rehashed and discussed for years, and has been nuclear to several grant requests, including the Clinical Trial in the previous post. It looks like a thorough vetting thirty years ago might well have put it to rest. I can’t find further studies to confirm this finding, and nothing that suggests that this compound has any solid connection with PTSD. If you haven’t figured it out yet, I think this whole line of research is based on unsubstantiated speculations.

As you may recall, when we looked at Dr. Nemeroff’s NYU Grand Rounds and London lecture to the Institute of Psychiatry, we were alerted to a study reported as positive that Dr. Nemeroff himself had reported as based on an error – the significance disappeared – yet he presented it as a valid study in those presentations [see has to stop…]. So the best predictor of future behavior is past behavior. Now we have GSK, the VAH, and the NIMH chasing some new drug as a treatment for PTSD based on the very shakiest of speculations. Shame on him. Shame on them. And shame on journals that don’t vet questionable studies like this.

Maybe we ought to say shame on me too for using a pixel count to get my numbers. But instead of that – why not support Data Transparency, so I don’t have to resort to extreme measures to confirm my reaction to that graph? Like I said, this kind of silliness has to stop…


Whoops: [for the even more hardy] I left out this plot from the R package. The upper and lower borders of the "boxes" represent the 25th and 75th percentiles of the points. The fact that the medians [bold horizontal lines] aren’t centered in the boxes points to a skewing of the data [not normally distributed], suggesting that the ANOVA is not the best choice of statistics, and that the non-parametric test is the more appropriate choice [Kruskal-Wallis]. My method of data capture is also more likely to be accurate using only the rank order.
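For completeness, the plot itself is a one-liner, using the same hypothetical data frame d as in the sketch above:

```r
# Box plot of CSF CRF values by diagnostic group, using the hypothetical
# data frame 'd' from the earlier sketch. boxplot() draws the median as the
# bold line and the 25th/75th percentiles as the box borders -- an off-center
# median is a quick visual check for skew.
boxplot(crf ~ group, data = d,
        xlab = "Diagnostic group",
        ylab = "CSF CRF-like immunoreactivity [pg/ml]")
```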

Mickey @ 8:12 PM