important work…

Posted on Saturday 29 August 2015

Well look here. Those two guys from in the details…, Tom Jefferson and Peter Doshi, just popped up again. I didn’t know we’d hear from them again within the week! And they brought a new friend. I know that the topic of Clinical Study Reports [CSRs] isn’t the sexiest of blog topics, but important things are like that – hiding down in the cracks or behind a bush. These Clinical Trials and their reporting are in the absolute eye of a storm that threatens to overrun medical care with the influence of commercial interests. Not that commercial forces are intrinsically evil, but they’re hardly self-regulating and need to be held in check by strong ethical and scientific watchdogs. So the work of people like Jefferson and Doshi is a vital piece of the quality of healthcare that will be delivered far downstream from their own labs and computers:
British Medical Journal
by Khaled El Emam, Tom Jefferson, Peter Doshi
27 Aug, 15

In late 2010, the European Medicines Agency [EMA] became the first regulator in history to promulgate a freedom of information policy that covered the release of manufacturer submitted clinical trial data. Under a separate, new policy [policy 070], the EMA will take an additional step and create a web based platform for sharing manufacturers’ clinical study reports [CSRs] upon a decision being made on a marketing authorization application or its withdrawal.

CSRs contain significant details that are often missing in journal publications of the same trials—for example, details pertaining to patient relevant outcomes and adverse events—and are an important new tool for those engaged in research synthesis. While the policy anticipates that the agency will require individual participant data [IPD] to also be shared, the EMA has not yet committed to a final timeline for this.

But as the EMA works towards finalizing its guidance on the anonymization of CSRs, some companies and industry initiated guidance may be promoting practices that would diminish the value of the data the regulator ultimately distributes. For example, one recent industry guidance favors the redaction and removal of significant standard content in CSRs, ostensibly in an effort to have simple rules for anonymizing these documents. This includes the removal of patient narratives [for example, of serious adverse events and patient dropouts]; line listings [tables of individual level information about participants]; and the redaction of all patient demographics, dates of birth, and other items such as event or assessment dates.

Simple rules have the advantage of being easy to understand and do not require much sophistication to implement. Unfortunately, the major disadvantage is the resulting extensive information loss across the board. CSRs are already written without the use of directly identifying personal information, and maintaining as much of the original information in the CSRs is important to be able to perform accurate analysis—for example, to evaluate the risk of bias of trials.

Thus far, the EMA’s draft guidance has erred towards less redaction of already partially anonymized CSRs, and away from blanket removal and redaction. It instead advocates a more nuanced risk analysis in compliance with recommendations from EU data protection authorities in order to maximize scientifically useful information in the CSR. The suggested approaches for further anonymization include selective masking/redaction, randomization, and generalization techniques…
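The techniques named in that last sentence [selective masking/redaction, randomization, and generalization] can be illustrated with a toy sketch. Everything below is invented for demonstration – the field names, the banding rule, the offset range – none of it comes from the EMA guidance itself:

```python
# A toy sketch of three anonymization techniques: masking/redaction,
# randomization [here, date shifting], and generalization [here, age
# banding]. All field names and rules are invented for illustration.
import random
from datetime import date, timedelta

def mask(value: str) -> str:
    """Redact a direct identifier outright."""
    return "[REDACTED]"

def shift_date(iso_date: str, offset_days: int) -> str:
    """Shift a date by a per-participant random offset; reusing the same
    offset for all of one participant's dates preserves intervals."""
    y, m, d = map(int, iso_date.split("-"))
    return (date(y, m, d) + timedelta(days=offset_days)).isoformat()

def generalize_age(age: int, band: int = 5) -> str:
    """Replace an exact age with a 5-year band."""
    lo = (age // band) * band
    return f"{lo}-{lo + band - 1}"

record = {"patient_id": "BX-1042", "age": 37, "event_date": "2003-04-15"}
offset = random.randint(-30, 30)  # drawn once per participant
anonymized = {
    "patient_id": mask(record["patient_id"]),
    "age": generalize_age(record["age"]),      # 37 -> "35-39"
    "event_date": shift_date(record["event_date"], offset),
}
print(anonymized)
```

The point of the sketch is the one the authors are making: masking destroys a field completely, while date shifting and generalization retain analytic value [intervals between events, approximate demographics] – which is why blanket redaction is so much more lossy than the nuanced approach the EMA’s draft guidance favors.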
When the European Medicines Agency announced that they would begin to put the Clinical Study Reports in the public domain, it was an enormous shot in the arm for Data Transparency. The pharmaceutical industry has mounted an equally enormous campaign to undermine that promise [remember the AbbVie/InterMune suits? see the timeline at ema data transparency…]. And Jefferson and Doshi are in the middle of that game – fighting to stop the EMA from backing down under pressure. They’re worrying here that if there’s not a strong demand for this data, momentum might be lost:
Today, demand for data is an important driver of investments in clinical trial data sharing infrastructure, and it is debatable whether demand is growing as rapidly as some have expected or hoped. The GSK initiated ClinicalStudyDataRequest.com portal reports 104 valid requests [as of end of June 2015]; Project Data Sphere reports 900 authorized users; and the EMA, under its 2010 freedom of information based policy, has released over 2 million pages of regulatory documents. While this access has resulted in some publications and high profile research, such as the Cochrane review of neuraminidase inhibitors, one hopes for more…
I understand their concern, having spent a couple of years working with a RIAT team on just such a project. As I’ve said, it’s one thing to lobby for data transparency, but quite another to know what to do with it once you’ve got it.

For the moment, there’s no infrastructure or funding support for such enterprises, and it’s certainly a lot of work. We were on our own. But it was plenty rewarding and well worth the effort. It’s the perfect kind of project for graduate students and junior faculty who need a challenge that will flex a wide range of analytic skills with a definable output. I’m sure I’ve learned as much doing this as back during my jurassic age in an NIH fellowship. I can even imagine "watch-dogging" unpublished or questionable studies and reevaluating them as coming into the domain of some group like the Cochrane Collaboration or a similarly structured independent academy.

But the point right now isn’t how to incorporate increased Data Transparency into some future formal scheme, because we don’t yet really even have that kind of access. Right now, all we can do is support the persistence of people like these "Tamiflu guys," Jefferson and Doshi, in their important work. The EMA has postponed posting of the IPD [Individual Participant Data], which is required for a thorough vetting of trials, and we can only hope that the fight for that data will be waged with the same fervor…

ICH HARMONISED TRIPARTITE GUIDELINE
STRUCTURE AND CONTENT OF CLINICAL STUDY REPORTS – E3
30 November 1995
Mickey @ 8:00 AM

some decent purpose…

Posted on Thursday 27 August 2015


[recent releases not shown]

Among other things, one major change between the psychiatry I began with and the modern era was that I didn’t know which company produced which drug – or for that matter, which ones were on or off patent. I thought of the TCAs [Tricyclic Antidepressants] and the MAOIs [Monoamine Oxidase Inhibitors] as primarily drugs for patients with Major Depression [what we called Major Depression back then meaning Melancholia, Manic Depressive Illness with a Depressive Episode, etc] and not for the outpatients where my own focus had landed. I don’t remember discussing them for children or adolescents. In my six month fellowship on an adolescent treatment unit, no kid was on antidepressants. I certainly recall talking about "kids being depressed," but never about them "having" a depressive illness. Recently, on an impulse, I found an old $0.01 copy of the late 1960s GAP [Group for the Advancement of Psychiatry] Psychopathological Disorders of Childhood we used and ordered it. It was as I recalled. The only mention of depression was in a listing of symptoms; there was no diagnostic entity.

So I’m pretty sure that the existence of the then newer antidepressants created the diagnostic entity «Depression» in children and adolescents, and I’m even more sure that the hype that built around the drugs perpetuated the diagnosis «Major Depressive Disorder» in Adults. As things developed, the phrase «Treatment Resistant Depression» [with an acronym of its own – TRD] found wide usage in both kids and adults, as if it were itself also a diagnostic entity. I can’t right off think of other such situations, places where a treatment had such a profound effect on the creation and shaping of a diagnosis. I also wonder if there has ever been a single drug class that has been so heavily studied or had so much energy, time, or money spent trying to magnify a weak effect [sequencing, combining, augmenting, "personalizing", etc].

Since finally finishing the Study 329 RIAT project, I’ve been thinking about what else this Antidepressant Era in psychiatry might have to tell us for the future – assuming it’s finally drawing to a close. PHARMA seems to have genuinely flown the coop, moving away from CNS research for the most part. I hope, on psychiatry’s side, that free of the influences of PHARMA and KOLs, these drugs can be evaluated for their true worth and indications. We’re all so locked into the polarized rhetoric that it may be a while before all the meta-analyses and retrospectives might offer some clarity. But I think that there’s perhaps a larger contribution that can be made to Medicine in general.

I had no idea that such a thing could happen, that the tools of scientific investigation could be so regularly perverted in plain sight. For example, that by withholding negative studies, one could actually create the illusion of a clinically relevant medication from a compound with a trivial effect. It makes perfect sense if you’ve seen it before. I just never would’ve thought to look.

And there are so many other examples, things in the analysis or the presentation that with ever so slight changes make a big difference in the outcome. Like in the article described in supplement·a·tion: a strange kind of sense…, speaking about Study 329, Dr. Wagner writes:
Paroxetine resulted in significantly greater rates of response (defined as Hamilton Rating Scale for Depression [HAM-D] score <8) compared with placebo in the last observation carried forward population. Response rates were higher for paroxetine (76%; P=.02), imipramine (64%), and placebo (58%) among those patients who completed the 8-week trial. There was no statistically significant difference between paroxetine or imipramine and placebo on the HAM-D total score at end point. However, there was a significantly greater increase in the Clinical Global Impression (CGI) improvement scores for the paroxetine group compared with the placebo group.
In the past, I might have scanned through that without a blip on my radar. But there should’ve been a bright red flag. "Greater rates of response" but "no statistically significant difference between paroxetine or imipramine and placebo on the HAM-D total score"? It’s things like this that people have been glossing over through this whole era. Once the alarm has gone off, it’s easy even with only the paper in hand to figure out this is a ruse, that the "HAM-D total score" is a primary outcome and the "response" is a later add-in. These Clinical Trial papers are full of just that kind of thing – benign sentences that are anything but benign [that example will become much clearer in a couple of weeks]. But there’s a much more general point.
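The mechanics of that red flag are easy to demonstrate. Here’s a toy sketch [the HAM-D scores below are invented, not Study 329 data] of why a responder cutoff chosen after unblinding is a multiple-comparisons problem in disguise: when analysts are free to scan many candidate thresholds, the best of the lot is always at least as "significant" as any one prespecified threshold would have been:

```python
# Toy demonstration [invented scores, not Study 329 data]: scanning
# many post hoc "responder" cutoffs and reporting the best one is an
# implicit multiple-comparisons exercise.
import math

def two_prop_p(x1, n1, x2, n2):
    """Two-sided p value, two-proportion z-test [normal approximation]."""
    p1, p2, pooled = x1 / n1, x2 / n2, (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - Phi(|z|))

# Hypothetical end-of-trial HAM-D totals [lower = less depressed]:
drug    = [4, 5, 6, 6, 7, 7, 8, 9, 10, 11, 13, 15, 16, 18, 20, 21]
placebo = [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 22]

# Scan candidate "response = HAM-D < cutoff" definitions after the fact:
pvals = {}
for cutoff in range(6, 16):
    responders_drug = sum(s < cutoff for s in drug)
    responders_plac = sum(s < cutoff for s in placebo)
    pvals[cutoff] = two_prop_p(responders_drug, len(drug),
                               responders_plac, len(placebo))

best = min(pvals, key=pvals.get)
print(f"best-looking cutoff: HAM-D < {best}, p = {pvals[best]:.3f}")
# The scanned minimum is never worse than any single cutoff's p value:
assert all(pvals[best] <= p for p in pvals.values())
```

Which is exactly why an analysis nowhere in the a priori protocol should set off the alarm even when the reported p value clears .05.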

In every specialty, a lot of the clinical side of medical training is learning to see a lot of patients in a practice day, process a ton of information, and get to the point where you almost automatically hear bells go off when some small piece of information comes your way that says, "stop, look, and listen" – some symptom that doesn’t fit, a hint that this cough is the early symptom of a tumor instead of a cold, this rash is going to turn into a Stevens-Johnson syndrome in an hour, this back pain is an aneurysm about to dissect. Over these last few years of reading the psychiatric Clinical Trial literature, I’m beginning to hear those same bells when I scan through a Clinical Trial journal article. And I wish my training for reading these articles had been as rigorous as my bedside and consulting room training.

As peculiar as it might sound, I think that a big segment of the literature [particularly the industry funded Clinical Trial literature] that came out of the antidepressant era spanning the time in that top figure needs to be formally re-evaluated, not just to get a more accurate picture of these drugs, but also to familiarize all of us with the subtle ways it can be distorted – something like clinical training in journal reading. It’s certainly a treasure trove of tricks that ride just this side of fraud. While I’ve yet to find one of these studies where the data has been directly changed, I can’t think of any, off-hand, that played it totally straight in the analysis/presentation/conclusion realm either. We’ve always said doctors should be taught critical reading. It’s obvious that we’re woefully off the mark in that area when it comes to Clinical Trials. This kind of shenanigans isn’t just a problem in psychiatry. It’s medicine-wide. But psychiatry sure has the biggest library of examples on the block. We might as well find a way to use those papers to some decent purpose.

And speaking of purpose. I gave two small examples. One was not publishing negative studies. The other was carefully cherry picking things to say that put a positive spin on a decidedly negative study. Those are both wrong on purpose: deliberate attempts to deceive. There’s no statute of limitations on deliberate deceit, AKA lying. Our literature on Clinical Trials is riddled with deliberate deceit, and it’s time to start naming it for what it really is. This story needs a heading that fits…
Mickey @ 4:54 PM

in the details…

Posted on Wednesday 26 August 2015

BMJ Open
by Peter Doshi and Tom Jefferson
26 February 2013

Objective To explore the structure and content of a non-random sample of clinical study reports [CSRs] to guide clinicians and systematic reviewers.
Search strategy We searched public sources and lodged Freedom of Information requests for previously confidential CSRs primarily written by the industry for regulators.
Selection criteria CSRs reporting sufficient information for extraction [‘adequate’].
Primary outcome measures Presence and length of essential elements of trial design and reporting and compression factor [ratio of page length for CSRs compared to its published counterpart in a scientific journal].
Data extraction Data were extracted on standard forms and crosschecked for accuracy.
Results We assembled a population of 78 CSRs [covering 90 randomised controlled trials; 144,610 pages total] dated 1991–2011 of 14 pharmaceuticals. Report synopses had a median length of 5 pages, efficacy evaluation 13.5 pages, safety evaluation 17 pages, attached tables 337 pages, trial protocol 62 pages, statistical analysis plan 15 pages and individual efficacy and safety listings had a median length of 447 and 109.5 pages, respectively. While 16 [21%] of CSRs contained completed case report forms, these were accessible to us in only one case [765 pages representing 16 individuals]. Compression factors ranged between 1 and 8805.
Conclusions Clinical study reports represent a hitherto mostly hidden and untapped source of detailed and exhaustive data on each trial. They should be consulted by independent parties interested in a detailed record of a clinical trial, and should form the basic unit for evidence synthesis as their use is likely to minimise the problem of reporting bias. We cannot say whether our sample is representative and whether our conclusions are generalisable to an undefined and undefinable population of CSRs.
Tom Jefferson and Peter Doshi

One runs into the most interesting people in the oddest of ways, looking into the most unlikely of topics. Tom Jefferson and Peter Doshi are the Tamiflu guys, the ones who spent years running down an epidemiology story that needed to be told, and at least for me, began to unravel the torturous and kind of boring story of how Clinical Trials are cataloged and documented. Tom is a British Epidemiologist with the Cochrane Collaboration in Rome, Italy. Peter, his colleague, is on our side of the pond in Maryland and now an editor with the BMJ. But this blog isn’t about their interesting story with Tamiflu and other things. It’s about a topic they’ve tried to clarify for all of us – the Clinical Study Reports that become part of the record of Clinical Drug Trials – a should·be·simple topic that’s complex·and·confusing [perhaps deliberately so].

After a few years being preoccupied with evaluating the documents from a single Clinical Trial [Paxil Study 329] for a RIAT project [A Milestone in the Battle for Truth in Drug Safety, Restoring Study 329], I made this diagram trying to clarify things for myself. The upper part is clear – the a priori PROTOCOL and the SAP [Statistical Analysis Plan] are evaluated by the Institutional Review Board [IRB] and, if approved, become the directives for the Clinical Trial. That part’s blinded [dark glasses]. Once the blind is broken, the Raw Data as Case Report Forms [CRFs] are assembled and organized. They’re transcribed into data tables known collectively as the IPD [Individual Participant Data]. Notice there aren’t any glasses there [yet] because these nuclear documents have rarely ever been seen. Then someone uses the IPD to write an exhaustive CSR [Clinical Study Report]. Later all of this becomes the highly condensed published ARTICLE, usually under the byline of academic authors. I put the glasses there because it’s seen and unblinded, but notice they’re "rose colored glasses." The important part is how to get untinted glasses in the IPD and CRF quadrant where they’re needed, without the "rose coloring" of conflict of interest:

Doshi and Jefferson explain some of why this eyes-on view of the Raw Data is so hard to get hold of. The issue is that the meaning and requirements of a CSR vary from company to company and agency to agency – and it’s not at all clear whether the CSR must contain the IPD [ergo, Raw Data] or not. In the study we looked at [Paxil Study 329], the IPD was listed in Appendices to the CSR, but in GSK’s initial release the Appendices were nowhere to be found in the 2004 posting, only appearing 8 years later [2012] at the insistence of Peter Doshi working with the NY Attorney General. And apparently, there’s a lot of variability. So if this is of even remote interest, by all means read their paper [maybe even if it isn’t, because this is at the heart of the Data Transparency question].

Looking back over the history of Clinical Trials from the days of Kefauver-Harris and Louis Lasagna, it’s a cascade of reforms with loopholes – one after the other. What I like about the move for Data Transparency is that it puts the regulation in the hands of Medicine itself. If you see something questionable, mount a research effort of your own and prove it. I think Data Transparency means putting independent researchers on an equal footing with the original investigators. Why not? And I personally think the patient confidentiality argument is absurd. These aren’t patients, they’re subjects. Whatever their reasons for doing a study, they’re not volunteering to be complicit with fraud. As it stands, they can be easily anonymized. Maybe the NSA could find them [but the NSA could find them anyway]. So get informed about these details and help in the effort to look at the data with clear lenses. The details are where this movement will sink or swim…
Mickey @ 8:51 PM

supplement·a·tion: a strange kind of sense…

Posted on Tuesday 25 August 2015

ABSTRACT- This article provides an overview of the use of paroxetine in the treatment of mood and anxiety disorders in children and adolescents. Although not currently approved for use in patients younger than 18 years of age, the efficacy and safety of paroxetine have been studied in several pediatric mood and anxiety disorders. The epidemiology, diagnosis, and course of major depression, obsessive-compulsive disorder, social anxiety disorder, and panic disorder are discussed briefly. Current available data on the safety and efficacy of paroxetine based on double-blind placebo-controlled trials and open-label studies for the treatment of mood and anxiety disorders in children and adolescents are reviewed. Clinical guidelines for the use of paroxetine in children and adolescents and recommendations regarding future directions of study are discussed. Psychopharmacology Bulletin 2003;37[Suppl 1]: 167-175.

It’s easy to forget what it was like in 2003. We think of something like the Keller et al paper we know as Paxil Study 329 in isolation, a decidedly negative Clinical Trial published as positive. But we forget that it was just the launching pad for a much larger campaign. This Psychopharmacology Bulletin Supplement was published in the Spring of 2003, financed by GSK, moderated and assembled by Dr. Charlie Nemeroff, then Chairman of Psychiatry at Emory University and a major GSK Adviser. Now take a look at the Supplement’s Authors and the Journal’s Editorial Staff here, a veritable Who’s Who with multiple Departmental Chairmen and future alumni of Senator Grassley’s Senate investigations into fiscal corruption. And in the Supplement, Dr. Wagner’s version of the Keller et al paper [that she co-guest-authored] is certainly beyond generous:

The findings of one of the largest randomized, double-blind, multicenter, controlled trials of an SSRI in the treatment of adolescents with major depression was reported by Keller and associates in 2001. The efficacy and safety of paroxetine was demonstrated in 275 adolescent outpatients with major depression ranging in age from 12 to 18 years. Patients were randomized to paroxetine, imipramine, or placebo for an 8-week trial. Dose ranges for paroxetine were 20 to 40 mg per day, with a mean daily dose of 28 mg. Dose ranges for imipramine were 200 to 300 mg per day, with a mean daily dose of 205 mg. Paroxetine resulted in significantly greater rates of response (defined as Hamilton Rating Scale for Depression [HAM-D] score <8) compared with placebo in the last observation carried forward population. Response rates were higher for paroxetine (76%; P=.02), imipramine (64%), and placebo (58%) among those patients who completed the 8-week trial. There was no statistically significant difference between paroxetine or imipramine and placebo on the HAM-D total score at end point. However, there was a significantly greater increase in the Clinical Global Impression (CGI) improvement scores for the paroxetine group compared with the placebo group. Of patients in the paroxetine group, 66% were much or very much improved (P=.02 versus placebo) compared with 52% of patients in the imipramine group (P=.64 versus placebo) and 48% of patients in the placebo group.
The two Outcome Variables she mentions [HAM-D < 8 and CGI = 1 or 2] were add-ons, not even mentioned in the Protocol, and both were only significant at week 8. However, the HAM-D total score at end point was a defined Primary Outcome Variable that did not reach significance [nor did any of the other Protocol-defined Primary or Secondary Outcome Variables]. This paper ends:
Conclusion
In double-blind, placebo-controlled trials, paroxetine has demonstrated efficacy and safety in the treatment of major depression in adolescents, and in the treatment of OCD and social anxiety disorder in children and adolescents. Selective serotonin reuptake inhibitors such as paroxetine are currently the first-line treatment for children and adolescents suffering from major depression and anxiety disorders. Additional well-controlled studies are needed to further advance the treatment and outcome of children with depression and anxiety disorders.

Disclosure
This work was supported by an unrestricted educational grant from GlaxoSmithKline. Dr. Wagner serves as scientific adviser and consultant for and receives research support from Abbott Laboratories, Eli-Lilly, Forest laboratories, GlaxoSmithKline, Pfizer, and Wyeth-Ayerst. She also receives research support from Bristol-Myers Squibb, Organon and the National Institute of Mental Health. Dr. Wagner serves as scientific adviser and consultant for Janssen, Novartis, Otsuka, and UCB Pharma.
From Keller et al’s negative-turned-into-positive Clinical Trial, Paxil had vaulted into the first-line treatment for children and adolescents suffering from major depression. Quite a jump. But then that was 2003.

Dr. Nemeroff’s star had begun to droop in the sky by then. He had become the Editor in Chief of Neuropsychopharmacology, the ACNP Journal, in 2001 – quite an honor. First, he published a review article in Nature Neuroscience [Treatment of mood disorders] in 2002 recommending three treatments that he had a direct financial interest in without declaring those interests. This omission was exposed by Drs. Bernard Carroll and Bob Rubin resulting in a change in policy for all Nature journals. He did it again in 2004! [see hubris… for the details]. But the growing crack turned into a canyon in 2006 when Dr. Nemeroff published a ghost-written review of a vagal nerve stimulator for treating depression [VNS Therapy in Treatment-Resistant Depression: Clinical Evidence and Putative Neurobiological Mechanisms – full text] in his own journal [Neuropsychopharmacology] with a raft of other colleagues, all of whom were involved with that nerve zapper’s company [including him] without any acknowledgment. He was confronted again by Drs. Carroll and Rubin. In quick succession, he stepped down [was removed] as Editor and his activity was heavily restricted by his University. The final blow came in 2008 when Senator Grassley and Staffer Paul Thacker started a Senatorial investigation into academic psychiatrists with undisclosed pharmaceutical income, and Nemeroff was on the top of the list. That was the nail that ended his career as Chairman of Psychiatry at Emory. Many of the people on the editorial board and the Supplement above were on Senator Grassley’s list, including Dr. Martin Keller of Keller et al and Dr. Karen Dineen Wagner, author of this article.

Finally, the in-your-face version of the alliance between PHARMA and this segment of academic psychiatry came out of the shadows. When Dr. Nemeroff was confronted by Emory, his defense was that he was just doing his job with all of his industry connections – cataloging the money he’d raised to support his department at Emory through educational grants. And that makes a strange kind of sense, as convoluted as it seems. Dr. Nemeroff was hired because he could [and would] bring in the money from unrestricted educational grant[s] from any number of pharmaceutical companies. The changes – managed care, the closing of psychiatric hospitals, the collapse of the Community Mental Health Centers, the failure of the States to provide public psychiatric care – had devastated psychiatric departments in medical schools. And so they hired the likes of Dr. Nemeroff and his colleagues who could [and would] solve the problem by alliances with PHARMA. It’s really that $imple. And if you think about it, the dethroning of Dr. Nemeroff and hi$ colleague$ wa$ not becau$e of hi$ rai$ing PHARMA money for hi$ department. It was becau$e he wa$ rai$ing money for him$elf. And that was true of all the people on Senator Grassley’s list. It was their undeclared personal income that brought them to grief. If you want to hear what that sounds like from inside, check out Conflicts of Interest, an extremely telling blog by Psycritic [Update: Here‘s a public version].

I’ve heard that Deep Throat’s famous line, "Follow the Money," in the movie All the President’s Men didn’t really happen, but was a product of some writer somewhere. It doesn’t matter where it came from. It’s brilliant. This Supplement and the many others like it from that era [and now] stand as testimony to just how brilliant it is. And the problem of universities financing academic medical departments by garnering the favors of commercial interests is as alive today as it ever was. It’s not just true in psychiatry where it’s so obvious. It’s medicine-wide. If the world wants an ethical Medicine, it’s going to have to help us provide an environment where it can flourish again. Just decrying its decline isn’t going to be enough…
hat tip to Psycritic and many others… 
Mickey @ 9:08 PM

POM·posity…

Posted on Friday 21 August 2015

Note: POM = Primary Outcome Measures
An Observational Study of Five Psychiatry Journals That Mandate Prospective Clinical Trial Registration
PLoS | ONE
by Amelia Scott, Julia J. Rucklidge, and Roger T. Mulder
August 19, 2015

Objective: To address the bias occurring in the medical literature associated with selective outcome reporting, in 2005, the International Committee of Medical Journal Editors [ICMJE] introduced mandatory trial registration guidelines and member journals required prospective registration of trials prior to patient enrollment as a condition of publication. No research has examined whether these guidelines are impacting psychiatry publications. Our objectives were to determine the extent to which articles published in psychiatry journals adhering to ICMJE guidelines were correctly prospectively registered, whether there was evidence of selective outcome reporting and changes to participant numbers, and whether there was a relationship between registration status and source of funding.
Materials and Methods: Any clinical trial [as defined by ICMJE] published between 1 January 2009 and 31 July 2013 in the top five psychiatry journals adhering to ICMJE guidelines [The American Journal of Psychiatry, Archives of General Psychiatry/JAMA Psychiatry, Biological Psychiatry, Journal of the American Academy of Child and Adolescent Psychiatry, and The Journal of Clinical Psychiatry] and conducted after July 2005 [or 2007 for two journals] was included. For each identified trial, where possible we extracted trial registration information, changes to POMs between publication and registry to assess selective outcome reporting, changes to participant numbers, and funding type.
Results: Out of 3305 articles, 181 studies were identified as clinical trials requiring registration: 21 [11.6%] were deemed unregistered, 61 [33.7%] were retrospectively registered, 37 [20.4%] had unclear POMs either in the article or the registry and 2 [1.1%] were registered in an inaccessible trial registry. Only 60 [33.1%] studies were prospectively registered with clearly defined POMs; 17 of these 60 [28.3%] showed evidence of selective outcome reporting and 16 [26.7%] demonstrated a change in participant numbers of 20% or more; only 26 [14.4%] of the 181 trials were prospectively registered and did not alter their POMs or the time frames at which they were measured. Prospective registration with no changes in POMs occurred more frequently with pharmaceutical funding.
Discussion: Although standards are in place to improve prospective registration and transparency in clinical trials, less than 15% of psychiatry trials were prospectively registered with no changes in POMs. Most trials were either not prospectively registered, changed POMs or the timeframes at some point after registration or changed participant numbers. Authors, journal editors and reviewers need to make further efforts to highlight the value of prospective trial registration.

[see also Is Mandatory Trial Registration Decontaminating the Psychiatric Literature? by Julia Rucklidge, Ph.D. on Mad in America].
I ended [post-it notes…] on the RIAT Initiative with:
So to my post-it notes. They add one other vital thing, the a priori study protocol [I’m 100% serious about the vital part]. Among many other things, it lays out what variables will be assessed and exactly how they will be analyzed. Most, if not all, RCT distortion involves not following the a priori study protocol [or having a biased protocol from the start]…
And then this study appears just to show us exactly why the a priori protocol is so vital. The proper sequential procedure in any Clinical Trial involves:
    «declare the Primary Outcome Measures»
    «register the trial including an a priori protocol»
    «conduct the blinded trial»
    «break the blind»
    «analyze the Primary Outcome Measures according to the a priori protocol»
    «report the results»

It’s pretty cut-and-dried. These investigators looked at the trials that recruited after a defined date and were published between 2009 and 2013 in five major psychiatric journals. They compiled the start date, the registration date, the Primary Outcome Measures [POMs], and whether the declared POMs carried through to the paper itself. This is an abbreviated version of what they found.
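The core registration check they performed can be sketched in a few lines. This is a hypothetical simplification of my own — the study's actual criteria also handled unclear POMs and inaccessible registries:

```python
from datetime import date

def classify_registration(start_date, registration_date):
    """Classify a trial's registration status by comparing the registry
    date against the recruitment start date — a simplified sketch; the
    study's actual scheme had additional categories."""
    if registration_date is None:
        return "unregistered"
    if registration_date <= start_date:
        return "prospective"
    return "retrospective"

# Example: a trial registered over a year after recruitment began
print(classify_registration(date(2009, 1, 1), date(2010, 3, 1)))  # retrospective
```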


[truncated and rearranged to fit]

And of the 60 that were preregistered and carried into the publication, only 26 used their POMs without some evidence of jury-rigging [14.4% of the 181]! Pitiful!

And ~half of those retrospectively registered trials were registered over a year after the study began. Also pitiful!

And more…

The numbers in the published paper get somewhat confusing to follow, so here’s the summary from the Mad in America article [which is clearer]:
Twenty-one [11.6%] of the 181 studies were deemed unregistered, 61 [33.7%] were retrospectively registered, 37 [20.4%] had unclear POMs either in the article or the registry and 2 [1.1%] were registered in an inaccessible trial registry. Only 60 [33.1%] studies were prospectively registered with clearly defined POMs.

But seventeen of these 60 [28.3%] properly registered trials showed evidence of selective outcome reporting – this means that there had been changes to POMs based on a comparison of the trial registry and the publication. In total, only 26 [14.4%] of the 181 trials were prospectively registered and did not alter their POMs or the time frames at which they were measured. Prospective registration with no changes in POMs occurred more frequently with pharmaceutical funding.
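Since the denominator shifts between the full 181 trials and the 60 properly registered ones, the percentages are worth checking for yourself — a quick verification using the counts quoted above:

```python
# Counts from the paper, all against the full denominator of 181 trials
total = 181
counts = {
    "unregistered": 21,
    "retrospectively registered": 61,
    "unclear POMs": 37,
    "inaccessible registry": 2,
    "prospective, clear POMs": 60,
    "prospective, POMs unaltered": 26,
}
for label, n in counts.items():
    print(f"{label}: {n}/{total} = {100 * n / total:.1f}%")
# e.g. 26/181 = 14.4%, matching the abstract
```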
The authors and sponsors bear the ultimate responsibility for this state of affairs, but the editors/journals share equally in the indictment. They’re in a position to check these things and insist that they’re correct. The journal reader has no way to see these parameters in making their judgement about the article. If that weren’t enough – these articles in their study are from a time after the level of duplicity in clinical trials of psychiatric drugs was becoming well known, well beyond the more gullible era of Study 329, Study 352, Study 15, etc. I guess the inertia of deceit is stronger than the currents of exposure and/or reform. Data Transparency means including the a priori protocol…

Parenthetically, having spent some time looking for this kind of data myself, I know that it’s hard work. A follow-up on how they located all the data for this study would help the rest of us…

Afterthought: I suppose I should’ve acknowledged the journals who got it sort of right [actually a bit of a surprise]…

Mickey @ 11:19 AM

paxil withdrawal…

Posted on Thursday 20 August 2015

As I’ve mentioned before, back in the early days of SSRIs, I was lucky to have a good friend whose wife, a sophisticated Social Worker, had taken Paxil shortly after it came on the market. When she stopped, she got ill and was perceptive enough to recognize that it wasn’t anything like her "depression coming back." It was something else. It was a withdrawal syndrome. She described it clearly back then, including "brain zaps," just like we’ve come to know it now that it’s more recognized and well characterized. So I knew about withdrawal early, and began slow tapering off of SSRIs a long time ago with all of them [never prescribing Paxil]. When I mentioned it to colleagues, they didn’t know what I was talking about.

Today, I ran across a post from Bob Fiddaman, a long-time reference source on all things Paxil [Seroxat], linking two previous posts of his I’d missed. He’s been on the trail of some Clinical Trials of Paxil done in Yugoslavia [back when there was a Yugoslavia to do trials in].
Here’s the gist of the story. Back in 1988, SmithKline Beecham initiated a trial [called the relapse trial].
At this time, SKB were seeking approval of Paxil and the Yugoslavia trial was to show the FDA (the US drug regulator) how effective Paxil was in treating depression – they would also try to show the FDA how it was important to keep taking Paxil and not to stop… because if you did stop then you would go into relapse, in other words, SKB were trying to prove that stopping Paxil meant the patient’s original illness would return.
So, after a period on Paxil, half the patients changed to placebo, and sure enough, they got ill – interpreted as a return of the illness:
With the results they wanted, SKB then provided the FDA with apparent evidence that showed patients staying on Paxil continued to enjoy a normal, "depression free" life, but that those abandoning the drug would  suffer relapse back into a depressive state. One thing that was irksome to SKB was that they had to convince the FDA that the relapses shown in the study were not simply patients suffering withdrawal.
Bob’s source is the transcript from a suit resolved in 2002, Nguyen & Farber, plaintiffs vs. SmithKline Beecham Corporation. The transcript goes on to describe how SKB was able to essentially game Dr. Thomas Laughren into helping them convince the FDA panel that this was recurrent depression rather than withdrawal. When Bob wrote the MHRA with an FOIA request, they wrote back that they didn’t have any records. The MHRA official is a former GSK employee.
The thing that struck me in this story is that they even did the relapse trial in the first place. They must’ve known about the discontinuation syndrome early on and actually did this Clinical Trial to spin it away.

That brings up something that has nagged at me for years. I still see depression in two major categories, just as I did before I even came into psychiatry. Depression [with a capital "D"] meaning Melancholia, the Depressive Episodes of Manic Depressive Illness, Post-Partum Depression, etc. And depression [with a little "d"] as in everything else [formerly known as Neurotic Depression]. Whichever the case, depression is a time-limited condition – and medication for depression is a time-limited medication. The guideline I recall from the days of the Tricyclics and MAO Inhibitors was that patients who had responded to these medications should continue them for 6 months before stopping to prevent a relapse [I have no clue how that guideline got into my mind so long ago].

I followed that practice even after the SSRIs came along. Years later, when I retired and started seeing patients in a clinic where patients had been medicated by someone else, it was apparent that the rest of the world saw things differently. Patients had been on an antidepressant for years and obviously thought it was keeping their depression away. And getting them to give it up required knowing them for a while, and withdrawing slowly as a trial. Otherwise, it was like taking away a talisman, or a comforting blanket. I’ve wondered how that idea of antidepressant as preventative-forever ever came about. Avoiding withdrawal may well be the answer…

An Anecdote: The last time I worked in the clinic, I was seeing a woman with a fairly striking anxiety syndrome, persistent since getting out of a physically abusive relationship [actually a life-threatening physically abusive relationship]. She was accompanied by her current boyfriend, a nice and supportive biker-guy with his cap on backwards, tank top, and liberal signage [tattoos]. She and I were discussing medication, and he piped in. "Man, don’t give her that Paxil. Miss a dose of that stuff and you’re Jones-ing – sure enough." I guess if a biker type in rural Georgia knows about Paxil withdrawal, it has to be a widespread problem…
Mickey @ 10:00 AM

nothing to be passive about…

Posted on Tuesday 18 August 2015

This time last year, I was feeling steamed that the pharmaceutical companies had been granted the right to treat the data from Clinical Trials as proprietary, private property, in the first place. That Law needed to be overturned! But I couldn’t find the Law, so I wrote people in the know for help. They agreed to help, but nobody could remember. As it turned out, they couldn’t remember because there wasn’t any Law – PHARMA just assumed ownership and that was that [see repeal the proprietary data act… and except where necessary to protect the public…]. I wrote Canadian Law Professor and expert Trudo Lemmens who confirmed that, mentioning that it was loosely based on Trade Agreements and that PHARMA was trying to strengthen their position. I knew I was way into something I could never master, so I moved on. But today, this article came my way:

Intellectual Property Watch
By Deborah Gleeson and Ruth Lopert
18 AUG 2015

Failure to reach agreement over expanded intellectual property [IP] protections for medicines has proven to be a stumbling block to completion of the 12-country Trans Pacific Partnership negotiations. As expected, the US is continuing to pressure negotiating partners to adopt broader and longer monopoly protections for medicines. But the risks for their health systems are very high – and will be much higher if they don’t stick together in rejecting the US demands.

A leaked draft of the agreement’s IP chapter from May 2014 showed that, with the exception of Japan, the other countries had been consistent in rejecting the US proposals. But more recently, another leaked draft dated 11 May 2015 and published by Knowledge Ecology International last week demonstrated a splintering of this opposition. Rather than pushing back collectively, individual countries appear to be attempting to craft creative language and qualifying footnotes to give the appearance of conceding to US demands while preserving existing standards where possible.

One could be forgiven for thinking that this might seem like a reasonable outcome. But there are at least five good reasons why this is a risky approach – and why pushing back collectively through the final stages of negotiations would be a wiser strategy.
  1. The US demands are inherently unreasonable. It is widely recognised that the US is seeking monopoly protections that will frustrate and delay access to affordable medicines in countries that are party to the agreement – and due to spillover effects, potentially to others that are not…
  2. Bargaining with the US is risky business. Under pressure from the US, and anxious to secure gains in market access in exchange for apparent concessions on medicines, negotiators seem to be pursuing solutions in creative wording…
  3. Adopting prescriptive obligations in a trade agreement constrains future policy options, even if creative wording averts substantive changes. Tight specification of IP obligations locks countries into existing regimes and prevents the kinds of reforms recommended by the former Australian Government’s Review of Pharmaceutical Patents – such as winding back patent term extensions or reducing effective patent life.
  4. The obligations in the TPP will become the template for the next trade agreement. What countries accept – or appear to have accepted, footnotes and exemptions notwithstanding – will then form part of the standard template for the next trade agreement negotiated by the US…
  5. Developing countries will undoubtedly get the raw end of the deal. Vietnam, as the lowest income country, is likely to be worst affected. While the TPP parties have agreed to a transition period for developing countries, it does not cover all of the pharmaceutical obligations. In the latest leaked draft, earlier proposals for a transition period based on development indicators seem to have been abandoned in favour of a time-based transition…
Danger. Danger. The US PHARMA Lobby is very well heeled. In this case, they’re not after Proprietary Data ownership [I don’t think]. Rather, they’re trying to extend their patent monopoly period. That was originally a maneuver designed to pay them back for developing medications. But increasingly, they’re not even developing them, they’re buying them [see down-right unAmerican…]. And they’re selling them for outrageous prices. This is capitalism at its worst – fleecing the sick. If anyone has any information about this trade agreement, pass it on to the rest of us. This is nothing to be passive about…

UPDATE: Many thanks to Rob Purssey for the jump-start. I seem to have missed the obvious by using too-esoteric search criteria. Using simply TPP or Trans Pacific Partnership, it’s everywhere. Will pick up the thread in the morning. Here’s a starter:
Mad In America
By Erik Monasterio, MD
November 20, 2014
Mickey @ 8:38 PM

post-it notes…

Posted on Monday 17 August 2015

As a piece of their support of the movement for Data Transparency in Randomized Clinical Trials [RCTs], the BMJ featured a particular article [Restoring invisible and abandoned trials: a call for people to publish the findings] with this cover graphic [minus my added post-it notes] on their June 22, 2013 print edition:

Since then, I have learned a lot more about the details of Clinical Trial reporting and have talked about it here ad nauseam. Two of the authors, Peter Doshi and Tom Jefferson, went the extra mile in their quest to vet the studies on Roche‘s Tamiflu® for their Cochrane Collaboration review [Oseltamivir for influenza in adults and children: systematic review of clinical study reports and summary of regulatory comments] – spending literally years chasing down the raw data for the published and unpublished trials. Then they were joined by Kay Dickersin, David Healy, and Swaroop Vedula to collect even more data at large and launch the RIAT Initiative, a move supported by the BMJ and PLoS editors [see “a bold remedy”…]:

Restoring invisible and abandoned trials: a call for people to publish the findings
Analysis
British Medical Journal
by Peter Doshi, Kay Dickersin, David Healy, S Swaroop Vedula, and Tom Jefferson
June 13, 2013
[full text on-line]
Restoring the integrity of the clinical trial evidence base
Editorial
British Medical Journal
by Elizabeth Loder and Fiona Godlee [BMJ] and Virginia Barbour and Margaret Winker [PLoS Medicine]
June 13, 2013
[full text on-line]
In several dimensions, there’s still no consensus about what concretely constitutes Data Transparency. The overwhelming majority of RCTs are conducted, or at least managed, by the large Contract Research Organizations [CROs], whose job it is to follow the Study Protocol to the letter and ensure the integrity of the blind. When the blind is broken, the raw data is passed on to the Sponsor who connects the subjects with their treatment, and usually does the analysis of the data in-house. What’s evolved is a system where the results are written up by a ghost-writer from a medical writing firm with variable input from the listed academic authors who serve as a ticket into the peer-reviewed medical literature. It’s hard to imagine how this system evolved to its current level of corruptibility.

Data Transparency proposes to reform it by skipping the system altogether. Allow independent investigators to see the same thing the system sees as soon as the blind is broken and reanalyze it, checking the work of the Sponsor/Journal. So that’s the reason for my post-its upstairs. When the RIAT paper was written, I presume they looked for the clinical study report, the electronic patient level datasets, and the completed case report forms.

Having spent some time on a RIAT Team, here two+ years later, I would revise that cover slightly. The CSR is an exhaustive report on the study, but it’s not "raw." It’s something prepared by human beings, the same human beings responsible for the published article – thereby as prone to distortion and corruption as the paper itself. What they got as electronic patient level datasets are one type of IPD [individual patient data]. It’s the compiled database derived directly from the forms filled out as the study progressed – forms called the case report forms [CRFs]. Unless one suspects fraudulent transcription, the IPD is what matters – then you need the CRFs to check against. The CSR is only useful if it contains the IPD. Otherwise, it’s just the long form of the paper you already suspect has something awry.

So to my post-it notes. They add one other vital thing, the a priori study protocol [I’m 100% serious about the vital part]. Among many other things, it lays out what variables will be assessed and exactly how they will be analyzed. Most, if not all, RCT distortion involves not following the a priori study protocol [or having a biased protocol from the start]. So remember these post-it notes when evaluating a RCT or a RIAT reanalysis:

Mickey @ 8:25 PM

increasingly questionable …

Posted on Saturday 15 August 2015

STAR*D was an elaborate NIMH study aiming to define some sequencing method that would improve the response to antidepressants using the algorithm on the left [as seen from space]. The main outcomes were a database mined for several hundred publications, a self-rating depression scale [QIDS-SR], and a template for future studies called naturalistic at the time – no control group, no blinding, and progress was monitored by a telephone version of the QIDS-SR. From my point of view, it was a thirty-five million dollar misunderstanding…

But the idea that there is some way to predict who might respond to which antidepressant definitely has staying power. Borrowing the term personalized medicine from physical medicine, the search for predictors of response continues. There are two large ongoing studies [iSPOT-D and EMBARC] aiming towards locating biosignatures that might predict a response, and they are both beginning to report findings [Godzilla vs. Ghidorah…]. Here’s one from the STAR*D director reporting on an aspect of iSPOT-D…
by Bruce A. Arnow, Christine Blasey, Leanne M. Williams, Donna M. Palmer, William Rekshan, Alan F. Schatzberg, Amit Etkin, Jayashri Kulkarni, James F. Luther, and A. John Rush.
American Journal of Psychiatry. 2015 172[8]:743-750.

Objective: The study aims were 1] to describe the proportions of individuals who met criteria for melancholic, atypical, and anxious depressive subtypes, as well as subtype combinations, in a large sample of depressed outpatients, and 2] to compare subtype profiles on remission and change in depressive symptoms after acute treatment with one of three antidepressant medications.
Method: Participants 18–65 years of age [N=1,008] who met criteria for major depressive disorder were randomly assigned to 8 weeks of treatment with escitalopram, sertraline, or extended-release venlafaxine. Participants were classified by subtype. Those who met criteria for no subtype or multiple subtypes were classified separately, resulting in eight mutually exclusive groups. A mixed-effects model using the intent-to-treat sample compared the groups’ symptom score trajectories, and logistic regression compared likelihood of remission [defined as a score ≤5 on the 16-item Quick Inventory of Depressive Symptomatology–Self-Report].
Results: Thirty-nine percent of participants exhibited a pure-form subtype, 36% met criteria for more than one subtype, and 25% did not meet criteria for any subtype. All subtype groups exhibited a similar significant trajectory of symptom reduction across the trial. Likelihood of remission did not differ significantly between subtype groups, and depression subtype was not a moderator of treatment effect.
Conclusions: There was substantial overlap of the three depressive subtypes, and individuals in all subtype groups responded similarly to the three antidepressants. The consistency of these findings with those of the Sequenced Treatment Alternatives to Relieve Depression trial suggests that subtypes may be of minimal value in antidepressant selection.
… that we all already know, since clinical subtype is a usual add-on to any antidepressant Clinical Trial and is regularly unrevealing. Here’s another in the search for biomarkers of antidepressant response:
by Alan F. Schatzberg, Charles DeBattista, Laura C. Lazzeroni, Amit Etkin, Greer M. Murphy, Jr., and Leanne M. Williams
American Journal of Psychiatry. 2015 172[8]:751-759.

Objective: The ABCB1 gene encodes P-glycoprotein, which limits brain concentrations of certain antidepressants. ABCB1 variation has been associated with antidepressant efficacy and side effects in small-sample studies. Cognitive impairment in major depressive disorder predicts poor treatment outcome, but ABCB1 genetic effects in patients with cognitive impairment are untested. The authors examined ABCB1 genetic variants as predictors of remission and side effects in a large clinical trial that also incorporated cognitive assessment.
Method: The authors genotyped 10 ABCB1 single-nucleotide polymorphisms [SNPs] in 683 patients with major depressive disorder treated for at least 2 weeks, of whom 576 completed 8 weeks of treatment with escitalopram, sertraline, or extended-release venlafaxine [all substrates for P-glycoprotein] in a large randomized, prospective, pragmatic trial. Antidepressant efficacy was assessed with the 16-item Quick Inventory of Depressive Symptomatology–Self-Rated [QIDS-SR], and side effects with a rating scale for frequency, intensity, and burden of side effects. General and emotional cognition was assessed with a battery of 13 tests.
Results: The functional SNP rs10245483 upstream from ABCB1 had a significant effect on remission and side effect ratings that was differentially related to medication and cognitive status. Common homozygotes responded better and had fewer side effects with escitalopram and sertraline. Minor allele homozygotes responded better and had fewer side effects with venlafaxine, with the better response most apparent for patients with cognitive impairment.
Conclusions: The functional polymorphism rs10245483 differentially affects remission and side effect outcomes depending on the antidepressant. The predictive power of the SNP for response or side effects was not lessened by the presence of cognitive impairment.
That is one pretty graph – efficacy on the left, safety on the right, here I am, stuck in the middle with you. This study looked at ten SNPs on a gene that "among blood-brain barrier transporter proteins, P-glycoprotein transports several commonly prescribed antidepressants." It looks like one’s genetics has something to do with antidepressant response. Lexapro® or Zoloft® for G/Gs and Effexor XR® for T/Ts. However, reading the methodology, one wonders. It looks as if they did the honorable thing and corrected to avoid false positives.
To account for the testing of multiple SNPs, SNP p values below 0.0028 [0.05/18] were considered significant using a Bonferroni correction for nine SNPs, each tested for one main effect and one interaction effect. All p values <0.05 are reported for completeness, because of the possibility of false negative results at this significance level.
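The Bonferroni arithmetic itself is easy to check – nine SNPs, each tested for a main effect and a treatment interaction, gives 18 comparisons, and the familywise alpha of 0.05 is divided among them:

```python
# Bonferroni correction as described in the paper's methods
alpha = 0.05
n_snps = 9
tests_per_snp = 2          # one main effect + one treatment interaction
n_tests = n_snps * tests_per_snp
threshold = alpha / n_tests
print(f"Bonferroni threshold: {threshold:.4f}")  # 0.0028
# The reported main effect for rs10245483 [p < 0.001] clears this bar;
# the white-subgroup p values of 0.007 and 0.008 do not.
```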
But buried in the math [which we don’t quite follow], there’s one of those "uh-oh" things:
Remission. In the modified intent-to-treat sample, age [p=0.01] and baseline QIDS-SR score [p<0.001] were significant predictors of remission. Hence, genetic analyses were performed covarying for both.

Within the significant overall model [χ2=67.58, df=29, p<0.001] only rs10245483 contributed significantly to prediction of remission. For rs10245483, there was a significant main effect on remission using multiple testing correction [W=12.64, p<0.001; main effect odds ratio=3.48] and a significant interaction by treatment arm [W=11.18, p=0.001; interaction odds ratio=1.73]. Common allele homozygotes for rs10245483 responded significantly better to escitalopram [p=0.032] and sertraline [p=0.020] than did minor allele homozygotes. Minor allele homozygotes responded significantly better to venlafaxine [p=0.018]. There were no effects noted in the heterozygotes. The specific contribution of rs10245483 as a predictor of remission was also verified in univariate models assessing each SNP one at a time.

The effect was similar in whites and nonwhites. In white participants in the modified intent-to-treat sample [N=423], within the significant overall model [χ2=61.51, df=29, p<0.001] rs10245483 had a significant main effect on remission [W=7.22, p=0.007; main effect odds ratio=3.54] and a significant interaction by treatment arm [W=6.99, p=0.008; interaction odds ratio=1.78], which did not pass the multiple testing threshold. For nonwhites, within the significant overall model [χ2=55.79, df=28, p=0.001], there was a main effect of rs10245483 on remission [W=11.42, p=0.001; main effect odds ratio=14.38] and a significant interaction between rs10245483 and treatment [W=9.81, p=0.002; interaction odds ratio=3.54] that met the multiple testing correction threshold.
I would like to be competent in looking at those statistics and knowing what they mean, but in genetics studies, I come up short. These studies are often non-replicable, and I’m suspicious this will turn out to be one of those too. First off, there’s no control, no placebo group. So like in STAR*D, we don’t know what those responder figures really mean. And while they say the effect was similar in whites and nonwhites, the rest of that paragraph doesn’t confirm the assertion, including a dramatic difference in the odds ratios. One really has to be suspicious with such a paragraph that there’s a decepticon in the mix. And speaking of STAR*D, "The authors acknowledge the editorial support of Jon Kilner, M.S., M.A." Jon was the medical writer for most of the STAR*D articles. And the two first authors spent years chasing a glucocorticoid blocker in psychotic depression without results [but lots of profit]. Throw in their extensive COIs, including commercial genetic testing labs and the financing of Australian entrepreneur Evian Gordon’s Brain Resource, and suspicion is well justified.

My take on this whole line of thinking is suffused with suspicion. It seems to me that the assumption that there will be biomarkers that predict antidepressant response is widely held, but I’m in the dark as to why. At least this study has a hypothesis – the transport of drugs across the blood-brain barrier. This article is accompanied by a Perspective article from the NIMH Intramural Research Program [Clinically Useful Genetic Markers of Antidepressant Response: How Do We Get There From Here?] that suggests [you guessed it] more research.

STAR*D began in 2001 as an outgrowth of the TMAP program [often referred to as the infamous TMAP program] and has spawned a steady stream of both public and industry financed research into super-charging antidepressants ever since – now chasing the dream of predictive genetic biomarkers [which might lead to a productive enterprise in commercial testing]. Meanwhile, back at STAR*D, it looks as if someone’s going to have another shot at it:
Medscape Medical News
by Kenneth Bender
August 14, 2015

… Somaia Mohamed, MD, PhD, VA Connecticut Health Care System, and coauthors of the article describing the VAST-D study credit the Sequenced Treatment Alternatives to Relieve Depression [STAR*D] study for highlighting the frequent inadequate response to initial treatments, but point out that the study did not ultimately identify optimal interventions after initial treatment failure.

They also note that the STAR*D study did not include an atypical antipsychotic augmentation treatment arm, because the study was conducted prior to FDA approval of that indication for an agent in this class.

The VAST-D study will incorporate atypical antipsychotic augmentation in the protocol, and the authors indicate that it will answer two principal questions unanswered by STAR*D: "For which patients, under what circumstances, is switching to vs augmenting with other antidepressants the most effective ‘next-step’ strategy, and how does augmentation with atypical antipsychotics compare to either switching or augmenting with antidepressants?"

by Somaia Mohamed, Gary R. Johnson, Julia E. Vertrees, Peter D. Guarino, Kimberly Weingart, Ilanit Tal Young, Jean Yoon, Theresa C. Gleason, Katherine A. Kirkwood, Amy M. Kilbourne, Martha Gerrity, Stephen Marder, Kousick Biswas, Paul Hicks, Lori L. Davis, Peijun Chen, Alexandra Mary Kelada, Grant D. Huang, David D. Lawrence, Mary LeGwin, and Sidney Zisook
Psychiatry Research. Published Online: August 05, 2015

Highlights
  • Over 2/3s of Major Depressive Disorder cases do not achieve remission on initial treatment.
  • Urgent need to identify effective next step treatments for MDD.
  • Switching to bupropion-SR vs. augmenting with bupropion-SR or aripiprazole.
  • Compare 12-week remission and relapse for up to 6 months after remission.
  • Seven methodological issues to balance efficacy and effectiveness.
Abstract
Because two-thirds of patients with Major Depressive Disorder do not achieve remission with their first antidepressant, we designed a trial of three “next-step” strategies: switching to another antidepressant [bupropion-SR] or augmenting the current antidepressant with either another antidepressant [bupropion-SR] or with an atypical antipsychotic [aripiprazole]. The study will compare 12-week remission rates and, among those who have at least a partial response, relapse rates for up to 6 months of additional treatment. We review seven key efficacy/effectiveness design decisions in this mixed “efficacy-effectiveness” trial.
"Urgent need to identify effective next step treatments for MDD" assumes there is some such "effective next step" hidden somewhere in the current pharmacopeia yet to be identified. That’s an increasingly questionable assumption…
Mickey @ 1:46 PM

don’t take sides…

Posted on Wednesday 12 August 2015

While it’s not fashionable these days to acknowledge that Sigmund Freud even existed other than as a caricature, a container for the things he got wrong, this is one of those places where I would evoke something he got right – neutrality. Actually, his daughter Anna wrote it down in the most frequently quoted form, saying the analyst "takes his stand at a point equidistant from the id, the ego, and the superego." In just plain English that means, "Don’t take sides."
PsychiatricNews
by Renée Binder, M.D.
July 13, 2015

… In June, I arranged for the Board of Trustees to tour San Quentin in northern California. It was a powerful, moving, and formative experience, and I’m thankful to Dr. Paul Burton, the chief psychiatrist, and to the California Department of Corrections and Rehabilitation for giving us that access. Our visit was important because, if one wants to understand firsthand the toll mental illness is taking on our country, one just needs to peer beyond the bars of our nation’s jails and prisons. It’s also important to have a detailed and nuanced understanding of the situation. Our tour was a no-holds-barred look at San Quentin State Prison. For three hours, we were shown various aspects of prison life…. We specifically saw the psychiatric facilities, which are highly used by the inmates. In many ways they are state of the art. As you’d expect, sharp angles in the halls and cells, down to the door hinges and door handles, were filed smooth to prevent inmates from using them to aid in a suicide attempt. Many cells provided a sanctuary for inmates, nearly always curled up on a plain bed with a blanket covering them head to toe.

But the rooms that captivated our group were the group therapy rooms. Separate enclosures or “modules” formed a semi-circle for people who are at once both dangerous and needing and deserving of help. We briefly observed one group. Those participating highly praised the care they were getting. One patient’s body language told us when he had enough of our interruption; in a visceral way, it was clear he valued his treatment.

The four psychiatrists with whom we interacted were obviously compassionate and concerned about each of their patients. At each stop, it was abundantly clear that the care provided by the staff psychiatrists was superb and professional. Even if incarceration itself is likely a detriment to many individuals’ mental health, the physician-patient interactions we witnessed gave us hope…

Jails and prisons have become the front lines of treatment for mental illness. The data indicate that San Quentin is an anomaly in the quality of care that’s available in such a setting. This is likely due to its proximity to a highly desirable metropolitan area and its affiliation with the University of California, San Francisco, Department of Psychiatry.

According to a 2010 study by the Treatment Advocacy Center and the National Sheriffs’ Association, there was one psychiatric bed for every 300 Americans in 1955. By 2005, that rate had dropped to one psychiatric bed for every 3,000 Americans. Over time mental illness has been criminalized, and our jails and prisons take up the slack, despite being seriously ill-equipped to do so. Our jails and prisons have turned into warehouses for those with mental illness; the number of people with mental illness in jails is three to six times higher than that of the general public.

This is why APA and the American Psychiatric Association Foundation have joined forces with the National Association of Counties and the Council of State Governments Justice Center in the “Stepping Up” Initiative. The initiative seeks to reduce the number of people with mental illness in our prisons and jails by promoting the use of mental health courts and diverting minor offenders who have mental illness to treatment resources rather than incarceration…

We must reduce the use of our jails and prisons as warehouses for Americans with mental illness, partly to help our patients, but also because of what this tragedy says about the kind of nation we are. This is an effort for our patients, for our profession, and for our nation.
I personally couldn’t be more pleased that Dr. Binder wants to do something about this very real problem, but this reporting doesn’t acknowledge the layers of history and controversy that lie embedded in this particular mustard seed – dating all the way back to Philippe Pinel and beyond. It frames the problem from one particular vantage point. But as soon as one does that, one hears:
… psychiatry’s concern about the imprisonment of the mentally ill is being used by advocates of forced outpatient treatment as a Trojan Horse.  The advocates for forced treatment in outpatient settings [such as the Treatment Advocacy Center] argue that forced drug treatment would prevent the mentally ill from ending up in prison, and thus their legislation, which in fact curbs the civil rights of citizens in profound ways, comes cloaked in the rhetorical garb of “humanism.” If we are going to have an honest societal discussion about the shame of imprisoning the “mentally ill,” then it needs to be completely decoupled from that legislative agenda. Indeed, an argument can be made that the growing imprisonment of the “mentally ill” is yet another example of how our drug-based paradigm of care has failed us. The use of psychiatric medications in our society has exploded over the past 25 years; there is great societal pressure put on people diagnosed with schizophrenia or bipolar disorder to take their medications; and yet we now have this problem of hundreds of thousands of “mentally ill” in prisons and jails…

However, I do agree with Allen Frances on this point: Any effort to remake mental health care in this country needs to include a focus on what can be done to help the multitudes of poor people and disenfranchised people who show up in distressed emotional states in emergency rooms and homeless shelters, and the eventual routing of many such people to jails and prisons.  But, in my opinion, if we want to find a solution, we should focus on providing housing, social support and jobs that help people lead meaningful lives. If we want to reduce the number of people said to be mentally ill and in jail, then we should focus on reducing poverty in this country. Substantially raising the minimum wage would, undoubtedly, be a good first step in addressing this problem…
Sound familiar? Here’s another version from the last time around:
For some time now I have maintained that commitment—that is, the detention of persons in mental institutions against their will—is a form of imprisonment; that such deprivation of liberty is contrary to the moral principles embodied in the Declaration of Independence and the Constitution of the United States; and that it is a crass violation of contemporary concepts of fundamental human rights. The practice of "sane" men incarcerating their "insane" fellow men in "mental hospitals" can be compared to that of white men enslaving black men. In short, I consider commitment a crime against humanity. In the first place, the difference between committing the "insane" and imprisoning the "criminal" is the same as that between the rule of man and the rule of law: whereas the "insane" are subjected to the coercive controls of the state because persons more powerful than they have labeled them as "psychotic," "criminals" are subjected to such controls because they have violated legal rules applicable equally to all…

The fundamental parallel between master and slave on the one hand, and institutional psychiatrist and involuntarily hospitalized patient on the other, lies in this: in each instance, the former member of the pair defines the social role of the latter, and casts him in that role by force…

In this therapeutic-meliorist view of society, the ill form a special class of ”victims” who must, both for their own good and for the interests of the community, be "helped"  — coercively and against their will, if necessary — by the healthy, and especially by physicians who are "scientifically" qualified to be their masters. This perspective developed first and has advanced farthest in psychiatry, where the oppression of "insane patients" by "sane physicians" is by now a social custom hallowed by medical and legal tradition. At present, the medical profession as a whole seems to be emulating this model. In the Therapeutic State toward which we appear to be moving, the principal requirement for the position of Big Brother may be an M.D. degree.
I want to say "ditto" to my last post. Dr. Binder seems to be a decent person, and I expect her concern for the jailed psychotic people is genuine. But she isn’t talking like she knows what awaits her up ahead. The collective other have decided that the problem isn’t mental illness but something else. She should probably go to the Mad in America site and read a few blogs, then read the British Psychological Society’s report, then think about this era of KOLs we’ve just been through, before proceeding as if good intentions will carry the day. She doesn’t seem to grasp that she’s now representing an organization that many see as the problem rather than as part of the solution.

Right now, there’s an enormous and somewhat understandable backlash to the recent era of psychopharmacological/neuroscientific goings-on, and pressing ahead without addressing all the conflicts in the air is likely to be a lesson in futility. The sentiment expressed above by Robert Whitaker and in the British Psychological Society’s report suggests that psychotic conditions aren’t, in fact, mental illness at all, but rather some barometer of social ills and imbalances, or even a sign of psychiatrists not listening.

If Dr. Binder is serious about approaching this problem, she’s going to have to address all of these views. The official mouthpieces in psychiatry right now seem to think that they can just ignore what’s happened in these last 20 or 30 years if they start behaving in more rational ways now. They think that they can avoid acknowledging the sins of the fathers. That is unlikely to help anyone at this point. All they’re going to hear about is forced drugging, imprisonment, overmedication, "bio-bio-bio," medical models, the DSM-whatever, pharma this and pharma that.

Dr. Lieberman had a shot at attacking and discounting the critics, which was decidedly ill-conceived. Dr. Summergrad was more balanced, but didn’t address or acknowledge the conflicts in the air. Dr. Binder has the opportunity to take a different and more realistic tack, but I’m afraid that the course she’s setting here could use a bit more thought because it raises specters she’s failing to mention. For the moment, it would be a much better idea to look into problems like this one, and get input from as many players as she can find. "We must reduce the use of our jails and prisons as warehouses for Americans with mental illness" is totally correct. But for the moment, she needs to listen to some time-honored advice: "Don’t take sides." At least not yet…
Mickey @ 3:25 PM