in the land of sometimes[5]

Posted on Sunday 3 January 2016

This figure is obviously adapted from the last post [oh well…]. It’s the strength of effect [Standardized Mean Difference AKA Cohen’s d AKA ~Hedges g] of the Atypical Antipsychotics [in black] in the treatment of Schizophrenia, with the values for Brexpiprazole added below [in green]. The upper values come from thousands of patients compiled in a meta-analysis; the Brexpiprazole values come from a few hundred patients in two recent RCTs [see the spice must flow…]. In an earlier version of this figure, I hadn’t put in the 95% Confidence Limits [because I didn’t know how]. The diamond at the bottom is the weighted mean for the Brexpiprazole values [SMD = -0.28].

While the practice of using p·values as a gold standard continues in our psychiatry journals, that’s not the case throughout the scientific world. To make the point for the thousandth time, all a p·value tells you is that two populations differ, but not by how much. Statistical separation can range from trivial to meaningful, and the p·value reveals nothing about where on that range a given result falls. So the American Psychological Association recommends that the Effect Size with its 95% Confidence Intervals be universally reported [and many of their journals won’t publish articles that don’t include them]. The Effect Size not only adds the dimension of the magnitude of an effect; in many cases, it can also be used for comparisons between studies.

Though the Effect Size adds a quantitative dimension to the RCT analysis over the qualitative p·value, it’s no gold standard either. Note the width of the 95% Confidence Intervals in the Brexpiprazole data. Is that because these are individual RCTs rather than pooled studies with much larger groups? Is it because these are CRO-run studies spread over [too] many sites [60 each!]? We don’t know that. All we really know is that it sure is a wide interval, apparent also in the standard deviation values [σ]:

BREXPIPRAZOLE Trials in Schizophrenia

STUDY           DRUG      MEAN     SEM      σ      n       p         d      lower    upper
Correll et al   placebo  -12.01    1.60   21.35   178
                0.25mg   -14.90    2.23   20.80    87     0.3       0.14   -0.120    0.393
                2mg      -20.73    1.55   20.80   180    <0.0001    0.41    0.204    0.623
                4mg      -19.65    1.54   20.55   178     0.0006    0.36    0.155    0.574
Kane et al      placebo  -13.53    1.52   20.39   180
                1mg      -16.90    1.86   20.12   117     0.1588    0.166  -0.067    0.399
                2mg      -16.61    1.49   19.93   179     0.1488    0.153  -0.054    0.360
                4mg      -20.00    1.48   19.91   181     0.0022    0.321   0.113    0.529
[values in red calculated by yours truly – not in the paper]

A thing of beauty, this little table. It doesn’t answer all the questions, but it gives you all the information you need to think about things in an informed way. The Effect Size [d] values and their 95% Confidence Intervals are a decided plus. In a way, these two articles are exemplary. The papers themselves tell you that they are industry-funded, ghost-written articles. All the information in the table is either in the paper or easily calculated [in red]. Put it all together, and you are in a position to make informed decisions about what these two studies have to say about the efficacy of Brexpiprazole. And now to the point of this post. These papers represent progress. While it’s not Data Transparency proper, it’s close enough for everyday work. But being able to make that table with a minimal expenditure of energy is the exception, not the rule.
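And about that diamond in the figure up top: it’s just an inverse-variance weighted mean of the six d values. A back-of-the-envelope Python sketch [my own rough version, not necessarily the exact method behind the figure; it recovers each arm’s standard error from its CI width and treats the arms as independent, though arms within a study actually share a placebo group]:

```python
from math import sqrt

# [d, lower, upper] for the six Brexpiprazole arms in the table above
arms = [
    (0.140, -0.120, 0.393), (0.410, 0.204, 0.623), (0.360, 0.155, 0.574),
    (0.166, -0.067, 0.399), (0.153, -0.054, 0.360), (0.321, 0.113, 0.529),
]

num = den = 0.0
for d, lo, hi in arms:
    se = (hi - lo) / (2 * 1.96)  # standard error recovered from the CI width
    w = 1.0 / se ** 2            # inverse-variance weight
    num += w * d
    den += w

pooled = num / den               # weighted mean d
pooled_se = sqrt(1.0 / den)      # standard error of the weighted mean
print(f"pooled d = {pooled:.3f} "
      f"[{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")
# prints: pooled d = 0.268 [0.179, 0.358]
```

That lands in the neighborhood of the diamond’s 0.28 magnitude [the figure plots negative change scores, hence its minus sign].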

These two papers are not available online, and the abstracts in PubMed don’t have enough information to construct the table. So you have to have access to the papers. If your retirement hobby is trying to evaluate these RCTs of psychiatric drugs, you already know that in the majority of papers, it isn’t possible to make that table this easily, if at all. All you really need is the MEAN, the number of subjects, and either the SEM [Standard Error of the Mean] or the Standard Deviation for each group, and you’re good to go. But invariably, they’re not available. The results are frequently reported in some other way, and one gets the feeling that it’s by design – a bit of subterfuge. But whether that conclusion is an acquired paranoia on my part or the truth doesn’t matter. What matters is that you can’t do what’s needed to understand the efficacy results and to compare them with other studies.
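When those three numbers are reported, the rest of the table is a few lines of arithmetic. Here’s a minimal Python sketch of the standard formulas [the pooled-SD Cohen’s d and its large-sample standard error, checked against the Correll et al 2mg row above]:

```python
from math import sqrt

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d and its 95% Confidence Interval from two groups'
    means, standard deviations, and sizes."""
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    # large-sample standard error of d
    se = sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, d - 1.96 * se, d + 1.96 * se

# Correll et al, placebo vs 2mg [MEAN, σ, n straight from the table]
d, lo, hi = cohens_d(-12.01, 21.35, 178, -20.73, 20.80, 180)
print(f"d = {d:.2f} [{lo:.3f}, {hi:.3f}]")  # prints: d = 0.41 [0.204, 0.623]
```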

This is the stuff of premature hair loss and nervous twitches in the brave souls who do meta-analyses. And they’ve responded by finding ways to gather the needed information to produce tables like this one. Their methodology is buried in big thick books like the Cochrane Handbook for Systematic Reviews of Interventions or in expensive software like Comprehensive Meta-Analysis. This is hardly the stuff for your basic hobby blogger like me. Fortunately, there are all kinds of formulas and Internet calculators scattered around that allow one to get the job done. So the next one of these in the land of sometimes… posts will likely be a short catalog of those tools [for we mortals] for exhuming the information that should’ve been provided by the authors/writers in the first place. Boring? Yes. Necessary? Also yes. The alternative is tantamount to drinking the Kool-Aid.
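Until that catalog materializes, here’s a taste of what those exhumation formulas look like. These are the garden-variety conversions in the spirit of the Cochrane Handbook, sketched in Python; note that d_from_p uses a large-sample normal approximation where the Handbook would reach for the t distribution, so treat it as a rough check, not gospel:

```python
from math import sqrt
from statistics import NormalDist

def sd_from_sem(sem, n):
    """Standard Deviation from a reported Standard Error of the Mean."""
    return sem * sqrt(n)

def sd_from_mean_ci(lower, upper, n):
    """Standard Deviation from a reported 95% CI around a group mean."""
    return sqrt(n) * (upper - lower) / (2 * 1.96)

def d_from_p(p, n1, n2):
    """Approximate |d| from a two-sided p-value and the group sizes
    [normal approximation; reasonable for groups this large]."""
    z = NormalDist().inv_cdf(1 - p / 2)
    return z * sqrt(1 / n1 + 1 / n2)

print(round(sd_from_sem(1.60, 178), 2))      # 21.35, the Correll placebo σ
print(round(d_from_p(0.0022, 180, 181), 2))  # 0.32, the Kane et al 4mg d
```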
hat tip to Glen Spielmans and other teachers… 
Mickey @ 12:15 PM

oh well…

Posted on Friday 1 January 2016

So there’s an article that I’ve looked up over and over. I referred to it a lot looking into Brexpiprazole. It compares the strength of effect of various parameters for the Atypical Antipsychotics. I thought it was a shame it wasn’t open access, so I went after its forest plots with my graphics program and removed all but the bare bones [including the drugs not available in the US]. After I got through, I went to post it and found that there was a pdf available on-line after all. Oh well. My summary graphic is below anyway…
by Stefan Leucht, Andrea Cipriani, Loukia Spineli, Dimitris Mavridis, Deniz Örey, Franziska Richter, Myrto Samara, Corrado Barbui, Rolf R Engel, John R Geddes, Werner Kissling, Marko Paul Stapf, Bettina Lässig, Georgia Salanti, and John M Davis
The Lancet. 2013 382:951-962.

Background The question of which antipsychotic drug should be preferred for the treatment of schizophrenia is controversial, and conventional pairwise meta-analyses cannot provide a hierarchy based on the randomised evidence. We aimed to integrate the available evidence to create hierarchies of the comparative efficacy, risk of all-cause discontinuation, and major side-effects of antipsychotic drugs.
Methods We did a Bayesian-framework, multiple-treatments meta-analysis [which uses both direct and indirect comparisons] of randomised controlled trials to compare 15 antipsychotic drugs and placebo in the acute treatment of schizophrenia. We searched the Cochrane Schizophrenia Group’s specialised register, Medline, Embase, the Cochrane Central Register of Controlled Trials, and for reports published up to Sept 1, 2012. Search results were supplemented by reports from the US Food and Drug Administration website and by data requested from pharmaceutical companies. Blinded, randomised controlled trials of patients with schizophrenia or related disorders were eligible. We excluded trials done in patients with predominant negative symptoms, concomitant medical illness, or treatment resistance, and those done in stable patients. Data for seven outcomes were independently extracted by two reviewers. The primary outcome was efficacy, as measured by mean overall change in symptoms. We also examined all-cause discontinuation, weight gain, extrapyramidal side-effects, prolactin increase, QTc prolongation, and sedation.
Findings We identified 212 suitable trials, with data for 43 049 participants. All drugs were significantly more effective than placebo. The standardised mean differences with 95% credible intervals were: clozapine 0·88, 0·73–1·03; amisulpride 0·66, 0·53–0·78; olanzapine 0·59, 0·53–0·65; risperidone 0·56, 0·50–0·63; paliperidone 0·50, 0·39–0·60; zotepine 0·49, 0·31–0·66; haloperidol 0·45, 0·39–0·51; quetiapine 0·44, 0·35–0·52; aripiprazole 0·43, 0·34–0·52; sertindole 0·39, 0·26–0·52; ziprasidone 0·39, 0·30–0·49; chlorpromazine 0·38, 0·23–0·54; asenapine 0·38, 0·25–0·51; lurasidone 0·33, 0·21–0·45; and iloperidone 0·33, 0·22–0·43. Odds ratios compared with placebo for all-cause discontinuation ranged from 0·43 for the best drug [amisulpride] to 0·80 for the worst drug [haloperidol]; for extrapyramidal side-effects 0·30 [clozapine] to 4·76 [haloperidol]; and for sedation 1·42 [amisulpride] to 8·82 [clozapine]. Standardised mean differences compared with placebo for weight gain varied from –0·09 for the best drug [haloperidol] to –0·74 for the worst drug [olanzapine], for prolactin increase 0·22 [aripiprazole] to –1·30 [paliperidone], and for QTc prolongation 0·10 [lurasidone] to –0·90 [sertindole]. Efficacy outcomes did not change substantially after removal of placebo or haloperidol groups, or when dose, percentage of withdrawals, extent of blinding, pharmaceutical industry sponsorship, study duration, chronicity, and year of publication were accounted for in meta-regressions and sensitivity analyses.
Interpretation Antipsychotics differed substantially in side-effects, and small but robust differences were seen in efficacy. Our findings challenge the straightforward classification of antipsychotics into first-generation and second-generation groupings. Rather, hierarchies in the different domains should help clinicians to adapt the choice of antipsychotic drug to the needs of individual patients. These findings should be considered by mental health policy makers and in the revision of clinical practice guidelines.
Funding None
Mickey @ 8:04 PM

happy new year…

Posted on Friday 1 January 2016

Mickey @ 12:32 AM

a story: the epilogue…

Posted on Thursday 31 December 2015

I thought I needed two endings. The first would emphasize the importance of Data Transparency to allow us to thoroughly vet industry-funded productions like these Atypical Antipsychotic Augmentation of Treatment Resistant Depression trials. Then the second ending would say that these industry-funded, approval-oriented RCTs are way over-valued and no basis for drug usage – that’s the job of practitioners, academics, and patients in the wide world of clinical medicine. But on re-reading it, those two seemingly dichotomous points of view just kind of ran together in the writing. Which one is the more important one? Both. I couldn’t keep them apart even when I was trying…
Mickey @ 2:02 PM

a story: the second ending…

Posted on Thursday 31 December 2015

    December Tales

I am personally drawn to the mathematical/statistical analyses employed in a properly conducted Randomized Double Blind Clinical Trial [RCT], and would like to think that if we had access to the raw data from Clinical Trials [Data Transparency], we could generate an accurate efficacy and safety profile for our drugs based on that data. That was the general gist of the first ending to this story. In that scenario, the carefully analyzed RCT would become the gold standard for evaluating medications. Certainly, the RCT is, by Congressional Decree, already the gold standard for the FDA’s Approval process.

There are other ways to look at this [see hearing voices… and something essential betrayed…]. While the RCT may be a reasonable way for the FDA to evaluate new drugs for approval, that doesn’t necessarily translate to a standard for clinical practice [see this comment by Dr. Bernard Carroll and Pharmageddon by Dr. David Healy]. Subjects are screened for exclusions. Subjects are recruited and often paid, whereas Patients are help-seeking. The RCTs are short-term [6-8 weeks] while Patients are on medications much longer [these days, sometimes indefinitely]. Subjects are seen weekly for a while [metrics and questionnaires]. Patients are seen for brief med-checks at infrequent intervals. Subjects are assessed using sensitive standardized scales. Patients are asked, "How’s it going?" Lots of differences.

Those are concrete differences between the RCTs that we read about in our journals and the actual practice of medicine. But there’s another glaring difference. The RCTs are done by the company that has the patent on the drug, and that obviously wants to sell it widely for a maximal price. This blog and many others are filled with examples where the RCT is selectively analyzed and reported to achieve that end. For example, all of the Atypical Antipsychotic Augmentation of Treatment Resistant Depression studies reported in this series were statistically significant, but the Strength of the Effect for the primary outcome variables wasn’t reported. Here are the values extracted in the Spielmans et al meta-analysis for those outcomes and the values I got using their same methodology [Cochrane Handbook] in the Geodon® and Rexulti® reports [also showing the IDS-SR Effect Sizes where available]:

SOURCE                   MADRS/HAM-D [d]   IDS-SR [d]
Spielmans et al               0.32            0.14
Brexpiprazole 1mg-3mg         0.29            0.16
Brexpiprazole 2mg             0.29            0.20

The observer rated MADRS/HAM-D numbers are in the weak to moderate range, but the self-rated scales are dramatically near-null – Statistically Significant, but trivial Effect from the Subject’s perspective. So with Data Transparency, willing independent analysts, and cooperative journal editors we could correct a lot of the misbehavior and get all the information.

But these short-term, industry-run "Approval" RCTs really are still nothing more than a starting place for understanding either efficacy or safety, even at their best. Perhaps the analogy of a model airplane to a Jumbo Jet is a more reasonable way to conceptualize their place. The real test is when the drugs are put into general use by Clinicians and Patients. And we often don’t hear about those results until well down the line, when the side effects and harms start showing up in courtrooms. Patients have plenty of time in waiting rooms getting "screened" or watching TV getting "detailed" by DTC ads, but very little time in the med-check world of Managed Care getting carefully evaluated before or after prescriptions are written. And the time allotted for reporting is… well, it just isn’t in the program. That clown at the World of DTC Marketing in the last post [when pigs can fly…] has plenty of suggestions about how we can talk to our patients about the provocative questions raised by his silly ads when they "ask the doctor", but apparently doesn’t know that we don’t even have time to do an adequate medical interview, much less what he suggests.

These papers on Atypical Antipsychotic Augmentation of Treatment Resistant Depression offer us a good lesson in the problems of the day. The RCTs are all deceptively presented, accentuating the positive and eliminating the negative. The actual felt impact of the intervention is in the slim-to-none range, and these short-term RCTs don’t show the grave potential harms of the Metabolic Syndrome or the potential for Tardive Dyskinesia, which occur later. What they do tell us is that cleaning up RCTs won’t be enough. We need longitudinal information – not just the rough and inadequate start-up data. We need to have some way for physicians to learn and execute in an ongoing way the kind of clinical skills it takes to practice decent medicine, to prescribe with informed therapeutic intent. We need to grow an RxISK system that doesn’t just live on a Server in the UK, but is used responsibly and universally by patients and practitioners. If RxISK can do it, surely we can build a system that tells us how medications are playing in the field once they’re approved. By the way, that system used to be called Academic Medicine, but maybe we need a replacement. And, by the way, we don’t need to give Atypical Antipsychotics to people with life’s depressions except in dire circumstances, and then for a limited time only [if at all]…
Mickey @ 12:48 PM

when pigs can fly…

Posted on Wednesday 30 December 2015

By Ed Silverman
December 30, 2015

In a controversial move, the American Medical Association recently called for a ban on advertising prescription drugs and medical devices directly to consumers. The effort is largely symbolic because any ban would have to be authorized by Congress. But doctors resent the increasing pressure the ads place on them to write prescriptions out of concern patients will switch physicians. And they argue that many ads aimed at consumers promote more expensive medicines. Richard Meyer, a former Eli Lilly marketing executive who is now an industry consultant and runs the World of DTC Marketing blog, explains why he believes the AMA is misguided.
the Restasis® Cyborg
By Richard Meyer, pharmaceutical consultant

Last month, members of the American Medical Association declared that drug makers should stop advertising their products directly to consumers because they feel it contributes to an increase in health care costs and pushes patients to ask for products that they may not need or that are not right for them. This approach is, at best, misguided, and, at worst, ignores the benefits of direct-to-consumer advertising for patients. According to a study on DTC marketing that was conducted by Eli Lilly, 25 percent of patients who were prompted to visit their doctor after seeing an ad were given a new diagnosis. Of those patients, 43 percent were a “high priority” diagnosis for a serious health condition, like diabetes or hypertension. That same study indicated that 53 percent of physicians felt that DTC ads lead to better discussions with patients because patients are better educated and informed. In addition, a 2004 Food and Drug Administration survey of physicians and patients found that exposure to DTC ads spurred 27 percent of Americans to make an appointment with their doctor to talk about a condition they had not previously discussed. Another study found that the small print in a drug ad was strongly associated with patients contacting their health care providers.

But there is more.

A November 2006 report from the Government Accountability Office noted that only 2 percent to 7 percent of patients who requested a drug in response to a drug ad ultimately received a prescription for the medicine. In another study, DTC ads increased the likelihood that a patient would initiate a dialogue with a physician to request an advertised drug. In still another study, which was conducted in 2010 by Prevention magazine, 79 percent of those queried said they sought a specific product. At that point, the magazine had tracked such data for 13 years and the figure was an all-time high. Yet, only 19 percent of the patients actually received the product they sought, an all-time low. DTC advertising increases awareness of health problems and leads to a better informed and educated patient who can engage their physician in a dialogue rather than a monologue.

So what’s really going on here?

First, insurers are taking more prescription writing power away from doctors.  They first want patients to try generic medications which now make up 88 percent of all available prescription drugs. Second, higher patient copayments for office visits and insurance mean consumers are “shopping” for health care and health care treatments. This makes doctors very uncomfortable. Even with all these changes, research continually validates the notion that patients view their doctors as the gatekeepers to their prescription medicines. If a doctor doesn’t feel it’s right for the patient, then they won’t write for it.

The AMA would be better served to remind doctors to have the so-called “weight” conversation with patients, since obesity is at epidemic levels here in the United States and is costing health care billions of dollars. Patients should be warned of potential problems, if any, and, in conjunction with insurers, a comprehensive wellness plan should be developed. DTC advertising leads patients to their health care providers and, depending on the health condition, does not lead to high-priced unnecessary scripts. The AMA should reach out and work with pharma to improve DTC marketing, not request a ban on all DTC ads.
I only posted this absurd bit of spin as an example of how dumb the ad-men must think doctors are – and I haven’t been able to use my when-pigs-can-fly graphics for a while [I can’t find my lipstick-on-a-pig graphic]. For more BS, visit the World of DTC Marketing blog…

UPDATE: Saved by a friend!
Mickey @ 7:23 PM

persists in memory…

Posted on Tuesday 29 December 2015

Robert Spitzer, the author of the DSM-III, died on Christmas Day at age 83. While the commentaries so far praise his removing psychoanalysis from psychiatry, he gets mixed reviews for the system he developed. All agree that he became the most influential psychiatrist of his time.

I’ve never been able to say much about him or his DSM-III without hearing back that my opinion is suspect because I am a psychoanalyst. Unlike the modern KOLs, I wouldn’t argue that it’s not a Conflict of Interest that colors what I think. It definitely does. But I agree that psychoanalysis shouldn’t have been a part of psychiatry or psychiatric diagnosis in the first place. The fact that it became allied with psychiatry in America was neither what Freud wanted, nor what happened in the rest of the world. Parenthetically, psychoanalysis as a non-denominational discipline has actually thrived since the separation from psychiatry. But that’s another story.

At the time of the DSM-III’s introduction, I thought it read like an academic exercise more focused on itself than on the patients it purported to classify. Spitzer wasn’t a clinician, and his DSM-III ablated established categorical clinical distinctions in the service of improving inter-rater reliability, most notably Melancholic Depression versus depression as a symptom. Presumably, he also wanted to get rid of depressive neurosis in the process, but that stroke opened the floodgates to the later SSRI/Atypical/TRD craze, and stopped affective disorder research in its tracks for no clear reason. I personally thought the DSM-III subtracted from, rather than added to or improved upon, what came before, so I paid it little mind. But within a very short time, psychiatry was undergoing a massive sea change, and there was clearly a push towards pharmaceutical treatment and research along with changes in third-party payments. And for some of us, it was a time for a change in employment.

Looking back on those days, I don’t think psychiatry changed just because of Robert Spitzer. Robert Spitzer’s DSM-III was the more public face of a broader effort with strong political and economic undercurrents that transcended the stated scientific agenda. I doubt that Robert Spitzer; or the Medical Director of the APA [Dr. Mel Sabshin]; or those colleagues Spitzer called "the invisible college" [the neoKraepelinians centered at Washington University] had any inkling of the power they were ceding to the pharmaceutical and insurance industries in the process, or how that scenario would play out over the next three decades.

The biggest ringer in the story is what we now call the KOLs, the group of psychiatrists in high places who joined up with PHARMA for personal and institutional gain. I would never have dreamed that would happen. I doubt if Robert Spitzer did either. They’re still there, and without a thorough cleansing, I doubt that there will be any meaningful resolution of anything. You know who they are. Their names are spread throughout the posts on this and many other blogs. Robert Spitzer was as victimized by their antics as the rest of us.

So back to Spitzer’s legacy. Alongside his depression gaffe, there’s another place where I think he deserves to be personally blamed. He kept a lot of what he was doing under his hat, shared only with his confidants. So the DSM-III process was something of a politically maneuvered, bloodless coup d’état orchestrated in concert with an inner circle of the APA. That behind-the-scenes oligarchy has persisted, undermining any sense that the APA represents its membership [better characterized these days as its following]. Whether the stealth and all the palace intrigue was necessary or not [ends justifying the means], it has persisted as a style for 35 years, to all of our detriment.

I was in training at the New York Psychiatric Institute during the time when the DSM-III was being framed and I remember him as a boyish, hurried, preoccupied guy darting down the hallway. I didn’t know who he was or what he was up to, but I recognized him from his pictures later when the Manual was published. He was one of those people who catches your attention and persists in memory…
Mickey @ 8:00 PM

a story: the first ending…

Posted on Monday 28 December 2015

When something’s wrong, it’s a lot easier to say what’s wrong than it is to know what to do about it. The cross-fire of the academic-industrial complex and Managed Care is a choke-hold that rarely lets one come up for breath. Here we are about to finally escape from under the patent-life of all these drugs, and up pops Rexulti®, an Abilify® clone, and the FDA grants them a benefit-of-the-doubt approval for this Atypical Antipsychotic Augmentation of Treatment Resistant Depression indication coming right out of the pipeline.

The only thing I know to do is to try to chase down and make public how much of a cliff-hanger this FDA Approval was, and to try to make the actual state of affairs in the profile of this drug as clear as possible [see a story: the beginning of the end…]. I hardly know what to make of the way this was published: two ghost-written, industry-prepared articles with only one academic author, a notorious KOL at that, run back to back in the same journal with a lot of identical text; attempting to sell the idea that changing a protocol in midstream is an acceptable bit of science; living on p-values while ignoring Effect Size measurements. It’s really a bit too much all around.

I personally think that the antipsychotic drugs, atypical or otherwise, are too dangerous to give to patients with "office depression" and wouldn’t think of doing that, even if they worked well. When I found myself in an office full of people on them, I discontinued them pell-mell. The patients felt better [and so did I]. So I never really looked into the literature that addressed Atypical Antipsychotic Augmentation of Treatment Resistant Depression. But the publication of a Clinical Trial in this month’s American Journal of Psychiatry led me down that path through the articles and meta-analyses. About halfway through, I realized I was writing a story, so I included that in the titles. I’ve even made up a name:
    December Tales

It’s a story about using antipsychotic medication frivolously, in my now entrenched opinion. If you use Atypical Antipsychotic Augmentation of Treatment Resistant Depression, I’d suggest you read these posts, or at least glance at the references [see particularly 5, 8, and 12]. I found where this practice was first tried [see 6], and I’ve gradually realized that it has been an excuse to use dangerous medications casually. It’s situations like this that make Data Transparency such an essential goal. I had a really difficult time gathering all the pieces for my Rexulti® posts [7, 9, 10, 11] and Spielmans et al had to scramble for their meta-analysis [5]. It’s absolutely silly for the pharmaceutical companies and the FDA to continue to hide the raw data, and it’s time for it to stop. That’s just a given. In the meantime, I hope people will join in the task of going over these articles with a fine tooth comb in vetting these industry productions with all their spin.

If there’s a prime example to illustrate why the Direct-to-Consumer advertisements need to be taken off the air, psychiatric medications are the prime contender. The ads have been relentless and misleading, and the very real damage from the overmedication fueled in part by these ads is all around us. The AMA has recommended that they be banned [damn the torpedoes! full speed ahead…]. I’d up that recommendation to a demand were it up to me. Those ads are as destructive as the cigarette ads used to be, and deserve the same fate.

And speaking of prime examples, this story of the Atypical Antipsychotic Augmentation of Treatment Resistant Depression is a good one to illustrate how commercial interests have invaded medical practice. Besides the obvious dangers of the Metabolic Syndrome and Tardive Dyskinesia, these drugs don’t really do what they’re advertised to do – make the antidepressants work a lot better. They seem to have a small effect [weak at best], but the Effect Sizes are surprisingly low, and the feedback from self-rated scales barely registers any clinically relevant effect. I’ve maxed out my boring-ness to make this story available with the appropriate references in hopes that people might happen by and take a look…
Mickey @ 8:00 PM

a story: getting near the ending[s]…

Posted on Monday 28 December 2015

Starting to work in a public clinic after 20 years in practice as a psychotherapist and five years of retirement was something of a shock. Besides the general polypharmacy, the number of people taking Atypical Antipsychotics was staggering to me. At the time, Seroquel® was the number one selling drug in the country, and everyone was on it. I didn’t understand how that had come to be, but I did know what to do about it. So by my second year back at work, no patients in our clinic who were not psychotic were on those drugs.

Recently, they changed the policy at the clinic, and so we’ve taken on a new crop of patients. And here they come again – this time on a greater variety, though primarily on Seroquel® and Abilify®. And this time around, I’ve seen three cases of Tardive Dyskinesia [TD]. Mercifully, two were mild and seem to be slowly clearing, but one isn’t. It’s the patient I wrote about in blitzed… and some truths are self-evident… who was on an outrageous drug regimen. The further along we get with tapering the drugs, the more apparent her TD symptoms become – the worst case I’ve personally ever seen. She’s never been psychotic, and her diagnosis is probably best characterized as an attachment disorder.

That’s the case that’s in my mind as I read these Atypical-Antipsychotic-Augmentation-of-Treatment-Resistant-Depression articles, and probably why I’ve stayed glued to this topic for so long. I’ve realized what others may have already known – Atypical-Antipsychotic-Augmentation-of-Treatment-Resistant-Depression has been advertised far and wide and has given clinicians of many ilks permission to add Atypicals to their polypharmacy regimens. And it’s no surprise that the cases I’ve seen are on Seroquel® and Abilify®. They’re the only two FDA Approved for this indication, and are legally allowed to advertise:

Both Seroquel® and Abilify® are now off patent, so the ads will disappear. But now the FDA has approved Rexulti® for this indication based on these lackluster-at-best clinical trials I’ve been reviewing, so we can count on seeing Rexulti®-in-Depression ads soon. Even worse, it has gotten that approval at the beginning of its patent life, so it will be with us for a very long time, perpetuating this practice of Atypical-Antipsychotic-Augmentation-of-Treatment-Resistant-Depression. Can the Direct-to-Consumer ads be far behind?

What’s doubly tragic to me is that, as this patient has awakened from her drug-induced stupor, it’s obvious that she’s a particular kind of patient well known to me and others in my tribe. She’s a person with an attachment disorder [often called Borderline Personality] whose adulthood has been chaotic because of her difficulty with and in relationships. She’s been seen by a variety of mental health types, and has been obviously chronically overmedicated [with her participation]. Such patients finally wear out trying to make the world adapt to their volatility and childish needs, and become very treatable in later adulthood. This process has been called a Hegira of therapies [Gerald Adler]. So here she is, finally ready to perhaps live a different and more fulfilling life, and she may have developed an unnecessary and disfiguring iatrogenic illness to contend with for the rest of her days.

Insofar as I can tell from my recent review of the literature, there’s no real treatment for TD except not to get it in the first place. And although there are exceptions [some early onset cases], one’s chance of developing TD on antipsychotics increases steadily the longer you’re on the drugs. Add to that the danger of developing what’s called the Metabolic Syndrome [weight gain and Diabetes] and the risk side of the risk/benefit equation becomes very heavily weighted. Furthermore, these days, once medicines are started, people are kept on them indefinitely. So this approval of Rexulti® for Atypical-Antipsychotic-Augmentation-of-Treatment-Resistant-Depression has the potential to do some real damage – like that done to my patient, to absolutely no good purpose. No short term RCT is ever going to make this risk apparent. And no case of TD is worth the minimal effects the drugs might have on depression.

So that’s an additional reason I’m so stuck on this particular story. On to the endings…
Mickey @ 10:00 AM

a story: the beginning of the end…

Posted on Sunday 27 December 2015

I started the last post with a borrowed literary device, but I was just practicing. In The French Lieutenant’s Woman John Fowles popped into his story and spoke in the first person with the reader. He did that, or at least claimed he did that, because he couldn’t decide how to end his novel. He ended up writing two endings. Well that’s my problem with this story. I’ve got two different endings, both of which matter to me, but they aim at substantively different conclusions and there’s no resolving them. So I’m just going to have two endings and be done with it. But first…

In a story [starting in the middle]…, I discussed the two Brexpiprazole studies and their attempt at post-hoc changes that would make the results significant, sleight of hand PHARMA-style. Then in a story: the end of the middle…, we saw how the FDA reviewer didn’t let them get away with it, but in the end recommended approval for Augmentation in Treatment Resistant Depression based on the strong showing in the other study.
Thus, with one strong·ly positive trial and supportive evidence from two additional trials, there is adequate evidence of efficacy to approve this product for the adjunctive treatment of MDD.
And that 2MG study did achieve a p=0.002 that survived correction for multiple comparisons.

But is that what strong means? Very significant? We actually have other ways of looking at strong, called strength of effect or Effect Size.  I mentioned them in in the land of sometimes[2] and in the land of sometimes[3]. The Primary Outcome Variable in these studies is the MADRS score at 6 weeks compared to baseline [a continuous variable]. The usual measure of Effect Size would be Cohen’s d, but they neither calculated it nor gave us enough parameters to do it ourselves. But they did use the MADRS score to tally Responders [≥ 50% reduction] and Remitters [≥ 50% reduction and ≤ 10]. So I looked to see how strong they were in the 2MG study. I found the NNT [Number Needed to Treat] and OR [Odds Ratio] for the original Efficacy Population and the Final Efficacy Protocol using their Table 2 and the Supplement:

The NNT of 13 says that you have to treat 13 patients to get one additional responder beyond what you’d get with placebo alone. That’s a lousy number, on the other end of the scale from strong – a weak effect, if any. No matter which study you look at, or which population you use, these are all lackluster numbers. And the Odds Ratios, consistently under 2.0, are low. Here’s the 1MG AND 3MG study, equally dismal:
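For readers who want to see where numbers like these come from, the NNT and OR arithmetic is simple enough to sketch. The response counts below are hypothetical [chosen only to land near the reported NNT of 13 and an OR under 2.0], not the actual figures from Table 2 or the Supplement:

```python
# Sketch of NNT and Odds Ratio arithmetic from responder counts.
# The counts here are illustrative assumptions, not the trial data.

def nnt(p_drug, p_placebo):
    """Number Needed to Treat: 1 / absolute risk reduction."""
    return 1.0 / (p_drug - p_placebo)

def odds_ratio(resp_drug, n_drug, resp_placebo, n_placebo):
    """Odds of responding on drug divided by odds of responding on placebo."""
    odds_drug = resp_drug / (n_drug - resp_drug)
    odds_placebo = resp_placebo / (n_placebo - resp_placebo)
    return odds_drug / odds_placebo

# Hypothetical example: 50/180 responders on drug vs 36/180 on placebo
print(round(nnt(50 / 180, 36 / 180)))            # 13
print(round(odds_ratio(50, 180, 36, 180), 2))    # 1.54
```

Note how an 8% absolute difference in response rates – barely visible in a bar chart – already translates to an NNT of 13.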

You might notice that the graphs look better than the numbers. You can do that by using a change-from-baseline scale that obscures the fact that the MADRS is a 60-point scale, and by using the SEM for error bars rather than the SD or the 95% CIs, which might show how close these lines really are [I think of it as presentation spin].
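The SEM trick is worth spelling out. Since the SEM is just the SD divided by √n, large trials produce tiny error bars no matter how much the patients actually vary. A minimal sketch, with an assumed group size and SD [not taken from the papers]:

```python
# Why SEM error bars look so much tighter than SD or 95% CI bars.
# n and sd are hypothetical values, not figures from the trials.
import math

n, sd = 180, 7.0            # assumed group size and standard deviation
sem = sd / math.sqrt(n)     # standard error of the mean
ci_half = 1.96 * sem        # half-width of the 95% confidence interval

print(round(sem, 2))        # 0.52 -- a sliver on a 60-point MADRS scale
print(round(ci_half, 2))    # 1.02
print(sd)                   # 7.0  -- the bar you'd draw with the SD itself
```

With error bars of ±0.5 points instead of ±7, overlapping distributions can be made to look cleanly separated.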

Since I couldn’t do the Cohen’s d for the MADRS scores, I had a look at the HDRS-17:

Finally, since I have a spreadsheet that converts the usual Mean, SEM, n figures into Effect Sizes [Cohen’s d AKA Standardized Mean Difference], I kept going and put in all of the continuous Secondary Outcome Variables for the two studies [recall d=0.2 is weak, d=0.5 is moderate, and d=0.8 is strong]:
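The spreadsheet conversion amounts to recovering each group’s SD from its SEM, pooling the SDs, and dividing the difference in means by the pooled SD. A sketch of that calculation, with hypothetical change-from-baseline means and SEMs [not the published values]:

```python
# Cohen's d (Standardized Mean Difference) from Mean, SEM, and n.
# The input numbers below are illustrative assumptions only.
import math

def cohens_d(m1, sem1, n1, m2, sem2, n2):
    """SMD between two groups, given each group's mean, SEM, and size."""
    sd1 = sem1 * math.sqrt(n1)    # recover SD from SEM: SEM = SD / sqrt(n)
    sd2 = sem2 * math.sqrt(n2)
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                       / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# Hypothetical: drug group improves 20.7 points, placebo 18.6, SEM 0.6, n 180
print(round(cohens_d(-20.7, 0.6, 180, -18.6, 0.6, 180), 2))   # -0.26
```

A 2-point difference on a 60-point scale works out to a d in the low 0.2s – squarely in the weak range, which is the point of the forest plots that follow.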


Scanning down the forest plots, there’s nothing at all that I see justifying the strong moniker. Brexpiprazole just didn’t leave many footprints in the sand. And as Spielmans et al pointed out in their meta-analysis [the extra mile…], the self-reported depression scale [IDS-SR] is in the hardly-noticed range. Searching the text of the two articles, measurements of Effect Size are barely mentioned or reported. Certainly they play no important role in their analysis or discussion. So I’ll add that to my list of complaints about these papers.

Let’s face it, these are not scientific papers. They’re sloppy sales jobs conducted by industry marketeers with a penchant for cutting and pasting, stealthily presented with scientific bells and whistles, and full of sound and fury signifying nothing. This medication doesn’t do much of anything strong for the targeted illness. Dr. Michael Thase and his academic institution [Perelman School of Medicine of the University of Pennsylvania, Philadelphia, Pennsylvania] should be ashamed of themselves for even participating. Editorials to follow…
Mickey @ 11:48 AM