in the land of sometimes[2]

Posted on Sunday 13 December 2015

This is just some fluff. After all this time looking at the industry-funded clinical trials [RCTs], I’ve learned a few tricks for spotting the mechanics of deceit being used, but I realize that I need to say a bit about the basic science of RCTs before attempting to catalog things to look for. Data Transparency is likely coming, but very slowly. And even with the data, it takes a while to reanalyze suspicious studies – so we turn to these more indirect methods. If you’re not a numbers type or you already know your statistics, just skip this post. But if you want to become an amateur RCT vetter, read on. After a few more posts, there will be a summary and a guide to the Internet calculators to do the math. It’s something a critical reader needs to know about.

As I said in in the land of sometimes[1] "the word sta·tis·tics is derived from the word state, originally referring to the affairs of state. With usage, it has come to mean general facts about a group or collection, and the techniques used to compare groups. In statistical testing, we assume groups are not different [the null hypothesis], then calculate the probability of that assumption." That post a month ago was about some of the basic statistical tests used to evaluate continuous variables – where the variable studied can have any value. The continuous variables are the numbers of arithmetic, with decimal values; the numbers for making smooth x-y graphs; the numbers of parametric statistics. We talk about means, standard deviations, Student T-Tests, analyses of variance [ANOVA]. And in that post, we discussed Cohen’s d as the value we often calculate to measure the Effect Size, the relative strength of the variable’s effect [see in the land of sometimes[1]]:
d = (μ1 – μ2) ÷ σ
"While there’s no strong standard for d like there is for p, the general gist of things is that: d = 0.25 [25%] is weak, d = 0.50 [50%] is moderate, and d = 0.75 [75%] is strong." This statistic is sometimes called the standardized mean difference.

In what I think of as the land of sometimes, mathematics are different than those we learned in high school [unless we took statistics] because any given variable is only sometimes true. The fixed meanings of pure mathematics disappear as we approach the inevitable variability in our measurements and in the nature of nature itself. So there are no absolutes, just likelihoods and probabilities [and no matter how improbable, it’s still possible to be dealt a poker hand with four aces – sometimes].

Not all parameters are continuous variables. Some are yes/no categorical variables – based on some criteria, "did the patient respond to the drug or not". So we’ve introduced something new – criteria – and we have to use an entirely different computational system to look at Probability and Effect Size with this kind of data. The visuals are even different. Here are two graphs adapted from our Study 329 paper that show two different treatments of the HAM-D values – the difference from baseline on the left [a continuous variable] and the response rate on the right [a categorical variable] [with the criteria being that a responder has a HAM-D score either < 50% baseline or < 8 and a non-responder has a HAM-D score both > 50% baseline and > 8]:
So if you have two groups and you know the sizes and the percent responding in each, that’s all you need. No means, no standard deviations, no assumptions about a normal distribution. The classic test is the Chi Square contingency table:
Fill in the numbers for a, b, c, and d. Use the totals to calculate the expected values [if you look at the formulas long enough, the reason why the expected values represent the null hypothesis of no difference will become obvious]. Then compute a value for each of the four cells using…
… and add the four values to get the X2 test statistic. If this were 1971, you’d take the test statistic and the degrees of freedom [rows-1] × [columns-1] and look up the p value in a book of statistical tables. But it’s not 1971, it’s 2015. So you’ll forget all the calculating and use an Internet calculator like vassarstats by simply filling in a, b, c, and d, and like magic the p values will just appear. There’s a bigger calculator if you have more than two groups. There are some subtleties [Pearson’s, Fisher’s, Yates’s] that I can’t keep quite straight myself from time to time, but they are easy and well explained on the Wikipedia and vassarstats pages. Thanks to the Internet, the p value is just seconds away.
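And if you’d rather script it than fill in a web form, here’s a minimal sketch of the same 2×2 computation in Python [the counts are placeholders, not from any particular trial; scipy is assumed to be installed]:

```python
# Chi-square on a 2x2 contingency table [rows = groups, columns = responder / non-responder].
# Placeholder counts for illustration only.
from scipy.stats import chi2_contingency, fisher_exact

table = [[30, 20],   # a, b : drug responders, drug non-responders
         [18, 32]]   # c, d : placebo responders, placebo non-responders

chi2_yates, p_yates, dof, expected = chi2_contingency(table)                    # Yates-corrected [default for 2x2]
chi2_pearson, p_pearson, _, _      = chi2_contingency(table, correction=False)  # Pearson's
_, p_fisher                        = fisher_exact(table)                        # Fisher's exact test

print(f"Pearson: X2 = {chi2_pearson:.2f}, p = {p_pearson:.4f}")
print(f"Yates:   X2 = {chi2_yates:.2f}, p = {p_yates:.4f}")
print(f"Fisher exact: p = {p_fisher:.4f}")
```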

So what about the Effect Size with categorical variables? It’s just as important as it is for the continuous variables. There are two indices in common use: the number needed to treat [NNT] and the odds ratio [OR]. We use those same monotonous parameters [a, b, c, and d] to calculate their values. First, the formulas:

NNT = 1 ÷ (a÷(a+b) – c÷(c+d))
 OR = (a÷b) ÷ (c÷d)

The NNT is the easiest to interpret [though the derivation isn’t so intuitive]. It’s the number needed to treat to get one responder you wouldn’t get with placebo. With the OR, it’s easier to understand the logic, but harder to interpret. More about its values later [in an example]. One very important fact to always keep in mind about these statistics with categorical variables – what they measure is meaningless if you don’t know the criteria used to derive them [like above with "a responder has a HAM-D score either < 50% baseline or < 8 and a non-responder has a HAM-D score both > 50% baseline and > 8"]. Often you will read "The Odds Ratio for responders is…" but that’s not enough. You still need to know precisely how they defined and extracted "responders."
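To make those formulas concrete, here’s a minimal Python sketch using the same a, b, c, d layout as the contingency table above [the counts are again hypothetical]:

```python
# Effect sizes for a yes/no [categorical] outcome, using the 2x2 layout above:
#   a = drug responders,    b = drug non-responders
#   c = placebo responders, d = placebo non-responders
# Hypothetical counts, for illustration only.

def nnt(a, b, c, d):
    """Number needed to treat = 1 / absolute difference in response rates."""
    return 1 / (a / (a + b) - c / (c + d))

def odds_ratio(a, b, c, d):
    """Odds of responding on drug divided by odds of responding on placebo."""
    return (a / b) / (c / d)   # equivalently (a * d) / (b * c)

a, b, c, d = 30, 20, 18, 32
print(f"NNT = {nnt(a, b, c, d):.1f}")   # treat about this many to gain one extra responder
print(f"OR  = {odds_ratio(a, b, c, d):.2f}")
```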

I said in  in the land of sometimes[1] that I’d throw in these statistical interludes when it’s a rainy day with nothing going on. It’s not raining and there’s plenty happening, but I have a reason for both this post and the next. I really want to compare two meta-analyses of the same topic looking at mostly the same data, and I can’t do that without at least something of an introduction to some basic statistics, particularly Effect Sizes. For many reasons, we’re in an age of meta-analyses in psychiatry, so it seems an appropriate [as well as my trademark boring] thing to be talking about…
Mickey @ 4:42 PM

an aside…

Posted on Saturday 12 December 2015

Some of the more important learning in medicine doesn’t come from medical school, or residency, or even sitting at the feet of the masters. It comes from reflecting on the long days seeing patients – what kind of things keep coming up over and over? Early on, it hadn’t occurred to me that my own experience was such an important source of information. I recall how I first began to realize otherwise.
Most people seeing physicians don’t have an illness in need of treatment. They have symptoms, and have often fallen prey to a universal phenomenon – once you begin to suspect that a symptom means something is wrong, you lose your intuition for the aches and pains of living – every bodily sensation becomes suspect as part of the unknown disease. So on arriving in the doctor’s office, the story is laced with extraneous reports, and the task is to sort through and find the things that matter. The average expectable patient wants three things:
  1. To be taken seriously – listened to and heard. If they leave without that, they have the same confusion that brought them to the doctor in the first place.
  2. They want the symptoms that brought them to have no ominous meaning. As in: The dermatologist looks at the skin lesion and says, "That’s not cancer."
  3. They want the symptom to go away.
A lot of doctoring isn’t spent treating disease. It’s spent documenting its absence. But if you reassure too quickly, you risk leaving the patient feeling unheard. Just because the doctor knows It’s not cancer, doesn’t mean the patient does. And often, some kind of testing is required to rule out disease. What I noticed early on was that when I reported that the tests were negative [which should be good news], some patients left still worried. He couldn’t find anything isn’t the same as I’m fine – so for some, it meant the mystery persists. I found that if I simply said what I thought up front, then said I was going to order some tests to make sure – in most patients, the later negative report was received as the expected relief and we could talk about symptom relief. And if I added, If you continue to feel concerned, come back and we’ll take another look, I’d done my job [I would now say the most important part of that sentence is the word we].
In a modern world where the television ads are blaring and patients have friends that are taking antidepressants, the patients come in with the idea that they may have a depression disease, and want to try some depression medicine. Even if they know that their life situation is deplorable and that they can’t change it – well, they can still take a depression medicine. If it doesn’t have the desired effect, they’re understandably disappointed and want to try something else. They know that it’s out there as seen on television, but they think they just haven’t found it yet. So like in the above example, one is well advised to develop a similar pre-emptive line.
Sometimes adages help. In this case, Honesty is the best policy is the one I have in mind. So there are several things I say up front:
  • These medications are not like they are in the television ads. They don’t help everyone. When they do help, they usually take several weeks before they have their effect. We’d all like for them to work like this [left], but when they work, it’s more likely to be like this [right].
  • Some people have a paradoxical reaction and feel worse – feel agitated. If anything like that happens, stop taking it and let me hear from you. [If the patient is an SSRI virgin, I say words like suicidal homicidal aggressive – also usually say rarely, but it’s not rare if it’s you].
  • Some people have withdrawal symptoms when they stop taking these drugs. While that’s most often when they’ve been on them for a while, we routinely recommend discontinuing them slowly – tapering.
It’s what I’ve seen in practice and so it’s what I actually say [no mumbling allowed]. If they respond to the medications, I make my for-a-while, not for-ever point.
I started with the "finger thing" not as a ploy to lower expectations [though that’s not a bad idea]. I did it because I had a number of patients who would stop after a short time without giving the medication a chance, or who were looking for more than these drugs ever offer. I reasoned that if they’re going to take them, they need to have a realistic idea of what to expect and take them in a way to optimize their chances. To my surprise, patients would often walk in to a follow-up visit holding up their fingers in some way to communicate how much. It actually helped.
But the point of this post is really a sermon about the patients who are on some gaggle of medications already and/or who have tried a bunch of different antidepressants and who come in saying My antidepressant has stopped working. They talk as if they have a depression disease, and it’s one of those diseases that medications wear out on – so they need another medicine [I know of no such depression disease]. Sometimes, they’re pissed, like I’ve let them down because I haven’t created the drug they’d like to take.
 
Although they’re often changed to a different medication or [gulp] some new medication is added to an already overflowing pharmacopoeia, I think we owe them more than that. They’re doing the only thing they know how to do, even if they’re hostile or defeatist about saying it. I think it’s time to take a real history instead of pretending patients are chemistry sets, even if the waiting room is full. The yield of things one finds out is well worth the journey. It’s a much more productive use of time than chasing symptoms with medication [when that has already proven itself to be something of a dead end]. This is the situation where clinicians think about augmenting or combining or sequencing which often only adds to the risk and side effect burden. End of sermon…

One final idiosyncratic comment. I live in a rural and beautiful area of Appalachia – the area where the moonshiners and revenuers played Thunder Road in a more colorful era – where the legacy of NASCAR and Hot Rods from the white lightning drivers of yore is still with us. Drug use has replaced the white lightning, particularly Methamphetamine and Opioids. A new Sheriff helped, but it’s still endemic. I always ask [even if they’ve said no before]. And I frequently find this coming out of my mouth…

We don’t have any medicine that comes close to whatever-you-were-taking on the streets. Good for you that you stopped, but the only thing on your side is time and the help from others who have been through it. No antidepressant is going to stop the craving or replace learning to live drug free. We can give antidepressants a try, but your best bet by far is…
… and I launch into my NA speech [I’m surprised and pleased by how many of them end up going!]…
Mickey @ 4:16 PM

damn the torpedoes! full speed ahead…

Posted on Friday 11 December 2015

I so rarely disagree with anything Ed Silverman of Pharmalot has to say that I relish the opportunity to disagree with him for a change. When I read that the AMA had voted to call for a ban on Direct-to-Consumer [DTC] ads on TV [doctor power…], I felt almost giddy about my fellow physicians finally making that particular stand. Ed thinks it won’t work, thinks that it’s a dead end. I see it as the Phoenix now rising from the ashes!
Pharmalot
By Ed Silverman
December 8, 2015

Open any magazine or flick on the television, and chances are you’ll see an advertisement for an arthritis treatment, an erectile dysfunction drug, an anticlotting agent, or some other medicine you don’t need. For years, doctors have complained these kinds of ads targeted directly to consumers can sway patients to ask for unnecessary, and potentially harmful, prescription drugs. The American Medical Association, the nation’s most influential physicians group, was long mum on the subject. But last month, the 168-year-old organization — which represents around 235,000 doctors — finally took a stand, calling for an outright ban on consumer ads for prescription drugs and medical devices. “Now is the time,” Dr. Patrice Harris, chair-elect of the AMA Board of Trustees and a psychiatrist in Atlanta, told STAT. “We hope to make this a reality.”

Whether this notion is realistic is debatable.

For a ban to go into effect, Congress would have to pass a new law. And given the slim chances for political change, the AMA might be better off reaching a compromise than setting up an antagonistic relationship with drug makers. The United States is the only country other than New Zealand that permits drug advertising directed at consumers. Last year, the pharmaceutical industry spent $4.8 billion pitching its products and messages to Americans, a 17 percent increase from 2013, according to the market research firm Kantar Media. That much money buys a lot of airtime — so it’s no wonder that physicians gripe that drug ads can create inappropriate expectations in consumers or make doctors feel pressured to write prescriptions out of concern their patients will go elsewhere.

But the AMA is now seizing on another reason for a ban. The organization argues that the ads largely feature the latest and priciest medicines that few can afford. “Most of the ads are often for newer drugs that are getting more expensive,” said John Mack, a marketing consultant to drug makers and publisher of Pharma Marketing News. “I think their concerns are valid.” By pointing to high prices, the AMA is making a smart move. The accelerating cost of medicines has galvanized Americans. About three-quarters of the public believes that prices for brand-name drugs are too high, according to a poll conducted last month by STAT and the Harvard T.H. Chan School of Public Health.

This tactic is more likely to resonate than the most commonly voiced arguments against drug ads — namely, that they misinform patients, overemphasize benefits, and encourage overuse. In another recent public opinion poll from the Kaiser Family Foundation, about half of the respondents said that prescription drug advertising is mostly a good thing and that the ads do a good job of describing potential benefits and side effects.

The fact is that a majority of Americans believe drug ads allow greater patient involvement in health care decisions — and the Pharmaceutical Research and Manufacturers of America agrees. “It’s not a bad thing for patients to bring questions to the doctor’s office,” said Dr. Michael Ybarra, an emergency physician and the trade group’s senior director of alliance development…
I could unleash a rant for all seasons at this point, but I’ll try to exercise restraint of pen and tongue and stick to my own specialty, though this applies to all of medicine. In psychiatry, we’ve gone through a horrible era – an era in which medications have been over-valued and over-prescribed. The Direct-to-Consumer ads have had a big part in that. People are told that drugs are the answer, specifically in-patent expensive drugs, and they’ve turned unhappy people into a pressure group to lobby physicians. There have been too many complicit KOLs who have pounded that drum, but those ads have fanned the flames. Patients literally plead for the newer in-patent drugs they see on tv, and are disappointed if they’re given a generic or no prescription at all. From the point of view of a practitioner, the time honored journals are flooded with industry jury-rigged articles, and the patients come primed from last night’s ads. Ask your doctor if XYZ is right for you. If ABC happens, call your doctor. The damned ad writers get to spend a lot more time with our patients than we do!  [restraint 1boringoldman, use restraint]. And people get hurt in the process. My contention is that any drug that is advertised on television should be available Over-the-Counter [OTC]. Get me out of the loop.

Ed doesn’t think physicians as a group can make a stand like this. I do and I hope it happens. As a group, we could do what I do. I don’t prescribe drugs when they first come out, even things like Prozac®. The FDA isn’t charged with telling us what to use – just with assessing safety and minimal efficacy [no snake oil]. We learn about drugs through use, not from ads. So I wait until I hear from colleagues and patients and then I’m willing to give it a try. That’s the way it has been since the beginning of my career. But I’ve added something new. I don’t prescribe drugs advertised on television at all. I doubt all specialties could do that. But I can, and do. Aren’t I depriving people of the up-to-the-minute-world’s-greatest-breakthroughs-ever? I can’t imagine what they would be. I can’t think of an example.

So I think if physicians actually get behind this AMA resolution, we can make this happen. After all, we write the prescriptions and it’s our job to do it responsibly. If I see a patient and put them on a drug that’s dangerous without an adequate warning, I’m liable for not being honest in obtaining informed consent. And if I mumble and thereby minimize the dire consequences like they do in television ads, I am and should be called to task. Those mumbles are hardly adequate, and, in my opinion, they are essentially false advertising. So unlike Ed, I don’t think we need to spin anything. I think we need to fight fire with fire. We hold these truths to be self evident! Damn the torpedoes! Full speed ahead!
Mickey @ 7:55 PM

skepticism unchanged…

Posted on Friday 11 December 2015

In the last post [creative funding III & some other things…], I mentioned a 2009 Meta-analysis of antidepressant augmentation using Atypical Antipsychotics and I wanted to say some more about it:
Atypical Antipsychotic Augmentation in Major Depressive Disorder: A Meta-Analysis of Placebo-Controlled Randomized Trials
by J. Craig Nelson, M.D. and George I. Papakostas, M.D.
American Journal of Psychiatry. 2009 166:980-991., September 2009

Objective: The authors sought to determine by meta-analysis the efficacy and tolerability of adjunctive atypical antipsychotic agents in major depressive disorder.
Method: Searches were conducted of MEDLINE/PubMed [1966 to January 2009], the Cochrane database, abstracts of major psychiatric meetings since 2000, and online trial registries. Manufacturers of atypical antipsychotic agents without online registries were contacted. Trials selected were acute-phase, parallel-group, double-blind controlled trials with random assignment to adjunctive atypical antipsychotic or placebo. Patients had nonpsychotic unipolar major depressive disorder that was resistant to prior antidepressant treatment. Response, remission, and discontinuation rates were either reported or obtained. Data were extracted by one author and checked by the second. Data included study design, number of patients, patient characteristics, methods of establishing treatment resistance, drug doses, duration of the adjunctive trial, depression scale used, response and remission rates, and discontinuation rates for any reason or for adverse events.
Results: Sixteen trials with 3,480 patients were pooled using a fixed-effects meta-analysis. Adjunctive atypical antipsychotics were significantly more effective than placebo [response: odds ratio=1.69, 95% CI=1.46–1.95, z=7.00, N=16, p<0.00001; remission: odds ratio=2.00, 95% CI=1.69–2.37, z=8.03, N=16, p<0.00001]. Mean odds ratios did not differ among the atypical agents and were not affected by trial duration or method of establishing treatment resistance. Discontinuation rates for adverse events were higher for atypical agents than for placebo [odds ratio=3.91, 95% CI=2.68–5.72, z=7.05, N=15, p<0.00001].
Conclusions: Atypical antipsychotics are effective augmentation agents in major depressive disorder but are associated with an increased risk of discontinuation due to adverse events.
It’s a decent article, well researched with lots of useful information. Looking at the studies that have followed, I didn’t think they added very much – so this article is still current. While I would reach different conclusions, I appreciate their collecting the data for us to consider. I couldn’t find anything in this report [or the ones that followed] that suggested using one specific drug over others that I didn’t already know [e.g. side effect profiles]. Since I don’t augment, I’m dumb about this topic. So I thought as long as I’m in the neighborhood, I’d nose around a bit. First, about the strength of the effect. This graph shows the values from this study with the later Aripiprazole and Ziprasidone studies added in. It’s the Number Needed to Treat plotted against the total number of subjects in the study:

For reference, in a treatment situation like this, an NNT < 4.0 would be a respectable effect size [TCAs are in the range of 3]. An NNT > 10 would be considered no effect at all. As you can see, for the large studies [n > 100], they range from 5.0 to 15.5, average 8.9. That’s a weak effect for this situation. It means treating ~nine cases to beat placebo in one. 
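Since an NNT is just the reciprocal of the absolute difference in response rates, that average translates directly into a drug–placebo separation:

```python
# The NNT is the reciprocal of the absolute risk difference, so the average NNT
# of 8.9 from the large studies implies roughly an 11 percentage-point separation
# in response rates between adjunctive atypical and placebo.
avg_nnt = 8.9
risk_difference = 1 / avg_nnt
print(f"{risk_difference:.1%}")
```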

In addition, there’s the side effect burden to consider:
While the efficacy of the atypical agents for adjunctive therapy in major depressive disorder appears fairly well established, there are other considerations. The rate of discontinuation due to adverse events was significantly higher for the atypical group [9.1%] than the placebo group [2.3%]. The risk difference by meta-analysis was 0.06, with a number needed to harm of 17. While the discontinuation rates did not differ significantly among the agents, rates of specific side effects may be quite different. In addition, during continuing treatment there may be other secondary effects that in the aggregate affect tolerability and patient acceptance. The atypical agents also are associated with a variety of relatively serious adverse effects, such as metabolic syndrome, extrapyramidal symptoms, and rare but serious symptoms such as tardive dyskinesia and neuroleptic malignant syndrome. As a consequence, the risk-benefit ratio appears to be different from that of several alternative treatments for major depressive disorder.

And in today’s world, there are always drug prices to keep in mind [this is the current cost/pill from goodRx with their discount coupon paying cash @ Walmart] [users of insurance are paying more] [the dates may be slightly off because of my patent-exclusivity-dyslexia, but they’re close]:

Let me say that again – "this is the current cost/pill" and a low-ball version at that [for the Brand® drugs, this is today’s cost as I couldn’t find their cost when introduced]. There aren’t many people around to whom this wouldn’t matter – at least not where I live. The escalation in price for the Brand drugs is well beyond inflation. The cost for the generics looks to be moving skyward. Unless you’re using one of the generic drugs from the 1990s, you’re quickly into sticker-shock territory.

A couple of other things. I’m not excited about adding a withdrawal-prone drug to another withdrawal-prone drug. I’ve had several cases where getting somebody off the combination is something of a nightmare. There’s another thing I don’t really understand. In the past, we saw depression as time limited. If a person responded to antidepressants, we usually continued them for 6 months to prevent relapse. But except in cases of patients with recurrent episodes of depression, we stopped the drugs after a time. These days, people stay on medications forever. At first, I thought it was what their doctors were telling them, but the patients themselves are reluctant to stop. For some, it’s a misinterpretation of withdrawal. But for many, there’s no apparent reason. I presume they’ve got the idea that they are treating or protecting against some depressive disease. So I’ve taken to saying up front that the drugs aren’t for-ever, they’re for-a-while. And these days, those responding patients who just won’t stop are in my mind every time I write a prescription.

This has obviously turned into an exposition confirming my skepticism about augmenting antidepressants with Atypical Antipsychotics – and I’m still not done. I went looking for some scientific reasons to do it. All I found was speculative neurotransmitter double-talk. And I can’t think of a reason myself to prescribe an antipsychotic to people who aren’t psychotic. Why did someone do it in the first place? My best guess is that they tried Atypical Antipsychotics because they were there to try. Nothing much deeper than that. That’s not good enough for me with this kind of risk/benefit [and cost/benefit] profile…
Mickey @ 10:24 AM

creative funding III & some other things…

Posted on Wednesday 9 December 2015

I know creative funding I… and creative funding II… were probably as tedious to read as they were to write, but since they were not available on-line, I felt like I had to say what they were in order to talk about them. I’ve sort of slacked off of vetting Clinical Trials for a while. Total immersion in Study 329 for several years was enough for a while, but my habit has been to look at the big journals once a month to see what the RCT  fairy has put under the pillow this time. And when I looked at the December American Journal of Psychiatry, there were two RCTs, each with an accompanying editorial blurb:
EDITORIALS
  • Pediatric Psychopharmacology Trials: Beyond Efficacy – by Graham J. Emslie, M.D.
  • Adjunctive Ziprasidone in Major Depression and the Current Status of Adjunctive Atypical Antipsychotics – by J. Craig Nelson, M.D.
REVIEWS AND OVERVIEWS
  • Extended-Release Guanfacine for Hyperactivity in Children With Autism Spectrum Disorder – by Lawrence Scahill, M.S.N., Ph.D., et al.
  • Ziprasidone Augmentation of Escitalopram for Major Depressive Disorder: Efficacy Results From a Randomized, Double-Blind, Placebo-Controlled Study – by George I. Papakostas, M.D., et al.
My initial reaction was that these were advertisements. I object to advertisement RCTs being funded with public funds. I object to the American Journal of Psychiatry publishing advertisement RCTs. But among the things I learned from my Study 329 experience is that The Devil is in the Details is a very wise saying indeed. And when I looked at the details of these articles and got beyond the two over-riding complaints above, I found that they share a couple of things, but they are also very different. So, I’m going to live up to the boring in 1boringoldman and linger for a while to look at some of those details.

So first to Extended-Release Guanfacine for Hyperactivity in Children With Autism Spectrum Disorder – by Lawrence Scahill, M.S.N., Ph.D., et al. I spent four or five years in my retirement volunteering at a local Child and Adolescent agency. I saw a number of special needs kids, including the Autistic Spectrum variety. Many of them do have the Attention/Hyperactivity diathesis described here, and the question of medication was often in the wind. I had seen several kids on Tenex where it helped a lot and several where the side effect burden was prohibitive, so when I saw this article, I was drawn to it by interest. I wish they had done the study as proposed with a stimulant arm [see creative funding II…], but for unclear reasons, that didn’t happen.

As I said, I thought their reporting was balanced and useful. My only complaints were that they used Extended-Release Guanfacine [Intuniv] and that Dr. Scahill is an adviser to Shire – but that’s no small complaint. The introduction of Extended Release products as a way of extending patents and profits is well known [Paxil, Seroquel, etc] and I think it’s "dirty pool" unless they have solid evidence that it’s better than the original [and they never do]. Likewise, this is a publicly funded study keyed to a commercial product [when there’s no reason to pick the ER product]. Either let Shire foot the bill, or do a plain Guanfacine for Hyperactivity in Children With Autism Spectrum Disorder study. If the why? isn’t obvious, take a long look at this:

Walmart Pharmacy [with goodRx card]
drug   size   price   quantity   unit cost
Tenex [Brand] 1mg $79.77 30 $2.66
Guanfacine 1mg $4.00 30 $0.13
Intuniv® 2mg $200.23 30 $6.67
Guanfacine ER § 2mg $31.62 30 $1.05
§ not available when this study was conceived

Even if Shire had funded it, all Extended Release extend-the-patent studies should require the non-extended-release drug as a comparator. Except for that point [again, a big point], I think the study, particularly as originally described, was deserving of its public funding. As for Dr. Emslie’s editorial, he is a high volume KOL whose name is all over the industry-funded C&A RCTs. In the commentary, he encourages more independently funded RCTs to look for more than efficacy, actually a reasonable point.

On the other hand, I can’t find any redemption for Ziprasidone Augmentation of Escitalopram for Major Depressive Disorder: Efficacy Results From a Randomized, Double-Blind, Placebo-Controlled Study. In the NIMH write-up, they say…
If not found to be either safe or effective, the results of this proposed trial would also be highly informative given the significant proportion of TRD patients who, despite the relative paucity of data from independently-funded studies of rigorous design, are prescribed atypical antipsychotic agents off-label"…
… implying that this independently-funded study [NIMH rather than industry] would be more bias-free, more rigorous. Yet all the MD Authors have Pfizer connections [COI]. It was conducted by the MGH RCT Network [essentially a CRO]. The a priori outcome variables [from clinicaltrials.gov] were odd. The declared primary barely made significance [and I questioned the statistical analysis, which used baseline severity as a covariate]. And it failed one of two declared secondary outcome variables. The primary and both secondaries are all derived from the HAM-D, but give widely divergent signals. One would like to do an ANOVA on the raw scores, but they’re not available. In the write-up, other scales [QIDS-SR, CGI] were promoted to secondaries. So I can’t see that this study is any different from the usual industry-funded exper·o·mmercial other than that the NIMH picked up the tab. Actually, the commentary had the more balanced interpretation of the outcome:
The number needed to treat for ziprasidone was 7, which is similar to the other atypicals; however, the 95% confidence interval for the number needed to treat in this sample of 139 patients is very broad. As a result, we should be very cautious about efficacy comparisons. The relatively high rate of discontinuation due to adverse events is similar to the rate for the 300 mg/day dosage of extended-release quetiapine. As with quetiapine, somnolence is the most frequent side effect of ziprasidone. In this study, 34% of patients in the ziprasidone group experienced somnolence or fatigue… On other parameters, the rate of akathisia with ziprasidone was lower than with aripiprazole but greater than with quetiapine or olanzapine. Weight gain was less with ziprasidone than with olanzapine but greater than with aripiprazole or quetiapine. The present study suggests that adjunctive ziprasidone is effective in major depression, but it appears to have a relatively high rate of discontinuation due to adverse events and a high level of somnolence compared with other atypicals…

And as for medication cost:

Walmart [with goodRx card]
drug   size   price   quantity   unit cost
Geodon® 60mg $983.42 60 $16.39
Ziprasidone § 60mg $118.96 60 $  1.98
§ not available when this study was conceived

Interestingly, the first author on this paper [Papakostas] and the author of the commentary [Nelson] were co-authors of a 2009 meta-analysis of the Atypical Antipsychotics as augmentation for Treatment Resistant Depression [Atypical Antipsychotic Augmentation in Major Depressive Disorder: A Meta-Analysis of Placebo-Controlled Randomized Trials, full text]. This figure is adapted from that meta-analysis [the NNT from this Geodon® study was 6.8 and the Odds Ratio was 1.7]:

There have been three additional Aripiprazole Studies since that meta-analysis [I couldn’t extract the values from the complicated Fava et al crossover study]. In addition, there are two recent positive [essentially cloned] studies for brexpiprazole [an Abilify clone], recently approved by the FDA for augmentation in TRD [full text here and here]:

Additional Aripiprazole Studies
Study   Treatment   Control   NNT   OR   95% CI
Lenze et al. § 40/91 26/90 6.6 2.0 1.1-3.7
Kamijima et al 159/392 55/195 8.1 1.4
Fava et al essentially a negative study [crossover]
§ an independently funded study
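Those NNT and OR values can be approximately reproduced directly from the response counts – a quick sketch using the Lenze et al. row above [40/91 on active treatment vs. 26/90 on control; any remaining discrepancy is rounding or adjustment in the published figures]:

```python
# Reproduce the effect sizes from the Lenze et al. row of the table above:
# 40 of 91 responders on active treatment, 26 of 90 on control.
resp_t, n_t = 40, 91
resp_c, n_c = 26, 90

risk_t, risk_c = resp_t / n_t, resp_c / n_c

nnt = 1 / (risk_t - risk_c)                                         # the table lists 6.6
odds_ratio = (resp_t / (n_t - resp_t)) / (resp_c / (n_c - resp_c))  # the table lists ~2.0

print(f"NNT = {nnt:.1f}, OR = {odds_ratio:.2f}")
```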

So back to the main thread here. I was disappointed that the NIMH had funded the Scahill et al study of Guanfacine in Autism Spectrum Disorder with the in-patent Extended Release form of the drug rather than the generic. It struck me as an ad for Shire’s product. I’m currently of the mind that it’s time for us to stop allowing PHARMA to use our journal articles for any commercial purposes. The article, however, was useful so I would’ve settled for just a comment about the funding somewhere. But with the Geodon article, I can see no reason for the NIMH to fund that study, period. Its only purpose is to say, "Me too. My drug augments TRD too!" And with as much literature as we already have on this point, who actually needs to hear that? So I didn’t like the NIMH funding it nor the American Journal of Psychiatry publishing it. Send it to the Journal of Clinical Psychiatry [and let Pfizer pay for it]. We’ve had enough of that use of the scientific literature in psychiatry for this whole century already.

Parenthetically, these Effect Size numbers [Odds Ratios, NNT] are not that impressive, particularly for as much effort as has gone into generating this literature. I haven’t personally had a case yet where I was willing to add an Atypical Antipsychotic to an SSRI. I can conceive of such a case, but it just hasn’t shown up in my office. But I expect I’m on the conservative end of some spectrum…
Mickey @ 1:49 PM

creative funding II…

Posted on Tuesday 8 December 2015


Extended-Release Guanfacine for Hyperactivity in Children With Autism Spectrum Disorder
by Lawrence Scahill, M.S.N., Ph.D., James T. McCracken, M.D., Bryan H. King, M.D., Carol Rockhill, M.D., Bhavik Shah, M.D., Laura Politte, M.D., Roy Sanders, M.D., Mendy Minjarez, Ph.D., Jennifer Cowen, Ph.D., Jennifer Mullett, R.N., Chris Page, B.S., Denise Ward, M.A., Yanhong Deng, M.P.H., Sandra Loo, Ph.D., James Dziura, Ph.D., Christopher J. McDougle, M.D., and Research Units on Pediatric Psychopharmacology Autism Network
American Journal of Psychiatry. 2015 172[12]:1197-1206.

Objective: Hyperactivity, impulsiveness, and distractibility are common problems in children with autism spectrum disorder [ASD]. Extended-release guanfacine is approved for children with attention deficit hyperactivity disorder but not well studied in ASD.
Method: In a multisite, randomized clinical trial, extended-release guanfacine was compared with placebo in children with ASD accompanied by hyperactivity, impulsiveness, and distractibility.
Results: Sixty-two subjects [boys, N=53; girls, N=9; mean age=8.5 years SD=2.25] were randomly assigned to guanfacine [N=30] or placebo [N=32] for 8 weeks. The guanfacine group showed a 43.6% decline in scores on the Aberrant Behavior Checklist-hyperactivity subscale [least squares mean from 34.2 to 19.3] compared with a 13.2% decrease in the placebo group [least squares mean from 34.2 to 29.7; effect size=1.67]. The rate of positive response [much improved or very much improved on the Clinical Global Impression-Improvement scale] was 50% [15 of 30] for guanfacine compared with 9.4% [3 of 32] for placebo. A brief cognitive battery tapping working memory and motor planning showed no group differences before or after 8 weeks of treatment. The modal dose of guanfacine at week 8 was 3 mg/day [range: 1–4 mg/day], and the modal dose was 3 mg/day [range: 2–4 mg/day] for placebo. Four guanfacine-treated subjects [13.3%] and four placebo subjects [12.5%] exited the study before week 8. The most common adverse events included drowsiness, fatigue, and decreased appetite. There were no significant changes on ECG in either group. For subjects in the guanfacine group, blood pressure declined in the first 4 weeks, with return nearly to baseline by endpoint [week 8]. Pulse rate showed a similar pattern but remained lower than baseline at endpoint.
Conclusions: Extended-release guanfacine appears to be safe and effective for reducing hyperactivity, impulsiveness, and distractibility in children with ASD.
Intuniv® [Extended-Release Guanfacine] was approved by the FDA in September 2009. The FDA Orange Book gives exclusivity dates from November 2017 to May 2018 depending on which patent, but there are approved generic versions [in-patent cost][generic cost]. There’s much ado about suits and settlements – but I have severe patent-exclusivity-dyslexia, so it’s all way beyond my pay grade to fully follow. For my purposes, it was in-patent when this study was conceived [see clinicaltrials.gov: NCT01238575].

The triad of hyperactivity, impulsiveness, and distractibility is common in children with Autism [the precise incidence is unclear, as under the DSM-IV, diagnosing ADHD in ASD was discouraged]. In ASD, these symptoms respond to ADHD treatment with stimulants, though the side effect burden is greater than in ADHD alone. This group had done a small trial of Tenex® [Guanfacine] in ASD unresponsive to stimulants that was encouraging and decided to try Intuniv® in this trial. In the original clinicaltrials.gov version, there was an arm with Intuniv® plus Ritalin, but it disappeared somewhere along the way. My question – Why was this clinical trial funded with NIMH/NIH grant money rather than by Shire, the patent holder? – was nowhere answered that I could find.

Supported by NIMH grants to Dr. Scahill (R01MH083707), Dr. McDougle (RO1MH83739), Dr. McCracken (RO1MH083747), and Dr. King (R01MH86927); by a Yale Clinical and Transitional Science Award (UL1 RR024139) from the NIH National Center for Research Resources; and by Atlanta Clinical and Translational Science Institute, Emory University, which is supported by the NIH National Center for Advancing Translational Sciences under award UL1TR000454. Shire Pharmaceuticals provided active extended-release guanfacine and placebo.
Description [R01MH083707]: This is a multi-site collaborative R01 application from The Research Units on Pediatric Psychopharmacology [RUPP] Autism Network (Indiana University, Seattle Children’s Research Institute, UCLA, and Yale University). Autism is a major public health concern throughout the world. The cost of the disability is estimated to be more than $30 billion annually in the U.S. alone. Recent data indicate that as many as 50% of children with pervasive developmental disorders (PDDs) have moderate to severe problems of hyperactivity and impulsiveness. The impact of these symptoms may be profound and make the child less able to make use of educational and behavioral interventions. Consensus is lacking on how to treat children with PDD accompanied by hyperactivity. Compared to typically developing children with ADHD, children with PDD often show less benefit and greater side effect burden. Guanfacine is commonly used in this population, but poorly studied. Our pilot data indicate that guanfacine is a promising treatment for hyperactivity in children with PDD with a good tolerability profile. In addition, we have identified biomarkers (genetic and neurochemical) that may be associated with positive effects. For these reasons, we chose guanfacine for the proposed rigorous and possibly definitive study in this population. The study involves an 8-week randomized, double-blind, placebo-controlled trial of guanfacine for the 170 children (ages 5-13 years) with PDD accompanied by hyperactivity and impulsiveness. Subjects who show a positive response will be invited to enter an 8-week Extension phase (treatment mask will not be broken). The treatment blind will be broken for children who do not achieve a positive response in the Double-blind phase. Children who show no change or deterioration on placebo will be treated with guanfacine in an 8-week Open-label phase. Children who show a partial response to guanfacine will be randomly assigned to a 4-week add on trial of methylphenidate or placebo to evaluate the potential benefits of combined treatment. We expect that 50 subjects will enter this pilot trial. The role of gene variants and urinary adrenergic/noradrenergic measures as biomarkers in moderating response to guanfacine on primary efficacy measures and adverse effects will be explored.
Well, the methylphenidate extension part didn’t make it and the trial size shrank [to roughly a third of the proposed 170], but the results were pretty straightforward. Intuniv® was effective in treating the target symptoms, but carried an impressive side effect burden:
 

[adverse events table from the paper, truncated to show only significant AEs]
Unlike the study in creative funding I…, this trial seemed legit to me; however, I would still put it in the exper·o·mmercial category because:

  • the PI has a significant COI [see below];
  • Intuniv® is in-patent and projected to be there for a while;
  • it could’ve been done with a generic [generic Guanfacine];
  • it reads like something a pharmaceutical rep might hand me. In fact, that pharmaceutical rep might say, "This is an NIMH study," implying that made it more legitimate, and would emphasize that it was done with the Extended Release [AKA in-patent] Intuniv®.
In the companion editorial, Pediatric Psychopharmacology Trials: Beyond Efficacy, Graham J. Emslie makes the argument that "… as shown by the study conducted by Scahill et al., such independently funded studies can establish a standard in the field for assessment and treatment and can provide important clinical information beyond the primary outcome" – implying here and throughout the editorial that independently funded studies are more reliable and informative. While that’s certainly true, the solution isn’t for the NIMH to fund all the real studies. The solution is to only publish real studies in the first place, instead of flooding our literature with not-so-very-real studies. And that’s what data transparency, responsible editorial oversight, and peer review are supposed to be about. Sorry to drag this out, but it’s going to need one more post to complete the thought…

Conflict of Interest Statements
Dr. Scahill has served as a consultant for Bracket, Coronado, MedAdvante, Neuren, Roche, and Shire, and he has served on the speaker’s bureau for the Tourette Syndrome Association.
Dr. McCracken has served as a consultant for Dart Neuroscience and Roche; he has received research support from Roche; he has received study drug and matching placebo from Shire; and he has served on the speaker’s bureau for the Tourette Syndrome Association.
Dr. King has received research support from Roche and Seaside Therapeutics.
All other authors report no financial relationships with commercial interests.
Mickey @ 7:00 AM

December 7, 1941…

Posted on Monday 7 December 2015

Back then, I was just four days old, so of course I didn’t understand. But now aged 74 years? I still don’t – either the then version or the now version…
Mickey @ 4:48 PM

creative funding I…

Posted on Monday 7 December 2015

Pfizer’s Geodon® [Ziprasidone] was the fifth Atypical Antipsychotic approved [2001]. The Orange Book lists its patent exclusivity through 2019, though it appears that generics have been approved since 2012 that are now available [see current costs] [I’ve given up trying to parse patent/exclusivity information, so that’s all I know]. Independent of that confusion, there’s a Clinical Trial in the December American Journal of Psychiatry that deserves some attention – Geodon® augmentation of Lexapro® in Treatment Resistant Depression. Here’s the abstract:
Ziprasidone Augmentation of Escitalopram for Major Depressive Disorder: Efficacy Results From a Randomized, Double-Blind, Placebo-Controlled Study
by Papakostas GI, Fava M, Baer L, Swee MB, Jaeger A, Bobo WV, and Shelton RC.
American Journal of Psychiatry. 2015 172[12]:1251-1258.

OBJECTIVE: The authors sought to test the efficacy of adjunctive ziprasidone in adults with nonpsychotic unipolar major depression experiencing persistent symptoms after 8 weeks of open-label treatment with escitalopram.
METHOD: This was an 8-week, randomized, double-blind, parallel-group, placebo-controlled trial conducted at three academic medical centers. Participants were 139 outpatients with persistent symptoms of major depression after an 8-week open-label trial of escitalopram [phase 1], randomly assigned in a 1:1 ratio to receive adjunctive ziprasidone [escitalopram plus ziprasidone, N=71] or adjunctive placebo [escitalopram plus placebo, N=68], with 8 weekly follow-up assessments. The primary outcome measure was clinical response, defined as a reduction of at least 50% in score on the 17-item Hamilton Depression Rating Scale [HAM-D]. The Hamilton Anxiety Rating scale [HAM-A] and Visual Analog Scale for Pain were defined a priori as key secondary outcome measures.
RESULTS: Rates of clinical response [35.2% compared with 20.5%] and mean improvement in HAM-D total scores [-6.4 [SD=6.4] compared with -3.3 [SD=6.2]] were significantly greater for the escitalopram plus ziprasidone group. Several secondary measures of antidepressant efficacy also favored adjunctive ziprasidone. The escitalopram plus ziprasidone group also showed significantly greater improvement on HAM-A score but not on Visual Analog Scale for Pain score. Ten [14%] patients in the escitalopram plus ziprasidone group discontinued treatment because of intolerance, compared with none in the escitalopram plus placebo group.
CONCLUSIONS: Ziprasidone as an adjunct to escitalopram demonstrated antidepressant efficacy in adult patients with major depressive disorder experiencing persistent symptoms after 8 weeks of open-label treatment with escitalopram.
This article is typical of the indication creep that followed the initial approvals of the Atypical Antipsychotics as they flowed from the pipeline. First Schizophrenia for initial approval, then Mania, then a shot at monotherapy for Major Depressive Disorder or augmentation in cases that didn’t respond to SSRIs [AKA Treatment Resistant Depression] – accompanied by exper·o·mmercial articles designed to generate reprints for detail men to hand out on their visits. What’s different in this case is the funding:
Supported by the NIMH grant R01MH081235, Pfizer [which supplied blinded ziprasidone and placebo pills], and Forest Laboratories [which supplied escitalopram].
DESCRIPTION [provided by applicant]: Identifying novel treatments for resistant depression [TRD] is urgently needed to help improve the standard of care. To date, several preliminary studies have examined the use of atypical antipsychotic agents as adjuncts to standard antidepressants for TRD. However, the efficacy of this popular off-label treatment strategy has yet to be firmly established, while very little is known regarding the long-term effects [in terms of efficacy, tolerability and safety] of this treatment strategy. The atypical antipsychotic agent ziprasidone, in particular, may offer a unique opportunity to study as an adjunct for TRD for two principal reasons: I] its unique receptor-affinity profile, and, II] its favorable side-effect profile compared to the other agents in the class. Unfortunately, however, double-blind, placebo controlled trials of ziprasidone augmentation for TRD have not been conducted to date. If safe and effective as an antidepressant adjunct, ziprasidone would represent an attractive option for many of these patients who have had unsatisfactory initial response to standard treatment. If not found to be either safe or effective, the results of this proposed trial would also be highly informative given the significant proportion of TRD patients who, despite the relative paucity of data from independently-funded studies of rigorous design, are prescribed atypical antipsychotic agents off-label"…
The NIMH Grant was awarded to Richard Shelton [then at Vanderbilt] and ultimately came to $1,656,479 over the 5 years [2008-2012]. The study was carried out in conjunction with Mass General’s trial network, Vanderbilt, and the University of Alabama [Shelton apparently moved from Vanderbilt to the University of Alabama]. Other than the fact that it was financed by the NIMH, it was not unlike all the other indication creep exper·o·mmercials of the day – including a Conflict of Interest declaration [appended below] that should make anyone blush [note the presence of Pfizer in 4/4 MD Authors].

As for the study itself, I have to comment that finding the information wasn’t easy. In the clinicaltrials.gov write-up [NCT00633399], the primary outcome variable was a fall in the HAM-D score of 50% during the 8 weeks on Geodon. The secondary outcome variables were a HAM-D score < 8 at Week 8, and Comparing Scores on HAM-D Baseline Visit to Phase 2 Final Visit at Week 8. They reported the primary outcome variable significant at p = 0.04 and the p values for the secondary outcomes respectively as p = 0.32 and p = 0.04. Obviously without the data, these couldn’t be checked using fancy tests and software, but using a simple 2×2 test, I couldn’t confirm the primary p-value.
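For anyone who wants to repeat that simple check, here’s a hedged sketch – it assumes the reported response rates [35.2% of 71 and 20.5% of 68] correspond to 25 and 14 responders, an inference from the rounded percentages rather than counts from the raw data:

```python
# Recompute the primary response comparison from the reported rates:
# 35.2% of 71 [escitalopram + ziprasidone] vs. 20.5% of 68 [escitalopram + placebo].
# The counts 25 and 14 are inferred from those rounded percentages - an assumption.
from scipy.stats import chi2_contingency, fisher_exact

resp_drug, n_drug = 25, 71
resp_pbo,  n_pbo  = 14, 68

table = [[resp_drug, n_drug - resp_drug],
         [resp_pbo,  n_pbo - resp_pbo]]

_, p_pearson, _, _ = chi2_contingency(table, correction=False)   # Pearson's chi-square
_, p_yates,   _, _ = chi2_contingency(table)                     # with Yates's correction
_, p_fisher        = fisher_exact(table)                         # Fisher's exact

print(f"Pearson p = {p_pearson:.3f}, Yates p = {p_yates:.3f}, Fisher p = {p_fisher:.3f}")
```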

In their article, they mention other "secondary" outcome variables using the QIDS-SR and the HAM-A, all of which are positive. Another bit of questionable reporting had to do with weight gain. It’s not mentioned in the abstract, and in the body of the paper, it says:
There were no significant between-group differences in reported rates of sexual dysfunction or weight gain.
But then at the very [very] end of the paper, there was this:
Addendum: Additional analysis found a trend significance in QTc by 8.8 msec in the ziprasidone-tested group and a significant increase in weight gain of 3.5 kg in the ziprasidone-treated group versus 1.0 kg on placebo. Further information on these and other laboratory parameters is available from Dr. Papakostas [gpapakostas@partners.org].
I almost passed over this article, but a quick glance at the funding and COI declarations at the end gave me pause. "How in the world did they get the NIMH to fund this study?" was enough of a question to pique my interest. The argument [circa 2008] that "the relative paucity of data from independently-funded studies of rigorous design" justified NIMH funding was certainly unexpected [and after nosing around, unjustified]. There’s nothing rigorous about the COI statement, or the analyses, or the add-on post-hoc variables, or weight gain being an afterthought, or the minimalist results report in clinicaltrials.gov.  I’m not sure that anyone much cares about the topic itself. It’s in the genre of those sequencing, combining, augmenting, personalizing studies trying to eke more out of the SSRIs than they apparently have – and as such is anachronistic. But I for one care that this study is in the American Journal of Psychiatry in December of 2015. And I sure care that the NIMH footed the bill. There’s an accompanying editorial [Adjunctive Ziprasidone in Major Depression and the Current Status of Adjunctive Atypical Antipsychotics] as well [first page available here].

I’m going to stop here for the moment, but I’m not done. There’s another article I would consider an indication creep exper·o·mmercial in the same issue of the AJP that’s NIMH funded. So after a breather, I’ll briefly mention the other one [also with its accompanying editorial] and comment on the two simultaneously in the next post.

Conflict of Interest Statements
Dr. Papakostas has received research support from or served as a consultant or speaker for Abbott, AstraZeneca, Avanir, Brainsway, Bristol-Myers Squibb, Cephalon, Dey Pharma, Eli Lilly, Forest, Genentech, GlaxoSmithKline, Evotec AG, Lundbeck, Inflabloc, Janssen Global Services, Jazz Pharmaceuticals, Johnson & Johnson, NIMH, Novartis, One Carbon Therapeutics, Otsuka, Pamlab, Pfizer, Pierre Fabre, Ridge Diagnostics [formerly known as Precision Human Biolaboratories], Shire, Sunovion, Takeda, Theracos, Titan Pharmaceuticals, and Wyeth.
Dr. Fava has received research support from or served on advisory boards or as a consultant for Abbott, Affectis Pharmaceuticals, Alkermes, Amarin Pharma, American Cyanamid, Aspect Medical Systems, AstraZeneca, Auspex, Avanir, AXSOME Therapeutics, Bayer AG, Best Practice Project Management, BioMarin Pharmaceuticals, BioResearch, Biovail Corporation, BrainCells, Bristol-Myers Squibb, CeNeRx BioPharma, Cephalon, Cerecor, Clintara, CNS Response, Compellis Pharmaceuticals, Covance, Covidien, Cypress Pharmaceutical, Dainippon Sumitomo, DiagnoSearch Life Sciences, Dov Pharmaceuticals, Edgemont Pharmaceuticals, Eisai, Eli Lilly, EnVivo Pharmaceuticals, ePharmaSolutions, EPIX Pharmaceuticals, Euthymics Bioscience, Fabre-Kramer Pharmaceuticals, Forest Pharmaceuticals, Forum Pharmaceuticals, Ganeden Biotech, GenOmind, GlaxoSmithKline, Grunenthal GmbH, Harvard Clinical Research Institute, Hoffman-LaRoche, Icon Clinical Research, i3 Innovus/Ingenix, Janssen, Jazz Pharmaceuticals, Jed Foundation, Johnson & Johnson, Knoll Pharmaceuticals, Labopharm, Lichtwer Pharma GmbH, Lorex, Lundbeck, MedAvante, Merck, MSI Methylation Sciences, NARSAD, National Center for Complementary and Alternative Medicine, Naurex, Nestlé Health Sciences, Neuralstem, Neuronetics, NextWave Pharmaceuticals, NIDA, NIMH, Novartis AG, Nutrition 21, Orexigen Therapeutics, Organon, Otsuka, Pamlab, Pfizer, Pharmacia-Upjohn, Pharmaceutical Research Associates, PharmaStar, Pharmavite, PharmoRx Therapeutics, Photothera, Precision Human Biolaboratory, Prexa Pharmaceuticals, PPD, Puretech Ventures, PsychoGenics, Psylin Neurosciences, Reckitt Benckiser, Rexahn Pharmaceuticals, Ridge Diagnostics, Roche Pharmaceuticals, RCT Logic [formerly Clinical Trials Solutions], Sanofi-Aventis, Schering-Plough, Sepracor, Servier Laboratories, Shire, Solvay, Somaxon, Somerset, Stanley Medical Research Institute, Sunovion, Supernus, Synthelabo, Takeda, Tal Medical, Tetragenex, TransForm Pharmaceuticals, Transcept Pharmaceuticals, Vanda Pharmaceuticals, and Wyeth-Ayerst; he has served as a speaker or author for Adamed, Advanced Meeting Partners, American Psychiatric Association, American Society of Clinical Psychopharmacology, AstraZeneca, Belvoir Media Group, Boehringer Ingelheim GmbH, Bristol-Myers Squibb, Cephalon, CME Institute/Physicians Postgraduate Press, Eli Lilly, Forest Pharmaceuticals, GlaxoSmithKline, Imedex, MGH Psychiatry Academy/Primedia, MGH Psychiatry Academy/Reed Elsevier, Novartis, Organon, Pfizer, PharmaStar, United BioSource, and Wyeth-Ayerst Laboratories; he has equity holdings in Compellis and PsyBrain; he receives royalty, patent, or other income for patents for sequential parallel comparison design, licensed by MGH to Pharmaceutical Product Development, and has a patent application for a combination of ketamine plus scopolamine in major depressive disorder, licensed by MGH to Biohaven; he is a copyright holder for the MGH Cognitive and Physical Functioning Questionnaire, Sexual Functioning Inventory, Antidepressant Treatment Response Questionnaire, Discontinuation-Emergent Signs and Symptoms, Symptoms of Depression Questionnaire, and SAFER; and he receives royalties from Lippincott Williams & Wilkins, Wolkers Kluwer, and World Scientific Publishing.
Dr. Bobo has received research support from Cephalon, the Mayo Foundation, NARSAD, and NIMH and has served on speakers bureaus for Janssen and Pfizer.
Dr. Shelton has received research support from or served as a consultant for Alkermes, Assurex Health, Avanir Pharmaceuticals, Bristol-Myers Squibb, Cerecor, Clintara, Cyberonics, Elan, Forest Pharmaceuticals, Janssen, Medtronic, MSI Methylation Sciences, Naurex, Nestlé Health Sciences–Pamlab, Novartis, Otsuka, Pfizer, Ridge Diagnostics, Shire, and Takeda.
The other authors report no financial relationships with commercial interests.
Mickey @ 3:26 PM

write his way out…

Posted on Monday 7 December 2015


Martin Shkreli remains as unrepentant as ever
Pharmalot
December 3, 2015

In a brief but illuminating appearance at a health industry gathering on Thursday, the controversial chief executive of Turing Pharmaceuticals rejected fresh criticism that he went back on his promise to lower the price of the life-saving medicine Daraprim.

“Our shareholders expect us to make as much money as possible,” said a defiant Shkreli, who wore a hooded sweatshirt and sneakers in an auditorium that was otherwise filled with buttoned-down executives, physicians, and investors. “That’s the ugly, dirty truth.”

Shkreli was responding to questions about his recent decision to leave intact the $750-a-tablet list price for Daraprim. Turing bought the drug last summer and quickly jacked up the price from $13.55, a 5,000 percent increase that became a flashpoint in the debate over prescription drug costs…
While it is unlikely that Martin Shkreli will ever be canonized for his authenticity the way Jean Genet was by Jean-Paul Sartre, Shkreli’s “ugly, dirty truth” that “Our shareholders expect us to make as much money as possible” is refreshingly honest after decades of spin. The drug in question, Daraprim, is used almost exclusively to treat the toxoplasmosis infections seen in immunocompromised patients, primarily people with AIDS. And to my knowledge, there’s no justification for Shkreli’s breathtaking price hike other than the one he gives above.

After a life of crime, Jean Genet was languishing in a French prison when he began to write. He was ultimately released from prison, on the strength of his brilliance, through the efforts of the likes of Jean-Paul Sartre and Pablo Picasso. He went on to become a commanding presence in the literature of the 20th century, and he never returned to prison. I find myself thinking that we should afford Martin Shkreli the same opportunity. Lock him up, and see if he can write his way out…
Mickey @ 10:00 AM

reflections on original sin…

Posted on Sunday 6 December 2015

The pre-DSM-III version of psychiatry was fine with me. My internal medicine and research training had been heavily weighted toward the objective side, but in practice I had come to see that the subjective experience of illness [and life] mattered far more than I had appreciated. So the mix of objectivity and subjectivity in those days was exactly what I personally was looking for. And when the DSM-III and its changes came, I naively thought it was a call for more balance. It took me a while to get it that subjectivity [at least the version I was interested in] was being given its walking papers. Once I realized this was more a war zone with a long history than a matter of emphasis, I walked.

Looking back, psychiatry itself might’ve been better off aiming for that balance after all, but that’s speculation about things long past. What still haunts us, however, are some of the consequences of the decisions made in those days – specifically the decisions about classifying the depressions [see what price, reliability?…]. Whether an honest mistake or testimony to bias, those choices became a tragic flaw that’s still playing out thirty-five years later. Paradoxically, they crippled research into both Depression-the-Disease [Endogenous Depression, Melancholia, the Depressions of Manic Depressive Illness, etc.] and the much more common depression-as-a-symptom. And it became a categorical error that opened a wide portal for commerce-driven bull-shit malarkey like this…

    "Major depression is now recognized as a highly prevalent, chronic, recurrent, and disabling biological disorder with high rates of morbidity and mortality. Indeed, major depression, which is projected to be the second leading cause of disability worldwide by the year 2020, is associated with high rates of mortality secondary to suicide and to the now well-established increased risk of death due to comorbid medical disorders, such as myocardial infarction and stroke…"
This business about waiting room screening for depression seems to me to be just another domino in a long chain of ramifications of that original sin. I’ve never personally seen a case of melancholic depression that would make it through a doctor’s visit undetected. But if there were such a case, the person in need of screening would be the doctor. So early detection of Occult Melancholia is hardly a reason for waiting room screening for depression.

Of course it is within the purview of good medical practice for a physician to notice and comment on depressive mood states, much as it is to follow other signs of dysfunction like jaundice or dyspnea. But there’s no rationale that I can see for putting depression into the domain of legitimate Preventive Medicine. If Primary Care Physician visits are too short for the doctor to even notice depressive affect and simply say, "You seem down today. What’s up?", they’ve been shortened way too much.

Dr. Insel’s recurrent lament in recent years has been…

… after giving heart and soul to mental-health problems over the last 13 years working in government, I have not seen any improvement for either morbidity or mortality for serious mental illness – so I’m ready to try a different approach…
We all know how he means that, but there’s another obvious interpretation – waiting room screening for depression isn’t going to make any improvement for either morbidity or mortality for serious mental illness; all it will do is keep the cascade of falling dominoes in play. And one of those next dominoes is so-called Collaborative Care, yet another illusion in the string of illusions dating from the original sin…

Mickey @ 6:20 PM