feels like old times…

Posted on Saturday 25 April 2015

I don’t actually believe that most of the patients who showed up in my office looking for help were afflicted with a biologically determined brain disease, but from the first case I ever saw of melancholic depression to the present, I never doubted that biology was the major player in that condition. One of the paradoxical consequences of the biological revolution in psychiatry with the DSM-III in 1980 was that it shut down substantive research on Melancholia by eliminating it altogether [see political sabbatical… – Feb 2015]. In that post, I mention an editorial [Issues for DSM-5: Whither Melancholia? The Case for Its Classification as a Distinct Mood Disorder – Jan 2010], a plea to the DSM-5 Task Force to reinstate it written by an impressive cast of researchers. There was another excellent paper mentioned in my post that demonstrated separation among the depressive syndromes on clinical grounds [Cleaving depressive diseases from depressive disorders and non-clinical states – Jan 2015]. The first author of both papers was Gordon Parker, Director of the Black Dog Institute in New South Wales, Australia. And now we’re treated to another nuclear paper from that group about Melancholia, this time a neuroimaging study:
by Matthew P. Hyett, Michael J. Breakspear, Karl J. Friston, Christine C. Guo, and Gordon B. Parker
JAMA Psychiatry. 2015 72[4]:350-358.

IMPORTANCE Patients with melancholia report a distinct and intrusive dysphoric state during internally generated thought. Melancholia has long been considered to have a strong biological component, but evidence for its specific neurobiological origins is limited. The distinct neurocognitive, psychomotor, and mood disturbances observed in melancholia do, however, suggest aberrant coordination of frontal-subcortical circuitry, which may best be captured through analysis of complex brain networks.
OBJECTIVE To investigate the effective connectivity between spontaneous [resting-state] brain networks in melancholia, focusing on networks underlying attention and interoception.
DESIGN, SETTING, AND PARTICIPANTS We performed a cross-sectional, observational, resting-state functional magnetic resonance imaging study of 16 participants with melancholia, 16 with nonmelancholic depression, and 16 individuals serving as controls at a hospital-based research institute between August 30, 2010, and June 27, 2012. We identified 5 canonical resting-state networks [default mode, executive control, left and right frontoparietal attention, and bilateral anterior insula] and inferred spontaneous interactions among these networks using dynamic causal modeling.
MAIN OUTCOMES AND MEASURES Graph theoretic measures of brain connectivity, namely, in-degree and out-degree of each network and edge connectivity, between regions composed our principal between-group contrasts.
RESULTS Melancholia was characterized by a pervasive disconnection involving anterior insula and attentional networks compared with participants in the control [Mann-Whitney, 189.00; z = 2.38; P = .02] and nonmelancholic depressive [Mann-Whitney, 203.00; z = 2.93; P = .004] groups. Decreased effective connectivity between the right frontoparietal and insula networks was present in participants with melancholic depression compared with those with nonmelancholic depression [χ² = 8.13; P = .004]. Reduced effective connectivity between the insula and executive networks was found in individuals with melancholia compared with healthy controls [χ² = 8.96; P = .003].
CONCLUSIONS AND RELEVANCE We observed reduced effective connectivity in resting-state functional magnetic resonance imaging between key networks involved in attention and interoception in melancholia. We propose that these abnormalities underlie the impoverished variety and affective quality of internally generated thought in this disorder.
[This is a rough stab at introducing their methodology from one boring old psychoanalyst – so be gentle]. 25 years ago, Seiji Ogawa discovered that the MRI could detect the difference between oxygenated and deoxygenated hemoglobin. Since neuronal activity uses oxygen, it meant that we could actually see areas of increased neuronal activity in the brain – the blood-oxygen-level dependent contrast imaging or BOLD technique. We’ve all seen the pictures of areas in the brain lighting up in response to various stimuli. Attention then turned to the brain at rest, without stimuli [Resting state fMRI], where a number of consistent functional areas have been identified. These functional areas in the resting fMRI are connected – detected by studies of the time courses of simultaneous activity – and the strength [and direction] of these connections can be determined by some statistical/mathematical analyses beyond comprehension by most of the earth’s population – certainly mine. These connections [networks] may be visible [eg connectomes] or functional. Thus ends my stab at an intro to their methodology.
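To make that hand-waving a little more concrete, here is a toy sketch [mine, not theirs – the authors used dynamic causal modeling, which is far fancier, and the network names below are just labels for the illustration]: simulate the BOLD time courses of two resting-state networks that share a slow fluctuation, and measure their coupling with a simple correlation – the most basic form of functional connectivity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_volumes = 200  # hypothetical number of fMRI volumes [TRs] in a resting scan

# Two "network" time courses that share a slow underlying fluctuation plus their own noise
shared = np.convolve(rng.standard_normal(n_volumes), np.ones(10) / 10, mode="same")
insula_net = shared + 0.5 * rng.standard_normal(n_volumes)
attention_net = shared + 0.5 * rng.standard_normal(n_volumes)

# Simplest measure of functional connectivity: correlation of the two time courses
r = np.corrcoef(insula_net, attention_net)[0, 1]
print(f"functional connectivity [correlation]: {r:.2f}")
```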

What Matthew Hyett et al did was to run resting fMRIs on 16 subjects each, from three cohorts – normals, non-melancholic depressed people, and a group with Melancholia. They identified five of the functional areas and studied their connectivity in each group. What they found was a clear separation among the three groups – documenting "reduced effective connectivity in resting-state functional magnetic resonance imaging between key networks involved in attention and interoception in melancholia."

This article begins with:
Despite advances in pursuing the neurobiological causes of clinical depressive conditions, the literature is characterized by divergent findings, likely reflecting their heterogeneity and varying causes. One such condition, melancholia [previously termed endogenous depression], has long held consistent ascriptions: being genetically weighted, having prominent biological perturbations, evidencing overrepresented clinical features, and showing a greater response to physical therapies than to psychotherapy. As psychiatry strives toward a diagnostic nosology based on genetic, behavioral, and neurobiological criteria, melancholia arguably represents a canonical test case…
Melancholia was already a canonical test case way back when I started psychiatry forty years ago, being hotly pursued by biological researchers hunting for biomarkers [eg Bernard Carroll’s Dexamethasone Suppression Test, David Kupfer’s REM Sleep Latency]. I now know some of the story about how and why this time-honored disease disappeared from the psychiatric diagnostic manual, but knowing the story doesn’t make it any less senseless than it was in 1980 [or in 1987, 1994, and 2013 when the diagnostic system was revised]. So in the intervening years, mainstream psychiatry has pretended that all depression fits Major Depressive Disorder with some kind of biologic substrate, while ignoring the one diagnosis [Melancholia] where that’s most likely to actually be true. So this group studies the obvious candidate. They go on to lay out their hypothesis:
Historical failure to identify specific neurobiological correlates of melancholia is consistent with recent advances in cognitive neuroscience that regard the brain as a complex network, whereby psychiatric conditions reflect changes in functional integration rather than perturbations within an isolated region. Large-scale brain networks supporting mood regulation, interoception, and cognition [eg, concentration and attention] are thus likely candidates for furthering understanding of melancholia’s neurobiology…
Then they present their data in a matter-of-fact way, and conclude:
Conclusions: We position the neurobiological features of the spontaneous dysphoria of melancholia as a weakening of interactions among regions that subserve attention, mood regulation, and interoception. Computational accounts of internally generated thought highlight the importance of a critical homeostatic balance between stable self-regulation and dynamic instability. We propose that our findings reflect a loss of this optimal balance, undermining the adaptive role of interoception.
I obviously really liked this research. In my mind, it is a definitive affirmation that Melancholia is a distinct clinical entity with demonstrable brain abnormalities. The article goes about "furthering [the] understanding of melancholia’s neurobiology." The neuroscience parallels our traditional clinical diagnosis. As a matter of fact, it probably wouldn’t be funded by our NIMH these days: it’s an uncommon illness that doesn’t fit the RDoC [more like DSM-5 Major Depressive Disorder with Melancholic Features] and it doesn’t Translate into anything. Even though I don’t really understand the vicissitudes of all of the analytic techniques, they’re clear enough to follow and one of the authors [Karl J. Friston] was involved in their development. Likewise, Dr. Gordon Parker’s work has been consistently solid science. And then there’s this:
    Conflict of Interest Disclosures: None reported.
    Funding/Support: This study was supported by grants from the National Health and Medical Research Council of Australia [program grants 510135 and 1037196] [Mr Hyett and Drs Breakspear and Parker], Queensland Health [Mr Hyett and Dr Breakspear], and the Wellcome Trust [Dr Friston].
it feels like old times…
Mickey @ 4:19 PM

the spice must flow…

Posted on Wednesday 22 April 2015

And so another atypical antipsychotic may be dripping from the FDA Approval pipeline…
Lundbeck Press Release
September 24, 2014

H. Lundbeck A/S [Lundbeck] and Otsuka Pharmaceutical Co., Ltd. [Otsuka] today announced that the US Food and Drug Administration [FDA] has determined that the New Drug Application [NDA] for brexpiprazole for monotherapy in adult patients with schizophrenia and for adjunctive treatment of major depressive disorder [MDD] in adult patients is sufficiently complete to allow for a substantive review and the NDA is considered filed as of 9 September 2014 [60 days after submission]. The PDUFA date is July 11, 2015…
PsychiatricNews
April 17, 2015

New findings from a phase 3 clinical trial, published today in AJP in Advance, suggest that a recently developed antipsychotic may prove to be one of the next treatments for schizophrenia. Researchers from the Department of Psychiatry at Hofstra North Shore-LIJ School of Medicine conducted a randomized, double-blind, placebo-controlled study with 636 patients with schizophrenia to investigate the efficacy, safety, and tolerability of brexpiprazole—a  serotonin-dopamine activity modulator that acts as a partial agonist at serotonin 5-HT1A receptors and dopamine D2 receptors, while antagonizing serotonin 5-HT2A receptors and noradrenaline alpha receptors…

“It is important for clinicians and patients to have a range of treatment options to manage symptoms effectively and safely … as response to therapy can vary greatly from individual to individual and from one medication to the next.” Correll informed Psychiatric News that the Food and Drug Administration will make its final decision about the approval of brexpiprazole for the treatment of schizophrenia as well as major depressive disorder in July…
I thought it might be interesting in the light of all of our enthusiasm for Data Transparency and other reforms to take a look at the Clinical Trials for Brexpiprazole and see how they stack up with some of the suggested changes:
Clinicaltrials.gov has 30 entries for Brexpiprazole, 25 being Phase 3. They’re obviously aiming for the adjunctive-antidepressant market [which must be much larger than the schizophrenia market]. They don’t get high marks in the Clinical Trial Results Database in that none of the completed studies have results posted [at least 8 completed over a year ago]. Here are the two recently published studies in Schizophrenia which I presume are the ones being submitted to the FDA for approval [each study listed 60 locations as clinical sites!]:
by Correll CU, Skuban A, Ouyang J, Hobart M, Pfister S, McQuade RD, Nyilas M, Carson WH, Sanchez R, and Eriksson H.
American Journal of Psychiatry. 2015 Apr 16 [Epub ahead of print]

OBJECTIVE: The efficacy, safety, and tolerability of brexpiprazole and placebo were compared in adults with acute schizophrenia.
METHOD: This was a multicenter, randomized, double-blind, placebo-controlled study. Patients with schizophrenia experiencing an acute exacerbation were randomly assigned to daily brexpiprazole at a dosage of 0.25, 2, or 4 mg or placebo [1:2:2:2] for 6 weeks. Outcomes included change from baseline to week 6 in Positive and Negative Syndrome Scale [PANSS] total score [primary endpoint measure], Clinical Global Impressions Scale [CGI] severity score [key secondary endpoint measure], and other efficacy and tolerability measures.
RESULTS: The baseline overall mean PANSS total score was 95.2, and the CGI severity score was 4.9. Study completion rates were 62.2%, 68.1%, and 67.2% for patients in the 0.25-, 2-, and 4-mg brexpiprazole groups, respectively, versus 59.2% in the placebo group. At week 6, compared with placebo, brexpiprazole dosages of 2 and 4 mg produced statistically significantly greater reductions in PANSS total score [treatment differences: -8.72 and -7.64, respectively] and CGI severity score [treatment differences: -0.33 and -0.38]. The most common treatment-emergent adverse event for brexpiprazole was akathisia [2 mg: 4.4%; 4 mg: 7.2%; placebo: 2.2%]. Weight gain with brexpiprazole was moderate [1.45 and 1.28 kg for 2 and 4 mg, respectively, versus 0.42 kg for placebo at week 6]. There were no clinically or statistically significant changes from baseline in lipid and glucose levels and extrapyramidal symptom ratings.
CONCLUSIONS: Brexpiprazole at dosages of 2 and 4 mg/day demonstrated statistically significant efficacy compared with placebo and good tolerability for patients with an acute schizophrenia exacerbation.

First off, in the PsychiatricNews article above, the comment "Researchers from the Department of" isn’t accurate. It should read "A researcher from the Department of" since all the other authors on the by-line are employees of either Lundbeck or Otsuka [in blue]. The PHARMA companies seem to have dropped the multiple-academics-on-the-by-line method and settled for only one. Dr. Christoph Correll made the following COI declaration:
Dr. Correll has been a consultant and/or advisor to or has received honoraria from Actelion, Alexza, American Academy of Child and Adolescent Psychiatry, Bristol-Myers Squibb, Cephalon, Eli Lilly, Genentech, Gerson Lehrman Group, IntraCellular Therapies, Lundbeck, Medavante, Medscape, Merck, National Institute of Mental Health, Janssen/J&J, Otsuka, Pfizer, ProPhase, Roche, Sunovion, Takeda, Teva, and Vanda; he has received grant support from Bristol-Myers Squibb, Feinstein Institute for Medical Research, Janssen/J&J, National Institute of Mental Health, NARSAD, and Otsuka; and he has been a Data Safety Monitoring Board member for Cephalon, Eli Lilly, Janssen, Lundbeck, Pfizer, Takeda, and Teva.
And did he [they] have any help writing the paper?
Funded by Otsuka Pharmaceutical Development & Commercialization, Inc., and H. Lundbeck A/S. Jennifer Stewart, M.Sc. [QXV Communications, Macclesfield, U.K.] provided writing support that was funded by Otsuka Pharmaceutical Development & Commercialization, Inc., and H. Lundbeck A/S.
So far, we’re not getting a lot of reform-is-in-the-air vibes. How about the other paper that’s part of the FDA NDA submission?
by Kane JM, Skuban, Ouyang, Hobart, Pfister, McQuade, Nyilas, Carson, Sanchez, and Eriksson.
Schizophrenia Research. 2015 Feb 12. [Epub ahead of print]

The objective of this study was to evaluate the efficacy, safety and tolerability of brexpiprazole versus placebo in adults with acute schizophrenia. This was a 6-week, multicenter, placebo-controlled double-blind phase 3 study. Patients with acute schizophrenia were randomized to brexpiprazole 1, 2 or 4mg, or placebo [2:3:3:3] once daily. The primary endpoint was change from baseline to week 6 in Positive and Negative Syndrome Scale [PANSS] total score; the key secondary endpoint was Clinical Global Impressions-Severity [CGI-S] at week 6. Brexpiprazole 4mg showed statistically significant improvement versus placebo [treatment difference: -6.47, p=0.0022] for the primary endpoint. Improvement compared with placebo was also seen for the key secondary endpoint [treatment difference: -0.38, p=0.0015], and on multiple secondary efficacy outcomes. Brexpiprazole 1 and 2mg also showed numerical improvements versus placebo, although p>0.05. The most common treatment-emergent adverse events were headache, insomnia and agitation; incidences of akathisia were lower in the brexpiprazole treatment groups [4.2%-6.5%] versus placebo [7.1%]. Brexpiprazole treatment was associated with moderate weight gain at week 6 [1.23-1.89kg versus 0.35kg for placebo]; there were no clinically relevant changes in laboratory parameters and vital signs. In conclusion, brexpiprazole 4mg is an efficacious and well-tolerated treatment for acute schizophrenia in adults… BEACON trial.


Again, there is only one academic author, Dr. John Kane, with the following COI declaration:
Dr Kane has been a consultant for Amgen, Alkermes, Bristol-Meyers Squibb, Eli Lilly, EnVivo Pharmaceuticals [Forum] Genentech, H. Lundbeck. Intracellular Therapeutics, Janssen Pharmaceutica, Johnson and Johnson, Merck, Novartis, Otsuka, Pierre Fabre, Proteus, Reviva, Roche and Sunovion. Dr Kane has been on the Speakers Bureaus for Bristol-Meyers Squibb, Eli Lilly, Janssen, Genentech and Otsuka, and is a shareholder in MedAvante, Inc.
Writing help?
Ruth Steer, PhD, [QXV Communications, Macclesfield, UK] provided writing support, which was funded by Otsuka Pharmaceutical Development & Commercialization, Inc. [Princeton, USA] and H. Lundbeck A/S [Valby, Denmark].
Does QXV Communications sound familiar? And speaking of sounding familiar, in case you didn’t follow the author links, Dr. John Kane is Chairman of Psychiatry at Hofstra where Dr. Christoph Correll is on the faculty [it was one·stop shopping for both ghost·writers and KOLs]. Both authors are at the Feinstein Institute for Medical Research. Both articles say:
From the Zucker Hillside Hospital, Glen Oaks, N.Y.; Otsuka Pharmaceutical Development & Commercialization, Princeton, N.J.; and H. Lundbeck A/S, Valby, Copenhagen, Denmark.
I know this is getting long, but let me add one other tidbit. The top article referenced a 2013 meta-analysis in the Lancet [Comparative efficacy and tolerability of 15 antipsychotic drugs in schizophrenia: a multiple-treatments meta-analysis] of the Effect Sizes and Discontinuation Rates of the Atypical Antipsychotics which I liked, by authors who do Cochrane meta-analyses. I lifted one of their figures and added the results from these Brexpiprazole studies for comparison. The Effect Size [the drug-placebo difference divided by the pooled standard deviation] is an index of the magnitude of the drug’s effect. It was included in the AJP article but not the Schizophrenia Research article [I didn’t calculate the 95% Confidence Limits for the Brexpiprazole studies]. It shows a problem in replicability [2mg]:
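For a sense of scale, here’s my own back-of-the-envelope sketch [the pooled standard deviation is an assumed number for illustration, not one taken from either trial]: a standardized effect size is just the placebo-corrected difference divided by the pooled standard deviation.

```python
# Back-of-the-envelope sketch: standardized effect size = treatment difference / pooled SD.
# The -8.72 PANSS points is the 2 mg vs placebo difference quoted in the AJP abstract above;
# the pooled SD of 20 is an assumption for illustration only.
def effect_size(treatment_difference: float, pooled_sd: float) -> float:
    return treatment_difference / pooled_sd

print(round(effect_size(-8.72, 20.0), 2))  # ~ -0.44
```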
First, I apologize for going on and on about this. When I started, I had no intention of writing a blog·epic. I didn’t know that the academic authors were from the same department; or that the ghost·writers were from the same firm; or that they were ignoring the Clinicaltrials.gov Results Database; or that there were 60 sites for each study. I guess they are taking advantage of the formula used by their predecessors to maximize their impact on launch with slick writing and optimized choices in data presentation. If you read the actual papers, they even read like advertisements. They’re obviously going to be handout reprints for the drug reps to give out to primary care physicians. They are in some of the top journals, quickly published after submission, and presented together as a poster at the American College of Neuropsychopharmacology [ACNP] meeting in December.

I just wrote a three part series on the Academic·Industrial·Complex that was as overly detailed as this post – which is yet another example of what that phrase means. I think I’ve been chasing down the details, looking in vain for something that breaks out of the mold of scientific enterprise being used as a commercially driven advertising platform. And what I find is that the further I look, the worse it gets. I was tempted to say that Brexpiprazole is a weak sister Atypical Antipsychotic with a low Adverse Event profile, but I’m not even sure that’s defensible with just two six week trials. And unmentioned here, they used some idiosyncratic analytic techniques that were unfamiliar to me, but I just didn’t have the libido to look into them further [because there was no primary data to test them with].

This is pitiful, in my humble opinion. We’ve clearly still got a long way to go to reclaim our academic literature, in spite of the good news coming from various directions…

Afterthought: Looking at all of those sites on the two clinical trials, it is highly unlikely that any author ever even met a single subject in either trial…

Another Afterthought: They could call Brexpiprazole son of Abilify
 
Mickey @ 10:05 PM

Academic·Industrial·Complex III…

Posted on Monday 20 April 2015

In Academic·Industrial·Complex II… I was implying that the $700,000 spent on the study of Seroquel XR® in Borderline patients was a waste of money, but another way of looking at that is that their various trials of Seroquel XR® were overall worth it, because Seroquel® stayed in the Blockbuster range even after the patent expired thanks to Seroquel XR®. That’s an easy mistake to make – to forget how much money is involved in the commerce of these drugs. Abilify® is currently involved in a similar patent extension scheme, and Johanna Ryan has a first-person blog on RxISK about the pressure to take Abilify®. Even in this era when the pharmaceutical industry has all but abandoned CNS drug development and their patent protections are disappearing, we still feel the rumblings of the now-past golden era of psychopharmacology. The KOL class in academic psychiatry is still playing the only tune they’ve known. And that far right point on the Abilify® chart [on the right, from drugs.com] represents the number one best selling drug in the US. The Academic·Industrial·Complex is still very much a major player.

When a new Chairman arrived at my university in the early 1980s carrying the banner of the new psychiatry, he traveled around meeting the faculty as is customary. He didn’t ask questions, but instead talked about the need for the faculty to do research. When it came my turn, I told him I’d looked forward to finally having some time to finish several papers. I was about to tell him what they were, but he interrupted and talked about psycho-immunology, because I had been an Immunologist. I’d never even heard the word. It took me several months to figure out that what he really wanted us to do were Clinical Trials for drug companies. And if you read Dr. Schulz’s interview twenty-five years later describing his vision of his Minnesota department, the topic had changed very little:
"How many people had been imaged at the CMRR by faculty in the Department of Psychiatry in 1999? Zero. The Department of Psychiatry now images more people in CMRR than all other departments on the campus combined…"

"I developed an idea to focus our academics on imaging, genetics, and clinical trial research, and the rationale for that being we had one of the greatest imaging centers and that’s what the NIH wanted to do. Genetics were emerging. There hadn’t been a person imaged in CMRR, there hadn’t been a blood drawn for genotyping in the department, and I said – We have to get going in these areas. Both of those areas, I thought, could interact with doing very good clinical trial studies…"

"So, like Dr. [David] Mrazek at Mayo says – "Let’s draw your blood and find out what’s going on in your serotonin or your transporter genes or your metabolic genes and that’ll help us with your treatment. The same is also true for imaging, where we’re now imaging at baseline, giving them medicine, imaging after the study is over, and seeing where does the drug act in the brain…"

"We are now 25th out of 135 medical schools, and that puts us in a good position, but we still have a ways to go. We went from 39th in 1999 to 25th now, and we tripled our NIH funds and expanded the breadth and scope of what we’re able to do. But I’m hoping we’ll be able …  to take it up a notch, and move up maybe another ten spots…" 
Dr. Schulz doesn’t mention industry-funded Clinical Trials, but there were plenty along the way. I’m aware that such metrics are common among academic administrators, but in academic psychiatry as I’ve known it since the 1980s, this was most all of what there was. When the next new Chairman showed up [Charlie Nemeroff], they sent around his CV. It was a bound book with hundreds of publications. I’d never seen anything quite like it. Research became the most important thing – not for what it discovered, but for how much was being done. Schulz’s string of Clinical Trials of Atypicals in Borderline Personality Disorder mentioned earlier wasn’t really about treatment – it was about the income stream [A Drug Trial’s Frayed Promise]. That happens to be the way things work in an Academic·Industrial·Complex. And as a matter of fact, we heard about imaging and genetic studies even back in the early 1980s [that’s why it has been so hard for me to accept that PHARMA was just opportunizing on the 1980 revolution in psychiatry rather than having been a prime mover]. But whichever the case, in an Academic·Industrial·Complex, Academia prospers, but only if Industry prospers.

Here’s the part where I’m speculating [but for what it’s worth, I happen to believe this speculation]. In the course of too many training programs and too many fellowships, I’ve worked in or been affiliated with a number of clinical research units and their staff. They have been the most cracker-jack support staff I’ve ever had the pleasure to work with – the crème de la crème, inspired by the possibility of advancing medical science and care – loyal to the researchers as part of the team. But in an Academic·Industrial·Complex, they’re grinding out Clinical Trials that don’t have such lofty goals – "just making a buck." The researchers are less available. For example, Dr. Olson saw Dan Markingson just "four or five times" during his six-month stint in the CAFE trial, and I don’t know that Dr. Schulz ever met him. My speculation is that the "just making a buck" attitude filters down the ranks – manifesting as a disillusioned "just going through the motions" mentality.

But whether my speculations are true or not, Drs. Elliot and Turner have opened a window into the world of psychiatry’s Clinical Trials and the whole Academic·Industrial·Complex that goes much further than the tragic case of Dan Markingson, than the Department of Psychiatry, than the State of Minnesota. We need even more than just the Data Transparency we’re seeking. We need Transparency for the whole Clinical Trial process. While I’m personally sensitive to the problem of financing psychiatric education [an area I left reluctantly], no matter what problems the Academic·Industrial·Complex tried to fix, it didn’t justify the obvious lapses in medical ethics that arose out of the solution. Faustian contracts with the Devil rarely do…
Mickey @ 5:59 AM

Academic·Industrial·Complex II…

Posted on Sunday 19 April 2015

Several years ago, I ran across a business magazine cover with a high level AstraZeneca executive being praised for figuring out how to extend the patent life of their blockbuster Seroquel® by getting Seroquel XR® approved. I could never find it again. I still feel the loss because it proved that Seroquel XR® was just a marketing ploy…

New York Times
By KATIE THOMAS
APRIL 17, 2015

Last fall, an article in the American Journal of Psychiatry caught the attention of specialists who treat borderline personality disorder, an intractable condition for which no approved drug treatment exists. The article seemed to offer a glimmer of hope: The antipsychotic drug Seroquel XR® reduced some of the disorder’s worst symptoms in a significant number of patients…
I have to interrupt Katie for just a moment. I wrote about that article in July 2014 when it was published on-line. There was no glimmer of hope seen here. I thought it was shameful [see an anachronism…]. Seroquel XR® was just a patent-life extender. Borderline Personality Disorder is a very complex topic and the notion that there will ever be a "right drug" for the condition is about as likely as pigs flying. When I reviewed the article, I located a 2004 Medscape C.M.E. where Dr. Schulz had presented industry-funded studies he’d done with Risperdal®, Zyprexa®, and Seroquel® in patients with Borderline Personality – each one showing some effect; each one written up as if it mattered. I actually saw them as playing the companies off against each other.
In the realm of clinical trials, however, reality is sometimes far messier than the tidy summaries in medical journals. A closer look at the Seroquel XR® study shows just how complicated things can get when a clinical trial involves psychiatric disorders and has its roots in intersecting and sometimes competing interests: a drug company looking to hold onto sales of a best-selling drug, a prominent academic with strong ties to the pharmaceutical industry and a university under fire for failing to protect human study subjects…
This article goes on to mention any number of problems with this study. Dr. Schulz had big  COI problems having been paid over $100K by the sponsor for various things in recent years. 100% of subjects screened were accepted [unheard of, implying that they were taking all comers to boost their sagging enrollment]. But the top story was that two of the subjects turned out to be sex-offenders living at a halfway house who had signed onto the study for the money. And one of them had spiked the breakfast oatmeal at their halfway house with his Seroquel XR®, a stunt that got him sent back to prison.

In Academic·Industrial·Complex I… we read Dr. Schulz’s plan to finance his department by doing «very good clinical trials» and I suggested two criteria for what that might mean: 1. trials that were scientifically justified rather than simply commercials [experimercials] and 2. trials that were well executed. Well this whole series of trials on Atypical Antipsychotics generally flunks number 1. and really outdoes itself with Seroquel XR® [after already studying plain old Seroquel®]. And it flunks number 2. in that the execution here is embarrassingly sloppy. This was a Clinical Trial apparently randomizing all comers, with some outrageous antics along the way, that took five years, cost $700K, involved 100 subjects, and was dolled up and published in the American Journal of Psychiatry as if it said something that mattered. The published results actually made very little sense [see an anachronism…].

Bioethicist Carl Elliot’s main focus is on the conduct of the Clinical Trials in the University of Minnesota’s Department of Psychiatry, and he’s collected an impressive array of examples of dysfunction at multiple levels. The recent investigations by an independent panel appointed by the Association for the Accreditation of Human Research Protection Programs and a report from the Office of the Legislative Auditor not only agreed with Elliot but amplified his observations. The result was a suspension of all Clinical Trials pending further investigation. Dr. Schulz, Chairman of the Department of Psychiatry, stepped down. There is a consensus that the Board of Regents and the University President have little understanding of how to do their jobs. Whistle-blowing nurse Niki Gjere describes an atmosphere of fear, but I was more impressed that everyone from the bottom up seems to be just going through the motions, doing nothing wrong, clueless about the uproar.

While I’m obviously relieved that something is finally going to be done about the situation with the University of Minnesota Clinical Trial program, I don’t think that we’ve yet landed on the most basic ethical dimension of this story. The studies on the table right now are a head-to-head comparison of three [in-patent at the time] Atypical Antipsychotics in First Episode Schizophrenic cases [CAFE], and a string of Clinical Trials of Atypical Antipsychotics in patients with Borderline Personality Disorder as the drugs arrived on the market. All are studies funded by industry and, I believe, investigator-initiated. I question whether there was any scientific justification for any of these trials at the time they were being conducted. Instead, I would propose that all of them were studies probing for some commercial advantage to the pharmaceutical company paying for the trial, and that a strong motive in proposing these trials in the first place was to finance a Department of Psychiatry.

Every Clinical Trial is human experimentation. We’ve decided that human experimentation is allowed if there have been careful, limited trials to assess safety and the potential outcome of the trial will be of wide benefit to others. There is no reasonable argument that suggests a time-release version of Seroquel® will be more effective than the already tested regular Seroquel® in any situation. For that matter, there’s nothing that I know about Borderline Personality Disorder that suggests any medication will be of lasting benefit. Here’s Dr. Schulz’s slide of the drugs that have already been tried:

And as to CAFE – is it reasonable to recruit patients with a First Episode of Psychosis [likely the biggest event in their lives] into a study that commits them to a year of medication [blinded] in order to use the outcome in some future commercial or sales pitch? I think not.

And so to the heart of the Academic·Industrial·Complex and the ethical dilemma that it contains. The academic part of the pairing agrees to test the drugs [or at least sign off as a tester], to serve as a ticket into the peer-reviewed academic journals, and often to promote the drug as a speaker or in a CME presentation, in return for the funds desperately needed to finance Medical Education. The payoff for industry is obvious. And human experimentation in the form of Clinical Trials sits at the interface between the academy and industry. Is the Borderline or First Episode Psychotic patient told that the study is being done for its commercial value? Is some of the lackluster and torpid performance of the Clinical Trial staff because they know, either directly or intuitively, that a given study is not really for the advancement of science, but rather for sales and marketing – like in the email mentioned earlier in Academic·Industrial·Complex I…
… R&D is no longer responsible for Seroquel® research – it is now the responsibility of Sales and Marketing. So preclinical research studies aimed at mode of action, although very interesting to both of us, do not translate to marketable messages that will impact sales [at least this is what my commercial colleagues say]. On the other hand, clinical studies that extend the indications for Seroquel® can directly impact sales. With limited budgets, funding of clinical studies will therefore come first.
Mickey @ 9:01 PM

Academic·Industrial·Complex I…

Posted on Saturday 18 April 2015

Now that the case of Dan Markingson’s 2004 suicide is no longer in the realm of unacknowledged tragedy, moved into the public domain by two recent reports vilifying the University of Minnesota’s Administration and Clinical Trials program for its handling of the case throughout, we can begin to think about what it all means. The implication in many of the discussions of the case is that the Department of Psychiatry’s Clinical Trials program is more revenue generator than scientific enterprise, and that Markingson’s case is just the tip of an iceberg large enough to sink the Titanic. And speaking of icebergs, there’s the broader question of the involvement of many other Departments of Psychiatry in churning out industry-sponsored [and industry-controlled] studies of commercial products with results tipped towards the needs of the sponsors’ products.

I recall being told as a Medicine Resident in Memphis Tennessee in the late 1960s that the "Biological" Psychiatry programs were the ones "along the Mississippi River." Well the Mississippi River arises in Minnesota and ends in New Orleans Louisiana, and looking along its course, that generalization seems to be true. In 1980, the year of the DSM-III "revolution," Paula Clayton moved from Washington University in Saint Louis [the epicenter of "Biological" Psychiatry] to become Chairman at UMn. She was replaced in 1999 by Charles Schulz, who stayed until last week. By the time he arrived, psychiatry in the US was "Biological" in general. There’s a long interview of Dr. Schulz done in 2010 as part of an oral history project [Interview with S. Charles Schulz, M.D.] where he lays out the plan he had when he left Case Western Reserve in 1999 to become Chairman at Minnesota – Neuroimaging [Minnesota had a state-of-the-art Neuroimaging Center], Genetics, and Drug Trials. In fact, in laying out his plan, he’s defining the story of academic psychiatry in the modern era – research as fund-raising:
In the number of visits I made to come here, learn about what was here, talk with Dean Michael, administrators, faculty, Apostolos, etc., I developed an idea to focus our academics on imaging, genetics, and clinical trial research, and the rationale for that being we had one of the greatest imaging centers and that’s what the NIH wanted to do. Genetics were emerging. There hadn’t been a person imaged in CMRR, there hadn’t been a blood drawn for genotyping in the department, and I said – We have to get going in these areas. Both of those areas, I thought, could interact with doing very good clinical trial studies, and I felt a university department was very important for the faculty involved in what was the latest things happening. My experience at Case, especially working with Herb [Meltzer] and with Joe Calabrese, were that the participation in the clinical trials of new compounds led our faculty to be expert in them, basically the day they were approved. I thought also that by doing very good clinical trials, we could use those results in an interface with imaging and in genetics. So, like Dr. [David] Mrazek at Mayo says – "Let’s draw your blood and find out what’s going on in your serotonin or your transporter genes or your metabolic genes and that’ll help us with your treatment. The same is also true for imaging, where we’re now imaging at baseline, giving them medicine, imaging after the study is over, and seeing where does the drug act in the brain. Or, can we tell who’s going to respond and who’s not going to respond. So the interplay was actually more important than the three items of imaging, genes, and clinical trials.
I don’t particularly like that kind of talk either, but having been an administrator in an academic department being pushed [kicking and screaming] in that academic direction, I know the pressures on a medical department chair to raise money to run a department. Academic Medical Education is funded by… the Academics. That’s just life. If you do heart surgery, the money pours in from the faculty’s work. If you do psychiatry, welcome to the Sahara Desert – there’s no free lunch. So I can live with Schulz’s profiteering, or at least understand it. But the fulcrum is on the two meanings of the words «very good clinical trials».
  1. One meaning of «very good clinical trials» is "trials that add something to the body of medical/psychiatric knowledge" [rather than experimercials being financed by the drug companies for commercial reasons]. In the modern era, that’s not such an easy task. Remember, this is AstraZeneca, maker of Seroquel®, and we recall this famous memo [November 1997] making it very clear what kinds of things AstraZeneca might be willing to fund:
    click image for the source
  2. The second meaning of «very good clinical trials» is "doing the clinical trials well" – ethically, humanely, carefully, honestly, etc. And the whole reason we’re talking about this is the Dan Markingson case. It’s unlikely anyone reading this is going to say that this case is an example of doing anything well…
So here is the core structure and the core problem of the Academic·Industrial·Complex that grew in psychiatry in the 1970s, 1980s, and 1990s as the funding from government sources dried up [along with the private sources]. All of Academic Medicine has to struggle with these forces, but psychiatry was hit with a perfect storm because of the decimating impact of Managed Care on psychiatry specifically thrown into the mix. In this example, uncovered by the suicide of Dan Markingson, we have a window into how widely its impact was felt. As a matter of fact, in the interview, Dr. Schulz is asked about Conflicts of Interest [here]. Here are a few quotes from his response:
A difficult and challenging topic, and I think I mentioned earlier that when I came and drew up a strategic plan for our department and worked with Dr. Cerra at the AHC [Academic Health Center] and with Al Michael, I strongly felt that a Department of Psychiatry should be able to be involved in clinical trials to advance treatment and to be very familiar with medicines as they came out. I felt that patients need to be well cared for and highly respected, so with Dr. Cerra and Dr. Michael, I was able to get as part of my package to come here the resources to build the ambulatory research center. And this is in our professional building, it’s 5,000 square feet of space devoted to clinical trials, to assessment for imaging studies, etc. The interview rooms are nice, they have a window, etc. There’s a wonderful reception area and each person who comes to be in the research, is greeted by a person. They have a little area for children to sit if they are going to be in the clinical trial, etc. Exam rooms, conference rooms, the whole thing.
On arrival, he had his Clinical Trial program set up and ready to go. He goes on to talk about Conflicts of Interest in other areas of Medicine – Orthopedic Surgery [artificial joints], Cardiology [stents], etc. But then the wheels start coming off the wagon, or at least start getting very loose…
… but psychiatry, boy … front page in the New York Times for even the president of the American Psychiatric Association – grilling, nasty things. His university investigated him thoroughly, and he had done nothing wrong. As a matter of fact, what he had done is he had done exactly what the president of Stanford had asked him to do, Dr. Schatzberg. So, in my impression, looking at this, there probably are some instances of the high-flying industry utilizing academia in ways that was not fully appropriate, that the new guidelines for managing conflicts of interest and improving transparency are very, very appropriate in my mind. I think they, if followed in the way that I think our university has put forward and the way Dr. Cerra has expressed his wish, he, from the first day I met him to now, has said – I want us to be able to collaborate with industry, whether it is pharmaceutical or device, or whatever; but, let’s make it real clear what we’re doing. I think we can move ahead with this. And our conflict of interest policy here at the University is really pretty much that way. Not pretty much, very much – it’s actually as strict as any conflict of interest in the US.
…and then he starts talking about the Markingson case and the impact of UMn Bioethicist Carl Elliot‘s campaign on his grand scheme. I’ll leave that to you to read. His reference to Dr. Schatzberg is to Senator Grassley’s investigation of academic psychiatrists not reporting personal income from drug companies. He’s claiming that academic psychiatry was unfairly persecuted [many of us think otherwise – not only were they not falsely persecuted, we wish they’d been truly prosecuted].

But that’s enough for one post. Carl Elliot and colleague Leigh Turner have felt all along that this case was indicative of a problem that had wide implications. After all, Carl’s first article was entitled "The Deadly Corruption of Clinical Trials", not "The Deadly Corruption of a single Clinical Trial". And yesterday’s New York Times has another case from a Clinical Trial done in Minnesota [A Drug Trial’s Frayed Promise], obviously pursuing the idea that the Markingson case was just a loud example of something generally rotten in the state of Minnesota.

There’s little question about what the next post is going to address…
Mickey @ 4:50 PM

down came the rain…

Posted on Saturday 18 April 2015

Note: The rain lowers the "count" but the more sensitive instruments [like the nose] still register its presence…


[cut short by external events…]
Mickey @ 8:00 AM

at face value…

Posted on Friday 17 April 2015

After graduating from high school in 1960, we scattered to the winds – at least I did. It would be decades before I caught up with the people from those days and heard the stories of how those friends from my earlier life negotiated the tumultuous years that followed. Howard and I reconnected 40 years later on an email thread someone started in the lead-in to a class reunion. A classmate started sending around those hyper-patriotic emails with animated flags that followed 9/11. The invasion of Iraq was in the air. Howard and I were among the few who opposed it, hardly the majority opinion, and in the next few years, we reconnected in person.

[an early sketch of how it works…] I don’t need to tell his story because it’s where everything else that matters is – on the Internet [see Howard Morland on Wikipedia]. The short version is that he left high school pursuing his dream of being a pilot. After a few years as an Air Force pilot flying transports back and forth to Viet Nam, he cut that career short, and by 1979 he was a full-time anti-nuclear activist and the independent journalist who wrote the article "The H-Bomb Secret: How We Got It, Why We’re Telling It." There was a big First Amendment court battle as the Department of Energy tried to halt publication. It ultimately failed and the article was published in November 1979.

The article is true to the title – how to get an atomic bomb [fission] to ignite a fusion reaction in the hydrogen fuel in the millionth of a second before everything scatters to the wind – no small feat. The article makes it very clear [it’s a "bank shot"]. Equally interesting is getting the story with no access to classified information [see his slide show]. If it’s not obvious why I’m telling his story, it has to do with the logic used to justify keeping secrets. And I remembered some of the things Howard said when he first told me this story as I was reading Rationale for WHO’s New Position Calling for Prompt Reporting and Public Disclosure of Interventional Clinical Trial Results calling for Data Transparency.

He said that nuclear proliferation was too important to rely on keeping the process of making an H-Bomb a secret. First, others can figure it out if they try hard enough and ask enough questions, and he proved his point by doing just that. But if you really want to deal with nuclear proliferation, you have to do something out front and effective – like control and monitor access to raw materials with international accords and strict surveillance. At first, I wondered why Howard’s story was in my mind as it seemed very different from the point about Data Transparency in Clinical Trials. But then I realized that the only reason to keep the H-Bomb secret is that I have it and they don’t, and that gives me an unfair power advantage. It’s an argument that sits in front of a huge conflict of interest. If it’s not a secret, then I also have to engage in and abide by negotiated restraints just like everyone else. But if it’s just my secret, I get to call all the shots.

This analogy first came to my mind as I was reading the PLoS article. When it came back to mind again as I was reading Ed Silverman’s version on Pharmalot yesterday, I decided my unconscious was trying to tell me something:
Pharmalot: WSJ
By Ed Silverman
04/15/2015

But some academics were more circumspect. Harlan Krumholz, a Yale University cardiologist who runs the Yale Open Data Access Project, says the WHO statement “is a great addition to the chorus and worthy of note, but not game changing.” An issue, he says, is that the WHO is “articulating an aspiration. But they might suggest a consequence to non-compliance.”

Similarly, Peter Doshi, an assistant professor of pharmaceutical health services at the University of Maryland and an associate editor at BMJ, a medical journal that has pushed for greater disclosure, says the WHO statement does not call for internal documents such as clinical study reports to be made available. And he notes it does not pressure regulators to release trial data in their possession.

“Instead, it calls for researchers to publish in peer-reviewed journals and upload results into trial registries within 24 and 12 months, respectively. While these are important goals, they only move us so far in terms of clinical trial data transparency, as both journals and registries generally only report aggregate and limited amounts and types of clinical trial data.”

In an accompanying essay in PLOS Medicine, Ben Goldacre, who co-founded the AllTrials campaign, writes that audits are needed to ensure that transparency is achieved. “Previous calls for registration were not enough to fix publication bias, and positive statements require practical implementation,” he writes.
Harlan Krumholz, Peter Doshi, and Ben Goldacre are some of the heavy lifters in the push for Data Transparency, and their reservations reminded me of the kind of talk we’re hearing in negotiations about the nuclear deal with Iran these days – people with experience making sure that this isn’t just more empty rhetoric, but is instead effective and enforceable, substantial policy. There have been waves of reform in the Clinical Trial arena since it became a requirement in 1962 with the Kefauver-Harris Amendment. Industry has either ignored them [clinicaltrials.gov] or found loopholes to drive trucks through. The notion that the raw data from Clinical Trials are proprietary and can be kept secret, leaving doctors and patients with only the cosmetic published articles created by the companies themselves and placed in the academic literature by company-affiliated doctors, is an absurd state of affairs [as is keeping the results of FDA Inspections of trial sites secret – see without firm action…]. So as practicing physicians, we’ve had to learn about the adverse effects by causing them in our own patients rather than being alerted in advance. The same is true with efficacy. The secrecy is unfounded on face value, and the reasons given – protecting subject confidentiality and commercially confidential information – sound like the productions of an early adolescent playing with his newly acquired skills in rationalizing away self-serving motives. Like Howard’s point about the H-Bomb Secret, the consequences are just too damned dangerous to play around with…
hat tip to Howard Morland…
Mickey @ 7:45 AM

now, back to basics!…

Posted on Wednesday 15 April 2015

Note for the media
14 April 2015

14 APRIL 2015 | GENEVA | WHO today issued a public statement calling for the disclosure of results from clinical trials for medical products, whatever the result. The move aims to ensure that decisions related to the safety and efficacy of vaccines, drugs and medical devices for use by populations are supported by the best available evidence.

“Our intention is to promote the sharing of scientific knowledge in order to advance public health,” said Dr Marie-Paule Kieny, WHO Assistant Director-General for Health Systems and Innovation. “It underpins the principal goal of medical research: to serve the betterment of humanity.”

“Failure to publicly disclose trial results engenders misinformation, leading to skewed priorities for both R&D and public health interventions,” said Dr Kieny. “It creates indirect costs for public and private entities, including patients themselves, who pay for suboptimal or harmful treatments.”

Unreported trials lead to misinformation

For example, in a study that analysed reporting from large clinical trials (more than 500 participants) registered on ClinicalTrials.gov and completed by 2009, 23% had no results reported. These unreported trials included nearly 300 000 participants. Among clinical trials of vaccines against 5 diseases registered in a variety of databases between 2006-2012, only 29% had been published in a peer-reviewed journal by the WHO recommended deadline of 24 months following study completion.

“We need the collaboration of all these actors to enforce transparency in their jurisdictions in order to increase the benefits and decrease the risks for patients, clinical trial volunteers and the general public,” concluded Dr Kieny.

International Clinical Trials Registry Platform furthers transparency

WHO’s call for disclosure includes older unreported clinical trials, the results of which may still have an important bearing on scientific research today. WHO also reaffirms the need for all clinical trials to be registered on a WHO primary clinical trial registry so that they can be accessible through the International Clinical Trials Registry platform. This will ensure transparency as to which clinical trials have occurred, and allow verification of compliance with public disclosure requirements.

The recent WHO move expands on a 2005 call for all clinical trials to be registered, and the subsequent establishment of the International Clinical Trials Registry Platform. This registry platform regularly imports trial records from ClinicalTrials.gov, ISRCTN registry, EU Clinical Trials Register, Australia New Zealand Clinical Trial Registry, Pan African Clinical Trial Registry and Clinical Trial Registries from China, India, Brazil, Republic of Korea, Cuba, Germany, Iran, Japan, Sri Lanka, The Netherlands and Thailand.
hat tip to pharmagossip… 
And there’s more!
More coming after a thorough reading…

From AllTrials: "You can read more about the WHO’s statement and responses to it on the AllTrials website, in Science and The Verge and from Reuters."
Mickey @ 12:00 PM

one of the many ways…

Posted on Wednesday 15 April 2015

the psychologist
British Psychological Society
17th March 2015

A lively debate was held at London’s Senate House yesterday with panellists from neuroscience and psychology discussing the question: is science broken? If so, how can we fix it? The discussion covered the replication crisis along with areas of concern regarding statistics and larger, more general problems…

Neuroskeptic, a Neuroscience, Psychology and Psychiatry researcher and blogger, gave a personal perspective on problems with science, speaking of the events which led him to lose faith in the research in the field. He said that as undergraduate students people are taught to do statistics in a very particular way, but once a person begins PhD research things change vastly. After gathering some results for his PhD research, Neuroskeptic found he had one significant result out of seven tasks performed by his participants. He said: ‘I thought back to my undergraduate days and thought “what if you do a Bonferroni correction across all the tasks?”. I got the idea that I’d suggest this to my supervisor but don’t think I ever did, I realised that just wasn’t how it was done. I was very surprised by this. I learned as an undergraduate you do a Bonferroni correction if you have multiple tasks. I started to wonder if we aren’t doing this who else isn’t doing it? I began to lose faith in research in the field.’

Neuroskeptic said he wondered whether there was a good reason that multiple comparisons correction was not used. He added: ‘I still don’t think there’s a good reason we can’t do that. We have come to the tacit decision to accept methods which we would never teach undergraduates were a statistically good idea, but we decide that we’re happy to do them ourselves. That’s how I got on the road to blogging about these issues.’

Neuroskeptic is something of a blogger’s blogger, maintaining his anonymity on his personal blog for years, and now as a blogger for Discover Magazine. He writes about a variety of topics, and they’re usually interesting whether they’re in your field or not. His nom de plume, Neuroskeptic, was a good choice. He’s not a "neuro-cynic," but rather a person who doesn’t believe in absolute truth, just like his namesake, Pyrrho of Elis, the founder of Skepticism in ancient Greece [as opposed to Dogmatism] [see my old Greek…]. Neuroskeptic brings his skeptical attitude to everything he writes. I linked to his blog post about this topic, Is Science Broken?, in case you’re interested, but I wanted to talk about the specific example he’s using here, the Bonferroni Correction, as it relates to Clinical Trials.

My own biostatistics and research experience was in another medical field over forty years ago, so when I began to look at the math of clinical trials, it was familiar but only just. Besides coursework, my only hands-on experience was using ANOVA to partition the variance of interactions of effects, so there was much to learn. But I do have a Bonferroni Correction story to tell from those days. During an Immunology fellowship, my clinical work was with a Rheumatology Section. Rheumatology is like Psychiatry in that there are many conditions where the etiology [cause] was and is unknown. In the 1960s, Rheumatologists were collecting large databases on every patient they saw to develop criteria for diagnoses [sound familiar?]. Databases were new, as were the mainframe computers that held the data entered with punch cards and stored on tapes. Statistics were run with home-grown Fortran programs that ran overnight [if you were lucky]. Bill Gates hadn’t yet made it to high school. Excel was something you did in sports. And correcting for multiple variables was something kind of new.

One afternoon, the statistician and clinical staff blocked out a two hour conference to show us the results from the clinical database they were collecting [with great pride]. It was one of those after-lunch conferences where the eyelids are hard to hold open. Towards the end, the statistician showed us a thick stack of computer printouts with all the significant findings – disorders across the top, parameters down the side, cells filled with probabilities. Then he said something like, "Of course we had to correct the statistics for multiple measurements." I don’t remember the term Bonferroni Correction, but I do remember what he did. He divided the significance threshold by the number of things measured, and then he showed a slide of what significance remained from that thick stack of printouts. It evaporated, and left a table that fit on one readable slide. I was pretty impressed, but he seemed deflated watching his fine p-values go up in smoke.

The logic behind correcting for multiple variables is pretty sensible, and simple. If you do an experiment and measure one outcome variable, p<0.05 means there’s less than a 1 in 20 chance that the result happened by chance. However, if you measure 20 outcome variables, on average one of them will come out p<0.05 by chance alone [the odds of at least one false positive are about 64%]. The Bonferroni Correction divides the significance threshold by the number of outcome variables – with 20 of them, you’d need p<0.0025 [0.05 ÷ 20] to claim the same level of significance; with 10, you would need p<0.005 [0.05 ÷ 10] [a small worked sketch of this arithmetic follows the quote below]. Piece of cake? Well, Neuroskeptic is absolutely right. Many [if not most] Clinical Trials just ignore this correction altogether. Others try to explain not using it, like this from Morrison et al [Cognitive therapy for people with a schizophrenia spectrum diagnosis not taking antipsychotic medication: an exploratory trial reported in slim pickings… recently]:
"Dependent t tests were used to analyse changes in outcome measures for the normally distributed variables; non-parametric analyses using Wilcoxon’s signed ranks test were used for skewed data. Tests of significance were two-tailed, but no correction was made for multiple comparisons given that this was a feasibility study in which we were less concerned about type 1 error."
[Note: A type I error is a false positive] 
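To make that arithmetic concrete, here’s a minimal sketch in Python. The p-values are invented for illustration – the only real numbers are the conventional 0.05 threshold and the count of outcome measures:

```python
# Minimal sketch of the Bonferroni arithmetic described above.
# The p-values below are invented for illustration only.

alpha = 0.05                                  # conventional significance threshold
p_values = [0.048, 0.031, 0.012, 0.0019, 0.20,
            0.65, 0.007, 0.044, 0.09, 0.51]   # 10 hypothetical outcome measures

m = len(p_values)                             # number of comparisons
bonferroni_threshold = alpha / m              # 0.05 / 10 = 0.005

print(f"corrected threshold: {bonferroni_threshold}")
for p in p_values:
    nominal = "significant" if p < alpha else "not significant"
    corrected = "significant" if p < bonferroni_threshold else "not significant"
    print(f"p = {p:<7} uncorrected: {nominal:<16} after Bonferroni: {corrected}")
```

Six of the ten invented p-values clear the nominal 0.05 bar; only one survives the corrected threshold – which is pretty much what that afternoon conference slide looked like.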
Well, we have a really impressive false positive problem, that’s for sure. The Bonferroni Correction is very tough on results – a harsh test. There have been other methods developed that are gentler [the Holm step-down procedure, for example – see the short sketch at the end of this post], but they’re not used very much either. Another point: the method of correction, like any piece of the analysis, should be declared in the a priori protocol, and that’s rarely done. The reason is obvious. Post hoc, knowing the results, you can pick your correction method [if you even pick one] to fit how you want things to come out. So Neuroskeptic is absolutely correct: this is an almost institutionalized problem in Clinical Trials – just one of the many ways people get control of what their data says – like correction for attrition, or study design, or choice of statistical tests, etc. It’s why Data Transparency is so vital – so you can look under the places where deceitful analysis can change things but remain hidden…
and break science…
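One of those gentler alternatives mentioned above is the Holm step-down procedure, which rejects everything Bonferroni rejects and sometimes a bit more. Here’s a minimal sketch, again with invented p-values, just to show the mechanics – not a claim about how any particular trial was or should be analyzed:

```python
# Minimal sketch of the Holm step-down procedure, a "gentler" alternative
# to plain Bonferroni. The p-values are invented for illustration only.

def holm_rejections(p_values, alpha=0.05):
    """Return indices of hypotheses rejected by the Holm step-down procedure."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # indices, ascending p
    rejected = []
    for rank, idx in enumerate(order):
        # The k-th smallest p-value is compared against alpha / (m - k + 1),
        # so the threshold relaxes as you step down the sorted list.
        if p_values[idx] <= alpha / (m - rank):
            rejected.append(idx)
        else:
            break   # first failure stops the procedure; earlier rejections stand
    return rejected

p_values = [0.0052, 0.03, 0.004, 0.20, 0.65, 0.09, 0.51, 0.07, 0.11, 0.33]
print("Holm rejects:      ", sorted(holm_rejections(p_values)))
print("Bonferroni rejects:", [i for i, p in enumerate(p_values)
                              if p < 0.05 / len(p_values)])
```

With these made-up numbers, Holm keeps one result that plain Bonferroni throws out – gentler, but like any correction it only means something if it was declared in the protocol before the results were in.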
Mickey @ 10:00 AM

GSK – churning presidents…

Posted on Tuesday 14 April 2015

I find this next chapter of GSK USA surrealistic. In January 2011, I read Deirdre Connelly’s speech with amazement. She was the new president of GSK USA, and she was going to shut down the program where Drug Rep’s bonuses were based on the number of prescriptions their territory’s doctors wrote [“so what went wrong?”…]…
NPR
by Scott Hensley
January 24, 2011

The same day the feds said they recovered $4 billion related to health care fraud in the government’s last fiscal year, a leading drug company exec acknowledged the industry had gone off course. In a speech Monday to hundreds of people who make their living keeping drugmakers on the straight and narrow, GlaxoSmithKline’s U.S. President Deirdre Connelly noted the huge fines paid in recent years by drugmakers and the low esteem consumers have for the companies these days. Then she asked the obvious question. “So what went wrong?”

In the speech, whose prepared text we got from the company, Connelly said, “The answer, I believe, is that, in some ways, our industry lost its way.” Nobody I know would argue with that. She faulted a “competitive selling model” that works fine for autos or candy, but just isn’t right for medicines that can save people’s lives… Her prescription for change included focusing on patients’ needs and operating with greater transparency. One specific change worth noting: Glaxo won’t be paying drug reps bonuses based on increases in prescriptions in their territories anymore. Instead, Connelly said, Glaxo will base the compensation on specific scientific and business knowledge, customer feedback and performance of the business unit they’re part of. You can read the full text of Connelly’s speech here
But alas, she only lasted four years. Progress like that has a way of melting [see A Glaxo Exec Retires Amid Sales Slump and Reps Question a Bonus Program] [see also no good deed goes unpunished… ]. She was replaced after a sales slump and a bad rap from her sales force, who wanted to go back to the incentive plan. I said then [February 20, 2015]:
I may yet be naive and gullible, but I still think Deirdre Connelly was legit. If that’s true, her retirement brings up something fundamental. Can a publicly owned pharmaceutical company survive in a world where a drug isn’t hyped up and sold with aggressive and deceitful marketing? where it’s instead allowed to stand on its actual worth based on efficacy and safety? or will the push for profits, and the dreams of becoming a blockbuster drive the show? Whatever the case, let’s hope that her influence and policies have at least some sticking power.
Ask, and you shall receive:
Pharmalot:WSJ
By Ed Silverman
April 13, 2015

One month after assuming responsibility for running GlaxoSmithKline pharmaceutical operations in North America, Jack Bailey is looking at changing a compensation program for sales reps that has generated complaints and frustration inside the drug maker. A task force is “in the process of looking at more comprehensive options to simplify” what Glaxo calls its Patient First program, which was begun four years ago in a bid to overhaul marketing practices aimed at physicians. The review was disclosed to Glaxo employees in an April 1 memo, which we have obtained and which was first reported by Bloomberg News.

As we have noted previously, the program was seen as ground breaking because reps are not paid bonuses based on the volume of prescriptions written by doctors. Instead, bonuses have been based on product knowledge, business acumen and understanding needs of patients and physicians, which were assessed in written tests and simulations conducted by third parties. The hope has been to remove the pressure reps may feel to persuade doctors to write prescriptions, which federal authorities charged sometimes led to inappropriate marketing practices. One year after Glaxo began the program, the drug maker paid a $3 billion fine to the U.S. government to settle allegations of improper drug marketing to physicians, among other things, and signed a corporate integrity agreement that requires Glaxo to monitor its marketing practices.

Sales reps, however, have complained the program contributed to sales declines, because there was less incentive to promote prescriptions, even though managers sometimes emphasized prescriptions, anyway. Glaxo recently began cutting $1.6 billion in expenses annually through 2017 amid falling sales in its key respiratory franchise and an overall sales drop in the U.S. market. Two months ago, Bailey replaced Deirdre Connelly, who launched the program in 2011. The drug maker said she retired. For now, the program is being tweaked. To appease its reps, Glaxo will no longer base their compensation on so-called simulations in which actual sales calls are simulated and observed as a way to determine bonuses, according to the memo from Bailey. However, a Glaxo spokeswoman writes us that the drug maker remains “resolutely committed to our commercial model.”

Glaxo, she writes, “has led the industry by changing the way we reward our sales representatives.  Our approach is based on the core principle that we will not link the compensation of our individual sales representatives with the number of prescriptions generated. Throughout the program we’ve looked for ways to simplify the process but the fundamentals remain the same. This approach has now been rolled out in 150 countries where we operate.” One Glaxo sales rep viewed the move cautiously. “We remain hopeful about the upcoming changes yet we remain realistic and guarded as well,” says this rep, who asked not to be identified. “We are under our corporate integrity agreement until 2019 and there isn’t a whole lot the company can do to make major changes to incentivize the reps the way they should be measured on performance.”
Here’s Jack Bailey [last year]:

It’s not at all clear what the new approach will be, or what the Sales Reps’ bonuses will actually be based on. It is, however, clear that the Sales Reps rule! The only other thing clear to me is that their last two choices for president look like people retiring from the cast of The Young and the Restless.
Mickey @ 10:01 PM