rebranding…

Posted on Friday 2 September 2016

I’m still in the throes of recovering my technology from our lightning strike, but I think I can hack out an all-text post on my ancient notebook about an unusual article in the BMJ this week. It’s the disclosures about the author, Alastair Matheson, that caught my attention:
Contributors and sources: The author worked from 1994 to 2012 as an independent consultant in the pharmaceutical, marketing and medical communications sectors, with extensive experience of market analysis, strategic communications planning, publications planning and medical writing. He has also studied the interface between academia and commerce from the standpoints of anthropology and publications policy.
Competing interests: I have read and understood BMJ policy on declaration of interests and declare the following interests: Between 1994 and 2012 most of my income came from consultancy and writing services provided to pharmaceutical corporations, either directly or via marketing agencies. In 2015 I acted as a paid expert witness on behalf of the plaintiffs in a US federal legal action against a drug company. I have valued friendships and acquaintances in the corporate pharmaceutical and marketing sectors. I consider myself to be a supporter of innovative pharmaceutical research, but a critic of some forms of marketing. I was solely responsible for all aspects of conception, design, and writing of this article and am the guarantor.
Pretty intriguing. Since this BMJ article is behind a pay wall, here are a couple of things he’s written that are full text online – an exchange with Howard Brody on Hooked: Ethics, Medicine, and Pharma and a PLoS article.
The BMJ article is worth trying to get hold of for a full reading. Here are a few quotes from the specific part I wanted to mention but there’s a lot more of substance in this short piece:
by Alastair Matheson
British Medical Journal 2016 354:i4578.

During the past decade, the pharmaceutical publications trade has campaigned to persuade medicine, journals, ethicists, and the media that it is opposed to ghostwriting. Yet industry practices have changed little, and commercial drafting of clinical trial reports, consensus statements, and reviews that are authored by recruited academics remains routine. Here, I show that industry’s opposition is based on a redefinition of the term ghostwriting that obscures the continued, widespread use of the practice as originally defined in medical journal literature…

Outside medicine, a ghostwriter is “a person whose job it is to write material for someone else who is the named author.” Drug companies and their agencies use writers to develop articles for academic authors, and in the 1990s and early 2000s this was commonly understood as ghostwriting, both by journal editors and the pharmaceutical publications trade. But today, the trade promotes an alternative definition, whereby writers are not classified as ghosts if they are named in a footnote…

The Global Alliance of Publication Professionals, a trade advocacy unit, states: “A ghostwriter is someone who writes a paper, but whose name does not appear on the paper.” Simply by acknowledging writers for editorial or writing “assistance”—a practice always widespread—commercially written literature is by these definitions freed from the egregious ghostwriting label. Consider, for instance, two commercially drafted studies of paroxetine, both criticised for using ghostwriting to produce allegedly biased content. One, study 352, mentions no writer but the other, the notorious study 329, discloses at the end of a tract of small print that “Editorial assistance was provided by Sally K Laden, MS.” By the traditional definition, both articles are ghostwritten, but according to the trade definitions quoted above, Study 329 is not, even though the medical writer’s credit is inconspicuous…
This rebranding is something we’ve watched unfold before our eyes. As recently as 2010, it was shocking news when Paul Thacker, then at POGO, exposed a number of ghostwritten articles in a letter to the head of the NIH [see roaches…]. But Pharma didn’t miss a beat. As Matheson points out, they just started mentioning the ghostwriter in the Acknowledgements, and the ghostwritten articles kept coming. They simply redefined ghostwriting to mean only articles with no mention of the ghost. And in the process, they never addressed the fact that these articles are industry-run, industry-analyzed, and industry-written. They might as well put the academic authors in the Acknowledgements for publication assistance, and the real authors [ghosts] on the byline.

What Matheson calls rebranding goes much further. For example, in Paxil Study 329 and Celexa CIT-MD-18, they redefined the meaning of a priori to mean "before the blind is broken" rather than "before the study begins." Redefining outcome variables is another common form of rebranding. But that’s just one of the tools of the trade in the process of misreporting the results of these clinical trial reports [I wonder if there’s a handbook somewhere that catalogs all the various tools and techniques available to misreport clinical trial reports?].

Matheson is an insider who knows the tools of the trade, and as such he deserves to be listened to carefully – particularly when he suggests what should be done about all of this. He proposes a number of thoughtful standards for the accurate attribution of authorship, focusing his attention on the International Committee of Medical Journal Editors [ICMJE]. But I’m not sure that would change things any more than the many other attempts at reform – COI declarations, acknowledging funding sources, naming the ghostwriters, the Sunshine Act, etc. They’re all proxies. They don’t get at the central issue directly, which is accurate information about the trial results.

So while I think Matheson’s suggestions are appropriate and even necessary, I doubt that they alone will make any big difference. I have the same concern about data transparency. Having done a reanalysis with data in hand, I learned what a difficult task it was, and I wonder who would do all the checking necessary on the large number of published studies. The industry-generated published clinical trial reports have been so regularly distorted that I doubt there’s any cosmetic fix. The only real solution I can see is an easily accessible, timely, independent analysis of the raw data according to the a priori protocol. And the only entity with the access and resources to do such an analysis is the FDA. They already do it in the process of approving new drugs or new indications. Were that information immediately available to editors, peer reviewers, you and me – there wouldn’t be any problem to solve.
Mickey @ 9:20 AM

scathing indictments…

Posted on Saturday 27 August 2016

In spite of an undergraduate mathematics degree and statistical training in a subsequent academic fellowship, I think I had retained a relatively naive yes/no way of thinking rather than seeing various shades of maybe. My brother-in-law is a social psychologist who taught statistics and for a time studied the factors involved in voting patterns. It all seemed way too soft for the likes of me to follow. Like many physicians, I’m afraid I just listened for the almighty yes/no p-value at the end. So when I became interested in the clinical drug trials that pepper the psychiatric literature, I was unprepared for the many ways the analyses can be manipulated and distorted. I was unfamiliar with things like power calculations and effect sizes. So as I said recently, I had previously read that little thing at the top… [the abstract] without critically going over the body of the paper, making the assumption that the editor and peer reviewers had already done the work of vetting the article for me.

In this blog, I’ve been preoccupied with studies where scientific results have been manipulated on purpose for financial gain by the pharmaceutical sponsors of clinical drug trials. But there are other motives to distort research findings, eg academic advancement ["publish or perish"]. And then, of course, you can just do it wrong – misuse the complicated tools of statistical analysis. Ten years ago, John Ioannidis published a widely read article that focused attention on the magnitude of the problem:
PLOS Medicine
by John P. A. Ioannidis
August 30, 2005

There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
The gold standard for scientific research is replication – can an independent researcher repeat the study and reproduce the results? Last year, Brian Nosek was able to engage colleagues to repeat 100 studies in academic psychology – with the cooperation of the original authors. The results were eye-opening, now known as the Replication Crisis:
Science
Open Science Collaboration: Corresponding Author Brian Nosek
August 28, 2015

INTRODUCTION: Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. Scientific claims should not gain credence because of the status or authority of their originator but by the replicability of their supporting evidence. Even research of exemplary quality may have irreproducible empirical findings because of random or systematic error.
RATIONALE: There is concern about the rate and predictors of reproducibility, but limited evidence. Potentially problematic practices include selective reporting, selective analysis, and insufficient specification of the conditions necessary or sufficient to obtain the results. Direct replication is the attempt to recreate the conditions believed sufficient for obtaining a previously observed finding and is the means of establishing reproducibility of a finding with new data. We conducted a large-scale, collaborative effort to obtain an initial estimate of the reproducibility of psychological science.
RESULTS: We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. There is no single standard for evaluating replication success. Here, we evaluated reproducibility using significance and P values, effect sizes, subjective assessments of replication teams, and meta-analysis of effect sizes. The mean effect size [r] of the replication effects [Mr = 0.197, SD = 0.257] was half the magnitude of the mean effect size of the original effects [Mr = 0.403, SD = 0.188], representing a substantial decline. Ninety-seven percent of original studies had significant results [P < .05]. Thirty-six percent of replications had significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
CONCLUSION: No single indicator sufficiently describes replication success, and the five indicators examined here are not the only ways to evaluate reproducibility. Nonetheless, collectively these results offer a clear conclusion: A large portion of replications produced weaker evidence for the original findings despite using materials provided by the original authors, review in advance for methodological fidelity, and high statistical power to detect the original effect sizes. Moreover, correlational evidence is consistent with the conclusion that variation in the strength of initial evidence [such as original P value] was more predictive of replication success than variation in the characteristics of the teams conducting the research [such as experience and expertise]. The latter factors certainly can influence replication success, but they did not appear to do so here…
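That 47% criterion – whether the original effect size lands inside the 95% confidence interval of the replication – is easy to sketch with the Fisher z transform. Here’s a back-of-the-envelope version [the correlation is the replication mean from the abstract; the sample size of 100 pairs is purely illustrative – every actual replication had its own n]:

```python
from math import atanh, tanh, sqrt

def fisher_ci(r, n, z_crit=1.96):
    """95% confidence interval for a correlation r observed on n
    pairs, using the Fisher z transform [atanh] and its approximate
    standard error 1/sqrt(n - 3)."""
    z = atanh(r)
    half_width = z_crit / sqrt(n - 3)
    return tanh(z - half_width), tanh(z + half_width)

# replication mean effect from the abstract: r = 0.197
# n = 100 pairs is a made-up, illustrative sample size
lo, hi = fisher_ci(0.197, 100)
print(f"95% CI for r = 0.197 at n = 100: [{lo:.3f}, {hi:.3f}]")
```

With these illustrative numbers, the original mean effect [r = 0.403] falls outside the interval around the replication mean – which is the flavor of what the Collaboration found in roughly half the studies.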

This week, Denes Szucs and John Ioannidis released a preprint of a study of 5 years of research papers in 18 prominent journals from psychology, neuroscience, and medicine, estimating a whopping 50% False Positive rate [or even worse in the cognitive neuroscience articles]:
by Denes Szucs, John PA Ioannidis
doi: http://dx.doi.org/10.1101/071530

We have empirically assessed the distribution of published effect sizes and estimated power by extracting more than 100,000 statistical records from about 10,000 cognitive neuroscience and psychology papers published during the past 5 years. The reported median effect size was d=0.93 [inter-quartile range: 0.64-1.46] for nominally statistically significant results and d=0.24 [0.11-0.42] for non-significant results. Median power to detect small, medium and large effects was 0.12, 0.44 and 0.73, reflecting no improvement through the past half-century. Power was lowest for cognitive neuroscience journals. 14% of papers reported some statistically significant results, although the respective F statistic and degrees of freedom proved that these were non-significant; p value errors positively correlated with journal impact factors. False report probability is likely to exceed 50% for the whole literature. In light of our findings the recently reported low replication success in psychology is realistic and worse performance may be expected for cognitive neuroscience.
They had some kind of automated system to extract p-values, power calculations, degrees of freedom, effect sizes, and the various statistical indices from a large number of papers. Their conclusions were based on distortions due to widespread under·powered research [I should say at this point, "or something like that"]. The precise nuts and bolts of this paper’s methodology and analyses are hardly apparent at first glance, and will likely become a topic of discussion in their own right. But the work comes from a solid source, is consistent with other investigations, and will surely add fuel to what seems to be a much needed look at how the scientific community conducts and publishes research. These are scathing indictments. I think I’ve been like one of those blind men feeling only part of the elephant, thinking this was a problem confined to the commercially sponsored clinical trials of pharmaceuticals. It’s obviously much bigger than that – maybe as big as the whole domain of science…
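For readers who, like me, had to relearn what power means: it’s the probability of detecting a true effect of a given size with a given sample. A quick normal-approximation sketch shows how a sample in the twenties per group [an illustrative figure, not one taken from their paper] yields numbers in the neighborhood of the medians they report:

```python
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Normal-approximation power of a two-sided, two-sample test of a
    standardized effect size d with n_per_group subjects per arm."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    noncentrality = d * (n_per_group / 2) ** 0.5
    # ignores the negligible chance of rejecting in the wrong tail
    return z.cdf(noncentrality - z_crit)

# Cohen's conventional small, medium, and large effect sizes
for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: power = {approx_power(d, 26):.2f}")
```

So at that sample size, a true small effect gets detected about one time in ten – and the significant results that do surface will tend to overstate the effect.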
Mickey @ 1:42 PM

so during the halftime…

Posted on Monday 22 August 2016

… of Sunday’s Olympic final basketball game, lightning hit our cabin and fried all things electronic to a crisp [emphasis on all]. Back in a few days.
Mickey @ 5:42 PM

rio·2016…

Posted on Sunday 21 August 2016

[Andrew Medichini/Associated Press]

Living in England in 1972, we got to spend two weeks at the Munich Olympic games. And in spite of the craziness of the terrorist attack, those days remain a peak experience in my memory theater. Then in 1996, the games came to our house [Atlanta]. But even in all those years where they only played out on a television screen, it has always been the same – something special. rio·2016 is no exception. The Olympics games are just one of the best ideas humankind ever came up with…
Mickey @ 10:53 AM

shrinking…

Posted on Saturday 20 August 2016


by Tara F. Bishop, Joanna K. Seirup, Harold Alan Pincus, and Joseph S. Ross
Health Affairs. 2016 35[7]:1271-1277.

A large proportion of the US population suffers from mental illness. Limited access to psychiatrists may be a contributor to the underuse of mental health services. We studied changes in the supply of psychiatrists from 2003 to 2013, compared to changes in the supply of primary care physicians and neurologists. During this period the number of practicing psychiatrists declined from 37,968 to 37,889, which represented a 10.2 percent reduction in the median number of psychiatrists per 100,000 residents in hospital referral regions. In contrast, the numbers of primary care physicians and neurologists grew during the study period. These findings may help explain why patients report poor access to mental health care. Future research should explore the impact of the declining psychiatrist supply on patients and investigate new models of care that seek to integrate mental health and primary care or use team-based care that combines the services of psychiatrists and nonphysician providers for individuals with severe mental illnesses.
MEDPAGETODAY
by Neel A. Duggal
07·17·2016

The number of practicing psychiatrists in the U.S. has stalled over the last decade, in contrast to an upward trend among many other specialties, researchers found. An analysis of data from the Health Resources and Services Administration [HRSA] revealed a 0.2% decline in the number of psychiatrists in practice, compared with increases for neurologists [35.7%], primary care physicians [9.5%], and all practicing physicians [14.2%], Tara Bishop, MD, MPH, of Weill Cornell Medical College, and colleagues reported in the July issue of Health Affairs.

Co-author Harold Pincus, MD, of Columbia University and New York-Presbyterian Hospital, told MedPage Today that more medical school students were going into psychiatry in the 60s and 70s, but there’s been a "generational shift and this proportion has declined. Thus, psychiatrists are not being replaced at a sufficient rate." He offered two potential reasons for this shift: "First, psychiatrists are one of the lowest compensated specialties," he said. "Secondly, there are a greater number of other professionals providing behavioral health services, such as mid-level providers and counselors"… In addition to the overall totals, they saw a 10.2% decline in the median number of psychiatrists per 100,000 residents in hospital referral regions — compared with a 15.8% per capita increase for neurologists, and stable per capita proportions for primary care doctors and all practicing physicians…

The researchers suggested that the decline in psychiatrists might explain "why people report poor access to mental healthcare and why a large portion of psychiatrists are able to sustain practices without accepting insurance."

Petros Levounis, MD, chair of psychiatry at Rutgers Medical School, who wasn’t involved in the study, noted that there have been policy efforts to increase reimbursement for mental health services, such as the Mental Health and Substance Use Disorder Act of 2008 — but its implementation "has been slow," he said. "Reimbursements are very low," Levounis added. "Thus, many psychiatrists don’t accept insurance, such as those in the greater New York area." From his perspective as a medical school instructor, Levounis said that medical students "are initially interested in mental health and addiction. However, as their education progresses, their interest drops significantly"…

Pincus suggested that giving psychiatrists a supervisory role to guide other behavioral health professionals, while diminishing their own face time with patients, may be the best path to managing population mental health. But Levounis disagrees: he believes such a move will only increase the need for psychiatrists. On the other hand, telemedicine may be able to pick up the slack, although it’s "early to tell the success of their outcomes," Levounis said. The researchers concluded that "policy makers, payers, and the medical community simultaneously must develop strategies to enhance recruitment into psychiatry and rapidly develop and effectively disseminate new care models to use the psychiatric workforce more efficiently in the near term."

Co-authors disclosed financial relationships with Medtronic, Johnson & Johnson, and Blue Cross Blue Shield.
It’s worth reading the brief Wikipedia entry for the Community Mental Health Act of 1963, the last legislation signed by President Kennedy before his assassination – a good idea that never really happened. But as a result, in the days of my Internal Medicine Residency, the government was paying people to go into Psychiatry to deal with the perceived shortage.

[psychiatry residency positions]

Back then, the need was for physicians [psychiatrists] to staff the mental health centers that were an essential ingredient of the Community Mental Health enterprise. It was actually a good plan in my estimation, though by the time I arrived in the mid·1970s, it was in its waning days – collapsing under the weight of under·funding and under·staffing, and stripped of the necessary hospital backup for stabilization. With the coming of the DSM-III, brain science, and the medicalization of psychiatry, there was the kind of heyday new paradigms often bring. But in general, the relative number of US graduating seniors choosing psychiatry is basically flat compared to other medical specialties.

These authors mention the low pay and competition from other disciplines as explanatory factors. But those things have always been true and are unlikely to explain the more recent state of the specialty. Of course the number of psychiatrists is falling, but the obvious explanations weren’t mentioned here – things like psychiatrists are only covered by insurance to prescribe medications, and heavily criticized for prescribing too much medication; psychiatrists are not covered to talk to patients, and criticized for not talking to their patients; the upper echelons of psychiatry contain a number of tainted key opinion leaders who have gone over to the dark side and allied themselves with the commercial interests of the pharmaceutical industry; and critics see all psychiatrists as members of this tainted group and are globally contemptuous. That’s just for starters. And, oh yeah, I didn’t take insurance because with that came directives about practice that were unacceptable [and in my opinion, wrong]. I preferred to make less money.

And so the authors suggest the solution is for psychiatrists to no longer see patients, or to see them far less, but rather to direct treatment from afar – through a Clinical Coordinator [AKA Collaborative Care] or literally through a computer screen [AKA Telepsychiatry]. Who wants to spend a career doing either? What patient wants to be treated that way? And the authors’ Conflicts of Interest are telling. Their suggestions are exactly what industry wants from psychiatrists [prescribe meds more rationally than the Primary Care docs do, but don’t get involved with talking to or even meeting with the patients, because that runs up costs]. So it looks to me as if they’re using the declining-numbers data to justify directing the specialty even further into exactly what Medtronic, Johnson & Johnson, and Blue Cross Blue Shield want it to be.

But my real reason for writing this is to point out that the policy wonks neither mention the widespread corruption of the academic·pharmaceutical complex nor the ubiquitous inaccurate information about our drugs. That absence in this discussion is particularly striking…
Mickey @ 10:32 AM

the best predictor…

Posted on Thursday 18 August 2016

I don’t know anything about Ketamine except what I read. It is an FDA-approved, off-patent anesthetic by day, and a club drug by night [Special K]. We began to hear about it as an antidepressant around the time it became apparent that PHARMA was abandoning CNS drug development in 2012. It’s given IV and is a lightweight hallucinogen. The excitement was that the antidepressant effects seemed to persist for a variable period of time [days] after the druggie effects wore off [hours]. And in a modern world, there was another problem – it’s generic – ergo no blockbuster potential in sight [or so I thought]. But then the cynical part of my mind got engaged when I saw commentaries by major KOLs suggesting caution[???] [full text on-line]:
These are not the kind of people who suggest caution[???] about drugs, and I freely admit that I was immediately suspicious that they were holding out for something PHARMA could market and downplaying Ketamine itself [call me paranoid? or call me wise?]. That last Nemeroff paper was a review article and I wrote about it in a touch of paralysis…. I did think they were talking up Rapastinel, a patented Ketamine analog, and that the evidence base for that drug was beyond shaky [see infomercials… and shame on us]. Then shortly after the Nemeroff article was published, another contender came on the scene from Johnson and Johnson – Esketamine. Esketamine and Ketamine are enantiomers – molecules that have the same chemical formula but are mirror images that can’t be superimposed on each other. There is some evidence that their properties as drugs differ, summarized here. In November 2013, Johnson and Johnson received a Breakthrough Therapy Designation for treatment resistant depression, followed by this report of a phase 2 clinical trial:
by Jaskaran B. Singh , Maggie Fedgchin, Ella Daly, Liwen Xi, Caroline Melman, Geert De Bruecker, Andre Tadic, Pascal Sienaert, Frank Wiegand, Husseini Manji, Wayne C. Drevets, and Luc Van Nueten
Biological Psychiatry. November 3, 2015. EPub ahead of print.

BACKGROUND: The purpose of this study was to assess the efficacy and safety and to explore the dose response of esketamine intravenous [IV] infusion in patients with treatment-resistant depression [TRD].
METHODS: This multicenter, randomized, placebo-controlled trial was conducted in 30 patients with TRD. Patients were randomly assigned 1:1:1 to receive an IV infusion of .20 mg/kg or .40 mg/kg esketamine or placebo over 40 minutes on day 1. The primary end point was change in Montgomery-Åsberg Depression Rating Scale total score from day 1 [baseline] to day 2. Nonresponders who received placebo on day 1 were randomly assigned again 1:1 to IV esketamine .20 mg/kg or .40 mg/kg on day 4. Secondary efficacy and safety measures were also evaluated.
RESULTS: Of the enrolled patients, 97% [29 of 30] completed the study. The least squares mean changes [SE] from baseline to day 2 in Montgomery-Åsberg Depression Rating Scale total score for the esketamine .20 mg/kg and .40 mg/kg dose groups were -16.8 [3.00] and -16.9 [2.61], respectively, and showed significant improvement [one-sided p = .001 for both groups] compared with placebo [-3.8 [2.97]]. Esketamine showed a rapid [within 2 hours] and robust antidepressant effect. Treatment-emergent adverse events were dose dependent. The most common treatment-emergent adverse events were headache, nausea, and dissociation; the last-mentioned was transient and did not persist beyond 4 hours from the start of the esketamine infusion.
CONCLUSIONS: A rapid onset of robust antidepressant effects was observed in patients with TRD after a 40-min IV infusion of either .20 mg/kg or .40 mg/kg of esketamine. The lower dose may allow for better tolerability while maintaining efficacy.
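Out of curiosity, I checked whether that one-sided p = .001 can be recovered from the published least-squares means and standard errors. A rough normal-approximation sketch [it treats the two arms as independent with known standard errors, which the paper’s actual mixed-model analysis does not]:

```python
from statistics import NormalDist

def one_sided_p(mean_drug, se_drug, mean_placebo, se_placebo):
    """Approximate one-sided p value for the difference between two
    least-squares mean changes, treating the arms as independent."""
    diff = mean_placebo - mean_drug          # drug lowers MADRS more
    se_diff = (se_drug ** 2 + se_placebo ** 2) ** 0.5
    return 1 - NormalDist().cdf(diff / se_diff)

# reported day-2 MADRS changes: -16.8 [SE 3.00] on esketamine
# .20 mg/kg versus -3.8 [SE 2.97] on placebo
p = one_sided_p(-16.8, 3.00, -3.8, 2.97)
print(f"approximate one-sided p = {p:.4f}")
```

The approximation lands right at the reported value – so at least the arithmetic checks out, whatever one thinks of a 30-patient trial.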
But that was just for starters. They’ve been working on an intranasal delivery version for seven years, recently presented [reference: Canuso C, et al. "Esketamine for the Rapid Reduction of the Symptoms of Major Depressive Disorder, Including Suicidal Ideation, in Subjects Assessed to be at Imminent Risk for Suicide." Society of Biological Psychiatry 71st Annual Scientific Meeting. May 12-14, 2016.]. Now Johnson and Johnson has just been granted Breakthrough Therapy Designation for the intranasal version for both treatment resistant depression and major depressive disorder with imminent risk for suicide [Breakthrough Therapy Designation is essentially a fast-track FDA Approval process for promising and much-needed drugs].
Johnson and Johnson: Press Release
August 16, 2016

Janssen Research & Development, LLC, one of the Janssen Pharmaceutical Companies of Johnson & Johnson, announced today that the U.S. Food and Drug Administration [FDA] has granted a Breakthrough Therapy Designation for  esketamine, an investigational antidepressant medication, for the indication of major depressive disorder with imminent risk for suicide. If approved by the FDA, esketamine would be one of the first new approaches to treat major depressive disorder available to patients in the last 50 years.

This also marks the second time esketamine has received a Breakthrough Therapy Designation from the U.S. regulatory authority. Esketamine was first granted this designation for treatment-resistant depression in November 2013. Breakthrough Therapy Designation is intended to expedite development and review timelines when preliminary clinical evidence indicates the drug may demonstrate substantial improvement on one or more clinically significant endpoints over available therapies for serious or life-threatening conditions.

The esketamine Phase 2 clinical trial data presented by Janssen in May 2016 at the Society of Biological Psychiatry 71st Annual Scientific Meeting in Atlanta, Georgia, provided preliminary clinical evidence to support the Breakthrough Therapy Designation for major depressive disorder with imminent risk for suicide.

About Esketamine

Esketamine for intranasal administration is an investigational compound being studied by Janssen as part of a global development program. Esketamine is a non-competitive and subtype non-selective activity-dependent N-methyl-D-aspartate [NMDA] receptor antagonist, which has a novel mechanism of action, meaning it works differently than currently available therapies for depression. The program in treatment-resistant depression is currently in Phase 3, with six ongoing clinical trials…
[Alex Gorsky]

I’ll have to admit that the notion of seeing a suicidal patient and offering them a sniff of some version of Special K as a treatment strikes me as borderline ludicrous, hard for me to say with a straight face – but stranger things have happened. That’s not why I’m writing about it. Remember, this is Johnson and Johnson, the people who gave us TMAP and all of the Risperdal® shenanigans. The CEO is still Alex Gorsky – West Point graduate and lifelong Boy Scout, who brought in Billions with all kinds of hanky-panky, flooding the literature with articles churned out by Excerpta Medica. Johnson and Johnson gladly paid the cost-of-doing-business fines of around $2.2 B – trivial in the face of their ill-gotten gains. And with Risperdal®, they were also first-on-the-market:

Here’s a roster of their Esketamine clinical trials from clinicaltrials.gov. As you can see, they’re going long on Esketamine. It’s a hungry market and they’re going after it. And then there’s that old saying, "The best predictor of future behavior is past behavior."

NCT #
 
  Title   Status Phase Start
NCT01394757 Network Dysfunction, Schizophrenia and Pharmacological Magnetic Resonance Imaging [phMRI] Completed 08/2011
NCT00847418 Pharmacokinetics and Pharmacodynamics of Nasally Applied Esketamine Completed 1 02/2009
NCT01780259 A Study to Assess the Pharmacokinetics, Safety, and Tolerability of Intranasally Administered Esketamine in Healthy Participants Completed 1 12/2012
NCT02060929 A Study to Evaluate the Pharmacokinetics of Intranasal Esketamine Administered With and Without a Nasal Guide on the Intranasal Device Completed 1 10/2013
NCT01980303 A Study to Assess the Pharmacokinetics of Intranasally Administered Esketamine in Healthy Japanese and Caucasian Volunteers Completed 1 11/2013
NCT02129088 A Pharmacokinetic, Safety and Tolerability Study of Esketamine in Healthy Elderly and Adult Participants Completed 1 03/2014
NCT02094378 A Study to Evaluate the Effect of Intranasal Esketamine on Cognitive Functioning in Healthy Subjects Completed 1 06/2014
NCT02154334 Study to Assess the Effects of Allergic Rhinitis and Co-administration of Mometasone or Oxymetazoline on the Pharmacokinetics, Safety, and Tolerability of Intranasal Esketamine Completed 1 06/2014
NCT02228239 Study to Assess the Effects of Esketamine on Safety of On-road Driving in Healthy Participants Completed 1 09/2014
NCT02345148 Pharmacokinetic, Safety, and Tolerability Study of Intranasally Administered Esketamine in Elderly and Healthy Younger Adult Participants Completed 1 12/2014
NCT02343289 A Study to Evaluate the Absolute Bioavailability of Intranasal and Oral Esketamine and the Effects of Clarithromycin on the Pharmacokinetics of Intranasal Esketamine in Healthy Participants Completed 1 01/2015
NCT02568176 Pharmacokinetic Study of Intranasal Esketamine and Its Effects on the Pharmacokinetics of Orally-Administered Midazolam and Bupropion in Healthy Participants Completed 1 10/2015
NCT02611505 A Study to Assess the Effects of Hepatic Impairment on the Pharmacokinetics, Safety, and Tolerability of Intranasally Administered Esketamine Recruiting 1 11/2015
NCT02606084 A Study to Assess the Effects of Renal Impairment on the Pharmacokinetics, Safety, and Tolerability of Intranasally Administered Esketamine Recruiting 1 12/2015
NCT02846519 Pharmacokinetic, Safety, and Tolerability Study of Intranasally Administered Esketamine in Healthy Han Chinese, Korean, Japanese, and Caucasian Participants and the Effects of Rifampin on the Pharmacokinetics of Intranasally Administered Esketamine Completed 1 02/2016
NCT02682225 Crossover Study to Evaluate the Abuse Potential of Intranasal Esketamine Compared to Racemic Intravenous Ketamine in Nondependent, Recreational Drug Users Recruiting 1 03/2016
NCT02674295 A Mass Balance Study With a Microtracer Dose of 14C-esketamine in Healthy Male Participants Recruiting 1 03/2016
NCT02737605 A Study to Evaluate the Effects of Esketamine on Cardiac Repolarization in Healthy Participants Not yet recruiting 1 07/2016
NCT02857777 Pharmacokinetic, Safety, and Tolerability Study of Intranasally Administered Esketamine in Elderly Japanese, and Healthy Younger Adult Japanese Subjects Not yet recruiting 1 08/2016
NCT01640080 A Study of the Efficacy of Intravenous Esketamine in Adult Patients With Treatment-Resistant Depression Completed 2 06/2012
NCT01998958 A Study to Evaluate the Safety and Efficacy of Intranasal Esketamine in Treatment-resistant Depression Completed 2 01/2014
NCT02133001 A Double-blind Study to Assess the Efficacy and Safety of Intranasal Esketamine for the Rapid Reduction of the Symptoms of Major Depressive Disorder, Including Suicidal Ideation, in Participants Who Are Assessed to be at Imminent Risk for Suicide Completed 2 05/2014
NCT02417064 A Study to Evaluate the Efficacy, Safety, and Tolerability of Fixed Doses of Intranasal Esketamine Plus an Oral Antidepressant in Adult Participants With Treatment-resistant Depression Recruiting 3 08/2015
NCT02418585 A Study to Evaluate the Efficacy, Safety, and Tolerability of Flexible Doses of Intranasal Esketamine Plus an Oral Antidepressant in Adult Participants With Treatment-resistant Depression Recruiting 3 08/2015
NCT02422186 A Study to Evaluate the Efficacy, Safety, and Tolerability of Intranasal Esketamine Plus an Oral Antidepressant in Elderly Participants With Treatment-resistant Depression Recruiting 3 08/2015
NCT02497287 A Long-term, Safety and Efficacy Study of Intranasal Esketamine in Treatment-resistant Depression Recruiting 3 09/2015
NCT02493868 A Study of Intranasal Esketamine Plus an Oral Antidepressant for Relapse Prevention in Adult Participants With Treatment-resistant Depression Recruiting 3 10/2015
NCT02782104 A Long-term Safety Study of Intranasal Esketamine in Treatment-resistant Depression Recruiting 3 06/2016
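A roster like the one above can be pulled programmatically rather than copied by hand. Here’s a minimal sketch using the ClinicalTrials.gov v2 JSON API, which postdates this post; the endpoint, parameter names [`query.intr`, `query.spons`, `pageSize`], and the `protocolSection` field layout are assumptions based on the current public API:

```python
# Sketch: fetching a sponsor's trial roster from ClinicalTrials.gov.
# The v2 API endpoint, query parameters, and JSON field layout used
# below are assumptions based on the current public API documentation.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://clinicaltrials.gov/api/v2/studies"

def roster_url(intervention: str, sponsor: str, page_size: int = 100) -> str:
    """Build a query URL for one sponsor's trials of one intervention."""
    params = {
        "query.intr": intervention,
        "query.spons": sponsor,
        "pageSize": page_size,
    }
    return f"{BASE}?{urlencode(params)}"

def summarize(study: dict) -> tuple:
    """Flatten one v2-style study record into (NCT, title, status, phase)."""
    proto = study["protocolSection"]
    ident = proto["identificationModule"]
    status = proto["statusModule"]["overallStatus"]
    phases = proto.get("designModule", {}).get("phases", ["N/A"])
    return (ident["nctId"], ident["briefTitle"], status, "/".join(phases))

if __name__ == "__main__":
    url = roster_url("esketamine", "Janssen")
    print(url)
    # Uncomment to fetch live data:
    # data = json.load(urlopen(url))
    # for s in data["studies"]:
    #     print(*summarize(s), sep=" | ")
```

Run periodically, something like this would have caught each new Phase 3 trial as it appeared on the registry.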
Mickey @ 6:25 PM

and then there was one…

Posted on Wednesday 17 August 2016

I’ve gotten email along the way asking me [or chiding me] about my preoccupation with clinical trials that are more than a decade old, I guess implying that it’s even too boring for the likes of me. I’d like to respond, because I think there’s something very important in the answer. These trials are designed to detect short-term efficacy and log adverse effects to support regulatory approval. They were never intended to direct clinical use of the drugs in practice. And yet they’ve frequently been treated as if they’re the final word, the gold standard in clinical psychiatry. Further, in the published reports, the subjects’ self-rating scales are often insignificant, if they’re reported at all, even though the thing that really matters with many of these conditions is the patient’s subjective experience of symptom relief. Then, after a drug is approved, further testing is directed toward greener pastures, lucrative approvals for other diagnoses. And as to the continued relevance of these ancient trials, Karen Wagner’s update at the 2016 APA meeting relied on her same old studies [see Child Psychiatrists Look at Specialty From Both Macro, Micro Perspectives]:
Notice my formatting. I think that the 2001 Paxil Study 329 and the 2004 CIT-MD-18 papers have been convincingly debunked by the recent exhaustive second looks: Restoring Study 329… and The citalopram CIT-MD-18 pediatric depression trial…. And as long as we’re in the neighborhood, what about Wagner’s 2003 Pfizer-funded Sertraline trial?
by Wagner KD, Ambrosini P, Rynn M, Wohlberg C, Yang R, Greenbaum MS, Childress A, Donnelly C, Deas D; and the Sertraline Pediatric Depression Study Group
Journal of the American Medical Association. 2003, 290[8]:1033-41.

CONTEXT: The efficacy, safety, and tolerability of selective serotonin reuptake inhibitors [SSRIs] in the treatment of adults with major depressive disorder [MDD] are well established. Comparatively few data are available on the effects of SSRIs in depressed children and adolescents.
OBJECTIVE: To evaluate the efficacy and safety of sertraline compared with placebo in treatment of pediatric patients with MDD.
DESIGN AND SETTING: Two multicenter randomized, double-blind, placebo-controlled trials were conducted at 53 hospital, general practice, and academic centers in the United States, India, Canada, Costa Rica, and Mexico between December 1999 and May 2001 and were pooled a priori.
PARTICIPANTS: Three hundred seventy-six children and adolescents aged 6 to 17 years with Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition-defined MDD of at least moderate severity.
INTERVENTION: Patients were randomly assigned to receive a flexible dosage [50-200 mg/d] of sertraline [n = 189] or matching placebo tablets [n = 187] for 10 weeks.
MAIN OUTCOME MEASURES: Change from baseline in the Children’s Depression Rating Scale-Revised [CDRS-R] Best Description of Child total score and reported adverse events.
RESULTS: Sertraline-treated patients experienced statistically significantly greater improvement than placebo patients on the CDRS-R total score [mean change at week 10, -30.24 vs -25.83, respectively; P=.001; overall mean change, -22.84 vs -20.19, respectively; P=.007]. Based on a 40% decrease in the adjusted CDRS-R total score at study end point, 69% of sertraline-treated patients compared with 59% of placebo patients were considered responders [P=.05]. Sertraline treatment was generally well tolerated. Seventeen sertraline-treated patients [9%] and 5 placebo patients [3%] prematurely discontinued the study because of adverse events. Adverse events that occurred in at least 5% of sertraline-treated patients and with an incidence of at least twice that in placebo patients included diarrhea, vomiting, anorexia, and agitation.
CONCLUSION: The results of this pooled analysis demonstrate that sertraline is an effective and well-tolerated short-term treatment for children and adolescents with MDD.
Figure 2. Weekly and Overall Adjusted Mean CDRS-R Scores
CDRS-R indicates Children’s Depression Rating Scale-Revised Best Description of Child total score. Data are least-square means at each visit week, with mean scores averaged to give the overall mean, from a repeated-measures mixed-model analysis with age category, site, treatment, week, and week-by-treatment interaction used as fixed effects, subject as a random effect, and baseline effect as a covariate. Error bars indicate SE of the adjusted means, derived from the repeated-measures mixed-model procedure. P values are as follows: week 1, P=.09; week 2, P=.08; week 3, P=.01; week 4, P=.008; week 6, P=.37; week 8, P=.18; week 10, P=.001; and mean response, P=.007.

Right out of the gate, we have good reason to question these conclusions just by looking at the primary outcome graph, which reached significance only at weeks 3, 4, and 10. But there’s a lot more to give us pause. In all of the depositions, Wagner makes it clear that she neither saw all the data nor checked the analysis in this study either, and that it was written at Pfizer, with her coming in as the drafts were revised. Pfizer has gotten off light in the critiques of these clinical trials, given that they had a writing firm, Current Medical Directions, that ghostwrote their Zoloft articles faster than they could find KOLs to sign on as guest authors [see zoloft: beyond the approval I… for this part of the story].
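And the reported numbers put the clinical benefit itself in perspective. A quick back-of-the-envelope calculation, using only figures from the abstract [the mean CDRS-R changes and responder rates are Wagner et al’s; the arithmetic is mine]:

```python
# Back-of-the-envelope arithmetic on the numbers reported in the abstract.
# All inputs are from the paper; only the arithmetic is added here.

drug_change, placebo_change = -30.24, -25.83   # mean CDRS-R change, week 10
drug_resp, placebo_resp = 0.69, 0.59           # responder rates at end point

separation = abs(drug_change - placebo_change)  # drug-placebo difference
arr = drug_resp - placebo_resp                  # absolute risk reduction
nnt = 1 / arr                                   # number needed to treat

print(f"drug-placebo separation: {separation:.2f} CDRS-R points")
print(f"number needed to treat:  {nnt:.0f}")
```

A separation of under five points on the CDRS-R, and ten children treated for every one extra responder over placebo – that’s the "effective" treatment the conclusion announces.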

But there’s something else that has always bothered me about this paper, and it’s right there in the title – … two randomized controlled trials. What’s that about? Everywhere this trial is reported, they say some version of, "Data from both studies were pooled in a prospectively defined combined analysis." So we are to believe that they had two identical studies that they planned all along to pool for analysis, and the obvious question is: what makes them two studies? Sounds like one big study to me [and to plenty of others reading this paper]. If you were planning two identical studies, it would likely be because you were trying to speed things up to get the two studies needed for FDA approval. And then we read this from one of the study group investigators:

To the Editor: Dr Wagner and colleagues reported that sertraline was more effective than placebo for treating children with major depressive disorder and that it had few adverse effects. As one of the study group investigators in this trial, I am concerned about the way the authors pooled the data from 2 trials, a concern that was raised by previous letters critiquing this study. The pooled data from these 2 trials found a statistically marginal effect of medication that seems unlikely to be clinically meaningful in terms of risk and benefit balance.

New information about these trials has since become available. The recent review of pediatric antidepressant trials by a British regulatory agency includes the separate analysis of these 2 trials. This analysis found that the 2 individual trials, each of a good size [almost 190 patients], did not demonstrate the effectiveness of sertraline in treating major depressive disorder in children and adolescents.
    E. Jane Garland, MD, FRCPC
    Department of Psychiatry
    University of British Columbia
    Vancouver
Committee on Safety of Medicines. Medicines and Health Care Products Regulatory Agency. Selective Serotonin Reuptake Inhibitors [SSRIs] — overview of regulatory status and CSM advice relating to major depressive disorder [MDD] in children and adolescents: summary of clinical trials. Available at: http://medicines.mhra.gov.uk/ourwork/monitorsafequalmed/safetymessages/ssrioverviewclintnaldata_101203.htm. Accessibility verified March 12, 2004.
Well, if that’s not enough of a nail in this coffin, there’s even more in Wagner’s response to Dr. Garland’s letter:
Reply: In response to Dr Garland, our combined analysis was defined a priori, well before the last participant was entered into the study and before the study was unblinded. The decision to present the combined analysis as a primary analysis and study report was made based on …
I won’t even finish her explanation because it doesn’t matter, since a priori means before the study starts, not "well before the last participant was entered into the study and before the study was unblinded" [this ploy of redefining the meaning of a priori is familiar to us from a similar bit of sleight of hand in Paxil Study 329]. So looked at from any angle, this is yet another negative clinical trial reported as positive after much jury-rigging – just as worthy of retraction as the other two.
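The statistical point in Dr. Garland’s letter can be made concrete. A rough sketch using the abstract’s responder rates [69% vs 59%]: the per-trial sample sizes and rates below are simplifying assumptions [half the pooled sample in each trial, same rates in both], since the paper doesn’t report them separately; only the pooled arms [189 and 187] are from the paper. A two-proportion z-test is nowhere near significant at single-trial size, but crosses the .05 line once the trials are pooled:

```python
# Why pooling matters: a two-proportion z-test on the abstract's responder
# rates [69% sertraline vs 59% placebo]. Per-trial sizes and rates are
# simplifying assumptions (half the pooled sample each, same rates);
# only the pooled arm sizes [189 and 187] come from the paper.
import math

def two_prop_p(p1, n1, p2, n2):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    return math.erfc(z / math.sqrt(2))  # equals 2 * (1 - Phi(z))

p_single = two_prop_p(0.69, 95, 0.59, 94)    # one trial, ~half the sample
p_pooled = two_prop_p(0.69, 189, 0.59, 187)  # both trials combined

print(f"single trial: p = {p_single:.3f}")   # not significant
print(f"pooled:       p = {p_pooled:.3f}")   # crosses the .05 line
```

Under these assumptions, neither trial alone would have shown a significant difference in responder rates – which is exactly what the British regulators found when they analyzed the two trials separately.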
Wagner is scheduled for her usual presentation [Treatment of Depressive Disorders in Children and Adolescents] at the October American Academy of Child and Adolescent Psychiatry annual meeting. I wonder what she’ll find to talk about this year?
Mickey @ 11:30 PM

ignor·ance…

Posted on Tuesday 16 August 2016

Ignorance is a state of being uninformed [lack of knowledge]. The word ignorant is an adjective describing a person in the state of being unaware and is often used to describe individuals who deliberately ignore or disregard important information or facts.

    ig·nore (ïg-nôr′)
    verb
          To refuse to pay attention to; disregard.

It’s telling that the words ignorance and ignorant come from the verb ignore, implying a chosen version of not knowing. It certainly comes up a lot in matters surrounding clinical trials. For example, the rules for trial registration and results posting on clinicaltrials.gov, or for preregistration of outcomes as a prerequisite to being considered for publication, have been regularly ignored [see the wrong tree…]. Long before we conceived of our RIAT reanalysis of Paxil Study 329, I made a formal request to the Ethics Committee of the American Academy of Child and Adolescent Psychiatry. The committee [and the president-elect] responded, investigated, and told me the question would be discussed at their yearly meeting. Nothing happened, and it was not discussed. I later learned that the editor, Andres Martin, found out and insisted they have no more contact with me [ignored, disregarded], and that was the end of that.

In general, the major organizations [AACAP, APA] have ignored distortion of clinical trial reports in medical journals, including their own official journals when it has been pointed out. In spite of our reanalysis, that paper’s notoriety, and the $3 B settlement, the Study 329 article still sits in our literature – unretracted. And now there’s another obvious candidate – the citalopram trial in children and adolescents called CIT-MD-18. The recent article about that paper reveals how it was actively distorted by the sponsor – Forest Laboratories [see this tawdry era…]:
by Jureidini, Jon N., Amsterdam, Jay, D, and McHenry, Leemon B.
International Journal of Risk & Safety in Medicine, 2016 28[1]:33-43.

OBJECTIVE: Deconstruction of a ghostwritten report of a randomized, double-blind, placebo-controlled efficacy and safety trial of citalopram in depressed children and adolescents conducted in the United States.
METHODS: Approximately 750 documents from the Celexa and Lexapro Marketing and Sales Practices Litigation: Master Docket 09-MD-2067-[NMG] were deconstructed.
RESULTS: The published article contained efficacy and safety data inconsistent with the protocol criteria. Procedural deviations went unreported imparting statistical significance to the primary outcome, and an implausible effect size was claimed; positive post hoc measures were introduced and negative secondary outcomes were not reported; and adverse events were misleadingly analysed. Manuscript drafts were prepared by company employees and outside ghostwriters with academic researchers solicited as ‘authors’.
CONCLUSION: Deconstruction of court documents revealed that protocol-specified outcome measures showed no statistically significant difference between citalopram and placebo. However, the published article concluded that citalopram was safe and significantly more efficacious than placebo for children and adolescents, with possible adverse effects on patient safety.
The authors wrote the American Journal of Psychiatry where the original article is published requesting a retraction of Wagner et al’s A Randomized, Placebo-Controlled Trial of Citalopram for the Treatment of Major Depression in Children and Adolescents. That letter is in the Background Notes, and it received only this terse, belated reply:
RE: Am J Psychiatry 2004; 161:1079–1083 We are not retracting this article. Robert Freedman MD
In a rational world, one might think an editor would be a bit embarrassed, but generally glad, to hear about an article as far out in left field as this one, and look into it – particularly an article he himself had censured in the past for undeclared ghostwriting and for omitting a negative European study [see collusion with fiction…]. Add in its centrality in a pricey legal suit [Drug Maker Forest Pleads Guilty: Will Pay More Than $313 Million to Resolve Criminal Charges and False Claims Act Allegations]. But Freedman’s disregarding imperative [We are not retracting this article] is anything but that kind of response – more in the range of a scowl in text.

So the authors appealed to a higher power, Maria A. Oquendo, M.D., President of the American Psychiatric Association. Dr. Oquendo appears to be one of the good guys, able to have a distinguished and successful academic career without the taint of KOL-dom like her predecessor as Chairman at Penn, Dwight Evans [on Senator Grassley’s list of COI violators among other things]. That letter is posted here on this blog [a time for change…], and cosigned by a number of impressive people [and others]. It has been two weeks since it was sent, and there’s been no response yet.

Jureidini et al’s Deconstruction of CIT-MD-18 is substantively different from our Restoring Study 329 article, even though they’re in the same genre. Ours was a reanalysis, and while we were able to access the raw data and definitively show it was a negative study, the RIAT format didn’t allow for the subpoenaed emails that expose the internal goings-on. The CIT-MD-18 Deconstruction brings the corrupt processes out into the light of day as well as showing that this trial, too, was negative. This paper is important for two reasons: this study was part of the FDA-approved indication in adolescents AND it doesn’t have the negative notoriety of Paxil Study 329. In fact, first author Karen Dineen Wagner used it in her May 2016 APA presentation on antidepressant use in kids. So its findings are still very much in use [see what’s it going to take?…].


It’s time for the upper levels of academic psychiatry and for journal editors to stop ignoring these corrupt clinical trial reports that haunt our literature, moving forward as if none of what happened ever really happened. They have tainted the reputation of the specialty and of the profession in general. These papers sit in the literature like rotten fruit in a barrel, and they’re not going to go away. This particular study is unusually influential and must be dealt with. As for the authors, "don’t do the crime if you can’t do the time." I suspect that many of them had no idea what they were signing on to, but that’s no excuse. In the case of Dr. Wagner, the first author [and principal investigator], by her own testimony she neither looked at the whole data set nor checked the statistical analysis [see author·ity…].

So the task before Dr. Oquendo is a big one. Is she going to stick her head in the sand? ignore the obvious? or step up to the plate?…
Mickey @ 10:46 PM

it didn’t work…

Posted on Monday 15 August 2016

Well, I couldn’t do it. I thought I could just log the last post and keep my mouth shut, but it didn’t work. I made the mistake of reading this article again. So with apologies, I have some more things to say:
New England Journal of Medicine. 2016; 375:405-407.

… A key motivation for investigators to conduct RCTs is the ability to publish not only the primary trial report, but also major secondary articles based on the trial data. The original investigators almost always intend to undertake additional analyses of the data and explore new hypotheses. Moreover, large, multicenter trials with large numbers of investigators often require several articles to fully describe the results. These investigators are partly motivated by opportunities to lead these secondary publications. We believe 6 months is insufficient for performing the extensive analyses needed to adequately comprehend the data and publish even a few articles. Once the investigators who have conducted the trial no longer have exclusive access to the data, they will effectively be competing with people who have not contributed to the substantial efforts and often years of work required to conduct the trial.

… In summary, we recommend that the ICMJE come together with trialists and other stakeholders to discuss the potential benefits, risks, burdens, and opportunity costs of its proposal and explore alternatives that will achieve the same goals efficiently. Moreover, we recommend modifying the proposal as follows. First, the timeline for providing deidentified individual patient data should allow a minimum of 2 years after the first publication of the results and an additional 6 months for every year required to complete the study, up to a maximum of 5 years. Second, to enhance readers’ confidence in published data, an independent statistician should have the opportunity to conduct confirmatory analyses before publication of an article, thereby advancing the ICMJE’s stated goal of increasing “confidence and trust in the conclusions drawn from clinical trials.” Finally, persons who were not involved in an investigator-initiated trial but want access to the data should financially compensate the original investigators for their efforts and investments in the trial and the costs of making the data available.
I ran across a blog post that mirrored some of my reactions and brought up a few interesting points that hadn’t occurred to me:
techdirt
by Glyn Moody
August 8, 2016
This NEJM article sounds to me like Nero playing his violin while Rome burns. I can think of nothing in Medicine that comes even close to the level of corruption achieved in the distortion of clinical trial results in the fifty years since the Kefauver-Harris Amendment made trials a prerequisite for drug approval – something that was intended as a reform, a check on the often inert patent medicines of the day. There’s nothing wrong with that piece of legislation itself. It’s what people did with these clinical trials that has wreaked so much ongoing havoc.

While the standard for approval is low, FDA approval isn’t intended to direct clinicians. It simply means that the drug has demonstrated medicinal properties and is deemed safe for human use. The malignant problem has arisen from the version of those trials that has made it into the medical literature, regularly inflating efficacy, downplaying toxicity, and laying a base for advertising campaigns that capitalize on our patients’ desire for symptom relief and/or wellness. The aura of FDA approval and publication in the scientific literature has been parlayed into billions of dollars in ill-gotten profits. Worse, it has catapulted medication-induced mortality and morbidity into the majors.

Clinical trials are referred to as "research," and the people in charge of these studies call themselves "researchers." Research is exploration in search of new findings. These clinical trials aren’t research; they’re product testing – governed by strict rules that make the results valid: randomization, double blinding, preregistration of outcomes, followed by replication of positive results. The NEJM article describes a mythical process bearing little resemblance to what actually happens in these largely commercially conducted trials. The analysis could, and perhaps should, be done the day the trial ends and the blind is broken. If they want to play around with the data for some kind of exploratory research, they could simply hold on to the original protocol-directed analysis until their other data play is finished. Truth be told, this plea for time to play around with the data is an admission of guilt – hypothesizing after the results are known!

But the regularity of corruption in Clinical Trial reporting in the medical journals trumps any argument they might muster, any violin they might play. This is likely the biggest scandal in the history of Medicine. Rome really is on fire…
Mickey @ 11:36 PM

no neutrality…

Posted on Monday 15 August 2016

In 1980, Arnold Relman, then editor of the New England Journal of Medicine, gave us a dire warning. He saw what was coming and laid it out for all to see:
While he underestimated the involvement of the pharmaceutical industry, he was still dead on in general – industry was poised to invade Medicine with a ferocity the rest of us couldn’t yet imagine. And this very problem played out in the editorial offices of his own journal not many years later, resulting in the firing of Relman’s successor, Jerome Kassirer [see a narrative… and not so proud…]. For the last year, Relman‘s point has been the subject of a debate, largely in the pages of the New England Journal of Medicine. But this time the editor, Jeffrey Drazen, has been on the other side. Here are the relevant articles, arranged by date, all available full-text on-line:
I listed the primary sources for you to read for yourself because I have no neutrality in this matter. From my point of view, Jeffrey Drazen, presumably backed by the current powers that be at the New England Journal of Medicine, has shamed his journal and our profession. He’s used the time-honored medical journal of record to push his own corporate-friendly agenda. Let’s hope he and his like-minded associates find a way to move on quickly…
Mickey @ 11:33 AM