STOP, LOOK, and LISTEN…

Posted on Tuesday 24 May 2016

David Healy’s blog has a guest post called The Pill That Steals Lives: One woman’s terrifying journey to discover the truth about antidepressants by Katinka Blackford Newman – an introduction to a book about her experiences with psychiatric medications due out in early July. It’s one of those all too familiar stories where a negative reaction to one medication was interpreted as an illness that was treated by adding other medications in an escalating cycle:
It had started when I had hit a wall of despair while going through a divorce. Sleepless nights took me to a psychiatrist who prescribed an antidepressant. Within hours I was hallucinating, believed I had attacked my children and in fact attacked myself with a knife. I ended up in a private hospital where doctors clearly thought I had a screw loose when I told them I was being filmed and that there was a suicide pact with God. The psychosis ended when I said I wanted to stop taking the escitalopram but doctors insisted I take more pills. This began a terrible decline where I couldn’t leave the house, dress myself, finish a sentence. But the worst thing of all was that I couldn’t feel love for my children, Lily and Oscar, who were 10 and 11 at the time. At the end of a year I was about to end it all. As a last resort I tried to get myself readmitted to the same private hospital, but my insurance had run out. And that was how I ended up sectioned at this NHS hospital that had made the decision to take me off all the drugs [Lithium, Olanzapine, Sertraline, Prozac, Lamotrigine]. I was climbing the walls, screaming, shouting, and begging my family to get me out of there. If I’d been suicidal while on the drugs, withdrawal made me far worse…
It looks to be an interesting book [and advertises a revelation along the way]. But that’s not why it’s here. It’s this:
[Lithium, Olanzapine, Sertraline, Prozac, Lamotrigine]
No matter what you believe about mental illness [disease or not] or about psychopharmacology [disease specific or symptomatic], it’s hard for me to imagine when…
[two antidepressants plus two mood stabilizers plus an atypical antipsychotic]
…would ever be an appropriate drug regimen for any condition I know of. What illness does that treat? How about with the new NbN terminology they’re so excited about? Would renaming it …
[glutamate with yet to be determined enzyme interaction plus a dopamine, serotonin receptor antagonist [D2, 5-HT2] plus a serotonin reuptake inhibitor [SERT] plus a glutamate voltage-gated sodium channel blocker]
…make things any better? [one shudders to think what it would become in RDoC talk]. I’m being sarcastic and I shouldn’t be because this is a deadly serious point. A case like this transcends the usual discussions about efficacy or indication. There’s just no rational rationale for this drug cocktail that I can think of for anything. And I’ve seen cases like this over and over. I recently catalogued such a journey [good news bulletin… see case number 3 and its links].

The way this happens is that a patient gets started on a medication and things go badly. So other medicines are tried without stopping the last. As the patient continues to go downhill, the medications get added irrationally. There may be akathisia and/or withdrawal mixed in with the medication effects. It ends like the story I think I’ll read when the book arrives – an impossible situation where the patient may or may not still have the problem they came with, is living in an obtunded mental state from all the medications, and has the added prospect of one or more withdrawal syndromes to face. One unholy mess!

The solution when a case is headed this way is to STOP adding things and gradually taper off of all medication, using something like a benzo for distress if you have to. LOOK at the patient as if it’s a brand new case. Perhaps get a consult from someone you respect, and LISTEN to what they say and what the patient says. Unfortunately, the people who get into such messes are reluctant to let them see the light of day, so finally somebody else [family?] intervenes. And when you see a case like this – somebody on five medicines who’s getting or has already gotten worse – always think medication effects until proven otherwise.
Mickey @ 5:19 PM

a thorny problem, this one…

Posted on Monday 23 May 2016

Reading through Sergio Sismondo’s Ghosts in the Machine was confirming, validating my own impression that there is a secretive, commercially driven enterprise manipulating the processes by which we know about medications. I knew it was there, but I really didn’t know it was so ubiquitous, nor did I know it was a profession. But there were parts of his essay I just didn’t get. This was one of them:
Implicit in many of the exposes of ghostwriting in the medical science and popular literature is an assumption that ghostwritten science is formally inferior. Given the very high acceptance rates of ghost-managed papers, that assumption is questionable in general – though it may be right about important cases. Pharmaceutical company research, analysis, and writing results in knowledge. It is not different from other medical research, analysis, and writing in the fact that companies and their agents make choices in the running of clinical trials, in interpretations of data and established medical science, and in the messages they convey in papers and presentations. This point is straightforwardly suggested by STS’s longstanding commitment to symmetry. It is justified by the results of canonical studies that have shown how science is choice-laden. Thus, the work of pharmaceutical companies to produce research and place it prominently in medical journals is not merely a corporate use of the patina of science. It is science, though it is science done in a new, corporate mode.
It was one of those paragraphs you read over and over with the same question-mark look on your face. And apparently I’m not the only one [see Leemon McHenry’s Ghosts in the Machine: Comment on Sismondo]:
The commercial medical science that has created the ghostwriting industry is a corruption of science, and not merely as Sismondo puts it "science done in a new, corporate mode."
Which is what I think. Back to Sismondo, there was a part of Ghosts in the Machine that I really liked reading:
Everybody systematically connected with publication planning wants established formal rules of conduct. As sub-contractors, publication planners would like to reduce uncertainty, so that they can plan ahead and so that they can produce exactly the papers that will satisfy all of the different parties with whom they interact. Both publication planners and pharmaceutical companies want formal rules to guide and cover their work, to legitimize it so that its exposure does not amount to scandal…
In the course of life, I spent almost a decade as an educator/administrator. And as much as I enjoyed those years, there was one part I had no problem getting away from. It was hearing the question, "What is your policy about …?" To me, that meant "give me a rule so I’ll know how close to the line I can get without consequences" [and then the search for the loopholes began]. Here’s another solution that we might initially think we could agree with:
by SERGIO SISMONDO AND MATHIEU DOUCET
Bioethics. 2010 24[6]:273–283.

It is by now no secret that some scientific articles are ghost authored – that is, written by someone other than the person whose name appears at the top of the article. Ghost authorship, however, is only one sort of ghosting. In this article, we present evidence that pharmaceutical companies engage in the ghost management of the scientific literature, by controlling or shaping several crucial steps in the research, writing, and publication of scientific articles. Ghost management allows the pharmaceutical industry to shape the literature in ways that serve its interests.

This article aims to reinforce and expand publication ethics as an important area of concern for bioethics. Since ghost-managed research is primarily undertaken in the interests of marketing, large quantities of medical research violate not just publication norms but also research ethics. Much of this research involves human subjects, and yet is performed not primarily to increase knowledge for broad human benefit, but to disseminate results in the service of profits. Those who sponsor, manage, conduct, and publish such research therefore behave unethically, since they put patients at risk without justification. This leads us to a strong conclusion: if medical journals want to ensure that the research they publish is ethically sound, they should not publish articles that are commercially sponsored.

But then the doubts come creeping in. How would journals be financed? Industry is the only act in town able to finance trials. Where would physicians get information? Package inserts? Who says the FDA Approval tells us enough to make clinical decisions? Who says the FDA is incorruptible? And so on, and so on, and scooby-dooby-do…

A thorny problem, this one…
Mickey @ 2:09 PM

philosophic insomnia…

Posted on Sunday 22 May 2016

"In the ghost management of medical research by pharmaceutical companies, we have a novel model of science. This is corporate science, done by many hidden workers, performed for marketing purposes, and drawing its authority from traditional academic science. The high commercial stakes mean that all of the parties connected with this new science can find reasons or be induced to participate, support, and steadily normalize it. It is likely here to stay for a while."
Sergio Sismondo

I may claim the right to be boring, but I’m sure not bored with this topic of industry invading academic medicine and the medical literature. As a late-comer, I still happen onto unexplored corners all the time that open up a whole new cache of things to think about. That term, ghost management, is one of those unexplored corners. Without knowing it exactly, I’ve been running into an example [Vortioxetine] of publication planning for a while now. There’s a ghost behind this machine:

MAY 2013:   beyond unacceptable…
way past time…
academic?…
the squeaky wheel…
DEC 2014: the recommendation?…
FEB-APR 2016: indications…
more vortioxetine story…
the empty pipeline…
a parable…
still going strong?…
MAY 2016: publication bias I…
publication bias II…
publication bias III…
publication bias IV…
publication bias – a post-script…

And so to the work of Canadian Philosopher/Sociologist Sergio Sismondo who has enough publications on this topic to be named an honorary KOL himself. Here’s a long one that can be read full text on the Internet – Ghosts in the Machine 2009. His work is one of those unexplored corners that occupied my Sunday.

I once had a friend who was working on his PhD thesis on Husserl at Duke. One night, he left Durham and drove to Memphis and enrolled in Medical School, never looking back. He later said something like, "That night, I started to finally ‘get it.’ And I realized that if I continued in philosophy and ‘got it’ much more, I’d be too depressed to ever get any sleep." Sismondo’s work has some of that flavor. His command of ghost management, publication planning, and the stealth goings-on in the pharmaceutical industry is encyclopedic and illuminating, but I wonder how he sleeps at night…
Mickey @ 8:33 PM

publication bias – a post-script…

Posted on Sunday 22 May 2016

Wading around in the meta-analyses comparing efficacy and safety among the antidepressants can be like Kafka’s The Trial or McCarthy’s The Road. You’re never sure where you are or if you’ve arrived anywhere. One of the most quoted versions is Cipriani et al’s Comparative efficacy and acceptability of 12 new-generation antidepressants: a multiple-treatments meta-analysis that produced this chart.

…comparing efficacy and acceptability. The abscissa [horizontal axis] is the rank from 1st down to 12th and the ordinate [vertical axis] is the probability of each rank for the drugs. Now recall this forest plot of Vortioxetine versus comparators [Duloxetine] from Cosgrove et al:
They went head to head with one of their least effective rivals – and lost. New isn’t necessarily better…
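[A quick aside for anyone puzzling over that Cipriani-style rank-probability chart: here’s a minimal sketch of how such a plot gets drawn. The numbers are invented for illustration – they are not Cipriani et al’s results – and the drug names are placeholders; the point is only to show what the abscissa and ordinate encode.]

```python
# Minimal sketch of a rank-probability ["rankogram"] plot.
# All numbers below are hypothetical, for illustration only.
import matplotlib.pyplot as plt

ranks = list(range(1, 13))  # 1st [best] through 12th [worst]

# Hypothetical probability-of-each-rank vectors [each roughly sums to 1.0]
rank_probs = {
    "Drug A": [0.35, 0.25, 0.15, 0.10, 0.06, 0.04, 0.02, 0.01, 0.01, 0.005, 0.003, 0.002],
    "Drug B": [0.05, 0.10, 0.15, 0.20, 0.20, 0.12, 0.08, 0.05, 0.03, 0.01, 0.005, 0.005],
    "Drug C": [0.01, 0.02, 0.03, 0.05, 0.08, 0.10, 0.12, 0.15, 0.18, 0.14, 0.08, 0.04],
}

for drug, probs in rank_probs.items():
    plt.plot(ranks, probs, marker="o", label=drug)

plt.xlabel("Rank [1st = best ... 12th = worst]")   # the abscissa
plt.ylabel("Probability of that rank")             # the ordinate
plt.legend()
plt.title("Hypothetical rank probabilities [illustration only]")
plt.show()
```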
Mickey @ 11:15 AM

publication bias IV – close encounters of the second kind…

Posted on Sunday 22 May 2016

I doubt that my discussion of the difficulty getting contrarian studies published would stand up in Evidence-Based Medicine court – too few examples to be called anything but anecdotes. But if you asked the few investigators who’ve given it a shot, I’d bet p would approach (1 ÷ ∞). Consider, for example, this contentious [and telling] response to Jureidini et al’s submission:

from Dr. Greenblatt [Editor of the Journal of Clinical Psychopharmacology]

We do not share your concerns about what you term "ghostwriting" — more properly described as manuscript preparation assistance. This is just another form of assistance or collaborative effort in the course of research — similar to technical assistance with measurement of plasma drug levels, or statistical consultant input in analysis of data. In the end, the listed authors take responsibility for the content of the manuscript, and that is what matters.
We also are not worried about the participation of the pharmaceutical manufacturer in the execution of the study, or the preparation of the manuscript. This is to be expected — they are the sponsors, and they have the most knowledge about the candidate drug and the data… We also note that we ARE concerned that you are serving as a reimbursed expert witness on behalf of the plaintiffs, proposing in the present manuscript what we expect are similar arguments as presented in the context of the litigation.
With all of that said, we certainly could reconsider a revised manuscript in which the focus was ONLY on the scientific content of the paper(s) in question. If you disagree with the scientific content or its interpretation, that can be presented, but without the court documents and internal E-mails, and without accusations of malfeasance, misrepresentation, manipulation, whitewashing, complicity, etc. The issue of manuscript preparation assistance is also not in the picture…

Greenblatt’s response misses the central point that by ignoring protocol-directed exclusions for the primary outcome variable and changing a secondary outcome altogether, the analysis falsely reported a negative study as positive. Perhaps that’s what he didn’t want to hear.

But even in the BMJ response to Le Noury et al where the journal was sympathetic to the reanalysis, they performed their own audit of the harm findings because of similar concerns about COI, and asked for additional post hoc efficacy analyses [multiple imputation]. To borrow a $5 word from theological scholarship:
   her·me·neu·tic
     noun
     1. a method or theory of interpretation
The RCT submissions are viewed with a hermeneutic of acceptance [innocent until proven guilty], and the contrarian submissions are evaluated under a hermeneutic of suspicion [guilty until proven innocent]. There’s not much a contrarian author can do about that. The journals are independent entities and they set their own sails.

Publication bias of the first kind [not publishing negative studies] presents a false view of a drug by inflating the drug’s profile. Publication bias of this second kind [not publishing or under-publishing contrarian articles – criticism] achieves the same result through a different mechanism. The first is calculated, intentional. The second is more insidious, and often has to do with the general climate. Dr. Greenblatt’s comments aren’t about the specific drug study criticized by Jureidini et al. He’s reacting to their implied criticism of the system that he, himself, is a part of. He conflates the legal challenge that produced the documents used in Jureidini et al and the criticism of the specific drug study itself, and lashes out at both in a single breath [rejecting the article out of hand].

Cosgrove et al has had a similar fate so far. The paper not only presents a right-sized view of Vortioxetine and exposes the mechanisms used to inflate it in the first place, it also looks at the structure of the system that allowed that to happen and suggests reform. We don’t know the details, but the net result is that their paper is buried and largely inaccessible. I doubt anyone thought, "Let’s block this paper from the general discourse" along the way. But that’s what happened. The article says "Your system needs to change" to an audience that is fine with the system as it is. I agree with these authors that the system needs to change. And though my recipe for change might differ somewhat, I would welcome these opinions to the conversation. But that’s not the mainstream point of view – unfortunately reflected in their article’s fate.

I first heard the phrase, "Sunlight is the best disinfectant" in Ben Goldacre’s initial TED Talk [see something of value…]. And it is dead-on accurate. It’s the motivating force behind AllTrials and the push for Data Transparency, behind articles like Restoring Study 329… and The citalopram CIT-MD-18 pediatric depression trial…, blogs like this one, etc. I think the main problem with Cosgrove et al right now is not the Impact Factor of the journal where it’s published, but the fact that it’s not easily accessible on-line. Negotiating ResearchGate is not the same as just clicking a link. The other articles mentioned here are on-line, and I would hope that theirs will be too at some point. It could use the kind of sunlight only the Internet can provide…
Mickey @ 7:00 AM

publication bias III – close encounters of the second kind…

Posted on Friday 20 May 2016

by Lisa Cosgrove, Steven Vannoy, Barbara Mintzes, and Allen Shaughnessy
Accountability in Research. 2016 23[5]:257-279.

The relationships among academe, publishing, and industry can facilitate commercial bias in how drug efficacy and safety data are obtained, interpreted, and presented to regulatory bodies and prescribers. Through a critique of published and unpublished trials submitted to the Food and Drug Administration [FDA] and the European Medicines Agency [EMA] for approval of a new antidepressant, vortioxetine, we present a case study of the "ghost management" of the information delivery process. We argue that currently accepted practices undermine regulatory safeguards aimed at protecting the public from unsafe or ineffective medicines. The economies of influence that may intentionally and unintentionally produce evidence-biased-rather than evidence-based-medicine are identified. This is not a simple story of author financial conflicts of interest, but rather a complex tale of ghost management of the entire process of bringing a drug to market. This case study shows how weak regulatory policies allow for design choices and reporting strategies that can make marginal products look novel, more effective, and safer than they are, and how the selective and imbalanced reporting of clinical trial data in medical journals results in the marketing of expensive "me-too" drugs with questionable risk/benefit profiles. We offer solutions for neutralizing these economies of influence.

This article differs from the others in this series in that it is primarily a perspective/opinion piece about the intersection of industry, the medical literature, and the regulatory processes, one that uses the recent FDA approval of Vortioxetine [Brintellix®] as an exemplar. It’s a well-framed discussion of a complex issue, offering suggestions for reform. While some may not agree with their emphases or solutions, it’s an excellent "big picture" article – a thoughtful contribution. But specific to the Vortioxetine theme from the last post [publication bias II – close encounters of the second kind…], their research into the Vortioxetine RCTs definitely adds value to the other reports. And it’s another example of a contrarian article that they had a really hard time getting published.

Cosgrove et al looked at all the Vortioxetine trials, published and unpublished, and introduced a term I hadn’t run across before – Ghost Management. Just saying an article is ghost-written doesn’t tell the whole story, because the ghost-writer is employed by the RCT’s sponsor. Ghost Management is much more accurate in that it conveys the extent of the sponsor’s control over the entire RCT process. I found some of their findings jarring, even after my years of reading these reports. For example, the extent of industry COI:
Vortioxetine is typical of virtually all new drugs in that the pre-market RCTs were all sponsored by the manufacturer. However, congruent with commercialized publication planning, every author in all of the published short term RCTs, as well as one longer term randomized drug withdrawal study, had significant commercial ties to the manufacturer, well beyond research funding [e.g., they were employees, participated on advisory boards, and/or had received consulting monies or honoraria]…

Below is a summary of the industry-publishing relationships of the eight published studies submitted to the FDA and one additional study submitted to the EMA that was published:
  • In eleven of the thirteen publications, the majority of authors were employees of the manufacturer, and in four of the thirteen published studies, all authors were company employees.
  • In all of the trial reports, the authors explicitly thank an employee of the manufacturer for ‘assistance in the preparation and writing‘ of the manuscript or note that assistance with preparing and writing the article was provided by an employee.
  • In nine of the thirteen published articles, the following issue was disclosed: ‘[the manufacturer] was involved in the study design, in the collection, analysis and interpretation of data, and in the writing of the report.’
  • The thirteen published studies were published in seven academic journals. The editors of five of these journals had financial ties to vortioxetine’s manufacturer…
It had never occurred to me to look at the COI of the journal editors. Another finding – one of their suggestions is that the FDA consider all trials instead of the best ones, and take the performance against comparators into account. They produced this meta-analysis across trials:
At this point, my usual M.O., having highlighted a few "teasers," would be to suggest you read the whole article yourself. There’s plenty more I haven’t mentioned. But, in this case, you can’t read it. I can’t either. My usual resource for access is through my faculty connection to our library. But although the journal is indexed, it offers no provision to read the article [unusual]. On-line, it’s not Open Access but behind a $41 paywall. Making it Open Access would’ve cost the authors ~$3000, a mighty stiff fee for any academic [but trivial for PHARMA]. I had written Dr. Cosgrove to ask if she had cataloged or written about their Hegira looking for a publisher. She hadn’t, but she mentioned in passing that they were disappointed with the response to their article. I don’t doubt that, since the only people who can read it are subscribers to Accountability in Research, rich people, or those who have written the authors for a copy. It’s hidden under a bushel, or so I thought. But then I discovered ResearchGate! And got to the paper full text even before I joined [which I later did]. That’s where that [full text on-line ?] link came from. Definitely check out ResearchGate if you haven’t already. And however you get this article, it’s well worth the trouble.

So back to the main theme here. This is a solid article that shouldn’t be hard to publish. Credentialed authors. Well researched. Thoughtful analyses. Certainly relevant. But it’s contrarian. It swims upstream against the conclusions the original authors presented. And those articles are [to say it yet again] hot potatoes, just like Jureidini, Amsterdam, and McHenry’s paper. In this case, the research on the Vortioxetine [Brintellix®] trials adds an essential dimension to those papers mentioned in the last post. This is just not an Impact Factor 0.826 article. So is this a form of publication bias? Close enough for me, but I’m open to any other name as long as it says that it’s not right. Well-documented dissent is [and has always been] an integral part of the scientific process…

Publication bias is the term for what occurs whenever the research that appears in the published literature is systematically unrepresentative of the population of completed studies. Simply put, when the research that is readily available differs in its results from the results of all the research that has been done in an area, readers and reviewers of that research are in danger of drawing the wrong conclusion about what that body of research shows…
Mickey @ 2:33 PM

publication bias II – close encounters of the second kind…

Posted on Friday 20 May 2016

With the Paxil Study 329 paper, our problem getting it in print didn’t have to do with journal shopping; it had to do with a tough-love review process and a year of uncertainty that went with it. It was a top journal [British Medical Journal Impact Factor 17.445] and I’m glad it’s there. The recent Citalopram paper did have to do a lot of journal shopping/hopping [see background notes], moving from the Journal of Affective Disorders [Impact Factor 3.383] to JAMA Psychiatry [Impact Factor 12.008] to the Acta Psychiatrica Scandinavica [Impact Factor 5.605] to the International Journal of Risk and Safety in Medicine [Impact Factor 0.86]. Quite a journey.

A few months back, I was writing about an elaborate KOL-rich campaign by Takeda and Lundbeck to get FDA Approval for Vortioxetine [Brintellix®] in Cognitive Dysfunction in Major Depressive Disorder [see indications… and more vortioxetine story…]. I thought it was a commercially driven attempt [indication creep] and the science was woefully lacking. I was pleased that the FDA later agreed [a parable…] and didn’t approve the indication. Prior to that, my only encounter with Vortioxetine [Brintellix®] was an industry-produced review article in the Journal of Clinical Psychiatry [Impact Factor 5.498]:

by Alan F. Schatzberg, Pierre Blier, Larry Culpepper, Rakesh Jain, George I. Papakostas, and Michael E. Thase.
Journal of Clinical Psychiatry 2014 75[12]:1411–1418.

Six clinicians provide an overview of the serotonergic antidepressant vortioxetine, which was recently approved for the treatment of major depressive disorder in adults. They discuss the pharmacologic profile and receptor-mediated effects of vortioxetine in relation to potential outcomes. Additionally, they summarize the clinical trials, which demonstrate vortioxetine’s efficacy, and discuss findings related to safety and tolerability that have high relevance to patient compliance.
Speaking of KOL-rich, this was simply the worst article I’ve ever seen in a medical journal. It’s hard to imagine that they published it, but I’ve had my say about that [see the recommendation?…]. While I was in the Vortioxetine neighborhood, I ran across an article by Lisa Cosgrove and colleagues that had been accepted but not yet published and looked interesting. I wrote them about it and they kindly sent me an advance copy:
by Lisa Cosgrove, Steven Vannoy, Barbara Mintzes, and Allen Shaughnessy
Dr. Cosgrove was well known to me for running down the extent of Conflicts of Interest among members of the DSM-5 Task Force [see must be crazy…] and as a coauthor of the recent book Psychiatry Under the Influence: Institutional Corruption, Social Injury, and Prescriptions for Reform. In sending the article, she mentioned that they had a very difficult time getting it published. I thought it was an important article and made a note to blog about it later when it was published and fully available in Accountability in Research [Impact Factor 0.826]. It includes a critical look at the Vortioxetine Clinical Trials that I’ll mention later. In the references, I found a meta-analysis of Vortioxetine published in the Journal of Psychiatry and Neuroscience [Impact Factor 5.86]:
by Chi-Un Pae, Sheng-Min Wang, Changsu Han, Soo-Jung Lee, Ashwin A. Patkar, Prakash S. Masand, and Alessandro Serretti
Journal of Psychiatry and Neuroscience. 2015 40[3]: 174–186.

Background: Vortioxetine was approved by the U.S. Food and Drug Administration [FDA] in September 2013 for treating major depressive disorder [MDD]. Thus far, a number of randomized, double-blind, placebo-controlled clinical trials [RCTs] of vortioxetine have been conducted in patients with MDD. We performed a meta-analysis to increase the statistical power of these studies and enhance our current understanding of the role of vortioxetine in the treatment of MDD.
Methods: We performed an extensive search of databases and the clinical trial registry. The mean change in total scores on the 24-item Hamilton Rating Scale for Depression [HAM-D] and the Montgomery–Åsberg Depression Rating Scale [MADRS] from the baseline were the primary outcome measures. The secondary efficacy measures were the response and remission rates, as defined by a 50% or greater reduction in HAM-D/MADRS total scores and as a score of 10 or less in the MADRS and 7 or less in the HAM-D total scores at the end of treatment.
Results: We included 7 published and 5 unpublished short-term [6–12 wk] RCTs in our meta-analysis. Vortioxetine was significantly more effective than placebo, with an effect size [standardized mean difference [SMD]] of −0.217 [95% confidence interval [CI] −0.313 to −0.122] and with odds ratios [ORs] for response and remission of 1.652 [95% CI 1.321 to 2.067] and 1.399 [95% CI 1.104 to 1.773], respectively. Those treated with vortioxetine did not differ significantly from those treated with selective norepinephrine reuptake inhibitors/agomelatine with regard to the SMD of the primary outcome measure [0.081, −0.062 to 0.223] or for response [OR 0.815, 95% CI 0.585 to 1.135] and remission [OR 0.843, 95% CI 0.575 to 1.238] rates. Discontinuation owing to lack of efficacy [OR 0.541, 95% CI 0.308 to 0.950] was significantly less common among those treated with vortioxetine than among those who received placebo, whereas discontinuation owing to adverse events [AEs; OR 1.530, 95% CI 1.144 to 2.047] was significantly more common among those treated with vortioxetine than among those receiving placebo. There was no significant difference in discontinuation rates between vortioxetine and comparators owing to inefficacy [OR 0.983, 95% CI 0.585 to 1.650], whereas discontinuation owing to AEs was significantly less common in the vortioxetine than in the comparator group [OR 0.728, 95% CI 0.554 to 0.957].
Limitations: Studies examining the role of vortioxetine in the treatment of MDD are limited.
Conclusion: Although our results suggest that vortioxetine may be an effective treatment option for MDD, they should be interpreted and translated into clinical practice with caution, as the meta-analysis was based on a limited number of heterogeneous RCTs.


Effect Size of the Primary Outcome Variable
[adapted for blog]
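[For readers who want the arithmetic behind a pooled SMD like the −0.217 quoted in the abstract above, here’s a minimal sketch of simple inverse-variance [fixed-effect] pooling. The trial-level effect sizes and standard errors are invented for illustration – they are not the vortioxetine data – and Pae et al’s actual model may well have differed, e.g. random-effects.]

```python
import math

# Hypothetical per-trial standardized mean differences [drug minus placebo;
# negative favors drug] and their standard errors -- illustration only.
trials = [
    (-0.30, 0.10),
    (-0.15, 0.08),
    (-0.05, 0.12),
    (-0.25, 0.09),
]

# Fixed-effect [inverse-variance] pooling: weight each trial by 1 / SE^2
weights = [1.0 / se ** 2 for _, se in trials]
pooled_smd = sum(w * smd for (smd, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval around the pooled estimate
ci_low = pooled_smd - 1.96 * pooled_se
ci_high = pooled_smd + 1.96 * pooled_se
print(f"pooled SMD = {pooled_smd:.3f}  [95% CI {ci_low:.3f} to {ci_high:.3f}]")
```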

I’ve obviously got two themes going in this post. One is the articles about the RCTs that led the FDA to approve Vortioxetine [Brintellix®] for the treatment of Major Depressive Disorder. The other has to do with what I’m calling publication bias of the second kind – the difficulty getting contrarian articles about Clinical Trials published and into the general dialog. I’ll get to the Cosgrove et al article in detail in the next post, but for now just say that it is in the contrarian category. So, we have an industry-created [incredible] review article in a journal with an Impact Factor of 5.498, a credible meta-analysis in a journal with an Impact Factor of 5.86, and the Cosgrove et al article that ended up in a journal with an Impact Factor of 0.826. In my next post in this series, I’m going to continue both themes and claim that this disparity in journal ratings deserves our attention…
Mickey @ 8:30 AM

publication bias I – close encounters of the second kind…

Posted on Thursday 19 May 2016

Publication bias is the term for what occurs whenever the research that appears in the published literature is systematically unrepresentative of the population of completed studies. Simply put, when the research that is readily available differs in its results from the results of all the research that has been done in an area, readers and reviewers of that research are in danger of drawing the wrong conclusion about what that body of research shows…

The usual version of publication bias is the practice of simply not publishing negative clinical trials. The result is obvious. The positive trials "average out" to a falsely inflated result and the drug looks better than it actually is. We’ve pretty much caught on to that ploy by requiring all trials be registered in advance [clinicaltrials.gov], developing tools to detect missing trials [funnel plots], and doing meta-analyses that include unpublished trials [the Tamiflu saga]. A variant would be publishing Paxil Study 329 which was claimed to be positive, but not publishing two negative trials of the same drug [Paxil Study 377 and Paxil Study 701] until after the patent had run out [see paxil in adolescents: “five easy pieces”… and study 329 x – “it wasn’t sin – it was spin”…]. This post isn’t about that kind of publication bias. It’s about a getting published version of publication bias.
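[Since funnel plots just got mentioned in passing: the idea is to plot each trial’s effect estimate against its standard error, so that the big precise trials sit at the narrow top and the small trials fan out below. When the small negative trials never get published, the funnel comes out lopsided. Here’s a minimal sketch with invented numbers, only to show the construction.]

```python
import matplotlib.pyplot as plt

# Hypothetical trial results: (effect estimate, standard error) -- illustration only.
# Small trials [large SE] reporting null or negative effects are conspicuously
# absent, which is what produces the telltale asymmetry.
published = [
    (0.45, 0.25), (0.40, 0.20), (0.35, 0.18),
    (0.30, 0.12), (0.28, 0.10), (0.25, 0.08), (0.24, 0.05),
]

effects = [e for e, _ in published]
ses = [se for _, se in published]

plt.scatter(effects, ses)
plt.axvline(0.25, linestyle="--")   # rough pooled estimate, for reference
plt.gca().invert_yaxis()            # most precise trials at the top
plt.xlabel("Effect estimate")
plt.ylabel("Standard error [inverted]")
plt.title("Hypothetical funnel plot: missing small negative trials -> asymmetry")
plt.show()
```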

[Note: This particular post is kind of redundant, not too different from the last one. But it’s here because I finally figured out what I wanted to say].

I was on the RIAT team that reanalyzed Paxil Study 329 [Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence]. It was the end product of over a decade of effort by armies of people from many professions all over the world working to make it happen. It was in a high impact journal [British Medical Journal] and was extremely well received. In spite of all that preparatory work and a receptive editorial staff, writing it and getting it published was a grueling experience, roughly outlined at Restoring Study 329 [in Background], and there will be more to follow. But in spite of its success, you have to ask yourself "Why did it take such a herculean effort by an army and fourteen years to get it in print?" And for that matter, "Why was I an author – a retired adult psychiatrist who writes a blog at the edge of the galaxy?" It should’ve been authored by a chairman of child psychiatry at a prestigious medical school or the president of the American Academy of Child and Adolescent Psychiatry. Actually, Study 329 should never have been published in the first place as it was written [Efficacy of Paroxetine in the Treatment of Adolescent Major Depression: A Randomized, Controlled Trial], or it should have been subsequently retracted.

And now we have another article about an antidepressant clinical trial in kids, The citalopram CIT-MD-18 pediatric depression trial: Deconstruction of medical ghostwriting, data mischaracterisation and academic malfeasance. It’s authored by a heavily credentialed multidisciplinary team. In this paper, the data come from the sponsor’s own internal documents, and the focus is on the processes involved in creating the original publication [A Randomized, Placebo-Controlled Trial of Citalopram for the Treatment of Major Depression in Children and Adolescents]. While they show the same things we documented [deviations from the a priori protocol, add-on parameters, questionable analyses, etc], they go further. Using verbatim quotes from the sponsor, they show that all of this jury-rigging of data, analysis, and presentation was done by the sponsor’s employees with the specific intent of deceiving the reader by making this negative trial look positive. And yet they had a hell of a time getting it published [see background notes]. It wasn’t turned down because it was wrong, or badly written, or poorly documented, or didn’t have proven authors. "Why did they have such a hard time getting it published?" It’s in the International Journal of Risk and Safety in Medicine, a perfectly legitimate peer reviewed medical journal, but hardly in the upper echelon. Since it was unfunded research, the journal graciously made it Open Access by waiving the fee. But thus far, it hasn’t been covered in the mainstream media. "Why not?" It is at least as important as our article, if not more so, as it documents active fraudulent behavior on the part of the sponsor.

To my mind, this is publication bias of a second kind [see a mighty hard row…], a problem getting a contrarian take on a clinical trial published and into the discourse where it belongs. It fits the opening definition to a tee. This is a form of publication bias exerted by the publishers themselves, and may not necessarily be the result of direct or even indirect involvement of the original clinical trial’s sponsor. I’m not going to speculate further about all the motivations involved here. There’s more than enough speculation flying around about this topic already. But I do want to talk about its impact, and illustrate it with a contemporary example…
Mickey @ 2:48 PM

a mighty hard row…

Posted on Tuesday 17 May 2016

“…it has become standard practice for pharmaceutical companies to pay medical communication companies to write articles [based on industry-designed studies], for academic physicians to be paid to essentially sign off on the articles, and then for communication companies to place the articles in prestigious medical journals, a process known as “ghost management.” Ghost management has resulted in the selective and imbalanced reporting of clinical trial data in medical journals, which in turn has supported the marketing of expensive new drugs with questionable risk/benefit profiles.”

Under the Influence: The Interplay among Industry, Publishing, and Drug Regulation.
by Cosgrove, Vannoy, Mintzes, and Shaughnessy

Looking at the twelve-year-old Citalopram in Adolescence Study, the recent deconstruction, and the problems they had getting it published reminded me of another more contemporary article that I had wanted to return to. It was about the RCTs used to gain FDA Approval for Vortioxetine. I mentioned it earlier [see a couple of points…], but at the time it was only available as an abstract:

As you can see from the opening quote, this article is about a lot more than just the specific papers or even the FDA Approval; it’s about the perversion of the scientific process in the industry-funded RCTs, what the authors call ghost management [AKA total commercial control of the trial process from design and registration through the drafting of the paper for publication]. Being reminded of this paper, I recalled that when I first came across it I contacted the authors to get a copy and was told that they too had had a very hard time getting it published. It’s in a lower impact journal and it’s not available from the publisher on-line [an expensive proposition unless it’s part of a grant or you’re involved with a well-heeled corporation]. So I recontacted them to find out if they had written about their publishing problems, and they hadn’t. But they mentioned that they were disappointed with the paper’s reception, as was I. It’s a really strong and well documented article. Thinking back about our experience with the Paxil Study 329 RIAT article, the recent Citalopram deconstruction article, this Vortioxetine paper, the papers about the Tamiflu re-evaluation, and countless others, it’s apparent that there’s a common theme. So I want to revisit this Vortioxetine article later [I’ve found a way to get to it on-line], but first, I’d like to frame the common theme that unites these papers.

I’ve always had kind of a problem with “ear worms.” I get some song in my mind that repeats in the background. If I think about it, usually I can figure out how it got there. I was out and about all day, and there it was – an old Woody Guthrie song called Pastures of Plenty:

    It’s a mighty hard row that my poor hands have hoed
    My poor feet have traveled a hot dusty road
    Out of your Dust Bowl and Westward we rolled
    And your deserts were hot and your mountains were cold
    I worked in your orchards of peaches and prunes
    I slept on the ground in the light of the moon
    On the edge of the city you’ll see us and then
    We come with the dust and we go with the wind…
It had to do with thinking about this Cosgrove et al paper this morning before I set out on my errand-filled day. Okay – my unconscious might be overly dramatic, even histrionic, but it’s on the right track. It is a mighty hard row and it can be a largely thankless task. We sure paid the piper to get our Paxil Study 329 paper in print [as did the Tamiflu team]. We made it to a big journal and it was well received, but there were years of work by a number of people to bring the paper we reanalyzed to the forefront. Even with that, we were scrutinized like few others [see Restoring Study 329: The Path To Publication]. Jureidini et al were able to take advantage of a lot of antecedent legal work producing documentation, but still traveled that same mighty hard row [see background notes]. Cosgrove et al didn’t have any specific preparatory work by others, so they were flying blind.

What unites these articles is obvious – they are contrarian. They say that somebody else’s work is wrong, and in these cases, wrong on purpose. They use that somebody else’s data in reanalysis to reach a different, often opposite conclusion. They make a serious charge, so they deserve careful scrutiny sure enough. But in the case of all four articles, they’re not, primarily, opinion pieces. They’re data-based analyses that, by definition, render an opinion, but it’s an evidence-based opinion rather than speculative. Yet even after being subjected to a rigorous examination far exceeding that applied to the originals, they’re still hot potatoes. There’s obviously a fear of reprisal, legal challenges by the powerful sponsors of the originals. And there’s something else: the authors are under suspicion for having ulterior motives, conflicts of interest. The obvious commercial conflicts of interest in the originals are buried in small print, but the fact that the reanalysis articles are contrarian in and of itself seems grounds for such a charge.

And so to a few of the bumps along the mighty hard row for the authors of contrarian articles:
  • data collection:
    In each of these cases, just getting hold of the data was itself a daunting task, coming from reluctant sources in formats that made analysis difficult.
  • funding:
    This is largely unfunded research. If there is grant support, it’s often for the investigator without allowance for statistical or technical support, indirect costs, or publication expenses.
  • heightened standards:
    The burden of proof for contrarian articles is uniformly higher than for the originals. Likewise, requirements for freedom from conflicting interests are more stringent – more on the side of guilty until proven innocent.
  • opinion rather than science:
    Contrarian articles tend to be judged as opinions rather than scientific exploration – and biased opinion at that.
  • accessibility:
    Because they’re often in lower impact journals and have to rely on charity to be available online, they can get lost in an ivory tower or a dusty corner of the library either never catching the wind, or if they do, having a short hang time.
  • seen as deprivation:
    While their intent is health promoting, they can be seen as taking away something.
  • no PR:
    They’re not necessarily picked up by the news media, mentioned on the business pages, or reviewed in the professional trade journals.

This is hardly a comprehensive list, just something off the cuff. But I think it captures the fact that these are hard articles to write and hard articles to publish. But that’s just the beginning. Often, getting them into the public and professional discourse is another uphill climb. And while it’s tempting to see the difficulties as coming from evil opposing forces, and I’m certain that complaint is often justified, it’s also not the whole story. Contrarian literature in science is like that all on its own. We want to hear about breakthroughs, not breakdowns. That’s just the nature of things.

So now onto the skipped-over article by Cosgrove et al…
Mickey @ 11:58 AM

captain ben and his crew…

Posted on Sunday 15 May 2016

There are times when being wrong is just fine. When I first read about Ben Goldacre’s COMPare Project, I didn’t think it would have much of an impact. What he was proposing to do was put together an army of medical students who would look over Clinical Trial papers and when they found one that didn’t follow the a priori Protocol, they’d start writing letters to the Journal, calling it "outcome switching." While I certainly agree with the sentiment, I thought his campaign was too simplistic – more parrying than combat…
Retraction Watch
by Alison McCook
May 15, 2016

A major medical journal has updated its instructions to authors, now requiring that they publish protocols of clinical trials, along with any changes made along the way.

We learned of this change via the COMPare project, which has been tracking trial protocol changes in major medical journals — and been critical of the Annals of Internal Medicine’s response to those changes. However, Darren Taichman, the executive deputy editor of the journal, told us the journal’s decision to publish trial protocols was a long time coming…
    This change was something we planned prior to COMPARE and were intending to implement with an update of our online journal that is in process. However, the barrier COMPARE encountered in obtaining a protocol for one of the studies in their audit prompted us to implement it earlier…
Read the whole thing. It’s the real deal – a success that could be bigger than Ben’s AllTrials campaign. So I guess that one moral of the story is Don’t bet against Ben Goldacre. His TED Talk was a landmark, as was his AllTrials campaign. He seems to have the gift of both method and timing – something of a swashbuckler in an age of plodders.

While I still believe that Data Transparency is the ultimate goal to combat the rampant corruption, I realized when we were writing our RIAT paper that we needed a preventive strategy as well – something to head off the deceit in the first place. In the original Paxil Study 329, the Celexa Study in my last post [this tawdry era…], and for that matter, the overwhelming majority of the distorted RCTs I’ve looked at over the years, deviating from the a priori Protocol and/or the Statistical Analysis Plan to find something to call significant has been a ubiquitous practice, and the standard means for turning all those sow’s ears into silk purses.

It’s simple, up front, something that happens at the level of the journal publications where it needs to happen, and he’s brought it off in a major journal. So my hat’s off to Captain Ben and his crew…
Mickey @ 3:51 PM