smoke and mirrors…

Posted on Friday 27 May 2016

When I began to look at Clinical Trial reports a few years back, I had to relearn how to approach our literature. The studies are now industry funded, and that turned out to mean that the pharmaceutical companies control every step of the process, resulting in distorted and unreliable efficacy and side effect profiles. One might think that with all of the focused attention and attempts at reform, things might’ve changed more. But the beat just goes on.

The newest antidepressant on the block, Vortioxetine [Brintellix®, now Trintellix®], is sticking to the same old formulas. The publications’ authors are all either employees or otherwise tainted. A ghosted review article has an army of KOLs on the byline [see the recommendation?…]. They made a yeoman’s attempt at indication creep, including an all-day, KOL-rich Institute of Medicine production and a special FDA re-hearing trying [and failing] to create a new indication, Cognitive Dysfunction in Depression [see more vortioxetine story… and a parable…]. There have been a number of independent meta-analyses and critiques.
Now we have yet another Vortioxetine meta-analysis, this time by the manufacturers [Takeda/Lundbeck], again with their [everyman’s] KOL, Michael Thase, and four company employees on the byline:
Thase ME, Mahableshwarkar AR, Dragheim M, Loft H, and Vieta E.
European Neuropsychopharmacology. Mar 25, 2016 [Epub ahead of print]
2014/2015 Impact Factor 4.369

The efficacy and safety of vortioxetine, an antidepressant approved for the treatment of adults with major depressive disorder (MDD), was studied in 11 randomized, double-blind, placebo-controlled trials of 6/8 weeks’ treatment duration. An aggregated study-level meta-analysis was conducted to estimate the magnitude and dose-relationship of the clinical effect of approved doses of vortioxetine [5-20mg/day]. The primary outcome measure was change from baseline to endpoint in Montgomery-Åsberg Depression Rating Scale [MADRS] total score. Differences from placebo were analyzed using mixed model for repeated measurements [MMRM] analysis, with a sensitivity analysis also conducted using last observation carried forward. Secondary outcomes included MADRS single-item scores, response rate [≥50% reduction in baseline MADRS], remission rate [MADRS ≤10], and Clinical Global Impressions scores. Across the 11 studies, 1824 patients were treated with placebo and 3304 with vortioxetine [5mg/day: n=1001; 10mg/day: n=1042; 15mg/day: n=449; 20mg/day: n=812]. The MMRM meta-analysis demonstrated that vortioxetine 5, 10, and 20mg/day were associated with significant reductions in MADRS total score [Δ-2.27, Δ-3.57, and Δ-4.57, respectively; p<0.01] versus placebo. The effects of 15mg/day [Δ-2.60; p=0.105] were not significantly different from placebo. Vortioxetine 10 and 20mg/day were associated with significant reductions in 9 of 10 MADRS single-item scores. Vortioxetine treatment was also associated with significantly higher rates of response and remission and with significant improvements in other depression-related scores versus placebo. This meta-analysis of vortioxetine [5-20mg/day] in adults with MDD supports the efficacy demonstrated in the individual studies, with treatment effect increasing with dose.
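For anyone who wants to see what an aggregated study-level pooling actually does under the hood, here’s a bare-bones fixed-effect, inverse-variance sketch in Python. To be clear: the paper used MMRM, and this is just the generic pooling machinery; the study estimates and standard errors below are invented for illustration, not drawn from the paper.

```python
# Minimal fixed-effect, inverse-variance meta-analysis sketch.
# All numbers are hypothetical, for illustration only.

def pool_fixed_effect(estimates, std_errors):
    """Weight each study's drug-placebo difference by 1/SE^2 and average."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# e.g., three made-up studies' MADRS differences versus placebo
diffs = [-2.3, -4.1, -1.0]
ses = [0.9, 1.1, 1.2]
pooled, se = pool_fixed_effect(diffs, ses)
print(round(pooled, 2), round(se, 2))  # → -2.51 0.6
```

Note how the precise studies [small standard errors] dominate the average; that weighting is also why heterogeneity across sites matters so much when you read a pooled number.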
The article has a number of figures, forest plots of the parameters they compiled from the various Clinical trials. The two below are from Figures 2B and 4B. Take a look at the versions in the paper first. In the ones below, I’ve removed some of the columns that were irrelevant to the points I wanted to make, and I’ve sorted them differently – first by region [US, mixed, non-US] and then by dose [my apologies for the "waviness" – an artifact of my graphic capabilities]. In both of my versions, I see nothing of the "dose response" effect they advertise in the text. Instead, heterogeneity seems to be the order of the day. Another thing that appears obvious is that there’s a big difference between the US and non-US sites in both tables.

In the MADRS Total Score Differences [above], the forest plot abscissa is plotted as the raw difference. While that’s a legitimate way to show Effect Size, the units are unfamiliar, at least to me. In the far right column, they show the more familiar Standardized Effect Size [Mean Difference ÷ Standard Deviation], roughly scaled as 0.25=weak, 0.50=moderate, and 0.75=strong. Only one of the US sites reaches something that might remotely be clinically significant.
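For the arithmetic-minded, that conversion is simple enough to sketch. The MADRS difference and pooled standard deviation below are invented numbers for illustration only, not values from the paper.

```python
# Hypothetical illustration: converting a raw MADRS drug-placebo
# difference into a standardized effect size.

def standardized_effect_size(mean_difference: float, pooled_sd: float) -> float:
    """Cohen's d-style effect size: raw difference scaled by the SD."""
    return mean_difference / pooled_sd

def interpret(d: float) -> str:
    """The rough benchmarks described in the text."""
    d = abs(d)
    if d >= 0.75:
        return "strong"
    if d >= 0.50:
        return "moderate"
    if d >= 0.25:
        return "weak"
    return "negligible"

# e.g., a made-up 3.5-point difference with a pooled SD of 9 points
d = standardized_effect_size(3.5, 9.0)
print(round(d, 2), interpret(d))  # → 0.39 weak
```

The point of the standardization is exactly what the column in the paper provides: it lets you compare drug effects measured on different rating scales on one common footing.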

The MADRS Remission data is similar: Dose response curve? Not so much. And in the US sites, nothing achieves significance [p<0.05] or has an NNT or an Odds Ratio that is in the range of a clinically solid antidepressant.
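For reference, both remission metrics are easy to compute from the raw rates. The rates below are invented for illustration, not taken from the paper.

```python
# Hypothetical illustration of the two remission metrics mentioned above.

def nnt(rate_drug: float, rate_placebo: float) -> float:
    """Number Needed to Treat: 1 / absolute risk difference."""
    return 1.0 / (rate_drug - rate_placebo)

def odds_ratio(rate_drug: float, rate_placebo: float) -> float:
    """Odds of remission on drug versus placebo."""
    odds_drug = rate_drug / (1.0 - rate_drug)
    odds_placebo = rate_placebo / (1.0 - rate_placebo)
    return odds_drug / odds_placebo

# e.g., a made-up 35% remission on drug versus 28% on placebo
print(round(nnt(0.35, 0.28)))            # → 14
print(round(odds_ratio(0.35, 0.28), 2))  # → 1.38
```

An NNT of 14 means treating roughly fourteen patients to get one extra remission over placebo; a solid antidepressant effect would show a much smaller NNT [and a larger odds ratio].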

Is it fair to imply as I have done here that the US data is more reliable? I know that’s sure what I think. But my real point in redoing these figures is to show that the way the data is presented can be [and often is] easily used to guide the reader towards some preferred conclusion. Omission is another way to lead the reader. What’s missing here is that six of these studies had active comparators, included in some of the other papers listed above. In this paper, they explain why they left them out in the text…
This meta-analysis centers on the comparison between vortioxetine and placebo in the 11 individual studies and does not evaluate differences between vortioxetine and the active references [duloxetine and venlafaxine XR]. The results of the active references can be found in the publications for the individual studies. In two of the previous meta-analyses of the vortioxetine data, direct comparisons between vortioxetine and the active reference were included. Direct comparison of vortioxetine and the active reference is not appropriate, as the individual studies were not designed or powered to enable this comparison. Rather, the rationale for including an active reference in these studies was for the internal validation of the study design [i.e., assay sensitivity]. To evaluate the efficacy of vortioxetine relative to another antidepressant would require a study that is specifically designed for that purpose, that is, an active-comparator study. In addition, in the six studies that include an active reference, patients were excluded – for ethical reasons – if they had known hypersensitivity or a history of lack of response to previous treatment with the active reference, which introduces the potential for bias in favor of the active reference.
… which is baloney. This plot of those comparisons from Cosgrove et al [see publication bias III – close encounters of the second kind…] is the likely explanation for why the active comparators were omitted. Vortioxetine just doesn’t come out very well:

It’s sort of unusual to see an industry-produced meta-analysis of their own clinical trials. I’ve given just a few examples where the way things are displayed or omitted falsely inflates the overall picture of how the drug performs. There are others, but even without the window dressing, this is a weak antidepressant on the best of days [if that]. I would speculate that this industry-created meta-analysis is intended to neutralize the less than enthusiastic findings of the independent meta-analyses.

We’re so used to these industry prepared, ghost managed papers that there are some aspects of these articles that we don’t even notice. All along the way, when there’s something that looks negative, the narrative explains it away, as in the omission of the active comparators. They’re written and presented aiming towards a conclusion rather than trying to clearly present the facts and allowing the reader to reach their own conclusions. Also, it’s worth noticing that this article is published in European Neuropsychopharmacology, which is the publication of the European College of Neuropsychopharmacology, a scientific organization – a place where one might expect the most rigorous of scientific papers to appear rather than what looks to be a ghost-managed commercial advertisement…
Mickey @ 11:41 PM

“that’ll preach”…

Posted on Wednesday 25 May 2016

My good friend Andy died a couple of years back. By education, he was a minister, but he did other things instead, mostly good. Maybe he’d left the pulpit, but his way of saying things betrayed his history, so when he ran across something that really mattered or was unusually right, he’d smile and say "That’ll preach!" Reading this editorial, I heard his voice…
There are better solutions to the “reproducibility crisis” in research
by Eric J. Topol
British Medical Journal. 2016 353:i2770.

Money back guarantees are generally unheard of in biomedicine and healthcare. Recently, the US provider Geisinger Health System, in Pennsylvania, started a programme to give patients their money back if they were dissatisfied. That came as quite a surprise. Soon thereafter, the chief medical officer at Merck launched an even bigger one, proposing an “incentive-based approach” to non-reproducible results — what he termed a “reproducibility crisis” that “threatens the entire biomedical research enterprise.”

The problem of irreproducibility in biomedical research is real and has been emphasised in multiple reports.  In the same vein, the retraction of academic papers has been rising, attributable, in nearly equal parts, to irreproducible results or data that have been falsified. But this problem is not confined to basic science or animal model work from academic laboratories. Clinical trials, the final common pathway for the validation and approval of new drugs, have been plagued with serious drawbacks.

The bad science in clinical trials has been well documented and includes selective publication of positive results, data dredging, P hacking, HARKing, and changing the outcomes that were prespecified at the beginning of the study [below]. Indeed, the high prevalence of switching outcomes in drug industry funded trials led Ben Goldacre and colleagues at the University of Oxford to organise COMPare, an initiative to track this considerable problem. Furthermore, the disparity between what appears in peer reviewed journals and what has been filed with regulatory agencies is long standing and unacceptable.
    Bad science
  • Data dredging — Mining a dataset with innumerable unplanned analyses
  • P hacking — Repetitively analysing data in ways not prespecified to find a significant P value
  • HARKing (hypothesising after the results are known) — Retrofitting the hypothesis after the results are known to portray an exploratory, retrospective analysis as if it was prospectively declared
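A toy simulation makes the cost of those practices concrete. Under a true null, one prespecified test yields p<0.05 about 5% of the time; hunting across several unplanned analyses inflates the false-positive rate considerably. The sketch below just draws uniform p-values, which is what a valid test produces when nothing is going on – the numbers are mine, purely illustrative.

```python
# Toy simulation of why data dredging / P hacking matters:
# taking the best of several unplanned looks inflates false positives.

import random

def simulate(n_experiments: int, n_looks: int, seed: int = 1) -> float:
    """Fraction of null experiments where at least one of n_looks
    p-values (uniform under the null) dips below 0.05."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_experiments):
        if min(rng.random() for _ in range(n_looks)) < 0.05:
            hits += 1
    return hits / n_experiments

print(round(simulate(100_000, 1), 3))  # ≈ 0.05 with one planned test
print(round(simulate(100_000, 5), 3))  # ≈ 0.23 with five unplanned looks
```

Five bites at the apple, and a drug with no effect at all "works" nearly a quarter of the time – which is the whole argument for prespecification.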
In case you don’t recognize the name, Eric Topol is the Cardiologist who started the ball rolling that exposed the dangers of Vioxx®. He later left the Cleveland Clinic in a conflict over corporate influences and now directs the Scripps Institute [among other things].
Transparency is key

What is missing is the deep commitment—across academia and the life science industry—for open science and open data. Everyone asks for accountability of research findings, which can be vastly promoted by making them fully transparent. But compliance is poor. Even the mandatory requirement for publishing results on Clinicaltrials.gov within two years was evaded for 87% of 4347 clinical trials in academic centres.

When we start to see all the protocol, prespecified hypotheses, and raw data available for review, along with full disclosure of methods and analyses and what, if anything, changed along the course of experiments, be it at the bench or in clinical trials, we’ll have made substantive progress. A promising, low cost digital solution exists to capture all of the data and promote trust and reproducibility in biomedical research. Use of blockchain technology has recently been shown to provide an immutable ledger of every step in a clinical research protocol, and this could easily be adapted to…

Until we develop the right system, we don’t need or want money back guarantees on research reproducibility. But I’d be interested to pick up on that refund offer for my medications or any medical care that doesn’t work.
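As an aside, the "immutable ledger" Topol alludes to isn’t magic. The core idea can be sketched in a few lines: each protocol event is hashed together with the previous entry’s hash, so a retroactive edit [say, an outcome switch] breaks every subsequent link. This is purely illustrative, my own toy version, not the system described in the editorial.

```python
# Toy hash-chained ledger: tampering with any past entry is detectable.

import hashlib
import json

def append_entry(chain: list, event: dict) -> list:
    """Add an event, hashing it together with the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify(chain: list) -> bool:
    """Recompute every hash; any retroactive edit breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps({"event": entry["event"], "prev": prev_hash},
                             sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append_entry(chain, {"step": "protocol registered", "outcome": "MADRS change"})
append_entry(chain, {"step": "enrollment opened"})
print(verify(chain))                  # → True
chain[0]["event"]["outcome"] = "CGI"  # a retroactive outcome switch...
print(verify(chain))                  # → False: the tampering is visible
```

Which is, of course, exactly the property you’d want against the outcome switching in the "bad science" list above.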
Maybe I should just say "Amen, brother!" That’s all Andy would’ve said…
Mickey @ 5:24 PM

STOP, LOOK, and LISTEN…

Posted on Tuesday 24 May 2016

David Healy’s blog has a guest post called The Pill That Steals Lives: One woman’s terrifying journey to discover the truth about antidepressants by Katinka Blackford Newman – an introduction to a book about her experiences with psychiatric medications due out in early July. It’s one of those all too familiar stories where a negative reaction to one medication was interpreted as an illness that was treated by adding other medications in an escalating cycle:
It had started when I had hit a wall of despair while going through a divorce. Sleepless nights took me to a psychiatrist who prescribed an antidepressant. Within hours I was hallucinating, believed I had attacked my children and in fact attacked myself with a knife. I ended up in a private hospital where doctors clearly thought I had a screw loose when I told them I was being filmed and that there was a suicide pact with God. The psychosis ended when I said I wanted to stop taking the escitalopram but doctors insisted I take more pills. This began a terrible decline where I couldn’t leave the house, dress myself, finish a sentence. But the worst thing of all was that I couldn’t feel love for my children, Lily and Oscar, who were 10 and 11 at the time. At the end of a year I was about to end it all. As a last resort I tried to get myself readmitted to the same private hospital, but my insurance had run out. And that was how I ended up sectioned at this NHS hospital that had made the decision to take me off all the drugs [Lithium, Olanzapine, Sertraline, Prozac, Lamotrigine]. I was climbing the walls, screaming, shouting, and begging my family to get me out of there. If I’d been suicidal while on the drugs, withdrawal made me far worse…
It looks to be an interesting book [and advertises a revelation along the way]. But that’s not why it’s here. It’s this:
[Lithium, Olanzapine, Sertraline, Prozac, Lamotrigine]
No matter what you believe about mental illness [disease or not] or about psychopharmacology [disease specific or symptomatic], it’s hard for me to imagine when…
[two antidepressants plus two mood stabilizers plus an atypical antipsychotic]
…would ever be an appropriate drug regimen for any condition I know of. What illness does that treat? How about with the new NpN terminology they’re so excited about? Would renaming it …
[glutamate with yet to be determined enzyme interaction plus a dopamine, serotonin receptor antagonist [D2, 5-HT2] plus a serotonin reuptake inhibitor [SERT] plus a glutamate voltage-gated sodium channel blocker]
…make things any better? [one shudders to think what it would become in RDoC talk]. I’m being sarcastic and I shouldn’t be, because this is a deadly serious point. A case like this transcends the usual discussions about efficacy or indication. There’s just no rational justification for this drug cocktail that I can think of for anything. And I’ve seen cases like this over and over. I recently catalogued such a journey [good news bulletin… see case number 3 and its links].

The way this happens is that a patient gets started on a medication and things go badly. So other medicines are tried without stopping the last. As the patient continues to go downhill, the medications get added irrationally. There may be akathisia and/or withdrawal mixed in with the medication effects. It ends like the story I think I’ll read when the book arrives – an impossible situation where the patient still may or may not have the problem they came with, is living in an obtunded mental state from all the medications, and has the added prospect of one or more withdrawal syndromes to face. One unholy mess!

The solution when a case is headed this way is to STOP adding things and gradually taper off of all medication, using something like a benzodiazepine for distress if you have to. LOOK at the patient as if it’s a brand new case. Perhaps get a consult from someone you respect, and LISTEN to what they say and what the patient says. Unfortunately, the people who get into such messes are reluctant to let them see the light of day, so finally somebody else [family?] intervenes. And when you see a case like this – somebody on five medicines who’s getting or has already gotten worse – always think medication effects until proven otherwise.
Mickey @ 5:19 PM

a thorny problem, this one…

Posted on Monday 23 May 2016

Reading through Sergio Sismondo’s Ghosts in the Machine was confirming, validating my own impression that there is a secretive, commercially driven enterprise manipulating the processes by which we know about medications. I knew it was there, but I really didn’t know it was so ubiquitous, nor did I know it was a profession. But there were parts of his essay I just didn’t get. This was one of them:
Implicit in many of the exposes of ghostwriting in the medical science and popular literature is an assumption that ghostwritten science is formally inferior. Given the very high acceptance rates of ghost-managed papers, that assumption is questionable in general – though it may be right about important cases. Pharmaceutical company research, analysis, and writing results in knowledge. It is not different from other medical research, analysis, and writing in the fact that companies and their agents make choices in the running of clinical trials, in interpretations of data and established medical science, and in the messages they convey in papers and presentations. This point is straightforwardly suggested by STS’s longstanding commitment to symmetry. It is justified by the results of canonical studies that have shown how science is choice-laden. Thus, the work of pharmaceutical companies to produce research and place it prominently in medical journals is not merely a corporate use of the patina of science. It is science, though it is science done in a new, corporate mode.
It was one of those paragraphs you read over and over with the same question-mark look on your face. And apparently I’m not the only one [see Leemon McHenry’s Ghosts in the Machine: Comment on Sismondo]:
The commercial medical science that has created the ghostwriting industry is a corruption of science, and not merely as Sismondo puts it "science done in a new, corporate mode."
Which is what I think. Back to Sismondo, there was a part of Ghosts in the Machine that I really liked reading:
Everybody systematically connected with publication planning wants established formal rules of conduct. As sub-contractors, publication planners would like to reduce uncertainty, so that they can plan ahead and so that they can produce exactly the papers that will satisfy all of the different parties with whom they interact. Both publication planners and pharmaceutical companies want formal rules to guide and cover their work, to legitimize it so that its exposure does not amount to scandal…
In the course of life, I spent almost a decade as an educator/administrator. And as much as I enjoyed those years, there was one part I had no problem getting away from. It was hearing the question, "What is your policy about …?" To me, that meant "give me a rule so I’ll know how close to the line I can get without consequences" [and then the search for the loopholes began]. Here’s another solution that we might initially think we could agree with:
by SERGIO SISMONDO AND MATHIEU DOUCET
Bioethics. 2010 24[6]:273–283.

It is by now no secret that some scientific articles are ghost authored – that is, written by someone other than the person whose name appears at the top of the article. Ghost authorship, however, is only one sort of ghosting. In this article, we present evidence that pharmaceutical companies engage in the ghost management of the scientific literature, by controlling or shaping several crucial steps in the research, writing, and publication of scientific articles. Ghost management allows the pharmaceutical industry to shape the literature in ways that serve its interests.

This article aims to reinforce and expand publication ethics as an important area of concern for bioethics. Since ghost-managed research is primarily undertaken in the interests of marketing, large quantities of medical research violate not just publication norms but also research ethics. Much of this research involves human subjects, and yet is performed not primarily to increase knowledge for broad human benefit, but to disseminate results in the service of profits. Those who sponsor, manage, conduct, and publish such research therefore behave unethically, since they put patients at risk without justification. This leads us to a strong conclusion: if medical journals want to ensure that the research they publish is ethically sound, they should not publish articles that are commercially sponsored.

But then the doubts come creeping in. How would journals be financed? Industry is the only act in town able to finance trials. Where would physicians get information? Package inserts? Who says the FDA Approval tells us enough to make clinical decisions? Who says the FDA is incorruptible? And so on, and so on, and scooby-dooby-do…

A thorny problem, this one…
Mickey @ 2:09 PM

philosophic insomnia…

Posted on Sunday 22 May 2016

"In the ghost management of medical research by pharmaceutical companies, we have a novel model of science. This is corporate science, done by many hidden workers, performed for marketing purposes, and drawing its authority from traditional academic science. The high commercial stakes mean that all of the parties connected with this new science can find reasons or be induced to participate, support, and steadily normalize it. It is likely here to stay for a while."
Sergio Sismondo

I may claim the right to be boring, but I’m sure not bored with this topic of industry invading academic medicine and the medical literature. As a late-comer, I still happen on to unexplored corners all the time that open up a whole new cache of things to think about. That term, ghost management, is one of those unexplored corners. Without knowing it exactly, I’ve been running into an example [Vortioxetine] of publication planning for a while now. There’s a ghost behind this machine:

MAY 2013:   beyond unacceptable…
way past time…
academic?…
the squeaky wheel…
DEC 2014: the recommendation?…
FEB-APR 2016: indications…
more vortioxetine story…
the empty pipeline…
a parable…
still going strong?…
MAY 2016: publication bias I…
publication bias II…
publication bias III…
publication bias IV…
publication bias – a post-script…

And so to the work of Canadian Philosopher/Sociologist Sergio Sismondo who has enough publications on this topic to be named an honorary KOL himself. Here’s a long one that can be read full text on the Internet – Ghosts in the Machine 2009. His work is one of those unexplored corners that occupied my Sunday.

I once had a friend who was working on his PhD thesis on Husserl at Duke. One night, he left Durham and drove to Memphis and enrolled in Medical School, never looking back. He later said something like, "That night, I started to finally ‘get it.’ And I realized that if I continued in philosophy and ‘got it’ much more, I’d be too depressed to ever get any sleep." Sismondo’s work has some of that flavor. His command of ghost management, publication planning, and the stealth goings on in the pharmaceutical industry is encyclopedic and illuminating, but I wonder how he sleeps at night…
Mickey @ 8:33 PM

publication bias – a post-script…

Posted on Sunday 22 May 2016

Wading around in the meta-analyses comparing antidepressant efficacy and safety can be like Kafka’s The Trial or McCarthy’s The Road. You’re never sure where you are or if you’ve arrived anywhere. One of the most quoted versions is Cipriani et al’s Comparative efficacy and acceptability of 12 new-generation antidepressants: a multiple-treatments meta-analysis that produced this chart.

…comparing efficacy and acceptability. The abscissa [horizontal axis] is the rank from 1st down to 12th and the ordinate [vertical axis] is the probability of each rank for the drugs. Now recall this forest plot of Vortioxetine versus comparators [Duloxetine] from Cosgrove et al:
They went head to head with one of their least effective rivals – and lost. New isn’t necessarily better…
Mickey @ 11:15 AM

publication bias IV – close encounters of the second kind…

Posted on Sunday 22 May 2016

I doubt that my discussion of the difficulty getting contrarian studies published would stand up in Evidence-Based Medicine court – too few examples to be called anything but anecdotes. But if you asked the few investigators who’ve given it a shot, I’d bet p would approach (1 ÷ ∞). Consider, for example, this contentious [and telling] response to Jureidini et al’s submission:

from Dr. Greenblatt [Editor of the Journal of Clinical Psychopharmacology]


We do not share your concerns about what you term "ghostwriting" — more properly described as manuscript preparation assistance. This is just another form of assistance or collaborative effort in the course of research — similar to technical assistance with measurement of plasma drug levels, or statistical consultant input in analysis of data. In the end, the listed authors take responsibility for the content of the manuscript, and that is what matters.
We also are not worried about the participation of the pharmaceutical manufacturer in the execution of the study, or the preparation of the manuscript. This is to be expected — they are the sponsors, and they have the most knowledge about the candidate drug and the data… We also note that we ARE concerned that you are serving as a reimbursed expert witness on behalf of the plaintiffs, proposing in the present manuscript what we expect are similar arguments as presented in the context of the litigation.
With all of that said, we certainly could reconsider a revised manuscript in which the focus was ONLY on the scientific content of the paper(s) in question. If you disagree with the scientific content or its interpretation, that can be presented, but without the court documents and internal E-mails, and without accusations of malfeasance, misrepresentation, manipulation, whitewashing, complicity, etc. The issue of manuscript preparation assistance is also not in the picture…

Greenblatt’s response misses the central point that by ignoring protocol directed exclusions for the primary outcome variable and changing a secondary outcome altogether, the analysis falsely reported a negative study as positive. Perhaps that’s what he didn’t want to hear.

But even in the BMJ response to Le Noury et al where the journal was sympathetic to the reanalysis, they performed their own audit of the harm findings because of similar concern about COI, and asked for additional post hoc efficacy analyses [multiple imputation]. To borrow a $5 word from theological scholarship:
   her·me·neu·tic
     noun
     1. a method or theory of interpretation
The RCT submissions are viewed with a hermeneutic of acceptance [innocent until proven guilty], and the contrarian submissions are evaluated under a hermeneutic of suspicion [guilty until proven innocent]. There’s not much a contrarian author can do about that. The journals are independent entities and they set their own sails.

Publication bias of the first kind [not publishing negative studies] presents a false view of a drug by inflating the drug’s profile. Publication bias of this second kind [not publishing or under-publishing contrarian articles – criticism] achieves the same result through a different mechanism. The first is calculated, intentional. The second is more insidious, and often has to do with the general climate. Dr. Greenblatt’s comments aren’t about the specific drug study criticized by Jureidini et al. He’s reacting to their implied criticism of the system that he, himself, is a part of. He conflates the legal challenge that produced the documents used in Jureidini et al and the criticism of the specific drug study itself, and lashes out at both in a single breath [rejecting the article out of hand].

Cosgrove et al has had a similar fate so far. The paper not only presents a right-sized view of Vortioxetine and exposes the mechanisms used to inflate it in the first place, it also looks at the structure of the system that allowed that to happen and suggests reform. We don’t know the details, but the net result is that their paper is buried and largely inaccessible. I doubt anyone thought, "Let’s block this paper from the general discourse" along the way. But that’s what happened. The article says "Your system needs to change" to an audience that is fine with the system as it is. I agree with these authors that the system needs to change. And though my recipe for change might differ somewhat, I would welcome these opinions to the conversation. But that’s not the mainstream point of view – unfortunately reflected in their article’s fate.

I first heard the phrase, "Sunlight is the best disinfectant" in Ben Goldacre’s initial TED Talk [see something of value…]. And it is dead-on accurate. It’s the motivating force behind AllTrials and Data Transparency, with articles like Restoring Study 329… and The citalopram CIT-MD-18 pediatric depression trial…, blogs like this, etc. I think the main problem with Cosgrove et al right now is not the Impact Factor of the journal where it’s published, but the fact that it’s not easily accessible on-line. Negotiating ResearchGate is not the same as just clicking a link. The other articles mentioned here are on-line, and I would hope that theirs will be too at some point. It could use the kind of sunlight only the Internet can provide…
Mickey @ 7:00 AM

publication bias III – close encounters of the second kind…

Posted on Friday 20 May 2016


by Lisa Cosgrove, Steven Vannoy, Barbara Mintzes, and Allen Shaughnessy
Accountability in Research. 2016 23[5]:257-279.

The relationships among academe, publishing, and industry can facilitate commercial bias in how drug efficacy and safety data are obtained, interpreted, and presented to regulatory bodies and prescribers. Through a critique of published and unpublished trials submitted to the Food and Drug Administration [FDA] and the European Medicines Agency [EMA] for approval of a new antidepressant, vortioxetine, we present a case study of the "ghost management" of the information delivery process. We argue that currently accepted practices undermine regulatory safeguards aimed at protecting the public from unsafe or ineffective medicines. The economies of influence that may intentionally and unintentionally produce evidence-biased – rather than evidence-based – medicine are identified. This is not a simple story of author financial conflicts of interest, but rather a complex tale of ghost management of the entire process of bringing a drug to market. This case study shows how weak regulatory policies allow for design choices and reporting strategies that can make marginal products look novel, more effective, and safer than they are, and how the selective and imbalanced reporting of clinical trial data in medical journals results in the marketing of expensive "me-too" drugs with questionable risk/benefit profiles. We offer solutions for neutralizing these economies of influence.

This article differs from the others in this series in that it is primarily a perspective/opinion piece about the intersection of industry, the medical literature, and the regulatory processes that uses the recent FDA approval of Vortioxetine [Brintellix®] as an exemplar. It’s a well-framed discussion of a complex issue, offering suggestions for reform. While some may not agree with their emphases or solutions, it’s an excellent "big picture" article – a thoughtful contribution. But specific to the Vortioxetine theme from the last post [publication bias II – close encounters of the second kind…], their research into the Vortioxetine RCTs is definitely value-added to the other reports. And it’s another example of a contrarian article that they had a really hard time getting published.

Cosgrove et al looked at all the Vortioxetine trials, published and unpublished, and introduced a term I hadn’t run across before – Ghost Management. Just saying an article is ghost-written doesn’t tell the whole story, because the ghost-writer is employed by the RCT’s sponsor. Ghost Management is much more accurate in that it conveys the extent of the sponsor’s control over the entire RCT process. I found some of their findings jarring, even after my years of reading these reports. For example, the extent of industry COI:
Vortioxetine is typical of virtually all new drugs in that the pre-market RCTs were all sponsored by the manufacturer. However, congruent with commercialized publication planning, every author in all of the published short term RCTs, as well as one longer term randomized drug withdrawal study, had significant commercial ties to the manufacturer, well beyond research funding [e.g., they were employees, participated on advisory boards, and/or had received consulting monies or honoraria]…

Below is a summary of the industry-publishing relationships of the eight published studies submitted to the FDA and one additional study submitted to the EMA that was published:
  • In eleven of the thirteen publications, the majority of authors were employees of the manufacturer, and in four of the thirteen published studies, all authors were company employees.
  • In all of the trial reports, the authors explicitly thank an employee of the manufacturer for ‘assistance in the preparation and writing’ of the manuscript or note that assistance with preparing and writing the article was provided by an employee.
  • In nine of the thirteen published articles, the following issue was disclosed: ‘[the manufacturer] was involved in the study design, in the collection, analysis and interpretation of data, and in the writing of the report.’
  • The thirteen published studies were published in seven academic journals. The editors of five of these journals had financial ties to vortioxetine’s manufacturer…
It had never occurred to me to look at the COI of the journal editors. Another finding – one of their suggestions is that the FDA consider all trials instead of the best ones, and take the performance against comparators into account. They produced this meta-analysis across trials:
At this point, my usual M.O., having highlighted a few "teasers," would be to suggest you read the whole article yourself. There’s plenty more I haven’t mentioned. But, in this case, you can’t read it. I can’t either. My usual resource for access is through my faculty connection to our library. But although the journal is indexed, it offers no provision to read the article [unusual]. On-line, it’s not Open Access but behind a $41 pay wall. Making it Open Access would’ve cost the authors ~$3000, a mighty stiff fee for any academic [but trivial for PHARMA]. I had written Dr. Cosgrove to ask if she had cataloged or written about their Hegira looking for a publisher. She hadn’t, but she mentioned in passing that they were disappointed with the response to their article. I don’t doubt that, since the only people who can read it are subscribers to Accountability in Research, rich people, or those who have written the authors for a copy. It’s hidden under a bushel, or so I thought. But then I discovered ResearchGate! And I got to the paper’s full text even before I joined [which I later did]. That’s where that [full text on-line ?] link came from. Definitely check out ResearchGate if you haven’t already. And however you get this article, it’s well worth the trouble.

So back to the main theme here. This is a solid article that shouldn’t be hard to publish. Credentialed authors. Well researched. Thoughtful analyses. Certainly relevant. But it’s contrarian. It swims upstream against the conclusions the original authors presented. And those articles are [to say it yet again] hot potatoes, just like Jureidini, Amsterdam, and McHenry’s paper. In this case, the research on the Vortioxetine [Brintellix®] trials adds an essential dimension to those papers mentioned in the last post. This is just not an Impact Factor 0.826 article. So is this a form of publication bias? Close enough for me, but I’m open to any other name as long as it says that it’s not right. Well documented dissent is [and has always been] an integral part of the scientific process…

Publication bias is the term for what occurs whenever the research that appears in the published literature is systematically unrepresentative of the population of completed studies. Simply put, when the research that is readily available differs in its results from the results of all the research that has been done in an area, readers and reviewers of that research are in danger of drawing the wrong conclusion about what that body of research shows…
Mickey @ 2:33 PM

publication bias II – close encounters of the second kind…

Posted on Friday 20 May 2016

With the Paxil Study 329 paper, our problem getting it in print didn’t have to do with journal shopping, it had to do with a tough love review process and a year of uncertainty that went with it. It was a top journal [British Medical Journal Impact Factor 17.445] and I’m glad it’s there. The recent Citalopram paper did have to do a lot of journal hopping [see background notes], moving from the Journal of Affective Disorders [Impact Factor 3.383] to JAMA Psychiatry [Impact Factor 12.008] to the Acta Psychiatrica Scandinavica [Impact Factor 5.605] to the International Journal of Risk and Safety in Medicine [Impact Factor 0.86]. Quite a journey.

A few months back, I was writing about an elaborate KOL-rich campaign by Takeda and Lundbeck to get FDA Approval for Vortioxetine [Brintellix®] in Cognitive Dysfunction in Major Depressive Disorder [see indications… and more vortioxetine story…]. I thought it was a commercially driven attempt [indication creep] and the science was woefully lacking. I was pleased that the FDA later agreed [a parable…] and didn’t approve the indication. Prior to that, my only encounter with Vortioxetine [Brintellix®] was an industry-produced review article in the Journal of Clinical Psychiatry [Impact Factor 5.498]:

by Alan F. Schatzberg, Pierre Blier, Larry Culpepper, Rakesh Jain, George I. Papakostas, and Michael E. Thase.
Journal of Clinical Psychiatry 2014 75[12]:1411–1418.

Six clinicians provide an overview of the serotonergic antidepressant vortioxetine, which was recently approved for the treatment of major depressive disorder in adults. They discuss the pharmacologic profile and receptor-mediated effects of vortioxetine in relation to potential outcomes. Additionally, they summarize the clinical trials, which demonstrate vortioxetine’s efficacy, and discuss findings related to safety and tolerability that have high relevance to patient compliance.
Speaking of KOL-rich, this was simply the worst article I’ve ever seen in a medical journal. It’s hard to imagine that they published it, but I’ve had my say about that [see the recommendation?…]. While I was in the Vortioxetine neighborhood, I ran across an article that had been accepted but not yet published by Lisa Cosgrove and colleagues that looked interesting. I wrote them about it and they kindly sent me an advance copy:
by Lisa Cosgrove, Steven Vannoy, Barbara Mintzes, and Allen Shaughnessy
Dr. Cosgrove was well known to me for running down the extent of Conflicts of Interest among members of the DSM-5 Task Force [see must be crazy…] and as a coauthor of the recent book Psychiatry Under the Influence: Institutional Corruption, Social Injury, and Prescriptions for Reform. In sending the article, she mentioned that they had a very difficult time getting it published. I thought it was an important article and made a note to blog about it later when it was published and fully available in Accountability in Research [Impact Factor 0.826]. It includes a critical look at the Vortioxetine Clinical Trials that I’ll mention later. In the references, I found a meta-analysis of Vortioxetine published in the Journal of Psychiatry and Neuroscience [Impact Factor 5.86]:
by Chi-Un Pae, Sheng-Min Wang, Changsu Han, Soo-Jung Lee, Ashwin A. Patkar, Praksh S. Masand, and Alessandro Serretti
Journal of Psychiatry and Neuroscience. 2015 40[3]: 174–186.

Background: Vortioxetine was approved by the U.S. Food and Drug Administration [FDA] in September 2013 for treating major depressive disorder [MDD]. Thus far, a number of randomized, double-blind, placebo-controlled clinical trials [RCTs] of vortioxetine have been conducted in patients with MDD. We performed a meta-analysis to increase the statistical power of these studies and enhance our current understanding of the role of vortioxetine in the treatment of MDD.
Methods: We performed an extensive search of databases and the clinical trial registry. The mean change in total scores on the 24-item Hamilton Rating Scale for Depression [HAM-D] and the Montgomery–Åsberg Depression Rating Scale [MADRS] from the baseline were the primary outcome measures. The secondary efficacy measures were the response and remission rates, as defined by a 50% or greater reduction in HAM-D/MADRS total scores and as a score of 10 or less in the MADRS and 7 or less in the HAM-D total scores at the end of treatment.
Results: We included 7 published and 5 unpublished short-term [6–12 wk] RCTs in our meta-analysis. Vortioxetine was significantly more effective than placebo, with an effect size [standardized mean difference [SMD]] of −0.217 [95% confidence interval [CI] −0.313 to −0.122] and with odds ratios [ORs] for response and remission of 1.652 [95% CI 1.321 to 2.067] and 1.399 [95% CI 1.104 to 1.773], respectively. Those treated with vortioxetine did not differ significantly from those treated with selective norepinephrine reuptake inhibitors/agomelatine with regard to the SMD of the primary outcome measure [0.081, −0.062 to 0.223] or for response [OR 0.815, 95% CI 0.585 to 1.135] and remission [OR 0.843, 95% CI 0.575 to 1.238] rates. Discontinuation owing to lack of efficacy [OR 0.541, 95% CI 0.308 to 0.950] was significantly less common among those treated with vortioxetine than among those who received placebo, whereas discontinuation owing to adverse events [AEs; OR 1.530, 95% CI 1.144 to 2.047] was significantly more common among those treated with vortioxetine than among those receiving placebo. There was no significant difference in discontinuation rates between vortioxetine and comparators owing to inefficacy [OR 0.983, 95% CI 0.585 to 1.650], whereas discontinuation owing to AEs was significantly less common in the vortioxetine than in the comparator group [OR 0.728, 95% CI 0.554 to 0.957].
Limitations: Studies examining the role of vortioxetine in the treatment of MDD are limited.
Conclusion: Although our results suggest that vortioxetine may be an effective treatment option for MDD, they should be interpreted and translated into clinical practice with caution, as the meta-analysis was based on a limited number of heterogeneous RCTs.


[Figure: Effect Size of the Primary Outcome Variable, adapted for blog]
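For readers curious about the mechanics behind a pooled effect size like the SMD reported above, here is a minimal inverse-variance fixed-effect pooling sketch. The study numbers are invented for illustration; they are not the actual vortioxetine trial data:

```python
import math

def pool_fixed_effect(effects, ses):
    """Inverse-variance fixed-effect pooling of study-level effect sizes.

    Each study is weighted by 1/SE^2, so larger (more precise) trials
    dominate the pooled estimate.
    """
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    return pooled, lo, hi

# Hypothetical SMDs and standard errors for three trials [illustrative only]
effects = [-0.30, -0.15, -0.25]
ses = [0.10, 0.12, 0.08]
pooled, lo, hi = pool_fixed_effect(effects, ses)
print(f"pooled SMD {pooled:.3f} [95% CI {lo:.3f} to {hi:.3f}]")
```

Note the weighting: the trial with SE 0.08 carries more than twice the weight of the one with SE 0.12, which is why pooled results are pulled toward the largest studies.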

I’ve obviously got two themes going in this post. One is the articles about the RCTs that led the FDA to approve Vortioxetine [Brintellix®] for the treatment of Major Depressive Disorder. The other has to do with what I’m calling publication bias of the second kind – the difficulty getting contrarian articles about Clinical Trials published and into the general dialog. I’ll get to the Cosgrove et al article in detail in the next post, but for now just say that it is in the contrarian category. So, we have an industry-created [incredible] review article in a journal with an Impact Factor of 5.498, a credible meta-analysis in a journal with an Impact Factor of 5.86, and the Cosgrove et al article that ended up in a journal with an Impact Factor of 0.826. In my next post in this series, I’m going to continue both themes and claim that this disparity in journal ratings deserves our attention…
Mickey @ 8:30 AM

publication bias I – close encounters of the second kind…

Posted on Thursday 19 May 2016

Publication bias is the term for what occurs whenever the research that appears in the published literature is systematically unrepresentative of the population of completed studies. Simply put, when the research that is readily available differs in its results from the results of all the research that has been done in an area, readers and reviewers of that research are in danger of drawing the wrong conclusion about what that body of research shows…

The usual version of publication bias is the practice of simply not publishing negative clinical trials. The result is obvious. The positive trials "average out" to a falsely inflated result and the drug looks better than it actually is. We’ve pretty much caught on to that ploy by requiring all trials be registered in advance [clinicaltrials.gov], developing tools to detect missing trials [funnel plots], and doing meta-analyses that include unpublished trials [the Tamiflu saga]. A variant would be publishing Paxil Study 329 which was claimed to be positive, but not publishing two negative trials of the same drug [Paxil Study 377 and Paxil Study 701] until after the patent had run out [see paxil in adolescents: “five easy pieces”… and study 329 x – “it wasn’t sin – it was spin”…]. This post isn’t about that kind of publication bias. It’s about a getting published version of publication bias.
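One of those missing-trial tools, the funnel plot, is usually checked numerically with Egger’s regression: each trial’s standardized effect is regressed on its precision, and an intercept far from zero flags the asymmetry left behind when small negative trials go unpublished. A minimal sketch with invented numbers [not real trial data]:

```python
def egger_intercept(effects, ses):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses standardized effect (effect/SE) on precision (1/SE);
    an intercept far from zero suggests small-study asymmetry,
    e.g. unpublished small negative trials.
    """
    y = [e / s for e, s in zip(effects, ses)]  # standardized effects
    x = [1 / s for s in ses]                   # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx  # the intercept

# Symmetric [hypothetical] trials: effects don't depend on trial size
sym = egger_intercept([-0.20, -0.22, -0.18, -0.21], [0.05, 0.10, 0.15, 0.20])
# Asymmetric set: the small trials [large SE] report much bigger effects,
# the pattern expected when small negative trials are missing
asym = egger_intercept([-0.20, -0.30, -0.45, -0.60], [0.05, 0.10, 0.15, 0.20])
print(f"symmetric intercept {sym:.3f}, asymmetric intercept {asym:.3f}")
```

In the symmetric set the intercept sits near zero; in the asymmetric set it moves well away from zero, which is the signal reviewers look for before trusting a pooled estimate.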

[Note: This particular post is kind of redundant, not too different from the last one. But it’s here because I finally figured out what I wanted to say].

I was on the RIAT team that reanalyzed Paxil Study 329 [Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence]. It was the end product of over a decade of effort by armies of people from many professions all over the world working to make it happen. It was in a high impact journal [British Medical Journal] and was extremely well received. In spite of all that preparatory work and a receptive editorial staff, writing it and getting it published was a grueling experience, roughly outlined at Restoring Study 329 [in Background], and there will be more to follow. But in spite of its success, you have to ask yourself "Why did it take such a herculean effort by an army and fourteen years to get it in print?" And for that matter, "Why was I an author – a retired adult psychiatrist who writes a blog at the edge of the galaxy?" It should’ve been authored by a chairman of child psychiatry at a prestigious medical school or the president of the American Academy of Child and Adolescent Psychiatry. Actually, Study 329 should never have been published in the first place as it is written [Efficacy of Paroxetine in the Treatment of Adolescent Major Depression: A Randomized, Controlled Trial], or should have been subsequently retracted.

And now we have another article about an antidepressant clinical trial in kids, The citalopram CIT-MD-18 pediatric depression trial: Deconstruction of medical ghostwriting, data mischaracterisation and academic malfeasance. It’s authored by a heavily credentialed multidisciplinary team. In this paper, the data come from the sponsor’s own internal documents and focus on the processes involved in creating the original publication [A Randomized, Placebo-Controlled Trial of Citalopram for the Treatment of Major Depression in Children and Adolescents]. While they show the same things we documented [deviations from the a priori protocol, add-on parameters, questionable analyses, etc], they go further. Using verbatim quotes from the sponsor, they show that all of this jury-rigging of data, analysis, and presentation was done by the sponsor’s employees with the specific intent of deceiving the reader by making this negative trial look positive. And yet they had a hell of a time getting it published [see background notes]. It wasn’t turned down because it was wrong, or badly written, or poorly documented, or didn’t have proven authors. "Why did they have such a hard time getting it published?" It’s in the International Journal of Risk and Safety in Medicine, a perfectly legitimate peer reviewed medical journal, but hardly in the upper echelon. Since it was unfunded research, the journal graciously made it Open Access by waiving the fee. But thus far, it hasn’t been covered in the mainstream media. "Why not?" It is at least as important as our article, if not more so, as it documents active fraudulent behavior on the part of the sponsor.

To my mind, this is publication bias of a second kind [see a mighty hard row…], a problem getting a contrarian take on a clinical trial published and into the discourse where it belongs. It fits the opening definition to a tee. This is a form of publication bias exerted by the publishers themselves, and may not necessarily be the result of direct or even indirect involvement of the original clinical trial’s sponsor. I’m not going to speculate further about all the motivations involved here. There’s more than enough speculation flying around about this topic already. But I do want to talk about its impact, and illustrate it with a contemporary example…
Mickey @ 2:48 PM