hardly our finest hour…

Posted on Sunday 25 September 2016

When I left the faculty in the early 1980s, in the wake of the medicalization of our department after the DSM-III revolution, I didn’t think of it as leaving psychiatry [I sort of thought of it as psychiatry leaving me]. But circumstances were such that I got busy with my practice and teaching, drifting further and further from what was going on in psychiatry locally [Dr. Nemeroff’s department] and in psychiatry at large. It was 25 years later when two things woke me from my slumber – the revelations of widespread corruption in academic psychiatry [the KOLs], and volunteering in a clinic and being horrified at the medication regimens I found people on there. So I had a lot of catching up to do, luckily finding others who were willing to help. I think that the CROs [Contract Research Organizations], the Medical Writing Firms, that whole industry that entered the clinical trial scene, must have been in its infancy about the time I was going into seclusion, because I didn’t know about any of it, though part of my reason for leaving had to do with a new administration that was keen on teaming up with PHARMA [another unfamiliar term]. I think of the time from going into practice until five or six years into my retirement as my "Rip Van Winkle" period.

I periodically tell that story partly because I feel guilty for not noticing what was happening, and sometimes to explain why I never even heard terms like evidence-based medicine, RCTs, meta-analysis, systematic reviews, or even the word pharma until five or six years ago [I’m apparently a heavy sleeper]. This time, however, I have another reason. Research watchdog John Ioannidis has a new article, and it was a graph in his paper that led to my retelling that snippet of my history:
The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses
by JOHN P.A. IOANNIDIS
Milbank Quarterly. 2016 94[3]:485-514.

POLICY POINTS: Currently, there is massive production of unnecessary, misleading, and conflicted systematic reviews and meta-analyses. Instead of promoting evidence-based medicine and health care, these instruments often serve mostly as easily produced publishable units or marketing tools. Suboptimal systematic reviews and meta-analyses can be harmful given the major prestige and influence these types of studies have acquired. The publication of systematic reviews and meta-analyses should be realigned to remove biases and vested interests and to integrate them better with the primary production of evidence.
CONTEXT: Currently, most systematic reviews and meta-analyses are done retrospectively with fragmented published information. This article aims to explore the growth of published systematic reviews and meta-analyses and to estimate how often they are redundant, misleading, or serving conflicted interests.
METHODS: Data included information from PubMed surveys and from empirical evaluations of meta-analyses.
FINDINGS: Publication of systematic reviews and meta-analyses has increased rapidly. In the period January 1, 1986, to December 4, 2015, PubMed tags 266,782 items as "systematic reviews" and 58,611 as "meta-analyses." Annual publications between 1991 and 2014 increased 2,728% for systematic reviews and 2,635% for meta-analyses versus only 153% for all PubMed-indexed items. Currently, probably more systematic reviews of trials than new randomized trials are published annually. Most topics addressed by meta-analyses of randomized trials have overlapping, redundant meta-analyses; same-topic meta-analyses may exceed 20 sometimes… Many other meta-analyses have serious flaws. Of the remaining, most have weak or insufficient evidence to inform decision making. Few systematic reviews and meta-analyses are both non-misleading and useful.
CONCLUSIONS: The production of systematic reviews and meta-analyses has reached epidemic proportions. Possibly, the large majority of produced systematic reviews and meta-analyses are unnecessary, misleading, and/or conflicted.
The abscissa on the graph goes from 1986-2014 and the ordinate goes from 0-30,000 [articles/year!], red being systematic reviews and blue being meta-analyses [I wonder what the clinical trial graph would look like?]. Ioannidis obviously takes a dim view of this epidemic. He’s a smart guy, so I expect he knows what he’s talking about.

But when I looked at Ioannidis’ graph, I saw something else – a historical epoch of medicine. Is it any wonder that I didn’t know about meta-analyses and systematic reviews? There weren’t any at the time I went all Rip Van Winkle! That graph parallels a profound change in medicine – the age of evidence-based medicine, the age of managed care, the age of corporatization, the age of the clinical trial. Whatever you want to call it, it has been a distinct era. And it is hardly our finest hour.

Again, I don’t miss Ioannidis’ point that many of these meta-analyses and systematic reviews can be a way for academics to rack up publications for academic advancement without doing any original bench research or clinical studies of their own. But it’s also possible that one reason for that is that research funding is so hard to come by these days – except from corporate sponsors [with strings attached]. And another reason for the burst of secondary publications might be that there’s been so much questionable research[?] in this time frame that genuinely does need a critical second [or third] look.

So at least in psychiatry, I welcome the flurry of independent meta-analyses and systematic reviews. We’ve had a pipeline of psychotherapeutic agents steadily pouring into our landscape during the time under discussion, literally changing the direction of the specialty, and we still can’t trust our literature to tell us about either their safety or their efficacy.
The meta-analyses and systematic reviews have been our only real window into any rational understanding of these drugs. And they still have a lot to tell us. Here’s a very recent example:
by Ymkje Anna de Vries, Annelieke M. Roest, Lian Beijers, Erick H. Turner, Peter de Jonge
European Neuropsychopharmacology. 2016 Article in press.

Mickey @ 9:46 AM

clinical trials – concordance…

Posted on Thursday 22 September 2016

My preacher friend Andy had a book, The Concordance, that showed the four gospels with their different accounts of the same event aligned side by side. I liked the idea, and used it in my own teaching on a very different topic. But in this case, I’m co-opting the term to say that these four documents should be a Concordance – they should say the same thing, reach the same conclusions:

published journal article
clinicaltrials.gov results database
FDA medical and statistical reviews
raw data [IPD, CRF]

Under the current arrangement, the only entities that have actually seen and analyzed the raw data from a clinical trial of an FDA-regulated drug are the trial sponsors and the FDA. The FDA has extensively analyzed the data itself and said either yea or nay to a new drug application or an approval for a new indication of an approved drug. That they keep the raw data a secret is something I would argue with, but for the moment, it is a longstanding convention, so I wouldn’t get very far with my argument. They are giving in to the Subject Privacy and Commercially Confidential Information [CCI] arguments.

But the FDA silence goes deeper than that. They alone know the results of the prespecified Primary and Secondary Outcome variable analyses, yet they have remained silent as our literature fills up with journal articles that deliberately distort those results – have remained silent as clinicaltrials.gov‘s results database has been ignored. So by remaining silent, the FDA, the agency charged with ensuring that our pharmaceutical formulary is both safe and effective, the only agency that has access to the definitive clinical trial data and the results of its analysis, has been an active partner in the corrupt journal trial reports that have swept through our medical literature. Likewise, the NIH has been a party to the corruption by being passive while its clinicaltrials.gov results database has been systematically ignored, even in situations where reporting is mandatory.

This is a giant loophole in the system. Whether through passivity or a sense that they lacked the mandate or authority, the FDA and NIH have become a major part of the problem by remaining silent. And there is a solution:

1boringoldman Simple Facts

On the day that a sponsor and authors submit an article about their clinical trial to a journal, they already have the results at their fingertips. There is no reason that they could not easily post the required results on the clinicaltrials.gov results database…

On the day that the clinicaltrials.gov results database is populated, the FDA already has their own results at their fingertips. There is no reason that they could not easily check for concordance between those posted by the sponsor and their own analysis, and add their findings as a commentary to the results database.

In this scenario, the journal editors and peer reviewers still wouldn’t have the raw data, but they would have the results and a commentary about those results in hand, which would bring their decision-making process into the realm of evidence-based medicine.

By taking an active role in this process, the FDA would be stepping up to the plate, fulfilling its broad charge of ensuring safety, efficacy, and integrity in the clinical trial reporting landscape, and it could do this without compromising either subject confidentiality or commercially confidential information.
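To make the proposal concrete, here is a minimal sketch of what the concordance check itself might look like. Everything in it is hypothetical – the record layout, the field names, the numbers, and the tolerance – since neither the sponsor postings nor the FDA reviews come in any agreed-upon machine-readable form. The point is only that the comparison is trivial once both versions of the results are in hand.

```python
# A hypothetical concordance check between a sponsor's posted results and
# the FDA's own analysis of the same prespecified outcomes. The records,
# field names, tolerance, and numbers below are invented for illustration.

from dataclasses import dataclass

@dataclass
class OutcomeResult:
    name: str            # prespecified outcome, e.g. "HAM-D change at week 8"
    effect: float        # point estimate reported for drug vs placebo
    p_value: float       # reported significance test

def check_concordance(sponsor, fda, effect_tol=0.1, alpha=0.05):
    """Compare each prespecified outcome as posted by the sponsor
    against the FDA reviewer's own computation."""
    notes = []
    fda_by_name = {o.name: o for o in fda}
    for s in sponsor:
        f = fda_by_name.get(s.name)
        if f is None:
            notes.append(f"'{s.name}': missing from the FDA review")
            continue
        if abs(s.effect - f.effect) > effect_tol:
            notes.append(f"'{s.name}': effect estimates differ ({s.effect} vs {f.effect})")
        if (s.p_value < alpha) != (f.p_value < alpha):
            notes.append(f"'{s.name}': significance calls disagree")
    return notes or ["concordant on all prespecified outcomes"]

# Illustrative [fabricated] numbers only:
sponsor_posted = [OutcomeResult("HAM-D change at week 8", -2.8, 0.04)]
fda_review     = [OutcomeResult("HAM-D change at week 8", -1.9, 0.11)]
for line in check_concordance(sponsor_posted, fda_review):
    print(line)
```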

Last week, the FDA, NIH, and clinicaltrials.gov announced broad changes directly addressing the problem of corrupt clinical trial reporting. They are to be commended for their initiative and many of their changes. But they didn’t plug the loophole being discussed here – publicly addressing Concordance among the versions of the results, with coordination between agencies, and with surveillance and enforcement.

What’s it going to take? An Act of Congress? Stay tuned…
Mickey @ 7:30 AM

clinical trials visited again…

Posted on Wednesday 21 September 2016

By all rights, the usual randomized clinical trial [RCT] of a medication ought to be relatively straightforward:
  1. Decide on the target population, the intervention arms of the study, and what outcome difference you would accept as clinically meaningful.
  2. Do the power computations to determine the study’s size [see the sketch following this list].
  3. Preregister the study, declaring the Primary and Secondary Outcome Variables and the way they will be collected and analyzed.
  4. Begin the clinical trial, randomizing and blinding everyone.
  5. After the last subject completes the trial, break the blind and apply the [exact] methods specified in 3.
  6. Post those results on the Registry.
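Step 2 is the only arithmetic in the list, and it isn’t exotic. Here is a minimal sketch of the standard normal-approximation sample-size calculation for a two-arm trial with a continuous outcome – the effect size, alpha, and power shown are placeholder values, not numbers from any particular study.

```python
# A minimal sketch of the usual two-arm sample-size calculation for a
# continuous outcome, using the normal approximation. The inputs below
# [effect size, alpha, power] are placeholders, not from any real trial.

from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Subjects needed in each arm to detect a standardized mean
    difference of `effect_size` with a two-sided test."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_beta = z(power)            # e.g. 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A drug expected to beat placebo by a modest 0.3 standard deviations:
print(n_per_arm(0.3))   # roughly 175 subjects per arm
```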
Some persist in arguing that 3. is too restrictive – that they should be able to change Outcomes or Analytic Methods in mid-stream, before the blind is broken. First, the validity of the statistical methods depends on preregistration. And second, the track record of "outcome switching" being used to turn negative data into positive is too abysmal to even consider that argument. Preregistration is the only way to guard against HARKing [Hypothesizing After the Results are Known] – adapting variable choice and analytic method to get the desired result.

Building on a 1988 registry of HIV studies, in 1997 Congress mandated the creation of clinicaltrials.gov, a public registry for clinical trials [see the history on Wikipedia and clinicaltrials.gov]. It prescribed the exact system described in the box above: preregistration of outcomes and analytic methods, and posting of the results for the outcomes prescribed before the outset of the trial. It is a [sort of] user-friendly structured online database, well designed for the task. I have no idea why it was created as part of the NIH instead of the FDA, where it seemed most pertinent. But sadly, it has been a flop. Because when it wasn’t being ignored, it was being abused.

Studies often weren’t preregistered; they were registered after the fact [sometimes after the study was completed]. The results database, due within one year of completion, was almost universally ignored. Several attempts to strengthen the requirements had little or no impact. It was a great idea [see clinicaltrials.gov revisited…], but it flopped because it needed a cop:

The 1boringoldman Doctrine

No reform of the clinical trials system will work if any part of it relies on either ethical or voluntary compliance. The stakes are just too high. Clear legalistic rules with surveillance and predefined consequences are essential requirements.

There is, of course, a simple solution. Publish the a priori Protocol, the Statistical Analysis Plan, and the raw data as an online Addendum to the journal article. PHARMA has fought this solution with a vengeance, arguing that it breaches Subject Privacy and contains Confidential Commercial Information. I think those are spurious arguments – a smoke screen. But so far they’ve been successful, so Data Transparency is not, right now, the solution many of us hoped it might be. So where we’ve stood for a long time is that there are four potential versions of clinical trial results on FDA-regulated drugs…

published journal article
clinicaltrials.gov results database
FDA medical and statistical reviews
raw data [IPD, CRF]

… but we have only gotten to see one of them – the published journal article. Most sponsors neither registered a priori nor even filled out the clinicaltrials.gov results database. The FDA medical and statistical reviews are very slow to become public, if they appear at all, and the raw data itself is locked up in a safe in a place called PHARMA. Singing the praises of Data Transparency, PHARMA reframed it as benevolent Data Sharing for further research rather than checking for malfeasance, and offered a very restricted window for the persistent few who are willing to go through the process.

Fortunately, the outrage has grown and the evidence of widespread data manipulation and corruption has accumulated to the point where people are beginning to call for action – including the NIH and the FDA! To be continued…
Mickey @ 8:19 PM

drugs@FDA visited…

Posted on Tuesday 20 September 2016

Whereas clinicaltrials.gov was designed to be a public online interface with structured data fields, drugs@FDA is a different species altogether. In clinicaltrials.gov revisited…, I joked…
While it’s discussed almost like it’s some complex governmental agency, clinicaltrials.gov is just a very large, online, searchable database of clinical trials using human subjects. It’s only a registry, not unlike the bridal registries filled out by prospective brides as the big day approaches…
I didn’t mean to trivialize it – just to describe it as a structured database. It really is part of a complex governmental agency – the National Institutes of Health. On the other hand, the website drugs@FDA isn’t really a structured database. It’s just an Internet menu system in front of what data they have made available [which is like swiss cheese – some really good stuff with lots of holes]. You start with a screen that looks like this:
So you wind your way through several screens to find the drug you’re looking for, ending up on a screen like this:
 
It’s that highlighted link – the one that says "Approval History, Letters, Reviews, and Related Documents" – that’s what you came for. It leads to a screen with a table like this:
 
It’s the Reviews you’ve come to find. The rest of it is stodgy labeling revisions and moderately unintelligible letters in bureaucratese. So pick one [here’s the link to Abilify‘s initial Review]. At this point, it seems like every single drug is different. Sometimes there’s nothing there [because they haven’t put anything there], particularly with newer drugs. And sometimes they’re like this Abilify selection – pages and pages of pdfs.

It may seem like, in describing the hit-or-miss quality of this site and its sometimes chaotic organization, I’m complaining. Quite the contrary. I’m mentioning it for several reasons. First, my impression is that it’s not often visited, being the haunt of people who do systematic reviews rather than mere mortals. But mainly I want to say that after five or six years of repeatedly looking up things here, I’m impressed. These reviews are extensive. They’re all different in that the reviewers "follow their noses" going through the data. Sometimes they reanalyze the raw data, slicing and dicing it in different ways from the way it’s submitted. Sometimes they explore different analytic methods. But I’ve never found one where they didn’t follow the a priori protocol, or where they took the sponsor’s version and computation for granted. Their standard is low [two positive studies], but they go at it competently. The statistical reviews are often illuminating. I’ve disagreed with their conclusions at times, but never their methodology or thoroughness. These reports are reviewed by committees who say yea or nay. Actually, so do the reviewers. And I think the final decision rests with the director of the FDA.

They’re obviously in no hurry to post their documents or to be comprehensive [did I mention swiss cheese?], but that’s not what they’re for. And I’ve been glad for what I could get. I’m talking about this for a specific reason. The intended public interface for clinical trials is clinicaltrials.gov, an NIH database created to describe a trial’s design, its progress, and its results. But it has been purposefully underpopulated and essentially rendered impotent. The real information is inside the FDA, with its ability to see all the information [see Evidence of Clinical Effectiveness and Data Requirements For an NDA], but it’s late in coming to their site and spotty for those of us who try to investigate these drugs and vet the published studies.

Two windows into the Clinical Trial results, just across town from each other, but in some ways they might as well be at opposite poles of the planet.

Mickey @ 7:54 PM

clinicaltrials.gov revisited…

Posted on Sunday 18 September 2016

With all the reports of the NIH/FDA reforms coming out yesterday, I thought I’d dabble in the details of the various elements being discussed. Usually, one would start with an abstract discussion of how such a thing as clinicaltrials.gov came to be and what it was intended to accomplish, but this time I think it makes more sense to briefly start with what it is concretely, and then move to the loftier narrative. As the philosophers sometimes said: "existence precedes essence."

While it’s discussed almost like it’s some complex governmental agency, clinicaltrials.gov is just a very large, online, searchable database of clinical trials using human subjects [see clinicaltrials.gov]. It’s only a registry, not unlike the bridal registries filled out by prospective brides as the big day approaches. Its contents are all entered by the trial sponsors at various points along the process of doing their trial. Once you locate the trial you’re looking for, you’re offered several different ways to view the contents that relate to that specific trial:

The opening screen [full text view] shows the information gathered when a trial is registered. It tells what’s being studied and why, who’s in charge of the trial, and who’s paying for it. It briefly defines the primary and secondary outcome parameters and how they’ll be analyzed at the end. There’s a section about eligibility and another identifying the location[s] of the study sites. There are lots of dates: when it was registered, when it was completed, etc [notably, it usually doesn’t say when the study actually began].

There’s another window into the same contents [tabular view] with a completely different layout. I’m usually visiting this site to see if they played by the rules, so I find this view more helpful. For one thing, it reports what the outcome variables were when the study was originally registered AND when the study was completed [for reasons mentioned below, this isn’t always as helpful as it ought to be]. I think this view is the one most useful to scientists, physicians, and trialists – less "fishing around" required.

The results database has been the most disappointing aspect of this enterprise, not because of its structure, but because it has been essentially ignored – even in trials where it is legally mandated. Compliance percentages have been in the teens, even for NIMH/NIH funded trials, and close to zero for the contested studies I’ve tried to look at. The structural layout is fine – the results of the primary and secondary Outcomes and the Adverse Events. But it has way too often simply said No Study Results Posted.

Clicking the link above the tabbed menu [History of Changes], you get a log of all the times it has been updated along the way, and a link that says "Continue to the history of changes for this study on the ClinicalTrials.gov Archive Site." That leads to a screen that lets you scroll through the changes made during the trial. It’s a more primitive, database-y interface, but it’s an invaluable vetting tool for exploring the changes, particularly changes in the Outcome Parameters along the way. It’s for the hardy among us.
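For the hardy, the comparison itself is simple enough to automate once the archived versions are in hand. Here is a hypothetical sketch – the version records are invented for illustration, and in practice they would have to be transcribed from the archive pages by hand – just to show the check being made.

```python
# A hypothetical sketch of the vetting exercise described above: comparing
# the outcome declarations from successive archived versions of a trial's
# registration to flag mid-stream outcome switching. The version records
# here are invented; in practice they would be transcribed by hand from
# the ClinicalTrials.gov archive pages.

versions = [
    {"date": "2003-04-01", "primary": ["HAM-D change at week 8"],
     "secondary": ["CGI-I response", "HAM-A change"]},
    {"date": "2004-06-15", "primary": ["CGI-I response"],          # switched!
     "secondary": ["HAM-D change at week 8", "HAM-A change"]},
]

def flag_outcome_switching(versions):
    """Report any change in the declared primary outcomes between
    consecutive archived versions of the registration."""
    flags = []
    for before, after in zip(versions, versions[1:]):
        if set(before["primary"]) != set(after["primary"]):
            flags.append(
                f"{before['date']} -> {after['date']}: primary outcome changed "
                f"from {before['primary']} to {after['primary']}"
            )
    return flags

for flag in flag_outcome_switching(versions):
    print(flag)
```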

This was a really great idea, collecting all the data about a trial in a simple public database. There are some important omissions like the date the first patient started the trial or who is actually conducting the trial [which CRO]. Some parts have been degraded over time like the specifics about the study sites. And requirements have changed over time too, like who was required to use it to report results. As government databases go, the design and interface get the job done. But it has been a flop for one simple reason – they didn’t do it! Garbage In, Garbage Out [or in this case, Nothing In, Nothing Out].

Some trials weren’t registered until after they were completed [instead of before they started, as they were supposed to be]. Most were late. The results database was largely ignored – true for commercial as well as government- or foundation-funded trials. It’s really a shame, because it’s a great idea. I think it’s actually a better way to report the results from clinical trials than journal articles. As I’ve said endlessly, these trials are not research; they’re product testing. So what we need to know are the basics – the results of the prespecified Outcomes and a compilation of the Adverse Events. The extra words in the journal articles are often rhetoric – part of a smoke screen or a sales pitch – and frequently, some of the essentials have gone missing.
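To put that in concrete terms, here is a sketch of what such a bare-bones results record might look like. The structure and field names are mine, invented for illustration – they don’t follow clinicaltrials.gov’s actual schema – and the numbers are made up.

```python
# A hypothetical bare-bones results record of the kind argued for above:
# the prespecified outcomes with their results, plus a tally of adverse
# events. Field names and numbers are invented for illustration and do
# not follow ClinicalTrials.gov's actual schema.

import json

minimal_results_record = {
    "trial_id": "NCT00000000",          # placeholder registry number
    "prespecified_outcomes": [
        {"rank": "primary",
         "measure": "HAM-D change from baseline at week 8",
         "drug": -8.1, "placebo": -6.9, "p_value": 0.09},
        {"rank": "secondary",
         "measure": "CGI-I responder rate",
         "drug": 0.48, "placebo": 0.41, "p_value": 0.21},
    ],
    "adverse_events": [
        {"event": "nausea",            "drug": 23, "placebo": 9},
        {"event": "insomnia",          "drug": 14, "placebo": 6},
        {"event": "suicidal ideation", "drug": 4,  "placebo": 1},
    ],
    "dropouts": {"drug": 31, "placebo": 24},
}

print(json.dumps(minimal_results_record, indent=2))
```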

Recently, things have improved. Results have begun to show up in a more timely manner. Trials have been registered on time. This NIH/FDA crackdown that was released yesterday has been in the works for a while and as a result, people have been "being good." The earlier honor system didn’t work. So now, with compliance, surveillance, and enforcement, maybe these new changes will allow clinicaltrials.gov to live up to the dreams of its creators. Time to pore over those articles from last week and see if they have the kind of teeth that will put an end to the corruption we’ve been subjected to for far too long:
Mickey @ 7:55 PM

a blast off…

Posted on Friday 16 September 2016

JAMA
Toward a New Era of Trust and Transparency in Clinical Trials
by Kathy L. Hudson, PhD; Michael S. Lauer, MD; and Francis S. Collins, MD, PhD
September 16, 2016
New England Journal of Medicine
Trial Reporting in ClinicalTrials.gov – The Final Rule
by Deborah A. Zarin, M.D., Tony Tse, Ph.D., Rebecca J. Williams, Pharm.D., M.P.H., and Sarah Carr, B.A.
September 16, 2016
STAT politics
By Charles Piller
September 16, 2016
Well, speaking of launches! Francis Collins [Director of the NIH], Deborah Zarin [Boss at ClinicalTrials.gov], Robert Califf [new FDA Commissioner], and Ben Goldacre [ClinicalTrials Everyman] in the STAT piece – big guns all around. And then add this to the mix:
Promoting innovation and access to health technologies
September 2016
A lot of good words in these reports. The spirit is there in all of them at first reading. It’s not yet time to pick apart those encouraging words and see if they will translate into actionable changes in the clinical trial process. That’s for a close reading of the official documents when they’re fully available. But we can frame some important things to look for.

In the past, there have been reforms that should have worked, or at least helped. They have failed us for several reasons, but one stands out. After the trumpets stopped blaring, they were just forgotten. The people doing the trials either didn’t follow the requirements, didn’t follow them correctly, or didn’t follow them on time. The people in charge didn’t keep up with them or do anything in response to infractions, if they even knew about them. They sounded good, but they flunked compliance, surveillance, and enforcement. So as we look over these various changes and reforms, these provisions are paramount. Without these elements, it’s just another failed exercise.

I’m just going to be glad they’re doing something, and that the agencies seem to be working together for a change. These links are here for starters, but now it’s time to read them and their official versions and see what’s behind the good words, with an eye out for compliance, surveillance, and enforcement. Feel free to join in the fun…
Mickey @ 9:29 PM

an anniversary…

Posted on Friday 16 September 2016

Today is an anniversary for me. This post from a year ago today …

Posted on Wednesday 16 September 2015

Well, our RIAT article, Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence, is finally published online at the British Medical Journal. It’s fairly straightforward. The emphasis is on the harms analysis for obvious reasons – an accurate representation of a drug’s safety is always the first order of business…

… marked the end of an intense couple of years, all focused on our RIAT team getting to say this in print:

by Le Noury J, Nardo J, Healy D, Jureidini J, Raven M, Tufanaru C, & Abi-Jaoude E.
British Medical Journal. 2015 …

Conclusions: Neither paroxetine nor high-dose imipramine demonstrated efficacy for major depression in adolescents, and there was an increase in harms with both drugs. Access to primary data from trials has important implications for both clinical practice and research, including that published conclusions about efficacy and safety should not be read as authoritative. The reanalysis of Study 329 illustrates the necessity of making primary trial data available to increase the rigour of the evidence base.

I first happened onto Paxil Study 329 in 2010, reading a letter from Paul Thacker, then at the Project on Government Oversight [POGO], exposing it as ghostwritten [see roaches…]. The more I learned about it, the worse it looked. Over time, I had come to admire Jon Jureidini, who had mounted the first challenge to the article in 2003, and David Healy, who had pioneered our awareness of akathisia and suicidality with the Antidepressants. So I was honored to be asked to join them on the RIAT reanalysis team.

There are layers and layers of stories to tell about the writing of this article. It’s a story I expect will ultimately be told in detail. This is just an outline. We had originally planned to reanalyze the data already released by court order, strictly following the a priori protocol [from before the study began]. We were able to stick with that plan in the efficacy analysis, supplementing it with a more modern correction method for missing values requested by the peer reviewers [Multiple Imputation]. But that wasn’t possible with the harms analysis. The method for cataloging harms wasn’t specified in the protocol, and the one used in the original paper obscured the findings. So we applied to GSK for the CRFs [original case report forms] and, after another saga in its own right, we were granted remote access [50,000± pages]. The process of gaining the access and using the constricting remote access system are some of those other stories – partially documented on our web site [study329.org].

When it was finally completed and submitted, it wasn’t an ending, but the beginning of another story. Over the next year, it went through seven major resubmissions, multiple peer reviews and independent analyses. Acceptance was never assured until near the end of the process. And there were plenty of frayed nerves within and among everyone involved, particularly near the end of that year. It was something new for the authors, the journal, and the genre – and that showed in the process. I only wish other clinical trial reports were as closely looked at as this one. So the prepublication year was another whole story unto itself.

In an area where people question how involved the listed authors are in the production of a published article, or whether the journals are thorough in their review, our paper stands on its own. It was unfunded research. The authors on the byline were the only act in town. We did the negotiating, extracted the data, ran the analyses, checked each other’s work, wrote and edited the narrative, drew the graphics, made the submission, fielded the correspondence, etc. No ghosts anywhere in [or out of] sight. As mentioned, it was vetted like none other by the BMJ. And the final paper was well received and is widely quoted. It was definitely all well worth it. I got to work with some amazing colleagues on something that mattered. Who could ask for more than that? I hope others will follow the principles of a RIAT analysis and take a look at many other questioned studies.

You can’t learn clinical medicine from just reading books. You ultimately learn it in a clinic. After this experience, I’d say the same about clinical trials and their results. The process of being on this team was an invaluable way to understand their ins and outs, how to evaluate them, and to see how easily they can be distorted. For me, it’s an anniversary to remember…
Mickey @ 1:37 PM

er…

Posted on Friday 16 September 2016

NEWS IN BRIEF
The Onion. 2015 51[7]:1-2.

Noting that similar outcomes were achieved under both approaches, a landmark decade-long study of mental health treatment options published Tuesday has found that talk therapy and antidepressant medications are equally effective at monetizing clinical depression. “Our data indicate that regular counseling sessions and prescription drugs have similarly high success rates in generating large sums of money from the clinically depressed,” said Katherine Hutton of the University of Oklahoma, the study’s lead author, noting that both methods demonstrated consistent positive earnings across chronic, episodic, and seasonal depression cases. “While some people make tremendous profits with drugs, others see substantial revenues from therapy. Together, these are two very powerful tools for improving the health care industry’s bottom line.” The study concluded that when both approaches are combined, financial results are likely to be reached far more quickly than with one method alone.
Don’t we wish this was just a joke? When I tell my friends that I like volunteering, working for free, this is what I’m talking about. Money is the root of most COI…
Mickey @ 12:19 PM

say amen…

Posted on Wednesday 14 September 2016

I first heard the term ghost-writing used in the scientific literature at the end of 2010 [see roaches…]. I must’ve known what was coming, because I added the roach to the ad image [from something our exterminator said, "For every roach you see, there are a hundred behind the walls"]. Sure enough, six months later I got hold of The Rothman Report [see detestable…] that described a medical writing firm ghost-writing Risperidone® articles faster than they could find guest authors to front them. I guess one can only take in so much at a time. I exhumed that old STI ad to show the top line, the significance of which eluded me at the time – publication planning, advocacy development.

If you read the content of that STI ad, it talks about a lot more than just medical writing, but I was slow to catch on – apparently a lot of us were. By the time I saw the ad in 2010, Scientific Therapeutics Information Inc had been at it for a very long time [for example, managing the Paxil Launch in 1993]. Since then, it has gradually dawned on us that most all of the industry funded clinical trial articles in psychiatry are ghost-written. And these days, it’s right there in the Acknowledgments under editorial-assistance-provided-by. So exposing professional ghost-writing hasn’t made much of a difference [see rebranding…]. Industry and journal editors just adapted.

But I don’t think I personally got the full extent of industry’s modus operandi until recently, first from reading Lisa Cosgrove et al‘s article and the idea of ghost management
Under the Influence: The Interplay among Industry, Publishing, and Drug Regulation
by Lisa Cosgrove, Steven Vannoy, Barbara Mintzes, and Allen Shaughnessy
Accountability in Research. 2016 23[5]:257-279.

The relationships among academe, publishing, and industry can facilitate commercial bias in how drug efficacy and safety data are obtained, interpreted, and presented to regulatory bodies and prescribers. Through a critique of published and unpublished trials submitted to the Food and Drug Administration [FDA] and the European Medicines Agency [EMA] for approval of a new antidepressant, vortioxetine, we present a case study of the "ghost management" of the information delivery process. We argue that currently accepted practices undermine regulatory safeguards aimed at protecting the public from unsafe or ineffective medicines. The economies of influence that may intentionally and unintentionally produce evidence-biased – rather than evidence-based – medicine are identified. This is not a simple story of author financial conflicts of interest, but rather a complex tale of ghost management of the entire process of bringing a drug to market. This case study shows how weak regulatory policies allow for design choices and reporting strategies that can make marginal products look novel, more effective, and safer than they are, and how the selective and imbalanced reporting of clinical trial data in medical journals results in the marketing of expensive "me-too" drugs with questionable risk/benefit profiles…

… and then reading one of Karen Dineen Wagner‘s depositions [see author·ity…] –

DEPOSITION OF KAREN DINEEN WAGNER, M.D., Ph.D.
by Michael Baum, Esq., of Baum, Hedlund, Aristei & Goldman
on Tuesday, July 16, 2013, page 28 [pdf page 8]


QUESTION Okay. So do you recall whether you had access to patient level data when you were working on this publication?
  ANSWER No. We have access — well, as an individual investigator, you have access to your patients. But the individual patient data from other sites, usually when the data is presented, it’s put together. So I don’t — I just don’t recall if I saw individual — individual data.
QUESTION When you say "put together," does that refer to the pharmaceutical company compiling information and providing it to you?
  ANSWER The data is the property of the pharmaceutical company.
QUESTION And so they collect it and provide some form of summary of it to you?
  ANSWER Correct.
QUESTION And except for the patient level data that you had from your own particular site, you relied upon the information conveyed to you by the pharmaceutical company regarding the other sites. Is that correct?
  ANSWER In multicenter studies, each individual investigator has their own data and then it depends who sponsors the study. This was a Forest-initiated and Forest-sponsored study, so all of the data from the sites go to Forest.
QUESTION Then they compiled it and then did statistical evaluations of it?
  ANSWER Yes.
QUESTION Did you do any of the statistical evaluations yourself?
  ANSWER No.
QUESTION It was essentially provided to you by Forest statisticians?
  ANSWER Correct. I’m not a statistician.

– that she hadn’t actually seen the data or participated in the analysis, even as a Principal Investigator for the study. And it wasn’t just that fact itself. It’s that she said it in such a matter-of-fact way, like what she was saying was some kind of explanation that we should understand. It reminded me of a piece from several years back that addressed what happened when a recruited author insisted on actually seeing the study data. See selling seroquel VII: indication sprawl… for a telling example in which Nassir Ghaemi did exactly that, and was summarily unrecruited.

I synopsized my own dawning awareness here because without having gone through that process, I doubt I could’ve read Alastair Matheson’s papers and really understood them. I might have seen his waving us off of focusing so much on ghostwriting [Ghostwriting: the importance of definition and its place in contemporary drug marketing] as a bias from his years working in industry. But that’s not how I see his writings now. He brings a fresh point of view to the discussion of this pharmaceutical invasion of the medical literature, having spent close to twenty years doing many of the things we write about [see rebranding… and directly as he proposes…]. He’s an "insider":

"I worked on over one hundred drugs, most of which were, in my estimation, mediocre products that could be better pitched if a more persuasive scientific angle could be found for them. I visited corporate headquarters and congresses; analyzed markets, products, and competitors; groomed key opinion leaders; ghostwrote manuscripts; developed publications plans; and devised marketing strategies."
He’s cataloged his writings on a website:
In The Disposable Author: How Pharmaceutical Marketing Is Embraced within Medicine’s Scholarly Literature, Matheson makes a number of points. First, the academic authors are recruited based on reputation and status. They can be disposed of and replaced – in a way, they’re "ghosts" too. He validates that Pharma does all kinds of egregious stuff behind the scenes, but he warns against getting preoccupied with that. It reinforces an evil pharma meme that takes attention away from his next point: it’s a system, and every part of it gains something. There aren’t any innocent victims in the system as it now stands [except maybe the patients]:
Such devices are widespread in medicine’s peer-reviewed journal literature. But who is behind them, and whose interests are served? Here we come to the crux. Everyone is behind them, and each party benefits in its own way. Companies get the elixir of endorsement on which advocacy marketing depends; academics reap the rewards of authorial status and generally feel that they deserve top billing; journals sell reprints; and culturally, I believe, academic medicine and its journals crave the sense that the research scene remains in their hands. It is customary for academic "investigators" to be placed at the front of the byline, and indeed, it is understandable that readers who will prescribe the drug want to read the opinions of qualified peers who have used it in their patients…
The net result is that the academic and commercial interests merge. He suggests that by demonizing one or another element in the system, for example Pharma, one doesn’t pay attention to the journal editors who allow these jury-rigged articles to be published as a way of funding their journals. But his argument is nuanced and should be read in full rather than in snippets. His suggested solution is simple:
"Let me then define contemporary advocacy-based marketing punctiliously, as a practice in which content with potential commercial or promotional utility is planned, convened, funded, influenced or owned by a company, but communicated by, or disproportionately attributed to, the peers or opinion leaders of the intended customers. Advocacy marketing thus defined is routine in medicine and its scholarly literature, and the chief policy conclusion of this essay is that it should be banned outright."
Of course that’s right. So long as the academic authors are essentially functioning as a sales force, there is no meaning to the word academic. How can we refer to Karen Dineen Wagner as an academic author when she signs on to a study where she’s neither seen the data, nor reviewed the analysis, nor even come into the writing process until after it’s drafted? How can we call a journal part of the scholarly literature when the editor doesn’t retract articles that are clearly wrong by wide-ranging consensus? Rather than academic medicine being a watchdog, a counterpoint to commercial interests, the academic is disappearing as it merges with the powerful commercial forces. So he sees "integration, not subterfuge, as the danger."

I say "Amen"
Mickey @ 10:35 PM

miles[1959]

Posted on Wednesday 14 September 2016


So what… Miles Davis

Mickey @ 9:26 PM