please sign the petition!…

Posted on Thursday 29 September 2016

[see also clinical trials – an act of congress!…]

In the course of a Clinical Trial on an FDA regulated drug, the Sponsor submits information to three distinct entities – each with a different purpose:

  1. NIH [clinicaltrials.gov]: …clinicaltrials.gov is intended to be the public interface for the clinical trial. The first submission [registration] describes the trial and lays out how it is to be conducted, and should be submitted after the study has been approved by the Institutional Review Board and before the study begins. It is a proxy for the a priori Protocol, identifying the Primary and Secondary Outcome Variables and how they will be analyzed – a basic element of the RCT process. At the end of the clinical trial proper, with the breaking of the blind, there is a second submission to clinicaltrials.gov [results database] giving the results of the analysis of the Primary and Secondary Outcome Variables. clinicaltrials.gov hasn’t had the desired impact – not because it wasn’t a good idea or well designed. People just ignored it. Trials have not been registered prior to starting the study. The results database has rarely been populated on time, if at all. The only consequence for ignoring it, even when it’s required by law, is being listed in one of the many articles documenting that it’s being ignored.
  2. FDA: …The Sponsor’s submissions to the FDA are formal requests for a New Drug Approval, an approval for a new indication, or an application for the pediatric extension of Exclusivity. Those submissions include the results of multiple trials [see Evidence of Clinical Effectiveness and Data Requirements For an NDA]. The Sponsor’s FDA Submission is not publicly available, and the results of the FDA’s analyses are not published promptly – they are added much later for selected studies on Drugs@FDA. Those not posted can be requested via an FOIA request. The FDA’s decisions are yes/no, and what’s published are the mutually negotiated package inserts, which are collected yearly in the PDR [Physicians’ Desk Reference].
  3. Medical Journal: …articles in peer-reviewed academic medical journals have been physicians’ traditional source of medical information for well over a century. Academic Medical Journals are independent and self-regulated. In the case of these clinical trial reports, the only information the editors and peer reviewers have is what’s submitted by the trial sponsor. No one these days thinks that the data was analyzed or the paper written by the academic authors on the by-line. The analysis was done by the sponsor, and the paper was written by a professional medical writer from a prepared summary the sponsor provided. The listed non-industry authors are involved in that to a greater or lesser extent. Over time, these publications have essentially become a powerful advertising tool for marketing the drug – often inflating efficacy and minimizing harms.
So in spite of the fact that there should be four versions of the results for a clinical trial, from where we sit, there’s only one. The clinicaltrials.gov version is usually empty. The IPD [Individual Participant Data] and the CRFs [Case Report Forms] are locked away in the sponsor’s filing cabinet/computer. The only other access is through the FDA, which is sworn to secrecy. And so all we can see is the published article. Not even the ghost-writers, the listed academic authors, the journal editors, or the peer reviewers have access to the actual data and unfiltered results of the study.
Notice the bifurcation in the pathways followed for the FDA Submission and the Article Submission. Add in the fact that both clinicaltrials.gov submissions are either misused or ignored. So while we appreciate the recent initiatives by the NIH and the FDA to do something about all of this, what they propose is not enough! In the current system shown in the graphic above, or in the system as proposed by their initiative, the only entity with eyes-on access to the raw data [the FDA] surveys neither the information on clinicaltrials.gov nor that published in the journal article for accuracy.

So that’s why we’re urging you to sign our petition for Congressional action mandating that the FDA and NIH take an active role and use these already-in-place tools to put some teeth into ending the scientific disgrace of our clinical trial reporting! That’s what these agencies are for!


[click the ostriches to see and/or sign the petition]

Mickey @ 5:00 PM

clinical trials – an act of congress!…

Posted on Thursday 29 September 2016

My posts last week were meant to be a prequel to this one. In clinicaltrials.gov revisited… and drugs@FDA visited…, I tried to present a simple outline of the two agencies’ online windows into the world of clinical trials for anyone who hasn’t already explored them. And in clinical trials visited again… and clinical trials – concordance…, I wanted to frame the giant loophole in our system of evaluating clinical trial data. Those posts were in anticipation of my being part of an initiative that I think might really help us get something done about finally plugging the loophole.

Back in May, in commenting here on why is that?…, Dr. Bernard Carroll had a spark of an idea about something that could lead us out of the morass of confusion and corruption that surrounds the industry-funded clinical trial reports that contaminate our literature and medical practices. In June, he expanded that initial comment into a blog post on Healthcare Renewal [see CORRUPTION OF CLINICAL TRIALS REPORTS: A PROPOSAL]. Over the summer, there’s been a lot of back and forth on this proposal and I’m grateful to be a part of the group that’s formed around deciding how to proceed. Also included in the group are John H. Noble Jr., a noted academic and longtime advocate for honesty in science and clinical trials, and Shannon Brownlee, well-known author, activist, and journalist, now with the Lown Institute and its Right Care Movement. Working through the Lown Institute, today we are launching a petition on Change.org designed to get this gaping loophole in government oversight and the proposed solution into the place where it belongs, the United States Congress. Let me repeat that:

… TODAY WE ARE LAUNCHING A PETITION ON CHANGE.ORG DESIGNED TO GET THIS LOOPHOLE IN GOVERNMENT OVERSIGHT AND THE PROPOSED SOLUTION INTO THE PLACE IT BELONGS, THE UNITED STATES CONGRESS.


[click the ostriches to see and/or sign the petition]

The full text of our petition is long and detailed. This snippet has its essence:
"A Specific Proposal
We now petition Congress to require the FDA and NIH to coordinate their monitoring and sharing of key information through ClinicalTrials.gov. Working together, the two agencies could enable stakeholders to verify whether purported scientific claims are faithful to the a priori protocols and plans of analysis originally registered with the FDA. Publication of analyses for which such fidelity cannot be verified shall be prohibited unless the deviations are positively identified (as in openly declared unplanned, secondary analyses). This prohibition shall include scientific claims for on-label or off-label uses made in medical journals, archival conference abstracts, continuing education materials, brochures distributed by sales representatives, direct-to-consumer advertising, and press releases issued by companies or their academic partners. It shall extend to FDA Phase 2, Phase 3, and Phase 4 clinical trials. By acting on this petition, Congress will create a mechanism for stakeholders independently to verify whether inferences about clinical use suggested by the unregulated corporate statistical analyses can be trusted."
But please read over the whole document. It has a lot of sweat equity in the details.

Even though petitioning Congress may seem like a long shot, this is the right time to take it. There is general outrage at the behavior of the pharmaceutical industry over disreputable practices, including clinical trials. The FDA, NIH, and clinicaltrials.gov released their own planned reforms just last week [see a blast off…]. And while they didn’t plug the giant loophole, their reforms will help [and they telegraph a willingness to go after a solution]. But as I said in clinical trials visited again…:

No reform of the clinical trials system will work if any part of it relies on either ethical or voluntary compliance. The stakes are just too high. Clear legalistic rules with surveillance and predefined consequences are essential requirements.

If our experience with previous reform efforts [particularly clinicaltrials.gov] has taught us anything, it’s that we need a mandate from Congress, the law of the land, behind any moves forward. An Act of Congress. And this petition is a first step in that direction.

So please read our petition over and sign it if you agree. If you’ve got a blog or are on a listserve or email chain, pass it on. We’re going to need a lot of signatures to push for a Congressional Hearing…
Mickey @ 12:00 PM

study 329 – something new…

Posted on Wednesday 28 September 2016

Well, our second Paxil Study 329 paper was published at the end of last week. I waited to mention it here until David Healy had a post about it – out today [see Study 329 Continuation Phase]. We originally submitted it to the Journal of the American Academy of Child and Adolescent Psychiatry, which turned it down [their peer review comments are on our website Restoring Study 329 – interesting in their own right]. I think what I’ll do is show a couple of graphs from that data, then reverse my usual m.o. by talking about it first and ending with the abstract:

Paxil Study 329 had a Continuation Phase in which the responders only were followed, blinded, on the same meds for six months. In the a priori Protocol, it was a Secondary Outcome Variable intended to measure the relapse rate. They didn’t mention it in Keller et al. I think they must’ve looked at that upper graph of the drop-out rate and shied away from the Continuation Phase altogether. The lower graph has the raw HAM-D scores and, as expected, they showed no differences. But we never said that this was a badly designed study. On the contrary, it’s better than most, and this six-month follow-up data is about the only longer-term SSRI dataset around, certainly in kids – so we decided to take a look.

In our original RIAT paper [Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence], we wanted to analyze the data as it should have been analyzed in the first place – according to the a priori Protocol [see faith-based medicine…]. We were able to do that with the efficacy data. We couldn’t exactly do that with the safety analysis. For one thing, the Protocol didn’t specify a system. For another, the system used by Keller et al obscured suicidality. So we used a more modern, more appropriate system. But even with that change, we remained in the Hypothesis Testing mode. However, with this Continuation Phase, from my point of view we were doing something else – what Adriaan de Groot called Material-Exploration in his 1956 classic [The Meaning of “Significance” for Different Types of Research] [see also the hope diamond…]:
Hypothesis Testing Research vs Material-Exploration

Scientific research and reasoning continually pass through the phases of the well-known empirical-scientific cycle of thought: observation – induction – deduction – testing [observe – guess – predict – check]. The use of statistical tests is of course first and foremost suited for “testing”, i.e., the fourth phase. In this phase one assesses whether certain consequences [predictions], derived from one or more precisely postulated hypotheses, come to pass. It is essential that these hypotheses have been precisely formulated and that the details of the testing procedure [which should be as objective as possible] have been registered in advance. This style of research, characteristic for the [third and] fourth phase of the cycle, we call hypothesis testing research.

This should be distinguished from a different type of research, which is common especially in [Dutch] psychology and which sometimes also uses statistical tests, namely material-exploration. Although assumptions and hypotheses, or at least expectations about the associations that may be present in the data, play a role here as well, the material has not been obtained specifically and has not been processed specifically as concerns the testing of one or more hypotheses that have been precisely postulated in advance. Instead, the attitude of the researcher is: “This is interesting material; let us see what we can find.” With this attitude one tries to trace associations [e.g., validities]; possible differences between subgroups, and the like. The general intention, i.e. the research topic, was probably determined beforehand, but applicable processing steps are in many respects subject to ad hoc decisions. Perhaps qualitative data are judged, categorized, coded, and perhaps scaled; differences between classes are decided upon “as suitable as possible”; perhaps different scoring methods are tried along-side each other; and also the selection of the associations that are researched and tested for significance happens partly ad-hoc, depending on whether “something appears to be there”, connected to the interpretation or extension of data that have already been processed.

When we pit the two types so sharply against each other it is not difficult to see that the second type has a character completely different from the first: it does not so much serve the testing of hypotheses as it serves hypothesis-generation, perhaps theory-generation — or perhaps only the interpretation of the available material itself…
If you only take one thing away from this entire 1boringoldman blog, let this be it. What’s been wrong with the clinical trial literature is that the papers are written as if they are some kind of anything-goes, free-wheeling, Material Explorations with changing outcomes, creative statistics, and speculations-presented-as-facts. That’s dead wrong. They are Hypothesis Testing enterprises that require every bit of the rigor and attention to protocol described by de Groot. Product Testing exercises, not Exploratory Research! Hypothesis Testing not Material-Exploration! …End of Sermon…

Now back to our Paxil Study 329 Continuation Phase paper. I’m not even going to try to summarize it because fellow author David Healy has done such a good job in Study 329 Continuation Phase. He and Jo Le Noury have a knack for looking at adverse event data. We did find some things after all, in spite of the drop-out rate – primarily by looking closely at the timing and the various states of medication use. So look over the paper, and be sure to read David’s posts, the one today and the one coming next week, for the details of what we found. Some pretty interesting Material Explorations, in my book. Here’s another graphic and the abstract:

by Le Noury, Joanna; Nardo, John M; Healy, David; Jureidini, Jon; Raven, Melissa; Tufanaru, Catalin; and Abi-Jaoude, Elia.
International Journal of Risk & Safety in Medicine. 2016 28[3]:143-161.

OBJECTIVE: This is an analysis of the unpublished continuation phase of Study 329, the primary objective of which was to compare the efficacy and safety of paroxetine and imipramine with placebo in the treatment of adolescents with unipolar major depression. The objectives of the continuation phase were to assess safety and relapse rates in the longer term. The objective of this publication, under the Restoring Invisible and Abandoned Trials [RIAT] initiative, was to see whether access to and analysis of the previously unpublished dataset from the continuation phase of this randomized controlled trial would have clinically relevant implications for evidence-based medicine.
METHODS: The study was an eight-week double-blind randomized placebo-controlled trial with a six month continuation phase. The setting was 12 North American academic psychiatry centres, from 20 April 1994 to 15 February 1998. 275 adolescents with major depression were originally enrolled in Study 329, with 190 completing the eight-week acute phase. Of these, 119 patients [43%] entered the six-month continuation phase [paroxetine n=49; imipramine n=39; placebo n=31], in which participants were continued on their current treatment, blinded. As per the protocol, we have looked at rates of relapse [based on Hamilton Depression Scale scores] across both acute and continuation phases, and generated a safety profile for paroxetine and imipramine compared with placebo for up to six months. ANOVA testing [generalized linear model] using a model including effects of site, treatment and site x treatment interaction was applied. Otherwise we used only descriptive statistics.
RESULTS: Of patients entering the continuation phase, 15 of 49 for paroxetine [31%], 12 of 39 for imipramine [31%] and 12 of 31 for placebo [39%] completed as responders. Across the study, 25 patients on paroxetine relapsed [41% of those showing an initial response], 15 on imipramine [26%], and 10 on placebo [21%]. In the continuation and taper phases combined there were 211 adverse events in the paroxetine group, 147 on imipramine and 100 on placebo. The taper phase had a higher proportion of severe adverse events per week of exposure than the acute phase, with the continuation phase having the fewest events.
CONCLUSIONS: The continuation phase did not offer support for longer-term efficacy of either paroxetine or imipramine. Relapse and adverse events on both active drugs open up the risks of a prescribing cascade. The previously largely unrecognised hazards of the taper phase have implications for prescribing practice and need further exploration.
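[For the statistically inclined: the Methods above name a generalized linear model with effects for site, treatment, and the site x treatment interaction. Here is a minimal sketch of what that kind of analysis can look like in Python with statsmodels – it is not the authors’ code, and the data file and column names are hypothetical placeholders.]

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical dataset: one row per patient, with a HAM-D change score,
    # a treatment arm label, and a study site identifier.
    df = pd.read_csv("continuation_phase.csv")

    # Categorical main effects for treatment and site, plus their interaction
    model = smf.ols("hamd_change ~ C(treatment) * C(site)", data=df).fit()

    # Type II ANOVA table: tests for treatment, site, and treatment x site
    print(sm.stats.anova_lm(model, typ=2))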
Mickey @ 11:22 PM

faith-based medicine…

Posted on Wednesday 28 September 2016

At the risk of overworking my Rip Van Winkle analogy: before I went to sleep in the early 1980s, I recall that conflict of interest standards in medicine and science were similar to those for our judicial counterparts – even the possibility of a conflict of interest was grounds for recusal. Had someone announced in a conference, “I am a Consultant for SmithKline Beecham, but that will not cloud my judgements about Paxil,” the room would’ve dissolved into laughter and cat-calls. Likewise, scientific experiments had built-in procedures to eliminate the possibility of bias – randomization, double-blinding, etc. are there for a reason and are sacrosanct.

So when I got really into reading the volumes of material we had about Paxil Study 329 and read in the Clinical Study Report [CSR] that the Outcome Parameters had been changed in the final months of the trial, my eyes crossed. And they came out of their sockets when I realized that the only significant outcome variables were the very ones added at the 11th hour. No argument can convince me that somebody didn’t “peek” at the data before the study was unblinded and then made that change, though I could never prove such an assertion.

Similarly, when I was recently looking at Karen Dineen Wagner’s 2003 Efficacy of sertraline in the treatment of children and adolescents with major depressive disorder: two randomized controlled trials [see and then there was one…] and read this in defense of their combining two separate Zoloft trials…
Reply: In response to Dr Garland, our combined analysis was defined a priori, well before the last participant was entered into the study and before the study was unblinded. The decision to present the combined analysis as a primary analysis and study report was made based on …
…my eyes started crossing again. And they did the socket thing when I realized that the two studies were negative when analyzed individually. And that’s because "well before the last participant was entered into the study and before the study was unblinded" is not even close to a priori, and again allows the possibility of "peeking," which I assume is what they did. Once more, my assumption is totally unprovable. But the ball’s in her court, not mine.

Being asked to accept statements like that is particularly annoying from people who otherwise preach the gospel of evidence-based medicine from any available pulpit on any given Sunday. As a matter of fact, the whole point of doing RCTs in the first place is because people wouldn’t accept, "Newbery’s Brain Salt offers ‘A POSITIVE RELIEF AND CURE FOR Brain Troubles, Headaches, Sea Sickness,’ etc," particularly if the person saying it worked for F. Newbery & Sons.

For that reason alone, declaring the Outcome Variables in the Protocol and filing a Statistical Analysis Plan before beginning a clinical trial is mandatory. This is not faith-based medicine. But in addition, I know of no situation in which picking a particular statistical procedure informed by the results is recommended – quite the contrary. We select them based on considerations outside the data itself. Just about anybody with a college statistics book and some free time can locate a combination of outcome parameters and statistical procedures that will turn a trivial difference into something that reads out as “statistically significant.” Insisting that a clinical trial be pristine – that it define its later analysis a priori – is neither picky nor optional. It’s an essential element of the whole enterprise. So what if you change your mind halfway into a study? Scrap the study you’re doing and start over. The track record for any other answer to that question is too abysmal to even contemplate…
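A toy simulation makes the point. The sketch below [Python, assuming numpy and scipy are available; the numbers are arbitrary] draws a drug arm and a placebo arm from the very same distribution – no real effect at all – and then checks twenty different outcome measures. Roughly two thirds of the time, at least one of them comes up “statistically significant” by chance alone.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_per_arm, n_outcomes, n_trials = 100, 20, 1000
    trials_with_a_hit = 0

    for _ in range(n_trials):
        # Both arms are drawn from the same distribution: there is no real effect.
        drug = rng.normal(0.0, 1.0, size=(n_per_arm, n_outcomes))
        placebo = rng.normal(0.0, 1.0, size=(n_per_arm, n_outcomes))
        pvalues = stats.ttest_ind(drug, placebo, axis=0).pvalue
        if (pvalues < 0.05).any():      # keep the "best" of the twenty outcomes
            trials_with_a_hit += 1

    print(f"Null trials with at least one p < 0.05: {trials_with_a_hit / n_trials:.0%}")
    # Expected: roughly 1 - 0.95**20, i.e. about 64% of trials with no true effect.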
Mickey @ 9:16 AM

hardly our finest hour…

Posted on Sunday 25 September 2016

When I left the faculty in the early 1980s in the wake of the medicalization of our department after the DSM-III revolution, I didn’t think of it as leaving psychiatry [I sort of thought of it as psychiatry leaving me]. But circumstances were such that I got busy with my practice and teaching, drifting further and further from what was going on in psychiatry local [Dr. Nemeroff’s department] and psychiatry at large. It was 25 years later when two things woke me from my slumber – the revelations of widespread corruption in academic psychiatry [the KOLs] and volunteering in a clinic and being horrified at the medication regimens I found people on there. So I had a lot of catching up to do, luckily finding others who were willing to help. I think that the CROs [Contract Research Organizations], the Medical Writing Firms, that whole industry that entered the clinical trial scene must have been in its infancy about the time I was going into seclusion, because I didn’t know about any of it, though part of my reason for leaving had to do with a new administration that was keen on teaming up with PHARMA [another unfamiliar term]. I think of the time from going into practice until five or six years into my retirement as my “Rip Van Winkle” period:

I periodically tell that story partly because I feel guilty for not noticing what was happening, and sometimes to explain why I never even heard terms like evidence-based medicine, RCTs, meta-analysis, systematic reviews, or even the word pharma until five or six years ago [I’m apparently a heavy sleeper]. This time, however, I have another reason. Research watchdog, John Ioannidis, has a new article. And it was a graph in his paper that led to my retelling that snippet of my history:
by JOHN P.A. IOANNIDIS
Milbank Quarterly. 2016 94[3]:485-514.

POLICY POINTS: Currently, there is massive production of unnecessary, misleading, and conflicted systematic reviews and meta-analyses. Instead of promoting evidence-based medicine and health care, these instruments often serve mostly as easily produced publishable units or marketing tools. Suboptimal systematic reviews and meta-analyses can be harmful given the major prestige and influence these types of studies have acquired. The publication of systematic reviews and meta-analyses should be realigned to remove biases and vested interests and to integrate them better with the primary production of evidence.
CONTEXT: Currently, most systematic reviews and meta-analyses are done retrospectively with fragmented published information. This article aims to explore the growth of published systematic reviews and meta-analyses and to estimate how often they are redundant, misleading, or serving conflicted interests.
METHODS: Data included information from PubMed surveys and from empirical evaluations of meta-analyses.
FINDINGS: Publication of systematic reviews and meta-analyses has increased rapidly. In the period January 1, 1986, to December 4, 2015, PubMed tags 266,782 items as "systematic reviews" and 58,611 as "meta-analyses." Annual publications between 1991 and 2014 increased 2,728% for systematic reviews and 2,635% for meta-analyses versus only 153% for all PubMed-indexed items. Currently, probably more systematic reviews of trials than new randomized trials are published annually. Most topics addressed by meta-analyses of randomized trials have overlapping, redundant meta-analyses; same-topic meta-analyses may exceed 20 sometimes… Many other meta-analyses have serious flaws. Of the remaining, most have weak or insufficient evidence to inform decision making. Few systematic reviews and meta-analyses are both non-misleading and useful.
CONCLUSIONS: The production of systematic reviews and meta-analyses has reached epidemic proportions. Possibly, the large majority of produced systematic reviews and meta-analyses are unnecessary, misleading, and/or conflicted.
The abscissa on the graph goes from 1986 to 2014 and the ordinate from 0 to 30,000 [articles/year!], red being systematic reviews and blue being meta-analyses [I wonder what the clinical trial graph would look like?]. Ioannidis obviously takes a dim view of this epidemic. He’s a smart guy, so I expect he knows what he’s talking about.

But when I looked at Ioannidis’ graph, I saw something else – a historical epoch of medicine. Is it any wonder that I didn’t know about meta-analyses and systematic reviews? There weren’t any at the time I went all Rip Van Winkle! That graph parallels a profound change in medicine – the age of evidence-based medicine, the age of managed care, the age of corporatization, the age of the clinical trial. Whatever you want to call it, it has been a distinct era. And it is hardly our finest hour.

Again, I don’t miss Ioannidis’ point that many of these meta-analyses and systematic reviews can be a way for academics to rack up publications for academic advancement without doing any original bench research or clinical studies of their own. But it’s also possible that one reason for that is that research funding is so hard to come by these days – except from corporate sponsors [with strings attached]. And another reason for the burst of secondary publications might be that there’s been so much questionable research[?] in this time frame that genuinely does need a critical second [or third] look.

So at least in psychiatry, I welcome the flurry of independent meta-analyses and systematic reviews. We’ve had a pipeline of psychotherapeutic agents steadily pouring into our landscape during the time under discussion, literally changing the direction of the specialty, and we still can’t trust our literature to tell us about either their safety or their efficacy:
The meta-analyses and systematic reviews have been our only real window into any rational understanding of these drugs. And they still have a lot to tell us. Here’s a very recent example:
by Ymkje Anna de Vries, Annelieke M. Roest, Lian Beijers, Erick H. Turner, Peter de Jonge
European Neuropsychopharmacology. 2016. Article in press.

Mickey @ 9:46 AM

clinical trials – concordance…

Posted on Thursday 22 September 2016

My preacher friend Andy had a book, The Concordance, that showed the four gospels with their different accounts of the same event aligned side by side. I liked the idea and used it in my own teaching on a very different topic. But in this case, I’m co-opting the term to say that these four documents should be a Concordance – they should say the same thing and reach the same conclusions:

  1. the published journal article
  2. the clinicaltrials.gov results database
  3. the FDA Medical and Statistical Reviews
  4. the raw data [IPD, CRFs]

Under the current arrangement, the only entities that have actually seen and analyzed the raw data from a clinical trial of an FDA-regulated drug are the trial sponsors and the FDA. The FDA has extensively analyzed the data itself and said either yea or nay to a new drug application or to the approval of a new indication for an approved drug. That they keep the raw data secret is something I would argue with, but for the moment it is a longstanding convention, so I wouldn’t get very far with my argument. They are giving in to the Subject Privacy and Commercially Confidential Information [CCI] arguments.

But the FDA’s silence goes deeper than that. They alone know the results of the prespecified Primary and Secondary Outcome Variable analyses, yet they have remained silent as our literature fills up with journal articles that deliberately distort those results – remained silent as clinicaltrials.gov’s results database has been ignored. So by remaining silent, the FDA – the agency charged with ensuring that our pharmaceutical formulary is both safe and effective, the only agency that has access to the definitive clinical trial data and the results of its analysis – has been an active partner in the corrupt journal trial reports that have swept through our medical literature. Likewise, the NIH has been a party to the corruption by standing by passively while its clinicaltrials.gov results database has been systematically ignored, even in situations where reporting is mandatory.

This is a giant loophole in the system. Whether through passivity or a sense that they lacked the mandate or authority, the FDA and NIH have become a major part of this problem by remaining silent. And there is a solution:

1boringoldman Simple Facts

On the day that a sponsor and authors submit an article about their clinical trial to a journal, they already have the results at their fingertips. There is no reason that they could not easily post the required results on the clinicaltrials.gov results database…

On the day that the clinicaltrials.gov results database is populated, the FDA already has their own results at their fingertips. There is no reason that they could not easily check for concordance between those posted by the sponsor and their own analysis, and add their findings as a commentary to the results database.

In this scenario, the journal editors and peer reviewers still wouldn’t have the raw data, but they would have the results and a commentary on those results in hand, which would bring their decision-making process into the realm of evidence-based medicine.

By taking an active role in this process, the FDA would be stepping up to the plate – fulfilling its broad charge of ensuring safety and efficacy and bringing integrity to the clinical trial reporting landscape – and it could do this without compromising either subject confidentiality or commercially confidential information.

Last week, the FDA, NIH, and clinicaltrials.gov announced broad changes directly addressing the problem of corrupt clinical trial reporting. They are to be commended for their initiative and many of their changes. But they didn’t plug the loophole being discussed here – publicly addressing Concordance among the versions of the results, with coordination between agencies, and with surveillance and enforcement.

What’s it going to take? An Act of Congress? Stay tuned…
Mickey @ 7:30 AM

clinical trials visited again…

Posted on Wednesday 21 September 2016


By all rights, the usual randomized clinical trial [RCT] of a medication ought to be relatively straightforward:

  1. Decide on the target population, the intervention arms of the study, and what outcome difference you would accept as clinically meaningful.
  2. Do the power computations to determine the study’s size [see the sketch after this list].
  3. Preregister the study, declaring the Primary and Secondary Outcome Variables and the way they will be collected and analyzed.
  4. Begin the clinical trial, randomizing and blinding everyone.
  5. After the last subject completes the trial, break the blind and apply the [exact] methods specified in 3.
  6. Post those results on the Registry.
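As an illustration of step 2, here is a minimal power-computation sketch in Python using statsmodels. The standardized effect size of 0.40 is an arbitrary stand-in for whatever difference was judged clinically meaningful in step 1 – not a number from any real protocol.

    from statsmodels.stats.power import TTestIndPower

    effect_size = 0.40   # hypothetical standardized mean difference (Cohen's d)
    n_per_arm = TTestIndPower().solve_power(effect_size=effect_size,
                                            alpha=0.05, power=0.80,
                                            alternative="two-sided")
    print(f"About {n_per_arm:.0f} subjects per arm")   # roughly 99 per arm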
Some persist in arguing that 3. is too restrictive – that they should be able to change Outcomes or Analytic Methods in mid-stream, before the blind is broken. First, the validity of the statistical methods depends on preregistration. And second, the track record of “outcome switching” being used to turn negative data into positive is too abysmal to even consider that argument. Preregistration is the only way to guard against HARKing [hypothesizing after the results are known] – adapting variable choice and analytic method to get the desired result.

Building on a 1988 registry of HIV studies, Congress created clinicaltrials.gov in 1997 as a public registry for clinical trials [see the history on Wikipedia and clinicaltrials.gov]. It prescribed the exact system described in the box above: preregistration of outcomes and analytic methods, and posting of the results as prescribed before the outset of the trial. It is a [sort of] user-friendly structured online database, well designed for the task. I have no idea why it was created as part of the NIH instead of the FDA, where it seemed most pertinent. But sadly, it has been a flop. Because when it wasn’t being ignored, it was being abused.

Studies often weren’t preregistered; they were registered after the fact [sometimes after the study was completed]. The results database, due within one year of completion, was almost universally ignored. Several attempts to strengthen the requirements had little or no impact. It was a great idea [see clinicaltrials.gov revisited…], but it flopped because it needed a cop:

The 1boringoldman Doctrine

No reform of the clinical trials system will work if any part of it relies on either ethical or voluntary compliance. The stakes are just too high. Clear legalistic rules with surveillance and predefined consequences are essential requirements.

There is, of course, a simple solution: publish the a priori Protocol, the Statistical Analysis Plan, and the raw data as an online Addendum to the journal article. PHARMA has fought this solution with a vengeance, arguing that it breaches Subject Privacy and contains Confidential Commercial Information. I think those are spurious arguments – a smoke screen. But so far they’ve been successful, so Data Transparency is, right now, not the solution many of us hoped it might be. So where we’ve stood for a long time is that there are four potential versions of clinical trial results on FDA-regulated drugs…

  1. the published journal article
  2. the clinicaltrials.gov results database
  3. the FDA Medical and Statistical Reviews
  4. the raw data [IPD, CRFs]

… but we have only gotten to see one of them – the published journal article. Most sponsors neither registered a priori nor even filled out the clinicaltrials.gov results database. The FDA Medical and Statistical Reviews are very slow to become public, if they appear at all, and the raw data itself is locked up in a safe in a place called PHARMA. Singing the praises of Data Transparency, PHARMA reframed it as benevolent Data Sharing for further research rather than checking for malfeasance, and offered a very restricted window for the persistent few who are willing to go through the process.

Fortunately, the outrage has grown and the evidence of widespread data manipulation and corruption has accumulated to the point where people are beginning to call for action – including the NIH and the FDA! To be continued…
Mickey @ 8:19 PM

drugs@FDA visited…

Posted on Tuesday 20 September 2016

Whereas clinicaltrials.gov was designed to be a public online interface with structured data fields, drugs@FDA is a different species altogether. In clinicaltrials.gov revisited…, I joked…
While it’s discussed almost like it’s some complex governmental agency, clinicaltrials.gov is just a very large, online, searchable database of clinical trials using human subjects. It’s only a registry, not unlike the bridal registries filled out by prospective brides as the big day approaches…
I didn’t mean to trivialize it – just to describe it as a structured database. It really is part of a complex governmental agency – the National Institutes of Health. On the other hand, the web site drugs@FDA isn’t really a structured database. It’s just an Internet menu system in front of what data they have made available [which is like Swiss cheese – some really good stuff with lots of holes]. You start with a screen that looks like this:
So you wind your way through several screens to find the drug you’re looking for, ending up on a screen like this:
 
It’s the highlighted link that says “Approval History, Letters, Reviews, and Related Documents” that you came for. It leads to a screen with a table like this:
 
It’s the Reviews you’ve come to find. The rest of it is stodgy labeling revisions and moderately unintelligible letters in bureaucratese. So pick one [here’s the link to Abilify’s initial Review]. At this point, it seems like every single drug is different. Sometimes there’s nothing there [because they haven’t put anything there], particularly with newer drugs. And sometimes they’re like this Abilify selection – pages and pages of pdfs.

It may seem like I’m complaining when I describe the hit-or-miss quality of this site and its sometimes chaotic organization. Quite the contrary. I’m mentioning it for several reasons. First, my impression is that it’s not often visited, being the haunt of people who do systematic reviews rather than mere mortals. But mainly I want to say that after five or six years of repeatedly looking things up here, I’m impressed. These reviews are extensive. They’re all different in that the reviewers “follow their noses” going through the data. Sometimes they reanalyze the raw data, slicing and dicing it in ways different from the way it’s submitted. Sometimes they explore different analytic methods. But I’ve never found one where they didn’t follow the a priori protocol, or where they took the sponsor’s version and computations for granted. Their standard is low [two positive studies], but they go at it competently. The statistical reviews are often illuminating. I’ve disagreed with their conclusions at times, but never with their methodology or thoroughness. These reports are reviewed by committees who say yea or nay. Actually, so do the reviewers. And I think the final decision rests with the director of the FDA.

They’re obviously in no hurry to post their documents or to be comprehensive [did I mention Swiss cheese?], but that’s not what they’re for. And I’ve been glad for what I could get. I’m talking about this for a specific reason. The intended public interface for clinical trials is clinicaltrials.gov, an NIH database created to describe each trial’s design, its progress, and its results. But it has been purposefully underpopulated and essentially rendered impotent. The real information is inside the FDA, with its ability to see all the information [see Evidence of Clinical Effectiveness and Data Requirements For an NDA], but it is late in coming to their site and spotty for those of us who try to investigate these drugs and vet the published studies.

Two windows into clinical trial results, just across town from each other, but in some ways they might as well be at opposite poles of the planet:

Mickey @ 7:54 PM

clinicaltrials.gov revisited…

Posted on Sunday 18 September 2016

With all the reports of the NIH/FDA reforms coming out yesterday, I thought I’d dabble in the details of the various elements being discussed. Usually, one would start with an abstract discussion of how such a thing as clinicaltrials.gov came to be and what it was intended to accomplish, but this time I think it makes more sense to briefly start with what it is concretely, and then move to the loftier narrative. As the philosophers sometimes said: "existence precedes essence."

While it’s discussed almost like it’s some complex governmental agency, clinicaltrials.gov is just a very large, online, searchable database of clinical trials using human subjects [see clinicaltrials.gov]. It’s only a registry, not unlike the bridal registries filled out by prospective brides as the big day approaches. Its contents are all entered by the trial sponsors at various points along the process of doing their trial. Once you locate the trial you’re looking for, you’re offered several different ways to view the contents that relate to that specific trial:

The opening screen [full text view] shows the information gathered when a trial is registered. It tells what’s being studied and why, who’s in charge of the trial, and who’s paying for it. It briefly defines the primary and secondary outcome parameters and how they’ll be analyzed at the end. There’s a section about eligibility and another identifying the location[s] of the study sites. There are lots of dates: when it was registered, when it was completed, etc [notably, it usually doesn’t say when the study actually began].

There’s another window into the same contents [tabular view] with a completely different layout. I’m usually visiting this site to see if they played by the rules, and so I find this view more helpful. For one thing, it reports what the outcome variables were when the study was originally registered AND when the study was completed [for reasons mentioned below, this isn’t always as helpful as it ought to be]. I think this view is the one most useful to scientists, physicians, and trialists – less “fishing around” required.

The results database has been the most disappointing aspect of this enterprise, not because of its structure, but because it has been essentially ignored – even in trials where it is legally mandated. Compliance percentages have been in the teens, even for NIMH/NIH funded trials, and close to zero for the contested studies I’ve tried to look at. The structural layout is fine – the results of the primary and secondary Outcomes and the Adverse Events. But it has way too often simply said No Study Results Posted.
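For anyone who wants to check a particular trial themselves, the compliance question reduces to a single flag in the registry record. Here is a rough sketch in Python – it assumes the current ClinicalTrials.gov v2 REST API and its hasResults field, so verify the endpoint and field names against the live API documentation, since the interface has changed over the years. The NCT number shown is a made-up placeholder.

    import requests

    def has_posted_results(nct_id: str) -> bool:
        # The v2 endpoint and the "hasResults" field are assumptions to verify
        # against the current ClinicalTrials.gov API documentation.
        url = f"https://clinicaltrials.gov/api/v2/studies/{nct_id}"
        record = requests.get(url, timeout=30).json()
        return bool(record.get("hasResults", False))

    print(has_posted_results("NCT00000000"))   # made-up NCT number for illustration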

Clicking the link above the tabbed menu [History of Changes], you get a log of all the times the record has been updated along the way, and a link that says “Continue to the history of changes for this study on the ClinicalTrials.gov Archive Site.” That leads to a screen that lets you scroll through the changes made during the trial. It’s a more primitive, database-y interface, but it’s an invaluable vetting tool for exploring the changes, particularly changes in the Outcome Parameters along the way. It’s for the hardy among us.

This was a really great idea – collecting all the data about a trial in a simple public database. There are some important omissions, like the date the first patient started the trial or who is actually conducting the trial [which CRO]. Some parts have been degraded over time, like the specifics about the study sites. And the requirements have changed over time too, like who was required to use it to report results. As government databases go, the design and interface get the job done. But it has been a flop for one simple reason – they didn’t do it! Garbage In, Garbage Out [or in this case, Nothing In, Nothing Out].

Some trials weren’t registered until after they were completed [instead of before they started, as they were supposed to be]. Most were late. The results database was largely ignored – true for both commercial and government- or foundation-funded trials. It’s really a shame, because it’s a great idea. I think it’s actually a better way to report the results of clinical trials than journal articles. As I’ve said endlessly, these trials are not research; they’re product testing. So what we need to know are the basics – the results of the prespecified Outcomes and a compilation of the Adverse Events. The extra words in the journal articles are often rhetoric – part of a smoke screen or a sales pitch – and frequently, some of the essentials have gone missing.

Recently, things have improved. Results have begun to show up in a more timely manner. Trials have been registered on time. This NIH/FDA crackdown that was released yesterday has been in the works for a while and as a result, people have been "being good." The earlier honor system didn’t work. So now, with compliance, surveillance, and enforcement, maybe these new changes will allow clinicaltrials.gov to live up to the dreams of its creators. Time to pore over those articles from last week and see if they have the kind of teeth that will put an end to the corruption we’ve been subjected to for far too long:
Mickey @ 7:55 PM

a blast off…

Posted on Friday 16 September 2016


JAMA
by Kathy L. Hudson, PhD; Michael S. Lauer, MD; and Francis S. Collins, MD, PhD
September 16, 2016.
New England Journal of Medicine
by Deborah A. Zarin, M.D., Tony Tse, Ph.D., Rebecca J. Williams, Pharm.D., M.P.H., and Sarah Carr, B.A.
September 16, 2016
STAT politics
By Charles Piller
September 16, 2016
Well, speaking of launches! Francis Collins [Director of the NIH], Deborah Zarin [boss at ClinicalTrials.gov], Robert Califf [new FDA Commissioner], and Ben Goldacre [ClinicalTrials Everyman] in the STAT piece – big guns all around. And then add this to the mix:
Promoting innovation and access to health technologies
September 2016
A lot of good words in these reports. The spirit is there in all of them at first reading. It’s not yet time to pick apart those encouraging words and see if they will translate into actionable changes in the Clinical trial process. That’s for a close reading of the official documents when they’re fully available. But we can frame some important things to look for.

In the past, there have been reforms that should have worked, or at least helped. They have failed us for several reasons, but one stands out: after the trumpets stopped blaring, they were just forgotten. The people doing the trials either didn’t follow the requirements, didn’t follow them right, or didn’t follow them on time. The people in charge didn’t keep up with them or do anything in response to infractions, if they even knew about them. The reforms sounded good, but they flunked compliance, surveillance, and enforcement. So as we look over these various changes and reforms, those provisions are paramount. Without these elements, it’s just another failed exercise.

I’m just going to be glad they’re doing something, and that the agencies seem to be working together for a change. These links are here for starters, but now it’s time to read them and their official versions and see what’s behind the good words, with an eye out for compliance, surveillance, and enforcement. Feel free to join in the fun…
Mickey @ 9:29 PM