accountability III…

Posted on Tuesday 18 October 2016

"He who controls the narrative, controls the debate."

So why insist that the FDA certify the a priori declarations and analysis plan entered into ClinicalTrials.gov prior to beginning the study?

    That one is easy. The FDA is the only agency that can. Usually, the a priori Protocol and Statistical Analysis Plan have been filed with the FDA Registration and the IRB report, so the certification is no hardship. But why insist? Far and away, the commonest bit of deceit in these reports is "outcome switching" – selecting an outcome after the fact that gives the statistical results most favorable to the drug. The only way to prevent that kind of sleight of hand is to have a bona fide certified listing from before the study begins. Only the FDA has the power to get that.

So why insist that the FDA certify the entries into the ClinicalTrials.gov RESULTS database at the time of submission to the FDA?

    First, the FDA is the only agency that has direct access to that information. At the time of submission, the Sponsor has those RESULTS and can easily put them into the ClinicalTrials.gov RESULTS database to be checked during the FDA Approval process. If they’re not right, the FDA can insist that they be made right before proceeding. Again, this is a necessary check to prevent the ubiquitous "outcome switching." 

Any other reason to insert the FDA into this process?

    Absolutely! The FDA is a governmental agency tasked specifically to ensure that our pharmacopoeia is safe and populated with drugs displaying at least some degree of efficacy. There is beyond ample evidence that the current version of that process hasn’t worked. The FDA has done a fairly decent job of approval/disapproval, but has turned a blind eye to the gross distortion of many trial results in journal publications – distortions that they alone have the data and power to correct. Is that part of their charge? Absolutely! It’s false advertising in plain sight. They’re not charged with riding herd on what journals publish, but they are definitely tasked with surveillance for false advertisement, and academic journal articles have become a [if not the] major vehicle for pharmaceutical advertisement.

Why petition Congress?

    The suggested changes are fundamental. ClinicalTrials.gov would become the official interface between the FDA and the public rather than something that is either optional or treated as optional. The FDA would become accountable for both the timing and content of the information available to the public about our pharmaceuticals. Such a change would mean that the FDA would itself need oversight and authority. In our system, such things come from Congress, not the Agencies themselves. Likewise, it would make willfully ignoring the mandate to comply a potentially criminal offense.

Currently, the FDA is an unwitting part of the problem. Maybe they have to keep the actual data from clinical trials private as things stand now. But currently, they are keeping the results private as well, and that’s unacceptable. It has had disastrous consequences well known by all. If the FDA is to be the eyes and ears for the public and the medical profession, they need to tell us what they see and hear – need to be accountable for what they tell us. If they find that Paxil® Study 329 is a failed study [which they did] and it is published as positive by treating non-protocol variables as if they are Primary Outcome Variables [which it was], we shouldn’t have to wait fourteen years for confirmation [which we did]. We need to know that shortly after they know it – by law – rather than allowing the pharmaceutical industry to "control that narrative"…
Mickey @ 10:27 AM

accountability II…

Posted on Monday 17 October 2016

"He who controls the narrative, controls the debate."

I’ve always had something of a love affair with the medical literature. I think I was awed that so many physicians contributed, and felt proud to be in a profession where there was such a rich community dialog among us. And for whatever reason, I was drawn to case reports rather than disease reports. I read the latter, and the review articles, but it usually started with a case and ended there too. And I’m sure that my wandering from internal medicine to psychiatry/psychoanalysis had something to do with the case by case focus in the mental health world. That’s still true for me today, 50+ years later. I don’t do it here because of confidentiality, but I often have to actively stop myself. When I left [or was extruded from] my full time academic position, I continued to teach a lot, but my reading seemed to change. It became more focused on the areas I taught and patients I treated, whereas before I read most everything that came in front of me. I realized later that the literature had changed on me. Case reports essentially disappeared, and I let my subscriptions run out. Everything seemed like speculations about future things and was frankly boring, too unpopulated for the likes of me. So I don’t know when the drug trial articles appeared. I think I must’ve been long gone by then.

I think if I had read our Petition [that one I keep talking about] even two years ago, I wouldn’t have understood it, even though by then I had filled this blog to overflowing with complaints about the distortions in clinical trial reporting and all matters ClinicalTrials.gov. What wouldn’t I have understood? First, why the insistence on declaring the primary and secondary outcome variables and analytic methods in ClinicalTrials.gov before starting the study? And for that matter, why insist that the FDA review and certify those entries? Similarly, why insist on filling out the ClinicalTrials.gov RESULTS database before any FDA submission? And why demand that the FDA check it for concordance with the FDA information [and certify that concordance]? Further, why ask Congress to make these things part of the Law of the Land rather than asking for just a change in FDA/NIH procedures? That comes across as pretty controlling. Is it really necessary?

Looking back over this era, the majority of the FDA Approvals seem legit to me. Efficacy standards are low – two statistically significant trials. The primary charge of the FDA is safety; efficacy and usage are for the medical profession to work out. But there’s a huge loophole in the system in that the same trials that make it through the FDA as "weak sisters" look like "liquid gold" in the academic journals that publish the self-same trials. And this discordance is never addressed – actually, it’s often undetectable, or at least undetected. That’s what those two ostriches in the graphic are not seeing. How can the same data reach such divergent conclusions? Often, the answer is simple – what Ben Goldacre calls "outcome switching."

By the time a drug is in the NDA [New Drug Application] Phase III trials, it has already shown that there’s a signal that it has the desired medicinal effect and that it’s probably safe [from Phase I and Phase II trials], approved as such by an Institutional Review Board. So the likelihood that it’ll have some kind of significant result on some metric is high [if you look at enough metrics!]. But that’s not how statistical analysis works – poring over the data in search of something that suits is p-hacking. You have to select your parameters [1° and 2°] in advance. So that’s one reason for insisting that the outcome parameters be set in stone before you start anything: to guarantee that nobody has gone over and over the data to make something fit [which has happened over and over, but can’t be proved because it all happens in secret]. What if the Sponsor picked wrong? Do another study with the new choice. This loophole has been exploited quite enough, thank you!
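Just to make that loophole concrete, here’s a minimal simulation – a sketch, not anyone’s actual protocol. The numbers [10 endpoints, 100 subjects per arm, a drug with no effect whatsoever] are arbitrary assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def null_trial_finds_something(n_per_arm=100, n_endpoints=10):
    """Simulate one trial of a useless drug measured on many endpoints.
    Returns True if ANY endpoint comes out 'significant' [p < 0.05]."""
    for _ in range(n_endpoints):
        drug = rng.normal(0, 1, n_per_arm)      # no true effect
        placebo = rng.normal(0, 1, n_per_arm)   # same distribution
        _, p = stats.ttest_ind(drug, placebo)
        if p < 0.05:
            return True
    return False

n_sims = 2000
hits = sum(null_trial_finds_something() for _ in range(n_sims))
print(f"Null trials with something 'significant' to report: {hits/n_sims:.0%}")
# With 10 independent endpoints, roughly 1 - 0.95**10 ≈ 40% of trials of a
# do-nothing drug yield a reportable "result" -- unless the Primary Outcome
# was anchored a priori.
```

That arithmetic is the whole case for anchoring the 1° and 2° parameters before the blind is ever broken.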

And why so rigid about the RESULTS database? For one thing, there’s no reason for a lag. If you have done the analysis for the submission, you have the RESULTS. But beyond that, Journal Editors, peer reviewers, Doctors, Patients – all need to see those RESULTS to evaluate the submissions, articles, and advertisements that we are being bombarded with. A journal can’t deny responsibility for what it publishes if it has the FDA-certified ClinicalTrials.gov data filled out in front of it. Without that, the editors, peer reviewers, clinicians, and patients only have the submitted article to go on – and we all know that has not been a reliable source. In my specialty, I’m having trouble coming up with any major articles that didn’t have some evidence of this kind of fudging along the way, in spite of the overt support for data integrity coming from industry [the narrative under control].

But there’s more. To be continued…
Mickey @ 8:22 PM

accountability I…

Posted on Monday 17 October 2016

"He who controls the narrative, controls the debate."

In this particular American political season, one would have to be close to stage four Coma to doubt the veracity of those words. But we have plenty of examples in American medicine that make the same statement. One version that still amazes me is the notion that the raw data from a clinical trial of medication is treated as proprietary, intellectual property. The rationale is that releasing it might breach patient confidentiality, or expose commercially confidential information. They’re not "patients," they are "subjects" who volunteered for one thing. And what’s commercially confidential about their response to an intervention? Yet, in spite of the uproar, industry has held onto that right, making access to data still a rare exception, and very difficult at that. Our re-write of Study 329 was an enormous undertaking, primarily because of the restrictive window we were given to access the data. The sensible program adopted by the European Medicines Agency to release data has likewise been essentially undermined. Data Transparency is on the lips of everyone, including industry – only it just hasn’t happened in real life. The intellectual property meme continues to control the narrative.

Alastair Matheson has recently clarified another example – ghost writing. Most industry-funded clinical trials are conducted by commercial Contract Research Organizations [CROs]. The resultant data is analyzed by company statisticians working with the companies’ marketing departments, and a summary of the results is forwarded to professional medical writers who create a first draft. At that point, the recruited COI-laden KOL authors enter the review process along with the industry doctors also on the author byline. In the past, the medical writers were either unmentioned or mentioned as editorial support. When challenged as ghost-writers, the medical writers and pharma denied it. But then came the new narrative pointed out by Matheson: they claimed that if they’re mentioned in the Acknowledgements by name, they’re no longer ghosts. As absurd as that is, they’ve stuck to the explanation like super-glue.

While I believe that someday the battle for Data Transparency will finally be won, I expect it’s a long way off. The enemy has very deep pockets. But industry can’t possibly claim that the RESULTS are proprietary – they publish them themselves. However, with a little help from the regulatory agencies, industry has essentially created a situation where the RESULTS are similarly under their control, just like the data access:

  • By ignoring the requirement to declare outcomes a priori in the ClinicalTrials.gov database, they’ve undermined having a certified statement as an anchor that couldn’t be changed.
  • Even more damning, they’ve ignored even posting the RESULTS until relatively recently.
We all likely know those few studies where we can prove that they’ve changed outcomes [Paxil Study 329, CIT-MD-18, Paxil Study 352, etc]. Many of them are in the past [because it took so long to get hold of the information!]. But few people seem to realize that "outcome switching" is an everyday occurrence in the present, in articles published recently. One way we know that is through the efforts of Ben Goldacre [physician, psychiatrist, epidemiologist, researcher, journalist, author, activist, bundle-of-energy, etc]. In his COMPare study, he and his colleagues took a look:
Between October 2015 and January 2016, the COMPare team systematically checked every trial published in the top five medical journals, to see if they misreported their findings.

And what they found:

  • Trials checked: 67
  • Trials that were perfect: 9
  • Outcomes not reported: 354
  • New outcomes silently added: 357

And when they wrote the Journals:

  • Letters sent: 58
  • Letters published: 18
  • Letters unpublished [after 1 month]: 8
  • Letters rejected: 32

I encourage you to look at their website, particularly the Blog and the different reactions among the journals. It’s straightforward. Each study looked at is well documented. See also the review of the FDA requirements by Goldacre’s academic colleague. These results couldn’t be clearer.


But you probably didn’t know that changing outcomes was still that common – indeed, in the majority. You and I have been reading the narrative being handed to us, with industry falling all over itself to aver its commitment to the cause of Data Transparency and ClinicalTrials.gov compliance, stroking the largely fictitious narrative. And if you’re a denizen of that web-site like some of us, you’ve noticed that only recently has the RESULTS database been filled out on-time [for a change]. Maybe not on point, but at least on-time.

I frankly doubt that even the people at the FDA, the NIH, or ClinicalTrials.gov actually realize how common this practice of changing outcomes really is or why it’s such a game changer. And it’s likewise probable that few know why prompt a priori Registration and RESULTS posting on that site matter so much. The recent reforms by the NIH and ClinicalTrials.gov focus on the importance of using that web site and filling it out, but don’t provide for checks on its content [see what to do? and the final rule?…] and allow for unnecessary delays in populating both the outcome parameters at registration and the RESULTS. Both of those things are mission critical.

to be continued…
Mickey @ 4:49 PM

bru  bach  beck, 1956…

Posted on Sunday 16 October 2016

Mickey @ 8:00 AM

starting the ball…

Posted on Saturday 15 October 2016


New York Times
By JOHN C. MARKOWITZ
OCT. 14, 2016

The United States government recently announced its new director of the National Institute of Mental Health, Dr. Joshua Gordon. If you think that’s just bureaucracy as usual, think again. Mental health research, under the leadership of the previous director, Dr. Thomas Insel, underwent a quiet crisis, one with worrisome implications for the treatment of mental health. I hope Dr. Gordon will resolve it. For decades, the National Institute of Mental Health provided crucial funding for American clinical research to determine how well psychotherapies worked as treatments [on their own as well as when combined with medications]. This research produced empirical evidence supporting the effectiveness of cognitive behavioral therapy, interpersonal psychotherapy and other talking treatments.

But over the past 13 years, Dr. Insel increasingly shifted the institute’s focus to neuroscience, strangling its clinical research budget. Dr. Insel wasn’t wrong to be enthusiastic about the possibilities of neuroscientific research. Compared with the psychiatric diagnoses listed in the Diagnostic and Statistical Manual of Mental Disorders [D.S.M.], which can be vague and flawed, brain-based research holds out the promise of a precise and truly scientific understanding of mental illness…

In 2010, the institute introduced a system of brain diagnostics known as “research domain criteria.” These criteria discard diagnoses like post-traumatic stress disorder, examining instead phenomena such as “response to an acute threat” [i.e., fear] at various scientific levels: genes, the molecules they produce, cells, brain circuits, physiology and behavior. Establishing links up and down this ladder — linking a gene to a neurohormonal molecule, and ultimately to a behavior — produces what is called “translational” research…

Nonetheless, translational research has become virtually required for funding. Although the “neurosignature” targets of the research domain criteria are not demonstrably any more useful than D.S.M. diagnoses, and though they are far more distant from clinical symptoms and treatments, the institute favors them. As a result, clinical research has slowed to a trickle, now accounting for only 10 percent of the institute’s budget. Many clinical researchers like myself worry that this kind of research will disappear. We have too often been reluctant to voice our protest, for fear of incurring the institute’s displeasure [and losing whatever opportunities we still have for funding]…

We need both neuroscience and clinical research. I hope the institute will re-establish that balance.
The Research Domain Criteria [RDoC] have been vague at best. The idea rests on the notion that mental illnesses are biologically determined – that if we collect a large enough cohort and a database full of their biological parameters and responses, "big data" techniques will identify the elusive groupings. I assume it has been a bust, and that’s why he left. That’s based on a comment he made in a New Scientist interview as he was exiting:
Question: Are you saying Google is a better place to do mental-health research than the NIMH?
Answer: I wouldn’t quite put it that way, but I don’t think complicated problems like early detection of psychosis or finding ways to get more people with depression into optimal care are ever going to be solved solely by government or the private sector, or through philanthropy. Five years ago, the NIMH launched a big project to transform diagnosis. But did we have the analytical firepower to do that? No. If anybody has it, companies like IBM, Apple or Google do – those kinds of high-powered tech engines.
Much of what he said on leaving was like that, vaguely bitter, like the NIMH had let him down, disappointed him. Not long after arriving as Director, he announced that Psychiatry was to become Clinical Neuroscience and gradually colonized the NIMH in the manner Markowitz describes in his op-ed. It’s not an overstatement to suggest it gradually became the National Institute of whatever Tom Insel was thinking about. An even bigger problem was that when he jumped from one fad to the next, the older projects kept on – so by last year, when he left, there wasn’t much space for anyone to think in.

I think he really believes that mental illnesses are all brain disorders, a belief it would be hard for any practitioner to sustain. But then he wasn’t "any practitioner" – having never seen a patient after finishing his Residency. I always thought his notion of Psychiatry as a Clinical Neuroscience was strange since he was never in the Clinic himself. In my view, he was paradoxically a detriment to Biological Research. Rather than allow researchers to follow their own Muses, he had them boxed in following his – and his blew about in the wind from shiny object to shiny object.

The taint of Insel’s NIMH will linger long. Good on Dr. Markowitz for starting the ball rolling…
Mickey @ 12:41 PM

congressional action time, redux…

Posted on Friday 14 October 2016


IF
… at the time of Registration [before the study started], the ClinicalTrials.gov database defined the a priori Primary and Secondary Outcome Parameters and the methods by which they were to be analyzed.
AND IF
… at the end of the study [after the blind was broken], the ClinicalTrials.gov results database were populated with the results as defined at the time of registration [Primary and Secondary Outcome Parameters].
THEN
… by simply looking at the ClinicalTrials.gov site, you could quickly decide for yourself whether it was a positive or negative study.
AND IF
… you knew how to do a few simple calculations, you could generate the Effect Sizes [NNT, OR, Cohen’s d, etc] to estimate the robustness of the drug’s effect [a sketch of the arithmetic follows this list].
AND IF
… at the end of the study [after the blind was broken], the ClinicalTrials.gov results database were populated with tabulations of adverse effects and severe adverse effects, you could reach something of a conclusion about the drug’s short-term safety profile.
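For what it’s worth, those "few simple calculations" really are simple. Here’s a sketch with made-up numbers [not from any actual trial] showing the arithmetic for the NNT, the Odds Ratio, and Cohen’s d:

```python
import math

# Hypothetical counts -- illustrative only, not from any real trial
resp_drug, n_drug = 61, 100      # responders / subjects, drug arm
resp_plac, n_plac = 48, 100      # responders / subjects, placebo arm

p_d = resp_drug / n_drug
p_p = resp_plac / n_plac

# Number Needed to Treat = 1 / absolute risk difference
nnt = 1 / (p_d - p_p)

# Odds Ratio = odds of response on drug / odds of response on placebo
odds_ratio = (p_d / (1 - p_d)) / (p_p / (1 - p_p))

# Cohen's d = mean difference / pooled standard deviation
# [hypothetical rating-scale changes]
mean_d, sd_d = 12.1, 8.0
mean_p, sd_p = 9.7, 8.4
pooled_sd = math.sqrt(((n_drug - 1) * sd_d**2 + (n_plac - 1) * sd_p**2)
                      / (n_drug + n_plac - 2))
cohens_d = (mean_d - mean_p) / pooled_sd

print(f"NNT = {nnt:.1f}, OR = {odds_ratio:.2f}, Cohen's d = {cohens_d:.2f}")
# NNT ≈ 7.7, OR ≈ 1.69, d ≈ 0.29 -- numbers like these are what a
# "weak sister" looks like, whatever the abstract says.
```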

Isn’t that an overly simplistic view? What are you trying to do, put the journals out of business? Of course it’s simplistic. But RCTs are themselves simplistic. They’re not intended to be the standard for clinical medicine. They’re only designed to say something about the short-term safety of a drug and its medicinal properties. The actual worth of the drug as a therapeutic agent is to be determined in clinical use – not the heavily structured environments of a multicentered RCT.

But of course there’s a lot a journal can tell us. It can show us the longitudinal response over time. It can display the self-rated scales from the subjects themselves. But RCTs themselves are not the gold standard for anything. They’re the getting-started standard for regulators. And even at that, the gold standard for these getting-started RCTs is replication, not a single study or even a collage of studies. As an aside, we have something of a quirk in our approval process in that we base our approval on two positive studies rather than the replication of a positive study [those aren’t the same thing – see the sketch below]. And the over-valuing of single RCTs is pretty ubiquitous.
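To see why those two aren’t the same thing, here’s a back-of-envelope sketch. Suppose a marginal drug has some fixed chance of producing a "positive" trial on any given attempt [the 30% below is an arbitrary, hypothetical figure]. Genuine replication means the next trial confirms the first; "two positive studies" only means two wins, however many times you try:

```python
# Hypothetical: a marginal drug with a 30% chance of a "positive" trial
# [p < 0.05 in the right direction] on any single attempt.
p_pos = 0.30

def p_at_least_two(n, p):
    """P(at least 2 positives among n independent trials) -- binomial."""
    p_zero = (1 - p) ** n
    p_one = n * p * (1 - p) ** (n - 1)
    return 1 - p_zero - p_one

for n in (2, 4, 6, 8):
    print(f"{n} trials run -> chance of 'two positive studies': "
          f"{p_at_least_two(n, p_pos):.0%}")
# 2 trials ->  9%   [true replication is a hard bar]
# 4 trials -> 35%
# 6 trials -> 58%
# 8 trials -> 74%   [keep running trials, shelve the negatives]
```

A recent example of that over-valuing: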

PSYCHIATRICNEWS
by Aaron Levin
June 16, 2016

In the clinic, managing mental illness in young people requires subtle but significant shifts in thinking, said Karen Dineen Wagner, M.D., Ph.D., a professor and chair of psychiatry and behavioral sciences at the University of Texas Medical Branch, Galveston…

As for treatment, only two drugs are approved for use in youth by the Food and Drug Administration [FDA]: fluoxetine for ages 8 to 17 and escitalopram for ages 12 to 17, said Wagner. “The youngest age in the clinical trials determines the lower end of the approved age range. So what do you do if an 11-year-old doesn’t respond to fluoxetine?”

One looks at other trials, she said, even if the FDA has not approved the drugs for pediatric use. For instance, one clinical trial found positive results for citalopram in ages 7 to 17, while two pooled trials of sertraline did so for ages 6 to 17. Another issue with pediatric clinical trials is that 61 percent of youth respond to the drugs, but 50 percent respond to placebo, compared with 30 percent among adults, making it hard to separate effects.

When parents express anxiety about using SSRIs and ask for psychotherapy, Wagner explains that cognitive-behavioral therapy [CBT] takes time to work and that a faster response can be obtained by combining an antidepressant with CBT. CBT can teach social skills and problem-solving techniques as well. Wagner counsels patience once an SSRI is prescribed…
Note: This is the 7th time I’ve quoted this particular blurb from PSYCHIATRICNEWS [1, 2, 3, 4, 5, 6]. I guess I see it as a paradigm of sorts – a paradigm for something very, very wrong. Each time, I want to insert "Take a history!" after the second paragraph – or even better, "Refer the kid to someone who knows something about children besides how to write prescriptions!" <expletive deleted>

It’s the concreteness of Dr. Wagner’s remarks that I want to call attention to. Prozac® was approved for children and teens early in the game, based on a couple of studies [see tuning the quartet… and eyes wide shut open III…]. While its data has never been independently confirmed, the studies are accepted as showing a significant signal, though some of us would question that. A recent meta-analysis lists it as the only such drug, though it doesn’t recommend using it [see antidepressants in kids? a new meta-analysis…]. So Doctor Wagner poses the example of an 11 year old who doesn’t respond to Prozac®. She suggests going off-label and mentions two clinical trials [she was an author on both]! The first, Celexa®, is a trial finally taken down definitively on any number of grounds, including outcome switching [see the jewel in the crown…]:
by Jon Jureidini, Jay Amsterdam, and Leemon McHenry
International Journal of Risk & Safety in Medicine. 2016 28[1]:33-43.
is an analysis of…
by Karen Dineen Wagner, Adelaide S. Robb, Robert L. Findling, Jianqing Jin, Marcelo M. Gutierrez, and William E. Heydorn
American Journal of Psychiatry. 2004 161:1079-1083.
The second suggestion, Zoloft®, also comes from one of her papers:
by Wagner KD, Ambrosini P, Rynn M, Wohlberg C, Yang R, Greenbaum MS, Childress A, Donnelly C, Deas D; and the Sertraline Pediatric Depression Study Group
Journal of the American Medical Association. 2003, 290[8]:1033-41.
… which I consider one of the worst of the worst. They glued two negative trials together somewhere in mid-study[s] and declared the combo positive [?!?], and there was more [see and then there was one…]. At least this time around, she left out Paxil® Study 329 [also one of hers]. So we’re making progress.

There’s so much wrong here it’s hard for me to stay on point. Dr. Wagner takes the pronouncement of over a decade before, based on a couple of RCTs about Prozac®, as if it were gospel. And she brings up two thoroughly debunked trials from twelve and thirteen years ago as off-label alternatives – without mentioning that other similar studies were negative or that the ones she mentions are destined for the hall of the infamous. But beyond all of that, have we learned nothing more in the last decade plus? Once a clinical trial is published, does it enter some realm of infallibility in perpetuity?

back to the future:

At last I make it back to my point! While I don’t know about the two papers used for the initial approval of Prozac® because the information just isn’t available, I do know that none of the papers mentioned in this post [Celexa®, Zoloft®, Paxil®] would make it through the simple scheme proposed at the beginning. Every one of them is a gross example of switching the outcome parameters during the study, probably from peeking through the blinds or thereafter. We can never prove that, but there’s really little doubt in the mind of anyone who has looked closely. Dr. Wagner is an author on all of them except the first Prozac® study, so you’d surmise she should’ve known that.

By the way, that scheme suggested above is the essence of our proposal in The Petition [in case you haven’t signed it yet]…
Mickey @ 5:28 PM

richard twardzic 1954…

Posted on Friday 14 October 2016


a crutch for the crab

Mickey @ 10:00 AM

congressional action time…

Posted on Thursday 13 October 2016

This article [Schwartz et al] compares the results found in the ClinicalTrials.gov Results Database with those in Drugs@FDA for 100 trials supporting new drug approvals between January 2013 and July 2014. It makes little sense to look at it outside the context of a whole series of articles focusing on ClinicalTrials.gov going back in time. At the end of this post, there are seven abstracts of a representative group of such articles, arranged by year of publication [all available full text on-line]. You might just scan the conclusions before you start…
by Lisa Schwartz, Steven Woloshin, Eugene Zheng, Tony Tse, and Deborah Zarin
Annals of Internal Medicine. 2016 165:421-430.

Background: Pharmaceutical companies and other trial sponsors must submit certain trial results to ClinicalTrials.gov. The validity of these results is unclear.
Purpose: To validate results posted on ClinicalTrials.gov against publicly available U.S. Food and Drug Administration [FDA] reviews on Drugs@FDA.
Data Sources: ClinicalTrials.gov [registry and results database] and Drugs@FDA [medical and statistical reviews].
Study Selection: 100 parallel-group, randomized trials for new drug approvals [January 2013 to July 2014] with results posted on ClinicalTrials.gov [15 March 2015].
Data Extraction: 2 assessors extracted, and another verified, the trial design, primary and secondary outcomes, adverse events, and deaths.
Results: Most trials were phase 3 [90%], double-blind [92%], and placebo-controlled [73%] and involved 32 drugs from 24 companies. Of 137 primary outcomes identified from ClinicalTrials.gov, 134 [98%] had corresponding data at Drugs@FDA, 130 [95%] had concordant definitions, and 107 [78%] had concordant results. Most differences were nominal [that is, relative difference <10%]. Primary outcome results in 14 trials could not be validated. Of 1927 secondary outcomes from ClinicalTrials.gov, Drugs@FDA mentioned 1061 [55%] and included results data for 367 [19%]. Of 96 trials with 1 or more serious adverse events in either source, 14 could be compared and 7 had discordant numbers of persons experiencing the adverse events. Of 62 trials with 1 or more deaths in either source, 25 could be compared and 17 were discordant.
Limitation: Unknown generalizability to uncontrolled or cross-over trial results.
Conclusion: Primary outcome definitions and results were largely concordant between ClinicalTrials.gov and Drugs@FDA. Half the secondary outcomes, as well as serious events and deaths, could not be validated because Drugs@FDA includes only "key outcomes" for regulatory decision making and frequently includes only adverse event results aggregated across multiple trials.
In response to the AIDS activists’ insistence on rapid access to drug trials, in 1988 Congress mandated the creation of the AIDS Clinical Trials Information System [ACTIS]. It was a good idea, and over time it led to the creation of ClinicalTrials.gov, an online database maintained by the NIH National Library of Medicine. By 2007 [Food and Drug Administration Amendments Act of 2007], both Registration and Results reporting on ClinicalTrials.gov became mandatory for all FDA-regulated drugs.

But they just didn’t do it. Most trials were registered, but often long after they started, and the ClinicalTrials.gov Results Database was the loneliest database on the Internet. There was no enforcement, even from Journal Editors who also claimed to require registration. I’ve collected a few of the REFERENCES below that document the problems from several different angles. But even done right [Registered a priori with Results submitted], there are still difficulties. So this system created to promote public transparency just hasn’t worked, and the consequences have become the stuff of legends. Recently, the NIH, ClinicalTrials.gov, and the FDA rolled out a set of reforms designed to correct the problems in the ClinicalTrials.gov system. While the effort is appreciated, it was primarily focused on compliance and enforcement – more a commitment to actually do what was supposed to happen in 2007, but uninformed by the multiple real problems in the system today. For example, nobody checks the submitted information, and the deadlines for submission mean that, at best, the information comes after it’s actually needed. On-time is too-late.

So we have a proposal to Congress known as The Petition. It has several parts:

  • At the time a clinical trial is Registered on ClinicalTrials.gov, the Primary and Secondary Outcome Parameters and the Methods by which they will be analyzed need to be included – no lag time. We ask that the FDA certify that they are concordant with the Protocol and Statistical Analysis Plan, which they usually have as part of the FDA Registration. The "why?" is obvious. The most common method of spinning the results is to change the outcome variables. Requiring them in black and white at the time of Registration anchors them so any attempt to change them will be obvious. And "why the FDA?" It is the agency responsible to us to ensure the integrity of the scientific basis of our pharmacopoeia. It’s their job.
  • After the study is completed, the ClinicalTrials.gov Results Database must be completed at the time of any submission to the FDA, with the results of the prespecified outcomes and analyses clearly labeled. Other outcomes listed should be identified as "non-protocol" or "exploratory." Again, the FDA needs to certify that the ClinicalTrials.gov results are concordant with those submitted to the FDA. No lag time. If they have results to submit to the FDA, they have them to submit to ClinicalTrials.gov as well.
  • While the FDA/NIH has no place in telling Journals what to publish, any publication or advertisement that results from a trial and uses non-protocol outcomes should make that clear; otherwise it will be considered false advertisement of an FDA-regulated drug. Academic Journal articles of clinical trials are advertisements.
  • If the FDA recommends further trials as part of an approval, it is the responsibility of the FDA to ensure that these trials are done expeditiously and follow the same procedures as the original NDA.
Congress instantiated ClinicalTrials.gov in 1997, trying to add a public interface to the clinical trial system. In 2007, Congress mandated that ClinicalTrials.gov become the required public interface to our clinical trial system, but that mandate was ignored until the complaints were so loud as to be heard around the world. It’s time for Congress to finally close the deal in 2017. One might look at the paper above [Schwartz et al] as a sign that compliance in recent years has improved. A much more realistic interpretation would be that they are now being so closely watched that they’ve finally begun to comply. We absolutely must have that kind of oversight built into this system to ensure that the rampant corruption we’ve had in plain sight comes to an immediate halt. Look at the sample articles in the REFERENCES to see what happens when no one is looking.

There is no reason for any lag time in posting the needed information at the time of Registration or Submission. They have it at hand, and we need it. Editors, Peer Reviewers, readers – everyone has the right to see the results certified by the responsible party. That responsible party is the FDA. They are part of the problem in colluding with industry to keep the raw data secret. Perhaps they have no choice. But that sure doesn’t apply to the a priori Protocol or the Protocol-directed Results. So sign The Petition now and send it to everyone you can think of. It’s time for Congress to act!


REFERENCES

2009
by Mathieu S, Boutron I, Moher D, Altman DG, and Ravaud P.
JAMA. 2009 302[9]:977-984.

Context: As of 2005, the International Committee of Medical Journal Editors required investigators to register their trials prior to participant enrollment as a precondition for publishing the trial’s findings in member journals.
Objective: To assess the proportion of registered trials with results recently published in journals with high impact factors; to compare the primary outcomes specified in trial registries with those reported in the published articles; and to determine whether primary outcome reporting bias favored significant outcomes.
Data Sources: MEDLINE via PubMed was searched for reports of randomized controlled trials [RCTs] in 3 medical areas [cardiology, rheumatology, and gastroenterology] indexed in 2008 in the 10 general medical journals and specialty journals with the highest impact factors.
Data Extraction: For each included article, we obtained the trial registration information using a standardized data extraction form.
Results: Of the 323 included trials, 147 [45.5%] were adequately registered [ie, registered before the end of the trial, with the primary outcome clearly specified]. Trial registration was lacking for 89 published reports [27.6%], 45 trials [13.9%] were registered after the completion of the study, 39 [12%] were registered with no or an unclear description of the primary outcome, and 3 [0.9%] were registered after the completion of the study and had an unclear description of the primary outcome. Among articles with trials adequately registered, 31% [46 of 147] showed some evidence of discrepancies between the outcomes registered and the outcomes published. The influence of these discrepancies could be assessed in only half of them and in these statistically significant results were favored in 82.6% [19 of 23].
Conclusion: Comparison of the primary outcomes of RCTs registered with their subsequent publication indicated that selective outcome reporting is prevalent.
by Joseph S. Ross, Gregory K. Mulvey, Elizabeth M. Hines, Steven E. Nissen, and Harlan M. Krumholz
PLoS Medicine. 2009 6[9]: e1000144.

Background: ClinicalTrials.gov is a publicly accessible, Internet-based registry of clinical trials managed by the US National Library of Medicine that has the potential to address selective trial publication. Our objectives were to examine completeness of registration within ClinicalTrials.gov and to determine the extent and correlates of selective publication.
Methods and Findings: We examined reporting of registration information among a cross-section of trials that had been registered at ClinicalTrials.gov after December 31, 1999 and updated as having been completed by June 8, 2007, excluding phase I trials. We then determined publication status among a random 10% subsample by searching MEDLINE using a systematic protocol, after excluding trials completed after December 31, 2005 to allow at least 2 y for publication following completion. Among the full sample of completed trials [n=7,515], nearly 100% reported all data elements mandated by ClinicalTrials.gov, such as intervention and sponsorship. Optional data element reporting varied, with 53% reporting trial end date, 66% reporting primary outcome, and 87% reporting trial start date. Among the 10% subsample, less than half [311 of 677, 46%] of trials were published, among which 96 [31%] provided a citation within ClinicalTrials.gov of a publication describing trial results. Trials primarily sponsored by industry [40%, 144 of 357] were less likely to be published when compared with nonindustry/nongovernment sponsored trials [56%, 110 of 198; p<0.001], but there was no significant difference when compared with government sponsored trials [47%, 57 of 122; p=0.22]. Among trials that reported an end date, 75 of 123 [61%] completed prior to 2004, 50 of 96 [52%] completed during 2004, and 62 of 149 [42%] completed during 2005 were published [p=0.006].
Conclusions: Reporting of optional data elements varied and publication rates among completed trials registered within ClinicalTrials.gov were low. Without greater attention to reporting of all data elements, the potential for ClinicalTrials.gov to address selective publication of clinical trials will be limited.
2011
by Michael R. Law, Yuko Kawasumi, and Steven G. Morgan
Health Affairs. 2011 30[12]:2338-2345.

Clinical trial registries are public databases created to prospectively document the methods and measures of prescription drug studies and retrospectively collect a summary of results. In 2007 the US government began requiring that researchers register certain studies and report the results on ClinicalTrials.gov, a public database of federally and privately supported trials conducted in the United States and abroad. We found that although the mandate briefly increased trial registrations, 39 percent of trials were still registered late after the mandate’s deadline, and only 12 percent of completed studies reported results within a year, as required by the mandate. This result is important because there is evidence of selective reporting even among registered trials. Furthermore, we found that trials funded by industry were more than three times as likely to report results than were trials funded by the National Institutes of Health. Thus, additional enforcement may be required to ensure disclosure of all trial results, leading to a better understanding of drug safety and efficacy. Congress should also reconsider the three-year delay in reporting results for products that have been approved by the Food and Drug Administration and are in use by patients.
by Deborah A. Zarin, Tony Tse, Rebecca J. Williams, Robert M. Califf, and Nicholas C. Ide
New England Journal of Medicine. 2011 364:852-860.

Background: The ClinicalTrials.gov trial registry was expanded in 2008 to include a database for reporting summary results. We summarize the structure and contents of the results database, provide an update of relevant policies, and show how the data can be used to gain insight into the state of clinical research.
Methods: We analyzed ClinicalTrials.gov data that were publicly available between September 2009 and September 2010.
Results: As of September 27, 2010, ClinicalTrials.gov received approximately 330 new and 2000 revised registrations each week, along with 30 new and 80 revised results submissions. We characterized the 79,413 registry records and 2178 results records available as of September 2010. From a sample cohort of results records, 78 of 150 [52%] had associated publications within 2 years after posting. Of results records available publicly, 20% reported more than two primary outcome measures and 5% reported more than five. Of a sample of 100 registry record outcome measures, 61% lacked specificity in describing the metric used in the planned analysis. In a sample of 700 results records, the mean number of different analysis populations per study group was 2.5 [median, 1; range, 1 to 25]. Of these trials, 24% reported results for 90% or less of their participants.
Conclusions: ClinicalTrials.gov provides access to study results not otherwise available to the public. Although the database allows examination of various aspects of ongoing and completed clinical trials, its ultimate usefulness depends on the research community to submit accurate, informative data.
2012
by Christopher J Gill
BMJ Open. 2012 2:e001186

Context: The Food and Drug Administration Modernization Act of 1997 [FDAMA] and the FDA Amendment Act of 2007 [FDAAA], respectively, established mandates for registration of interventional human research studies on the website clinicaltrials.gov [CTG] and for posting of results of completed studies.
Objective: To characterise, contrast and explain rates of compliance with ontime registration of new studies and posting of results for completed studies on CTG.
Design: Statistical analysis of publically available data downloaded from the CTG website.
Participants: US studies registered on CTG since 1 November 1999, the date when the CTG website became operational, through 24 June 2011, the date the data set was downloaded for analysis.
Main outcome measures: Ontime registration [within 21 days of study start]; average delay from study start to registration; proportion of studies posting their results from within the group of studies listed as completed on CTG.
Results: As of 24 June 2011, CTG contained 54,890 studies registered in the USA. Prior to 2005, an estimated 80% of US studies were not being registered. Among registered studies, only 55.7% registered within the 21-day reporting window. The average delay on CTG was 322 days. Between 28 September 2007 and 23 June 2010, 28% of industry-funded intervention studies at Phase II or beyond posted their study results on CTG, compared with 8.4% for studies without industry funding [RR 4.2, 95% CI 3.7 to 4.8]. Factors associated with posting of results included exclusively paediatric studies [adjusted OR [AOR] 2.9, 95% CI 2.1 to 4.0], and later phase clinical trials [relative to Phase II studies, AOR for Phase III was 3.4, 95% CI 2.8 to 4.1; AOR for Phase IV was 6.0, 95% CI 4.8 to 7.6].
Conclusions: Non-compliance with FDAMA and FDAAA appears to be very common, although compliance is higher for studies sponsored by industry. Further oversight may be required to improve compliance.
2013
by Jones CW, Handler L, Crowell KE, Keil LG, Weaver MA, and Platts-Mills TF.
British Medical Journal. 2013 347:f6104.

Objective: To estimate the frequency with which results of large randomized clinical trials registered with ClinicalTrials.gov are not available to the public.
Design: Cross sectional analysis
Setting: Trials with at least 500 participants that were prospectively registered with ClinicalTrials.gov and completed prior to January 2009.
Data Sources: PubMed, Google Scholar, and Embase were searched to identify published manuscripts containing trial results. The final literature search occurred in November 2012. Registry entries for unpublished trials were reviewed to determine whether results for these studies were available in the ClinicalTrials.gov results database.
Main Outcome Measures: The frequency of non-publication of trial results and, among unpublished studies, the frequency with which results are unavailable in the ClinicalTrials.gov database.
Results: Of 585 registered trials, 171 [29%] remained unpublished. These 171 unpublished trials had an estimated total enrollment of 299,763 study participants. The median time between study completion and the final literature search was 60 months for unpublished trials. Non-publication was more common among trials that received industry funding [150/468, 32%] than those that did not [21/117, 18%], P=0.003. Of the 171 unpublished trials, 133 [78%] had no results available in ClinicalTrials.gov.
Conclusions: Among this group of large clinical trials, non-publication of results was common and the availability of results in the ClinicalTrials.gov database was limited. A substantial number of study participants were exposed to the risks of trial participation without the societal benefits that accompany the dissemination of trial results.
2014
by Jessica E. Becker, Harlan M. Krumholz, Gal Ben-Josef, and Joseph S. Ross
JAMA. 2014 311[10]:1063-1065.

The 2007 Food and Drug Administration [FDA] Amendments Act expanded requirements for ClinicalTrials.gov, a public clinical trial registry maintained by the National Library of Medicine, mandating results reporting within 12 months of trial completion for all FDA-regulated medical products. Reporting of mandatory trial registration information on ClinicalTrials.gov is fairly complete, although there are concerns about its specificity; optional trial registration information is less complete. To our knowledge, no studies have examined reporting and accuracy of trial results information. Accordingly, we compared trial information and results reported on ClinicalTrials.gov with corresponding peer-reviewed publications.
Methods: We conducted a cross-sectional analysis of clinical trials for which the primary results were published between July 1, 2010, and June 30, 2011, in Medline-indexed, high-impact journals [impact factor ≥10; Web of Knowledge, Thomson Reuters] and that were registered on ClinicalTrials.gov and reported results. For each trial, we assessed reporting of the following results information on ClinicalTrials.gov and corresponding publications and compared reported information in both sources: cohort characteristics [enrollment and completion, age/sex demographics], trial intervention, and primary and secondary efficacy end points and results. Results information was considered concordant if the described end point, time of ascertainment, and measurement scale matched. Reported results were categorized as concordant [ie, numerically equal], discordant [ie, not numerically equal], or could not be compared [ie, reported numerically in one, graphically in the other]. For discordant primary efficacy end points, we determined whether the discrepancy altered study interpretation. Descriptive analyses were performed using Excel [version 14.3.1, Microsoft].
Results: We identified 96 trials reporting results on ClinicalTrials.gov that were published in 19 high-impact journals. For 70 trials [73%], industry was the lead funder. The most common conditions studied were cardiovascular disease, diabetes, and hyperlipidemia [n = 21; 23%]; cancer [n = 20; 21%]; and infectious disease [n = 19; 20%]. Trials were most frequently published by New England Journal of Medicine [n = 23; 24%], Lancet [n = 18; 19%], and JAMA [n = 11; 12%]. Cohort, intervention, and efficacy end point information was reported for 93% to 100% of trials in both sources [Table 1]. However, 93 of 96 trials had at least 1 discordance among reported trial information or reported results.
Among trials reporting each cohort characteristic and trial intervention information, discordance ranged from 2% to 22% and was highest for completion rate and trial intervention, for which different descriptions of dosages, frequencies, or duration of intervention were common.
There were 91 trials defining 156 primary efficacy end points [5 trials defined only primary safety end points], 132 [85%] of which were described in both sources, 14 [9%] only on ClinicalTrials.gov, and 10 [6%] only in publications. Among 132 end points described in both sources, results for 30 [23%] could not be compared and 21 [16%] were discordant. The majority [n = 15] of discordant results did not alter trial interpretation, although for 6, the discordance did [Table 2]. Overall, 81 of 156 [52%] primary efficacy end points were described in both sources and reported concordant results.
There were 96 trials defining 2089 secondary efficacy end points, 619 [30%] of which were described in both sources, 421 [20%] only on ClinicalTrials.gov, and 1049 [50%] only in publications. Among 619 end points described in both sources, results for 228 [37%] could not be compared, whereas 53 [9%] were discordant. Overall, 338 of 2089 [16%] secondary efficacy end points were described in both sources and reported concordant results.
Discussion: Among clinical trials published in high-impact journals that reported results on ClinicalTrials.gov, nearly all had at least 1 discrepancy in the cohort, intervention, or results reported between the 2 sources, including many discordances in reported primary end points. For discordances observed when both the publication and ClinicalTrials.gov reported the same end point, possible explanations include reporting and typographical errors as well as changes made during the course of the peer review process. For discordances observed when one source reported a result but not the other, possible explanations include journal space limitations and intentional dissemination of more favorable end points and results in publications.
Our study was limited to a small number of trials that were not only registered and reported results, but also published in high-impact journals. However, because articles published in high-impact journals are generally the highest-quality research studies and undergo more rigorous peer review, the trials in our sample likely represent best-case scenarios with respect to the quality of results reporting. Our findings raise questions about accuracy of both ClinicalTrials.gov and publications, as each source’s reported results at times disagreed with the other. Further efforts are needed to ensure accuracy of public clinical trial result reporting efforts.
Mickey @ 8:41 PM

who’d have thought…

Posted on Thursday 13 October 2016

Nobel Prize Literature 2016 50th Anniversary
Mickey @ 8:44 AM

rct, ebm, and archie cochrane…

Posted on Tuesday 11 October 2016

I’ve run out of ways to say how far away from mainstream medicine I drifted for the last 20 years of my medical career, or how much coming back to it felt like waking up 25 years later in a strange land. Every time I say that, it sounds melodramatic and exaggerated – but that’s not at all how it feels. For example, I had never directly heard the term evidence-based medicine used to describe a discipline or a movement.  As a matter of fact, one of my earlier blogs was about encountering a particularly concrete example of EBM [Evidence-Based Medicine] and being dumbfounded by the article [Barriers to implementation of a computerized decision support system for depression: an observational report on lessons learned in "real world" clinical settings]. I ended up writing a little series of blogs trying to figure it out [evidence-based medicine I… 28 Jan 2011]. Ironically, it was in that same article that I encountered the first example of a multi-paragraph declaration of conflicts of interest [which was even more dumbfounding!].

Back then, I drew a little diagram to illustrate what I was reading. A structured interview [SCID] led to a diagnosis [DSM-IV]. Enter that into Dr. Madhukar Trivedi‘s computer program and it spit out an evidence-based treatment informed by the latest RCTs [Randomized Clinical Trials]. The evidence-based treatments for depression provided were either the second generation antidepressants or CBT [Cognitive Behavior Therapy]. And then you went round and round the process, hopefully iterating towards success. The actual article I was reading had Dr. Trivedi‘s reflections about why the clinicians ignored his system unless he was actively looking over their shoulder. I had no difficulty at all understanding that myself [but I’ll fight the temptation to say why yet again]. But that was then, and this is now.

Something else I encountered for the first time on awakening – the Cochrane Collaboration and their Systematic Reviews. Founded in 1993, it’s a virtual army of thousands of volunteer scientists who publish comprehensive meta-analyses of RCTs including an evaluation of the scientific rigor of each trial. It’s an invaluable resource that counteracted some of my horror on discovering so many jury-rigged RCTs corrupting the medical literature, particularly in my specialty of psychiatry.

It was named after Archie Cochrane, an early advocate of RCTs and something of a grandfather to the general idea of Evidence-Based Medicine later introduced by Guyatt G, Cairns J, Churchill D, et al. in Evidence-based medicine. A new approach to teaching the practice of medicine [JAMA 1992 268:2420-5]:
Archibald Leman Cochrane [12 January 1909 – 18 June 1988] was a Scottish doctor noted for his book Effectiveness and Efficiency: Random Reflections on Health Services. His advocacy of randomized controlled trials eventually led to the development of the Cochrane Library database of systematic reviews, the establishment of the UK Cochrane Centre in Oxford and the international Cochrane Collaboration.
One final introductory remark. In the course of the life of a scientific paradigm, there comes a point where something that was a good idea at its inception has been turned into dogma, and then it runs out of gas as the exceptions and shortcomings become so prominent that people begin to question if it was ever a good idea in the first place – a phenomenon known as paradigm exhaustion [I would argue until I’m blue in the face that Dr. Trivedi’s scheme for treating something as complicated as human depression using a simple computerized algorithm is as fine an example as you’ll ever find of pushing such a paradigm beyond its limits]. In the period of paradigm exhaustion, one often finds a flurry of articles that go back to the concept’s roots to show that the failed end-stage elaborations were never part of what the originator had in mind. In an area familiar to me, there’s a surprisingly robust literature returning to Freud’s writing to illustrate how far afield some of the latter day saints had wandered. It’s beginning to happen with Robert Spitzer’s thoughts about his DSM-III, what he really said. Such reflections are often called a Renaissance, after the period when Europe rediscovered its history.

Who would want to argue with the idea of looking for a scientific way to answer questions in medicine? Or asking "what is the evidence?" when someone suggests some course of action? What’s the alternative? But who could’ve imagined that simple ideas like Randomized Clinical Trials [RCTs] or Evidence-Based Medicine [EBM] could’ve become vehicles for so much corruption in some segments of medicine, creating a virtual superhighway for the commercial contamination of our literature? This article from the University of Oslo Medicine/Humanities faculty filled in a piece of that story that I didn’t really know much of anything about. And it’s an important history. I don’t know that this piece will usher in a Renaissance, as it’s hidden away in a niche journal behind a pay wall, but it is certainly clarifying and worth putting some effort into getting hold of [I’m going to write the authors to see if there’s a free full-text version available for general consumption]. The abstract only paints some broad strokes…

by Clemet Askheim, Tony Sandset, and Eivind Engebretsen
Medical Humanities Online First, 6 October 2016

Abstract: Over the last 20 years, the evidence-based medicine [EBM] movement has sought to develop standardised approaches to patient treatment by drawing on research results from randomised controlled trials [RCTs]. The Cochrane Collaboration and its eponym, Archie Cochrane, have become symbols of this development, and Cochrane’s book Effectiveness and Efficiency from 1972 is often referred to as the first sketch of what was to become EBM. In this article, we claim that this construction of EBM’s historical roots is based on a selective reading of Cochrane’s text. Through a close reading of this text, we show that the principal aim of modern EBM, namely to warrant clinical decisions based on evidence drawn from RCTs, is not part of Cochrane’s original project. He had more modest ambitions for what RCTs can accomplish, and, more importantly, he was more concerned with care and equality than are his followers in the EBM movement. We try to reconstruct some of Cochrane’s lost legacy and to articulate some of the important silences in Effectiveness and Efficiency. From these clues it might be possible, we argue, to remodel EBM in a broader, more pluralistic, more democratic and less authoritarian manner.

UPDATE: I wrote author Eivind Engebretsen who replied:
Mickey @ 8:29 PM