non·random missing·ness…

Posted on Friday 27 December 2013

In the last post [almost inevitable…] proposing that the CAFE study was cloned from the CATIE study without considering the different populations being studied, I mentioned the many similarities between them. Well here’s another similarity:
In spite of their age, neither study had results posted on ClinicalTrials.gov…
Despite Law, Fewer Than One In Eight Completed Studies Of Drugs And Biologics Are Reported On Time On ClinicalTrials.gov
Health Affairs
by Michael R. Law, Yuko Kawasumi and Steven G. Morgan
December 2011

Clinical trial registries are public databases created to prospectively document the methods and measures of prescription drug studies and retrospectively collect a summary of results. In 2007 the US government began requiring that researchers register certain studies and report the results on ClinicalTrials.gov, a public database of federally and privately supported trials conducted in the United States and abroad. We found that although the mandate briefly increased trial registrations, 39 percent of trials were still registered late after the mandate’s deadline, and only 12 percent of completed studies reported results within a year, as required by the mandate. This result is important because there is evidence of selective reporting even among registered trials. Furthermore, we found that trials funded by industry were more than three times as likely to report results as were trials funded by the National Institutes of Health. Thus, additional enforcement may be required to ensure disclosure of all trial results, leading to a better understanding of drug safety and efficacy. Congress should also reconsider the three-year delay in reporting results for products that have been approved by the Food and Drug Administration and are in use by patients.
How often do US-based human subjects research studies register on time, and how often do they post their results? A statistical analysis of the ClinicalTrials.gov database
by Christopher J Gill
British Medical Journal – Open. 2012 2:e001186.

Context: The Food and Drug Administration Modernization Act of 1997 [FDAMA] and the FDA Amendment Act of 2007 [FDAAA], respectively, established mandates for registration of interventional human research studies on the ClinicalTrials.gov website [CTG] and for posting of results of completed studies.
Objective: To characterise, contrast and explain rates of compliance with on-time registration of new studies and posting of results for completed studies on CTG.
Design: Statistical analysis of publicly available data downloaded from the CTG website.
Participants: US studies registered on CTG since 1 November 1999, the date when the CTG website became operational, through 24 June 2011, the date the data set was downloaded for analysis.
Main outcome measures: On-time registration [within 21 days of study start]; average delay from study start to registration; proportion of studies posting their results from within the group of studies listed as completed on CTG.

Results: As of 24 June 2011, CTG contained 54,890 studies registered in the USA. Prior to 2005, an estimated 80% of US studies were not being registered. Among registered studies, only 55.7% registered within the 21-day reporting window. The average delay on CTG was 322 days. Between 28 September 2007 and 23 June 2010, 28% of industry-funded intervention studies at Phase II or beyond posted their study results on CTG, compared with 8.4% of studies without industry funding [RR 4.2, 95% CI 3.7 to 4.8]. Factors associated with posting of results included exclusively paediatric studies [adjusted OR [AOR] 2.9, 95% CI 2.1 to 4.0], and later phase clinical trials [relative to Phase II studies, AOR for Phase III was 3.4, 95% CI 2.8 to 4.1; AOR for Phase IV was 6.0, 95% CI 4.8 to 7.6].

Conclusions: Non-compliance with FDAMA and FDAAA appears to be very common, although compliance is higher for studies sponsored by industry. Further oversight may be required to improve compliance.
Non-publication of large randomized clinical trials: cross sectional analysis
by Christopher W Jones, Lara Handler, Karen E Crowell, Lukas G Keil, Mark A Weaver, and Timothy F Platts-Mills
British Medical Journal. 2013 347:f6104.

Objective To estimate the frequency with which results of large randomized clinical trials registered with ClinicalTrials.gov are not available to the public.
Setting Trials with at least 500 participants that were prospectively registered with ClinicalTrials.gov and completed prior to January 2009.
Data sources PubMed, Google Scholar, and Embase were searched to identify published manuscripts containing trial results. The final literature search occurred in November 2012. Registry entries for unpublished trials were reviewed to determine whether results for these studies were available in the results database.
Main outcome measures The frequency of non-publication of trial results and, among unpublished studies, the frequency with which results are unavailable in the ClinicalTrials.gov results database.
Results Of 585 registered trials, 171 [29%] remained unpublished. These 171 unpublished trials had an estimated total enrollment of 299 763 study participants. The median time between study completion and the final literature search was 60 months for unpublished trials. Non-publication was more common among trials that received industry funding [150/468, 32%] than those that did not [21/117, 18%], P=0.003. Of the 171 unpublished trials, 133 [78%] had no results available in ClinicalTrials.gov.
Conclusions Among this group of large clinical drug trials, non-publication of results was common and the availability of results in the ClinicalTrials.gov database was limited. A substantial number of study participants were exposed to the risks of trial participation without the societal benefits that accompany the dissemination of trial results.
The Principal Investigator on both of these studies [CATIE and CAFE] is Jeffrey Lieberman, Chairman of Psychiatry at Columbia and current President of the APA. He’s got these two Clinical Trials published in high impact journals, but he hasn’t bothered to report the results in ClinicalTrials.gov. I suppose he has an excuse in that the absolute requirement for publication of results didn’t become law until 2007, so he didn’t have to publish them even though it was recommended. But if he wanted to do something for the image of psychiatry [something he says repeatedly is important to him], he could get those results posted. The absolute requirement for publishing results is now 6 years old as law, yet I don’t know of an example where it has yet been enforced, and the studies up top make it clear how negligent the investigators have been. As I pointed out earlier [also stupid…], maybe PHARMA wouldn’t be in such hot water with Data Transparency if they had followed the mandates of that 2007 law.

A year or so ago, I reviewed the results database on ClinicalTrials.gov [starts with eyes wide shut open I…]. It’s kind of thin – more summary than raw data. But still, it would’ve been helpful. It would’ve given us the outcome of all those unpublished studies like Seroquel Study 15 [15 years of study 15 [and counting]…], the two negative trials of paroxetine in adolescents that weren’t published until the patent expired [an addendum…], or the missing Zoloft studies [zoloft: the approval I… etc.]. I doubt it would help us vet studies like Paxil Study 329. For that one, you’d need the raw data to confirm suspicions. But that’s all speculation, as the articles listed above show. The requirements for results publication on ClinicalTrials.gov have been essentially ignored [without consequence].

Their track record on ClinicalTrials.gov comes to mind whenever I hear the PHARMA people trying to negotiate their way into holding on to control of data access in the face of AllTrials, making promises of various kinds. They haven’t even complied with the law of the land. The only real recourse is obligatory raw data transparency as a requirement for FDA submissions and for publication in peer reviewed journals. Anything less than that will surely follow the path of the ClinicalTrials.gov results database – non·random missing·ness…
