Usually when I write a blog post, I know what I want to say. I don’t like it when others wander around, so I try not to do it myself. But that’s not what happened with wolf alert! wolf alert!…. A coauthor on the Paxil Study 329 article had forwarded the PLOS article. It was a short, simple piece, but I thought it might be a springboard to contrast the European Medicines Agency’s [EMA] approach to Data Transparency with the more chaotic efforts here in the US. I particularly wanted to say something about the notion that the ClinicalTrials.gov site and its Results Database were resources. They may become that, but they sure aren’t right now. I wanted to mention two things: results are frequently missing or out of date, and even when they are present, they aren’t checked for completeness. Then I wanted to talk about the advantages of the EMA plan to release all the information submitted for approval. A short post about a simple little article.
But I couldn’t seem to finish it. I’d write something and wander off. Coming back, I’d erase what I had written and start over. That’s just not my M.O., and I went to bed with it open on my desktop. When I looked at it in the morning, I decided that I might be dawdling because I hadn’t really read the IOM Report, so I started reading. It only took a couple of pages to get to the part where they listed who was on the Committee. And my musing about my dawdling came to an end. I had finally gotten to what was bothering me. I dashed off the post [wolf alert! wolf alert!…], having other things to do in the afternoon, and picked it up again when I got home later in the evening.
PLOS Medicine
National Library of Medicine, National Institutes of Health
by Deborah A. Zarin and Tony Tse
January 19, 2016

A newly published reanalysis, part of the Restoring Invisible and Abandoned Trials [RIAT] initiative, was based on access to original case report forms for 34% of the 275 participants. These highly granular IPD datasets enabled the researchers to recategorize certain adverse events that they determined had been miscategorized originally [e.g., “mood lability” rather than the more serious “suicidality”]. The reanalysis concluded that Study 329 did not show either efficacy or safety.
How Would the Problems of Study 329 Be Addressed by the Current TRS?
It would be an oversimplification to conclude that this reanalysis demonstrates the need to make IPD for all trials available. A more nuanced look at the specific problems is useful. Many of the concerns about Study 329 and the other Paxil studies might have been addressed if current policies regarding registration and results reporting had been in existence. The key issue that specifically required access to IPD was the detection of miscategorization of some adverse events in the original report…
Key Issue: Detection of selective reporting bias of efficacy and safety findings in the published results of Study 329, unacknowledged changes in outcome measures, and other issues.
    Prospective Registration: Archival registration information would have allowed for the detection of unacknowledged changes in prespecified outcome measures and detection of nonprespecified outcome measures reported as statistically significant.
    Summary Results Reporting: Results database entries would have provided access to "minimum reporting set" including all prespecified outcome measures and all serious adverse events.
    Summary Results Reporting: Structured reporting devoid of interpretation or conclusions would have made summary data publicly available while avoiding the possibility of spinning the results.

Key Issue: Invalid and unacknowledged categorization of certain adverse events, resulting in the underreporting of suicidality.
    Sharing Highly Granular IPD and Documents [CRFs]: Access to high-granularity IPD enabled the elucidation of data analytic decisions that had not been publicly disclosed; reanalysis was possible with different methods of categorizing adverse events.

It is important to note that this illuminating reanalysis required access to the highly detailed IPD available in the original CRFs, represented by the far-left side of the x-axis in Fig 1. However, recent high-profile proposals for the sharing of IPD might not have added any clarity in the case of the Paxil studies in children beyond what could have been achieved with the optimal use of a registry and results database [i.e., two foundational levels of the pyramid in Fig 2]. The reason is that journal publication serves as the “trigger” for IPD release in many of these proposals, which could not possibly mitigate biases resulting from selective publication in the first place [i.e., IPD from unpublished trials would be exempt from sharing requirements]. In addition, such proposed IPD policies call for the release of only the “coded” or “analyzable” dataset, which would not have allowed for the detection of miscategorization or the recategorization of the adverse events. Finally, such proposals would only require the sharing of a subset of IPD and documents for those aggregate data reported in the publication and not the full dataset, precluding secondary analyses intended to go beyond validation and reproducibility of the original publication.
- the a priori Protocol: While the ClinicalTrials.gov write-up for a trial usually contains the PRIMARY and SECONDARY OUTCOME VARIABLES, it doesn’t include the full a priori protocol. It needs to be emphasized that this is a definitive declaration of the outcome variables, a harbinger of what’s going to be reported in the Results Database on the completion of the study [the first sketch after this list shows one way to pull those declarations programmatically]. While some debate whether this declaration is "binding" [whether outcomes can be changed along the way, as they were in Paxil Study 329], it’s the only guard we have against someone running every imaginable analysis until they find one they like.
- the Results Database: First, as I mentioned, this is the single most ignored requirement on the planet. And filling it out is no big deal. I would propose that one requirement for any submission to the FDA for approval, or perhaps even for publication in a journal, be a completed entry in the ClinicalTrials.gov Results Database [the same sketch below also checks whether results have been posted].
- the Statistical Analysis Plan: My part of the Study 329 RIAT article was primarily the efficacy statistical analysis. And while the paper’s point that "the key issue that specifically required access to IPD was the detection of miscategorization of some adverse events in the original report" was indeed the central focus of our reanalysis, there was a less obvious but important finding in the efficacy analysis that required the full IPD to resolve. The original paper skipped the omnibus ANOVA before making pairwise comparisons [see the second sketch after this list], and there were many other gross statistical issues with the article’s "rogue variables" that didn’t make it into our paper, but were reported here [see study 329 vii – variable variables?… thru study 329 xii – premature hair loss…]. While it didn’t matter in this instance, in many other cases it could be absolutely crucial. If ClinicalTrials.gov is to be the definitive Summary Results Reporting mechanism, the details of the statistical analytic methods need to be specified in the original ClinicalTrials.gov write-up.
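First, the registry sketch mentioned above. This is a hedged sketch under stated assumptions, not anyone’s production code: it assumes the current ClinicalTrials.gov v2 API [https://clinicaltrials.gov/api/v2/studies/{nct_id}] and its documented JSON field names, both worth verifying before use, and the NCT number at the bottom is a hypothetical placeholder.

```python
# A hedged sketch, not a definitive implementation: pull a trial's declared
# outcome variables and its results-posting status from ClinicalTrials.gov,
# assuming the v2 API and its documented JSON layout.
import requests

def registry_check(nct_id: str) -> dict:
    """Return the registered outcome declarations and whether results exist."""
    url = f"https://clinicaltrials.gov/api/v2/studies/{nct_id}"
    record = requests.get(url, timeout=30).json()
    outcomes = record["protocolSection"].get("outcomesModule", {})
    return {
        "primary":   [o["measure"] for o in outcomes.get("primaryOutcomes", [])],
        "secondary": [o["measure"] for o in outcomes.get("secondaryOutcomes", [])],
        # the Results Database entry: has anything actually been posted?
        "results_posted": record.get("hasResults", False),
    }

# Hypothetical NCT number, for illustration only:
# print(registry_check("NCT01234567"))
```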
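And second, the omnibus-ANOVA point made concrete. The numbers here are invented for illustration [they are not Study 329 data]; the logic is simply to run the omnibus test across all treatment arms first, and only proceed to pairwise comparisons if it is significant.

```python
# Toy example (invented data): gate pairwise comparisons on a significant
# omnibus one-way ANOVA, rather than jumping straight to pairwise tests.
from scipy import stats

# Hypothetical endpoint scores for three arms
placebo    = [52, 48, 55, 60, 47, 53]
paroxetine = [50, 46, 58, 49, 51, 54]
imipramine = [49, 57, 45, 52, 50, 48]

f_stat, p_omnibus = stats.f_oneway(placebo, paroxetine, imipramine)
print(f"omnibus ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.3f}")

if p_omnibus < 0.05:
    # Only now are pairwise contrasts justified
    for name, arm in [("paroxetine", paroxetine), ("imipramine", imipramine)]:
        t, p = stats.ttest_ind(placebo, arm)
        print(f"placebo vs {name}: t = {t:.2f}, p = {p:.3f}")
else:
    print("omnibus test not significant; pairwise comparisons not warranted")
```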
So part of my dawdling had to do with needing to add the things in this list to the paper’s discussion of "How Would the Problems of Study 329 Be Addressed by the Current TRS?" But that’s not all. There was something else that had me peppering that last post with pictures of a wolf in sheep’s clothing, and that’s why there’s a the making of « wolf alert! wolf alert!… » part 2 just around the corner. But I wanted to say what I thought before looking at what the Institute of Medicine Committee thought in any detail…