All systems have a weak spot: the weakest link in the chain; an Achilles’ heel; an O-Ring that’s sensitive to the cold. Where are the vulnerabilities in the elaborate scheme [see in the details…] that has evolved for conducting Clinical Trials of medications?
It’s the blinded piece of the study that has the most rigid standardization. The protocol can be flawed, for example by choosing a comparator dose that is too low or too high, producing a weak efficacy result or an inflated adverse event profile, but that should be apparent on careful reading. Assuming the trial is truly blind, there’s not much one can do to jury-rig the outcome during this phase. In the case of the Individual Participant Data [IPD], the numeric and categorical values are cut and dried, but there are places where subjective information is coded or translated into scales, and that can be done badly or inaccurately, so the unabstracted Case Report Forms [CRFs] are the most accurate source for adverse events. But again, if the blind is true, such manipulations should affect all groups. Thus, the heavily regulated blinded portion of a Clinical Trial has a few vulnerabilities, but is not the Achilles’ heel [unless you consider the biggest vulnerability of all – not publishing the study if you don’t like how it comes out].
The actual Achilles’ heel comes after the blind is broken: in the analysis, the creation of the Clinical Study Report [CSR], and the subsequently published journal article. With eyes wide open, there are infinite paths to the misuse and misrepresentation of study results – most of them well traveled. And recall that the long narrative CSR is as vulnerable to spin as the published paper. For example, in 2004, as part of the settlement with New York Attorney General Eliot Spitzer, GSK was required to make their Paxil Study 329 data public on the internet. But until August 2012, it looked like the listing on the left below [see here via the Wayback Machine]:
Finally, with urging, they added the IPDs as appendices like on the right [see here]. The CSR is only a prequel to the published paper, vulnerable to the same sleight of hand. There were tables galore in the CSR as originally offered, but they were summaries, not the raw IPDs that could be checked. Those appendices in the later version are the data itself.
by Emma Maund, Britta Tendal, Asbjørn Hróbjartsson, Karsten Juhl Jørgensen, Andreas Lundh, Jeppe Schroll, and Peter C Gøtzsche. British Medical Journal. 2014;348:g3510.
Objective To determine, using research on duloxetine for major depressive disorder as an example, if there are inconsistencies between protocols, clinical study reports, and main publicly available sources [journal articles and trial registries], and within clinical study reports themselves, with respect to benefits and major harms.
Design Data on primary efficacy analysis and major harms extracted from each data source and compared.
Setting Nine randomised placebo controlled trials of duloxetine [total 2878 patients] submitted to the European Medicines Agency [EMA] for marketing approval for major depressive disorder.
Data sources Clinical study reports, including protocols as appendices [total 13 729 pages], were obtained from the EMA in May 2011. Journal articles were identified through relevant literature databases and contacting the manufacturer, Eli Lilly. Clinicaltrials.gov and the manufacturer’s online clinical trial registry were searched for trial results.
Results Clinical study reports fully described the primary efficacy analysis and major harms [deaths [including suicides], suicide attempts, serious adverse events, and discontinuations because of adverse events]. There were minor inconsistencies in the population in the primary efficacy analysis between the protocol and clinical study report and within the clinical study report for one trial. Furthermore, we found contradictory information within the reports for seven serious adverse events and eight adverse events that led to discontinuation, but with no apparent bias. In each trial, a median of 406 [range 177-645] and 166 [100-241] treatment emergent adverse events [adverse events that emerged or worsened after study drug was started] in the randomised phase were not reported in journal articles and Lilly trial registry reports, respectively. We also found publication bias in relation to beneficial effects.
Conclusion Clinical study reports contained extensive data on major harms that were unavailable in journal articles and in trial registry reports. There were inconsistencies between protocols and clinical study reports and within clinical study reports. Clinical study reports should be used as the data source for systematic reviews of drugs, but they should first be checked against protocols and within themselves for accuracy and consistency.
- the original protocol with any legitimate amendments
- the IPDs
- if the question is about Adverse Events, the CRFs
Blinding is essential, to be sure, but there is nominal blinding and genuine blinding. Both patients and investigators are often biased in favor of the experimental treatment, and this is not always conscious; it’s just human nature. When they pick up cues from drug-related side effects, that bias is likely to skew what goes into the Case Report Forms and the Individual Participant Data. For this reason, many trials cannot be considered truly blinded unless the experimental drug is compared against an active placebo or an active comparator. We have known about this issue since the 1950s, but it is often overlooked nowadays, even by regulatory agencies. It is rare to see it considered in meta-analyses of clinical trials, and rare to see quantitative statements of verified blinding efficacy in individual trial reports. My bottom line is that the corpus of data offered as proof of efficacy is even less secure than we skeptics think it is.
Testing bias, something I have not thought about in years. A valid concept then, a valid concept now.
Steve Lucas