
Earlier [seroquel II [version 2.0]: guessing…], there was an example that has stayed with me [Trial 0006]. This was the graph published in the study itself, a CRO-Chart extraordinaire:

But the F.D.A. approval documents included the data from before the LOCF [Last Observation Carried Forward] correction was applied, so we got closer to what they actually observed – the mean scores of the non-drop-outs (BPRS [PSS][OC]), uncorrected:


Since I’d never heard of this Last Observation Carried Forward business, I wrote a more savvy friend about it. He explained that it was a way to fill in for missing data. Since there are a lot of drop-outs in these Clinical Trials, they take the last observed value at the time of drop-out and pretend it’s the value for the remaining periods in the study [see the graph on the right]. Obviously, the graphs of the means using Observed Case values and Last Observation Carried Forward values are remarkably different. In reading about this method, they talk about two kinds of drop-outs: non-informative [random] and informative [related to the variables being measured]. Examples of the latter would be things like clinical worsening, drug side effects, etc. The LOCF method is recommended only for data with non-informative drop-outs [which is not necessarily the case in these studies]. What it comes down to is that this is just a convention in Clinical Trials. There’s another, more complex model, MMRM [Mixed-Effect Model Repeated Measure], that is increasingly recommended. Here’s how people at the F.D.A. talk about the difference between the two methods:

MMRM vs. LOCF: A Comprehensive Comparison Based on Simulation Study and 25 NDA Datasets
Journal of Biopharmaceutical Statistics 2009, 19:227–246
by Ohidul Siddiqui, H. M. James Hung, Robert O’Neill
Abstract: In recent years, the use of the last observation carried forward (LOCF) approach in imputing missing data in clinical trials has been greatly criticized, and several likelihood-based modeling approaches are proposed to analyze such incomplete data. One of the proposed likelihood-based methods is the Mixed-Effect Model Repeated Measure (MMRM) model. To compare the performance of LOCF and MMRM approaches in analyzing incomplete data, two extensive simulation studies are conducted, and the empirical bias and Type I error rates associated with estimators and tests of treatment effects under three missing data paradigms are evaluated. The simulation studies demonstrate that LOCF analysis can lead to substantial biases in estimators of treatment effects and can greatly inflate Type I error rates of the statistical tests, whereas MMRM analysis on the available data leads to estimators with comparatively small bias, and controls Type I error rates at a nominal level in the presence of missing completely at random (MCAR) or missing at random (MAR) and some possibility of missing not at random (MNAR) data. In a sensitivity analysis of 48 clinical trial datasets obtained from 25 New Drug Applications (NDA) submissions of neurological and psychiatric drug products, MMRM analysis appears to be a superior approach in controlling Type I error rates and minimizing biases, as compared to LOCF ANCOVA analysis. In the exploratory analyses of the datasets, no clear evidence of the presence of MNAR missingness is found.
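To make the mechanics concrete, here is a minimal sketch of LOCF imputation versus Observed Case [OC] means. The patient scores below are made up purely for illustration [they are not from Trial 0006 or any real study], but they show how carrying a drop-out’s last score forward can flatten the apparent trajectory of the mean:

```python
def locf(series):
    """Fill missing visits (None) by carrying the last observed value forward."""
    filled, last = [], None
    for value in series:
        if value is not None:
            last = value
        filled.append(last)
    return filled

def observed_case_means(patients):
    """Mean score at each visit, using only the patients actually observed then."""
    means = []
    for visit in zip(*patients):
        observed = [v for v in visit if v is not None]
        means.append(sum(observed) / len(observed))
    return means

def locf_means(patients):
    """Mean score at each visit after LOCF imputation of every patient."""
    return observed_case_means([locf(p) for p in patients])

# Three hypothetical patients; lower score = less symptomatic.
patients = [
    [40, 35, 30, 25, 20],        # completer, keeps improving
    [40, 38, None, None, None],  # drops out early, barely improved
    [40, 36, 34, None, None],    # drops out later
]
print(observed_case_means(patients))  # OC: only completers at the end
print(locf_means(patients))           # LOCF: drop-outs' last scores persist
```

Because the two drop-outs stopped while still symptomatic, the LOCF means stay well above the Observed Case means at the later visits. When drop-out is informative [side effects, clinical worsening], the two curves tell different stories, which is exactly the divergence between the two graphs above.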
-
The first is Last-Observation-Carried-Forward Imputation Method in Clinical Efficacy Trials: Review of 352 Antidepressant Studies. The authors did a literature search of the Clinical Trials for the Antidepressants between 1965 and 2004. They demonstrate how these studies increasingly drifted towards using the LOCF method without including the ancillary information that allows an accurate assessment of the results. They make it clear what information needs to be included in the articles [and it never is]. These authors already knew what it’s taken me months and a beach trip to figure out – in their current form, the Clinical Trials articles are deliberately opaque.
-
The next article is Why Olanzapine Beats Risperidone, Risperidone Beats Quetiapine, and Quetiapine Beats Olanzapine: An Exploratory Analysis of Head-to-Head Comparison Studies of Second-Generation Antipsychotics. In this article, the authors look at the industry-supported head-to-head Clinical Trials and point out the obvious – the sponsor’s drug wins. But they go further and demonstrate the subtle ways that these studies are manipulated, through study design and writing methods, to move the outcomes in the desired directions. It’s a must-read in full.
-
This final article is recent, sent to me by PharmaGossip: Those Who Have the Gold Make the Evidence: How the Pharmaceutical Industry Biases the Outcomes of Clinical Trials of Medications. While this is the most cynical article of the bunch, it’s the closest to the emerging truth:

"We can reasonably ask that pharmaceutical companies not break the law in their pursuit of profits but anything beyond that is not realistic. There is no evidence that any measures that have been taken so far have stopped the biasing of clinical research and it’s not clear that they have even slowed down the process. What will be needed to curb and ultimately stop the bias is a paradigm change in the relationship between pharmaceutical companies and the conduct and reporting of clinical trials."
Mickey — Way back when I was still occasionally prescribing an SSRI for a few patients, I was impressed with one patient’s response to Lexapro, so I began using that as my first choice. Mine was really a very small sample because I was seeing mostly psychoanalytic patients or others who didn’t need meds.
Then I noticed an ad placed by the Lexapro company itself in the APA Journal touting its effectiveness — and the ad included a graph that was almost identical to your CRO-Chart above. I kept it for years, as my justification for backing off using meds for relatively mild to moderate symptoms of anxiety and depression.
Why use something when, although it may have shown a slight statistical improvement, most of the dramatic “improvement” was also there in the placebo group?
Me too. They fooled us all. The problem with those CRO-Charts is that they look the same for genuinely effective and barely effective drugs. They’re for the FDA, not clinicians. I’m working up a real resentment about how much we’ve been played in all of this…