[continued from outcome switching…]
My title is facetious – guilty as charged. What I’m pointing to is that there are more ways to bugger a Clinical Trial Protocol than changing the outcome variables after the study is underway, as they did in Paxil Study 329. One way is to change the dataset definition [what comes in]. The double entendre of the word income was just too tempting to resist – ergo income switching. My example is [you guessed it] the recent Clinical Trials of Brexpiprazole as Augmentation in Treatment-Resistant Depression. At some point after the trial was underway, they made a Protocol change:
Adjunctive Brexpiprazole 1 and 3 mg for Patients With Major Depressive Disorder Following Inadequate Response to Antidepressants: A Phase 3, Randomized, Double-Blind Study
by Thase ME, Youakim JM, Skuban A, Hobart M, Zhang P, McQuade RD, Nyilas M, Carson WH, Sanchez R, and Eriksson H. Journal of Clinical Psychiatry. 2015 76[9]:1232-1240.
Efficacy and Safety of Adjunctive Brexpiprazole 2 mg in Major Depressive Disorder: A Phase 3, Randomized, Placebo-Controlled Study in Patients With Inadequate Response to Antidepressants
by Thase ME, Youakim JM, Skuban A, Hobart M, Augustine C, Zhang P, McQuade RD, Carson WH, Nyilas M, Sanchez R, and Eriksson H. Journal of Clinical Psychiatry. 2015 76[9]:1224-1231.
Following the prospective treatment phase, patients were eligible for entry into the double-blind randomized treatment phase if they had inadequate prospective ADT response, defined as < 50% reduction in HDRS-17 total score between baseline and end of the prospective phase, with an HDRS-17 total score of ≥ 14 and a Clinical Global Impressions-Improvement scale (CGI-I) score of ≥ 3 at the end of the prospective phase. While this study was ongoing, additional analyses were performed on data from a completed phase 2 study of similar design… It was found that a small number of patients in that study had seemingly adequate improvement in Montgomery-Asberg Depression Rating Scale (MADRS) and CGI-I scores at various times during the prospective treatment period, but subsequent worse scores at time of randomization. These patients did not show a consistent lack of response and would have been considered adequate responders if evaluated at another time point during the prospective phase. A number of these patients showed significant improvement again during the randomized phase, even if continuing on ADT alone…
In the first paper, it continues:
In order to exclude patients with seemingly variable response to ADT, this study’s protocol was amended in March 2012 during the enrollment phase and prior to database lock to specify that patients had to meet more refined inadequate response criteria throughout prospective treatment (HDRS-17 score ≥14, <50% reduction from baseline in HDRS-17 as well as <50% reduction in MADRS total score between start of prospective treatment and each scheduled visit, and CGI-I score ≥3 at each scheduled visit) to be eligible for randomization. The investigator was also blinded to the revised criteria. Both the protocol amendment and the resulting primary analysis were discussed and agreed with the relevant regulatory authorities (US Food and Drug Administration).
Whereas in the second, we read:
In order to exclude patients with seemingly variable response to ADT, this study’s protocol was amended to specify that patients had to meet more refined inadequate response criteria throughout prospective treatment (a HDRS-17 score ≥14; < 50% reduction from baseline in HDRS-17, as well as <50% reduction in MADRS total score between start of prospective treatment and each scheduled visit, and CGI-I score ≥3 at each scheduled visit) to be eligible for randomization and also to blind the investigator to the revised criteria.
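Before going on, here is a minimal sketch of what that amendment does to the dataset definition. The thresholds come from the quoted protocol text; the function names and the toy visit data are my own invention, not anything from the sponsor:

```python
# Sketch of the two randomization-eligibility rules quoted above.
# Thresholds from the protocol text: HDRS-17 >= 14, < 50% reduction,
# CGI-I >= 3. Function names and toy data are hypothetical.

def percent_reduction(baseline, score):
    """Percent reduction from baseline (positive = improvement)."""
    return 100.0 * (baseline - score) / baseline

def eligible_original(hdrs_baseline, hdrs_end, cgi_i_end):
    """Original rule: criteria checked once, at the end of the prospective phase."""
    return (percent_reduction(hdrs_baseline, hdrs_end) < 50.0
            and hdrs_end >= 14
            and cgi_i_end >= 3)

def eligible_amended(hdrs_baseline, madrs_baseline, visits):
    """Amended rule: criteria must hold at EVERY scheduled visit.

    visits: list of (hdrs, madrs, cgi_i) tuples, one per scheduled visit.
    """
    return all(
        percent_reduction(hdrs_baseline, hdrs) < 50.0
        and percent_reduction(madrs_baseline, madrs) < 50.0
        and hdrs >= 14
        and cgi_i >= 3
        for hdrs, madrs, cgi_i in visits
    )

# A patient who dips into "response" at a mid-phase visit but is back above
# threshold at randomization passes the original rule...
visits = [(20, 24, 3), (11, 13, 2), (18, 22, 3)]   # (HDRS-17, MADRS, CGI-I)
print(eligible_original(hdrs_baseline=24, hdrs_end=18, cgi_i_end=3))          # True
# ...but one "good" visit anywhere in the prospective phase now excludes them:
print(eligible_amended(hdrs_baseline=24, madrs_baseline=28, visits=visits))   # False
```

Same patients, same scores, a different dataset – which is the income switching of my title.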
First off, I don’t believe that part above about the amendment and the primary analysis being agreed with the FDA [the FDA didn’t either]:
7.4. Adjunctive Treatment of MDD
7.4.1. The Sponsor conducted two adequate and well-controlled trials to assess the efficacy of brexpiprazole for the adjunctive treatment of MDD. Based on the prespecified statistical analysis plan, only one of these trials (Study 331-10-228) was positive. The Sponsor acknowledges that Study 331-10-227 was not positive based on the pre-specified plan, but provides a number of arguments to support the concept that brexpiprazole should nonetheless be approved for this indication.
While their explanation may read as if they’re just doing their part for science, what they’re proposing is actually absurd. They’re changing their basic definition in midstream in a way that makes their result come out in their favor. That’s simply never okay. As much as people like to describe these industry-funded [and analyzed], ghost-written clinical trials as research, that is [IMO] a misnomer. The proper term is product testing, and high stakes product testing at that. The track record of these corrupted studies speaks for itself. So I propose the following:
The 1boringoldman Manifesto
1. The a priori Protocol be filed publicly on either clinicaltrials.gov or some other new registry [a-priori-protocols.gov] prior to starting any clinical trial.
2. Procedural changes to the Protocol be allowed as Amendments only if they have no possibility of affecting the outcome or the analysis.
3. If something "comes up" that was a mistake after the study is underway [defined as one subject taking one pill], too bad. Scrap the trial and start over.
4. Finally, the links to the clinical trial registry submission, the Protocol [a-priori-protocols.gov], and the name of any CRO involved and their person in charge of the study be included in any and every publication.
Reviewing your four points.
1. Not disagreeing but amplifying: we should be certain the Protocol includes the method of analysis. In my view, for continuous measures, that should usually be an ANCOVA that includes a test for slope heterogeneity [see the sketch below]. Don Ross and I worked out a non-parametric analog that has been disappeared but should at least be discussed somewhere.
However, the Global Improvement Scale [1 = cured, 2 = unimportant symptoms, vs. anything below that] is a pretty good outcome measure.
The issue is that regulatory authorities work with a cliff of non-significance. Different analyses can get past the cliff if you can pick and choose, post data collection.
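By way of illustration, here is a minimal sketch of what such a prespecified analysis might look like, assuming simulated data and the Python statsmodels library; the variable names and effect sizes are invented, not from the trials above:

```python
# A minimal sketch of the analysis point 1 calls for: an ANCOVA of the
# endpoint on baseline severity and treatment, preceded by a test for
# slope heterogeneity (the treatment-by-baseline interaction).
# Simulated data; variable names and effect sizes are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
baseline = rng.normal(24, 4, n)          # baseline severity score
treat = rng.integers(0, 2, n)            # 0 = placebo, 1 = active drug
endpoint = baseline - 4 - 2 * treat + rng.normal(0, 3, n)
df = pd.DataFrame({"endpoint": endpoint, "baseline": baseline, "treat": treat})

# Step 1: test slope heterogeneity via the treatment-by-baseline interaction.
het = smf.ols("endpoint ~ baseline * treat", data=df).fit()
print("interaction p-value:", het.pvalues["baseline:treat"])

# Step 2: if slopes are homogeneous, report the usual ANCOVA treatment effect.
ancova = smf.ols("endpoint ~ baseline + treat", data=df).fit()
print("adjusted treatment effect:", ancova.params["treat"],
      "p-value:", ancova.pvalues["treat"])
```

If the model and the test are nailed down like this in the a priori Protocol, there is no room left to pick and choose among analyses once the data are in.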
2. Anything initiated after any data collection makes the whole effort suspect. However, there are many subtle, biasing, post-data maneuvers that don't, and maybe can't, get reported, e.g., how hard do you try to keep potential drop-outs, or convince probable non-compliers, if you have a hunch it is treatment-related?
On the other hand, you seem to rule out pre-data changes if they may affect outcomes. What's the bias there?
3. If all you wish for is the definitive trial, then any mistake leads to scrapping. If, as I think likely, there are no definitive trials, wouldn't a mistake be worth reporting and considering (arguing about its effect)?
The alternative to a definitive trial is multiple, well-conducted, independent trials, adequately studied and dose-estimated in Phase Two, paid for by an independent source.
The inter-ocular traumatic test, as occurred with the open, uncontrolled chlorpromazine reports that swiftly called for international conferences in the face of conventional wisdom, would be more than satisfying.
4. No argument.
We both agree current procedures lead to doubtful, often biased results. I think (maybe wrongly) that you believe increased stringency in protocol definition and trial analysis will make for substantial improvement. My guess is that it is helpful, but independent funding and monitoring of multiple trials of whatever seems important to substantive experts is more important, perhaps more Utopian.