faith-based medicine…

Posted on Wednesday 28 September 2016

At the risk of overworking my Rip Van Winkle analogy, before I went to sleep in the early 1980s, I recall that conflict of interest standards in medicine and science were similar to those for our judicial counterparts – even the possibility of a conflict of interest was grounds for recusal. Had someone announced in a conference, "I am a Consultant for SmithKline Beecham, but that will not cloud my judgments about Paxil," the room would’ve dissolved into laughter and cat-calls. Likewise, scientific experiments build in procedures to eliminate the possibility of bias – randomization, double-blinding, etc. are there for a reason and are sacrosanct.

So when I got really into reading the volumes of material we had about Paxil Study 329 and read in the Clinical Study Report [CSR] that the Outcome Parameters had been changed in the final months of the trial, my eyes crossed. And they came out of their sockets when I realized that the only significant outcome variables were those very ones added at the 11th hour. No argument can convince me that somebody didn’t "peek" at the data before the study was unblinded and then make that change, though I could never prove such an assertion.

Similarly, when I was looking at Karen Dineen Wagner’s 2003 Efficacy of sertraline in the treatment of children and adolescents with major depressive disorder: two randomized controlled trials recently [see and then there was one…] and read this in defense of their combining two separate Zoloft trials…
Reply: In response to Dr Garland, our combined analysis was defined a priori, well before the last participant was entered into the study and before the study was unblinded. The decision to present the combined analysis as a primary analysis and study report was made based on …
…my eyes started crossing again. And they did the socket thing when I realized that the two studies were negative when analyzed individually. And that’s because "well before the last participant was entered into the study and before the study was unblinded" is not even close to a priori, and again allows the possibility of "peeking," which I assume is what they did. Once more, my assumption is totally unprovable. But the ball’s in her court, not mine.
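
Just to make "not even close to a priori" concrete, here is a toy simulation of my own – nothing in it comes from the actual sertraline trials, the arm size is an arbitrary choice, and both arms are drawn from the same distribution, so there is no drug effect anywhere. It only asks what happens when the analyst can look at two finished trials and then decide whether to report trial 1, trial 2, or the combined analysis:

```python
# Toy illustration (mine, not from the Wagner paper): two null trials, and the
# analyst chooses, with the data in hand, whether to report trial 1, trial 2,
# or the pooled "combined analysis." That freedom alone inflates the
# false-positive rate well above the nominal 5%.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_sims, n_per_arm = 5000, 90          # arm size is purely illustrative

hits = 0
for _ in range(n_sims):
    d1, p1 = rng.normal(size=n_per_arm), rng.normal(size=n_per_arm)   # trial 1: no true effect
    d2, p2 = rng.normal(size=n_per_arm), rng.normal(size=n_per_arm)   # trial 2: no true effect
    candidates = [
        ttest_ind(d1, p1).pvalue,                                     # report trial 1 alone
        ttest_ind(d2, p2).pvalue,                                     # report trial 2 alone
        ttest_ind(np.concatenate([d1, d2]),
                  np.concatenate([p1, p2])).pvalue,                   # or the combined analysis
    ]
    if min(candidates) < 0.05:        # pick whichever analysis reads best
        hits += 1

print(f"false-positive rate with a post hoc choice of analysis: {hits / n_sims:.1%}")
# comes out a bit above 10% rather than 5%, with no drug effect anywhere
```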

Being asked to accept statements like that is particularly annoying from people who otherwise preach the gospel of evidence-based medicine from any available pulpit on any given Sunday. As a matter of fact, the whole point of doing RCTs in the first place is that people wouldn’t accept, "Newbery’s Brain Salt offers ‘A POSITIVE RELIEF AND CURE FOR Brain Troubles, Headaches, Sea Sickness,’ etc," particularly if the person saying it worked for F. Newbery & Sons.

For that reason alone, declaring the Outcome Variables in the Protocol and filing a Statistical Analysis Plan before beginning a clinical trial is mandatory. This is not faith-based medicine. But in addition, I know of no situation where picking a particular statistical procedure informed by the results is recommended – quite the contrary. We select statistical procedures based on considerations outside of the data itself. Just about anybody with a college statistics book and some free time can locate a combo of outcome parameters and statistical procedures that will turn any trivial difference into something that reads out as "statistically significant." Insisting that a clinical trial stay pristine by defining its later analysis a priori is neither picky nor optional. It’s an essential element in the whole enterprise. So what do you do if you change your mind halfway into a study? Scrap the study you’re doing and start over. The track record for any other answer to that question is too abysmal to even contemplate…
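
That last claim is easy to demonstrate. The sketch below is mine, not anything from these trials: it simulates studies in which the drug does nothing at all, hands the analyst eight noise outcome measures per study, and reports whichever one reads best. The arm size, the number of outcomes, and the plain t-tests are all arbitrary illustrative choices.

```python
# Simulated "trials" with no real drug effect: both arms come from the same
# distribution. Testing eight candidate outcome measures and keeping the best
# p-value produces "statistical significance" far more often than 5%.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_trials, n_per_arm, n_outcomes = 2000, 100, 8

hits = 0
for _ in range(n_trials):
    drug = rng.normal(size=(n_per_arm, n_outcomes))      # 8 outcome measures, all noise
    placebo = rng.normal(size=(n_per_arm, n_outcomes))   # identical distribution
    pvalues = ttest_ind(drug, placebo).pvalue            # one t-test per outcome
    if pvalues.min() < 0.05:                             # report whichever looks best
        hits += 1

print(f"'significant' in {hits / n_trials:.0%} of null trials")   # ~34%, not 5%
```

With eight independent looks you expect roughly 1 − 0.95^8 ≈ 34% of null trials to cross p < 0.05, which is exactly the arithmetic a filed Statistical Analysis Plan is meant to take off the table.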
