study 329 iv – some challenges…

Posted on Friday 11 September 2015

The RIAT Initiative was a bright idea. Rather than simply decrying unpublished or questionable Clinical Trials, it offers the original authors/sponsors the opportunity to set things right. If they decline, the RIAT Team will attempt to do it for them with a republication. Success depends on having access to the raw trial data and on having it accepted by a peer-reviewed journal [see "a bold remedy"…]. Both the BMJ and PLoS had responded to the RIAT article by saying they would consider RIAT articles. Paxil Study 329 had certainly been proven "questionable" in the literature and in the courts. And most of the data was already in the public domain thanks to previous legal actions. So a group of us who had independently studied this study assembled to begin breathing life into the RIAT concept.

Dr. Jon Jureidini and his Healthy Skepticism group in Australia had mounted the original [and many subsequent] challenges to this article. He was joined there by colleagues Melissa Raven and Catalin Tofanaru. Dr. David Healy, well-known author and SSRI expert, was joined in Wales by Joanna Le Noury. Elia Abi-Jaoude in Toronto and yours truly in the hills of Georgia, USA, rounded out the group. I was certainly honored to be included. While all of us have some institutional affiliation, this project was undertaken as an unsupported and unfunded enterprise without connection to any institution. Though my own psychiatric career was primarily as a psychotherapist, in a former incarnation I was a hard-science type with both bench and statistical training. So I gravitated to the efficacy reanalysis, and that's the part I'll mention here and in some remarks after the paper is published.


The Full Study Report Acute was a 528-page document that addressed the 8-week acute phase of Paxil Study 329. The actual raw data was in additional Appendices. On the first pass through this document, we considered a number of approaches to presenting the data. In recent years, there has been a move away from relying on the traditional statistical analysis alone towards also considering the Effect Sizes. Statistical tests only tell us whether groups are different, but nothing about the magnitude of that difference. Effect Sizes approximate the strength of that difference and have found wide acceptance, particularly in meta-analyses like those produced by the Cochrane Collaboration. But in the end, we decided that our article was about more than simply Study 329; we wanted it to represent how such a study should be properly presented. And since every Clinical Trial starts with an a priori protocol that outlines how the analysis should proceed, we decided, wherever possible, to follow the original protocol's directives.
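
For readers who like to see the arithmetic, here is a minimal sketch (in Python, with made-up numbers rather than anything from Study 329 or from our actual analysis) of the kind of effect size people usually mean: Cohen's d, the difference between two group means divided by their pooled standard deviation.

```python
# Minimal illustration of Cohen's d. The scores below are hypothetical
# placeholders, not data from Study 329.
import numpy as np

def cohens_d(group_a, group_b):
    """Difference in group means divided by the pooled standard deviation."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    n_a, n_b = len(a), len(b)
    pooled_sd = np.sqrt(((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1))
                        / (n_a + n_b - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical change-from-baseline scores on a depression rating scale
drug    = [12, 9, 15, 7, 11, 13, 8, 10]
placebo = [10, 8, 12, 6, 9, 11, 7, 9]
print(f"Cohen's d = {cohens_d(drug, placebo):.2f}")   # about 0.7 with these invented numbers
```

The point of the number is only that it tells you how big the separation between the groups is, which a p-value by itself does not.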

The protocol was comprehensive, but looking it over, we found two things awry. First, the comparator group was to take imipramine, and the dose was too high for adolescents – 1.5 times the dose used in the Paxil trials for adults. That was apparent in the high incidence of side effects in that group in the study. The second thing was a remarkable absence: there was no provision for correcting for multiple variables to avoid false positives. The more variables you look at, the more likely you are to find a significant result by chance alone. There are many different correction schemes, from the stiff Bonferroni correction to a number of more forgiving schemes. This study had two primary and six secondary efficacy variables, so the protocol should have specified some method for correction, but it didn't even mention the topic. Otherwise, the protocol passed muster. It was written well before the study began and was clear about the statistical methods to be used on completion to pass judgement on efficacy. One other question came from the protocol: how were we going to deal with missing values? The protocol defined all of the outcome variables in terms of LOCF [last observation carried forward]. In the intervening 14 years, LOCF has largely been replaced by other methods: MMRM [Mixed Model for Repeated Measures] and Multiple Imputation. We used the protocol-directed LOCF method, but at the request of reviewers and editors, we also show the Multiple Imputation analysis for comparison.
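
To make those two points concrete, here is another small sketch (again in Python, again with invented numbers and hypothetical names, not our analysis code): the Bonferroni correction simply divides the significance threshold by the number of outcomes tested, and LOCF simply carries a subject's last recorded score forward over any missing visits.

```python
# Illustrative sketch only; the thresholds and scores are hypothetical examples.
import numpy as np

# Bonferroni: with two primary and six secondary efficacy variables,
# each test would be judged against 0.05 / 8 rather than 0.05.
alpha = 0.05
n_outcomes = 2 + 6
print(f"Bonferroni-adjusted threshold: {alpha / n_outcomes:.4f}")   # 0.0063

def locf(visits):
    """Carry the last non-missing score forward over missing visits (NaN)."""
    filled, last = [], np.nan
    for score in visits:
        if not np.isnan(score):
            last = score
        filled.append(last)
    return filled

# A hypothetical subject who dropped out after week 3 of the 8-week acute phase
weekly_scores = [26.0, 22.0, 19.0, np.nan, np.nan, np.nan, np.nan, np.nan]
print(locf(weekly_scores))   # [26.0, 22.0, 19.0, 19.0, 19.0, 19.0, 19.0, 19.0]
```

With eight outcomes each tested at 0.05 and no correction, the chance of at least one spurious "significant" finding can be well above five percent, which is why the omission mattered.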


I guess the only other thing to say before the paper is published is that this was quite an undertaking. There were no precedents for any aspect of this effort. I've mentioned just a few of the decisions we had to make along the path, but every one of them, and many others, came out of a seemingly endless stream of email and Dropbox communications that regularly sped around the globe. There's no part of this paper that doesn't have the collective input of most of the authors. There were no technicians, statisticians, or support staff involved, so we drew our own graphs, built our own tables, ran our own numbers, and checked and revised each other's work. As with any new thing, looking back over it, it's easy to see how it could have been a much more streamlined process. But that's only apparent looking through a retrospectoscope. Somewhere down the line, I hope we'll have the energy to pass on some of the many things we learned along the way to help future RIATers have an easier passage.

So in the near future, there are going to be two studies in the medical literature that reach opposite conclusions but are derived from the self-same Clinical Trial and its data. I don’t know if there’s another instance where that’s the case. After it’s published, I want to add a short series of blog posts to describe how that came about. The goals of the paper are to set the record straight and to model how a report of a Clinical Trial should be presented. But in later blog posts, I want to add a discussion of how the original analysis was twisted to make this negative study into something that was published as positive. And I hope that future RIAT restorations will do the same. The more we learn about exactly how scientific articles can be jury-rigged to reach questionable conclusions, the closer we’ll be to expunging the widespread bias that has invaded our medical literature for much too long. In the final analysis, the ultimate goal is for physicians and patients alike to have access to a scientific medical literature that can be trusted to be accurate. After all, it’s ours…
  1.  Ove
      September 11, 2015 | 1:15 PM

    I can somewhat appreciate what a monumental task this has been, being just a low-educated Swedish end user of Seroxat/Paxil.
    Despite the many medical terms, your posts are fairly easy to make sense of.
    15 years on paroxetine has taken its toll, cognitively, but I'm so glad I can follow this debate from across the world.

    Is your team experiencing a lot of “criticism” for digging into things many want undisturbed? If so, all I can do is say: “keep up the good work, we are many who will someday benefit from your efforts”!

    And I hope you are starting up a new trend where younger scientists can see that falsified science won’t stand the test of time.

    I’m curious to read the upcoming conclusions; you have already admitted they will be pretty much the opposite of the original, but still.

    Thanks, Ove, Sweden

  2.  September 11, 2015 | 1:19 PM

    Ove,

    Not much criticism so far. We’ll see when the paper comes out officially in the next week or so.
