study 329 ii – the importance of protocol…

Posted on Wednesday 9 September 2015

I sure don’t want to become a terminally boring old man. On the other hand, this is my only available format for communicating. I want to write about the process of evaluating Trials anyway, but I also have a practical reason. A lot of us have clamored for access to the raw data from Clinical Trials, realizing that a lot of the published journal articles are riddled with subtle distortions in both the efficacy and harms analyses, particularly in psychiatry. We intuitively know that if the raw data had been available to us all along, things would be a lot different, and a lot better. But be careful what you ask for, because once you get it, it’s a long and winding road to know quite what to do with it.

There are thousands of pages in the various packages generated by every Clinical Trial. So processing it all is no small task – finding those trees that matter in the forest. One thing is for sure – an absolutely essential element for understanding any Clinical Trial is the a priori protocol. If you’ve done any research at all, you know that once you’ve got some data in your hands, there are a bunch of different ways to analyze it. The saying, "If you torture the data long enough, it will tell you anything you want to hear," becomes very real in practice. Under any circumstances, there’s a strong temptation to try out various analytic techniques to see if the outcome looks more like what you’d hoped for. But in the case of a drug trial, there’s already a lot of time and significant treasure invested, meaning that the Clinical Trial results are the difference between throwing it all away and landing on a gold mine. The temptation to do some creative data analyzing is magnified exponentially in such a high-stakes game. So it’s an absolute requirement that the outcome variables and the precise analytic methods are clearly stated before the study begins.
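The arithmetic behind "torturing the data" is worth seeing concretely. Here is a toy simulation [not from Study 329 – the eight independent outcome measures and the 0.05 threshold are my own illustrative assumptions]: if a trial with no true drug effect is tested on several outcomes, the chance that at least one comes up "significant" by luck alone climbs well above the nominal 5%:

```python
import random

random.seed(0)

ALPHA = 0.05        # nominal significance threshold
N_OUTCOMES = 8      # hypothetical: eight independent outcome measures, each tested once
N_TRIALS = 100_000  # simulated null trials [no true drug effect]

# Under the null hypothesis, each outcome's p-value is uniform on [0, 1].
# Count the trials in which at least one outcome looks "significant".
hits = sum(
    any(random.random() < ALPHA for _ in range(N_OUTCOMES))
    for _ in range(N_TRIALS)
)

simulated = hits / N_TRIALS
analytic = 1 - (1 - ALPHA) ** N_OUTCOMES  # family-wise error rate, about 0.34

print(f"chance of at least one 'significant' outcome: {simulated:.3f} (theory {analytic:.3f})")
```

In other words, with eight shots at the target, roughly a third of drug-less trials would hand you something to publish – which is exactly why the outcomes and analyses have to be nailed down before the data arrive.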

In evaluating a journal article reporting a Clinical Trial, the a priori protocol is an invaluable tool, and the first window to pry open. In the case of Study 329, the Protocol and the SAP [Statistical Analysis Plan] were together as a single document:

With only the published article in hand [left], the trial itself is just a shadow. You can’t really know whether the article is presenting the trial as declared before it started, or whether it has been manipulated in one way or another. With the a priori protocol [right], you can evaluate the study design itself [for bias, omissions, etc] as well as compare it to the article to look for changes. Once recruitment begins, there should be no substantive changes to the protocol. Even minor alterations should be added as official amendments to the protocol [and approved by the Institutional Review Board]. That point can’t be emphasized enough.

It may seem downright anal to insist on following the original protocol to the absolute letter. After all, people who do Clinical Trials call themselves researchers, and isn’t research supposed to be a creative endeavor? Certainly, the researcher can do any analysis he wants to do on the data. But an industry-funded Clinical Trial is, at its core, something other than research – it’s Product Testing [creativity not invited]. One has to assume that any deviations after the study is underway are potential attempts to bias the outcome. The term HARKing [Hypothesizing After the Results are Known] reminds us of this danger. Non-protocol analyses or outcome variables are called exploratory, and they may be very revealing, may even be discussed in the narrative. But they’re off limits in formulating the definitive conclusion of the study. If they’re that tempting, do another Clinical Trial with those findings in the new a priori protocol.

I was a late-comer to Study 329. By the time I got involved, it already had a literature of its own from the subpoenaed documents and settled court cases. I used a lot of that in a previous series that starts with a movement… and continues for quite a while, giving something of a historical perspective [catalogued in the lesson of Study 329: an unfinished symphony…]. It’s there for the reading, so I won’t repeat all of that here. When I wrote it, I’d been looking at RCTs for a while. But re-reading that series now, I can see how naive I was about the details – a novice about how Clinical Trials actually work and how they can be distorted. I suspect I wasn’t alone in my ignorance. I’ve learned a lot being involved in our current project, and so my focus is going to be different. Last time through, I was interested in proving to myself [and maybe you] that the analysis presented in the published paper was flawed, and did not show that Paxil® was either efficacious or safe in depressed adolescents. After this two-year stint, I’ve learned a lot more about how to actually vet a Clinical Trial when you have the kind of Data Transparency we all hope is coming in the near future for all of them – what’s important and how to go through it. I hope this partial narrative of that journey will:

  • encourage other RIAT teams to look into unpublished or questionable Clinical Trials
  • help make future enterprises less grueling
  • make a contribution to future reforms in the current system
and it all starts with the a priori protocol

    a pri·o·ri  [ä′ prë-ôr′ë]
    adj.

    1. from a general law to a particular instance;
      valid independently of observation.
    2. existing in the mind independent of experience.
    3. conceived beforehand.
    September 9, 2015 | 3:38 PM

    This is very interesting, thanks Mickey. I wonder what proportion of such studies are subject to after the fact tinkering.
