studying the studies…

Posted on Tuesday 10 September 2013

Who would’ve thought that I would end up reading stuff about the Clinical Trial industry? Not me. I guess I could take solace in some old saying like "variety is the spice of life," but whatever path got me here, here I go again. In a Randomized, Double-Blind Clinical Trial, the way things are supposed to work is that up front, you have a Protocol, and in that Protocol you outline how the study will be conducted and how you’ll analyze the results at the end of the study. So you make educated guesses about how many subjects you’ll need to confirm or refute the research hypothesis – the power calculation. And up front, you declare the efficacy variables – the Primary and Secondary Outcome Variables – and how you’ll analyze them. The reasons are obvious. Clinical Trials are hard to do and involve actual human beings. You compute the power to avoid ending up with too little data to say anything useful at all. And anyone who has done any research at all knows why you declare the outcome variables and analytic methods a priori: after you have the data, you can torture it and make it say almost anything you want it to say by [excuse my language] masturbating with the numbers.
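[For the curious: here’s a minimal sketch of what that up-front power calculation looks like in the simplest case – a two-arm trial comparing group means, using the standard normal-approximation formula. The effect size, alpha, and power here are illustrative placeholders, not values from any actual protocol.]

    # A minimal sketch of an a priori power calculation, assuming a
    # two-arm trial comparing group means on a normally distributed
    # outcome. Effect size, alpha, and power are illustrative
    # placeholders, not values from any real protocol.
    import math
    from scipy.stats import norm

    def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
        """Subjects needed per arm for a two-sided, two-sample z-test."""
        z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the false-positive rate
        z_beta = norm.ppf(power)           # quantile matching the desired power
        return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

    # a moderate effect [Cohen's d = 0.5] at the usual 5%/80% settings
    print(n_per_group(0.5))  # about 63 subjects per arm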

And after all that trouble and expense, the temptation to make the data speak is overwhelming. It’s called HARKing [Hypothesizing After the Results are Known] and it’s a no-no. What if you find something amazing that you didn’t expect? Great news! Mention it in your article – but then do another study that has this finding as the primary hypothesis, using the Randomized Double-Blind Clinical Trial techniques to confirm your serendipitous finding. That’s just what scientists do. It’s their nature. This article comes from the International Congress on Peer Review and Biomedical Publication in Chicago, and I’m only going to talk about one piece of the article:
ScienceInsider
by Jennifer Couzin-Frankel
2013-09-10

Published Trial Results Often Differ From Those Initially Posted

Deborah Zarin, director of the database ClinicalTrials.gov at the National Library of Medicine, likes to say that her website is “a window into the sausage factory” — a view that we usually don’t get of how clinical trials work and how they don’t.  Six years ago, ClinicalTrials.gov was tasked by Congress to embark on a new experiment: In addition to trial registrations, many trial sponsors were required to deposit their results in the public database for anyone to access. At the congress, a group from Yale University School of Medicine explored how well the results posted on ClinicalTrials.gov match up with what’s published. What they found was not particularly encouraging.

Jessica Becker, a medical student, described how she and her Yale colleagues — Harlan Krumholz, Gal Ben-Josef, and Joseph Ross — identified 96 trials published between July 2010 and June 2011, all of them with a ClinicalTrials.gov identification number. They focused on studies that appeared in high-profile journals. Almost three-quarters of the trials analyzed were funded by industry. All but one trial had at least one discrepancy in how trial details, results, or side effects were reported.

One big question was whether the same primary endpoints and secondary endpoints appeared in both the final publication and the ClinicalTrials.gov results database. A primary endpoint represents the main goal of a study and the question or questions it was designed to answer. Secondary endpoints are often added to squeeze as much information as possible out of what’s collected, but statistically they can be weaker because the trial wasn’t created with them in mind. Primary endpoints in 14 trials appeared only on ClinicalTrials.gov, while primary endpoints from 10 others were only in the publication. The results described were also different in some cases: For 21% of the primary endpoints, what appeared in the journal wasn’t exactly the outcome described on ClinicalTrials.gov, and in 6%, the Yale group suggested that this difference influenced how the results would be interpreted.

For secondary endpoints, the difference was even more dramatic: Of more than 2000 secondary endpoints listed across the trials, just 16% appeared the same way in both the public database and the published article along with the same results. Results for dozens of secondary endpoints were inconsistent. “Our findings raise concerns about the accuracy of information in both places, leading us to wonder which to believe,” Becker said. The group hasn’t probed why this is happening: There could be innocent errors on ClinicalTrials.gov or typos in publications. Or authors may promote “more favorable stuff” in what’s printed, she speculated.

“There are many, many microdecisions” that come with writing up a publication, Zarin says. The uncomfortable results presented by Becker are “part of what motivates the desire” for anonymized information on individual patients, Zarin suggests — exposing that might be the only way to reconcile the discrepancies. Zarin also speculates that researchers might add positive secondary endpoints after the study is completed — a big no-no in the trials world — to give it a rosier hue, and thus they don’t appear on ClinicalTrials.gov when the study is first registered. Zarin is conducting her own analysis of the ClinicalTrials.gov results database, which now includes results from almost 10,000 trials [150,000 trials are registered on the site]. She says she’s reaching similar outcomes as the Yale group. One question that the Yale team didn’t explore was whether researchers had inputted their results on the site before submitting their paper — something that would allow journal editors or reviewers to play detective and see whether the document they have matches up with what’s in the database…
hat-tip to pharmagossip master spotter 
In these last few years, I guess studying the studies has become my hobby. It wasn’t planned that way. I started off trying to figure out why psychiatric patients were on so many odd medications, something I discovered after retiring, when I emerged from the cave of my practice into the public world, volunteering in a clinic. But that quickly led to the whole world of Clinical Drug Trials, something I knew little about. Considering the size and importance of that domain, it’s remarkable that I’d missed it [see 3512! or this report from ClinicalTrials.gov – "150,000 trials are registered on the site!"].

This is a well done report [as the research of medical students often turns out to be]. Well powered. Well analyzed. And it answers an important question: how often do people change their analytic techniques and outcome parameters and [there’s that word again] masturbate with the numbers, substituting fantasy for facts? The answer is All the time! There’s even a confirmatory study underway by Deborah Zarin, the boss at ClinicalTrials.gov. It’s a good show all around, and there couldn’t be a more important topic. These Clinical Trials literally determine what gets defined as a Medicine in our modern world. A toy version of the registry-versus-paper comparison the Yale group ran is sketched below.
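[For the concrete-minded: this is what such a registry-versus-paper check might look like in miniature. The NCT number, endpoint names, and data structures are all invented for illustration – the Yale group worked from the actual registry entries and publications, not anything like this.]

    # A toy sketch of comparing endpoints registered on ClinicalTrials.gov
    # with those reported in the matching publication. The NCT number and
    # endpoint names are invented for illustration only.
    registered = {
        "NCT00000000": {
            "primary": {"HAM-D change at 8 weeks"},
            "secondary": {"CGI-S at 8 weeks", "remission rate"},
        },
    }
    published = {
        "NCT00000000": {
            "primary": {"HAM-D change at 8 weeks"},
            "secondary": {"CGI-S at 8 weeks", "response rate"},  # differs from registry
        },
    }

    for nct_id, reg in registered.items():
        pub = published.get(nct_id, {"primary": set(), "secondary": set()})
        for kind in ("primary", "secondary"):
            registry_only = reg[kind] - pub[kind]
            paper_only = pub[kind] - reg[kind]
            if registry_only or paper_only:
                print(f"{nct_id} [{kind}] registry-only: {sorted(registry_only)}; "
                      f"paper-only: {sorted(paper_only)}")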

Obviously, this is just one more piece of the growing body of evidence that has become a somewhat monotonous topic on this blog – data transparency – what Deborah Zarin calls "the desire for anonymized information on individual patients." It’s a conclusion that needs no p-value anymore. It’s just right…
  1.  Steve Lucas
    September 11, 2013 | 7:50 AM

    I remember a few years ago becoming embroiled in a long discussion of the proposed HPV mandate. There was a small sample size, with the results pointing to a surrogate endpoint, and no real explanation of adverse reactions.

    Pharma had primed doctors with the concept of a cure for cancer and politicians with saving women’s lives. On the day the drug was approved for sale, the Texas legislature was poised to pass a mandate until the State Medical Board said they would like to have a look. Those silly doctors, don’t they know politicians know best!

    The data at the time did not include any test children at the proposed mandate’s starting age. Reported deaths were for cervical cancer worldwide. Severe reactions were minimized, and I believe the drug is not being sold in India due to genetic markers increasing the chance of an adverse reaction.

    At the time of the rollout, the drug company stated their next goal was mandating this for boys, although no tests had yet been done.

    I had made the case that, given the weak data, the money spent on the vaccine would provide more benefit if spent on real women’s health issues, not Planned Parenthood.

    Only after a doctor restated my case was there any recognition of my position. Today the CDC continues to recommend the HPV vaccine for young girls and boys. You hear nothing of the severe reactions, and there have been no cost-benefit or efficacy studies.

    Am I anti-vaccine? No. I do believe we need to look at the data and think things through before we spend what is often billions of dollars. One of the more interesting side stories was of a WSJ reporter going overseas and receiving a number of vaccines; being 26, she asked about the HPV vaccine and the doctor advised against it. She got it anyway, wanting to see what the fuss was all about, and she found out when she was lying on the exam table in pain.

    Steve Lucas

  2.  September 11, 2013 | 9:45 AM

    Does what you highlight happen? Certainly. However, out of all the methods of gathering knowledge available to us, scientific analysis still is the most reliable.

  3.  September 11, 2013 | 2:00 PM

    Zen saying: Whatever you hit, call it the target.
