food for thought

Posted on Sunday 9 June 2013


The Affordable Care Act offers strong support for comparative effectiveness research, which entails comparisons among active treatments, to provide the foundation for evidence-based practice. Traditionally, a key form of research into the effectiveness of therapeutic treatments has been placebo-controlled trials, in which a specified treatment is compared to placebo. These trials feature high-contrast comparisons between treatments. Historical trends in placebo-controlled trials have been evaluated to help guide the comparative effectiveness research agenda. We investigated placebo-controlled trials reported in four leading medical journals between 1966 and 2010. We found that there was a significant decline in average effect size or average difference in efficacy [the ability to produce a desired effect] between the active treatment and placebo. On average, recently studied treatments offered only small benefits in efficacy over placebo. A decline in effect sizes in conventional placebo-controlled trials supports an increased emphasis on other avenues of research, including comparative studies on the safety, tolerability, and cost of treatments with established efficacy.
I’ve kind of developed a rule, without thinking about it, that people with hundreds of published articles must be dangerous characters. But I recognized Dr. Olfson’s name, found four of his articles previously quoted on this blog that I thought were well-done and right-thinking, and PubMed lists him at 300 articles. So maybe my unconscious rule has exceptions:

  1. National trends in the outpatient diagnosis and treatment of bipolar disorder in youth. Archives of General Psychiatry. 2007 Sep;64[9]:1032-1039.
In this study, they plotted the effect size of drug/placebo differences in randomly chosen clinical trials from four major journals [JAMA, BMJ, NEJM, Lancet], going back to the time such trials first became required [their sample runs from 1966 to 2010]. These are not just psychiatry articles; they cover all disease areas. Here’s the breakdown of their data set:
This is their prime finding – the fall in effect size of the clinical trials over their study period:

I graphed some of their tabular data to make it easier to see [all trends were significant]. On the left is the percent of Clinical Trials where PHARMA funding was acknowledged. This figure is a bit confusing, since disclosure only became a requirement during the study period [NEJM 1985, JAMA 1989, BMJ 1994, Lancet 1994]. On the right, the number of authors rose from a median of 2-3 to 8-20 over the study period! Who knew?

The left graph below shows the median number of sites used in the clinical trials. The trend to multiple sites is a striking phenomenon of the recent past, I presume reflecting the ascendancy of the Clinical Research Organizations. Similarly, in the center, the median number of study subjects has soared over the last two decades [chasing significance]. And on the right, another way of showing the falling effect size – the mean odds ratio.
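
For the statistically curious, here’s a minimal sketch in Python – made-up numbers, not the paper’s data – of how the two metrics in these graphs, the standardized effect size [Cohen’s d] and the odds ratio, fall out of raw trial results:

```python
# Minimal sketch with hypothetical numbers (not the paper's data):
# the two efficacy metrics behind the graphs above.

def cohens_d(mean_drug, mean_placebo, pooled_sd):
    """Standardized mean difference -- the 'effect size' that is declining."""
    return (mean_drug - mean_placebo) / pooled_sd

def odds_ratio(resp_drug, n_drug, resp_placebo, n_placebo):
    """Odds of response on drug vs. on placebo, from a 2x2 response table."""
    odds_drug = resp_drug / (n_drug - resp_drug)
    odds_placebo = resp_placebo / (n_placebo - resp_placebo)
    return odds_drug / odds_placebo

# An older-style high-contrast result vs. the small recent gaps:
print(round(cohens_d(12.0, 6.0, 8.0), 2))       # d = 0.75, a sizable separation
print(round(cohens_d(8.0, 6.4, 8.0), 2))        # d = 0.2, a small one
print(round(odds_ratio(60, 100, 40, 100), 2))   # OR = 2.25
print(round(odds_ratio(45, 100, 40, 100), 2))   # OR = 1.23
```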

And finally, some miscellaneous stats from their tables. In this study, the % of non-US studies, the % of studies with insignificant Primary Outcomes, and the drop-out rates did not change significantly. Intent-to-treat [ITT] analysis means that all subjects who start the study are included in the analysis, drop-outs included [with corrections for missing data]. Both the studies reporting the number of subjects screened and those using ITT analysis increased significantly.
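
To make the ITT idea concrete, here’s a toy sketch with five hypothetical subjects – real trials apply proper missing-data corrections rather than this sketch’s crude assumption that drop-outs are non-responders:

```python
# Toy illustration of intent-to-treat: every randomized subject stays in
# the denominator, drop-outs included, vs. analyzing completers only.
subjects = [
    # (completed_study, responded); drop-outs carry responded=False here,
    # a crude stand-in for the missing-data corrections real trials use
    (True, True), (True, True), (True, False),
    (False, False), (False, False),   # two drop-outs
]

completers = [s for s in subjects if s[0]]
itt_rate = sum(responded for _, responded in subjects) / len(subjects)
completer_rate = sum(responded for _, responded in completers) / len(completers)
print(f"ITT response: {itt_rate:.0%}; completers only: {completer_rate:.0%}")
# ITT: 40%; completers only: 67% -- ITT gives the more conservative count
```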


[reformatted]

The article has an excellent but long discussion of the why of all of this, and concludes:

    Substantial investments are made in clinical trials. In the United States, for example, more than $100 billion is spent each year on biomedical research, with most of the funding devoted to clinical trials. Concern over a slowdown in the discovery of innovative medical treatments has prompted calls for new public policies to stimulate drug development, including incentives for novel drugs. During difficult economic times, however, private research sponsors may be tempted to take a cautious approach that favors incremental research focused on follow-on or “me too” drugs that offer little additional efficacy over more speculative high-risk/high-reward strategies.

    The Affordable Care Act mandates a new national comparative effectiveness research agenda. Its great promise is to build the scientific foundation of clinical practice. Yet forty years of declining effect sizes in traditional placebo-controlled trials underscores the increasing challenge of discovering breakthrough interventions. Where established treatments achieve comparable efficacy, comparative effectiveness research can help identify the safer, less expensive, and better-tolerated alternative. By placing clinical decision making on firmer footing, it is hoped that comparative effectiveness research will help patients receive the best possible care and save them from unnecessary risks, inconvenience, and costs.


I obviously liked this study – well-done, timely, well-analyzed. I’m not sure I’m through digesting it, but several things occurred to me. First, my focus has been on the deceit in the psychiatric clinical trial world and I tend to see the problems with clinical trials as localized to psychiatry. This study widens that lens to all of medicine. I was surprised by that. Besides deceit, I’ve also tended to blame the falling effect sizes on the Clinical Research Industry using recruited subjects often in far-away lands with culturally diverse subgroups. So I wouldn’t have expected the same decline in trials of biomarker-confirmed physical diseases. But there it is.

It’s the rare clinical trial article that doesn’t comment on the small effect sizes these days – blaming the placebo effect or other karmic forces. And this paper makes it very clear that they are escalating the number of subjects [requiring multiple sites] to chase significance with smaller effect sizes. But I find it impossible to look at this data and conclude anything except that, on the whole, they’re testing weaker and weaker drugs.

It makes perfect sense. They’ve increased their capacity to detect significance in drugs with a smaller effect size, but that does nothing to ensure discovering better drugs. If anything – the opposite. It also makes sense to move towards comparative effectiveness studies to keep from iterating downward in effectiveness. Food for thought, this article…
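
The arithmetic behind chasing significance is worth seeing once. In the standard two-sample approximation, the subjects needed per arm grow as the inverse square of the effect size – a rough sketch with textbook z values and purely illustrative effect sizes:

```python
import math

# Per-arm sample size for a two-sample comparison, standard approximation:
#   n per arm ~ 2 * (z_alpha + z_beta)^2 / d^2
Z_ALPHA = 1.96   # two-sided alpha = 0.05
Z_BETA = 0.84    # power = 0.80

def n_per_arm(d):
    return math.ceil(2 * (Z_ALPHA + Z_BETA) ** 2 / d ** 2)

for d in (0.8, 0.5, 0.2):
    print(f"effect size d = {d}: ~{n_per_arm(d)} subjects per arm")
# d = 0.8 -> 25/arm; d = 0.5 -> 63/arm; d = 0.2 -> 392/arm
# Halving the effect size roughly quadruples the trial -- hence the
# soaring subject counts and the multi-site recruiting machinery.
```
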
hat tip to Pharmagossip…
… and the multiple author thing? Unsportsmanlike conduct. Shame on them!

Update: What a clever thought!

This column discusses declining differences in response rates between sequentially introduced selective serotonin reuptake inhibitors [SSRI] and placebo. Although discussions of this phenomenon in the literature have largely focused on increasing placebo response rates, the author proposes that another factor may be responsible. That factor is an order effect, meaning that response rates have been declining as a function of the number of SSRIs on the market when the next SSRI is in development. The rationale is that the pool of potential clinical trial participants likely to respond to a drug with this mechanism of action [MOA] becomes progressively smaller with the introduction of each new agent with the same MOA, because many patients will already have been treated and responded to an earlier member of the class. This phenomenon is not limited to the SSRIs but generalizes to any class of treatments that shares the same MOA.
hat tip to Helge
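
Preskorn’s order effect is also easy to simulate. Here’s a toy model of my own construction [not from his column], assuming responders who improve on an earlier SSRI stay on it and leave the recruitment pool while non-responders cycle back in:

```python
# Toy model of the proposed order effect (my construction, made-up
# parameters): each marketed SSRI siphons MOA-responders out of the
# future recruitment pool, so later same-MOA trials separate less
# from placebo.
pool, responders = 1000, 500   # hypothetical untreated patients / MOA-responders
PLACEBO_RATE = 0.30
TREATED_PER_DRUG = 300         # patients who try each SSRI once it is marketed

for ssri in range(1, 5):
    frac = responders / pool
    # MOA-responders respond to the drug; the rest respond at the placebo rate
    drug_rate = frac + PLACEBO_RATE * (1 - frac)
    print(f"SSRI #{ssri}: drug {drug_rate:.0%} vs placebo {PLACEBO_RATE:.0%}")
    # responders who tried the drug stay on it; non-responders return to the pool
    kept = int(TREATED_PER_DRUG * frac)
    responders -= kept
    pool -= kept
# Output: 65%, 59%, 52%, 45% against a constant 30% -- the drug/placebo
# separation decays with each successive same-MOA trial.
```
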
  1.  
    Annonymous
    June 9, 2013 | 12:55 PM
     

    So how does this:
    http://www.nytimes.com/2013/06/07/business/an-experimental-drugs-bitter-end.html
    fit within the narratives being discussed in the comments the past few weeks?

  2.  
    Helge
    June 9, 2013 | 1:01 PM
     
  3.  
    June 9, 2013 | 1:19 PM
     

    Helge,
    Excellent point…
    [see update]

  4.  
    Tom
    June 9, 2013 | 1:30 PM
     

    Maybe some of that $100 billion should be directed to understanding the curative factors behind the placebo effect?

  5.  
    wiley
    June 9, 2013 | 2:35 PM
     

    The rest of medicine has definitely peddled some bad drugs, some of them causing the problems they were supposed to treat. Fosamax, Actonel and Boniva for osteoporosis caused bone death. Beta blockers caused heart attacks.

    Avandia caused heart problems in diabetics, who are at high risk of heart attack. Vioxx caused heart attacks and strokes. Saw an advertisement for a new drug to treat arthritic pain recently. My friend and I started laughing (we have a dark sense of humor) during the list of possible negative effects. It sounded like it was supposed to cause severe illness.

  6.  
    June 9, 2013 | 3:48 PM
     

    More public money, for more (of the same) research?

    It doesn’t sound so “affordable” somehow.
    The ‘Not-so-Affordable Health Care Act’ would have been a more appropriate name for the legislation.

    Duane

  7.  
    June 9, 2013 | 3:50 PM
     

    “Trust us” say the same politicians, researchers, KOLs.

    “Are you kidding?” say those of us who’ve been paying attention.

    Duane

  8.  
    Annonymous
    June 9, 2013 | 5:56 PM
     

    Does Dr. Preskorn address in his article that using the old drugs with similar MOAs as active comparators in trials of new drugs would help clarify a lot of what he is bringing up?

  9.  
    Annonymous
    June 9, 2013 | 7:47 PM
     

    Part of why taking Dr. Preskorn’s statements at face value gives me pause:
    http://www.neurosciencecme.com/cmea.asp?ID=250
    http://www.cmeoutfitters.com/cme_disclosure.asp?ID=250
    Neurosciencecme has run stuff like this:
    http://www.neurosciencecme.com/email/2008/011508_ckc.htm
    Which then links to:
    http://www.neurosciencecme.com/cmea.asp?ID=272

    http://carlatpsychiatry.blogspot.com/2007/09/author-calls-his-own-cns-spectrums.html
    http://carlatpsychiatry.blogspot.com/2007_09_01_archive.html

    None of this changes that it is a clever idea. Some of this may suggest careful vetting of what might be motivating the idea, and of what corollary conclusions he draws from it.

  10.  
    Annonymous
    June 10, 2013 | 4:38 AM
     

    Harking back to:
    http://1boringoldman.com/index.php/2013/03/29/least-squares-mean-patients/

    Is this:
    http://www.badscience.net/2013/06/badger-badger-badger-badger-cull-badger-badger-badger-trial/
    From that post:
    “But more importantly, the trial loses what evidence nerds call “external validity”: the ideal perfect intervention, used in the trial, is very different to the boring, cheap, real-world intervention that the trial is being used to justify.

    This is a common problem, and the right thing to do next is a new trial, this time in the real world, with no magic. The intervention could be the thing we’re doing, and the outcome could be routinely collected bovine TB data, since that’s the outcome we’re interested in. This gives you answers that matter, on the results you care about, with the intervention you’re going to use.

    People worry that research is expensive, and deprives participants of effective interventions. That’s not the case when your intervention and data collection are happening anyway, and when you don’t know if your intervention actually works. Here, though, as in many cases, the missing ingredient is will.”

    Also referenced in the post:
    http://apps.who.int/rhl/Lancet_365-9453.pdf

    It also reminded me of this:
    http://davidhealy.org/not-so-bad-pharma/#comment-79092
    In particular, Dr. Chalmers’ comments, including:
    “4. Cannot provide patient-level answers. As randomised n-of-1 trials are top of the draft of the EBM Group’s evidence hierarchy published several years ago, this is not true. And if you’re referring to using data from groups to provide prediction of treatment effects in individual patients then you should give examples of how you overcome this unavoidable challenge by preferring non-randomised groups to randomized groups as a basis for the guess.”

    The evidence hierarchy can be found here:
    http://www.cebm.net/mod_product/design/files/CEBM-Levels-of-Evidence-2.1.pdf

    I hope that at some point you highlight that Dr. Chalmers, Dr. Goldacre, et al. are big proponents of n-of-1 trials performed through collaboration of the physician and patient – not really of large CRO RCTs.
