not the case…

Posted on Thursday 21 March 2013

I put this one aside for a while because I was at something of a loss about what to say. A few years back, when I first started looking at Clinical Trials in any kind of systematic way, I was struck by the monotony of the graphs in the articles. I made up a graph and called it a CRO-Chart [Clinical Research Organization] because that’s what they all looked like, more or less. I read about the placebo effect and asked others, but I can’t say I learned very much except that it was a problem, and getting to be a bigger problem over time. Problem? It was getting harder and harder to prove that the drugs were doing anything because everyone in the clinical trial was getting better.

Here’s the magnitude of the problem as presented by the authors of the paper referenced below:
In antidepressant trials for adults, the mean placebo response rate is 31%, compared with a mean medication response rate of 50%, and it has risen at a rate of 7% per decade over the past 30 years. In children and adolescents with major depression, rates of placebo response are even higher [a mean rate of 46%, compared with a mean medication response rate of 59%], and they have also been increasing over time. High placebo response reduces medication-placebo differences and leads investigators to make methodological modifications [e.g., the use of multiple study sites to increase sample size] that increase measurement error, both of which make it more difficult to demonstrate a statistically significant benefit of a putative antidepressant agent over placebo. Consequently, the average difference between medication and placebo observed in published antidepressant trials has decreased from an average of 6 points on the Hamilton Rating Scale for Depression [HAM-D] in 1982 to 3 points in 2008. For most currently approved antidepressants, fewer than half of the efficacy trials filed with the U.S. Food and Drug Administration for regulatory approval found active drug superior to placebo. While not all trials that fail to distinguish medication from placebo represent false negatives, meta-analyses of antidepressant trials suggest that high placebo response rather than low medication response explains most of the variability in drug-placebo differences. The increasing number of failed trials in recent years has made developing psychiatric medications progressively more time-consuming [average of 13 years to develop a new medication] and expensive [estimates range from $800 million to $3 billion per new agent] compared with medications for non-CNS indications. These considerations contributed to recent decisions by several large pharmaceutical companies to reduce or discontinue research and development on medications for brain disorders, prompting warnings of “psychopharmacology in crisis”.
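Just to put those quoted numbers in perspective, here is a rough back-of-the-envelope power calculation of my own [not from the paper, and the assumed HAM-D standard deviation of 8 points is only a ballpark figure]: halving the drug-placebo difference roughly quadruples the number of subjects needed per arm to detect it.

```python
# A rough, illustrative sketch (my own numbers, not the paper's):
# approximate subjects per arm needed to detect a given drug-placebo
# difference in HAM-D scores, using a two-sample comparison of means
# with two-sided alpha = 0.05, 80% power, and an *assumed* standard
# deviation of 8 HAM-D points.
from scipy.stats import norm

def n_per_arm(difference, sd=8.0, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-arm superiority trial."""
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # 0.84 for 80% power
    return 2 * ((z_alpha + z_beta) * sd / difference) ** 2

print(round(n_per_arm(6)))   # ~28 per arm for the 6-point difference of 1982
print(round(n_per_arm(3)))   # ~112 per arm for the 3-point difference of 2008
```

With a 6-point difference, a trial that small could be run carefully at a single site; at 3 points, you’re into multi-site territory, with all the extra measurement error the authors mention.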
At the time I first pondered the placebo effect, my mind drifted to my thoughts about Major Depressive Disorder, the object of the trials in question. I won’t go through them for the jillionth time, but I don’t think MDD exists. I’m still back in ancient history, when Depression was an illness [as in Melancholia, the Black Dog] and depression was a symptom of a whole lot of things, a signal of helplessness, hopelessness, loss [the blues]. With the latter, I’m used to a lot of people feeling better when they talk about things to somebody who knows how to listen. It’s a human thing. Sometimes that’s all that’s needed, and sometimes there’s more. So from my biased point of view, I thought most of the people in the studies had that condition whose-name-may-not-be-mentioned that got dropped in the DSM-III. But I’m not in the business of testing drugs, so I tried to read this paper as if that were my focus:
A Model of Placebo Response in Antidepressant Clinical Trials
by Bret R. Rutherford, M.D. and Steven P. Roose, M.D.
American Journal of Psychiatry, published online January 15, 2013.

Placebo response in clinical trials of antidepressant medications is substantial and has been increasing. High placebo response rates hamper efforts to detect signals of efficacy for new antidepressant medications, contributing to trial failures and delaying the delivery of new treatments to market. Media reports seize upon increasing placebo response and modest advantages for active drugs as reasons to question the value of antidepressant medication, which may further stigmatize treatments for depression and dissuade patients from accessing mental health care. Conversely, enhancing the factors responsible for placebo response may represent a strategy for improving available treatments for major depressive disorder. A conceptual framework describing the causes of placebo response is needed in order to develop strategies for minimizing placebo response in clinical trials, maximizing placebo response in clinical practice, and talking with depressed patients about the risks and benefits of antidepressant medications. In this review, the authors examine contributors to placebo response in antidepressant clinical trials and propose an explanatory model. Research aimed at reducing placebo response should focus on limiting patient expectancy and the intensity of therapeutic contact in antidepressant clinical trials, while the optimal strategy in clinical practice may be to combine active medication with a presentation and level of therapeutic contact designed to enhance treatment response.

The table from the study pretty much summarizes their findings. I highlighted one of the factors in red [natural history, listed with only weak evidence behind it] because the table is ambiguous on that point. Here’s what they say about symptomatic volunteers versus self-referred patients:
It is intriguing to speculate that natural history factors may be playing a greater role in antidepressant clinical trials over time as the population of patients enrolling in research studies changes. Most research participants in the 1960s and 1970s were recruited from inpatient psychiatric units, whereas current participants are symptomatic volunteers responding to advertisements. Studies are needed to compare the baseline characteristics, treatment response, and attrition rates of self-referred depressed patients with those who respond to advertisements. Natural history factors may be more important in the latter population, with the symptoms experienced by advertisement respondents being more variable and transient, resulting in greater placebo response rates compared with self-referred patients.
So by weak, they don’t mean it is a weak factor. It’s that the evidence is weak [to non-existent], and I assume that most studies these days are on recruits who answer advertisements. The fact that the placebo effect has increased with the rise of the CRO industry, where recruitment is primary, suggests to me that this is a very big part of the story. I really agree: "Studies are needed to compare the baseline characteristics, treatment response, and attrition rates of self-referred depressed patients with those who respond to advertisements."

I wanted to mention this article for some other reasons. First, what happens in the real world:
The intensive therapeutic contact found in clinical trials may be contrasted with what patients being treated with antidepressants receive in the community. In community samples of patients receiving antidepressant medication, 73.6% are treated exclusively by their general medical provider as opposed to a psychiatrist. Fewer than 20% of patients have a mental health care visit in the first 4 weeks after starting an antidepressant, and fewer than 5% of adults beginning treatment with an antidepressant have as many as seven physician visits in their first 12 weeks on the medication. Thus, assignment to placebo in an antidepressant clinical trial represents an intensive form of clinical management that has therapeutic effects.
The point about the intensive form of clinical management in clinical trials is well taken, but if you keep up with Dr. David Healy’s blog, or SSRI Stories, it’s what happens in the community that gives me pause. The akathisia with suicidality or homicidality they document on antidepressants doesn’t occur often, but with 73.6% of antidepressant patients treated solely by GPs [presumably without being cautioned about akathisia] and rarely seen again in the first month, this is a recipe for unnecessary deaths.

Likewise, I find this suggestion bizarre, "Research aimed at reducing placebo response should focus on limiting patient expectancy and the intensity of therapeutic contact in antidepressant clinical trials, while the optimal strategy in clinical practice may be to combine active medication with a presentation and level of therapeutic contact designed to enhance treatment response." I’m afraid I’m too set in my ways to see "therapeutic contact" as a way to augment antidepressant drugs! That’s just too much of a paradigm shift for this old man to take in. And that first part seems kind of strange too – limiting contact and expectations to increase the possibility of seeing a drug effect. I understand the logic, but that means to me that the drug effect is either pretty weak or the subjects in these trials aren’t very sick [or both]. This is hardly a suggestion one would make for a major illness being treated with robust medications. This is a recipe for detecting small differences.

If these medications were anywhere close to as efficacious as advertised, we wouldn’t have seen the deceit from the pharmaceutical industry or their KOLs. They wouldn’t have needed it. They could’ve just put them on the shelf and watched them sell without any sleight of hand. The small clinical trials needed to show strong results would’ve been a breeze – no tweaking required. That was clearly not the case…
  1.  
    wiley
    March 21, 2013 | 12:13 PM
     

    Aside from the obvious problems of diagnosing being unhappy to miserable as MDD and drugging it, did the study look at the use of active placebos in trials?

    Come to think of it, does it make any sense at all that there is a brain disease that can be effectively treated with a drug that has no impact on a brain that is not diseased? Or that a person can have a brain disease that is hidden until a psychoactive drug reveals it?

  2.  
    Catalyzt
    March 21, 2013 | 11:01 PM
     

    I have a tendency to oversimplify, and I know sample size is only one part of the puzzle, but the more I read about large RCTs, the more my thinking tends to follow Healy’s (and this post, specifically). If the drug had a dramatic and specific effect, you would probably see it in a very small population. Thus, if two guys drop 50 mics of acid, it won’t take a 40-page assessment instrument, neuroimaging, and a complex algorithm to know that the drug is doing something. We can argue for over half a century about what it’s doing and whether it has any clinical usefulness, but there are several distinct clinical effects that would be absolutely clear with a very small sample and a very simple assessment instrument. Even if a drug has a subtle and bizarre effect, like SSRIs, I think one may get a better sense of them from a smaller sample and a population you’re familiar with… Healy’s experiment with SSRIs recruited clinicians he had known for years, and reading about the responses of his colleagues was far more useful than anything I learned in psychopharm (particularly, for example, the fact that very intelligent people frequently do not want to discontinue SSRIs even when they cause akathisia and SI, and tend to blame themselves for SI even when they understand intellectually that these responses are often caused by the drugs).

  3.  
    March 21, 2013 | 11:36 PM
     

    SSRIs numb.

    And feeling more *numb* may make people feel like the drugs are helping – at least at first… but it doesn’t mean the drugs are *working* or providing a *benefit*.

    Another reason these drugs *slightly* out-perform placebo is that SSRIs have “side effects”… this makes the person feel like *something* is taking place.

    When an *active* placebo is used – one with a side effect, such as dry mouth – it’s not uncommon to see the placebo out-perform the SSRI. Studies involve many kinds of manipulation – for example, using a small, white pill for the placebo and having it handed out by folks in street clothing, versus a white lab coat for the SSRI.

    Using a large pill that is colored (pink is the best), marked with lettering, and scored down the middle changes the results.

    The bottom line is that SSRIs are a scam.
    A very *dangerous* scam.

    Timothy Scott, Ph.D. – ‘The Problem with Drug-Related Studies’ –

    http://www.ihealthtube.com/aspx/viewvideo.aspx?v=337dc5fa25677983

    Duane

  4.  
    March 21, 2013 | 11:47 PM
     

    And they do *not* work for MDD either.

    The mantra amongst psychiatrists is to (reluctantly) admit that the drugs have “little or no benefit” for mild/moderate depression, but that they remain quite effective for major depressive disorder.

    Really?
    Are you kidding me?

    How much common sense is there in that proposition?

    The drugs can lift 100 lbs of depression, but are unable to lift 25 lbs of depression?

    Some of us were born at night.
    But none of us were born *last* night.

    Duane

  5.  
    berit bj
    March 22, 2013 | 5:56 AM
     

    Trauma makes us feel bad, much shit may make people ill, still more shit can make people psychotic. My experiences have led to the view that these are normal reactions, and that given proper, wholesome care and safety, most may recover… see Richard Bentall on http://www.iai.tv

  6.  
    berit bj
    March 22, 2013 | 6:23 AM
     

    Exception: Some experiences are too horrible, heartbreaking. Loaded silence passed on – generations of human misery – as sensed around men who survived KZ camps, political prisoners in my family, and not the worst off. Their families suffered with them, in spite of psychiatric treatments, drugs, misery, passed on to children. Genes? Sure, normal, human reactions to inhumane, criminal cruelties of war, and to the litany of cruelties possible in peace. Easier, more lucrative to offer drugs than work for change, I think. Modern methods of exploitation…

  7.  
    jamzo
    March 22, 2013 | 11:07 AM
     

    http://www.sciencedaily.com/releases/2012/05/120503142540.htm

    Biased Evidence? Researchers Challenge Post-Marketing Drug Trial Practices

    “Rigorously designed and executed research has a critical role in improving patient care and restraining ballooning health care costs,” said Kimmelman, associate professor of biomedical ethics at McGill. “There is currently a push to streamline the ethical review of research. In this process, oversight systems should be empowered to separate scientific wheat from marketing chaff.”

  8.  
    March 22, 2013 | 5:34 PM
     

    Good luck finding any “scientific wheat” with antidepressant research.
    Probably more likely to find a needle in a haystack.

    Duane

  9.  
    jamzo
    March 22, 2013 | 8:24 PM
     

    fyi

    Edward Shorter, Ph.D.

    The Coming Battle Over DSM-5

    A wave of opposition is building against the new edition of psychiatry’s bible.

    Published on March 15, 2013 Psychology Today Blog

    http://www.psychologytoday.com/blog/how-everyone-became-depressed/201303/the-coming-battle-over-dsm-5-0

