a lot riding on…

Posted on Wednesday 24 September 2014

The meeting at which the EMA will deliver their formal policy for Data Transparency is just a week away. Earlier, I mentioned my apprehension about what they might say [the other shoe] and gave an outline of the kind of data that might be involved [it matters…]. Several years ago, I was pointed to some of the original trials of Imipramine, from before such trials were required by the FDA [remembrance of things past…, remembrance of things past redux]. Their outcome variable was much closer to the one familiar to all clinicians – they interviewed the patients. They had a placebo control group and the study was double-blinded, as in modern trials. They saw the subjects weekly and classified their responses based on the clinical interview:

The results of treatment were assessed as symptom free, greatly improved, somewhat improved, no change, or worse. For the purpose of assessing the value of the drug as a significant therapeutic agent, the first two of these categories have been combined as a good or worth-while result and the other three as a poor result. Patients showing a good result were able to return to their normal activities without undue effort…
Note that they didn’t even settle for "somewhat improved." They did do a statistical analysis, but they didn’t need to. All you had to do was look at the results to see the effect of the drug. The formal output was the size of the effect expressed as the NNT [Number Needed to Treat], and those are robust responses [2.08, 2.56].

They concluded:

A controlled trial was carried out to evaluate the effects of imipramine on endogenous and reactive depression. In endogenous depression 74% of cases showed a good response to the drug, while 22% responded to the placebo [P<0.01]. In reactive depression 59% responded satisfactorily on imipramine, as compared with 20% on the placebo [P<0.02]. On comparing these results with those of a previous trial of iproniazid the impression was obtained that imipramine was the more effective agent in treating endogenous depression.
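The NNT arithmetic is easy to check against the quoted response rates: the NNT is simply the reciprocal of the absolute difference between the drug and placebo response rates. A minimal sketch in Python – the rounded percentages above reproduce the 2.56 figure for reactive depression exactly, while the 2.08 for endogenous depression presumably comes from the unrounded patient counts, since the rounded 74% vs 22% gives about 1.92:

```python
def nnt(p_drug, p_placebo):
    """Number Needed to Treat: reciprocal of the absolute risk reduction."""
    return 1.0 / (p_drug - p_placebo)

# Reactive depression: 59% responded on imipramine vs 20% on placebo
print(round(nnt(0.59, 0.20), 2))  # 2.56

# Endogenous depression: 74% vs 22% – the rounded percentages give ~1.92;
# the post's 2.08 likely reflects the original unrounded counts
print(round(nnt(0.74, 0.22), 2))  # 1.92
```

Either way, an NNT in the 2–3 range means roughly one additional good outcome for every two or three patients treated – a drug effect you can see without a statistician.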
It’s a different world now – 50 years later. The trials are conducted by technicians trained specifically to do drug trials. They follow formal protocols and administer formal tests – rating scales that are used for diagnosis and to catalog responses. They ask about adverse experiences and transcribe what they’re told. The patients are recruited from clinics and through advertisements rather than just being people who are seeking treatment. The results are tabulated and analyzed on computers by statisticians. The trials are primarily financed by the drug’s manufacturer and analyzed by their own scientists. Many of the published articles have been written by medical writers hired by the trial sponsor. The involvement of the actual clinician authors whose names are on the published papers varies from none to some, but they’re rarely authors of the sort who did that early Imipramine study. Like I said, "It’s a different world now."

It would be a naive person, indeed, who was unaware of the increasingly blurred boundary between commerce and medical practice. Yesterday at a charity clinic in a very rural community, over 10% of the patients I saw asked if Cymbalta® was right for them. Being asked that repeatedly is bad enough. But what’s worse – I don’t know the answer, and I can’t even know it. It feels like a pincer move – living between a contaminated literature and a successful Direct-To-Consumer ad campaign. Throw in ineffective and complicit professional organizations, an academic community that is often asleep, the restrictions of third-party payers, and the accusations of activists, and sometimes you just want to throw up your hands and surrender. And sadly, many physicians [and patients] have done just that.

It would be equally naive to think that having the kind of Data Transparency originally promised by the European Medicines Agency will fix all of those things. The dyke has too many holes in it for that. But it would be one fine place to start. The threat of having their work checked would have a major impact on the invasion of the medical literature by commercial interests. And while it wouldn’t necessarily tell me if Cymbalta® is right for you, it would make it at least possible for me to approximate the answer if I or others were actually able to study the studies [and we did it]. So from my perspective, there’s a lot riding on next week’s report on the EMA’s definitive policy…
  1.  
    wiley
    September 24, 2014 | 6:23 PM
     

    I’m guessing, Mickey, that you already saw the article in the Wall Street Journal about Chantix and the suicide warning being challenged. It’s really sad not only that some of the most up-to-date information about pharmaceutical drugs is in the WSJ and Forbes, but that the articles say things like this:

    In a boost to Pfizer, the FDA has updated labeling on its Chantix smoking-cessation pill to indicate the drug may not carry the risks of suicidal behavior, a controversial issue that prompted the agency to include a serious warning in the labeling in 2009. The changes are being made to reflect the results of various studies.

    One study involved a so-called (emphasis added) meta-analysis of five studies involving nearly 2,000 patients that Pfizer says did not show an increase in suicide thoughts or actions among people who took Chantix compared with a placebo.

    I’m not going to even try to go into the weeds of pondering the different studies or any numbers (or finishing the article), but there are no “so-called” meta-analyses; there are meta-analyses. Oh, I’m sure someone could say that they’re just not assuming readers are familiar with the concept, but it’s an article about the effects of a legal FDA decision on a specific drug that is affecting the stock price of a drug company. And, in common usage, the term so-called typically functions as a scare quote.

  2.  
    September 24, 2014 | 6:32 PM
     

    Wiley,

    Well put. I’m writing about Ed’s post right now…

  3.  
    James O'Brien, M.D.
    September 24, 2014 | 11:39 PM
     

    It was fascinating looking at that study and how much more dramatic the responses were than in SSRI trials or really any antidepressant trials since 1980. I looked at some old MAOI studies from a little later and found similar results.

    This has to relate to the failures of DSM-3, 4, 5 and beyond as much as anything. Clearly, in the “endogenous” group they were treating much more seriously ill patients than you see in trials today under the expanded definition of “Major Depression” which, as of last year, might be depressed mood or anhedonia and 5/9 symptoms of major depression 15 days after you lost your family in an airplane crash. In other words, drug and placebo responses were higher because the patients had self-limited conditions, not because they were more suggestible.

    The standard line in psychiatry is that SSRI/SNRI/NDRI antidepressants have the same efficacy as TCAs/MAOIs. But the placebo numbers tell the real story. Placebo responses have been going up 7% per decade in antidepressant trials. It’s obvious why…the patients in the studies aren’t as sick.

    I’m calling foul on the facile party line…clearly the older antidepressants are more efficacious although they have more side effects. The drug-placebo number is key, not the top-line drug response rate. The popularity of SSRIs is a combination of defensive medicine, pharma/KOL hype and a belief you can have a free lunch in biochemistry. It’s no surprise to me that more effective medications are also more problematic with side effects.

    Too damn bad Emsam costs 14 bucks a day…

  4.  
    Steve Lucas
    September 25, 2014 | 2:00 AM
     

    A side note. A company must report any action that may have a material impact on earnings. Several years ago I was given a medication for dry skin that had an increased risk of cancer and was going to be withdrawn, per the WSJ. The doctor knew nothing of this, since the drug rep had not told her.

    One of the reasons I became involved in med blogs was finding doctors repeating verbatim the drug ads of the time. My frustration was palpable, given my knowledge of drug sales and how supposedly sophisticated people had been taken in, to the possible detriment of my health.

    Steve Lucas

  5.  
    September 25, 2014 | 6:31 AM
     

    What do people think about this study which reports an increase in placebo response (and decrease in effect size) for anti-psychotic drugs from 1991-2008?
    Khin et al, J Clin Psychiatry 2012;73(6):856-864
    http://www.ncbi.nlm.nih.gov/pubmed/22687813

  6.  
    James O'Brien, M.D.
    September 25, 2014 | 9:19 AM
     

    It seems more likely that patients in the older schizophrenia studies were sicker.
    I don’t think people are getting more suggestible at that pace.

  7.  
    James O'Brien, M.D.
    September 25, 2014 | 11:00 AM
     
  8.  
    James O'Brien, M.D.
    September 25, 2014 | 12:16 PM
     

    And possibly the most informative article ever written in Psychiatric Times:

    http://www.psychiatrictimes.com/articles/not-obsolete-continuing-roles-tcas-and-maois/page/0/2

    All of this seems to suggest a radical new treatment algorithm:

    1. TCAs for melancholic or severe depression
    2. MAOIs for moderate to severe atypical depression or possibly bipolar depression
    3. For mild to moderate depression, exercise, sleep hygiene, and psychotherapy, and if no improvement, maybe an SSRI/SNRI/NDRI or transdermal Emsam.

    Obviously, if someone is on something already and it is working, ignore the decision tree.

  9.  
    wiley
    September 27, 2014 | 3:39 PM
     

    Oh, goodie, Dr. O’Brien! I’ve saved that first study you linked to, and boy is that an education. I get the distinction between placebo effect and placebo response now in their most raw terms. I’ll study this at times when I’m up to this kind of technical reading. This is the kind of information and training in concepts and terms that is good for researchers, clinicians, and anyone who wants to do their best to make sure that they’re informed enough to make their own decisions about whether or not to try or to continue taking a drug.

    And it’s necessary to know these things in order to speak accurately and precisely on the issues of taking medications, especially the psychological and social aspects of medicating when dealing with drugs that are intentionally or unintentionally psychoactive.

    People who believe that medical science is as pure as fresh snow – and that anyone who denies it hates science, or that contradicting the great platitudes of pharmaceutical marketing is an attempt to destroy science itself and everyone labeled with a mental illness – are letting themselves down. Being able to discuss why the reigning cynicism and fraud of drug testing, and its abuse of methodologies, also short-changes those who want or need to be properly medicated is critically important, along with evidence that some drug interventions work better combined with other kinds of interventions, don’t work well enough to be used as a first and automatic attempt to modify symptoms, or don’t pan out in the risk/benefit calculations of an individual.

    Now I’m going to check the rest. Thank you.

  10.  
    James O'Brien, M.D.
    October 2, 2014 | 11:41 PM
     

    At this point, I’m even wondering if eliminating all conflicts of interest would lead to robust psychiatric studies. I think the methodology has been so bad for so long I wonder if we can ever get it right. The old studies were done on depressed inpatients and that eliminated a lot of noise.

    From Richard Feynman’s Cal Tech address 1974:

    All experiments in psychology are not of this type, however. For example, there have been many experiments running rats through all kinds of mazes, and so on–with little clear result. But in 1937 a man named Young did a very interesting one. He had a long corridor with doors all along one side where the rats came in, and doors along the other side where the food was. He wanted to see if he could train the rats to go in at the third door down from wherever he started them off. No. The rats went immediately to the door where the food had been the time before.

    The question was, how did the rats know, because the corridor was so beautifully built and so uniform, that this was the same door as before? Obviously there was something about the door that was different from the other doors. So he painted the doors very carefully, arranging the textures on the faces of the doors exactly the same. Still the rats could tell. Then he thought maybe the rats were smelling the food, so he used chemicals to change the smell after each run. Still the rats could tell. Then he realized the rats might be able to tell by seeing the lights and the arrangement in the laboratory like any commonsense person. So he covered the corridor, and still the rats could tell.

    He finally found that they could tell by the way the floor sounded when they ran over it. And he could only fix that by putting his corridor in sand. So he covered one after another of all possible clues and finally was able to fool the rats so that they had to learn to go in the third door. If he relaxed any of his conditions, the rats could tell.

    Now, from a scientific standpoint, that is an A-number-one experiment. That is the experiment that makes rat-running experiments sensible, because it uncovers the clues that the rat is really using – not what you think it’s using. And that is the experiment that tells exactly what conditions you have to use in order to be careful and control everything in an experiment with rat-running.

    I looked up the subsequent history of this research. The next experiment, and the one after that, never referred to Mr. Young. They never used any of his criteria of putting the corridor on sand, or being very careful. They just went right on running the rats in the same old way, and paid no attention to the great discoveries of Mr. Young, and his papers are not referred to, because he didn’t discover anything about the rats. In fact, he discovered all the things you have to do to discover something about rats. But not paying attention to experiments like that is a characteristic example of cargo cult science.

    Are we doing the same with clinical trials? Repeating the same mistake over and over? In other words, is using outpatients – who are not as depressed, by definition of a watered-down DSM, and who knows what else is going on with them – basically the same as running a rat maze without sand?
