which drug is best for Mr. Jones?

Posted on Wednesday 24 April 2013

If you steal something from yourself, is it plagiarism? I said all of this before [my old Greek…], but I guess I want to say it again:
Pyrrho of Elis [360-270 BC]

Sometimes an idea comes along that sticks in the mind like it belongs there – like there was already a space patiently awaiting its arrival. After that, it might fade or seem to disappear into a cloud of other ideas and thoughts, but then it reappears with the unexpected familiarity of an old friend who just happens to be in the neighborhood and gives you a call. Later visits never reproduce the wonder of the first encounter, but they bring a calm that approximates the meaning of the word wisdom.

For me, such ideas are usually personified – Einstein, Joshu, Eliot, Freud, Mandelbrot. They seem oddly connected in that they poked holes in how I’d been thinking, and offered me another perspective that I didn’t quite know was there. A few years back, I added another luminary to the cloud. I had been asked to give a short graduation address by the graduates of our Psychoanalytic Institute [short, because the speaker at the previous graduation had exceeded our endurance by leagues]. It was an honor, but I was skeptical that I could come up with a short topic that would rise to the occasion.

I knew what I wanted to say. I wanted to attack the dogmatic way many analysts have approached this whole business of theories of mental life and psychotherapy in general. On the other hand, I wanted to say something about what our graduates had learned along the way. Then I remembered something an old iconoclastic mentor, a Pathologist, had said when I was a Fellow in Immunology in a previous life. He was talking about the difficulty of giving talks to peers. The gist of what he said went like this: "Talking to peers is a pain. Either they taught you what you know, or you’ve already taught them what you’ve learned – so you have to come up with some new angle. But since there’s nothing new under the sun, I always look for something old that we’ve all forgotten. And it really helps to find an old Greek." I hadn’t thought about what he said for years.

But armed with the triad of words and phrases [skeptical, dogmatic, and old Greek], I did the modern thing and hit the Internet. Within a short period of time, the talk wrote itself as Pyrrho of Elis clicked into the space in my mind that had long been waiting for him. In fact, I got there immediately – I googled skepticism, looked at the Wikipedia entry, and there was Pyrrho. Here’s the short version.

Pyrrho was from Elis, a city-state in the western Peloponnese. He started life as a painter, but gravitated to Philosophy. He became one of the Philosophers who traveled with Alexander the Great on his campaigns of conquest, and he was influenced by the Philosophers of the East whom he met on those travels. When he returned to Greece, the dominant school of Philosophy was Dogmatism. We know a jaded version of Dogmatism, largely from the excesses of the Catholic Church centuries later. In Pyrrho’s time, Dogmatism was something lofty – the "search for absolute truth." Pyrrho taught that there was no absolute truth, and his teachings became known as Skepticism. What we know of Pyrrho outside later writings about his philosophy are stories that we know aren’t true. They’re the jokes the Dogmatists made about him – parodies of his indecisiveness. They told of Pyrrho walking down the road and seeing a man fall face down in the mud; while Pyrrho pondered whether to act, the man died of asphyxiation. Or of students who followed him everywhere to make sure that he decided to eat [those old Greeks weren’t so great with jokes].

Since there were no absolute truths, Pyrrho taught that we had to accept relative truth, always maintaining a questioning attitude, vigilant for things that might cause us to revise our former approximations. We call it healthy skepticism these days, and it’s the essence of the scientific method. Pyrrho was perfect for my talk. Psychoanalysis at its best is a benevolent skepticism about the anachronistic meanings that our patients live by as if they were true, even when they cause havoc in living. And Freud was a grand skeptic, skeptical of his own ideas as well as others’, constantly revising his theories even as he defended them. Pyrrho’s story offered a way to talk about paradigm ascension and exhaustion – and about the perils of dogma in general. Pyrrho was a welcome comrade in my pantheon of personified ideas…

When I first encountered the name, Healthy Skepticism, for the watchdog group, I thought it was a brilliant choice. It’s the essence of Science, at least to me. Our predecessors [Necromancer, Priest, Shaman, Alchemist, Guru, etc] each had their ideas about treatment, but the problem was that they believed in those ideas. So when things didn’t work, they kept at it and often hurt people – the dreaded therapeutic zeal. We’ve all done it; it’s such a hard lesson to hold on to – a mistake just waiting to be repeated.

In my first medical specialty, rheumatology, many of the diseases were of unknown etiology [still are]. Unlike psychiatry, there were plenty of visible signs and symptoms, laboratory abnormalities galore, but causes remained elusive. There were treatments – some empiric, some based on known mechanisms of disease – but most were toxic in one way or another, and often dangerous: high doses of Aspirin, Corticosteroids, Gold injections, antimalarial drugs, chemotherapeutic agents, biologicals. And both the toxicity and the efficacy varied widely from patient to patient. The principle is that every treatment in every patient is a therapeutic trial. There is often too much variability within a group of patients with the same disease to make individual predictions, so it was a medication trial in every patient. I understand that’s still somewhat true in rheumatology even with the newer agents [just listen to the mumbled ending to the t.v. ads for these drugs].

That principle must’ve followed me into psychiatry, because I still feel that way. I didn’t come to psychiatry to be a medication maven, but when using our drugs, I still see it as an individual therapeutic trial – the efficacy and safety data gathering starts anew with every case. In practice, it was easy because I saw people frequently. In the clinic where I now work some, it’s harder because I see people more frequently than most. But I guess I’d rather be a schedule clogger-upper than too casual with medications. That’s just healthy skepticism in my book – the lesson of old Pyrrho of Elis.

Right now, Dr. David Healy is coming towards the end of his series on the history of the use of Randomized Placebo-Controlled Clinical Trials and Evidence-Based Medicine with the stories of Lou Lasagna [US] and Michael Shepherd [UK] – the doctors who essentially introduced RCTs into regulatory medicine and popularized their use:
Healy uses the ultimate disillusionment of Lasagna and Shepherd with what their own brain-child became to bolster his case that these concepts have become mangled along the way and are often destructive to good medical practice – a point I absolutely agree with. I’ll leave you to read Dr. Healy’s story for yourself.

My version above is simpler. Randomized Placebo-Controlled Clinical Trials and Evidence-Based Medicine have been treated as dogma, absolute truth. But even the most honest of trials [rare in psychiatry] can dull the single most important thing physicians bring to the individual patient – their healthy skepticism.

Said Lou Lasagna:
“Evidence Based Medicine has become synonymous with randomized placebo-controlled clinical trials even though such trials invariably fail to tell the physician what he or she wants to know which is which drug is best for Mr Jones or Ms Smith – not what happens to a non-existent average person”.
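Lasagna’s complaint is easy to make concrete with a toy simulation [all numbers invented, nothing to do with any real drug]: a trial can report a solidly positive average effect while a sizable minority of individual patients – the Mr. Joneses – are actually worse off on the drug.

```python
import random
import statistics

random.seed(42)

# Toy model (invented numbers, not real trial data): each patient has
# their own true response to the drug, drawn from a wide distribution.
# The trial reports only the group mean.
N = 1000
responses = [random.gauss(5.0, 10.0) for _ in range(N)]  # mean benefit 5, SD 10

mean_effect = statistics.mean(responses)
harmed = sum(1 for r in responses if r < 0) / N

print(f"group mean effect: {mean_effect:.1f}")
print(f"fraction of individual patients worse off: {harmed:.0%}")
```

The group mean is real enough, but it says almost nothing about any single patient – which is the whole point of treating every prescription as its own therapeutic trial.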
  1.  
    wiley
    April 24, 2013 | 2:47 PM
     

    Really, I would rather pharmacologists do the lion’s share of research on drugs that have already been approved. If you know of studies, Mickey, on the effects of drugs combined, I’d love to see one. On top of individual differences in patients, a drug can act very differently when combined with another, and when it comes to cocktails of three or more drugs, no psychiatrist KNOWS what they’re doing.

    There are reasons why psychiatrists have to rely on symptoms of emotional/psychiatric disturbances, but there really is no excuse for them not to consult and pursue science when it comes to their use of drugs and drug combinations.

  2.  
    jamzo
    April 24, 2013 | 4:07 PM
     

    FYI

    “For my recent Observer article I discussed how genetic findings are providing some of the best evidence that psychiatric diagnoses do not represent discrete disorders.”

    “As part of that I spoke to Michael Owen, a psychiatrist and researcher based at Cardiff University, who has been leading lots of the rethink on the nature of psychiatric disorders.”

    “I asked Owen several questions for the Observer article but I couldn’t include the answers in full, so I’ve reproduced them below as they’re a fascinating insight into how genetics is challenging psychiatry.”

    http://mindhacks.com/2013/04/24/deeper-into-genetic-challenges-to-psychiatric-diagnosis/

  3.  
    wiley
    April 24, 2013 | 8:19 PM
     

    It’s befuddling to me that psychiatry is so gung-ho to go into genetics before it has done significant field work in the brute physicality of mental disturbance. Since it’s known that a lot of medical illnesses are misdiagnosed as “mental illness,” doesn’t it make sense to do studies in which people who have been labelled as “mentally ill” are thoroughly tested to eliminate those who are indeed suffering from a distinct illness that has psychiatric symptoms? Since MS appears to be “comorbid” with bipolar disorder, doesn’t it make sense to screen a lot of people with bipolar disorder for MS? Wouldn’t it make more sense to look at the relationship between actual brain lesions and diagnosed mental illnesses?

    To look at people diagnosed with “mental illness” and to take their diagnosis at face value – whether you call it by a clinical name, or label it a generic category of “mental illness” – is already to have assumed too much. To, on top of that, dismiss environment and trauma, and to focus on genes in the hopes of finding a genetic cause to prove heritability – when heritability is only supported by faulty and fraudulent studies – is like using the Hubble to find Valhalla.

  4.  
    April 24, 2013 | 10:46 PM
     

    Show me the science.

    “Psychiatry is to medicine what astrology is to astronomy.” – Leonard Roy Frank

    Duane

  5.  
    Annonymous
    April 26, 2013 | 1:36 AM
     

    “There’s another crucial limitation that science reporting — especially in psychology and the social sciences — often ignores. Even when we have R.C.T.s that decisively establish a scientific law, it doesn’t follow that we can appeal to this result to guide practical decisions. As Nancy Cartwright, a prominent philosopher of science, has recently emphasized, the very best randomized controlled test in itself establishes only that a cause has a certain effect in a particular kind of situation. For example, a feather and a lead ball dropped from the same height will reach the ground at the same time — but only if there is no air resistance. Typically, scientific laws allow us to predict a specific behavior only under certain conditions. If those conditions don’t hold, the law doesn’t tell us what will happen.”

    http://opinionator.blogs.nytimes.com/2013/04/25/what-do-scientific-studies-show/

    “For policy and practice we do not need to know “it works somewhere”. We need evidence for “it-will-work-for-us” claims: the treatment will produce the desired outcome in our situation as implemented there. How can we get from it-works-somewhere to it-will-work-for-us?”

    http://www.thelancet.com/journals/lancet/article/PIIS0140-6736%2811%2960563-1/fulltext

    At the same time, after Dr. Healy has completed his series I hope you also consider tackling the comments to his first post, in particular:
    http://davidhealy.org/not-so-bad-pharma/#comment-79072
    http://davidhealy.org/not-so-bad-pharma/#comment-79092
    In the context of Pyrrho, this statement from Chalmers stands out:
    “The features of all of these statements that I am unlikely ever to accept is their maximalist, ‘brook no disagreement’ nature. My strong impression is that you are prepared to generalise from your deep but essentially narrow experience in the field in which you have been a pioneer and to which you have made extremely important contributions.”
    Also of particular note:
    “Cannot provide patient-level answers. So what can, apart from randomised n-of-1 trials (which are at the top of the draft of the EBM Group’s evidence hierarchy published several years ago)? If you’re referring to using data from groups to provide prediction of treatment effects in individual patients then you should give examples of how you overcome this unavoidable challenge by preferring non-randomised groups to randomized groups as a basis for the guess.”
    “You suggest in your talk that trials that need large numbers mean that treatments don’t work, and that, instead, ‘we’ve got to believe the evidence of our own eyes’. So you wouldn’t want tranexamic acid if you get knocked off your bike and started haemorrhaging?”

    You said, in part:
    “The principle is that every treatment in every patient is a therapeutic trial. There is often too much variability within a group of patients with the same disease to make individual predictions, so it was a medication trial in every patient. I understand that’s still somewhat true in rheumatology even with the newer agents [just listen to the mumbled ending to the t.v. ads for these drugs]. That principle must’ve followed me into psychiatry, because I still feel that way. I didn’t come to psychiatry to be a medication maven, but when using our drugs, I still see it as an individual therapeutic trial – the efficacy and safety data gathering starts anew with every case. In practice, it was easy because I saw people frequently.”
    I then thought about this story, that Dr. Goldacre discusses in “Bad Science”:
    http://www.cochrane.org/about-us/history/our-logo#explanation
    And how it might be difficult to apply the model you discuss above to that situation. One rarely hears people discuss how different heuristics may be differentially appropriate. E.g., prophylactic treatment of low-frequency events (manic episodes) or one-time events (e.g., pregnancy complications). Or, say, suicide prevention.

  6.  
    Annonymous
    April 26, 2013 | 2:52 AM
     

    “There are two different ways to interpret Level 1 evidence for treatment benefits as it is currently stated. The intended interpretation is: “either N-of-1 randomized trials or systematic reviews of randomized trials”.”
    http://www.cebm.net/?o=5653
    http://www.cebm.net/mod_product/design/files/CEBM-Levels-of-Evidence-2.1.pdf

    Then there is this:

    http://www.jameslindlibrary.org/illustrating/articles/meta-analyse-en-medecine-the-first-book-on-systematic-reviews

    “Much of the focus until now has been on synthesizing the results of randomized trials, but what should we do about the integration of alternatives to RCTs, such as time series analyses, or n-of-1 trials? What about the integration of findings derived from analytical observational research? How can and should we integrate findings about screening and diagnostic tests, ‘simple’ incidence studies, or studies of prognosis? Should clinical case series reports not be presented as systematic reviews of cases (Jenicek 2001)?

    Are we investing too heavily in an excessively precise concept of some overall treatment effect instead of more closely examining the heterogeneity of findings, their nature, the biological explanation of such heterogeneity, and what it really means for decision making? How should we refocus, expand or reduce research findings to particular subgroups of patients and community groups, or generalized policies for all? Knowing how to rationally and pragmatically use findings from research syntheses is just as important as methodologically brilliant research syntheses.

    How simple life was for us when all this started in the 1980s! How everything looked crystal clear when we opened this Pandora’s box! Today, we can hardly imagine coping with the explosion of medical information without some process of research synthesis to deal with it all. Is there something out there to cope with this challenge which is better than systematic reviews and meta-analysis as we know them today? Place your bets!”

    I would agree that the “science” of medicine out there right now, as it is currently defined, is very problematic and often counterproductive. I would add that the same is often true of the “art” of medicine as well, both as it has been practiced and as it continues to be practiced.

    “The key feature of empirical testing is not that it’s infallible but that it’s self-correcting.”

    One of the biggest dangers of “evidence based medicine” is that it presents the illusion that it adequately addresses the “which drug is best for Mr. Jones” question – i.e., the illusion that we don’t need to come up with better models for the “art” because the gold standard is the RCT, and that is the be-all and end-all anyway.

    I retain healthy skepticism that the past or current “art” of medicine, or anything else extant in medicine as it has been or is currently practiced, is sufficient to address that question as effectively as it deserves to be addressed.

    It’s not just a matter of beating back the scourge of the hordes of wild-eyed EBM hyper-enthusiasts.

    Plus, many if not most of the psychiatrists out there handing out antipsychotics like candy, when interviewed, seem to say they’re doing so because it works so well “in their hands.” The trials and systematic reviews be damned. They know it works because they’ve seen it in their patients with their own eyes. Placebo effect, regression to the mean, selection and recollection bias, reporting bias, …etc be damned.
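    For what it’s worth, the regression-to-the-mean item in that list is easy to demonstrate with a toy simulation [numbers entirely invented]: if patients start a drug at a moment when their symptoms happen to be at their worst, their scores drift back toward their own baselines at follow-up even when the “treatment” does nothing at all.

```python
import random
import statistics

random.seed(0)

# Toy illustration (invented numbers): each patient's symptom score
# fluctuates around a stable personal baseline. Patients start
# "treatment" when they happen to score badly; at follow-up their
# scores drift back toward baseline with zero actual drug effect.
def score(baseline):
    return baseline + random.gauss(0, 10)  # day-to-day noise

baselines = [random.gauss(50, 5) for _ in range(5000)]

# Enroll only those caught at a bad moment (score > 60)
enrolled = [(b, s) for b in baselines if (s := score(b)) > 60]
entry_mean = statistics.mean(s for _, s in enrolled)
followup_mean = statistics.mean(score(b) for b, _ in enrolled)  # no drug given

print(f"mean score at enrollment: {entry_mean:.1f}")
print(f"mean score at follow-up:  {followup_mean:.1f}")  # lower: looks like 'it worked'
```

    The “improvement” here is purely an artifact of when the patients were sampled – no drug, no placebo, nothing.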

    Dr. Chalmers makes some interesting points here and it seems to deserve a careful reading:
    http://books.google.com/books?id=aHfBF8GoL8UC&pg=PA37&lpg=PA37&dq=%22n-of-1+trials%22+%22iain+chalmers%22&source=bl&ots=I6thk7s1FX&sig=PZvp6GmeFtbxvRQ3An9aG_eAubY&hl=en&sa=X&ei=vx56UbPSNNGGqQG8vIGgDQ&ved=0CC4Q6AEwADgK#v=onepage&q=%22n-of-1%20trials%22%20%22iain%20chalmers%22&f=false

    1BOM, your point that rheumatology, psychiatry, and perhaps some other fields lend themselves less well to pooled patient outcome data seems a particularly important one. The heterogeneity of individual responses to particular psychiatric drugs appears enormous. In some ways, perhaps the nature of psychiatric drug treatment lends itself particularly well to the approach you describe.

    At the same time, it would be of interest to compare and contrast how Dr. Chalmers characterizes n of 1 trials vs how this is typically characterized by most psychiatrists. I confess to not understanding either well enough to begin to do that.
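    For readers as unsure as I am, here is a minimal sketch of the arithmetic behind an n-of-1 trial as I understand the concept – a single patient randomized to drug or placebo across successive treatment periods, with a simple permutation test asking whether the on-drug periods beat the off-drug periods more than chance would allow. The sequence, effect size, and scores are all invented.

```python
import random
import statistics

random.seed(1)

# Hypothetical single patient: randomized to drug (1) or placebo (0)
# in successive treatment periods. Lower symptom score = better.
periods = [1, 0, 1, 0, 0, 1, 1, 0]   # randomized treatment sequence
true_effect = 4.0                     # this patient's own drug benefit (invented)
scores = [random.gauss(20 - true_effect * t, 2) for t in periods]

def mean_diff(treat, vals):
    # benefit = mean off-drug score minus mean on-drug score
    on = [v for t, v in zip(treat, vals) if t == 1]
    off = [v for t, v in zip(treat, vals) if t == 0]
    return statistics.mean(off) - statistics.mean(on)

observed = mean_diff(periods, scores)

# Permutation test: how often does a reshuffled treatment sequence
# look at least as good as the one actually used?
n_perm, extreme = 2000, 0
for _ in range(n_perm):
    shuffled = random.sample(periods, len(periods))
    if mean_diff(shuffled, scores) >= observed:
        extreme += 1
p_value = extreme / n_perm

print(f"observed benefit for this patient: {observed:.1f}")
print(f"permutation p-value: {p_value:.3f}")
```

    The point of the sketch is only that randomization and a formal comparison can live entirely inside one patient’s treatment – which seems closer to the “every treatment is a therapeutic trial” stance than to either caricature in the EBM debate.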

  7.  
    Annonymous
    April 26, 2013 | 2:56 AM
     
  8.  
    April 26, 2013 | 9:15 AM
     

    I still think of psychiatry as it is currently practiced as the new psychiatry, by which I mean post-1980, post-DSM-III psychiatry. I was certainly aware of the problems of the old psychiatry, and in case I forgot, in those days there was someone around every corner to remind me. Rigid analysts who forgot that Freud was just a smart old guy from some time back, community psychiatrists pretending that the antipsychotics would cure schizophrenia if the patients would just take them, experiential post-1960s types who got with their feelings and enjoyed saying outrageous things, and also a number of the best doctors I’ve ever encountered. I had come to psychiatry from a research-based education in Internal Medicine, and though I’d never heard the term evidence-based medicine, I had a black belt. I hope I still do.

    The majority of my reason for changing specialties had to do with being drawn to the individual psychological plights of the patients I saw as an Internist. But a piece of it was something about changes in Internal Medicine at the time. I liked being a physician, which was a surprise for a research type like me. And so when I decided I wanted to be a practitioner, I had to rethink the whole thing, because I had picked Internal Medicine as a research career. At that time, Internal Medicine was changing. Formerly, it was a diagnostic specialty – seeing and treating sick people. But it was morphing into something else – more of a wellness specialty, with routine physicals, disease management, and preventive medicine – plus another group of subspecialists who were mostly consultants. I found the former too rote and the latter too distant. It was thinking about that that helped me realize that it was the patients and their narratives that I was really drawn to – their subjectivity – not so much their objective medical afflictions or health promotion.

    But there was something else. One year, a diastolic blood pressure of 100 was the limit; the next year it was 90. One group recommended tight blood sugar control in diabetes, another said it wasn’t so important. High-ish cholesterol was bad, or it wasn’t. Coumadin after a heart attack was either a good idea, or it wasn’t. And so it went, and while I found that tedious, mostly I thought nobody knew the answer to those questions – and so I couldn’t pass those things along to patients with any conviction. The way I dealt with that internally was to decide that I was a natural treat-sickness doctor, not a public-health doctor.

    All of this is a lead-in to a conversation from almost forty years ago that I’ve mentioned before, with someone representing the new psychiatry who was trying to convert me. These days, it would be one of those discussions where I was cast in the role of arguing on the side of "the art of medicine" or "n=1" medicine against "evidence-based medicine." In that version, I would be advocating "shooting from the hip" or the "in my experience" genre – the flaky side of the street. But back then, it was the early days of this kind of discussion, so the lines weren’t drawn in the sand so clearly as they are now.

    The conversation itself isn’t so important. It was really about whether the two of us could operate together as a team. The answer was "no," but we hadn’t figured that out quite yet. But what I found myself saying was important, at least to me, because I’d never thought it clearly before. I said that I thought of psychiatry as the specialty in medicine that dealt with ambiguity. That when things became clear, the illness would pass to another specialty – like Syphilis or Epilepsy had in the past. That’s what I had been drawn to – trying to find the best path through all the ambiguity and subjectivity when there was no clarity. It was the scientific method applied to a unique problem – and a real challenge. I think I even said that I thought it was vastly important to know everything known that pertained to the case at hand, but I saw that as just a starting place. Knowing a person has a Borderline Personality Disorder and what that means isn’t like knowing someone has Appendicitis, I might have said.

    But that’s just what I think, not what this blog is about. It’s about how the time-honored scientific method and principles of evidence-based medicine have been co-opted and perverted in psychiatry in the last three decades – commandeered in ways I could never have conceived. That’s why I am a supporter of Iain Chalmers, Ben Goldacre, Fiona Godlee, AllTrials, the Data Transparency crowd – because they are hell-bent on turning evidence-based medicine back into what it’s meant to be – honest, scientific, available. And that’s why I rant about a much longer list of prominent psychiatrists who I think are false prophets. They talk the talk, often very loudly, but they are bought-men, whether they know it or not. And that’s the reason I support David Healy. He’s harder to understand, but his point is strong – that evidence-based medicine and clinical trials have been elevated to a position of dogma, replacing what physicians are actually for – negotiating between all of our science and the best interests of the patient at hand. No trial is specific for a given patient, no algorithm adapted to them. Each patient is just one dot on the scattergram that has been turned into a pretty imaginary line on a graph. And, by the way, that’s also why I chose psychoanalysis on top of my psychiatry training. It’s because I realized that almost all of my medical errors were not because I didn’t know my medicine, they were because I didn’t know myself.

    So I’m no good to talk with about evidence-based medicine versus clinical judgement. I’m on both sides of that fence. How is a physician to negotiate the impossible conflict that often arises between following a recommended algorithm/guideline/clinical-trial result and an intuition that it’s not right for the patient you’re seeing? Examine the recommendation’s data carefully. Look at the source of the intuition with equal vigor. Get outside help from someone who doesn’t have "a dog in the hunt." Neither shoot from the hip nor hide behind the recommendation. And when you make the inevitable mistakes, try to be aware that you made them and go back and look at why, so you won’t make them again. A number of suggestions, but none of them covers every case. It’s one of those Zen questions…

  9.  
    April 26, 2013 | 11:49 PM
     

    “It’s one of those Zen questions…”

    This reminds me of another Zen question, which was even posed with an ominous zenny gong sounding throughout, where it says “SOUND EFFECT”, on this transcript of the RadioLab guys discussing the ‘Decline Effect’ and Scientific Truth: http://www.onthemedia.org/2011/may/13/the-decline-effect-and-scientific-truth/transcript/

    And one of the examples they used regarding regression to the mean was the case of *2nd generation anti-psychotics*. Listen for the sound of an ‘interesting’ Drug rep’s one hand clapping:

    JONATHAN SCHOOLER: And I think this is, for me, the most troubling error to the decline effect, ‘cause you see like second generation anti-psychotic. … These are drugs used to treat people with schizophrenia, bipolar. When they first came out in the late ‘80s, early ‘90s, some studies found that they were about twice as effective than first generation anti-psychotics.

    JAD ABUMRAD: Mm!

    JONATHAN SCHOOLER: And then what happened is the standard story of the decline effect. Cue the sound effect. [SOUND EFFECT] Which is clinical trial after clinical trial, the effect size just slowly started to fall apart. You see a similar decline with things like Prozac, and – anti-depressants –

    ROBERT KRULWICH: Wow!

    JONATHAN SCHOOLER: The effect of the drugs have gotten weaker, but the placebo effect has also gotten stronger. I was talking to one guy at a drug company who [LAUGHS] – he was kind of interesting. He blamed that on drug advertising. He said that they started to see their placebo effect go up in the late ‘90s when these drug companies started advertising.

    JAD ABUMRAD: But then wouldn’t that actually offer an explanation for this decline thing because, you know, some – if you know about what this drug is supposed to do, maybe it works differently somehow?

    JONATHAN SCHOOLER: Certainly, there are areas of psychology where that can change the outcome in, in one way or another… Like – I, I say this with – some trepidation but I think we can’t rule out the possibility that there could be some way in which the active observation is actually changing the nature of reality. [MUSIC/MUSIC UP AND UNDER] That somehow in the process of observing effects, that we change the nature of those effects.

    [MUSIC]

    SEVERAL AT ONCE: Ah!

    ROBERT KRULWICH: You’re in real trouble. [GUYS LAUGHING]

    JAD ABUMRAD: But it sounds like you’re saying that maybe the truth is running away from you or something?

    JONATHAN SCHOOLER: Well, I – I’m not – you know, I’m not gonna say that I am certainly not gonna say that there’s some sort of intentionality to these effects disappearing, more that it’s almost – and again, this is just a speculation – some sort of habituation. So just as when you put your hand on your leg you feel it and then as you leave it there it becomes less and less noticeable, somehow there may be some kind of habituation that happens in – with respect to these findings.

    JAD ABUMRAD: But what is the hand and what is the leg in – in having this –

    JONATHAN SCHOOLER: Well, in – in – in this most radical conjecture, there could be some sort of collective consciousness that’s habituating. Again, radical speculation but there may be some peculiar way about the nature of reality that somehow it gets into the ether.

    [MUSIC]

    JONAH LEHRER: Keep in mind, the notion that the laws of reality are unchangeable is an assumption. It’s a reasonable assumption but we don’t know it for a fact. And there have been physicists who have even speculated that perhaps the rules change as time – goes on.

    [MUSIC]

    JAD ABUMRAD: But by this logic you can never actually know anything for sure –

    ROBERT KRULWICH: Because reality could change based upon the observer’s position, habits, biases, information whatever.

    JONATHAN SCHOOLER: Well, so far we have not really seen these types of things in the domain of, of physics, but you know an aspirin might not do what it used to.

    *sigh* None of those darn drugs do what they used to. But it seems Reality can change especially well when one of those ‘interesting’ drug company guys is there observing a study, or ghostwriting it.

    If a tree falls in the forest, will anyone hear it? (CUE SOUND EFFECT)
