déjà vu…

Posted on Monday 1 December 2014


The Lancet Psychiatry. 2014; 1[6]: 403.

At The Lancet Psychiatry we aim to publish research that illuminates and changes clinical practice. Changing practice requires a high standard of evidence, although existing practice does not always have a solid scientific base. Management of mental health often seems intuitive, so many interventions have been developed and rolled out on the basis of good intentions rather than good science [eg, post-traumatic stress disorder counselling after natural disasters]. We do not deny the role of clinical expertise and the art of the individual psychiatrist, but we believe that studies aimed at altering the status quo should be rigorous in planning, execution, and communication.

The acknowledged gold standard in terms of research is the randomised controlled trial [RCT], a method that should be applied to all types of intervention, wherever possible. In psychiatry, there are also good pragmatic studies of intervention outcomes where RCTs are not feasible, the most informative being those that combine cross-sectional and longitudinal observations. Excellent work is done using registries, particularly where health-based registries can be linked with others, such as those held by the educational and justice systems. For each type of study, there are recognised guidelines that list what should be measured and reported [compiled by the EQUATOR network]. The best-known is the Consolidated Standards of Reporting Trials [CONSORT] statement. CONSORT is itself evidence-based, and for each item there is an explanation of key methodological issues and the importance of reporting that item. Many medical journals now include CONSORT in their instructions to authors, but the slow rate of progress can be seen by looking at the search results for any meta-analysis — too frequently, studies have to be excluded because insufficient information is available.

Some claim that psychiatry, especially psychotherapy, is not suited to such rigorous approaches. Psychotherapist and author Darian Leader has stated that “the criteria for the evaluation of therapies has moved to a very narrow view of evidence, based on the medical model of randomised-controlled trials … with a control group, and so on. You can’t do that with therapy, because the whole point of therapy involves the beliefs the person has initially about their treatment or therapeutic experience. So you can’t randomly assign someone to a therapist.”

The Lancet Psychiatry admires this focus on the individual, but believes a strong evidence base is both possible and necessary. In the interests of all those who entrust their lives and well-being to mental health professionals, it is time to level the playing field. All interventions should be assessed to the same standards of evidence, from psychopharmacology and psychotherapies, to brain stimulation technologies and new approaches such as video games and apps. Trials should be registered in a publicly available database; the protocol should be available, and the methods and results reported must match the protocol to avoid publication bias. Most important, sufficient information should be provided to enable replication of the study. The basis of scientific research is validation and refutation. For compounds, this means chemical composition, formulation, and dosing schedules. For psychological therapies, it means details of how many sessions, the length of sessions, availability of a manual where a specific therapy was given, details about the therapist, and evaluation of differences between specific therapists and sites. For training interventions, it means access to the training material, and details of the trainers, sessions, etc. This approach also applies to models of care: if something has the potential to be useful, it must be replicable. Circumstances will differ and where people are involved there will always be considerable variation, but providing relevant detail will enable other researchers to test the data and perhaps to explain different outcomes. Where trials are not possible, data on outcomes — including those associated with general well-being and function rather than specific symptoms — are necessary. One essential set of observations is that of adverse events.
Again, there are people who claim that the different nature of medicine and psychotherapy means that, although it is relevant to collect adverse event data for the former, to do so in the same manner for the latter is the equivalent of comparing apples with oranges. However, The Lancet Psychiatry believes that if an intervention has the potential to confer benefit, it also has the potential to cause harm.

This rigour is needed not just to satisfy journal editors but to convince researchers, clinicians, patients, and ultimately governments. If we want mental health services to be free at point of access, we must demonstrate that they work.
I haven’t run across this argument for some time. It was the constant rallying cry in the lead-up to the neo-Kraepelinian revolution and the DSM-III in the 1970s – directed primarily against the psychoanalysts and other psychiatrist psychotherapists who were billing medical insurance for their services. The outcome was a split in mental health care, at least in mental health care paid for by third-party payers. Thereafter, psychiatrists stuck to the biological side of the street, primarily focused on psychopharmacology.

Carrier-reimbursed psychotherapy didn’t disappear. It was taken over by other mental health specialties, with controlled frequency, duration, and negotiated rates of payment. A psychiatrist colleague recently suggested to me that a physician [like me] shouldn’t be a psychotherapist, based on a different argument – economics – saying that it costs too much to train a physician, and that a physician psychotherapist would therefore not be cost-effective [the latter part implied].

I am a psychiatrist and psychotherapist, but I didn’t post this article to argue about psychotherapy. There is much about what it says that I agree with and I have no complaint about Lancet Psychiatry having high scientific standards for what they publish. Mostly, I just appreciated that the article doesn’t have the usual contempt, sarcasm, or ludicrous examples that often accompany pieces containing the word psychotherapy. My reason for posting it has to do with something else entirely – this statement:
    The acknowledged gold standard in terms of research is the randomised controlled trial [RCT], a method that should be applied to all types of intervention, wherever possible.
There was a time in my life when I would have automatically agreed with that statement. In a first career, I was a very medical physician involved in biological research. I was in love with anything and everything about the scientific method. That’s still true. But over the last six years, as I’ve spent a lot of time looking at the domain of Randomized Controlled Trials [RCTs], my thoughts about that statement have changed dramatically – even though it seems like it ought to be correct.

The most obvious objection is the extent to which RCTs can be corrupted, manipulated, jury-rigged, etc. The scientific misbehavior in the RCTs of psychiatric drugs is staggering. I could never have imagined that anything like it could even happen. So I spend my days involved in trying to ensure that the kind of perverse science we’ve seen in psychiatric and other drug research never happens again. At the moment, access to the raw data from trials, with independent analysis, seems the best approach to what has gone before.

But even without the corruption, there’s more to be said about RCTs. I think they are essential as a beginning take on cataloging the adverse effects of drugs [psychiatric and otherwise], but only a beginning. Many adverse effects come with chronic use, so ongoing reporting is an essential ingredient in any accurate understanding of drug toxicity [for example, David Healy’s Rxisk site].

One might think that cleaning up the corruption, distortion, and exaggeration would make RCTs the preferred standard in efficacy studies as well, but I’m not sure that’s totally right either. RCTs can be too sensitive – detecting small effects that are statistically significant but not clinically relevant. They’re limited by their time frame, their outcome instruments, and their subject recruitment and evaluation processes, and then there’s the placebo effect. The results are often hard to replicate – thus the increasing reliance on meta-analysis of multiple studies in evaluating overall efficacy.

The application of scientific principles in medicine is not like the strength-of-materials course in civil engineering training, no matter how much we’d like that to be true. There are too many parameters and interrelated forces at work for that kind of precision. That’s even more true in the region psychiatrists and psychotherapists haunt – the world of subjectivity. RCTs can sometimes add a degree of clarity, but sometimes not so much. In my specialty of psychiatry, the track record of RCTs is anything but exemplary. So I don’t think any single methodology is made out of gold or levels the playing field. That’s what makes things so confusing and leads to so much contention. It’s also what makes it all so incredibly interesting…
  1.  
    Bernard Carroll
    December 1, 2014 | 8:51 PM
     

    This editorial in Lancet Psychiatry signals a value system that has no place in clinical science. The essential conceit is the goal of publishing “studies aimed at altering the status quo.” Dispassionate science has no such agenda. When there is an agenda, watch out! That is the fatal flaw we find in Pharma’s record.

    Good clinical science consists in studies that approach questions with equipoise – seeking to come as close to the truth as is possible using current methods. As the Nobelist Peter Medawar put it, science is the art of the soluble (whereas politics is the art of the possible). He also pointed out that there are no prizes for tackling questions that our current methods and concepts are incapable of resolving. Wishful thinking and triumphalist fantasy don’t get us there.

    We might add that it is presumptuous and unseemly for journal editors to cast themselves as the arbiters of quality and influence – especially when those same editors have contributed nothing of substance themselves to the field.

  2.  
    James O'Brien, M.D.
    December 1, 2014 | 11:55 PM
     

    Contributed nothing to the field? No wonder. Looking at bios and pics, I’m wondering if they’ve ever had to use a razor.

    Awfully inexperienced to be making big decisions for a (formerly) prestigious journal.

  3.  
    berit bryn jensen
    December 3, 2014 | 1:48 AM
     

    http://www.thelancet.com/journals/lanpsy/article/PIIS2215-0366(14)00058-3/fulltext

    A system that costs the public purse billions of dollars, euros, and Norwegian kroner, and still does not reliably deliver desired outcomes, needs to be changed. The crisis in psychiatry is evident to all who care, also to many whose interests are tied to the status quo, I think. “Science as the art of the soluble” sounds good, but it may be considered hubristic to think that the complexities/chaos of human life and our brains shall be understood by methods seen as science today. The thinking and practices of the profession are mostly sorely lacking. It would be wise to acquire a position of humility in the face of the many gross abuses perpetrated in the name of psychiatric “science”, including the current epidemic of iatrogenic illnesses from toxic drugs.

  4.  
    Bernard Carroll
    December 3, 2014 | 10:56 PM
     

    Berit Bryn Jensen commented: “‘Science as the art of the soluble’ sounds good, but it may be considered hubristic to think that the complexities/chaos of human life and our brains shall be understood by methods seen as science today.” Ms. Jensen’s point here is pretty much exactly what Medawar was speaking of. I see no important difference of content between the two, though it appears Ms. Jensen believes she is contradicting Medawar.

  5.  
    berit bryn jensen
    December 4, 2014 | 6:48 AM
     

    I thank Dr Carroll for two reminders. First, the importance of logical coherence. If I had used “and” instead of “but” in the quote above, there would, possibly, have been less chance of Dr Carroll misinterpreting two otherwise clear, I hope, sentences in agreement with the quote he offered from the late Dr Medawar.
    Second, I’m still baffled at times by people who presume to know what other people believe. Quite a few belong to the psychiatric profession. Dr Carroll is wrong when he thinks he can know what “Ms Jensen believes”. It would be utterly presumptuous and hubristic of me to contradict anyone on the basis of so short a quote, which I like, and which, in my view, applies equally to politics. Lots of problems would be soluble if politicians had more integrity, courage, and the will to explore what’s possible over the long and arduous haul, instead of the short term.
    I see from the page on Dr Medawar at Nobelinstituttet that he used a concept called actively acquired tolerance (about skin grafts). In medicine there is also actively acquired intolerance – of drugs, and of people who think differently.
