|It is difficult to argue against evidence based medicine. Why would anyone even want to do that? Likewise, the double-blinded randomized clinical trial makes intuitive sense to scientists or anyone else who gives it any thought. The way to see if a drug works is to take a homogeneous group with the same affliction and blindly assign them to two groups, one taking a placebo and the other taking the test medication. One might even make three groups and throw in a drug that’s known to be effective [an active comparator]. In psychiatry, one needs an objective measure of success to convert the subjective experience of the affliction into an objective marker, and for that we have observer-rated and/or self-rated scales [with the psychometric properties of test-retest reliability and validity]. It’s a system that has evolved over time for testing all kinds of things, but in the case of medication effects, there are two regular outcome categories – efficacy and safety [adverse effects]. RCTs make so much sense that they’ve become the Law of the Land. For a drug to be approved in the US, it has to have two randomized clinical trials with proven efficacy, and an acceptable safety record considering all subjects from all studies done on the drug. These differing standards reflect our history. The original focus of the FDA was safety. The part about efficacy was a later add-on.|
|Likewise, the analysis of the results of RCTs is monotonously structured. One must declare the outcome variable in advance, including the criteria for response and those for remission. And then there’s the real world problem that subjects drop out along the way, and that has to be accounted for [often handled using the Last Observation Carried Forward method]. There’s the pesky placebo effect to take into account [untreated subjects don’t stay sick either]. And there are standard statistical tests to compare groups, as well as tests for the strength of the drug effect. In short, there are a number of procedural conventions and protocols governing every facet of a given clinical trial – including conventions about how the results are presented. The messy scatter of data is reduced to a mean and some measure of variability – the stuff of statistical testing.|
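Two of the conventions in that box – carrying a dropout’s last observation forward, and reducing the group scatter to a standardized effect size – can be sketched in a few lines. Everything below is hypothetical: the rating-scale scores are invented, and Cohen’s d is just one common effect-size measure, used here for illustration only.

```python
# Sketch of two clinical-trial conventions: Last Observation Carried
# Forward (LOCF) for dropouts, and a between-group effect size.
# All scores are made up for illustration.
from statistics import mean, stdev

def locf(visits):
    """Impute missing visits (None) by carrying the last seen score forward."""
    filled, last = [], None
    for v in visits:
        last = v if v is not None else last
        filled.append(last)
    return filled

# Hypothetical depression-scale scores at weeks 0, 2, 4, 6; None = dropout
drug    = [locf(s)[-1] for s in [[24, 18, 12, 10], [26, 20, None, None], [22, 15, 9, 8]]]
placebo = [locf(s)[-1] for s in [[25, 22, 20, 19], [23, 21, None, None], [24, 20, 18, 17]]]

def cohens_d(a, b):
    """Standardized mean difference: (mean_a - mean_b) / pooled SD."""
    na, nb = len(a), len(b)
    pooled = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled

d = cohens_d(placebo, drug)  # positive here: the drug group ended lower [better]
print(round(d, 2))
```

The point isn’t the arithmetic – it’s that every one of these choices [how to impute, which endpoint, which effect measure] is a convention that has to be declared in advance, because each one is also a place where results can be massaged after the fact.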
In medical school and as an Internist thereafter, I didn’t know any of that. Oh, I knew about statistics and control groups because I was in a research track, but I don’t think I ever thought about drug testing. In my mind, drugs were classified by their mechanism of action. And all drugs had adverse effects, so every use of medication involved the risk/benefit equation. Drug Trials just weren’t a part of my world. And in psychiatric training in the 1970s, I thought of the psychiatric drugs the same way. By the time Prozac arrived, I was in practice as a psychotherapy type and isolated from the age of psychopharmacology. I can probably remember every patient treated with medications in my whole time in practice, so I didn’t experience the intervening years of clinical trials first hand. I never thought about drug testing that I can recall.
In my Internist days, I didn’t need clinical trials to know about the effect of drugs. With things like Digitalis, Insulin, Prednisone, etc. you don’t need clinical trials – the evidence is apparent just using the drug. The first time I ever used Ethacrynic Acid [a potent diuretic just released at the time I used it] in a patient in extremis from heart failure with massive fluid retention, the patient lost 64 pounds overnight [I guess we did our own clinical trials]. In psychiatry, the effects of antipsychotics, anxiolytics, or Lithium were obvious – the tricyclic antidepressants less so. I thought of the latter [TCAs] as drugs that sometimes worked in the severe endogenous depressions, but not in the kind of depressed outpatients I saw. So everything in that box up there is something I’ve learned since retiring ten years ago. I didn’t think about evidence based medicine because I thought all medicine was evidence based medicine – what was the alternative?
Since 1962 and the Thalidomide incident, in the US and elsewhere, the Drug Approval Agencies [FDA in the US, EMA in the EU, etc.] have required RCTs as part of the approval process. The story of the events of 1962 and the place of Estes Kefauver [US Senator from the Great Volunteer State of Tennessee, birthplace of me and Davey Crockett] is in the must-read book Pharmageddon by David Healy. Thorazine was approved in 1957, before that date, and in general the approval histories of the drugs of my youth in psychiatry are not available on the FDA site [pre-Internet]. So for psychiatry, the ages of evidence based medicine, randomized clinical trials, and the neoKraepelinian DSM-III revolution are concurrent.
The scheme boxed above, while initiated as a protective maneuver, is actually designed to detect smaller differences – differences that might not even be apparent without the statistical rigor described. One would hardly need more than my one case of terminal congestive heart failure treated with Ethacrynic Acid to know that drug was a powerhouse diuretic – dangerously powerful. The same is true of Digitalis, Insulin, and Prednisone. They don’t work statistically. They just work. The same would be true of Thorazine, which is closer to a sledgehammer than something subtle requiring statistical proof. The issue with all of these drugs is their safety, not their efficacy. So the coming of the clinical trial technology greatly expanded the ability to detect much more subtle levels of efficacy.
It’s interesting that the drug zero that got all of this started was Thalidomide and its propensity to cause defects in limb bud development in embryos, resulting in grossly apparent birth defects. The issue was safety, not efficacy. Thalidomide remains a top-of-the-line anti-emetic that I understand is still in use in patients with protracted vomiting who are guaranteed not to be pregnancy prone [like men or sterilized women]. But the point is that clinical trials introduced a much more finely tuned tool for measuring medication effect.
And as we well know, the coming of clinical trials introduced a black box process that, in spite of all the rules and regulations of the clinical trial technology, offered an almost infinite array of ways that the results could be misrepresented – a domain of epidemic white lies. In the 1950s, there was a popular TV show that I always think of at this point in the story, because the pharmaceutical and new clinical trial industries have paraded CNS drugs down the main street of psychiatry for the last twenty-five years to a cheering audience, assisted by academia.
In spite of numerous well-intentioned attempts to chase down the techniques for distorting trial results, industry paraded ever onward. For one thing, they had a technology to show efficacy in much less potent drugs. For another, despite all the attempts at plugging the loopholes, the process of analysis went on in the dark reaches of a black box – unseen by us mortals on the outside. Enter stage left, a new army of reformers with names like Fiona Godlee, Ben Goldacre, Peter Doshi, Tom Jefferson, the Nordic Cochrane Group, etc. who did two things. They made a science out of showing how data from clinical trials is distorted and how to detect the misbehavior [see Bad Science and Bad Pharma by Goldacre]. Things like forest plots and funnel plots. But the second thing they did was ask the most obvious question in the world. Why is the box black? Why do we have to use all of these fancy new technologies to indirectly get at the misconduct in analyzing data? If we had the raw data itself, we could just do a parallel analysis and we’d see distortion in the first person, on the spot, rather than years later. Their idea short-circuits the primary black box problem – brilliantly simple [even if none are from Tennessee]. I’m for it. You’re for it. Let’s do it yesterday. Sign the AllTrials petition if you haven’t already. It’s an idea that can’t be wrong. Incomplete? Maybe, but not wrong.
But there’s another voice, a voice we’ve become accustomed to hearing and respecting, a voice that was one of the earliest voices of clarity in this whole story – Dr. David Healy [also not from Tennessee]. And as excited as we all are about data transparency, he’s cautioning us that there’s more to the story. It’s the theme of his book, Pharmageddon, and it’s the theme of a series of blog posts he started with Not So Bad Pharma a few days ago and follows up with a second installment today, April Fool in Harlow, with four more posts in this series on the way. Dr. Healy is tenured. He’s earned the right to a careful hearing by being correct at times when nobody would either speak or listen, and by a long train of detailed scholarship and careful research.