under fire I…

Posted on Monday 13 April 2015

"The development of psychopharmacologic agents over the past 6 decades has been characterized by a paradoxical relationship between medication discovery and clinical trial methodology. The methodology during the most productive decade, 1949–1958, was primitive. Since then, there have been tremendous advances in clinical trial design, assessment, and statistical analyses. Yet, despite numerous innovations in methodology, the discovery of new mechanisms of action and blockbuster interventions seems to have slowed—especially during the past decade. In an effort to understand this phenomenon, the evolution of trial design and analysis during the lifespan of psychopharmacology is examined here…"

So goes the introduction to the article below, introducing a topic that has dominated the field of psychiatry since the 1950s: the coming of psychopharmacology. In those early days, drug testing was little different from routine office visits to the family doctor. Now, it’s a complex scientific process with protocols and procedures. It’s also the focus of much attention and many questions. This article is a review of how clinical trials evolved, and it may help us understand something about what happened along the way – how did we get from that era of hope to today, when the chairman of a major department of psychiatry is resigning in disgrace over something that happened in a clinical drug trial eleven years ago [Under fire, Schulz stepping down]?
by Andrew C. Leon, PhD
Journal of Clinical Psychiatry 2011 72[3]:331–340.

Objective: The evolution of trial design and analysis during the lifespan of psychopharmacology is examined.
Background: The clinical trial methodology used to evaluate psychopharmacologic agents has evolved considerably over the past 6 decades. The first and most productive decade was characterized by case series, each with a small number of patients. These trials used nonstandardized clinical observation as outcomes and seldom had a comparison group. The crossover design became widely used to examine acute psychiatric treatments in the 1950s and 1960s. Although this strategy provided comparison data, it introduced problems in study implementation and interpretation. In 1962, the US Food and Drug Administration began to require “substantial evidence of effectiveness from adequate and well-controlled studies.” Subsequent decades saw remarkable advances in clinical trial design, assessment, and statistical analyses. Standardized instruments were developed and parallel groups, double-blinding, and placebo controls became the benchmark. Sample sizes increased and data analytic procedures were developed that could accommodate the problems of attrition. Randomized withdrawal designs were introduced in the 1970s to examine maintenance therapies. Ethical principles for research became codified in the United States at that time. A wave of regulatory approvals of novel antipsychotics, antidepressants, and anticonvulsants came in the 1980s and 1990s, each based on data from randomized double-blind, parallel-group, placebo-controlled clinical trials. These trial designs often involved fixed-dose comparisons based, in part, on a greater appreciation that much of the benefit and harm in psychopharmacology was dose related.
Conclusions: Despite the progress in randomized controlled trial [RCT] design, the discovery of new mechanisms of action and blockbuster interventions has slowed during the past decade.
Most of us likely know this first part. In the golden age of psychopharmacology, a clinical trial was simple. Gather a few patients and give them the medication you want to test – then interview them about what happened. No control group. No randomization. No blinding. No rating scales. No statistics. As I said, it was very much like the natural world of a medical practice – prescribe, then ask how it went:
The initial trials in psychopharmacology involved case series, each with a small number of patients. Cade reported the antimanic properties of lithium based on a series of 10 cases in Australia in 1949. In 1952, the initial psychiatric study of chlorpromazine, which was previously used for nausea in surgical patients, involved 20 patients with psychosis and reported symptomatic improvement. Chlorpromazine was approved by the FDA in 1954 for psychosis. Imipramine has a molecular structure similar to that of chlorpromazine and for that reason was initially tested as an antipsychotic in 1957 with several hundred cases. Although that effort did not demonstrate effectiveness for psychosis, observation of about 12 of the cases with depression revealed the antidepressant property of imipramine. Iproniazid, a monoamine oxidase inhibitor (MAOI), was used for tuberculosis and clinical observation on the tuberculosis wards reported that patients expressed joy and optimism, despite their prognosis. In 1957, a case series of patients with depression showed beneficial effects of iproniazid. The decade from 1949 to 1958 is unparalleled in the history of psychopharmacology, with the discovery of the first mood stabilizer, the first antipsychotic, and 2 antidepressants, a tricyclic and an MAOI. Yet, none of these case series involved a control.
First, with the acknowledgement of the placebo effect, several things were added to the Clinical Trial routine – [placebo] controlled, [double] blinded:
In 1955, Beecher described placebo response rates across a wide range of indications including anesthesia for surgery, highlighting the need for trials to include a comparator. He stated, “Many a drug has been extolled on the basis of clinical impression when the only power it had was that of a placebo.”
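To see what “[placebo] controlled, [double] blinded” means in mechanical terms, here is a minimal sketch (mine, not the article’s) of the allocation step: a blocked randomization list for a two-arm trial, with treatment identities hidden behind kit codes so that neither raters nor patients know who gets what. The function and code names are illustrative, not from any real protocol.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def blocked_allocation(n_subjects, block_size=4):
    """Return a 1:1 drug/placebo allocation in balanced blocks.

    Block randomization keeps the two arms roughly equal in size
    even if the trial stops early.
    """
    assert block_size % 2 == 0
    arms = []
    while len(arms) < n_subjects:
        block = ["drug"] * (block_size // 2) + ["placebo"] * (block_size // 2)
        random.shuffle(block)
        arms.extend(block)
    return arms[:n_subjects]

allocation = blocked_allocation(20)

# Blinded kit labels are all anyone at the site sees; the key mapping
# codes to treatments stays sealed with an unblinded third party.
blinded_labels = [f"KIT-{i:03d}" for i in range(1, len(allocation) + 1)]

print(allocation.count("drug"), allocation.count("placebo"))  # → 10 10
```

The point of the exercise is that once assignment is random and concealed, a difference between arms can no longer be explained by the clinician’s impressions – which is exactly the failure mode Beecher was describing.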
Then gradually, the criteria for inclusion became more precise [the DSM-III] and drug studies and approvals were more tightly tied to diagnosis:
The inclusion criteria in the early studies were often rather broad, perhaps in part because the diagnostic nosologies of the era, DSM-I [1952] and DSM-II [1968], were narrative based. It was not until the Feighner criteria in 1972, the Research Diagnostic Criteria in 1978, and DSM-III in 1980 that nosology became criterion based.
Obviously, the next step was to be the rating scales, and with their increased precision came the statistics:
Standards for the study design and analysis continued to evolve. Max Hamilton, MD, a psychiatrist and namesake of a rating scale for depression, published a text that comprised 12 of his lectures covering a range of areas in clinical research design and analysis including stages of experimentation, design of experiments, measurement of variability, tests of statistical significance, t test, χ2, ANOVA, correlation, selecting cases and treatment, and problems in design and analysis.
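Of the techniques Hamilton lectured on, the t test became the workhorse of the parallel-group comparison: did the drug arm improve more than the placebo arm? A minimal sketch of that analysis, using simulated rating-scale change scores (the numbers here are invented for illustration, not trial data):

```python
import random
import statistics

random.seed(0)

# Simulated change scores on a depression rating scale (negative = improvement).
# Effect size and variability are arbitrary choices for this illustration.
drug    = [random.gauss(-10, 6) for _ in range(50)]
placebo = [random.gauss(-6, 6) for _ in range(50)]

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances allowed)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    standard_error = (var_a / len(a) + var_b / len(b)) ** 0.5
    return (mean_a - mean_b) / standard_error

t = welch_t(drug, placebo)
print(f"Welch t = {t:.2f}")
```

Everything downstream of this – ANOVA, mixed models, the machinery for handling attrition – is elaboration on the same basic question of whether the between-group difference exceeds what chance variation would produce.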
Before we get to all these bells and whistles, we should back up. The 1950s were an exciting decade for psychopharmacology, and in 1959, Jonathan Cole was appointed head of the new Psychopharmacologic Research Branch [PRB] at the NIMH, where many of the rating instruments and Clinical Trial procedures were subsequently developed and tested. By 1967, the Early Clinical Drug Evaluation Unit [ECDEU] offered a centralized service for Clinical Trials to NIMH grantholders. This is from a 1976 ECDEU Manual:
As originally conceived, the ECDEU program consisted primarily of grant-supported clinical investigators working in the common area of psychotropic drug evaluation [both new and established compounds]. One of the problems they encountered, and a task they accomplished, was the development of a uniform battery of clinical assessment instruments known as the ECDEU Standard Reporting System, first introduced for utilization in 1967. The rationale behind this effort was twofold. First, it was felt that such a system would both enhance the quality of early clinical drug research and allow greater generalizability of results across studies and investigating units. Second, data collected on common forms could be stored in a data bank for future study and research. Since the implementation of this Standard Reporting System and the Biometric Laboratory Information Processing System [BLIPS], the ECDEU program has evolved into more than an extramural grant support program for psychotropic drug research teams. In collaboration with The George Washington University Biometric Laboratory, the ECDEU Standard Reporting System has been made available to any investigator interested in conducting clinical trials, whether federally grant supported or not. To utilize these services, the investigator is requested to:
    1. Submit a Research Plan Report and agree to send the study data to the Biometric Laboratory.
    2. Collect sufficient information about the subjects in his study so that the data can be entered into the ECDEU data bank. This means, essentially, that a core of data must be collected for each patient…
In return, he receives a sufficient number of assessment scales to conduct his research. Once the trial is completed, the forms are returned to the Biometric Laboratory for processing and data analyses, the results of which are sent to the investigator in the form of a standard data package. The rating scales and data processing services are provided at no charge – our sole "remuneration" being the opportunity to add the investigator’s data to the data bank. It should be stressed that an investigator’s data and/or results are never published or disseminated to others without his permission.
I find it telling that the NIMH program [ECDEU] had been set up to have a central data registry and to have the NIMH actually doing the data analyses. It sounds like data-sharing and data transparency were part of the original plan! And all of this was going on at the NIMH in the 1970s, at the same time that Robert Spitzer was also working with the Feighner Criteria, developing the Research Diagnostic Criteria [RDC] on an NIMH grant, and aiming towards producing the 1980 DSM-III. We all know that there were some major changes in the 1980s and 1990s. Instead of the NIMH, the trials were being done by pharmaceutical companies themselves using the Clinical Research Industry to manage them. The companies analyzed their own data [which they held onto as if it were a carefully guarded state secret]. I haven’t found any resources that talk about how that happened, but we all know by now that the Clinical Trials of CNS drugs became a playground for a lot of dodgy science in the process. And to return to the original point, after the flurry of discovery in the 1950s, there have been precious few advances in the area of CNS drug development in spite of all the changes. And what does all of this have to do with the eleven-year-old Dan Markingson case? Or Dr. Charles Schulz’s resignation as Chairman of the Psychiatry Department at the University of Minnesota? Or the down-side of evidence based medicine? Stay tuned…
    April 13, 2015 | 11:25 PM

    I will be interested in what connections you make. I think it is pretty obvious that clinical trials technology, especially in psychiatry, has been static for the past 50 years, and layers of new statistical analysis have really added nothing. There will be no further advances in pharmacology without adequate markers and measures that are meaningful and the ability to correct the more obvious problems in designs.
