Field Trial Results Guide DSM Recommendations
by David J. Kupfer, M.D.
Written with Helena C. Kraemer, Ph.D.
… some DSM-5 detractors have spotlighted the six as indicative of flaws in the field trials, especially because this group included major depressive disorder and generalized anxiety disorder, two of the most commonly diagnosed conditions. The opposite is closer to the truth. Rather than discrediting the field trials, the outcome here reveals the critical value of how the trials were constructed and conducted and how we are moving forward. Ironically, both major depressive disorder and generalized anxiety disorder were tested not because they were being modified for the next manual, but because they were remaining relatively unchanged and could serve as reference disorders from the DSM-IV trials. But as part of that process two decades ago, patients were carefully screened, and participating clinicians received special training and explicit direction on how to perform evaluations. In contrast, the DSM-5 field trials accepted patients as they came and asked clinicians to work as they usually did – to mirror the circumstances in which most diagnosing takes place.
We believe the DSM-5 results represent the truer picture of the difficulty clinicians may have in reliably diagnosing both conditions, either because they often occur with other conditions or because they are accompanied by symptoms that can fluctuate greatly. Regardless of why, we acknowledge that the relatively low reliability of major depressive disorder and generalized anxiety disorder is a concern for clinical decision-making. Strategies need to be developed to address the problem as the manual evolves into a living document that incorporates revisions and additions as research and clinical practices advance. The good news is that we’re now inherently better prepared for this challenge; the DSM-5 field trials have laid the groundwork for how such strategies and future changes should be judged…
One of the joys of being retired is that I can do what I want to do, no matter how long it takes. So now I read articles in a different way. After the Abstract, I read the Acknowledgements, the Conflict of interest declarations, and the Authors’ affiliations. Then I read the Methods. Finally I read the article through. Sometimes I forget and have to go through it again in the right way. I would’ve thought that I was a person who would be good at vetting articles. I’m a math major with an almost Masters in Statistics; was an NIH Research Fellow in Internal Medicine; and have been a computer programmer and closet geek since the first chip hit the market. But that’s not the case. I’ve had to have a lot of help with the methodology from some very patient people [Thanks!] and still get crossed eyes when people use "statistical packages" to generate their stats. They didn’t teach Clinical Trial design in my day and I still struggle with a lot of it.
There’s one thing I have gained poring over these bizarre Clinical Trials that have become the conduit for so much corrupted science. I’m beginning to have a usable bull-shit meter. I’ve been helped along with that because there’s so much of it, so that my gullibility rating is also falling. I am no longer wowed by Harvard, Stanford, or Brown [I think Biederman, Schatzberg, or Keller]. The spin-meisters have found all the trusted markers [institution, publication record, journal reputation, authors' degrees, etc] and exploited them to the hilt. So when I read Kupfer’s and Kraemer’s article about the DSM-5 Field Trials, my meter sounded its alarm and I went looking for Pinocchio and Jiminy Cricket graphics – a standard symbol for the evils of prevarication. Even though I’ve never had a love affair with any DSM, or for that matter the MDD diagnosis, I’ve come to believe that Reliability [Kappa] means something. As inadequate as it is, it’s our inadequate, and that rationalization up top just won’t do. If you’re going to discount it whenever it’s not what you expected or wanted – why even measure it?
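The kappa statistic at issue here is, at heart, a simple idea: observed agreement between two raters, corrected for the agreement they would reach by chance alone. The field trials actually used an intraclass kappa adapted for stratified samples, so what follows is only an illustrative sketch of the plain two-rater Cohen's kappa, with made-up diagnostic calls:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters beyond chance."""
    n = len(rater_a)
    # observed agreement: fraction of cases where both raters concur
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement expected from each rater's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# hypothetical: two clinicians rate 10 patients (1 = MDD present, 0 = absent)
a = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(round(cohens_kappa(a, b), 2))  # → 0.58
```

Note that the two raters agree on 8 of 10 cases (80%), yet kappa is only 0.58 – the correction for chance is exactly what makes kappa a tougher, more honest yardstick than raw percent agreement.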
DSM-5 Field Trials in the United States and Canada, Part I: Study Design, Sampling Strategy, Implementation, and Analytic Approaches
by Diana E. Clarke, William E. Narrow, Darrel A. Regier, S. Janet Kuramoto, David J. Kupfer, Emily A. Kuhl, Lisa Greiner, and Helena C. Kraemer
American Journal of Psychiatry. 2012 October 30, AJP in Advance
Objective: This article discusses the design, sampling strategy, implementation, and data analytic processes of the DSM-5 Field Trials.

Method: The DSM-5 Field Trials were conducted by using a test-retest reliability design with a stratified sampling approach across six adult and four pediatric sites in the United States and one adult site in Canada. A stratified random sampling approach was used to enhance precision in the estimation of the reliability coefficients. A web-based research electronic data capture system was used for simultaneous data collection from patients and clinicians across sites and for centralized data management. Weighted descriptive analyses, intraclass kappa and intraclass correlation coefficients for stratified samples, and receiver operating curves were computed. The DSM-5 Field Trials capitalized on advances since DSM-III and DSM-IV in statistical measures of reliability [i.e., intraclass kappa for stratified samples] and other recently developed measures to determine confidence intervals around kappa estimates.

Results: Diagnostic interviews using DSM-5 criteria were conducted by 279 clinicians of varied disciplines who received training comparable to what would be available to any clinician after publication of DSM-5. Overall, 2,246 patients with various diagnoses and levels of comorbidity were enrolled, of whom over 86% were seen for two diagnostic interviews. A range of reliability coefficients were observed for the categorical diagnoses and dimensional measures.

Conclusions: Multisite field trials and training comparable to what would be available to any clinician after publication of DSM-5 provided “real-world” testing of DSM-5 proposed diagnoses.
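The abstract mentions "recently developed measures to determine confidence intervals around kappa estimates" without naming them. One generic way to put an interval around a test-retest kappa – not necessarily the method the authors used – is a percentile bootstrap over patients. A sketch under that assumption, with invented test-retest data:

```python
import random
from collections import Counter

def cohens_kappa(a, b):
    # two-rater kappa: (observed - chance agreement) / (1 - chance agreement)
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    fa, fb = Counter(a), Counter(b)
    p_e = sum((fa[c] / n) * (fb[c] / n) for c in set(a) | set(b))
    if p_e == 1.0:  # degenerate resample: only one category drawn
        return 1.0
    return (p_o - p_e) / (1 - p_e)

def bootstrap_ci(a, b, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for kappa: resample patients with replacement."""
    rng = random.Random(seed)
    n = len(a)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(cohens_kappa([a[i] for i in idx], [b[i] for i in idx]))
    stats.sort()
    return stats[int(n_boot * alpha / 2)], stats[int(n_boot * (1 - alpha / 2)) - 1]

# hypothetical test-retest diagnoses for 20 patients (1 = disorder present)
first  = [1,0,1,1,0,0,1,0,1,1,0,1,0,0,1,1,0,1,0,1]
second = [1,0,0,1,0,1,1,0,1,1,0,1,0,0,0,1,0,1,1,1]
lo, hi = bootstrap_ci(first, second)
print(f"kappa = {cohens_kappa(first, second):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Even in this toy example, twenty patients yield a wide interval around the point estimate – a reminder of why the stratified sampling described above matters for pinning down reliability coefficients with any precision.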