by Catherine W Barber, Matthew Miller, and Deborah Azrael
British Medical Journal 2014;348:g359

In the spirit of Lu et al's [1] warning not to sound alarms about antidepressant use prematurely, we used readily available national data to investigate whether youth suicide attempts in the U.S. increased after 2003 and 2004, the years in which the FDA issued warnings about antidepressant safety. Attempts did not increase. Lu et al's opposite finding probably has more to do with the unusual proxy they used [one they said was validated by a paper that two of us, MM and CB, co-authored] [see a madness to our method – a new introduction…] than with an actual change in suicidal behavior among youth. We briefly summarize here five readily available online data sources that provide more direct and valid measures of youth suicidal behavior, and we discuss problems with the proxy that Lu's study used.
The CDC’s Youth Risk Behavior Survey [YRBS] is a pencil-and-paper questionnaire filled out by high school students [3]. There was no increase in self-reported suicide attempts from 2003 to 2005 according to the YRBS [see Figure 1]; in fact, there was a decline in suicidal thoughts, plans, and medically treated attempts from the late ‘90s through 2009 [with some increases in more recent years]. Two databases that estimate national hospital visit rates based on a sample of hospitals also saw no increase in youth self-harm following 2004. The first is the Healthcare Cost and Utilization Project’s [HCUP] online database [4], which shows no increase in inpatient discharges for intentional self-harm diagnoses [E950-E959] among those ages 17 and under. The CDC’s WISQARS-Nonfatal database [5] also shows no increase in emergency department care for self-harm in this age group [although numbers jump around from year to year]. Both HCUP and WISQARS-Nonfatal are estimates based on a national sample of hospitals and thus subject to sampling error. California’s EPIC website, on the other hand, presents a census of inpatient discharges for the entire state [6]. There, too, no increases in self-harm hospitalization rates among children, adolescents, and young adults were observed following the FDA warnings. Finally, and most consequentially, according to official mortality data available on the CDC WISQARS-Fatal website [5], the suicide rate among youth was largely flat from 2000 to 2010, with an increase in 2011.
Lu’s study findings are roundly unsupported by national data. While the national and California data sources have limitations, each is a more direct indicator of intentional self-harm than the data Lu et al used. Lu et al used poisonings by psychotropics [ICD-9 code 969] as a proxy for suicide attempts in claims data from 11 health plans, even though that code covers both intentional and unintentional poisonings. Our paper, which is the sole reference for their claim that code 969 is a “validated” proxy for suicide attempts, in fact shows that in the U.S. National Inpatient Sample the code has a sensitivity of just 40% [i.e., it misses 60% of discharges coded to intentional self-harm] and a positive predictive value of 67% [i.e., a third of the discharges it captures are not intentional self-harm].
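The sensitivity and positive predictive value figures can be made concrete with a small worked example. The discharge counts below are hypothetical, chosen only so the arithmetic reproduces the reported 40% sensitivity and 67% PPV for ICD-9 code 969 as a proxy for intentional self-harm:

```python
# Hypothetical 2x2 counts (NOT real National Inpatient Sample numbers),
# picked to match the reported sensitivity (40%) and PPV (67%).

true_self_harm = 1000   # discharges truly coded to intentional self-harm [E950-E959]
tp = 400                # of those, discharges that also carry code 969 (true positives)
flagged_969 = 600       # all discharges carrying code 969
fp = flagged_969 - tp   # code-969 discharges that are NOT intentional self-harm

sensitivity = tp / true_self_harm  # share of true self-harm the proxy catches
ppv = tp / flagged_969             # share of proxy hits that are true self-harm

print(f"sensitivity = {sensitivity:.0%}")  # 40%: the proxy misses 60% of self-harm
print(f"PPV = {ppv:.0%}")                  # 67%: a third of proxy hits are false
```

The two errors cut in opposite directions, which is why a low-sensitivity, modest-PPV proxy can move over time for reasons [e.g., shifts in accidental psychotropic poisonings] that have nothing to do with suicidal behavior.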
On balance, the evidence shows no increase in suicidal behavior among young people following the drop in antidepressant prescribing. It is important that we get this right because the safety of young people is at stake. Lu et al’s paper sounding the alarm that attempts increased was extensively covered in the media. Their advice that the media should be more circumspect when covering dire warnings about antidepressant prescribing applies as well to their own paper.
- a madness to our method…
- are you listening?…
- another campaign?…
- read me…
- him…
- a madness to our method – a new introduction…
- return to a madness in our method…
- all databases are not created equal…
A TWEET correction is needed by Summergrad; I will be glad but I suspect I will remain mad as he has obviously been had by the bad so I will end up sad and he will care not a tad as he succumbs to being a fad.
The BMJ has some hard thinking to do here. A substandard article with large policy implications slipped through their review and editing process and it was trumpeted in the world media. The Rapid Responses pointed up the weak tradecraft of the Lu report, and the coup de grace was delivered by this Rapid Response comment from Barber, Miller and Azrael.
The calculus for the BMJ is to decide whether the article should be retracted or whether online publication of the critical Rapid Responses is a sufficient disavowal of the Lu report. Certainly, a retraction would shine a stronger public searchlight on the compromised validity of the Lu report than the Rapid Responses alone can do.
In a way, the issue is like that of declaring conflicts of interest. Simply declaring a compromise through stating competing interests does not remove the compromise. Likewise, simply publishing critical responses does not remove the compromise from the journal or from the original authors.
Tom,
A+, for all the fuss!
Bernard,
Thanks! Directly on point.
This flawed article implies that the FDA policy had inadvertent consequences and should be changed or ignored. It attacks the media impact of the FDA decision by using the article’s publication for its own media campaign. So by publishing this poorly vetted article, the BMJ has an ethical responsibility for its impact.
Our journals are bound by the same standards as medicine in general – a position well-championed by BMJ editor Fiona Godlee. This is a place for the BMJ to make a definitive statement about using the publication of an article for something other than scientific discourse – no different from the many jury-rigged clinical trials published for marketing purposes.
It might be wise to send a letter of support to the FDA for their decision and to copy your federal representatives.