its origin…

Posted on Tuesday 15 July 2014


by Catherine W Barber, Matthew Miller, and Deborah Azrael
British Medical Journal 2014 348:g359.

In the spirit of Lu et al’s [1] warning not to sound alarms about antidepressant use prematurely, we used readily available national data to investigate whether youth suicide attempts in the U.S. increased after 2003 and 2004—the years in which the FDA issued warnings about antidepressant safety. Attempts did not increase. Lu et al’s opposite finding probably has more to do with the unusual proxy they used [one they said was validated by a paper that two of us — MM and CB — co-authored] [see a madness to our method – a new introduction…] than with an actual change in suicidal behavior among youth. We briefly summarize here five readily available, online data sources that provide more direct and valid measures of youth suicidal behavior, and we discuss problems with the proxy that Lu’s study used.

The CDC’s Youth Risk Behavior Survey [YRBS] is a pencil-and-paper questionnaire filled out by high school students [3]. There was no increase in self-reported suicide attempts from 2003 to 2005 according to the YRBS [see Figure 1]; in fact, there was a decline in suicidal thoughts, plans, and medically treated attempts from the late ‘90s through 2009 [with some increases in more recent years]. Two databases that estimate national hospital visit rates based on a sample of hospitals also show no increase in youth self-harm following 2004. The first is the Health Care Utilization Project’s [HCUP] online database [4], which shows no increase in inpatient discharges for intentional self-harm diagnoses [E950-E959] among those ages 17 and under. The CDC’s WISQARS-Nonfatal database [5] also shows no increase in emergency department care for self-harm in this age group [although numbers jump around from year to year]. Both HCUP and WISQARS-Nonfatal are estimates based on a national sample of hospitals and thus subject to sampling error. California’s EPIC website, on the other hand, presents a census of inpatient discharges for the entire state [6]. There, too, no increases in self-harm hospitalization rates among children, adolescents, and young adults were observed following the FDA warnings. Finally, and most consequentially, according to official mortality data available on the CDC WISQARS-Fatal website [5], the suicide rate among youth was largely flat from 2000 through 2010, with an increase in 2011.

Lu’s study findings are roundly unsupported by national data. While the national and California data sources have limitations, each is a more direct indicator of intentional self-harm than the data Lu et al used. Lu et al used poisonings by psychotropics [ICD-9 code 969] as a proxy for suicide attempts in claims data from 11 health plans, in spite of the fact that the code covers both intentional and unintentional poisonings. Our paper, which is the sole reference to their claim that code 969 is a “validated” proxy for suicide attempts, in fact shows that in the U.S. National Inpatient Sample the code has a sensitivity of just 40% [i.e., it misses 60% of discharges coded to intentional self-harm] and a positive predictive value of 67% [i.e., a third of the discharges it captures are not intentional self-harm].
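The sensitivity and positive predictive value figures quoted above can be turned into a quick back-of-the-envelope calculation showing just how leaky code 969 is as a proxy. This is a minimal sketch: the count of 1,000 true self-harm discharges is an assumed round number chosen only to make the arithmetic concrete; only the 40% sensitivity and 67% PPV come from the Rapid Response.

```python
# Illustrating the reported validity of ICD-9 code 969 as a proxy for
# suicide attempts: sensitivity 40%, positive predictive value 67%.
# The 1,000 true self-harm discharges below are a hypothetical figure.

true_self_harm = 1000      # discharges actually coded to intentional self-harm
sensitivity = 0.40         # share of true cases that code 969 captures
ppv = 2 / 3                # share of 969-flagged discharges that are truly intentional

true_positives = sensitivity * true_self_harm     # cases the proxy captures
flagged_total = true_positives / ppv              # all discharges carrying code 969
false_positives = flagged_total - true_positives  # unintentional poisonings swept in

print(f"captured: {true_positives:.0f}, missed: {true_self_harm - true_positives:.0f}")
print(f"flagged:  {flagged_total:.0f}, of which not self-harm: {false_positives:.0f}")
```

Under these assumed numbers, the proxy misses 600 of 1,000 real self-harm discharges while sweeping in 200 unintentional poisonings, so the 600 discharges it flags are a quite different population from the one it claims to measure.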

On balance, the evidence shows no increase in suicidal behavior among young people following the drop in antidepressant prescribing. It is important that we get this right because the safety of young people is at stake. Lu et al’s paper sounding the alarm that attempts increased was extensively covered in the media. Their advice that the media should be more circumspect when covering dire warnings about antidepressant prescribing applies as well to their own paper.
I know I tend to perseverate on things I find outrageous…

  1. a madness to our method…
  2. are you listening?…
  3. another campaign?…
  4. read me him…
  5. a madness to our method – a new introduction…
  6. return to a madness in our method
  7. all databases are not created equal…
… and that Lu et al article is obviously the one on my front burner right now. But the Rapid Response article above stands out. It’s from the Harvard School of Public Health and was written by the authors of the study [Patrick et al] from which Lu et al claimed to have gotten their "validated" proxy – poisoning by psychotropic agents. They call foul like the rest of us, but it carries more weight because they’re the ones who got "fouled."
    Our paper, which is the sole reference to their claim that code 969 is a “validated” proxy for suicide attempts, in fact shows that in the U.S. National Inpatient Sample the code has a sensitivity of just 40% [i.e., it misses 60% of discharges coded to intentional self-harm] and a positive predictive value of 67% [i.e., a third of the discharges it captures are not intentional self-harm]. 
The problem is that the damage has already been done with the extensive and immediate press blasts about this article [see another campaign?…]. We’ve seen that with many of the previous "anti-Black-Box" articles [see are you listening?…]. This one was in all the syndicated news services. I found this one particularly disturbing – a tweet straight from the new APA President, Paul Summergrad.
I doubt that my fervor with blog posts or even all the very well framed BMJ Rapid Response replies will hold a candle to the impact of the immediate press alerts posted all over the country or the tweet to the APA President’s followers. One can’t help but wonder how these things get coordinated. The article was "Published 18 June 2014," and the press blitz was on the same day. It was even on WebMD the day it was published in the BMJ. The same thing happened with other "anti-Black-Box" articles – notably the ones by Dr. Robert Gibbons [see smell a campaign…]. His media blitz was in high gear even before the second article that gave the source of his data was published [and Gibbons was commenting on this current article on, you-guessed-it, June 18th]. There’s obviously a "pipeline" for getting the word out. It would make a good project for some young investigative reporter to flesh out this pipeline and locate its origin…
  1.  
    Tom
    July 15, 2014 | 8:54 PM
     

    A TWEET correction is needed by Summergrad; I will be glad but I suspect I will remain mad as he has obviously been had by the bad so I will end up sad and he will care not a tad as he succumbs to being a fad.

  2.  
    Bernard Carroll
    July 16, 2014 | 5:25 PM
     

The BMJ has some hard thinking to do here. A substandard article with large policy implications slipped through their review and editing process and it was trumpeted in the world media. The Rapid Responses pointed up the weak tradecraft of the Lu report, and the coup de grace was delivered by this Rapid Response comment from Barber, Miller and Azrael.

    The calculus for the BMJ is to decide whether the article should be retracted or whether on-line publication of the critical Rapid Responses is a sufficient disavowal of the Lu report. Certainly, a retraction would shine a stronger public searchlight on the compromised validity of the Lu report than just the Rapid Responses can do.

    In a way, the issue is like that of declaring conflicts of interest. Simply declaring a compromise through stating competing interests does not remove the compromise. Likewise, simply publishing critical responses does not remove the compromise from the journal or from the original authors.

  3.  
    July 16, 2014 | 6:35 PM
     

    Tom,
    A+, for all the fuss!

    Bernard,
    Thanks! Directly on point.

    This flawed article implies that the FDA policy had inadvertent consequences and should be changed or ignored. It attacks the media impact of the FDA decision by using the article’s publication for its own media campaign. So by publishing this poorly vetted article, the BMJ has an ethical responsibility for its impact.

Our journals are bound by the same standards as medicine in general – a position well championed by BMJ editor Fiona Godlee. This is a place for the BMJ to make a definitive statement about using the publication of an article for something other than scientific discourse – no different from the many jury-rigged clinical trials published for marketing purposes.

  4.  
    Joseph Arpaia, MD
    July 16, 2014 | 10:01 PM
     

    It might be wise to send a letter of support to the FDA for their decision and to copy your Federal representatives.
