First off, thanks to Suzanna for sending along a tutorial about QUASI-EXPERIMENTAL RESEARCH DESIGNS [I must’ve cut that lecture in psychoanalytic training]. Likewise, thanks to Dr. Carroll for looking over Dr. Lu’s article [how does he see all that stuff?]. The tips are a big help in reading the article without my eyes crossing. One thing that Dr. Carroll noted was something I also wondered about along the way – a blip in adolescent males that didn’t pan out [see watchful waiting…], though it was earlier than Lu’s two-year marker:
Early evidence on the effects of regulators’ suicidality warnings on SSRI prescriptions and suicide in children and adolescents. [see peaks and valleys…, watchful waiting…]
by Gibbons RD, Brown CH, Hur K, Marcus SM, Bhaumik DK, Erkens JA, Herings RM, Mann JJ.
American Journal of Psychiatry. 2007 Sep;164:1356-63.
But I’m done with the science of this article and defer to people with more wisdom and talent on the methods of analysis. I want to talk about something else related – Big Data [see two points…]. It’s all the rage now and deserves our attention. Essentially, it comes from places like Google, Amazon, and the NSA. It’s a way of thinking about the huge databases being amassed everywhere. The book of that name is about how people are learning to query and correlate these huge datasets without worrying so much about missing data and unequal sample sizes. We don’t think about it, but a lot of our science is based on looking at a small sample and then extrapolating to the general population – so we’re super precise and use our statistical tests and our rational brains. Big Data people don’t worry about that so much. They have huge data [like approximating all of it]. And they don’t care if the correlations don’t make sense [as in the most accurate index for following the spread of flu being an algorithm that queries Google searches]. It’s the fact that they correlate that matters. And this study [Changes in antidepressant use by young people and suicidal behavior after FDA warnings and media coverage: quasi-experimental study] is Big Data. They have an impressive dataset:
Study cohorts included adolescents [around 1.1 million], young adults [around 1.4 million], and adults [around 5 million].
And they use an analytic technique that I’ve never encountered [Big Data people do that]. I’m awed that Amazon can make recommendations for me that are big surprises but dead on the money [about a week after a friend sent me the Big Data book, Amazon popped it up based on my former buying habits!]. And I’m not upset that some computer somewhere is crunching big numbers to make adjustments in my retirement portfolio. But in spite of all that, I’m pretty sure I don’t want some Big Data program making treatment recommendations on my family. I’ve seen suicidality, including death, in adolescents on SSRIs with my own eyes. And I’m underwhelmed that they are even clinically effective in adolescent depression, with Clinical Trials that look like this:
If I decide that an SSRI might be useful in an adolescent [OCD, GAD, maybe big depression], I want to find out a lot of stuff. Will the kid come back and let me look for myself? How stable is the family? Are the parents well informed and looking? Do I have an alliance with these people? Or is the kid an impulsive latchkey kid with an alcoholic single parent who collects guns? [a real example]. This Big Data study says nothing to me about that kind of decision either way. But if I worked for Managed Care as a population statistician and I saw all that data [and I got raises or kudos based on any cost-cutting], I might find myself really impressed with these small differences.
The Clinical Trial era has already confronted us and our patients with the problem of small differences in medication effect and reams of treatment guidelines that may or may not be meaningful. We’d best anticipate that studies like this one are just harbingers of what’s going to happen when these vast datasets gathered by electronic medical records, HMOs, prescriptions, pharmacies, scanners, screenings, etc. multiply exponentially in the coming years. I expect some good things will emerge, but the possibilities for confusion, error, and deceit will likely grow in parallel. In psychiatry, we already have ongoing studies like iSpot and EMBARC that hope to pick your antidepressant based on some collection of other measurable parameters [don’t hold your breath]. Add to that the dream of adjusting your medication dose based on your weekly iPhone CAT-D App via a brief chat on the cloud with your tele-psychiatrist. Maybe your Google searches can get in the mix as well [I wish I were exaggerating].
The approach in this article moves us away from case-centered medicine to a much more population-centered focus, away from pathophysiological understanding to empiricism, and leaves both doctor and patient prey to misinformation that cannot be tested by either. In these posts, I’ve only listed a few of the many articles that purport to tell us about the Black Box Warning. It’s a study in case reports versus epidemiological data – and I expect it will keep coming. Meanwhile, this article is on the opening page of the Harvard Medical School website, it’s a Psychiatric News Alert, and it’s tweeted by the new APA President. Feels almost like another campaign [see smell a campaign…, the campaign…]. I guess I can’t complain. It’s on this blog three times too…
UPDATE: OK. You win. It’s a campaign…
New England Public Radio
by Rob Stein
June 18, 2014
… “I think there were a lot of mistakes made in terms of how this risk was communicated to the public, which led a lot of parents to be terrified to have their children on these medications — and they took them off and there was a lot of untreated, serious depression,” says Robert Gibbons, a University of Chicago biostatistician who advised the FDA on the issue…