Posted on Wednesday 10 February 2016

U.S. Preventive Services Task Force Recommendation Statement
by Albert L. Siu, MD, MSPH, on behalf of the U.S. Preventive Services Task Force
Annals of Internal Medicine. Published online 9 February 2016

Description: Update of the 2009 U.S. Preventive Services Task Force [USPSTF] recommendation on screening for major depressive disorder [MDD] in children and adolescents.
Methods: The USPSTF reviewed the evidence on the benefits and harms of screening; the accuracy of primary care–feasible screening tests; and the benefits and harms of treatment with psychotherapy, medications, and collaborative care models in patients aged 7 to 18 years.
Population: This recommendation applies to children and adolescents aged 18 years or younger who do not have a diagnosis of MDD.
Recommendation: The USPSTF recommends screening for MDD in adolescents aged 12 to 18 years. Screening should be implemented with adequate systems in place to ensure accurate diagnosis, effective treatment, and appropriate follow-up. [B recommendation] The USPSTF concludes that the current evidence is insufficient to assess the balance of benefits and harms of screening for MDD in children aged 11 years or younger.
A Systematic Review for the U.S. Preventive Services Task Force
by Valerie Forman-Hoffman, PhD, MPH; Emily McClure, MSPH; Joni McKeeman, PhD; Charles T. Wood, MD; Jennifer Cook Middleton, PhD; Asheley C. Skinner, PhD; Eliana M. Perrin, MD, MPH; and Meera Viswanathan, PhD
Annals of Internal Medicine. Published online 9 February 2016

Background: Major depressive disorder [MDD] is common among children and adolescents and is associated with functional impairment and suicide.
Purpose: To update the 2009 U.S. Preventive Services Task Force [USPSTF] systematic review on screening for and treatment of MDD in children and adolescents in primary care settings.
Data Sources: Several electronic searches [May 2007 to February 2015] and searches of reference lists of published literature.
Study Selection: Trials and recent systematic reviews of treatment, test–retest studies of screening, and trials and large cohort studies for harms.
Data Extraction: Data were abstracted by 1 investigator and checked by another; 2 investigators independently assessed study quality.
Data Synthesis: Limited evidence from 5 studies showed that such tools as the Beck Depression Inventory and Patient Health Questionnaire for Adolescents had reasonable accuracy for identifying MDD among adolescents in primary care settings. Six trials evaluated treatment. Several individual fair- and good-quality studies of fluoxetine, combined fluoxetine and cognitive behavioral therapy, escitalopram, and collaborative care demonstrated benefits of treatment among adolescents, with no associated harms.
Limitation: The review included only English-language studies, narrow inclusion criteria focused only on MDD, high thresholds for quality, potential publication bias, limited data on harms, and sparse evidence on long-term outcomes of screening and treatment among children younger than 12 years.
Conclusion: No evidence was found of a direct link between screening children and adolescents for MDD in primary care or similar settings and depression or other health-related outcomes. Evidence showed that some screening tools are accurate and some treatments are beneficial among adolescents [but not younger children], with no evidence of associated harms.
There was a time in the not so distant past when childhood and adolescence were simply seen as the period before adulthood – a time for physical development, learning, and training. But later views had a different take on things – childhood as a sequential unfolding of multiple inter-related developmental lines that condense in adolescence with the formation of a core identity and personality that more or less persist throughout the remaining life cycle. Childhood was no longer seen as a waiting period for little men and little women to grow up. It was Child Development, and the nuances were of central importance to educators, parents, law-makers, and mental health specialists. Many of the disturbances of childhood and adolescence were seen as developmental deviations, and the efforts of people involved were directed towards moving the patient back into the mainstream of development.

I came to psychiatry drawn by the psychological struggles apparent in so many of the medical patients I’d seen along the way. I just didn’t get it. And learning to view them in the context of the person’s whole biography was, in itself, worth the price of admission for me. When the DSM-III with its expanded focus on distinct disorders came along, I was neither interested nor able to go back, so I went away. I understood and even agreed with many of the criticisms of what had been before, but I couldn’t live with the baby in the bathwater problem. I see non-melancholic depression as a signal that something’s wrong that needs attending rather than just a symptom to be treated. And that’s particularly true in adolescence and young adulthood. I really don’t think there’s a unitary disease, Major Depressive Disorder [MDD], even in adulthood, but I double-dog-really don’t think it exists in children and adolescents – certainly not in the way these articles imply. Five years volunteering in a Child and Adolescent clinic after retirement only reinforced these views.

So as important as I think it is to attend to depression in adolescents, this recommendation and the accompanying review just seem way off the mark. And to present this slide as an updated "Systematic Review" is ludicrous:

[truncated to fit]

While it’s similar to the 2009 recommendation, it’s a missed opportunity to give this issue a thorough review. So perhaps well-intended, but definitely ill-advised is all I can think of to say.
Mickey @ 12:46 PM

the human interface…

Posted on Sunday 7 February 2016

I was sitting in the waiting room Friday morning waiting for an ENT appointment [to check a worrisome non-healing lesion in my nostril]. The big tv was looping through an animated description of some balloon thing they could do to your sinuses and a hearing aid spiel about why getting them from an ENT is better than [anywhere else]. I was reading on my phone about depression screening rather than from the various WebMD magazine-lets lying around. When the receptionist gave me the papers to fill out, I half expected there to be a PHQ-9 like at my internist’s office or in the waiting room at the clinic where I work. There wasn’t. But there was one about whether I needed that sinus balloon job and one in case I’m hard of hearing. When I went to the examining room, another tv set was silently mime-ing how to use an epi-pen and cycling through the various hearing aid batteries available at the front desk. The highlight of the visit was the pronouncement that it looked like "a wart" instead of all the ominous options that occurred to me when I found it. I gladly left my "wart" in a jar for the pathologist. On the way home, I wondered why I have such a strong visceral reaction against the idea of screening for depression in doctor’s waiting rooms. Here are some words:

US Preventive Services Task Force Recommendation Statement
by Albert L. Siu, MD, MSPH and the US Preventive Services Task Force [USPSTF]
JAMA. 2016 315[4]:380-387.

Description: Update of the 2009 US Preventive Services Task Force [USPSTF] recommendation on screening for depression in adults.
Methods: The USPSTF reviewed the evidence on the benefits and harms of screening for depression in adult populations, including older adults and pregnant and postpartum women; the accuracy of depression screening instruments; and the benefits and harms of depression treatment in these populations.
Population: This recommendation applies to adults 18 years and older.
Recommendation: The USPSTF recommends screening for depression in the general adult population, including pregnant and postpartum women. Screening should be implemented with adequate systems in place to ensure accurate diagnosis, effective treatment, and appropriate follow-up. [B recommendation]
Antidepressant use itself is already a public health problem – 11% of adult Americans are taking them. We didn’t get here because they’re so effective or benign. We were led by deceitful trial reporting, advertising, marketing, and some shaky epidemiology.

Besides antidepressants, they’re talking about Cognitive Behavior Therapy. CBT is a legitimate and effective therapy, but it’s not generic. In my area, most mental health types would list it as their first line discipline. But I don’t think I’ve seen a full course of what Aaron Beck would call CBT in the eight years I’ve been involved. What happens in formal trials and what happens in the "natural world" are very different things. So I suspect that the majority of waiting room depression would be treated with medication.

And then there’s the question of what constitutes depression. Would it be people who say they’re depressed? – which is what unhappy people nowadays say to describe how they feel – bad marriages, no marriages, the burden of a difficult biography, personality disorders, situational crises, grief, loneliness, etc. The formal diagnosis of depression has become corrupted in the recent era – simplified to mean almost any constellation of discrete symptoms and causes. So even discussing it as if it’s a unitary entity is hard to justify.

But I don’t think those things are why depression screening evokes such a visceral "No" when I hear it. One part of that is the global burden of depression meme that introduces so many articles on antidepressants [and blogs by our former NIMH Director for one] implying that through some not-so-clear-mechanism, the incidence of depression-the-vaguely-biological-entity is accelerating. It makes little sense. If there were a genetic or neural circuit etiology, why would it be increasing? And were that true, what does the symptomatic treatment with medications have to do with that? Those arguments invariably precede a pitch for more funding for brain research or some new, innovative, treatment for depression, loosely based on a 2004 population study by the World Health Organization. And so this whole line of thinking seems flawed to me, a cascade of unproven and unexamined assumptions that have achieved an autonomous momentum that will only be amplified by waiting-room-depression-screening.

And the other related visceral thing for me is that this way of thinking simplifies and reduces the infinite variance of human suffering. Depression is an expressive and communicative human emotion – obvious with little more than a brief look. The idea that one can practice decent medicine and need a questionnaire to identify it and inquire about it says something about the system of practice envisioned by a world of people who don’t understand what "practice decent medicine" means. The PHQ-9 questions aren’t about subtle things:
  • Little interest or pleasure in doing things?
  • Feeling down, depressed, or hopeless?
  • Trouble falling or staying asleep, or sleeping too much?
  • Feeling tired or having little energy?
  • Poor appetite or overeating?
  • Feeling bad about yourself – or that you are a failure or have let yourself or your family down?
  • Trouble concentrating on things, such as reading the newspaper or watching television?
  • Moving or speaking so slowly that other people could have noticed?
  • Or being so fidgety or restless that you have been moving around a lot more than usual?
  • Thoughts that you would be better off dead, or of hurting yourself in some way?
I’m aware that this second thing may say more about what I think of modern medicine with its emphasis on work product, cost containment, objectification, uniformity, and guidelines than it has to do with screening for depression. I just know that depressed people feel alone and unseen. I think if a doctor pointed at my PHQ-9 and said, "I see here that you’re depressed," I’d feel even more unseen. If on the other hand, she looked me in the eye and said "I can see that you’re really depressed" and asked "What’s going on?" I’d feel seen and maybe not so alone. Screening is generally designed to "pick up" things that don’t show – a pap smear, a blood count, a fasting blood glucose, an EKG, a serologic test for Syphilis, a questionnaire about past medical history or a review of systems. Depression shows. And I see screening for depression as interpersonal, part of the human interface.
Mickey @ 12:24 PM

in indelible ink…

Posted on Saturday 6 February 2016

When I mentioned the sponsorship of the STAT Morning Rounds page [explanation?…], George Dawson implied in a comment that my radar was perhaps set too low:
What would you call it if I gave a lecture for CME, unrelated to prescribing but it was sponsored by Big Pharma – even though I had nothing to do with the business end of it? I would definitely get listed in Open Payments web site.
And I had just written about the Annals of Internal Medicine editor going after Ben Goldacre’s COMPare for noting that the a priori outcome variables in one of their papers had changed in the published version [protest too much…]. I don’t have anything more to say about either of those things because he’s right, I don’t really know where either boundary ought to be set – and perhaps that is my whole point.

There was a time when pharmaceutical sponsorship of all kinds of things was common, even desirable. I didn’t actually think much about it back then – seeing it as a service provided by one member of the medical community to another. They gave us an endless stream of pens, cheesy models of knees and brains, note pads, prescription pads. F. Netter’s drawings in C.I.B.A. pamphlets were the gold standard for anatomical illustration – much revered. But I honestly didn’t know which company made which drug. A friend once quipped that the only drug company we all really knew was SKF, because they made the diet pills we took as med students to cram for exams [embarrassingly true in the mythic 60s].

I know that when I read the NYT article about the Senate investigation in 2008 [Top Psychiatrist Didn’t Report Drug Makers’ Pay], that was the first time I’d ever heard of Speaker’s Bureaus. But by then, there were lots of things I’d never heard of – CROs [Contract Research Organizations] that actually did the Clinical Trials, KOLs [Key Opinion Leaders] and KOL Managers:
As pharmaceutical and life-sciences companies search for the most effective, efficient ways to manage collaboration with the physicians who conduct research, write articles, or speak on their behalf, relationship management of the interaction with these elite physicians, or key opinion leaders [KOLs], has ultimately emerged as an individual business discipline. Similar to CRM, KOL management is an essential component for marketers and medical staff throughout the life-cycle process of a specific drug or product.
[NOTE: CRM stands for Customer Relationship Management]

By sustaining a business process that creates and maintains meaningful and collaborative relationships between KOLs and business functions from marketing to medical affairs, pharmaceutical and life-sciences companies can experience increased share of voice and accelerated adoptions at the global, national, and regional levels. A CEO of a major pharmaceuticals company recently told a group of analysts that effectively managing KOL relationships was essential to companies’ future products and market expansion.

As physicians strive to choose from a myriad of drug options for their patients, they often turn to fellow key opinion leading physicians for knowledge and advice on specific drugs. Key opinion leaders possess a unique credibility, as their validity often stems from years of industry experience and medical affiliations. As a result, pharmaceutical and life-sciences companies have begun relying heavily on key opinion leaders to help establish the knowledge base about their drugs and expand their markets throughout all stages of life-cycle management.
And as we learned from first the Propublica Dollars for Docs database and later from the Open Payments web site, there are plenty of physicians on the PHARMA payroll to manage.

So I take George’s point, and I don’t mean that sarcastically. My radar just doesn’t work anymore. When I think about it, Megan’s Morning Rounds PhRMA and Johnson & Johnson sponsorship is more along the lines of what I recall from the past. It’s open and I expect [hope] that her clarification is on the up and up [explanation?…]. I’m sure that there are many hypotheticals that I wouldn’t have a clue how to classify, how to decide which side of the line they fall on. People like Dr. Nemeroff and the other ultra-KOLs on Senator Grassley’s list made that part easy. They were so far south of the border that the line drew itself. And there are plenty of others where that’s true.

I come from an era when even the possibility or appearance of a Conflict of Interest was exclusionary. I did a lot of speaking during my years. And though it was to trainees, graduate students, and peer groups in my own and other mental health specialties, I didn’t ask for honoraria, and if it appeared, I donated it back to the sponsoring organization. I did accept room and board if it was out of town. That wouldn’t have worked had I been on the circuit, but I wasn’t. Saying it now sounds kind of prissy, hyper-moral, maybe even pseudo-moral. But I was just following the lead of my own role models. I really liked talking about things I had learned, and I almost always came away from those talks having learned something myself. It was kind of like this blog – a way of collecting my thoughts. Nothing makes you think about things so much as the possibility of saying it out loud to others. I wouldn’t necessarily push that approach on anyone else. It just is what worked for me.

So I expect that anything I said about drawing a line that marked the limits between acceptable and unacceptable COI policies would be idiosyncratic, anachronistic, or some other five dollar word. But I would stand behind the fact that the line needs to be drawn – not in the sand, but in indelible ink. And I don’t much agree with the idea of exceptions. I would much prefer the part that I think George objects to – 100% transparency. I’d rather they list all industry payments and connections publicly, and let the decision about whether they are benign or malignant come after the fact. And that’s based on how often people have found ways to use exceptions to hide all kind of things. Admittedly, there are people who will indict any industry involvement or payment, but we’re not all that way. And the people who push for transparency aren’t the cause of the problem – that blame falls on our colleagues who have abused the system, or allowed themselves to be used. I appreciate Megan’s quick, non-defensive clarification as just what I was asking for. But I still think the editors’ response to COMPare was uncalled for. I know Megan of Morning Rounds and Ed Silverman of Pharmalot have to support themselves, and industry sponsorship is an obvious direction to look. But in a modern world, the burden of proof falls on their shoulders – proof that they’re not under the influence of some version of a KOL Manager. It’s just that kind of world now…
Mickey @ 12:29 PM

protest too much…

Posted on Thursday 4 February 2016

I want to linger on the response of the Annals of Internal Medicine to the COMPare letter pointing out an instance of a published paper that reported outcomes differing from those in the a priori Protocol. Here’s the letter from COMPare:
Annals of Internal Medicine
by Eirion Slade; Henry Drysdale; Ben Goldacre, on behalf of the COMPare Team
December 22, 2015

TO THE EDITOR: Gepner and colleagues’ article reports outcomes that differ from those initially registered. One prespecified primary outcome was reported incorrectly as a secondary outcome. In addition, the article reports 5 “primary outcomes” and 9 secondary outcomes that were not prespecified without flagging them as such. One prespecified secondary outcome also was not reported anywhere in the article.

Annals of Internal Medicine endorses the CONSORT [Consolidated Standards of Reporting Trials] guidelines on best practice in trial reporting. To reduce the risk for selective outcome reporting, CONSORT includes a commitment that all prespecified primary and secondary outcomes should be reported and that, where new outcomes are reported, it should be made clear that these were added at a later date, and when and why this was done should be explained.

The Centre for Evidence-Based Medicine Outcome Monitoring Project [COMPare] aims to review all trials published going forward in a sample of top journals, including Annals. When outcomes have been incorrectly reported, we are writing letters to correct the record and audit the extent of this problem in the hopes of reducing its prevalence. This trial has been published and is being used to inform decision making, and this comment is a brief correction on a matter of fact obtained by comparing 2 pieces of published literature. We are maintaining a Web site [] where we will post the submission and publication dates of this comment alongside a summary of the data on each trial that we have assessed.

I was surprised by the response from the Annals. The tone is generally defensive and dismissive, sometimes verging on contemptuous. After describing their journal’s own editorial process, they turn to COMPare and its methodology:
Annals of Internal Medicine
by the Editors
December 22, 2015

… According to COMPare’s protocol, abstractors are to look first for a protocol that has been published before a trial’s start date. If they find no such publication, they are supposed to review the initial trial registry data. Thus, COMPare’s review excludes most protocols published after the start of a trial and unpublished protocols or their amendments and ignores amendments or updates to the registry after a trial’s start date. The initial trial registry data, which often include outdated, vague, or erroneous entries, serve as COMPare’s “gold standard.”

Our review indicates problems with COMPare’s methods. For 1 trial, the group apparently considered the protocol published well after data collection ended. However, they did not consider the protocol published 2 years before MacPherson and associates’ primary trial was published. That protocol was more specific in describing the timing of the primary outcome [assessment of neck pain at 12 months] than the registry [assessment of neck pain at 3, 6, and 12 months], yet COMPare deemed the authors’ presentation of the 12-month assessment as primary in the published trial to be “incorrect.” Similarly, the group’s assessment of Gepner and colleagues’ trial included an erroneous assumption about one of the prespecified primary outcomes, glycemic control, which the authors had operationalized differently from the abstractors. Furthermore, the protocol for that trial clearly listed the secondary outcomes that the group deemed as being not prespecified.
They’re chiding COMPare for not digging deep enough. I’ve spent a lot of time chasing around trying to find a priori Protocols and amendments, and it’s a daunting and often impossible task. COMPare is making a plea for that information to be included in the articles and the review process. The authors surely have it immediately at hand. The second paragraph of COMPare’s letter couldn’t be clearer and doesn’t deserve the ‘eye for an eye’ response.
On the basis of our long experience reviewing research articles, we have learned that prespecified outcomes or analytic methods can be suboptimal or wrong. Regardless of prespecification, we sometimes require the published article to improve on the prespecified methods or not emphasize an end point that misrepresents the health effect of an intervention. Although prespecification is important in science, it is not an altar at which to worship. Prespecification can be misused to sanctify both inappropriate end points, such as biomarkers, when actual health outcomes are available and methods that are demonstrably inferior.
Nobody’s arguing with the editors about that point. If there’s one point where the COMPare letter is weak, it’s that it doesn’t spell out the obvious. The a priori Protocol, right or wrong, is the only verifiable piece of evidence around. We can’t trust that the blind was maintained in an industry funded CRO run trial. So if the a priori Outcome Measures have been changed, we need to know what they were and why they were changed so we can make our own decisions about the changes. Invoking "long experience" is no trump card. We readers have "long experience" too [and some of it has been very bad experience].
The Centre for Evidence-Based Medicine Outcome Monitoring Project’s assessments seem to be based on the premise that trials are or can be perfectly designed at the outset, the initial trial registry fully represents the critical aspects of trial conduct, all primary and secondary end points are reported in a single trial publication, and any changes that investigators make to a trial protocol or analytic procedures after the trial start date indicate bad science. In reality, many trial protocols or reports are changed for justifiable reasons: institutional review board recommendations, advances in statistical methods, low event or accrual rates, problems with data collection, and changes requested during peer review. The Centre for Evidence-Based Medicine Outcome Monitoring Project’s rigid evaluations and the labeling of any discrepancies as possible evidence of research misconduct may have the undesired effect of undermining the work of responsible investigators, peer reviewers, and journal editors to improve both the conduct and reporting of science…
The COMPare letter is matter-of-fact, pointing to an unacknowledged discrepancy in an article, suggesting how it should have been mentioned in the published article. I don’t read a charge of ‘research misconduct’ in that letter. But I sure don’t read any great desire in the editors to protect us from it. Why so nasty? Why the comment about undermining editors? One is tempted to say, "thou dost protest too much."
Mickey @ 11:25 AM


Posted on Thursday 4 February 2016

An in-depth analysis of clinical trials reveals widespread underreporting of negative side effects, including suicide attempts and aggressive behavior
Scientific American
By Diana Kwon
February 3, 2016

Antidepressants are some of the most commonly prescribed medications out there. More than one out of 10 Americans over age 12 — roughly 11 percent — take these drugs, according to a 2011 report by the National Center for Health Statistics. And yet, recent reports have revealed that important data about the safety of these drugs — especially their risks for children and adolescents — has been withheld from the medical community and the public.

In the latest and most comprehensive analysis, published last week in BMJ [the British Medical Journal], a group of researchers at the Nordic Cochrane Center in Copenhagen showed that pharmaceutical companies were not presenting the full extent of serious harm in clinical study reports, which are detailed documents sent to regulatory authorities such as the U.S. Food and Drug Administration and the European Medicines Agency [EMA] when applying for approval of a new drug. The researchers examined documents from 70 double-blind, placebo-controlled trials of two common types of antidepressants — selective serotonin reuptake inhibitors [SSRI] and serotonin and norepinephrine reuptake inhibitors [SNRI] — and found that the occurrence of suicidal thoughts and aggressive behavior doubled in children and adolescents who used these medications.

This paper comes on the heels of disturbing charges about conflicts of interest in reports on antidepressant trials. Last September a study published in the Journal of Clinical Epidemiology revealed that a third of meta-analyses of antidepressant studies were written by pharma employees and that these were 22 times less likely than other meta-studies to include negative statements about the drug. That same month another research group reported that after reanalyzing the data from Study 329, a 2001 clinical trial of Paxil funded by GlaxoSmithKline, they uncovered exaggerated efficacy and undisclosed harm to adolescents.

Because of the selective reporting of negative outcomes in journal articles, the researchers in the most recent BMJ study turned to clinical trial reports, which include more detailed information about the trials. They discovered that some of the most useful information was in individual patient listings buried in the appendices. For example, they uncovered suicide attempts that were passed off as “emotional lability” or “worsening depression” in the report itself. This information, however, was only available for 32 out of the 70 trials. “We found that a lot of the appendices were often only available upon request to the authorities, and the authorities had never requested them,” says Tarang Sharma, a PhD student at Cochrane and lead author of the study. “I’m actually kind of scared about how bad the actual situation would be if we had the complete data”…
My post-retirement involvement in the business of psychiatric medications came as a surprise to my colleagues [and to me]. I practiced and taught another brand of psychiatry, and so they often ask "What got you into this?" I know some of the answer and have talked about it probably more than necessary, but I haven’t mentioned the most important thing – disillusionment. One can make the case that human psychological development is a story of illusion/disillusionment cycles from beginning to end. The devoted mother of early life is replaced by the same mother encouraging self sufficiency. A solid principle of effective parenting is allowing illusion, then shepherding a disillusionment at the rate a child can both tolerate and even appreciate. And good doctoring is sometimes helping a person find a decent life as a chronically ill person when the illness is one that’s come to stay — in spite of the accompanying disillusionment. But while it’s interesting to reflect on the topic from an armchair, actually living with it isn’t so easy.

I was a latecomer to what this Scientific American article is about, in my later sixties, retired from a psychotherapy practice that had been more out of the mainstream than I knew at the time. In retirement, I had started volunteering in some general clinics and was struck with a couple of things. First, the patients were almost universally taking a lot of medications in odd combinations unfamiliar to me. But even more striking, they came with expectations from the medications that were well beyond any possibilities I knew. To borrow a book title, I felt like a stranger in a strange land. About that time, I read in the New York Times that the chairman of the department I had been affiliated with for over thirty years was under investigation for unreported income from pharmaceutical companies [Top Psychiatrist Didn’t Report Drug Makers’ Pay]. And somewhere in there, I had prescribed an SSRI to a 17-year-old young man who became confused, agitated, and suicidal within days – all thankfully clearing as fast as they came when the medication was discontinued. At the time, I didn’t know that could happen.

I’m surprised at how much the disillusionment I felt affected me. I had experienced my share of such things before, but this one was different. Reading back over the blogs I’ve written since then, I’ve bounced from place to place in how I understood [or didn’t understand] it all. I was lucky. I had a strong hard science background from a former career and could look into the science involved. And I’ve met a number of like-minded people along the way who brought a wealth of experience and wisdom my way – helping me answer questions I didn’t even know were there to be asked. But there were two concrete experiences that helped me with my own uncomfortable disillusionment. The first was going to the Allen Jones TMAP Trial in Austin in January 2012 where I watched any number of regular people caught up in some little piece of the drama without allowing themselves to see the whole picture. The second was being involved in the research for one of those articles up there and seeing the details – another example of people neither stepping back far enough to see the big picture nor getting close enough to see what they were involved in. In both instances the main problems were at the top, and had to do with unnecessary secrecy.

Medical advances have often been accompanied by high hopes and enthusiasm [illusion] followed by the more accurate reality that comes with clinical experience [dis·illusionment]. This sequence has been eroded at both ends. The Clinical Trials that are meant to be a simplistic starting place have been jury-rigged and given an undeserved enduring authority. Meanwhile, academic medical departments and journals have not only become engaged in the hype, but have also failed in their traditional role as watchdogs and skeptics. In the process, the appropriate disillusionment that comes with clinical experience with medications is being replaced with a disillusionment with medicine itself – an unacceptable trade-off.

I’m less disillusioned [and less naive] than I once was. I guess I had assumed that the ethics of medicine would protect us from all of this and I was bitter that it didn’t. I don’t have any global solutions, but I do feel a resolve to stay wide awake and stop counting on the inertia of medical tradition to keep us on the right path. And all I really know is that the forces inside and outside of medicine that have led us here lose their power when they see the light of day in articles like this…
Mickey @ 10:23 AM


Posted on Thursday 4 February 2016

I signed up for STAT on the Boston Globe because Pharmalot moved there. But the new additions to the morning emails these last several days have given me pause. We could use an explanation…

UPDATE: Megan’s response to an email:
Hi Mickey,

They’re simply sponsors for the newsletter — they’re paid advertisements that come from the business end of the publication. I have nothing to do with them, don’t interact with them, and they have no bearing on my content whatsoever. They started showing up because the newsletter has been rolling for a few months now, and in order to pay for the journalism we do, the business team needs to create revenue through advertisements like all other publications.


Mickey @ 8:29 AM

selective inattention…

Posted on Tuesday 2 February 2016

American psychiatrist Harry Stack Sullivan balked at the term "Unconscious," preferring "Selective Inattention" to explain realities that people simply omit. It’s a particularly apt term for some recent commentaries appearing in our medical literature. In notes from a reluctant parasite…, I mentioned Dr. Jeffrey Drazen’s editorial and the subsequent series by his reporter in the New England Journal last summer suggesting they lift the ban on experts with Conflicts of Interest writing editorials and review articles:
by Jeffrey M. Drazen, M.D.
New England Journal of Medicine. 2015 372:1853-1854.
by Lisa Rosenbaum, M.D.
New England Journal of Medicine. 2015 372:1860-1864.
by Lisa Rosenbaum, M.D.
New England Journal of Medicine. 2015; 372:1959-1963.
by Lisa Rosenbaum, M.D.
New England Journal of Medicine. 2015; 372:2064-2068.

In order to make that argument, they have to ignore the numerous examples of "experts" using review articles to push some product they had personal connections with – one of the more egregious versions being Dr. Nemeroff et al’s review of the vagal nerve stimulator in depression [VNS Therapy in Treatment-Resistant Depression: Clinical Evidence and Putative Neurobiological Mechanisms] that cost him his own editorship at Neuropsychopharmacology. That was an unacknowledged COI, but there are many other examples to choose from where acknowledgement doesn’t mitigate the glaring bias.

Now Dr. Drazen has this other piece suggesting that people who want to reanalyze questioned studies are "data parasites," saprophytes feeding off the carrion of other researchers’ work. In that formulation, he has to selectively ignore the numerous examples of distorted clinical trial reports that literally beg for a thorough re-examination, and the much more likely motive of a person vetting such an article – to expose distortions:
by Dan L. Longo, and Jeffrey M. Drazen
New England Journal of Medicine. 2016  374:276-277.
Then there was the Viewpoint article in JAMA this Fall by [Associate Editor] Anne Cappola and colleague Garret FitzGerald that exercised the same kind of Selective Inattention [Confluence, not conflict of interest: Name change necessary]. They direct a Translational Institute at the University of Pennsylvania and seem worried that the focus on Conflicts of Interest might intrude on the dreams of the Translationists [my term]. They propose that reframing things with a name change [Conflicts of Interest to Confluence of Interest] might make things go better:
by Anne Cappola and Garret FitzGerald
JAMA. 2015 314[17]:1791-1792.

… Confluence of interest represents a complex ecosystem that requires development of a uniform approach to minimize bias in clinical research across the academic sector. Such a policy must be at once simple and accessible, capturing the complexity of the relationships while being sufficiently flexible at the individual level not to intrude on the process of innovation.
In order to suggest this naive grammatical solution, they have to have their Selective Inattention motor running full throttle [the elephant in the room comes to mind]. In Dr. Bernard Carroll’s words:
Health Care Renewal
by Bernard Carroll
January 24, 2016

"… the authors, presuming to speak for investigators generally, were offended by the increasing regulations for managing COI. Those developments have occurred at the Federal, institutional, and publication levels. Worse, the authors ignored the reality of recent corruption that led to those new regulations. That uncomfortable fact was airbrushed out of their discussion."
And the authors fail to notice that some of us think their whole notion of Translational Medicine itself is an elaborate version of the same kind of ruse – that same wolf in sheep’s clothing hiding behind lofty rhetoric [like in this very article]. Which brings me to Susan Molchan’s blog post on HEALTHNEWSREVIEW.
by Susan Molchan
January 25, 2016

It’s difficult to make a case for hiding or obscuring information about health and the medicines we take, but it seems the editors of two top medical journals are doing just that. The decisions of these editors substantially affect the quality of medical research studies reported, what public relations officials communicate about those studies, and what news stories eventually say about the research to patients and the public…
I’m currently trying to escape the fog one gets into spending too much time scrolling through endless columns of figures, so I wanted to write this article about the Selective Inattention of pundits in high places who have to overlook the loud and the obvious to press their own agendas. I had the Drazen articles and Cappola’s and FitzGerald’s JAMA Viewpoint piece in hand. I wanted to add another example, but I couldn’t find the one I was looking for. Then <ping>, my computer announced incoming. It was a comment on my last blog post by Susan Molchan. Not only did it point me to her excellent blog which pre-empted the post I was writing [that you’re reading right now], it also included the very piece I was looking for. But first I need to back up a bit and talk about Ben Goldacre’s COMPare Project.

Ben and a cadre of trainees are taking advantage of some of the data access afforded by the European Medicines Agency [EMA] and gathering the a priori Protocols from a number of Clinical Trials. Then they’re running down the published papers and comparing the Protocol-defined outcome variables with what is reported by the articles – finding all kinds of discrepancies. They call it Outcome Switching. Then they’re taking it a step further by contacting the journals and asking the obvious questions – Did they notice? What might they do about that? It’s a great idea [and right in the middle of why I’m looking at the non-protocol variables introduced into Keller et al.’s 2001 paper on Paxil Study 329]. There’s a nice summary of Ben’s Project on Retraction Watch [Did a clinical trial proceed as planned? New project finds out]. The other article I was looking for was a letter from an Annals of Internal Medicine editor in response to COMPare’s query about one of their published articles:
Annals of Internal Medicine
December 22, 2015

… The Centre for Evidence-Based Medicine Outcome Monitoring Project’s assessments seem to be based on the premise that trials are or can be perfectly designed at the outset, the initial trial registry fully represents the critical aspects of trial conduct, all primary and secondary end points are reported in a single trial publication, and any changes that investigators make to a trial protocol or analytic procedures after the trial start date indicate bad science. In reality, many trial protocols or reports are changed for justifiable reasons: institutional review board recommendations, advances in statistical methods, low event or accrual rates, problems with data collection, and changes requested during peer review. The Centre for Evidence-Based Medicine Outcome Monitoring Project’s rigid evaluations and the labeling of any discrepancies as possible evidence of research misconduct may have the undesired effect of undermining the work of responsible investigators, peer reviewers, and journal editors to improve both the conduct and reporting of science.
Selective Inattention? You betcha! I don’t doubt that he’s right that investigators may frequently misjudge in their a priori predictions. But he is selectively inattentive to the very obvious problem that the a priori protocol is the only concrete evidence we have that the authors didn’t go fishing after the fact with a computer to find the variable whose outcome fit their needs. We obviously can’t trust the blinding as it’s controlled by the Sponsor and their contracted CRO. This is a very high-stakes game and the principals aren’t boy scouts. The authors are free to mention that they are reporting non-protocol-defined variables, but that status needs to be crystal clear. And it usually isn’t – thus COMPare. Outcome switching was in the center ring of our Paxil Study 329 analysis, but we didn’t yet have that general name for it.
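At bottom, COMPare’s check is just a set comparison – the protocol’s pre-specified outcomes against what the paper actually reports. A minimal sketch, with hypothetical outcome names standing in for the real ones:

```python
# Hypothetical outcome names, purely for illustration:
protocol_outcomes = {
    "HAM-D change, baseline to week 8",
    "response (HAM-D <= 8 or >= 50% drop)",
}
reported_outcomes = {
    "response (HAM-D <= 8 or >= 50% drop)",
    "HAM-D depression item",
    "CGI improvement",
}

# Outcomes reported without being pre-specified in the protocol:
switched_in = reported_outcomes - protocol_outcomes
# Pre-specified outcomes that never made it into the paper:
switched_out = protocol_outcomes - reported_outcomes
```

Anything in the first set was reported without pre-specification; anything in the second was promised and then quietly dropped – the two faces of Outcome Switching.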

Harry Stack Sullivan was from the days before psychopharmacology, and he was nobody’s fool. He was objecting to the "un" in Freud’s "unconscious." Things people don’t think about – or don’t want to think about – don’t just go away. The mind becomes selectively inattentive, but it shows. People get some kind of fidgety when the unwanted mental content is nearby. They may start doing odd things that gamblers call "tells." If the wires are hooked up to a polygraph, the needles on the graph begin to wiggle. They may change the subject, or ask why you’re asking, or get hostile, or be defensive, or maybe sarcastic, or go silent. There’s a subtle disturbance in the force. Such things don’t tell you what’s being selectively unattended – only that you’re in the neighborhood of something that matters.

I reckon editors are no different. They get their version of fidgety – dismissive, into expert mode, sarcastic, silent, making bizarre and forced arguments – all the things other people do when one gets into an area where, for a myriad of reasons, you’re confronting something that pokes holes in the status quo. In this case, they are dancing around in order not to have to see that there has been a massive intrusion of unscientific interests into our science-based world, and that to address it we’re going to have to tweak our system in some fairly fundamental ways. And the people who are gaining something from the system as it stands are going to lose some things they don’t want to lose. But that’s just the way of things. As they say, "Don’t do the crime, if you can’t do the time." I recommend reading Susan Molchan’s blog, Bernard Carroll’s blog, and anything Ben Goldacre has to say about COMPare. In differing ways, they’re all calling our attention to the same very important thing – deceit…
Mickey @ 4:24 PM

notes from a reluctant parasite…

Posted on Monday 1 February 2016

It was something of an irony to be immersed in trying to make sense out of someone else’s study [the reason I stopped blogging for a while], and then to read that I was a member of a new class of researchers – "data parasites." Jeffrey Drazen, editor of the New England Journal of Medicine, didn’t win me over with his proposal that the NEJM drop its policy of prohibiting experts with Conflicts of Interest from writing review articles and editorials [see a snail’s pace…]. And, in a way, this new editorial is a continuation of that same theme.
by Dan L. Longo, and Jeffrey M. Drazen
New England Journal of Medicine. 2016  374:276-277.

The aerial view of the concept of data sharing is beautiful. What could be better than having high-quality information carefully reexamined for the possibility that new nuggets of useful data are lying there, previously unseen? The potential for leveraging existing results for even more benefit pays appropriate increased tribute to the patients who put themselves at risk to generate the data. The moral imperative to honor their collective sacrifice is the trump card that takes this trick.
Data Sharing, as Drazen presents it here, is a sensible and time-honored idea – using a dataset collected for one reason to uncover previously unseen insights that may be far from the original intent of the study. Darwin’s Finches come to mind. Darwin returned from the Galápagos Islands with a sack full of birds. But it was only when the birds were reexamined by ornithologist John Gould that their variability was noted, putting Darwin on the track that led him to his concept of Natural Selection.

But in this first paragraph, Drazen also sets the stage for another agenda – one heavily promoted by the pharmaceutical industry. When the clamor about distorted clinical trial reports reached a pitch that could no longer be ignored, they reframed the real intent of the move for Data Transparency. Instead of that being a reform move to allow independent reanalysis to keep them honest [because they hadn’t been], they spoke of it as Data Sharing for the reasons Drazen presents in his opening gambit – a generous act in the service of scientific progress.

And in his second paragraph, he’s going to venerate the academic investigators’ role in these Clinical Trials. Perhaps his description is accurate in some instances, but it certainly doesn’t fit the industry-funded and administered studies I’ve looked at. The studies are run and analyzed by the industrial Sponsors and Contract Research Organization [CROs], written by medical writing firms, and the academic authors are more consultants than "researchers" [and tickets into the pages of prestigious journals]. While my cynical version may not be universally justified, it’s way common enough to be a glaring omission from Drazen’s narrative.
However, many of us who have actually conducted clinical research, managed clinical studies and data collection and analysis, and curated data sets have concerns about the details. The first concern is that someone not involved in the generation and collection of the data may not understand the choices made in defining the parameters. Special problems arise if data are to be combined from independent studies and considered comparable. How heterogeneous were the study populations? Were the eligibility criteria the same? Can it be assumed that the differences in study populations, data collection and analysis, and treatments, both protocol-specified and unspecified, can be ignored?
The cat’s now out of the bag. It’s people like me and uncounted others that he’s after – people whose motive is to look for misconduct disguised as science – or perhaps people like the volunteers with the Cochrane Collaboration who do extensive structured reviews and meta-analyses aiming for a more accurate assessment of the data. So now Dr. Drazen turns to something of a global ad hominem argument, an indictment of the motives of such people. It’s in the form of the saying, "People who can’t do, teach":
A second concern held by some is that a new class of research person will emerge — people who had nothing to do with the design and execution of the study but use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited. There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “research parasites”…
And so for Dr. Drazen, people who want to reanalyze the data from published studies are creepy hangers-on, contrarians. I’m obviously not in love with that formulation. He leaves out the possibility of another, more likely motivation – that we’re checkers, people who believe that a lot of the scientific drug trial literature is written [and often distorted] for commercial gain rather than medical understanding. We’ve been brought to that conclusion honestly, and Dr. Drazen’s summary dismissal of that possibility – he doesn’t even mention it – is a telling indictment of him and his own motives.

After giving an example of successful Data Sharing, he concludes:
How would data sharing work best? We think it should happen symbiotically, not parasitically. Start with a novel idea, one that is not an obvious extension of the reported work. Second, identify potential collaborators whose collected data may be useful in assessing the hypothesis and propose a collaboration. Third, work together to test the new hypothesis. Fourth, report the new findings with relevant coauthorship to acknowledge both the group that proposed the new idea and the investigative group that accrued the data that allowed it to be tested. What is learned may be beautiful even when seen from close up.
Our group was one of the first to apply for data access under the new venues provided by the drug manufacturers [Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence]. GSK insisted on a proposal, something in the range of Data Sharing. While it was tempting to make up something, the truth was that we wanted the data from Paxil Study 329 because we didn’t believe the published analysis [Efficacy of Paroxetine in the Treatment of Adolescent Major Depression: A Randomized, Controlled Trial]. So, instead of making up a reason, we simply submitted the original Protocol. And to GSK’s credit, they gave us access [after a long negotiation]. We already had the Clinical Study Reports [CSRs] and the Individual Participant Data [IPDs], as they had been forced to publish them by a court settlement. But we couldn’t really do an adequate evaluation of harms without the Case Report Forms [CRFs]. We weren’t looking for something new, and our dealings were all with the pharmaceutical companies, not the twenty-two authors who never responded to us.

I don’t personally see running industry-funded Phase III Clinical Trials as Research, I think of it as Product Testing. There’s an enormous financial coloring to the whole enterprise, billions of dollars riding on the outcome of some of these Clinical Trials that say yes or no to the investment put into any given drug. But the trials are primarily about the safety and efficacy of the drugs themselves, not about the financial health and fortunes of the company that developed them, nor the academic departments and faculty that involve themselves in this process. There’s an epithet coined to describe people who are skeptical about clinical trials – pharmascolds – implying that they are biased against all drugs. Such people exist for sure, but I’m not one of them, nor are most of us who look into the data from Sponsored drug trials. We’re physicians and science minded others who don’t like being gamed by our own scientific literature, depriving us of a vital source of information about how to treat our patients.

Frankly, I’m a reluctant parasite. I’ve had to revive skills from a previous career here in my retirement. I had some other plans that were pushed to the side in order to do that. But I think it’s vitally important for the medical·scientific community to have watchdogs, particularly in today’s climate. Certainly the scientific literature in psychiatry for the last twenty plus years begs for serious oversight. Our group’s work was unfunded and difficult [in part because of the way we had to access the data]. Our paper was extensively reviewed and only accepted after the seventh submission, though in a way, the thorough and comprehensive nature of the peer review was confirming [if only that original paper had been subjected to that kind of rigor…].

Dr. Drazen’s editorial makes the assumption that the "front-line researchers" are a gold standard, and their work is being inappropriately attacked. I could easily mount an argument that there are many among that group who are the real data parasites, capitalizing on their academic positions to sign on to jury-rigged, ghost-written articles that they often had little to do with producing. And I question Dr. Drazen’s motives in ignoring the corruption and misbehavior that has made up a sizeable portion of the commercially sponsored clinical trial reporting currently flooding the landscape of our academic literature. An often-rendered old saying from my childhood seems appropriate, "I don’t mind your peeing in my boot, but don’t tell me it’s water"…
Mickey @ 1:03 PM

in the old days…

Posted on Friday 29 January 2016

I took a several-week unpaid leave of absence from blogging and email commerce to do a numbers-intense project. I found that I just couldn’t do it if I thought about anything else. I finished the part that had to be done yesterday, and look forward to returning to my normal mental life. Looking over all the accumulated emails, I ran across one from a medical school friend who sent me a blurb from his literature [he’s an Emergency Medicine physician]. It was in Emergency Medicine Today:
hat tip to Ferrell…  
Antidepressants Appear To Be Much More Dangerous For Kids, Teens Than Reported In Medical Journals, Review Finds.

HealthDay reports, “Antidepressants appear to be much more dangerous for children and teens than reported in medical journals, because initial published results from clinical trials did not accurately note instances of suicide and aggression,” a review published Jan. 27 in the BMJ suggests. Researchers arrived at that conclusion after analyzing data from “68 clinical study reports from 70 drug trials that involved more than 18,500 patients.” The clinical studies “involved five specific antidepressants: duloxetine [Cymbalta], fluoxetine [Prozac], paroxetine [Paxil], sertraline [Zoloft] and venlafaxine [Effexor].”

According to Medical Daily, “the limitations in both design and reporting of clinical trials may have led to ‘serious under-estimation of the harms.’” The study authors concluded, “The true risk for serious harms is still uncertain.” The Telegraph [UK] also covers the story.
At first I thought it was an acknowledgement of our recent article [Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence], but it was better than that. It was an article from the Nordic Cochrane Center about the potential adverse effects in adolescents from SSRIs – another log on the fire:
by Tarang Sharma, Louise Schow Guski, Nanna Freund, and Peter C. Gøtzsche
British Medical Journal. 2016 352:i65

Objective: To study serious harms associated with selective serotonin and serotonin-norepinephrine reuptake inhibitors.
Design: Systematic review and meta-analysis.
Main outcome measures: Mortality and suicidality. Secondary outcomes were aggressive behaviour and akathisia.
Data sources: Clinical study reports for duloxetine, fluoxetine, paroxetine, sertraline, and venlafaxine obtained from the European and UK drug regulators, and summary trial reports for duloxetine and fluoxetine from Eli Lilly’s website.
Eligibility criteria for study selection: Double blind placebo controlled trials that contained any patient narratives or individual patient listings of harms.
Data extraction and analysis: Two researchers extracted data independently; the outcomes were meta-analysed by Peto’s exact method [fixed effect model].
Results: We included 70 trials [64 381 pages of clinical study reports] with 18 526 patients. These trials had limitations in the study design and discrepancies in reporting, which may have led to serious under-reporting of harms. For example, some outcomes appeared only in individual patient listings in appendices, which we had for only 32 trials, and we did not have case report forms for any of the trials. Differences in mortality [all deaths were in adults, odds ratio 1.28, 95% confidence interval 0.40 to 4.06], suicidality [1.21, 0.84 to 1.74], and akathisia [2.04, 0.93 to 4.48] were not significant, whereas patients taking antidepressants displayed more aggressive behaviour [1.93, 1.26 to 2.95]. For adults, the odds ratios were 0.81 [0.51 to 1.28] for suicidality, 1.09 [0.55 to 2.14] for aggression, and 2.00 [0.79 to 5.04] for akathisia. The corresponding values for children and adolescents were 2.39 [1.31 to 4.33], 2.79 [1.62 to 4.81], and 2.15 [0.48 to 9.65]. In the summary trial reports on Eli Lilly’s website, almost all deaths were noted, but all suicidal ideation events were missing, and the information on the remaining outcomes was incomplete.
Conclusions: Because of the shortcomings identified and having only partial access to appendices with no access to case report forms, the harms could not be estimated accurately. In adults there was no significant increase in all four outcomes, but in children and adolescents the risk of suicidality and aggression doubled. To elucidate the harms reliably, access to anonymised individual patient data is needed.
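For readers who want to try the arithmetic themselves, the Peto [one-step] method the authors used reduces to a simple calculation per 2×2 table. A sketch with made-up counts – the real event counts live in the CSRs, not in this abstract:

```python
import math

def peto_or(events_t, n_t, events_c, n_c):
    """One-step Peto odds ratio with 95% CI for a single 2x2 table.
    events_t/n_t: events and total in the treatment arm;
    events_c/n_c: events and total in the control arm."""
    N = n_t + n_c
    m = events_t + events_c                          # total events across both arms
    E = n_t * m / N                                  # expected treatment-arm events under H0
    V = n_t * n_c * m * (N - m) / (N**2 * (N - 1))   # hypergeometric variance
    log_or = (events_t - E) / V
    half = 1.96 / math.sqrt(V)
    return math.exp(log_or), math.exp(log_or - half), math.exp(log_or + half)

# Made-up counts: 10/100 events on drug vs 5/100 on placebo.
or_, lo, hi = peto_or(10, 100, 5, 100)
# The point estimate is about 2, but the 95% CI spans 1 [not significant] –
# which is exactly why pooling many trials, as the authors did, matters.
```

The numbers here are hypothetical; the point is only that the method behind those bracketed confidence intervals is within reach of any reader with a calculator.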
Since the article is online, I’ll skip the details and say why I liked it, over and above it being another loud message about the potential harms of SSRIs in adolescents as well as the pressing need for Data Transparency in Clinical Trials:
  • The reference came from a primary care newsletter. Non-psychiatrist physicians prescribe the majority of the SSRIs, and that’s where this information about harms belongs. Hopefully the news is finally spreading.
  • The heavy lifting in this article was done by students working with Dr. Gøtzsche at the Nordic Cochrane Center. We desperately need for this kind of critical evaluation of Clinical Trials to be coming from the world of young researchers and physicians rather than just from us old guys.
  • They did this meta-analysis using a large number of Clinical Study Reports [CSRs] they got from the European Medicines Agency [EMA]. Again, great news that they could get them. And there was more: they got to see the wide variability in what was in those reports and how variable they were in including raw data – clarifying that in lobbying for Data Transparency, we need to specify that the documents contain actual data.
  • They emphasize the point that the CSRs are not enough to evaluate harms. We absolutely need to have access to the Case Report Forms [CRFs] where the data was originally transcribed. We couldn’t have done our Paxil Study 329 article without them.
  • Their findings mirrored ours from Paxil Study 329, but they had information from many more studies than just our one source. It is a more general commentary.
  • It’s always great to hear from an old friend from the time when the world was young [among many other things, ironically, we were lab partners in pharmacology lab a little over 50 years ago]…
But happy talk aside, we shouldn’t even have to be fighting for honesty and transparency in the scientific literature. That ought to be a given. As a young guy, I noticed that the elders always talked about the good old days when things were better and I resolved not to do that when I got old. In the main, I have been able to hold to that resolution. But with this issue of the Clinical Trial articles in the medical literature, I can’t stick to my guns. I can’t remember ever having to keep one eye always cocked, looking for signs that I’m being taken for a ride by a distorted, commercially biased production. It really was better in the old days…
Mickey @ 5:56 PM

peeking out…

Posted on Monday 18 January 2016

I got an email asking if I was sick [because I’ve been quiet for a week]. No, I am just knee-deep in a project that involves scrolling through endless monotonous spreadsheets, and I just can’t look at a computer after a few [or more] hours of that. Probably another week more, I would guess. In the process, I’ve discovered a new disease – spreadsheet oculopathy. Symptoms include diplopia, nystagmus, a dandy headache, and irritability. It’s easy as pie to treat.

I did want to jump in and thank PsychPractice for mentioning my jaunt into statistics [in the land of sometimes… 1, 2, 3, 4, and 5 & john henry’s hammer… 1, 2, 3, 4, and 5] and for giving it a test drive [DIY Study Evaluation]. I’m not a statistician, and I’ll be glad to hear when I get things wrong. But I’ve decided that a lot of the reason people are not reading these clinical trials critically is that all the modern talk of statistical modelling, linear regression, etc. puts people off. Either they don’t understand the analytic methodology or, worse, it’s presented in a deliberately obfuscating way to keep the reader from looking behind all the fancy talk. What I’m proposing is that the average medical reader can easily learn how to use a few simple tools to quickly decide if one is being served a plate of science or a dish of something else. At least in my specialty, psychiatry, the industry-generated clinical trial reports have been heavily weighted on the south side of something else. There are more statistical things to say before I’m done.
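In that spirit, the humblest of those simple tools is the number needed to treat [NNT], which anyone can compute from the response rates in a trial report. A sketch with hypothetical rates of the sort seen in antidepressant trials:

```python
def nnt(p_drug, p_placebo):
    """Number needed to treat: how many patients must receive the drug
    for one additional response beyond what placebo would produce."""
    arr = p_drug - p_placebo          # absolute risk reduction
    if arr <= 0:
        raise ValueError("drug arm shows no advantage over placebo")
    return 1 / arr

# Hypothetical figures: 62% response on drug vs 50% on placebo
# -> ARR = 0.12, so roughly 8 patients treated per extra responder.
print(round(nnt(0.62, 0.50)))  # -> 8
```

A big, splashy p-value can coexist with an NNT so large that the drug’s advantage is clinically trivial – which is exactly the kind of thing these back-of-the-envelope tools expose.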

So, just peeking out to say hello. Back soon…
Mickey @ 8:31 PM