a grief observed I…

Posted on Sunday 9 February 2014

Back in March 2012, Lisa Cosgrove and Sheldon Krimsky documented the extensive connections between DSM-5 workgroup members and the pharmaceutical industry – criticizing the COI policy and suggesting changes:
PLoS Medicine
by Lisa Cosgrove and Sheldon Krimsky
March 13, 2012

Summary Points
  • The American Psychiatric Association (APA) instituted a financial conflict of interest disclosure policy for the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM).
  • The new disclosure policy has not been accompanied by a reduction in the financial conflicts of interest of DSM panel members.
  • Transparency alone cannot mitigate the potential for bias and is an insufficient solution for protecting the integrity of the revision process.
  • Gaps in APA’s disclosure policy are identified and recommendations for more stringent safeguards are offered.
At the time, I made this chart to illustrate their reported findings:
Then-APA President John Oldham issued an immediate Press Release denying the analysis in their paper. Said APA CEO James Scully [lame…]:
In a statement, APA medical director and CEO James Scully says the DSM-5 development process ‘is the most open and transparent of any previous edition of the DSM. We wanted to include a wide variety of scientists and researchers with a range of expertise and viewpoints in the DSM-5 process. Excluding everyone with direct or indirect funding from the industry would unreasonably limit the participation of leading mental health experts in the DSM-5 development process.’
The issue on the table at the time was the Task Force plan to eliminate the Bereavement Exclusion from the diagnostic criteria for Major Depressive Disorder. The obvious fear was that this was simply a move to open up the grief market for antidepressants [DSM-5 To The Barricades On Grief, a fundamental flaw…]. Again, APA President Dr. Oldham explained:
"What we know," Dr. Oldham said, "is that any major stress can activate significant depression in people who are at risk for it. It doesn’t make sense to differentiate the loss of a loved one as understandable grief from equally severe stress and sadness after other kinds of loss."
Then in December 2012, the Washington Post had an article about the extensive COI with PHARMA among DSM-5 Task Force and APA Guidelines members [Antidepressants to treat grief? Psychiatry panelists with ties to drug industry say yes], focusing specifically on the proposed elimination of the Bereavement Exclusion and the possibility of overmedication of normal grief. In that article, APA CEO Scully reiterated:
Each work group member was allowed to receive as much as $10,000 a year in income from pharmaceutical companies and hold as much as $50,000 in stock. Members could also receive unlimited amounts of money from pharmaceutical companies to conduct research. Scully said that if no financial ties were permitted, many knowledgeable psychiatrists would be excluded because so many university studies are funded by pharmaceutical companies.
And DSM-5 Task Force Chair David Kupfer responded with a Press Release defending the DSM-5 COI policies [Response to the Washington Post]. In an interview in Medscape, Kupfer added this, addressing the footnotes designed to quell the rage:
… The Washington Post article also brought up criticisms about the DSM-5’s removal of the bereavement exclusion from the criteria for major depressive disorder, which would be replaced with cautionary notes for clinicians. But will these notes be enough to help differentiate between normal grieving and a potentially serious problem?

"Yes," Dr. Kupfer told Medscape Medical News. "This change draws clinicians’ attention to the distinctions between grief after a significant loss and depression. The exclusion criteria will be replaced by 2 notations — a footnote at the end of the criteria that cautions clinicians to differentiate between normal grieving associated with a significant loss and a diagnosis of a mental disorder, and a note embedded within the criteria that reminds clinicians that major depression and bereavement can coexist." "This provides greater guidance to clinicians to help make this distinction and ensures that it is understood that sadness, grief, and bereavement are not things that have a time limitation to them, as dictated in DSM-IV’s bereavement exclusion," he said.

He noted in the release that removing the exclusion "helps prevent major depression from being overlooked and facilitates the possibility of appropriate treatment, including therapy or other interventions."
Dr. Kenneth Kendler of the DSM-5 Mood Disorders workgroup had written the justification for removing the Bereavement Exclusion [see depressing ergo-mania…] and Dr. Stanley Zisook had published an article [industry funded and uncontrolled] using Wellbutrin to treat grief [see what is absurd…]. But other than that, if there was anyone else on the planet who supported this change who wasn’t on the DSM-5 Task Force, I don’t know who they were. The outrage was universal, with petitions signed by thousands circulating, particularly among psychologists and bereavement counselors. In spite of the hue and cry, in December, the APA Board of Trustees approved the DSM-5 as written and it was sent to press for release in May 2013.

There was something else about that DSM-5 COI policy that we all noticed, but I for one didn’t foresee as being what it is turning out to be. Frankly, by January of 2012, I was exhausted with ranting about the DSM-5, exhausted and maybe disgusted. I’m a psychiatrist, and although I haven’t been an APA member for decades, I felt ashamed that the APA had behaved so badly along the way. So I focused my attention on other things [there are plenty to choose from]. I think it’s called scandal fatigue. But others saw the writing on the wall more clearly. Here’s what David Allen had to say at the time:
Family Dysfunction and Mental Health Blog
by David M. Allen MD
January 10, 2013

… David Kupfer, MD, chair of the DSM-5 Task Force, said in a news release, "While speculation is bound to occur, we think it is important to stay focused on the fact that APA has gone to great lengths to ensure that DSM-5 and APA’s clinical practice guidelines are free from bias."

In his news release, in which he defended the policies regarding conflict of interest in the members of the different groups that were working on the DSM-5, published in the Psychiatric Times, he stated, “… all individuals agreed that, starting in 2007 and continuing for the duration of each individual member’s work on DSM-5, that individual’s total annual income derived from industry sources [excluding unrestricted research grants] [italics mine] would not exceed $10,000 in any calendar year, and he or she would not hold stock or shares of a pharmaceutical or device company valued at more than $50,000.”

So nothing to worry about? No conflict of interest here? Unfortunately, that part I highlighted in the above quote is big enough to drive the proverbial truck through. As the article in the Washington Post of 12/26/12 pointed out, “Members [of the various task forces creating the new DSM] could also receive unlimited amounts of money from pharmaceutical companies to conduct research.”

If the drug companies are supporting the research of an "expert," how is that not financial influence?  Most of these experts are academics; if they do not get funding, they often cannot keep their jobs! Depending on Pharma for your income is hazardous to your objectivity. This very sly loophole in disclosure and conflict-of-interest rules has also been exploited by some Pharma-funded researchers who label themselves as “unpaid consultants” in the “disclosures” attached to journal articles…
So what? you ask. What if there are ongoing trials with DSM-5 workgroup members as Principal Investigators? Aren’t all the antidepressants finally going off patent? Can’t we finally breathe a sigh of relief?
Mickey @ 12:23 PM

where’s my violin?

Posted on Saturday 8 February 2014

Nero and the burning of Rome

Whether Nero actually played the violin while Rome burned or not, the story is a great metaphor for misguided priorities, or indifference, or narcissism, or incompetence, or being clueless, or maybe even not sweating the things you can’t do anything about. Whatever it means, I prefer using it to describe other people rather than myself. And obviously my last post on screening was written without surveying the landscape – like the Medicare Manual:

Medicare covers yearly screenings for depression.

These screenings are designed to be completed by a doctor or other primary care provider to ensure you are correctly diagnosed, treated and followed-up with. For Medicare to cover the annual depression screening, the screening must take place in a primary care setting. This means it will not be covered if you are screened in an emergency room, skilled nursing facility or as a hospital inpatient.
The annual depression screening includes a questionnaire that you complete yourself or with the help of your doctor. This questionnaire is designed to indicate if you are at risk or have symptoms of depression.
If the results of the questionnaire indicate that you may be at risk or have symptoms of depression, your doctor will do a more thorough evaluation to assess if you suffer from depression. If your doctor decides you do suffer from depression, they will provide treatment and follow-up or refer you to a mental health professional for further care.
Annual depression screenings can be performed separately by your primary care doctor but will typically take place when you have a scheduled office visit. The Welcome to Medicare Visit and first Annual Wellness visit require that your doctor review your potential for depression or other mental health conditions. However, these visits do not require your doctor to screen you for depression. A review is when your doctor discusses your risk factors for depression such as a family history of depression. However, you will not be given a screening questionnaire during a review.
If you have Original Medicare, you will not have to pay a deductible or coinsurances for the annual depression screening as long as you see doctors who accept assignment. If you have a Medicare Advantage plan, you will not have to pay a deductible, copay or coinsurances as long as you see network doctors.
If you need further evaluation to diagnose your condition or if you need mental health treatment, there will be cost sharing. You will need to pay deductibles, coinsurances or copays for this care. The amount you pay depends on the type of care you get.
For more information about your costs when you receive outpatient mental health care, please click here.

Maybe the Nero metaphor is appropriate for things that are too big to even think about. But at least I know what got me here. Psycritic commented on my last post, and included a YouTube link to a young ER doc, Leana Wen, passionately talking about the importance to both physicians and patients of the history in making a diagnosis – the source of 80% of diagnoses, she said. She had two examples – her mother, whose cancer was initially interpreted as a viral illness, and a patient of hers who had every test in the world chasing a chest pain that was probably musculoskeletal. I mentioned my recent extensive [and negative] cardiac work-up initiated because of my family history [I guess] and the fact that I’m sort of old [a brief physical exam following an impressive array of tests using some unfamiliar Cadillac machinery]. I’ll have to admit a fascination with machinery and particularly the modern echocardiogram. It was something to behold. Who knew that the little murmur I heard myself was a small inconsequential calcification on a mitral valve leaflet?

But it was the hurried doctor visits that occupied the space between "hello" and the ordering of tests that stuck with me. I recognized most of the doctors and vice versa – former students from my teaching days. I expect because of that, they were on their best behavior, but it was unfamiliar behavior. The past history and family history were on forms filled out in the waiting room along with HIPAA things. There was no room without a computer flashing things about me gathered from hither, thither, and yon. The present history was terse, hurried, and as Dr. Wen discusses, punctuated by frequent interruptions. If there was a physical exam, I missed it. I could have a liver the size of a beach-ball, but it would remain just my little secret. I’m a good sport and I had a good time having my various tests. The technicians loved being asked about their machines and proudly showed me all kinds of wonderful things, vastly improved since my own internist days.

As I thought about what Dr. Wen was saying and my own experience, I was reminded of that New York Times article we all read several years ago [Talk Doesn’t Pay, So Psychiatry Turns Instead to Drug Therapy]:
Alone with his psychiatrist, the patient confided that his newborn had serious health problems, his distraught wife was screaming at him and he had started drinking again. With his life and second marriage falling apart, the man said he needed help. But the psychiatrist, Dr. Donald Levin, stopped him and said: “Hold it. I’m not your therapist. I could adjust your medications, but I don’t think that’s appropriate.”
So in my feels wrong…, I was waxing eloquent about inter·subjective space, insisting that worse than seeing emotional pain as a thing is the notion that we need a psychometric to ferret it out – all of it uninformed by the directives of the Medicare Manual – a modern Nero oblivious to the incineration of the Rome I once knew.

As a child, I resolved not to be like the old people I was around, constantly talking about the good old days being better. They were knocking "my days" which I was sure were the best of days. Even as a child, I suspected that what they really missed was how they felt when they were young. But now I’m old and I’m in a quandary. It really does seem to me that everyone medical is playing to a different set of audiences than in my day: insurance regulators, government regulators, HIPAA directors, medicolegal advisers, machine operators, and who-knows-whomever-else. I’m not sure that "what is wrong with this guy?" and "what does he need?" are at the top of the list where they belong.

So is it reasonable to ask a doctor to take a moment with each patient to ask, "Is this person depressed?" Is the reason that "we’re not properly inhabiting the offices where we meet our patients," as I suggested, that it’s so crowded in there with all those regulators that we just don’t have the time?

Where’s my violin?
Mickey @ 2:40 PM

feels wrong…

Posted on Friday 7 February 2014

I’ve recently waded into unfamiliar territory – screening for depression [beyond symptoms…, the proposed study…]. It’s an area that I don’t know a lot about. There’s no question that my sudden interest in the topic is a reaction to the Gibbons/Kupfer CAT tests [open letter to the APA…]. My anger at their undeclared Conflict of Interest is matched equally by a fear that this test is aimed at the waiting rooms of Primary Care Physicians. That doesn’t feel intuitively sound – more like something that will up the general medication burden than improve health care. But I have no evidence other than the pregnancy/post-partum studies I already posted [the proposed study…]. But fortune shone through the clouds when this article showed up this week on the BMJ site, selected for the UNCERTAINTIES PAGE by David Tovey, editor in chief of the Cochrane Library:


[Note: I’ve chopped this article up in the service of space. If it’s something you’re interested in, you probably want to read it all on-line]
by Brett D Thombs, Roy C Ziegelstein
British Medical Journal. 2014 348:g1253.

Major depression is present in 5-10% of patients in primary care, including 10-20% of patients with chronic medical conditions. Based on the prevalence and burden of depression, the availability of screening tools, and access to potentially effective treatments, routine depression screening has been proposed as a way to improve depression care. Depression screening involves the use of self administered questionnaires or small sets of questions to identify patients who may have depression but who are not already diagnosed or being treated for depression. Clinical practice guidelines do not agree on whether health professionals should screen for depression in primary care. The US Preventive Services Task Force [USPSTF] recommends screening for depression when enhanced, staff assisted, depression care programmes are in place to ensure accurate diagnosis and effective treatment and follow-up. The Canadian Task Force on Preventive Health Care previously endorsed a similar recommendation, but in 2013 recommended against depression screening in primary care, citing a lack of evidence of benefit from randomised controlled trials and concern that a high proportion of positive screens would be false positives.

In the UK, the National Screening Committee has determined that there is no evidence of benefit from depression screening to justify costs and potential harms and has recommended against it. A 2010 guideline from the National Institute for Health and Care Excellence [NICE] did not recommend routine depression screening, but suggested that clinicians be alert to possible depression, particularly among patients with a history of depression or with a chronic medical condition… In contrast to these recommendations, between 2006 and 2013, the UK Quality and Outcomes Framework [QOF] financially rewarded routine depression screening of patients with coronary heart disease and diabetes in primary care. By 2007, 90% of eligible Scottish primary care patients had been screened, but outcomes were disappointing: 976 patients had to be screened for each new diagnosis of depression, and 687 for each new antidepressant prescription. The 2013-14 QOF no longer included depression screening as a quality indicator.

Thus, screening for depression is sometimes encouraged in primary care guidelines and is often encouraged via other mechanisms, such as expert opinion articles in the medical literature. It is not clear, however, that screening would benefit patients…

What is the evidence of uncertainty? A depression screening programme can be successful only if patients not already known to have depression agree to be screened, if a substantial number of new cases are identified with relatively few false positive screens, and if newly identified patients engage in treatment with successful outcomes. An assessment of the effect of a screening programme on depression outcomes must separate the effect of screening from the effect of providing additional depression treatment resources not otherwise available, such as staffing for collaborative depression care. Thus, randomised controlled trials of depression screening must fulfil at least three key criteria: [1] determining eligibility and randomising patients before screening; [2] excluding patients already known to have depression or already being treated for depression; and [3] providing similar depression care options to patients in both trial arms, whether they are identified as depressed by screening or via other methods, such as self report or unaided clinician diagnosis.

We searched Embase, PubMed, PsycINFO, Scopus, and the Cochrane Library for systematic reviews on the effect of depression screening on depression outcomes and for randomised controlled trials conducted in primary care settings that fulfilled the three criteria we have described for tests of depression screening. This search was partly based on that for our own systematic review.

We identified three systematic reviews. A systematic review done in conjunction with the recent Canadian guideline did not identify any randomised controlled trials of depression screening. A 2008 Cochrane systematic review, on the other hand, assessed five randomised controlled trials and reported that depression screening did not reduce depressive symptoms [standardised mean difference −0.02 (95% confidence interval −0.25 to 0.20)]. In contrast to this, a systematic review done in conjunction with the 2009 USPSTF depression screening guideline included nine randomised controlled trials and concluded that depression screening benefitted patients when done in the context of staff assisted collaborative care but not in the context of usual care without these services…

Overall, no trials in the Cochrane review or USPSTF review fulfilled all three criteria for a test of depression screening…

We did not identify any randomised controlled trial that tested whether screening with collaborative depression care would be more effective than collaborative care without screening…

We did not find any studies that reported the degree to which administering depression symptom questionnaires improved diagnostic accuracy for depression among patients suspected by healthcare providers of having depression.

Is ongoing research likely to provide relevant evidence? We searched ClinicalTrials.gov and the WHO International Clinical Trials Registry Platform for ongoing trials intended to evaluate the effects of depression screening, but did not find any studies that fulfilled the criteria for tests of depression screening…

What should we do in the light of the uncertainty? The absence of evidence that routine screening of all primary care patients or even screening of only high risk patients improves depression outcomes does not take away from the importance of depression as a condition that negatively affects quality of life and may respond to treatment. It only means that there is insufficient evidence to recommend screening as a strategy to identify the condition…
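The Scottish numbers quoted above [976 screened for each new diagnosis] are what base rates do to a screening programme. Here is a back-of-the-envelope sketch of that arithmetic in Python – with made-up but plausible values for prevalence, sensitivity, and specificity, since the BMJ piece doesn’t supply them – showing why a high proportion of positive screens are false positives:

    # Back-of-the-envelope screening arithmetic [illustrative values only].
    prevalence = 0.07    # assumed: 7% of those screened have undetected depression
    sensitivity = 0.85   # assumed: the questionnaire catches 85% of true cases
    specificity = 0.85   # assumed: it correctly clears 85% of non-cases

    screened = 1000
    true_cases = prevalence * screened                       # 70 of 1000
    true_pos = sensitivity * true_cases                      # ~60 correctly flagged
    false_pos = (1 - specificity) * (screened - true_cases)  # ~140 flagged in error

    ppv = true_pos / (true_pos + false_pos)
    print(f"positive screens per {screened}: {true_pos + false_pos:.0f}")
    print(f"proportion truly depressed: {ppv:.0%}")          # ~30%

With numbers like those, roughly two of every three positive screens are false positives – which is exactly the concern the Canadian Task Force cited.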
Like with the WHO criteria and the findings of that Canadian Review of screening pregnant and postpartum women [mentioned in the proposed study…], there are recommendations for depression screening, but there’s little or no evidence that it’s of use from a public health perspective. My objections go further than that. I don’t really think depression is usually a thing. It’s an emotion. And there’s a downside to seeing it as a thing [unless it is, like Melancholia]. In the little clinic where I volunteer, I see an occasional person with what I would call a Depression, a thing, but most people use the term as a synonym for unhappy, or sad, or frustrated, or any number of negative experiences. I don’t mind the imprecise grammar, but I do mind that they think it’s a thing they have – a thing they think should be amenable to medication. We’ve taught them that with our television ads and our DSMs and the way we act in our offices. It hasn’t been a good lesson and I worry that screening will be more of the same – and worse than seeing emotional pain as a thing is the notion that we need a psychometric to ferret it out.

At the risk of being preachy, being attuned to the emotional state of a patient is no different than noticing that they’re covered with a rash, or jaundiced, or gasping for breath, or have a knife sticking out of the middle of their forehead. To my mind, the results of a depression screening test are just something else to put on the ubiquitous computer screens that medical personnel look at rather than their patients [that’s a preachy part]. They finally took the computer out of my office because I didn’t use it, even for prescriptions [even though I’m something of a computer guy]. The point is that no matter what the tenets of modern psychiatry or medicine teach, emotional discomfort is best evaluated in inter·subjective space. If we’re missing a lot of emotional illness, it says something about us, not our tools. We’re not properly inhabiting the offices where we meet our patients [preaching again].

I have had kind of a rule not to talk like this here. The point of this blog is to focus on areas where medicine has been corrupted by commercial or ideological influences, and this may be my own ideology peeking out. I just can’t think of another way to talk about my actual objection to screening for mental health. It just feels wrong. And I think my attraction to this review is that it confirms what I already thought…
Mickey @ 7:21 PM

a patch of blue…

Posted on Friday 7 February 2014

"I’m a kind of paranoiac in reverse. I suspect people of plotting to make me happy."
Raise High the Roof Beam, Carpenters [1955], J. D. Salinger

The last five years of writing about the misadventures of the pharmaceutical industry and the contingent of cooperating KOLs in academic psychiatry and elsewhere have taken something away from my world view. I’ve more or less lived my life walking on the sunny side of the middle of the street, but the steady diet of jury-rigged, ghost-written clinical trials and beyond questionable marketing practices abetted by a subset of physicians who should be my mentors and colleagues has taken its toll. It wouldn’t be fair to say I’m diagnosable as fully paranoid, but the skepticism nuclei on my MRI are bilaterally enlarged and my trust circuits have some early atrophic changes. Thankfully, I’m still capable of hope, and last night, I got a much needed reminder of what that feels like.

Last Friday, I read the news report that Johnson & Johnson had contracted with the Yale University Open Data Access [YODA] Project to adjudicate access to the raw data on their clinical trials, past and present, and flagged it for further study [a placemarker…]. It was enthusiastically announced as a game-changer on the AllTrials web site as well. Then on Sunday, there was a New York Times Op-Ed piece [Give the Data to the People] by Yale’s Dr. Harlan Krumholz of the YODA Project. It was all happy talk, but there was a piece that bothered me. And I’ll admit up front that after sitting through the J&J TMAP Trial in Texas in January 2012 and reading all the trial exhibits, J&J had a big place in my skepticism scanner. The part that bothered me is in my post about it [reassure us…]:
The pharmaceutical companies have insisted on talking about what I call Data Transparency as if it is a synonym for Data Sharing. It’s not. Data Sharing is a magnanimous act on the part of the company to allow other researchers access to the data from their clinical trials for further research for the good of mankind. I’m all in favor of the good of mankind, but that’s not what I’m interested in here. I want us [some of us] to be able to check their work independently starting from the same place they do – the instant the blind on a clinical trial is broken – the raw data itself. And the reason I want to do that is the outrageous record they have for cheating in the way they handle that raw data…
I wanted reassurance that keeping them honest would be considered a research project. In going over countless clinical trials, the funny business hasn’t been in the trials themselves; it has been in what happens once the blind is broken and the raw data become available. The creative license in presenting the data in published articles has been beyond impressive. So I said:
We need some recognition that our goal is considered a research topic – namely, "Are they telling the truth in the published paper or are they presenting the data in a way that misleads the reader [like so many have done before]? Are they withholding data to make their drug look more efficacious or safer than it really is [like so many have done before]?" Putting the "re" in research!
And I sent my blog post to Dr. Krumholz. The first encouragement was that he answered at all. I’m not used to that. He asked that I clarify my question. So I did ["Would your program consider checking the authors’ analysis based on their submitted protocol an acceptable research question?"]. And he answered promptly:
They have relinquished control over the data release to us, by contract. So it is up to us to put in place the right process. If you have many questions then you can submit many proposals. I don’t see us rejecting requests that might be about replicating results. JNJ knows that – it was part of our discussion with them – and they still signed the contract. I cannot speak for past actions – but they signed over this authority to us. I think that given this action, JNJ deserves the benefit of the doubt.
I couldn’t have hoped for a more direct answer than that. The data itself is to be in the hands of YODA, and they would see replicating results as an acceptable research question. That is Data Transparency. They reserve the right to insist on a qualified team that agrees to stick to the submitted question. Those are expectable hoops and within their rights to request.

The great fear in the pharmaceutical companies’ offers to come around and relinquish their claim of proprietary ownership of clinical trial data is a Trojan Horse scenario. And that was my fear here. The Trojan Horse was something that looked like a gift, or a concession, but turned out to be a trick. In this case, it would be turning down access because the "research question" wasn’t some new research project, but just to "replicate" results, or in the case of an unpublished study, find out what the results were in the first place. The title of my post was "reassure us," and he did. Of course, the real test will come when the system is in place and someone asks for access to vet a study. All we can do is wait for that day and see what happens. But for the moment, I’m well pleased with Dr. Krumholz’s response. Like Salinger’s Seymour in my opening quote, the universe is plotting to make me happy [or at least, hopeful].

And Dr. Krumholz has credentials that add credibility to his response:
by Harlan M Krumholz, Joseph S Ross,  Amos H Presler, and David S Egilman.
British Medical Journal. 2007 334:120.

Rofecoxib [Vioxx] was introduced by Merck in 1999 as an effective, safer alternative to non-steroidal anti-inflammatory drugs for the treatment of pain associated with osteoarthritis. It was subsequently found to increase the risk of cardiovascular disease and withdrawn from the worldwide market. Merck now faces legal claims from nearly 30 000 people who had cardiovascular events while taking the drug. The company has stated that it will fight each case, denying liability. Our recent participation in litigation at the request of plaintiffs provided a unique opportunity to thoroughly examine and reflect on much of the accumulated court documents, research, and other evidence. This story offers important lessons about how best to promote constructive collaboration between academic medicine and industry…

Summary points:
  • Merck faces legal claims from nearly 30 000 people who had an adverse cardiovascular event while taking rofecoxib [Merck denies liability]
  • Published studies of the drug obscured the risk
  • Merck had influence over all aspects, including data analysis, safety monitoring, and reporting
  • Academic medicine, industry, medical journals, and government agencies need to define a set of principles to restore trust in collaborations on drug development
He was an expert witness for the plaintiffs in the Vioxx® case. And then there’s the recent story of Medtronic’s INFUSE®:
by Harlan M. Krumholz, MD, SM; Joseph S. Ross, MD, MHS; Cary P. Gross, MD; Ezekiel J. Emanuel, MD, PhD; Beth Hodshon, JD, MPH, RN; Jessica D. Ritchie, MPH; Jeffrey B. Low, AB; and Richard Lehman, MD
Annals of Internal Medicine. 2013 158[12]:910-911.

This issue of Annals heralds a historic moment in the emerging era of open science. It features 2 systematic reviews on recombinant human bone morphogenetic protein-2 [rhBMP-2], an orthobiologic agent used in certain surgeries to promote bone growth that once achieved close to $1 billion in annual sales for Medtronic. The reviews are based on patient-level data from all clinical trials conducted by Medtronic, which were shared through the Yale University Open Data Access [YODA] Project. With the publication of these reviews and the public release of its comprehensive reports, all of the clinical trial data for this product will now be made available by the YODA Project to other investigators for further analysis and examination.

The YODA Project seeks to address the problem of unpublished and selectively published clinical evidence. Nearly half of clinical trials are never published, and many that are have long delays in publication. Among those published, the information is often incomplete. Evidence suggests that some data are not missing at random and that the sharing of data, particularly patient-level data, often provides new insights that are consequential to patients.

Currently, even the most conscientious physicians — those committed to knowing the latest literature — cannot fully understand the true risks and benefits of many treatments. Patients, therefore, are hampered in their ability to make truly informed decisions. In addition, missing data undermine evidence-based medicine, as recommendations based on the published literature, whether in systematic reviews, guidelines, book chapters, or online resources, are not based on the totality of the evidence. To improve the care of patients, clinical trial data, protocols, and results need to be made more widely available and shared for public benefit…
As many of you know, in the independent reviews commissioned by Dr. Krumholz’s YODA Project, Medtronic’s INFUSE® didn’t come out looking very good – probably less effective than the alternatives and potentially harmful. I knew about that part of the story only too well because INFUSE® was used in my own rather extensive back surgery five years ago. That kind of thing grabs one’s attention like no other. But I didn’t know about Dr. Krumholz’s involvement in the Vioxx® debacle. I find the Vioxx® episode the most encouraging because, like my experience with the TMAP discovery documents, that’s the place where one’s skepticism nuclei really start lighting up and growing. If Dr. Krumholz’s YODA can pull this off with J&J like he did with Medtronic, I’ll become a downright YODA groupie.

I think the tone of my post and my questioning the RWJF connection offended Dr. Krumholz [reassure us…], just as my looking into the Wellcome Foundation offended Ben Goldacre last week [an irony of pharma past…, a snowy evening…]. I apologized to both and repeat those apologies here. There are precious few good guys around and they don’t deserve grief for the sins of others. But I don’t apologize for looking under every rock in the stream. Unfortunately, benevolent intent doesn’t go very far in this particular area [as I’ve learned writing this blog]. This is the age of evidence-based medicine, which is also unfortunately sometimes an age of guilty until proved innocent. But my daughter didn’t name this blog 1crotchetyoldman – she mercifully called it 1boringoldman. So I’ll take the recent feedback under advisement and follow my own advice: "It’s ready, aim, fire, not fire, ready, aim"…
Mickey @ 11:29 AM

that matters…

Posted on Thursday 6 February 2014

It has appeared to me that the die was cast for the DSM-5 long before the revision process proper was ever underway. I’ve repeatedly mentioned the book A Research Agenda for DSM-V – the 2002 book that set the stage for two major conceptual changes: the inclusion of biological parameters and the addition of dimensional diagnostic criteria. That book was followed by a series of symposiums. While dimensional diagnosis was discussed in many of those conferences, there was one devoted to dimensional diagnosis specifically, and it was published as a separate book in 2007, Dimensional Approaches in Diagnostic Classification: Refining the Research Agenda for DSM-V. Both books are available piecemeal on the Internet. Here’s the reference to the chapter by Helena Chmura Kraemer, the statistician for the DSM-5 Revision:
by HELENA CHMURA KRAEMER
International Journal of Methods in Psychiatric Research. 2007 16[S1]: S8 – S15.


Abstract: An enhancement to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition [DSM-V] is currently under consideration, one that would enhance both the reliability and validity of the Diagnostic and Statistical Manual [DSM] diagnoses: the addition of a dimensional adjunct to each of the traditional categorical diagnoses of the DSM. We first review the history and context of this proposal and define the concepts on which this dimensional proposal is based. The advantages of dimensional measures over categorical measures have long been known, but we here illustrate what is known with a theoretical and a practical demonstration of the potential effects of this addition. Possible objections to the proposal are discussed, concluding with some general criteria for implementing this proposal.
Like most of the book’s sections, it’s dripping with enthusiasm about the proposed dimensional additions to the DSM-5, mirroring Dr. David Kupfer’s later memo reported by Dr. Jane Costello in her DSM-5 resignation letter:
"…The tipping point for me was the memo from David and Darrell on February 18, 2009, stating “Thus, we have decided that one if not the major difference between DSM-IV and DSM-V will be the more prominent use of dimensional measures in DSM-V”, and going on to introduce an Instrument Assessment Study Group that will advise workgroups on the choice of old scale measures or the creation of new ones."
Dr. Helena Chmura Kraemer was a member of that Instrument Assessment Study Group along with Expert Advisors Drs. Robert Gibbons and Paul Pilkonis, two of the collaborators developing the computerized adaptive testing mentioned below. And all of this is background to two articles in the February American Journal of Psychiatry – an editorial by Drs. Helena Kraemer and Robert Freedman [the Journal’s editor] and the print publication of Dr. Gibbons et al.’s paper on their CAT-ANX psychometric test.
American Journal of Psychiatry
Editorial
Computer Aids for the Diagnosis of Anxiety and Depression
by Helena Chmura Kraemer and Robert Freedman

The publication of DSM-5 marked many examples of progress in psychiatric diagnosis, but two diagnoses, major depressive disorder and generalized anxiety disorder, the core dysfunctions that psychiatry addresses, did not change from DSM-IV to DSM-5. Yet, these two diagnoses had questionable test-retest reliability in the field tests, although paradoxically, high reliability for patients’ self rating. In this issue of the Journal, Gibbons et al. report on the development and initial testing of computerized adaptive testing to assess patients’ self-perception of their anxiety and depression.

In computerized adaptive testing, patients are first asked general questions, and then, based on their initial answers, additional questions are selected to increase the precision of assessment. A good clinician does the same, beginning with general questions and then, based on the answers to those questions, asking more specific questions until the diagnosis is reached. Use of the computer allows for many possible questions and for rapid selection of those most likely to be informative for a given patient. Similar techniques are used by giant online retailers to suggest additional items to buy after an initial purchase is made…

The Gibbons et al. approach is a truly outstanding contribution to measurement in medicine (particularly in psychiatry): it is novel and exciting, and it promises to improve the accuracy and cost-effectiveness of diagnosis both in clinical practice and in research…
by Robert D. Gibbons, Ph.D., David J. Weiss, Ph.D., Paul A. Pilkonis, Ph.D., Ellen Frank, Ph.D., Tara Moore, M.A., M.P.H., Jong Bae Kim, Ph.D., and David J. Kupfer, M.D.
American Journal of Psychiatry. 2014 171:187–194.

Objective: The authors developed a computerized adaptive test for anxiety that decreases patient and clinician burden and increases measurement precision.
Method: A total of 1,614 individuals with and without generalized anxiety disorder from a psychiatric clinic and community mental health center were recruited. The focus of the present study was the development of the Computerized Adaptive Testing–Anxiety Inventory (CAT-ANX). The Structured Clinical Interview for DSM-IV was used to obtain diagnostic classifications of generalized anxiety disorder and major depressive disorder.
Results: An average of 12 items per subject was required to achieve a 0.3 standard error in the anxiety severity estimate and maintain a correlation of 0.94 with the total 431-item test score. CAT-ANX scores were strongly related to the probability of a generalized anxiety disorder diagnosis. Using both the Computerized Adaptive Testing–Depression Inventory and the CAT-ANX, comorbid major depressive disorder and generalized anxiety disorder can be accurately predicted.
Conclusions: Traditional measurement fixes the number of items but allows measurement uncertainty to vary. Computerized adaptive testing fixes measurement uncertainty and allows the number and content of items to vary, leading to a dramatic decrease in the number of items required for a fixed level of measurement uncertainty. Potential applications for inexpensive, efficient, and accurate screening of anxiety in primary care settings, clinical trials, psychiatric epidemiology, molecular genetics, children, and other cultures are discussed.
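For what it’s worth, the mechanics being described are easy enough to sketch. This is my own toy version in Python – invented item parameters, not the Gibbons algorithm or its proprietary item bank – but it shows the principle: at each step, administer the unasked item that is most informative at the current severity estimate, and stop once the standard error falls below a target like the 0.3 cited in the abstract:

    import math

    # Toy computerized adaptive test using a 2-parameter logistic IRT model.
    # Item parameters [discrimination a, difficulty b] are invented for
    # illustration; a real instrument calibrates hundreds of items.
    items = [(1.8, -1.0), (1.5, -0.5), (2.0, 0.0), (1.7, 0.5),
             (1.6, 1.0), (1.9, 1.5), (1.4, -1.5), (2.1, 0.8)]

    def p_yes(theta, a, b):
        """Probability of endorsing an item at severity theta."""
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    def information(theta, a, b):
        """Fisher information of an item at theta."""
        p = p_yes(theta, a, b)
        return a * a * p * (1.0 - p)

    grid = [g / 10.0 for g in range(-40, 41)]  # severity grid, -4..+4

    def adaptive_test(answer, se_target=0.3):
        """Ask items until the posterior SD of theta drops below se_target."""
        post = [math.exp(-t * t / 2) for t in grid]  # standard normal prior
        remaining = list(items)
        while remaining:
            total = sum(post)
            mean = sum(w * t for w, t in zip(post, grid)) / total
            var = sum(w * (t - mean) ** 2 for w, t in zip(post, grid)) / total
            if math.sqrt(var) < se_target:
                break                                 # precision target met
            a, b = max(remaining, key=lambda it: information(mean, *it))
            remaining.remove((a, b))
            resp = answer(a, b)                       # 1 = endorsed, 0 = not
            post = [w * (p_yes(t, a, b) if resp else 1.0 - p_yes(t, a, b))
                    for w, t in zip(post, grid)]
        total = sum(post)
        return sum(w * t for w, t in zip(post, grid)) / total

    # A respondent who endorses everything lands at the high end of the scale.
    print(round(adaptive_test(lambda a, b: 1), 2))

The measurement logic itself is old hat in educational testing, and it works. My quarrel, as will be clear below, isn’t with the mathematics – it’s with the undeclared commercial interests and the intended market.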
Putting aside any number of salient points for the moment, like:

  • Dr. Kraemer’s tortured logic relating this test to the Field Trials above and to the potential usefulness of the test in clinical settings.
  • Dr. Kraemer being a collaborator and author in the move to add dimensional diagnoses to the DSM-5.
  • Dr. Kraemer being a frequent co-author with Dr. Kupfer during the DSM-5 process including the article reporting the Field Trials mentioned in the editorial.
  • The use of NIMH funds to develop this commercial venture.
  • The use of the academic journal articles [JAMA Psychiatry, Journal of Clinical Psychiatry, and now the American Journal of Psychiatry] to advertise this commercial venture.
  • The validity of the CAT-ANX psychometric is untested in substantive clinical Field Trials.
… reviewing the timeline in the open letter to the APA…, there can be little doubt that the pairing of this editorial with this article is part of the long-planned launch of these computerized adaptive tests, developed by Dr. Gibbons and his business associates [including Dr. Kupfer, Chair of the DSM-5 Task Force]. Even though the dimensional parameters were not added to the DSM-5 main diagnostic criteria, they were included in Section III [a point Dr. Kupfer reminded us of recently – Section III of New Manual Looks to Future]. And these tests were clearly part of the dimensional initiative which is obviously still very much alive [from Kraemer, "Applications might easily be developed that could be used by patients in the waiting room, probably requiring less than 10 minutes of the patient’s time and none of clinician’s, producing a score that could be used in its dimensional form (the actual score and its measure of precision) or in categorical form (by selection of an appropriate cut point)"]. Obviously, screening is still part of the mix.

In the memo on this incident submitted to the APA Board of Trustees, Dr. Young wrote:
“Dr. Kupfer should have disclosed to APA his interest in PAI in 2012. Dr. Kupfer’s interest in PAI, which came after the decision had been made to include dimensional measures in DSM-5, did not influence DSM-5’s inclusion of dimensional measures for further study in Section 3. Interest in inclusion of these measures in DSM-5 began with conferences starting in 2003. If and when PAI develops a commercial product with CAT, it will not have any greater advantage than the dozens of dimensional measures currently being marketed by others.”
Nobody I know thinks that Dr. Kupfer’s interest in PAI is why the dimensional measures were included in Section III, so I’m unclear what she’s talking about. What I think is that these computerized tests were commercialized to capitalize on the expected inclusion of dimensional measures in the full manual, and that moving them to Section III was a disappointment, but they carried on with their launch anyway. But that’s just my opinion. What rises beyond opinion is that this very significant longstanding Conflict of Interest was not declared in order to hide the commercial interests of everyone concerned with PAI. That is an ethical breach of major import and demands a full investigation by the Board of Trustees. To ignore it is to say that the APA has no Conflict of Interest policy – at least no Conflict of Interest policy that matters…
Mickey @ 12:14 PM

the proposed study…

Posted on Wednesday 5 February 2014


Depression Screening and Patient Outcomes in Pregnancy or Postpartum: A Systematic Review
by Brett D. Thombs, Erin Arthurs, Stephanie Coronado-Montoya, Michelle Roseman, Vanessa C. Delisle, Allison Leavens, Brooke Levis, Laurent Azoulay, Cheri Smith, Luisa Ciofani, James C. Coyne, Nancy Feeley, Simon Gilbody, Joy Schinazi, Donna E. Stewart, and Phyllis Zelkowitz
Journal of Psychosomatic Research, online 28 January 2014.

Objective: Clinical practice guidelines disagree on whether health care professionals should screen women for depression during pregnancy or postpartum. The objective of this systematic review was to determine whether depression screening improves depression outcomes among women during pregnancy or the postpartum period.
Methods: Searches included the CINAHL, EMBASE, ISI, MEDLINE, and PsycINFO databases through April 1, 2013; manual journal searches; reference list reviews; citation tracking of included articles; and trial registry reviews. RCTs in any language that compared depression outcomes between women during pregnancy or postpartum randomized to undergo depression screening versus women not screened were eligible.
Results: There were 9,242 unique titles/abstracts and 15 full-text articles reviewed. Only 1 RCT of screening postpartum was included, but none during pregnancy. The eligible postpartum study evaluated screening in mothers in Hong Kong with 2-month-old babies [N = 462] and reported a standardized mean difference for symptoms of depression at 6 months postpartum of 0.34 [95% confidence interval = 0.15 to 0.52, P < 0.001]. Standardized mean difference per 44 additional women treated in the intervention trial arm compared to the non-screening arm was approximately 1.8. Risk of bias was high, however, because the status of outcome measures was changed post-hoc and because the reported effect size per woman treated was 6–7 times the effect sizes reported in comparable depression care interventions.
Conclusion: There is currently no evidence from any well-designed and conducted RCT that screening for depression would benefit women in pregnancy or postpartum. Existing guidelines that recommend depression screening during pregnancy or postpartum should be re-considered.
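For readers who don’t live in effect sizes: the standardized mean difference reported there [0.34] is just the difference between the two arms’ mean symptom scores divided by their pooled standard deviation. A minimal sketch in Python, with invented scores [the trial’s raw data aren’t reproduced here]:

    import math

    # Standardized mean difference [Cohen's d] - the effect-size metric
    # behind the 0.34 in the abstract. The scores below are invented.
    def cohens_d(group1, group2):
        n1, n2 = len(group1), len(group2)
        m1, m2 = sum(group1) / n1, sum(group2) / n2
        v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
        v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
        pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
        return (m1 - m2) / pooled_sd

    # Hypothetical depression-scale scores: non-screened arm vs screened arm.
    control  = [12, 14, 9, 11, 15, 13, 10, 12]
    screened = [10, 11, 8, 9, 13, 12, 9, 10]
    print(f"SMD = {cohens_d(control, screened):.2f}")

The telling detail in the abstract isn’t the 0.34 itself but the reviewers’ observation that the implied effect per woman treated was 6–7 times what comparable depression care interventions report – one reason they rated the single eligible trial as high risk of bias.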
Having just looked at a study on screening pregnant patients admitted for high risk pregnancy for depression [beyond symptoms…], I was sent the reference to this study – hot off the press. It’s an exhaustive systematic review of the literature by multiple investigators in Canada addressing screening of pregnant and post-partum women for depression. They were looking for studies that had data on whether screening was useful or not. In spite of the fact that there are multiple recommendations for such screening, they found zero hard evidence to back up those recommendations. Their conclusion:
In summary, we did not find evidence to support recommendations to screen women for depression during pregnancy or postpartum. Well-designed and executed trials that assess the effects of depression screening and that can determine whether there is benefit to women in excess of costs and potential harms are needed. Ideally, a trial will be conducted that randomizes women who are not known to have depression to be screened versus not screened, with women identified as depressed in both trial arms having access to staff-assisted, collaborative depression care. Without evidence from such a trial, current screening recommendations should be re-evaluated. Instead of screening, health care professionals working with women during pregnancy and postpartum should be encouraged to provide women, as well as their partners and families, with information about depression. Health care professionals should also be alert to the possibility of depression among pregnant and postpartum women and should attend to symptoms that may suggest depression, such as low mood, anhedonia, insomnia, and suicidal thoughts, through assessment and, as appropriate, referral or management. Health care providers should be particularly vigilant for depression among women with general risk factors for depression or risk factors that have been identified in women in pregnancy or postpartum, including a history of depression, the presence of a chronic medical condition, unexplained somatic symptoms, chronic pain, increased and unexplained use of medical services, a history of traumatic life events, domestic violence, drug or alcohol abuse, low income, a low education level, single status or poor social support, and unintended pregnancy.
What a concept! Before you undertake some mass project, do a small version and see how it plays out – a pilot project, a field trial. And in the area of screening, here illustrated by screening for depression in pregnant and post-partum women, this group scoured the world literature to see if anyone had done that. And in spite of various groups recommending routine screening, nobody has done an outcome study to see if it’s a good idea.

We’re used to the principles of preventive medicine being a part of our lives – antismoking campaigns, immunizations, serologic tests for syphilis, pap smears, mammography, colonoscopy, etc. So screening is an acceptable part of modern life and seems intuitively like a good idea. But there’s no scientific principle that says that just because you can test for something means that you should use it as a screening test. The classic commentary on screening is a 1968 WHO monograph entitled the PRINCIPLES AND PRACTICE OF SCREENING FOR DISEASE [it’s a timeless document that was good enough to hold my attention most of the way through]. It lists the following guidelines for deciding when to screen:

  (1) The condition sought should be an important health problem.
  (2) There should be an accepted treatment for patients with recognized disease.
  (3) Facilities for diagnosis and treatment should be available.
  (4) There should be a recognizable latent or early symptomatic stage.
  (5) There should be a suitable test or examination.
  (6) The test should be acceptable to the population.
  (7) The natural history of the condition, including development from latent to declared disease, should be adequately understood.
  (8) There should be an agreed policy on whom to treat as patients.
  (9) The cost of case-finding [including diagnosis and treatment of patients diagnosed] should be economically balanced in relation to possible expenditure on medical care as a whole.
  (10) Case-finding should be a continuing process and not a "once and for all" project.

It is a classic. To my mind, screening for mental health problems dies at around number (2) as there is so much controversy about treatment, but it also flounders at multiple places on down the list. Of course, the fear about screening is that it will be yet another conduit for overmedication, not just in pregnancy, but in patients at large. And it would lead to that in the real world of today, no matter how things are framed. This whole business needs to be looked at very carefully from beginning to end before any policy decisions are made. Even well meaning opinion is just opinion. In an earlier post, I was objecting to the use of the diagnosis Major Depressive Disorder in a study where women with high risk pregnancy were tested using the Edinburgh Postnatal Depression Scale [EPDS]. But I had also questioned using a "postnatal scale" in screening pregnant subjects. The author informed me that this was the standard for both pre- and post-natal depression and she’s right. I stand corrected and appreciate the clarification. However, this study from Iceland further muddies the water, finding that the EPDS is screening for a whole lot more than depression.
by Linda B. Lydsdottir, MSc; Louise M. Howard, PhD; Halldora Olafsdottir, MD; Marga Thome, PhD; Petur Tyrfingsson, Cand Psych; and Jon F. Sigurdsson, PhD
Journal of Clinical Psychiatry 10.4088/JCP.13m08646
[epub on-line 02/04/2014]

Objective: Few studies are available on the effectiveness of screening tools such as the Edinburgh Postnatal Depression Scale [EPDS] in pregnancy or the extent to which such tools may identify women with mental disorders other than depression. We therefore aimed to investigate the mental health characteristics of pregnant women who screen positive on the EPDS.
Method: Consecutive women receiving antenatal care in primary care clinics [from November 2006 to July 2011] were invited to complete the EPDS in week 16 of pregnancy. All women who scored above 11 [screen positive] on the EPDS and randomly selected women who scored below 12 [screen negative] were invited to participate in a psychiatric diagnostic interview.
Results: 2,411 women completed the EPDS. Two hundred thirty-three women [9.7%] screened positive in week 16, of whom 153 [66%] agreed to a psychiatric diagnostic interview. Forty-eight women [31.4%] were diagnosed with major depressive disorder according to DSM-IV criteria, 20 [13.1%] with bipolar disorder, 93 [60.8%] with anxiety disorders [including 27 [17.6%] with obsessive-compulsive disorder [OCD]], 8 [5.2%] with dysthymia, 18 [11.8%] with somatoform disorder, 3 [2%] with an eating disorder, and 7 [4.6%] with current substance abuse. Women who screened positive were significantly more likely to have psychosocial risk factors, including being unemployed [χ2 = 23.37, P ≤ .001], lower educational status [χ2 = 31.68, P ≤ .001], and a history of partner violence [χ2 = 10.30, P ≤ .001], compared with the women who screened negative.
Conclusions: Use of the EPDS early in the second trimester of pregnancy identifies a substantial number of women with potentially serious mental disorders other than depression, including bipolar disorder, OCD, and eating disorders. A comprehensive clinical assessment is therefore necessary following use of the EPDS during pregnancy to ensure that women who screen positive receive appropriate mental health management.
Being attuned to mental health problems is part and parcel of the job of any person working in healthcare. The number of people who came or were sent to my office as an internist whose primary problem was in the psychological realm rather than the physical was something I didn’t anticipate from my training. And pregnancy and the post-partum period are places where a high index of suspicion is warranted. These are not problems that require a PHQ-9, an EPDS, or a CAT-DI to discover. They’re apparent with a little observation and a few simple questions. And pregnant and post-partum women are in plenty of medical offices for one reason or another.

To be honest, I think my complaint about screening for mental health issues in general is primarily driven by seeing it as one more way that medical personnel are distancing themselves from their medical responsibility to patients. An alert team of health professionals isn’t going to miss the kind of problems these surveys might pick up. And I think that "you seem really worried" or "you look really down" is a better opening line than "your EPDS is 13." But beyond that, if we’re going to have evidence-based medicine, we need some evidence that mental health screening produces the desired results rather than falls into the categories of wasted effort or worse. I think that Canadian study up top is a brilliant first step in saying "show me." And pregnancy is a perfect place to study mental health screening because it is time limited and outcome comparison could be accurately assessed in a finite study.

The second article begins:
A large study could not only address the outcome of the results of screening but also test some of the assertions in this paragraph. The healthcare systems of the social democracies are perfectly constructed for such a study.

It would be in the range of an undeclared conflict of interest for me not to add that some of my skepticism about mental health screening comes from the questions about things like the CAT-DI, CAT-ANX, and CAD-D testing recently introduced by Drs. Gibbons, Frank, Kupfer, Pilkonis, and Weiss [open letter to the APA…]. With the almost constant bombardment by figures about the public health burden of depression that introduces many of the articles in the industry financed psychopharmacology literature, I think my paranoia is justified. I don’t like using the phrase, "trolling for patients," but that evidence is unfortunately solid – certainly in the US. So we need to look at injunctions about screening very carefully. For example, 12-13% of pregnant women is a huge "market" as are the denizens of doctors’ waiting rooms.

It would also be in the range of an undeclared conflict of interest for me not to say that as much as I like the objective markers and scientific findings of medical science, there are places where the human instrument surpasses the objectivity of the best of questionnaires, and I personally think this is one of those places. I’m willing to be proved wrong, but I think that detecting mental health problems in prenatal or post-partum clinics or, for that matter,  in general medical offices and clinics is more a topic about clinical skills than psychometrics. That’s just an opinion, as testable as any of the others in this post as part of the proposed study…
Mickey @ 8:25 PM

bonuses…

Posted on Tuesday 4 February 2014


Scientific Pride and Prejudice
New York Times
By MICHAEL SUK-YOUNG CHWE
January 31, 2014

SCIENCE is in crisis, just when we need it most. Two years ago, C. Glenn Begley and Lee M. Ellis reported in Nature that they were able to replicate only six out of 53 “landmark” cancer studies. Scientists now worry that many published scientific results are simply not true. The natural sciences often offer themselves as a model to other disciplines. But this time science might look for help to the humanities, and to literary criticism in particular. A major root of the crisis is selective use of data. Scientists, eager to make striking new claims, focus only on evidence that supports their preconceptions. Psychologists call this “confirmation bias”: We seek out information that confirms what we already believe. “We each begin probably with a little bias,” as Jane Austen writes in Persuasion, “and upon that bias build every circumstance in favor of it.”

Despite the popular belief that anything goes in literary criticism, the field has real standards of scholarly validity. In his 1967 book “Validity in Interpretation,” E. D. Hirsch writes that “an interpretive hypothesis” about a poem “is ultimately a probability judgment that is supported by evidence.” This is akin to the statistical approach used in the sciences; Mr. Hirsch was strongly influenced by John Maynard Keynes’s “A Treatise on Probability.” However, Mr. Hirsch also finds that “every interpreter labors under the handicap of an inevitable circularity: All his internal evidence tends to support his hypothesis because much of it was constituted by his hypothesis.” This is essentially the problem faced by science today…

It’s a danger the humanities have long been aware of. In his 1960 book “Truth and Method,” the influential German philosopher Hans-Georg Gadamer argues that an interpreter of a text must first question “the validity — of the fore-meanings dwelling within him.” However, “this kind of sensitivity involves neither ‘neutrality’ with respect to content nor the extinction of one’s self.” Rather, “the important thing is to be aware of one’s own bias.” To deal with the problem of selective use of data, the scientific community must become self-aware and realize that it has a problem. In literary criticism, the question of how one’s arguments are influenced by one’s prejudgments has been a central methodological issue for decades…

Austen might say that researchers should emulate Mr. Darcy in Pride and Prejudice, who submits, “I will venture to say that my investigations and decisions are not usually influenced by my hopes and fears.” At least Mr. Darcy acknowledges the possibility that his personal feelings might influence his investigations. But it would be wrong to say that the ideal scholar is somehow unbiased or dispassionate. …the textbook “scientific method” of dispassionately testing a hypothesis is not how science really works. We often have a clear idea of what we want the results to be before we run an experiment. …science as a lived, human process is different from our preconception of it. He was trying to give us a glimpse of self-understanding, a moment of self-doubt.

When I began to read the novels of Jane Austen, I became convinced that Austen, by placing sophisticated characters in challenging, complex situations, was trying to explicitly analyze how people acted strategically. There was no fancy name for this kind of analysis in Austen’s time, but today we call it game theory. I believe that Austen anticipated the main ideas of game theory by more than a century. As a game theorist myself, how do I know I am not imposing my own way of thinking on Austen? I present lots of evidence to back up my claim, but I cannot deny my own preconceptions and training. As Mr. Gadamer writes, a researcher “cannot separate in advance the productive prejudices that enable understanding from the prejudices that hinder it.” We all bring different preconceptions to our inquiries, whether about Austen or the electron, and these preconceptions can spur as well as blind us…
What a thoughtful piece about the illusion of scientific detachment. The ancient Greeks thought that if they could only master the rules of logic and enumerate all the logical fallacies, they would be able to reach absolute truths. They were the Dogmatists [back in a time when that meant the search for truth rather than what dogmatic means now]. As for logic – every single day, lawyers joust in courtrooms with carefully crafted legal arguments whose logic impeccably proves opposite points in the same case [leaving the choice up to the everymen in the jury]. In science, we have our own versions, a bit beyond simple logic, with our data and its surrogates neatly arrayed in tables and graphs leading to conclusions wrapped in a cloak of statistical and mathematical proof.

And it doesn’t take a Freud to tell us that our self-serving motives and dreams are lurking behind the structure of our logic and the numbers of our science – that our conclusions are often already waiting, hoping the logic and numbers catch up. Professor Michael Chwe finds that most important lesson in the writings of Jane Austen and the intrigues of her characters, well illustrated by the cover of his book and its bubbles. And he relates it to game theory. What could be more interesting than that? And the injunctions to self-doubt and self-awareness are value added bonuses…
Mickey @ 8:00 AM

reassure us…

Posted on Monday 3 February 2014


Give the Data to the People
New York Times
By HARLAN M. KRUMHOLZ
February 2, 2014

LAST week, Johnson & Johnson announced that it was making all of its clinical trial data available to scientists around the world. It has hired my group, Yale University Open Data Access Project, or YODA, to fully oversee the release of the data. Everything in the company’s clinical research vaults, including unpublished raw data, will be available for independent review. This is an extraordinary donation to society, and a reversal of the industry’s traditional tendency to treat data as an asset that would lose value if exposed to public scrutiny. Today, more than half of the clinical trials in the United States, including many sponsored by academic and governmental institutions, are not published within two years of their completion. Often they are never published at all. The unreported results, not surprisingly, are often those in which a drug failed to perform better than a placebo. As a result, evidence-based medicine is, at best, based on only some of the evidence. One of the most troubling implications is that full information on a drug’s effects may never be discovered or released. Even when studies are published, the actual data are usually not made available. End users of research — patients, doctors and policy makers — are implicitly told by a single group of researchers to “take our word for it.” They are often forced to accept the report without the prospect of other independent scientists’ reproducing the findings — a violation of a central tenet of the scientific method.

To be fair, the decision to share data is not easy. Companies worry that their competitors will benefit, that lawyers will take advantage, that incompetent scientists will misconstrue the data and come to mistaken conclusions. Researchers feel ownership of the data and may be reluctant to have others use it. So Johnson & Johnson, as well as companies like GlaxoSmithKline and Medtronic that have made more cautious moves toward transparency, deserve much credit. The more we share data, however, the more we find that many of these problems fail to materialize…

This program doesn’t mean that just anyone can gain access to the data without disclosing how they intend to use it. We require those who want the data to submit a proposal and identify their research team, funding and any conflicts of interest. They have to complete a short course on responsible conduct and sign an agreement that restricts them to their proposed research question. Most important, they must agree to share whatever they find. And we exclude applicants who seek data for commercial or legal purposes. Our intent is not to be tough gatekeepers, but to ensure that the data are used in a transparent way and contribute to overall scientific knowledge.

There are many benefits to this kind of sharing. It honors the contributions of the subjects and scientists who participated in the research. It is proof that an organization, whether it is part of industry or academia, wants to play a role as a good global citizen. It demonstrates that the organization has nothing to hide. And it enables scientists to use the data to learn new ways to help patients. Such an approach can even teach a company like Johnson & Johnson something it didn’t know about its own products. For the good of society, this is a breakthrough that should be replicated throughout the research world.
It feels like we’re only going to have one shot at Data Transparency, and we need to get it right. And at least in the realm of psychoactive medications, the level of misbehavior by the pharmaceutical industry is the stuff of legend. When and if the history is ever written, Johnson & Johnson will probably have a whole chapter all to themselves. The TMAP Program in Texas alone would qualify them, but there were other things, including the nearby Excerpta Medica that ghostwrote Risperdal® articles faster than J&J could recruit KOLs to sign them, the J&J Center at MGH for Dr. Biederman’s Childhood Bipolar fantasies, Omnicare contracts for over-medicating the elderly, etc. Before getting a warm glow about this article, read the Rothman Report from the J&J trial in Austin several years ago. Their track record defines the word ruthless. So pardon me if I approach the plan above with a skeptical eye.

Dr. Harlan M. Krumholz is in charge of the Yale Center for Outcomes Research and Evaluation [CORE] and its Yale University Open Data Access [YODA] Project. I have no reason to doubt his credentials, but there are a couple of things that need to be thoroughly investigated:
    Editor-in-Chief
    Harlan M. Krumholz, MD, SM, is the Harold H. Hines, Jr., Professor of Medicine in the Section of Cardiovascular Medicine at the Yale University School of Medicine, New Haven, Connecticut. He serves as Director of the Robert Wood Johnson Clinical Scholars Program and Director of the Yale-New Haven Hospital Center for Outcomes Research and Evaluation (CORE). Using methods of outcomes research, he has sought to illuminate the balance of risks, benefits, and costs of specific clinical approaches and to implement strategies to improve the prevention, diagnosis, and treatment of cardiovascular disease. He is an elected member of the American Society of Clinical Investigation, Association of American Physicians, and the Institute of Medicine. He has been an Editor for NEJM Journal Watch Cardiology since the publication’s launch in 1995 and Editor-in-Chief since 2000.

    Disclosures
    Consultant / Advisory board United Healthcare

    Speaker’s bureau Centrix

    Equity ImageCor

    Grant / Research support FDA; NIH-NHLBI; Commonwealth Fund; The Catherine and Patrick Weldon Donaghue Medical Research Foundation; Robert Wood Johnson Foundation; Medtronic

    Editorial boards American Journal of Managed Care; American Journal of Medicine; Archives of Medical Science; BMJ.com/US; Central European Journal of Medicine; Circulation: Cardiovascular Quality and Outcomes; Congestive Heart Failure; Critical Pathways in Cardiology; Current Cardiovascular Risk Reports; JACC: Cardiovascular Imaging; Journal of Cardiovascular Medicine

    Leadership positions in professional societies American Board of Internal Medicine (Chair, Assessment 2020 Task Forces);  American College of Cardiology (CV Research and Scholarly Activity, and Lifelong Learning Oversight Committee);  American College of Physicians (CV Research and Scholarly Activity);  American Heart Association (CV Research and Scholarly Activity);  Centers for Medicare & Medicaid Services (Heart Care Technical Expert Panel);  Oklahoma Foundation for Medical Quality (Heart Care Technical Expert Panel);  VHA, Inc. (Center of Applied Healthcare Studies External Advisory Board)
The Robert Wood Johnson Foundation is "the United States’ largest philanthropy devoted exclusively to health and health care." "Robert Wood Johnson II built the family firm of Johnson & Johnson into the world’s largest health products maker. He died in 1968. He established the foundation at his death with 10,204,377 shares of the company’s stock." And I say good for him. But, as I recall from the testimony in the TMAP trial [where the chairman was deposed], the Board is built from former J&J executives, and the Foundation financed the start-up of that infamous program in Texas. While no connection was established, it wasn’t disproved either. No allegations here. It’s just something that needs thorough checking.

The pharmaceutical companies have insisted on talking about what I call Data Transparency as if it is a synonym for Data Sharing. It’s not. Data Sharing is a magnanimous act on the part of the company to allow other researchers access to the data from their clinical trials for further research for the good of mankind. I’m all in favor of the good of mankind, but that’s not what I’m interested in here. I want us [some of us] to be able to check their work independently, starting from the same place they do – the instant the blind on a clinical trial is broken – the raw data itself. And the reason I want to do that is the outrageous record they have for cheating in the way they handle that raw data. Here’s just one example where the Risperdal® data was hidden or distorted:
    In the South Carolina penalty settlement, the Judge noted that there was evidence the manufacturer knew that Risperdal® was associated with metabolic side effects of some magnitude – and then, when instructed to send a "Dear Doctor" letter about those side effects in 2003, they sent out an advertisement instead.
So this bothers me –
We require those who want the data to submit a proposal and identify their research team, funding and any conflicts of interest. They have to complete a short course on responsible conduct and sign an agreement that restricts them to their proposed research question.
– in two ways:
  1. Those of us who want to "check their work" aren’t necessarily academics, particularly in psychiatry. We might not have any funding at all, and may be voluntarily operating with a PC, Excel, and a free copy of "R" [a bare-bones sketch of what that kind of check might look like follows below].
  2. We need some recognition that our goal is considered a research topic – namely, "Are they telling the truth in the published paper or are they presenting the data in a way that misleads the reader [like so many have done before]? Are they withholding data to make their drug look more efficacious or safer than it really is [like so many have done before]?" Putting the "re" in research!
Those are topics aimed at the good of mankind too! I don’t care if the pharmaceutical companies want to save face with the way this is presented to the world, as a generous humanitarian act, so long as the process allows for the kind of Data Transparency we need to prevent the kind of shameful criminal behavior J&J engaged in with Risperdal®. Dr. Krumholz needs to prove to us that his program knows what I’m talking about here…
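And to make item 1 above concrete – a first-pass "check their work" doesn’t require institutional software. Recomputing a headline response rate from patient-level data takes a few lines in any free tool. The sketch below is purely hypothetical: the file name, the column names, and the responder convention [a 50% drop in symptom score] are stand-ins for whatever a real data release would actually contain:

    # A bare-bones independent re-analysis: recompute a trial's headline
    # response rate from patient-level data. Everything here is a
    # hypothetical stand-in [file name, columns, responder cutoff]
    # for whatever a real data release would contain.
    import csv

    responders, total = 0, 0
    with open("trial_outcomes.csv", newline="") as f:
        for row in csv.DictReader(f):
            if row["arm"] != "drug":   # look at the active arm only
                continue
            total += 1
            baseline = float(row["baseline_score"])
            endpoint = float(row["endpoint_score"])
            # a common convention: a 50% or greater drop from baseline = responder
            if baseline > 0 and (baseline - endpoint) / baseline >= 0.5:
                responders += 1

    if total:
        print(f"responders: {responders}/{total} = {responders / total:.1%}")

The computation is the easy part; getting to the raw data at all is the hard part – which is the whole point.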
Mickey @ 1:11 PM

finally, a right pick…

Posted on Sunday 2 February 2014

Mickey @ 10:32 PM

tordia rising…

Posted on Sunday 2 February 2014

Probably the reason that you don’t know about TORDIA [Treatment Of Resistant Depression In Adolescents] is that it was a doomed ship – cursed by fatal design and haunted by a storm of fate. By now you know that the new century of psychopharmacology opened with an all-out assault on a new mental illness called treatment resistant depression, spearheaded by a gaggle of initialed studies by Steve Hyman’s NIMH. The afflicted were people diagnosed with Major Depressive Disorder whose symptoms stubbornly resisted the should-cure-everyone SSRIs. There was an almost universal conviction that there was some clever manipulation of the various drugs, some algorithm, that would extend their [then inflated] efficacy. It’s the sentiment that produced TMAP, STAR*D, IMPACT, COMED, and lives on in EMBARC – and in the child and adolescent realm, it produced TADS and TORDIA.

TORDIA was supported with NIMH grants MH61835, MH61856, MH61864, MH61869, MH61958, MH62014, and MH66371 – one for each site. The study itself ran from 2001 through 2006. The reason I said it was cursed by fatal design is that there was no control group and there was a plethora of heterogeneous experimental groups. The reason I said it was haunted is that Paxil was taken off the market in the UK and the black box warning was issued in the US during the study, so they had to reshuffle the protocols to adapt to new guidelines. Like STAR*D, TORDIA produced a lot of publications with an army of authors. The two main efficacy reports are here:
Switching to Another SSRI or to Venlafaxine With or Without Cognitive Behavioral Therapy for Adolescents With SSRI-Resistant Depression: The TORDIA Randomized Controlled Trial
by David Brent, MD; Graham Emslie, MD; Greg Clarke, PhD; Karen Dineen Wagner, MD, PhD; Joan Rosenbaum Asarnow, PhD; Marty Keller, MD; Benedetto Vitiello, MD; Louise Ritz, MBA; Satish Iyengar, PhD; Kaleab Abebe, MA; Boris Birmaher, MD; Neal Ryan, MD; Betsy Kennard, PsyD; Carroll Hughes, PhD; Lynn DeBar, PhD; James McCracken, MD; Michael Strober, PhD; Robert Suddath, MD; Anthony Spirito, PhD; Henrietta Leonard, MD; Nadine Melhem, PhD; Giovanna Porta, MS; Matthew Onorato, LCSW; and Jamie Zelazny, MPH, RN
JAMA. 2008 299[8]:901-913.

Context: Only about 60% of adolescents with depression will show an adequate clinical response to an initial treatment trial with a selective serotonin reuptake inhibitor [SSRI]. There are no data to guide clinicians on subsequent treatment strategy.
Objective: To evaluate the relative efficacy of 4 treatment strategies in adolescents who continued to have depression despite adequate initial treatment with an SSRI.
Design, Setting, and Participants: Randomized controlled trial of a clinical sample of 334 patients aged 12 to 18 years with a primary diagnosis of major depressive disorder that had not responded to a 2-month initial treatment with an SSRI, conducted at 6 US academic and community clinics from 2000-2006.
Interventions: Twelve weeks of: [1] switch to a second, different SSRI [paroxetine, citalopram, or fluoxetine, 20-40 mg]; [2] switch to a different SSRI plus cognitive behavioral therapy; [3] switch to venlafaxine [150-225 mg]; or [4] switch to venlafaxine plus cognitive behavioral therapy.
Main Outcome Measures: Clinical Global Impressions-Improvement score of 2 or less [much or very much improved] and a decrease of at least 50% in the Children’s Depression Rating Scale-Revised [CDRS-R]; and change in CDRS-R over time.
Results: Cognitive behavioral therapy plus a switch to either medication regimen showed a higher response rate [54.8%; 95% confidence interval [CI], 47%-62%] than a medication switch alone [40.5%; 95% CI, 33%-48%; P = .009], but there was no difference in response rate between venlafaxine and a second SSRI [48.2%; 95% CI, 41%-56% vs 47.0%; 95% CI, 40%-55%; P = .83]. There were no differential treatment effects on change in the CDRS-R, self-rated depressive symptoms, suicidal ideation, or on the rate of harm-related or any other adverse events. There was a greater increase in diastolic blood pressure and pulse and more frequent occurrence of skin problems during venlafaxine than SSRI treatment.
Conclusions: For adolescents with depression not responding to an adequate initial treatment with an SSRI, the combination of cognitive behavioral therapy and a switch to another antidepressant resulted in a higher rate of clinical response than did a medication switch alone. However, a switch to another SSRI was just as efficacious as a switch to venlafaxine and resulted in fewer adverse effects.
Note: I said in the last post [off the science grid…] that I didn’t know where Dr. Wagner got "…60% of youngsters will respond favorably to their first antidepressant medication – generally a selective serotonin reuptake inhibitor (SSRI)." Well there it is under Context. This article is available full text online and that line is marked with references 12 through 16 which are linked. Take a look and see if you can get 60% out of them. I can’t.

What they did was gather a cohort of 334 subjects who had been SSRI non-responders and switch their medications to another drug. This was a time when the differences among the SSRIs, and the difference between SSRIs and SNRIs, were thought to matter – at least more than now. So they switched half to a different SSRI and half to an SNRI. Likewise, this was a time when CBT [Cognitive Behavior Therapy] was treated as something like a drug, so they further partitioned the two switched groups, half getting CBT and half not. At 12 weeks, they had a 47/48% response to changing drugs, and it didn’t matter to which drug. Adding CBT pushed it up to 55%. What we don’t know is what would’ve happened had they been continued on their original drug, or given a placebo, or treated with CBT alone. So there’s no denominator in any equation. As in STAR*D, the implication was that subsequent treatments were value added at a lower efficacy rate – but without controls, those conclusions remain, at best, speculations rather than true outcomes.
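As a sanity check on those numbers, the published confidence intervals can be reproduced to within rounding from the response rates alone. Here’s a minimal sketch – the per-arm size of 167 is my assumption [334 split roughly in half each way; the paper’s exact arm sizes differ slightly], and the rates come from the JAMA abstract above:

    # Reproduce the TORDIA 95% confidence intervals from the published
    # response rates, using the normal approximation for a proportion.
    # The per-arm n of 167 is an assumption [334 split in half each way];
    # the rates and published CIs are from the JAMA abstract quoted above.
    from math import sqrt

    def wald_ci(p, n, z=1.96):
        """Normal-approximation 95% confidence interval for a proportion."""
        se = sqrt(p * (1 - p) / n)
        return p - z * se, p + z * se

    arms = {
        "switch + CBT":    (0.548, 167),  # published CI: 47%-62%
        "switch alone":    (0.405, 167),  # published CI: 33%-48%
        "to venlafaxine":  (0.482, 167),  # published CI: 41%-56%
        "to another SSRI": (0.470, 167),  # published CI: 40%-55%
    }

    for name, (p, n) in arms.items():
        lo, hi = wald_ci(p, n)
        print(f"{name:16} {p:.1%}  95% CI = [{lo:.0%}, {hi:.0%}]")

Running it lands on the published intervals – which says nothing about the missing control groups, it just confirms the arithmetic is what it appears to be.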
Treatment of Resistant Depression in Adolescents [TORDIA]: Week 24 Outcomes
by Graham J. Emslie, M.D.; Taryn Mayes, M.S.; Giovanna Porta, M.S.; Benedetto Vitiello, M.D.; Greg Clarke, Ph.D.; Karen Dineen Wagner, M.D., Ph.D.; Joan Rosenbaum Asarnow, Ph.D.; Anthony Spirito, Ph.D.; Boris Birmaher, M.D.; Neal Ryan, M.D.; Betsy Kennard, Psy.D.; Lynn DeBar, Ph.D.; James McCracken, M.D.; Michael Strober, Ph.D.; Matthew Onorato, L.C.S.W.; Jamie Zelazny, M.P.H., R.N.; Marty Keller, M.D.; Satish Iyengar, Ph.D.; David Brent, M.D.
American Journal of Psychiatry 2010 167:782-791.

Objective: The purpose of this study was to report on the outcome of participants in the Treatment of Resistant Depression in Adolescents [TORDIA] trial after 24 weeks of treatment, including remission and relapse rates and predictors of treatment outcome.
Method: Adolescents [ages 12—18 years] with selective serotonin reuptake inhibitor [SSRI]-resistant depression were randomly assigned to either a medication switch alone [alternate SSRI or venlafaxine] or a medication switch plus cognitive-behavioral therapy [CBT]. At week 12, responders could continue in their assigned treatment arm and nonresponders received open treatment [medication and/or CBT] for 12 more weeks [24 weeks total]. The primary outcomes were remission and relapse, defined by the Adolescent Longitudinal Interval Follow-Up Evaluation as rated by an independent evaluator.
Results: Of 334 adolescents enrolled in the study, 38.9% achieved remission by 24 weeks, and initial treatment assignment did not affect rates of remission. Likelihood of remission was much higher [61.6% versus 18.3%] and time to remission was much faster among those who had already demonstrated clinical response by week 12. Remission was also higher among those with lower baseline depression, hopelessness, and self-reported anxiety. At week 12, lower depression, hopelessness, anxiety, suicidal ideation, family conflict, and absence of comorbid dysthymia, anxiety, and drug/alcohol use and impairment also predicted remission. Of those who responded by week 12, 19.6% had a relapse of depression by week 24.
Conclusions: Continued treatment for depression among treatment-resistant adolescents results in remission in approximately one-third of patients, similar to adults. Eventual remission is evident within the first 6 weeks in many, suggesting that earlier intervention among nonresponders could be important.
After 12 weeks, there was yet another division among the subjects. Responders could continue their current treatment. Non-responders could receive "open treatment" – something like dealer’s choice by unrestrained clinicians. 20% of the 12-week responders relapsed, but as a group they kept improving. And "open treatment" didn’t do anything for the 12-week non-responders. In kids, the third tier didn’t add anything.
Along the way, Paxil was withdrawn, Effexor was cautioned, a black box warning was added to all of the drugs in play, and around publication time, Senator Grassley lowered the boom on a couple of the authors [including Wagner]. TORDIA slipped under the waves in spite of a publishing flurry [listed on clinicaltrials.gov at the end]. The TORDIA revival mentioned in the last post [off the science grid…] actually started two years ago with a paper led by Dr. Wagner [timeline of woes included]:
Out of the Black Box: Treatment of Resistant Depression in Adolescents and the Antidepressant Controversy
by Wagner KD, Asarnow JR, Vitiello B, Clarke G, Keller M, Emslie GJ, Ryan N, Porta G, Iyengar S, Ritz L, Zelazny J, Onorato M, and Brent D.
Journal of Child and Adolescent Psychopharmacology. 2012 22[1]:5-10.

OBJECTIVE: The purpose of this article is to describe the effects of the pediatric antidepressant controversy on the Treatment of Serotonin-Selective Reuptake Inhibitor [SSRI] Resistant Depression in Adolescents [TORDIA] trial.
METHOD: Adolescents, ages 12-18 years, with SSRI resistant depression were randomized to one of four treatments for a 12 week trial: Switch to different SSRI, switch to an alternate antidepressant [venlafaxine], switch to an alternate SSRI plus cognitive behavior therapy [CBT], or switch to venlafaxine plus CBT.
RESULTS: The health advisories and "black box" warnings regarding suicidality and antidepressants in adolescents occurred during the course of the TORDIA trial. Revisions to the protocol, multiple-consent form changes, and re-consenting of patients were necessary. Recruitment of participants was adversely affected.
CONCLUSION: Despite a cascade of unforeseen events that delayed the completion of the study, the TORDIA trial resulted in clinically important information about treatment-resistant depression in adolescents.

Rather than be direct, I’ll make my editorial comments about this study with an analogy that spontaneously occurred to me while writing my last post:

Half a life ago, we were on a long drive-about in Scandinavia, loaded with the guidebooks Americans carry everywhere, and came across a reference to a site we’d never heard of – the VASA, a wooden tall ship built in the 17th century that had been salvaged from the briny deep. She was indeed a sight to remember. At the time we saw her, she was in a temporary building, constantly bathed with a saline spray to preserve the wood. She was largely intact – a trip into that part of history like no other. Being a guidebook scanner rather than reader, I was well into our visit to the ship before I heard the whole story. Minutes after she was launched on her maiden voyage in 1628, the first real gust of wind capsized her and she sank in the briny river harbor [thus the preservation]. The Vasa was reclaimed in 1961 and now has a very popular museum of her own in Stockholm.

It remains a wonderful irony that by far the best relic we have of the age of tall ships is the Vasa, a ship whose design was so flawed that she sailed less than a mile before sinking in a light wind. She was revived after centuries and stands as a peculiar testimony to bygone days…
Mickey @ 3:19 PM