more bully pulpit…

Posted on Wednesday 11 January 2017

When our group assembled to do our RIAT analysis of Paxil Study 329, we already had access to a wealth of raw data from that clinical trial thanks to the hard work of many other people who came before us. So we had the a priori Protocol, the Statistical Analysis Plan, the CSR [Clinical Study Report], and the IPD [Individual Participant Data] – all available in the Public Domain as the result of various Legal Settlements and Court Orders. The only thing we didn’t have was the CRFs [Case Report Forms] – the actual raw forms the raters used to record their observations during the study. But we felt that we needed them too. We had good reason to question the system originally used to code the Adverse Events, and felt it was important to redo that part from scratch using a more modern and widely validated system.

 

About that time, the European Medicines Agency [EMA] had announced that it was going to release all of its regulatory data. AllTrials was pressing for "all trials registered, all trials reported." I was researching on what authority the data was being kept proprietary in the first place, and finding nothing much except convention and inertia. What was being called Data Transparency was in the air, and it was an exciting prospect.

And then the pharmaceutical companies seemed to do a turnabout. GSK had just been hit with a $3 billion fine, in part over Study 329, and they were one of the first to sign on to AllTrials. But as things developed, what they offered was something different from what a lot of us really wanted, at least what I wanted. By that time, I wasn’t a rookie any more and I’d vetted a number of industry-funded, ghost-written psychopharmacology drug trials turned into journal articles. I can’t recall a one of them that was totally straight. So I wanted to see what the drug company saw – the a priori Protocol and Statistical Analysis Plan, the IPD, and the CRFs – the raw data. And the reason wasn’t to do any new research. It was to check their reported work, to do it right by the book, to stop the cheating.

And so with much fanfare, what the drug companies rolled out was something else – Data Sharing. They pretended that what we wanted was access to their raw data so we could do further new research – and that they were being real mensches to let us see it. They set up independent boards to evaluate proposals for projects. If we passed muster, we could have access via a remote desktop – meaning we couldn’t download the data. We could only see it online. All we could download were our results, if approved. In this scenario, they are generously sharing the data with us, avoiding duplication and wastage or some such, and the remote access portal protects the subjects’ privacy. They maintained control and ownership. What we wanted was Data Transparency to keep them honest, to stop them from publishing these fictional photo-shopped articles, to stop the cheating.

So our RIAT group submitted a request to their panel, and when they asked for a proposal, we didn’t make one up. We played it straight and told them why. After some back and forth, we submitted the Protocol from the original Study 329, and to their credit, they granted our request. The remote access system actually worked, but working inside of it was a complete nightmare [we called it "the periscope"]. The CRFs came to around 50,000 pages, and we could only look at them one page at a time! But that’s another story and it’s available in detail at https://study329.org/. The point for this post is that the call for Data Transparency got turned into something very different – Data Sharing. That’s called "SPIN." Instead of being on the hot-seat for having published so many distorted clinical trial reports – carefully crafted by professional ghost-writers – they portrayed themselves as heroes, generously allowing outsiders to use their data for independent research. Sleight of hand extraordinaire!

So what does this have to do with the New England Journal of Medicine, and editor Jeffrey Drazen, and Data Transparency versus Data Sharing, and a bully pulpit? A lot – some mentioned in this series from April 2016.
As editor of the NEJM, a prominent figure in the International Committee of Medical Journal Editors, and a member of the Committee on Strategies for Responsible Sharing of Clinical Trial Data, he occupies a powerful position in shaping policy. He never mentions the corruption that has so many of us up in arms [the reason we need such a policy], and positions himself consistently on the side of protecting the sponsors’ secrecy – sticking to the Data Sharing idea. His opinion of people who are trying to bring the corruption into the light of day is obvious:
by Dan L. Longo and Jeffrey M. Drazen
New England Journal of Medicine. 2016; 374:276-277.

The aerial view of the concept of data sharing is beautiful. What could be better than having high-quality information carefully reexamined for the possibility that new nuggets of useful data are lying there, previously unseen? The potential for leveraging existing results for even more benefit pays appropriate increased tribute to the patients who put themselves at risk to generate the data. The moral imperative to honor their collective sacrifice is the trump card that takes this trick…

A second concern held by some is that a new class of research person will emerge — people who had nothing to do with the design and execution of the study but use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited. There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “research parasites”…
His predecessors Arnold Relman [The new medical-industrial complex], Jerome Kassirer, and Marcia Angell [Is Academic Medicine for Sale?] led a New England Journal of Medicine that championed the integrity of medical science. Jeffrey Drazen uses the bully pulpit of that same position to thwart attempts to restore that integrity. He’s either blind to, complicit with, or part of the medical-industrial complex Arnold Relman warned us about. And he fills his journal with articles about industry and clinical trials that ignore the rampant corruption in published clinical trial reports [see the bully pulpit… for a long list of 2016’s examples]…
Mickey @ 8:00 AM

the bully pulpit…

Posted on Tuesday 10 January 2017

A Bully Pulpit is a conspicuous position that provides an opportunity to speak out and be listened to. This term was coined by President Theodore Roosevelt, who referred to the White House as a "bully pulpit", by which he meant a terrific platform from which to advocate an agenda.

Flashback
In 1980, New England Journal of Medicine editor Arnold Relman saw something ominous coming up ahead, and wrote an editorial [The new medical-industrial complex] warning that there was a threat to the integrity of academic medicine from a growing medical industry. And by 1984, the NEJM instituted a policy against publishing any editorials or review articles by authors with industry Conflicts of Interest. But by 1999 things had changed dramatically, a story I summarized in a narrative…. At the time, the new editor, Jeffrey Drazen, was embroiled in a controversy over his own ties to industry [see New England Journal of Medicine Names Third Editor in a Year, FDA censures NEJM editor, Medical Journal Editor Vows to Cut Drug Firm Ties].

Flash Forward
In the summer of 2015, Drazen published an editorial suggesting that the NEJM rescind Relman’s policy and allow experts with COI to write reviews and editorials, introducing a three-part series by one of his staff reporters explaining why this was really a good idea:
The suggestion was met with a swift flurry of negative responses from some of medicine’s solidest citizens:
And I couldn’t seem to keep my mouth shut about it either [a contrarian frame of mind… , wtf?…, wtf? for real…, a narrative…, not so proud…, unserious arguments seriously…, the real editors speak out…, got any thoughts?…, not backward…], mostly amplifying on what the others said. I’ll have to add that it felt almost personal. The New England Journal of Medicine was my own very first medical subscription ever, and I read it cover-to-cover for years. It was part of my coming of age as a physician, articles embedded in my own scientific and ethical infrastructure. And I felt that Jeffrey Drazen was betraying that history. Who was he to do that? Over the year and a half since that series came out, he’s been on my radar. But the New England Journal of Medicine isn’t one of the journals I follow regularly, so it only came up when there was a loud blip, like his particularly obnoxious editorial, Data Sharing – the one where he warned us about "research parasites" [see notes from a reluctant parasite…].

Then, someone sent me a link to this month’s The Large Pharmaceutical Company Perspective about several heroic PHARMA adventures. I noticed it was from a series called The Changing Face of Clinical Trials, so I ran down the rest of the series and read them all. And then I found some other NEJM Clinical Trial offerings in 2016.

    The Changing Face of Clinical Trials
  1. June 2, 2016 | J. Woodcock and Others
    With this issue, we launch a series of articles that deal with contemporary challenges that affect clinical trialists today. Articles will define a specific issue of interest and illustrate it with examples from actual practice, as well as bring additional history and color to the topic.
  2. June 2, 2016 | L.D. Fiore and P.W. Lavori
    Investigators use adaptive trial designs to alter basic features of an ongoing trial. This approach obtains the most information possible in an unbiased way while putting the fewest patients at risk. In this review, the authors discuss selected issues in adaptive design.
  3. August 4, 2016 | I. Ford and J. Norrie
    In pragmatic trials, participants are broadly representative of people who will receive a treatment or diagnostic strategy, and the outcomes affect day-to-day care. The authors review the unique features of pragmatic trials through a wide-ranging series of exemplar trials.
  4. September 1, 2016 | S.J. Pocock and G.W. Stone
    When the primary outcome of a clinical trial fails to reach its prespecified end point, can any clinically meaningful information still be derived from it? This review article addresses that question.
  5. September 8, 2016 | S.J. Pocock and G.W. Stone
    When a clinical trial reaches its primary outcome, several issues must be considered before a clinical message is drawn. These issues are reviewed in this article.
  6. October 6, 2016 | D.L. DeMets and S.S. Ellenberg
    Randomized clinical trials require a mechanism to safeguard the enrolled patients from harm that could result from participation. This article reviews the role of data monitoring committees in the performance of randomized clinical trials.
  7. November 3, 2016 | M.A. Pfeffer and J.J.V. McMurray
    Ethical issues can arise in the design and conduct of clinical trials. Using the trials that set the stage for our current treatment of hypertension, the authors show how the changing treatment landscape raised ethical problems as these trials were undertaken.
  8. January 5, 2017 | M. Rosenblatt
    The former chief medical officer of a large pharmaceutical company addresses the issue of complexity and how it affects the performance of clinical trials.
    The Final Rule
  • September 16, 2016 | D.A. Zarin and Others
    The final rule for reporting clinical trial results has now been issued by the Department of Health and Human Services. It aims to increase accountability in the clinical research enterprise, making key information available to researchers, funders, and the public.
    History of Clinical Trials
  1. June 2, 2016 | L.E. Bothwell and Others
  2. July 14, 2016 | A. Rankin and J. Rivest
  3. August 11, 2016 | L.E. Bothwell and S.H. Podolsky
  4. Clinical Trials, Healthy Controls, and the IRB
    September 15, 2016 | L. Stark and J.A. Greene
When I got down to the next ones about Data Sharing, I went back even further because I was waking up to something I had kind of forgotten – a bit of relevant sleight of hand that should have been on the front burner, but somehow got lost in the shuffle. What I realized was that the series I started this post with, Revisiting the Commercial–Academic Interface, didn’t just come out of the blue. It was part of a story that was larger – one that I’ll remind us of in the next post. But first here are the articles on Data Sharing:
    Data Sharing
  1. Collaborative Clinical Trials
    March 3, 2011 | A.J. Moss, C.W. Francis, and D. Ryan
  2. Pragmatic Trials — Guides to Better Patient Care?
    May 5, 2011 | J.H. Ware and M.B. Hamel
  3. October 4, 2012 | R.J. Little and Others
  4. October 24, 2013 | M.M. Mello and Others
  5. November 27, 2014 | B.L. Strom and Others
  6. December 25, 2014 | S. Bonini and Others
  7. January 8, 2015 | D.A. Zarin, T. Tse, and J. Sheehan
  8. January 15, 2015 | J.M. Drazen
  9. Adaptive Phase II Trial Design
    July 7, 2015 | D. Harrington and G. Parmigiani
  10. August 4, 2015 | The Academic Research Organization Consortium for Continuing Evaluation of Scientific Studies — Cardiovascular (ACCESS CV)
  11. August 4, 2015 | The International Consortium of Investigators for Fairness in Trial Data Sharing
  12. August 4, 2015 | H.M. Krumholz and J. Waldstreicher
  13. January 21, 2016 | D.L. Longo and J.M. Drazen
  14. August 4, 2016 | E. Warren
  15. September 22, 2016 | F. Rockhold, P. Nisen, and A. Freeman
  16. September 22, 2016 | B. Lo and D.L. DeMets
  17. September 22, 2016 | R.L. Grossman and Others
  18. October 27, 2016 | B.L. Strom and Others
And so on to the reminder in the next post[s] – how Data Transparency got turned into Data Sharing – and why I called this the bully pulpit…
Mickey @ 5:57 PM

Let’s go take a look…

Posted on Wednesday 4 January 2017


by Matthew J. Press, M.D., Ryan Howe, Ph.D., Michael Schoenbaum, Ph.D., Sean Cavanaugh, M.P.H., Ann Marshall, M.S.P.H., Lindsey Baldwin, M.S., and Patrick H. Conway, M.D.
New England Journal of Medicine. December 14, 2016
DOI: 10.1056/NEJMp1614134

For example, under CoCM, if a 72-year-old man with hypertension and diabetes presents to his primary care clinician feeling sad and anxious, the primary care team [primary care clinician and behavioral health care manager] would conduct an initial clinical assessment using validated rating scales. If the patient has a behavioral health condition [e.g., depression] and is amenable to treatment, the primary care team and the patient would jointly develop an initial care plan, which might include pharmacotherapy, psychotherapy, or other indicated treatments. The care manager would follow up with the patient proactively and systematically [using a registry] to assess treatment adherence, tolerability, and clinical response [again using validated rating scales] and might provide brief evidence-based psychosocial interventions such as behavioral activation [which focuses on helping people with mood disorders to engage in beneficial activities and behavior] or motivational interviewing. In addition, the primary care team would regularly review the patient’s care plan and status with the psychiatric consultant and would maintain or adjust treatment, including referral to behavioral health specialty care as needed.
This paragraph is from an article about how the Centers for Medicare and Medicaid Services [CMS] intends to pay the psychiatrists involved in Collaborative [AKA Integrated] Care [but that isn’t why it’s here]. It has gotten to be something of a hobby of mine to scan these articles as they come around, not that I intend to be involved with the "Psychiatric Collaborative Care Model [CoCM], an approach to behavioral health integration [BHI] that has been shown to be effective in several dozen randomized, controlled trials." What intrigues me is the language used to write them – a bizarre kind of new·speak. I highlighted what I’m talking about in red in the quoted paragraph. Here’s an example:
"an initial clinical assessment using validated rating scales. If the patient has a behavioral health condition [e.g., depression] and …"
First off, notice that the rating scales determine whether or not the patient has something wrong. So while the developers of the various rating scales have generally said that they’re not for making a diagnosis, it looks like that’s how they’re being used here. Whoops! Technically, it’s not a diagnosis. It’s a behavioral health condition. That is certainly some kind of new·speak, but it’s not what I’m focused on right this minute. I’m talking about the phrase validated rating scales. We’re accustomed to hearing about validated rating scales when we talk about Clinical Trials, but not to running into them in case narratives. Another example:
"[again using validated rating scales] and might provide brief evidence-based psychosocial interventions such as…"
A lot of new·speak in this one, but it’s the evidence-based psychosocial interventions that I’m referring to [I’ll get to brief in a minute]. I haven’t given this an enormous piece of real estate in my head, but so far, this is my tentative lexicon of new·speak categories:
  1. adjectives saying that something is evidence-based [meaning positive clinical trials]: validated rating scales, evidence-based psychosocial interventions, evidence based psychotherapy, indicated treatments, guideline approved this and that, etc. The gist of things is that only group-certified interventions are valid…
  2. traditional language is de-psychiatrized and de-medicalized: behavioral health care manager, behavioral health condition, rating scales, behavioral health specialty, behavioral activation.
  3. strict control, limiting choices and duration of anything: [now we get to brief] – particularly any face to face contact with psychiatrists.
  4. adverbs implying conscientiousness and industry: proactively and systematically [using a registry]
That’s just off the cuff. With thought, the themes and motives of new·speak will undoubtedly become clearer. Just a few other comments. It’s an odd way to talk no matter what the reason. It sounds a bit like the overinclusive stilted language sometimes heard in chronic Schizophrenia. We get the point that they want everything to be evidence-based [RCT certified], so why append it to every noun? We also get the point that they want psychiatry and psychiatrists out of the picture except to review and to sign off on the cases.

Just a couple of observations. In every example, the cases are universally lite – unlikely to reach anyone’s standards for mental illness proper. I didn’t really know what behavioral activation and motivational interviewing were. I watched a few YouTube videos and looked at several trials of the latter [and there have been many] with widely varying results. They’re behavior modification interview techniques. But the main thing I took away from thinking about this hadn’t occurred to me before. In a way, I’ve already done this myself for over thirty years. I did it in the clinic where I’ve been working for the last eight. And when I was on the faculty, I did it somewhere in some affiliated facility on most days. The med students, or residents, or staff saw the patients and presented them. Sometimes I said "fine." Sometimes we talked about the case. And sometimes I saw the patient myself. I expect most psychiatrists have done this kind of thing at some point in their career for years. But I’m absolutely sure I wouldn’t do this one.

The secret to being able to supervise other clinicians in a situation like the one described here isn’t some encyclopedic knowledge of medications, or diagnoses. It’s in getting to know how to read the clinician you’re supervising. Early on, I expect I saw most cases a given resident presented. But as I got to know them, I learned when I could trust what I was hearing, versus when something wasn’t right. One almost never knows what’s out of whack from such a presentation, just that you hear yourself saying, "Let’s go take a look." It’s a skill that comes with experience and a lot of it. Of course, the best trainees say, "I don’t know what’s going on with this case. Would you take a look?" But others don’t know, and so [1] you "take a look" and then [2] try to help them figure out why they were off track, what they missed.

So I guess I know the reason I’m absolutely sure I wouldn’t do this one. That whole system being described up there is designed to keep me out of the room the patient’s in. They don’t need me to help them with behavioral activation and motivational interviewing. They certainly know how to do that better than I do. And most of the time, the Primary Care Docs don’t need me to pick an antidepressant. One learns that kind of thing quickly. What they need is someone who has spent a lifetime around suicidal and psychotic patients; who has actually seen most of those unusual medical cases that masquerade as mental illness that most doctors only heard about in medical school; someone who has missed a few diagnoses along the way and knows the dire consequences, and who is acutely aware when something smells funny. And in this proposed Integrative system, I’m not the one who gets to say, "I need to see this person." Usually, I’m hearing about the case from a care coordinator who may be giving me second-hand information. And even with stable outpatients who aren’t getting better, I wouldn’t have much of a clue how to get the case on track without either seeing the patient, or making sure they’re being seen by someone who really knows the ropes and doesn’t speak new·speak.


Note: When I started writing this post, it wasn’t at all clear to me where I was headed. It’s been that way every time I run across a Collaborative Care article or reference. My reaction has been visceral. It wasn’t until somewhere around that lightbulb up there that I finally could put some words to my reaction. Reading the various versions of Collaborative Care, I always feel the same. But now I can at least make sense of why I respond so negatively. In the system as proposed, I can’t do my job – the actual job assigned to me. I can’t be in charge of saying "Let’s go take a look." And if I can’t do that, there’s really no reason for me to be there in the first place.

Another Note: What’s funny is that a little before the lightbulb, I thought I was finally getting a handle on the why of my reaction. But it was something entirely different from how the post ended, and it’s worth saying in its own right, but it wasn’t "It." So now I guess I’ll have to write yet another post to explain that other reason I react so negatively to these Collaborative Care articles…
Mickey @ 11:25 PM

big thing, small package

Posted on Monday 2 January 2017

Sometimes big things come in small packages. This is from a research letter published last month in JAMA Internal Medicine. The authors’ data come from the Agency for Healthcare Research and Quality’s Medical Expenditure Panel Survey [MEPS HC-160A: 2013 prescribed medicines]:
hat tip to James O… 
by Moore TJ and Mattison DR
JAMA Internal Medicine. Dec 12, 2016. [Epub ahead of print]
That report they summarized is a bear, but they’ve pared it down into two tables that are manageable. First, how widespread is the use of prescribed psychiatric medication [expressed as % of the population]?
Next, which drugs are being used?
If you work in a public clinic like I do, none of that will come as any great shock. The only thing that surprised me was that Zoloft® and Ambien® are so high up on the list. I prescribe neither so it was just a surprise. Reasons? I had no success with Zoloft® at all, and later, when I looked at the FDA approval documents, they looked beyond shaky to me [zoloft: the approval I…, zoloft: the approval II…, zoloft: the approval III…, zoloft: beyond the approval I…, zoloft: beyond the approval II…, zoloft: the epilogue…]. Ambien®? When the second patient showed up with bruises from falls while sleepwalking on Ambien®, it came off of my formulary for good. But otherwise, no big surprises. However, the authors went further and took a reasonable stab at quantifying something that I’ve thought about [and struggled with] ever since I started at the clinic about 8 or 9 years ago – long term use of these medications. Here are a few quotes from their letter:
"Long-term use was defined as 3 or more prescriptions filled in 2013 or a prescription started in 2011 or earlier…"

"Most psychiatric drug use reported by adults was long term, with 84.3% [95% CI, 82.9%-85.7%] having filled 3 or more prescriptions in 2013 or indicating that they had started taking the drug during 2011 or earlier. Differences in long-term use among the 3 drug classes were small. The long-term users filled a mean [SE] of 9.8 [0.19] prescriptions for psychiatric drugs during 2013…"

"These data show 1 of 6 US adults reported taking psychiatric drugs at least once during 2013, but with 2- to 3-fold differences by race/ethnicity, age, and sex. Moreover, use may have been underestimated because prescriptions were self-reported, and our estimates of long-term use were limited to a single survey year…"

"Among adults reporting taking psychiatric drugs, more than 8 of 10 reported long-term use…"
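Their long-term-use rule is concrete enough to state as code. A minimal sketch, assuming hypothetical field names [this is my paraphrase of their definition, not anything from the paper’s actual analysis]:

```python
def is_long_term(fills_in_2013: int, start_year: int) -> bool:
    """Moore & Mattison's definition of long-term use: 3 or more
    prescriptions filled in 2013, OR a prescription started in
    2011 or earlier."""
    return fills_in_2013 >= 3 or start_year <= 2011

# A one-off prescription begun in 2013 doesn't qualify...
print(is_long_term(fills_in_2013=1, start_year=2013))   # False
# ...but a single 2013 refill of a drug started years ago does.
print(is_long_term(fills_in_2013=1, start_year=2010))   # True
```

Note that the OR makes the rule generous in only one direction – a patient refilling once a year since 2010 counts as long term, which is exactly the population the letter is worried about.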
Having taken something of a 25 year long sabbatical from mainstream psychiatry after leaving academia for a private psychotherapy practice, I started volunteering in a local charity clinic after I retired. I was unprepared for the psychiatry I encountered there. I expected that I’d have to bone up on my psychopharmacology [and I did], but I sure didn’t care for what I found. It seemed like over·medication, poly·pharmacy, inappropriate drug choices, continual use of time·limited medicines, all were standard operating procedures. So I started reading the clinical trials and learned about ClinicalTrials.gov, Drugs@FDA, PubMed, and the push·back – the blogs and literature that were developing around these topics [and I started this one of my own].

This report by Moore and Mattison well documents what I found returning to general psychiatry. I still find the figures staggering, but the one that makes the least sense is that these medications are being taken long term. Depression, even in its most malignant form, is time limited for the most part. There’s evidence that in some depressions, maintenance medication can be a relapse preventive, but hardly in 80% of the cases. All of this has happened in a period where psychiatry has been telling itself and the rest of the world that it’s medicalizing, but there’s nothing about those figures that’s medical. It’s contaminated by profiteering, plain and simple, and at the expense of patients who’ve come for help.

I hear from patients who are caught up in a carousel of medications, trying unsuccessfully to find something that helps, getting nowhere or even seeing their symptoms worsen:
    "… insanity is doing the same thing over and over again expecting different results"
There was a time that the best advice would be to forget what they’ve been told to date and start over, tapering any medication that isn’t clearly helping. Check out the hardware [a medical condition or medication that might be contributing]; likewise with the firmware [a major psychiatric syndrome like Melancholia or Manic Depressive Illness]; and then the software [find a reputable therapist to help them explore their lives, past and present, looking for the tangles]. That’s the same advice they would have gotten forty years ago. And it’s still good advice.

This coin has another side. While the figures quoted in this article telegraph the clear message that these medications have been over·promoted and over·prescribed, they also raise another potential concern. When we became disappointed with mental hospitals, we shut them down rather than right-size them. When we were disillusioned with antipsychotic medication and community care, we did the same thing [and filled our jails]. Similarly, when the various psychotherapies didn’t live up to their early promises, they were vilified. And while we currently remain in a situation where the medications on that list are over·prescribed, that’s not to say that there aren’t a significant number of patients who are genuinely benefiting from taking them. Something about the:
    "… baby with the bathwater"
Mickey @ 8:00 AM

so long 2016…

Posted on Saturday 31 December 2016

Mickey @ 4:00 AM

whodunit? theydunit…

Posted on Thursday 29 December 2016

    The active voice is usually more direct and vigorous than the passive:
         I shall always remember my first visit to Boston.
    This is much better than
         My first visit to Boston will always be remembered by me.
    The latter sentence is less direct, less bold, and less concise. If the writer tries to make it more concise by omitting "by me,"
         My first visit to Boston will always be remembered,
    it becomes indefinite: is it the writer, or some person undisclosed, or the world at large, that will always remember this visit?

I come from a generation schooled in the ways of Strunk and White, though the only suggestion I really remember as a take-away is Don’t use the passive voice [in case I forget, the grammar-checker in Microsoft Word is there to remind me]. As a kid, I could see that the active voice sounded better. But later, I saw that the passive voice was often used to obscure agency – a way of avoiding saying directly whodunit?. So writing "The primary outcome variables were changed in the published version of Paxil Study 329" just isn’t the same as writing "Martin Keller and his coauthors changed the primary outcome variables in the published version of Paxil Study 329."

One encounters patients who appear to live their lives in the domain of the passive voice. Things just happen in the world. Things happen to them [usually bad or disappointing things], but there’s no agent causing them. And invariably, they leave out their own participation in the things that happen. This was once known as the Fate Neurosis. While it may keep them from blaming others, or keep them from shouldering blame themselves, it’s part of a long-suffering view of life that has a sticky persistence that maintains their dysphoria [and often drives their therapists and acquaintances to distraction]. One of the goals of their psychotherapy is to help them see their own part in making things happen, even if it’s negative – to help them see a world in which they are actors rather than victims of obscure forces like fate, destiny, or bad luck. Whodunit? is of major importance in understanding anything that happens to these people [often times theydunit].

Oddly, my mind goes down this path when reading some of the language used to describe the various sources of bias in Clinical Trial reporting. There’s a long-suffering quality to the lamentations, as if we are victims of a maleficent universe. It’s in the language we use. Publication Bias refers to trials with unfavorable outcomes that don’t get published. That italicized phrase happens to be an example of the use of the passive voice in that the actor who didn’t publish the study is missing in action – literally. Selective Reporting? Somebody did the selecting. And so it goes. The culprit isn’t in the language. All these shenanigans that have so garbled our Clinical Trial literature aren’t mistakes, or sloppiness, or something overlooked, or random acts of a perverse deity. They’re not coming from incompetent or poorly trained statisticians. They’re the conscious, motivated acts of a person or persons who’ve got something very specific in mind. And again, the important question is whodunit?

We all know that these distorted trial reports are motivated actions. The goal is to exaggerate efficacy and downplay toxicity, to sell these drugs, but that knowledge doesn’t make it into our descriptive language or our policies. We routinely relate to them in the passive voice, but then rack our brains trying to think up things that might respond to their happening rather than stopping people from doing them in the first place. We request minimal information and give industry a long time to provide it. Then we don’t levy fines when the required information doesn’t show up. We lament the things that are happening and rarely go after the agents except to extract inadequate fines long after the fact. Many say it won’t stop until we start sending people to prison, but that just hasn’t happened, in part because it’s so difficult to prosecute and often impossible to prove.

Instead of chasing instances where various biases have colored the reported results after the fact, we could face the reality that distortion and non-compliance are the expected response. The a priori protocol and declared outcome variables are unlikely to be available a priori. So we could say that no trial can begin until the outcome variables are posted in the registration section on ClinicalTrials.gov. Why not? They’re already available from the Institutional Review Board submission. Similarly, we could say no FDA review of an NDA will be initiated until the Results Database is publicly available, filled out on ClinicalTrials.gov for all submitted trials. Why not? They’re being submitted to the FDA so they’re available. Why not submit them to the rest of us?

So no more it happened to us. We need to act on they do it. We already know whodunit!
Mickey @ 11:54 AM

an explanation and a surprise…

Posted on Sunday 25 December 2016

So, picking up from explanation would be welcome… and looking at the enrollments in RAP-MD-01, RAP-MD-02, and RAP-MD-03 that seem so high in those Rapastinel Clinical Trials. The way trialists pick their sample size is to do a Power Analysis: take the Standard Deviation [σ] from a previous similar study, pick the difference in means [Δ] between the control and experimental groups that you would consider meaningful, and the formula spits out a sample size. Here’s the standard formula for the size of each group:

n = 2 × [Z1-α/2 + Z1-β]² × σ² ÷ Δ²

These studies use the Montgomery-Asberg Depression Rating Scale [MADRS], and I can’t find any data to give me a Standard Deviation for Rapastinel using the MADRS, nor do I know that scale well enough to select a meaningful mean difference. But, if you’ll allow me a bit of numeric mojo, I can use the Z Scores for p=0.05 and 80% power [both standard: 1.96 and 0.84], so I simplified the equation. I noticed that it contains a clause that is the formula for the Cohen’s d Effect Size [d = Δ÷σ], so I substituted and came up with a formula that calculates the per-group sample size for any given Effect Size [could that be right?]: n ≈ 15.7÷d². Some example values: d=0.25 needs ~251 subjects per group, d=0.50 needs ~63, and d=0.75 needs ~28.

Looking at the group sample sizes for RAP-MD-01, RAP-MD-02, and RAP-MD-03, we have 700÷2 = 350, 2333÷3 ≈ 778, and 1556÷2 = 778 respectively. Applying the formula, that gives Cohen’s d values of 0.21, 0.14, and 0.14. So if my formula works, using Cohen’s rough guidance [0.25 is a weak Effect, 0.50 a moderate Effect, and 0.75+ a strong Effect], these studies are powered to detect statistically significant differences at Effect Sizes that are for all intents and purposes no Effect at all, clinically insignificant. By this read, these studies are dramatically overpowered.
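If you want to check that numeric mojo, here’s a short Python sketch of the simplified relationship [assuming the conventional Z Scores for a two-sided p=0.05 and 80% power, 1.96 and 0.84 – those values are the standard choices, not anything stated in the Rapastinel registrations]:

```python
from math import ceil, sqrt

# Standard normal deviates -- the conventional choices, not something
# specified in the Rapastinel registration documents
Z_ALPHA = 1.96    # Z for a two-sided alpha of 0.05
Z_BETA = 0.8416   # Z for 80% power

def per_group_n(d):
    """Per-group sample size to detect Effect Size d: n = 2(Za + Zb)^2 / d^2."""
    return ceil(2 * (Z_ALPHA + Z_BETA) ** 2 / d ** 2)

def detectable_d(n):
    """Invert the formula: the Effect Size a per-group sample of n is powered to detect."""
    return sqrt(2 * (Z_ALPHA + Z_BETA) ** 2 / n)

# The per-group sizes from RAP-MD-01 [700/2] and RAP-MD-02/03 [~778]
print(round(detectable_d(350), 2))   # -> 0.21
print(round(detectable_d(778), 2))   # -> 0.14
print(per_group_n(0.25))             # -> 252, roughly Cohen's "weak" Effect
```

The constant 2×[1.96 + 0.84]² works out to about 15.7, which is where the n ≈ 15.7÷d² shorthand comes from.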

And as to the question, why three different clinical trials that are close to identical? That one’s easy. They need two statistically significant RCTs to be eligible for an FDA Approval. Instead of doing a trial and, if it’s positive, trying to replicate it, they are doing them simultaneously because it’s faster. Similarly, why so many sites? It really speeds up recruitment. And how are they going to come up with 4589 depressed subjects who are treatment failures [<50% response to an SSRI] and get this study done in two years? I haven’t a clue. But the point is clear, this is a race to the finish line, to be the first company to have a Ketamine-like product on the market.

Surprise/Flash: So I just went to ClinicalTrials.gov to look up something about RAP-MD-04 that I was about to write about, and there’s another whole trial! posted on Christmas Day! It’s called Long-term Safety Study of Rapastinel as Adjunctive Therapy in Patients With Major Depressive Disorder [RAP-MD-06]. And this is what I had just typed, "But those things aren’t the most alarming piece of this suite of clinical trials. The fourth trial [RAP-MD-04] is a longer term clinical trial looking at relapse prevention…" I was about to talk about the potential harms. They were talking about an extension study where they were giving an intravenous drug weekly that is a kissing cousin to Ketamine, a hallucinogen. They’ve reported that Rapastinel isn’t a hallucinogen based on one published single-shot trial. Now they’re going to give it weekly for up to two years as a preventative, but there’s no contingency for any continued monitoring over the long haul. And Flash, there appears a safety analysis. Thought broadcasting? I hope not. Anyway, here’s the new study…

which looks for all the world like yet another trial using that same cohort. Is that kosher? Basing multiple trials on a single group of subjects? I guess the right thing to do at this point is to back off until Allergan settles down and lands on a scenario that suits them. Is this their first CNS drug trial? They’re sure making one fine mess of things so far…

So Merry Christmas already!…
Mickey @ 6:00 PM

season’s greetings…

Posted on Sunday 25 December 2016

Mickey @ 12:01 AM

explanation would be welcome…

Posted on Friday 23 December 2016

I’ve been writing about Rapastinel, an NMDA receptor partial blocker touted to have Ketamine’s antidepressant effects without being a psychotomimetic [see a touch of paralysis… and a block-buster-in-training…]. It was developed by a Northwestern University neuroscientist who formed a private company [Naurex], later purchased by industry giant Allergan for $560 M. They’ve recently registered four phase 3 clinical trials – all now recruiting. Before discussing these trials, here’s a bit of a review.

A properly conducted clinical trial has a number of essential elements related to the efficacy analysis:

  1. Subjects are assigned to the various arms at random.
  2. It is blinded – neither subject nor rater knows the subject’s assigned arm.
  3. The outcome variables [primary and secondary] are declared a priori [before the study begins] as is the plan for later analysis.
Many clinical trials have been misreported in journal articles, and a common modus operandi for distorting the results has been to base the report on the outcome variables that give the most favorable results rather than those declared a priori. Since the pharmaceutical companies that sponsor these trials insist that the raw data is proprietary [a secret], we don’t have the IPD [Individual Participant Data], the CRFs [Case Report Forms], the Protocol, or the SAP [Statistical Analysis Plan]. So our only shot at knowing which outcome variables were designated primary and secondary a priori is the registration on ClinicalTrials.gov. And even registration is only useful if it’s done before the study starts and the outcome variables are clearly stated.
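The arithmetic behind that modus operandi is worth a quick sketch. Suppose, purely for illustration, a sponsor measures k independent outcomes on a drug that does nothing at all; the chance that at least one of them comes up "statistically significant" at p<0.05 by luck alone – and can be quietly promoted to primary after the fact – is 1 − 0.95^k:

```python
def chance_of_a_lucky_outcome(k, alpha=0.05):
    """Probability that at least one of k independent null outcomes
    falls below the p < alpha threshold by chance alone."""
    return 1 - (1 - alpha) ** k

for k in (1, 4, 8):
    print(k, round(chance_of_a_lucky_outcome(k), 2))
# 1 outcome  -> 0.05
# 4 outcomes -> 0.19
# 8 outcomes -> 0.34  [a one-in-three shot at a publishable "finding"]
```

Real outcome measures are correlated, so the inflation is somewhat smaller in practice, but the direction of the problem is exactly why the a priori declaration matters.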

And so to the Rapastinel Phase 3 trials. Three of the four are virtual clones of each other, differing only in the number of subjects [why?], and one has a second dose level. They all have the same cohort – subjects in a Major Depressive Episode who "[h]ave no more than partial response [< 50% improvement] to ongoing treatment with a protocol-allowed antidepressant." These three trials call for 4589 total subjects who have failed to respond to SSRIs, documented by a less than 50% response on some unspecified rating scale after a course of an antidepressant from some unspecified list of drugs. That’s a huge cohort, and it’s unclear why so many. There’s no power analysis included to explain it. Likewise, there’s a huge number of sites [134]. I even wondered if they are piggy-backing on the end of a bunch of somebody else’s antidepressant trials to get cases. Some kind of explanation would be welcome.

This is a medication that is given intravenously and the antidepressant effect lasts ~one week in the proof-of-concept study [see a block-buster-in-training…], so they call for weekly injections. Here’s all they have to say about the conduct of the study and the primary and secondary outcome variables:

  • Primary Outcome Measure: Change from Baseline in Montgomery-Asberg Depression Rating Scale [MADRS] Total Score [Time Frame: Baseline and 1 Day]
  • Secondary Outcome Measure: Change from Baseline in MADRS Total Score at the End of Study [Time Frame: Baseline and 3 Weeks]
That’s it for RAP-MD-01, RAP-MD-02, and RAP-MD-03. Bear in mind that this is a medication that’s going to be given weekly intravenously to over 4000 subjects for several years [explained below]. It’s a cousin to Ketamine [Special-K], a drug that people take to hallucinate, certainly not with anything like this kind of chronic frequency. Based on a single-injection proof-of-concept trial, the claim is that it has the antidepressant effect of Ketamine, but not the club drug effects. But we have no idea if that holds true with long term use. And yet there’s no plan about how harms are to be assessed, no PANSS to look for subtle psychotic symptoms, no global well-being scale, no "self assessment" by the subjects themselves mentioned. Safety should come before efficacy.

How long does the study last? It doesn’t really say. I’m presuming three weeks based on that Time Frame comment in the Secondary Outcome Measure. They say the Primary Outcome Measure is the MADRS Score on day 1. After the first injection? or after each injection? And why day 1? After studying it a while, this was my best guess about what they were going to do…

Looking at the only published study [see a block-buster-in-training…], it seems important to take a look later in that week. That’s the whole reason the drug’s being given – for its presumed ability to last the week. And there are any number of Baselines. Which one is used? When? Is there a minimum MADRS score for entering the study? I could go on and on [and so could you].

And the 4th trial? RAP-MD-04? While it doesn’t say it in the ClinicalTrials.gov registration document, this trial apparently follows some subset of the subjects from the other three trials [responders?] on either continued Placebo or Rapastinel [weekly or every two weeks] as a relapse prevention trial for two years. And though it presumably follows some group from RAP-MD-01, RAP-MD-02, and RAP-MD-03, the criteria for that determination aren’t specified. Again, there’s no self rating depression scale, no PANSS, no adverse event plan mentioned.

In September, the NIH, FDA, and ClinicalTrials.gov announced their long studied reforms for ClinicalTrials.gov and its place in our clinical trial reporting system [see a blast off… and The Final Rule]. If this is an example of what they had in mind, they failed. To be continued…
Mickey @ 9:16 PM

Klerman 1978: schizophrenia…

Posted on Monday 19 December 2016

Psychiatry was obviously ripe for the medicalization ushered in by the neoKraepelinians and the DSM-III released two years after this article. Changes in reimbursement schedules, a burst of new psychotropic drugs certified by industry funded clinical trials, and a focus on neuroscience research soon followed. The disappearance of the public mental hospitals was matched by similar closings in the private sector. And the boundaries between academic psychiatry, guild organizations, and the pharmaceutical industry became increasingly indistinct.

In the period since this chapter was written, the large mental institutions closed for good, with many chronic mental patients reinstitutionalized in our jails and prisons. Psychiatry did indeed medicalize to the point that most psychiatrists became primarily involved in pharmacologic and other biological treatments. And the neuroleptic drugs used in the treatment of psychosis in Klerman’s time were for the most part forgotten, replaced by a string of new Atypical Antipsychotics.

Although The Evolution of a Scientific Nosology is a broad commentary about nosology in general, it’s in a book called Schizophrenia: Science and Practice, sandwiched among a number of different perspectives [though it would soon become the dominant point of view]. Here’s what Klerman had to say about Schizophrenia in 1978:
What has been the influence of the disease approach on understanding schizophrenia? The neo-Kraepelinian answer has been another question: Does the concept of schizophrenia have any meaning, and if it does, what are the data that give it meaning? In other words, the concern has been with what one might call the epistemology of diagnosis; namely, what are the rules of the game? In the disease approach, there are six steps toward validating a concept of an illness such as schizophrenia.
  1. Define the theoretical bases with clarity. It is very important to make explicit the assumptions on which the many conceptual views of schizophrenia are based. But unfortunately much of psychiatric discourse until the middle of this century* has never moved beyond these theoretical debates. In order to move psychiatry beyond philosophy and into science, the second step must be taken.
  2. Translate the general concept into specific hypotheses that can be operationally tested. For example, what is the meaning of borderline schizophrenia? What are its components? How does it manifest itself?
  3. Put the hypothesis to empirical testing to determine its reliability. How well do several observers agree that a borderline patient does have ego deficits, is using splitting or denial?
  4. Subject the data to various statistical tests to determine whether we are dealing with one syndrome alone or a mixture of syndromes.
  5. Attempt to validate the statistics by follow-up studies, family and genetic investigation, correlates in childhood development, and so on.
  6. Undertake epidemiological studies to ascertain the patterns of incidence and prevalence.
How well, then, does schizophrenia meet the criteria of a chronic disease in the medical model? It meets it well but not completely. Before one can conclude definitively that schizophrenia is a disease, conclusive evidence will have to be presented as to etiology and clinical course. While such evidence exists for many other disorders in psychiatry, it does not yet exist for schizophrenia — nor for many other clinical conditions with which medicine deals, such as hypertension, arthritis, and leukemia. That is to say, it is an obviously disordered state with multiple determinants in which there is no certainty as to the exact etiology. Moreover, schizophrenia as we now define it is similar to hypertension in that it is likely to comprise various disorders. As the specific etiological principles come into scientific investigation, we will probably reaffirm Bleuler’s concept of a group of schizophrenias. Nevertheless, it is likely that within this group of schizophrenias there is a core group that has a strong genetic component. This genetic factor creates a vulnerability that becomes manifest in psychosis when precipitated by environmental stresses.
In his introduction to the chapter, Klerman had said of Emil Kraepelin…
His textbook was significant in the history of psychiatry not because it was the first textbook of psychiatry but because it was one of the first to approach mental illness in terms of causation and etiology, using the principles of modern scientific medicine.
and
After classifying as many cases of mental illness as possible by etiology — those due to infection, to endocrine disorders, and so on — Kraepelin was left with a large group of patients whose psychoses began in young adulthood and went on for many years but who had relatively few deaths… Kraepelin proposed that these psychotic conditions with no established etiology be further divided into two groups, which he called “dementia praecox” and “manic-depressive insanity.” He justified this division on the basis of clinical features during the acute illness, long term course, and outcome.
… capturing the conundrum that continues to haunt these discussions – from among the psychotic cases with no established etiology, Kraepelin was a pioneer for approaching them in terms of causation and etiology. In the snippet from his section Schizophrenia as a Disease quoted above, Klerman lays out a pathway to define such Diseases.

Most psychiatrists have traditionally accepted the disease model based on the syndromatic constellation and predictable course, suspecting a biological etiology to show itself sooner or later. The critics are less taken with the homogeneity of the syndromes [see a guest post from Sandy Steingard…], many seeing guild hegemony and medical training as unacknowledged complicating factors. That conflict is, if anything, more intense now than it was in 1978.

Gerald Klerman and the neoKraepelinians had an unprecedented impact on the subsequent course of psychiatry itself. On the other hand, the plight of the patients with the schizophrenias, particularly those with its chronic forms, has not seen much change in the four decades that followed.
Mickey @ 3:21 PM