the great race…

Posted on Monday 31 October 2016

Tired of worrying about the historic race? Frequently checking Nate Silver’s 538 site?  There’s a solution to your turmoil…
Change historic races!
Mickey @ 9:30 AM

the frenzied match of ping pong: the last stand…

Posted on Sunday 30 October 2016

by Peter Doshi, associate editor
British Medical Journal 2016; 355: i5543
We all agree that the participants in trials should be randomly assigned to the various arms of the study, and that double-blind conditions should be maintained. The old pharma trick of not publishing negative studies has been mostly eliminated by required registration. So what’s left to complain about? I’ve been following Ben Goldacre et al’s COMPare study, some of the journal editors’ responses, and Peter Doshi’s reporting, all aiming at a particular point – the a-priori-ness of the outcome variables in the analysis of clinical trial results. I see it as the last big hurrah – a place where there seems to be no enduring consensus supporting its importance. That’s apparent in some of the editors’ responses to COMPare.
Outraged editors

The long-awaited goal of universal registration of trials now seemed achievable, and medical journal editors issued an ultimatum: preregister your trial or forgo publication in our pages. “Honest reporting begins with revealing the existence of all clinical studies, even those that reflect unfavourably on a research sponsor’s product,” a group of influential editors declared. “Unfortunately, selective reporting of trials does occur, and it distorts the body of evidence available for clinical decision-making.” The declaration had enormous impact, and public trial registers remain a key mechanism to prevent investigators from hiding or spinning unfavourable results.

But more than a decade on, a small project from Oxford University’s Centre for Evidence Based Medicine seems to have journal editors eating their own words, with some of the world’s most powerful editors arguing that strict adherence to the registry entry or trial protocol may not always make sense…
Journal dissent

“Upon receipt of COMPare’s initial communication, our editorial team (comprised of physicians and statisticians) thoroughly re-reviewed materials associated with the articles,” Annals’ editor in chief, Christine Laine, told The BMJ. “We concluded that the information reported in the articles audited by COMPare accurately represents the scientific and clinical intent detailed in the protocols.” In notices posted to the journal’s website, Annals editors acknowledged the good intentions of COMPare but warned people to be wary.

“Until the COMPare Project’s methodology is modified to provide a more accurate, complete and nuanced evaluation of published trial reports, we caution readers and the research community against considering COMPare’s assessments as an accurate reflection of the quality of the conduct or reporting of clinical trials”…

NEJM also flunked COMPare’s test. The journal’s editors refused to publicly engage with the group, rejecting all of COMPare’s 20 letters, most with a tightly worded statement saying “we have not identified any clinically meaningful discrepancies between the protocol and the published paper that are of sufficient concern to require correction of the record”… NEJM’s response has a certain irony. The journal’s editor, Jeffrey Drazen, has been a longtime supporter of trial registration, and NEJM was the first of its peers to publish trial protocols alongside trial publications, a practice now followed by JAMA, The BMJ, and most recently – in part thanks to COMPare – Annals.

The BMJ did not escape criticism but ultimately got a green light. COMPare sent rapid responses for two of the three trials evaluated, one of which led to a correction. It was “an example of best practice,” the group said in a blog. What about JAMA and the Lancet? JAMA rejected all 11 letters the group sent, and the Lancet rejected some but published others…

Frankly, while "a more accurate, complete and nuanced evaluation of published trial reports" sounds really good, it’s well off the mark. The whole point of a clinical trial is objectification, hard-core evidence-based medicine. Of course there are instances where common sense says that the preregistered outcome needs to be reconsidered, but that’s not what’s on the table here. COMPare found that the majority of the articles had some outcome changed. And the changes weren’t flagged or explained.

So some editors don’t really accept that the a priori declaration of outcome variables is a fundamental element of a scientifically conducted Randomized Clinical Trial [RCT]. That’s what COMPare shows, and that’s what the editors’ responses say as well. Their proper response to COMPare would’ve been "Whoops." In spite of their comments to the contrary, they’re keeping the multi-billion dollar loophole from being closed – making a "last stand."
Mickey @ 4:43 PM

the frenzied match of ping pong: step one…

Posted on Saturday 29 October 2016

So do I think that editors and researchers sit around thinking, "We’ve got to keep them from nailing down the process to ensure that there’s some official copy of the a priori Primary and Secondary Outcome variables publicly available"? I kind of doubt it. I expect there are some people inside the walls of Pharma whose understanding of this is more intentional, who consciously know that if they don’t keep this question always open and unresolved, they will lose an extremely valuable tool to market their drugs. But editors? I expect they think the reformers are just being too picky or are anti-pharmacology types or something like that. Maybe they feel like the lady outside the gala whose mink coat has just been sprayed with red paint by some unwashed hippie activist. Or maybe they don’t buy the scientific explanation for preregistration [see de Groot]. I doubt that it occurs to them that there was some savvy ghost-organizer in the background carefully orchestrating what Outcomes got presented in the article they published. But the net result is the same – the issue of the a priori Primary and Secondary Outcome variables just never gets resolved…

So here we are with the powerful publication editors, an astute investigative editor [Peter Doshi], and a preeminent clinical trial researcher [Ben Goldacre] engaged in a tête-à-tête about Clinical Trial preregistration, a priori variables, and "Outcome Switching" [occurring after the fact, disconnected from the specific trial reports]. The clinicians who read all of those trials in their journals likely have no idea that this debate is taking place, and even more to the point, they didn’t and don’t know that the paper they read had the "Outcome Switched" [a selective reporting that may well have painted a picture of the results through rose colored glasses]. The discussion in Is this trial misreported? Truth seeking in the burgeoning age of trial transparency by the editors is focused on COMPare’s methodology, whether the prespecified Outcome Variables are the best choice, or whether Goldacre’s team made an "accurate, complete and nuanced evaluation of the published trial reports."

We have to wonder "Why are they talking about that?" Only the BMJ editor seems to have responded to the cogent question, "Did you know you were publishing an article where the Outcome Parameters had been switched?" The editors of the Annals of Internal Medicine and the New England Journal of Medicine either denied the facts or said that it didn’t make any difference – essentially ignoring the question. And Doshi’s article doesn’t address the Sponsors who submitted the articles or ask them the pertinent questions: "Why did you switch the Outcomes?" and "Why didn’t you tell us that you switched the Outcomes?"

the first step

Ben Goldacre’s COMPare team and the BMJ‘s Peter Doshi have done yeoman’s service here. First, the COMPare group made it clear that "Outcome Switching" is not just a thing of the past [as in Paxil Study 329, CIT-MD-18, Paxil Study 352, etc]. It’s happening right now, and frequently. And by querying the journal editors, Doshi got the same kind of defensive or attacking responses as in years past.
“COMPare objects to the format in which the data are communicated, but COMPare is silent about whether they dispute the key clinical message of the article,” NEJM editor Jeffrey Drazen said through a spokesperson.
Annals echoed this sentiment: “COMPare’s methodology suggests a lack of recognition of the importance of clinical judgment when assessing pre-stated outcomes.”
That’s a disappointing response, to say the least. The COMPare team didn’t even count it as "Outcome Switching" if the authors said what they were changing and why in the article. They only catalogued instances where the outcomes were changed without any mention of the change. That’s a point the editors don’t seem to quite grasp. If either the authors or the editors were using clinical judgement to make their decisions, that’s their prerogative. But they need to say so. When I first read the COMPare study, I was impressed that they could even locate the a priori declarations. Frequently, I’ve either had a hard time finding them or actually failed in the endeavor. From the COMPare FAQ:
"Our gold standard for finding pre-specified outcomes is a trial protocol that pre-dates trial commencement, as this is where CONSORT states outcomes should be pre-specified. However this is often not available, in which case, as a second best, we get the pre-specified outcomes from the trial registry entry that pre-dates the trial commencement. Where the registry entry has been modified since the trial began, we access the archived versions, and take the pre-specified outcomes from the last registry entry before the trial began."
So I guess I would say that the first step in evaluating an RCT is to see if the outcomes reported are the same ones that were prespecified in the a priori protocol. That’s the standard. If they’ve been changed, and the change isn’t addressed anywhere in the published article, you’re likely not reading a scientific document. You’re reading an advertisement…
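COMPare's document-hunting rule, and that "first step" check itself, are mechanical enough to sketch in code. Here's a toy rendition [my own, not COMPare's actual tooling – the `Document` class and its field names are invented for illustration] of the priority order in their FAQ, followed by the naive comparison:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Document:
    kind: str         # "protocol" or "registry"
    dated: date       # the document's (or registry version's) date stamp
    outcomes: list    # outcome measures as listed in that document

def prespecified_outcomes(documents, trial_start):
    """Pick the outcome list per COMPare's stated priority:
    a protocol dated before trial commencement wins; failing that,
    the last registry version dated before commencement [archived
    versions included, so post-hoc edits don't count]."""
    for kind in ("protocol", "registry"):
        candidates = [d for d in documents
                      if d.kind == kind and d.dated < trial_start]
        if candidates:
            return max(candidates, key=lambda d: d.dated).outcomes
    return None  # nothing verifiably a priori exists

def switched(prespecified, reported):
    """Naive first pass: which prespecified outcomes vanished from
    the paper, and which reported outcomes appeared from nowhere."""
    pre, rep = set(prespecified), set(reported)
    return sorted(pre - rep), sorted(rep - pre)
```

Real matching is messier than a set difference, of course – outcomes get renamed, re-timed, and split – which is exactly where the arguments start.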
Mickey @ 9:22 PM

the frenzied match of ping pong: decoded…

Posted on Friday 28 October 2016

The Goldacre et al COMPare study in a frenzied match of ping pong… seems straightforward. Find the Outcome Variables they said they were going to use and see if they used them. They frequently didn’t. But finding those preregistered [a priori] Primary and Secondary Outcome variables in the first place is no minor task. In fact, the COMPare team had to develop an algorithm for just that problem [see the COMPare full Protocol and the Adhering to the original plan section in the previous post]. It’s often ambiguous, with either nothing to go on or several versions [Protocol, Registry entries, etc.]. Some of the editors criticized COMPare‘s methodology. But the unclarity isn’t with COMPare. It’s with the inconsistent record keeping of the Sponsors. That shouldn’t be the problem it is. These Outcome Variables are a required part of the Protocol submitted to the Institutional Review Board that approved the study – before registration.

With the exception of the BMJ, the editors’ responses to the letters from COMPare pointing out the inconsistencies were either critical or dismissive. They talked about COMPare‘s methodology or intent, denied wrong-doing, brought up exceptions, etc. They did everything except flesh out the reason for the identified inconsistencies in their articles that COMPare was asking about. When we [Bernard Carroll, John Noble, Shannon Brownlee, the Lown Institute, and yours truly] were first beginning to work on turning Dr. Carroll’s Proposal into a Petition, I had written Peter Doshi with several questions about these matters. He responded but was similarly confused – said he was writing about this too [I presume this article]. He was finding the same mess we were trying to address.

Clinical Trials have been required by the FDA since 1962. ClinicalTrials.gov has been in existence since 1997, and required since 2007 for industry-funded trials. You would think that in these last 54 years, with thousands of trials behind us, we’d have worked out something as simple as where to locate an accurate list of the a priori Primary and Secondary Outcome Variables for a clinical drug trial. But as this scenario clearly demonstrates, it just hasn’t happened [carefully note the use of the passive voice in this sentence]. "it just hasn’t happened" isn’t really accurate, in my opinion. I think it has been kept from happening using frenzied matches of ping pong similar to this one, because it creates a heavily protected multi-billion dollar loophole in the system. So first, what is the loophole?

the multibillion dollar loophole

The FDA efficacy standards are simple. The Trial rises or falls based on results of the a priori Primary Outcome Variable[s]. They take Secondary Outcome Variables and other arguments into account in questionable cases, but the bedrock is a statistically significant result on the a priori Primary Outcome Variable[s]. And they require two such studies for approval. They have the Protocol and the results. They have the whole CSR and they either have or can have the whole IPD if they need it. None of the arguments the drug companies use to withhold information apply, since the FDA keeps their secrets. And the sponsors could potentially get in a heap of trouble if they lie, cheat, or steal. So although the standards for approval are set fairly low, one can generally accept an FDA approval as on the up and up. And that’s basically all they say, YES or NO. At some point down the line, they may [or may not] post their report on Drugs@FDA, or you can get it with an FOIA request, but it takes a while. And I think those reports are generally only available for approved drugs.

However, there’s no official oversight of published academic journal articles. Acceptance for publication is in the hands of the journal editor and editorial staff assisted by peer reviewers. They don’t have the CSR or the IPD. They don’t necessarily have the Protocol. What they have is the submitted article and a CONSORT statement. And of particular relevance, they don’t necessarily have a copy of the a priori Protocol guaranteed to be a priori – nor are they under any injunction to make a determination about the outcome of the trial. And so to the loophole. If a drug has been approved by the FDA, the company is only allowed to advertise it for the condition it was approved for [and advertising matters!]. Let’s say you do some trials for another indication and the FDA turns it down. You can’t advertise it for that indication, but you can submit it to a journal [with its different standards] and if it’s published, that’s a HUGE advertisement. The academic authors can present it at meetings. And the reprints from those publications can be disseminated far and wide.

An example? Let’s take the most famous one of all – Paxil Study 329 [Paroxetine in adolescent depression]. They set out to get an FDA Approval and ultimately did three trials. Two were total busts, so they weren’t published. The third had a slight, insignificant signal which they blew up into a positive-appearing outcome using some blatant "Outcome Switching" and got it published in a primo journal with an army of child psychiatry experts as authors saying:
Conclusions: Paroxetine is generally well tolerated and effective for major depression in adolescents.

And beyond that, in a published journal article, you can selectively present your data, make arguments that omit or discount contradictions, enlist the skills of trained statisticians and writers to lean the material in your direction. You can make it desirable to publish by ordering a gajillion reprints [the ones that go far and wide]. All you have to do is get it published [and having all those semi-famous authors on the byline helps with that too]. In the case of Paxil Study 329, there were over twenty such authors.

how to maintain a loophole

When someone challenges it, initiate a frenzied match of ping pong. Bring up all the exceptions, legitimate and otherwise. Discredit the critic. Ignore the critic. Make the critic’s eyes cross. In this case, do everything possible to keep all the ambiguity about the location of the bona fide a priori list of Primary and Secondary Outcome Variables alive. If they invent a registry – ignore it. If they make it a requirement – insist on a long lag time before completely filling it out [and continue to ignore parts of it]. And if somebody like Ben Goldacre or Peter Doshi tries to nail things down, make them feel like this:

"…disputes are so detail oriented that my eyes crossed trying to follow what at times feels like a frenzied match of ping pong, each side’s latest rejoinder seeming to rebut their opponents’ last counterpoint."

The point is to either downplay the importance of these preregistered parameters or, in fact, change them to something that gives a more favorable result. And that is what has been happening in one version or another for literally decades. These arguments over whether they are a sine qua non or not have been going on that long. So…

to be continued…
Mickey @ 2:11 PM

a frenzied match of ping pong…

Posted on Thursday 27 October 2016

by Peter Doshi, associate editor
British Medical Journal 2016; 355: i5543

Peter Doshi has, as usual, done a great job of clearly laying out an issue and exploring all of its facets. It’s certainly one I haven’t been able to stop talking about since I knew it was there to be talked about. With great restraint, I’ll avoid referencing all the times [they might just fill this whole page]. He’s talking about the question of whether one needs to absolutely define the analytic parameters of a Clinical Trial before the study begins, or whether one can make changes in the outcome variables or statistical methodology along the way. Doshi is particularly focused on what’s called "Outcome Switching" [changing either the Primary or Secondary Outcome Variables after the study has begun]. And Doshi uses Ben Goldacre’s COMPare study as his example [comparin’…]. Goldacre’s group looked at recently published trials in top journals, and found that most of them had outcomes switched post hoc:

[COMPare’s running tally: 67 trials checked – 9 trials reported all outcomes correctly – 354 prespecified outcomes not reported – 357 new outcomes silently added]

Goldacre et al wrote the journals about their findings and catalogued their responses:

Annals of Internal Medicine: “We concluded that the information reported in the articles audited by COMPare accurately represents the scientific and clinical intent detailed in the protocols.” “Until the COMPare Project’s methodology is modified to provide a more accurate, complete and nuanced evaluation of published trial reports, we caution readers and the research community against considering COMPare’s assessments as an accurate reflection of the quality of the conduct or reporting of clinical trials.”
The BMJ: did not escape criticism but ultimately got a green light. COMPare sent rapid responses for two of the three trials evaluated, one of which led to a correction. It was “an example of best practice,” the group said in a blog.
JAMA and the Lancet: JAMA rejected all 11 letters the group sent, and the Lancet rejected some but published others.
NEJM: “We have not identified any clinically meaningful discrepancies between the protocol and the published paper that are of sufficient concern to require correction of the record.”

This, from Peter’s article:

Lost in a maze of detail

The disputes are so detail oriented that my eyes crossed trying to follow what at times feels like a frenzied match of ping pong, each side’s latest rejoinder seeming to rebut their opponents’ last counterpoint. In one trial, COMPare’s datasheet shows that the group counted 83 secondary outcomes. Yet in its letter to NEJM, it mentions “29 pre-specified secondary outcomes” while the last entry before patient enrolment on ClinicalTrials.gov lists just two. By my count—based on the protocol dated before patient enrolment and posted on ClinicalTrials.gov—there were over 200.

I shared the case with Curtis Meinert, Johns Hopkins professor and former editor of the journal Clinical Trials. Meinert guessed that fear of future accusations of bias was driving trialists to bad practices. “They don’t want to be accused of looking at something they didn’t specify, so they specify every freaking thing.”

What is the clinical significance?

I asked COMPare to elaborate on clinical impact. Beyond the technical violations of best practices in trial reporting, did the team discover any misreporting that concerned them from a clinical perspective? For COMPare, this question was out of scope. “On clinical impact, we’ve deliberately not examined that. We set out to assess whether outcomes are correctly reported in journals listed as endorsing CONSORT,” the team told me, referring to the well established guidelines for proper reporting of randomised trials. “We deliberately steered away from any value judgments on why someone reported something other than their prespecified outcomes, because we needed our letters to be factual, unambiguous, and all comparable between trialists and journals.”

But clinical and value judgments were central to journal editors’ defence. “COMPare objects to the format in which the data are communicated, but COMPare is silent about whether they dispute the key clinical message of the article,” NEJM editor Jeffrey Drazen said through a spokesperson. Annals echoed this sentiment: “COMPare’s methodology suggests a lack of recognition of the importance of clinical judgment when assessing pre-stated outcomes”…

Adhering to the original plan

The challenge of establishing outcome “switching” begins with determining trialists’ prespecified outcomes. But which document—protocol, statistical analysis plan, registry entry, or some combination of the above—details trialists’ true intentions? COMPare’s methods prioritise protocols over registry entries, a practice that troubles Elizabeth Loder, head of research at The BMJ, which requires reporting according to the registry entry. “I see a worrying trend away from trial registries to protocols … People seem to think that as long as you’ve registered the trial and given notice of its existence, the details don’t matter so much.”

Annals said that sometimes editors are faced with “a choice between an incomplete or non-updated registry record, with a reliable date stamp, and a more detailed and updated protocol document.” Such situations deeply trouble Deborah Zarin, a longtime advocate of trial registration and director of ClinicalTrials.gov, who believes that trial registration is the foundation of the trial reporting system…

Even with trialists’ last testament established, complexity remains. Having herself conducted similar analyses that compare outcomes reported across different sources, Zarin noted that a key challenge is disagreement in the community over how detailed outcome prespecification must be. “From my perspective, prespecification sets the foundation for a statistical analysis; if there is no firmly prespecified outcome measure, then it’s unclear to me what any reported P value means,” she said, noting that ClinicalTrials.gov now requires investigators to delineate “the specific metric and the time point[s]” for all outcome measures.

Yet some trialists do not sufficiently define outcomes until after the data are collected [but not unmasked] and they finalise the statistical analysis plan, a situation the FDA apparently accepts. The FDA told The BMJ that it “does not require the statistical analysis plan to be in place prior to initiation of the trial.” Annals stated that “prespecification is certainly important, but holding blindly to this prespecification may not be good science. Sometimes it becomes clear to informed clinicians and scientists who understand the area that what was prespecified is not going to provide patients and clinicians with the information they need”…
If you have made it this far down the page, I’d like you to notice something. We started with Goldacre et al’s data, which is quite clear. In the overwhelming majority of cases, the prespecified Outcome Variables have been changed. But at this point, you’re confused – you probably would have difficulty even saying succinctly what the exact topic is. Peter Doshi, a clear thinker, seems to be going from person to person, each of whom says things that add to the confusion rather than clarify it. It’s not his fault. Or yours. Or mine. That’s what happens with this topic. The fog rolls in. That is probably the most important question here: "Why do these discussions of clinical trial analysis always get lost in the fog and seem to fade away?"

That question has an answer. As long as the confusion remains, one can do whatever makes the outcome look most like you want it to look. And you can justify your choice with the kind of discussion that has Peter’s eyes crossing, continuing to play a frenzied match of ping pong until the end of days. So if you keep from being definite about the outcomes, you can pick the one that fits your fancy, and if challenged – bring on the fog. It’s confusing on purpose…
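Zarin’s earlier point – that without a firmly prespecified outcome a reported P value means nothing – is the statistical heart of all this, and a toy simulation [mine, not anything from Doshi’s article] makes it concrete. Under the null hypothesis every outcome’s p value is uniform on [0, 1], so an analyst free to report whichever of k outcomes "worked" finds p < 0.05 with probability 1 - (1 - 0.05)^k:

```python
import random

def best_pick_false_positive_rate(n_outcomes, alpha=0.05,
                                  trials=20000, seed=1):
    """Simulate trials of a drug with no effect at all. Each of
    n_outcomes yields a p value that is Uniform(0,1) under the null;
    count how often at least one dips below alpha -- i.e., how often
    outcome switching can manufacture a 'significant' result."""
    rng = random.Random(seed)
    hits = sum(
        any(rng.random() < alpha for _ in range(n_outcomes))
        for _ in range(trials)
    )
    return hits / trials

# One honest prespecified outcome keeps the false-positive rate near 5%;
# a menu of 20 outcomes to pick from pushes it toward 64% [1 - 0.95**20].
```

That arithmetic is the whole reason the a priori declaration matters, and the whole reason keeping it ambiguous is so valuable.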

to be continued…
Mickey @ 11:08 PM

don’t forget……

Posted on Thursday 27 October 2016

Mickey @ 11:35 AM

the clinic…

Posted on Wednesday 26 October 2016

A few years back, a woman in her mid forties showed up in the clinic "depressed" [visible from 50+ yards]. She wanted to talk about her symptoms and not her life. As it played out, one reason was because she didn’t have one [a life]. She was the housewife to her second husband, a man who worked hard, came home and ate, drank a couple of six-packs and was off to bed. His son by a previous marriage lived there and was a "freeloader druggie." Her son from a previous marriage was a college student rarely home except to eat. She worked full time as an Assistant Manager of a busy retail business. The manager often left her in charge [to run off for trysts with her married boyfriend]. The patient-to-be was on the phone several times a day with her aging mother who lived in another state. Her mother was and always had been a "drama queen" and my future patient managed all of her affairs and money long distance. She was a "sort of therapist" for all of her siblings and her husband’s siblings in their dysfunctional marriages. Did I mention she was a full-time night student taking a medical records course of study? Like I said, she didn’t have a life. Her PCP [Primary Care Physician] had plied her with various SSRIs with no effect. She didn’t tell me on that first visit, but she had a gun, bullets, and some suicidal fantasies.

The point of the post isn’t this patient’s case, but a paragraph or so will help get me closer to the point. Diagnosis? She met all MDD [Major Depressive Disorder] criteria. A twelve step diagnosis would’ve been [Malignant] Co-Dependency. A family systems theorist might label her with the "Executive Daughter" syndrome. But the Dx that I went for was "a Network case" [from the movie with the line, "I’m mad as hell and I’m not going to take it anymore!"]. I recall going for the anger because she didn’t seem to know she had it. But it wasn’t very hard to find. I saw her as frequently as I could and on my off weeks, she saw an LPC who was in the clinic. That’s where the "Executive Daughter" dx came from. She had always essentially held her dysfunctional family together as her father’s "right hand man" – a family role. He was a soft alcoholic who appreciated her help. He had died several years before, and the hardest anger for her to get to was directed towards him for putting her "in charge," then dying and leaving her to take care of all these "crazy f___ing" people.

It took a long while for her to figure out that the solution wasn’t making "them" change, it was in her stopping taking on their problems as if they were her own. Discovering the source of her anger helped, but changing roles is hard to do. On the other hand, not cringing every time the phone rang was a powerful reinforcer. To her surprise, when she started talking out loud, her husband got rid of his "freeloading druggie" son and drank much less himself. He said "I didn’t realize what we were doing to you." She dropped school ["just a way to get out of the house"] and moved to a better job. Progress once she got her rhythm was surprisingly swift. Recently, she showed up with a "relapse." Her mother had a "light" stroke and a sibling’s husband died in a single week, and she found herself automatically sliding too far into the caretaker role once again and felt like she was trapped again [forever].

The topic here is the PHQ-9 [Patient Health Questionnaire – 9 Item] [see how silly!…, remarkable…, simply absurd…, PHQ-9®™…]. The day this patient came back with her "relapse" was shortly after they started handing patients a routine PHQ-9 to fill out in the Waiting Room. It’s apparently a requirement for whatever certification the clinic is getting so they can take insurance. She said, "What is this shit?" I glanced at it and told her. She rolled her eyes, and then launched into what had happened.  It looked like this from what I recall in a quick glance…

[I particularly recall the "double answer" on number 9].
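For anyone who hasn’t scored one: the PHQ-9 is nine items rated 0–3, summed to a total of 0–27, and bucketed into the published severity bands. The arithmetic is trivially simple – which is rather the point when it gets treated as a clinical metric. A sketch [my own, not the official scoring code]:

```python
def phq9_score(answers):
    """Score a PHQ-9: nine items, each rated 0-3 ["not at all" ..
    "nearly every day"]. Totals map onto the published severity bands.
    Item 9 [thoughts of self-harm] is flagged separately -- no total
    score substitutes for asking about it directly."""
    if len(answers) != 9 or not all(a in (0, 1, 2, 3) for a in answers):
        raise ValueError("expected nine answers, each 0-3")
    total = sum(answers)
    bands = [(4, "minimal"), (9, "mild"), (14, "moderate"),
             (19, "moderately severe"), (27, "severe")]
    severity = next(label for cutoff, label in bands if total <= cutoff)
    return {"total": total, "severity": severity,
            "item9_flag": answers[8] > 0}
```

What the number can’t encode, of course, is whether a high score means imminent danger or a telegram from someone whose whole life just landed on her again.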

Last week when I was talking about the Collaborative Care Model [without ever talking to the patient…], I didn’t get around to talking about their use of the PHQ-9® to follow the patient’s progress, to decide when to change medications, etc. Nothing I know about the PHQ-9® makes me think of it as a candidate marker to follow the session-to-session effect of depression treatment, certainly not one preferable to observation or a simple question. Looking at that form, you’d think I ought to send her to the hospital on a stretcher. But I know her. Remember, I said:
    "showed up in the clinic ‘depressed’ [visible from 50+ yards]. She wanted to talk about her symptoms and not her life"
We’d talked about that part before. Her mother’s extreme melodrama and constant whining had put her off of talking about what bothered her. She didn’t want to be "like that." Instead, she "telegraphed" her internal states indirectly with depressive symptoms. We got onto that when I asked if she had talked to her husband about his son and his drinking. She had acted as if he should know why she was so tangled up. She was bowled over when she tried saying it instead of "telegraphing" – he was glad to hear it and responded. That’s what got the ball rolling as she went from "ward" to "ward" setting boundaries "out loud." So the PHQ-9® was a telegram.

She felt really snookered. Her mother’s stroke wasn’t that light after all, and she really probably couldn’t live alone and take care of herself. Throw in the drama, and things were an unholy mess back home. It was the common problem of the elderly person who needs to go into care but resists the need [with a heavy dash of lifelong Drama Queen-ness thrown in for good measure]. My patient felt sentenced to bringing her mother to Georgia to live with her, which would’ve been a fate worse than any she could ever imagine.

So the other thing that PHQ-9® measured was frustration. She had spent days calling hospitals, doctors, etc. trying to get someone to see that her mother couldn’t live alone, without success. Doctors and hospitals don’t initiate such things. Families and Social Services do. This is the reason every psychiatrist, psychologist, etc. should have as part of training a stint in a Social Agency. Her problem of an aging impaired parent, while common, is never easy. But in this case, everything was already in place – resources [check], fiscal assets [check], and an immediate cause [check]. All that’s needed is a savvy Social Worker and/or lawyer. My patient’s son [that college student I mentioned] had moved to his grandmother’s town to continue schooling, and moved in with her thinking he could "help Grandma." He ended up having to call the police when she got out of control and attacked him with a knife. "Grandma" ended up with an assault charge. He was busily looking for an apartment to get out of there. So there was a pending charge, a cause to go to the court and insist on an immediate mental competency assessment. That "other State" happened to be my home State and so I know the laws, so…

This patient is an action figure and I have no doubt that once given the path, she’ll take care of business [her mood was much lighter when she left]. I won’t prattle on about how much I think using a PHQ-9® to do waiting room screening or make clinical decisions is a pseudo-metric rather than the application of evidence-based medicine. It was developed by Robert Spitzer on a grant from Pfizer [who owns it]. It’s part of a fantasy that the psychic ills of human-kind can be reduced to some simple calculus that has a general application – a fantasy of policy makers like those that created that Collaborative Care Report [DISSEMINATION OF INTEGRATED CARE WITHIN ADULT PRIMARY CARE SETTINGS: THE COLLABORATIVE CARE MODEL], rarely shared by clinicians. I wonder what a Consulting Psychiatrist who had never met this patient would, on seeing this PHQ-9®, tell a Care Manager to relay to the Primary Care Physician?
Mickey @ 3:03 PM


Posted on Sunday 23 October 2016

I’ve kept politics out of here on purpose and will keep that up, but I was surprised and touched by this from Canada. I thought I’d post it in case you hadn’t seen it…
Mickey @ 3:03 PM

the times are a changin’…

Posted on Friday 21 October 2016

I work as a volunteer physician in a [formerly] charity clinic in a rural part of a rural State. The Clinic was started by some retired doctors and operated out of a few trailers with a large volunteer staff, seeing only uninsured patients. This is an unusual place at the beginning of the Appalachian Mountains. About ¼ of the county is occupied by gated retirement communities. The rest is home to the descendants of the settlers who came after the Cherokee were marched out on the Trail of Tears in 1838 and their land parceled off in a lottery. It’s a beautiful place that has the world’s largest piece of white marble [1 mile x 6 miles x unk. depth] that sustained the county back when people built with marble [it’s now the grit in your toothpaste]. It was also the home for the "moon shiners" of a former time, and their whiskey-runners and souped-up cars were the direct progenitors of the modern NASCAR races.

The clinic I work in was staffed by retired volunteering professionals of various ilks from the gated communities and serves the indigenous population. There’s a symbiosis between these two groups that works well – strong tax base, well stocked Thrift Shop, plenty of eager workers for building and other jobs, a better hospital than we might otherwise have, our clinic, and a cultural mixing that’s refreshing. Both groups are mostly Republicans [for different reasons].

With the coming of the Affordable Care Act and insurance for all, the clinic changed dramatically. The old docs who founded it either died or moved on [I find myself the old man now]. Instead of an all-volunteer staff, we now have a few volunteers but mostly employees. There’s a new clinic building. There are inspections and standards, a new EMR [Electronic Medical Record] system. Everyone gets vital signs and fills out a PHQ-9 in the waiting room with every visit [I wonder who looks at them? not me]. And it seems like there’s some kind of administrative snafu with almost every patient, almost every prescription. Every patient has a "primary" – a primary care-person, usually a nurse practitioner. And there are lots of "rules" from the various governments and agencies involved. The patients are confused by it all [as am I]. I actually hate that it changed, and am likely to soon move on out of frustration.

The thing that has me mentioning it here is hard to know how to say – it’s a change in attitude in both the staff and patients that I didn’t anticipate – but it’s undeniable. It’s almost adversarial. Though that word seems too harsh, it’s the best one I can think of. It’s as if there has been a shift from a hermeneutic of trust to one of suspiciousness. The staff seems often worried that the patients might be trying to game the system, and the patients are likewise concerned that something might be withheld, that the "system" is working against them. It’s bigger than the fact that there’s now money involved, though the fees are quite low. In my case, because I remain a volunteer and am not a registered "provider" in any system, there’s no charge to see me. Yet the attitudinal change spills over into my office. I never anticipated anything like this happening and there doesn’t seem to be anything I can do about it.

I hope I can find a way to stay on for a while because I’ve really enjoyed my time there. It has been a good end-of-career experience. It’s an antidote to some of the disillusioning things I encounter in writing this blog. It’s a mixture of all the different ways of approaching problems help-seeking patients show up with – doing what can be done in the time allotted. I often think of the title of Adolf Meyer’s collected works "Common Sense Psychiatry" – because that’s what it comes down to. Many of the patients I see have outrageous biographies, yet have put together meaningful lives against the odds. And even with the brief and infrequent visits to the clinic, I’m often awed with the work that can be accomplished. And it’s good for a psychoanalytically oriented psychotherapist to work in a situation where the very medications I talk about here as overused can be so helpful in treatment if you’re careful.

But something about becoming a Bona Fide, Certified Center with all that entails has changed the ambience in ways I couldn’t have imagined. My working hypothesis is that the staff’s feeling of being "watched" and "judged" has been passed on to the patients. Rules and standards have replaced "Common Sense." I find myself wondering if that’s inevitable…
Mickey @ 11:23 PM

without ever talking to the patient…

Posted on Friday 21 October 2016

In bygone days, as a resident and as a director of a busy city/county hospital psychiatric emergency room, I had years of working in collaboration with staff and trainees who were the primary contacts. With many cases, I saw the patient in person only briefly. And as a supervisor, I met with residents who brought the cases they needed help with. Even now, I wish I had support staff in the clinic where I work but alas, it’s just me. Even with all of that experience "collaborating," I have a visceral negative reaction to what I read about the modern version of collaborative care. So when I ran across this recent APA Report on the model, I decided to spend some time looking it over to see if my reflex reaction could be softened up a bit. It’s 85 pages and is written in a salesmanship style with lots of jargon and slogans. I’ve included the intro below. There’s a case report scattered throughout the narrative, and I’ve extracted much of it at the end, along with a rough diagram [as accurate as the narrative would allow]:
Spring 2016

There is expert consensus that all effective Collaborative Care Models share four core elements: [1] team-driven, [2] population-focused, [3] measurement-guided, and [4] evidence-based. These four elements, when combined, can allow for a fifth guiding principle to emerge: accountability and quality improvement. Table 1 reviews the core elements of Collaborative Care implementation. Collaborative Care is team-driven, led by a PCP with support from a "care manager" [CM] and consultation from a psychiatrist who provides treatment recommendations for patients who are not achieving clinical goals. Other mental health professionals can contribute well to the Collaborative Care Model. Collaborative Care is population-focused, using a registry to monitor treatment engagement and response to care. Collaborative Care is measurement-guided with a consistent dedication to patient-reported outcomes and utilizes evidence-based approaches to achieve those outcomes. Additionally, Collaborative Care is patient-centered with proactive outreach to engage, activate, promote self-management and treatment adherence, and coordinate services.

Table 1: Essential Elements of Collaborative Care
Team-Driven: A multidisciplinary group of healthcare delivery professionals providing care in a coordinated fashion and empowered to work at the top of their professional training.
Population-Focused: The Collaborative Care team is responsible for the provision of care and health outcomes of a defined population of patients
Measurement-Guided: The team uses systematic, disease-specific, patient-reported outcome measures (e.g., symptom rating scales) to drive clinical decision-making.
Evidence-Based: The team adapts scientifically proven treatments within an individual clinical context to achieve improved health outcomes.
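The "measurement-guided" and "population-focused" machinery the report describes – a registry that tracks every patient’s PHQ-9, sorts the caseload by severity, and flags scores that are overdue or above target – amounts to a few lines of bookkeeping. Here’s a minimal sketch of that logic; the field names and the 28-day overdue window are my assumptions, while the severity cutoffs are the standard published PHQ-9 bands, and the target of "less than 5" comes from the case narrative itself:

```python
from datetime import date, timedelta

def phq9_severity(score: int) -> str:
    """Map a PHQ-9 total (0-27) to its standard severity band."""
    if not 0 <= score <= 27:
        raise ValueError("PHQ-9 totals range from 0 to 27")
    if score <= 4:
        return "minimal"
    if score <= 9:
        return "mild"
    if score <= 14:
        return "moderate"
    if score <= 19:
        return "moderately severe"
    return "severe"

def caseload_review(registry, today, target=5, overdue_days=28):
    """Sort the caseload by PHQ-9 severity (highest first) and flag
    each patient as above-target and/or overdue for re-measurement."""
    ranked = sorted(registry, key=lambda p: p["score"], reverse=True)
    for p in ranked:
        p["above_target"] = p["score"] >= target
        p["overdue"] = (today - p["last_measured"]) > timedelta(days=overdue_days)
    return ranked

# Hypothetical caseload entries, loosely modeled on the case narrative below
registry = [
    {"patient": "John J.", "score": 18, "last_measured": date(2016, 3, 1)},
    {"patient": "Sue",     "score": 11, "last_measured": date(2016, 4, 20)},
]
ranked = caseload_review(registry, today=date(2016, 5, 1))
```

Note what this makes plain: the weekly caseload review is driven entirely by a number and a date stamp. Nothing in the registry knows that John moved out of his house.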

The patient had been seen a year earlier by his PCP [Primary Care Physician] and given Prozac for depression with a "fair" response. This time, he was identified on a medical visit because of his score on a waiting-room PHQ-9. The PCP doubled his Prozac to 40 mg/daily and introduced him to the CM [Care Manager]. She obtained a history of his depressive symptoms and something else:
John has recently moved out of his house, and he and his wife are separating. He is staying with a friend in town…

After a month on the higher dose of Prozac, the patient stops it because of jitters. They’re following him with the PHQ-9 and it goes up. The Psychiatrist suggests changing to Zoloft and has a phone conversation with the PCP about titration [the only direct contact between the Psychiatrist and the PCP recorded]. The patient doesn’t fill the Rx, and a month later, he’s worse. Contacted by the CM, he finally starts the Zoloft and the dose is gradually increased. After several months, the patient is feeling better and he and his wife are "fighting less." 5½ months after the initial contact, he’s back with his wife and feels fine [PHQ-9 is 5]. But two weeks later, he stops the Zoloft, and some of his symptoms return and persist, in spite of restarting the Zoloft at a maximum dose. The psychiatrist suggests adding Wellbutrin and later increases the dose. At 9½ months from first contact, he is better and his PHQ is 4.

I think reading it at least helped me clarify my negative reactions:
  • The psychiatrist never saw the patient, and as best I could tell only had one direct contact with the PCP. If the PCP had contact along the way when changing the medications around, it wasn’t apparent. It appears that contact was through the CM [though surely that’s not right with changing drugs and titrating doses?].
  • A PHQ-9 and some comorbidity screening don’t a diagnosis make [did they actually make a diagnosis?].
  • A PHQ-9 is hardly a precise clinimetric. I’d prefer asking, and I’m pretty sure patients prefer being asked.
  • There are two instances where a high dose SSRI is stopped with no taper. Both suspect for withdrawal [particularly the first] but were interpreted as worsening!
  • What about "John has recently moved out of his house, and he and his wife are separating. He is staying with a friend in town..." Have we just forgotten that a loss like that can cause all of these symptoms? What were the details? Who left who? Is there a clinician in the house?!
  • Given that he got ill when they separated and got better when he moved back in, I doubt that the Zoloft had much to do with anything [except maybe withdrawal]. It could be that he was just plenty glad to be home…
I realize they can’t put everything in a summary, and I’m Monday Morning Quarterbacking here, but the things I’ve listed are hardly subtle points.

Robert Whitaker, one of psychiatry’s major critics, had this to say about where he thought psychiatry might head in the future [see still around…]:

… So I don’t believe it will be possible for psychiatry to change unless it identifies a new function that would be marketable, so to speak. Psychiatry needs to identify a change that would be consistent with its interests as a guild. The one faint possibility I see – and this may seem counterintuitive – is for psychiatry to become the profession that provides a critical view of psychiatric drugs. Family doctors do most of the prescribing of psychiatric drugs today, without any real sense of their risks and benefits, and so psychiatrists could stake out a role as being the experts who know how to use the drugs in a very selective, cautious manner, and the experts who know how to incorporate such drug treatment into a holistic, integrated form of care. If the public sees the drugs as quite problematic, as medications that can serve a purpose – but only if prescribed in a very nuanced way – then it will want to turn to physicians who understand well the problems with the drugs and their limitations. That is what I think must happen for psychiatry to change. Psychiatry must see a financial benefit from a proposed change, one consistent with guild interests.
So even Robert Whitaker sets a higher mark than this APA Collaborative Care piece. I recognize that they’re trying to streamline things, cut costs, etc. But how can one demonstrate any expertise without ever talking to the patient? Who’s going to look for withdrawal, akathisia? Who’s going to say "What’s going on with you and your wife?" Who is going to take the history in this system? Make a diagnosis? What is the place of "psycho-social" in any of this? Did we do John J. any favors with our waiting room screening? Well, here’s the essence of the case narrative below. I’d recommend reading this jargon-filled report before signing any contracts [and be sure to renew your malpractice insurance]…

John J.

John J. is a 48-year-old white male visiting his PCP, Dr. Stevens, for a follow-up visit for managing hypertension. During the visit, John’s PHQ-9 score is taken and found to be 16, in the moderate range for major depression. John was treated by Dr. Stevens 12 months ago for depression and remains on fluoxetine 20 mg daily, to which he had a fair initial response. This is John’s first PHQ-9, part of the new Collaborative Care protocol instituted by Dr. Stevens’s clinic.

Dr. Stevens discusses the test results briefly with John during their clinic appointment and introduces him to Ms. Cook, a CM/behavioral health specialist with the clinic’s Collaborative Care team. Ms. Cook is immediately available in the clinic to meet patients coming and going from appointments at the request of the PCP or other clinic staff. John agrees to speak with Ms. Cook after the appointment, and Ms. Cook runs through a few patient screens for behavioral health and substance use conditions that are often comorbid with major depressive disorder. John screens negatively for alcohol use or a history of mania. Ms. Cook discovers that John has recently moved out of his house, and he and his wife are separating. He is staying with a friend in town, and it has been hard for him to make it to work consistently. He often goes to bed late and sleeps in, missing his alarm in the morning, and eventually calls in sick. Ms. Cook shares some of this initial information with Dr. Stevens after their appointment, and Dr. Stevens increases John’s fluoxetine to 40 mg daily. She also engages him in a behavioral activation strategy to improve his mood that includes getting together with his friend Joe over the weekend.

Three days later, Ms. Cook has her weekly meeting with Dr. Brown, the consulting psychiatrist. They discuss John, the new addition to Ms. Cook’s caseload. Dr. Brown acknowledges the PHQ-9 score and the fluoxetine increase and reminds Ms. Cook of additional brief intervention techniques she has reviewed in the past with other patients. Five weeks later, during their caseload review, Dr. Brown notices John’s PHQ-9 score is unchanged. Ms. Cook notes that he stopped taking the fluoxetine the week before because of some ongoing jitteriness. Dr. Brown recommends switching to sertraline instead, and Ms. Cook conveys the recommendation to Dr. Stevens by flagging him in the electronic health record. Dr. Stevens reviews John’s other medications the following day and writes a prescription for sertraline after Ms. Cook has called John to discuss the recommendations of the consulting psychiatrist. John agrees to try the sertraline. Ms. Cook reviews the side effects with John and offers her contact information in addition to Dr. Stevens’s office if he has any problems with the medication. Dr. Stevens phones Dr. Brown and asks about the titration schedule of sertraline and starting dosage to confirm his management is appropriate. They agree to continue with increases in this medication with a target PHQ-9 of less than 5 if possible.

Five weeks after his last appointment, John remains depressed. He did not return Dr. Stevens’s last call regarding some recent lab results, and he no-showed one appointment. During their weekly caseload review, John is eighth on Ms. Cook’s list of 58 patients when sorted by PHQ-9 score severity, which leads to a case review. Their registry of patients also has flagged John’s PHQ-9 as overdue and above their target. As she and Dr. Brown are reviewing all the patients, they review John’s score and with the information in the registry are able to quickly recall his latest treatment plan, including the sertraline recommendations. Dr. Stevens did write the prescription, but Ms. Cook is unsure what happened after that. She attempted to call John about 1 week after the sertraline was prescribed and left him a message that wasn’t returned. Ms. Cook and Dr. Brown agree that John needs increased outreach given his recent depression and lack of engagement, and Ms. Cook takes on this task over the next week. They then move on to Sue after spending about 5 minutes discussing John.

The following day, Ms. Cook writes a letter from the clinic to John offering assistance and begins to call more frequently. Three days later, John calls back, and he discloses that he never picked up the sertraline and was not sure he was worth the attention of the team. He reports that he didn’t want to feel like a failure again or let anyone down. John’s PHQ-9 score over the phone is 18, and Ms. Cook screens John for suicidal ideation, which is negative. She provides some education around depressive symptoms, the role of the team, and their desire to help him feel better. John agrees to pick up the sertraline from the pharmacy and check in with Ms. Cook before the weekend to report on how he’s tolerating it.

John, the patient, calls Ms. Cook, the CM, on Friday and reports that he picked up the sertraline and is taking it without side effects but doesn’t feel much different after 2 days. Ms. Cook reassures John that this is not unusual, and that he needs to stick with the medication for 4-6 weeks at the right dose sometimes before his mood may change. They make a plan to check in once a week. In 4 weeks, John’s PHQ-9 score has gone from an 18 to a 15, and he is tolerating the sertraline without any problems. Dr. Brown, the consulting psychiatrist, recommends they titrate the dose to a higher level and continue to monitor John’s response. Dr. Stevens, the PCP, writes a new prescription for John; Ms. Cook confirms that he picks it up at the pharmacy and takes it; and after another 4 weeks, his PHQ-9 is 13. John reports that he is feeling better and has applied for a new job. He and his wife are fighting less, and they are talking about having him move back in. In spite of these gains, however, Ms. Cook discusses John’s remaining symptoms of prominent guilt and negative self-worth and poor quality sleep, energy, and concentration coupled to overeating—all of which contribute to his current score. They formulate a plan to begin more regular exercise. Because his PHQ-9 is still above 5, Dr. Brown’s advice is to continue to titrate the sertraline to the maximum daily dosage, noting his steady improvements.

Four weeks later, John’s PHQ-9 score is 5. He reports that he feels like his old self again, has moved back in with his wife, is exercising more regularly now, and starting to lose some excess weight. Two months after John achieved early remission from his depression, Ms. Cook calls him for a routine check-in. He notes that he stopped taking the sertraline for a couple of weeks right after their last conversation and had a relapse of some of his symptoms. His PHQ-9 score has jumped from 5 to 13, and John is feeling embarrassed and shameful.

He resumed his sertraline at 200 mg about a month ago but still struggles with energy and has stopped his workout routine. Dr. Brown suggests that they augment the sertraline with bupropion, and Dr. Stevens writes the prescription for John. One month later, John’s PHQ-9 score is 10, and Ms. Cook engages him with Behavioral Activation focused on his exercise regimen again. They discuss the cycle of inaction, guilt, and depression, and John agrees to experiment with a different workout regimen and assess his mood. Dr. Stevens automatically adjusts his bupropion to a higher level since he is tolerating it well, and 1 month later John’s PHQ-9 score is 4.

Mickey @ 4:13 PM