evidence of professional credibility…

Posted on Monday 14 November 2016


PSYCHIATRICNEWS
by Anne Huben-Kearney, R.N., B.S.N., M.P.A.
October 31, 2016

The importance of good documentation cannot be overstated. Documentation can provide evidence of care provided or not provided; serve to promote patient safety and minimize error; ensure regulatory, accreditation, and reimbursement compliance; provide an effective defense in a medical malpractice claim; and even prevent a claim from going forward. The medical record, whether in paper or electronic form, is a legal document. The medical record is…
  • Considered the primary tool for communication among the psychiatrist and other care providers.
  • Considered an accurate reflection of the care provided.
  • Used to reconstruct the care provided.
  • Provides evidence of professional credibility.
  • Scrutinized by the plaintiff and defense attorneys.

Documentation includes the notes in the psychiatric medical record itself, including informed consent, telephone messages/responses, prescriptions, and, if used, email communication with the patient.

Documentation should have these characteristics:
  • Clear: Document using specific, factual, and objective language. Avoid language that is speculative, opines, or is subjective in nature.
  • Comprehensive: Document all facts relevant to the course of treatment, patient condition, and response to treatment.
  • Concurrent: Document as soon as possible but never in advance. If there is a late entry, label the note as such.
  • Credible: Document in a professional, appropriate manner. Never alter the medical record because no matter how good and appropriate the care, an altered record is indefensible.
The basic elements for documentation include the following:
  • A thorough medical history: Is the patient’s medical, psychiatric, and social history, as well as the family medical and psychiatric history, thoroughly documented?
  • Relevant information regarding diagnosis and treatment: Is the assessment, diagnosis, and treatment plan recorded for each patient visit? Is there evidence of recognition and timely interventions upon a change in the patient’s condition? Are telephone messages and the responses documented with dates and times? If the patient’s insurer audits the medical record, will the documentation support the billing practices?
  • Assessment of the patient’s risk for suicide or possible violence toward others: Is the risk of suicide or violence toward others assessed and addressed throughout treatment?
  • Informed consent: Is there documentation that consent was obtained from the patient at the start of treatment? When new medications were started?
  • Medications: Are the medication prescriptions complete with medication name, dosage, and frequency? Is there evidence that side effects are consistently monitored, including blood levels as appropriate? If a medication is used off-label or carries a black-box warning, manufacturer’s warning, or FDA advisory, is the documentation explicit regarding discussion of the potential benefits, risks, side effects, and alternatives?
  • Compliance or noncompliance with treatment: Is the description of the patient’s compliance to the treatment plan, medications as ordered, and follow-up care noted in an objective manner without labeling?
  • Formal consultations: Is the consultation with another provider, including the specifics regarding the patient’s diagnosis and treatment, the reason for the consultation, the name of the consultant, and recommendations, recorded in the notes?
Document thoroughly and concisely. Failure to document is not only unethical but can lead to license revocation, restriction, or disciplinary actions and also the inability to defend a malpractice claim.

Allied World, through its subsidiaries, is a global provider of innovative property, casualty and specialty insurance and reinsurance solutions. Allied World is the APA-endorsed carrier for the professional liability program through its strategic relationship with the American Professional Agency Inc., the Program Administrator. This information is provided as a risk management resource and should not be construed as legal, technical, or clinical advice. Consult your professional advisors or legal counsel for guidance on issues specific to you. This material may not be reproduced without the permission of Allied World. Risk management services are provided by or arranged through AWAC Services Co., a member company of Allied World. Anne Huben-Kearney, R.N., B.S.N., M.P.A., is assistant vice president of the Psychiatric and Healthcare Risk Management Group of AWAC Services Company, a member company of Allied World.
Medicine changed during my time in grade, more than I could’ve ever dreamed half a century ago when I graduated from medical school. When people say things like that, they’re usually talking about the scientific advances. At the time I was an Intern, we were just beginning to realize that patients died from arrhythmias in the period just after a heart attack and had begun to monitor the EKG – a major advance. Now there’s a monitor in every hospital room, but in the period just after a heart attack, the patients don’t go to their rooms. They usually go to a cath lab for a stent that opens the clogged artery [often for good].

But those changes were predictable. Medical science regularly marches upward. The changes I’m talking about are things like this article from the PSYCHIATRICNEWS focusing on Risk Management. Fifty years ago, had you handed me an article about Risk Management, I would’ve been curious to know what it was about – assuming that it would be about the risk patients faced from their diseases, or maybe their treatments. It wouldn’t have ever occurred to me that it would be about managing the doctors’ risk from the patients. In the first paragraph of the article where it gives the rationale for this kind of documentation, it mentions promote patient safety and minimize error in passing, but most of it is about preemptively heading off charges of negligence and/or malpractice. And it ends with Failure to document is not only unethical but can lead to license revocation, restriction, or disciplinary actions and also the inability to defend a malpractice claim – suggesting that one should Provides evidence of professional credibility in every note.

I suppose this kind of micro-documentation might make sense if the time allotted for a patient visit included the time it takes to write this kind of note, but that’s not the case. And we might add in the use of an Electronic Medical Record [EMR] system built to contain all of this information [most such systems don’t have psychiatry in mind]. And perhaps others are more comfortable than I am being a clinician who is looking primarily at a computer screen. But those are just a few examples of the ground clutter choking medical practice. There are plenty of other things – things like screening [BP, P, PHQ-9, etc.].

As a free clinic staffed by retirees with only one paid employee [the Director], our little clinic was a great place to work even though we operated on a wing, a prayer, donated and generic drugs, and old people. Then came the decision to take insurance from those that had it – to get certified. Over the last year, the volunteer physicians, nurses, pharmacists, etc. have all left [except for me] – and I’m finally mustering my way out over the next 3 or 4 months. It has been heartbreaking to see things change so quickly in spite of the well-meaning powers that be who tried to keep it from happening. I naively thought I could Ostrich my way through it, but I can’t bring it off in spite of trying.

And, by the way, being asked to give evidence of professional credibility in case of a lawsuit in every note takes time away from actually being professionally credible…
Mickey @ 2:25 PM

another possibility…

Posted on Sunday 13 November 2016

We live in an age of Meta-Analyses and Systematic Reviews [studies of studies]. Some, like watchdog John Ioannidis, lament their frequency as redundant and/or unnecessary [The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses]. But at least in the domain of neuroscience, I see these meta-analyses as an attempt at oversight in an area where advanced technologies have been applied to clinical problems without yielding much in the way of clarity. An example:
Meta-analyses of Neuroimaging Studies
by Veronika I. Müller, PhD; Edna C. Cieslik, PhD; Ilinca Serbanescu, MSc; Angela R. Laird, PhD; Peter T. Fox, MD; and Simon B. Eickhoff, MD
JAMA Psychiatry. Published online November 9, 2016.

Importance During the past 20 years, numerous neuroimaging experiments have investigated aberrant brain activation during cognitive and emotional processing in patients with unipolar depression [UD]. The results of those investigations, however, vary considerably; moreover, previous meta-analyses also yielded inconsistent findings.
Objective To readdress aberrant brain activation in UD as evidenced by neuroimaging experiments on cognitive and/or emotional processing.
Data Sources Neuroimaging experiments published from January 1, 1997, to October 1, 2015, were identified by a literature search of PubMed, Web of Science, and Google Scholar using different combinations of the terms fMRI [functional magnetic resonance imaging], PET [positron emission tomography], neural, major depression, depression, major depressive disorder, unipolar depression, dysthymia, emotion, emotional, affective, cognitive, task, memory, working memory, inhibition, control, n-back, and Stroop.
Study Selection Neuroimaging experiments [using fMRI or PET] reporting whole-brain results of group comparisons between adults with UD and healthy control individuals as coordinates in a standard anatomic reference space and using an emotional and/or cognitive challenging task were selected.
Data Extraction and Synthesis Coordinates reported to show significant activation differences between UD and healthy controls during emotional or cognitive processing were extracted. By using the revised activation likelihood estimation algorithm, different meta-analyses were calculated.
Main Outcomes and Measures Meta-analyses tested for brain regions consistently found to show aberrant brain activation in UD compared with controls. Analyses were calculated across all emotional processing experiments, all cognitive processing experiments, positive emotion processing, negative emotion processing, experiments using emotional face stimuli, experiments with a sex discrimination task, and memory processing. All meta-analyses were calculated across experiments independent of reporting an increase or decrease of activity in major depressive disorder. For meta-analyses with a minimum of 17 experiments available, separate analyses were performed for increases and decreases.
Results In total, 57 studies with 99 individual neuroimaging experiments comprising in total 1058 patients were included; 34 of them tested cognitive and 65 emotional processing. Overall analyses across cognitive processing experiments [P > .29] and across emotional processing experiments [P > .47] revealed no significant results. Similarly, no convergence was found in analyses investigating positive [all P > .15], negative [all P > .76], or memory [all P > .48] processes. Analyses that restricted inclusion of confounds [eg, medication, comorbidity, age] did not change the results.
Conclusions and Relevance Inconsistencies exist across individual experiments investigating aberrant brain activity in UD and replication problems across previous neuroimaging meta-analyses. For individual experiments, these inconsistencies may relate to use of uncorrected inference procedures, differences in experimental design and contrasts, or heterogeneous clinical populations; meta-analytically, differences may be attributable to varying inclusion and exclusion criteria or rather liberal statistical inference approaches.
This meta-analysis focuses on neuroimaging studies of patients where DSM/ICD criteria were used to define the target cohort of depressed subjects. While this classification has survived for now 36 years, I know of no evidence that these criteria define anything like a discrete clinical entity.
Criteria Related to the Investigation Participants
Included experiments statistically contrasted neural activation between an adult [>18 years] UD group [based on DSM-IV-TR and DSM-5 or International Statistical Classification of Diseases and Related Health Problems, Tenth Revision] and a group of healthy controls…
The studies examined were ones that compared brain activity measured by PET scans or fMRI studies between their depressed cohort and normals. That presumes a hypothesis that the experimental subjects would have something detectable in their brains either causing or as a result of their depression. Thus there were two bedrock assumptions: [1] a unitary clinical entity [2] that has something to do with the brain.
Results
A total of 57 studies with 99 individual neuroimaging experiments, comprising 1058 patients were included in this analysis. There were 34 cognitive processing experiments and 65 emotional processing experiments; 50 experiments reported increased brain activity in UD, and 49 experiments reported decreased brain activity in UD.

Meta-analyses Across Emotional Experiments
None of the 9 emotional meta-analyses revealed any significant results {all emotional: 65 experiments [P > .69]; increases: 33 experiments [P > .47]; decreases: 32 experiments [P > .58]; negative valence: 33 experiments [P > .76]; negative valence increases: 19 experiments [P > .12]; positive valence: 19 experiments [P > .15]; emotional faces: 32 experiments [P > .80]; negative emotional faces: 18 experiments [P > .75]; sex discrimination: 17 experiments [P > .41]}. Figure 4A displays the distribution of foci of the emotional analyses.

Meta-analyses Across Cognitive Experiments
None of the 4 cognitive meta-analyses revealed any significant results {all cognitive: 34 experiments [P > .63]; increases: 17 experiments [P > .29]; decreases: 17 experiments [P > .97]; memory: 19 experiments [P > .48]}. Figure 4B displays the distribution of foci of the cognitive analyses.

Meta-analyses Controlling for Confounds
Analyses restricted to [1] patients not receiving medication, [2] patients without comorbidity, and [3] patients without late-life or geriatric depression revealed similar results … When restricting the analyses to experiments using corrected statistics [COR], the analyses across experiments of negative emotional processing revealed significant convergence in the left thalamus extending into hippocampus [x = -18, y = -36, z = -4; 5 experiments contributing]. All other analyses did not reveal significant convergence {COR all emotional: 38 experiments [P > .82]; COR increases emotional: 20 experiments [P > .27]; COR decreases emotional: 18 experiments [P > .23]; COR all cognitive: 23 experiments [P > .61]}…
Said the accompanying editorial:
Müller et al provide a technically sophisticated and informative set of meta-analyses examining altered brain activity in adults with symptomatic unipolar depression. The striking overall finding of their analyses is the lack of consistent group differences across studies. The absence of replicable effects across studies remained even when they addressed a number of potentially key confounds, such as examining only patients not receiving medication, patients without comorbidities, and patients without late-life or geriatric depression…
Don’t think that reading this blog post is the definitive take on this comprehensive and complex meta-analysis. Read the article and its supplements. What you’ll find is a window into the heavy-duty math and numerous assumptions that go into these neuroimaging studies. What the article says is that we don’t know anything about the altered brain activity in unipolar depression. Reading this, I’m not even sure that we know if there is any altered brain activity in unipolar depression. The random scattering of hot and cold spots from the studies along with the dramatically insignificant p-values isn’t much of a confidence builder. The authors give a laundry list of possible reasons for the non-replicability and non-convergence, suspecting that methodological problems top the list. I would start my own list at the input end of things and quote historian Edward Shorter from Before Prozac:
"Bottom Line: Major Depression doesn’t exist in Nature. A political process in psychiatry created it…"
Major Depressive Disorder, Unipolar Depression, hasn’t played out as an entity; it’s a symptom complex [see which nail?…]. Is it a brain disease? So far, there’s some indirect evidence that there are perhaps several biological entities in the mix in small doses [Melancholia, Depressive Episodes in Manic Depressive Illness], and who knows what else? The point being that there may be methodological problems with these neuroimaging studies, sure enough. But that aside, the questionable diagnostic and etiological speculations before they even turn on the MRI machine are probably more to the point. I would venture that a strong possibility to be considered is that they’re chasing a biomedical etiology in a heterogeneous population of people with a symptom complex primarily generated in the biographical and psychosocial domain of living…
Mickey @ 9:00 AM

another decision…

Posted on Sunday 6 November 2016

At least for the moment, the polls seem to be leveling out a bit …

… and I found myself able to think [briefly] about other things. And the other thing on the front burner was a decision I made last week. After eight years volunteering at a rural charity clinic, I’ve decided to wind things down and retire for good. If it were for age or health considerations, I wouldn’t be mentioning it. But that’s not the reason.

Our area has a number of retirement communities. At some point, a few retired physicians opened a free clinic for the uninsured, staffed by an army of retired volunteers with a wide variety of medical skills. They raised the money with golf tournaments and donations, and created something I thought was unique – an interface between the retirees [from the "1%" retirement communities] and the patients from the other end of the financial spectrum. But with the coming of the national effort for universal insurance, the powers-that-be decided to change the clinic. We would still see the remaining uninsured, but collect the insurance reimbursement from those who were insured. And so we have a new building replacing our old "trailers." It’s staffed now by employees with far fewer volunteers. We have added Electronic Medical Records [EMR] and a jillion other trappings of a clinic that has been "certified" to receive the insurance "payments."

Working there was a major change from my urban referral psychotherapy practice – more back to basics. The patients often had grown up bounced from pillar to post, exposed to abusive situations, substance abuse, and other craziness. At first, I was horrified at the medicine regimens people were on [almost everyone was "on" something]. The focus of this blog arose out of my explorations of these alarming medication practices. So the first order of business was to deal with the medication quagmires. Since I was the only act in town, I was also a social worker, an addiction counselor, a "common sense" psychotherapist, a medication manager, a neurologist, a grief counselor, etc. – literally, a jack of many trades. Over time, I got my rhythm and found that I was able to do a lot more for patients in this setting than I might have ever imagined – even with infrequent contact.

I guess I thought I could ignore the changes that came with our new status, but I can’t. Instead of a brief written note and handwritten prescriptions, I have to do it on a computer using a system designed by someone who cut class on the day they taught "user friendly." Apparently the system pays extra if vital signs are recorded with every visit, along with a waiting-room PHQ-9. There are frequent knocks on the door for some procedural something-or-another. Absolutely everything is just so much harder, and I’m sure that I now spend more than half my time doing these administrative, documentation tasks. The long and short of it is, instead of doing what I know how to do, I spend my time doing things that I don’t know how to do [and see no reason to be doing].

I could mount a reasonably convincing argument that the coming of "the system" with its rules and procedures makes it impossible for me to do my job, but that’s not really why I decided to muster out. I just can’t find a way not to be constantly pissed off. At times I think things like "a leopard can’t change its spots" or "you can’t teach an old dog new tricks," but most of the time, I think they’ve really messed up something that was working just fine, and I just don’t want to do it "their way" [whoever they are]. It’s as simple as, if you’re willing to work for free, and you’re 75 years old, you ought to be able to do it "your way." Obviously, the other side of the coin is that I’ll be abandoning the patients and that has weighed heavily, so I’m sticking around for a while to give the clinic time to get something else in place. I feel sad, but also relieved…
Mickey @ 8:00 AM

cute and telling…

Posted on Thursday 3 November 2016


by Kaplan RM and Irvin VL.
PLoS One. 2015 10[8]:e0132382.

BACKGROUND: We explore whether the number of null results in large National Heart Lung, and Blood Institute [NHLBI] funded trials has increased over time.
METHODS: We identified all large NHLBI supported RCTs between 1970 and 2012 evaluating drugs or dietary supplements for the treatment or prevention of cardiovascular disease. Trials were included if direct costs >$500,000/year, participants were adult humans, and the primary outcome was cardiovascular risk, disease or death. The 55 trials meeting these criteria were coded for whether they were published prior to or after the year 2000, whether they registered in clinicaltrials.gov prior to publication, used active or placebo comparator, and whether or not the trial had industry co-sponsorship. We tabulated whether the study reported a positive, negative, or null result on the primary outcome variable and for total mortality.
RESULTS: 17 of 30 studies [57%] published prior to 2000 showed a significant benefit of intervention on the primary outcome in comparison to only 2 among the 25 [8%] trials published after 2000 [χ2=12.2, df=1, p=0.0005]. There has been no change in the proportion of trials that compared treatment to placebo versus active comparator. Industry co-sponsorship was unrelated to the probability of reporting a significant benefit. Pre-registration in clinicaltrials.gov was strongly associated with the trend toward null findings.
CONCLUSIONS: The number of NHLBI trials reporting positive results declined after the year 2000. Prospective declaration of outcomes in RCTs, and the adoption of transparent reporting standards, as required by clinicaltrials.gov, may have contributed to the trend toward null findings.
Kind of a cute and telling study. The National Heart, Lung, and Blood Institute plotted the standardized outcome [Relative Risk of the Primary Outcome] against the year of the study for all their funded studies from 1970 to 2012. Look what happened when ClinicalTrials.gov came along in 2000 and they started preregistering their Primary Outcome Variables a priori. Made honest scientists out of them! Speaking of cute, the article that pointed me to it said, "Preregistration of clinical trials causes medicines to stop working!"
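For the curious, the χ² reported in the abstract can be reproduced directly from the trial counts [17 of 30 positive before 2000 vs. 2 of 25 after]. A minimal sketch in plain Python – the function name is mine, and the match to the published 12.2 assumes Yates’s continuity correction was used:

```python
import math

def yates_chi2(a, b, c, d):
    """Chi-square statistic for a 2x2 table [[a, b], [c, d]]
    with Yates's continuity correction, plus its p-value (df = 1)."""
    n = a + b + c + d
    chi2 = n * (abs(a * d - b * c) - n / 2) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d))
    # For df = 1, the chi-square survival function is erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Pre-2000: 17 positive, 13 not; post-2000: 2 positive, 23 not
chi2, p = yates_chi2(17, 13, 2, 23)
print(round(chi2, 1), round(p, 4))  # 12.2 0.0005
```

Which agrees with the abstract’s χ2=12.2, df=1, p=0.0005 – the decline in positive trials after 2000 is not a subtle effect.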
Mickey @ 8:03 PM

efs respite…

Posted on Thursday 3 November 2016

I’m not much of a sports fan, but last night was fun and a major relief from Election Fatigue Syndrome – so I pass on a few more in case the next five days are getting to you like they tend to get to me…

Thursday 8:30 PM November 3, 2016 – Atlanta Falcons 43, Tampa Bay Buccaneers 28
Friday 8:00 PM November 4, 2016 – New York Knicks 117, Chicago Bulls 104
Saturday 8:00 PM November 5, 2016 – Alabama Crimson Tide 10, LSU Tigers 0
Sunday 1:00 PM November 6, 2016 – Philadelphia Eagles at New York Giants
Monday 8:30 PM November 7, 2016 – Buffalo Bills at Seattle Seahawks
Mickey @ 2:00 PM

an omen?…

Posted on Thursday 3 November 2016

Mickey @ 1:13 AM

in polite company…

Posted on Wednesday 2 November 2016

These days, apparently in polite company, we call lying "bias rampant"…
European College of Neuropsychopharmacology
Conference Coverage
by Bruce Jancin
October 28, 2016

VIENNA – Janneke A. Bastiaansen, PhD, has some advice for clinicians and researchers as they peruse the published literature on clinical trials of medication or psychotherapy for major depressive disorder: Don’t believe everything you read. “Be critical. Use your critical mind,” she urged at the annual congress of the European College of Neuropsychopharmacology.

The results of her analysis of 105 clinical trials of antidepressant drugs and 142 studies of psychotherapy indicated that the literature is rife with four types of bias: publication, outcome reporting, spin, and citation bias. “The quality of the evidence base is vital. We base our clinical decisions on what’s out there in the literature. And I think it’s really important to know that there are various biases that can color the literature,” said Dr. Bastiaansen, a psychologist at the University of Groningen, the Netherlands.

She took a closer look at 105 clinical trials of antidepressant drugs registered with the U.S. National Institutes of Health at clinicaltrials.gov. Fifty-three reported positive findings, and 52 were negative. Fifty-two of the 53 positive trials were ultimately published, as were only 25 of the 52 negative studies. That’s a sterling example of publication bias.

Upon careful scrutiny of the 25 negative trials that were published, 10 were misleadingly reported as positive studies. The investigators either switched out the prespecified primary outcome previously filed with NIH and promoted a positive secondary outcome to primary outcome status because the original primary outcome was negative, or they omitted the negative outcomes altogether. That’s outcome-reporting bias. Of the 15 published negative drug trials that were free of outcome-reporting bias, the authors of 10 of the studies employed “spin,” using phrases such as “the treatment was numerically superior.” Thus, only 5 of the 25 published negative clinical trials unambiguously reported that the studied treatment was not effective.

“Here the message is that, when you read a paper, look at the results, come up with your own conclusion, and then compare it with the conclusion of the authors, because sometimes they’ve colored it in a more positive way,” Dr. Bastiaansen said in an interview. Citation bias is the phenomenon whereby positive clinical trials are cited more frequently than published negative trials. “As a clinician, if you look at the literature and print out every paper that’s out there on a given antidepressant drug for major depression, and you look at that pile, you’ll think: ‘Ah, the literature is really strong about this treatment effect,’ because positive papers selectively cite other positive papers,” she continued.
… and psychotherapy?
The pharmaceutical industry takes a lot of heat for selectively burying company-sponsored negative trials, but the literature on psychotherapy for major depression is actually more opaque. “A lot of people aim their arrows at the pharmaceutical industry and say: ‘Everything’s bad about pharma,’ but actually, you see bias in every field. You see it in the trials of psychotherapy. It’s very important to know that it’s ubiquitous. The positive side of the antidepressant drug trials is that there’s this standard database [clinicaltrials.gov], and you can use it to check out what trial is published and what’s not. It’s not the case for psychotherapy trials. I think we need a mandatory registry for clinical trials of psychotherapy as well,” Dr. Bastiaansen said. Of the 142 psychotherapy studies, 49 were negative, but the abstracts of only 12 of those 49 concluded that psychotherapy was not more effective than a control.
One wonders how many such papers we have to hear before we get their messages. Here’s another – this time harms [abstract only]:
by Andreas Ø Bielefeldt, Pia B. Danborg, and Peter C. Gøtzsche
Journal of the Royal Society of Medicine. 2016 109[10]:381-392.

Objective: To quantify the risk of suicidality and violence when selective serotonin and serotonin-norepinephrine reuptake inhibitors are given to adult healthy volunteers with no signs of a mental disorder.
Design: Systematic review and meta-analysis.
Main outcome measure: Harms related to suicidality, hostility, activation events, psychotic events and mood disturbances.
Setting: Published trials identified by searching PubMed and Embase and clinical study reports obtained from the European and UK drug regulators.
Participants: Double-blind, placebo-controlled trials in adult healthy volunteers that reported on suicidality or violence or precursor events to suicidality or violence.
Results: A total of 5787 publications were screened and 130 trials fulfilled our inclusion criteria. The trials were generally uninformative; 97 trials did not report the randomisation method, 75 trials did not report any discontinuations and 63 trials did not report any adverse events or lack thereof. Eleven of the 130 published trials and two of 29 clinical study reports we received from the regulatory agencies presented data for our meta-analysis. Treatment of adult healthy volunteers with antidepressants doubled their risk of harms related to suicidality and violence, odds ratio 1.85 [95% confidence interval 1.11 to 3.08, p = 0.02, I² = 18%]. The number needed to treat to harm one healthy person was 16 [95% confidence interval 8 to 100; Mantel-Haenszel risk difference 0.06]. There can be little doubt that we underestimated the harms of antidepressants, as we only had access to the published articles for 11 of our 13 trials.
Conclusions: Antidepressants double the occurrence of events in adult healthy volunteers that can lead to suicide and violence.
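A quick check on the abstract’s arithmetic: the number needed to treat to harm one person is just the reciprocal of the absolute risk difference, so the reported Mantel-Haenszel risk difference of 0.06 gives the NNH quoted above [the abstract rounds to 16]. A minimal sketch, purely illustrative:

```python
# NNH = 1 / absolute risk difference
risk_difference = 0.06  # Mantel-Haenszel risk difference from the abstract
nnh = 1 / risk_difference
print(round(nnh, 1))  # 16.7 -- rounded to 16 in the abstract
```

In other words, on these numbers, one extra person harmed for roughly every 16 healthy volunteers treated.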
An Aside: this second paper gets at something we tried to show in our Study 329 Continuation Phase paper. Here they list a group of suicidality precursors. Sometimes it’s called akathisia. In my mind, I call it Agita. It’s an uncomfortable experienced not-me-ness, not-right-ness, described by patients in many different ways, and it’s closely linked to the violence that sometimes follows. I think it’s more common than we know, because many patients just say "this medicine’s not for me" and stop it [and we just don’t hear about it]. Later, they say, "Oh, I just can’t take that stuff!"

Matching the industry of CROs and ghost-management and ghost-writing enterprises, there’s almost an academic specialty growing to deconstruct these jury-rigged clinical trial reports that have descended on our literature like a horde of locusts – particularly the psychiatric literature. Dr. Bastiaansen in the first paper gives some really good advice; however, she’s speaking at a meeting of specialists and scientists, not the primary care physicians who are prescribing the medicines. I now do everything she says – look up the drugs in ClinicalTrials.gov checking for publication bias, look carefully for evidence of outcome switching, and stay ever alert to spin. But I’m not normal. It’s almost my new career/hobby/obsession. As a practicing doctor, no way would I be able to take the time to do that. I’d be lucky to have time to read the abstracts, much less the whole articles, way much less look them up on ClinicalTrials.gov or Drugs@FDA.

Articles like these are part of a great big wake-up call, and I’m not sure they’re reaching the right audience in the right ways. For the moment, I’ll just add them to the growing catalog, and pick back up next week after the national election is behind us. Right now, diversion time – World Series, Game 7!
Mickey @ 8:22 PM

racing…

Posted on Wednesday 2 November 2016

It’s kind of hard to maintain my focus on the details of the clinical drug trials in a week when that graph is sitting there uncompleted – wondering where it’s going to end up. So…

Mickey @ 12:10 AM

the great race…

Posted on Monday 31 October 2016

Tired of worrying about the historic race? Frequently checking Nate Silver's 538 site? There's a solution to your turmoil…
Change historic races!
Mickey @ 9:30 AM

the frenzied match of ping pong: the last stand…

Posted on Sunday 30 October 2016


by Peter Doshi, associate editor
British Medical Journal 2016; 355:i5543
We all agree that the participants in trials should be randomly assigned to the various arms of the study, and that the conditions of the double-blind design should be maintained. The old pharma trick of not publishing negative studies has been mostly eliminated by required registration. So what's left to complain about? I've been borrowing from Ben Goldacre et al's COMPare study, some of the journal editors' responses, and Peter Doshi's reporting to aim at a particular point – the a-priori-ness of the outcome variables in the analysis of clinical trial results. I see it as the last big hurrah – a place where there seems to be no enduring consensus supporting its importance. That's apparent in some of the editors' responses to COMPare.
Outraged editors

The long-awaited goal of universal registration of trials now seemed achievable, and medical journal editors issued an ultimatum: preregister your trial or forgo publication in our pages. “Honest reporting begins with revealing the existence of all clinical studies, even those that reflect unfavourably on a research sponsor’s product,” a group of influential editors declared. “Unfortunately, selective reporting of trials does occur, and it distorts the body of evidence available for clinical decision-making.” The declaration had enormous impact, and public trial registers remain a key mechanism to prevent investigators from hiding or spinning unfavourable results.

But more than a decade on, a small project from Oxford University’s Centre for Evidence Based Medicine seems to have journal editors eating their own words, with some of the world’s most powerful editors arguing that strict adherence to the registry entry or trial protocol may not always make sense…
Journal dissent

“Upon receipt of COMPare’s initial communication, our editorial team (comprised of physicians and statisticians) thoroughly re-reviewed materials associated with the articles,” Annals’ editor in chief, Christine Laine, told The BMJ. “We concluded that the information reported in the articles audited by COMPare accurately represents the scientific and clinical intent detailed in the protocols.” In notices posted to the journal’s website, Annals editors acknowledged the good intentions of COMPare but warned people to be wary.

“Until the COMPare Project’s methodology is modified to provide a more accurate, complete and nuanced evaluation of published trial reports, we caution readers and the research community against considering COMPare’s assessments as an accurate reflection of the quality of the conduct or reporting of clinical trials”…

NEJM also flunked COMPare’s test. The journal’s editors refused to publicly engage with the group, rejecting all of COMPare’s 20 letters, most with a tightly worded statement saying “we have not identified any clinically meaningful discrepancies between the protocol and the published paper that are of sufficient concern to require correction of the record”… NEJM’s response has a certain irony. The journal’s editor, Jeffrey Drazen, has been a longtime supporter of trial registration, and NEJM was the first of its peers to publish trial protocols alongside trial publications, a practice now followed by JAMA, The BMJ, and most recently – in part thanks to COMPare – Annals.

The BMJ did not escape criticism but ultimately got a green light. COMPare sent rapid responses for two of the three trials evaluated, one of which led to a correction. It was “an example of best practice,” the group said in a blog post. What about JAMA and the Lancet? JAMA rejected all 11 letters the group sent, and the Lancet rejected some but published others…

Frankly, while "a more accurate, complete and nuanced evaluation of published trial reports" sounds really good, it’s well off the mark. The whole point of a clinical trial is objectification, hard-core evidence-based medicine. Of course there are instances where common sense says that a preregistered outcome needs to be reconsidered, but that’s not what’s on the table here. COMPare found that the majority of the articles had some outcome changed – and the changes weren’t flagged or explained.
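The check COMPare runs is conceptually simple, which is part of what makes the editors' resistance so striking: compare the outcomes a trial preregistered against the outcomes its published paper actually reports, in both directions. A minimal sketch of that idea, with hypothetical outcome names standing in for any real trial:

```python
# Sketch of an outcome-switching check in the spirit of COMPare.
# The outcome names below are hypothetical placeholders, not drawn
# from any real registry entry or publication.

registered = {"HAM-D change at 8 weeks", "dropout rate"}       # prespecified
published  = {"HAM-D change at 8 weeks", "CGI-I response"}     # reported

# Outcomes that were prespecified but never reported (possible burial):
silently_dropped = registered - published

# Outcomes that were reported but never prespecified (possible fishing):
silently_added = published - registered
```

Either set being non-empty without an explicit, flagged explanation in the paper is exactly the kind of unacknowledged switch COMPare was writing its letters about.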

So some editors don’t really accept that the a priori declaration of outcome variables is a fundamental element of a scientifically conducted Randomized Clinical Trial [RCT]. That’s what COMPare shows, and that’s what the editors’ responses say as well. Their proper response to COMPare would’ve been "Whoops." In spite of their comments to the contrary, they’re keeping the multi-billion-dollar loophole from being closed – making a "last stand."
Mickey @ 4:43 PM