here’s Linus…

Posted on Monday 16 January 2017

Sometimes it’s the things right under your nose that are the hardest things to see. This is my desktop at home [uncharacteristically uncluttered]. Pretty much standard fare with a couple of 27" monitors connected to a big Windows 10 computer under the desk. But there are some anomalies. Why the two keyboards? And the two mice? And what’s with that little screen on the tripod?


The screen, keyboard, and mouse on the right belong to a Raspberry Pi, a little $35 computer that runs Raspbian [a variant of the Linux operating system – which is free], has a full range of software packages [which are free], can be programmed using the Python language [a free download], and has a hardware interface to the outside world for hackers to prototype all kinds of stuff [like robots]. There’s a Raspberry Pi on the space station overhead. The Android OS that runs your phone is built on Linux, and the Apache software behind a huge share of the world’s web servers usually runs under Linux too.
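For the curious, here’s the flavor of the thing – a minimal Python sketch of the sort that gets beginners hooked. It assumes nothing more than an LED [and resistor] wired between one of the Pi’s GPIO pins and ground [the pin number here is arbitrary]; the RPi.GPIO library comes with Raspbian:

    # Blink an LED from a Raspberry Pi - a minimal sketch.
    # Assumes an LED and resistor wired between GPIO 18 and ground.
    import time
    import RPi.GPIO as GPIO

    LED_PIN = 18                                # BCM channel numbering

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(LED_PIN, GPIO.OUT)

    try:
        while True:
            GPIO.output(LED_PIN, GPIO.HIGH)     # LED on
            time.sleep(0.5)
            GPIO.output(LED_PIN, GPIO.LOW)      # LED off
            time.sleep(0.5)
    except KeyboardInterrupt:
        GPIO.cleanup()                          # release the pins on Ctrl-C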


In the 1980s, when the personal computer burst onto the scene, you could buy programs for your computer, but they were compiled – meaning that you couldn’t see the code and you couldn’t change anything about them. The software producers essentially had a monopoly. The Open Source Movement arose on multiple fronts throughout the next few decades and is too complex to detail here, but the core idea is simple. If you buy a piece of Open Source software, you get the compiled program AND the source code. You can do with it what you please. There are many variants but that’s the nuts and bolts of it. Linus Torvalds, a Finnish student, wrote a UNIX-like operating system [Linux] and released it Open Source [which put this movement on the map]. Netscape did the same thing. The idea is huge – that it’s fine to be able to sell your work [programs], but it’s not fine to keep the computer code under lock and key.


Before I retired, computers and programming were my hobbies, and the source of a lot of fun. I didn’t need either of them for my work [psychoanalytic psychotherapy] – they were for play. I gradually moved everything to the Linux system and Open Source. But when I retired, my first project involved georegistering old maps and projecting them onto modern topographic maps, and the only software available ran under Windows. And then with this blog, I couldn’t find an Open Source graphics program that did what I wanted. So I’ve run Windows machines now for over a decade. But I just got this little Raspberry Pi running, and I can already see that I’m getting my hobby back. If it’s not intuitive what this has to do with Randomized Clinical Trials or the Academic Medical Literature, I’ll spell it out here in a bit. But for right now – here’s Linus:


Mickey @ 6:34 AM

not research…

Posted on Saturday 14 January 2017

I spent a day with the article in the last post [A manifesto for reproducible science]. It lived up to my initial impression and I learned a lot from reading it. Great stuff! But my focus here is on a particular corner of this universe – the industry-funded Clinical Trial reports of drugs that have filled our medical journals for decades. And I’m not sure that this manifesto is going to add much. Here’s an example of why I say that:

Looking at one of the clinical trial articles of SSRIs in adolescents, there was something peculiar [Wagner KD, Ambrosini P, Rynn M, et al. Efficacy of sertraline in the treatment of children and adolescents with major depressive disorder: two randomized controlled trials. JAMA. 2003;290:1033-1041.]. What does "two randomized controlled trials" mean? Well, it seems that there were two identical studies that were pooled for this analysis. Why? They didn’t say… The study was published in August 2003, and there were several letters along the way asking about this pooling of two studies. Then in April 2004, there was this letter:
    To the Editor: Dr Wagner and colleagues reported that sertraline was more effective than placebo for treating children with major depressive disorder and that it had few adverse effects. As one of the study group investigators in this trial, I am concerned about the way the authors pooled the data from 2 trials, a concern that was raised by previous letters critiquing this study. The pooled data from these 2 trials found a statistically marginal effect of medication that seems unlikely to be clinically meaningful in terms of risk and benefit balance.

    New information about these trials has since become available. The recent review of pediatric antidepressant trials by a British regulatory agency includes the separate analysis of these 2 trials. This analysis found that the 2 individual trials, each of a good size [almost 190 patients], did not demonstrate the effectiveness of sertraline in treating major depressive disorder in children and adolescents.

    E. Jane Garland, MD, FRCPC
    Department of Psychiatry
    University of British Columbia
    Vancouver

So the reason they pooled the data from the two studies appears to be that neither was significant on its own, but pooling them boosted the power enough to produce a statistically significant outcome [see power calculation below]. Looking at the graph, you can see how slim the pickings were – significant only in weeks 3, 4, and 10. And that bit of deceit is not my whole point here. Add in how Dr. Wagner replied to Dr. Garland’s letter:
    In Reply: In response to Dr Garland, our combined analysis was defined a priori, well before the last participant was entered into the study and before the study was unblinded. The decision to present the combined analysis as a primary analysis and study report was made based on considerations involving use of the Children’s Depression Rating Scale [CDRS] in a multicenter study. Prior to initiation of the 2 pediatric studies, the only experience with this scale in a study of selective serotonin reuptake inhibitors was in a single-center trial. It was unclear how the results using this scale in a smaller study could inform the power evaluation of the sample size for the 2 multicenter trials. The combined analysis reported in our article, therefore, represents a prospectively defined analysis of the overall study population…

This definition ["well before the last participant was entered into the study and before the study was unblinded"] is not what a priori means. A priori means "before the study is ever even started in the first place" – and that’s what prospective means too. She is rationalizing the change by redefining the terms.

The problem here wasn’t that Pfizer, maker of Zoloft, didn’t have people around who knew the ways of science. If anything, it was the opposite problem. They had or hired people who knew the ways of science well enough to manipulate them to the company’s advantage.

  • Why did they have two identical studies? Best guess is that they were going for FDA Approval, and in a hurry. You need two positive studies for FDA Approval.
  • Why would they decide to pool them somewhere along the way? Best guess is that things weren’t going well and pooling them increases the chance of achieving significance with a smaller difference between drug and placebo [see the sketch after this list].
  • How would they know that things weren’t going well if the study was blinded? You figure it out. It isn’t that hard.
  • Why would they say that a priori means "well before the last participant was entered into the study and before the study was unblinded" when that’s not what it means? That isn’t that hard to figure out either.
  • So why not just say that they cheated? Because I can’t prove it [plausible deniability].
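To see how pooling works its magic, here’s a toy demonstration in Python. The responder counts are made up for illustration [the real trials used CDRS-R scores, not response rates], but the arithmetic is the point – two trials that each miss statistical significance can clear the bar when combined:

    # Toy two-proportion z-test: two identical trials, ~190 patients each
    # [95 per arm]. The responder counts below are invented for illustration.
    from math import sqrt
    from scipy.stats import norm

    def two_sided_p(drug_resp, drug_n, pbo_resp, pbo_n):
        """Two-proportion z-test with a pooled standard error."""
        p1, p2 = drug_resp / drug_n, pbo_resp / pbo_n
        p = (drug_resp + pbo_resp) / (drug_n + pbo_n)     # pooled proportion
        se = sqrt(p * (1 - p) * (1 / drug_n + 1 / pbo_n))
        z = (p1 - p2) / se
        return 2 * norm.sf(abs(z))                        # two-sided p-value

    print(two_sided_p(69, 95, 59, 95))      # trial 1: p ~ 0.12, not significant
    print(two_sided_p(69, 95, 59, 95))      # trial 2: p ~ 0.12, not significant
    print(two_sided_p(138, 190, 118, 190))  # pooled:  p ~ 0.03, "significant"

The drug-placebo difference is identical in all three lines; only the sample size changes. That’s all pooling buys you.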

I’m not sure that the industry-funded Clinical Trials of drugs should even be considered research. They’re better seen as product testing, and the whole approach should reflect that designation. Everyone involved is biased – by definition. The point of the enterprise isn’t to answer a question; it’s to end up saying this, by whatever route gets you there:
Conclusion The results of this pooled analysis demonstrate that sertraline is an effective and well-tolerated short-term treatment for children and adolescents with MDD.
And the only way to ensure that the outcome parameters aren’t changed is to require preregistration with a date-stamped certified Protocol and Statistical Analysis Plan on file before the study begins – a priori. What if they change their minds? Start a new study. Product testing may be science, but it’s not research. And we may have more oversight on our light-bulbs and extension cords than we have on our medications.
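None of this requires exotic technology. Here’s a minimal sketch of what date-stamped could mean in practice – record a cryptographic digest of the documents before the first patient is enrolled, and any later edit becomes detectable [the file names are hypothetical; a real registry would do this institutionally]:

    # A tamper-evident date stamp for trial documents, in miniature.
    # A registry records each digest and the date before enrollment begins;
    # any later edit to a file changes its digest. File names are hypothetical.
    import hashlib
    from datetime import datetime, timezone

    def digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    for doc in ["protocol.pdf", "statistical_analysis_plan.pdf"]:
        print(datetime.now(timezone.utc).isoformat(), doc, digest(doc))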

And after all of that, the Zoloft study is still in Dr. Wagner’s repertoire at the APA Meeting some 13 years later…
PsychiatricNews
by Aaron Levin
June 16, 2016

… As for treatment, only two drugs are approved for use in youth by the Food and Drug Administration [FDA]: fluoxetine for ages 8 to 17 and escitalopram for ages 12 to 17, said Wagner. “The youngest age in the clinical trials determines the lower end of the approved age range. So what do you do if an 11-year-old doesn’t respond to fluoxetine?” One looks at other trials, she said, even if the FDA has not approved the drugs for pediatric use. For instance, one clinical trial found positive results for citalopram in ages 7 to 17, while two pooled trials of sertraline did so for ages 6 to 17.

Another issue with pediatric clinical trials is that 61 percent of youth respond to the drugs, but 50 percent respond to placebo, compared with 30 percent among adults, making it hard to separate effects. When parents express anxiety about using SSRIs and ask for psychotherapy, Wagner explains that cognitive-behavioral therapy [CBT] takes time to work and that a faster response can be obtained by combining an antidepressant with CBT. CBT can teach social skills and problem-solving techniques as well. Wagner counsels patience once an SSRI is prescribed.

A 36-week trial of a drug is too brief, she said. “The clock starts when the child is well, usually around six months. Go for one year and then taper off to observe the effect.” Wagner suggested using an algorithm to plot treatment, beginning with an SSRI, then trying an alternative SSRI if that doesn’t work, then switching to a different class of antidepressants, and finally trying newer drugs. “We need to become much more systematic in treating depression,” she concluded.
Mickey @ 12:00 PM

a must·read!…

Posted on Friday 13 January 2017

Whatever you’re reading right now [including this blog], you might just put a bookmark in it and read this paper. Besides being written by luminaries [see scathing indictments… and the hope diamond…], it’s an encyclopedic proposal that deserves everyone’s attention:
by Marcus R. Munafò, Brian A. Nosek, Dorothy V. M. Bishop, Katherine S. Button, Christopher D. Chambers, Nathalie Percie du Sert, Uri Simonsohn, Eric-Jan Wagenmakers, Jennifer J. Ware, and John P. A. Ioannidis
Nature Human Behaviour. Published 10 January 2017. Open access.

Improving the reliability and efficiency of scientific research will increase the credibility of the published scientific literature and accelerate discovery. Here we argue for the adoption of measures to optimize key elements of the scientific process: methods, reporting and dissemination, reproducibility, evaluation and incentives. There is some evidence from both simulations and empirical studies supporting the likely effectiveness of these measures, but their broad adoption by researchers, institutions, funders and journals will require iterative evaluation and improvement. We discuss the goals of these measures, and how they can be implemented, in the hope that this will facilitate action toward improving the transparency, reproducibility and efficiency of scientific research.


[abbreviated and reformatted from the paper]

From my perspective, there’s nothing more important in Medicine right now than reclaiming the academic medical literature from its captivity by the paramedical industries and others who are called stakeholders. But the problem in academic science is bigger than just Medicine. In other fields, it goes under the name The Reproducibility Crisis.

This paper is too important to whip off a blog post. So I’m going to let it sit for a bit before commenting and picking out the specific recommendations that have to do with my corner of the world – Randomized Clinical Trials of medications, specifically the medications used in psychiatry.
Mickey @ 1:45 PM

no mo’ mojo…

Posted on Thursday 12 January 2017

Reuters
By Kate Kelland
January 11, 2017

LONDON — It is likely to be at least 10 years before any new generation of antidepressants comes to market, despite evidence that depression and anxiety rates are increasing across the world, specialists said on Wednesday. The depression drug pipeline has run dry partly due to a "failure of science" they said, but also due to big pharma pulling investment out of research and development in the neuroscience field because the profit potential is uncertain. "I’d be very surprised if we were to see any new drugs for depression in the next decade. The pharmaceutical industry is simply not investing in the research because it can’t make money from these drugs," Guy Goodwin, a professor of psychiatry at the University of Oxford, told reporters at a London briefing.

Andrea Cipriani, a consultant psychiatrist at Oxford, said such risk aversion was understandable given uncertain returns and the approximately billion dollar cost of developing and bringing a new drug to market. "It’s a lot of money to spend, and there’s a high rate of failure," Cipriani said. Treatment for depression usually involves either medication, some form of psychotherapy, or a combination of both. But up to half of all people treated fail to get better with first-line antidepressants, and around a third of patients are resistant to relevant medications.
It’s now been three decades since Prozac® was added to our pharmacopeia. Psychiatry as a specialty had rededicated itself to its medical roots, and this new drug class was a welcome addition. While no more effective than the older tricyclic antidepressants, it was better tolerated [even though it had some side effects of its own]. After a few years, a progression of competitors came to market – what came to be called the pipeline – and psychiatry settled into a rhythm of discussing their various differences, its focus on the future: what’s coming next?

There were many attempts to enhance efficacy – sequencing, combining, augmenting with a variety of other drugs. Non-responders were said to have Treatment Resistant Depression, discussed almost as if it represented a unique entity. Multiple markers were queried looking for something that would predict the right drug – called Personalized Medicine. Practitioners and patients alike kept their eyes on the future – what’s coming down the pipeline. And there was a vague sense that the newer drugs were improvements over the earlier offerings, though that’s hard to justify in retrospect. Somewhere in there, the notion developed that the incidence of depression was rising rapidly, although that was hard to put together with the predominant view that depression was a biological-?-genetic entity. And the scientific basis for that escalating prevalence is hard to pin down.

And then in the summer of 2012, the Pharmaceutical companies threw in the towel and began to shut down their R&D programs for CNS drugs. They’d run out of candidates ["me too drugs"]. A great wail was heard throughout the land. There were conferences and task forces – much rhetoric and blaming. The NIMH seemed to have a new idea about how to jump-start drug development every month. Multiple schemes were proposed to lure PHARMA back into the game. And all eyes turned to the search for something "novel" to keep things alive [e.g. Ketamine and its derivatives].

DEPRESSION RATES RISING

The experts said that since the current generation of SSRI [selective serotonin reuptake inhibitor] antidepressants – including Eli Lilly’s blockbuster Prozac [fluoxetine] – are widely available as cheap generics, there is reluctance among health services to fund expensive new drugs that may not be much better. That is partly because existing medications, while by no means perfect, are quite effective in more than half of patients, the specialists said, and partly because in this condition in particular, placebo can have a massive impact. That makes it difficult, they explained, to show that a new drug is working above and beyond a positive placebo response and an already effective generation of available drugs.
Looking at the pipeline graphic and the decades of industry introducing new versions of SSRIs, this explanation doesn’t make much sense. Maybe most prescribing physicians [and their patients] have caught on to the fact that there’s no more mojo to be harvested from the SSRI antidepressants. And maybe what the drug companies say might well be true – that they’ve run out of candidate molecules [SSRIs] to even try. In other words, the SSRI paradigm has been exhausted, and there’s not another class of drugs to put in its place.
Depression is already one of the most common forms of mental illness, affecting more than 350 million people worldwide and ranking as the leading cause of disability globally, according to the World Health Organization. And rates are rising. Glyn Lewis, a professor of psychiatric epidemiology at University College London, cited data for England showing a doubling in prescriptions for antidepressants in a decade, to 61 million in 2015 from 31 million in 2005.
Here, we are asked to believe that the doubling of antidepressant prescriptions over that ten year span justifies the heading above [DEPRESSION RATES RISING]. A much more reasonable heading would be SSRI PRESCRIPTION RATES RISING. Why? Marketing. Primary Care Physicians prescribing SSRIs. Waiting Room screening. Patients taking them longer thinking they’re staving off something or correcting something [or inertia]. Knock yourself out here. I’ve just scratched the surface.
In the United States too, more people than ever are taking antidepressants. A study in the Journal of the American Medical Association [JAMA] in 2015 found that prevalence almost doubled from 1999 to 2012, rising to 13 from 6.9 percent. Yet several major drug companies including GlaxoSmithKline and AstraZeneca have scaled right back on neuroscience R&D in recent years, citing unfavorable risk-reward prospects.
Rejecting the [far-fetched] idea that the doubling of prescriptions equals a doubling of the disease prevalence, the drug companies have accepted another [more palatable] explanation – that the market is saturated. The likely reason they think that is that the market is saturated.
Goodwin said the absence of a drug development pipeline was also due to lagging scientific research into what is really happening in the brains of those who do and do not respond to current antidepressants. "It’s partly a failure of science, to be frank," said Goodwin. "Scientists have to … get more of an understanding about how these things actually work before we can then propose ways to improve them."

With all due respect to Dr. Goodwin, his pronouncement might’ve worked in the 90s [the Decade of the Brain] or the 2000s [the Research Agenda for the DSM-V]. But after thirty years, this argument itself has run out of mojo too. The scientists have scienced themselves silly trying to do what he suggests without much success. They’ve certainly gone through a small fortune in the process. The marketeers have had more success, raking in a beyond-modest fortune in the process. But this train is pulling into the station, its journey’s almost done. 

A supernova is a stellar explosion that briefly outshines an entire galaxy, radiating as much energy as the Sun or any ordinary star might emit over its lifespan. This astronomical event occurs during the last stages of a massive star’s life, whose dramatic and catastrophic destruction is marked by one final titanic explosion concentrated in a few seconds, creating a "new" bright star that gradually fades from sight over several weeks or months.


Will the SSRIs have the same kind of fate as SN2014J, evaporating into the ether? I doubt it. At least not any time soon. They’re still useful in clinical practice when used carefully and in moderation. I expect the short-acting ones will gradually disappear because of their heightened withdrawal profiles. And hopefully the others will be used in a more time-limited way. And then, maybe we can get around to reworking our diagnostic system to bring it closer to clinical reality.

We’ll see… and speaking of shiny objects:

ASASSN-15lh [supernova designation SN 2015L] is a bright astronomical object. Initially thought to be a superluminous supernova, it was detected by the All Sky Automated Survey for SuperNovae [ASAS-SN] in 2015 in the southern constellation Indus. The discovery, confirmed by ASAS-SN group with several other telescopes, was formally described and published in a Science article led by Subo Dong at the Kavli Institute of Astronomy and Astrophysics [Peking University, China] on January 15, 2016. In December 2016, another group of scientists raised a hypothesis that ASASSN-15lh might not be a supernova. Based on observations from several stations on the ground and in space [including Hubble], these scientists proposed that this bright object might have been "caused by a rapidly spinning supermassive black hole as it destroyed a low-mass star". ASASSN-15lh, if a supernova, would be the most luminous ever detected; at its brightest, it was approximately 50 times more luminous than the whole Milky Way galaxy with an energy flux 570 billion times greater than the Sun….

Mickey @ 2:20 PM

more bully pulpit…

Posted on Wednesday 11 January 2017

When our group assembled to do our RIAT analysis of Paxil Study 329, we already had access to a wealth of raw data from that clinical trial thanks to the hard work of many other people who came before us. So we had the a priori Protocol, the Statistical Analysis Plan, the CSR [Clinical Study Report], and the IPD [Individual Participant Data] – all available in the Public Domain as the result of various Legal Settlements and Court Orders. The only thing we didn’t have was the CRFs [Case Report Forms] – the actual raw forms the raters used to record their observations during the study. But we felt that we needed them too. We had good reason to question the system originally used to code the Adverse Events, and felt it was important to redo that part from scratch using a more modern and widely validated system.

 

About that time, the European Medicines Agency [EMA] had announced that it was going to release all of its regulatory data. AllTrials was pressing for "all trials registered, all trials reported." I was researching on what authority the data was being kept proprietary in the first place, and finding nothing much except convention and inertia. What was being called Data Transparency was in the air, and it was an exciting prospect.

And then the pharmaceutical companies seemed to do a turnabout. GSK had just been hit with a $3 B fine, in part over Study 329, and they were one of the first to sign on to AllTrials. But as things developed, what they offered was something different from what a lot of us really wanted, at least what I wanted. By that time, I wasn’t a rookie any more and I’d vetted a number of industry-funded, ghost-written psychopharmacology drug trials turned into journal articles. I can’t recall a one of them that was totally straight. So I wanted to see what the drug company saw – the a priori Protocol and Statistical Analysis Plan, the IPD, and the CRFs – the raw data. And the reason wasn’t to do any new research. It was to check their reported work, to do it right by the book, to stop the cheating.

And so with much fanfare, what the drug companies rolled out was something else – Data Sharing. They pretended that what we wanted was access to their raw data so we could do further new research – and that they were being real mensches to let us see it. They set up independent boards to evaluate proposals for projects. If we passed muster, we could have access via a remote desktop – meaning we couldn’t download the data. We could only see it online. All we could download were our results, if approved. In this scenario, they are generously sharing the data with us, avoiding duplication and wastage or some such, and the remote access portal protects the subjects’ privacy. They maintained control and ownership. What we wanted was Data Transparency to keep them honest, to stop them from publishing these fictional photo-shopped articles, to stop the cheating.

So our RIAT group submitted a request to their panel, and when they asked for a proposal, we didn’t make one up. We played it straight and told them why. After some back and forth, we submitted the Protocol from the original Study 329, and to their credit, they granted our request. The remote access system actually worked, but working inside of it was a complete nightmare [we called it "the periscope"]. The CRFs came to around 50,000 pages, and we could only look at them one page at a time! But that’s another story and it’s available in detail at https://study329.org/. The point for this post is that the call for Data Transparency got turned into something very different – Data Sharing. That’s called "SPIN." Instead of being on the hot-seat for having published so many distorted clinical trial reports – carefully crafted by professional ghost-writers – they portrayed themselves as heroes, generously allowing outsiders to use their data for independent research. Sleight of hand extraordinaire!

So what does this have to do with the New England Journal of Medicine, and editor Jeffrey Drazen, and Data Transparency versus Data Sharing, and a bully pulpit? A lot – some of it mentioned in this series from April 2016.
As editor of the NEJM, a prominent figure in the International Committee of Medical Journal Editors, and a member of the Committee on Strategies for Responsible Sharing of Clinical Trial Data, he occupies a powerful position in shaping policy. He never mentions the corruption that has so many of us up in arms [the reason we need such a policy], and positions himself consistently on the side of protecting the sponsors’ secrecy – sticking to the Data Sharing idea. His opinion of people who are trying to bring the corruption into the light of day is obvious:
by Dan L. Longo and Jeffrey M. Drazen
New England Journal of Medicine. 2016 374:276-277.

The aerial view of the concept of data sharing is beautiful. What could be better than having high-quality information carefully reexamined for the possibility that new nuggets of useful data are lying there, previously unseen? The potential for leveraging existing results for even more benefit pays appropriate increased tribute to the patients who put themselves at risk to generate the data. The moral imperative to honor their collective sacrifice is the trump card that takes this trick…

A second concern held by some is that a new class of research person will emerge — people who had nothing to do with the design and execution of the study but use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited. There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “research parasites”…
His predecessors Arnold Relman [The new medical-industrial complex], Jerome Kassirer, and Marcia Angell [Is Academic Medicine for Sale?] led a New England Journal of Medicine that championed the integrity of medical science. Jeffrey Drazen uses the bully pulpit of that same position to thwart attempts to restore that integrity. He’s either blind to, complicit with, or part of the medical-industrial complex Arnold Relman warned us about. And he fills his journal with articles about industry and clinical trials that ignore the rampant corruption in published clinical trial reports [see the bully pulpit… for a long list of 2016’s examples]…
Mickey @ 8:00 AM

the bully pulpit…

Posted on Tuesday 10 January 2017

A Bully Pulpit is a conspicuous position that provides an opportunity to speak out and be listened to. This term was coined by President Theodore Roosevelt, who referred to the White House as a "bully pulpit", by which he meant a terrific platform from which to advocate an agenda.

Flashback
In 1980, New England Journal of Medicine editor Arnold Relman saw something ominous coming up ahead, and wrote an editorial [The new medical-industrial complex] warning that there was a threat to the integrity of academic medicine from a growing medical industry. And by 1984, the NEJM instituted a policy against publishing any editorials or review articles by authors with industry Conflicts of Interest. But by 1999 things had changed dramatically, a story I summarized in a narrative…. At the time, the new editor, Jeffrey Drazen, was embroiled in a controversy over his own ties to industry [see New England Journal of Medicine Names Third Editor in a Year, FDA censures NEJM editor, Medical Journal Editor Vows to Cut Drug Firm Ties].

Flash Forward
In the summer of 2015, Drazen published an editorial suggesting that the NEJM rescind Relman’s policy and allow experts with COI to write reviews and editorials, introducing a three-part series by one of his staff reporters explaining why this was really a good idea:
The suggestion was met with a swift flurry of negative responses from some of medicine’s solidest citizens:
And I couldn’t seem to keep my mouth shut about it either [a contrarian frame of mind… , wtf?…, wtf? for real…, a narrative…, not so proud…, unserious arguments seriously…, the real editors speak out…, got any thoughts?…, not backward…], mostly amplifying on what the others said. I’ll have to add that it felt almost personal. The New England Journal of Medicine was my own very first medical subscription ever, and I read it cover-to-cover for years. It was part of my coming of age as a physician, its articles embedded in my own scientific and ethical infrastructure. And I felt that Jeffrey Drazen was betraying that history. Who was he to do that? Over the year and a half since that series came out, he’s been on my radar. But the New England Journal of Medicine isn’t one of the journals I follow regularly, so it only came up when there was a loud blip, like his particularly obnoxious editorial, Data Sharing – the one where he warned us about "research parasites" [see notes from a reluctant parasite…].

Then, someone sent me a link to this month’s The Large Pharmaceutical Company Perspective about several heroic PHARMA adventures. I noticed it was from a series called The Changing Face of Clinical Trials, so I ran down the rest of the series and read them all. And then I found some other NEJM Clinical Trial offerings in 2016.

    The Changing Face of Clinical Trials
  1. June 2, 2016 | J. Woodcock and Others
    With this issue, we launch a series of articles that deal with contemporary challenges that affect clinical trialists today. Articles will define a specific issue of interest and illustrate it with examples from actual practice, as well as bring additional history and color to the topic.
  2. June 2, 2016 | L.D. Fiore and P.W. Lavori
    Investigators use adaptive trial designs to alter basic features of an ongoing trial. This approach obtains the most information possible in an unbiased way while putting the fewest patients at risk. In this review, the authors discuss selected issues in adaptive design.
  3. August 4, 2016 | I. Ford and J. Norrie
    In pragmatic trials, participants are broadly representative of people who will receive a treatment or diagnostic strategy, and the outcomes affect day-to-day care. The authors review the unique features of pragmatic trials through a wide-ranging series of exemplar trials.
  4. September 1, 2016 | S.J. Pocock and G.W. Stone
    When the primary outcome of a clinical trial fails to reach its prespecified end point, can any clinically meaningful information still be derived from it? This review article addresses that question.
  5. September 8, 2016 | S.J. Pocock and G.W. Stone
    When a clinical trial reaches its primary outcome, several issues must be considered before a clinical message is drawn. These issues are reviewed in this article.
  6. October 6, 2016 | D.L. DeMets and S.S. Ellenberg
    Randomized clinical trials require a mechanism to safeguard the enrolled patients from harm that could result from participation. This article reviews the role of data monitoring committees in the performance of randomized clinical trials.
  7. November 3, 2016 | M.A. Pfeffer and J.J.V. McMurray
    Ethical issues can arise in the design and conduct of clinical trials. Using the trials that set the stage for our current treatment of hypertension, the authors show how the changing treatment landscape raised ethical problems as these trials were undertaken.
  8. January 5, 2017 | M. Rosenblatt
    The former chief medical officer of a large pharmaceutical company addresses the issue of complexity and how it affects the performance of clinical trials.
    The Final Rule
  • September 16, 2016 | D.A. Zarin and Others
    The final rule for reporting clinical trial results has now been issued by the Department of Health and Human Services. It aims to increase accountability in the clinical research enterprise, making key information available to researchers, funders, and the public.
    History of Clinical Trials
  1. June 2, 2016 | L.E. Bothwell and Others
  2. July 14, 2016 | A. Rankin and J. Rivest
  3. August 11, 2016 | L.E. Bothwell and S.H. Podolsky
  4. Clinical Trials, Healthy Controls, and the IRB
    September 15, 2016 | L. Stark and J.A. Greene
When I got down to the next ones about Data Sharing, I went back even further because I was waking up to something I had kind of forgotten – a bit of relevant sleight of hand that should have been on the front burner, but somehow got lost in the shuffle. What I realized was that the series I started this post with, Revisiting the Commercial–Academic Interface, didn’t just come out of the blue. It was part of a story that was larger – one that I’ll remind us of in the next post. But first here are the articles on Data Sharing:
    Data Sharing
  1. Collaborative Clinical Trials
    March 3, 2011 | A.J. Moss, C.W. Francis, and D. Ryan
  2. Pragmatic Trials — Guides to Better Patient Care?
    May 5, 2011 | J.H. Ware and M.B. Hamel
  3. October 4, 2012 | R.J. Little and Others
  4. October 24, 2013 | M.M. Mello and Others
  5. November 27, 2014 | B.L. Strom and Others
  6. December 25, 2014 | S. Bonini and Others
  7. January 8, 2015 | D.A. Zarin, T. Tse, and J. Sheehan
  8. January 15, 2015 | J.M. Drazen
  9. Adaptive Phase II Trial Design
    July 7, 2015 | D. Harrington and G. Parmigiani
  10. August 4, 2015 | The Academic Research Organization Consortium for Continuing Evaluation of Scientific Studies — Cardiovascular (ACCESS CV)
  11. August 4, 2015 | The International Consortium of Investigators for Fairness in Trial Data Sharing
  12. August 4, 2015 | H.M. Krumholz and J. Waldstreicher
  13. January 21, 2016 | Dan L. Longo and Jeffrey M. Drazen
  14. August 4, 2016 | E. Warren
  15. September 22, 2016 | F. Rockhold, P. Nisen, and A. Freeman
  16. September 22, 2016 | B. Lo and D.L. DeMets
  17. September 22, 2016 | R.L. Grossman and Others
  18. October 27, 2016 | B.L. Strom and Others
And so on to the reminder in the next post[s] – how Data Transparency got turned into Data Sharing – and why I called this the bully pulpit…
Mickey @ 5:57 PM

Let’s go take a look…

Posted on Wednesday 4 January 2017

by Matthew J. Press, M.D., Ryan Howe, Ph.D., Michael Schoenbaum, Ph.D., Sean Cavanaugh, M.P.H., Ann Marshall, M.S.P.H., Lindsey Baldwin, M.S., and Patrick H. Conway, M.D.
New England Journal of Medicine. December 14, 2016
DOI: 10.1056/NEJMp1614134

For example, under CoCM, if a 72-year-old man with hypertension and diabetes presents to his primary care clinician feeling sad and anxious, the primary care team [primary care clinician and behavioral health care manager] would conduct an initial clinical assessment using validated rating scales. If the patient has a behavioral health condition [e.g., depression] and is amenable to treatment, the primary care team and the patient would jointly develop an initial care plan, which might include pharmacotherapy, psychotherapy, or other indicated treatments. The care manager would follow up with the patient proactively and systematically [using a registry] to assess treatment adherence, tolerability, and clinical response [again using validated rating scales] and might provide brief evidence-based psychosocial interventions such as behavioral activation [which focuses on helping people with mood disorders to engage in beneficial activities and behavior] or motivational interviewing. In addition, the primary care team would regularly review the patient’s care plan and status with the psychiatric consultant and would maintain or adjust treatment, including referral to behavioral health specialty care as needed.
This paragraph is from an article about how the Centers for Medicare and Medicaid Services [CMS] intends to pay the psychiatrists involved in Collaborative [AKA Integrated] Care [but that isn’t why it’s here]. It has gotten to be something of a hobby of mine to scan these articles as they come around, not that I intend to be involved with the "Psychiatric Collaborative Care Model [CoCM], an approach to behavioral health integration [BHI] that has been shown to be effective in several dozen randomized, controlled trials." What intrigues me is the language used to write them – a bizarre kind of new·speak, highlighted in red in the quoted paragraph. Here’s an example:
"an initial clinical assessment using validated rating scales. If the patient has a behavioral health condition [e.g., depression] and …"
First off, notice that the rating scales determine whether or not the patient has something wrong. So while the developers of the various rating scales have generally said that they’re not for making a diagnosis, it looks like that’s how they’re being used here. Whoops! Technically, it’s not a diagnosis. It’s a behavioral health condition. That is certainly some kind of new·speak, but it’s not what I’m focused on right this minute. I’m talking about the phrase validated rating scales. We’re accustomed to hearing about validated rating scales when we talk about Clinical Trials, but not to running into them in case narratives. Another example:
"[again using validated rating scales] and might provide brief evidence-based psychosocial interventions such as…"
A lot of new·speak in this one, but it’s the evidence-based psychosocial interventions that I’m referring to [I’ll get to brief in a minute]. I haven’t given this an enormous piece of real estate in my head, but so far, this is my tentative lexicon of new·speak categories:
  1. adjectives saying that something is evidence-based [meaning positive clinical trials]: validated rating scales, evidence-based psychosocial interventions, evidence based psychotherapy, indicated treatments, guideline approved this and that, etc. The gist of things is that only group-certified interventions are valid…
  2. traditional language is de-psychiatrized and de-medicalized: behavioral health care manager, behavioral health condition, rating scales, behavioral health specialty, behavioral activation.
  3. strict control, limiting choices and duration of anything: [now we get to brief] – particularly any face to face contact with psychiatrists.
  4. adverbs implying conscientiousness and industry: proactively and systematically [using a registry]
That’s just off the cuff. With thought, the themes and motives of new·speak will undoubtedly become clearer. Just a few other comments. It’s an odd way to talk no matter what the reason. It sounds a bit like the overinclusive stilted language sometimes heard in chronic Schizophrenia. We get the point that they want everything to be evidence-based [RCT certified], so why append it to every noun? We also get the point that they want psychiatry and psychiatrists out of the picture except to review and to sign off on the cases.

Just a couple of observations. In every example, the cases are universally lite – unlikely to reach anyone’s standards for mental illness proper. I didn’t really know what behavioral activation and motivational interviewing were. I watched a few YouTube videos and looked at several trials of the latter [and there have been many] with widely varying results. They’re behavior modification interview techniques. But the main thing I took away from thinking about this hadn’t occurred to me before. In a way, I’ve already done this myself for over thirty years. I did it in the clinic where I’ve been working for the last eight. And when I was on the faculty, I did it somewhere in some affiliated facility on most days. The med students, or residents, or staff saw the patients and presented them. Sometimes I said "fine." Sometimes we talked about the case. And sometimes I saw the patient myself. I expect most psychiatrists have done this kind of thing for years at some point in their career. But I’m absolutely sure I wouldn’t do this one.

The secret to being able to supervise other clinicians in a situation like the one described here isn’t some encyclopedic knowledge of medications or diagnoses. It’s in getting to know how to read the clinician you’re supervising. Early on, I expect I saw most cases a given resident presented. But as I got to know them, I learned when I could trust what I was hearing, versus when something wasn’t right. One almost never knows what’s out of whack from such a presentation, just that you hear yourself saying, "Let’s go take a look." It’s a skill that comes with experience, and a lot of it. Of course, the best trainees say, "I don’t know what’s going on with this case. Would you take a look?" But others don’t know, and so [1] you take a look and then [2] try to help them figure out why they were off track, what they missed.

So I guess I know the reason I’m absolutely sure I wouldn’t do this one. That whole system being described up there is designed to keep me out of the room the patient’s in. They don’t need me to help them with behavioral activation and motivational interviewing. They certainly know how to do that better than I do. And most of the time, the Primary Care Docs don’t need me to pick an antidepressant. One learns that kind of thing quickly. What they need is someone who has spent a lifetime around suicidal and psychotic patients; who has actually seen most of those unusual medical cases masquerading as mental illness that most doctors only heard about in medical school; someone who has missed a few diagnoses along the way, knows the dire consequences, and is acutely aware when something smells funny. And in this proposed Integrated system, I’m not the one who gets to say, "I need to see this person." Usually, I’m hearing about the case from a care coordinator who may be giving me second-hand information. And even with stable outpatients who aren’t getting better, I wouldn’t have much of a clue how to get the case on track without either seeing the patient, or making sure they’re being seen by someone who really knows the ropes and doesn’t speak new·speak.


Note: When I started writing this post, it wasn’t at all clear to me where I was headed. It’s been that way every time I run across a Collaborative Care article or reference. My reaction has been visceral. It wasn’t until somewhere around that lightbulb up there that I finally could put some words to my reaction. Reading the various versions of Collaborative Care, I always feel the same. But now I can at least make sense of why I respond so negatively. In the system as proposed, I can’t do my job – the actual job assigned to me. I can’t be in charge of saying "Let’s go take a look." And if I can’t do that, there’s really no reason for me to be there in the first place.

Another Note: What’s funny is that a little before the lightbulb, I thought I was finally getting a handle on the why of my reaction. But it was something entirely different from how the post ended, and while it’s worth saying in its own right, it wasn’t "It." So now I guess I’ll have to write yet another post to explain that other reason I react so negatively to these Collaborative Care articles…
Mickey @ 11:25 PM

big thing, small package

Posted on Monday 2 January 2017

Sometimes big things come in small packages. This is from a research letter published last month in JAMA Internal Medicine. The authors’ data come from the Agency for Healthcare Research and Quality, Medical Expenditure Panel Survey [MEPS HC-160A: 2013 prescribed medicines]:
hat tip to James O… 
by Moore TJ and Mattison DR
JAMA Internal Medicine. Dec 12, 2016. [Epub ahead of print]
The report they summarized is a bear, but they’ve pared it down into two tables that are manageable. First, how widespread is the use of prescribed psychiatric medication [expressed as % of the population]?
Next, which drugs are being used?
If you work in a public clinic like I do, none of that will come as any great shock. The only thing that surprised me was that Zoloft® and Ambien® are so high up on the list – I prescribe neither. Reasons? I had no success with Zoloft® at all, and later, when I looked at the FDA approval documents, they looked beyond shaky to me [zoloft: the approval I…, zoloft: the approval II…, zoloft: the approval III…, zoloft: beyond the approval I…, zoloft: beyond the approval II…, zoloft: the epilogue…]. Ambien®? When the second patient showed up with bruises from falls while sleepwalking on Ambien®, it came off of my formulary for good. But otherwise, no big surprises. However, the authors went further and took a reasonable stab at quantifying something that I’ve thought about [and struggled with] ever since I started at the clinic about 8 or 9 years ago – long-term use of these medications. Here are a few quotes from their letter:
"Long-term use was defined as 3 or more prescriptions filled in 2013 or a prescription started in 2011 or earlier…"

"Most psychiatric drug use reported by adults was long term, with 84.3% [95% Cl, 82.9%-85.7%] having filled 3 or more prescriptions in 2013 or indicating that they had started taking the drug during 2011 or earlier. Differences in long-term use among the 3 drug classes were small. The long-term users filled a mean [SE] of 9.8 [0.19] prescriptions for psychiatric drugs during 2013…"

"These data show 1 of 6 US adults reported taking psychiatric drugs at least once during 2013, but with 2- to 3-fold differences by race/ethnicity, age, and sex. Moreover, use may have been underestimated because prescriptions were self-reported, and our estimates of long-term use were limited to a single survey year…"

"Among adults reporting taking psychiatric drugs, more than 8 of 10 reported long-term use…"
Having taken something of a 25 year long sabbatical from mainstream psychiatry after leaving academia for a private psychotherapy practice, I started volunteering in a local charity clinic after I retired. I was unprepared for the psychiatry I encountered there. I expected that I’d have to bone up on my psychopharmacology [and I did], but I sure didn’t care for what I found. It seemed like over·medication, poly·pharmacy, inappropriate drug choices, and continual use of time·limited medicines were all standard operating procedure. So I started reading the clinical trials and learned about ClinicalTrials.gov, Drugs@FDA, PubMed, and the push·back – the blogs and literature that were developing around these topics [and I started this one of my own].

This report by Moore and Mattison well documents what I found returning to general psychiatry. I still find the figures staggering, but the one that makes the least sense is that these medications are being taken long term. Depression, even in its most malignant form, is time-limited for the most part. There’s evidence that in some depressions, maintenance medication can be a relapse preventive, but hardly in 80% of the cases. All of this has happened in a period where psychiatry has been telling itself and the rest of the world that it’s medicalizing, but there’s nothing about those figures that’s medical. It’s contaminated by profiteering, plain and simple, and at the expense of patients who’ve come for help.

I hear from patients who are caught up in a carousel of medications, trying unsuccessfully to find something that helps, getting nowhere or even seeing their symptoms worsen:
    "… insanity is doing the same thing over and over again expecting different results"
There was a time when the best advice would have been to forget what they’d been told to date and start over, tapering any medication that isn’t clearly helping. Check out the hardware [a medical condition or medication that might be contributing]; likewise the firmware [a major psychiatric syndrome like Melancholia or Manic-Depressive Illness]; and then the software [find a reputable therapist to help them explore their lives, past and present, looking for the tangles]. That’s the same advice they would have gotten forty years ago. And it’s still good advice.

This coin has another side. While the figures quoted in this article telegraph the clear message that these medications have been over·promoted and over·prescribed, they also raise another potential concern. When we became disappointed with mental hospitals, we shut them down rather than right-sizing them. When we were disillusioned with antipsychotic medication and community care, we did the same thing [and filled our jails]. Similarly, when the various psychotherapies didn’t live up to their early promises, they were vilified. And while we currently remain in a situation where the medications on that list are over·prescribed, that’s not to say that there aren’t a significant number of patients who are genuinely benefiting from taking them. Something about the:
    "… baby with the bathwater"
Mickey @ 8:00 AM

so long 2016…

Posted on Saturday 31 December 2016

Mickey @ 4:00 AM

whodunit? theydunit…

Posted on Thursday 29 December 2016

    The active voice is usually more direct and vigorous than the passive:
         I shall always remember my first visit to Boston.
    This is much better than
         My first visit to Boston will always be remembered by me.
    The latter sentence is less direct, less bold, and less concise. If the writer tries to make it more concise by omitting "by me,"
         My first visit to Boston will always be remembered,
    it becomes indefinite: is it the writer, or some person undisclosed, or the world at large, that will always remember this visit?

I come from a generation schooled in the ways of Strunk and White, though the only suggestion I really took away is Don’t use the passive voice [in case I forget, the grammar-checker in Microsoft Word is there to remind me]. As a kid, I could see that the active voice sounded better. But later, I saw that the passive voice was often used to obscure agency – a way of avoiding saying directly whodunit?. So writing "The primary outcome variables were changed in the published version of Paxil Study 329" just isn’t the same as writing "Martin Keller and his coauthors changed the primary outcome variables in the published version of Paxil Study 329."

One encounters patients who appear to live their lives in the domain of the passive voice. Things just happen in the world. Things happen to them [usually bad or disappointing things], but there’s no agent causing them. And invariably, they leave out their own participation in the things that happen. This was once known as the Fate Neurosis. While it may keep them from blaming others, or keep them from shouldering blame themselves, it’s part of a long-suffering view of life with a sticky persistence that maintains their dysphoria [and often drives their therapists and acquaintances to distraction]. One of the goals of their psychotherapy is to help them see their own part in making things happen, even if it’s negative – to help them see a world in which they are actors rather than victims of obscure forces like fate, destiny, or bad luck. Whodunit? is of major importance in understanding anything that happens to these people [oftentimes theydunit].

Oddly, my mind goes down this path when reading some of the language used to describe the various sources of bias in Clinical Trial reporting. There’s a long-suffering quality to the lamentations, as if we are victims of a maleficent universe. It’s in the language we use. Publication Bias refers to trials with unfavorable outcomes that don’t get published. That italicized phrase happens to be an example of the passive voice in that the actor who didn’t publish the study is missing in action – literally. Selective Reporting? Somebody did the selecting. And so it goes. The culprit is never in the language. All these shenanigans that have so garbled our Clinical Trial literature aren’t mistakes, or sloppiness, or something overlooked, or random acts of a perverse deity. They’re not coming from incompetent or poorly trained statisticians. They’re the conscious, motivated acts of a person or persons who’ve got something very specific in mind. And again, the important question is whodunit?

We all know that these distorted trial reports are motivated actions. The goal is to exaggerate efficacy and downplay toxicity, to sell these drugs, but that knowledge doesn’t make it into our descriptive language or our policies. We routinely relate to them in the passive voice, but then wrack our brains trying to think up things that might respond to their happening rather than stopping people from doing them in the first place. We request minimal information and give industry a long time to provide it. Then we don’t levy fines when the required information doesn’t show up. We lament the things that are happening and rarely go after the agents except to extract inadequate fines long after the fact. Many say it won’t stop until we start sending people to prison, but that just hasn’t happened, in part because it’s so difficult to prosecute and often impossible to prove.

Instead of chasing instances where various biases have colored the reported results after the fact, we could face the reality that distortion and non-compliance are the expected response – the protocol and declared outcome variables won’t be available a priori unless we require it. So we could say that no trial can begin until the outcome variables are posted in the registration section on ClinicalTrials.gov. Why not? They’re already available from the Institutional Review Board submission. Similarly, we could say no FDA review of an NDA will be initiated until the Results Database on ClinicalTrials.gov is filled out and publicly available for all submitted trials. Why not? They’re being submitted to the FDA, so they’re available. Why not submit them to the rest of us?
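Checking compliance could even be mechanical. Here’s a rough sketch of what an outcome-variable audit might look like against ClinicalTrials.gov’s public interface [the JSON paths follow the registry’s current v2 API as I understand it, and the NCT number is a placeholder – treat the details as illustrative]:

    # Pull a trial's registered primary outcomes from ClinicalTrials.gov,
    # to compare against what the published paper reports. The v2 API paths
    # are assumed here; the NCT number is a placeholder.
    import requests

    def registered_primary_outcomes(nct_id):
        url = f"https://clinicaltrials.gov/api/v2/studies/{nct_id}"
        study = requests.get(url, timeout=30).json()
        outcomes = study["protocolSection"]["outcomesModule"]["primaryOutcomes"]
        return [o.get("measure") for o in outcomes]

    for measure in registered_primary_outcomes("NCT00000000"):
        print(measure)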

So no more it happened to us. We need to act on they do it. We already know whodunit!
Mickey @ 11:54 AM