the basic efficacy table I…

Posted on Monday 23 January 2017

I don’t know if my musings about the Linux story worked, but what I was trying to talk about was the computer community’s struggles with the secrecy of the commercial developers – a problem similar to ours with Clinical Trials and their sponsors. Watching those computer code struggles from the sidelines, things took off when Linux came around and they had a platform of their own [operating system]. Linus’s problem with "UI" [user interface] has held them back [see the TED talk in here’s Linus…], but others are beginning to make it easier to use now. My point was that rather than trying to gain access to the systems of others, they came into their own when they had their own system.

For many reasons, I think that something similar is ultimately the only real solution to the problem of industry’s control of drug trials and their reporting. RCTs may be appropriate for drug approval purposes, and the FDA usually bases approval on meeting the a priori declared primary outcome variables. But they aren’t so appropriate for deciding about actual usage of the drugs, particularly when presented in the journal articles we call experimercials – often distorted by a variety of tricky moves. Medicine is going to have to find some non-commercially driven way to test drugs that will give us reliable ongoing clinical information – our own Open Source platform.

But in the meantime, I’ve been thinking about data transparency – having access to all of the raw data from clinical trials. I certainly think that’s the way it should be. Medicine is almost by definition an open science. The majority of clinical education is apprenticeship, freely given. No secret potions, no wizards allowed. Secret data just doesn’t fit. But the business end of industry isn’t medicine. I frankly doubt that we’ll get access to all the data free and clear any time soon. It’ll probably long be a fight, like it was for us with Paxil Study 329. The marketeers see that data as part of their commodity and hold on to it at any cost. So they’ll stick with their restrictive data-sharing meme as long as possible; jump on any legality they can find along the way; play havoc with the EMA’s or anyone else’s data release plan; etc. [and they can afford a lot of havoc!].

When we took on the Paxil Study 329 project, I learned a big lesson. Ultimately, we had all the data: the a priori Protocol; the Complete Study Report; the Individual Participant Data; and the Case Report Forms. The remote desktop interface was as difficult as we said it was, but beyond that – our analysis was a forced march that took a lot of work and time. Even if we had all that information on every trial, I doubt many people would take on the task. The drug companies have a support staff to do the leg work. People doing independent re-analysis don’t have that. We sure didn’t. Having total data access is ethically important, but practically, it’s quite another matter. So that leads us to think about "What are the bare-bones essentials in looking at the efficacy analysis of an RCT?"

  1. What are the a priori defined primary and secondary outcome variables and how will they be analyzed?
  2. What are the results of those specific analyses after the study was completed?
I’ve talked endlessly about the fact that prospective declaration of the outcome variables and analytic methods is an essential element of an RCT. That’s another big lesson from Paxil Study 329. If you look at their full study report for the acute phase, they calculated every conceivable variable and picked four positive ones to report out of over 40 – none of their choices were declared as either primary or secondary in the Protocol. Recently, Ben Goldacre and his group have shown that Outcome Switching is quite common even now, and that the a priori Protocols are hard, if not impossible, to come by [see COMPare]. So while the battle over full data transparency related to the raw data continues, there is consensus that the a priori Protocol and the Results should be publicly available, and that the place for posting them is ClinicalTrials.gov:

So, whether we have access to the raw data or not, we should be able to fill out this Basic Efficacy Table [or something close] for every Clinical Trial. The reasons we can’t are mixed up with non-compliance and non-enforcement, not non-consensus or non-requirement:
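As a rough sketch of what one row of such a table might hold [the field names here are my own invention, not any registry schema], the two questions above map onto something like:

```python
# Hypothetical sketch of one "Basic Efficacy Table" entry.
# All field names are illustrative, not from ClinicalTrials.gov.
basic_efficacy_entry = {
    "trial_registration": None,        # the registry identifier
    "a_priori_outcomes": [
        {
            "name": None,              # outcome variable named in the Protocol
            "rank": "primary",         # primary or secondary, as declared
            "planned_analysis": None,  # the declared analytic method
        },
    ],
    "reported_results": [
        {
            "name": None,              # must match an a priori outcome
            "result": None,            # the posted result of that analysis
        },
    ],
}

# The table is complete only if every declared outcome
# has a matching posted result.
declared = {o["rank"] for o in basic_efficacy_entry["a_priori_outcomes"]}
print("primary" in declared)  # → True
```

Nothing in that sketch requires the raw data – just the Protocol and the posted results.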

Mickey @ 4:35 PM

the commercial strangle-hold…

Posted on Sunday 22 January 2017


by Rosa Ahn, Alexandra Woodbridge, Ann Abraham, Susan Saba, Deborah Korenstein, Erin Madden, W John Boscardin, and Salomeh Keyhani.
BMJ 2017 356:i6770

Objective: To examine the association between the presence of individual principal investigators’ financial ties to the manufacturer of the study drug and the trial’s outcomes after accounting for source of research funding.
Design: Cross sectional study of randomized controlled trials [RCTs].
Setting: Studies published in “core clinical” journals, as identified by Medline, between 1 January 2013 and 31 December 2013.
Participants: Random sample of RCTs focused on drug efficacy.
Main outcome measure: Association between financial ties of principal investigators and study outcome.
Results: A total of 190 papers describing 195 studies met inclusion criteria. Financial ties between principal investigators and the pharmaceutical industry were present in 132 [67.7%] studies. Of 397 principal investigators, 231 [58%] had financial ties and 166 [42%] did not. Of all principal investigators, 156 [39%] reported advisor/consultancy payments, 81 [20%] reported speakers’ fees, 81 [20%] reported unspecified financial ties, 52 [13%] reported honorariums, 52 [13%] reported employee relationships, 52 [13%] reported travel fees, 41 [10%] reported stock ownership, and 20 [5%] reported having a patent related to the study drug. The prevalence of financial ties of principal investigators was 76% [103/136] among positive studies and 49% [29/59] among negative studies. In unadjusted analyses, the presence of a financial tie was associated with a positive study outcome [odds ratio 3.23, 95% confidence interval 1.7 to 6.1]. In the primary multivariate analysis, a financial tie was significantly associated with positive RCT outcome after adjustment for the study funding source [odds ratio 3.57, 1.7 to 7.7]. The secondary analysis controlled for additional RCT characteristics such as study phase, sample size, country of first authors, specialty, trial registration, study design, type of analysis, comparator, and outcome measure. These characteristics did not appreciably affect the relation between financial ties and study outcomes [odds ratio 3.37, 1.4 to 7.9].
Conclusions: Financial ties of principal investigators were independently associated with positive clinical trial results. These findings may be suggestive of bias in the evidence base.
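The unadjusted odds ratio in that abstract can be checked directly from the counts it reports [103/136 positive studies with ties, 29/59 negative]. A quick sketch using the standard 2×2 Wald method:

```python
import math

# Counts from the abstract above
a, b = 103, 33   # positive-outcome studies: with ties / without
c, d = 29, 30    # negative-outcome studies: with ties / without

odds_ratio = (a * d) / (b * c)

# Wald 95% confidence interval, computed on the log-odds scale
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
low = math.exp(math.log(odds_ratio) - 1.96 * se)
high = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.2f}, 95% CI {low:.1f} to {high:.1f}")
# → OR = 3.23, 95% CI 1.7 to 6.1
```

That reproduces the paper’s reported unadjusted figures exactly; the adjusted odds ratios require their full regression model, of course.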
If you’re in need of a publication, all you have to do is study the relationship between Conflict of Interest and outcome. No matter what you measure, you’re sure to find a robust correlation. What distinguishes this study? It’s reasonably recent. It covers all specialties. And the findings remain no matter what other confounding variables you control for. It brings home Alastair Matheson‘s point that declaring Conflict of Interest mitigates nothing.
Discussion
We found that more than half of principal investigators of RCTs of drugs had financial ties to the pharmaceutical industry and that financial ties were independently associated with positive clinical trial results even after we accounted for industry funding. These findings may raise concerns about potential bias in the evidence base.
Possible explanations for findings
The high prevalence of financial ties observed for trial investigators is not surprising and is consistent with what has been reported in the literature. One would expect industry to seek out researchers who develop expertise in their field; however, this does not explain why the presence of financial ties for principal investigators is associated with positive study outcomes. One explanation may be “publication bias.” Negative industry funded studies with financial ties may be less likely to be published. The National Institutes of Health [NIH]’s clinicaltrials.gov registry was intended to ensure the publication of all trial results, including both NIH and industry funded studies, within one year of completion. However, rates of publication of results remain low even for registered trials…
 
Other possible explanations for our findings exist. Ties between investigators and industry may influence study results by multiple mechanisms, including study design and analytic approach. If our findings are related to such factors, the potential solutions are particularly challenging. Transparency alone is not enough to regulate the effect that financial ties have on the evidence base, and disclosure may compromise it further by affecting a principal investigator’s judgment through moral licensing, which is described as “the unconscious feeling that biased evidence is justifiable because the advisee has been warned.” Social experiments have shown that bias in evidence is increased when conflict of interest is disclosed. One bold option for the medical research community may be to adopt a stance taken in fields such as engineering, architecture, accounting, and law: to restrict people with potential conflicts from involving themselves in projects in which their impartiality could be potentially impaired. However, this solution may not be plausible given the extensive relationship between drug companies and academic investigators. Other, incremental steps are also worthy of consideration. In the past, bias related to analytic approach was tackled by a requirement for independent statistical analysis of major RCTs. Independent analysis has largely been abandoned in favor of the strategy of transparency, but perhaps the time has come to reconsider this tool to reduce bias in the analysis of RCTs. This approach might be especially effective for studies that are likely to have a major effect on clinical practice or financial implications for health systems. Another strategy to reduce bias at the analytic stage may be to require the publishing of datasets. ICMJE recently proposed that the publication of datasets should be implemented as a requirement for publication. This requirement is increasingly common in other fields of inquiry such as economics. 
Although independent analyses at the time of publication may not be feasible for journals from a resource perspective, the requirement to release the dataset to be reviewed later if necessary may discourage some forms of analytical bias. Finally, authors should be required to include and discuss any deviations from the original protocol. This may help to prevent changes in the specified outcome at the analytic stage…
This is a good article filled with thoughtful suggestions, well worth reading. But one might ask why I put it here in the middle of some posts about an offbeat Finnish computer programmer [Linus Torvalds] and an analogy with his rogue computer operating system [Linux] – how it impacted a similar issue in the computer software world [here’s Linus… and show me the damn code numbers!…]. It’s because, as useful as their suggestions are, and as close as they are to the ones many of us would make, they’re based on several ideas that approach the domain of fallacy:

  • The Randomized Controlled Trial [RCT] is a good way to determine clinical usefulness.

    In 1962, the FDA was charged with requiring two Randomized Controlled Trials [RCTs] demonstrating statistical efficacy and all human usage data demonstrating safety in order to approve a drug for use. It’s a weak standard, designed to keep inert potions off the market. It was presumed that the medical profession would have a higher standard and determine clinical usefulness. That made [and makes] perfect sense. The FDA primarily ensures safety and keeps swamp root and other patent medicines out of our pharmacopeia, but clinical usefulness should be determined by the medical profession and our patients. Not perfect, but I can’t think of a better system for approval. However, approval doesn’t necessarily correlate with clinical usefulness, or for that matter, long-term safety. And then something unexpected happened. The Randomized Controlled Trials became the gold standard for everything – called Evidence Based Medicine. Randomized Clinical Trials are hardly the only form of valid evidence in medicine. That was a reform idea that kept people from shooting from the hip, but it was also capable of throwing the baby out with the bathwater.

    This structured procedure designed to dial out everything and isolate the drug effect [RCTs] became a proxy for the much more complex and varied thing called real life. RCTs have small cohorts of recruited [rather than help-seeking] subjects in short-term trials. Complicated patients are eliminated by exclusionary criteria. The metrics used are usually clinician-rated rather than subject-rated. And the outcome is measured by statistical significance instead of by the strength of the response. Blinding and the need for uniformity eliminate any iterative processes in dosing or identifying target symptoms. It’s an abnormal situation on purpose, suitable for the yes-no questions of approval, but not the for-whom information of clinical experience.

  • It is possible to create a system that ensures that industry sponsors will openly report on their RCTs without exaggerating efficacy and/or understating toxicity.

    These RCTs were designed for submission to the FDA for drug approval. The FDA reviewers have access to the raw data and have regularly made the right calls. But then those same studies are written up by professional medical ghost-writers, signed onto by KOL academic physicians with florid Conflicts of Interest, and submitted to medical journals to be reviewed by peer reviewers who have no access to the raw data. The journals make money from selling reprints back to the sponsors for their reps to hand out to practicing doctors. These articles are where physicians get their information, and discrepancies between the FDA version and the journal versions are neither discussed nor even easy to document.

    So it’s not the FDA Approval that’s the main problem. It’s the glut of journal articles that have been crafted from those studies and been the substrate for advertising campaigns that have caused so much trouble. The basic Clinical Trials that were part of the Approval have been glamorized. And many trials that were unsuccessful attempts at indication creep have been spun into gold. It seems that every time there’s an attempt to block the fabulation of such trials, there have been countermoves that render the reform attempts impotent. So far, it’s been a chess game that never seems to get to check-mate.
I don’t mean to malign this article at all. I thought it was well done and I liked the discussion. In fact, in the next post, I’m going to make some suggestions that are very like the ones they discuss. But I want to stick with my analogy between the commercial domination of the personal computer landscape and how it’s playing out. Rather than continuing to swim in someone else’s river, they took advantage of some other streams that appeared and began to come together to make a river of their own, impervious to some company’s fourth-quarter bottom line. And, sooner or later, Linux and its heirs will be the ones that last.

Structured RCTs may well be the best method for our regulatory agencies to use in evaluating new drugs. They cost a mint to do, and about the only people who can fund them are the companies who can capitalize on success – the drug companies. But medicine shouldn’t buy into the notion that they’re the only way to evaluate the effectiveness of medicinal products. As modern medicine has become increasingly organized and documented, there are huge caches of data available. And it’s not just patient data or clinic data. What about the pharmacy data that’s already being used by PHARMA to track physicians’ prescribing patterns? And where are the departments of pharmacology and the schools of pharmacy in following medication efficacy and safety? or the HMOs? or the Health Plans? the VAH? What about the waiting room questionnaires? I’d much rather they ask about the medications the patient is on than be used to screen for depression. It’s really the ongoing data after a drug is in use that clinicians need anyway – more important than the RCT that gets things started.

So while it’s important to continue the push for data transparency and clinical trial reporting reform, it’s also time to explore other ways of gathering and evaluating the mass of information that might free us from the commercial strangle-hold we live with now – and potentially give us an even better picture of what our medications are doing over time. There’s a way out of this conundrum. The task is to find it…
Mickey @ 5:00 PM

places and spaces…

Posted on Saturday 21 January 2017

Cartograms of the 2016 presidential election [with the country scaled by population rather than area]. On the left, colored by who won the county. On the right, color gradient by percentage vote in county.

Can you say Urban versus Rural?
Mickey @ 7:00 AM

show me the damn code numbers!…

Posted on Wednesday 18 January 2017

The background image is an iPhone photo of a spreadsheet opened on that little Linux computer in the last post, and the midground is a spreadsheet from my Windows computer. They’re both Open Source versions of OpenOffice, the free Office Suite that I use instead of Microsoft Excel [foreground]. The point of the graphic is that they’re basically the same. If I hooked that little $35 machine to a full-sized monitor, I could do everything on it I need to do with ease. I don’t use the new Excel because I don’t like their "ribbon" interface and I can’t make the graphing utility do what I need it to do [I wonder if they changed it just to have something new].

Back in the early PC days, the software developers [Microsoft, Apple, etc] wanted to own their software through and through, make the code proprietary. The nerds and hackers of the world said ‘show me the damn code‘ and the companies said ‘hell no.’ There were lawsuits, and posturing, and all manner of haggling about whether computer code was intellectual property. For users, it was a problem because every new release [of something like Microsoft Word] meant that to get the new features, you had to buy it again or pay for an upgrade. And that extended to the operating system itself [DOS, OS]. It was a monopoly.

When the World Wide Web came along, there was a different tradition. The underlying network came from the government [DARPA], and the language that made it work [HTML] came from a physics lab [CERN], developed by Tim Berners-Lee for internal use. The Browser used to read the HTML was Mosaic, and later Netscape [that was free, a version of Open Source] – built and maintained by volunteers. Microsoft wanted to grab the Internet, so they gave their Browser away too [Internet Explorer]. Now Google’s Chrome has jumped into the mix. The tradition of Netscape carried the day and the Open Source Movement took hold – Linux, MySQL, OpenOffice, Apache server, etc, and a whole lot of other very important stuff you can’t see. So the companies held on to their proprietary code and the home computer market primarily by building user-friendly interfaces [and inertia]. As Linus Torvalds implied in the TED interview, hackers, geeks, and nerds don’t do interfaces very well – and they sure aren’t marketeers. So now there’s a mix of Open Source and Proprietary software that’s actually mutually beneficial – a loose symbiosis of sorts, Android being a prime example.


This battle over Data Transparency with Clinical Trials and other scientific data strikes me as similar to those early days with computer code: intellectual property, commercial interests, competition, secrecy, etc. But there’s one difference that way ups the ante. It’s abundantly apparent that proprietary ownership of the data has allowed a level of sophisticated corruption and misinformation that is, in my opinion, unequaled in the history of medicine. So while there’s a real similarity to the computer code wars, the stakes reach beyond commerce and into the basic fabric of medical care. Have we learned something from Open Source and related initiatives that might help get things back on the road? Maybe…

  • PLOS [Public Library of Science] is a nonprofit open access scientific publishing project aimed at creating a library of open access journals and other scientific literature under an open content license. It launched its first journal, PLOS Biology, in October 2003 and publishes seven journals, as of October 2015.
  • ClinicalTrials.gov is a registry and results database of publicly and privately supported clinical studies of human participants conducted around the world. Learn more About Clinical Studies … including relevant History, Policies, and Laws.
  • PubMed comprises more than 26 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full-text content from PubMed Central and publisher web sites.
  • AllTrials/COMPare: All Trials Registered | All Results Reported. The COMPare project is systematically checking every trial published in the top five medical journals, to see if they have misreported their findings.
  • Rxisk: No one knows a prescription drug’s side effects like the person taking it. Make your voice heard. RxISK is a free, independent drug safety website to help you weigh the benefits of any medication against its potential dangers.
… and there are many more. As I list these resources, I realize how much the general idea of Open Source maps onto the effort to put a stop to the commercial corruption of our pharmacopeia [and vice versa]. Perhaps where we lag is that we’re still hung up on trying to get "them" to change, much like the early efforts to get the major software corporations to change. Is the lesson from this story that the hackers created an alternative system of their own instead of continuing to bang their heads against the stone wall? It seems to me that Linux was more than the sum of its code. It was an organizing principle, and we don’t yet have a Linus Torvalds or his system – something to rally around. hmm…
Mickey @ 2:53 PM

here’s Linus…

Posted on Monday 16 January 2017

Sometimes it’s the things right under your nose that are the hardest things to see. This is my desktop at home [uncharacteristically uncluttered]. Pretty much standard fare with a couple of 27" monitors connected to a big Windows 10 computer under the desk. But there are some anomalies. Why the two keyboards? and the two mice? And what’s with that little screen on the tripod?


The screen, keyboard, and mouse on the right belong to a Raspberry Pi, a little $35 computer that runs Raspbian [a variant of the Linux operating system – which is free], has a full range of software packages [which are free], can be programmed using the Python language [downloaded free], and has a hardware interface to the outside for hackers to prototype all kinds of stuff [like robots]. There’s a Raspberry Pi on the space station overhead. The Android OS that runs your phone is a variant of Linux, and the Apache software that runs almost every web server usually runs under Linux.


In the 1980s, when the personal computer burst onto the scene, you could buy programs for your computer, but they were compiled – meaning that you couldn’t see the code and you couldn’t change anything about them. The software producers essentially had a monopoly. The Open Source Movement arose on multiple fronts throughout the next few decades and is too complex to detail here, but the core idea is simple. If you buy a piece of Open Source software, you get the compiled program AND the source code. You can do with it what you please. There are many variants, but that’s the nuts and bolts of it. Linus Torvalds, a Finnish student, wrote a UNIX-like operating system [Linux] and released it Open Source [which put this movement on the map]. Netscape did the same thing. The idea is huge – that it’s fine to be able to sell your work [programs], but it’s not fine to keep the computer code under lock and key.


Before I retired, computers and programming were my hobbies, and the source of a lot of fun. I didn’t need either of them for my work [psychoanalytic psychotherapy] – they were for play. I gradually moved everything to the Linux system and Open Source. But when I retired, my first project involved georegistering old maps and projecting them onto modern topographic maps, and the only software available ran under Windows. And then with this blog, I couldn’t find an Open Source graphics program that did what I wanted. So I’ve run Windows machines now for over a decade. But I just got this little Raspberry Pi running, and I can already see that I’m getting my hobby back. If it’s not intuitive what this has to do with Randomized Clinical Trials or the Academic Medical Literature, I’ll spell it out here in a bit. But for right now – here’s Linus:


Mickey @ 6:34 AM

not research…

Posted on Saturday 14 January 2017

I spent a day with the article in the last post [A manifesto for reproducible science]. It lived up to my initial impression and I learned a lot from reading it. Great stuff! But my focus here is on a particular corner of this universe – the industry-funded Clinical Trial reports of drugs that have filled our medical journals for decades. And I’m not sure that this manifesto is going to add much. Here’s an example of why I say that:

Looking at one of the clinical trial articles of SSRIs in adolescents, there was something peculiar [Wagner KD, Ambrosini P, Rynn M, et al. Efficacy of sertraline in the treatment of children and adolescents with major depressive disorder: two randomized controlled trials. JAMA. 2003;290:1033-1041.]. What does it mean, "two randomized controlled trials"? Well, it seems that there were two identical studies that were pooled for this analysis. Why? They didn’t say… The study was published in August 2003, and there were several letters along the way asking about this pooling of two studies. Then in April 2004, there was this letter:

    To the Editor: Dr Wagner and colleagues reported that sertraline was more effective than placebo for treating children with major depressive disorder and that it had few adverse effects. As one of the study group investigators in this trial, I am concerned about the way the authors pooled the data from 2 trials, a concern that was raised by previous letters critiquing this study. The pooled data from these 2 trials found a statistically marginal effect of medication that seems unlikely to be clinically meaningful in terms of risk and benefit balance.

    New information about these trials has since become available. The recent review of pediatric antidepressant trials by a British regulatory agency includes the separate analysis of these 2 trials. This analysis found that the 2 individual trials, each of a good size [almost 190 patients], did not demonstrate the effectiveness of sertraline in treating major depressive disorder in children and adolescents.

    E. Jane Garland, MD, FRCPC
    Department of Psychiatry
    University of British Columbia
    Vancouver

So the reason they pooled the data from the two studies appears to be that neither was significant on its own, but pooling them increased the power enough to produce a statistically significant outcome [see power calculation below]. Looking at the graph, you can see how slim the pickings were – significant only in weeks 3, 4, and 10. And that bit of deceit is not my total point here. Add in how Dr. Wagner replied to Dr. Garland’s letter:

    In Reply: In response to Dr Garland, our combined analysis was defined a priori, well before the last participant was entered into the study and before the study was unblinded. The decision to present the combined analysis as a primary analysis and study report was made based on considerations involving use of the Children’s Depression Rating Scale [CDRS] in a multicenter study. Prior to initiation of the 2 pediatric studies, the only experience with this scale in a study of selective serotonin reuptake inhibitors was in a single-center trial. It was unclear how the results using this scale in a smaller study could inform the power evaluation of the sample size for the 2 multicenter trials. The combined analysis reported in our article, therefore, represents a prospectively defined analysis of the overall study population…

This definition ["well before the last participant was entered into the study and before the study was unblinded"] is not what a priori means. A priori means "before the study is ever even started in the first place." And that’s not what prospective means either – it means the same thing. She is rationalizing the change by redefining what a priori means.

The problem here wasn’t that Pfizer, maker of Zoloft, didn’t have people around who knew the ways of science. If anything, it was the opposite problem. They had or hired people who knew those science ways well enough to manipulate them to the company’s advantage.

  • Why did they have two identical studies? Best guess is that they were going for FDA Approval, and in a hurry. You need two positive studies for FDA Approval.
  • Why would they decide to pool them somewhere along the way? Best guess is that things weren’t going well and pooling them increases the chance of achieving significance with a smaller difference between drug and placebo.
  • How would they know that things weren’t going well if the study was blinded? You figure it out. It isn’t that hard.
  • Why would they say that a priori means "well before the last participant was entered into the study and before the study was unblinded" when that’s not what it means? That isn’t that hard to figure out either.
  • So why not just say that they cheated? Because I can’t prove it [plausible deniability].
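The arithmetic behind the pooling move is easy to sketch. With made-up numbers [not the actual Zoloft data]: two equal-sized trials whose test statistics each fall short of significance will, when pooled, yield a combined statistic of roughly (z1 + z2)/sqrt(2), which can clear the p < 0.05 bar even though neither trial did:

```python
import math

def two_sided_p(z):
    # two-sided p-value from a z statistic, normal approximation
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Hypothetical: two identical trials, each just missing significance
z1 = z2 = 1.5
print(two_sided_p(z1))        # ≈ 0.134 in each trial, not significant

# Pooling the two equal-sized samples doubles n; the combined
# z statistic is (z1 + z2) / sqrt(2)
z_pooled = (z1 + z2) / math.sqrt(2)
print(two_sided_p(z_pooled))  # ≈ 0.034, now "statistically significant"
```

Which is exactly why the decision to pool has to live in the Protocol, before anyone has seen any numbers.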

I’m not sure that the industry-funded Clinical Trials of drugs should even be considered research. They’re better seen as product testing. And the whole approach should reflect that designation. Everyone involved is biased – by definition. The point of the enterprise isn’t to answer a question, it’s to say this in whatever way you can get there:
Conclusion The results of this pooled analysis demonstrate that sertraline is an effective and well-tolerated short-term treatment for children and adolescents with MDD.
And the only way to ensure that the outcome parameters aren’t changed is to require preregistration with a date-stamped, certified Protocol and Statistical Analysis Plan on file before the study begins – a priori. What if they change their minds? Start a new study. Product testing may be science, but it’s not research. And we may have more oversight on our light-bulbs and extension cords than we have on our medications.

And after all of that, the Zoloft study is still in Dr. Wagner’s repertoire at the APA Meeting some 13 years later…
PsychiatricNews
by Aaron Levin
June 16, 2016

… As for treatment, only two drugs are approved for use in youth by the Food and Drug Administration [FDA]: fluoxetine for ages 8 to 17 and escitalopram for ages 12 to 17, said Wagner. “The youngest age in the clinical trials determines the lower end of the approved age range. So what do you do if an 11-year-old doesn’t respond to fluoxetine?” One looks at other trials, she said, even if the FDA has not approved the drugs for pediatric use. For instance, one clinical trial found positive results for citalopram in ages 7 to 17, while two pooled trials of sertraline did so for ages 6 to 17.

Another issue with pediatric clinical trials is that 61 percent of youth respond to the drugs, but 50 percent respond to placebo, compared with 30 percent among adults, making it hard to separate effects. When parents express anxiety about using SSRIs and ask for psychotherapy, Wagner explains that cognitive-behavioral therapy [CBT] takes time to work and that a faster response can be obtained by combining an antidepressant with CBT. CBT can teach social skills and problem-solving techniques as well. Wagner counsels patience once an SSRI is prescribed.
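The arithmetic behind "hard to separate effects" is worth seeing. A rough sketch using the standard normal-approximation formula for comparing two proportions [two-sided α = 0.05, 80% power] shows how the sample size explodes as the placebo response approaches the drug response. The response rates are the ones quoted above; the function name and z-constants are my own shorthand:

```python
import math

def n_per_arm(p_drug: float, p_placebo: float,
              z_alpha: float = 1.96, z_beta: float = 0.8416) -> int:
    """Approximate patients needed per arm to detect a difference in
    response rates (two-sided alpha = 0.05, power = 80%), using the
    normal approximation for two independent proportions."""
    variance = p_drug * (1 - p_drug) + p_placebo * (1 - p_placebo)
    delta = p_drug - p_placebo
    return math.ceil((z_alpha + z_beta) ** 2 * variance / delta ** 2)

# Pediatric: 61% drug response vs 50% placebo response
youth = n_per_arm(0.61, 0.50)   # ≈ 317 per arm

# Same drug response against an adult-like 30% placebo response
adults = n_per_arm(0.61, 0.30)  # ≈ 37 per arm
```

An 11-point drug-placebo separation needs nearly ten times the patients that a 31-point separation does – which is why pediatric antidepressant trials so often fail to reach significance.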

A 36-week trial of a drug is too brief, she said. “The clock starts when the child is well, usually around six months. Go for one year and then taper off to observe the effect.” Wagner suggested using an algorithm to plot treatment, beginning with an SSRI, then trying an alternative SSRI if that doesn’t work, then switching to a different class of antidepressants, and finally trying newer drugs. “We need to become much more systematic in treating depression,” she concluded.
Mickey @ 12:00 PM

a must·read!…

Posted on Friday 13 January 2017

Whatever you’re reading right now [including this blog], you might just put a bookmark in it and read this paper. Besides it being written by luminaries [see scathing indictments… and the hope diamond…], it’s an encyclopedic proposal that deserves everyone’s attention:
by Marcus R. Munafò, Brian A. Nosek, Dorothy V. M. Bishop, Katherine S. Button, Christopher D. Chambers, Nathalie Percie du Sert, Uri Simonsohn, Eric-Jan Wagenmakers, Jennifer J. Ware, and John P. A. Ioannidis
Nature Human Behaviour. Published 10 January 2017. Open access.

Improving the reliability and efficiency of scientific research will increase the credibility of the published scientific literature and accelerate discovery. Here we argue for the adoption of measures to optimize key elements of the scientific process: methods, reporting and dissemination, reproducibility, evaluation and incentives. There is some evidence from both simulations and empirical studies supporting the likely effectiveness of these measures, but their broad adoption by researchers, institutions, funders and journals will require iterative evaluation and improvement. We discuss the goals of these measures, and how they can be implemented, in the hope that this will facilitate action toward improving the transparency, reproducibility and efficiency of scientific research.


[abbreviated and reformatted from the paper]

From my perspective, there’s nothing more important in Medicine right now than reclaiming the academic medical literature from its captivity by the paramedical industries and others who are called stakeholders. But the problem in academic science is bigger than just Medicine. In other fields, it goes under the name The Reproducibility Crisis.

This paper is too important to whip off a quick blog post about. So I’m going to let it sit for a bit before commenting and picking out the specific recommendations that have to do with my corner of the world – Randomized Clinical Trials of medications – specifically the medications used in psychiatry.
Mickey @ 1:45 PM

no mo’ mojo…

Posted on Thursday 12 January 2017


Reuters
By Kate Kelland
January 11, 2017

LONDON — It is likely to be at least 10 years before any new generation of antidepressants comes to market, despite evidence that depression and anxiety rates are increasing across the world, specialists said on Wednesday. The depression drug pipeline has run dry partly due to a "failure of science" they said, but also due to big pharma pulling investment out of research and development in the neuroscience field because the profit potential is uncertain. "I’d be very surprised if we were to see any new drugs for depression in the next decade. The pharmaceutical industry is simply not investing in the research because it can’t make money from these drugs," Guy Goodwin, a professor of psychiatry at the University of Oxford, told reporters at a London briefing.

Andrea Cipriani, a consultant psychiatrist at Oxford, said such risk aversion was understandable given uncertain returns and the approximately billion dollar cost of developing and bringing a new drug to market. "It’s a lot of money to spend, and there’s a high rate of failure," Cipriani said. Treatment for depression usually involves either medication, some form of psychotherapy, or a combination of both. But up to half of all people treated fail to get better with first-line antidepressants, and around a third of patients are resistant to relevant medications.
It’s now been three decades since Prozac® was added to our pharmacopeia. Psychiatry as a specialty had rededicated itself to its medical roots, and this new drug class was a welcomed addition. While no more effective than the older tricyclic antidepressants, it was better tolerated [even though it had some side effects of its own]. After a few years, a progression of competitors came to market – what came to be called the pipeline – and psychiatry settled into a rhythm of discussing their various differences, eyes fixed on the future: what’s coming next?

There were many attempts to enhance efficacy – sequencing, combining, augmenting with a variety of other drugs. Non-responders were said to have Treatment Resistant Depression, discussed almost as if it represented a unique entity. Multiple markers were queried looking for something that would predict the right drug – called Personalized Medicine. Practitioners and patients alike kept their eyes on the future – what’s coming down the pipeline. And there was a vague sense that the newer drugs were improvements over the earlier offerings, though that’s hard to justify in retrospect. Somewhere in there, the notion developed that the incidence of depression was rising rapidly, although that was hard to put together with the predominant view that depression was a biological-?-genetic entity. And the scientific basis for that escalating prevalence is hard to pin down.

And then in the summer of 2012, the Pharmaceutical companies threw in the towel and began to shut down their R&D programs for CNS drugs. They’d run out of candidates ["me too drugs"]. A great wail was heard throughout the land. There were conferences and task forces – much rhetoric and blaming. The NIMH seemed to have a new idea about how to jump-start drug development every month. Multiple schemes were proposed to lure PHARMA back into the game. And all eyes turned to the search for something "novel" to keep things alive [eg Ketamine and its derivatives].

DEPRESSION RATES RISING

The experts said that since the current generation of SSRI [selective serotonin reuptake inhibitor] antidepressants – including Eli Lilly’s blockbuster Prozac [fluoxetine] – are widely available as cheap generics, there is reluctance among health services to fund expensive new drugs that may not be much better. That is partly because existing medications, while by no means perfect, are quite effective in more than half of patients, the specialists said, and partly because in this condition in particular, placebo can have a massive impact. That makes it difficult, they explained, to show that a new drug is working above and beyond a positive placebo response and an already effective generation of available drugs.
Looking at the pipeline graphic and the decades of industry introducing new versions of SSRIs, this explanation doesn’t make much sense. Maybe most prescribing physicians [and their patients] have caught on to the fact that there’s no more mojo to be harvested from the SSRI antidepressants. And maybe what the drug companies say might well be true – that they’ve run out of candidate molecules [SSRIs] to even try. In other words, the SSRI paradigm has been exhausted, and there’s not another class of drugs to put in its place.
Depression is already one of the most common forms of mental illness, affecting more than 350 million people worldwide and ranking as the leading cause of disability globally, according to the World Health Organization. And rates are rising. Glyn Lewis, a professor of psychiatric epidemiology at University College London, cited data for England showing a doubling in prescriptions for antidepressants in a decade, to 61 million in 2015 from 31 million in 2005.
Here, we are asked to believe that the doubling of antidepressant prescriptions over that ten year span justifies the heading above [DEPRESSION RATES RISING]. A much more reasonable heading would be SSRI PRESCRIPTION RATES RISING. Why? Marketing. Primary Care Physicians prescribing SSRIs. Waiting Room screening. Patients taking them longer thinking they’re staving off something or correcting something [or inertia]. Knock yourself out here. I’ve just scratched the surface.
In the United States too, more people than ever are taking antidepressants. A study in the Journal of the American Medical Association [JAMA] in 2015 found that prevalence almost doubled from 1999 to 2012, rising to 13 from 6.9 percent. Yet several major drug companies including GlaxoSmithKline and AstraZeneca have scaled right back on neuroscience R&D in recent years, citing unfavorable risk-reward prospects.
Rejecting the [far-fetched] idea that the doubling of prescriptions equals a doubling of the disease prevalence, the drug companies have accepted another [more palatable] explanation – that the market is saturated. The likely reason they think that is that the market is saturated.
Goodwin said the absence of a drug development pipeline was also due to lagging scientific research into what is really happening in the brains of those who do and do not respond to current antidepressants. "It’s partly a failure of science, to be frank," said Goodwin. "Scientists have to … get more of an understanding about how these things actually work before we can then propose ways to improve them."

With all due respect to Dr. Goodwin, his pronouncement might’ve worked in the 90s [the Decade of the Brain] or the 2000s [the Research Agenda for the DSM-V]. But after thirty years, this argument itself has run out of mojo too. The scientists have scienced themselves silly trying to do what he suggests without much success. They’ve certainly gone through a small fortune in the process. The marketeers have had more success, raking in a beyond-modest fortune in the process. But this train is pulling into the station, its journey’s almost done. 

A supernova is a stellar explosion that briefly outshines an entire galaxy, radiating as much energy as the Sun or any ordinary star might emit over its lifespan. This astronomical event occurs during the last stages of a massive star’s life, whose dramatic and catastrophic destruction is marked by one final titanic explosion concentrated in a few seconds, creating a "new" bright star that gradually fades from sight over several weeks or months.

[click image for info]

Will the SSRIs meet the same kind of fate as SN 2014J – evaporating into the ether? I doubt it. At least not any time soon. They’re still useful in clinical practice when used carefully and in moderation. I expect the short-acting ones will gradually disappear because of their heightened withdrawal profiles. And hopefully the others will be used in a more time-limited way. And then, maybe we can get around to reworking our diagnostic system to bring it closer to clinical reality.

We’ll see… and speaking of shiny objects:

ASASSN-15lh [supernova designation SN 2015L] is a bright astronomical object. Initially thought to be a superluminous supernova, it was detected by the All Sky Automated Survey for SuperNovae [ASAS-SN] in 2015 in the southern constellation Indus. The discovery, confirmed by ASAS-SN group with several other telescopes, was formally described and published in a Science article led by Subo Dong at the Kavli Institute of Astronomy and Astrophysics [Peking University, China] on January 15, 2016. In December 2016, another group of scientists raised a hypothesis that ASASSN-15lh might not be a supernova. Based on observations from several stations on the ground and in space [including Hubble], these scientists proposed that this bright object might have been "caused by a rapidly spinning supermassive black hole as it destroyed a low-mass star". ASASSN-15lh, if a supernova, would be the most luminous ever detected; at its brightest, it was approximately 50 times more luminous than the whole Milky Way galaxy with an energy flux 570 billion times greater than the Sun….

[click image for info]
Mickey @ 2:20 PM

more bully pulpit…

Posted on Wednesday 11 January 2017

When our group assembled to do our RIAT analysis of Paxil Study 329, we already had access to a wealth of raw data from that clinical trial thanks to the hard work of many other people who came before us. So we had the a priori Protocol, the Statistical Analysis Plan, the CSR [Clinical Study Report], and the IPD [Individual Participant Data] – all available in the Public Domain as the result of various Legal Settlements and Court Orders. The only thing we didn’t have – the CRFs [Case Report Forms] – the actual raw forms the raters used to record their observations during the study. But we felt that we needed them too. We had good reason to question the system originally used to code the Adverse Events, and felt it was important to redo that part from scratch using a more modern and widely validated system.

 

About that time, the European Medicines Agency [EMA] had announced that it was going to release all of its regulatory data. AllTrials was pressing for "all trials registered, all trials reported." I was researching on what authority the data was being kept proprietary in the first place, and finding nothing much except convention and inertia. What was being called Data Transparency was in the air, and it was an exciting prospect.

And then the pharmaceutical companies seemed to do a turnabout. GSK had just been hit with a $3 B fine, in part over Study 329, and they were one of the first to sign on to AllTrials. But as things developed, what they offered was something different from what a lot of us really wanted, at least what I wanted. By that time, I wasn’t a rookie any more and I’d vetted a number of industry-funded, ghost-written psychopharmacology drug trials turned into journal articles. I can’t recall a one of them that was totally straight. So I wanted to see what the drug company saw – the a priori Protocol and Statistical Analysis Plan, the IPD, and the CRFs – the raw data. And the reason wasn’t to do any new research. It was to check their reported work, to do it right by the book, to stop the cheating.

And so with much fanfare, what the drug companies rolled out was something else – Data Sharing. They pretended that what we wanted was access to their raw data so we could do further new research – and that they were being real mensches to let us see it. They set up independent boards to evaluate proposals for projects. If we passed muster, we could have access via a remote desktop – meaning we couldn’t download the data. We could only see it online. All we could download were our results, if approved. In this scenario, they are generously sharing the data with us, avoiding duplication and wastage or some such, and the remote access portal protects the subjects’ privacy. They maintained control and ownership. What we wanted was Data Transparency to keep them honest, to stop them from publishing these fictional photo-shopped articles, to stop the cheating.

So our RIAT group submitted a request to their panel, and when they asked for a proposal, we didn’t make one up. We played it straight and told them why. After some back and forth, we submitted the Protocol from the original Study 329, and to their credit, they granted our request. The remote access system actually worked, but working inside of it was a complete nightmare [we called it "the periscope"]. The CRFs came to around 50,000 pages, and we could only look at them one page at a time! But that’s another story and it’s available in detail at https://study329.org/. The point for this post is that the call for Data Transparency got turned into something very different – Data Sharing. That’s called "SPIN." Instead of being on the hot-seat for having published so many distorted clinical trial reports – carefully crafted by professional ghost-writers – they portrayed themselves as heroes, generously allowing outsiders to use their data for independent research. Sleight of hand extraordinaire!

So what does this have to do with the New England Journal of Medicine, and editor Jeffrey Drazen, and Data Transparency versus Data Sharing, and a bully pulpit? A lot – some of it mentioned in this series from April 2016.
As editor of the NEJM, a prominent figure in the International Committee of Medical Journal Editors, and a member of the Committee on Strategies for Responsible Sharing of Clinical Trial Data, he occupies a powerful position in shaping policy. He never mentions the corruption that has so many of us up in arms [the reason we need such a policy], and positions himself consistently on the side of protecting the sponsors’ secrecy – sticking to the Data Sharing idea. His opinion of people who are trying to bring the corruption into the light of day is obvious:
by Dan L. Longo, and Jeffrey M. Drazen
New England Journal of Medicine. 2016  374:276-277.

The aerial view of the concept of data sharing is beautiful. What could be better than having high-quality information carefully reexamined for the possibility that new nuggets of useful data are lying there, previously unseen? The potential for leveraging existing results for even more benefit pays appropriate increased tribute to the patients who put themselves at risk to generate the data. The moral imperative to honor their collective sacrifice is the trump card that takes this trick…

A second concern held by some is that a new class of research person will emerge — people who had nothing to do with the design and execution of the study but use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited. There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “research parasites”…
His predecessors Arnold Relman [The new medical-industrial complex], Jerome Kassirer, and Marcia Angell [Is Academic Medicine for Sale?] led a New England Journal of Medicine that championed the integrity of medical science. Jeffrey Drazen uses the bully pulpit of that same position to thwart attempts to restore that integrity. He’s either blind to, complicit with, or part of the medical-industrial complex Arnold Relman warned us about. And he fills his journal with articles about industry and clinical trials that ignore the rampant corruption in published clinical trial reports [see the bully pulpit… for a long list of 2016’s examples]…
Mickey @ 8:00 AM

the bully pulpit…

Posted on Tuesday 10 January 2017

A Bully Pulpit is a conspicuous position that provides an opportunity to speak out and be listened to. This term was coined by President Theodore Roosevelt, who referred to the White House as a "bully pulpit", by which he meant a terrific platform from which to advocate an agenda.

Flashback
In 1980, New England Journal of Medicine editor Arnold Relman saw something ominous coming up ahead, and wrote an editorial [The new medical-industrial complex] warning that there was a threat to the integrity of academic medicine from a growing medical industry. And by 1984, the NEJM instituted a policy against publishing any editorials or review articles by authors with industry Conflicts of Interest. But by 1999 things had changed dramatically, a story I summarized in a narrative…. At the time, the new editor, Jeffrey Drazen, was embroiled in a controversy over his own ties to industry [see New England Journal of Medicine Names Third Editor in a Year, FDA censures NEJM editor, Medical Journal Editor Vows to Cut Drug Firm Ties].

Flash Forward
In the summer of 2015, Drazen published an editorial suggesting that the NEJM rescind Relman’s policy and allow experts with COI to write reviews and editorials, introducing a three-part series by one of his staff reporters explaining why this was really a good idea:
The suggestion was met with a swift flurry of negative responses from some of medicine’s solidest citizens:
And I couldn’t seem to keep my mouth shut about it either [a contrarian frame of mind… , wtf?…, wtf? for real…, a narrative…, not so proud…, unserious arguments seriously…, the real editors speak out…, got any thoughts?…, not backward…], mostly amplifying on what the others said. I’ll have to add that it felt almost personal. The New England Journal of Medicine was my own very first medical subscription ever, and I read it cover-to-cover for years. It was part of my coming of age as a physician, articles embedded in my own scientific and ethical infrastructure. And I felt that Jeffrey Drazen was betraying that history. Who was he to do that? Over the year and a half since that series came out, he’s been on my radar. But the New England Journal of Medicine isn’t one of the journals I follow regularly, so it only came up when there was a loud blip, like his particularly obnoxious editorial, Data Sharing – the one where he warned us about "research parasites" [see notes from a reluctant parasite…].

Then, someone sent me a link to this month’s The Large Pharmaceutical Company Perspective about several heroic PHARMA adventures. I noticed it was from a series called The Changing Face of Clinical Trials, so I ran down the rest of the series and read them all. And then I found some other NEJM clinical trials offerings in 2016.

    The Changing Face of Clinical Trials
  1. June 2, 2016 | J. Woodcock and Others
    With this issue, we launch a series of articles that deal with contemporary challenges that affect clinical trialists today. Articles will define a specific issue of interest and illustrate it with examples from actual practice, as well as bring additional history and color to the topic.
  2. June 2, 2016 | L.D. Fiore and P.W. Lavori
    Investigators use adaptive trial designs to alter basic features of an ongoing trial. This approach obtains the most information possible in an unbiased way while putting the fewest patients at risk. In this review, the authors discuss selected issues in adaptive design.
  3. August 4, 2016 | I. Ford and J. Norrie
    In pragmatic trials, participants are broadly representative of people who will receive a treatment or diagnostic strategy, and the outcomes affect day-to-day care. The authors review the unique features of pragmatic trials through a wide-ranging series of exemplar trials.
  4. September 1, 2016 | S.J. Pocock and G.W. Stone
    When the primary outcome of a clinical trial fails to reach its prespecified end point, can any clinically meaningful information still be derived from it? This review article addresses that question.
  5. September 8, 2016 | S.J. Pocock and G.W. Stone
    When a clinical trial reaches its primary outcome, several issues must be considered before a clinical message is drawn. These issues are reviewed in this article.
  6. October 6, 2016 | D.L. DeMets and S.S. Ellenberg
    Randomized clinical trials require a mechanism to safeguard the enrolled patients from harm that could result from participation. This article reviews the role of data monitoring committees in the performance of randomized clinical trials.
  7. November 3, 2016 | M.A. Pfeffer and J.J.V. McMurray
    Ethical issues can arise in the design and conduct of clinical trials. Using the trials that set the stage for our current treatment of hypertension, the authors show how the changing treatment landscape raised ethical problems as these trials were undertaken.
  8. January 5, 2017 | M. Rosenblatt
    The former chief medical officer of a large pharmaceutical company addresses the issue of complexity and how it affects the performance of clinical trials.
    The Final Rule
  • September 16, 2016 | D.A. Zarin and Others
    The final rule for reporting clinical trial results has now been issued by the Department of Health and Human Services. It aims to increase accountability in the clinical research enterprise, making key information available to researchers, funders, and the public.
    History of Clinical Trials
  1. June 2, 2016 | L.E. Bothwell and Others
  2. July 14, 2016 | A. Rankin and J. Rivest
  3. August 11, 2016 | L.E. Bothwell and S.H. Podolsky
  4. Clinical Trials, Healthy Controls, and the IRB
    September 15, 2016 | L. Stark and J.A. Greene
When I got down to the next ones about Data Sharing, I went back even further because I was waking up to something I had kind of forgotten – a bit of relevant sleight of hand that should have been on the front burner, but somehow got lost in the shuffle. What I realized was that the series I started this post with, Revisiting the Commercial–Academic Interface, didn’t just come out of the blue. It was part of a story that was larger – one that I’ll remind us of in the next post. But first here are the articles on Data Sharing:
    Data Sharing
  1. Collaborative Clinical Trials
    March 3, 2011 | A.J. Moss, C.W. Francis, and D. Ryan
  2. Pragmatic Trials — Guides to Better Patient Care?
    May 5, 2011 | J.H. Ware and M.B. Hamel
  3. October 4, 2012 | R.J. Little and Others
  4. October 24, 2013 | M.M. Mello and Others
  5. November 27, 2014 | B.L. Strom and Others
  6. December 25, 2014 | S. Bonini and Others
  7. January 8, 2015 | D.A. Zarin, T. Tse, and J. Sheehan
  8. January 15, 2015 | J.M. Drazen
  9. Adaptive Phase II Trial Design
    July 7, 2015 | D. Harrington and G. Parmigiani
  10. August 4, 2015 | The Academic Research Organization Consortium for Continuing Evaluation of Scientific Studies — Cardiovascular (ACCESS CV)
  11. August 4, 2015 | The International Consortium of Investigators for Fairness in Trial Data Sharing
  12. August 4, 2015 | H.M. Krumholz and J. Waldstreicher
  13. January 21, 2016 | Dan L. Longo, and Jeffrey M. Drazen
  14. August 4, 2016 | E. Warren
  15. September 22, 2016 | F. Rockhold, P. Nisen, and A. Freeman
  16. September 22, 2016 | B. Lo and D.L. DeMets
  17. September 22, 2016 | R.L. Grossman and Others
  18. October 27, 2016 | B.L. Strom and Others
And so on to the reminder in the next post[s] – how Data Transparency got turned into Data Sharing – and why I called this the bully pulpit…
Mickey @ 5:57 PM