basic efficacy table – it’s what’s missing that matters…

Posted on Thursday 26 January 2017

I’ll be the first to admit that I’ve been pretty muddled these last several months – since, oh say, around 10:00 PM on November 8th, 2016 to be more precise. I’ve tried out any number of defense mechanisms to tell myself that I’m doing just fine. But to be honest, this election sort of took the wind out of my sails and I expect others of you might have experienced something similar. So I’m now sure that fine isn’t the best word to describe how I’m doing with all of this. I’ve got nothing to say about the election that you haven’t already thought yourself, except to re-emphasize that neither denial nor rationalization are much help on either side of this coin.

I don’t know if it’s apparent, but these five recent posts are a series, swirling around the same thoughts. It’s my attempt to pick up the thread of where I was in the Fall, and get back some semblance of my pre·November·2016 mindset:

  1. here’s Linus…
  2. show me the damn  code  numbers!…
  3. the commercial strangle-hold
  4. the basic efficacy table I…
  5. the basic efficacy table II…
We’ve all been focused on Randomized Controlled Trials [RCTs], for obvious reasons. They seem to be the basis for almost everything. In the face of there being so much distortion and artifice in the Journal articles generated from these studies, there’s been a vigorous effort to gain access to the raw data – data kept secret as proprietary property by the Sponsors of these trials. Some of the Pharmaceutical companies have set up mechanisms to allow access to their data by application – Data Sharing. Once approved, you can view and work on the data on a remote desktop [which is hard work]. It hasn’t been used much. The companies are complaining [Data Sharing — Is the Juice Worth the Squeeze?].

I was on an international team that applied for and analyzed an RCT from 2001 [Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence]. It was a herculean task – rewarding, but labor-intensive and often frustrating [see]. While I learned a lot, there aren’t going to be many such reanalyses under the current system, because each one requires so much effort, time, and expertise. And so full Data Transparency [which I’ve supported on this blog] is a solution for special cases, but not everyday use. It would be infinitely easier if the data were in the public domain, not captive in the remote desktop interface and subject to so many restrictions, but the pharmaceutical companies have dug their heels deep into the pavement. They’re going to any length to hold onto it.

There’s something else. These short-term RCTs are done for New Drug Approvals [NDAs] or New Indication Approvals, and that may be fine for that kind of initial testing – maybe the only real way to clear the FDA’s opening hurdle. But as in the case of the SSRIs or the COX-2 inhibitors [Vioxx®, Celebrex®], people might take these drugs for months, or even years, rather than just a few weeks. And that’s what got me to thinking about the "prototypical nerd," Linus Torvalds [here’s Linus…]. He didn’t set out to challenge the commercial monopoly, but challenge it he did. And the reason I say Linux will ultimately come out on top is that its development is driven by the science of computing, not the profits of the enterprise.

There are some things we need to do to clean up the woefully unregulated Clinical Trial problem, sure enough. But salvation depends on our developing a system that persists past approval. The data is there. We just haven’t figured out how to harvest it and create something that grows, instead of a system that seems stuck on first base. Short-term trials don’t even need an article for efficacy, just a simple basic efficacy table, and for adverse events – a truthful compilation of adverse events – box scores. After all, it’s just product testing. It’s what’s missing that matters…
Mickey @ 10:17 PM

the basic efficacy table II…

Posted on Tuesday 24 January 2017

When Dr. Bernard Carroll comments here, he often uses the term "hand waving" when describing some of the tricky maneuvers used in the clinical trial reports to smooth over shaky logic or rationalize absurdities. It’s a great term, I think originating in the world of stage magicians who use exaggerated gesticulations to distract your attention. My wife’s a figure skating fan, and there it’s called "hand dancing" – dramatic arm and hand gestures to cover up sloppy skating. After reading so many jury-rigged clinical trial reports, I’ve almost come to see the whole narrative as organized around a verbal version of these attempts at artifice, and find myself jotting down the essential pieces in a hastily sketched table on the back of a nearby envelope or piece of scrap-paper. So the basic efficacy table isn’t just a concept or a proposal. It’s an outgrowth of my experience. I guess the formula goes:

article narrative – bullshit = basic efficacy table

One thing the ghost-writers seem to count on is that most doctors look at the abstract, scan the graphs and tables, and move on. I used to see that as virtue – getting through so much material on a regular basis. In my doctor youth, I could do that. But no longer. If a Clinical Trial report or a review article is there to be read, it’s there to be read closely, pencil and envelope back at the ready. At least that’s true of the industry funded clinical trials of psychiatric drugs that I find myself reading these days.

When I drew this diagram, I wasn’t drawing how things are – I was drawing how I wish they would be. Step one is the approval of the study Protocol by the Institutional Review Board. At that point, by my reckoning, the trial should be registered [on ClinicalTrials.gov]. There’s no reason at all that the Protocol couldn’t be published at that point. It has already been written down for the IRB. Why not make it a part of the registration process? That would mean that a bona fide copy of the a priori declarations would be available from the outset.

This Fall, Dr. Carroll, Dr. John Noble, and I had been working on Dr. Carroll’s proposal built around the scheme in that graphic when two things happened. The NIH/FDA issued their own new plan [what to do? the final rule?…], and then there was a Presidential election. The latter has had me off my game since. But the fog is clearing a bit, so I want to get back to the former. This is from the summary table in Director Zarin’s paper outlining the changes:
adapted from Table 1A in
Trial Reporting in ClinicalTrials.gov – The Final Rule
by Deborah A. Zarin, Tony Tse, Rebecca J. Williams, and Sarah Carr
New England Journal of Medicine. 2016 375[20]:1998-2004.
Registration information reporting
When does information need to be submitted to or posted on ClinicalTrials.gov?
  Submission: Within 21 days after enrollment of the first trial participant
  Posting: Generally, within 30 days after submission. For ACTs of unapproved or uncleared devices, no earlier than FDA approval or clearance and not later than 30 days after FDA approval or clearance (i.e., “delayed posting”), unless a responsible party authorizes posting of submitted information prior to FDA approval or clearance
What information?
  Descriptive information about the trial: e.g., brief title, study design, primary outcome measure information, studies an FDA-regulated device product, device product not approved or cleared by the FDA, post prior to FDA approval or clearance, and study completion date
  Recruitment information: e.g., eligibility criteria, overall recruitment status
  Location and contact information: e.g., name of sponsor, facility information
  Administrative data: e.g., secondary ID, human-subjects protection review board status

Results information reporting
When does information need to be submitted to or posted on ClinicalTrials.gov?
    Standard deadline: Within 12 months after the date of final data collection for the prespecified primary outcome measures (primary completion date)
    Delayed submission with certification: May be delayed for up to 2 additional years (i.e., up to 3 years total after the primary completion date) for trials certified to be undergoing commercial product development for initial FDA marketing approval or clearance or approval or clearance for a new use
    Submitting partial results: Deadlines are established for submitting results information for a secondary outcome measure or additional adverse information that has not been collected by the primary completion date
    Extension request: After receiving and reviewing requests, NIH may extend deadlines for “good cause”
  Posting: Within 30 days after submission
What information?
  Participant flow: Information about the progress of participants through the trial by treatment group, including the number who started and completed the trial
  Demographic and baseline characteristics: Demographic and baseline characteristics collected by treatment group or comparison group and for the entire population of participants in the trial, including age, sex and gender, race or ethnicity, and other measures that were assessed at baseline and are used in the analysis of the primary outcome measures
  Outcomes and statistical analyses: Outcomes and statistical analyses for each primary and secondary outcome measure by treatment group or comparison group, including results of scientifically appropriate statistical analyses performed on these outcomes, if any.
  Adverse event information: Tables of all anticipated and unanticipated serious adverse events and other adverse events that exceed a 5% frequency threshold within any group, including time frame (or specific period over which adverse event information was collected), adverse-event reporting description (if the adverse-event information collected in the clinical trial is collected on the basis of a different definition of adverse event or serious adverse event from that used in the final rule), collection approach (used for adverse events during the study: systematic or nonsystematic), table with the number and frequency of deaths due to any cause by treatment group or comparison group
  Protocol and statistical analysis: Protocol and statistical analysis plan to be submitted at time of results information reporting (may optionally be submitted earlier)
  Administrative data: Administrative information, including a point of contact to obtain more information about the posted summary results information

First off, anything they do is a step forward. They’ve had the machinery available for two decades, and have done little with it. So they’re finally requiring registration for all the studies, and they pledge to keep up with it. An excellent start.

Initial Registration:  They say that they want the submission of the trial registration within three weeks of the initial subject’s enrollment and posting on-line within the 30 days after submission. Of course I’d prefer our "before the study starts" timing, but within the first two months will do. The point is to get it registered before they can look at the results and modify the Protocol – and two months is early enough for me. As for what’s to be posted, they don’t require posting the whole Protocol. That’s a disappointment. I’d prefer anchoring the outcome parameter at the beginning. But at least they do require declaration of the Primary Outcome Variables with registration.

Posting the Results:  This has traditionally been the most ignored requirement. They say: "Outcomes and statistical analyses for each primary and secondary outcome measure by treatment group or comparison group, including results of scientifically appropriate statistical analyses performed on these outcomes, if any" and add in "Protocol and statistical analysis plan to be submitted at time of results information reporting (may optionally be submitted earlier)." And I say A+! With that information, I could fill out my entire basic efficacy table. The only thing they left out was the Effect Size, and there would be ample information to do that calculation.
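That calculation really is simple. A minimal sketch of one common version, Cohen’s d with a pooled standard deviation, using made-up illustrative numbers [not from any actual trial]:

```python
from math import sqrt

def cohens_d(mean_drug, sd_drug, n_drug, mean_placebo, sd_placebo, n_placebo):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = sqrt(((n_drug - 1) * sd_drug**2 + (n_placebo - 1) * sd_placebo**2)
                     / (n_drug + n_placebo - 2))
    return (mean_drug - mean_placebo) / pooled_sd

# Illustrative numbers only: improvement scores on a rating scale, drug vs placebo
d = cohens_d(12.4, 8.0, 100, 10.0, 8.0, 100)
print(f"{d:.2f}")  # prints 0.30
```

All it takes is the group means, standard deviations, and group sizes – exactly the numbers a filled-out results database would contain.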

And for timing on the Results? I’d have to say "barely passing," if that. "Within 12 months after the date of final data collection for the prespecified primary outcome measures (primary completion date)" and "May be delayed for up to 2 additional years (i.e., up to 3 years total after the primary completion date) for trials certified to be undergoing commercial product development for initial FDA marketing approval or clearance or approval or clearance for a new use." That’s a disappointment, and I can’t see any reason for it. The results are just what they are – they’re the results of the prespecified variables analyzed in the prespecified way. Who needs time for that? But I’ll have to admit that if they were actually to follow these standards, the improvement would still be dramatic, and would probably satisfy most of us. With new drugs or new indications, they’d still be early in the drug’s patent life.

The loss here for me has to do with the publication of the Clinical Trial in a Peer-Reviewed Journal. I think the editors, peer-reviewers, and those of us who read these articles have the right to know …

  • the a priori declared primary and secondary outcome variables
  • the prospectively defined statistical analysis plan
  • the values, variance, and effect size for those specific parameters
… at the time we read the article. These postponed postings may well allow publication in journals prior to the filled-out Results Database appearing on ClinicalTrials.gov. So there it is, as well as I know how to present it. What do you think?
Mickey @ 11:44 AM

the basic efficacy table I…

Posted on Monday 23 January 2017

I don’t know if my musings about the Linux story worked, but what I was trying to talk about was the computer community’s struggles with the secrecy of the commercial developers – a problem similar to ours with Clinical Trials and their sponsors. Watching those computer code struggles happen from the sidelines, things took off when Linux came along and they had a platform [operating system] of their own. Linus’s problem with "UI" [user interface] has held them back [see the TED talk in here’s Linus…], but others are beginning to make it easier to use now. My point was that rather than trying to gain access to the systems of others, they came into their own when they had their own system.

For many reasons, I think that something similar is ultimately the only real solution to the problem of industry’s control of drug trials and their reporting. RCTs may be appropriate for drug-approval purposes, and the FDA usually bases approval on meeting the a priori declared primary outcome variables. But they aren’t so appropriate for deciding about actual usage of the drugs, particularly when presented in the journal articles we call experimercials – often distorted by a variety of tricky moves. Medicine is going to have to find some non-commercially driven way to test drugs that will give us reliable ongoing clinical information – our own Open Source platform.

But in the meantime, I’ve been thinking about data transparency – having access to all of the raw data from clinical trials. I certainly think that’s the way it should be. Medicine is almost by definition an open science. The majority of clinical education is apprenticeship, freely given. No secret potions, no wizards allowed. Secret data just doesn’t fit. But the business end of industry isn’t medicine. I frankly doubt that we’ll get access to all the data free and clear any time soon. It’ll probably long be a fight, like it was for us with Paxil Study 329. The marketeers see that data as part of their commodity and hold on to it at any cost. So they’ll stick with their restrictive data sharing meme as long as possible; jump on any legality they can find along the way; play havoc with the EMA’s or anyone else’s data release plan; etc. [and they can afford a lot of havoc!].

When we took on the Paxil Study 329 project, I learned a big lesson. Ultimately, we had all the data: the a priori Protocol; the Complete Study Report; the Individual Participant Data; and the Case Report Forms. The remote desktop interface was as difficult as we said it was, but beyond that – our analysis was a forced march that took a lot of work and time. Even with all that information available on every trial, I doubt many people would take on the task. The drug companies have a support staff to do the leg work. People doing independent re-analysis don’t have that. We sure didn’t. Having total data access is ethically important, but practically, it’s quite another matter. So that leads us to think about: "What are the bare-bones essentials in looking at the efficacy analysis of an RCT?"

  1. What are the a priori defined primary and secondary outcome variables and how will they be analyzed?
  2. What are the results of those specific analyses after the study was completed?
I’ve talked endlessly about the fact that prospective declaration of the outcome variables and analytic methods is an essential element of an RCT. That’s another big lesson from Paxil Study 329. If you look at their full study report for the acute phase, they calculated every conceivable variable and picked four positive ones out of the more than 40 to report on – none of their choices were declared as either primary or secondary in the Protocol. Recently, Ben Goldacre and his group have shown that Outcome Switching is quite common even now, and that the a priori Protocols are hard, if not impossible, to come by [see COMPare]. So while the battle over full data transparency related to the raw data continues, there is consensus that the a priori Protocol and the Results should be publicly available, and that the place for posting them is ClinicalTrials.gov.
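At its core, a COMPare-style check is nothing fancier than comparing two lists: the outcomes declared in the a priori Protocol against the outcomes actually reported in the article. A minimal sketch, with purely hypothetical outcome names standing in for a real trial’s:

```python
# Hypothetical outcome names for illustration; a real check would pull these
# from the registered Protocol and the published article.
protocol_outcomes = {
    "primary": {"HAM-D change at week 8"},
    "secondary": {"CGI-I at week 8", "relapse rate"},
}
reported_outcomes = {"CGI-I at week 8", "HAM-D responder rate", "subscale X"}

declared = protocol_outcomes["primary"] | protocol_outcomes["secondary"]
unreported = declared - reported_outcomes   # prespecified but never reported
novel = reported_outcomes - declared        # reported but never prespecified

print("prespecified but not reported:", sorted(unreported))
print("reported but not prespecified:", sorted(novel))
```

Anything in the second set is a switched or smuggled-in outcome – the very maneuver Study 329 made famous.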

So, whether we have access to the raw data or not, we should be able to fill out this Basic Efficacy Table [or something close] for every Clinical Trial. The reasons we can’t are mixed up with non-compliance and non-enforcement, not non-consensus or non-requirement:
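For concreteness, here’s a rough sketch of the rows and columns such a table might contain – the column names are my own guess at the essentials, and the numbers are illustrative only, not from any actual trial:

```python
# One row per a priori declared outcome variable, filled from registered results.
# All names and values below are illustrative placeholders.
rows = [
    # (outcome, declared as, drug result, placebo result, p-value, effect size)
    ("Outcome A", "primary",   -10.5, -8.9, 0.04, 0.21),
    ("Outcome B", "secondary",  -4.2, -3.9, 0.31, 0.08),
]

print(f"{'outcome':<12}{'declared':<11}{'drug':>7}{'placebo':>9}{'p':>7}{'ES':>6}")
for name, declared, drug, placebo, p, es in rows:
    print(f"{name:<12}{declared:<11}{drug:>7}{placebo:>9}{p:>7}{es:>6}")
```

Nothing more than that: the declared variables, the results of the prespecified analyses, and the effect sizes – the back-of-the-envelope table, typed up.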

Mickey @ 4:35 PM

the commercial strangle-hold…

Posted on Sunday 22 January 2017

Financial ties of principal investigators and randomized controlled trial outcomes: cross sectional study
by Rosa Ahn, Alexandra Woodbridge, Ann Abraham, Susan Saba, Deborah Korenstein, Erin Madden, W John Boscardin, and Salomeh Keyhani.
BMJ 2017 356:i6770

Objective: To examine the association between the presence of individual principal investigators’ financial ties to the manufacturer of the study drug and the trial’s outcomes after accounting for source of research funding.
Design: Cross sectional study of randomized controlled trials [RCTs].
Setting: Studies published in “core clinical” journals, as identified by Medline, between 1 January 2013 and 31 December 2013.
Participants: Random sample of RCTs focused on drug efficacy.
Main outcome measure: Association between financial ties of principal investigators and study outcome.
Results: A total of 190 papers describing 195 studies met inclusion criteria. Financial ties between principal investigators and the pharmaceutical industry were present in 132 [67.7%] studies. Of 397 principal investigators, 231 [58%] had financial ties and 166 [42%] did not. Of all principal investigators, 156 [39%] reported advisor/consultancy payments, 81 [20%] reported speakers’ fees, 81 [20%] reported unspecified financial ties, 52 [13%] reported honorariums, 52 [13%] reported employee relationships, 52 [13%] reported travel fees, 41 [10%] reported stock ownership, and 20 [5%] reported having a patent related to the study drug. The prevalence of financial ties of principal investigators was 76% [103/136] among positive studies and 49% [29/59] among negative studies. In unadjusted analyses, the presence of a financial tie was associated with a positive study outcome [odds ratio 3.23, 95% confidence interval 1.7 to 6.1]. In the primary multivariate analysis, a financial tie was significantly associated with positive RCT outcome after adjustment for the study funding source [odds ratio 3.57, 1.7 to 7.7]. The secondary analysis controlled for additional RCT characteristics such as study phase, sample size, country of first authors, specialty, trial registration, study design, type of analysis, comparator, and outcome measure. These characteristics did not appreciably affect the relation between financial ties and study outcomes [odds ratio 3.37, 1.4 to 7.9].
Conclusions: Financial ties of principal investigators were independently associated with positive clinical trial results. These findings may be suggestive of bias in the evidence base.
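As a sanity check, the unadjusted odds ratio in that abstract can be reproduced from the reported counts alone – a minimal sketch, where the 2×2 table is my reconstruction from the abstract’s 103/136 and 29/59 figures:

```python
from math import exp, log, sqrt

# Reconstructed 2x2 table from the abstract:
# positive studies: 103 with PI financial ties, 136 total -> 33 without
# negative studies:  29 with PI financial ties,  59 total -> 30 without
a, b = 103, 136 - 103   # positive studies: ties, no ties
c, d = 29, 59 - 29      # negative studies: ties, no ties

odds_ratio = (a * d) / (b * c)

# Wald 95% confidence interval on the log odds ratio
se = sqrt(1/a + 1/b + 1/c + 1/d)
lo = exp(log(odds_ratio) - 1.96 * se)
hi = exp(log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.1f} to {hi:.1f}")
# prints: OR = 3.23, 95% CI 1.7 to 6.1
```

Which matches the paper’s unadjusted figure of 3.23 [1.7 to 6.1] exactly – no raw data required, just the box scores.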
If you’re in need of a publication, all you have to do is study the relationship between Conflict of Interest and outcome. No matter what you measure, you’re sure to find a robust correlation. What distinguishes this study? It’s reasonably recent. It covers all specialties. And the findings remain no matter what other confounding variables you control for. It brings home Alastair Matheson‘s point that declaring Conflict of Interest mitigates nothing.
We found that more than half of principal investigators of RCTs of drugs had financial ties to the pharmaceutical industry and that financial ties were independently associated with positive clinical trial results even after we accounted for industry funding. These findings may raise concerns about potential bias in the evidence base.
Possible explanations for findings
The high prevalence of financial ties observed for trial investigators is not surprising and is consistent with what has been reported in the literature. One would expect industry to seek out researchers who develop expertise in their field; however, this does not explain why the presence of financial ties for principal investigators is associated with positive study outcomes. One explanation may be “publication bias.” Negative industry funded studies with financial ties may be less likely to be published. The National Institutes of Health [NIH]’s ClinicalTrials.gov registry was intended to ensure the publication of all trial results, including both NIH and industry funded studies, within one year of completion. However, rates of publication of results remain low even for registered trials…
Other possible explanations for our findings exist. Ties between investigators and industry may influence study results by multiple mechanisms, including study design and analytic approach. If our findings are related to such factors, the potential solutions are particularly challenging. Transparency alone is not enough to regulate the effect that financial ties have on the evidence base, and disclosure may compromise it further by affecting a principal investigator’s judgment through moral licensing, which is described as “the unconscious feeling that biased evidence is justifiable because the advisee has been warned.” Social experiments have shown that bias in evidence is increased when conflict of interest is disclosed. One bold option for the medical research community may be to adopt a stance taken in fields such as engineering, architecture, accounting, and law: to restrict people with potential conflicts from involving themselves in projects in which their impartiality could be potentially impaired. However, this solution may not be plausible given the extensive relationship between drug companies and academic investigators. Other, incremental steps are also worthy of consideration. In the past, bias related to analytic approach was tackled by a requirement for independent statistical analysis of major RCTs. Independent analysis has largely been abandoned in favor of the strategy of transparency, but perhaps the time has come to reconsider this tool to reduce bias in the analysis of RCTs. This approach might be especially effective for studies that are likely to have a major effect on clinical practice or financial implications for health systems. Another strategy to reduce bias at the analytic stage may be to require the publishing of datasets. ICMJE recently proposed that the publication of datasets should be implemented as a requirement for publication. This requirement is increasingly common in other fields of inquiry such as economics. 
Although independent analyses at the time of publication may not be feasible for journals from a resource perspective, the requirement to release the dataset to be reviewed later if necessary may discourage some forms of analytical bias. Finally, authors should be required to include and discuss any deviations from the original protocol. This may help to prevent changes in the specified outcome at the analytic stage…
This is a good article filled with thoughtful suggestions, well worth reading. But one might ask why I put it here in the middle of some posts about an offbeat Finnish computer programmer [Linus Torvalds] and an analogy with his rogue computer operating system [Linux] – how it impacted a similar issue in the computer software world [here’s Linus… and show me the damn  code  numbers!…]. It’s because as useful as their suggestions are, and as close as they are to the ones many of us would make, they’re based on several ideas that approach the domain of fallacy:

  • That the Randomized Controlled Trial [RCT] is a good way to determine clinical usefulness.

    In 1962, the FDA was charged with requiring two Randomized Controlled Trials [RCTs] demonstrating statistical efficacy and all human usage data demonstrating safety in order to approve a drug for use.  It’s a weak standard, designed to keep inert potions off the market. It was presumed that the medical profession would have a higher standard and determine clinical usefulness. That made [and makes] perfect sense. The FDA primarily ensures safety and keeps swamp root and other patent medicines out of our pharmacopeia, but clinical usefulness should be determined by the medical profession and our patients. Not perfect, but I can’t think of a better system for approval. However, approval doesn’t necessarily correlate with clinical usefulness, or for that matter, long term safety. And then something unexpected happened. The Randomized Controlled Trials became the gold standard for everything – called Evidence Based Medicine. Randomized Clinical Trials are hardly the only form of valid evidence in medicine. That was a reform idea that kept people from shooting from the hip, but was also capable of throwing the baby out with the bathwater.

    This structured procedure designed to dial out everything and isolate the drug effect [RCTs] became a proxy for the much more complex and varied thing called real life. RCTs have small cohorts of recruited [rather than help-seeking] subjects in short-term trials. Complicated patients are eliminated by exclusionary criteria. The metrics used are usually clinician-rated rather than subject-rated. And the outcome is measured by statistical significance instead of by the strength of the response. Blinding and the need for uniformity eliminate any iterative processes in dosing or identifying target symptoms. It’s an abnormal situation on purpose, suitable for the yes-no questions of approval, but not the for-whom information of clinical experience.

  • That it will ever be possible to create a system that ensures that the industry sponsors will openly report on their RCTs without exaggerating efficacy and/or understating toxicity.

    These RCTs were designed for submission to the FDA for drug approval. The FDA reviewers have access to the raw data and have regularly made the right calls. But then those same studies are written up by professional medical ghost-writers, signed onto by KOL academic physicians with florid Conflicts of Interest, and submitted to medical journals to be reviewed by peer reviewers who have no access to the raw data. The journals make money from selling reprints back to the sponsors for their reps to hand out to practicing doctors. These articles are where physicians get their information, and discrepancies between the FDA version and the Journal versions are neither discussed, nor even easy to document.

    So it’s not the FDA Approval that’s the main problem. It’s the glut of journal articles that have been crafted from those studies and been the substrate for advertising campaigns that have caused so much trouble. The basic Clinical Trials that were part of the Approval have been glamorized. And many trials that were unsuccessful attempts at indication creep have been spun into gold. It seems that every time there’s an attempt to block the fabulation of such trials, there have been countermoves that render the reform attempts impotent. So far, it’s been a chess game that never seems to get to check-mate.
I don’t mean to malign this article at all. I thought it was well done and I liked the discussion. In fact, in the next post, I’m going to make some suggestions that are very like the ones they discuss. But I want to stick with my analogy between the commercial domination of the personal computer landscape and how it’s playing out. Rather than continuing to swim in someone else’s river, they took advantage of some other streams that appeared and began to come together to make a river of their own, impervious to some company’s fourth-quarter bottom line. And, sooner or later, Linux and its heirs will be the ones that last.

Structured RCTs may well be the best method for our regulatory agencies to use in evaluating new drugs. They cost a mint to do, and about the only people who can fund them are the companies who can capitalize on success – the drug companies. But medicine shouldn’t buy into the notion that they’re the only way to evaluate the effectiveness of medicinal products. As modern medicine has become increasingly organized and documented, there are huge caches of data available. And it’s not just patient data or clinic data. What about the pharmacy data that’s already being used by PHARMA to track physicians’ prescribing patterns? And where are the departments of pharmacology and the schools of pharmacy in following medication efficacy and safety? or the HMOs? or the Health Plans? the VAH? What about the waiting room questionnaires? I’d much rather they ask about the medications the patient is on than be used to screen for depression. It’s really the ongoing data after a drug is in use that clinicians need anyway – more important than the RCT that gets things started.

So while it’s important to continue the push for data transparency and clinical trial reporting reform, it’s also time to explore other ways of gathering and evaluating the mass of information that might free us from the commercial strangle-hold we live with now – and potentially give us an even better picture of what our medications are doing over time. There’s a way out of this conundrum. The task is to find it…
Mickey @ 5:00 PM

places and spaces…

Posted on Saturday 21 January 2017

Cartograms of the 2016 presidential election [with the country scaled by population rather than area]. On the left, colored by who won the county. On the right, color gradient by percentage vote in county.

Can you say Urban versus Rural?
Mickey @ 7:00 AM

show me the damn  code  numbers!…

Posted on Wednesday 18 January 2017

The background image is an iPhone photo of a spreadsheet opened on that little Linux computer in the last post, and the midground is a spreadsheet from my Windows computer. They’re both Open Source versions of OpenOffice, the free Office Suite that I use instead of Microsoft Excel [foreground]. The point of the graphic is that they’re basically the same. If I hooked that little $35 machine to a full-sized monitor, I could do everything on it I need to do with ease. I don’t use the new Excel because I don’t like their "ribbon" interface and I can’t make the graphing utility do what I need it to do [I wonder if they changed it just to have something new].

Back in the early PC days, the software developers [Microsoft, Apple, etc] wanted to own their software through and through, make the code proprietary. The nerds and hackers of the world said ‘show me the damn code’ and the companies said ‘hell no.’ There were lawsuits, and posturing, and all manner of haggling about whether computer code was intellectual property. For users, it was a problem because every new release [of something like Microsoft Word] meant that to get the new features, you had to buy it again or pay for an upgrade. And that extended to the operating system itself [DOS, OS]. It was a monopoly.

When the World Wide Web came along, there was a different tradition. The underlying network came from the government [DARPA], and the language that made it work [HTML] came from a physics laboratory [CERN], developed by Tim Berners-Lee for internal use. The Browser used to read the HTML was Mosaic, and later Netscape – given away free, its code eventually released as Open Source, where it lives on, built and maintained by volunteers, as Mozilla Firefox. Microsoft wanted to grab the Internet, so they gave their Browser away too [Internet Explorer], and now Google’s Chrome has jumped into the mix. The Netscape tradition carried the day and the Open Source Movement took hold – Linux, MySQL, Open Office, the Apache server, and a whole lot of other very important stuff you can’t see. So the companies held on to their proprietary code and the home computer market primarily by building user-friendly interfaces [and inertia]. As Linus Torvalds implied in the TED interview, hackers, geeks, and nerds don’t do interfaces very well – and they sure aren’t marketeers. So now there’s a mix of Open Source and Proprietary software that’s actually mutually beneficial – a loose symbiosis of sorts, Android being a prime example.

This battle over Data Transparency with Clinical Trials and other scientific data strikes me as similar to those early days with computer code: intellectual property, commercial interests, competition, secrecy, etc. But there’s one difference that way ups the ante. It’s abundantly apparent that proprietary ownership of the data has allowed a level of sophisticated corruption and misinformation that is unequaled in the history of medicine in my opinion. So while there’s a real similarity to the computer code wars, the stakes reach beyond commerce and into the basic fabric of medical care. Have we learned something from Open Source and related initiatives that might help get things back in the road? Maybe…

  • PLOS [Public Library of Science] is a nonprofit open access scientific publishing project aimed at creating a library of open access journals and other scientific literature under an open content license. It launched its first journal, PLOS Biology, in October 2003 and, as of October 2015, publishes seven journals.
  • ClinicalTrials.gov is a registry and results database of publicly and privately supported clinical studies of human participants conducted around the world. Learn more About Clinical Studies … including relevant History, Policies, and Laws.
  • PubMed comprises more than 26 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full-text content from PubMed Central and publisher web sites.
  • AllTrials/COMPare: All Trials Registered | All Results Reported. The COMPare project is systematically checking every trial published in the top five medical journals, to see if they have misreported their findings.
  • Rxisk: No one knows a prescription drug’s side effects like the person taking it. Make your voice heard. RxISK is a free, independent drug safety website to help you weigh the benefits of any medication against its potential dangers.
… and there are many more. As I list these resources, I realize how much the general idea of Open Source maps onto the effort to put a stop to the commercial corruption of our pharmacopeia [and vice versa]. Perhaps where we lag is that we’re still hung up on trying to get "them" to change, much like the early efforts to get the major software corporations to change. Is the lesson from this story that the hackers created an alternative system of their own instead of continuing to bang their heads against the stone wall? It seems to me that Linux was more than the sum of its code. It was an organizing principle, and we don’t yet have a Linus Torvalds or his system – something to rally around. hmm…
Mickey @ 2:53 PM

here’s Linus…

Posted on Monday 16 January 2017

Sometimes it’s the things right under your nose that are the hardest things to see. This is my desktop at home [uncharacteristically uncluttered]. Pretty much standard fare with a couple of 27" monitors connected to a big Windows 10 computer under the desk. But there are some anomalies. Why the two keyboards? and the two mice? And what’s with that little screen on the tripod?

The screen, keyboard, and mouse on the right belong to a Raspberry Pi, a little $35 computer that runs Raspbian [a variant of the Linux operating system – which is free], has a full range of software packages [which are free], can be programmed in the Python language [downloaded free], and has a hardware interface to the outside world for hackers to prototype all kinds of stuff [like robots]. There’s a Raspberry Pi on the space station overhead. The Android OS that runs your phone is a variant of Linux, and the Apache software behind most of the world’s web servers usually runs under Linux.

In the 1980s, when the personal computer burst onto the scene, you could buy programs for your computer, but they were compiled – meaning that you couldn’t see the code and you couldn’t change anything about them. The software producers essentially had a monopoly. The Open Source Movement arose on multiple fronts throughout the next few decades and is too complex to detail here, but the core idea is simple. If you buy a piece of Open Source software, you get the compiled program AND the source code. You can do with it what you please. There are many variants, but that’s the nuts and bolts of it. Linus Torvalds, a Finnish student, wrote a UNIX-like operating system [Linux] and released it Open Source [which put this movement on the map]. Netscape did the same thing. The idea is huge – that it’s fine to be able to sell your work [programs], but it’s not fine to keep the computer code under lock and key.

Before I retired, computers and programming were my hobbies, and the source of a lot of fun. I didn’t need either of them for my work [psychoanalytic psychotherapy] – they were for play. I gradually moved everything to the Linux system and Open Source. But when I retired, my first project involved georegistering old maps and projecting them onto modern topographic maps, and the only software available ran under Windows. And then with this blog, I couldn’t find an Open Source graphics program that did what I wanted. So I’ve run Windows machines now for over a decade. But I just got this little Raspberry Pi running, and I can already see that I’m getting my hobby back. If it’s not intuitive what this has to do with Randomized Clinical Trials or the Academic Medical Literature, I’ll spell it out here in a bit. But for right now – here’s Linus:

Mickey @ 6:34 AM

not research…

Posted on Saturday 14 January 2017

I spent a day with the article in the last post [A manifesto for reproducible science]. It lived up to my initial impression and I learned a lot from reading it. Great stuff! But my focus here is on a particular corner of this universe – the industry-funded Clinical Trial reports of drugs that have filled our medical journals for decades. And I’m not sure that this manifesto is going to add much. Here’s an example of why I say that:

Looking at one of the clinical trial articles of SSRIs in adolescents, there was something peculiar [Wagner KD, Ambrosini P, Rynn M, et al. Efficacy of sertraline in the treatment of children and adolescents with major depressive disorder: two randomized controlled trials. JAMA. 2003;290:1033-1041.]. What does "two randomized controlled trials" mean? Well, it seems that there were two identical studies that were pooled for this analysis. Why? They didn’t say… The study was published in August 2003, and there were several letters along the way asking about this pooling of two studies. Then in April 2004, there was this letter:

    To the Editor: Dr Wagner and colleagues reported that sertraline was more effective than placebo for treating children with major depressive disorder and that it had few adverse effects. As one of the study group investigators in this trial, I am concerned about the way the authors pooled the data from 2 trials, a concern that was raised by previous letters critiquing this study. The pooled data from these 2 trials found a statistically marginal effect of medication that seems unlikely to be clinically meaningful in terms of risk and benefit balance.

    New information about these trials has since become available. The recent review of pediatric antidepressant trials by a British regulatory agency includes the separate analysis of these 2 trials. This analysis found that the 2 individual trials, each of a good size [almost 190 patients], did not demonstrate the effectiveness of sertraline in treating major depressive disorder in children and adolescents.

    E. Jane Garland, MD, FRCPC
    Department of Psychiatry
    University of British Columbia

So the reason they pooled the data from the two studies appears to be that neither was significant on its own, but pooling them overpowered the trial and produced a statistically significant outcome [see power calculation below]. Looking at the graph, you can see how slim the pickings were – significant only in weeks 3, 4, and 10. And that bit of deceit is not my total point here. Add in how Dr. Wagner replied to Dr. Garland’s letter:

    In Reply: In response to Dr Garland, our combined analysis was defined a priori, well before the last participant was entered into the study and before the study was unblinded. The decision to present the combined analysis as a primary analysis and study report was made based on considerations involving use of the Children’s Depression Rating Scale [CDRS] in a multicenter study. Prior to initiation of the 2 pediatric studies, the only experience with this scale in a study of selective serotonin reuptake inhibitors was in a single-center trial. It was unclear how the results using this scale in a smaller study could inform the power evaluation of the sample size for the 2 multicenter trials. The combined analysis reported in our article, therefore, represents a prospectively defined analysis of the overall study population…

This definition ["well before the last participant was entered into the study and before the study was unblinded"] is not what a priori means. A priori means "before the study is ever even started in the first place." And that’s not what prospective means either – it, too, means specified before the study ever starts. She is rationalizing the change by redefining what a priori means.

The problem here wasn’t that Pfizer, maker of Zoloft, didn’t have people around who knew the ways of science. If anything, it was the opposite problem. They had or hired people who knew those science ways well enough to manipulate them to the company’s advantage.

  • Why did they have two identical studies? Best guess is that they were going for FDA Approval, and in a hurry. You need two positive studies for FDA Approval.
  • Why would they decide to pool them somewhere along the way? Best guess is that things weren’t going well and pooling them increases the chance of achieving significance with a smaller difference between drug and placebo.
  • How would they know that things weren’t going well if the study was blinded? You figure it out. It isn’t that hard.
  • Why would they say that a priori means "well before the last participant was entered into the study and before the study was unblinded" when that’s not what it means? That isn’t that hard to figure out either.
  • So why not just say that they cheated? Because I can’t prove it [plausible deniability].
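The arithmetic behind that pooling maneuver is easy to sketch. Here’s a back-of-the-envelope power calculation using the standard normal approximation – with a hypothetical effect size and rounded sample numbers for illustration, not the actual Zoloft trial data – showing how two trials that each fall short of significance can clear the bar once combined:

```python
# Why pooling two marginal trials can manufacture "statistical significance":
# a normal-approximation power calculation for a two-sample comparison.
# The effect size (d) and per-arm counts are hypothetical illustrations.
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_two_sample(d, n_per_arm):
    """Approximate power of a two-sided two-sample z-test (alpha = 0.05)
    for a standardized effect size d with n_per_arm subjects per arm."""
    z_crit = 1.959964  # critical z for alpha = 0.05, two-sided
    return normal_cdf(d * sqrt(n_per_arm / 2) - z_crit)

d = 0.25                              # a small drug-placebo difference
single = power_two_sample(d, 95)      # one trial, ~95 per arm
pooled = power_two_sample(d, 190)     # two trials pooled, ~190 per arm

print(f"single trial power: {single:.2f}")   # roughly 0.41
print(f"pooled power:       {pooled:.2f}")   # roughly 0.68
```

With real trial data the analysis would use observed variances and exact tests, but the direction is the same: doubling the sample shrinks the drug-placebo difference needed to reach p < 0.05, which is exactly what a mid-stream decision to pool buys you.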

I’m not sure that the industry-funded Clinical Trials of drugs should even be considered research. They’re better seen as product testing. And the whole approach should reflect that designation. Everyone involved is biased – by definition. The point of the enterprise isn’t to answer a question, it’s to say this in whatever way you can get there:
Conclusion The results of this pooled analysis demonstrate that sertraline is an effective and well-tolerated short-term treatment for children and adolescents with MDD.
And the only way to ensure that the outcome parameters aren’t changed is to require preregistration with a date-stamped, certified Protocol and Statistical Analysis Plan on file before the study begins – a priori. What if they change their minds? Start a new study. Product testing may be science, but it’s not research. And we may have more oversight on our light-bulbs and extension cords than we have on our medications.

And after all of that, the Zoloft study is still in Dr. Wagner’s repertoire at the APA Meeting some 13 years later…
by Aaron Levin
June 16, 2016

… As for treatment, only two drugs are approved for use in youth by the Food and Drug Administration [FDA]: fluoxetine for ages 8 to 17 and escitalopram for ages 12 to 17, said Wagner. “The youngest age in the clinical trials determines the lower end of the approved age range. So what do you do if an 11-year-old doesn’t respond to fluoxetine?” One looks at other trials, she said, even if the FDA has not approved the drugs for pediatric use. For instance, one clinical trial found positive results for citalopram in ages 7 to 17, while two pooled trials of sertraline did so for ages 6 to 17.

Another issue with pediatric clinical trials is that 61 percent of youth respond to the drugs, but 50 percent respond to placebo, compared with 30 percent among adults, making it hard to separate effects. When parents express anxiety about using SSRIs and ask for psychotherapy, Wagner explains that cognitive-behavioral therapy [CBT] takes time to work and that a faster response can be obtained by combining an antidepressant with CBT. CBT can teach social skills and problem-solving techniques as well. Wagner counsels patience once an SSRI is prescribed.

A 36-week trial of a drug is too brief, she said. “The clock starts when the child is well, usually around six months. Go for one year and then taper off to observe the effect.” Wagner suggested using an algorithm to plot treatment, beginning with an SSRI, then trying an alternative SSRI if that doesn’t work, then switching to a different class of antidepressants, and finally trying newer drugs. “We need to become much more systematic in treating depression,” she concluded.
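The response rates quoted above [61% on drug versus 50% on placebo in youth, against a 30% placebo rate in adults] translate directly into sample-size demands. A standard two-proportion formula [two-sided alpha 0.05, power 0.80] makes the problem concrete – the second comparison is only an illustration, since the article doesn’t give an adult drug-response figure:

```python
# Subjects per arm needed to separate two response rates, using the usual
# normal-approximation sample-size formula for comparing two proportions.
from math import ceil

def n_per_arm(p1, p2, z_alpha=1.959964, z_beta=0.841621):
    """Approximate subjects per arm to detect p1 vs p2
    (two-sided alpha = 0.05, power = 0.80)."""
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * var / (p1 - p2) ** 2)

# 61% drug response vs 50% placebo response (the pediatric figures above)
print(n_per_arm(0.61, 0.50))  # ~317 per arm
# the same 61% response against a 30% placebo rate (a hypothetical
# adult-like comparison, for illustration only)
print(n_per_arm(0.61, 0.30))  # ~37 per arm
```

An 11-point drug-placebo gap needs nearly ten times the subjects that a 31-point gap does – which is why a high placebo response makes pediatric antidepressant trials so hard to win honestly.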
Mickey @ 12:00 PM

a must·read!…

Posted on Friday 13 January 2017

Whatever you’re reading right now [including this blog], you might just put a bookmark in it and read this paper. Besides it being written by luminaries [see scathing indictments… and the hope diamond…], it’s an encyclopedic proposal that deserves everyone’s attention:
by Marcus R. Munafò, Brian A. Nosek, Dorothy V. M. Bishop, Katherine S. Button, Christopher D. Chambers, Nathalie Percie du Sert, Uri Simonsohn, Eric-Jan Wagenmakers, Jennifer J. Ware, and John P. A. Ioannidis
Nature: Human Behavior. Published 10 January 2017. Open.

Improving the reliability and efficiency of scientific research will increase the credibility of the published scientific literature and accelerate discovery. Here we argue for the adoption of measures to optimize key elements of the scientific process: methods, reporting and dissemination, reproducibility, evaluation and incentives. There is some evidence from both simulations and empirical studies supporting the likely effectiveness of these measures, but their broad adoption by researchers, institutions, funders and journals will require iterative evaluation and improvement. We discuss the goals of these measures, and how they can be implemented, in the hope that this will facilitate action toward improving the transparency, reproducibility and efficiency of scientific research.

[abbreviated and reformatted from the paper]

From my perspective, there’s nothing more important in Medicine right now than reclaiming the academic medical literature from its captivity by the paramedical industries and others who are called stakeholders. But the problem in academic science is bigger than just Medicine. In the other fields, it goes under the name, The Reproducibility Crisis.

This paper is too important to whip off a blog post. So I’m going to let it sit for a bit before commenting, and picking out the specific recommendations that have to do with my corner of the world – Randomized Clinical Trials of medications – specifically the medications used in psychiatry.
Mickey @ 1:45 PM

no mo’ mojo…

Posted on Thursday 12 January 2017

By Kate Kelland
January 11, 2017

LONDON — It is likely to be at least 10 years before any new generation of antidepressants comes to market, despite evidence that depression and anxiety rates are increasing across the world, specialists said on Wednesday. The depression drug pipeline has run dry partly due to a "failure of science" they said, but also due to big pharma pulling investment out of research and development in the neuroscience field because the profit potential is uncertain. "I’d be very surprised if we were to see any new drugs for depression in the next decade. The pharmaceutical industry is simply not investing in the research because it can’t make money from these drugs," Guy Goodwin, a professor of psychiatry at the University of Oxford, told reporters at a London briefing.

Andrea Cipriani, a consultant psychiatrist at Oxford, said such risk aversion was understandable given uncertain returns and the approximately billion dollar cost of developing and bringing a new drug to market. "It’s a lot of money to spend, and there’s a high rate of failure," Cipriani said. Treatment for depression usually involves either medication, some form of psychotherapy, or a combination of both. But up to half of all people treated fail to get better with first-line antidepressants, and around a third of patients are resistant to relevant medications.
It’s now been three decades since Prozac® was added to our pharmacopeia. Psychiatry as a specialty had rededicated itself to its medical roots, and this new drug class was a welcome addition. While no more effective than the older tricyclic antidepressants, it was better tolerated [even though it had some side effects of its own]. After a few years, a progression of competitors came to market – what came to be called the pipeline – and psychiatry settled into a rhythm of discussing their various differences, eyes always on the future: what’s coming next?

There were many attempts to enhance efficacy – sequencing, combining, augmenting with a variety of other drugs. Non-responders were said to have Treatment Resistant Depression, discussed almost as if it represented a unique entity. Multiple markers were queried looking for something that would predict the right drug – called Personalized Medicine. Practitioners and patients alike kept their eyes on the future – what’s coming down the pipeline. And there was a vague sense that the newer drugs were improvements over the earlier offerings, though that’s hard to justify in retrospect. Somewhere in there, the notion developed that the incidence of depression was rising rapidly, although that was hard to put together with the predominant view that depression was a biological-?-genetic entity. And the scientific basis for that escalating prevalence is hard to pin down.

And then in the summer of 2012, the Pharmaceutical companies threw in the towel and began to shut down their R&D programs for CNS drugs. They’d run out of candidates ["me too drugs"]. A great wail was heard throughout the land. There were conferences and task forces – much rhetoric and blaming. The NIMH seemed to have a new idea about how to jump-start drug development  every month. Multiple schemes were proposed to lure PHARMA back into the game. And all eyes turned to the search for something "novel" to keep things alive [eg Ketamine and its derivatives].


The experts said that since the current generation of SSRI [selective serotonin reuptake inhibitor] antidepressants – including Eli Lilly’s blockbuster Prozac [fluoxetine] – are widely available as cheap generics, there is reluctance among health services to fund expensive new drugs that may not be much better. That is partly because existing medications, while by no means perfect, are quite effective in more than half of patients, the specialists said, and partly because in this condition in particular, placebo can have a massive impact. That makes it difficult, they explained, to show that a new drug is working above and beyond a positive placebo response and an already effective generation of available drugs.
Looking at the pipeline graphic and the decades of industry introducing new versions of SSRIs, this explanation doesn’t make much sense. Maybe most prescribing physicians [and their patients] have caught on to the fact that there’s no more mojo to be harvested from the SSRI antidepressants. Also, maybe what the drug companies say might well be true – that they’ve run out of candidate molecules [SSRIs] to even try. In other words, the SSRI paradigm has been exhausted, and there’s not another class of drugs to put in its place.
Depression is already one of the most common forms of mental illness, affecting more than 350 million people worldwide and ranking as the leading cause of disability globally, according to the World Health Organization. And rates are rising. Glyn Lewis, a professor of psychiatric epidemiology at University College London, cited data for England showing a doubling in prescriptions for antidepressants in a decade, to 61 million in 2015 from 31 million in 2005.
Here, we are asked to believe that the doubling of antidepressant prescriptions over that ten-year span justifies the heading above [DEPRESSION RATES RISING]. A much more reasonable heading would be SSRI PRESCRIPTION RATES RISING. Why? Marketing. Primary Care Physicians prescribing SSRIs. Waiting-room screening. Patients taking them longer, thinking they’re staving off something or correcting something [or inertia]. Knock yourself out here. I’ve just scratched the surface.
In the United States too, more people than ever are taking antidepressants. A study in the Journal of the American Medical Association [JAMA] in 2015 found that prevalence almost doubled from 1999 to 2012, rising to 13 from 6.9 percent. Yet several major drug companies including GlaxoSmithKline and AstraZeneca have scaled right back on neuroscience R&D in recent years, citing unfavorable risk-reward prospects.
Rejecting the [far-fetched] idea that the doubling of prescriptions equals a doubling of the disease prevalence, the drug companies have accepted another [more palatable] explanation – that the market is saturated. The likely reason they think that is that the market is saturated.
Goodwin said the absence of a drug development pipeline was also due to lagging scientific research into what is really happening in the brains of those who do and do not respond to current antidepressants. "It’s partly a failure of science, to be frank," said Goodwin. "Scientists have to … get more of an understanding about how these things actually work before we can then propose ways to improve them."

With all due respect to Dr. Goodwin, his pronouncement might’ve worked in the 90s [the Decade of the Brain] or the 2000s [the Research Agenda for the DSM-V]. But after thirty years, this argument itself has run out of mojo too. The scientists have scienced themselves silly trying to do what he suggests without much success. They’ve certainly gone through a small fortune in the process. The marketeers have had more success, raking in a beyond-modest fortune in the process. But this train is pulling into the station, its journey’s almost done. 

A supernova is a stellar explosion that briefly outshines an entire galaxy, radiating as much energy as the Sun or any ordinary star might emit over its lifespan. This astronomical event occurs during the last stages of a massive star’s life, whose dramatic and catastrophic destruction is marked by one final titanic explosion concentrated in a few seconds, creating a "new" bright star that gradually fades from sight over several weeks or months.


Will the SSRIs have the same kind of fate as SN2014J? evaporating into the ether? I doubt it. At least not any time soon. They’re still useful in clinical practice when used carefully and in moderation. I expect the short acting ones will gradually disappear because of their heightened withdrawal profiles. And hopefully the others will be used in a more time limited way. And then, maybe we can get around to reworking our diagnostic system to bring it closer to clinical reality.

We’ll see… and speaking of shiny objects:

ASASSN-15lh [supernova designation SN 2015L] is a bright astronomical object. Initially thought to be a superluminous supernova, it was detected by the All Sky Automated Survey for SuperNovae [ASAS-SN] in 2015 in the southern constellation Indus. The discovery, confirmed by ASAS-SN group with several other telescopes, was formally described and published in a Science article led by Subo Dong at the Kavli Institute of Astronomy and Astrophysics [Peking University, China] on January 15, 2016. In December 2016, another group of scientists raised a hypothesis that ASASSN-15lh might not be a supernova. Based on observations from several stations on the ground and in space [including Hubble], these scientists proposed that this bright object might have been "caused by a rapidly spinning supermassive black hole as it destroyed a low-mass star". ASASSN-15lh, if a supernova, would be the most luminous ever detected; at its brightest, it was approximately 50 times more luminous than the whole Milky Way galaxy with an energy flux 570 billion times greater than the Sun….

Mickey @ 2:20 PM