Posted on Tuesday 15 September 2015

Dear Colleagues,

After serving 13 years as Director for the National Institute of Mental Health [NIMH], Thomas R. Insel, M.D., will step down effective November 1, 2015…

While we conduct a national search for a new NIMH Director, Bruce Cuthbert, Ph.D., will serve as Acting Director…

Francis S. Collins, M.D., Ph.D.
Director, National Institutes of Health
Can you feel relieved and worried at the same time?

New York Times
SEPT. 15, 2015

… Dr. Insel, a brain scientist who made his name studying the biology of attraction and pair bonding, was the longest-serving director since Dr. Robert H. Felix, the agency’s founder, stepped down in 1964. Appointed in 2002, his tenure spanned four presidential terms, during which he honed an easygoing political persona and an independent vision of the agency’s direction. He steered funding toward the most severe mental disorders, like schizophrenia, and into basic biological studies at the expense of psychosocial research, like new talk therapies.

He was outspoken in defense of this path, at one point publicly criticizing establishment psychiatry for its system of diagnosis, which relies on observing behaviors instead of any biological markers. His critics – and there were plenty – often noted that biological psychiatry had contributed nothing useful yet to diagnosis or treatment, and that Dr. Insel’s commitment to basic science was a costly bet, with uncertain payoff…

… In his statement, Dr. Insel said the final details of his move to Google were not firm. The team is developing advanced technologies for better detection and prevention of illness, he wrote, and “I am joining the team to explore how this mission can be applied to mental illness”…
One can look at it like Benedict Carey does in this piece. He’s a reporter with a keen eye for such things. And what he says is certainly accurate, "He steered funding toward the most severe mental disorders, like schizophrenia, and into basic biological studies at the expense of psychosocial research, like new talk therapies" – and that was certainly a big problem. But that’s not what bothered me so much about Dr. Insel’s reign at NIMH. It’s the word "steered." The way I’ve thought of it in my mind, he misunderstood the meaning of his title – Director. It’s supposed to mean that he directs an Institute and its infrastructure in a way that locates the best and brightest scientists we have and provides the support they need to do those things that the best and brightest do – bring the scientific apparatus to bear on the problems they have insights into. The scientists generate the projects; the NIMH evaluates the relevance and feasibility of those ideas and supports the best and brightest of the lot. Dr. Insel interpreted the word Director as meaning he directed what those projects were going to be, and the scientists followed his directions [if they wanted to be funded].

Besides being too controlling, Insel is a "breakthrough freak." He seems to go for the "shiny objects." So "personalized medicine" comes along and we hear about that. Then we hear about "neural circuits." One after another, we’ve moved from potential breakthrough to potential breakthrough as if there’s some overriding plan, but we never quite found out what it was. All we really knew was that whatever it was, it came under the heading "clinical neuroscience." He went to medical school and did a psychiatry residency, but he never practiced medicine, and that has been apparent throughout his tenure at the NIMH – the perspective of a recent graduate, unseasoned by the experience of real-life medical practice. In the words of my current neighbors, "book larnin’". So I’m relieved at his leaving and immediately worried about what’s coming next.

But that’s not the only worry. He’s going to Google, a big resource that’s capable of bringing off just about anything they set their mind to do. And I’m worried that Insel will point them in the direction of screening for mental illness. In my mind, that means putting more people on even more psychiatric drugs they don’t need. He’s a nut case for "the global burden of depression" and other such buzz phrases. Those ideas plus Google are a recipe for some real problems.

However this transition plays out, his replacement and his future placement are definitely things to watch very carefully…
Mickey @ 7:00 PM

time for some pushback?…

Posted on Tuesday 15 September 2015

British Medical Journal
by Khaled El Emam, Tom Jefferson, and Peter Doshi
15 Sep, 2015

The European Medicines Agency (EMA) has issued its long anticipated new policy (policy 0070) on prospective access to clinical trial data, and is now in consultations to figure out the details of its implementation. We were invited to join these ongoing consultations, and have previously reported on the debate here and here.

We have been particularly concerned about the anonymization and redactions of the content of clinical study reports (CSRs), and especially concerned about the approach proposed by some in industry.

But now we are getting really worried. Current drafts of the EMA’s evolving guidance documents for the anonymization of CSRs leave too much leeway for creative interpretation of acceptable anonymization practices, and an EMA follow-up meeting on 7 September made clear that some industry associations are pressing to apply a standard known as the TransCelerate approach. While almost all approaches sound reasonable (after all, they are intended to protect the anonymity of trial participants—a good thing), the TransCelerate redaction approach would cripple the usefulness of CSRs.

Take a look for yourselves. Figure 1 (below) is a page from a Tamiflu CSR (Research Report No. 1005291) that Roche released to us after a four year long battle for access. Figure 2 shows what would be likely to happen if Roche applied the TransCelerate redaction standard to that same document. Applying the TransCelerate approach takes the Tamiflu document and turns it into a page of black boxes. For instance, all dates relating to individual trial participants have to be redacted, as well as other patient information such as sex, age, weight, height, race, ethnicity, and socioeconomic information. All patient narratives would also have to be removed.

[see figures linked above]

Figure 1: Line listing from Tamiflu trial WV16277 (Research Report No. 1005291) redacted by Roche for public release. Available from

Figure 2: Line listing from Tamiflu trial WV16277 (Research Report No. 1005291) redacted according to the TransCelerate guidance. Available from

Why “likely to happen” and not “happen”? Because we had to create figure 2 ourselves. Ideally, those advocating a redaction approach would send around shared examples for the rest of us to see and discuss. But there were no clear examples at the EMA meetings.

Using redactions to assure the anonymization of data in CSRs is emerging as a make or break issue for the success of the EMA initiative. The intensive redaction of the TransCelerate approach risks nullifying most of the progress towards transparency made so far in Europe…
I kept a timeline of the EMA Data Transparency saga through this time last year [then I got busy]. It’s here for review. As you can see, the news here is bad. Ever since the AbbVie/InterMune suits, the cause of true data transparency has been slowly eroding away at the EMA. The initial offering was too good to be true, but it passed through the mid-point and has kept going south [see also important work… and in the details…].

It seems to me that the history of Clinical Trials of drugs is not unlike the stories told by many of our patients with personality disorders – the solution to the last problem is the beginning of the next problem. With the trials, the last reform movement creates the loophole that allows things to essentially remain as dysfunctional as they’ve always been. Right now, we’re committed to Data Transparency, and we’re now watching it be picked apart in front of our eyes. The watchdogs on the byline here are front and center on the case along with others, but they may be like the little Dutch Boy, running out of fingers to stick in the leaks.

The one bright side of this story is that the EMA has responded to a public outcry in the past [see the timeline for examples], and we may be approaching time for another all out effort…
Mickey @ 6:20 PM

study 329 v: into the courtroom…

Posted on Saturday 12 September 2015

When you read an article in a medical journal, all you have to go on is what you’re told in the article itself. If you watched Dr. Healy’s commentary [background music…], you know that this 11 page article represents 77,000 pages of data locked away in some data archive out of sight, a compression ratio of 7,000:1! And if you question an article, there’s no real way to answer your questions without that data. In this case, because of a legal challenge in 2004, the Clinical Study Report has been available on the Internet for a long time. It’s the 528 page document that was submitted to regulatory agencies. Over the years, many have read it over and over and found further things to fuel our contention that the original article reached an indefensible conclusion. But all that really did was further refine suspicions. It didn’t prove a thing:
In 2012, GSK finally posted the actual data [Appendices B, C, and D] as they had agreed to do in 2004, and so the numbers were there to see. So many numbers! And the only way to analyze them would be to hand copy them into some electronic format that could be input into a statistical program for reanalysis. I had a shot at that [cataloged in the lesson of Study 329: an unfinished symphony…], but there were so many numbers! Too many. I did enough to gain the conviction that this study was as far off the mark as it appeared. But it was only when we got the raw data in an electronic format that we could really do a complete analysis like the one we are publishing. I hasten to add that the form that data came in was a real challenge – a restrictive remote desktop that made the data manipulation very difficult.
The safety analysis required more data access. The transcribed numbers in the IPD tables for the rating scales were fine for the efficacy part, but the IPD version of the Adverse Events weren’t enough. We needed to look at the actual forms filled out during the study by the blinded clinicians and raters to approach the level of nuance needed to reach any conclusions about harms.
Our article isn’t really about Paxil Study 329. Reporters like Shelley Jofre of BBC’s Panorama and Alison Bass, who wrote Side Effects, along with legal actions from patients and governments, brought it to the fore. The courts have levied punishments and record-breaking fines already. And our group has been able to add a counter to the original article in the JAACAP, which still sits in our libraries un-retracted.

The broader point of our article is that physicians and the patients we advise have an absolute right to look at the raw data behind the abbreviated proxies that appear in our literature as journal articles. When we have that kind of access, the playing field is level and the profession has the necessary means to join in the kind of checks and balances system that keeps people honest. Our paper is an example of how we think that information should be presented. Further, the medical profession has an absolute obligation to do whatever it needs to do to ensure that the information we pass on to our patients as scientific transcends other influences – including commercial profit or the academic advancement of the authors.

It’s a paradox that many of the authors who have lent their reputations and the reputations of their universities to these jury-rigged Clinical Trials preach a gospel of evidence-based medicine. And these questionable Clinical Trial articles are certainly filled with icons representing the tools of science – graphs, tables, p-values, standard deviations, etc. But they hide the only basic scientific tool we will ever have – the carefully gathered primary observations we call data. The real evidence never makes it into the courtroom…

Mickey @ 8:00 AM

study 329 iv – some challenges…

Posted on Friday 11 September 2015

The RIAT Initiative was a bright idea. Rather than simply decrying unpublished or questionable Clinical Trials, it offers the original authors/sponsors the opportunity to set things right. If they decline, the RIAT Team will attempt to do it for them with a republication. Success depends on having access to the raw trial data and on having it accepted by a peer reviewed journal [see “a bold remedy”…]. Both the BMJ and PLoS had responded to the RIAT article by saying they would consider RIAT articles. Paxil Study 329 had certainly been proven "questionable" in the literature and in the courts. And most of the data was already in the public domain thanks to previous legal actions. So a group of us who had independently studied this trial assembled to begin breathing life into the RIAT concept. Dr. Jon Jureidini and his Healthy Skepticism group in Australia had mounted the original [and many subsequent] challenges to this article. He was joined there by colleagues Melissa Raven and Catalin Tofanaru. Dr. David Healy, well-known author and SSRI expert, was joined in Wales by Joanna Le Noury. Elia Abi-Jaoude in Toronto and yours truly in the hills of Georgia, USA rounded out the group. I was certainly honored to be included. While all of us have some institutional affiliation, this project was undertaken as an unsupported and unfunded enterprise without connection to any institution. Though my own psychiatric career was primarily as a psychotherapist, in a former incarnation I was a hard science type with both bench and statistical training. So I gravitated to the efficacy reanalysis, and that’s the part I’ll mention here and in some remarks after the paper is published.


The Full Study Report Acute was a 528 page document that addressed the 8 week acute phase of Paxil Study 329. The actual raw data was in additional Appendices. On the first pass through this document, we considered a number of approaches to presenting the data. In recent years, there has been a move away from the traditional statistical analysis towards also considering the Effect Sizes. Significance tests only tell us that the groups are likely different, but nothing about the magnitude of that difference. Effect Sizes approximate the strength of that difference and have found wide acceptance, particularly in meta-analyses like those produced by the Cochrane Collaboration. But in the end, we decided that our article was about more than simply Study 329; we wanted it to represent how such a study should be properly presented. And since every Clinical Trial starts with an a priori protocol that outlines how the analysis should proceed, we decided, wherever possible, to follow the original protocol’s directives.
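The distinction drawn above can be made concrete with a toy calculation. The sketch below computes Cohen’s d, one common Effect Size [the standardized difference between group means]. The numbers are invented for illustration only – they are not Study 329 values.

```python
# Illustrative sketch: Cohen's d as a measure of effect size.
# A p-value says the groups likely differ; d says by how much,
# in units of the pooled standard deviation.
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference using the pooled sample SD."""
    na, nb = len(group_a), len(group_b)
    ma, mb = statistics.mean(group_a), statistics.mean(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (ma - mb) / pooled_sd

# Hypothetical change scores, made up for this example
drug    = [12, 14, 11, 15, 13, 12, 14, 13]
placebo = [11, 13, 10, 14, 12, 11, 13, 12]
print(round(cohens_d(drug, placebo), 2))   # → 0.76
```

By rough convention, d near 0.2 is a small effect, 0.5 medium, 0.8 large – a scale a reader can interpret clinically in a way a bare p-value never allows.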

The protocol was comprehensive, but we found two things that were awry. First, the comparator group was to take Imipramine, and the dose was too high for adolescents – 1.5 times the dose used in the Paxil trials for adults. That was apparent in the high incidence of side effects in that group in the study. The second thing was a remarkable absence. There was no provision for correcting for multiple comparisons to avoid false positives. The more variables you look at, the more likely you are to find a significant correlation by chance alone. There are many different correction schemes, from the stiff Bonferroni correction to a number of more forgiving schemes. This study had two primary and six secondary efficacy variables. The protocol should have specified some method for correction, but it didn’t even mention the topic. Otherwise, the protocol passed muster. It was written well before the study began and it was clear about the statistical methods to be used on completion to pass judgement on efficacy. One other question came from the protocol: how were we going to deal with missing values? The protocol defined all of the outcome variables in terms of LOCF [last observation carried forward]. In the intervening 14 years, LOCF has largely been replaced by other methods: MMRM [mixed model for repeated measures] and Multiple Imputation. We used the protocol-directed LOCF method, but at the request of reviewers and editors, we also show the Multiple Imputation analysis for comparison.
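For readers who haven’t met LOCF, the rule is simple enough to sketch in a few lines. This is an illustrative toy, not the analysis code from our paper, and the scores are invented:

```python
# LOCF [last observation carried forward]: when a subject drops out,
# their last recorded score stands in for all later missing visits.
def locf(scores):
    """Fill None entries with the most recent observed value."""
    filled, last = [], None
    for s in scores:
        if s is not None:
            last = s
        filled.append(last)
    return filled

# Hypothetical depression-scale scores over five visits;
# the subject drops out after visit 3
visits = [24, 20, 18, None, None]
print(locf(visits))   # → [24, 20, 18, 18, 18]
```

The weakness is visible right in the toy: a subject improving at dropout is frozen at their last score, which can flatter or penalize either arm depending on who drops out and when – one reason MMRM and Multiple Imputation have largely displaced it.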


I guess the only other thing to say before the paper is published is that this was quite an undertaking. There were no precedents for any aspect of this effort. I’ve mentioned just a few of the decisions we had to make along the path, but every one of them and many others are the result of a seemingly endless stream of email and Dropbox communications that regularly sped around the globe. There’s no part of this paper that doesn’t have the collective input of most of the authors. There were no technicians, statisticians, or support staff involved, so we drew our own graphs, built our own tables, ran our own numbers, and checked and revised each other’s work. As with any new thing, looking back over it, it’s easy to see how it could have been a much more streamlined process. But that’s only apparent looking through a retrospectoscope. Somewhere down the line, I hope we’ll have the energy to pass on some of the many things we learned along the way to help future RIATers have an easier passage.

So in the near future, there are going to be two studies in the medical literature that reach opposite conclusions but are derived from the self-same Clinical Trial and its data. I don’t know if there’s another instance where that’s the case. After it’s published, I want to add a short series of blog posts to describe how that came about. The goals of the paper are to set the record straight and to model how a report of a Clinical Trial should be presented. But in later blog posts, I want to add a discussion of how the original analysis was twisted to make this negative study into something that was published as positive. And I hope that future RIAT restorations will do the same. The more we learn about exactly how scientific articles can be jury-rigged to reach questionable conclusions, the closer we’ll be to expunging the widespread bias that has invaded our medical literature for much too long. In the final analysis, the ultimate goal is for physicians and patients alike to have access to a scientific medical literature that can be trusted to be accurate. After all, it’s ours…
Mickey @ 8:00 AM

study 329 iii – the path to the data…

Posted on Thursday 10 September 2015

by Keller MB, Ryan ND, Strober M, Klein RG, Kutcher SP, Birmaher B, Hagino OR, Koplewicz H, Carlson GA, Clarke GN, Emslie GJ, Feinberg D, Geller B, Kusumakar V, Papatheodorou G, Sack WH, Sweeney M, Wagner KD, Weller EB, Winters NC, Oakes R, and McCafferty JP.
Journal of the American Academy of Child and Adolescent Psychiatry, 2001, 40[7]:762–772.

Objective: To compare paroxetine with placebo and imipramine with placebo for the treatment of adolescent depression.
Conclusions: Paroxetine is generally well tolerated and effective for major depression in adolescents.

Not long after Jon Jureidini and Anne Tonkin of Healthy Skepticism questioned these results in a 2003 letter to the editor, Eliot Spitzer, then Attorney General of New York State, filed a complaint in 2004 alleging fraud. GSK settled for $2.5M with an agreement to post the data from their pediatric studies of Paxil® on a public Internet Clinical Trials Registry, but admitted no wrongdoing. This would be a good place to review the various packages referred to under the heading data:
  • PROTOCOL and SAP [Statistical Analysis Plan]: We talked about these documents in the last post – detailed ‘maps’ of how the study is to be conducted and analyzed.
  • CSR: The CLINICAL STUDY REPORT is an elaborate narrative write-up of the study – in this case, filled with tables and graphs. It tells the story of the clinical trial in detail. And since this trial had two phases, there are two: Full Study Report Acute [528 pages] and Full Study Report Continuation [264 pages]. In this case, the raw data [Appendices] was not released initially.
  • ARTICLE: This is the published article in a journal, abstracted above.
The CSRs are what GSK had posted on their Internet Clinical Trial Registry in response to settling the suit in New York in 2004, and that’s how things remained until 2012. In August 2012, I was visiting the GSK Clinical Trial Registry for some now-long-forgotten reason, and was amazed with what I found there [see a movement…]. Here’s the visual:
On the left is what it had always looked like before, and on the right was how it looked on that visit. And when I opened the new files, they were filled with tables and tables of raw data – the scores for every subject on every rating scale, tables filled with the logged side effects. It had been added in the previous few weeks. Was it Christmas morning?  I started asking around, and David Healy responded. It seems that Peter Doshi, the researcher working on getting the raw trial data from Roche on Tamiflu® was extending his reach. Noting that GSK had never really posted the raw data from Study 329, he contacted the current New York Attorney General and GSK finally posted all those Appendices that contained the results I’d just stumbled across. So now we can add yet another package under the heading data:
  • IPD: The INDIVIDUAL PARTICIPANT DATA is, in this case, 150 Megabytes of raw scores and other tabulations, increasing the mass of available information 100-fold! And bringing most of the trial out of the shadows.
I jumped on this new information and did my rough analysis [see the lesson of Study 329: an unfinished symphony…], naively thinking I could just send it right over to the journal [JAACAP, the Journal of the American Academy of Child and Adolescent Psychiatry] and they’d finally retract the article. No such luck [see simply ‘fuel the fire’…]. But I was in good company. Healthy Skepticism had appealed to everyone this side of the Vatican with the same frustrating responses.

When the RIAT Initiative [Restoring Invisible and Abandoned Trials] was launched in the summer of 2013, Study 329 was a prime candidate, as so much of the data was already available, and there was no question that it had been abandoned. Not long after our team had formed, GSK announced that it was establishing a data portal, available to qualified groups who wanted to access the data from a previous trial [after 2007] for some further research project – generally known as Data Sharing. Access was contingent on being accepted by an independent panel of judges. Study 329 was conducted from 1994 until 1998 and published in 2001. We did not want to do a new research project. Instead, we wanted to reanalyze the raw data and potentially republish the study with a new analysis. And, as the figure above shows, we already had access to most of the information anyway. What remained?

When one does a Clinical Trial, there’s some form to fill out for every single interaction [emphasis on every] with the subject’s ID number and the date [the treatment is obviously not there in a blinded study]. By the end of things, each subject has amassed literally volumes of forms [the actual number depends on how long they stay around, how many adverse events they report, etc]. They’re called Case Report Forms [CRF], and there are plenty of them [50,000+ in Study 329]. The IPD [Individual Participant Data] is created by transcribing the CRFs into a tabularized [and more manageable] format. Why did we want them? We were specifically interested in checking the transcription of the Adverse Events from these raw forms into the tables we already had. The CRFs are the data [or at least as close to the real data as one can get].

GSK had not offered access to the CRFs as part of their Data Sharing program; the study was well before their 2007 offer; we didn’t have a new research proposal [other than the original Study 329 protocol]. On the other hand, there was that 2004 settlement in New York in which they had agreed to make the data from their pediatric Paxil® trials available. While it’s a little bit like selling you a Bible that has only Genesis and Revelation included, for the moment I’m going to forego all the negotiations in-between [see Peter Doshi’s Putting GlaxoSmithKline to the test over paroxetine]. By the beginning of 2014, we had been given access to the electronic version of the IPD and most of the CRFs [anonymized] via the remote data portal we called "the periscope" [another story for another time]:
  • CRF: The CASE REPORT FORMS are all of the forms filled out in the study along the way. They’re the snapshots by the people in direct contact with the subjects – the closest proxy to "being there."
So, in the end, we had it all. Earlier, I said "But be careful what you ask for, because once you get it, it’s a long and winding road to know quite what to do with it." Actually, it was a "long and winding road" just to get it…
Mickey @ 8:00 AM

study 329 ii – the importance of protocol…

Posted on Wednesday 9 September 2015

I sure don’t want to become 1·terminally·boring·old·man. On the other hand, this is my only available format for communicating. I want to write about the process of evaluating Trials anyway, but I also have a practical reason. A lot of us have clamored for access to the raw data from Clinical Trials, realizing that a lot of the published journal articles are riddled with subtle distortions in both the efficacy and harms analyses, particularly in psychiatry. We intuitively know that if the raw data had been available to us all along, things would be a lot different, and a lot better. But be careful what you ask for, because once you get it, it’s a long and winding road to know quite what to do with it.

There are thousands of pages in various packages generated by every Clinical Trial. So processing it all is no small task – finding those trees that matter in the forest. One thing for sure – an absolutely essential element for understanding any Clinical Trial is the a priori protocol. If you’ve done any research at all, you know that once you’ve got some data in your hands, there are a bunch of different ways to analyze it. The saying, "If you torture the data long enough, it will tell you anything you want to hear" becomes very real in practice. Under any circumstances, there’s a strong temptation to try out various analytic techniques to see if the outcome doesn’t look more like you’d hoped. But in the case of a drug trial, there’s already a lot of time and significant treasure invested, meaning that the Clinical Trial results are the difference between throwing it all away and landing on a gold mine. The temptation to do some creative data analyzing is magnified exponentially in such a high stakes game. So it’s an absolute requirement that the outcome variables and the precise analytic methods are clearly stated before the study begins.
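The "torture the data" point is easy to demonstrate with a back-of-the-envelope simulation. Under the null hypothesis a p-value is uniform on [0, 1], so we can simulate p-values directly without inventing any trial data. The numbers below are pure illustration:

```python
# If every outcome is pure noise, testing many of them still turns up
# "significant" findings by chance. With 8 outcomes at alpha = 0.05
# [think: two primary plus six secondary efficacy variables], the chance
# of at least one spurious hit is 1 - 0.95**8, roughly one in three.
import random

def false_positive_rate(n_outcomes, trials=10_000, alpha=0.05):
    """Fraction of simulated null studies with >=1 'significant' outcome."""
    hits = 0
    for _ in range(trials):
        if any(random.random() < alpha for _ in range(n_outcomes)):
            hits += 1
    return hits / trials

random.seed(329)
for n in (1, 8, 20):
    # analytic expectation: 1 - (1 - alpha)**n
    print(f"{n:2d} outcomes: ~{false_positive_rate(n):.2f} chance of a chance 'positive'")
```

This is exactly why a protocol should pre-specify a multiplicity correction [Bonferroni or one of its gentler cousins] along with the outcome variables: without one, "significance" among eight outcomes is close to a coin flip weighted in the sponsor’s favor.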

In evaluating a journal article reporting a Clinical Trial, the a priori protocol is an invaluable tool, and the first window to pry open. In the case of Study 329, the Protocol and the SAP [Statistical Analysis Plan] were together as a single document:

With the published article in hand [left], the trial itself is only a shadow. You can’t really know if the article is presenting the trial as declared before it started, or if it has been manipulated in one way or another. With the a priori protocol [right], you can evaluate the study design itself [for bias, omissions, etc] as well as compare it to the article to look for changes. Once recruitment begins, there should be no substantive changes to the protocol. Even minor alterations should be added as official amendments to the protocol [and approved by the Institutional Review Board]. That point can’t be emphasized enough.

It may seem downright anal to insist on following the original protocol to the absolute letter. After all, people who do Clinical Trials call themselves researchers, and isn’t research supposed to be a creative endeavor? Certainly, the researcher can do any analysis he wants to do on the data. But an industry-funded Clinical Trial is, at its core, something else besides research – it’s Product Testing [creativity not invited]. One has to assume that any deviations after the study is underway are potential attempts to bias the outcome. The term HARKing [Hypothesizing After the Results are Known] reminds us of this danger. Non-protocol analyses or outcome variables are called exploratory, and may be very revealing – they may even be discussed in the narrative. But they’re off limits in formulating the definitive conclusion of the study. If they’re that tempting, do another Clinical Trial with those findings in the new a priori protocol.

I was a latecomer to Study 329. By the time I got involved, it already had a literature of its own from the subpoenaed documents and settled court cases. I used a lot of that in a previous series that starts with a movement… and continues for quite a while, giving something of a historical perspective [catalogued in the lesson of Study 329: an unfinished symphony…]. It’s there for the reading so I won’t repeat all of that here. When I wrote it, I’d been looking at RCTs for a while. But re-reading that series now, I can see how naive I was about the details – a novice about how Clinical Trials actually work and how they can be distorted. I suspect I wasn’t alone in my ignorance. I’ve learned a lot being involved in our current project, and so my focus is going to be different. Last time through, I was interested in proving to myself [and maybe you] that the analysis presented in the published paper was flawed, and did not show that Paxil® was either efficacious or safe in depressed adolescents. After this two year stint, I’ve learned a lot more about how to actually vet a Clinical Trial when you have the kind of Data Transparency we all hope will be coming in the near future for all of them – what’s important and how to go through it. I hope this partial narrative of that journey will:
  • encourage other RIAT teams to look into unpublished or questionable Clinical Trials
  • help make future enterprises less grueling
  • make a contribution to future reforms in the current system
and it all starts with the a priori protocol
    a pri·o·ri  [ä′ prë-ôr′ë]
    1. from a general law to a particular instance;
      valid independently of observation.
    2. existing in the mind independent of experience.
    3. conceived beforehand.
Mickey @ 8:00 AM

study 329 i – setting things right…

Posted on Tuesday 8 September 2015

"Will this drug help me?" "… hurt me?" "… do nothing?" "What if I don’t take it?" Questions asked as if there’s an answer. But in every case the answer is in the form of likelihoods, not certainties. Each question has "how much?" tacked on – "how much might it help me?" "… hurt me?" Sometimes the answer depends on who you are – male/female? black/white? young/old? One can go on and on with things that might affect the answer, and still only end up with a risk benefit estimate, not the simple answer you want to hear. With interventions that have been around for a while, the doctor and sometimes even the patients have the benefit of long usage that makes things a lot easier. But every new treatment has a beginning with no clinical experience to fall back on. What then? So to the Clinical Trials.

In the laboratory, you can take two groups of genetically identical animals living under the same conditions and give one group a medication and the other something inert, then compare the results. We humans are much harder. We’re not genetic clones. We live in a wide variety of ways and places. We’re a fickle lot – sometimes we don’t consistently take the medication; sometimes we miss appointments; sometimes we drop out of studies altogether. And then there’s this placebo effect thing. For reasons known and unknown, just being in a study itself often makes us significantly better, a particularly common finding in psychiatric drug trials. Thus any Clinical Trial comes out of the gate with built in variabilities and confounding factors, no matter what is being tested. Then there’s time. Most trials are short compared to the projected drug use. So a Clinical Trial is only a rough starting point at best, picking up on harms and judging efficacies in a closely attended but brief setting – an abnormal setting.

I never paid too much attention to Clinical Trials. I think I even thought the FDA did them [that’s how little attention I paid!]. New drugs that mattered didn’t come along that often, and I [we] learned about them from other sources. I remember coming into Psychiatry from Internal Medicine, and being awed by how much we talked about them. There were only a few classes with not that many member drugs in each class. That was nothing, compared to where I came from. But "Why did you pick this-azine instead of that-azine?" was a frequent question, so I got into the swing of things and learned the standard lines about what seemed minor differences. By the time the "new" drugs showed up [SSRIs, SNRIs, Atypicals, Mood Stabilizers(?), etc], I was a practicing psychotherapist and mostly kept up out of habit, with lite usage. So Randomized Clinical Trials and drug approval processes are an acquired interest.

Actually, people like me who didn’t pay attention were part of our current problems. We relied on the academic community to keep us up to date about medications with journal articles and review articles. We took our required Continuing Medical Education [C.M.E.] Courses and went to our professional organizations’ meetings. And we didn’t really notice that things were slowly changing, that the firewall between the commercial medical enterprises and academic medicine had eroded a little more with every passing year. Medicine is traditionally a self-regulating profession, and we fell down on the job. So now we’re in the position of trying to reclaim things we just took for granted, and the commercially funded Randomized Clinical Trial [RCT] is right in the center of the problem.

I’ve had the opportunity to be on a RIAT team that has spent a couple of years immersed in looking back on a single Randomized Clinical Trial [RCT] that began recruitment 21 years ago, was published 14 years ago, and is now a classic – not as a breakthrough, but rather becoming a paradigm for what needs our attention. It was the SmithKline Beecham trial of Paxil® in depressed adolescents known as Study 329. Our reanalysis will be published shortly [see A Milestone in the Battle for Truth in Drug Safety]:
Journal of the American Academy of Child and Adolescent Psychiatry, 2001, 40(7):762–772.

Objective: To compare paroxetine with placebo and imipramine with placebo for the treatment of adolescent depression.
Method: After a 7 to 14-day screening period, 275 adolescents with major depression began 8 weeks of double-blind paroxetine [20–40 mg], imipramine [gradual upward titration to 200–300 mg], or placebo. The two primary outcome measures were endpoint response [Hamilton Rating Scale for Depression [HAM-D] score <8 or >50% reduction in baseline HAM-D] and change from baseline HAM-D score. Other depression-related variables were [1] HAM-D depressed mood item; [2] depression item of the Schedule for Affective Disorders and Schizophrenia for Adolescents-Lifetime version [K-SADS-L]; [3] Clinical Global Impression [CGI] improvement scores of 1 or 2; [4] nine-item depression subscale of K-SADS-L; and [5] mean CGI improvement scores.
Results: Paroxetine demonstrated significantly greater improvement compared with placebo in HAM-D total score <8, HAM-D depressed mood item, K-SADS-L depressed mood item, and CGI score of 1 or 2. The response to imipramine was not significantly different from placebo for any measure. Neither paroxetine nor imipramine differed significantly from placebo on parent- or self-rating measures. Withdrawal rates for adverse effects were 9.7% and 6.9% for paroxetine and placebo, respectively. Of 31.5% of subjects stopping imipramine therapy because of adverse effects, nearly one third did so because of adverse cardiovascular effects.
Conclusions: Paroxetine is generally well tolerated and effective for major depression in adolescents.
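The first of those two primary outcome measures is easy to state in code. Here’s a minimal sketch [my own illustration, not anything from the study’s actual analysis plan] of the dichotomous "response" criterion quoted above – endpoint HAM-D below 8, or better than a 50% drop from baseline:

```python
def is_responder(baseline_hamd, endpoint_hamd):
    """Study 329's declared dichotomous response criterion, as stated in
    the abstract: endpoint HAM-D < 8, or >50% reduction from baseline."""
    remission = endpoint_hamd < 8
    halved = endpoint_hamd < 0.5 * baseline_hamd  # strictly more than 50% drop
    return remission or halved

print(is_responder(24, 7))   # True: endpoint below 8
print(is_responder(24, 11))  # True: 54% reduction
print(is_responder(24, 13))  # False: neither criterion met
```

Writing the declared outcome down this precisely is exactly what makes it checkable later – which endpoints were specified in the protocol, and which were added after the fact, is a question the raw data can answer.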
When I had my first crack at this article three years ago [the lesson of Study 329: an unfinished symphony…], I didn’t know as much as I thought I did. What I’ve learned in the interim are some things about how the Clinical Trial systems work in practice, how Clinical Trial data are recorded and catalogued, how to analyze the data, and just how easy it was to turn a negative trial into a paper that was accepted by The Journal of the American Academy of Child and Adolescent Psychiatry where it remains, a stubborn reminder of a bygone era, like the rebel battle flag that flew at the capitol of South Carolina until recently…
Mickey @ 7:30 AM

anything goes…

Posted on Monday 7 September 2015

"There is only one difference between a bad economist and a good one: the bad economist confines himself to the visible effect; the good economist takes into account both the effect that can be seen and those effects that must be foreseen…"
                French Journalist/Economist Frédéric Bastiat

The Law of Unintended Consequences is an all too frequent force in the best-laid plans of mice and men – derailing the most well-meant reforms. It’s like an unseen ghost, lurking behind every tree just waiting to pop out when you least expect it. At the risk of a mundane example, the Kudzu on the sides of some of our Southern byways was actively imported by TVA in its early days to control erosion, but lingers in perpetuity choking everything in its path.

Back in 2001 when Paxil Study 329 was first published, it was a genuine shock to discover that it was ghost-written by professional medical writer, Sally Laden, who created the first draft of that article and oversaw the subsequent revisions [and other paperwork]:
Children’s Hospital of Philadelphia [Dr. Weller]; North America Medical Affairs, GlaxoSmithKline, Collegeville, PA [Ms. Oakes, Mr. McCafferty]. This study was supported by a grant from GlaxoSmithKline, Collegeville, PA. The authors acknowledge the contributions of the following individuals: Jill M. Abbott, Ellen Basian, Ph.D., Carolyn Boulos, M.D., Elyse Dubo, M.D., Mary A. Fristad, Ph.D., Joan Hebeler, M.D., Kevin Kelly, Ph.D., Sharon Reiter, M.D., and Ronald A. Weller, M.D. Editorial assistance was provided by Sally K. Laden, MS.
By then, the funding source and the authors who were company employees were also being regularly acknowledged, though the COIs of the authors weren’t mentioned; clinical trial registration was in its infancy; and ProPublica/Sunshine Act declarations were just a dream in the minds of a few:
    In olden days, a glimpse of stocking
    Was looked on as something shocking.
    Now heaven knows,
    Anything goes…
    Good authors too who once knew better words
    Now only use four-letter words
    Writing prose.
    Anything goes…
                      Cole Porter 1934
But have the required declarations made a difference? I’m sure they have made some difference, but like the statistical differences in many of our Clinical Trials, is it enough of a difference to really matter? Has it had the desired effect? I’m actually beginning to think that the old Law of Unintended Consequences is operating here, and that the insistence on declaring conflicts of interest may have had a paradoxical effect and increased our tolerance for Conflicts of Interest and industry involvement in scientific/academic matters. It’s a hypothesis I don’t care much for. For example, the recent Clinical Trials of the late-coming Atypical Antipsychotic, Brexpiprazole [the spice must flow…, how many stars?…]:
by Correll CU, Skuban A, Ouyang J, Hobart M, Pfister S, McQuade RD, Nyilas M, Carson WH, Sanchez R, and Eriksson H.
American Journal of Psychiatry. 2015 172[9]:820-821.
From the Zucker Hillside Hospital, Glen Oaks, N.Y.; Otsuka Pharmaceutical Development & Commercialization, Princeton, N.J.; and H.  Lundbeck A/S, Valby, Copenhagen, Denmark.

Funded by Otsuka Pharmaceutical Development & Commercialization, Inc., and H. Lundbeck A/S. Jennifer Stewart, M.Sc. [QXV Communications, Macclesfield, U.K.] provided writing support that was funded by Otsuka Pharmaceutical Development & Commercialization, Inc., and H. Lundbeck A/S.

Dr. Correll has been a consultant and/or advisor to or has received honoraria from Actelion, Alexza, American Academy of Child and Adolescent Psychiatry, Bristol-Myers Squibb, Cephalon, Eli Lilly, Genentech, Gerson Lehrman Group, IntraCellular Therapies, Lundbeck, Medavante, Medscape, Merck, National Institute of Mental Health, Janssen/J&J, Otsuka, Pfizer, ProPhase, Roche, Sunovion, Takeda, Teva, and Vanda; he has received grant support from Bristol-Myers Squibb, Feinstein Institute for Medical Research, Janssen/J&J, National Institute of Mental Health, NARSAD, and Otsuka; and he has been a Data Safety Monitoring Board member for Cephalon, Eli Lilly, Janssen, Lundbeck, Pfizer, Takeda, and Teva. Drs. Skuban, Ouyang, Hobart, McQuade, Nyilas, Carson, and Sanchez and Ms. Pfister are employees of Otsuka Pharmaceutical Development & Commercialization, Inc. Dr. Eriksson is an employee of, and owns stock in, H.  Lundbeck A/S.
by Kane JM, Skuban, Ouyang, Hobart, Pfister, McQuade, Nyilas, Carson, Sanchez, and Eriksson.
Schizophrenia Research. 2015 164[1-3]:127-35.
Contributors: Drs Kane, Skuban, Youakim, Hobart, Pfister, McQuade, Nyilas, Carson and Sanchez designed the study and wrote the protocol. Drs Kane, Skuban, McQuade and Eriksson contributed to interpretation of the data, and Dr Ouyang performed the statistical analysis. All authors contributed to and have approved the final manuscript. Ruth Steer, PhD, [QXV Communications, Macclesfield, UK] provided writing support, which was funded by Otsuka Pharmaceutical Development & Commercialization, Inc. [Princeton, USA] and H. Lundbeck A/S [Valby, Denmark].
Conflict of interest Dr Kane has been a consultant for Amgen, Alkermes, Bristol-Meyers Squibb, Eli Lilly, EnVivo Pharmaceuticals [Forum], Genentech, H. Lundbeck, Intracellular Therapeutics, Janssen Pharmaceutica, Johnson and Johnson, Merck, Novartis, Otsuka, Pierre Fabre, Proteus, Reviva, Roche and Sunovion. Dr Kane has been on the Speakers Bureaus for Bristol-Meyers Squibb, Eli Lilly, Janssen, Genentech and Otsuka, and is a shareholder in MedAvante, Inc. Drs Skuban, Ouyang, Hobart, Pfister, McQuade, Nyilas, Carson and Sanchez are employees of Otsuka Pharmaceutical Development & Commercialization, Inc. Dr Eriksson is an employee of H. Lundbeck A/S.
And, as I mentioned in the spice must flow…, there is only one academic author for each article, and both authors are at the Feinstein Institute for Medical Research. Both articles say:
From the Zucker Hillside Hospital, Glen Oaks, N.Y.; Otsuka Pharmaceutical Development & Commercialization, Princeton, N.J.; and H. Lundbeck A/S, Valby, Copenhagen, Denmark.
So like Cole Porter said:
    Now heaven knows,
    Anything goes…
I think if we had seen this much openly declared industry imprint back in 2001 [the days of Study 329], there would have been a loud general outcry [rather than just this complaint on my little blog here on the edge of the galaxy]. These articles are openly industry productions with all but two authors employed directly by industry. Both studies used 60 [!] sites [for rapidity] all over the world. They’re both ghost-written and the sole academic authors are from the same department and themselves heavily loaded with COI. We should be up in arms that two first-line journals published such obviously tainted articles. But unless I missed it, nobody has had much to say. So, as to that Law of Unintended Consequences, I’m wondering if our insistence on demanding these disclosures hasn’t sent the message that this kind of publication is fine. And that what was intended to be a check on tainted Clinical Trials has turned into a tolerance – a permission to publish them in this form. It damn sure hasn’t put a stop to them…
Mickey @ 8:21 PM

the growing cry…

Posted on Saturday 5 September 2015

embargo
    1. an official ban on trade or other commercial activity with a particular country.
      "an embargo on grain sales"
    1. impose an official ban on (trade or a country or commodity).
      "the country has been virtually embargoed by most of the noncommunist world"
    2. seize (a ship or goods) for state service.
The video in the last post [background music…] is of a talk Dr. Healy gave exactly a year ago when we first submitted our RIAT article about Paxil Study 329 to the British Medical Journal [BMJ]. If you watched it, you know that it’s a historical review of the Clinical Trial and the article that appeared in the Journal of the American Academy of Child and Adolescent Psychiatry [JAACAP] in 2001. Notice that he doesn’t talk directly about what we said in our article which is a second look at the data from that study. That’s because like most academic journals, the BMJ requests an embargo on discussing an article submitted for publication until it is either rejected or actually published. I hasten to add that this kind of embargo makes perfect sense to me for any number of reasons. So I’m not complaining.

But even though I understand and even approve of the embargo, that doesn’t mean that I enjoy waiting for publication to talk about our paper. I’ve been thinking about that Keller et al article for five years now, and actively working on our RIAT paper for two years. So it’s hard to think about much else these last several weeks [as in my recent posts are monotonous, mostly about RCTs]. But I’ll have to admit that the wait has had something of a positive effect in that it has focused my thinking onto an important topic. You guessed it – the topic is embargos – and specifically on the pharmaceutical industry’s embargo on the primary data from their Clinical Trials.

Since I was a latecomer to the ways of Clinical Trials, it took me a while to catch up. After looking at more RCTs than I’d like to admit, I realized that they were a big problem. And I was frustrated that I couldn’t get at the actual data, but I assumed that industry’s embargo was something that was backed up by some Law or Act. But that wasn’t even slightly right [repeal the proprietary data act…, except where necessary to protect the public…]. They keep the data from RCTs secret because they want to, not because they have a legal right to. It’s an embargo like our embargo on trade with Cuba or Iran, a power play designed to force a desired outcome. What desired outcome? To make us accept the version presented in some published [often deceitfully written] article in a journal. And now that I’m thinking about it, I’m not sure that embargo is the right maritime metaphor for their keeping the actual data secret. Maybe …
blockade
    1. an act or means of sealing off a place to prevent goods or people from entering or leaving.
      "there was a blockade of humanitarian aid"
    1. seal off (a place) to prevent goods or people from entering or leaving.
      "Blackbeard blockaded the Charleston Harbor"
Blackbeard's blockade of the Charleston Harbor
… would be a more accurate choice of terms. And what makes it worse, the regulatory agencies [FDA, EMA] have been enforcers of the blockade that keeps us from being able to examine the data for ourselves. Medicine is traditionally self-regulating. How can we do that if we can’t see the data? And so our article is about more than bringing the data from one Clinical Trial out into the daylight. It’s an example of what can be learned in general from the examination of the raw data when conducted by people who don’t work for the company [that’s us] – who don’t have the kinds of conflicts of interest that are ubiquitous in these Industry funded RCTs [that’s us too]. The goal is, of course, to add our voices to the growing cry to make the actual raw data available for every Clinical Trial…
Mickey @ 2:26 PM

background music…

Posted on Friday 4 September 2015

a little background music from David Healy…
Mickey @ 8:00 AM