study 329 xii – premature hair loss…

Posted on Tuesday 29 September 2015

I was hoping to move away from the Study 329 paper for a while, mainly not to run the boring in 1boringoldman into the ground, but something came up today that I thought was in need of emphasizing. During the 2 years writing the paper, there were a number of subplots along the way. As it became apparent that the suicidality issue was even bigger than we had imagined, we began to wonder if the Sponsors/Doctors had told the afflicted kids or their parents that the suicidal thoughts may well have been caused by the medication itself. Dr Healy wrote about this on his blog recently [Study 329: MK, HK, SK, GSK & History]:
What Happened to those Suicidal in Study 329?

In May 2014, the RIAT team asked GSK what the children who became suicidal in the course of Study 329 have since been told.  The consent form says that anyone entering the study would be treated just the way they would be in normal clinical practice.

In Study 329, the children taking imipramine were by design force titrated upwards to doses of the order of 300 mg, which is close to double the dose of imipramine given in adult trials by GSK or in normal clinical practice. In normal clinical practice it would be usual to inform somebody who had become suicidal on an SSRI that the treatment had caused their problem.
  • It is important to the person’s image of themselves, that they are aware this problem might have come from the drug rather than from themselves.
  • It is important that they be told that they are likely to react in the same way to any other serotonin reuptake inhibitors – including some antihistamines, isotretinoin and some antibiotics and that they should be cautious about such treatments in the future.
  • It is important the patient’s family be warned about this adverse event as some blood relatives will be more susceptible to commit suicide on an SSRI than the rest of the population might be.
Responding to our letter, GSK’s Dr James Shannon made it clear that 20 years later the company have still not informed any of the participants in Study 329. One reason he offered was that it had only recently been agreed these drugs posed risks. [One of the features of GSK’s response to the published study appears to be a public acceptance that SSRIs do cause suicidality in at least this age group]. But the key reason for not doing so that he offered was that:
    As I have mentioned in my earlier letters, it is standard in clinical trials carried out according to good clinical practice guidelines for our trial investigators and treating physicians to be responsible for patients’ medical care during and after a trial.  This would include the management of any adverse experiences that arise during the trial. Being closest to patients’ medical histories, they are best placed to do this and we are confident of their commitment to provide the care patients need.
This may be the first recorded appeal to the use of the Learned Intermediary doctrine in clinical trial settings. One lesson of the 329 story is that without access to data, the learned intermediary doctrine is supremely dangerous for patients – and for doctors…

And these "Learned Intermediaries" in this instance happened also to be authors on this article, authors who have remained mute throughout. In fact, the public acknowledgement of suicidality as an adverse effect has only come from GSK in response to our recent article [Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence]. Back in 2004, Dr. Keller seemed more focused on the authors’ public faces than their study subjects:
Sitting in the midst of a Maelstrom in 2004, when New York State was suing GSK for Fraud and faced with demands to “modify” Study 329, on June 13 Marty Keller emailed some of his co-authors to get a united position about what they would say to GSK about the modification:
    … that [it must be] 100% clear in this paper that there is no way to read it and think that 329 is being criticized and that it was not written with complete integrity and accuracy given the data we had and should have had…..  We also want it to be crystal clear that any new data or analyses, case report forms, narratives etc. you have worked with since 329 was published, was not made available to the 329 investigator’s by SK, otherwise we could look foolish, naïve, incompetent or “biased” (the most likely accusation that will be made) to present things in a way that was favorable to SK, disregarding our responsibility to the proper scientific method,  to the public, children and their families.
So as MK & GSK see it, while the Learned Intermediary Doctrine rules, he and his colleagues have a responsibility to their patients.  Someone needs to do the right thing by the Paxil 12…
Focusing back on the kids, there was an article published in today’s International Business Times that has some answers:
International Business Times
By Amy Nordrum
September 29 2015
I’m not even going to attempt to summarize this article. It needs a full reading, as do most case studies. The reporter interviews several people, now in their thirties, who took Paxil as teenagers. They describe how the Adverse Effect described in our paper feels on the inside. Some figured out it was the drug at the time, some didn’t. And in the latter case, the reaction to our article was as predicted in Dr. Healy’s post. I’ve seen a number of such patients who have had persistent self image problems from SSRI Adverse Events they didn’t know were from the drug. Like the case studies in Nordrum’s article, they were both relieved and enraged when they found out that the medication was responsible.

Except for the recent Swedish study [see an innovative design… ], the population studies haven’t detected the suicidality Adverse Effects. I think this article probably speaks to why. Most patients who have this kind of reaction either recognize it or are told about it by others and stop the medication and move on. Thus, there’s no way for them to show up in a population survey. I was surprised at the actual incidence in Paxil Study 329 – greater than 10%. This is one place where the clinical trial, carefully reported, can tell us something missed in other venues [because the subjects are so closely watched].

As to the "Learned Intermediary" argument: the "Intermediaries" have been kept in the dark too. Since our article came out, GSK, the JAACAP, and Keller have publicly acknowledged the "tendency towards suicidality" in adolescents taking SSRIs as a way of saying our article says nothing new. To my knowledge, that’s something of a new tune for each of them [even though they still stand by their original versions]. That is some pretty powerful spin. But, if that’s what it takes to finally get their acknowledgement, so be it. That’s what we’ve been wanting them to say for well over a decade!

This was, however, more spin than I could tolerate:
GSK voluntarily published the Paxil trials through a new program called RIAT, which stands for Restoring Invisible and Abandoned Trials. RIAT aims to nudge clinical research and drug development toward greater transparency and data sharing, and GSK was the first company to participate.
I can’t blame the author. GSK has indeed been taking credit for our article. But it’s a bit like an abscessed tooth claiming it pulled itself. The data coming our way was hardly "voluntary." It took the 2004 New York settlement, a challenge by Peter Doshi, over a decade of widespread pressure and lawsuits, months of our own group’s negotiations, and viewing the data through a periscope to get that data. So it’s kind of hard to swallow that version and turn the other cheek [this was the leading cause of premature hair loss among our group over the last two years]…
Mickey @ 6:29 PM

study 329 xi – week 8…

Posted on Monday 28 September 2015

I know I’m a broken record with this 329 stuff. ‘Blog’ came from parsing ‘Weblog’ into ‘We Blog’ – and when push comes to shove, all you can write about is what’s in your mind. This is what’s in there right now [still]…

In study 329 ix – mystic statistics… I was talking about the variables reported in the Keller et al paper that weren’t in the original protocol, the ones that were reported to be statistically significant. They’re highlighted in blue in this copy of their Table 2:

We left them out of the paper because we wanted to emphasize that they weren’t mentioned until near the end of the study. But when we looked at them post hoc, only three of the four survived using the protocol-defined ANOVA analysis [the K-SADS-L Depressed Mood Item bit the dust]. Two were barely over the p<0.05 line [or on it], and were only significant in week 8 of the study.

The notion that you would take some pill for two months and all of a sudden it would just start working seemed pretty remote to me. But I didn’t pay a lot of attention to the HAM-D Depressed Mood Item. After all, it was significant at the p<0.01 level in our analysis. But there was a Rapid Response to our article in the BMJ that had something to do with the HAM-D Depressed Mood Item causing me to go back for yet another look [Study 329 did detect an antidepressant signal from paroxetine]. First, look at the graph on the left below. It’s the difference between the HAM-D Depressed Mood Item and the baseline value, and it doesn’t make any sense. It says Paroxetine is effective at the first week with a respectable p=0.021 and an effect size of d=0.39 [in the moderate range]. No antidepressant does that. And none of the other outcome variables show anything like that:

The HAM-D Depressed Mood Item is neither a continuous nor a categorical [yes/no] variable. It’s an ordinal variable with five levels for the rater to choose from:

    DEPRESSED MOOD (sadness, hopeless, helpless, worthless)
    0 | Absent.
    1 | These feeling states indicated only on questioning.
    2 | These feeling states spontaneously reported verbally.
    3 | Communicates feeling states non-verbally, i.e. facial expression, posture, voice and tendency to weep.
    4 | Patient reports virtually only these feeling states in all spontaneous communication.

An ordinal scale is obviously a severity scale, but the numbers aren’t ordinary numbers: they only tell you the order of things, not a magnitude. They don’t really have an arithmetic [+ – × ÷], and we use different statistics for them [Mann-Whitney, Kruskal-Wallis]. Subtracting a baseline seems kind of shaky. Actually, that left-hand graph looks like someone sat on it. The baseline values were higher for Paroxetine, and I suspected that a subtraction anomaly got carried through to the end, accounting for those peculiar p values. So I just compared the raw values, and sure enough, there was no significant difference until the very end – Week 8 [right-hand graph].
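For what it’s worth, the rank logic is simple enough to sketch. Here’s a minimal pure-Python illustration [the scores are hypothetical, not the trial data, and this isn’t the RIAT team’s actual analysis code] showing how a Mann-Whitney comparison uses only order, never subtraction:

```python
def mann_whitney_u(a, b):
    # U statistic for group a versus group b: count the pairs where a's
    # score beats b's, with ties counted as half. Only ">" and "==" are
    # used - no subtraction - so it is valid for ordinal scores like
    # the 0-4 HAM-D Depressed Mood Item.
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical week-8 item scores for two small groups:
drug    = [0, 1, 1, 2]
placebo = [1, 2, 2, 3]

u = mann_whitney_u(drug, placebo)
expected_under_null = len(drug) * len(placebo) / 2  # 8.0
print(u, expected_under_null)  # 3.0 vs 8.0 - the drug scores tend lower
```

A real analysis would go on to convert U into a p value [with a correction for ties], but the point stands: rank methods never subtract a baseline, so a baseline imbalance can’t leak through the arithmetic.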

So it’s down to three rogue outcome variables, significant only in the eighth week. Was there something about that last week that should be further examined? Well, there was one thing [see the full study report acute, page 53]:

Defined Timepoints
Day 1 was defined as the day on which the randomized, double-blind study medication was started. Assessments were included in the analyses at a particular timepoint (study week) if they occurred within the following day windows relative to Day 1:
Week 1 = Days 01 to 11
Week 2 = Days 12 to 18
Week 3 = Days 19 to 25
Week 4 = Days 26 to 32
Week 5 = Days 33 to 39
Week 6 = Days 40 to 46
Week 7 = Days 47 to 53
Week 8 = Days 54 to 70
If multiple observations for a patient fell into a visit window, then the last (furthest from the start of the study) observation was used to represent that patient’s result for that time period in the tabulations and analyses. However, all values within a visit window were presented in the data listings.
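That rule is mechanical enough to sketch. A minimal Python illustration [the window boundaries come from the table above; the function names and observation data are made up for the example]:

```python
# Each observation day maps to a study week; when several observations
# fall in one window, the last (latest day) represents that week.
WINDOWS = {  # week: (first day, last day), inclusive
    1: (1, 11), 2: (12, 18), 3: (19, 25), 4: (26, 32),
    5: (33, 39), 6: (40, 46), 7: (47, 53), 8: (54, 70),
}

def week_for_day(day):
    for week, (lo, hi) in WINDOWS.items():
        if lo <= day <= hi:
            return week
    return None  # outside all windows

def last_per_week(observations):
    """observations: list of (day, value) pairs. Returns {week: value}
    using the observation furthest from the study start in each window."""
    chosen = {}  # week -> (day, value)
    for day, value in observations:
        week = week_for_day(day)
        if week is not None and (week not in chosen or day > chosen[week][0]):
            chosen[week] = (day, value)
    return {week: value for week, (day, value) in chosen.items()}

print(last_per_week([(5, 24), (10, 20), (55, 12), (68, 10)]))
# {1: 20, 8: 10} - day 10 beats day 5 for week 1; day 68 beats day 55 for week 8
```

Notice from the table that the Week 8 window [days 54 to 70] is 17 days wide, while weeks 2 through 7 get 7 days each – so a "week 8" value could come from anywhere in the final two-and-a-half weeks of observation.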

Do I have the energy left to run this down? Certainly not right now.

It’s been a very long week…
Mickey @ 4:20 PM

aacap and jaacap respond…

Posted on Saturday 26 September 2015

[The following emails from the American Academy of Child and Adolescent Psychiatry and the Journal of the American Academy of Child and Adolescent Psychiatry were forwarded to me by members]

09/16/2015

Dear Members,

This week, The BMJ published a study, “Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence,” which reanalyzes data from a clinical trial performed in the late 1990s and published in JAACAP in 2001. The conclusions of this article contradict those of the original study. Please know that the Academy has been fully aware of the pending publication of this article by The BMJ.

Research provides the foundation for child and adolescent psychiatry’s knowledge base. The Academy encourages rigorous scientific design and methodology and supports the highest ethical and professional standards. We also believe it is essential that research be conducted within a strong framework of transparency and disclosure. As an organization, AACAP has been a leader in advocating for the positive changes that have taken place in the last decade in the relationship between the pharmaceutical industry and academic and professional associations.

As the leading national professional medical association dedicated to promoting the healthy development of children, adolescents, and families, through advocacy, education, and research, our response to The BMJ publication is as follows:

  • AACAP has the utmost respect for The BMJ and we thank them for their continued efforts to further scientific knowledge and understanding.
  • AACAP supports transparency in clinical trial reporting and welcomes the RIAT initiative, which enables publicly available primary data to be reanalyzed and published as new, potentially revised reports.
  • JAACAP is a forum for scientific reporting and scholarly discussion. The scientific process builds on itself over time through a cycle of new research, analysis, and ongoing dialog. This process stimulates debate and moves the field forward toward a better understanding of critical issues.
  • As with most medical journals, JAACAP operates with full editorial independence. AACAP does not influence or direct decisions regarding specific publications. Furthermore, the statements and opinions expressed in JAACAP articles are those of the authors, and not necessarily those of AACAP, the editors, or the publisher. Inquiries about the articles and study in question should be addressed to their respective authors.

Moving forward, we will continue to monitor any developments and keep the membership informed of relevant information as it becomes available. Please direct any questions to the Communications Department via email at communications@aacap.org.

Thank you for your continued support!
Paramjit T. Joshi, MD
President, AACAP


The American Academy of Child and Adolescent Psychiatry

3615 Wisconsin Avenue, N.W. | Washington, D.C. 20016-3007 | Phone: 202.966.7300 | Fax: 202.966.2891

www.aacap.org

09/25/2015

Dear __________,

As many of you are already aware, The BMJ recently published a reanalysis [1] of clinical trial data (study 329) that is inconsistent with the results of a study originally published in JAACAP in 2001 [2].

This reanalysis does not come as a surprise. Under the Restoring Invisible and Abandoned Trials (RIAT) initiative, originally proposed in 2013 [3], research groups are encouraged to use publicly available data to publish new, potentially revised reports of past clinical trials, and we had anticipated that study 329 would be among the first to be revisited.

Since I became editor-in-chief in 2008, nearly seven years after the original article’s publication, we have received a number of inquiries about study 329. JAACAP takes seriously its responsibility to ensure scientific integrity, and manages allegations of scientific misconduct and breaches of publication ethics according to guidelines set forth by the Committee on Publication Ethics (COPE) [4]. JAACAP’s editorial team has reviewed allegations against study 329 several times over, and after thorough assessment, found no basis for editorial action regarding the 2001 article.

JAACAP represents a collaborative effort designed to disseminate research findings and facilitate discussion within our community. The scientific process is one of continual evolution – a cycle that advances with each new replication, refinement, or rejection of past findings. Under the vast umbrella of scientific research and reporting, we must always make room for opposing views and varying interpretations. There can be no final word on any subject, but our common goal must be the same: to advance the science of pediatric mental health and to promote the care of youth and their families.

Sincerely,
Andrés Martin, MD, MPH
Editor-in-Chief


[1] Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence. BMJ. 2015;351:h4320.
[2] Efficacy of paroxetine in the treatment of adolescent major depression: a randomized, controlled trial. J Am Acad Child Adolesc Psychiatry. 2001;40:762-772.
[3] Restoring invisible and abandoned trials: a call for people to publish the findings. BMJ. 2013;346:f2865.
[4] Committee on Publication Ethics Guidelines.



Mickey @ 8:00 AM

a painful re·al·i·za·tion…

Posted on Friday 25 September 2015

    re·al·ize  /ˈrē(ə)ˌlīz/
    verb
      become fully aware of [something] as a fact; understand clearly
[I’ve preferred to think of to re·al·ize as "to make real"]

I had no conscious intention of parsing the verbs to ra·tion·al·ize and to re·al·ize in back to back posts. I wrote the last post, then read today’s installment of America’s Most Admired Lawbreaker, the serialized story of Alex Gorsky, Johnson & Johnson, and Risperdal in the Huffington Post [now the 11th of 15 daily chapters]. The story has a specific meaning for me, and so reading it isn’t just informative – it’s the stimulus for a lot of memories, some of which were painful. I don’t know if it’s this way for everyone, but medical training had a massive impact on my relationship with my own mind. If you make a medical error, it can have the gravest of consequences – and it’s inconceivable that one won’t make mistakes. So when it happens, you remember what you were thinking while making the error and are faced with the dramatic consequences of your wrongness.

The first real experience of clinical medicine in my medical school was being on an autopsy team as part of the Pathology Course. My initial autopsy was on a twelve year old boy who had come into the hospital with a raging case of pneumonia and within a day and a half was dead. He had been seen by every service on pediatrics, but nobody knew what was wrong. So besides the pathology resident doing the autopsy and us greenhorn second-years, there were pediatricians and pediatric surgeons filling the suite. The boy had an anomalous appendix that wasn’t where it was supposed to be. It had ruptured towards the back and the infection had gotten behind the abdominal lining, traveled up through the diaphragm into his chest, and presented as pneumonia. He had none of the usual symptoms of Appendicitis.

While the surgeons had considered that possibility, one of the senior residents had nixed the idea of an exploratory laparotomy. And when it became apparent during the autopsy that surgery would have been the only thing that might have saved the boy, I could see his despair – a quiet depth of despair I’d never seen before. And through the years, as I made my own errors, I learned what that felt like from the inside. And it seems that the more you learn, the greater the requirement to be skeptical about your own thoughts: "Am I rationalizing?" comes to the fore. And in my later profession as a psychotherapist, that skepticism has to come to the center of the stage. Every thought is tentative, only a hypothesis, until proven otherwise.

So back to the thread. It can’t be lost on anyone that a psychoanalyst/psychotherapist like me would have a built-in bias as a retired guy looking into the likes of biomedical psychiatry and psychopharmacology. So when I started seeing patients in a general psychiatric clinic and was appalled at the medication regimens people were on, or when the disturbing articles about prominent psychiatrists reporting on Senator Grassley’s investigations started appearing, those things certainly bothered me [to say the least]. But I’m a biased observer. It looked like something was terribly wrong [and by the way, some of the names I was reading were people I knew, or knew about]. But was I ra·tion·al·iz·ing based on my own inner workings? And there was another piece. It was a painful story. This was my profession we were talking about. I’m not an anti-psychiatrist. I’m a psychiatrist, and this was feeling like a pretty painful realization.

When I ran across TMAP and the other J&J antics, I got pretty intrigued. It was so widespread, reported as almost Machiavellian. Somewhere along the way, I connected with whistle-blower Allen Jones and I read the Rothman Report, an amazing must-read document written for Allen’s TMAP trial that was coming up. Almost without thought, I asked my wife if she was up for a trip to Austin, Texas [she’s usually up for a trip to anywhere new]. She said "sure," and so we were off to Austin for a week-and-a-half trial. It was an odd impulse, and even on the plane I wondered why I was going. But by the end of the first day of the trial, I knew the answer. I was there to see if it [all the deceit I’d been reading about] was real [to re·al·ize as in "to make real"]. And it was real with a capital "R"! I guess I’m an evidence-based type after all. Parenthetically, I think it was that same need to make it real that drove me in our recent Paxil Study 329 article.

Steven Brill’s series [America’s Most Admired Lawbreaker] is really top notch – actually also a must read. But a lot of it is about the chess moves by the J&J lawyers and the lawyers on the various other sides. And it’s about the big guys at the top. But going to the trial, the story was populated, and it was the testimony of the witnesses in the lower ranks that made it all so very real for me. I just happen to have posted the transcripts of the whole trial indexed for easy reading right here on 1boringoldman. The main linked index is below with a few highlighted to focus your reading [don’t miss Moake or Jones]:

STATE OF TEXAS and ALLEN JONES v. JANSSEN et al

DATE WITNESS DESCRIPTION

State v. Janssen Vol 1
State v. Janssen Vol 2
01/10/2012   Cynthia O’Keeffe The Opening Statement for the State of Texas Civil Medicaid Fraud Division.
01/10/2012   Tom Melsheimer The Opening Statement given by Whistleblower Allen Jones’ Lawyer.
01/10/2012   Steve McConnico The Opening Statement for the defendants – Janssen Pharmaceutica et al.
01/10/2012   Thomas Anderson Mr. Anderson was a Product Manager at Janssen during the time Risperdal was "launched" in 1993.
01/10/2012   Margaret Hunt Ms. Hunt is a fraud investigator for the Civil Medicaid Fraud Division of the Texas Attorney General’s Office.
State v. Janssen Vol 3
01/11/2012   Alexander Miller Dr. Miller is in the Department of Psychiatry at the San Antonio Texas Health Science Center – a member of the TMAP team.
01/11/2012   Steven Shon Dr. Shon was Medical Director of the Texas Department of Mental Health Mental Retardation – an integral part of TMAP.
01/11/2012   Gary Leech A Janssen employee who was the medical science liaison for Texas, Oklahoma, Arkansas, Louisiana, and New Mexico [1995–2003].
01/11/2012   James Van Norman Dr. Van Norman is a public psychiatrist currently with Austin Travis County Integral Care, a community mental health center.
State v. Janssen Vol 4
01/12/2012   N. Bursch-Smith Janssen employee from the Department of Reimbursement Management.
01/12/2012   Bill Struyk Former Janssen employee from the Department of Reimbursement Management [1996-1997].
01/12/2012   Allen Jones Pennsylvania Investigator who blew the whistle on TMAP and filed this suit.
01/12/2012   Laurie Snyder Janssen employee in the Department of Public Health Systems & Reimbursement management.
01/12/2012   Susan Stone Dr. Stone worked at the TDMHMR at the time the Texas Medication Algorithm Project [TMAP] was started.
01/12/2012   Steven Schroeder He is the president and CEO of the Robert Wood Johnson Foundation.
01/12/2012   Percy Coard Janssen employee who was a District Manager for hospital sales and later a Public Health Systems & Reimbursement manager.
State v. Janssen Vol 5
01/13/2012   Arnold Friede An expert witness from New York testifying for the plaintiff, specializing in FDA Law.
State v. Janssen Vol 6
01/17/2012   Tiffany Moake Ms. Moake was a Sales Rep for Janssen from 2002-2004 in the San Antonio area.
01/17/2012   Shane Scott Mr. Scott was a Janssen employee and was Ms. Moake’s District Sales Manager.
01/17/2012   Bruce Perry Dr. Perry was an expert witness for the Plaintiffs – a Child Psychiatrist with Baylor Medical School.
01/17/2012   Tone Jones Mr. Jones was Janssen’s District Sales Manager for the Houston area.
State v. Janssen Vol 7
01/18/2012   Tone Jones [continued]
01/18/2012   Billy Milwee Dr. Milwee is in charge of the Texas Medicaid Formulary Program.
01/18/2012   Valerie Robinson Dr. Robinson worked as a Child Psychiatrist in Fort Worth TX, working with Foster Children.
01/18/2012   Sharon Dott Dr. Dott is a psychiatrist in the Galveston area working in public facilities.
01/18/2012   Scott Reines Dr. Reines is an MD/PhD Janssen scientist who was in charge of Clinical Trials and FDA submissions.
01/18/2012   Jos. Glenmullen Dr. Glenmullen was an expert witness for the plaintiff – on the faculty of Harvard University.
Mickey @ 3:00 PM

a creative ra·tion·al·i·za·tion…

Posted on Friday 25 September 2015

    ra·tion·al·ize  /ˈraSH(ə)nəˌlīz/
    verb
      attempt to explain or justify [one’s own or another’s behavior or attitude] with logical, plausible reasons, even if these are not true or appropriate
[I’ve preferred to think of to ra·tion·al·ize as "to start with a conclusion"]
JAMA
by Anne R. Cappola and Garret A. FitzGerald
September 24, 2015

The primary interest of the biomedical scientific endeavor is to benefit patients and society. Frequently, this primary interest coincides with secondary interests, most commonly financial in nature, at the interface of the investigator’s relationship with a private sponsor, typically a drug or device company or, increasingly, venture capital firms. Academia and the public have become sensitive to how such a secondary interest might be unduly influential, biasing the interpretation of results, exposing patients to harm, and damaging the reputation of an institution and investigator. This concern has prompted efforts to minimize or “manage” such “conflicts of interest” resulting in a plethora of policies at both the local and national level. Although these policies are often developed in reaction to a limited number of investigators, once introduced, they apply to all. Given the broad array of stakeholders, the diversity of approaches, and the concern that such policies might restrain innovation and delay translation of basic discoveries to clinical benefit, the Institute for Translational Medicine and Therapeutics at the University of Pennsylvania recently convened an international meeting on conflict of interest. Several themes emerged…
Well, since we know where this is headed, why not jump on ahead right off the bat and get the conclusion out of the way?…
Conflicts of Interest:
    Dr Cappola
    • reports receiving consulting fees from Biomarin, Mannkind Corporation, and Takeda.
    Dr FitzGerald
    • reports being the McNeil Professor of Translational Medicine and Therapeutics, a council member of the American Association for the Advancement of Science, and a member of the National Academy of Medicine biomarker committee;
    • receiving a stipend for being co-chair of the advisory board for Science Translational Medicine;
    • grants from the Harrington Family Foundation and Eli Lilly;
    • consulting fees from Calico and Pfizer, Eli Lilly, Glenmark Pharmaceuticals, and New Haven Pharmaceuticals;
    • serving as chair for the Burroughs Wellcome Foundation review group on regulatory science awards, the Helmholtz Foundation advisory board for the network of cardiovascular science centers, and the PhD program committee of the Wellcome Trust, a section committee of the Royal Society;
    • and serving on the advisory boards of the Oklahoma Medical Research Foundation and King’s Health Partners in London. He also serves on the advisory boards of the Clinical and Translational Science Awards held by the University of Connecticut, Harvard, the Medical University of South Carolina, Duke University, and the University of California at San Francisco.
Funding/Support:
    • This work is supported by a grant [UL1 TR000003] from the National Institutes of Health.
Now that the authors’ cards are on the table, we can actually savor the argument that follows which deserves the label a creative ra·tion·al·i·za·tion:
First, the term conflict of interest is pejorative. It is confrontational and presumptive of inappropriate behavior. Rather, the focus should be on the objective, which is to align secondary interests with the primary objective of the endeavor—to benefit patients and society — in a way that minimizes the risk of bias. A better term — indicative of the objective — would be confluence of interest, implying an alignment of primary and secondary interests. In this regard, the individuals and entities liable to bias extend far beyond the investigator and the sponsor; they include departments, research institutes, and universities. The potential for bias also extends to nonprofit funders, such as the National Institutes of Health and foundations, as well as to journals that might, for example, generate advertising revenue from sponsors…
A conflict of interest implies bias. That’s what the term means. So the authors say that for it to have that meaning makes it a biased term – pejorative. The solution is simple. Remove the bias from the bias, and call it a confluence of interest. Now it’s not pejorative anymore. In fact, it’s downright laudable. And so bias isn’t bias after all, and everything is all better.

In 1936, Anna Freud wrote The Ego and the Mechanisms of Defense to flesh out her father’s ideas about how the mind gets around unsavory motives. She devoted a whole chapter to intellectualization and rationalization, a favorite of adolescents. Another way to look at it is that there is a cognitive leap in adolescence [Piaget] when the child can finally use formal logic and can think in the same way as his/her parents [stripping them of the power of a superior intellect]. A smart adolescent can justify [rationalize] anything and delights in endless mind games, to the consternation of parents through the ages. For a time, it’s a new tool to get what you want, or to enter into the power struggle phase of growing up, rather than a tool for understanding. And some people never make that latter jump, and rationalize their way through life.

Ms. Freud could’ve used this totally silly article as her prime example…
Mickey @ 9:44 AM

blitzed…

Posted on Thursday 24 September 2015

I’m not old enough to have been around during the days of Bromides [Nervine], or Barbiturates, or Meprobamate [Miltown], or Methaqualone [Quaalude]. I grew up in the age of Benzodiazepine [Librium, Valium, Klonopin, Xanax]. We all know what they do so we don’t have to have any clinical trials. We all know they’re effective short term for anxiety and we all ought to know what’s up ahead with longer term [or even medium term] use. These are the "damned if you do and damned if you don’t" drugs and the skill of the everyday clinician can be partially gauged by his/her ability to use them [or not use them] effectively without causing future problems. Some say never use them. Others ignore the warnings. But this post isn’t about that. It’s about something else:

    She was brought to the clinic by her aunt who was taking care of her temporarily. She was a woman in her fifties with a cast on her lower leg from a fall. She was calm, alert, but couldn’t answer many questions. She was blitzed. She told me she’d fallen and broken her hip. But she knew neither the date nor the season. By history, she was obviously the ‘black sheep’ of the family – a failed marriage, no contact with her kids, psych hospitalizations, multiple rehabs for alcohol, benzodiazepine detox, etc. – moving from family member to family member. Her aunt had a piece of paper with her medications written out neatly:
    • Seroquel 600 mg/day
    • Trazodone 450 mg/day
    • Depakote 2.5 Grams/day
    • Neurontin [I forget how much too much]/day
    • Cogentin 4 mg/day
      among other things…
    …an outrageous cocktail! I can think of no medical/psychiatric condition where that’s an appropriate regimen. No wonder she fell and broke her leg. No wonder that she got her injury wrong. Little wonder that she didn’t know the season [I’m surprised she even knew her name]. Where does one even start? So I saw her at the end of each day I was in the clinic, and tried to figure out what I could get away with coming down on without precipitating some withdrawal state. Over a couple of months, I got her down to…
    • Seroquel 200 mg/day
    • Depakote 500 mg/day
    • Cogentin 4 mg/day
    …without incident. But she was still pretty fuzzy [season "yes" – month "no"]. That was two weeks ago. I had noted her pupils were dilated every visit but wanted to decrease the Seroquel before taking on the Cogentin. This time they were so widely dilated I could barely tell her eye color [why it wasn’t that dramatic earlier isn’t clear to me] and she complained about her vision being blurred. So I stopped the Cogentin by coming down a mg every couple of days. Yesterday, I had stepped out to return a phone call. When I got back, the nurse had put her and her aunt in the office because she was so agitated. She was in the middle of a full-scale hyperventilation episode with carpopedal spasm – throwing her glasses across the room, breaking them, and yelling about…well, about everything.

    It took a while to get her breathing slowed. In the barrage of things that followed [a litany of a lifetime of woes and symptoms], I noticed that her pupils were down to size; that she was fully oriented with intact memory, past and present; and that she was mad as hell about many [if not all] things. As she calmed down, I could see that she had some subtle but nonetheless definite involuntary movements of her tongue. In addition, her legs were never totally still.

    She knew about both things: "My restless legs are back – pacing all night. I haven’t slept for four days!" "It’s that Tardive thing I get from the medicine. It comes and goes [pointing to her tongue]." So I had unmasked her Akathisia and her Bucco-Lingual symptoms by dropping the neuroleptic dose and discontinuing the Cogentin too quickly. At least her cognitive apparatus was working, in fact, working overtime.

Yesterday was actually my first opportunity to take a history, as she had been non compos mentis earlier. I can’t discuss it here except to say that the presumptive diagnosis is Borderline Personality Disorder. There was no evidence of a major affective or psychotic disorder. That this patient was overmedicated goes without saying. In an earlier era, overmedication might have happened with the anti·anxiety drugs. Such patients are always anxious, and when people begin to treat them with medication there is a tendency for doses to go up and up. It’s never enough. In her case, besides the pan·anxiety, she exhibits the now discarded DSM-III diagnostic criterion – intolerance of being alone. When she’s living alone, she has great difficulty sleeping, and a lot of the overmedication has to do with that complaint. But now there’s something else. Over the years, her anxiety and insomnia have been treated with various antipsychotic medications, and she now has Akathisia and involuntary tongue movements suggesting Tardive Dyskinesia, emergent on reducing the dose and the Cogentin. I won’t know for sure for a while, but I think this might well be the kind that doesn’t go away – even if I can get her off of the Seroquel.

These patients are very difficult and are often overmedicated [and have been as long as there have been medicines] – with all the medications listed in the first paragraph. That’s a bad thing. She’s gotten medications that are used in conditions she doesn’t have [Depakote and Neurontin]. That’s a bad thing too. But this patient has been given escalating doses of antipsychotics, and now she may well have signs of a permanent iatrogenic neurological condition called Tardive Dyskinesia. And our literature says that’s a good idea – using Atypical Antipsychotics in Borderline Personality Disorder – based on short-term Clinical Trials funded by industry. That’s a very bad thing, maybe a forever thing:

 

[see Atypicals in Borderline Personality Disorders, an anachronism…, Academic Industrial Complex II…, Academic Industrial Complex III…, and not really given the chance…] These studies came from Dr. Charles Schulz‘s Department at the University of Minnesota. Dr. Schulz has recently stepped down [or been stepped down] in the wake of the Dan Markingson affair – essentially being accused of running an industry funded Clinical Trial Mill. We know a lot about the Borderline conditions, and none of what we know would suggest to me that using these medications might be a good idea. This case is an example of why. She was on maxi-doses to treat anxiety and insomnia, leaving us now with two disorders to deal with.

With these patients, there is often nothing right to do. If you don’t treat the anxiety, they act out in dangerous ways. If you do treat it, they overdose or take too much and still want more. They defeat most treatments and yet they need to be treated. I’m not a bit surprised that they respond to Atypical Antipsychotics in short-term trials. But like anything in these cases, the drugs run out of juice and so up goes the dose. We know that pattern from their general response to any and all treatments. And these drugs can leave permanent sequelae for no particular gain that I can see. We can do so much better than this, even with these difficult cases…
Mickey @ 7:52 PM

a breakthrough·freak…

Posted on Tuesday 22 September 2015


Tom Insel explains why he’s ready to give Silicon Valley a try.
MIT Technology Review
By Antonio Regalado
September 21, 2015

We are at a really interesting moment in time. Technology that already has had such a big impact, on entertainment and so many aspects of our lives, can really start to change health care. If you ask the question “What parts of health care can technology transform?”–mental health could be one of the biggest.

Technology can cover much of the diagnostic process because you can use sensors and collect information about behavior in an objective way. Also, a lot of the treatments for mental health are psychosocial interventions, and those can be done through a smartphone. And most importantly, it can affect the quality of care, which is a big issue, especially for psychosocial interventions.

What do you mean by treating over the phone? One of the best treatments for depression is cognitive behavior therapy. It’s building a set of skills for managing your mood. You can do it with a phone as well as face to face. A lot of people with severe depression or social phobia or PTSD don’t want to go in to see someone. This lowers the bar.

Is it possible to diagnose mental illness with a phone? I’d say you can collect information over the phone that can help people manage their own treatment. Your question rests on a paradigm that is completely shifting. The old paradigm is you go to the doctor and they write a prescription. Whether you call it a diagnosis or just identifying the issue, there is an awful lot that can be done online. There is an attachment for your smartphone that can see the tympanic membrane, and pediatricians can make a diagnosis [of ear infection] online. It’s a world where you want to get the right treatments at the right time for the right people. As a consumer, you are close to the source of the information. All of this is a different paradigm that we are moving into.

Is Alphabet’s approach to mental illness going to be primarily technological or biological? I don’t know that. We are going to explore what the opportunities are. We know their sweet spot is in data analytics. What they do really well is figure out how to analyze data. The opportunity is to take that skill and answer biological questions. What that means in terms of what projects the life science team takes on in mental health is totally undefined. Part of my move there is to figure it out.
As a medical student in the 1960s, I was in a new place and the only people I knew were other medical students. A couple of my early friends were local, had grown up in the town. Through them I met their longtime friends who weren’t in medicine. One such person was the son of a successful businessman, and his path was set for life. But, in spite of my own aversion to business, we really hit it off. One day, he explained why, and gave me a phrase that’s still with me. He casually quipped, "You’re a breakthrough·freak – just like me." I’d never thought of it that way, but it was completely on target. I read science fiction [the sciency kind]. I kept up with the latest science advances and technologies, and fantasized about where they might lead. I was in medical school as a prelude to a research career. He had casually nailed my diagnosis.

Much later, I was forced to practice medicine by being drafted into the Air Force after an Internal Medicine Residency and an NIH Research fellowship. Within a short period of time, I realized that practicing medicine was not only relevant and engaging, it got me out of my head. Did I want to do something that actually mattered, or spend my life being just a breakthrough·freak? How that had come to be and how it translated into the rest of my life is another story. But for this moment, my point is that I know a breakthrough·freak when I see one. And Tom Insel has a terminal case. I hasten to add that there’s nothing wrong with being a breakthrough·freak. Probably most breakthroughs are made by breakthrough·freaks repurposed as visionaries [and I’d bet that Google is filled to overflowing with examples].

Reading Insel’s blogs over a number of years, I’ve watched him bounce from thing to thing, leaving a trail of projects in a string behind him. Google is actually a much better fit for a serial breakthrough·freak than the NIMH. He will likely be part of a think·tank rather than the man in charge, and that might just work [though I’m betting there will be a bunch of pop-psychology apps coming our way]. But maybe he’ll land on something visionary after all…
Mickey @ 10:00 PM

just a note:…

Posted on Monday 21 September 2015

Just a quick note to say that if you’re reading this blog and you’re not reading Steven Brill’s America’s Most Admired Lawbreaker in the Huffington Post about Alex Gorsky, J&J, Risperdal, and related matters, you’re making a big mistake. Today was Day 7 of 15, and he’s getting to the good parts. It’s a story I know well, yet I haven’t had a moment’s boredom. He’s doing a mighty fine job of telling a story every American ought to read. Don’t miss it!

Mickey @ 5:43 PM

lost its mojo…

Posted on Monday 21 September 2015

One might think that with all of the supportive media coverage our Study 329 article has received, I would be able to shake off the response from lead author, Martin Keller, reproduced from Retraction Watch in the last post [keller responds…], or his comment in The Chronicle of Higher Education:
Dr. Keller contacted The Chronicle on Wednesday to insist that the 2001 results faithfully represented the best effort of the authors at the time, and that any misrepresentation of his article to help sell Paxil was the responsibility of Glaxo. "Nothing was ever pinned on any of us," despite various trials and investigations, he said. "And when I say that, I’m not telling you we’re like the great escape artists, that we’re Houdinis and we did something wrong and we got away with the crime of the century. Don’t you think if there was really something wrong, some university or agency or something would have pinned something on us?" In what he described as his first effort to speak publicly about the matter, Dr. Keller said his critics also have financial and professional motives for amplifying criticisms, including lawyers representing Paxil plaintiffs and professors seeking their own records of journal publication…
I had a somewhat similar reaction to Dr. Jeffrey Lieberman’s comment:
“The group is a self-appointed watchdog,” Jeffrey Lieberman, chair of psychiatry at the Columbia University College of Physicians and Surgeons, told BuzzFeed News. “One wonders what the motivation is, and how objective they’re going to be.”
So I spent a day or so with a background chorus of refutations playing like a scratchy record in my mind until they played themselves out. Nobody reading this blog needs to hear them again. You could probably reproduce them yourselves, and I’ve certainly filled up enough pages saying them. After the din in my head subsided, I was left with just one clear note that I wanted to respond to. It is what has been characterized as the fallacy of an appeal to authority:
“The study authors comprised virtually all of the academic researchers studying the treatment of child depression in North America at the time”
In this case, the BY·LINE is indeed full of experts:
MARTIN B. KELLER, M.D., NEAL D. RYAN, M.D., MICHAEL STROBER, PH.D., RACHEL G. KLEIN, PH.D., STAN P. KUTCHER, M.D., BORIS BIRMAHER, M.D., OWEN R. HAGINO, M.D., HAROLD KOPLEWICZ, M.D., GABRIELLE A. CARLSON, M.D., GREGORY N. CLARKE, PH.D., GRAHAM J. EMSLIE, M.D., DAVID FEINBERG, M.D., BARBARA GELLER, M.D., VIVEK KUSUMAKAR, M.D., GEORGE PAPATHEODOROU, M.D., WILLIAM H. SACK, M.D., MICHAEL SWEENEY, PH.D., KAREN DINEEN WAGNER, M.D., PH.D., ELIZABETH B. WELLER, M.D., NANCY C. WINTERS, M.D., ROSEMARY OAKES, M.S., AND JAMES P. MCCAFFERTY, B.S.
But in spite of Dr. Keller’s claims otherwise, reading the raft of subpoenaed documents and the depositions, the author·ity appears to have rested on the shoulders of ghostwriter Sally Laden and perhaps the last two listed authors, both of whom were GSK employees [Deposition of Sally Laden, 2007]:

    QUESTION: The document that I have marked as Exhibit seven is the final clinical report for Study 329 is that correct?
       ANSWER: Yes
    QUESTION: Is this a document that you were referring to that you got the data from?
       ANSWER: I don’t recall what specific document I did receive, whether it was this one. I mean, yes, this would be what I would have gotten. I don’t recall getting it.
    QUESTION: You don’t recall ever receiving it, but you know you got it, right?
       ANSWER: Yes, I got it. Yes, I don’t recall receiving it.
    QUESTION: This provided you with information that you then utilized to prepare the first draft of the manuscript for Study 329?
       ANSWER: Yes
    QUESTION: Was it your responsibility alone to create the first draft of Study 329 or did you get help from some of your colleagues?
       ANSWER: I believe I created it on my own.
    QUESTION: Did Martin Keller tell you what to put in the first draft?
       ANSWER: I don’t recall. I don’t think I had any conversation with him until we were, you know, afterwards.
    QUESTION: After you prepared the first manuscript?
       ANSWER: To the best of my recollection, yes.
    QUESTION: In here you list eight main outcome measures, correct?
       ANSWER: Yes
    QUESTION: And you can’t tell from reading – a reader could not distinguish which are these – whether any or all of them are primary or secondary?
       ANSWER: Correct
    QUESTION:  My question was do you know whose idea it was to not distinguish between primary and secondary efficacy measures?
       ANSWER: A reader cannot. This was a first draft, so this came straight from me. This was, I guess, my interpretation. I’m remembering this may have been my interpretation of the data.

This is only one small example of the extent to which the appeal to authority has pervaded our literature. The experts are listed on the BY·LINE, but the work that matters is produced by the sponsor. In this case, the subjects were recruited and underwent the study at the institutions of the listed authors, but the article was drafted by the sponsor and written by a paid writer. This kind of "guest" authorship is common – the experts are involved, but not in the authorship as we understand the term. There are many other examples where the sponsors had already completed the papers before even recruiting the academic "guest authors" to sign onto the BY·LINE.

Similar experts with [financial] PHARMA COI are everywhere: CME presentations; Speaker’s Bureaus; Review Articles; the list gets longer by the year. A recent NEJM series [wtf?…, wtf? for real…] argued that their longstanding policy of banning authors with these COI from review articles should change [in part because there’s a paucity of "clean" experts]. The DSM-5 Revision was done using panels of experts heavily laden with COI tainted members [must be crazy…]. Expert "panels" produced the guidelines and algorithms for the infamous TMAP scam in Texas [1999…]. It appears that we have developed a "cult" of experts [called Key Opinion Leaders by the pharmaceutical marketers].

The Nizkor Project [a study of logical fallacies] lists among the instances where an appeal to authority is considered a logical fallacy:

  • If there is evidence that a person is biased in some manner that would affect the reliability of her claims, then an Argument from Authority based on that person is likely to be fallacious. Even if the claim is actually true, the fact that the expert is biased weakens the argument. This is because there would be reason to believe that the expert might not be making the claim because he has carefully considered it using his expertise. Rather, there would be reason to believe that the claim is being made because of the expert’s bias or prejudice.
  • If a person makes a claim about some subject outside of his area(s) of expertise, then the person is not an expert in that context. Hence, the claim in question is not backed by the required degree of expertise and is not reliable. It is very important to remember that because of the vast scope of human knowledge and skill it is simply not possible for one person to be an expert on everything. Hence, experts will only be true experts in respect to certain subject areas. In most other areas they will have little or no expertise.
In this case, when we were able to directly access the Individual Participant Data [IPD] and the Case Report Forms [CRFs], using the a priori Protocol as our guide to the predefined Primary and Secondary Outcomes and following their stated Statistical Analysis Plan, we could not confirm their claim of efficacy or of safety. Quite the opposite. The group listed on the BY·LINE may well be experts of one sort or another, but they are neither unbiased nor experts in analyzing Clinical Trial data. The notion that one can introduce a completely new outcome analysis at the end of a Clinical Trial lasting three years, whether before or after breaking the blind, and expect to be taken seriously in perpetuity is ludicrous – no matter what the explanation for the change. It’s equally bizarre to query 27 outcome measures [in the CSR] and ignore any correction for multiple comparisons. Likewise, the idea that Dr. Keller can claim that the article was written or analyzed by the experts on the BY·LINE when the hired medical writer testifies that she wrote the first draft [and others] from an industry-supplied summary is just as absurd. And our paper was actually soft on some of the statistical manipulation along the way, but these comments apply in that arena as well.
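To make the multiplicity point concrete, here is a minimal Python sketch with made-up p-values [hypothetical numbers for illustration only, not the Study 329 data]. With 27 outcome measures each tested at α = 0.05, the chance of at least one spurious "significant" finding is 1 − 0.95^27, roughly 75%; a family-wise correction such as Holm’s step-down procedure shows how little survives:

```python
# Illustrative only: hypothetical p-values for 27 outcome measures.
# Querying many outcomes at alpha = 0.05 without a multiplicity
# correction practically guarantees some spurious "wins".

def holm_bonferroni(p_values, alpha=0.05):
    """Indices of hypotheses rejected by Holm's step-down procedure,
    which controls the family-wise error rate at alpha."""
    indexed = sorted(enumerate(p_values), key=lambda pair: pair[1])
    m = len(p_values)
    rejected = []
    for rank, (idx, p) in enumerate(indexed):
        if p <= alpha / (m - rank):   # threshold tightens as rank rises
            rejected.append(idx)
        else:
            break                     # step-down: stop at first failure
    return sorted(rejected)

# 27 hypothetical p-values: a few look "significant" uncorrected...
p_vals = [0.004, 0.02, 0.03, 0.04] + [0.2 + 0.02 * i for i in range(23)]
naive = [i for i, p in enumerate(p_vals) if p <= 0.05]
adjusted = holm_bonferroni(p_vals)

print(len(naive))     # 4 outcomes "significant" with no correction
print(len(adjusted))  # 0 survive the family-wise correction
```

In this contrived example, four outcomes clear the naive 0.05 bar and none survive the Holm adjustment – which is the whole point of declaring the primary outcomes, and the correction plan, in the protocol before the blind is broken.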

I actually sometimes feel sorry for some of the people on that BY·LINE. For some, I expect their error was in assuming that the analysis of the study was properly conducted. But my empathy is short-lived. I think they must’ve felt like it was a double return for their efforts. By doing the study on their site, they raised money for their departments AND they added an article to their respective resumes. What they got was anything but a boost to their status as experts. They delegated their author·ity to other experts who were operating deep in the domain of fallacy – then compounded, and continue to compound, the problem by their silence.

The appeal to authority argument has lost its mojo, not just in this trial, but in Clinical Trials in general. In our article, we concluded:
… As with most scientific papers, Keller and colleagues convey an impression that “the data have spoken.” This authoritative stance is possible only in the absence of access to the data. When the data become accessible to others, it becomes clear that scientific authorship is provisional rather than authoritative.
Mickey @ 10:00 AM

keller responds…

Posted on Friday 18 September 2015

This is  cross posted from Dr. David Healy’s web site with his permission…

The letter below from Marty Keller and colleagues was sent to many media outlets, to Retraction Watch, and to professional organizations on Wednesday.  Paul Basken from The Chronicle of Higher Education asked me for a response, which I sent about an hour after receiving the letter.  This response is from me rather than the 329 group. This and other correspondence features, and will feature, on Study329.org.

One quick piece of housekeeping.  Restoring Study 329 is not about giving Paroxetine to Adolescents – it’s about all drugs for all indications across medicine and for all ages.  It deals with the standard Industry MO to hype benefits and hide harms.  One of the best bits of coverage of this aspect of the story yesterday was in Cosmopolitan.

Letter From Keller et al

Dear

Martin Keller

Nine of us whose names are attached to this email (we did not have time to create electronic signatures) were authors on the study originally published in 2001 in the Journal of the American Academy of Child and Adolescent Psychiatry entitled, “Efficacy of paroxetine in the treatment of adolescent major depression: a randomized controlled trial,” and have read the reanalysis of our article, which is entitled, “Restoring Study 329:  efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence”, currently embargoed for publication in the British Medical Journal (BMJ) early this week. We are providing you with a brief summary response to several of the points in that article that which with we have strong disagreement. Given the length and detail of the BMJ publication and the multitude of specific concerns we have with its approach and conclusions, we will be writing and submitting to the BMJ’s editor an in-depth letter rebutting the claims and accusations made in the article. It will take a significant amount of work to make this scholarly and thorough and do not have a time table; but that level of analysis by us far exceeds the time frame needed to give you that more comprehensive response by today.

The study was planned and designed between 1991-1992. Subject enrollment began in 1994, and was completed in 1997, at which time analysis of the data commenced.  The study authors comprised virtually all of the academic researchers studying the treatment of child depression in North America at the time. The study was designed by academic psychiatrists and adopted with very little change by GSK, who funded the study in an academic / industry partnership.  The two statisticians who helped design the study are among the most esteemed in psychiatry.  The goal of the study designers was to do the best study possible to advance the treatment of depression in youth, not primarily as a drug registration trial.  Some design issues would be made differently today — best practices methodology have changed over the ensuing 24-year interval since inception of our study.

In the interval from when we sat down to plan the study to when we approached the data analysis phase, but prior to the blind being broken, the academic authors, not the sponsor, added several additional measures of depression as secondary outcomes.  We did so because the field of pediatric-age depression had reached a consensus that the Hamilton Depression Rating Scale (our primary outcome measure) had significant limitations in assessing mood disturbance in younger patients. Accordingly, taking this into consideration, and in advance of breaking the blind, we added secondary outcome measures agreed upon by all authors of the paper.  We found statistically significant indications of efficacy in these measures. This was clearly reported in our article, as were the negative findings.

In the “BMJ-Restoring Study 329 …” reanalysis, the following statement is used to justify non-examination of a range of secondary outcome measures:

Both before and after breaking the blind, however, the sponsors made changes to the secondary outcomes as previously detailed.  We could not find any document that provided any scientific rationale for these post hoc changes and the outcomes are therefore not reported in this paper. 

This is not correct.  The secondary outcomes were decided by the authors prior to the blind being broken.  We believe now, as we did then, that the inclusion of these measures in the study and in our analysis was entirely appropriate and was clearly and fully reported in our paper.  While secondary outcome measures may be irrelevant for purposes of governmental approval of a pharmaceutical indication, they were and to this day are frequently and appropriately included in study reports even in those cases when the primary measures do not reach statistical significance.  The authors of “Restoring Study 329” state “there were no discrepancies between any of our analyses and those contained in the CSR [clinical study report]”.  In other words, the disagreement on treatment outcomes rests entirely on the arbitrary dismissal of our secondary outcome measures.

We also have areas of significant disagreement on the “Restoring Study 329” analysis of side effects (which the author’s label “harms”).   Their reanalysis uses the FDA MedDRA approach to side effect data, which was not available when our study was done.  We agree that this instrument is a meaningful advance over the approach we used at the time, which was based on the FDA’s then current COSTART approach. That one can do better reanalyzing adverse event data using refinements in approach that have accrued in the 15 years since a study’s publication is unsurprising and not a valid critique of our study as performed and presented.

A second area of disagreement (concerning the side effect data) is with their statement, “We have not undertaken statistical tests for harms.” The authors of “Restoring Study 329” with this decision are saying that we need very high and rigorous statistical standards for declaring a treatment to be beneficial but for declaring a treatment to be harmful then statistics can’t help us and whatever an individual reader thinks based on raw tabulation that looks like a harm is a harm.  Statistics of course does offer several approaches to the question of when is there a meaningful difference in the side effect rates between different groups.  There are pros and cons to the use of P values, but alternatives like confidence intervals are available.

 “Restoring Study 329” asserts that this paper was ghostwritten, citing an early publication by one of the coauthors of that article. There was absolutely nothing about the process involved in the drafting, revision, or completion of our paper that constitutes “ghostwriting”. This study was initiated by academic investigators, undertaken as an academic / industry partnership, and the resulting report was authored mainly by the academic investigators with industry collaboration.

Finally the “Restoring Study 329” authors discuss an initiative to correct publications called “restoring invisible and abandoned trials (RIAT)” (BMJ, 2013; 346-f4223).  “Restoring Study 329” states “We reanalyzed the data from Study 329 according to the RIAT recommendations” but gives no reference for a specific methodology for RIAT reanalysis.  The RIAT approach may have general “recommendations” but we find no evidence that there is a consensus on precisely how such a RIAT analysis makes the myriad decisions inherent in any reanalysis nor do we think there is any consensus in the field that would allow the authors of this reanalysis or any other potential reanalysis to definitively say they got it right.

In summary, to describe our trial as “misreported” is pejorative and wrong, both from consideration of best research practices at the time, and in terms of a retrospective from the standpoint of current best practices.

Martin B. Keller, M.D.
Boris Birmaher, M.D.
Gregory N. Clarke, Ph.D.
Graham J. Emslie, M.D.
Harold Koplewicz, M.D.
Stan Kutcher, M.D.
Neal Ryan, M.D.
William H. Sack, M.D.
Michael Strober, Ph.D.

Boxed harms

Response

David Healy

In the case of a study designed to advance the treatment of depression in adolescents, it seems strange to have picked imipramine 200-300mg per day as a comparator, unusual to have left the continuation phase unpublished, odd to have neglected to analyse the taper phase, dangerous to have downplayed the data on suicide risks and the profile of psychiatric adverse events more generally, and unfortunate to have failed to update the record in response to attempts to offer a more representative version of the study to those who write guidelines or otherwise shape treatment.

As regards the efficacy elements, the correspondence we had with GSK, which will be available on Study329.org as of  Sept 16 and on the BMJ website, indicates clearly that we made many efforts to establish the basis for introducing secondary endpoints not present in the protocol.  GSK have been unwilling or unable to provide evidence on this issue, even though the protocol states that no changes will be permitted that are not discussed with SmithKline.  We would be more than willing to post any material that Dr Keller and colleagues can provide.

Whatever about such material, it is of note that when submitting Study 329 to FDA in 2002, GSK described the study as a negative study, and FDA concurred that it was negative.  This is of interest in the light of Dr Keller’s hint that it was GSK’s interest in submitting this study to regulators that led to a corruption of the process.

Several issues arise as regards harms.  First, we would love to see the ADECs coding dictionary if any of the original investigators have one.  Does anyone know whether ADECs requires suicidal events to be coded as emotional lability or was there another option?

Second, can the investigators explain why headaches were moved from classification under Body as a Whole in the Clinical Study Report to sit alongside emotional lability under a Nervous System heading in the 2001 paper?

It may be something of a purist view, but significance testing was originally linked to primary endpoints.  Harms are never the primary endpoint of a trial, and no RCT is designed to detect harms adequately.  It is appropriate to hold a company or doctors who may be aiming to make money out of vulnerable people to a high standard when it comes to efficacy, but for those interested in advancing the treatment of patients with any medical condition, it is not appropriate to deny the likely existence of harms on the basis of a failure to reach a significance threshold that the very process of conducting an RCT means cannot be met, as investigators’ attention is systematically diverted elsewhere.

As regards RIAT methods, a key method is to stick to the protocol. A second safeguard is to audit every step taken and to this end we have attached a 61 page audit record (Appendix 1) to this paper.  An even more important method is to make the data fully available, which it will be on Study329.org.

As regards ghostwriting, I personally am happy to stick to the designation of this study as ghostwritten.  For those unversed in these issues, journal editors, medical writing companies and academic authors cling to a figleaf that if the medical writer’s name is mentioned somewhere, s/he is not a ghost.  But for many, the presence on the authorship line of names that have never had access to the data, and who cannot stand over the claims made other than by assertion, is what’s ghostly.

Having made all these points, there is a point of agreement to note.  Dr Keller and colleagues state that:

“nor do we think there is any consensus in the field that would allow the authors of this reanalysis or any other potential reanalysis to definitively say they got it right”.

We agree.  For us, this is the main point behind the article.  This is why we need access to the data.  It is only with collaborative efforts based on full access to the data that we can manage to get to a best possible interpretation, but even this will be provisional rather than definitive.  Is there anything that would hold the authors of the second interpretation of these data (Keller and colleagues) back from joining with us, the authors of the third interpretation, in asking that the data of all trials for all treatments, across all indications, be made fully available?  Such a call would be consistent with the empirical method that was as applicable in 1991 as it is now.

David Healy
Holding Response on Behalf of RIAT 329

Mickey @ 6:01 PM