protest too much…

Posted on Thursday 4 February 2016

I want to linger on the response of the Annals of Internal Medicine to the COMPare letter pointing out an instance of a published paper that reported outcomes differing from those in the a priori Protocol. Here’s the letter from COMPare:
Annals of Internal Medicine
by Eirion Slade; Henry Drysdale; Ben Goldacre, on behalf of the COMPare Team
December 22, 2015

TO THE EDITOR: Gepner and colleagues’ article reports outcomes that differ from those initially registered. One prespecified primary outcome was reported incorrectly as a secondary outcome. In addition, the article reports 5 “primary outcomes” and 9 secondary outcomes that were not prespecified without flagging them as such. One prespecified secondary outcome also was not reported anywhere in the article.

Annals of Internal Medicine endorses the CONSORT [Consolidated Standards of Reporting Trials] guidelines on best practice in trial reporting. To reduce the risk for selective outcome reporting, CONSORT includes a commitment that all prespecified primary and secondary outcomes should be reported and that, where new outcomes are reported, it should be made clear that these were added at a later date, and when and why this was done should be explained.

The Centre for Evidence-Based Medicine Outcome Monitoring Project [COMPare] aims to review all trials published going forward in a sample of top journals, including Annals. When outcomes have been incorrectly reported, we are writing letters to correct the record and audit the extent of this problem in the hopes of reducing its prevalence. This trial has been published and is being used to inform decision making, and this comment is a brief correction on a matter of fact obtained by comparing 2 pieces of published literature. We are maintaining a Web site [www.COMPare-Trials.org] where we will post the submission and publication dates of this comment alongside a summary of the data on each trial that we have assessed.

I was surprised by the response from the Annals. The tone is generally defensive and dismissive, sometimes verging on contemptuous. After describing their journal’s own editorial process, they turn to COMPare and its methodology:
Annals of Internal Medicine
by the Editors
December 22, 2015

… According to COMPare’s protocol, abstractors are to look first for a protocol that has been published before a trial’s start date. If they find no such publication, they are supposed to review the initial trial registry data. Thus, COMPare’s review excludes most protocols published after the start of a trial and unpublished protocols or their amendments and ignores amendments or updates to the registry after a trial’s start date. The initial trial registry data, which often include outdated, vague, or erroneous entries, serve as COMPare’s “gold standard.”

Our review indicates problems with COMPare’s methods. For 1 trial, the group apparently considered the protocol published well after data collection ended. However, they did not consider the protocol published 2 years before MacPherson and associates’ primary trial was published. That protocol was more specific in describing the timing of the primary outcome [assessment of neck pain at 12 months] than the registry [assessment of neck pain at 3, 6, and 12 months], yet COMPare deemed the authors’ presentation of the 12-month assessment as primary in the published trial to be “incorrect.” Similarly, the group’s assessment of Gepner and colleagues’ trial included an erroneous assumption about one of the prespecified primary outcomes, glycemic control, which the authors had operationalized differently from the abstractors. Furthermore, the protocol for that trial clearly listed the secondary outcomes that the group deemed as being not prespecified.
They’re chiding COMPare for not digging deep enough. I’ve spent a lot of time chasing around trying to find a priori Protocols and amendments, and it’s a daunting and often impossible task. COMPare is making a plea for that information to be included in the articles and the review process. The authors surely have it immediately at hand. The second paragraph of COMPare’s letter couldn’t be clearer and doesn’t deserve the ‘eye for an eye’ response.
On the basis of our long experience reviewing research articles, we have learned that prespecified outcomes or analytic methods can be suboptimal or wrong. Regardless of prespecification, we sometimes require the published article to improve on the prespecified methods or not emphasize an end point that misrepresents the health effect of an intervention. Although prespecification is important in science, it is not an altar at which to worship. Prespecification can be misused to sanctify both inappropriate end points, such as biomarkers, when actual health outcomes are available and methods that are demonstrably inferior.
Nobody’s arguing with the editors about that point. If there’s one place where the COMPare letter is weak, it’s that it doesn’t spell out the obvious: the a priori Protocol, right or wrong, is the only verifiable piece of evidence around. We can’t trust that the blind was maintained in an industry-funded, CRO-run trial. So if the a priori Outcome Measures have been changed, we need to know what they were and why they were changed so we can make our own decisions about the changes. Invoking "long experience" is no trump card. We readers have "long experience" too [and some of it has been very bad experience].
The Centre for Evidence-Based Medicine Outcome Monitoring Project’s assessments seem to be based on the premise that trials are or can be perfectly designed at the outset, the initial trial registry fully represents the critical aspects of trial conduct, all primary and secondary end points are reported in a single trial publication, and any changes that investigators make to a trial protocol or analytic procedures after the trial start date indicate bad science. In reality, many trial protocols or reports are changed for justifiable reasons: institutional review board recommendations, advances in statistical methods, low event or accrual rates, problems with data collection, and changes requested during peer review. The Centre for Evidence-Based Medicine Outcome Monitoring Project’s rigid evaluations and the labeling of any discrepancies as possible evidence of research misconduct may have the undesired effect of undermining the work of responsible investigators, peer reviewers, and journal editors to improve both the conduct and reporting of science…
The COMPare letter is matter-of-fact, pointing to an unacknowledged discrepancy and suggesting how it should have been mentioned in the published article. I don’t read a charge of ‘research misconduct’ in that letter. But I sure don’t read any great desire in the editors to protect us from it. Why so nasty? Why the comment about undermining editors? One is tempted to say, "thou dost protest too much."
Mickey @ 11:25 AM

disillusionment…

Posted on Thursday 4 February 2016

An in-depth analysis of clinical trials reveals widespread underreporting of negative side effects, including suicide attempts and aggressive behavior
Scientific American
By Diana Kwon
February 3, 2016

Antidepressants are some of the most commonly prescribed medications out there. More than one out of 10 Americans over age 12 — roughly 11 percent — take these drugs, according to a 2011 report by the National Center for Health Statistics. And yet, recent reports have revealed that important data about the safety of these drugs — especially their risks for children and adolescents — has been withheld from the medical community and the public.

In the latest and most comprehensive analysis, published last week in BMJ [the British Medical Journal], a group of researchers at the Nordic Cochrane Center in Copenhagen showed that pharmaceutical companies were not presenting the full extent of serious harm in clinical study reports, which are detailed documents sent to regulatory authorities such as the U.S. Food and Drug Administration and the European Medicines Agency [EMA] when applying for approval of a new drug. The researchers examined documents from 70 double-blind, placebo-controlled trials of two common types of antidepressants — selective serotonin reuptake inhibitors [SSRI] and serotonin and norepinephrine reuptake inhibitors [SNRI] — and found that the occurrence of suicidal thoughts and aggressive behavior doubled in children and adolescents who used these medications.

This paper comes on the heels of disturbing charges about conflicts of interest in reports on antidepressant trials. Last September a study published in the Journal of Clinical Epidemiology revealed that a third of meta-analyses of antidepressant studies were written by pharma employees and that these were 22 times less likely than other meta-studies to include negative statements about the drug. That same month another research group reported that after reanalyzing the data from Study 329, a 2001 clinical trial of Paxil funded by GlaxoSmithKline, they uncovered exaggerated efficacy and undisclosed harm to adolescents.

Because of the selective reporting of negative outcomes in journal articles, the researchers in the most recent BMJ study turned to clinical trial reports, which include more detailed information about the trials. They discovered that some of the most useful information was in individual patient listings buried in the appendices. For example, they uncovered suicide attempts that were passed off as “emotional lability” or “worsening depression” in the report itself. This information, however, was only available for 32 out of the 70 trials. “We found that a lot of the appendices were often only available upon request to the authorities, and the authorities had never requested them,” says Tarang Sharma, a PhD student at Cochrane and lead author of the study. “I’m actually kind of scared about how bad the actual situation would be if we had the complete data”…
My post-retirement involvement in the business of psychiatric medications came as a surprise to my colleagues [and to me]. I practiced and taught another brand of psychiatry, and so they often ask "What got you into this?" I know some of the answer and have talked about it probably more than necessary, but I haven’t mentioned the most important thing – disillusionment. One can make the case that human psychological development is a story of illusion/disillusionment cycles from beginning to end. The devoted mother of early life is replaced by the same mother encouraging self-sufficiency. A solid principle of effective parenting is allowing illusion, then shepherding a disillusionment at a rate the child can tolerate and even appreciate. And good doctoring is sometimes helping a person find a decent life as a chronically ill person when the illness is one that’s come to stay — in spite of the accompanying disillusionment. But while it’s interesting to reflect on the topic from an armchair, actually living with it isn’t so easy.

I was a latecomer to what this Scientific American article is about, in my later sixties, retired from a psychotherapy practice that had been more out of the mainstream than I knew at the time. In retirement, I had started volunteering in some general clinics and was struck with a couple of things. First, the patients were almost universally taking a lot of medications in odd combinations unfamiliar to me. But even more striking, they came with expectations from the medications that were well beyond any possibilities I knew. To borrow a book title, I felt like a stranger in a strange land. About that time, I read in the New York Times that the chairman of the department I had been affiliated with for over thirty years was under investigation for unreported income from pharmaceutical companies [Top Psychiatrist Didn’t Report Drug Makers’ Pay]. And somewhere in there, I had prescribed an SSRI to a 17-year-old young man who became confused, agitated, and suicidal within days – all of it thankfully clearing as fast as it came when the medication was discontinued. At the time, I didn’t know that could happen.

I’m surprised at how much the disillusionment I felt affected me. I had experienced my share of such things before, but this one was different. Reading back over the blogs I’ve written since then, I’ve bounced from place to place in how I understood [or didn’t understand] it all. I was lucky. I had a strong hard science background from a former career and could look into the science involved. And I’ve met a number of like-minded people along the way who brought a wealth of experience and wisdom my way – helping me answer questions I didn’t even know were there to be asked. But there were two concrete experiences that helped me with my own uncomfortable disillusionment. The first was going to the Allen Jones TMAP Trial in Austin in January 2012 where I watched any number of regular people caught up in some little piece of the drama without allowing themselves to see the whole picture. The second was being involved in the research for one of those articles up there and seeing the details – another example of people neither stepping back far enough to see the big picture nor getting close enough to see what they were involved in. In both instances the main problems were at the top, and had to do with unnecessary secrecy.

Medical advances have often been accompanied by high hopes and enthusiasm [illusion] followed by the more accurate reality that comes with clinical experience [dis·illusionment]. This sequence has been eroded at both ends. The Clinical Trials that are meant to be a simplistic starting place have been jury-rigged and given an undeserved enduring authority. Meanwhile, academic medical departments and journals have not only become engaged in the hype, but have also failed in their traditional role as watchdogs and skeptics. In the process, the appropriate disillusionment that comes with clinical experience with medications is being replaced with a disillusionment with medicine itself – an unacceptable trade-off.

I’m less disillusioned [and less naive] than I once was. I guess I had assumed that the ethics of medicine would protect us from all of this and I was bitter that it didn’t. I don’t have any global solutions, but I do feel a resolve to stay wide awake and stop counting on the inertia of medical tradition to keep us on the right path. And all I really know is that the forces inside and outside of medicine that have led us here lose their power when they see the light of day in articles like this…
Mickey @ 10:23 AM

explanation?…

Posted on Thursday 4 February 2016

I signed up for STAT on the Boston Globe because Pharmalot moved there. But the new additions to the morning emails these last several days have given me pause. We could use an explanation…


UPDATE: Megan’s response to an email:
Hi Mickey,

They’re simply sponsors for the newsletter — they’re paid advertisements that come from the business end of the publication. I have nothing to do with them, don’t interact with them, and they have no bearing on my content whatsoever. They started showing up because the newsletter has been rolling for a few months now, and in order to pay for the journalism we do, the business team needs to create revenue through advertisements like all other publications.

Thanks,
Megan

Mickey @ 8:29 AM

selective inattention…

Posted on Tuesday 2 February 2016

American psychiatrist Harry Stack Sullivan balked at the term "Unconscious," preferring "Selective Inattention" to explain realities that people simply omit. It’s a particularly apt term for some recent commentaries appearing in our medical literature. In notes from a reluctant parasite…, I mentioned Dr. Jeffrey Drazen’s editorial and the subsequent series by his reporter in the New England Journal last summer suggesting they lift the ban on experts with Conflicts of Interest writing editorials and review articles:
by Jeffrey M. Drazen, M.D.
New England Journal of Medicine. 2015 372:1853-1854.
by Lisa Rosenbaum, M.D.
New England Journal of Medicine. 2015 372:1860-1864.
by Lisa Rosenbaum, M.D.
New England Journal of Medicine. 2015; 372:1959-1963.
by Lisa Rosenbaum, M.D.
New England Journal of Medicine. 2015; 372:2064-2068.

In order to make that argument, they have to ignore the numerous examples of "experts" using review articles to push some product they had personal connections with – one of the more egregious versions being Dr. Nemeroff et al’s review of the vagal nerve stimulator in depression [VNS Therapy in Treatment-Resistant Depression: Clinical Evidence and Putative Neurobiological Mechanisms] that cost him his own editorship at Neuropsychopharmacology. That was an unacknowledged COI, but there are many other examples to choose from where acknowledgement doesn’t mitigate the glaring bias.

Now Dr. Drazen has this other piece suggesting that people who want to reanalyze questioned studies are "data parasites," saprophytes feeding off of the carrion of other researchers’ work. In that formulation, he has to selectively ignore the numerous examples of distorted clinical trial reports that literally beg for a thorough re-examination, and the much more likely motives of a person vetting such an article to expose distortions:
by Dan L. Longo, and Jeffrey M. Drazen
New England Journal of Medicine. 2016 374:276-277.
Then there was the Viewpoint article in JAMA this Fall by [Associate Editor] Anne Cappola and colleague Garret FitzGerald that exercised the same kind of Selective Inattention [Confluence, not conflict of interest: Name change necessary]. They direct a Translational Institute at the University of Pennsylvania and seem worried that the focus on Conflicts of Interest might intrude on the dreams of the Translationists [my term]. They propose that reframing things with a name change [Conflicts of Interest to Confluence of Interest] might make things go better:
by Anne Cappola and Garret FitzGerald
JAMA. 2015 314[17]:1791-1792.

… Confluence of interest represents a complex ecosystem that requires development of a uniform approach to minimize bias in clinical research across the academic sector. Such a policy must be at once simple and accessible, capturing the complexity of the relationships while being sufficiently flexible at the individual level not to intrude on the process of innovation.
In order to suggest this naive grammatical solution, they have to have their Selective Inattention motor running full throttle [the elephant in the room comes to mind]. In Dr. Bernard Carroll’s words:
Health Care Renewal
by Bernard Carroll
January 24, 2016

"… the authors, presuming to speak for investigators generally, were offended by the increasing regulations for managing COI. Those developments have occurred at the Federal, institutional, and publication levels. Worse, the authors ignored the reality of recent corruption that led to those new regulations. That uncomfortable fact was airbrushed out of their discussion."
And the authors fail to notice that some of us think their whole notion of Translational Medicine itself is an elaborate version of the same kind of ruse – that same wolf in sheep’s clothing hiding behind lofty rhetoric [like in this very article]. Which brings me to Susan Molchan’s blog post on HEALTHNEWSREVIEW.
HEALTHNEWSREVIEW
by Susan Molchan
January 25, 2016

It’s difficult to make a case for hiding or obscuring information about health and the medicines we take, but it seems the editors of two top medical journals are doing just that. The decisions of these editors substantially affect the quality of medical research studies reported, what public relations officials communicate about those studies, and what news stories eventually say about the research to patients and the public…
I’m currently trying to escape the fog one gets into spending too much time scrolling through endless columns of figures, so I wanted to write this article about the Selective Inattention of pundits in high places who have to overlook the loud and the obvious to press their own agendas. I had the Drazen articles and Cappola’s and FitzGerald’s JAMA Viewpoint piece in hand. I wanted to add another example, but I couldn’t find the one I was looking for. Then <ping>, my computer announced incoming. It was a comment on my last blog post by Susan Molchan. Not only did it point me to her excellent blog which pre-empted the post I was writing [that you’re reading right now], it also included the very piece I was looking for. But first I need to back up a bit and talk about Ben Goldacre’s COMPare Project.

Ben and a cadre of trainees are taking advantage of some of the data access afforded by the European Medicines Agency [EMA] and gathering the a priori Protocols from a number of Clinical Trials. Then they’re running down the published papers and comparing the Protocol-defined outcome variables with what is reported in the articles – finding all kinds of discrepancies. They call it Outcome Switching. Then they’re taking it a step further by contacting the journals and asking the obvious questions – Did they notice? What might they do about that? It’s a great idea [and right in the middle of why I’m looking at the non-protocol variables introduced into Keller et al’s 2001 paper on Paxil Study 329]. There’s a nice summary of Ben’s Project on Retraction Watch [Did a clinical trial proceed as planned? New project finds out]. The other article I was looking for was a letter from an Annals of Internal Medicine editor in response to COMPare’s query about one of their published articles:
Annals of Internal Medicine
December 22, 2015

… The Centre for Evidence-Based Medicine Outcome Monitoring Project’s assessments seem to be based on the premise that trials are or can be perfectly designed at the outset, the initial trial registry fully represents the critical aspects of trial conduct, all primary and secondary end points are reported in a single trial publication, and any changes that investigators make to a trial protocol or analytic procedures after the trial start date indicate bad science. In reality, many trial protocols or reports are changed for justifiable reasons: institutional review board recommendations, advances in statistical methods, low event or accrual rates, problems with data collection, and changes requested during peer review. The Centre for Evidence-Based Medicine Outcome Monitoring Project’s rigid evaluations and the labeling of any discrepancies as possible evidence of research misconduct may have the undesired effect of undermining the work of responsible investigators, peer reviewers, and journal editors to improve both the conduct and reporting of science.
Selective Inattention? You betcha! I don’t doubt that he’s right that investigators may frequently misjudge in their a priori predictions. But he is selectively inattentive to the very obvious problem that the a priori protocol is the only concrete evidence we have that the authors didn’t go fishing after the fact with a computer to find the variable whose outcome fit their needs. We obviously can’t trust the blinding as it’s controlled by the Sponsor and their contracted CRO. This is a very high-stakes game and the principals aren’t boy scouts. The authors are free to mention that they are reporting non-protocol-defined variables, but that status needs to be crystal clear. And it usually isn’t – thus COMPare. Outcome switching was in the center ring of our Paxil Study 329 analysis, but we didn’t yet have that general name for it.

Harry Stack Sullivan was from the days before psychopharmacology, and he was nobody’s fool. He was objecting to the "un" in Freud’s "unconscious." Things people either don’t think about or don’t want to think about don’t just go away. The mind becomes selectively inattentive, but it shows. They get some kind of fidgety when the unwanted mental content is nearby. They may start doing odd things that gamblers call "tells." If the wires are hooked up to a polygraph, the needles on the graph begin to wiggle. They may change the subject, or ask why you’re asking, or get hostile, or be defensive, or maybe sarcastic, or go silent. There’s a subtle disturbance in the force. Such things don’t tell you what’s being selectively unattended – only that you’re in the neighborhood of something that matters.

I reckon editors are no different. They get their version of fidgety – dismissive, into expert mode, sarcastic, silent, making bizarre and forced arguments – all the things other people do when, for a myriad of reasons, they’re confronting something that pokes holes in the status quo. In this case, they are dancing around in order not to have to see that there has been a massive intrusion of unscientific interests into our science-based world, and to address it we’re going to have to tweak our system in some fairly fundamental ways. And the people who are gaining something from the system as it stands are going to lose some things they don’t want to lose. But that’s just the way of things. As they say, "Don’t do the crime, if you can’t do the time." I recommend reading Susan Molchan’s blog, Bernard Carroll’s blog, and anything Ben Goldacre has to say about COMPare. In differing ways, they’re all calling our attention to the same very important thing – deceit…
Mickey @ 4:24 PM

notes from a reluctant parasite…

Posted on Monday 1 February 2016

It was something of an irony to be immersed in trying to make sense out of someone else’s study [the reason I stopped blogging for a while], and then to read that I was a member of a new class of researchers – "data parasites." Jeffrey Drazen, editor of the New England Journal of Medicine, didn’t win me over with his proposal that the NEJM drop its policy of prohibiting experts with Conflicts of Interest from writing review articles and editorials [see a snail’s pace…]. And, in a way, this new editorial is a continuation of that same theme.
by Dan L. Longo, and Jeffrey M. Drazen
New England Journal of Medicine. 2016 374:276-277.

The aerial view of the concept of data sharing is beautiful. What could be better than having high-quality information carefully reexamined for the possibility that new nuggets of useful data are lying there, previously unseen? The potential for leveraging existing results for even more benefit pays appropriate increased tribute to the patients who put themselves at risk to generate the data. The moral imperative to honor their collective sacrifice is the trump card that takes this trick.
Data Sharing, as Drazen presents it here, is a sensible and time-honored idea – using a dataset collected for one reason to uncover previously unseen insights that may be far from the original intent of the study. Darwin’s Finches come to mind. Darwin returned from the Galápagos Islands with a sack full of birds. But it was only when the birds were reexamined by ornithologist John Gould that their variability was noted, putting Darwin on the track that led him to his concept of Natural Selection.

But in this first paragraph, Drazen also sets the stage for another agenda – one heavily promoted by the pharmaceutical industry. When the clamor about distorted clinical trial reports reached a pitch that could no longer be ignored, they reframed the real intent of the move for Data Transparency. Instead of that being a reform move to allow independent reanalysis to keep them honest [because they hadn’t been], they spoke of it as Data Sharing for the reasons Drazen presents in his opening gambit – a generous act in the service of scientific progress.

And in his second paragraph, he’s going to venerate the academic investigators’ role in these Clinical Trials. Perhaps his description is accurate in some instances, but it certainly doesn’t fit the industry-funded and administered studies I’ve looked at. The studies are run and analyzed by the industrial Sponsors and Contract Research Organizations [CROs], written by medical writing firms, and the academic authors are more consultants than "researchers" [and tickets into the pages of prestigious journals]. While my cynical version may not be universally justified, it’s way common enough to be a glaring omission from Drazen’s narrative.
However, many of us who have actually conducted clinical research, managed clinical studies and data collection and analysis, and curated data sets have concerns about the details. The first concern is that someone not involved in the generation and collection of the data may not understand the choices made in defining the parameters. Special problems arise if data are to be combined from independent studies and considered comparable. How heterogeneous were the study populations? Were the eligibility criteria the same? Can it be assumed that the differences in study populations, data collection and analysis, and treatments, both protocol-specified and unspecified, can be ignored?
The cat’s now out of the bag. It’s people like me and uncounted others that he’s after – people whose motive is to look for misconduct disguised as science – or perhaps people like the volunteers with the Cochrane Collaboration who do extensive structured reviews and meta-analyses aiming for a more accurate assessment of the data. So now Dr. Drazen turns to something of a global ad hominem argument, an indictment of the motives of such people. It’s in the form of the saying, "People who can’t do, teach":
A second concern held by some is that a new class of research person will emerge — people who had nothing to do with the design and execution of the study but use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited. There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “research parasites”…
And so for Dr. Drazen, people who want to reanalyze the data from published studies are creepy hangers-on, contrarians. I’m obviously not in love with that formulation. He leaves out the possibility of another, more likely motivation – that we’re checkers, people who believe that a lot of the scientific drug trial literature is written [and often distorted] for commercial gain rather than medical understanding. We’ve been brought to that conclusion honestly, and Dr. Drazen’s summarily dismissing that possibility by not even mentioning it is a telling indictment of him and his own motives.

After giving an example of successful Data Sharing, he concludes:
How would data sharing work best? We think it should happen symbiotically, not parasitically. Start with a novel idea, one that is not an obvious extension of the reported work. Second, identify potential collaborators whose collected data may be useful in assessing the hypothesis and propose a collaboration. Third, work together to test the new hypothesis. Fourth, report the new findings with relevant coauthorship to acknowledge both the group that proposed the new idea and the investigative group that accrued the data that allowed it to be tested. What is learned may be beautiful even when seen from close up.
Our group was one of the first to apply for data access under the new venues provided by the drug manufacturers [Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence]. GSK insisted on a proposal, something in the range of Data Sharing. While it was tempting to make up something, the truth was that we wanted the data from Paxil Study 329 because we didn’t believe the published analysis [Efficacy of Paroxetine in the Treatment of Adolescent Major Depression: A Randomized, Controlled Trial]. So, instead of making up a reason, we simply submitted the original Protocol. And to GSK’s credit, they gave us access [after a long negotiation]. We already had the Clinical Study Reports [CSRs] and the Individual Participant Data [IPDs], as they had been forced to publish them by a court settlement. But we couldn’t really do an adequate evaluation of harms without the Case Report Forms [CRFs]. We weren’t looking for something new, and our dealings were all with the pharmaceutical companies, not the twenty-two authors who never responded to us.

I don’t personally see running industry-funded Phase III Clinical Trials as Research; I think of it as Product Testing. There’s an enormous financial coloring to the whole enterprise, billions of dollars riding on the outcome of some of these Clinical Trials that say yes or no to the investment put into any given drug. But the trials are primarily about the safety and efficacy of the drugs themselves, not about the financial health and fortunes of the company that developed them, nor the academic departments and faculty that involve themselves in this process. There’s an epithet coined to describe people who are skeptical about clinical trials – pharmascolds – implying that they are biased against all drugs. Such people exist for sure, but I’m not one of them, nor are most of us who look into the data from Sponsored drug trials. We’re physicians and science-minded others who don’t like being gamed by our own scientific literature, depriving us of a vital source of information about how to treat our patients.

Frankly, I’m a reluctant parasite. I’ve had to revive skills from a previous career here in my retirement. I had some other plans that were pushed to the side in order to do that. But I think it’s vitally important for the medical·scientific community to have watchdogs, particularly in today’s climate. Certainly the scientific literature in psychiatry for the last twenty plus years begs for serious oversight. Our group’s work was unfunded and difficult [in part because of the way we had to access the data]. Our paper was extensively reviewed and only accepted after the seventh submission, though in a way, the thorough and comprehensive nature of the peer review was confirming [if only that original paper had been subjected to that kind of rigor…].

Dr. Drazen’s editorial makes the assumption that the "front-line researchers" are a gold standard, and their work is being inappropriately attacked. I could easily mount an argument that there are many among that group who are the real data parasites, opportunizing on their academic positions to sign on to jury-rigged, ghost-written articles that they often had little to do with producing. And I question Dr. Drazen’s motives in ignoring the corruption and misbehavior that has made up a sizeable portion of the commercially sponsored clinical trial reporting currently flooding the landscape of our academic literature. An often rendered old saying from my childhood seems appropriate, "I don’t mind your peeing in my boot, but don’t tell me it’s water"…
Mickey @ 1:03 PM

in the old days…

Posted on Friday 29 January 2016

I took a several-week unpaid leave of absence from blogging and email commerce to do a number-intense project. I found that I just couldn’t do it if I thought about anything else. I finished the part that had to be done yesterday, and look forward to returning to my normal mental life. Looking over all the accumulated emails, I ran across one from a medical school friend who sent me a blurb from his literature [he’s an Emergency Medicine physician]. It was in Emergency Medicine Today:
hat tip to Ferrell…  
Antidepressants Appear To Be Much More Dangerous For Kids, Teens Than Reported In Medical Journals, Review Finds.

HealthDay reports, “Antidepressants appear to be much more dangerous for children and teens than reported in medical journals, because initial published results from clinical trials did not accurately note instances of suicide and aggression,” a review published Jan. 27 in the BMJ suggests. Researchers arrived at that conclusion after analyzing data from “68 clinical study reports from 70 drug trials that involved more than 18,500 patients.” The clinical studies “involved five specific antidepressants: duloxetine [Cymbalta], fluoxetine [Prozac], paroxetine [Paxil], sertraline [Zoloft] and venlafaxine [Effexor].”

According to Medical Daily, “the limitations in both design and reporting of clinical trials may have led to ‘serious under-estimation of the harms.’” The study authors concluded, “The true risk for serious harms is still uncertain.” The Telegraph [UK] also covers the story.
At first I thought it was an acknowledgement of our recent article [Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence], but it was better than that. It was an article from the Nordic Cochrane Center about the potential adverse effects in adolescents from SSRIs – another log on the fire:
by Tarang Sharma, Louise Schow Guski, Nanna Freund, and Peter C. Gøtzsche
British Medical Journal. 2016 352:i65

Objective: To study serious harms associated with selective serotonin and serotonin-norepinephrine reuptake inhibitors.
Design: Systematic review and meta-analysis.
Main outcome measures: Mortality and suicidality. Secondary outcomes were aggressive behaviour and akathisia.
Data sources: Clinical study reports for duloxetine, fluoxetine, paroxetine, sertraline, and venlafaxine obtained from the European and UK drug regulators, and summary trial reports for duloxetine and fluoxetine from Eli Lilly’s website.
Eligibility criteria for study selection: Double blind placebo controlled trials that contained any patient narratives or individual patient listings of harms.
Data extraction and analysis: Two researchers extracted data independently; the outcomes were meta-analysed by Peto’s exact method [fixed effect model].
Results: We included 70 trials [64 381 pages of clinical study reports] with 18 526 patients. These trials had limitations in the study design and discrepancies in reporting, which may have led to serious under-reporting of harms. For example, some outcomes appeared only in individual patient listings in appendices, which we had for only 32 trials, and we did not have case report forms for any of the trials. Differences in mortality [all deaths were in adults, odds ratio 1.28, 95% confidence interval 0.40 to 4.06], suicidality [1.21, 0.84 to 1.74], and akathisia [2.04, 0.93 to 4.48] were not significant, whereas patients taking antidepressants displayed more aggressive behaviour [1.93, 1.26 to 2.95]. For adults, the odds ratios were 0.81 [0.51 to 1.28] for suicidality, 1.09 [0.55 to 2.14] for aggression, and 2.00 [0.79 to 5.04] for akathisia. The corresponding values for children and adolescents were 2.39 [1.31 to 4.33], 2.79 [1.62 to 4.81], and 2.15 [0.48 to 9.65]. In the summary trial reports on Eli Lilly’s website, almost all deaths were noted, but all suicidal ideation events were missing, and the information on the remaining outcomes was incomplete.
Conclusions: Because of the shortcomings identified and having only partial access to appendices with no access to case report forms, the harms could not be estimated accurately. In adults there was no significant increase in all four outcomes, but in children and adolescents the risk of suicidality and aggression doubled. To elucidate the harms reliably, access to anonymised individual patient data is needed.
Since the article is online, I’ll skip the details and say why I liked it, over and above it being another loud message about the potential harms of SSRIs in adolescents as well as the pressing need for Data Transparency in Clinical Trials:

  • The reference came from a primary care newsletter. Non-psychiatrist physicians prescribe the majority of the SSRIs, and that’s where this information about harms belongs. Hopefully the news is finally spreading.
  • The heavy lifting in this article was done by students working with Dr. Gøtzsche at the Nordic Cochrane Center. We desperately need for this kind of critical evaluation of Clinical Trials to be coming from the world of young researchers and physicians rather than just from us old guys.
  • They did this meta-analysis using a large number of Clinical Study Reports [CSRs] they got from the European Medicines Agency [EMA]. Again, great news that they could get them. And there was more: they got to see the wide variability in what was in those reports and how inconsistently they included raw data – clarifying that in lobbying for Data Transparency, we need to specify that the reports include actual data.
  • They emphasize the point that the CSRs are not enough to evaluate harms. We absolutely need to have access to the Case Report Forms [CRFs] where the data was originally transcribed. We couldn’t have done our Paxil Study 329 article without them.
  • Their findings mirrored ours from Paxil Study 329, but they had information from many more studies than just our one source. It is a more general commentary.
  • It’s always great to hear from an old friend from the time when the world was young [among many other things, ironically, we were lab partners in pharmacology lab a little over 50 years ago]…
But happy talk aside, we shouldn’t even have to be fighting for honesty and transparency in the scientific literature. That ought to be a given. As a young guy, I noticed that the elders always talked about the good old days when things were better and I resolved not to do that when I got old. In the main, I have been able to hold to that resolution. But with this issue of the Clinical Trial articles in the medical literature, I can’t stick to my guns. I can’t remember ever having to keep one eye always cocked, looking for signs that I’m being taken for a ride by a distorted, commercially biased production. It really was better in the old days…
Mickey @ 5:56 PM

peeking out…

Posted on Monday 18 January 2016

I got an email asking if I was sick [because I’ve been quiet for a week]. No, I am just knee-deep in a project that involves scrolling through endless monotonous spreadsheets, and I just can’t look at a computer after a few [or more] hours of that. Probably another week more, I would guess. In the process, I’ve discovered a new disease – spreadsheet oculopathy. Symptoms include diplopia, nystagmus, a dandy headache, and irritability. It’s easy as pie to treat.

I did want to jump in and thank PsychPractice for mentioning my jaunt into statistics [in the land of sometimes… 1, 2, 3, 4, and 5 & john henry’s hammer… 1, 2, 3, 4, and 5] and for giving it a test drive [DIY Study Evaluation]. I’m not a statistician, and I’ll be glad to hear when I get things wrong. But I’ve decided that a lot of the reason people are not reading these clinical trials critically is that all the modern talk of statistical modelling, linear regression, etc. puts people off. Either they don’t understand the analytic methodology or, worse, it’s presented in a deliberately obfuscating way to keep the reader from looking behind all the fancy talk. What I’m proposing is that the average medical reader can easily learn how to use a few simple tools to quickly decide if one is being served a plate of science or a dish of something else. At least in my specialty, psychiatry, the industry-generated clinical trial reports have been heavily weighted on the south side of something else. There are more statistical things to say before I’m done.

So, just peeking out to say hello. Back soon…
Mickey @ 8:31 PM

cannot imagine…

Posted on Tuesday 12 January 2016

Don’t be alarmed. This isn’t one of those long boring posts with formulas and numbers. The graphic below is just window dressing to make one simple point:

I’ve been writing a mini-tutorial for evaluating Clinical Trials, and this spreadsheet will generate the needed information in the absence of Data Transparency. All we need for the Continuous Variables is the MEAN, the Standard Deviation or the Standard Error of the Mean, and the Number of Subjects for the control [placebo] and the drug groups. And for the Categorical Variables, we need even less – just the tally of yeses and nos for both. Those are not esoteric numbers – a basic minimum of information.

In defending their claim that the raw data from these Clinical Trials is proprietary and can be kept from scrutiny, the pharmaceutical industry makes two arguments: protecting their subjects’ privacy and withholding commercially confidential information [CCI in slang]. But neither argument pertains to the simple information to plug into that spreadsheet. And yet one is surprised at how often that data isn’t available in an article. The graphic is meant to demonstrate how minimal the request is – three numbers for each group for the Continuous Variables and only two numbers for each group for the Categorical Variables.
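To make that point concretely, here is a minimal sketch of the continuous-variable arithmetic – written in Python rather than as a spreadsheet, with made-up example numbers, and using the usual large-sample approximation for the confidence interval of Cohen’s d. It isn’t the spreadsheet above, just an illustration of how far those three numbers per group can take you:

    import math

    def sd_from_sem(sem, n):
        # if a paper reports the Standard Error of the Mean instead of the SD
        return sem * math.sqrt(n)

    def cohens_d(mean_pbo, sd_pbo, n_pbo, mean_drug, sd_drug, n_drug):
        # pooled standard deviation of the two groups
        pooled_sd = math.sqrt(((n_pbo - 1) * sd_pbo ** 2 + (n_drug - 1) * sd_drug ** 2)
                              / (n_pbo + n_drug - 2))
        d = (mean_drug - mean_pbo) / pooled_sd
        # usual large-sample approximation for the variance of d
        var_d = (n_pbo + n_drug) / (n_pbo * n_drug) + d ** 2 / (2 * (n_pbo + n_drug))
        half_width = 1.96 * math.sqrt(var_d)
        return d, d - half_width, d + half_width

    # made-up numbers, purely to show the call
    print(cohens_d(mean_pbo=-10.2, sd_pbo=8.1, n_pbo=180,
                   mean_drug=-12.5, sd_drug=8.4, n_drug=175))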

I cannot imagine a reason for not supplying these basic values other than to hide some weakness in their study’s results. And I can’t imagine any reason for a competent editor or peer reviewer not to insist on their inclusion…
Mickey @ 8:00 AM

45-40

Posted on Tuesday 12 January 2016

Mickey @ 12:33 AM

john henry’s hammer: categorical variables…

Posted on Monday 11 January 2016

The Categorical Variables just get one post. It’s not because they aren’t important. It’s because I’ve already said what I’m up to in this series. And because they’re easy…

METHOD 1: the spreadsheet

    I thought it would be easiest to start with the spreadsheet itself this time because the column headings make it easy to discuss the output. This spreadsheet has been added below the one discussed in john henry’s hammer: continuous variables I…, downloadable here. It’s shown above with the MADRS efficacy data from the two recent Brexpiprazole Augmentation-in-Treatment-Resistant-Depression studies [see a story: the beginning of the end…].

    Categorical Variables show up in almost every Clinical Drug Trial. They are derived variables that hold binary [yes/no] data based on some specified criteria. They’re often named, though the criteria for the names vary from study to study. So RESPONSE might be «final HAM-D score < 50% of the baseline» or «final HAM-D score < 50% of the baseline OR final HAM-D score <. In this case, the data is usually reported in the articles – the tally of the yeses and nos. Unlike the Continuous Variables, defining the dataset with Summary Data requires no mathematical manipulations [MEAN, Standard Deviation, Standard Error of the Mean] – all you have to do is count.
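    To show how little counting is involved, here is a minimal sketch [the patient scores are invented, not from any trial] of deriving a RESPONSE tally for one group using the «final score < 50% of baseline» criterion:

        # hypothetical [baseline, final] HAM-D pairs for the patients in one group
        patients = [[24, 10], [22, 18], [26, 12], [21, 20], [25, 11]]

        # RESPONSE: final score less than 50% of the baseline score
        yes = sum(1 for baseline, final in patients if final < 0.5 * baseline)
        no = len(patients) - yes
        print(yes, no)   # the only two numbers the spreadsheet needs for this group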

    The classic statistical test for significance is ChiSquare, discussed in in the land of sometimes [2]. The spreadsheet supplies two measurements of EFFECT SIZE. The first, Number Needed to Treat [NNT], is the more intuitive. It’s exactly what its name says: how many patients you have to treat to get one more responder than you would’ve gotten with placebo. The second EFFECT SIZE index is the ODDS RATIO [OR]. It is a quantitative measure frequently reported in meta-analyses where multiple studies are compared. One thing to note: unlike Cohen’s d, the Odds Ratio is not centered between its 95% Confidence Limits, so it is often charted on a logarithmic scale [which "centers" the OR]. For the mathematically inclined, the formulas for these columns are listed below [with the same arithmetic sketched in code after the list]:

        if a=control[yes], b=control[no], c=drug[yes], & d=drug[no], then:
      Control Response% = a÷(a+b)
      Drug Response% = c÷(c+d)
      ChiSquare = ((a×d-b×c)²×(a+b+c+d))÷((a+b)×(c+d)×(a+c)×(b+d))  [1df]
      NNT = 1÷(c÷(c+d)-a÷(a+b))
      OR = (c÷d)÷(a÷b)
      OR[95%CI] = Exp(Ln(OR)±1.96×√((1÷a)+(1÷b)+(1÷c)+(1÷d)))
          Ln is the natural log value and Exp means raise to the power of e [take the antilog]
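    For anyone who would rather check the arithmetic in code than in a spreadsheet, the same formulas can be written as a short Python sketch [the tallies in the example call are invented, purely to show the call]:

        import math

        def categorical_stats(a, b, c, d):
            # a=control[yes], b=control[no], c=drug[yes], d=drug[no]
            control_pct = a / (a + b)
            drug_pct = c / (c + d)
            chi_square = (((a * d - b * c) ** 2 * (a + b + c + d))
                          / ((a + b) * (c + d) * (a + c) * (b + d)))  # 1 df
            nnt = 1 / (drug_pct - control_pct)
            odds_ratio = (c / d) / (a / b)
            half_width = 1.96 * math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
            or_ci = (math.exp(math.log(odds_ratio) - half_width),
                     math.exp(math.log(odds_ratio) + half_width))
            return control_pct, drug_pct, chi_square, nnt, odds_ratio, or_ci

        # invented tallies: 15% responders on placebo, 24% on drug
        print(categorical_stats(a=30, b=170, c=48, d=152))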
METHOD 2: The Internet Calculators
    In this case, the Internet Calculators are all over the place. For the Chi Square calculations, I use the one at VassarStats because it gives the classic Pearson’s Chi Square, but it also calculates the Chi Square with the Yates correction. In addition, it computes the Fisher Exact Probability Test. These refinements are used in articles occasionally, and the VassarStats page gives those results and links to an explanation of each [see Chapter 8 in their Web Textbook] so I don’t have to try and explain. And as for the Odds Ratio and its 95% Confidence Intervals, why not stick with a winner – VassarStats? This version of their Internet Calculator does both the Chi Square and the Odds Ratio with its 95% Confidence Intervals. One-stop shopping! As for the NNT, there are Internet Calculators around, but frankly it’s almost easier to do it in your head or using the calculator in your computer. Subtract the %yes of the control from the %yes of the drug and divide the result into 100. That’s all there is to it. So 23.5%-14.7%=8.8%, and 100÷8.8≈11.4.
Mickey @ 2:00 PM