the making of « wolf alert! wolf alert!… » part 3

Posted on Monday 11 April 2016

In a psychotherapy career, you learn some odd things – things you didn’t even know were there to learn. One important one is that we all do wrong and hurtful things that cause trouble in both our own lives and the lives of people around us. And if you look hard enough in the mirror, you see your own blemishes, and it’s important to address them too. As a therapist, often the most difficult task is helping people look at their own shortcomings constructively and come to grips with the negative things they’ve done themselves. It’s not a question of getting "off the hook," it’s about cleaning up your own act going forward. In the Twelve Step programs, it occupies a number of the steps. But it’s a piece of any recovery enterprise – and you don’t get there by sweeping things under the rug.

My complaint about the Institute of Medicine Reports and the International Committee of Medical Journal Editors article is that they never say why Data Transparency is such a hot-potato issue. Why are we even talking about this? Yet we all know the answer. The pharmaceutical industry, the third-party payers, and a way-too-big segment of the medical profession have behaved very badly, and the clinical trials of medications sit in the middle of that bad behavior. There has been corruption small and large, and a solid piece of reform is vitally needed. That’s why this is on the front burner – the sins of our fathers [and our fathers’ children AKA ourselves].

I think of Jeffrey Drazen, editor of the world’s first-line medical periodical, the New England Journal of Medicine, as a wolf in sheep’s clothing. In spite of having his name on the three documents laying out a plan for Data Transparency, he uses his influential editorial page as a bully pulpit for only one side of the conflict. The month after the first IOM report in 2014 he wrote:
by Jeffrey M. Drazen, M.D.
New England Journal of Medicine. 2014 370:662

… I am especially eager to receive feedback from the biomedical community about one issue in particular of the many considered in the report. At the completion of a research study or clinical trial, a first report is often published. Usually, this report contains the key findings of the study but only a small fraction of the data that were gathered to answer the scientific or clinical question at hand. To what extent and for how long should the investigators who performed the research have exclusive access to the data that directly support the published material? And should the full study data set be subject to the same timetable? Open-data advocates argue that all the study data should be available to anyone at the time the first report is published or even earlier. Others argue that to maintain an incentive for researchers to pursue clinical investigations and to give those who gathered the data a chance to prepare and publish further reports, there should be a period of some specified length during which the data gatherers would have exclusive access to the information…

The month after the second IOM report in 2015 he wrote:
by Jeffrey M. Drazen, M.D.
New England Journal of Medicine. 2015 372:201-202.

… With whom will data be shared? When they register a trial, investigators will need to indicate whether their data can be shared with any interested party without a formal agreement regarding the use of the data, only with interested parties willing to enter into a data-sharing agreement, or only with interested parties who bring a specific analysis proposal to a third party for approval. It is possible that investigators could choose to share their data with different groups at various times. For example, data might be shared for the first year of availability only with parties who specify their analysis plan but be shared more widely thereafter…
And who will ever forget this one from May of 2015?
by Jeffrey M. Drazen, M.D.
New England Journal of Medicine. 2015 372:1853-1854.
May 7, 2015

Over the past two decades, largely because of a few widely publicized episodes of unacceptable behavior by the pharmaceutical and biotechnology industry, many medical journal editors [including me] have made it harder and harder for people who have received industry payments or items of financial value to write editorials or review articles. The concern has been that such people have been bought by the drug companies. Having received industry money, the argument goes, even an acknowledged world expert can no longer provide untainted advice. But is this divide between academic researchers and industry in our best interest? I think not — and I am not alone. The National Center for Advancing Translational Sciences of the National Institutes of Health, the President’s Council of Advisors on Science and Technology, the World Economic Forum, the Gates Foundation, the Wellcome Trust, and the Food and Drug Administration are but a few of the institutions encouraging greater interaction between academics and industry, to provide tangible value for patients. Simply put, in no area of medicine are our diagnostics and therapeutics so good that we can call a halt to improvement, and true improvement can come only through collaboration. How can the divide be bridged? And why do medical journal editors remain concerned about authors with pharma and biotech associations? The reasons are complex. This week we begin a series of three articles by Lisa Rosenbaum examining the current state of affairs…
And in the month the ICMJE report was published in 2016, he wrote:
by Dan L. Longo, M.D., and Jeffrey M. Drazen, M.D.
New England Journal of Medicine. 2016; 374:276-277.

… However, many of us who have actually conducted clinical research, managed clinical studies and data collection and analysis, and curated data sets have concerns about the details. The first concern is that someone not involved in the generation and collection of the data may not understand the choices made in defining the parameters. Special problems arise if data are to be combined from independent studies and considered comparable. How heterogeneous were the study populations? Were the eligibility criteria the same? Can it be assumed that the differences in study populations, data collection and analysis, and treatments, both protocol-specified and unspecified, can be ignored? A second concern held by some is that a new class of research person will emerge — people who had nothing to do with the design and execution of the study but use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited. There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “research parasites”…
There’s more, but that’s enough for my point. He never talks about why we are clamoring for Data Transparency [widespread corruption in Clinical Trial reporting]. He never gives examples of the dangers of these distorted reports that have already happened. Yet he often brings up hypothetical problems that might happen with Data Transparency. And he uses the industry spin term, Data Sharing, as if the whole point is industry’s magnanimous sharing for new discovery rather than the truth – that we want to check their work for the kind of distortions we’ve lived with for several decades.

So Drazen is a prominent and influential member of the Committees setting the policy for Data Transparency going forward, yet he hardly represents a neutral position. And it’s not as if the industry side isn’t represented. For example, the IOM Activity Sponsors:

Activity Sponsors

  • Bayer
  • Sanofi
  • Takeda

It would be foolish not to be concerned that by the time Data Transparency becomes a reality, it will be severely eroded and become as much a failed reform as previous attempts. For example, the ClinicalTrials.gov Results Database has been around for years, but industry and academia just ignored it even when it was required. We need a solid policy with teeth in it, and Jeffrey Drazen is not the guy for that kind of work. I started with comments about the need to look in the mirror as part of any recovery. He’s advocating just the opposite, ignoring the very problems that we’re trying to solve.

 

The history of Clinical Trial reform since its inception in 1962 is one of ending up leaking like a sieve, and this is exactly how it happens. Not with a bang, but a whimper…
Mickey @ 6:23 PM

the making of « wolf alert! wolf alert!… » part 2

Posted on Monday 11 April 2016

I hope I got across in the making of « wolf alert! wolf alert!… » part 1 that I think ClinicalTrials.gov is a good idea, and that putting it in the center of a system aiming towards appropriate Data Transparency is a good idea too, but that it needs some heavy tweaking before it will actually work. First, obviously, it needs to be used in a timely manner and that usage enforced. That part seems easy. No registration, no consideration for publication or regulatory approval. No Results posting, no consideration for publication or regulatory approval. No exceptions. Next, the a priori Protocol and Statistical Analysis Plan must be declared [in ClinicalTrials.gov] before the study starts and those results shown in the Results Database. If some investigator or sponsor wants to switch or add outcomes or methodologies in the published version, that’s their prerogative, but only if they also show us the results of their original plans. They can make their case in the paper, not behind closed doors. No exceptions. At least that’s what I think. There are many opinions about whether the investigators should be held to their original choice of outcome variables and statistical methods. But there’s no controversy in my mind about secrecy. We need to hear what they thought a priori, no matter what they decide after the study is underway or post hoc. No exceptions.

And now for a slight change in direction, what about the actual, rather than the summary data? Here’s what others think: From the National Academy’s Institute of Medicine COMMITTEE ON STRATEGIES FOR RESPONSIBLE SHARING OF CLINICAL TRIAL DATA come two very long policy documents…
and the International Committee of Medical Journal Editors adds a more forgiving single-pager…
The first two are tomes. I scanned them, and I doubt they are much read; however, the one from ICMJE is short and a definite must-read document. Here’s the central message:
by Darren B. Taichman, Joyce Backus, Christopher Baethge, Howard Bauchner, Peter W. de Leeuw, Jeffrey M. Drazen, John Fletcher, Frank A. Frizelle, Trish Groves, Abraham Haileamlak, Astrid James, Christine Laine, Larry Peiperl, Anja Pinborg, Peush Sahni, Sinan Wu
PLOS. Published on January 20, 2016

Note: This editorial is being published simultaneously in Annals of Internal Medicine, British Medical Journal, Canadian Medical Association Journal, Chinese Medical Journal, Deutsches Ärzteblatt (German Medical Journal), Ethiopian Journal of Health Sciences, JAMA (Journal of the American Medical Association), Nederlands Tijdschrift voor Geneeskunde (The Dutch Medical Journal), New England Journal of Medicine, New Zealand Medical Journal, PLOS Medicine, Revista Médica de Chile, The Lancet, and Ugeskrift for Laeger (Danish Medical Journal).

… As a condition of consideration for publication of a clinical trial report in our member journals, the ICMJE proposes to require authors to share with others the deidentified individual-patient data [IPD] underlying the results presented in the article [including tables, figures, and appendices or supplementary material] no later than 6 months after publication. The data underlying the results are defined as the IPD required to reproduce the article’s findings, including necessary metadata. This requirement will go into effect for clinical trials that begin to enroll participants beginning 1 year after the ICMJE adopts its data-sharing requirements.

Enabling responsible data sharing is a major endeavor that will affect the fabric of how clinical trials are planned and conducted and how their data are used. By changing the requirements of the manuscripts we will consider for publication in our journals, editors can help foster this endeavor. As editors, our direct influence is logically, and practically, limited to those data underpinning the results and analyses we publish in our journals.

The ICMJE also proposes to require that authors include a plan for data sharing as a component of clinical trial registration. This plan must include where the researchers will house the data and, if not in a public repository, the mechanism by which they will provide others access to the data, as well as other data-sharing plan elements outlined in the 2015 Institute of Medicine Report [e.g., whether data will be freely available to anyone upon request or only after application to and approval by a learned intermediary, whether a data use agreement will be required] [1]. ClinicalTrials.gov has added an element to its registration platform to collect data-sharing plans. We encourage other trial registries to similarly incorporate mechanisms for the registration of data-sharing plans. Trialists who want to publish in ICMJE member journals [or nonmember journals that choose to follow these recommendations] should choose a registry that includes a data-sharing plan element as a specified registry item or allows for its entry as a free-text statement in a miscellaneous registry field. As a condition of consideration for publication in our member journals, authors will be required to include a description of the data-sharing plan in the submitted manuscript. Authors may choose to share the deidentified IPD underlying the results presented in the article under less restrictive, but not more restrictive, conditions than were indicated in the registered data-sharing plan…

Data sharing is a shared responsibility. Editors of individual journals can help foster data sharing by changing the requirements of the manuscripts they will consider for publication in their journals. Funders and sponsors of clinical trials are in a position to support and ensure adherence to IPD sharing obligations. If journal editors become aware that IPD sharing obligations are not being met, they may choose to request additional information; to publish an expression of concern; to notify the sponsors, funders, or institutions; or in certain cases, to retract the publication…
I don’t personally think the behavior of the sponsors of Clinical Trials has earned them the right to as much consideration as they’re being given in these reports. The scientific misbehavior in Clinical Trial reporting, particularly in psychiatry, has been rampant rather than subtle, so I would’ve held out for easier access. That being said, if this plan is put into action and actually enforced, it will do. But the enforcement paragraph is too weak as it stands. How about, if they don’t comply, publish a retraction notice and annotate ClinicalTrials.gov? How about, if they don’t live up to their agreement, put sponsor and authors on a no-further-acceptance-until… list? So I’ve pushed the wolf in sheep’s clothing down one more rung. I promise that the making of « wolf alert! wolf alert!… » part 3 will have no more introductory remarks…
Mickey @ 2:13 PM

the making of « wolf alert! wolf alert!… » part 1

Posted on Sunday 10 April 2016

Usually when I write a blog post, I know what I want to say. I don’t like it when others wander around, so I try not to do it myself. But that’s not what happened with wolf alert! wolf alert!…. A coauthor on the Paxil Study 329 article had forwarded the PLOS article. It was a short simple piece, but I thought it might be a springboard to contrast the approach to Data Transparency of the European Medicines Agency [EMA] with the more chaotic efforts here in the US. I particularly wanted to say something about the notion that the ClinicalTrials.gov site and its Results Database are resources. They may become that, but they sure aren’t right now. I wanted to mention two things: that they’re frequently not current, with missing results, and that they’re not checked for completeness even when results are present. Then I wanted to talk about the advantages of the EMA plan to release all the information submitted for approval. A short post about a simple little article.

But I couldn’t seem to finish it. I’d write something and wander off. Coming back, I’d erase what I wrote and start over. That’s just not my M.O., and I went to bed with it open on my desktop. When I looked at it in the morning, I decided that I might be dawdling because I hadn’t really read the IOM Report, so I started reading. It only took a couple of pages to get to the part where they listed who was on the Committee. And my musing about my dawdling came to an end. I finally got to what was bothering me. I dashed off the post [wolf alert! wolf alert!…] having other things to do in the afternoon, and picked it up when I got home later in the evening.

First, the PLOS Medicine article…
PLOS Medicine
National Library of Medicine, National Institutes of Health
by Deborah A. Zarin and Tony Tse
January 19, 2016

A newly published reanalysis, part of the Restoring Invisible and Abandoned Trials [RIAT] initiative, was based on access to original case report forms for 34% of the 275 participants. These highly granular IPD datasets enabled the researchers to recategorize certain adverse events that they determined had been miscategorized originally [e.g., “mood lability” rather than the more serious “suicidality”]. The reanalysis concluded that Study 329 did not show either efficacy or safety.

How Would the Problems of Study 329 Be Addressed by the Current TRS?

It would be an oversimplification to conclude that this reanalysis demonstrates the need to make IPD for all trials available. A more nuanced look at the specific problems is useful. Many of the concerns about Study 329 and the other Paxil studies might have been addressed if current policies regarding registration and results reporting had been in existence. The key issue that specifically required access to IPD was the detection of miscategorization of some adverse events in the original report…

Key Issue / Relevant TRS Component / Comment

  • Relevant TRS Component: Summary Results Reporting. Comment: Results database entries would have provided access to the "minimum reporting set" including all prespecified outcome measures and all serious adverse events.
  • Key Issue: Detection of selective reporting bias of efficacy and safety findings in the published results of Study 329, unacknowledged changes in outcome measures, and other issues. Relevant TRS Component: Prospective Registration. Comment: Archival registration information would have allowed for the detection of unacknowledged changes in prespecified outcome measures and detection of nonprespecified outcome measures reported as statistically significant.
  • Relevant TRS Component: Summary Results Reporting. Comment: Structured reporting devoid of interpretation or conclusions would have made summary data publicly available while avoiding the possibility of spinning the results.
  • Key Issue: Invalid and unacknowledged categorization of certain adverse events, resulting in the underreporting of suicidality. Relevant TRS Component: Sharing Highly Granular IPD and Documents [CRFs]. Comment: Access to high-granularity IPD enabled the elucidation of data analytic decisions that had not been publicly disclosed; reanalysis was possible with different methods of categorizing adverse events.

It is important to note that this illuminating reanalysis required access to the highly detailed IPD available in the original CRFs, represented by the far-left side of the x-axis in Fig 1. However, recent high-profile proposals for the sharing of IPD might not have added any clarity in the case of the Paxil studies in children beyond what could have been achieved with the optimal use of a registry and results database [i.e., two foundational levels of the pyramid in Fig 2]. The reason is that journal publication serves as the “trigger” for IPD release in many of these proposals, which could not possibly mitigate biases resulting from selective publication in the first place [i.e., IPD from unpublished trials would be exempt from sharing requirements]. In addition, such proposed IPD policies call for the release of only the “coded” or “analyzable” dataset, which would not have allowed for the detection of miscategorization or the recategorization of the adverse events. Finally, such proposals would only require the sharing of a subset of IPD and documents for those aggregate data reported in the publication and not the full dataset, precluding secondary analyses intended to go beyond validation and reproducibility of the original publication.

I hadn’t really said what needed saying about this idea of putting ClinicalTrials.gov at the center of our Clinical Trial system. So I want to redo my points from the last post:

  • the a priori Protocol:
    While the ClinicalTrials.gov write-up for a trial usually contains the PRIMARY and SECONDARY OUTCOME VARIABLES, it doesn’t have the full a priori protocol. It needs to be emphasized that this is a definitive outcome variable declaration and a harbinger of what’s going to be reported in the Results Database on the completion of the study. While some debate whether this declaration is "binding" [whether outcomes can be changed along the way as they were in Paxil Study 329], it’s the only guard we have against someone running every imaginable analysis until they find one they like.
  • the Results Database:
    First, as I mentioned, this is the single most ignored requirement on the planet. And filling it out is no big deal. I would propose that one requirement for any submission to the FDA for approval, or perhaps even for publication in a journal, be a completed entry in the ClinicalTrials.gov Results Database.
  • the Statistical Analysis Plan:
    My part of the Study 329 RIAT article was primarily the efficacy statistical analysis. And while the point that "The key issue that specifically required access to IPD was the detection of miscategorization of some adverse events in the original report" was indeed the central focus of our reanalysis, there was a less obvious but important finding in the efficacy analysis that required the full IPD to resolve. The original paper skipped the omnibus ANOVA analysis before making pairwise comparisons [a sketch of that omnibus-then-pairwise sequence follows this list], and there were many other gross statistical issues with the article’s "rogue variables" that didn’t make it into our paper, but were reported here [see study 329 vii – variable variables?… thru study 329 xii – premature hair loss…]. While it didn’t matter in this instance, in many other cases it could be absolutely crucial. If ClinicalTrials.gov is to be the definitive Summary Results Reporting mechanism, the details of the statistical analytic methods need to be specified in the original ClinicalTrials.gov write-up.
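For readers who haven’t run into the omnibus-versus-pairwise issue, here is a minimal sketch of what that sequence looks like in practice. The group names and numbers are made up for illustration, not Study 329 data: an omnibus one-way ANOVA is run across all arms first, and the pairwise comparisons [with a Bonferroni correction] are made only if the omnibus test is significant.

```python
# Minimal sketch of an omnibus test before pairwise comparisons.
# The groups and values are invented for illustration, not Study 329 data.
from itertools import combinations
from scipy import stats

groups = {
    "placebo":   [12, 15, 11, 14, 13, 16, 12, 15],
    "low_dose":  [11, 13, 10, 14, 12, 11, 13, 12],
    "high_dose": [ 9, 11, 10,  8, 12, 10,  9, 11],
}

# Step 1: omnibus one-way ANOVA across all groups at once.
f_stat, p_omnibus = stats.f_oneway(*groups.values())
print(f"omnibus ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.4f}")

# Step 2: pairwise t-tests only if the omnibus test is significant,
# with a Bonferroni correction for the number of comparisons.
if p_omnibus < 0.05:
    pairs = list(combinations(groups, 2))
    for a, b in pairs:
        t_stat, p_raw = stats.ttest_ind(groups[a], groups[b])
        p_adj = min(1.0, p_raw * len(pairs))
        print(f"{a} vs {b}: t = {t_stat:.2f}, corrected p = {p_adj:.4f}")
else:
    print("omnibus test not significant; no pairwise comparisons")
```

Going straight to the pairwise tests, as the original paper did, is one way nominally significant pairwise p values can end up reported without the protection against multiple comparisons that the omnibus step is supposed to provide.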
Of the many lessons for me in the couple of years we spent on that paper, one of the biggest was the power of the saying "the devil’s in the details." Some of the things that are crystal clear now just from looking at the summary reports took going over the full IPD again and again to see the first time. We repeated every analysis using the full complement of raw data, and it took that to locate many of the problems. I heartily agree with these authors that partial data access schemes are not much better than no access at all.

So part of my dawdling had to do with needing to add the things in this list to the paper’s discussion of How Would the Problems of Study 329 Be Addressed by the Current TRS?  But that’s not all. There was something else that had me peppering that last post with pictures of a wolf in sheep’s clothing, and that’s why there’s a the making of « wolf alert! wolf alert!… » part 2 just around the corner. But I wanted to say what I thought before looking at what the Institute of Medicine Committee thought in any detail…

Mickey @ 9:08 AM

wolf alert! wolf alert!…

Posted on Friday 8 April 2016

The wheels of progress turn very slowly, and about all one can really take heart from is that they keep turning. It seems as if what we call Data Transparency has been echoing around the Globe for several years, but the yield so far has been sparse. In the version being worked out by the European Medicines Agency [EMA], the focus of the first phase has been on the CSRs [Clinical Study Reports], covered by Tom Jefferson [see a priori]. The IPD [Individual Participant Data] policy is the next phase, still in discussion. In the US, there was a 290-page report from an Institute of Medicine Committee released last year [linked here for reference]:
Committee on Strategies for Responsible Sharing of Clinical Trial Data
INSTITUTE OF MEDICINE OF THE NATIONAL ACADEMIES, pp. 290, 2015
[for reference]
PLOS Medicine
National Library of Medicine, National Institutes of Health
by Deborah A. Zarin and Tony Tse
January 19, 2016
  • The role of individual participant data [IPD] sharing can best be understood as part of an overall three-level trial reporting system [TRS] framework.
  • Different “types” of IPD, which reflect varying degrees of information granularity, have different potential benefits and harms.
  • Study 329 of Paxil [paroxetine] in children with depression is used as a case study to highlight the potential value of different components of the TRS.
It’s a short article, easy to read, laying out a stepwise program for Data Transparency that starts with the Registration of the Trial on ClinicalTrials.gov. The next step would be the Results Database on ClinicalTrials.gov. And then they move to IPD Sharing proper and discuss various flavors of access – using our 329 rewrite as an example:

Reading this short benign article, I felt bothered but couldn’t put my finger on exactly why. When I came down this morning and saw it on my computer screen, I had a different response. I couldn’t find anything right about it. I haven’t read the whole IOM report, just skimmed it, but I had something of the same reaction. A few points:

  • Index of Suspicion: The tone of the IOM Report and the article is very civil, with talk of Data Sharing for other novel investigations, limiting the access by levels depending on what’s being looked for, using the ClinicalTrials.gov Results Database, etc.
    We don’t want Data Transparency to look for novel anything. We want Data Transparency because it has been tampered with and distorted on a massive scale. We don’t want limited anything. We want to be able to look in every last nook and cranny. So I had a strong "con job" feeling [really strong].
  • ClinicalTrials.gov: I love this site – visit it all the time – pore through the history of changes frequently. But it’s no panacea for Data Transparency. It describes the study, but it doesn’t have the a priori Protocol, and it doesn’t have the Statistical Analysis Plan, and it’s obviously easy to leave out important details. If the study is being done by a CRO, it doesn’t say which one. So it’s great in a general way, but specifics, not so much.
  • ClinicalTrials.gov Results Database: This is the most ignored requirement on the planet. The articles documenting how rarely it’s actually filled out are everywhere. It’s ignored by Industry and Academia alike. It has improved some after all the attention its disuse gathered, but it is hardly a reliable resource [see the sketch after this list for one way to check whether a given trial has posted results]. And even when it is filled out, it rarely adds much to the published article. It’s as easy to jury-rig as the articles, maybe more so because, to my knowledge, it’s not reviewed. I hope the IOM Report will address this point.
  • The Horse’s Mouth: ClinicalTrials.gov and the ClinicalTrials.gov Results Database are secondary proxies for the actual data, and written by the sponsors. The other resources available are the FDA Medical Reports on Drugs@FDA. They’re also great, though the lag time can often be excessive [the last one I went after took 18 months]. But all of these things are about the data, not the data itself. Even the FDA Medical Reports on Drugs@FDA are too far from The Horse’s Mouth.
  • Viva Europe!: The jury is still out on how the EMA is going to deal with the IPD information, but they’re off to a good start so far. What they’re releasing, with some CCI redactions, is exactly what was sent to them in the application for approval. That’s going to be the Drug Company on its best behavior, the place where they’re least likely to do a lot of spinning, because it’s going to be evaluated by experts who have the power to ask for more information. So it’s the highest-level surrogate information available. As someone checking it over, you’re at an equal level with the regulators. What we can get from the FDA is what they said about what they saw. What we will be able to get from the EMA is what they themselves saw. Much better…
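Since I keep saying the Results Database is ignored, here is a minimal sketch of one way to check for yourself whether a given trial has a Results entry. It is assumption-laden: the v2 REST endpoint and the hasResults field shown here are my reading of the current ClinicalTrials.gov interface [which postdates these posts], and the NCT number is just a placeholder.

```python
# Minimal sketch: check whether a registered trial has posted results
# in the ClinicalTrials.gov Results Database.
# Assumptions: the v2 studies endpoint and the "hasResults" field in the
# returned JSON; the NCT number below is only a placeholder.
import requests

def has_posted_results(nct_id: str) -> bool:
    url = f"https://clinicaltrials.gov/api/v2/studies/{nct_id}"
    record = requests.get(url, timeout=30).json()
    return bool(record.get("hasResults", False))

if __name__ == "__main__":
    print(has_posted_results("NCT00000000"))  # placeholder NCT number
```

Looping that check over a list of registered trials is all it would take to tally how many have actually posted results.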
And then there’s this from the report:
COMMITTEE ON STRATEGIES FOR RESPONSIBLE SHARING OF CLINICAL TRIAL DATA
    BERNARD LO (Chair), President, The Greenwall Foundation
    TIMOTHY COETZEE, Chief Research Officer, National Multiple Sclerosis Society
    DAVID L. DeMETS, Professor and Chair, Department of Biostatistics and Medical Informatics, University of Wisconsin–Madison
    JEFFREY DRAZEN, Editor-in-Chief, New England Journal of Medicine
    STEVEN N. GOODMAN, Professor, Medicine & Health Research & Policy, Stanford University School of Medicine
    PATRICIA A. KING, Carmack Waterhouse Professor of Law, Medicine, Ethics and Public Policy, Georgetown University Law Center
    TRUDIE LANG, Principal Investigator, Global Health Network, Nuffield Department of Medicine, University of Oxford
    DEVEN McGRAW, Partner, Healthcare Practice, Manatt, Phelps & Phillips, LLP
    ELIZABETH NABEL, President, Brigham and Women’s Hospital
    ARTI RAI, Elvin R. Latty Professor of Law, Duke University School of Law
    IDA SIM, Professor of Medicine and Co-Director of Biomedical Informatics of the Clinical and Translational Science Institute, University of California, San Francisco
    SHARON TERRY, President and CEO, Genetic Alliance
    JOANNE WALDSTREICHER, Chief Medical Officer, Johnson & Johnson
In my mind, this brings Dr. Jeffrey Drazen’s recent comments about Data Sharing into much sharper focus, and I smell danger, danger all around this whole story!

Give it another read…
by Dan L. Longo and Jeffrey M. Drazen
New England Journal of Medicine. 2016; 374:276-277.

The aerial view of the concept of data sharing is beautiful. What could be better than having high-quality information carefully reexamined for the possibility that new nuggets of useful data are lying there, previously unseen? The potential for leveraging existing results for even more benefit pays appropriate increased tribute to the patients who put themselves at risk to generate the data. The moral imperative to honor their collective sacrifice is the trump card that takes this trick…

A second concern held by some is that a new class of research person will emerge — people who had nothing to do with the design and execution of the study but use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited. There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “research parasites”…
Mickey @ 12:42 PM

seasonal dementia update…

Posted on Friday 8 April 2016


Note: Don’t be fooled by the low counts. Comes now the grass pollen, the true dementer.

Mickey @ 9:18 AM

already gone?…

Posted on Thursday 7 April 2016

"Over the pa$t two decade$, largely becau$e of a few widely publicized epi$ode$ of unacceptable behavior by the pharmaceutical and biotechnology indu$try, many medical journal editor$ [including me] have made it harder and harder for people who have received indu$try payment$ or item$ of financial value to write editorial$ or review article$. The concern ha$ been that $uch people have been bought by the drug companie$. Having received indu$try money, the argument goe$, even an acknowledged world expert can no longer provide untainted advice. But i$ thi$ divide between academic re$earcher$ and indu$try in our be$t intere$t? I think not – and I am not alone. The National Center for Advancing Tran$lational $cience$ of the National In$titute$ of Health, the Pre$ident’$ Council of Advi$or$ on $cience and Technology, the World Economic Forum, the Gate$ Foundation, the Wellcome Tru$t, and the Food and Drug Admini$tration are but a few of the in$titution$ encouraging greater interaction between academic$ and indu$try, to provide tangible value for patient$. $imply put, in no area of medicine are our diagno$tic$ and therapeutic$ $o good that we can call a halt to improvement, and true improvement can come only through collaboration. How can the divide be bridged? And why do medical journal editor$ remain concerned about author$ with pharma and biotech a$$ociation$? The rea$on$ are complex. Thi$ week we begin a $erie$ of three article$ by Li$a Ro$enbaum examining the current $tate of affair$."
by Jeffrey M. Drazen, MD, Editor
NEJM 2015 372:1853-1854.

I added back in what I think Editor Jeffrey Drazen is leaving out of his argument in this current campaign. I’m not as unsympathetic to some of his argument as one might think. Thirty-five years ago, I was in an academic department under a Chairman who had shown a surprising amount of skill in his thirty year tenure financing a growing department in a place where there really wasn’t anything to start with. But by the time I came along, he had run out of tricks. We had virtually no institutional support, which is pretty much the rule in medical academic departments. He’d always maximally utilized Federal Grant support. Resident and faculty salaries came from City/County, State, VAH, and private hospital placements. And he had done well with private donations. But it was all drying up at an almost palpable rate. Increasingly, my job was juggling the training needs of our residents with literally selling their services to earn their pay.

After I left, it was obvious that the post-DSM-III medical psychiatry was a more lucrative enterprise and that they were no longer operating on a shoestring, and I wasn’t so naive that I didn’t know how they brought it off. I guess I assumed that they were doing Clinical Trials and drug research to bring in the needed funding. I don’t recall thinking about it very much beyond that. So when I became reacquainted with such matters a quarter of a century later, I wasn’t surprised about the unrestricted institutional grants or the clinical trials. Their new money had to be coming from some place. But I was oblivious to the direct payment to faculty by industry in the form of Grants, Advisory Committees, or Speakers Bureaus. And that’s what Drazen is talking about, "academician$ who are receiving per$onal money from the pharmaceutical companie$". He’s implying that they can render editorial or review article opinions unbiased by the fact that they are being paid in some form or another by the manufacturer of the drugs being reviewed. It’s not the dialog between academia and industry that’s in question, it’s the money. And his opening sentence …

    "Over the pa$t two decade$, largely becau$e of a few widely publicized epi$ode$ of unacceptable behavior by the pharmaceutical and biotechnology indu$try, many medical journal editor$ [including me] have made it harder and harder for people who have received indu$try payment$ or item$ of financial value to write editorial$ or review article$.
… is like calling Europe’s Black Death epidemic "a few widely publicized cases of Pasteurella pestis."

I thought for a while that it was just psychiatry, selling out to industry justified by dire straits. But it’s increasingly apparent that it’s all of medicine that’s playing with the same fire that has been so disastrous for my own specialty. And Jeffrey Drazen, editor of the New England Journal of Medicine, is taking a lead in not just condoning it, but telling us it’s a good idea. After suggesting that experts who have personal financial Conflicts of Interest can write unbiased editorials and review articles [an obvious impossibility], he follows up a few months later with an attack on people who re-analyze data they think has been jury-rigged, calling them [us] "data parasites." And then later, he puts a cherry on top of this Sundae by dismissing confrontations about articles in his journal that reported drug trial results with protocol violations [How did NEJM respond when we tried to correct 20 misreported trials?].

I suppose that we should’ve expected this when the New England Journal of Medicine fired Jerome Kassirer and hired Jeffrey Drazen in 1999 over these self-same issues. But it’s a bitter pill to swallow following editors like Arnold Relman, Jerome Kassirer, and Marcia Angell [see a narrative…], who were champions in Medicine’s navigation between the Scylla of Managed Care and the Charybdis of Industrial Conflicts of Interest – trying to preserve some semblance of a medical ethic. In my last post, I said "he has to go" but should’ve added "before the New England Journal of Medicine goes with him." But maybe that’s my naïveté once again in failing to accept that the New England Journal of Medicine may already be gone…
Mickey @ 12:49 PM

a must read…

Posted on Tuesday 5 April 2016


Amid Public Feuds, A Venerated Medical Journal Finds Itself Under Attack
by Charles Ornstein
April 5, 2016

The New England Journal of Medicine is arguably the best-known and most venerated medical journal in the world. Studies featured in its pages are cited more often, on average, than those of any of its peers. And the careers of young researchers can take off if their work is deemed worthy of appearing in it.

But following a series of well-publicized feuds with prominent medical researchers and former editors of the Journal, some are questioning whether the publication is slipping in relevancy and reputation. The Journal and its top editor, critics say, have resisted correcting errors and lag behind others in an industry-wide push for more openness in medical research. And dissent has been dismissed with a paternalistic arrogance, they say.

In a widely derided editorial earlier this year, Dr. Jeffrey M. Drazen, the Journal’s editor-in-chief, and a deputy used the term “research parasites” to describe researchers who seek others’ data to analyze or replicate their studies, which many say is a crucial step in the scientific process. And last year, the Journal ran a controversial series saying concerns about conflicts of interest in medicine are oversimplified and overblown.

“They basically have a view that … they don’t need to change or adapt. It’s their way or the highway,” said Dr. Eric Topol, director of the Scripps Translational Science Institute and chief academic officer at Scripps Health in La Jolla, California…
We are warned that the weakest way to attack an argument we disagree with is to disparage the author of that idea. So avoid ad hominem attacks – to the man. But in this case, things are different. The argument is actually with the man himself. I’ve tried to keep up with Dr. Drazen’s many-splendored bad ideas and arrogance. Ornstein does a better job than I could of cataloging the facts in the case [did I mention that it’s a must-read]. But over and above Dr. Jeffrey Drazen’s bad ideas, and his barely veiled nasty temperament, he’s just not New England Journal of Medicine Editor material. He’s been there for a long time, but now he’s let the cat out of the bag. He needs to go. That’s simply that…
Mickey @ 9:16 PM

a sacred clown…

Posted on Tuesday 5 April 2016

Retraction Watch
by Ivan Oransky
March 16th, 2016

John Ioannidis is perhaps best known for a 2005 paper “Why Most Published Research Findings Are False” … Earlier this month, he published a heartfelt and provocative essay in the Journal of Clinical Epidemiology titled “Evidence-Based Medicine Has Been Hijacked: A Report to David Sackett.” In it, he carries on a conversation begun in 2004 with Sackett, who died last May and was widely considered the father of evidence-based medicine…

Retraction Watch: You write that as evidence-based medicine “became more influential, it was also hijacked to serve agendas different from what it originally aimed for.” Can you elaborate?
John Ioannidis: As I describe in the paper, “evidence-based medicine” has become a very common term that is misused and abused by eminence-based experts and conflicted stakeholders who want to support their views and their products, without caring much about the integrity, transparency, and unbiasedness of science…

Retraction Watch: You also write that evidence-based medicine “still remains an unmet goal, worthy to be attained.” Can you explain further?
John Ioannidis: The commentary that I wrote gives a personal confession perspective on whether evidence-based medicine currently fulfills the wonderful definition that David Sackett came up with: “integrating individual clinical expertise with the best external evidence”. This is a goal that is clearly worthy to be attained, but, in my view, I don’t see that this has happened yet. Each of us may ponder whether the goal has been attained. I suspect that many/most will agree that we still have a lot of work to do.

Retraction Watch: You describe yourself as a “failure.” What do you mean?
John Ioannidis: Well, I still know next to nothing, even though I am always struggling to obtain more solid evidence and even though I always want to learn more. If you add what are probably over a thousand rejections [of papers, grant proposals, nominations, and other sorrowful academic paraphernalia] during my career to-date, I think I can qualify for a solid failure. Nevertheless, I still greatly enjoy my work in science and in evidence-based medicine.

Retraction Watch: You say that your first grant, which you applied for 17 years ago, was “not even rejected.” Tell us about that grant.
John Ioannidis: It was a randomized controlled trial of antibiotics versus placebo for acute sinusitis. Hundreds of millions of people were treated with antibiotics without good evidence back then, and hundreds of millions of people continue to be treated with antibiotics even nowadays even though most of them would not need antibiotics. I sent in the application to a public funding agency, but have not heard back yet. Probably they felt that requesting funding for a randomized trial and not going to the industry for such funds was a joke. Many public funding agencies are accustomed to funding only research that clearly has no direct relevance to important, real-life questions, so perhaps they didn’t know where to place my application….
Most cultures have some figure who stands outside the mainstream and is allowed to say the things the rest of us can’t get away with saying. In Medieval Times, it was the Court Jester, or the Royal Fool. In our Southwestern Native American culture, they were the Pueblo and other Sacred Clowns. In classic Greek Tragedy, it was often the chorus. Such figures can be sarcastic, even openly hostile, iconoclastic, and can generally deliver the cold hard truth [in a light-hearted way]. I think of John Ioannidis as occupying such a place in Medicine.

I know that my eyes roll when people start sprinkling terms like evidence-based medicine, best practices, clinical guidelines, etc. in their articles and presentations. I realize that those are important concepts, but they’re so frequently co-opted by other motives that my eye-rolling has almost become an involuntary reflex. But it’s his joke about public funding going to "only research that clearly has no direct relevance to important, real-life questions" that has my attention right now. As much as we [I] complain about the self-serving nature of PHARMA-funded medication research, who else is funding Clinical Trials of drugs? or simple practical studies like do antibiotics help sinus infections? I’m afraid the answer is almost nobody.

One place where I frequently find myself thinking about this is the RCTs done on psychiatric medications. The mandate of the FDA is primarily focused on safety, and efficacy is in the mix as a minimal standard, a 1962 afterthought: two statistically significant RCTs out of however-many-they-want-to-do, which can lead to something like the "me too" parade I was describing in the pipeline paradigm… As I understand it, the lite efficacy testing is there to eliminate snake oil [e.g. inert medicinal products, the patent medicines of yore], not to inform judgments about prescribing. This is a chart of the Atypicals in Schizophrenia by year of introduction, with the effect sizes from a meta-analysis added in…

… hardly representing a progression of increasing efficacy [level, or drifting south?]. One might suspect an incremental safety factor or side effect profile, but that’s not there either. The antidepressant data is harder to gather, but it looks like it degrades perhaps even more with successive molecules [still a work in progress]. The FDA just does what it’s mandated to do – two significant studies and a safety certification, usually based on short-term exposure. Reform suggestions often hover around approving only drugs that offer improvement over existing drugs – but in practice that becomes the fuzziest of calls. Others suggest independent, longer-running RCTs. That would just be wonderful, except these studies cost a mint, and there’s just no funding source. Sacred Clown Ioannidis speaks the truth. Non-industry funding goes to sexier stuff rather than such routine [but vital] independent comparisons.
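For anyone who hasn’t run into the term, the "effect size" plotted in that kind of chart is usually a standardized mean difference such as Cohen’s d – the drug-versus-placebo difference in mean improvement divided by the pooled standard deviation. A minimal sketch with made-up numbers [not values from any actual trial]:

```python
# Minimal sketch of a standardized mean difference (Cohen's d), the usual
# "effect size" in drug-vs-placebo comparisons. All numbers are made up.
import math

mean_drug, sd_drug, n_drug = 12.0, 8.0, 150          # mean improvement, SD, n
mean_placebo, sd_placebo, n_placebo = 8.0, 8.5, 150

# Pooled standard deviation of the two groups.
pooled_sd = math.sqrt(((n_drug - 1) * sd_drug**2 + (n_placebo - 1) * sd_placebo**2)
                      / (n_drug + n_placebo - 2))

cohens_d = (mean_drug - mean_placebo) / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")   # about 0.48 with these invented numbers
```

The y-axis of the chart above is that kind of number, which is why a flat or southward drift across successive molecules is worth noticing.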

We have mostly short-term RCTs done either for FDA approval or for a bit of indication creep, with their heavy dose of COI and bias. And no amount of re-analysis or meta-analysis of these early-on RCTs can really get at what we want to know – efficacy and side effects over time in the real world, without the spin and sin. Managed Care isn’t any more interested in paying for ongoing research than anyone else, so MD visits remain limited in time and frequency, hindering our personal learning from our own patients. It’s easy to hatch "on site" research fantasies, but "translating" such things into a workable reality is quite another matter. Thus Sackett’s evidence-based medicine can downgrade to a trope, or a cliché, or even a trick rather than “integrating individual clinical expertise with the best external evidence.”

Actually, things overall are improving. The prosecutions are down, the data transparency movement is on the move, and the public seems more aware. But bringing honesty to trial reporting isn’t going to change the fact that we have mostly short-term front loaded trials and little else. Somewhere down this road, we need to move beyond approval into clinical usage with our RCTs. Until then, the message of Sacred Clown John Ioannidis will have to keep echoing until someone hears it. Stripped of its misuses, evidence-based medicine is a term that actually means something…
Mickey @ 8:33 PM

a couple of points…

Posted on Sunday 3 April 2016

The residue of undue influence we call Conflict of Interest spreads like jungle vines in a deserted temple, and the longer the absence, the harder the restoration. In a recent [silly] JAMA Viewpoint article,
JAMA
by Anne R. Cappola and Garret A. FitzGerald
September 24, 2015
we were told Conflict of Interest was a pejorative term and should be changed to Confluence of Interest [in a bid to suggest something like synergy]. While our collective eyes rolled at the time [see a creative rationalization…, selective inattention… have no place…], there’s a way in which the term is appropriate – as in a confluence of rivers, where contamination is guaranteed. And to push my analogy, the further you go downstream, the more muddied the distinctions.

In following the Brintellix® for Cognitive Dysfunction in Major Depressive Disorder story [a parable…], I ran into examples where the COI-laden literature is what seems headed downstream – into the future. Both of the supporting studies and a preemptive review article are available full text on-line – industry funded, industry designed, industry written, authors industry tainted or employed:

On the other hand, a well-researched critique of the literature about Brintellix® and its approval is behind a $41 paywall in an otherwise inaccessible journal [to have made it available Open Access would have cost ~$3000]:
by Lisa Cosgrove, Steven Vannoy, Barbara Mintzes, and Allen Shaughnessy
Accountability in Research. 2016 Feb 18. [Epub ahead of print]

The relationships among academe, publishing, and industry can facilitate commercial bias in how drug efficacy and safety data are obtained, interpreted, and presented to regulatory bodies and prescribers. Through a critique of published and unpublished trials submitted to the Federal Drug Administration [FDA] and the European Medicines Agency [EMA] for approval of a new antidepressant, vortioxetine, we present a case study of the "ghost management" of the information delivery process. We argue that currently accepted practices undermine regulatory safeguards aimed at protecting the public from unsafe or ineffective medicines. The economies of influence that may intentionally and unintentionally produce evidence-biased – rather than evidence-based – medicine are identified. This is not a simple story of author financial conflicts of interest, but rather a complex tale of "ghost management" of the entire process of bringing a drug to market. This case study shows how weak regulatory policies allow for design choices and reporting strategies that can make marginal products look novel, more effective, and safer than they are, and how the selective and imbalanced reporting of clinical trial data in medical journals results in the marketing of expensive "me-too" drugs with questionable risk/benefit profiles. We offer solutions for neutralizing these economies of influence.
POINT 1: The Literature is what persists
In medicine, the thing that lasts is the published academic literature. Blog posts and press releases will evaporate sooner or later. So what will be available long term, full text on-line, will be those industry funded, industry designed, industry written, industry authored articles. After a few years, Cosgrove et al will require an archeologist to locate. It’s already hard enough. It’s because the industry-funded ghosted studies come from deep pockets for whom the online fee is nothing, and because they’re in high impact journals. The critiques are hard to get published, usually unfunded, and there’s no loose petty cash to pay the Open Access fees. That’s one of the reasons for RIAT [Restoring Invisible and Abandoned Trials], to keep the other side of the story in The Literature [downstream]…

So what’s being passed to the future is the COI-laden story, not the balancing independent, unfunded critique. Sooner or later, I’ll review their whole paper, but right now I wanted to focus on one particular piece of it. In looking over a new article, many of us reflexively look at the Conflict of Interest declarations and the funding to see if a clinical trial is industry funded and authored. A few years ago, we checked to see if the academic authors had COI and if it was ghost-written. But they tightened the rules, and it hasn’t much mattered. The COI is now quickly apparent, as is whatever they call the ghost-writer these days. Often they don’t even bother to have an academic author, just employees. And it doesn’t seem to make any difference except to cut down on the sponsor’s publication costs [which makes one wonder why an academician-free article is even in an academic journal anyway?]. We’ve apparently gotten inured to what was once taboo.

But Cosgrove et al added something that hasn’t occurred to me, and I haven’t noticed before. In looking at the COI, they went beyond the listed authors:

    Below is a summary of the industry-publishing relationships of the eight published studies submitted to the FDA and one additional study submitted to the EMA that was published…
    1. In eleven of the thirteen publications, the majority of authors were employees of the manufacturer, and in four of the thirteen published studies, all authors were company employees.
    2. In all of the trial reports, the authors explicitly thank an employee of the manufacturer for “assistance in the preparation and writing” of the manuscript or note that assistance with preparing and writing the article was provided by an employee.
    3. In nine of the thirteen published articles, the following issue was disclosed:
      [the manufacturer] was involved in the study design, in the collection, analysis and interpretation of data, and in the writing of the report.
    4. The thirteen published studies were published in seven academic journals. The editors of five of these journals had financial ties to vortioxetine’s manufacturer…

POINT 2: The Editors control The Literature
It’s number 4. They looked into the financial conflicts of interest of the Journal Editors. Of course we should’ve been doing that all along. It’s the Journal Editors who control the gateway into The Literature. It was Editor Mina Dulcan who opened the door to Paxil Study 329, and it is Editor Andres Martin who has kept it there for 15 years. It’s Jeffrey Drazen who wants to relax COI restrictions at the NEJM and thinks people who want to check articles are "parasites." If there’s any place where COI is a more important parameter, I don’t know where that might be. I would’ve naively assumed they didn’t have COI. Silly me.

So, while I believe that the idea of RIAT [Restoring Invisible and Abandoned Trials] remains a good solution for articles that are found to be questionable, there are lots of articles that can be identified almost from the get-go as in need of a strong counterpoint, which is one part of what Cosgrove et al did in their article. So for POINT 1, we need more of those, but we also need some way to give people access to them – whether that means publishing in an Open Access journal, establishing journals for that specific purpose, or raising some money to give the authors of those articles scholarships to pay Open Access fees. Contemporary criticism needs to travel alongside questionable literature as it rides into the future. And as for POINT 2, point taken! In any article that appears to be COI-laden, we need to make the COI declaration of the Editor[s] a public part of any critique…
Mickey @ 9:35 PM

a pipeline rejig…

Posted on Saturday 2 April 2016

In the course of reading about the FDA’s rejection of Lundbeck/Takeda‘s bid for a Cognitive Dysfunction in MDD indication for Brintellix® [still going strong?…], I ran across another article in FiercePharma, the one that got me thinking about the pipeline in general [the pipeline paradigm…]. It’s about another Lundbeck drug – an Atypical Antipsychotic [Lu AF35700] – note the date. If I read this right, they’ve completed three Phase 1 Clinical Trials,
been granted Fast-Track status by the FDA [November 2015], and are off and running with a Phase III Clinical Trial,
FiercePharma
By Nick Paul Taylor
March 17, 2016

Lundbeck has started the first of several planned Phase III trials of its candidate against treatment-resistant schizophrenia, Lu AF35700. The candidate is a key component of the turnaround strategy initiated by Kåre Schultz, who has shed other pipeline prospects in order to up Lundbeck’s bet on Lu AF35700 since taking over as CEO last year.

Copenhagen, Denmark-based Lundbeck is kicking off the pivotal trial program of the experimental drug with a Phase III study that will recruit 1,000 patients across 15 countries. In the study, Lundbeck will administer one of two doses of Lu AF35700 to patients with treatment-resistant schizophrenia. If the drug works as Lundbeck hopes, participants’ scores on the Positive and Negative Syndrome Scale [PANSS] will improve following 10 weeks of treatment. The primary PANSS endpoint is being supplemented with Clinical Global Impression – Severity of Illness scores and Personal and Social Performance Scale data.

Lundbeck expects to be running the study for the next three years, during which time it will roll out additional trials with a view to gathering data on more than 2,000 people with treatment-resistant schizophrenia. The size of the bet is proportional to Lundbeck’s assessment of the scale of the unmet need and opportunity associated with treatment-resistant schizophrenia. "Today there is only one therapy approved for patients with treatment-resistant schizophrenia, and its use is severely limited by its problematic safety profile," Lundbeck R&D Chief Anders Gersel Pedersen said in a statement…

The drug they’re looking to compete with is Clozaril, based on the fact that Lu AF35700 has a strong D1 receptor binding like Clozaril, but a weaker D2 binding. Surely I’ve missed something here, because the only study I see in Schizophrenia is the 3-week safety study – nothing that I could find that even determines that the drug has antipsychotic properties [?]:
FiercePharma
By Nick Paul Taylor
August 20, 2015

Lundbeck  has initiated a rejig of its pipeline, taking an ax to undisclosed early-stage assets in order to funnel cash into what it sees as its most promising candidates. The schizophrenia drug Lu AF35700 is among the beneficiaries of the reshuffle, with Lundbeck using its reorganized balance sheet to advance the therapy through development unpartnered.

Copenhagen, Denmark-based Lundbeck unveiled the tweaks to its R&D priorities alongside sweeping changes to its operations, which will see it shed 1,000 jobs in an attempt to become profitable. The commercial operation is expected to bear the brunt of the cuts but R&D will be affected, too, with recently-appointed CEO Kåre Schultz committing to dropping certain early-stage assets. Trimming the pipeline will allow Lundbeck to focus its cash on programs in which it thinks it has the clearest understanding of the underlying science, such as the schizophrenia drug Lu Af35700.

"35700 is an antipsychotic with a profile that is somewhat different than most drugs available. What we see is that many of the drugs that are available on the market today are predominantly driving effects through D2," Lundbeck R&D chief Anders Gersel Pedersen said on a conference call. 35700, like clozapine, has a strong binding affinity to the D1 receptor. Lundbeck hopes to differentiate its pipeline candidate from clozapine through its side effect profile, giving patients who are resistant to D2-targeted drugs a treatment option that doesn’t cause weight gain and metabolic disturbances. Lundbeck’s belief in this hypothesis has led it to conclude Lu Af35700 is among its most promising early-stage assets. "We have to put our bets where we think we have the best ability to translate science into products," Pederson said…
While this post is a little bit about Lu AF35700, I’ll have to admit it’s also about something else. When the announcement came that the FDA had turned down that sNDA for Brintellix®, I was intrigued by the number of sites Google found echoing the announcement. They were mostly business/investor sites, and they had a surprising amount of sophisticated science thrown in. I started looking at what else was on those sites, and was frankly amazed. I’ve spent the last several days bouncing from place to place in a domain that was, for me, new territory. One thing I learned was that if you want to know about what’s in the pipeline and what’s up with clinical trials, that’s the place to look – not in the medical universe [unless you consider that the medical universe].

A lot of the material on the investor sites came from press releases by the various PHARMA companies, and so they were generally upbeat, like this one for Lu AF35700. I had a few laughs thinking that the investment world was in the same boat we’re in, a target for spin, and in real need of some Data Transparency too. But there were some other things – like this quote about the Brintellix® ruling:
The FDA doesn’t have to follow the advice of its expert review panels, but it usually does. That’s a standard line in stories about advisory committee votes. Unfortunately for Lundbeck and Takeda, their new Brintellix app is one of the unlucky ones.
I doubt most of us would say it that way ["unlucky"]. There was a surprising-to-me absence of attention to the actual medical benefit or risk – mainly comments on the success or failure of various gambits. I’m not being hyper-moral here, business sites are about business after all. It was just a bit jarring – as was something else. Much of this literature is about strategies. It reads like a huge chess game with the various moves and gambits cataloged, and sometimes relished. In the case of Lu AF35700, Lundbeck’s new CEO, Kåre Schultz, is shutting down some of Lundbeck’s R&D, laying off employees, and dropping other candidate drugs to raise the cash for his big bet on Lu AF35700 [AKA rejig]. A 1000-subject, three-year International Clinical Trial is apparently going to cost them a mint. Just as the failed bet they made on Brintellix® as a think-better antidepressant cost them dearly. It’s little surprise they hauled in the uber-KOLs for that FDA hearing.

I hadn’t thought of it before, and maybe it was The Big Short movie that had me in this frame of mind, but a lot of the investing in pharmaceuticals is analogous to the Commodities Market – betting on a future value. Again, the pipeline paradigm, as in being in a position to reap large profits when and if the product flows out into the market. I suppose any business venture is like that, but here, it’s in bas relief because of the long development process, the FDA’s yes/no position, the guaranteed time-limited monopoly afforded by the patent/exclusivity laws, and the presence of those pesky middlemen [AKA physicians]. So if Lu AF35700 turns out as they hope [a non-toxic Clozaril], as things now stand, they’ll be able to charge Gilead/Valeant/Shkreli prices – the treatment-resistant Schizophrenia patients are unlikely to have very deep pockets, so I presume they’re aiming at third-party payers.

There’s plenty more to say about the Lu AF35700 study itself, but it’s for another day. I just wanted to report on my visit to the other side…
Mickey @ 1:36 PM