BE WARNED! This post is very boring. It’s what an old man does to escape watching any more Olympic figure skating…
Three years ago, I had my first shot at looking at the FDA Approval documents for a psychiatric drug [seroquel II [version 2.0]: guessing…]. To be honest, I didn’t know anything about how to do such a thing, but I had a lot of help. I learned that the FDA approval process requires two decent clinical trials with statistically significant efficacy and an acceptable side effect profile. I discovered that PubMed had the abstracts of the published trials, that clinicaltrials.gov had the trial registrations [and rarely the required posted results], and that in many cases the medical reviews for the approvals were on Drugs@FDA. And I learned that this wasn’t an easy avocation. There were lots of things to consider, too many for a rookie like me, and I often ended up with my eyes crossed, scratching my head. Later I learned that the FDA is a pretty nice bunch, and if you can’t find what you’re looking for posted online, they’ll send it to you with a simple Freedom of Information Act request made online – charging a nominal fee only if you’re a frequent flyer or your request is hard to gather.
I looked at the FDA original approval of Lurasidone [Latuda®] for Schizophrenia back in 2011 [see ought to know by now…, echo echo echo echo echo echo echo… , in the shadows…, wait…] but I didn’t pursue it very far. Frankly, I thought it would be something of a dud with its low efficacy and the generic competition. That wasn’t correct apparently, so I’m taking another look. But that’s not what this post is really about. It’s about why the AllTrials campaign is absolutely essential.
About the Table: The top five studies were the ones looked at by the FDA. The bottom two were in the works but not completed. Study D1050049 was a failed study [inert comparator], so they only considered four [D1050006, D1050196, D1050229, D1050231]. The left-hand column has the study [linked to clinicaltrials.gov], the date started, the number of subjects, the dropout rate, the number of sites, and an icon linking to the abstract if published. The second column has the scale used [BPRSd or PANSS] and the correction method for missing values [LOCF or MMRM], with the Primary Variable in bold. The numbers are the raw differences between the drug and placebo. Significant values are bold red:
LS Mean Difference from Placebo

| Study # | Efficacy Scale | 20mg | 40mg | 80mg | 120mg | 160mg | Comparator |
|---|---|---|---|---|---|---|---|
| D1050006 [US, 2001, n=149, 66% dropout, 15 sites] | BPRSd (LOCF) | | -5.6 | | -6.7 | | |
| | BPRSd (MMRM) | | -7.3 | | -9.2 | | |
| | PANSS (LOCF) | | -9.6 | | -11.0 | | |
| D1050049 [US, 2003, n=356, 43% dropout, 34 sites] | BPRSd (LOCF) | -5.0 | -5.2 | -8.0 | -9.8 | | -7.9 [Haldol] |
| | PANSS (LOCF) | -7.1 | -7.2 | -13.6 | -16.0 | | -12.3 [Haldol] |
| D1050196 [US, 2004, n=180, 45% dropout, 15 sites] | BPRSd (LOCF) | | | -4.7 | | | |
| | PANSS (LOCF) | | | -8.6 | | | |
| D1050229 [MIX, 2008, n=496, 34% dropout, 48 sites] | PANSS (MMRM) | | -2.1 | -6.4 | -3.5 | | |
| | – US sites | | +0.6 | -2.0 | +0.2 | | |
| | – Non-US sites | | -6.5 | -10.8 | -8.6 | | |
| | PANSS (LOCF) | | -2.7 | -6.1 | -3.5 | | |
| D1050231 [MIX, 2009, n=475, 38% dropout, 52 sites] | PANSS (MMRM) | | -9.7 | | -7.5 | | -12.6 [Zyprexa] |
| | – US sites | | -5.7 | | -4.8 | | -11.4 [Zyprexa] |
| | – Non-US sites | | -10.5 | | -3.8 | | -9.6 [Zyprexa] |
| | PANSS (LOCF) | | -7.9 | | -4.8 | | -11.4 [Zyprexa] |
| D1050233 [MIX, 2008, n=488, 28% dropout, 65 sites] | PANSS (MMRM) | | | -11.9 | | -16.2 | -17.5 [Seroquel] |
| D1001002 [Asia, 2008, n~440, 62 sites] | PANSS (?) | unpublished | | | | | |
In addition to the Clinical Trial and PubMed links, I had the FDA Medical Review [the source for the values for the unpublished trials], a special report from the FDA Director, and the full journal articles for the published studies. Lots of stuff.
… the data submitted in this NDA, in this reviewer’s opinion, do not support the efficacy of lurasidone in the treatment of schizophrenia…
1.2 Risk Benefit Assessment: Efficacy has not been established in this NDA submission. The safety profile is more similar to typical antipsychotics with significant akathisia, hyperprolactinemia, parkinsonian-adverse events and dystonias; many of which are dose-related. Lurasidone does not appear to have significant adverse impact on metabolic indices (glucose, lipids, weight, etc.). Lurasidone may be associated with potentially significant hypersensitivity reactions. A comprehensive risk:benefit assessment is premature at this time.
I was also able to look at one of the studies that wasn’t quite ready for prime time [D1050233]. Even though those numbers may look good, there were some big question marks. There were 65 sites, 36 Non-US, but there was no breakdown between US/Non-US responses in the article. Even more troubling to me, there were four groups [Placebo, 80mg, 160mg, and Seroquel], 65 sites, and a total of 72% of 488 subjects who completed the study. That means that the average size for each group at any given site was (488 x 0.72) ÷ (65 x 4) = 1.4 subjects/group/site. I never heard of such a thing! There’s a way to test for that statistically [a General Linear Model with effects for country, site, and treatment]. In English, what I’m saying is that it’s almost as plausible that the variance is due to site differences as to treatment effect. They did an "ANCOVA at Week 6 LOCF endpoint with treatment and pooled center as fixed factors and Baseline value as a covariate", which doesn’t say country or site to me. But, and this is the point, I don’t have the numbers, so I can’t run that down. There’s also enough other strategic language in that article to make me plenty suspicious of the whole presentation. Study D1001002 had no results on clinicaltrials.gov and no publication, so I assume it was a bust.
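For anyone who wants to check the cell-size arithmetic above, here it is in a few lines. Every number is one quoted in the paragraph [488 randomized, 28% dropout, 65 sites, 4 groups] – nothing here comes from the raw trial data:

```python
# Back-of-the-envelope check of the average cell size in D1050233,
# using only the figures quoted above.
n_randomized = 488
completion = 0.72          # 28% dropped out
n_sites = 65
n_groups = 4               # placebo, 80mg, 160mg, Seroquel

completers = n_randomized * completion
avg_cell_size = completers / (n_sites * n_groups)
print(round(avg_cell_size, 1))  # → 1.4 subjects/group/site
```

With barely more than one completer per group per site, a site-by-treatment analysis has almost nothing to estimate from – which is why the missing US/Non-US breakdown matters.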
With all this information that I’ve been able to ferret out, I’m pretty sure that this is one of those situations where the data has been skewed and I doubt that Lurasidone is much of a contender as a drug for Schizophrenia. But I can’t really tell because we can’t have the data to definitively vet these studies. And now we have this month’s indication creep articles [or both…, creepy…] to think about, with even less to go on. It’s just really frustrating to read these three totally industry generated and industry authored articles in the American Journal of Psychiatry knowing how distorted the reporting has been with all of the Atypical Antipsychotics and not be able to independently corroborate their conclusions, no matter how many rocks we look under.
Not boring at all.
Do you think that the absolute numbers have meaning or should one only consider something like effect size? When the Meltzer study – the fifth one down – was published, I remember being impressed that the absolute differences in PANSS, even in the olanzapine arm, were not that impressive.
This just confirms that the FDA is not about clinical science. It is about setting the lowest bar for drugs to clear in order to enter the market. An FDA approval is not a Good Housekeeping Seal of Approval. The FDA doesn’t give weight to negative trials if just 2 positive trials can be submitted. The FDA doesn’t look at comparative efficacy. Thank goodness the FDA has no authority over the practice of medicine. Commercial propaganda notwithstanding, the real decision point is among professional peers. Hello! The professional opinion leaders have been bought off, while dunces like Steven Hyman and Thomas Insel maunder on about the need for close collaboration with industry.
Sandra,
Good point. I’m looking to see if I can find enough data to do effect sizes with 95% CI. Slow going.
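For what that computation might look like: a minimal sketch of Cohen’s d with an approximate 95% CI from summary statistics, assuming one could recover a pooled SD and group sizes from the reports. The numbers plugged in below are purely illustrative, not from any of the trials above:

```python
import math

def cohens_d_ci(mean_diff, sd_pooled, n1, n2, z=1.96):
    """Cohen's d with an approximate 95% CI from summary statistics.

    mean_diff: LS mean difference, drug vs placebo (e.g. PANSS points)
    sd_pooled: pooled standard deviation of the change scores
    n1, n2:    group sizes
    """
    d = mean_diff / sd_pooled
    # approximate standard error of d (Hedges & Olkin)
    se = math.sqrt((n1 + n2) / (n1 * n2) + d * d / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# illustrative inputs only -- not taken from any of the studies above
d, (lo, hi) = cohens_d_ci(mean_diff=-6.4, sd_pooled=18.0, n1=120, n2=120)
print(round(d, 2), round(lo, 2), round(hi, 2))
```

The catch, as the post keeps pointing out, is the pooled SD: the published articles often don’t report it, which is what makes this slow going.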
Bernard,
I agree with that. In this case, Dr. Alfaro didn’t think they had enough to even meet the low FDA Efficacy standards…
I’m more concerned that you would be watching figure skating 🙂
I jest.
Cracking post as usual.
Who the heck does have authority over the practice of medicine? Because that agency should wake up to what’s going on with polypharmacy and psych drugs.
Alto, the state medical licensing boards have authority over the practice of medicine. One of the mandates to these boards is to investigate complaints from patients.
Dr. Carroll, as I understand it, those boards will investigate a complaint against a doctor. If one has suffered an injury from psychiatric drug treatment, the doctor will almost always be exonerated because there are no standards of care when it comes to such treatment.
While going on the record has value, a patient can’t expect anything to come of such complaints.
The state medical boards don’t aggregate patient complaints to ascertain an epidemiological pattern or pattern of overprescription.
Or am I wrong?
Alto-
I am no expert, but I think you are correct. One is judged by one’s peers, and accepted community standards are an important part of that.
To your question, Alto, I would say regulation of the practice of medicine is a multi-layered system of checks and balances. The primary tool is local evaluation by professional peers. From my lifetime experience in academic medicine I can say that practice in that setting is like working in a fishbowl – everybody knows what is being done. If something is out of line then the residents or one’s academic peers or the nursing staff or even sometimes the janitor will call attention to it.
Over the years I have taken action on sexual boundary violations, other boundary violations such as financial impropriety involving patients, cutting corners in research ethics, self-interested manipulation of potential donors who lacked mens sana, and of course diminished professional capacity due to psychosis, depression, mania or early dementia, not to mention substance abuse. In addition I have needed to give leadership in matters such as banning antiquated treatments (carbon dioxide inhalations and insulin coma therapy). I also needed to give leadership on quality of care matters such as requiring mandatory peer review of diagnosis and management when an out-of-control psychotic inpatient had been held in protective isolation for 5 days. I took my clinical leadership duty seriously, even to the extent of having a copy of all discharge summaries sent to me – and I reviewed them all, over a thousand per year, sending frequent feedback to the attending psychiatrists on the service. In dealing with such issues, the best outcome was a quiet but firm and dispositive resolution. If it could be made a teaching moment, it was, accompanied by a cuff around the ears; if the person had to go, he went.
Outside academic centers there is necessarily less transparency, but the same principles apply. Professional peer pressure can come from local, state, and national medical societies. The state medical boards are tasked with responding to professional level or patient level expressions of concern. They don’t do epidemiologic studies of practice patterns and they don’t develop clinical guidelines – that is up to the professional societies. But they do investigate individual cases. I can recall being consulted on possible standard of care deviations – one involved prescription of amphetamines; another involved apparent misuse of ECT for profit.
A third, de facto, level of regulation is in the courts. I have testified a few times concerning the standard of care, about 50-50 for plaintiffs and defendant practitioners. The results of such cases eventually cycle back to the arena of clinical guidelines, but the courts don’t presume to instruct the professions – the courts are mindful that difficult cases make for bad laws.
Alto,
I can only second Dr. Carroll’s point. The medical board has surprised me in its ability to follow up and take action on physician misbehavior, whether reported by patients or medical staff.
However, I don’t miss your point about what might be called “specialty” misbehavior which is what I think you’re getting at. In the area of commitment or what is now the unusual case of forced medication, those cases are in the judicial system almost from the start, so they no longer get to the board. At least in Georgia, that’s all a legal affair. I was involved in that decision back in the day. Our feeling was that early judicial review was putting “sunlight” where needed, and that seems to have been a good decision.
But things like medication practices in a specialty as a class – you’re right, they don’t monitor that. It’s a one complaint at a time system, at least as I’ve seen it. So your statement, “The state medical boards don’t aggregate patient complaints to ascertain an epidemiological pattern or pattern of overprescription.” is correct in the way you mean it.
There is an exception, and it has to do with controlled substances [including stimulants, narcotics, and benzodiazepines]. They are on that like seagulls behind a shrimp boat. The prescribing habits of all physicians and the usage by all patients are monitored statewide. I can check the prescriptions for those drugs online for any patient I prescribe for. It essentially shuts down seeing multiple doctors for prescriptions. However, this close monitoring pertains only to the defined drugs of abuse. I recognize that’s not your point – just state-of-the-union reporting.
Yes, I understand the state board will take action if there’s a clear breach of medical practice by a particular doctor.
But, in the extremely common example of a psychiatrist misdiagnosing a severe adverse reaction as a psychiatric disorder — which I would consider incompetence — a state medical board would take no action because that’s par for the profession. Ugly, but true.
The state board system can’t contribute to identifying a pattern of bad outcomes and improving psychiatric treatment overall. The specialties are supposed to be self-monitoring. In psychiatry, there is no such monitoring. There are no performance improvement mechanisms.
For any given clinician, anything goes, prescription-wise, no matter what happens to the patient. To my mind, this is a major flaw in the profession, maybe even the central flaw. Monitoring patient outcomes would have been a corrective to bad research informing clinical practice.