a perfectly good nap…

Posted on Wednesday 4 February 2015

I first heard that PHARMA was shutting down its CNS drug development when Stephen Stahl let loose with a rant heard far and wide in August 2011 [see myopia – uncorrected…]. So I thought it was ironic that a few days later, I read in the American Journal of Psychiatry about a new Atypical Antipsychotic being introduced – Latuda® [see ought to know by now…]. On the by-line, that paper had seven Sunovion employees [manufacturer of Latuda®], one Quintiles employee [a Clinical Research Organization], and two KOLs on the Sunovion payroll [one of whom was a professional medical writer who may have written the paper]. Digging further into the FDA Approval documents, things looked odd from any angle [from or both…]:
No matter where I looked, there was something amiss, including Lurasidone bringing up the rear in a comparative meta-analysis of antipsychotic efficacy [the mother of meta…]. One thing I recall thinking back then was that the most functional part of Sunovion was their marketing department. Looking at clinicaltrials.gov, they had already started all of the trials needed to aim for new indications [eg Bipolar Disorder] before even being initially approved for Schizophrenia [see echo echo echo echo echo echo echo… and creepy…]. They’d learned the formula for success from their predecessors.

I didn’t give Latuda® much thought after that. I had noticed in their signature before-and-after ads that Latuda® apparently moves the part from one side of the head to the other like a mirror. So when one of those ads popped up, I always noticed. But otherwise, Latuda® was off my radar. And then it was February 2014: I looked at the American Journal of Psychiatry and there were two articles about Latuda® Clinical Trials in the same issue, along with an editorial about them. You guessed it, a Bipolar Disorder indication. Then, I wrote [from or both…]:
Both articles were glowing. I remembered how informative the Drugs@FDA site had been in looking over the original Latuda® approval in 2011, so I went looking for their new indication, but there was nothing there except the label change in January 2013 adding the new approval. No new Medical Review in sight. I’d had splendid success with Freedom of Information Act [FOIA] requests to the FDA. Within a couple of weeks, I’d gotten the package of documents for Prozac®, Paxil®, and Zoloft® from earlier requests along the way. So I thought I’d just put in an FOIA request and find out the back story on the Latuda® indication "creep." A couple of days later, I was writing this [on time…]:
    Tuesday, after a long morning working in the clinic, I was awakened from a recovery nap by a mid-afternoon phone call. It was the FDA calling. The lady on the phone told me she’d seen I’d made other requests, but they were for "older drugs" [I wasn’t liking how this was sounding so far]. She said it was different for recent approvals. I didn’t follow all of what she was saying, but she seemed to be trying to dissuade me from pursuing my request. She said it had to go through several processes, one of which was "disclosures," and there might be a charge. Did I still want to stay in the queue? By this time I was awake enough to say, "Sure. How long is the usual wait?" She said "18 to 24 months. You know, we get over 3000 requests every year." [long pause]. I said "Okay" [having given up with taking out my frustrations with big systems on small bureaucrats long ago]. But when I was fully awake and thought about it, I think I learned something, and have to revise my theory. My now fading enthusiasm for the FDA had been based on experiences where what I was requesting was old hat, and just required copying a disk from somebody else’s previous request. The fantasy that I could get the information on a recent approval in a timely fashion was naive. This drug is going to be well down the line in its patent life before I ever see that report, just like before.
So why am I writing about Latuda® right now? It’s because a reminder I set a year ago on my calendar after that phone call popped up to let me know that my FOIA request went in a year ago this month [and so far, silence from the FDA]. I’m not grandiose enough to think that the FDA has people sitting around poised to respond to my every whim. So I’m not surprised at the year of silence. And my calendar is dutifully set for another reminder at this time next year [which will be the advertised "24 months"]. And I recognize that my earlier successes were on older drugs that have probably been requested by big law firms involved in all the legal wranglings about Prozac®, Paxil®, and Zoloft® and were already available. So I’ll wait patiently to see if the information about the indication sprawl for Latuda® ever reaches our North Georgia mountains for me to peruse.

There’s a point to this narrative. I went looking for how Latuda® was doing on the market. What about sales? That took me to various business sites, and somewhere on that road I read that Latuda® prescriptions were up 29% since the Bipolar approval. I finally ended up on the Dainippon Sumitomo Pharma Co., Ltd. site [Sunovion is owned by this Japanese firm]. And going through the yearly financial reports, I came up with this table [ballpark figures]:

YEAR    US SALES
2011    $60 M
2012    $136 M
2013    $385 M
2014    $740 M

It looks like it’s headed for "Blockbuster" status next year [$1 B/YR]. So here’s a late-comer that dribbled out of the pipeline just as the spigot was being turned off. It’s competing with its atypical predecessors and the first-generation antipsychotics, all of which are available as less expensive generics. And it’s at the bottom of the efficacy list in the only meta-analysis that I know of that includes it [see the mother of meta…].

And yet it got through the FDA with an 11th hour intervention by the FDA Director. It had three published Clinical Trial articles and an editorial that made it into the prestigious American Journal of Psychiatry. So it seems to be having an unusually charmed life, and is selling like hotcakes. As I said, these Sunovion people really seem to know how to sell some drugs [Sales Force Report: A Walk on the Sales Side].

That point I said I was aiming for is that while Latuda® is zooming up the ladder in sales and usage, I’m measuring my ability to find out about its safety and efficacy literally in years. So long as that’s the case, it’s going to be the pharmaceutical industry’s sales force that will exert the major influence on which drugs are prescribed – rather than the plain clinical facts. There’s no reason in the world for the FDA data seen by the reviewers to need anything except a postage stamp to get it to anyone else qualified to take a look. If there’s commercially confidential information on a drug, don’t put it in the material submitted. If there’s data that needs to be anonymized, do it before the submission. The FDA has more important things to do than calling some old man in the middle of a perfectly good nap to try to talk him out of wanting what he wants. Leave the keeping-secrets business for the people in Langley, Virginia, where it makes sense…


Afterthought: When I looked back over this, I realized I left out something that belongs here. I think "new" has a high value. There are plenty of patients who have cycled through many of the available meds and are shopping for something new. And Latuda® came after an empty space – it was the last something new on the block, punctuating the dry pipeline meme. And I would recommend reading that article on drug rep technology [Sales Force Report: A Walk on the Sales Side]…

Mickey @ 9:55 AM

specula·rama…

Posted on Monday 2 February 2015


1boringoldman
Continuing Medical Education Syllabus

UNIT I: the breakthrough

UNIT II: the breakdown   

Of course 1boringoldman isn’t in the CME business and certainly can’t offer any kind of credits, but reading the articles in the series above [all freely available full text on-line] is definitely a valuable educational experience about any number of things.

UNIT I opens with a press release [1] written by an "intern at Stanford News Service" enthusiastically announcing a controlled study of the biological impact on daughters of depressed mothers, reporting morphologic changes in chromosomes, premature aging, depression, susceptibility to disease, and dysregulation of the HPA [Hypothalamic-Pituitary-Adrenal] axis. Next comes the article itself [2]:
by I H Gotlib, J LeMoult, N L Colich, L C Foland-Ross, J Hallmayer, J Joormann, J Lin and O M Wolkowitz
Molecular Psychiatry. doi: 10.1038/mp.2014.119.

A growing body of research demonstrates that individuals diagnosed with major depressive disorder (MDD) are characterized by shortened telomere length, which has been posited to underlie the association between depression and increased instances of medical illness. The temporal nature of the relation between MDD and shortened telomere length, however, is not clear. Importantly, both MDD and telomere length have been associated independently with high levels of stress, implicating dysregulation of the hypothalamic-pituitary-adrenal (HPA) axis and anomalous levels of cortisol secretion in this relation. Despite these associations, no study has assessed telomere length or its relation with HPA-axis activity in individuals at risk for depression, before the onset of disorder. In the present study, we assessed cortisol levels in response to a laboratory stressor and telomere length in 97 healthy young daughters of mothers either with recurrent episodes of depression (i.e., daughters at familial risk for depression) or with no history of psychopathology. We found that daughters of depressed mothers had shorter telomeres than did daughters of never-depressed mothers and, further, that shorter telomeres were associated with greater cortisol reactivity to stress. This study is the first to demonstrate that children at familial risk of developing MDD are characterized by accelerated biological aging, operationalized as shortened telomere length, before they had experienced an onset of depression; this may predispose them to develop not only MDD but also other age-related medical illnesses. It is critical, therefore, that we attempt to identify and distinguish genetic and environmental mechanisms that contribute to telomere shortening.
The study was funded by the NIMH [MH074849, $4½ M over 8 years], and Dr. Insel’s NIMH Director’s Blog [3] reported the study enthusiastically, ending with treatment implications of the findings:
    What can be done to reduce the risk of depression? In previous experiments, girls at risk for depression exhibited different patterns of brain activation during experimental mood regulation. In ongoing experiments, the Gotlib team is using neurofeedback to help these girls retrain their brain circuits and hopefully their stress responses. It will be a few years before we will know how much this intervention reduces risk for depression, but anything that prevents or slows the telomere shortening may be an early indication of success.
UNIT I ends with a short report in the Mainstream Media quoting Dr. Insel [Washington Post [4]]…

UNIT II? Enter stage left James Coyne, blogger for Mind the Brain on PLoS. In his two-part series [5] [7], he takes on this study and all of its vicissitudes:
    In this two-part blog post, I’ll document this process of amplification of the distortion of science from article to press release to subsequent coverage. In the first installment, I’ll provide a walkthrough commentary and critique of a flawed small study of telomere length among daughters of depressed women published in the prestigious Nature Publishing Group journal, Molecular Psychiatry. In the second, I will compare the article and press release to media coverage, specifically the personal blog of NIMH Director Thomas Insel. I warn the squeamish that I will whack some bad science and outrageous assumptions with demands for evidence and pelt the study, its press release, and Insel’s interpretation with contradictory evidence.  I’m devoting a two-part blog to this effort. Bad science with misogynist, mother bashing assumptions is being touted by the Director of NIMH as an example to be followed.
My reason for linking rather than summarizing the articles in my pretend C.M.E. format is that I wouldn’t even try to summarize Coyne’s posts. He surgically deconstructs this article and its surrounding fanfare so thoroughly that a synopsis wouldn’t do it justice. It’s more than worth the time it takes to read his posts [I think he must be as angry as I am about being jerked around with speculations-as-fact by journal authors and persons in high places like NIMH Director Tom Insel]. We are in Coyne’s debt for his thoroughness. Just read them [5] [7].

Bernard Carroll is well known to us as a commenter on this blog and for a myriad of other reasons. He responded to Coyne’s first post to clarify [and debunk] their claims about HPA dysregulation [6]. But it is his second comment at the end of Coyne’s series that bears emphasis [8]:

    NIMH Director Thomas Insel should stick to his knitting. He has no business setting scientific directions for the field, as with the RDoC mandates. I suppose it is an occupational hazard of administrators to prefer a top-down management style. Problem is, science is a bottom-up business. His shallow hyping of the Gotlib-Wolkowitz report is a further reason to disqualify him.

    Dr. Insel is on record with an amateurish view of how clinical science moves forward. See PubMed #22869033. When he says we should sidestep the issue of a gold standard he plainly doesn’t understand the iterative process of nosology or the related concept of convergent validity. He doesn’t seem to understand the need to advance nosology by incorporating biomarkers along with clinical symptoms in diagnostic definitions, as happened in general medicine. And he shows no understanding of the need for a Bayesian perspective on the interpretation of biomarkers as well as of definitional symptoms. When he proposes tracking biological markers across current diagnostic domains he ignores the central issue of pathophysiology. Consider where we would end up if, for instance, we lumped Cushing disease together with juvenile onset diabetes mellitus, Type II diabetes mellitus, severe psychological or physiological stress, metabolic syndrome, anorexia nervosa, and pregnancy on the basis of an abnormal glucose tolerance test, which these all can display.

    The RDoC matrix that Dr. Insel insists should be adopted in new grant proposals to NIMH is a classic product of armchair scholastic theorizing. We are already seeing researchers bend like pretzels to make their proposals and journal submissions appear RDoC-friendly. The effect is contrived, to put it mildly. We can expect that the RDoC matrix eventually will go the way of the eccentrics and epicycles of 16th Century astronomy. The pressing issue is how much damage will be done before that happens? Director Insel and his lieutenants at NIMH need to get out of the way – their presumptuousness is breathtaking.

On this blog, I’ve spent my time vetting industry-sponsored Randomized Clinical Trials of medications that have been deceitfully presented for commercial gain. But this is different. There’s no commercial gain that I can see in the article by Gotlib et al [except maybe getting the next grant] nor in Insel’s glowing, uncritical commentary – just furthering personal academic, institutional, or ideological agendas. I would go further than Dr. Carroll in looking at Insel’s thirteen-year tenure as Director of the NIMH, beyond his controlling top-down management style. I think he’s held the NIMH hostage to his own Clinical Neuroscience and Translational Medicine memes throughout, jumping among fads [like his current obsession with neural circuits eg "the Gotlib team is using neurofeedback to help these girls retrain their brain circuits and hopefully their stress responses."]. As I’ve said before, he appears to think that "Director" means "personally controlling the direction of" rather than "directing an environment where" our creative and productive scientists can work. I would suggest a more definitive solution than "get out of the way." I’d vote for just "get out"…
Mickey @ 4:54 PM

not their round…

Posted on Saturday 31 January 2015

When I open a medical journal and look at an article, there’s something called a by-line under the title that lists the authors of the article, usually with some little subscripts that lead to a footnote that lists the academic institutions they represent. Sometimes there’s one name. Sometimes it looks like a small army. But until a few years ago, I assumed that those were the people who wrote the article. And it’s also my assumption that the article wouldn’t have been published in a peer-reviewed journal if it wasn’t submitted as written by a peer.

Now, we’ve arrived at a point in time where the published results from a number of these articles are being questioned, and the broad consensus is that the raw data from these articles needs to be reanalyzed by independent investigators. It’s not just an academic interest that drives this call for reanalysis. It’s because these drugs remain in wide use based on the efficacy and safety results published in those articles – often certified by the FDA in the drug approval process.

And yet when we read about the process of making this data available in articles such as the one below, nowhere are the authors on that by-line mentioned, nor the academic institutions they represent, nor the peer-reviewed journals that published the original articles – nor, for that matter, do we hear about the clinical research centers where the studies were conducted, nor the clinical research organization that oversaw the studies, nor the medical writing companies that wrote the articles. All we hear about are the sponsoring pharmaceutical companies we know collectively as PHARMA:
Pharmalot: WSJ
By Ed Silverman
Jan 29, 2015

With considerable fanfare, the Institute of Medicine released a long-awaited report last month praising the virtues of sharing clinical trial data. This is an important, but also contentious issue because without access to such information it can be virtually impossible for independent researchers to verify results that can lead to improved treatments, better health care and lower costs. Although the report is nothing more than a set of recommendations, the IOM effort is, nonetheless, seen as a needed step toward prodding industry and academia to release detailed data. Many drug makers, in particular, have been reluctant to grant much, if any, access over concerns about relinquishing trade secrets and compromising patient privacy, among other things. But some have taken steps to do so.

There is, however, one area where some experts say the report falls short and underscores ongoing difficulties in sharing trial information. While the IOM offers specific suggestions for releasing data from future trials, the institute did not provide a framework for obtaining retrospective data. This remains an unresolved issue for the pharmaceutical industry, especially in light of various safety scandals and ensuing litigation that revealed data for some products were never fully published or disclosed. “Clinical trials underpin the approval of medicines that are in use today, so if we don’t have access to that older data, how can we verify accuracy and conduct independent analyses?” says Peter Doshi, an assistant professor of pharmaceutical health services at the University of Maryland. “It’s a basic issue surrounding data transparency.”

One IOM committee member, Ida Sim, a professor at the University of California, San Francisco, says the report does suggest sharing retrospective data, but on a case-by-case basis and for studies that may influence clinical care. Why? Informed consent must be obtained from people who participated years earlier. Often, researchers have scattered. “We have to be pragmatic and realistic.”
I’ve made my argument about "informed consent" already [a special pleading]. But as to "researchers have scattered," I presume they’re talking about the authors on the by-line. In most cases, they weren’t really involved in the first place [speaking of being "pragmatic and realistic"]:
Some say convenience and liability are also factors. “Obviously, it’s less burdensome for companies to only have to do so going forward, especially if there’s a product on the market that’s raising concerns,” says Diana Zuckerman, who heads the National Center for Health Research, a non-profit think tank. “I see it as a way to protect companies from disclosing embarrassing and expensive information.”
The "burden" has been born already by doctors and patients who have been operating on questionable information. More to the current point, the burden "going forward" will be born by those in the future taking and prescribing these medications without more accurate efficacy and safety profiles. And speaking of "embarrassing," how do they think the prescribers now feel? And, by the way, who has born the "expense" of these medications in the past or "going forward?"
Murray Stewart, however, argues progress is being made. As chief medical officer at GlaxoSmithKline, he oversees an effort to disclose data going back 15 years and has convinced nine other drug makers to similarly provide access through a jointly run website. But most will not provide retrospective data. “We’ve shown it can be done,” he says. “But it’s a long journey” getting others on board. The inconsistent approaches underscore a key issue – finding a universal gatekeeper to vet researcher requests and corporate concerns. Harlan Krumholz, a Yale University cardiologist, is trying to offer a template with the Yale Open Data Access project, which last month signed an agreement with Johnson & Johnson to provide data for diagnostic and device products, but only as of 2014. A deal that J&J signed with YODA last year covers drugs and does not have a cut-off date for data requests. So far, J&J has not refused any request, but Krumholz says any J&J refusal would be disclosed on the site. “We are open science advocates,” he says, “and are not interested in giving anyone a pass.”

Meanwhile, some say the FDA has been a hindrance. While European regulators have a policy that allows researchers to make requests, the FDA does not. An FDA spokeswoman says the agency published a notice seeking comment about ways to make trial data available. But that was nearly two years ago. She says reactions are being evaluated and next steps are being considered. “I think FDA should make as much information available as possible, as soon as possible, particularly for approved drugs, including any safety information that might not be in the primary trials,” says Steve Goodman, a Stanford University School of Medicine professor. “What we need to avoid is the situation where the FDA is in possession of information that could materially impact or alter the assessment of the relative benefits and harms of a drug, and that is not easily available to other scientists or the public.” And “information they have on ‘failed’ drugs can often have an impact on marketed drugs or on other new drugs in the same class, so such information can be quite important as well, even though currently little or no information about such drugs can be divulged.”
The fact that the FDA has colluded with industry in the past in keeping data secret is not really our concern here. The only real question is when they are going to set up a mechanism to right this previous wrong. That they are set in their ways or don’t want to take the time isn’t really a consideration here. The damage done trumps their torpor.
    ra·tio·nal·ize   rash-na-liz
    verb

      : to think about or describe something [such as bad behavior] in a way that explains it and makes it seem proper, more attractive, etc.

All of these arguments are rationalizations that bear little relationship to any reality worth considering. PHARMA’s reasons for not wanting us to see what they did are obvious and immaterial. It’s time to stop arguing with them and get on with the business of getting what we need. This is not their round to win…

Afterthoughts: I left out something that should’ve been included. I’m on a team that has been granted access to the raw data from a "legacy" RCT by GSK. The anonymized data is well anonymized. The issue of requiring new consent forms never came up, and it shouldn’t have. There’s no personally identifying information present. My other afterthought is simple. If they don’t want to give us access to the old Clinical Trial Data, they can just repeat the studies with the proper consent forms, reasonable oversight, and subsequent Data Transparency. There’s ample evidence to justify insisting that contested RCT reports need to be reevaluated "going forward" no matter how it happens…
Mickey @ 10:22 AM

polythetic polymorphism…

Posted on Thursday 29 January 2015

In latter day STAR*D I…, I was looking at a study [Depression is not a consistent syndrome: An investigation of unique symptom patterns in the STAR*D study] where the authors demonstrated the wide diversity of symptom profiles among the subjects enrolled in the STAR*D Clinical Trial, yet all carried the same diagnosis [Major Depressive Disorder]. The author, Eiko I. Fried, kindly commented and added a link to the full text PDF. So here’s the update:
by Eiko I. Fried and Randolph M. Nesse
Journal of Affective Disorders. 2014 172C:96-102.
Also in the comments, Bernard Carroll pointed out that the problem came when they initially chose a "disjunctive format," saying:
    At my last count there were 19 possible symptoms in DSM-IV for major depressive disorder, with only 5 needed for the diagnosis. You do the math…
And then Charles Olbert also commented, adding another interesting article from his group coming at the same issue from a different direction. And they actually did a lot of math. But first, a brand new word for me:
    po·ly·the·tic  pä-lē-ˈthe-tik
    adjective

      Relating to or sharing a number of characteristics which occur commonly in members of a group or class, but none of which is essential for membership of that group or class.

    • A monothetic class is defined in terms of characteristics that are both necessary and sufficient in order to identify members of that class. This way of defining a class is also termed the Aristotelian definition of a class.
    • A polythetic class is defined in terms of a broad set of criteria that are neither necessary nor sufficient. Each member of the category must possess a certain minimal number of defining characteristics, but none of the features has to be found in each member of the category. This way of defining classes is associated with Wittgenstein’s concept of "family resemblances."
It’s a general term that describes the kind of classification chosen for the DSM diagnostic system – one where there is a symptom list, and the diagnosis is made by meeting some preset number or configuration of those symptoms:
by Charles Olbert, Gary Gala, and Larry Tupler
Journal of Abnormal Psychology. 2014 123[2]:452-62.

Heterogeneity within psychiatric disorders is both theoretically and practically problematic: For many disorders, it is possible for 2 individuals to share very few or even no symptoms in common yet share the same diagnosis. Polythetic diagnostic criteria have long been recognized to contribute to this heterogeneity, yet no unified theoretical understanding of the coherence of symptom criteria sets currently exists. A general framework for analyzing the logical and mathematical structure, coherence, and diversity of Diagnostic and Statistical Manual diagnostic categories [DSM-5 and DSM-IV-TR] is proposed, drawing from combinatorial mathematics, set theory, and information theory. Theoretical application of this framework to 18 diagnostic categories indicates that in most categories, 2 individuals with the same diagnosis may share no symptoms in common, and that any 2 theoretically possible symptom combinations will share on average less than half their symptoms. Application of this framework to 2 large empirical datasets indicates that patients who meet symptom criteria for major depressive disorder and posttraumatic stress disorder tend to share approximately three-fifths of symptoms in common. For both disorders in each of the datasets, pairs of individuals who shared no common symptoms were observed. Any 2 individuals with either diagnosis were unlikely to exhibit identical symptomatology. The theoretical and empirical results stemming from this approach have substantive implications for etiological research into, and measurement of, psychiatric disorders.
When I say they actually did a lot of math, I wasn’t kidding. They calculated the theoretical numbers of symptom clusters possible and the number of possible disjoint pairs [two diagnosable cases that shared no symptoms] and came up with some impressive numbers. Then they looked at two large databases and showed that this kind of heterogeneity was present in nature, not just in a math book. There’s no way to summarize their study in a simple blog post, but I was impressed. Here’s their lone figure illustrating a disjoint in MDD:
And I mangled their theoretical table to fit, primarily to show that the possible disjointed diagnoses aren’t uniform, but hit some pretty important disorders [eg MDD]:
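To get a rough feel for where numbers like that come from, here’s a minimal sketch of the counting involved [my own toy version, not Olbert et al’s actual set-theoretic framework]. It counts how many distinct symptom combinations satisfy a bare "at least m of n symptoms" rule and asks whether two qualifying combinations can be completely disjoint – first for the textbook 5-of-9 DSM formulation of MDD, then for Dr. Carroll’s count of 19 disaggregated symptoms:

```python
from math import comb

def n_qualifying_profiles(n_symptoms: int, n_required: int) -> int:
    """Count the distinct symptom combinations that satisfy a bare
    polythetic rule: 'at least n_required of n_symptoms are present'."""
    return sum(comb(n_symptoms, k) for k in range(n_required, n_symptoms + 1))

def disjoint_pairs_possible(n_symptoms: int, n_required: int) -> bool:
    """Two people can share the diagnosis yet share no symptoms only if two
    non-overlapping qualifying symptom sets fit inside the symptom pool."""
    return 2 * n_required <= n_symptoms

# Textbook DSM formulation of MDD: at least 5 of 9 criteria
# (ignoring the requirement that depressed mood or anhedonia be among them)
print(n_qualifying_profiles(9, 5))     # 256 qualifying symptom profiles
print(disjoint_pairs_possible(9, 5))   # False: any two profiles must overlap

# Dr. Carroll's count: 19 disaggregated symptoms, 5 needed ("you do the math")
print(n_qualifying_profiles(19, 5))    # 519,252 qualifying symptom profiles
print(disjoint_pairs_possible(19, 5))  # True: disjoint diagnoses become possible
```

Even this toy version shows the point: with the nine compound criteria, two diagnosable cases are forced to overlap, and the truly disjoint pairs only become possible once the compound criteria are split into their constituent symptoms.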
It’s not hard to figure why the DSM framers went for a system like this. Psychiatry has always been criticized for its problems with definition. My take on that is that it is in the nature of the beast. But in the 1970s, they were out to get objective, get medical. The downside of this polythetic classification is obvious. It gives the illusion of descriptive unity where, in many cases, none exists. And that pseudo-unity has been exploited to the hilt in the Clinical Trial era where drugs are approved for specific conditions – most obvious in the area of Major Depressive Disorder. Fried et al and Olbert et al have done a good job in different ways to put this heterogeneity down in black and white. My guess is that one could show the same heterogeneity with a factor analysis of the other metrics like HAM-D, BDI, MADRS, CDRS, IDS, QIDS, etc.

Long ago, before the blood pressure cuff, doctors approximated blood pressure readings by feeling the "hardness of the pulse." Before the thermometer, fever was measured with the back of the hand. Anemia was diagnosed by looking at the color on the back of the eyelid, and diabetes by tasting the urine. You do what you can do until something better comes along. What’s important is to know the precision and limits of the instrument, and not make more of it than it has to offer. The symptomatic, descriptive system worked in a few places, but was heavily overvalued in others. That was apparent on day one, yet it got reified and deified early on for reasons other than clinical accuracy. Worse, it was used by many to imply unproven unity and even etiology.

I appreciate the commenters adding to this discussion…
Mickey @ 8:47 PM

a must-read article!…

Posted on Wednesday 28 January 2015

An Internal Medicine residency in a charity hospital in Memphis, Tennessee in the 1960s was an encounter with Hypertension [High Blood Pressure] of the first kind. The patient population was weighted towards African Americans and we served not only the urban poor of Memphis, but also the rural areas of West Tennessee, Eastern Arkansas, and the Northern Mississippi Delta. If you wanted to learn about Hypertensive Disease, it was the place to be. Besides the ethnic demographic, there was another factor. Not too far from the hospital, there was a huge supermarket, open 24 hours every day, that served the same region. The dried bean section was stacked floor to ceiling with 25# sacks of White Beans, Pintos, and Black-Eyed Peas. The produce department was mostly greens – Collards, Mustards, Turnip Greens. In the Old South, social status was determined by what part of the pig you ate ["High on the Hog"], and that tradition was apparent at the meat counter: a few hams, butts, and shoulders in a small section at the end, but mostly every other part of a pig, all salt cured – Tails, Feet, Ears, Knuckles, Hocks, Fatback, all thickly encrusted with salt [that you could taste just standing next to the cases]. It was the flavoring for those beans ‘n greens. So it wasn’t just ethnicity working on the Blood Pressure…

  • Malignant Hypertension: The patients would arrive in the ER delirious or in coma with outrageous blood pressure readings. They would have retinal hemorrhages and other vascular eye signs. Kidney function would be compromised. A "stroke" was imminent if it hadn’t happened already. It was a medical emergency, and at the time, the drug of choice was parenteral Reserpine, which did the job. The easiest transition was to oral Reserpine, but the problem was that some fraction of the patients developed a profound melancholic depression [one of the phenomena that led to the "catecholamine hypothesis" of depression]. So we used the other available drugs of the time – all with plenty of side effects as part of the package.
  • Hypertensive Cardiovascular Disease: These patients were more common. They showed up in the clinics and ER with congestive heart failure of varying intensity: swollen legs, enlarged livers, shortness of breath, sleeping sitting up – with enlarged hearts and Left Ventricular Hypertrophy on EKG [big muscular hearts]. The symptoms cleared with lowering the blood pressure, but they usually got digitalis and diuretics as well.
  • Hypertension: Back then, Hypertension [asymptomatic] was defined as a Diastolic Blood Pressure consistently over 100 mmHg. And there was plenty enough of that around to treat where I was.
My next way station was an Air Force Hospital in the UK – a couple of bases populated with a very different, predominantly Caucasian group [also younger] [also no supermarket like the one in Memphis]. I never saw a case of Malignant Hypertension and not many with Hypertensive Cardiovascular Disease. It was around that time that Treatment Guidelines were beginning to come from the various specialty organizations. The American Heart Association came out with new guidelines for the treatment of High Blood Pressure, and lowered the definition to a Diastolic of 90 mmHg. I didn’t like it. The drugs made people impotent and feel bad. I gave them BP Cuffs to take home. That cured a lot of > 90 mmHg cases ["white coat" syndrome]. I preached weight loss and preached against salting before tasting. That cured even more. I got a wide BP cuff for fat arms. That was a winner too. I just didn’t feel right treating only a number – making people sick [but I felt guilty when I didn’t]. After that, I was back in Atlanta at a Charity Hospital similar to the one in Memphis training in Psychiatry. When a delirious patient got triaged to the psych floor, the first thing I did was a BP and rolled the Malignant Hypertension cases back downstairs myself to make sure they were seen quickly. Later, when I ran the psych ER, I convinced the medical ER to do a BP before triage on obtunded patients.

And then I forgot about it. Psychotherapist types aren’t in the BP business. And then I got old, and all my friends [and my wife] were on something for BP. Hers was easy. A home BP cuff cured her [70s and 80s]. One day, I thought seriously about it. I was tired of feeling guilty for insisting on hard evidence before following the guidelines [now 140/90, with a Diastolic BP 80-89 as borderline or pre-hypertension]. But what I realized is that there were several things about this that were important. First, I just didn’t believe it. I read all the long term studies, but I wasn’t impressed. Second, I was a treat-the-sick doctor, not a healthy-lifestyle doctor. Had I stayed in Internal Medicine, I guess I would’ve hired me a healthy-lifestyle nurse practitioner to specialize in seeing to that side of things. It just wasn’t in me to spend a lot of time keeping up with the gajillion guidelines that now pour out of our t.v. sets and journals, and I’m not sure I believed what I read a lot of the time anyway. My point is not about your BP meds, or my rebellious streak, it’s about the limits of population studies, statistics, and the relief I felt reading this article:
New York Times
[infographic: When 2,000 people take a daily aspirin for two years, 1 heart attack is prevented and 4 heart attacks are not prevented. People at risk for a first heart attack are often recommended to take aspirin daily to prevent it; only a very few will actually see this benefit, and there’s no way to know in advance who.]
By Austin Frakt and Aaron E. Carroll
JAN. 26, 2015

In his State of the Union address last week, President Obama encouraged the development of “precision medicine,” which would tailor treatments based on individuals’ genetics or physiology. This is an effort to improve medical care’s effectiveness, which might cause some to wonder: Don’t we already have effective drugs and treatments? In truth, medical care is often far less effective than most believe. Just because you took some medicine for an illness and became well again, it doesn’t necessarily mean that the treatment provided the cure.

This fundamental lesson is conveyed by a metric known as the number needed to treat, or N.N.T. Developed in the 1980s, the N.N.T. tells us how many people must be treated for one person to derive benefit. An N.N.T. of one would mean every person treated improves and every person not treated fails to, which is how we tend to think most therapies work. What may surprise you is that N.N.T.s are often much higher than one. Double- and even triple-digit N.N.T.s are common.

Consider aspirin for heart attack prevention. Based upon both modifiable risk factors like cholesterol level and smoking, and factors that are beyond one’s control, like family history and age, it is possible to calculate the chance that a person will have a first heart attack in the next 10 years. The American Heart Association recommends that people who have more than a 10 percent chance take a daily aspirin to avoid that heart attack.

How effective is aspirin for that aim? According to clinical trials, if about 2,000 people follow these guidelines over a two-year period, one additional first heart attack will be prevented. That doesn’t mean the 1,999 other people have heart attacks. The fact is, on average about 3.6 of them would have a first heart attack regardless of whether they took the aspirin. Even more important, 1,995.4 people would never have a heart attack whether or not they took aspirin. Only one person is actually affected by aspirin. If he takes it, the number of people who remain heart attack-free rises to 1996.4. If he doesn’t, the number remains 1995.4. But for 1,999 of the 2,000 people, aspirin doesn’t make any difference at all.

Of course, nobody knows if they’re the lucky one for whom aspirin is helpful. So, if aspirin is cheap and doesn’t cause much harm, it might be worth taking, even if the chances of benefit are small. But this already reflects a trade-off we rarely consider rationally. [And many treatments do cause harm. There is a complementary metric known as the number needed to harm, or N.N.H., which says that if that number of people are treated, one additional person will have a specific negative outcome. For some treatments, N.N.T. can be higher than the number needed to harm, indicating more people are harmed than successfully treated.]

Not all N.N.T.s are as high as aspirin’s for heart attacks, but many are higher than you might think. A website developed by David Newman, a director of clinical research at Icahn School of Medicine at Mount Sinai hospital, and Dr. Graham Walker, an assistant clinical professor at the University of California, San Francisco, has become a clearinghouse of N.N.T. data, amassed from clinical trials…
I, of course, immediately raced to Dr. Newman’s NNT website, and there it was:
It wasn’t pleasure in confirmation I felt. It was relief. I guess I felt like I was supposed to believe the AHA guidelines and was conflicted that I didn’t. Was I just averse to giving those make-you-sick meds? Wanting to be a nice guy? So I felt genuine relief that my persona wasn’t in front of my doctoring in this case.

The article is about the simplest of all statistics. If there’s a Clinical Trial where 40% respond to placebo and 50% respond to the drug, then the NNT = 1 ÷ (0.50 – 0.40) = 10. That literally means "you have to treat 10 cases to get 1 responder." Actually, 5 would respond, but only 1 would be because of the treatment. The other 4 were going to respond anyway. What could be simpler? And what could be more telling? The NNT is one of a family of measures of the Strength of the Effect of a treatment [see an anatomy of a deceit 3…].
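Just to make that arithmetic concrete, here’s a minimal sketch of the calculation – the 50%/40% figures are the hypothetical trial above, and the aspirin figures are the ones quoted from the Frakt and Carroll piece:

```python
def nnt(absolute_risk_difference: float) -> float:
    """Number needed to treat: the reciprocal of the absolute risk difference
    (the treatment arm's response/event rate minus the control arm's)."""
    if absolute_risk_difference <= 0:
        raise ValueError("no benefit over control - NNT is undefined")
    return 1.0 / absolute_risk_difference

# The hypothetical trial above: 50% respond on drug vs 40% on placebo
print(round(nnt(0.50 - 0.40)))   # 10 -> treat 10 patients to get 1 extra responder

# The aspirin example quoted above: about 4.6 of 2,000 untreated people have a
# first heart attack over two years vs 3.6 of 2,000 who take daily aspirin
print(round(nnt(4.6 / 2000 - 3.6 / 2000)))   # 2000 -> treat 2,000 for two years
                                             #         to prevent 1 heart attack
```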

I’m a Stats-Savvy type, but I had never heard of these Strength of Effect measurements until I started to look at RCTs a few years ago and a mentor pointed me to them. In a modern world, an RCT that doesn’t report a Strength of Effect index right there next to the p value should be suspected of hiding something. This is a must-read article!

COI Statement: My healthy-lifestyle spouse has been out of town for a week and a half. For supper tonight, I had a NY Strip Steak, and there’s a Crock-Pot on the counter with navy beans, onions, and liberal rashers of salty bacon for tomorrow. Culture is hard to transcend!
Mickey @ 9:32 PM

a special pleading

Posted on Tuesday 27 January 2015

In the days when those ancients first discovered the rules of logic and logical argument, they must’ve thought they’d found the path to determine absolutes, something like the truth. Alas, it was short-lived, because the senator on the other side of the forum rose and eloquently used those self-same rules of logic and logical argument to reach a diametrically opposite conclusion. And so the march began that led to laws, the legal profession, courts of law with their judges and juries, our legislatures, etc. – all of those institutions involved in parsing a solution out of human disagreement before warring people end up shooting at each other. And somewhere along this path, there was a new class in the study of logic called logical fallacies – situations where logical arguments are chronically distorted or misused.

I wrote the IOM and sent them my blog post back on track…, in which I was arguing for Data Transparency being retrospective, including the legacy trials [older trials]. But the response suggested that they thought I was being critical of their work [the Institute of Medicine]. Actually, I was tickled pink that they weighed in, but my ending comment was, perhaps, accusing:
    So I would encourage Dr. Sim and her colleagues to reconsider playing into this industry manufactured argument and helping medicine get things back on track…
So let me be very public in saying that I was tickled pink that the [IOM] weighed in… and I was further encouraged by this response to another comment:
    1bom: …we can look at the raw data from the original trials. And fortunately, the sleight of hand occurred primarily in the analytic and publication processes that came after the blinds were broken. So those legacy trials are the very ones that need to be reanalyzed and meta-analyzed by independent investigators playing with a full deck. Without an accurate and very public re-appraisal, the problem is going to be perpetuated for decades.
    response: The IOM report fully endorses this type of reasoning for prioritizing the sharing of certain legacy trials. It just wasn’t up to our committee to call out which legacy trials in particular, or to propose a process for prioritization.

But in thinking about all of this, I realized that my argument flies mighty close to one of those logical fallacies I was mentioning – a special pleading. It’s one of those fallacies our poor judges have to listen to day after day. It goes something like this: "I know that the law says «something the law says», but it doesn’t apply to me because «something their lawyer made up to say»." And, if I’m honest, I’m really arguing that although I can see why subject confidentiality might concern the committee in releasing individual patient data, the legacy Clinical Trials in psychiatry are a special case that justifies whatever measures are necessary to protect subject confidentiality and make that data available to independent investigators. I’m arguing that the distortion of scientific data in the psychiatric drug literature was so blatant, so widespread, and so damaging that it needs to be made available to the medical community at large for systematic examination for any number of reasons – at any cost.

The most practical reason is that these medications are in heavy current use throughout medicine and will be for some time to come. Doctors need to know what they’re prescribing and patients need to know what they’re taking. And we don’t. Nobody is going to fork over the money it would take to restudy these drugs, but fortunately, there’s a wealth of raw data from the pre-blind-breaking parts of these legacy studies that would tell us a lot about what we need to know. The second reason is equally important in the long run. While it’s understandable why regulatory agencies landed on Clinical Trials as a way of certifying both efficacy and adverse effects of drugs, it is obviously a special pleading of its own that doesn’t touch the depth of knowledge and information gained from clinical experience – something we need to find a better way to take advantage of than we’re doing now. We need to look long and hard at the whole system to make sure this kind of misuse of science is not driving the practice of medicine. What happened in the realm of psychiatric medicines in the last thirty years more than justifies a special pleading. Here’s what the Nizkor Project [a resource for understanding fallacies] says about a special pleading:
    "From a philosophic standpoint, the fallacy of Special Pleading is violating a well accepted principle, namely the Principle of Relevant Difference. According to this principle, two people can be treated differently if and only if there is a relevant difference between them. This principle is a reasonable one. After all, it would not be particularly rational to treat two people differently when there is no relevant difference between them."
The pharmaceutical industry took advantage of the subjective nature of psychiatry and the fact that it was poorly supported, unfortunately engaging many in the profession. The effect on patients, the profession, and the drain on available funding for legitimate research efforts is now very apparent. It has happened elsewhere in medicine, but in sheer magnitude, psychiatry is different. Full Data Transparency goes directly to the heart of the problem. Thus, my special pleading…
Mickey @ 7:22 PM

latter day RCTs – the re in re·search

Posted on Monday 26 January 2015

I sometimes refer to the years since 1980 as the age of antidepressants or the age of psychopharmacology, but I would be closer to the mark if I called it the age of clinical trials. Not only were the pharmaceutical companies turning them out, the NIMH [then under Director Steven Hyman] was regularly funding them. Here are some of the big ones from that time period:

LARGE NIMH CLINICAL TRIALS


STEP-BD Systematic Treatment Enhancement Program – Bipolar Disorder NCT00012558
CATIE Clinical Antipsychotic Trials of Intervention Effectiveness NCT00014001
STAR*D Sequenced Treatment Alternatives to Relieve Depression NCT00021528
TORDIA Treatment of SSRI-Resistant Depression In Adolescents NCT00018902
TADS Treatment for Adolescents with Depression Study NCT00006286

In the last two posts [latter day STAR*D I…, latter day STAR*D II…], I was looking at several independent studies using data from the STAR*D trial to address issues not central to the study itself. I expect that there’s a lot more mileage in that approach, plus, in some cases, a much needed re-analysis of the original research question [see significant III… for an example]. But the number of NIMH sponsored trials pales in the face of the industry run and funded clinical trials – and the same points apply in terms of both re-purposing and re-analyzing that data.

In back on track… I was arguing that it’s the legacy RCTs that actually need to be included in the Data Transparency programs because the out-of-patent drugs are going to be in use for a very long time, and much of what we think we know about them is suspect. The argument about Commercially Confidential Information obviously falls by the wayside with out-of-patent drugs. And the more I think about it, so does the argument about patient confidentiality for reasons I’ve already mentioned. I would suggest that the Institute of Medicine, the NIH, the EMA, the FDA, etc. begin to have committees and meetings looking into how to effectively anonymize the data rather than succumbing to the industry’s co-opting medical confidentiality as an excuse for continued secrecy. There’s a wealth of important medical information locked away in file drawers that needs inspecting, harvesting.

I don’t think we [psychiatrists] knew a lot about RCTs in those medicalizing days. I sure didn’t. And by the 1990s, our journals were filled with RCTs. It seems in retrospect that they quickly became the currency of the land, fitting right in with the emphasis on biomedical treatment and psychiatry’s new preoccupation with evidence-based medicine. I doubt that it ever occurred to me or many of us that they were financed by PHARMA, conducted by contract Clinical Research Organizations, or that the authors on the byline didn’t do the study, the analyses, even the writing. I only thought about those things a decade or more after the fact. I expect most of us looked at the graphs of rating scale scores like they were precise chemical measurements rather than results from subjective questionnaires or raters’ opinions. We looked at p rather than NNT or Effect Sizes. In the process of medicalizing, we took on the trappings of medical science too quickly without getting in up to our elbows and evaluating the instruments that were directing us. My point is that we were naive, gullible, ill-prepared to critically review what we were reading – and it showed in our performance.

So, at least in psychiatry, there’s more at stake than just the individual RCTs from the last thirty years. We need to re·search these studies to learn how to critically evaluate their content and find their place in making good clinical decisions. In my mind, that’s another strong reason why we need Data Transparency to include access to the raw data from those studies we’ve been reading about in our journals all these years. At least in our specialty, having independent teams re-evaluating that information is an important piece of our path to learning what we needed to know in the first place about the ins and outs of medical science [but didn’t]. We drank the Kool-Aid…
Mickey @ 8:30 PM

latter day STAR*D II…

Posted on Monday 26 January 2015

Scales like the HAM-D, BDI, MADRS, CDRS, IDS, QIDS, etc. are designed to quantify the gamut of depressive symptoms. They’re either administered by a trained rater or self-administered. And they’re used both to certify diagnosis and to follow the progress of treatment in Clinical Trials of MDD [Major Depressive Disorder]. The Q-LES-Q was developed for the second function – following treatment – to measure the subjective experience [Quality of Life] rather than changes in the more objective depressive symptoms. It was collected periodically in the STAR*D Trial:
by IsHak WW, Mirocha J, James D, Tobia G, Vilhauer J, Fakhry H, Pi S, Hanson E, Nashawati R, Peselow ED, Cohen RM.
Acta Psychiatrica Scandinavica. 2015 131[1]:51-60.

OBJECTIVE: This study examines the impact of major depressive disorder [MDD] and its treatment on quality of life [QOL].
METHOD: From the Sequenced Treatment Alternatives to Relieve Depression [STAR*D] trial, we analyzed complete data of 2280 adult MDD out-patients at entry/exit of each level of antidepressant treatments and after 12 months of entry to follow-up. QOL was measured using the QOL Enjoyment and Satisfaction Questionnaire [Q-LES-Q]. The proportions of patients scoring ‘within-normal’ QOL [within 10% of Q-LES-Q community norms] and those with ‘severely impaired’ QOL [>2 SD below Q-LES-Q community norms] were analyzed.
RESULTS: Before treatment, no more than 3% of MDD patients experienced ‘within-normal’ QOL. Following treatment, statistically significant improvements were detected; however, the proportion of patients achieving ‘within-normal’ QOL did not exceed 30%, with >50% of patients experiencing ‘severely impaired’ QOL. Although remitted patients had greater improvements compared with non-remitters, 32-60% continued to experience reduced QOL. 12-month follow-up data revealed that the proportion of patients experiencing ‘within-normal’ QOL show a statistically significant decrease in non-remitters.
CONCLUSION: Symptom-focused treatments of MDD may leave a misleading impression that patients have recovered when, in fact, they may be experiencing ongoing QOL deficits. These findings point to the need for investigating specific interventions to ameliorate QOL in MDD.
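For what it’s worth, here’s a rough sketch of how I read those two QOL cut-offs from the METHOD paragraph – the community norm figures below are made-up placeholders for illustration, not the published Q-LES-Q norms, and the authors’ actual operationalization may differ in its details:

```python
def classify_qol(score: float, norm_mean: float, norm_sd: float) -> str:
    """One plausible reading of the categories named in the abstract:
    'within-normal'     = within 10% of the community norm mean
    'severely impaired' = more than 2 SD below the community norm mean
    everything else     = 'reduced'."""
    if abs(score - norm_mean) <= 0.10 * norm_mean:
        return "within-normal"
    if score < norm_mean - 2 * norm_sd:
        return "severely impaired"
    return "reduced"

# Placeholder community norms, for illustration only (not the published values)
NORM_MEAN, NORM_SD = 83.0, 10.0
for s in (78, 70, 40):
    print(s, classify_qol(s, NORM_MEAN, NORM_SD))
# 78 within-normal / 70 reduced / 40 severely impaired
```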
I wasn’t familiar with the Q-LES-Q. It’s a pretty simple questionnaire, reproduced here FYI:

As the article is short and available on-line, I just clipped out a representative piece illustrating their point that the symptom reduction measured by the QIDS doesn’t directly correlate with an improvement in the Quality of Life. The QOL improvement is much less impressive – consistent with our clinical experience which doesn’t match the often glowing reports in the literature:
Their punchline:
To conclude, the present analysis highlights the major pitfalls associated with MDD treatments that are purely symptom-focused. Such treatments can give the misleading impression that a patient has recovered, when in fact the patient continues to experience ongoing deficits in QOL. QOL did not improve further after the acute treatment phase even in remitters, and non-remitters showed a statistically significant decline at follow-up after one year. Consequently, clinicians and researchers need to move beyond the mere assessment of symptoms when treating and/or researching MDD, by incorporating QOL measurement, and by investigating specific and personalized interventions to ameliorate QOL.
Like the study in the last post [Depression is not a consistent syndrome…, latter day STAR*D I…], these investigators were able to use the STAR*D dataset available from the NIMH to address a straight-forward question without getting tangled in the problems of the STAR*D study itself. And like in the previous post, one wonders Why didn’t someone do this earlier? It makes intuitive sense that the objective criteria used to define a condition wouldn’t necessarily be the best choice of parameters to follow when assessing subjective improvement. Putting aside for the moment the question of whether Major Depressive Disorder is even a valid diagnostic entity, the antidepressants are, in my mind, symptomatic medications. In the office, at least my office, I don’t go down some checklist of criteria on a follow-up visit after prescribing medication. I let the patient tell me what they think – is it helping or not?
Mickey @ 12:38 PM

latter day STAR*D I…

Posted on Monday 26 January 2015

The initial fanfare with the SSRIs introducing the Antidepressant Age [Prozac 1988] was followed by a period of disillusionment as the lower than hoped [hyped] response rates became apparent. While the logic behind what came next isn’t totally clear to me, the non-responding patients were seen as having Treatment Resistant Depression, as if this were some variant of Major Depressive Disorder, and the concept developed that the efficacy of the SSRIs could be enhanced by sequencing, combining, or augmenting the therapeutic power of these drugs using some algorithm or treatment guideline. I don’t know how this idea came into being, but I know where. It was in Texas at UT Southwestern, and was implemented as TMAP [the Texas Medication Algorithm Project], followed by a series of NIMH funded Clinical Trials [and others], looking for ways to improve the efficacy of these drugs:
These studies cost the NIMH well over $50 M, produced hundreds of articles, are mentioned in 125 posts on this blog, and came to naught. Here’s the conclusion to my last post on STAR*D [retire the side…]:
    These attempts to turn clinical psychiatry into algorithms for medications driven by symptoms, often gathered by questionnaires, have yielded nothing. Worse, they trivialize both the human experience of patients and the practitioners’ efforts to help them. Let’s hope that the chart up there has finally run out of iterations and we can retire the side…
The STAR*D dataset is available for other uses from the NIMH, and recently several independent investigations have appeared that make interesting use of the data from this large cohort:
Depression is not a consistent syndrome: An investigation of unique symptom patterns in the STAR*D study
by Eiko I. Fried and Randolph M. Nesse
Journal of Affective Disorders. 2014; 172C:96-102.

Background: The DSM-5 encompasses a wide range of symptoms for Major Depressive Disorder [MDD]. Symptoms are commonly added up to sum-scores, and thresholds differentiate between healthy and depressed individuals. The underlying assumption is that all patients diagnosed with MDD have a similar condition, and that sum-scores accurately reflect the severity of this condition. To test this assumption, we examined the number of DSM-5 depression symptom patterns in the “Sequenced Treatment Alternatives to Relieve Depression” [STAR*D] study.
Methods: We investigated the number of unique symptom profiles reported by 3703 depressed outpatients at the beginning of the first treatment stage of STAR*D.
Results: Overall, we identified 1030 unique symptom profiles. Of these profiles, 864 profiles [83.9%] were endorsed by five or fewer subjects, and 501 profiles [48.6%] were endorsed by only one individual. The most common symptom profile exhibited a frequency of only 1.8%. Controlling for overall depression severity did not reduce the amount of observed heterogeneity.
Limitations: Symptoms were dichotomized to construct symptom profiles. Many subjects enrolled in STAR*D reported medical conditions for which prescribed medications may have affected symptom presentation.
Conclusions: The substantial symptom variation among individuals who all qualify for one diagnosis calls into question the status of MDD as a specific consistent syndrome and offers a potential explanation for the difficulty in documenting treatment efficacy. We suggest that the analysis of individual symptoms, their patterns, and their causal associations will provide insights that could not be discovered in studies relying on only sum-scores.
It’s not easy to see what they did from the abstract, and the full paper isn’t on-line, but here’s the gist of it. They looked at the intake QIDS-16 screening metric [Quick Inventory of Depressive Symptomatology] and fractionated it into 12 Symptoms that correlated with the DSM-IV Major Depressive Disorder criteria:
Each item on the QIDS-16 is scored 0 to 3. They coded each symptom as absent [score 0 or 1] or present [score 2 or 3]. That gave them a way to reclassify the 3703 subjects by their twelve-symptom profiles. Here’s the punchline:
There was a striking heterogeneity of symptom profiles among this large cohort of patients diagnosed as having DSM-IV Major Depressive Disorder. Striking! To my way of thinking, this simple study is totally brilliant. My only question about this paper is: Why hasn’t someone done this before? They only used the intake QIDS-16, so they avoided the mess STAR*D became as it progressed and started bouncing from metric to metric. And it showed something that many of us think already in a simple yet convincing way: Major Depressive Disorder is not a unitary diagnostic entity – far from it.
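For anyone curious what that profile count looks like mechanically, here’s a minimal sketch of the dichotomize-and-count step as I read it – with simulated scores standing in for the real intake data, so the numbers it prints are meaningless; only the method is the point:

    # Illustrative sketch of the dichotomize-and-count step - simulated
    # scores, not the STAR*D records, so the printed numbers mean nothing.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    symptoms = [f"symptom_{i}" for i in range(1, 13)]   # 12 DSM-mapped symptoms
    scores = pd.DataFrame(rng.integers(0, 4, size=(3703, 12)), columns=symptoms)

    present  = (scores >= 2)                 # absent [0 or 1] vs present [2 or 3]
    profiles = present.apply(tuple, axis=1)  # one 12-element pattern per subject

    counts = profiles.value_counts()
    print("unique profiles:", counts.size)
    print("endorsed by only one subject:", int((counts == 1).sum()))
    print("most common profile: %.1f%% of subjects" % (100 * counts.iloc[0] / len(profiles)))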
 
I would have probably reached a slightly different conclusion – one best articulated by historian Dr. Edward Shorter in his book, Before Prozac:

    "Bottom Line: Major Depression doesn’t exist in Nature. A political process in psychiatry created it…"
Mickey @ 8:00 AM

a categorical difference? a speculation…

Posted on Sunday 25 January 2015

Neuroskeptic’s blog on Discover is one of my favorites. His current post is about Diederik Stapel, the Dutch Social Psychologist, now famous as an admitted fraud who fabricated a lot of data. Diederik actually became a regular on another favorite blog, Ivan Oransky’s Retraction Watch [Diederik has 54 Retractions to date]. If you don’t know the story, here are some links [in the New York Times, Neuroskeptic’s post, and on Retraction Watch]. Stapel also wrote a book:
"Two years ago, Dutch science fraudster Diederik Stapel published a book, Ontsporing [“Derailment”], describing how he became one of the world’s leading social psychologists, before falling from grace when it emerged that he’d fabricated the data in dozens of papers. Stapel wrote Ontsporing in Dutch, but now his story has been translated into English, under the title of Faking Science – thanks to the efforts of Nick Brown."
The link to Faking Science is a free download of Brown’s translated book [224 pages]. This story about Diederik Stapel comes shortly after a report on how another frequent flyer on Retraction Watch, Anil Potti, the Duke Cancer Researcher, was exposed [see Duke Officials Silenced Med Student Who Reported Trouble in Anil Potti’s Lab]. In the case of Potti, his work was being questioned by other researchers in the field, but it was a Med Student in his lab who blew the whistle [one that wasn’t listened to by the higher-ups for a long time]. In Stapel’s case, a group of graduate students blew the whistle, and it was apparently heard definitively the first time around.

Who among us doesn’t have personal memories of early experimentation with embellishments or outright lies? or hasn’t dealt with the experimentation of our children? or doesn’t recall disillusioning encounters with friends or colleagues once exposed? And it’s the unusual case of someone in psychotherapy who doesn’t eventually get around to the shamefully confessed secrets of such things. Here are Stapel’s comments on his step over the line, quoted by Neuroskeptic [QRP: the study he references is available online, a survey of "Questionable Research Practices" among psychologists that reports a surprisingly high prevalence]:
After years of balancing on the outer limits [of scientific integrity], the grey became darker and darker until it was black, and I fell off the edge into the abyss. I’d been having trouble with my experiments for some time. Even with my various “grey” methods for “improving” the data [i.e. ‘QRPs‘], I wasn’t able to get the results the way I wanted them. I couldn’t resist the temptation to go a step further. I wanted it so badly. I wanted to belong, to be part of the action, to score.

I really, really wanted to be really, really good. I wanted to be published in the best journals and speak in the largest room at conferences. I wanted people to hang on my every word as I headed for coffee or lunch after delivering a lecture.

I felt very alone. I was alone in my tastefully furnished office at the University of Groningen. I’d taken extra care when closing the door, and made my desk extra tidy. Everything had to be neat and orderly. No mess.

I opened the file with the data that I had entered and changed an unexpected 2 into a 4; then, a little further along, I changed a 3 into a 5. It didn’t feel right. I looked around me nervously. The data danced in front of my eyes. When the results are just not quite what you’d so badly hoped for; when you know that that hope is based on a thorough analysis of the literature; when this is your third experiment on this subject and the first two worked great; when you know that there are other people doing similar research elsewhere who are getting good results; then, surely, you’re entitled to adjust the results just a little?…
I suppose that we would say he was already "over the line" ["the grey became darker and darker"] before this moment:
No. I clicked on “Undo Typing.” And again. I felt very alone. I didn’t want this. I’d worked so hard. I’d done everything I could and it just hadn’t quite worked out the way I’d expected. It just wasn’t quite how everyone could see that it logically had to be. I looked at the door of my office. It was still closed. I looked out the window. It was dark outside. “Redo Typing.” And again. For a moment I had the feeling that someone was standing behind me. I turned round slowly, fearfully. There was nobody there. I looked at the array of data and made a few mouse clicks to tell the computer to run the statistical analyses. When I saw the results, the world had become logical again. I saw what I’d imagined. I felt relieved, but my heart was heavy. This was great, but at the same time it was very wrong.
Any modern discussion of Moral Development usually has that word development in it, because morality is not something that just comes with the package. We acquire it along the way. And the terms used to describe it [Superego, Conscience, Moral Compass, Jiminy Cricket] imply it’s an attachment, rather than an integral component. Stapel’s description of someone standing behind me is pretty common. Many describe this moral agency as a personified part of the mind [we acquire it from other people, and it’s experienced as a presence]. My point is that we all know people whose morality depends on whether someone is looking or not – it hasn’t become an internalized part of the mind, but remains dependent on an outside monitor like it is in parts of childhood. In a later vignette, also quoted by Neuroskeptic, after Diederik Stapel has been confronted, he’s still thinking he can continue to fool the outside agent, and is fabricating new, more elaborate stories.

Retraction Watch is an interesting study of the fragility of human morality – high-functioning people who step into the "grey," and then it gets "darker and darker." And it’s almost always "loners" doing the fabrication – I presume looking over their shoulders. This story attracted my attention at a time when I’ve been thinking about the issue of Data Transparency in Clinical Trials and all the distorted Clinical Trial reports strewn about in our medical literature. In Ben Goldacre’s now famous 2012 TED Talk, he mentioned that industry-funded Clinical Trials are better conducted than many independent trials. That was counterintuitive to me when I first heard it, but I have to admit that now, having looked at a lot of trials, I reluctantly agree. There are a number of trials where the design is biased [wrong dosing of comparators, for example], but I haven’t run across fabrication of data – the kind of thing that’s common in the examples on Retraction Watch. The trouble comes after the blind gets broken. Negative studies go unpublished. And often, small [and trivial] effects are amplified with a whole host of "grey" techniques [that carefully stay out of the "black"].

What I’m implying [because I apparently don’t know how to say it clearly] is that the widespread embellishment and deceit seen in the Clinical Trial publications is categorically different from the cases we see in the people who show up on Retraction Watch. Each instance in a Clinical Trial includes the involvement of multiple people: guest authors, statisticians, scientists, medical writers, marketeers, legal consultants, CEOs, etc. And there’s an intentionality in staying "in the grey rather than the black." It is a widespread practice, a culture, with similar methods showing up from seemingly independent groups who are otherwise in competition. And throughout this struggle over Data Transparency, the individual companies and their collective [PhRMA] are fighting to hold onto as much control of data access as they can muster. The only reason I can think of for their dogged persistence is to maintain the latitude for embellishment and deceit in the future – as if it is potentially an essential element.

Maybe I can say it clearly after all. The fraudsters are, indeed, betraying their own morality. Diederik Stapel even describes being haunted by its presence in an empty room in the vignette above, and is now engaged in a reparative campaign – eg a confessional book and a TEDx Brain Train talk. In that talk, Stapel discusses how he lost his connections with others in the process. Whereas those involved in the distortions of the Clinical Trials aren’t betraying a morality; they’re living up to a different, corporate moral standard. They don’t lose connections – their shared morality connects them and appears to be mutually reinforcing. It is, in fact, the rare whistle-blower who betrays this ethic [as in ethos, culture] and is ostracized. That moral difference should have implications for how we approach dealing with the seemingly similar problems the fraudsters and the trialists create…
Mickey @ 7:25 AM