a mighty hard row…

Posted on Tuesday 17 May 2016

“…it has become standard practice for pharmaceutical companies to pay medical communication companies to write articles [based on industry-designed studies], for academic physicians to be paid to essentially sign off on the articles, and then for communication companies to place the articles in prestigious medical journals, a process known as “ghost management.” Ghost management has resulted in the selective and imbalanced reporting of clinical trial data in medical journals, which in turn has supported the marketing of expensive new drugs with questionable risk/benefit profiles.”

Under the Influence: The Interplay among Industry, Publishing, and Drug Regulation.
by Cosgrove, Vannoy, Mintzes, and Shaughnessy

Looking at the twelve-year-old Citalopram in Adolescence Study, the recent deconstruction, and the problems they had getting it published reminded me of another, more contemporary article that I had wanted to return to. It was about the RCTs used to gain FDA Approval for Vortioxetine. I mentioned it earlier [see a couple of points…], but at the time it was only available as an abstract:

As you can see from the opening quote, this article is about a lot more than just the specific papers or even the FDA Approval – it’s about the perversion of the scientific process in the industry-funded RCTs, what the authors call ghost management [AKA total commercial control of the trial process from design and registration through the drafting of the paper for publication]. Being reminded of this paper, I recalled that when I first came across it I contacted the authors to get a copy and was told that they too had had a very hard time getting it published. It’s in a lower impact journal and it’s not available from the publisher on-line [an expensive proposition unless it’s part of a grant or you’re involved with a well-heeled corporation]. So I recontacted them to find out if they had written about their publishing problems, and they hadn’t. But they mentioned that they were disappointed with the paper’s reception, as was I. It’s a really strong and well documented article. Thinking back about our experience with the Paxil Study 329 RIAT article, the recent Citalopram deconstruction article, this Vortioxetine paper, the papers about the Tamiflu re-evaluation, and countless others, it’s apparent that there’s a common theme. So I want to revisit this Vortioxetine article later [I’ve found a way to get to it on-line], but first, I’d like to frame the common theme that unites these papers.

I’ve always had kind of a problem with “ear worms.” I get some song in my mind that repeats in the background. If I think about it, usually I can figure out how it got there. I was out and about all day, and there it was – an old Woody Guthrie song called Pastures of Plenty:

    It’s a mighty hard row that my poor hands have hoed
    My poor feet have traveled a hot dusty road
    Out of your Dust Bowl and Westward we rolled
    And your deserts were hot and your mountains were cold
    I worked in your orchards of peaches and prunes
    I slept on the ground in the light of the moon
    On the edge of the city you’ll see us and then
    We come with the dust and we go with the wind…
It had to do with thinking about this Cosgrove et al paper this morning before I set out on my errand-filled day. Okay – my unconscious might be overly dramatic, even histrionic, but it’s on the right track. It is a mighty hard row and it can be a largely thankless task. We sure paid the piper to get our Paxil Study 329 paper in print [as did the Tamiflu team]. We made it to a big journal and it was well received, but there were years of work by a number of people to bring the paper we reanalyzed to the forefront. Even with that, we were scrutinized like few others [see Restoring Study 329: The Path To Publication]. Jureidini et al were able to take advantage of a lot of antecedent legal work producing documentation, but still traveled that same mighty hard row [see background notes]. Cosgrove et al didn’t have any specific preparatory work by others, so they were flying blind.

What unites these articles is obvious – they are contrarian. They say that somebody else’s work is wrong, and in these cases, wrong on purpose. They use that somebody else’s data in a reanalysis to reach a different, often opposite, conclusion. They make a serious charge, so they deserve careful scrutiny sure enough. But in the case of all four articles, they’re not, primarily, opinion pieces. They’re data-based analyses that, by definition, render an opinion, but it’s an evidence-based opinion rather than speculative. Yet even after being subjected to a rigorous examination far exceeding that applied to the originals, they’re still hot potatoes. There’s obviously a fear of reprisal, legal challenges by the powerful sponsors of the originals. And there’s something else: the authors are under suspicion for having ulterior motives, conflicts of interest. The obvious commercial conflicts of interest in the originals are buried in small print, but the fact that the reanalysis articles are contrarian in and of itself seems grounds for such a charge.

And so to a few of the bumps along the mighty hard row for the authors of contrarian articles:

  • data collection:
    In each of these cases, just getting hold of the data was itself a daunting task, coming from reluctant sources in formats that made analysis difficult.
  • funding:
    This is largely unfunded research. If there is grant support, it’s often for the investigator without allowance for statistical or technical support, indirect costs, or publication expenses.
  • heightened standards:
    The burden of proof for contrarian articles is uniformly higher than for the originals. Likewise, requirements for freedom from conflicting interests are more stringent – more on the side of guilty until proven innocent.
  • opinion rather than science:
    Contrarian articles tend to be judged as opinions rather than scientific exploration – and biased opinion at that.
  • accessibility:
    Because they’re often in lower impact journals and have to rely on charity to be available online, they can get lost in an ivory tower or a dusty corner of the library, either never catching the wind or, if they do, having a short hang time.
  • seen as deprivation:
    While their intent is health-promoting, they can be seen as taking away something.
  • no PR:
    They’re not necessarily picked up by the news media, mentioned on the business pages, or reviewed in the professional trade journals.

This is hardly a comprehensive list, just something off the cuff. But I think it captures the fact that these are hard articles to write and hard articles to publish. But that’s just the beginning. Often, getting them into the public and professional discourse is another uphill climb. And while it’s tempting to see the difficulties as coming from evil opposing forces, and I’m certain that complaint is often justified, it’s also not the whole story. Contrarian literature in science is like that all on its own. We want to hear about breakthroughs, not breakdowns. That’s just the nature of things.

So now onto the skipped-over article by Cosgrove et al…
Mickey @ 11:58 AM

captain ben and his crew…

Posted on Sunday 15 May 2016

There are times when being wrong is just fine. When I first read about Ben Goldacre‘s COMPare Project, I didn’t think it would have much of an impact. What he was proposing to do was put together an army of medical students who would look over Clinical Trial papers, and when they found one that didn’t follow the a priori Protocol, they’d start writing letters to the Journal, calling it "outcome switching." While I certainly agree with the sentiment, I thought his campaign was too simplistic – more parrying than combat…
Retraction Watch
by Alison McCook
May 15, 2016

A major medical journal has updated its instructions to authors, now requiring that they publish protocols of clinical trials, along with any changes made along the way.

We learned of this change via the COMPare project, which has been tracking trial protocol changes in major medical journals — and been critical of the Annals of Internal Medicine‘s response to those changes. However, Darren Taichman, the executive deputy editor of the journal, told us the journal’s decision to publish trial protocols was a long time coming…
    This change was something we planned prior to COMPARE and were intending to implement with an update of our online journal that is in process. However, the barrier COMPARE encountered in obtaining a protocol for one of the studies in their audit prompted us to implement it earlier…
Read the whole thing. It’s the real deal – a success that could be bigger than Ben’s AllTrials campaign. So I guess that one moral of the story is Don’t bet against Ben Goldacre. His TED talk was a landmark, as was his AllTrials campaign. He seems to have the gift of both method and timing – something of a swashbuckler in an age of plodders.

While I still believe that Data Transparency is the ultimate goal to combat the rampant corruption, I realized when we were writing our RIAT paper that we needed a preventive strategy as well – something to head off the deceit in the first place. In the original Paxil Study 329, the Celexa Study in my last post [this tawdry era…], and for that matter, the overwhelming majority of the distorted RCTs I’ve looked at over the years, deviating from the a priori Protocol and/or the Statistical Analysis Plan to find something to call significant has been a ubiquitous practice, and the standard means for turning all those sow’s ears into silk purses.

It’s simple, up front, something that happens at the level of the journal publications where it needs to happen, and he’s brought it off in a major journal. So my hat’s off to Captain Ben and his crew…
Mickey @ 3:51 PM

this tawdry era…

Posted on Saturday 14 May 2016

For the last week, I’ve been unable to focus on anything very far from a single article [see the jewel in the crown…, why is that?…, the hope diamond…, and the obvious irony… ]. And it’s been frustrating in that the article has been behind the pay-wall. But now, the International Journal of Risk & Safety in Medicine has generously published it Open Access. And while the authors’ background notes are not yet on-line, they allowed me to post them here [see Update below]. So you can decide for yourself if my monomania is justified with a full deck:
by Jureidini, Jon N., Amsterdam, Jay D., and McHenry, Leemon B.
International Journal of Risk & Safety in Medicine, 2016 28[1]:33-43.

OBJECTIVE: Deconstruction of a ghostwritten report of a randomized, double-blind, placebo-controlled efficacy and safety trial of citalopram in depressed children and adolescents conducted in the United States.
METHODS: Approximately 750 documents from the Celexa and Lexapro Marketing and Sales Practices Litigation: Master Docket 09-MD-2067-[NMG] were deconstructed.
RESULTS: The published article contained efficacy and safety data inconsistent with the protocol criteria. Procedural deviations went unreported, imparting statistical significance to the primary outcome, and an implausible effect size was claimed; positive post hoc measures were introduced and negative secondary outcomes were not reported; and adverse events were misleadingly analysed. Manuscript drafts were prepared by company employees and outside ghostwriters with academic researchers solicited as ‘authors’.
CONCLUSION: Deconstruction of court documents revealed that protocol-specified outcome measures showed no statistically significant difference between citalopram and placebo. However, the published article concluded that citalopram was safe and significantly more efficacious than placebo for children and adolescents, with possible adverse effects on patient safety.
While this is only one example of many similarly misreported Clinical Trials, the access to the internal industry documents allowed these authors to leave nothing to our imagination. They prove that it’s ghostwritten; that it was framed by the industry executives for commercial gain before any academic author got near the data; that it was deceitfully written to hide its failings, on purpose; and that it was a negative Clinical Trial presented as positive and subsequently used to gain FDA Approval. Those points and more are abundantly clear in this easy-reading article.

I wanted to go through just one of their many examples to illustrate why it’s imperative that these RCT reports adhere to the pre-registered a priori Protocols and Statistical Analysis Plans, so clearly explained in Adriaan de Groot’s 1956 paper [see the hope diamond…]. In this case, the a priori Protocol was among the archived documents examined by Jureidini et al:

from the a priori Protocol [page 23]


12.5.1 Primary Efficacy Parameter
  Change from baseline in CDRS-R score at Week 8 will be used as the primary efficacy parameter. Descriptive statistics will be calculated by visit. Comparison between citalopram and placebo will be performed using three-way analysis of covariance [ANCOVA] with age group, treatment group, and center as three factors, and the baseline CDRS-R score as covariate.
12.5.2 Secondary Efficacy Parameter[s]
  The secondary efficacy parameters are:
    1. CGI-Improvement subscale score [CGI-I].
    2. Change from baseline in CGI-Severity score [CGI-S].
    3. Change from baseline in K-SADS-P [depression module] score.
    4. Change from baseline in CGAS score.

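For the statistically curious, here’s roughly what that protocol-specified primary analysis would look like in code – a minimal sketch in Python using statsmodels, with hypothetical column and file names [the trial dataset itself isn’t publicly available in analyzable form]:

```python
# Sketch of the protocol-specified primary analysis: a three-way ANCOVA on
# change from baseline in CDRS-R at Week 8, with treatment group, age group,
# and center as factors and baseline CDRS-R as the covariate.
# All column and file names below are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("cit_md_18.csv")  # hypothetical dataset

model = smf.ols(
    "cdrsr_change ~ C(treatment) + C(age_group) + C(center) + cdrsr_baseline",
    data=df,
).fit()

# The treatment row of this ANOVA table is the protocol's primary efficacy test.
print(sm.stats.anova_lm(model, typ=2))
```

The point isn’t the software – it’s that the analysis is fully specified before anyone sees the outcomes, so there’s nothing left to choose once the data arrive.
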
However, in the published article [A Randomized, Placebo-Controlled Trial of Citalopram for the Treatment of Major Depression in Children and Adolescents], the parameters were not-so-subtly changed. Response is nowhere defined in the Protocol. And the K-SADS-P and CGAS were just dropped:

from Wagner et al [page 1080]


The primary outcome measure in this study was the change from baseline in score on the Children’s Depression Rating Scale – Revised at week 8 or upon termination. The Children’s Depression Rating Scale – Revised was administered at each study visit. Response was defined as a score of ≤ 28 [indicating minimal residual symptoms]. Secondary measures included Clinical Global Impression [CGI] improvement and severity ratings [25].

Then, in their terse Results section, they included Response [erroneously called "prospectively defined"], a non-Protocol Effect Size [wrongly calculated], and again just left out the K-SADS-P and CGAS altogether:

from Wagner et al [page 1081]


Citalopram treatment showed statistically significant improvement compared with placebo on the Children’s Depression Rating Scale – Revised as early as week 1 [F= 6.58, df=1,150, p<0.05], which persisted throughout the study. At week 8, the effect size on the primary outcome measure, Children’s Depression Rating Scale – Revised [last observation carried forward], was 2.9. Additionally, at endpoint more citalopram-treated patients [36%] met the prospectively defined criterion for response than did placebo-treated patients [24%], a difference that was statistically significant [χ²=4.178, df=1, p<0.05]. The proportion of patients with a CGI improvement rating ≤ 2 at week 8 was 47% for the citalopram group and 45% for the placebo group [last observation carried forward values]. For the CGI severity rating, baseline values were 4.4 for the citalopram group and 4.3 for the placebo group, and endpoint values [last observation carried forward] were 3.1 for the citalopram group and 3.3 for the placebo group.

Not to mention the fact that the reported CDRS-R result failed to follow the Protocol-directed exclusions, which invalidated the claimed significance, or that the add-in Response had a trivial NNT [8.3]. So by deviating from the a priori Protocol in a variety of ways, they were able to cherry-pick among parameters to give the illusion of efficacy.
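
That NNT is simple enough to check in two lines. It’s just the reciprocal of the absolute difference between the response rates quoted above [a minimal sketch; the percentages come from the Wagner et al excerpt]:

```python
# NNT from the reported response rates: 36% on citalopram vs 24% on placebo.
arr = 0.36 - 0.24     # absolute risk reduction
nnt = 1 / arr         # number needed to treat
print(round(nnt, 1))  # 8.3 - about one extra responder per eight patients treated
```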

But an even bigger revelation in the documents is the amount of effort the industry handlers and doctors put into controlling the process and actively hiding the true results of the Clinical Trial:

from Jureidini et al [section 3.2.2]


Wagner et al. failed to publish two of the protocol-specified secondary outcomes, both of which were unfavourable to citalopram. While CGI-S and CGI-I were correctly reported in the published article as negative, the Kiddie Schedule for Affective Disorders and Schizophrenia-Present [depression module] and the Children’s Global Assessment Scale [CGAS] were not reported in either the methods or results sections of the published article.
In our view, the omission of secondary outcomes was no accident. On October 15, 2001, Ms. Prescott wrote: “I’ve heard through the grapevine that not all the data look as great as the primary outcome data. For these reasons [speed and greater control] I think it makes sense to prepare a draft in-house that can then be provided to Karen Wagner [or whomever] for review and comments.” Subsequently, Forest’s Dr. Heydorn wrote on April 17, 2002: “The publications committee discussed target journals, and recommended that the paper be submitted to the American Journal of Psychiatry as a Brief Report. The rationale for this was the following: … As a Brief Report, we feel we can avoid mentioning the lack of statistically significant positive effects at week 8 or study termination for secondary endpoints.”
Instead the writers presented post hoc statistically positive results that were not part of the original study protocol or its amendment [visit-by-visit comparison of CDRS-R scores, and ‘Response’, defined as a score of ≤28 on the CDRS-R] as though they were protocol-specified outcomes. For example, ‘Response’ was reported in the results section of the Wagner et al. article between the primary and secondary outcomes, likely predisposing a reader to regard it as more important than the selected secondary measures reported, or even to mistake it for a primary measure.

There’s nothing speculative here. The points are illustrated with verbatim references from the perpetrators’ own internal emails. And yet the authors had one hell of a time getting it published [also well referenced in their background notes with emails from journal editors].

Like the Paxil Study 329 article, the list of contributors stretches well beyond the listed authors – the subjects in the studies, the kids prescribed the medication, the litigation that released these documents, the library that archived them, etc. But major credit goes to these authors who spent countless hours doing tedious, unfunded research, wrote the paper, then persisted until they found a journal that would accept the article as it should’ve been written. And while we’re at it, the International Journal of Risk & Safety in Medicine deserves credit for rising to the occasion – both by publishing it and for making it Open Access.

I think it’s now our job to ensure that all this dedicated work is rewarded with a wide readership, one that helps us move closer to putting this tawdry era behind us…
Mickey @ 12:12 PM

the obvious irony…

Posted on Thursday 12 May 2016

We don’t need a lofty scientific explanation for why we should demand that Clinical Trials of medication follow [and report the results of] the a priori Protocol. Common sense and historical fact offer reasons enough. If I can regress to my monotonous diagram of the Clinical Trial process for a moment, the a priori Protocol [including the Statistical Analysis Plan] has to be both reviewed and approved before the study begins, and constitutes the last verifiable insurance against things like HARKing or p-hacking [see the hope diamond…] – study results potentially manipulated based on foreknowledge of the outcome:

For one thing, there’s no guarantee that a sponsor won’t just go around the blind [they’re paying the CRO doing the study] or, in another more likely scenario, select analytic techniques that produce the results they want. That’s the common sense part. The historical fact part is also self-evident. This blog and many others are filled with examples, as are the court dockets. In fact, looking at the psychiatric literature of several decades, we don’t need statistics to know what happened. It’s hard to produce examples where this kind of distortion didn’t happen at some level. The recent article by Jureidini, Amsterdam, and McHenry [see the jewel in the crown… and why is that?…] just happens to be about a sample case where there were enough subpoenaed materials available to directly document the behind-the-scenes deceit [when you happen to have three researchers willing to go through thousands of pages to flag the ones that mattered].

But Dr. De Groot’s thoughtful analysis [see the hope diamond…] adds value beyond this obvious empirical evidence, even beyond the technical explanation. It goes to the heart of what statistical analyses really represent. Just because they involve numbers and formulas and generate numeric answers doesn’t mean that statistical analyses are like the familiar arithmetic, algebra, or calculus, with computations producing definite answers. In fact, if you took a statistics course, the teacher was likely a psychologist or a social scientist rather than someone from the math department. Statistical analyses are about conditional likelihoods [with the emphasis on conditional]. And De Groot is pointing out that, unlike the other mathematics, one absolute condition in confirmatory statistical analysis is blindness [with the emphasis on absolute].

There is an obvious irony in this story of pharmaceutical clinical trials. While the corporations conducting and analyzing these clinical trials are afforded any number of pathways to get around the absolute requirements for blindness, those of us on the outside who prescribe and take these medications, and who should be able to see the whole process, are muzzled by a blanket of absolute blindness [I could’ve replaced obvious irony with obvious travesty]. That’s clearly backwards. Recently, several papers have made the ramifications of this obvious irony abundantly clear…
Having been intimately involved in one of these articles and knowing the authors of the other, I can attest to the herculean effort required to produce them. Both are the result of unfunded research. There were no Conflicts of Interest, and they were largely done by senior people with no need for further credentialing. Unfortunately, the primary articles they analyzed are not exceptions. At least in the domain of industry-funded RCTs of CNS drugs, they’re the rule. Even worse, both studies involved medications for vulnerable youth. The mandate for change is clear as a bell…
Mickey @ 7:01 PM

good news bulletin…

Posted on Thursday 12 May 2016

Since it’s almost Friday the 13th, I thought I’d post some good news follow-ups to neutralize any bad luck juju that might be out and about:

  1. Unfortunately, the rainy season coincided with the pollen season this year, interfering with the smoothness of my yearly graph on Seasonal Dementia [a new entity… and seasonal dementia update…]:

    As seasoned veterans know [pun intended], the times of greatest affliction don’t necessarily coincide with the highest pollen counts. In my case, it’s that second bump in late April that’s the killer. But this week has been great, and I declare Pollenarama-2016 officially over…

  2. I learned today that the blockbuster article that I can’t seem to stop talking about [The citalopram CIT-MD-18 pediatric depression trial: Deconstruction of medical ghostwriting, data mischaracterisation and academic malfeasance] will be published Open Access [full text on-line] [though it will take several days for the link to be changed]…

  3. Last summer, I saw a patient who presented with Delirium from an outrageous drug regimen – maximal doses of multiple psychiatric medications [blitzed…]:
    • Seroquel 600 mg/day
    • Trazodone 450 mg/day
    • Depakote 2.5 Grams/day
    • Neurontin [I forget how much too much]/day
    • Cogentin 8 mg/day [?]
    • Prozac 80 mg/day

    I began to taper the drugs, and what was lurking underneath was full-blown Tardive Dyskinesia with constant back and forth jaw movement, hand wringing, athetoid shoulder shrugging, and restless legs. With continued tapering, her sensorium slowly cleared somewhat, but the Tardive Dyskinesia decidedly worsened [some truths are self-evident…, a story: getting near the ending[s]…, and the verb “to follow”…]. Even off medications, her cognition and memory remained impaired. So in March, I was finally able to gather enough family to get a coherent history that made it clear that her actual primary diagnosis was Traumatic Brain Injury from a fall two years before [cases…]. The TD symptoms were improving.

    I saw her yesterday in the clinic, and for the first time in almost a year, she had no symptoms of Tardive Dyskinesia. She does have a residual painful TMJ syndrome from the months of constant jaw movements, but all of the TD symptoms themselves have cleared. She’s only on Alprazolam for sleep and Prozac 20 mg daily [per her request]. So, one big bullet dodged and a sigh of relief from everyone concerned…
Thus ends my pre-Friday-13th good news bulletin…
Mickey @ 7:00 PM

the hope diamond…

Posted on Tuesday 10 May 2016


[click image to link to her slides]

Dorothy Bishop is a Developmental Psychologist who focuses on Dyslexia and other Language Disorders. This is not an article, just the slides from a presentation she gave to the Rhodes Biomedical Association last week on the reproducibility crisis. Her slides tell a story well known to us. And the problem isn’t the science, it’s the scientists. She starts with some familiar methods used to distort findings. I’ve synopsized those opening slides:

  • Publication bias: burying negative studies
  • HARKing: Hypothesis After Results Known
  • p-hacking: trying different statistical tests on various datasets until you get the result you want [see the toy simulation after this list]
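
Her p-hacking bullet is easy to demonstrate. Here’s a toy simulation [my illustration, not from her slides] in which there is no true drug effect at all, yet trying ten outcome measures and reporting only the best-looking one comes up "significant" far more often than the advertised 5%:

```python
# Toy demonstration of p-hacking: with no true effect anywhere, testing ten
# outcomes and keeping only the best one is "significant" far more often
# than the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_outcomes, n_per_group = 1000, 10, 50

hits = 0
for _ in range(n_sims):
    p_values = [
        stats.ttest_ind(
            rng.normal(size=n_per_group),  # "drug" group, true effect = 0
            rng.normal(size=n_per_group),  # "placebo" group
        ).pvalue
        for _ in range(n_outcomes)
    ]
    if min(p_values) < 0.05:  # report only the best-looking outcome
        hits += 1

print(f"false-positive rate: {hits / n_sims:.0%}")  # roughly 40%, not 5%
```

With ten tries per trial, roughly 40% of these no-effect studies produce something to report at p < 0.05.
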
But then she talks about the various people along the way who had written about this. And her history started with Adriaan de Groot [1956].
So, on a lark, I Googled Adriaan de Groot, and there was the full text of his article [put there by Dorothy Bishop]…
[Translated and Annotated by Eric-Jan Wagenmakers, Denny Borsboom, Josine Verhagen, Rogier Kievit, Marjan Bakker, Angelique Cramer, Dora Matzke, Don Mellenbergh, and Han L. J. van der Maas]
from the Psychological Laboratory of the University of Amsterdam

Abstract:
Adrianus Dingeman de Groot [1914–2006] was one of the most influential Dutch psychologists. He became famous for his work “Thought and Choice in Chess”, but his main contribution was methodological — De Groot co-founded the Department of Psychological Methods at the University of Amsterdam [together with R. F. van Naerssen], founded one of the leading testing and assessment companies [CITO], and wrote the monograph “Methodology” that centers on the empirical-scientific cycle: observation–induction–deduction–testing–evaluation. Here we translate one of De Groot’s early articles, published in 1956 in the Dutch journal Nederlands Tijdschrift voor de Psychologie en Haar Grensgebieden. This article is more topical now than it was almost 60 years ago. De Groot stresses the difference between exploratory and confirmatory [“hypothesis testing”] research and argues that statistical inference is only sensible for the latter: “One ‘is allowed’ to apply statistical tests in exploratory research, just as long as one realizes that they do not have evidential impact”. De Groot may have also been one of the first psychologists to argue explicitly for preregistration of experiments and the associated plan of statistical analysis. The appendix provides annotations that connect De Groot’s arguments to the current-day debate on transparency and reproducibility in psychological science.
Last week, I called the publication by Jureidini, Amsterdam, and McHenry the jewel in the crown… to metaphorically emphasize the importance of their article, which introduced subpoenaed internal corporate documents to illustrate the fraudulent underbelly of the 2004 Celexa RCT in adolescents. Well, I need an even greater superlative for the De Groot article Dorothy Bishop brings to us from a more naive time – how about the Hope Diamond? Since you’re unlikely to read the whole paper without a nudge, here’s its essence from the translator’s note in the Appendix:
Specifically, De Groot makes three important interconnected points. The first point is that exploratory analyses invalidate the standard interpretation of outcomes from hypothesis testing procedures. “Exploratory investigations differ from hypothesis testing in that the canon of the inductive method of testing is not observed, at least not in its rigid form. The researcher does take as his starting-point certain expectations, a more or less vague theoretical framework; he is indeed out to find certain kinds of relationships in his data, but these have not been antecedently formulated in the form of precisely stated «testable» hypotheses. Accordingly they cannot, in the strict sense, be put to the test.” «De Groot, 1969, p. 306». Indeed, in exploratory work: “The characteristic element of ‘trying out whether …’ is present, but in such a way that the researcher’s attitude in fact boils down to ‘let us see what we can find.’ Now what is ‘found’ — that is, selected — cannot also be tested on the same materials” «De Groot, 1969, p. 307»…

The second, related, point that De Groot makes is the pressing need to distinguish between exploratory and confirmatory «“hypothesis testing”» analyses. De Groot reiterated this point in his book “Methodology”: “It is of the utmost importance at all times to maintain a clear distinction between exploration and hypothesis testing. The scientific significance of results will to a large extent depend on the question whether the hypotheses involved had indeed been antecedently formulated, and could therefore be tested against genuinely new materials. Alternatively, they would, entirely or in part, have to be designated as ad hoc hypotheses, which could, emphatically, not yet be tested against ‘new’ materials.” «De Groot, 1969, p. 52» Indeed, De Groot believed that it was unethical to blur the distinction between exploratory and confirmatory work: “It is a serious offense against the social ethics of science to pass off an exploration as a genuine testing procedure. Unfortunately, this can be done quite easily by making it appear as if the hypotheses had already been formulated before the investigation started. Such misleading practices strike at the roots of ‘open’ communication among scientists.” «De Groot, 1969, p. 52». This point was later revisited by Kerr «1998» when he introduced the concept of HARKing «“Hypothesizing After the Results are Known”», as well as by Simmons et al. «2011», John et al. «2012», and Wagenmakers, Wetzels, Borsboom, and van der Maas «2011»…

The third point that De Groot makes concerns preregistration. De Groot strongly felt that in order for research to qualify as confirmatory «and, consequently, for statistical inference to be meaningful», an elaborate preregistration effort is called for: “If an investigation into certain consequences of a theory or hypothesis is to be designed as a genuine testing procedure «and not for exploration», a precise antecedent formulation must be available, which permits testable consequences to be deduced.” «De Groot, 1969, p. 69»…
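
De Groot’s remedy can be seen in a dozen lines of code: let half the data suggest a hypothesis, then test that single pre-specified hypothesis on the untouched half. This is my illustration with simulated data [not De Groot’s, and not the trial’s]:

```python
# De Groot's distinction in miniature: a relationship "found" while exploring
# one half of the data only counts as tested when it survives the other half.
# Simulated data with no true effects; purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
outcomes = rng.normal(size=(200, 10))  # 200 subjects, 10 candidate outcomes
group = np.tile([0, 1], 100)           # alternating "drug"/"placebo" labels

explore, confirm = outcomes[:100], outcomes[100:]
g_exp, g_con = group[:100], group[100:]

# Exploration: let the first half "suggest" the best-looking outcome...
p_exp = [
    stats.ttest_ind(explore[g_exp == 0, j], explore[g_exp == 1, j]).pvalue
    for j in range(10)
]
best = int(np.argmin(p_exp))

# Confirmation: ...then test only that pre-specified outcome on fresh data.
p_con = stats.ttest_ind(confirm[g_con == 0, best], confirm[g_con == 1, best]).pvalue
print(f"exploratory p = {min(p_exp):.3f}, confirmatory p = {p_con:.3f}")
```

The exploratory half will routinely cough up a small p-value by selection alone; the held-out half, which is the only genuine test, almost never confirms it.
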
My version:

  1. Randomized Clinical Trials are not research [exploratory], they’re product testing [confirmatory].
  2. The a priori Protocol defines the analysis.
Dr. Bernard Carroll‘s version was in this recent comment:
There is an obvious way to prevent that kind of data manipulation, cherry picking, moving of goalposts, HARKing, and glossing over of adverse events. All it would take is for the FDA to require that they analyze the data strictly according to the a priori protocol. That requirement would apply to any investigational new drug or to any approved drug being tested for a new indication. Corporations and investigators would be prohibited from reporting any analyses other than the FDA analyses. With an a priori protocol and plan of analysis there should be no room for self-serving “creativity” by the corporations.

As things stand, we have a Kabuki theater spectacle. The corporations don’t come clean about what they did and the FDA doesn’t call them out when agency analyses disagree with the corporate line. They may deny or delay approval, but the FDA doesn’t go to the literature like Jureidini, Amsterdam, and McHenry did here to challenge the distorted corporate analyses reported in the literature.

This requirement would put an end to creative manipulation of the clinical trials literature. As Dr. Mickey often says, this is not high science but rather product testing. Thus, corporations cannot claim to be privileged for conducting the statistical analyses. We need a clinical trials equivalent of the Underwriters Laboratory. Before licensing, nobody takes the manufacturing corporation’s word for it concerning the safety and performance of X-ray machines or CT scanners or cardiac defibrillators. Why should we treat drugs any differently?…
And Dr. Adriaan de Groot said it back when the world was young [1956]. It’s the Hope Diamond because it’s our only hope to stop the craziness of the last thirty-five years. Rigid enforcement of analysis that follows the registered a priori protocol is probably even more important than Data Transparency…
hat tip to Dorothy Bishop…  
Mickey @ 7:07 PM

housekeeping…

Posted on Monday 9 May 2016

When I retired, a lot of people kept asking me what I was going to do. Their questions made me aware that I had no idea, so I made up something. "I want to find my inner boredom," I would say. Later, I found a more accurate answer, "I want to think about what I want to think about." What I meant was that in the busy·ness of practicing and teaching and the many other things on my plate, my mind was not my own. It was filled with things I needed to attend to, and I wanted it back. I wanted to pick my own topics.

So for the first four or five years, I did a lot of things, but they had nothing to do with medicine or psychiatry. Then I started seeing patients again and got interested in the things I write about here, and I’ve really enjoyed doing it. In conventional terms, I guess I had burn-out, and after some needed respite care, I was good to go. But I still have to be vigilant about keeping my mind free "… to think about what I want to think about." It’s very easy for me to get on a topic, and feel like I have to stick to the task. Sometimes, that’s what I want to do, but sometimes it begins to feel like a homework assignment [self-imposed].

That’s what has happened with this neural circuits topic. I’ve been wanting to figure out what they’re talking about ever since reading…
… that mental illness was increasingly being recognised as a disorder of brain circuitry, rather than as a chemical imbalance, thanks to neuroimaging techniques and the discovery of some key biomarkers.
…in a speech by Dr. Insel five years ago. It was a hard time for him. PHARMA was pulling out of CNS drug research. The bio·dreams of the DSM-5 Task Force had tanked. And the NIMH had just initiated its amorphous RDoC Project. Dr. Insel’s exuberant campaign to get us to see Psychiatry as a Clinical Neuroscience Discipline just wasn’t panning out. So I think I saw his pronouncement about brain circuitry as a Hail Mary, a desperate attempt to keep his dreams alive, and dismissed it since I had no idea what he was talking about.

But he kept talking about it [Director’s Blog: Mental Illness Defined as Disruption in Neural Circuits, Insel Outlines the Psychiatry of the Future Treating Disorders of Neural Circuitry]. And the RDoC became the official language of the NIMH [Director’s Blog: Transforming Diagnosis]. So when Dr. Leanne Williams recently hit the airwaves with her RAD project [Precision psychiatry: a neural circuit taxonomy for depression and anxiety, How neuroscience could determine your mental health treatment, and Developing a clinical translational neuroscience taxonomy for anxiety and mood disorder: protocol for the baseline-follow up Research domain criteria Anxiety and Depression [“RAD”] project], I felt like it was time to look into this business of neural circuits. And so I wrote neural circuits 1…, thinking that a good place to start was with the work in Neurology with Parkinson’s Disease. And I’ve been chasing the references about the psychiatric applications of the concept.

But it’s not going like I planned. Williams’ papers describe a number of neural circuits as if they’re established, but chasing the references, my impression so far is that they’re pretty soft. I’ve written colleagues who I would expect to be in the know, and chased down some of their references too. But I can’t find any place to stand and I’ve ended up with a computer desktop full of saved .pdf’s, but little else to show for my efforts. I genuinely can’t tell if my unfamiliarity with the topic is the problem or if this really is one of those places where the people writing have blurred the distinctions between their dreams and their research. Of course, I suspect the latter based on our experience of the last couple of decades, but I don’t know enough yet to be sure about any of it. So I’m going to leave neural circuits 1… sitting there without any neural circuits 2… for a while, and read over my gathered material at a more leisurely pace – until something gels. Right now, it’s making my eyes cross and my brain hurt. As I said, I’m retired and "I want to think about what I want to think about."

Saturday, out of the blue, I posted a couple of jazz classics from 1956 [nocturne for flute [1956]…, breezin’ along in the trades [1956]…], favorites from my own adolescence – a time of cool jazz and the beat generation. What got 1956 in my mind? It was reading a paper from that year by a Dutch Psychologist about the use of statistics in research, a paper I hadn’t seen before: The Meaning of “Significance” for Different Types of Research by A. D. de Groot [Psychologist and Chessmaster]. And one thing led to another. I found myself on Youtube listening to that music from long ago [the Shorty Rogers piece wasn’t my favorite of his, but the Chinook that Melted my Heart just wasn’t to be found].

So I moved the music posts forward a couple of days [that’s the housekeeping part], added one other [one note samba…], and I’m going to talk about Adrianus Dingeman de Groot’s ideas and how I landed on that old paper. It’s a theme that’s always on my front burner, and I know that it’s something "I want to think about" right now…
Mickey @ 12:41 PM

one note samba…

Posted on Monday 9 May 2016

Mickey @ 12:00 PM

breezin’ along in the trades [1956]

Posted on Monday 9 May 2016


Shorty Rogers Quintet

Mickey @ 7:15 AM

nocturne for flute [1956]

Posted on Monday 9 May 2016


Bud Shank Quartet

Mickey @ 7:00 AM