captain ben and his crew…

Posted on Sunday 15 May 2016

There are times when being wrong is just fine. When I first read about Ben Goldacre’s COMPare Project, I didn’t think it would have much of an impact. What he was proposing to do was put together an army of medical students who would look over Clinical Trial papers and, when they found one that didn’t follow the a priori Protocol, start writing letters to the Journal, calling it "outcome switching." While I certainly agree with the sentiment, I thought his campaign was too simplistic – more parrying than combat…
Retraction Watch
by Alison McCook
May 15, 2016

A major medical journal has updated its instructions to authors, now requiring that they publish protocols of clinical trials, along with any changes made along the way.

We learned of this change via the COMPare project, which has been tracking trial protocol changes in major medical journals — and been critical of the Annals of Internal Medicine’s response to those changes. However, Darren Taichman, the executive deputy editor of the journal, told us the journal’s decision to publish trial protocols was a long time coming…
    This change was something we planned prior to COMPARE and were intending to implement with an update of our online journal that is in process. However, the barrier COMPARE encountered in obtaining a protocol for one of the studies in their audit prompted us to implement it earlier…
Read the whole thing. It’s the real deal – a success that could be bigger than Ben’s AllTrials campaign. So I guess that one moral of the story is Don’t bet against Ben Goldacre. His TED talk was a landmark, as was his AllTrials campaign. He seems to have the gift of both method and timing – something of a swashbuckler in an age of plodders.

While I still believe that Data Transparency is the ultimate goal to combat the rampant corruption, I realized when we were writing our RIAT paper that we needed a preventive strategy as well – something to head off the deceit in the first place. In the original Paxil Study 329, the Celexa Study in my last post [this tawdry era…], and for that matter, the overwhelming majority of the distorted RCTs I’ve looked at over the years, deviating from the a priori Protocol and/or the Statistical Analysis Plan to find something to call significant has been a ubiquitous practice, and the standard means for turning all those sow’s ears into silk purses.

It’s simple, up front, something that happens at the level of the journal publications where it needs to happen, and he’s brought it off in a major journal. So my hat’s off to Captain Ben and his crew…
Mickey @ 3:51 PM

this tawdry era…

Posted on Saturday 14 May 2016

For the last week, I’ve been unable to focus on anything very far from a single article [see the jewel in the crown…, why is that?…, the hope diamond…, and the obvious irony… ]. And it’s been frustrating in that the article has been behind the pay-wall. But now, the International Journal of Risk & Safety in Medicine has generously published it Open Access. And while the authors’ background notes are not yet on-line, they allowed me to post them here [see Update below]. So you can decide for yourself if my monomania is justified with a full deck:
by Jureidini, Jon N., Amsterdam, Jay D., and McHenry, Leemon B.
International Journal of Risk & Safety in Medicine, 2016 28[1]:33-43.

OBJECTIVE: Deconstruction of a ghostwritten report of a randomized, double-blind, placebo-controlled efficacy and safety trial of citalopram in depressed children and adolescents conducted in the United States.
METHODS: Approximately 750 documents from the Celexa and Lexapro Marketing and Sales Practices Litigation: Master Docket 09-MD-2067-[NMG] were deconstructed.
RESULTS: The published article contained efficacy and safety data inconsistent with the protocol criteria. Procedural deviations went unreported imparting statistical significance to the primary outcome, and an implausible effect size was claimed; positive post hoc measures were introduced and negative secondary outcomes were not reported; and adverse events were misleadingly analysed. Manuscript drafts were prepared by company employees and outside ghostwriters with academic researchers solicited as ‘authors’.
CONCLUSION: Deconstruction of court documents revealed that protocol-specified outcome measures showed no statistically significant difference between citalopram and placebo. However, the published article concluded that citalopram was safe and significantly more efficacious than placebo for children and adolescents, with possible adverse effects on patient safety.
While this is only one example of many similarly misreported Clinical Trials, the access to the internal industry documents allowed these authors to leave nothing to our imagination. They prove that it was ghostwritten; that it was framed by the industry executives for commercial gain before any academic author got near the data; that it was deceitfully written to hide its failings, on purpose; and that it was a negative Clinical Trial presented as positive and subsequently used to gain FDA Approval. Those points and more are abundantly clear in this easy-reading article.

I wanted to go through just one of their many examples to illustrate why it’s imperative that these RCT reports adhere to the pre-registered a priori Protocols and Statistical Analysis Plans, so clearly explained in Adriaan de Groot’s 1956 paper [see the hope diamond…]. In this case, the a priori Protocol was among the archived documents examined by Jureidini et al:

from the a priori Protocol [page 23]

12.5.1 Primary Efficacy Parameter
  Change from baseline in CDRS-R score at Week 8 will be used as the primary efficacy parameter. Descriptive statistics will be calculated by visit. Comparison between citalopram and placebo will be performed using three-way analysis of covariance [ANCOVA] with age group, treatment group, and center as three factors, and the baseline CDRS-R score as covariate.
12.5.2 Secondary Efficacy Parameter[s]
  The secondary efficacy parameters are:
    1. CGI-Improvement subscale score [CGI-I].
    2. Change from baseline in CGI-Severity score [CGI-S].
    3. Change from baseline in K-SADS-P [depression module] score.
    4. Change from baseline in CGAS score.

However, in the published article [A Randomized, Placebo-Controlled Trial of Citalopram for the Treatment of Major Depression in Children and Adolescents], the parameters were not-so-subtly changed. Response is nowhere defined in the Protocol. And the K-SADS-P and CGAS were just dropped:

from Wagner et al [page 1080]

The primary outcome measure in this study was the change from baseline in score on the Children’s Depression Rating Scale – Revised at week 8 or upon termination. The Children’s Depression Rating Scale – Revised was administered at each study visit. Response was defined as a score of ≤ 28 [indicating minimal residual symptoms]. Secondary measures included Clinical Global Impression [CGI] improvement and severity ratings [25].

Then, in their terse Results section, they included Response [erroneously called "prospectively defined"], a non-Protocol Effect Size [wrongly calculated], and again just left out the K-SADS-P and CGAS altogether:

from Wagner et al [page 1081]

Citalopram treatment showed statistically significant improvement compared with placebo on the Children’s Depression Rating Scale – Revised as early as week 1 [F= 6.58, df=1,150, p<0.05], which persisted throughout the study. At week 8, the effect size on the primary outcome measure, Children’s Depression Rating Scale – Revised [last observation carried forward], was 2.9. Additionally, at endpoint more citalopram-treated patients [36%] met the prospectively defined criterion for response than did placebo-treated patients [24%], a difference that was statistically significant [χ²=4.178, df=1, p<0.05]. The proportion of patients with a CGI improvement rating ≤ 2 at week 8 was 47% for the citalopram group and 45% for the placebo group [last observation carried forward values]. For the CGI severity rating, baseline values were 4.4 for the citalopram group and 4.3 for the placebo group, and endpoint values [last observation carried forward] were 3.1 for the citalopram group and 3.3 for the placebo group.

Not to mention the fact that the reported CDRS-R result failed to follow Protocol-directed exclusions, which invalidated the claimed significance, or that the add-in Response had a trivial NNT [8.3]. So by deviating from the a priori Protocol in a variety of ways, they were able to cherry-pick among parameters to give the illusion of efficacy.
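For readers who haven’t run the arithmetic: the Number Needed to Treat is just the reciprocal of the absolute difference in response rates, computed here from the 36% vs 24% figures quoted above [my quick sketch, not from the paper]:

```python
# NNT from the response rates reported in Wagner et al:
# 36% response on citalopram vs 24% on placebo.
citalopram_response = 0.36
placebo_response = 0.24

# Absolute risk reduction: the raw difference in response rates.
arr = citalopram_response - placebo_response

# NNT = 1 / ARR: how many patients must be treated with drug
# for ONE additional responder beyond what placebo produces.
nnt = 1 / arr
print(round(nnt, 1))  # 8.3
```

So roughly eight or nine adolescents would need to take citalopram to produce one response beyond placebo – the "trivial" figure in question.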

But the even bigger revelation in the documents is the amount of effort the industry handlers and doctors put into controlling the process and actively hiding the true results of the Clinical Trial:

from Jureidini et al [section 3.2.2]

Wagner et al. failed to publish two of the protocol-specified secondary outcomes, both of which were unfavourable to citalopram. While CGI-S and CGI-I were correctly reported in the published article as negative, the Kiddie Schedule for Affective Disorders and Schizophrenia-Present [depression module] and the Children’s Global Assessment Scale [CGAS] were not reported in either the methods or results sections of the published article.
In our view, the omission of secondary outcomes was no accident. On October 15, 2001, Ms. Prescott wrote: “I’ve heard through the grapevine that not all the data look as great as the primary outcome data. For these reasons [speed and greater control] I think it makes sense to prepare a draft in-house that can then be provided to Karen Wagner [or whomever] for review and comments.” Subsequently, Forest’s Dr. Heydorn wrote on April 17, 2002: “The publications committee discussed target journals, and recommended that the paper be submitted to the American Journal of Psychiatry as a Brief Report. The rationale for this was the following: … As a Brief Report, we feel we can avoid mentioning the lack of statistically significant positive effects at week 8 or study termination for secondary endpoints.”
Instead the writers presented post hoc statistically positive results that were not part of the original study protocol or its amendment [visit-by-visit comparison of CDRS-R scores, and ‘Response’, defined as a score of ≤28 on the CDRS-R] as though they were protocol-specified outcomes. For example, ‘Response’ was reported in the results section of the Wagner et al. article between the primary and secondary outcomes, likely predisposing a reader to regard it as more important than the selected secondary measures reported, or even to mistake it for a primary measure.

There’s nothing speculative here. The points are illustrated with verbatim references from the perpetrators’ own internal emails. And yet the authors had one hell of a time getting it published [also well referenced in their background notes with emails from journal editors].

Like the Paxil Study 329 article, the list of contributors stretches well beyond the listed authors – the subjects in the studies, the kids prescribed the medication, the litigation that released these documents, the library that archived them, etc. But major credit goes to these authors who spent countless hours doing tedious, unfunded research, wrote the paper, then persisted until they found a journal that would accept the article as it should’ve been written. And while we’re at it, the International Journal of Risk & Safety in Medicine deserves credit for rising to the occasion – both by publishing it and for making it Open Access.

I think it’s now our job to ensure that all this dedicated work is rewarded with a wide readership, one that helps us move closer to putting this tawdry era behind us…
Mickey @ 12:12 PM

the obvious irony…

Posted on Thursday 12 May 2016

We don’t need a lofty scientific explanation for why we should demand that Clinical Trials of medication follow [and report the results of] the a priori Protocol. Common sense and historical fact offer reasons enough. If I can regress to my monotonous diagram of the Clinical Trial process for a moment, the a priori Protocol [including the Statistical Analysis Plan] has to be both reviewed and approved before the study begins, and constitutes the last verifiable insurance against things like HARKing or p-hacking [see the hope diamond…] – study results potentially manipulated based on foreknowledge of the outcome:

For one thing, there’s no guarantee that a sponsor won’t just go around the blind [they’re paying the CRO doing the study] or, for another more likely scenario, select analytic techniques that produce the results they want. That’s the common sense part. The historical fact part is also self-evident. This blog and many others are filled with examples, as are the court dockets. In fact, looking at the psychiatric literature of several decades, we don’t need statistics to know what happened. It’s hard to produce examples where this kind of distortion didn’t happen at some level. The recent article by Jureidini, Amsterdam, and McHenry [see the jewel in the crown… and why is that?…] just happens to be about a sample case where there were enough subpoenaed materials available to directly document the behind-the-scenes deceit [when you happen to have three researchers willing to go through thousands of pages to flag the ones that mattered].

But Dr. De Groot’s thoughtful analysis [see the hope diamond…] adds value beyond this obvious empirical evidence, even beyond the technical explanation. It goes to the heart of what statistical analyses really represent. Just because they involve numbers and formulas and generate numeric answers doesn’t mean that statistical analyses are like the familiar arithmetic, algebra, or calculus with computations producing distinct answers. In fact, if you took a statistics course, the teacher was likely a psychologist or a social scientist rather than someone from the math department. Statistical analyses are about conditional likelihoods [with the emphasis on conditional]. And De Groot is pointing out that, unlike the other mathematics, one absolute condition in confirmatory statistical analysis is blindness [with the emphasis on absolute].

There is an obvious irony in this story of pharmaceutical clinical trials. While the corporations conducting and analyzing these clinical trial results are afforded any number of pathways to get around the absolute requirements for blindness, those of us on the outside who prescribe and take these medications and who should be able to see the whole process are muzzled by a blanket of absolute blindness [I could’ve replaced obvious irony with obvious travesty]. That’s clearly backwards. Recently, several papers have made the ramifications of this obvious irony abundantly clear…
Having been intimately involved in one of these articles and knowing the authors of the other, I can attest to the herculean effort required to produce them. Both are the result of unfunded research. There were no Conflicts of Interest and they were largely done by senior people with no need for further credentialing. Unfortunately, the primary articles they analyzed are not exceptions. And at least in the domain of industry funded RCTs of CNS drugs, they’re the rule. Even worse, both studies involved medications for vulnerable youth. The mandate for change is clear as a bell…
Mickey @ 7:01 PM

good news bulletin…

Posted on Thursday 12 May 2016

Since it’s almost Friday the 13th, I thought I’d post some good news follow-ups to neutralize any bad luck juju that might be out and about:
  1. Unfortunately, the rainy season coincided with the pollen season this year, interfering with the smoothness of my yearly graph on Seasonal Dementia [a new entity… and seasonal dementia update…]:

    As seasoned veterans know [pun intended], the times of greatest affliction don’t necessarily coincide with the highest pollen counts. In my case, it’s that second bump in late April that’s the killer. But this week has been great, and I declare Pollenarama-2016 officially over…

  2. I learned today that the blockbuster article that I can’t seem to stop talking about [The citalopram CIT-MD-18 pediatric depression trial: Deconstruction of medical ghostwriting, data mischaracterisation and academic malfeasance] will be published Open Access [full text on-line] [though it will take several days for the link to be changed]…

  3. Last summer, I saw a patient who presented with Delirium from an outrageous drug regimen – maximal doses of multiple psychiatric medications [blitzed…]:
    • Seroquel 600 mg/day
    • Trazodone 450 mg/day
    • Depakote 2.5 Grams/day
    • Neurontin [I forget how much too much]/day
    • Cogentin 8 mg/day [?]
    • Prozac 80 mg/day

    I began to taper the drugs, and what was lurking underneath was full blown Tardive Dyskinesia with constant back and forth jaw movement, hand wringing, athetoid shoulder shrugging, and restless legs. With continued tapering, her sensorium slowly cleared somewhat, but the Tardive Dyskinesia decidedly worsened [some truths are self-evident…, a story: getting near the ending[s]…, and the verb “to follow”…]. Even off medications, her cognition and memory remained impaired. So in March, I was finally able to gather enough family to get a coherent history that made it clear that her actual primary diagnosis was Traumatic Brain Injury from a fall two years before [cases…]. The TD symptoms were improving.

    I saw her yesterday in the clinic, and for the first time in almost a year, she had no symptoms of Tardive Dyskinesia. She does have a residual painful TMJ syndrome from the months of constant jaw movements, but all of the TD symptoms themselves have cleared. She’s only on Alprazolam for sleep and Prozac 20 mg daily [per her request]. So, one big bullet dodged and a sigh of relief from everyone concerned…
Thus ends my pre-Friday-13th good news bulletin…
Mickey @ 7:00 PM

the hope diamond…

Posted on Tuesday 10 May 2016

[click image to link to her slides]

Dorothy Bishop is a Developmental Psychologist who focuses on Dyslexia and other Language Disorders. This is not an article, just the slides from a presentation she gave to the Rhodes Biomedical Association last week on the reproducibility crisis. Her slides tell a story well known to us. And the problem isn’t the science, it’s the scientists. She starts with some familiar methods used to distort findings. I’ve synopsized those opening slides:

  • Publication bias: burying negative studies
  • HARKing: Hypothesis After Results Known
  • p-hacking: trying different statistical tests on various datasets until you get the result you want
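The p-hacking bullet is easy to make concrete with a small simulation [mine, not from her slides]: under the null hypothesis a well-behaved test’s p-value is uniform on [0, 1], so if you run twenty independent null tests – twenty endpoints, subgroups, or analytic variations – the chance that at least one comes out "significant" at p < 0.05 is far above 5%:

```python
import random

random.seed(0)

ALPHA = 0.05
N_OUTCOMES = 20      # e.g. 20 endpoints/subgroups/analytic choices tried
N_TRIALS = 10_000    # simulated "experiments" with no real effect at all

# Under the null, each test's p-value is uniform on [0, 1].
# Count how often at least one of the 20 tests crosses ALPHA anyway.
false_positive_runs = 0
for _ in range(N_TRIALS):
    p_values = [random.random() for _ in range(N_OUTCOMES)]
    if min(p_values) < ALPHA:
        false_positive_runs += 1

observed = false_positive_runs / N_TRIALS
expected = 1 - (1 - ALPHA) ** N_OUTCOMES   # theoretical: about 0.64
print(f"observed {observed:.2f}, theoretical {expected:.2f}")
```

Roughly two times out of three, a trial with no real effect hands the analyst something to call significant – which is exactly why the choice of test has to be fixed before the results are known.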
but then she talks about the various people along the way who had written about this. And her history started with Adriaan de Groot [1956].
So, on a lark, I Googled Adriaan de Groot, and there was his article in full text [put there by Dorothy Bishop]…
[Translated and Annotated by Eric-Jan Wagenmakers, Denny Borsboom, Josine Verhagen, Rogier Kievit, Marjan Bakker, Angelique Cramer, Dora Matzke, Don Mellenbergh, and Han L. J. van der Maas]
from the Psychological Laboratory of the University of Amsterdam

Adrianus Dingeman de Groot [1914–2006] was one of the most influential Dutch psychologists. He became famous for his work “Thought and Choice in Chess”, but his main contribution was methodological — De Groot co-founded the Department of Psychological Methods at the University of Amsterdam [together with R. F. van Naerssen], founded one of the leading testing and assessment companies [CITO], and wrote the monograph “Methodology” that centers on the empirical-scientific cycle: observation–induction–deduction–testing–evaluation. Here we translate one of De Groot’s early articles, published in 1956 in the Dutch journal Nederlands Tijdschrift voor de Psychologie en Haar Grensgebieden. This article is more topical now than it was almost 60 years ago. De Groot stresses the difference between exploratory and confirmatory [“hypothesis testing”] research and argues that statistical inference is only sensible for the latter: “One ‘is allowed’ to apply statistical tests in exploratory research, just as long as one realizes that they do not have evidential impact”. De Groot may have also been one of the first psychologists to argue explicitly for preregistration of experiments and the associated plan of statistical analysis. The appendix provides annotations that connect De Groot’s arguments to the current-day debate on transparency and reproducibility in psychological science.
Last week, I called the publication by Jureidini, Amsterdam, and McHenry  the jewel in the crown… to metaphorically emphasize the importance of their article, which introduced subpoenaed internal corporate documents to illustrate the fraudulent underbelly of the 2004 Celexa RCT in adolescents. Well, I need an even greater superlative for the De Groot article Dorothy Bishop brings to us from a more naive time – how about the Hope Diamond? Since you’re unlikely to read the whole paper without a nudge, here’s its essence from the translator’s note in the Appendix:
Specifically, De Groot makes three important interconnected points. The first point is that exploratory analyses invalidate the standard interpretation of outcomes from hypothesis testing procedures. “Exploratory investigations differ from hypothesis testing in that the canon of the inductive method of testing is not observed, at least not in its rigid form. The researcher does take as his starting-point certain expectations, a more or less vague theoretical framework; he is indeed out to find certain kinds of relationships in his data, but these have not been antecedently formulated in the form of precisely stated «testable» hypotheses. Accordingly they cannot, in the strict sense, be put to the test.” «De Groot, 1969, p. 306». Indeed, in exploratory work: “The characteristic element of ‘trying out whether …’ is present, but in such a way that the researcher’s attitude in fact boils down to ‘let us see what we can find.’ Now what is ‘found’ — that is, selected — cannot also be tested on the same materials” «De Groot, 1969, p. 307»…

The second, related, point that De Groot makes is the pressing need to distinguish between exploratory and confirmatory «“hypothesis testing”» analyses. De Groot reiterated this point in his book “Methodology”: “It is of the utmost importance at all times to maintain a clear distinction between exploration and hypothesis testing. The scientific significance of results will to a large extent depend on the question whether the hypotheses involved had indeed been antecedently formulated, and could therefore be tested against genuinely new materials. Alternatively, they would, entirely or in part, have to be designated as ad hoc hypotheses, which could, emphatically, not yet be tested against ‘new’ materials.” «De Groot, 1969, p. 52» Indeed, De Groot believed that it was unethical to blur the distinction between exploratory and confirmatory work: “It is a serious offense against the social ethics of science to pass off an exploration as a genuine testing procedure. Unfortunately, this can be done quite easily by making it appear as if the hypotheses had already been formulated before the investigation started. Such misleading practices strike at the roots of ‘open’ communication among scientists.” «De Groot, 1969, p. 52». This point was later revisited by Kerr «1998» when he introduced the concept of HARKing «“Hypothesizing After the Results are Known”», as well as by Simmons et al. «2011», John et al. «2012», and Wagenmakers, Wetzels, Borsboom, and van der Maas «2011»…

The third point that De Groot makes concerns preregistration. De Groot strongly felt that in order for research to qualify as confirmatory «and, consequently, for statistical inference to be meaningful», an elaborate preregistration effort is called for: “If an investigation into certain consequences of a theory or hypothesis is to be designed as a genuine testing procedure «and not for exploration», a precise antecedent formulation must be available, which permits testable consequences to be deduced.” «De Groot, 1969, p. 69»…
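De Groot’s "cannot also be tested on the same materials" can be shown at desk scale [my illustration, using simulated noise rather than trial data]: scan twenty pure-noise "endpoints", cherry-pick the most impressive-looking one, and its apparent effect collapses when checked against genuinely new material:

```python
import random
from statistics import mean

random.seed(1)

N_ENDPOINTS = 20
N_PER_HALF = 100

# Pure-noise "trial": every endpoint's true mean effect is zero.
# Exploration: scan all endpoints and select the most impressive one.
# De Groot says this is "allowed" -- as long as the selected finding
# is not also *tested* on the same material it was selected from.
explore = [[random.gauss(0, 1) for _ in range(N_PER_HALF)]
           for _ in range(N_ENDPOINTS)]
best = max(range(N_ENDPOINTS), key=lambda i: abs(mean(explore[i])))

# Confirmation: genuinely new observations of the selected endpoint.
confirm = [random.gauss(0, 1) for _ in range(N_PER_HALF)]

print(f"selected endpoint, exploration data:  {mean(explore[best]):+.3f}")
print(f"same endpoint, new confirmation data: {mean(confirm):+.3f}")
# The cherry-picked "effect" was inflated purely by selection; on new
# material it regresses back toward its true value of zero.
```

Which is the whole argument for preregistration in one picture: the only way a reader can know the reported hypothesis wasn’t selected after the fact is a precise antecedent formulation on file.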
My version:
  1. Randomized Clinical Trials are not research [exploratory], they’re product testing [confirmatory].
  2. The a priori Protocol defines the analysis.
Dr. Bernard Carroll’s version was in this recent comment:
There is an obvious way to prevent that kind of data manipulation, cherry picking, moving of goalposts, HARKing, and glossing over of adverse events. All it would take is for the FDA to require that they analyze the data strictly according to the a priori protocol. That requirement would apply to any investigational new drug or to any approved drug being tested for a new indication. Corporations and investigators would be prohibited from reporting any analyses other than the FDA analyses. With an a priori protocol and plan of analysis there should be no room for self-serving “creativity” by the corporations.

As things stand, we have a Kabuki theater spectacle. The corporations don’t come clean about what they did and the FDA doesn’t call them out when agency analyses disagree with the corporate line. They may deny or delay approval, but the FDA doesn’t go to the literature like Jureidini, Amsterdam, and McHenry did here to challenge the distorted corporate analyses reported in the literature.

This requirement would put an end to creative manipulation of the clinical trials literature. As Dr. Mickey often says, this is not high science but rather product testing. Thus, corporations cannot claim to be privileged for conducting the statistical analyses. We need a clinical trials equivalent of the Underwriters Laboratory. Before licensing, nobody takes the manufacturing corporation’s word for it concerning the safety and performance of X-ray machines or CT scanners or cardiac defibrillators. Why should we treat drugs any differently?…
And Dr. Adriaan de Groot said it back when the world was young [1956]. It’s the Hope Diamond because it’s our only hope to stop the craziness of the last thirty-five years. Rigid enforcement of analysis that follows the registered a priori protocol is probably even more important than Data Transparency…
hat tip to Dorothy Bishop…  
Mickey @ 7:07 PM


Posted on Monday 9 May 2016

When I retired a lot of people kept asking me what I was going to do. Their questions made me aware that I had no idea, so I made up something. "I want to find my inner boredom" I would say. Later, I found a more accurate answer, "I want to think about what I want to think about." What I meant was that in the busy·ness of practicing and teaching and the many other things on my plate, my mind was not my own. It was filled with things I needed to attend to, and I wanted it back. I wanted to pick my own topics.

So for the first four or five years, I did a lot of things, but they had nothing to do with medicine or psychiatry. Then I started seeing patients again and got interested in the things I write about here, and I’ve really enjoyed doing it. In conventional terms, I guess I had burn-out, and after some needed respite care, I was good to go. But I still have to be vigilant about keeping my mind free "… to think about what I want to think about." It’s very easy for me to get on a topic, and feel like I have to stick to the task. Sometimes, that’s what I want to do, but sometimes it begins to feel like a homework assignment [self-imposed].

That’s what has happened with this neural circuits topic. I’ve been wanting to figure out what they’re talking about ever since reading…
… that mental illness was increasingly being recognised as a disorder of brain circuitry, rather than as a chemical imbalance, thanks to neuroimaging techniques and the discovery of some key biomarkers.
…in a speech by Dr. Insel five years ago. It was a hard time for him. PHARMA was pulling out of CNS drug research. The bio·dreams of the DSM-5 Task Force had tanked. And the NIMH had just initiated its amorphous RDoC Project. Dr. Insel’s exuberant campaign to get us to see Psychiatry as a Clinical Neuroscience Discipline just wasn’t panning out. So I think I saw his pronouncement about brain circuitry as a hail mary, a desperate attempt to keep his dreams alive, and dismissed it since I had no idea what he was talking about.

But he kept talking about it [Director’s Blog: Mental Illness Defined as Disruption in Neural Circuits, Insel Outlines the Psychiatry of the Future Treating Disorders of Neural Circuitry]. And the RDoC became the official language of the NIMH [Director’s Blog: Transforming Diagnosis]. So when Dr. Leanne Williams recently hit the airwaves with her RAD project [Precision psychiatry: a neural circuit taxonomy for depression and anxiety, How neuroscience could determine your mental health treatment, and Developing a clinical translational neuroscience taxonomy for anxiety and mood disorder: protocol for the baseline-follow up Research domain criteria Anxiety and Depression [“RAD”] project], I felt like it was time to look into this business of neural circuits. And so I wrote neural circuits 1…, thinking that a good place to start was with the work in Neurology with Parkinson’s Disease. And I’ve been chasing the references about the psychiatric applications of the concept.

But it’s not going like I planned. Williams’ papers describe a number of neural circuits as if they’re established, but chasing the references, my impression so far is that they’re pretty soft. I’ve written colleagues who I would expect to be in the know, and chased down some of their references too. But I can’t find any place to stand and I’ve ended up with a computer desktop full of saved .pdf’s, but little else to show for my efforts. I genuinely can’t tell if my unfamiliarity with the topic is the problem or if this really is one of those places where the people writing have blurred the distinctions between their dreams and their research. Of course, I suspect the latter based on our experience of the last couple of decades, but I don’t know enough yet to be sure about any of it. So I’m going to leave neural circuits 1… sitting there without any neural circuits 2… for a while, and read over my gathered material at a more leisurely pace – until something gels. Right now, it’s making my eyes cross and my brain hurt. As I said, I’m retired and "I want to think about what I want to think about."

Saturday, out of the blue, I posted a couple of jazz classics from 1956 [nocturne for flute [1956]…, breezin’ along in the trades [1956]…], favorites from my own adolescence – a time of cool jazz and the beat generation. What got 1956 in my mind? It was reading a paper from that year by a Dutch Psychologist about the use of statistics in research, a paper I hadn’t seen before: The Meaning of “Significance” for Different Types of Research by A. D. de Groot [Psychologist and Chessmaster]. And one thing led to another. I found myself on Youtube listening to that music from long ago [the Shorty Rogers piece wasn’t my favorite of his, but the Chinook that Melted my Heart just wasn’t to be found].

So I moved the music posts forward a couple of days [that’s the housekeeping part], added one other [one note samba…], and I’m going to talk about Adrianus Dingeman de Groot’s ideas and how I landed on that old paper. It’s a theme that’s always on my front burner, and I know that it’s something "I want to think about" right now…
Mickey @ 12:41 PM

one note samba…

Posted on Monday 9 May 2016

Mickey @ 12:00 PM

breezin’ along in the trades [1956]

Posted on Monday 9 May 2016

Shorty Rogers Quintet

Mickey @ 7:15 AM

nocturne for flute [1956]

Posted on Monday 9 May 2016

Bud Shank Quartet

Mickey @ 7:00 AM

neural circuits 1…

Posted on Sunday 8 May 2016

Usually, when we think of circuits, we think about a roughly circular path that ends in the same place it started, then repeats – like an electrical circuit or Escher’s fantastic circuit. But when people are talking about neural circuits, they seem to be using the term more like it’s used in the phrase, circuit boards, those peculiar green thingees that populate the gizmos that make our lives work.

In fact, when experts diagram their neural circuits, the figures even look a bit like those boards with their discrete elements [chips and the like] connected with rows of rigid copper conductors scurrying from element to element. But as tempting as it is, making analogies between computer hardware [or, for that matter, other electric circuitry] and the brain doesn’t hold much further than this.

So, what about those neural circuits? Where’s all the excitement? Well, right now it’s in the domain of Neurology and the Movement Disorders [eg Parkinson’s Disease]:
Basal Ganglia Circuits as Targets for Neuromodulation in Parkinson Disease
by DeLong MR and Wichmann T
JAMA Neurology. 2015;72[11]:1354-1360.

IMPORTANCE: The revival of stereotactic surgery for Parkinson disease [PD] in the 1990s, with pallidotomy and then with high-frequency deep brain stimulation [DBS], has led to a renaissance in functional surgery for movement and other neuropsychiatric disorders.
OBJECTIVE: To examine the scientific foundations and rationale for the use of ablation and DBS for treatment of neurologic and psychiatric diseases, using PD as the primary example.
EVIDENCE REVIEW: A summary of the large body of relevant literature is presented on anatomy, physiology, pathophysiology, and functional surgery for PD and other basal ganglia disorders.
FINDINGS: The signs and symptoms of movement disorders appear to result largely from signature abnormalities in one of several parallel and largely segregated basal ganglia thalamocortical circuits [ie, the motor circuit]. The available evidence suggests that the varied movement disorders resulting from dysfunction of this circuit result from propagated disruption of downstream network activity in the thalamus, cortex, and brainstem. Ablation and DBS act to free downstream networks to function more normally. The basal ganglia thalamocortical circuit may play a key role in the expression of disordered movement, and the basal ganglia-brainstem projections may play roles in akinesia and disturbances of gait. Efforts are under way to target circuit dysfunction in brain areas outside of the traditionally implicated basal ganglia thalamocortical system, in particular, the pedunculopontine nucleus, to address gait disorders that respond poorly to levodopa and conventional DBS targets.
CONCLUSIONS AND RELEVANCE: Deep brain stimulation is now the treatment of choice for many patients with advanced PD and other movement disorders. The success of DBS and other forms of neuromodulation for neuropsychiatric disorders is the result of the ability to modulate circuit activity in discrete functional domains within the basal ganglia circuitry with highly focused interventions, which spare uninvolved areas that are often disrupted with drugs.

[adapted from the paper]
[Fear not. There’s not going to be a test at the end of this post. This is here just to show what a neural circuit looks like.]

When I was in medical school, we all learned about some neural circuits – the ones that were known then: like the motor system [how the cortex sends messages to the muscles] or the visual system [how the sensors in the eye connect to the visual cortex at the back of the brain]. Those two actually are like wiring diagrams, and important for localizing brain lesions [tumors, strokes]. But the only thing I remember knowing about the Movement Disorders like Parkinson’s Disease is that they involved the extrapyramidal [postural] system, which had something to do with the mysterious basal ganglia structures deep in the brain.

But now, the basal ganglia thalamocortical circuits are better characterized [above]. The boxes are brain structures/regions, and the arrows are the ways in which they act on each other to make the system work. These circuits are spatially distant from the voluntary motor system, so they can be manipulated without loss of voluntary function. In this same article, there’s a figure that shows the abnormalities in Parkinson’s Disease, followed by another that shows where the symptoms arise and where treatments act. Now they know enough to successfully treat medication-resistant, disabling symptoms with surgical lesions or deep brain stimulation, putting this basic science to direct clinical use.

So we have a primitive understanding of this particular neural circuit – something like a highway map. It tells how to get from place to place, but not much about what happens in the places the roads connect. There are undoubtedly a myriad of such functional neural circuits with similar nodes, pathways, and feedback loops in our brains just doing their various jobs whether we yet know about them or not. And it’s sure easy to see why neuroscientists of various ilks are so eager to know a lot more about them – and ultimately how to safely tweak them when they sputter and cause dis·ease.
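To make the boxes-and-arrows idea concrete, here’s a toy sketch that treats such a circuit as a signed directed graph. The node names and edge signs follow the standard textbook diagram of the basal ganglia motor circuit [the classic DeLong model], not any figure from this particular paper, and the code is purely illustrative – a highway map in code, nothing more:

```python
# A toy directed-graph model of the classic basal ganglia "motor
# circuit" [DeLong model]. Each edge carries +1 [excitatory,
# glutamatergic] or -1 [inhibitory, GABAergic]. Illustration only,
# not a simulation.

EDGES = {
    ("Cortex", "Striatum"): +1,
    ("Striatum", "GPi"): -1,   # direct pathway
    ("Striatum", "GPe"): -1,   # indirect pathway
    ("GPe", "STN"): -1,
    ("STN", "GPi"): +1,
    ("GPi", "Thalamus"): -1,
    ("Thalamus", "Cortex"): +1,
}

def net_effect(path):
    """Multiply the edge signs along a path: +1 means activity at the
    first node ultimately facilitates the last node, -1 means it
    suppresses it."""
    sign = 1
    for a, b in zip(path, path[1:]):
        sign *= EDGES[(a, b)]
    return sign

# Direct pathway: two inhibitions in series disinhibit the thalamus,
# so cortical activity facilitates movement.
direct = ["Cortex", "Striatum", "GPi", "Thalamus"]
# Indirect pathway: the extra relay through GPe and STN flips the
# sign, suppressing the thalamus instead.
indirect = ["Cortex", "Striatum", "GPe", "STN", "GPi", "Thalamus"]

print(net_effect(direct))    # +1: facilitation
print(net_effect(indirect))  # -1: suppression
```

Even this crude sign-counting captures why two inhibitory synapses in a row amount to disinhibition – the arithmetic behind the "free downstream networks to function more normally" language in the abstract above.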

Are there similar neural circuits whose dysfunction results in mental illness? If so, what would such systems actually control? The recent past Director of the NIMH, Tom Insel, certainly thought so…
British Medical Journal
1 September 2011

… The seismic shift had been driven by what he [Tom Insel] described as three “revolutionary changes” in thinking, the first of which was that mental illness was increasingly being recognised as a disorder of brain circuitry, rather than as a chemical imbalance, thanks to neuroimaging techniques and the discovery of some key biomarkers…

Even in his exuberance, I kind of wish Dr. Insel had said "some mental illnesses might even turn out to be disorders of brain circuitry." But whatever he said isn’t the point. This post is my attempt at an introduction to exploring what I started in weary…, the Decade of “Jumping the Gun”…, and this comment.

Are we ready for the kind of study proposed by Leanne Williams et al reported in weary…? Or are we once again “Jumping the Gun”…?
Mickey @ 1:38 PM