selective inattention…

Posted on Tuesday 2 February 2016

American psychiatrist Harry Stack Sullivan balked at the term "Unconscious," preferring "Selective Inattention" to describe realities that people simply omit. It's a particularly apt term for some recent commentaries appearing in our medical literature. In notes from a reluctant parasite…, I mentioned Dr. Jeffrey Drazen's editorial and the subsequent series by his reporter in the New England Journal last summer suggesting that the journal lift its ban on experts with Conflicts of Interest writing editorials and review articles:
by Jeffrey M. Drazen, M.D.
New England Journal of Medicine. 2015; 372:1853-1854.
by Lisa Rosenbaum, M.D.
New England Journal of Medicine. 2015; 372:1860-1864.
by Lisa Rosenbaum, M.D.
New England Journal of Medicine. 2015; 372:1959-1963.
by Lisa Rosenbaum, M.D.
New England Journal of Medicine. 2015; 372:2064-2068.

In order to make that argument, they have to ignore the numerous examples of "experts" using review articles to push some product they had personal connections with – one of the more egregious instances being Dr. Nemeroff et al's review of the vagal nerve stimulator in depression [VNS Therapy in Treatment-Resistant Depression: Clinical Evidence and Putative Neurobiological Mechanisms] that cost him his editorship at Neuropsychopharmacology. That was an unacknowledged COI, but there are many other examples to choose from where acknowledgement doesn't mitigate the glaring bias.

Now Dr. Drazen has this other piece suggesting that people who want to reanalyze questioned studies are "data parasites," saprophytes feeding off of the carrion of other researchers' work. In that formulation, he has to selectively ignore the numerous examples of distorted clinical trial reports that all but beg for a thorough re-examination, and the much more likely motive of a person vetting such an article – to expose distortions:
by Dan L. Longo, and Jeffrey M. Drazen
New England Journal of Medicine. 2016; 374:276-277.
Then there was the Viewpoint article in JAMA this Fall by [Associate Editor] Anne Cappola and colleague Garret FitzGerald that exercised the same kind of Selective Inattention [Confluence, not conflict of interest: Name change necessary]. They direct a Translational Institute at the University of Pennsylvania and seem worried that the focus on Conflicts of Interest might intrude on the dreams of the Translationists [my term]. They propose that reframing things with a name change [Conflicts of Interest to Confluence of Interest] might make things go better:
by Anne Cappola and Garret FitzGerald
JAMA. 2015; 314[17]:1791-1792.

… Confluence of interest represents a complex ecosystem that requires development of a uniform approach to minimize bias in clinical research across the academic sector. Such a policy must be at once simple and accessible, capturing the complexity of the relationships while being sufficiently flexible at the individual level not to intrude on the process of innovation.
In order to suggest this naive grammatical solution, they have to have their Selective Inattention motor running full throttle [the elephant in the room comes to mind]. In Dr. Bernard Carroll’s words:
Health Care Renewal
by Bernard Carroll
January 24, 2016

"… the authors, presuming to speak for investigators generally, were offended by the increasing regulations for managing COI. Those developments have occurred at the Federal, institutional, and publication levels. Worse, the authors ignored the reality of recent corruption that led to those new regulations. That uncomfortable fact was airbrushed out of their discussion."
And the authors fail to notice that some of us think their whole notion of Translational Medicine itself is an elaborate version of the same kind of ruse – that same wolf in sheep’s clothing hiding behind lofty rhetoric [like in this very article]. Which brings me to Susan Molchan’s blog post on HEALTHNEWSREVIEW.
HEALTHNEWSREVIEW
by Susan Molchan
January 25, 2016

It’s difficult to make a case for hiding or obscuring information about health and the medicines we take, but it seems the editors of two top medical journals are doing just that. The decisions of these editors substantially affect the quality of medical research studies reported, what public relations officials communicate about those studies, and what news stories eventually say about the research to patients and the public…
I’m currently trying to escape the fog one gets into after spending too much time scrolling through endless columns of figures, so I wanted to write this article about the Selective Inattention of pundits in high places who have to overlook the loud and the obvious to press their own agendas. I had the Drazen articles and Cappola’s and FitzGerald’s JAMA Viewpoint piece in hand. I wanted to add another example, but I couldn’t find the one I was looking for. Then <ping>, my computer announced incoming. It was a comment on my last blog post by Susan Molchan. Not only did it point me to her excellent blog, which pre-empted the post I was writing [the one you’re reading right now], it also included the very piece I was looking for. But first I need to back up a bit and talk about Ben Goldacre’s COMPare Project.

Ben and a cadre of trainees are taking advantage of some of the data access afforded by the European Medicines Agency [EMA] and gathering the a priori Protocols from a number of Clinical Trials. Then they’re running down the published papers and comparing the Protocol-defined outcome variables with what the articles actually report – finding all kinds of discrepancies. They call it Outcome Switching. Then they’re taking it a step further by contacting the journals and asking the obvious questions – Did they notice? What might they do about that? It’s a great idea [and right in the middle of why I’m looking at the non-protocol variables introduced into Keller et al’s 2001 paper on Paxil Study 329]. There’s a nice summary of Ben’s Project on Retraction Watch [Did a clinical trial proceed as planned? New project finds out]. The other article I was looking for was a letter from an Annals of Internal Medicine editor in response to COMPare’s query about one of their published articles:
Annals of Internal Medicine
December 22, 2015

… The Centre for Evidence-Based Medicine Outcome Monitoring Project’s assessments seem to be based on the premise that trials are or can be perfectly designed at the outset, the initial trial registry fully represents the critical aspects of trial conduct, all primary and secondary end points are reported in a single trial publication, and any changes that investigators make to a trial protocol or analytic procedures after the trial start date indicate bad science. In reality, many trial protocols or reports are changed for justifiable reasons: institutional review board recommendations, advances in statistical methods, low event or accrual rates, problems with data collection, and changes requested during peer review. The Centre for Evidence-Based Medicine Outcome Monitoring Project’s rigid evaluations and the labeling of any discrepancies as possible evidence of research misconduct may have the undesired effect of undermining the work of responsible investigators, peer reviewers, and journal editors to improve both the conduct and reporting of science.
Selective Inattention? You betcha! I don’t doubt that he’s right that investigators may frequently misjudge in their a priori predictions. But he is selectively inattentive to the very obvious problem that the a priori protocol is the only concrete evidence we have that the authors didn’t go fishing after the fact with a computer to find the variable whose outcome fit their needs. We obviously can’t trust the blinding, as it’s controlled by the Sponsor and their contracted CRO. This is a very high-stakes game, and the principals aren’t boy scouts. The authors are free to mention that they are reporting non-protocol-defined variables, but that status needs to be crystal clear. And it usually isn’t – thus COMPare. Outcome Switching was in the center ring of our Paxil Study 329 analysis, but we didn’t yet have that general name for it.

Harry Stack Sullivan was from the days before psychopharmacology, and he was nobody’s fool. He was objecting to the "un" in Freud’s "unconscious." Things people either don’t or don’t want to think about don’t just go away. The mind becomes selectively inattentive, but it shows. They get some kind of fidgety when the unwanted mental content is nearby. They may start doing odd things that gamblers call "tells." If the wires are hooked up to a polygraph, the needles on the graph begin to wiggle. They may change the subject, or ask why you’re asking, or get hostile, or be defensive, or maybe sarcastic, or go silent. There’s a subtle disturbance in the force. Such things don’t tell you what’s being selectively unattended – only that you’re in the neighborhood of something that matters.

I reckon editors are no different. They get their version of fidgety – dismissive, into expert mode, sarcastic, silent, making bizarre and forced arguments – all the things other people do when one gets into an area where, for a myriad of reasons, you’re confronting something that pokes holes in the status quo. In this case, they are dancing around in order not to have to see that there has been a massive intrusion of unscientific interests into our science-based world, and that to address it we’re going to have to change our system in some fairly fundamental ways. And the people who are gaining something from the system as it stands are going to lose some things they don’t want to lose. But that’s just the way of things. As they say, "Don’t do the crime if you can’t do the time." I recommend reading Susan Molchan’s blog, Bernard Carroll’s blog, and anything Ben Goldacre has to say about COMPare. In differing ways, they’re all calling our attention to the same very important thing: deceit…
