protest too much…

Posted on Thursday 4 February 2016

I want to linger on the response of the Annals of Internal Medicine to the COMPare letter pointing out an instance of a published paper that reported outcomes differing from those in the a priori Protocol. Here’s the letter from COMPare:
Annals of Internal Medicine
by Eirion Slade; Henry Drysdale; Ben Goldacre, on behalf of the COMPare Team
December 22, 2015

TO THE EDITOR: Gepner and colleagues’ article reports outcomes that differ from those initially registered. One prespecified primary outcome was reported incorrectly as a secondary outcome. In addition, the article reports 5 “primary outcomes” and 9 secondary outcomes that were not prespecified without flagging them as such. One prespecified secondary outcome also was not reported anywhere in the article.

Annals of Internal Medicine endorses the CONSORT [Consolidated Standards of Reporting Trials] guidelines on best practice in trial reporting. To reduce the risk for selective outcome reporting, CONSORT includes a commitment that all prespecified primary and secondary outcomes should be reported and that, where new outcomes are reported, it should be made clear that these were added at a later date, and when and why this was done should be explained.

The Centre for Evidence-Based Medicine Outcome Monitoring Project [COMPare] aims to review all trials published going forward in a sample of top journals, including Annals. When outcomes have been incorrectly reported, we are writing letters to correct the record and audit the extent of this problem in the hopes of reducing its prevalence. This trial has been published and is being used to inform decision making, and this comment is a brief correction on a matter of fact obtained by comparing 2 pieces of published literature. We are maintaining a Web site [www.COMPare-Trials.org] where we will post the submission and publication dates of this comment alongside a summary of the data on each trial that we have assessed.

I was surprised by the response from the Annals. The tone is generally defensive and dismissive, sometimes verging on contemptuous. After describing their journal’s own editorial process, they turn to COMPare and its methodology:
Annals of Internal Medicine
by the Editors
December 22, 2015

… According to COMPare’s protocol, abstractors are to look first for a protocol that has been published before a trial’s start date. If they find no such publication, they are supposed to review the initial trial registry data. Thus, COMPare’s review excludes most protocols published after the start of a trial and unpublished protocols or their amendments and ignores amendments or updates to the registry after a trial’s start date. The initial trial registry data, which often include outdated, vague, or erroneous entries, serve as COMPare’s “gold standard.”

Our review indicates problems with COMPare’s methods. For 1 trial, the group apparently considered the protocol published well after data collection ended. However, they did not consider the protocol published 2 years before MacPherson and associates’ primary trial was published. That protocol was more specific in describing the timing of the primary outcome [assessment of neck pain at 12 months] than the registry [assessment of neck pain at 3, 6, and 12 months], yet COMPare deemed the authors’ presentation of the 12-month assessment as primary in the published trial to be “incorrect.” Similarly, the group’s assessment of Gepner and colleagues’ trial included an erroneous assumption about one of the prespecified primary outcomes, glycemic control, which the authors had operationalized differently from the abstractors. Furthermore, the protocol for that trial clearly listed the secondary outcomes that the group deemed as being not prespecified.
They’re chiding COMPare for not digging deep enough. I’ve spent a lot of time chasing around trying to find a priori Protocols and amendments, and it’s a daunting and often impossible task. COMPare is making a plea for that information to be included in the articles and the review process. The authors surely have it immediately at hand. The second paragraph of COMPare’s letter couldn’t be clearer and doesn’t deserve the ‘eye for an eye’ response.
On the basis of our long experience reviewing research articles, we have learned that prespecified outcomes or analytic methods can be suboptimal or wrong. Regardless of prespecification, we sometimes require the published article to improve on the prespecified methods or not emphasize an end point that misrepresents the health effect of an intervention. Although prespecification is important in science, it is not an altar at which to worship. Prespecification can be misused to sanctify both inappropriate end points, such as biomarkers, when actual health outcomes are available and methods that are demonstrably inferior.
Nobody’s arguing with the editors about that point. If there’s one place where the COMPare letter is weak, it’s that it doesn’t spell out the obvious. The a priori Protocol, right or wrong, is the only verifiable piece of evidence around. We can’t trust that the blind was maintained in an industry-funded, CRO-run trial. So if the a priori Outcome Measures have been changed, we need to know what they were and why they were changed so we can make our own decisions about the changes. Invoking "long experience" is no trump card. We readers have "long experience" too [and some of it has been very bad experience].
The Centre for Evidence-Based Medicine Outcome Monitoring Project’s assessments seem to be based on the premise that trials are or can be perfectly designed at the outset, the initial trial registry fully represents the critical aspects of trial conduct, all primary and secondary end points are reported in a single trial publication, and any changes that investigators make to a trial protocol or analytic procedures after the trial start date indicate bad science. In reality, many trial protocols or reports are changed for justifiable reasons: institutional review board recommendations, advances in statistical methods, low event or accrual rates, problems with data collection, and changes requested during peer review. The Centre for Evidence-Based Medicine Outcome Monitoring Project’s rigid evaluations and the labeling of any discrepancies as possible evidence of research misconduct may have the undesired effect of undermining the work of responsible investigators, peer reviewers, and journal editors to improve both the conduct and reporting of science…
The COMPare letter is matter-of-fact, pointing to an unacknowledged discrepancy in an article and suggesting how it should have been mentioned in the published article. I don’t read a charge of ‘research misconduct’ in that letter. But I sure don’t read any great desire in the editors to protect us from it. Why so nasty? Why the comment about undermining editors? One is tempted to say, "thou dost protest too much."