crisis of confidence…

Posted on Monday 15 October 2012

Like many of us, Dr. Brody jumped on the recent study in the New England Journal of Medicine showing that internists were very suspicious of industry-sponsored studies and tended not to prescribe the recommended drugs. And Dr. Brody thought very little of the editors’ comment that we should "Believe the Data."
…the ones that I’ve shown basically tell the following story:

  • Research studies paid for by industry commonly distort findings so as to favor their products.
  • As a rule, the journal reader cannot tell how the results have been distorted. (As the latest entry showed, to find out what was misleading about a study that occupies 7 pages in a journal might require wading through 8500 pages of data.)
  • These distortions occur in all medical journals and if anything are even more prevalent in the top-tier journals. (Probably not because these journals are badly edited, but because it’s so much more of a coup if the company can land their research findings in those top journals.)
Bottom line – if the reader is automatically more skeptical of a study because it’s industry sponsored, there is good reason for that skepticism.
But it’s Brody’s follow-up blog post I wanted to comment on. I feel like I’ve gotten reasonably good at spotting the distortions in the industry-financed articles. But I’m an old retired guy who spends periods during the day looking them over. It’s what I do these days. Sort of a hobby. When I was in practice, it would have been inconceivable for me to have the time or inclination to do that. Back then, I scanned the table of contents, picked things of interest, and scanned the abstracts. I only really dug in when chasing a particular interest or looking for details, rather than looking for lies. I expect that’s what most doctors do. I used to just trust the literature. I read critically, but I wasn’t paranoid like I am now:
More on Sponsorship of Studies in Medical Journals
Hooked: Ethics, Medicine, and Pharma
by Howard Brody MD PhD
October 12, 2012

Again tipping my hat to Rick Bukata and Jerry Hoffman’s Primary Care Medical Abstracts, I come across a study by Drs. Michael Hochman of UCLA and Danny McCormick of Harvard:

These guys looked at several ways of reporting results that overestimate benefits:

  • Reporting relative vs. absolute risks
  • Reporting surrogate endpoints rather than significant changes in health
  • Reporting composite endpoints instead of reporting each endpoint of interest separately
  • Reporting only disease-specific mortality instead of all-cause mortality
If people write in and request it, I’ll explain why each of these overestimates the benefits of drugs.
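To make the first bullet concrete, here is a minimal sketch of why relative risk reporting flatters a drug. The numbers are invented for illustration; they are not figures from the Hochman and McCormick study:

```python
# Hypothetical trial (illustrative numbers only, not from any real study):
# 2% of placebo patients have the bad outcome vs. 1% of treated patients.
control_rate = 0.02   # event rate in the placebo group
drug_rate = 0.01      # event rate in the drug group

# Absolute risk reduction: the drug spares 1 patient in 100.
arr = control_rate - drug_rate

# Relative risk reduction: the same result restated as "cuts risk in half."
rrr = arr / control_rate

# Number needed to treat: 100 patients treated to prevent one event.
nnt = 1 / arr

print(f"Absolute risk reduction: {arr:.1%}")   # 1.0%
print(f"Relative risk reduction: {rrr:.0%}")   # 50%
print(f"Number needed to treat:  {nnt:.0f}")   # 100
```

A headline reading "drug cuts risk by 50%" and one reading "drug helps 1 patient in 100" describe the same trial; only the first sells reprints.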

Drs. Hochman and McCormick then compared how likely it was that a study would report results in these ways, based on who sponsored the study. They found significant differences in two categories. Exclusively industry-sponsored studies were more likely than studies with at least some non-commercial support to report surrogate endpoints (45% vs. 29%) and disease-specific mortality (27% vs. 16%).

Now, if you wanted to defend the NEJM editorial position, you could say that readers of studies can readily see how the data are reported according to these criteria and can be wary of any study, regardless of who funds it, that reports data in the less desirable way. But let’s give the last word to Drs. Hochman and McCormick, in their final recommendations: "These findings highlight the need for educational efforts to ensure that readers understand the complexities of these endpoints and of relative risk reporting. … In addition, Institutional Scientific Review Committees and regulatory agencies (e.g. the FDA) must closely examine the endpoints used in clinical trials and discourage the inappropriate use of surrogate and composite endpoints, and endpoints involving disease-specific mortality. Finally, medical journals may consider instituting editorial policies mandating the reporting of results in absolute numbers."

In other words, rather than asking readers to sift through whether the results are reported in a useful and valid fashion, medical journals like NEJM could simply refuse to publish papers that don’t adhere to the highest standards. Of course, if they did, they might lose revenue as drug companies would not buy so many expensive reprints of papers that are really useful for marketing drugs – which may be one of the roots of the problem.

I know journals have to support themselves. But I for one would pay more for thinner journals if the editors would act as Dr. Brody suggests in his last paragraph. One thing I can see after the last several years of reading these journals more thoroughly: there’s still a big problem, but I think they’re getting better. Still, Dr. Brody’s point should move up to the top of our to-do list. Journals that truly vetted their articles – particularly the industry-financed articles – and weren’t afraid to retract the ones that slip through the cracks would make a huge dent in the current crisis of confidence about our literature. Physicians have the right to a truthful literature and the duty to insist on it…
  1. Steve Lucas
     October 16, 2012 | 11:08 AM

    I was recently caught in this trap. A major medical journal published a piece that highlighted a personal bias, and the short read reinforced my belief in the article.

    Ooops.

    A doctor pointed out the weakness of the data and the COIs of the authors, and I was left wondering how I could have made such a mistake. The failings of the study were not beyond my understanding. The COIs were not something I would have thought of, given there was no direct relationship between the authors and the nature of the study.

    The question that was raised was: What are the long-term goals of the article?

    I am not a doctor, but I offer no excuses for not looking into the study and the authors’ past business relationships. I do understand how easily a doctor could fall into this trap.

    This was both a teaching and learning moment.

    Steve Lucas
