notes from a reluctant parasite…

Posted on Monday 1 February 2016

It was something of an irony to be immersed in trying to make sense out of someone else’s study [the reason I stopped blogging for a while], and then to read that I was a member of a new class of researchers – "data parasites." Jeffrey Drazen, editor of the New England Journal of Medicine, didn’t win me over with his proposal that the NEJM drop its policy of prohibiting experts with Conflicts of Interest from writing review articles and editorials [see a snail’s pace…]. And, in a way, this new editorial is a continuation of that same theme.
by Dan L. Longo and Jeffrey M. Drazen
New England Journal of Medicine. 2016; 374:276-277.

The aerial view of the concept of data sharing is beautiful. What could be better than having high-quality information carefully reexamined for the possibility that new nuggets of useful data are lying there, previously unseen? The potential for leveraging existing results for even more benefit pays appropriate increased tribute to the patients who put themselves at risk to generate the data. The moral imperative to honor their collective sacrifice is the trump card that takes this trick.
Data Sharing, as Drazen presents it here, is a sensible and time-honored idea – using a dataset collected for one reason to uncover previously unseen insights that may be far from the original intent of the study. Darwin’s Finches come to mind. Darwin returned from the Galápagos Islands with a sack full of birds. But it was only when the birds were reexamined by ornithologist John Gould that their variability was noted, putting Darwin on the track that led him to his concept of Natural Selection.

But in this first paragraph, Drazen also sets the stage for another agenda – one heavily promoted by the pharmaceutical industry. When the clamor about distorted clinical trial reports reached a pitch that could no longer be ignored, they reframed the real intent of the move for Data Transparency. Instead of that being a reform move to allow independent reanalysis to keep them honest [because they hadn’t been], they spoke of it as Data Sharing for the reasons Drazen presents in his opening gambit – a generous act in the service of scientific progress.

And in his second paragraph, he’s going to venerate the academic investigators’ role in these Clinical Trials. Perhaps his description is accurate in some instances, but it certainly doesn’t fit the industry-funded and administered studies I’ve looked at. The studies are run and analyzed by the industrial Sponsors and Contract Research Organizations [CROs], written by medical writing firms, and the academic authors are more consultants than "researchers" [and tickets into the pages of prestigious journals]. While my cynical version may not be universally justified, it’s way common enough to be a glaring omission from Drazen’s narrative.
However, many of us who have actually conducted clinical research, managed clinical studies and data collection and analysis, and curated data sets have concerns about the details. The first concern is that someone not involved in the generation and collection of the data may not understand the choices made in defining the parameters. Special problems arise if data are to be combined from independent studies and considered comparable. How heterogeneous were the study populations? Were the eligibility criteria the same? Can it be assumed that the differences in study populations, data collection and analysis, and treatments, both protocol-specified and unspecified, can be ignored?
The cat’s now out of the bag. It’s people like me and uncounted others that he’s after – people whose motive is to look for misconduct disguised as science – or perhaps people like the volunteers with the Cochrane Collaboration who do extensive structured reviews and meta-analyses aiming for a more accurate assessment of the data. So now Dr. Drazen turns to something of a global ad hominem argument, an indictment of the motives of such people. It’s in the form of the saying, "People who can’t do, teach":
A second concern held by some is that a new class of research person will emerge — people who had nothing to do with the design and execution of the study but use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited. There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “research parasites”…
And so for Dr. Drazen, people who want to reanalyze the data from published studies are creepy hangers-on, contrarians. I’m obviously not in love with that formulation. He leaves out the possibility of another, more likely motivation – that we’re checkers, people who believe that a lot of the scientific drug-trial literature is written [and often distorted] for commercial gain rather than medical understanding. We’ve been brought to that conclusion honestly, and Dr. Drazen’s summary dismissal of that possibility – he doesn’t even mention it – is a telling indictment of him and his own motives.

After giving an example of successful Data Sharing, he concludes:
How would data sharing work best? We think it should happen symbiotically, not parasitically. Start with a novel idea, one that is not an obvious extension of the reported work. Second, identify potential collaborators whose collected data may be useful in assessing the hypothesis and propose a collaboration. Third, work together to test the new hypothesis. Fourth, report the new findings with relevant coauthorship to acknowledge both the group that proposed the new idea and the investigative group that accrued the data that allowed it to be tested. What is learned may be beautiful even when seen from close up.
Our group was one of the first to apply for data access under the new avenues provided by the drug manufacturers [Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence]. GSK insisted on a proposal, something along the lines of Data Sharing. While it was tempting to make up something, the truth was that we wanted the data from Paxil Study 329 because we didn’t believe the published analysis [Efficacy of Paroxetine in the Treatment of Adolescent Major Depression: A Randomized, Controlled Trial]. So, instead of inventing a reason, we simply submitted the original Protocol. And to GSK’s credit, they gave us access [after a long negotiation]. We already had the Clinical Study Reports [CSRs] and the Individual Participant Data [IPD], as GSK had been forced to publish them by a court settlement. But we couldn’t really do an adequate evaluation of harms without the Case Report Forms [CRFs]. We weren’t looking for something new, and our dealings were all with the pharmaceutical companies, not the twenty-two authors, who never responded to us.

I don’t personally see running industry-funded Phase III Clinical Trials as Research; I think of it as Product Testing. There’s an enormous financial coloring to the whole enterprise – billions of dollars riding on the outcome of some of these Clinical Trials that say yes or no to the investment put into any given drug. But the trials are primarily about the safety and efficacy of the drugs themselves, not about the financial health and fortunes of the company that developed them, nor the academic departments and faculty that involve themselves in this process. There’s an epithet coined to describe people who are skeptical about clinical trials – pharmascolds – implying that they are biased against all drugs. Such people exist for sure, but I’m not one of them, nor are most of us who look into the data from Sponsored drug trials. We’re physicians and science-minded others who don’t like being gamed by our own scientific literature, depriving us of a vital source of information about how to treat our patients.

Frankly, I’m a reluctant parasite. I’ve had to revive skills from a previous career here in my retirement. I had some other plans that were pushed to the side in order to do that. But I think it’s vitally important for the medical·scientific community to have watchdogs, particularly in today’s climate. Certainly the scientific literature in psychiatry for the last twenty-plus years begs for serious oversight. Our group’s work was unfunded and difficult [in part because of the way we had to access the data]. Our paper was extensively reviewed and only accepted after the seventh submission, though in a way, the thorough and comprehensive nature of the peer review was confirming [if only that original paper had been subjected to that kind of rigor…].

Dr. Drazen’s editorial makes the assumption that the "front-line researchers" are a gold standard, and that their work is being inappropriately attacked. I could easily mount an argument that there are many among that group who are the real data parasites, capitalizing on their academic positions to sign on to jury-rigged, ghost-written articles that they often had little to do with producing. And I question Dr. Drazen’s motives in ignoring the corruption and misbehavior that has made up a sizeable portion of the commercially sponsored clinical trial reporting currently flooding the landscape of our academic literature. An old saying often heard in my childhood seems appropriate: "I don’t mind your peeing in my boot, but don’t tell me it’s water"…
  1.  
    Johanna
    February 1, 2016 | 7:47 PM

    Just a couple of comments from the Research Parasite (@DataParasite) — Twitter’s newest star, born out of the Twitterstorm of protest over NEJM’s defense of proprietary “science”:

    https://twitter.com/dataparasite/status/692355670344126464

    https://twitter.com/dataparasite/status/692722291118100482

  2.  
    Catalyzt
    February 1, 2016 | 9:06 PM

    “The first concern is that someone not involved in the generation and collection of the data may not understand the choices made in defining the parameters.”

    I don’t think it’s taking an idea out of context to expect a sentence like that to stand on its own. And it doesn’t.

    The whole idea of an experiment is that it can be understood– that it can be replicated, yes?

    But if no one can understand a research design besides the authors, that would suggest what?

    A) There’s a problem with the research design. Happens to everybody, right? That’s why scientists (honest ones, anyway) invite others to examine data and try to replicate their work.

    Or…

    B) The researchers are… misunderstood! Oh, so misunderstood!

    For some reason, I think of the National Lampoon parody of John Lennon: “Genius is pain!”

    I’m a… Front Line researcher! You just… don’t understand! *sob*

  3.  
    Susan Molchan
    February 1, 2016 | 9:12 PM

    That’s a great way to look at it – one person’s research parasite is another’s watchdog. I had the same thought that you articulated so well in your last paragraph about who the parasites are – they impact not only science but society in so many ways. But enough w the choir. Was wondering if you saw the comparable wisdom passed down from Annals IM editors: “On the basis of our long experience reviewing research articles, we have learned that prespecified outcomes or analytic methods can be suboptimal or wrong.”
    After I unstuck my eyeroll I wrote a blog for Health News Review on that piece & that latest offering from NEJM: http://www.healthnewsreview.org/2016/01/top-journal-editors-resist-transparency/

  4.  
    Joseph Arpaia
    February 1, 2016 | 10:49 PM

    What he is proposing is essentially religion. We won’t let you examine that old cloth or let you see our data on it because you might come to the conclusion that it is a medieval forgery, and we know it to be of miraculous origin. You can trust that we have done a thorough analysis.

  5.  
    Mark Hochhauser, PhD
    February 2, 2016 | 7:17 AM

    Longo and Drazen concluded that: “There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as ‘research parasites.’”

    How many is “some”? The authors should post their own data for analysis and interpretation, since “anecdotes are not evidence.”
