by Dan L. Longo and Jeffrey M. Drazen. New England Journal of Medicine. 2016; 374:276-277.
The aerial view of the concept of data sharing is beautiful. What could be better than having high-quality information carefully reexamined for the possibility that new nuggets of useful data are lying there, previously unseen? The potential for leveraging existing results for even more benefit pays appropriate increased tribute to the patients who put themselves at risk to generate the data. The moral imperative to honor their collective sacrifice is the trump card that takes this trick.
But in this first paragraph, Drazen also sets the stage for another agenda – one heavily promoted by the pharmaceutical industry. When the clamor about distorted clinical trial reports reached a pitch that could no longer be ignored, the industry reframed the real intent of the move for Data Transparency. Instead of acknowledging it as a reform meant to allow independent reanalysis to keep them honest [because they hadn’t been], they spoke of it as Data Sharing for the reasons Drazen presents in his opening gambit – a generous act in the service of scientific progress.
However, many of us who have actually conducted clinical research, managed clinical studies and data collection and analysis, and curated data sets have concerns about the details. The first concern is that someone not involved in the generation and collection of the data may not understand the choices made in defining the parameters. Special problems arise if data are to be combined from independent studies and considered comparable. How heterogeneous were the study populations? Were the eligibility criteria the same? Can it be assumed that the differences in study populations, data collection and analysis, and treatments, both protocol-specified and unspecified, can be ignored?
A second concern held by some is that a new class of research person will emerge — people who had nothing to do with the design and execution of the study but use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited. There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “research parasites”…
How would data sharing work best? We think it should happen symbiotically, not parasitically. Start with a novel idea, one that is not an obvious extension of the reported work. Second, identify potential collaborators whose collected data may be useful in assessing the hypothesis and propose a collaboration. Third, work together to test the new hypothesis. Fourth, report the new findings with relevant coauthorship to acknowledge both the group that proposed the new idea and the investigative group that accrued the data that allowed it to be tested. What is learned may be beautiful even when seen from close up.
I don’t personally see running industry-funded Phase III Clinical Trials as Research; I think of it as Product Testing. There’s an enormous financial coloring to the whole enterprise, with billions of dollars riding on the outcome of Clinical Trials that say yes or no to the investment put into any given drug. But the trials are primarily about the safety and efficacy of the drugs themselves, not about the financial health and fortunes of the company that developed them, nor the academic departments and faculty that involve themselves in this process. There’s an epithet coined to describe people who are skeptical about clinical trials – pharmascolds – implying that they are biased against all drugs. Such people exist for sure, but I’m not one of them, nor are most of us who look into the data from Sponsored drug trials. We’re physicians and science-minded others who don’t like being gamed by our own scientific literature, because that gaming deprives us of a vital source of information about how to treat our patients.
Frankly, I’m a reluctant parasite. I’ve had to revive skills from a previous career here in my retirement, and some other plans were pushed to the side in order to do that. But I think it’s vitally important for the medical·scientific community to have watchdogs, particularly in today’s climate. Certainly the scientific literature in psychiatry for the last twenty-plus years begs for serious oversight. Our group’s work was unfunded and difficult [in part because of the way we had to access the data]. Our paper was extensively reviewed and only accepted after the seventh submission, though in a way, the thorough and comprehensive nature of the peer review was confirming [if only that original paper had been subjected to that kind of rigor…].
Just a couple of comments from the Research Parasite (@DataParasite) — Twitter’s newest star, born out of the Twitterstorm of protest over NEJM’s defense of proprietary “science”:
https://twitter.com/dataparasite/status/692355670344126464
https://twitter.com/dataparasite/status/692722291118100482
“The first concern is that someone not involved in the generation and collection of the data may not understand the choices made in defining the parameters.”
I don’t think it’s taking an idea out of context to expect a sentence like that to stand on its own. And it doesn’t.
The whole idea of an experiment is that it can be understood – that it can be replicated, yes?
But if no one can understand a research design besides the authors, that would suggest what?
A) There’s a problem with the research design. Happens to everybody, right? That’s why scientists (honest ones, anyway) invite others to examine data and try to replicate their work.
Or…
B) The researchers are… misunderstood! Oh, so misunderstood!
For some reason, I think of the National Lampoon parody of John Lennon: “Genius is pain!”
I’m a… Front Line researcher! You just… don’t understand! *sob*
That’s a great way to look at it – one person’s research parasite is another’s watchdog. I had the same thought that you articulated so well in your last paragraph about who the parasites are – they impact not only science but society in so many ways. But enough preaching to the choir. Was wondering if you saw the comparable wisdom passed down from the Annals of Internal Medicine editors: “On the basis of our long experience reviewing research articles, we have learned that prespecified outcomes or analytic methods can be suboptimal or wrong.”
After I unstuck my eyeroll, I wrote a blog post for Health News Review on that piece & that latest offering from NEJM: http://www.healthnewsreview.org/2016/01/top-journal-editors-resist-transparency/
What he is proposing is essentially religion. We won’t let you examine that old cloth or let you see our data on it, because you might come to the conclusion that it is a medieval forgery, and we know it to be of miraculous origin. You can trust that we have done a thorough analysis.
Longo and Drazen concluded: “There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as ‘research parasites.’”
How many is “some”? The authors should post their own data for analysis and interpretation, since “anecdotes are not evidence.”