When our group assembled to do our RIAT analysis of Paxil Study 329, we already had access to a wealth of raw data from that clinical trial thanks to the hard work of many other people who came before us. So we had the a priori Protocol, the Statistical Analysis Plan, the CSR [Clinical Study Report], and the IPD [Individual Participant Data] – all available in the Public Domain as the result of various Legal Settlements and Court Orders. The only thing we didn’t have was the CRFs [Case Report Forms] – the actual raw forms the raters used to record their observations during the study. But we felt that we needed them too. We had good reason to question the system originally used to code the Adverse Events, and felt it was important to redo that part from scratch using a more modern and widely validated system.
About that time, the European Medicines Agency [EMA] had announced that it was going to release all of its regulatory data. AllTrials was pressing for "all trials registered, all trials reported." I was researching on what authority the data was being kept proprietary in the first place, and finding nothing much except convention and inertia. What was being called Data Transparency was in the air, and it was an exciting prospect.
And then the pharmaceutical companies seemed to do a turnabout. GSK had just been hit with a $3 billion fine, in part over Study 329, and they were one of the first to sign on to AllTrials. But as things developed, what they offered was something different from what a lot of us really wanted, at least what I wanted. By that time, I wasn’t a rookie any more and I’d vetted a number of industry-funded, ghost-written psychopharmacology drug trials turned into journal articles. I can’t recall a single one of them that was totally straight. So I wanted to see what the drug company saw – the a priori Protocol and Statistical Analysis Plan, the IPD, and the CRFs – the raw data. And the reason wasn’t to do any new research. It was to check their reported work, to do it right by the book, to stop the cheating.
And so with much fanfare, what the drug companies rolled out was something else – Data Sharing. They pretended that what we wanted was access to their raw data so we could do new research of our own – and that they were being real mensches to let us see it. They set up independent boards to evaluate proposals for projects. If we passed muster, we could have access via a remote desktop – meaning we couldn’t download the data. We could only see it online. All we could download were our results, if approved. In this scenario, they were generously sharing the data with us, avoiding duplication and wastage or some such, and the remote access portal protected the subjects’ privacy. They maintained control and ownership. What we wanted was Data Transparency to keep them honest, to stop them from publishing these fictional photoshopped articles, to stop the cheating.
So our RIAT group submitted a request to their panel, and when they asked for a proposal, we didn’t make one up. We played it straight and told them why. After some back and forth, we submitted the Protocol from the original Study 329, and to their credit, they granted our request. The remote access system actually worked, but working inside of it was a complete nightmare [we called it "the periscope"]. The CRFs came to around 50,000 pages, and we could only look at them one page at a time! But that’s another story and it’s available in detail at https://study329.org/. The point for this post is that the call for Data Transparency got turned into something very different – Data Sharing. That’s called "SPIN." Instead of being on the hot-seat for having published so many distorted clinical trial reports – carefully crafted by professional ghost-writers – they portrayed themselves as heroes, generously allowing outsiders to use their data for independent research. Sleight of hand extraordinaire!
by Dan L. Longo and Jeffrey M. Drazen
New England Journal of Medicine. 2016; 374:276-277.

The aerial view of the concept of data sharing is beautiful. What could be better than having high-quality information carefully reexamined for the possibility that new nuggets of useful data are lying there, previously unseen? The potential for leveraging existing results for even more benefit pays appropriate increased tribute to the patients who put themselves at risk to generate the data. The moral imperative to honor their collective sacrifice is the trump card that takes this trick…
A second concern held by some is that a new class of research person will emerge — people who had nothing to do with the design and execution of the study but use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited. There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “research parasites”…