basic efficacy table – it’s what’s missing that matters…

Posted on Thursday 26 January 2017

I’ll be the first to admit that I’ve been pretty muddled these last several months – since, oh, say, around 10:00 PM on November 8th, 2016, to be more precise. I’ve tried out any number of defense mechanisms to tell myself that I’m doing just fine. But to be honest, this election sort of took the wind out of my sails, and I expect others of you may have experienced something similar. So I’m now sure that fine isn’t the best word to describe how I’m doing with all of this. I’ve got nothing to say about the election that you haven’t already thought yourself, except to re-emphasize that neither denial nor rationalization is much help on either side of this coin.

I don’t know if it’s apparent, but these five recent posts are a series, swirling around the same thoughts. It’s my attempt to pick up the thread of where I was in the Fall, and get back some semblance of my pre·November·2016 mindset:

  1. here’s Linus…
  2. show me the damn code numbers!…
  3. the commercial strangle-hold
  4. the basic efficacy table I…
  5. the basic efficacy table II…

We’ve all been focused on Randomized Controlled Trials [RCTs], for obvious reasons. They seem to be the basis for almost everything. Given how much distortion and artifice there is in the Journal articles generated from these studies, there’s been a vigorous effort to gain access to the raw data – data kept secret as proprietary property by the Sponsors of these trials. Some of the Pharmaceutical companies have set up mechanisms to grant access to their data by application – Data Sharing. Once approved, you can view and work on the data on a remote desktop [which is hard work]. It hasn’t been used much. The companies are complaining [Data Sharing — Is the Juice Worth the Squeeze?].

I was on an international team that applied for and analyzed an RCT from 2001 [Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence]. It was a herculean task – rewarding, but labor intensive and often frustrating [see https://study329.org/]. While I learned a lot, there aren’t going to be many such studies under the current arrangement, because each one requires so much effort, time, and expertise. And so full Data Transparency [which I’ve supported on this blog] is a solution for special cases, but not for everyday use. It would be infinitely easier if the data were in the public domain, not captive in the remote desktop interface and subject to so many restrictions, but the pharmaceutical companies have dug their heels deep into the pavement. They’re going to any length to hold onto it.

There’s something else. These short term RCTs are done for New Drug Approvals [NDAs] or New Indication Approvals, and that may be fine for that kind of initial testing – maybe the only practical way to clear the FDA’s opening hurdle. But as with the SSRIs or the COX-2 inhibitors [Vioxx®, Celebrex®], people may take these drugs for months, or even years, rather than just a few weeks. And that’s what got me thinking about the "prototypical nerd," Linus Torvalds [here’s Linus…]. He didn’t set out to challenge the commercial monopoly, but challenge it he did. And the reason I say Linux will ultimately come out on top is that its development is driven by the science of computing, not the profits of the enterprise.

There are some things we need to do to clean up the woefully unregulated Clinical Trial problem, sure enough. But salvation depends on our developing a system that persists past approval. The data is there. We just haven’t figured out how to harvest it and create something that grows, instead of a system that seems stuck on first base. Short term trials don’t even need a journal article: for efficacy, just a simple basic efficacy table; for adverse events, a truthful compilation – box scores. After all, it’s just product testing. It’s what’s missing that matters…
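
To make that concrete, here’s a minimal sketch of what such a table reduces to. This is my own illustration, not the format from the earlier basic efficacy table posts – the counts, arm names, and helper function are all invented for the example:

```python
# A minimal sketch of a "basic efficacy table" computed from the box scores
# of a hypothetical short-term trial. Every count below is invented for
# illustration; nothing here comes from an actual study.

def arm_summary(name, n, responders):
    """Summarize one treatment arm: sample size, responders, response rate."""
    return {"arm": name, "n": n, "responders": responders,
            "rate": responders / n}

drug = arm_summary("drug", n=100, responders=55)        # hypothetical counts
placebo = arm_summary("placebo", n=100, responders=40)  # hypothetical counts

# The headline numbers fall straight out of the counts:
risk_difference = drug["rate"] - placebo["rate"]
nnt = 1 / risk_difference  # Number Needed to Treat

print(f"{'arm':<10}{'n':>6}{'responders':>12}{'rate':>8}")
for row in (drug, placebo):
    print(f"{row['arm']:<10}{row['n']:>6}{row['responders']:>12}{row['rate']:>8.2f}")
print(f"risk difference = {risk_difference:.2f}   NNT = {nnt:.1f}")
```

The point isn’t the code, it’s how little there is to it: a handful of counts per arm and a line of arithmetic say everything a short-term efficacy report needs to say, with no narrative to spin.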
