still recalculating…

Posted on Wednesday 6 April 2011

Picking up where I left off with STAR*D in my last post [recalculating…], let’s summarize where Dr. Pigott’s findings have brought us so far:
    • they changed the subjects enrolled for analysis:
    931 of the 4041 subjects had not met the enrollment requirements [607 did not have HRSD scores high enough to qualify and 324 did not have the HRSD administered at all]. A further 370 subjects were eliminated from the Level 1 calculation because they had no post-baseline visit. They had, however, been started on medication and by protocol should’ve been included.
    • they changed the primary outcome measure:
    The protocol defined the outcome measures as the HRSD [Hamilton Rating Scale for Depression] and the IDS [Inventory of Depressive Symptomatology], both Clinician-Rated. Instead they used the QIDS [Quick Inventory of Depressive Symptomatology], a Self-Rated scale they had developed themselves – a scale that they had said was only going to be used for clinician decision-making along the way [unblinded].
    • they obfuscated the remission data in their report:
    The Report presented the remission rates at the end of the year’s follow-up in an unintelligible manner. Although they reported a "theoretical" remission rate of 67%, only 108 subjects who stayed in the study actually had a sustained remission at one year.
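The gap between those two numbers is easy to check with simple arithmetic. A minimal sketch, assuming the figures above from Pigott's re-analysis [4041 enrolled subjects, 108 with a sustained remission at one year] and the Report's "theoretical" 67%:

```python
# Sanity-check the remission arithmetic, using the figures cited from
# Pigott's re-analysis: 4041 patients entered STAR*D, and 108 of them
# remitted and stayed well through the one-year follow-up.
enrolled = 4041
sustained_remitters = 108

# The sustained remission rate, as a percentage of everyone who entered.
sustained_rate = 100 * sustained_remitters / enrolled
print(f"Sustained remission at one year: {sustained_rate:.1f}%")  # 2.7%

# Compared with the Report's "theoretical" cumulative remission rate.
reported_rate = 67.0
print(f"Gap: {reported_rate - sustained_rate:.1f} percentage points")
```

That 2.7% figure is the same "non-relapsing endpoint" referred to in the comments below.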
I have a reason for summarizing and posting Pigott’s findings repeatedly and in more detail than usual. His analysis raises more than one question. We can already see protocol violations that don’t meet even the minimal requirements for scientific research, particularly at this level. So were they covering up for a poorly administered study with way too much missing data? Had their inclusive design somehow created a situation where so many people dropped out that the results were virtually unusable? Is Pigott just being an overly-zealous protocol-cop with a real-world study involving 4000 subjects at 41 centers? Those are interesting questions, but there’s a much bigger question that matters most. Did these STAR*D authors jury-rig the outcome to fit their preconceived conclusions, even though the results did not confirm them? I think they did. That’s why I took a breather – to read the articles just one more time to be sure that’s what I think – and it is.

Some of the reasons I think that STAR*D was presented in a deliberately misleading way have already been covered. They made all those protocol changes after the results were unblinded [a big no-no]. They specifically didn’t tell us that the 370 people they dropped had already been started on Level 1 treatment. They gave us a specific rationale for changing outcome measures in mid-stream which may or may not have contained some truth, but that rationale had a great big lie right in the middle of it. They presented the response, remission, and relapse data in a way that could not be deciphered, instead of showing it to us in a simple table [from Pigott]:

There was more, a lot more, in Pigott’s paper. That’s why I suggest you take a look at it on your own. These are just a few pieces from the mound of evidence. Every infraction, every change, every omission, even down to how they rounded off their numbers, moves the results in the same direction – towards increasing the apparent efficacy of their antidepressant treatment. In spite of squeezing out some seventy published papers, they’ve never published the outcome data pre-defined by their protocol – the before and after HRSD or IDS scores at the different levels. Pigott also found evidence of bias in the American Journal of Psychiatry‘s handling of the study, and at the NIMH as well.

I even began to wonder why Pigott’s papers contained so much carefully presented information, even understated at times. Then I found myself doing the same thing – wanting to be very clear and not open to some kind of counter-criticism of bias [or ranting]. Even Robert Whitaker, someone who is out front with his conviction that the current medication practices are outrageous [Anatomy of an Epidemic: Magic Bullets, Psychiatric Drugs, and the Astonishing Rise of Mental Illness in America] states his conclusions about STAR*D carefully [The STAR*D Scandal: A New Paper Sums It All Up]. At the end, Whitaker says:
    In her article, Medscape Medical News writer Deborah Brauser asked STAR*D investigator Maurizio Fava, who is a prominent psychiatrist from Massachusetts General Hospital, whether the published analysis by Pigott and his collaborators was correct. "I think their analysis is reasonable and not incompatible with what we had reported," he said.

    His answer is revealing for two reasons. First, he is acknowledging that the low remission and stay-well rates reported by the Pigott group are accurate. Those are indeed the real results. Second, he is acknowledging that the STAR*D investigators knew this all along, and that, in fact, this information was in their published reports. And in a sense, that is true. If you dug through all of the published articles, and spent weeks and months reading the text carefully and intently studying all the data charts, then maybe, at long last, you could — like Pigott’s group — ferret out the real results.

    But that is not the way that honest science is supposed to work.

There’s another way to say that. STAR*D is grossly dishonest science! And dishonest science at its worst because it carries the seal of approval from some of our most prestigious institutions, misinforms practicing physicians, and affects the lives of a very large cohort of our patients. I thank the Pigott group for their careful and convincing deconstruction of this misrepresented study. They aren’t picky protocol cops, they’re exposing something every clinician who treats depressed people needs to know, whether they’ve personally read STAR*D or not, because the misreported results have embedded themselves into the lore of modern medical practice…
  1.  
    Talbot
    April 7, 2011 | 6:40 AM

    I hope you and Dr. Pigott take this a little further, and come up with some guidelines indicating when study results are likely to be bogus. For example, if the primary outcome measure is changed halfway through, or even worse, after the study is concluded, that’s a signal. If they make up their own outcome measure, that’s another. If you cannot follow what happened to the people enrolled in the study, that’s another. And a biggie: if the study looks like it’s been designed to support or align with something else that was discredited [in this case, TMAP], or those involved have conducted other discredited work.

  2.  
    April 7, 2011 | 9:57 AM

    That’s exactly right, but easier thought than taught. Dr. Pigott has done a fine job of finding the holes in this leaky bucket. We just need more people out there looking…

  3.  
    Peggi
    April 7, 2011 | 2:29 PM

    Hope you saw the posting today on Pharmalot regarding the link between SSRIs and breast cancer, as well as the impact of industry ties on which researchers were likely to acknowledge that link.

  4.  
    Rob Purssey
    April 8, 2011 | 1:47 AM

    Dear B.O.M., thanks enormously for this excellent and measured synthesis of Pigott and Whitaker’s critically important contributions in this area. I will be directing colleagues and email list members worldwide to your blog as a brilliant resource for a succinct overview of this very important area, clinically and scientifically (and regarding public health policy). I hereafter cut’n’paste a comment I’ve posted to Whitaker’s Psychology Today blog about this area – with apologies and asking forgiveness for “cross-posting” and some element of laziness!

    Vital information for Primary Care Docs, Psychiatrists, and Patients

    I write to thank Robert Whitaker and especially Ed Pigott for the enormous work done in this critically important re-analysis of the STAR*D data, and for doing their utmost to let others know of the results. As a psychiatrist I daily see “colleagues” believing and slavishly following the hype, with polypharmacy, high doses, and unending medication suggestions the norm.

    Primary Care Docs do most of the prescribing here in Australia, as in the USA, and meds as a first step for ANY emotional upset are such received wisdom that NOT doing so feels nearly like malpractice, if you read the literature (increasingly the marketing tools of pharma) uncritically. Honest appraisal of data such as Dr Pigott has provided needs to be disseminated as widely, clearly and succinctly as possible.

    Hence a request – to distill these very concerning findings into a quickly readable abstract, highlight the key unarguable data points, and make the 2.7% non-relapsing endpoint crystal clear – so that Primary Care Docs may see that their patients continuing to struggle ain’t their (or their patients’) fault, needing more meds, or needing more “aggressive” Rx of some other kind.

    Patients need to know they aren’t broken, their chemicals AREN’T imbalanced, that meds may help a bit but will likely NOT help enormously nor “fix” them, and that simple daily behavioral strategies (behavioral activation and mindfulness, which together make up my approach [disclosure] – Acceptance and Commitment Therapy) can get them back to valued living, even with difficult thoughts and feelings arising from time to time.

    Apologies for long comment. Hope of some interest / use. I will later comment about an alternative approach to human behavior and medications with a long and solid scientific history, functional contextual behavioral pharmacology, if readers/blog host interested.

    With enormous gratitude,

    Rob Purssey
    Psychiatrist and ACT Therapist, Brisbane, Australia
    http://www.mindfulpsychiatry.com.au
