931 of their 4041 subjects had not met the enrollment requirements [607 did not have HRSD scores high enough to qualify, and 324 had not had the HRSD administered at all]. Another 370 subjects were eliminated from the Level 1 calculation because they had no post-baseline visit – yet they had already been started on medication and by protocol should have been included.
The protocol defined the outcome measures as the HRSD [Hamilton Rating Scale for Depression] and the IDS [Inventory of Depressive Symptomatology], both Clinician-Rated. Instead they used the QIDS [Quick Inventory of Depressive Symptomatology], a Self-Rated scale they had developed themselves – a scale they had said would only be used for clinician decision-making along the way [unblinded].
The Report presented the remission rates at the end of the year’s follow-up in an unintelligible manner. Although they reported a "theoretical" remission rate of 67%, only 108 subjects who stayed in the study actually had a sustained remission at one year.
Some of the reasons I think that STAR*D was presented in a deliberately misleading way have already been covered. They made all those protocol changes after the results were unblinded [a big no-no]. They specifically didn’t tell us that the 370 people they dropped had already been started on Level 1 treatment. They gave us a specific rationale for changing outcome measures in midstream that may or may not have contained some truth, but that rationale had a great big lie right in the middle of it. They presented the response, remission, and relapse data in a way that could not be deciphered, instead of showing it to us in a simple table [from Pigott]:
There was more, a lot more, in Pigott’s paper. That’s why I suggest you take a look at it on your own. What I’ve shown here is just a few pieces from the mound of evidence. Every infraction, every change, every omission, even down to how they rounded off their numbers, moves the results in the same direction – towards increasing the apparent efficacy of their antidepressant treatment. In spite of squeezing out some seventy published papers, they’ve never published the outcome data pre-defined by their protocol – the before and after HRSD or IDS scores at the different levels. Pigott found evidence of bias in the American Journal of Psychiatry‘s handling of the study, and at the NIMH as well.
His answer is revealing for two reasons. First, he is acknowledging that the low remission and stay-well rates reported by the Pigott group are accurate. Those are indeed the real results. Second, he is acknowledging that the STAR*D investigators knew this all along, and that, in fact, this information was in their published reports. And in a sense, that is true. If you dug through all of the published articles, and spent weeks and months reading the text carefully and intently studying all the data charts, then maybe, at long last, you could — like Pigott’s group — ferret out the real results.