study 329 xi – week 8…

Posted on Monday 28 September 2015

I know I’m a broken record with this 329 stuff. ‘Blog’ came from parsing ‘Weblog’ into ‘We Blog’ – and when push comes to shove, all you can write about is what’s in your mind. This is what’s in there right now [still]…

In study 329 ix – mystic statistics…, I was talking about the variables reported in the Keller et al paper that weren’t in the original protocol, the ones that were reported to be statistically significant. They’re highlighted in blue in this copy of their Table 2:

We left them out of our paper because we wanted to emphasize that they weren’t mentioned until near the end of the study. But when we looked at them post hoc, only three of the four survived using the protocol-defined ANOVA analysis [the K-SADS-L Depressed Mood Item bit the dust]. Two were barely over the p<0.05 line [or on it], and were only significant in week 8 of the study.

The notion that you would take some pill for two months and then it would all of a sudden start working seemed pretty remote to me. But I didn’t pay a lot of attention to the HAM-D Depressed Mood Item. After all, it was significant at the p<0.01 level in our analysis. Then there was a Rapid Response to our article in the BMJ [Study 329 did detect an antidepressant signal from paroxetine] that had something to do with the HAM-D Depressed Mood Item, and it sent me back for yet another look. First, look at the graph on the left below. It plots the change in the HAM-D Depressed Mood Item from its baseline value, and it doesn’t make any sense. It says Paroxetine is effective at the first week, with a respectable p=0.021 and an effect size of d=0.39 [in the moderate range]. No antidepressant does that. And none of the other outcome variables show anything like that:
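
A quick aside on that d: I’m assuming it’s Cohen’s d, the difference between the two group means divided by their pooled standard deviation. A minimal sketch of the arithmetic [with invented change scores, not the Study 329 data]:

    import numpy as np

    def cohens_d(group_a, group_b):
        """Cohen's d: difference in means divided by the pooled standard deviation."""
        a = np.asarray(group_a, dtype=float)
        b = np.asarray(group_b, dtype=float)
        pooled_var = (((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                      / (len(a) + len(b) - 2))
        return (a.mean() - b.mean()) / np.sqrt(pooled_var)

    # Invented week-1 change-from-baseline scores [NOT the Study 329 data]
    paroxetine = [-2, -1, -1, 0, -2, -1, 0, -1]
    placebo    = [-1,  0, -1, 0,  0, -1, 0,  0]
    print(round(cohens_d(paroxetine, placebo), 2))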

The HAM-D Depressed Mood Item is neither a continuous nor a categorical [yes/no] variable. It’s an Ordinal variable with five levels for the rater to choose from:

    DEPRESSED MOOD (sadness, hopeless, helpless, worthless)
    0 | Absent.
    1 | These feeling states indicated only on questioning.
    2 | These feeling states spontaneously reported verbally.
    3 | Communicates feeling states non-verbally, i.e. facial expression, posture, voice and tendency to weep.
    4 | Patient reports virtually only these feeling states in all spontaneous communication.

An Ordinal scale is obviously a severity scale, but the numbers aren’t ordinary numbers: they only tell you the order of things, not a magnitude. They don’t exactly have an arithmetic [+ – × ÷], and we use different statistics for them [Mann-Whitney, Kruskal-Wallis]. Subtracting a baseline seems kind of shaky. Actually, that left-hand graph looks like someone sat on it. The baseline values were higher for Paroxetine, and I suspected that a subtraction anomaly got carried through to the end, accounting for those peculiar p values. So I just compared the raw values, and sure enough, there was no significant difference until the very end – Week 8 [right-hand graph].
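
For the curious, that raw-value comparison is the kind of thing a rank-based test handles. A minimal sketch using SciPy’s Mann-Whitney U [the scores below are invented, not the study data]:

    from scipy.stats import mannwhitneyu

    # Hypothetical HAM-D Depressed Mood item scores (0-4) at a single week,
    # one value per patient [invented numbers, NOT the Study 329 data]
    paroxetine = [1, 1, 2, 0, 1, 2, 1, 0, 1, 2]
    placebo    = [2, 2, 1, 3, 2, 2, 1, 2, 3, 2]

    # Compare the raw ordinal values directly, no change-from-baseline subtraction
    stat, p = mannwhitneyu(paroxetine, placebo, alternative="two-sided")
    print(f"U = {stat}, p = {p:.3f}")

The point is just that the test ranks the raw 0 to 4 values instead of doing arithmetic on them.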

So it’s down to three rogue outcome variables, significant only in the eighth week. Was there something about that last week that should be further examined? Well, there was one thing [see the full study report acute, page 53]:

Defined Timepoints
Day 1 was defined as the day on which the randomized, double-blind study medication was started. Assessments were included in the analyses at a particular timepoint (study week) if they occurred within the following day windows relative to Day 1:
Timepoint    Day Window
Week 1       Days 01 to 11
Week 2       Days 12 to 18
Week 3       Days 19 to 25
Week 4       Days 26 to 32
Week 5       Days 33 to 39
Week 6       Days 40 to 46
Week 7       Days 47 to 53
Week 8       Days 54 to 70
If multiple observations for a patient fell into a visit window, then the last (furthest from the start of the study) observation was used to represent that patient’s result for that time period in the tabulations and analyses. However, all values within a visit window were presented in the data listings.
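
To make that windowing rule concrete, here’s a minimal sketch of how an assessment day might be mapped to a study week, keeping only the last observation inside each window [the visits are invented]:

    # Day windows from the study report (week -> inclusive day range)
    WINDOWS = {1: (1, 11), 2: (12, 18), 3: (19, 25), 4: (26, 32),
               5: (33, 39), 6: (40, 46), 7: (47, 53), 8: (54, 70)}

    def week_for_day(day):
        """Return the study week whose window contains this day, else None."""
        for week, (lo, hi) in WINDOWS.items():
            if lo <= day <= hi:
                return week
        return None

    def last_observation_per_week(observations):
        """observations: list of (day, score). Keep the latest score in each window."""
        by_week = {}
        for day, score in sorted(observations):
            week = week_for_day(day)
            if week is not None:
                by_week[week] = score  # later days overwrite earlier ones
        return by_week

    # Invented example: two visits fall inside the wide Week 8 window [days 54 to 70];
    # only the day-68 score would be carried into the Week 8 analysis
    print(last_observation_per_week([(7, 3), (20, 2), (55, 2), (68, 1)]))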

Do I have the energy left to run this down? Certainly not right now.

It’s been a very long week…
  1.  
    September 29, 2015 | 3:32 PM
     

    This is a genuine inquiry for my education: what is your gain in pursuing this matter in 2015? To me, it is a dead horse, and I think there are more pressing current issues of concern in our profession; defining how false and disingenuous Study 329 has turned out to be will really do nothing to aid in addressing such active problems.

    For instance, as much as I loathe the source, being Mad In America, they have their random chance correct moment in discussing the fallacy in medicating Borderline Personality Disorder, per this link:
    http://www.madinamerica.com/2015/09/drug-treatment-for-borderline-personality-disorder-not-supported-by-evidence/
    and provide the link in the article about what the World Federation of Societies of Biological Psychiatry said in their guidelines, albeit from 2007:
    http://www.wfsbp.org/fileadmin/user_upload/Treatment_Guidelines/Guidelines_Personality_Disorders.pdf
    (hope this will be linked, I did not like what MIA had as their link source)
    from the second page of the article is this: “To conclude, no medication has been registered for personality disorders, and there is no evidence for a benefit of polypharmacy in these patients. Although there is some evidence for differential effects on psychopathology, classes of psychotropic agents act on a rather broad spectrum of symptoms and there is no database to suggest the combination of several drugs with respect to different targets.”

    Perhaps this specific topic is not of pertinent interest, but I think most of us know that Paxil and SKB are a corrupt lot. I really just want to understand all the time and energy you are putting into this matter today, September 2015.

    Oh, and by the way, I think Paxil really screws up primary personality disordered patients more than other SSRIs, just to tie it in to your post.

    Thank you for a reply, if willing.

    Joel Hassman, MD

  2.  
    October 7, 2015 | 3:50 AM
     

    Joel,

    Because it wasn’t pursued in 2001. It’s really that simple. And if there were a Study from 2015 where we could actually see the data that was suspect, I’d pursue it. You know it’s a dead horse. I know it’s a dead horse. There are plenty of studies of more contemporary relevance that would be dead horses too, but we have no way to prove it [because we have no access to the data]. So it’s not about Study 329. It’s about data transparency [and why we needed it in 2001 and all the years before and since then]…
