- The article must look and feel like a scientific study, using appropriate logic and format.
- The article must be based on the genuine data set of the study.
- The conclusion of the article must confirm the original hypothesis or some acceptable alternative.
- The data set does not actually support a desirable conclusion [or, perhaps, any conclusion], and specifically does not support the conclusion of the article.
When the ancient Greeks were formalizing logic, some of them had the idea that one could use the rules of logic to reach absolute truth. Others realized that one could distort logic, so they began to study and categorize fallacy, thinking they might be able to plug the holes and still use logic to arrive at absolute truth. Then along came the Skeptics, who mocked the Dogmatists [the absolute truth guys] and said that relative truth was the best we could do. So the rigid rules of logic and fallacy didn't define absolute truth anymore, and they waited patiently in the dusty logic books until they made a comeback in the programming languages running that light box you're looking at right now, the computer. We moved on to the new standards of relative truth, like "beyond the shadow of a doubt" in a courtroom or the statistical calculations of science – all ways of "weigh[t]ing the evidence."
The kind of ghost-writer we're talking about, someone like Sally Laden, is able to take the data from a study like Paxil Study 329, a negative study by the usual rules of science, and work her mojo on it to make it look positive. Although she documented that it didn't meet its primary outcome measure, by conflating the primary and secondary measures and adding a new parameter that was significant by a fluke, she could conclude it was a significant study. They could rename suicidality as emotional lability and bury it in a list instead of reporting it under serious adverse effects. They could rationalize not correcting for multiple comparisons based on grammar. All of these things we refer to as spin, a benign term for lying. So we all know what kind of ghost-writer we're pointing at. It's the kind spoken of in porcine metaphors: "putting lipstick on a pig", "making a silk purse out of a sow's ear".
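For readers who want to see why skipping the multiple-comparisons correction matters, here is a minimal simulation sketch. The outcome count of 8 is hypothetical, not taken from either Paxil study; the point is only that testing many outcomes against a drug with no real effect makes a fluke "significant" result quite likely:

```python
import random

random.seed(0)

ALPHA = 0.05      # nominal significance threshold
N_OUTCOMES = 8    # hypothetical number of outcome measures tested
TRIALS = 100_000  # simulated null "studies" (drug has no real effect)

# Count simulated studies in which at least one outcome measure
# comes up "significant" purely by chance.
false_positive_studies = 0
for _ in range(TRIALS):
    # Under the null hypothesis, each test's p-value is uniform on [0, 1].
    p_values = [random.random() for _ in range(N_OUTCOMES)]
    if any(p < ALPHA for p in p_values):
        false_positive_studies += 1

observed = false_positive_studies / TRIALS
expected = 1 - (1 - ALPHA) ** N_OUTCOMES  # analytic familywise error rate

print(f"chance of >=1 fluke 'significant' result: {observed:.2f}")
print(f"analytic familywise error rate:           {expected:.2f}")

# The simplest fix, a Bonferroni correction, divides alpha by the
# number of tests, restoring a ~5% studywise false-positive rate.
bonferroni_alpha = ALPHA / N_OUTCOMES
```

With eight uncorrected tests, roughly a third of no-effect studies will produce at least one nominally significant result, which is exactly the loophole a skilled spin artist can exploit.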
Nobody cares if Paula Broadwell's book about General Petraeus was ghost-written by Washington Post editor Vernon Loeb. It made it more interesting and readable. I doubt many of us care if a graduate student or lesser author writes some guru's scientific paper, as long as they are identified on the byline. What we care about is when a pro at porcine cosmetics works some mojo on a piece of science and it's kept a secret. Because if we knew, we'd be more likely to look for the signs that the conclusions in the article are in question. In Paxil Study 329, we had a clue: Sally Laden was mentioned as an editorial assistant in the acknowledgements. But in the paper we're looking at now, Paxil Study 352, she's not even mentioned. And we're not told that three of the authors are GSK employees. And there's no mention that the other authors had glaring conflicts of interest because of financial connections with GSK, which funded the study.
We particularly didn't know that the lead author, Dr. Nemeroff, had his name on a sea of papers in 2001, the year of this paper; or that in 2001 he worked for 18 pharmaceutical companies including GSK; or that he made 44 paid trips for GSK to give "talks" as part of their Speaker's Bureau; or that he got David Healy un-hired from a position in Toronto for speaking up about PHARMA [before that became cool!…]. Because if we knew any of those things, we might have known to pore over this paper with a fine-tooth comb looking for the signs that this was what people call an experimercial, and that the science was likely a lie. But we know those things now, and that's why we're looking so belatedly. When a clinical trial article in a medical journal is written by a ghost-writer paid by the industry sponsor of the trial, the bells ring out loudly. Nowadays, they couldn't even get away with this kind of sleaze.
In the comments of my last post about Paxil Study 352 [paxil study 352 – what's ghost-writing?], there was a discussion about why I'm harping on an 11-year-old paper. We all know Dr. Nemeroff is morally challenged. We all know that Sally Laden was a master manipulator of scientific data. We all now know why conflicts of interest and financial relationships have to be declared. Why beat a dead horse? One reason is that GSK, STI, Sally Laden, the American Journal of Psychiatry, and Dr. Nemeroff continue to pretend this is a legitimate scientific contribution to the psychiatric literature and that its conclusions are valid – Nemeroff even denies it was ghost-written ["we wrote the paper"]. Another reason is that I'm personally ashamed of what happened here and elsewhere in my profession, and I want us to make it right rather than continue to play like this kind of deceit wasn't so widespread as to actually characterize our literature of the period. But my most pressing motive right now is that I'm not sure that simply plugging the ghost-writing and conflict of interest leaks is even enough to stop it from happening again. A lot of other people have the same concern.