what he said…

Posted on Wednesday 3 October 2012


Misconduct accounts for the majority of retracted scientific publications
by Ferric C. Fang, R. Grant Steen, and Arturo Casadevall
Proceedings of the National Academy of Sciences. 2012 Oct 1.
[Epub ahead of print]

A detailed review of all 2,047 biomedical and life-science research articles indexed by PubMed as retracted on May 3, 2012 revealed that only 21.3% of retractions were attributable to error. In contrast, 67.4% of retractions were attributable to misconduct, including fraud or suspected fraud [43.4%], duplicate publication [14.2%], and plagiarism [9.8%]. Incomplete, uninformative or misleading retraction announcements have led to a previous underestimation of the role of fraud in the ongoing retraction epidemic. The percentage of scientific articles retracted because of fraud has increased ∼10-fold since 1975. Retractions exhibit distinctive temporal and geographic patterns that may reveal underlying causes.


[reformatted to fit]

I expect that most have either read or read about this article, which reports a 10-fold increase in retractions of articles in peer-reviewed journals in recent history [Finding Research Fraud Ain’t So Bad: Art Explains]. I expect the intended purpose is to raise the alarm in the scientific community about the rising incidence of research misconduct. In psychiatry, the issue is different. We wonder why so few have been retracted. This is the opening paragraph of the article:
The number and frequency of retracted publications are important indicators of the health of the scientific enterprise, because retracted articles represent unequivocal evidence of project failure, irrespective of the cause. Hence, retractions are worthy of rigorous and systematic study. The retraction of flawed publications corrects the scientific literature and also provides insights into the scientific process. However, the rising frequency of retractions has recently elicited concern. Studies of selected retracted articles have suggested that error is more common than fraud as a cause of retraction and that rates of retraction correlate with journal-impact factor. We undertook a comprehensive analysis of all retracted articles indexed by PubMed to ascertain the validity of the earlier findings. Retracted articles were classified according to whether the cause of retraction was documented fraud [data falsification or fabrication], suspected fraud, plagiarism, duplicate publication, error, unknown, or other reasons [e.g., journal error, authorship dispute].
In the last sentence, documented fraud is defined as data falsification or fabrication. I would argue that this definition is inadequate and can actually be used as cover for activities that are manifestly fraudulent, providing a letter of the law that conflicts with the spirit of the law. As an example, I ran across a quote in the Rothman Report extracted from Janssen’s subpoenaed documents that frames the point:
In fact the whole point of the ghost-writing and jury-rigging is to neither falsify nor fabricate data. Even in the infamous Paxil Study 329, which rode as close to the wrong side of the line as they could, they reported on the data they had – the sleight of hand was in omissions, analysis, presentation, etc. To use a childhood metaphor, they stayed in the domain of "white lies" or the political technique of "plausible deniability." So the tools were not data falsification or fabrication – a table was presented with a parameter added because it made things come out right, an expected correction [the Bonferroni correction] was ignored, serious adverse effects were renamed and buried in a long list of benign ones. And with Study 329, the argument for not retracting was that it was factually correct, since the data was not falsified or fabricated. Instead it was mangled and tangled, hidden and ridden to an unsupportable conclusion. There are many possible roads to fraud. And as for ghostwriting, recall Sally Laden’s testimony in a suit against GSK about Paxil:

    GSK LAWYER: But do you consider that the work that you did in terms of revising the draft of the article to incorporate the authors’ comments, their analysis, and their changes, is that a lot of hours in your mind?
    SALLY LADEN: It was a lot of time.
    GSK LAWYER: If STI or you had ghost written the Keller article, would you have, then, would there have been any need to do any of this work that we have been discussing?
    SALLY LADEN: Can you tell me what you mean by ghost written?
    GSK LAWYER: Ghost written is where the authors of the article have no input at all into the contents of the article.
    SALLY LADEN: And then can you repeat the question please?
    GSK LAWYER: With that identification of ghost writing in mind, if STI or you had ghost written the Keller article, would there have been any need to do any of the items that we discussed in terms of your editorial assistance?
    SALLY LADEN: That’s a hard question to answer, because that didn’t happen.
    GSK LAWYER: Is it your testimony that you did not ghost Study 329, well, excuse me, is it your testimony that you did not ghost write the Keller article, which was published in the Journal of the American Academy of Child and Adolescent Psychiatry?
    SALLY LADEN: Based on your definition of ghost writing – absolutely.
    GSK LAWYER: Would you have bothered to waste your time, your effort, and your energy doing all of this coordination with the authors if you were a ghost writer for the Keller article?
    SALLY LADEN: I can’t answer that.
    GSK LAWYER: Why can’t you answer that?
    SALLY LADEN: Because I don’t believe I was a ghost writer.

Here, we have a definition provided by an industry lawyer that leaves an exit strategy as slick as a water-slide in a theme park.

Flash: At this point, while I was writing this, I got an email mentioning a post on Neuroskeptic. In the spirit of full disclosure, he’s way ahead of me in framing the issue:

The Two Problems With Science
Neuroskeptic
October 3, 2012


There’s lots of concern at the moment over mistakes, misconduct and misbehaviour in science. This concern is a good thing. There are serious, systemic problems with modern science as I and many others have long argued. However, I worry that much of the recent discussion has failed to distinguish between two fundamentally distinct problems. On the one hand, we have outright fraud – i.e. making up data, or otherwise lying, breaking the basic rules of science. On the other hand we have questionable practices such as: publication bias, p-value fishing, the File Drawer, sample size peeking, post-hoc storytelling, and all of the other dark arts that can lead to false positive science. These are permissible, even encouraged, by the current rules of doing and publishing science.

These two problems are similar in some ways – they’re both "bad science", they both lead to failures to replicate, etc. – but in underlying essence they’re very different, so much so that I’m not sure they can be usefully discussed in the same breath. Fraud and questionable practices are different in terms of their harms. Fraud is a more serious act and it causes local harm, introducing major errors into the record. But in terms of its overall effects, I believe questionable practices are worse, as they systematically distort science: ensuring that, in some cases, it is difficult to publish anything but errors.

Fraud and questionable practices call for different solutions. Broadly speaking, fraudsters break the rules, so to stop them we need to enforce those rules, via deterrence, detection, and punishment – like with any criminal act. With questionable practices, it’s the opposite: here the problem is the rules [or the lack of them], and the solution is to reform the system. It’s been suggested that fraud and questionable practices share a common cause in the "pressure to publish", the "publishing environment", the "culture" of modern science etc. But while this is a good explanation for questionable practices, I don’t think this can explain fraud, any more than, say, the desire for money can explain theft. Yes, thieves desire money, and yes they steal in order to get money, but everyone else wants money as well, yet most of us don’t steal, so that’s not an explanation. Frauds fake data to produce publications. But all scientists are under pressure to produce good publications and they always have been – which is why fraud is not new – what’s changed recently is the criteria for a ‘good’ publication. Now in retrospect, I blurred these distinctions somewhat with my own 9 Circles of Scientific Hell, in which I placed 6 questionable practices and 2 forms of misconduct on the same scale of "sinfulness". In fact there are two distinct hierarchies. In my defence though, that was a cartoon.

I think finance offers a great analogy here. In finance, you have some people who break the rules. Bernie Madoff is the current poster boy for this. Such people harm others by outright criminal acts. But then we have the people who play by the rules, and still cause harm. The global financial crisis was in essence caused by all of the major American banks going all-in on a bet, and losing. Yet no-one broke the rules: the regulations allowed banks to gamble. The problem was not rule-breaking, but the rules [or lack thereof]. Here’s the curious thing: the financial crisis did more harm than Madoff’s scam, even though what Madoff did – theft by fraud – was more immoral than what the bankers did – gambling unwisely.

That’s confusing to our ethical sense and our emotions [who should we feel more angry at? Who’s ‘worse’?] but it’s really no surprise: precisely because what the banks did was above board, everyone did it so the damage was huge. If it had been illegal for banks to gamble all their money at once, individual banks might still have broken that rule, locally, but it’s unlikely that the system would have been threatened. Maybe you can see where I’m going with this: everyone following bad rules is often worse than individuals breaking good rules. Science has its share of fraud. Hauser, Smeesters, Fujii – they broke good rules against such deceit. They are the Bernie Madoffs of science. But then there’s ‘questionable practices’ like publication bias, p-value fishing, the File Drawer, and all the rest, which are allowed, but which are universally acknowledged to be bad for science. Scientists using these dark arts [and I don’t know any who never do] may be the Lehman Brothers of science.


So the end of my post is, "What Neuroskeptic said"…
  1.  
    annonymous
    October 4, 2012 | 12:50 AM

    “Ghost written is where the authors of the article have no input at all into the contents of the article.” That almost made me laugh out loud. Again, wow.

    Neuroskeptic has another great post that you might enjoy:
    http://neuroskeptic.blogspot.com/2010/11/9-circles-of-scientific-hell.html
    It would be more entertaining if most of the levels didn’t read as a “how to” manual for so much of the psychiatric literature.

    In the comments Neuroskeptic writes:
    Ghost-writing was something I seriously considered, but I ran out of circles. Let’s put it this way, it’s not going to get you into Science Heaven.

    Also, don’t miss the comment by Catherina.

  2.  
    October 5, 2012 | 3:18 AM

    Thanks for the support!

    It’s true that the 9 Circles represents a how-to for many of Pharma’s worst practices. Ironically though, I think that clinical trials are now the cleanest area of research, thanks to mandatory pre-registration on clinicaltrials.gov. What I really worry about today is not clinical but preclinical, and what I call “quasi-clinical” research such as diagnostic tests & ‘biomarkers’ which are not clinical but are promoted as having clinical applications.

  3.  
    October 5, 2012 | 8:11 AM

    And then there’s the Quintiles “on-line” Clinical Trial initiative! Thanks to both for the comments. And annonymous is right, Catherina’s comment is a classic.
