Misconduct accounts for the majority of retracted scientific publications
by Ferric C. Fang, R. Grant Steen, and Arturo Casadevall
Proceedings of the National Academy of Sciences. 2012 Oct 1.
[Epub ahead of print]
A detailed review of all 2,047 biomedical and life-science research articles indexed by PubMed as retracted on May 3, 2012 revealed that only 21.3% of retractions were attributable to error. In contrast, 67.4% of retractions were attributable to misconduct, including fraud or suspected fraud [43.4%], duplicate publication [14.2%], and plagiarism [9.8%]. Incomplete, uninformative or misleading retraction announcements have led to a previous underestimation of the role of fraud in the ongoing retraction epidemic. The percentage of scientific articles retracted because of fraud has increased ∼10-fold since 1975. Retractions exhibit distinctive temporal and geographic patterns that may reveal underlying causes.
The number and frequency of retracted publications are important indicators of the health of the scientific enterprise, because retracted articles represent unequivocal evidence of project failure, irrespective of the cause. Hence, retractions are worthy of rigorous and systematic study. The retraction of flawed publications corrects the scientific literature and also provides insights into the scientific process. However, the rising frequency of retractions has recently elicited concern. Studies of selected retracted articles have suggested that error is more common than fraud as a cause of retraction and that rates of retraction correlate with journal-impact factor. We undertook a comprehensive analysis of all retracted articles indexed by PubMed to ascertain the validity of the earlier findings. Retracted articles were classified according to whether the cause of retraction was documented fraud [data falsification or fabrication], suspected fraud, plagiarism, duplicate publication, error, unknown, or other reasons [e.g., journal error, authorship dispute].
Here, we have a definition provided by an industry lawyer that leaves an exit strategy as slick as a water-slide in a theme park.
There’s lots of concern at the moment over mistakes, misconduct and misbehaviour in science. This concern is a good thing. There are serious, systemic problems with modern science as I and many others have long argued. However, I worry that much of the recent discussion has failed to distinguish between two fundamentally distinct problems. On the one hand, we have outright fraud – i.e. making up data, or otherwise lying, breaking the basic rules of science. On the other hand we have questionable practices such as: publication bias, p-value fishing, the File Drawer, sample size peeking, post-hoc storytelling, and all of the other dark arts that can lead to false positive science. These are permissible, even encouraged, by the current rules of doing and publishing science. These two problems are similar in some ways – they’re both "bad science", they both lead to failures to replicate, etc. – but in underlying essence they’re very different, so much so that I’m not sure they can be usefully discussed in the same breath. Fraud and questionable practices are different in terms of their harms. Fraud is a more serious act and it causes local harm, introducing major errors into the record. But in terms of its overall effects, I believe questionable practices are worse, as they systematically distort science: ensuring that, in some cases, it is difficult to publish anything but errors.
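To see concretely why "sample size peeking" is one of the dark arts that leads to false positive science, here is a minimal simulation sketch (not from the post; all parameter choices are illustrative). It compares a study with a fixed sample size against one where the experimenter tests after every batch of new subjects and stops the moment p dips below 0.05 – even though there is no real effect in either case.

```python
# Hypothetical simulation: "sample size peeking" (optional stopping)
# inflates the false-positive rate even when the null hypothesis is true.
import random
import math

def two_sample_pvalue(a, b):
    """Approximate two-sided p-value for a difference in means,
    using a normal approximation to Welch's t statistic."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)
    if se == 0:
        return 1.0
    z = abs(ma - mb) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def run_study(peek, n_start=10, n_max=50, step=5, alpha=0.05):
    """One simulated study with NO true effect (both groups ~ N(0, 1)).
    If peek is True, test after every batch and stop at 'significance'."""
    a = [random.gauss(0, 1) for _ in range(n_start)]
    b = [random.gauss(0, 1) for _ in range(n_start)]
    while True:
        if peek and two_sample_pvalue(a, b) < alpha:
            return True  # declared "significant" -- a false positive
        if len(a) >= n_max:
            return two_sample_pvalue(a, b) < alpha
        a += [random.gauss(0, 1) for _ in range(step)]
        b += [random.gauss(0, 1) for _ in range(step)]

random.seed(1)
trials = 2000
fixed_n = sum(run_study(peek=False) for _ in range(trials)) / trials
peeking = sum(run_study(peek=True) for _ in range(trials)) / trials
print(f"false-positive rate, fixed n:  {fixed_n:.3f}")  # near the nominal 0.05
print(f"false-positive rate, peeking: {peeking:.3f}")   # noticeably inflated
```

The peeking experimenter never fabricates a single data point – every observation is genuine, every test is a standard test – yet the long-run false-positive rate climbs well above the nominal 5%. That is the sense in which the rules themselves, not rule-breaking, do the damage.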
Fraud and questionable practices call for different solutions. Broadly speaking, fraudsters break the rules, so to stop them we need to enforce those rules, via deterrence, detection, and punishment – like with any criminal act. With questionable practices, it’s the opposite: here the problem is the rules [or the lack of them], and the solution is to reform the system. It’s been suggested that fraud and questionable practices share a common cause in the "pressure to publish", the "publishing environment", the "culture" of modern science etc. But while this is a good explanation for questionable practices, I don’t think this can explain fraud, any more than, say, the desire for money can explain theft. Yes, thieves desire money, and yes they steal in order to get money, but everyone else wants money as well, yet most of us don’t steal, so that’s not an explanation. Frauds fake data to produce publications. But all scientists are under pressure to produce good publications and they always have been – which is why fraud is not new – what’s changed recently is the criteria for a ‘good’ publication. Now in retrospect, I blurred these distinctions somewhat with my own 9 Circles of Scientific Hell, in which I placed 6 questionable practices and 2 forms of misconduct on the same scale of "sinfulness". In fact there are two distinct hierarchies. In my defence though, that was a cartoon.
I think finance offers a great analogy here. In finance, you have some people who break the rules. Bernie Madoff is the current poster boy for this. Such people harm others by outright criminal acts. But then we have the people who play by the rules, and still cause harm. The global financial crisis was in essence caused by all of the major American banks going all-in on a bet, and losing. Yet no-one broke the rules: the regulations allowed
banks to gamble. The problem was not rule-breaking, but the rules [or lack thereof]. Here’s the curious thing: the financial crisis did more harm than Madoff’s scam, even though what Madoff did – theft by fraud – was more immoral than what the bankers did – gambling unwisely. That’s confusing to our ethical sense and our emotions [who should we feel more angry at? Who’s ‘worse’?] but it’s really no surprise: precisely because what the banks did was above board, everyone did it so the damage was huge. If it had been illegal for banks to gamble all their money at once, individual banks might still have broken that rule, locally, but it’s unlikely that the
system would have been threatened. Maybe you can see where I’m going with this: everyone following bad rules is often worse than individuals breaking good rules. Science has its share of fraud. Hauser, Smeesters, Fujii – they broke good rules against such deceit. They are the Bernie Madoffs of science. But then there’s ‘questionable practices’ like publication bias, p-value fishing, the File Drawer, and all the rest, which are allowed, but which are universally acknowledged to be bad for science. Scientists using these dark arts [and I don’t know any who never do] may be the Lehman Brothers of science.
“Ghost written is where the authors of the article have no input at all into the contents of the article.” That almost made me laugh out loud. Again, wow.
Neuroskeptic has another great post that you might enjoy:
http://neuroskeptic.blogspot.com/2010/11/9-circles-of-scientific-hell.html
It would be more entertaining if most of the levels didn’t read as “how to” manual for so much of the psychiatric literature.
In the comments Neuroskeptic writes:
Ghost-writing was something I seriously considered, but I ran out of circles. Let’s put it this way, it’s not going to get you into Science Heaven.
Also, don’t miss the comment by Catherina.
Thanks for the support!
It’s true that the 9 Circles represents a how-to for many of Pharma’s worst practices. Ironically though, I think that clinical trials are now the cleanest area of research, thanks to mandatory pre-registration on clinicaltrials.gov. What I really worry about today is not clinical but preclinical, and what I call “quasi-clinical” research such as diagnostic tests & ‘biomarkers’ which are not clinical but are promoted as having clinical applications.
And then there’s the Quintiles “on-line” Clinical Trial initiative! Thanks to both for the comments. And anonymous is right, Catherina’s comment is a classic.