The rate at which scientific journal articles are being retracted has increased roughly tenfold over the past two decades, an exclusive analysis for Times Higher Education reveals.
Possible factors behind the change include growth in research fraud as a result of greater pressure on researchers to publish, improved detection, and greater demands on editors to take action.
The analysis, by the academic-data provider Thomson Reuters, follows the retraction last month of a paper on the creation of sperm from human embryonic stem cells.
The paper, written by researchers at Newcastle University, was withdrawn by the Stem Cells and Development journal after it emerged that the introduction was largely plagiarised.
The Thomson Reuters analysis charts the number of peer-reviewed scientific-journal articles produced each year from 1990 and the number of retractions.
It shows that over nearly 20 years the number of articles produced each year has doubled, but the number of retractions - still a small fraction of the literature - has increased 20-fold. Once the growth in the number of articles is taken into account, this amounts to a roughly tenfold increase in the rate of retraction.
The data are extracted from the Thomson Reuters Web of Science citation database, and apply to the journals covered by its Science Citation Index Expanded.
Whereas in 1990, just five of the nearly 690,000 journal articles that were produced worldwide were retracted, last year the figure was 95 of the 1.4 million papers published.
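The arithmetic behind the headline figure can be checked directly from these numbers. The short Python sketch below is illustrative only: it uses the approximate counts quoted in this article, which may differ slightly from the exact Thomson Reuters totals.

    # Illustrative calculation of the retraction rate implied by the figures quoted above.
    # Counts are the approximate numbers given in the article, not exact Thomson Reuters totals.
    retractions_1990, articles_1990 = 5, 690_000
    retractions_latest, articles_latest = 95, 1_400_000   # "last year" in the analysis

    rate_1990 = retractions_1990 / articles_1990           # about 7.2 retractions per million articles
    rate_latest = retractions_latest / articles_latest     # about 68 retractions per million articles

    print(f"1990: {rate_1990 * 1e6:.1f} retractions per million articles")
    print(f"Latest year: {rate_latest * 1e6:.1f} retractions per million articles")
    print(f"Increase in rate: {rate_latest / rate_1990:.1f}x")  # roughly 9.4, close to tenfold

On these figures the raw count of retractions rises about 19-fold, while the per-article rate rises a little over ninefold, consistent with the "roughly tenfold" headline.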
The growth has been particularly pronounced in the past few years, even after factoring out the 22 papers by Jan Hendrik Schön, the disgraced German physicist, that were retracted earlier this decade.
James Parry, acting head of the UK Research Integrity Office (UKRIO), said it was impossible to know for certain the reasons for the increase.
“It might reflect a real increase in misconduct or, more likely, an increase in detection compared with 20 years ago,” he said.
He noted that while “most” retractions were for misconduct or questionable practice, “many” were the result of honest errors, such as an author misinterpreting results and realising the mistake later.
“Some editors have been very slow to spot misconduct and to take action when they do,” he added.
Harvey Marcovitch, former chair of the Committee on Publication Ethics, welcomed the analysis. He said he had always thought that the number of retractions was small, but had never seen the figures before.
He hoped that the increased publicity scientific fraud had received in recent years had raised awareness - making scientists more likely to alert journal editors, and editors more prepared to investigate claims.
Editors, he agreed, had been notoriously reluctant to retract, for reasons ranging from “not having permission of authors, to being unsure about what retraction meant, to not knowing precisely what to do”.
He said plagiarism software could also play a part in the rise - the British Medical Journal uses it to evaluate suspect papers, while Nature is trialling it for some papers and all review articles.
Both Mr Parry and Dr Marcovitch stressed that misconduct was likely to be more common than the retraction figures suggest.
“Even on a conservative estimate of 1 per cent misconduct, we might expect 15,000 retractions a year, but we have a lot less,” Mr Parry said.
“This suggests significant under-detection, which fits with what editors have told UKRIO.”
He added that there was evidence that people still frequently quoted papers after they had been retracted. “The system is not working as well as it could,” he said.
Aubrey Blumsohn, a former University of Sheffield academic and now a campaigner for greater openness in research conduct, said that only a “tiny proportion” of the papers known to have serious problems were retracted.
“Journal editors and institutions generally engage in a fire-fighting exercise to avoid retractions,” he said.
“Anyone looking at this problem in detail knows of dozens of papers that are frankly fraudulent, but they are never retracted.”
He said that the ways in which the scientific community “covers its tracks and prevents fraud being prosecuted” must be investigated.
Peter Lawrence, a scientist from the Medical Research Council’s Laboratory of Molecular Biology in Cambridge, speculated that more plagiarism and better detection had pushed up the retraction rate.
Blaming a culture of “publish or perish”, he said: “It’s now a desperate struggle for survival.”
He added that there was overwhelming pressure to be published in big journals: “You need to sensationalise results, be economical with rigour, and hype, hype, hype.”
WIDESPREAD MISCONDUCT
A new study assesses the reasons for more than 300 journal retractions over the past 20 years.
The analysis looks at 312 cases of withdrawals listed in the PubMed database between 1988 and 2008. The authors, Liz Wager, chair of the Committee on Publication Ethics, and Peter Williams, research fellow in the department of information studies at University College London, found that 25 per cent were due to plagiarism or falsified data and 26 per cent were due to honest errors. The reasons for the other retractions were not given.
The study, Why and How Do Journals Retract Articles?, is due to be presented in September to the Sixth International Congress on Peer Review and Biomedical Publication in Vancouver.
It follows a paper published this year in the journal PLoS ONE that aggregates studies on how frequently scientists falsify research. It says that about 2 per cent of scientists admitted to having fabricated, falsified or otherwise modified data or results “at least once”. Almost 34 per cent admitted to “questionable research practices”.
The paper, How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data, is written by Daniele Fanelli, Marie Curie research fellow at the University of Edinburgh.
CODE OF PRACTICE: TAKE THE PLAUDITS AND THE BRICKBATS
Anyone listed as an author on a paper should be prepared to take “public responsibility” for the work, a body that battles research misconduct advises.
The advice is featured in a code of practice for research, due to be launched next month by the UK Research Integrity Office (UKRIO).
The code is designed to help universities formulate institutional guidelines.
“Researchers should be aware that anyone listed as an author should be prepared to take public responsibility for the work, ensure its accuracy and be able to identify their contribution to it,” it says.
James Parry, acting head of the UKRIO, said the document would provide “broad standards and principles” for best practice in research.
It follows a case at Newcastle University, which is investigating the plagiarised introduction of a stem-cell paper listing eight authors. The paper was retracted from the Stem Cells and Development journal last month after the problem came to light.
A research associate who has since left the university was blamed for the error, but leading scientists have criticised the senior authors involved for not taking responsibility.
For a copy of the UKRIO code: .