Studies that fail to replicate ‘get more citations’

Articles in leading journals that can be reproduced gain 16 fewer citations a year on average, say Californian economists

May 21, 2021

Social science studies in top journals that fail to replicate are cited far more often than similarly prestigious papers whose results are reproducible, an analysis suggests.

Reviewing three recent projects that attempted to replicate high-profile studies published in leading economics, psychology and science journals, researchers from the University of California, San Diego found that those papers whose findings could not be reproduced enjoyed substantially more citations, even after this failure to replicate was made public.

Papers that could not be replicated were cited on 16 more occasions annually on average over a three-year period than those that could be replicated, according to the analysis of 80 replication studies, which was published on 21 May.

That difference was even higher for non-replicable papers published in Nature and Science, where the eight papers that could not be replicated were cited 38 to 44 times a year more than the 13 papers whose findings were reproduced.

The authors – economists Marta Serra-Garcia and Uri Gneezy – considered whether the higher citation counts might be explained by the reporting of their non-replication but found that these replication projects were mentioned in only 12 per cent of post-replication citations.

As it was now possible to predict which papers would be replicable using “prediction markets”, journal reviewers and editors faced a “trade-off” between publishing “interesting” but potentially flawed work that would gain publicity and citations, and accepting more robust studies that might not attract “hype”.

Explaining why journals were happy to allow non-replicable studies to be published, the authors speculate that “when the results are more ‘interesting’, [reviewers] apply lower standards regarding its reproducibility”.

“Although they expect some results to be less robust than others…they are willing to accept this lower expected reliability of the results in some cases,” they explain.

That pattern of publication fits with findings made by King’s College London psychologist Stuart Ritchie, whose recent book Science Fictions argued that pressure to receive grants and publish positive results favours “showy and ostentatious findings over…studies that only add small pieces to our knowledge”, the authors say.

That trade-off could help to “explain the source of the replication crisis in social sciences”, the authors continue, which could be addressed if the sector began to “increase the cost of publishing problematic data”.

One potential answer, they suggest, is to publish the editor's name on a manuscript, allowing readers to go back to them for comment on the editorial process if the results fail to replicate.

jack.grove@timeshighereducation.com

Readers' comments (3)

The important question is, what are the citing articles saying? Are they picking out that the results of the original studies might be flawed, or are they just citing those papers? If the former, then it lends additional credibility to the view that citations may be a flawed measure of quality. If a paper that one cites turns out to be not replicable ex post, there isn't anything one can do about it; that falls on the journals and the referees of the original article.
I agree. An article can be cited a lot, but it may be cited because of some of its shortcomings rather than its excellent quality. This essentially risks bringing the "good" and the "mediocre" to the same level. Hardly a measure of quality...
There is a general sickness in modern science. Researchers used to look for fame by trying to solve some major, open problem. This has been replaced by the aim to get into the top journals, usually by dressing up some average work by exaggerating, lying by omission or even fabricating results. That, and advertising their work on Twitter, in conferences etc. using the same tactics. There is little time and patience for solving the major problems, while competitors are busy getting the most outrageous crap past reviewers (who are too frequently neither qualified nor interested enough to judge) in the most lauded of journals.
