
‘ChatGPT-generated reading list’ sparks AI peer review debate

Social scientist sees hand of ChatGPT in list of non-existent papers cited in peer reviewer’s rejection

April 5, 2023
A man in a robot costume performing
Source: Getty

Fears over ChatGPT’s growing role in peer review have been raised after a Dutch researcher claimed that a reviewer who rejected his paper recommended a handful of fictitious publications invented by the AI chatbot.

Robin Bauwens, assistant professor of leadership and human resources management at Tilburg University, told Times Higher Education that he was perplexed when a reviewer at an Emerald Publishing journal suggested he familiarise himself with several literature reviews in his field by Dutch academics unknown to him.

“We were baffled at not knowing four reviews in our own field – the Dutch names also suggested peers working close by in a related field…[whom], being employed by a Dutch university, I felt I should have known,” explained Dr Bauwens.

“We got a strange feeling and checked GPT-2, which confirmed that these suggestions were AI-generated fakes,” said the social scientist of the apparently fictitious reading list presented to him.


The episode is likely to fuel concerns about the emerging role of AI in peer review, including whether reviewers will seek to outsource decisions to the emerging technology. Many researchers have already reported how requests to ChatGPT for recommended reading in their fields often lead to the creation of nonsense reading lists filled with fictional authors, journals and publications.

Dr Bauwens said the bizarre suggested reading from the reviewer was surprising because he had received “constructive critical responses from this reviewer and the editor” on the paper across three previous rounds of review.


The last round of review was, however, “a bit less constructive, but still a genuine reflection of someone who has read the paper”, said Dr Bauwens, who was still considering how to complain to the journal and the publisher about this episode.

Emerald Publishing told THE that it had recently updated its policies to clarify that large language models (LLMs) could not be credited as co-authors, and that its existing policy “extends to the use of LLMs within the peer review process”.

“As such, ChatGPT and other AI tools should not be utilised by reviewers of papers submitted to journals published by Emerald. As with authorship, AI tools/LLMs should not replace the peer review process that relies on human subject matter expertise and critical appraisal,” a spokeswoman said.

Dr Bauwens said: “Our feeling is that sending out deliberately false information to authors as a reviewer constitutes a form of sabotage, which at least requires some form of action from the side of the journal – and a verbal warning is OK, in our view.” He added that he imagined AI-influenced rejection “will probably happen more often in the future”.


Dr Bauwens’ experience, which he revealed on Twitter, prompted other researchers to share their stories on how AI was influencing the editorial process in academic publishing. Ben Maier, a German postdoctoral researcher in infectious diseases based at Humboldt University in Berlin, said that he had been forced to withdraw a submitted paper from a journal after an editor suggested text should be fed into ChatGPT to “make it clearer”.

The resulting paragraphs “sounded really bad”, said Dr Maier, who argued that the AI-generated revisions were “overstepping boundaries” in accepted peer review practice.

Such incidents are likely to focus attention on the increasing role of AI in peer review. The technology is already used to check tables, reformat citations, detect plagiarism and perform other time-consuming tasks within the editorial process, although the outsourcing of editorial decisions to AI has, until now, been rare.

However, last month the Committee on Publication Ethics (Cope) published guidance insisting that authors should declare the use of AI in scholarly papers, adding that ChatGPT and other AI chatbots should not be listed as co-authors. This came after a number of such controversial incidents, suggesting that new ethical guidelines for publishers could be next.


jack.grove@timeshighereducation.com
