
First AI ethics policy unveiled by Cambridge University Press

New guidelines on the use of ChatGPT follow plagiarism concerns and authorship controversies caused by the rise of generative AI

March 14, 2023

A leading university press has unveiled its first artificial intelligence (AI) ethics policy, which will require authors to declare any use of ChatGPT and other generative AI tools.

Under the new policy, published by Cambridge University Press (CUP) on 14 March, researchers will also be banned from treating AI as an “author” of academic papers and books, following recent controversies in which ChatGPT was credited as a co-author in several journals.

The rules from CUP, which publishes about 400 journals and 1,500 monographs a year, also seek to clarify grey areas where text generation by an AI bot has led to plagiarism, sometimes unwittingly. Authors will be “accountable for the accuracy, integrity and originality of their research papers, including for any use of AI”, the new guidelines explain.

“Scholars have been told the work must be the author’s own, and they must not present others’ ideas, data, words or other material without adequate citation and transparent referencing,” they add.

Mandy Hill, managing director for academic at CUP, said the AI ethics policy was designed to give confidence to researchers who wished to use ChatGPT and other AI tools.

“We believe academic authors, peer reviewers and editors should be free to use emerging technologies as they see fit within appropriate guidelines, just as they do with other research tools,” said Ms Hill.

“Like our academic community, we are approaching this new technology with a spirit of critical engagement. In prioritising transparency, accountability, accuracy and originality, we see as much continuity as change in the use of generative AI for research,” she added, stating that the new policy aims to “help the thousands of researchers we publish each year, and their many readers. We will continue to work with them as we navigate the potential biases, flaws and compelling opportunities of AI for research.”

The guidelines were welcomed by R. Michael Alvarez, professor of political and computational social science at the California Institute of Technology, who uses large language models to detect online harassment, trolling and abusive behaviour on social media platforms and in video games such as Call of Duty, although he said further dialogue on their use was needed.

The rise of generative AI “introduces many issues for academic researchers and educators – I anticipate [we will be navigating] the opportunities and pitfalls presented by generative AI for academic publishing for many years to come”, said Professor Alvarez, co-editor of the CUP title Quantitative and Computational Methods for Social Science.

jack.grove@timeshighereducation.com
