Will AI liberate research from institutional bean-counting?

ChatGPT’s ability to churn out mediocre papers should lead us to reappraise how research is carried out, reported and evaluated, says Martyn Hammersley

June 22, 2023

Much current concern about the implications of large language models, such as ChatGPT, has focused on their use by students in producing essays for assessment. But attention also needs to be given to the prospect of research articles being produced by, or with the aid of, such technology. And this raises questions about the functions these articles have come to serve in the institutional conditions that prevail today in universities.

Many years ago, social scientists developed the concept of goal displacement, suggesting that organisations with initially idealistic goals later come to prioritise other concerns, such as their own survival or powerful vested interests. In his book Political Parties, published in 1911, the German-born sociologist Robert Michels famously argued that this was what had happened to the German Social Democratic Party: its original commitment to democratic ideals had been negated by an increasingly oligarchic form of internal organisation.

Somewhat later, in his 1949 book TVA and the Grass Roots, Berkeley sociologist Philip Selznick developed the same argument in relation to the Tennessee Valley Authority, established in 1933 to promote economic development in the Tennessee Valley. Here, again, the bureaucracy increasingly came to serve its own interests, as well as those of local power-holders.

Over the past few decades, displacement of goals appears to have occurred within universities too. This has taken place on many fronts, but particularly as regards the production of research. The original goal was to contribute to the body of collective knowledge, and much research still does this. But research is increasingly evaluated in terms that bear little effective relationship to that purpose.

Institutionally, the primary function of research papers now appears to be to boost universities’ research profiles (in the UK, for example, in the context of the Research Excellence Framework), as well as to enhance the prospects of individual academics getting jobs or gaining promotion. Furthermore, the value of articles is increasingly measured in terms of citation impact, either that of the articles themselves or of the journals in which they appear. Yet the relationship between these measures and an article’s contribution to knowledge is weak at best, especially outside the natural sciences.

This is an environment in which the use of AI could thrive, since it might build on existing trends. Faced with the task of producing an article to meet institutional requirements, might there be a temptation for an academic to collect information from already published sources and “synthesise” it into a “new” article? This need not involve crude plagiarism so much as “rewording” what others have written, and perhaps “borrowing” an argument structure from elsewhere. ChatGPT and its competitors would be a handy tool for this task.

Most of us would not condone this use of AI, of course, but my point is that it highlights a deeper problem. Although current institutional demands might stimulate excellent publications, they are more likely to result in “quick and dirty” products given the huge pressures on academics today. A very large number of “outputs” might already be generated in this manner, suggesting increased “productivity”. But this does not add to the body of genuine knowledge. Quite the reverse: it clutters up journals with poor-quality or even bogus material that has to be waded through to find worthwhile contributions. Any faith that peer reviewing would prevent this is surely misplaced.

Moreover, within both the social sciences and the humanities there are some strands of thinking that deny the distinction I am drawing between a concern for producing knowledge and a preoccupation with generating outputs that will meet institutional requirements or have “impact”. Critics belonging to this persuasion might suggest that any appeal to genuine knowledge is simply an ideological gloss for a mundane activity that is necessarily directed towards serving institutionalised needs and professional interests. Yet if we adopt this view, we will have no grounds for challenging the charlatanry of politicians who successfully employ barefaced lies to serve their own ends. The conclusion would have to be that they are simply better at the game than the rest of us.

It has frequently been stated that if AI can produce a passable student essay so easily, this exposes the poverty of modern university teaching. Some have even suggested that AI could replace academics unless they up their teaching game. And the situation is similar with research. If academics approach writing papers for journals in the way that many students seem to tackle writing assessed essays, interested primarily in the institutional rewards for producing them, there is no reason why academics as researchers, too, could not be replaced by robots.

This should lead us to reappraise how research is carried out, reported and evaluated. In particular, it ought to prompt us to examine the effects of current institutional requirements on this.

Perhaps, contrary to many of the forecasts about the impact of AI, its rise will allow academics to regain some principled control over their work. But that would require significant institutional change.

Martyn Hammersley is emeritus professor of educational and social research at The Open University.

Reader's comments (4)

"What is the point here? Please tell us
This reads rather like Chaos in the Brickyard, written in 1963 (https://www.science.org/doi/10.1126/science.142.3590.339.a). The stories are similar in the sense that they point to the same danger, namely the erosion of the essential value in an activity (i.e. research) when it becomes a target-driven job.
I am sure I have already read articles produced like this, but not necessarily by AI. I don't see why reviewing should not be done in the same way, cutting out the middleperson altogether. Social scientists could take a leaf out of natural scientists' books by swapping research results and arguments online, leaving commercial companies (including universities) and quangos in their own closed worlds.
This prompts three thoughts. First, ChatGPT and other AIs might be good at writing at least a first draft of review-type articles, so perhaps the AI should be credited as an author. I suspect, and hope, that the human co-author(s) would want to explain the AI's role in the production of the article. The second thought is that, as these large language models work by hoovering up lots of text from the web, they may be good at building on existing ideas and perhaps extending them a little, but they are unlikely to be able to come up with new ideas, and in the long term the development of human knowledge requires these inputs of new ideas. AI-generated research may produce more and more specialised research which may eventually only be "intelligible" to AIs - which may not be entirely negative. (The peer review system may have a similar conservative influence - resulting in research which is only intelligible to peers.) The third thought is that this danger may be mitigated to some extent by asking the AI to use ideas from other disciplines: e.g. "write an academic article on the treatment of depression in the style of a mathematician." This almost certainly would not produce anything interesting, but the principle of mixing disciplines and genres may, sometimes, yield interesting results.
