
Course evaluation forms ‘not read properly by students’

Undergraduates endorsed patently false statements in US experiment

March 8, 2016

The results of course evaluation surveys can be undermined by students’ inattentiveness while filling them in, a study suggests.

Researchers at a US university conducted an experiment in which they inserted patently false statements into end-of-module questionnaires and found that surprisingly high numbers of undergraduates answered that they were true.

More than two-thirds of students, 69 per cent, endorsed the statement that “the instructor took roll at the beginning, middle and end of every class meeting”.

Nearly a quarter, 24 per cent, agreed that “the instructor was late or absent for all class meetings”, while 28 per cent said that it was true that “the instructor never even attempted to answer any student questions related to the course”.

The experiment involved students on six psychology courses at Lander University, South Carolina.

In a follow-up survey, 113 students who were not involved in the experiment were asked how seriously they took course evaluations.

On an 11-point scale, the average response was 6.8, but only one in five students said that they took evaluations seriously all the time. More than three-quarters of respondents (76 per cent) said that they sometimes took the process seriously, but at other times they “just bubbled in answers” to finish the survey quickly.

Respondents were doubtful that the results of evaluations would be used to make decisions about promotion and retention, or to improve courses and teaching standards.

The five authors – Jonathan Bassett, Amanda Cleveland, Deborah Acorn, Marie Nix and Timothy Snyder – say that their findings “should not be interpreted as evidence that all student evaluations at all institutions are invalid due to inattentive responding”.

But the research builds on previous papers which have found that students are more likely to rate male lecturers highly, and that course grades are positively related to evaluation scores.

“The results of the present study should be taken as initial evidence that inattentive responding can potentially undermine the usefulness of student evaluations of teaching in some cases,” the paper concludes. “The challenge for future research will be to determine just how widespread rates of inattentive responding are in student evaluations of teaching at the university level.”

The researchers recommend that universities limit the number and length of surveys that students are required to complete, and demonstrate how the results are used to deliver improvements, in order to boost student attentiveness.

chris.havergal@tesglobal.com


Reader's comments (2)

This academic year has seen a rash of articles in the popular and pseudo-scientific press about the uselessness of student ratings of instruction and/or course evaluations. It’s been shown time and again that student voice matters if we ask good questions and take the time to listen. In a couple of blog posts, IDEA's research team addressed the 2014 Stark and Freishtat article, discounting, with research, many of the claims made. We invite your readers to take a look at "An Evaluation of 'An Evaluation of Course Evaluations' Part I" and "An Evaluation of 'An Evaluation of Course Evaluations' Part II" at IDEAedu.org/ideablog. Ken Ryalls, Ph.D. President, IDEA (submitted by C.Torgersen, PR Specialist, IDEA)
I find it hilarious, while also somewhat depressing, that an article showing serious flaws in student satisfaction ratings is responded to by a PR Specialist from a firm making money from these assessments.