How to systematically improve your teaching using student feedback

Jochen Wirtz introduces a simple but effective tool for gathering student feedback that will help educators to cement strengths and reduce weaknesses in their teaching

National University of Singapore
24 Jan 2022

When I first started teaching, I came from the consulting industry and had no previous teaching experience. My ratings reflected this: they were in the 80th percentile of all professors in the first semester I taught. In contrast, my latest rating was 4.9 out of 5 for my services management EMBA class and it was the highest rating in the history of this programme. How was this achieved after starting from such a low base?

Seeking feedback, especially negative feedback

Soliciting and responding to feedback, and being open to criticism, were the most important factors in my development as an educator. In fact, academic research in management and organisation shows that an important trait of outstanding performers is their willingness to seek negative feedback with the objective of improving themselves.

Limits of university-wide student evaluations

The standard university-wide student evaluation system is a good start. It provides a relatively hard and objective evaluation of your teaching overall and shows how your teaching standard compares with your peers'. However, if you want to know exactly how and what to improve and what students appreciate about your teaching and module, the attribute-specific questions tend to be highly correlated (either all high or all low) and provide little diagnostic insight.

This is known as the halo effect in satisfaction measurement: general impressions and inadequate discrimination between individual attributes shift all ratings in the same direction. So the attribute-specific scales generally do not give detailed insights on specific areas but are more an overall measure of how a professor is doing.

Even open-ended feedback provided by students in the university evaluation system is mostly "top of mind" and general, such as: "The course was interesting", "It was interactive" or "The professor made me think more critically". Such feedback may be good enough for learning how students perceive a course, but it provides little actionable input on specific teaching and course design questions such as:

  • Which cases did students really like?
  • Was this project seen as value-added?
  • What exactly should I do differently next term?

How to obtain detailed and actionable student feedback

A tool I found effective is the intercept survey, whereby consumers are "intercepted" right after a service transaction and asked for their perceptions and assessment of that particular experience.

I apply the same principle to my courses. I tell the class that each student will be asked once per term to provide feedback on a specific lecture, giving me a real-time evaluation of how the course is going and allowing me to make immediate adjustments where needed.

My teaching assistant (TA) administers this intercept survey and randomly picks a few students at the beginning of each class and asks them to fill in a simple survey form by the end of the class. This form, sent via email, contains only three questions, each with three pre-numbered fields. The questions are:

1. What are the three things you liked best about this class?

2. What are the three things you liked least about this class?

3. What are the three most important improvements you suggest?

The TA keeps track of who has already provided feedback so that no student is asked to do this more than once per course.
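To make the rotation concrete, here is a minimal sketch (in Python, not the author's actual tool) of how a TA might pick respondents at random while making sure nobody is surveyed twice in a term. The roster, email addresses, questions and sample size shown are illustrative assumptions.

```python
# Illustrative sketch: rotate an intercept survey across a class roster so
# that each student is asked at most once per term. Names are hypothetical.
import random

ROSTER = ["alice@u.example.edu", "ben@u.example.edu",
          "chen@u.example.edu", "dana@u.example.edu"]

QUESTIONS = [
    "What are the three things you liked best about this class?",
    "What are the three things you liked least about this class?",
    "What are the three most important improvements you suggest?",
]

already_asked = set()  # students who have given feedback this term


def pick_respondents(n_per_class: int = 2) -> list[str]:
    """Randomly pick students who have not yet been surveyed this term."""
    remaining = [s for s in ROSTER if s not in already_asked]
    chosen = random.sample(remaining, k=min(n_per_class, len(remaining)))
    already_asked.update(chosen)
    return chosen


if __name__ == "__main__":
    # In practice the TA would email the form; here we just print it.
    for student in pick_respondents():
        print(f"Survey for {student}:")
        for i, question in enumerate(QUESTIONS, start=1):
            print(f"  {i}. {question}")
```

Tracking respondents in a simple set is enough for a single course; a spreadsheet or the survey tool in a learning management system works just as well, since the point is simply random selection without repetition.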

The following points are important:

  • First, I explain why I solicit feedback in addition to the university-organised student evaluation exercise (that is, more detailed, specific and timely feedback).
  • Second, I position the feedback as developmental. That is, I listen to students and seek their views on what they like or don't like and would like to see changed, so I can cement the strengths of a particular class and address its weaknesses.
  • Third, I get my TA to email me the feedback right after the class has been concluded. It helps me to feel the pulse of a class and allows real-time adjustments for the next class. I let the class know if there was feedback that resulted in immediate changes (for example, someone talking too much or an incorrect assumption that the class has certain prior knowledge, meaning I need to go through this material or ask students to read up on it).
  • Finally, I specifically ask the students and TA to keep the feedback anonymous; otherwise I could take neither positive nor negative feedback at face value.

Using two student feedback tools to drive improvements

To effectively improve teaching, two types of feedback tools are needed:

1) A robust, reliable and representative overall rating that benchmarks your teaching against peers' and over time. This is typically provided by the university's end-of-term teaching feedback surveys.

2) Detailed, qualitative feedback from a tool, such as the one discussed in this article, that provides insights into why ratings are high or low, what can be done to improve your teaching and what should be cemented into your course.

If done consistently, this approach will produce effective and well-informed educators who provide value to their students.  

Jochen Wirtz is a professor of marketing and vice-dean of MBA programmes at the NUS Business School, National University of Singapore.
