When I first started teaching, I came from the consulting industry and had no previous teaching experience. My ratings reflected this: they were in the bottom 20th percentile of all professors in the first semester I taught. In contrast, my latest rating, 4.9 out of 5 for my services management EMBA class, was the highest in the history of the programme. How was this achieved after starting from such a low base?
Seeking feedback, especially negative feedback
Soliciting and responding to feedback, and being open to criticism, were the most important factors in my development as an educator. In fact, academic research in management and organisation shows that an important trait of outstanding performers is their willingness to seek negative feedback with the objective of improving themselves.
Limits of university-wide student evaluations
The standard university-wide student evaluation system is a good start. It provides a relatively hard and objective evaluation of your teaching overall and shows how your teaching compares with your peers’. However, if you want to know exactly how and what to improve, and what students appreciate about your teaching and module, it falls short: the attribute-specific questions tend to be highly correlated (all high or all low) and provide little diagnostic insight.
This is the “halo effect” in satisfaction measurement: as my research shows, general impressions and inadequate discrimination between attributes shift all ratings in the same direction. The attribute-specific scales therefore do not give detailed insights into specific areas; they are more an overall measure of how a professor is doing.
Even the open-ended feedback students provide in the university evaluation system is mostly “top of mind” and general, such as: “The course was interesting”, “It was interactive” or “The professor made me think more critically”. Such feedback may be good enough for learning how students perceive a course, but it offers few actionable answers to specific teaching and course design questions such as:
- Which cases did students really like?
- Was this project seen as value-added?
- What exactly should I do differently next term?
How to obtain detailed and actionable student feedback
A tool I have found effective is what consumer research calls “intercept satisfaction surveys”, whereby consumers are “intercepted” right after a service transaction and asked for their perceptions and assessment of that particular experience.
I apply the same principle to my courses. I tell the class that each student will be asked once per term to provide feedback on a specific lecture, giving me a real-time evaluation of how the course is going and allowing me to make immediate adjustments where needed.
My teaching assistant (TA) administers this intercept survey. At the beginning of each class, the TA randomly picks a few students and asks them to fill in a simple survey form by the end of the class. The form, sent via email, contains only three questions, each with three pre-numbered fields:
1. What are the three things you liked best about this class?
2. What are the three things you liked least about this class?
3. What are the three most important improvements you suggest?
The TA keeps a record of who has already provided feedback so that no student is asked to do this more than once per course.
The following points are important:
- First, I explain why I solicit feedback in addition to the university-organised student evaluation exercise (that is, to get more detailed, specific and timely feedback).
- Second, I position the feedback as developmental. That is, I listen to students and seek their views on what they like or don’t like and would like to see changed, so that I can cement the strengths of a particular class and address its weaknesses.
- Third, I get my TA to email me the feedback right after the class has concluded. This helps me feel the pulse of a class and allows real-time adjustments for the next session. I let the class know when feedback has resulted in immediate changes (for example, someone talking too much, or my incorrect assumption that the class has certain prior knowledge, in which case I go through the material or ask students to read up on it).
- Finally, I specifically ask the students and TA to keep the feedback anonymous; otherwise I could take neither positive nor negative feedback at face value.
Using two student feedback tools to drive improvements
To improve teaching effectively, two types of feedback tool are needed:
1) A robust, reliable and representative overall rating that benchmarks your teaching against your peers’ and over time, typically provided by the university’s end-of-term teaching feedback surveys.
2) Detailed, qualitative feedback from a tool such as the one discussed in this article, providing excellent insights into why ratings are high or low, what can be done to improve your teaching, and what should be cemented into your course.
If done consistently, this approach will produce effective and well-informed educators who provide value to their students.
Jochen Wirtz is a professor of marketing and vice-dean of MBA programmes at the NUS Business School, National University of Singapore.