Examinations are not popular things. Students would, as a rule, rather not have them. And in this world of blended and online learning, they can admittedly feel somewhat archaic. In what other circumstance, for example, is anyone required to write by hand for two hours or more at a time?
One of the most significant adaptations during the academic years 2019-21 was the necessary move away from invigilated in-person examinations. Even with "social distancing" between desks, students simply could not be required to sit in exam halls. So examinations were converted en masse into assessment tasks that could be done, and submitted, remotely.
A likely long-term outcome of this is that we will see fewer in-person, invigilated examinations in higher education in the future. There are costs and disincentives involved in change, not least regarding "student satisfaction", and some of these adaptations will probably not be reversed.
We hope that doesn’t happen. Invigilated exams have an important role to play in the assessment diet of many courses. Let’s take a look at what that role can be.
At the simplest level, in-person examinations are one of the best ways we have of ensuring that the work submitted is genuinely the work of the student themselves. Of course, there are ways and means of subverting the system (many of which appear so onerous to engineer that one is left asking: “Why didn’t they just spend that time revising?”). By and large, however, course teams can be assured that they really are assessing the students they think they’re assessing.
One of the major threats to the integrity of uninvigilated, remote assessments is the pervasiveness of essay mills. Yes, countries such as the UK are finally looking to outlaw them. No, that won’t solve the problem. It’s very hard to know what will, when so many sites will continue to operate outwith the law.
Perhaps more important than even “integrity”, though, is the pedagogic rationale for unseen, invigilated examinations. We have inserted the qualifier “unseen” because the mysterious appeal of “seen” exams (whereby students are given advance notice of the questions, and which therefore have none of the benefits discussed in this piece) shouldn’t distract us here. The rationale, in our view, is twofold.
First, an unseen exam is an effective way of estimating a student's overall knowledge and understanding in an area of study – by, quite simply, sampling that knowledge and understanding. This strategy can be effective only if the student does not have advance knowledge of which aspects of the curriculum will be sampled. If the student can show understanding of, say, two from 10 unannounced topic areas, then we can assume they would be able to show equivalent levels of understanding for the eight topic areas that remain untested.
Professional and statutory bodies, which require students to demonstrate appropriate levels of competence across a domain of study when it is not possible to test every part of that domain, are often particularly keen on this form of assessment.
Second, people often need to be able to show knowledge and understanding of an area without looking stuff up in a book. Therefore, it is legitimate to test the extent to which a student is capable of doing this. Exams provide an estimate of the student’s ability to marshal a body of knowledge “at run time” in the absence of support from sources of information.
One traditional justification for exams that we are not convinced by is the idea that they’re “a good test of how a student works under pressure”. If there is a reason for testing how well a person works under pressure, there are probably better ways of doing this than a two-hour exam requiring the writing of three essays.
This isn't to say that the "working under pressure" justification shouldn't be used. If assessing that aspect of performance, alongside knowledge and understanding, is explicitly required, then so be it. But it shouldn't be assumed to be a pedagogic justification of exams in general.
In arguing for the benefits of exams, we’re not proposing a return to days long gone when degree outcomes were determined by performance on “finals”. We simply see a role for invigilated examinations in a sensibly varied diet of assessment.
The overall assessment strategy of a course should enable students with different skill sets to have a fair chance of success. One doesn’t want a degree outcome to say more about a student’s capacity (or otherwise) to do exams than it says about their command of a subject area.
An assessment strategy should also ensure that it assesses all the diverse types of knowledge and skill acquisition that one would expect from any given programme of study: oral presentation skills, laboratory or clinical competence, report construction and so forth. Seen within that broad context, invigilated exams should continue to have their place.
Andy Grayson is an associate professor in psychology at Nottingham Trent University. He has worked in higher education for more than 30 years and provides leadership on learning, teaching and assessment.
Richard Trigg is a principal lecturer in psychology at Nottingham Trent University. He has worked in higher education for 18 years and leads on postgraduate taught provision within the department.