It is tempting, faced with breathless hype and apocalyptic scenarios about the impact of Generative AI on higher education, to keep one’s head down and hope it goes away. Alternatively, we might try to exclude its influence, whether through blanket bans and academic misconduct processes, questionable GenAI detection programs or wholesale changes to assessment practices, such as reversion to in-person unseen exams.
Neither response seems satisfactory. Clearly, many students are using these tools and will continue to do so, but not always in the ways we expect. Recent surveys suggest that only a minority would consider using GenAI to produce coursework (and perhaps still fewer in humanities disciplines). Much more worrying is the substantial proportion who trust GenAI outputs for basic research, for summarising articles and debates, and for feedback on their work. We should worry less about GenAI’s impact on assessment and much more about its impact on student skills and understanding. Further, students themselves are increasingly anxious about how it could affect their prospects, and are seeking training and guidance.
The good news is that a considered response that involves introducing GenAI into our teaching is also probably our most effective means of discouraging students from relying on it. Rather than trying vainly to exclude the influence of GenAI, here are three suggestions to promote critical engagement drawn from a project I’ve been running with history and ancient history undergraduates this year.
1. Familiarise yourself
It’s an eye-opening experience to try out a program like ChatGPT, not for a joke task like “Write Macbeth in limericks”, but to get a serious account of a topic in your field of expertise: to marvel at both the fluency of the language and the banality of the content, even when there are no obvious inaccuracies or invented bibliography entries.
There is now plenty of good, clear advice on GenAI in general, but you’ll get a better idea of its capabilities and limitations – in other words, what your students might be drawing from it – by looking at concrete examples of its output. You can easily create your own account on ChatGPT and ask it to write 200 words on a specialised topic, even specifying that it should write as an expert and give supporting references. Comparing the results with your knowledge of the topic, it’s easier to get a sense of how GenAI “averages” relevant material from its training data and presents it in an authoritative manner.
Given the environmental costs of using GenAI, however, there’s a lot to be said for a collective approach: discipline-based sessions in which team members analyse and discuss examples of GenAI output relevant to their field, ideally with a few more experienced colleagues present to lead the discussion. The examples and conclusions can be preserved for future reference.
2. Show rather than tell
Blanket assertions about GenAI’s flaws and limitations, such as the statements found in student handbooks, are likely to have limited influence when students are hearing positive and enthusiastic accounts of it from friends, social media and even other parts of the university. It’s more effective to incorporate the analysis of concrete examples into our teaching.
For example, a seminar where students are expected to read a key article in advance can include a discussion of an AI summary (from a tool such as Scholarcy) or a GenAI summary of the same publication. This opens up debate about identifying the core argument of the piece and about what is and isn’t essential to its development; it encourages students to compare and justify different interpretations, developing their critical skills. Subtly, it emphasises that reading is constructive engagement rather than the extraction of nuggets of content, by focusing critique on the AI output rather than on the limitations of students’ own readings.
Similarly, presenting a short GenAI account of a topic in class and asking students to evaluate it encourages the application of broader subject knowledge to specific material. This works well as a revision and consolidation exercise, so is relevant even for those who don’t feel they want or need to engage with AI. It also opens up the opportunity for a wider discussion of the key characteristics and limitations of GenAI output.
3. Incorporate GenAI into assessment
If your module assessment includes some source analysis, it’s easy to include a GenAI passage, labelled as such with the original prompt. Besides emphasising the nature and limitations of such outputs, this is a useful test of subject knowledge and critical analysis. In my pilot, this was a popular option; the moderator noted that “clearly there is utility in developing critical skills – although some of the answers already seem quite rote; they know what is wrong with GenAI and say it quite generically”. There are worse outcomes…
For coursework, students might generate a short passage on a relevant topic using GenAI and refine it with multiple prompts, then write a critical analysis of both the output and the process. It’s reasonable to class this as a kind of essay, rather than needing formal approval for changing the mode of assessment. It works as an option for keen students rather than being mandatory. There is no need for special assessment criteria; “knowledge and understanding” and the like are evaluated in terms both of the module subject matter and of GenAI.
Many students are currently using GenAI. Our best option is to support them in engaging with it critically, and to develop their subject skills at the same time.
Neville Morley is professor of classics and ancient history at the University of Exeter.