ChatGPT and the future of university assessment

By Miranda Prynne, 9 March, 2023
Artificial intelligence-powered tools like ChatGPT are creating a much-needed opportunity to reimagine the role of education in the 21st century, says Alex Sims

ChatGPT is the focus of much discussion, excitement and fear across tertiary education. While many of us in universities relish the opportunities that such artificial intelligence (AI) writing tools enable, others, accustomed to doing things in certain ways, find it difficult to embrace the rapid change these technologies bring.

ChatGPT has garnered considerable media attention in the past few months for its ability to answer questions, provide advice on almost any topic in fluent, well-written English, write computer code and perform various other tasks. 

The chatbot, launched in November 2022, has been tested using a broad range of exam questions, including law, medical and business school exams, and it passed them. Some of the answers provided by ChatGPT are nothing short of magic, and I have seen experts rendered speechless by them. Yet these uncanny answers were pure luck. ChatGPT does not know whether an answer is correct; it simply predicts the most plausible sequence of words based on patterns in its training data. As a result, many answers are not 100 per cent accurate and some are spectacularly wrong. A human is needed to judge the accuracy of its answers.

The reaction of universities to ChatGPT and similar AI tools has been mixed, falling into three main types: prevention, banning and embracing.

First, to prevent the use of AI tools, some universities are falling back on in-person exams featuring old-fashioned pen and paper. However, tests and exams have never been ideal assessment methods. They do not indicate whether a person can work well in a team or present and communicate information verbally, and they disadvantage those with debilitating exam anxiety. Indeed, to mitigate these limitations, many courses have reduced the percentage of course marks allocated to tests and exams.

In addition, preventing the use of ChatGPT would work only if all of a course’s assessments were completed in person. Ensuring that no student could use ChatGPT would require increasing the percentage of marks allocated to old-school tests and exams, which would be a retrograde step.

Second, some tertiary providers have explored banning ChatGPT and other AI tools, with enforcement supported by AI detection software. These detectors are not 100 per cent accurate and can be worked around. My concern is that students will spend more time attempting to circumvent the system than learning the content.

Banning or preventing the use of AI tools for all, or most, assessments is counterproductive. People will not, for the foreseeable future, be in competition with AI. Instead, they will be competing with people who are adept at using such tools. Indeed, people unable to use AI tools may become unemployable in many professional settings because they will be considered too inefficient and slow.

The key to successfully integrating AI into education lies in understanding that AI tools are not a replacement for human expertise but rather tools that can augment and enhance it.

Universities need to teach students how to use these tools effectively, providing training and guidance on how they can enhance students’ learning and prepare them for the workforce.

We have adapted to new tools in the past. For example, fears that electronic spreadsheets would put accountants out of work did not materialise because the accounting profession pivoted. Similarly, AI tools are creating a much-needed opportunity to reimagine the role of education in the 21st century.

So where does this leave us with the vexed question of assessment? How do we assess students’ knowledge? For most courses, some element of in-person evaluation, whether written, oral or both, is necessary. The remaining assessments require rethinking, and what works for one discipline or course may not work for others.

One idea is that instead of the traditional approach of providing a question to which the student writes an answer, both the question and an answer could be given. Students could then critique the pair, explaining what they think is correct or incorrect and why.

Alternatively, a student could be assessed on the nature and quality of the prompts they give an AI tool. This may increase the time required for marking, but it will develop students’ skills in using the tools and provide a good way of assessing their knowledge of the subject matter at hand.

As with most technology, the challenge is not the technology itself but rather our human emotions, experience and reaction to it.  

Alex Sims is an associate professor in the University of Auckland’s department of commercial law and an associate at the UCL Centre for Blockchain Technologies.


Comment

amyjackie888, 1 year 8 months ago:
But can’t the AI perform the critique task on its own work? The author seems to be going round in a circle, and it remains to be seen how one can break out of it. Secondly, the author rejects the so-called “old-fashioned” assessments too readily, without considering possible adjustments and adaptations.