AI has been trumpeted as our saviour, but it’s complicated

By Harin Sellahewa, 23 April 2021
Time saved by lecturers on marking assignments could indeed be used to enrich teaching, but unfortunately many silver linings have a cloud, says Harin Sellahewa

Artificial intelligence is full of potential and has been trumpeted as our saviour, the way forward, the answer to all the world’s ills and the future of learning. But this is not the true picture. Yes, AI has much to offer in education, but it’s not the be-all and end-all.

Tremendous advances have been made in the field thanks to new algorithms, the availability of massive datasets and huge computing power at low cost. True, the time saved by lecturers on marking assignments could be used to enrich teaching, but unfortunately many silver linings have a cloud.

One downside is that lecturers will not get to know their students as thoroughly as is desirable if they never do any marking. Some middle ground is needed – AI could mark some assignments, but for a lecturer to accurately assess and understand a student’s ability, it is vital that the lecturer sees a significant amount of a student’s output.

When the Institute for Ethical AI in Education (IEAIED) set up focus groups to consider students’ views on the use of AI, several students were indeed concerned that lecturers wouldn’t get to know them well enough if their work was being marked by AI algorithms. The IEAIED has drawn up guidelines for universities and schools to consider when introducing AI, so that ethical issues do not offset the benefits.

Another concern that arose in focus groups was how to cater for students who are struggling with the course. While it is great news that the most able can benefit from a system whereby students learn at their own pace using a personalised AI programme, does that mean that those who struggle get the lion’s share of a lecturer’s time? What is the downside of this, if any? It can’t simply be the case that lecturers’ teaching time is axed because the AI is doing their work. Those who struggle will still need plenty of explanation and interaction from a lecturer, but others also need to be intellectually challenged to achieve their full potential.

At the University of Buckingham, we are already looking at ways of using AI to predict student outcomes. The aim is to identify at-risk or borderline students for early intervention. We used past students’ performance data, such as assignment and exam marks at each stage of their studies, to train machine-learning models to predict students’ final degree outcomes.
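
To make the general approach concrete: the article does not say which features, models or tools were used at Buckingham, so the following is only a minimal sketch in Python with scikit-learn, in which the file names, column names and risk threshold are all hypothetical.

```python
# A minimal sketch of an early-warning model of the kind described above.
# The article does not specify Buckingham's actual features, models or
# tooling; the file names, columns and threshold here are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Past students: first-stage marks plus the known final outcome
# (1 = good degree outcome, 0 = at risk / borderline).
past = pd.read_csv("past_student_performance.csv")
features = ["stage1_assignment_avg", "stage1_exam_avg"]
X, y = past[features], past["final_outcome"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Held-out performance: precision and recall matter more than raw
# accuracy, since missing an at-risk student is the costly error.
print(classification_report(y_test, model.predict(X_test)))

# Flag current students whose predicted probability of a good outcome
# falls below a chosen threshold, so staff can intervene early.
current = pd.read_csv("current_student_marks.csv")
p_good = model.predict_proba(current[features])[:, 1]
at_risk = current[p_good < 0.5]
print(at_risk[["student_id"]])
```

Even a simple model like this surfaces the questions that follow: the predictions are only as good as the historical data, and the threshold that marks a student as “at risk” is a policy choice, not a technical one.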

At first sight, the capacity of AI to predict a student’s final grades from the first set of exams seems like a panacea. But considering the practicalities throws up all sorts of difficult questions: if AI predicts a student will fail or drop out, how should that be handled? Will a university’s action be driven by the need to put bums on seats, or will it act in the best interest of the student? Would seeing such early predictions motivate students to do better, or demotivate them into thinking higher education is not for them? And what about students who are not deemed at risk or borderline – will they get no attention?

AI can certainly harvest a plethora of data on all aspects of students’ lives, but this raises another issue. Gaining information about how a student studies and the number of times they visit the library might help AI algorithms predict their performance, but surely the learner is entitled to know which data are being measured and how they are used.

Attending lectures and tutorials and spending time in the library are not the only learning activities. Students also use YouTube and other online resources and communities to learn. How would AI account for these data points?

Another less-discussed aspect is data ownership and who can profit from that data. Unlike internet search providers and social media platforms, which offer free services in exchange for the data they profit from, education costs students money. So, should students be entitled to a payment from universities and edtech companies that profit from using their data?

The fundamental question, though, is whether it is right to use past students’ data to predict the performance of new students. We are all far too familiar with last year’s A-level fiasco, when grades estimated partly from schools’ historical results downgraded many students and had to be abandoned.

Clearly, AI can be a force for good in higher education. It can be the answer to many a lecturer’s prayers in terms of reducing onerous tasks. But it cannot be introduced at any cost. All sides must be considered before AI is brought in, so that the learning benefits it can bring are not cancelled out by the ethical problems its use entails.

Dr Harin Sellahewa is the dean of the Faculty of Computing, Law and Psychology at the University of Buckingham. His research interests are in artificial intelligence, computer vision and machine learning.

Comment
snagdoodle, 1 year 5 months ago:
Are we really ready to cede our roles as expert opinionators to a machine just because someone says it's more efficient?