Talking to students about AI

3 June, 2024
Socio-economic, cultural, geographic and other factors mean that some students know more about AI than others, and we can’t have an effective discussion about AI and academic integrity until we all know what we’re talking about, writes John Weldon

Uppermost in my mind as I welcomed a new batch of first-year education students was how to broach the subject of academic integrity, and in particular the use of artificial intelligence. I didn’t want anybody’s first university experience to be some middle-aged white guy telling them: “You must do…or else!” That seemed far too high school. But I did have to get across the idea that integrity is important, in life and not just in academia, and that to ignore it could invite unwelcome consequences.

So, I thought I’d start with a conversation. “What do you know about AI?” I asked.

I received a few mumbled replies about ChatGPT and a lot of blank stares. Not what I expected. Perhaps it was because the Victorian state government had banned the use of generative AI in schools until the beginning of 2024, so many of these students – fresh out of Year 12 – simply hadn’t come across it in their studies. Or perhaps it was just this bunch of students. I later asked colleagues if they’d had the same experience; some said “yes”, but many reported that their students were well informed about AI and how to use it.

From 2024, in Australia, it is mandatory for all commencing higher education students to undertake academic integrity training, usually via online modules. Academic integrity is always a hot topic in the tertiary sector, and the advent of generative AI has only made it more so.

But how do educators connect the two?

The lack of homogeneity among the student body reminded me of the aphorism, often attributed to William Gibson: “The future is here, it’s just not evenly distributed.” Socio-economic, cultural, geographic and other factors, of course, mean that some students know more about AI than others, and we can’t have an effective, inclusive university-wide discussion about AI and academic integrity until we all know what we’re talking about.

I was taken back to a similar moment in my career as an HE educator. Back in the noughties, I wrote my first online media unit, Print and Web Journalism (in retrospect, it seems quaint that print came before web in the title). I had expected a lecture hall filled with millennials and digital natives, all of whom knew much more about the subject than an old fart like me. The opposite turned out to be the case. Sure, they knew about the social media platforms they used and, yes, their thumbs were more dextrous than mine, but I had to wind back my expectations and bring them up to a speed that I, and the university as a whole, had assumed they had already attained.

As my work with this year’s education students has progressed, so too has our conversation about AI and integrity. It has become part of many of our classes: what it is, why it’s important that we know how to use it, and how to do so with integrity. They ask for rules, for dos and don’ts. Naturally, they want to know both where the boundaries are and what they can get away with.

I can’t give them anything hard and fast, and neither can the university’s academic integrity policy. It speaks of AI only in the general context of behaviour and practices that may or may not constitute breaches of academic integrity, without referring to specific AI applications whose use may or may not be permissible in particular circumstances.

Instead, we worked out a deal. I told them that, as their educator, I was interested in what they were learning, not what they could print out based on a prompt. I invited them to explore AI applications and to talk to me about which ones they might want to use. I also made it clear that I reserved the right to ask them to defend their assessment pieces verbally if I thought they were submitting work that wasn’t theirs.

They agreed to this, but they also asked whether it was acceptable to use apps that improve their expression, such as Grammarly, or that help clarify their ideas. International students wanted to know about translation programmes. Others wanted to know if they could use ChatGPT to paraphrase or generate ideas. We discussed if and how these might be allowable, and how to reference and acknowledge their use.

My students and I may not have solved the problem, but we all learned more about AI and integrity, and we talked more deeply about assessment – how to approach it, the point of it (from educator and student perspectives) and what to do if we’re unsure or lost: talk about it. We’re all now more able to ask informed questions and to explore the role AI will play in our study and work lives.

There is a clamouring for answers to the issue of AI in HE. Some are calling for the return of invigilated exams across the board; others mourn the death of the essay. And that’s understandable. Assessment is the currency of our trade in HE. Our power to assess with accuracy and integrity determines whether or not we confer degrees on our students, so naturally, when that power is threatened, we want to sort out the issue quickly.

But we must resist the temptation to reach for black-and-white quick fixes because, as sure as eggs are eggs, AI will reinvent its place in our psyches and our work and study lives many times yet before it finds any kind of equilibrium. In the meantime, it’s surely better to keep the conversation going rather than close it off with regulation.

John Weldon is associate professor in the First Year College at Victoria University, Australia.

