I used to allow students to use artificial intelligence to help them write their papers. I spent a lot of class time explaining how AI works, how they can use it to help them write if they wish, and how I expect them to document its use if they draw on it. I did this because students are likely going to have to know how to use it, and so we should help them understand how to do so effectively and ethically.
But I now think that allowing AI use for student writing assignments is a mistake – at least in lower-level classes. It doesn’t help students learn what I want them to learn through writing papers for class. So, I have banned its use.
The ban is simple: students are not allowed to use AI in writing their papers. They cannot use it to generate ideas, to “improve” their writing or (obviously) to write their papers for them.
Eliminating AI use is also simple in practice. Students do much of the initial work on their papers in class. They read, in class, short academic works. They identify the conclusions being argued for and the reasons given for them. They then draft by hand (without access to electronic devices) an exegesis of the author’s position. We reconvene to discuss the view, and in another class period students write their initial take on things. They complete the paper outside class, handing in their initial handwritten work to accompany their typed papers.
The in-class work is not presented to the students as a way to ensure that they’re not using AI. That would undermine the trust needed between students and instructors for a class to be effective. Rather, this work is done in class, without electronics, to ensure students can work in a distraction-free environment suited for careful and sustained thought.
Why handwriting rather than typing? Because writing by hand slows things down and encourages thought.
Teaching text understanding without AI
Just as the ban is simple, so is the reason for it: I want students to learn how to engage with arguments.
To do this, they need to read and understand the author’s conclusion. They need to understand the author’s arguments for that view. They need to think about whether the premises of those arguments support the conclusion – and, if they do not, whether they could be modified to do so or whether the argument needs to be abandoned. If the premises do support the conclusion, they need to think about whether they’d also support a counterintuitive conclusion – and, if so, whether this shows that some of the premises should be rejected.
Could students use AI to do some of this? Sure. They could, for example, ask AI to outline the philosopher Peter Singer’s argument for famine relief. AI does a nice job of this. But if students use AI in this way, they won’t develop the skill of working out what the author is arguing for and how they do that. I don’t want students to be able to read a digested summary of a paper. I want them to develop the skill to produce such a summary themselves, identifying the main points of an argument and working out how they support the conclusion.
Could they use AI to generate objections to the arguments they read? Of course. AI does a good job of summarising objections to Singer’s view. But I don’t want students to parrot others’ objections. I want them to think of objections themselves.
Could AI be useful for them in organising their exegesis of others’ views and their criticisms of them? Yes. But, again, part of what I want my students to learn is precisely what this outsources to the AI: how to organise their thoughts and communicate them effectively.
Having students start their essays in class, and by hand, helps them work out their views and learn how to convey them effectively. As they work through the process of writing their papers, they often realise that their thoughts aren’t as well developed as they had believed. These are the skills I want them to master.
When students resist an AI-free writing process
Some students are initially resistant to what they see as a needlessly time-consuming approach to writing. Used to skimming texts and banging out a fast 1,000 words, they dislike having to slow down. And when they slow down, some discover that they are less able to engage with texts than they believed.
This process reveals weaknesses that students did not know they had. But it has also led some students to become frustrated at what they see as pedantry on my part. Isn’t their exegesis “good enough” if it “captures the gist” of what was written? A more challenging question follows: if students can succeed without engaging closely with the texts they read, what does that say about the value of such engagement? By implication, it casts doubt on the value of a liberal arts education itself. If students can succeed in their chosen careers without being able to critically engage with texts, it is hard to justify the claim that it is important for them to develop that skill. The prevalence of AI is raising questions not just about its appropriate use in the classroom but about whether some of those classrooms matter at all.
Other students, however, have expressed appreciation for this slower, more careful analogue approach. One told me that my classes were the only times in his college career he’d felt “like a scholar”; a senior told me she was proud of having finally grappled with a problem and written something that was entirely hers. Several have told me this approach got them to think about ideas rather than grades.
So, goodbye, AI.
James Stacey Taylor is a professor of philosophy in the department of philosophy, religion and classical studies in the School of Humanities and Social Sciences at The College of New Jersey.