Deep learning has reached a level of sophistication at which platforms can produce human-like text instantly. The latest craze is around the mind-blowing capabilities of ChatGPT, built on OpenAI's GPT-3.5 model. With its new user-friendly interface, the AI chatbot, launched on 30 November, attracted more than 1 million users within five days.
There are a growing number of AI-enabled content writers – Jasper, CopyAI and Rytr, to name a few. So, what sets ChatGPT apart?
It is prolific, highly effective and still learning. And it is free; most comparable platforms cost money that students do not have. The new interface means everyone can use it. Putting AI-enabled content generation into a chat format is an innovation that makes it interactive, accessible and inviting.
ChatGPT, powered by GPT-3.5, can produce very good text in almost any structured form: poems, essays, blog posts, marketing materials and code in most programming languages.
For example, I gave it the prompt:
“You are a journalist. Explain what ChatGPT is to THE Campus readers. Explain what it means for higher education.” I received the following text in about 30 seconds.
ChatGPT is a new technology developed by OpenAI, a research laboratory, which uses artificial intelligence (AI) to enable natural-language conversations. It works by using a deep learning model which is trained on large datasets of human conversations. This model is then used to generate natural-sounding responses to questions posed by users. ChatGPT has the potential to revolutionize the way we communicate with computers, and it could have a big impact on higher education. For example, ChatGPT could be used to create virtual tutors, to interact with students in lectures, and to help answer questions about course material. It could also be used to create virtual research assistants, allowing students to ask questions and receive answers quickly and accurately. In addition, ChatGPT could enable smarter conversations between teachers, students, and researchers, allowing them to collaborate more effectively and efficiently.
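For readers who want to reproduce this kind of experiment outside the chat interface, a minimal sketch using OpenAI's Python library is below. It is illustrative only: the model name "gpt-3.5-turbo" and the setup are my assumptions rather than part of the demonstration above, so substitute whichever GPT-3.5 variant your account exposes.

```python
# A minimal, illustrative sketch using OpenAI's Python library (v1+).
# Assumptions: an API key is set in the OPENAI_API_KEY environment
# variable, and the account has access to a GPT-3.5 model; the model
# name below is a placeholder for whichever variant you can use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": (
            "You are a journalist. Explain what ChatGPT is to THE Campus "
            "readers. Explain what it means for higher education."
        ),
    }],
)

# Print the generated article-style answer.
print(response.choices[0].message.content)
```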
There are, of course, naysayers who claim this is not a big deal. They point out that AI-generated text is not perfect, that it does not care about accuracy, that it lies: for example, it produces fake citations and reproduces biases that exist in the literature.
But this is a dynamic, developing tool that is learning and improving all the time. ChatGPT marks the next stage for a new world of non-human-generated content. Debating the current capabilities and viability of GPT-3.5 (or even the hotly anticipated GPT-4, expected in 2023) is not where we should focus our energies.
It is the broader implications for higher education that we need to confront. ChatGPT means universities can no longer look the other way or take a band-aid approach to AI writers. Big changes are needed, fast.
We need to embrace these tools and integrate them into pedagogies and policies. Lockdown browsers, strict dismissal policies and bans on these platforms are not a sustainable way forward.
Academic integrity
Submitting a paper written by GPT-3.5 is different from hiring a ghostwriter because it is free, instant and undetectable by plagiarism software. Detecting work generated by the AI is nearly impossible, and even if you, the marker or instructor, suspect something, there is no way to prove it. There are no viable countermeasures available.
It is likely that students are already being assessed on work that AI can do rather than on what they themselves can do. We do not know whether they are learning or not.
There is a risk that efforts to design more inclusive, flexible and authentic assessments could be rolled back as part of knee-jerk administrative responses to students' use of this software.
If universities want to stay true to their missions of equity, inclusion and access, then we need to keep and develop these alternative assessments. The task now is to design assessment that incorporates AI-generated text, not least because, upon graduation, students will be using this technology in the workplace.
Assessing process rather than outcome
Pedagogy and assessment need to change fundamentally. Lucinda McKnight has shared some great ideas on THE Campus for how to integrate AI writers into higher education. Assessing only a completed product is no longer viable. Assessment needs to shift to process. This has always been the case, but ChatGPT is forcing the issue. Scaffolding in the skills and competencies associated with writing, producing and creating is the way forward.
A sample class activity
Take a given week’s assigned reading. Ask students to discuss it in small groups for five minutes (this works with 10 students or 600, online or face-to-face).
Then introduce them to OpenAI’s GPT-3.5.
Break students into groups of three and invite them to plug the reading’s research question into GPT-3.5 and let it generate an alternative essay. Ask the students to assess the writing in line with the course learning objectives.
They can compare the assigned reading with the AI-generated content, which is a great way to explore nuance. This can be done as an assessment, but it needs to be closely aligned with learning objectives such as evaluation of evidence, identification of assumptions and review of methodology (or the lack thereof).
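Instructors who prefer to prepare the AI-generated essays before class, rather than generate them live, could script the step with a sketch like the one below. The research question shown is a hypothetical placeholder, to be replaced with the one from your assigned reading, and the model name is again an assumption.

```python
# Sketch of the activity's generation step: produce an "alternative
# essay" from a reading's research question. The question here is a
# hypothetical placeholder; swap in the one from your assigned reading.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

research_question = (
    "How does social media use affect first-year student retention?"
)

essay = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name; use what you have access to
    messages=[{
        "role": "user",
        "content": f"Write a short academic essay answering: {research_question}",
    }],
)

print(essay.choices[0].message.content)
```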
The real work for educators is to develop new rubrics that stay true to course learning objectives. GPT-3.5 can create a rubric in less than a minute. But instructors will need training, and time to develop their understanding and skills and to modify their teaching materials. Many already teach this way. We just have another reason to double down now.
Librarians, writing centres and centres for teaching and learning are higher education’s frontline workers here. They can support faculty and instructors, but they themselves need training in recognising AI-generated papers, alongside workshops, tutorials and space for dialogue on how to integrate this software into the classroom.
Information literacy is the single most important skill to develop if we are to counter the misinformation that convincing AI-generated text can produce.
My dream response
Universities close for one week and provide intensive curriculum-design workshops to help faculty learn the tools and make the shift in teaching materials and practices. They need to be trained in working with GPT-3.5. Curriculum committees then need to review syllabuses for quality control. It will take time, but investing in this pedagogical and curricular shift is essential to integrating AI-generated text creation into the academy.
Immediate responses
In the short term, universities need to notify faculty about the capabilities of AI-generated text, and they may need to update academic integrity policies: the current language may not explicitly prohibit students from using such software, since it is not, strictly speaking, plagiarism. Universities should also:
- Raise awareness among faculty, teaching assistants and graders;
- Minimise opportunities to use it in assessment by shifting assessment types and practices (avoiding lockdown browsers as a solution);
- Train markers and teaching assistants to integrate this software into lessons;
- Conduct a university-wide academic integrity campaign to double down on values.
If a university wants to mandate that using GPT-3.5 or any AI-generated text in an assessment is a breach of the code of conduct, then this must be clearly explained to students.
Some claim the essay is dead. I disagree. Humans plus technology are the way forward. As educators, we have to teach our students what that means in practice. The essay isn’t dead, but the process of creating one is changing.
Nancy Gleason is the director of the Hilary Ballon Center for Teaching and Learning at NYU Abu Dhabi.