Seeking collective solutions to AI’s ethical challenges

By Sreethu Sajeev, 17 December 2024
Universities and their funding partners should not have to navigate the ethical complexities of using AI in research alone. A panel at the 2024 THE World Academic Summit argued for drawing up a shared framework

How can institutions, researchers and funding organisations navigate the emergence of AI and, more recently, generative AI? Chairing a panel discussion on the subject at the 2024 THE World Academic Summit, Sarah Main, vice-president of academic and government relations at Elsevier, noted that higher education is “at the beginning of a dialogue”. The panellists agreed that while most institutions were developing their own policies on the responsible use of AI for staff and students, there was a need for a shared national – and even global – approach. 

Chris Day, president and vice-chancellor of Newcastle University and chair of the Russell Group, said: “We collaborate across institutions and sectors, so we need a shared understanding of what’s okay when using AI in the context of good research practice.” If universities are left to their own devices, they face a greater risk of exposure to AI’s harms, such as false results, the dissemination of misinformation and plagiarism.

“Without consistency, it could be a race to the bottom as individual researchers and institutions are tempted to seek competitive advantage through AI,” said Day. The Russell Group has developed its own set of principles on the use of generative AI tools in education, but Day advocates for a wider framework and a national list of principles. 

Nick Fowler, chief academic officer at Elsevier, argued that the primary goal of such a framework should be to do no harm: “AI is a combination of content, technology and intent. And if any of these fail – for example, if the intent is to promote misinformation – the research suffers,” he said. In an Elsevier study, 95 per cent of participants believed AI could be used to spread misinformation to some extent, even as they recognised its potential to generate huge gains in efficiency and effectiveness. As a publisher, the company has committed to five principles to ensure the ethical use of AI: promoting real-world impact, improving transparency around how tools work, ensuring human oversight and accountability, preventing the reinforcement of unfair bias and championing privacy and robust data governance.

UK Research and Innovation (UKRI) is also working to develop principles for the safe use of AI in research. Kathryn Magnay, deputy director for AI digitalisation and data at UKRI’s Engineering and Physical Sciences Research Council, advocated for more professional development in this space. “The approach to this is piecemeal at the moment, and not everyone understands the importance of learning the basics such as how AI uses data. We need to move away from AI being a black box that we have no understanding of,” she said. UKRI is now looking into how it can promote integrity around AI use in research. Magnay is optimistic about the future: “Done responsibly, we can embrace the opportunities AI can give us and potentially revolutionise research.”

The panel: 

  • Sarah Main, vice-president, academic and government relations, Elsevier (chair)
  • Chris Day, president and vice-chancellor, Newcastle University
  • Nick Fowler, chief academic officer, Elsevier
  • Kathryn Magnay, deputy director, AI digitalisation and data, Engineering and Physical Sciences Research Council 

Find out more about Elsevier.
