The dark side of AI’s promised efficiencies

By Eliza Compton, 26 July 2023
Artificial intelligence can use data and algorithms in a way that prioritises rationality over values such as fairness and quality of education, writes Vern Glaser

Artificial intelligence is being trumpeted as a means to transform university education. A recent article suggests that AI can help universities:

  • find the right students and persuade them to enrol
  • strengthen retention and help students graduate
  • provide personalised teaching and learning through virtual tutors
  • adapt their curriculum to meet market demands
  • streamline operations to gain efficiencies and lower costs.

These opportunities are tantalising, and university administrators and faculty members are – with good intentions – pursuing and implementing these kinds of initiatives. However, such activities have a dark side, too.

AI prioritises rationality and efficiency over other values

In such applications of AI, “the computer is provided a set of input data, a learning objective, an error function, and a mathematical algorithm for minimizing that error function”, according to organisational theorist Ayenda Kemp. The learning objective thus becomes a goal that the system pursues single-mindedly, at the expense of all other values that might be important – a process that Dirk Lindebaum, Christine Moser, Mehreen Ashraf and I have called the mechanisation of values.
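
To make this concrete, here is a minimal sketch of the setup Kemp describes – input data, a learning objective, an error function and a minimising algorithm. The data and model are invented; the point is that nothing outside the error function can influence what the algorithm does.

```python
# A minimal, invented example: the optimiser "sees" only the error function,
# so any value not encoded in that function is invisible to it.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # input data: 100 cases, 3 features
y = X @ np.array([1.0, -2.0, 0.5])     # learning objective: predict y

w = np.zeros(3)                        # model parameters
for _ in range(500):
    error = X @ w - y                  # error function: prediction error
    grad = 2 * X.T @ error / len(y)    # gradient of that error, nothing else
    w -= 0.1 * grad                    # algorithm: gradient descent on the error

# Nothing in the loop refers to fairness, compassion or any other value;
# the update rule minimises the stated error function and stops there.
print(w)
```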

This phenomenon can be seen in the Australian government’s robodebt scandal. In 2015 the Australian government implemented an automated system for clawing back social welfare benefits, aiming to gain A$2 billion (£1.05 billion) in efficiencies and lower costs under its income compliance programme. The algorithmic system pursued that objective – but failed to take into account other societal values such as compassion, fairness to citizens or treating everyone with kindness and respect. The unbridled pursuit of efficiency led to outcomes succinctly described as a “human tragedy”.
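
To see how such a system mechanises values, consider a simplified, hypothetical reconstruction of the kind of income-averaging logic widely reported to be at the heart of the scheme. All figures and the benefit formula here are invented for illustration.

```python
# Hypothetical sketch: annual income is averaged evenly across fortnights and
# compared with what the claimant actually reported fortnight by fortnight.
FORTNIGHTS = 26

def averaged_debt(annual_income, reported, benefit_rate=0.5):
    """Raise a 'debt' whenever averaged income exceeds the reported amount."""
    assumed = annual_income / FORTNIGHTS  # assumes income was perfectly uniform
    return sum(max(0.0, (assumed - r) * benefit_rate) for r in reported)

# A seasonal worker: A$26,000 earned in 13 fortnights, nothing in the other 13.
reported = [2000.0] * 13 + [0.0] * 13
print(averaged_debt(26_000, reported))  # 6500.0
# The claimant reported truthfully, yet the uniform-income assumption - cheap
# to compute at scale - manufactures a A$6,500 debt. Fairness never enters the
# calculation; only the efficiency of recovery does.
```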

A hypothetical example: using AI to find the right students and persuading them to enrol

This type of dysfunctional implementation of AI could easily impact universities. Consider a university that wanted to use AI to find the right students and persuade them to enrol. Using ChatGPT+, I identified several steps that might be used in a campaign to increase student enrolment.

  1. Gather data on current and past students. This could include demographic data, academic data and other relevant data such as extracurricular activities or financial aid status.
  2. Develop student personas. Use machine-learning algorithms like clustering (for example, K-means, hierarchical clustering) to identify groups of similar students in your data. These groups can form the basis of your student personas (see the sketch after this list).
  3. Develop AI-generated marketing materials. Use AI tools to generate or optimise marketing content for these personas. This can include AI copywriting tools to create compelling text, AI design tools to create engaging visuals and AI video-production tools to create captivating videos.
  4. Run an AI-powered marketing campaign. Use AI tools for programmatic advertising to target your ads to the right audience at the right time. Also, use AI tools to optimise your social media posts for maximum engagement.
  5. Evaluate effectiveness. The effectiveness of this plan can be measured by tracking metrics such as lead generation (the number of students who express interest in the university), conversion rate (the proportion of leads who enrol in the university) and cost per acquisition (the total cost of marketing activities divided by the number of students enrolled).
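
As a hedged sketch of steps 2 and 5 – not a real recruitment pipeline – the Python below clusters invented student records into personas with K-means and then computes the metrics named in step 5. The feature names and all figures are assumptions made for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Step 2: invented features - [entrance score, household income, km from campus]
students = rng.normal(loc=[75, 60_000, 40], scale=[10, 15_000, 30], size=(500, 3))

X = StandardScaler().fit_transform(students)    # put features on a common scale
personas = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
# Each student now carries a persona label (0, 1 or 2) used for targeting.

# Step 5: the metrics the campaign optimises, with invented campaign figures.
leads, enrolled, spend = 1_200, 180, 90_000.0
conversion_rate = enrolled / leads       # proportion of leads who enrol
cost_per_acquisition = spend / enrolled  # total marketing cost per enrolee
print(f"{conversion_rate:.1%} conversion, ${cost_per_acquisition:,.0f} per student")
```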

How could a programme like this go wrong? Quite simply, this type of campaign pursues one value: maximising the effectiveness of the marketing campaign. In doing so, other values can easily be pushed to the side and ignored. Examples could include:

  • by developing student personas based on past enrolment, any biases in the historical composition of the student body might be reinforced and perpetuated (demonstrated in the sketch after this list)
  • by developing marketing materials that optimise student response, the brand of the university might be diluted
  • by focusing on enrolment as the primary objective, other objectives – such as the quality of education or the university’s impact on its local community – might be marginalised.
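
The first of these failure modes is easy to demonstrate with invented data: personas clustered from a historically skewed student body simply reproduce that skew, so a campaign tuned to them recruits more of the same mix.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# Invented history: 90% of past enrolees come from demographic group 0.
group = rng.choice([0, 1], size=1000, p=[0.9, 0.1])
# Features correlate with group membership, as historical data often does.
features = rng.normal(loc=group[:, None] * 2.0, size=(1000, 2))

personas = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
for p in (0, 1):
    share = group[personas == p].mean()
    print(f"persona {p}: {share:.0%} of members are from group 1")
# The clusters mirror the historical imbalance, so marketing tuned to them
# recruits more of the same mix rather than broadening it.
```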

Maintaining values while implementing AI

What can we do to avoid these types of dysfunctional outcomes? Fundamentally, we need to understand that the mechanisation of values will inherently take place when we implement an AI solution – active leadership is required to prevent dystopian outcomes. Research suggests that a few approaches to AI implementation can mitigate these problems.

  1. We can strategically insert humans into our decision-making processes (see the sketch after this list).
  2. We can create evaluative systems that account for multiple values.
  3. We can be willing to periodically redesign our algorithmic routines.
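
As an illustration of the first two approaches, the hedged sketch below scores each automated decision against several values rather than one and routes any case that falls short on any value to a human reviewer. The value names, scores and threshold are all invented.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    efficiency: float          # the value the algorithm natively optimises
    fairness: float            # values that would otherwise be mechanised away
    educational_quality: float

def evaluate(d: Decision, floor: float = 0.6) -> str:
    """Approve only when every value clears the floor; otherwise escalate."""
    scores = (d.efficiency, d.fairness, d.educational_quality)
    if min(scores) >= floor:
        return "approve automatically"
    return "escalate to human review"  # the strategically inserted human

print(evaluate(Decision(efficiency=0.95, fairness=0.4, educational_quality=0.8)))
# -> "escalate to human review": high efficiency alone is not enough.
```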

AI is powerful, and the promise is real. But beware of the dark side – because the core values of the university can easily be stripped away if we’re not careful.

Vern Glaser is an associate professor of entrepreneurship and family enterprise and Eric Geddes professor of business in the department of strategy, entrepreneurship and management in the Alberta School of Business at the University of Alberta, Canada.

