WWWAI - Episode 2

In this episode, we're joined by Dr Jamie Woodcock (Open University) and Dr Zeynep Engin (UCL Computer Science) to discuss disenfranchisement by AI in the world of work. We cover topics like algorithmic management and the biases that can be exacerbated by AI-driven technologies, and discuss broader inequities AI can highlight and make worse. We also consider positive uses of AI in work and discuss what actions can be taken to steer technology development and use in work to increase fairness for us all.


Working Well with AI episode 2: AI and disenfranchisement in work
[music]
Rose Luckin
Hello, and welcome to the UCL and British Academy podcast series, Working Well with AI. I'm Rose Luckin, Professor of Learner Centred Design at the UCL Knowledge Lab. In this podcast series, we're exploring how artificial intelligence, AI, is changing the world of work. AI has long been predicted to reshape our working lives, and it has developed in leaps and bounds over the past decade. And as we emerge from a global pandemic, we're rethinking how we work, what sort of work we value, and what we need for the future.
In this episode, we will be discussing who can be disenfranchised by AI developments in the world of work, and what this sort of disenfranchisement looks like. We'll cover topics such as algorithmic management, and the biases that can be exacerbated by AI-driven technologies. We'll talk about broader social issues that AI can highlight and sometimes exacerbate. We will also consider positive uses of AI in work and discuss what actions can be taken to steer technology development and use in work to be beneficial for all of us. With me today, I'm delighted to say, I have Dr Jamie Woodcock from the Open University and Dr Zeynep Engin from University College London Computer Science. So Jamie, can I start with you? Could you summarize, in a nutshell, what interests you about AI in work?

Jamie Woodcock
So, broadly speaking, my research interests are around work and people's experience of work. And I think we can say that, in the past few years, the role of technology at work has become a fashionable topic to talk about. And so my interest in trying to understand AI at work is to understand, firstly, how it's being used to change people's experiences of work. But I also have a research interest in finding out the limits of how AI is being used: some things either don't really involve AI or are just being blamed on AI. So I want to try and unpick what's really happening with how work is changing.

Rose Luckin
That's really interesting. And I think you're absolutely right to want to unpick what's really going on, because it's not always obvious, is it? Sometimes it's quite hard to actually know if an AI is really being used, and if so, how. So Zeynep, what areas of AI are you working with in the main at the moment? Where's your focus?


Zeynep Engin
Thank you. My interest is broadly in the topic of algorithmic governance. It's a new and growing field of research concerned with the design of social and economic processes through the use of algorithmic systems, finding the appropriate levels of human-algorithm collaboration and co-decision, and the issues that emerge from that development, especially when algorithmic decision-making is part of an important decision that affects individuals, society, or the environment.

Rose Luckin
I think you said something quite fundamental when you were talking about the appropriate levels of artificial intelligence and human co-working, cooperation, and collaboration. It seems to me that the interface between the human and the AI is not always very functional and effective. Can you say a little bit more? When you say appropriate levels, what might that look like?

Zeynep Engin
AI-based technologies are everywhere. They're affecting lots of things in everyday life, and also important decisions. But our governance structures are not really there yet, because they are more suited to a previous era of how things worked. The scale of change is often compared to that of the Industrial Revolution, but the processes we are currently stuck with are processes that were invented to deal with those sorts of issues. Now we have to think about this new development of having another agency, alongside human and institutional processes, making decisions. So appropriate levels is really a difficult question. We do have algorithmic agency in the processes; whether it's at the appropriate levels yet, probably not. But at the same time, it's hard to measure and say anything conclusive about that, because we can't simply say that we don't want algorithmic intervention or algorithmic components in decision-making processes that affect individuals, that affect real people. Unfortunately, we're not there, and it would also mean missing out on the opportunities. So appropriateness is really a very hard question that maybe we can get into a bit more as the conversation progresses.

Rose Luckin
Yes, I think it's absolutely a key part of the discussion around disenfranchisement. Okay, so let's move on to that focus a little. That's what we're really interested in at the moment, because we don't really want everybody becoming disenfranchised because of the way that AI is being used, applied, and introduced into the workplace. So, Zeynep, from your experience, what do you think are some of the most concerning uses of AI technologies in work? Maybe if you could talk us through an example of how someone could become disenfranchised when AI is used in their workplace, that would be extremely useful.

Zeynep Engin
As I said, my interest is more broadly about governance and algorithmic management; the impact on the workplace is really more marginal, I'd say. But when we talk about issues around AI technologies in work in general, I think people often confuse them with automation. There is still a whole automation issue that comes from mechanization of governance, mechanization of managing things: basically having machines do the repetitive tasks, or the tasks that require more physical power. We have been outsourcing those to machines for quite a long time, but that obviously comes with some social consequences, in that certain groups are basically not needed in an economic sense. And that's a huge problem. But in the AI discussion, it's not that sort of automation; it's not really about losing blue-collar jobs to automation. It's the white-collar jobs that we are talking about more these days. For example, we are less reliant on human translators to do translation tasks. While this has its advantages (it makes it cheaper for people to get those sorts of services, and it helps when you're travelling overseas), at the same time there are lots of issues around how biased those translations can be, and as a result they basically reproduce the same kinds of problems, the same kinds of issues, that we have had in the past.

I think the main challenge with AI, in the workplace or anywhere really, is this agency issue, whatever type of agency we're talking about. The main challenge to the workplace, if we're talking about disenfranchisement, will come from increasing reliance on AI: as AI technologies become better and better at what they're doing, it will be more difficult for humans to object. If you see an AI system, for example, making better diagnoses, and you know that the chance of the algorithm getting it right is going to be better than the chance of a human doctor getting it right, then it will be more difficult for the human doctor to really challenge any algorithmic output in the first place. And that means that, over time, we are inevitably handing over human decision-making power to the algorithms. We also have to keep in mind that the ideal combination, usually or increasingly, is combined human-AI decision-making, with AI really being an aid to the human decision-maker. What we see in reality is algorithms making decisions directly, through government by algorithm or management by algorithm, and even algorithms supervising decision-making processes. And that's a different type of agency that is affecting economic activity, workplace activity in general.

Rose Luckin
There are several examples that you called upon there. If we think about translators, I've spoken to many translators over the last few years. And they're, on the one hand, excited that they can maybe focus more on interpretation, because there's an AI that can do some of the more basic parts of their role. But at the same time, they're worried because the AI, as you pointed out, isn't always foolproof, and it gets things wrong. And if people rely too heavily on it, do we end up with our human translators and interpreters losing out, and with an inferior translation and interpretation process, because the AI is not actually as good as the human? So I think there are some very interesting questions there about what it means to be effective: for the AI to be effective, for the human to be effective. Jamie, could I come to you now? Can you tell me a little bit about what you think in terms of this question of people becoming disenfranchised?

Jamie Woodcock
I think it's an interesting question, what disenfranchisement means or involves. And for me, I guess one of the key things is how disenfranchised people already are in many kinds of work. The overwhelming majority of people who go to work have little say over how they work, over when they work, over what they do when they work. And so my understanding of artificial intelligence, when it's used at work, is that it's not an abstract thing. It's used within sets of social relations that already exist in the workplace. And for very many people, those sets of relations are exploitative, are alienating, and don't involve much autonomy or control. And so for me, that's really the risk of this disenfranchisement: that many people don't have the language to explain or to talk about artificial intelligence, that it becomes raised above people as if it's something that can't be explained or that we can't take responsibility for. So, for me, it's not that artificial intelligence, or a particular program, or a particular piece of software is disenfranchising people; it's that the people who own the places where people go to work are disenfranchising the people who go to work there. And they're using technology, like they have for the past 200, 300 years, to disenfranchise people from the fruits of what they're doing at work. You know, I think we have to be very, very clear that people who go to work have a right to an explanation for decisions that are made. Just because a technology has been created doesn't mean it has to be used. There are many technologies we have rejected in the workplace, and could reject. And so I think we have to cut through some of the Silicon Valley hype that there is a piece of software that will solve all of our problems. Because that has not been true so far, and I don't think it will be true in the future.

Rose Luckin
That's an important point. To what extent do people need to understand AI in order to be able to successfully cut through that hype? Because I think you're absolutely right.

Jamie Woodcock
I think the key way of thinking about this is: when artificial intelligence is used as part of a decision-making process at work, we have to be quite clear that if nobody is held responsible, no human manager is held responsible for the decision that's made, or if the decision can't be explained, then it's not appropriate for use at work. And I think we just have to be clear about this. It may make more profit for a company, it may be more efficient, but it is fundamentally not appropriate for use at work. And I think about this in terms of: what if a worker wants to appeal a decision that is made? They have a grievance, they go to a trade union rep, they have a meeting about it. You can't go into a meeting where somebody says, we can't explain why this thing happened, and none of us take responsibility for it. That is a shift in workplace relations in a way that we wouldn't accept in many, many other contexts. And so I think we have to be quite clear on this, right?

Rose Luckin
Yes, it's something that we do have to be very clear about. Zeynep, can I just come back to you and ask you about this issue with respect to governance? Because it feels as if there's quite an important role for governance here, given the point Jamie's making that we shouldn't have decisions made by an AI that cannot explain the basis for the decision as a human would be able to. And of course, one of the big problems with pure machine learning is that it is a black box, and you don't get a satisfactory explanation. So is this also a governance issue in your eyes?

Zeynep Engin
Yes, it's true that AI technologies, especially machine-learning-based, more data-driven AI, are not transparent. You don't necessarily see what is going on inside the learning process itself. But what we know is that we need AI to deal with the growing complexity of real-world problems that human experts are not necessarily able to deal with themselves.
The explainability question is important, yes. This advantage of AI comes at the cost of explainability: you are no longer able to comprehend what's going on in the process. At the same time, I think it's not the wrong thing to pursue algorithmic support in critical decision-making processes, because it offers us the capacity to deal with more complex problems that we otherwise deal with by making lots of human assumptions. And this reality has brought us to this point: we have lots of inequalities in the world, lots of disadvantages for people (apart from white males, I mean) because of the human and institutional processes that have historically brought us here. And I think there is a role for really reformulating that equation, reformulating the question, and trying to use AI to actually uncover these problems in the existing processes of human and institutional decision-making, making sure we are focusing on the right questions and on the outcomes.

Rose Luckin
I think one often thinks about AI in and of itself, without taking into account the context: the social context, as well as the physical context, the digital context, the work context, within which that AI is operating, where there are, of course, existing issues and existing inequalities. So, Jamie, how do we really think these kinds of issues through? Is it the kinds of technologies, or is it more about the kinds of relationships within that contextualized application of the AI, that are more likely to be responsible for that risk of people feeling disenfranchised? Is it the case that you might use a particular sort of AI technology or product in one place quite successfully, but in another place, because of that context, absolutely not? Can you say a little about the contextual factors, particularly existing inequalities and biases?

Jamie Woodcock
So I think one of the risks when we're talking about this is that if we think about statistical analysis as a neutral tool, or we think of artificial intelligence as a neutral tool, we can miss part of the broader sociological picture of how technologies are used at work. And, you know, I'm of the firm belief that technologies aren't neutral, but are shaped by the people who design, pay for, build… You know, people's material interests become embedded in the technologies that they create. And then those technologies are used within sets of material relations.
Now, I'm not going quite as far as saying that every single invention from the Industrial Revolution onwards can only serve the purpose of intensifying exploitation, because there's always an ambivalence in tools, right? Some can be repurposed or set against their original purposes. But I think what we need to focus on is that, at work, there is already a set of particular relationships. And we only have to look at the implementation of other forms of technology over the past couple of decades to see that many of these have led to disempowerment in the workplace, particularly in areas of work where there are low levels of trade unionization and where there's no negotiation over the use of technology.
The example that I often come back to is call centre work. The computerization of call centre work was driven almost entirely by the imperatives of management, of capital, in call centres, and led to a kind of work that is disliked pretty much universally, by people who call call centres and by people who work in call centres. It has a number of negative factors associated with it. And so I think the question is: what are people trying to achieve or do with these technologies? At Deliveroo, they used to name the algorithms. They had a "Frank"; they had a number of different names for algorithms. And these were used to speed up the work process, right? These were used to make calculations about how to get the food out as fast as possible. But ultimately, they came back to some quite bland and familiar outputs: firing the people who perform worst, firing the lowest 10%, whatever it is. You can read here that this isn't a mathematical puzzle with a lovely solution that somebody has found. This is a way of creating an outsourced workforce and pressuring them to work harder. These are not particularly new phenomena, right? They're just being refigured through the application of these new technologies.

Rose Luckin
Yeah, that's a brilliant example. Thank you. Absolutely great. What can we do about this? What can we do about the risks? How can we try and ensure that the workplace, and the relationships that exist within it, are not made less equal or made worse? What are the things that we can do to try and ensure that we reduce the risks?

Zeynep Engin
Well, I am also thinking about more positive uses of these technologies, to tackle some of the systemic problems that we haven't been able to deal with as human beings so far. And I think that's an important aspect, because the current conversation is really more about being reactive, and rightly reactive, about the technologies. They are causing us some problems, and we need to point them out, find them, spot them, and find appropriate measures to minimize any potential harms. But we shouldn't be missing the opportunity side either, because I do believe that there is a really huge potential here to help us do things better in the future, with this sort of external capacity.
But what I often find is, first of all, that the conversation is really too broken. The reactive reflections don't necessarily always come with a good understanding of what the technology is about and what it is offering; it is often very confused. And I don't think we have yet found the language to go into the root causes, both in socioeconomic and social terms and in technological terms. For example, in computer science, in machine learning, there is a huge field of research growing around algorithmic fairness: trying to make things more fair. It's a very interesting problem to try and build these systems in a more fairness-conscious way, because on many, many occasions we do see them being unfair, or being used in an unfair way: in hiring decisions, in criminal sentencing, in lots of areas. But we also find that these efforts are really isolated from the real questions and from the social and economic context, and therefore they are limited to very specific and often unrealistic assumptions. Again, we do also have to tackle the social questions. In the algorithmic domain, you're trying to translate the problem into numbers. And even terms like "fairness" and "equality" mean different things to different people in different contexts, with lots of different expectations around them. When you then try to translate that into the computational domain, you end up with mathematically conflicting definitions of what they might mean. That's a huge issue there.
On the other hand, on the social side, yes, we do see the impact of these technologies, and we are worried that they are not being used in the right way. But I think it's the existing processes, not the technology itself, that are to blame. Statistics is a neutral thing, if you understand what it does. It is really what you're feeding into it, your data, your assumptions, your model, your contextualization of the social problem, that is the problem. And I think the main need is really to link this broken conversation between the two ends in the first place. There's another thing that makes this a live issue as well: it's not a case of developing solutions in a lab environment and testing them many times before implementing them in the field. Things are happening in real life, and we need to find solutions as we go. And that requires the language to be translatable to non-technical, non-scientific, non-computer-science fields as well, to really involve a more lay audience. If it's an employment or workplace issue, we need a certain level of understanding among employers themselves and among employees. I mean the role of trade unions we were discussing last time: we need to understand how they are going to operate, how they fit into this new picture. And of course, the regulators and the government as well, because there's also the huge issue of the dominance of the private sector in the loop, which is changing the role of governments.
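(An editorial aside on the point above about mathematically conflicting fairness definitions: the following is a minimal sketch, not from the episode, with invented groups and numbers. It illustrates a well-known tension: whenever two groups have different base rates, even a perfectly accurate classifier that satisfies one common definition, equalised odds, must violate another, demographic parity.)

```python
# Editorial sketch, not from the episode: two common fairness definitions
# can conflict on the same data. Groups and numbers are invented.

def positive_rate(preds):
    """Fraction of people predicted positive."""
    return sum(preds) / len(preds)

# True outcomes for two hypothetical groups with different base rates.
labels_group_a = [1, 1, 1, 0]   # 75% of group A truly positive
labels_group_b = [1, 0, 0, 0]   # 25% of group B truly positive

# A perfectly accurate classifier predicts every true label, so its true
# positive rate (1.0) and false positive rate (0.0) are identical across
# groups: it satisfies "equalised odds" exactly.
preds_group_a = list(labels_group_a)
preds_group_b = list(labels_group_b)

# But "demographic parity" asks for equal positive-prediction rates, and
# those inherit the differing base rates: 0.75 vs 0.25, a gap of 0.5.
gap = abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))
print(f"demographic parity gap: {gap}")  # -> 0.5, so parity is violated
```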

Rose Luckin
I think the key thing that I'm hearing is that communication and language are fundamentally important here. And, Jamie, I'm going to ask you the same question. What do we do? What can we do in order to try and reduce the risks and to build a more equal workplace rather than increasing the inequalities that are already there? 

Jamie Woodcock
Well, I mean, I guess the difficulty, following on from what Zeynep has said, is that of course there are clearly big disagreements. Clearly Zeynep and I disagree quite substantially on the ethics of the use of technology. So I will try, not being either a mathematician or a computer scientist, but somebody who studies technology at work, to say something about what we could do.
And, you know, I guess the challenge here, and what we can do, is… I think there are two kinds of arguments happening, right? And I won't categorize Zeynep's as either one of them. One says technological progress is inherently a good thing; that there are neutral, positivistic approaches to science that we can apply to problems and we will solve issues; and that any opposition to technological progress is a kind of reactionary or bad thing. And then I think on the other side, there is a position that says work was better in the past: if only we could go back to the post-war settlement, things were better then, or whatever it is.
I think the difficulty is that both positions are not very helpful, because they miss the nuance of the non-laboratory situation: that people are complicated and contradictory, and that the workplace is a contradictory phenomenon. It's not somewhere technologies arrive fully formed and are used in a kind of space where everyone has a say over what's happening, and so on. And so I think what we have to try and do is to find a middle ground. And I think that middle ground starts not from technical expertise, whether from the social sciences or STEM, but from talking to people about their experience of work. Even if we set aside the potential disenfranchisement that AI is causing in many workplaces, you only have to look at the reports of new technological solutions being brought in to monitor people working from home, or the recent announcements of unfair deactivations at Uber and Deliveroo and so on, to see that there's been a kind of shift in opinion around technology at work through COVID. But it's also about actually talking to people about what they think the solutions to the problems at work could be. And it may well be that there are some ways we can wrench some of this technology away from, you know, making profit, to be used for other things.
But I think this is a kind of broader question about disenfranchisement. At present, AI technology is dominated by large companies funding much of the research; and where it's not, it's state funding that's being taken up by private individuals, or still being shaped by that environment. So we could imagine a different kind of science, but I think that's a much bigger question than the disenfranchisement we're discussing. I think we need to find that balance based on the realities of most people's experience at work today.

Rose Luckin
I think that's a great place to conclude: finding the balance. I couldn't agree more; I think we have to do that. And I do wonder, listening to this discussion, whether, as Zeynep's pointed out, we can use AI to help us deal with some of the complexities that are very hard for us to deal with as humans. But in the process of using that AI, we perhaps introduce further complexities, because the AI is so alien to understanding the social relationships that already exist within the workplace. So you're tackling one lot of complexities, but actually not tackling the other lot of complexities at all, and potentially making them worse. So it is a fascinating area, and we have to get it right.

Thank you both for joining me today. I really enjoyed the discussion, thank you! Our guests today were Dr Jamie Woodcock, Senior Lecturer at the Open University, and Dr Zeynep Engin, Senior Research Associate in the University College London Department of Computer Science. Thank you both for coming along and engaging in such a lively and interesting discussion.
[music]
You've been listening to Working Well with AI. This episode was presented by myself, Rose Luckin. Editing and mixing by Suzie McCarthy. The series is funded by UCL Public Policy, UCL Grand Challenges, and the British Academy. To find out more about the AI and the Future of Work project, search for "UCL AI and the future of work". Thanks for listening, and I hope you join us again next time.