UCL Grand Challenges


Disruptive Voices Episode 2: AI and the Future of Work

In Episode 2, we speak to Professor Rose Luckin (UCL Institute of Education) and Nimmi Patel (TechUK) about AI and the Future of Work.



AI and the Future of Work

In this week’s episode, Professor Rose Luckin (UCL Institute of Education) and Nimmi Patel (TechUK) discuss what AI developments could mean for the future of work in the UK and globally, and how we need to redefine our understanding of good work to harness AI’s potential for positive change.


James Paskins  0:05 

Hello, and welcome to a new series of the Grand Challenges podcast, Disruptive Voices. My name is James Paskins and today I'm in conversation with Professor Rose Luckin, who is Professor of Learner Centered Design at the UCL Knowledge Lab, and Nimmi Patel, who is Policy Manager at TechUK.

Welcome to you both and thank you for joining me today. We're going to be talking about AI and the future of work. So Rose, can I come to you first and just ask you about AI and the things we hear about it in the mainstream media. There are usually some quite alarming stories about how we're going to lose a lot of jobs, and many people are going to be unemployed. Can you explain briefly what some of these issues are, and to what extent this worry is justified?

Rose Luckin  0:46 

I think we need to be careful: there's a huge amount of misunderstanding about what artificial intelligence is, what it is now, what it could be, what it can do, what it can't do, what it might be able to do in the future. And it's very easy, therefore, to feel anxious about the possibility of your job being taken by this amazingly smart robot, or this incredibly intelligent system. The truth of the matter is that the world of work is changing. And in some areas, it's changing quite dramatically. And that is often at least partly due to artificial intelligence. So there certainly isn't any error in thinking that the world of work is changing. But it's not all negative. We have very good artificial intelligence systems, but they're very good in very particular ways. They're not good at everything that humans are good at. So in actual fact, as we move further through the 21st century, I think we'll see a much greater balance, as we appreciate that what's actually happening is an attempt to find the right balance of an artificial intelligence and a human intelligence that complement each other, so that the artificial intelligence really augments the human intelligence: helps us to do more things better, helps us to understand ourselves more. Of course, we have to intend for it to be used in that way. But I think that's really at the heart of what this project is about. It's about trying to look at the relationship between artificial intelligence and the future of work, and to try and think about what the best outcomes are, given the huge potential there is for artificial intelligence to be beneficial, and to try and minimize the risks of the worrying and non-beneficial potential that AI could bring.

James Paskins  2:45 

Nimmi, how's that represented in the work that TechUK does?


Nimmi Patel  2:50 

Thanks so much for that, James. And just picking up on what Rose says, I think innovation will bring opportunities and challenges. There are legitimate concerns about what the impact will be on those industries that are displaced. I think what history teaches us is that automation and new technologies like AI will not replace human work, but rather transform it - boosting existing employment and creating new types of work. 30 years ago, there were no data scientists. 20 years ago, no search engine optimization people. And 10 years ago, no one was employed as a social media manager. And while the fourth industrial revolution has many unknowns, I think one of the tasks we're faced with is preparing for jobs that don't exist yet. TechUK is a tech trade association with over 830 members, and we're really trying to prepare for a digital world that is inclusive for everyone, where everyone can harness the opportunities available in their digital future. I think by ensuring that everyone - no matter their age, ethnicity, gender, background, or skill level - is given the opportunity to harness AI skills and new technologies, we can thrive in the future workforce and society.


James Paskins  4:03 

Wonderful, thank you. So, Rose and Nimmi, UCL and the British Academy are currently working together to address critical questions for policy, business, practitioners and society on the ways that AI will affect the future and equity of work in the UK. I think this is probably a question for Rose to start with - but can you tell me a bit more about this project and how you're involved?


Rose Luckin  4:23 

Absolutely. I think this is an extremely important project. And it's also great to see UCL and the British Academy coming together over this, because understanding what the future of work in the UK is going to be like, and how AI is impacting on it, requires a really interdisciplinary group of scholars to unpack the implications - yes, for policy and business and practitioners and society, all of us, because it's impacting on everyone. So, I think it's an important topic area, I think it's a great collaboration. But for me, the really good thing is the bringing together of people from multiple disciplines from across UCL to try and look at issues such as the quality of work and the equity of work. If you take that last one, the equity of work: what are the implications of artificial intelligence for the equity of work in the UK? And how do we understand how we can help policymakers bring to bear the kinds of policy that will try and ensure that actually we do develop a more, not less, equitable workplace? What are the implications for business? And actually, where does the AI itself fit in? If you think about a situation where you have a problem that's being solved by a team of people - not an unusual thing in business - actually, some of those people aren't people; some of those people are AI. And so you've got this AI-human intelligence team working together. What are the implications of that for the business? How do you make it most effective for the business? But then what's it like for the people involved in that team, and what policies need to be brought to bear to ensure that people are treated properly? And there are also people who believe that we have to think about what's the appropriate way of treating the AI. So it's a really complex space.
And the only way we're going to get to grips with this is if we bring people from multiple disciplines, with multiple kinds of expertise, who not only bring different knowledge and understanding, but different ways of working, to bear on these really fundamental issues for society and for the workplace.


James Paskins  6:56 

Thank you very much. Nimmi, can you tell me a little bit about the way that TechUK is involved in this work, and how you see the importance of this kind of research?


Nimmi Patel  7:05 

The project that UCL and the British Academy are working on couldn't be more timely. And I'm really grateful that they invited TechUK to the AI and future of work roundtable that was held late in 2020. It was a really good discussion about the different variables and different considerations that the project will need to take into account. And it's so important as we look to recover economically from COVID-19. I think the future of work has now shifted. And what we can do with this project is really make substantial recommendations to government and industry about collaboration, about ways of working, and how we can make them most effective for a productive workforce. A report from the House of Commons Business, Energy and Industrial Strategy Committee in 2019 asked the government to create a UK Robot and AI Strategy by the end of 2020, if the UK was to remain a global leader in new technologies. TechUK obviously welcomed this - we actually submitted evidence to that committee. But it found that a lack of awareness and understanding of automation and new technologies, particularly among SMEs, was harming business productivity. Now, this can happen for a number of reasons, including because businesses don't understand the technology that has been developed, or don't prioritize it. Other times, it's because the new technology doesn't fit in with existing systems. I think a project like this, in an era of economic recovery from the pandemic, where we've seen forced digital transformation happen, can really help move the conversation and the narrative away from "the robots will take our jobs" and enable businesses to partially automate routine and repetitive tasks.


James Paskins  8:52 

How much uncertainty is there regarding future developments in AI? And how do you think we can make informed political decisions now?


Nimmi Patel  8:59 

The UK is already a world leader in AI innovation in sectors such as health and finance, but the tech is moving fast, and in order to keep pace with other nations, and if we are to remain at the forefront of the development and application of these technologies, we've got to do more. And skills are fundamental to this. I'm very biased because I work in skills, so I think it's the most important thing ever. My colleagues who work in digital infrastructure or digital adoption may think differently. But creating a steady pipeline of tech talent is imperative to remaining a leader in AI and data; we must address the significant mismatch between supply and demand for digital skills in the workplace. And this is only going to increase, as there's going to be more emphasis on digital transformation in organisations, especially those looking to enhance their own capabilities based on their lived experience of this crisis. This will include supporting new operating models, collaboration tools, and talent management solutions. I think over the next few months, we can expect to see a heightened degree of scenario planning around recovery and support on workplace models. I, and TechUK, agree that human-centric policy reform will be needed in a world where everything becomes digital. As the fourth industrial revolution, as we love to call it, charges ahead, companies will find that they can automate certain tasks - and we really encourage that - but they may discover they can get by with fewer employees. During the recession, a lot of companies downsized. And what we saw was that, in many places, it was a sort of jobless recovery: the economy came back, but not all the jobs came back. Leading with people-first thinking, enabled by technology, ensures that we do not deepen the digital divide while people adjust, and that's incredibly important as we face a jobs crisis.
Things we may need to consider include online safety - we do a lot of work around online harms - consultation on needs and workers' preferences, and how flexibility can support, in particular, people with families and women. As we've seen, women have become more disadvantaged during this time, and the necessary digital infrastructure for this ought to become a reality.


James Paskins  11:14 

Rose, can I come back to you and just ask you some more about the importance of different disciplines working together? We have a huge range of people interested in artificial intelligence, and this project brings them together - can you just talk about why you think that's important?


Rose Luckin  11:29 

Absolutely. I'd have to reflect back to when I first started studying artificial intelligence, which is way too long ago for me to actually pinpoint the date, but several decades. And at that point, artificial intelligence was something that you studied in an interdisciplinary way. Because it was all about intelligence, and intelligence involved needing to understand some psychology, some cognitive philosophy, some linguistics. But then, of course, you also needed to understand some theoretical computer science, as well as some artificial intelligence methodologies, strategies, computer programming. And so it was a really rich, interdisciplinary study area, and the key thing was to solve problems. But of course, to solve the problem, you have to understand the problem. And that's where the start of that interdisciplinarity comes in. If we think about the pandemic, you've only got to look at the things that are happening across the world to realize how many different sorts of expertise are required to deal with any of the relatively minor issues around the pandemic, let alone the major issues. How do you roll out a vaccination program? Well, you certainly can't do that without having people with expertise in logistics, in the medical field of virology, people who understand how the vaccines work, people who understand the public health issues around it, and that's only just scratching the surface. And so, when it comes to artificial intelligence, the situation is the same, only more so, because not only do you need those multiple disciplines to understand the problem, you also need them to work out how you might use artificial intelligence in order to tackle that problem. And I think Nimmi expressed it really beautifully, because she talked both about the need for the stream of tech talent, but also the human-centric workplace.
So, that reflects the need for us to have sociologists, organizational psychologists, and many more disciplines besides, of course, working around what it means to have this human-centric policy. What do we want in the workplace? But then, of course, you need the technical expertise to understand what is possible for the AI to do. And, of course, many disciplines around that. Because it's very important to recognize that it's one thing to say, "okay, we have an artificial intelligence that is extremely good at helping us automate certain processes in the workplace, and that leads to changes in the roles that are available and the way that people work". But just because we can do something with technology doesn't mean that society will accept it. I think a classic example happened last year with the algorithm that was brought in to basically do the adjudication of students' GCSE and A level grades. Now, some people might not even see that algorithm as AI. But I think many people would feel that it is a form of AI, and it's certainly related to AI, if not AI. Actually, the core issues there weren't to do with the algorithm not doing what it was meant to do; they were to do with the human misunderstanding of the context, of what society would accept, and of the historical bias that the data brought with it. So, it's really complicated, and when things are complicated in this way, you must have multiple disciplines coming together if you're really going to stand a chance of getting a good, fair outcome.


James Paskins  15:27 

Public acceptance is a really important point. How do we communicate with and educate the public about AI?


Rose Luckin  15:34 

It's a really good question. And actually, I think it is becoming the question. How do you give informed consent for an AI to use your data, if you don't understand what it means for an AI to use your data, for example? I think we really have to look very carefully at this issue around public acceptance. And if we want the public to accept a role for AI, then we have to be careful not to make the assumption that just because the people who are commissioning that AI, or the people who are developing that AI, know themselves that this is a jolly good thing, that's enough - it's not, you know. The public really do need to understand, and they need to understand for many reasons, but not least because, whether we like it or not, there are people in the world who want to use AI for negative reasons. And unfortunately, regulation will never keep pace with the way that the technology rolls on, and somebody who wants to wreak havoc will always be ahead of the regulator. So actually, there's a very practical reason to do with trying to help people prevent harm. I'm a big fan of regulation - I'm not trying to say we shouldn't be trying to develop the right regulation. But we also need an educated public, so that they understand why perhaps an alarm bell should sound when they're asked for their data to be used for X, Y, or Z. And so I think this education about AI is becoming an increasingly urgent issue, if we want that public acceptance to be there. And we do, but we want it to be a wise public acceptance, not a blanket public acceptance of people doing things because they're told it's good for them. They need to understand why it's good for them, or why it's good for other people that they care about.
And I think my final point on this would be: I am really starting to understand a great deal more about how contextualized people's understanding of artificial intelligence needs to be. For example, there are some great resources out there if you want to learn about AI, and many of them are very good. But actually, what I find is that many people still feel that those courses are too abstract - they can't relate them to their life. So what we really need is to find a way of helping people to see how AI is already impacting on their life, and actually help them to understand it within their own context, because that will help them to be a much more informed public. And that will help them to keep themselves safe, to keep the people they care about safe, but also to reap the benefits of AI. So I actually think that public acceptance, and I would call it a public education piece, is increasingly important, and we need to start thinking about it a little bit differently.


James Paskins  18:45 

So Nimmi, we're still in the middle of the COVID-19 pandemic, and I think that's redefined a lot of people's understanding of work. How do you think work will change in the future? And what part will AI play in that?


Nimmi Patel  18:57 

Well, remote and flexible working, I think, is here to stay. I don't know how many of us will go back to the office full time. I know I won't be one of them. But I think we can enable people to work from different locations, go about their work while having caring responsibilities, and increase people's job satisfaction and quality of life. When we talk about jobs, we tend to think of them as very localised, very geographically restricted. That's not the case anymore. We've seen the world become much smaller in an instant, when it was already quite small. Platforms like Zoom, for instance, which we've depended on a lot since being away from the office, aren't intended to replace the office but are tools to keep people connected, and working flexibly and remotely for those who need it. The future of work right now is wholly dependent on what we want it to be. And making those conscious decisions going forward, rather than allowing the technology to push us forward, will be the best way we combat any kind of challenges with regards to the future of work. Something that the UCL and British Academy project really explores is what 'good work' is. I'm really excited to understand more about the findings from the project, because I also would like to know what good work is, and how that's changed over the last few years. Job satisfaction and higher salaries - because the tech industry is a high-wage, high-skill industry - have always been core components of the way the industry recruits. Now we're seeing different things come in: for instance, the possibility of removing middle managers, and the idea of worker surveillance. There's a whole myriad of issues that we need to look at to see how they actually change people's approaches to work.


James Paskins  20:55 

Artificial intelligence crosses boundaries, so it's clear we're going to need international cooperation. What do you think the opportunities and potential challenges are in international cooperation around artificial intelligence?


Rose Luckin  21:06 

Yeah, interesting question. And you kind of said it in the question as well, about the networked nature. I mean, look across the world and, if you take away the exceptions, like North Korea and China and a few others, technology doesn't take a lot of notice of geographical borders. And so you have this big network. And if you think about machine learning, AI, needing lots of data, and the data swimming around in this huge network - obviously I'm oversimplifying, but you get the drift. And I think, in that situation, we have to look to international bodies to work together. And that might be, for example, about regulation. So, you can see the way that the EU, for example, came up with GDPR, which is obviously not specific to AI, but is to do with data and is an important element when we think about data and AI as we move forward. And look at the way that both the EU and the US are trying to work out how they curb some of the big tech companies, many of whom use a lot of AI in the way that they influence how people behave, for example. Now, they're going about it in different ways, but they are talking to each other. The different parts of the European Union - which, I appreciate, post-Brexit we are not part of - are still cooperating around these issues of trying to ensure that we get the most effective regulation, because the truth of the matter is, nobody really understands how this is going to play out, in the same way that none of us really understand how the pandemic is going to play out. We can have some good guesses about some parts of it, but not all of it. And so we have to cooperate, and the opportunities that that acceptance of international cooperation brings are the opportunities for a safer, fairer foundation for AI, which brings greater opportunities to more people and more members of society. The challenges are, of course, that different nations are different.
Look at the arguments that we had over currency, when the possibility was once mooted that we might lose the Great British Pound and adopt the Euro - which was never going to happen, but people get very wedded to things around culture, around their nation. And so there are still challenges, and we have to recognize and respect diversity. And that diversity is represented in those different nationalities, those different cultures. So yes, there are huge challenges, and not all of the elements of that particular challenge are things we'd want to lose, but that kind of makes it an even greater challenge: to keep the benefits and nevertheless cooperate. So, the opportunity of cooperation is definitely worth fighting for. But we shouldn't underestimate the fact that it's not necessarily going to be super easy.


James Paskins  24:02 

I want to ask so many questions about how this could redefine international relations, but I think we possibly need to do that on a different podcast! Right. What do you think work and artificial intelligence is going to look like in five years’ time? And what are the issues we need to address on the way there?


Nimmi Patel  24:18 

If only I had a crystal ball that I could look into. My gosh, that would be fantastic. I think AI has the opportunity to enhance people's lives. But I think we have to, you know, as Rose had mentioned throughout this podcast, we have to do it in a considered manner. And that includes exploring AI in all different ways because we know AI is entrenched in the values that we hold, and often they can be bad values or values that judge or misrepresent groups of people in society. How we combat those issues will enable us to be better citizens and to redefine the workplace and what it means.


Rose Luckin  25:00 

It's a very, very interesting question. And actually, I'm going to be a bit cheeky and reframe it, because I'm going to say what I'd like AI and the workplace to be like in five years' time. And I'd like it to be a much more open, equitable, transparent place where we respect each other. And we all work in a culture that supports achievement, but not achievement at the cost of human wellbeing. And why would I frame it that way? I think at the moment, the successes in artificial intelligence have, to a large extent, been driven by the ways in which AI can drive big profits. And what I hope we'll have is a step back from that, and a much greater emphasis on what AI can do for society. And I think if we're really going to achieve that, it comes back to my piece about education. We need everybody to understand enough about the implications for them, the potential benefits, the potential risks, if we're really going to bring society with us. And, not least important by any means, we actually need our policymakers to understand more about artificial intelligence. There was a question asked in the House of Lords, as a follow-up to a discussion around education, and the question was asked by Lord Tim Clement-Jones to the Conservative Party representative for education in the House of Lords. And the question was about ethics and artificial intelligence, and how we ensure that data, data storage, the way data is used, the design of algorithms, and the application of artificial intelligence are done in an ethical way, particularly in education. And the response from the educational representative of the government in the Lords was that they understood this issue very clearly, and that they would be taking cybersecurity very seriously. That reveals a huge misunderstanding about what that question was about - it wasn't anything to do with cybersecurity, it was something much more to do with artificial intelligence.
Now, I don't say this to mock either of those politicians or any politician. I'm just trying to demonstrate how little people in decision-making places, when it comes to policy, actually understand about artificial intelligence. And I'm not blaming them - it's not their fault. Why would they understand it? Who has ever helped them to understand it? So I think if we really want the best scenario for work and AI, we have to focus on that education piece. And we have to focus on prioritizing AI that's not all about purely driving profits. We need to look at the amazing things we can do with AI to improve human wellbeing, human mental health, and the way in which people interact together in the workplace. If people can be supported, for example, by AI doing more of the perhaps slightly dull, tedious, repetitive jobs, so that they can really focus on the things that use their human intellect, their human skills, they might feel a little bit less stressed, they might feel more satisfied. Of course, it's not a panacea, and that's not the case for everybody, but I think we have to look holistically at individual people and their role in the workforce, and at workforces; we have to think about what future workforce model we want to build for our organization, for society. And that's where we need to start, rather than "okay, how can we use AI to be more efficient, to drive the economy, to make larger profits". I'm not against profit - I'm just saying I think that's been too much of a priority to date. And we need to step back and think about it in a much more holistic way.


James Paskins  29:07 

Thank you, Rose, Nimmi. I'm afraid that's all we've got time for today but thank you so much for joining me and talking to me on this fascinating topic. You've been listening to Disruptive Voices from UCL Grand Challenges. Join us next time for more.


Nina Quach  29:22 

This episode of Disruptive Voices was presented by James Paskins, edited by Nina Quach and produced by UCL Grand Challenges. Our guests were Professor Rose Luckin and Nimmi Patel. The music is by David Szesztay. If you like this topic, check out the latest episode of The UCL Coronavirus podcast with Dave Cook and Anna Cox. Professor Cox is also launching a brand-new series on work, life, and wellbeing, called eWorkLife. You can find all those, and more, at UCL Minds.


Transcribed by https://otter.ai