
UCL Minds


WWWAI - Episode 1

In this episode we discuss what “good” work means with Rob McCargow (PwC) and Mary Towers (Trades Union Congress), and how AI can impact this - considering both positive and negative uses of AI technologies in workplaces.

Working well with AI Ep 1: Good work with AI

[Music]
Rose Luckin
Hello, and welcome to the UCL and British Academy podcast series, Working Well with AI. I'm Rose Luckin, Professor of Learner Centred Design at the UCL Knowledge Lab. In this podcast series, we're exploring how artificial intelligence, AI, is changing the world of work. AI has long been predicted to reshape our working lives, and it has developed in leaps and bounds over the past decade. And as we emerge from a global pandemic, we're rethinking how we work, what sort of work we value, and what we need for the future. 
In this episode, we will be discussing what good work means. We'll consider what makes work enjoyable and rewarding, and how AI might enhance our working experiences. But we'll also be paying attention to how AI might exacerbate inequity in work, degrading worker experiences. We'll cover topics such as automated decision making, worker privacy, the use of data for good and for bad, and how we can have a say in the ways that technology changes our working lives.
Welcome!
With me today are Mary Towers from the Trades Union Congress, and Rob McCargow, Director of AI at PwC. So Mary, can I start with you, please? Can you tell me briefly, what gets you fired up in your work about artificial intelligence?

Mary Towers
So thanks, Rose! In a nutshell, at the TUC, along with our member trade unions, we're on a mission to raise awareness about the use of AI to recruit and manage people at work, the impact that this has on workers, as well as the opportunities that AI can present to them. And just to be clear, we're talking about the use of AI to make decisions as important as who gets fired, who gets hired, how work gets allocated, who gets a bonus, the type of decisions that really do impact on people's lives. And so as well as seeking to raise awareness about the very existence of these types of technologies - and awareness is shockingly low - we're also seeking to raise awareness about how the lived worker experience of technology really matters to all of us. And trade unions are uniquely placed to make the worker experience heard. And that's why in the context of the TUC's AI project, through our research and our legal report, our AI manifesto, we've been advocating for changes at work and in the law to ensure that human values and dignity at work aren't overlooked in the name of commercial and technological advancement.

Rose Luckin
I can understand why that would get you fired up. It's such an important topic, isn't it? And the fact that so few people know, I think is a concern. Rob, over to you. Likewise, what really sparks your interest most about the work that you do with artificial intelligence?

Rob McCargow
Well thanks for having us on the podcast today, Rose, this is a great opportunity to talk about this key topic. Coming from a background in HR resourcing before I moved into technology, I've always been interested in this space, full stop. What's been interesting, though, is, as Mary was saying, how poorly understood the collision of the world of work, employment and technology really is. And similar to the TUC, we also engage with organisations across the board in every sector, so we have quite an influential voice in the conversation. What's been of particular interest to me, though, is how these conversations have evolved over the last five years. I think that moment five years ago was when governments started awakening to the implications of AI; they were increasingly aware of the economic opportunity posed by AI. However, even five years ago, it still felt quite ethereal and hypothetical. Many of the talks I was giving at the time, or the evidence I was giving to the All-Party Parliamentary Group on AI, on which I sit, were still fairly hypothetical in nature. What has happened in particular over the last year and a half is that the pandemic has brought this to life. We're seeing real-world impacts of technologies like AI in the workplace surfacing in the press, and many of these are too important to ignore. And it is crucial that businesses - not just HR directors, but executives across the board - are attuned to the opportunities, but equally, fully aware of the downside risk, and walk into this opportunity and situation with our eyes wide open.

Rose Luckin
I couldn't agree more. And I think it's a really interesting point you make about how the rhetoric has changed, and things have become more concrete, with more real-world examples, which is extremely useful. And today, we're really focusing on the subject of good work, and what that could mean alongside AI. So again, I'm going to come to you first, Mary. I mean, you've already indicated some of the areas of concern for the TUC. And I know you've also done a lot of work on what workers want. So could you give us some ideas about what you think makes for good work?

Mary Towers
Sure. Yeah, thanks Rose. So I mentioned our AI project - we carried out some research to establish how the use of AI at work was impacting on workers, and using that research we identified a series of objectives and values. And so based on those project outcomes, I think I'd like to highlight three of the potential features of good work in a world of being managed by AI. And those three features are worker voice; equality, and that's equality of reward, treatment and access; and also dignity at work. And so first, looking at worker voice. So, we say that the key to good work is strong worker voice. Any good relationship is based on good communication, and the employment relationship is no different. In that vein, we believe that everyone at work should have a say in deciding whether or not AI is introduced to make important decisions about people. And it's through active communication and consultation that the negative impacts of AI on workers can be best avoided. 

And good communication at work can come in many forms. So for example: employers carrying out active consultation with unions and workers before implementing new technologies; employers and unions coming together to negotiate collective agreements that have provisions on technology at work; and also not to be overlooked in any way is the importance of individuals communicating on a one-to-one basis, you know, union reps, workers, managers, actually talking to each other in an open, constructive way to solve problems. And to do this, we all need the right vocabulary, and the right understanding to communicate effectively about technology. 

And listening to workers, I want to stress, is good for the economy and for productivity, you know, not just for workers. I think that's a really important point to make. And then secondly, equality. And I mean equality not in a singular way, but equality in terms of reward, equality in terms of treatment and equality in terms of access. So we think workers should have a fair share in the rewards of technology at work. For example, through good terms and conditions, good pay, good hours, a good working environment. And so I think I'd sum that up as a fair day's pay for a fair day's work. You know, workers don't want the use of AI to result in more insecure work, lower pay, work intensification. They want good work for them. And that means the opposite of all of those things! They want secure work, you know, higher pay, and fair requirements in terms of the amount of work that they're required to produce in the time given. 

And then crucially, fair treatment, you know, not to be discriminated against, and then equal access to AI so that no one is shut out of the benefits of AI. You know, for example, a disabled person, someone who's blind, ought to be able to access AI-powered training just as much as anyone else. 

And then finally, and really crucially, dignity at work - and we called our AI manifesto at the TUC "Dignity at Work and the AI Revolution". And the reason that we chose that title is because our research revealed the extent to which the use of AI at work can threaten human dignity, and human connection and human agency. And we heard from workers that they really don't want to be subject to excessive monitoring and surveillance, because that impacts negatively on their mental health and on their physical health. That they'd like work-home boundaries to be respected, and that they value being free and being trusted to make decisions at work and to carry out their tasks with a degree of agency. And they also really value in-person engagement, you know, having one-to-one, in-person conversations with their manager. So I'd go so far as to say, really, that dignity is the foundation of all good work, and that respect for human values and dignity should be at the heart of the development and application of new technology at work. 

Rose Luckin
I agree. And I just wonder if one complication with AI, and you have referred to this already, Mary, is that a lot of people don't understand it. And therefore, it's hard for them to have a say in the decisions that are being made about what kind of AI should be brought in, because they may not know what it means when they're told that X monitoring system is going to be brought in, or whatever. So there's quite a job there, isn't there, for the trade union movement in making sure that people are educated enough in order to be able to use their voice effectively.

Mary Towers
Absolutely. It's a really important piece of work for us to do in terms of training reps and training workers; it's a piece of education work. But it's also a piece of work that we think can be done hand in hand with employers, because when we've spoken to employers and to people who work in HR, you know, what we've discovered is that essentially they've got the mirror image of the problems that we've had. You know, people within an HR department are finding, "Well hang on a minute, we don't understand how this technology works," you know, "we can't actually communicate about it! Are we getting what we really want from a technologist in terms of products that actually reflect the true nature of the employment relationship?" And so whilst we're undertaking a piece of education and training work with reps and workers over how to communicate about the technologies and understand the technologies, that's something that we believe we can work hand in hand with employers to achieve.

Rose Luckin
That sounds great, I think that notion of partnership is really important. And Rob, I was really interested to hear that you'd come from an HR background, I hadn't realized that before. But I know that you're very much interested in your current role in well-being at work. And I just wondered how you think that complements the kinds of things that Mary's just been talking about, and that the TUC have identified as making good work?

Rob McCargow
It's really interesting, this topic. So I think there is far greater alignment between the trade unions and business leadership than one would suppose. If you look now at not just the moral question, but the commercial and regulatory questions posed to business leaders around the ESG agenda, it is absolutely top of mind. We just announced a huge investment in that, to advise clients to get fit for purpose around this agenda. Corporate purpose, I think, has come into question much more: how have organizations treated their staff during the pandemic? I think this is now being looked at by talent out there in terms of who they want to join. And I think we've got to agree that the absolute fundamental bedrock of the world of work has shifted beyond recognition, and there's no going back. I just thought I'd pull a couple of stats out from a really major survey we did a few months ago, our hopes and fears survey. We surveyed 32,500 workers across multiple countries and industry sectors. 72% want to work in this hybrid way of working, only 9% of those who can work remotely want to get back full time, and 19% of people don't want to go back at all. 

I mean, I appreciate there are big differences between blue- and white-collar workforces here, I do appreciate that. But this is now a different way in which organisations are contracting with their people. And in particular, the measures that determine success, and longevity, and well-being are now in the main not appropriate; they're not fit for purpose. Most of these things are based on things like employee sentiment, garnered through surveys and pulse survey data, or attrition data, when it's too late in the day to do much about it. So you've also got, and Mary and I have spoken a lot about this in previous events, there's already that sort of background anxiety in workers evident around things like the advent of job automation. This same study picked that out and said that 60% of workers surveyed were worried about automation putting their job at risk. And 39% of them said that their job could be obsolete within five years. So you've already got this sort of burning platform of anxiety in workforces. And now this new situation of hybrid working. And I think the business leaders I speak to are walking into this with good intentions about using this opportunity to make work better. My deep concern with this, and I'm sure both of you will agree, is that we could be walking into this world of unintended consequences, and certain parts of your workforce could end up being more adversely affected than others. 

So this was a huge motivation for me, because I really struggle to sort of operate in this binary world of AI is either good or bad. And I think there's been a huge amount of debate around where it's gone wrong, and we'll talk about this, I'm sure, during the conversation. One thing that we've done is a huge project with 1,000 volunteers, combining everything from biometric, cognitive, psychometric and contextual data, to give workers their own insights into what's making them tick in the workplace. Empowering them to help with behavioral change and pointing towards interventions to help their well-being. The employer benefits from this, because we think a happy and vibrant workforce that's empowered around their health and well-being is, by default, much more productive and performs to a higher standard. So that's been a hugely exciting area: to try to take inspiration from the likes of Mary's work at the TUC and the many other civil society bodies I work with, and show how it can be done well, with the safeguards in place, with the right people in the room from the start, and not just leaving it purely in the hands of the technologists to lead from the front. 

Rose Luckin
Yeah, that's the risk, isn't it? Sometimes thinking it's something that the technology department will deal with, and that's that, rather than looking at it more broadly. And that certainly makes me think, in terms of the conversations we've had so far, that there's a lot to be said about the experience of work. It's very nuanced, isn't it? It's not that it's either good or bad, so either your job is going to go, or it's not going to go. It's much more about changing the way that people work and the experience they have. And I'm just wondering, from your perspective, Rob, where you see the areas that we should pay most attention to. Is it those unintended consequences? Or is there something else? And when I say pay most attention to, that could be positive or negative.

Rob McCargow
I think it depends on what lens you're looking at it through. And I totally agree with what Mary said about the need to upskill and empower non-technical managers to understand the implications of the technology they're acquiring and deploying. And that anecdotally corresponds with everything I do. I spend a lot of time running workshops and training courses, and awareness is quite low. So that's crucial, I think. And the same with upskilling workers to understand the implications of this technology: that's key. 

I think we've got to think about the opportunities to use this technology to make things better. And I think there are great opportunities through personalization to create a work experience that's much more suited to the individual's circumstances. If you think about this, you know, in terms of an inversion in the employer-employee contract, I think in this new way of work, the very essence of the way that work is consumed could change. And the technology is now potentially getting to the point where you can start thinking about how you produce the optimal environment, conditions and, you know, circumstances in which that person is empowered and they flourish. Not just around the time of day they're working, and where, and with whom, but with the tools at their disposal to liberate them to do their best work. And I appreciate this sounds a bit utopian. I can assure you, I'm the first to be the critic in the sector from an industry perspective. But I remain upbeat that if it's done well, it can lead to great results. But I'm not so blind as to think that you can just let this run loose and it'll happen by default.

Rose Luckin
It is that isn't it? If it's done well - I agree with you. And I was thinking as you were describing the well-being work with your 1000 volunteers. That's a really nice way of helping people to understand a little bit more about what's possible with AI, because it's very much about them. And it's about their data and helping them to understand their needs more. That seems like an excellent way to perhaps start to alleviate a few of people's concerns. Because they have a greater understanding of the benefits that can come from this kind of work.

Rob McCargow
I think so, I think so. And I think at the heart of this is the word "trust", isn't it? And I think in many of the situations that have arisen in the last four or five years in particular, it's been where there's been an innate lack of trust between employer and employee. Technology's been deployed at people rather than co-created with them. Communication lines have been poor. And no surprise it's led to some poor outcomes. And I think there is a good way of doing it, and engagement up front, and full transparency, and starting from that strong bedrock of trust is essential. Otherwise, it just doesn't work. People disengage, especially when you're inviting people to volunteer to wear a biometric device. That just wouldn't work if there's an inherent lack of trust and a power imbalance at the heart of the deal.

Rose Luckin
Yeah, I can see that. Absolutely. Mary, what are some of the main concerns that you see coming in from workers in your role at the TUC?

Mary Towers
Yeah, so I think Rob’s right about the speed at which these technologies have been rolled out. And I think, you know, possibly in no other industry would a product be rolled out without having first tested what the impact of that product might be on the consumers of the product. And what's interesting is that it's sort of often for some reason overlooked that, in a way, there are two consumers for these kind of HR, AI powered tools. And those are the employer but also the worker. And so what we saw in our research, were a sort of series of negative implications that workers were experiencing as a result of these technologies. 

And yeah, maybe I could just take you through some of the main messages that came back from that research, if you'd like? So firstly, and we've already touched on this, we found that a lot of workers just simply weren't aware of when AI was being used to recruit or manage them. So for example, one worker commented in response to our survey, "I just can't actually answer any of these questions because I just don't know!" And that in itself is really concerning. Also, workers didn't generally seem to be asked for their consent before new technologies were introduced at work. And workers actually really wanted to be consulted, but unfortunately, consultation just wasn't common. So only about a third of workers were consulted before new technology was introduced at work. And then returning to this idea of trust: workers felt a really low level of trust and confidence in the ability of AI to make decisions about them. So only about 28% or so of workers were comfortable with the idea that AI could make decisions about people at work. And then also, harking back to our point about communication: workers felt there were really significant barriers to being able to challenge decisions that were being made about them by AI. And one of those barriers was an inability to actually communicate about AI. But our research also found that employers seem to just make an assumption that because it was technology making the decision, it must be right, because it's technology! And so often people found themselves in this loop where they knew there was an unfairness, but they just couldn't challenge it.

Rose Luckin
And that does suggest a lack of understanding on the part of employers about what AI is capable of doing. There's so many examples of where inappropriate decisions are made that that they certainly should be aware of, I would have thought.

Mary Towers
I think one of the problems is literally the complexity. So you have an outcome which seems straightforward, but behind that there is so much complexity. You know, ranging from what type of data was used to train the algorithm, to what the actual context is. How is that technology now being applied? Is it taking into account environment? Is it taking into account all the different factors that might be relevant in a situation in which someone's exercising judgment? It's so complex. So I think that's, you know, perhaps one of the reasons why there's just this enormous barrier. And that's where things like transparency and explainability are just so important. We also found that there was a negative impact on physical and mental well-being. And that was due to work intensification - so, unreasonable demands in terms of targets that were being set by the technology, again, without taking proper account of context and environment. And then also, I mentioned a lack of agency over decisions and how work was being done. That in itself appeared to impact negatively on mental health, and particularly the inability to challenge decisions was impacting negatively on mental health. And then also, workers were really concerned about unfair, and indeed unlawful, outcomes. So discriminatory outcomes, but also all different types of unfairness - you know, for example, low ratings that had been based on inaccurate data that then impacted on the ability to access performance-related bonuses. That type of unfairness. So a really, really broad range of different concerns that workers expressed.


Rose Luckin
There's a lot to think about in what you've said, isn't there? And a lot of it is quite concerning. Both you and Rob agreed about the need for trust as a key way of moving forward. I'm sure that people listening to this will want to replay a lot of what's been said to take it in again, because there's a lot of very useful information there. I'm just wondering now what you feel is the way that we move forward. So, what's your vision of what good work could look like in the future, starting with Mary?

Mary Towers
So I've already touched on, I think, some of the values that we're suggesting employers adopt. And we set out those values in our manifesto, and they include things that we've already explored, really: the importance of worker voice, the importance of equality, the importance of transparency and explainability. So rather than revisit those values, perhaps I could just mention a bit about innovation, and how workers themselves might benefit from the use of AI at work. So I suppose a kind of utopian good work future with AI, for me, would involve workers having equal control over their data and how it's used, and workers would be able to collectivise their own data, and then, with the help of trade unions, realise the power of that data. So for example, by using it to evidence trade union campaigning for better terms and conditions at work. Or perhaps engaging data scientists, with trade union help, to conduct analysis of the data to secure equal pay, or indeed to identify patterns of discrimination. So I think in this ideal future, all the key parties would be collaborating and working together, and there would be equality in terms of a fair share and fair treatment of the rewards of the use of technology at work. And that applies also in terms of the potential benefits of innovation.

Rose Luckin
Yeah, that makes a lot of sense to me. Rob, from your perspective, what do you think good work, for people working for PwC, what do you think good work alongside AI could look like?

Rob McCargow
I mean, I love what Mary had to say there about, you know, this being much more able to benefit the whole of your workforce rather than just a small coterie. And this whole upskilling agenda is vital at the heart of this, isn't it? You know, we don't know what the jobs will look like in 5, 10, 15 years' time. The only one I'm sure of is that hairdressers and barbers are apparently fairly invulnerable to the advent of technology. The other one I heard was that AI struggles with British sarcasm quite a lot, so sarcasm trainers will be in high demand in the years to come. But for the rest of us, you know, we just don't know, do we? 

Rose Luckin
No

Rob McCargow
And we have to ensure that everybody's got a fighting chance for a viable future. And that's one thing we've done: rather than just rolling out a few training courses for technologists, we decided to upskill the entirety of our 285,000 people globally. This is everyone from the people on the front desk and the receptionists through to the board, on, you know, data analytics tools, robotic process automation, right up to fully fledged AI over the course of time. With the promise that, you know, we can't say what job you'll be doing in 5, 10 years' time, but if you opt in, we can say you'll be doing something with us. And that's created quite a nice bottom-up groundswell of attraction towards actually putting, you know, the control over your future within your own hands. So there's something quite powerful about, as Mary was saying, innovation coming from the workforce, rather than being retained in the little R&D hub. 

From a vision point of view, though, extending what I was saying on that project I've been running: you know how we've seen quite a paradigm shift over the last few years? It sort of started with some of the big investment companies seeing this change in direction in the market around things like climate change, and then gender inequality; we've seen people stepping in and starting to think about how investments are made depending on the composition of boards. We've now got a huge focus, as I said before, on ESG. And this is now locked in, I think, to many investment portfolios, and capital market scrutiny is increasing substantially off the back of this. What I would love to see is if this tool, as it evolves over the next few years, could see executive compensation linked to the wellness of their workforce. And can you imagine a competitive drive towards outperforming your peers on how well your workers are looked after? And there are mutual benefits there, aren't there, in terms of talent attraction and retention, and output and production? So, you know, it could happen. And I think the significant shift we've seen in the way that work has changed in the last year and a half has given, I think, many different parts of the workforce and management the belief that things can change demonstrably and quickly, rather than being iterated at the edges, and can be done for good, not just for negative means. So yeah, huge opportunities to redefine the very essence of the way that work is delivered and the way management are compensated for that.

Rose Luckin
Absolutely. And it's interesting, you draw that analogy with the speed of change over the last 18 months with the possibilities for people actually embracing, perhaps more wholeheartedly, the fact that good work alongside AI can happen in lots of different ways in ways that they hadn't thought of previously. And that actually it can happen quickly. I think that's, that's fascinating. 

That was really a very enjoyable conversation. Thank you so much. Our guests today were Mary Towers from the Trades Union Congress, and Rob McCargow, Director of AI at PwC. Thank you both, Rob and Mary.

[music]

You've been listening to Working Well with AI. This episode was presented by myself, Rose Luckin. Editing and mixing is by Suzie McCarthy. The series is funded by UCL Public Policy, UCL Grand Challenges and the British Academy. To find out more about the AI and the Future of Work project, search for "UCL AI and the future of work". Thanks for listening, and I hope you join us again next time.