UCL Mathematical & Physical Sciences

Dr Jack Stilgoe on self-driving cars and ethics in science

Listen to Dr Jack Stilgoe from UCL Science & Technology Studies talk to our hosts about his research into self-driving cars and artificial intelligence

In this episode of Hypot-enthuse, listen to Dr Jack Stilgoe from UCL Science & Technology Studies talk to our hosts about his research into self-driving cars and artificial intelligence, how emerging technologies should be governed, and how ethics overlaps with science.

Laura Hewison  0:00  
Hello and welcome to Hypot-enthuse, a podcast all about science, maths and the world around us from the Mathematical and Physical Sciences faculty at UCL, or as we like to call it, MAPS. I'm your host, Laura Hewison, and I'm completely unqualified to be here, but as always very enthusiastic. Sadly, my normal co-host, Sophie Lane, can't be with me today, so I'm flying solo. But my guest here is Senior Lecturer in Science and Technology Studies, Dr Jack Stilgoe. Welcome, and thank you for being here. Jack, can you tell us a little bit about your area of expertise and your research?

Jack Stilgoe  0:40  
So I'm a bit unusual, I guess, for this podcast, in that I'm a social scientist who looks at science and technology. My interest is in the way that science and technology relate to society. So I'm interested in looking at how new technologies, for example, are taken up by people, the changes that they bring about in the world, why some technologies fail and why some succeed. So I'm not a scientist or an engineer myself.

Laura Hewison  1:09  
Funnily enough, neither am I. We're both on the same podcast, though, so we can't go wrong. And you were just telling me prior to this podcast that Mary Shelley's Frankenstein has some impact on what you teach and what you look at. How does that work?

Jack Stilgoe  1:29  
Well, this year, 2018, is the bicentennial of the publication of Frankenstein. So I thought, what better way to engage with questions of science, technology and responsibility than to actually properly look at Frankenstein? So that's what I've been doing with my students this week: taking them through discussions of what Frankenstein means now.

Laura Hewison  1:51  
How would a Frankenstein situation go down with the current 2018 UK Government?

Jack Stilgoe  2:00  
Well, it's quite interesting. When Mary Shelley wrote the book, she was thinking about the science that was around her at the time and the promise, I guess you'd now call it the hype, that was around the possibilities of being able to reanimate dead flesh and that sort of thing. But now we have the tools that Mary Shelley, I guess, was sort of foreseeing; in a way, we have the ability to intervene in nature in ways that we haven't had before. And so the questions of responsibility that come with that power are even more relevant now than they would have been back in Mary Shelley's day.

Laura Hewison  2:37  
So do you examine the ethical side, or do you look at it more from a policy side, of how we govern this and how it affects, you know, society?

Jack Stilgoe  2:47  
I am more of a policy person. You know, as they say, some of my best friends are ethicists. But I'm not an ethicist.

Laura Hewison  2:55  
I won't hold that against you.

Jack Stilgoe  2:57  
So I'm interested in how you actually come up with good policy to help science and technology get aligned to social goals, so that we can have that democratic discussion about the world that we want to live in, and think about how science and technology fit into that world.

Laura Hewison  3:14  
So, thinking about the world that we live in now, what do you think are the kind of Frankensteins of today? What are we really looking at that will change our world and how we see it, from a kind of science and policy point of view?

Jack Stilgoe  3:32  
Well, if you think about the areas of science and technology that seem to provide power over the future at the moment, a lot of the most Frankensteinian ones, I guess you could say, are in the life sciences.

Laura Hewison  3:47  
I love that: "Frankensteinian".

Jack Stilgoe  3:50  
So in the life sciences, the recent debate about gene editing, right, that's really live: the power to rapidly intervene in the structure of DNA, which presents some enormous questions of responsibility. But also, I mean, I've done work on geoengineering, for example, the idea that you might be able to use technologies to counteract climate change, with the potential to radically change the environment around us. Or you can think about artificial intelligence, the creation of some sort of artificially intelligent creature, I guess, in Frankensteinian terms: what that means in terms of how we feel about ourselves as human beings, but also what responsibilities the creators of artificial intelligence should have for overseeing their creations and being better, more responsible parents, if you like, than Victor Frankenstein was.

Laura Hewison  4:49  
We don't want another Terminator on our hands, really, do we? So what areas of research are you really focusing on at the moment, kind of looking at all of this myriad of issues that will affect us in the next 10 to 20 years?

Jack Stilgoe  5:08  
So, for me, whenever hype builds around a particular area, I tend to get extremely interested, and I think that we should start asking some big questions about that area. At the moment, all the hype is about AI, so I've been asking some questions about artificial intelligence. But I've been particularly interested in looking at cases of artificial intelligence in the real world, so not just as a sort of abstract software thing, but looking at a real-world example. So I got interested in self-driving cars as a sort of case study in machine learning, a case study of how a technology, an AI technology, can learn in the wild. So in the real world, there are things driving around in various places that are computers learning to drive. And that seems to me an extremely interesting set of questions for somebody interested in the governance of emerging technologies. I got involved in this, actually, as is often the case with social scientists like me, when stuff went wrong, because when stuff goes wrong, you start to see the reality of a technology that is hidden behind the hype, behind the veneer, behind all the promises. So I first got interested when a Tesla electric car crashed in Florida, and its occupant, driver, passenger, we don't know quite what the status of the person was, died instantly. And then there was a crash investigation into what went on. So I wrote a paper about that crash and what it told us about the reality of machine learning. And just as that paper was coming out, an Uber in Phoenix, Arizona ran over and killed a woman when she was crossing the road with her bicycle. So these were, to my mind, test cases: experiments with technology that are happening in the real world, with some really important questions. Should we allow that sort of experimentation? What are the ethics of that form of experimentation? Do people have a say in the experiments that are taking place? And that forces you to confront a set of questions: given that these things are never going to be completely safe, because no technology is ever completely safe, we have to ask, how safe is safe enough? And that forces us to say, safe enough for what? Right? So we have to think about what the purpose of this technology might be. Might there be huge benefits in being able to cut the number of road deaths? Might there be benefits in terms of congestion and the way that we organize traffic in our cities? Might there be benefits in terms of being able to free up parking spaces? In a world of self-driving cars, you can imagine all sorts of possible benefits against which to evaluate any possible risks. And then we have a set of questions about, well, how do we know the risks that we face? Are there likely to be new risks created by computer-controlled vehicles, you know, the potential for a system failure rather than an everyday, well-known failure of an incompetent, drunk human being crashing their car into a wall? So we change the set of calculations that we make. At the moment, this technology is developing in a rather haphazard way, as different cities around the world are doing their experiments.
And the question that I and some colleagues in the UCL Transport Institute are asking is whether we can do things better, whether we can get more of the good stuff from technology, avoid the bad stuff, and help create desirable futures, rather than do a form of technological sleepwalking, which is what we do so often with technology. We just sort of let it happen, and we go, oh, that was good, that was bad, let's try and clean up some of the bad stuff, maybe we'll try and fix that. You know, the sense that there might be a better way to govern technologies is what's driving us.

Laura Hewison  9:42  
Well, I happen to have read that there have been some recent tests going on in Coventry and Milton Keynes, so it's spreading to the UK. And I also have a quote for you from somebody who has, I would say, a slight vested interest in autonomous cars: Elon Musk. He was quoted in an article online saying that it's "really incredibly irresponsible of any journalist with integrity to write an article that would lead people to believe that autonomy is less safe, because people might actually turn it off and then die". So how do you measure whether the 40,000, or near 40,000, people who die on American roads every year is any better or worse than however many people may or may not die through autonomous cars? And how do you kind of legislate against, or for, that? What's better, innovation or people's lives?

Jack Stilgoe  10:47  
Well, so if you take the particular question of safety, right, if that's your focus, and you were to say, okay, that's the purpose of this technology, and that's one of the stated purposes, the justifications, behind self-driving cars, and if you're extremely optimistic about the potential of the technology, like Elon Musk is, then it leads you inevitably to an argument where you're certain that these things are going to be safe, and you are certain that currently a lot of people are dying from the current situation. Okay, so for him, it's an inevitability that this technology will develop and that it will be good. And therefore, if you are asking complicated questions, then, as he has written, you know, if you write articles that are negative about this technology, you are killing people. And in the past, people have written similar things about agricultural biotechnology, right? They've said, if you ask questions about genetically modified crops, then you have blood on your hands, because this technology will inevitably be beneficial. I see that as a sort of form of moral blackmail. I can see why he would say it. But what he's doing there is presuming that we know about this technology, and at the moment, we don't. And if we don't know about the technology, why would we accept his version of the technology rather than anybody else's? Right? There are all sorts of choices to be made here.

Laura Hewison  12:13  
Any coverage I see in the news, in the worldwide press, always tends to focus on the negatives. When I looked a little bit more at the kind of engineering press and things like that, they go really into the innovation and how exciting this is. And I read somewhere as well that the kind of leaps and bounds that autonomous vehicles could make could drastically change our society, how we interact with vehicles, and not only how we get around. How do you see them coming into the British world, or the UK world? Do you see that having an impact on our everyday life in the next kind of five to ten years?

Jack Stilgoe  13:04  
I think so. Behind that question, there's a question of whether a technology that is developed in one particular context is applicable in another. I think this is really interesting when technologies themselves are learning to drive in particular environments. So at the moment, Teslas are generating huge amounts of data for Elon Musk, largely in the US; there are also a lot of Teslas in Norway, for some reason.

Laura Hewison  13:32  
The environment there, the fjords, they quite like it there.

Jack Stilgoe  13:35  
They do like it. And so they're generating an awful lot of data. Will that be applicable were that technology to be transferred over here? Well, perhaps, but there would be a lot of work that would need to be done there. I think there's a really interesting set of questions about whether Britain has an opportunity to do things differently, not because our roads are particularly different, but because our culture of transport is very different. So we're in a city at the moment, London, that predates the invention of the motorcar, that is a nightmare if you're a car driver, for a lot of people. You can't imagine that a technology that has learned to drive on the streets of Phoenix, Arizona can just be moved over here and will be able to do its job.

Laura Hewison  14:22  
Beautiful wide roads on a grid system. Absolutely. I'd invite you to drive around Soho for half an hour.

Jack Stilgoe  14:29  
Exactly. Exactly. So, you know, taking a Waymo car through Dickensian Soho would be quite a complicated task. So there is an opportunity, I think, to reimagine. So, Elon Musk would claim, in his view, where he's saying if you're writing negative articles you're killing people, because I have the vision, the only vision, and if you're standing in the way of that vision then you are the problem, right, he is suggesting that there is only one way forward. And actually what we see with transport is that there are always lots of possible ways forward. I think in Britain we have a real opportunity to work out how self-driving technology can fit within our public transport system, to, for example, help with questions of inequality and mobility. How can we make sure that people who don't have access to transport do have access with new technologies? How can we make sure that transport improves public space, rather than makes it worse? How can we make sure that transport improves our currently congested streets? At the moment, there's a real danger that if everybody were to buy into Elon Musk's vision of where this is going, you would have a future in which, rather than walking, people are taking a self-driving car from one place to another just because they can, and that self-driving car is then maybe driving itself somewhere to find a parking space. That sounds like a nightmare for congestion, right? The idea that just because the computer is in charge we solve traffic is fanciful. And also, you know, a lot of these visions of the future, even though the people behind them are expressing them with real certainty, are themselves really old responses to the problems that are facing us. So America, you mentioned, has 40,000 road deaths a year, or thereabouts, which is, per mile, about three times what the road deaths are in Britain or Sweden or a really safe country. And the difference between those countries is not whether or not they have self-driving cars. It's actually some fairly boring things: the condition of our roads, whether we enforce drunk driving laws, what age people learn to drive at, you know, how well maintained our cars are, things like that. Those are quite boring things. So, you know, if Elon Musk really, really cared about the 40,000 road deaths a year, he probably wouldn't be doing self-driving cars as the inevitable response to it. He'd be doing some of those easier things, or rather, some of those things that require political intervention, rather than just the invention of artificial intelligence.

Laura Hewison  17:20  
I personally live in central London, and I don't tend to take cars unless, you know, I'm really, really far away from my bed and I catch a taxi or an Uber or something like that. But when I do go in cars, I get terrible car sickness. How will we look at problems like this, that might not necessarily be, you know, the safety thing or the environmental thing, but might be a problem that comes up out of nowhere, out of, you know, everyday learning, once autonomous vehicles become a part of our lives?

Jack Stilgoe  18:04  
So, in the normal way that we talk about technology, what you mentioned is an example of what might be called an unintended consequence: technologies do all sorts of things; they do some things that we intend them to do, but they do other things as well. And in the case of a self-driving car, it may be that car sickness becomes a big deal, because nobody's looking out the window anymore, because they're all, you know, reading or playing chess or doing some lovely activity that you can do in a car now that you don't have to drive it. I love playing chess in cars. Yeah, chess in cars, car chess. How you would deal with those unintended consequences is a really important question for policymakers. The first question is whether you can anticipate those unintended consequences. In the case of car sickness, people have already thought, yeah, that's going to be a problem. And so, you know, some people have said, oh, it's a problem that we can fix, so let's come up with ways that we can make these things super smooth, or ways that we can help people in cars, even if they're not driving, look at the horizon, or whatever. So they think, yeah, maybe we can fix this. But there are going to be other things that won't be anticipated at all and will just genuinely take people by surprise. So when it comes to self-driving cars, the obvious thing to look at is the uptake of cars in the early 20th century. A hundred years ago, cars were still a really peculiar thing that some very rich people had. And what happened as people got cars is that it changed the shape of our cities in ways that people couldn't have anticipated. If you'd asked Henry Ford, right, when he'd created the Model T and made cars affordable to the middle classes in the US, "what's this going to do?", you couldn't have blamed him for not foreseeing that Los Angeles would be an entirely car-dependent city, and that actually the fabric of the American West would be changed completely by dependence on the motorcar. These are unintended consequences that we are now locked into.

Laura Hewison  20:28  
And we might see, you know, both beneficial and detrimental changes to industry, in a similar way to the cars back then that drove, you know, carters out of business, with their deliveries of horse-and-cart goods and things like that. What businesses might we see suffer from an autonomous car revolution, like an industrial revolution?

Jack Stilgoe  20:53  
Yeah, so the Industrial Revolution radically changed what business looked like, or rather, business changed during the Industrial Revolution, because, you know, the cause-and-effect thing isn't obvious. But perhaps more importantly, in policy terms, it radically changed what families looked like, what work looked like. It created opportunities; it allowed women to enter the workforce in large numbers. And so these sorts of big social changes are bound up with big technological changes. And it's really hard to think, well, how do you get those right, given how unpredictable those sorts of changes might be? But as we go along, the crucial thing is that we need to think about this stuff. We need to think about who is likely to benefit from this. Is it really going to be the people worst off in society who benefit, for example, from self-driving cars? Or are they likely to be taken up and used, or maybe even bought, by the same people that adopt other technologies, who in most cases are rich people in rich countries?

Laura Hewison  22:10  
So you're really going to examine a lot of this with your next project, which is called Driverless Futures?

Jack Stilgoe  22:17  
It is. I'm glad you got the question mark at the end; it's quite hard to say the question mark, but it's called Driverless Futures?

Laura Hewison  22:27  
Well, I'm Australian, so I go up at the end of every sentence, right? Anyway, in that case it fits the name of this project extremely well. I should do all your announcing.

Jack Stilgoe  22:35  
Yeah, thank you very much. So yeah, Driverless Futures? That project is asking questions of the people involved in developing self-driving cars: asking them what they think the desirable futures are, what they're interested in, what they're worried about, what keeps them awake at night. We are going to be speaking to the various stakeholders involved, so people that aren't developing the technology but may have an intense interest in it, whether those are cycling groups or truck drivers or anybody making policies, say for London, people in Transport for London. And the other thing that we're going to be doing as part of that project is having discussions with members of the public, who normally don't get asked about this stuff, because normally they don't have much to say about new technologies until those technologies are presented to them, by which point it's a sort of yes or no. And our sense on this project is that it's worth having that discussion upstream, while members of the public might still have a say, might still have some opportunity to shape the way that this technology is going. So we'll be going out and doing some public focus groups as well. And the aim of the project is to present some options for governing the technology in the public interest, so to say: how might we do things differently in the UK, but also how might we in the UK inspire others to do things better? Rather than just a sort of privatized version of the future, how might we have a more democratic version of the future?

Laura Hewison  24:20  
Excellent. Well, we've unfortunately run out of time today, but I wanted to thank you so much for coming on. Thank you for having me. And you can keep up to date with more Hypot-enthuse episodes on the UCL SoundCloud. I'll see you next time.