(Theme music starts)
Hello, and welcome to IM@UCL: The Podcast, a podcast about the research at UCL that will revolutionise the future of driving. My name is Cassidy Martin, and I am your host on this journey of self-driving discovery.
(Theme music fades out)
One of my biggest fears with self-driving vehicles is the loss of control. If the car runs into a problem, I want to be able to take over. And this is a legitimate fear. In 2016, a Tesla Model S crashed into an 18-wheeler on the highway. The driver was apparently relying on Autopilot and not paying attention. And in 2018, an Uber self-driving car that was being tested with a backup driver in the driving seat struck a pedestrian. But despite these instances, studies have shown that self-driving cars are safer. And their safety will only continue to increase as more and more cars with these capabilities are on the road, drivers become aware of their vehicles' limitations, and humans are kept at the centre of every capability upgrade. For this month's episode, I spoke with a researcher who's using the sense of touch to get drivers' attention, and another who specialises in creating technologies by the people and for the people.
Just a note before we start, this podcast was recorded remotely. And sometimes you can hear a bit of background noise. Hopefully, this will not distract from the insightful content from our guests.
Let's get started.
I'm Helge Wurdemann. I'm a professor at UCL in the Department of Mechanical Engineering. And I'm looking into different types of materials and how we can use these to build robotic devices. In particular, I'm looking into how we can build devices that give haptic feedback – that is, the sense of touch – to humans.
Helge is working with Intelligent Mobility at UCL and applying his sense of touch expertise to create a unique safety feature.
When you're looking at the world of autonomous vehicles and vehicles that have a certain level of autonomy, we can see that there are certain types of feedback used to indicate and communicate with the user. You might have seen vehicles from car manufacturers that use a beeping sound, and some use a beeping sound in combination with an icon that lights up on the dashboard. We are looking into a driving seat that has small, soft robotic devices embedded all over the surface, and these will give tactile information to the driver. So, in another sense, to communicate with the driver what is happening at the moment. What is the car doing? Is there now time to interfere? Does the car want me to take over, etc.?
Can you give some examples of when a car might need a driver to take over?
When we are driving, we're driving along certain road markings. All of us have experienced road markings suddenly disappearing. And it's then difficult for us to keep in the lane, for instance. It is very similar for cars that have a certain level of autonomy. So, if road markings disappear, it's very difficult for the vehicle, which is essentially saying 'I have to be guided by these lane markings that are not there anymore'. This can be fairly instant, because they can just disappear because of roadworks, for instance. And those would be incidents where the car would ask the driver: 'Now please take back control. I cannot see road markings. You should now be in charge of driving. Please do so'.
Is it usually really quick between when they… Or can the cars sense it before? I don't know. Maybe that's a silly question.
This is a very interesting question, for many reasons, and I'll come to that in a second. When a lot of cars drive over road markings, the markings will essentially disappear. So, they might be visible for some time, then they might fade, and then the vehicle immediately requests a take-over. When we look into the future of vehicles, vehicles can also communicate with one another. We see this already when we use certain map data: we see other vehicle users, and based on that data, algorithms can tell us how long a journey takes. When vehicles can communicate with each other, they can also let other vehicles know in which location of the world, or in a city, road markings fade. So instead of a very immediate transition from 'now you have to take back control', vehicles understand that maybe in one kilometre or 500 metres there will be road markings that are not visible. Then the car can prepare the driver through feedback, for example haptic feedback, to take back control more intuitively. It's also important, when it's very quick, what the perception of the human is. Is the human subconsciously thinking continuously, 'any time now, the car could ask me to take back control'? That means the driver has to be continuously aware of their surroundings and the environment. When there's a more guided approach to requesting a take-over, it might also lead to more trust of the human in the vehicle. In the end, it's an autonomous vehicle, and an autonomous vehicle is essentially a robot.
Can that be problematic as well, I guess? Like if someone becomes too trusting, then do they end up putting themselves in danger?
I think that can also be an issue, when people trust autonomous systems by default. There has to be a mechanism. And I think a lot of research still needs to go into the question: how do we inform humans, and keep them informed, so that these scenarios do not happen?
As discussed in the intro, there have been accidents with vehicles that were driving autonomously. But the more we understand what leads to these accidents in the first place, the better protected we’ll be.
Bani Anvari and her team have been looking into these driving scenarios. They have analysed when, and in which scenarios, fatal accidents happen. And we're trying to replicate these in a simulated environment using the IM@UCL driving simulator, and then see how we can avoid these fatal accidents using the feedback. So, going into that part is very crucial, because I think that will then give us the possibility to try this out in the real world.
Yeah. And so, do you have research participants like people that come in that aren't familiar with it to try to test it out to see how they react?
So, I'm collaborating with a researcher from the University of Hanover, and during the pandemic we have not only developed the haptic seat, but also built our own little mobile driving simulator. We have done some initial tests on how drivers would feel when they get this feedback, and the results are very promising, because they are very, very similar to the data that we have seen when using visual feedback, like the little icons that light up on the dashboard of commercially available autonomous vehicles. So, our results can actually match those results. They can be as good, maybe even better, than what is currently on the market. And to confirm that, the driving simulator is a crucial platform for us.
I was fortunate to have the opportunity to visit IM@UCL and try out the famous feedback chair.
(You can hear people talking in the background. Helge and Cassidy are in the lab where the feedback chair is.)
So, if you want to have a seat.
Yeah, sure yeah. That would be great.
And then maybe you can... Do you feel something?
Let me know when you feel something.
Oh, okay. I don't feel anything right now.
(There is a vibrating noise)
Wait, what is that sound?
So this is compressed air.
So, I'm just feeding compressed air into the cylinder. And the air will then go in here, and then we have valves over there. So, this is just compressed air.
Now it’s available for robots.
Yeah, to be able to… Oh, yeah! Now I feel it! (laughs) Oh, it surprised me!
(People talking in the background fades out. Helge and Cassidy are no longer in the lab.)
The touches felt like the same little pressure points that are in massage chairs. Only, these don't massage, but instead give you a gentle poke at multiple points. Behind the chair, you could see a metal box that was maybe the size of a large shoebox. And this is where all the engineering elements of the chair were.
(You can hear people talking in the background again. Helge and Cassidy are back in the lab where the feedback chair is.)
This is what it looks like in there. Of course, there's pressure coming in. And then here are four fast-actuating valves, so these open and close very quickly. And these are not so quick – they are different in price, and that's why we had to do it this way. These are just the electronics to control everything. So, the black pipe is the supply. And then these will actuate, and you can actuate them in any pattern you want. So, you can have a sequence of them, like a wave, or all of them at the same time.
Now that Helge and his team have created the haptic feedback chair, they are working out how to implement its use in a semi-autonomous vehicle. And that is a complex task.
Getting something off the ground in terms of understanding how haptic feedback can inform the driver to understand the environment, and then take back control in time – that sentence requires knowledge from so many different disciplines. We are talking here about mechanical engineers, who build the driving seat; electrical engineers, to create the interface; computer scientists, to develop the software; to build the driving simulator, we need computer scientists who are familiar with creating virtual reality environments; then system engineers to interface this with the vehicle, because someone needs to drive and needs to get the feeling from the steering wheel of how to drive; and disciplines that analyse physiological data from the human to then conclude what their level of understanding of the environment is – in other words, what is the situational awareness of the human? So, you can see that there's a range of disciplines that need to be pulled together to give a more in-depth understanding of what is actually happening, how we can improve the current situation, and how that can have an impact on developing these interfaces for highly automated vehicles.
Just this one type of usage in a semi-autonomous vehicle requires input from a variety of specialists. This will prove true for all other devices that are engineered as well, each needing input from the others, as well as from specialists outside of the engineering department. Not to mention the test users and the feedback they will need to provide to make sure that the devices are user friendly. And this latter point brings me to my next guest.
My name is Aneesha Singh. I am a lecturer at the UCL Interaction Centre. And my research is mostly around human computer interaction.
This involves making sure new technologies are accessible to people from all walks of life. Historically, technological advances were created by a very specific demographic, and this led to those who fall outside that demographic being left out of consideration.
Even just trains, right? When I had a baby, it was really difficult for me to access a lot of the transport network, because if I was using a pram, I needed somebody to help me carry the pram around. And prams are not allowed on escalators, and you can't wheel them down the stairs. And so you think about the same thing with respect to, for example, wheelchairs. A lot of things are not accessible if you are in a wheelchair. So, it's around thinking about those kinds of things, as to who really benefits from the things that you're designing.
And making sure that designs are made accessible and applicable for all means you need to collect a myriad of data.
You might have surveys, and you have all that side of things. And then you could have interviews. So, you're actually looking at what people really want. You make sure you're talking to the right set of people for the technology that you're designing. So, who are the users for this technology really going to be, and what do they want from it? And what would make sense for them to have? And then you could have things like codesign, getting their ideas down. You could coproduce, you could have workshops and focus groups. And we also do diary studies. So, it could be long-term, like how are people actually using things in their homes? So you could have ethnographic studies and diary studies and things like that. And then finally, we could also think about the fact that you have a lot of data from sensors these days. So you could think about how somebody is using a phone, for example. You have physiological sensors, you have environmental sensors, and you can use the data from those as well to analyse how people are using a particular technology, or could be using it in a particular environment. So there's all of that kind of data that you could use and analyse, to understand how people use technology, but also how it can be situated within a particular context of use. Because that can change as well. How I use my phone in a work context could be very different from how I use it on public transport, for example. So, you know, how people really use things in different contexts can also feed into the design.
So, you're getting like a really, I guess, like, with all these different types of research you're doing, you're getting kind of a holistic picture of how people utilise whatever technology you're trying to create or help with?
Yeah, yeah, exactly. And actually, in some cases, do they even want it? Do they need it? There are some things where you could have a technology that disrupts something more than it helps. So, it's about understanding, do people really want a technology in that context? Or is the human interaction really a more important thing to sustain? So, for example, there was a project that I was working on, which was to do with HIV self-testing. I was doing research as a Research Fellow in the UK, and this project was across the UK and South Africa. So there was research going on in South Africa, and in the UK. It's about people doing the self-tests themselves. But in certain places – and I wouldn't say this is applicable across South Africa – you would find health workers who are going around doing the HIV tests with people. And so that particular setup was about doing the self-tests with somebody, right? So, there's a different setup, and so there are different needs. Again, education – there might be different needs, because there might be different cultural norms in a particular country, or in a particular group. Even within the UK, I think you would find that in particular communities, you would have different cultural expectations, or different, you know, sort of needs from a technology than more centrally. So, it's about trying to broaden out the thing and not generalising from the centre – that's something I'm very passionate about. It's about looking at the margins and saying, 'it's not the margins; we need to specify when things apply to a certain group of people'.
This was done because, for some in South Africa, pushing for self-testing was found to be offensive, reinforcing the stigma around HIV and homosexuality. There was feedback suggesting that testing could be done in the presence of others to show it's okay and to destigmatise HIV. So, the approach changed. This same concept can apply when making cars more autonomous, and it is what Aneesha's research at IM@UCL will be all about.
And in some cases, do we want things to become completely autonomous? Or is there a halfway house? Are there some cases where it should be autonomous and take over, and some cases where there should be more control? It's a lot about… Control is a big part of it: people have to trust something and really want it to take over, rather than just giving over control because that's the only way the technology is going to work.
But then there is the other end of the spectrum where you don't want the level of autonomy to become too personalised.
But if there are too many things to personalise, then it can become a barrier in itself, and you have to find that balance. But you can find it with users; you can do different studies to find out what would work. You can create a prototype and test it with people and see, 'okay, that's just too much – what do we need to do?' But that involvement is, I think, critical for this to be successful, rather than something being pushed out at people regardless of whether it's useful to them or not, or whether they consider it useful to them or not.
Yeah, absolutely. And I guess that's also why you had talked before about how these things have to come out slowly, because people have to learn these new types of technology and the way these things work, and because it also makes such significant changes to the cities and the communities around it. Being slow and diligent, and talking to people and working with people as you're doing it, is all just part of this necessarily slow process, but it leads to better results in the end.
Yeah, and I think the word I'd use rather than slow would be iterative. So, you make improvements, but iteratively, and you kind of keep people involved as you're doing them. Some of these things are already being done. Like, for example, touch interfaces being introduced in cars is one way, or audio interfaces and so on. So all of a sudden, you can control things with your voice. And is that something that people want? You know, so there are many things that are already being done. Initially, we had the parking sensors – beep, beep, beep, beep, beep – and you knew how far you were from things. And now you've got things like self-parking, you've got automatic braking, you've got so many things which are coming in slowly. And I think they're being tested in certain ways, because when you're using those things, some things work, some things need improvement, and other things just don't work and need to be, you know, sort of rethought or reimagined. And there is that process in place already. And I think the nice thing about having this kind of facility is that you could go down the route of really thinking about what parts of these are really useful, in a sort of safe environment. And the other thing that's important, I think, is that it's multidisciplinary. It's not just specialists in transportation, or specialists in, sort of, physiological signals, or specialists in different types of feedback. You have people working together, riffing off these ideas. Even right from the first call, it's been a multidisciplinary thing. That multidisciplinary and interdisciplinary way of working is what is going to be critical to something like this. Because you're coming up with new ideas, you're looking at what the technology can achieve, but you're also doing it with the user at the centre of it, and that makes it very, very enticing.
Tune in at the beginning of every month to learn more about the early career researchers who are part of the multidisciplinary and interdisciplinary team at IM@UCL.
(Theme music starts)
Thank you for listening to IM@UCL: The Podcast. If you would like to learn more about the research at IM@UCL, you can check out their website at www.ucl-intelligent-mobility.com and/or subscribe wherever you are listening to this podcast, so you can be notified when new episodes come out. This episode was produced and hosted by myself, Cassidy Martin, with music from Blue Dot Sessions. It was brought to you by IM@UCL, which is part of UCL PEARL in Dagenham, and supported by UCL Minds, bringing together UCL knowledge, insights and expertise through events, digital content, and activities that are open to everyone. A special thank you to Helge and Aneesha this month for sharing their time, knowledge, and insight. I hope you enjoyed listening to this podcast and feel like you learned something new, like I have with everyone I've interviewed in this series. Take care, and I'll see you again next month. Same time, same place. Cheers!
(Theme music fades out)