UCL Institute of Healthcare Engineering

Is AI killing the internet?

What impact does AI have on our society? What about ethical concerns in healthcare? What's AI's role in art and governance?

In this episode, we discuss the impact of artificial intelligence (AI) on society with Stephen Hughes, a social scientist, and Reese Campbell, a freelance project coordinator. Stephen explores the mixed public sentiment towards AI, highlighting its benefits in healthcare and concerns about its ethical implications, particularly in mental health services. Reese shares her negative views on AI, citing deepfakes and privacy violations. They also discuss AI's potential to exacerbate social inequalities and the role of AI in the art world. The conversation concludes with reflections on the future of AI and its governance.

Stephen Hughes

Dr Stephen Hughes is a Lecturer in Science, Technology and Society and Director of UCL's Responsible Innovation short courses.

Stephen's research explores relationships between science, technology, and society through an affective psychosocial lens. He is fascinated by the role that emotions play in science communication and public engagement, particularly in contexts of difficulty, discomfort, and controversy.

Stephen uses insights from STS, psychoanalysis, and affect theory to understand the roles that fantasy, desire, anxiety, and embodiment play in relationships between people and technologies.

Reese Campbell

Reese is an East London resident passionate about arts, culture and community. She is currently an assistant producer in the engagement team at Ally Pally, and also works in various freelance roles across Waltham Forest.

 

 

 

 

 

 

 


Listen Now

SoundCloud: https://soundcloud.com/uclsound/the-impact-of-ai-on-society?in=uclsound/...


Keywords

AI impact, social scientists, emotional aspects, public opinion, AI threats, deepfakes, government protection, social media control, conspiracy theories, AI in healthcare, AI psychotherapy, inequality access, AI in art, AI future, AI governance

Transcript

Ferdouse Akhter  00:05

Hi and welcome to Health in a Handbasket. I'm your host, Ferdouse, Marketing and Community Manager at UCL's Institute of Healthcare Engineering. In this podcast, we sit down with an expert to learn about the wonderful and impactful things happening in Healthcare Engineering. Today we're looking at artificial intelligence and the way we view it. I'm with Stephen Hughes, social scientist at UCL, and greatest and most intelligent human in all the world. Now that's his intro. I'm also with Reese Campbell, who is the greatest female of all time. Reese is a freelance project coordinator and multidisciplinary artist from East London. So, Hi, both. So Stephen, before I met you, I didn't think social scientists were a thing. Tell me a bit about what you do.

Stephen Hughes  00:48

Hi, yeah, great to be here. So yeah, a social scientist is pretty much someone who studies people and studies society and culture and these types of things. And because my field is Science and Technology Studies, I'm really interested in the relationships between people and new technologies, between people and new scientific knowledge, and also thinking about how scientists relate to the things that they do and to new technologies.

Ferdouse Akhter  01:16

And that's why you're here, because you're looking at the correlation between people and AI?

Stephen Hughes  01:23

Yeah, exactly. I mean, it's no great piece of news that artificial intelligence has had a huge impact on society. It's something that literally everybody's talking about. People are talking about it in government, in schools and workplaces. So as social scientists, we're really interested in those conversations, you know, like, what do people think and feel about artificial intelligence? And in my research, I'm particularly interested in the emotional aspects of those relationships. So are people afraid? Are they anxious? Are they really excited and hopeful about AI? And, you know, thinking about what that means in terms of questions around responsibility.

Ferdouse Akhter  02:01

And in your research, what have you found people's responses towards AI to be?

Stephen Hughes  02:06

Yeah, so it's really a mixed bag. I think people don't tend to have, you know, one feeling about technologies. It's usually quite complex, quite conflicted. People have contradictory feelings. You know, sometimes they might find something like ChatGPT, say, for example, really helpful and really beneficial. It can, you know, ease stress in work for students; it cuts down the amount of time they have to spend researching for essays. But then people might be hearing on the news people like Elon Musk or Geoffrey Hinton, who used to be at Google, talking about existential threats from AI. So then they're sort of thinking, what, is ChatGPT going to take over and kill the human race or something? So there's a broad spectrum of feelings, and they all overlap each other, which makes our job really interesting, because we have to kind of tease that apart and make sense of it and try and understand, you know, what is the public mood or the public opinion when there's so many layers and nuances to it?

Ferdouse Akhter  03:12

And Reese, as someone who doesn't work in academia, what are your opinions of AI?

Reese Campbell  03:19

Well, first of all, thank you for having me. My overall opinion of AI, I'd have to say, is quite negative. I've seen just a whole bunch of different things that have come out in regards to using it that I think are quite bad, and I just don't really have a good outlook on it. So, for example, the incident with Taylor Swift, where her face was put onto a porn video, and she's been working with lawyers to get that taken down. And so I think, thinking about AI and society, particularly women and children, and I think minorities, it does pose quite a big threat, as, you know, people are just making deepfakes. The government really hasn't looked at it as a proper threat yet.

Ferdouse Akhter  04:07

So that's an interesting thing that you've mentioned, like the abandonment by the state, which I think a lot of people feel when it comes to AI and how the government isn't really protecting them. So, you know, I think you might remember, like, what, a few years back, when US Congress took Mark Zuckerberg to trial over the stuff on Facebook, and most of them didn't even know what Facebook was or how it worked. They were asking such stupid questions. So it's kind of like a thing of, like, how do we protect ourselves when our government doesn't even know what's happening or what this is?

Stephen Hughes  04:43

Yeah, it's interesting that you're saying that you're an artist, so immediately I'm thinking, you know, where does AI impact the work that you're doing, or the art world more generally? Because we hear so much about, especially, image generation and generative AI.

Reese Campbell  04:59

Yeah, so luckily, I haven't really come across AI in my work, because I work quite practically, so it's painting and nothing really digital. But I do do digital drawings, and I have been quite cautious of putting those out there, simply because of the fact that with these art generators, it's taking other people's art and putting them into an algorithm and coming out with something that's supposed to be new and fresh. And I think that not only is this a problem for artists like digital artists in general, but I think across the medium of art, it does kind of pose a problem with music and maybe sampling people's voices and beats, and also in film as well, using people's faces and maybe people's art when they're using a green screen. And so I think it is a little bit of a doomsday looking thing, like, oh, art is doomed, but I do think that with awareness, it could potentially change. Is that a bit like dead internet theory, where you're creating content just for the sake of creating content?

Stephen Hughes  06:07

Yeah, definitely. I think, you know, this idea of the dead internet theory that all of the kids are talking about at the moment is really interesting. You know, it's this idea that Facebook, social media, has just become, like, completely awash with these mediocre, AI-generated images that just have all of these bots responding to them, going, yay, great job. And, you know, they've become total memes at this point. And, you know, what you get is this massive amount of production of imagery and artwork, but it's really generic, it's really bland. Some of it is just, like, really bad and just weird and strange, but, like, not in a creative or good way. And people are basically saying that it's killing the internet, because the actual ingenuity and creativity of, you know, human artistic expression is being drowned out by all of this really crap imagery, basically. It actually reminds me, just what you were saying there, Reese, about the copyright issues. You know, on the one hand, you have, like, the total extremes of that, where you have, you know, deepfakes of Taylor Swift in, like, these gross, disgusting, misogynistic images. But on the other hand, in the sort of, you know, what you might say is, like, the more acceptable version of it, you still get this total violation. I don't know if you heard recently about Scarlett Johansson and her interaction with OpenAI, the company behind ChatGPT. Basically OpenAI released their sort of, like, brand new, kind of, like, sparkling, shiny version of ChatGPT, which, you know, also has a voice attached to it. And it sounds, you know, almost like you're speaking to a person, because you have this female voice which is responding to your prompts, and it speaks in a very kind of naturalistic way. And Sam Altman, who's the CEO of OpenAI, approached Scarlett Johansson, and he basically said, we think that you have the perfect voice, the most universally, you know, attractive voice, to be basically the voice of ChatGPT into the future. And he was inspired by this movie called Her, which is directed by Spike Jonze, and it involved an artificial intelligence machine, and Scarlett Johansson did the voice acting for that. And Scarlett Johansson was like, no, thank you, not really interested in this. And then Sam Altman, I don't know how he did it, but basically he produced a voice that sounded eerily similar to Scarlett Johansson's, to the point that her friends and family were saying, did you actually agree in the end to do that? And she was like, no, he's just basically artificially generated my voice. And she had to publicly publish a letter which, you know, explained what happened. And she pointed out that on the day of the release, Sam Altman tweeted "her", just h-e-r, which is like a brazen sort of admission that he had done that. So yeah, it's not the same as, you know, deepfaking Taylor Swift, but it's still deepfaking someone and using, you know, their voice to make money and kind of gain exposure.

Reese Campbell  09:19

Makes you think: if Scarlett Johansson can be taken advantage of, is there really any hope for just regular people, regular artists and models, etc.? Just makes you think.

Ferdouse Akhter  09:29

I mean, it is a bit scary as well, because I see a lot of stuff about parents saying, oh, don't post pictures of your kids online, because you just don't know how it's going to be used. And it is so disgusting, but yeah, there's a lot of wicked people out there and you don't know what they're going to do. So both of you mentioned the negatives of AI and social media. So to what extent do you think social media platforms control our actions?

Reese Campbell  09:56

I think with X (Twitter) and Instagram, which is owned by Facebook, now there's been a huge problem with social justice, particularly what's going on in Palestine, the Trump trials and campaigns, and because both of the owners, Musk and Zuckerberg, are quite right wing, it's been found that their politics have bled into the social media platforms. And so I suppose it kind of brings up the bigger question: can AI be neutral, and can it really be used to control or have a say in these big platforms if the people behind it have really questionable morals, in a sense?

Ferdouse Akhter  10:43

I do find that a bit, it's like, you know, especially in England, we talk about free speech and all of that stuff, but you can't even say the word genocide or say the word Israel without being censored. And it's like, I never knew that criticizing another country, even my own, was against the rules. Like, you're allowed to criticize a country; it doesn't mean you're, like, a terrorist, but even so it's censored, like it's crazy. We can use language to censor so many people from talking about the rights of other people.

Stephen Hughes  11:14

Yeah. Like, when thinking about the sort of inequalities that AI can produce on social media, it's not even just the fact that, you know, the data that they're working from is super biased and that they're reproducing that; it's also the speed and the power with which machine learning algorithms can do that, in a way that, you know, just people speaking or sending messages to each other can't do. And I think that makes it, you know, all the more dangerous. And even on top of that, it's not just the case that, you know, individual people are disagreeing on Twitter, which, you know, Elon Musk likes to call the public town square. It's like someone like Elon Musk, just this white dude, just goes, right, I'm actually a billionaire, I'm just going to buy the entire platform and then change all of the sort of algorithmic settings on it. So you're being fed all of this, like, absolutely ridiculous, inane crap that is kind of like dead internet style stuff. And the actual content that you want to access, the sort of people that you're interested in seeing, just disappear from view unless they have a blue tick.

Ferdouse Akhter  12:20

There's a lot of conspiracy theories about AI, both on X and elsewhere. So what are some of the conspiracy theories that you guys have encountered?

Stephen Hughes  12:30

Yeah, so conspiracy theories are something that I look at a lot, because I do tend to look at the sort of tensions and the conflicts, because I think that's where the most interesting stuff is happening, because, you know, that's where people are disagreeing. That's where you're getting different views kind of coming together around the topic. And I think that's where you really learn about, you know, how people actually feel and think about something like AI. And conspiracy theories are sort of, I like to think of them as being on the extreme end of what can be common everyday concerns or fears about AI. And for some people, they think that we are basically, you know, part of, like, the plot of the movie The Matrix, whereby we're going to be enslaved, and this kind of, you know, like, fake reality is going to be, like, piped into our brains, and the artificial intelligence systems are going to, you know, take over the world. And, you know, obviously most of us say that's absolutely ridiculous. But it's important to remember that that exact idea isn't just coming from, you know, crazy, bonkers conspiracy theorists; you have very powerful people like Elon Musk, who I mentioned, and like Geoffrey Hinton, who I mentioned, who are basically saying that the biggest threat to humanity right now is this, you know, existential crisis, or existential threat, that AI poses. And a lot of social scientists are pointing out that this actually serves the interests of big companies like OpenAI, because it makes them seem incredibly powerful, and it also distracts from the idea that, well, really, the people who are being affected right now in real time are people who are already marginalized: poor people, people of color, trans people, and obviously people in Palestine, because you have automated systems deciding, you know, who lives and who dies, including thresholds of, you know, you can have 10% error. And when you're talking about 30,000 people, that's, you know, how do ethical questions make any sense, or, you know, how are they meaningful, when that is part of what AI is? So, yeah. So, you know, on the one hand, it feels like, yeah, there's these conspiracy theories that might seem completely crazy, but on the other hand, they are connected to genuinely, really horrific things that are happening in the world. I mean, Ferdouse, you probably know even more than I do about the benefits of it, because you work in healthcare engineering.

Ferdouse Akhter  15:01

I guess we say conspiracy theories and we think of a guy in a tin hat just shouting down the street. But some of the stuff we've been talking about today, like, you know, the Taylor Swift stuff, or, like you're saying, using AI to kind of pretty much murder people, is real. It's not tin hat stuff; it's literally happening in this day and age. Though it's funny how, you know, The Matrix was released, what, in the 1990s, and, what, 30 years later we're here living in this kind of tin hat reality, which is really funny and really scary at the same time. So you can understand why people are a bit scared of AI when all this stuff is happening. But I guess I want to also say that, working in healthcare, it's not all bad. Like, you hear a lot of stuff about how AI is being used to kind of streamline the NHS, streamline processes within the NHS, especially when it comes to scans and all of that stuff. We talked about it on our AI episode last season, so if you do want to listen to that, it's there. But even everyday things like spell check or autocorrect on our phones, that's all AI. But also AI is a lot of statistics, so having a computer model mimic human brains or human ways of working is pretty good when it comes to statistics.

Reese Campbell  16:22

Yeah, when I think of the good part, I think of, you know, as you mentioned, spell check or autocorrect, and ChatGPT, even though, I mean, that is kind of questionable. But I think when people use it for, like, CVs or cover letters, it can be a good way to give you a start, maybe, like, help your vocabulary and such. Focusing on the healthcare aspect of it, AI in therapy is quite interesting. I think obviously there are downsides to having a voice or a chat speak to you and try and work through your trauma, but I think it could maybe be quite useful when having affirmations, or when people have CBT, having something there that can relate to them, and they can say out loud, like, I'm grounded, I'm okay. I think that is a positive, but it can't be used as a substitute to make people heal.

Ferdouse Akhter  17:21

That's something you're working on, actually, Stephen: AI psychotherapy.

Stephen Hughes  17:25

Yeah, I'm quite interested in that, because I think, yeah, like Reese is saying, there's definitely benefits to that. You know, it can use those features of machine learning and automation for a good purpose, right? And that means that you can, you know, see an enormous amount of people, basically; you can provide some basic CBT, mindfulness exercises and things like that to a large amount of people, pretty cost-effectively and in a quite efficient way. And that can be, you know, hugely beneficial. But I think again the problem becomes a case of, you know, inequality of access, for example. So it becomes, yeah, poor people, people who are typically already underserved by the healthcare system, they can have the AI version, the crap, shitty AI version, and people who have money can go and see an actual human psychotherapist. And you start getting that thing where AI becomes, you know, AI for poor people, and then you have the good quality, you know, human healthcare, human-oversight mental health services, psychotherapy, for people who have money. So I think it's, you know, it's important to recognize the limitations and also kind of acknowledge that it's not really just the technology itself which is good or bad, but it's the social systems and the systems of, you know, equality or inequality, that they are deployed within.

Ferdouse Akhter  19:02

If it's being used in the NHS, and if it's cheap, it's more likely to be commonplace. But then, is that really a good thing in the long run?

Stephen Hughes  19:11

I mean, like, say, for example, you're having, like, issues of stress, mental health, or maybe a family member who's really struggling with anxiety or depression. Would you suggest that they use a, you know, a chatbot, basically?

Reese Campbell  19:30

Probably not. I think I myself have been quite lucky that I actually do have therapy and I have a human person to be able to speak to, and you really can't replicate that kind of in-person connection with a screen. And I think perhaps maybe for anxiety, and again, grounding people with CBT, it's easy to say, repeat an affirmation or such, but when you're struggling with something like depression, it's very difficult to look at a screen and respond to it and try and get anything out of it. It's really more useful just to have somebody there and actually be able to speak, rather than staring at a screen or something.

Stephen Hughes  20:16

And it doesn't take into account the fact that the reason that you're stressed out or anxious or depressed is because you're working two jobs, you're working and you're studying, you, you know, have a family member who's struggling with addiction, you're single parenting, you know, you're living in a, like, highly polluted area that's, you know, overcrowded, or, you know, you're just absolutely distraught at the fact that there's, you know, a genocide taking place in the world, or multiple genocides taking place in the world, you know. Like, how far can affirmations stretch to cover those things, you know? And, you know, I would really wonder, yeah, I mean, how effective a kind of automated chatbot can be at really getting to the heart of, you know, most of the issues that people are struggling with.

Ferdouse Akhter  20:16

I mean, like you said, it sounds like it's a poor man's problem, isn't it? Like, if you have money, you're not really going to go for the AI option, because, like you said, Reese, it's a lot nicer to speak to someone. But if you don't have money, and if you're a year into the waiting list, the NHS waiting list for mental health services, then you're going to take what you're going to get. It's really sad how it all kind of stems down to our social status in the UK, which is a funny place, isn't it, in terms of, like, how we say we don't have social hierarchies, but really we do. You can see it in terms of, like, accessing mental health services.

Reese Campbell  21:47

Both my parents work in the NHS. My mom is a biomedical scientist, so she looks at microscopes in the lab and such, and so I'm not entirely sure, but perhaps there's already an AI system that looks at these things, so her job might potentially become obsolete one day. But when I think of the NHS and AI, I do see that, you know, when people do, like, online consultations, and there's, like, really small symptoms, it can be easier on the doctors themselves, as they don't have to really shuffle through, you know, thousands of people a week or a day, or however many. And I think we kind of overlook the older generation when we're looking at medical benefits; if they have to speak to an AI doctor, or if they're submitting their symptoms, then they're just not going to feel comfortable with AI.

Ferdouse Akhter  22:46

I think especially in healthcare, there's a lot of stuff around how AI mimics the society that we live in, and healthcare is quite unjust to people of color, trans people, you know, people on the margins of society, or people who aren't a white, western male. So AI kind of mimics those kind of social injustices as well. How much of AI can be seen as putting up an equal front?

Stephen Hughes  23:13

Yeah, it definitely raises difficult questions. I know Genomics England are really concerned about the diversity of the data that they're building in their database to try and produce cures for genetic diseases, or diseases that have a genetic component. And they, you know, really want to have more representative samples within their database, because they know that it's overwhelmingly, you know, basically, like, you know, white people. It raises, kind of like, further difficulties, because it's, you know, what do you do then? Do you go to, you know, different ethnic groups and then say, like, you know, you're now obliged to engage with us and provide DNA samples, you know, for the good of your race, because we've decided that we've messed things up so badly before now that, you know, it's now your duty to do this? So it creates difficulties in terms of, you know, how do you engage with populations that you want to include in your database? You know, it's like, where's the grounds for trust to say, like, well, no, you can trust us now, now we want to do it for good reasons? It's, yeah, it's a tricky one. So one thing I think about conspiracy theories as well is that we could think about them sort of as just a bad analysis, you know, a bad way of thinking about things. It's like, yeah, there is a huge amount of inequality, and there is a huge power imbalance, but it's just not the way that people think. It's not The Matrix. It's not that there's just a shadowy group of evil people who want to, you know, trap you in pods and take over your brain. It's more that there's people who feel entitled to a country's worth of resources to themselves and that, you know, they can decide what the future of social interaction should be. That's something that Meta claimed. They said that the metaverse will be the future of social interaction. And, you know, just think about the arrogance of a claim like that: that as an individual person who owns, you know, a multi-billion dollar company, you get to just decide what the future of social connection is. And, you know, you've already absolutely ballsed it up so far with your social media platforms. And, you know, if that's your track record, you still think that you have a claim and the entitlement to just decide what the future of social interaction is going to be? It's like, well, for me, you know, that's the conspiracy, if you want to call it a conspiracy, and I think that that's, you know, deeply unjust.

Ferdouse Akhter  25:50

We've talked a lot about the negatives of AI, but it's not all bad; especially in the healthcare setting, there's so many positives.

Stephen Hughes  25:58

Yeah, absolutely there are lots of positives, especially in healthcare. You know, you have automated skin cancer detection, which is super reliable and performs really, really well. You have other forms of advancements, like Google DeepMind has been using their AI systems to explore and research how proteins fold, which is a super time-intensive and complex process that they've been able to automate with an incredible degree of accuracy, which is amazing for scientific research. And it's, you know, rightly being heralded as a massive, massive step forward where, you know, automation is being used for, you know, hugely beneficial purposes. But yet again, there's a caveat to it, and that is, when they published this study, which was only a few weeks ago in 2024, they didn't make the actual algorithmic system itself available. So they published it in a huge journal, I think it might have been Science, and they didn't publish the actual, you know, system itself, because it's protected. It's, you know, privacy protected, because DeepMind is a private company. And so there's been a huge conversation about, you know, well, science is supposed to be open access, for people to actually understand how these things work. But they sort of said, well, no, it's their intellectual property, so they don't have to do that. So it's created a conflict there between science being about, you know, everyone can check to verify and make sure that this thing is right or wrong, and then a private company saying, yeah, but we're different, you know, we don't have to show you that.

Ferdouse Akhter  27:51

Where do you both see the future of AI? Also, did you know that it began with checkers, a game of checkers, in the 1950s? Which I found out when I was researching this question. So what do you guys see the future of AI being?

Stephen Hughes  28:02

So for me, I think that we'll kind of stop being dazzled by the, you know, supposed magical features of something like ChatGPT, and we'll realize the limitations and the constraints, and we'll realize, you know, the fact that it can't just scale endlessly, and it'll probably become another tool. It'll become something quite familiar, the way that we think about Google search, say, for example; it'll become quite mundane, and it'll kind of fit into our lives in, you know, a pretty kind of normal way. But, you know, I think that there's nothing inherently wrong with artificial intelligence as a technology. I think the real question will be, how well can we govern it? How well can we democratize it and ensure that there's equitable access to the benefits, and that the harms are distributed evenly as well, and that the harms are properly considered, you know, not just issues of inequality of access, bias and discrimination, but also thinking about things like environmental sustainability and the amount of energy that's required, and making sure that AI is actually, you know, working for us and working for everyone, and not just making money for, you know, a select group of people who are already ridiculously wealthy. I think it's good to remember that AI runs on computers, and to run computers you need loads of water, loads of electricity, all of that stuff, which I myself forget because you don't see it. There's a huge data centre that's just guzzling water and electricity off in Milton Keynes somewhere or other.

Reese Campbell  29:51

Yeah, thinking about AI in the future, as an artist I'm always thinking about AI in art, and particularly Hollywood, where it's been quite a big thing. I think they used Carrie Fisher's face in one of the Star Wars films, and in Fast and Furious they used Paul Walker's face. And so it would be quite interesting to see how AI is going to develop with live action films, but also animation. Thinking back to what we said earlier about art and these art content generators, is the animation industry maybe going to be threatened, or are studios going to use AI-generated art within their films? And so I suppose it's a bit more of a negative on the animation side, but maybe with live action I think it could be used for a good thing, in, you know, retconning people that aren't there anymore. I don't really see a harm in that personally. And also on social media, I think a lot more of these virtual influencers are going to pop up, so like the AI Miquela. And also, even outside of social media, robots, in a sense, particularly in places like Japan, where there are, I believe, like, restaurants and cafes that use robots as waitresses. So there's always, again, a plus and a downside to that, in, you know, maybe it's entertaining for a while, but then, you know, somebody will probably come around and use it for bad intentions. So it's interesting to look at it from a society and arts based perspective.

Ferdouse Akhter  31:32

I feel like a lot of the mistrust kind of lies with the fact that a lot of AI, a lot of the uses of AI, belongs to very wealthy billionaires rather than governments. And, you know, normally you wouldn't think, oh yeah, the governments are just so to be trusted or whatever it is. But really, I'd rather, I don't know, something like Facebook belong to the British government rather than Mark Zuckerberg, or, I don't know, the European Union or whatever it is, rather than one very rich individual who's being influenced by monetary goals. Because, like, you know, with Cambridge Analytica and all of that stuff, like, it was because he wanted money. So, yeah, that's my only note on AI. But yeah, thank you both for coming in today. Honestly, I genuinely enjoy talking to you guys about AI and all the conspiracy theories, or not-so-conspiracy theories, because really they're real. Yeah. So thank you for coming in today. Thank you.

Stephen Hughes  32:32

Thank you very much.

Ferdouse Akhter  32:37

This has been Health in a Handbasket, produced by the UCL Institute of Healthcare Engineering and edited by Shakira Crawford from Waltham Forest's Future Formed. What's the Institute of Healthcare Engineering? Well, let me tell you. It brings together leading researchers to develop the tools and devices that will make your life better. We're using this podcast to share all the amazing work taking place, but there's so much more going on, so please check out our website at ucl.ac.uk/health-in-a-handbasket to find out more, and please share with your friends and family if you found this interesting. We're available everywhere, especially where you've just listened to us.