
UCL Institute of Healthcare Engineering


Health in a Handbasket - Episode 2: Will AI replace doctors?

There's a lot of contention around AI. Peter's here to give us the facts and tell us how AI can help healthcare.

AI is starting to play a part in our everyday lives: it's there when we unlock our phones with Face ID, or when we use ChatGPT and its variants. But how can we use AI in healthcare safely?

Peter Woodward-Court is using AI to help radiologists diagnose patients from scans. He'll be covering what AI is, how it works and how we can use it safely.  

About Peter Woodward-Court

Peter is a medical doctor and PhD student in machine learning. He will explain what machine learning actually is, in a way we can all understand. 

Machine learning requires lots of training data to ‘learn’. But this real-life data can be hard to get hold of for a number of reasons. Peter’s research involves developing synthetic data to train machine learning models which can help diagnose medical conditions - making life easier for doctors and patients.


Listen now

Listen on SoundCloud: https://soundcloud.com/uclsound/will-ai-replace-doctors?in=uclsound/sets...

 

Other ways to listen and subscribe


Episode transcript

Ferdouse Akhter  00:05

Hello and welcome to Health in a Handbasket, your podcast about the sexy world of healthcare engineering. I'm Ferdouse Akhter, and I'll be your host. I'm the Marketing and Community Manager at UCL's Institute of Healthcare Engineering. And although I don't always understand what's written in the research papers published by our academics, I know that what we're doing in the world of healthcare engineering is important and impactful. And I want to share that with you by speaking to those who know a bit more about it than me. Hi again, and welcome to Health in a Handbasket. I'm your host Ferdouse, Marketing and Community Manager at the Institute of Healthcare Engineering. And in this podcast, we sit down with an expert to learn about all the wonderful and impactful things happening in healthcare engineering.

So today we're picking the topic of machine learning and AI out of our handbasket. It's a contentious topic these days, but we're using it every day. If you're unlocking your phone with Face ID, that's a form of AI. Or, you know, I'm showing my age a little bit here, Snapchat's new AI chatbot. I used it the other day and I had a really nice chat with the AI, where I told them about this podcast and they wished me good luck. And I told them about my life. And it was very interesting. I'm sure you've seen all the raging debate about AI being the end of the human race, and how we're going to lose our jobs and all of that stuff. But today we're speaking to Peter Woodward-Court, who'll be covering his work on AI in the healthcare sector, how it works and how it can be used to diagnose certain conditions. Peter is a clinical doctor currently undertaking a PhD at UCL in AI-enabled healthcare systems. His research involves looking at the role of synthetic data in training machine learning models, which could improve the performance of AI decision-making tools. So Peter, what do you do?

Peter Woodward-Court  01:55

So yeah, thanks very much for that introduction. I'm a clinician by background; I've spent a few years now working as a doctor on the wards. But I've decided to take some time out, and I'm now doing a PhD at UCL looking at the interface of healthcare and machine learning.

Ferdouse Akhter  02:10

I always get a bit confused by this. So you can be a doctor and a PhD student at the same time. Does that mean you work part time for each job?

Peter Woodward-Court  02:17

Yeah, that's right. So I was doing a full-time job as a doctor in a hospital, but the PhD programme I'm doing is now a full-time PhD. I do still spend a few days working clinically, just to make sure I keep up my clinical skills and maintain all the relevant professional qualifications I need.

Ferdouse Akhter  02:36

So you're a doctor? A part-time doctor?

Peter Woodward-Court  02:40

At the moment I'm working part time, I guess. The plan is to go back.

Ferdouse Akhter  02:44

How did you get into the field of AI? How do you go from being a doctor full time to now working in research?

Peter Woodward-Court  02:49

Yeah, so when I was in medical school, and during my early years working on the wards, I was really enjoying it and I was really engaged with what I was doing. But I could see that there were some aspects of the way certain systems were running that led me to think, okay, is this the best way we could be operating? Could this be improved in some way? And around that time was when some of the early work looking at how we could use machine learning to try and solve some of these problems was coming about. As I was looking at that, I was thinking, okay, this does seem like a sensible approach, and I was buying into the idea that machine learning could be, you know, a bit of a shift in how we operate and practise in medicine. So that put the idea into my brain. I was thinking, okay, what's going on here? Is this actually something that could be useful? I felt like the answer was yes. So I took some time after the beginning of my training to go and do a little research fellowship, and spent some time learning the basics of machine learning and how it actually works in practice. That went really well, so I then applied to the PhD programme at UCL and thankfully got in, and yeah, the rest is history.

Ferdouse Akhter  04:00

I really like that: finding a passion or a niche and just going for it, you know? It's kind of scary. So yeah, that's pretty cool. So what is machine learning?

Peter Woodward-Court  04:09

Yeah, it's a really good question. I think some of the difficulty you're alluding to in the introduction comes from the fact that machine learning and artificial intelligence are terms that aren't necessarily super well defined, and there isn't universal agreement on exactly what they mean. But essentially, if I was going to summarise it: it's a way of using computers to make a sort of decision, if you like, which improves over time, using a variety of different methods which we can get into discussing. There are lots and lots of different subtypes, but maybe a helpful way of thinking about it in broad terms is to think of two categories. The first one is this idea of discriminative machine learning, which essentially means you're trying to discriminate, or separate, two or more different groups of things. A classic example would be images: say you've got images of dogs and cats. You can show a machine learning model many, many example images of dogs and cats, and eventually it will learn to discriminate, or separate, these images into different groups. Once it's done that, it can then say, for a new image it's never seen before, that this is a dog. The other main category, although there are lots, is this idea of generative machine learning. The word comes from the idea that you're generating new content that we haven't seen before. Some people who are listening will have come across ChatGPT, which is quite a new, exciting generative machine learning model, and that means it's producing new content as you interact with it. For those who don't know, it's a kind of very advanced chatbot, probably like the Snapchat one you were talking about. When you interact with it or ask a question, it will produce an answer that makes sense to us, and it almost seems like you're talking to a human.
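
To make the two categories concrete, here is a minimal toy sketch in Python; the one-dimensional "images" and the fixed decision boundary are invented purely for illustration, not anything from the episode.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-dimensional "images": dog examples cluster around +2, cats around -2.
dogs = rng.normal(2.0, 1.0, 500)
cats = rng.normal(-2.0, 1.0, 500)

# Discriminative: given a new example, decide which group it belongs to.
def classify(x, boundary=0.0):
    return "dog" if x > boundary else "cat"

# Generative: having learned the dog group's mean and spread from data,
# produce brand-new, dog-like examples that were never in the training set.
def generate_dog():
    return rng.normal(dogs.mean(), dogs.std())

print(classify(1.7))    # -> "dog"
print(generate_dog())   # -> a new, synthetic dog-like value
```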

Ferdouse Akhter  05:54

So I mean, in all this, we know that machines don't have brains, but they can formulate some pretty empathetic answers, I guess. So what does learning mean, in this context?

Peter Woodward-Court  06:04

I think it's really important to point out that when a human looks at something like an image, it's very, very different from when a machine or computer looks at an image; it's not quite the same. When we see an image of, let's say, a dog, we just see it as a whole image, and we interpret it in our brain, processing it as one big object. One of the things that's different with machines is that, if you think about an image, I'm guessing most people will be familiar with the idea that it's made up of lots of individual pixels. When we talk about the resolution of an image, for example, we say it's 512 by 512, meaning there are 512 pixels going across and 512 pixels going down. Each of those pixels is just one individual colour; because we're looking at it from really far away and each pixel is quite small, it looks like a coherent image to us. The way computers look at it, again in a sort of simplistic way, is that those little bits of colour, the pixels, get converted for machine learning purposes into numbers. So you can imagine that in a black and white image, zero would represent white and one would represent black, and if it's grey, it's just somewhere between that zero and one. That happens to each pixel: it gets converted into just a number, and that's kind of what the computer is seeing when it sees an image. Those numbers then all get separated out into one really long row. So if you've got an image that's 100 pixels by 100 pixels, that's 10,000 numbers in a big long row. The way the machine learning model works is that it takes all of those 10,000 numbers, or more depending on how large the image is, and performs a bunch of relatively simple but large-scale maths on them. At the very end, you go from those 10,000 numbers, or however big it is, down to just a number or two. If you're thinking about a simple is-this-a-dog-or-is-this-a-cat kind of model, the number at the end will be between zero and one.
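
A minimal sketch of that pixels-to-number pipeline, assuming a made-up 100 by 100 greyscale image and randomly initialised weights; real models stack many more layers, but the shape of the computation is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 100x100 greyscale image: each pixel is a number, 0 = white, 1 = black.
image = rng.random((100, 100))

# Flatten it into one long row of 10,000 numbers, as described above.
x = image.flatten()                          # shape: (10000,)

# "Relatively simple maths": weight every pixel, add everything up,
# then squash the total into the 0-1 range with a sigmoid.
weights = rng.normal(0.0, 0.01, size=10_000)
bias = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

score = sigmoid(x @ weights + bias)
print(score)  # a single number between 0 ("cat") and 1 ("dog")
```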

Ferdouse Akhter  07:55

So will it be binary numbers, or will it be?

Peter Woodward-Court  07:58

So yeah, in this example we're talking about something that in machine learning terms is called a binary image classifier, and it's basically trying to say, between two groups, is this a dog or is this a cat? So you're absolutely right, it is binary. There are obviously many different types of machine learning models, but essentially it comes out with a number at the end, and that number, whether it's zero or one or something in between, will be more or less correct. So if you gave it an image of a dog, the output label for that image would be, for example, one, meaning the correct answer is "this is a dog", and zero would be the answer for a cat. If the model, after it's taken in all those numbers, comes up with a number that's close to one, then you're like, okay, that's great, it's done quite a good job of working out that this is a dog. I hope it's not too confusing, but essentially, if the model comes up with a number like naught point six, and the correct answer was one, which is dog, the difference between the naught point six and the one is the amount of mistake it made, if that makes sense. It will use that mistake to change the model, to update it. Once that update step has happened, the next time it sees another image of a dog, the idea is that it will get closer and closer to one, as it gets more and more examples. That means that eventually, over time, it learns to get very, very good, and when it sees a new image, it will know pretty confidently that it is a dog, if that makes sense.
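
Continuing the sketch above, here is one hedged illustration of that update step, using plain logistic-regression gradient descent; this is far simpler than a modern image model, but it shows how the naught-point-six-versus-one mistake changes the weights.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.01, size=10_000)
bias = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, label, weights, bias, lr=1e-4):
    """One update: the gap between prediction and truth drives the change."""
    pred = sigmoid(x @ weights + bias)   # e.g. 0.6
    error = pred - label                 # correct answer 1 (dog) -> -0.4
    weights = weights - lr * error * x   # nudge weights to shrink the mistake
    bias = bias - lr * error
    return weights, bias

# Show it the same dog image a few times: the score creeps towards 1.
dog_image = rng.random(10_000)           # stand-in for a flattened dog photo
for _ in range(5):
    weights, bias = train_step(dog_image, 1.0, weights, bias)
    print(sigmoid(dog_image @ weights + bias))
```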

Ferdouse Akhter  09:25

Like, some learning involved. It's like a child: it learns from its mistakes.

Peter Woodward-Court  09:28

Lots of examples, yeah.

Ferdouse Akhter  09:31

So we're talking a lot about household pets. How does that translate into healthcare then?

Peter Woodward-Court  09:38

Yeah, that's a really good question. There's a very big use of images in healthcare, and increasingly images are becoming the way in which we diagnose lots of conditions. We're developing lots of very advanced scans which are very, very good at looking at your organs; we can look at those organs in great detail and say, okay, it looks like this organ is going wrong, or you've got this problem, as a result of this scan. So, to use the cats and dogs idea, you want to be able to take these high resolution scans of your brain, or your abdomen, or your thorax, and say, okay, this scan shows that you've got this condition, therefore you need to see this specialist, or you need this treatment or this biopsy or this surgery, or whatever. So yeah, it relates very much to the kinds of scans that we do in hospital. And I guess the main reason this is relevant to healthcare, and why it's a bit of a problem, is that at the moment, when you're in hospital and you see a doctor, and the doctor says, okay, I think we need to do a brain scan, the doctor who requests that scan often won't be able to interpret what the scan shows themselves. They request the scan; once the images have been taken, the images are sent to another doctor called a radiologist, a specialist in scans and images. The radiologist will look through the scan very, very carefully and write a report on what they think the scan shows, and that report is then sent back to the original doctor who requested the scan, which allows them to make a decision based on what the scan shows. We've got roughly a 30% shortfall in the number of radiologists at the moment, which means there aren't enough of them to interpret the scans in a timely manner. So what's happening is that people are requesting scans, and there's increasingly a delay in the time it takes to write that report. I guess there are two problems there. The people who are healthy, who've had a scan but aren't sure whether they're healthy or not, are waiting a very long time, very anxious and worried, only to be told that they're fine. But I think the more significant and worrying group is people with some condition which is getting worse over time, progressing or growing or anything like that. They're waiting several weeks for their scan to be reported, and all that time their situation is getting worse, sometimes irreversibly worse. So they're having to wait an unacceptably long time to be seen and treated. I guess the other thing that's also important is that the cost associated with interpreting and managing these images is very significant. The NHS at the moment is spending tens of millions of pounds on outsourcing: it's paying private companies, who have their own specialist radiologists, to interpret these scans in a faster way. And they think that by 2030 there'll be something like a £400 million cost to the NHS just from paying these extra private companies to interpret all these medical images. So yeah, there's a pressing need to try and address this.
And bringing it back to the subject of the podcast, there's hope that AI would be able to look at these images and interpret them, to be basically another tool in the toolkit that doctors have, to help improve the way in which we can look at these images.

Ferdouse Akhter  12:47

I guess, filter out the people who are healthy and can just move forward with their lives and stuff, and flag up the people who need flagging, kind of thing.

Peter Woodward-Court  12:57

Yeah, exactly. I think that's probably one of the main areas where it will hopefully be useful. There are many steps we need to work through before we can get there, but at some point the ideal outcome would be a situation where you have your scan, and maybe there's an initial vetting process done by a machine learning model that we have tested and verified in a safe way. That could filter out the percentage of people who are actually fine, and they can be told that they're fine. But it could be trained in such a way that if there's something that looks like it needs to be flagged to a human, or looks very suspicious, that can be seen as a priority, and then the radiologist will look at that first, of course.
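
As a hedged sketch of what such a vetting step might look like, assuming the model outputs an abnormality score between zero and one; the thresholds below are invented placeholders, not validated clinical values.

```python
def triage(abnormality_score: float) -> str:
    """Route a scan based on a model's 0-1 abnormality score.
    Cut-offs here are illustrative only, not clinical values."""
    if abnormality_score < 0.05:
        return "likely normal: report quickly, radiologist confirms"
    if abnormality_score > 0.80:
        return "suspicious: flag to a radiologist as a priority"
    return "uncertain: standard radiologist review queue"

print(triage(0.02))   # likely normal
print(triage(0.95))   # suspicious, jumps the queue
```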

Ferdouse Akhter  13:35

Oh, that's so interesting, because I did not know that scans went to radiologists. Because if I remember my days of watching Casualty on a Saturday, the doctor would just hold it up against the, what is it called, the light box, right? And he or she would find the answers, and that was it. There was no other person involved in this process.

Peter Woodward-Court  13:55

Yeah, that's kind of actually really old school now. We don't have the light box; I think what we're talking about is you'd have this sort of special X-ray paper, and then you would put it up against the light box and have a look. So yeah, that doesn't happen anymore. Thankfully, the NHS has moved on; that was quite a long time ago. But I think that's broadly a fair way of putting it, you know. There are a handful of scans that almost all doctors will be happy looking at. A chest X-ray would be a classic one: most if not all doctors are quite happy just to look at and interpret a chest X-ray, which is a simple image of the front of the chest to see the lungs and the heart. But even then, those would be reported on by radiologists. And if you're ordering anything more complicated, like a scan of your brain or of your chest and tummy, that would be something done by a specialist radiologist.

Ferdouse Akhter  14:46

Okay, I think Casualty needs to have you on as a producer or something, I don't know. Is it still running? I'm not sure. So what does your specific research look at, then?

Peter Woodward-Court  14:55

Yeah, so I'm looking at a specialist type of scan which hopefully some people who are listening will be familiar with. It's called an OCT, which stands for optical coherence tomography, which is a bit of a mouthful, but essentially it's a very high resolution scan of the back of the eye.

Ferdouse Akhter  15:11

Ooh, like in Specsavers.

Peter Woodward-Court  15:12

Yeah, exactly. So it's one of these scans which is good because it's pretty cheap to perform; yeah, you can get it done on the high street. It doesn't have any of this worrying thing called ionising radiation, which is where a scan can cause damage to your cells if you have it done time and time again; it's actually a very safe scan. And it's become what in medical speak we would call the gold standard, which basically means it's a very good scan for looking at diseases which affect the back of the eye. So my research is looking at this kind of scan. Tying it back to what we were saying a little earlier, we've got this idea of: can we build a machine learning system which can say, is this a cat or is this a dog? My research is looking at trying to produce images of these OCT scans for specific diseases, so a common one would be diabetic eye disease. If we can produce examples of these images which are kind of fake, or synthetic, the idea is: if we could use those images as the training images, the ones we use to improve the model and get it to work out what it's looking at, would that mean we can actually get a system which is more accurate at classifying, or diagnosing, what's being shown on the scan?

Ferdouse Akhter  16:25

So do you get data from places like Specsavers, like real-life data? Yeah, and then feed it into the machine, and it produces...?

Peter Woodward-Court  16:34

That's exactly right. Yeah, so we get images from datasets from various hospitals, like Moorfields, which is where I'm currently working. We can use those images, which are of real eyes, and if we give the model a sufficient number of example images, eventually the model I'm using will learn exactly what those images look like. And then it can produce its own example version of that image.

Ferdouse Akhter  16:57

Using those examples you hope to kind of diagnose diabetic eye diseases quicker. And I guess it trains the person at Specsavers, like, to see it quicker? Or the computer will just see it quicker?

Peter Woodward-Court  17:09

Yeah. So one of the things we're particularly hoping for here comes from the fact that some eye conditions are more or less common than others. Diabetic eye disease is really common, and that's quite useful for training a model, because we've got lots of example images. But one of the problems is that for patients who suffer from rare diseases, the number of example images we have to train models on is far, far fewer, simply by virtue of the fact that the disease is much rarer. So if we can get as many images as we can of a rare disease, and then train the kind of model I'm looking at on those samples, we can produce a lot more examples of rare conditions. And that means we can use those images to train the model better, because models tend to perform worse on conditions that don't have very many example images.
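
A hedged sketch of that rebalancing idea; the arrays and the simple Gaussian "generator" are stand-ins invented for illustration, while the real work uses much richer generative models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 1,000 scans of a common condition but
# only 20 of a rare one. This is the imbalance described above.
common = rng.normal(0.0, 1.0, size=(1000, 10_000))
rare = rng.normal(3.0, 1.0, size=(20, 10_000))

# Stand-in for a trained generative model: fit the rare class's mean and
# spread per pixel and sample from that. A real model is far richer, but
# the role is the same: manufacture extra examples of the rare class.
def synthesise_rare(n):
    return rng.normal(rare.mean(axis=0), rare.std(axis=0), size=(n, 10_000))

balanced_rare = np.vstack([rare, synthesise_rare(980)])
print(common.shape, balanced_rare.shape)   # both classes now have 1,000
```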

Ferdouse Akhter  17:51

So you get like one real version of the image and then create 50 fake ones that will help the computer analyse it.

Peter Woodward-Court  17:59

Yeah, exactly. So that's exactly what's happening with the model I'm training: when you give it all these example real images, it basically learns what the images look like. Take something that we all know about, for example how tall humans are. If you get lots of examples of how tall people are, you'll be able to draw a map of people in terms of their height. There'll be a lot of people who are average height, very few people who are incredibly short, and very few people who are incredibly tall. As you give more and more example heights to a model, it will be able to draw a map of how they're spread out across the population; the term we would use for that is the distribution. And it's exactly the same for images of a rare eye disease. You give it lots of example images, and it will learn, okay, I can see that sometimes the disease looks like this, and sometimes it looks like that, and it will draw its own map. Once it's learned how to draw its own map, you can then tell the model, okay, make an image that looks like this type of rare eye disease, or this other type of rare eye disease, and it'll be able to create examples for you that you can then use to train these other models I'm talking about.
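
Peter's height analogy, made concrete in a small sketch; it assumes, purely for illustration, that heights follow a normal distribution, whereas a real generative model learns a far more complicated map in the same spirit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Lots of real measured heights in cm: most near average, few extremes.
real_heights = rng.normal(170, 10, size=5_000)

# "Drawing the map": learn the distribution's parameters from the data.
mean, std = real_heights.mean(), real_heights.std()

# Generate brand-new, synthetic heights from the learned map.
synthetic_heights = rng.normal(mean, std, size=10)
print(synthetic_heights.round(1))   # plausible heights no one actually has
```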

Ferdouse Akhter  19:02

That's super interesting. So, computers training computers. So if I went to the hospital, and I do have an eye test coming up, but if I went to the hospital, would my data be used in this? How does it work? How do you get permission?

Peter Woodward-Court  19:13

I think that's a really important question, actually. There's a lot of discussion and debate amongst the people managing NHS data and how it's used, because I think the topic of consent is a really important one, especially when we're talking about using our own personal data, and discussing what rights we have around how that is used and shared. It will depend a bit on what kind of research you're doing and what kind of data you're using: whether it's what we would call identifiable data, which means the data is linked to a particular person, or whether it's data that's anonymised, meaning we don't have any idea who it's coming from or how to tie it back to a specific person. There are various different rules around how that's used. But in general, it would be something that you'd want your patients to have consented to, or be aware of, and then they would give permission for their scans to be used. And I think that does tie back to the work I'm doing, because when we're generating synthetic data, well, it would require more research to check for sure that it's all safe and verifiable and good to use, but there's this idea that we could use it more freely, because it would be synthetic, so it's not tied to any particular individual. The idea is that maybe we could use that data with fewer of the restrictions (even though those restrictions are totally appropriate) on how it can be shared, and use it for research more broadly and more widely. So yeah, it's a really important area to be thinking about.

Ferdouse Akhter  20:35

A lot of people listening, I guess, you know, are hearing all these things about how bad AI can be. Is AI taking over everything? Is AI taking over the job of radiologists in the hospital? So tell me a bit more about how disruptive, I guess, or how good AI can be.

Peter Woodward-Court  20:51

Yeah, I think in terms of how it will affect jobs, I don't think there's any sense in which it's going to replace the job of a radiologist anytime soon. As I was saying right at the beginning, radiologists are highly qualified doctors and they've got a huge level of clinical and contextual understanding. As I was mentioning, it'll be another tool in the toolkit: depending on how we evaluate these models, in terms of how we test them and see how they perform, it will be something radiologists can use to help them get through their workload in a more efficient, safer, faster way. As I was alluding to at the beginning, that would hopefully mean that, for example, the amount of money the NHS has to spend on paying for these extra outsourced radiologists will drop, and the cost and efficiency savings for the NHS will be very significant. But yeah, it will be used alongside working radiologists to improve outcomes for patients.

Ferdouse Akhter  21:41

I guess my job will be safe too. I mean, I did use ChatGPT, and it isn't that great. Like, I think it's good as a starting point, and then I realised that, hey, I can write better than this, like, a lot better.

Peter Woodward-Court  21:51

Yeah, for sure. I do buy into AI being something that is going to change the way jobs work in quite a significant way. But as you're saying, it's the kind of thing where you can be like, well, you know, I don't know much about this topic, let's ask ChatGPT or whatever. It's a good starting point: it can come up with a series of helpful bullet points which you might not have considered, or it can form a good structure at the start. And it can be used to, you know, improve your output and help you write things that you maybe wouldn't have initially considered. I don't think it's there yet, but even if it was to get to a stage where there were jobs that needed to change quite a lot, I think often the kind of thing it's doing is removing work which can be menial or annoying or tedious, like writing the same sort of boilerplate stuff over and over again; it can automate that stuff. And that might mean the job, or the way the job operates, changes slightly: you do less of one thing and more of the sort of, you know, human-level things, and I think that could be good. So it's about trying to adapt and consider how these things are changing how we work.

Ferdouse Akhter  22:55

Yeah, I mean, I think that's a good way to end things on a positive note. Thank you for joining us today, Peter. It was a super interesting conversation, and I hope everyone listening enjoyed it too. Thank you, Peter. I know you've got a little photography website; I guess, in keeping with AI not taking over our jobs and stuff like that, it can't take over photography, yeah, and the niche that that is. So yeah, follow Peter on Twitter.

Peter Woodward-Court  23:18

Yeah, I'm on Twitter at Court.Peter. And yeah, I think it's super important for people to be able to have their creative outlets and do their individual pursuits. So yeah, I do a little bit of photography on the side and enjoy that. It's good fun.

Ferdouse Akhter  23:33

Yeah, we'll put the Twitter handle and the photography website in the show notes so you can all check it out and see how amazing Peter's photography is. Thank you for joining us today, and yeah, speak to you soon, maybe around UCL. Health in a Handbasket is produced by the UCL Institute of Healthcare Engineering and edited by Cerys Bradley. The Institute of Healthcare Engineering brings together leading researchers to develop the tools and devices that will make your life better. We're using this podcast to share all the amazing work taking place. You can learn more by searching "UCL Health in a Handbasket" or following the link in the show notes. So share with your friends and family if you found this interesting. We're available everywhere, especially where you just listened to us.