
UCL Institute of Healthcare Engineering


BLOG: Machine learning is transforming healthcare - but there’s a long way to go

17 December 2018

This is the first in our blog series exploring the big issues in healthcare engineering. Alice Hardy explores how machine learning is already changing the face of healthcare and the challenges we still have to overcome.

Machine learning in healthcare

Machine learning is one of those buzzwords which is impossible to ignore. While it still sounds like the stuff of science fiction, you’re probably using machine learning every day without realising it. From predictive text to your ‘top picks’ on Netflix, machine learning algorithms are observing your habits, anticipating your next move and making life easier. And, thanks to the enormous benefits it could entail for patients and clinicians, machine learning is taking healthcare by storm.

What is machine learning?

Machine learning is a form of artificial intelligence - in other words, a computer system able to perform cognitive tasks normally carried out by a human, such as problem-solving or learning. The technique involves feeding computers vast amounts of data (more than a human being could ever physically process) and providing an algorithm to decipher that data. This analysis can reveal intriguing and useful patterns - for example, predicting when a patient will be readmitted to hospital, or whether they are likely to develop a particular disease. The more good-quality data the model is fed, the more accurate its predictions will be. (Good-quality is the key phrase here, but we'll come back to that.)
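As a rough illustration of that pipeline - feeding a model labelled examples and letting it learn a pattern - here is a toy "readmission" predictor on entirely synthetic data. Every feature, value and threshold below is invented for the sketch, not drawn from any real hospital dataset:

```python
# Toy sketch only: a "readmission" classifier trained on synthetic data.
# All features, values and labels are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: age, length of stay (days), prior admissions
X = np.column_stack([
    rng.integers(20, 90, n),
    rng.integers(1, 30, n),
    rng.integers(0, 10, n),
])
# Synthetic label: readmission made more likely by prior admissions
y = (X[:, 2] + rng.normal(0.0, 1.5, n) > 4).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The key point the sketch makes is the last line: the model is judged on data it has never seen, which is how the "94% accuracy" figures quoted later in this post are measured.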

Although the concept of machine learning has existed for decades, increased computing power and masses of data mean we are currently making enormous strides in the technique. These two components are sometimes the only barriers standing in the way of more powerful machine learning techniques - in many cases, the algorithms are created far in advance while researchers wait for data and technology to catch up.

Embracing machine learning as a healthcare tool is a logical move. Thanks to digitised medical records and wearable sensors, we have an abundance of information to tap into. Coupled with the decreasing cost of data storage, machine learning is quickly becoming a feasible support tool for clinicians.

How can we use machine learning in healthcare?

Clinicians are already using machine learning as a tool to diagnose, medicate and even plan recovery paths for patients.

On average, hospitals produce 50 petabytes of data each year. To put that in perspective, if you recorded your life in HD non-stop for over three years, you’d produce a 1 petabyte file. GE Healthcare estimates that 90% of medical data comes from medical imaging - and more than 97% of it goes unused. As humans, we could not possibly extract meaningful conclusions from such a tsunami of information - with machine learning, we can start leveraging its potential.
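That video comparison can be sanity-checked with some back-of-envelope arithmetic. Assuming a bitrate of roughly 80 Mbps for high-quality HD footage (an assumption for this sketch - real bitrates vary widely), one petabyte does hold a little over three years of continuous video:

```python
# Back-of-envelope check: how many years of continuous HD video fit in 1 PB?
# The 80 Mbps bitrate is an assumption; real HD streams vary widely.
PETABYTE = 10**15                      # bytes
SECONDS_PER_YEAR = 365 * 24 * 3600

bitrate_mbps = 80
bytes_per_second = bitrate_mbps * 1_000_000 / 8
years = PETABYTE / bytes_per_second / SECONDS_PER_YEAR
print(f"~{years:.1f} years of video per petabyte at {bitrate_mbps} Mbps")
```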

In 2016, a collaboration between researchers at Moorfields Eye Hospital and Google-owned DeepMind set out to explore AI applications in healthcare. The team, led by Dr Pearse Keane, used thousands of anonymised historical eye scans to develop technology capable of detecting eye disease. The system can recommend the correct referral decision for over 50 types of eye disease with 94% accuracy - the same rate as ophthalmology experts.


A technician performs an OCT scan (credit: Moorfields Eye Hospital)

Globally, more than 285 million people have some form of sight loss, with eye disease the leading cause. Speaking about the technology's impact on patients, Dr Keane said it would allow clinicians to prioritise those who need to be seen and treated urgently. "If we can diagnose and treat eye conditions early, it gives us the best chance of saving people's sight. With further research, it could lead to greater consistency and quality of care for patients with eye problems in the future".

The versatility of machine learning means it can be applied to countless conditions. UCL researchers are also using the method to explore new treatments for dementia.

In their paper, published in Nature Communications, the team created and applied a new algorithm called SuStaIn (Subtype and Stage Inference) to MRI scans from patients with dementia.

SuStaIn uses medical imaging to look at specific locations of protein build-up within the brain and deduce which parts are degenerating. The model is able to identify three separate subtypes of Alzheimer's disease and several different subtypes of frontotemporal dementia. By identifying subtypes early in the disease process, using non-invasive MRI scanning, there is a better chance of finding the best treatment for each individual.
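SuStaIn itself jointly infers disease subtypes and stages from imaging data, which is considerably more sophisticated than off-the-shelf clustering. As a much-simplified sketch of the subtype-discovery idea alone, here is k-means clustering applied to synthetic regional brain-volume scores - the regions, patterns and values are all invented for illustration:

```python
# Simplified illustration of subtype discovery (NOT the SuStaIn algorithm):
# k-means clustering of synthetic regional brain-volume scores.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Three invented atrophy patterns (columns: hippocampus, temporal, frontal)
centres = np.array([
    [0.6, 0.9, 1.0],   # hippocampal-predominant
    [0.9, 0.6, 1.0],   # temporal-predominant
    [1.0, 0.9, 0.6],   # frontal-predominant
])
X = np.vstack([c + rng.normal(0, 0.05, (100, 3)) for c in centres])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
counts = np.bincount(kmeans.labels_)
print("patients per inferred subtype:", counts)
```

In this toy version the three groups are recovered almost perfectly because the patterns are cleanly separated; real patient data is far noisier, which is one reason SuStaIn models disease stage as well as subtype.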

Professor Daniel Alexander from the UCL Centre for Medical Image Computing said the new algorithm has the unique ability to reveal groups of patients with different variants of the disease. "One key reason for the failure of drug trials in Alzheimer's disease is the broad mixture of very different patients they test - a treatment with a strong effect on a particular subgroup of patients may show no overall effect on the full population so the trial fails".

Danny Alexander

Danny Alexander (centre) and Dr Alexandra Young (left) applied machine learning techniques to identifying dementia subtypes

"SuStaIn provides a way to show treatment effects on distinct subgroups, potentially expediting treatments to market".

Dr Alexandra Young, also from the Centre for Medical Image Computing, said "Individuals might present with similar symptoms to each other, but using SuStaIn we can find that they belong to different subgroups. This allows us to predict more accurately how their disease will progress and diagnose it earlier".

The team are now looking for ways to apply the algorithm to other progressive diseases - that is, diseases which worsen in clear stages over time, such as multiple sclerosis or chronic obstructive pulmonary disease (COPD).

The benefits of machine learning for patients and clinicians

Machine learning and artificial intelligence won’t replace clinicians anytime soon, but they can enhance the personal care clinicians offer patients. Automating time-consuming tasks will free up doctors to concentrate on the jobs only humans can do.

This takes us a step closer to the ultimate goal - personalised medicine. We still have a long way to go, but technological advances are already allowing us to predict how an individual patient will respond to a particular drug or whether they’re at risk of developing an illness.

With insights gained from machine learning, clinicians can develop therapy plans tailored to an individual’s genetic makeup. For the patient, this means a better response to treatment and fewer side effects. For healthcare systems, this means enormous savings. The NHS spends £15 billion a year on drugs, but 40-70% of the time these don’t work on the person they’re prescribed for. The benefits of targeted medicine reach far beyond individual patients – it can make an enormous difference to our healthcare system and society as a whole.

The challenges of using machine learning in healthcare

Given its incredible applications, you might be wondering why machine learning isn’t ubiquitous in healthcare. Essentially, what we can unlock from the method is dependent on three things - robust algorithms, powerful machines and very large amounts of good quality data. This final component poses an enormous challenge in the healthcare field.

There are large inconsistencies and missing values within datasets from a single hospital department, let alone across the immense volumes needed to build a robust machine learning system. Cleaning this amount of data can be a logistical nightmare. Furthermore, data reflects the biases of the people who collected it, which risks entrenching health inequalities between different groups. An overwhelming proportion of medical data is based on Caucasian men - what if you fit into neither of those categories? Will a model based on this data work for you? Machine learning systems built on poor-quality data will produce poor-quality results - which in healthcare can mean the difference between life and death.
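Before training anything, researchers typically audit datasets for exactly these problems. Here is a sketch of such a check on a small synthetic table - the columns, skew and missingness are all invented to make the point:

```python
# Sketch: auditing a synthetic dataset for missing values and
# demographic imbalance before any model is trained on it.
import pandas as pd

df = pd.DataFrame({
    "sex": ["M"] * 80 + ["F"] * 20,                 # deliberately skewed
    "age": [55] * 60 + [None] * 10 + [70] * 30,     # some values missing
})

missing_rate = df["age"].isna().mean()
group_share = df["sex"].value_counts(normalize=True)

print(f"missing age values: {missing_rate:.0%}")          # 10%
print(f"male share of records: {group_share['M']:.0%}")   # 80%
```

Numbers like these are only the starting point, but a model trained on this table would see four times as many men as women - exactly the kind of imbalance that makes its predictions unreliable for under-represented groups.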

Lack of access to data also stands in the way of progress. Where data is available, researchers face lengthy waits to receive ethics approval. Understandably, institutions are reluctant to publicly share this information and patients have serious concerns about how their personal data might be used. Particularly in countries with privatised health systems, there are concerns that this type of patient information could be exploited by insurers.

In order to progress the capabilities of machine learning, we need a huge shift in the way we collect and store data. In April this year, a House of Lords select committee on artificial intelligence called on the NHS to develop a consistent approach to data collection, and make these (anonymised) data sets available to researchers. In their report, the committee stated “Maintaining public trust over the safe and secure use of [patient] data is paramount to the successful widespread deployment of AI”.

The Institute of Healthcare Engineering has also been trying to remove data roadblocks. In 2017, we hosted Dr Dan Marcus from Washington University on a five-month sabbatical. Dan is the director of XNAT, an open-source database designed to cope with large medical imaging datasets. During his stay at UCL, he worked closely with us to build new algorithms and image-processing pipelines to help diagnose conditions such as Alzheimer’s and epilepsy. He is now working on expanding XNAT’s capabilities so it can handle even larger datasets. 


Dr Dan Marcus (left) created an open-source platform capable of handling large imaging datasets

What does the future of machine learning in healthcare look like?

Through the perfect concoction of computing advancements and big data, we are making significant breakthroughs in machine learning techniques.

These advancements are being supported by the ‘democratisation’ of machine learning code. Tech behemoth Google has its own open-source machine learning library, TensorFlow, which makes the technology available to developers across the world. Opening up this kind of knowledge will help machine learning techniques improve rapidly.

The Government is keen to make sure medicine benefits from these advancements, but to keep moving forward, we need significant investment in healthcare infrastructure. On 5 December, the Government announced its updated Life Sciences Sector Deal which includes a £75 million programme to develop new diagnostic tests using artificial intelligence and a commitment to sequence one million whole genomes.

Machine learning and artificial intelligence will eventually go from high-tech novelties to backdrops in our daily lives. In the world of medicine, machine learning will be seamlessly embedded in the clinical decision process – from triage, to diagnosis, to treatment plan. And, in some cases, it will even stop patients needing hospital intervention – machine learning insights will enable preventative healthcare. Drawing on patients’ medical records, machine learning systems can provide personalised feedback about their health and empower them to manage their own conditions.

We still have a long way to go until machine learning is fully integrated into healthcare and personalised medicine is a reality, but the advances we are making today at the IHE and elsewhere are moving us closer.

 

If you'd like to contribute to this blog series on healthcare engineering, get in touch with our Communications team.