
Surgical Robot Vision Research Group

Projects

AID-PitSurg: AI-enabled Decision support in Pituitary Surgery

Link to UKRI website

The pituitary is a small gland at the base of the brain that produces hormones controlling several important bodily functions. Pituitary tumours are among the most common types of brain tumours; a symptomatic tumour can cause hormonal imbalances and other health problems. Transsphenoidal surgery is the gold-standard treatment for most symptomatic pituitary tumours. It is minimally invasive, performed through the nostrils and nasal sinuses, and leaves no visible scars.

Transsphenoidal surgery is challenging and high risk due to the narrow approach and the proximity of critical neurovascular structures such as the optic nerves and carotid arteries, resulting in a relatively high rate of complications. The most common of these complications requiring medical or surgical treatment are dysnatraemia (related to pituitary dysfunction) and post-operative cerebrospinal fluid (CSF) rhinorrhoea (related to insufficient repair of the skull base). These complications lead to increased hospitalisation and recovery time, with a risk of life-threatening conditions.

To reduce the risk of these complications, this research project aims to develop a real-time Artificial Intelligence (AI) assisted decision support framework that can understand the surgical procedure, predict surgical errors and identify intraoperative causes of complications. The AI model will recognise surgical steps, detect surgical instruments, and identify specific instrument-tissue interactions during the sellar phase (for dysnatraemia) and closure phase (for CSF rhinorrhoea) of the surgery. The framework will use multimodal data, including pre- and post-operative clinical data and surgical scene perception, to predict and alert the surgeon of any surgical errors and potential post-operative complications in real-time.
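
As an illustration of the kind of rule-based alerting such a framework could drive, the sketch below checks each video frame's recognised phase against a set of phase-appropriate instruments. The phase labels, instrument names and mapping are purely hypothetical, standing in for the outputs of the step-recognition and instrument-detection models rather than the project's actual ontology.

```python
from dataclasses import dataclass

# Hypothetical phase -> expected-instrument mapping; illustrative only,
# not the project's actual surgical ontology.
EXPECTED_INSTRUMENTS = {
    "sellar": {"ring_curette", "suction", "pituitary_rongeur"},
    "closure": {"freer_elevator", "suction", "dural_sealant_applicator"},
}

@dataclass
class FrameObservation:
    phase: str        # assumed output of a step-recognition model
    instruments: set  # assumed output of an instrument detector

def check_frame(obs: FrameObservation) -> list:
    """Return alert messages for instruments unexpected in this phase."""
    expected = EXPECTED_INSTRUMENTS.get(obs.phase, set())
    return [f"unexpected instrument '{tool}' in {obs.phase} phase"
            for tool in sorted(obs.instruments - expected)]

alerts = check_frame(FrameObservation("closure", {"suction", "ring_curette"}))
# → ["unexpected instrument 'ring_curette' in closure phase"]
```

In a real system the per-frame rule check would sit downstream of the learned perception models and upstream of the surgeon-facing alert display.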

By developing this framework, the project aims to improve surgical outcomes by reducing the frequency of post-operative complications, shortening the length of hospital stays, and improving patients' recovery.

CADDIE: Computer Aided Detection and Diagnosis for Intelligent Endoscopy

Link to UKRI website

Computer Aided Detection and Diagnosis for Intelligent Endoscopy (CADDIE) will disrupt gastroenterology by using artificial intelligence to analyse colonoscopy video images in real-time. CADDIE will automatically detect and analyse cancerous and pre-cancerous polyps, with the goal of earlier and more accurate detection and diagnosis of cancer, leading to better patient outcomes.

CARE Surgery: Context Aware Augmented Reality for Endonasal Endoscopic Surgery

Link to EPSRC website

This project aims to develop tools to guide a surgeon during surgery to remove cancers on the pituitary gland.

Access to the pituitary gland is difficult, and one current approach is the endonasal approach, through the nose. While this approach is minimally invasive, which is better for the patient, it is technically challenging for the surgeon: it is difficult both to manoeuvre the tools and to maintain contextual awareness, remembering the location of, and identifying, critical structures.

One proposed solution is to combine pre-operative scan data, such as information from Magnetic Resonance Imaging (MRI), or Computed Tomography (CT) scans, and use them in conjunction with the video. Typically, engineers have proposed "Augmented Reality", where the information from MRI/CT scans is simply overlaid on top of the endoscopic video. But this approach has not found favour with clinical teams, and the result is often confusing and difficult to use.

In this project we have assembled a team of surgeons and engineers to re-think the Augmented Reality paradigm from the ground up. First, the aim is to identify the most relevant information to display on-screen at each stage of the operation. Then machine learning will be used to analyse the endoscopic video, and automatically identify which stage of the procedure the surgeon is working on. The guidance system will then automatically switch modes, and provide the most useful information for each stage of the procedure. Finally, we will automate the alignment of pre-operative data to the endoscopic video, using machine learning techniques.
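
The mode-switching idea described above can be sketched as a lookup from the recognised procedure stage to a display mode. The stage names, overlay modes and probabilities here are illustrative assumptions, standing in for the output of the video-analysis model rather than the project's actual design.

```python
# Hypothetical mapping from recognised stage to overlay mode; the stage
# names and modes are illustrative, not the project's actual guidance modes.
STAGE_TO_OVERLAY = {
    "nasal": "none",                  # plain video while navigating the nose
    "sphenoid": "landmark_labels",    # highlight bony landmarks
    "sellar": "critical_structures",  # carotid arteries and optic nerves
    "closure": "none",
}

def select_overlay(stage_probs: dict) -> str:
    """Pick the overlay mode for the most probable recognised stage."""
    stage = max(stage_probs, key=stage_probs.get)
    return STAGE_TO_OVERLAY.get(stage, "none")

mode = select_overlay({"nasal": 0.1, "sphenoid": 0.2, "sellar": 0.7})
# → "critical_structures"
```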

The end result should be more accurate and more clinically relevant than current state-of-the-art methods, and represent a genuine step change in performance for image-guidance during skull-base procedures.

EndoMapper: Real-time mapping from endoscopic video

Link to EndoMapper website

Endoscopes traversing body cavities such as the colon are routine in medical practice. However, they lack any autonomy. An endoscope operating autonomously inside a living body would require, in real-time, a map of the regions where it is navigating and its own localization within that map. The goal of EndoMapper is to develop the fundamentals for real-time localization and mapping inside the human body, using only the video stream supplied by a standard monocular endoscope.

In the short term, EndoMapper will bring live augmented reality to endoscopy, for example to show the surgeon the exact location of a tumour that was detected in a tomography, or to provide navigation instructions to reach the exact location where a biopsy should be performed. In the longer term, deformable intracorporeal mapping and localization will become the basis for novel medical procedures that could include robotized autonomous interaction with live tissue in minimally invasive surgery or automated drug delivery with millimetre accuracy.

Our objective is to research the fundamentals of non-rigid geometry methods to achieve, for the first time, mapping from GI endoscopies. We will combine three approaches to minimize the risk. Firstly, we will build a fully handcrafted EndoMapper approach based on existing state-of-the-art rigid pipelines, overcoming the non-rigidity challenge through new non-rigid mathematical models for perspective cameras and tubular topology. Secondly, we will explore how to improve on this using machine learning: we propose new deep learning models to compute matches along endoscopy sequences and feed them to a VSLAM algorithm in which the non-rigid geometry is still hard-coded. Finally, we plan to attempt a more radical end-to-end deep learning approach that incorporates the mathematical models for non-rigid geometry into the training of data-driven learning algorithms.
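
The second approach, in which learned matches feed a geometry back-end, depends on reliable correspondences between frames. A common baseline for filtering correspondences is mutual nearest-neighbour matching, sketched below on toy descriptor vectors; the project proposes learned deep features, for which these plain vectors are only a stand-in.

```python
import math

def mutual_nn_matches(desc_a, desc_b):
    """Mutual nearest-neighbour matching between two descriptor lists.

    A stand-in for a learned matcher: each descriptor here is a plain
    feature vector, and a pair (i, j) is kept only if desc_a[i] and
    desc_b[j] are each other's nearest neighbour.
    """
    def nn(query, pool):
        return min(range(len(pool)),
                   key=lambda j: math.dist(query, pool[j]))

    matches = []
    for i, d in enumerate(desc_a):
        j = nn(d, desc_b)
        if nn(desc_b[j], desc_a) == i:  # keep only mutual best matches
            matches.append((i, j))
    return matches

pairs = mutual_nn_matches([(0.0, 0.0), (1.0, 1.0)],
                          [(1.1, 0.9), (0.1, -0.1)])
# → [(0, 1), (1, 0)]
```

In a full pipeline, matches like these would be passed to the VSLAM back-end, where the (non-rigid) geometry is estimated.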

Endoo: Endoscopic Versatile robotic guidancE, diagnoSis and theraPy of magnetic-driven soft-tethered endoluminAl robots

Link to European Commission website

The Endoo project aims to develop an integrated robotic platform for the navigation of a soft-tethered colonoscope capable of performing painless diagnosis and treatment. Colorectal cancer is one of the major causes of mortality, but survival rates dramatically increase with early diagnosis. Current screening colonoscopy is limited by a variety of factors including invasiveness, patient discomfort, fear of pain, and the need for sedation; these factors consistently limit the uptake of mass screening campaigns.

Built around a novel robotic colonoscope and designed to make its use straightforward for the endoscopist and ideal for mass screening, the Endoo system has the potential to introduce into clinical practice a disruptive new paradigm for painless colonoscopy. Endoo combines a “front-wheel” magnetic-driven approach for active and smooth navigation with diagnostic and therapeutic capabilities, overcoming the limitations of current colonoscopy in terms of patient discomfort, dependence on operator skills, costs and outcomes for healthcare systems.

The acceptance and consolidation of robotics in the medical domain and the ever-growing development of endoscopic technologies are the fundamental building blocks for the realization of the Endoo platform, which can take advantage of solid, IPR-protected technologies provided by the project partners. The aim of the Endoo project is to bring the system to market for extensive clinical use. The Endoo consortium is a unique blend of internationally recognized European pioneers in all the involved disciplines, which will guarantee a dramatic leap forward in current technology through successful implementation in terms of scientific innovation, industrial engineering, certification, market analysis, and ultimately clinical deployment.

EPSRC UK Image-Guided Therapies Network+

Link to EPSRC website

Research in the field of image-guided therapies aims to develop new and improved technologies specifically designed for modern surgical interventions. By using smart devices which interact with patient-specific pre-surgical data, it is hoped we can increase safety levels, reduce recovery times and advance the treatment options available to patients. This field of research combines advances in medical imaging, sensor technology, computer modelling, robotics and visualisation to enable greater surgical precision.

GIFT-Surg: Guided Instrumentation for Fetal Therapy and Surgery

UCL is working towards a major development in surgery on unborn babies thanks to a £10 million award from the Wellcome Trust and the Engineering and Physical Sciences Research Council (EPSRC), under the ‘Innovative Engineering for Health’ initiative. This research project, titled GIFT-Surg (Guided Instrumentation for Fetal Therapy and Surgery), is led by UCL in collaboration with KU Leuven in Belgium, working with surgeons and doctors at Great Ormond Street Hospital, University College London Hospital NHS Trust, and UZ Leuven as part of a highly multidisciplinary team.

GIFT-Surg will engineer a novel combination of innovative interventional imaging systems and MRI/ultrasound scans to provide extremely accurate visualisation, both pre-operative and real-time, which will be used by the surgeon in conjunction with advanced surgical tools that offer new levels of flexibility and precision. A training platform will also be developed to equip surgeons with the necessary skills in the treatment of congenital birth defects such as spina bifida and twin-to-twin transfusion syndrome.

Performing surgery on fetuses while still in the womb is a risky business. Although pioneering work in this field began in the 1980s, it remains very challenging, with only a few highly trained teams around the world treating a handful of conditions. Our aim with GIFT-Surg is to create breakthrough transformations and improvements in the treatment of congenital problems in the womb. We are working to develop an extended flexible mechatronic multi-finger device that will be fed in through a small incision, of approximately 4 mm, in the abdomen of the mother. This device will enable surgeons to operate and perform complex procedures from outside the womb. The three “fingers” of the device will offer the surgeon superior dexterity and better vision at the surgery site.
While two of the fingers can carry out delicate procedures (in the case of spina bifida, patching up the source of the protrusion from the spine), the third will carry an innovative endoscopic imaging system that will create 3D images of the environment inside the womb and acquire tissue properties, making it possible to identify anatomic structures on and below the tissue surfaces. The surgeon will have direct and real-time access to these images and guidance cues, which will be clearly displayed on a screen visible to them in the operating theatre. The images will also serve as feedback for guiding the dexterous instrumentised arms.

By passing tools through a small incision, surgeons minimise the risks of complications for the mother and fetus, notably preterm labour. Some fetal surgery already takes advantage of this. Minimal access with laser coagulation, for example, is the current standard of care for treating twin-to-twin transfusion syndrome (TTTS), a condition where two identical twins share blood through the same placenta. While the donor twin is deprived of blood and can develop growth defects, the other twin is given too much blood and can suffer from heart failure. To correct this, surgeons currently operate using a fetoscope with a laser coagulator attached to it, which cauterises the blood vessels that link the blood circulation between the twins. Although this is a working solution, the existing tools lack flexibility; this kind of surgery therefore remains quite complex and has limited possibilities when twins, triplets and the placenta are arranged in certain configurations. Minimally invasive surgery can also currently be used to stimulate lung growth in another condition called congenital diaphragmatic hernia. However, a subsequent more invasive procedure, called the ex utero intrapartum treatment procedure or “EXIT surgery”, might then be required at the time of the baby's delivery.
Even for spina bifida, minimally invasive repair is possible in a few cases, but it remains inaccurate, has a high failure rate and requires multiple small incisions, which increases the risks associated with the procedure. Although spina bifida is one of the most common birth defects, it also requires one of the most complex procedures for treatment.

Multispectral Polarization-Resolved Endoscopy

Link to EPSRC website

The paradigm of modern surgical treatment is to reduce the invasive trauma of procedures by using small keyhole ports to enter the body, with endoscopic imaging used to see the interior. In clinical endoscopic investigations, information about tissue characteristics or function is therefore limited to colours and features visible in white-light reflection images, which correspond well to naked-eye vision but represent only a small part of what may be recorded with modern image sensors. Efforts are well underway to bring multispectral (or narrow-band) imaging, visible and near-infrared fluorescence, and microscopic/endoconfocal modalities into endoscopic investigations.

One interaction that is frequently overlooked is the scattering of polarized light by biological tissue, which is affected in a complex way by the tissue's scattering and absorption properties. As a simple example of how the polarization-resolved interaction can be used in practice, singly scattered light preferentially maintains its polarization compared with multiply scattered light, and this can act as a filter for superficial scattering. The depolarization effect can be used to characterise scatterer sizes and thus to detect the enlarged nuclei found in precancerous pathologies. More complex analysis requires 16 images to be acquired and processed, allowing detailed analysis of depolarization and retardance in particular, and leading to the detection of early cancer by revealing tissue structural properties such as birefringence and structural alignment.

This project will develop advanced endoscopic imaging approaches that better inform the surgeon about tissue structure and function in real-time during a procedure. Underpinned by polarization properties and effects, we will develop a new endoscopic imaging device that will combine a novel approach to polarization-resolved endoscopy (PRE) with computational tools and models to understand the images it can acquire. This will include Monte Carlo modelling of the polarized light-tissue interaction and approaches for registering, processing and reducing the number of polarized images required for diagnostics.

The aim is to produce a clinical instrument that can be applied in the detection and characterisation of peritoneal carcinomatosis, a form of metastatic disease that is common for ovarian and colorectal cancers. This requires image registration, augmentation and data reduction as well as simplified endoscopic hardware. Beyond this immediate clinical investigation, there is a range of screening and image-guided procedures that may be aided by PRE. As well as having direct applications in surgery, the PRE imaging paradigm will be applicable to many other sectors transformed by powerful, small-profile imaging endoscopes, for example manufacturing or inspection in constrained environments. For this cross-sector impact to be achieved, the project will build theoretical knowledge and robust software platforms as well as hardware and optical solutions.
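
As a concrete example of what can be computed from the 16 polarization-resolved images, the sketch below evaluates the Gil-Bernabeu depolarization index of a 4x4 Mueller matrix (one matrix per pixel in practice). This is a standard polarimetry quantity, offered here as an illustration rather than as the project's specific analysis pipeline.

```python
import math

def depolarization_index(M):
    """Gil-Bernabeu depolarization index of a 4x4 Mueller matrix M.

    Delta = sqrt((sum_ij M_ij^2 - M_00^2) / (3 * M_00^2)),
    ranging from 0 (an ideal depolarizer) to 1 (non-depolarizing).
    The 16 entries correspond to the 16 polarization-resolved
    measurements mentioned above.
    """
    total = sum(m * m for row in M for m in row)
    return math.sqrt((total - M[0][0] ** 2) / (3 * M[0][0] ** 2))

# Identity Mueller matrix: a perfect non-depolarizing element.
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
# Ideal depolarizer: all outgoing light is fully depolarized.
ideal_depolarizer = [[1.0, 0.0, 0.0, 0.0]] + [[0.0] * 4 for _ in range(3)]
# → depolarization_index(identity) == 1.0,
#   depolarization_index(ideal_depolarizer) == 0.0
```

Mapping this index per pixel is one way a polarization-resolved endoscope could present depolarization contrast to the surgeon.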

Robotic Actuated Imaging Skins: Royal Academy of Engineering Chair in Emerging Technologies

Link to RAEng website

This project will develop robotic surface structures with embedded sensors that can adapt their shape and size using artificial intelligence algorithms to control and interpret sensory information. Using this data, these new systems will help to enhance imaging capabilities during minimally invasive surgery and enable safer and more precise procedures to treat diseases across different anatomical regions.

Robotic Assisted Imaging

Link to EPSRC website

The paradigm of modern surgical treatment is to reduce the invasive trauma of procedures by using small keyhole ports to enter the body. Robotic assistant systems provide tele-manipulated instruments that facilitate minimally invasive surgery by improving the ergonomics, dexterity and precision of controlling manual keyhole surgery instruments. Robotic surgery is now common for minimally invasive prostate and renal cancer procedures. But imaging inside the body is currently restricted by the access port and only provides information at visible organ surfaces, which is often insufficient for easy localisation within the anatomy and for avoiding inadvertent damage to healthy tissues.

This project will develop robotic assisted imaging, which will exploit the autonomy and actuation capabilities provided by robotic platforms to optimise the images that can be acquired by current surgical imaging modalities. In the context of robotic assisted surgery, now an established surgical discipline, advanced imaging can help the surgeon to operate more safely and efficiently by allowing the identification of structures that need to be preserved while guiding the surgeon to anatomical targets that need to be removed. Providing better imaging and integration with the robotic system will result in multiple patient benefits by ensuring safe, accurate surgical actions that lead to improved outcomes.

To deliver this functionality, new theory, computing, control algorithms and real-time implementations are needed to underpin the integration of imaging and robotic systems within dynamic environments. Information observed by the imaging sensor needs to feed back into the robotic control loop to guide automatic sensor positioning and movement that maintains the alignment of the sensor to moving organs and structures. This level of automation is largely unexplored in robotic assisted surgery at present because it involves multiple challenges: visual inference, reconstruction and tracking; calibration and re-calibration of sensors and various robot kinematic strategies; and integration with surgical workflow and user studies.

Combined with the use of pre-procedural planning, robotic assisted imaging can lead to pre-planned imaging choices that are motivated by different clinical needs. As well as having direct applications in surgery, the robotic assisted imaging paradigm will be applicable to many other sectors transformed by robotics, for example manufacturing or inspection, especially when working within non-rigid environments. For this cross-sector impact to be achieved, the project will build deep theoretical foundations and robust software platforms that are ideally suited for foundational fellowship support.
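
The sensor-alignment feedback loop described above can be illustrated with a toy proportional visual-servoing step that drives a tracked target towards the image centre. The 2-DoF model and the unit image-to-command coupling are simplifying assumptions for illustration, not the project's actual control design.

```python
def centring_step(target_px, image_size, gain=0.5):
    """One proportional visual-servoing step.

    Returns a velocity command that moves a tracked image target
    towards the image centre. A real system would map the pixel error
    through the camera/robot kinematics rather than acting on it
    directly.
    """
    cx, cy = image_size[0] / 2, image_size[1] / 2
    ex, ey = target_px[0] - cx, target_px[1] - cy  # pixel error
    return (-gain * ex, -gain * ey)                # velocity command

def simulate(target_px, image_size, steps=20):
    """Apply the controller, assuming the command shifts the target 1:1."""
    x, y = target_px
    for _ in range(steps):
        vx, vy = centring_step((x, y), image_size)
        x, y = x + vx, y + vy
    return x, y

# An off-centre target in a 640x480 image converges to the centre (320, 240).
final = simulate((600.0, 100.0), (640, 480))
```

Each step halves the pixel error, so the target converges geometrically to the image centre; in a surgical setting the "target" would be a moving organ or structure being kept in view.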

Self-guided Microrobotics for Automated Brain Dissection

Link to UKRI website

This project addresses the need for improved microsurgical tools in neurobiology through the development of a new autonomous micro-robotic concept powered by Artificial Intelligence (AI). The work brings together a diverse range of world-leading research expertise across Canadian and UK institutions in neurobiology, computer science, robotics and engineering. Given the broad range of connected sciences also involved, including optics, machine vision, neuroscience and healthcare engineering, we expect the results generated to have a wide reach. Importantly, UCL is well placed to support these activities by providing access to aligned investments that facilitate clinical applications and translation through our flagship Wellcome / EPSRC Centre in Interventional and Surgical Sciences (WEISS) and its links to institutionally backed units such as the Translational Research Office, the NIHR UCLH BRC Joint Research Office and the UCL Institute for Healthcare Engineering. As such, we believe this consortium has the potential for significant impact through novel technical development and knowledge exchange that encourages healthcare communities to adopt new ways of working.

Use of Magnetic Resonance Imaging in the Design and Manufacture of Patient Specific Posterior Pedicle Screw Insertion Guides for the treatment of Scoliosis

Link to Wellcome Trust website

Scoliosis surgery involves the insertion of screws into the spine (called pedicle screws). Current techniques to insert these screws are not completely accurate: even in international experts' hands, 25-30% of the screws are misplaced. Misplaced screws carry a high risk of bone weakening and of injuries to the spinal cord, nerve roots or blood vessels, and have long-term health implications for young patients, including lifelong disability. More accurate methods use computer navigation or image-guidance techniques. However, these involve using more radiation (X-rays, CT scans) before and/or during the surgery, and surgeons and parents are concerned about the long-term effects of ionising radiation in young patients. Our work helps to address this unmet health need by developing MRI-based imaging techniques to design patient-specific pedicle screw placement devices, which improve the accuracy of placing screws in the spine and remove the need for ionising radiation.