Summer Studentships offer a stipend for UCL Medical Physics and Biomedical Engineering students to pursue a project developed in collaboration with an academic over a number of weeks in the summer.

This year, we have two types of studentship available: Research Support, and Teaching and Learning Support. Applicants are welcome to apply for two to three projects from either category.
These posts are open to all undergraduate students, including those in their final year. Projects will start in early July 2023.
These placements are intended to allow undergraduate students to undertake paid work during the summer with academics. The key aims of the studentships are to give students the opportunity to work on a collaborative research project, gain experience in developing and writing a research proposal, and present their research to their peers.
Research Support
Available projects:
- Adaptive Bluetooth Low Energy Motion Sensor
Supervisor: Dr Lynsey Duffell (l.duffell@ucl.ac.uk) and Dimitrios Airantzis (Dimitrios.airantzis.12@ucl.ac.uk)
Location: Hybrid
Project Aim: To develop a motion sensor providing adapted cycling data to Zwift, a virtual reality training platform.
Project Summary: Zwift is a training platform accessible to anyone, irrespective of body capabilities. People living with Spinal Cord Injury (SCI) who wish to exercise may use it for rehabilitation purposes. The platform relies on movement data collected from the participant in order to adapt the creation and use of a virtual reality training environment. Because of limited physical capabilities, data originating from SCI patients leads to the creation of a virtual environment reflecting those capabilities. The result is disheartening for the patients and leads them to abandon their exercise routine.
We are developing a sensor that provides adapted movement data to the Zwift platform, allowing patients to participate on a virtually equal footing with able-bodied participants. This will provide the incentive for the patients to continue their exercise routine.
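As an illustration of the adaptation step only (not the sensor firmware or the actual Bluetooth Low Energy profile, which are not described above), the sketch below shows how raw power and cadence readings might be rescaled relative to a personal baseline before being forwarded to the platform. All function names and numbers are hypothetical.

    # Minimal sketch of the adaptation step only; sensor I/O and the Bluetooth Low
    # Energy profile are out of scope here. All names and numbers are illustrative.

    def adapt_power(raw_power_w, personal_max_w, target_max_w=250.0):
        """Map a participant's raw power output onto a comparable able-bodied range."""
        if personal_max_w <= 0:
            return 0.0
        return raw_power_w * (target_max_w / personal_max_w)

    def adapt_cadence(raw_cadence_rpm, personal_max_rpm, target_max_rpm=90.0):
        """Rescale cadence in the same way as power."""
        if personal_max_rpm <= 0:
            return 0.0
        return raw_cadence_rpm * (target_max_rpm / personal_max_rpm)

    if __name__ == "__main__":
        # Example: a participant whose maximum sustainable output is 60 W
        print(adapt_power(raw_power_w=30.0, personal_max_w=60.0))          # 125.0 W equivalent
        print(adapt_cadence(raw_cadence_rpm=25.0, personal_max_rpm=40.0))  # 56.25 rpm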
- Multispectral imaging of skin reaction in radiotherapy
Supervisor: Prof Adam Gibson (adam.gibson@ucl.ac.uk)
Location: Hybrid (majority on campus)
Project Aim: To investigate the ability of multispectral imaging to detect the reaction of dark and light skin to changes in local blood flow, as a first step towards a new method to examine radiation-induced skin damage.
Project Summary: This proposal extends a successful undergraduate research project, using a multispectral imaging system to measure skin blood flow in the forearm. The intention of the project was to investigate whether this could provide baseline data on the use of imaging to predict and track local adverse skin reactions following radiotherapy. I have met with a therapeutic radiographer who was very excited by the project. It aligns with current research interests in the field, and he assured me that the approach is novel, appropriate and relevant. Initial results obtained during the undergraduate project show sensitivity to blood vessels, so we know that the system is capable of detecting signals from the layers of skin where the skin reaction occurs.
In the summer project, we aim to use lab-based high-quality multispectral photography to understand the skin response to local vasodilation and vasoconstriction resulting from the application of topical creams or local heating and cooling on healthy volunteers. Ethical approval is in place and in the undergraduate project, we developed expertise in using the imaging system and the necessary image processing. We will investigate the value of imaging on different areas of skin and on healthy volunteers with different skin colours.
We will also use hyperspectral imaging, which gives lower spatial resolution but higher spectral resolution, to look for spectral changes that we can link to the chromophores. This might be particularly useful for identifying melanin in dark skin, and provide information on how best to maximise sensitivity to skin blood flow in the presence of melanin. This has not been done before in the context of radiotherapy.
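As a hedged illustration of the kind of analysis involved, the Python sketch below performs simple linear spectral unmixing, fitting per-pixel attenuation at a few wavelengths as a weighted sum of chromophore spectra (oxy-/deoxy-haemoglobin and melanin). The wavelengths and coefficients are placeholders, not reference data, and the project may well use a different unmixing approach.

    # Linear spectral unmixing sketch: per-pixel attenuation at a few wavelengths is
    # fitted as a weighted sum of chromophore spectra. Coefficients are placeholders.
    import numpy as np

    wavelengths_nm = np.array([500, 570, 620, 700])    # example imaging bands
    E = np.array([                                     # columns: HbO2, Hb, melanin
        [0.9, 1.1, 2.0],
        [1.2, 1.2, 1.6],
        [0.3, 0.8, 1.3],
        [0.1, 0.3, 1.0],
    ])

    def unmix(attenuation):
        """attenuation: (n_wavelengths, H, W) array of -log reflectance values.
        Returns (3, H, W) relative concentration maps."""
        n_wl, h, w = attenuation.shape
        conc, *_ = np.linalg.lstsq(E, attenuation.reshape(n_wl, -1), rcond=None)
        return conc.reshape(3, h, w)

    # Demonstration with synthetic data
    hbo2_map, hb_map, melanin_map = unmix(np.random.rand(4, 64, 64))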
- Optimizing parameters for inter-patient registrations of head and neck cancer patients
Supervisor: Dr Jamie McClelland (j.mcclelland@ucl.ac.uk) and Poppy Nikou (poppy.nikou.20@ucl.ac.uk)
Project Aim: To investigate the optimal combination of parameters for inter-patient registration of head and neck cancer patients, for spatial alignment.
Project Summary: During radiotherapy, head and neck cancer patients often experience large anatomical changes over their course of treatment. To model the anatomical changes over time, all patients must first be spatially aligned. This alignment is done using image registration algorithms. The algorithm typically used with RTIC is an in-house implementation and is open-source. However, there are a large number of parameters which are set by the user. Selecting the optimal parameters for a cohort of head and neck cancer patients is a challenging task, due to the large anatomical differences between patients. This project aims to optimize these parameters using an open-source dataset of head and neck CT scans.
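As a sketch of what the parameter optimisation could look like in practice, the Python example below runs a brute-force grid search over a few hypothetical registration parameters and scores each setting by the Dice overlap of warped and fixed label maps. The run_registration() function is a stand-in for the in-house registration tool (here an identity placeholder so the loop runs), and the parameter names are illustrative.

    # Brute-force parameter search sketch; run_registration() is a hypothetical
    # stand-in for the in-house registration tool, replaced by an identity placeholder.
    import itertools
    import numpy as np

    param_grid = {
        "bending_energy_weight": [0.001, 0.01, 0.1],   # example parameters only
        "grid_spacing_mm": [5, 10, 20],
        "pyramid_levels": [3, 4],
    }

    def run_registration(fixed_image, moving_image, moving_labels, **params):
        """Placeholder: register moving to fixed and return the warped moving labels."""
        return moving_labels                           # identity transform stand-in

    def dice(a, b):
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    def search(fixed_image, moving_image, fixed_labels, moving_labels):
        best_params, best_score = None, -1.0
        keys = list(param_grid)
        for values in itertools.product(*(param_grid[k] for k in keys)):
            params = dict(zip(keys, values))
            warped_labels = run_registration(fixed_image, moving_image, moving_labels, **params)
            score = dice(warped_labels, fixed_labels)
            if score > best_score:
                best_params, best_score = params, score
        return best_params, best_score

    # Dummy 3D volumes just to demonstrate the loop
    shape = (32, 32, 32)
    print(search(np.zeros(shape), np.zeros(shape), np.ones(shape), np.ones(shape)))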
- Visualising a 3D eyeball – repurposing a 3D visual effects software tool (Blender) for ophthalmology
Supervisor: Dr Terence Leung (t.leung@ucl.ac.uk) and Dr Ranjan Rajendram (ranjan.rajendram@ucl.ac.uk)
Location: Hybrid
Project Aim: The project aims to facilitate eye inspection by reconstructing a 3D eyeball using close-up 2D images captured by a clinical slit lamp. Each image represents a small section of an eye; these will be stitched together digitally on a spherical surface using a popular 3D visual effects software tool – Blender.
Project Summary: Ophthalmologists often need to inspect a patient’s eye over a period of time to monitor eye disease progression and post-surgery recovery. They use a slit lamp, which acts as a microscope, to zoom in on the surface of an eye to make observations and take digital photos for the record. However, it can be difficult to locate the same region of the eye each time the patient’s eye is inspected. Also, each captured photo shows only a small, magnified section of the eye, and how it is spatially related to the whole eye is unclear.
In this 8-week project, we will develop a new way to visualise the surface of an eye in 3D. During eye inspection with a slit lamp, the built-in camera will video the eye surface. The captured video will then be processed in Blender, where each frame of the video is analysed and stitched together with neighbouring frames, resulting in a 2D flattened “panoramic” view of the eye. This 2D eye will then be mapped onto a 3D sphere by projection painting, creating a virtual 3D eye. Ophthalmologists can rotate the virtual 3D eye to inspect any regions of interest and zoom in to investigate fine details. They can also compare the 3D eye models captured during past hospital visits.
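The stitching itself will be done inside Blender, but as a rough, stand-alone illustration of the frame-extraction and panorama step, the Python/OpenCV sketch below samples frames from an assumed slit-lamp video file ("slit_lamp.mp4") and stitches them into a single image.

    # Stand-alone illustration with OpenCV; "slit_lamp.mp4" is an assumed input file.
    import cv2

    def sample_frames(video_path, every_n=15):
        frames, cap, i = [], cv2.VideoCapture(video_path), 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if i % every_n == 0:
                frames.append(frame)
            i += 1
        cap.release()
        return frames

    frames = sample_frames("slit_lamp.mp4")
    if len(frames) >= 2:
        stitcher = cv2.Stitcher_create()
        status, panorama = stitcher.stitch(frames)
        if status == 0:                      # 0 corresponds to Stitcher::OK
            cv2.imwrite("eye_panorama.png", panorama)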
Blender is a free open-source computer graphics software tool popularised by artists, animators, filmmakers and hobbyists alike to create computer-generated imagery. We aim to capitalise on the advanced processing capabilities and the highly intuitive user interface of this vastly popular software and repurpose it for medical applications. This project will suit a student who is interested in ophthalmology and 3D visual effects software, particularly Blender.
- Incorporating spatiotemporal patterns of brain development into paediatric brain tumour radiotherapy planning to reduce treatment side effects
Supervisor: Dr Jamie Dean (jamie.dean@ucl.ac.uk) and Mohammad Amin Lessan (mohammad.lessan.20@ucl.ac.uk)
Location: Hybrid
Project Aim: We aim to understand the spatiotemporal variation in radiosensitivity of the developing brain. This understanding will be exploited to design novel personalized radiotherapy planning approaches to spare the (age-dependent) most radiosensitive regions of the brain.
Project Summary: Paediatric central nervous system tumours are the second most common childhood malignancy. They are frequently treated with radiotherapy and have relatively high survival rates. However, patients are often left with debilitating neurocognitive side effects resulting from radiation-induced brain damage. The developing brain is particularly sensitive to radiation, with different regions expected to be more or less sensitive at different stages of development. This spatiotemporal variation in radiosensitivity is not accounted for in current treatment approaches. However, technological advances in the spatial targeting of radiotherapy (such as the new UCLH Proton Beam Therapy Centre) provide an unexploited opportunity to spare the most radiosensitive regions of the brain to reduce side effects. In this project we aim to understand the spatiotemporal variation in radiosensitivity of the developing brain. This understanding will be exploited to design novel personalized radiotherapy planning approaches to spare the (age-dependent) most radiosensitive regions of the brain.
We seek to achieve these aims through the application of machine learning to gene expression data. We will use radiation response data of a large panel of cell lines with matched gene expression data to develop a machine learning model to predict radiosensitivity based on gene expression. We will then apply this model to spatiotemporal gene expression data of the developing brain to predict the radiosensitivity of different regions of the brain at different ages. We shall validate these predictions using matched radiotherapy dose distributions and toxicity data of previously treated paediatric brain tumour patients (time permitting). Finally, we will assess the feasibility of designing radiotherapy plans that spare the (age-dependent) most radiosensitive regions of the brain (future work beyond the scope of this project). We intend that this work will contribute to the design of future clinical trials to determine whether neural development-informed radiotherapy planning can reduce the side effects of paediatric brain radiotherapy.
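A minimal sketch of the modelling step, under the assumption that radiosensitivity is treated as a regression target derived from cell-line survival data: fit a model on gene expression of cell lines, then apply it to brain-region expression profiles. The data below are synthetic and the shapes, variable names and model choice are assumptions, not the project's actual pipeline.

    # Synthetic-data sketch: learn radiosensitivity from cell-line gene expression,
    # then predict it for brain-region expression profiles. Shapes are assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X_cell_lines = rng.normal(size=(200, 500))    # cell lines x genes (synthetic)
    y_radiosens = rng.normal(size=200)            # e.g. survival-curve metric (synthetic)

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    print("CV R^2:", cross_val_score(model, X_cell_lines, y_radiosens, cv=5).mean())

    model.fit(X_cell_lines, y_radiosens)
    X_brain_regions = rng.normal(size=(30, 500))  # regions x genes at one age (synthetic)
    predicted_sensitivity = model.predict(X_brain_regions)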
- Compact robotic manipulator for MRI-guided neurosurgery
Supervisor: Ziyan Guo (ziyan.guo@ucl.ac.uk)
Location: Hybrid
Project Aim: To design a compact robotic manipulator capable of operating inside an MRI head coil and maintaining minimal electromagnetic interference.
Project Summary: Magnetic resonance imaging (MRI) is a widely recognized imaging technique that can produce high-contrast images of soft tissue without exposing patients to harmful radiation. Its versatility has enabled its application in a variety of settings, e.g. intra-operative guidance for brain surgery. However, two challenges limit its widespread use for surgical guidance in regular hospital settings: the strong magnetic field and the constrained in-bore space. These issues make it difficult to use conventional surgical robotic tools, as no ferromagnetic materials are permitted in the MRI environment. Without robotic assistance, patients must be transferred in and out of the scanner bore for imaging updates and surgical operations, making the procedure complex and time-consuming.
Currently, there are very few robot prototypes capable of operating within the compact space of an MRI head coil without degrading imaging quality. This project aims to develop a teleoperated robotic manipulator for MRI-guided needle-based neurosurgery that is MRI-compatible and compact enough to fit inside the imaging head coil. The robot will operate in proximity to the MRI isocenter or lesion targets while introducing minimal EM interference to the MR images.
This project will involve computer-aided design (CAD) modelling, 3D printing and positional control (Matlab).
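The project specifies Matlab for the positional control; purely as an illustration of the idea, the sketch below (in Python) implements a discrete PID position controller on a simulated single axis with arbitrary gains and a unit-mass plant, neither of which is taken from the project itself.

    # Discrete PID position controller on a simulated unit-mass single axis.
    # Gains and time step are arbitrary; the real controller would run in Matlab.

    def simulate_pid(setpoint=10.0, kp=4.0, ki=0.2, kd=4.0, dt=0.01, steps=500):
        pos, vel, integral = 0.0, 0.0, 0.0
        prev_err = setpoint - pos
        for _ in range(steps):
            err = setpoint - pos
            integral += err * dt
            derivative = (err - prev_err) / dt
            force = kp * err + ki * integral + kd * derivative
            vel += force * dt          # unit mass, no friction: a = F/m
            pos += vel * dt
            prev_err = err
        return pos

    print(simulate_pid())              # settles close to the 10.0 setpoint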
- Miniaturized fibre-optic magnetic field sensor for device tracking under MRI conditions
Supervisor: Dr Zhi Li (zhi-li@ucl.ac.uk) and Dr Sacha Noimark (s.noimark@ucl.ac.uk)
Location: On campus
Project Aim: The aim of this project is to develop a highly sensitive and miniaturized magnetic sensor based on magnetic elastomer composites and fibre-optic interferometric sensing technology, to enable precise tracking and localization of medical devices (e.g. catheters) during MRI-guided surgical interventions with minimal invasiveness.
Project Summary: Precise guidance of interventional devices in an MRI context is essential for achieving higher success rates and reduced risks of surgery, e.g. cardiovascular biopsy, and greatly improved patient outcomes. Clinical device-tracking technologies that use magnetic coils to collect local magnetic resonance (MR) signals for imaging registration have long been challenged either by size-limited low spatial resolution or by serious MR-induced heating. MRI-compatible fibre-optic magnetic sensors are well suited to solving these issues due to their superior magnetic sensing capabilities. When integrated with a medical device, such a sensor enables real-time position readouts by monitoring the local field, which is highly dependent on its spatial coordinates inside the MRI scanner. Meanwhile, its small size, flexible longitudinal dimension, intrinsic material safety and immunity to EM interference make it extremely safe and easy to integrate with medical devices for minimally invasive interventions. Nevertheless, current fibre-optic magnetic field sensors can be limited in sensitivity and reliability by poor integration of the magnetic components with the optical fibre or by structural complexity.
This project builds on the previous success of physiological fibre-optic temperature and pressure sensors in our group. A proof-of-concept interferometric sensor design was proposed for highly sensitive magnetic field sensing, based on optical interference modulated by deformations of the cavity coated onto the fibre tip. A simple and cost-effective dip-coating method was employed to apply multilayers of magnetically responsive materials directly onto the end-face of a single-mode optical fibre. This project will focus on optimizing the sensor design and its magnetic sensing performance and facilitating steps towards the development of a benchtop prototype. Initial clinical tests of device tracking under MRI conditions are expected to be done in collaboration with our clinician colleagues.
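As a hedged illustration of the interferometric read-out principle: in a low-finesse Fabry-Perot cavity the reflected intensity varies as cos(4*pi*n*L/lambda), so a field-induced change in the cavity length L shifts the fringe and can be read out as an intensity change. The sketch below uses illustrative values only, not measured parameters of the actual sensor.

    # Reflected intensity of a low-finesse cavity vs cavity length; illustrative values.
    import numpy as np

    def reflected_intensity(cavity_length_m, wavelength_m=1550e-9, n=1.0, visibility=0.8):
        phase = 4 * np.pi * n * cavity_length_m / wavelength_m
        return 0.5 * (1 + visibility * np.cos(phase))

    L0 = 50e-6        # nominal cavity length (50 um, illustrative)
    dL = 1e-9         # 1 nm deformation induced by the magnetic composite
    delta = reflected_intensity(L0 + dL) - reflected_intensity(L0)
    print(f"Intensity change for a 1 nm deflection: {delta:.3e}")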
- Extracting and describing cell fate dynamics in glioblastoma from live-cell imaging
Supervisor: Dr Peter Embacher (p.embacher@ucl.ac.uk) and Dr Jamie Dean (jamie.dean@ucl.ac.uk)
Location: Hybrid
Project Aim: Establishing an image analysis pipeline to study long-term inheritance effects in glioblastoma
Project Summary: Glioblastoma is one of the most aggressive and difficult to treat cancers. One obstacle to developing effective and widely applicable treatments is the high level of heterogeneity, both between patients and within a single tumour. We aim to better understand how this heterogeneity evolves over several generations of cancer cells. Combining mathematical modelling with quantitative live-cell imaging is a powerful approach to discover mechanisms of variability in the proliferation and therapy response of cells. In order to capture this variability in quantitative measures of cellular proliferation and therapy response, it is vital to be able to reliably process a large number of cells over the course of several generations with minimal error rate. The aim of the project is to develop a computational live-cell imaging analysis pipeline that automatically identifies and tracks the cells to determine their dynamics and genealogical relations. This will involve combining tools from machine learning and statistics to extract the relevant features from microscopy images with the high fidelity necessary to capture intergenerational dependencies. Apart from the image analysis this project will likely also entail modelling aspects to identify and exploit relevant cell characteristics.
A background in programming (preferably Python) and statistical modelling would be helpful; prior biological knowledge is not needed.
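As a minimal sketch of the two core steps, segmentation and frame-to-frame linking, the example below uses Otsu thresholding plus nearest-centroid matching. The real pipeline will likely use learned segmentation models and more robust tracking; this only illustrates the structure.

    # Segmentation and frame-to-frame linking sketch; thresholds and distances are arbitrary.
    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.measure import label, regionprops
    from scipy.spatial.distance import cdist

    def centroids(frame):
        """Segment a grayscale frame and return an array of object centroids."""
        mask = frame > threshold_otsu(frame)
        return np.array([r.centroid for r in regionprops(label(mask))])

    def link(prev_centroids, curr_centroids, max_dist=20.0):
        """Return (prev_index, curr_index) matches by nearest centroid within max_dist."""
        if len(prev_centroids) == 0 or len(curr_centroids) == 0:
            return []
        d = cdist(prev_centroids, curr_centroids)
        matches = []
        for i, j in enumerate(d.argmin(axis=1)):
            if d[i, j] <= max_dist:
                matches.append((i, int(j)))
        return matches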
- Modelling the network that underpins Noonan syndrome
Supervisor: Dr Ben Hall (b.hall@ucl.ac.uk)
Project Aim: Develop a model in the BioModelAnalyzer tool of the genes involved in Noonan syndrome
Project Summary: Noonan syndrome is a genetic disorder that is caused by mutations in certain genes. Most cases of Noonan syndrome are caused by mutations in the PTPN11 gene, which provides instructions for making a protein that is involved in regulating cell growth and division. Other genes that are known to be associated with Noonan syndrome include SOS1, RAF1, KRAS, NRAS, and SHOC2, among others. How these genes interact could influence the outcome of the disease, and understanding the relationships could enable future AI based methods for patient assessment.
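Purely as a toy illustration of the modelling idea (not the BioModelAnalyzer model itself, and not a validated network), the sketch below represents a simplified RAS/MAPK-style chain involving some of the named genes as a Boolean network and iterates it to a fixed point.

    # Toy Boolean network; the wiring is a simplified illustration, not the real model.
    rules = {
        "PTPN11": lambda s: s["PTPN11"],                 # input node (mutation = stuck on)
        "SOS1":   lambda s: s["SOS1"],                   # input node
        "RAS":    lambda s: s["PTPN11"] and s["SOS1"],
        "RAF1":   lambda s: s["RAS"],
        "MEK":    lambda s: s["RAF1"],
        "ERK":    lambda s: s["MEK"],
    }

    def step(state):
        return {gene: bool(rule(state)) for gene, rule in rules.items()}

    state = {"PTPN11": True, "SOS1": True, "RAS": False, "RAF1": False, "MEK": False, "ERK": False}
    for _ in range(10):                                  # iterate to a fixed point
        new_state = step(state)
        if new_state == state:
            break
        state = new_state
    print(state)   # ERK ends up active when both upstream inputs are on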
- Exploring new system designs for optimal sensitivity and acquisition speed with 2D beam tracking X-ray phase-contrast micro-tomography
Supervisor: Prof Marco Endrizzi (m.endrizzi@ucl.ac.uk) and Carlos Navarrete-León (carlos.leon.17@ucl.ac.uk)
Location: Hybrid (majority on campus)
Project Aim: Establishing an in-silico framework for the efficient exploration of new system designs with optimal sensitivity and acquisition speed for phase contrast micro-tomography with the two-directional sensitivity beam tracking technique. Quantitative data analysis will unveil the optimal choices for amplitude modulator design, geometrical magnification, X-ray tube settings and X-ray detector characteristics.
Project Summary: X-ray imaging is an invaluable non-destructive tool for biomedical and material science applications. However, low-Z materials, such as carbon-fibre composites or soft tissue samples, often offer limited contrast and image quality with conventional X-ray methods due to their weak attenuation. This can be addressed by using phase-sensitive methods, which can significantly enhance image quality in such circumstances and are the AXIm group’s field of expertise.
This project involves exploring new system designs for optimal sensitivity and acquisition speed to optimize the 2D beam tracking technique, which has been demonstrated by the group [1]. The student will use image quality metrics to find optimum trade-offs between different system characteristics such as attenuation modulator design, geometrical magnification and X-ray detector requirements. Synthetic data will be generated using a rigorous simulation tool, and the numerical results will be compared against experimental data for validation.
This project is an excellent opportunity to learn the fundamentals of X-ray interaction with matter, image formation, detectors, image analysis and reconstruction. The student will develop an overview of the entire image formation process and gain hands-on experience in system design and modelling. A simulation pipeline in Python will be provided, but the student’s own input and coding creativity are highly encouraged.
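As a hedged illustration of one ingredient of the beam-tracking analysis, the sketch below recovers the shift of a single beamlet between a reference and a sample exposure by cross-correlation with parabolic sub-pixel refinement, using a synthetic Gaussian beamlet. The real pipeline is more involved; this only shows the shift-retrieval idea.

    # Sub-pixel beamlet shift by cross-correlation; the Gaussian beamlet is synthetic.
    import numpy as np

    def beamlet_shift(reference, sample):
        corr = np.correlate(sample - sample.mean(), reference - reference.mean(), mode="full")
        k = corr.argmax()
        if 0 < k < len(corr) - 1:                      # parabolic peak interpolation
            y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
            k = k + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
        return k - (len(reference) - 1)

    x = np.arange(200.0)
    ref = np.exp(-0.5 * ((x - 100.0) / 5.0) ** 2)      # reference beamlet profile
    sam = np.exp(-0.5 * ((x - 103.4) / 5.0) ** 2)      # shifted by 3.4 pixels
    print(beamlet_shift(ref, sam))                     # approximately 3.4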
- MAIDNEC: Multimodal Artificial Intelligence for the Diagnosis of Necrotising EnteroColitis
Supervisor: Dr Evangelos Mazomenos (e.mazomenos@ucl.ac.uk) and Dr Stavros Loukogeorgakis (s.loukogeorgakis@ucl.ac.uk)
Location: Hybrid
Project Aim: Babies born prematurely have a high risk of developing Necrotising Enterocolitis (NEC). Early diagnosis and referral are vital, as delays result in worse outcomes for patients. However, identification of NEC signs and patient stratification from abdominal X-rays is challenging. This project will develop Artificial Intelligence techniques for detecting and classifying the severity of NEC.
Project Summary: Necrotising Enterocolitis (NEC) is a severe neonatal condition with significant morbidity and mortality. Nearly 12% of infants born weighing less than 1500 g will develop NEC, and the mortality of the disease has been estimated at 18-30%. The economic cost of NEC is high (estimated at $5-6B per year in the US; no readily available UK data) as it requires specialist management at a tertiary NICU with a multidisciplinary team.
Timely diagnosis from abdominal X-rays (AXR) and treatment decision-making are vital, as delays result in worse outcomes for patients. This presents a great challenge to radiologists, paediatric surgeons and neonatologists, as confounding factors (variability in presentation, similarity of signs to other conditions) make detection of NEC signs a difficult task. Furthermore, interpreting neonatal AXR imaging is challenging for non-specialised personnel, and radiologist availability out of hours is limited. This has two consequences: 1) patients with NEC are investigated and diagnosed late as a result of late referrals/misdiagnosis; 2) patients without NEC (e.g. sepsis or other pathology) unnecessarily undergo medical management for NEC due to the inability to differentiate between the two conditions early.
Even if NEC is identified early, it is not straightforward to identify which patients would benefit from conservative medical treatment and which patients require urgent surgical intervention. At present, the only absolute indication for surgery in NEC is a perforation, and all other indications rely on clinical judgement of the patient’s condition and disease progression.
The proposed project aims to develop AI technology to automatically detect and discriminate NEC against non-NEC confounding cases, as well as stratify NEC cases according to their severity. Potential clinical impact arises in the following areas: i) accelerating detection and correct diagnosis of NEC; ii) streamlining patient management through stratification of cases; and iii) distinguishing cases requiring urgent surgical referral from those that can follow a more conservative (non-surgical) approach.
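As a rough sketch of the classification component only, the example below defines a small convolutional network mapping a grayscale AXR to three assumed classes (no NEC / medical NEC / surgical NEC). In practice a pretrained backbone and additional clinical inputs (the multimodal part) would likely be used; this only shows the overall shape of the problem.

    # Tiny CNN sketch for illustration; classes and input size are assumptions.
    import torch
    import torch.nn as nn

    class TinyNECNet(nn.Module):
        def __init__(self, n_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = TinyNECNet()
    dummy_batch = torch.randn(4, 1, 256, 256)       # 4 grayscale AXR images (synthetic)
    logits = model(dummy_batch)
    print(logits.shape)                             # torch.Size([4, 3])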
- Characterisation and implementation of optical ultrasound generating membranes
Supervisor: Dr Erwin Alles (e.alles@ucl.ac.uk) and Fraser Watt (fraser.watt.20@ucl.ac.uk)
Location: On campus
Project Aim: The primary aims of this project are to assess the optical and acoustic performance of optically absorbing membranes and to develop reproducible and effective methods of deploying them in optical ultrasound imaging devices.
Project Summary: Optical ultrasound (OpUS) systems utilize the photoacoustic effect to perform pulse-echo ultrasound imaging through entirely optical means. A key component in these systems is the optically absorbing material that is used to generate the ultrasound when illuminated by a pulsed light source. Such materials are typically composites of an elastomeric material and an optical absorber, and are either formed into a planar membrane or used to dip-coat optical fibres. The ultrasound fields generated by such structures typically exhibit broad bandwidths, as well as high peak-to-peak pressures on par with those generated by piezoelectric transducers. Whilst theoretical studies have explored the effect of membrane thickness and composition, to date experimental studies have been limited.
In this project the student will be tasked with designing and running a study to assess the optical and acoustic performance of polydimethylsiloxane (PDMS) membranes doped with various absorbers and developing methods for integrating these membranes into OpUS devices. The project will consist of two aspects. First, the student will fabricate a range of membranes with controlled thickness, composition, and optical performance, and use optical and acoustic metrology techniques to characterize these membranes. Once these membranes have been fabricated, the student will work with members of the MISI team to investigate how these membranes can be more effectively implemented into OpUS imaging systems and how different membranes affect the achieved imaging quality.
This work is predominantly experimental in nature and will involve: working in a wet lab to fabricate materials, designing and fabricating structures to test acoustic membranes, carrying out ultrasound field characterizations and carrying out OpUS imaging. Previous laboratory experience and knowledge of MATLAB (for signal analysis) would be preferred, but are not essential.
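The project suggests MATLAB for the signal analysis; as an illustration of the same idea in Python, the sketch below estimates the peak-to-peak amplitude and an approximate -6 dB bandwidth of a synthetic hydrophone trace. All pulse parameters are arbitrary placeholders.

    # Estimate peak-to-peak amplitude and -6 dB band of a synthetic hydrophone trace.
    import numpy as np

    fs = 250e6                                  # sample rate, 250 MHz (illustrative)
    t = np.arange(0, 2e-6, 1 / fs)
    pulse = np.exp(-((t - 1e-6) ** 2) / (2 * (40e-9) ** 2)) * np.sin(2 * np.pi * 15e6 * t)

    peak_to_peak = pulse.max() - pulse.min()

    spectrum = np.abs(np.fft.rfft(pulse))
    freqs = np.fft.rfftfreq(len(pulse), 1 / fs)
    above = freqs[spectrum >= spectrum.max() / 2]   # -6 dB in amplitude = half maximum
    print(f"Peak-to-peak (a.u.): {peak_to_peak:.2f}")
    print(f"-6 dB band: {above.min()/1e6:.1f} to {above.max()/1e6:.1f} MHz")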
Teaching and Learning Support
These placements are intended to allow undergraduate students to undertake paid work this summer with academics to develop teaching and learning materials, content or activities for use in MPBE courses.
Available projects:
- Teaching Resources for 3D Printing of Clinical Training Models
Supervisor: Prof Adrien Desjardins (a.desjardins@ucl.ac.uk) and Mr Jia-En Chen (jia-en.chen.22@ucl.ac.uk)
Location: Hybrid
Project Aim: The aims of this teaching-centred project are: 1) to design and develop medical phantoms for clinical training and pre-surgery simulation in neurosurgery and cardiac surgery using 3D printing technologies; 2) to disseminate teaching resources through a new UCL website for use across the department and worldwide.
Project Summary:
Technical Background - This project focuses on optimising a new technique developed by the applicants at UCL for fabricating medical simulation models compatible with MRI and CT, which involves 3D-printed heterogeneous flexible polymers. The use of 3D printing technology in the medical field has grown significantly in recent years, and there is strong interest in training and skills evaluation in neurosurgery and cardiac surgery.
Location and Resources - This project will be based in Charles Bell House in the WEISS centre, which has extensive facilities for 3D printing and clinical validation with a mock operating room.
Clinical Collaboration - This project will involve close collaboration with Dr Hani Marcus (UCL) in neurosurgery and Dr Tara Mastracci (Barts Heart Centre) in vascular surgery. The project will involve three phases. First, experimental tests will be conducted to assess the impact of different 3D printing materials on the performance of these models during clinical training. Second, the results of these tests will be used to fabricate novel medical simulation models. Third, the models will be evaluated in two pilot studies led by Dr Marcus and Dr Mastracci, which will involve a combination of inexperienced and experienced clinicians.
Clinical Impact - Ultimately, the research findings will contribute to advancements in medical training and positively impact patient outcomes.
Teaching Impact - We have seen widespread interest in 3D printing among undergraduate Medical Physics students; however, there is a need to consolidate experiences and learnings from recent projects into a single website that provides, for instance, teaching videos created by students as well as 3D designs and links to software resources. We will directly address this need with a UCL-hosted website created by the student.
- Development of teaching materials for motion sensors
Supervisor: Dr Terence Leung (t.leung@ucl.ac.uk)
Location: Hybrid
Project Aim: The aim is to develop teaching materials, including Matlab scripts, lab sheets and examples in course notes, for introducing motion sensors to students. We will use both the built-in motion sensors in smartphones and standalone wireless sensors for the development.
Project Summary: Motion capture is increasingly important in monitoring the well-being of a person. For example, gait analysis provides information on the way the subject walks, which can indicate underlying physical and neurological pathologies.
One way to capture motion is to wear motion sensors on the body. In the module MPHY0039, which is taken by third-year and MSc students as an option, we teach motion capture as part of gait analysis to assess rehabilitation. In January 2023, we modified the syllabus to strengthen the part related to the application of motion sensors.
In this summer project, we will develop teaching materials on the application of motion sensors for the lab sessions in MPHY0039, using Matlab as the main software platform. To access the motion sensors in a smartphone (Android or iOS), the Matlab Mobile app can be used; it also transfers the motion signals to the cloud, from where a computer can retrieve them in Matlab. The Matlab platform can also receive motion signals captured by standalone wireless motion sensors (e.g. STMicroelectronics’ SensorTile or BlueCoin) via Bluetooth. Once available on a computer, the motion signals can be processed (e.g. filtering, data fusion) and visualised with the help of all the functionalities available in Matlab.
The student will develop a lab in which the joint angle (elbow or knee) will be estimated by measuring the orientations of two connecting limbs using two smartphones or two standalone wireless sensors, one attached to each limb. This lab will demonstrate the measurement of the joint angle with motion sensors as the subject walks.
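As a hedged sketch of the joint-angle calculation the lab will demonstrate: once the orientation of each limb segment has been estimated from its sensor (however that is done), the joint angle follows from the angle between the two segment directions. The teaching materials will use Matlab; the Python sketch and the vectors below are for illustration only.

    # Joint angle from two limb-segment direction vectors; values are made up.
    import numpy as np

    def joint_angle_deg(upper_segment_vec, lower_segment_vec):
        u = upper_segment_vec / np.linalg.norm(upper_segment_vec)
        v = lower_segment_vec / np.linalg.norm(lower_segment_vec)
        return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

    thigh = np.array([0.0, 0.0, -1.0])            # pointing straight down
    shank = np.array([0.0, 0.5, -0.87])           # flexed forward
    print(f"Knee flexion: {joint_angle_deg(thigh, shank):.1f} degrees")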
- Maintaining inquiry-based laboratory learning with large cohorts
Supervisor: Dr Henry Lancashire (h.lancashire@ucl.ac.uk)
Location: Hybrid optional, with approximately 60% on site required
Project Aim: To develop an evidence based proposal for maintaining practical and inquiry based in person laboratory learning where laboratory time may be restricted due to timetabling or room capacity.
Project Summary: Laboratory work is key to the role of professional scientists and engineers, and therefore higher education must provide students with practical time in the laboratory. Lab-based teaching provides students with hands-on experimentation and helps students to develop skills in scientific inquiry [1]. In-person lab teaching is cost, time, space, and staff intensive, causing a trend towards virtual lab teaching [2], accelerated by the COVID-19 pandemic [3].
Where student cohorts exceed available laboratory space, new approaches must be found. These include double teaching (repeating classes, with the associated staff and student timetabling pressures) and replacing laboratory teaching with virtual labs or demonstrations (losing hands-on or inquiry experience respectively).
This project will investigate existing evidence-based approaches to in-person laboratory teaching for large cohorts, and will provide proposals and recommendations to maintain hands-on, inquiry-based learning.
A suitable student will have:
- Hands on experience with equipment in the Biomedical Engineering Teaching Laboratory.
- An interest in the scholarship of teaching and learning.
- The ability to critically evaluate and synthesise information from the scientific and teaching literature.
- Completed, or be completing, their third or fourth year of study (the project is only suitable for such students).
[1] Hofstein A. The role of laboratory in science teaching and learning. In: Taber KS, Akpan B, editors. Science education. Rotterdam: Sense Publishers; 2017. p. 355–68.
[2] Gomes L, Bogosyan S. Current trends in remote laboratories. IEEE Trans Ind Electron. 2009;56:4744–56.
[3] Lancashire H, Vanhoestenberghe A. Rapid conversion of a biomedical engineering laboratory from in person to online. Biomed Eng Education. 2021;1:181–86.
- Development of resources for new Scenario
Supervisor: Eve Hatten (e.hatten@ucl.ac.uk)
Location: Hybrid
Project Aim: To understand the feasibility of a new scenario concept.
Project Summary: A student project to test potential solutions to a new scenario concept.
The student will build on the work of a previous project student to research electro-oculography (EOG) and test a range of circuits to review whether they are suitable for controlling a computer mouse. If the concept is deemed feasible, they will then work to produce suitable resources to support the scenario project. Alongside this, the student can determine suitable components and hardware for the project.
The student must be a Biomedical Engineering student who has previously completed Scenario 4 (Regaining Control).
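As a hedged sketch of one candidate signal-to-cursor mapping the student might test, the example below thresholds an already filtered horizontal EOG channel and converts it to left/right cursor steps. Thresholds and units are placeholders; the real signal would come from the analogue front-end circuits under evaluation.

    # Threshold-based mapping from a horizontal EOG trace to cursor steps; values illustrative.
    import numpy as np

    def eog_to_cursor_steps(eog_samples_uv, threshold_uv=150.0, step_px=10):
        """Return a cursor displacement (pixels) per sample from a horizontal EOG trace."""
        steps = np.zeros(len(eog_samples_uv), dtype=int)
        steps[eog_samples_uv > threshold_uv] = step_px     # look right -> move right
        steps[eog_samples_uv < -threshold_uv] = -step_px   # look left  -> move left
        return steps

    trace = np.array([20.0, 180.0, 200.0, 10.0, -170.0, -30.0])   # synthetic samples (uV)
    print(eog_to_cursor_steps(trace))   # [  0  10  10   0 -10   0]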
- Level Starting Point for First Year Undergraduates
Supervisor: Dr Paul Burke (maurice.burke@ucl.ac.uk)
Project Aim: To align our expectations of incoming first-year students’ assumed knowledge with what they have actually been taught, given continual content changes across multiple national and international pre-university course syllabuses. This project will produce material to bridge the various knowledge gaps, appropriately aligning students’ prior knowledge with our course prerequisites.
Project Summary: It is often assumed that the material taught to incoming students in pre-university courses provides an accurate basis for the prerequisite material of our first-year modules. With the discrepancies in material across multiple exam types and differing course entry requirements, the level of material covered by individual students before arriving on the course varies widely.
This project will examine the differences in material taught across all of the different types and flavours of exams sat by our incoming students. From this, any discrepancies in the expected level of knowledge will be identified, and a set of short catch-up courses will be prepared to align students’ knowledge with that assumed by our module prerequisites.
The courses will consist of educational material, video lectures, question sheets/online quizzes and in-person workshops. This material will be made available to incoming students after they have received their A-Level (or equivalent) results and confirmation of their place within the department, allowing them to study before the start of term one. The workshops could be incorporated into the introductory lecture schedule during orientation week.
Personal tutors can also refer students to these resources in instances where they are made aware that the student is struggling with the material.
Eligibility
Open to all undergraduate students in Medical Physics and Biomedical Engineering, including iBSc students and those in their final year (due to graduate this summer).
Remuneration
The studentship will pay the London Living Wage for up to 8 weeks, typically starting from the beginning of July (the London Living Wage is currently £11.05 per hour; for a 36.5-hour UCL week, we pay £403 per week). Training will be provided appropriate to the requirements of the project. In addition, up to £200 may be requested for materials and consumable items.
Submission Instructions
- You are required to complete the above application form.
- You are required to submit a cover letter (no longer than one page) and specify at the top your 2 or 3 chosen projects in order of preference. In your cover letter, you should address how you meet the required attributes, your prior experience in research and/or teaching, and why you are interested in the chosen project(s).
- You must also submit a one-page CV.