Dr Wenhui Song, the UCL Centre for Biomaterials in Surgical Reconstruction and Regeneration
Current hearing aids have limitations – they cannot recognise where sounds come from, and their limited electrode channels and narrow frequency range can make it hard to hear in loud settings or enjoy music the way you would want to. They are also power-hungry, requiring frequent battery charging.
Dr Wenhui Song (Professor of Biomaterials and Medical Engineering) and her multidisciplinary team from UCL and London South Bank University, working with an ENT specialist, are using AI and nanotechnology to change the way people who are hard of hearing listen to the world around them.
Inspired by the cochlea, they have developed an AI-assisted piezoelectric nanofibre-based intelligent hearing system that can directly convert sound input into electrical signals across a wide range of frequencies. These signals are then processed by neural networks, enabling precise sound recognition and directional hearing.
The device will extend the hearing range of current hearing implants, so people can capture low-pitched sounds and follow conversations in noisy settings. In addition to sound recognition, this new device will also be able to tell users where sound is coming from, with early tests suggesting its spatial sound recognition is more accurate than natural human bilateral hearing. The device can also potentially power itself using sound input alone.
The team is currently working on their first prototype. The technology has been published in Science Advances and its IP has been patented.