Supervisors: Prof. John Shawe-Taylor (Computer Science), Prof. Alan Johnston (Psychology)
From both scientific evidence and our everyday experience of conversation, it is well known that humans use visual facial features to improve their comprehension of speech. However, when a person suffers hearing impairment and is forced to use hearing aids, this visual advantage is often negated by inefficient amplification, which creates conflicting acoustic signals. We aim to remove these inefficiencies by emulating the visual system and employing visual information to create a more efficient hearing aid.
Page last modified on 13 Nov 2012 20:10