Speech Science Forum -- Lida Alampounti
30 May 2024, 4:00 pm–5:00 pm
Assessing how vision benefits speech-in-noise perception and the impact of ageing and hearing loss on audio-visual benefits
Event Information
Open to
- All
Availability
- Yes
Organiser
- Rana Abu-Zhaya
Location
- Chandler House G10, 2 Wakefield Street, London, WC1N 1PF
Investigations of the role of audio-visual integration in speech-in-noise perception have largely focused on the benefit provided by lipreading cues. A growing body of evidence suggests that additional audio-visual processes can influence auditory scene analysis. One candidate is audio-visual temporal coherence: the extraction and integration of the temporally correlated information carried by the amplitude envelope of speech and the opening and closing of the mouth. However, whether, and to what extent, audio-visual temporal coherence can aid speech-in-noise perception relative to lipreading remains largely unknown.
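To make the temporal-coherence idea concrete, here is a minimal sketch of how the correlation between a speech amplitude envelope and a mouth-opening trace might be quantified. It runs on synthetic data; the envelope extraction (Hilbert transform plus low-pass filter), the common 100 Hz sampling rate, and the use of a Pearson correlation are illustrative assumptions, not the vCCRMn method described below.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt
from scipy.stats import pearsonr

def amplitude_envelope(audio, sr, cutoff_hz=10.0):
    """Slow amplitude envelope of a waveform: rectify via the Hilbert
    transform, then low-pass to keep syllable-rate modulations."""
    env = np.abs(hilbert(audio))
    b, a = butter(4, cutoff_hz / (sr / 2))  # 4th-order Butterworth low-pass
    return filtfilt(b, a, env)

# Hypothetical inputs, both resampled to a common rate: a mono speech
# waveform and a frame-by-frame mouth-aperture trace (e.g. lip-opening
# area extracted from video of the talker).
sr = 100  # common sampling rate in Hz (assumption)
rng = np.random.default_rng(0)
speech = rng.standard_normal(10 * sr)      # stand-in for a speech recording
envelope = amplitude_envelope(speech, sr)
# Toy coherent trace: a noisy copy of the envelope stands in for the mouth.
mouth_aperture = envelope + 0.5 * np.std(envelope) * rng.standard_normal(envelope.size)

# Temporal coherence here is simply the correlation of the two series.
r, p = pearsonr(envelope, mouth_aperture)
print(f"audio-visual temporal coherence (Pearson r) = {r:.2f} (p = {p:.3g})")
```

A fuller analysis would also consider temporal lags and frequency-specific coherence, but a simple correlation suffices to illustrate what "temporally correlated information" means here.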
The current work examines the contributions of these two mechanisms to the visual enhancement of speech-in-noise perception across 125 individuals aged 19 to 85, with both typical hearing and hearing loss. An audio-visual speech-in-noise task, 'vCCRMn', was developed to capture both lipreading and audio-visual temporal coherence-related enhancements of listeners' auditory performance. The vCCRMn task and a battery of accompanying tests were administered to three experimental groups: younger participants with normal hearing, older participants with normal hearing, and older participants with hearing loss.
Results suggest that a) the vCCRMn successfully captures an audio-visual benefit in video conditions relative to a static image with audio, b) the vCCRMn can separately assess the contributions of lipreading and those of audio-visual temporal coherence-related enhancements, and c) participants' speech-in-noise performance and audio-visual benefit are negatively impacted by both ageing and hearing loss.
This event will also be hosted online: https://ucl.zoom.us/j/97718702895?pwd=UzhOcHlwMTd4NWZFWGVTNGZwTndCQT09
About the Speaker
Lida Alampounti
PhD student at UCL
Lida is a PhD student at University College London (UCL), funded by the National Institute for Health and Care Research University College London Hospitals Biomedical Research Centre (NIHR UCLH BRC) and UCL. She is a member of the Bizley lab at the UCL Ear Institute.
Her research is on multisensory integration and, specifically, on how vision influences listening in noisy environments in humans. She has investigated this question using psychophysical methods. Her background includes English linguistics and human brain imaging.