
UCL Psychology and Language Sciences


Speech Science Forum -- Emmanuel Ponsot

16 January 2025, 3:00 pm


Across-frequency interactions in the central auditory system: Insights from psychophysics of spectro-temporal modulation processing

Event Information

Open to

All

Availability

Yes

Organiser

Victor Rosi

Location

G15
2 Wakefield Street
London
WC1N 1PF

Most current auditory models treat the processing of spectral and temporal dimensions of sound as independent: cochlear filters decompose auditory signals into frequency bands, followed by modulation filters that process the temporal information within each band. However, animal and human vocalizations exhibit spectro-temporally oriented patterns of energy, such as formant transitions. Electrophysiological studies have identified central auditory neurons sensitive to specific spectro-temporal directions, indicating non-separable processing. These findings suggest that the auditory system may have specialized mechanisms to integrate temporal information across frequency bands, which is essential for forming auditory objects and enabling robust speech perception. Yet, the conditions under which the auditory system engages separable versus non-separable processing, as well as individual variability in these mechanisms, remain poorly understood. In this talk, I will present psychophysical evidence of spectro-temporal integration, highlighting both past and ongoing experiments using spectro-temporal modulation signals (ripples). This research aims to refine our understanding of the underlying integration mechanisms and provide a novel framework for modeling individual differences in speech perception, particularly in complex and noisy listening environments.
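To make the stimulus class concrete: a "ripple" is a broadband sound whose spectrogram contains a sinusoidal pattern of energy drifting across log-frequency over time, parameterized by a temporal modulation rate (Hz) and a spectral modulation density (cycles/octave). The minimal sketch below, in Python with NumPy, synthesizes such a moving ripple from log-spaced tone carriers with sinusoidally modulated envelopes. It is an illustrative implementation of the generic ripple construction, not the specific stimuli or parameter values used in the speaker's experiments; all parameter names and defaults are assumptions chosen for clarity.

```python
import numpy as np

def make_ripple(dur=1.0, fs=44100, f_lo=250.0, f_hi=8000.0,
                n_tones=100, rate_hz=4.0, density_cyc_oct=1.0,
                depth=0.9, seed=0):
    """Synthesize a moving spectro-temporal ripple from log-spaced tones.

    rate_hz         : temporal modulation rate (omega), in Hz
    density_cyc_oct : spectral modulation density (Omega), cycles/octave
    depth           : modulation depth, 0..1
    (Illustrative defaults; not the parameters used in the talk.)
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    # Log-spaced carrier frequencies and their positions in octaves above f_lo
    freqs = f_lo * 2.0 ** np.linspace(0.0, np.log2(f_hi / f_lo), n_tones)
    x_oct = np.log2(freqs / f_lo)
    sig = np.zeros_like(t)
    for f, x in zip(freqs, x_oct):
        # Sinusoidal envelope whose phase depends on both time and log-frequency,
        # producing a spectro-temporally oriented ("rippled") energy pattern
        env = 1.0 + depth * np.sin(2 * np.pi * (rate_hz * t + density_cyc_oct * x))
        sig += env * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
    return sig / np.max(np.abs(sig))  # normalize to avoid clipping
```

Sweeping the sign and magnitude of the temporal rate relative to the spectral density changes the drift direction and velocity of the ripple, which is what makes these signals useful probes of direction-sensitive (non-separable) spectro-temporal processing.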

Zoom link: https://ucl.zoom.us/j/92052680901

About the Speaker

Emmanuel Ponsot

CNRS Researcher at STMS Lab (IRCAM - CNRS - SU)

Emmanuel Ponsot is a CNRS Researcher at the STMS Lab (Ircam, Paris). Trained as an engineer at École Centrale de Lyon, he earned a Master’s degree in Acoustics in 2012 before transitioning to Psychophysics and Cognitive Sciences. He completed his Ph.D. at Sorbonne Université in 2015 and conducted postdoctoral research at Ircam, École Normale Supérieure (Paris), and Ghent University (Belgium), where he was supported by a Fondation pour l’Audition fellowship. He joined the CNRS as a researcher in 2021.

His research relies on a tight integration of psychophysics, EEG, and computational modeling to study how the human auditory system processes complex sounds, such as speech, at both peripheral and central levels. His current projects explore diverse topics, including the neural and perceptual coding of auditory spectral shape and the computational mechanisms underlying social and emotional cognition in speech prosody. A key goal of his research is translating these findings, along with the novel experimental and computational tools developed, into refined audiological tools for individuals with hearing impairments or neurological conditions.
