
Auditory Cognitive Neuroscience Lab (Chait Lab)

We research how listeners use sound to learn about, and interact with, their surroundings. Our work is based on behavioural methods (psychophysics), eye tracking and functional brain imaging (MEG, EEG and fMRI).

We are based at the Ear Institute. MEG and fMRI scanning is conducted at the Wellcome Trust Centre for Neuroimaging. We are also affiliated with the Institute of Cognitive Neuroscience.


The many sound-generating sources in the environment sum into one combined waveform that enters the ear. To make sense of the world, a listener has to separate this input into representations of the different objects in the scene, determine their locations in space, recognize them, and react appropriately. This generally occurs automatically and without explicit effort. The main objective of our work is to understand the processes by which such a representation is created by the brain and how it is maintained: the processes by which auditory sensory information is converted into a perceptual representation of our surrounding environment. Although most of the experimental questions that we pose are relevant to all sensory modalities, we choose to address them in the auditory domain both because auditory processing is much less well understood than, for example, visual processing, and because two of the sensory experiences considered most uniquely human, speech and music, are primarily auditory in nature.

Our methodology is based on a combination of functional brain imaging and psychophysics. By studying how brain responses unfold in time, we explore how representations that are useful for behaviour arise from sensory input, and dissociate automatic, stimulus-driven processes from those that are affected by the perceptual state, task and goals of the listener. Examples of the questions we address in our experiments are: How do listeners detect the appearance or disappearance of auditory objects (sound sources) in the environment? What makes certain events ‘pop out’ and grab listeners’ attention even when it is focused elsewhere, while the detection of other events requires directed listening? How are listeners able to focus attention on one sound in a mixture? Are listeners able to selectively ignore a moment in time? How does visual input affect how listeners process auditory information?


Research in the lab is funded by the following grants:

  • Royal Society International Exchange Award IE140319 (MC principal investigator), 2014-2016.
  • BBSRC International Partnering Award BB/L026864/1 (MC principal investigator), 2014-2016.
  • UCL-NTT research collaboration, 2012-2015.
  • BBSRC project grant BB/K003399/1 (MC principal investigator), 2013-2016. £653,284.
  • Wellcome Trust project grant 093292/Z/10/Z (MC principal investigator), 2011-2014. £137,436 (plus 400 hours of MEG/fMRI scanning at the Wellcome Trust Centre for Neuroimaging, UCL).
  • BBSRC project grant BB/H006958/1 (MC co-investigator together with J. Linden and D. McAlpine), 2010-2013. £506,000.
  • Deafness Research UK Research Fellowship, 2009-2012.
  • EU Marie Curie Individual Fellowship, 2007-2009.
