UCL Ear Institute



Learn more about the breadth of our research across the field of auditory cognitive neuroscience.

Change detection

The goal of this research is to understand how listeners with normal hearing detect and process change events in auditory scenes.

We address questions like:

  • How are object appearance and disappearance events detected, and what brain mechanisms are involved?
  • Are change events detected automatically by the brain, even when listeners’ attentional focus is elsewhere?
  • What makes certain change events fundamentally more salient than others? Why do some events ‘pop out’ and grab listeners’ attention even when they are focused elsewhere, while other events require directed listening to detect?
  • Under what conditions do listeners perform well, and which situations result in reduced performance?

This is useful for designing human-computer interfaces and other devices intended to help professionals operate effectively in environments where detecting change is critical (e.g. air traffic controllers and pilots).

Understanding change detection in normal listeners can also provide a measure against which to evaluate hearing impairment, as well as the benefit obtained from hearing aids.

Detecting and representing patterns in sound sequences

The goal of this research is to identify what features of sounds we are sensitive to, how quickly these are learnt, and how this learning is affected by our attentional and perceptual state.

Discovering the limits of these mechanisms is essential to uncovering the neural computations which underlie our sensitivity to sound patterns. More broadly, this research is critical to our understanding of how the dynamics of our acoustic environment are coded by the brain when we perceive sound.

Figure-ground segregation

The goal of this research is to understand how listeners are able to focus attention on one sound in a mixture, a process known as ‘figure-ground segregation’.

How are we able to extract, and focus attention on, a sound of interest against a background of other interfering sounds (e.g. the voice of a friend at a noisy party)?

Our experiments aim to understand how these processes are carried out by listeners, which aspects of sound our brains use to achieve segregation, and the neural systems involved in this process.


Attention

This research approaches the issue of attention from two angles.

To uncover the neural underpinnings and perceptual limits of selective attention, we investigate:

  • What aspects of sound can listeners attend to, and which brain mechanisms are involved in this process?
  • What enables us to perceptually pull a sound of interest out of a mixture of many sounds (‘the cocktail party problem’)?
  • How does the way we listen affect the way sounds are processed by the brain?

We also investigate which aspects of auditory processing are susceptible to attentional manipulation: which processes are automatic, and which require attention or are affected by the perceptual state and behavioural goals of the listener?

Understanding these processes is important for understanding what information the brain extracts from the auditory world when our focus of attention is diverted away from ‘listening’.


Auditory salience and distraction

The goal of this research is to understand and quantify auditory salience and distraction in the context of complex acoustic scenes.

We investigate the neural and computational processes by which concurrently presented sounds compete for, and capture, listeners’ perceptual and attentional resources.

It is widely assumed that the auditory system plays a critical role in the brain’s ‘early warning system’ by continuously scanning the unfolding acoustic scene for potentially relevant events (e.g. the approach of predators or prey), even when attention is focused elsewhere.

Characterising acoustic features that attract attention helps us understand auditory perception in the healthy brain. It is also important for understanding how the system breaks down as a consequence of ageing, hearing impairment or certain neurological disorders (e.g. schizophrenia), which are characterised by abnormal auditory processing.

Understanding acoustic salience, and its reverse, distraction, also has immediate applications in guiding the design of warning systems, hearing aids, human-computer interfaces, and other devices intended to help individuals respond efficiently to urgent events in their surroundings.

This work is conducted in collaboration with Shigeto Furukawa and Makio Kashino at the Human Information Science Laboratory, NTT, Japan.

COgnitive COntrol of a Hearing Aid (COCOHA project)

The goal of this research is to investigate how our brains can directly control a device like a hearing aid.

Guardian ‘Brain Waves’ project

The goal of this research, conducted in 2016, was to understand the effects of music familiarity on the brain’s response using EEG (electroencephalography) and eye-tracking.