Speech, Hearing and Phonetic Sciences
Current Funded Research (most recent first) | Completed Projects
Mapping the development of phonetic perception
Infants are born with perceptual abilities that allow them to hear acoustic differences between the speech sounds used in many languages. During the first year of life, they become more specialised, increasing their ability to distinguish native-language speech sounds and decreasing their ability to distinguish some non-native speech sounds. One limitation of existing infant-testing methodologies is that it is only feasible to test isolated speech contrasts (e.g., the vowels in 'beet' vs. 'bit'), which gives a rather narrow view of perceptual development (i.e., one cannot see changes in how individuals process the entire vowel system of a language). We believe that we have found a way to produce a broader view of development by using the Acoustic Change Complex (ACC) of EEG recordings (electrodes on the scalp measuring electrical activity in the brain) to provide a time-efficient measure of auditory perceptual sensitivity, and then using multidimensional scaling to produce perceptual maps based on many stimulus pairs. We will develop this method in tests of 7-month-old infants and in comparisons of adults with different languages. We will then compare these perceptual maps for 4-, 7-, and 11-month-olds as well as 4-year-olds, in order to chart how the perception of vowels and fricatives changes during early development.
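As a minimal sketch of the mapping step described above: classical (Torgerson) multidimensional scaling takes a matrix of pairwise dissimilarities (here, hypothetical values standing in for ACC-derived sensitivity measures across stimulus pairs) and embeds each stimulus as a point in a low-dimensional perceptual map. The stimulus values and matrix below are illustrative assumptions, not the project's actual data or method.

```python
import numpy as np

def classical_mds(dissim, n_dims=2):
    """Classical (Torgerson) MDS: place items in n_dims dimensions so that
    pairwise distances approximate the given dissimilarity matrix."""
    d2 = np.asarray(dissim, dtype=float) ** 2
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ d2 @ j                      # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(b)             # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_dims]    # keep the largest n_dims
    # Scale eigenvectors by sqrt of (non-negative) eigenvalues to get coordinates
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))

# Hypothetical dissimilarities among four vowel stimuli
d = np.array([[0, 1, 4, 5],
              [1, 0, 3, 4],
              [4, 3, 0, 1],
              [5, 4, 1, 0]], dtype=float)
coords = classical_mds(d)  # a 4 x 2 "perceptual map" of the stimuli
```

In such a map, stimulus pairs that listeners discriminate poorly sit close together, so comparing maps across age groups shows how the perceptual space reorganises during development.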
The role of speech motor resonances in spoken language processing
Researchers: Patti Adank, Joseph Devlin (CPB)
Leverhulme Trust project grant. Duration 3 years January 2014 - December 2016
This project investigates the relationship between speech production and speech perception by testing how production mechanisms support perception in everyday listening situations, such as hearing someone speak in an unfamiliar accent or understanding speech in background noise. The presence of such acoustic variation makes the speech signal more difficult to understand; yet listeners normally extract the linguistic message relatively effortlessly. Successful speech perception relies crucially on the ability to deal effectively with acoustic variation in speech, yet the mechanisms underlying this ability are poorly understood. We will use a combination of behavioural experiments, neurophysiological experiments (involving Motor Evoked Potentials), and Transcranial Magnetic Stimulation to investigate how comprehension of accented speech relies on production mechanisms.
Researchers at UCL: Julian Leff (Emeritus Professor, UCL Mental Health Sciences), Mark Huckvale and Geoff Williams. In collaboration with Thomas Jamieson-Craig, Philippa Garety and Paul McCrone, Institute of Psychiatry.
Wellcome Trust Strategic Translation Award. Duration 3 years: August 2012-July 2015
About 25% of people with schizophrenia continue to suffer from persecutory auditory hallucinations despite drug treatment. Their capacity to work and make relationships is grossly impaired, often for the rest of their life. We have developed and evaluated a novel computer-based therapy which enables each patient to create an avatar of the entity (human or non-human) that they believe is talking to them. The therapist promotes a dialogue between the patient and the avatar in which the avatar progressively comes under the patient's control. The project will refine the system, streamline the technology to make it more user-friendly, and evaluate the system in a randomised controlled trial conducted by an independent team of researchers.
Funded by ESRC. Duration 3 years: March 2013 - February 2016
Accent differences among speakers and listeners can interfere with the ability of individuals to understand each other under noisy conditions. The overall aim of this research is to understand why and how accent differences among British English speakers and listeners can make speech recognition difficult in noisy conditions. We will test whether (i) listeners are multidialectal (i.e., able to understand many different accents, not only ones that are similar to their own), and (ii) whether the difficulties with accents and noise come from early stages of speech processing in the brain, or at later processing stages associated with the recognition of words. Our experiments will involve testing individuals with a wide range of British English accents. We will test their ability to recognize speech spoken in different accents and mixed with noise, acoustically analyze their own speech in terms of accent, and use neurophysiological measures (EEG) to assess different types of speech processing. This interdisciplinary collection of measures and techniques will be used to address questions that are relevant to sociophonetics (e.g., why certain accents become standards, and the impacts of multidialectal experience), speech science (e.g., which factors explain speech intelligibility) and psychology (e.g., how speech is processed in the brain, and the perceptual learning abilities of adults), and do so in a way that will have practical relevance to understanding how people communicate in the UK.
INSPIRE: Investigating speech processing in realistic environments
Duration 4 years: 2012-2016. UCL PI: Paul Iverson
This FP7 Marie Curie Initial Training Network INSPIRE comprises 10 European research institutes and 7 associated partners, and has the aim of training researchers to investigate how people recognise speech in real life under a wide range of conditions that are “non-optimal”.
SHaPS hosts two PhD projects within the network. One project, supervised by Paul Iverson, Valerie Hazan and María Luisa García Lecumberri (University of the Basque Country, Spain), examines how speakers and listeners, particularly second-language learners, modify their phonetic perception and production during speech communication. The second project, supervised by Stuart Rosen, Andrew Faulkner and Torsten Dau (Technical University of Denmark), investigates the ability of normal hearing and hearing impaired listeners to perceive speech targets in the background of maskers that manipulate the presence and absence of periodicity.
You came TO DIE?! Perceptual adaptation to regional accents as a new lens on the puzzle of spoken word recognition
Researchers: Bronwen Evans in collaboration with Cathy Best and Jason Shaw (University of Western Sydney), Jennifer Hay (Christchurch NZL), Gerry Docherty (Newcastle), Paul Foulkes (York).
Funded by the Australian Research Council. Duration 3 years: 2012-2015
The project uses behavioural measures (eye tracking, traditional speech perception tests) to investigate how Australian, New Zealand and UK listeners adapt to each other's accents, with the aim of revealing how we achieve stable word recognition via flexible adjustment to pronunciation differences. Results will inform word recognition theory and illuminate why unfamiliar accents are difficult for language learners and automatic speech recognisers.
Pitch perception and production in children with cochlear implants
Duration 3 years: October 2011-September 2014
Pitch processing is widely thought to play an important role in speech and language development, yet children developing speech and language through auditory input from a cochlear implant (CI) may be impeded because they do not receive sufficiently good pitch information. This project is studying a sample of at least 20 children with cochlear implants aged 6-10 to characterise their pitch processing for speech-like sounds and to investigate the relations of pitch processing to prosodic processing and language development.
Researchers: Valerie Hazan and Michèle Pettinato.
Funded by ESRC. Duration 3 years: June 2011 - May 2014
How do children and teenagers adapt their speech so that they can maintain good communication in challenging listening environments? Are they able to modify their speech specifically to counteract the effects of different types of noise or interference? Are there differences between how adults and children/young people achieve this? What does this ability depend on, and how does it develop? These are the questions we are pursuing with this research project.
Clear speech strategies of adolescents with hearing loss in interactions with their peers
ESRC Linked studentship July 2011-June 2014 PI: Valerie Hazan. Studentship holder: Sonia Granlund
The aim of this studentship is twofold. The first is to investigate the clear speech strategies used by adolescents when interacting with peers with a hearing impairment. The second is to carry out a detailed analysis of the communication strategies used by the adolescents with hearing impairment both when interacting with their hearing and hearing-impaired peers. The project complements research on the clear speech strategies in normally-hearing children aged 9 to 14 carried out in Hazan's concurrent ESRC project on speaker-controlled variability (see above).
Funded by Medical Research Council. Duration 3 years: April 2011 – March 2014.
Most speech is heard in the background of other sounds, particularly other people talking. Listeners with normal hearing have remarkable abilities to filter out extraneous sounds and listen only to the desired talker. In fact, this is known among researchers as the ‘cocktail party effect’, because these abilities are so important at a noisy cocktail party. People with a hearing impairment, however, find this situation very challenging, even though they might function perfectly well with their hearing aids or cochlear implants in a quiet room. The main aim is to more fully explain how people with normal hearing manage to understand speech in the background of other talkers and why people with hearing impairment do not. We are hopeful that this deeper understanding will lead to new ideas for hearing aids that will enable hearing-impaired people to enjoy cocktail parties more!
Many cochlear implant users enjoy improved speech perception, particularly in noise, from using a contralateral hearing aid. However, uncertainty remains regarding the sources of bimodal benefit. The project seeks to further develop our understanding of factors underlying bimodal benefit, helping to establish clinically applicable methods for optimally combining an implant and a contralateral hearing aid and extending the population of implant users able to benefit from residual hearing.
Auditory brainstem responses to speech sounds in quiet and noise: The effects of ageing and hearing impairment
Older people often complain about the difficulty of understanding speech in the presence of background noises, whether they are hearing impaired or not. We are investigating the extent to which the fidelity and distinctiveness of neural representations of speech sounds at the auditory nerve and brainstem level are related to the ability to understand speech in noise, and how they change with hearing impairment and age. Auditory brainstem responses reflect neural activity from the auditory nerve up to the midbrain and retain much of the temporal complexity of speech, so they are well suited to assessing the extent to which important speech features are preserved at this early level of processing. Our goal is to understand the extent to which difficulties in understanding speech in background noises can arise from deficits in auditory encoding at the first neural stages of the auditory pathway, and so provide guidance for future methods of diagnosis and rehabilitation.
Performance-based measures of speech quality
Researchers: Mark Huckvale, Gaston Hilkhuysen, Mark Wibrow. Funded by Research in Motion. Duration: 3 years: 2010-2013.
This project seeks to design and test new methods for the evaluation of speech communication systems. The area of application is for systems which operate at high levels of speech intelligibility or for systems which make little change to intelligibility (such as noise-reduction systems). Conventional intelligibility testing is not appropriate in these circumstances, and existing measures of speech quality are based on subjective opinion rather than speech communication performance.
It is common for people to report requiring more "effort" to perceive noisy speech. If true, then the effectiveness of digital noise reduction (DNR) could be measured by the reduction in "listening effort" it provides: a "higher quality" system should provide a greater reduction in listening effort compared to a "lower quality" system.
Traditional evaluations of auditory communication technologies (such as DNR systems) have relied on intelligibility scores (which often fail to distinguish between systems) and speech quality ratings (which rely on listener opinion).
Increased listening effort, however, is associated with increased load on working memory which, in turn, can affect the listener's memory and attention processes. This project therefore aims to establish novel objective performance measures that target these processes, going beyond traditional intelligibility and speech quality scores to establish listening effort as an evaluation criterion for all auditory communication technologies.