Speech, Hearing and Phonetic Sciences
Current Funded Research (most recent first) | Completed Projects
Computer-based system for avatar therapy: effectiveness in a randomised controlled trial
Researchers at UCL: Julian Leff (Emeritus Professor, UCL Mental Health Sciences), Mark Huckvale and Geoff Williams. In collaboration with Thomas Jamieson-Craig, Philippa Garety and Paul McCrone, Institute of Psychiatry. Funded by a Wellcome Trust Strategic Translation Award. Duration 3 years: August 2012 - July 2015
About 25% of people with schizophrenia continue to suffer persecutory auditory hallucinations despite drug treatment, and their capacity to work and form relationships is grossly impaired, often for the rest of their lives. We have developed and evaluated a novel computer-based therapy that enables each patient to create an avatar of the entity (human or non-human) that they believe is talking to them. The therapist promotes a dialogue between the patient and the avatar in which the avatar progressively comes under the patient's control. The project will refine the system, streamline the technology to make it more user-friendly, and evaluate it in a randomised controlled trial conducted by an independent team of researchers. It builds on prior work by the same group.
INSPIRE: Investigating speech processing in realistic environments
Duration 4 years: 2012-2016. UCL PI: Paul Iverson
This FP7 Marie Curie Initial Training Network INSPIRE comprises 10 European research institutes and 7 associated partners, and has the aim of training researchers to investigate how people recognise speech in real life under a wide range of conditions that are “non-optimal”.
SHaPS hosts two PhD projects within the network. One project, supervised by Paul Iverson, Valerie Hazan and María Luisa García Lecumberri (University of the Basque Country, Spain), examines how speakers and listeners, particularly second-language learners, modify their phonetic perception and production during speech communication. The second project, supervised by Stuart Rosen, Andrew Faulkner and Torsten Dau (Technical University of Denmark), investigates the ability of normal hearing and hearing impaired listeners to perceive speech targets in the background of maskers that manipulate the presence and absence of periodicity.
You came TO DIE?! Perceptual adaptation to regional accents as a new lens on the puzzle of spoken word recognition
Researchers: Bronwen Evans in collaboration with Cathy Best and Jason Shaw (University of Western Sydney), Jennifer Hay (Christchurch, NZ), Gerry Docherty (Newcastle), Paul Foulkes (York). Funded by the Australian Research Council. Duration 3 years: 2012-2015
The project uses behavioural measures (eye tracking and traditional speech perception tests) to investigate how Australian, New Zealand and UK listeners adapt to each other's accents, with the aim of revealing how we achieve stable word recognition via flexible adjustment to pronunciation differences. The results will inform word recognition theory and illuminate why unfamiliar accents are difficult for language learners and for automatic speech recognisers.
Pitch perception and production in children with cochlear implants
Pitch processing is widely thought to play an important role in speech and language development, yet children developing speech and language through auditory input from a cochlear implant may be impeded because they do not receive sufficiently good pitch information. This project is studying a sample of at least 20 children with cochlear implants aged 6-10, characterising their pitch processing for speech-like sounds and investigating how pitch processing relates to prosodic processing and language development.
Researchers: Valerie Hazan and Michèle Pettinato. Funded by ESRC. Duration 3 years: June 2011 - May 2014
How do children and teenagers adapt their speech so that they can maintain good communication in challenging listening environments? Are they able to modify their speech specifically to counteract the effects of different types of noise or interference? Are there differences between how adults and children/young people achieve this? What does this ability depend on, and how does it develop? These are the questions we are pursuing with this research project.
Clear speech strategies of adolescents with hearing loss in interactions with their peers
ESRC Linked Studentship, July 2011 - June 2014. PI: Valerie Hazan. Studentship holder: Sonia Granlund
The aim of this studentship is twofold. The first is to investigate the clear speech strategies used by adolescents when interacting with peers with a hearing impairment. The second is to carry out a detailed analysis of the communication strategies used by the adolescents with hearing impairment both when interacting with their hearing and hearing-impaired peers. The project complements research on the clear speech strategies in normally-hearing children aged 9 to 14 carried out in Hazan's concurrent ESRC project on speaker-controlled variability (see above).
Most speech is heard against a background of other sounds, particularly other people talking. Listeners with normal hearing have a remarkable ability to filter out extraneous sounds and attend only to the desired talker; researchers call this the 'cocktail party effect', because the ability is so important at a noisy cocktail party. People with a hearing impairment, however, find this situation very challenging, even though they might function perfectly well with their hearing aids or cochlear implants in a quiet room. The main aim of this project is to explain more fully how people with normal hearing manage to understand speech in the background of other talkers, and why people with hearing impairment do not. We are hopeful that this deeper understanding will lead to new ideas for hearing aids that will enable hearing-impaired people to enjoy cocktail parties more!
Modelling speech prosody based on communicative function and articulatory dynamics
Prosody is an important aspect of speech that contributes to its expressiveness and intelligibility, and quantitative modelling of speech prosody is key to the advancement of speech science and technology. Building on a previous successful research collaboration, this project is a major systematic effort to develop an "articulatory-functional" quantitative model of speech prosody and to integrate meaningful communicative functions into it.
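The flavour of such an "articulatory-functional" model can be conveyed with a toy sketch in the target-approximation style: the f0 contour starts from the previous syllable's value and decays toward an underlying linear pitch target at a rate governed by articulatory strength. This is only an illustrative simplification, not the project's actual model; the function name and all parameter values below are assumptions made for the example.

```python
import math

def approach_target(duration_s, slope, height, f0_init, strength=20.0, fs=100):
    """Toy target-approximation sketch: f0 decays exponentially from its
    initial value toward a linear pitch target with the given slope (Hz/s)
    and height (Hz), sampled at fs samples per second."""
    n = int(duration_s * fs)
    times = [i / fs for i in range(n)]
    f0 = [slope * t + height + (f0_init - height) * math.exp(-strength * t)
          for t in times]
    return times, f0

# A falling contour: start at 180 Hz and approach a static 120 Hz target.
times, f0 = approach_target(0.3, slope=0.0, height=120.0, f0_init=180.0)
```

With a non-zero slope the same equation yields rising or falling ramps, which is the sense in which a handful of target parameters can stand in for a communicative function such as a lexical tone or a focus-marking pitch movement.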
Many cochlear implant users enjoy improved speech perception, particularly in noise, from using a contralateral hearing aid. However, uncertainty remains regarding the sources of bimodal benefit. The project seeks to further develop our understanding of factors underlying bimodal benefit, helping to establish clinically applicable methods for optimally combining an implant and a contralateral hearing aid and extending the population of implant users able to benefit from residual hearing.
Auditory brainstem responses to speech sounds in quiet and noise: The effects of ageing and hearing impairment
Older people often complain about the difficulty of understanding speech in the presence of background noises, whether they are hearing impaired or not. We are investigating the extent to which the fidelity and distinctiveness of neural representations of speech sounds at the auditory nerve and brainstem level are related to abilities to understand speech in noise, and how they change with hearing impairment and age. Auditory brainstem responses reflect neural activity from the auditory nerve up to the midbrain and retain much of the temporal complexity of speech, so they are well suited to assessing the extent to which important speech features are preserved at this early level of processing. Our goal is to understand the extent to which difficulties in understanding speech in background noises can arise from deficits in auditory encoding at the first neural stages of the auditory pathway, and so provide guidance to future methods of diagnosis and rehabilitation.
Performance-based measures of speech quality
Researchers: Mark Huckvale, Gaston Hilkhuysen, Mark Wibrow. Funded by Research in Motion. Duration 3 years: 2010-2013
This project seeks to design and test new methods for the evaluation of speech communication systems. The area of application is for systems which operate at high levels of speech intelligibility or for systems which make little change to intelligibility (such as noise-reduction systems). Conventional intelligibility testing is not appropriate in these circumstances, and existing measures of speech quality are based on subjective opinion rather than speech communication performance.
One of the key factors that determines speech intelligibility under challenging conditions is the difference between the accents of the talker and listener. For example, normal-hearing listeners can be accurate at recognizing a wide range of accents in quiet, but in noise they are much poorer (e.g., 20 percentage points less accurate) if they try to understand native (L1) or non-native (L2) accented speech that does not closely match their own accent. It is largely unknown exactly why and how these accent effects occur. The aim of this PhD research is to provide a more detailed account of this talker-listener interaction in order to establish the underlying factors involved in L1 and L2 speech communication in noise for normal-hearing and hearing-impaired populations.
Page last modified on 11 Dec 12 14:15 by Andrew Faulkner