Speech, Hearing and Phonetic Sciences
Current Funded Research (most recent first)
Computer-based system for avatar therapy: effectiveness in a randomised controlled trial
Researchers at UCL: Julian Leff (Emeritus Professor, UCL Mental Health Sciences), Mark Huckvale and Geoff Williams. In collaboration with Thomas Jamieson-Craig, Philippa Garety and Paul McCrone, Institute of Psychiatry. Wellcome Trust Strategic Translation Award.
Duration 3 years: August 2012-July 2015
About 25% of people with schizophrenia continue to suffer from persecutory auditory hallucinations despite drug treatment. Their capacity to work and make relationships is grossly impaired, often for the rest of their lives. We have developed and evaluated a novel computer-based therapy which enables each patient to create an avatar of the entity (human or non-human) that they believe is talking to them. The therapist promotes a dialogue between the patient and the avatar in which the avatar progressively comes under the patient’s control. The project will refine the system, streamline the technology to make it more user-friendly, and evaluate it in a randomised controlled trial conducted by an independent team of researchers.
Understanding British Accents in Noise
Researchers: Paul Iverson, Bronwen Evans, Mel Pinet, Alex Leff, Jyrki Tuomainen.
Funded by ESRC. Duration 3 years: March 2013 - February 2016
INSPIRE: Investigating speech processing in realistic environments
Duration 4 years: 2012-2016. UCL PI: Paul Iverson
This FP7 Marie Curie Initial Training Network comprises 10 European research institutes and 7 associated partners, and aims to train researchers to investigate how people recognise speech in real life under a wide range of “non-optimal” conditions.
SHaPS hosts two PhD projects within the network. One project, supervised by Paul Iverson, Valerie Hazan and María Luisa García Lecumberri (University of the Basque Country, Spain), examines how speakers and listeners, particularly second-language learners, modify their phonetic perception and production during speech communication. The second project, supervised by Stuart Rosen, Andrew Faulkner and Torsten Dau (Technical University of Denmark), investigates the ability of normal hearing and hearing impaired listeners to perceive speech targets in the background of maskers that manipulate the presence and absence of periodicity.
You came TO DIE?! Perceptual adaptation to regional accents as a new lens on the puzzle of spoken word recognition
Researchers: Bronwen Evans in collaboration with Cathy Best and Jason Shaw (University of Western Sydney), Jennifer Hay (Christchurch, New Zealand), Gerry Docherty (Newcastle), Paul Foulkes (York).
Funded by the Australian Research Council. Duration 3 years: 2012-2015
The project uses behavioural measures (eye tracking, traditional speech perception tests) to investigate how Australian, New Zealand and UK listeners adapt to each other’s accents, with the aim of revealing how we achieve stable word recognition through flexible adjustment to pronunciation differences. The results will inform theories of word recognition and illuminate why unfamiliar accents are difficult for language learners and automatic speech recognisers.
Pitch perception and production in children with cochlear implants
Deafness Research UK studentship to Lucy Carroll; supervisors Andrew Faulkner and Debi Vickers (UCL Ear Institute).
Duration 3 years: October 2011-September 2014
Pitch processing is widely thought to play an important role in speech and language development, yet children developing speech and language through auditory input from a cochlear implant may be impeded because they do not receive sufficiently good pitch information. This project is studying a sample of at least 20 children with cochlear implants aged 6-10 to characterise their pitch processing for speech-like sounds and to investigate how pitch processing relates to prosodic processing and language development.
Speaker-controlled variability in children's speech in interaction
Researchers: Valerie Hazan and Michèle Pettinato.
Funded by ESRC. Duration 3 years: June 2011 - May 2014
How do children and teenagers adapt their speech so that they can maintain good communication in challenging listening environments? Are they able to modify their speech specifically to counteract the effects of different types of noise or interference? Are there differences between how adults and children/young people achieve this? What does this ability depend on, and how does it develop? These are the questions we are pursuing with this research project.
Clear speech strategies of adolescents with hearing loss in interactions with their peers
ESRC Linked Studentship. Duration 3 years: July 2011 - June 2014. PI: Valerie Hazan. Studentship holder: Sonia Granlund
The aim of this studentship is twofold: first, to investigate the clear speech strategies used by adolescents when interacting with peers with a hearing impairment; and second, to carry out a detailed analysis of the communication strategies used by the adolescents with hearing impairment when interacting with both their normally-hearing and their hearing-impaired peers. The project complements research on clear speech strategies in normally-hearing children aged 9 to 14 carried out in Hazan's concurrent ESRC project on speaker-controlled variability (see above).
Perceiving speech in single and multi-talker babble in normal and impaired hearing
Researchers: Stuart Rosen, Tim Green.
Funded by Medical Research Council. Duration 3 years: April 2011 – March 2014.
Most speech is heard against a background of other sounds, particularly other people talking. Listeners with normal hearing have a remarkable ability to filter out extraneous sounds and listen only to the desired talker; this ability is known among researchers as the ‘cocktail party effect’ because it is so important at a noisy cocktail party. People with a hearing impairment, however, find this situation very challenging, even though they might function perfectly well with their hearing aids or cochlear implants in a quiet room. The main aim of this project is to explain more fully how people with normal hearing manage to understand speech against a background of other talkers, and why people with hearing impairment do not. We hope that this deeper understanding will lead to new ideas for hearing aids that will enable hearing-impaired people to enjoy cocktail parties more!
The bases of benefit from bimodal combinations of cochlear implant and hearing aid
Researchers: Andrew Faulkner, Tim Green, Marine Ardoint, Stuart Rosen.
Funded by Action on Hearing Loss (formerly RNID). Duration 3 years: November 2010 - September 2013
Many cochlear implant users enjoy improved speech perception, particularly in noise, from using a contralateral hearing aid. However, uncertainty remains regarding the sources of bimodal benefit. The project seeks to further develop our understanding of factors underlying bimodal benefit, helping to establish clinically applicable methods for optimally combining an implant and a contralateral hearing aid and extending the population of implant users able to benefit from residual hearing.
Auditory brainstem responses to speech sounds in quiet and noise: The effects of ageing and hearing impairment
Action on Hearing Loss studentship to Tim Schoof. Supervisors: Stuart Rosen and Ifat Yasin (UCL Ear Institute). Duration 4 years: October 2010 - September 2014.
Older people often complain about the difficulty of understanding speech in the presence of background noises, whether they are hearing impaired or not. We are investigating the extent to which the fidelity and distinctiveness of neural representations of speech sounds at the auditory nerve and brain stem level are related to abilities to understand speech in noise, and how they change with hearing impairment and age. Auditory brainstem responses reflect neural activity from the auditory nerve up to the midbrain and retain much of the temporal complexity of speech, so are well suited to assess the extent to which important speech features are preserved at this early level of processing. Our goal is to understand the extent to which difficulties in understanding speech in background noises can arise from deficits in auditory encoding at the first neural stages of the auditory pathway, and so provide guidance to future methods of diagnosis and rehabilitation.
Performance-based measures of speech quality
Researchers: Mark Huckvale, Gaston Hilkhuysen, Mark Wibrow. Funded by Research in Motion. Duration 3 years: 2010-2013.
This project seeks to design and test new methods for the evaluation of speech communication systems. The area of application is for systems which operate at high levels of speech intelligibility or for systems which make little change to intelligibility (such as noise-reduction systems). Conventional intelligibility testing is not appropriate in these circumstances, and existing measures of speech quality are based on subjective opinion rather than speech communication performance.
It is common for people to report requiring more "effort" to perceive noisy speech. If true, then the effectiveness of digital noise reduction (DNR) could be measured by the reduction in "listening effort" it provides: a "higher quality" system should provide a greater reduction in listening effort compared to a "lower quality" system.
Traditional evaluations of auditory communication technologies (such as DNR systems) have relied on intelligibility scores (which often fail to distinguish between systems) and speech quality ratings (which rely on listener opinion).
Increased listening effort is associated with an increased load on working memory, which in turn can affect the listener's memory and attention processes. This project therefore aims to establish novel objective performance measures that target these processes, going beyond traditional intelligibility and speech quality scores and establishing listening effort as an evaluation criterion for all auditory communication technologies.

