Speech, Hearing and Phonetic Sciences
Current Funded Research (most recent first) | Completed Projects
Speech communication in older adults: an acoustic and perceptual investigation
Researchers: Valerie Hazan, Outi Tuomainen. ESRC Project Grant August 2014-July 2017
We propose to gain a comprehensive account of older people's speech production and perception in situations involving communication with another individual. Adults with age-related hearing loss and the rarer group of older adults with normal hearing will be included, as well as younger adult controls. In Study 1, communication with another speaker, while reading sentences or completing a problem-solving task, will either be in good listening conditions, where both speakers hear each other normally, or in adverse conditions, where the participant has to get their message across to another speaker who has a simulated hearing loss or where both are speaking against a noisy background. These comparisons will enable us to gauge the degree to which an older person is able to adapt their speech to overcome difficult listening conditions, a skill which is of paramount importance in everyday speech communication. We will obtain high-quality digital recordings of the participants' speech and will also, via sensors placed on the neck, record information about their vocal fold vibration, which determines the quality of their voice. Video recordings will also be analysed to investigate whether older speakers make use of eye gaze and head gestures to signal aspects of discourse such as turn-taking and back-channelling (e.g., saying 'okay' to signal understanding) to the same degree as younger speakers. In Study 2, older and younger listeners with normal and impaired hearing will be presented with some of the sentence materials recorded in Study 1 by all speaker groups in good and adverse listening conditions. Tests will be presented in both auditory-alone and audiovisual conditions.
Intelligibility tests will be run to see what impact age, hearing status and visual cues have on speech understanding, and to test whether the 'clear speech' adaptations made by older speakers to counter the effects of poor communication conditions give the same benefit as those made by younger speakers. Sentence recall tests will also be run to investigate whether listening effort is reduced when listening to 'clear speech'.
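As a rough illustration of how intelligibility tests of this kind are typically scored (a minimal sketch, not the project's actual protocol — the helper name and keyword sets are hypothetical), each sentence carries a few scored keywords and a listener's response is checked for each of them:

```python
def keyword_score(response, keywords):
    """Proportion of target keywords a listener reported correctly.

    response: the listener's typed or transcribed report of the sentence.
    keywords: the scored keywords for that sentence.
    """
    # Normalise the response: strip common punctuation, ignore case.
    heard = {w.strip(".,!?").lower() for w in response.split()}
    hits = sum(1 for k in keywords if k.lower() in heard)
    return hits / len(keywords)

# e.g. scored keywords for the sentence "the BOY ate the RED apple"
score = keyword_score("a boy ate a green apple", ["boy", "ate", "red", "apple"])
# score == 0.75 (three of the four keywords were reported)
```

Averaging such scores over many sentences per condition gives the percentage-correct intelligibility measures that can then be compared across age, hearing-status and audiovisual conditions.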
This project will lead to a better understanding of the effects of ageing on speech communication and of the various factors contributing to potentially degraded speech communication in a population of 'healthy aged' individuals. These benchmarks will be of use to practitioners such as speech and language therapists and audiologists who work on aspects of communication with older people who have health complications. A better understanding of the communication difficulties that older individuals experience, and of their strategies for overcoming these difficulties, will also assist professionals such as social workers and care professionals who work to improve quality of life for older people, as well as developers of speech technology devices for telemedicine and remote monitoring. Importantly, this research will also contribute to our basic understanding of the development of speech perception and production across the lifespan.
Listening effort in users of cochlear implants
Researchers: Stuart Rosen, Debi Vickers, Helen Willis. Action on Hearing Loss PhD studentship to Helen Willis. October 2014 - September 2017
A common complaint amongst the hearing impaired is that of increased listening effort (LE: the cognitive resource necessary for speech understanding). Increased LE has debilitating long-term health consequences. This issue has only just begun to be explored in the cochlear implant (CI) population. Furthermore, the current emphasis of clinical assessment during CI rehabilitation is on speech comprehension. Considering LE's impact on patients' physical wellbeing, a clinical measurement of LE is essential. By investigating four participant groups (newly implanted CI recipients to be studied over the course of 9 months; experienced CI recipients after tuning; CI recipients where remapping has been induced; and controls listening to CI simulations), the impact of LE during key phases of CI rehabilitation will be assessed. This will assist in evaluating which behavioural measure of LE (dual-task paradigm or subjective ratings) is the most sensitive, validated against a physiological measure of LE: pupil dilation. From this, a clinical test can be developed. Having a clinical test of LE would ultimately promote better rehabilitation outcomes, for physical wellbeing as well as speech comprehension, because there would then be an accurate measurement of the cognitive cost of CI recipients' speech processing and of their capacity to improve.
Computer-based connected text training of speech perception for cochlear implant users
Researchers: Stuart Rosen, Tim Green, Andy Faulkner. Action on Hearing Loss International Project Grant - May 2014-April 2017
While CI users’ speech recognition typically improves with everyday listening it is likely that in many cases, this process can be facilitated by appropriate training. The aim of the proposed research is to investigate the extent to which formal training can facilitate the development of CI users’ speech understanding. Formal training may have several advantages over learning through everyday experience, including providing listening conditions and speech materials that are optimised for promoting learning, and giving the opportunity to enhance listening skills without the constraints and risks associated with everyday communication. Improvements in such skills over the course of relatively short-term formal training may have benefits beyond immediate improvements in speech understanding by, for example, imparting increased confidence to engage more fully with the wider world.
Training will be carried out at home on tablet computers and will use recordings of stories divided up into phrases from which the listener selects target words from amongst similar alternatives. This approach is designed to target different listening skills, including both distinguishing between similar elements of speech sounds and using contextual information to enhance understanding. The use of connected narrative materials may enhance the motivation to persist with training. Different implementations of the same general approach will be targeted at CI users with different initial levels of speech understanding. The effectiveness of training will be examined with a wide range of speech perception tests and with questionnaire-based measures of perceived benefit, allowing assessment both of particular abilities improved by training and the extent to which any improvements translate into meaningful benefits in everyday communication. If proven effective, computer-based training requiring minimal supervision would be a highly cost-effective intervention of benefit to many CI users.
Mapping the development of phonetic perception
Researchers: Paul Iverson, Jyrki Tuomainen, Kathleen McCarthy, Katrin Skoruppa (University of Essex). ESRC project grant. Duration 3 years: January 2014 - December 2016
Infants are born with perceptual abilities that allow them to hear acoustic differences between the speech sounds used in many languages. During the first year of life, they become more specialised, increasing their ability to distinguish native-language speech sounds and decreasing their ability to distinguish some non-native speech sounds. One limitation of existing infant-testing methodologies is that it is only feasible to test isolated speech contrasts (e.g., the vowels in 'beet' vs. 'bit'), which gives a rather narrow view of perceptual development (i.e., one cannot see changes in how individuals process the entire vowel system of a language). We believe that we have found a way to produce a broader view of development by using the Acoustic Change Complex (ACC) of EEG recordings (electrodes on the scalp measuring electrical activity in the brain) to provide a time-efficient measure of auditory perceptual sensitivity, and then using multidimensional scaling to produce perceptual maps based on many stimulus pairs. We will develop this method in tests of 7-month-old infants and in comparisons of adults with different languages. We will then compare these perceptual maps for 4-, 7- and 11-month-olds as well as 4-year-olds, in order to chart how the perception of vowels and fricatives changes during early development.
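The multidimensional-scaling step can be sketched as follows. This is a generic illustration, not the project's analysis pipeline: it assumes a matrix of pairwise perceptual dissimilarities (e.g., derived from ACC response magnitudes) and recovers 2-D map coordinates using classical (Torgerson) MDS:

```python
import numpy as np

def perceptual_map(dissim, n_dims=2):
    """Classical (Torgerson) multidimensional scaling.

    dissim: square symmetric matrix of pairwise dissimilarities
    between stimuli. Returns an (n_items, n_dims) array of
    coordinates whose distances approximate the input values.
    """
    D = np.asarray(dissim, dtype=float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)            # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_dims]   # keep the largest components
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))

# Toy example: three "vowels" where A and B are perceptually close
# and both are far from C; the 2-D map reproduces these relations.
D = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 4.0],
              [4.0, 4.0, 0.0]])
coords = perceptual_map(D)
```

Items that listeners (or their ACC responses) treat as similar end up close together in the map, so comparing maps across age groups shows how the perceptual space reorganises during development.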
The role of speech motor resonances in spoken language processing
Researchers: Patti Adank, Joseph Devlin (CPB)
Leverhulme Trust project grant. Duration 3 years: January 2014 - December 2016
This project investigates the relationship between speech production and speech perception by testing how production mechanisms support perception in everyday listening situations, such as hearing someone speak in an unfamiliar accent or understanding speech in background noise. The presence of such acoustic variation makes the speech signal more difficult to understand; yet listeners normally extract the linguistic message relatively effortlessly. Successful speech perception relies crucially on the ability to deal effectively with acoustic variation in speech, yet the mechanisms underlying this ability are poorly understood. We will use a combination of behavioural experiments, neurophysiological experiments (involving Motor Evoked Potentials), and Transcranial Magnetic Stimulation to investigate how comprehension of accented speech relies on production mechanisms.
Researchers at UCL: Julian Leff (Emeritus Professor, UCL Mental Health Sciences), Mark Huckvale and Geoff Williams. In collaboration with Thomas Jamieson-Craig, Philippa Garety and Paul McCrone, Institute of Psychiatry.
Wellcome Trust Strategic Translation Award. Duration 3 years: August 2012-July 2015
About 25% of people with schizophrenia continue to suffer with persecutory auditory hallucinations despite drug treatment. Their capacity to work and make relationships is grossly impaired, often for the rest of their life. We have developed and evaluated a novel therapy based on computer technology which enables each patient to create an avatar of the entity (human or non-human) that they believe is talking to them. The therapist promotes a dialogue between the patient and the avatar in which the avatar progressively comes under the patient’s control. The project will refine the system, streamline the technology to make it more user-friendly and evaluate the system by a randomised controlled trial conducted by an independent team of researchers.
Funded by ESRC. Duration 3 years: March 2013 - February 2016
Accent differences among speakers and listeners can interfere with the ability of individuals to understand each other under noisy conditions. The overall aim of this research is to understand why and how accent differences among British English speakers and listeners can make speech recognition difficult in noisy conditions. We will test whether (i) listeners are multidialectal (i.e., able to understand many different accents, not only ones that are similar to their own), and (ii) whether the difficulties with accents and noise come from early stages of speech processing in the brain, or at later processing stages associated with the recognition of words. Our experiments will involve testing individuals with a wide range of British English accents. We will test their ability to recognize speech spoken in different accents and mixed with noise, acoustically analyze their own speech in terms of accent, and use neurophysiological measures (EEG) to assess different types of speech processing. This interdisciplinary collection of measures and techniques will be used to address questions that are relevant to sociophonetics (e.g., why certain accents become standards, and the impacts of multidialectal experience), speech science (e.g., which factors explain speech intelligibility) and psychology (e.g., how speech is processed in the brain, and the perceptual learning abilities of adults), and do so in a way that will have practical relevance to understanding how people communicate in the UK.
INSPIRE: Investigating speech processing in realistic environments.
Duration 4 years: 2012-2016. UCL PI: Paul Iverson
This FP7 Marie Curie Initial Training Network INSPIRE comprises 10 European research institutes and 7 associated partners, and has the aim of training researchers to investigate how people recognise speech in real life under a wide range of conditions that are “non-optimal”.
SHaPS hosts two PhD projects within the network. One project, supervised by Paul Iverson, Valerie Hazan and María Luisa García Lecumberri (University of the Basque Country, Spain), examines how speakers and listeners, particularly second-language learners, modify their phonetic perception and production during speech communication. The second project, supervised by Stuart Rosen, Andrew Faulkner and Torsten Dau (Technical University of Denmark), investigates the ability of normal hearing and hearing impaired listeners to perceive speech targets in the background of maskers that manipulate the presence and absence of periodicity.
You came TO DIE?! Perceptual adaptation to regional accents as a new lens on the puzzle of spoken word recognition
Researchers: Bronwen Evans in collaboration with Cathy Best and Jason Shaw (University of Western Sydney), Jennifer Hay (Christchurch, New Zealand), Gerry Docherty (Newcastle), Paul Foulkes (York).
Funded by the Australian Research Council. Duration 3 years: 2012-2015
The project uses behavioural measures (eye tracking, traditional speech perception tests) to investigate how Australian, New Zealand and UK listeners adapt to each other's accents, with the aim of revealing how we achieve stable word recognition via flexible adjustment to pronunciation differences. Results will inform word recognition theory and illuminate why unfamiliar accents are difficult for language learners and automatic speech recognisers.
Pitch perception and production in children with cochlear implants
Duration 3 years: October 2011-September 2014
Pitch processing is widely thought to play an important role in speech and language development, yet children developing speech and language through auditory input from a CI may be impeded in this because they do not receive sufficiently good pitch information. This project is studying a sample of at least 20 children with cochlear implants aged 6-10 to characterise their pitch processing for speech-like sounds and to investigate the relations of pitch processing to prosodic processing and language development.
Researchers: Valerie Hazan and Michèle Pettinato.
Funded by ESRC. Duration 3 years: June 2011 - May 2014
How do children and teenagers adapt their speech so that they can maintain good communication in challenging listening environments? Are they able to modify their speech specifically to counteract the effects of different types of noise or interference? Are there differences between how adults and children/young people achieve this? What does this ability depend on, and how does it develop? These are the questions we are pursuing with this research project.
Clear speech strategies of adolescents with hearing loss in interactions with their peers
ESRC Linked studentship July 2011-June 2014 PI: Valerie Hazan. Studentship holder: Sonia Granlund
The aim of this studentship is twofold: first, to investigate the clear speech strategies used by adolescents when interacting with peers with a hearing impairment; second, to carry out a detailed analysis of the communication strategies used by the adolescents with hearing impairment when interacting both with hearing and with hearing-impaired peers. The project complements research on the clear speech strategies of normally-hearing children aged 9 to 14 carried out in Hazan's concurrent ESRC project on speaker-controlled variability (see above).
Funded by Medical Research Council. Duration 3 years: April 2011 – August 2014.
Most speech is heard in the background of other sounds, particularly other people talking. Listeners with normal hearing have remarkable abilities to filter out extraneous sounds and listen only to the desired talker. In fact, this is known among researchers as the ‘cocktail party effect’, because these abilities are so important at a noisy cocktail party. People with a hearing impairment, however, find this situation very challenging, even though they might function perfectly well with their hearing aids or cochlear implants in a quiet room. The main aim is to more fully explain how people with normal hearing manage to understand speech in the background of other talkers and why people with hearing impairment do not. We are hopeful that this deeper understanding will lead to new ideas for hearing aids that will enable hearing-impaired people to enjoy cocktail parties more!
Auditory brainstem responses to speech sounds in quiet and noise: The effects of ageing and hearing impairment
Older people often complain about the difficulty of understanding speech in the presence of background noises, whether they are hearing impaired or not. We are investigating the extent to which the fidelity and distinctiveness of neural representations of speech sounds at the auditory nerve and brainstem level are related to abilities to understand speech in noise, and how they change with hearing impairment and age. Auditory brainstem responses reflect neural activity from the auditory nerve up to the midbrain and retain much of the temporal complexity of speech, so are well suited to assess the extent to which important speech features are preserved at this early level of processing. Our goal is to understand the extent to which difficulties in understanding speech in background noises can arise from deficits in auditory encoding at the first neural stages of the auditory pathway, and so provide guidance to future methods of diagnosis and rehabilitation.