
UCL Psychology and Language Sciences


Completed projects

Listening effort and multilingual speech communication: Neural measures of auditory and lexical processing by adults and older children

Researchers: Paul Iverson, Stuart Rosen, Jieun Song. ESRC project grant. Duration 3 years: October 2017 - September 2020

People feel like they need to "listen harder" when communicating in a second language, but it isn't clear how this effort changes the brain processes involved in recognising speech. Our initial research has produced a surprising finding: when we tested people listening to a talker in a noisy background (i.e., with a distracting talker), we found that auditory areas of the brain are better at picking out the target talker when people are listening to a second language than to their first. We did this by recording neural activity (electroencephalography; EEG) and measuring how it becomes entrained to the acoustics of speech. Although people would normally be expected to perform better when listening to their first language, we think that second-language listeners had more selective auditory processing because of their additional listening effort. We found related effects for neural measures of word recognition in the same task, and think that we have identified mechanisms that allow second-language learners to partially compensate for their speech recognition difficulties.

In this grant project, we will expand our investigation in a series of studies that manipulate the acoustics of speech, and compare how speech is recognised in first and second languages by speakers of English and Korean. Furthermore, we will test adults who learned both languages at the same time when they were young children, adults who learned their second language later in life, and older children who are in the process of learning both languages. Our goals are to understand how people can use listening effort to compensate for their difficulties with second-language speech, and to examine how this ability develops and relates to proficiency. This work is important for understanding how people apply their processes and structures for language during everyday speech communication, and is relevant to a wide range of difficult listening conditions (e.g., hearing impairment). The work will also advance our scientific understanding of how new measures of neural entrainment for speech relate to practical aspects of speech recognition.
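As a rough illustration of the entrainment measure, the sketch below correlates the amplitude envelope of a speech signal with simultaneously recorded EEG at a range of lags. This is a minimal sketch only: the signals and sampling rates are invented stand-ins, and published analyses of this kind typically use regularised temporal response functions rather than a single lagged correlation.

```python
# Illustrative sketch of envelope entrainment: correlate the amplitude
# envelope of a speech signal with EEG recorded from a listener.
# All signals here are random stand-ins for real recordings.
import numpy as np
from scipy.signal import hilbert, resample

fs_audio, fs_eeg = 16000, 128              # assumed sampling rates (Hz)
speech = np.random.randn(fs_audio * 60)    # stand-in for a 60 s speech recording
eeg = np.random.randn(fs_eeg * 60)         # stand-in for one EEG channel

# Amplitude envelope of the speech, downsampled to the EEG rate
envelope = np.abs(hilbert(speech))
envelope = resample(envelope, len(eeg))

def lagged_corr(x, y, lag):
    """Correlation between x and y with y delayed by `lag` samples."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    return np.corrcoef(x, y)[0, 1]

# Entrained EEG tracks the envelope with a peak correlation at a short
# positive (brain-after-sound) lag.
lags = range(0, 40)                        # 0-40 samples ~ 0-310 ms at 128 Hz
r = [lagged_corr(envelope, eeg, k) for k in lags]
best = max(lags, key=lambda k: r[k])
print(f"peak correlation {r[best]:.3f} at lag {best / fs_eeg * 1000:.0f} ms")
```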

Speech masking effects in speech communication across the lifespan

Researchers: Valerie Hazan, Outi Tuomainen, Stuart Rosen. ESRC project grant. Duration 3 years: June 2017 – May 2020

Our ability to communicate successfully with others can be strongly affected by the presence of noise and other voices in the environment, and children and older adults are more greatly affected than young adults in these situations. Some of the disruption is due to physical masking by interfering sounds (energetic masking, EM), but if the disrupting sound can be understood, this causes further difficulty (informational masking, IM). Previous work suggests that informational masking causes relatively more disruption for children and older adults than for young adults, but these findings are based on laboratory tests that are far from realistic communication.

Although the impact of adverse conditions on speech communication has been studied in different age groups, no study to date has taken a full lifespan view, looking at the relative impact of IM and EM on participants aged from 8 to 85 using a common experimental design. Also, many studies have focused on the impact of EM and IM on the perception of recorded sentences or words; this ignores the fact that speakers make dynamic adaptations during speech communication to counter the effects of masking. We evaluate the impact of adverse conditions in an interactive task, using measures that reflect speech communication efficiency (e.g., task transaction time, rate of dysfluencies). Finally, there is little evidence to date as to whether laboratory-based evaluations reflect the level of difficulty experienced in everyday life. The proposed project will, for the first time, relate evaluations of speech communication difficulty in adverse listening conditions as measured in the laboratory to real-life ratings of difficulty collected in real time over a two-week period. It will also test whether informational masking in particular causes greater interference for some age groups (e.g., children, older adults), and whether the underlying reasons for the interference differ between children and adults.

ENRICH European Training Network

UCL PI: Valerie Hazan; Key researchers: Patti Adank, Paul Iverson. EU Marie Curie ETN grant. Duration 4 years: October 2016 – September 2020

Speech is a hugely efficient means of communication: a reduced capacity in listening or speaking creates a significant barrier to social inclusion at all points through the lifespan, in education, work and at home. Hearing aids and speech synthesis can help address this reduced capacity but their use imposes greater listener effort. The fundamental objective of the ENRICH network is to modify or augment speech with additional information to make it easier to process. The network will train 14 early-stage researchers (ESRs) and give them not just the necessary cross-disciplinary knowledge and research skills, but also experience of entrepreneurship and technology transfer so they can translate research findings into products and services that will help everybody communicate successfully. Two ESRs are based at UCL: Anna Exenberger and Max Paulus.

Attentional effects in spoken word recognition: An event-related potential study

Researchers: Jyrki Tuomainen, Faith Chiu. Leverhulme Small Research Grants project. Duration: October 2017 - December 2018

The ability to recognise spoken words is a complex cognitive process, currently modelled as a hierarchy in which the acoustic signal is transformed into a neural code for accessing phonetic/phonological, word-form (lexical), and meaning-level representations. Less is known about the role of attention in this process. Our focus is on whether auditory attention is important for accessing the long-term memory representations of word form (lexical level) and meaning (semantics).

Participants perform an auditory lexical decision task, and we obtain an accurate measure of the focus of auditory attention by using dichotic presentation of the stimuli (two different stimuli presented at the same time to separate ears). An auditory cue instructs the participant to focus on the cued side and ignore the non-cued side. During the task, we also record the brain's electrophysiological responses (EEG). This method provides data for attended and ignored channels in the same block of trials, and will show the time course of attentional effects as a function of the lexicality effect. These results will extend the scope of current models of spoken word recognition.
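By way of illustration, the sketch below shows the core of such an ERP analysis: averaging EEG epochs time-locked to word onsets, separately for the attended and the ignored ear within the same block. All array shapes, sampling rates, and event indices are hypothetical stand-ins, not the project's actual pipeline.

```python
# Minimal ERP sketch: epoch a continuous EEG channel around word onsets
# and average separately for attended vs. ignored ears.
import numpy as np

fs = 500                                  # assumed EEG sampling rate (Hz)
eeg = np.random.randn(fs * 600)           # stand-in for 10 min of one channel

# Word-onset sample indices, labelled by whether that ear was cued
events = {"attended": [1000, 9000, 17000], "ignored": [5000, 13000, 21000]}

def erp(signal, onsets, tmin=-0.1, tmax=0.6):
    """Average epochs around each onset, baseline-corrected to the
    pre-stimulus interval [tmin, 0]."""
    pre, post = round(-tmin * fs), round(tmax * fs)
    epochs = np.stack([signal[i - pre:i + post] for i in onsets])
    epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)  # baseline correction
    return epochs.mean(axis=0)

erp_attended = erp(eeg, events["attended"])
erp_ignored = erp(eeg, events["ignored"])
# Attention and lexicality effects would appear as amplitude differences
# between these averages, e.g., in the N400 time window.
```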

Speech communication in older adults: an acoustic and perceptual investigation

Researchers: Valerie Hazan, Outi Tuomainen. ESRC Project Grant August 2014-July 2017

We propose to gain a comprehensive account of older people's speech production and perception in situations involving communication with another individual. Adults with age-related hearing loss and the rarer group of older adults with normal hearing will be included, as well as younger adult controls. In Study 1, communication with another speaker, while reading sentences or completing a problem-solving task, will either be in good listening conditions, where both speakers hear each other normally, or in adverse conditions, where the participant has to get their message across to another speaker who has a simulated hearing loss or where both are speaking in a noisy background. These comparisons will enable us to gauge the degree to which an older person is able to adapt their speech to overcome difficult listening conditions, a skill which is of paramount importance in everyday speech communication. We will obtain high-quality digital recordings of the participants' speech and will also, via sensors placed on the neck, record information about their vocal fold vibration, which determines the quality of their voice. Video recordings will also be analysed to investigate whether older speakers make use of eye gaze and head gestures to signal aspects of discourse such as turn-taking and back-channelling (e.g., saying 'okay' to signal understanding) to the same degree as younger speakers. In Study 2, older and younger listeners with normal and impaired hearing will be presented with some of the sentence materials recorded by all speaker groups in Study 1 in good and adverse listening conditions. Tests will be presented in both auditory-alone and audiovisual conditions. Intelligibility tests will be run to see what impact age, hearing status and visual cues have on speech understanding, and to see whether the 'clear speech' adaptations made by older speakers to counter the effects of poor communication conditions give the same benefit as those made by younger speakers. Sentence recall tests will also be run to investigate whether listening effort is reduced when listening to 'clear speech'.

This project will lead to a better understanding of the effects of ageing on speech communication and of the various factors contributing to potentially degraded speech communication in a population of 'healthy aged' individuals. These benchmarks will be of use for practitioners such as speech and language therapists and audiologists who work on aspects of communication with older people who have health complications. A better understanding of the communication difficulties that older individuals experience, and of their strategies for overcoming these difficulties, will also assist professionals such as social workers and care professionals who work to improve quality of life for older people, as well as developers of speech technology devices for telemedicine and remote monitoring. Importantly, this research will also contribute to our basic understanding of the development of speech perception and production across the lifespan.

Computer-based connected text training of speech perception for cochlear implant users

Researchers: Stuart Rosen, Tim Green, Andy Faulkner. Action on Hearing Loss International Project Grant - May 2014-April 2017
While CI users’ speech recognition typically improves with everyday listening, it is likely that in many cases this process can be facilitated by appropriate training. The aim of the proposed research is to investigate the extent to which formal training can facilitate the development of CI users’ speech understanding. Formal training may have several advantages over learning through everyday experience, including providing listening conditions and speech materials that are optimised for promoting learning, and giving the opportunity to enhance listening skills without the constraints and risks associated with everyday communication. Improvements in such skills over the course of relatively short-term formal training may have benefits beyond immediate improvements in speech understanding by, for example, imparting increased confidence to engage more fully with the wider world.

Training will be carried out at home on tablet computers and will use recordings of stories divided into phrases, from which the listener selects target words from amongst similar alternatives. This approach is designed to target different listening skills, including both distinguishing between similar speech sounds and using contextual information to enhance understanding. The use of connected narrative materials may enhance the motivation to persist with training. Different implementations of the same general approach will be targeted at CI users with different initial levels of speech understanding. The effectiveness of training will be examined with a wide range of speech perception tests and with questionnaire-based measures of perceived benefit, allowing assessment both of the particular abilities improved by training and of the extent to which any improvements translate into meaningful benefits in everyday communication. If proven effective, computer-based training requiring minimal supervision would be a highly cost-effective intervention of benefit to many CI users.


The role of speech motor resonances in spoken language processing

Researchers: Patti Adank, Joseph Devlin (CPB). Leverhulme Trust project grant. Duration 3 years January 2014 - December 2016
This project investigates the relationship between speech production and speech perception by testing how production mechanisms support perception in everyday listening situations, such as hearing someone speak in an unfamiliar accent or understanding speech in background noise. The presence of such acoustic variation makes the speech signal more difficult to understand, yet listeners normally extract the linguistic message relatively effortlessly. Successful speech perception relies crucially on the ability to deal effectively with acoustic variation in speech, yet the mechanisms underlying this ability are poorly understood. We will use a combination of behavioural experiments, neurophysiological experiments (involving Motor Evoked Potentials), and Transcranial Magnetic Stimulation to investigate how comprehension of accented speech relies on production mechanisms.

Understanding British Accents in Noise

Researchers: Paul Iverson, Bronwen Evans, Mel Pinet, Alex Leff (Inst. Neurology), Jyrki Tuomainen. Funded by ESRC. Duration 4 years: March 2013 - March 2017
Accent differences among speakers and listeners can interfere with individuals' ability to understand each other under noisy conditions. The overall aim of this research is to understand why and how accent differences among British English speakers and listeners can make speech recognition difficult in noisy conditions. We will test whether (i) listeners are multidialectal (i.e., able to understand many different accents, not only ones similar to their own), and (ii) the difficulties with accents and noise arise at early stages of speech processing in the brain or at later processing stages associated with the recognition of words. Our experiments will involve testing individuals with a wide range of British English accents. We will test their ability to recognise speech spoken in different accents and mixed with noise, acoustically analyse their own speech in terms of accent, and use neurophysiological measures (EEG) to assess different types of speech processing. This interdisciplinary collection of measures and techniques will be used to address questions that are relevant to sociophonetics (e.g., why certain accents become standards, and the impact of multidialectal experience), speech science (e.g., which factors explain speech intelligibility) and psychology (e.g., how speech is processed in the brain, and the perceptual learning abilities of adults), and will do so in a way that has practical relevance to understanding how people communicate in the UK.
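For illustration, mixing speech with noise at a controlled signal-to-noise ratio (SNR), as in the recognition tests described above, can be done along the following lines. This is a minimal sketch: the signals are random stand-ins for real recordings, and the function name and SNR value are illustrative only.

```python
# Mix a speech signal with noise at a target SNR in decibels.
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise power ratio equals
    `snr_db`, then return the mixture."""
    noise = noise[:len(speech)]               # assume noise is long enough
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    return speech + noise * np.sqrt(target_p_noise / p_noise)

fs = 16000
speech = np.random.randn(fs * 2)              # stand-in for a 2 s sentence
noise = np.random.randn(fs * 2)               # stand-in for a noise masker
mixture = mix_at_snr(speech, noise, snr_db=0) # 0 dB SNR: equal power
```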

Mapping the development of phonetic perception

Researchers: Paul Iverson, Jyrki Tuomainen, Kathleen McCarthy, Katrin Skoruppa (University of Essex). ESRC project grant. Duration 3 years: January 2014 - December 2016
Infants are born with perceptual abilities that allow them to hear acoustic differences between the speech sounds used in many languages. During the first year of life, they become more specialised, increasing their ability to distinguish native-language speech sounds and decreasing their ability to distinguish some non-native speech sounds. One limitation of existing infant-testing methodologies is that it is only feasible to test isolated speech contrasts (e.g., the vowels in 'beet' vs. 'bit'), which gives a rather narrow view of perceptual development (i.e., one cannot see changes in how individuals process the entire vowel system of a language). We believe that we have found a way to produce a broader view of development by using the Acoustic Change Complex (ACC) of EEG recordings (electrodes on the scalp measuring electrical activity in the brain) to provide a time-efficient measure of auditory perceptual sensitivity, and then using multidimensional scaling to produce perceptual maps based on many stimulus pairs. We will develop this method in tests of 7-month-old infants and in comparisons of adults with different languages. We will then compare these perceptual maps for 4-, 7-, and 11-month-olds as well as 4-year-olds, in order to chart how the perception of vowels and fricatives changes during early development.
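The mapping step might look roughly like the sketch below, in which pairwise neural discrimination measures (e.g., ACC amplitudes) are treated as dissimilarities and multidimensional scaling places the stimuli in a two-dimensional perceptual map. The vowel labels and dissimilarity values are invented for illustration.

```python
# Build a 2-D "perceptual map" from pairwise dissimilarities via MDS.
import numpy as np
from sklearn.manifold import MDS

vowels = ["i", "I", "e", "a"]              # hypothetical vowel stimuli
# Symmetric pairwise dissimilarities (e.g., scaled ACC amplitudes),
# zero on the diagonal; values are invented for illustration.
d = np.array([[0.0, 0.4, 0.9, 1.6],
              [0.4, 0.0, 0.6, 1.3],
              [0.9, 0.6, 0.0, 0.8],
              [1.6, 1.3, 0.8, 0.0]])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(d)
for v, (x, y) in zip(vowels, coords):
    print(f"{v}: ({x:.2f}, {y:.2f})")
# Comparing such maps across age groups would show how the vowel space
# warps toward native-language categories during development.
```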

INSPIRE: Investigating speech processing in realistic environments

Duration 4 years: 2012-2016. UCL PI: Paul Iverson

This FP7 Marie Curie Initial Training Network INSPIRE comprises 10 European research institutes and 7 associated partners, and has the aim of training researchers to investigate how people recognise speech in real life under a wide range of conditions that are “non-optimal”.

SHaPS hosts two PhD projects within the network. One project, supervised by Paul Iverson, Valerie Hazan and María Luisa García Lecumberri (University of the Basque Country, Spain), examines how speakers and listeners, particularly second-language learners, modify their phonetic perception and production during speech communication. The second project, supervised by Stuart Rosen, Andrew Faulkner and Torsten Dau (Technical University of Denmark), investigates the ability of normal-hearing and hearing-impaired listeners to perceive speech targets against the background of maskers that manipulate the presence and absence of periodicity.

You came TO DIE?! Perceptual adaptation to regional accents as a new lens on the puzzle of spoken word recognition

Researchers: Bronwen Evans in collaboration with Cathy Best and Jason Shaw (University of Western Sydney), Jennifer Hay (Christchurch NZL), Gerry Docherty (Newcastle), Paul Foulkes (York).

Funded by the Australian Research Council. Duration 3 years: 2012-2015

The project uses behavioural measures (eye tracking, traditional speech perception tests) to investigate how Australian, New Zealand and UK listeners adapt to each other's accents, with the aim of revealing how we achieve stable word recognition via flexible adjustment to pronunciation differences. Results will inform word recognition theory and illuminate why unfamiliar accents are difficult for language learners and automatic speech recognisers.

Speaker-controlled variability in children's speech in interaction

Researchers: Valerie Hazan and Michèle Pettinato.

Funded by ESRC. Duration 3 years: June 2011 - May 2014

How do children and teenagers adapt their speech so that they can maintain good communication in challenging listening environments? Are they able to modify their speech specifically to counteract the effects of different types of noise or interference? Are there differences between how adults and children/young people achieve this? What does this ability depend on, and how does it develop? These are the questions we are pursuing with this research project.

Clear speech strategies of adolescents with hearing loss in interactions with their peers

ESRC Linked Studentship, July 2011 - June 2014. PI: Valerie Hazan. Studentship holder: Sonia Granlund

The aim of this studentship is twofold. The first aim is to investigate the clear speech strategies used by adolescents when interacting with peers who have a hearing impairment. The second is to carry out a detailed analysis of the communication strategies used by adolescents with hearing impairment when interacting with both their hearing and their hearing-impaired peers. The project complements research on clear speech strategies in normally-hearing children aged 9 to 14 carried out in Hazan's concurrent ESRC project on speaker-controlled variability (see above).

Perceiving speech in single and multi-talker babble in normal and impaired hearing

Researchers: Stuart Rosen, Tim Green.

Funded by Medical Research Council. Duration 3 years: April 2011 – August 2014.

Most speech is heard in the background of other sounds, particularly other people talking. Listeners with normal hearing have remarkable abilities to filter out extraneous sounds and listen only to the desired talker. In fact, this is known among researchers as the ‘cocktail party effect’, because these abilities are so important at a noisy cocktail party. People with a hearing impairment, however, find this situation very challenging, even though they might function perfectly well with their hearing aids or cochlear implants in a quiet room. The main aim is to more fully explain how people with normal hearing manage to understand speech in the background of other talkers and why people with hearing impairment do not. We are hopeful that this deeper understanding will lead to new ideas for hearing aids that will enable hearing-impaired people to enjoy cocktail parties more!

Auditory specialization for speech perception

Researchers: Paul Iverson, Stuart Rosen, Anita Wagner. Funded by the Wellcome Trust. Duration: 3 years: 2008-2012.

Individuals are born with an ability to discern speech sounds (phonemes) in all of the world's languages, but they develop through childhood so that they become specialized to perceive native-language phonemes. The aim of this study is to test our hypothesis that this specialization for native-language phonemes begins to occur in central auditory processing, at a functional level prior to linguistic categorization. The work uses behavioural measures and MEG to examine the perception of English phonemes by adult native speakers of Sinhala and Japanese.

Speech perception and language acquisition in children with hearing impairments

Researchers: Katrin Skoruppa, Stuart Rosen. Marie Curie Advanced Fellowship to Katrin Skoruppa. Duration 2 years: October 2010-September 2012.

How do children with hearing aids and cochlear implants learn their native language? Can they use the same learning mechanisms and acoustic cues as their normal-hearing peers? In this study, we examine what children with hearing impairments know about the sound structure of their native language. We are also interested in finding out how they acquire this knowledge and whether it is correlated with their vocabulary and grammar skills.

Accent and language effects on speech perception with noise or hearing loss

MRC-ESRC competitive studentship awarded to Melanie Pinet. Supervisor: Paul Iverson. Dates: October 2008 - September 2012

One of the key factors determining speech intelligibility under challenging conditions is the difference between the accents of the talker and listener. For example, normal-hearing listeners can be accurate at recognising a wide range of accents in quiet, but in noise they are much poorer (e.g., 20 percentage points less accurate) when trying to understand native (L1) or non-native (L2) accented speech that does not closely match their own accent. The aim of this PhD research is to provide a more detailed account of this talker-listener interaction in order to establish the underlying factors involved in L1 and L2 speech communication in noise for normal-hearing and hearing-impaired populations.

Speaker-controlled variability in connected discourse: acoustic-phonetic characteristics and impact on speech perception

Researchers: Valerie Hazan, Rachel Baker. Funded by ESRC. Duration 3 years: 2008-2011

This project investigates why certain speakers are easier to understand than others. Speech production is highly variable both across and within speakers; this variability is partly due to differences in vocal tract anatomy and partly under the control of the speaker. The project examines whether clearer speakers are more extreme in their articulations (as measured from the acoustic properties of their speech) or whether they are more consistent in their production of speech sounds. In order to better model natural communication, the speech to be analysed is recorded using a new task aimed at eliciting spontaneous dialogue containing specific keywords. The first study investigates whether 'inherent' speaker clarity is consistent across different types of discourse, and whether speaker clarity is more closely correlated with cross-category differences or within-category consistency in production. The second study investigates whether clearer speakers show a greater degree of adaptation to the needs of listeners. This work has implications for models of speech perception. Understanding what makes a 'clear speaker' will also be informative for applications requiring clear communication, such as teaching, speech and language therapy, and the selection of voices for clinical testing and for speech technology applications.
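As an illustration of an acoustic measure of articulatory "extremeness", the sketch below computes vowel space area from corner-vowel formant frequencies using the shoelace formula. This is only one common proxy, not necessarily the measure used in this project, and the formant values are invented for illustration.

```python
# Vowel space area from mean corner-vowel formants (F1, F2).
import numpy as np

def polygon_area(points):
    """Shoelace formula for the area of a polygon with ordered vertices."""
    x, y = np.asarray(points, dtype=float).T
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Hypothetical mean (F1, F2) values in Hz for /i/, /a/, /u/, ordered
# around the perimeter of the vowel triangle; a larger area is commonly
# taken to indicate more extreme (and often clearer) articulation.
corner_vowels = [(300, 2300), (750, 1300), (350, 800)]
print(f"vowel space area: {polygon_area(corner_vowels):.0f} Hz^2")
```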