Listening effort and multilingual speech communication: Neural measures of auditory and lexical processing by adults and older children
People feel that they need to "listen harder" when communicating in a second language, but it is not clear how this effort changes the brain processes involved in recognising speech. Our initial research has produced a surprising finding: when we tested people listening to a talker against a noisy background (i.e., a distracting talker), auditory areas of the brain were better at picking out the target talker when people were listening to a second language than to their first. We did this by recording neural activity (electroencephalography; EEG) and measuring how it becomes entrained to the acoustics of speech. Although people would normally be expected to perform better when listening to their first language, we think that second-language listeners showed more selective auditory processing because of their additional listening effort. We found related effects for neural measures of word recognition in the same task, and believe we have identified mechanisms that allow second-language learners to partially compensate for their speech recognition difficulties. In this grant project, we will expand our investigation in a series of studies that manipulate the acoustics of speech and compare how speech is recognised in first and second languages by speakers of English and Korean. Furthermore, we will test adults who learned both languages simultaneously as young children, adults who learned their second language later in life, and older children who are in the process of learning both languages. Our goals are to understand how people can use listening effort to compensate for their difficulties with second-language speech, and to examine how this ability develops and relates to proficiency. This work is important for understanding how people apply their linguistic processes and structures during everyday speech communication, and is relevant to a wide range of difficult listening conditions (e.g., hearing impairment). The work will also advance our scientific understanding of how new measures of neural entrainment for speech relate to practical aspects of speech recognition.
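To make the entrainment measure concrete, here is a minimal sketch of the idea: quantify how strongly a neural signal tracks the speech amplitude envelope by taking the peak cross-correlation between the two, allowing for a neural delay. The function names, synthetic signals and the simple single-channel correlation measure are illustrative assumptions, not the project's actual analysis pipeline, which would typically use multichannel EEG and more sophisticated models such as temporal response functions.

```python
import numpy as np

def speech_envelope(signal, fs, cutoff=8.0):
    """Crude amplitude envelope: rectify, then low-pass via a moving average."""
    rectified = np.abs(signal)
    win = max(1, int(fs / cutoff))
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

def entrainment_score(envelope, eeg, max_lag_samples):
    """Peak normalised cross-correlation within +/- max_lag_samples."""
    env = (envelope - envelope.mean()) / envelope.std()
    sig = (eeg - eeg.mean()) / eeg.std()
    n = len(env)
    scores = []
    for lag in range(-max_lag_samples, max_lag_samples + 1):
        if lag >= 0:
            a, b = env[: n - lag], sig[lag:]
        else:
            a, b = env[-lag:], sig[: n + lag]
        scores.append(np.dot(a, b) / len(a))
    return max(scores)

# Synthetic demo: a fake "EEG" channel that tracks the speech envelope
# at a 10-sample (0.1 s) delay, plus additive noise.
fs = 100
rng = np.random.default_rng(0)
t = np.arange(fs * 10)
speech = rng.standard_normal(fs * 10) * np.sin(2 * np.pi * 2 * t / fs)
env = speech_envelope(speech, fs)
eeg = np.roll(env, 10) + 0.5 * rng.standard_normal(len(env))
score = entrainment_score(env, eeg, max_lag_samples=20)
```

A higher peak correlation for the target talker's envelope than for the distractor's would indicate more selective tracking of the attended voice, which is the kind of contrast the project examines across first- and second-language listening.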
Speech masking effects in speech communication across the lifespan
Our ability to communicate successfully with others can be strongly affected by the presence of noise and other voices in the environment, and children and older adults are more greatly affected than young adults in these situations. Some of the disruption is due to physical masking by interfering sounds (energetic masking; EM), but if the disrupting sound can also be understood, it causes further difficulty (informational masking; IM). Previous work suggests that informational masking causes relatively more disruption for children and older adults than for young adults, but these findings are based on laboratory tests that are far from realistic communication.
Although the impact of adverse conditions on speech communication has been studied in different age groups, no study to date has taken a full lifespan view, examining the relative impact of IM and EM on participants aged 8 to 85 within a common experimental design. Moreover, many studies have assessed the impact of EM and IM on the perception of recorded sentences or words; this ignores the fact that speakers adapt their speech dynamically during communication to counter the effects of masking. We evaluate the impact of adverse conditions in an interactive task, using measures that reflect speech communication efficiency (e.g., task transaction time, rate of dysfluencies). Finally, there is little evidence to date on whether laboratory-based evaluations reflect the level of difficulty experienced in everyday life. The proposed project will, for the first time, relate laboratory evaluations of speech communication difficulty in adverse listening conditions to real-life ratings of difficulty collected in real time over a two-week period. It will also test whether informational masking in particular causes greater interference for some age groups (e.g., children, older adults), and whether the underlying reasons for the interference differ between children and adults.
The ENRICH network
Speech is a hugely efficient means of communication: a reduced capacity in listening or speaking creates a significant barrier to social inclusion at all points through the lifespan, whether in education, at work or at home. Hearing aids and speech synthesis can help address this reduced capacity, but their use imposes greater listening effort. The fundamental objective of the ENRICH network is to modify or augment speech with additional information to make it easier to process. The network will train 14 early-stage researchers (ESRs), giving them not only the necessary cross-disciplinary knowledge and research skills, but also experience of entrepreneurship and technology transfer, so that they can translate research findings into products and services that help everybody communicate successfully. Two ESRs are based at UCL: Anna Exenberger and Max Paulus.
Attentional effects in spoken word recognition: An event-related potential study
Researchers: Jyrki Tuomainen, Faith Chiu. Leverhulme Small Research Grants project, October 2017 to December 2018.
The ability to recognise spoken words is a complex cognitive process, currently modelled as a hierarchy in which the acoustic signal is transformed into a neural code for accessing phonetic/phonological, word-form (lexical) and meaning-level representations. Less is known about the role of attention in this process. Our focus is on whether auditory attention is important for accessing the long-term memory representations of word form (lexical) and meaning (semantics).
The participants perform an auditory lexical decision task, and we obtain an accurate measure of the focus of auditory attention by using dichotic presentation of the stimuli (two different stimuli presented at the same time to separate ears). An auditory cue instructs the participant to focus on the cued side and ignore the non-cued side. During the task, we also record the brain's electrophysiological responses (EEG). This method provides data for attended and ignored channels within the same block of trials, and will reveal the time course of attentional effects as a function of the lexicality effect. These results will extend the scope of current models of spoken word recognition.