Sound Processing and Language in Children with Mild to Moderate Hearing Loss

Methods

Participants

Information about the study was sent out to over 900 children aged 8-16 years with MMHL and to 2000 normally hearing children in London and South-East England. Of those who responded, 103 were invited to take part in the study. To be included, children had to score within normal limits on the Wechsler Abbreviated Scale of Intelligence [1], a nonverbal IQ test, and to have MMHL in the case of the MMHL group, or normal hearing in the case of controls. Seven children met our exclusion criteria and were therefore excluded, leaving a final sample of 52 children with MMHL (‘MMHL group’) and 44 normally hearing, typically developing controls (‘CA group’).

Procedure

Baseline measures

All children completed the following standardised assessments: (i) nonverbal IQ [1]; (ii) phonological processing (NEPSY [2]); (iii) vocabulary (BPVS [3], CELF vocabulary [4]); (iv) grammar (CELF recalling sentences [4], TROG [5]); and (v) word and nonword reading (WIAT word and nonword reading subtests [6]). A full history of hearing problems and language development was taken from each child’s parent/guardian using an in-house questionnaire and a standardised parent/teacher questionnaire of communication strengths and weaknesses [7]. Finally, all children underwent an audiometric assessment to check hearing function [8].

Experiment 1

Sound processing was assessed using a battery of behavioural (psychophysical) tasks, incorporated into child-friendly computer games. In the games, children saw a screen showing three cartoon characters. On each trial, the three characters took it in turns to make a sound, and the child’s task was to select the character that “made the different sound”. Tests started off easy and became harder as the children progressed through the games. To maintain motivation, the games used an adaptive procedure [9] designed so that all children got the answer right about 79% of the time.
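
The 79% figure reflects the transformed up-down procedure cited above [9]: a 3-down 1-up rule converges on roughly 79.4% correct. As a minimal sketch of how such an adaptive track works, the Python snippet below runs a 3-down 1-up staircase against a simulated listener; the step size, stopping rule and simulated listener are illustrative assumptions, not the study's actual parameters.

    import random

    def run_staircase(true_threshold=20.0, start_level=40.0, step=4.0,
                      n_reversals_stop=8):
        """Minimal 3-down 1-up adaptive track for a three-alternative oddity task.

        Converges near 79.4% correct [9]. The simulated listener, step size and
        stopping rule are illustrative assumptions, not the study's parameters.
        """
        level = start_level          # current stimulus difference (arbitrary units)
        n_correct_in_row = 0
        direction = None             # 'down' = getting harder, 'up' = getting easier
        reversals = []

        while len(reversals) < n_reversals_stop:
            # Toy listener: guesses (1 in 3) below threshold, always correct above it.
            p_correct = 1.0 if level > true_threshold else 1.0 / 3.0
            correct = random.random() < p_correct

            if correct:
                n_correct_in_row += 1
                if n_correct_in_row == 3:        # three in a row -> make it harder
                    n_correct_in_row = 0
                    if direction == 'up':
                        reversals.append(level)  # the track changed direction
                    direction = 'down'
                    level -= step
            else:                                # one error -> make it easier
                n_correct_in_row = 0
                if direction == 'down':
                    reversals.append(level)
                direction = 'up'
                level += step

        # Threshold estimate: mean of the final reversal levels.
        tail = reversals[-6:]
        return sum(tail) / len(tail)

    print(run_staircase())   # returns a value near the simulated threshold of 20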

We tested children’s abilities to tell the difference between (i.e. discriminate) a range of different sounds. First, we looked at children’s abilities to discriminate very basic sounds (pure tones) along the following dimensions: (i) pitch (‘FD’); (ii) the extent to which pitch changed (modulated) over time (‘FMD’); and (iii) differences in the onset of sounds (‘ramps’). We also used more complex sounds that sounded like speech but were not speech, and assessed children’s abilities to discriminate differences in (iv) pitch (‘F0’), (v) frequency modulation (‘F2’), and (vi) the loudness of certain components over time (‘AMD’). Finally, we looked at children’s abilities to discriminate between the speech sounds /ba/ and /da/ ((vii) ‘speech’).
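
To make the distinction between the pitch (‘FD’) and pitch-modulation (‘FMD’) stimuli concrete, here is a minimal Python sketch of how a pure tone and a frequency-modulated tone can be synthesised; the carrier frequency, modulation rate and depth, duration and sampling rate are illustrative values only, not the study's stimulus parameters.

    import numpy as np

    def pure_tone(freq_hz, dur_s=0.5, fs=44100):
        """Plain sine tone; the 'FD' task varies freq_hz between intervals."""
        t = np.arange(int(dur_s * fs)) / fs
        return np.sin(2 * np.pi * freq_hz * t)

    def fm_tone(carrier_hz, mod_rate_hz, mod_depth_hz, dur_s=0.5, fs=44100):
        """Tone whose instantaneous frequency swings around the carrier;
        the 'FMD' task varies the depth of that swing."""
        t = np.arange(int(dur_s * fs)) / fs
        # Phase = 2*pi * integral of [carrier + depth * sin(2*pi*rate*t)] dt
        phase = (2 * np.pi * carrier_hz * t
                 - (mod_depth_hz / mod_rate_hz) * np.cos(2 * np.pi * mod_rate_hz * t))
        return np.sin(phase)

    # Illustrative oddity trial: two identical pure tones and one FM tone.
    standard = pure_tone(1000)
    target = fm_tone(1000, mod_rate_hz=4, mod_depth_hz=30)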

All children were assessed without hearing aids, and children with MMHL who wore a hearing aid (n = 45) were also assessed while wearing their hearing aids. Stimuli were presented at a fixed level of 70 dB SPL through a loudspeaker.

[Figure: behavioural testing]

Experiment 2

Electrophysiological measures of sound processing were assessed using auditory event-related potentials (ERPs). Each child’s EEG was recorded in response to the following stimuli: (i) pure tones versus FM tones (‘nonspeech’ condition); (ii) complex sounds versus those including a modulation in pitch (‘speech-like’ condition); and (iii) the speech sounds /ba/ and /da/ (‘speech’ condition). Children were not required to perform a task and watched a silent film during the recording. Stimuli were presented using an oddball design, whereby a repeated, standard sound was interrupted occasionally by a different (deviant) sound. Each child completed six blocks of 333 trials each.
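
As a concrete illustration of the oddball design, the Python sketch below builds one block of trials in which occasional deviants interrupt a repeating standard. The block length of 333 trials comes from the description above; the number of deviants and the spacing rule are assumptions made for the example, not the study's actual design.

    import random

    def oddball_block(n_trials=333, n_deviants=50, min_gap=2):
        """One pseudo-random oddball block: mostly standards, occasional deviants.

        n_trials matches the block length in the text; n_deviants and min_gap
        are illustrative assumptions.
        """
        seq = ['standard'] * n_trials
        placed = 0
        while placed < n_deviants:
            i = random.randrange(min_gap, n_trials)
            # Require the candidate slot and the min_gap trials before it to
            # still be standards, so deviants never occur back to back.
            if all(s == 'standard' for s in seq[i - min_gap:i + 1]):
                seq[i] = 'deviant'
                placed += 1
        return seq

    block = oddball_block()
    print(block.count('deviant'), 'deviants out of', len(block), 'trials')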

We assessed sound processing by looking at children’s brain responses to the sounds. When people are presented with a series of sounds, their brains usually respond in quite a predictable way. If we average the electrical activity in the brain across many trials, we typically see a waveform that researchers call the P1-N1-P2 complex of the late auditory ERP. Each component represents a different peak, with the ‘P’ and the ‘N’ standing for ‘positive’ and ‘negative’ respectively (indicating whether the peak is above or below zero), and the number representing the order of the components. Research has shown that the P1-N1-P2 response to sounds changes with age, even into adolescence and early adulthood. Some researchers therefore think that it can be used as an index of how ‘mature’ the brain is in terms of its ability to process different sounds.
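
The Python sketch below illustrates the averaging step described above, together with one simple way of picking out the P1, N1 and P2 peaks from the averaged waveform; the sampling rate and latency windows are illustrative assumptions rather than the scoring criteria used in the study.

    import numpy as np

    def average_erp(epochs):
        """Average single-trial epochs (trials x samples): random background
        activity cancels out and the P1-N1-P2 complex emerges."""
        return np.asarray(epochs, dtype=float).mean(axis=0)

    def find_p1_n1_p2(erp, fs=500):
        """Locate P1, N1 and P2 as the most positive / negative / positive points
        in three latency windows (ms after stimulus onset). The sampling rate and
        window edges are illustrative assumptions, and the epoch is assumed to
        start at stimulus onset."""
        windows = {'P1': (50, 150, +1), 'N1': (100, 250, -1), 'P2': (150, 350, +1)}
        peaks = {}
        for name, (lo_ms, hi_ms, sign) in windows.items():
            lo, hi = int(lo_ms * fs / 1000), int(hi_ms * fs / 1000)
            idx = lo + int(np.argmax(sign * erp[lo:hi]))
            peaks[name] = {'latency_ms': 1000.0 * idx / fs,
                           'amplitude_uV': float(erp[idx])}
        return peaks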

We used a statistic called the intra-class correlation coefficient (ICC) to assess how age-appropriate each child’s ERP was relative to children of their own age group. To do this, we divided the children into two age groups (‘younger’: 8-12 years; ‘older’: 12-16 years; [10]).
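
As an illustration of how such a comparison can be computed, the Python sketch below calculates an ICC between one child's ERP waveform and the grand average of their age group, treating the two waveforms as two 'raters' over time points. The specific variant shown, ICC(3,1), and the variable names are assumptions made for the example; the study's exact ICC formulation may differ.

    import numpy as np

    def icc_3_1(waveform_a, waveform_b):
        """ICC(3,1): two-way mixed, consistency, single measures, with the two
        waveforms treated as two 'raters' scoring the same time points.
        One common way to score waveform similarity; whether this exact variant
        matches the study's is an assumption."""
        data = np.column_stack([waveform_a, waveform_b])        # time points x 2
        n, k = data.shape
        grand = data.mean()
        ss_rows = k * np.sum((data.mean(axis=1) - grand) ** 2)  # across time points
        ss_cols = n * np.sum((data.mean(axis=0) - grand) ** 2)  # across waveforms
        ss_error = np.sum((data - grand) ** 2) - ss_rows - ss_cols
        ms_rows = ss_rows / (n - 1)
        ms_error = ss_error / ((n - 1) * (k - 1))
        return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

    # Illustrative use (variable names are hypothetical):
    # score = icc_3_1(child_erp, age_group_erps.mean(axis=0))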


Because we needed to look for links between measures of sound processing and language (aim 3), Experiments 1 and 2 were run concurrently.

References

[1] Wechsler, D. (1999). The Wechsler abbreviated scale of intelligence. San Antonio: Psychological Corporation.

[2] Korkman, M., Kirk, U., & Kemp, S. (1998). NEPSY: A developmental neuropsychological assessment. New York: Psychological Corporation.

[3] Dunn, L. M., Dunn, D. M., Styles, B., & Sewell, J. (2009). British Picture Vocabulary Scale: 3rd Edition - BPVS III. Windsor: NFER-Nelson.

[4] Semel, E., Wiig, E. H., & Secord, W. A. (2003). Clinical Evaluation of Language Fundamentals - Fourth Edition (CELF-4). Toronto: Harcourt Assessment.

[5] Bishop, D. V. M. (2003a). Test for reception of grammar (TROG-2). San Antonio: Psychological Corporation.

[6] Wechsler, D. (2005). Wechsler Individual Achievement Test - Second UK Edition (WIAT-II UK). London: Pearson Assessment.

[7] Bishop, D. V. M. (2003b). The children's communication checklist, version 2 (CCC-2). London: Psychological Corporation.

[8] British Society of Audiology (2004). Recommended procedure: Pure-tone air-conduction and bone-conduction threshold audiometry with and without masking.

[9] Levitt, H. (1971). Transformed up-down methods in psychoacoustics. The Journal of the Acoustical Society of America, 49, 467–477.

[10] Bishop, D. V. M., Hardiman, M., Uwer, R., & von Suchodoletz, W. (2007). Maturation of the long-latency auditory ERP: Step function changes at start and end of adolescence. Developmental Science, 10, 565-575.