When your brain stops listening: Predicted discourse content causes the response in auditory cortex to decay
Abstract
Introduction
Understanding sentences is usually easier than understanding single words in isolation, likely because listeners can use sentence context to predict forthcoming words (e.g., DeLong, Urbach, & Kutas, 2005). We propose a model (Skipper, Nusbaum, & Small, 2006) in which predicted words are mapped onto the motor commands that would be used to produce those words. Through efference copy, these motor commands activate the acoustic consequences of executing those movements. This improves understanding because, once the predicted words are verified, the brain processes the incoming auditory information less, freeing neural resources for other uses. We therefore hypothesized that when forthcoming words are highly predictable, listeners will show greater activity in the motor system, followed by reduced activity in sensory areas, relative to conditions of lower predictability. We tested these hypotheses in two fMRI experiments: one using a naturalistic audiovisual stimulus and one using controlled audio-only stimuli.
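As a toy illustration only, the following Python sketch caricatures the proposed prediction-and-verification loop; the function name, the gain values, and the decision rule are our illustrative assumptions, not an implementation of the model:

    # Toy caricature of the efference-copy account: once a predicted word
    # is verified against the input, the incoming signal is sampled less.
    # All names and numbers here are illustrative assumptions.
    def sensory_demand(heard_word, predicted_word, full_gain=1.0):
        """Return how much auditory processing an incoming word receives."""
        if predicted_word is not None and heard_word == predicted_word:
            return 0.2 * full_gain  # prediction verified: reduced sampling
        return full_gain            # no or failed prediction: full analysis

    print(sensory_demand("dog", predicted_word="dog"))  # 0.2: resources freed
    print(sensory_demand("dog", predicted_word=None))   # 1.0: full processing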
Methods
In Experiment 1, participants (N = 14) watched a television game show annotated for topic shifts according to the criteria described by Stenström (1994). We used a combination of independent component (IC) and turnpoints analyses (Skipper, Goldin-Meadow, Nusbaum, & Small, 2009) to identify functionally distinct brain regions and the functions of those regions, respectively. The turnpoints analysis classifies an IC as selective for an annotated stimulus feature if that feature occurs while the IC's response is rising and not while it is falling; selectivity implies that the IC functions to process that feature. We looked first for ICs selective for words preceding a topic shift and then for ICs selective for words at topic shifts. The rationale is that discourse is at its most predictable just before topic shifts and at its least predictable during them. We therefore expected more motor and fewer sensory ICs preceding topic shifts, and fewer motor and more sensory ICs at topic shifts.
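For readers who want the selectivity criterion spelled out, here is a minimal Python sketch of a turnpoints-style classification, assuming an IC time course sampled once per TR and a list of TRs at which the annotated feature occurs; the function, its threshold, and the toy data are our assumptions, and Skipper et al. (2009) describe the actual analysis:

    import numpy as np

    def is_selective(ic_timecourse, feature_trs, min_ratio=0.9):
        """Call an IC selective for a feature if the feature falls on
        rising portions of the IC response and not on falling ones."""
        slope = np.diff(ic_timecourse)           # >0 rising, <0 falling
        trs = np.asarray(feature_trs)
        trs = trs[(trs > 0) & (trs < len(ic_timecourse))]
        rising = np.sum(slope[trs - 1] > 0)      # feature while rising
        falling = np.sum(slope[trs - 1] < 0)     # feature while falling
        total = rising + falling
        return total > 0 and rising / total >= min_ratio

    # Example: a slow oscillating IC and features timed to its upswing.
    t = np.arange(200)
    ic = np.sin(2 * np.pi * t / 50)              # 50-TR cycle
    print(is_selective(ic, [2, 52, 102, 152]))   # True: on the upswing
    print(is_selective(ic, [27, 77, 127, 177]))  # False: on the downswing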
In Experiment 2, participants (N = 9) listened to sentences that varied in the predictability of their final word. Both high- and low-predictability sentences were composed of a sentence frame, a filled pause of variable duration, and a final word. We hypothesized greater activity in motor areas during the pause in high-predictability sentences than in low-predictability sentences, and less activity in early sensory areas during the high-predictability final word than during the low-predictability final word.
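As a hedged sketch of how these contrasts could be cast as a general linear model, the following Python code builds one regressor per condition-by-epoch cell (pause vs. final word, high vs. low predictability) and the two contrast vectors corresponding to our hypotheses; the TR, event timings, durations, and HRF are illustrative assumptions rather than the study's actual design:

    import numpy as np
    from scipy.stats import gamma

    TR, n_scans = 2.0, 200                  # assumed acquisition parameters
    frame_times = np.arange(n_scans) * TR

    def hrf(t):
        """Canonical double-gamma haemodynamic response function."""
        return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

    def regressor(onsets, durations):
        """Boxcar over the events, convolved with the HRF, sampled per TR."""
        dt = 0.1
        t = np.arange(0, n_scans * TR, dt)
        box = np.zeros_like(t)
        for on, dur in zip(onsets, durations):
            box[(t >= on) & (t < on + dur)] = 1.0
        conv = np.convolve(box, hrf(np.arange(0, 32, dt)))[: len(t)]
        return np.interp(frame_times, t, conv)

    X = np.column_stack([
        regressor([20, 120], [3, 3]),       # high-predictability pause
        regressor([70, 170], [3, 3]),       # low-predictability pause
        regressor([23, 123], [1, 1]),       # high-predictability final word
        regressor([73, 173], [1, 1]),       # low-predictability final word
        np.ones(n_scans),                   # intercept
    ])

    c_pause = np.array([1, -1, 0, 0, 0])    # motor: high pause > low pause
    c_word = np.array([0, 0, -1, 1, 0])     # sensory: high word < low word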
Results
In Experiment 1, the ICs sensitive to the period preceding topic shifts were localized to the temporal pole and premotor cortex. In contrast, the ICs sensitive to topic shifts themselves were primarily in the temporal lobe (including the transverse temporal gyrus, i.e., primary auditory cortex), inferior parietal cortex, and prefrontal cortex. In Experiment 2, inferior parietal cortex, the insula, and premotor cortex showed greater activity during the high- than the low-predictability pause, whereas primary auditory cortex and surrounding regions, including the planum polare and planum temporale, showed less activity during the high- than the low-predictability final word. In both experiments, activity in premotor cortex preceded activity in sensory areas.
Conclusions
In both experiments there was activity in the motor system at times of high predictability, and this activity preceded a decrease in activity in areas more closely associated with early sensory processing of the acoustic signal. These results are consistent with our model, in which the brain uses knowledge and expectations about sentence context to activate forthcoming words, at least in part through efference copy from cortical areas involved in speech production. This may aid comprehension by permitting the brain to sample the acoustic signal less once its initial prediction about the word being heard has been confirmed, likely freeing neural resources to process other sensory information and/or to internally elaborate an interpretation of the heard message. Thus, the brain, in effect, talks to itself to predict forthcoming words, allowing it to stop listening to the outside world. Paradoxically, this may be beneficial for communication.
References
DeLong, K. A., Urbach, T. P., & Kutas, M. (2005). Probabilistic word pre-activation during language comprehension inferred from electrical brain activity. Nature Neuroscience, 8(8), 1117-1121.
Skipper, J. I., Goldin-Meadow, S., Nusbaum, H. C., & Small, S. L. (2009). Gestures orchestrate brain networks for language understanding. Current Biology, 19(8), 661-667.
Skipper, J. I., Nusbaum, H. C., & Small, S. L. (2006). Lending a helping hand to hearing: Another motor theory of speech perception. In M. A. Arbib (Ed.), Action to language via the mirror neuron system. Cambridge: Cambridge University Press.
Stenström, A.-B. (1994). An introduction to spoken interaction. London: Longman.