UCL Psychology and Language Sciences


Putting the text in neural context: Short-term experiential reorganization of language and the brain



Ambiguity is rarely problematic for spoken language comprehension in the real world. This is likely because listeners constrain interpretation by making use of the abundance of context that accompanies natural language use. We proposed that the brain does this in a constructive way, using prior knowledge about context to predict forthcoming acoustic input (Skipper et al., 2006). Because knowledge varies with individual experience, the “neural context” (Bressler & McIntosh, 2007) supporting language comprehension must be dynamically organizing. We tested this proposal by varying participants’ experience with the visual context that accompanied sentences, either speech-associated mouth movements or printed text. Participants later listened to the same sentences without the accompanying context. We hypothesized that, if participants’ prior experience involved observing the actor’s articulations, sensory-motor regions associated with processing those movements would be relatively stronger contributors to the neural context for processing the words in those sentences. Conversely, if participants’ prior experience involved print, brain regions associated with reading would be stronger contributors when processing those same words.


English-speaking participants, unfamiliar with Basque, underwent 256-channel EEG. During Phase I, participants saw and heard 40 unfamiliar video clips of an actor standing in front of a whiteboard. In half the clips the actor faced the board and read aloud the English sentences printed on the board. In the other half she faced the participant and spoke English sentences while a “Basque translation” of those sentences was printed on the board. During Phase II, participants heard audio-only versions of the 40 sentences from Phase I, created by removing the video track, and 40 previously unheard control sentences. All sentences ended in a target word matched for various stimulus properties. Ocular artifacts were removed from the EEG data, which were then segmented into epochs time-locked to the onset of target words. Epochs were bandpass filtered, average referenced, averaged, and source localized using Brainstorm software (Tadel et al., 2011). All analyses were corrected for multiple comparisons at alpha < .05.
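The epoch-level preprocessing steps described above (bandpass filtering, average referencing, and averaging into an evoked response) can be sketched as follows. This is a minimal NumPy/SciPy illustration only: the sampling rate, epoch length, and 1–30 Hz filter band are assumptions not reported in the abstract, and the actual analysis was carried out in Brainstorm.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not taken from the abstract):
# 40 epochs, 256 channels, 250 Hz sampling, 1 s epochs time-locked
# to target-word onset.
n_epochs, n_channels, fs = 40, 256, 250
n_samples = fs  # one second of data per epoch
epochs = rng.standard_normal((n_epochs, n_channels, n_samples))

# 1. Zero-phase band-pass filter each channel (1-30 Hz is an assumption).
sos = butter(4, [1.0, 30.0], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, epochs, axis=-1)

# 2. Re-reference every sample to the average of all channels.
rereferenced = filtered - filtered.mean(axis=1, keepdims=True)

# 3. Average across epochs to obtain the evoked response (ERP),
#    which would then be passed to source localization.
erp = rereferenced.mean(axis=0)

print(erp.shape)  # channels x time points
```

After average referencing, the mean across channels at every time point is zero by construction, which is the property the source model typically assumes.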


When target words heard in Phase II could be read in Phase I, there was significantly greater bilateral activity in the angular gyrus, occipito-temporal cortex, and a large number of occipital lobe regions, including the fusiform gyri, compared either to controls or directly to target words that could not be read in Phase I. Conversely, there was significantly more bilateral activity in the posterior superior temporal gyrus and sulcus, the pars opercularis, and ventral aspects of the pre- and postcentral gyri and sulci and the central sulcus for Phase II target words whose articulation by the actor could be seen in Phase I.


Results suggest that hearing a word can be supported by different brain regions depending on participants’ recent experience with that word. In particular, brain regions thought to be involved in the observation and production of speech-associated mouth movements (Skipper et al., 2007) form part of the neural context associated with processing a word when prior experience includes seeing the person producing that word. Visual brain regions thought to support reading, including the left fusiform gyrus at the location of the putative visual word form area (McCandliss et al., 2003), form the neural context for that same word when participants’ only prior experience with that word includes reading it. This pattern of results suggests that knowledge garnered from previously experienced context is routinely used by the brain during language comprehension, even when that context is absent. This suggests that the organization of language and the brain is not static but, rather, is a constructive process, dynamically organized around our prior and rapidly changing experience of the world.


Bressler, S. L., & McIntosh, A. R. (2007). The role of neural context in large-scale neurocognitive network operations. In Handbook of brain connectivity (pp. 403–419). Springer.

McCandliss, B. D., Cohen, L., & Dehaene, S. (2003). The visual word form area: Expertise for reading in the fusiform gyrus. Trends in Cognitive Sciences, 7(7), 293–299.

Skipper, J. I., Nusbaum, H. C., & Small, S. L. (2006). Lending a helping hand to hearing: Another motor theory of speech perception. In Action to language via the mirror neuron system (pp. 250–285). Cambridge University Press.

Skipper, J. I., van Wassenhove, V., Nusbaum, H. C., & Small, S. L. (2007). Hearing lips and seeing voices: How cortical areas supporting speech production mediate audiovisual speech perception. Cerebral Cortex, 17(10), 2387–2399.

Tadel, F., Baillet, S., Mosher, J. C., Pantazis, D., & Leahy, R. M. (2011). Brainstorm: a user-friendly application for MEG/EEG analysis. Computational Intelligence and Neuroscience, 2011, 8.