UCL Psychology and Language Sciences

Spatially separable networks for observed mouth and gesture movements during language comprehension

Abstract


Most neurobiological language research discards “context” in favor of studying isolated speech sounds or words. We test an alternative model in which language engages multiple brain networks in a context-dependent manner. We evaluated this model with source-localized high-density electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), using both naturalistic and controlled stimuli. When participants watched a television game show, both EEG and fMRI revealed that speech-associated mouth movements and co-speech gestures were processed in spatially separable ventral and dorsal pathways. The ventral “mouth” regions included posterior superior temporal (ST), inferior parietal, and ventral premotor (PM) cortex. The dorsal “gesture” regions included anterior ST, superior parietal, and dorsal PM cortex. In the controlled study, participants watched an actress speaking sentences constructed to vary in the informativeness of mouth movements (more or less visible) and of co-speech gestures (absent, or more or less imagistic). Preliminary results suggest that mouth and gesture movements were again associated with their own ventral and dorsal networks. Functional connectivity analyses show that when mouth movements were more informative, the ventral network was weighted more strongly, whereas when gestures were more informative, the dorsal network was weighted more strongly. These results suggest that the organization of language in the brain is not static but, rather, is supported by multiple, distributed, and simultaneously active networks, each processing a different type of context and weighted by the informativeness of the different sources of information in the environment.