UCL Psychology and Language Sciences


Dynamic, multi-core-periphery architectures support the neurobiology of language in the real world

Bangjie Wang, Sarah Aliko, Chengbin Peng, and Jeremy I Skipper



Existing models of the neurobiology of language assume a static architecture that cannot adequately explain language processing because they do not address how the brain uses context. Yet context is necessary for resolving ambiguity at various levels of linguistic analysis, e.g., via co-speech gestures or observable objects, and this likely involves a whole-brain and dynamically changing distribution of regions (given that context is ever changing). Unlike more fixed architectures, core-periphery organizations can account for the dynamic use of context. These involve core sets of highly connected nodes and a group of loosely connected peripheral nodes. We hypothesized that the brain has a core-periphery architecture during naturalistic language processing (where context is available), with cores corresponding to ‘language regions’ and peripheries to the rest of the brain.
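To make the core-periphery notion concrete, the idealized structure described above can be sketched as an adjacency matrix in which core nodes connect to everyone while periphery nodes connect only to the core. This is a toy illustration only; the function name and node counts are invented for the example and are not part of the study.

```python
import numpy as np

def ideal_core_periphery(n_core, n_periphery):
    """Build an idealized core-periphery adjacency matrix:
    core-core and core-periphery edges present, periphery-periphery edges absent."""
    n = n_core + n_periphery
    A = np.zeros((n, n), dtype=int)
    A[:n_core, :] = 1       # core rows: core connects to every node
    A[:, :n_core] = 1       # core columns: keep the matrix symmetric
    np.fill_diagonal(A, 0)  # no self-loops
    return A

A = ideal_core_periphery(3, 7)
degrees = A.sum(axis=1)
print(degrees)  # core nodes have degree 9; periphery nodes have degree 3
```

Real brain networks only approximate this ideal, so core-periphery detection algorithms score how closely an observed adjacency matrix matches such a block pattern.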


We analyzed functional magnetic resonance imaging (fMRI) data from participants who watched movies. For each participant, we constructed a whole-brain network by building adjacency matrices over sliding time windows. The adjacency matrix for each window was computed from pairwise Pearson correlation coefficients between voxel time series. Voxel-wise core-periphery configurations were identified using a new fast algorithm we developed. Spatial independent component analysis (ICA) was used to determine stable group-level configurations. To further understand the interrelationships between nodes, we partitioned adjacency matrices into communities such that the connectivity of nodes within the same community is higher than the connectivity between communities. We performed a mixed effects model analysis to find communities that overlapped brain regions sensitive to spoken words in the movies.
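The first step of this pipeline, sliding-window adjacency matrices from pairwise Pearson correlations, can be sketched as follows. The window length, step size, threshold, and toy data dimensions here are illustrative assumptions, not values reported in the abstract (real data would involve tens of thousands of gray-matter voxels).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy BOLD time series: 50 "voxels" x 300 time points (assumed sizes)
n_voxels, n_timepoints = 50, 300
bold = rng.standard_normal((n_voxels, n_timepoints))

window, step, threshold = 60, 10, 0.3  # assumed parameters for illustration

adjacency_per_window = []
for start in range(0, n_timepoints - window + 1, step):
    segment = bold[:, start:start + window]
    corr = np.corrcoef(segment)        # pairwise Pearson's r between voxel rows
    np.fill_diagonal(corr, 0)          # ignore self-correlation
    # Binarize: an edge exists where |r| exceeds the threshold
    adjacency_per_window.append((np.abs(corr) > threshold).astype(int))

print(len(adjacency_per_window), adjacency_per_window[0].shape)
```

Each binary matrix in `adjacency_per_window` would then be passed to the core-periphery and community-detection steps described above.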


The number of core nodes varied across time windows and participants, covering ~3-7% of gray matter voxels. On average across participants, the core-periphery structures changed every ~2-3 minutes, resulting in 54.7 core configurations detected throughout the movie. More core configurations were found at the individual level than at the group level. The most stable core nodes were in sensorimotor regions (e.g., primary visual and auditory cortices), language-sensitive regions, and some regions in the putative ‘default mode network’ (e.g., angular gyrus).

Individual-level community configurations changed every ~10 minutes, whereas group-level communities varied less frequently and showed very low similarity with individual-level communities. On average across participants and movies, the most stable temporal community configuration consisted of 5 communities, corresponding to central sulcus, temporal, occipital, prefrontal, and subcortical regions. Voxels in ‘language’ regions contributed 5.3% of the stable core nodes of the communities. About 50% of voxels in areas outside ‘language’ regions acted as periphery nodes linked to these ‘language’ core nodes.


Traditional (e.g., ‘dual-stream’) models of the neurobiology of language suggest a static and modular organization of language processing regions, limited to a small portion of the brain. On the contrary, our results suggest a highly flexible core-periphery network architecture in which ‘language regions’ act as connectivity hubs that integrate and share multimodal information with periphery areas. This model might account for the variability and complexity of real-world language processing, where context is available for use, and offers new insights into potential target regions for novel individualized speech therapies.