
UCL Psychology and Language Sciences


On the non-independence of language subcomponents

Abstract


Introduction

Studies of language in the brain ordinarily assume that a complex language processing task can be subdivided into independent subcomponents, and that comparing the cortical localization of such functionally distinct parts reveals their underlying neural mechanisms. However, if the brain functions as distributed, overlapping neural networks, such a subtractive approach could eliminate the appearance of activation in overlapping "assemblies." Further, if the brain were to process functionally related tasks differently when they are performed together than when they are performed separately, then combined activations would appear emergent and subtractive results would be misleading. We sought to determine whether the activation caused by adding linguistically relevant visual information to an auditory stimulus is linearly separable from the linguistic component of the auditory stimulus presented alone. We hypothesized that the brain activation associated with the passive perception of audio + visual (AV) discourse could not be accounted for by the linear addition of the activations caused by the Audio (A) and Visual (V) stimuli.
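To make the additivity hypothesis concrete, the sketch below illustrates the logic of the test on simulated voxel-wise response maps: under strict linear additivity, the AV response should equal the sum of the A and V responses at every voxel. All values and the added non-additive term are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the additivity hypothesis on simulated voxel-wise
# response maps (all values hypothetical; the study used real GLM estimates).
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 10_000

# Hypothetical activation maps (e.g., GLM betas) for each condition.
beta_A = rng.normal(0.0, 1.0, n_voxels)
beta_V = rng.normal(0.0, 1.0, n_voxels)
# Include a non-additive component so that AV != A + V.
beta_AV = beta_A + beta_V + rng.normal(0.0, 1.0, n_voxels)

# Under strict linear additivity, AV - (A + V) should be ~0 everywhere.
residual = beta_AV - (beta_A + beta_V)
print("mean |AV - (A + V)| per voxel:", np.abs(residual).mean())
```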

Methods

Six female and five male right-handed native English speakers participated. Functional MRI was performed at 1.5 T with a head coil. Twenty-four spiral gradient-echo T2*-weighted functional images were collected every 3 seconds. Subjects were asked to attend to viewed and/or heard stimuli; no overt motor response was required. Stimuli included audio discourse (A), audio and video discourse (AV), and video alone (V). Stimuli were interesting, self-contained stories of approximately 24 seconds in duration. A total of 226 whole-brain images were collected during each of four runs. A general linear model (GLM) was used to reveal brain activation (single-voxel uncorrected p < 0.00001) related to each condition as well as to the comparison of AV and V (AV – V).
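The following is a minimal sketch of the GLM contrast logic (AV – V) for a single voxel's time series, using simple boxcar regressors fit by ordinary least squares. The block onsets, the simulated data, and the absence of hemodynamic convolution are simplifying assumptions for illustration only; the TR, run length, and block duration are taken from the Methods above.

```python
# Sketch of a GLM fit and an AV - V contrast for one voxel (simulated data).
import numpy as np

TR = 3.0        # seconds per volume (from Methods)
n_vols = 226    # volumes per run (from Methods)
t = np.arange(n_vols) * TR

def boxcar(onsets, duration=24.0):
    """1 during a stimulus block, 0 elsewhere (blocks ~24 s per Methods)."""
    reg = np.zeros_like(t)
    for onset in onsets:
        reg[(t >= onset) & (t < onset + duration)] = 1.0
    return reg

# Hypothetical block onsets for the A, V, and AV conditions within one run.
X = np.column_stack([
    boxcar([30, 210, 390]),    # A
    boxcar([90, 270, 450]),    # V
    boxcar([150, 330, 510]),   # AV
    np.ones(n_vols),           # constant term
])

# Simulated voxel time series standing in for the real data.
rng = np.random.default_rng(1)
y = X @ np.array([1.0, 0.5, 2.5, 100.0]) + rng.normal(0, 1, n_vols)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares fit
contrast_AV_minus_V = np.array([0, -1, 1, 0])  # weights for AV - V
print("AV - V contrast estimate:", contrast_AV_minus_V @ beta)
```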

Results

The relation among the total brain volumes of activation associated with A, V, and AV was assessed via multiple regression. In this analysis, neither A nor V alone could account for a significant portion of the variance of AV, and A and V together could account for only 8% of the variance of AV, F(1, 9) = 0.561, p = .4728. In a direct examination of the subtractions produced by our GLM time-series analysis, the total volumes of activation for A and (AV – V) were only weakly correlated (r = 0.1).
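As an illustration of these volume-level analyses, the sketch below regresses per-subject AV activation volume on the A and V volumes and correlates A with the (AV – V) subtraction volume. The volumes are hypothetical placeholders generated at random, not the study's measurements; only the sample size (n = 11) is taken from the Methods.

```python
# Sketch of the volume-level analyses: AV ~ A + V regression and r(A, AV - V).
import numpy as np

rng = np.random.default_rng(2)
n_subjects = 11

# Hypothetical total activation volumes (e.g., in voxels) per subject.
vol_A = rng.uniform(500, 2000, n_subjects)
vol_V = rng.uniform(500, 2000, n_subjects)
vol_AV = rng.uniform(500, 3000, n_subjects)
vol_AV_minus_V = rng.uniform(0, 1500, n_subjects)

# Multiple regression: how much variance in AV do A and V explain?
X = np.column_stack([np.ones(n_subjects), vol_A, vol_V])
beta, *_ = np.linalg.lstsq(X, vol_AV, rcond=None)
pred = X @ beta
ss_res = np.sum((vol_AV - pred) ** 2)
ss_tot = np.sum((vol_AV - vol_AV.mean()) ** 2)
print("R^2 for AV ~ A + V:", 1 - ss_res / ss_tot)

# Correlation between the A volume and the (AV - V) subtraction volume.
r = np.corrcoef(vol_A, vol_AV_minus_V)[0, 1]
print("r(A, AV - V):", r)
```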

Conclusions

The cortical response in the AV condition is not the linear addition of the responses to its components A and V. Inspection of the statistical images suggests that the interaction between the visual and auditory information presented together in the AV condition may relate to emergent activation of frontal premotor areas. These areas were not significantly active during either the A or the V condition alone. One possibility is that when the visual stimulus became linguistically relevant (i.e., when lip movements could be used to help decode the speech signal), the task recruited different cortical mechanisms, resulting in activation of a different structural network. Problems surrounding the assumption of a modular subcomponent organization of language complicate the use of subtractive methodology in language processing experiments.