UCL Psychology and Language Sciences


Ecological Language (ECOLANG)

ECOLANG is a large ERC-funded project studying language learning and processing in real-world settings.

[Figure: Diagram of the ECOLANG recording set-up]

The human brain has evolved to support communication in complex and dynamic environments. In such environments, language is learnt, and mostly used, in face-to-face contexts in which processing and learning draw on multiple cues: linguistic cues (such as lexical and syntactic information), but also discourse, prosody, the face and the hands (gestures). Yet our understanding of how language is learnt and processed, and of its associated neural circuitry, comes almost exclusively from reductionist approaches in which the multimodal signal is reduced to speech or text. ECOLANG pioneers a new way to study language comprehension and learning using a real-world approach in which language is analysed in its rich face-to-face multimodal environment (i.e., language's ecological niche). ECOLANG studies how the different cues available in face-to-face communication dynamically contribute to processing and learning in adults, children and aphasic patients, in contexts representative of everyday conversation.

The central, and most time-consuming, part of the project is the collection and annotation of a corpus of dyadic communication between two adults, or between an adult and their child (2-4 years old). The participants are simply asked to talk about objects that we provide, some of which are known only to one of them (the caregiver, or one of the adults, who had learnt those new concepts and words ahead of the interaction); moreover, the objects are sometimes present in front of them and sometimes absent (see the Figure for the recording set-up). Analyses of the distribution and co-occurrence of cues in the corpus provide a first snapshot of how these cues are used in naturalistic interactions, crucially depending upon our key manipulations (child vs adult addressee; known vs unknown topic; topic present vs absent). Initial results are reported in Vigliocco et al. (2019).

Next, we assess whether and how these multimodal cues are used in language learning and comprehension using electrophysiological and neuroimaging methods (in collaboration with Jeremy Skipper's LabLab). Initial results are reported in Zhang et al. (2021).

Key References

Murgiano, M., Motamedi, Y., & Vigliocco, G. (2021). Situating language in the real-world: The role of multimodal iconicity and indexicality. Journal of Cognition. Target article and commentaries.

Vigliocco, G., Motamedi, Y., Murgiano, M., Wonnacott, E., Marshall, C. R., Milan Maillo, I., et al. (2019). Onomatopoeias, gestures, actions and words in the input to children: How do caregivers use multimodal cues in their communication to children? In Proceedings of the 41st Annual Conference of the Cognitive Science Society. Montreal, QC.

Zhang, Y., Frassinelli, D., Tuomainen, J., Skipper, J. I., & Vigliocco, G. (2021). More than words: Word predictability, gesture, prosody and mouth movements in natural language comprehension. Proceedings of the Royal Society B.