What if the study of language had started from the investigation of signed, rather than spoken, languages?


EPS Workshop: January 7th and 8th 2012

Psychology UCL






Our understanding of the cognitive and neural underpinnings of language learning and processing has been strongly shaped by the fact that empirical research and theoretical development have been based almost exclusively on spoken language. Since the 1970s, researchers have recognised the importance of investigating signed languages in order to determine which aspects of language can be considered universal and which ought instead to be regarded as modality-specific. However, the study of sign languages has been driven in large part by theoretical ideas developed for spoken languages. As a result, the general approach has been to assess when sign languages behave like spoken languages (supporting universals) and when they do not (supporting modality-specific features of language).

This workshop sets out to challenge this approach by asking whether the traditional theoretical ideas about language developed so far would, in fact, have played a central role in our thinking if we had started the investigation of language from visual, rather than acoustic, systems. In order to make this very broad issue tractable, we plan to focus on two separate but related areas, both of which have begun to receive attention.

(1) Iconicity. It is a central tenet of language studies that meaning and linguistic form are arbitrarily related, and theories of language processing remain dominated by the idea of arbitrary form-meaning relationships. Non-arbitrary mappings, in contrast, are often dismissed as coming only from narrowly constrained domains such as onomatopoeia and baby-talk, and are not considered representative of language more generally. Yet in sign languages (which take advantage of the possibility of mapping visual forms onto hand, mouth and body shapes) iconicity is the norm rather than the exception, and numerous non-Indo-European spoken languages include wide repertoires of iconic mappings, variously described as mimetic, ideophonic or sound-symbolic. Such words go far beyond the acoustic domain: their forms can also evoke domains of sensory-motor and emotional experience via systematic meaning-to-sound correspondences. Only very recently have researchers begun to explore the processing consequences (e.g., Thompson, Vinson & Vigliocco for signed languages; Nygaard et al. for spoken languages) and developmental consequences (e.g., Volterra for signed languages; Imai et al. for spoken languages) of these iconic mappings, leading to hypotheses in which both iconicity and arbitrariness are taken to be fundamental properties of language (possibly at both phylogenetic and ontogenetic levels; Monaghan et al.; Perniss et al.).

(2) Language as a multi-channel phenomenon. A second general consequence of focusing our attention on spoken languages has been a strong tendency to conceive of language as something delivered and received through only one modality (produced by the voice and perceived by the ears). Thus, decades of psycholinguistic research have used acoustic presentation (or visual presentation of written words), thereby isolating language from other aspects of communication, such as the facial and gestural information that so often accompanies speech in communicative settings. Such visual information has traditionally been considered secondary (if not unnecessary) to the speech signal. Sign languages, by contrast, require face-to-face communication and take advantage of multiple sources of information. Although the importance of some visual information (both face and hand gestures) in the processing of spoken language has long been recognised, the implications for theories of language development, processing, and evolution have scarcely been explored, with a few exceptions (Ozyurek; Kita; Skipper). This perspective highlights novel and important features of language processing and its neural underpinnings, showing how matches and mismatches in the information carried by different, concomitant channels (vocal or manual, but also visual information on the face, mouth and body) are taken into account in processing. Moreover, it also suggests a far greater degree of iconicity in spoken languages than has previously been acknowledged (e.g., hand gestures time-locked to speech can depict important aspects of the referents talked about), bringing spoken languages closer to what is observed in signed languages.


Organising Committee

  • Gabriella Vigliocco
  • Pamela Perniss
  • Robin Thompson
  • David Vinson

The workshop is sponsored by the Experimental Psychology Society (EPS), with further support from the Deafness, Cognition and Language (DCAL) Research Centre and the Cognitive, Perceptual and Brain Sciences (CPB) Research Department at University College London.

For enquiries, please contact: Antonietta Esposito (a.esposito@ucl.ac.uk)

Page last modified on 26 Sep 2011 15:05 by Carolyne S Megan