Abstracts of invited speakers
Information integration is among the brain’s most fundamental abilities. In our natural environment, our senses are constantly bombarded with multiple signals. The brain’s challenge is to integrate information from across the senses to form a more reliable percept. This talk explores the idea that language learning (and processing) also relies strongly on information integration, both across and within sensory modalities.
If this is true, it yields the following specific predictions about language acquisition:
- Language learning is inherently multimodal. Language learners should therefore readily integrate and acquire all relevant input (e.g., visual and auditory) that is linked in time and space.
- We are ‘hard-wired’ for multisensory perceptual integration. Information integration may therefore proceed more smoothly for language comprehension (through perceptual input) than for language production (through motor output), and learners will do better at comprehending multiple inputs than at producing multiple outputs.
- The literature on multimodal biases suggests that when the senses deliver conflicting information, vision generally dominates and biases information from the other senses. Input in the visual modality may therefore be dominant compared to other sensory inputs.
Experimental data from sign language acquisition (both British Sign Language and American Sign Language) and from bimodal bilingual acquisition and processing will be presented to address these predictions.
Paula Fikkert (Centre for Language Studies, Radboud University, Nijmegen): Of sights and sounds. The acquisition of phonological representations in spoken Dutch and Sign Language of the Netherlands (NGT)
Words in spoken languages are what signs are in sign languages: they are the central units in human language comprehension and production. We recognize words quickly and reliably by extracting relevant features from the input and matching them with phonological representations of words stored in the brain. To produce words, we use cognitive phonological representations to initiate articulatory routines. Thus, while words are the semantic building blocks of spoken language, they are themselves decomposed into smaller units organized in phonological representations that are crucial for perception and production. Insight into the nature and acquisition of phonological representations is therefore essential for understanding human communication.
In the past decades my research has investigated how hearing Dutch children acquiring spoken language learn these representations and put them to use in language production and perception (see Fikkert 2007 for an overview). This research was based on (a) detailed investigation of large corpora of spontaneous production data (e.g., Fikkert 1994, Levelt 1994), and (b) experimental studies on early sound discrimination, word learning, and word recognition. Based on this research I have argued that phonological representations in the mind need to be distinguished from pre-lexical phonetic representations: not all phonetic details present in the acoustic signal are used in phonological representations, which are more abstract cognitive entities.
Just like spoken language, signed language has phonological representations. Infants acquiring a sign language use visual input (sights) rather than acoustic input (sounds). Moreover, the mode of production (output) also differs, as it is mostly manual, although by no means exclusively so. This raises the question of the extent to which modality differences affect phonological representations.
Analyses of spoken language acquisition suggest that phonological representations are initially underdeveloped and are gradually specified for features from different phonological dimensions: e.g., Place of Articulation (Fikkert & Levelt 2008, Tsuji et al. 2014, 2015), Manner of Articulation (Altvater-Mackensen, van der Feest & Fikkert 2014), and Laryngeal features (Van der Feest 2007). Research on American Sign Language (ASL) (Cheek et al. 2001, Meier 2006, Bonvillian & Siedlecki 1993, 2000), Brazilian Sign Language (Karnopp 2002, Bonvillian et al. 1997), and British Sign Language (Morgan et al. 2007) has established that children make relatively few errors in certain phonological dimensions (location), while producing many errors in others (movement, handshape). Handshape seems to pose the most problems for children. For Sign Language of the Netherlands (NGT), this has not been investigated. In contrast to spoken language acquisition, there is relatively little research into how infants acquiring a sign language build up phonological representations (cf. Brentari 2011, 2012). The central question is therefore how differences in modality affect the cognitive representations that are constructed during acquisition and used in perception and production.
The emergence of sign linguistics research since the 1960s has led to widespread recognition of the linguistic status of sign languages and to the establishment of sign bilingual programs for the deaf worldwide. However, there is an apparent lack of synchrony between developments in sign linguistics research and deaf educational practices. While research is flourishing, the movement to promote the use of sign language in educating deaf children, especially in the deaf school context, has slowed down quite dramatically in recent years. Advances in assistive hearing technology have revived, if not further strengthened, the hope of parents, educators of the deaf, and language pathologists that difficulties in deaf children's communication through the auditory-oral modality may be removed, thus obviating the need for sign language support. The shift in deaf-education philosophy from segregation in deaf schools to integration or inclusion in mainstream education also potentially reduces the size of signing communities, as deaf children can no longer be clustered together to nurture the growth of sign language and to sustain its transmission.
Amid these challenges, the field of research on sign language acquisition by deaf children has recently begun to reorient itself from a monolingual to a bimodal bilingual approach, to build theories of bilingual acquisition and bilingual processing, and to furnish theoretical justifications for the ‘bilingual advantage’ in bringing up deaf children. To respond to such developments, one needs to carefully consider how to build an “acquisition-rich” environment for nurturing bimodal bilingualism among children given the current challenges, and, more importantly, how one may tap into the insights drawn from findings that both deaf children and hearing children born to Deaf parents demonstrate such linguistic capacities.
In this presentation, it is proposed that such a linguistic environment may be created through early bilingual education that supports not ‘deaf isolates’ but groups of deaf students, together with hearing students, taught in class by a deaf signing teacher and a hearing teacher. This approach attempts to free sign language from the socially entrenched but misconceived perception that it is primarily a language for deaf children or for hearing children born to Deaf parents. In fact, it can be a medium of instruction for any students, regardless of their hearing status, given the appropriate linguistic ingredients. Drawing on findings from an experimental deaf education program running in Hong Kong since 2006, we will attempt to provide a linguistic interpretation of various aspects of the program and evaluate to what extent some of these pedagogical variables interact with the development of bimodal bilingualism.