
Visual and auditory input in second-language speech processing

Published online by Cambridge University Press: 10 December 2009

Debra M. Hardison*
Affiliation: Michigan State University, East Lansing, USA (hardiso2@msu.edu)

Extract

The majority of studies in second-language (L2) speech processing have involved unimodal (i.e., auditory) input; however, in many instances, speech communication involves both visual and auditory sources of information. Some researchers have argued that multimodal speech is the primary mode of speech perception (e.g., Rosenblum 2005). Research on auditory-visual (AV) input has been conducted more extensively in the fields of infant speech development (e.g., Meltzoff & Kuhl 1994), adult monolingual processing (e.g., McGurk & MacDonald 1976; see reference in this timeline), and the treatment of the hearing impaired (e.g., Owens & Blazek 1985) than in L2 speech processing (Hardison 2007). In these fields, the earliest visual input was a human face on which lip movements contributed linguistic information. Subsequent research expanded the types of visual sources to include computer-animated faces or talking heads (e.g., Massaro 1998), hand-arm gestures (Gullberg 2006), and various types of electronic visual displays such as those for pitch (Chun, Hardison & Pennington 2008). Recently, neurophysiological research has shed light on the neural processing of language input, providing another direction researchers have begun to explore in L2 processing (Perani & Abutalebi 2005).

Type: Research Timeline
Copyright © Cambridge University Press 2009


References

Chun, D. M., Hardison, D. M. & Pennington, M. C. (2008). Technologies for prosody in context: Past and future of L2 research and practice. In Hansen Edwards, J. G. & Zampini, M. L. (eds.), Phonology and second language acquisition. Amsterdam: John Benjamins, 323–346.
Gullberg, M. (2006). Some reasons for studying gesture and second language acquisition. International Review of Applied Linguistics in Language Teaching 44.2, 103–124.
Hardison, D. M. (2007). The visual element in phonological perception and learning. In Pennington, M. C. (ed.), Phonology in context. Basingstoke: Palgrave Macmillan, 135–158.
Massaro, D. W. (1998). Perceiving talking faces: From speech perception to a behavioral principle. Cambridge, MA: MIT Press.
McGurk, H. & MacDonald, J. (1976). Hearing lips and seeing voices. Nature 264, 746–748.
Meltzoff, A. N. & Kuhl, P. K. (1994). Faces and speech: Intermodal processing of biologically relevant signals in infants and adults. In Lewkowicz, D. J. & Lickliter, R. (eds.), The development of intersensory perception: Comparative perspectives. Hillsdale, NJ: Lawrence Erlbaum, 335–369.
Owens, E. & Blazek, B. (1985). Visemes observed by hearing-impaired and normal-hearing adult viewers. Journal of Speech and Hearing Research 28, 381–393.
Perani, D. & Abutalebi, J. (2005). The neural basis of first and second language processing. Current Opinion in Neurobiology 15, 202–206.
Rosenblum, L. D. (2005). Primacy of multimodal speech perception. In Pisoni, D. B. & Remez, R. E. (eds.), The handbook of speech perception. Malden, MA: Blackwell, 51–78.