Gesture and Multimodal Development. Ed. by Jean-Marc Colletta and Michèle Guidetti. Amsterdam: John Benjamins, 2012. Pp. xii, 223. ISBN 9789027202581. $135 (Hb).
ABSTRACT
Gesture and Multimodal Development. Ed. by Jean-Marc Colletta and Michèle Guidetti. Amsterdam: John Benjamins, 2012. Pp. xii, 223. ISBN 9789027202581. $135 (Hb).
KEYWORD
gesture, multimodality, cognitive & communicative development, pre-speech gesture, language acquisition, gesture and education
  • 1. Summary

    A growing number of researchers stress that gesture should no longer be treated as a “paralinguistic accessory” to language but rather that, together with language, it forms a single system of communication (McNeill 1985, Iverson & Thelen 1999, McNeill & Duncan 2000, Bates & Dick 2002, Goldin-Meadow 2003, Kendon 2004, Duncan et al. 2007, among others). This volume not only adopts the latter stance but also examines all aspects of communication—gesture, gaze, body movement, facial expression, vocalization, and speech—from a developmental perspective.

    This edited volume compiles nine papers that were initially published in the 2010 special issue of “Gesture” (10:2/3) after being presented at the “Multimodal” 2009 International Conference (Toulouse, France). The unifying theme of the collection is the study of “multimodality and gesture in cognitive and communicative development” (p. 3).

    The papers are arranged thematically. Following the editors’ brief introduction, the first three papers address pre-speech pointing gestures in relation to early language acquisition. The next two papers explore the role of gesture in the development of early linguistic and social interaction. The following two contributions examine maternal and caregiver gestural behavior and its implications for infants’ language development. The last two papers touch on different issues vis-à-vis gesture: one raises a methodological problem, and the other investigates the role of gesture in the teaching and learning of mathematics.

    In the introduction to the volume (pp. 1-6), the editors Jean-Marc Colletta and Michèle Guidetti clarify the focus of the book, explain the significance of the selected contributions to the field of gesture and language acquisition, and enumerate the major reasons for investigating gesture and multimodal development.

    The next paper, by Hélène Cochet and Jacques Vauclair, “Pointing gesture in young children: Hand preference and language development” (pp. 7-26), provides a review of the latest studies on the development of pointing gestures in infants. They tie early language development to the use of deictic gestures (pointing) for intentional and referential communication purposes. Cochet and Vauclair rely on findings from previous research as well as their own to establish a distinction between pointing gestures and manipulative activities based on hemispheric lateralization of hand preferences. First, they reveal a “stronger involvement of the left hemisphere for pointing gestures” (p. 17). This leads Cochet and Vauclair to the conclusion that gesture and speech share an interrelated system that is separate from the system involved in the production of motor activities. Second, they draw attention to the phasic correlation between pointing gestures, manipulative gestures, and the vocabulary spurt during the developmental process. Results drawn from a longitudinal study by Cochet, Jover, and Vauclair (2011) show a correlation between hand preferences for imperative pointing and for manipulative activities prior to the vocabulary spurt phase. This correlation weakens once the vocabulary spurt takes place. Cochet and Vauclair conclude the paper by emphasizing the importance of taking the several dimensions of pointing gestures into consideration in order to determine the “multifaceted interconnection” between gesture, language, and action.
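
    The hemispheric argument rests on asymmetry scores computed over hand-use counts. The sketch below shows how a handedness index of the kind standard in this literature, (right − left) / (right + left), can be computed per behaviour type; the tallies and field names are illustrative assumptions, not figures from Cochet and Vauclair’s data.

```python
def handedness_index(right: int, left: int) -> float:
    """Laterality score in [-1, 1]: positive = right-hand (left-hemisphere)
    bias, negative = left-hand bias, 0 = no preference."""
    total = right + left
    if total == 0:
        raise ValueError("no observations for this behaviour")
    return (right - left) / total

# Hypothetical per-child tallies of hand use, split by behaviour type.
observations = {
    "pointing":     {"right": 14, "left": 4},
    "manipulation": {"right": 9, "left": 7},
}

for behaviour, counts in observations.items():
    print(f"{behaviour}: HI = {handedness_index(**counts):+.2f}")
# pointing: HI = +0.56, manipulation: HI = +0.12 -- a stronger right-hand
# bias for pointing, the pattern the authors link to left-hemisphere control.
```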

    In “Support or competition? Dynamic development of the relationship between manual pointing and symbolic gestures (also known as ‘baby signs’) from 6 to 18 months of age” (pp. 27-48), Claire D. Vallotton notes that no previous study had examined how manual pointing and symbolic gestures affect each other’s development. Vallotton draws her findings from data collected over a period of eight months, during which she observed and video-recorded the gesturing behavior of 10 hearing children (aged between 6 and 18 months). The data were collected during free play and snack time in the Infant Classroom of the UC Davis Center for Child and Family Studies. Vallotton’s study is based on Dynamic Skills Theory (DST; Fischer & Bidell 1998), which accounts for the dynamic relation between skills during the developmental process. In other terms, DST posits a hierarchical relation between skills within given domains, where a newly emerging skill may either support or eliminate other skills: “This dynamic interplay may result in spurts of growth in one skill concurrently with regression in a related skill” (p. 29). Vallotton accordingly tests hypotheses derived from this framework (p. 33).

    The results reported in this paper support DST. Vallotton shows the following. First, early pointing predicted earlier production of symbolic gestures and potentially earlier syntactic and lexical development. Second, symbolic gestures suppress pointing while acting like words. Third, the emergence of words suppresses symbolic gestures, which also results in the resurgence of pointing gestures; the latter are reintegrated into the multimodal communication system. Despite the significant results, Vallotton draws the readers’ attention to a crucial limitation of the study: the lack of oral language data, which she explains earlier in the paper in terms of technical deficiencies. She foresees, however, that this limitation could be overcome if high-powered audiovisual equipment, such as long-range focused microphones and smaller wireless microphones, becomes more widely available.

    Aliyah Morgenstern, Stéphanie Caët, Marie Collombel-Leroy, Fanny Limousin, and Marion Blondel, in their paper “From gesture to sign and from gesture to word: Pointing in deaf and hearing children” (pp. 49-78), report on a longitudinal study of the continuity between gesture and sign on the one hand, and gesture and words on the other, from a bilingual bimodal perspective. They examine data collected from three children (aged between 8 and 24 months): one hearing monolingual French-speaking child, one deaf signing child (French Sign Language, FSL), and one hearing bilingual child (spoken French and FSL). Their hypothesis, which was indeed validated by the results, is that the modality of input would affect the onset, quantity, and quality of pointing gestures. They predict a correlation between pointing gesture frequency and language acquisition. For the monolingual child, pointing is produced at a greater rate at early stages, when it is combined with early single words; the rate then decreases as more verbal communication develops. For the deaf and bilingual children, the same combination of pointing and sign is observed at early stages. Yet, as more complex FSL develops, the rate of pointing gestures keeps increasing (with a higher rate reported for the deaf child). The authors also predict a connection between pointing gestures and more complex linguistic structure in both spoken and sign language. The findings reported in this study are consistent in interesting ways with Clark (1978), McNeill (1992), Cormier et al. (1998), and Hoiting & Slobin (2007). However, this study is of additional value since it offers a parallel analysis of data from both hearing and signing modalities.

    In their paper “How the hands control attention during early word learning” (pp. 79-98), Nancy de Villiers Rader and Patricia Zukow-Goldring report findings from two eye-tracking experiments conducted with 32 infants (aged between 9 and 14.7 months). Both experiments were designed to test the role gesture plays in directing the infant’s attention during early word learning. Experiment 1 compared dynamic synchronous gestures (DSG) with static/held gestures (SG); experiment 2 compared dynamic synchronous gestures (DSG) with dynamic asynchronous gestures (DAG). In both experiments the infants were presented with a short video clip in which a speaker introduces a novel object. In the DSG condition the speaker used a “show gesture”: she held an object at the center in front of her upper body and said “look at”/“see it”. Then she either rotated the object to the left (around the vertical axis of her arm) and back to the center, or loomed the object toward the infant and then retracted it. The two phases of each action were synchronized with the pronunciation of the first and second syllables of the object’s name (e.g. “gepi”), respectively. For example, the speaker loomed the object at the onset of the first syllable of the word and then retracted it while pronouncing the second syllable. In the SG condition the speaker held the object static at a midway position “between the body and the end of a looming action” (p. 85). In the DAG condition the dynamic gesture and speech were not synchronized. The hypothesis of this study is that the synchrony of speech and dynamic gesture would result in more successful word learning. Results from both experiments support the hypothesis: accurate synchrony between speech and gesture (or between auditory and visual processing) helps infants build word-object pairings.
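
    As an illustration of the synchrony manipulation, the sketch below shows one way the alignment between syllable onsets and gesture-phase onsets could be operationalized as a timestamp check; the event names, timestamps, and tolerance window are assumptions made for the example and are not taken from Rader and Zukow-Goldring’s stimuli.

```python
# Hypothetical timestamps (in seconds) for one trial; the syllable and
# gesture-phase labels, times, and tolerance window are illustrative.
syllable_onsets = {"ge": 0.00, "pi": 0.42}
gesture_phase_onsets = {"loom": 0.03, "retract": 0.45}

SYNC_TOLERANCE = 0.10  # assumed alignment window in seconds


def is_synchronous(speech: dict, gesture: dict, tol: float) -> bool:
    """A trial counts as synchronous if each gesture phase begins within
    `tol` seconds of the matching syllable onset (in temporal order)."""
    pairs = zip(sorted(speech.values()), sorted(gesture.values()))
    return all(abs(s - g) <= tol for s, g in pairs)


condition = ("DSG" if is_synchronous(syllable_onsets, gesture_phase_onsets,
                                     SYNC_TOLERANCE) else "DAG")
print(condition)  # -> DSG
```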

    By looking into infants’ body movement behavior as a reaction to language versus music stimuli, in “Infant movement as a window into language processing” (pp. 99-127), Laura Fais, Julia Leibowich, Ladan Hamadani, and Lana Ohira attempt to demonstrate the existence of “differential, systematic, cross-modal responses” based on infants’ perception of the social underpinning of language. They justify the choice of music versus language on the basis of McMullen and Saffran’s (2004) theory that language and music are processed in two separate cortical regions. Infants, the reasoning goes, are able to understand that only language is social and are thus expected to react differently to the two types of stimuli; on this basis, Fais et al. predict differential movement responses to language and to music (p. 100).

    Twenty infants (10 in each group), aged roughly between five and six months, participated in the two experiments. Experiment 1 studied speech discrimination and experiment 2 studied perception of melody. Higher rates of voiced vocalization, gaze shifting, torso movement, and lateral head movement were reported in experiment 1. Fais et al. conclude that infants’ movement reactions reveal their differential understanding of the stimuli they are exposed to. What makes the paper particularly valuable and enriching is the clear and useful presentation of the experimental design and coding system, in addition to the introduction of movement measurement as a dependent variable in language acquisition studies.
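
    To make concrete what movement measurement as a dependent variable might involve, here is a minimal sketch that tallies hypothetical coded events into per-condition rates; the movement codes, trial durations, and counts are illustrative assumptions and do not reproduce Fais et al.’s coding system.

```python
from collections import Counter

# Hypothetical coded movement events for one infant; the codes and
# condition labels stand in for the authors' actual coding system.
coded_events = [
    ("language", "voiced_vocalization"), ("language", "gaze_shift"),
    ("language", "torso_movement"),      ("language", "lateral_head_movement"),
    ("music",    "gaze_shift"),          ("music",    "torso_movement"),
]
trial_durations = {"language": 60.0, "music": 60.0}  # seconds, assumed

# Movement rate (coded events per minute) as the dependent variable.
counts = Counter(condition for condition, _ in coded_events)
for condition, n in counts.items():
    rate = n / (trial_durations[condition] / 60.0)
    print(f"{condition}: {rate:.1f} events/min")
```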

    In the next paper, “Children’s lexical skills and task demands affect gestural behavior in mothers of late-talking children and children with typical language development” (pp. 129-155), Angela Grimminger, Katharina J. Rohlfing, and Prisca Stenneken study maternal adaptive gestural behavior. They compare gestures produced by 17 German-speaking mothers in interaction with their children during task-oriented dialogues. The children were divided into a group with typical language development (TLD) and a group of late talkers (LT). Grimminger et al. predict that mothers’ gestural behavior will vary both with the children’s lexical skills and with task demands (p. 130).

    Findings from this study tend to support both predictions. Grimminger et al. conclude that gestural motherese is calibrated both to children’s productive lexical abilities (developmental level) and to task demands (level of difficulty). Mothers, therefore, adjust their communicative behavior in accordance with the children’s learning process.

    Daniel Puccini, Mireille Hassemer, Dorothé Salomo, and Ulf Liszkowski, in “The type of shared activity shapes caregiver and infant communication” (pp. 157-174), adopt a multimodal usage-based approach to language acquisition. Their primary focus is caregivers’ and infants’ verbal and non-verbal interaction. They examine language and gesture in two different semi-natural communication settings: a free-play context based on object manipulation and action, and a display context based on exploring objects around the room. The study aims to determine how these activity types shape caregivers’ and infants’ communication (p. 160).

    Based on their findings, the authors maintain that the type of shared activity shapes caregiver and infant communication; more specifically, it affects caregivers’ language use and both caregivers’ and infants’ gesture use. For example, in the “Context of Action”, i.e. when the activity allows for manipulation of objects, proximal deictic gestures are used, whereas in the “Context of Regard”, where the activity involves looking at objects, pointing gestures are used. The second finding concerns the systematic combination of speech with certain deictic gestures. This pattern points to a multimodal scaffolding process made available to infants by caregivers during language acquisition. Puccini et al. conclude that their research lends support to a socio-pragmatic view of language acquisition in which non-linguistic gestures and activity types shape human communication.

    “Transcribing and annotating multimodality: How deaf children’s productions call into question the analytical tools” (pp. 175-197), by Agnès Millet and Isabelle Estève, raises methodological issues related to the annotation and transcription of data from deaf children (aged 6 to 12 years). Deaf children’s language production is characterized by the combination of bilingualism and bimodality (Gillot 1998), which can be problematic for current annotation tools. The authors argue that treating bimodality as a parallel system may result in an annotation of two independent productions. In a typical speech context this would be a reasonable treatment of the data, but in the deaf context such an annotation system fails to capture the essence of deaf communication as an integrated bimodal system, in which the two modalities are not subordinated to each other but are systematized around each other. Millet and Estève therefore propose an annotation grid, created in ELAN, in which each modality (verbal and non-verbal, vocal and gestural) is treated as an integral part of a “joint” production system. This guarantees a treatment of the data as “true integrated language productions as opposed to parallel use of modalities” (p. 193). This paper is of utmost importance since it provides a solution to a serious methodological issue in data annotation and transcription.
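
    As a rough illustration of the proposal, the sketch below represents a joint-production annotation grid of this general kind as a simple data structure queried over a shared timeline; the tier names, codes, and timestamps are illustrative assumptions rather than Millet and Estève’s actual ELAN grid.

```python
# Each tier maps to (start_s, end_s, value) spans on a shared timeline;
# tier names, codes, and times are illustrative, not the authors' grid.
annotation_grid = {
    "vocal_verbal":       [(0.0, 0.8, "ball")],         # spoken word
    "vocal_nonverbal":    [(0.8, 1.1, "vocalization")],
    "gestural_verbal":    [(0.0, 0.9, "SIGN:BALL")],     # lexical sign
    "gestural_nonverbal": [(0.9, 1.6, "pointing")],
}


def joint_production(grid: dict, t: float) -> dict:
    """Return what every modality is doing at time t, so the utterance is
    read as one integrated bimodal production rather than parallel streams."""
    return {
        tier: value
        for tier, spans in grid.items()
        for start, end, value in spans
        if start <= t < end
    }


print(joint_production(annotation_grid, 0.5))
# -> {'vocal_verbal': 'ball', 'gestural_verbal': 'SIGN:BALL'}
```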

    The last contribution in this volume is “Mathematical learning and gesture: Character viewpoint and observer viewpoint in students’ gestured graphs of functions” (pp. 199-220). In this paper Susan Gerofsky studies the use of embodiment for understanding abstract, disembodied notions in mathematics. This is ongoing research meant to develop a further understanding of the role gesture plays in accessing and grasping the abstract reality of mathematical graphs. It also aims at bringing closer the fields of cognitive neuroscience, gesture studies, and mathematics education, which may prove useful both to deepen the understanding of gesture and to provide new insight into curriculum development for mathematics. Gerofsky adopts McNeill’s (1992) C-VPT (Character Viewpoint) and O-VPT (Observer Viewpoint) in her study. McNeill differentiates between two viewpoints that can be observed in gesture production. C-VPT is observed when subjects use gesture to enact an action as if they were the character performing it; C-VPT gestures typically involve the whole body (Cassell and McNeill 1991). O-VPT, on the other hand, is seen when subjects use gesture to depict an action as if they were detached from it, viewing it from a distance; O-VPT gestures typically involve only the hands (Cassell and McNeill 1991). Gerofsky proposes that building on the distinction between gestural viewpoints offers a richer theoretical grounding for the study. By exposing the type of relationship the subjects hold with the topic at hand, gestural viewpoints allow an effective diagnosis of conceptual difficulties in mathematics learning. They also make possible a deeper and more refined understanding and gauging of learners’ engagement with and comprehension of abstract mathematical concepts. Gesture, once again, is shown to play a crucial role in activating and connecting multisensory, multimodal systems during the mathematics learning process. Gerofsky concludes that using gesture as an integrated pedagogy (explicit and embodied teaching of mathematical concepts) offers learners a neurological advantage for learning abstract notions through embodied sensorimotor experience. This is a paper that may be of interest to scholars from interdisciplinary fields touching on cognitive linguistics, embodiment, mathematics, and pedagogy.
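
    As a toy illustration of the viewpoint distinction, the sketch below assigns a C-VPT or O-VPT label from the articulators an annotator records, following the whole-body versus hands-only tendency cited above; the rule and the example annotations are assumptions for illustration, not Gerofsky’s or McNeill’s coding procedure.

```python
def gesture_viewpoint(articulators: set) -> str:
    """Coarse heuristic based on the tendency cited above (Cassell &
    McNeill 1991): C-VPT gestures typically recruit the whole body,
    O-VPT gestures typically only the hands. Real coding relies on
    trained annotators, not a rule this simple."""
    body_parts = {"torso", "head", "arms", "legs"}
    if articulators & body_parts:
        return "C-VPT"  # enacting the action as the character
    return "O-VPT"      # depicting the action from a distance

# Hypothetical annotations of two students gesturing the graph of a function.
print(gesture_viewpoint({"hands", "arms", "torso"}))  # -> C-VPT
print(gesture_viewpoint({"hands"}))                   # -> O-VPT
```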

  • 2. Evaluation

    Studies on, and related to, gesture are growing in number, yet this does not diminish the value of this collection. The volume not only assembles a genuine community of scholars and experts in gesture studies but also deals with crucial issues in gesture and multimodal development, covering multiple aspects of gesture from different perspectives. It focuses on different types of gesture: first, it addresses the relationship between pointing gestures, symbolic gestures, and speech from a developmental perspective; then it studies hand gesture, body movement, gaze, head movement, vocalization, and speech as a multimodal system. It also looks at gesture among different populations. There is a focus on gesture as produced by infants at different stages of the developmental continuum (pre-speech infants, early-speech infants, infants reaching the vocabulary spurt stage, and infants at a more complex linguistic stage). There is also a focus on gesture as produced by parents (mainly mothers), caregivers, and teachers, as well as on gesture produced by typically developing children, deaf children, and bilingual children (spoken/sign languages). Additionally, the volume not only includes studies on the effect of gesture on language (and vice versa) but also a study on the effect of different types of gesture on each other. From a methodological perspective, the studies contain useful information about a variety of experimental designs and techniques as well as annotation and transcription grids. The literature review in each paper provides readers with a comprehensive overview of current and older theories. This volume is a substantial contribution to the existing literature on gesture studies: it groups carefully selected studies that together represent a reference resource for gesture and multimodal development.

    The papers in this volume provide rich content that will be useful both to readers who are new to gesture studies and to researchers already involved in the field. However, with the exception of Fais et al.’s paper, which includes a separate section on implications and future work, most of the papers barely mention, if at all, suggestions for future or follow-up work.

    The editors have done good work in organizing this volume. Yet the last paper does not fit well in the overall collection, though this by no means reduces the value of the study itself. In their introductory chapter, Guidetti and Colletta mention the study of learning, rather than language development processes, as an issue of fundamental interest. While I agree with that statement, the population involved in this study (8th and 11th grade students, aged between 13 and 16) leaves a huge gap in the age continuum of the populations covered in this volume (not counting mothers and caregivers). Almost all of the other studies dealt with infants ranging between 4 months and 5 years old, and all of them dealt with acquisition rather than learning. Had the editors included an additional study dealing with learning and involving a population of intermediate age, the contributions would have been better balanced.

    Despite these minor remarks, I find this volume a valuable contribution not only to the area of gesture and multimodal development but also to language acquisition and learning and to cognitive linguistics. In terms of audience, it will prove extremely useful to a wide readership. The clarity of the content and the lucid style of the papers make it accessible to students (particularly graduate students interested in gesture and multimodality). The versatile range of topics and theoretical approaches, as well as the detailed technical and methodological information regarding experimental design, data annotation, and transcription (especially Millet and Estève’s paper), will be of interest to scholars and postgraduate students conducting research on gesture and multimodal development.

REFERENCES
  • 1. Bates E., Dick F. 2002 Language, gesture and the developing brain. [Developmental Psychobiology] Vol.40 P.293-310
  • 2. Cassell J., McNeill D. 1991 Gesture and the poetics of prose. [Poetics Today] Vol.12 P.375-404
  • 3. Clark E.V. 1978 From gesture to word: On the natural history of deixis in language acquisition. In J.S. Bruner & A. Garton (Eds.), Human growth and development: Wolfson College lectures 1976 P.85-120
  • 4. Cochet H., Jover M., Vauclair J. 2011 Hand preference for pointing gestures and bimanual manipulation around the vocabulary spurt period. [Journal of Experimental Child Psychology] Vol.110 P.393-407
  • 5. Cormier K., Mauk C., Repp A. 1998 Manual babbling in deaf and hearing infants: A longitudinal study. In E.V. Clark (Ed.), Proceedings of the Twenty-ninth Annual Child Language Research Forum P.55-61
  • 6. Cross T.G. 1977 Mothers’ speech adjustments: The contribution of selected child listener variables. In C.E. Snow & C.A. Ferguson (Eds.), Talking to children: Language input and acquisition P.151-188
  • 7. Duncan S.D., Cassell J., Levy E.T. (Eds.) 2007 Gesture and the dynamic dimension of language: Essays in honor of David McNeill.
  • 8. Fischer K.W., Bidell T.R. 1998 Dynamic development of psychological structures in action and thought. In W. Damon (Ed.), Handbook of child psychology (5th ed.), Vol.1 P.467-561
  • 9. Gillot D. 1998 Le droit des sourds: 115 propositions. Parliamentary report.
  • 10. Goldin-Meadow S. 2003 Hearing gesture: How our hands help us think.
  • 11. Hoiting N., Slobin D. 2007 From gestures to signs in the acquisition of sign language. In S.D. Duncan, J. Cassell, & E.T. Levy (Eds.), Gesture and the dynamic dimension of language: Essays in honor of David McNeill P.51-65
  • 12. Iverson J.M., Thelen E. 1999 Hand, mouth and brain: The dynamic emergence of speech and gesture. [Journal of Consciousness Studies] Vol.6 P.19-40
  • 13. Kendon A. 2004 Gesture: Visible action as utterance.
  • 14. McMullen E., Saffran J.R. 2004 Music and language: A developmental comparison. [Music Perception] Vol.21 P.289-311
  • 15. McNeill D. 1985 So you think gestures are nonverbal? [Psychological Review] Vol.92 P.350-371
  • 16. McNeill D. 1992 Hand and mind.
  • 17. McNeill D., Duncan S. 2000 Growth points in thinking-for-speaking. In D. McNeill (Ed.), Language and gesture.
  • 18. Pine J.M. 1994 The language of primary caregivers. In C. Gallaway & B.J. Richards (Eds.), Input and interaction in language acquisition P.15-37