Decoding the Expressive Intentions in Children's Songs
Author(s): Mayumi Adachi and Sandra E. Trehub
Source: Music Perception: An Interdisciplinary Journal, Vol. 18, No. 2 (Winter, 2000), pp. 213-224
Published by: University of California Press
Stable URL: http://www.jstor.org/stable/40285909



Music Perception © 2000 by The Regents of the University of California. All rights reserved. Winter 2000, Vol. 18, No. 2, 213-224.

Decoding the Expressive Intentions in Children's Songs

MAYUMI ADACHI & SANDRA E. TREHUB University of Toronto

Adults and children were exposed to separate visual and auditory cues from paired renditions of familiar songs by young, untrained singers who attempted to express happiness and sadness. Same-age children and adults decoded the expressive intentions of 8- to 10-year-old singers with comparable accuracy (Experiment 1). For performances by 6- to 7-year-old singers, same-age children were less accurate decoders than were adults (Experiment 2). The younger performers also provided poorer cues to the intended emotion than did the older performers. Moreover, 6- to 7-year-olds were less accurate than 8- to 10-year-olds at decoding the performances of 8- to 10-year-old singers (Experiment 3). The findings indicate that, although young children successfully produce and interpret happy and sad versions of familiar songs, 6- to 7-year-old children are less proficient than are 8- to 10-year-old children and adults.


Received September 29, 1999, accepted May 24, 2000.

Address correspondence to Mayumi Adachi, Faculty of Education and Human Sciences, Yamanashi University, 4-4-37, Takeda, Kofu, Yamanashi, 400-8510, Japan ([email protected]) or to Sandra E. Trehub, Department of Psychology, University of Toronto at Mississauga, Mississauga, Ontario, Canada L5L 1C6 ([email protected]).

ISSN: 0730-7829. Send requests for permission to reprint to Rights and Permissions, University of California Press, 2000 Center St., Ste. 303, Berkeley, CA 94704-1223.


Although much is known about children's understanding of emotional expression (for reviews, see Denham, 1998; Saarni, 1999), most of the research has focused on verbal and facial cues to emotion. The general consensus is that young children are relatively adept at expressing and comprehending basic feeling states such as happiness and sadness. By 4 or 5 years of age, they can even pose a number of emotional expressions (Lewis, Sullivan, & Vasen, 1987). They are much less adept, however, when their own and others' emotional expressions are at odds with inner states, as in attempts to feign happiness after receiving a disappointing gift (Cole, 1986; Davis, 1995). The disappointed recipient in such instances must have adequate control over expressive behaviors as well as knowledge of the impact of particular expressive behaviors on others.

Expressive aspects of music performance have analogous elements of pretense in the sense that performers attempt to project particular feelings whether or not these match their own internal states. Successful communication is achieved when the audience appropriately discerns the performer's expressive intentions. In music, as in other domains, conventions governing emotional expressiveness must be known by performer and audience alike (Gabrielsson & Juslin, 1996; Juslin, 1997a, 1997b; Senju & Ohgushi, 1987). Children, by virtue of their limited experience, might have incomplete knowledge of these expressive conventions.

We know, however, that when young children listen to performances of instrumental and vocal music, they interpret "happy" or "sad" musical meanings in much the same way as adults (Cunningham & Sterling, 1988; Dolgin & Adelson, 1990; Kratus, 1993). Children can also decode happiness and sadness in the expressive body movements of dancers (Boone & Cunningham, 1998), an ability that should facilitate the interpretation of live musical performances. Adachi and Trehub (1998) demonstrated, moreover, that children 4-12 years of age generate highly contrastive renditions of a familiar song when they attempt to convey happiness and sadness to an unfamiliar listener. In such situations, they produce happy renditions that are faster, louder, and at a higher pitch level than are their sad renditions. Children also alter their posture and facial expression across emotional contexts, from upright and smiling during their happy performances to slumping and frowning during their sad performances. Interestingly, children make little use of music-specific devices (e.g., major/minor, staccato/legato), relying instead on expressive devices used in interpersonal communication.

Although the children in Adachi and Trehub's (1998) study generated contrastive performances for their happy and sad renditions, it is unclear whether their performances were effective in portraying the intended emotions. Accordingly, we sought to determine whether the vocal and visual aspects of these performances were interpretable to same-age children and to adults. On the basis of children's ability to interpret the expressive body movements of skilled performers (Boone & Cunningham, 1998) and the posed facial expressions of other children (Profyt & Whissell, 1991), they were expected to succeed in decoding the visual cues that accompany same-age children's performances of familiar songs. It is possible that children would be more accurate than adults in this respect, just as they are superior at decoding the facial expressions of same-age children (Profyt & Whissell, 1991).

Previous research on children's interpretation of expressive musical performances involved professional vocalists or instrumentalists (Cunningham & Sterling, 1988; Dolgin & Adelson, 1990; Kratus, 1993) rather than young, untrained vocalists. Nevertheless, children's use of expressive speech devices in their sung performances (Adachi & Trehub, 1998) and their ability to decode vocal paralanguage (Dimitrovsky, 1964; Morton & Trehub, in press) made it likely that same-age children would successfully distinguish happy from sad renditions of songs.

There are claims that adults more accurately interpret the expressive intentions of expert performers on the basis of visual compared with auditory cues (Davidson, 1993; Ohgushi & Hattori, 1996). On the one hand, the facial expressions and body movements of untrained performers might provide a clearer index of expressive intentions than would vocal cues such as tempo, loudness, and pitch level. On the other hand, unsophisticated child performers might provide exaggerated vocal cues in their performances, as they do when singing to their infant siblings (Trehub, Unyk, & Henderson, 1994). Thus, it was difficult to predict differences in decoding accuracy from visual cues alone compared with auditory cues alone.

A final goal was to compare girls' and boys' portrayal and decoding of emotion in sung performances. Effective portrayal could be indexed by accurate decoding of the intended emotion by children and adults. On the basis of girls' reportedly greater control of emotional expressiveness relative to boys (Cole, 1986; Denham, 1998; Saarni, 1999), we expected more accurate interpretations of girls' sung performances by child as well as adult viewers and listeners. Despite differences between the sexes in the socialization of emotion (Brody & Hall, 1993), girls seem no better at decoding emotion than are boys (Saarni, 1989; Thompson, 1989). Thus, there was no basis for expecting girls' superior decoding of emotion.

Experiment 1

The child singers from Adachi and Trehub (1998) who were of greatest interest for the purposes of the present experiment were 8- to 10-year-olds who had been classified as "ordinary" as opposed to "good" singers. Child singers had been designated "good" if they could provide two fully in-tune versions of a familiar song, one with the words and the other without the words (la la la); the remaining children had been designated "ordinary" singers. The listeners and viewers in the present study were same-age children and a comparison group of adults.

METHOD

Participants

The adults were 30 college students (15 men, 15 women) from 18 to 25 years of age (M = 19 years) who received partial course credit for their participation. There were 30 child participants (15 boys, 15 girls) 8-10 years of age (M = 9 years, 4 months) whose families had volunteered to participate in campus research. Music training (private lessons) averaged 1.33 years for adults and 1.07 years for children.

Apparatus

Testing took place in a quiet room. Visual stimuli were presented by means of a video cassette player (Panasonic ST S1900 N) and a 50-cm color monitor (Panasonic CT-1930 V). Auditory stimuli were presented through digitally referenced headphones (Sony MDR CD550) by means of a Macintosh 8100/100 computer and PsyScope software.

Stimuli

The stimuli consisted of auditory or visual components of the performances of 20 child singers (11 boys, 9 girls) from Adachi and Trehub (1998); nine children were 8 years old, seven were 9 years old, and four were 10 years old (M = 8 years, 10 months). The children had sung "Twinkle, Twinkle, Little Star" or "The ABC Song" (their choice), both of which have the same tune. These children were relatively unskilled singers (deemed "ordinary" singers in Adachi & Trehub, 1998) in the sense that they had failed, in an initial phase, to produce perfectly in-tune versions of a familiar song. Subsequently, they sang two versions of the song (in counterbalanced order), the first to make the experimenter feel happy, and the second to make her feel sad. The happy and sad performances of each child were randomly ordered (with respect to which was first or second) and presented in pairs to the viewing or listening audience. There were two random orders of the videotaped (no sound) pairs. The digital audio pairs were randomly ordered for each listener.

Procedure

All participants were tested individually. Half received the viewing task before the listening task; the other half received the reverse order. Before the viewing task, participants were told that they would see soundless performances of 20 children singing two versions of "Twinkle" or "ABC," and they were to judge which performance appeared to be happier. Before the listening task, participants were told that they would hear 20 performances of children singing two versions of "Twinkle" or "ABC," and they were to judge which performance sounded happier.

RESULTS AND DISCUSSION

A four-way analysis of variance with repeated measures was used to evaluate the effects of presentation medium (auditory or visual), age of decoder (child or adult), sex of decoder, and sex of singer. There was a main effect of presentation medium, F(1, 56) = 85.29, p < .0001, with the proportion of correct judgments being greater for auditory (M = .83, SD = .07) than for visual (M = .73, SD = .09) materials. There was a main effect of sex of singer, F(1, 56) = 16.68, p < .0001, with greater accuracy (proportion of correct judgments) evident for female singers (M = .83, SD = .09) than for male singers (M = .77, SD = .09), and an interaction between presentation medium and sex of singer, F(1, 56) = 19.87, p < .0001 (see Figure 1). Post hoc paired t-tests revealed that the advantage for female singers was restricted to visual presentation, t(59) = -5.44, p < .0001. There were no other main effects or interactions.

Children and adults succeeded in decoding the expressive intentions of 8- to 10-year-old singers, which indicates that young children can convey happiness and sadness in their renditions of a familiar song. However, girls made more effective use of visual expressive cues in their sung performances than did boys, as reflected in the greater ease of decoding girls' expressive intentions from visual information alone. The latter finding is consistent with previous evidence of girls' superior encoding of facial expressions (Zuckerman & Przewuzman, 1979).

Fig. 1. Proportion correct judgments of emotion from auditory and visual cues in the sung performances of 8- to 10-year-old children. Error bars are standard errors.

Experiment 2

The successful transmission and interpretation of emotion by children 8 to 10 years of age provided the impetus for a comparable evaluation of younger children. Accordingly, we sought to determine whether 6- to 7-year-old singers would be as effective as 8- to 10-year-olds in communicating their emotional intentions to same-age and adult audiences.

METHOD

Participants

Adults and children were recruited as in Experiment 1. There were 49 adults (17 men, 32 women) 18-42 years of age (M = 23 years) and 48 children (24 boys, 24 girls) 6 years, 6 months to 7 years, 11 months of age (M = 7 years). Music training averaged 2.8 years for adults and 0.4 years for children.

Apparatus and Stimuli

The apparatus was identical to that of Experiment 1 except for the use of smaller headphones (Koss WM/60) with the younger children. The stimuli consisted of auditory and visual components of the performances of twenty 6- to 7-year-old "ordinary" singers from Adachi and Trehub (1998). These children had been given the choice of singing "Twinkle, Twinkle, Little Star," "The ABC Song," or "Baa Baa Black Sheep," all of which have the same tune. Unlike Experiment 1, in which participants completed auditory and visual decoding tasks, participants in the present experiment completed either the auditory decoding task (25 adults, 24 children) or the visual decoding task (24 adults, 24 children). In all other respects, the procedure was identical to that of Experiment 1.

RESULTS AND DISCUSSION

The auditory decoding score of one 6-year-old boy was excluded because of its extreme value. A four-way analysis of variance with repeated measures - presentation medium, sex of decoder, and age of decoder as between-subjects factors and sex of singer as a within-subject factor - revealed a main effect of presentation medium, F(1, 88) = 16.33, p < .0001, with greater accuracy (proportion correct) on auditory (M = .77, SD = .16) than on visual (M = .66, SD = .10) materials, in line with the findings of Experiment 1. There was a main effect of age of decoder, F(1, 88) = 7.74, p = .007, with adults performing more accurately (M = .76, SD = .11) than children (M = .67, SD = .16), unlike the situation in Experiment 1. There was also an effect of sex of decoder, F(1, 88) = 8.19, p = .005, with female decoders performing better (M = .75, SD = .13) than male decoders (M = .67, SD = .15). There was a significant two-way interaction between presentation medium and sex of singer, F(1, 88) = 9.75, p = .002 (see Figure 2). Post hoc paired t-tests revealed superior decoding of girls' expressive auditory cues (M = .79, SD = .17) relative to those of boys (M = .74, SD = .17), t(47) = -2.36, p = .022, and superior decoding of boys' expressive visual cues (M = .69, SD = .14) relative to those of girls (M = .63, SD = .13), t(47) = 2.33, p = .024.

Fig. 2. Proportion correct judgments of emotion from auditory and visual cues in the sung performances of 6- to 7-year-old children. Error bars are standard errors.

In short, children 6-7 years of age can use auditory and visual expressive devices to transmit happiness and sadness in their performances of familiar songs. However, the superior decoding of boys' visual cues relative to those of girls is at odds with the findings of Experiment 1, precluding unequivocal interpretation of decoding differences between the sexes in either experiment. Similarly, superior decoding by female relative to male audiences also differs from the findings of Experiment 1. Perhaps female decoding superiority becomes evident only when the task is made difficult by the availability of limited cues.

Adults' greater success in decoding the expressive intentions of 6- to 7-year-olds relative to that of same-age children may be attributable, in part, to young singers' less effective use of expressive cues relative to the older singers in Experiment 1. The performance of adult listeners (9 men, 16 women) in the present experiment was compared with that of adults who had received the audio task first in Experiment 1 (7 men, 8 women). Similarly, the performance of adult viewers in the present experiment (8 men, 16 women) was compared with that of adults who had received the video task first in Experiment 1 (8 men, 7 women). A three-way analysis of variance (with sex of singer as a within-subject variable and age of singer and sex of audience as between-subjects variables) revealed no differences in decoding the auditory cues from older and younger children. With respect to the decoding of visual cues, however, there was a significant effect of sex of singer, F(1, 37) = 4.43, p = .042, and a two-way interaction between age of singer and sex of singer, F(1, 37) = 12.05, p = .001. Post hoc t-tests revealed that adults were significantly more accurate on the performances of older girls (M = .81, SD = .12) than on the performances of younger girls (M = .67, SD = .13), t(31.78) = -3.47, p = .002; they were no more accurate, however, on the performances of older boys (M = .67, SD = .13) than on the performances of younger boys (M = .70, SD = .12). There are indications, then, of developmental differences in the efficacy of girls' visual cues to emotion.

The provision of less interpretable cues by 6- to 7-year-olds than by 8- to 10-year-olds, as reflected in adults' decoding accuracy, may have depressed the decoding performance of the 6- to 7-year-olds. It is possible, however, that younger children make less effective use of comparable expressive cues than do older children, an issue that could not be resolved from the present data set. Accordingly, we evaluated 6- to 7-year-olds' decoding of expressive cues from 8- to 10-year-old children in a subsequent experiment.

Experiment 3

The principal purpose of the present experiment was to ascertain whether 6- to 7-year-old children are less effective decoders than are 8- to 10-year-olds. Accordingly, 6- to 7-year-olds were evaluated on the auditory decoding task of Experiment 1, which involved the sung performances of 8- to 10-year-old children.

METHOD

Participants

The participants were 24 children (12 boys, 12 girls) who were 6 years, 7 months to 7 years, 10 months of age (M = 7 years, 2 months). With the exception of a single child who had taken music lessons for 1 year, the children had not experienced formal musical training. A further three children were excluded from the data set because of their inattentiveness throughout the test session.

Stimuli

The auditory materials from Experiment 1, which were derived from the performances of 8- to 10-year-old "ordinary" singers, served as stimuli.

Procedure

The procedure was identical to that used for the auditory decoding tasks in Experiments 1 and 2.

RESULTS AND DISCUSSION

Data from 6- to 7-year-olds were examined together with previous data obtained from 8- to 10-year-olds and adults on the same auditory decoding task (Experiment 1). A three-way analysis of variance with repeated measures, including age and sex of decoder as between-subjects variables and sex of singer as a within-subject variable, revealed a significant effect of age of decoder, F(2, 48) = 6.24, p = .004. Bonferroni tests (α = .05) revealed poorer decoding accuracy among 6- to 7-year-olds (M = .75, SD = .17) than among 8- to 10-year-olds (M = .84, SD = .07) and adults (M = .83, SD = .07), who did not differ from one another (see Figure 3). No other main effects or interactions were significant. It is clear, then, that 6- to 7-year-olds make less effective use of auditory cues when decoding the intended emotions in children's informal songs.

General Discussion

The present findings indicate that young children's expressive renditions of familiar songs are interpretable to same-age children and to adults. Not only do children use distinctive vocal and visual devices when producing "happy" and "sad" versions of familiar songs (Adachi & Trehub, 1998); the resulting performances are readily decoded by 6- to 10-year-old children and adults. Obviously, children have the requisite skill for portraying emotion, vocally and visually, to child and adult audiences. It is clear, moreover, that young children's ability to discern musical performers' expressive intentions does not depend on professional performances such as those used in previous research (Cunningham & Sterling, 1988; Dolgin & Adelson, 1990; Kratus, 1993). Rather, the sung performances of untrained children are sufficient.

Fig. 3. Proportion correct judgments of emotion from auditory cues in the sung performances of 8- to 10-year-old children. Error bars are standard errors.

Adults' and children's decoding accuracy was greater for auditory cues than for visual cues from the performances of 8- to 10-year-olds (Experiment 1) and 6- to 7-year-olds (Experiment 2). This finding contrasts with the counterintuitive claim that adults achieve greater accuracy when decoding visual rather than auditory expressive cues from expert musical performances (Davidson, 1993; Ohgushi & Hattori, 1996). No doubt the performer's repertoire of visual and vocal expressive devices, the nature of the musical materials, and the knowledge of the audience contribute jointly to the relative efficacy of auditory and visual cues.

Adults' greater decoding accuracy on the performances of older relative to younger children implies that 6- to 7-year-old singers provide less differentiated cues to their happy and sad intentions than do 8- to 10-year-old children. Although adults' decoding accuracy did not exceed that of 8- to 10-year-old children on performances from that age group (Experiment 1), adults' accuracy did exceed that of 6- to 7-year-old children on performances from the latter age group (Experiment 2). Comparisons of all three age groups on the auditory materials from 8- to 10-year-olds revealed that 6- to 7-year-olds were less effective decoders than were older children and adults (Experiment 3). These findings concur with Dolgin and Adelson's (1990) report of poorer decoding of emotional meanings in music by 7-year-olds than by 9-year-olds. It is clear, then, that 6- to 7-year-olds have decoding as well as encoding limitations in relation to expressive performances of familiar songs. No doubt, age-related differences would be greater in open-ended tasks (e.g., naming the intended emotion) than in forced-choice tasks such as those in the present study (i.e., selecting the happier of two performances).

With respect to differences between the sexes, the situation was less clear. Overall, there was no indication that girls were more effective at portraying or decoding emotion than were boys. There were suggestions, however, that 8- to 10-year-old girls provided clearer visual expressive cues than did boys (Experiment 1), which is in line with girls' reportedly greater facility in expressing emotion (Brody & Hall, 1993; Cole, 1986; Denham, 1998). Although the performances of younger girls and boys led to comparable levels of decoding accuracy, the girls seemed to provide more differentiated auditory cues and the boys more differentiated visual cues (Experiment 2). Finally, when the available expressive cues were less than optimal, as was the case for the performances of younger children, female audiences seemed to have an advantage over male audiences, a finding that is consistent with greater female sensitivity to emotional expression (Brody & Hall, 1993).

In short, the present findings confirmed that children's happy and sad renditions of familiar songs are interpretable by same-age children and by adults on the basis of visual or vocal expressive cues. Children 8 and older seem to generate more interpretable performances than do those 7 and younger. With respect to performances such as these, decoding accuracy improves with age, reaching adult levels by 8 to 10 years. It is unclear, however, whether these age differences would be maintained if visual and vocal cues were made available to prospective decoders. For expert performances, moreover, adults' greater implicit, if not explicit, knowledge of musical performance conventions would likely lead to greater decoding discrepancies between children and adults.1

1. Funding for this research was provided by the Social Sciences and Humanities Research Council of Canada to Sandra E. Trehub. Portions of this research were presented at the meeting of the Japanese Society for Music Perception and Cognition in November, 1998. We are grateful to Leanne Kahro for assistance in the testing of children and to Marianne Fallon for assistance in computer programming.

References

Adachi, M., & Trehub, S. E. (1998). Children's expression of emotion in song. Psychology of Music, 26, 133-153.

Boone, R. T., & Cunningham, J. G. (1998). Children's decoding of emotion in expressive body movement: The development of cue attunement. Developmental Psychology, 34, 1007-1016.

Brody, L. R., & Hall, J. A. (1993). Gender and emotion. In M. Lewis & J. M. Haviland (Eds.), Handbook of emotions (pp. 447-460). New York: Guilford Press.

Cole, P. M. (1986). Children's spontaneous control of facial expression. Child Development, 57, 1309-1321.

Cunningham, J. G., & Sterling, R. S. (1988). Developmental change in the understanding of affective meaning in music. Motivation and Emotion, 12, 399-413.

Davidson, J. W. (1993). Visual perception of performance manner in the movements of solo musicians. Psychology of Music, 21, 103-113.

Davis, T. (1995). Gender differences in masking negative emotion: Ability or motivation? Developmental Psychology, 31, 660-667.

Denham, S. A. (1998). Emotional development in young children. New York: Guilford Press.

Dimitrovsky, L. (1964). The ability to identify the emotional meaning of vocal expression at successive age levels. In J. Davitz (Ed.), The communication of emotional meaning (pp. 69-86). New York: McGraw-Hill.

Dolgin, K. G., & Adelson, E. H. (1990). Age changes in the ability to interpret affect in sung and instrumentally presented melodies. Psychology of Music, 18, 87-98.

Gabrielsson, A., & Juslin, P. N. (1996). Emotion expression in music performance: Between the performer's intention and the listener's experience. Psychology of Music, 24, 68-91.

Juslin, P. N. (1997a). Emotional communication in music performance: A functionalist perspective and some data. Music Perception, 14, 383-418.

Juslin, P. N. (1997b). Perceived emotional expression in synthesized performances of a short melody: Capturing the listener's judgment policy. Musicae Scientiae, 1, 225-256.

Kratus, J. (1993). A developmental study of children's interpretation of emotion in music. Psychology of Music, 21, 3-19.

Lewis, M., Sullivan, M. W., & Vasen, A. (1987). Making faces: Age and emotion differences in the posing of emotional expressions. Developmental Psychology, 23, 690-697.

Morton, J. B., & Trehub, S. E. (in press). Children's understanding of emotion in speech. Child Development.

Ohgushi, K., & Hattori, M. (1996). Emotional communication in performance of vocal music. In B. Pennycook & E. Costa-Giomi (Eds.), Proceedings of the Fourth International Conference on Music Perception and Cognition (pp. 269-274). Montreal: McGill University.

Profyt, L., & Whissell, C. (1991). Children's understanding of facial expression of emotion: I. Voluntary creation of emotion-faces. Perceptual and Motor Skills, 73, 199-202.

Saarni, C. (1989). Children's understanding of strategic control of emotional expression in social transactions. In C. Saarni & P. L. Harris (Eds.), Children's understanding of emotion (pp. 181-208). New York: Cambridge University Press.

Saarni, C. (1999). The development of emotional competence. New York: Guilford Press.

Senju, M., & Ohgushi, K. (1987). How are the player's ideas conveyed to the audience? Music Perception, 4, 311-324.

Thompson, R. A. (1989). Causal attributions and children's emotional understanding. In C. Saarni & P. Harris (Eds.), Children's understanding of emotion (pp. 117-150). New York: Cambridge University Press.

Trehub, S. E., Unyk, A. M., & Henderson, J. L. (1994). Children's songs to infant siblings: Parallels with speech. Journal of Child Language, 21, 735-744.

Zuckerman, M., & Przewuzman, S. (1979). Decoding and encoding facial expressions in preschool-age children. Environmental Psychology and Nonverbal Behavior, 3, 147-163.