

Psychology of Music 1–17
© The Author(s) 2015
Reprints and permissions: sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/0305735615589214
pom.sagepub.com

Perception of basic emotions in music: Culture-specific or multicultural?

Heike Argstatter

Abstract

The perception of basic emotions such as happy/sad seems to be a human invariant and as such detached from musical experience. On the other hand, there is evidence for cultural specificity: recognition of emotional cues is enhanced if the stimuli and the participants stem from the same culture. A cross-cultural study investigated the following research questions: (1) How are six basic universal emotions (happiness, sadness, fear, disgust, anger, surprise) perceivable in music unknown to listeners with different cultural backgrounds?; and (2) Which particular aspects of musical emotions show similarities and differences across cultural boundaries? In a cross-cultural study, 18 musical segments, representing six basic emotions (happiness, sadness, fear, disgust, anger, surprise) were presented to subjects from Western Europe (Germany and Norway) and Asia (South Korea and Indonesia). Results give evidence for a pan-cultural emotional sentience in music. However, there were distinct cultural, emotion and item-specific differences in emotion recognition. The results are qualified by the outcome measurement procedure since emotional category labels are language-based and reinforce cultural diversity.

Keywords

cross-cultural, culture, emotion, perception, representations, sociocultural

Deutsches Zentrum für Musiktherapieforschung (Viktor Dulger Institut) (German Center for Music Therapy Research) DZM e.V., Germany

Corresponding author:
Heike Argstatter, Deutsches Zentrum für Musiktherapieforschung (Viktor Dulger Institut) DZM e.V., Maaßstr. 32/1, 69123 Heidelberg, Germany.
Email: [email protected]

Emotion perception is “the ability to detect and decipher emotions in faces, pictures, voices, and cultural artifacts” (such as musical pieces) (Scherer & Scherer, 2011). Human universals, that is, cross-cultural similarities, have been demonstrated for several diverse aspects, such as natural language (Hupka, Lenton, & Hutchinson, 1999), personality (Goldberg, 1980) and introspection (subjective well-being) (Diener & Diener, 1996). In the realm of emotions, a pan-cultural emotional lexicon has been found, meaning that the conceptual organization of emotion terms seems to be universal rather than specific to culture or language (Russell, 1983). Similar findings have been reported for the appraisal of facial (Ekman, 1972) and vocal (Scherer, Banse, & Wallbott, 2001) emotion expression.

Several investigations have been conducted in order to measure the subjective perception of musically encoded emotions (Scherer, 2004). Depending on the test paradigm, the accuracy of emotion detection in musical pieces is comparable to facial or verbal emotional detection (Fritz et al., 2009; Gabrielsson & Juslin, 1996; Juslin, 1997; Laukka, Eerola, Thingujam, Yamasaki, & Beller, 2013; Sundberg, 1993).

An important moderating variable influencing the ability to detect musically encoded emotions seems to be the musical aptitude of the participants; more elaborate musical expertise leads to more sophisticated emotional differentiation (Bigand, Vieillard, Madurell, Marozeau, & Dacquet, 2005).

Nevertheless, there exists a universal sensitivity for basic emotions in music since “consonance and permanent sensory dissonance universally influence the perceived pleasantness of music” (Fritz et al., 2009). Investigations with participants from both Western (Canada, Sweden, Germany) and non-Western (Japan, India, Cameroon) countries suggest a cross-cultural sensitivity to unfamiliar musically expressed emotions (Balkwill & Thompson, 1999; Balkwill, Thompson, & Matsunaga, 2004; Fritz et al., 2009; Laukka et al., 2013). Basic emotions such as happy/sad, especially, seem to be human invariants and as such detached from musical experience (Krumhansl, 1997).

The decoding of acoustically encoded information is essential for successful prosody appreciation and as such at the base of the ontogeny of communication. The prosodic qualities of speech are musical features, leading the way from pre-verbal interaction to sensible speech. Pre-verbal communication between baby and parents is predominantly aimed at emotional regulation. Voice quality is considered to be the most informative channel for affective expression in early infancy (Trehub, 2001). Cross-cultural investigations confirm that “motherese”, that is, infant-directed speech, has universal prosodic features (Saint-Georges et al., 2013). And even in mature speech, vocal cues indicating positive and negative emotional sound characteristics have been found on a phoneme level; that is to say, words with a positive connotation sound different from words with negative connotations (Nastase, Sokolova, & Shirabad, 2007).

On the other hand, there is evidence for cultural specificity of emotion recognition. An experiment on music recognition (Demorest, Morrison, Jungbluth, & Beken, 2008) found empirical evidence for the effect of enculturation on music cognition: when comparing two groups from different cultures, both groups were better at remembering music of their own culture. A cross-cultural study investigated the accuracy of decoding vocal expressions in nonsense sentences made up from Western European syllables presented to participants from Germany, other Western European countries and Indonesia (Scherer et al., 2001). All groups achieved recognition rates above chance level, with a similar overall performance among all Western European countries. The overall recognition rate of the Indonesian sample was lowest compared to all Western European samples.

The cue-redundancy model of emotional communication (Balkwill & Thompson, 1999) explains the perception of emotion in music by an interplay of music-inherent cues and culture-specific cues. The music-inherent cues have been explained as psychophysical cues (Balkwill & Thompson, 1999), auditory features (Koelsch, 2011), or “indexical sign qualities” (Fritz et al., 2013) common to all tonal systems (e.g., tempo, rhythm, complexity or timbre). Culture-specific cues are culturally determined conventions (e.g., scales, harmonic relationships) that listeners have acquired during their enculturation. Some emotions like happiness/joy or sadness seem to be more salient due to distinct musical dimensions such as tempo, rhythm, complexity or timbre, while other emotional states (such as anger or surprise) seem to be more complex and cannot equally well be explained by simple musical patterns (for an overview, see Eerola & Vuoskoski, 2013 and Hunter & Schellenberg, 2010).

According to the dock-in model (Fritz, 2013), there is a distinct connection between a set of underlying musical universals and perceptual universals. Listeners with a certain cultural background will be able to decode a certain amount of information from the “central hub of universals”. However, the further away a person is from another culture’s dock, the less they are able to decode the culture-specific cues, possibly because iconic meaning in music, while present, is easily overwritten by cultural associations.

Rationale of the present study

In order to explore the possibility of basic emotions being recognized in acoustic stimuli unknown to the participants, the German Centre of Music Therapy Research developed a test of emotion perception in music (Busch et al., 2003). A pilot study demonstrated that 18 musical segments representing six emotions (happiness, anger, disgust, surprise, sadness, and fear) could be reliably allocated to the intended emotional categories. There was no significant difference in classification accuracy between music therapy students and controls without musical training. In a further study, these results were replicated with a larger number of participants consisting of Norwegian students (Mohn, Argstatter, & Wilker, 2011).

The rationale of the current study was to extend the samples to Asian countries in order to explore both the universality and cultural variations of the ability to decode musically encoded basic emotions.

Due to the exploratory nature of this study, no specific hypotheses were formulated. The following research questions were asked: (1) Are emotions universally perceivable in music unknown to listeners with a different cultural background? Western music is ubiquitous in large parts of the world, hence it is possible that cross-cultural differences in affect recognition are marginal. If, however, individual enculturation prevails, cultural proximity should lead to similar emotional classification results, resulting in an in-group advantage; that is, participants from Western countries should outperform participants from Asian countries. (2) Which particular aspects of musical emotions show similarities and differences across cultural boundaries? Since some emotions seem to be musically more salient than others, we were particularly interested in the cross-cultural recognition pattern: which musical examples would be reliably identified as the intended emotion, and which examples would be mistaken for a false emotion category.

Methods

Subjects and procedure

The sample consisted of two groups from Western Europe and two groups from Asia. All participants had to be born and have grown up in the target country. They had to be native speakers and able to understand the spoken and written language of their home country. Immigrants or foreign persons living in the target countries were excluded.

The Western European participants came from Germany and Norway, and the Asian participants came from Indonesia and South Korea. Participants were recruited through advertisements, email lists and personal contact by native-speaking experimenters in the target countries. Demographic characteristics and geographic origins of the participants are depicted in Table 1.


Table 1. Demographic characteristics and geographic origins of participants.

Country | Gender: male (%) | Gender: female (%) | Age (years), M (SD) | Musical background: musician (%) | Musical background: general (%) | Origin
Germany | n = 31 (37.8) | n = 51 (62.2) | 27.6 (9.5) | n = 46 (44.7) | n = 57 (55.3) | Students at the University of Applied Sciences Heidelberg, Germany; participants from the general population.
Norway | n = 41 (35.7) | n = 74 (64.3) | 27.7 (7.2) | — | n = 115 (100) | Students at the University of Oslo, Norway.
Korea | n = 91 (35.7) | n = 151 (62.4) | 29.8 (10.8) | n = 57 (23.8) | n = 183 (76.2) | Students at the Universities of Chonnam and Wonkwang, South Korea; participants from the general population.
Indonesia | n = 49 (44.5) | n = 61 (55.5) | 24.5 (6.5) | n = 55 (50.0) | n = 55 (50.0) | Students at the University of Pelita Harapan, Indonesia; participants from the general population.


The participants were tested individually or in groups of up to seven in non-soundproof rooms. The music segments were played on a portable stereo placed on a table that was 3 metres from the participants. The sound volume at the position of the participants was kept constant at 60 dB. The study was approved by the local ethical committees, and all participants signed consent forms prior to the test.

Test of emotion perception in music

The current trial investigated the same musical items that were used in our previous studies (Busch et al., 2003; Mohn et al., 2011). Professional musicians were instructed to improvise short musical pieces on instruments of their choice in a way that a listener should be able to decode one of the intended basic emotions (happiness, anger, disgust, surprise, sadness, and fear). Duration of the segments was limited to a maximum of 7 seconds following the notion of William Stern’s “mental presence time” (Stern, 1897) which is known to be 4–7 seconds for auditory stimuli. Due to the temporal patterns of music, the emotional content of a musical piece might vary with time. The limitation of the duration to a maximum of 7 seconds ensures that the musical segments will be perceived as an entity representing predominantly one emotion. Eighteen music segments (three segments for each emotional quality, see Table 2; see supplementary material) made up the test (for more details on the test see Mohn et al., 2011).

The musical segments were burnt on a compact disc (CD) in randomized order. There was a 10-second pause between each segment. All participants listened to the same CD and were thus exposed to the segments in the same order. The participants were instructed to classify each segment as one of six emotions and to mark the most appropriate emotion category on a forced-choice answer sheet (the answer schedule is depicted in Figure 1). Before the test, six trial segments (one for each emotional quality) were presented in order to familiarize the subjects with the task. An internet version of the test is available at https://www.soscisurvey.de/emu/, depicting the instructions and the examples. In the present trial, these instructions and answers were given in a pen-and-paper version. The entire musical emotions test procedure lasted 10 minutes (for more details see Appendix 1).

The answer categories for the Norwegian, Korean and Indonesian questionnaire sheets were obtained through translation and re-translation. Since the valence of emotional categories varies considerably between cultures, the final expressions of the Asian questionnaires were checked by professionals of Korean studies and Indonesian language respectively.

Statistical analysis

Statistical analysis of the data was performed using parametric and non-parametric tests in SPSS 20 (SPSS Inc.). Group differences in sociodemographic data were analyzed with the χ²-test or univariate analysis of variance (ANOVA). Emotion perception was evaluated using multivariate analysis of variance (MANOVA) and t-tests for independent samples. The level of significance was p < .05, adjusted for multiple testing by the Scheffé method where necessary. All analyses were two-tailed.
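The analyses were run in SPSS. As an illustration only, the comparison of a sample's accuracy against the forced-choice chance level could be sketched in Python with SciPy; the scores below are simulated (their mean and SD mirror the German results reported later, not the raw data):

```python
import numpy as np
from scipy import stats

CHANCE = 1 / 6  # six forced-choice categories, so guessing yields 16.7%

# Simulated per-participant accuracies (proportion of the 18 segments
# classified as intended); mean/SD follow the reported German values.
rng = np.random.default_rng(0)
german_acc = rng.normal(loc=0.67, scale=0.13, size=82).clip(0, 1)

# Single-sample t-test against the chance level of 16.7%
t, p = stats.ttest_1samp(german_acc, CHANCE)
print(f"chance = {CHANCE:.1%}, t = {t:.2f}, p = {p:.2g}")
```

With a mean accuracy far above 16.7%, such a test comes out clearly significant; the item-wise tests in the Results section follow the same logic.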

Results

Perception of musical emotions

Percentages of correct and incorrect classification of musical emotions in the 18 segments by the four nations are depicted in Table 3.


The classification performance differed considerably between the four nations (ANOVA F(3, 564) = 26.73, p < .001): the German participants achieved about 67% (SD = 13%) correct classifications, the Norwegian participants 60% (SD = 38%), the Korean participants 48% (SD = 13%) and the Indonesian participants 45% (SD = 20%).
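A minimal SciPy sketch of this one-way ANOVA (not the authors' SPSS procedure) on simulated scores, whose means, SDs and group sizes follow the reported values:

```python
import numpy as np
from scipy import stats

# Simulated accuracy scores per nation; means/SDs/sizes follow the paper.
rng = np.random.default_rng(2)
spec = [(0.67, 0.13, 82),   # Germany
        (0.60, 0.38, 115),  # Norway
        (0.48, 0.13, 240),  # Korea
        (0.45, 0.20, 109)]  # Indonesia
groups = [rng.normal(m, s, n).clip(0, 1) for m, s, n in spec]

F, p = stats.f_oneway(*groups)
df_within = sum(len(g) for g in groups) - len(groups)
print(f"F(3, {df_within}) = {F:.2f}, p = {p:.2g}")
```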

Accuracy levels for the overall recognition in all groups were well above the levels expected from chance guessing (1/6 = 16.7%). Comparison of the recognition accuracy with the cut-off score of 16.7 by single-sample t-test revealed cultural, emotion and item-specific differences. On the cultural level, the Indonesian sample classified the emotion categories “Surprise” (p = .268) and “Disgust” (p = .022) below chance. This was due to the low accuracy performance for the examples Surprise 2 (p = .097), Surprise 3 (p = .656), Disgust 2 (p = .580) and Disgust 3 (p = .469). Furthermore, Anger 2 (p = .045) was classified below chance. The remaining nations classified the following single items below chance level: German: Disgust 2 (p = .199); Norwegian: Surprise 2 (p = .891), Fear 3 (p = .567), Disgust 3 (p = .046), Anger 2 (p = .046); Korea: Happiness 1 (p = .010). On the item level, the example “Disgust 2” was the most indistinct example, which did not achieve above-chance accuracy in any group, irrespective of cultural origin.

Table 2. Characteristics of the musical segments.

Emotion | Duration (seconds) | Instrument | Musical characteristics | Number on CD
Happiness 1 | 3 | Tuba | Vivid expression, staccato, broad timbre, high volume, fast tempo | 2
Happiness 2 | 5 | Guitar | Major mode, dance-like 3/4 rhythm, large intervals, loud volume, no dissonances | 12
Happiness 3 | 5 | Piano | Major mode, strong timbre, vivid expression, rapid tempo | 16
Sadness 1 | 5 | Electric bass | Legato, light, subdued ascending and descending tones, slow tempo | 3
Sadness 2 | 5 | French horn | Minor mode, stepwise intervals, weak touch, medium volume, consonant harmony | 7
Sadness 3 | 5 | Piano | Minor mode, weak touch, low volume, slow tempo with large variations | 18
Surprise 1 | 4 | Electric bass | Short tones, staccato, jumping ascending dynamics, medium volume | 6
Surprise 2 | 4 | Piano | Major mode, jumping, ascending melody, broad expression, crescendo | 11
Surprise 3 | 3 | French horn | Major mode, staccato, jumping ascending melody, medium volume, crescendo | 13
Fear 1 | 4 | Cello | Short, “shivering” vibrato, low volume, fast tempo | 1
Fear 2 | 4 | Guitar | Very rapid touch, ascending volume, tempo, and dynamics | 9
Fear 3 | 5 | Tuba | Rapid, irregular vibrato, low pitch, medium volume, from crescendo to decrescendo | 15
Anger 1 | 3 | Piano | Hard touch, staccato, loud volume, rapidly ascending tempo, dissonant harmony | 5
Anger 2 | 3 | Tuba | Staccato, low pitch, short intervals between tones, loud volume | 10
Anger 3 | 5 | Cello | Minor mode, staccato, low pitch, strong vibrato, rapid tempo | 17
Disgust 1 | 5 | Violin | “Screeching”, medium volume, several variations with changing expression and emphasis | 4
Disgust 2 | 5 | Cello | Uncontrolled tones in rapid succession, ascending and descending movements | 8
Disgust 3 | 3 | Electric bass | Weak touch, subdued timbre, slow tempo, low volume, diminuendo | 14

Cultural proximity led to similar emotional classification results; that is, neither the two West-European (Germany and Norway) samples (p = .102) nor the two Asian (Korea and Indonesia) samples (p = .063) differed in their overall recognition performance, calculated as the total percentage of correct classifications of all 18 examples.
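The pairwise comparisons reported here were independent-samples t-tests. A minimal sketch with simulated data (using Welch's variant, which does not assume equal variances; the means, SDs and group sizes follow the reported values, not the raw data):

```python
import numpy as np
from scipy import stats

# Simulated overall accuracies for the two Western European samples.
rng = np.random.default_rng(1)
germany = rng.normal(0.67, 0.13, 82).clip(0, 1)
norway = rng.normal(0.60, 0.38, 115).clip(0, 1)

# Welch's t-test for two independent samples
t, p = stats.ttest_ind(germany, norway, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```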

Confusions

Some musical examples seemed to be very representative of their emotional category regardless of the cultural background of the participants (such as Happiness 3, Sadness 3). For other examples, depending on the cultural background, the emotional classification seemed to be rather confused. Therefore, chi-square analyses were performed on the nominal data representing the emotional categories recognized by the participants. This procedure revealed three patterns: 1. “Correct hits” were defined as examples which were classified as the intended emotion; 2. “False hits” were defined as examples that were classified as a distinct emotion different from the intended emotion; and 3. “Confusions” were examples where the classifications were distributed among two or more emotional categories (e.g., example Fear 3 was classified as either “fear” or “disgust” or “anger” in equal measure).

Figure 1. Forced-choice answer sheet. Answer schedule: 1 = English, 2 = German, 3 = Norwegian, 4 = Indonesian, 5 = Korean.

Table 3. Answer sheet (percentage of classification). Columns are the intended emotions (musical examples); rows are the perceived emotions.

Germany (n = 82)
Perceived | Happy 1 | Happy 2 | Happy 3 | Sad 1 | Sad 2 | Sad 3 | Surpr 1 | Surpr 2 | Surpr 3 | Fear 1 | Fear 2 | Fear 3 | Disgust 1 | Disgust 2 | Disgust 3 | Anger 1 | Anger 2 | Anger 3
Happy | 80.5 | 93.2 | 94.7 | — | — | 0.8 | 16.8 | 44.4 | 54.1 | 3.0 | 6.1 | — | 0.8 | 3.8 | — | 2.3 | 9.0 | 2.3
Sad | 3.0 | — | — | 60.9 | 99.2 | 99.2 | — | — | 0.8 | 2.3 | — | 3.8 | 0.8 | — | 37.6 | — | 0.8 | 2.3
Surprise | 9.0 | 6.0 | 3.8 | 9.8 | — | — | 74.0 | 51.1 | 43.6 | 21.1 | 12.2 | 0.8 | 0.8 | 2.3 | 0.8 | 37.6 | 4.5 | 4.5
Fear | 0.8 | — | — | 10.5 | 0.8 | — | 3.8 | 3.8 | 1.5 | 62.4 | 62.6 | 31.6 | 6.0 | 30.1 | 19.5 | 9.8 | 3.0 | 21.1
Disgust | 0.8 | — | — | 12.8 | — | — | 3.8 | — | — | 7.5 | 3.1 | 27.8 | 85.0 | 24.8 | 31.6 | 3.0 | 6.8 | —
Anger | 6.0 | 0.8 | 1.5 | 6.0 | — | — | 1.5 | 0.8 | — | 3.8 | 16.0 | 36.1 | 6.8 | 39.1 | 10.5 | 47.4 | 75.9 | 69.9

Norway (n = 115)
Happy | 70.0 | 83.5 | 93.3 | 7.0 | — | 3.5 | 16.5 | 83.3 | 52.2 | — | 0.9 | 0.9 | — | 2.6 | 1.7 | 20.0 | 4.1 | 6.1
Sad | 3.5 | — | — | 65.2 | 100 | 95.7 | — | — | — | 0.9 | — | 5.2 | 6.1 | 2.6 | 47.0 | 0.9 | 5.2 | 7.8
Surprise | 15.7 | 15.7 | 3.5 | 11.3 | — | 0.9 | 78.3 | 16.7 | 46.1 | 3.5 | 7.0 | 1.7 | 2.6 | 3.5 | 5.2 | 41.7 | 3.5 | 1.7
Fear | — | — | — | 2.6 | — | — | 0.9 | — | 1.7 | 80.0 | 75.7 | 19.1 | 11.4 | 32.2 | 13.9 | 8.7 | 3.5 | 30.4
Disgust | 7.8 | — | 1.7 | 13.0 | — | — | 3.5 | — | — | 8.7 | 1.7 | 34.8 | 70.2 | 30.4 | 25.2 | 3.5 | 24.3 | 7.0
Anger | 2.6 | 0.9 | 0.9 | 0.9 | — | — | 0.9 | — | — | 7.0 | 14.8 | 38.3 | 9.6 | 28.7 | 7.0 | 25.2 | 59.1 | 47.0

Korea (n = 240)
Happy | 24.2 | 69.2 | 92.1 | 2.9 | 2.9 | 5.8 | 15.4 | 63.8 | 60.8 | 2.1 | 0.4 | 0.8 | 0.4 | 1.3 | 2.1 | 23.8 | 3.3 | 7.1
Sad | 10.0 | 0.4 | 4.2 | 61.7 | 78.3 | 76.3 | 2.5 | 1.3 | 1.7 | 8.8 | 2.1 | 3.3 | 4.2 | 2.1 | 26.7 | 0.8 | 0.8 | 2.5
Surprise | 22.9 | 16.7 | 2.9 | 1.3 | 1.3 | 0.4 | 42.1 | 25.4 | 25.8 | 7.1 | 20.8 | 5.0 | 3.8 | 4.6 | 5.8 | 36.3 | 12.9 | 20.8
Fear | 13.3 | 5.8 | — | 22.1 | 11.3 | 10.0 | 8.3 | 3.3 | 4.2 | 57.1 | 52.1 | 32.1 | 12.5 | 9.2 | 39.6 | 6.7 | 13.3 | 27.1
Disgust | 9.6 | 0.8 | — | 5.4 | 2.9 | 2.9 | 11.3 | 2.1 | 5.0 | 17.9 | 10.0 | 32.9 | 60.4 | 48.3 | 12.9 | 7.1 | 19.6 | 7.1
Anger | 20.0 | 7.1 | 0.8 | 6.7 | 3.3 | 4.6 | 20.4 | 4.2 | 2.5 | 7.1 | 14.6 | 25.8 | 18.8 | 34.6 | 12.9 | 25.4 | 50.0 | 35.4

Indonesia (n = 109)
Happy | 41.8 | 82.7 | 96.4 | — | — | — | 26.4 | 74.5 | 60.9 | 4.5 | 7.3 | — | — | — | 0.9 | 29.1 | 6.4 | 5.5
Sad | 10.9 | 2.7 | — | 60.0 | 90.9 | 97.3 | 0.9 | 4.5 | 10.9 | 1.8 | 1.8 | 7.3 | 4.5 | 0.9 | 7.3 | 0.9 | 2.7 | 7.3
Surprise | 13.6 | 10.0 | 0.9 | 4.5 | — | 0.9 | 30.9 | 11.8 | 15.5 | 22.7 | 21.8 | 6.4 | 20.0 | 13.6 | 4.5 | 26.4 | 18.2 | 13.6
Fear | 5.5 | — | — | 21.8 | 6.4 | 1.8 | 5.5 | 3.6 | 6.4 | 39.1 | 34.5 | 34.5 | 29.1 | 11.8 | 63.6 | — | 16.4 | 26.4
Disgust | 19.1 | — | — | 10.9 | 2.7 | — | 33.6 | 0.9 | 1.8 | 4.5 | 2.7 | 34.5 | 32.7 | 19.1 | 14.5 | 6.4 | 30.9 | 1.8
Anger | 9.1 | 4.5 | 2.7 | 2.7 | — | — | 2.7 | 4.5 | 4.5 | 27.3 | 31.8 | 17.3 | 13.6 | 54.5 | 9.1 | 37.3 | 25.5 | 45.5

Note. Correct hits are the cells in which the perceived emotion (row) matches the intended emotion (column); values above chance (> 17%) in other cells indicate false, above-chance choices of emotion; the remaining cells represent wrong classifications.

“Correct hits” were all three examples representing sadness (Sadness 1, Sadness 2, Sadness 3), two examples representing happiness (Happiness 2, Happiness 3), and one example representing fear (Fear 1) and disgust (Disgust 1) respectively.

“False hits” were represented by the examples Surprise 2, which was perceived as “happiness” by the Norwegians, Koreans and Indonesians; Disgust 2, which was mistaken for “anger” by the Indonesians; Disgust 3, which was classified as “sadness” by the Norwegians and as “fear” by the Indonesians and Koreans; and Anger 1, which was categorized as “surprise” by the Norwegians and the Koreans.

The remaining examples represent “confusions”. For details see Table 4.
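Our reading of this three-way scheme can be sketched as a small decision rule: collect the answer categories chosen above chance (> 17%) and run a chi-square goodness-of-fit test on their counts. The function and the counts below are hypothetical illustrations, not the study's exact procedure or raw data:

```python
from scipy.stats import chisquare

def classify_item(counts, intended):
    """counts: mapping emotion -> number of participants choosing it."""
    n = sum(counts.values())
    # answer categories chosen above the chance threshold of 17%
    above = {emo: c for emo, c in counts.items() if c / n > 0.17}
    if set(above) == {intended}:
        return "correct hit"
    # test whether choices spread evenly over the above-chance categories
    stat, p = chisquare(list(above.values()))
    if p < 0.05:
        winner = max(above, key=above.get)
        if winner != intended:
            return f"false hit ({winner})"  # one distinct other emotion dominates
        return "correct hit"
    return "confusion (" + "/".join(sorted(above)) + ")"

# A Surprise-2-like item: most listeners chose happiness instead
print(classify_item({"surprise": 30, "happiness": 80, "anger": 10}, "surprise"))
# → false hit (happiness)
```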

Emotion recognition profile

Next, the three musical segments representing each emotion were combined into six musical emotion indices by simple aggregation; identification accuracy, given as the percentage of correct answers, yields an “emotion recognition profile” (see Figure 2).
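The aggregation itself is just the mean of the three item accuracies per emotion. For illustration, with the German correct-hit percentages for the happiness and sadness items from Table 3:

```python
import numpy as np

# German correct-hit percentages per item (from Table 3)
accuracy = {
    "Happiness": [80.5, 93.2, 94.7],
    "Sadness":   [60.9, 99.2, 99.2],
}
# Emotion index = simple aggregation (mean) of the three item accuracies
profile = {emotion: float(np.mean(items)) for emotion, items in accuracy.items()}
print(profile)  # Happiness ≈ 89.5, Sadness ≈ 86.4
```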

Happiness and Sadness were interculturally easier to classify correctly than the other four emotions; a repeated measures ANOVA revealed significant differences between the six emotions (main effect of emotion, F(5, 560) = 187.65, p < .001). Identification performance also differed between the nations (main effect of nation, F(3, 564) = 68.64, p < .001), and the significant interaction (nation × emotion, F(15, 1686) = 9.12, p < .001) indicates a culturally specific emotion recognition pattern.

Musical background

Detection accuracy of musically encoded emotions was possibly influenced by musical background. The musical examples have a Western European background; therefore it might be easier for participants with training in Western European music to decode the intended emotions. The samples from Germany, Indonesia and Korea consisted of both music students (students at a conservatory of music or students of music therapy) and participants from the general population. Among the Norwegian participants, there were no professional musicians or music students; therefore the Norwegian group had to be excluded from this analysis.

For the remaining three countries, a multivariate ANOVA with “culture” (three nations) and “musicality” (music students vs. general population) as group variables and the percentage of correct hits for the 18 musical examples was calculated (see Figure 3). This analysis displayed a “musicality” main effect (F(18, 521) = 4.23, p < .001) due to higher correct hit rates of the musicians for the examples Happiness 1 (p < .001), Sadness 3 (p = .009), and Anger 1 (p = .007) and significantly lower correct hit rates of the musicians for the example Surprise 3 (p = .016) (the remaining examples were statistically indistinguishable, all p > .070). A main effect of “culture” (F(54, 1569) = 11.43, p < .001) resulted from significant differences in all items (all p < .001) except for Happiness 3 (p = .873), Sadness 1 (p = .460) and Fear 3 (p = .097). There was no “musicality” × “culture” interaction (F(26, 1044) = 1.28, p = .124).


10 Psychology of Music T

able

4.

Cla

ssifi

catio

n of

em

otio

nal c

ateg

orie

s (c

orre

ct h

its, f

alse

hits

, con

fusi

ons)

by

Table. “False hits” and “confusions” by nation.

Emotion  | Item        | Germany | Norway | Korea | Indonesia | Total
Happy    | Happiness 1 | — | — | Happiness/Surprise (χ²(1) = .80, p = .778) | — | —
Happy    | Happiness 2 | — | — | — | — | —
Happy    | Happiness 3 | — | — | — | — | —
Sad      | Sadness 1   | — | — | — | — | —
Sad      | Sadness 2   | — | — | — | — | —
Sad      | Sadness 3   | — | — | — | — | —
Surprise | Surprise 1  | — | — | — | Happiness/Surprise/Disgust (χ²(1) = .80, p = .778) | —
Surprise | Surprise 2  | — | Happiness (χ²(1) = 51.56, p = .000) | Happiness (χ²(1) = 39.56, p = .000) | Happiness (χ²(1) = 50.12, p = .000) | Happiness (χ²(1) = 97.85, p = .000)
Surprise | Surprise 3  | Happiness/Surprise (χ²(1) = 1.67, p = .196) | Happiness/Surprise (χ²(1) = .43, p = .510) | Happiness (χ²(1) = 33.92, p = .000) | Happiness (χ²(1) = 29.76, p = .000) | Happiness (χ²(1) = 46.87, p = .000)
Fear     | Fear 1      | — | — | — | — | —
Fear     | Fear 2      | — | — | — | Fear/Anger (χ²(1) = .12, p = .725) | —
Fear     | Fear 3      | Fear/Disgust/Anger (χ²(1) = .24, p = .886) | Disgust/Anger (χ²(1) = .19, p = .663) | Fear/Disgust/Anger (χ²(1) = 2.38, p = .305) | Fear/Disgust (χ²(1) = 0.00, p = 1.000) | Fear/Disgust/Anger (χ²(1) = 2.33, p = .312)
Disgust  | Disgust 1   | — | — | — | — | —
Disgust  | Disgust 2   | Fear/Disgust/Anger (χ²(1) = 5.03, p = .081) | Fear/Disgust/Anger (χ²(1) = .23, p = .892) | Disgust/Anger (χ²(1) = 5.47, p = .019) | Anger (χ²(1) = 18.78, p = .000) | Disgust/Anger (χ²(1) = 1.16, p = .278)
Disgust  | Disgust 3   | Sadness/Fear/Disgust (χ²(1) = 5.36, p = .068) | Sadness (χ²(1) = 7.53, p = .006) | Fear (χ²(1) = 32.51, p = .000) | Fear (χ²(1) = 33.91, p = .000) | Sadness/Fear (χ²(1) = 3.73, p = .053)
Anger    | Anger 1     | Surprise/Anger (χ²(1) = 2.59, p = .108) | Surprise (χ²(1) = 4.69, p = .030) | Surprise (χ²(1) = 4.57, p = .033) | Happiness/Surprise/Anger (χ²(1) = 2.29, p = .318) | Surprise/Anger (χ²(1) = .85, p = .357)
Anger    | Anger 2     | — | — | — | Disgust/Anger (χ²(1) = .58, p = .446) | —
Anger    | Anger 3     | — | — | Fear/Anger (χ²(1) = 2.67, p = .102) | — | —

Note. “Correct hit”: the intended emotion was recognized correctly (chi-square results not included). “False hit”: if p < .05, the example was classified as one distinct emotion different from the intended emotion (e.g., Surprise 2 was mistaken for “happiness”). “Confusion”: if p > .05, more than one emotion was indicated instead of the intended emotion (e.g., Fear 3 was classified as either “fear” or “disgust” or “anger”).
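The note’s decision rule (“false hit” vs. “confusion”) can be sketched in code. The paper does not specify the exact test it ran, so the pairwise df = 1 goodness-of-fit chi-square below (chosen to match the reported χ²(1) values) and the response counts are assumptions for illustration only:

```python
import math
from collections import Counter

def chi2_sf_df1(x: float) -> float:
    """Survival function (p-value) of the chi-square distribution with df = 1."""
    return math.erfc(math.sqrt(x / 2.0))

def classify_item(responses, intended, alpha=0.05):
    """Label one stimulus 'correct hit', 'false hit', or 'confusion'."""
    ranked = Counter(responses).most_common()
    top, n_top = ranked[0]
    if top == intended:
        return "correct hit"   # modal answer matches the intended emotion
    if len(ranked) == 1:
        return "false hit"     # every listener chose the same wrong label
    second, n_second = ranked[1]
    # df = 1 goodness-of-fit test: do the two leading wrong answers differ
    # significantly, or were they chosen about equally often?
    expected = (n_top + n_second) / 2.0
    chi2 = ((n_top - expected) ** 2 + (n_second - expected) ** 2) / expected
    return "false hit" if chi2_sf_df1(chi2) < alpha else "confusion"

# Hypothetical counts: one wrong label dominates -> false hit;
# two wrong labels tie -> confusion.
print(classify_item(["happiness"] * 40 + ["surprise"] * 8, "surprise"))
print(classify_item(["disgust"] * 17 + ["anger"] * 15, "fear"))
```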

at UNIV FEDERAL DA PARAIBA on July 11, 2015pom.sagepub.comDownloaded from

Page 11: Psychology of Music 2015 Argstatter 0305735615589214

Argstatter 11

Figure 2. “Emotion recognition profile” by nation (identification accuracy in percentage of correct answers).

Figure 3. Correct hits by example and musical background (musicians vs. general population).


Discussion

The primary aims of this study were to investigate two main issues: (1) Are emotions universally perceivable in music unknown to listeners with different cultural backgrounds? (2) Are there music-specific similarities and differences across cultural boundaries?

Our results speak to both questions: overall, the musically encoded basic emotions were classified as the intended emotion well above chance level, independently of the listeners’ cultural origin. A detailed analysis, however, revealed a more complex pattern and thus confirmed cultural specifics.
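With six response categories, guessing alone yields an expected accuracy of 1/6. Whether a given number of correct answers is “above chance” can be checked with a one-sided binomial test; the sketch below uses invented numbers, not the study’s data:

```python
from math import comb

def binom_sf(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more hits by guessing."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Hypothetical example: 25 of 60 listeners pick the intended emotion.
n, k, chance = 60, 25, 1 / 6   # six emotion labels -> chance level 1/6
p_value = binom_sf(k, n, chance)
print(f"{k}/{n} correct, p = {p_value:.2e}")   # p < .05 -> above chance
```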

In-group advantage

On a cultural level, cultural proximity led to similar emotional classification results; that is, the two West European samples (Germany and Norway) and the two Asian samples (Korea and Indonesia) achieved similar recognition patterns, while the European participants outperformed the Asian participants. This phenomenon is known as the in-group advantage: emotional cues (e.g., faces or vocal stimuli) are recognized better if the stimuli and the participants stem from the same culture (Elfenbein & Ambady, 2002). For example, monolingual speakers of Argentine Spanish performed significantly better in an emotion detection test in their native language than in foreign languages (Pell, Monetta, Paulmann, & Kotz, 2009). Our investigation likewise showed an in-group advantage: emotions were recognized more accurately when they were both expressed and perceived by members of the same regional group (Western European items and Western European participants).

Data from a meta-analysis on emotion recognition by European versus Asian samples revealed that “judgments made by participants from the same cultural group as the posers were an average of 13.4% (SD = 9.5%) more accurate than cross-cultural judgments” (Elfenbein & Ambady, 2002). In our analysis, the European samples outperformed the Asian samples by 16.4% (SD = 14.5%) and thus confirmed the typical in-group advantage.
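One way such a figure can be computed is as the mean and standard deviation of per-item accuracy differences between the two groups. The accuracies below are invented for demonstration only; they are not the study’s values:

```python
import statistics

# Hypothetical per-item accuracies (proportion of correct answers).
acc_european = {"Happiness 2": 0.92, "Sadness 2": 0.88, "Surprise 2": 0.35}
acc_asian    = {"Happiness 2": 0.80, "Sadness 2": 0.75, "Surprise 2": 0.10}

diffs = [acc_european[item] - acc_asian[item] for item in acc_european]
advantage = statistics.mean(diffs)   # mean in-group advantage across items
spread = statistics.stdev(diffs)     # its variability across items
print(f"in-group advantage = {advantage:.1%} (SD = {spread:.1%})")
```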

Emotional quality

The universal ability to detect emotional quality in musical pieces seems to be restricted to certain emotional categories. In addition, the accuracy of emotion classification (i.e., the percentage of correct hits) varied considerably depending on the emotion category. This finding indicates that some of the musical stimuli used in this study were more prototypical than others. Each emotional category showed both significant cross-cultural accuracy and a significant in-group advantage.

As expected, happiness and sadness were the most distinct emotions. Two “happy” (Happiness 2, Happiness 3) and two “sad” (Sadness 2, Sadness 3) items were unambiguously classified cross-culturally. This has been shown in nearly every investigation concerning musically encoded emotions (Balkwill et al., 2004; Fritz et al., 2009; Juslin, 1997; Laukka et al., 2013; Scherer, 2004; Yong & McBride-Chang, 2007).

“Surprise” was the emotion most difficult to decode cross-culturally; it was often confused with “happiness”, possibly due to similar musical features. The negative emotions “fear”, “disgust” and “anger” produced a broad range of confusions. Examples resembling “fear” were mainly categorized as “disgust” or “anger”, but also “surprise”, while examples representing “disgust” were classified as “fear” or “anger”, but also “sadness”. Examples expressing “anger” likewise yielded a broad range of confusions, except that they were never assigned to “sadness” interculturally.


For both the West European and the Korean participants, at least one most distinctive, pancultural item could be identified for every emotion but “surprise”. The Indonesian participants could identify “happiness” and “sadness” only; for the remaining emotions no single example was instantly recognizable, and Disgust 2 was significantly cross-classified as “anger”.

Musical training/musical background

Musical training influences the accuracy of emotional denotation (Morrison, Demorest, Aylward, Cramer, & Maravilla, 2003; Lima & Castro, 2011; Park et al., 2014). In our samples, the trained musicians achieved a slight though significant advantage, independent of cultural background. These differences were due to differences on single items, however, and do not represent a general pattern.

Korean participants outperformed the Indonesian participants; one reason could be the more profound musical training in the Korean educational system. Nearly two thirds of all preschool-age children in South Korea are enrolled in preschools or child care centres involving early musical education (Clarke-Stewart, Lee, Allhusen, Kim, & McDowell, 2006). The musical genres most preferred in preschool education are (Korean) pop music and (Western) classical music (Lee, 2009). Korean music has been heavily affected by Western influences (Byeon, 2004), most strikingly represented by Korean pop (K-Pop). This genre incorporates Western popular genres such as rap, rock and techno house (Hartong, 2006) and is the most popular musical genre in Korea. The Korean participants are thus exposed to the underlying principles of Western music from an early age, leading to an enhanced emotion recognition ability compared with the Indonesians (Yong & McBride-Chang, 2007).

Indonesia’s culture is extremely diverse, since the archipelago consists of about 17,000 islands with approximately 300 ethnic groups. The main influences are traditional gamelan music and, depending on the cultural background of the different geographical regions, Western arts (especially from the Portuguese colonialists and from North America) as well as Hindustani (Indian), Arab (based on Islamic traditions) and Malay musical patterns (Anderson & Campbell, 2010). Since these musical styles differ greatly from Western music, this different enculturation might explain the different emotion recognition patterns.

Language

Some of the emotional category labels might be disparate across the different translations due to diverse cultural connotations (Adiwimarta, 1997; Hǒ, 2003). The term “surprise” is itself ambiguous and has many different connotations. The chosen translations (Indonesian: terkejut = “startled, shocked” and kaget = “upset, frightened, startled, staggered”; Korean: 놀람 (“nollam”) = “alarm, amazement, astonishment, dismay, marvel”) have a negative touch, resulting in high confusion rates for Surprise 1 with disgust (Korea) and anger (Indonesia). A prototype analysis of emotion concepts revealed five emotion categories in the Indonesian language (Shaver, Murdaya, & Fraley, 2001): cinta (love), senang (happiness), marah (anger), kawatir/takut (anxiety/fear), and sedih (sadness). “Surprise” does not form a distinct emotion category in the Indonesian language and is thus a very untypical word for Indonesian speakers.

Another very difficult emotion seems to be “disgust”. Disgust is not an inherited instinct but is shaped in the course of cultural socialization; it might therefore be extremely difficult to find an international musical “code” for this very complex emotional category. In the Indonesian language, “disgust” is not among the five top-level emotion categories but is allocated to the category “anger”. This explains the below-chance performance of


the Indonesian sample for the emotion category “disgust”. Most striking was the high confusion rate for Disgust 2, which was significantly allocated to “anger” in the Indonesian sample.

Comparison to other channels

Stimuli from static channels (such as pictures or photographs) are far easier to decode than stimuli from dynamic channels (such as vocal expressions or video recordings of emotions) (Elfenbein & Ambady, 2002; Scherer, 2004). One reason seems to be that dynamic stimuli are much more complex and less prototypical than static stimuli. It is therefore not surprising that the accuracy rate for musically encoded emotions falls below the accuracy level obtained with static (mostly visual) stimuli.

Methodological improvement/shortcomings

We decided to use a forced-choice response format following the notion of emotion categories rather than the dimensional approach (valence/arousal). Particularly in view of the cross-cultural language entanglements, however, a language-independent response format would have been highly desirable: nonverbal stimuli ideally require nonverbal responses to minimize cultural effects.

The study has an unbalanced design; that is, West European participants did not judge emotions expressed by members of the Asian group. Musical samples from Korea have already been recorded, and these data will be presented in a future paper.

Conclusion

Overall, there seems to be evidence for a pancultural musical sentience, but in our study the universal ability to detect emotional quality in musical pieces was restricted to the categories “happy” and “sad”. A clear in-group advantage was shown, especially for emotions that are untypical for musical expression (such as “disgust”). The results are qualified by the outcome measurement procedure, since emotional category labels are language-based and reinforce cultural diversity.

Acknowledgements

My special thanks go to Dr. Christine Mohn (Oslo, Norway), Mihyun Seo and Sookyeong Park (Chonnam 전남, South Korea), and Amelia Delphina Kho (Jakarta, Indonesia) for their contribution to intercultural data collection, and to Friedrich-Wilhelm Wilker (University of Applied Sciences Heidelberg) for assistance during data acquisition.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

References

Adiwimarta, S. S. (1997). Indonesisch: Langenscheidts Universal-Wörterbuch: Indonesisch-Deutsch, Deutsch-Indonesisch (Völlige Neuentwicklung) [Indonesian: Langenscheidt universal dictionary: Indonesian–German, German–Indonesian (rev. ed.)]. Berlin, Germany: Langenscheidt.

Anderson, W. M., & Campbell, P. S. (Eds.). (2010). Multicultural perspectives in music education (3rd ed.). Lanham, MD: Rowman & Littlefield.

Balkwill, L.-L., & Thompson, W. F. (1999). A cross-cultural investigation of the perception of emotion in music: Psychophysical and cultural cues. Music Perception: An Interdisciplinary Journal, 17(1), 43–64. doi:10.2307/40285811


Balkwill, L.-L., Thompson, W. F., & Matsunaga, R. (2004). Recognition of emotion in Japanese, Western, and Hindustani music by Japanese listeners. Japanese Psychological Research, 46(4), 337–349.

Bigand, E., Vieillard, S., Madurell, F., Marozeau, J., & Dacquet, A. (2005). Multidimensional scaling of emotional responses to music: The effect of musical expertise and of the duration of the excerpts. Cognition & Emotion, 19(8), 1113–1139. doi:10.1080/02699930500204250

Busch, V., Nickel, A. K., Hillecke, T. K., Gross, T., Meißner, N., & Bolay, H. V. (2003). Musikalische und mimische Emotionserkennung: Eine Pilotstudie mit psychiatrischen Patienten [Musical and facial emotion recognition: A pilot study with psychiatric patients]. Musik-, Tanz- und Kunsttherapie, 14(1), 1–8. doi:10.1026//0933-6885.14.1.1

Byeon, J. (2004). Die koreanische Musik des 20. Jahrhunderts [Korean music of the 20th century]. In M. R. Entreß (Ed.), Urban + Aboriginal XVI: Alte und neue Musik aus Korea [Urban + Aboriginal XVI: Early and modern music from Korea] (pp. 59–61). Berlin, Germany: Freunde Guter Musik Berlin e.V.

Clarke-Stewart, K. A., Lee, Y., Allhusen, V. D., Kim, M. S., & McDowell, D. J. (2006). Observed differences between early childhood programs in the U.S. and Korea: Reflections of “developmentally appropriate practices” in two cultural contexts. Journal of Applied Developmental Psychology, 27(5), 427–443.

Demorest, S. M., Morrison, S. J., Jungbluth, D., & Beken, M. N. (2008). Lost in translation: An enculturation effect in music memory performance. Music Perception: An Interdisciplinary Journal, 25(3), 213–223.

Diener, E., & Diener, C. (1996). Most people are happy. Psychological Science, 7(3), 181–185. doi:10.1111/j.1467-9280.1996.tb00354.x

Eerola, T., & Vuoskoski, J. K. (2013). A review of music and emotion studies: Approaches, emotion models, and stimuli. Music Perception: An Interdisciplinary Journal, 30(3), 307–340. doi:10.1525/MP.2012.30.3.307

Ekman, P. (1992). Facial expressions of emotion: New findings, new questions. Psychological Science, 3(1), 34–38.

Elfenbein, H. A., & Ambady, N. (2002). On the universality and cultural specificity of emotion recognition: A meta-analysis. Psychological Bulletin, 128(2), 203–235. doi:10.1037//0033-2909.128.2.203

Fritz, T. (2013). The dock-in model of music culture and cross-cultural perception. Music Perception: An Interdisciplinary Journal, 30(5), 511–516. doi:10.1525/mp.2013.30.5.511

Fritz, T., Jentschke, S., Gosselin, N., Sammler, D., Peretz, I., Turner, R., … Koelsch, S. (2009). Universal recognition of three basic emotions in music. Current Biology, 19(7), 573–576. doi:10.1016/j.cub.2009.02.058

Fritz, T. H., Schmude, P., Jentschke, S., Friederici, A. D., Koelsch, S., & Watanabe, K. (2013). From understanding to appreciating music cross-culturally. PLoS ONE, 8(9), e72500. doi:10.1371/journal.pone.0072500

Gabrielsson, A., & Juslin, P. N. (1996). Emotional expression in music performance: Between the performer’s intention and the listener’s experience. Psychology of Music, 24(1), 68–91. doi:10.1177/0305735696241007

Goldberg, L. R. (1980). Language and individual differences: The search for universals in personality lexicons. In L. Wheeler (Ed.), Review of personality and social psychology (pp. 141–165). Beverly Hills, CA: Sage.

Hartong, J. L. (2006). Musical terms worldwide: A companion for the musical explorer (S. Mills, P. van Amstel & A. Marković, Eds.). Rome, Italy: Semar.

Hǒ, H.-K. (2003). Minjungseorims essence deutsch-koreanisches Wörterbuch: Minjungsǒrim essensǔ tokhan sajǒn [Minjungseorims essence German-Korean dictionary] (2nd ed.). Sǒul, South Korea: Minjungsǒrim.

Hunter, P. G., & Schellenberg, E. G. (2010). Music and emotion. In M. Riess Jones, R. R. Fay & A. N. Popper (Eds.), Springer handbook of auditory research: Music perception (pp. 129–164). New York, NY: Springer.

Hupka, R. B., Lenton, A. P., & Hutchinson, K. A. (1999). Universal development of emotion categories in natural language. Journal of Personality and Social Psychology, 77(2), 247–278.

Juslin, P. N. (1997). Emotional communication in music performance: A functionalist perspective and some data. Music Perception: An Interdisciplinary Journal, 14(4), 383–418.


Koelsch, S. (2011). Toward a neural basis of music perception: A review and updated model. Frontiers in Psychology, 2, 110. doi:10.3389/fpsyg.2011.00110

Krumhansl, C. L. (1997). An exploratory study of musical emotions and psychophysiology. Canadian Journal of Experimental Psychology = Revue canadienne de psychologie expérimentale, 51(4), 336–353.

Laukka, P., Eerola, T., Thingujam, N. S., Yamasaki, T., & Beller, G. (2013). Universal and culture-specific factors in the recognition and performance of musical affect expressions. Emotion, 13(3), 434–449. doi:10.1037/a0031388

Lee, Y. (2009). Music practices and teachers’ needs for teaching music in public preschools of South Korea. International Journal of Music Education, 27(4), 356–371. doi:10.1177/0255761409344663

Lima, C. F., & Castro, S. L. (2011). Speaking to the trained ear: Musical expertise enhances the recognition of emotions in speech prosody. Emotion, 11(5), 1021–1031. doi:10.1037/a0024521

Mohn, C., Argstatter, H., & Wilker, F.-W. (2011). Perception of six basic emotions in music. Psychology of Music, 39(4), 503–517. doi:10.1177/0305735610378183

Morrison, S. J., Demorest, S. M., Aylward, E. H., Cramer, S. C., & Maravilla, K. R. (2003). FMRI investigation of cross-cultural music comprehension. NeuroImage, 20(1), 378–384. doi:10.1016/S1053-8119(03)00300-8

Nastase, V., Sokolova, M., & Shirabad, J. S. (2007). Do happy words sound happy? A study of the relation between form and meaning for English words expressing emotions. Proceedings of Recent Advances in Natural Language Processing (RANLP 2007) (pp. 406–410). Retrieved from http://rali.iro.umontreal.ca/rali/sites/default/files/publis/wordformCamera.pdf

Park, M., Gutyrchik, E., Bao, Y., Zaytseva, Y., Carl, P., Welker, L., … Meindl, T. (2014). Differences between musicians and non-musicians in neuro-affective processing of sadness and fear expressed in music. Neuroscience Letters, 566, 120–124. doi:10.1016/j.neulet.2014.02.041

Pell, M. D., Monetta, L., Paulmann, S., & Kotz, S. A. (2009). Recognizing emotions in a foreign language. Journal of Nonverbal Behavior, 33(2), 107–120. doi:10.1007/s10919-008-0065-7

Russell, J. A. (1983). Pancultural aspects of the human conceptual organization of emotions. Journal of Personality and Social Psychology, 45(6), 1281–1288.

Saint-Georges, C., Chetouani, M., Cassel, R., Apicella, F., Mahdhaoui, A., Muratori, F., … Cohen, D. (2013). Motherese in interaction: At the cross-road of emotion and cognition? (A systematic review). PLoS ONE, 8(10), e78103. doi:10.1371/journal.pone.0078103

Scherer, K. R. (2004). Which emotions can be induced by music? What are the underlying mechanisms? And how can we measure them? Journal of New Music Research, 33(3), 239–251. doi:10.1080/0929821042000317822

Scherer, K. R., Banse, R., & Wallbott, H. G. (2001). Emotion inferences from vocal expression correlate across languages and cultures. Journal of Cross-Cultural Psychology, 32(1), 76–92. doi:10.1177/0022022101032001009

Scherer, K. R., & Scherer, U. (2011). Assessing the ability to recognize facial and vocal expressions of emotion: Construction and validation of the Emotion Recognition Index. Journal of Nonverbal Behavior, 35(4), 305–326. doi:10.1007/s10919-011-0115-4

Shaver, P. R., Murdaya, U., & Fraley, R. C. (2001). Structure of the Indonesian emotion lexicon. Asian Journal of Social Psychology, 4(3), 201–224. doi:10.1111/1467-839X.00086

Stern, W. L. (1897). Psychische Präsenzzeit [Psychic presence time]. Zeitschrift für Psychologie und Physiologie der Sinnesorgane, 13, 325–349.

Sundberg, J. (1993). How can music be expressive? Speech Communication, 13(1–2), 239–253. doi:10.1016/0167-6393(93)90075-V

Trehub, S. E. (2001). Musical predispositions in infancy. Annals of the New York Academy of Sciences, 930(1), 1–16. doi:10.1111/j.1749-6632.2001.tb05721.x

Yong, B. C., & McBride-Chang, C. (2007). Emotion perception for faces and music: Is there a link? The Korean Journal of Thinking & Problem Solving, 17(2), 57–65.


Appendix 1. Solution sheet.

Example Intended emotion

example 1   fear
example 2   happiness
example 3   sadness
example 4   disgust
example 5   anger
example 6   surprise
example 7   sadness
example 8   disgust
example 9   fear
example 10  anger
example 11  surprise
example 12  happiness
example 13  surprise
example 14  disgust
example 15  fear
example 16  happiness
example 17  anger
example 18  sadness
