Cortex, (2002) 38, 727-742

SEEING HAPPY EMOTION IN FEARFUL AND ANGRY FACES: QUALITATIVE ANALYSIS OF FACIAL EXPRESSION RECOGNITION IN A BILATERAL AMYGDALA-DAMAGED PATIENT

Wataru Sato 1, Yasutaka Kubota 2, Takashi Okada 2, Toshiya Murai 2, Sakiko Yoshikawa 1 and Akira Sengoku 2

(1 Department of Cognitive Psychology in Education, Graduate School of Education, Kyoto University, Kyoto, Japan; 2 Department of Neuropsychiatry, Graduate School of Medicine, Kyoto University, Kyoto, Japan)

ABSTRACT

Neuropsychological studies have reported that bilateral amygdala-damaged patients have impaired recognition of facial expressions of fear. However, the specificity of this impairment remains unclear. To address this issue, we carried out two experiments concerning the recognition of facial expression in a patient with bilateral amygdala damage (HY). In Experiment 1, subjects matched the emotion of facial expressions with appropriate verbal labels, using standardized photographs of facial expressions illustrating six basic emotions. The performance of HY was compared with age-matched normal controls (n = 13) and brain-damaged controls (n = 9). HY was less able to recognize facial expressions showing fear than normal controls. In addition, the error patterns exhibited by HY for facial expressions of fear and anger were distinct from those exhibited by both control groups, and suggested that HY confused these emotions with happiness. In Experiment 2, subjects were presented with morphed facial expressions that blended happiness and fear, happiness and anger, or happiness and sadness. Subjects were requested to categorize these expressions by two-way forced-choice selection. The performance of HY was compared with age-matched normal controls (n = 8). HY categorized the morphed fearful and angry expressions blended with some happy content as happy facial expressions more frequently than normal controls. These findings support the idea that amygdala-damaged patients have impaired processing of facial expressions relating to certain negative emotions, particularly fear and anger. More specifically, amygdala-damaged patients seem to give positively biased evaluations of these negative facial expressions.

Key words: amygdala, facial expression, emotion, fear, anger, happiness, morphing

INTRODUCTION

Impairment in the recognition of fearful facial expressions in patients with amygdala lesions is one of the most important topics regarding the neural and cognitive processing of facial expressions. The initial reports on this issue were by Adolphs et al. (1994, 1995), who reported that a patient with innate bilateral amygdala damage had impaired recognition of facial expressions related to certain emotions, particularly fear. Some of the following studies have confirmed this phenomenon, and additionally reported that patients with unilateral or acquired amygdala lesions are also impaired in the recognition of facial expressions of fear (Young et al., 1995; Calder et al., 1996; Young et al., 1996; Broks et al., 1998; Adolphs et al., 1999a, 1999b). Based on these findings, it has been proposed that the amygdala is an indispensable neural substrate for recognizing facial expressions of fear (e.g., Adolphs et al., 1995).

However, some unresolved questions remain. First, some studies have reported that recognition of facial expressions was intact in subjects with complete bilateral amygdala damage (Hamann et al., 1996; Hamann and Adolphs, 1999). These studies reporting intact recognition in patients with bilateral amygdala damage used procedures identical to those of studies reporting impaired recognition in such patients, and the discrepancies between the studies cannot be attributed to other factors, such as age, IQ, etiology, or the size of the damaged regions (Broks et al., 1998). Secondly, a recent neuropsychological study reported that brain-damaged patients without amygdala damage also had impaired recognition of facial expressions of fear (Rapcsak et al., 2000). This study pointed out that the conventional recognition task based on the six basic emotions theory (Ekman and Davidson, 1994) has a biased difficulty for the recognition of certain negative expressions, particularly fear. It was suggested that impaired recognition of facial expressions of fear in amygdala-damaged patients might not be specific to the amygdala lesions but be a consequence of generalized brain damage. Finally, some studies reported that amygdala-damaged patients have impaired recognition of facial emotion, not only for fear, but also for some other emotions (Adolphs et al., 1994, 1995; Calder et al., 1996; Adolphs et al., 1999a, 1999b). For example, Calder et al. (1996) reported that patients had impaired recognition of facial expressions of both fear and anger. Taken together, although some studies have demonstrated impaired recognition of facial expressions of fear in amygdala-damaged patients, the relationship between damage to the amygdala and recognition of facial expression remains controversial.

The current study explored this issue further by testing the recognition of emotional facial expressions in a patient with bilateral amygdala damage. Two experiments were conducted.

In Experiment 1, an attempt was made to verify the findings of earlier studies using the photo-label matching paradigm, a method adopted in previous studies (Young et al., 1995; Calder et al., 1996; Young et al., 1996; Broks et al., 1998; Rapcsak et al., 2000). The performance of the amygdala-damaged subject was compared with that of age-matched normal controls and brain-damaged controls. The performance of the brain-damaged control group would provide information on the influence of brain damage not involving the amygdala. Error analyses were conducted together with an accuracy analysis of facial expression recognition. Although previous studies using facial images and verbal label-matching paradigms have focused mainly on accuracy, error analyses provide information relating to the confusions that a subject may make with a facial expression, indicative of a qualitative pattern of facial expression recognition.

In Experiment 2, we made a further step toward uncovering the nature of HY's impairment of emotional expression recognition using a set of morphed facial expressions.


EXPERIMENT 1

Materials and Methods

Subjects

The subjects consisted of one patient with bilateral amygdala damage (HY), 9 brain-damaged controls with no damage to the amygdala, and 13 normal controls. All subjects gave informed consent to participate in this study, which was conducted in accordance with the institutional ethical provisions and the Declaration of Helsinki.

HY: HY is a 37-year-old right-handed woman who suffered from Herpes simplex encephalitis at the age of 27 years. An MRI showed focal aberrant signal regions in the bilateral amygdala and partially their surrounding areas, including the hippocampi and entorhinal cortices. Except for these regions, no other abnormal signal intensity was evident (Figure 1). HY is not aphasic, and her spontaneous speech was grammatical and appropriate. Her everyday memory was intact. She showed a normal performance IQ score and a slightly low verbal IQ score (Wechsler Adult Intelligence Scale-Revised (WAIS-R): 103 and 76, respectively).

To assess HY's basic face-processing ability, neuropsychological tests were conducted. Basic processing of unfamiliar faces was assessed using the Benton Facial Recognition Test (Benton and van Allen, 1983). HY showed a superior performance on this test (25/27, short form). The processing of familiar faces was assessed using the well-known face-naming and face-pointing subset of the Visual Perception Test for Agnosia (Japanese Society of Aphasiology, 1997). HY performed perfectly in these tasks (naming from faces: 8/8; face selection from names: 8/8). Gender discrimination from faces was assessed using a subset of the Visual Perception Test for Agnosia, and HY was found to perform the task perfectly and without hesitation (4/4). In summary, HY was shown to have a normal ability for conducting these basic face-processing tasks.

Brain-damaged Controls: Nine right-handed brain-damaged controls (seven females and two males), aged 32 to 58 years (mean: 48.2 years; SD: 9.2), were studied. Subjects had focal brain damage, but no damage in the amygdala. Lesion sites were in the left temporal cortex (n = 4), left temporal and parietal cortex (n = 1), left putamen (n = 2), left temporal cortex and thalamus (n = 1), and bilateral hippocampi (n = 1). As previous studies had shown that lesions in the right somatosensory cortices (Adolphs et al., 1996, 2000), right ventral occipital cortices (Adolphs et al., 1996), or orbitofrontal cortices (Hornak et al., 1996) impair recognition of fearful facial expressions, patients with lesions in these areas were excluded from the group of brain-damaged controls in this study. All subjects were in a stable neurologic condition at the time of the experiment.


Fig. 1 – T2-weighted brain MRI images of HY. Representative coronal (left) and horizontal (right) slices at the level of the amygdala are shown.

Normal Controls: Thirteen volunteers (five females and eight males), aged 27 to 46 years (mean: 35.2 years; SD: 4.8), not significantly different in age from HY (p > .1, t-test), and without a history of neurologic or psychiatric illness, served as normal controls. In addition, the performance of 13 normal subjects (seven females and six males) ranging from 20 to 26 years old (mean: 23.5; SD: 2.0) was also tested.

Stimuli

A total of 48 photographs of facial expressions depicting six basic emotions (happiness, surprise, sadness, anger, disgust, and fear) were used as stimuli. Half of these pictures consisted of Caucasian models and the remaining half of Japanese models; the stimuli were chosen from the standard facial image sets of Ekman and Friesen (1976) and Matsumoto and Ekman (1988).

Procedure

The events were controlled using the SuperLab software version 2.0 (Cedrus) implemented on a computer (PC-98NX, NEC) with the Windows operating system.

A label-matching paradigm, as used in previous studies (Young et al., 1995; Calder et al., 1996; Young et al., 1996; Broks et al., 1998; Rapcsak et al., 2000), assessed the recognition of facial expressions in amygdala-damaged patients. Pictures of people expressing various emotions were presented on a CRT monitor (GDM-F400, Sony) one by one in a random order, and the verbal labels of the six basic emotions were presented alongside each photograph. Subjects were asked to select the label that best described the emotion shown in each photograph. They were instructed to consider all six alternatives carefully before responding. There were no time limits and no feedback was provided about performance during the test. There were 8 presentations for each emotional expression, making a total of 48 trials for each subject.

Before testing, to confirm adequate understanding of the emotional labels, participants were asked to provide examples of situations that would elicit each of the emotions. This interview showed that all subjects gave appropriate examples without difficulty. For example, for the verbal label of fear, HY responded that being in an enclosed space, such as an MRI scanner, might elicit such an emotion. After this interview, subjects were given 5 training trials to become familiarized with the procedure.
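For concreteness, the trial structure described above (six emotion labels, eight presentations each, fully randomized order, no feedback) can be sketched as follows. This is an illustrative reconstruction only, not the authors' SuperLab configuration, and the always-correct responder is a hypothetical stand-in.

```python
import random

# The six basic emotion labels used in the label-matching task.
EMOTIONS = ["happiness", "surprise", "sadness", "anger", "disgust", "fear"]

def make_trial_order(n_per_emotion: int = 8, seed: int = 0) -> list:
    """Build the randomized 48-trial presentation order (8 x 6 emotions)."""
    rng = random.Random(seed)
    trials = EMOTIONS * n_per_emotion
    rng.shuffle(trials)
    return trials

def score(presented: list, chosen: list) -> dict:
    """Per-emotion correct-response counts from (presented, chosen) pairs."""
    acc = {e: 0 for e in EMOTIONS}
    for p, c in zip(presented, chosen):
        if p == c:
            acc[p] += 1
    return acc

order = make_trial_order()
# A stand-in 'subject' who always answers correctly, for illustration only.
accuracy = score(order, order)
print(len(order), accuracy["fear"])  # 48 trials; 8/8 correct for fear
```

In the actual experiment, the chosen labels would of course come from the subject's button presses rather than from the presentation list.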

Results

Analysis of Accuracy

A preliminary analysis of data derived from normal subjects (13 age-matched controls and an additional 13 other subjects) was performed. An analysis of covariance (ANCOVA), with subject gender (female and male) as a between-subject factor, stimulus type (Caucasian and Japanese) and emotional category (happiness, surprise, sadness, anger, disgust, and fear) as within-subject factors, and subject age as a covariate, was conducted on the accurate emotion recognition scores. The results revealed a significant main effect of emotion [F (5, 130) = 6.65, p < .001]; other effects were not significant (all p > .1). Based on this analysis and our preliminary analysis of brain-damaged controls, the gender and age of subjects and the stimulus type, which showed little effect on the performance of this task, were not considered in the following analyses.

Figure 2 shows the accuracy of recognition (mean percent of correct responses for all groups, and SD for brain-damaged controls and normal controls).


Comparing the number of correct responses by HY with those of normal controls, the results showed an evident reduction in the performance of HY with regard to fearful and surprised expressions, at about 1 SD below the normal controls. For other emotions, there were no significant differences between the scores of HY and those of normal controls (all p > .1). Visual inspection of the performance profile among emotional categories showed that HY had a lower accuracy for facial expressions of fear than for angry and disgusted facial expressions. In contrast, normal controls showed no differences in accuracy between facial expressions of fear and other stimuli with negative emotions.

When comparing the performance of HY with that of brain-damaged controls, the results showed a relative superiority in the performance of HY with regard to fearful expressions, at about 1 SD above the brain-damaged controls. For other emotions, there were no obvious differences between the scores achieved by HY and brain-damaged controls. Visual inspection of the performance among emotional categories of these groups suggested that the pattern of performance by HY was roughly comparable with that of brain-damaged controls.
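The informal "about 1 SD" comparisons above amount to expressing the single case's score as a z-score against the control sample. A minimal sketch, using hypothetical percent-correct scores (the paper reports only group means and SDs, not individual data):

```python
import statistics

def case_z(patient_score: float, control_scores: list) -> float:
    """z-score of a single case relative to the control sample mean and SD."""
    m = statistics.mean(control_scores)
    sd = statistics.stdev(control_scores)  # sample SD (n - 1 denominator)
    return (patient_score - m) / sd

# Hypothetical percent-correct fear-recognition scores for 13 controls.
controls = [75, 88, 62, 75, 88, 75, 62, 88, 75, 75, 62, 88, 75]
z = case_z(50, controls)
print(round(z, 2))  # negative: the case falls well below the control mean
```

More rigorous single-case inference would use a modified t-test of the kind developed by Crawford and colleagues, which treats the control sample statistics as estimates rather than population parameters.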

Fig. 2 – Mean percent correct (and standard deviation) of facial emotion recognition in normal controls (NORMAL), brain-damaged controls (BRAIN DAMAGE), and an amygdala-damaged patient (AMYG DAMAGE). HA = happiness; SA = sadness; SU = surprise; AN = anger; DI = disgust; FE = fear.

In addition, we compared the performance of normal controls with brain-damaged controls, using a two-way analysis of variance (ANOVA) with subject group (normal controls and brain-damaged controls) as a between-subject factor and emotional category (happiness, surprise, sadness, anger, disgust, and fear) as a within-subjects factor. The results revealed significant main effects of subject group [F (1, 20) = 17.15, p < .001] and emotional category [F (5, 100) = 28.44, p < .001], and a significant interaction of subject group × emotional category [F (2, 34) = 2.47, p < .05]. The main effect of subject group showed that the performance of brain-damaged controls was lower than that of normal controls. For the interaction of subject group and emotional category, further tests of the simple main effect of subject group revealed significant subject group differences for the emotional categories of surprise, anger, disgust, and fear [F (1, 120) = 4.48, p < .05; F (1, 120) = 9.22, p < .005; F (1, 120) = 8.03, p < .01; F (1, 120) = 21.62, p < .001], indicative of a relative reduction in the performance of brain-damaged controls.

Analysis of Errors

Error patterns were analyzed to explore the qualitative aspects of the performance. Figure 3 shows the error responses for each emotional label in each facial image presentation (mean percent ratio of error responses for all groups, and SD for brain-damaged controls and normal controls).

A comparison of the performance of HY with controls indicated that the former mistook fearful and angry facial expressions for happy facial expressions. On the other hand, both control groups showed some confusion with other negative emotions or surprise in responding to fearful and angry expressions. HY misrecognized surprised faces as happy facial expressions; this pattern was not found in normal controls, although some brain-damaged controls showed this error pattern.

To compare normal controls with brain-damaged controls, a one-way multivariate analysis of variance (MANOVA), with subject group (normal controls and brain-damaged controls) as a between-subject factor, was conducted on the error numbers for each expression. The significance of F-values was determined by Wilks' lambda criteria. Follow-up tests were conducted using univariate ANOVAs. For the surprised facial expression, the main effect of group was significant [F (5, 16) = 3.46, p < .05], and the follow-up tests revealed that misinterpretation as a happy expression was significantly higher in brain-damaged controls [F (1, 20) = 6.29, p < .05]. For the sad facial expression, there was no significant difference between groups (p > .1). For the angry facial expression, the main effect of group was significant [F (4, 17) = 5.97, p < .005], and the follow-up tests revealed that misinterpretation as surprise and as sadness was higher in brain-damaged controls [F (1, 20) = 9.21, p < .01; F (1, 20) = 9.53, p < .01]. For the disgusted facial expression, there was no significant difference between groups (p > .1). For the facial expression of fear, the main effect of group was significant [F (4, 17) = 7.46, p < .005], and the follow-up tests showed that a sad facial expression was selected significantly more often by brain-damaged controls [F (1, 20) = 7.96, p < .05].

To condense the information and to visualize the configurations of subjects in two dimensions, a principal component analysis (PCA) on the correlation matrix of error responses for each emotional expression was conducted (Figure 4). This procedure is known to readily expose multivariate outliers. The two primary components accounted for 100.00, 67.40, 69.69, 56.61, 58.84 and 54.29% of the total variance in the facial expressions of happiness, surprise, sadness, anger, disgust, and fear, respectively. When viewing the positions of HY in the configurations, for the fear and anger expressions, she was a clear outlier in the direction of the happiness category factor loading. For the remaining facial expressions, the position of HY was not conspicuous.
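The PCA described above can be sketched as follows. This is a minimal reconstruction with simulated data, not the authors' analysis: the error-count matrix is hypothetical, and standardizing the columns before eigendecomposition is what makes the PCA operate on the correlation matrix.

```python
import numpy as np

# Hypothetical error-count matrix: rows = 23 subjects (13 normal controls,
# 9 brain-damaged controls, HY), columns = the 5 possible error labels
# for one target expression. Real data would come from the 48-trial task.
rng = np.random.default_rng(0)
errors = rng.poisson(1.0, size=(23, 5)).astype(float)

# PCA on the correlation matrix: standardize each column, then
# eigendecompose the correlation matrix of the standardized data.
z = (errors - errors.mean(axis=0)) / errors.std(axis=0)
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)   # eigh returns ascending order
order = np.argsort(eigvals)[::-1]         # reorder: largest variance first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = z @ eigvecs[:, :2]               # 2-D configuration of subjects
explained = eigvals[:2].sum() / eigvals.sum()
print(scores.shape, f"{explained:.1%}")
```

An outlying subject such as HY would appear as a point far from the main cluster in the `scores` scatter, in the direction of the factor loading of the label she overused.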

Fig. 3 – Mean percent error (and standard deviation) of facial emotion recognition in normal controls (NORMAL), brain-damaged controls (BRAIN DAMAGE), and an amygdala-damaged patient (AMYG DAMAGE). HA = happiness; SA = sadness; SU = surprise; AN = anger; DI = disgust; FE = fear.

Fig. 4 – Scatter plots of the factor scores for each subject, with the plots of the factor loadings for each emotion category, analyzed using principal component analyses. NORMAL = normal controls; BRAIN DAMAGE = brain-damaged controls; AMYG DAMAGE = amygdala-damaged patient; AN = anger; DI = disgust; FE = fear; HA = happiness; SA = sadness; SU = surprise.

Discussion

The analysis of recognition accuracy showed that, although HY performed worse in recognizing fearful expressions relative to normal controls, the performance of HY for the facial expression of fear was better than that of brain-damaged controls. Rapcsak et al. (2000) reported that patients with focal brain damage not including amygdala lesions, and amygdala-damaged patients, were equally impaired in the recognition of facial expressions of fear compared to normal subjects. Based on this evidence, it was asserted that the apparent fear-recognition deficits exhibited by amygdala-damaged patients could be explained by task-specific difficulties. Our accuracy analysis was not inconsistent with the conclusion of Rapcsak et al. (2000).

However, the analyses of error responses provided information supplementary to the results of the accuracy analysis. HY had a specific error pattern: expressions of fear and anger were misinterpreted as happy facial expressions. This error pattern did not appear in either normal or brain-damaged controls. HY did not misrecognize facial expressions of fear or anger as negative emotions such as disgust, whereas normal and brain-damaged controls made a number of such errors. The PCA results showed that HY was clearly an outlier relative to both control groups for fearful and angry facial expressions. Taken together, these results indicated a distinctive profile of facial expression recognition by HY. The following debriefing is of interest in understanding the nature of HY's unique error pattern: HY mentioned that all the models in the photographs looked funny and peaceful. In line with this comment, Damasio (1999) and Adolphs et al. (1995) reported that patients with bilateral amygdala damage were less cautious with other people, and this was proved empirically (Adolphs et al., 1998). Adolphs and Tranel (1999) demonstrated that amygdala-damaged patients showed this abnormal positive bias not only for humans, but also for various entities, such as nonsense line drawings.

In summary: (i) HY was less able to recognize facial expressions of fear compared to normal controls, and HY's error pattern was different from that of the normal and brain-damaged control groups, suggesting a positive bias in emotional processing. (ii) HY showed a similar error pattern for anger recognition, although her recognition accuracy for this emotion did not clearly differ from that of normal controls.

EXPERIMENT 2

Based on the results of Experiment 1 and previously reported findings, we hypothesized that HY would be prone to evaluate fearful and angry expressions as happy ones. To test this hypothesis, we conducted an experiment using a set of morphed facial expressions, which blended two different expressions. Use of these morphed facial expressions enables a more sensitive assessment of facial expression recognition than could be attained with tests using prototypical expressions (Calder et al., 1996). In line with our hypothesis, we generated blends of happiness-fear facial expressions and blends of happiness-anger expressions. As a reference, we blended happiness and sadness, for which Experiment 1 did not detect any impairment in the amygdala-damaged patient. We presented these facial images to HY and 8 age-matched normal controls, all of whom were asked to categorize the expressions using a two-alternative forced-choice test (e.g., happiness or fear). We predicted that, for fearful and angry photographs blended with some happy content, HY would be more likely than normal controls to judge such images as exhibiting happy facial expressions.

Materials and Methods

Subjects

We studied HY and eight age-matched normal controls. The controls were four females and four males, ranging from 28 to 46 years old (mean: 36.9 years; SD: 7.0), whose ages were not significantly different from HY's (p > .1, t-test), and who did not have a history of neurologic or psychiatric illness.

Stimuli

The raw materials were photographs of the faces of four individuals, chosen from the aforementioned Caucasian standard set (Ekman and Friesen, 1976), depicting happy, fearful, angry, and sad facial expressions.

Continua of emotional facial expressions were made from these photographs. Between happiness and one of the other emotions (fear, anger, or sadness), nine intermediate images in 10% steps were created using computer-morphing techniques (Mukaida et al., 2000), implemented on a computer (Endeavor Pro-400L, Epson Direct) using the Linux operating system. Figure 5 shows examples of the stimulus sequences. For example, for the happiness-fear continuum, the morphed faces were generated by blending these two expressions in the proportions of 100:0, 90:10, 80:20, and so on (these expressions are referred to as 0, 10, and 20% fear expressions). For each of the happiness-fear, happiness-anger, and happiness-sadness continua, a total of 44 stimuli (four models, eleven stages) were generated, making a total of 132 stimuli.
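The proportional blending that defines the continuum can be sketched as a simple pixelwise cross-dissolve. Note that this is only an illustration of the mixing ratios: the actual stimuli were produced with feature-based morphing software (Mukaida et al., 2000), which warps facial landmarks as well as blending intensities, so a plain cross-dissolve is a deliberate simplification.

```python
import numpy as np

def blend(face_a, face_b, pct_b):
    """Pixelwise cross-dissolve: pct_b = 0 gives face_a, 100 gives face_b."""
    w = pct_b / 100.0
    return (1.0 - w) * face_a + w * face_b

# Toy 2x2 grayscale arrays standing in for the happy and fearful photographs.
happy = np.zeros((2, 2))
fear = np.full((2, 2), 100.0)

# The 11-stage continuum: 0, 10, ..., 100% fear (9 intermediates + 2 endpoints).
continuum = [blend(happy, fear, p) for p in range(0, 101, 10)]
print(len(continuum), continuum[5][0, 0])  # 11 stages; midpoint pixel = 50.0
```

Repeating this for four models and the three emotion pairings yields the 132 stimuli described above.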

Procedure

The events were controlled using the SuperLab software version 2.0 (Cedrus) implemented on a laptop computer (Inspiron 8000, Dell) with the Windows operating system.

Two-way categorizations for a set of morphed faces were conducted. The pictures of facial expressions were presented on a monitor one at a time. Before each trial, subjects were instructed to select either happiness or the other emotion (fear, anger, or sadness), whichever they considered best described the photograph presented. Each continuum was divided into two blocks, each consisting of 22 stimuli, and the stimuli in each block were presented in random order. The order of blocks was randomized initially, and then fixed for all subjects. Each stimulus was presented singly, making a total of 132 trials for each subject. There were no time limits and no feedback was provided about performance during the test. To avoid fatigue and drowsiness, subjects had a short rest after each block. Subjects were given a few training trials to become familiarized with the procedure.

Results

Figure 6 shows the total number of selections of happiness (mean numbers, with SD for controls).

HY's performance was compared with that of normal controls, using two-way ANOVAs with subject group (HY and controls) as a between-subject factor and mixture ratio (0-100%) as a within-subjects factor. For the happiness-fear sequence, the results revealed significant main effects of subject group [F (1, 7) = 11.41, p < .05] and mixture ratio [F (10, 70) = 19.33, p < .001], and a significant interaction of subject group × mixture ratio [F (10, 70) = 2.02, p < .05]. For the interaction, further tests of the simple main effect of subject group revealed that HY selected happiness significantly more often than controls for the 50, 60, 70, and 80% fear expressions [F (1, 77) = 10.65, p < .005; F (1, 77) = 10.65, p < .005; F (1, 77) = 4.25, p < .05; F (1, 77) = 4.99, p < .05]. For this sequence, it was also noted that HY selected happiness once for the 100% fearful expression (the prototypical fearful expression); none of the controls selected happiness for the 100% fearful expression. For the happiness-anger sequence, the main effects of subject group and mixture ratio, and their interaction, were significant [F (1, 7) = 9.05, p < .05; F (10, 70) = 67.63, p < .001; F (10, 70) = 4.66, p < .001]. Follow-up simple effect analyses revealed that HY selected happiness significantly more often than controls for the 50, 60, 70, and 80% anger expressions [F (1, 77) = 8.92, p < .005; F (1, 77) = 21.32, p < .001; F (1, 77) = 10.65, p < .005; F (1, 77) = 26.63, p < .001; F (1, 77) = 4.72, p < .05]. For the happiness-sadness sequence, there were no significant differences between HY's scores and those of controls, though the main effect of mixture ratio was significant [F (10, 70) = 31.81, p < .001].

Fig. 5 – Examples of the stimuli used in Experiment 2. From top to bottom, the interpolated continua of happiness-fear (HA-FE), happiness-anger (HA-AN), and happiness-sadness (HA-SA) are shown. From left to right, each face blends the partner emotion of happiness (i.e., fear, anger, or sadness) at rates of 0, 20, 40, 60, 80, and 100%, respectively.

Fig. 6 – Mean happiness selection number (and standard deviation) in the two-way forced-choice emotion recognition of morphed facial images in normal controls (NORMAL) and in an amygdala-damaged patient (AMYG DAMAGE). From top to bottom, the interpolated continua of happiness-fear (HA-FE), happiness-anger (HA-AN), and happiness-sadness (HA-SA) are shown.

Discussion

The results clearly demonstrated that HY categorized the morphed fearful and angry expressions blended with some happy content as happy facial expressions more frequently than normal controls did. For the morphed sad expressions, HY performed within the normal range. These results were consistent with our hypothesis that HY would be prone to evaluate fearful and angry expressions as happy facial expressions. This deviation in recognizing morphed fearful and angry expressions in a bilateral amygdala-damaged patient was in accordance with a previous study (Calder et al., 1996). However, because the study of Calder et al. (1996) blended two types of expressions that were most likely to be confused with each other (e.g., fear and surprise) or were of interest (e.g., fear and anger), the profiles of the patients' impairment were only evident as low accuracies or noisy patterns. Our results clearly showed the impaired recognition of fearful and angry facial expressions, and the unique error patterns for these expressions.

HY's comments, which were recorded as supplementary data, provided supportive information for the positive bias expressed towards the recognition of fearful and angry facial expressions. During the trials with the happiness-fear and happiness-anger continua, HY stated that she was having difficulty more often than the control subjects did; HY stated that neither of the choices was appropriate for morphed expressions blending fearful or angry content with happy content (for the 60, 70, 80, 90, and 100% fearful, and the 60, 70, 80, and 90% angry expressions). In contrast, HY reported no difficulty during the happiness-sadness continuum trials. Moreover, when HY selected a very fearful expression as a happy one – because of her difficulty in judgment – she repeatedly reported that these faces were a mixture of happiness and surprise, but did not detect the fear component. Sometimes, HY referred to specific events to describe such emotions. For instance, HY described a male's 90% fearful expression as a face expressing a happy surprise – as if the man were meeting a cousin he had not met for a long time.

GENERAL DISCUSSION

In the current study, we examined facial expression recognition in a patient with bilateral amygdala damage. In Experiment 1, the recognition of expressions of six basic emotions was tested. In Experiment 2, the categorization of morphed facial expressions was investigated. In summary, the results showed that HY was impaired in recognizing the facial expressions of fear and anger, and that she tended to evaluate these facial expressions as happy facial expressions.

These results support the assumption that damage to the amygdala impairs the recognition of facial expressions. This has been suggested by many previous studies (Adolphs et al., 1994, 1995; Young et al., 1995; Calder et al., 1996; Young et al., 1996; Broks et al., 1998; Adolphs et al., 1999a, 1999b), although there have been some exceptions (Hamann et al., 1996; Hamann and Adolphs, 1999). The impaired recognition of both fearful and angry expressions in an amygdala-damaged patient also corroborates several studies suggesting that patients are impaired in the recognition of expressions of fear and other emotions, particularly anger (Adolphs et al., 1994, 1995; Calder et al., 1996; Adolphs et al., 1999a, 1999b).

A previous study (Rapcsak et al., 2000) pointed out that both amygdala-damaged patients and other brain-damaged subjects without amygdala damage have impaired recognition of some negative emotions, such as fear, which corresponds to our results in Experiment 1. Nevertheless, our results demonstrated that the impaired expression recognition of amygdala-damaged patients and that of brain-damaged controls are qualitatively different. HY tended to evaluate fearful and angry facial expressions as happy expressions, but none of the brain-damaged or normal controls showed such a pattern. This positively biased visual recognition in an amygdala-damaged patient is similar to findings in previous studies of amygdala-damaged patients, which reported a preferential bias towards non-facial stimuli (e.g., simple nonsense figures) that are normally not preferred (Adolphs and Tranel, 1999), and a bias to evaluate human faces as more trustworthy and approachable than they are normally evaluated (Adolphs et al., 1998).

Although HY clearly showed positively biased evaluations in the recognition of fearful and angry facial expressions, previous studies investigating expression recognition in such patients were not unanimous on this point. For example, Rapcsak et al. (2000) reported that patients with amygdala damage showed the same pattern of mistaking fearful faces for surprised faces as that exhibited by normal control subjects. We speculate that this discrepancy may be attributable to additional brain lesions in the amygdala-damaged patients tested in these earlier studies. Some amygdala-damaged patients in other studies had larger lesions, including, for example, the anterior cingulate. On the other hand, HY had relatively circumscribed lesions of the bilateral amygdala, as diagnosed by MRI; however, it must be noted that her lesion might not be restricted to the amygdala and its adjacent areas, because herpes simplex encephalitis can elicit brain damage without any evident abnormality on MRI. As our results for brain-damaged controls show, brain damage outside of the amygdala would affect the perception of fearful facial expressions and increase the tendency to misrecognize fearful faces as surprised faces, or as other faces with negative emotions. This may reflect the disproportionate difficulty of fearful expression recognition in tests using facial images of six basic emotions (Rapcsak et al., 2000). Therefore, we speculate that in some previous studies, the effect of greater brain damage may have obscured positive biases due to amygdala damage.

As for the inconsistency in results between studies, some researchers have pointed to differences in strategies across subjects (e.g., Adolphs et al., 1999b). With regard to this point, HY provided valuable information at her debriefing after Experiment 1. HY stated that in the course of the test session, she sometimes had difficulty in guessing the models' emotional states from their faces, but found the task easier when she tried to assume that they were mimicking each emotional expression intentionally. This suggests that multiple strategies are available for recognizing emotions in facial expressions, and that the impact of amygdala damage differs across strategies.

In conclusion, the results of this study showed that a patient with bilateral amygdala damage had impaired recognition of the facial expressions of fear and anger, and was prone to evaluate these expressions as happy facial expressions. These results suggest that amygdala damage elicits positively biased recognition of negative facial expressions, such as fear and anger.

Acknowledgements. The authors heartily thank all the participating subjects, Dr. Makoto Nakamura for his technical support, and Professor Edward de Haan and our anonymous reviewers for their helpful advice. This study was supported by Special Coordination Funds for Promoting Science and Technology of the Science and Technology Agency of the Japanese Government.

REFERENCES

ADOLPHS R, DAMASIO H, TRANEL D, COOPER G and DAMASIO AR. A role for somatosensory cortices in the visual recognition of emotion as revealed by three-dimensional lesion mapping. Journal of Neuroscience, 20: 2683-2690, 2000.

ADOLPHS R, DAMASIO H, TRANEL D and DAMASIO AR. Cortical systems for the recognition of emotion in facial expressions. Journal of Neuroscience, 16: 7678-7687, 1996.

ADOLPHS R, RUSSELL JA and TRANEL D. A role for the human amygdala in recognizing emotional arousal from unpleasant stimuli. Psychological Science, 10: 167-171, 1999a.

ADOLPHS R and TRANEL D. Preferences for visual stimuli following amygdala damage. Journal of Cognitive Neuroscience, 11: 610-616, 1999.

ADOLPHS R, TRANEL D and DAMASIO AR. The human amygdala in social judgment. Nature, 393: 470-474, 1998.

ADOLPHS R, TRANEL D, DAMASIO H and DAMASIO AR. Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala. Nature, 372: 669-672, 1994.

ADOLPHS R, TRANEL D, DAMASIO H and DAMASIO AR. Fear and the human amygdala. Journal of Neuroscience, 15: 5879-5891, 1995.

ADOLPHS R, TRANEL D, HAMANN S, YOUNG AW, CALDER AJ, PHELPS EA, ANDERSON A, LEE GP and DAMASIO AR. Recognition of facial emotion in nine individuals with bilateral amygdala damage. Neuropsychologia, 37: 1111-1117, 1999b.

BENTON AL and VAN ALLEN MW. Test of Facial Recognition. New York: Oxford University Press, 1983.

BROKS P, YOUNG AW, MARATOS EJ, COFFEY PJ, CALDER AJ, ISAAC CL, MAYES AR, HODGES JR, MONTALDI D, CEZAYIRLI E, ROBERTS N and HADLEY D. Face processing impairments after encephalitis: Amygdala damage and recognition of fear. Neuropsychologia, 36: 59-70, 1998.

CALDER AJ, YOUNG AW, ROWLAND D, PERRETT DI, HODGES JR and ETCOFF NL. Facial emotion recognition after bilateral amygdala damage: Differentially severe impairment of fear. Cognitive Neuropsychology, 13: 699-745, 1996.

DAMASIO AR. The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt Brace, 1999.

EKMAN P and DAVIDSON RJ (Eds.). The Nature of Emotion: Fundamental Questions. New York: Oxford University Press, 1994.

EKMAN P and FRIESEN WV. Pictures of Facial Affect. Palo Alto: Consulting Psychologists Press, 1976.


HAMANN SB and ADOLPHS R. Normal recognition of emotional similarity between facial expressions following bilateral amygdala damage. Neuropsychologia, 37: 1135-1141, 1999.

HAMANN SB, STEFANACCI L, SQUIRE LR, ADOLPHS R, TRANEL D, DAMASIO H and DAMASIO AR. Recognizing facial emotion. Nature, 379: 497, 1996.

HORNAK J, ROLLS ET and WADE D. Face and voice expression identification in patients with emotional and behavioural changes following ventral frontal lobe damage. Neuropsychologia, 34: 247-261, 1996.

JAPANESE SOCIETY OF APHASIOLOGY (Ed.). Visual Perception Test for Agnosia. Tokyo: Shinko-Igaku, 1987.

MATSUMOTO D and EKMAN P. Japanese and Caucasian Facial Expressions of Emotion. San Francisco: Intercultural and Emotion Research Laboratory, Department of Psychology, San Francisco State University, 1988.

MUKAIDA S, KAMACHI M, KATO T, ODA M, YOSHIKAWA S and AKAMATSU S. Foolproof Utilities for Facial Image Manipulation (unpublished computer software). Kyoto: Advanced Telecommunications Research Institute International, 2000.

RAPCSAK SZ, GALPER SR, COMER JF, REMINGER SL, NIELSEN L, KASZNIAK AW, VERFAELLIE M, LAGUNA JF, LABINER DM and COHEN RA. Fear recognition deficits after focal brain damage: A cautionary note. Neurology, 54: 575-581, 2000.

RUSSELL JA and FERNANDEZ-DOLS JM (Eds.). The Psychology of Facial Expression. New York: Cambridge University Press, 1997.

YOUNG AW, AGGLETON JP, HELLAWELL DJ, JOHNSON M and BROKS P. Face processing impairments after amygdalotomy. Brain, 118: 15-24, 1995.

YOUNG AW, HELLAWELL DJ, VAN DE WAL C and JOHNSON M. Facial expression processing after amygdalotomy. Neuropsychologia, 34: 31-39, 1996.

Wataru Sato, Department of Cognitive Psychology in Education, Graduate School of Education, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto, 606-8501, Japan. e-mail: [email protected]

(Received 25 April 2001; reviewed 9 July 2001; revised 9 January 2002; accepted 11 March 2002; Action Editor: Edward de Haan)
