
www.sciencemag.org/cgi/content/full/323/5918/1222/DC1

Supporting Online Material for

In Bad Taste: Evidence for the Oral Origins of Moral Disgust

H. A. Chapman,* D. A. Kim, J. M. Susskind, A. K. Anderson*

*To whom correspondence should be addressed. E-mail: [email protected] (H.A.C.); [email protected] (A.K.A.)

Published 27 February 2009, Science 323, 1222 (2009)
DOI: 10.1126/science.1165565

This PDF file includes:

Materials and Methods
SOM Text
Figs. S1 to S3
Table S1
References

Supporting Online Material

    Materials and Methods

    Experiment 1

    Participants: Twenty-seven healthy adults (19 female; mean age 20.6 yrs) participated in

    the study for course credit or $20. Participants with past or current history of psychiatric

    disorder were excluded, as were participants with a known disorder of taste or smell. All

    procedures in this and subsequent experiments were approved by the Research Ethics

    Board at the University of Toronto, and participants gave informed consent prior to

    beginning the studies.

    EMG apparatus: EMG data were acquired using a BIOPAC MP-150 system (S1). At

acquisition, data were amplified (×1000) and filtered with a bandpass of 100-500 Hz. Electrodes were placed over the levator labii muscle region on the left side of the face, using the placements suggested in (S2).

Chemosensory stimuli: Solutions of quinine sulfate (ranging from 1.0 × 10⁻³ to 1.0 × 10⁻⁵ M), sodium chloride (5.6 × 10⁻¹ to 1.0 × 10⁻¹ M), citric acid (1.0 × 10⁻¹ to 1.8 × 10⁻³ M) and sucrose (1.0 to 0.18 M) were used as bitter, salty, sour and sweet stimuli, respectively.

    Tastant concentrations were selected individually for each participant by having

    participants sample and rate the intensity and valence of 5 concentrations of the bitter and

    sour solutions (which were most aversive, as indicated by pilot testing) and 4

    concentrations of the salty and sweet solutions. Tastants were presented in small cups,

    with a sample size of 2ml. Valence and intensity were rated on 11-point scales, relative to

tap water: participants were instructed to assume that water has zero intensity and neutral valence. After all the solutions were rated, the experimenter reviewed the participants' responses and selected bitter, salty and sour concentrations with ratings close to "very unpleasant." Sucrose concentrations were selected so as to be approximately equivalent in perceived intensity to the other solutions ("strong" intensity).

    Procedure: Following the rating procedure, the experimenter applied the EMG

    electrodes, set up the sample cups for the first block of the experiment, and left the testing

    booth. The experiment was divided into five blocks, one each for water and the bitter,

    sweet, salty, and sour solutions. Because the taste of quinine lingers in the mouth for an

    extended period, the bitter block was always presented last, while the other tastants were

    presented in counterbalanced order. Blocks consisted of five taste trials in which

    participants sampled 2ml of the appropriate liquid. To control the timing of each trial,

    written instructions were presented on a computer monitor using E-Prime experimental

    software (S3). Trials began with a 20s baseline period during which participants rested

    quietly. Next came an 8s period when participants grasped the first sample cup and

    brought it to their lips. This was followed by 8s during which participants sipped the

    contents of the sample cup, swished the liquid twice in their mouth and held the liquid in

    their mouth without swallowing. Participants next swallowed the liquid, and after a

    further 8s, rinsed their mouths with ~10ml of water, swallowing when finished. Finally,

    participants rated the valence and intensity of the preceding sample using the scales

    described above. After participants completed all five trials of a block, the experimenter

    entered the booth to arrange the sample cups for the next block.


EMG analysis: EMG data were analyzed using custom software written in MATLAB

    (Version 7; S4). A 30 Hz high-pass filter was applied to the EMG data to reduce low-

    frequency noise. The signal was then rectified with an absolute value function and

    smoothed with a 10ms window. The period of interest in each trial was the 8s of the

    swallow phase, which is least likely to be contaminated by extraneous muscle

    activation due to sipping. Signal from the final 8s of the preceding baseline period was

    used to calculate signal change during the swallow phase. A contour following integrator

    was then applied to compute a running sum of sample values. The final value of the sum

    at the end of each swallow period was used for statistical analysis.
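For concreteness, the preprocessing pipeline described above can be sketched in a few lines. The sketch below is illustrative only: it uses Python with NumPy/SciPy rather than the custom MATLAB software (S4), and the 2000 Hz sampling rate and fourth-order Butterworth design are assumptions not stated in the text.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def preprocess_emg(raw, fs=2000.0):
        """30 Hz high-pass, full-wave rectification, 10 ms moving-average smoothing."""
        # High-pass filter to reduce low-frequency noise (filter order assumed)
        b, a = butter(4, 30.0 / (fs / 2.0), btype="highpass")
        filtered = filtfilt(b, a, raw)
        rectified = np.abs(filtered)            # rectify with an absolute value function
        win = max(1, int(round(0.010 * fs)))    # 10 ms smoothing window
        return np.convolve(rectified, np.ones(win) / win, mode="same")

    def swallow_score(smoothed, fs, baseline_end, swallow_start, window_s=8.0):
        """Integrated, baseline-corrected activity over the 8 s swallow phase."""
        n = int(window_s * fs)
        baseline = smoothed[baseline_end - n:baseline_end]   # final 8 s of baseline
        swallow = smoothed[swallow_start:swallow_start + n]
        change = swallow - baseline.mean()                   # signal change score
        # The contour-following integrator is approximated here by the running sum
        # described in the text; its final value is the per-trial statistic.
        return np.cumsum(change)[-1]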

    Statistics: The correlation between levator labii region EMG and valence was computed

    on intertrial variability, in part because of variability in the overall strength of the EMG

    response across individuals. Each trial yielded both an EMG response and a paired self-

    report of valence. For each participant, the EMG/self-report pairs for all 25 trials were

    rank-ordered by decreasing valence. The EMG responses at each rank were then

    averaged across participants, and the average values were correlated with valence rank.
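The rank-averaging procedure is straightforward to express in code. In the sketch below, each participant contributes parallel arrays of 25 per-trial EMG scores and valence ratings; Pearson's r is assumed for the final correlation, which the text does not state explicitly.

    import numpy as np
    from scipy.stats import pearsonr

    def rank_averaged_correlation(emg_by_subject, valence_by_subject):
        """Rank each participant's trials by decreasing valence, average the EMG
        response at each rank across participants, then correlate with rank."""
        ranked = []
        for emg, valence in zip(emg_by_subject, valence_by_subject):
            order = np.argsort(valence)[::-1]            # decreasing valence
            ranked.append(np.asarray(emg)[order])
        mean_emg_at_rank = np.mean(ranked, axis=0)       # average across participants
        ranks = np.arange(1, mean_emg_at_rank.size + 1)
        return pearsonr(ranks, mean_emg_at_rank)         # (r, p)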

    Experiment 1b

    Appearance model: An additional group of 20 participants (13 female, mean age 23.2)

    were filmed as they underwent the taste sampling procedure described above in order to

    train a computerized facial appearance model to uncover the underlying action tendencies

    associated with the distaste response. Since not all participants showed visible facial

    actions in response to ingestion of the unpleasant tastants, the upper quartile with the


strongest overt facial movements was included for training the appearance model. Still images capturing these participants' peak responses to the bitter, sweet and neutral

    tastants were pulled from the video clips; responses to salty and sour tastants were

    excluded from the analysis so as not to bias the model toward unpleasant tastes. For those

    individuals who were wearing eyeglasses, the spot healing brush tool in Photoshop was

    used to remove the glasses from the still images, to reduce artifacting in subsequent

    analysis steps.

    A facial appearance model was created from these images using the Cootes et al.

    methodology (S5). First, each of the face exemplars was manually annotated with the

    two-dimensional Cartesian coordinates (relative to the image frame) of 68 distinct facial

    landmarks. The set of landmarks conforms to an annotation scheme designed to provide

    feature information for the facial contour, eyebrows, eyes, nose, and mouth, based on a

    subset of features specified by the MPEG-4 coding standard for facial expression

    animation (S6). All faces were cut out from the background image using a polygonal

    region defined by the outer contour delimited by the labeling scheme. Pixels inside the

    face region were histogram equalized to remove variability due to lighting differences

    across faces. All of the faces were then commonly aligned to remove global differences

    in size, rotation, and translation, using a generalized Procrustes transformation (S7).

    Principal components analysis (PCA) was applied to the covariance matrix of the shape

    data using eigenvalue decomposition, resulting in eigenvalue/eigenvector pairs used to

    parameterize uncorrelated dimensions of shape variation. The pixel textures for each face

    were piecewise affine warped to a common shape-normalized reference frame defined as

    the mean shape computed from the entire set of annotated facial landmarks. Textures


extracted in this fashion are known in the literature as "shape-free," having the property that internal features in the texture map for each face are relatively well-aligned (S5). PCA

    was then applied via eigenvalue decomposition on the covariance matrix of the shape-

    free pixel vectors, resulting in parameters describing uncorrelated dimensions of texture

    variation. In order to collapse important covariations between face shape and texture, a

    third, combined PCA was performed on concatenated shape and texture parameter

    vectors (S5). First, the shape PCA projections were re-weighted to the same variance

    metric as the texture projections by multiplying the shape coefficients by the sum of the

    texture eigenvalues over the sum of the shape eigenvalues. Then the shape and texture

    vectors were concatenated into single vectors for each face exemplar, and eigenvalue

    decomposition was performed on the covariance matrix of the combined shape and

    texture vectors. The resulting combined shape and texture basis retained the full rank of

    the covariance matrix of the face data (there was no data compression). Each face in the

    dataset was thus parameterized by a vector of appearance loadings.
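The combined PCA step can be summarized as follows. This is a schematic re-expression of the Cootes et al. procedure (S5) in Python/NumPy, not the authors' code; the separate shape and texture PCA projections and their eigenvalues are assumed to have been computed already.

    import numpy as np

    def combined_appearance_pca(shape_proj, tex_proj, shape_eigvals, tex_eigvals):
        """Combined shape-and-texture PCA (S5); *_proj are (n_faces, n_params)."""
        # Re-weight shape projections to the same variance metric as texture
        w = tex_eigvals.sum() / shape_eigvals.sum()
        combined = np.hstack([shape_proj * w, tex_proj])
        # Eigenvalue decomposition of the covariance matrix; full rank retained
        eigvals, eigvecs = np.linalg.eigh(np.cov(combined, rowvar=False))
        order = np.argsort(eigvals)[::-1]
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]
        # Appearance loadings parameterizing each face exemplar
        mean_combined = combined.mean(axis=0)
        loadings = (combined - mean_combined) @ eigvecs
        return loadings, eigvecs, mean_combined, w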

    Photorealistic face images were created using the appearance model by reversing the

    process that converted images to appearance loadings: given a vector of combined PCA

    loadings, the vector was transformed back into a concatenated shape and texture vector,

    and the shape coefficients were re-weighted to their original metric. Then the shape and

    texture PCA coefficients were transformed back into Euclidean shape coordinates and

    shape-free pixel values. Finally, the shape-free pixels were warped to the specified shape

    coordinates from the shape coordinates of the average face. Expression prototypes for

    bitter, neutral, and sweet tastants were created by averaging the vector representations of

    all exemplars of a category and synthesizing face images via the above process. The


resulting prototype taste images were exaggerated in intensity to accentuate their

    characteristic action tendencies by scaling each prototype vector by 2.
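In appearance-loading space, prototype creation and exaggeration reduce to averaging and scaling. The sketch below continues from the hypothetical combined_appearance_pca helper above; rendering the final image (inverting the separate shape and texture PCAs and the piecewise affine warp) is indicated only in comments.

    import numpy as np

    def exaggerated_prototype(category_loadings, eigvecs, mean_combined, w,
                              n_shape, scale=2.0):
        """Average the loadings for one tastant category, scale by 2 to exaggerate,
        and back-project to shape and texture parameters."""
        prototype = scale * np.mean(category_loadings, axis=0)
        combined = mean_combined + eigvecs @ prototype     # invert the combined PCA
        shape_params = combined[:n_shape] / w              # undo variance re-weighting
        tex_params = combined[n_shape:]
        # shape_params and tex_params would then be mapped back through the shape
        # and texture PCAs and the warp to synthesize the prototype face image.
        return shape_params, tex_params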

    Experiment 2

    Participants: Nineteen healthy adults participated in this experiment. Participants with

    current or past history of psychiatric disorders were excluded. EMG data from one

    participant were eliminated for technical reasons, for a final sample size of 18 (8 female,

    mean age 20.4 years).

    Stimuli: Twenty disgusting and twenty sad photographs were chosen from the

    International Affective Picture System (IAPS; S8) in a two-step selection process. In the

    first step, published emotional category ratings (S9) were used to identify disgusting

    photographs that were rated as high in disgust but low in sadness, as well as sad

    photographs that were high in sadness but low in disgust. A group of 20 neutral

    photographs low in both sadness and disgust was also selected.

    In the second step, ratings from the IAPS norms were used to match the sad and

    disgusting photographs on mean valence. The final set of 20 disgusting photographs was

    largely composed of images of contamination and uncleanliness, such as body products,

    body envelope violations, and insects. The 20 sad photographs included (among others)

    homeless persons, traffic accidents, and distraught or ill individuals (illness photographs

    did not involve overt injury or mutilation, contagious disease or body products). Neutral

    photographs included household objects, outdoor scenes and abstract images.


Procedure: The experimenter first applied the EMG electrodes and left the room. The 60

    photographs were then presented once in random order. Each trial began with a 6s

    baseline period during which participants viewed a fixation cross, followed by 6s of

    image presentation. Participants then rated the preceding image on sadness and disgust,

    using 9-point scales, before continuing to the next photograph.

    EMG analysis: EMG data were processed largely as in the first experiment. The period of

    interest in each trial was the 6s image presentation period. Signal from the 6s of fixation

    preceding each image was used as a baseline to calculate signal change scores.

    Statistics: The correlations between levator labii region EMG and disgust/sadness ratings

    were computed similarly to Experiment 1. Disgust and sadness ratings for all 60

    photographs were rank-ordered for each participant, in order of increasing disgust or

    sadness. The paired levator labii region responses at each rank were then averaged across

    participants, and these values were correlated with rank.

    Experiment 3

    Participants: Twenty-one healthy volunteers with no current or past history of

    psychiatric disorder participated in this study. Five participants were disqualified because

    debriefing revealed that they had suspected the deception involved in the experiment. The

    final sample size was 16 (11 female; mean age 22.7 yrs).

    EMG acquisition: EMG data were acquired as in the previous experiments.


Procedure: Participants were tested in groups of 2-3, with each participant seated in a private booth. The experimenter first explained the nature and rules of the Ultimatum Game (UG), and

    informed participants that they would always play the role of responder. Participants

    were next introduced to a group of 10 proposers (confederates) seated at computer

    terminals in two adjoining rooms. All players were told that they would be paid

    according to their choices in the game. Participants returned to the testing room and the

    EMG electrodes were applied.

    In the UG, each participant played 20 rounds of the game, one with each of the 10

    confederates and 10 with an avowed computer partner. In reality, all offers were

    generated by a computer algorithm so as to control the size and number of offers made.

    Rounds were presented in random order. The 10 offers from the computer partner were

    identical to those from the human partners, and mimicked the range and distribution of

    offers made in uncontrolled versions of the game, in which actual human proposers make

    offers (e.g., S10): 50% fair ($5:$5) offers, 20% $9:$1 offers, 20% $8:$2 offers and

    10% $7:$3 offers. No $6:$4 offers were presented, as we were primarily interested in

exploring the response to offers that were more strongly unfair. As offer acceptance, self-reported emotions, and EMG activity did not differ significantly between partner types (see supplementary Results), data were collapsed across partner type for all analyses.

    Each UG round began with a 6s waiting period followed by 6s of fixation. The

    participant then saw the photograph and name of their partner in that round for 6s. Next,

    participants saw the offer proposed by their partner for 6s, after which came a 6s window

    in which they indicated whether they accepted or rejected the offer by pressing one of


two keys on the keyboard. To reinforce the social nature of the interaction, names and

    photographs of proposers remained visible throughout the offer and decision phases.

After seeing the trial's outcome displayed for 6s, participants rated, on a 1-7 scale,

    how well their feelings about the preceding offer were represented by a reliably

    recognized facial expression of a universal emotion (S11). The seven emotion rating

    slides (happiness, sadness, anger, fear, disgust, surprise, contempt) that followed each

    offer were presented in random order. Finally, participants were debriefed and paid for

    their participation according to their choices in the games.

    EMG analysis: Using the same custom software, EMG data were first filtered with a 55-

    65Hz notch filter and then with a 30Hz high-pass filter, before rectifying and smoothing

as above. We analyzed the 6s period during which the proposer's fair or unfair offer was

    revealed; recordings from 6s before the onset of the offer display (i.e., during the partner

    display) were used as a baseline from which to calculate signal change scores. Lastly, a

    contour following integrator was applied.
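The added 55-65 Hz notch step could be approximated as below; the remaining steps follow the Experiment 1 sketch. The notch is modeled here as an IIR filter centered at 60 Hz with a 10 Hz bandwidth (Q = 6), and the sampling rate is again an assumption.

    from scipy.signal import iirnotch, filtfilt

    def notch_55_65(raw, fs=2000.0):
        """Approximate 55-65 Hz notch (center 60 Hz, bandwidth 10 Hz, Q = 6)."""
        b, a = iirnotch(w0=60.0, Q=6.0, fs=fs)
        return filtfilt(b, a, raw)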

    Statistics: Preliminary analyses revealed only a few small differences between responses

    to human vs. computer partners. Results presented in the main text thus give combined

    data from all 20 trials.

    The correlations between disgust, anger, sadness and contempt endorsement and

    levator labii region EMG were computed similarly to the previous experiments. Emotion

    ratings for all 20 trials were rank-ordered for each participant by increasing endorsement,


and the corresponding EMG values were averaged across participants at each rank.

    Finally, the average EMG values were correlated with emotion rank.

    The correlations between levator labii region activity and behavioral responses to

    offers were computed by rank-ordering the levator labii region EMG response for all 10

    unfair trials (offers of $7:$3 or less) for each participant, in order of increasing activation.

    The corresponding accept/reject decisions (1 or 2) at each rank were then averaged across

participants, and the average values were correlated with activation rank. The correlations

    between emotion endorsement and behavioral response were computed similarly, except

    that decisions were rank-ordered by increasing self-reported disgust, sadness or anger.

    Labeling of expressions used in self-report task: To examine whether the non-verbal self-

    report method that we employed separates anger from disgust, a separate group of 12

    participants (7 female, mean age 27.1 years) matched canonical emotional facial

    expressions to written emotion descriptors. On each trial, participants viewed an array of

    emotional expressions (angry, disgusted, contemptuous, sad, fearful, surprised and happy;

    S11) as well as a single written emotion descriptor. The task was to select the expression

    that was the best match for the descriptor. Descriptors were synonyms for the classical

emotion terms. Four disgust-themed descriptors were presented ("tastes something bad," "smells something bad," "touches something dirty," "grossed out"), as were four anger-themed labels ("frustrated," "annoyed," "pissed off," "irritated"). Descriptors for contempt ("disapproving"), happiness ("cheery"), surprise ("amazed"), fear ("scared") and sadness ("glum") were also presented. For analysis, responses to the four disgust-themed descriptors were collapsed together, as were responses to the four anger-themed


descriptors. The frequency with which the anger and disgust expressions were selected to

    match the anger- and disgust-themed descriptors was analyzed using chi-square tests.
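One plausible form for these tests is a one-degree-of-freedom goodness-of-fit chi-square on the counts of anger versus disgust expression choices; the sketch below uses purely illustrative counts, and the exact tabulation used by the authors is an assumption.

    from scipy.stats import chisquare

    # Illustrative counts only (not the study's data): how often the anger vs.
    # disgust expression was chosen across one class of descriptors.
    anger_chosen, disgust_chosen = 30, 5
    # 1-df goodness-of-fit test against equal selection frequencies
    chi2, p = chisquare([anger_chosen, disgust_chosen])
    print(f"chi2(1) = {chi2:.1f}, p = {p:.3g}")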

    Appearance model: The pattern of negative emotions endorsed for $9:$1 offers was

    depicted using modified versions of the faces used in the self-report task. The larger

    facial appearance model was derived from a standard cross-cultural dataset of posed

    facial expressions that included the self-report faces (S11), using the same methodology

    described for Experiment 1 above. The intensity of disgust, anger and sadness

    expressions used in the self-report task was scaled by the degree of emotion endorsement

    for each expression. More specifically, the vector representations for each face were

    multiplied by the ratio of the mean emotion endorsement value for that emotion to the

    maximum possible endorsement. This resulted in new faces whose expression intensity

    visually reflects the strength of emotion endorsement for that emotion.
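Concretely, the scaling amounts to multiplying an appearance vector by an endorsement ratio; a minimal sketch, with the maximum taken from the 1-7 rating scale described above:

    import numpy as np

    def scale_expression_intensity(appearance_vec, mean_endorsement, max_endorsement=7.0):
        """Scale an expression's appearance-loading vector by the ratio of mean
        emotion endorsement to the maximum possible endorsement."""
        return np.asarray(appearance_vec) * (mean_endorsement / max_endorsement)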

    Indexing similarities between expressions: Similarity between the disgust expression

    prototype and the bitter expression from Experiment 1 as well as seven canonical facial

    expressions of emotion (S11) was assessed using the appearance model. The bitter

    expression prototype was fit to the appearance model after training the model on images

    of the seven basic emotions, allowing a test of the generalization of the model to a novel

    expression. Pearson correlations were measured between the vector coding the disgust

    expression prototype and the vectors coding bitter, anger, contempt, sadness, surprise,

    fear and happiness.
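The similarity index is a set of Pearson correlations between appearance vectors; a minimal sketch, assuming the prototype vectors have already been fit to the model:

    from scipy.stats import pearsonr

    def similarity_to_disgust(disgust_vec, prototype_vecs):
        """Pearson correlation between the disgust prototype's appearance vector
        and each comparison prototype (bitter, anger, contempt, sadness, surprise,
        fear, happiness)."""
        return {name: pearsonr(disgust_vec, vec)[0]
                for name, vec in prototype_vecs.items()}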


Results

    Comparison of responses to human vs. computer partners in Experiment 3: Previous

    research has found that behavioral and neural responses to unfair offers differ depending

    on whether the offer was made by a human or a computer partner (S12). In view of these

    findings, we included a manipulation of partner identity in Experiment 3, with the

    expectation that unfair offers from a computer partner might evoke weaker emotional

    responses. However, contrary to this hypothesis, we did not find strong differences

    between responses to humans and computers. For example, our participants accepted

    only marginally fewer unequal offers from computers than from humans (paired samples

    t-test: t[15] = 1.88, p = 0.08). In particular, participants rejected the most unfair offers

    ($9:$1) as often from computers as from humans (t < 1). Self-reported emotion in

    response to the offers did not differ significantly between humans and computers

    (repeated measures ANOVA: F[1,15] = 1.30, p = 0.28). Finally, levator labii region

    EMG did not differ between offers from humans and computers (repeated measures

    ANOVA: F[1,15] = 2.0, p = 0.18).

    It is not clear why we failed to replicate the previously observed differences due to

    partner identity, but we note that there are other experimental paradigms in which

    participants experience strong emotional reactions to computers. For example, exclusion

    from a virtual ball-toss game results in anger and hurt feelings as well as decreases in

    self-esteem and feelings of belonging (S13). The strength of these reactions does not

    depend on whether participants believe they are playing with a computer or with other

    humans (S13). Thus, individuals may not always distinguish strongly between human and

    computer partners in social interactions, as our participants evidently did not.


Labeling of expressions used in self-report task: Our aim in this experiment was to test

    whether anger and disgust facial expressions were matched to distinct (and appropriate)

    emotion descriptors. Results supported a separation between labeling of anger and disgust

    expressions: anger expressions were reliably matched to anger descriptors and not disgust

    descriptors, while disgust expressions were matched to disgust descriptors but not anger

    descriptors (Table S1). Averaging across the four anger-themed descriptors, 60.4% of

    participants chose the anger expression as the best match, while disgust was selected by

    only 12.4%. Across the four disgust-themed descriptors, the opposite pattern obtained:

    85.4% of participants chose the disgust expression as the best match, while 4.2% chose

    the anger expression. Chi-square tests on the frequency of selection of disgust and anger

expressions showed that these differences were significant (anger-themed descriptors, χ2(1) = 17.8, p < 0.001; disgust-themed descriptors, χ2(1) = 35.4, p < 0.001).


Captions

    Fig. S1. Mean proportion of offers accepted in Experiment 3, by offer type. Error

    bars show +1 SEM, calculated within-subjects (S14).

    Fig. S2. Mean self-reported emotion in response to different offers in the

    Ultimatum Game (N = 16), showing all emotions. Higher numbers indicate

    greater endorsement. Emotions that did not vary significantly with offer size are

    shown in dashed lines.

    Fig. S3. Analysis of similarity between the computer model of the disgust

    expression prototype and other expression prototypes developed for Experiment

    3, plus the average distaste expression observed in Experiment 1B. Average

    expressions of disgust and the other basic emotions were generated from

    photographs from a standard set (S11). Points on the plot show Pearson

    correlations comparing the visual similarity of the average disgust expression

    vector representation to other average expressions and the average distaste

    expression from Experiment 1B. Confidence envelope shows 95% CI.


Tables

    Table S1. Match between emotion descriptors and facial expressions used for

    the self-report task in Experiment 3. The proportion of times each expression was

    selected as the best match for a particular descriptor is given (averages across

the four items are shown for disgust and anger). The modal response for each descriptor is marked with an asterisk.

Facial        Emotion Descriptor
Expression    Disgust-themed  Anger-themed  Disapproving  Glum    Amazed  Scared  Cheery

Disgust       0.85*           0.12          0.17          0       0       0       0
Anger         0.042           0.60*         0.083         0       0       0       0
Contempt      0               0.23          0.67*         0       0       0       0
Sadness       0.083           0.042         0.083         1.00*   0       0.083   0
Surprise      0               0             0             0       1.00*   0.083   0
Fear          0.021           0             0             0       0       0.83*   0
Happiness     0               0             0             0       0       0       1.00*



    References and Notes

    S1. Biopac Systems. (Goleta, CA).

    S2. L. G. Tassinary, J. T. Cacioppo, in Handbook of Psychophysiology, J. T.

    Cacioppo, L. G. Tassinary, G. G. Berntson, Eds. (Cambridge University Press,

    Cambridge, England, 2000) pp. 163-198.

S3. Psychology Software Tools. (Pittsburgh, PA).

    S4. The MathWorks. (Natick, MA).

    S5. T. F. Cootes, G. J. Edwards, C. J. Taylor, IEEE Trans. Pattern Anal. Mach. Intell.

    23, 681 (2001).

    S6. J. Ostermann, Comput. Animat. 98, 49 (1998).

    S7. C. Goodall, J. Roy. Stat. Soc. B 53, 285 (1991).

    S8. P. J. Lang, M. M. Bradley, B. N. Cuthbert, International Affective Picture System

    (IAPS). Technical Report A-6. (University of Florida, Gainesville, FL, 2005).

    S9. J. A. Mikels et al., Behav. Res. Methods 37, 626 (2005).

S10. W. Güth, R. Schmittberger, B. Schwarze, J. Econ. Behav. Organ. 3, 367 (1982).

    S11. D. Matsumoto, P. Ekman, Japanese and Caucasian Facial Expressions of

    Emotion (JACFEE) [Slides] (Intercultural and Emotion Research Laboratory,

    Department of Psychology, San Francisco State University, San Francisco, CA,

    1988).

    S12. A. G. Sanfey, J. K. Rilling, J. A. Aronson, L. E. Nystrom, J. D. Cohen, Science

    300, 1755 (2003).

    S13. L. Zadro, K. D. Williams, R. Richardson, J. Exp. Soc. Psychol. 40, 560 (2004).

    S14. G. R. Loftus, M. E. Masson, Psychon. Bull. Rev. 1, 476 (1994).