
Lay Intuitions about Overall Evaluations of Experiences

Irina Cojuharenco, GPEFM Barcelona, Universitat Pompeu Fabra

Overall Evaluations of Experiences

“I would evaluate my experience in this concert as 7 out of 10!”

“On a scale from 0 to 100 this vacation would deserve 60 points!”

“In terms of painfulness, I would rate this medical procedure as 90 out of 100…”

The Importance of Overall Evaluations

Decision input: motivation, future choices (Wirtz et al., 2003; Oishi & Sullivan, in press), advice

Decision target: customer satisfaction or organizational performance (e.g., hotel stays, employee appraisals)

What do we know about how overall evaluations come about?

Overall Evaluation = Remembered Utility: the “Peak-End Rule” (Kahneman, 2000)

Puzzle: duration neglect (Kahneman, Wakker & Sarin, 1997)
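To see the peak-end rule and duration neglect side by side, here is a minimal sketch (Python; the function name and rating values are illustrative, not from the talk): remembered utility is approximated by the mean of the most intense moment and the final moment, so padding an experience with extra moments of similar intensity leaves the prediction unchanged.

```python
def peak_end(ratings):
    """Peak-end approximation of remembered utility:
    the mean of the most intense moment and the final moment."""
    return (max(ratings) + ratings[-1]) / 2

short = [30, 80, 60]            # brief experience, moment-by-moment ratings
long_ = [30, 80, 60, 60, 60]    # same peak and end, much longer duration

# Both profiles receive the same predicted overall evaluation (70.0):
# the added duration is neglected.
print(peak_end(short), peak_end(long_))
```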

Previous Research

Experienced Utility Paradigm

Contribution

Experienced Utility Paradigm + Lay Intuitions

Motivation

The subjective nature of overall evaluations (Alexandrova, 2005),

Discussions of duration neglect (Ariely & Loewenstein, 2000), and no research on what communication partners expect of overall evaluations of experiences.

Method

Informants evaluating experiences in real-time and overall (0-100 scale, from “nothing pleasant” to “a great deal of pleasure”)

Guessers having to infer overall evaluations by means of Active Information Search (Huber, Wider & Huber, 1997) with only the type of experience and metrics of evaluations known initially

Closed format questionnaires for the comparison of open-ended and closed-format elicitation of intuitions

Interpretations of given overall evaluations of experiences (closed format)


Procedure

Diagram: Guessers → ? → Informants (Guessers infer Informants’ overall evaluations)

4 experiments: music, chocolate, pleasant images, aversive images

2 task conditions: 3 questions / 1 question

closed format questionnaires

feedback and pay

Music Pilot: Questions and Theories

Experienced Utility Paradigm (Kahneman, Wakker & Sarin, 1997)

Accessibility model of emotional self-report (Robinson & Clore, 2000) / Construal level theory (Trope & Liberman, 2003) / Valence judgments (Brendl & Higgins, 1995)

Personality (Updegraff, Gable & Taylor, in press)

Value as goal supportiveness (Brendl & Higgins, 1995)

Value by “functional” aspects (Zacks & Tversky, 2000)

Questions Classification

Category 1. Consistent with “experienced utility” (real-time ratings, duration-free statistics of these, and duration)

Category 2. Non-chronological decomposition of utility (functional aspects, liking of aspects, category of experience, liking of category, emotion specification)

Category 3. Personality

Category 4. Decision rule

Category 5. Implications of the overall utility rating (future use pattern, goal supportiveness, WTP, approach-avoidance motivations)

Category 1. “How did you rate the musical performances of this sequence?” · “How did you rate the performance you liked the best?” · “How many performances did you hear?”

Category 2. “What was, or how much did you like, the rhythm of the music you listened to?” · “Was the music you heard classical?” · “Was your experience similar to your experience in a philosophy lecture?”

Category 3. “Are you a person who likes variety?” · “Are you a generally depressed individual?”

Category 4. “Was your overall rating equal to the average of performance ratings?”

Category 5. “How often would you listen to this music if you had it at home?” · “Would you use it as a background for a romantic dinner?”

Questions Structure: 1 Question Asked to Infer Informant's Evaluation

% of participants                              Chocolate   Pleasant images   Aversive images   Music (pilot)
Implications of the overall utility rating        52            29                20                39
  Future use pattern                              35             8                 8                33
  Goals supportiveness                             9            21                 4                 0
  Approach / Avoidance                             4             0                 4                 0
  Willingness to pay                               4            17                 4                 6
Non-chronological decomposition of utility        34            34                58                12
  Category of the experience                      17             4                12                 6
  Liking of the category                          17             0                 0                 6
  Emotion specification                            0            13                23                 0
  Functional aspect of the experience              0            17                23                 0
Consistent with “experienced utility”             13            30                15                39
  Statistic of experience profile                  9            17                15                22
  Experience profile                               4             0                 0                17
  Duration/Scope                                   0            13                 0                 0
Decision rule                                      0             4                 0                 0
Personality                                        0             0                 0                 6
Non-classifiable                                   0             4                 8                 6
Total participants                                N=23          N=24              N=26              N=18

Questions Structure: 3 Questions Asked to Infer Informant's Evaluation

%                                              Chocolate     Pleasant images   Aversive images   Music (pilot)
                                               TQ    AP      TQ    AP          TQ    AP          TQ    AP
Non-chronological decomposition of utility     38    74      39    67          47   100          29    61
Implications of the overall utility rating     20    44      26    63          20    54          19    39
Consistent with “experienced utility”          18    44      15    33          16    27          39    61
Personality                                     6    17      10    25           7    19          14    22
Decision rule                                   6     9       1     4           1     4           2     6
Non-classifiable                                9    22       8    21           9    27           2     6
TQ / AP (N=)                                   68    23      72    24          76    26          66    18

TQ – % in Total Questions Asked; AP – % of All Participants
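TQ and AP summarize the same question logs in two different ways, so a small sketch may help keep them apart (hypothetical logs and function name, not the study's data): TQ is a category's share of all questions asked, and AP is the share of participants who asked at least one question in that category.

```python
from collections import Counter

def tq_ap(question_logs):
    """question_logs: one list of question categories per guesser.
    Returns {category: (TQ, AP)} where TQ is the category's share of all
    questions asked (%) and AP is the share of participants who asked at
    least one question in that category (%)."""
    all_questions = [cat for log in question_logs for cat in log]
    tq_counts = Counter(all_questions)
    ap_counts = Counter(cat for log in question_logs for cat in set(log))
    n_q, n_p = len(all_questions), len(question_logs)
    return {cat: (100 * tq_counts[cat] / n_q, 100 * ap_counts[cat] / n_p)
            for cat in sorted(tq_counts)}

# Hypothetical logs for three guessers in the 3-question condition
logs = [["implications", "decomposition", "decomposition"],
        ["experienced utility", "decomposition", "personality"],
        ["decomposition", "implications", "implications"]]
print(tq_ap(logs))
```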

Non-chronological Decomposition of Utility: Question Subcategories

%                                              Chocolate     Pleasant images   Aversive images   Music (pilot)
                                               TQ    AP      TQ    AP          TQ    AP          TQ    AP
Non-chronological decomposition of utility     38    74      39    67          47   100          29    61
Liking of the category                         25    61       3     8           0     0           5    11
Functional aspect of the experience             7    13      15    33          17    42           5    11
Category of the experience                      4    13       6    13           9    27          11    39
Liking of an aspect                             1     4       1     4           3     8           3    11
Emotion specification                           1     4      14    38          18    46           5    17

Summary

Common principles for interpreting overall evaluations. Context-dependent saliency.

Duration not believed to matter for overall evaluations.

A role for future decision-making.

Experience’s future use
WTP for the same or a similar experience
Personality of the reporting person
Personality of the experiencing person
Category of experience
Experienced utility

Closed Format Questionnaires

Guessers faced questions that “could have been asked in the guessing task”, chose the 3 that would have been most helpful, and underlined the 1 they would have chosen if constrained to a single question.

Questionnaire A: 9 items inspired by the question categories found in the pilot experiment (music)

Questionnaire B: 12 items inspired by the “experienced utility” paradigm

Questionnaire A

Questionnaire items                        Chocolate (%)   Pleasant images (%)   Aversive images (%)
Average of real-time ratings                   65***            67***                 77***
Willingness-to-pay                             52***            25                    19
Maximum rating                                 48***            54***                 69***
Future use pattern                             48***            25                     8
Category of the experience                     43*              46***                 77***
Personality                                    13               38                    19
Goals supportiveness                           13               17                    15
Duration/Scope                                  9               33                    15
Functional aspect of the experience             4                8                    12
Total participants                             N=23             N=24                  N=26

*** significantly different from random choice (5% significance level)
* significantly different from random choice (10% significance level)
Underlined percentages correspond to items chosen when participants could choose one item only (those percentages are not reported; all are significantly different from random choice at the 5% level).

Across Paradigms: Percentage of Participants Choosing an Item.
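The slides do not state which test was used to compare choice frequencies with random choice; one plausible reading, sketched below with illustrative numbers, is an exact binomial test of how many participants chose an item against the chance rate of 3 picks out of 9 items (1/3).

```python
from math import comb

def binom_two_sided_p(k, n, p0):
    """Exact two-sided binomial p-value: total probability of all outcomes
    no more likely than the observed count k under chance rate p0."""
    pmf = [comb(n, i) * p0**i * (1 - p0) ** (n - i) for i in range(n + 1)]
    observed = pmf[k]
    return sum(p for p in pmf if p <= observed * (1 + 1e-9))

# "Average of real-time ratings": chosen by 65% of the N=23 chocolate guessers,
# i.e. roughly 15 of 23; picking 3 of 9 items at random gives a chance rate of 1/3.
n, k, chance = 23, round(0.65 * 23), 3 / 9
print(f"k={k}, n={n}, two-sided p = {binom_two_sided_p(k, n, chance):.4f}")
```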

Questionnaire B

Questionnaire items            Chocolate (%)   Pleasant images (%)   Aversive images (%)   Music (pilot) (%)
Average of real-time ratings       65***            63***                 65***                 67***
Mode rating                        52***            63***                 73***                 39
Trend of ratings                   39*              17                    31                    NA
Maximum rating                     35*              42***                 15                    56***
Variance of ratings                35*              25                    31                    NA
Sum of ratings                     26               42***                 38*                   17
Duration/Scope                     17               42***                 23                    11
End rating                         13                4                     8                    11
Minimum rating                      9               21                    15                    56***
Location of maximum rating          4                0                    15                    11
Middle rating                       4                0                     4                    11
First rating                        4                0                     0                     6
Total participants                 N=23             N=24                  N=26                  N=18

Experienced Utility Paradigm: Percentage of Participants Choosing an Item.
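For readers less familiar with these “experienced utility” summaries, the sketch below (Python; the function name and ratings are illustrative, not the study's code) computes each candidate statistic from a single real-time rating profile.

```python
from collections import Counter
from statistics import mean, variance

def profile_statistics(ratings):
    """Candidate summaries of a real-time rating profile (one 0-100 rating per
    moment), mirroring the Questionnaire B items; duration is in rating moments."""
    n = len(ratings)
    return {
        "average": mean(ratings),
        "mode": Counter(ratings).most_common(1)[0][0],  # most frequent rating
        "trend": ratings[-1] - ratings[0],              # crude rise/fall proxy
        "maximum": max(ratings),
        "variance": variance(ratings) if n > 1 else 0.0,
        "sum": sum(ratings),
        "duration": n,
        "end": ratings[-1],
        "minimum": min(ratings),
        "location_of_maximum": ratings.index(max(ratings)),  # 0-based position
        "middle": ratings[n // 2],
        "first": ratings[0],
    }

# Example: an informant's moment-by-moment pleasure ratings during a short piece
print(profile_statistics([40, 55, 70, 90, 60, 50]))
```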

Conclusions

Multiple paradigms. Interactions unexplored.

Duration not believed to matter for overall evaluations.

Eagerness to assume “overall = average” (as dictionaries suggest?).

Did lay intuitions reveal potential interpretations of overall evaluations?

Opinions Poll

Overall Evaluation Subject to Interpretation

Interpretations                     90 points   50 points   10 points
Personality                            7.3         8.1         5.5
Average rating                         7.0         7.3         6.1
Rating of a functional aspect          6.0         6.8         5.8
Future use pattern                     5.6         5.8         7.3
Rating of experience category          5.6         5.0         6.0
Goals supportiveness                   5.5         3.9         6.4
Duration/Scope                         5.4         3.8         4.6
Maximum rating                         4.3         3.1         4.8
Willingness to pay                     3.4         4.3         7.5
Unrelated (weather last week)          2.4         1.8         2.0
Total participants                     N=23        N=24        N=24

71 undergraduates rated, on a 0-10 scale, the plausibility of various interpretations they would give upon hearing a participant in a lab image-viewing experiment evaluate his/her experience overall after having rated each image viewed.

Future research

More on the use and social context of overall evaluations.

“Type of experience – principles for evaluation” interaction.

Standard of measurement: Utility Rating versus Willingness-to-Pay.

“Framing” overall evaluations.

Thank You!

www.econ.upf.edu/~irinac

For any questions regarding this work, please contact Irina Cojuharenco at

[email protected]