
How Opinions are Received by Online Communities: A Case Study on Amazon.com Helpfulness Votes

Cristian Danescu-Niculescu-Mizil1, Gueorgi Kossinets2, Jon Kleinberg1, Lillian Lee1

1Dept. of Computer Science, Cornell University, 2Google Inc.

WWW 2009

2009. 07. 30.

IDS Lab.

Hwang Inbeom

Copyright 2009 by CEBT

Outline

Users’ evaluation of online reviews: Helpfulness votes

Observation of behaviors

Making hypotheses and verifying their validity

Coming up with a mathematical model that explains these behaviors


Introduction

Opinion

What did Y think of X?


Introduction

Meta-Opinion

What did Z think of Y’s opinion of X?


The Helpfulness of Reviews

Widely used web sites include not just reviews, but also evaluations of the helpfulness of the reviews

The helpfulness vote

– “Was this review helpful to you?”

Helpfulness ratio:

– “a out of b people found the review itself helpful”
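As a minimal sketch (not from the paper), the ratio can be computed directly from the two vote counts; the function and argument names below are illustrative assumptions:

# Minimal sketch: compute a helpfulness ratio "a out of b"
# (names are hypothetical, not from the paper or Amazon's API)
def helpfulness_ratio(helpful_votes, total_votes):
    """Return a/b, or None when the review has received no helpfulness votes."""
    if total_votes == 0:
        return None  # undefined: nobody has voted on this review yet
    return helpful_votes / total_votes

# Example: "7 of 10 people found the review helpful" -> 0.7
print(helpfulness_ratio(7, 10))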


Amazon.com Helpfulness Votes Data

4,000,000 reviews about roughly 700,000 books, including average star ratings and helpfulness ratios

(Screenshot: a book’s Amazon page, showing the average star rating and a review’s helpfulness ratio)


Definitions of “Helpfulness”

Helpfulness in the narrow sense: “Does this review help you in making a purchase decision?”

Liu’s work: annotation and classification of review helpfulness

Annotators’ evaluations differed significantly from the helpfulness votes

Helpfulness “in the wild”

The way Amazon users evaluate each other’s reviews

Intertwined with complex social feedback mechanisms


Flow of Presentation

Hypothesizing → Verifying → Modeling


Flow of Presentation

Hypothesizing → Verifying → Modeling

Hypothesizing:
• Conformity
• Individual-bias
• Brilliant-but-cruel
• Quality-only


Hypotheses: Underlying Social Mechanisms

Well-studied hypotheses for how social effects influence a group’s reaction to an opinion

The conformity hypothesis

The individual-bias hypothesis

The brilliant-but-cruel hypothesis

The quality-only straw-man hypothesis


Hypotheses

The conformity hypothesis

Review is evaluated as more helpful when its star rating is closer to the consensus star rating

– The helpfulness ratio will be highest for reviews whose star rating equals the overall average

The individual-bias hypothesis

When a user considers a review, he or she will rate it more highly if it expresses an opinion that he or she agrees with


Hypotheses (contd.)

The brilliant-but-cruel hypothesis

Negative reviewers are perceived as more intelligent, competent, and expert than positive reviewers

The quality-only straw-man hypothesis

Helpfulness is being evaluated purely based on the textual content of reviews

Non-textual factors are simply correlates of textual quality


Flow of Presentation

Hypothesizing → Verifying → Modeling

Verifying:
• Absolute deviation of helpfulness ratio
• Signed deviation of helpfulness ratio
• Variance of star rating and helpfulness ratio
• Making use of plagiarism


Hypotheses

Conformity: A review is evaluated as more helpful when its star rating is closer to the average star rating

Individual-bias: A review is evaluated as more helpful when its star rating is closer to the evaluator’s own opinion

Brilliant-but-cruel: A review is evaluated as more helpful when its star rating is below the average star rating

Quality-only: Only textual information affects the helpfulness evaluation


Absolute Deviation from Average

Consistent with conformity hypothesis

Strong inverse correlation between the median helpfulness ratio and the absolute deviation

Reviews with star ratings close to the average get higher helpfulness ratios
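A rough sketch of this kind of analysis, assuming review records with hypothetical fields for the star rating, the product’s average rating, and the vote counts (this is not the authors’ code):

from collections import defaultdict
from statistics import median

# Hypothetical review records; real data would come from the Amazon review dump
reviews = [
    {"star": 5, "product_avg": 4.2, "helpful": 9, "total": 10},
    {"star": 1, "product_avg": 4.2, "helpful": 2, "total": 10},
]

# Group helpfulness ratios by absolute deviation from the product's average rating
by_abs_dev = defaultdict(list)
for r in reviews:
    if r["total"] == 0:
        continue  # skip reviews that received no helpfulness votes
    abs_dev = round(abs(r["star"] - r["product_avg"]) * 2) / 2  # half-star bins
    by_abs_dev[abs_dev].append(r["helpful"] / r["total"])

# Conformity predicts the median ratio falling as the absolute deviation grows
for dev in sorted(by_abs_dev):
    print(dev, median(by_abs_dev[dev]))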


Hypotheses

Conformity: A review is evaluated as more helpful when its star rating is closer to the average star rating

Individual-bias: A review is evaluated as more helpful when its star rating is closer to the evaluator’s own opinion

Brilliant-but-cruel: A review is evaluated as more helpful when its star rating is below the average star rating

Quality-only: Only textual information affects the helpfulness evaluation


Signed Deviation from Average

Not consistent with brilliant-but-cruel hypothesis

There is a tendency towards positivity

The black lines would not slope that way if the brilliant-but-cruel hypothesis were valid


Hypotheses

Conformity: A review is evaluated as more helpful when its star rating is closer to the average star rating

Individual-bias: A review is evaluated as more helpful when its star rating is closer to the evaluator’s own opinion

Brilliant-but-cruel: A review is evaluated as more helpful when its star rating is below the average star rating

Quality-only: Only textual information affects the helpfulness evaluation


Addressing Individual-bias Effects

It is hard to distinguish between the conformity and the individual-bias hypotheses

We need to examine cases in which individual people’s opinions do not come from exactly the same distribution

Cases in which there is high variance in star ratings

Otherwise conformity and individual-bias are indistinguishable

– Everyone has the same opinion


Variance of Star Rating and Helpfulness Ratio


With low variance in star ratings, the helpfulness ratio is highest when a review’s star rating equals the average; with moderate variance, it is highest for reviews slightly above the average

With high variance, the plots become two-humped (“camel” plots), with a local minimum around the average
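A sketch of how such plots could be reproduced, first bucketing products by the variance of their star ratings and then looking at helpfulness ratio versus signed deviation within each bucket (the data layout and binning are assumptions, not the authors’ procedure):

from collections import defaultdict
from statistics import mean, median, pvariance

# Hypothetical layout: product id -> list of its review records
product_reviews = {
    "example-book": [
        {"star": 5, "helpful": 8, "total": 10},
        {"star": 2, "helpful": 3, "total": 10},
    ],
}

# Collect (signed deviation, helpfulness ratio) pairs per variance bucket
curves = defaultdict(lambda: defaultdict(list))
for reviews in product_reviews.values():
    stars = [r["star"] for r in reviews]
    avg, var_bucket = mean(stars), round(pvariance(stars))  # coarse variance bin
    for r in reviews:
        if r["total"] == 0:
            continue  # skip reviews without helpfulness votes
        signed_dev = round((r["star"] - avg) * 2) / 2  # half-star bins
        curves[var_bucket][signed_dev].append(r["helpful"] / r["total"])

# High-variance buckets are where the two-humped ("camel") shape would show up
for var_bucket in sorted(curves):
    curve = {d: round(median(v), 2) for d, v in sorted(curves[var_bucket].items())}
    print(var_bucket, curve)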


Hypotheses

Conformity: A review is evaluated as more helpful when its star rating is closer to the average star rating

Individual-bias: A review is evaluated as more helpful when its star rating is closer to the evaluator’s own opinion

Brilliant-but-cruel: A review is evaluated as more helpful when its star rating is below the average star rating

Quality-only: Only textual information affects the helpfulness evaluation


Plagiarism

Making use of plagiarism is an effective way to control for the effect of review text (a detection sketch follows the definition below)

Definition of plagiarized pair(s) of reviews

Two or more reviews of different products

With near-complete textual overlap
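One simple way to find such pairs, sketched here with Python’s difflib rather than whatever matching procedure the authors actually used; the 0.9 similarity threshold is an assumption:

from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical corpus of (product_id, review_text) pairs
reviews = [
    ("product-A", "If you enjoy a thumping, skull splitting migraine headache ..."),
    ("product-B", "If you enjoy thumping, skull splitting migraine headache ..."),
]

def near_duplicate_pairs(reviews, threshold=0.9):
    """Yield pairs of reviews on different products with near-complete textual overlap."""
    for (pid1, text1), (pid2, text2) in combinations(reviews, 2):
        if pid1 == pid2:
            continue  # only copies posted on different products count here
        similarity = SequenceMatcher(None, text1, text2).ratio()
        if similarity >= threshold:  # assumed cutoff for "near-complete overlap"
            yield pid1, pid2, similarity

for pair in near_duplicate_pairs(reviews):
    print(pair)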


An Example

Skull-splitting headache guaranteed! • If you enjoy thumping, skull splitting migraine headache, then Sing N Learn is for you. As a longtime language instructor, I agree with the attempt and effort that this series makes, but it is the execution that ultimately weakens Sing N Learn Chinese. To be sure, there are much, much better ways to learn Chinese. In fact, I would recommend this title only as a last resort and after you’ve thoroughly exhausted traditional ways to learn Chinese …

Migraine Headache at No Extra Charge • If you enjoy a thumping, skull splitting migraine headache, then the Sing N Learn series is for you. As a longtime language instructor, I agree with the effort that this series makes, but it is the execution that ultimately weakens Sing N Learn series. To be sure, there are much, much better ways to learn a foreign language. In fact, I would recommend this title only as a last resort and after you’ve thoroughly exhausted traditional ways to learn Korean …


Plagiarism (contd.)

Plagiarized reviews

Almost (but not exactly) the same text

– Exact copies would more likely be treated as spam reviews

Different non-textual information

If the quality-only straw-man hypothesis holds, the helpfulness ratios of the reviews in each pair should be the same

Other possible methods for controlling for text quality

Human annotation

– Could be subjective

Classification using machine learning methods

– The accuracy of such algorithms cannot be guaranteed


Experiments with Plagiarism

Text quality is not the only explanatory factor

There is a statistically significant difference between the helpfulness ratios within plagiarized pairs


The plagiarized reviews with deviation 1 are significantly more helpful than those with deviation 1.5
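A hedged sketch of how the within-pair comparison could be tested for significance; the paper does not spell out this exact procedure, and the paired Wilcoxon test and numbers below are illustrative assumptions:

from scipy.stats import wilcoxon

# Hypothetical data: one (ratio_closer, ratio_farther) pair per plagiarized pair,
# where "closer" means the copy whose star rating deviates less from its product's average
pairs = [
    (0.82, 0.55),
    (0.74, 0.60),
    (0.90, 0.88),
    (0.66, 0.41),
    (0.78, 0.70),
]

closer = [a for a, _ in pairs]
farther = [b for _, b in pairs]

# Paired test: is the same text systematically voted more helpful on the product
# where its rating deviates less from the average?
statistic, p_value = wilcoxon(closer, farther)
print(statistic, p_value)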


Hypotheses

Conformity: A review is evaluated as more helpful when its star rating is closer to the average star rating

Individual-bias: A review is evaluated as more helpful when its star rating is closer to the evaluator’s own opinion

Brilliant-but-cruel: A review is evaluated as more helpful when its star rating is below the average star rating

Quality-only: Only textual information affects the helpfulness evaluation


Flow of Presentation

Hypothesizing → Verifying → Modeling

Modeling:
• Based on individual bias and mixtures of distributions


Authors’ Model

Based on individual bias and mixtures of distributions

Two distributions of evaluator opinions: one for positive evaluators, one for negative evaluators

Balance between positive and negative evaluators: the mixing weights p and q (with q = 1 - p)

Controversy level: how far apart the two distributions are centered

– f(x): density of the opinions of positive evaluators, modeled as a Gaussian centered at the positive evaluators’ mean rating

– g(x): density of the opinions of negative evaluators, modeled as a Gaussian centered at the negative evaluators’ mean rating

– Resulting mixture density: h(x) = p·f(x) + q·g(x)
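A small sketch of the kind of two-Gaussian mixture described above; the parameter values are illustrative assumptions and this is not the authors’ implementation:

import math

def gaussian_pdf(x, mu, sigma):
    """Density of a Gaussian with mean mu and standard deviation sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def predicted_helpfulness(x, p=0.7, mu_pos=4.5, mu_neg=2.5, sigma=0.7):
    """h(x) = p*f(x) + q*g(x): mixture of positive and negative evaluator opinions.

    p is the balance (fraction of positive evaluators), q = 1 - p, and the gap
    between mu_pos and mu_neg plays the role of the controversy level.
    All numeric values are illustrative assumptions.
    """
    q = 1 - p
    return p * gaussian_pdf(x, mu_pos, sigma) + q * gaussian_pdf(x, mu_neg, sigma)

# Evaluate the (unnormalized) mixture across star ratings 1..5
for star in [1, 2, 3, 4, 5]:
    print(star, round(predicted_helpfulness(star), 3))

Plotting h over a fine grid of x values would show the single-peaked or two-humped shapes discussed on the earlier slides, depending on the assumed balance and controversy.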


Validity of the Model

Comparison of empirical observations with plots generated by the model


Conclusion

A review’s perceived helpfulness depends not just on its content, but also on the relation of its score to other scores

This dependence is consistent with a simple and natural model of individual bias in the presence of a mixture of opinion distributions

Directions for further research

Variations in the effect can be used to form hypotheses about differences in the collective behaviors of the underlying populations

It would be interesting to consider social feedback mechanisms that might be capable of modifying the effects the authors observed here

Considering possible outcomes of the design problem for systems enabling the expression and dissemination of opinions


Discussions

So, how can we use this?

In which cases would this information be helpful?

Available information is very limited

– Star ratings

– Helpfulness ratios

The conclusion is rather trivial

It does not present new discoveries
