
Page 1: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems

IR Group @ UAM

Exploring social network effects on popularity biases in recommender systems

Rocío Cañamares and Pablo Castells
Universidad Autónoma de Madrid
http://ir.ii.uam.es

6th ACM RecSys Workshop on Recommender Systems and the Social Web – RSWeb 2014
Foster City, CA, 6 October 2014

Page 2: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


Outline of my talk

Why is popularity effective?

When is popularity effective?

– How does an item become popular?

– A stochastic model of social communication and rating behavior

Simulation-based experiments for “what if” scenarios

Conclusions

Page 3: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


The effectiveness of popularity in top-k recommendation

Popularity tests well for top-k precision in offline experiments
(Cremonesi et al., RecSys 2010, etc.)

But… does this reflect true precision?

…or could there be an artificial bias that rewards popular items in the offline experimental procedure?

There is of course the issue of lack of novelty, but we shall focus here on accuracy

Page 4: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


Why is popularity effective?

Page 5: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


Why is popularity rank an effective recommendation?

The good old rating matrix…

[Figure: users × items rating matrix; filled cells are observed user-item interactions, empty cells are unobserved preferences]

Page 6: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


Why is popularity rank an effective recommendation?

Rating matrix in practice

[Figure: users × items rating matrix with items sorted by popularity; a few popular items (short head) gather most observed interactions, the rest of the items (long tail) gather few; preferences elsewhere remain unobserved]

Page 7: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


Why is popularity rank an effective recommendation?

In a random split, popular items have more test hits than average (more ratings overall means more held-out test ratings)

Thus recommending them is effective (at least better than random): avg P@k ∼ #(test hits in top k) / k

But how about true precision? What is in the unobserved cells?

[Figure: the rating matrix split into training data and test data (relevant items), with unobserved preferences elsewhere; short-head items accumulate most test hits]
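To make this concrete, here is a minimal sketch (hypothetical function names, not the authors' code) of observed P@k under a random split; because test ratings are sampled uniformly from the observed ones, an item's expected test-hit count is proportional to its total rating count, which is exactly what popularity ranks by:

```python
import random

def random_split(ratings, test_ratio=0.2, seed=0):
    # ratings: set of (user, item) pairs observed in the system
    rnd = random.Random(seed)
    test = {r for r in ratings if rnd.random() < test_ratio}
    return ratings - test, test

def popularity_ranking(train):
    # Simple popularity: rank items by number of training ratings.
    counts = {}
    for _, item in train:
        counts[item] = counts.get(item, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

def observed_p_at_k(ranking, train, test, users, k=10):
    # A recommended item counts as a hit iff the user rated it
    # in the held-out test set (the usual offline protocol).
    hits = 0
    for u in users:
        recs = [i for i in ranking if (u, i) not in train][:k]
        hits += sum((u, i) in test for i in recs)
    return hits / (k * len(users))
```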

Page 8: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems

IRGIR Group @ UAM

Exploring social network effects on popularity biases in recommender systems6th ACM RecSys Workshop on Recommender Systems and the Social Web – RSWeb 2014

Foster City, CA, 6 October 2014

Or is it? A simplified toy example

[Figure: two items, Item A and Item B, with a handful of users' ratings; the table compares the observed P@1 and true P@1 of popularity recommendation vs. random recommendation, showing that they can disagree]
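The disagreement is easy to reproduce end to end. In the sketch below (hypothetical numbers, not the figure's exact data), item A is popular while item B is sparsely rated but widely liked; popularity beats random on observed P@1 yet loses on true P@1:

```python
users = set(range(1, 9))
ratings  = {"A": {1, 2, 3, 4, 8}, "B": {5}}           # observed ratings
relevant = {"A": {1, 2, 3, 4, 8}, "B": {5, 6, 7, 8}}  # unobserved ground truth

# Hold out user 8's ratings as test data, train on the rest.
train = {(u, i) for i, us in ratings.items() for u in us if u != 8}
test  = {(u, i) for i, us in ratings.items() for u in us if u == 8}

def p_at_1(rank, hit):
    # Recommend each user their top-ranked item not seen in training.
    hits = 0
    for u in users:
        recs = [i for i in rank if (u, i) not in train]
        if recs:
            hits += hit(u, recs[0])
    return hits / len(users)

def random_p_at_1(hit):
    # Expected P@1 of a uniformly random recommender.
    total = 0.0
    for u in users:
        cands = [i for i in ratings if (u, i) not in train]
        if cands:
            total += sum(hit(u, i) for i in cands) / len(cands)
    return total / len(users)

pop = sorted(ratings, key=lambda i: sum(j == i for _, j in train), reverse=True)
in_test  = lambda u, i: (u, i) in test      # observed relevance
is_liked = lambda u, i: u in relevant[i]    # true relevance

print(p_at_1(pop, in_test), random_p_at_1(in_test))    # 0.125 vs 0.0625
print(p_at_1(pop, is_liked), random_p_at_1(is_liked))  # 0.125 vs 0.25
```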

Page 9: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


When is popularity effective?

Page 10: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


When is popularity effective?

Why do popular items get more ratings?

And how does that relate to item relevance?

(“relevance” meaning that target users like the items)

Page 11: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


Rating generation

In order for a rating to be produced…

1. Discovery: the user needs to discover the item
– And then find out whether or not she likes it

2. Rating decision: the user needs to tell the system about it
– I.e. rate the item

So biases in discovery and in rating decisions should result in (and may explain?) biases in the rating distribution, i.e. in popularity
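As a minimal illustration of the two-step process (a schematic reading, with assumed parameter names rather than the talk's exact notation), each potential rating is a chain of Bernoulli decisions:

```python
import random

rnd = random.Random(42)

def maybe_rate(user, item, p_seen, p_rate_liked, p_rate_not_liked, likes):
    # Step 1: discovery — no rating can exist for an undiscovered item.
    if rnd.random() >= p_seen:
        return None
    liked = likes(user, item)        # having seen it, the user knows her taste
    # Step 2: rating decision, conditioned on whether the item was liked.
    p_rate = p_rate_liked if liked else p_rate_not_liked
    if rnd.random() < p_rate:
        return (user, item, liked)   # a rating enters the dataset
    return None
```

Any asymmetry in p_seen across items, or between p_rate_liked and p_rate_not_liked, surfaces downstream as a bias in the collected ratings.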

Page 12: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


Discovery sources

How do people find items?

We search/browse for them
We randomly run into them
They are advertised to us
They are brought to us by a recommender system
···
We find them through our friends

We define a stochastic model
– Of social communication and rating
– With user decisions dependent on item relevance

We analyze its effect on the precision of popularity
– By simulation

Page 13: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


A model of social discovery and rating propagation

[Figure: an item spreads over the social network; at each contact the receiving user decides whether to rate the item and whether to tell her friends about it]

Rating decision: p(rate | seen, liked), p(rate | seen, ¬liked)
Communication decision: p(tell | seen, liked), p(tell | seen, ¬liked)

• Known item sampling
• Friend sampling
• Bootstrapping discovery from an exogenous source
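A compact sketch of how one simulation cycle might look (my schematic reading of the model; data structures and names are assumptions, not the authors' code):

```python
import random

rnd = random.Random(7)

def cycle(known, friends, likes, p_tell, p_rate, ratings):
    # known: user -> set of (item, liked) pairs discovered so far
    # p_tell, p_rate: dicts keyed by liked, e.g. {True: 0.9, False: 0.1}
    for user in known:
        if not known[user]:
            continue
        # Known item sampling: the user draws one of her discovered items.
        item, liked = rnd.choice(sorted(known[user]))
        # Rating decision, biased by whether she liked the item.
        if rnd.random() < p_rate[liked]:
            ratings.add((user, item, liked))
        # Communication decision: possibly tell a sampled friend.
        if friends[user] and rnd.random() < p_tell[liked]:
            friend = rnd.choice(sorted(friends[user]))
            known[friend].add((item, likes(friend, item)))
```

Bootstrapping would periodically inject exogenous random discoveries into `known`, seeding the propagation.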

Page 14: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


From user behavior model to macro social effect

User behavior model parameters:
– Communication-relevance bias: p(tell | seen, liked) vs. p(tell | seen, ¬liked)
– Rating-relevance decision bias: p(rate | seen, liked) vs. p(rate | seen, ¬liked)

Emerging global biases:
– Global discovery-relevance bias: p(seen | liked) vs. p(seen | ¬liked)
– Global rating-relevance bias: p(liked | rated) vs. p(liked | ¬rated)

→ Expected precision of popularity-rank recommendation
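One way to see how the local decision biases surface as a global bias (a straightforward Bayes decomposition, under the model's premise that an item can only be rated after being seen; this derivation is mine, not spelled out on the slide):

```latex
% rated implies seen, so for a user-item pair:
%   p(rated, liked) = p(rate | seen, liked) * p(seen | liked) * p(liked)
% whence the global rating-relevance bias:
\[
  p(liked \mid rated)
  = \frac{p(rate \mid seen, liked)\, p(seen \mid liked)\, p(liked)}
         {\sum_{\ell \in \{liked,\, \neg liked\}}
            p(rate \mid seen, \ell)\, p(seen \mid \ell)\, p(\ell)}
\]
```

The discovery-side bias p(seen | liked) is itself an emergent quantity of the propagation dynamics, which is what makes the analysis challenging.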

Page 15: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


Two approaches to analyze the model effects

Theoretical analysis
– Challenging! Work in progress…

Simulate and see what happens…

Page 16: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


Experiments

Page 17: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


Experiments – Simulation setup

Social network: ~4,000 users, ~88,000 arcs
– Facebook network data from Jure Leskovec
– Random graphs: Barabási–Albert, Erdős–Rényi

3,700 items

We simulate a relevance distribution with a long-tail shape, randomly assigned to user-item pairs

Bootstrapping: exogenous random discovery every ~1,000 time cycles

Stop the simulation when 500,000 ratings have been produced (roughly MovieLens 1M scale)

[Figure: long-tail relevance distribution, relevance probability decreasing from 1 towards 0 over the ~3,700 items]
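A sketch of the relevance setup (the numpy calls are real API; the decay shape and seed are my illustrative assumptions, not the authors' exact distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items = 4039, 3700

# Long-tail relevance: item i is liked with probability p_i,
# decaying from ~1 for the head items towards ~0 in the tail.
p_item = np.exp(-np.arange(n_items) / 500.0)   # assumed shape

# Ground-truth (normally unobservable) preferences,
# randomly assigned to user-item pairs.
likes = rng.random((n_users, n_items)) < p_item
```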

Page 18: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


Experiments – Simulation setup

At any point in the simulation we are able to:
– Split the rating data and run a recommender system (e.g. popularity)
– Measure the precision of the recommendations – both observed and true

By running different configurations we can observe the results in different scenarios
– We test, in general, one bias at a time: discovery or rating
– We show single-shot runs, not averages
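Schematically, both measurements share one loop and differ only in the hit test; the simulator's ground-truth preference matrix is what makes the true variant computable (a sketch, continuing the assumed names from the setup above):

```python
def precision_at_k(recommend, train, users, is_hit, k=10):
    # recommend(u): ranked items the user has not rated in training
    hits = 0
    for u in users:
        hits += sum(is_hit(u, i) for i in recommend(u)[:k])
    return hits / (k * len(users))

# Observed precision: hits are held-out test ratings.
#   precision_at_k(rec, train, users, lambda u, i: (u, i) in test)
# True precision: hits are the simulated ground-truth preferences.
#   precision_at_k(rec, train, users, lambda u, i: likes[u, i])
```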

Page 19: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


Research questions for experiments

How does popularity compare with random recommendation precision, depending on the four user behavior parameters?

Does it make a difference to consider all ratings or only positive ratings in the popularity rank? (see the sketch below)

Do the social network topology and network phenomena make a difference?

Can observed and true precision disagree?
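For the second question, the two rankers differ only in which ratings they count (a minimal sketch over (user, item, liked) rating triples, as produced by the model sketch earlier):

```python
from collections import Counter

def simple_popularity(train):
    # Rank items by total number of ratings, positive or negative.
    counts = Counter(item for _, item, _ in train)
    return [i for i, _ in counts.most_common()]

def positive_popularity(train):
    # Rank items by number of positive ratings only.
    counts = Counter(item for _, item, liked in train if liked)
    return [i for i, _ in counts.most_common()]
```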

Page 20: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


Effect of communication behavior (with p(rate | seen) = 1)

[Figure: heatmaps of the observed and true P@10 difference with respect to random recommendation, for simple popularity and positive popularity, as a function of p(tell | seen, liked) (x-axis) and p(tell | seen, ¬liked) (y-axis), both ranging over [0, 1]; temporal split; cells marked as better than, equal to, or worse than random]

Observed precision
– Almost always better than random
– Grows with p(tell | seen)
– Viral discovery effect on popularity concentration

True precision
– Grows with p(tell | seen, liked)
– Degrades with p(tell | seen, ¬liked)
– Sometimes worse than random
– Positive popularity better than simple popularity

Page 21: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


Effect of rating behavior (with p(tell | seen) = 1)

[Figure: heatmaps of the observed and true P@10 difference with respect to random recommendation, for simple popularity and positive popularity, as a function of p(rate | seen, liked) and p(rate | seen, ¬liked), both ranging over [0, 1]; temporal split]

Observed precision
– Almost always better than random
– Grows with p(rate | seen, liked)
– Grows slightly with p(rate | seen, ¬liked) !!

True precision
– Positive popularity always better than random
– Simple popularity sometimes worse than random
– Degrades with p(rate | seen, liked) !!
– Viral effect: liked items get “sold out”

Page 22: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


Effect of rating behavior (with p(tell | seen) = 1), random split

[Figure: same heatmaps as on the previous slide, computed over a random split instead of a temporal one]

Observed precision
– Always better than random
– Grows with p(rate | seen, liked)
– Decreases with p(rate | seen, ¬liked)

True precision
– Positive popularity always better than random, and almost constant
– Simple popularity worse than random when the rating bias is negative
– Viral discovery has little effect

Page 23: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


Social network topology effect

p(tell | seen, liked) = 1, p(tell | seen, ¬liked) = 1
p(rate | seen, liked) = 1, p(rate | seen, ¬liked) = 0

[Figure: bar charts of observed and true P@10 (0 to 0.3) on the Facebook and Barabási–Albert networks, comparing popularity and relevant (positive) popularity against random recommendation]
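For reference, the synthetic counterpart topologies can be generated with standard networkx calls (real API; the m and p values below are my back-of-the-envelope choices to match roughly 4,000 users and 88,000 arcs, not the talk's exact settings):

```python
import networkx as nx

n_users = 4039   # size of the Facebook snapshot used in the talk

# Preferential attachment: hubs emerge, as in the real social network.
ba = nx.barabasi_albert_graph(n_users, m=22, seed=0)     # ~88k edges

# Uniform random edges: same expected edge count, no hubs.
p = 2 * 88000 / (n_users * (n_users - 1))
er = nx.erdos_renyi_graph(n_users, p, seed=0)
```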

Page 24: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


Contradicting observed and true precision

p(tell | seen, liked) = 0, p(tell | seen, ¬liked) = 1
p(rate | seen, liked) = 1, p(rate | seen, ¬liked) = 1

[Figure: bar chart of observed vs. true P@10 for simple popularity and positive popularity, with the random-recommendation level marked (“random is here”): popularity beats random on observed precision, but not on true precision]

Page 25: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


Conclusions

Observed precision of popularity is always better than random

True precision of popularity is worse than random when:

– Users talk about items they dislike more often than ones they like

– Users rate items they dislike more often than ones they like

Positive popularity is considerably more robust than simple popularity

– Fairly immune to user rating behavior on disliked items

Viral effects in temporal split

– Determined by a) user communication frequency, and b) social network topology

– Early popular items are recommendable to fewer users than in a random split

– Popularity may then become less useful for recommendation

It is not impossible for true and observed precision to be inconsistent

Page 26: RSWeb @ ACM RecSys 2014 - Exploring social network effects on popularity biases in recommender systems


Future work

Analytic work (in progress)

The model is very easy to generalize; to mention just a few possibilities…

– Arbitrarily biased exogenous sources, including recommender systems

– Dynamic social network, dynamic item lifecycles

– User behavior dependence on discovery source

– Social influence propagation, dynamic user preferences

So far a first step

– Understanding how social behavior patterns impact true popularity effectiveness

Next questions

– User studies

– Tracking and detecting the collective behavior patterns in real settings

– What to do about it

a) In the evaluation procedure & metrics and/or interpretation of results

b) In the algorithms which may potentially take popularity as a signal