Recommendation System
PengBo, Dec 4, 2010


Page 1: Recommendation System

Recommendation System

PengBo, Dec 4, 2010

Page 2: Recommendation System

Book Recommendation

Santiago Ramón y Cajal (Spain), 1906 Nobel Prize in Physiology or Medicine: "the principal representative and advocate of modern neuroscience."

"The most important problems have already been solved." "Excessive attention to applied science." "Believing oneself to lack ability."

Page 3: Recommendation System

Outline Today

What: Recommendation System
How: Collaborative Filtering (CF) Algorithm
Evaluation of CF algorithms

Page 4: Recommendation System

What is Recommendation System?

Page 5: Recommendation System

The Problem

Classification. Retrieval.

What other, more effective means are there?

Page 6: Recommendation System

Recommendation

Page 7: Recommendation System
Page 8: Recommendation System

This title is a textbook-style exposition on the topic, with its information organized very clearly into topics such as compression, indexing, and so forth. In addition to diagrams and example text transformations, the authors use "pseudo-code" to present algorithms in a language-independent manner wherever possible. They also supplement the reading with mg--their own implementation of the techniques. The mg C language source code is freely available on the Web.


Page 9: Recommendation System

Personalized Recommendation


Page 10: Recommendation System

Everyday Examples of Recommendation Systems…

Bestseller lists
Top 40 music lists
The "recent returns" shelf at the library
Many weblogs
"Read any good books lately?"
....

Common insight: personal tastes are correlated. If Mary and Bob both like X, and Mary likes Y, then Bob is more likely to like Y, especially (perhaps) if Bob knows Mary.

Page 11: Recommendation System

Correlation Between two random variables

Mean: μ_X = E[X]

Standard deviation: σ_X = √(E[(X − μ_X)²])

Pearson's correlation, ρ(X,Y) = E[(X − μ_X)(Y − μ_Y)] / (σ_X σ_Y), indicating the degree of linear dependence between the variables.
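The definition above translates directly into a few lines of pure Python (a minimal illustrative sketch; the function name and the zero-variance convention are my own choices, not from the slides):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of numbers."""
    n = len(xs)
    mx = sum(xs) / n          # mean of X
    my = sum(ys) / n          # mean of Y
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    if sx == 0 or sy == 0:
        return 0.0            # no variance: correlation undefined, treat as 0
    return cov / (sx * sy)
```

Perfectly linearly related lists give ±1, uncorrelated data gives values near 0.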

Page 12: Recommendation System

Correlation Between two random variables

Page 13: Recommendation System

Rec System: Applications

E-commerce: product recommendations (Amazon)

Corporate intranets: recommendation, finding domain experts, …

Digital libraries: finding pages/books people will like

Medical applications: matching patients to doctors, clinical trials, …

Customer relationship management: matching customer problems to internal experts

Page 14: Recommendation System

Recommendation Systems

Given a set of users and items (items can be documents, products, or other users), recommend items to a user based on:

attribute information of the users and items: age, genre, price, …

the past behavior of this user and other users: who has viewed/bought/liked what?

to help people make decisions and maintain awareness.

Page 15: Recommendation System

Recommender systems are software applications that aim to support users in their decision-making while interacting with large information spaces.

Recommender systems help overcome the information overload problem by exposing users to the most interesting items, and by offering novelty, surprise, and relevance.


Page 16: Recommendation System

The Web, they say, is leaving the era of search and entering one of discovery. What's the difference? Search is what you do when you're looking for something. Discovery is when something wonderful that you didn't know existed, or didn't know how to ask for, finds you.


Page 17: Recommendation System

Collaborative Filtering Algorithm

Page 18: Recommendation System

Ad Hoc Retrieval and Filtering

Ad hoc retrieval: the document collection stays fixed while different queries arrive.

[Figure: queries Q1 through Q5 issued against a collection of "fixed size"]

Page 19: Recommendation System

Ad Hoc Retrieval and Filtering

Filtering: the user's information need stays fixed while a stream of documents arrives.

[Figure: a document stream matched against User 1's and User 2's profiles, yielding docs filtered for User 1 and docs filtered for User 2]

Page 20: Recommendation System

Inputs - more detail

Explicit role/domain/content info: content/attributes of documents, document taxonomies, role in an enterprise, interest profiles.

Past transactions/behavior info from users: which docs viewed, browsing history, searches issued, which products purchased, pages bookmarked, explicit ratings (movies, books, …).

This is a large space, and an extremely sparse one.

Page 21: Recommendation System

The Recommendation Space

[Figure: users and items connected by three kinds of links. User-user links are derived from similar attributes and explicit connections; item-item links are derived from similar attributes, similar content, and explicit cross references; observed preferences connect users to items (ratings, purchases, page views, laundry lists, play lists).]

Page 22: Recommendation System

Definitions

A recommendation system provides recommendations/predictions/opinions on items to a user.

Rule-based systems use manual rules to do this.

An item similarity/clustering system uses item-item links.

A classic collaborative filtering system uses links between users and items.

Commonly one has hybrid systems that use all three kinds of links.

Page 23: Recommendation System

Link types

User attribute-based recommendation: Male, 18-35? Recommend The Matrix.

Item attribute-based (content similarity): you liked The Matrix? Recommend The Matrix Reloaded.

Collaborative filtering: people with interests like yours also liked Forrest Gump.

Page 24: Recommendation System

Example - behavior only

Users and docs viewed:

U1 viewed d1, d2, d3.

U2 viewed d1, d2.

Recommend d3 to U2?

Page 25: Recommendation System

Expert finding - simple example

Recommend U1 to U2 as someone to talk to?

[Figure: users U1 and U2 linked to documents d1, d2, d3]

Page 26: Recommendation System

Simplest Algorithm: Naïve k Nearest Neighbors

U viewed d1, d2, and d5. Look at who else viewed d1, d2, or d5 (say, users V and W). Recommend to U the most "popular" doc among what those users viewed.
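The naïve scheme above can be sketched as follows (an illustrative implementation; the user/doc names and breaking ties via `max` are my assumptions, not from the slides):

```python
def recommend_naive(target, views):
    """views: dict user -> set of viewed docs.

    Recommend to `target` the doc (not yet seen by target) viewed most
    often by users who share at least one viewed doc with target."""
    seen = views[target]
    counts = {}
    for user, docs in views.items():
        if user == target or not (docs & seen):
            continue                      # keep only users who co-viewed something
        for d in docs - seen:             # candidate docs target has not seen
            counts[d] = counts.get(d, 0) + 1
    return max(counts, key=counts.get) if counts else None

views = {"U": {"d1", "d2", "d5"},
         "V": {"d1", "d2", "d3"},
         "W": {"d5", "d3", "d4"}}
```

Here both V and W viewed d3 while only W viewed d4, so U gets d3.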

Page 27: Recommendation System

Simple algorithm - shortcoming

It treats all other users equally. In fact, past behavior data shows that different users resemble U to different degrees.

How can we improve? How should we weight each user's importance to U?

Page 28: Recommendation System

Matrix View

Users-Items Matrix: A_ij = 1 if user i viewed item j, 0 otherwise (rows are users, columns are items).

A_ij     Airplane   Matrix   Room with a View   ...   Hidalgo
Joe         1          1             1          ...      1
Carol       1          0             1          ...      0
...        ...        ...           ...         ...     ...
Kumar       1          1             0          ...      1

Number of items co-viewed by each pair of users = ? Answer: A·Aᵀ.

Page 29: Recommendation System

Voting Algorithm

Let r_i be the i-th row of A·Aᵀ: its j-th entry is the number of items viewed by both user i and user j.

r_i·A is a vector whose k-th entry gives a weighted vote count to item k.

Recommend the items with the highest vote counts.
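The same computation can be sketched with dictionaries instead of explicit matrices (illustrative only; representing A as a dict of 0/1 dicts is my own choice):

```python
def vote_recommend(target, A):
    """A: dict user -> dict item -> 0/1 (the users-items matrix).

    Compute the target user's row r of A*A^T (overlap counts with every
    other user), then the weighted vote r*A over items, and return the
    unseen items sorted by descending vote count."""
    r = {u: sum(A[target].get(it, 0) * v for it, v in items.items())
         for u, items in A.items() if u != target}
    votes = {}
    for u, w in r.items():                    # w = #items co-viewed with u
        for it, v in A[u].items():
            if v and not A[target].get(it, 0):
                votes[it] = votes.get(it, 0) + w
    return sorted(votes, key=votes.get, reverse=True)

A = {"U": {"d1": 1, "d2": 1},
     "V": {"d1": 1, "d3": 1},
     "W": {"d1": 1, "d2": 1, "d4": 1}}
```

W overlaps with U on two items and V on one, so W's unseen item d4 outvotes V's d3.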

Page 30: Recommendation System

Voting Algorithm - implementation issues

Do not implement this directly with matrix operations; use weight propagation on compressed adjacency lists.

Maintain the "user views doc" information in a log; typically, log into a database and update the vote-propagating structures periodically.

For efficiency, keep only the largest few weights in r_i; do this only in the fast structures, not in the back-end database.

Page 31: Recommendation System

Different setting/algorithm

User i gives a real-valued rating V_{i,k} for item k. Each user i thus has a ratings vector v_i, which is sparse, with many empty entries.

For every pair of users i, j, compute a correlation coefficient w_ij: a measure of how much the pair agrees.

Page 32: Recommendation System

Predict user i’s utility for item k

As in the voting algorithm, form the sum (over users j such that V_{j,k} is non-zero) Σ_j w_ij·V_{j,k}, and recommend item k to user i by this value.

V_ij     Airplane   Matrix   Room with a View   ...   Hidalgo
Joe         9          7             2          ...      7
Carol       8          ?             9          ...      ?
...        ...        ...           ...         ...     ...
Kumar       9          3             ?          ...      6

Page 33: Recommendation System

K-nearest neighbor

Cosine distance (from IR)

Pearson correlation coefficient (Resnick ’94, Grouplens):

The simplest neighborhood weight:

w(a,i) = 1 if i ∈ neighbors(a), 0 otherwise

Page 34: Recommendation System

Same algorithm, different scenario

Implicit (user views item) vs. explicit (user assigns rating to item); Boolean vs. real-valued utility.

In practice, one must convert user ratings on a form (say, on a scale of 1-5) to real-valued utilities. This can be a fairly complicated mapping, e.g. the Likeminds function (Greening white paper), and requires understanding the user's interpretation of the form.

Page 35: Recommendation System

Real data problems

User 有各自的 rating bias

VijAirplane Matrix Room with

a View... Hidalgo

Joe 50 10 40 ... 40Carol 100 ? 80 ... ?

... ... ... ... ... ...Kumar

95 85 ? ... 75

Page 36: Recommendation System

v_{i,j} = vote of user i on item j; I_i = the set of items for which user i has voted. The mean vote for user i is

v̄_i = (1/|I_i|) Σ_{j∈I_i} v_{i,j}

User u,v similarity is

w(u,v) = Σ_j (v_{u,j} − v̄_u)(v_{v,j} − v̄_v) / √( Σ_j (v_{u,j} − v̄_u)² · Σ_j (v_{v,j} − v̄_v)² )

summing over items j rated by both. Mean-centering avoids overestimating the similarity of users who happen to have rated a few items identically.

User Nearest Neighbor Algorithm

Page 37: Recommendation System

User Nearest Neighbor Algorithm

Select a nearest-neighbor set V for user u, and compute u's predicted vote for item j as

p_{u,j} = v̄_u + Σ_{v∈V} w(u,v)·(v_{v,j} − v̄_v) / Σ_{v∈V} |w(u,v)|

How about item nearest neighbor?
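This neighbor-weighted, mean-centered prediction can be sketched in pure Python (an illustrative sketch: normalizing by the sum of absolute weights follows the usual Resnick '94 formulation, and the data layout and names are mine):

```python
def predict(u, j, ratings, weights, neighbors):
    """ratings: dict user -> dict item -> rating.
    weights:   dict (u, v) -> similarity w(u, v).
    neighbors: iterable of neighbor users v.

    p(u, j) = mean_u + sum_v w(u,v)*(rating_vj - mean_v) / sum_v |w(u,v)|"""
    def mean(user):
        vals = ratings[user].values()
        return sum(vals) / len(vals)

    num = den = 0.0
    for v in neighbors:
        if j in ratings[v]:                       # v must have rated item j
            w = weights[(u, v)]
            num += w * (ratings[v][j] - mean(v))  # mean-centered contribution
            den += abs(w)
    return mean(u) + (num / den if den else 0.0)

ratings = {"u": {"a": 4.0, "b": 2.0},
           "v": {"a": 5.0, "b": 3.0, "j": 5.0}}
```

With a single neighbor of weight 1.0, the prediction is u's mean plus v's offset above v's own mean.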

Page 38: Recommendation System

Nearest-Neighbor CF

Basic principle: use the user's vote history to predict future votes/recommendations based on "nearest neighbors".

A typical normalized prediction scheme: predict the vote for item j from other users, weighted towards those with past votes similar to target user a's.

Page 39: Recommendation System

Challenges of Nearest-Neighbor CF

What is "the most optimal weight calculation" to use? It requires fine-tuning the weighting algorithm for the particular data set.

What do we do when the target user has not voted enough to provide a reliable set of nearest neighbors? One approach: use default votes (popular items) to populate the matrix for items neither the target user nor the nearest neighbor has voted on. A different approach: model-based prediction using Dirichlet priors to smooth the votes.

Other factors include relative vote counts for all items between users, thresholding, and clustering (see Sarwar, 2000).

Page 40: Recommendation System

Summary of Advantages of Pure CF

No expensive and error-prone user attributes or item attributes.

Incorporates quality and taste: we want not just things that are similar, but things that are similar and good.

Works on any rate-able item; one model is applicable to many content domains.

Users understand it:

Page 41: Recommendation System

Evaluation

Page 42: Recommendation System

Netflix Prize

Netflix: an on-line DVD-rental company with a collection of 100,000 titles and over 10 million subscribers. They have over 55 million discs and ship 1.9 million a day, on average.

A training data set of over 100 million ratings that over 480,000 users gave to nearly 18,000 movies.

Submitted predictions are scored against the true grades in terms of root mean squared error (RMSE).

Page 43: Recommendation System

Netflix Prize

A prize of $1,000,000. A trivial algorithm got an RMSE of 1.0540; Netflix's own system, Cinematch, got an RMSE of 0.9514 on the quiz data, a 9.6% improvement.

To win: 10% over Cinematch on the test set. A progress prize of $50,000 is granted every year for the best result so far.

By June 2007, over 20,000 teams had registered for the competition from over 150 countries. On June 26, 2009 the team "BellKor's Pragmatic Chaos", a merger of teams "BellKor in BigChaos" and "Pragmatic Theory", achieved a 10.05% improvement over Cinematch (an RMSE of 0.8558).

Page 44: Recommendation System

Measuring collaborative filtering

How good are the predictions? How much previous opinion do we need? How do we motivate people to offer their opinions?

Page 45: Recommendation System

Measuring recommendations

Typically, machine-learning methodology: get a dataset of opinions as <User, Item, Grade> triples; mask "half" the opinions; train the system with the other half, then validate on the masked opinions. Studies vary the masked fraction around half.

Compare various algorithms (correlation metrics).

Page 46: Recommendation System

Common Prediction Accuracy Metric

Mean absolute error (MAE):

E = (1/N) Σ_{i=1}^{N} |p_i − r_i|

Root mean squared error (RMSE):

E = √( (1/N) Σ_{i=1}^{N} (p_i − r_i)² )
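Both metrics are one-liners in code (a minimal sketch; `preds` and `truths` are assumed to be parallel lists of predicted and true ratings):

```python
import math

def mae(preds, truths):
    """Mean absolute error: average |p_i - r_i|."""
    return sum(abs(p - r) for p, r in zip(preds, truths)) / len(preds)

def rmse(preds, truths):
    """Root mean squared error: sqrt of the average squared error."""
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(preds, truths)) / len(preds))
```

RMSE penalizes large errors more heavily than MAE, which is why a few badly wrong predictions hurt RMSE disproportionately.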

Page 47: Recommendation System

McLaughlin & Herlocker 2004

Argues that the current well-known algorithms give a poor user experience. Nearest-neighbor algorithms are the most frequently cited and most widely implemented CF algorithms, and are consistently rated the top-performing algorithms in a variety of publications.

But many of their top recommendations are terrible: these algorithms perform poorly where it matters most, in user recommendations.

Page 48: Recommendation System

Characteristics of MAE

It assumes errors at all levels in the ranking have equal weight. It works well for measuring how accurately the algorithm predicts the rating of a randomly selected item, but it seems not appropriate for the "Find Good Items" task.

Limitations of the MAE metric have concealed the flaws of previous algorithms: it looks at all predictions, not just the top predictions. What about precision?

Page 49: Recommendation System

Precision of top k

Concealed because past evaluation was mainly on offline datasets, not real users: many unrated items exist, but they do not participate in the evaluation.

[Figure: a row of test-data ratings (100, ?, 80, ..., ?) above a row of predictions (96, 97, 70, ..., 95). Items marked "?" appear in the recommendation list but are not counted in precision. What are these?]

Page 50: Recommendation System

Improve the Precision Measure

Precision at top k has wrongly been computed over the top k rated movies. Instead, treat not-rated as disliked (an underestimate); this captures that people pre-filter movies. In precision, non-rated items should be counted as non-relevant.
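Counting non-rated items as non-relevant makes the metric straightforward (an illustrative sketch; the argument names are mine):

```python
def precision_at_k(recommended, liked, k):
    """recommended: ranked list of item ids.
    liked: set of items the user rated as relevant.

    Every item in the top k that is NOT in `liked` counts as
    non-relevant, including items the user never rated at all
    (the underestimate argued for above)."""
    top = recommended[:k]
    return sum(1 for item in top if item in liked) / k
```

So an unrated item in the top k drags precision down exactly as a disliked one would.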

Page 51: Recommendation System

Novelty versus Trust

There is a trade-off. High-confidence recommendations are obvious, with low utility for the user; however, they build trust, since users like to see some recommendations that they know are right.

Recommendations with high predicted ratings yet lower confidence have a higher variability of error, but higher novelty and therefore higher utility for the user.

McLaughlin and Herlocker argue that "very obscure" recommendations are often bad (e.g., hard to obtain).

Page 52: Recommendation System

Results from the SIGIR 2004 Paper

Much better at predicting top movies. The cost is that it tends to often predict blockbuster movies: a serendipity/trust trade-off.

[Figure: modified precision at top-N (Top 1, 5, 10, 15, 20; modified precision from 0 to 0.3) for the User-to-User, Item-Item, and Distribution algorithms]

Page 53: Recommendation System

Recommendation Systems

Page 54: Recommendation System

Early systems

GroupLens (U of Minn: Resnick, Iacovou, Bergstrom, Riedl; the netPerceptions company): based on the nearest-neighbor recommendation model.

Tapestry (Goldberg/Nichols/Oki/Terry).

Ringo (MIT Media Lab: Shardanand/Maes): experimented with variants of these algorithms.

Page 55: Recommendation System
Page 56: Recommendation System
Page 57: Recommendation System

Datasets @ GroupLens

MovieLens Data Sets: 100,000 ratings for 1,682 movies by 943 users; 1 million ratings for 3,900 movies by 6,040 users.

Book-Crossing Data Set: 278,858 users (anonymized but with demographic information) providing 1,149,780 ratings (explicit/implicit) about 271,379 books.

Jester Joke Data Set: 4.1 million continuous ratings (-10.00 to +10.00) of 100 jokes from 73,496 users.

EachMovie Data Set: 2,811,983 ratings entered by 72,916 users for 1,628 different movies.

Page 58: Recommendation System
Page 59: Recommendation System

Strands Recommendation Engine

Page 60: Recommendation System
Page 61: Recommendation System
Page 62: Recommendation System
Page 63: Recommendation System

Summary

Collaborative filtering: the input data space, especially the user-item links.

Nearest-neighbor CF: the weighting scheme.

Evaluation of CF: the failure of MAE.

Page 64: Recommendation System

Resources

GroupLens: http://citeseer.nj.nec.com/resnick94grouplens.html and http://www.grouplens.org (has available data sets, including MovieLens)

Breese et al., UAI 1998: http://research.microsoft.com/users/breese/cfalgs.html

McLaughlin and Herlocker, SIGIR 2004: http://portal.acm.org/citation.cfm?doid=1009050

CoFE, the "Collaborative Filtering Engine": open-source Java reference implementations of many popular CF algorithms; http://eecs.oregonstate.edu/iis/CoFE

C/Matlab Toolkit for Collaborative Filtering: http://www.cs.cmu.edu/~lebanon/IR-lab.htm

Page 65: Recommendation System

Readings

[1] MIW Ch. 8

[2] M. R. McLaughlin and J. L. Herlocker, "A collaborative filtering algorithm and evaluation metric that accurately model the user experience," in Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. Sheffield, United Kingdom: ACM, 2004.

Page 66: Recommendation System

Thank You!

Q&A

Page 67: Recommendation System

Related Conferences

http://recsys.acm.org/

Page 68: Recommendation System

Challenges of Nearest-Neighbor CF

Structure-based recommendations: recommendations based on similarities between items with positive votes (as opposed to votes of other users).

The structure of item dependencies is modeled through dimensionality reduction via singular value decomposition (SVD), aka latent semantic indexing: approximate the set of row-vector votes as a linear combination of basis column-vectors, i.e., find the set of columns that least-squares minimizes the difference between the row estimations and their true values.

Perform nearest-neighbor calculations to project predictions for all items.
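As a toy, self-contained illustration of the dimensionality-reduction idea (not the algorithm from the slides), the leading singular value and vectors of a small dense ratings matrix can be found by alternating power iteration; the `ratings` matrix, seed, and iteration count below are all made up:

```python
import math
import random

def top_singular_triple(A, iters=200):
    """Alternating power iteration for the largest singular value and
    vectors of a small dense matrix A (a list of rows), so that
    A is approximated by sigma * u * v^T (a rank-1 'latent factor')."""
    m, n = len(A), len(A[0])
    rng = random.Random(0)                 # fixed seed for reproducibility
    v = [rng.random() for _ in range(n)]
    sigma, u = 0.0, [0.0] * m
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        nu = math.sqrt(sum(x * x for x in u)) or 1.0
        u = [x / nu for x in u]            # left vector, normalized
        w = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]
        sigma = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / sigma for x in w]         # right vector; its norm -> sigma
    return sigma, u, v

ratings = [[5.0, 4.0, 0.0],
           [4.0, 3.2, 0.0],
           [0.0, 0.0, 1.0]]
sigma, u, v = top_singular_triple(ratings)
# Rank-1 estimate for every (user, item) cell, filled-in or not:
approx = [[sigma * ui * vj for vj in v] for ui in u]
```

A real system would keep several singular triples and run nearest-neighbor calculations in the reduced space, as the slide describes.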

Page 69: Recommendation System

The next level - modeling context

Suppose we could view users and docs in a common vector space of terms; docs already live in term space.

How do we cast users into this space? As a combination of the docs they liked/viewed, the terms they used in their writings, and terms from their home pages, resumes, …

Page 70: Recommendation System

Context modification

Then "user u viewing document d" can be modeled as a vector in this space: u + d.

User u issuing search terms s can be similarly modeled: add the search-term vector to the user vector. More generally, any term vector (say, recent search/browse history) can offset the user vector.

Page 71: Recommendation System

Summary so far

Content/context is expressible in term space and combined into inter-user correlation.

This is an algebraic formulation, but it can also be recast in the language of probability.

What if certain correlations are "constrained": two users in the same department/zip code, or two products by the same manufacturer?

Page 72: Recommendation System

Capturing role/domain

Additional axes in the vector space: corporate org chart (departments), product manufacturers/categories. Make these axes "heavy" (weighting).

Challenge: modeling hierarchies (org chart, product taxonomy).

Page 73: Recommendation System

A story

Page 74: Recommendation System

Brief History

Page 75: Recommendation System
Page 76: Recommendation System
Page 77: Recommendation System
Page 78: Recommendation System
Page 79: Recommendation System
Page 80: Recommendation System

AdSense

Page 81: Recommendation System
Page 82: Recommendation System

Everyday Examples of Recommendation Systems…

Bestseller lists
Top 40 music lists
The "recent returns" shelf at the library
Many weblogs
"Read any good books lately?"
....

Common insight: personal tastes are correlated. If Alice and Bob both like X, and Alice likes Y, then Bob is more likely to like Y, especially (perhaps) if Bob knows Alice.

Page 83: Recommendation System

GroupLens Collaborative Filtering Scheme

Prediction for active user a on item q:

p_{a,q} = v̄_a + σ_a · p̃_{a,q}

Weighted average of preferences:

p̃_{a,q} = Σ_{i=1}^{n} w_{a,i} · z_{i,q}

Similarity weight between the active user a and user i:

w_{a,i} = Σ_k z_{a,k} · z_{i,k}

z-score for user i on item q, where v_{i,q} is the rating of user i on item q and v̄_i, σ_i are user i's mean vote and standard deviation:

z_{i,q} = (v_{i,q} − v̄_i) / σ_i

Mean vote for user i:

v̄_i = (1/|I_i|) Σ_{j∈I_i} v_{i,j}
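The whole scheme can be sketched in a few lines of pure Python (illustrative; since the slide does not show a normalization for the weighted sum, dividing by Σ|w_{a,i}| here is my assumption, as are the data layout and names):

```python
import math

def zscores(ratings):
    """ratings: dict user -> dict item -> vote.

    Return, per user: (mean vote, std deviation, z-score dict
    z_iq = (v_iq - mean_i) / sigma_i)."""
    out = {}
    for u, votes in ratings.items():
        vals = list(votes.values())
        mean = sum(vals) / len(vals)
        sigma = math.sqrt(sum((x - mean) ** 2 for x in vals) / len(vals)) or 1.0
        out[u] = (mean, sigma, {q: (x - mean) / sigma for q, x in votes.items()})
    return out

def predict(a, q, ratings):
    """p_aq = mean_a + sigma_a * sum_i w_ai * z_iq / sum_i |w_ai|,
    with w_ai = sum over co-rated items k of z_ak * z_ik."""
    zs = zscores(ratings)
    mean_a, sigma_a, za = zs[a]
    num = den = 0.0
    for i, (_, _, zi) in zs.items():
        if i == a or q not in zi:
            continue
        w = sum(za[k] * zi[k] for k in za if k in zi)   # w_ai over co-rated items
        num += w * zi[q]
        den += abs(w)
    return mean_a + sigma_a * (num / den if den else 0.0)

ratings = {"a": {"x": 1.0, "y": 3.0},
           "b": {"x": 1.0, "y": 3.0, "q": 5.0}}
```

User b, whose z-scores agree with a's on the co-rated items, pulls a's prediction for q above a's own mean.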