Scalable Mining of Social Data using Stochastic Gradient Fisher Scoring
Jeon-Hyung Kang and Kristina Lerman
USC Information Sciences Institute


Page 1: Scalable Mining of Social Data using Stochastic Gradient Fisher Scoring · 2013. 10. 29.

Scalable Mining of Social Data using Stochastic Gradient Fisher Scoring

Jeon-Hyung Kang and Kristina Lerman USC Information Sciences Institute

Page 2

USC Information Sciences Institute

Information Overload in Social Media

Ruff, J. (2002). Information Overload: Causes, Symptoms and Solutions. Harvard Graduate School of Education’s Learning Innovations Laboratory (LILA)

2,500,000,000,000,000,000 (2.5 quintillion) bytes of data are created every day! -IBM estimate 2012

27% of total U.S. internet time is spent on social networking sites -Experian 2012

Social Networking is the No. 1 Online Activity in the U.S. (based on the Average time U.S. consumers spent with digital media per day)

-Mashable 2013

Page 3

Recommendations in Social Media • Social media provides a partial solution by allowing users to subscribe to / follow updates from specific users. • But information overload comes back when a user follows many others.

[Diagram: from social media observations to recommendation requirements —
• 2.5 quintillion bytes of data per day → information overload → scalability (computation)
• long-tail distribution → sparsity (data)
• variety of data → flexibility (model), easily extensible
→ filtering & recommendation!]

Page 4

User-Item Rating Prediction Problem

[Figure: a sparse users × items rating matrix. Rows are users (Daniel, Bob, Sara), columns are items, and the observed entries are ratings such as 1.0, 2.0, 4.0, and 5.0; the task is to predict the missing entries.]

Page 5

Probabilistic Matrix Factorization (PMF)

[Figure: the N × D user-item rating matrix R is approximated as UᵀV, where U is a K × N matrix of user-topic weights and V a K × D matrix of item-topic weights. Example users: Bob, Daniel, Sara; example item topics: "Marvel's hero, Classic, Action...", "TV series, Classic, Action...", "Drama, Family, ...".]

"PMF is a probabilistic linear model with Gaussian observation noise that handles very large and possibly sparse data."

Page 6

Probabilistic Matrix Factorization (PMF)

[Figure: graphical model of PMF. R ≈ UᵀV, where U (K × N) encodes each user's interests over K topics and V (K × D) encodes each item's topics; λu and λv are the precisions of the Gaussian priors on U and V.]

PMF [Salakhutdinov & Mnih 08]

"PMF is a probabilistic linear model with Gaussian observation noise that handles very large and possibly sparse data."
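The factorization above can be sketched as MAP estimation by gradient ascent. This is a minimal illustration, not the authors' implementation; the synthetic ratings, K = 2, learning rate, and prior precisions are made-up values.

```python
import numpy as np

# Minimal MAP sketch of PMF (illustrative only: data and hyperparameters
# below are invented, not taken from the slides).
rng = np.random.default_rng(0)
N, D, K = 30, 20, 2            # users, items, latent topics
lam_u = lam_v = 0.1            # Gaussian prior precisions on U and V

R = rng.uniform(1.0, 5.0, size=(N, D))   # synthetic ratings
mask = rng.random((N, D)) < 0.3          # only ~30% of entries observed

U = 0.1 * rng.standard_normal((K, N))    # user-topic matrix (K x N)
V = 0.1 * rng.standard_normal((K, D))    # item-topic matrix (K x D)

def map_loss(U, V):
    """Negative log posterior up to constants: squared error plus priors."""
    E = mask * (R - U.T @ V)
    return 0.5 * (E ** 2).sum() + 0.5 * lam_u * (U ** 2).sum() + 0.5 * lam_v * (V ** 2).sum()

lr = 0.01
before = map_loss(U, V)
for _ in range(200):
    E = mask * (R - U.T @ V)             # residuals on observed entries only
    U += lr * (V @ E.T - lam_u * U)      # ascend the log posterior in U
    V += lr * (U @ E - lam_v * V)        # ... and in V
after = map_loss(U, V)                   # should be lower than `before`
```

A prediction for a missing entry (i, j) is then simply `(U.T @ V)[i, j]`.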

Page 7

Additionally Available Data

[Figure: besides the N × D matrix of user-item ratings, social data provides an N × N user-user social network, N user profiles, a D × M item-term matrix of item descriptions, and an N × D matrix recording the source of adoptions.]

Page 8

[Figure: a family of latent-factor models built from LDA and PMF —
• LDA [Blei et al. 03]: topic model over item descriptions (D items × M terms)
• PMF [Salakhutdinov & Mnih 08]: factorizes the N × D rating matrix R into U and V, with Gaussian prior precisions λu and λv
• SMF [Ma et al. 08]: matrix factorization over the N × N user-user social network
• CTR [Wang & Blei 11]: LDA & PMF
• CTR-smf [Purushotham et al. 12]: LDA & PMF & SMF
• LA-CTR [Kang & Lerman 13]: LA (limited attention) & LDA & PMF, using the source of adoptions]

Given the parameters, computing the full posterior of the latent variables conditioned on a set of observations is intractable!

→ Approximate Posterior Inference

Page 9

Approximate Posterior Inference

Gibbs Sampling
• The simplest Markov chain Monte Carlo (MCMC) algorithm.
• Samples each variable from its distribution conditioned on the current assignment of all other variables.
• Requires computations over the whole data set at every iteration.
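As a toy, self-contained illustration of this conditional-resampling scheme (a standard bivariate-Gaussian textbook example, not the recommendation model; rho and the chain length are arbitrary):

```python
import numpy as np

# Toy Gibbs sampler for a bivariate Gaussian with correlation rho: each
# variable is resampled from its conditional given the current value of
# the other, and the chain's samples recover the joint distribution.
rng = np.random.default_rng(1)
rho = 0.8
x = y = 0.0
samples = []
for t in range(20_000):
    x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))  # x | y ~ N(rho*y, 1-rho^2)
    y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))  # y | x ~ N(rho*x, 1-rho^2)
    if t >= 1_000:                                  # discard burn-in
        samples.append((x, y))
samples = np.asarray(samples)
corr = np.corrcoef(samples.T)[0, 1]                 # should be close to rho
```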

Stochastic Gradient Fisher Scoring
• Stochastic gradient: approximate the gradient over the whole data set by the average gradient of the log-likelihood over a mini-batch of data.
• Bayesian Central Limit Theorem (Le Cam, 1986): under suitable regularity conditions, as N becomes very large,

P(θ | X) ≈ N(θ0, I_N^-1)

where θ0 is the posterior mode and I_N is the Fisher information of the N observations.

S. Ahn, A. Korattikara, and M. Welling. Bayesian posterior sampling via stochastic gradient Fisher scoring. arXiv preprint arXiv:1206.6380, 2012.
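A hedged sketch of the mini-batch idea: the update below follows the simpler stochastic gradient Langevin dynamics of [Welling & Teh 11] rather than the full Fisher-scoring preconditioner of [Ahn et al. 12]; the 1-D Gaussian model, step size, and batch size are illustrative choices.

```python
import numpy as np

# Mini-batch posterior sampling sketch (SGLD-style, not full SGFS):
# each step uses a rescaled mini-batch gradient plus injected Gaussian
# noise, so the whole data set is never touched at once.
rng = np.random.default_rng(2)
N, n, eps = 5_000, 50, 1e-4
X = rng.normal(2.0, 1.0, size=N)        # data with true mean 2.0, unit variance

theta, chain = 0.0, []
for t in range(4_000):
    batch = rng.choice(X, size=n)
    grad_prior = -theta                              # N(0, 1) prior on theta
    grad_lik = (N / n) * (batch - theta).sum()       # rescaled mini-batch gradient
    theta += 0.5 * eps * (grad_prior + grad_lik) + rng.normal(0.0, np.sqrt(eps))
    if t >= 1_000:                                   # discard burn-in
        chain.append(theta)
post_mean = float(np.mean(chain))                    # should be near the data mean
```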

Page 10

Social Data: characterized by a long-tailed distribution.

A long-tailed distribution has a large number of occurrences far from the "head," or central part, of the distribution. Such distributions can describe popularities or the numbers of occurrences of events with various probabilities (e.g., online business, mass media, micro-finance, social networks, economic models, and marketing).

[Figure: number of observations (N) per user (or item), sorted in decreasing order; roughly 80% of the observations come from the top 20% of users.]

The Pareto principle (also known as the 80-20 rule, the law of the vital few, and the principle of factor scarcity) states that, for many events, roughly 80% of the effects come from 20% of the causes.

Why do we need to care about this?
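The 80-20 pattern can be checked empirically by drawing per-user activity counts from a heavy-tailed distribution; the Pareto shape parameter and the population size below are made up for the demo.

```python
import numpy as np

# Empirical check of the 80-20 pattern: draw per-user activity counts from
# a heavy-tailed Pareto distribution and measure the share of all
# observations produced by the most active 20% of users.
rng = np.random.default_rng(3)
counts = rng.pareto(1.16, size=100_000) + 1.0   # shape ~1.16 gives roughly 80/20
counts = np.sort(counts)[::-1]                  # most active users first
top20_share = counts[: len(counts) // 5].sum() / counts.sum()
```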

Page 11

Why do we care about the long tail?

The term gained popularity in recent times as a description of the retailing strategy of selling a large number of unique items in addition to selling fewer popular items.

[Figure: number of observations (N) per item (or user); the 20% of "hit" items form the head, and the remaining 80% form the long tail.]

The distribution and inventory costs of businesses successfully applying this strategy allow them to realize significant profit from selling small volumes of hard-to-find items to many customers, instead of selling only large volumes of a reduced number of popular items. The total sales of this large number of "non-hit items" is called "the long tail."

Chris Anderson's (2006) book The Long Tail: Why the Future of Business Is Selling Less of More.

Businesses applying the long-tail strategy!

Page 12

Data sets

MovieLens 2000
• Source: collected by GroupLens Research from MovieLens
• Users: 6K; Items: 3.9K movies; Adoptions: 1 million anonymous ratings
• Sparseness: 76.6%; Mean adoptions/user: 166; Median adoptions/user: 96
• Notes: We used the same training and test split as in the original dataset (0.9 mil training, 0.1 mil test).

KDD Cup 2012
• Source: Tencent Weibo, KDD Cup 2012 Track 1
• Users: 2.3 mil; Items: 6K; Adoptions: 5.2 mil
• Sparseness: 98.75%; Mean adoptions/user: 77; Median adoptions/user: 58
• Notes: User profiles (including user keywords, age, gender, tags, number of tweets), item categories (pre-assigned topic hierarchy), item keywords, and user activity.

CiteULike 2013
• Source: CiteULike, Feb 2013
• Users: 5K; Items: 31K; Adoptions: 303K
• Sparseness: 99.81%; Mean adoptions/user: 9; Median adoptions/user: 8
• Notes: Public information about users, papers, tags used by users to describe papers, and group information linking users to groups by subscription.

Twitter 2012
• Source: Nov 2011 – Jul 2012
• Users: 9.5 mil; Items: 114K; Adoptions: 13 mil
• Sparseness: 99.87%; Mean adoptions/user: 8; Median adoptions/user: 6
• Notes: 707 mil social network links. Collected whole chains of tweets containing URLs to monitor information propagation over the social network. Focused on URLs containing at least 5 terms, resulting in 14K users, 6K items, and 121K adoptions.

Page 13

Result on MovieLens data set

• CPU time per epoch varies with the size of the mini-batch.
• After the burn-in period, SGFS outperformed Gibbs sampling.

Page 14

SGFS Result on Social Media Data

[Figures: likelihood and Recall@N curves.]

Page 15

Hybrid-Stochastic Gradient Fisher Scoring

[Figure: users sorted by number of observations (Nu); users with Nu > N_CLT are assigned to SGFS, the rest to GD.]

• N_CLT: the number of observations required for the Bayesian Central Limit Theorem to hold.
• Hidden variables with enough observations are inferred using SGFS, while those with few observations are inferred using gradient descent (GD).
• This hybrid can handle data that follows a long-tail distribution.
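A minimal sketch of the routing rule described above, assuming hypothetical per-user observation counts and a made-up N_CLT threshold; the update steps are schematic, not the paper's exact hSGFS equations.

```python
import numpy as np

# Hybrid routing sketch: users with enough observations (Nu > N_CLT) get a
# noisy stochastic-gradient (SGFS-style) update, while sparse tail users
# fall back to a plain gradient-descent step. All values are hypothetical.
rng = np.random.default_rng(4)
N_CLT = 30
user_counts = {"heavy_user": 500, "tail_user": 4}   # hypothetical Nu values

def update(theta, grad, n_obs, eps=1e-2):
    if n_obs > N_CLT:                       # enough data: inject sampling noise
        return theta + 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps))
    return theta + eps * grad               # too sparse: deterministic GD step

routes = {u: ("SGFS" if n > N_CLT else "GD") for u, n in user_counts.items()}
# routes == {"heavy_user": "SGFS", "tail_user": "GD"}
```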

Page 16

SGFS Result on Social Media Data

[Figures: scalability of distributed SGFS; Recall@N.]
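For reference, Recall@N as commonly defined for top-N recommendation can be computed as below; the item IDs are toy values, not from the data sets.

```python
# Recall@N: the fraction of a user's held-out adopted items that appear in
# the model's top-N ranked recommendation list.
def recall_at_n(ranked_items, held_out, n):
    hits = len(set(ranked_items[:n]) & set(held_out))
    return hits / len(held_out)

ranked = ["i3", "i7", "i1", "i9", "i2"]    # model's ranking, best first
relevant = ["i7", "i2", "i8"]              # user's held-out adoptions
score = recall_at_n(ranked, relevant, 3)   # top-3 contains only i7 -> 1/3
```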

Page 17

Conclusions

• Explored SGFS for mining social data and showed that the algorithm scales up with both the number of processors and the size of the mini-batch.
• SGFS failed to learn good models of long-tail-distributed data, resulting in poor prediction performance.
• Many social media data sets have long-tailed distributions, containing sparse regions with few observations per variable.
• Proposed the hSGFS algorithm, combining SGFS and gradient descent to provide more efficient performance.
• Showed significant improvement in both speed and prediction accuracy.
• We believe that hSGFS is an initial step toward further work on efficient MCMC sampling based on stochastic gradients for long-tail-distributed data.

Page 18

Related Work

• Stochastic inference
  – Bayesian learning via stochastic gradient Langevin dynamics [Welling & Teh 11]
  – Bayesian posterior sampling via stochastic gradient Fisher scoring [Ahn et al. 12]
  – Stochastic variational inference [Hoffman et al. 12]
• Collaborative filtering methods for item recommendations
  – Probabilistic matrix factorization [Salakhutdinov & Mnih 08]
  – Recommend new items that were liked by similar users [Ma et al. 08, Chua et al. 12]
  – Improve explanatory power of recommendations by extending LDA & social-network matrix factorization [Wang & Blei 11, Purushotham et al. 12]
  – Limited attention in social media [Kang & Lerman 13]