Transcript

CS 2750 Machine Learning

Lecture 10

Milos Hauskrecht

[email protected]

5329 Sennott Square

Generative classification models

Classification

• Data: $D = \{d_1, d_2, \ldots, d_n\}$, where $d_i = \langle \mathbf{x}_i, y_i \rangle$

  – $y_i$ represents a discrete class value

• Goal: learn $f: X \rightarrow Y$

• Binary classification

  – a special case when $Y = \{0, 1\}$

• First step:

  – we need to devise a model of the function $f$


Discriminant functions

• A common way to represent a classifier is by using

  – Discriminant functions

• Works for both the binary and multi-way classification

• Idea:

  – For every class $i = 0, 1, \ldots, k$ define a function $g_i(\mathbf{x})$ mapping $X$

  – When the decision on input $\mathbf{x}$ should be made, choose the class with the highest value of $g_i(\mathbf{x})$:

    $y^* = \arg\max_i g_i(\mathbf{x})$

Logistic regression model

• Discriminant functions:

  $g_1(\mathbf{x}) = g(\mathbf{w}^T \mathbf{x})$    $g_0(\mathbf{x}) = 1 - g(\mathbf{w}^T \mathbf{x})$

• Values of the discriminant functions vary in the interval [0, 1]

  – Probabilistic interpretation:

    $f(\mathbf{x}, \mathbf{w}) = p(y = 1 \mid \mathbf{x}, \mathbf{w}) = g(\mathbf{w}^T \mathbf{x})$

[Figure: logistic-regression unit – the input vector $\mathbf{x} = (1, x_1, x_2, \ldots, x_d)$ is combined with weights $w_0, w_1, w_2, \ldots, w_d$ into a sum $z$, which is passed through the sigmoid.]
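To make the discriminant functions concrete, here is a minimal sketch in Python/numpy; the particular weights and input below are invented for illustration and are not from the lecture:

```python
import numpy as np

def sigmoid(z):
    """Logistic (sigmoid) function g(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def discriminants(w, x):
    """Logistic-regression discriminant functions.

    w: weights (w0, w1, ..., wd); x: input with a leading 1 for the bias term.
    Returns (g0, g1) where g1(x) = g(w^T x) = p(y=1 | x, w) and g0 = 1 - g1.
    """
    g1 = sigmoid(w @ x)
    return 1.0 - g1, g1

# Invented example values:
w = np.array([-0.5, 2.0, -1.0])
x = np.array([1.0, 0.8, 0.3])       # leading 1 is the bias input
g0, g1 = discriminants(w, x)
y_star = int(g1 > g0)               # class with the highest discriminant value
```

Choosing the class with the larger discriminant is exactly the $\arg\max_i g_i(\mathbf{x})$ rule from the previous slide.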


When does the logistic regression fail?

• Nonlinear decision boundary

[Figure: two-class dataset whose true decision boundary is nonlinear. Caption: Decision boundary.]

When does the logistic regression fail?

• Another example of a non-linear decision boundary

[Figure: another two-class dataset with a non-linear decision boundary.]


Non-linear extension of logistic regression

• use feature (basis) functions to model nonlinearities

• the same trick as used for the linear regression

  – $\phi_j(\mathbf{x})$ – an arbitrary function of $\mathbf{x}$

Linear regression:

  $f(\mathbf{x}) = w_0 + \sum_{j=1}^{m} w_j \phi_j(\mathbf{x})$

Logistic regression:

  $p(y = 1 \mid \mathbf{x}) = g\left(w_0 + \sum_{j=1}^{m} w_j \phi_j(\mathbf{x})\right)$

[Figure: network diagram – the inputs $1, x_1, \ldots, x_d$ are mapped through basis functions $\phi_1(\mathbf{x}), \phi_2(\mathbf{x}), \ldots, \phi_m(\mathbf{x})$, which are combined with weights $w_0, w_1, w_2, \ldots, w_m$.]
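As a sketch of this basis-function trick: expand the input through some chosen $\phi_j$ and fit an ordinary logistic regression on the expanded features. The quadratic features and weights below are my own illustrative choices, not the lecture's:

```python
import numpy as np

def quadratic_features(x):
    """Example basis functions phi_j for a 2-d input x = (x1, x2):
    all monomials up to degree 2. Any other phi_j(x) works the same way."""
    x1, x2 = x
    return np.array([x1, x2, x1 * x1, x2 * x2, x1 * x2])

def predict_proba(w0, w, x):
    """p(y=1 | x) = g(w0 + sum_j w_j * phi_j(x))."""
    z = w0 + w @ quadratic_features(x)
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative weights (not fitted to any real data):
w0, w = -1.0, np.array([0.5, -0.3, 1.2, 1.2, 0.0])
p1 = predict_proba(w0, w, np.array([0.4, -0.7]))
```

Any other choice of $\phi_j(\mathbf{x})$ plugs into `predict_proba` the same way; only the feature map changes, not the model.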

Regularized logistic regression

• If the model is too complex and prone to overfitting, its prediction accuracy can be improved by removing some inputs from the model, that is, by setting their coefficients to zero

• Recall the linear model:

  $f(\mathbf{x}) = \mathbf{w}^T \mathbf{x} = w_0 x_0 + w_1 x_1 + w_2 x_2 + w_3 x_3 + \ldots + w_d x_d$

  Setting $w_1 = 0$ removes the input $x_1$:

  $f(\mathbf{x}) = \mathbf{w}^T \mathbf{x} = w_0 x_0 + 0 \cdot x_1 + w_2 x_2 + w_3 x_3 + \ldots + w_d x_d$

[Figure: network diagram of the linear model over the input vector $\mathbf{x} = (1, x_1, x_2, \ldots, x_d)$ with the weight $w_1$ set to 0.]


Regularized logistic regression

• If the model is too complex and prone to overfitting, its prediction accuracy can be improved by removing some inputs from the model, that is, by setting their coefficients to zero

• We can apply the same idea to the logistic regression:

  $p(y = 1 \mid \mathbf{x}) = g(\mathbf{w}^T \mathbf{x})$,  with parameters (weights) $w_0, w_1, \ldots, w_k$

  Setting $w_1 = 0$:

  $p(y = 1 \mid \mathbf{x}) = g(\mathbf{w}^T \mathbf{x}) = g(w_0 x_0 + 0 \cdot x_1 + w_2 x_2 + w_3 x_3 + \ldots + w_d x_d)$

[Figure: network diagram of the logistic model over the input vector $\mathbf{x} = (1, x_1, x_2, \ldots, x_d)$ with the weight $w_1$ set to 0.]

Ridge (L2) penalty

Linear regression – Ridge penalty:

  $J_n(\mathbf{w}) = \frac{1}{n} \sum_{i=1}^{n} \left(y_i - \mathbf{w}^T \mathbf{x}_i\right)^2 + \lambda \|\mathbf{w}\|_2^2$

  where $\|\mathbf{w}\|_2^2 = \mathbf{w}^T \mathbf{w} = \sum_{i=0}^{d} w_i^2$

Logistic regression:

  $J_n(\mathbf{w}) = -\log P(D \mid \mathbf{w}) + \lambda \|\mathbf{w}\|_2^2$

  $J_n(\mathbf{w}) = -\sum_{i=1}^{n} \left[ y_i \log g(\mathbf{w}^T \mathbf{x}_i) + (1 - y_i) \log\left(1 - g(\mathbf{w}^T \mathbf{x}_i)\right) \right] + \lambda \|\mathbf{w}\|_2^2$

In both objectives the first term measures the fit to data – for logistic regression via the negative log likelihood – and the second term is the model complexity penalty.
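A minimal numpy sketch of the L2-penalized logistic objective above and its gradient; the function and variable names are mine, and whether to exempt the bias weight from the penalty is a modeling choice the formula above does not make:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ridge_logistic_objective(w, X, y, lam):
    """J_n(w) = -log P(D|w) + lam * ||w||_2^2 for logistic regression.

    X: (n, d) inputs (include a column of ones for the bias),
    y: (n,) labels in {0, 1}, lam: penalty weight lambda.
    """
    p = sigmoid(X @ w)
    nll = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))  # fit to data
    return nll + lam * np.sum(w ** 2)                        # + complexity penalty

def ridge_logistic_gradient(w, X, y, lam):
    """Gradient of the objective: X^T (g(Xw) - y) + 2*lam*w."""
    return X.T @ (sigmoid(X @ w) - y) + 2.0 * lam * w
```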


Lasso (L1) penalty

Linear regression – Lasso penalty:

  $J_n(\mathbf{w}) = \frac{1}{n} \sum_{i=1}^{n} \left(y_i - \mathbf{w}^T \mathbf{x}_i\right)^2 + \lambda \|\mathbf{w}\|_1$

  where $\|\mathbf{w}\|_1 = \sum_{i=0}^{d} |w_i|$

Logistic regression:

  $J_n(\mathbf{w}) = -\log P(D \mid \mathbf{w}) + \lambda \|\mathbf{w}\|_1$

  $J_n(\mathbf{w}) = -\sum_{i=1}^{n} \left[ y_i \log g(\mathbf{w}^T \mathbf{x}_i) + (1 - y_i) \log\left(1 - g(\mathbf{w}^T \mathbf{x}_i)\right) \right] + \lambda \|\mathbf{w}\|_1$

As with the ridge penalty, the first term measures the fit to data (the negative log likelihood for logistic regression) and the second term is the model complexity penalty; the L1 penalty tends to drive some coefficients exactly to zero, which implements the input-removal idea above.
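In practice both penalties are available off the shelf; a sketch using scikit-learn, with toy data and illustrative settings (note that scikit-learn's `C` is an inverse penalty weight, roughly $1/\lambda$):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data, for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] - 2 * X[:, 1] + rng.normal(size=200) > 0).astype(int)

# Ridge (L2) penalized logistic regression.
ridge = LogisticRegression(penalty="l2", C=1.0).fit(X, y)

# Lasso (L1) penalized logistic regression; the liblinear solver supports L1.
lasso = LogisticRegression(penalty="l1", C=1.0, solver="liblinear").fit(X, y)

# L2 typically shrinks coefficients toward zero; L1 sets some exactly to zero.
print("nonzero L2 coefficients:", np.sum(ridge.coef_ != 0))
print("nonzero L1 coefficients:", np.sum(lasso.coef_ != 0))
```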

Generative approach to classification

Logistic regression:

• Represents and learns a model of $p(y \mid \mathbf{x})$

  – the two posteriors satisfy $p(y = 0 \mid \mathbf{x}) + p(y = 1 \mid \mathbf{x}) = 1$

• An example of a discriminative classification approach

• The model is unable to sample (generate) data instances $(\mathbf{x}, y)$

Generative approach:

• Represents and learns the joint distribution $p(\mathbf{x}, y)$

• The model is able to sample (generate) data instances $(\mathbf{x}, y)$

• The joint model defines probabilistic discriminant functions. How?

  $g_0(\mathbf{x}) = p(y = 0 \mid \mathbf{x}) = \frac{p(\mathbf{x}, y = 0)}{p(\mathbf{x})} = \frac{p(\mathbf{x} \mid y = 0)\, p(y = 0)}{p(\mathbf{x})}$

  $g_1(\mathbf{x}) = p(y = 1 \mid \mathbf{x}) = \frac{p(\mathbf{x}, y = 1)}{p(\mathbf{x})} = \frac{p(\mathbf{x} \mid y = 1)\, p(y = 1)}{p(\mathbf{x})}$
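To illustrate, the discriminant functions come straight from Bayes' rule; the 1-d class-conditional densities and the prior below are made up for the example:

```python
import numpy as np
from scipy.stats import norm

# Illustrative 1-d class-conditional densities and class prior
# (these particular numbers are not from the lecture).
p_x_given_y0 = norm(loc=-1.0, scale=1.0).pdf
p_x_given_y1 = norm(loc=+1.5, scale=0.8).pdf
prior_y1 = 0.3

def posteriors(x):
    """g0(x), g1(x) via Bayes' rule; p(x) is the normalizing sum."""
    j0 = p_x_given_y0(x) * (1.0 - prior_y1)   # p(x, y=0)
    j1 = p_x_given_y1(x) * prior_y1           # p(x, y=1)
    px = j0 + j1                              # p(x)
    return j0 / px, j1 / px

g0, g1 = posteriors(0.5)
```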


Generative approach to classification

Typical joint model:  $p(\mathbf{x}, y) = p(\mathbf{x} \mid y)\, p(y)$

• $p(\mathbf{x} \mid y)$ = class-conditional distributions (densities)

  – binary classification: two class-conditional distributions, $p(\mathbf{x} \mid y = 0)$ and $p(\mathbf{x} \mid y = 1)$

• $p(y)$ = priors on classes

  – probability of class $y$

  – for binary classification: Bernoulli distribution, $p(y = 0) + p(y = 1) = 1$

[Figure: graphical model with the class node $y$ as the parent of the input node $\mathbf{x}$.]

Quadratic discriminant analysis (QDA)

Model:

• Class-conditional distributions are multivariate normal distributions:

  $\mathbf{x} \sim N(\boldsymbol{\mu}_0, \boldsymbol{\Sigma}_0)$  for  $y = 0$

  $\mathbf{x} \sim N(\boldsymbol{\mu}_1, \boldsymbol{\Sigma}_1)$  for  $y = 1$

  Multivariate normal $\mathbf{x} \sim N(\boldsymbol{\mu}, \boldsymbol{\Sigma})$:

  $p(\mathbf{x} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{1}{(2\pi)^{d/2} |\boldsymbol{\Sigma}|^{1/2}} \exp\left[ -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right]$

• Priors on classes (class 0, 1) follow a Bernoulli distribution:

  $p(y, \theta) = \theta^y (1 - \theta)^{1 - y}$,  $y \in \{0, 1\}$,  $y \sim \text{Bernoulli}(\theta)$

[Figure: graphical model with the class node $y$ as the parent of the input node $\mathbf{x}$.]
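A quick sketch of evaluating the two class-conditional densities with scipy; the means and covariances below are invented for illustration:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative parameters of the two Gaussians (not from the lecture).
mu0, Sigma0 = np.array([-1.0, 0.0]), np.array([[1.0, 0.2], [0.2, 0.5]])
mu1, Sigma1 = np.array([+1.0, 1.0]), np.array([[0.6, 0.0], [0.0, 1.2]])

x = np.array([0.3, 0.4])
p_x_y0 = multivariate_normal.pdf(x, mean=mu0, cov=Sigma0)  # p(x | y=0)
p_x_y1 = multivariate_normal.pdf(x, mean=mu1, cov=Sigma1)  # p(x | y=1)
```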


Learning of parameters of the QDA model

Density estimation in statistics

• We see examples – we do not know the parameters of the Gaussians (class-conditional densities)

  $p(\mathbf{x} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{1}{(2\pi)^{d/2} |\boldsymbol{\Sigma}|^{1/2}} \exp\left[ -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right]$

• ML estimate of the parameters of a multivariate normal $N(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ for a set of $n$ examples of $\mathbf{x}$

  Optimize the log-likelihood:

  $l(D, \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \sum_{i=1}^{n} \log p(\mathbf{x}_i \mid \boldsymbol{\mu}, \boldsymbol{\Sigma})$

  $\hat{\boldsymbol{\mu}} = \frac{1}{n} \sum_{i=1}^{n} \mathbf{x}_i$      $\hat{\boldsymbol{\Sigma}} = \frac{1}{n} \sum_{i=1}^{n} (\mathbf{x}_i - \hat{\boldsymbol{\mu}})(\mathbf{x}_i - \hat{\boldsymbol{\mu}})^T$

• How about class priors?
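The ML estimates above are just the sample mean and the $\frac{1}{n}$ (not $\frac{1}{n-1}$) sample covariance; a small numpy sketch:

```python
import numpy as np

def ml_gaussian_estimates(X):
    """ML estimates of a multivariate normal from the rows of X (shape (n, d)).

    Returns (mu_hat, Sigma_hat) using the 1/n covariance, matching the
    ML formulas on the slide.
    """
    n = X.shape[0]
    mu_hat = X.mean(axis=0)
    centered = X - mu_hat
    Sigma_hat = (centered.T @ centered) / n
    return mu_hat, Sigma_hat

# For QDA, apply this per class, e.g.:
# mu0_hat, Sigma0_hat = ml_gaussian_estimates(X[y == 0])
# mu1_hat, Sigma1_hat = ml_gaussian_estimates(X[y == 1])
```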

Learning Quadratic discriminant analysis (QDA)

• Learning the class-conditional distributions

  $\mathbf{x} \sim N(\boldsymbol{\mu}_0, \boldsymbol{\Sigma}_0)$  for  $y = 0$

  $\mathbf{x} \sim N(\boldsymbol{\mu}_1, \boldsymbol{\Sigma}_1)$  for  $y = 1$

  – Learn the parameters of the 2 multivariate normal distributions

  – Use the density estimation methods

• Learning the priors on classes (class 0, 1)

  $p(y, \theta) = \theta^y (1 - \theta)^{1 - y}$,  $y \in \{0, 1\}$,  $y \sim \text{Bernoulli}(\theta)$

  – Learn the parameter $\theta$ of the Bernoulli distribution

  – Again, use the density estimation methods


QDA

[Figure: contours of 2 Gaussian class-conditional densities; the set of points where $g_1(\mathbf{x}) = g_0(\mathbf{x})$ forms the decision boundary.]


QDA: Making class decision

We basically need to design the discriminant functions.

• Posterior of a class – choose the class with the higher posterior probability:

  if $p(y = 1 \mid \mathbf{x}) > p(y = 0 \mid \mathbf{x})$ then $y = 1$, else $y = 0$

  $p(y = 1 \mid \mathbf{x}) = \frac{p(\mathbf{x} \mid \boldsymbol{\mu}_1, \boldsymbol{\Sigma}_1)\, p(y = 1)}{p(\mathbf{x} \mid \boldsymbol{\mu}_0, \boldsymbol{\Sigma}_0)\, p(y = 0) + p(\mathbf{x} \mid \boldsymbol{\mu}_1, \boldsymbol{\Sigma}_1)\, p(y = 1)}$

• Notice that, since the denominator is shared, it is sufficient to compare:

  $g_1(\mathbf{x}) = p(\mathbf{x} \mid \boldsymbol{\mu}_1, \boldsymbol{\Sigma}_1)\, p(y = 1)$  vs.  $g_0(\mathbf{x}) = p(\mathbf{x} \mid \boldsymbol{\mu}_0, \boldsymbol{\Sigma}_0)\, p(y = 0)$
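Continuing the earlier scipy sketch (parameters again invented for illustration), the QDA decision compares the two unnormalized posteriors:

```python
import numpy as np
from scipy.stats import multivariate_normal

def qda_decide(x, mu0, Sigma0, mu1, Sigma1, prior_y1):
    """Return 1 if p(x|mu1,Sigma1) p(y=1) >= p(x|mu0,Sigma0) p(y=0), else 0."""
    g0 = multivariate_normal.pdf(x, mean=mu0, cov=Sigma0) * (1.0 - prior_y1)
    g1 = multivariate_normal.pdf(x, mean=mu1, cov=Sigma1) * prior_y1
    return int(g1 >= g0)

# Illustrative parameters (not from the lecture):
mu0, Sigma0 = np.array([-1.0, 0.0]), np.eye(2)
mu1, Sigma1 = np.array([+1.0, 1.0]), 0.5 * np.eye(2)
label = qda_decide(np.array([0.2, 0.5]), mu0, Sigma0, mu1, Sigma1, prior_y1=0.5)
```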

QDA: Quadratic decision boundary

[Figure: contours of the class-conditional densities; the induced QDA decision boundary is quadratic.]


QDA: Quadratic decision boundary

[Figure: the quadratic decision boundary of the QDA classifier. Caption: Decision boundary.]

Linear discriminant analysis (LDA)

• Assumes the covariances are the same:

  $\mathbf{x} \sim N(\boldsymbol{\mu}_0, \boldsymbol{\Sigma})$,  $y = 0$

  $\mathbf{x} \sim N(\boldsymbol{\mu}_1, \boldsymbol{\Sigma})$,  $y = 1$


LDA: Linear decision boundary

[Figure: contours of two Gaussian class-conditional densities that share a covariance matrix.]

LDA: Linear decision boundary

[Figure: the resulting LDA decision boundary, a straight line. Caption: Decision boundary.]


Generative classification models

Idea:

1. Represent and learn the distribution $p(\mathbf{x}, y)$

2. Model is able to sample (generate) data instances $(\mathbf{x}, y)$

3. The model is used to get probabilistic discriminant functions:

   $g_0(\mathbf{x}) = p(y = 0 \mid \mathbf{x})$    $g_1(\mathbf{x}) = p(y = 1 \mid \mathbf{x})$

Typical model:  $p(\mathbf{x}, y) = p(\mathbf{x} \mid y)\, p(y)$

• $p(\mathbf{x} \mid y)$ = class-conditional distributions (densities)

  – binary classification: two class-conditional distributions, $p(\mathbf{x} \mid y = 0)$ and $p(\mathbf{x} \mid y = 1)$

• $p(y)$ = priors on classes – probability of class $y$

  – binary classification: Bernoulli distribution, $p(y = 0) + p(y = 1) = 1$

Naïve Bayes classifier

A generative classifier model with an additional simplifying assumption:

• All input attributes are conditionally independent of each other given the class.

• One of the basic ML classification models (often performs very well in practice)

So we have:

  $p(\mathbf{x} \mid y) = \prod_{i=1}^{d} p(x_i \mid y)$

  $p(\mathbf{x}, y) = p(\mathbf{x} \mid y)\, p(y) = p(y) \prod_{i=1}^{d} p(x_i \mid y)$

[Figure: graphical model – the class node $y$ is the parent of every attribute node $x_1, x_2, \ldots, x_d$, with factors $p(x_1 \mid y), p(x_2 \mid y), \ldots, p(x_d \mid y)$ and the class prior $p(y)$.]


Learning parameters of the model

Much simpler density estimation problems

• We need to learn: $p(\mathbf{x} \mid y = 0)$ and $p(\mathbf{x} \mid y = 1)$ and $p(y)$

• Because of the assumption of conditional independence, we only need to learn, for every input variable $i$: $p(x_i \mid y = 0)$ and $p(x_i \mid y = 1)$

• Much easier if the number of input attributes is large

• Also, the model gives us the flexibility to represent input attributes of different forms

  – E.g. one attribute can be modeled using the Bernoulli distribution, another using a Gaussian density or a Poisson distribution
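For instance, with binary attributes each $p(x_i \mid y)$ is a Bernoulli whose parameter can be estimated by counting; a minimal sketch (the add-one smoothing is my own choice, not from the lecture):

```python
import numpy as np

def fit_bernoulli_nb(X, y):
    """Estimate p(x_i = 1 | y = c) for every attribute i and class c in {0, 1},
    plus the class prior p(y = 1).

    X: (n, d) binary matrix, y: (n,) labels in {0, 1}.
    Uses Laplace (add-one) smoothing -- an assumption, not from the lecture.
    """
    theta = np.empty((2, X.shape[1]))
    for c in (0, 1):
        Xc = X[y == c]
        theta[c] = (Xc.sum(axis=0) + 1.0) / (Xc.shape[0] + 2.0)
    prior_y1 = y.mean()
    return theta, prior_y1
```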

Making a class decision for the Naïve Bayes

Discriminant functions

• Posterior of a class – choose the class with the higher posterior probability:

  if $p(y = 1 \mid \mathbf{x}) > p(y = 0 \mid \mathbf{x})$ then $y = 1$, else $y = 0$

  $p(y = 1 \mid \mathbf{x}) = \frac{\left( \prod_{i=1}^{d} p(x_i \mid y = 1) \right) p(y = 1)}{\left( \prod_{i=1}^{d} p(x_i \mid y = 0) \right) p(y = 0) + \left( \prod_{i=1}^{d} p(x_i \mid y = 1) \right) p(y = 1)}$
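A decision sketch using the fitted Bernoulli parameters from the previous block; working in log space to avoid underflow for large $d$ is my own numerical addition:

```python
import numpy as np

def nb_decide(x, theta, prior_y1):
    """Return the class c maximizing log p(y=c) + sum_i log p(x_i | y=c).

    x: (d,) binary input; theta[c, i] = p(x_i = 1 | y = c) from fit_bernoulli_nb.
    """
    log_g = np.empty(2)
    for c, prior in ((0, 1.0 - prior_y1), (1, prior_y1)):
        log_px = x * np.log(theta[c]) + (1 - x) * np.log(1.0 - theta[c])
        log_g[c] = np.log(prior) + log_px.sum()
    return int(np.argmax(log_g))
```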


Next: two interesting questions

(1) Two models with linear decision boundaries:

  – Logistic regression

  – LDA model (2 Gaussians with the same covariance matrices):

    $\mathbf{x} \sim N(\boldsymbol{\mu}_0, \boldsymbol{\Sigma})$  for  $y = 0$

    $\mathbf{x} \sim N(\boldsymbol{\mu}_1, \boldsymbol{\Sigma})$  for  $y = 1$

  • Question: Is there any relation between the two models?

(2) Two models with the same gradient:

  – Linear model for regression

  – Logistic regression model for classification

  have the same gradient update:

    $\mathbf{w} \leftarrow \mathbf{w} + \alpha \sum_{i=1}^{n} \left(y_i - f(\mathbf{x}_i)\right) \mathbf{x}_i$

  • Question: Why is the gradient the same?


Logistic regression and generative models

• Two models with linear decision boundaries:

  – Logistic regression

  – Generative model with 2 Gaussians with the same covariance matrix:

    $\mathbf{x} \sim N(\boldsymbol{\mu}_0, \boldsymbol{\Sigma})$  for  $y = 0$

    $\mathbf{x} \sim N(\boldsymbol{\mu}_1, \boldsymbol{\Sigma})$  for  $y = 1$

• Question: Is there any relation between the two models?

• Answer: Yes, the two models are related.

  – When we have 2 Gaussians with the same covariance matrix, the probability of $y$ given $\mathbf{x}$ has the form of a logistic regression model:

    $p(y = 1 \mid \mathbf{x}, \boldsymbol{\mu}_0, \boldsymbol{\mu}_1, \boldsymbol{\Sigma}) = g(\mathbf{w}^T \mathbf{x})$
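A short sketch of why this holds (standard algebra; the intermediate steps are my addition, not spelled out on the slide). With a shared covariance, the log-odds are

$$\log \frac{p(y = 1 \mid \mathbf{x})}{p(y = 0 \mid \mathbf{x})} = \log \frac{N(\mathbf{x} \mid \boldsymbol{\mu}_1, \boldsymbol{\Sigma})\, p(y = 1)}{N(\mathbf{x} \mid \boldsymbol{\mu}_0, \boldsymbol{\Sigma})\, p(y = 0)}$$

and the quadratic terms $-\frac{1}{2}\mathbf{x}^T \boldsymbol{\Sigma}^{-1} \mathbf{x}$ cancel, leaving a function linear in $\mathbf{x}$:

$$\log \frac{p(y = 1 \mid \mathbf{x})}{p(y = 0 \mid \mathbf{x})} = (\boldsymbol{\mu}_1 - \boldsymbol{\mu}_0)^T \boldsymbol{\Sigma}^{-1} \mathbf{x} + \frac{1}{2}\left(\boldsymbol{\mu}_0^T \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu}_0 - \boldsymbol{\mu}_1^T \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu}_1\right) + \log \frac{p(y = 1)}{p(y = 0)}$$

Inverting the log-odds gives exactly $p(y = 1 \mid \mathbf{x}) = g(\mathbf{w}^T \mathbf{x})$ with the sigmoid $g$, where $\mathbf{x}$ is augmented with the constant input 1 and the terms that do not depend on $\mathbf{x}$ form the bias weight $w_0$.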


Logistic regression and generative models

• Members of the exponential family can often be more naturally described as

  $f(\mathbf{x} \mid \boldsymbol{\theta}, \boldsymbol{\phi}) = h(\mathbf{x}, \boldsymbol{\phi}) \exp\left[ \frac{\boldsymbol{\theta}^T \mathbf{x} - A(\boldsymbol{\theta})}{a(\boldsymbol{\phi})} \right]$

  – $\boldsymbol{\theta}$ – a location parameter,  $\boldsymbol{\phi}$ – a scale parameter

• Claim: A logistic regression is a correct model when the class-conditional densities are from the same distribution in the exponential family and have the same scale factor $\boldsymbol{\phi}$

• This is a very powerful result:

  – We can represent the posteriors of many distributions with the same small logistic regression model


The gradient puzzle …

Linear regression:

  $f(\mathbf{x}) = \mathbf{w}^T \mathbf{x}$

  Online gradient update:  $\mathbf{w} \leftarrow \mathbf{w} + \alpha \left(y - f(\mathbf{x})\right) \mathbf{x}$

  Batch gradient update:  $\mathbf{w} \leftarrow \mathbf{w} + \alpha \sum_{i=1}^{n} \left(y_i - f(\mathbf{x}_i)\right) \mathbf{x}_i$

Logistic regression:

  $f(\mathbf{x}) = p(y = 1 \mid \mathbf{x}, \mathbf{w}) = g(\mathbf{w}^T \mathbf{x})$

  Online gradient update:  $\mathbf{w} \leftarrow \mathbf{w} + \alpha \left(y - f(\mathbf{x})\right) \mathbf{x}$

  Batch gradient update:  $\mathbf{w} \leftarrow \mathbf{w} + \alpha \sum_{i=1}^{n} \left(y_i - f(\mathbf{x}_i)\right) \mathbf{x}_i$

The updates are the same for the two models.

[Figure: the two network diagrams side by side – a linear unit producing $f(\mathbf{x}) = \mathbf{w}^T \mathbf{x}$ and a logistic unit producing $f(\mathbf{x}) = p(y = 1 \mid \mathbf{x})$ through the sigmoid of $z = \mathbf{w}^T \mathbf{x}$.]
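A sketch of the shared online update; the learning rate and data below are illustrative, and only the definition of $f$ differs between the two models:

```python
import numpy as np

def sgd_update(w, x, y, f, alpha=0.1):
    """One online gradient step, identical for both models:
    w <- w + alpha * (y - f(x)) * x."""
    return w + alpha * (y - f(w, x)) * x

def f_linear(w, x):
    """Linear regression model: f(x) = w^T x."""
    return w @ x

def f_logistic(w, x):
    """Logistic regression model: f(x) = g(w^T x) = p(y=1 | x, w)."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

# Illustrative single example; the update rule is the same in both cases.
w = np.zeros(3)
x, y = np.array([1.0, 0.5, -0.2]), 1.0
w_lin = sgd_update(w, x, y, f_linear)
w_log = sgd_update(w, x, y, f_logistic)
```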


The gradient puzzle …

• The same simple gradient update rule is derived for both the linear and the logistic regression models

• Where does the magic come from?

• Under the log-likelihood measure, the function models and the models for the output selection fit together:

  – Linear model + Gaussian noise:

    $y = \mathbf{w}^T \mathbf{x} + \varepsilon$,  $\varepsilon \sim N(0, \sigma^2)$

  – Logistic + Bernoulli:

    $p(y = 1 \mid \mathbf{x}) = g(\mathbf{w}^T \mathbf{x})$,  $y \sim \text{Bernoulli}\left(g(\mathbf{w}^T \mathbf{x})\right)$

[Figure: the linear unit $\mathbf{w}^T \mathbf{x}$ with additive Gaussian noise producing $y$, and the logistic unit $g(\mathbf{w}^T \mathbf{x})$ driving a Bernoulli trial that produces $y$.]

Generalized linear models (GLIMs)

Assumptions:

• The conditional mean (expectation) is:

  $\mu = f(\mathbf{w}^T \mathbf{x})$

  – where $f(\cdot)$ is a response function

• Output $y$ is characterized by an exponential family distribution with the conditional mean $\mu$

Examples:

  – Linear model + Gaussian noise:

    $y = \mathbf{w}^T \mathbf{x} + \varepsilon$,  $\varepsilon \sim N(0, \sigma^2)$

  – Logistic + Bernoulli:

    $g(\mathbf{w}^T \mathbf{x}) = \frac{1}{1 + e^{-\mathbf{w}^T \mathbf{x}}}$,  $y \sim \text{Bernoulli}\left(g(\mathbf{w}^T \mathbf{x})\right)$

[Figure: the same two network diagrams – the linear unit with Gaussian noise, and the logistic unit with a Bernoulli trial.]


Generalized linear models (GLIMs)

• A canonical response function $f(\cdot)$:

  – encoded in the sampling distribution:

    $p(\mathbf{x} \mid \boldsymbol{\theta}, \boldsymbol{\phi}) = h(\mathbf{x}, \boldsymbol{\phi}) \exp\left[ \frac{\boldsymbol{\theta}^T \mathbf{x} - A(\boldsymbol{\theta})}{a(\boldsymbol{\phi})} \right]$

• Leads to a simple gradient form

• Example: Bernoulli distribution

  $p(x \mid \mu) = \mu^x (1 - \mu)^{1 - x} = \exp\left[ x \log\frac{\mu}{1 - \mu} + \log(1 - \mu) \right]$

  so the natural parameter is $\theta = \log\frac{\mu}{1 - \mu}$, and inverting it gives

  $\mu = \frac{1}{1 + e^{-\theta}}$

  – the logistic function matches the Bernoulli
