
Page 1:

Data Mining: Dimensionality reduction

Hamid Beigy

Sharif University of Technology

Fall 1394

Page 2:

Outline

1 Introduction

2 Feature selection methods

3 Feature extraction methods
  Principal component analysis
  Kernel principal component analysis
  Factor analysis
  Linear discriminant analysis

Page 3:

Table of contents

1 Introduction

2 Feature selection methods

3 Feature extraction methods
  Principal component analysis
  Kernel principal component analysis
  Factor analysis
  Linear discriminant analysis

Page 4:

Introduction

The complexity of any classifier or regressor depends on the number of input variables or features. These complexities include:

Time complexity: In most learning algorithms, the time complexity depends on the number of input dimensions (D) as well as on the size of the training set (N). Decreasing D decreases the time complexity of the algorithm for both the training and testing phases.

Space complexity: Decreasing D also decreases the amount of memory needed for the training and testing phases.

Sample complexity: Usually the number of training examples (N) is a function of the length of the feature vectors (D). Hence, decreasing the number of features also decreases the required number of training examples. Usually the number of training patterns should be 10 to 20 times the number of features.

Page 5:

Introduction

There are several reasons why we are interested in reducing dimensionality as a separate preprocessing step.

Decreasing the time complexity of classifiers or regressors.

Decreasing the cost of extracting/producing unnecessary features.

Simpler models are more robust on small data sets. Simpler models have less variance and thus are less dependent on noise and outliers.

The description of the classifier or regressor is simpler/shorter.

Visualization of data is simpler.

Page 6:

Peaking phenomenon

In practice, for a finite N, by increasing the number of features we obtain an initial improvement in performance, but after a critical value, further increasing the number of features results in an increase of the probability of error. This phenomenon is known as the peaking phenomenon.

If the number of samples increases (N2 ≫ N1), the peaking phenomenon occurs at a larger number of features (l2 > l1).

Page 7:

Dimensionality reduction methods

There are two main methods for reducing the dimensionality of inputs

Feature selection: These methods select d (d < D) dimensions out of D dimensions, and the other D − d dimensions are discarded.

Feature extraction: These methods find a new set of d (d < D) dimensions that are combinations of the original dimensions.

Page 8:

Table of contents

1 Introduction

2 Feature selection methods

3 Feature extraction methods
  Principal component analysis
  Kernel principal component analysis
  Factor analysis
  Linear discriminant analysis

Page 9:

Feature selection methods

Feature selection methods select d (d < D) dimensions out of D dimensions, and the other D − d dimensions are discarded.

Reasons for performing feature selection

Increasing the predictive accuracy of classifiers or regressors.
Removing irrelevant features.
Enhancing learning efficiency (reducing computational and storage requirements).
Reducing the cost of future data collection (making measurements on only those variables relevant for discrimination/prediction).
Reducing the complexity of the resulting classifier/regressor description (providing an improved understanding of the data and the model).

Feature selection is not necessarily required as a pre-processing step for classification/regression algorithms to perform well.

Several algorithms employ regularization techniques to handle over-fitting, or averaging techniques such as ensemble methods.

Page 10:

Feature selection methods

Feature selection methods can be grouped into three categories.

Filter methods: These methods use the statistical properties of features to filter out poorly informative features.

Wrapper methods: These methods evaluate the feature subset within the classifier/regressor algorithm. They are classifier/regressor dependent and have better performance than filter methods.

Embedded methods: These methods embed the search for the optimal subset into the classifier/regressor design. They are classifier/regressor dependent.

There are two key steps in the feature selection process.

Evaluation: An evaluation measure is a means of assessing a candidate feature subset.

Subset generation: A subset generation method is a means of generating a subset for evaluation.

Page 11:

Feature selection methods (Evaluation measures)

A large number of features are not informative: they are either irrelevant or redundant.

Irrelevant features are those features that don't contribute to a classification or regression rule.

Redundant features are those features that are strongly correlated.

In order to choose a good feature set, we require a measure of how much a feature contributes to the separation of classes, either individually or in the context of already selected features. We need to measure relevancy and redundancy.

There are two types of measures

Measures that rely on the general properties of the data. These assess the relevancy of individual features and are used to eliminate feature redundancy. All these measures are independent of the final classifier. They are inexpensive to compute but may not detect redundancy well.

Measures that use a classification rule as part of their evaluation. In this approach, a classifier is designed using the reduced feature set, and a measure of classifier performance is employed to assess the selected features. A widely used measure is the error rate.

Page 12:

Feature selection methods (Evaluation measures)

The following measures rely on the general properties of the data.

Feature ranking: Features are ranked by a metric, and those that fail to achieve a prescribed score are eliminated. Examples of such metrics are Pearson correlation, mutual information, and information gain.

Interclass distance: A measure of distance between classes is defined based on the distances between members of each class. An example of such a metric is the Euclidean distance.

Probabilistic distance: This is the computation of a probabilistic distance between the class-conditional probability density functions, i.e. the distance between p(x|C1) and p(x|C2) (two classes). An example of such a metric is the Chernoff dissimilarity measure.

Probabilistic dependency: These measures are multi-class criteria that measure the distance between the class-conditional probability density functions and the mixture probability density function for the data irrespective of the class, i.e. the distance between p(x|Ci) and p(x). An example of such a metric is the Joshi dissimilarity measure.
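As a concrete illustration of the feature-ranking idea, here is a minimal sketch in Python/NumPy (the function name and the 0.2 threshold are illustrative choices, not from the slides); it scores each feature by the absolute Pearson correlation with the target and keeps only those above a prescribed score.

import numpy as np

def rank_features_by_correlation(X, y, threshold=0.2):
    # Score each feature by |Pearson correlation| between X[:, j] and the target y.
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    ranking = np.argsort(scores)[::-1]                 # best-scoring feature first
    kept = [j for j in ranking if scores[j] >= threshold]
    return kept, scores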

Page 13:

Feature selection methods (Search algorithms)

Complete search: These methods are guaranteed to find the optimal subset of features according to some specified evaluation criterion. For example, exhaustive search and branch-and-bound methods are complete.

Sequential search: In these methods, features are added or removed sequentially. These methods are not optimal, but they are simple to implement and fast to produce results.

Best individual N: The simplest method is to assign a score to each feature and then select the N top-ranked features.

Sequential forward selection: It is a bottom-up search procedure that adds new features to the feature set one at a time until the final feature set is reached.

Generalized sequential forward selection: In this approach, at each step, r > 1 features are added instead of a single feature.

Sequential backward elimination: It is a top-down procedure that deletes a single feature at a time until the final feature set is reached.

Generalized sequential backward elimination: In this approach, at each step, r > 1 features are deleted instead of a single feature.
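A minimal sketch of sequential forward selection, assuming a caller-supplied evaluate(features) function (for example, the cross-validated accuracy of a wrapper classifier) and a fixed target size d; both names are mine, not from the slides.

def sequential_forward_selection(evaluate, D, d):
    # Greedy bottom-up search: add the single best feature at each step.
    selected = []
    remaining = set(range(D))
    while len(selected) < d:
        best_feature, best_score = None, float("-inf")
        for f in remaining:
            score = evaluate(selected + [f])           # evaluate candidate subset
            if score > best_score:
                best_feature, best_score = f, score
        selected.append(best_feature)
        remaining.remove(best_feature)
    return selected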

Page 14:

Table of contents

1 Introduction

2 Feature selection methods

3 Feature extraction methods
  Principal component analysis
  Kernel principal component analysis
  Factor analysis
  Linear discriminant analysis

Page 15:

Feature extraction

In feature extraction, we are interested in finding a new set of k (k < D) dimensions that are combinations of the original D dimensions.

Feature extraction methods may be supervised or unsupervised.

Examples of feature extraction methods

Principal component analysis (PCA)
Factor analysis (FA)
Multi-dimensional scaling (MDS)
ISOMap
Locally linear embedding
Linear discriminant analysis (LDA)

Page 16:

Principal component analysis

PCA projects D-dimensional input vectors to k-dimensional vectors via a linear mapping with minimum loss of information. The new dimensions are combinations of the original D dimensions.

The problem is to find a matrix W such that the following mapping results in the minimum loss of information.

Z = W^T X

PCA is unsupervised and tries to maximize the variance.

The principal component is w1 such that the sample, after projection onto w1, is most spread out, so that the differences between the sample points become most apparent.

For uniqueness of the solution, we require ‖w1‖ = 1.

Let Σ = Cov(X) and consider the principal component w1; we have

z1 = w1^T x

Var(z1) = E[(w1^T x − w1^T µ)²] = E[(w1^T x − w1^T µ)(w1^T x − w1^T µ)]
        = E[w1^T (x − µ)(x − µ)^T w1] = w1^T E[(x − µ)(x − µ)^T] w1 = w1^T Σ w1

Page 17:

Principal component analysis (cont.)

The mapping problem becomes

w1 = argmax_w w^T Σ w   subject to   w1^T w1 = 1.

Writing this as a Lagrange problem, we have

maximize_{w1}   w1^T Σ w1 − α(w1^T w1 − 1)

Taking the derivative with respect to w1 and setting it equal to 0, we obtain

2Σw1 = 2αw1 ⇒ Σw1 = αw1

Hence w1 is an eigenvector of Σ and α is the corresponding eigenvalue.

Since we want to maximize Var(z1), we have

Var(z1) = w1^T Σ w1 = α w1^T w1 = α

Hence, we choose the eigenvector with the largest eigenvalue, i.e. λ1 = α.
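The derivation above translates directly into a few lines of NumPy. The following is a sketch (function and variable names are mine): it centers the data, eigendecomposes the sample covariance matrix, and projects onto the k leading eigenvectors.

import numpy as np

def pca(X, k):
    # Center the data and estimate the covariance matrix S.
    mu = X.mean(axis=0)
    Xc = X - mu
    S = np.cov(Xc, rowvar=False)              # D x D sample covariance
    # Eigendecomposition; eigh returns eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(S)
    order = np.argsort(eigvals)[::-1]         # sort descending
    W = eigvecs[:, order[:k]]                 # D x k leading eigenvectors
    Z = Xc @ W                                # rows of Z are z = W^T (x - mu)
    return Z, W, eigvals[order], mu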

Page 18:

Principal component analysis (cont.)

The second principal component, w2, should also

maximize the variance
be of unit length
be orthogonal to w1 (z1 and z2 must be uncorrelated)

The mapping problem for the second principal component becomes

w2 = argmax_w w^T Σ w   subject to   w2^T w2 = 1 and w2^T w1 = 0.

Writing this as a Lagrange problem, we have

maximize_{w2}   w2^T Σ w2 − α(w2^T w2 − 1) − β(w2^T w1 − 0)

Taking the derivative with respect to w2 and setting it equal to 0, we obtain

2Σw2 − 2αw2 − βw1 = 0

Pre-multiplying by w1^T, we obtain

2 w1^T Σ w2 − 2α w1^T w2 − β w1^T w1 = 0

Note that w1^T w2 = 0 and w1^T Σ w2 = (w2^T Σ w1)^T = w2^T Σ w1 is a scalar.

Page 19:

Principal component analysis (cont.)

Since Σ w1 = λ1 w1, we have

w1^T Σ w2 = w2^T Σ w1 = λ1 w2^T w1 = 0

Then β = 0 and the problem reduces to

Σw2 = αw2

This implies that w2 should be the eigenvector of Σ with the second largest eigenvalue λ2 = α.

Similarly, you can show that the other dimensions are given by the eigenvectors with decreasing eigenvalues.

Since Σ is symmetric, the eigenvectors corresponding to two different eigenvalues are orthogonal. (Show it; a short sketch follows this list.)

If Σ is positive definite (x^T Σ x > 0 for every non-null vector x), then all its eigenvalues are positive.

If Σ is singular, then its rank is k (k < D) and λ_i = 0 for i = k + 1, …, D.
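A short sketch of the orthogonality exercise above, using only the symmetry of Σ (the symbols u, v, λ, µ are mine):

Let Σu = λu and Σv = µv with λ ≠ µ. Then
λ u^T v = (Σu)^T v = u^T Σ^T v = u^T Σ v = µ u^T v,
so (λ − µ) u^T v = 0, and since λ ≠ µ, it follows that u^T v = 0.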

Page 20:

Principal component analysis (cont.)

Define

Z = W^T (X − m)

Then the k columns of W are the k leading eigenvectors of S (the estimator of Σ).

m is the sample mean of X .

Subtracting m from X before projection centers the data on the origin.

How to normalize variances?
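One common answer, stated here as an assumption rather than something on the slide, is to z-score each dimension before applying PCA; this is equivalent to running PCA on the correlation matrix. A minimal sketch:

import numpy as np

def standardize(X, eps=1e-12):
    # Give each dimension zero mean and unit variance before PCA.
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / (sigma + eps)           # eps guards against constant features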

Page 21:

Principal component analysis (cont.)

How to select k?

If all eigenvalues are positive and |S| = ∏_{i=1}^{D} λ_i is small, then some eigenvalues have little contribution to the variance and may be discarded.

The scree graph is the plot of variance as a function of the number of eigenvectors.

Page 22:

Principal component analysis (cont.)

How to select k?

We select the leading k components that explain more than, for example, 95% of the variance.

The proportion of variance (POV) is

POV = (∑_{i=1}^{k} λ_i) / (∑_{i=1}^{D} λ_i)

By visually analyzing the POV, we can choose k.
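A small sketch of the POV rule, assuming the eigenvalues are already sorted in decreasing order (for example, the eigvals returned by the PCA sketch earlier):

import numpy as np

def choose_k_by_pov(eigvals, pov=0.95):
    # Smallest k whose leading eigenvalues explain at least `pov` of the total variance.
    ratios = np.cumsum(eigvals) / np.sum(eigvals)
    return int(np.searchsorted(ratios, pov) + 1)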

Page 23:

Principal component analysis (cont.)

How to select k?

Another possibility is to ignore the eigenvectors whose corresponding eigenvalues are less than the average input variance (why?).

In the pre-processing phase, it is better to pre-process the data so that each dimension has zero mean and unit variance (why and when?).

Question: Can we use the correlation matrix instead of the covariance matrix? Derive the solution for PCA.

Page 24:

Principal component analysis (cont.)

PCA is sensitive to outliers. A few points distant from the center have a large effect on the variances and thus on the eigenvectors.

Question: How can we use robust estimation methods for calculating the parameters in the presence of outliers?

A simple method is discarding the isolated data points that are far away.

Question: When D is large, calculating, sorting, and processing S may be tedious. Is it possible to calculate the eigenvectors and eigenvalues directly from the data without explicitly calculating the covariance matrix?

Page 25:

Principal component analysis (reconstruction error)

In PCA, the input vector x is projected onto the z-space as

z = W^T (x − µ)

When W is an orthogonal matrix (W^T W = I), z can be back-projected to the original space as

x̂ = Wz + µ

x̂ is the reconstruction of x from its representation in the z-space.

Question: Show that PCA minimizes the following reconstruction error:

∑_i ‖x̂_i − x_i‖²

The reconstruction error depends on how many of the leading components are taken into account.
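A sketch of the back-projection and the resulting error, reusing the W and mu returned by the earlier PCA sketch (hypothetical names):

import numpy as np

def reconstruction_error(X, W, mu):
    Z = (X - mu) @ W                   # project:      z = W^T (x - mu)
    X_hat = Z @ W.T + mu               # back-project: x_hat = W z + mu
    return np.sum((X_hat - X) ** 2)    # sum_i ||x_hat_i - x_i||^2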

Page 26:

Principal component analysis (cont.)

PCA is unsupervised and does not use label/output information.

The Karhunen-Loève expansion allows using label/output information.

Common principal components assumes that the principal components are the same for each class, whereas the variances of these components differ across classes.

Flexible discriminant analysis does a linear projection to a lower-dimensional space in which all features are uncorrelated. This method uses a minimum-distance classifier.

Page 27:

Kernel principal component analysis

PCA can be extended to find nonlinear directions in the data using kernel methods.

Kernel PCA finds the directions of most variance in the feature space instead of the input space.

Linear principal components in the feature space correspond to nonlinear directions in the input space.

Using the kernel trick, all operations can be carried out in terms of the kernel function in the input space without having to transform the data into the feature space.

Let φ correspond to a mapping from the input space to the feature space.

Each point in the feature space is given as the image φ(x) of the point x in the input space.

In the feature space, we can find the first kernel principal component W1 (W1^T W1 = 1) by solving

Σ_φ W1 = λ1 W1

The covariance matrix Σ_φ in the feature space is equal to

Σ_φ = (1/N) ∑_{i=1}^{N} φ(x_i) φ(x_i)^T

We assume that the points are centered.

Page 28:

Kernel principal component analysis (cont.)

Plugging Σ_φ into Σ_φ W1 = λ1 W1, we obtain

( (1/N) ∑_{i=1}^{N} φ(x_i) φ(x_i)^T ) W1 = λ1 W1

(1/N) ∑_{i=1}^{N} φ(x_i) ( φ(x_i)^T W1 ) = λ1 W1

∑_{i=1}^{N} ( φ(x_i)^T W1 / (N λ1) ) φ(x_i) = W1

∑_{i=1}^{N} c_i φ(x_i) = W1

where c_i = φ(x_i)^T W1 / (N λ1) is a scalar value.

Page 29:

Kernel principal component analysis (cont.)

Now substituting ∑_{i=1}^{N} c_i φ(x_i) = W1 into Σ_φ W1 = λ1 W1, we obtain

( (1/N) ∑_{i=1}^{N} φ(x_i) φ(x_i)^T ) ( ∑_{j=1}^{N} c_j φ(x_j) ) = λ1 ∑_{i=1}^{N} c_i φ(x_i)

(1/N) ∑_{i=1}^{N} ∑_{j=1}^{N} c_j φ(x_i) φ(x_i)^T φ(x_j) = λ1 ∑_{i=1}^{N} c_i φ(x_i)

∑_{i=1}^{N} φ(x_i) ( ∑_{j=1}^{N} c_j φ(x_i)^T φ(x_j) ) = N λ1 ∑_{i=1}^{N} c_i φ(x_i)

Replacing φ(x_i)^T φ(x_j) by K(x_i, x_j), we get

∑_{i=1}^{N} φ(x_i) ( ∑_{j=1}^{N} c_j K(x_i, x_j) ) = N λ1 ∑_{i=1}^{N} c_i φ(x_i)

Page 30:

Kernel principal component analysis (cont.)

Multiplying with φ(x_k)^T, we obtain

∑_{i=1}^{N} φ(x_k)^T φ(x_i) ( ∑_{j=1}^{N} c_j K(x_i, x_j) ) = N λ1 ∑_{i=1}^{N} c_i φ(x_k)^T φ(x_i)

∑_{i=1}^{N} K(x_k, x_i) ( ∑_{j=1}^{N} c_j K(x_i, x_j) ) = N λ1 ∑_{i=1}^{N} c_i K(x_k, x_i)

After some algebraic simplification, we obtain (do it)

K² C = N λ1 K C

Multiplying by K^{-1}, we obtain

K C = N λ1 C = η1 C,   where η1 = N λ1.

The weight vector C is the eigenvector corresponding to the largest eigenvalue η1 of the kernel matrix K.

Page 31:

Kernel principal component analysis (cont.)

Replacing ∑_{i=1}^{N} c_i φ(x_i) = W1 in the constraint W1^T W1 = 1, we obtain

∑_{i=1}^{N} ∑_{j=1}^{N} c_i c_j φ(x_i)^T φ(x_j) = 1

C^T K C = 1

Using K C = η1 C, we obtain

C^T (η1 C) = 1
η1 C^T C = 1
‖C‖² = 1/η1

Since C is an eigenvector of K, it will have unit norm.

To ensure that W1 is a unit vector, we therefore multiply C by √(1/η1).

Page 32:

Kernel principal component analysis (cont.)

In general, we do not map the input space to the feature space via φ explicitly; hence we cannot compute W1 using

∑_{i=1}^{N} c_i φ(x_i) = W1

However, we can project any point φ(x) onto the principal direction W1:

W1^T φ(x) = ∑_{i=1}^{N} c_i φ(x_i)^T φ(x) = ∑_{i=1}^{N} c_i K(x_i, x)

When x = xi is one of the input points, we have

a_i = W1^T φ(x_i) = K_i^T C

where K_i is the column vector corresponding to the i-th row of K and a_i is the representation of x_i in the reduced dimension.

This shows that all computations are carried out using only kernel operations.
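The whole procedure fits in a short sketch; here the RBF kernel, its gamma parameter, and the explicit kernel-centering step are assumptions on my part (the slides assume the feature-space points are already centered).

import numpy as np

def kernel_pca(X, k, gamma=1.0):
    # RBF kernel matrix K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2).
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

    # Center the kernel matrix (equivalent to centering phi(x) in feature space).
    N = K.shape[0]
    one = np.ones((N, N)) / N
    K = K - one @ K - K @ one + one @ K @ one

    # Eigenvectors of K are the weight vectors C; the eigenvalues play the role of eta.
    eta, C = np.linalg.eigh(K)
    order = np.argsort(eta)[::-1][:k]
    eta, C = eta[order], C[:, order]

    # Rescale each column so that the corresponding W is a unit vector: ||C||^2 = 1/eta.
    C = C / np.sqrt(eta)

    # Projections of the training points: a_i = K_i^T C, i.e. the rows of K C.
    return K @ C, C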

Page 33:

Factor analysis

In PCA, from the original dimensions x_i (for i = 1, …, D), we form a new set of variables z that are linear combinations of the x_i:

Z = W^T (X − µ)

In factor analysis (FA), we assume that there is a set of unobservable, latent factors z_j (for j = 1, …, k) which, when acting in combination, generate x.

Thus the direction is opposite to that of PCA.

The goal is to characterize the dependency among the observed variables by means of a smaller number of factors.

Suppose there is a group of variables that have high correlation among themselves and low correlation with all the other variables. Then there may be a single underlying factor that gave rise to these variables.

FA, like PCA, is a one-group procedure and is unsupervised. The aim is to model the data in a smaller dimensional space without loss of information.

In FA, this is measured as the correlation between variables.

Page 34:

Factor analysis (cont.)

Page 35:

Factor analysis (cont.)

As in PCA, we have a sample x drawn from some unknown probability density with E[x] = µ and Cov(x) = Σ.

We assume that the factors are unit normals, E[z_j] = 0, Var(z_j) = 1, and are uncorrelated, Cov(z_i, z_j) = 0 for i ≠ j.

To explain what is not explained by the factors, there is an added source for each input, which we denote by ε_i.

It is assumed to be zero-mean, E[ε_i] = 0, and to have some unknown variance, Var(ε_i) = ψ_i.

These specific sources are uncorrelated among themselves, Cov(ε_i, ε_j) = 0 for i ≠ j, and are also uncorrelated with the factors, Cov(ε_i, z_j) = 0 for all i, j.

FA assumes that each input dimension x_i can be written as a weighted sum of the k < D factors z_j plus a residual term:

x_i − µ_i = v_i1 z_1 + v_i2 z_2 + … + v_ik z_k + ε_i

This can be written in vector-matrix form as

x − µ = V z + ε

V is the D × k matrix of weights, called factor loadings.

Page 36:

Factor analysis (cont.)

We assume that µ = 0 and we can always add µ after projection.

Given that Var(z_j) = 1 and Var(ε_i) = ψ_i, we have

Var(x_i) = v_i1² + v_i2² + … + v_ik² + ψ_i

In vector-matrix form, we have

Σ = Var(x) = V V^T + Ψ

There are two uses of factor analysis:

It can be used for knowledge extraction when we find the loadings and try to express the variables using fewer factors.
It can also be used for dimensionality reduction when k < D.

Page 37:

Factor analysis (cont.)

For dimensionality reduction, we need to find the factors z_j from the x_i.

We want to find the loadings wji such that

z_j = ∑_{i=1}^{D} w_ji x_i + ε_i

In vector form, for input x , this can be written as

z = W^T x + ε

For N inputs, the problem reduces to linear regression, and its solution is

W = (X^T X)^{-1} X^T Z

We do not know Z. Multiplying and dividing by N − 1, we obtain (derive it!)

W = (N − 1) (X^T X)^{-1} X^T Z / (N − 1) = ( X^T X / (N − 1) )^{-1} ( X^T Z / (N − 1) ) = S^{-1} V

Z = X W = X S^{-1} V
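A sketch of the factor-score formula Z = X S^{-1} V. The slides do not fix how the loading matrix V is obtained; here it is estimated with the principal-component method (leading eigenvectors of S scaled by the square roots of their eigenvalues), which is one common but not the only choice. The function assumes S is nonsingular (N > D).

import numpy as np

def factor_scores(X, k):
    # Center the data; the slides assume mu has already been subtracted.
    Xc = X - X.mean(axis=0)
    N, D = Xc.shape
    S = Xc.T @ Xc / (N - 1)                    # sample covariance S

    # Principal-component estimate of the D x k loading matrix V (an assumption).
    eigvals, eigvecs = np.linalg.eigh(S)
    order = np.argsort(eigvals)[::-1][:k]
    V = eigvecs[:, order] * np.sqrt(eigvals[order])

    # Factor scores from the slide: Z = X S^{-1} V.
    Z = Xc @ np.linalg.solve(S, V)
    return Z, V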

Page 38:

Linear discriminant analysis

Linear discriminant analysis (LDA) is a supervised method for dimensionality reduction for classification problems.

Consider first the case of two classes, and suppose we take the D-dimensional input vector x and project it down to one dimension using

z = W^T x

If we place a threshold on z and classify z ≥ w0 as class C1, and otherwise as class C2, then we obtain our standard linear classifier.

In general, the projection onto one dimension leads to a considerable loss of information, and classes that are well separated in the original D-dimensional space may become strongly overlapping in one dimension.

However, by adjusting the components of the weight vector W, we can select a projection that maximizes the class separation.

Consider a two-class problem in which there are N1 points of class C1 and N2 points of class C2, so that the mean vectors of the two classes are given by

m_j = (1/N_j) ∑_{i∈C_j} x_i

Page 39:

Linear discriminant analysis (cont.)

The simplest measure of the separation of the classes, when projected onto W, is the separation of the projected class means.

This suggests that we might choose W so as to maximize

m̃_2 − m̃_1 = W^T (m_2 − m_1)

where m̃_j = W^T m_j is the projected class mean.

This expression can be made arbitrarily large simply by increasing the magnitude of W .

To solve this problem, we could constrain W to have unit length, so that ∑_i w_i² = 1.

Using a Lagrange multiplier to perform the constrained maximization, we then find that

W ∝ (m_2 − m_1)

Page 40:

Linear discriminant analysis (cont.)

This approach has a problem: The following figure shows two classes that are well separated in the original two-dimensional space but that have considerable overlap when projected onto the line joining their means.

This difficulty arises from the strongly non-diagonal covariances of the class distributions.

The idea proposed by Fisher is to maximize a function that will give a large separation between the projected class means while also giving a small variance within each class, thereby minimizing the class overlap.

Page 41:

Linear discriminant analysis (cont.)

The idea proposed by Fisher is to maximize a function that will give a large separation between the projected class means while also giving a small variance within each class, thereby minimizing the class overlap.

The projection z = W^T x transforms the set of labeled data points in x into a labeled set in the one-dimensional space z.

The within-class variance of the transformed data from class Cj equals

s_j² = ∑_{i∈C_j} (z_i − m̃_j)²

where z_i = W^T x_i.

We can define the total within-class variance for the whole data set to be s_1² + s_2².

The Fisher criterion is defined to be the ratio of the between-class variance to the within-class variance and is given by

J(W) = (m̃_2 − m̃_1)² / (s_1² + s_2²)

Page 42:

Linear discriminant analysis (cont.)

The between-class covariance matrix equals

S_B = (m_2 − m_1)(m_2 − m_1)^T

The total within-class covariance matrix equals

S_W = ∑_{i∈C_1} (x_i − m_1)(x_i − m_1)^T + ∑_{i∈C_2} (x_i − m_2)(x_i − m_2)^T

J(W ) can be written as

J(W) = (W^T S_B W) / (W^T S_W W)

To maximize J(W), we differentiate with respect to W and obtain

(W^T S_B W) S_W W = (W^T S_W W) S_B W

S_B W is always in the direction of (m_2 − m_1) (Show it!).

We can drop the scalar factors W^T S_B W and W^T S_W W (why?).

Multiplying both sides by S_W^{-1}, we then obtain (Show it!)

W ∝ S_W^{-1} (m_2 − m_1)
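A minimal two-class sketch of Fisher's direction, assuming X1 and X2 hold the samples of classes C1 and C2 as rows (names are mine):

import numpy as np

def fisher_direction(X1, X2):
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Total within-class scatter S_W.
    S_W = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
    # W proportional to S_W^{-1} (m2 - m1); the scale is arbitrary, so normalize.
    W = np.linalg.solve(S_W, m2 - m1)
    return W / np.linalg.norm(W)

# Usage: z = X @ fisher_direction(X1, X2) projects the data onto one dimension.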

Page 43:

Linear discriminant analysis (cont.)

The result W ∝ S_W^{-1} (m_2 − m_1) is known as Fisher's linear discriminant, although strictly it is not a discriminant but rather a specific choice of direction for projection of the data down to one dimension.

The projected data can subsequently be used to construct a discriminant function.

The above idea can be extended to multiple classes (Read section 4.1.6 of Bishop).

How can Fisher's linear discriminant be used for dimensionality reduction? (Show it!)
