
CS 2750: Machine Learning - Dimensionality Reduction
Prof. Adriana Kovashka, University of Pittsburgh
January 27, 2016



Complexity comparison of clustering methods
In the number of data points n:
- K-means: O(n) (more precisely, O(nkd) per iteration for k clusters in d dimensions)
- Mean shift: O(n^2)
- Hierarchical agglomerative clustering: O(n^2) / O(n^2 log n)
- Normalized cuts: O(n^3) / O(n)
Reference: https://webdocs.cs.ualberta.ca/~dale/papers/cvpr09.pdf

Mean shift vs. K-means
Statement last class: "Mean shift can be made equivalent to K-means."
If you have a proof for that, send it to me for extra credit.

Plan for today
- Dimensionality reduction motivation
- Principal Component Analysis (PCA)
- Applications of PCA
- Other methods for dimensionality reduction

Why reduce dimensionality?
- Data may intrinsically live in a lower-dimensional space
- Too many features and too few data points
- Lower computational expense (memory, train/test time)
- Want to visualize the data in a lower-dimensional space
- Want to use data of different dimensionality

Goal
Input: data in a high-dimensional feature space.
Output: a projection of the same data into a lower-dimensional space.
F: high-dim X -> low-dim X
Slide credit: Erik Sudderth

Some criteria for success
- Low reconstruction error
- High variance of the projected data
(Derivation on the board.)

Principal Components Analysis
(Derivation slides; slide credit: Subhransu Maji.)

Lagrange multipliers
Goal: maximize f(x) subject to g(x) = 0.
Formulate the Lagrangian and take the derivative with respect to x:
L(x, lambda) = f(x) + lambda * g(x)
Additional info: Bishop, Appendix E, and David Barber's textbook.
Slide credit: Subhransu Maji

Demo: A_demo.m; demo with eigenfaces.

Implementation issue
The covariance matrix is huge (D^2 entries for D pixels), but typically the number of examples N << D, so we can instead find the eigenvectors of the much smaller N x N matrix formed from the centered data.
How many components to keep? Look at the variance preserved by the i-th eigenvalue (Figure 12.4(a) from Bishop).

Plan for today (recap) - next up: applications of PCA.

Application: face recognition
(Image from cnet.com.)
Face recognition: once you've detected and cropped a face, try to recognize it (detection -> recognition, e.g. "Sally").
Slide credit: Lana Lazebnik

Typical face recognition scenarios
- Verification: a person is claiming a particular identity; verify whether that is true (e.g., security).
- Closed-world identification: assign a face to one person from among a known set.
- General identification: assign a face to a known person or to "unknown".
Slide credit: Derek Hoiem

Simple idea for face recognition
1. Treat the face image as a vector of intensities.
2. Recognize the face by nearest neighbor in the database (copy the identity of the image with minimum distance).
Slide credit: Derek Hoiem

The space of all face images
- When viewed as vectors of pixel values, face images are extremely high-dimensional: even a 24x24 image has 576 dimensions.
- That is slow and takes lots of storage.
- But very few 576-dimensional vectors are valid face images.
- We want to effectively model the subspace of face images.
Adapted from Derek Hoiem; slide credit: Alexander Ihler

Eigenfaces (PCA on face images)
1. Compute the principal components ("eigenfaces") of the covariance matrix.
2. Keep the K eigenvectors with the largest eigenvalues.
3. Represent all face images in the dataset as linear combinations of eigenfaces.
(A code sketch of these steps follows below.)
M. Turk and A. Pentland, "Face Recognition using Eigenfaces", CVPR 1991.
Adapted from D. Hoiem
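The eigenfaces recipe above can be sketched in a few lines of NumPy. This is a minimal illustration of steps 1-3 (it is not the course's A_demo.m); it assumes the training faces are stacked as rows of an N x D matrix and uses the small N x N matrix trick from the implementation-issue slide.

    import numpy as np

    def eigenfaces(X, k):
        """Top-k eigenfaces from an N x D matrix X (one flattened face per row).
        Uses the N x N trick, since typically the number of faces N << D."""
        mean_face = X.mean(axis=0)
        A = X - mean_face                        # centered data, N x D
        small = A @ A.T                          # N x N matrix instead of D x D covariance
        eigvals, eigvecs = np.linalg.eigh(small)
        top = np.argsort(eigvals)[::-1][:k]      # indices of the k largest eigenvalues
        U = A.T @ eigvecs[:, top]                # map back to D-dimensional eigenvectors
        U /= np.linalg.norm(U, axis=0)           # normalize each eigenface (column)
        return mean_face, U

    def project(x, mean_face, U):
        """Coordinates (w_1, ..., w_k) of a face x in the eigenface subspace."""
        return U.T @ (x - mean_face)

Given the coordinates w, a face can be reconstructed as mean_face + U @ w, which is exactly the reconstruction shown on the next slide.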
Representation and reconstruction
Face x in face-space coordinates: w_i = u_i^T (x - mean), for i = 1, ..., K.
Reconstruction: x_hat = mean + w_1 u_1 + w_2 u_2 + w_3 u_3 + w_4 u_4 + ...
Slide credit: Derek Hoiem

Recognition with eigenfaces
Process the labeled training images:
- Find the mean mu and covariance matrix Sigma.
- Find the k principal components (eigenvectors of Sigma): u_1, ..., u_k.
- Project each training image x_i onto the subspace spanned by the principal components: (w_i1, ..., w_ik) = (u_1^T x_i, ..., u_k^T x_i).
Given a novel image x:
- Project onto the subspace: (w_1, ..., w_k) = (u_1^T x, ..., u_k^T x).
- Classify as the closest training face in the k-dimensional subspace.
M. Turk and A. Pentland, "Face Recognition using Eigenfaces", CVPR 1991.
Adapted from Derek Hoiem. WARNING: SUPERVISED

Face recognition by humans: 20 results (2005); slides by Jianchao Yang.

Digits example
Figure 12.5 from Bishop. Slide credit: Alexander Ihler

Plan for today (recap) - next up: other methods for dimensionality reduction.

PCA summary
- A general dimensionality reduction technique.
- Preserves most of the variance with a much more compact representation.
- Lower storage requirements (eigenvectors plus a few numbers per face).
- Faster matching.
- What are some problems?
Slide credit: Derek Hoiem

Limitations
The direction of maximum variance is not always good for classification. PCA preserves maximum variance; a more discriminative subspace is given by Fisher Linear Discriminants ("Fisherfaces").
Slide credit: Derek Hoiem. WARNING: SUPERVISED

Fisher faces
FLD preserves discrimination: find the projection that maximizes scatter between classes and minimizes scatter within classes.
Reference: "Eigenfaces vs. Fisherfaces", Belhumeur et al., PAMI 1997.
Adapted from Derek Hoiem. WARNING: SUPERVISED

Illustration of the projection
Using two classes as an example: a poor projection mixes the two classes, while a good (FLD) projection separates them.
Slide credit: Derek Hoiem. WARNING: SUPERVISED

Comparing with PCA
Slide credit: Derek Hoiem. WARNING: SUPERVISED

Other dimensionality reduction methods
Non-linear methods:
- Kernel PCA (Schölkopf et al., Neural Computation 1998)
- Independent component analysis (Comon, Signal Processing 1994)
- LLE (locally linear embedding) (Roweis and Saul, Science 2000)
- ISOMAP (isometric feature mapping) (Tenenbaum et al., Science 2000)
- t-SNE (t-distributed stochastic neighbor embedding) (van der Maaten and Hinton, JMLR 2008)

Kernel PCA
Assume zero-mean data. The data are transformed via phi(x), so the covariance matrix C becomes:
C = (1/N) sum_n phi(x_n) phi(x_n)^T
The eigenvalue problem becomes:
C v_i = lambda_i v_i
The projection vector becomes:
v_i = sum_n a_in phi(x_n)
Figure from Bishop.

ISOMAP examples
Figures from Carlotta Domeniconi.

t-SNE examples
Figure from Genevieve Patterson, IJCV 2014; Thomas and Kovashka, in submission.

Feature selection (task-dependent)
- Filtering approaches: pick features that on their own can classify well (e.g., by how well they separate the classes).
- Wrapper approaches: greedily add the features that most increase classification accuracy (see the sketch after this list).
- Embedded methods: joint learning and selection (e.g., in SVMs).
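As one concrete instance of the wrapper approach listed above, here is a minimal greedy forward-selection sketch. The choice of logistic regression and 5-fold cross-validated accuracy as the scoring criterion is an assumption for illustration; the slides do not specify a particular classifier or score.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def greedy_forward_selection(X, y, max_features):
        """Wrapper-style selection: repeatedly add the single feature that most
        improves cross-validated accuracy, stopping when no feature helps."""
        selected, remaining = [], list(range(X.shape[1]))
        best_score = -np.inf
        clf = LogisticRegression(max_iter=1000)   # assumed classifier, not from the slides
        while remaining and len(selected) < max_features:
            scores = {j: cross_val_score(clf, X[:, selected + [j]], y, cv=5).mean()
                      for j in remaining}
            j_best = max(scores, key=scores.get)
            if scores[j_best] <= best_score:      # no remaining feature improves accuracy
                break
            best_score = scores[j_best]
            selected.append(j_best)
            remaining.remove(j_best)
        return selected

A filtering approach would instead rank features by a per-feature criterion (e.g., how well each one separates the classes) without retraining a classifier, which is cheaper but ignores interactions between features.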