TRANSCRIPT

Page 1: Manifold learning: Locally Linear Embedding

Manifold learning: Locally Linear Embedding

Jieping Ye

Department of Computer Science and Engineering

Arizona State University

http://www.public.asu.edu/~jye02

Page 2: Manifold learning: Locally Linear Embedding

Review: MDS

• Metric scaling (classical MDS) – assumes D is the squared distance matrix.

• Non-metric scaling – deals with more general dissimilarity measures.

Metric scaling: -(1/2) P_e D P_e = (X P_e)^T (X P_e) is positive semi-definite, where D_ij = ||x_i - x_j||^2 and X = [x_1, ..., x_n].

Nonmetric scaling: -(1/2) P_e D P_e may not be positive semi-definite.

Solutions: (1) Add a large constant to its diagonal. (2) Find its nearest positive semi-definite matrix by setting all negative eigenvalues to zero.
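The second fix can be sketched in a few lines (a minimal illustration assuming NumPy; `nearest_psd` is a hypothetical helper name, not from the slides):

```python
import numpy as np

def nearest_psd(D):
    """Project a symmetric matrix onto the PSD cone by zeroing negative eigenvalues."""
    vals, vecs = np.linalg.eigh(D)           # eigh: for symmetric matrices
    vals_clipped = np.clip(vals, 0.0, None)  # set all negative eigenvalues to zero
    return vecs @ np.diag(vals_clipped) @ vecs.T

# A symmetric matrix with one negative eigenvalue (eigenvalues 3 and -1)
A = np.array([[1.0, 2.0], [2.0, 1.0]])
B = nearest_psd(A)
print(np.linalg.eigvalsh(B))   # all eigenvalues now >= 0
```

Adding a large constant to the diagonal (fix 1) shifts every eigenvalue up uniformly; the eigen-clipping above instead changes the matrix as little as possible in Frobenius norm.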

Page 3: Manifold learning: Locally Linear Embedding

Review: Isomap

• Construct the neighbourhood graph G.
• For each pair of points in G, compute the shortest-path distances – the geodesic distances.
• Apply classical MDS with the geodesic distances.

[Figure: Euclidean distance vs. geodesic distance along the manifold]
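The three steps above can be sketched as follows (a minimal illustration assuming NumPy; the name `geodesic_distances` and the dense Floyd–Warshall pass are illustrative choices, not the slides' implementation — practical Isomap codes typically run Dijkstra on a sparse graph):

```python
import numpy as np

def geodesic_distances(X, k):
    """Approximate geodesic distances: kNN graph + all-pairs shortest paths."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # Euclidean distances
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        for j in np.argsort(D[i])[1:k + 1]:   # k nearest neighbours of point i
            G[i, j] = G[j, i] = D[i, j]       # symmetrize the neighbourhood graph
    for m in range(n):                        # Floyd-Warshall shortest paths
        G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])
    return G

# Points along a straight line: geodesic and Euclidean distances coincide
X = np.array([[0.0], [1.0], [2.0], [3.0]])
G = geodesic_distances(X, k=2)
print(G[0, 3])   # -> 3.0
```

On curved data the graph distances instead follow the manifold, which is exactly what lets MDS recover the unrolled coordinates.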

Page 4: Manifold learning: Locally Linear Embedding

Review: Isomap

• Isomap is still a global approach
– All pairwise distances are considered

• The resulting distance matrix is dense
– Isomap does not scale to large datasets
– Landmark Isomap was proposed to overcome this problem

• LLE – a local approach
– The resulting matrix is sparse

• Apply efficient sparse matrix solvers

Page 5: Manifold learning: Locally Linear Embedding

Outline of lecture

• Local Linear Embedding (LLE)
– Intuition
– Least squares problem
– Eigenvalue problem

• Summary

Page 6: Manifold learning: Locally Linear Embedding

LLE

• Nonlinear Dimensionality Reduction by Locally Linear Embedding
– Roweis and Saul
– Science
– http://www.sciencemag.org/cgi/content/full/290/5500/2323

Page 7: Manifold learning: Locally Linear Embedding

Characteristics of a Manifold

[Figure: a manifold M in R^n; a point z on M near x maps to the coordinate x = (x1, x2) in R^2]

x: coordinate for z

Locally it is a linear patch.

Key: how to combine all local patches together?

Page 8: Manifold learning: Locally Linear Embedding

LLE: Intuition

• Assumption: the manifold is approximately "linear" when viewed locally, that is, in a small neighborhood
– The approximation error, e(W), can be made small

• The local neighborhood is enforced by the constraint W_ij = 0 if z_j is not a neighbor of z_i

• A good projection should preserve this local geometric property as much as possible

Page 9: Manifold learning: Locally Linear Embedding

LLE: Intuition

We expect each data point and its neighbors to lie on or close to a locally linear patch of the manifold.

Each point can be written as a linear combination of its neighbors. The weights are chosen to minimize the reconstruction error.

Page 10: Manifold learning: Locally Linear Embedding

LLE: Intuition

• The weights that minimize the reconstruction errors are invariant to rotation, rescaling, and translation of the data points.
– Invariance to translation is enforced by adding the constraint that the weights sum to one.
– The weights characterize the intrinsic geometric properties of each neighborhood.

• The same weights that reconstruct the data points in D dimensions should reconstruct them on the manifold in d dimensions.
– Local geometry is preserved

Page 11: Manifold learning: Locally Linear Embedding

LLE: Intuition

Low-dimensional embedding: minimize Phi(Y) = sum_i || y_i - sum_j W_ij y_j ||^2, using the same weights W_ij computed in the original space (the i-th row of W holds the weights for point i).

Page 12: Manifold learning: Locally Linear Embedding

Local Linear Embedding (LLE)

• Assumption: manifold is approximately “linear” when viewed locally, that is, in a small neighborhood

• The approximation error, e(W), can be made small

• Meaning of W: a linear representation of every data point by its neighbors– This is an intrinsic geometrical property of the manifold

• A good projection should preserve this geometric property as much as possible

Page 13: Manifold learning: Locally Linear Embedding

Constrained Least Squares Problem

Compute the optimal weights for each point individually:

min_w || x - sum_j w_j eta_j ||^2   subject to   sum_j w_j = 1

where the eta_j are the neighbors of x, and the weights are zero for all non-neighbors of x.
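This constrained problem has a closed-form solution via the local Gram matrix: solve C w = 1, then rescale so the weights sum to one. A minimal sketch (assuming NumPy; the name `lle_weights` and the trace-scaled regularizer `reg` are illustrative additions — the regularizer is a common trick when k exceeds the input dimension, not something stated on the slides):

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Reconstruction weights for one point: minimize ||x - sum_j w_j eta_j||^2
    subject to sum_j w_j = 1, solved via the local Gram matrix."""
    Z = neighbors - x                          # shift neighbors so x is the origin
    C = Z @ Z.T                                # local Gram (covariance) matrix
    C += reg * np.trace(C) * np.eye(len(C))    # regularize a possibly singular C
    w = np.linalg.solve(C, np.ones(len(C)))    # solve C w = 1
    return w / w.sum()                         # enforce the sum-to-one constraint

# The midpoint of two neighbors is an equal blend of both
w = lle_weights(np.array([0.5, 0.0]), np.array([[0.0, 0.0], [1.0, 0.0]]))
print(w)   # approximately [0.5, 0.5]
```

The sum-to-one rescaling at the end is what makes the weights translation-invariant, as noted on the earlier intuition slide.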

Page 14: Manifold learning: Locally Linear Embedding

Finding a Map to a Lower Dimensional Space

• Y_i in R^d: the projected vector for X_i

• The geometric property is best preserved if the error below is small (using the same weights computed above):

Phi(Y) = sum_i || Y_i - sum_j W_ij Y_j ||^2

• Y is given by the eigenvectors of the lowest d non-zero eigenvalues of the matrix M = (I - W)^T (I - W).

Page 15: Manifold learning: Locally Linear Embedding

Eigenvalue problem

Phi(Y) = sum_{i,j} M_ij (y_i . y_j) = trace(Y M Y^T), where Y = [y_1, y_2, ..., y_n].

Let U = [u_1, u_2, ..., u_d] be the bottom d eigenvectors of M; then Y = U^T.

Page 16: Manifold learning: Locally Linear Embedding

A simpler approach

The following is a more direct and simpler derivation for Y:

Phi(Y) = sum_i || y_i - sum_j W_ij y_j ||^2
       = || Y - Y W^T ||_F^2
       = || Y (I - W)^T ||_F^2
       = trace( Y (I - W)^T (I - W) Y^T )
       = trace( Y M Y^T ),

where Y = [y_1, y_2, ..., y_n] and M = (I - W)^T (I - W).

Page 17: Manifold learning: Locally Linear Embedding

Eigenvalue problem

Add the following constraint (centering the embedding): sum_i Y_i = 0.

The optimal d-dimensional embedding is given by the bottom 2nd to (d+1)-th eigenvectors of the matrix M = (I - W)^T (I - W). (Note that 0 is its smallest eigenvalue, with the constant vector as its eigenvector.)
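A sketch of this eigenvector step (assuming NumPy; `lle_embedding` is a hypothetical helper, and the dense `eigh` call stands in for the sparse solvers mentioned earlier):

```python
import numpy as np

def lle_embedding(W, d):
    """Embed via the eigenvectors of M = (I-W)^T (I-W) for the 2nd ... (d+1)-th
    smallest eigenvalues; the constant eigenvector (eigenvalue 0) is discarded."""
    n = W.shape[0]
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    return vecs[:, 1:d + 1]          # skip the bottom (constant) eigenvector

# Any W whose rows sum to one gives M the constant null vector,
# so the returned coordinates are automatically centered.
W = (np.ones((5, 5)) - np.eye(5)) / 4   # toy weight matrix, rows sum to 1
Y = lle_embedding(W, 2)
print(Y.shape)   # (5, 2)
```

Because the remaining eigenvectors are orthogonal to the constant one, each embedding coordinate sums to zero — the centering constraint comes for free.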

Page 18: Manifold learning: Locally Linear Embedding

The LLE algorithm
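Putting the pieces together, a minimal end-to-end sketch (assuming NumPy; the function name, hyper-parameter defaults, and regularizer are illustrative choices, not the authors' code):

```python
import numpy as np

def lle(X, k=10, d=2, reg=1e-3):
    """Minimal LLE sketch: (1) find k nearest neighbours, (2) solve for
    reconstruction weights, (3) embed via the bottom eigenvectors of
    M = (I-W)^T (I-W), discarding the constant eigenvector."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]          # step 1: neighbours of x_i
        Z = X[nbrs] - X[i]
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(k)        # step 2: regularized Gram matrix
        w = np.linalg.solve(C, np.ones(k))
        W[i, nbrs] = w / w.sum()                  # weights sum to one
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)                # step 3: eigenvalue problem
    return vecs[:, 1:d + 1]

# Example: a 1-D curve embedded in R^3, unrolled to one coordinate
t = np.linspace(0, 3, 60)
X = np.c_[np.cos(t), np.sin(t), t]
Y = lle(X, k=6, d=1)
print(Y.shape)   # (60, 1)
```

Note the dense `eigh` here is only for clarity; since W has at most k non-zeros per row, M is sparse, and large problems would use a sparse eigensolver as the Isomap-comparison slide suggests.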

Page 19: Manifold learning: Locally Linear Embedding

Examples

Images of faces mapped into the embedding space described by the first two coordinates of LLE. Representative faces are shown next to circled points. The bottom images correspond to points along the top-right path (linked by solid line) illustrating one particular mode of variability in pose and expression.

Page 20: Manifold learning: Locally Linear Embedding

Some Limitations of LLE

• LLE requires dense data points on the manifold for good estimation

• A good neighborhood seems essential to its success
– How to choose k?
– Too few neighbors
• Results in a rank-deficient tangent space and leads to over-fitting
– Too many neighbors
• The tangent space will not match the local geometry well

Page 21: Manifold learning: Locally Linear Embedding

Experiment on LLE

Page 22: Manifold learning: Locally Linear Embedding

Applications

• Application of Locally Linear Embedding to the Spaces of Social Interactions and Signed Distance Functions

• Hyperspectral Image Processing Using Locally Linear Embedding

• Feature Dimension Reduction For Microarray Data Analysis Using Locally Linear Embedding

• Locally linear embedding algorithm. Extensions and applications– http://herkules.oulu.fi/isbn9514280415/isbn9514280415.pdf

Page 23: Manifold learning: Locally Linear Embedding

Next class

• Topics– Laplacian Eigenmaps

• Readings
– Laplacian Eigenmaps for Dimensionality Reduction and Data Representation – Belkin and Niyogi
• http://math.uchicago.edu/%7Emisha/paper1.ps