
Page 1: SOM U*C Clustering, September 2011

Clustering with Self-Organizing Maps

Vahid Moosavi

Supervisor: Prof. Ludger Hovestadt

September 2011

Page 2

Outline

• SOM Clustering Approaches

• U*C clustering (1)

– Basic Definitions

– The Algorithm

– Results

(1) Alfred Ultsch: U*C: Self-organized Clustering with Emergent Feature Maps. LWA 2005: 240-244.

Page 3

The Learning Algorithm

• Competition
• Cooperation and Adaptation
• Representation
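The phases above can be sketched in code. The following is a minimal illustrative SOM trainer, not the exact algorithm from the slides; the parameter names and the linear learning-rate and neighborhood schedules are my own assumptions. Competition selects the best-matching unit (BMU), cooperation defines a shrinking Gaussian neighborhood around it, and adaptation pulls the neighborhood's weight vectors toward the input.

```python
import numpy as np

def train_som(data, grid_w=10, grid_h=10, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM training loop (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    # codebook: one weight vector per grid node
    weights = rng.random((grid_h, grid_w, dim))
    # grid coordinates of every node, used by the neighborhood function
    gy, gx = np.mgrid[0:grid_h, 0:grid_w]
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # competition: the BMU minimizes the distance to the input x
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)
            # cooperation: Gaussian neighborhood around the BMU,
            # shrinking (with the learning rate) over time
            frac = t / n_steps
            sigma = sigma0 * (1.0 - frac) + 0.5
            lr = lr0 * (1.0 - frac) + 0.01
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            # adaptation: pull the neighborhood's weights toward x
            weights += lr * h[:, :, None] * (x - weights)
            t += 1
    return weights
```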

Page 4

SOM Clustering

• One-stage clustering: for maps with a small number of nodes, each node is representative of one cluster.

• Two-stage clustering (for large maps) (1):
  – First train the SOM.
  – Then apply any clustering algorithm on the nodes instead of the original data:
    • Partitional clustering algorithms
    • Hierarchical clustering algorithms

• U*C Clustering Algorithm (2)

(1) J. Vesanto and E. Alhoniemi: Clustering of the Self-Organizing Map. IEEE Transactions on Neural Networks, Vol. 11, No. 3, May 2000.
(2) Alfred Ultsch: U*C: Self-organized Clustering with Emergent Feature Maps. LWA 2005: 240-244.
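A hedged sketch of the two-stage idea: run a clustering algorithm on the SOM codebook vectors rather than on the (much larger) original data, then let each data point inherit the cluster of its best-matching node. The plain k-means below is my own minimal implementation for illustration, not the method of reference (1).

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means, used here as the second stage on the codebook."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):  # guard against empty clusters
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

def two_stage_cluster(codebook, data, k):
    """Cluster the SOM nodes, then assign each data point the cluster
    of its best-matching node."""
    flat = codebook.reshape(-1, codebook.shape[-1])
    _, node_labels = kmeans(flat, k)
    bmu = np.argmin(np.linalg.norm(data[:, None] - flat[None], axis=2), axis=1)
    return node_labels[bmu]
```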

Page 5

Weaknesses of Existing Clustering Algorithms

• Weaknesses of other algorithms (e.g. k-means, GK, hierarchical clustering):
  – The number of clusters must be known in advance.
  – They rely on geometrical assumptions (Euclidean distance, ellipsoidal or spherical cluster shapes, …).
  – …

• U*C clustering addresses all of the issues mentioned above.

Page 6

U*C Clustering (Basic Definitions)

• Component Planes

• U Matrix (1990)

• P Matrix (2003)

• U* Matrix

Page 7

Presentation and Visualization (Component Plane)

Page 8

Presentation and Visualization: U-Matrix (1990)

Neuron i: n_i

Neighborhood neurons of n_i: N(i)

Definition of the U-Matrix: the U-height of n_i is u(i) = Σ_{j ∈ N(i)} d(w_i, w_j), the summed distance between the weight vector w_i and the weight vectors of its grid neighbors.
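The definition can be sketched as a small function over the codebook of a trained SOM. Averaging over a 4-neighborhood is an assumption made here for simplicity; variants sum over, or include, diagonal neighbors.

```python
import numpy as np

def u_matrix(weights):
    """U-height of each node: average distance between its weight vector
    and those of its grid neighbors (4-neighborhood sketch)."""
    h, w, _ = weights.shape
    u = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            dists = []
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    dists.append(np.linalg.norm(weights[i, j] - weights[ni, nj]))
            u[i, j] = np.mean(dists)
    return u
```

On a codebook with two flat regions, the nodes along the region boundary get the large U-heights, which is exactly the "wall" the U-Matrix displays.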

Page 9

Presentation and Visualization: U-Matrix (1990)

A display of all U-heights on top of the grid is called a U-Matrix (Ultsch, 1990).

The U-Matrix visually reveals the hidden clusters in the data set.

Page 10

Presentation and Visualization: U-Matrix (1990)

[Figure: the original data set and its U-Matrix; watersheds mark cluster borders, basins mark clusters.]

Page 11

Presentation and Visualization: P-Matrix (2003)

• In some cases the U-Matrix is not enough; in addition to distances, we use a measure of density.
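A hedged sketch of a P-height: count the data points that fall within a fixed radius of each node's weight vector, giving an empirical density per node. Passing the radius in as a parameter is a simplification made here; Ultsch derives a data-driven "Pareto radius".

```python
import numpy as np

def p_matrix(weights, data, radius):
    """P-height of each node: number of data points inside a ball of
    the given radius around the node's weight vector."""
    h, w, _ = weights.shape
    flat = weights.reshape(h * w, -1)
    # pairwise distances between all data points and all nodes
    d = np.linalg.norm(data[:, None, :] - flat[None, :, :], axis=2)
    return (d <= radius).sum(axis=0).reshape(h, w)
```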

Page 12

Presentation and Visualization: P-Matrix (2003)

Page 13

Presentation and Visualization: U*-Matrix

• As the TwoDiamonds data set shows, a combination of distance relationships and density relationships is necessary for an appropriate clustering. The combination of a U-Matrix and a P-Matrix is called the U*-Matrix.

• The main idea: the U*-Matrix exhibits the local data distances as heights when the data density is low (cluster border). If the data density is high, the distances are scaled down to zero (cluster center).
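The scaling idea can be sketched as follows. The linear factor used here (1 at average density, falling to 0 at maximum density) is one plausible reading of the description above, not necessarily the exact formula from Ultsch's paper.

```python
import numpy as np

def u_star_matrix(u, p):
    """Scale U-heights down where density (P-height) is high, so
    distances inside dense cluster centers vanish while low-density
    borders keep their full U-heights (illustrative sketch)."""
    mean_p, max_p = p.mean(), p.max()
    if max_p == mean_p:
        return u.copy()
    # factor is 1 at mean density, 0 at maximum density, clipped to [0, 1]
    factor = np.clip((max_p - p) / (max_p - mean_p), 0.0, 1.0)
    return u * factor
```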

Page 14

Presentation and Visualization: U*-Matrix

Page 15

U*C Clustering (Main Ideas)

First main idea: in the U-Matrix, the U-height in the center of a cluster is smaller than the U-height on the border of the cluster.

Second main idea: in the P-Matrix, the P-height in the center of a cluster is larger than the P-height on the border of a cluster. At cluster borders, the local density of the points should decrease substantially.

Page 16

U*C Clustering (Main Ideas)

A movement from one position n_i to another position n_j such that w_j lies more within a cluster C than w_i is called immersive.

• Sometimes immersion can be found on the U-Matrix (based on a gradient descent method).
• Sometimes immersion can be found on the P-Matrix (based on a gradient ascent method).

Then:
1. Gradient descent on the U-Matrix: start from a node n and move through its neighborhood toward the minimum U-height (distance), reaching a point U (probably a node within a cluster).
2. Gradient ascent on the P-Matrix: start from point U and move through its neighborhood toward the maximum P-height, reaching an immersion point (probably the center of a cluster).
3. Calculate the watersheds on the U*-Matrix with any existing watershed algorithm.
4. Partition the immersion points using these watersheds into cluster centers C1, …, Cc.
5. Assign each data point to a cluster based on the immersion point of its corresponding SOM unit.
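A simplified sketch of steps 1, 2, and 5 (the watershed refinement of steps 3-4 is omitted, and all function names are my own): every node walks downhill on the U-Matrix, then uphill on the P-Matrix; nodes whose walks end at the same immersion point are put into the same cluster.

```python
import numpy as np

def _walk(grid, i, j, ascend):
    """Follow the steepest 8-neighbor until a local extremum is reached."""
    h, w = grid.shape
    sign = 1.0 if ascend else -1.0
    while True:
        cand = [(i, j)]
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    cand.append((ni, nj))
        best = max(cand, key=lambda c: sign * grid[c])
        if best == (i, j):  # local extremum: stop
            return i, j
        i, j = best

def uc_clusters(u, p):
    """Gradient descent on U, then gradient ascent on P; nodes sharing
    an immersion point share a cluster label (simplified U*C sketch)."""
    h, w = u.shape
    immersion = {}
    labels = np.empty((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            li, lj = _walk(u, i, j, ascend=False)   # step 1: descend on U
            mi, mj = _walk(p, li, lj, ascend=True)  # step 2: ascend on P
            labels[i, j] = immersion.setdefault((mi, mj), len(immersion))
    return labels
```

On a toy 1-row grid with a U-height wall in the middle and two density peaks at the ends, the nodes on either side of the wall receive two different labels, as the main ideas above predict.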

Page 17

U*C Clustering (2005)

Page 18

U*C Clustering (2005): Some Experimental Results

Page 19

U*C Clustering (2005): Some Experimental Results

Page 20

U*C Clustering (2005): Some Experimental Results

Page 21

Conclusion

• SOM can transform high-dimensional data sets into a two-dimensional representation; then, just by analyzing the distances and densities of the transformed data, we can find the natural clusters hidden in the original data sets.

High-dimensional data set → SOM modeling → two-dimensional representation → U-Matrix, P-Matrix, … → clustered data sets → classification and prediction for future experiments

Page 22

Conclusion

• An alternative way:

High-dimensional data set → SOM modeling → two-dimensional representation → U-Matrix, P-Matrix, … → clustered data sets → feature selection and extraction → transformed (reduced) data set → classification and prediction for future experiments

Page 23

THANKS
