
Page 1: Cluster Analysis (1)

Cluster Analysis (1)

Page 2: Cluster Analysis (1)

What is Cluster Analysis?

• Finding groups of objects such that the objects in a group will be similar (or related) to one another and different from (or unrelated to) the objects in other groups
  – Inter-cluster distances are maximized
  – Intra-cluster distances are minimized

Page 3: Cluster Analysis (1)

Applications of Cluster Analysis

• Clustering for Understanding
  – Group related documents for browsing
  – Group genes and proteins that have similar functionality
  – Group stocks with similar price fluctuations
  – Segment customers into a small number of groups for additional analysis and marketing activities
• Clustering for Summarization
  – Reduce the size of large data sets

Discovered Clusters and Industry Groups

1. Technology1-DOWN: Applied-Matl-DOWN, Bay-Network-Down, 3-COM-DOWN, Cabletron-Sys-DOWN, CISCO-DOWN, HP-DOWN, DSC-Comm-DOWN, INTEL-DOWN, LSI-Logic-DOWN, Micron-Tech-DOWN, Texas-Inst-Down, Tellabs-Inc-Down, Natl-Semiconduct-DOWN, Oracl-DOWN, SGI-DOWN, Sun-DOWN
2. Technology2-DOWN: Apple-Comp-DOWN, Autodesk-DOWN, DEC-DOWN, ADV-Micro-Device-DOWN, Andrew-Corp-DOWN, Computer-Assoc-DOWN, Circuit-City-DOWN, Compaq-DOWN, EMC-Corp-DOWN, Gen-Inst-DOWN, Motorola-DOWN, Microsoft-DOWN, Scientific-Atl-DOWN
3. Financial-DOWN: Fannie-Mae-DOWN, Fed-Home-Loan-DOWN, MBNA-Corp-DOWN, Morgan-Stanley-DOWN
4. Oil-UP: Baker-Hughes-UP, Dresser-Inds-UP, Halliburton-HLD-UP, Louisiana-Land-UP, Phillips-Petro-UP, Unocal-UP, Schlumberger-UP

[Figure: Clustering precipitation in Australia]

Page 4: Cluster Analysis (1)

Similarity and Dissimilarity

• Similarity
  – Numerical measure of how alike two data objects are
  – Higher when objects are more alike
  – Can be transformed to fall in the interval [0,1] by:

        s' = (s – min_s) / (max_s – min_s)

• Dissimilarity
  – Numerical measure of how different two data objects are
  – Lower when objects are more alike
  – Minimum dissimilarity is often 0
  – Can be transformed to fall in the interval [0,1] by:

        d' = (d – min_d) / (max_d – min_d)

• Proximity measures for objects with several attributes are defined by combining the proximities of the individual attributes.
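As a quick illustration of the [0,1] rescaling above, here is a minimal Python sketch (the function name and data are illustrative, not from the slides):

    def rescale(values):
        """Map raw similarity/dissimilarity scores onto [0, 1]."""
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) for v in values]

    print(rescale([2, 5, 8, 11]))  # [0.0, 0.333..., 0.666..., 1.0]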

Page 5: Cluster Analysis (1)

Similarity/Dissimilarity for Simple Attributes

p and q are the attribute values for two data objects.

• Nominal
  – E.g. the province attribute of an address, with values {BC, AB, ON, QC, …}; order is not important.
  – Dissimilarity: d = 0 if p = q, d = 1 if p ≠ q
  – Similarity:    s = 1 if p = q, s = 0 if p ≠ q

Page 6: Cluster Analysis (1)

Similarity/Dissimilarity for Simple Attributes

p and q are the attribute values for two data objects.

• Ordinal
  – E.g. the quality attribute of a product, with values {poor, fair, OK, good, wonderful}; order is important, but the exact difference between values is undefined or not important.
  – Map the values of the attribute to successive integers: {poor = 0, fair = 1, OK = 2, good = 3, wonderful = 4}
  – Dissimilarity: d(p, q) = |p – q| / (max_d – min_d), e.g. d(wonderful, fair) = |4 – 1| / (4 – 0) = 0.75
  – Similarity:    s(p, q) = 1 – d(p, q),               e.g. s(wonderful, fair) = 0.25

Page 7: Cluster Analysis (1)

Similarity/Dissimilarity for Simple Attributes

p and q are the attribute values for two data objects.

• Continuous (or Interval)
  – E.g. the weight attribute of a product
  – Dissimilarity: d(p, q) = |p – q|
  – Similarity:    s(p, q) = –d(p, q)
  – Of course, these can be transformed to the [0, 1] scale.
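A minimal Python sketch of the per-attribute proximities from the last three slides (function names and the ordinal scale are illustrative, not from the slides):

    QUALITY = {"poor": 0, "fair": 1, "OK": 2, "good": 3, "wonderful": 4}

    def nominal_dissim(p, q):
        # Nominal: 0 if the values match, 1 otherwise.
        return 0 if p == q else 1

    def ordinal_dissim(p, q, scale=QUALITY):
        # Ordinal: map values to integers, then normalize by the value range.
        lo, hi = min(scale.values()), max(scale.values())
        return abs(scale[p] - scale[q]) / (hi - lo)

    def interval_dissim(p, q):
        # Continuous/interval: plain absolute difference.
        return abs(p - q)

    print(nominal_dissim("BC", "ON"))           # 1
    print(ordinal_dissim("wonderful", "fair"))  # 0.75
    print(interval_dissim(2.5, 4.0))            # 1.5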

Page 8: Cluster Analysis (1)

Combining Similarities

• Sometimes attributes are of many different types, but an overall similarity/dissimilarity is needed.
• For the k-th attribute, compute a similarity s_k in the range [0, 1].
• Then combine the per-attribute similarities, e.g. as their average:

    similarity(p, q) = (1/n) * Σ_{k=1..n} s_k

• A similar formula is used for dissimilarity (see the sketch below).
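A minimal Python sketch of combining per-attribute scores into one overall score, as a plain average per the formula above (function names are illustrative):

    def combined_similarity(per_attribute_sims):
        """per_attribute_sims: list of similarities s_k, each already in [0, 1]."""
        return sum(per_attribute_sims) / len(per_attribute_sims)

    def combined_dissimilarity(per_attribute_dissims):
        """Same idea for dissimilarities d_k in [0, 1]."""
        return sum(per_attribute_dissims) / len(per_attribute_dissims)

    print(combined_similarity([1.0, 0.25, 0.8]))  # 0.683...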

Page 9: Cluster Analysis (1)

Euclidean Distance

• When all the attributes are continuous, we can use the Euclidean distance:

    dist(p, q) = sqrt( Σ_{k=1..n} (p_k – q_k)^2 )

  where n is the number of dimensions (attributes) and p_k and q_k are, respectively, the k-th attributes (components) of data objects p and q.
• Standardization is necessary if scales differ
  – E.g. weight and salary have different scales

Page 10: Cluster Analysis (1)

Euclidean Distance

[Figure: four points p1–p4 plotted in the x–y plane]

point   x   y
p1      0   2
p2      2   0
p3      3   1
p4      5   1

Distance Matrix

        p1      p2      p3      p4
p1      0       2.828   3.162   5.099
p2      2.828   0       1.414   3.162
p3      3.162   1.414   0       2
p4      5.099   3.162   2       0
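A minimal NumPy sketch that reproduces the distance matrix above (variable names are illustrative):

    import numpy as np

    points = np.array([[0, 2],   # p1
                       [2, 0],   # p2
                       [3, 1],   # p3
                       [5, 1]])  # p4

    # Pairwise Euclidean distances ||p_i - p_j|| for all i, j.
    diff = points[:, None, :] - points[None, :, :]
    dist_matrix = np.sqrt((diff ** 2).sum(axis=-1))
    print(np.round(dist_matrix, 3))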

Page 11: Cluster Analysis (1)

Minkowski Distance

• Minkowski distance is a generalization of Euclidean distance:

    dist(p, q) = ( Σ_{k=1..n} |p_k – q_k|^r )^(1/r)

  where r is a parameter, n is the number of dimensions (attributes), and p_k and q_k are, respectively, the k-th attributes (components) of data objects p and q.

Examples
• r = 1. City block (Manhattan, taxicab, L1 norm) distance.
• r = 2. Euclidean (L2 norm) distance.
• r → ∞. "Supremum" (L_max norm, L∞ norm) distance.
  – This is the maximum difference between any component of the vectors.

Page 12: Cluster Analysis (1)

Minkowski Distance

[Figure: the same four points p1–p4 plotted in the x–y plane]

point   x   y
p1      0   2
p2      2   0
p3      3   1
p4      5   1

Distance Matrices

L1      p1   p2   p3   p4
p1      0    4    4    6
p2      4    0    2    4
p3      4    2    0    2
p4      6    4    2    0

L2      p1      p2      p3      p4
p1      0       2.828   3.162   5.099
p2      2.828   0       1.414   3.162
p3      3.162   1.414   0       2
p4      5.099   3.162   2       0

L∞      p1   p2   p3   p4
p1      0    2    3    5
p2      2    0    1    3
p3      3    1    0    2
p4      5    3    2    0
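A minimal NumPy sketch of Minkowski distance for r = 1, r = 2, and the supremum case, reproducing the matrices above (names are illustrative):

    import numpy as np

    points = np.array([[0, 2], [2, 0], [3, 1], [5, 1]])  # p1..p4

    def minkowski_matrix(pts, r):
        diff = np.abs(pts[:, None, :] - pts[None, :, :])
        if r == np.inf:                    # supremum (L-infinity) distance
            return diff.max(axis=-1)
        return (diff ** r).sum(axis=-1) ** (1.0 / r)

    print(minkowski_matrix(points, 1))                   # L1 (city block)
    print(np.round(minkowski_matrix(points, 2), 3))      # L2 (Euclidean)
    print(minkowski_matrix(points, np.inf))              # L-infinity (supremum)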

Page 13: Cluster Analysis (1)

Similarity Between Binary Vectors

• A common situation is that the objects, p and q, have only binary attributes.
• Compute similarities using the following quantities:

M01 = the number of attributes where p was 0 and q was 1

M10 = the number of attributes where p was 1 and q was 0

M00 = the number of attributes where p was 0 and q was 0

M11 = the number of attributes where p was 1 and q was 1

• Simple Matching and Jaccard Coefficients

SMC = number of matches / number of attributes = (M11 + M00) / (M01 + M10 + M11 + M00)

J = number of M11 matches / number of not-both-zero attribute values

= (M11) / (M01 + M10 + M11)

Page 14: Cluster Analysis (1)

SMC versus Jaccard: Example

p = 1 0 0 0 0 0 0 0 0 0
q = 0 0 0 0 0 0 1 0 0 1

M01 = 2 (the number of attributes where p was 0 and q was 1)
M10 = 1 (the number of attributes where p was 1 and q was 0)
M00 = 7 (the number of attributes where p was 0 and q was 0)
M11 = 0 (the number of attributes where p was 1 and q was 1)

SMC = (M11 + M00)/(M01 + M10 + M11 + M00) = (0+7) / (2+1+0+7) = 0.7

J = (M11) / (M01 + M10 + M11) = 0 / (2 + 1 + 0) = 0
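A minimal Python sketch of SMC and Jaccard for binary vectors, reproducing the example above (the function name is illustrative):

    def smc_and_jaccard(p, q):
        m01 = sum(1 for a, b in zip(p, q) if a == 0 and b == 1)
        m10 = sum(1 for a, b in zip(p, q) if a == 1 and b == 0)
        m00 = sum(1 for a, b in zip(p, q) if a == 0 and b == 0)
        m11 = sum(1 for a, b in zip(p, q) if a == 1 and b == 1)
        smc = (m11 + m00) / (m01 + m10 + m11 + m00)
        jaccard = m11 / (m01 + m10 + m11) if (m01 + m10 + m11) > 0 else 0.0
        return smc, jaccard

    p = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
    q = [0, 0, 0, 0, 0, 0, 1, 0, 0, 1]
    print(smc_and_jaccard(p, q))  # (0.7, 0.0)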

Page 15: Cluster Analysis (1)

Cosine Similarity

If D1 and D2 are two document vectors, then

    cos(D1, D2) = (D1 · D2) / (||D1|| ||D2||),

where · indicates the vector dot product and ||D|| is the length of vector D.

Example (rows D1 and D2 of the table below):

    D1 · D2 = .4*0 + .33*0 + 0*.33 + 0*1 + .17*.33 = .0561
    ||D1|| = sqrt(.40^2 + .33^2 + .17^2) = .55
    ||D2|| = sqrt(.33^2 + 1^2 + .33^2) = 1.1
    cos(D1, D2) = .0561 / (.55 * 1.1) = .093

TID   W1     W2     W3     W4     W5
D1    0.40   0.33   0.00   0.00   0.17
D2    0.00   0.00   0.33   1.00   0.33
D3    0.40   0.50   0.00   0.00   0.00
D4    0.00   0.00   0.33   0.00   0.17
D5    0.20   0.17   0.33   0.00   0.33

If the cosine similarity is 1, the angle between D1 and D2 is 0°, and D1 and D2 are the same except for magnitude.

If the cosine similarity is 0, the angle between D1 and D2 is 90°, and they don't share any terms (words).
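A minimal Python sketch of cosine similarity, reproducing the D1/D2 example above (names are illustrative):

    import math

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return dot / (norm_u * norm_v)

    D1 = [0.40, 0.33, 0.00, 0.00, 0.17]
    D2 = [0.00, 0.00, 0.33, 1.00, 0.33]
    print(round(cosine(D1, D2), 3))  # ~0.093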

Page 16: Cluster Analysis (1)

What is Cluster Analysis?

• Finding groups of objects such that the objects in a group will be similar (or related) to one another and different from (or unrelated to) the objects in other groups
  – Inter-cluster distances are maximized
  – Intra-cluster distances are minimized

Page 17: Cluster Analysis (1)

Types of Clusters: Well-Separated

• Well-separated clusters:

– Any point in a cluster is closer (or more similar) to every other point in the cluster than to any point not in the cluster.

3 well-separated clusters

Page 18: Cluster Analysis (1)

Types of Clusters: Center-Based

• Center-based
  – An object in a cluster is closer (more similar) to the "center" of its cluster than to the center of any other cluster.

• The center of a cluster is often a centroid, the average of all the points in the cluster, or a medoid, the most “representative” point of a cluster

4 center-based clusters

Page 19: Cluster Analysis (1)

Types of Clusters: Contiguity-Based

• Contiguous clusters (nearest neighbor or transitive)

– A point in a cluster is closer (or more similar) to one or more other points in the cluster than to any point not in the cluster.

8 contiguous clusters

Page 20: Cluster Analysis (1)

Types of Clusters: Density-Based

• Density-based
  – A cluster is a dense region of points that is separated from other regions of high density by low-density regions.

– Used when the clusters are irregular or intertwined, and when noise and outliers are present.

6 density-based clusters

Page 21: Cluster Analysis (1)

K-means Clustering

• Each cluster is associated with a centroid (center point)
• Each point is assigned to the cluster with the closest centroid
• The number of clusters, K, must be specified
• The basic algorithm is very simple (see the sketch below)
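A minimal NumPy sketch of the basic K-means loop described above (function and variable names are illustrative, not from the slides):

    import numpy as np

    def kmeans(points, k, max_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        # Choose K initial centroids at random from the data points.
        centroids = points[rng.choice(len(points), size=k, replace=False)]
        for _ in range(max_iter):
            # Assign each point to the closest centroid (Euclidean distance).
            dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=-1)
            labels = dists.argmin(axis=1)
            # Recompute each centroid as the mean of its assigned points
            # (keep the old centroid if a cluster ends up empty).
            new_centroids = np.array([
                points[labels == i].mean(axis=0) if np.any(labels == i) else centroids[i]
                for i in range(k)
            ])
            if np.allclose(new_centroids, centroids):  # stop when centroids don't change
                break
            centroids = new_centroids
        return labels, centroids

    # Usage: two obvious blobs.
    points = np.array([[0., 0.], [0., 1.], [1., 0.], [8., 8.], [8., 9.], [9., 8.]])
    labels, centers = kmeans(points, k=2)
    print(labels, centers)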

Page 22: Cluster Analysis (1)

Example

[Figure: K-means cluster assignments over Iterations 1–6 on a two-dimensional data set (x vs. y)]

Page 23: Cluster Analysis (1)

K-means Clustering – Details

• Initial centroids may be chosen randomly.

– Clusters produced vary from one run to another.

• The centroid is (typically) the mean of the points in the cluster.

• ‘Closeness’ is measured by Euclidean distance, cosine similarity, etc.

• Most of the convergence happens in the first few iterations.

– Often the stopping condition is changed to ‘Until relatively few points change clusters’

• Complexity is O(I * K * n * d)
  – n = number of points, K = number of clusters, I = number of iterations, d = number of attributes

Page 24: Cluster Analysis (1)

Evaluating K-means Clusters

• The most common measure is the Sum of Squared Error (SSE)
  – For each point, the error is the distance to the nearest cluster centroid
  – To get the SSE, we square these errors and sum them up:

    SSE = Σ_{i=1..K} Σ_{x ∈ C_i} dist(m_i, x)^2

  where x is a data point in cluster C_i and m_i is the representative point (centroid) for cluster C_i.
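A minimal Python sketch of the SSE computation above, given points, labels, and centroids as produced by a K-means run (names are illustrative):

    import numpy as np

    def sse(points, labels, centroids):
        total = 0.0
        for i, m_i in enumerate(centroids):
            cluster_points = points[labels == i]
            total += ((cluster_points - m_i) ** 2).sum()  # squared distances to m_i
        return total

    points = np.array([[0., 0.], [0., 1.], [8., 8.], [9., 8.]])
    labels = np.array([0, 0, 1, 1])
    centroids = np.array([[0., 0.5], [8.5, 8.]])
    print(sse(points, labels, centroids))  # 0.5 + 0.5 = 1.0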

Page 25: Cluster Analysis (1)

Reducing SSE with Post-processing

• The obvious way to reduce the SSE is to find more clusters, i.e., to use a larger K.
• However, in many cases we would like to improve the SSE without increasing the number of clusters.
  – Various techniques are used to "fix up" the resulting clusters in order to produce a clustering with lower SSE.
• Commonly used approach: alternate cluster splitting and merging phases.
  – Split a cluster: split the cluster with the largest SSE.
  – Merge two clusters: merge the two clusters that result in the smallest increase in total SSE.

Page 26: Cluster Analysis (1)

Limitations of K-means

• K-means has problems when clusters have
  – Differing sizes
  – Differing densities
  – Non-globular shapes

Page 27: Cluster Analysis (1)

Limitations of K-means: Differing Sizes

[Figure: Original Points vs. K-means result (3 clusters)]

Page 28: Cluster Analysis (1)

Limitations of K-means: Differing Density

[Figure: Original Points vs. K-means result (3 clusters)]

Page 29: Cluster Analysis (1)

Limitations of K-means: Non-globular Shapes

[Figure: Original Points vs. K-means result (2 clusters)]

Page 30: Cluster Analysis (1)

Overcoming K-means Limitations

[Figure: Original Points vs. K-means clusters]

One solution is to use many clusters: find parts of clusters, then apply a merge strategy.

Page 31: Cluster Analysis (1)

Overcoming K-means Limitations

[Figure: Original Points vs. K-means clusters]

Page 32: Cluster Analysis (1)

Overcoming K-means Limitations

[Figure: Original Points vs. K-means clusters]

Page 33: Cluster Analysis (1)

Importance of Choosing Initial Centroids

[Figure: K-means cluster assignments over Iterations 1–4 (x vs. y)]

Starting with two initial centroids in one cluster of each pair of clusters

Page 34: Cluster Analysis (1)

Importance of Choosing Initial Centroids

[Figure: K-means cluster assignments over Iterations 1–4 (x vs. y)]

Starting with two initial centroids in one cluster of each pair of clusters

Page 35: Cluster Analysis (1)

Importance of Choosing Initial Centroids

Starting with some pairs of clusters having three initial centroids, while others have only one.

[Figure: K-means cluster assignments over Iterations 1–4 (x vs. y)]

Page 36: Cluster Analysis (1)

Importance of Choosing Initial Centroids

Starting with some pairs of clusters having three initial centroids, while others have only one.

[Figure: K-means cluster assignments over Iterations 1–4 (x vs. y)]

Page 37: Cluster Analysis (1)

Problems with Selecting Initial Points

• Of course, the ideal would be to choose initial centroids, one from each true cluster. However, this is very difficult.
• If there are K 'real' clusters, the chance of selecting one centroid from each cluster is small.
  – The chance is relatively small when K is large
  – If the clusters are the same size, n, then

        P = (ways to pick one centroid from each cluster) / (ways to pick K centroids)
          = K! * n^K / (K * n)^K = K! / K^K

  – For example, if K = 10, then the probability = 10!/10^10 = 0.00036
• Sometimes the initial centroids will readjust themselves in the 'right' way, and sometimes they don't.
• Consider an example of five pairs of clusters.

Page 38: Cluster Analysis (1)

Solutions to the Initial Centroids Problem

• Multiple runs
  – Helps, but probability is not on your side
• Bisecting K-means
  – Not as susceptible to initialization issues

Page 39: Cluster Analysis (1)

Bisecting K-means

• A straightforward extension of the basic K-means algorithm. Simple idea: to obtain K clusters, split the set of points into two clusters, select one of these clusters to split, and so on, until K clusters have been produced.

Algorithm
  Initialize the list of clusters to contain the cluster consisting of all points.
  repeat
      Remove a cluster from the list of clusters.
      // Perform several "trial" bisections of the chosen cluster.
      for i = 1 to number of trials do
          Bisect the selected cluster using basic K-means (i.e. 2-means).
      end for
      Select the two clusters from the bisection with the lowest total SSE.
      Add these two clusters to the list of clusters.
  until the list of clusters contains K clusters.
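A minimal Python sketch of the pseudocode above, reusing the kmeans() and sse() helpers sketched earlier; picking the cluster with the largest SSE to split next is one common choice, since the pseudocode does not specify which cluster to remove (names are illustrative):

    import numpy as np

    def bisecting_kmeans(points, K, n_trials=5):
        clusters = [points]                      # start with one cluster: all points
        while len(clusters) < K:
            # Pick the cluster with the largest SSE to split next (a common choice).
            cluster_sses = [sse(c, np.zeros(len(c), dtype=int), [c.mean(axis=0)]) for c in clusters]
            target = clusters.pop(int(np.argmax(cluster_sses)))
            best = None
            for trial in range(n_trials):        # several "trial" bisections
                labels, centers = kmeans(target, k=2, seed=trial)
                split = [target[labels == 0], target[labels == 1]]
                total = sse(target, labels, centers)
                if best is None or total < best[0]:
                    best = (total, split)
            clusters.extend(best[1])             # keep the bisection with the lowest SSE
        return clusters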

Page 40: Cluster Analysis (1)

Bisecting K-means Example