Cluster Analysis for Gene Expression Data
DESCRIPTION
Cluster Analysis for Gene Expression Data. Ka Yee Yeung (http://staff.washington.edu/kayee/research.html), Center for Expression Arrays, Department of Microbiology, [email protected]. A gene expression data set is a snapshot of activities in the cell; each chip represents an experiment.
TRANSCRIPT
Cluster Analysis for Gene Expression Data
Ka Yee Yeung
http://staff.washington.edu/kayee/research.html
Center for Expression Arrays
Department of Microbiology
10/18/2002 Ka Yee Yeung, CEA 2
• Snapshot of activities in the cell
• Each chip represents an experiment:
  – time course
  – tissue samples (normal/cancer)
[Figure: a gene expression data set is an n × p matrix, with n genes as rows, p experiments as columns, and entry Xij the expression of gene i in experiment j]
What is clustering?
• Group similar objects together
• Objects in the same cluster (group) are more similar to each other than objects in different clusters
• Data exploration tool: to find patterns in large data sets
• Unsupervised approach: does not make use of prior knowledge of the data
Applications of clustering gene expression data
• Cluster the genes → find functionally related genes
• Cluster the experiments → discover new subtypes of tissue samples
• Cluster both genes and experiments → find sub-patterns
Examples of clustering algorithms
• Hierarchical clustering algorithms, e.g. [Eisen et al. 1998]
• K-means, e.g. [Tavazoie et al. 1999]
• Self-organizing maps (SOM), e.g. [Tamayo et al. 1999]
• CAST [Ben-Dor, Yakhini 1999]
• Model-based clustering algorithms, e.g. [Yeung et al. 2001]
Overview
• Similarity/distance measures
• Hierarchical clustering algorithms
  – Made popular by Stanford, i.e., [Eisen et al. 1998]
• K-means
  – Made popular by many groups, e.g. [Tavazoie et al. 1999]
• Model-based clustering algorithms [Yeung et al. 2001]
How to define similarity?
• Similarity measures:
  – A measure of pairwise similarity or dissimilarity
  – Examples:
    • Correlation coefficient
    • Euclidean distance
[Figure: an n × p raw data matrix (genes × experiments) is converted into an n × n similarity matrix holding the pairwise similarity of every pair of genes X and Y]
Similarity measures (for those of you who enjoy equations…)
• Euclidean distance:
  d(X, Y) = \sqrt{\sum_{j=1}^{p} (X[j] - Y[j])^2}
• Correlation coefficient:
  \rho(X, Y) = \frac{\sum_{j=1}^{p} (X[j] - \bar{X})(Y[j] - \bar{Y})}{\sqrt{\sum_{j=1}^{p} (X[j] - \bar{X})^2} \sqrt{\sum_{j=1}^{p} (Y[j] - \bar{Y})^2}},
  where \bar{X} = \frac{1}{p} \sum_{j=1}^{p} X[j]
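Both measures can be computed directly from their definitions. A minimal Python sketch (function names are mine, not from the slides):

```python
import math

def euclidean(x, y):
    # d(X, Y) = sqrt( sum_j (X[j] - Y[j])^2 )
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def correlation(x, y):
    # Pearson correlation: centered cross-product divided by the
    # product of the centered norms
    p = len(x)
    mx, my = sum(x) / p, sum(y) / p
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den
```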
Example
  X:  1  0 -1  0
  Y:  3  2  1  2
  Z: -1  0  1  0
  W:  2  0 -2  0
[Figure: line plot of the four profiles X, Y, Z, W across the 4 experiments]
Correlation(X, Y) = 1    Distance(X, Y) = 4
Correlation(X, Z) = -1   Distance(X, Z) = 2.83
Correlation(X, W) = 1    Distance(X, W) = 1.41
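These six values can be reproduced with NumPy, using `np.corrcoef` for the correlation and `np.linalg.norm` for the Euclidean distance (a sketch):

```python
import numpy as np

X = np.array([1, 0, -1, 0])
Y = np.array([3, 2, 1, 2])    # Y = X + 2: same shape, shifted up
Z = np.array([-1, 0, 1, 0])   # Z = -X: mirrored
W = np.array([2, 0, -2, 0])   # W = 2X: same shape, larger magnitude

for name, v in (("Y", Y), ("Z", Z), ("W", W)):
    corr = np.corrcoef(X, v)[0, 1]
    dist = np.linalg.norm(X - v)
    print(f"(X,{name}): correlation = {corr:+.2f}, distance = {dist:.2f}")
```

The output matches the slide: correlations +1.00, -1.00, +1.00 and distances 4.00, 2.83, 1.41.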
Lessons from the example
• Correlation – captures direction only
• Euclidean distance – captures magnitude & direction
• Array data are noisy → we need many experiments to robustly estimate pairwise similarity
Clustering algorithms
• From pairwise similarities to groups
• Inputs:
  – Raw data matrix or similarity matrix
  – Number of clusters or some other parameters
Hierarchical clustering [Hartigan 1975]
• Agglomerative (bottom-up)
• Algorithm:
  – Initialize: each item is a cluster
  – Iterate:
    • select the two most similar clusters
    • merge them
  – Halt: when the required number of clusters is reached
[Figure: dendrogram]
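The loop above can be written in a few lines of Python. This is a toy sketch of my own, not the author's code; passing `link=min` gives single link and `link=max` complete link (defined on the next slides):

```python
def agglomerate(dist, k, link=min):
    """Agglomerative clustering over a dict mapping frozenset({i, j})
    to the pairwise distance d(i, j). Halts at k clusters."""
    # initialize: each item is its own cluster
    clusters = {frozenset([i]) for pair in dist for i in pair}
    while len(clusters) > k:
        # select the two closest clusters under the chosen linkage
        a, b = min(
            ((a, b) for a in clusters for b in clusters if a != b),
            key=lambda ab: link(dist[frozenset([i, j])]
                                for i in ab[0] for j in ab[1]),
        )
        # merge them
        clusters -= {a, b}
        clusters.add(a | b)
    return clusters
```

This brute-force version rescans all cluster pairs at each step (cubic time), which is fine for illustration but not for thousands of genes.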
Hierarchical: Single Link
• cluster similarity = similarity of two most similar members
- Potentially long and skinny clusters
+ Fast
Example: single link
Initial distance matrix (items 1–5):
      1   2   3   4   5
  1   0
  2   2   0
  3   6   3   0
  4  10   9   7   0
  5   9   8   5   4   0
Merge the closest pair {1, 2} (distance 2), then update:
  d_{(1,2),3} = min{d_{1,3}, d_{2,3}} = min{6, 3} = 3
  d_{(1,2),4} = min{d_{1,4}, d_{2,4}} = min{10, 9} = 9
  d_{(1,2),5} = min{d_{1,5}, d_{2,5}} = min{9, 8} = 8
New matrix:
        (1,2)  3   4   5
  (1,2)   0
  3       3    0
  4       9    7   0
  5       8    5   4   0
Example: single link (continued)
Merge {(1,2), 3} (distance 3), then update:
  d_{(1,2,3),4} = min{d_{(1,2),4}, d_{3,4}} = min{9, 7} = 7
  d_{(1,2,3),5} = min{d_{(1,2),5}, d_{3,5}} = min{8, 5} = 5
New matrix:
          (1,2,3)  4   5
  (1,2,3)    0
  4          7     0
  5          5     4   0
Example: single link (continued)
Merge {4, 5} (distance 4), then update:
  d_{(1,2,3),(4,5)} = min{d_{(1,2,3),4}, d_{(1,2,3),5}} = min{7, 5} = 5
Finally, (1,2,3) and (4,5) merge at distance 5.
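The full sequence of single-link merges can be checked with SciPy's `linkage` function (a sketch; items 1–5 become 0-based indices):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# the example's symmetric 5x5 distance matrix (items 1..5)
D = np.array([
    [ 0,  2,  6, 10,  9],
    [ 2,  0,  3,  9,  8],
    [ 6,  3,  0,  7,  5],
    [10,  9,  7,  0,  4],
    [ 9,  8,  5,  4,  0],
], dtype=float)

Z = linkage(squareform(D), method="single")
print(Z[:, 2])  # merge heights: [2. 3. 4. 5.]
```

The merge heights 2, 3, 4, 5 match the worked example.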
Hierarchical: Complete Link
• cluster similarity = similarity of two least similar members
+ tight clusters
- slow
Example: complete link
Initial distance matrix (items 1–5):
      1   2   3   4   5
  1   0
  2   2   0
  3   6   3   0
  4  10   9   7   0
  5   9   8   5   4   0
Merge the closest pair {1, 2} (distance 2), then update:
  d_{(1,2),3} = max{d_{1,3}, d_{2,3}} = max{6, 3} = 6
  d_{(1,2),4} = max{d_{1,4}, d_{2,4}} = max{10, 9} = 10
  d_{(1,2),5} = max{d_{1,5}, d_{2,5}} = max{9, 8} = 9
New matrix:
        (1,2)  3   4   5
  (1,2)   0
  3       6    0
  4      10    7   0
  5       9    5   4   0
Example: complete link (continued)
Merge {4, 5} (distance 4), then update:
  d_{3,(4,5)} = max{d_{3,4}, d_{3,5}} = max{7, 5} = 7
  d_{(1,2),(4,5)} = max{d_{(1,2),4}, d_{(1,2),5}} = max{10, 9} = 10
New matrix:
         (1,2)  3  (4,5)
  (1,2)    0
  3        6    0
  (4,5)   10    7   0
Example: complete link (continued)
Merge {(1,2), 3} (distance 6), then update:
  d_{(1,2,3),(4,5)} = max{d_{(1,2),(4,5)}, d_{3,(4,5)}} = max{10, 7} = 10
Finally, (1,2,3) and (4,5) merge at distance 10.
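The complete-link merges can be checked the same way, switching `method` to `"complete"` (a sketch using SciPy):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# the same symmetric 5x5 distance matrix (items 1..5)
D = np.array([
    [ 0,  2,  6, 10,  9],
    [ 2,  0,  3,  9,  8],
    [ 6,  3,  0,  7,  5],
    [10,  9,  7,  0,  4],
    [ 9,  8,  5,  4,  0],
], dtype=float)

Z = linkage(squareform(D), method="complete")
print(Z[:, 2])  # merge heights: [2. 4. 6. 10.]
```

Note how the same data yields different merge heights (2, 4, 6, 10) and a different merge order than single link, illustrating that the linkage rule matters.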
Hierarchical: Average Link
• cluster similarity = average similarity of all pairs
+ tight clusters
- slow
Software: TreeView [Eisen et al. 1998]
• Fig 1 in Eisen's PNAS 1998 paper
• Time course of serum stimulation of primary human fibroblasts
• cDNA arrays with approx. 8,600 spots
• Similar to average link
• Free download at: http://rana.lbl.gov/EisenSoftware.htm
Overview
• Similarity/distance measures
• Hierarchical clustering algorithms
  – Made popular by Stanford, i.e., [Eisen et al. 1998]
• K-means
  – Made popular by many groups, e.g. [Tavazoie et al. 1999]
• Model-based clustering algorithms [Yeung et al. 2001]
Details of k-means
• Iterate until convergence:
  – Assign each data point to the closest centroid
  – Compute new centroids
• Objective function: minimize
  \sum_{i=1}^{k} \sum_{x \in C_i} \left( x - \text{centroid}(C_i) \right)^2
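The two-step iteration can be sketched as a small Lloyd-style loop (my own toy implementation with NumPy; it assumes no cluster ever goes empty):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimize sum_i sum_{x in C_i} ||x - centroid(C_i)||^2."""
    rng = np.random.default_rng(seed)
    # initialize centroids with k distinct data points
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # step 1: assign each data point to the closest centroid
        sq = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = sq.argmin(axis=1)
        # step 2: compute new centroids as cluster means
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):   # converged
            break
        centroids = new
    return labels, centroids
```

Each pass can only decrease the objective, which is why the algorithm is guaranteed to reach a local (not necessarily global) optimum.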
Properties of k-means
• Fast
• Proven to converge to a local optimum
• In practice, converges quickly
• Tends to produce spherical, equal-sized clusters
• Related to the model-based approach
• Gavin Sherlock's Xcluster: http://genome-www.stanford.edu/~sherlock/cluster.html
What we have seen so far…
• Definition of clustering
• Pairwise similarity:
  – Correlation
  – Euclidean distance
• Clustering algorithms:
  – Hierarchical agglomerative
  – K-means
• Different clustering algorithms → different clusters
• Clustering algorithms always spit out clusters
Which clustering algorithm should I use?
• Good question
• No definite answer: ongoing research
• Our preference: the model-based approach
Model-based clustering (MBC)
• Gaussian mixture model:
  – Assume each cluster is generated by a multivariate normal distribution
  – Each cluster k has parameters:
    • Mean vector μk – location of cluster k
    • Covariance matrix Σk – volume, shape, and orientation of cluster k
• Data transformations & normality assumption
More on the covariance matrix Σk (volume, orientation, shape):
• Equal volume, spherical (EI)
• Unequal volume, spherical (VI)
• Diagonal model
• Equal volume, orientation, shape (EEE)
• Unconstrained (VVV)
Key advantage of the model-based approach: it can choose the model and the number of clusters
• Bayesian Information Criterion (BIC) [Schwarz 1978]
  – Approximates p(data | model)
• A large BIC score indicates strong evidence for the corresponding model.
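As an illustration of model selection by BIC (not the ovary analysis: the data here are synthetic, and note that scikit-learn's `bic` is defined so that *smaller* is better, the opposite sign convention from the slide and from mclust):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic data: three well-separated spherical clusters in 2-D
X = np.vstack([rng.normal(loc=mu, scale=0.3, size=(60, 2))
               for mu in (0.0, 3.0, 6.0)])

# covariance_type roughly mirrors the slide's models:
# "spherical" ~ EI/VI, "diag" ~ diagonal, "tied" ~ EEE, "full" ~ VVV
best = None
for cov in ("spherical", "diag", "tied", "full"):
    for k in range(1, 7):
        gm = GaussianMixture(n_components=k, covariance_type=cov,
                             random_state=0).fit(X)
        score = gm.bic(X)  # sklearn's BIC: lower means stronger evidence
        if best is None or score < best[0]:
            best = (score, cov, k)

print(f"selected model: {best[1]} with {best[2]} clusters")
```

With three clearly separated clusters, the search recovers 3 components; on real expression data the BIC curve is flatter and the choice of covariance model matters more, as the ovary analysis below shows.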
Gene expression data sets
• Ovary data [Michel Schummer, Institute of Systems Biology]
  – Subset of the data: 235 clones (portions of genes) × 24 experiments (cancer/normal tissue samples)
  – The 235 clones correspond to 4 genes (external criterion)
BIC analysis: square root ovary data
• EEE and diagonal models → first local maximum at 4 clusters
• Global maximum → VI at 8 clusters
[Figure: BIC (about -3000 to 0) vs. number of clusters (0–16) for the EI, VI, diagonal, and EEE models]
How do we know MBC is doing well?
Answer: compare to external information
• Adjusted Rand index: maximum at EEE with 4 clusters (> CAST)
[Figure: adjusted Rand index (0.2–0.8) vs. number of clusters (0–16) for the EI, VI, diagonal, and EEE models and for CAST]
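The adjusted Rand index scores the agreement between a clustering and external labels, corrected for chance. A small illustration with hypothetical labels (using scikit-learn; the label values themselves are arbitrary, only the partition matters):

```python
from sklearn.metrics import adjusted_rand_score

truth     = [0, 0, 0, 1, 1, 2, 2, 2]   # hypothetical external classes
relabeled = [1, 1, 1, 0, 0, 2, 2, 2]   # same partition, different labels
shuffled  = [0, 1, 2, 0, 1, 2, 0, 1]   # unrelated partition

print(adjusted_rand_score(truth, relabeled))  # 1.0: perfect agreement
print(adjusted_rand_score(truth, shuffled))   # negative: no better than chance
```

The chance correction is what makes the index comparable across different numbers of clusters, which is why it is used to score the models along the x-axis of the figure.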
Take-home messages
• MBC has superior performance on:
  – Quality of clusters
  – Number of clusters and model chosen (BIC)
• Clusters with high BIC scores tend to show high agreement with the external information
• MBC tends to produce better clusters than a leading heuristic-based clustering algorithm (CAST)
• S-Plus or R versions: http://www.stat.washington.edu/fraley/mclust/