Empirical Evaluation of Dissimilarity Measures for Color and Texture
Jan Puzicha, Joachim M. Buhmann, Yossi Rubner & Carlo Tomasi
Presented by: Dave Kauchak, Department of Computer Science, University of California, San Diego ([email protected])

TRANSCRIPT

Page 1:

Empirical Evaluation of Dissimilarity Measures for Color and Texture

Presented by:

Dave Kauchak

Department of Computer Science

University of California, San Diego

[email protected]

Jan Puzicha, Joachim M. Buhmann, Yossi Rubner & Carlo Tomasi

Page 2:

The Problem: Image Dissimilarity

D(image 1, image 2) = ?

Page 3:

Where does this problem arise in computer vision?

Image classification

Image retrieval

Image segmentation

Page 4:

Classification


Page 5:

Retrieval

Jeremy S. De Bonet and Paul Viola (1997). Structure Driven Image Database Retrieval. Neural Information Processing 10.

Page 6:

Segmentation

http://vizlab.rutgers.edu/~comanici/segm_images.html

Page 7:

Histograms for image dissimilarity

Examine the distribution of features, rather than the features themselves

General purpose (i.e. any distribution of features)

Resilient to variations (shadowing, changes in illumination, shading, etc.)

Can use previous work in statistics, etc.

Page 8:

Histogram Example

Page 9:

Histogramming Image Features

Features: color, texture, shape, others…

Create a histogram through binning, or some other procedure, to get a distribution

Page 10:

Color

Which is more similar?

L*a*b* was designed to be perceptually uniform: perceptual “closeness” corresponds to Euclidean distance in the space.
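The slides contain no code, but the point can be illustrated with a small sketch (not from the original talk): convert RGB to L*a*b* with scikit-image and use Euclidean distance as a rough perceptual difference. The sample colors are illustrative.

```python
import numpy as np
from skimage.color import rgb2lab

def lab_distance(rgb1, rgb2):
    """Approximate perceptual difference between two RGB colors (floats in [0, 1]),
    taken as the Euclidean distance between their L*a*b* coordinates."""
    lab1 = rgb2lab(np.asarray(rgb1, dtype=float).reshape(1, 1, 3))
    lab2 = rgb2lab(np.asarray(rgb2, dtype=float).reshape(1, 1, 3))
    return float(np.linalg.norm(lab1 - lab2))

# Two similar reds are much closer in L*a*b* than a red and a green.
print(lab_distance([1.0, 0.0, 0.0], [0.9, 0.1, 0.1]))
print(lab_distance([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))
```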

Page 11:

L*a*b*

L – lightness (white to black)

a – red-greenness

b – yellow-blueness

Page 12:

Texture

Unlike color, texture is not a pointwise property

Texture involves a local neighborhood

Gabor Filters are commonly used to identify texture features

Page 13:

Gabor Filters

Gabor filters are Gaussians modulated by sinusoids

They can be tuned in both scale (size) and orientation

A filter is applied to a region, and the region is characterized by some feature of the resulting energy distribution (often the mean and standard deviation)
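A minimal NumPy/SciPy sketch of this idea, not the paper's actual filter bank: build a real-valued Gabor kernel as a Gaussian envelope times a cosine carrier, apply it to a patch, and keep the mean and standard deviation of the response. All parameter values (kernel size, sigma, wavelength, aspect ratio) are illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, sigma, wavelength, theta, gamma=0.5):
    """Real-valued Gabor kernel: a Gaussian envelope modulated by a cosine carrier.
    size = kernel side length, sigma = envelope scale, theta = orientation (radians),
    gamma = spatial aspect ratio of the envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)     # coordinates rotated into the filter orientation
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * x_t / wavelength)

def gabor_features(patch, kernels):
    """Characterize a gray-level patch by two statistics per filter response:
    the mean absolute response and its standard deviation."""
    feats = []
    for k in kernels:
        response = convolve2d(patch, k, mode="same", boundary="symm")
        feats.extend([np.abs(response).mean(), response.std()])
    return np.array(feats)

# A small bank tuned over scale and orientation (parameter values are illustrative).
bank = [gabor_kernel(size=15, sigma=s, wavelength=2.0 * s, theta=t)
        for s in (2.0, 4.0) for t in (0.0, np.pi / 4, np.pi / 2)]
patch = np.random.rand(64, 64)                      # stand-in for an image region
print(gabor_features(patch, bank).shape)            # (12,): 2 statistics per filter
```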

Page 14:

Examples of Gabor Filters

Scale 3 at 72°, scale 4 at 108°, scale 5 at 144°

Page 15:

Creating Histograms from Features

Regular Binning
– Simple
– Choosing the bins is important: bins may be too large or too small

Adaptive Binning
– Bins are adapted to the distribution, usually using some form of k-means (see the sketch below)
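A rough sketch of the two binning strategies; the adaptive case assumes scikit-learn's KMeans, which may differ from the exact adaptation procedure used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def regular_histogram(values, n_bins, lo, hi):
    """Fixed, equally spaced bins over [lo, hi] for a 1-D feature."""
    hist, _ = np.histogram(values, bins=n_bins, range=(lo, hi))
    return hist / hist.sum()

def adaptive_histogram(vectors, n_bins, seed=0):
    """Bins adapted to the data: cluster the feature vectors with k-means and
    count how many samples fall into each cluster (bin)."""
    vectors = np.asarray(vectors, dtype=float)
    km = KMeans(n_clusters=n_bins, n_init=10, random_state=seed).fit(vectors)
    counts = np.bincount(km.labels_, minlength=n_bins).astype(float)
    return counts / counts.sum(), km.cluster_centers_

pixels = np.random.randn(1000, 3)                    # stand-in for L*a*b* pixel values
hist, bin_centers = adaptive_histogram(pixels, n_bins=8)
print(hist.sum(), bin_centers.shape)                 # 1.0 (8, 3)
```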

Page 16:

Marginal Histograms

Marginal histograms only deal with a single feature

[Figure: normal (joint) binning of two features vs. marginal binning resulting in 2 histograms]

Page 17:

Cumulative Histogram

[Figure: a normal histogram and the corresponding cumulative histogram]
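A tiny sketch, on synthetic data, of how marginal and cumulative histograms are derived from raw feature samples.

```python
import numpy as np

samples = np.random.rand(500, 2)                     # two feature channels, values in [0, 1)
# Marginal histograms: one 1-D histogram per channel instead of a joint 2-D histogram.
marginals = [np.histogram(samples[:, r], bins=16, range=(0, 1))[0] / len(samples)
             for r in range(samples.shape[1])]
# Cumulative histograms: running sums of the marginals (used by K-S and CvM below).
cumulatives = [np.cumsum(m) for m in marginals]
```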

Page 18:

Dissimilarity Measures Using the Histograms

Heuristic Histogram Distances

Non-parametric Test Statistics

Information-Theoretic Divergences

Ground Distance Measures

Page 19:

Notation

D(I, J) is the dissimilarity of images I and J

f(i; J) is histogram entry i in the histogram of image J

f_r(i; J) is marginal histogram entry i of image J

F_r(i; J) is the cumulative histogram

Page 20:

Heuristic Histogram Distances

Minkowski-form distance Lp

Special cases:
– L1: absolute, city-block, or Manhattan distance
– L2: Euclidean distance
– L∞: maximum value distance

$$D(I,J) = \left( \sum_i \bigl| f(i;I) - f(i;J) \bigr|^p \right)^{1/p}$$
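A direct translation of the L_p distance into NumPy, covering the three special cases above.

```python
import numpy as np

def minkowski(h_i, h_j, p):
    """Minkowski-form distance L_p between two histograms with the same bins."""
    diff = np.abs(np.asarray(h_i, dtype=float) - np.asarray(h_j, dtype=float))
    if np.isinf(p):
        return float(diff.max())                     # L_inf: maximum value distance
    return float((diff ** p).sum() ** (1.0 / p))

h1 = np.array([0.1, 0.4, 0.3, 0.2])
h2 = np.array([0.3, 0.3, 0.2, 0.2])
print(minkowski(h1, h2, 1), minkowski(h1, h2, 2), minkowski(h1, h2, np.inf))
```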

Page 21:

More heuristic distances

Weighted-Mean-Variance (WMV)

– Includes only minimal information about the distribution (its means and variances)

$$D_r(I,J) = \frac{\bigl|\mu_r(I) - \mu_r(J)\bigr|}{\sigma(\mu_r)} + \frac{\bigl|\sigma_r(I) - \sigma_r(J)\bigr|}{\sigma(\sigma_r)}$$

where $\mu_r$ and $\sigma_r$ are the mean and standard deviation of marginal feature $r$, and $\sigma(\cdot)$ denotes the standard deviation of that statistic over the entire database.
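A sketch of WMV for one marginal feature channel, assuming the normalization reconstructed above (database-wide standard deviations of the per-image statistics, computed offline).

```python
import numpy as np

def wmv_distance(samples_i, samples_j, mu_norm, sigma_norm):
    """Weighted-Mean-Variance distance for one marginal feature channel.

    samples_i, samples_j -- raw feature samples from images I and J
    mu_norm, sigma_norm  -- standard deviations of the per-image means / standard
                            deviations over the whole database (pre-computed offline)
    """
    mu_i, mu_j = np.mean(samples_i), np.mean(samples_j)
    s_i, s_j = np.std(samples_i), np.std(samples_j)
    return abs(mu_i - mu_j) / mu_norm + abs(s_i - s_j) / sigma_norm
```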

Page 22:

Non-parametric Test Statistics

Kolmogorov-Smirnov distance (K-S)

$$D_r(I,J) = \max_i \bigl| F_r(i;I) - F_r(i;J) \bigr|$$

Cramer/von Mises type (CvM)

$$D_r(I,J) = \sum_i \bigl( F_r(i;I) - F_r(i;J) \bigr)^2$$
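Both statistics follow directly from the cumulative histograms; a minimal sketch:

```python
import numpy as np

def ks_distance(h_i, h_j):
    """Kolmogorov-Smirnov: maximum absolute difference of the cumulative histograms."""
    return float(np.max(np.abs(np.cumsum(h_i) - np.cumsum(h_j))))

def cvm_distance(h_i, h_j):
    """Cramer/von Mises: sum of squared differences of the cumulative histograms."""
    return float(np.sum((np.cumsum(h_i) - np.cumsum(h_j)) ** 2))
```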

Page 23:

Cumulative Difference Example

[Figure: cumulative Histogram 1 − cumulative Histogram 2 = Difference, from which the K-S (maximum) and CvM (sum of squares) values are computed.]

Page 24:

Non-parametric Test Statistics (cont.)

χ²-statistic (chi-square)
– A simple statistical measure to decide whether two samples came from the same underlying distribution

$$D(I,J) = \sum_i \frac{\bigl( f(i;I) - f(i;J) \bigr)^2}{f(i;I) + f(i;J)}$$
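A minimal sketch of the statistic as reconstructed above (some presentations normalize by the bin-wise mean instead, which only changes a constant factor):

```python
import numpy as np

def chi_square(h_i, h_j, eps=1e-12):
    """Chi-square statistic between two histograms; eps guards against empty bins."""
    h_i = np.asarray(h_i, dtype=float)
    h_j = np.asarray(h_j, dtype=float)
    return float(np.sum((h_i - h_j) ** 2 / (h_i + h_j + eps)))
```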

Page 25:

Information-Theoretic Divergences

How well can one distribution be coded using the other as a codebook?

Kullback-Leibler divergence (KL):

$$D(I,J) = \sum_i f(i;I)\,\log \frac{f(i;I)}{f(i;J)}$$

Jeffrey divergence (JD), with $\hat{f}(i) = \tfrac{1}{2}\bigl(f(i;I) + f(i;J)\bigr)$:

$$D(I,J) = \sum_i \left( f(i;I)\,\log \frac{f(i;I)}{\hat{f}(i)} + f(i;J)\,\log \frac{f(i;J)}{\hat{f}(i)} \right)$$
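Minimal sketches of both divergences; the small eps only guards against empty bins and is not part of the definitions.

```python
import numpy as np

def kl_divergence(h_i, h_j, eps=1e-12):
    """Kullback-Leibler divergence; note it is asymmetric in its arguments."""
    h_i = np.asarray(h_i, dtype=float) + eps
    h_j = np.asarray(h_j, dtype=float) + eps
    return float(np.sum(h_i * np.log(h_i / h_j)))

def jeffrey_divergence(h_i, h_j, eps=1e-12):
    """Jeffrey divergence: each histogram compared against their bin-wise mean."""
    h_i = np.asarray(h_i, dtype=float) + eps
    h_j = np.asarray(h_j, dtype=float) + eps
    h_hat = 0.5 * (h_i + h_j)
    return float(np.sum(h_i * np.log(h_i / h_hat) + h_j * np.log(h_j / h_hat)))
```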

Page 26:

Ground Distance Measure

Based on some metric of distance between individual features

Earth Mover's Distance (EMD)
– Minimal cost to transform one distribution into the other
– The only measure that works on distributions with different numbers of bins

Page 27:

EMD

One distribution can be seen as a mass of earth properly spread in space, the other as a collection of holes in that same space

Distributions are represented as a set of clusters, each with an associated weight

Computing the dissimilarity then becomes an instance of the transportation problem
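The general case requires solving the transportation problem described on the next slide, but for two 1-D histograms over the same unit-spaced bins with equal total mass the EMD has a simple closed form, sketched here:

```python
import numpy as np

def emd_1d(h_i, h_j):
    """EMD between two 1-D histograms over the same unit-spaced bins with equal
    total mass: the L1 distance between their cumulative histograms."""
    return float(np.sum(np.abs(np.cumsum(h_i) - np.cumsum(h_j))))
```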

Page 28:

Transportation Problem

Some number of suppliers with goods

Some other number of consumers wanting goods

Each consumer-supplier pair has an associated cost to deliver one unit of the goods

Find the least expensive flow of goods from suppliers to consumers
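A sketch of this formulation using SciPy's generic linear-programming solver; the paper uses a specialized transportation-problem solver, and this version assumes the two signatures carry equal total weight.

```python
import numpy as np
from scipy.optimize import linprog

def emd_lp(weights_i, weights_j, ground_dist):
    """EMD between two signatures of equal total weight, posed as a transportation
    problem and handed to a generic LP solver.

    weights_i   -- (m,) cluster weights of the first signature (the "suppliers")
    weights_j   -- (n,) cluster weights of the second signature (the "consumers")
    ground_dist -- (m, n) cost of moving one unit of mass between cluster pairs
    """
    m, n = ground_dist.shape
    c = ground_dist.ravel()                          # objective: total transport cost
    A_supply = np.zeros((m, m * n))                  # row sums of the flow = supplies
    for i in range(m):
        A_supply[i, i * n:(i + 1) * n] = 1.0
    A_demand = np.zeros((n, m * n))                  # column sums of the flow = demands
    for j in range(n):
        A_demand[j, j::n] = 1.0
    res = linprog(c,
                  A_eq=np.vstack([A_supply, A_demand]),
                  b_eq=np.concatenate([weights_i, weights_j]),
                  bounds=(0, None), method="highs")
    return res.fun / np.sum(weights_i)               # normalize by the total flow

# Toy example: 1-D cluster positions with |difference| as the ground distance.
pos_i, w_i = np.array([0.0, 1.0, 2.0]), np.array([0.5, 0.3, 0.2])
pos_j, w_j = np.array([0.0, 2.0]), np.array([0.4, 0.6])
print(emd_lp(w_i, w_j, np.abs(pos_i[:, None] - pos_j[None, :])))   # 0.5
```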

Page 29:

Various properties of the metrics

K-S, CvM and WMV are only defined for marginal distributions

Lp, WMV, K-S, CvM and, under constraints, EMD all obey the triangle inequality

WMV is particularly fast because the calculation is simple and the values can be pre-computed offline

EMD is the most computationally expensive

Page 30:

Key Components for Good Comparison

Meaningful quality measure
– Subdivision into various tasks/applications (classification, retrieval and segmentation)

A wide range of parameters should be measured

An uncontroversial “ground truth” should be established

Page 31:

Data Set: Color

Randomly chose 94 images from a set of 2000
– The 94 images represent separate classes

Randomly select disjoint sets of pixels from each image
– Sample sizes of 4, 8, 16, 32, 64 pixels
– 16 disjoint samples per sample size per image

Page 32:

Data Set: Texture

Brodatz album
– A collection of a wide range of textures (e.g. cork, lawn, straw, pebbles, sand)

Each image is considered a class (as in the color case)

Extract sets of 16 non-overlapping blocks
– Sizes 8x8, 16x16, …, 256x256

Page 33:

Setup: Classification

A k-nearest-neighbor classifier is used
– Nearest-neighbor classification: given a collection of labeled points S and a query point q, which point belonging to S is closest to q?
– k-nearest is a majority vote of the k closest points
– k = 1, 3, 5 and 7

Average misclassification rate (in percent) using leave-one-out; a sketch of this protocol follows
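A compact sketch of this protocol with a pluggable dissimilarity function; the helper name is illustrative.

```python
import numpy as np

def loo_knn_error(histograms, labels, dissimilarity, k=3):
    """Leave-one-out k-nearest-neighbor misclassification rate.

    histograms    -- list of histograms, one per sample
    labels        -- class label for each sample
    dissimilarity -- any D(h_i, h_j) function, e.g. one of the measures above
    """
    n, errors = len(histograms), 0
    for q in range(n):
        d = np.array([dissimilarity(histograms[q], histograms[s]) if s != q else np.inf
                      for s in range(n)])
        votes = [labels[s] for s in np.argsort(d)[:k]]
        predicted = max(set(votes), key=votes.count)   # majority vote among the k nearest
        errors += int(predicted != labels[q])
    return errors / n
```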

Page 34:

Setup: Classification (cont.)

Bins: {4, 8, 16, 32, 64, 128, 256}

In the texture case, three filter banks of 12, 24 and 40 filters were used

1000 CPU hours of computation

Page 35:

Results: Classification, color data set

Page 36:

Results: Classification, texture data set

Page 37:

Results: Classification

For small sample sizes, the WMV measure performs best in the texture case
– WMV only estimates means and variances
– Less sensitive to sampling noise

EMD also performs well for small sample sizes
– Local binning provides additional information

For large sample sizes, the χ² test performs best

Page 38:

Results: Classification (cont.)

For texture classification, marginal distributions do better than multidimensional distributions except for very large sample sizes (256x256)
– Binning is not well adapted to the data since it is fixed across all 94 classes
– EMD, which uses local adaptation, does much better

For multidimensional histograms, the more bins the better the performance

For texture, 12 filters is usually enough

Page 39:

Setup: Image Retrieval

Vary the sample size

Vary the number of images retrieved

Performance is measured as precision (i.e. the percentage of the retrieved images that are correct) vs. the number of images retrieved; a minimal sketch of this measurement follows
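A minimal sketch of the precision measurement for a single query; the helper name is illustrative.

```python
import numpy as np

def retrieval_precision(query, histograms, labels, dissimilarity, n_retrieved):
    """Precision for one query: the fraction of the n_retrieved nearest images
    that share the query's class."""
    d = np.array([dissimilarity(histograms[query], h) for h in histograms])
    d[query] = np.inf                                  # never retrieve the query itself
    nearest = np.argsort(d)[:n_retrieved]
    return float(np.mean([labels[i] == labels[query] for i in nearest]))
```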

Page 40:

Results: Image Retrieval

Page 41:

Results: Image Retrieval (cont.)

Similar to classification:
– EMD, WMV, CvM and K-S performed well for small sample sizes
– JD, χ² and KL perform better for larger sample sizes

Page 42:

Setup: Segmentation

100 images

Each image consists of 5 different Brodatz textures

For the multivariate histograms, the bins are adapted to the specific image

Page 43:

Setup: Segmentation (cont.)

The image is divided into 16,384 sites (a 128 x 128 grid)

A histogram is calculated for each site

Each site histogram is then compared with 80 randomly selected sites

Image sites with high average similarity are then grouped (a sketch of the comparison step follows)
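The grouping itself is a clustering step not reproduced here; the sketch below only computes each site's dissimilarities to the random reference subset.

```python
import numpy as np

def site_dissimilarity_profile(site_hists, dissimilarity, n_reference=80, seed=0):
    """Dissimilarity of every site histogram to a fixed random subset of reference sites.
    Sites with similar profiles (high average similarity to the same references) are
    candidates for being grouped into one segment."""
    rng = np.random.default_rng(seed)
    refs = rng.choice(len(site_hists), size=n_reference, replace=False)
    return np.array([[dissimilarity(h, site_hists[r]) for r in refs]
                     for h in site_hists])             # shape: (n_sites, n_reference)
```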

Page 44:

Results: Segmentation

Page 45:

Results: Segmentation (cont.)

Measure          Median    20% Quantile
L1 marginal       8.2%      12%
χ² marginal       8.1%      13%
JD marginal       8.1%      12%
K-S marginal     10.8%      20%
CvM marginal     10.9%      22%
L1 full           6.8%       9%
χ² full           6.6%      10%
JD full           6.8%      10%

Page 46:

Results: Segmentation (cont.)

Binning can be adapted to the image
– Increased accuracy in representing multidimensional distributions
– Adaptive multivariate outperforms marginal

The best results were obtained by χ²

EMD suffers from high computational complexity

Page 47:

Conclusions

No measure is best overall

Marginal histograms and aggregate measures are best for large feature spaces

Multivariate histograms perform well with large sample sizes

EMD performs generally well for both classification and retrieval, but with a high computational cost

χ² is a good overall metric (particularly for larger sample sizes)