
Object Segmentation

Presented by

Sherin Aly


What is a ‘Good Segmentation’?

http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html

Learning a classification model for segmentation

Xiaofeng Ren and Jitendra Malik


Methodology

• Two-class classification model

• Over-segmentation as preprocessing

• They use classical Gestalt cues: contour, texture, brightness, and continuation

• A linear classifier is used for training

Good vs. Bad Segmentation

a) Image from the Corel Imagebase

b) The same image superimposed with a human-marked segmentation

c) The same image with a bad segmentation

How do we distinguish good segmentations from bad segmentations?

How?

• Use classical Gestalt cues: proximity, similarity, and good continuation

• Instead of ad-hoc decisions about how to combine features

Gestalt Principles of Grouping


http://allpsych.com/psychology101/perception.html

In order to interpret what we receive through our senses, we attempt to organize this information into certain groups.

Methodology

• Preprocessing

• Feature extraction

• Feature evaluation

• Training

• Optimization

• Find a good segmentation

Preprocessing


Superpixel map, K = 200

Reconstruction of human segmentation from Superpixels

A contour-based measure is used to quantify this approximation.

Superpixels should be:
• Local
• Coherent
• Structure-preserving (contour, texture)

The percentage of human-marked boundaries covered by the superpixel maps is measured at tolerances of 1, 2, and 3 (see the sketch below).
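A minimal sketch of this coverage measure, assuming a SLIC oversegmentation as a convenient stand-in for the Normalized-Cuts superpixels used in the paper; the K = 200 setting follows the slide, while the pixel tolerances, file names, and function names are illustrative:

```python
# Sketch: fraction of human-marked boundary pixels that lie within a small
# tolerance of the nearest superpixel boundary.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage import io, segmentation

def boundary_coverage(image, human_boundary, n_superpixels=200, tolerances=(1, 2, 3)):
    """human_boundary: boolean map of human-marked boundary pixels."""
    labels = segmentation.slic(image, n_segments=n_superpixels, compactness=10)
    sp_boundary = segmentation.find_boundaries(labels, mode='thick')
    # Distance from every pixel to the nearest superpixel-boundary pixel.
    dist_to_sp = distance_transform_edt(~sp_boundary)
    return {t: float((dist_to_sp[human_boundary] <= t).mean()) for t in tolerances}

# Example (hypothetical files):
# image = io.imread("image.jpg")
# human_boundary = io.imread("human_boundary.png") > 0
# print(boundary_coverage(image, human_boundary))
```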

Feature Extraction

1. inter-region texture similarity

2. intra-region texture similarity

3. inter-region brightness similarity

4. intra-region brightness similarity

5. inter-region contour energy

6. intra-region contour energy

7. curvilinear continuity
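The texture and brightness cues above compare histograms between and within regions. A minimal sketch, assuming a χ²-style comparison on raw intensity histograms (the paper's texton-based texture cues work analogously); all function and variable names here are illustrative:

```python
# Sketch: inter-/intra-region similarity cues as chi-square distances between
# normalized histograms of pixel values (brightness shown; texture cues would
# use texton labels instead of raw intensities).
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def histogram(values, bins=32, value_range=(0.0, 1.0)):
    h, _ = np.histogram(values, bins=bins, range=value_range)
    return h / max(h.sum(), 1)

def inter_region_similarity(region_a_vals, region_b_vals):
    """Dissimilarity between two neighbouring regions (higher = more different)."""
    return chi_square(histogram(region_a_vals), histogram(region_b_vals))

def intra_region_similarity(superpixel_vals, region_vals):
    """Dissimilarity between a superpixel and the larger region containing it
    (low values suggest a homogeneous, self-consistent region)."""
    return chi_square(histogram(superpixel_vals), histogram(region_vals))
```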


Power of Gestalt cues


Training the classifier

• A simple logistic regression classifier is used.

Empirical distribution of pairs of features

Precision is the fraction of detections that are true positives. Recall is the fraction of true positives that are detected.
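A minimal sketch of such a classifier, assuming a precomputed feature matrix (one row of Gestalt-cue features per candidate segment) and binary good/bad labels; the synthetic data and variable names are illustrative, not the authors' code:

```python
# Sketch: train a simple logistic regression to separate good from bad
# segments and report precision/recall as defined above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

def train_segment_classifier(features, labels):
    """features: (n_segments, n_cues) array; labels: 1 = good, 0 = bad."""
    X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.25, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    return clf, precision_score(y_te, pred), recall_score(y_te, pred)

# Example on synthetic data (a stand-in for the seven Gestalt-cue features):
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 7))
y = (X[:, 0] + 0.5 * X[:, 4] > 0).astype(int)   # toy labeling rule for illustration
clf, prec, rec = train_segment_classifier(X, y)
print(f"precision={prec:.2f} recall={rec:.2f}")
```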

Conclusion

• Their simple linear classifier achieved promising results on a variety of natural images.

• The boundary contour is the most informative grouping cue, and it is in essence discriminative.

Pros & Cons

• Pros
  – The larger spatial support that superpixels provide allows more global features to be computed than on pixels alone
  – The use of superpixels improves computational efficiency
  – The superpixel technique is widely applicable

• Cons
  – Might fall into local minima

Combining Top-down and Bottom-up Segmentation

Eran Borenstein

Eitan Sharon

Shimon Ullman


Motivation

• Bottom-up segmentation
  – Relies on the continuity principle
  – Captures image properties: texture, grey-level uniformity, and contour continuity
  – Segments the image based on similarities between image regions

• How can we capture prior knowledge of a specific object (class)?
  – Answer: top-down segmentation, which uses prior knowledge about the object

Credit: Joseph Djugash

Bottom-Up Segmentation

Slides from Eitan Sharon, “Segmentation and Boundary Detection Using Multiscale Intensity Measurements”.

Credit: Joseph Djugash

Normalized-Cut Measure

Slides from Eitan Sharon, “Segmentation and Boundary Detection Using Multiscale Intensity Measurements”.

Credit: Joseph Djugash

Top-Down Approach: Input Fragments

Matching Cover

Credit: Joseph Djugash

Another step towards the middle

Bottom-Up

Top-Down

Credit: Joseph Djugash

Some Definitions & Constraints

• Measure of saliency h(Γi), with hi ∈ [0, 1)

• A configuration vector s contains the labels si ∈ {+1, -1} of all segments Si in the segmentation tree

• The label si can differ from its parent's label

• Cost function for a given s: a top-down term plus a bottom-up term, where the bottom-up term defines the weighted edge between Si and its parent

Classification Costs

• The terminal segments of the tree determine the final classification

• The top-down term measures how well the terminal-segment labels agree with the top-down classification

• The saliency of a segment should restrict its label (relative to its parent's label)

• The bottom-up term weights a label change between a segment and its parent by the segment's saliency
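A schematic of the kind of cost the slides describe, not the paper's exact formula: terminal segments are scored against the top-down classification, and a saliency-weighted penalty is paid when a segment's label differs from its parent's (here assumed cheaper for salient segments, whose boundaries align with real image discontinuities). The tree representation and the weighting constant lam are assumptions for illustration:

```python
# Schematic cost over a segmentation tree: a top-down term on terminal
# segments plus a saliency-weighted bottom-up term on parent/child label changes.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Segment:
    label: int                       # s_i in {+1, -1}: figure or ground
    saliency: float                  # h_i in [0, 1)
    topdown_vote: float = 0.0        # average of T(x, y) over the segment's pixels
    children: List["Segment"] = field(default_factory=list)
    parent: Optional["Segment"] = None

def cost(root: Segment, lam: float = 1.0) -> float:
    total = 0.0
    stack = [root]
    while stack:
        seg = stack.pop()
        if not seg.children:                      # terminal segment: top-down term
            total += abs(seg.label - seg.topdown_vote)
        if seg.parent is not None and seg.label != seg.parent.label:
            # Bottom-up term (schematic): flipping away from the parent is
            # assumed cheap for salient segments, expensive for non-salient ones.
            total += lam * (1.0 - seg.saliency)
        stack.extend(seg.children)
    return total
```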

Confidence Map

• Evaluating the confidence of a region

• Causes of classification uncertainty
  – Bottom-up uncertainty: regions where there is no salient bottom-up segment matching the top-down classification
  – Top-down uncertainty: regions where the top-down classification is ambiguous (regions with highly variable shape)

• The type of uncertainty and the confidence values can be used to select appropriate additional processing to improve segmentation

Results

• The average distance between a given segmentation contour and a benchmark contour is calculated.

• All contour points with a confidence measure below 0.1 are removed from the average.

• The resulting confidence map efficiently separated regions of high and low consistency.

• The combined scheme improved the top-down contour by over 67% on average.

• This improvement was even larger in object parts with highly variable shape.

Results (cont.)

• The top-down process may produce a figure-ground approximation that does not follow the image discontinuities.

• Salient bottom-up segments can correct these errors and delineate precise region boundaries.

Bottom-up

The initial classification map T(x, y)

Results III (cont.)

The top-down process completely misses a part of the object. The confidence map may be helpful in identifying such cases.

Results III (cont.)

Bottom-up segmentation may be insufficient for detecting the figure-ground contour, and the top-down process completes the missing information.

Results III (cont.)

Salient bottom-up segments can correct these errors and delineate precise region boundaries.

Conclusion

• Combines the merits of bottom-up and top-down segmentation
• Provides a reliable confidence map
• Takes into account discontinuities at all scales

But:
• If the object is assigned a given category, the specific features cannot be adopted for other categories

Constrained Parametric Min-Cuts for Automatic Object Segmentation

Joao Carreira

Cristian Sminchisescu

Traditional Segmentation: Finding Homogeneous Regions


gPb-owt-ucm: P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. PAMI 2010.

Conventional Bottom-up Segmentation

Proposed approach

1. Split multiple times

2. Retain object-like segmentations

Bottom-up Object Segmentation

Credit: J. Carreira

High redundancy

Bottom-up Object Segmentation


Credit: J. Carreira

A single multi-region segmentation or a hierarchy

Proposed Bottom-up Object Segmentation


Credit: J. Carreira

single-shot multi-region segmentation

robust set of overlapping figure-ground segmentations

Segments with object-like regularities

Superpixels

Constrained Parametric Min-Cuts for Automatic Object Segmentation

Credit: J. Carreira

parametric max-flow solver

Figure-ground segmentation by growing regions around seeds

Ranking

Constrained Parametric Min-Cuts for Automatic Object Segmentation

Credit: J. Carreira

Initialization

• Foreground
  – Regular 5×5 grid geometry
  – Centroids of large N-Cuts regions
  – Centroids of superpixels closest to grid positions

• Background
  – Full image boundary
  – Horizontal boundaries
  – Vertical boundaries
  – All boundaries excluding the bottom one

Performance is broadly invariant to different initializations (see the sketch below).
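A minimal sketch of the simplest of these initializations (regular-grid foreground seeds plus image-border background seeds); the 5×5 grid and border options follow the slide, everything else is illustrative:

```python
# Sketch: foreground seeds on a regular 5x5 grid and background seeds on the
# image border (one of several initialization schemes listed above).
import numpy as np

def grid_foreground_seeds(height, width, grid=5):
    ys = np.linspace(0, height - 1, grid + 2, dtype=int)[1:-1]
    xs = np.linspace(0, width - 1, grid + 2, dtype=int)[1:-1]
    return {(int(y), int(x)) for y in ys for x in xs}

def border_background_seeds(height, width, exclude_bottom=False):
    seeds = {(0, x) for x in range(width)} | {(y, 0) for y in range(height)} \
          | {(y, width - 1) for y in range(height)}
    if not exclude_bottom:
        seeds |= {(height - 1, x) for x in range(width)}
    return seeds

fg = grid_foreground_seeds(240, 320)    # 25 foreground seed pixels
bg = border_background_seeds(240, 320)  # full image boundary as background
```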

Generating a Segment Pool: Constrained Parametric Min-Cuts

[Figure: min-cuts between "object" and "background" hard-constraint seeds, shown for a range of foreground bias values]

Can solve for all values of the object bias in the same time complexity as solving a single min-cut, using a parametric max-flow solver.

Credit: J. Carreira
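A minimal sketch of a constrained figure-ground min-cut in this spirit, using networkx's max-flow/min-cut routine; the grid graph, Gaussian edge weights, and the uniform foreground bias are illustrative assumptions, and sweeping the bias here merely emulates what a true parametric max-flow solver obtains in a single pass:

```python
# Sketch: figure-ground min-cut with hard seed constraints and a uniform
# foreground bias; larger biases grow larger segments around the seeds.
import numpy as np
import networkx as nx

def constrained_min_cut(image, fg_seeds, bg_seeds, fg_bias, sigma=0.1):
    h, w = image.shape
    G = nx.DiGraph()
    src, sink = "FG", "BG"
    node = lambda y, x: y * w + x

    for y in range(h):
        for x in range(w):
            n = node(y, x)
            if (y, x) in fg_seeds:                       # hard foreground constraint
                G.add_edge(src, n, capacity=float("inf"))
            elif (y, x) in bg_seeds:                     # hard background constraint
                G.add_edge(n, sink, capacity=float("inf"))
            else:                                        # uniform foreground bias
                G.add_edge(src, n, capacity=fg_bias)
            for dy, dx in ((0, 1), (1, 0)):              # 4-connected pairwise terms
                yy, xx = y + dy, x + dx
                if yy < h and xx < w:
                    wgt = float(np.exp(-(image[y, x] - image[yy, xx]) ** 2 / (2 * sigma ** 2)))
                    G.add_edge(n, node(yy, xx), capacity=wgt)
                    G.add_edge(node(yy, xx), n, capacity=wgt)

    _, (source_side, _) = nx.minimum_cut(G, src, sink)
    mask = np.zeros((h, w), dtype=bool)
    for n in source_side - {src}:
        mask[n // w, n % w] = True
    return mask

# Sweeping fg_bias yields a nested family of segments around the same seeds:
# masks = [constrained_min_cut(img, fg, bg, b) for b in (0.05, 0.1, 0.2, 0.4)]
```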

Fast Rejection

• Large set of initial segmentations (~5,500)

• Keep the ~2,000 segments with the lowest energy

• Cluster segments based on spatial overlap (at least 0.95)

• Keep the lowest-energy member of each cluster (~154 in PASCAL VOC; see the sketch below)

Credit: SasiKanth Bendapudi, Yogeshwar Nagaraj
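A minimal sketch of this rejection step, assuming boolean segment masks and per-segment energies as inputs; the thresholds mirror the slide, while the greedy clustering itself is an illustrative choice:

```python
# Sketch: keep the lowest-energy segments, then collapse near-duplicates
# (spatial overlap >= 0.95) down to one representative per cluster.
import numpy as np

def overlap(a, b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def fast_rejection(masks, energies, keep=2000, min_overlap=0.95):
    order = np.argsort(energies)[:keep]          # lowest-energy segments first
    survivors = []
    for idx in order:
        # Because segments arrive in energy order, the first member kept from
        # each overlap cluster is automatically its lowest-energy representative.
        if all(overlap(masks[idx], masks[j]) < min_overlap for j in survivors):
            survivors.append(idx)
    return survivors
```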


Constrained Parametric Min-Cuts for Automatic Object Segmentation

Credit: J. Carreira

• Ranks all the sampled object segmentations
• Discards all but a small subset of confident ones

Ranking object hypotheses

Mid-level, category-independent features:
• Boundary – normalized boundary energy
• Region – location, perimeter, area, Euler number, orientation, contrast with background
• Gestalt – convexity, smoothness

Good: low boundary energy, smooth, Euler number = 0

Bad: high boundary energy, non-smooth, high Euler number

Credit: J. Carreira

Segment Ranking

• Model the data using a host of features
  – Graph partition properties
  – Region properties
  – Gestalt properties

• Apply feature normalization

• Train a Random Forests regressor whose target is each segment's largest overlap with a ground-truth segment (see the sketch below)

• Diversify similar rankings using Maximal Marginal Relevance (MMR)
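A minimal sketch of such a ranking regressor, assuming a precomputed feature matrix and, for training images, the best ground-truth overlap of each segment as the regression target; everything here is illustrative rather than the authors' implementation:

```python
# Sketch: regress each segment's quality (best overlap with ground truth)
# from its features, then rank new segments by predicted quality.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_ranker(features, target_overlaps):
    """features: (n_segments, n_features); target_overlaps: best IoU per segment."""
    model = make_pipeline(StandardScaler(),
                          RandomForestRegressor(n_estimators=200, random_state=0))
    return model.fit(features, target_overlaps)

def rank_segments(model, features):
    scores = model.predict(features)
    return np.argsort(-scores), scores           # best-first ordering
```

Feature normalization is folded in via the scaler; the MMR diversification step is sketched further below.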

Graph Partition Properties

• Cut – Sum of affinities along segment boundary

• Ratio Cut – Sum along boundary divided by the number

• Normalized Cut – Sum of cut and affinity in foreground and background

• Unbalanced N-cut – N-cut divided by foreground affinity

• Thresholded boundary fraction of a cut
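A minimal sketch of these cut-based quantities computed from a symmetric affinity matrix W and a boolean foreground assignment; the exact normalizations in the paper may differ, so treat this as a schematic:

```python
# Sketch: cut-based features from a symmetric affinity matrix W (n x n)
# and a boolean foreground indicator over the n graph nodes.
import numpy as np

def partition_features(W, fg):
    bg = ~fg
    cross = W[np.ix_(fg, bg)]                    # affinities crossing the boundary
    cut = cross.sum()
    n_cut_edges = np.count_nonzero(cross)
    assoc_fg = W[fg].sum() + 1e-12               # affinity from foreground to all nodes
    assoc_bg = W[bg].sum() + 1e-12
    return {
        "cut": cut,
        "ratio_cut": cut / max(n_cut_edges, 1),  # cut normalized by boundary size
        "normalized_cut": cut / assoc_fg + cut / assoc_bg,
        "unbalanced_ncut": cut / assoc_fg,
    }
```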

Region Properties

• Area
• Perimeter
• Relative centroid
• Bounding box properties
• Fitted ellipse properties
• Eccentricity
• Orientation
• Convex area
• Euler number
• Diameter of the circle with the same area as the segment
• Percentage of the bounding box covered
• Absolute distance to the center of the image
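Most of these quantities can be read off a labeled mask; a minimal sketch using skimage's regionprops (the property names are skimage's, not necessarily the paper's, and a single-component mask is assumed):

```python
# Sketch: region properties for a single boolean segment mask.
import numpy as np
from skimage.measure import label, regionprops

def region_features(mask, image_shape):
    props = regionprops(label(mask.astype(int)))[0]    # assumes one connected region
    cy, cx = props.centroid
    img_cy, img_cx = (image_shape[0] - 1) / 2.0, (image_shape[1] - 1) / 2.0
    return {
        "area": props.area,
        "perimeter": props.perimeter,
        "eccentricity": props.eccentricity,
        "orientation": props.orientation,
        "euler_number": props.euler_number,
        "extent": props.extent,                        # fraction of bounding box covered
        "solidity": props.solidity,                    # area / convex hull area (convexity)
        "equiv_diameter": float(2 * np.sqrt(props.area / np.pi)),  # circle with same area
        "dist_to_center": float(np.hypot(cy - img_cy, cx - img_cx)),
    }
```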

Gestalt Properties

• Inter-region texton similarity

• Intra-region texton similarity

• Inter-region brightness similarity

• Intra-region brightness similarity

• Inter-region contour energy

• Intra-region contour energy

• Curvilinear continuity

• Convexity – Ratio of foreground area to convex hull area

Feature Importance for the Random Forest regressor


How to Model Segment Quality?

Best overlap with a ground truth object computed by intersection-over-union.


Credit: J. Carreira
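A minimal sketch of that overlap measure on boolean masks (variable names are illustrative):

```python
# Sketch: segment quality as the best intersection-over-union against any
# ground-truth object mask in the image.
import numpy as np

def iou(segment, gt):
    inter = np.logical_and(segment, gt).sum()
    union = np.logical_or(segment, gt).sum()
    return inter / union if union else 0.0

def segment_quality(segment, gt_masks):
    return max((iou(segment, gt) for gt in gt_masks), default=0.0)
```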

Diversifying the Ranking

[Figure: best, middle, and worst two hypotheses under the original and diversified rankings]

Segment Ranking using Maximum Marginal Relevance
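A minimal sketch of MMR-style diversification: repeatedly pick the segment with the best trade-off between its predicted score and its maximum overlap with segments already selected. The trade-off weight lam and the use of an IoU overlap matrix are illustrative assumptions:

```python
# Sketch: Maximal Marginal Relevance re-ranking of segment hypotheses.
import numpy as np

def mmr_rank(scores, overlap_matrix, lam=0.7, k=10):
    """scores: predicted quality per segment; overlap_matrix[i, j]: IoU of segments i and j."""
    remaining = list(range(len(scores)))
    selected = [int(np.argmax(scores))]
    remaining.remove(selected[0])
    while remaining and len(selected) < k:
        mmr = [lam * scores[i] - (1 - lam) * max(overlap_matrix[i, j] for j in selected)
               for i in remaining]
        best = remaining[int(np.argmax(mmr))]
        selected.append(best)
        remaining.remove(best)
    return selected
```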


Performance

Credit: SasiKanth Bendapudi, Yogeshwar Nagaraj

Ranking

Credit: J. Carreira

Running Demos

• Methodology employed: K-means clustering using the following feature sets (see the sketch below)
  – Texture
  – RGB
  – Texture + RGB
  – RGB + HSV
  – Texture + Lab + HSV
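A minimal sketch of the simplest of these runs (K-means on per-pixel RGB features); the number of clusters, the file name, and the crude texture proxy (local intensity standard deviation) are illustrative assumptions, not the exact demo code:

```python
# Sketch: K-means segmentation on per-pixel RGB features, with an optional
# crude texture channel (local intensity standard deviation).
import numpy as np
from scipy.ndimage import uniform_filter
from skimage import io
from sklearn.cluster import KMeans

def kmeans_segment(image, n_clusters=3, use_texture=False):
    img = image.astype(float) / 255.0
    feats = [img.reshape(-1, 3)]                       # per-pixel RGB features
    if use_texture:
        gray = img.mean(axis=2)
        local_mean = uniform_filter(gray, size=7)
        local_sq = uniform_filter(gray ** 2, size=7)
        texture = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0))
        feats.append(texture.reshape(-1, 1))
    X = np.hstack(feats)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    return labels.reshape(image.shape[:2])

# Example (hypothetical file):
# seg = kmeans_segment(io.imread("animal_grass.jpg"), n_clusters=3)
```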

Running Demos

• Data set used
  – Microsoft Research Cambridge Object Recognition Image Database, version 1.0
  – Used: 7 classes with 23 images per class
    • Animal-grass
    • Trees-sky-grass
    • Buildings-sky-grass
    • Airplanes-sky-grass
    • Animal-grass
    • Faces-BG
    • Car-wall-ground

Experiment Results

Features            | Texture | Texture + RGB | RGB   | RGB + HSV | Texture + Lab + HSV
Animal-grass        | 72.7%   | 74.1%         | 72.3% | 72.6%     | 74.1%
Trees-sky-grass     | 37.1%   | 37.1%         | 40.7% | 38.2%     | 37.1%
Buildings-sky-grass | 44.6%   | 42.8%         | 51.9% | 45.4%     | 44.7%
Airplanes-sky-grass | 58.8%   | 58.8%         | 54.6% | 59.7%     | 58.7%
Animal-grass        | 64.8%   | 64.8%         | 69.3% | 71%       | 64.9%
Faces-BG            | 100%    | 100%          | 100%  | 100%      | 100%
Car-wall-ground     | 67.2%   | 67.2%         | 68.4% | 64.9%     | 67.2%
Mean                | 63.6%   | 63.5%         | 65.3% | 64.6%     | 63.8%

Experiment Results (running time)

Features                            | Texture   | Texture + RGB | RGB       | RGB + HSV | Texture + Lab + HSV
Elapsed time, one iteration         | 7.42 secs | 12.26 secs    | 1.62 secs | 1.5 secs  | 7.84 secs
Overall elapsed time for experiment | 19.9 mins | 32.9 mins     | 4.4 mins  | 4 mins    | 21 mins

Microsoft Research Cambridge Object Recognition Image Database, version 1.0.


Acknowledgment

• Dr. Devi Parikh

• Dr. Joao Carreira

