
Page 1: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

2009. 8. 21 (Fri)

Young Ki Baik

Computer Vision Lab.

Page 2: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

References

Compact Signatures for High-speed Interest Point Description and Matching, M. Calonder, V. Lepetit, P. Fua, et al. (ICCV 2009, oral)

Keypoint Signatures for Fast Learning and Recognition, M. Calonder, V. Lepetit, P. Fua (ECCV 2008)

Fast Keypoint Recognition Using Random Ferns, M. Ozuysal, M. Calonder, V. Lepetit, P. Fua (PAMI 2009)

Keypoint Recognition Using Randomized Trees, V. Lepetit, P. Fua (PAMI 2006)


Classification

Descriptor

Page 3: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Outline

Previous works
• Randomized tree
• Fern classifiers
• Signatures

Proposed algorithm
• Compact signatures

Experimental results

Conclusion


Page 4: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Problem statement

Classification (or Recognition)
Assumption
• The number of classes = N
• The appearance of an image patch = p

• Obtain new patches from the input image


What is the class of p?

Page 5: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Problem statement

Classification (or Recognition)
Conventional approaches
• Compute the distance from p to every class patch
• Select the nearest-neighbor (NN) class as the solution

[Figure: the sum of squared differences ∑(p − reference)² is computed for each class (e.g. 1, 10, 11) and the class with the smallest distance is chosen as the nearest neighbor.]
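As a concrete illustration, here is a minimal NumPy sketch of this brute-force nearest-neighbor step, assuming every class is represented by a single reference patch (array names and shapes are illustrative, not from the paper):

    import numpy as np

    def nearest_neighbor_class(p, class_patches):
        """Return the index of the class whose reference patch is closest to p in SSD."""
        # class_patches: (N, H, W) array holding one reference patch per class
        # p:             (H, W) query patch
        ssd = np.sum((class_patches.astype(float) - p.astype(float)) ** 2, axis=(1, 2))
        return int(np.argmin(ssd))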

Page 6: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Problem statement

Classification (or Recognition)
Advanced approaches (SIFT, …)
• Compute a descriptor for each patch and match descriptors…
• Robust to view (rotation, scale, affine) and illumination changes

[Figure: descriptors (SIFT, SURF, …) are compared instead of raw patches; computing the descriptors is the bottleneck.]

When descriptors must be computed for many input patches, this step becomes a bottleneck.

Page 7: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Randomized tree


Page 8: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Randomized tree

A simple classifier f
• Randomly select two pixel positions a and b in the image patch
• Apply a simple binary test

• The test compares the intensities of the two pixels around the keypoint:

  f = 1 if I(a) < I(b), and 0 otherwise

I(*): intensity at pixel position *

The test is invariant to illumination changes described by any monotonically increasing function.

[Figure: an image patch with the two randomly chosen pixel positions a and b marked.]
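A minimal sketch of one such binary test, assuming the patch is a 2-D NumPy array of intensities and a, b are (row, col) positions (names are illustrative):

    def binary_test(patch, a, b):
        """Return 1 if the intensity at position a is lower than at position b, else 0.

        Only the ordering of the two intensities matters, so the outcome is
        unchanged by any monotonically increasing remapping of the intensities.
        """
        return 1 if patch[a] < patch[b] else 0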

Page 9: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Randomized tree

Initialization of an RT with classifiers f
• Define the depth d of the tree and build the binary tree…
• Each leaf holds an N-dimensional vector, which denotes the class probabilities.

[Figure: a binary tree of depth 3 with tests f0…f6 at the internal nodes; each test outcome (0 or 1) selects a branch, and every leaf stores an N-dimensional probability vector.]

Number of leaves = 2^d, where d = total depth
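A minimal sketch of this initialization in NumPy, assuming square patches, one random (a, b) test per internal node, and a zeroed N-dimensional histogram per leaf (all names and defaults are illustrative):

    import numpy as np

    def init_tree(depth, n_classes, rng, patch_size=32):
        """Random binary tests for the internal nodes plus 2**depth empty leaf histograms."""
        n_internal = 2 ** depth - 1                                    # nodes f0 .. f_{2^d - 2}
        tests = rng.integers(0, patch_size, size=(n_internal, 2, 2))   # (a, b) positions per node
        leaf_hist = np.zeros((2 ** depth, n_classes))                  # N-dimensional vector per leaf
        return tests, leaf_hist

    # e.g. tests, leaf_hist = init_tree(depth=10, n_classes=500, rng=np.random.default_rng(0))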

Page 10: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Randomized tree

Training an RT
• Generate patches that cover the image variations (scale, rotation, affine transform, …), as sketched below

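A minimal sketch of generating such training views, assuming OpenCV is available and using only rotation and scale for brevity (adding affine or perspective jitter would follow the same pattern; all parameter ranges are illustrative):

    import numpy as np
    import cv2

    def random_views(patch, n_views, rng):
        """Yield randomly rotated and scaled versions of a reference patch."""
        h, w = patch.shape[:2]
        for _ in range(n_views):
            angle = rng.uniform(-180.0, 180.0)      # degrees
            scale = rng.uniform(0.7, 1.3)
            M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, scale)
            yield cv2.warpAffine(patch, M, (w, h))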

Page 11: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Randomized tree

Training an RT
• Run every generated patch through the tree…
• Update the class probabilities stored at the leaves…

[Figure: a training patch descends the tree according to the binary test outcomes, and the N-dimensional histogram of the leaf it reaches is updated.]

Page 14: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Randomized tree

Classification with a trained RT
• Run each input patch through the tree…
• Read the probability vector of the leaf the patch reaches (see the sketch below)…

[Figure: an input patch follows the binary tests down to a leaf, whose stored probability vector is used for classification.]
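A minimal sketch of training and classification with a single tree, reusing the binary_test and init_tree helpers sketched earlier (the leaf indexing and function names are illustrative, not the authors' implementation):

    def descend(patch, tests):
        """Follow the binary tests from the root down to a leaf index in [0, 2**d)."""
        node, bits = 0, 0
        while node < len(tests):
            a, b = tests[node]
            outcome = binary_test(patch, tuple(a), tuple(b))
            bits = (bits << 1) | outcome
            node = 2 * node + 1 + outcome      # child 2i+1 on outcome 0, child 2i+2 on outcome 1
        return bits                            # the d outcomes form the leaf index

    def train_tree(tests, leaf_hist, patches, labels):
        """Drop every generated patch down the tree and update its leaf histogram."""
        for p, c in zip(patches, labels):
            leaf_hist[descend(p, tests), c] += 1

    def classify_tree(patch, tests, leaf_hist):
        """Return the (normalized) class probabilities stored at the reached leaf."""
        hist = leaf_hist[descend(patch, tests)]
        return hist / max(hist.sum(), 1.0)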

Page 15: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Randomized tree

Random forest
• Multiple RTs (a random forest, RF) are used for robustness.


Page 16: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Randomized tree

Random forest
• The final probability is the sum of the probabilities returned by each RT.

[Figure: the probability vectors from the individual trees are added to give the final probability.]
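A minimal sketch of the forest-level combination, assuming each tree is the (tests, leaf_hist) pair from the sketch above:

    import numpy as np

    def classify_forest(patch, forest):
        """Sum the per-tree class probabilities and pick the strongest class."""
        total = sum(classify_tree(patch, tests, leaf_hist) for tests, leaf_hist in forest)
        return int(np.argmax(total)), total / max(total.sum(), 1e-9)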

Page 17: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Randomized tree

Pros
• Easily handles multi-class problems
• Easily covers large perspective and scale variations
• Classifier training is time consuming, but recognition is very fast and robust

Cons
• The memory requirement is high


Page 18: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Fern classifier


Page 19: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Fern classifier

Randomized tree

[Figure: a standard randomized tree of depth 3 with a distinct test f0…f6 at every internal node.]

Page 20: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Fern classifier

Modified randomized tree
• At each depth, the same test f is used for every node

[Figure: a depth-3 tree in which all nodes at the same depth share one test (f0 at depth 1, f1 at depth 2, f2 at depth 3).]

Page 21: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Fern classifier

Fern classifier (randomized list)
• The modified RT and the randomized list (fern) are equivalent…

[Figure: the fern applies the tests f0, f1, f2 in sequence (depth 1…3); the resulting bit string indexes one of the 2^d rows of a table of N-class probabilities.]
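A minimal sketch of a fern viewed this way: d binary tests whose outcomes are concatenated into an index into a 2^d x N table (reusing the binary_test helper; names are illustrative):

    import numpy as np

    def fern_index(patch, tests):
        """Concatenate the d binary test outcomes into an integer in [0, 2**d)."""
        index = 0
        for a, b in tests:                      # one (a, b) pair per depth
            index = (index << 1) | binary_test(patch, tuple(a), tuple(b))
        return index

    def init_fern(depth, n_classes, rng, patch_size=32):
        """d random tests plus a zeroed 2**depth x N table of class counts."""
        tests = rng.integers(0, patch_size, size=(depth, 2, 2))
        table = np.zeros((2 ** depth, n_classes))
        return tests, table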

Page 22: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Fern classifier

Fern classifier (Randomized list)

[Figure: side-by-side comparison of the tree and the list representations.]

Page 23: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Fern classifier

Multiple fern classifiers (training)
• Initialize each fern classifier with depth d and N classes


Page 24: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Fern classifier

Multiple fern classifiers (training)
• Update the probabilities of every fern classifier for all reference images…

[Figure: each fern evaluates its binary tests on a training patch; the resulting bit strings (here giving leaf indices 6, 4 and 7) select the table rows whose class counts are updated.]
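A minimal sketch of this training loop over several ferns, reusing fern_index and init_fern (the fern count and depth in the usage comment are illustrative):

    import numpy as np

    # e.g. rng = np.random.default_rng(0)
    #      ferns = [init_fern(depth=10, n_classes=500, rng=rng) for _ in range(30)]

    def train_ferns(ferns, patches, labels):
        """For every fern, increment the class count of the table row each patch indexes."""
        for tests, table in ferns:
            for p, c in zip(patches, labels):
                table[fern_index(p, tests), c] += 1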

Page 25: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Fern classifier

Multiple fern classifiers (training)
• The trained fern classifiers are now established…


Page 26: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Fern classifier

Multiple fern classifiers (recognition)
• Simply run a new image patch through every fern and obtain the final probability.

[Figure: the probability vectors selected by the individual ferns are added to give the final class probability.]
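A minimal sketch of this recognition step; following the slides, the per-fern responses are simply added (the Ferns paper itself combines them multiplicatively, so treat this as illustrative):

    import numpy as np

    def classify_ferns(patch, ferns):
        """Add the table rows selected by each fern and pick the strongest class."""
        total = sum(table[fern_index(patch, tests)] for tests, table in ferns)
        return int(np.argmax(total))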

Page 27: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Fern classifier

Pros
• Same as the pros of the randomized tree
• Only a small number of tests f is used
• Fast and easy to implement relative to an RT

Cons
• Still takes a lot of memory…


Page 28: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Signature


Page 29: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Signature

Assumption / Definition
• A classifier C is applied to an image patch p


Page 30: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Signature

Assumption / Definition
• If the classifier C has been trained well, the response of C to a deformed version of a patch is almost the same as its response to the original patch.


Page 31: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Signature

Assumption
• The input patch q is not a member of any trained class
Definition of the signature…
• The signature of q is the response of the classifier C, i.e. C(q).


Page 32: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Signature

Assumption
• If an input patch q' comes from the same keypoint as q, then the signatures of q and q' are almost the same…
• So C(·) can generate the signature of q
• The signature of q (= C(q)) can serve as a descriptor.


Page 33: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Signature

Sparse signature
• q is not a member of the trained classes K, so the response C(q) cannot be an exact class probability
• The signature values are therefore thresholded with a user-defined threshold, giving the sparse signature S(q)
• The fern classifier is used as the descriptor generator (playing the role of SIFT)
• The sparse signature of q is the descriptor of q (see the sketch below)
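A minimal sketch of forming such a sparse signature by thresholding the summed fern response (the normalization and default threshold are assumptions, not values from the paper):

    import numpy as np

    def sparse_signature(patch, ferns, threshold=0.05):
        """Sum the fern responses and zero out entries below a user-defined threshold."""
        response = sum(table[fern_index(patch, tests)] for tests, table in ferns)
        response = response / max(response.sum(), 1e-9)      # pseudo-probability over the N classes
        return np.where(response >= threshold, response, 0.0)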


Page 34: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Compact Signature


Page 35: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Compact Signature

Purpose
• Reduce the memory size of the classifiers without any loss in matching rate!

[Figure: signature generation: the responses of ferns F1, F2, …, FJ (each a 2^d × N table) are summed and passed through a threshold th() to produce the signature.]

Page 36: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Compact Signature

Approach #1 (dimension reduction)
• To reduce the memory size, a Random Ortho-Projection (ROP) matrix is used
Definition of the compact signature
• The ROP is applied to all leaf probability vectors…

[Figure: the 2^d × N leaf table of a fern F1 is projected down to a 2^d × M table, with N >> M.]
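A minimal sketch of compressing the leaf tables with a random ortho-projection, building the projection from the QR decomposition of a Gaussian matrix (an assumption about how such a matrix could be formed, not necessarily the paper's construction):

    import numpy as np

    def random_ortho_projection(n, m, rng):
        """Return an m x n matrix with orthonormal rows (M << N)."""
        q, _ = np.linalg.qr(rng.standard_normal((n, m)))    # n x m, orthonormal columns
        return q.T                                          # m x n, orthonormal rows

    def compress_ferns(ferns, proj):
        """Project every 2**d x N leaf table down to 2**d x M."""
        return [(tests, table @ proj.T) for tests, table in ferns]

The compact signature of a patch is then formed exactly as before, only from the compressed M-dimensional rows instead of the original N-dimensional ones.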

Page 37: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Compact Signature

Approach #1 (dimension reduction)
• No loss in matching rate, while the memory size of the classifiers is reduced (N >> M).

[Figure: compact signature generation: the M-dimensional responses of ferns F1, F2, …, FJ are summed to produce the compact signature.]

Page 38: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Compact Signature

Approach #2 (quantization)
• The computation can be further streamlined by quantizing the compressed leaf vectors
• 1 byte per element is used instead of the 4 bytes required by floats.

[Figure: quantization: each 2^d × M table of float values (P_float) is converted to 1-byte values (P_byte).]
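A minimal sketch of the 1-byte quantization, assuming a simple per-table linear mapping to the 0-255 range (the actual scaling scheme in the paper may differ):

    import numpy as np

    def quantize_table(table):
        """Map a float table linearly onto uint8 values, keeping the offset and scale to undo it."""
        lo, hi = float(table.min()), float(table.max())
        scale = (hi - lo) / 255.0 or 1.0                       # avoid division by zero for flat tables
        q = np.round((table - lo) / scale).astype(np.uint8)    # 1 byte per element instead of 4
        return q, lo, scale

    def dequantize_table(q, lo, scale):
        """Approximate reconstruction of the original float table."""
        return q.astype(np.float32) * scale + lo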

Page 39: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Compact Signature

Total complexity of the compact signature classifier (or descriptor generator)

• 50 times less memory

Page 40: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Experimental results

Comparison on 4 image datasets
• Wall, Light, JPEG and Fountain
• http://www.robots.ox.ac.uk/~vgg/research/affine

Descriptors
• SURF-64 (Speeded-Up Robust Features)
• Sparse signatures
• Compact signatures-176
• Compact signatures-88

Number of classes N = 500

Page 41: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Experimental results

Matching performance

[Figure: matching-performance plots (m|n) for each descriptor between test images and the reference image.]

Page 42: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Experimental results

CPU time and Memory Consumption


Page 43: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Conclusion

Contribution
A descriptor generator is proposed, built on the well-known fern classifier.
• Descriptors can be computed rapidly.
The classifier size is extremely reduced:
• almost 50 times less memory than the sparse signatures…
• by using dimension reduction (ROP) and a simple quantization technique.

Discussion
A well-trained classifier is difficult to obtain in normal situations.


Page 44: 2009. 8. 21 (Fri) Young Ki Baik Computer Vision Lab

Q & A
