CS262: Computer Vision, Lect 09: SIFT Descriptors. John Magee, 13 February 2017. Slides Courtesy of Diane H. Theriault.


Page 1: CS262: Computer Vision Lect 09: SIFT Descriptors

CS262: Computer Vision, Lect 09: SIFT Descriptors

John Magee, 13 February 2017

Slides Courtesy of Diane H. Theriault

Page 2: CS262: Computer Vision Lect 09: SIFT Descriptors

Questions of the Day:

• How can we find matching points in images?
• How can we use matching points to recognize objects?

Page 3: CS262: Computer Vision Lect 09: SIFT Descriptors

SIFT

• Find repeatable, scale-invariant points in images
• Compute something about them
• Use the thing you computed to perform matching

• A lot of engineering decisions

• “Distinctive Image Features from Scale-Invariant Keypoints” by David Lowe

• Patented!

Page 4: CS262: Computer Vision Lect 09: SIFT Descriptors

How to find the same cat?

• Imagine that we had a library of cats

• How could we find another picture of the same cat in the library?

• Look for the markings?

Page 5: CS262: Computer Vision Lect 09: SIFT Descriptors

Scale Space

• Image convolved with Gaussians of different widths
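
A minimal sketch of building such a scale space, assuming a grayscale image stored as a 2-D NumPy array; the sigma values are illustrative choices, not the lecture's.

```python
# Gaussian scale space: the image convolved with Gaussians of different widths.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(image, sigmas=(1.0, 1.6, 2.6, 4.0, 6.4)):
    """Return a list of progressively more blurred copies of the image."""
    return [gaussian_filter(image.astype(float), sigma=s) for s in sigmas]
```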

Page 6: CS262: Computer Vision Lect 09: SIFT Descriptors

Keypoints with Image Filtering

• Perform image filtering by convolving an image with a “filter” / “mask” / “kernel” to obtain a “result” / “response”

• The value of the result will be positive in regions of the image that “look like” the filter

• What would a “dot” filter look like?

[Figure: an example image and a “dot” filter]
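
A sketch of filtering with a small “dot”-like (center-surround) kernel, assuming scipy; the kernel values are illustrative.

```python
# The response is large where the image locally "looks like" a bright dot.
import numpy as np
from scipy.ndimage import convolve

dot_filter = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=float)

def dot_response(image):
    return convolve(image.astype(float), dot_filter, mode="nearest")
```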

Page 7: CS262: Computer Vision Lect 09: SIFT Descriptors

Laplacian of a Gaussian

• Sum of spatial second derivatives

Page 8: CS262: Computer Vision Lect 09: SIFT Descriptors

Difference of Gaussians

• Approximation of the Laplacian of a Gaussian
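
A sketch of the approximation, assuming scipy; the sigma and the ratio k between successive scales are illustrative, and scipy's gaussian_laplace is shown only for comparison.

```python
# Difference of two Gaussian blurs approximates the Laplacian of Gaussian.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

def difference_of_gaussians(image, sigma=1.6, k=1.26):
    image = image.astype(float)
    return gaussian_filter(image, k * sigma) - gaussian_filter(image, sigma)

def laplacian_of_gaussian(image, sigma=1.6):
    return gaussian_laplace(image.astype(float), sigma)
```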

Page 9: CS262: Computer Vision Lect 09: SIFT Descriptors

Scale-space Extrema

• “Extremum” = local minimum or maximum

• Check the 8 neighbors at the same scale

• Check the 9 neighbors at each of the scales above and below (26 neighbors total)
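
A sketch of the 26-neighbor test, assuming `dog` is a list of same-shape DoG response arrays at increasing scales and (s, y, x) is an interior point; ties count as extrema in this simplified check.

```python
import numpy as np

def is_extremum(dog, s, y, x):
    """True if dog[s][y, x] is a local min or max over its 3x3x3 neighborhood."""
    center = dog[s][y, x]
    patch = np.stack([d[y-1:y+2, x-1:x+2] for d in dog[s-1:s+2]])
    return center >= patch.max() or center <= patch.min()
```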

Page 10: CS262: Computer Vision Lect 09: SIFT Descriptors

Scale-space Extrema

• Find locations and scales where the response to the LoG filter is a local extremum

Page 11: CS262: Computer Vision Lect 09: SIFT Descriptors

Removing Low Contrast Points

• Threshold on the magnitude of the response to the LoG filter

• Threshold empirically determined
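
A minimal contrast test, assuming `response` is the LoG/DoG value at a candidate keypoint; the 0.03 value (for images scaled to [0, 1]) is only an illustrative, empirically chosen threshold.

```python
CONTRAST_THRESHOLD = 0.03  # illustrative value for images in [0, 1]

def passes_contrast(response, threshold=CONTRAST_THRESHOLD):
    """Keep only keypoints whose filter response magnitude is large enough."""
    return abs(response) >= threshold
```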

Page 12: CS262: Computer Vision Lect 09: SIFT Descriptors

Removing Points Along Edges

• In 1D: first derivative shows how the function is changing (velocity)

• In 1D: second derivative shows how the change is changing (acceleration)

• In 2D: first derivative leads to a gradient vector, which has a magnitude and direction

• In 2D: second derivatives lead to a matrix, which gives information about the rate and orientation of the change in the gradient

Page 13: CS262: Computer Vision Lect 09: SIFT Descriptors

Removing Points Along Edges

• Hessian is a matrix of 2nd derivatives
• Eigenvectors tell you the orientation of the curvature
• Eigenvalues tell you the magnitude
• Ratio of eigenvalues tells you the extent to which one orientation is dominant

[Figures: gradient of a Gaussian; Hessian of a Gaussian]
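
A sketch of the edge test: the eigenvalue ratio of the 2x2 Hessian at a keypoint can be checked via its trace and determinant, with no explicit eigendecomposition. The finite differences and the ratio limit r = 10 are the commonly cited choices, used here for illustration.

```python
import numpy as np

def is_edge_like(dog_img, y, x, r=10.0):
    """Reject keypoints where one curvature direction strongly dominates."""
    d = dog_img
    dxx = d[y, x+1] - 2*d[y, x] + d[y, x-1]
    dyy = d[y+1, x] - 2*d[y, x] + d[y-1, x]
    dxy = (d[y+1, x+1] - d[y+1, x-1] - d[y-1, x+1] + d[y-1, x-1]) / 4.0
    tr, det = dxx + dyy, dxx*dyy - dxy**2
    if det <= 0:                            # eigenvalues of opposite sign: reject
        return True
    return tr**2 / det >= (r + 1)**2 / r    # one eigenvalue dominates: edge-like
```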

Page 14: CS262: Computer Vision Lect 09: SIFT Descriptors

Attributes of a Keypoint

• Position (x,y) – location in the image

• Scale – the scale where this point is a LoG extremum

• Orientation?
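
A minimal container for these attributes, with illustrative field names (not from the lecture):

```python
from dataclasses import dataclass

@dataclass
class Keypoint:
    x: float            # position in the image
    y: float
    scale: float        # scale where this point is a LoG extremum
    angle: float = 0.0  # dominant orientation in degrees, assigned later
```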

Page 15: CS262: Computer Vision Lect 09: SIFT Descriptors

Gradient Orientation Histogram

• Make a histogram over gradient orientation

• Weighted by gradient magnitude

• Weighted by distance to the keypoint

• Contributions spread across bins with linear interpolation
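
A minimal weighted histogram, assuming precomputed gradient magnitude and orientation arrays for a patch; the soft (interpolated) bin assignment mentioned above is omitted for brevity, and the bin count is illustrative.

```python
import numpy as np

def orientation_histogram(magnitude, orientation_deg, weights=None, n_bins=36):
    """Histogram over gradient orientation, weighted by gradient magnitude
    (and optionally by a spatial weight such as a Gaussian window)."""
    w = magnitude if weights is None else magnitude * weights
    hist, _ = np.histogram(orientation_deg % 360.0, bins=n_bins,
                           range=(0.0, 360.0), weights=w)
    return hist
```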

Page 16: CS262: Computer Vision Lect 09: SIFT Descriptors

Gradient Orientation Histogram

[Figure: gradient orientation histogram]

Page 17: CS262: Computer Vision Lect 09: SIFT Descriptors

Gradient Orientation Histogram

• Plain Histogram of Gradient Orientation

Page 18: CS262: Computer Vision Lect 09: SIFT Descriptors

Gradient Orientation Histogram

• Weighted by gradient magnitude

• (Could also weight by distance to center of window)

Page 19: CS262: Computer Vision Lect 09: SIFT Descriptors

Gradient Orientation Histogram

• Interpolated to avoid edge effects of bin quantization

Page 20: CS262: Computer Vision Lect 09: SIFT Descriptors

Assigning Orientation to Keypoint

• Support: from image at assigned scale, all points in a window surrounding keypoint

• 36 bins over 360 degrees
• Contributions weighted by distance to the center of the keypoint, weighted by a Gaussian with sigma 1.5 x the assigned scale

[Figure: orientation histogram with the dominant orientation marked]
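
A sketch of orientation assignment under these settings, assuming `patch` is a window cut from the image at the keypoint's scale; computing gradients with np.gradient and taking only the single peak bin are illustrative simplifications.

```python
import numpy as np

def dominant_orientation(patch, keypoint_sigma, n_bins=36):
    """36-bin histogram weighted by gradient magnitude and a Gaussian window
    of sigma = 1.5 x the keypoint's scale; return the peak bin's center angle."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0

    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sigma = 1.5 * keypoint_sigma
    gauss = np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * sigma**2))

    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, 360.0),
                           weights=mag * gauss)
    return (np.argmax(hist) + 0.5) * (360.0 / n_bins)
```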

Page 21: CS262: Computer Vision Lect 09: SIFT Descriptors

Computing SIFT Descriptor

• Divide 16 x 16 region surrounding keypoint into 4 x 4 windows

• For each window, compute a histogram with 8 bins

• 128 total elements (16 windows x 8 bins)

• Interpolation to improve stability (over orientation and over distance to the boundary of the window)
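
A sketch of just the descriptor layout, assuming 16x16 arrays of gradient magnitude and orientation already rotated relative to the keypoint's dominant orientation; the interpolation across bins and cell boundaries used for stability is omitted here.

```python
import numpy as np

def raw_sift_descriptor(mag16, ang16):
    """Split a 16x16 gradient patch into a 4x4 grid of 4x4-pixel cells, each
    contributing an 8-bin orientation histogram -> 16 * 8 = 128 values."""
    desc = []
    for by in range(4):
        for bx in range(4):
            m = mag16[4*by:4*by+4, 4*bx:4*bx+4]
            a = ang16[4*by:4*by+4, 4*bx:4*bx+4] % 360.0
            hist, _ = np.histogram(a, bins=8, range=(0.0, 360.0), weights=m)
            desc.append(hist)
    return np.concatenate(desc)  # shape (128,)
```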


Page 23: CS262: Computer Vision Lect 09: SIFT Descriptors

Normalizing the descriptor

• To get (some) invariance to brightness and contrast:
– Clamp the weight due to gradient magnitude (in case some edges are very strong due to weird lighting)
– Normalize the entire vector to unit length (so the absolute value of the gradient magnitude isn’t as important as the distribution of the gradient magnitude)
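
A sketch of that normalization: scale to unit length, clamp large entries, then renormalize. The 0.2 clamp is the commonly cited value, used here for illustration.

```python
import numpy as np

def normalize_descriptor(desc, clamp=0.2):
    """Unit-normalize, clamp to limit the influence of very strong edges,
    then renormalize to unit length."""
    desc = desc / (np.linalg.norm(desc) + 1e-12)
    desc = np.minimum(desc, clamp)
    return desc / (np.linalg.norm(desc) + 1e-12)
```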

Page 24: CS262: Computer Vision Lect 09: SIFT Descriptors

Using the keypoints

• Assemble a database:
– Pick some “training” images of different objects
– Find keypoints and compute descriptors
– Store the descriptors and associated source image, position, scale, and orientation
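
A sketch of assembling such a database, assuming OpenCV's SIFT (cv2.SIFT_create) and a list of image paths; the structure of the stored records is an illustrative choice.

```python
import cv2
import numpy as np

def build_database(image_paths):
    """Return (descriptor matrix, per-descriptor records of source image,
    position, scale, and orientation)."""
    sift = cv2.SIFT_create()
    descs, records = [], []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        kps, d = sift.detectAndCompute(img, None)
        if d is None:
            continue
        for kp, vec in zip(kps, d):
            descs.append(vec)
            records.append({"image": path, "x": kp.pt[0], "y": kp.pt[1],
                            "scale": kp.size, "angle": kp.angle})
    return np.array(descs), records
```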

Page 25: CS262: Computer Vision Lect 09: SIFT Descriptors

Using the keypoints

• New image:
– Find keypoints and compute descriptors
– Search the database for matching descriptors
– (Throw out descriptors that are not distinctive)
– Look for clusters of matching descriptors

• (e.g., in your new image you found 10 keypoints and associated descriptors; in the database there is one image where 6 of the descriptors match, but only 1 or 2 match any other database image)
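
A sketch of that matching step: brute-force nearest-neighbor search with a ratio test to drop non-distinctive descriptors, then a count of matches per database image. The 0.8 ratio is illustrative, and `db_descs`/`db_image_ids` are assumed inputs (at least two database descriptors).

```python
import numpy as np
from collections import Counter

def match_and_count(query_descs, db_descs, db_image_ids, ratio=0.8):
    """Count distinctive matches per database image."""
    votes = Counter()
    for q in query_descs:
        d = np.linalg.norm(db_descs - q, axis=1)
        i1, i2 = np.argsort(d)[:2]
        if d[i1] < ratio * d[i2]:          # keep only distinctive matches
            votes[db_image_ids[i1]] += 1
    return votes  # e.g. Counter({"cat_03.jpg": 6, "cat_11.jpg": 2})
```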

Page 26: CS262: Computer Vision Lect 09: SIFT Descriptors

Using the keypoints

– http://chrisjmccormick.wordpress.com/2013/01/24/opencv-sift-tutorial/
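
A minimal OpenCV sketch along the lines of the linked tutorial, assuming OpenCV 4.4 or newer (where SIFT lives in the main module) and two local image files whose names are placeholders.

```python
import cv2

img1 = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("database.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
print(len(good), "distinctive matches")
```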

Page 27: CS262: Computer Vision Lect 09: SIFT Descriptors

Voting for Pose

• Matching keypoints from the database image and the new image will imply some relationship in pose (position, scale, and orientation)
– Example: This keypoint was found 20 pixels down and 50 pixels to the right of the matching descriptor from the database image
– Example: This keypoint was computed at 2x the scale of the matching descriptor from the database image
– Look for clusters of matches with similar offsets
– (“Generalized Hough Transform”)
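
A sketch of that vote: each match implies an offset in position, scale, and orientation; binning the offsets and keeping the largest bin is a coarse Generalized-Hough-style cluster. The bin sizes and the dictionary keypoint structure are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def pose_votes(matches, pos_bin=30.0, scale_bin=2.0, ang_bin=30.0):
    """matches: iterable of (query_kp, db_kp) pairs, each a dict with
    'x', 'y', 'scale', 'angle' (hypothetical structure)."""
    votes = Counter()
    for q, d in matches:
        dx, dy = q["x"] - d["x"], q["y"] - d["y"]
        ds = np.log2(q["scale"] / d["scale"])       # scale ratio, in octaves
        da = (q["angle"] - d["angle"]) % 360.0      # orientation offset
        key = (round(dx / pos_bin), round(dy / pos_bin),
               round(ds / np.log2(scale_bin)), round(da / ang_bin))
        votes[key] += 1
    return votes.most_common(1)  # the dominant pose hypothesis
```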

Page 28: CS262: Computer Vision Lect 09: SIFT Descriptors

Discussion Questions

• What types of invariance do we want to have when we think about doing object recognition?

• What does it mean to be invariant to different image attributes? (brightness, contrast, position, scale, orientation)

• What does it mean for an image feature to be stable?
• Why might it make sense to use a weighted histogram? What kinds of weights?
• What is a problem with the quantization associated with creating a histogram and what can we do about it?