
Page 1:

Feature extraction
January 2010

READING: Section 4.1.1

Today: Harris Corner Detector

Basics of Linear filtering (to continue on Thursday)

Monday, January 11, 2010

Page 2:

Feature extraction

• why feature extraction?

• what features?


Page 3:

Today’s illusion: Rotating Rays, A. Kitaoka. http://www.ritsumei.ac.jp/~akitaoka/index-e.html


Page 4:


Consider a “stereo pair”

(ignore the white lines and circles for now)


Page 5:


another example

(ignore the lines)


Page 9:

Correspondence problem

Left image / Right image


Page 10:

Correspondence problem

Left image / Right image

• What is a point?
• How do we compare points in different images? (Similarity measure)


Page 11:

Correspondence problem

Figure 4.3: Image pairs with extracted patches below. Notice how some patches can be localized or matched with higher accuracy than others.

…region around detected keypoint locations is converted into a more compact and stable (invariant) descriptor that can be matched against other descriptors. The third feature matching stage, §4.1.3, efficiently searches for likely matching candidates in other images. The fourth feature tracking stage, §4.1.4, is an alternative to the third stage that only searches a small neighborhood around each detected feature and is therefore more suitable for video processing.

A wonderful example of all of these stages can be found in David Lowe’s (2004) Distinctive image features from scale-invariant keypoints paper, which describes the development and refinement of his Scale Invariant Feature Transform (SIFT). Comprehensive descriptions of alternative techniques can be found in a series of survey and evaluation papers by Schmid, Mikolajczyk, et al. covering both feature detection (Schmid et al. 2000, Mikolajczyk et al. 2005, Tuytelaars and Mikolajczyk 2007) and feature descriptors (Mikolajczyk and Schmid 2005). Shi and Tomasi (1994) and Triggs (2004) also provide nice reviews of feature detection techniques.

4.1.1 Feature detectors

How can we find image locations where we can reliably find correspondences with other images, i.e., what are good features to track (Shi and Tomasi 1994, Triggs 2004)? Look again at the image pair shown in Figure 4.3 and at the three sample patches to see how well they might be matched or tracked. As you may notice, textureless patches are nearly impossible to localize. Patches with large contrast changes (gradients) are easier to localize, although straight line segments at a single…


Page 12:

Aperture Problem


Figure 4.4: Aperture problems for different image patches: (a) stable (“corner-like”) flow; (b) classic aperture problem (barber-pole illusion); (c) textureless region. The two images I0 (yellow) and I1 (red) are overlaid. The red vector u indicates the displacement between the patch centers, and the w(x_i) weighting function (patch window) is shown as a dark circle.

orientation suffer from the aperture problem (Horn and Schunck 1981, Lucas and Kanade 1981, Anandan 1989), i.e., it is only possible to align the patches along the direction normal to the edge direction (Figure 4.4b). Patches with gradients in at least two (significantly) different orientations are the easiest to localize, as shown schematically in Figure 4.4a.

These intuitions can be formalized by looking at the simplest possible matching criterion for comparing two image patches, i.e., their (weighted) summed square difference,

$E_{\mathrm{WSSD}}(u) = \sum_i w(x_i)\,[I_1(x_i + u) - I_0(x_i)]^2$, (4.1)

where $I_0$ and $I_1$ are the two images being compared, $u = (u, v)$ is the displacement vector, $w(x)$ is a spatially varying weighting (or window) function, and the summation $i$ is over all the pixels in the patch. (Note that this is the same formulation we later use to estimate motion between complete images §8.1, and that this section shares some material with that later section.)

When performing feature detection, we do not know which other image location(s) the feature will end up being matched against. Therefore, we can only compute how stable this metric is with respect to small variations in position $\Delta u$ by comparing an image patch against itself, which is known as an auto-correlation function or surface

$E_{\mathrm{AC}}(\Delta u) = \sum_i w(x_i)\,[I_0(x_i + \Delta u) - I_0(x_i)]^2$ (4.2)

(Figure 4.5).¹ Note how the auto-correlation surface for the textured flower bed (Figure 4.5b, red…

¹ Strictly speaking, the auto-correlation is the product of the two weighted patches; I’m using the term here in a more qualitative sense. The weighted sum of squared differences is often called an SSD surface §8.1.

Ack: Szeliski, Chapter 4
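To make eq. (4.2) concrete, here is a minimal NumPy sketch (my own illustration, not from the slides or the book; the patch size, window width, and test image are arbitrary choices) that evaluates the auto-correlation surface for one patch over a small range of integer displacements:

```python
import numpy as np

def autocorrelation_surface(img, cx, cy, half=7, radius=5):
    """E_AC(du) = sum_i w(x_i) [I0(x_i + du) - I0(x_i)]^2, eq. (4.2),
    evaluated for integer displacements du in [-radius, radius]^2
    around the patch centered at (cx, cy)."""
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    w = np.exp(-(xs**2 + ys**2) / (2.0 * (half / 2.0)**2))  # Gaussian window
    patch0 = img[cy - half:cy + half + 1, cx - half:cx + half + 1]
    E = np.zeros((2 * radius + 1, 2 * radius + 1))
    for dv in range(-radius, radius + 1):
        for du in range(-radius, radius + 1):
            shifted = img[cy + dv - half:cy + dv + half + 1,
                          cx + du - half:cx + du + half + 1]
            E[dv + radius, du + radius] = np.sum(w * (shifted - patch0)**2)
    return E

# A patch on a synthetic corner has a strong, isolated minimum at du = 0,
# which is exactly the "well localized" case discussed above.
img = np.zeros((64, 64)); img[32:, 32:] = 1.0
E = autocorrelation_surface(img, 32, 32)
print(E[5, 5], E.min() == E[5, 5])  # center of E is du = (0, 0)
```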


Page 13:


The correspondence problem


Page 14:


The correspondence problem

• A classically difficult problem in computer vision
  – Is every point visible in both images?
  – Do we match points or regions or …?
  – Are corresponding (L-R) image regions similar?


Page 15:


The correspondence problem

• A classically difficult problem in computer vision
  – Is every point visible in both images?
  – Do we match points or regions or …?
  – Are corresponding (L-R) image regions similar?

• The so-called “aperture problem”


Page 16:

Next week’s lectures: Corner detection

• Helpful to come prepared
  – read up on basic linear algebra, eigenvalues of a 2×2 matrix, diagonalization, etc.
  – will introduce you to some vision buzz-words: Gaussian kernels, linear filtering, convolution vs. correlation
  – try to read Ch. 3 and Ch. 4 to the extent possible (even if you ignore the math details in those chapters)


Page 17:

Comparing image patches


Using the Taylor approximation $f(x + a) \approx f(x) + f'(x)\,a$:

$I_0(x_i + \Delta u) \approx I_0(x_i) + \nabla I_0(x_i) \cdot \Delta u$

where the image gradient is $\nabla I(x_i) = (\partial I/\partial x,\ \partial I/\partial y)(x_i)$.
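A quick numeric sanity check of this first-order expansion (my own illustration; the smooth 1-D "image" sin(x) is arbitrary): the linearized value matches the true shifted value up to $O(\Delta u^2)$.

```python
import numpy as np

# I(x) = sin(x): compare the true shifted value I(x + du) with the
# first-order Taylor prediction I(x) + I'(x) * du for a small shift.
x, du = 1.0, 0.1
true_val = np.sin(x + du)
taylor_val = np.sin(x) + np.cos(x) * du
print(true_val, taylor_val, abs(true_val - taylor_val))  # error ~ du^2 / 2
```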


Page 18:

…cross in the lower right-hand quadrant of Figure 4.5a) exhibits a strong minimum, indicating that it can be well localized. The correlation surface corresponding to the roof edge (Figure 4.5c) has a strong ambiguity along one direction, while the correlation surface corresponding to the cloud region (Figure 4.5d) has no stable minimum.

Using a Taylor Series expansion of the image function $I_0(x_i + \Delta u) \approx I_0(x_i) + \nabla I_0(x_i) \cdot \Delta u$ (Lucas and Kanade 1981, Shi and Tomasi 1994), we can approximate the auto-correlation surface as

$E_{\mathrm{AC}}(\Delta u) = \sum_i w(x_i)\,[I_0(x_i + \Delta u) - I_0(x_i)]^2$ (4.3)
$\approx \sum_i w(x_i)\,[I_0(x_i) + \nabla I_0(x_i) \cdot \Delta u - I_0(x_i)]^2$ (4.4)
$= \sum_i w(x_i)\,[\nabla I_0(x_i) \cdot \Delta u]^2$ (4.5)
$= \Delta u^T A\, \Delta u$, (4.6)

where

$\nabla I_0(x_i) = \left(\frac{\partial I_0}{\partial x}, \frac{\partial I_0}{\partial y}\right)(x_i)$ (4.7)

is the image gradient at $x_i$. This gradient can be computed using a variety of techniques (Schmid et al. 2000). The classic “Harris” detector (Harris and Stephens 1988) uses a [-2 -1 0 1 2] filter, but more modern variants (Schmid et al. 2000, Triggs 2004) convolve the image with horizontal and vertical derivatives of a Gaussian (typically with $\sigma = 1$). [Note: Bill Triggs doubts that Harris and Stephens (1988) used such a bad filter kernel, as reported in (Schmid et al. 2000), but the original publication is hard to find.]

The auto-correlation matrix A can be written as

$A = w * \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}$, (4.8)

where we have replaced the weighted summations with discrete convolutions with the weighting kernel w. This matrix can be interpreted as a tensor (multiband) image, where the outer products of the gradients $\nabla I$ are convolved with a weighting function w to provide a per-pixel estimate of the local (quadratic) shape of the auto-correlation function. In more detail, the computation of the image that contains a 2×2 matrix A at each pixel can be performed in two steps:

1. At each pixel, compute the gradient $\nabla I = \begin{bmatrix} I_x & I_y \end{bmatrix}$ and then compute the four values $\begin{bmatrix} I_x^2 & I_x I_y & I_x I_y & I_y^2 \end{bmatrix}$;

2. Convolve the resulting 4-band image with a blur kernel w.


- A symmetric matrix can be diagonalized:

$A' = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}$

where $\lambda_1$ and $\lambda_2$ are the eigenvalues of the matrix A.
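The two-step recipe above translates directly into code. A minimal sketch using SciPy (my own choice of library and function names, not from the slides; the Gaussian scales are illustrative):

```python
import numpy as np
from scipy import ndimage

def structure_tensor(img, sigma_d=1.0, sigma_i=2.0):
    """Per-pixel entries of the 2x2 local structure matrix A, eq. (4.8).
    sigma_d: scale of the derivative Gaussian; sigma_i: integration scale."""
    # Gradients via horizontal and vertical derivatives of a Gaussian.
    Ix = ndimage.gaussian_filter(img, sigma_d, order=(0, 1))
    Iy = ndimage.gaussian_filter(img, sigma_d, order=(1, 0))
    # Step 1: outer products of the gradients. Three bands suffice because
    # A is symmetric (Ix*Iy appears twice).
    Ixx, Ixy, Iyy = Ix * Ix, Ix * Iy, Iy * Iy
    # Step 2: convolve each band with the weighting (blur) kernel w.
    Axx = ndimage.gaussian_filter(Ixx, sigma_i)
    Axy = ndimage.gaussian_filter(Ixy, sigma_i)
    Ayy = ndimage.gaussian_filter(Iyy, sigma_i)
    return Axx, Axy, Ayy
```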


Page 19:

Local Structure Matrix

• The matrix A is often referred to as the local structure matrix.

• Eigenvalues of A are real and non-negative, and provide information about the local image structure.

• Examples (illustrated numerically in the sketch after this list):
  – flat image: both eigenvalues will be zero.
  – ideal ramp edge: one eigenvalue will be non-zero and the other zero (independent of the edge orientation).
  – a corner will have a strong edge in the direction corresponding to the larger eigenvalue, and another edge normal to the first, and both eigenvalues will be non-zero.
  – for a “good” corner, both eigenvalues must be significant.
  – hence, conditioning on the smaller of the two eigenvalues is needed to determine a strong corner.
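A numeric illustration of these cases (my own synthetic patches; the Gaussian scales are arbitrary): the eigenvalues behave as the bullets predict.

```python
import numpy as np
from scipy import ndimage

def eigvals_at(img, y, x, sigma_d=1.0, sigma_i=2.0):
    """Eigenvalues (ascending) of the local structure matrix A at (y, x)."""
    Ix = ndimage.gaussian_filter(img, sigma_d, order=(0, 1))
    Iy = ndimage.gaussian_filter(img, sigma_d, order=(1, 0))
    Axx = ndimage.gaussian_filter(Ix * Ix, sigma_i)[y, x]
    Axy = ndimage.gaussian_filter(Ix * Iy, sigma_i)[y, x]
    Ayy = ndimage.gaussian_filter(Iy * Iy, sigma_i)[y, x]
    return np.linalg.eigvalsh(np.array([[Axx, Axy], [Axy, Ayy]]))

flat = np.zeros((32, 32))
edge = np.zeros((32, 32)); edge[:, 16:] = 1.0        # vertical step edge
corner = np.zeros((32, 32)); corner[16:, 16:] = 1.0  # single corner
print(eigvals_at(flat, 16, 16))    # both ~ 0
print(eigvals_at(edge, 16, 16))    # one ~ 0, one large
print(eigvals_at(corner, 16, 16))  # both clearly non-zero
```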


Page 20:

Corner response function

• Computing the eigenvalues is expensive:

$\lambda_{1,2} = \frac{\mathrm{trace}(A)}{2} \pm \sqrt{\left(\frac{\mathrm{trace}(A)}{2}\right)^2 - \det(A)}$

$\lambda_1 - \lambda_2 = 2\sqrt{0.25\,(\mathrm{trace}(A))^2 - \det(A)}$

• At a corner, this expression should always be positive, so one can consider the following as a corner response function:

$\det(A) - \alpha\,(\mathrm{trace}(A))^2$

• α determines the sensitivity of the detector. Larger value ==> less sensitive. Typical: 0.04 - 0.06.
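Given the three bands of A, the response needs no eigendecomposition. A short sketch (my naming; the default α sits in the 0.04 - 0.06 range above):

```python
import numpy as np

def harris_response(Axx, Axy, Ayy, alpha=0.05):
    """Corner response det(A) - alpha * trace(A)^2, computed per pixel
    from the three distinct bands of the local structure matrix A."""
    det = Axx * Ayy - Axy * Axy
    trace = Axx + Ayy
    return det - alpha * trace * trace
```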


Page 21:

Implementation (page 220, Oct 18 draft)

1. Compute the horizontal (I_x) and vertical (I_y) derivatives of the image by convolving the original image with derivatives of Gaussians.

2. Compute the three images corresponding to the outer products of these gradients (the matrix A in the previous slides).

3. Blur these three images with a larger Gaussian kernel.

4. Compute a “corner response function” as discussed in the previous slide.

5. Find the local maxima above a certain threshold and report these as detected feature point locations. (A minimal end-to-end sketch follows this list.)
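As referenced above, here is a minimal end-to-end sketch of the five steps (my own illustration in Python/NumPy; the course recommends MATLAB, but the steps map one-to-one, and all parameter values are illustrative defaults, not prescribed by the slides):

```python
import numpy as np
from scipy import ndimage

def harris_corners(img, sigma_d=1.0, sigma_i=2.0, alpha=0.05, thresh=1e-4):
    """Minimal Harris detector following the five steps above."""
    img = img.astype(float)
    # 1. Horizontal/vertical derivatives via derivatives of Gaussians.
    Ix = ndimage.gaussian_filter(img, sigma_d, order=(0, 1))
    Iy = ndimage.gaussian_filter(img, sigma_d, order=(1, 0))
    # 2.-3. Outer products of the gradients, blurred with a larger Gaussian.
    Axx = ndimage.gaussian_filter(Ix * Ix, sigma_i)
    Axy = ndimage.gaussian_filter(Ix * Iy, sigma_i)
    Ayy = ndimage.gaussian_filter(Iy * Iy, sigma_i)
    # 4. Corner response function det(A) - alpha * trace(A)^2.
    R = (Axx * Ayy - Axy**2) - alpha * (Axx + Ayy)**2
    # 5. Local maxima above a threshold.
    local_max = (R == ndimage.maximum_filter(R, size=5))
    ys, xs = np.nonzero(local_max & (R > thresh))
    return np.column_stack([ys, xs])

# Example: one synthetic corner should be detected near (24, 24).
img = np.zeros((64, 64)); img[24:, 24:] = 1.0
print(harris_corners(img))
```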



Page 22:

(a) Strongest 250 (b) Strongest 500
(c) ANMS 250, r = 24 (d) ANMS 500, r = 16

Figure 4.9: Adaptive non-maximal suppression (ANMS) (Brown et al. 2005). The two upper images show the strongest 250 and 500 interest points, while the lower two images show the interest points selected with adaptive non-maximal suppression (along with the corresponding suppression radius r). Note how the latter features have a much more uniform spatial distribution across the image.

…survey, they find that the improved (Gaussian derivative) version of the Harris operator with $\sigma_d = 1$ (scale of the derivative Gaussian) and $\sigma_i = 2$ (scale of the integration Gaussian) works best.

Scale invariance

In many situations, detecting features at the finest stable scale possible may not be appropriate. For example, when matching images with little high frequency (e.g., clouds), fine-scale features may not exist.

One solution to the problem is to extract features at a variety of scales, e.g., by performing the same operations at multiple resolutions in a pyramid and then matching features at the same level. This kind of approach is suitable when the images being matched do not undergo large…
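The ANMS strategy in Figure 4.9 keeps points whose response is the strongest within a large radius. A simple quadratic-time sketch of the rule from Brown et al. (2005), under my own naming and with the robustness factor as an assumed parameter:

```python
import numpy as np

def anms(points, responses, n_keep=250, c_robust=0.9):
    """For each point, compute the distance to the nearest point whose
    (robustly scaled) response is stronger; keep the n_keep points with
    the largest such suppression radii. O(n^2), for clarity not speed."""
    pts = np.asarray(points, dtype=float)
    resp = np.asarray(responses, dtype=float)
    radii = np.full(len(resp), np.inf)  # the global maximum keeps radius = inf
    for i in range(len(resp)):
        stronger = c_robust * resp > resp[i]   # points that suppress point i
        if stronger.any():
            d2 = np.sum((pts[stronger] - pts[i])**2, axis=1)
            radii[i] = np.sqrt(d2.min())
    return np.argsort(-radii)[:n_keep]  # indices of the retained points

# Usage with random data (indices of the 250 best-distributed points):
rng = np.random.default_rng(0)
kept = anms(rng.uniform(0, 512, (1000, 2)), rng.uniform(0, 1, 1000))
```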


Page 23:

Project 1. Implement a Harris corner detector

• There are many resources available online, including source code. For example, MATLAB code for a Harris detector is available at http://www.csse.uwa.edu.au/~pk/Research/MatlabFns/.

• The purpose of this project is twofold:
  – get your hands on programming with images while learning some basics of image processing/analysis
  – do something interesting, leading towards the next project

• I am OK if you use online resources, but you should make sure that you acknowledge that fact. In addition, you should use them only as a starting point, to learn and implement your own version, not just copy & paste.

• All project group members are expected to contribute towards the implementation equally. The grade is assigned to the group as a whole.


Page 24:

Project 1 (contd.) DUE JAN 22, 5PM.

• You should organize your report as follows:
  – a PDF file that includes a cover page (your group number, names of the students, date of submission) and at least two pictures on which you show the results demonstrating the corner detection performance.
    ♦ include highlights/special features of your implementation, including an overview of the specific details that differ from the generic class discussion.
    ♦ discuss the effects of various parameter choices in your implementation (e.g., threshold selection, smoothing filter choices, etc.)
  – a file that consists of the source code of your implementation. The code must be adequately documented. Any dependencies on binaries should be minimal and clearly explained.
  – Combine the files into a single ZIP archive with the following naming convention: Project1_Group##.zip. Insert your group number in the range 00-10.

• Email the ZIP archive to [email protected].


Page 25:

project: final note

• If you do not have significant programming experience, use MATLAB.

• Programming/software-specific issues are often difficult to resolve, particularly for a course such as this. Keep this in mind while choosing your software environment.
  – choose wisely!

• In addition to the software link provided before, there are many, many computer vision libraries that are freely available online, including OpenCV, ImageJ, etc. However, the TAs will not be providing any support for these external libraries.
  – again, my recommendation is to stick with the inefficient MATLAB while you learn about computer vision.
