Transcript
Page 1: Computer Vision: Vision and Modeling

8/16/99

Computer Vision: Vision and Modeling

Page 2: Computer Vision: Vision and Modeling


• Lucas-Kanade Extensions

• Support Maps / Layers:

Robust Norm, Layered Motion, Background Subtraction, Color Layers

• Statistical Models (Forsyth+Ponce Chap. 6, Duda+Hart+Stork: Chap. 1-5)

- Bayesian Decision Theory

- Density Estimation

Computer Vision: Vision and Modeling

Page 3: Computer Vision: Vision and Modeling

A Different View of Lucas-Kanade

Stacking one brightness-constancy constraint per pixel,

\nabla I(1)^T v + I_t(1)
\nabla I(2)^T v + I_t(2)
...
\nabla I(n)^T v + I_t(n)

the summed squared error is

E = \sum_i ( \nabla I(i)^T v + I_t(i) )^2

(white board)

High gradient has higher weight.
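The least-squares solve behind that error can be checked numerically. A minimal sketch in Python/NumPy, where the gradient arrays Ix, Iy and the temporal derivative It are synthetic stand-ins for real image derivatives:

```python
import numpy as np

# Synthetic patch data: in practice Ix, Iy come from spatial image
# derivatives and It from the frame difference.
rng = np.random.default_rng(0)
v_true = np.array([0.5, -0.25])            # assumed ground-truth flow
Ix = rng.normal(size=100)
Iy = rng.normal(size=100)
It = -(Ix * v_true[0] + Iy * v_true[1])    # brightness constancy: grad(I).v + I_t = 0

# One constraint per pixel; minimize E(v) = sum_i (grad(I_i).v + I_t(i))^2
# via the 2x2 normal equations.
A = np.stack([Ix, Iy], axis=1)             # n x 2 matrix of gradients
b = -It
v = np.linalg.solve(A.T @ A, A.T @ b)      # minimizer of E(v)
```

With noiseless synthetic data the recovered v matches v_true exactly.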

Page 4: Computer Vision: Vision and Modeling

Constrained Optimization

[Figure: the error E(V) is minimized over flow fields V restricted to a constrained subspace; each pixel i still contributes a constraint \nabla I(i)^T v_i + I_t(i).]

Page 5: Computer Vision: Vision and Modeling

Constraints = Subspaces

[Figure: E(V) over V, restricted to a constrained subspace.]

Analytically derived: Affine / Twist / Exponential Map
Learned: Linear / non-linear sub-spaces

Page 6: Computer Vision: Vision and Modeling

Motion Constraints

• Optical Flow: local constraints

• Region Layers: rigid/affine constraints

• Articulated: kinematic chain constraints

• Nonrigid: implicit / learned constraints

Page 7: Computer Vision: Vision and Modeling

Constrained Function Minimization

The flow field is generated by a parametric model, V = M(\phi), and the error is minimized over the parameters:

E(V) = \sum_i ( \nabla I(i)^T v_i + I_t(i) )^2

Page 8: Computer Vision: Vision and Modeling

2D Translation: Lucas-Kanade

Constrained to a single 2D translation, every pixel i moves by the same vector:

v_i = (dx, dy)  for all i   (2D parameter space)

E(V) = \sum_i ( \nabla I(i)^T v_i + I_t(i) )^2

Page 9: Computer Vision: Vision and Modeling

2D Affine: Bergen et al., Shi-Tomasi

Each pixel's flow follows a 6-parameter affine model:

v_i = [ a1  a2 ; a3  a4 ] (x_i, y_i)^T + (dx, dy)^T   (6D parameter space)

E(V) = \sum_i ( \nabla I(i)^T v_i + I_t(i) )^2

Page 10: Computer Vision: Vision and Modeling

Affine Extension

Affine Motion Model:

u(x, y) = a1 x + a2 y + a3
v(x, y) = a4 x + a5 y + a6

- 2D Translation
- 2D Rotation
- Scale in X / Y
- Shear

Matlab demo ->

Page 11: Computer Vision: Vision and Modeling

Affine Extension

Affine Motion Model -> Lucas-Kanade:

E(u, v) = \sum_{(x, y) \in ROI} ( F(x + u, y + v) - G(x, y) )^2

with u = a1 x + a2 y + a3 and v = a4 x + a5 y + a6. Linearizing gives a 6x6 linear system C (a1, ..., a6)^T = D.

Matlab demo ->
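This is not the Matlab demo, but the 6x6 system C a = D can be sketched in Python/NumPy with synthetic gradients (the affine parameters, ROI size, and gradient values are all illustrative):

```python
import numpy as np

# Synthetic check of the 6x6 system the affine model yields.
rng = np.random.default_rng(1)
a_true = np.array([0.01, -0.02, 0.5, 0.03, 0.01, -0.25])  # made-up a1..a6

x, y = np.meshgrid(np.arange(16.0), np.arange(16.0))      # 16x16 ROI grid
x, y = x.ravel(), y.ravel()
Ix = rng.normal(size=x.size)               # spatial gradients over the ROI
Iy = rng.normal(size=x.size)

u = a_true[0] * x + a_true[1] * y + a_true[2]
v = a_true[3] * x + a_true[4] * y + a_true[5]
It = -(Ix * u + Iy * v)                    # brightness constancy per pixel

# Each pixel contributes one linearized constraint; its coefficients on
# (a1..a6) form one row of C.
C = np.stack([Ix * x, Ix * y, Ix, Iy * x, Iy * y, Iy], axis=1)
D = -It
a = np.linalg.solve(C.T @ C, C.T @ D)      # 6x6 normal equations
```

With noiseless constraints the solve recovers a_true exactly.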

Page 12: Computer Vision: Vision and Modeling

2D Affine: Bergen et al., Shi-Tomasi

[Figure: the 6D affine subspace inside the space of all flow fields V.]

Page 13: Computer Vision: Vision and Modeling

K-DOF Models

V = M(\phi), a K-degree-of-freedom motion model:

E(V) = \sum_i ( \nabla I(i)^T v_i + I_t(i) )^2

Page 14: Computer Vision: Vision and Modeling

Quadratic Error Norm (SSD) ???

V = M(\phi)

E(V) = \sum_i ( \nabla I(i)^T v_i + I_t(i) )^2

White board (outliers?)

Page 15: Computer Vision: Vision and Modeling

Support Maps / Layers

- L2 Norm vs Robust Norm
- Dangers of least-squares fitting:

[Figure: the L2 norm as a function of the residual D.]

Page 16: Computer Vision: Vision and Modeling

Support Maps / Layers

- L2 Norm vs Robust Norm
- Dangers of least-squares fitting:

[Figure: the L2 norm vs a robust norm as functions of the residual D.]

Page 17: Computer Vision: Vision and Modeling

Support Maps / Layers

- Robust Norm -- good for outliers
- but requires nonlinear optimization

[Figure: a robust norm as a function of the residual D.]

Page 18: Computer Vision: Vision and Modeling

Support Maps / Layers

- Iterative Technique: add weights to each pixel equation (white board)

Page 19: Computer Vision: Vision and Modeling

Support Maps / Layers

- How to compute the weights?
  -> from the previous iteration: how well does the warped G match F?
  -> probabilistic distance, e.g. a Gaussian in the residual
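Re-estimating with per-pixel weights taken from the previous iteration's residuals amounts to iteratively reweighted least squares. A sketch under simple assumptions: a translational motion model, Gaussian weights with an assumed residual scale, and synthetic data in which some pixels belong to a second "outlier" layer:

```python
import numpy as np

rng = np.random.default_rng(2)
v_true = np.array([1.0, -0.5])             # dominant-layer motion
Ix = rng.normal(size=200)
Iy = rng.normal(size=200)
It = -(Ix * v_true[0] + Iy * v_true[1])
It[:40] += rng.normal(scale=20.0, size=40) # outlier pixels (other motion)

A = np.stack([Ix, Iy], axis=1)
b = -It
w = np.ones(len(b))                        # first pass = plain L2 fit
sigma = 1.0                                # assumed residual scale
for _ in range(10):
    Aw = A * w[:, None]
    v = np.linalg.solve(Aw.T @ A, Aw.T @ b)  # weighted least squares
    r = A @ v - b                            # residual per pixel
    w = np.exp(-r**2 / (2 * sigma**2))       # Gaussian downweighting
```

After a few iterations the outlier pixels carry near-zero weight and the estimate settles on the dominant layer's motion.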

Page 20: Computer Vision: Vision and Modeling

Error Norms / Optimization Techniques

SSD: Lucas-Kanade (1981) Newton-Raphson

SSD: Bergen et al. (1992) Coarse-to-Fine

SSD: Shi-Tomasi (1994) Good Features

Robust Norm: Jepson-Black (1993) EM

Robust Norm: Ayer-Sawhney (1995) EM + MRF

MAP: Weiss-Adelson (1996) EM + MRF

ML/MAP: Bregler-Malik (1998) Twists / EM

ML/MAP: Irani + Anandan (2000) SVD

Page 21: Computer Vision: Vision and Modeling


• Lucas-Kanade Extensions

• Support Maps / Layers:

Robust Norm, Layered Motion, Background Subtraction, Color Layers

• Statistical Models (Forsyth+Ponce Chap. 6, Duda+Hart+Stork: Chap. 1-5)

- Bayesian Decision Theory

- Density Estimation

Computer Vision: Vision and Modeling

Page 22: Computer Vision: Vision and Modeling

Support Maps / Layers

- Black-Jepson-95

Page 23: Computer Vision: Vision and Modeling

Support Maps / Layers

- More General: Layered Motion (Jepson/Black, Weiss/Adelson, …)

Page 24: Computer Vision: Vision and Modeling

Support Maps / Layers

- Special Cases of Layered Motion:

- Background subtraction

- Outlier rejection (== robust norm)

- Simplest Case: Each Layer has uniform color

Page 25: Computer Vision: Vision and Modeling

Support Maps / Layers

- Color Layers:

P(skin | F(x,y))
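The per-pixel skin posterior P(skin | F(x,y)) can be sketched with Bayes' rule. A toy version with a 1-D "chroma" value per pixel (real systems use 2-3 color channels); the means, variances, and prior below are made-up illustration values:

```python
import numpy as np

def gauss(x, mu, var):
    # 1-D Gaussian density
    return np.exp(-(x - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

F = np.array([0.1, 0.45, 0.5, 0.9])        # pixel color values
p_f_skin = gauss(F, mu=0.5, var=0.01)      # p(F | skin)      (assumed model)
p_f_bg   = gauss(F, mu=0.2, var=0.05)      # p(F | background) (assumed model)
P_skin = 0.3                               # prior P(skin)     (assumed)

# Bayes: P(skin | F) = p(F | skin) P(skin) / p(F)
post = p_f_skin * P_skin / (p_f_skin * P_skin + p_f_bg * (1 - P_skin))
```

Thresholding `post` yields the color-layer support map.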

Page 26: Computer Vision: Vision and Modeling


• Lucas-Kanade Extensions

• Support Maps / Layers:

Robust Norm, Layered Motion, Background Subtraction, Color Layers

• Statistical Models (Duda+Hart+Stork: Chap. 1-5)

- Bayesian Decision Theory

- Density Estimation

Computer Vision: Vision and Modeling

Page 27: Computer Vision: Vision and Modeling


• Statistical Models: Represent Uncertainty and Variability

• Probability Theory: Proper mechanism for Uncertainty

• Basic Facts (white board)

Statistical Models / Probability Theory

Page 28: Computer Vision: Vision and Modeling

General Performance Criteria

Optimal Bayes
With Applications to Classification

Page 29: Computer Vision: Vision and Modeling

Bayes Decision Theory

Example: Character Recognition

Goal: classify a new character so as to minimize the probability of misclassification.

Page 30: Computer Vision: Vision and Modeling

Bayes Decision Theory

• 1st Concept: Priors P(C_k)

a a b a b a a b a b a a a a b a a b a a b a a a a b b a b a b a a b a a

P(a) = 0.75,  P(b) = 0.25

Page 31: Computer Vision: Vision and Modeling

Bayes Decision Theory

• 2nd Concept: Conditional Probability P(X | C_k)

[Figure: histograms of P(X | a) and P(X | b) over X = number of black pixels.]

Page 32: Computer Vision: Vision and Modeling

Bayes Decision Theory

• Example: given P(X | a) and P(X | b), observe X = 7. Which class C_k?

Page 33: Computer Vision: Vision and Modeling

Bayes Decision Theory

• Example: given P(X | a) and P(X | b), observe X = 8. Which class C_k?

Page 34: Computer Vision: Vision and Modeling

Bayes Decision Theory

• Example: X = 8. Well... with P(a) = 0.75 and P(b) = 0.25, the decision is C_k = a.

Page 35: Computer Vision: Vision and Modeling

Bayes Decision Theory

• Example: X = 9, with P(a) = 0.75 and P(b) = 0.25. Which class C_k?

Page 36: Computer Vision: Vision and Modeling

Bayes Decision Theory

• Bayes Theorem:

P(C_k | X) = P(X | C_k) P(C_k) / P(X)

Page 37: Computer Vision: Vision and Modeling

Bayes Decision Theory

• Bayes Theorem:

P(C_k | X) = P(X | C_k) P(C_k) / P(X)

P(X) = \sum_j P(X | C_j) P(C_j)

Page 38: Computer Vision: Vision and Modeling

Bayes Decision Theory

• Bayes Theorem:

Posterior = (Likelihood x Prior) / Normalization factor

Page 39: Computer Vision: Vision and Modeling

Bayes Decision Theory

• Example: the class-conditional densities P(X | a) and P(X | b).

Page 40: Computer Vision: Vision and Modeling

Bayes Decision Theory

• Example: the scaled likelihoods P(X | a) P(a) and P(X | b) P(b).

Page 41: Computer Vision: Vision and Modeling

Bayes Decision Theory

• Example: the posteriors P(a | X) and P(b | X); for X > 8, choose class b.
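The X = 7/8/9 story above can be written out with Bayes' theorem. The conditional probability tables here are invented values, chosen only to reproduce the slides' behavior (the likelihood alone would pick b at X = 8, but the prior flips the decision to a):

```python
# Priors from the slides; class-conditional tables P(X | class) over the
# black-pixel count X are hypothetical illustration values.
P_prior = {"a": 0.75, "b": 0.25}
P_x_given = {
    "a": {7: 0.20, 8: 0.10, 9: 0.02},
    "b": {7: 0.10, 8: 0.15, 9: 0.30},
}

def posterior(x):
    # Bayes: P(C | x) = P(x | C) P(C) / sum_j P(x | C_j) P(C_j)
    joint = {c: P_x_given[c][x] * P_prior[c] for c in P_prior}
    z = sum(joint.values())                # P(X = x), the normalizer
    return {c: v / z for c, v in joint.items()}
```

At X = 8 the posterior favors a despite P(8 | b) > P(8 | a); at X = 9 the likelihood is strong enough that b wins anyway.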

Page 42: Computer Vision: Vision and Modeling

Bayes Decision Theory

Goal: classify a new character so as to minimize the probability of misclassification.

Decision boundaries:

P(C_k | x) > P(C_j | x)  for all j \neq k

Page 43: Computer Vision: Vision and Modeling

Bayes Decision Theory

Goal: classify a new character so as to minimize the probability of misclassification.

Decision boundaries:

P(C_k | x) > P(C_j | x)  for all j \neq k

P(x | C_k) P(C_k) > P(x | C_j) P(C_j)  for all j \neq k

Page 44: Computer Vision: Vision and Modeling

Bayes Decision Theory

Decision Regions: R_1, ..., R_c

[Figure: the input space partitioned into regions R1, R2, R3.]

Page 45: Computer Vision: Vision and Modeling

Bayes Decision Theory

Goal: minimize the probability of misclassification.

P(error) = P(x \in R_1, C_2) + P(x \in R_2, C_1)

Page 46: Computer Vision: Vision and Modeling

Bayes Decision Theory

Goal: minimize the probability of misclassification.

P(error) = P(x \in R_1, C_2) + P(x \in R_2, C_1)
         = P(x \in R_1 | C_2) P(C_2) + P(x \in R_2 | C_1) P(C_1)

Page 47: Computer Vision: Vision and Modeling

Bayes Decision Theory

Goal: minimize the probability of misclassification.

P(error) = P(x \in R_1, C_2) + P(x \in R_2, C_1)
         = P(x \in R_1 | C_2) P(C_2) + P(x \in R_2 | C_1) P(C_1)
         = \int_{R_1} p(x | C_2) P(C_2) dx + \int_{R_2} p(x | C_1) P(C_1) dx

Page 48: Computer Vision: Vision and Modeling

Bayes Decision Theory

Goal: minimize the probability of misclassification.

P(error) = \int_{R_1} p(x | C_2) P(C_2) dx + \int_{R_2} p(x | C_1) P(C_1) dx

Page 49: Computer Vision: Vision and Modeling

Bayes Decision Theory

Discriminant functions y_1(x), ..., y_c(x):

• class membership based solely on relative sizes
• reformulate the classification process in terms of discriminant functions:

x is assigned to C_k if  y_k(x) > y_j(x)  for all j \neq k

Page 50: Computer Vision: Vision and Modeling

Bayes Decision Theory

Discriminant function examples:

y_k(x) = P(C_k | x)
y_k(x) = p(x | C_k) P(C_k)
y_k(x) = ln p(x | C_k) + ln P(C_k)

Page 51: Computer Vision: Vision and Modeling

Bayes Decision Theory

Discriminant function examples: 2-class problem

y(x) = y_1(x) - y_2(x),  decide C_1 if y(x) > 0

y(x) = P(C_1 | x) - P(C_2 | x)

y(x) = ln [ p(x | C_1) / p(x | C_2) ] + ln [ P(C_1) / P(C_2) ]

Page 52: Computer Vision: Vision and Modeling

Bayes Decision Theory

Why is p(x | C_k) P(C_k) such a big deal?

Page 53: Computer Vision: Vision and Modeling

Bayes Decision Theory

Why is p(x | C_k) P(C_k) such a big deal?

Example #1: Speech Recognition

[Figure: waveform -> FFT -> mel-scale filter bank -> feature vector x -> phoneme classes y in [/ah/, /eh/, ... /uh/]; word classes: apple, ..., zebra.]

Page 54: Computer Vision: Vision and Modeling

Bayes Decision Theory

Why is p(x | C_k) P(C_k) such a big deal?

Example #1: Speech Recognition

[Figure: FFT / mel-scale bank features for /t/ in different contexts: /aal/, /aol/, /owl/.]

Page 55: Computer Vision: Vision and Modeling

Bayes Decision Theory

Why is p(x | C_k) P(C_k) such a big deal?

Example #1: Speech Recognition

How do Humans do it?

Page 56: Computer Vision: Vision and Modeling

Bayes Decision Theory

Why is p(x | C_k) P(C_k) such a big deal?

Example #1: Speech Recognition

"This machine can recognize speech" ??

Page 57: Computer Vision: Vision and Modeling

Bayes Decision Theory

Why is p(x | C_k) P(C_k) such a big deal?

Example #1: Speech Recognition

"This machine can wreck a nice beach" !!

Page 58: Computer Vision: Vision and Modeling

Bayes Decision Theory

Why is p(x | C_k) P(C_k) such a big deal?

Example #1: Speech Recognition

[Figure: FFT / mel-scale bank -> feature vector x -> acoustic likelihood p(x | C_k).]

Page 59: Computer Vision: Vision and Modeling

Bayes Decision Theory

Why is p(x | C_k) P(C_k) such a big deal?

Example #1: Speech Recognition

The acoustic model supplies p(x | C_k); the language model supplies the prior P(C_k):

P("wreck a nice beach") = 0.001
P("recognize speech") = 0.02
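The rescoring step is a few lines. The acoustic scores below are invented (the two hypotheses sound nearly identical, so they are close); the priors are the slide's toy language-model values:

```python
# Bayes picks the hypothesis maximizing p(x | C) * P(C).
acoustic = {"recognize speech": 0.40,      # p(x | C): hypothetical scores,
            "wreck a nice beach": 0.42}    # near-identical sounds
prior = {"recognize speech": 0.02,         # language model P(C), from slide
         "wreck a nice beach": 0.001}

best = max(acoustic, key=lambda c: acoustic[c] * prior[c])
```

Even though the acoustic score slightly favors the wrong transcript, the language-model prior makes "recognize speech" win.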

Page 60: Computer Vision: Vision and Modeling

Bayes Decision Theory

Why is p(x | C_k) P(C_k) such a big deal?

Example #2: Computer Vision

Low-level image measurements -> p(x | C_k)
High-level model knowledge -> P(C_k)

Page 61: Computer Vision: Vision and Modeling

Bayes

Why is p(x | C_k) P(C_k) such a big deal?

Example #3: Curve Fitting

Minimizing the energy E corresponds to maximizing ln p(x | c) + ln p(c).

Page 62: Computer Vision: Vision and Modeling

Bayes

Why is p(x | C_k) P(C_k) such a big deal?

Example #4: Snake Tracking

Minimizing the energy E corresponds to maximizing ln p(x | c) + ln p(c).

Page 63: Computer Vision: Vision and Modeling


• Lucas-Kanade Extensions

• Support Maps / Layers:

Robust Norm, Layered Motion, Background Subtraction, Color Layers

• Statistical Models (Forsyth+Ponce Chap. 6, Duda+Hart+Stork: Chap. 1-5)

- Bayesian Decision Theory

- Density Estimation

Computer Vision: Vision and Modeling

Page 64: Computer Vision: Vision and Modeling

Probability Density Estimation

Estimate p(x | C) from collected data: x1, x2, x3, x4, x5, ...

[Figure: scattered sample points x along the axis; which density generated them?]

Page 65: Computer Vision: Vision and Modeling

Probability Density Estimation

• Parametric Representations
• Non-Parametric Representations
• Mixture Models

Page 66: Computer Vision: Vision and Modeling

Probability Density Estimation

• Parametric Representations
  - Normal Distribution (Gaussian)
  - Maximum Likelihood
  - Bayesian Learning

Page 67: Computer Vision: Vision and Modeling

Normal Distribution

p(x) = (1 / \sqrt{2 \pi \sigma^2}) \, exp( -(x - \mu)^2 / (2 \sigma^2) )

\mu: mean,  \sigma^2: variance

Page 68: Computer Vision: Vision and Modeling

Multivariate Normal Distribution

p(x) = (2\pi)^{-d/2} |\Sigma|^{-1/2} exp( -\tfrac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) )

Page 69: Computer Vision: Vision and Modeling

Multivariate Normal Distribution

Why Gaussian?

• Simple analytical properties:
  - linear transformations of Gaussians are Gaussian
  - marginal and conditional densities of Gaussians are Gaussian
  - any moment of a Gaussian density is an explicit function of \mu and \Sigma

• "Good" model of nature:
  - Central Limit Theorem: the mean of M random variables is distributed normally in the limit.

Page 70: Computer Vision: Vision and Modeling

Multivariate Normal Distribution

Discriminant functions:

y_k(x) = ln p(x | C_k) + ln P(C_k)

Page 71: Computer Vision: Vision and Modeling

Multivariate Normal Distribution

Discriminant functions:

y_k(x) = ln p(x | C_k) + ln P(C_k)

With equal priors and equal covariances this reduces to the Mahalanobis distance.
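A sketch of that equal-prior, shared-covariance case, where the discriminant reduces to comparing Mahalanobis distances to the class means (the means and covariance below are illustrative):

```python
import numpy as np

# Two classes with a shared covariance; with equal priors, ln p(x | C_k)
# differs between classes only through the Mahalanobis distance to mu_k.
mu = {0: np.array([0.0, 0.0]), 1: np.array([3.0, 3.0])}   # assumed means
Sigma = np.array([[1.0, 0.5], [0.5, 2.0]])                # assumed shared cov
Sinv = np.linalg.inv(Sigma)

def mahalanobis2(x, m):
    d = x - m
    return d @ Sinv @ d                    # squared Mahalanobis distance

def classify(x):
    # pick the class whose mean is nearest in Mahalanobis distance
    return min(mu, key=lambda k: mahalanobis2(x, mu[k]))
```

Points near each mean are assigned to that class, with the shared covariance shaping the boundary.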

Page 72: Computer Vision: Vision and Modeling

Multivariate Normal Distribution

How to "learn" it from examples:

• Maximum Likelihood
• Bayesian Learning

Page 73: Computer Vision: Vision and Modeling

Maximum Likelihood

How to "learn" the density from examples?

[Figure: sample points x; which parameter setting explains them best?]

Page 74: Computer Vision: Vision and Modeling

Maximum Likelihood

Likelihood that the density model \theta generated the data X:

L(\theta) = p(X | \theta) = \prod_{n=1}^{N} p(x_n | \theta)

Page 75: Computer Vision: Vision and Modeling

Maximum Likelihood

Likelihood that the density model \theta generated the data X:

L(\theta) = p(X | \theta) = \prod_{n=1}^{N} p(x_n | \theta)

More convenient:  E = -ln L(\theta) = -\sum_{n=1}^{N} ln p(x_n | \theta)

Page 76: Computer Vision: Vision and Modeling

Maximum Likelihood

Learning = optimizing (maximizing the likelihood / minimizing E):

E = -ln L(\theta) = -\sum_{n=1}^{N} ln p(x_n | \theta)

Page 77: Computer Vision: Vision and Modeling

Maximum Likelihood

Maximum Likelihood for a Gaussian density has a closed-form solution:

\hat{\mu} = (1/N) \sum_{n=1}^{N} x_n

\hat{\Sigma} = (1/N) \sum_{n=1}^{N} (x_n - \hat{\mu})(x_n - \hat{\mu})^T
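The closed-form estimates are two lines of NumPy. The data here are synthetic draws from a known Gaussian, so the estimates can be checked against the generating parameters:

```python
import numpy as np

# Draw samples from a known 2-D Gaussian (independent axes for simplicity).
rng = np.random.default_rng(3)
X = rng.normal(loc=[1.0, -2.0], scale=[0.5, 1.5], size=(5000, 2))

# ML estimates: sample mean and the 1/N (not 1/(N-1)) covariance,
# matching the closed-form formulas above.
mu_hat = X.mean(axis=0)                    # (1/N) sum_n x_n
diff = X - mu_hat
Sigma_hat = (diff.T @ diff) / len(X)       # (1/N) sum_n (x_n - mu)(x_n - mu)^T
```

With N = 5000 samples both estimates land close to the true mean and covariance.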


Top Related