
Page 1: Computational Modeling of Visual Attention (1)

Computational Modeling of Visual Attention

“Attention is the cognitive process of selectively concentrating on one aspect of the environment while ignoring other things.”

Presented By: Rahul Agrawal (1265EC65R11), Soumyajit Gupta (12EC65R14)

Under Guidance of: Dr. Jayanta Mukhopadhyay, Dr. Ritwik Kumar Layek

Page 2: Computational Modeling of Visual Attention (1)


What is Attention?

Attention is the set of mechanisms that optimize/control the search processes inherent in vision.

1. Select
   1. Spatial region of interest.
   2. Temporal window of interest.
   3. World/Task/Object/Event model.
   4. Gaze/Viewpoint.

2. Restrict
   1. Task-relevant search space pruning.
   2. Location cues.
   3. Fixation points.
   4. Search depth control.

3. Suppress
   1. Spatial/Feature surround inhibition.
   2. Inhibition of return (see the sketch after this list).
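Most computational models realize the Suppress step explicitly as winner-take-all selection followed by inhibition of return on a saliency map. A minimal sketch in Python/NumPy, assuming a precomputed saliency map; the function name and the suppression radius are illustrative choices, not taken from the slides:

    import numpy as np

    def scanpath(saliency, n_fixations=5, ior_radius=30):
        """Greedy winner-take-all with inhibition of return (illustrative sketch)."""
        sal = saliency.astype(np.float32).copy()
        h, w = sal.shape
        ys, xs = np.mgrid[0:h, 0:w]
        fixations = []
        for _ in range(n_fixations):
            # Winner-take-all: attend to the currently most salient location
            y, x = np.unravel_index(np.argmax(sal), sal.shape)
            fixations.append((y, x))
            # Inhibition of return: suppress a disc around the attended location
            sal[(ys - y) ** 2 + (xs - x) ** 2 <= ior_radius ** 2] = 0.0
        return fixations

Calling scanpath on any 2-D saliency map returns the first few attended locations in order.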

Page 3: Computational Modeling of Visual Attention (1)


Factors governing Attention
• Bottom-Up Cues.
• Top-Down Cues.

Which bar catches your attention first? (Fig. 1)
Where is Launchpad McQuack? (Fig. 2)

Page 4: Computational Modeling of Visual Attention (1)


Retinal Structure
• 120 million rods (intensity)
• 7 million cones (color)
• Fovea: 2 degrees of visual field

Fig. 3

Page 5: Computational Modeling of Visual Attention (1)


Psychophysical Models of Attention
• Treisman's Feature Integration Theory.
• Wolfe's Guided Search model.

Fig. 4  Fig. 5

Page 6: Computational Modeling of Visual Attention (1)


General flow of computational models

1. Extraction of feature maps:
   1. Intensity
   2. Color
   3. Orientation
   4. Foveation
   5. Motion
   6. Shape/Size
   7. Location
   8. Foreground/Background
2. Activation map of features.
3. Normalization of activation maps (a minimal pipeline sketch follows Fig. 6 below).

Fig. 6
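The flow above (feature maps, then activation maps, normalization, and fusion) can be illustrated end to end. This is only a sketch using OpenCV/NumPy with three of the listed features (intensity, colour opponency, orientation); the Gabor parameters and the simple min-max normalization are stand-ins for the model-specific choices, not values from the slides:

    import cv2
    import numpy as np

    def normalize(m):
        """Min-max scaling to [0, 1]; placeholder for model-specific normalization."""
        m = m.astype(np.float32)
        return (m - m.min()) / (m.max() - m.min() + 1e-8)

    def toy_saliency(bgr):
        """Toy bottom-up pipeline: feature maps -> normalization -> linear fusion."""
        img = bgr.astype(np.float32) / 255.0
        b, g, r = cv2.split(img)

        intensity = (r + g + b) / 3.0                 # intensity feature
        rg = np.abs(r - g)                            # red-green opponency
        by = np.abs(b - (r + g) / 2.0)                # blue-yellow opponency

        orient = np.zeros_like(intensity)             # orientation feature
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            gabor = cv2.getGaborKernel((9, 9), 2.0, theta, 5.0, 0.5)
            orient += np.abs(cv2.filter2D(intensity, cv2.CV_32F, gabor))

        # Normalize each activation map, then fuse with equal weights
        maps = [normalize(intensity), normalize(rg + by), normalize(orient)]
        return normalize(sum(maps) / len(maps))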

Page 7: Computational Modeling of Visual Attention (1)


Image pyramids

Each coarser pyramid level is obtained by convolving the previous level with the separable 5-tap generating kernel

W = [1/16, 1/4, 3/8, 1/4, 1/16]

and subsampling by a factor of 2.

O(n, α) denotes the orientation map at scale n and orientation α.
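A dyadic pyramid built with this kernel, together with the orientation maps O(n, α), might be sketched as follows; the Gabor parameters and the number of levels are illustrative assumptions, not values from the slides:

    import cv2
    import numpy as np

    # Separable 5-tap generating kernel W = [1/16, 1/4, 3/8, 1/4, 1/16]
    W = np.array([1, 4, 6, 4, 1], dtype=np.float32) / 16.0

    def gaussian_pyramid(gray, levels=5):
        """Blur each level with the separable kernel W, then subsample by 2."""
        pyr = [gray.astype(np.float32)]
        for _ in range(1, levels):
            blurred = cv2.sepFilter2D(pyr[-1], cv2.CV_32F, W, W)
            pyr.append(blurred[::2, ::2])
        return pyr

    def orientation_maps(intensity_pyr, angles=(0, 45, 90, 135)):
        """O(n, alpha): response of pyramid level n to a Gabor filter at angle alpha."""
        maps = {}
        for n, level in enumerate(intensity_pyr):
            for alpha in angles:
                g = cv2.getGaborKernel((9, 9), 2.0, np.deg2rad(alpha), 5.0, 0.5)
                maps[(n, alpha)] = np.abs(cv2.filter2D(level, cv2.CV_32F, g))
        return maps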

Page 8: Computational Modeling of Visual Attention (1)


Computational Models of Attention

No.  Model                          Year  Ap.  Resolution
1.   Koch & Ullman [ ]              1985  I    w/16 x h/16
2.   NVT by Itti et al. [ ]         1998  I    w/16 x h/16
3.   VOCUS by Frintrop et al. [ ]   2005  B    w/4 x h/4
4.   Saliency Toolbox [ ]           2006  I    w/16 x h/16
5.   GBVS by Harel et al. [ ]       2006  I    w x h
6.   Spectral Residual [ ]          2007  I    64 x 64
7.   Judd et al.                    2009  I    w x h
8.   Achanta
9.   Sir
10.  Context aware
11.  DIVOG

Page 9: Computational Modeling of Visual Attention (1)


Koch & Ullman


Page 10: Computational Modeling of Visual Attention (1)


NVT by Itti et al. / Saliency Toolbox


Page 11: Computational Modeling of Visual Attention (1)


Spectral Residual
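Hou & Zhang's spectral residual method is compact enough to sketch in full. The version below assumes NumPy/OpenCV and the 64 x 64 working resolution listed in the model table; the smoothing kernel sizes are common defaults, not values stated on the slide:

    import cv2
    import numpy as np

    def spectral_residual(gray, size=64):
        """Spectral residual saliency (Hou & Zhang, 2007), minimal sketch."""
        img = cv2.resize(gray.astype(np.float32), (size, size))
        spectrum = np.fft.fft2(img)
        log_amp = np.log(np.abs(spectrum) + 1e-8)
        phase = np.angle(spectrum)
        # Spectral residual = log amplitude minus its local average
        residual = log_amp - cv2.blur(log_amp, (3, 3))
        # Back to the spatial domain, keep squared magnitude, smooth, normalize
        sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        sal = cv2.GaussianBlur(sal, (9, 9), 2.5)
        return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)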


Page 12: Computational Modeling of Visual Attention (1)


Achanta
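Assuming this slide refers to Achanta et al.'s frequency-tuned saliency (2009), the method reduces to the distance between a lightly blurred Lab image and the image's mean Lab colour. A minimal sketch:

    import cv2
    import numpy as np

    def frequency_tuned_saliency(bgr):
        """Frequency-tuned saliency (Achanta et al., 2009), minimal sketch."""
        lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        blurred = cv2.GaussianBlur(lab, (5, 5), 0)        # suppress fine texture
        mean_lab = lab.reshape(-1, 3).mean(axis=0)        # image-wide mean colour
        sal = np.linalg.norm(blurred - mean_lab, axis=2)  # per-pixel distance
        return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)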


Page 13: Computational Modeling of Visual Attention (1)


DIVOG


Page 14: Computational Modeling of Visual Attention (1)


VOCUS: Bottom-Up part


(Visual Object detection with Computational attention System)

• Three different feature dimensions are computed independently.
• Image pyramids of the corresponding features are computed.
• Scale maps I'', O'', C'' are computed using a center-surround mechanism.
• Scale maps are then fused to get the different feature maps (I', O', C').

For example, the intensity feature map I':
STEP 1: All maps are resized to scale S2.
STEP 2: The maps are added up pixel by pixel (sketched below).
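The center-surround and fusion steps described above can be sketched as follows for a single feature (intensity). The resize-to-S2-then-add rule follows the slide; the pyramid depth, the surround offsets, and the use of a plain absolute difference instead of separate on/off maps are simplifying assumptions:

    import cv2
    import numpy as np

    def center_surround_maps(intensity_pyr, deltas=(1, 2)):
        """Scale maps I'': |center level c - surround level c + delta|, per scale."""
        scale_maps = []
        for c in range(len(intensity_pyr) - max(deltas)):
            for d in deltas:
                surround = cv2.resize(intensity_pyr[c + d],
                                      intensity_pyr[c].shape[::-1])
                scale_maps.append(np.abs(intensity_pyr[c] - surround))
        return scale_maps

    def fuse_feature_map(scale_maps, s2_shape):
        """STEP 1: resize every scale map to the common scale S2.
           STEP 2: add them up pixel by pixel to get the feature map I'."""
        feature_map = np.zeros(s2_shape, dtype=np.float32)
        for m in scale_maps:
            feature_map += cv2.resize(m, s2_shape[::-1])
        return feature_map

Analogous fusion of their own scale maps yields O' and C'.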

Page 15: Computational Modeling of Visual Attention (1)
