Multiple Object Detection and Tracking System by Optical Approach


  • 7/30/2019 multiple Object Detection and Tracking System by optical approach

    1/6

Development of Embedded Processor Based Multiple Object Detection and Tracking System

    Sandeep Vankineni1, Shiva Prasad Yadav S. G.2, Santhosh Kumar S.3, K. Kanakaraju4

1- M.Sc. Engineering student, Department of Computer Engineering
2- Assistant Professor, Department of Computer Engineering
3- Assistant Professor, Department of Electronics and Electrical Engineering
M.S. Ramaiah School of Advanced Studies, Bangalore
4- Technical Director, System Controls

Abstract

Estimation and tracking of multiple moving objects in a dynamic environment is a challenging task in computer vision. It demands real-time computation with reliable accuracy on embedded processors. This project focuses on the identification, analysis and implementation of a suitable single and multiple moving object detection and tracking algorithm which makes use of spatiotemporal information.

In this thesis, the spatiotemporal information is obtained using Optical Flow computation. Optical Flow is a vector field which gives both the magnitude and direction of pixel movements between consecutive frames. The Horn and Schunck algorithm is used in this work to compute the Optical Flow vectors. Single and multiple objects are segmented from the computed vectors using intensity-based thresholding. Image enhancement techniques consisting of morphological operations and pixel connectivity are applied to enhance the segmented objects. The designed tracking algorithm is implemented in MATLAB. The developed algorithm is first validated and its parameters tuned using real and virtual data with a static background. For tracking in a dynamic background, the segmented objects are identified and the user is asked to choose one of the objects to track. A centroid-based approach is used to track the chosen object.

The optimized multiple object detection and tracking system has been implemented in C and further optimized for the OMAP L138 embedded DSP processor. It has been found that the computation time on the Linux platform is better in comparison with MATLAB and the embedded DSP processor. The average computation time to process a consecutive image sequence of resolution 160x120 is 0.12 seconds on an Intel Core 2 Duo processor under Linux. As future work, a high-end embedded DSP processor could be used to improve the computation time.

Key Words: Optical Flow, Segmentation, Threshold, Centroid

    Nomenclature

I(x,y)  Intensity
Ex      Intensity derivative in x direction
Ey      Intensity derivative in y direction
Et      Intensity derivative between frames (t direction)
u,v     Vectors obtained from Optical Flow
α       Regularization term

Abbreviations
HS      Horn and Schunck
CLG     Combined Local and Global Method
ROI     Region of Interest

1. INTRODUCTION

The goal of computer vision is to make useful decisions about physical objects and scenes based on sensed images. In order to make decisions about real objects it is almost always necessary to construct some description or model of them from the image [1]. Computer vision is a combination of hardware and software: the images acquired through hardware are analyzed by algorithms and the necessary information is extracted.

Multiple object detection and tracking involves major tasks such as object detection, motion estimation, object segmentation and object tracking.

An object detection algorithm must have the ability to recognize variation in visual appearance [2]. Motion estimation is the process of finding the motion vectors of a real-world 3D scene projected onto a 2D plane. Detection and motion estimation of an object are based on comparing two frames to find moving objects. Segmentation refers to segregating the detected objects into groups depending on their size, shape and other characteristics. By segmenting the object which has movement in the given frame, that particular object can be tracked.

There are two main problems to be solved: estimation of the object and data association with the object [3]. Background subtraction needs background models to detect an object [4][5][6][7]. Optical Flow algorithms are used to track objects in static and dynamic backgrounds and are robust to noise [8][9][10][11]. The Optical Flow method has been implemented through a MACH filter with log-r mapping to track an object [6], and polar-log images have also been proposed as an effective method to track multiple objects [7]. Colour and blob tracking has been done through Bayesian tracking methods [12] and also by using multiple stationary and moving cameras [13].

There are complexities involved in tracking objects in dynamic camera scenes. The complexity of object detection in a dynamic environment is that foreground and


background image data change in successive frames, which makes it difficult to detect the object.

In this paper, object detection is implemented using the Optical Flow algorithm. The obtained vectors are segmented using an intensity-based threshold technique. The segmented object is tracked based on a centroid approach.

1.1 Optical Flow

Optical Flow is the distribution of apparent velocities of movement of brightness patterns in an image. Optical Flow can arise from the relative motion of objects and the viewer [10].

Optical Flow is an approximation of the local image motion based upon local derivatives in a given sequence of images. That is, in 2D it specifies how much each image pixel moves between adjacent images, while in 3D it specifies how much each volume voxel moves between adjacent volumes [11]. The 2D image sequences used here are formed under perspective projection via the relative motion of a camera. The Optical Flow vectors of scene objects are obtained based on the estimation of the pixel intensity motion in the corresponding frame and the apparent velocity of the pattern [11]. The main classes of Optical Flow algorithms are differential methods, frequency methods and correlation methods.

Horn and Schunck is a differential technique used to compute image velocity from spatiotemporal derivatives of image intensities. There are mainly two differential techniques: local and global methods. A global method finds the vectors over the entire image and normalizes them to obtain a dense flow of vectors. On the other hand, local methods work within a locality of fixed neighborhood size and are processed to obtain the vectors. These algorithms give dense flow vectors, are robust to noise, and are sensitive to small intensity changes in successive images. The computation of differential Optical Flow is a two-step procedure:

1. Measure the spatiotemporal intensity derivatives (which is equivalent to measuring the velocities normal to the local intensity structures) [11].

2. Integrate normal velocities into full velocities, for example, either locally via a least-squares calculation or globally via a regularization [11].

Optical Flow vectors are calculated from the apparent velocities of movement of the brightness pattern in an image. The change in the intensity of an object point across consecutive frames is zero, and the equation is given as [10]

dE(x, y, t)/dt = 0    (1)

Consider that the intensity of a pixel at time t is I(x, y, t); at time t + Δt the intensity is I(x + Δx, y + Δy, t + Δt). The rate of change of motion in the x and y directions is given as (dx/dt, dy/dt) [12].

1.2 Horn and Schunck Algorithm

The intensity at time t is mathematically given by I(x, y, t), where I is the intensity of the object at position (x, y). The Horn and Schunck algorithm computes the velocity v = (u, v) for each pixel in an image by minimizing the functional given by [12]

F = ∫∫ (E_b^2 + α^2 E_c^2) dx dy    (2)

where α is the regularization parameter, and E_b^2 and E_c^2 are the data and regularization terms respectively, given by [12]

E_b^2 = (u I_x + v I_y + I_t)^2    (3)

E_c^2 = (u_x)^2 + (u_y)^2 + (v_x)^2 + (v_y)^2    (4)

In (3), I_x = ∂I/∂x, I_y = ∂I/∂y and I_t = ∂I/∂t. In (4), u_x = ∂u/∂x, u_y = ∂u/∂y, v_x = ∂v/∂x and v_y = ∂v/∂y.

A. Estimating the Partial Derivatives

We must estimate the derivatives of brightness from the discrete set of image brightness measurements available. The Horn and Schunck method is a global Optical Flow method which smooths the computed flow field, as given by

E_HS = ∫∫ ((f_x u + f_y v + f_t)^2 + α^2 (|∇u|^2 + |∇v|^2)) dx dy    (5)

It is important that the estimates of f_x, f_y and f_t be consistent. The values are obtained through partial derivatives of the image, averaged over a 2x2x2 cube of pixels:

f_x ≈ (1/4){E_{i,j+1,k} − E_{i,j,k} + E_{i+1,j+1,k} − E_{i+1,j,k} + E_{i,j+1,k+1} − E_{i,j,k+1} + E_{i+1,j+1,k+1} − E_{i+1,j,k+1}}    (6)

f_y ≈ (1/4){E_{i+1,j,k} − E_{i,j,k} + E_{i+1,j+1,k} − E_{i,j+1,k} + E_{i+1,j,k+1} − E_{i,j,k+1} + E_{i+1,j+1,k+1} − E_{i,j+1,k+1}}    (7)

f_t ≈ (1/4){E_{i,j,k+1} − E_{i,j,k} + E_{i+1,j,k+1} − E_{i+1,j,k} + E_{i,j+1,k+1} − E_{i,j+1,k} + E_{i+1,j+1,k+1} − E_{i+1,j+1,k}}    (8)

where i and j index the image rows and columns and k indexes the frame.
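As a sketch of how equations (6)-(8) can be evaluated over whole frames (a hypothetical NumPy illustration, not the thesis code; the function name is ours), the forward differences can be averaged over the 2x2x2 cube with array slicing:

```python
import numpy as np

def estimate_derivatives(E1, E2):
    """Estimate f_x, f_y, f_t per eqs (6)-(8): each derivative is the
    average of four forward differences over a 2x2x2 pixel cube spanning
    two consecutive frames. Returns arrays of shape (H-1, W-1)."""
    E1 = E1.astype(np.float64)
    E2 = E2.astype(np.float64)
    # Column differences (x direction), averaged over rows i, i+1 and frames k, k+1
    fx = 0.25 * ((E1[:-1, 1:] - E1[:-1, :-1]) + (E1[1:, 1:] - E1[1:, :-1])
                 + (E2[:-1, 1:] - E2[:-1, :-1]) + (E2[1:, 1:] - E2[1:, :-1]))
    # Row differences (y direction)
    fy = 0.25 * ((E1[1:, :-1] - E1[:-1, :-1]) + (E1[1:, 1:] - E1[:-1, 1:])
                 + (E2[1:, :-1] - E2[:-1, :-1]) + (E2[1:, 1:] - E2[:-1, 1:]))
    # Frame differences (t direction)
    ft = 0.25 * ((E2[:-1, :-1] - E1[:-1, :-1]) + (E2[1:, :-1] - E1[1:, :-1])
                 + (E2[:-1, 1:] - E1[:-1, 1:]) + (E2[1:, 1:] - E1[1:, 1:]))
    return fx, fy, ft
```

For a static horizontal intensity ramp, for instance, f_x is 1 everywhere while f_y and f_t vanish, as expected.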


B. Estimating the Laplacian of the Flow Velocities

The Laplacians of the u and v vectors are given by [1]

∇²u = ∂²u/∂x² + ∂²u/∂y²  and  ∇²v = ∂²v/∂x² + ∂²v/∂y²

To obtain denser flow fields, the Laplacians are approximated using local averages of the u and v vectors over neighbouring pixels:

ū_{i,j,k} = (1/6){u_{i−1,j,k} + u_{i,j+1,k} + u_{i+1,j,k} + u_{i,j−1,k}} + (1/12){u_{i−1,j−1,k} + u_{i−1,j+1,k} + u_{i+1,j+1,k} + u_{i+1,j−1,k}}    (9)

v̄_{i,j,k} = (1/6){v_{i−1,j,k} + v_{i,j+1,k} + v_{i+1,j,k} + v_{i,j−1,k}} + (1/12){v_{i−1,j−1,k} + v_{i−1,j+1,k} + v_{i+1,j+1,k} + v_{i+1,j−1,k}}    (10)

The new set of velocity estimates is calculated from the previous vector values to smooth the obtained vectors, given by [1]

U^{n+1} = Ū^n − E_x [E_x Ū^n + E_y V̄^n + E_t] / (α² + E_x² + E_y²)    (11)

V^{n+1} = V̄^n − E_y [E_x Ū^n + E_y V̄^n + E_t] / (α² + E_x² + E_y²)    (12)
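The update rules (9)-(12) can be sketched as follows (a hypothetical NumPy illustration assuming replicated border pixels; function and parameter names are ours, not the thesis code):

```python
import numpy as np

def local_average(f):
    """Weighted local average per eqs (9)-(10): weight 1/6 for the four
    nearest neighbours and 1/12 for the four diagonal neighbours."""
    p = np.pad(f, 1, mode='edge')
    near = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
    diag = p[:-2, :-2] + p[:-2, 2:] + p[2:, :-2] + p[2:, 2:]
    return near / 6.0 + diag / 12.0

def horn_schunck(Ex, Ey, Et, alpha=1.0, n_iter=100):
    """Iterate the Horn and Schunck updates (11)-(12) from zero flow."""
    u = np.zeros_like(Ex)
    v = np.zeros_like(Ex)
    den = alpha ** 2 + Ex ** 2 + Ey ** 2
    for _ in range(n_iter):
        u_avg = local_average(u)
        v_avg = local_average(v)
        num = Ex * u_avg + Ey * v_avg + Et
        u = u_avg - Ex * num / den   # eq (11)
        v = v_avg - Ey * num / den   # eq (12)
    return u, v
```

For uniform derivatives Ex = 1, Ey = 0, Et = -1 (a pattern shifting one pixel per frame in x), the iteration converges to u ≈ 1, v ≈ 0, consistent with the brightness constancy constraint.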

1.3 The Combined Local-Global (CLG) Method

Local and global differential methods have complementary advantages and shortcomings, so it is interesting to construct a hybrid technique that combines the robustness of local methods with the density of global approaches. The Lucas-Kanade algorithm is designed on the assumption that the flow is essentially constant in a local neighborhood, fitted by a least-squares criterion. Lucas and Kanade proposed to assume that the unknown optic flow vector is constant within some neighborhood of size ρ; in this case it is possible to determine the two constants u and v at some location (x, y, t) from a weighted least-squares fit by minimizing the quadratic form given by equation (5.15). Equation (5.11) defines the flow vector:

w := (u, v, 1)^T    (5.11)

|∇w|² := |∇u|² + |∇v|²    (5.12)

Equation (5.12) is the regularizer used to fill in information from the neighbourhood where no reliable local estimate is possible; u and v are the flow components.

∇₃f := (f_x, f_y, f_t)^T    (5.13)

In equation (5.13), f_x, f_y and f_t represent the partial derivatives of the image I with respect to x, y and t.

J_ρ(∇₃f) := K_ρ * (∇₃f ∇₃f^T)    (5.14)

In equation (5.14), J_ρ(∇₃f) is the second-moment matrix (structure tensor) derived from the gradient, and K_ρ is the kernel mask for a neighbourhood of size ρ.

E_LK(w) := w^T J_ρ(∇₃f) w    (5.15)

Equation (5.15) is the quadratic form of the Lucas-Kanade method obtained from equations (5.11)-(5.14). From Horn and Schunck, the energy can be written as

E_HS(w) = ∫ (w^T J_0(∇₃f) w + α² (|∇u|² + |∇v|²)) dx dy    (5.16)

By replacing the matrix J_0(∇₃f) with the structure tensor J_ρ(∇₃f), thereby combining the local and global approaches, the obtained equation is

E_CLG(w) = ∫ (w^T J_ρ(∇₃f) w + α² (|∇u|² + |∇v|²)) dx dy    (5.17)
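A minimal sketch of building the structure tensor entries of (5.14), using a simple box filter as a stand-in for the kernel K_ρ (an illustrative assumption; the kernel actually used is not specified here, and the function names are ours):

```python
import numpy as np

def box_smooth(a, r):
    """Average over a (2r+1) x (2r+1) window with replicated borders:
    a crude stand-in for convolution with the kernel K_rho."""
    p = np.pad(a, r, mode='edge')
    out = np.zeros(a.shape, dtype=np.float64)
    h, w = a.shape
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def structure_tensor(fx, fy, ft, r=2):
    """Entries of J_rho(grad3 f) = K_rho * (grad3 f grad3 f^T), eq (5.14).
    Returns the six distinct entries of the symmetric 3x3 tensor,
    keyed by index pair (i, j) with 0 -> f_x, 1 -> f_y, 2 -> f_t."""
    grads = (fx, fy, ft)
    J = {}
    for i in range(3):
        for j in range(i, 3):
            J[(i, j)] = box_smooth(grads[i] * grads[j], r)
    return J
```

Setting r = 0 (no smoothing) recovers the pointwise J_0 of equation (5.16), which is the sense in which Horn and Schunck is the ρ → 0 limit of CLG.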

1.4 Segmentation

The Optical Flow vectors obtained from the algorithm are segmented through a threshold technique. The mean of the image is computed and each pixel value is compared against it over the image of size M x N:

Mean = (1/(M·N)) Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} h(i, j)    (21)

k(i, j) = { 1 if h(i, j) ≥ Mean; 0 if h(i, j) < Mean },  i = 0, 1, ..., M−1, j = 0, 1, ..., N−1    (22)
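The mean thresholding of (21)-(22) amounts to a one-liner on the flow-magnitude image (a hypothetical NumPy sketch, not the thesis code):

```python
import numpy as np

def segment_by_mean(mag):
    """Binarize a flow-magnitude image: pixels at or above the global
    mean (eq 21) are marked 1, the rest 0 (eq 22)."""
    return (mag >= mag.mean()).astype(np.uint8)
```

The resulting binary mask is what the morphological operations and pixel-connectivity steps mentioned in the abstract would then clean up.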

1.5 Centroid Calculation

The centroid (X_c, Y_c) of the segmented image of size w x h is calculated from the following equations:

X_c = [Σ_{x=0}^{w−1} Σ_{y=0}^{h−1} x · I(x, y)] / [Σ_{x=0}^{w−1} Σ_{y=0}^{h−1} I(x, y)]    (23)

Y_c = [Σ_{x=0}^{w−1} Σ_{y=0}^{h−1} y · I(x, y)] / [Σ_{x=0}^{w−1} Σ_{y=0}^{h−1} I(x, y)]    (24)

The centroid of the identified object is calculated under the primary assumption that the object intensity will be the same in consecutive frames, so the newly segmented image of the object will be almost equal to the original segmented area. The centroid of the newly segmented image will therefore be near the identified object's centroid value: the centroid of the object of interest will lie within a neighbourhood of assumed size n. If the centroid falls within this neighbourhood, it is considered to be a movement of the selected object.
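The centroid of (23)-(24) for a segmented (binary or intensity) image can be sketched as follows (an illustrative NumPy snippet; the names are ours):

```python
import numpy as np

def centroid(I):
    """Intensity-weighted centroid (X_c, Y_c) of an image, eqs (23)-(24).
    x runs over columns, y over rows."""
    h, w = I.shape
    total = I.sum()
    xs = np.arange(w)
    ys = np.arange(h)
    Xc = (I.sum(axis=0) * xs).sum() / total  # weight each column by its x
    Yc = (I.sum(axis=1) * ys).sum() / total  # weight each row by its y
    return Xc, Yc
```

For a binary segmentation mask this reduces to the mean pixel position of the object region.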


abs((C_x, C_y) − (C_tx, C_ty)) ≤ window size (n)
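This gating test can be sketched as a small helper (hypothetical names; the window size n is the tuned parameter from the text):

```python
def is_same_object(c_prev, c_new, n):
    """Accept the new centroid as the tracked object's movement only if it
    falls within an n-pixel window of the previous centroid."""
    return abs(c_new[0] - c_prev[0]) <= n and abs(c_new[1] - c_prev[1]) <= n
```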


Figure 6(a) shows the labelled object movements. The user-selected object is tracked in the consecutive frames.

Figure 6(b) shows the Optical Flow vectors obtained by implementing the Horn and Schunck algorithm. The obtained Optical Flow vectors are segmented using the threshold technique and the movements are shown in Figure 6(c). Figure 6(d) shows the tracked object highlighted with a bounding box.

The input video sequence is a car moving from left to right in a dynamic background. Figure 7(a) shows the labelled object movements obtained through the Horn and Schunck method.

Fig. 7 Results of the Second Video Sequence (135x240): (a)-(d)

Figure 7(b) shows the Optical Flow vectors obtained by implementing the HS algorithm. The segmented regions are obtained by applying the threshold technique. The ROI movement is tracked and highlighted with a bounding box.

Figures 8(a) and 9(a) show the labelled images obtained through the Combined Local and Global method. The Optical Flow vectors are shown in Figures 8(b) and 9(b). The Optical Flow vectors are segmented by the threshold technique, and the tracked ROI movement is highlighted with a bounding box, as shown in 8(c) and 9(c).

Fig. 8 Results obtained through the CLG method (180x240): (a)-(d)

The results obtained through the Horn and Schunck and CLG methods are shown in Figures 6 and 8 respectively. The Optical Flow vectors obtained through the CLG method give denser flow fields than Horn and Schunck, but Horn and Schunck highlights only the larger intensity changes, which results in accurate tracking.

Fig. 9 Results obtained through the CLG method (68x120): (a)-(d)

Examining the results of the Horn and Schunck algorithm against the Combined Local and Global method, it is clear that the vector fields obtained through the combined method give a more accurate set of vectors due to its local processing technique.

2. HARDWARE IMPLEMENTATION

Figure 10 shows the block diagram for the real-time implementation of the multiple object detection and tracking system. The peripherals used are USB, Composite IN and an RS-232 port. The camera is connected to the composite video port. The data received on this port is in YUV 4:2:2 format, which has to be converted to a 24-bit RGB image for display on the LCD.

A CCD camera with 640x480 resolution provides the input. The CCD camera gives an analog video signal, which is converted to digital by TI's TVP video decoder. The CCD is controlled through the Video Port Interface (VPIF) expansion port.
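The YUV 4:2:2 to 24-bit RGB conversion mentioned above can be sketched per pixel as follows (a standard BT.601-style conversion given purely as an illustration; the exact coefficients used on the board are not stated in the text):

```python
def yuv_to_rgb(y, u, v):
    """Convert one YUV pixel (0-255 range, U and V centred at 128)
    to an (R, G, B) triple, each channel clamped to 0-255."""
    d = u - 128
    e = v - 128
    r = y + 1.402 * e
    g = y - 0.344 * d - 0.714 * e
    b = y + 1.772 * d
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)
```

In YUV 4:2:2 each (U, V) pair is shared by two adjacent Y samples, so in practice this conversion is applied twice per four-byte group.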

Fig. 10 Block Diagram of Real-Time Implementation of Multiple Object Detection and Tracking

Fig. 11 Hardware Setup for Implementing Real-Time Multiple Object Detection and Tracking

Figure 11 shows all the peripheral interfaces set up on the HAWK board, where the CCD camera is connected to composite


video IN, and a flash drive and RS-232 are used to communicate with the computer.

Figure 12(a) shows the image sequence, captured by a stationary camera, in which the object is to be tracked. Figure 12(b) shows the object tracked for the given input sequence.

Fig. 12 Object Tracking on the Processor: (a), (b)

3. TIMING ANALYSIS

Table 1 Average computation times (in seconds) on different platforms

Platform / IDE     OF Vector  Motion Segmentation  Post-processing  Motion Tracking  Total time
MATLAB on T6600    0.3279     1.4341               0.0309           0.0068           1.7997
VC++ on T6600      0.424      0.001                0.07             0.049            0.544
C on Core 2 Duo    0.12       0                    0.09             0.09             0.30
C on ARM           5.21       0.15                 2.94             7.1              15.4

From Table 1 it can be concluded that the Intel Core 2 Duo processor under Linux gives accurate vectors with reduced computation time in comparison with MATLAB and the OMAP L-138 processor under Linux. The timing results are for tracking a single object against a static background.

4. CONCLUSION

Multiple object detection and tracking in a dynamic background video has been performed using the Horn and Schunck Optical Flow method along with the CLG method. Estimation of the object movement and data association with objects in the video play a vital role in tracking an object; in the Optical Flow methods, the estimated object movement is tracked using the centroid. The execution times of the Horn and Schunck and CLG methods are nearly similar, and the vectors obtained from the combined method are more accurate. The execution time to track an object with Horn and Schunck is comparatively less than with the CLG method, though the CLG vectors are more accurate. Implementing pyramidal techniques with these methods would give much shorter execution times. Centroid-based tracking gives good results. The work can be extended to recognize the object, and dynamic occlusion remains to be solved.

REFERENCES

[1] Linda Shapiro and George Stockman, Computer Vision, Prentice Hall, Jan 2001.
[2] Henry Schneiderman and Takeo Kanade, "Object Detection Using the Statistics of Parts", International Journal of Computer Vision, Vol. 56, pp. 151-177, 2004.
[3] C. Hue, J.-P. Le Cadre and P. Perez, "Tracking Multiple Objects With Particle Filtering", IEEE Transactions on Aerospace and Electronic Systems, Vol. 38, pp. 791-812, Dec. 2002.
[4] L. Li, W. Huang, I. Y. H. Gu, and Q. Tian, "Foreground object detection in changing background based on color co-occurrence statistics", in Proc. IEEE Workshop on Applications of Computer Vision, 2002.
[5] C. Stauffer and W. E. L. Grimson, "Adaptive background mixture models for real-time tracking", in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1999.
[6] A. Mittal and N. Paragios, "Motion-based background subtraction using adaptive kernel density estimation", in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Vol. 2, pp. 302-309, 2004.
[7] A. Elgammal, D. Harwood, and L. Davis, "Non-parametric model for background subtraction", in Proc. European Conf. Computer Vision, Vol. II, pp. 751-767, May 2000.
[8] Waqar Shahid Qureshi and Abu-Baqar Nisar Alvi, "Object Tracking using MACH filter and Optical Flow in Cluttered Scenes and Variable Lighting Conditions", World Academy of Science, 2009.
[9] Hai-Yan Zhang, "Multiple Moving Objects Detection and Tracking Based On Optical Flow in Polar-Log Images", in Proc. Ninth International Conference on Machine Learning and Cybernetics, Qingdao, China, 11-14 July 2010.
[10] Berthold K. P. Horn and Brian G. Schunck, "Determining Optical Flow", Artificial Intelligence, Vol. 17, pp. 185-203, 1981.
[11] Rafael A. B. de Queiroz, Gilson A. Giraldi, Pablo J. Blanco and Raul A. Feijoo, "Determining Optical Flow using a Modified Horn and Schunck's Algorithm", 17th International Conference on Systems, Signals and Image Processing, Brazil, June 2010.
[12] Manjunath Narayana and Donna Haverkamp, "A Bayesian algorithm for tracking multiple moving objects in outdoor surveillance video", IEEE, 2007.
[13] Jinman Kang, Isaac Cohen and Gerard Medioni, "Tracking Objects from Multiple Stationary and Moving Cameras", IRIS, Computer Vision Group, University of Southern California, USA.