particle filters and projective transform


  • 8/6/2019 Particle Filters and Projective Transform

Hindawi Publishing Corporation
EURASIP Journal on Image and Video Processing
Volume 2011, Article ID 839412, 11 pages
doi:10.1155/2011/839412

Research Article
Integrating the Projective Transform with Particle Filtering for Visual Tracking

    P. L. M. Bouttefroy, 1 A. Bouzerdoum, 1 S. L. Phung, 1 and A. Beghdadi 2

1 School of Electrical, Computer & Telecom. Engineering, University of Wollongong, Wollongong, NSW 2522, Australia
2 L2TI, Institut Galilée, Université Paris 13, 93430 Villetaneuse, France

    Correspondence should be addressed to P. L. M. Bouttefroy, [email protected]

    Received 9 April 2010; Accepted 26 October 2010

    Academic Editor: Carlo Regazzoni

Copyright © 2011 P. L. M. Bouttefroy et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper presents the projective particle filter, a Bayesian filtering technique integrating the projective transform, which describes the distortion of vehicle trajectories on the camera plane. The characteristics inherent to traffic monitoring, and in particular the projective transform, are integrated in the particle filtering framework in order to improve the tracking robustness and accuracy. It is shown that the projective transform can be fully described by three parameters, namely, the angle of view, the height of the camera, and the ground distance to the first point of capture. This information is integrated in the importance density so as to explore the feature space more accurately. By providing a fine distribution of the samples in the feature space, the projective particle filter outperforms the standard particle filter on different tracking measures. First, the resampling frequency is reduced due to a better fit of the importance density for the estimation of the posterior density. Second, the mean squared error between the feature vector estimate and the true state is reduced compared to the estimate provided by the standard particle filter. Third, the tracking rate is improved for the projective particle filter, hence decreasing track loss.

    1. Introduction and Motivations

Vehicle tracking has been an active field of research within the past decade due to the increase in computational power and the development of video surveillance infrastructure. The area of Intelligent Transportation Systems (ITSs) is in need of robust tracking algorithms to ensure that top-end decisions such as automatic traffic control and regulation, automatic video surveillance, and abnormal event detection are made with a high level of confidence. Accurate trajectory extraction provides essential statistics for traffic control, such as speed monitoring, vehicle count, and average vehicle flow. Therefore, as a low-level task at the bottom-end of ITS, vehicle tracking must provide accurate and robust information to higher-level modules making intelligent decisions. In this sense, intelligent transportation systems are a major breakthrough since they alleviate the need for devices that can be prohibitively costly or simply impractical to implement. For instance, the installation of inductive loop sensors generates traffic perturbations that cannot always be afforded

in dense traffic areas. Also, robust video tracking enables new applications such as vehicle identification and customized statistics that are not available with current technologies, for example, suspect vehicle tracking or differentiated vehicle speed limits. At the top-end of the system are high-level tasks such as event detection (e.g., accident and animal crossing) or traffic regulation (e.g., dynamic adaptation and lane allocation). Robust vehicle tracking is therefore necessary to ensure effective performance.

Several techniques have been developed for vehicle tracking over the past two decades. The most common ones rely on Bayesian filtering, and Kalman and particle filters in particular. Kalman filter-based tracking usually relies on background subtraction followed by segmentation [1, 2], although some techniques implement spatial features such as corners and edges [3, 4] or use Bayesian energy minimization [5]. Exhaustive search techniques involving template matching [6] or occlusion reasoning [7] have also been used for tracking vehicles. Particle filtering is preferred when the hypothesis of multimodality is necessary,


for example, in case of severe occlusion [8, 9]. Particle filters offer the advantage of relaxing the Gaussian and linearity constraints imposed upon the Kalman filter. On the downside, particle filters only provide a suboptimal solution, which converges in a statistical sense to the optimal solution. The convergence is of the order O(N_S), where N_S is the number of particles; consequently, they are computation-intensive algorithms. For this reason, particle filtering techniques for visual tracking have been developed only recently with the widespread availability of powerful computers. Particle filters for visual object tracking were first introduced by Isard and Blake, as part of the CONDENSATION algorithm [10, 11], and Doucet [12]. Arulampalam et al. provide a more general introduction to Bayesian filtering, encompassing particle filter implementations [13]. Within the last decade, the interest in particle filters has been growing exponentially. Early contributions were based on the Kalman filter models; for instance, Van Der Merwe et al. discussed an extended particle filter (EPF) and proposed an unscented particle filter (UPF), using the unscented transform to capture second-order nonlinearities [14]. Later, a Gaussian sum particle filter was introduced to reduce the computational complexity [15]. There has also been a plethora of theoretic improvements to the original algorithm such as the kernel particle filter [16, 17], the iterated extended Kalman particle filter [18], the adaptive sample size particle filter [19, 20], and the augmented particle filter [21]. As far as applications are concerned, particle filters are widely used in a variety of tracking tasks: head tracking via active contours [22, 23], edge and color histogram tracking [24, 25], sonar [26] and phase [27] tracking, to name a few. Particle filters have also been used for object detection and segmentation [28, 29], and for audiovisual fusion [30].

Many vehicle tracking systems have been proposed that integrate features of the object, such as the traditional kinematic model parameters [2, 7, 31–33] or scale [1], in the tracking model. However, these techniques seldom integrate information specific to the vehicle tracking problem, which is key to the improvement of track extraction; rather, they are general estimators disregarding the particular traffic surveillance context. Since particle filters require a large number of samples in order to achieve accurate and robust tracking, information pertaining to the behavior of the vehicle is instrumental in drawing samples from the importance density. To this end, the projective fractional transform is used to map the vehicle position in the real world to its position on the camera plane. In [35], Bouttefroy et al. proposed the projective Kalman filter (PKF), which integrates the projective transform into the Kalman tracker to improve its performance. However, the PKF tracker differs from the proposed particle filter tracker in that the former relies on background subtraction to extract the objects, whereas the latter uses color information to track the objects.

The aim of this paper is to study the performance of a particle filter integrating vehicle characteristics in order to decrease the size of the particle set for a given error rate. In this framework, the task of vehicle tracking can be approached as a specific application of object tracking in a constrained environment. Indeed, vehicles do not evolve freely in their environment but follow particular trajectories. The most notable constraints imposed upon vehicle trajectories in traffic video surveillance are summarized below.

Low-Definition and Highly Compressed Videos. Traffic monitoring video sequences are often of poor quality because of the inadequate infrastructure of the acquisition and transport system. Therefore, the size of the sample set (N_S) necessary for vehicle tracking must be large to ensure robust and accurate estimates.

Slowly-Varying Vehicle Speed. A common assumption in vehicle tracking is the uniformity of the vehicle speed. The narrow angle of view of the scene and the short period of time a vehicle is in the field of view justify this assumption, especially when tracking vehicles on a highway.

Constrained Real-World Vehicle Trajectory. Normal driving rules impose a particular trajectory on the vehicle. Indeed, the curvature of the road and the different lanes constrain the position of the vehicle. Figure 1 illustrates the pattern of vehicle trajectories resulting from projective constraints that can be exploited in vehicle tracking.

Projection of Vehicle Trajectory on the Camera Plane. The trajectory of a vehicle on the camera plane undergoes severe distortion due to the low elevation of the traffic surveillance camera. The curve described by the position of the vehicle converges asymptotically to the vanishing point.

We propose here to integrate these characteristics to obtain a finer estimate of the vehicle feature vector. More specifically, the mapping of the real-world vehicle trajectory through a fractional transform enables a better estimate of the posterior density. A particle filter is thus implemented, which integrates cues of the projection in the importance density, resulting in a better exploration of the state space and a reduction of the variance in the trajectory estimation. Preliminary results of this work have been presented in [34]; this paper develops the work further. Its main contributions are: (i) a complete description of the homographic projection problem for vehicle tracking and a review of the solutions proposed to date; (ii) an evaluation of the projective particle filter tracking rate on a comprehensive dataset comprising around 2,600 vehicles; (iii) an evaluation of the resampling accuracy for the projective particle filter; (iv) a comparison of the performance of the projective particle filter and the standard particle filter using three different measures, namely, the resampling frequency, the mean squared error, and the tracking drift. The rest of the paper is organized as follows. Section 2 introduces the general particle filtering framework. Section 3 develops the proposed Projective Particle Filter (PPF). An analysis of the PPF performance versus the standard particle filter is presented in Section 4 before concluding in Section 5.


(a) [image] (b) [plot: position on the image (x), 0–140, vs. ground distance from the camera (r), 0–500]

Figure 1: Examples of vehicle trajectories from a traffic monitoring video sequence. Most vehicles follow a predetermined path: (a) vehicle trajectories in the image; (b) vehicle positions in the image w.r.t. the distance from the monitoring camera.

    2. Bayesian and Particle Filtering

This section presents a brief review of Bayesian and particle filtering. Bayesian filtering provides a convenient framework for object tracking due to the weak assumptions on the state space model and the first-order Markov chain recursive properties. Without loss of generality, let us consider a system with state x of dimension n and observation z of dimension m. Let x_{1:k} = {x_1, ..., x_k} and z_{1:k} = {z_1, ..., z_k} denote, respectively, the set of states and the set of observations prior to and including time instant t_k. The state space model can be expressed as

x_k = f(x_{k−1}) + v_{k−1}, (1)

z_k = h(x_k) + n_k, (2)

where the process and observation noises, v_{k−1} and n_k, respectively, are assumed to be additive. The vector-valued functions f and h are the process and observation functions, respectively. Bayesian filtering aims to estimate the posterior probability density function (pdf) of the state x given the observations z as p(x_k | z_{1:k}). The probability density function is estimated recursively, in two steps: prediction and update. First, let us denote by p(x_{k−1} | z_{1:k−1}) the posterior pdf at time t_{k−1}, and let us assume it is known. The prediction stage relies on the Chapman-Kolmogorov equation to estimate the prior pdf p(x_k | z_{1:k−1}):

p(x_k | z_{1:k−1}) = ∫ p(x_k | x_{k−1}) p(x_{k−1} | z_{1:k−1}) dx_{k−1}. (3)

When a new observation becomes available, the prior is updated as follows:

p(x_k | z_{1:k}) = κ_k p(z_k | x_k) p(x_k | z_{1:k−1}), (4)

where p(z_k | x_k) is the likelihood function and κ_k is a normalizing constant, κ_k = ( ∫ p(z_k | x_k) p(x_k | z_{1:k−1}) dx_k )^{−1}. As the posterior probability density function p(x_k | z_{1:k}) is recursively estimated through (3) and (4), only the initial density p(x_0 | z_0) needs to be known.
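The prediction-update recursion (3)-(4) can be made concrete with a toy grid-based filter, where the Chapman-Kolmogorov integral becomes a matrix-vector product. The transition and likelihood models below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Discrete state space: positions 0..49 along a road.
N = 50
states = np.arange(N)

# Transition model p(x_k | x_{k-1}): move forward ~2 cells with noise.
trans = np.zeros((N, N))
for prev in range(N):
    for step, prob in [(1, 0.25), (2, 0.5), (3, 0.25)]:
        trans[min(prev + step, N - 1), prev] += prob

def likelihood(z, x):
    # Gaussian measurement model p(z_k | x_k) with std 2 (illustrative).
    return np.exp(-0.5 * ((z - x) / 2.0) ** 2)

posterior = np.full(N, 1.0 / N)             # p(x_0 | z_0): uniform
for z in [4, 6, 8, 10]:                     # incoming observations
    prior = trans @ posterior               # Chapman-Kolmogorov, eq. (3)
    unnorm = likelihood(z, states) * prior  # Bayes update, eq. (4)
    posterior = unnorm / unnorm.sum()       # kappa_k normalization

print(states[np.argmax(posterior)])
```

The posterior mass ends up concentrated around the last observation, which is exactly what the particle filter approximates with samples when the state space is continuous.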

Monte Carlo methods, and more specifically particle filters, have been extensively employed to tackle the Bayesian problem represented by (3) and (4) [36, 37]. Multimodality enables the system to evolve in time with several hypotheses on the state in parallel. This property is practical to corroborate or reject an eventual track after several frames. However, the Bayesian problem then cannot be solved in closed form, as in the Kalman filter, due to the complex density shapes involved. Particle filters rely on Sequential Monte Carlo (SMC) simulations, as a numerical method, to circumvent the direct evaluation of the Chapman-Kolmogorov equation (3). Let us assume that a large number of samples {x_k^i, i = 1, ..., N_S} are drawn from the posterior distribution p(x_k | z_{1:k}). It follows from the law of large numbers that

p(x_k | z_{1:k}) ≈ Σ_{i=1}^{N_S} w_k^i δ(x_k − x_k^i), (5)

where the w_k^i are positive weights satisfying Σ_i w_k^i = 1, and δ(·) is the Kronecker delta function. However, because it is often difficult to draw samples from the posterior pdf, an importance density q(·) is used to generate the samples x_k^i. It can then be shown that the recursive estimate of the posterior density via (3) and (4) can be carried out by the set of particles, provided that the weights are updated as follows [13]:

w_k^i ∝ w_{k−1}^i · [ p(z_k | x_k^i) p(x_k^i | x_{k−1}^i) / q(x_k^i | x_{k−1}^i, z_k) ] = w_{k−1}^i λ_k p(z_k | x_k^i), (6)

where λ_k denotes the ratio p(x_k^i | x_{k−1}^i) / q(x_k^i | x_{k−1}^i, z_k).

The choice of the importance density q(x_k^i | x_{k−1}^i, z_k) is crucial in order to obtain a good estimate of the posterior pdf. It has been shown that the set of particles and associated weights {x_k^i, w_k^i} will eventually degenerate, that is, most of the weights will be carried by a small number of samples


[Diagram labeling the camera height H, the half angle of view α/2, the ground distance D, the positions r and x, the directions d and d_p, and the vanishing point projection X_vp.]

Figure 2: Projection of the vehicle on a plane parallel to the image plane of the camera. The graph shows a cross-section of the scene along the direction d (tangential to the road).

and a large number of samples will have negligible weight [38]. In such a case, and because samples are not drawn from the true posterior, the degeneracy problem cannot be avoided and resampling of the set needs to be performed. Nevertheless, the closer the importance density is to the true posterior density, the slower the set {x_k^i, w_k^i} will degenerate; a good choice of importance density reduces the need for resampling. In this paper, we propose to model the fractional transform mapping the real-world space onto the camera plane and to integrate the projection in the particle filter through the importance density q(x_k^i | x_{k−1}^i, z_k).

    3. Projective Particle Filter

The particle filter developed is named Projective Particle Filter (PPF) because the vehicle position is projected on the camera plane and used as an inference to diffuse the particles in the feature space. One of the particularities of the PPF is to differentiate between the importance density and the transition prior pdf, whilst the SIR (Sampling Importance Resampling) filter, also called standard particle filter, does not. Therefore, we need to define the importance density from the fractional transform as well as the transition prior p(x_k | x_{k−1}) and the likelihood p(z_k | x_k) in order to update the weights in (6).

3.1. Linear Fractional Transformation. The fractional transform is used to estimate the position of the object on the camera plane (x) from its position on the road (r). The physical trajectory is projected onto the camera plane as shown in Figure 2. The distortion of the object trajectory happens along the direction d, tangential to the road. The axis d_p is parallel to the camera plane; the projection x of the vehicle position on d_p is thus proportional to the position of the vehicle on the camera plane. The value of x is scaled by X_vp, the projection of the vanishing point on d_p, to obtain the position of the vehicle in terms of pixels. For practical implementation, it is useful to express the projection along the tangential direction d onto the d_p axis in terms of video footage parameters that are easily accessible, namely:

(i) the angle of view (α),

(ii) the height of the camera (H),

(iii) the ground distance (D) between the camera and the first location captured by the camera.

It can be inferred from Figure 2, after applying the law of cosines, that

x² = r² + ℓ² − 2rℓ cos φ, (7)

ℓ² = x² + r² − 2rx cos β, (8)

where cos φ = (D + r) / √(H² + (D + r)²) and β = arctan(D/H) + α/2. After squaring and substituting ℓ² in (7), we obtain

r² (x² + r² − 2rx cos β) cos²φ = (r² − rx cos β)². (9)

Grouping the terms in x to get a quadratic form leads to

x² (cos²φ − cos²β) + 2xr (1 − cos²φ) cos β + r² (cos²φ − 1) = 0. (10)

After discarding the non-physically acceptable solution, one gets

x(r) = rH / ( (D + r) sin β + H cos β ). (11)

However, because D ≫ H and α is small in practice (see Table 1), the angle β is approximately equal to π/2 and, consequently, (11) simplifies to x = rH/(D + r). Note that this result can be verified using the triangle proportionality theorem. Finally, we scale x with the position of the vanishing point X_vp in the image to find the position of the vehicle in terms of pixel location, which yields

x = ( X_vp / lim_{r→∞} x(r) ) x(r) = ( X_vp / H ) x(r). (12)

(The position of the vanishing point can either be approximated manually or estimated automatically [39]. In our experiments, the position of the vanishing point is estimated manually.) The projected speed and the observed size of the object on the camera plane are also important variables for the problem of tracking, and hence it is necessary to derive them. Let v = dr/dt and ẋ = dx/dt. Differentiating (12), after substituting for x (x = X_vp r/(D + r)) and eliminating r, yields the observed speed of the vehicle on the camera plane:

ẋ = f_x(x) = (X_vp − x)² v / (X_vp D). (13)

The observed size of the vehicle b can also be derived from the position x if the real size s of the vehicles is known. If the center of the vehicle is x, its extremities are located at x + s/2 and x − s/2. Therefore, applying the fractional transformation yields

b = f_b(x) = s D X_vp / ( ( D X_vp / (X_vp − x) )² − (s/2)² ). (14)
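As a numerical sanity check on (11)-(14), the sketch below compares the exact projection with its β ≈ π/2 simplification and evaluates the apparent speed and size along a track. The parameters are loosely based on Video 001 in Table 1, and the vanishing point X_vp = 120 pixels is an assumed value:

```python
import math

H, D = 6.0, 48.0               # camera height and ground distance (m)
alpha = math.radians(8.5)      # angle of view
X_vp = 120.0                   # assumed vanishing point position (pixels)
beta = math.atan2(D, H) + alpha / 2

def x_exact(r):
    # Eq. (11): exact projection of ground distance r onto the d_p axis.
    return r * H / ((D + r) * math.sin(beta) + H * math.cos(beta))

def x_pixel(r):
    # Eq. (12) with the beta ~ pi/2 simplification x(r) = rH/(D + r).
    return X_vp / H * (r * H / (D + r))

def apparent_speed(x, v):
    # Eq. (13): observed pixel speed for real-world speed v.
    return (X_vp - x) ** 2 * v / (X_vp * D)

def apparent_size(x, s):
    # Eq. (14): observed size for real vehicle size s.
    q = D * X_vp / (X_vp - x)
    return s * D * X_vp / (q ** 2 - (s / 2) ** 2)

for r in [10.0, 100.0, 400.0]:
    x = x_pixel(r)
    print(f"r={r:5.0f}  x={x:6.2f}  "
          f"xdot={apparent_speed(x, 25.0):6.2f}  b={apparent_size(x, 4.5):5.2f}")
```

The exact and simplified projections agree to a small fraction of a pixel for these parameters, and both the apparent speed and apparent size shrink as the vehicle approaches the vanishing point, which is the distortion pattern the PPF exploits.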


Table 1: Video sequences used for the evaluation of the algorithm performance, along with the duration, the number of vehicles, and the setting parameters, namely, the height (H), the angle of view (α), and the distance to the field of view (D).

Video sequence   Duration   No. of vehicles   Camera height (H)   Angle of view (α)   Distance to FOV (D)
Video 001        199 s       74               6 m                 8.5 ± 0.10 deg      48 m
Video 002        360 s      115               5.5 m               15.7 ± 0.12 deg     75 m
Video 003        480 s      252               5.5 m               15.7 ± 0.12 deg     75 m
Video 004        367 s      132               6 m                 19.2 ± 0.12 deg     29 m
Video 005        140 s       33               5.5 m               12.5 ± 0.15 deg     80 m
Video 006        312 s       83               5.5 m               19.2 ± 0.2 deg      57 m
Video 007        302 s       84               5.5 m               19.2 ± 0.2 deg      57 m
Video 008        310 s       89               5.5 m               19.2 ± 0.2 deg      57 m
Video 009         80 s       42               5.5 m               19.2 ± 0.2 deg      57 m
Video 010        495 s      503               7.5 m               6.9 ± 0.15 deg      135 m
Video 011        297 s      286               7.5 m               6.9 ± 0.15 deg      80 m
Video 012        358 s      183               8 m                 21.3 ± 0.2 deg      43 m
Video 013        377 s      188               8 m                 21.3 ± 0.2 deg      43 m
Video 014        278 s      264               6 m                 18.5 ± 0.18 deg     64 m
Video 015        269 s      267               6 m                 18.5 ± 0.18 deg     64 m

3.2. Importance Density and Transition Prior. The projective particle filter integrates the fractional transform into the importance density q(x_k^i | x_{k−1}^i, z_k). The state vector x is modeled with the position, the speed, and the size of the vehicle in the image:

x = [ x, y, ẋ, ẏ, b ]^T, (15)

where x and y are the Cartesian coordinates of the vehicle, ẋ and ẏ are the respective speeds, and b is the apparent size of the vehicle; more precisely, b is the radius of the circle best fitting the vehicle shape. Object tracking is traditionally performed using a standard kinematic model (Newton's laws), taking into account the position, the speed, and the size of the object. (The size of the object is essentially maintained for the purpose of likelihood estimation.) In this paper, the kinematic model is refined with the estimation of the speed and the object size through the fractional transform along the distorted direction d. Therefore, the process function f, defined in (1), is given by

f(x_{k−1}) = [ x_{k−1} + f_x(x_{k−1}),  y_{k−1} + ẏ_{k−1},  f_x(x_{k−1}),  ẏ_{k−1},  f_b(x_{k−1}) ]^T. (16)

It is important to note that, since the fractional transform is along the x-axis, the function f_x provides a better estimate than a simple kinematic model taking into account the speed of the vehicle. On the other hand, the distortion along the y-axis is much weaker and such an estimation is not necessary. One novel aspect of this paper is the estimation of the vehicle position along the x-axis and its size through f_x and f_b, respectively. It is worthwhile noting that the standard kinematic model of the vehicle is recovered when f_x(x_{k−1}) = ẋ_{k−1} and f_b(x_{k−1}) = b_{k−1}. The vector-valued function g(x_{k−1}) = { f(x_{k−1}) | f_x(x_{k−1}) = ẋ_{k−1}, f_b(x_{k−1}) = b_{k−1} } denotes the standard kinematic model in the sequel. The samples of the PPF are drawn from the importance density q(x_k | x_{k−1}, z_k) = N(x_k; f(x_{k−1}), Σ_q), and the standard kinematic model is used in the prior density p(x_k | x_{k−1}) = N(x_k; g(x_{k−1}), Σ_p), where N(·; μ, Σ) denotes the normal distribution with covariance matrix Σ centered on μ. The distributions are considered Gaussian and isotropic to evenly spread the samples around the estimated state vector at time step k.
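Combining (13), (14), and (16), the process function f can be sketched as follows. The scene parameters X_VP, D, S and the real-world speed are illustrative assumptions, not values from the paper:

```python
import numpy as np

X_VP, D, S = 120.0, 48.0, 4.5  # assumed vanishing point (px), distance (m), size (m)

def f_x(x, v=25.0):
    # Eq. (13): apparent speed at pixel position x (v is the real-world speed).
    return (X_VP - x) ** 2 * v / (X_VP * D)

def f_b(x):
    # Eq. (14): apparent size at pixel position x.
    q = D * X_VP / (X_VP - x)
    return S * D * X_VP / (q ** 2 - (S / 2) ** 2)

def process(state):
    # Eq. (16): state = [x, y, x_dot, y_dot, b]; only the x-components
    # are propagated through the fractional transform.
    x, y, x_dot, y_dot, b = state
    return np.array([x + f_x(x), y + y_dot, f_x(x), y_dot, f_b(x)])

state = np.array([40.0, 60.0, 10.0, 0.0, 20.0])
print(process(state))
```

Replacing f_x and f_b by the identities ẋ_{k−1} and b_{k−1} recovers the standard kinematic model g used in the prior density.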

3.3. Likelihood Estimation. The estimation of the likelihood p(z_k | x_k^i) is based on the distance between color histograms, as in [40]. Let us define an M-bin histogram H = {H[u]}_{u=1,...,M}, representing the distribution of J color pixel values c_i, as follows:

H[u] = (1/J) Σ_{i=1}^{J} δ( κ(c_i) − u ), (17)

where u is the set of bins regularly spaced on the interval [1, M], κ is a linear binning function providing the bin index of pixel value c_i, and δ(·) is the Kronecker delta function. The pixels c_i are selected from a circle of radius b centered on (x, y). Indeed, after projection on the camera plane, the circle is the standard shape that delineates the vehicle best. Let us denote the target and the candidate histograms by


H_t and H_x, respectively. The Bhattacharyya distance between two histograms is defined as

ρ(x) = √( 1 − Σ_{u=1}^{M} √( H_t[u] H_x[u] ) ). (18)

Finally, the likelihood p(z_k | x_k^i) is calculated as

p(z_k | x_k^i) ∝ exp( −ρ(x_k^i) ). (19)
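Equations (17)-(19) can be sketched as below. For brevity this uses a single-channel 8-bin histogram rather than the paper's color histograms; the bin count and the synthetic patches are illustrative assumptions:

```python
import numpy as np

M = 8  # number of bins (illustrative)

def histogram(pixels):
    # Eq. (17): normalized M-bin histogram via a linear binning function.
    bins = np.minimum((pixels * M).astype(int), M - 1)  # pixels in [0, 1)
    h = np.bincount(bins, minlength=M).astype(float)
    return h / h.sum()

def bhattacharyya(h_t, h_x):
    # Eq. (18): 0 for identical histograms, 1 for disjoint ones.
    return np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(h_t * h_x))))

def likelihood(h_t, h_x):
    # Eq. (19): likelihood decays with the Bhattacharyya distance.
    return np.exp(-bhattacharyya(h_t, h_x))

rng = np.random.default_rng(1)
target = histogram(rng.uniform(0.2, 0.4, 500))  # dark-ish target patch
same = histogram(rng.uniform(0.2, 0.4, 500))    # similar candidate
other = histogram(rng.uniform(0.6, 0.9, 500))   # dissimilar candidate
print(likelihood(target, same), likelihood(target, other))
```

A candidate region resembling the target receives a likelihood near 1, while a dissimilar region is penalized, which is what drives the weight update (6).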

3.4. Projective Particle Filter Implementation. Because most approaches to tracking take the prior density as importance density, the samples x_k^i are directly drawn from the standard kinematic model. In this paper, we differentiate between the prior and the importance density to obtain a better distribution of the samples. The initial state x_0 is chosen as x_0 = [x_0, y_0, 10, 0, 20]^T, where x_0 and y_0 are the initial coordinates of the object. The parameters are selected to cater for the majority of vehicles. The position of the vehicle (x_0, y_0) is estimated either manually or with an automatic procedure (see Section 4.2). The speed along the x-axis corresponds to the average pixel displacement for a speed of 90 km/h, and the apparent size b is set so that the elliptical region for histogram tracking encompasses at least the vehicle. The size is overestimated to fit all cars and most standard trucks at initialization; the size is then adjusted through tracking by the particle filters. The value x_0 is used to draw the set of samples x_0^i: q(x_0 | z_0) = N(x_0^i; f(x_0), Σ_q). The transition prior p(x_k | x_{k−1}) and the importance density q(x_k | x_{k−1}, z_k) are both modeled with normal distributions. The prior covariance matrix and mean are initialized as Σ_p = diag([6 1 1 1 4]) and μ_p = g(x_0), respectively, and Σ_q = diag([1 1 0.5 1 4]) and μ_q = f(x_0) for the importance density. These initializations represent the physical constraints on the vehicle speed.

A resampling scheme is necessary to avoid the degeneracy of the particle set. Systematic sampling [41] is performed when the variance of the weight set is too large, that is, when the number of effective samples N_eff falls below a given threshold N_T, arbitrarily set to 0.6 N_S in the implementation. The number of effective samples N_eff is evaluated as

N_eff = 1 / Σ_{i=1}^{N_S} (w_k^i)². (20)
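The effective sample size (20) and the systematic sampling step it triggers can be sketched as follows (a generic implementation of systematic resampling, not the authors' code):

```python
import numpy as np

def n_eff(weights):
    # Eq. (20): effective sample size; equals N_S for uniform weights.
    return 1.0 / np.sum(weights ** 2)

def systematic_resample(particles, weights, rng):
    # Systematic resampling: one uniform offset, then evenly spaced
    # pointers into the cumulative weight distribution.
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(n, 1.0 / n)

rng = np.random.default_rng(0)
particles = np.arange(10.0)
weights = np.array([0.91] + [0.01] * 9)    # degenerate weight set

print(n_eff(weights))                      # well below N_S = 10
if n_eff(weights) < 0.6 * len(weights):    # threshold N_T = 0.6 N_S
    particles, weights = systematic_resample(particles, weights, rng)
print(particles)
```

After resampling, the dominant particle is duplicated and the weights are reset to 1/N_S, which is the behavior encoded in the while-loop of Algorithm 1.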

The implementation of the projective particle filter algorithm is summarized in Algorithm 1.

    4. Experiments and Results

In this section, the performance of the standard and the projective particle filters is evaluated on traffic surveillance data. Since the two vehicle tracking algorithms possess the same architecture, the difference in performance can be attributed to the distribution of particles through the importance density integrating the projective transform. The experimental results presented in this section aim to evaluate

Require: x_0^i ∼ q(x_0 | z_0) and w_0^i = 1/N_S
for i = 1 to N_S do
    Compute f(x_{k−1}^i) from (16)
    Draw x_k^i ∼ q(x_k^i | x_{k−1}^i, z_k) = N(x_k^i; f(x_{k−1}^i), Σ_q)
    Compute the ratio λ_k = p(x_k^i | x_{k−1}^i) / q(x_k^i | x_{k−1}^i, z_k)
    Update the weights w_k^i = w_{k−1}^i λ_k p(z_k | x_k^i)
end for
Normalize the weights w_k^i
if N_eff < N_T then
    l = 0
    for i = 1 to N_S do
        θ_i = cumsum(w_k^i)
        while l/N_S < θ_i do
            x_k^l = x_k^i
            w_k^l = 1/N_S
            l = l + 1
        end while
    end for
end if

Algorithm 1: Projective particle filter algorithm.
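Algorithm 1 translates almost line by line into the following single-iteration sketch. For readability the state is one-dimensional, and the models g (prior), f (importance), the noise levels, and the observation are illustrative stand-ins for the paper's five-dimensional vehicle model:

```python
import numpy as np

rng = np.random.default_rng(2)
Ns = 500

def normal_pdf(x, mean, std):
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

# 1-D stand-ins: g = standard kinematic model (prior), f = projective model
# (importance density); sigma_p and sigma_q play the roles of Sigma_p, Sigma_q.
g = lambda x: x + 1.0
f = lambda x: x + 1.2
sigma_p, sigma_q = 1.0, 0.7

prev = rng.normal(0.0, 1.0, Ns)   # particles x_{k-1}^i
w = np.full(Ns, 1.0 / Ns)         # weights w_{k-1}^i
z = 1.3                           # observation z_k

# Draw from the importance density q = N(f(x_{k-1}), sigma_q)
particles = rng.normal(f(prev), sigma_q)
# Ratio lambda_k = p(x_k | x_{k-1}) / q(x_k | x_{k-1}, z_k)
lam = normal_pdf(particles, g(prev), sigma_p) / normal_pdf(particles, f(prev), sigma_q)
# Weight update (6) and normalization
w = w * lam * normal_pdf(z, particles, 0.5)  # likelihood p(z|x), std 0.5 assumed
w /= w.sum()

n_eff = 1.0 / np.sum(w ** 2)
if n_eff < 0.6 * Ns:  # systematic resampling branch of Algorithm 1
    pos = (rng.uniform() + np.arange(Ns)) / Ns
    particles = particles[np.searchsorted(np.cumsum(w), pos)]
    w = np.full(Ns, 1.0 / Ns)

print(np.sum(w * particles))  # posterior mean estimate
```

The only structural difference between this and the standard SIR filter is the ratio λ_k: with f = g and σ_q = σ_p, the ratio is identically 1 and the update reduces to weighting by the likelihood alone.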

(1) the improvement in sample distribution with the implementation of the projective transform,

(2) the improvement in the position error of the vehicle by the projective particle filter,

(3) the robustness of vehicle tracking (in terms of an increase in tracking rate) due to the fine distribution of the particles in the feature space.

The algorithm is tested on 15 traffic monitoring video sequences, labeled Video 001 to Video 015 in Table 1. The number of vehicles and the duration of the video sequences, as well as the parameters of the projective transform, are summarized in Table 1. Around 2,600 moving vehicles are recorded in the set of video sequences. The videos range from clear weather to cloudy with weak illumination conditions. The camera was positioned above highways at a height ranging from 5.5 m to 8 m. Although the camera was placed at the center of the highways, a shift in the position has no effect on the performance, be it only for the earlier detection of vehicles and the length of the vehicle path. On the other hand, a rotation of the camera would affect the value of D and the position of the vanishing point X_vp. The video sequences are low-definition (128 × 160) to comply with the characteristics of traffic monitoring sequences. The video sequences are footage of vehicles traveling on a highway. Although the roads are straight in the dataset, the algorithm can be applied to curved roads with approximation of the parameters over short distances, because the projection tends to linearize the curves in the image plane.

4.1. Distribution of Samples. An evaluation of the importance density can be performed by comparing the distribution of the samples in the feature space for the standard and the projective particle filters. Since the degeneracy of


the particle set indicates the degree of fitting of the importance density through the number of effective samples N_eff (see (20)), the frequency of particle resampling is an indicator of the similarity between the posterior and the importance density. Ideally, the importance density should be the posterior. This is not possible in practice because the posterior is unknown; if the posterior were known, tracking would not be required.

First, the mean squared error (MSE) between the true state of the feature vector and the set of particles is presented without resampling, in order to compare the tracking accuracy of the projective and standard particle filters based solely on the performance of the importance and prior densities, respectively. Consequently, the fit of the importance density to the vehicle tracking problem is evaluated. Furthermore, computing the MSE provides a quantitative estimate of the error. Since there is no resampling, a large number of particles is required in this experiment: we chose N_S = 300. Figure 3 shows the position MSE for the standard and the projective particle filters for 80 trajectories in the Video 008 sequence; the average MSEs are 1.10 and 0.58, respectively.

Second, the resampling frequencies for the projective and the standard particle filters are evaluated on the entire dataset. A decrease in the resampling frequency is the result of a better (i.e., closer to the posterior density) modeling of the density from which the samples are drawn. The resampling frequencies are expressed as the percentage of resampling steps compared to the direct sampling at each time step k. Figure 4 displays the resampling frequencies across the entire dataset for each particle filter. On average, the projective particle filter resamples 14.9% of the time and the standard particle filter 19.4%, that is, the standard particle filter resamples about 30% more often.

For the problem of vehicle tracking, the importance density q used in the projective particle filter is therefore more suitable for drawing samples, compared to the prior density used in the standard particle filter. An accurate importance density is beneficial not only from a computational perspective, since the resampling procedure is less frequently called, but also for tracking performance, as the particles provide a better fit to the true posterior density. Subsequently, the tracker is less prone to distraction in case of occlusion or similarity between vehicles.

4.2. Trajectory Error Evaluation. An important measure in vehicle tracking is the variance of the trajectory. Indeed, high-level tasks, such as abnormal behavior or DUI (driving under the influence) detection, require accurate tracking of the vehicle and, in particular, a low MSE for the position. Figure 5 displays a track estimated with the projective particle filter and the standard particle filter. It can be inferred qualitatively that the PPF achieves better results than the standard particle filter. Two experiments are conducted to evaluate the performance in terms of position variance: one with semiautomatic variance estimation and the other one with ground truth labeling to evaluate the influence of the number of particles.

    Figure 3: Position mean squared error for the standard (solid) and the projective (dashed) particle filter without resampling step (300 samples, 80 track indices).

    Figure 4: Resampling frequency for the 15 videos of the dataset (sequences M2U00010 to M2U00187). The resampling frequency is the ratio between the number of resampling steps and the number of particle filter iterations. The average resampling frequency is 14.9% for the projective particle filter and 19.4% for the standard particle filter.

    In the first experiment, the performance of each tracker is evaluated in terms of MSE. In order to avoid the tedious task of manually extracting the ground truth of every track, a synthetic track is generated automatically based on the parameters of the real-world projection of the vehicle trajectory on the camera plane. Figure 6 shows that the theoretic and the manually extracted tracks match almost perfectly. The initialization of the tracks is performed as in [35]. However, because the initial position of the vehicle when tracking starts may differ from one track to another, it is necessary to align the theoretic and the extracted tracks in order to cancel the bias in the estimation of the MSE. Furthermore, the variance estimation is semiautomatic since the match between the generated and the extracted tracks is visually assessed. It was found that Video 005, Video 006, and Video 008 sequences provide the best matches overall. The 205 vehicle tracks contained in the 3 sequences


    Figure 5: Vehicle track for (a) the standard and (b) the projective particle filter. The projective particle filter exhibits a lower variance in the position estimation.

    Figure 6: Alignment of theoretic and extracted trajectories along the d-axis. The difference between the two tracks represents the error in the estimation of the trajectory.

    Table 2: MSE for the standard and the projective particle filters with 100 samples.

    Video sequence       Video 005   Video 006   Video 008
    Avg. MSE Std PF         2.26        0.99        1.07
    Avg. MSE Proj. PF       1.89        0.83        1.02

    are matched against their respective generated tracks and visually inspected to ensure adequate correspondence. The average MSEs for each video sequence are presented in Table 2 for a sample set size of 100. It can be inferred from Table 2 that the PPF consistently outperforms the standard particle filter. It is also worth noting that the higher MSE in this experiment, compared to the one presented in Figure 3 for Video 008, is due to the smaller number of particles: even with resampling, the particle filters do not reach the accuracy achieved with 300 particles.
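As a rough illustration of the bias cancellation step, the sketch below aligns an extracted track to its generated counterpart by removing the mean offset before computing the MSE; this mean-offset alignment is an assumption, as the paper does not detail its exact alignment procedure.

```python
import numpy as np

def aligned_mse(theoretic, extracted):
    """Remove the constant offset between two tracks (bias from
    differing tracking start positions), then return the mean
    squared error over the overlapping portion."""
    t = np.asarray(theoretic, dtype=float)
    e = np.asarray(extracted, dtype=float)
    n = min(len(t), len(e))
    t, e = t[:n], e[:n]
    e_aligned = e - (e.mean() - t.mean())  # cancel constant bias
    return float(np.mean((t - e_aligned) ** 2))
```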

    Figure 7: Position mean squared error versus number of particles (N_S) for the standard and the projective particle filter.

    In the second experiment, we evaluate the performance of the two tracking algorithms w.r.t. the number of particles. Here, the ground truth is manually labeled in the video sequence. This experiment serves as a validation of the semiautomatic procedure described above, as well as an evaluation of the effect of particle set size on the performance of both the PPF and the standard particle filter. To ensure the impartiality of the evaluation, we arbitrarily decided to extract the ground truth for the first 5 trajectories in Video 001 sequence. Figure 7 displays the average MSE over 10 epochs for the first trajectory and for different values of N_S. Figure 8 presents the average MSE over 10 epochs on the 5 ground truth tracks for N_S = 20 and N_S = 100. The experiments are run over several epochs to increase the confidence in the results, due to the stochastic nature of particle filters. It is clear that the projective particle filter outperforms the standard particle filter in terms of MSE. The higher accuracy of the PPF, with all parameters being


    Figure 8: Position mean squared error for 5 ground-truth-labeled vehicles using the standard and the projective particle filter: (a) with 20 particles; (b) with 100 particles.

    Figure 9: Tracking rate for the projective and standard particle filters on the traffic surveillance dataset (Video 001 to Video 015).

    identical in the comparison, is due to the finer estimation of the sample distribution by the importance density and the consequent adjustment of the weights.
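The weight adjustment referred to here is the generic sequential importance sampling correction: when samples are drawn from an importance density q rather than the prior, each weight is multiplied by the ratio of prior to importance density. The sketch below shows this generic update with per-particle density evaluations; it is not the paper's specific q.

```python
import numpy as np

def update_weights(prev_weights, likelihood, prior_pdf, importance_pdf):
    """Sequential importance sampling update:
    w_k is proportional to w_{k-1} * p(z_k|x_k) * p(x_k|x_{k-1}) / q(x_k|...).
    All arguments are arrays of per-particle density evaluations."""
    w = prev_weights * likelihood * prior_pdf / importance_pdf
    return w / np.sum(w)  # normalize to sum to one
```

When q equals the prior, the ratio cancels and the update reduces to weighting by the likelihood alone, as in the standard particle filter.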

    4.3. Tracking Rate Evaluation. An important problem encountered in vehicle tracking is the phenomenon of tracker drift. We propose here to estimate the robustness of the tracking by introducing a tracking rate based on a drift measure, and to estimate the percentage of vehicles tracked without severe drift, that is, for which the track is not lost. The tracking rate primarily aims to detect the loss of vehicle track and, therefore, evaluates the robustness of the tracker. Robustness is differentiated from accuracy in that the former is a qualitative measure of tracking performance while the latter is a quantitative measure, based on an error measure as in Section 4.2, for instance. The drift measure for vehicle tracking is based on the observation that vehicles converge to the vanishing point; therefore, the trajectory of the vehicle along the tangential axis is monotonically decreasing. As a consequence, we propose to measure the number of steps where the vehicle position decreases (p_d)

    and the number of steps where the vehicle position increases or is constant (p_i), which is characteristic of drift of a tracker. Note that horizontal drift is seldom observed since the distortion along this axis is weak. The rate of vehicles tracked without severe drift is then calculated as

    Tracking Rate = p_d / (p_d + p_i). (21)

    The tracking rate is evaluated for the projective and standard particle filters. Figure 9 displays the results for the entire traffic surveillance dataset. It shows that the projective particle filter yields a better tracking rate than the standard particle filter across the entire dataset. Figure 9 also shows that the difference between the tracking rates is not as large as the difference in MSE, because the standard particle filter already performs well on vehicle tracking. Nevertheless, the projective particle filter still yields a reduction in the drift of the tracker.
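The tracking rate of (21) can be computed directly from an estimated trajectory along the tangential axis, as in the sketch below (a sequence of position samples at successive time steps is an assumed input format):

```python
import numpy as np

def tracking_rate(positions):
    """Fraction of time steps where the tangential position decreases,
    as expected for vehicles converging to the vanishing point.
    Non-decreasing steps (p_i) are treated as indicative of drift."""
    steps = np.diff(np.asarray(positions, dtype=float))
    p_d = int(np.sum(steps < 0))   # position decreases
    p_i = int(np.sum(steps >= 0))  # position increases or stays constant
    return p_d / (p_d + p_i)
```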

    4.4. Discussion. The experiments show that the projective particle filter performs better than the standard particle filter in terms of sample distribution, tracking error, and tracking rate. The improvement is due to the integration of the projective transform in the importance density. Furthermore, the implementation of the projective transform requires very simple calculations under simplifying assumptions (12). Overall, since the projective particle filter requires fewer samples than the standard particle filter to achieve better tracking performance, the increase in computation due to the projective transform is offset by the reduction in sample set size. More specifically, the projective particle filter requires the computation of the vector-valued process function and the ratio k for each sample. For the process function, (13) and (14), representing f_x(x) and f_b(x), respectively, must be computed. The computational burden is low assuming that constant terms can be precomputed. On the other hand, the projective particle filter yields a gain in the sample set size since fewer particles are required for a given error and the resampling is 30% more efficient.


    The projective particle filter performs better on the three different measures. The projective transform leads to a reduction in resampling frequency since the distribution of the particles accurately carries the posterior and, consequently, the degeneracy of the particle set is slower. The mean squared error is reduced since the particles focus around the actual position and size of the vehicle. The drift rate benefits from the projective transform since the tracker is less distracted by similar objects or by occlusion. The improvement is beneficial for applications that require vehicle locking, such as vehicle counts, or other applications for which performance is not based on the MSE. It is worth noting here that the MSE and the tracking rate are independent: it can be observed from Figure 9 that the tracking rate is almost the same for Video 005, Video 006, and Video 008, but there is a factor of 2 between the MSEs of Video 005 and Video 008 (see Table 2).

    5. Conclusion

    A plethora of algorithms for object tracking based on Bayesian filtering are available. However, these systems fail to take advantage of traffic monitoring characteristics, in particular slow-varying vehicle speed, constrained real-world vehicle trajectories, and the projective transform of vehicles onto the camera plane. This paper proposed a new particle filter, namely, the projective particle filter, which integrates these characteristics into the importance density. The projective fractional transform, which maps the real-world position of a vehicle onto the camera plane, provides a better distribution of the samples in the feature space. However, since the prior is not used for sampling, the weights of the projective particle filter have to be readjusted. The standard and the projective particle filters have been evaluated on traffic surveillance videos using three different measures representing robust and accurate vehicle tracking: (i) the degeneracy of the sample set is reduced when the fractional transform is integrated within the importance density; (ii) the tracking rate, measured through drift evaluation, shows an improvement in the robustness of the tracker; (iii) the MSE on the vehicle trajectory is reduced with the projective particle filter. Furthermore, the proposed technique outperforms the standard particle filter in terms of MSE even with fewer particles.

    References

    [1] Y. K. Jung and Y. S. Ho, "Traffic parameter extraction using video-based vehicle tracking," in Proceedings of IEEE/IEEJ/JSAI International Conference on Intelligent Transportation Systems, pp. 764-769, October 1999.
    [2] J. Melo, A. Naftel, A. Bernardino, and J. Santos-Victor, "Detection and classification of highway lanes using vehicle motion trajectories," IEEE Transactions on Intelligent Transportation Systems, vol. 7, no. 2, pp. 188-200, 2006.
    [3] C. P. Lin, J. C. Tai, and K. T. Song, "Traffic monitoring based on real-time image tracking," in Proceedings of IEEE International Conference on Robotics and Automation, pp. 2091-2096, September 2003.
    [4] Z. Qiu, D. An, D. Yao, D. Zhou, and B. Ran, "An adaptive Kalman predictor applied to tracking vehicles in the traffic monitoring system," in Proceedings of IEEE Intelligent Vehicles Symposium, pp. 230-235, June 2005.
    [5] F. Dellaert, D. Pomerleau, and C. Thorpe, "Model-based car tracking integrated with a road-follower," in Proceedings of IEEE International Conference on Robotics and Automation, vol. 3, pp. 1889-1894, 1998.
    [6] J. Y. Choi, K. S. Sung, and Y. K. Yang, "Multiple vehicles detection and tracking based on scale-invariant feature transform," in Proceedings of the 10th International IEEE Conference on Intelligent Transportation Systems (ITSC '07), pp. 528-533, October 2007.
    [7] D. Koller, J. Weber, and J. Malik, "Towards realtime visual based tracking in cluttered traffic scenes," in Proceedings of the Intelligent Vehicles Symposium, pp. 201-206, October 1994.
    [8] E. B. Meier and F. Ade, "Tracking cars in range images using the condensation algorithm," in Proceedings of IEEE/IEEJ/JSAI International Conference on Intelligent Transportation Systems, pp. 129-134, October 1999.
    [9] T. Xiong and C. Debrunner, "Stochastic car tracking with line- and color-based features," in Proceedings of IEEE Conference on Intelligent Transportation Systems, vol. 2, pp. 999-1003, 2003.
    [10] M. A. Isard, Visual motion analysis by probabilistic propagation of conditional density, Ph.D. thesis, University of Oxford, Oxford, UK, 1998.
    [11] M. Isard and A. Blake, "Condensation: conditional density propagation for visual tracking," International Journal of Computer Vision, vol. 29, no. 1, pp. 5-28, 1998.
    [12] A. Doucet, "On sequential simulation-based methods for Bayesian filtering," Tech. Rep., University of Cambridge, Cambridge, UK, 1998.
    [13] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking," IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174-188, 2002.
    [14] R. Van Der Merwe, A. Doucet, N. De Freitas, and E. Wan, "The unscented particle filter," Tech. Rep., Cambridge University Engineering Department, 2000.
    [15] J. H. Kotecha and P. M. Djuric, "Gaussian sum particle filtering," IEEE Transactions on Signal Processing, vol. 51, no. 10, pp. 2602-2612, 2003.
    [16] C. Chang and R. Ansari, "Kernel particle filter for visual tracking," IEEE Signal Processing Letters, vol. 12, no. 3, pp. 242-245, 2005.
    [17] C. Chang, R. Ansari, and A. Khokhar, "Multiple object tracking with kernel particle filter," in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), pp. 568-573, June 2005.
    [18] L. Liang-qun, J. Hong-bing, and L. Jun-hui, "The iterated extended Kalman particle filter," in Proceedings of the International Symposium on Communications and Information Technologies (ISCIT '05), pp. 1213-1216, 2005.
    [19] C. Kwok, D. Fox, and M. Meila, "Adaptive real-time particle filters for robot localization," in Proceedings of IEEE International Conference on Robotics and Automation, pp. 2836-2841, September 2003.
    [20] C. Kwok, D. Fox, and M. Meila, "Real-time particle filters," Proceedings of the IEEE, vol. 92, no. 3, pp. 469-484, 2004.
    [21] C. Shen, M. J. Brooks, and A. D. van Hengel, "Augmented particle filtering for efficient visual tracking," in Proceedings of IEEE International Conference on Image Processing (ICIP '05), pp. 856-859, September 2005.
    [22] Z. Zeng and S. Ma, "Head tracking by active particle filtering," in Proceedings of IEEE Conference on Automatic Face and Gesture Recognition, pp. 82-87, 2002.
    [23] R. Feghali and A. Mitiche, "Spatiotemporal motion boundary detection and motion boundary velocity estimation for tracking moving objects with a moving camera: a level sets PDEs approach with concurrent camera motion compensation," IEEE Transactions on Image Processing, vol. 13, no. 11, pp. 1473-1490, 2004.
    [24] C. Yang, R. Duraiswami, and L. Davis, "Fast multiple object tracking via a hierarchical particle filter," in Proceedings of the 10th IEEE International Conference on Computer Vision (ICCV '05), pp. 212-219, October 2005.
    [25] J. Li and C.-S. Chua, "Transductive inference for color-based particle filter tracking," in Proceedings of IEEE International Conference on Image Processing, vol. 3, pp. 949-952, September 2003.
    [26] P. Vadakkepat and L. Jing, "Improved particle filter in sensor fusion for tracking randomly moving object," IEEE Transactions on Instrumentation and Measurement, vol. 55, no. 5, pp. 1823-1832, 2006.
    [27] A. Ziadi and G. Salut, "Non-overlapping deterministic Gaussian particles in maximum likelihood non-linear filtering: phase tracking application," in Proceedings of the International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS '05), pp. 645-648, December 2005.
    [28] J. Czyz, "Object detection in video via particle filters," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 820-823, August 2006.
    [29] M. W. Woolrich and T. E. Behrens, "Variational Bayes inference of spatial mixture models for segmentation," IEEE Transactions on Medical Imaging, vol. 25, no. 10, pp. 1380-1391, 2006.
    [30] K. Nickel, T. Gehrig, R. Stiefelhagen, and J. McDonough, "A joint particle filter for audio-visual speaker tracking," in Proceedings of the 7th International Conference on Multimodal Interfaces (ICMI '05), pp. 61-68, October 2005.
    [31] E. Bas, A. M. Tekalp, and F. S. Salman, "Automatic vehicle counting from video for traffic flow analysis," in Proceedings of IEEE Intelligent Vehicles Symposium (IV '07), pp. 392-397, June 2007.
    [32] B. Gloyer, H. K. Aghajan, K.-Y. Siu, and T. Kailath, "Vehicle detection and tracking for freeway traffic monitoring," in Proceedings of the Asilomar Conference on Signals, Systems and Computers, vol. 2, pp. 970-974, 1994.
    [33] L. Zhao and C. Thorpe, "Qualitative and quantitative car tracking from a range image sequence," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 496-501, June 1998.
    [34] P. L. M. Bouttefroy, A. Bouzerdoum, S. L. Phung, and A. Beghdadi, "Vehicle tracking using projective particle filter," in Proceedings of the 6th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '09), pp. 7-12, September 2009.
    [35] P. L. M. Bouttefroy, A. Bouzerdoum, S. L. Phung, and A. Beghdadi, "Vehicle tracking by non-drifting mean-shift using projective Kalman filter," in Proceedings of the 11th International IEEE Conference on Intelligent Transportation Systems (ITSC '08), pp. 61-66, December 2008.
    [36] F. Gustafsson, F. Gunnarsson, N. Bergman et al., "Particle filters for positioning, navigation, and tracking," IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 425-437, 2002.
    [37] Y. Rathi, N. Vaswani, and A. Tannenbaum, "A generic framework for tracking using particle filter with dynamic shape prior," IEEE Transactions on Image Processing, vol. 16, no. 5, pp. 1370-1382, 2007.
    [38] A. Kong, J. S. Liu, and W. H. Wong, "Sequential imputations and Bayesian missing data problems," Journal of the American Statistical Association, vol. 89, no. 425, pp. 278-288, 1994.
    [39] D. Schreiber, B. Alefs, and M. Clabian, "Single camera lane detection and tracking," in Proceedings of the 8th International IEEE Conference on Intelligent Transportation Systems, pp. 302-307, September 2005.
    [40] D. Comaniciu, V. Ramesh, and P. Meer, "Kernel-based object tracking," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 5, pp. 564-577, 2003.
    [41] G. Kitagawa, "Monte Carlo filter and smoother for non-Gaussian nonlinear state space models," Journal of Computational and Graphical Statistics, vol. 5, no. 1, pp. 1-25, 1996.
