
Research Article
Crowd Motion Analysis Based on Social Force Graph with Streak Flow Attribute

Shaonian Huang,1,2 Dongjun Huang,1 and Mansoor Ahmed Khuhro1

1 School of Information Science and Engineering, Central South University, Changsha 410083, China
2 School of Computer and Information Engineering, Hunan University of Commerce, Changsha 420005, China

Correspondence should be addressed to Shaonian Huang; hsn@hunnu.edu.cn

Received 28 July 2015; Accepted 27 September 2015

Academic Editor: Stefano Basagni

Copyright © 2015 Shaonian Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Over the past decades, crowd management has attracted a great deal of attention in the area of video surveillance. Among the various tasks of video surveillance analysis, crowd motion analysis is the basis of numerous subsequent applications of surveillance video. In this paper, a novel social force graph with streak flow attributes is proposed to capture the global spatiotemporal changes and the local motion of crowd video. Crowd motion analysis is then implemented based on the characteristics of the social force graph. First, the streak flow of the crowd sequence is extracted to represent the global crowd motion; after that, spatiotemporal analogous patches are obtained based on the crowd visual features. A weighted social force graph is then constructed based on multiple social properties of the crowd video. The graph is segmented into particle groups to represent similar motion patterns of the crowd video. A codebook is then constructed by clustering all local particle groups, and crowd abnormal behaviors are detected by using the Latent Dirichlet Allocation model. Extensive experiments on challenging datasets show that the proposed method achieves preferable results in crowd motion segmentation and abnormal behavior detection.

1. Introduction

With the rapidly increasing demand for public safety and security, crowd management is becoming one of the most challenging tasks for government departments. It has been reported that there were 215 human stampede events worldwide between 1980 and 2007, resulting in 7069 deaths and over 14000 injuries. Manually analyzing surveillance video is obviously time-consuming. Moreover, today's high-throughput video surveillance systems produce a great amount of image data, which makes manual analysis virtually impossible. Therefore, modeling crowd motion plays a significant role in video surveillance systems.

Crowd motion analysis has been intensively studied in the computer vision field; from a high-level perspective, current methods can be roughly divided into three categories: microscopic approaches, macroscopic approaches, and hybrid methods. Microscopic approaches analyze the motivation of individual pedestrians in the video and then propagate individual behavior to crowd motion. Most of these methods first detect the movement of individual pedestrians or pedestrian groups and then analyze local crowd motion behavior [1-4]. Helbing et al. first introduced the social force model to simulate pedestrian dynamics, and several methods have used the social force model to detect and localize unusual events in crowd video [5-7]. Recently, some researchers have adopted trajectory-based approaches to analyze crowd behavior. Zhou et al. [8] and Bae et al. [9] modeled crowd behavior by extracting individual pedestrians' trajectories obtained with the KLT tracker [10]. However, object tracking still has unsolved challenges, such as occlusion and pedestrians entering or exiting the scene.

Macroscopic approaches view the crowd as an entity instead of modeling the motion of individuals. This kind of algorithm usually uses optical flow to capture the crowd motion dynamics [11-13]. However, optical flow only carries instantaneous information between consecutive frames and by itself fails to capture motion over a long time range. A few researchers have used fluid dynamics to characterize global crowd behaviors. Ali and Shah constructed a Finite Time Lyapunov Exponent (FTLE) field to segment crowd flow [14],

but the segmented regions were unstable when the crowd density was not very high or the crowd flow varied quickly.

[Figure 1: Flow chart of the proposed crowd motion analysis method. The crowd motion feature extraction module computes streak flow from the input video, constructs the social force graph, and extracts particle groups; the crowd abnormal event analysis module performs crowd segmentation, builds a codebook of words, and applies Latent Dirichlet Allocation for crowd abnormality detection.]

Hybrid methods model crowd motion from the macroscopic view as well as the microscopic view, combining the benefits of both macro and micro approaches. Pan et al. first studied human and social behavior from the perspectives of human decision-making and social interaction for emergency egress analysis [15]. Borrmann et al. presented a bidirectional coupling technique to predict pedestrian evacuation time [16]. Recently, Bera and Manocha modeled crowd motion based on discrete and continuous flow models; the discrete model is based on a microscopic agent formulation, and the continuum model analyzes macroscopic behaviors [17]. Inspired by these hybrid methods, we model crowd motion patterns by combining a macroscopic and a microscopic model of crowd dynamics. The macroscopic motion model is derived from the streak flow representation, which can depict motion dynamics over an interval of time [18], and the microscopic motion model is based on the social force between the particle groups of the crowd sequence.

The general flow chart of our proposed method for modeling crowd motion and detecting abnormal behavior is illustrated in Figure 1. We first segment the crowd sequence into spatiotemporal patches based on the similarity of streak flow. Then we construct a weighted social force graph using the particle advection scheme [14] and cluster the crowd video into particle groups to represent different crowd motion patterns. Finally, we use Bag of Words (BoW) to generate a codebook and then analyze crowd motion behavior based on the Latent Dirichlet Allocation (LDA) model [19]. We evaluate our method on three public datasets and compare it with a number of state-of-the-art methods. The experimental results illustrate that the proposed method achieves preferable results in crowd motion segmentation and crowd abnormal behavior detection.

In summary, the main contributions of this paper include the following: (1) a framework for analyzing crowd motion behavior which takes both the global spatiotemporal motion and the local pedestrian interaction into account; (2) the construction of a weighted social force graph to extract local motion patterns based on multiple social properties; (3) the utilization of the LDA model to represent the relationship between the codebook distribution and the crowd behavior pattern.

The remainder of this paper is organized as follows. Section 2 describes how to extract streak flow, which integrates the optical flow over an interval of time. Section 3 provides details on the construction of the weighted social force graph for extracting local crowd motion. Section 4 demonstrates the strength of our method in the applications of crowd motion segmentation and crowd abnormal behavior detection. The experimental results on different datasets and the comparison with existing methods are presented in Section 5. Finally, Section 6 concludes the paper and discusses potential extensions.

2. Streak Flow Representation of Crowd Motion

Streak flow is defined as an accurate representation of the motion field of crowd video [18]. Different from optical flow, which only provides motion information over a small time interval, streak flow captures the temporal evolution of moving objects over a period of time. Our approach for computing streak flow is similar to that proposed in [18]; for completeness, we briefly introduce its steps. First, we initialize regular particles at time $t$ and then update the positions of the particles according to the optical flow at each instant. Let $(x_p^{f,t}, y_p^{f,t})$ denote the position of particle $p$ at instant $t$, initialized at point $p$ in frame $f$, for $t = 0, 1, 2, \ldots, T$. The position of particle $p$ at any instant is given by [18]

$$x_p^{f,t} = x_p^{f,t-1} + u\left(x_p^{f,t-1}, y_p^{f,t-1}, t-1\right),$$
$$y_p^{f,t} = y_p^{f,t-1} + v\left(x_p^{f,t-1}, y_p^{f,t-1}, t-1\right), \quad (1)$$

where $u$ and $v$ represent the velocity field of particle $p$ at time $t$. This generates a series of curves starting from particle $p$. For steady motion flow these curves coincide with the particle paths, but they vary dramatically in unsteady flow. In order to fill the gaps in unsteady flow, we propagate the particles backward based on the current flow field using the fourth-order Runge-Kutta-Fehlberg algorithm [14].

[Figure 2: Comparison of optical flow and streak flow. (a) The original image. (b) Optical flow of the original image. (c) Streak flow of the original image.]

However, the positions of particles usually have subpixel accuracy. In order to represent the streak flow $s = (s_u, s_v)$ of every pixel, we utilize the three nearest neighbor particles to compute the streak flow by linear interpolation. Here we describe the computation of $s_u$; the method for estimating $s_v$ is the same. We use $O_u = [O_u^i]$ to represent the streak flow of each particle $i$ in the horizontal direction. Then we define

$$O_u^i = a_1 \, s_u(n_1) + a_2 \, s_u(n_2) + a_3 \, s_u(n_3), \quad (2)$$

where $n_j$ denotes the indices of the neighbor pixels and $a_j$ is the known triangulation coefficient [18] for the $j$th neighbor pixel. Applying formula (2) to all of the data of $s_u$, we obtain the following linear system:

$$A s_u = O_u. \quad (3)$$

We solve this equation to obtain the value of $s_u$. Figure 2 illustrates the difference between optical flow and streak flow; we can see that streak flow better depicts the motion changes.
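To make the advection step concrete, the following is a minimal sketch, assuming the per-frame dense optical flow fields are already available as numpy arrays (e.g., from any dense optical flow estimator). It illustrates only the particle advection of Eq. (1); the full streakline construction of [18], which re-initializes particles at every frame and recovers per-pixel streak flow through the interpolation system (2)-(3), is simplified away. All function and variable names are illustrative, not the authors' implementation.

import numpy as np

def particle_advection(flows, T):
    # flows: list of (H, W, 2) arrays; flows[t][y, x] = (u, v) optical flow at frame t.
    # Seeds one particle per pixel and advects it through the last T flow fields,
    # which is the forward-integration step of Eq. (1). The returned (H, W, 2) array
    # holds the net displacement of each seeded particle, a crude stand-in for the
    # streak flow that the paper obtains by solving the linear system (3).
    H, W, _ = flows[0].shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float32)
    px, py = xs.copy(), ys.copy()                      # particle positions, seeded on the grid
    start = max(0, len(flows) - T)
    for t in range(start, len(flows)):
        xi = np.clip(np.rint(px).astype(int), 0, W - 1)   # nearest-pixel flow lookup
        yi = np.clip(np.rint(py).astype(int), 0, H - 1)
        px = px + flows[t][yi, xi, 0]                     # x <- x + u(x, y, t), Eq. (1)
        py = py + flows[t][yi, xi, 1]                     # y <- y + v(x, y, t), Eq. (1)
    return np.stack([px - xs, py - ys], axis=-1)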

3. Social Force Graph for Crowd Motion Analysis

The aforementioned streak flow represents particle motion over a period of time. However, in crowd behavior analysis we are more concerned about the motion of particle groups than about any particular particle. Most existing methods extract group motion by clustering spatiotemporal features. However, the interaction between particles plays an important role in the crowd motion flow: a particular pedestrian determines and changes his or her own motion state according to the motion of neighbors and the global motion within view. Recently, the social force model has been adopted to simulate the dynamics of pedestrian movement [1, 5, 7, 20, 21], and it has achieved satisfying performance in several crowd applications.

Based on the above motivation, we propose a motion particle group representation using a social force graph with streak flow attributes. We use streak flow to represent the global motion state over a period of time; meanwhile, we extract the social interaction force between particles to represent the influence of individual movement, and then construct a weighted social force graph to extract particle groups revealing the local crowd motion patterns.

[Figure 3: An example of spatiotemporal analogous patches. (a) The original image. (b) Partition patches overlapped on the original image; the patches labeled 1, 2, and 3 are referenced in the text.]

3.1. Partition of Spatiotemporal Analogous Patches. In order to model crowd motion, we first partition the crowd sequence into analogous patches which have similar spatiotemporal features. Specifically, we generate analogous patches by clustering pixels with similar motion features. For a specific image in the crowd sequence, point $p_i(x, y)$ denotes any pixel of the image. We define the similarity of different pixels based on streak flow similarity and distance similarity.

For any two pixels $p_i$ and $p_j$, the distance similarity is defined as

$$\mathrm{dis}_{ij} = \left\| x_i - x_j \right\|_2^2 + \left\| y_i - y_j \right\|_2^2. \quad (4)$$

In order to describe the similarity of motion direction, we divide the whole direction plane into sixteen equal bins, denoted as $\omega = \{ i \mid i \in (1, 2, 3, \ldots, N),\ N = 16 \}$. The motion direction of any pixel is quantized into one of the sixteen bins based on its streak flow direction. Then the direction similarity of two pixels is defined as

$$\mathrm{dir}_{ij} = \left| \cos\left(\omega_i \cdot \frac{2\pi}{N}\right) - \cos\left(\omega_j \cdot \frac{2\pi}{N}\right) \right|. \quad (5)$$

The motion magnitude similarity of two pixels is denoted as

$$\mathrm{mag}_{ij} = \frac{\left(s_u^i - s_u^j\right)^2 + \left(s_v^i - s_v^j\right)^2}{\delta^2}, \quad (6)$$

where $\delta$ is the average magnitude of the crowd streak flow, and $s_u^i$ and $s_v^i$ are the streak flow of pixel $p_i$ in the horizontal and vertical directions, respectively.

In order to partition the crowd sequence into spatiotemporal analogous patches $\{P_n\}_{n=1}^{N}$, we first divide the crowd image into $N$ regular patches and place a seed on each regular patch. The partition process iteratively finds the nearest seed for each pixel and adds the pixel to the patch to which that seed belongs. In each iteration, we update the position of each seed to the center of its patch. The nearest seed is determined by minimizing the spatiotemporal motion similarity and spatial distance as

$$\min \sum_{n=1}^{N} \sum_{i \in P_n} \left( \alpha\,\mathrm{dis}_{i\mu(n)} + \beta\,\mathrm{dir}_{i\mu(n)} + \gamma\,\mathrm{mag}_{i\mu(n)} \right), \quad (7)$$

where $i$ is any pixel in patch $P_n$, $\mu(n)$ is the center pixel of $P_n$, $N$ is the number of spatiotemporal analogous patches specified in our experiments, and the parameters $\alpha$, $\beta$, and $\gamma$ are balance weights set according to the crowd sequence.

Figure 3 illustrates an example of the analogous patches obtained by the above method, with the partition patches overlapped on the original image. The result shows that our partition method, based on the spatiotemporal similarity of particles, can capture the changes of moving objects. We can see that pixels with similar motion patterns tend to be assigned to the same patch, for example, the patches with labels 1, 2, and 3.
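As a concrete illustration of this partition step, the sketch below implements one possible reading of the seeded, iterative assignment behind formula (7): every pixel is assigned to the seed minimizing the weighted combination of the distance, direction, and magnitude terms of (4)-(6), and each seed then moves to the center of its patch. It assumes the streak flow is available as an (H, W, 2) array; the grid size, weights, and iteration count are illustrative defaults, not the paper's settings.

import numpy as np

def partition_patches(streak, n_grid=8, alpha=0.2, beta=0.5, gamma=0.3, iters=5):
    # Cluster pixels into spatiotemporal analogous patches (Eqs. (4)-(7)).
    H, W, _ = streak.shape
    su, sv = streak[..., 0], streak[..., 1]
    delta = np.hypot(su, sv).mean() + 1e-6          # average streak-flow magnitude
    N = 16                                          # direction bins, as in Eq. (5)
    bins = np.floor((np.arctan2(sv, su) + np.pi) / (2 * np.pi) * N) % N

    ys, xs = np.mgrid[0:H, 0:W]
    sy = np.linspace(0, H - 1, n_grid).astype(int)  # seeds on a regular lattice
    sx = np.linspace(0, W - 1, n_grid).astype(int)
    seeds = np.array([(y, x) for y in sy for x in sx], dtype=float)

    labels = np.zeros((H, W), dtype=int)
    for _ in range(iters):
        cost = np.full((H, W), np.inf)
        for k, (cy, cx) in enumerate(seeds):
            iy, ix = int(round(cy)), int(round(cx))
            dis = (xs - cx) ** 2 + (ys - cy) ** 2                                 # Eq. (4)
            dirs = np.abs(np.cos(bins * 2 * np.pi / N)
                          - np.cos(bins[iy, ix] * 2 * np.pi / N))                 # Eq. (5)
            magd = ((su - su[iy, ix]) ** 2 + (sv - sv[iy, ix]) ** 2) / delta ** 2  # Eq. (6)
            c = alpha * dis + beta * dirs + gamma * magd                           # Eq. (7)
            better = c < cost
            labels[better], cost[better] = k, c[better]
        for k in range(len(seeds)):                 # move each seed to its patch center
            m = labels == k
            if m.any():
                seeds[k] = (ys[m].mean(), xs[m].mean())
    return labels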

3.2. Social Force Graph for Particle Groups. Spatiotemporal analogous patches reflect the global visual similarity of moving pedestrians, but they neglect the interaction behavior between pedestrians. Here we describe how to construct a weighted social force graph $G = (V, E)$ to model the group pattern of the crowd, where $V$ is a set of vertices and $E$ is a set of edges. In this paper, the vertex set $V$ represents the set of regular particles in the particle advection scheme, and the edge set $E$ measures the individuals' motion similarity through their interaction social force. Using the graph $G$, we can cluster similar particles into particle groups, which form the basis of crowd motion behavior analysis.

We first need to compute the interaction social force of the particles. The interaction social force model describes crowd behavior by considering the interaction of individuals and the constraints of the environment. The model is described as [5]

$$F_{\mathrm{int}}^{i} = \frac{1}{\tau}\left(V_i^{q} - V_i\right) - m_i \frac{dV_i}{dt}, \quad (8)$$

where $F_{\mathrm{int}}^{i}$ is the interaction social force of particle $i$, $V_i^{q}$ denotes the personal desired velocity, and $V_i$ indicates the actual velocity of the individual. In this paper, $V_i$ is represented by the streak flow of particle $i$ defined in Section 2, and $V_i^{q}$ is defined as

$$V_i^{q} = (1 - \eta)\, s_i + \eta\, s_p, \quad (9)$$

where $s_i$ is the streak flow of particle $i$ and $s_p$ is the average streak flow of all particles in the patch $p$ to which particle $i$ belongs.

As shown in [5], a high magnitude of interaction social force indicates that a particle's motion differs from the collective movement of the crowd. However, the interaction force alone only carries the motion information of a single particle. In real-life crowd analysis, we usually need particle groups with similar motion patterns to characterize crowd behavior. Therefore, in this paper we utilize a weighted social force graph to extract different particle groups by considering multiple factors, for example, the distance factor and the initial analogous patch factor. For the weighted graph $G = (V, E)$, the weight $w_{ij}$ of the edge from particle $i$ to particle $j$ measures the dissimilarity of the two particles, and it is defined as

$$w_{ij} = \omega_{ij}^{d} \cdot \omega_{ij}^{g} \cdot \left\| F_{\mathrm{int}}^{i} - F_{\mathrm{int}}^{j} \right\|_2^2, \quad (10)$$

where $\omega_{ij}^{d}$ and $\omega_{ij}^{g}$ are weighting factors based on the distance factor and the analogous patch factor, respectively. The smaller the value of $w_{ij}$, the more similar the two particles. $\omega_{ij}^{d}$ measures the influence of distance: we assume that the motion similarity of different particles is inversely related to the distance between them, which means that particle $i$ influences nearby particles more strongly than distant ones.

Based on the above observation, $\omega_{ij}^{d}$ is defined as

$$\omega_{ij}^{d} = \exp\left(\frac{\sigma}{d_{ij}}\right), \quad (11)$$

where $\sigma$ is an influence factor and $d_{ij}$ is the Euclidean distance between the two particles. $\omega_{ij}^{g}$ indicates whether the two particles belong to the same initial analogous patch extracted in Section 3.1: if the two particles are in the same patch, $\omega_{ij}^{g}$ is zero; otherwise, it is one.

We use the clustering method of [22] to extract particle groups that indicate similar motion patterns over a period of time. The details of the obtained particle groups are described in Section 4.
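To illustrate how the quantities of Eqs. (8)-(11) fit together, here is a small sketch under explicit assumptions: particles are represented by their positions, their streak flow over the current and previous windows (so that dV_i/dt can be approximated by a one-step difference), and their analogous-patch labels from Section 3.1; the mass m_i, relaxation time tau, and weight eta are illustrative constants. The paper does not specify the graph connectivity, so edge_weight is written for an arbitrary particle pair.

import numpy as np

def interaction_force(s_now, s_prev, labels, eta=0.5, tau=0.5, m=1.0):
    # s_now, s_prev: (P, 2) streak flow of each particle over the current and
    # previous time windows; labels: (P,) analogous-patch index of each particle.
    V = s_now                                        # actual velocity V_i (streak flow)
    Vq = np.empty_like(V)                            # desired velocity, Eq. (9)
    for p in np.unique(labels):
        in_p = labels == p
        Vq[in_p] = (1 - eta) * V[in_p] + eta * V[in_p].mean(axis=0)
    dVdt = s_now - s_prev                            # crude temporal derivative of V_i
    return (Vq - V) / tau - m * dVdt                 # Eq. (8)

def edge_weight(i, j, F, pos, labels, sigma=1.0):
    # Edge weight w_ij of Eq. (10) between particles i and j.
    d = np.linalg.norm(pos[i] - pos[j]) + 1e-6
    w_d = np.exp(sigma / d)                          # distance factor, Eq. (11)
    w_g = 0.0 if labels[i] == labels[j] else 1.0     # analogous-patch factor
    return w_d * w_g * np.sum((F[i] - F[j]) ** 2)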

4. Application of Social Force Graph with Streak Flow Attribute

Using the weighted social force graph with streak flow attributes, we demonstrate the strength of our method in crowd motion segmentation and crowd abnormal behavior detection.

4.1. Crowd Segmentation. Based on the social force graph, we can obtain particle groups which have similar motion patterns. In the graph $G$, the edge weights indicate the dissimilarity of motion patterns between particles. As shown in [22], a greedy method can segment an image into regions based on the intensity difference across the region boundary. Motivated by this method, we first define a predicate value to represent the graph region boundary and then segment the graph into different components. Each component of the graph has a small internal motion difference; therefore, different components represent dissimilar particle groups. In order to measure the dissimilarity of two components, we use $\mathrm{Dif_{in}}$ to measure the motion difference of all particles within a component, where $\mathrm{Dif_{in}}(R)$ is defined as

$$\mathrm{Dif_{in}}(R) = \max_{e \in \mathrm{MST}(R, E)} w(e), \quad (12)$$

where $\mathrm{MST}(R, E)$ is the minimum spanning tree of component $R$. We define $\mathrm{Dif_{bew}}$ to indicate the motion difference between two components:

$$\mathrm{Dif_{bew}}(R_i, R_j) = \min_{v_i' \in R_i,\; v_j' \in R_j,\; (v_i', v_j') \in E} w_{v_i' v_j'}. \quad (13)$$

When the value of $\mathrm{Dif_{bew}}(R_i, R_j)$ is smaller than the predicate value $\mathrm{Th_{merg}}$, we merge the two components and then update $\mathrm{Th_{merg}}$ as

$$\mathrm{Th_{merg}}(R_i, R_j) = \min\left(\mathrm{Dif_{in}}(R_i) + \tau(R_i),\; \mathrm{Dif_{in}}(R_j) + \tau(R_j)\right), \quad (14)$$

where $\tau$ is a threshold function specified in the experiments based on the size of the components.

The results of crowd segmentation are presented in Section 5.
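The merging rule of Eqs. (12)-(14) is essentially the greedy graph segmentation of [22]; the sketch below restates that algorithm with a union-find structure, assuming the graph is given as a flat edge list carrying the weights of Eq. (10) and using tau(R) = k/|R|, the size-based threshold of [22], with an illustrative constant k.

import numpy as np

def segment_graph(n_vertices, edges, k=1.0):
    # edges: list of (w, i, j) tuples, one per graph edge with weight w.
    # Returns an array of component labels, one per vertex.
    parent = np.arange(n_vertices)
    size = np.ones(n_vertices, dtype=int)
    dif_in = np.zeros(n_vertices)             # Dif_in(R): max MST edge so far, Eq. (12)

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x

    for w, i, j in sorted(edges):              # the lightest crossing edge realizes Eq. (13)
        a, b = find(i), find(j)
        if a == b:
            continue
        th = min(dif_in[a] + k / size[a],      # merging predicate Th_merg, Eq. (14)
                 dif_in[b] + k / size[b])
        if w <= th:
            parent[b] = a
            size[a] += size[b]
            dif_in[a] = max(dif_in[a], dif_in[b], w)
    return np.array([find(v) for v in range(n_vertices)])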

42 Crowd Abnormal Behavior Detection Abnormal behav-ior detection is one of the most challenging tasks in the fieldof crowd analysis because the crowd abnormal behavior isdifficult to be defined formally Generally speaking crowdabnormal behavior is defined as a sudden change or irregular-ity in crowdmotion such as the phenomenon of escape panicin public places or the dramatic increase of crowd density Inthis section we will describe how to detect crowd abnormalbehavior based on our crowd motion model

Based on the aforementioned crowd segmentation, we have clustered the crowd sequence into particle groups with similar motion patterns. We view each group as a visual word $w$, represented by its interaction social force feature and its streak flow feature, and we generate a codebook $D = \{w_1, w_2, \ldots, w_K\}$ by a $K$-means scheme. The process of crowd abnormal behavior detection is then regarded as a word-document analysis problem. We adopt the Latent Dirichlet Allocation model [19] to infer the distribution of visual words and then judge the emergence of abnormal events.

LDA is a well-known topic model for document analysis. It views the generation of a document as the random selection of document topics and topic words. In crowd analysis, crowd behavior is represented as the result of a random distribution of latent topics, where the topics are denoted as distributions over visual words. In this paper, we view the whole crowd video as a corpus and each crowd image as a document. Any document is modeled as a mixture of topics $z = \{z_1, z_2, \ldots, z_n\}$ with the joint distribution parameter $\theta$.

[Figure 4: Comparison of segmentation results using the proposed method (1st row), proposedNC (2nd row), and SRF (3rd row).]

The LDA model is trained to obtain the values of the parameters $\alpha$ and $\beta$, which denote the corresponding latent topics and the conditional probability of each visual word belonging to a given topic, respectively. During inference, the topic distribution is inferred by the LDA model. According to the values of $\alpha$ and $\beta$, the topic distribution is denoted as

$$p(\theta, z \mid v, \alpha, \beta) = \frac{p(\theta, z, v \mid \alpha, \beta)}{p(v \mid \alpha, \beta)}. \quad (15)$$

Crowd abnormal behavior detection is then achieved by comparing the topic distribution of the current crowd frame with the model learned from the training data.
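A minimal sketch of the codebook and LDA stage is given below, assuming each segmented particle group has already been summarized by a fixed-length feature vector (e.g., concatenated mean streak flow and mean interaction force). It uses scikit-learn's KMeans and LatentDirichletAllocation as stand-ins for the K-means codebook and the LDA model of [19]; the abnormality score compares each test frame's topic mixture with the topic mixtures of normal training frames, which is one simple way to realize the comparison described above, not the authors' exact procedure. Feature extraction and the decision threshold are assumptions.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

def build_codebook(group_features, n_words=32):
    # K-means codebook over particle-group feature vectors.
    km = KMeans(n_clusters=n_words, n_init=10, random_state=0)
    km.fit(np.vstack(group_features))
    return km

def frame_histograms(per_frame_features, codebook):
    # Bag-of-words histogram: one "document" per frame.
    n_words = codebook.n_clusters
    hists = np.zeros((len(per_frame_features), n_words))
    for f, feats in enumerate(per_frame_features):
        if len(feats):
            words = codebook.predict(np.asarray(feats))
            hists[f] = np.bincount(words, minlength=n_words)
    return hists

def detect_abnormal(train_hists, test_hists, n_topics=16, thresh=0.5):
    # Fit LDA on normal frames; flag test frames whose topic mixture is far
    # (minimum L1 distance) from every training frame's topic mixture.
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    train_topics = lda.fit_transform(train_hists)
    test_topics = lda.transform(test_hists)
    dists = np.abs(test_topics[:, None, :] - train_topics[None, :, :]).sum(-1).min(1)
    return dists > thresh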

5. Experiments and Discussion

In this section, we perform experiments on publicly available crowd video datasets and apply our method to common crowd motion analysis problems. The goals of our experiments are (1) to segment the crowd video sequence into different motion patterns and (2) to detect abnormal behavior in crowd video.

5.1. Results of Crowd Segmentation. Here we provide the results of our crowd segmentation algorithm. We use the same experimental video as in [18], an intersection video recorded in Boston. The Boston video includes 531 frames and exhibits three crowd behavior phases between two traffic light changes: (1) traffic is formed; (2) the traffic lights change and a new traffic flow emerges; and (3) the traffic lights change again and another flow develops. We compare the proposed method with the method in [18] (denoted as SRF) and with the proposed method without the initial spatiotemporal analogous patches (denoted as proposedNC).

Figure 4 presents the segmentation results of the above three methods; we describe the results of our method frame by frame. In frame 32, the northbound vehicles are moving south and one person is walking on the sidewalk from east to west. Our proposed method correctly segments the vehicle flow (1st row, red) and the person (1st row, green), but SRF is unable to distinguish the different motion patterns of the vehicles and the person (3rd row, green), and proposedNC even misses the motion of the person (2nd row, green). In frame 146, pedestrians walking on the sidewalk are detected as one group (1st row, red); SRF, however, views the pedestrians as different groups (3rd row, yellow and purple). In frame 212, our method correctly segments the cars having different motion directions (1st row, green and red), but SRF views them as the same group (3rd row, red). In frame 425, the bottom pedestrian flow actually contains two different motion directions: some people are walking east and some are going straight. Our method correctly detects the two motion patterns (1st row, purple and aquamarine); SRF, however, views the two motion patterns as one group (3rd row, aquamarine). From these results we can see clearly that our proposed method not only captures the temporal changes of the crowd sequence (e.g., the car flow from north to south in frame 32) but also depicts the local motion changes (e.g., the cars from different directions in frame 212 and the two motion patterns of pedestrians in frame 425).

[Figure 5: Comparison of segmentation results using SRF (blue), the proposed method (red), and proposedNC (green). (a) Number of correctly segmented motions per frame. (b) Number of incorrectly segmented motions per frame.]

[Figure 6: Sample frames from the three datasets.]

[Figure 7: Comparison results of abnormal detection for sample videos from PETS2009. (a) Sample videos from PETS2009. (b) Abnormal detection results (color bars for ground truth, the proposed method, SFM, StrFM, and OFM).]

Figure 5 shows the quantitative comparison of the three methods. In our experiment, we manually label the segmented regions frame by frame according to the actual motion scene and then count the correctly and incorrectly segmented regions in the crowd sequence. Figure 5 shows that our method is inferior to the other two methods in the number of incorrectly segmented motions, which is due to the different labeling criteria of [18] and of our work. In [18], a segmented region is counted as correct when the direction difference between an object and the majority of objects in the region is less than 90 degrees; in our experiment, however, we count the segmentation result according to the actual direction of the motion groups, which sharply increases the number of incorrect segmentations.

5.2. Results of Crowd Abnormal Behavior Detection. In this part, our method is tested on three public datasets: PETS2009 (http://www.cvg.reading.ac.uk/PETS2009/a.html), the University of Minnesota (UMN) dataset (http://mha.cs.umn.edu/proj_events.shtml#crowd), and a video collection from websites [14]. PETS2009 contains different crowd motion patterns such as walking, running, gathering, and splitting. We select 1066 frames from the "S3" sequence as our experimental object; in this dataset, crowd movement changing from walking to running is defined as abnormal behavior. UMN includes eleven different kinds of escape events shot in three indoor and outdoor scenes. We select 1377 frames from the UMN dataset; the abnormal behavior in this dataset is crowd escape. The web dataset includes crowd videos captured at a road intersection and contains 3071 frames; it contains two types of motion patterns, waiting and crossing, and pedestrian crossing is considered abnormal behavior. We also manually label all videos as ground truth. From the three datasets, we randomly select 40 frames for training and use the other frames for testing. Figure 6 illustrates sample frames of the three datasets: the first row shows sampled frames from PETS2009, the second row from UMN, and the last row from the web dataset.

In the streak flow phase, we downsample the original pixels to speed up the experiments; the sample rates for the three datasets are 30, 50, and 20, respectively. In order to account for the different characteristics of the videos, the parameters in formula (7) are set to α = 0.2, β = 0.5, and γ = 0.3. After constructing the social force graph, we segment the video sequence into different motion groups, and visual words are generated by clustering all motion groups; the codebook sizes are 32, 64, and 100, respectively. In the LDA phase, we learn N_z = 16 latent topics for abnormal behavior detection. We compare our method with the social force based method [5] (denoted as SFM), the streak flow based method [18] (denoted as StrFM), and the optical flow based method (denoted as OFM). For the comparison, we obtain visual words based on the corresponding visual features and then train the words with the LDA model.

[Figure 8: Comparison results of abnormal detection for sample videos from UMN. (a) Sample videos from UMN. (b) Abnormal detection results.]

Figures 7-9 show sample detection results on the three datasets. In each figure, the first picture is the first frame and the second picture is an abnormal frame of the experimental scene. We then use color bars to represent the detection result of each frame in the crowd sequence; the color of the bar denotes the status of the current frame, with green for normal frames and red for abnormal frames. The first bar is the ground truth, and the remaining bars show the results of our proposed method, SFM, StrFM, and OFM, respectively. Overall, these results show that our proposed method is able to detect abnormal crowd dynamics, and in most cases the results of our method outperform the other algorithms.

Figure 7 shows partial results on the running video sequence of PETS2009, Figure 8 illustrates some of the results on the escape crowd video of UMN, and Figure 9 exhibits part of the results on the web video. The results of the proposed method better conform to the ground truth because we not only consider the temporal characteristics of the moving objects but also exploit the spatial social force relationship between objects. It is observed that in Figure 7 SFM obtains an even better result than our proposed method because the social force increases sharply in the running scene. However, SFM produces more false detections in some crowd scenes, for example, crowd walking (Figure 8) and road crossing (Figure 9), because most of the social force in these scenes is nearly zero and thus SFM has difficulty depicting this type of crowd motion. OFM is clearly inferior to our method because optical flow fails to capture motion over a period of time; in particular, in Figure 9 there are more false detections for the road crossing behavior because most of the walking behavior is irregular and optical flow fails to capture irregular motion patterns. The results of StrFM are better than those of OFM in the three figures; however, this method still produces more false detections than our method and SFM. StrFM attains a comparable result when the motion dynamics are relatively simple (Figure 7) but undergoes a performance drop when a substantial new flow constantly enters the scene (Figure 9).

[Figure 9: Comparison results of abnormal detection for sample videos from the WEB dataset. (a) Sample videos from WEB. (b) Abnormal detection results.]

Table 1: Accuracy (%) comparison of the proposed method with SFM, StrFM, and OFM in crowd abnormal detection for the three datasets.

                   Proposed method   SFM     StrFM   OFM
PETS2009           83.33             82.19   73.08   68.52
UMN                85.10             80.25   79.52   65.31
WEB                96.21             91.23   94.02   95.89
Overall accuracy   88.21             84.79   82.21   76.57

In order to evaluate our method quantitatively, the accuracy results of abnormal detection are shown in Table 1. The results show that our proposed model improves accuracy by nearly 5% compared with the social force model. In particular, in higher density crowd scenes the social force of the particle groups changes only slowly, so the social force model does not distinguish normal behaviors very well; for example, in Table 1 the SFM accuracy is lower than that of our proposed method and, on the WEB dataset, even lower than that of the optical flow method.

Figure 10 shows the ROC curves for abnormal behavior detection with our proposed method and the alternative methods. It is observed that our method provides better results.

6. Conclusion

In this paper, we proposed a novel framework for analyzing motion patterns in crowd video. For the spatiotemporal features, we extracted streak flow to represent the global motion; for the interaction of crowd objects, we constructed a weighted social force graph to extract particle groups representing local group motion. Finally, we utilized the LDA model to detect crowd abnormal behavior based on the proposed crowd model. The experimental results show that the proposed method successfully segments crowd motion and detects crowd abnormal behavior, and that it outperforms current state-of-the-art methods for crowd motion analysis. As future work, we plan to further study crowd motion behavior in high density scenes and to increase the robustness of our algorithm.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

[Figure 10: ROC curves (TPR versus FPR) for abnormality detection on the three datasets, comparing the proposed method, the social force method, the streak flow method, and the optical flow method. (a) ROC of PETS2009. (b) ROC of UMN. (c) ROC of WEB.]

Acknowledgment

This work was supported by the Research Foundation of the Education Bureau of Hunan Province, China (Grant no. 13C474).

References

[1] D. Helbing and P. Molnar, "Social force model for pedestrian dynamics," Physical Review E, vol. 51, no. 5, pp. 4282-4286, 1995.

[2] W. Ge, R. T. Collins, and R. B. Ruback, "Vision-based analysis of small groups in pedestrian crowds," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 5, pp. 1003-1016, 2012.

[3] M. Rodriguez, S. Ali, and T. Kanade, "Tracking in unstructured crowded scenes," in Proceedings of the 12th International Conference on Computer Vision (ICCV '09), pp. 1389-1396, IEEE, Kyoto, Japan, October 2009.

[4] L. Kratz and K. Nishino, "Tracking pedestrians using local spatio-temporal motion patterns in extremely crowded scenes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 5, pp. 987-1002, 2012.

[5] R. Mehran, A. O. Oyama, and M. Shah, "Abnormal crowd behavior detection using social force model," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR '09), pp. 935-942, Miami, Fla, USA, June 2009.

[6] R. Raghavendra, A. Del Bue, M. Cristani, and V. Murino, "Optimizing interaction force for global anomaly detection in crowded scenes," in Proceedings of the IEEE 12th International Conference on Computer Vision (ICCV '11), pp. 136-143, Barcelona, Spain, November 2011.

[7] R. Li and R. Chellappa, "Group motion segmentation using a spatio-temporal driving force model," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 2038-2045, IEEE, San Francisco, Calif, USA, June 2010.

[8] B. Zhou, X. Tang, H. Zhang, and X. Wang, "Measuring crowd collectiveness," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 8, pp. 1586-1599, 2014.

[9] G. T. Bae, S. Y. Kwak, and H. R. Byun, "Motion pattern analysis using partial trajectories for abnormal movement detection in crowded scenes," Electronics Letters, vol. 49, no. 3, pp. 186-187, 2013.

[10] C. Tomasi and T. Kanade, "Detection and tracking of point features," Tech. Rep. CMU-CS-91-132, Carnegie Mellon University, Pittsburgh, Pa, USA, 1991.

[11] E. L. Andrade, S. Blunsden, and R. B. Fisher, "Modelling crowd scenes for event detection," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), vol. 1, pp. 175-178, IEEE, Hong Kong, August 2006.

[12] T. Li, H. Chang, M. Wang, B. Ni, R. Hong, and S. Yan, "Crowded scene analysis: a survey," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 3, pp. 367-386, 2015.

[13] N. Ihaddadene and C. Djeraba, "Real-time crowd motion analysis," in Proceedings of the 19th International Conference on Pattern Recognition (ICPR '08), pp. 1-4, December 2008.

[14] S. Ali and M. Shah, "A Lagrangian particle dynamics approach for crowd flow segmentation and stability analysis," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1-7, Minneapolis, MN, USA, June 2007.

[15] X. Pan, C. S. Han, K. Dauber, and K. H. Law, "Human and social behavior in computational modeling and analysis of egress," Automation in Construction, vol. 15, no. 4, pp. 448-461, 2006.

[16] A. Borrmann, A. Kneidl, G. Koster, S. Ruzika, and M. Thiemann, "Bidirectional coupling of macroscopic and microscopic pedestrian evacuation models," Safety Science, vol. 50, no. 8, pp. 1695-1703, 2012.

[17] A. Bera and D. Manocha, "REACH - realtime crowd tracking using a hybrid motion model," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '15), pp. 740-747, IEEE, Seattle, Wash, USA, May 2015.

[18] R. Mehran, B. E. Moore, and M. Shah, "A streakline representation of flow in crowded scenes," in Computer Vision - ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5-11, 2010, Proceedings, Part III, vol. 6313 of Lecture Notes in Computer Science, pp. 439-452, Springer, Berlin, Germany, 2010.

[19] D. M. Blei, A. Y. Ng, and M. I. Jordan, "Latent Dirichlet allocation," The Journal of Machine Learning Research, vol. 3, pp. 993-1022, 2003.

[20] M. Mancas, N. R. Riche, J. Leroy, and B. Gosselin, "Abnormal motion selection in crowds using bottom-up saliency," in Proceedings of the 18th IEEE International Conference on Image Processing (ICIP '11), pp. 229-232, IEEE, Brussels, Belgium, September 2011.

[21] X. Zhu, J. Liu, J. Wang, W. Fu, and H. Lu, "Weighted interaction force estimation for abnormality detection in crowd scenes," in Proceedings of the 11th Asian Conference on Computer Vision (ACCV '12), pp. 507-518, Daejeon, Republic of Korea, November 2012.

[22] P. F. Felzenszwalb and D. P. Huttenlocher, "Efficient graph-based image segmentation," International Journal of Computer Vision, vol. 59, no. 2, pp. 167-181, 2004.



Based on the above observation 120596119889119894119895is defined as

120596119889

119894119895= exp( 120590

119889119894119895

) (11)

where 120590 is the influential factor and 119889119894119895 is the Euler distancebetween particles 120596119892

119894119895is used to judge whether the two

particles belong to the same initial analogous patch extractedin Section 31 If the two particles are in the same patch thenthe value of 120596119892

119894119895will be zero otherwise the value of 120596119892

119894119895is one

We use the clustering method [22] to extract particlegroups to indicate similarmotion patterns in a period of timeThe details of obtained particle groups will be described inSection 4

4 Application of Social Force Graph withStreak Flow Attribute

Using the weighted social force graph with steak flowattribute we demonstrate the strength of our method forcrowd motion segmentation and crowd abnormal behaviordetection

41 Crowd Segmentation Based on the social force graphwe can obtain particle groups which have the similar motion

patterns In the graph 119866 the values of edge weights indicatethe dissimilarity of motion pattern between particles Asshown in [22] a greedy method to segment image intoregions was presented based on the intensity differenceacross the region boundary Motivated by this method inthis paper we firstly define a predicate value to representgraph region boundary and then segment the graph intodifferent components Each component of the graph has asmaller motion difference therefore different componentsrepresent the dissimilar particle groups In order to measurethe dissimilarity of two components we utilize Dif in tomeasure themotion difference of all particles in a componentand Dif in(119877) is defined as

Dif in (119877) = Max119890isinMST(119877119864)

(119908 (119890)) (12)

where MST(119877 119864) is the minimum generation tree of compo-nent 119877 We define Difbew to indicate the motion differencebetween different components

Difbew (119877119894 119877119895) = MinV1015840119894isin119877119894V1015840119895isin119877119895(V1015840119894V1015840119895)isin119864

(119908V1015840119894V1015840119895

) (13)

When the value of Difbew(119877119894 119877119895) is smaller than the predicatevalue Thmerg we will merge the two components and thenupdate the value of Thmerg as

Thmerg (119877119894 119877119895)

= min (Dif in (119877119894) + 120591 (119877119894) Dif in (119877119895) + 120591 (119877119895)) (14)

where 120591 is a threshold function specified in the experimentbased on the size of the components

The results of crowd segmentation are presented inSection 5

42 Crowd Abnormal Behavior Detection Abnormal behav-ior detection is one of the most challenging tasks in the fieldof crowd analysis because the crowd abnormal behavior isdifficult to be defined formally Generally speaking crowdabnormal behavior is defined as a sudden change or irregular-ity in crowdmotion such as the phenomenon of escape panicin public places or the dramatic increase of crowd density Inthis section we will describe how to detect crowd abnormalbehavior based on our crowd motion model

Based on aforementioned crowd segmentation we haveclustered crowd sequence into particle groups with similarmotion pattern We view each group as a visual word 119908which is represented by interaction social force feature andthe streak flow feature And then we generate a codebook119863 = 1199081 1199082 119908119870 by 119870-means scheme The process ofcrowd abnormal behavior detection is regarded as the prob-lem of word-document analysis We adopt Latent DirichletAllocation [19]model to infer the distribution of visual wordsand then judge the emergence of abnormal event

LDA is a well-known topic model for document analysisIt views the generation of document as the random selectionof document topics and topic words Particularly in thecrowd analysis crowd behavior is represented as the result

6 Journal of Electrical and Computer Engineering

(a)

(b)

(c)

Figure 4 Comparison of segmentation results by using proposed method (1st row) and proposedNC (2nd row) and SRF (3rd row)

of random distribution of latent topics where the topics aredenoted as the distribution of visual words In this paper weview the whole crowd video as a corpus each crowd imageas a document Any document is modeled as the mixtureof topics 119911 = 1199111 1199112 119911119899 with the joint distributionparameter 120579

LDA model is trained to obtain the value of parameters120572 and 120573 which denotes the corresponding latent topics andthe conditional probability of each visual word belonging toa given topic respectively During the inferential processthe distribution of topic can be inferred by LDA modelAccording to the value of 120572 and 120573 the distribution of topicis denoted as

119901 (120579 119911 | V 120572 120573) =119901 (120579 119911 V | 120572 120573)119901 (V | 120572 120573)

(15)

Crowd abnormal behavior detection is then achieved bycomparing the topic distribution of current crowd frame tothe model of training data

5 Experiments and Discussion

In this section we will perform different experiments onsome publicly available datasets of crowd videos We applyour method to some common crowd motion analysis prob-lems The goals of our experiments are (1) to segment thecrowd video sequence into different motion pattern and (2)to detect abnormal behavior of crowd video

51 Results of Crowd Segmentation Here we provide theresults of our crowd segmentation algorithm We use thesame experiment video as that in [18] which is an intersection

video at Boston Boston video includes 531 frames and thevideo has three crowd behavior phases between twice trafficlight (1) traffic is formed (2) traffic lights change and a newtraffic flow emerges and (3) traffic lights change again andanother flow develops We compare the proposed methodwith the method in [18] (denoted as SRF) and the proposedmethod without initially spatiotemporal analogous patches(denoted as proposedNC)

Figure 4 presents the segmentation results of the abovethreemethodsWe describe the results of ourmethod in eachframe In frame 32 the north vehicles are moving to southand one person is walking on the sidewalk from east to westWe can see that our proposed method correctly segments thevehicle flow (1st row red) and the person (1st row green)But SRF is unable to distinguish the different motion patternsof vehicle and people (3rd row green) and the proposedNCeven neglected the motion of the people (2nd row green)In frame 146 pedestrians who are walking on the sidewalkare detected as a group (1st row red) however SRF views thepedestrian as different groups (3rd row yellow and purple)In frame 212 our method correctly segments the cars havingdifferent motion directions (1st row green and red) but SRFviews them as the same group (3rd row red) In frame 425the bottom pedestrian flow actually contains two differentmotion directions some people are walking east and somepeople are going straight Our method correctly detects thetwo motion patterns (1st row purple and aquamarine) SRFhowever views the two motion patterns as a group (3rd rowaquamarine) From the above experimental results we cansee clearly that our proposed method not only is able tocapture the temporal changes of crowd sequence (eg the carflow from north to south in frame 32) but also depicts thelocal motion changes (eg the cars from different direction

Journal of Electrical and Computer Engineering 7

SRFProposedNCProposed

100 150 200 250 300 350 400 450 50050Frames

100200300400500600700800900

1000C

orre

ctly

segm

ente

d m

otio

n

(a) Correctly segmented motion results

50

100

150

200

250

Inco

rrec

tly se

gmen

ted

mot

ion

SRFProposedNCProposed

100 150 200 250 300 350 400 450 50050Frames

(b) Incorrectly segmented motion results

Figure 5 Comparison of segmentation results by using SRF (blue) proposed method (red) and proposedNC (green)

Figure 6 Sample frames from three datasets

8 Journal of Electrical and Computer Engineering

(a) Sample videos from PETS2009

Ground truth

Proposed

SFM

StrFM

OFM

Walk WalkRun Run

Frame 222(b) Abnormal detection results of PETS2009

Figure 7 Comparison results of abnormal detection for sample videos from PETS2009

in frame 212 and the two motion patterns of pedestrians inframe 425)

Figure 5 shows the quantitative comparison of the threemethods In our experiment we manually label the seg-mented regions frame by frame according to the actualmotion scene and then count the correctly segmented regionsand incorrectly segmented regions in crowd sequenceFigure 5 demonstrates that ourmethod is inferior to the othertwomethods in number of incorrect segmentmotions whichare due to the different label method in [18] and our methodIn [18] the segment region is viewed as correct segmentationwhen difference of direction between the object and themajority of the objects in the region is less than 90 degreesHowever in our experiment we count the segment resultaccording to the actual direction of motion groups whichresults in the numbers of incorrect segmentation sharplyincreased

52 Results of Crowd Abnormal Behavior Detection In thispart our method is tested on three public datasets fromPETS2009 (httpwwwcvgreadingacukPETS2009ahtml)University of Minnesota (UMN) (httpmhacsumnedu

proj eventsshtmlcrowd) and some video collection fromwebsites [14] PETS2009 contains different crowd motionpatterns such as walking running gathering and splittingWe select 1066 frames from ldquoS3rdquo sequence as our experimentobject and in this dataset crowd movement from walkingto running is defined as abnormal behavior UMN includeseleven different kinds of escape events which are shot inthree indoor and outdoor scenes We select 1377 frames fromUMN dataset in our experiment and the abnormal behaviorin this dataset is the crowd escape The web dataset includescrowd videos captured at road intersection which contains3071 frames This crowd video contains two types of motionpatterns waiting and crossing and pedestrian crossing isconsidered as abnormal behavior We also manually labelall videos as ground truth From the three datasets werandomly select 40 frames for training and the otherframes for testing Figure 6 illustrates the sample frames ofthe three datasets The first row is the sampled frames fromPETS2009 the second row fromUMN and the last row fromweb dataset

In the streak flow phrase we downsampled the originalpixels to fasten the process of experiment The sample rate

Journal of Electrical and Computer Engineering 9

(a) Sample videos from UMN

Ground truth

Proposed

SFM

StrFM

OFM

Frame 828

EscapeWait

(b) Abnormal detection results of UMN

Figure 8 Comparison results of abnormal detection for sample videos from UMN

of the three datasets is 30 50 and 20 respectivelyIn order to utilize the different characteristic of the videosthe parameters in formula (7) are set to 120572 = 02 120573 =05 and 120574 = 03 After constructing social force graph wesegment the video sequence into different motion groupsand then visual words are generated based on clustering allmotion groups The length of visual words is 32 64 and 100respectively In the LDA phrase we learned 119873119911 = 16 latenttopics for abnormal behavior detection We compare ourmethod with the social force based method [5] (denoted asSFM) the streak flow based method [18] (denoted as StrFM)and the optical flow based method (denoted as OFM) Inthe process of comparison we obtain visual words based oncorresponding visual features and then train the words byLDA model

Figures 7ndash9 show some sample detection results of thethree datasets In each figure the first picture is first frameand the second picture is abnormal frame of the experimentalscene We then use color bar to represent the detected resultsof each frame in the crowd sequence Different color of thebar denotes the status of current frame green is normalframe and red is the abnormal frame The first bar indicatesthe ground truth bar and the rest of the bars show theresults from our proposed method SFM StrFM and OFMrespectively All in all these results show that our proposed

method is able to detect the crowd abnormal dynamic andinmost cases the results of ourmethod outperform the otheralgorithms

Figure 7 shows the partial results on running videosequence of PETS2009 Figure 8 illustrates some of the resultson escape crowd video of UMN Figure 9 exhibits part ofthe results on web video We can see that the results ofthe proposed method better conform to the ground truthbecause we not only consider the temporal characteristic ofthe motion objects but also exploit the spatial social forcerelationship between objects It is observed that in Figure 7SFM obtains an even better result compared to our proposedmethod because the social force is sharply increased inrunning scene However SFM results in more false detectionin some crowd scene for example crowd walking (Figure 8)and road crossing (Figure 9) because most of the social forcein these scenes is nearly zero and thus SFM faces difficultyin depicting this type of crowd motion OFM is obviouslyinferior to ourmethodbecause the optical flow fails to capturethe motion in a period of time In particular in Figure 9there is more false detection for road crossing behaviorbecause most of the walking behavior is irregular but opticalflow fails to capture irregular motion pattern The results ofStrFM are better than OFM in the three figures howeverthis method still has more false detection compared to our

10 Journal of Electrical and Computer Engineering

(a) Sample videos fromWEB

Ground truth

Proposed

SFM

StrFM

OFM

Frame 1851

Crossing WaitWait

(b) Abnormal detection results of WEB

Figure 9 Comparison results of abnormal detection for sample videos fromWEB

Table 1 Accuracy comparison of the proposed method with SFMStrFM andOFM in crowd abnormal detection for the three datasets

Proposed method SFM StrFM OFMPETS2009 8333 8219 7308 6852UMN12 8510 8025 7952 6531WEB 9621 9123 9402 9589Overall accuracy 8821 8479 8221 7657

method and SFM This method attains a comparable resultwhen the motion dynamic is relatively simple (Figure 7) butit undergoes a performance descent when a substantial newflow constantly enters into the scene (Figure 9)

In order to evaluate our method quantitatively the accu-racy results of abnormal detection are shown in Table 1 Theresults show that our proposedmodel has improved nearly by5 accuracy compared with social force model In particularin the higher density crowd scene the social force of particlegroups will have a slowly changing ratio thus the social forcemodel is not very obvious for normal behaviors For examplethe SFM accuracy is lower than our proposed method andeven lower than optical flow method in Table 1

Figure 10 shows the ROC curves for abnormal behav-ior detection based on our proposed method against the

alternative methods It is observed that our method providesbetter results

6 Conclusion

In this paper we proposed a novel framework for analyzingmotion patterns in crowd video On the aspect of spatiotem-poral feature we extracted the streak flow to represent theglobal motion while for the interaction of crowd objectswe constructed a weighted social force graph to extractparticle groups to represent local group motion Finally weutilized LDAmodel to detect crowd abnormal behavior basedon the proposed crowd model The experimental resultshave shown that our proposed method successfully segmentscrowdmotion and detects crowd abnormal behavior And theproposed method outperformed the current state-of-the-artmethods for crowd motion analysis As part of our futurework we plan to further study the crowd motion behaviorin high density scenes and increase the robustness of ouralgorithm

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Journal of Electrical and Computer Engineering 11

ROC curves

0 02 03 04 05 06 07 08 09 101

FPR

0

01

02

03

04

05

06

07

08

09

1

TPR

Proposed methodSocial force method

Steak flow methodOptical flow method

(a) ROC of PETS2009

ROC curves

0 02 03 04 05 06 07 08 09 101

FPR

0

01

02

03

04

05

06

07

08

09

1

TPR

Proposed methodSocial force method

Steak flow methodOptical flow method

(b) ROC of UMNROC curves

0 02 03 04 05 06 07 08 09 101

FPR

0

01

02

03

04

05

06

07

08

09

1

TPR

Proposed methodSocial force method

Steak flow methodOptical flow method

(c) ROC of WEB

Figure 10 The ROCs for abnormality detection on three datasets

Acknowledgment

This work was supported by the Research Foundation ofEducation Bureau of Hunan Province China (Grant no13C474)

References

[1] D Helbing and P Molnar ldquoSocial force model for pedestriandynamicsrdquo Physical Review E vol 51 no 5 pp 4282ndash4286 1995

[2] W Ge R T Collins and R B Ruback ldquoVision-based analysisof small groups in pedestrian crowdsrdquo IEEE Transactions onPattern Analysis and Machine Intelligence vol 34 no 5 pp1003ndash1016 2012

[3] M Rodriguez S Ali and T Kanade ldquoTracking in unstructuredcrowded scenesrdquo in Proceedings of the 12th International Con-ference on Computer Vision (ICCV rsquo09) pp 1389ndash1396 IEEEKyoto Japan October 2009

[4] L Kratz and K Nishino ldquoTracking pedestrians using localspatio-temporal motion patterns in extremely crowded scenesrdquoIEEE Transactions on Pattern Analysis andMachine Intelligencevol 34 no 5 pp 987ndash1002 2012

[5] R Mehran A O Oyama and M Shah ldquoAbnormal crowdbehavior detection using social force modelrdquo in Proceedings

of the IEEE Computer Society Conference on Computer Visionand Pattern Recognition Workshops (CVPR rsquo09) pp 935ndash942Miami Fla USA June 2009

[6] R Raghavendra A Del Bue M Cristani and V MurinoldquoOptimizing interaction force for global anomaly detectionin crowded scenesrdquo in Proceedings of the IEEE 12th Interna-tional Conference on Computer Vision (ICCV rsquo11) pp 136ndash143Barcelona Spain November 2011

[7] R Li and R Chellappa ldquoGroup motion segmentation using aspatio-temporal driving forcemodelrdquo in Proceedings of the IEEEConference on Computer Vision and Pattern Recognition (CVPRrsquo10) pp 2038ndash2045 IEEE San Francisco Calif USA June 2010

[8] B Zhou X Tang H Zhang and X Wang ldquoMeasuring crowdcollectivenessrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 36 no 8 pp 1586ndash1599 2014

[9] G T Bae S Y Kwak and H R Byun ldquoMotion pattern analysisusing partial trajectories for abnormal movement detection incrowded scenesrdquo Electronics Letters vol 49 no 3 pp 186ndash1872013

[10] C Tomasi and T Kanade ldquoDetection and tracking of point fea-turesrdquo Tech Rep CMU-CS-91-132 CarnegieMellon UniversityPittsburgh Pa USA 1991

[11] E L Andrade S Blunsden and R B Fisher ldquoModellingcrowd scenes for event detectionrdquo in Proceedings of the 18th

12 Journal of Electrical and Computer Engineering

International Conference on Pattern Recognition (ICPR rsquo06) vol1 pp 175ndash178 IEEE Hong Kong August 2006

[12] T Li H ChangMWang B Ni R Hong and S Yan ldquoCrowdedscene analysis a surveyrdquo IEEE Transactions on Circuits andSystems for Video Technology vol 25 no 3 pp 367ndash386 2015

[13] N Ihaddadene and C Djeraba ldquoReal-time crowd motionanalysisrdquo in Proceedings of the 19th International Conference onPattern Recognition (ICPR rsquo08) pp 1ndash4 December 2008

[14] S Ali and M Shah ldquoA Lagrangian particle dynamics approachfor crowd flow segmentation and stability analysisrdquo in Pro-ceedings of the IEEE Computer Society Conference on ComputerVision and Pattern Recognition (CVPR rsquo07) pp 1ndash7 Minneapo-lis MN USA June 2007

[15] X Pan C S Han K Dauber and K H Law ldquoHuman and socialbehavior in computational modeling and analysis of egressrdquoAutomation in Construction vol 15 no 4 pp 448ndash461 2006

[16] A Borrmann A Kneidl G Koster S Ruzika and M Thie-mann ldquoBidirectional coupling of macroscopic andmicroscopicpedestrian evacuation modelsrdquo Safety Science vol 50 no 8 pp1695ndash1703 2012

[17] A Bera and D Manocha ldquoREACHmdashrealtime crowd trackingusing a hybrid motion modelrdquo in Proceedings of the IEEEInternational Conference on Robotics and Automation (ICRArsquo15) pp 740ndash747 IEEE Seattle Wash USA May 2015

[18] R Mehran B E Moore and M Shah ldquoA streakline repre-sentation of flow in crowded scenesrdquo in Computer VisionmdashECCV 2010 11th European Conference on Computer VisionHeraklion Crete Greece September 5ndash11 2010 Proceedings PartIII vol 6313 of Lecture Notes in Computer Science pp 439ndash452Springer Berlin Germany 2010

[19] D M Blei A Y Ng and M I Jordan ldquoLatent DirichletallocationrdquoThe Journal of Machine Learning Research vol 3 pp993ndash1022 2003

[20] M Mancas N R Riche J Leroy and B Gosselin ldquoAbnormalmotion selection in crowds using bottom-up saliencyrdquo inProceedings of the 18th IEEE International Conference on ImageProcessing (ICIP rsquo11) pp 229ndash232 IEEE Brussels BelgiumSeptember 2011

[21] X Zhu J Liu J Wang W Fu and H Lu ldquoWeighted interactionforce estimation for abnormality detection in crowd scenesrdquo inProceedings of the 11th Asian Conference on Computer Vision(ACCV rsquo12) pp 507ndash518 Daejeon Republic of Korea Novem-ber 2012

[22] P F Felzenszwalb andD PHuttenlocher ldquoEfficient graph-basedimage segmentationrdquo International Journal of Computer Visionvol 59 no 2 pp 167ndash181 2004

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of


Figure 2: Comparison of optical flow and streak flow. (a) The original image. (b) Optical flow of the original image. (c) Streak flow of the original image.

$t = 0, 1, 2, \ldots, T$. The position of particle $p$ at any time instant is denoted as [18]

$$x^{p}_{f_t} = x^{p}_{f_{t-1}} + u\left(x^{p}_{f_{t-1}}, y^{p}_{f_{t-1}}, f_{t-1}\right),$$
$$y^{p}_{f_t} = y^{p}_{f_{t-1}} + v\left(x^{p}_{f_{t-1}}, y^{p}_{f_{t-1}}, f_{t-1}\right), \quad (1)$$

where $u$ and $v$ represent the velocity field of particle $p$ at time $t$. This generates a series of curves that start from particle $p$. For a steady motion flow these curves coincide with the paths of the particles, but they differ dramatically in unsteady flow. In order to fill the gaps in unsteady flow, we propagate the particles backward based on the current flow field using the fourth-order Runge-Kutta-Fehlberg algorithm [14].
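As a concrete illustration of Eq. (1), the sketch below advects particles with a per-frame velocity field (for example, frame-to-frame optical flow) and accumulates streaklines by releasing a new particle at every seed point in every frame. The function name, the grid spacing, and the nearest-neighbor flow lookup are illustrative assumptions, not the authors' implementation (which additionally propagates particles backward with the Runge-Kutta-Fehlberg scheme).

```python
import numpy as np

def advect_streaklines(flow_fields, grid_step=10):
    """Accumulate streaklines as in Eq. (1): at every frame a new particle is
    released from each grid point, and all previously released particles are
    advected by the current velocity field.

    flow_fields: list of (H, W, 2) arrays; flow_fields[t][y, x] = (u, v) at frame t.
    Returns one (num_points, 2) array of (x, y) positions per seed point.
    """
    H, W, _ = flow_fields[0].shape
    seeds = [(x, y) for y in range(0, H, grid_step) for x in range(0, W, grid_step)]
    streaklines = {s: [] for s in seeds}

    for flow in flow_fields:
        # A streakline releases a fresh particle at its seed point every frame.
        for s in streaklines:
            streaklines[s].append(np.array(s, dtype=float))
        # Advect every particle released so far with the current flow field.
        for particles in streaklines.values():
            for p in particles:
                xi = min(max(int(round(float(p[0]))), 0), W - 1)
                yi = min(max(int(round(float(p[1]))), 0), H - 1)
                u, v = flow[yi, xi]
                p[0] = min(max(p[0] + u, 0.0), W - 1.0)
                p[1] = min(max(p[1] + v, 0.0), H - 1.0)

    return [np.vstack(particles) for particles in streaklines.values()]
```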

However, the positions of particles usually have subpixel accuracy. In order to represent the streak flow $s = (s_u, s_v)$ of every pixel, we utilize the three nearest neighbor particles to compute the streak flow by linear interpolation. Here we describe the computation of $s_u$; the method for estimating $s_v$ is the same. We use $O_u = [O^i_u]$ to represent the streak flow of each particle $i$ in the horizontal direction. Then we define

$$O^i_u = a_1 \cdot s^i_u(n_1) + a_2 \cdot s^i_u(n_2) + a_3 \cdot s^i_u(n_3) \quad (2)$$

where $n_j$ is the index of the $j$th neighbor pixel and $a_j$ is the known triangulation coefficient [18] for that neighbor. Applying formula (2) to all of the data of $s_u$, we obtain the following equation:

$$A s_u = O_u \quad (3)$$

We solve this equation to obtain the value of $s_u$. Figure 2 illustrates the difference between optical flow and streak flow; we can see that streak flow better depicts the motion changes.
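A possible reading of Eqs. (2)-(3) is sketched below: every particle observation is written as a weighted combination of the unknown per-pixel streak values around its subpixel position, and the resulting sparse system $A s_u = O_u$ is solved in the least-squares sense. Bilinear weights stand in for the triangulation coefficients $a_j$ of [18], and the function name and the use of SciPy's `lsqr` solver are assumptions for illustration.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

def dense_streak_component(positions, values, H, W):
    """Recover one dense streak flow component from per-particle observations.

    positions: (P, 2) array of subpixel (x, y) particle positions.
    values:    (P,) array of per-particle streak values (e.g. the u component).
    Returns an (H, W) array with the dense streak flow component.
    """
    rows, cols, weights = [], [], []
    for i, (x, y) in enumerate(positions):
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        fx, fy = x - x0, y - y0
        # Bilinear weights over the four surrounding pixel-grid nodes.
        for dx, dy, w in [(0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
                          (0, 1, (1 - fx) * fy), (1, 1, fx * fy)]:
            px = min(max(x0 + dx, 0), W - 1)
            py = min(max(y0 + dy, 0), H - 1)
            rows.append(i)
            cols.append(py * W + px)
            weights.append(w)
    A = coo_matrix((weights, (rows, cols)), shape=(len(positions), H * W))
    s = lsqr(A, np.asarray(values, dtype=float))[0]   # least-squares solution of A s = O
    return s.reshape(H, W)
```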

3. Social Force Graph for Crowd Motion Analysis

The aforementioned streak flow represents particle motion over a period of time. However, in crowd behavior analysis we are more concerned with the motion of particle groups than with a particular particle. Most existing methods extract group motion by clustering spatiotemporal features. However, the interaction between particles plays an important role in crowd motion: a particular pedestrian determines and changes his own motion state according to the motion of his neighbors and the global motion in his view. Recently, the social force model has been adopted to simulate the dynamics of pedestrian movement [1, 5, 7, 20, 21], and in several crowd applications the social force model has achieved satisfying performance.

Based on the above motivation, we propose the idea of representing motion particle groups by a social force graph with streak flow attributes. We use streak flow to represent the global motion state over a period of time; meanwhile, we extract the social interaction force between particles to represent the influence of individual movement and then construct a weighted social force graph to extract particle groups that reveal the local crowd motion patterns.

Figure 3: An example of spatiotemporal analogous patches. (a) The original image. (b) Partition patches overlapped on the original image.

3.1. Partition of Spatiotemporal Analogous Patches

In order to model crowd motion, we first partition the crowd sequence into analogous patches that have similar spatiotemporal features. Specifically, we generate analogous patches by clustering pixels with similar motion features. For a specific image in the crowd sequence, the point $p_i(x, y)$ denotes any pixel of the image. We define the similarity of different pixels based on streak flow similarity and distance similarity.

For any two pixels $p_i$ and $p_j$, the distance similarity is defined as

$$\text{dis}_{ij} = \left\|x_i - x_j\right\|_2^2 + \left\|y_i - y_j\right\|_2^2 \quad (4)$$

In order to describe the similarity of motion direction, we divide the whole direction plane into sixteen equal bins, denoted as $\omega = \{i \mid i \in \{1, 2, 3, \ldots, N\}\}$ with $N = 16$. The motion direction of any pixel is quantized into one of the sixteen bins based on its streak flow direction. Then the direction similarity of two pixels is defined as

$$\text{dir}_{ij} = \left|\cos\left(\omega_i \cdot \frac{2\pi}{N}\right) - \cos\left(\omega_j \cdot \frac{2\pi}{N}\right)\right| \quad (5)$$

The motion magnitude similarity of two pixels is denoted as

$$\text{mag}_{ij} = \frac{\left(s^i_u - s^j_u\right)^2 + \left(s^i_v - s^j_v\right)^2}{\delta^2} \quad (6)$$

where $\delta$ is the average magnitude of the crowd streak flow, and $s^i_u$ and $s^i_v$ are the streak flow of pixel $p_i$ in the horizontal and vertical directions, respectively.
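The three similarity terms of Eqs. (4)-(6) can be computed directly from the streak flow field; a minimal sketch follows, in which the function name and the direction quantization via `arctan2` are illustrative assumptions.

```python
import numpy as np

def pixel_similarities(p_i, p_j, streak, delta, N=16):
    """Distance, direction, and magnitude (dis)similarity of two pixels (Eqs. (4)-(6)).

    p_i, p_j: (x, y) integer pixel coordinates.
    streak:   (H, W, 2) array holding (s_u, s_v) per pixel.
    delta:    average streak flow magnitude over the frame.
    N:        number of direction bins (16 in the paper).
    """
    (xi, yi), (xj, yj) = p_i, p_j
    s_i, s_j = streak[yi, xi], streak[yj, xj]

    # Eq. (4): squared spatial distance.
    dis = (xi - xj) ** 2 + (yi - yj) ** 2

    # Eq. (5): quantize the streak flow direction into N bins, then compare cosines.
    w_i = int(np.floor((np.arctan2(s_i[1], s_i[0]) % (2 * np.pi)) / (2 * np.pi / N)))
    w_j = int(np.floor((np.arctan2(s_j[1], s_j[0]) % (2 * np.pi)) / (2 * np.pi / N)))
    direction = abs(np.cos(w_i * 2 * np.pi / N) - np.cos(w_j * 2 * np.pi / N))

    # Eq. (6): magnitude difference normalised by the average magnitude.
    mag = ((s_i[0] - s_j[0]) ** 2 + (s_i[1] - s_j[1]) ** 2) / (delta ** 2)

    return dis, direction, mag
```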

In order to partition the crowd sequence into spatiotemporal analogous patches $\{P_n\}_{n=1}^{N}$, we first divide the crowd image into $N$ regular patches and place a seed on each regular patch. The partition process iteratively finds the nearest seed for each pixel and assigns the pixel to the patch to which that seed belongs; in each iteration, the position of each seed is updated to the center of its patch. The nearest seed is determined by minimizing the spatiotemporal motion similarity and spatial distance as

$$\min \sum_{n=1}^{N} \sum_{i \in P_n} \left(\alpha\,\text{dis}_{i\mu(n)} + \beta\,\text{dir}_{i\mu(n)} + \gamma\,\text{mag}_{i\mu(n)}\right) \quad (7)$$

where $i$ is any pixel in patch $P_n$, $\mu(n)$ is the center pixel of $P_n$, $N$ is the number of spatiotemporal analogous patches specified in our experiments, and the parameters $\alpha$, $\beta$, and $\gamma$ are balancing weights chosen according to the crowd sequence.
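The iterative partition of Eq. (7) resembles SLIC-style superpixel clustering driven by streak flow instead of color. A simplified (and deliberately brute-force) sketch follows, reusing the `pixel_similarities` helper from the previous sketch; the seed layout, the number of iterations, and the function name are assumptions.

```python
import numpy as np

def partition_patches(streak, n_side, alpha, beta, gamma, delta, iters=5):
    """Iteratively partition a frame into spatiotemporal analogous patches (Eq. (7)).

    streak: (H, W, 2) streak flow field; n_side: seeds per image side (N = n_side**2).
    Returns an (H, W) integer label map assigning each pixel to a patch.
    """
    H, W, _ = streak.shape
    ys = np.linspace(0, H - 1, n_side).astype(int)
    xs = np.linspace(0, W - 1, n_side).astype(int)
    seeds = [(x, y) for y in ys for x in xs]
    labels = np.zeros((H, W), dtype=int)

    for _ in range(iters):
        # Assignment step: each pixel joins the patch whose seed minimises Eq. (7).
        for y in range(H):
            for x in range(W):
                costs = []
                for (sx, sy) in seeds:
                    dis, direc, mag = pixel_similarities((x, y), (sx, sy), streak, delta)
                    costs.append(alpha * dis + beta * direc + gamma * mag)
                labels[y, x] = int(np.argmin(costs))
        # Update step: move every seed to the centre of its current patch.
        for n in range(len(seeds)):
            ys_n, xs_n = np.nonzero(labels == n)
            if len(ys_n) > 0:
                seeds[n] = (int(xs_n.mean()), int(ys_n.mean()))
    return labels
```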

Figure 3 illustrates an example of the analogous patches produced by the above method; we overlay the partition patches on the original image. The result shows that our partition method, based on the spatiotemporal similarity of particles, can capture the changes of moving objects. Pixels with similar motion patterns tend to be assigned to the same patch, for example, the patches with labels 1, 2, and 3.

3.2. Social Force Graph for Particle Groups

Spatiotemporal analogous patches reflect the global visual similarity of moving pedestrians, but they neglect the interaction behavior between pedestrians. Here we describe how to construct a weighted social force graph $G = (V, E)$ to model the group pattern of the crowd, where $V$ is a set of vertices and $E$ is a set of edges. In this paper, the vertex set $V$ represents the set of regular particles in the particle advection scheme, and the edge set $E$ measures the individuals' motion similarity through their respective interaction social forces. With the graph $G$ we can cluster similar particles into particle groups, which form the basis of crowd motion behavior analysis.

We first need to compute the interaction social force of the particles. The interaction social force model describes crowd behavior by considering the interactions of individuals and the constraints of the environment. The model is described as [5]

$$F^i_{\text{int}} = \frac{1}{\tau}\left(V^q_i - V_i\right) - m_i \frac{dV_i}{dt} \quad (8)$$

where $F^i_{\text{int}}$ is the interaction social force of particle $i$, $V^q_i$ denotes the personal desired velocity, and $V_i$ indicates the actual velocity of the individual. In this paper $V_i$ is represented by the streak flow of particle $i$ defined in Section 2, and $V^q_i$ is defined as

$$V^q_i = (1 - \eta)\,s_i + \eta\,s_p \quad (9)$$

where $s_i$ is the streak flow of particle $i$ and $s_p$ is the average streak flow of all particles in the patch $p$ to which particle $i$ belongs.
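A small sketch of Eqs. (8)-(9) for a single particle is given below; the finite-difference approximation of $dV_i/dt$ and the default values of $\tau$, $\eta$, $m$, and the time step are illustrative assumptions, since the text does not report them here.

```python
import numpy as np

def interaction_force(streak_i, patch_mean, prev_velocity, tau=0.5, eta=0.4, dt=1.0, m=1.0):
    """Interaction social force of one particle (Eqs. (8)-(9)).

    streak_i:      (2,) streak flow s_i of the particle (its actual velocity V_i).
    patch_mean:    (2,) average streak flow s_p of the particle's patch.
    prev_velocity: (2,) velocity of the same particle at the previous step,
                   used for a finite-difference estimate of dV_i/dt.
    tau, eta, dt, m: relaxation time, blending weight, time step, and mass
                   (illustrative values only).
    """
    v_i = np.asarray(streak_i, dtype=float)
    # Eq. (9): desired velocity blends the particle's own streak flow with the patch average.
    v_q = (1.0 - eta) * v_i + eta * np.asarray(patch_mean, dtype=float)
    # Finite-difference approximation of dV_i/dt.
    dv_dt = (v_i - np.asarray(prev_velocity, dtype=float)) / dt
    # Eq. (8): interaction social force.
    return (v_q - v_i) / tau - m * dv_dt
```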

As shown in [5], a high magnitude of the interaction social force implies that the particle's motion differs from the collective movement of the crowd. However, the interaction force only carries the motion information of a single particle. In real-life crowd analysis we usually need particle groups with similar motion patterns to characterize crowd behavior. Therefore, in this paper we utilize a weighted social force graph to extract different particle groups by considering multiple factors, for example, the distance factor and the initial analogous patch factor. For the weighted graph $G = (V, E)$, the weight $w_{ij}$ of the edge from particle $i$ to particle $j$ measures the dissimilarity of the two particles, and its value is defined as

$$w_{ij} = \omega^d_{ij} \cdot \omega^g_{ij} \cdot \left\|F^i_{\text{int}} - F^j_{\text{int}}\right\|_2^2 \quad (10)$$

where $\omega^d_{ij}$ and $\omega^g_{ij}$ are weighting factors based on the distance factor and the analogous patch factor, respectively. The smaller the value of $w_{ij}$, the more similar the two particles are. $\omega^d_{ij}$ measures the influence of distance: we assume that the motion similarity of different particles is inversely proportional to the distance between them, which means the influence of particle $i$ on nearby particles is larger than on distant particles.

Based on the above observation, $\omega^d_{ij}$ is defined as

$$\omega^d_{ij} = \exp\left(\frac{\sigma}{d_{ij}}\right) \quad (11)$$

where $\sigma$ is an influence factor and $d_{ij}$ is the Euclidean distance between the particles. $\omega^g_{ij}$ judges whether the two particles belong to the same initial analogous patch extracted in Section 3.1: if the two particles are in the same patch, the value of $\omega^g_{ij}$ is zero; otherwise it is one.
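Putting Eqs. (10)-(11) together, the weight of one graph edge can be computed as in the sketch below; the guard against a zero distance and the default value of $\sigma$ are assumptions added only for the illustration.

```python
import numpy as np

def edge_weight(pos_i, pos_j, F_i, F_j, patch_i, patch_j, sigma=1.0):
    """Weight of the graph edge between particles i and j (Eqs. (10)-(11)).

    pos_i, pos_j:     (x, y) particle positions.
    F_i, F_j:         interaction social forces from Eq. (8).
    patch_i, patch_j: labels of the initial analogous patches of Section 3.1.
    sigma:            influence factor of Eq. (11) (illustrative value).
    """
    d_ij = np.linalg.norm(np.asarray(pos_i, dtype=float) - np.asarray(pos_j, dtype=float))
    w_d = np.exp(sigma / max(d_ij, 1e-6))        # Eq. (11), guarded against d_ij = 0
    w_g = 0.0 if patch_i == patch_j else 1.0     # same initial patch -> zero weight
    force_diff = np.sum((np.asarray(F_i, dtype=float) - np.asarray(F_j, dtype=float)) ** 2)
    return w_d * w_g * force_diff                # Eq. (10)
```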

We use the clustering method of [22] to extract particle groups that indicate similar motion patterns over a period of time. The details of the obtained particle groups are described in Section 4.

4. Application of Social Force Graph with Streak Flow Attribute

Using the weighted social force graph with streak flow attribute, we demonstrate the strength of our method on crowd motion segmentation and crowd abnormal behavior detection.

4.1. Crowd Segmentation

Based on the social force graph, we can obtain particle groups that share similar motion patterns. In the graph $G$, the edge weights indicate the dissimilarity of motion patterns between particles. As shown in [22], a greedy method can segment an image into regions based on the intensity difference across region boundaries. Motivated by this method, we first define a predicate value to represent the graph region boundary and then segment the graph into different components. Each component of the graph has a small internal motion difference, so different components represent dissimilar particle groups. In order to measure the dissimilarity of two components, we use $\text{Dif}_{\text{in}}$ to measure the motion difference of all particles within a component, defined as

$$\text{Dif}_{\text{in}}(R) = \max_{e \in \text{MST}(R, E)} w(e) \quad (12)$$

where $\text{MST}(R, E)$ is the minimum spanning tree of component $R$. We define $\text{Dif}_{\text{bew}}$ to indicate the motion difference between two components:

$$\text{Dif}_{\text{bew}}(R_i, R_j) = \min_{v'_i \in R_i,\; v'_j \in R_j,\; (v'_i, v'_j) \in E} w_{v'_i v'_j} \quad (13)$$

When the value of $\text{Dif}_{\text{bew}}(R_i, R_j)$ is smaller than the predicate value $\text{Th}_{\text{merg}}$, we merge the two components and then update $\text{Th}_{\text{merg}}$ as

$$\text{Th}_{\text{merg}}(R_i, R_j) = \min\left(\text{Dif}_{\text{in}}(R_i) + \tau(R_i),\ \text{Dif}_{\text{in}}(R_j) + \tau(R_j)\right) \quad (14)$$

where $\tau$ is a threshold function specified in the experiments based on the size of the components.
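The merging rule of Eqs. (12)-(14) can be realised with a union-find structure in the spirit of [22]; a sketch is shown below, where $\tau(R) = k/|R|$ is assumed (as in [22]) and the constant $k$ is illustrative. Because edges are processed in increasing weight order, the first edge joining two components is their minimum connecting edge (Eq. (13)), and the largest edge accepted inside a component tracks its MST-based internal difference (Eq. (12)).

```python
def segment_particle_graph(num_particles, edges, k=300.0):
    """Greedy merging of particle components (Eqs. (12)-(14)).

    edges: list of (w_ij, i, j) tuples built from Eq. (10).
    Returns a component label for every particle.
    """
    parent = list(range(num_particles))
    size = [1] * num_particles
    internal = [0.0] * num_particles          # Dif_in of each component (Eq. (12))

    def find(a):
        # Union-find with path halving.
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for w, i, j in sorted(edges):             # process edges in increasing weight
        ri, rj = find(i), find(j)
        if ri == rj:
            continue
        # Eq. (14): adaptive merging threshold of the two components, tau(R) = k/|R|.
        th = min(internal[ri] + k / size[ri], internal[rj] + k / size[rj])
        if w <= th:                            # Eq. (13): smallest connecting edge
            parent[rj] = ri
            size[ri] += size[rj]
            internal[ri] = max(internal[ri], internal[rj], w)
    return [find(i) for i in range(num_particles)]
```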

The results of crowd segmentation are presented in Section 5.

4.2. Crowd Abnormal Behavior Detection

Abnormal behavior detection is one of the most challenging tasks in the field of crowd analysis, because crowd abnormal behavior is difficult to define formally. Generally speaking, crowd abnormal behavior is a sudden change or irregularity in crowd motion, such as escape panic in public places or a dramatic increase of crowd density. In this section we describe how to detect crowd abnormal behavior based on our crowd motion model.

Based on the aforementioned crowd segmentation, we have clustered the crowd sequence into particle groups with similar motion patterns. We view each group as a visual word $w$, represented by its interaction social force feature and its streak flow feature, and then generate a codebook $D = \{w_1, w_2, \ldots, w_K\}$ by a $K$-means scheme. Crowd abnormal behavior detection is then treated as a word-document analysis problem: we adopt the Latent Dirichlet Allocation (LDA) model [19] to infer the distribution of visual words and judge the emergence of abnormal events.
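A minimal sketch of the codebook step is given below using scikit-learn's `KMeans`; the per-group feature vector (e.g., the group's mean interaction force concatenated with its mean streak flow), the codebook size, and the helper names are assumptions. The histogram of quantized groups per frame plays the role of the word-document representation fed to LDA.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(group_features, K=64):
    """Cluster the feature vectors of all particle groups into K visual words."""
    kmeans = KMeans(n_clusters=K, n_init=10, random_state=0)
    kmeans.fit(np.asarray(group_features, dtype=float))
    return kmeans                               # cluster centres act as the codebook D

def frame_histogram(kmeans, frame_group_features, K=64):
    """Quantize the particle groups of one frame into visual words and build
    the word-count 'document' for that frame."""
    words = kmeans.predict(np.asarray(frame_group_features, dtype=float))
    return np.bincount(words, minlength=K)
```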

LDA is a well-known topic model for document analysis. It views the generation of a document as the random selection of document topics and topic words. In crowd analysis, crowd behavior is represented as the result of a random distribution of latent topics, where the topics are denoted as distributions over visual words. In this paper, we view the whole crowd video as a corpus and each crowd image as a document. Any document is modeled as a mixture of topics $z = \{z_1, z_2, \ldots, z_n\}$ with the joint distribution parameter $\theta$.

Figure 4: Comparison of segmentation results using the proposed method (1st row), proposedNC (2nd row), and SRF (3rd row).

The LDA model is trained to obtain the parameters $\alpha$ and $\beta$, which denote the corresponding latent topic distribution and the conditional probability of each visual word belonging to a given topic, respectively. During inference, the topic distribution can be inferred by the LDA model; given the values of $\alpha$ and $\beta$, the topic distribution is denoted as

$$p(\theta, z \mid v, \alpha, \beta) = \frac{p(\theta, z, v \mid \alpha, \beta)}{p(v \mid \alpha, \beta)} \quad (15)$$

Crowd abnormal behavior detection is then achieved by comparing the topic distribution of the current crowd frame to the model learned from the training data.
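The sketch below fits scikit-learn's `LatentDirichletAllocation` on the word histograms of training frames and, as a simple stand-in for the comparison described above, flags a frame as abnormal when its log-likelihood under the trained model falls below a threshold chosen on the training data; the threshold value and helper names are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

def train_lda(train_histograms, n_topics=16):
    """Fit an LDA topic model on the visual-word histograms of normal training frames."""
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(np.asarray(train_histograms))
    return lda

def is_abnormal(lda, frame_hist, threshold=-50.0):
    """Score one frame by its log-likelihood under the trained model and flag it
    as abnormal when the likelihood drops below the chosen threshold."""
    score = lda.score(np.asarray(frame_hist).reshape(1, -1))
    return score < threshold
```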

5. Experiments and Discussion

In this section we perform different experiments on publicly available crowd video datasets and apply our method to common crowd motion analysis problems. The goals of our experiments are (1) to segment the crowd video sequence into different motion patterns and (2) to detect abnormal behavior in crowd video.

5.1. Results of Crowd Segmentation

Here we provide the results of our crowd segmentation algorithm. We use the same experimental video as in [18], an intersection video at Boston. The Boston video includes 531 frames and contains three crowd behavior phases driven by the traffic lights: (1) a traffic flow is formed; (2) the traffic lights change and a new traffic flow emerges; and (3) the traffic lights change again and another flow develops. We compare the proposed method with the method in [18] (denoted as SRF) and with the proposed method without the initial spatiotemporal analogous patches (denoted as proposedNC).

Figure 4 presents the segmentation results of the above three methods; we describe the results of our method frame by frame. In frame 32, the northern vehicles are moving south and one person is walking on the sidewalk from east to west. Our proposed method correctly segments the vehicle flow (1st row, red) and the person (1st row, green), but SRF is unable to distinguish the different motion patterns of the vehicles and the person (3rd row, green), and proposedNC even misses the motion of the person (2nd row, green). In frame 146, the pedestrians walking on the sidewalk are detected as one group (1st row, red), whereas SRF splits them into different groups (3rd row, yellow and purple). In frame 212, our method correctly segments the cars with different motion directions (1st row, green and red), but SRF treats them as the same group (3rd row, red). In frame 425, the bottom pedestrian flow actually contains two different motion directions: some people are walking east and some are going straight. Our method correctly detects the two motion patterns (1st row, purple and aquamarine), while SRF treats them as one group (3rd row, aquamarine). These results show clearly that our proposed method not only captures the temporal changes of the crowd sequence (e.g., the car flow from north to south in frame 32) but also depicts the local motion changes (e.g., the cars from different directions in frame 212 and the two motion patterns of pedestrians in frame 425).


Figure 5: Comparison of segmentation results using SRF (blue), the proposed method (red), and proposedNC (green). (a) Correctly segmented motion results per frame. (b) Incorrectly segmented motion results per frame.

Figure 6: Sample frames from the three datasets.


Figure 7: Comparison results of abnormal detection for sample videos from PETS2009. (a) Sample videos from PETS2009. (b) Abnormal detection results of PETS2009 (bars, top to bottom: ground truth, proposed, SFM, StrFM, OFM; labels: walk/run).

Figure 5 shows the quantitative comparison of the three methods. In our experiment, we manually label the segmented regions frame by frame according to the actual motion scene and then count the correctly and incorrectly segmented regions in the crowd sequence. Figure 5 shows that our method is inferior to the other two methods in the number of incorrectly segmented motions, which is due to the different labeling criteria used in [18] and in our evaluation: in [18], a segmented region is considered correct when the direction difference between an object and the majority of the objects in the region is less than 90 degrees, whereas in our experiment we judge each segmented region according to the actual direction of the motion groups, which sharply increases the counted number of incorrect segmentations.

5.2. Results of Crowd Abnormal Behavior Detection

In this part, our method is tested on three public datasets: PETS2009 (http://www.cvg.reading.ac.uk/PETS2009/a.html), the University of Minnesota (UMN) dataset (http://mha.cs.umn.edu/proj_events.shtml#crowd), and a collection of videos from websites [14]. PETS2009 contains different crowd motion patterns such as walking, running, gathering, and splitting. We select 1066 frames from the "S3" sequence as our experimental data; in this dataset, crowd movement changing from walking to running is defined as abnormal behavior. UMN includes eleven different kinds of escape events shot in three indoor and outdoor scenes. We select 1377 frames from the UMN dataset, and the abnormal behavior in this dataset is the crowd escape. The web dataset includes crowd videos captured at a road intersection and contains 3071 frames; it contains two types of motion patterns, waiting and crossing, and pedestrian crossing is considered abnormal behavior. We manually label all videos as ground truth. From the three datasets, we randomly select 40 frames for training and use the remaining frames for testing. Figure 6 illustrates sample frames of the three datasets: the first row shows sampled frames from PETS2009, the second row from UMN, and the last row from the web dataset.

In the streak flow phase, we downsampled the original pixels to speed up the experiments.


Figure 8: Comparison results of abnormal detection for sample videos from UMN. (a) Sample videos from UMN. (b) Abnormal detection results of UMN (bars, top to bottom: ground truth, proposed, SFM, StrFM, OFM; labels: wait/escape).

The sample rates for the three datasets are 30, 50, and 20, respectively. In order to account for the different characteristics of the videos, the parameters in formula (7) are set to $\alpha = 0.2$, $\beta = 0.5$, and $\gamma = 0.3$. After constructing the social force graph, we segment the video sequence into different motion groups, and visual words are then generated by clustering all motion groups; the codebook size is 32, 64, and 100 for the three datasets, respectively. In the LDA phase, we learn $N_z = 16$ latent topics for abnormal behavior detection. We compare our method with the social force based method [5] (denoted as SFM), the streak flow based method [18] (denoted as StrFM), and an optical flow based method (denoted as OFM). For the comparison, we obtain visual words from the corresponding visual features and then train the words with the LDA model.

Figures 7–9 show sample detection results for the three datasets. In each figure, the first picture is the first frame and the second picture is an abnormal frame of the experimental scene. We then use color bars to represent the detection results for each frame of the crowd sequence: green marks a normal frame and red marks an abnormal frame. The first bar is the ground truth, and the remaining bars show the results of our proposed method, SFM, StrFM, and OFM, respectively. Overall, these results show that our proposed method is able to detect abnormal crowd dynamics and that, in most cases, it outperforms the other algorithms.

Figure 7 shows partial results on the running video sequence of PETS2009, Figure 8 illustrates results on the escape crowd video of UMN, and Figure 9 presents results on the web video. The results of the proposed method conform better to the ground truth because we not only consider the temporal characteristics of the moving objects but also exploit the spatial social force relationships between objects. In Figure 7, SFM obtains an even better result than our proposed method because the social force increases sharply in the running scene. However, SFM produces more false detections in some crowd scenes, for example, crowd walking (Figure 8) and road crossing (Figure 9), because most of the social force in these scenes is nearly zero and SFM therefore has difficulty depicting this type of crowd motion. OFM is clearly inferior to our method because optical flow fails to capture the motion over a period of time; in particular, in Figure 9 there are more false detections for the road crossing behavior because most of the walking behavior is irregular and optical flow fails to capture irregular motion patterns. The results of StrFM are better than those of OFM in all three figures, but this method still produces more false detections than our method and SFM.


Figure 9: Comparison results of abnormal detection for sample videos from WEB. (a) Sample videos from WEB. (b) Abnormal detection results of WEB (bars, top to bottom: ground truth, proposed, SFM, StrFM, OFM; labels: wait/crossing).

Table 1: Accuracy comparison of the proposed method with SFM, StrFM, and OFM in crowd abnormal detection for the three datasets.

Dataset          | Proposed method | SFM    | StrFM  | OFM
PETS2009         | 83.33%          | 82.19% | 73.08% | 68.52%
UMN              | 85.10%          | 80.25% | 79.52% | 65.31%
WEB              | 96.21%          | 91.23% | 94.02% | 95.89%
Overall accuracy | 88.21%          | 84.79% | 82.21% | 76.57%

StrFM attains a comparable result when the motion dynamics are relatively simple (Figure 7), but its performance degrades when a substantial new flow continually enters the scene (Figure 9).

In order to evaluate our method quantitatively, the accuracy results of abnormal detection are shown in Table 1. The results show that our proposed model improves accuracy by nearly 5% compared with the social force model. In particular, in higher-density crowd scenes the social force of particle groups changes only slowly, so the social force model is not very discriminative for normal behaviors; for example, in Table 1 the SFM accuracy is lower than that of our proposed method and, on the WEB dataset, even lower than that of the optical flow method.

Figure 10 shows the ROC curves for abnormal behavior detection using our proposed method and the alternative methods; it can be observed that our method provides better results.

6. Conclusion

In this paper we proposed a novel framework for analyzing motion patterns in crowd video. For the spatiotemporal features, we extracted the streak flow to represent the global motion, while for the interactions between crowd objects we constructed a weighted social force graph to extract particle groups representing local group motion. Finally, we utilized the LDA model to detect crowd abnormal behavior based on the proposed crowd model. The experimental results show that our proposed method successfully segments crowd motion and detects crowd abnormal behavior, and that it outperforms current state-of-the-art methods for crowd motion analysis. As future work, we plan to further study crowd motion behavior in high-density scenes and to increase the robustness of our algorithm.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Figure 10: The ROC curves (TPR versus FPR) for abnormality detection on the three datasets, comparing the proposed method, the social force method, the streak flow method, and the optical flow method. (a) ROC of PETS2009. (b) ROC of UMN. (c) ROC of WEB.

Acknowledgment

This work was supported by the Research Foundation of Education Bureau of Hunan Province, China (Grant no. 13C474).

References

[1] D. Helbing and P. Molnar, "Social force model for pedestrian dynamics," Physical Review E, vol. 51, no. 5, pp. 4282–4286, 1995.
[2] W. Ge, R. T. Collins, and R. B. Ruback, "Vision-based analysis of small groups in pedestrian crowds," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 5, pp. 1003–1016, 2012.
[3] M. Rodriguez, S. Ali, and T. Kanade, "Tracking in unstructured crowded scenes," in Proceedings of the 12th International Conference on Computer Vision (ICCV '09), pp. 1389–1396, IEEE, Kyoto, Japan, October 2009.
[4] L. Kratz and K. Nishino, "Tracking pedestrians using local spatio-temporal motion patterns in extremely crowded scenes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 5, pp. 987–1002, 2012.
[5] R. Mehran, A. O. Oyama, and M. Shah, "Abnormal crowd behavior detection using social force model," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR '09), pp. 935–942, Miami, Fla, USA, June 2009.
[6] R. Raghavendra, A. Del Bue, M. Cristani, and V. Murino, "Optimizing interaction force for global anomaly detection in crowded scenes," in Proceedings of the IEEE 12th International Conference on Computer Vision (ICCV '11), pp. 136–143, Barcelona, Spain, November 2011.
[7] R. Li and R. Chellappa, "Group motion segmentation using a spatio-temporal driving force model," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 2038–2045, IEEE, San Francisco, Calif, USA, June 2010.
[8] B. Zhou, X. Tang, H. Zhang, and X. Wang, "Measuring crowd collectiveness," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 8, pp. 1586–1599, 2014.
[9] G. T. Bae, S. Y. Kwak, and H. R. Byun, "Motion pattern analysis using partial trajectories for abnormal movement detection in crowded scenes," Electronics Letters, vol. 49, no. 3, pp. 186–187, 2013.
[10] C. Tomasi and T. Kanade, "Detection and tracking of point features," Tech. Rep. CMU-CS-91-132, Carnegie Mellon University, Pittsburgh, Pa, USA, 1991.
[11] E. L. Andrade, S. Blunsden, and R. B. Fisher, "Modelling crowd scenes for event detection," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), vol. 1, pp. 175–178, IEEE, Hong Kong, August 2006.
[12] T. Li, H. Chang, M. Wang, B. Ni, R. Hong, and S. Yan, "Crowded scene analysis: a survey," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 3, pp. 367–386, 2015.
[13] N. Ihaddadene and C. Djeraba, "Real-time crowd motion analysis," in Proceedings of the 19th International Conference on Pattern Recognition (ICPR '08), pp. 1–4, December 2008.
[14] S. Ali and M. Shah, "A Lagrangian particle dynamics approach for crowd flow segmentation and stability analysis," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1–7, Minneapolis, MN, USA, June 2007.
[15] X. Pan, C. S. Han, K. Dauber, and K. H. Law, "Human and social behavior in computational modeling and analysis of egress," Automation in Construction, vol. 15, no. 4, pp. 448–461, 2006.
[16] A. Borrmann, A. Kneidl, G. Koster, S. Ruzika, and M. Thiemann, "Bidirectional coupling of macroscopic and microscopic pedestrian evacuation models," Safety Science, vol. 50, no. 8, pp. 1695–1703, 2012.
[17] A. Bera and D. Manocha, "REACH - realtime crowd tracking using a hybrid motion model," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '15), pp. 740–747, IEEE, Seattle, Wash, USA, May 2015.
[18] R. Mehran, B. E. Moore, and M. Shah, "A streakline representation of flow in crowded scenes," in Computer Vision - ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5–11, 2010, Proceedings, Part III, vol. 6313 of Lecture Notes in Computer Science, pp. 439–452, Springer, Berlin, Germany, 2010.
[19] D. M. Blei, A. Y. Ng, and M. I. Jordan, "Latent Dirichlet allocation," The Journal of Machine Learning Research, vol. 3, pp. 993–1022, 2003.
[20] M. Mancas, N. R. Riche, J. Leroy, and B. Gosselin, "Abnormal motion selection in crowds using bottom-up saliency," in Proceedings of the 18th IEEE International Conference on Image Processing (ICIP '11), pp. 229–232, IEEE, Brussels, Belgium, September 2011.
[21] X. Zhu, J. Liu, J. Wang, W. Fu, and H. Lu, "Weighted interaction force estimation for abnormality detection in crowd scenes," in Proceedings of the 11th Asian Conference on Computer Vision (ACCV '12), pp. 507–518, Daejeon, Republic of Korea, November 2012.
[22] P. F. Felzenszwalb and D. P. Huttenlocher, "Efficient graph-based image segmentation," International Journal of Computer Vision, vol. 59, no. 2, pp. 167–181, 2004.

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Figure 3: An example of spatiotemporal analogous patches. (a) The original image. (b) Partition patches overlapped on the original image.

the global motion state in a period of time; meanwhile, we extract the social interaction force between particles to represent the influence of individual movement and then construct a weighted social force graph to extract particle groups revealing the local crowd motion patterns.

3.1. Partition Spatiotemporal Analogous Patches. In order to model crowd motion, we first partition the crowd sequence into analogous patches which have similar spatiotemporal features. Specifically, we generate analogous patches by clustering the pixels with similar motion features. For a specific image in the crowd sequence, a point $p_i(x, y)$ represents any pixel of the image. We define the similarity of different pixels based on streak flow similarity and distance similarity.

For any two pixels $p_i$ and $p_j$, the distance similarity is defined as

$\mathrm{dis}_{ij} = \|x_i - x_j\|_2^2 + \|y_i - y_j\|_2^2.$  (4)

In order to describe the similarity of motion direction, we divide the whole direction plane into sixteen equal bins, denoted as $\omega = \{i \mid i \in (1, 2, 3, \ldots, N)\}$ with $N = 16$. The motion direction of any pixel is quantized into one of the sixteen bins based on the streak flow direction. Then the direction similarity of two pixels is defined as

$\mathrm{dir}_{ij} = \left|\cos\left(\omega_i \cdot \frac{2\pi}{N}\right) - \cos\left(\omega_j \cdot \frac{2\pi}{N}\right)\right|.$  (5)

The motion magnitude similarity of two pixels is denoted as

$\mathrm{mag}_{ij} = \frac{(s^i_u - s^j_u)^2 + (s^i_v - s^j_v)^2}{\delta^2},$  (6)

where $\delta$ is the average magnitude of the crowd streak flow and $s^i_u$ and $s^i_v$ are the streak flow of pixel $p_i$ in the horizontal and vertical directions.

In order to partition the crowd sequence into spatiotemporal analogous patches $\{P_n\}_{n=1}^{N}$, we first divide the crowd image into $N$ regular patches and place a seed on each regular patch. The partition process iteratively finds the nearest seed for each pixel and adds the pixel to the patch to which that seed belongs. In each iteration, we update the position of each seed as the center of its patch. The determination of the nearest seed is based on minimizing spatiotemporal motion similarity and spatial distance as

$\min \sum_{n=1}^{N} \sum_{i \in P_n} \left(\alpha\,\mathrm{dis}_{i\mu(n)} + \beta\,\mathrm{dir}_{i\mu(n)} + \gamma\,\mathrm{mag}_{i\mu(n)}\right),$  (7)

where $i$ is any pixel in patch $P_n$, $\mu(n)$ is the center pixel of $P_n$, $N$ is the number of spatiotemporal analogous patches specified in our experiment, and the parameters $\alpha$, $\beta$, and $\gamma$ are balance coefficients chosen according to the crowd sequence.
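To make the partition procedure concrete, the following is a minimal Python sketch (not taken from the paper) of the iterative nearest-seed assignment that minimizes (7). The seed grid size, the fixed iteration count, and the way the streak flow arrays streak_u and streak_v are supplied are assumptions made only for illustration.

```python
import numpy as np

def partition_patches(streak_u, streak_v, n_grid=8, n_bins=16,
                      alpha=0.2, beta=0.5, gamma=0.3, n_iter=5):
    """Partition a frame into spatiotemporal analogous patches, Eq. (4)-(7).

    streak_u, streak_v : (H, W) horizontal/vertical streak flow of one frame.
    Returns an (H, W) array of patch labels.
    """
    H, W = streak_u.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)

    mag = np.hypot(streak_u, streak_v)
    delta = mag.mean() + 1e-6                          # average streak-flow magnitude
    ang = np.arctan2(streak_v, streak_u) % (2 * np.pi)
    bins = np.floor(ang / (2 * np.pi) * n_bins)        # quantized direction bin, Eq. (5)
    dir_feat = np.cos(bins * 2 * np.pi / n_bins)

    # Initial seeds on a regular grid (one seed per regular patch).
    sy = np.linspace(0, H - 1, n_grid)
    sx = np.linspace(0, W - 1, n_grid)
    seeds_y, seeds_x = np.meshgrid(sy, sx, indexing="ij")
    seeds_y, seeds_x = seeds_y.ravel(), seeds_x.ravel()

    labels = np.zeros((H, W), dtype=int)
    for _ in range(n_iter):
        cost = np.full((H, W), np.inf)
        for k, (cy, cx) in enumerate(zip(seeds_y, seeds_x)):
            iy, ix = int(round(cy)), int(round(cx))
            dis = (xs - cx) ** 2 + (ys - cy) ** 2                        # Eq. (4)
            ddir = np.abs(dir_feat - dir_feat[iy, ix])                   # Eq. (5)
            dmag = ((streak_u - streak_u[iy, ix]) ** 2 +
                    (streak_v - streak_v[iy, ix]) ** 2) / delta ** 2     # Eq. (6)
            c = alpha * dis + beta * ddir + gamma * dmag                 # Eq. (7)
            better = c < cost
            cost[better], labels[better] = c[better], k
        # Move each seed to the centre of its current patch.
        for k in range(len(seeds_y)):
            mask = labels == k
            if mask.any():
                seeds_y[k], seeds_x[k] = ys[mask].mean(), xs[mask].mean()
    return labels
```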

Figure 3 illustrates an example of the analogous patches obtained by the above method. We overlap the partition patches on the original image. The result shows that our partition method, based on the spatiotemporal similarity of particles, can capture the changes of moving objects. We can see that pixels with similar motion patterns tend to be divided into the same patch, for example, the patches with labels 1, 2, and 3.

3.2. Social Force Graph for Particle Groups. Spatiotemporal analogous patches reflect the global visual similarity of moving pedestrians, but these patches neglect the interaction behavior between pedestrians. Here we describe how to construct a weighted social force graph $G = (V, E)$ to model the group pattern of the crowd, where $V$ is a set of vertices and $E$ is a set of edges. In this paper, the vertex set $V$ represents the set of regular particles in the particle advection scheme. The edge set $E$ measures the individuals' motion similarity from their respective interaction social forces. By the graph $G$, we can cluster similar particles together into particle groups, which form the basis of crowd motion behavior analysis.

We first need to compute the interaction social force of the particles. The interaction social force model describes crowd behavior by considering the interaction of individuals and the constraints of the environment. The model is described as [5]

$F^i_{\mathrm{int}} = \frac{1}{\tau}\left(V^q_i - V_i\right) - m_i \frac{dV_i}{dt},$  (8)

where $F^i_{\mathrm{int}}$ is the interaction social force of particle $i$, $V^q_i$ denotes the personal desired velocity, and $V_i$ indicates the actual velocity of the individual. In this paper, $V_i$ is represented as the streak flow of particle $i$ defined in Section 2, and $V^q_i$ is defined as

$V^q_i = (1 - \eta)\, s_i + \eta\, s_p,$  (9)

where $s_i$ is the streak flow of particle $i$ and $s_p$ is the average streak flow of all particles in the patch $p$ to which particle $i$ belongs.

As shown in [5], a high magnitude of the interaction social force indicates that the particle's motion differs from the collective movement of the crowd. However, the interaction force alone only captures the motion information of a single particle. In real-life crowd analysis, we usually need particle groups with similar motion patterns to characterize crowd behavior. Therefore, in this paper, we utilize a weighted social force graph to extract different particle groups by considering multiple factors, for example, the distance factor and the initial analogous patch factor. For the weighted graph $G = (V, E)$, the weight $w_{ij}$ of the edge from particle $i$ to particle $j$ measures the dissimilarity of the two particles, and the value of $w_{ij}$ is defined as

$w_{ij} = \omega^d_{ij} \cdot \omega^g_{ij} \cdot \left\|F^i_{\mathrm{int}} - F^j_{\mathrm{int}}\right\|_2^2,$  (10)

where $\omega^d_{ij}$ and $\omega^g_{ij}$ are weighting factors based on the distance factor and the analogous patch factor, respectively. The smaller the value of $w_{ij}$ is, the more similar the two particles are. $\omega^d_{ij}$ is used to measure the influence of distance: we assume that the motion similarity of two particles is inversely proportional to the distance between them, which means the influence of particle $i$ on nearby particles is larger than on distant particles.

Based on the above observation, $\omega^d_{ij}$ is defined as

$\omega^d_{ij} = \exp\left(\frac{\sigma}{d_{ij}}\right),$  (11)

where $\sigma$ is an influence factor and $d_{ij}$ is the Euclidean distance between the particles. $\omega^g_{ij}$ is used to judge whether the two particles belong to the same initial analogous patch extracted in Section 3.1: if the two particles are in the same patch, the value of $\omega^g_{ij}$ is zero; otherwise, it is one.
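As an illustrative sketch (not taken from the paper), the interaction force in (8)-(9) and the edge weight in (10)-(11) could be computed as follows; the finite-difference estimate of $dV_i/dt$, the unit mass, and the default parameter values are assumptions.

```python
import numpy as np

def interaction_force(v, v_patch_mean, v_prev, eta=0.5, tau=0.5, m=1.0, dt=1.0):
    """Interaction social force of each particle, Eq. (8)-(9).

    v            : (P, 2) streak-flow velocity of the particles (actual velocity V_i).
    v_patch_mean : (P, 2) mean streak flow of the patch each particle belongs to (s_p).
    v_prev       : (P, 2) velocity at the previous step (assumed, for a finite-difference dV/dt).
    """
    v_desired = (1 - eta) * v + eta * v_patch_mean        # Eq. (9)
    dv_dt = (v - v_prev) / dt                             # simple finite-difference estimate
    return (v_desired - v) / tau - m * dv_dt              # Eq. (8)

def edge_weight(f_i, f_j, pos_i, pos_j, patch_i, patch_j, sigma=1.0):
    """Edge weight w_ij of the social force graph, Eq. (10)-(11)."""
    d_ij = np.linalg.norm(pos_i - pos_j) + 1e-6
    w_d = np.exp(sigma / d_ij)                            # distance factor, Eq. (11)
    w_g = 0.0 if patch_i == patch_j else 1.0              # analogous-patch factor
    return w_d * w_g * np.sum((f_i - f_j) ** 2)           # Eq. (10)
```

Note that a zero weight between same-patch particles makes them maximally similar, which encourages the later segmentation step to keep an initial analogous patch inside one particle group.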

We use the clustering method of [22] to extract particle groups that indicate similar motion patterns over a period of time. The details of the obtained particle groups will be described in Section 4.

4. Application of Social Force Graph with Streak Flow Attribute

Using the weighted social force graph with streak flow attribute, we demonstrate the strength of our method for crowd motion segmentation and crowd abnormal behavior detection.

4.1. Crowd Segmentation. Based on the social force graph, we can obtain particle groups which have similar motion patterns. In the graph $G$, the edge weights indicate the dissimilarity of motion patterns between particles. In [22], a greedy method was presented to segment an image into regions based on the intensity difference across the region boundary. Motivated by this method, we first define a predicate value to represent the graph region boundary and then segment the graph into different components. Each component of the graph has a small internal motion difference, and therefore different components represent dissimilar particle groups. In order to measure the dissimilarity of two components, we use $\mathrm{Dif_{in}}$ to measure the motion difference of all particles within a component; $\mathrm{Dif_{in}}(R)$ is defined as

$\mathrm{Dif_{in}}(R) = \max_{e \in \mathrm{MST}(R, E)} w(e),$  (12)

where $\mathrm{MST}(R, E)$ is the minimum spanning tree of component $R$. We define $\mathrm{Dif_{bew}}$ to indicate the motion difference between two components:

$\mathrm{Dif_{bew}}(R_i, R_j) = \min_{v'_i \in R_i,\, v'_j \in R_j,\, (v'_i, v'_j) \in E} w\left(v'_i, v'_j\right).$  (13)

When the value of $\mathrm{Dif_{bew}}(R_i, R_j)$ is smaller than the predicate value $\mathrm{Th_{merg}}$, we merge the two components and then update the value of $\mathrm{Th_{merg}}$ as

$\mathrm{Th_{merg}}(R_i, R_j) = \min\left(\mathrm{Dif_{in}}(R_i) + \tau(R_i),\ \mathrm{Dif_{in}}(R_j) + \tau(R_j)\right),$  (14)

where $\tau$ is a threshold function specified in the experiment based on the size of the components.

The results of crowd segmentation are presented in Section 5.
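The merging rule in (12)-(14) follows the greedy scheme of [22]. Below is a minimal union-find sketch of that scheme; the threshold function $\tau(R) = k/|R|$ and the constant $k$ are illustrative assumptions, not values given in the paper.

```python
def segment_particle_groups(n_particles, edges, k=300.0):
    """Greedy graph segmentation in the spirit of [22], Eq. (12)-(14).

    edges : list of (w_ij, i, j) tuples from the social force graph.
    Returns a list mapping each particle index to a group id.
    """
    parent = list(range(n_particles))
    size = [1] * n_particles
    internal = [0.0] * n_particles            # Dif_in of each component so far

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path halving
            x = parent[x]
        return x

    tau = lambda root: k / size[root]         # assumed size-based threshold function

    for w, i, j in sorted(edges):             # process edges by increasing dissimilarity
        ri, rj = find(i), find(j)
        if ri == rj:
            continue
        # Merge when the between-component difference (13) is below Th_merg (14).
        if w <= min(internal[ri] + tau(ri), internal[rj] + tau(rj)):
            parent[rj] = ri
            size[ri] += size[rj]
            internal[ri] = w                  # w becomes the largest MST edge of the merged component
    return [find(i) for i in range(n_particles)]
```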

4.2. Crowd Abnormal Behavior Detection. Abnormal behavior detection is one of the most challenging tasks in the field of crowd analysis because crowd abnormal behavior is difficult to define formally. Generally speaking, crowd abnormal behavior is a sudden change or irregularity in crowd motion, such as escape panic in public places or a dramatic increase of crowd density. In this section we describe how to detect crowd abnormal behavior based on our crowd motion model.

Based on the aforementioned crowd segmentation, we have clustered the crowd sequence into particle groups with similar motion patterns. We view each group as a visual word $w$, which is represented by its interaction social force feature and its streak flow feature. We then generate a codebook $D = \{w_1, w_2, \ldots, w_K\}$ by a $K$-means scheme. The process of crowd abnormal behavior detection is regarded as a word-document analysis problem: we adopt the Latent Dirichlet Allocation (LDA) [19] model to infer the distribution of visual words and then judge the emergence of an abnormal event.
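For concreteness, the codebook construction could be sketched as follows with K-means from scikit-learn; the per-group feature layout (e.g., mean interaction force concatenated with mean streak flow) and the helper names are assumptions made for this sketch, not part of the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(group_features, n_words=64, seed=0):
    """Cluster particle-group descriptors into a codebook D = {w_1, ..., w_K}.

    group_features : (G, F) array, one row per particle group, e.g. the mean
                     interaction social force and mean streak flow of the group.
    """
    km = KMeans(n_clusters=n_words, random_state=seed, n_init=10)
    km.fit(group_features)
    return km

def frame_to_histogram(km, frame_group_features, n_words):
    """Represent one frame (a 'document') as a histogram of visual words."""
    words = km.predict(frame_group_features)
    return np.bincount(words, minlength=n_words)
```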

LDA is a well-known topic model for document analysis. It views the generation of a document as the random selection of document topics and topic words. Particularly, in crowd analysis, crowd behavior is represented as the result of a random distribution of latent topics, where the topics are denoted as distributions of visual words.

Figure 4: Comparison of segmentation results using the proposed method (1st row), proposedNC (2nd row), and SRF (3rd row).

In this paper, we view the whole crowd video as a corpus and each crowd image as a document. Any document is modeled as a mixture of topics $z = \{z_1, z_2, \ldots, z_n\}$ with the joint distribution parameter $\theta$.

The LDA model is trained to obtain the values of the parameters $\alpha$ and $\beta$, which denote the corresponding latent topics and the conditional probability of each visual word belonging to a given topic, respectively. During inference, the topic distribution can be inferred by the LDA model. According to the values of $\alpha$ and $\beta$, the topic distribution is denoted as

$p(\theta, z \mid v, \alpha, \beta) = \frac{p(\theta, z, v \mid \alpha, \beta)}{p(v \mid \alpha, \beta)}.$  (15)

Crowd abnormal behavior detection is then achieved by comparing the topic distribution of the current crowd frame to the model learned from the training data.
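A minimal sketch of this detection step is shown below, using scikit-learn's LatentDirichletAllocation as a stand-in for the LDA inference in (15); the per-frame log-likelihood score and the threshold-based decision rule are assumptions, since the paper does not spell out the exact comparison.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

def train_lda(train_histograms, n_topics=16, seed=0):
    """Fit an LDA topic model on visual-word histograms of normal training frames."""
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    lda.fit(train_histograms)                     # (n_frames, n_words) count matrix
    return lda

def detect_abnormal(lda, test_histograms, threshold):
    """Flag frames whose per-word log-likelihood under the trained model is low."""
    scores = np.array([lda.score(h.reshape(1, -1)) / max(h.sum(), 1)
                       for h in test_histograms])
    return scores < threshold                     # True marks an abnormal frame
```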

5. Experiments and Discussion

In this section, we perform different experiments on publicly available crowd video datasets and apply our method to some common crowd motion analysis problems. The goals of our experiments are (1) to segment the crowd video sequence into different motion patterns and (2) to detect abnormal behavior in crowd video.

5.1. Results of Crowd Segmentation. Here we provide the results of our crowd segmentation algorithm. We use the same experiment video as in [18], which is an intersection video from Boston. The Boston video includes 531 frames and has three crowd behavior phases between two traffic-light changes: (1) a traffic flow forms; (2) the traffic lights change and a new traffic flow emerges; and (3) the traffic lights change again and another flow develops. We compare the proposed method with the method in [18] (denoted as SRF) and with the proposed method without the initial spatiotemporal analogous patches (denoted as proposedNC).

Figure 4 presents the segmentation results of the above three methods. We describe the results of our method for each frame. In frame 32, the vehicles in the north are moving to the south and one person is walking on the sidewalk from east to west. We can see that our proposed method correctly segments the vehicle flow (1st row, red) and the person (1st row, green), but SRF is unable to distinguish the different motion patterns of the vehicles and the person (3rd row, green), and proposedNC even neglects the motion of the person (2nd row, green). In frame 146, pedestrians who are walking on the sidewalk are detected as one group (1st row, red); however, SRF views the pedestrians as different groups (3rd row, yellow and purple). In frame 212, our method correctly segments the cars having different motion directions (1st row, green and red), but SRF views them as the same group (3rd row, red). In frame 425, the bottom pedestrian flow actually contains two different motion directions: some people are walking east and some people are going straight. Our method correctly detects the two motion patterns (1st row, purple and aquamarine); SRF, however, views the two motion patterns as one group (3rd row, aquamarine). From the above experimental results we can see clearly that our proposed method not only captures the temporal changes of the crowd sequence (e.g., the car flow from north to south in frame 32) but also depicts the local motion changes (e.g., the cars from different directions in frame 212 and the two motion patterns of pedestrians in frame 425).

Figure 5: Comparison of segmentation results using SRF (blue), the proposed method (red), and proposedNC (green). (a) Correctly segmented motion results. (b) Incorrectly segmented motion results.

Figure 6: Sample frames from the three datasets.

Figure 7: Comparison of abnormal detection results for sample videos from PETS2009. (a) Sample videos from PETS2009. (b) Abnormal detection results of PETS2009 (ground truth, proposed, SFM, StrFM, and OFM; frame 222 shown).


Figure 5 shows a quantitative comparison of the three methods. In our experiment, we manually label the segmented regions frame by frame according to the actual motion in the scene and then count the correctly and incorrectly segmented regions in the crowd sequence. Figure 5 demonstrates that our method is inferior to the other two methods in the number of incorrectly segmented motions, which is due to the different labeling criteria used in [18] and in our method. In [18], a segmented region is viewed as a correct segmentation when the difference of direction between an object and the majority of the objects in the region is less than 90 degrees. In our experiment, however, we count the segmentation result according to the actual direction of the motion groups, which sharply increases the number of incorrect segmentations.

5.2. Results of Crowd Abnormal Behavior Detection. In this part, our method is tested on three public datasets: PETS2009 (http://www.cvg.reading.ac.uk/PETS2009/a.html), the University of Minnesota (UMN) dataset (http://mha.cs.umn.edu/proj_events.shtml#crowd), and a video collection from websites [14]. PETS2009 contains different crowd motion patterns such as walking, running, gathering, and splitting. We select 1066 frames from the "S3" sequence as our experimental data, and in this dataset the crowd movement from walking to running is defined as abnormal behavior. UMN includes eleven different kinds of escape events, which are shot in three indoor and outdoor scenes. We select 1377 frames from the UMN dataset in our experiment, and the abnormal behavior in this dataset is crowd escape. The web dataset includes crowd videos captured at a road intersection and contains 3071 frames. This crowd video contains two types of motion patterns, waiting and crossing, and pedestrian crossing is considered as abnormal behavior. We also manually label all videos as ground truth. From the three datasets, we randomly select 40% of the frames for training and the remaining frames for testing. Figure 6 illustrates sample frames of the three datasets: the first row shows sampled frames from PETS2009, the second row from UMN, and the last row from the web dataset.

In the streak flow phase, we downsampled the original pixels to speed up the experiments.

Figure 8: Comparison of abnormal detection results for sample videos from UMN. (a) Sample videos from UMN. (b) Abnormal detection results of UMN (ground truth, proposed, SFM, StrFM, and OFM; frame 828 shown).

The sample rates for the three datasets are 30%, 50%, and 20%, respectively. In order to exploit the different characteristics of the videos, the parameters in formula (7) are set to $\alpha = 0.2$, $\beta = 0.5$, and $\gamma = 0.3$. After constructing the social force graph, we segment the video sequence into different motion groups, and then visual words are generated by clustering all motion groups. The number of visual words is 32, 64, and 100, respectively. In the LDA phase, we learn $N_z = 16$ latent topics for abnormal behavior detection. We compare our method with the social force based method [5] (denoted as SFM), the streak flow based method [18] (denoted as StrFM), and the optical flow based method (denoted as OFM). For the comparison, we obtain visual words based on the corresponding visual features and then train the words with the LDA model.

Figures 7–9 show some sample detection results on the three datasets. In each figure, the first picture is the first frame and the second picture is an abnormal frame of the experimental scene. We then use a color bar to represent the detection result of each frame in the crowd sequence, where the color denotes the status of the frame: green is a normal frame and red is an abnormal frame. The first bar indicates the ground truth, and the remaining bars show the results of our proposed method, SFM, StrFM, and OFM, respectively. All in all, these results show that our proposed method is able to detect abnormal crowd dynamics, and in most cases the results of our method outperform the other algorithms.

Figure 7 shows partial results on the running video sequence of PETS2009, Figure 8 illustrates some of the results on the escape crowd video of UMN, and Figure 9 exhibits part of the results on the web video. We can see that the results of the proposed method conform better to the ground truth because we not only consider the temporal characteristics of the moving objects but also exploit the spatial social force relationships between objects. It is observed that in Figure 7 SFM obtains an even better result than our proposed method because the social force increases sharply in the running scene. However, SFM produces more false detections in some crowd scenes, for example, crowd walking (Figure 8) and road crossing (Figure 9), because most of the social force in these scenes is nearly zero and thus SFM has difficulty depicting this type of crowd motion. OFM is clearly inferior to our method because optical flow fails to capture the motion over a period of time; in particular, in Figure 9 there are more false detections for the road-crossing behavior because most of the walking behavior is irregular and optical flow fails to capture irregular motion patterns. The results of StrFM are better than those of OFM in the three figures; however, this method still produces more false detections compared to our method and SFM. StrFM attains a comparable result when the motion dynamic is relatively simple (Figure 7), but it undergoes a performance drop when a substantial new flow constantly enters the scene (Figure 9).

Figure 9: Comparison of abnormal detection results for sample videos from the WEB dataset. (a) Sample videos from WEB. (b) Abnormal detection results of WEB (ground truth, proposed, SFM, StrFM, and OFM; frame 1851 shown).

Table 1: Accuracy (%) comparison of the proposed method with SFM, StrFM, and OFM in crowd abnormal detection for the three datasets.

                   Proposed method   SFM     StrFM   OFM
PETS2009           83.33             82.19   73.08   68.52
UMN                85.10             80.25   79.52   65.31
WEB                96.21             91.23   94.02   95.89
Overall accuracy   88.21             84.79   82.21   76.57


In order to evaluate our method quantitatively, the accuracy of abnormal detection is shown in Table 1. The results show that our proposed model improves accuracy by nearly 5% compared with the social force model. In particular, in higher-density crowd scenes the social force of particle groups changes only slowly, so the social force model is not very discriminative for normal behaviors; for example, in Table 1 the SFM accuracy is lower than that of our proposed method and, on the WEB dataset, even lower than that of the optical flow method.

Figure 10 shows the ROC curves for abnormal behavior detection using our proposed method and the alternative methods. It can be observed that our method provides better results.

6. Conclusion

In this paper we proposed a novel framework for analyzing motion patterns in crowd video. For the spatiotemporal feature, we extracted the streak flow to represent the global motion, while for the interaction of crowd objects we constructed a weighted social force graph to extract particle groups representing local group motion. Finally, we utilized the LDA model to detect crowd abnormal behavior based on the proposed crowd model. The experimental results have shown that our proposed method successfully segments crowd motion and detects crowd abnormal behavior, and it outperformed current state-of-the-art methods for crowd motion analysis. As future work, we plan to further study crowd motion behavior in high-density scenes and to increase the robustness of our algorithm.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Figure 10: The ROC curves (TPR versus FPR) for abnormality detection on the three datasets: (a) ROC of PETS2009; (b) ROC of UMN; (c) ROC of WEB. Each plot compares the proposed method, the social force method, the streak flow method, and the optical flow method.

Acknowledgment

This work was supported by the Research Foundation of the Education Bureau of Hunan Province, China (Grant no. 13C474).

References

[1] D. Helbing and P. Molnar, "Social force model for pedestrian dynamics," Physical Review E, vol. 51, no. 5, pp. 4282–4286, 1995.

[2] W. Ge, R. T. Collins, and R. B. Ruback, "Vision-based analysis of small groups in pedestrian crowds," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 5, pp. 1003–1016, 2012.

[3] M. Rodriguez, S. Ali, and T. Kanade, "Tracking in unstructured crowded scenes," in Proceedings of the 12th International Conference on Computer Vision (ICCV '09), pp. 1389–1396, IEEE, Kyoto, Japan, October 2009.

[4] L. Kratz and K. Nishino, "Tracking pedestrians using local spatio-temporal motion patterns in extremely crowded scenes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 5, pp. 987–1002, 2012.

[5] R. Mehran, A. O. Oyama, and M. Shah, "Abnormal crowd behavior detection using social force model," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR '09), pp. 935–942, Miami, Fla, USA, June 2009.

[6] R. Raghavendra, A. Del Bue, M. Cristani, and V. Murino, "Optimizing interaction force for global anomaly detection in crowded scenes," in Proceedings of the IEEE 12th International Conference on Computer Vision (ICCV '11), pp. 136–143, Barcelona, Spain, November 2011.

[7] R. Li and R. Chellappa, "Group motion segmentation using a spatio-temporal driving force model," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 2038–2045, IEEE, San Francisco, Calif, USA, June 2010.

[8] B. Zhou, X. Tang, H. Zhang, and X. Wang, "Measuring crowd collectiveness," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 8, pp. 1586–1599, 2014.

[9] G. T. Bae, S. Y. Kwak, and H. R. Byun, "Motion pattern analysis using partial trajectories for abnormal movement detection in crowded scenes," Electronics Letters, vol. 49, no. 3, pp. 186–187, 2013.

[10] C. Tomasi and T. Kanade, "Detection and tracking of point features," Tech. Rep. CMU-CS-91-132, Carnegie Mellon University, Pittsburgh, Pa, USA, 1991.

[11] E. L. Andrade, S. Blunsden, and R. B. Fisher, "Modelling crowd scenes for event detection," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), vol. 1, pp. 175–178, IEEE, Hong Kong, August 2006.

[12] T. Li, H. Chang, M. Wang, B. Ni, R. Hong, and S. Yan, "Crowded scene analysis: a survey," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 3, pp. 367–386, 2015.

[13] N. Ihaddadene and C. Djeraba, "Real-time crowd motion analysis," in Proceedings of the 19th International Conference on Pattern Recognition (ICPR '08), pp. 1–4, December 2008.

[14] S. Ali and M. Shah, "A Lagrangian particle dynamics approach for crowd flow segmentation and stability analysis," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1–7, Minneapolis, MN, USA, June 2007.

[15] X. Pan, C. S. Han, K. Dauber, and K. H. Law, "Human and social behavior in computational modeling and analysis of egress," Automation in Construction, vol. 15, no. 4, pp. 448–461, 2006.

[16] A. Borrmann, A. Kneidl, G. Koster, S. Ruzika, and M. Thiemann, "Bidirectional coupling of macroscopic and microscopic pedestrian evacuation models," Safety Science, vol. 50, no. 8, pp. 1695–1703, 2012.

[17] A. Bera and D. Manocha, "REACH—realtime crowd tracking using a hybrid motion model," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '15), pp. 740–747, IEEE, Seattle, Wash, USA, May 2015.

[18] R. Mehran, B. E. Moore, and M. Shah, "A streakline representation of flow in crowded scenes," in Computer Vision—ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5–11, 2010, Proceedings, Part III, vol. 6313 of Lecture Notes in Computer Science, pp. 439–452, Springer, Berlin, Germany, 2010.

[19] D. M. Blei, A. Y. Ng, and M. I. Jordan, "Latent Dirichlet allocation," The Journal of Machine Learning Research, vol. 3, pp. 993–1022, 2003.

[20] M. Mancas, N. R. Riche, J. Leroy, and B. Gosselin, "Abnormal motion selection in crowds using bottom-up saliency," in Proceedings of the 18th IEEE International Conference on Image Processing (ICIP '11), pp. 229–232, IEEE, Brussels, Belgium, September 2011.

[21] X. Zhu, J. Liu, J. Wang, W. Fu, and H. Lu, "Weighted interaction force estimation for abnormality detection in crowd scenes," in Proceedings of the 11th Asian Conference on Computer Vision (ACCV '12), pp. 507–518, Daejeon, Republic of Korea, November 2012.

[22] P. F. Felzenszwalb and D. P. Huttenlocher, "Efficient graph-based image segmentation," International Journal of Computer Vision, vol. 59, no. 2, pp. 167–181, 2004.

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Journal of Electrical and Computer Engineering 5

actual velocity of individual In this paper 119881119894 is representedas the streak flow of particle i defined in Section 2 And119881119902

119894is

defined as

119881119902

119894= (1 minus 120578) 119904

119894+ 120578119904119901 (9)

where 119904119894 is the streak flowof the particle i and 119904119901 is the averagestreak flow of all particles in patch 119901 where the particle ibelongs

As shown in [5] any high magnitude of interaction socialforce implicates that particle motion is different from thecollective movement of the crowd However here interactionforce is just only the motion information of one particleIn real-life crowd analysis we usually need to use particlegroups with similar motion patterns to characterize crowdbehaviorTherefore in this paper we utilize a weighted socialforce graph to extract different particle groups by consideringmultiple factors for example distance factor and the initialanalogous patches factor For the weighted graph 119866 = (119881 119864)the weight 119908119894119895 of edge from particle i to particle j is used tomeasure the dissimilarity of different particles And the valueof 119908119894119895 is defined as

119908119894119895 = 120596119889

119894119895lowast 120596119892

119894119895lowast10038171003817100381710038171003817119865119894

int minus 119865119895

int10038171003817100381710038171003817

2

2 (10)

where 120596119889119894119895and 120596119892

119894119895are the weighting factors based on distance

factor and analogous patch factor respectively The smallerthe value of 119908119894119895 is the more similar the two particles are120596119889

119894119895is used to measure the influence of distance and we

think the motion similarity of different particles is inverselyproportional to the distance between particles which meansthe influence of particle i on neighbor particles is larger thanon far particles

Based on the above observation 120596119889119894119895is defined as

120596119889

119894119895= exp( 120590

119889119894119895

) (11)

where 120590 is the influential factor and 119889119894119895 is the Euler distancebetween particles 120596119892

119894119895is used to judge whether the two

particles belong to the same initial analogous patch extractedin Section 31 If the two particles are in the same patch thenthe value of 120596119892

119894119895will be zero otherwise the value of 120596119892

119894119895is one

We use the clustering method [22] to extract particlegroups to indicate similarmotion patterns in a period of timeThe details of obtained particle groups will be described inSection 4

4 Application of Social Force Graph withStreak Flow Attribute

Using the weighted social force graph with steak flowattribute we demonstrate the strength of our method forcrowd motion segmentation and crowd abnormal behaviordetection

41 Crowd Segmentation Based on the social force graphwe can obtain particle groups which have the similar motion

patterns In the graph 119866 the values of edge weights indicatethe dissimilarity of motion pattern between particles Asshown in [22] a greedy method to segment image intoregions was presented based on the intensity differenceacross the region boundary Motivated by this method inthis paper we firstly define a predicate value to representgraph region boundary and then segment the graph intodifferent components Each component of the graph has asmaller motion difference therefore different componentsrepresent the dissimilar particle groups In order to measurethe dissimilarity of two components we utilize Dif in tomeasure themotion difference of all particles in a componentand Dif in(119877) is defined as

Dif in (119877) = Max119890isinMST(119877119864)

(119908 (119890)) (12)

where MST(119877 119864) is the minimum generation tree of compo-nent 119877 We define Difbew to indicate the motion differencebetween different components

Difbew (119877119894 119877119895) = MinV1015840119894isin119877119894V1015840119895isin119877119895(V1015840119894V1015840119895)isin119864

(119908V1015840119894V1015840119895

) (13)

When the value of Difbew(119877119894 119877119895) is smaller than the predicatevalue Thmerg we will merge the two components and thenupdate the value of Thmerg as

Thmerg (119877119894 119877119895)

= min (Dif in (119877119894) + 120591 (119877119894) Dif in (119877119895) + 120591 (119877119895)) (14)

where 120591 is a threshold function specified in the experimentbased on the size of the components

The results of crowd segmentation are presented inSection 5

42 Crowd Abnormal Behavior Detection Abnormal behav-ior detection is one of the most challenging tasks in the fieldof crowd analysis because the crowd abnormal behavior isdifficult to be defined formally Generally speaking crowdabnormal behavior is defined as a sudden change or irregular-ity in crowdmotion such as the phenomenon of escape panicin public places or the dramatic increase of crowd density Inthis section we will describe how to detect crowd abnormalbehavior based on our crowd motion model

Based on aforementioned crowd segmentation we haveclustered crowd sequence into particle groups with similarmotion pattern We view each group as a visual word 119908which is represented by interaction social force feature andthe streak flow feature And then we generate a codebook119863 = 1199081 1199082 119908119870 by 119870-means scheme The process ofcrowd abnormal behavior detection is regarded as the prob-lem of word-document analysis We adopt Latent DirichletAllocation [19]model to infer the distribution of visual wordsand then judge the emergence of abnormal event

LDA is a well-known topic model for document analysisIt views the generation of document as the random selectionof document topics and topic words Particularly in thecrowd analysis crowd behavior is represented as the result

6 Journal of Electrical and Computer Engineering

(a)

(b)

(c)

Figure 4 Comparison of segmentation results by using proposed method (1st row) and proposedNC (2nd row) and SRF (3rd row)

of random distribution of latent topics where the topics aredenoted as the distribution of visual words In this paper weview the whole crowd video as a corpus each crowd imageas a document Any document is modeled as the mixtureof topics 119911 = 1199111 1199112 119911119899 with the joint distributionparameter 120579

LDA model is trained to obtain the value of parameters120572 and 120573 which denotes the corresponding latent topics andthe conditional probability of each visual word belonging toa given topic respectively During the inferential processthe distribution of topic can be inferred by LDA modelAccording to the value of 120572 and 120573 the distribution of topicis denoted as

119901 (120579 119911 | V 120572 120573) =119901 (120579 119911 V | 120572 120573)119901 (V | 120572 120573)

(15)

Crowd abnormal behavior detection is then achieved bycomparing the topic distribution of current crowd frame tothe model of training data

5 Experiments and Discussion

In this section we will perform different experiments onsome publicly available datasets of crowd videos We applyour method to some common crowd motion analysis prob-lems The goals of our experiments are (1) to segment thecrowd video sequence into different motion pattern and (2)to detect abnormal behavior of crowd video

51 Results of Crowd Segmentation Here we provide theresults of our crowd segmentation algorithm We use thesame experiment video as that in [18] which is an intersection

video at Boston Boston video includes 531 frames and thevideo has three crowd behavior phases between twice trafficlight (1) traffic is formed (2) traffic lights change and a newtraffic flow emerges and (3) traffic lights change again andanother flow develops We compare the proposed methodwith the method in [18] (denoted as SRF) and the proposedmethod without initially spatiotemporal analogous patches(denoted as proposedNC)

Figure 4 presents the segmentation results of the abovethreemethodsWe describe the results of ourmethod in eachframe In frame 32 the north vehicles are moving to southand one person is walking on the sidewalk from east to westWe can see that our proposed method correctly segments thevehicle flow (1st row red) and the person (1st row green)But SRF is unable to distinguish the different motion patternsof vehicle and people (3rd row green) and the proposedNCeven neglected the motion of the people (2nd row green)In frame 146 pedestrians who are walking on the sidewalkare detected as a group (1st row red) however SRF views thepedestrian as different groups (3rd row yellow and purple)In frame 212 our method correctly segments the cars havingdifferent motion directions (1st row green and red) but SRFviews them as the same group (3rd row red) In frame 425the bottom pedestrian flow actually contains two differentmotion directions some people are walking east and somepeople are going straight Our method correctly detects thetwo motion patterns (1st row purple and aquamarine) SRFhowever views the two motion patterns as a group (3rd rowaquamarine) From the above experimental results we cansee clearly that our proposed method not only is able tocapture the temporal changes of crowd sequence (eg the carflow from north to south in frame 32) but also depicts thelocal motion changes (eg the cars from different direction

Journal of Electrical and Computer Engineering 7

SRFProposedNCProposed

100 150 200 250 300 350 400 450 50050Frames

100200300400500600700800900

1000C

orre

ctly

segm

ente

d m

otio

n

(a) Correctly segmented motion results

50

100

150

200

250

Inco

rrec

tly se

gmen

ted

mot

ion

SRFProposedNCProposed

100 150 200 250 300 350 400 450 50050Frames

(b) Incorrectly segmented motion results

Figure 5 Comparison of segmentation results by using SRF (blue) proposed method (red) and proposedNC (green)

Figure 6 Sample frames from three datasets

8 Journal of Electrical and Computer Engineering

(a) Sample videos from PETS2009

Ground truth

Proposed

SFM

StrFM

OFM

Walk WalkRun Run

Frame 222(b) Abnormal detection results of PETS2009

Figure 7 Comparison results of abnormal detection for sample videos from PETS2009

in frame 212 and the two motion patterns of pedestrians inframe 425)

Figure 5 shows the quantitative comparison of the threemethods In our experiment we manually label the seg-mented regions frame by frame according to the actualmotion scene and then count the correctly segmented regionsand incorrectly segmented regions in crowd sequenceFigure 5 demonstrates that ourmethod is inferior to the othertwomethods in number of incorrect segmentmotions whichare due to the different label method in [18] and our methodIn [18] the segment region is viewed as correct segmentationwhen difference of direction between the object and themajority of the objects in the region is less than 90 degreesHowever in our experiment we count the segment resultaccording to the actual direction of motion groups whichresults in the numbers of incorrect segmentation sharplyincreased

52 Results of Crowd Abnormal Behavior Detection In thispart our method is tested on three public datasets fromPETS2009 (httpwwwcvgreadingacukPETS2009ahtml)University of Minnesota (UMN) (httpmhacsumnedu

proj eventsshtmlcrowd) and some video collection fromwebsites [14] PETS2009 contains different crowd motionpatterns such as walking running gathering and splittingWe select 1066 frames from ldquoS3rdquo sequence as our experimentobject and in this dataset crowd movement from walkingto running is defined as abnormal behavior UMN includeseleven different kinds of escape events which are shot inthree indoor and outdoor scenes We select 1377 frames fromUMN dataset in our experiment and the abnormal behaviorin this dataset is the crowd escape The web dataset includescrowd videos captured at road intersection which contains3071 frames This crowd video contains two types of motionpatterns waiting and crossing and pedestrian crossing isconsidered as abnormal behavior We also manually labelall videos as ground truth From the three datasets werandomly select 40 frames for training and the otherframes for testing Figure 6 illustrates the sample frames ofthe three datasets The first row is the sampled frames fromPETS2009 the second row fromUMN and the last row fromweb dataset

In the streak flow phrase we downsampled the originalpixels to fasten the process of experiment The sample rate

Journal of Electrical and Computer Engineering 9

(a) Sample videos from UMN

Ground truth

Proposed

SFM

StrFM

OFM

Frame 828

EscapeWait

(b) Abnormal detection results of UMN

Figure 8 Comparison results of abnormal detection for sample videos from UMN

of the three datasets is 30 50 and 20 respectivelyIn order to utilize the different characteristic of the videosthe parameters in formula (7) are set to 120572 = 02 120573 =05 and 120574 = 03 After constructing social force graph wesegment the video sequence into different motion groupsand then visual words are generated based on clustering allmotion groups The length of visual words is 32 64 and 100respectively In the LDA phrase we learned 119873119911 = 16 latenttopics for abnormal behavior detection We compare ourmethod with the social force based method [5] (denoted asSFM) the streak flow based method [18] (denoted as StrFM)and the optical flow based method (denoted as OFM) Inthe process of comparison we obtain visual words based oncorresponding visual features and then train the words byLDA model

Figures 7ndash9 show some sample detection results of thethree datasets In each figure the first picture is first frameand the second picture is abnormal frame of the experimentalscene We then use color bar to represent the detected resultsof each frame in the crowd sequence Different color of thebar denotes the status of current frame green is normalframe and red is the abnormal frame The first bar indicatesthe ground truth bar and the rest of the bars show theresults from our proposed method SFM StrFM and OFMrespectively All in all these results show that our proposed

method is able to detect the crowd abnormal dynamic andinmost cases the results of ourmethod outperform the otheralgorithms

Figure 7 shows the partial results on running videosequence of PETS2009 Figure 8 illustrates some of the resultson escape crowd video of UMN Figure 9 exhibits part ofthe results on web video We can see that the results ofthe proposed method better conform to the ground truthbecause we not only consider the temporal characteristic ofthe motion objects but also exploit the spatial social forcerelationship between objects It is observed that in Figure 7SFM obtains an even better result compared to our proposedmethod because the social force is sharply increased inrunning scene However SFM results in more false detectionin some crowd scene for example crowd walking (Figure 8)and road crossing (Figure 9) because most of the social forcein these scenes is nearly zero and thus SFM faces difficultyin depicting this type of crowd motion OFM is obviouslyinferior to ourmethodbecause the optical flow fails to capturethe motion in a period of time In particular in Figure 9there is more false detection for road crossing behaviorbecause most of the walking behavior is irregular but opticalflow fails to capture irregular motion pattern The results ofStrFM are better than OFM in the three figures howeverthis method still has more false detection compared to our

10 Journal of Electrical and Computer Engineering

(a) Sample videos fromWEB

Ground truth

Proposed

SFM

StrFM

OFM

Frame 1851

Crossing WaitWait

(b) Abnormal detection results of WEB

Figure 9 Comparison results of abnormal detection for sample videos fromWEB

Table 1 Accuracy comparison of the proposed method with SFMStrFM andOFM in crowd abnormal detection for the three datasets

Proposed method SFM StrFM OFMPETS2009 8333 8219 7308 6852UMN12 8510 8025 7952 6531WEB 9621 9123 9402 9589Overall accuracy 8821 8479 8221 7657

method and SFM This method attains a comparable resultwhen the motion dynamic is relatively simple (Figure 7) butit undergoes a performance descent when a substantial newflow constantly enters into the scene (Figure 9)

In order to evaluate our method quantitatively the accu-racy results of abnormal detection are shown in Table 1 Theresults show that our proposedmodel has improved nearly by5 accuracy compared with social force model In particularin the higher density crowd scene the social force of particlegroups will have a slowly changing ratio thus the social forcemodel is not very obvious for normal behaviors For examplethe SFM accuracy is lower than our proposed method andeven lower than optical flow method in Table 1

Figure 10 shows the ROC curves for abnormal behav-ior detection based on our proposed method against the

alternative methods It is observed that our method providesbetter results

6 Conclusion

In this paper we proposed a novel framework for analyzingmotion patterns in crowd video On the aspect of spatiotem-poral feature we extracted the streak flow to represent theglobal motion while for the interaction of crowd objectswe constructed a weighted social force graph to extractparticle groups to represent local group motion Finally weutilized LDAmodel to detect crowd abnormal behavior basedon the proposed crowd model The experimental resultshave shown that our proposed method successfully segmentscrowdmotion and detects crowd abnormal behavior And theproposed method outperformed the current state-of-the-artmethods for crowd motion analysis As part of our futurework we plan to further study the crowd motion behaviorin high density scenes and increase the robustness of ouralgorithm

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Journal of Electrical and Computer Engineering 11

ROC curves

0 02 03 04 05 06 07 08 09 101

FPR

0

01

02

03

04

05

06

07

08

09

1

TPR

Proposed methodSocial force method

Steak flow methodOptical flow method

(a) ROC of PETS2009

ROC curves

0 02 03 04 05 06 07 08 09 101

FPR

0

01

02

03

04

05

06

07

08

09

1

TPR

Proposed methodSocial force method

Steak flow methodOptical flow method

(b) ROC of UMNROC curves

0 02 03 04 05 06 07 08 09 101

FPR

0

01

02

03

04

05

06

07

08

09

1

TPR

Proposed methodSocial force method

Steak flow methodOptical flow method

(c) ROC of WEB

Figure 10 The ROCs for abnormality detection on three datasets

Acknowledgment

This work was supported by the Research Foundation ofEducation Bureau of Hunan Province China (Grant no13C474)

References

[1] D Helbing and P Molnar ldquoSocial force model for pedestriandynamicsrdquo Physical Review E vol 51 no 5 pp 4282ndash4286 1995

[2] W Ge R T Collins and R B Ruback ldquoVision-based analysisof small groups in pedestrian crowdsrdquo IEEE Transactions onPattern Analysis and Machine Intelligence vol 34 no 5 pp1003ndash1016 2012

[3] M Rodriguez S Ali and T Kanade ldquoTracking in unstructuredcrowded scenesrdquo in Proceedings of the 12th International Con-ference on Computer Vision (ICCV rsquo09) pp 1389ndash1396 IEEEKyoto Japan October 2009




Figure 4: Comparison of segmentation results obtained by the proposed method (1st row), proposedNC (2nd row), and SRF (3rd row).

of a random distribution over latent topics, where each topic is denoted as a distribution over visual words. In this paper, we view the whole crowd video as a corpus and each crowd image as a document. Any document is modeled as a mixture of topics z = {z_1, z_2, ..., z_n} with the joint distribution parameter θ.

The LDA model is trained to obtain the values of the parameters α and β, which denote the corresponding latent topics and the conditional probability of each visual word belonging to a given topic, respectively. During inference, the topic distribution is estimated by the LDA model. According to the values of α and β, the topic distribution is given by

p(θ, z | V, α, β) = p(θ, z, V | α, β) / p(V | α, β).    (15)

Crowd abnormal behavior detection is then achieved by comparing the topic distribution of the current crowd frame with the model learned from the training data.
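To make this step concrete, the following is a minimal sketch (not the authors' implementation) of LDA training and per-frame abnormality scoring over bag-of-visual-words histograms; the use of scikit-learn, the codebook size, the likelihood margin, and all function names are illustrative assumptions.

```python
# Minimal sketch of LDA-based abnormality scoring on bag-of-visual-words histograms.
# NOTE: not the authors' code; library choice, codebook size, and threshold are assumptions.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

CODEBOOK_SIZE = 64   # assumed codebook length (the paper uses 32/64/100 per dataset)
N_TOPICS = 16        # number of latent topics, as used in the experiments

def train_lda(train_histograms):
    """Fit an LDA topic model on histograms of normal training frames."""
    lda = LatentDirichletAllocation(n_components=N_TOPICS, random_state=0)
    lda.fit(train_histograms)
    # Average per-frame log-likelihood on normal data, used as a reference level.
    ref_loglik = lda.score(train_histograms) / len(train_histograms)
    return lda, ref_loglik

def is_abnormal(lda, ref_loglik, frame_histogram, margin=50.0):
    """Flag a frame whose likelihood under the 'normal' topic model is unusually low."""
    loglik = lda.score(frame_histogram.reshape(1, -1))
    return loglik < ref_loglik - margin

# Toy usage with random stand-in counts (real input: visual-word counts per frame).
X_train = np.random.randint(0, 10, size=(200, CODEBOOK_SIZE))
model, ref = train_lda(X_train)
print(is_abnormal(model, ref, np.random.randint(0, 10, size=CODEBOOK_SIZE)))
```

In practice the margin would be tuned on validation data; a frame whose log-likelihood falls well below the range of the normal training frames is flagged as abnormal.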

5. Experiments and Discussion

In this section, we perform experiments on several publicly available crowd video datasets, applying our method to common crowd motion analysis problems. The goals of our experiments are (1) to segment the crowd video sequence into different motion patterns and (2) to detect abnormal behavior in crowd video.

5.1. Results of Crowd Segmentation. Here we present the results of our crowd segmentation algorithm. We use the same experimental video as in [18], an intersection video recorded in Boston. The Boston video includes 531 frames, and the sequence exhibits three crowd behavior phases between two traffic-light changes: (1) a traffic flow is formed; (2) the traffic lights change and a new traffic flow emerges; and (3) the lights change again and another flow develops. We compare the proposed method with the method in [18] (denoted as SRF) and with the proposed method without the initial spatiotemporal analogous patches (denoted as proposedNC).

Figure 4 presents the segmentation results of the above three methods; we describe the result of our method for each frame. In frame 32, vehicles from the north are moving south and one person is walking on the sidewalk from east to west. Our proposed method correctly segments the vehicle flow (1st row, red) and the person (1st row, green), whereas SRF is unable to distinguish the different motion patterns of the vehicles and the person (3rd row, green), and proposedNC even misses the motion of the person (2nd row, green). In frame 146, pedestrians walking on the sidewalk are detected as one group (1st row, red); SRF, however, splits the pedestrians into different groups (3rd row, yellow and purple). In frame 212, our method correctly segments the cars moving in different directions (1st row, green and red), but SRF views them as the same group (3rd row, red). In frame 425, the bottom pedestrian flow actually contains two different motion directions: some people are walking east and some are going straight. Our method correctly detects the two motion patterns (1st row, purple and aquamarine), whereas SRF views them as a single group (3rd row, aquamarine). From these results, we can clearly see that our proposed method not only captures the temporal changes of the crowd sequence (e.g., the car flow from north to south in frame 32) but also depicts the local motion changes.


Figure 5: Comparison of segmentation results using SRF (blue), the proposed method (red), and proposedNC (green): (a) correctly segmented motion results; (b) incorrectly segmented motion results.

Figure 6: Sample frames from the three datasets.


Figure 7: Comparison results of abnormal detection for sample videos from PETS2009: (a) sample videos from PETS2009; (b) abnormal detection results (ground truth, proposed method, SFM, StrFM, and OFM; walk/run annotations around frame 222).

Examples of the latter are the cars moving in different directions in frame 212 and the two motion patterns of pedestrians in frame 425.

Figure 5 shows the quantitative comparison of the three methods. In our experiment, we manually label the segmented regions frame by frame according to the actual motion in the scene and then count the correctly and incorrectly segmented regions over the crowd sequence. Figure 5 shows that our method is inferior to the other two methods in the number of incorrectly segmented motions, which is due to the different labeling criteria used in [18] and in our method. In [18], a segmented region is counted as correct when the difference in direction between an object and the majority of the objects in the region is less than 90 degrees. In our experiment, however, we count a segmentation as correct according to the actual direction of the motion groups, which sharply increases the number of incorrect segmentations.
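The difference between the two labeling criteria can be illustrated with the short sketch below; the strict tolerance value and the function names are assumptions made only for illustration.

```python
# Illustration of the two labeling criteria; the strict tolerance below is an assumed value.
def angle_diff_deg(a, b):
    """Smallest absolute difference between two motion directions, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def correct_under_90deg_rule(object_dirs, majority_dir):
    """Criterion of [18]: correct if every object deviates less than 90 degrees
    from the majority direction of its region."""
    return all(angle_diff_deg(d, majority_dir) < 90.0 for d in object_dirs)

def correct_under_strict_rule(object_dirs, group_dir, tol=15.0):
    """Stricter counting used here: every object must actually move in the labeled
    group direction (tolerance 'tol' is illustrative, not a value from the paper)."""
    return all(angle_diff_deg(d, group_dir) < tol for d in object_dirs)

dirs = [5.0, 40.0, 80.0]                     # directions of objects in one segmented region
print(correct_under_90deg_rule(dirs, 30.0))  # True: all within 90 degrees of the majority
print(correct_under_strict_rule(dirs, 30.0)) # False under the stricter counting
```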

5.2. Results of Crowd Abnormal Behavior Detection. In this part, our method is tested on three public datasets: PETS2009 (http://www.cvg.reading.ac.uk/PETS2009/a.html), the University of Minnesota (UMN) dataset (http://mha.cs.umn.edu/proj_events.shtml#crowd), and a video collection from websites [14]. PETS2009 contains different crowd motion patterns such as walking, running, gathering, and splitting. We select 1066 frames from the "S3" sequence as our experimental data, and in this dataset crowd movement from walking to running is defined as abnormal behavior. UMN includes eleven different kinds of escape events shot in three indoor and outdoor scenes. We select 1377 frames from the UMN dataset, and the abnormal behavior in this dataset is crowd escape. The web dataset includes crowd videos captured at a road intersection and contains 3071 frames; it exhibits two types of motion patterns, waiting and crossing, and pedestrian crossing is considered abnormal behavior. We manually label all videos as ground truth. From the three datasets, we randomly select 40% of the frames for training and use the remaining frames for testing. Figure 6 illustrates sample frames from the three datasets: the first row shows sampled frames from PETS2009, the second row from UMN, and the last row from the web dataset.

In the streak flow phase, we downsampled the original pixels to speed up the experiments.


Figure 8: Comparison results of abnormal detection for sample videos from UMN: (a) sample videos from UMN; (b) abnormal detection results (ground truth, proposed method, SFM, StrFM, and OFM; wait/escape annotations around frame 828).

The sample rate for the three datasets is 30%, 50%, and 20%, respectively. To account for the different characteristics of the videos, the parameters in formula (7) are set to α = 0.2, β = 0.5, and γ = 0.3. After constructing the social force graph, we segment the video sequence into different motion groups, and visual words are then generated by clustering all motion groups; the codebook length is 32, 64, and 100 for the three datasets, respectively. In the LDA phase, we learn N_z = 16 latent topics for abnormal behavior detection. We compare our method with the social force based method [5] (denoted as SFM), the streak flow based method [18] (denoted as StrFM), and the optical flow based method (denoted as OFM). In each comparison, visual words are obtained from the corresponding visual features and then trained with the LDA model.
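As a rough illustration of the codebook step described above (not the authors' code), the motion-group descriptors can be clustered with k-means and each frame represented as a histogram over the resulting visual words; the descriptor dimensionality and the library choice here are assumptions.

```python
# Rough sketch of the codebook step: cluster motion-group descriptors into visual words
# and histogram them per frame. Descriptor size and library choice are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(group_descriptors, codebook_size=64):
    """Cluster all particle-group descriptors into 'codebook_size' visual words."""
    km = KMeans(n_clusters=codebook_size, n_init=10, random_state=0)
    km.fit(group_descriptors)
    return km

def frame_histogram(codebook, frame_group_descriptors):
    """Represent one frame as a bag-of-visual-words histogram over its motion groups."""
    words = codebook.predict(frame_group_descriptors)
    return np.bincount(words, minlength=codebook.n_clusters)

# Toy usage with stand-in descriptors (real ones would describe social-force-graph groups).
all_groups = np.random.rand(500, 8)                      # 500 groups, 8-dim descriptor (assumed)
codebook = build_codebook(all_groups, codebook_size=64)
print(frame_histogram(codebook, np.random.rand(12, 8)))  # histogram for one frame's groups
```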

Figures 7–9 show sample detection results on the three datasets. In each figure, the first picture is the first frame and the second picture is an abnormal frame of the experimental scene. We then use color bars to represent the detection result for each frame of the crowd sequence: green denotes a normal frame and red an abnormal frame. The first bar shows the ground truth, and the remaining bars show the results of our proposed method, SFM, StrFM, and OFM, respectively. Overall, these results show that our proposed

method is able to detect abnormal crowd dynamics, and in most cases the results of our method outperform the other algorithms.

Figure 7 shows partial results on the running video sequence of PETS2009, Figure 8 illustrates some of the results on the escape crowd video of UMN, and Figure 9 exhibits part of the results on the web video. The results of the proposed method conform better to the ground truth because we not only consider the temporal characteristics of the moving objects but also exploit the spatial social force relationships between objects. In Figure 7, SFM obtains an even better result than our proposed method because the social force increases sharply in the running scene. However, SFM produces more false detections in some crowd scenes, for example, crowd walking (Figure 8) and road crossing (Figure 9), because most of the social force in these scenes is nearly zero and SFM therefore has difficulty depicting this type of crowd motion. OFM is clearly inferior to our method because optical flow fails to capture motion over a period of time; in particular, in Figure 9 there are more false detections for the road crossing behavior because most of the walking behavior is irregular, and optical flow fails to capture irregular motion patterns. The results of StrFM are better than those of OFM in the three figures; however, this method still produces more false detections compared to our


Figure 9: Comparison results of abnormal detection for sample videos from WEB: (a) sample videos from WEB; (b) abnormal detection results (ground truth, proposed method, SFM, StrFM, and OFM; wait/crossing annotations around frame 1851).

Table 1: Accuracy (%) comparison of the proposed method with SFM, StrFM, and OFM in crowd abnormal detection for the three datasets.

                    Proposed method    SFM      StrFM    OFM
PETS2009            83.33              82.19    73.08    68.52
UMN                 85.10              80.25    79.52    65.31
WEB                 96.21              91.23    94.02    95.89
Overall accuracy    88.21              84.79    82.21    76.57

method and SFM. StrFM attains a comparable result when the motion dynamics are relatively simple (Figure 7), but its performance degrades when a substantial new flow constantly enters the scene (Figure 9).

To evaluate our method quantitatively, the accuracy results of abnormal detection are shown in Table 1. The results show that our proposed model improves accuracy by nearly 5% compared with the social force model. In particular, in higher-density crowd scenes the social force of the particle groups changes only slowly, so the social force model is not very discriminative for normal behaviors; as a result, the SFM accuracy is lower than that of our proposed method and, on the WEB dataset in Table 1, even lower than that of the optical flow method.
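For reference, the per-dataset accuracies in Table 1 can be computed from frame-level labels as in the small sketch below; the labels used are toy values, not data from the paper.

```python
# How the frame-level accuracies in Table 1 can be computed; labels below are toy values.
import numpy as np

def frame_accuracy(ground_truth, predictions):
    """Fraction of frames whose predicted normal/abnormal label matches the ground truth."""
    gt = np.asarray(ground_truth, dtype=bool)
    pred = np.asarray(predictions, dtype=bool)
    return float(np.mean(gt == pred))

gt = [0, 0, 1, 1, 1, 0]      # 1 = abnormal frame, 0 = normal frame
pred = [0, 1, 1, 1, 0, 0]
print(f"accuracy = {frame_accuracy(gt, pred):.2%}")
```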

Figure 10 shows the ROC curves for abnormal behavior detection using our proposed method and the alternative methods. It can be observed that our method provides better results.
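A ROC curve such as those in Figure 10 can be generated from per-frame abnormality scores as sketched below; the scoring values and labels are assumptions made for illustration, not data from the paper.

```python
# Sketch of producing a ROC curve (as in Figure 10) from per-frame abnormality scores;
# the scores and labels below are toy values, not data from the paper.
import numpy as np
from sklearn.metrics import roc_curve, auc

def roc_from_scores(ground_truth, abnormality_scores):
    """Return the FPR/TPR points and the area under the ROC curve."""
    fpr, tpr, _ = roc_curve(ground_truth, abnormality_scores)
    return fpr, tpr, auc(fpr, tpr)

gt = np.array([0, 0, 0, 1, 1, 1])                        # 1 = abnormal frame
scores = np.array([0.10, 0.40, 0.35, 0.80, 0.70, 0.90])  # higher = more abnormal
fpr, tpr, area = roc_from_scores(gt, scores)
print(f"AUC = {area:.2f}")
```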

6. Conclusion

In this paper, we proposed a novel framework for analyzing motion patterns in crowd video. For the spatiotemporal features, we extracted the streak flow to represent the global motion, while for the interaction of crowd objects we constructed a weighted social force graph from which particle groups are extracted to represent local group motion. Finally, we used the LDA model to detect crowd abnormal behavior based on the proposed crowd model. The experimental results show that our proposed method successfully segments crowd motion and detects crowd abnormal behavior, and that it outperforms current state-of-the-art methods for crowd motion analysis. As future work, we plan to further study crowd motion behavior in high-density scenes and to increase the robustness of our algorithm.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Figure 10: ROC curves (TPR versus FPR) for abnormality detection on the three datasets, comparing the proposed method, the social force method, the streak flow method, and the optical flow method: (a) ROC of PETS2009; (b) ROC of UMN; (c) ROC of WEB.

Acknowledgment

This work was supported by the Research Foundation of the Education Bureau of Hunan Province, China (Grant no. 13C474).

References

[1] D. Helbing and P. Molnar, "Social force model for pedestrian dynamics," Physical Review E, vol. 51, no. 5, pp. 4282–4286, 1995.
[2] W. Ge, R. T. Collins, and R. B. Ruback, "Vision-based analysis of small groups in pedestrian crowds," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 5, pp. 1003–1016, 2012.
[3] M. Rodriguez, S. Ali, and T. Kanade, "Tracking in unstructured crowded scenes," in Proceedings of the 12th International Conference on Computer Vision (ICCV '09), pp. 1389–1396, IEEE, Kyoto, Japan, October 2009.
[4] L. Kratz and K. Nishino, "Tracking pedestrians using local spatio-temporal motion patterns in extremely crowded scenes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 5, pp. 987–1002, 2012.
[5] R. Mehran, A. O. Oyama, and M. Shah, "Abnormal crowd behavior detection using social force model," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR '09), pp. 935–942, Miami, Fla, USA, June 2009.
[6] R. Raghavendra, A. Del Bue, M. Cristani, and V. Murino, "Optimizing interaction force for global anomaly detection in crowded scenes," in Proceedings of the IEEE 12th International Conference on Computer Vision (ICCV '11), pp. 136–143, Barcelona, Spain, November 2011.
[7] R. Li and R. Chellappa, "Group motion segmentation using a spatio-temporal driving force model," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 2038–2045, IEEE, San Francisco, Calif, USA, June 2010.
[8] B. Zhou, X. Tang, H. Zhang, and X. Wang, "Measuring crowd collectiveness," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 8, pp. 1586–1599, 2014.
[9] G. T. Bae, S. Y. Kwak, and H. R. Byun, "Motion pattern analysis using partial trajectories for abnormal movement detection in crowded scenes," Electronics Letters, vol. 49, no. 3, pp. 186–187, 2013.
[10] C. Tomasi and T. Kanade, "Detection and tracking of point features," Tech. Rep. CMU-CS-91-132, Carnegie Mellon University, Pittsburgh, Pa, USA, 1991.
[11] E. L. Andrade, S. Blunsden, and R. B. Fisher, "Modelling crowd scenes for event detection," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), vol. 1, pp. 175–178, IEEE, Hong Kong, August 2006.
[12] T. Li, H. Chang, M. Wang, B. Ni, R. Hong, and S. Yan, "Crowded scene analysis: a survey," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 3, pp. 367–386, 2015.
[13] N. Ihaddadene and C. Djeraba, "Real-time crowd motion analysis," in Proceedings of the 19th International Conference on Pattern Recognition (ICPR '08), pp. 1–4, December 2008.
[14] S. Ali and M. Shah, "A Lagrangian particle dynamics approach for crowd flow segmentation and stability analysis," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1–7, Minneapolis, MN, USA, June 2007.
[15] X. Pan, C. S. Han, K. Dauber, and K. H. Law, "Human and social behavior in computational modeling and analysis of egress," Automation in Construction, vol. 15, no. 4, pp. 448–461, 2006.
[16] A. Borrmann, A. Kneidl, G. Koster, S. Ruzika, and M. Thiemann, "Bidirectional coupling of macroscopic and microscopic pedestrian evacuation models," Safety Science, vol. 50, no. 8, pp. 1695–1703, 2012.
[17] A. Bera and D. Manocha, "REACH – realtime crowd tracking using a hybrid motion model," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '15), pp. 740–747, IEEE, Seattle, Wash, USA, May 2015.
[18] R. Mehran, B. E. Moore, and M. Shah, "A streakline representation of flow in crowded scenes," in Computer Vision – ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5–11, 2010, Proceedings, Part III, vol. 6313 of Lecture Notes in Computer Science, pp. 439–452, Springer, Berlin, Germany, 2010.
[19] D. M. Blei, A. Y. Ng, and M. I. Jordan, "Latent Dirichlet allocation," The Journal of Machine Learning Research, vol. 3, pp. 993–1022, 2003.
[20] M. Mancas, N. R. Riche, J. Leroy, and B. Gosselin, "Abnormal motion selection in crowds using bottom-up saliency," in Proceedings of the 18th IEEE International Conference on Image Processing (ICIP '11), pp. 229–232, IEEE, Brussels, Belgium, September 2011.
[21] X. Zhu, J. Liu, J. Wang, W. Fu, and H. Lu, "Weighted interaction force estimation for abnormality detection in crowd scenes," in Proceedings of the 11th Asian Conference on Computer Vision (ACCV '12), pp. 507–518, Daejeon, Republic of Korea, November 2012.
[22] P. F. Felzenszwalb and D. P. Huttenlocher, "Efficient graph-based image segmentation," International Journal of Computer Vision, vol. 59, no. 2, pp. 167–181, 2004.




Journal of Electrical and Computer Engineering 9

(a) Sample videos from UMN

Ground truth

Proposed

SFM

StrFM

OFM

Frame 828

EscapeWait

(b) Abnormal detection results of UMN

Figure 8 Comparison results of abnormal detection for sample videos from UMN

of the three datasets is 30 50 and 20 respectivelyIn order to utilize the different characteristic of the videosthe parameters in formula (7) are set to 120572 = 02 120573 =05 and 120574 = 03 After constructing social force graph wesegment the video sequence into different motion groupsand then visual words are generated based on clustering allmotion groups The length of visual words is 32 64 and 100respectively In the LDA phrase we learned 119873119911 = 16 latenttopics for abnormal behavior detection We compare ourmethod with the social force based method [5] (denoted asSFM) the streak flow based method [18] (denoted as StrFM)and the optical flow based method (denoted as OFM) Inthe process of comparison we obtain visual words based oncorresponding visual features and then train the words byLDA model

Figures 7ndash9 show some sample detection results of thethree datasets In each figure the first picture is first frameand the second picture is abnormal frame of the experimentalscene We then use color bar to represent the detected resultsof each frame in the crowd sequence Different color of thebar denotes the status of current frame green is normalframe and red is the abnormal frame The first bar indicatesthe ground truth bar and the rest of the bars show theresults from our proposed method SFM StrFM and OFMrespectively All in all these results show that our proposed

method is able to detect the crowd abnormal dynamic andinmost cases the results of ourmethod outperform the otheralgorithms

Figure 7 shows the partial results on running videosequence of PETS2009 Figure 8 illustrates some of the resultson escape crowd video of UMN Figure 9 exhibits part ofthe results on web video We can see that the results ofthe proposed method better conform to the ground truthbecause we not only consider the temporal characteristic ofthe motion objects but also exploit the spatial social forcerelationship between objects It is observed that in Figure 7SFM obtains an even better result compared to our proposedmethod because the social force is sharply increased inrunning scene However SFM results in more false detectionin some crowd scene for example crowd walking (Figure 8)and road crossing (Figure 9) because most of the social forcein these scenes is nearly zero and thus SFM faces difficultyin depicting this type of crowd motion OFM is obviouslyinferior to ourmethodbecause the optical flow fails to capturethe motion in a period of time In particular in Figure 9there is more false detection for road crossing behaviorbecause most of the walking behavior is irregular but opticalflow fails to capture irregular motion pattern The results ofStrFM are better than OFM in the three figures howeverthis method still has more false detection compared to our

10 Journal of Electrical and Computer Engineering

(a) Sample videos fromWEB

Ground truth

Proposed

SFM

StrFM

OFM

Frame 1851

Crossing WaitWait

(b) Abnormal detection results of WEB

Figure 9 Comparison results of abnormal detection for sample videos fromWEB

Table 1: Accuracy (%) comparison of the proposed method with SFM, StrFM, and OFM in crowd abnormal detection for the three datasets.

Dataset             Proposed method   SFM      StrFM    OFM
PETS2009            83.33             82.19    73.08    68.52
UMN                 85.10             80.25    79.52    65.31
WEB                 96.21             91.23    94.02    95.89
Overall accuracy    88.21             84.79    82.21    76.57
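For completeness, the per-dataset accuracies in Table 1 can be computed from per-frame predictions as sketched below; the array names are assumed, and the unweighted mean across datasets is shown purely for illustration since the paper does not state how the overall figure is aggregated.

import numpy as np

def frame_accuracy(pred, truth):
    # Percentage of frames whose predicted label matches the ground truth.
    pred, truth = np.asarray(pred), np.asarray(truth)
    return 100.0 * np.mean(pred == truth)

def mean_accuracy(per_dataset_accuracies):
    # Unweighted mean over the three datasets (illustrative aggregation only).
    return float(np.mean(per_dataset_accuracies))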

StrFM attains a comparable result when the motion dynamics are relatively simple (Figure 7), but it suffers a performance drop when substantial new flow constantly enters the scene (Figure 9).

To evaluate our method quantitatively, the accuracy results of abnormal detection are shown in Table 1. The results show that the proposed model improves accuracy by nearly 5% compared with the social force model. In particular, in higher-density crowd scenes the social force of particle groups changes only slowly, so the social force model does not discriminate normal behaviors well; accordingly, the SFM accuracy in Table 1 is lower than that of our proposed method and even lower than that of the optical flow method.

Figure 10 shows the ROC curves for abnormal behavior detection based on our proposed method against the alternative methods. It can be observed that our method provides better results.
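ROC curves of this form can be generated from per-frame abnormality scores with scikit-learn; the sketch below assumes the score dictionary and label array as inputs, with the method names taken from the figure legend.

import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

def plot_roc(method_scores, truth):
    # method_scores: {method name: per-frame abnormality scores}; truth: 0/1 labels.
    for name, scores in method_scores.items():
        fpr, tpr, _ = roc_curve(truth, scores)
        plt.plot(fpr, tpr, label="%s (AUC = %.2f)" % (name, auc(fpr, tpr)))
    plt.xlabel("FPR")
    plt.ylabel("TPR")
    plt.title("ROC curves")
    plt.legend(loc="lower right")
    plt.show()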

6. Conclusion

In this paper, we proposed a novel framework for analyzing motion patterns in crowd video. For the spatiotemporal feature, we extracted the streak flow to represent the global motion, while for the interaction of crowd objects we constructed a weighted social force graph from which particle groups are extracted to represent local group motion. Finally, we utilized the LDA model to detect abnormal crowd behavior based on the proposed crowd model. The experimental results show that the proposed method successfully segments crowd motion and detects abnormal crowd behavior, and that it outperforms current state-of-the-art methods for crowd motion analysis. As future work, we plan to further study crowd motion behavior in high-density scenes and to increase the robustness of our algorithm.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Figure 10: ROC curves (TPR versus FPR) for abnormality detection on the three datasets, comparing the proposed method with the social force, streak flow, and optical flow methods. (a) ROC of PETS2009; (b) ROC of UMN; (c) ROC of WEB.

Acknowledgment

This work was supported by the Research Foundation of Education Bureau of Hunan Province, China (Grant no. 13C474).

References

[1] D. Helbing and P. Molnar, "Social force model for pedestrian dynamics," Physical Review E, vol. 51, no. 5, pp. 4282–4286, 1995.

[2] W. Ge, R. T. Collins, and R. B. Ruback, "Vision-based analysis of small groups in pedestrian crowds," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 5, pp. 1003–1016, 2012.

[3] M. Rodriguez, S. Ali, and T. Kanade, "Tracking in unstructured crowded scenes," in Proceedings of the 12th International Conference on Computer Vision (ICCV '09), pp. 1389–1396, IEEE, Kyoto, Japan, October 2009.

[4] L. Kratz and K. Nishino, "Tracking pedestrians using local spatio-temporal motion patterns in extremely crowded scenes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 5, pp. 987–1002, 2012.

[5] R. Mehran, A. O. Oyama, and M. Shah, "Abnormal crowd behavior detection using social force model," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR '09), pp. 935–942, Miami, Fla, USA, June 2009.

[6] R. Raghavendra, A. Del Bue, M. Cristani, and V. Murino, "Optimizing interaction force for global anomaly detection in crowded scenes," in Proceedings of the IEEE 12th International Conference on Computer Vision (ICCV '11), pp. 136–143, Barcelona, Spain, November 2011.

[7] R. Li and R. Chellappa, "Group motion segmentation using a spatio-temporal driving force model," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 2038–2045, IEEE, San Francisco, Calif, USA, June 2010.

[8] B. Zhou, X. Tang, H. Zhang, and X. Wang, "Measuring crowd collectiveness," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 8, pp. 1586–1599, 2014.

[9] G. T. Bae, S. Y. Kwak, and H. R. Byun, "Motion pattern analysis using partial trajectories for abnormal movement detection in crowded scenes," Electronics Letters, vol. 49, no. 3, pp. 186–187, 2013.

[10] C. Tomasi and T. Kanade, "Detection and tracking of point features," Tech. Rep. CMU-CS-91-132, Carnegie Mellon University, Pittsburgh, Pa, USA, 1991.

[11] E. L. Andrade, S. Blunsden, and R. B. Fisher, "Modelling crowd scenes for event detection," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), vol. 1, pp. 175–178, IEEE, Hong Kong, August 2006.

[12] T. Li, H. Chang, M. Wang, B. Ni, R. Hong, and S. Yan, "Crowded scene analysis: a survey," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 3, pp. 367–386, 2015.

[13] N. Ihaddadene and C. Djeraba, "Real-time crowd motion analysis," in Proceedings of the 19th International Conference on Pattern Recognition (ICPR '08), pp. 1–4, December 2008.

[14] S. Ali and M. Shah, "A Lagrangian particle dynamics approach for crowd flow segmentation and stability analysis," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1–7, Minneapolis, MN, USA, June 2007.

[15] X. Pan, C. S. Han, K. Dauber, and K. H. Law, "Human and social behavior in computational modeling and analysis of egress," Automation in Construction, vol. 15, no. 4, pp. 448–461, 2006.

[16] A. Borrmann, A. Kneidl, G. Koster, S. Ruzika, and M. Thiemann, "Bidirectional coupling of macroscopic and microscopic pedestrian evacuation models," Safety Science, vol. 50, no. 8, pp. 1695–1703, 2012.

[17] A. Bera and D. Manocha, "REACH—realtime crowd tracking using a hybrid motion model," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '15), pp. 740–747, IEEE, Seattle, Wash, USA, May 2015.

[18] R. Mehran, B. E. Moore, and M. Shah, "A streakline representation of flow in crowded scenes," in Computer Vision—ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5–11, 2010, Proceedings, Part III, vol. 6313 of Lecture Notes in Computer Science, pp. 439–452, Springer, Berlin, Germany, 2010.

[19] D. M. Blei, A. Y. Ng, and M. I. Jordan, "Latent Dirichlet allocation," The Journal of Machine Learning Research, vol. 3, pp. 993–1022, 2003.

[20] M. Mancas, N. R. Riche, J. Leroy, and B. Gosselin, "Abnormal motion selection in crowds using bottom-up saliency," in Proceedings of the 18th IEEE International Conference on Image Processing (ICIP '11), pp. 229–232, IEEE, Brussels, Belgium, September 2011.

[21] X. Zhu, J. Liu, J. Wang, W. Fu, and H. Lu, "Weighted interaction force estimation for abnormality detection in crowd scenes," in Proceedings of the 11th Asian Conference on Computer Vision (ACCV '12), pp. 507–518, Daejeon, Republic of Korea, November 2012.

[22] P. F. Felzenszwalb and D. P. Huttenlocher, "Efficient graph-based image segmentation," International Journal of Computer Vision, vol. 59, no. 2, pp. 167–181, 2004.
