
4.3.2 Comparing each frame
4.3.2.1 Background template construction
4.3.2.2 Moving object recognition
4.3.3 Alerting system

5 Performance Analysis
5.1 J2ME technology
5.1.1 The Java platform
5.1.1.1 Write once, run anywhere
5.1.1.2 Security
5.1.1.3 Rich graphical user interface
5.1.1.4 Network awareness
5.1.2 The J2ME application style
5.1.3 J2ME benefits on wireless services
5.1.4 Types of applications J2ME enables
5.1.5 Mobile Media API (JSR 135)
5.2 Wireless Messaging API (JSR 120)
5.2.1 Creating a message connection
5.2.2 Creating and sending a text message
5.3 Prototype
5.4 Experiment
5.5 Capturing the video
5.6 Getting a video capture player
5.7 Showing the camera video
5.8 Capturing an image
5.9 Comparing each frame
5.10 Alerting system
5.11 Mobile Media API architecture
5.12 Using the Mobile Media API
5.13 Media types supported
5.14 Feasibility specification
5.14.1 Technical feasibility
5.14.2 Mobile Media API, JSR 135
5.14.3 Wireless Messaging API
5.14.4 Operational feasibility
5.14.4.1 Getting a video capture player
5.14.4.2 Showing the camera video
5.14.4.3 Capturing an image
5.14.5 Economic feasibility
5.15 Software requirement specification
5.15.1 Functional requirements
5.16 Future enhancements

6 Screenshots
7 Conclusion and References


List Of Figures


Figure a: Flow diagram of a generic background subtraction algorithm
Figure b: Working of background subtraction
Figure c: Background subtraction algorithm
Figure d: J2ME is part of the Java 2 platform
Figure e: J2ME applications can exchange data over WAP, i-mode or TCP-based wireless networks
Figure f: System architecture
Figure g: User interface of the prototype application
Figure h: Background template
Figure i: Intruder image
Figure j: Foreground image


1. ABSTRACT

The increasing need for intelligent video surveillance in public, commercial and family applications makes automated video surveillance systems one of the main current application domains in computer vision. Intelligent video surveillance systems deal with the real-time monitoring of persistent and transient objects within a specific environment.

A low-cost intelligent mobile phone-based wireless video surveillance solution using moving object recognition technology is proposed in this paper. The proposed solution can be applied not only to various security systems, but also to environmental surveillance. First, the basic principle of moving object detection is given. Constrained by the memory and computing capacity of a mobile phone, a background subtraction algorithm is adapted to the platform. Then, a self-adaptive background model that updates automatically and in a timely manner to adapt to the slow and slight changes of the natural environment is detailed. When the difference between the current captured image and the background reaches a certain threshold, a moving object is considered to be in the current view, and the mobile phone will automatically notify the central control unit or the user through a phone call, SMS (Short Message Service) or other means. The proposed algorithm can be implemented in an embedded system with little memory consumption and storage space, so it is feasible for mobile phones and other embedded platforms, and the proposed solution can be used to construct a mobile security monitoring system with low-cost hardware and equipment. The solution was first emulated in the Sun Java Wireless Toolkit, and the results show its effectiveness.


CHAPTER 2

2. INTRODUCTION

2.1 Introduction

The increasing need for intelligent video surveillance in public, commercial and family applications makes automated video surveillance systems one of the main current application domains in computer vision. Intelligent video surveillance systems deal with the real-time monitoring of persistent and transient objects within a specific environment.

A low-cost intelligent mobile phone-based wireless video surveillance solution using moving object recognition technology is proposed in this paper. The proposed solution can be applied not only to various security systems, but also to environmental surveillance. First, the basic principle of moving object detection is given. Constrained by the memory and computing capacity of a mobile phone, a background subtraction algorithm is adapted to the platform. Then, a self-adaptive background model that updates automatically and in a timely manner to adapt to the slow and slight changes of the natural environment is detailed. When the difference between the current captured image and the background reaches a certain threshold, a moving object is considered to be in the current view, and the mobile phone will automatically notify the central control unit or the user through a phone call, SMS (Short Message Service) or other means. The proposed algorithm can be implemented in an embedded system with little memory consumption and storage space, so it is feasible for mobile phones and other embedded platforms, and the proposed solution can be used to construct a mobile security monitoring system with low-cost hardware and equipment. The solution was first emulated in the Sun Java Wireless Toolkit, and the results show its effectiveness.

2.2 Importance of Video Surveillance

According to the US Bureau of Justice Statistics, approximately 75% of all crime in the United States is property crime. In 2003, there were 14 million thefts of property, and of these, 83% were home and business burglaries. The FBI has calculated that a burglary occurs every 8 seconds and that three out of four homes will be burglarized within the next 20 years. There is no reason to wait until it happens to you. While it might not make the evening news, when your home or business is burgled, safeguarding it becomes the most important issue in the world. Originally developed to provide the ultimate in security for banks, and traditionally used by security-intensive operations like casinos and airports, closed-circuit television (directly connecting video to a recording or viewing source without being broadcast) and video surveillance systems are now inexpensive and simple enough to be used at home. Now that this powerful technology is within the reach of the average consumer, it makes an effective part of any home security system, as well as of a small business's everyday video surveillance.

Advances in Closed Circuit TV (CCTV) technology are turning video surveillance equipment into the most valuable loss-prevention and safety/security tool available today for both commercial and residential applications. In fact, in the last 5 years alone, spending on surveillance equipment has nearly doubled and is expected to grow from $9.2 billion to $21 billion by 2010. The use of surveillance camera systems can alert you before threatening situations worsen, as well as provide you with an important record of events. Monitoring your store or business can be invaluable in identifying and apprehending thieves and vandals. The prevention or resolution of just one crime would be enough to pay for the video surveillance equipment.

Retailers use CCTV video surveillance systems to monitor for shoplifters and dishonest employees, compile recorded evidence against bogus accident claims and monitor merchandising displays in their stores. Manufacturers, governments, hospitals and universities use video surveillance equipment to identify visitors and employees, monitor hazardous work areas, thwart theft and ensure the security of their premises and parking facilities.

The increasing need for intelligent video surveillance in public, commercial and family applications makes automated video surveillance systems one of the main current application domains in computer vision. Intelligent video surveillance systems deal with the real-time monitoring of persistent and transient objects within a specific environment. Intelligent surveillance systems have evolved to the third generation, known as automated wide-area video surveillance systems [1]. Combined with computer vision technology, the distributed system is autonomous and can also be controlled from remote terminals.


A low-cost intelligent wireless security and monitoring solution using moving object recognition technology is presented in this paper. The system has good mobility, which makes it a useful supplement to traditional monitoring systems. It can also perform independent surveillance missions and can be extended to a distributed surveillance system. Constrained by the memory and computing capacity of a mobile phone, a background subtraction algorithm [1-6] is adapted for mobile phones. In order to adapt to the slow and slight changes of the natural environment, a self-adaptive background model that is updated automatically and in a timely manner is detailed. When the difference between the current captured image and the background reaches a certain threshold, a moving object is considered to be in the current view, and the mobile phone will automatically notify the central control unit or the user through a phone call, SMS, or other means. Based on J2ME technology, we use JSR 135 and JSR 120 to implement a prototype.

CHAPTER 3

3. AIM AND SCOPE OF THE PROJECT

3.1 AIM


The main aim of our project is to provide a low-cost intelligent mobile phone-based wireless video surveillance solution using moving object recognition technology. Our goals are to minimize network traffic and, thanks to the system's good mobility, to allow it to be deployed rapidly in an emergency. These components are chosen as they have the greatest impact on implementation effort. The proposed solution can be applied not only to various security systems, but also to environmental surveillance.

First, the basic principle of moving object detection is given. Constrained by the memory and computing capacity of a mobile phone, a background subtraction algorithm is adapted to the platform. Then, a self-adaptive background model that updates automatically and in a timely manner to adapt to the slow and slight changes of the natural environment is detailed. When the difference between the current captured image and the background reaches a certain threshold, a moving object is considered to be in the current view, and the mobile phone will automatically notify the central control unit or the user through a phone call, SMS (Short Message Service) or other means.

3.2 SCOPE OF THE PROJECT

• Intelligent video surveillance systems deal with the real-time monitoring of persistent and transient objects within a specific environment.

• A low-cost intelligent wireless security and monitoring solution using moving object recognition technology is presented in this project.

• When the difference between the current captured image and the background reaches a certain threshold, a moving object is considered to be in the current view, and the mobile phone will automatically notify the central control unit or the user through SMS.

CHAPTER 4

4. MATERIAL AND ALGORITHM USED

4.1 ALGORITHM USED

4.1.1 Background Subtraction Algorithm


Even though there exist a myriad of background subtraction algorithms in the literature, most of them follow the simple flow diagram shown in Figure a. The four major steps in a background subtraction algorithm are preprocessing, background modeling, foreground detection, and data validation. Preprocessing consists of a collection of simple image processing tasks that change

validation. Preprocessing consists of a collection of simple image processing tasks that change

the raw input video into a format that can be processed by subsequent steps. Background

modeling uses the new video frame to calculate and update a background model. This

 background model provides a statistical description of the entire background scene.

Foreground detection then identifies pixels in the video frame that cannot be adequately

explained by the background model, and outputs them as a binary candidate foreground mask.

Finally, data validation examines the candidate mask, eliminates those pixels that do not

correspond to actual moving objects, and outputs the final foreground mask. Domain

knowledge and computationally-intensive vision algorithms are often used in data validation.

Real-time processing is still feasible as these sophisticated algorithms are applied only on the

small number of candidate foreground pixels. Many different approaches have been proposed

for each of the four processing steps. We review some of the representative ones in the

following subsections.

Fig a Flow diagram of a generic background subtraction algorithm.
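To make the four steps concrete, the loop below sketches how they fit together for each incoming frame. This is only an illustrative outline, not an implementation from the literature; BackgroundModel, preprocess(), detectForeground() and validate() are hypothetical placeholders:

// Illustrative per-frame skeleton of a generic background subtraction pipeline.
// All class and method names here are hypothetical placeholders.
boolean[] processFrame(int[] rawFrame, BackgroundModel model) {
    int[] frame = preprocess(rawFrame);                   // smoothing, resizing, grayscale
    model.update(frame);                                  // background modeling
    boolean[] candidate = detectForeground(frame, model); // candidate foreground mask
    return validate(candidate, frame);                    // data validation -> final mask
}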

4.1.1.1 Preprocessing

In most computer vision systems, simple temporal and/or spatial smoothing is used in the early stages of processing to reduce camera noise. Smoothing can also be used to remove transient environmental noise, such as rain and snow captured by outdoor cameras. For real-time systems, frame-size and frame-rate reduction are commonly used to reduce the data processing rate. If the camera is moving or multiple cameras are used at different locations, image registration


between successive frames or among different cameras is needed before background modeling. Another key issue in preprocessing is the data format used by the particular background subtraction algorithm. Most of the algorithms handle luminance intensity, which is one scalar value per pixel. However, color images, in either the RGB or HSV color space, are becoming more popular in the background subtraction literature. These papers argue that color is better than luminance at identifying objects in low-contrast areas and at suppressing shadows cast by moving objects. In addition to color, pixel-based image features such as spatial and temporal derivatives are sometimes used to incorporate edge and motion information. For example, intensity values and spatial derivatives can be combined to form a single state space for background tracking with the Kalman filter. Pless et al. combine both spatial and temporal derivatives to form a constant-velocity background model for detecting speeding vehicles. The main drawback of adding color or derived features in background modeling is the extra complexity of model parameter estimation. The increase in complexity is often significant, as most background modeling techniques maintain an independent model for each pixel.
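As a concrete example of the luminance format mentioned above, the sketch below reduces a packed RGB pixel array to one scalar value per pixel using the common BT.601 luma weights; integer arithmetic is used because early CLDC configurations offer no floating point. The method name is illustrative:

// Convert packed 0x00RRGGBB pixels to per-pixel luminance in the range 0-255.
int[] toLuminance(int[] rgb) {
    int[] luma = new int[rgb.length];
    for (int k = 0; k < rgb.length; k++) {
        int r = (rgb[k] >> 16) & 0xFF;
        int g = (rgb[k] >> 8) & 0xFF;
        int b = rgb[k] & 0xFF;
        luma[k] = (299 * r + 587 * g + 114 * b) / 1000;  // BT.601 weights
    }
    return luma;
}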


Fig b Background subtraction


Figure c: Background Subtraction Algorithm

4.2 Background Subtraction Technology

Background subtraction is a commonly used class of techniques for segmenting out moving objects of interest in a scene for applications such as surveillance. It involves comparing an observed image with an estimate of the image if it contained no objects of interest. The areas of the image plane where there is a significant difference between the observed and estimated images indicate the location of the objects of interest. The term "background subtraction" comes from the simple technique of subtracting the timely updated background template from the observed image and then thresholding the result to generate the objects of interest.

4.2.1. Background Template Construction

Before the moving objects can be identified, a background template must be built. Generally, background and foreground (moving objects) are mixed together, such as waving leaves in a garden and running automobiles on a highway. The foreground cannot be removed, so the ideal background image cannot be retrieved. But the moving objects do not exist in the same location in each frame of a real-time video sequence. An "average" frame of the video


sequence can be retrieved to approximate the ideal background image. The gray values of the pixels at the same location in each frame of the video sequence are averaged to represent the gray value of the pixel at that location in the approximate background. For simplicity, the approximate background is also called the "background template", "background" or "template" in what follows.

In our prototype, the first 10 frames are captured to calculate the background template (i = 10). Moving objects cannot be identified in these frames. If the moving objects move too slowly, i should be increased to reduce the tolerance.
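A minimal sketch of this averaging step, assuming the first i frames have already been converted to per-pixel gray values; the names are illustrative:

// Average the first i frames pixel-by-pixel to build the background template.
int[] buildTemplate(int[][] firstFrames) {
    int n = firstFrames.length;                  // n = i = 10 in the prototype
    int[] template = new int[firstFrames[0].length];
    for (int k = 0; k < template.length; k++) {
        int sum = 0;
        for (int j = 0; j < n; j++) {
            sum += firstFrames[j][k];            // gray value of pixel k in frame j
        }
        template[k] = sum / n;
    }
    return template;
}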

4.2.2 Moving Object Recognition

After the background template has been constructed, the background image can be subtracted from the observed image. The result is the foreground (moving objects). In practice, the background is updated continuously; the update algorithm is detailed in the next section.

Foreground_j = | Frame_j − Background_j |  (j > i)  (2)

Because of random disturbances, each pixel will fluctuate within a small range even when there is no moving object in the scene, so there must be a strategy to judge this. A counter is maintained by the system: if the difference at a pixel between the real-time frame and the template is more than 10, the counter is incremented by 1. When the differences of all pixels in the frame have been calculated, a moving object is considered to appear if the counter exceeds 3 percent of the total number of pixels in the frame.
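A minimal sketch of this decision rule, with the two constants taken from the text (per-pixel difference of 10, counter limit of 3 percent of all pixels); the names are illustrative:

// Decide whether a moving object appears in the current frame.
boolean hasMovingObject(int[] frame, int[] template) {
    int changed = 0;
    for (int k = 0; k < frame.length; k++) {
        int diff = frame[k] - template[k];
        if (diff < 0) diff = -diff;        // |Frame_j - Background_j|
        if (diff > 10) changed++;          // per-pixel difference threshold
    }
    return changed * 100 > frame.length * 3;  // more than 3% of pixels changed
}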


4.2.3 BACKGROUND TEMPLATE UPGRADE

Because sunlight changes very slowly, the background template must be updated in a timely manner; otherwise the foreground can no longer be correctly identified. Add 1 to a pixel value in the background template if the corresponding pixel value in the current frame is greater, or subtract 1 if it is less. This algorithm is more efficient than a moving-average algorithm because it uses only addition and subtraction operations and does not need much memory or storage. The algorithm works as follows:

Pixel_k is a pixel in frame j, and Pixel_background_k is the corresponding pixel in the background template; these two pixels have the same location in their frames. With this method, the background template adjusts automatically to environmental change.
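A minimal sketch of this update rule; each template pixel is nudged one gray level toward the current frame:

// Move each template pixel one step toward the corresponding frame pixel.
void updateTemplate(int[] frame, int[] template) {
    for (int k = 0; k < template.length; k++) {
        if (frame[k] > template[k]) {
            template[k]++;                 // scene slowly brightened here
        } else if (frame[k] < template[k]) {
            template[k]--;                 // scene slowly darkened here
        }
    }
}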

4.3 MODULES

(i) Capturing The Video

(ii) Comparing Each Frame

(iii) Alerting System


4.3.1 Capturing the Video:

Once the camera video is shown on the device, capturing an image is easy. All you need to do is call VideoControl's getSnapshot() method. The getSnapshot() method returns an array of bytes, which is the image data in the format you requested. The default image format is PNG (Portable Network Graphics).

The Mobile Media API (MMAPI) extends the functionality of the J2ME platform by providing audio, video and other time-based multimedia support to resource-constrained devices.

4.3.1.1 Getting a Video Capture Player:

The first step in taking pictures (officially called video capture) in a MIDlet is obtaining a Player from the Manager:

Player mPlayer = Manager.createPlayer("capture://video");

The Player needs to be realized to obtain the resources that are needed to take pictures:

mPlayer.realize();

4.3.1.2 Showing the Camera Video:

The video coming from the camera can be displayed on the screen either as an Item in a Form or as part of a Canvas. A VideoControl makes this possible. To get a VideoControl, just ask the Player for it:

VideoControl mVideoControl = (VideoControl) mPlayer.getControl("VideoControl");


4.3.1.3 Capturing an Image

Once the camera video is shown on the device, capturing an image is easy. All you need to do is call VideoControl's getSnapshot() method. The getSnapshot() method returns an array of bytes, which is the image data in the format you requested. The default image format is PNG (Portable Network Graphics).

byte[] raw = mVideoControl.getSnapshot(null);
Image image = Image.createImage(raw, 0, raw.length);
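Putting the three steps together, a complete capture sequence might look like the sketch below. It assumes a MIDlet with a Form already on display and simplifies the exception handling:

import javax.microedition.lcdui.Form;
import javax.microedition.lcdui.Image;
import javax.microedition.lcdui.Item;
import javax.microedition.media.Manager;
import javax.microedition.media.MediaException;
import javax.microedition.media.Player;
import javax.microedition.media.control.VideoControl;

// Create, realize, show the viewfinder on a Form, then take one snapshot.
Image captureOne(Form form) throws java.io.IOException, MediaException {
    Player player = Manager.createPlayer("capture://video");
    player.realize();
    VideoControl vc = (VideoControl) player.getControl("VideoControl");
    form.append((Item) vc.initDisplayMode(VideoControl.USE_GUI_PRIMITIVE, null));
    player.start();
    byte[] raw = vc.getSnapshot(null);   // PNG image data by default
    player.close();
    return Image.createImage(raw, 0, raw.length);
}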

4.3.2 Comparing Each Frame:

Background subtraction is a commonly used class of techniques for segmenting out moving objects of interest in a scene for applications such as surveillance. It involves comparing an observed image with an estimate of the image if it contained no objects of interest. The areas of the image plane where there is a significant difference between the observed and estimated images indicate the location of the objects of interest. The term "background subtraction" comes from the simple technique of subtracting the timely updated background template from the observed image and then thresholding the result to generate the objects of interest.

4.3.2.1 Background Template Construction

Before the moving objects can be identified, a background template must be built. Generally, background and foreground (moving objects) are mixed together, such as waving leaves in a garden and running automobiles on a highway. The foreground cannot be removed, so the ideal background image cannot be retrieved. But the moving objects do not exist in the same location in each frame of a real-time video sequence. An "average" frame of the video sequence can be retrieved to approximate the ideal background image. The gray values of the pixels at the same location in each frame of the video sequence are averaged to represent the gray value of the pixel at that location in the approximate background. For simplicity, the approximate background is also called the "background template", "background" or "template" in what follows.

4.3.2.2 Moving Object Recognition

After the background template has been constructed, the background image can be subtracted from the observed image. The result is the foreground (moving objects). In practice, the background is updated continuously; the update algorithm is detailed in the next section.

Foreground_j = | Frame_j − Background_j |  (j > i)

Because of random disturbances, each pixel will fluctuate within a small range even when there is no moving object in the scene, so there must be a strategy to judge this. A counter is maintained by the system: if the difference at a pixel between the real-time frame and the template is more than 10, the counter is incremented by 1. When the differences of all pixels in the frame have been calculated, a moving object is considered to appear if the counter exceeds 3 percent of the total number of pixels in the frame.

4.3.3 Alerting System:

When the difference between the current captured image and the background reaches a certain threshold, a moving object is considered to be in the current view, and the mobile phone will automatically notify the central control unit or the user through a phone call, SMS or MMS. The J2ME Wireless Toolkit supports the Wireless Messaging API (WMA) with a sophisticated simulation environment. WMA 1.1 (JSR 120) enables MIDlets to send and receive Short Message Service (SMS) or Cell Broadcast Service (CBS) messages. WMA 2.0 (JSR 205) includes support for MMS messages as well.
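As a sketch of the alert path, the WMA calls needed to send a text notification are shown below; the destination number and port are hypothetical placeholders:

import javax.microedition.io.Connector;
import javax.wireless.messaging.MessageConnection;
import javax.wireless.messaging.TextMessage;

// Send an alert SMS once a moving object has been detected.
void sendAlert() throws java.io.IOException {
    String addr = "sms://+5550000:5000";   // hypothetical destination and port
    MessageConnection mc = (MessageConnection) Connector.open(addr);
    TextMessage tmsg =
            (TextMessage) mc.newMessage(MessageConnection.TEXT_MESSAGE);
    tmsg.setPayloadText("Moving object detected");
    mc.send(tmsg);
    mc.close();
}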


CHAPTER 5

PERFORMANCE ANALYSIS

SOFTWARE REQUIREMENTS:

Language : Java, J2ME
Build Tools : Sun Java Wireless Toolkit

HARDWARE REQUIREMENTS:

Main Processor : Pentium IV
RAM : 512 MB SDRAM
Motherboard : Intel 845GVM chipset
Hard Disk : 60 GB HDD
Monitor : 17" color monitor
Keyboard : AT extended keyboard
Mouse : Logitech

5.1 J2ME TECHNOLOGY

In this project, we have implemented a prototype on mobile telephones based on J2ME technology. Java Platform, Micro Edition (Java ME) is the most ubiquitous application platform for mobile devices across the globe. It provides a robust, flexible environment for applications running on a broad range of embedded devices, such as mobile phones,


PDAs, TV set-top boxes, and printers. Applications based on Java ME software are portable across a wide range of devices, while leveraging each device's native capabilities.

5.1.1 The Java Platform

Java 2 Platform, Micro Edition (J2ME) is part of the Java 2 platform. While Java 2 Standard Edition (J2SE) targets desktop systems and Java 2 Enterprise Edition (J2EE) targets server backend applications, J2ME is a collection of APIs focusing on consumer and embedded devices, ranging from TV set-top boxes, telematics systems and residential gateways to mobile phones and PDAs. Within each edition of the Java 2 platform, there are different Java Virtual Machine (JVM) implementations that are optimized for the type of systems they are targeted at. For example, the K Virtual Machine (KVM) is a JVM optimized for resource-constrained devices, such as mobile phones and PDAs.

Fig d J2ME is part of the Java 2 Platform

The following characteristics are shared among the three Java editions:


• Write Once, Run Anywhere: because Java technology relies on Java bytecode that is interpreted by a virtual machine, applications written in Java can run on similar types of systems (servers, desktop systems, mobile devices) independent of the underlying operating system and processor. For example, a developer doesn't need to develop and maintain different versions of the same application to run on a Nokia Communicator running the EPOC operating system, a Compaq iPAQ running PocketPC, or even a PDA powered by the Linux operating system. On mobile phones, the variety of processors and operating systems is even more significant, and therefore the wireless community in general is seeking a solution that is platform agnostic, such as WAP or J2ME.

• Security: while on the Internet people are used to secure data transactions and to downloading files or email messages that may contain viruses, few wireless networks today support standard Internet protocols, and wireless operators are concerned about the security issues associated with the download of standard C applications onto their networks. Java technology features a robust security model: before any application is executed by the Java virtual machine, a bytecode pre-verifier tests its code integrity. Once an application is running, it cannot access system resources outside of a 'sandbox', preventing applications from acting as viruses. Finally, Java applications can take advantage of standard data encryption solutions (SSL or elliptic curve libraries) on packet-based networks (for example CDPD, Mobitex, GPRS, W-CDMA), providing a robust infrastructure for mCommerce and enterprise application access.

• Rich graphical user interface: you may remember that the first demonstration of Java technology was done using an animated character on a web page. While animated GIF files have made this use of the technology obsolete on desktop systems, mobile devices can benefit from richer GUI APIs that allow for differentiation of services and the development of compelling applications.

• Network awareness: while Java applications can operate in disconnected mode, they

are network-aware by default, allowing applications to be dynamically downloaded over a

network. Additionally, Java is network-agnostic, in the sense that Java applications can

exchange data with a backend server over any network protocol, whether it is TCP/IP,


WAP or i-mode, and over different bearers, such as GSM, CDMA, TDMA, PHS, CDPD, Mobitex, and so on.

5.1.2 THE J2ME APPLICATION STYLE

Contrary to the web browser model, which requires continuous connectivity and offers a limited user interface and security experience, J2ME allows applications to be dynamically

downloaded to a mobile device in a secure fashion. J2ME applications can be posted on a Web

server, allowing end users to initiate the download of an application they select through a

micro browser or other application locator interface. Wireless operators, content providers, and

ISVs can also push a set of J2ME applications and manage them remotely. The Java

 provisioning model puts the responsibility of checking the compatibility of the applications

(such as version of the J2ME specification used, memory available on the handset) on the

handset itself, allowing the end user to ignore the intricacies associated with typical desktop

systems.

Once a J2ME application is deployed on a mobile device, it stays there until the user decides to

upgrade or remove it. The application can be operated in disconnected mode (such as

standalone game, data entry application) and store data locally, providing a level of 

convenience that is not available on current browser-based solutions. Because the application

resides locally, the user doesn't experience any latency issues, and the application can offer a

user interface (drop-down menus, check boxes, animated icons) that is only matched by native

C applications. The level of convenience is increased because the user can control when the

application initiates a data exchange over the wireless network. This allows for big cost

savings on circuit-switched networks, where wireless users are billed per minute, and allows a more efficient exchange of data, since many applications can use a store-and-forward mechanism to minimize network latency.


Fig e J2ME applications can exchange data over WAP, i-mode or TCP-based wireless networks

Additionally, J2ME applications can leverage any wireless network infrastructure, taking

advantage of a WAP network stack on current circuit-switched networks (GSM, CDMA,

TDMA). The same applications are ready to be used on packet-based networks, allowing the

use of standard Internet protocols, such as HTTP over SSL (data encryption), IMAP (email) and LDAP (directories), between the J2ME-enabled client application and the backend infrastructure.

5.1.3 J2ME BENEFITS ON WIRELESS SERVICE

Let's look at how Java technology fits in the wireless service evolution. Originally, analog

technology was sufficient to handle voice services, but the quality of the calls was sketchy and

multiple radio networks competed with one another.

Today we take advantage of the second generation of networks and services (2G networks),

which use digital networks and web browser technologies. This provides access to data

services, but markup languages present some limitations. Markup languages are a step in the


right direction, but browser-based applications don't work when out of coverage, require air time for even simple operations (such as entering appointments in a browser-based calendar), and offer a limited user interface paradigm (character-based, static black-and-white images, cumbersome navigation).

When Java technology is added to this environment, it brings additional benefits that translate into an enhanced user experience. Instead of plain text applications and the latency associated with a browser-based interface, the user is presented with rich animated graphics, fast interaction, the capability to use an application off-line and, maybe most interestingly, the capability to dynamically download new applications to the device.

For application developers, this means that you can use your favorite programming language

and your favorite development tools, rather than learning a new programming environment.

There are over 2.5 million developers who have already developed applications using the Java

 programming language, primarily on the server side. Once these developers become familiar 

with the small set of J2ME APIs, it becomes relatively easy to develop small client modules

that can exchange data with server applications over the wireless network.

The challenge that remains the same for Java, WAP, or native APIs is that small screens and limited input interfaces require developers to put real effort into the development of the application user interface. In other words, small devices force developers to abandon bad or lazy programming techniques.

5.1.4 TYPES OF APPLICATIONS J2ME ENABLES

Many people expect to see new types of applications developed with J2ME. You can argue that the application categories will remain the same, with a few exceptions such as location services and data applications that integrate with telephony functionality. The outcome is likely to be applications that are context sensitive (immediacy, location, personal or professional use) and that migrate from a character-based interface (browser-based applications) to a graphical environment, providing developers and end users with an unmatched level of flexibility. Just think about the evolution from DOS or mainframe applications to the Windows, MacOS, or Solaris graphical environments. We still use word processors, spreadsheets and accounting applications


like in the good old days, but because the new generation of applications takes advantage of a richer graphical environment, the applications are better and easier to use.

Therefore, expect to see J2ME developers targeting the same categories of applications they focused on with WAP, but this time with a user experience compelling enough for ISVs and system integrators to be able to charge for them.

As far as adoption of J2ME is concerned, the prognosis is rather good. Evans Data recently conducted a survey among 500 wireless application developers, concluding that more developers will use Java and J2ME to develop wireless applications (30%) than native C APIs (Palm OS, PocketPC, EPOC) or even WAP.

The market that J2ME will penetrate the fastest is the Japanese market, with Nikkei Market Access forecasting a penetration rate of 40% this year. NTT DoCoMo, which started shipping J2ME-enabled i-mode phones at the end of January, has already sold 1 million units and expects the number to increase to 3 million by the end of September. The two other major Japanese wireless operators (KDDI and J-Phone) will join DoCoMo in the deployment of J2ME-enabled handsets by the end of the summer.

Obviously, forecasts can be misleading, as the experience with WAP, Bluetooth and 3G has shown. Therefore, what really matters is the number of handset manufacturers that are planning to make J2ME-enabled phones and PDAs available this year, as well as the number of wireless operators that are endorsing the technology and putting in place a network infrastructure that will allow ISVs, content providers and corporations to deploy J2ME applications and services over their networks.

The benefits of Java technology as provided by J2ME in the wireless arena are many and

varied. From its Write Once Run Anywhere flexibility, to its robust security features, to its

support for off-line processing and local data storage, to its leverage of any wireless

infrastructure, to its fine-tuned control of data exchange, J2ME is a natural platform for 

wireless application development. The numbers bear this out -- the ranks of J2ME developers

are growing fast.


5.1.5 MOBILE MEDIA API (JSR-135)

The Mobile Media API (MMAPI) extends the functionality of the J2ME platform by providing

audio, video and other time-based multimedia support to resource-constrained devices [7] [9].

5.2 WIRELESS MESSAGING API (JSR 120)

The J2ME Wireless Toolkit supports the Wireless Messaging API (WMA) with a sophisticated

simulation environment. WMA 1.1 (JSR 120) enables MIDlets to send and receive Short

Message Service (SMS) or Cell Broadcast Service (CBS) messages. WMA 2.0 (JSR 205)

includes support for MMS messages as well [8] [10].

5.2.1 CREATING AN MESSAGE CONNECTION

To create a client Message Connection just calls Connector. Open (), passing a URL that

specifies a valid WMA messaging protocol.

Message Connection mc =(Message Connection)Connector. Open (addr);

5.2.2 CREATING AND SENDING A TEXT MESSAGE

Because the connection is a client, the destination address will already be set by the implementation (the address is taken from the URL that was passed when the client connection was created). Before sending the text message, the method populates the outgoing message by calling setPayloadText():

TextMessage tmsg = (TextMessage) mc.newMessage(MessageConnection.TEXT_MESSAGE);
tmsg.setPayloadText(msg);
mc.send(tmsg);
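For completeness, receiving works through the same API: a server-mode connection is opened on a port and receive() blocks until a message arrives. This is a minimal sketch; the port number is a hypothetical placeholder:

import javax.microedition.io.Connector;
import javax.wireless.messaging.Message;
import javax.wireless.messaging.MessageConnection;
import javax.wireless.messaging.TextMessage;

// Open a server-mode connection and wait for one incoming SMS.
void receiveOne() throws java.io.IOException {
    MessageConnection server =
            (MessageConnection) Connector.open("sms://:5000"); // hypothetical port
    Message msg = server.receive();   // blocks until a message arrives on port 5000
    if (msg instanceof TextMessage) {
        String text = ((TextMessage) msg).getPayloadText();
        System.out.println("Received: " + text);
    }
    server.close();
}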


5.3 PROTOTYPE

The system architecture is shown in Figure f.

Figure f: System Architecture

In the prototype system, if the difference between the real-time frame and the template reaches a predefined threshold, moving objects are considered to appear, and the handset sends out an alert SMS. Since the device has good mobility, it can be put anywhere, including areas not covered by other surveillance systems, and it can be deployed rapidly in an emergency.

5.4 EXPERIMENT

The prototype has been implemented on a Motorola E680 GSM phone and a Motorola ic902 CDMA 1X phone. The phone fact sheet is listed below.


Table 1.5: Phone Fact Sheet

Fig g User Interface of the Prototype Application

Figure g shows the UI (user interface) of the prototype application. The first picture in the form is the real-time frame, captured directly from the camera. The second image is the template image. If a moving object is detected, a third picture is displayed on the form, and some real-time information is displayed below the pictures.


Since the first several frames must be stored to calculate the template, a big memory heap is needed. For the E680, the image size is 192x192, and the Java Virtual Machine heap is big enough to store the frames. But for the ic902, the image size is 640x480, and the JVM heap cannot provide that much memory. There are two ways to solve this problem: first, the frames can be stored in the EFS (Embedded File System); second, the image size can be reduced. The first method provides high-resolution image data, which contains more detailed image features, but storing to EFS takes much longer than storing to memory, and much more time is needed in the subsequent calculations. The second method loses some detailed image features, but it can operate fully in memory and saves much processing time. Considering the real-time requirements, the second method is adopted. The image size is reduced to 160x120, which is still enough to identify the moving objects. The performance for 100 frames is detailed in Table 1.6. The term "Snapshot time" is the time taken to get the image through the J2ME MMAPI; it depends mainly on the capabilities of the hardware, operating system and Java VM. The "DIP (digital image processing) time" is the time taken to perform the background subtraction algorithm, including the image comparison and the template update. The "Frame time" is the total time to process a frame, which equals the sum of the "Snapshot time" and the "DIP time". The "Frame time average" is the average over several frame times. The "Template time" is the total time to construct the background template. In this instance, the "Snapshot time", "DIP time" and "Frame time" of the 100th frame are presented, and the "Frame time average" over the 100 frames and the "Template time" are also given.
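These timing quantities can be measured directly on the handset with System.currentTimeMillis(). A minimal sketch, reusing the detection methods sketched earlier; decodeToGray() is a hypothetical helper that turns the snapshot bytes into gray values:

// Measure the snapshot time and DIP time of one frame, in milliseconds.
long t0 = System.currentTimeMillis();
byte[] raw = mVideoControl.getSnapshot(null);      // snapshot
long t1 = System.currentTimeMillis();
int[] frame = decodeToGray(raw);                   // hypothetical decoding helper
boolean alarm = hasMovingObject(frame, template);  // compare against the template
updateTemplate(frame, template);                   // template update
long t2 = System.currentTimeMillis();
long snapshotTime = t1 - t0;
long dipTime = t2 - t1;
long frameTime = snapshotTime + dipTime;           // total time for this frame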


Table 1.6 FRAMES

As shown in Table 1.6, the background template can be built in less than half a minute, and the total time to process a frame is around one and a half seconds. This meets the requirements of a family security monitoring system and an anti-theft system. The experiment demonstrates the feasibility of the proposed system.

Some frames are magnified so they can be seen more clearly (see Figures h-j). Figure h is the self-adaptive background template. As shown in Figures i and j, a person running into the scene was identified immediately.

Figure h: Background Template


Fig i Intruder image

Fig j Foreground image

5.5 CAPTURING THE VIDEO


Once the camera video is shown on the device, capturing an image is easy. All you need to do is call VideoControl's getSnapshot() method. The getSnapshot() method returns an array of bytes, which is the image data in the format you requested. The default image format is PNG (Portable Network Graphics). The Mobile Media API (MMAPI) extends the functionality of the J2ME platform by providing audio, video and other time-based multimedia support to resource-constrained devices.

5.6 GETTING A VIDEO CAPTURE PLAYER

The first step in taking pictures (officially called video capture) in a MIDlet is obtaining a Player from the Manager:

Player mPlayer = Manager.createPlayer("capture://video");

The Player needs to be realized to obtain the resources that are needed to take pictures:

mPlayer.realize();

5.7 SHOWING THE CAMERA VIDEO

The video coming from the camera can be displayed on the screen either as an Item in a Form or as part of a Canvas. A VideoControl makes this possible. To get a VideoControl, just ask the Player for it:

VideoControl mVideoControl = (VideoControl) mPlayer.getControl("VideoControl");

5.8 CAPTURING AN IMAGE

Once the camera video is shown on the device, capturing an image is easy. All you need to do is call VideoControl's getSnapshot() method. The getSnapshot() method returns an array of bytes, which is the image data in the format you requested. The default image format is PNG (Portable Network Graphics).

byte[] raw = mVideoControl.getSnapshot(null);
Image image = Image.createImage(raw, 0, raw.length);

5.9 COMPARING EACH FRAME


Background subtraction is a commonly used class of techniques for segmenting out moving objects of interest in a scene for applications such as surveillance. It involves comparing an observed image with an estimate of the image if it contained no objects of interest. The areas of the image plane where there is a significant difference between the observed and estimated images indicate the location of the objects of interest. The term "background subtraction" comes from the simple technique of subtracting the timely updated background template from the observed image and then thresholding the result to generate the objects of interest.

5.10 ALERTING SYSTEM

When the difference between the current captured image and the background reaches a certain threshold, a moving object is considered to be in the current view, and the mobile phone will automatically notify the central control unit or the user through a phone call, SMS or MMS. The J2ME Wireless Toolkit supports the Wireless Messaging API (WMA) with a sophisticated simulation environment. WMA 1.1 (JSR 120) enables MIDlets to send and receive Short Message Service (SMS) or Cell Broadcast Service (CBS) messages. WMA 2.0 (JSR 205) includes support for MMS messages as well.

5.11 MOBILE MEDIA API ARCHITECTURE

The Mobile Media API is based on four fundamental concepts:

• A player knows how to interpret media data. One type of player, for example, might know how to produce sound based on MP3 audio data. Another type of player might be capable of showing a QuickTime movie. Players are represented by implementations of the javax.microedition.media.Player interface.

• You can use one or more controls to modify the behavior of a player. You can get the controls from a Player instance and use them while the player is rendering data from media. For example, you can use a VolumeControl to modify the volume of a sampled audio Player. Controls are represented by implementations of the javax.microedition.media.Control interface; specific control subinterfaces are in the javax.microedition.media.control package.

• A data source knows how to get media data from its original location to a player. Media data can be stored in a variety of locations, from remote servers to resource files or


RMS databases. Media data may be transported from its original location to the player using HTTP, a streaming protocol like RTP, or some other mechanism. javax.microedition.media.protocol.DataSource is the abstract parent class for all data sources in the Mobile Media API.

• Finally, a manager ties everything together and serves as the entry point to the API. The javax.microedition.media.Manager class contains static methods for obtaining Players or DataSources.

5.12 Using the Mobile Media API

The simplest thing you can do with Manager is play tones, using the following method:

public static void playTone(int note, int duration, int volume) throws MediaException

The duration is specified in milliseconds and the volume ranges from 0 (silent) to 100 (loud). The note is specified as a number, as in MIDI, where 60 is middle C and 69 is a 440 Hz A; the note can range from 0 to 127. The playTone() method is appropriate for playing a single tone or a very short sequence. For longer monotonic sequences, you'll use the default tone player, which is capable of playing an entire sequence of tones. The real magic of the Mobile Media API is exposed through Manager's createPlayer() method. There are three different versions of this method:

public static Player createPlayer(String locator)
    throws IOException, MediaException

public static Player createPlayer(DataSource source)
    throws IOException, MediaException

public static Player createPlayer(InputStream stream, String type)
    throws IOException, MediaException

The simplest way to obtain a Player is to use the first version of createPlayer() and just pass

in a string that represents media data. For instance, you might specify an audio file on a web server:

Player p = Manager.createPlayer("http://webserver/music.mp3");

The other createPlayer() methods allow you to create a Player from a DataSource or an

InputStream, whatever you happen to have available. If you think about it, these three methods are

really just three different ways of getting at the media data, the actual bits.


An InputStream is the simplest object, just a byte stream. The DataSource is the next level up,

an object that speaks a protocol to get access to media data. And passing a locator string is the

ultimate shortcut: the MMAPI figures out which protocol to use and gets the media data to the

Player.

Using a Player

Once you've successfully created a Player, what do you do next? The simplest action is to begin playback with the start() method. For anything beyond the rudiments, however, it helps to understand the life cycle of a Player, which consists of four states.

When a Player is first created, it is in the UNREALIZED state. After a Player has located its data, it is in the REALIZED state. If a Player is rendering an audio file from an HTTP connection to a server, for example, the Player reaches REALIZED after the HTTP request is sent to the server, the HTTP response is received, and the DataSource is ready to begin retrieving audio data. The next state is PREFETCHED, which is reached when the Player has read enough data to begin rendering. Finally, when the data is being rendered, the Player's state is STARTED.

The Player interface provides methods for state transitions, both forward and backward through the cycle described above. The reason is to give the application control over operations that might take a long time. You might, for example, want to push a Player through the REALIZED and PREFETCHED states so that a sound can be played immediately in response to a user action.
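A minimal sketch of walking a Player forward through these states so that a later start() is nearly instantaneous; error handling is omitted:

import javax.microedition.media.Manager;
import javax.microedition.media.MediaException;
import javax.microedition.media.Player;

// Pre-advance a Player through REALIZED and PREFETCHED.
Player preparePlayer(String locator) throws java.io.IOException, MediaException {
    Player p = Manager.createPlayer(locator);  // UNREALIZED
    p.realize();                               // -> REALIZED (data located)
    p.prefetch();                              // -> PREFETCHED (ready to render)
    return p;                                  // p.start() later -> STARTED
}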

The Mobile Media API in the Java Platform


Where exactly does the MMAPI fit in the Java 2 platform? The answer is just about anywhere. Although the MMAPI was designed with the constraints of the CLDC in mind, it works just fine alongside either CLDC or CDC software stacks. As a matter of fact, the MMAPI can be implemented with J2SE as a lightweight alternative to the Java Media Framework.

5.13 MEDIA TYPES SUPPORTED

If you get a device that supports the Mobile Media API, what kinds of data can it play? What

data transfer protocols are supported? The Mobile Media API doesn't require any specific

content types or protocols, but you can find out at runtime what is supported by calling

Manager's getSupportedContentTypes() and getSupportedProtocols() methods.

What's the worst that can happen? If you ask Manager to give you a Player for a content type

or protocol that is not supported, it will throw an exception. Your application should attempt to

recover gracefully from such an exception, perhaps by using a different content type or 

displaying a polite message to the user.
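A minimal sketch of this runtime query; passing null asks about all protocols or all content types, and the values shown in the comments are only examples:

import javax.microedition.media.Manager;

// List what the device supports; null means "across all protocols/content types".
String[] types = Manager.getSupportedContentTypes(null);   // e.g. "audio/x-wav"
String[] protocols = Manager.getSupportedProtocols(null);  // e.g. "http", "capture"
for (int k = 0; k < types.length; k++) {
    System.out.println("content type: " + types[k]);
}
for (int k = 0; k < protocols.length; k++) {
    System.out.println("protocol: " + protocols[k]);
}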

Media in MIDP 2.0

The MIDP 2.0 specification includes a subset of the Mobile Media API. It is upwardly

compatible with the full API. The MIDP 2.0 subset has the following characteristics:

• Only audio playback (and possibly recording) is supported. No video-specific

control interfaces are included.

• Multiple players cannot be synchronized.

• The DataSource class and the rest of the javax.microedition.media.protocol

 package are excluded; applications cannot provide their own protocol implementations.

• The Manager class is simplified.

5.14 FEASIBILITY SPECIFICATION

5.14.1 TECHNICAL FEASIBILITY:


Based on J2ME (Java 2 Micro Edition) technology, a prototype system was developed using JSR 135 (Java Specification Request 135: Mobile Media API) and JSR 120 (Java Specification Request 120: Wireless Messaging API), and the test results show the effectiveness of the proposed solution.

5.14.2 Mobile Media API, JSR 135:

The Mobile Media API, JSR 135 in the Java Community Process (JCP), extends the functionality of the J2ME platform by providing audio, video, and other time-based multimedia support to resource-constrained devices. As a simple and lightweight optional package, it gives Java developers access to the native multimedia services available on a given device.

The MMAPI is an optional package within the J2ME platform. While the main emphasis is on devices that implement profiles based on the Connected Limited Device Configuration (CLDC), the API design also aims at supporting devices that implement the Connected Device Configuration (CDC) and the profiles based on CDC.

The Mobile Media API (MMAPI) is an API specification for Java ME platform CDC and CLDC devices such as mobile phones. Depending on how it is implemented, the APIs allow applications to play and record sound and video, and to capture still images. MMAPI was developed under the Java Community Process as JSR 135.

The Multimedia Java API is based around four main types of classes in the javax.microedition.media package: the Manager, the Player, the PlayerListener and the various types of Control.

Java ME programmers wishing to use JSR 135 first make use of the static methods of the Manager class. Although there are other methods, such as playTone(), the main method used is createPlayer(). It takes either a URI or an InputStream, plus a MIME type. In most cases, URIs are used. Common URI protocols include:

• file:

• resource: (which may extract a file from within the JAR of the MIDlet, but is implementation-dependent)

• http:

• rtsp:


• capture: (used for recording audio or video)

The MIME type is optional, and is inferred from the data passed in if not supplied.

The createPlayer method returns an implementation of the Player interface (even if you use a capture: protocol URI). This has core methods that are applicable to all players, such as starting and stopping the media, and requesting that it loop. You can also register an object implementing the PlayerListener interface, which will receive various events related to the clip (starting, stopping, media finishing, and so on).

Player classes also have a getControl method that returns an implementation of a particular 

Control. A Control handles any optional APIs which are not applicable to all media types. Any

given Player may or may not be able to supply an implementation of any given Control.

(Typically, the Control returned is actually the Player itself, but this is not guaranteed to be the

case.)

The set of controls implemented by a Player is not limited; however, some standard ones are

defined in the javax.microedition.media.control package by the JSR:

• RateControl: for setting the speed of a clip
• MetaDataControl: for accessing metadata about a clip, for example ID3 tags
• FramePositioningControl: for setting a video clip location based on frames rather than time
• StopTimeControl: for asking a clip to stop at a given time
• RecordControl: to specify how you wish to record using a capture: URI, and for taking snapshots
• ToneControl: for note-based formats such as MIDI, specifying the tone
• PitchControl: for note-based formats, specifying the pitch
• MIDIControl: for MIDI-specific functions such as bank queries
• VideoControl: for specifying where on the screen video will play
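A minimal sketch of the getControl() pattern using VolumeControl; the string passed is the control's interface name, and a Player that does not support the control returns null:

import javax.microedition.media.Player;
import javax.microedition.media.control.VolumeControl;

// Ask a realized Player for its VolumeControl and set the volume to half.
void setHalfVolume(Player player) {
    VolumeControl vol = (VolumeControl) player.getControl("VolumeControl");
    if (vol != null) {       // not every Player supplies every Control
        vol.setLevel(50);    // 0 (mute) .. 100 (maximum)
    }
}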

5.14.3 Wireless Messaging API:


The WMA is an optional package based on the Generic Connection Framework (GCF). It targets the Connected Limited Device Configuration (CLDC) as its lowest common denominator, meaning that it can extend both CLDC- and CDC-based profiles. It thus supports Java 2 Platform, Micro Edition (J2ME) applications targeted at cell phones and other devices that can send and receive wireless messages. Note that Java 2 Platform, Standard Edition (J2SE) applications will also be able to take advantage of the WMA once JSR 197 (Generic Connection Framework Optional Package for J2SE) is complete.

All the WMA components are contained in a single package, javax.wireless.messaging, which defines all the interfaces required for sending and receiving wireless messages, both binary and text. Table 1 describes the contents of this package.

Table 1: Summary of the Wireless Messaging API v1.0 (javax.wireless.messaging)

• Message: the base message interface, from which subinterfaces (such as TextMessage and BinaryMessage) are derived. Methods: getAddress(), getTimestamp(), setAddress().

• BinaryMessage: subinterface of Message that provides methods to set and get the binary payload. Methods: getPayloadData(), setPayloadData().

• TextMessage: subinterface of Message that provides methods to set and get the text payload. Methods: getPayloadText(), setPayloadText().

• MessageConnection: subinterface of the GCF Connection, which provides a factory for Messages and methods to send and receive them. Methods: newMessage(), receive(), send(), setMessageListener(), numberOfSegments().

• MessageListener: defines the listener interface for asynchronous notification of incoming Message objects. Method: notifyIncomingMessage().

For the low-level details of each of the WMA methods, consult the WMA specification. More information on the WMA, together with a reference implementation (RI), is available at http://java.sun.com/products/wma/. A short sending sketch follows.
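This is a minimal sketch, not the project's exact sending code; the phone number is a placeholder:

    import java.io.IOException;
    import javax.microedition.io.Connector;
    import javax.wireless.messaging.MessageConnection;
    import javax.wireless.messaging.TextMessage;

    // Minimal sketch: opening a client-mode SMS connection and sending text.
    public class SmsSketch {
        public void sendAlert(String text) {
            MessageConnection conn = null;
            try {
                // "sms://<number>" opens a client connection to that address
                conn = (MessageConnection) Connector.open("sms://+15550000000");
                TextMessage msg = (TextMessage)
                        conn.newMessage(MessageConnection.TEXT_MESSAGE);
                msg.setPayloadText(text);
                conn.send(msg);
            } catch (IOException e) {
                // the connection failed or the message could not be sent
            } finally {
                if (conn != null) {
                    try { conn.close(); } catch (IOException ignored) { }
                }
            }
        }
    }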

5.14.4 OPERATIONAL FEASIBILITY

5.14.4.1 Getting a Video Capture Player

The first step in taking pictures (officially called video capture) in a MIDlet is obtaining a Player from the Manager.

5.14.4.2 Showing the Camera Video

The video coming from the camera can be displayed on the screen either as an Item in a Form or as part of a Canvas. A VideoControl makes this possible.

5.14.4.3 Capturing an Image

Once the camera video is shown on the device, capturing an image is easy: call VideoControl's getSnapshot() method. It returns an array of bytes containing the image data in the format you requested; the default format is PNG (Portable Network Graphics). The sketch below strings the three steps together.
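This is a sketch under simplifying assumptions (the Form handling is reduced to a single append, and a real MIDlet would keep the Player around for reuse):

    import javax.microedition.lcdui.Form;
    import javax.microedition.lcdui.Item;
    import javax.microedition.media.Manager;
    import javax.microedition.media.MediaException;
    import javax.microedition.media.Player;
    import javax.microedition.media.control.VideoControl;

    // Minimal sketch of the three steps: get a capture Player, show the
    // camera video in a Form, and take a snapshot in the default PNG format.
    public class CaptureSketch {
        public byte[] captureFrame(Form form) {
            try {
                // Step 1: obtain a video capture Player from the Manager
                Player player = Manager.createPlayer("capture://video");
                player.realize();

                // Step 2: show the camera video as an Item appended to the Form
                VideoControl vc = (VideoControl) player.getControl("VideoControl");
                if (vc == null) {
                    return null;  // this device exposes no VideoControl
                }
                Item viewfinder = (Item)
                        vc.initDisplayMode(VideoControl.USE_GUI_PRIMITIVE, null);
                form.append(viewfinder);
                player.start();

                // Step 3: grab the image bytes; null asks for the default format
                return vc.getSnapshot(null);
            } catch (java.io.IOException e) {
                return null;  // the capture locator could not be opened
            } catch (MediaException e) {
                return null;  // capture or snapshots are not supported
            }
        }
    }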

The prototype has been implemented on the Motorola E680 GSM phone and the Motorola ic902 CDMA 1X phone.

5.14.5 ECONOMIC FEASIBILITY:

A low-cost intelligent mobile phone-based wireless video surveillance solution using moving object recognition technology is proposed in this paper.

The increasing need for intelligent video surveillance in public, commercial and family applications makes automated video surveillance systems one of the main current application domains in computer vision. Intelligent video surveillance systems deal with the real-time monitoring of persistent and transient objects within a specific environment.

Intelligent surveillance systems have evolved to the third generation, known as automated wide-area video surveillance systems. Combined with computer vision technology, such distributed systems are autonomous and can also be controlled from remote terminals.

The system has good mobility, which makes it a useful supplement to traditional monitoring systems. It can also perform independent surveillance missions and can be extended to a distributed surveillance system. The solution is constrained only by the memory and computing capacity of a mobile phone, so apart from the handset itself no dedicated surveillance hardware is required.


5.15 SOFTWARE REQUIREMENT SPECIFICATION:

5.15.1 FUNCTIONAL REQUIREMENTS

Using J2ME (Java 2 Micro Edition) technology, a prototype system was developed with:

• JSR 135 (Java Specification Request 135: Mobile Media API)

• JSR 120 (Java Specification Request 120: Wireless Messaging API)

5.16 Future Enhancements:

The system can be extended to a distributed wireless network system: many terminals work together, reporting to a control center and receiving commands from it. In this way, a low-cost wide-area intelligent video surveillance system can be built. Furthermore, with the development of embedded hardware, more complex digital image processing algorithms can be adopted, enabling more kinds of applications in the future.

6.0 SCREENSHOTS:


FIG 6.1.0 COMPILE AND RUN ON CLIENT SIDE

Fig 6.1.0 shows the window for starting or stopping the camera after the client side is compiled and run.

FIG 6.1.1 COMPILE AND RUN ON SERVER SIDE


Fig 6.1.1 shows the command window of the RMI registry.

FIG 6.1.2 COMPILING THE EXISTING PROJECT


Fig 6.1.2 shows the video playing and the capture of the initial background image.

FIG 6.1.3 CAPTURING THE IMAGES

Fig 6.1.3 shows the captured image of an intruder.

FIG 6.1.4 IMAGE PROCESSING


Fig 6.1.4 shows the comparison of the present image with the background image using the universal background subtraction algorithm; the result is a foreground image. If the background changes or a foreground image is obtained, the foreground image is stored in the database and an SMS is sent to the administrator's mobile phone.

FIG 6.1.5 CAPTURED IMAGES


Fig 6.1.5 shows the captured images stored in the database.

CHAPTER 7


CONCLUSION

7.1 Conclusion

Moving object recognition technology led to the development of autonomous systems, which also minimize network traffic. With good mobility, the system can be deployed rapidly in an emergency and is a useful supplement to traditional monitoring systems. With the help of J2ME technology, the differences between hardware platforms are minimized: any embedded platform with a camera and JSR 135/JSR 120 support can install this system without changes to the application. The system can also be extended to a distributed wireless network system, in which many terminals work together, reporting to a control center and receiving commands from it; thus, a low-cost wide-area intelligent video surveillance system can be built. Furthermore, with the development of embedded hardware, more complex digital image processing algorithms can be used, giving more kinds of applications in the future.

References

[1] M. Valera and S. A. Velastin, "Intelligent distributed surveillance systems: a review," IEE Proceedings - Vision, Image and Signal Processing, Apr. 2005, vol. 152, no. 2, pp. 192-204.

[2] M. Piccardi, "Background subtraction techniques: a review," IEEE International Conference on Systems, Man and Cybernetics, Oct. 2004, vol. 4, pp. 3099-3104.

[3] T. Horprasert, D. Harwood and L. S. Davis, "A Robust Background Subtraction and Shadow Detection," Proceedings of the Fourth Asian Conference on Computer Vision, Jan. 2000, vol. 1, pp. 983-988.

[4] Y. Ivanov, A. Bobick and J. Liu, "Fast Lighting Independent Background Subtraction," International Journal of Computer Vision, Jun. 2000, vol. 37, no. 2, pp. 199-207.

[5] A. Elgammal, D. Harwood and L. Davis, "Non-parametric Model for Background Subtraction," Proceedings of the 6th European Conference on Computer Vision - Part II, 2000, pp. 751-767.

[6] A. M. McIvor, "Background Subtraction Techniques," Proceedings of Image & Vision Computing New Zealand 2000 (IVCNZ'00), Reveal Limited, Auckland, New Zealand, 2000.

[7] C. Enrique Ortiz, "The Wireless Messaging API," developers.sun.com, 2002.