
Localization Using Neural Networks in Wireless Sensor Networks

    Ali Shareef, Yifeng Zhu, and Mohamad Musavi

Department of Electrical and Computer Engineering, University of Maine

    Email: {ashareef, zhu, musavi}@eece.maine.edu

    ABSTRACT

Noisy distance measurements are a pervasive problem in localization in wireless sensor networks. Neural networks are not commonly used in localization; however, our experiments in this paper indicate that neural networks are a viable option for solving localization problems. In this paper we qualitatively compare the performance of three different families of neural networks: the Multi-Layer Perceptron (MLP),

Radial Basis Function (RBF), and Recurrent Neural Networks (RNN). The performance of these networks will also be compared with two variants of the Kalman Filter, which are traditionally used for localization. The resource requirements, in terms of computation and memory, will also be compared. In this paper, we show that the RBF neural network has the best localization accuracy; however, it also has the worst computational and memory resource requirements. The MLP neural network, on the other hand, has the best computational and memory resource requirements.

1. MOTIVATIONS

Localization is used in location-aware applications such as navigation, autonomous robotic movement, and asset tracking [1, 2] to position a moving object on a coordinate system. Given three distances from known points, analytical localization methods such as triangulation and trilateration can be used when exact distance measurements can be obtained.
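As an illustration of the analytical case, trilateration with exact distances reduces to a small linear solve: subtracting one circle equation from the others eliminates the quadratic terms. The sketch below is a generic textbook formulation, not code from this paper; the function name is ours, and the beacon layout in the usage example simply mirrors the grid used later in the experiments.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Estimate (x, y) from exact distances to three known anchor points.

    Subtracting the first circle equation from the other two linearizes
    the system, which is then solved directly. Hypothetical helper, not
    from the paper.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # Linear system A @ [x, y] = b from subtracting circle equations.
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2,
                  d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2])
    return np.linalg.solve(A, b)

# Beacons at the corners used later in the paper; node at (100, 100).
anchors = [(0.0, 300.0), (300.0, 300.0), (300.0, 0.0)]
true_pos = np.array([100.0, 100.0])
dists = [np.hypot(*(true_pos - np.array(a))) for a in anchors]
est = trilaterate(anchors, dists)  # → approximately [100, 100]
```

With noiseless distances this recovers the exact position; the paper's point is precisely that measured distances are never noiseless, which is what breaks this closed-form approach.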

In the event that distance measurements are noisy and fluctuate, the task of localizing becomes difficult. This can be seen in Figure 1: with fluctuating distances, regions within the circles become possible locations for the tracked object. As a result, rather than the object being located at a single point, the object can be located anywhere in the dark shaded region in Figure 1.

This uncertainty due to the measurement noise renders analytical methods almost useless. Localization methods capable of accounting for and filtering out the measurement noise are desired, which is why neural networks are very

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Mobilware'08, February 12-15, 2008, Innsbruck, Austria. Copyright © 2008 ACM 978-1-59593-984-5/08/02 ...$5.00.

    Figure 1: Trilateration with Noise

promising in this area. The method by which the distance measurements are carried out determines the sources of noise in these measurements. Typically, devices known as beacons are placed at known locations and emit radio or acoustic signals or both. Mobile nodes can determine the distance to a beacon by using the signal strength of the RF signal, the Received Signal Strength (RSS) [3]. Other systems utilize both RF and acoustic signals by emitting both at the same time and computing the time difference between their arrival at the mobile node [4, 5, 2, 6].

The Cricket system computes the distance by using the time it takes for the first instance of the acoustic pulse to reach the sensor after the RF signal, together with the speed at which the acoustic wave typically travels [2, 6]. However, wave reflection is common to both RF and acoustic signals, and it is possible that the mobile node erroneously identifies a pulse due to the reflected wave of the original pulse as a new pulse, resulting in skewed distance measurements [4].

As a result, the complexity of the methods that can factor in these effects increases. However, it must be borne in mind that these methods will be implemented on embedded systems with limited computation and memory resources. Different methods with different levels of accuracy require varying amounts of resources. A trade-off must be made that balances the timely computation of the position and the accuracy necessary for the application.

With this trade-off in mind, we compare the accuracy, robustness, computation, and memory usage of localization using neural networks with the Position-Velocity (PV) and Position-Velocity-Acceleration (PVA) models of the Kalman filter [7]. We will introduce the use of a Radial Basis Function (RBF) neural network for localization, which to our knowledge has not been done before.

The rest of the paper is organized as follows. Section 2 describes relevant localization methods. Section 3 introduces principles of neural networks. The experiments by which these methods will be compared are given in Section 4. The results are discussed in Section 5, and concluding comments are made in Section 6.

2. RELATED WORK

The Active Badge Location System [8] is often credited as one of the earliest implementations of an indoor sensor network used to localize a mobile node [2]. Although this system, utilizing infrared, was only capable of localizing the room that the mobile node was located in, many other systems based on this concept have been proposed.

The Bat system [4, 5], much like the Active Badge system, also utilizes a network of sensors. This system features a central controller that emits an RF query, to which the mobile node responds with an ultrasonic pulse. This pulse is picked up by a network of receivers at varying times due to their locations. These times can be used to compute the distances and hence the location of the mobile node.

Researchers at MIT have utilized similar concepts from the Bat system in their Cricket sensors, albeit using a more decentralized structure. This system requires less of a support infrastructure than the Bat system. The Cricket Location System [6] uses a hybrid approach consisting of an Extended Kalman filter, Least Square Minimization to reset the Kalman filter, and Outlier Rejection to eliminate bad distance readings. Other researchers at MIT have proposed localization by exploiting properties of robust quadrilaterals to localize an ad-hoc collection of sensors without the use of beacons [9].

It is also possible to localize optically, as shown in the HiBall head tracking system [10]. Arrays of LEDs flash synchronously, and cameras capture the position of these LEDs. The system utilizes information about the geometry of the system and computes the position.

Localization using the signal strength of RF signals has been studied extensively; [11, 12, 13, 14] are all examples of methods devised using this approach. As popular as the Kalman Filter and methods involving electromagnetic signals are for localization, neural networks have not been used extensively in this area. Some research has been conducted by Chenna et al. [15]; however, they restricted themselves to comparing Recurrent Neural Networks (RNN) to the Kalman Filter.

In [16], we showed that an MLP neural network can be used for localization, and that its performance exceeds that of the PV and PVA variants of the Kalman Filter.

3. INTRODUCTION

In [16], we provide extensive details on the use of the Kalman filter for localization.

Neural networks are modeled after biological nervous systems and are represented as a network of interconnections between nodes, called neurons, with activation functions. Different classes of neural networks can be obtained by varying the activation functions of the neurons and the structure of the weighted interconnections between them. Examples of these classes of neural networks are the Multi-Layer Perceptron (MLP), Radial Basis Function (RBF), and Recurrent Neural Networks (RNN).

The MLP network is trained to approximate a function by repeatedly passing the input through the network. The weights of the interconnections are modified based on the difference between the desired output and the output of the neural network. The final weights of the MLP network are entirely dependent upon the initial weights. Finding the set of weights that results in the best performance is a challenge and ultimately becomes a guess-and-check exercise.

The weights of the RBF network are obtained by solving for them through the application of Green's functions [17]. Generally, variations of the Gaussian function are used as activation functions for the nodes. Unlike the MLP network, the weights of the RBF network can be solved for directly. A major drawback of the RBF neural network is that it spans the input/output data space of the application and may require many nodes; the memory and computational requirements of this network can be staggering. However, it is possible to use fewer nodes at key locations to approximate the performance of the much larger network, thereby significantly reducing memory and computational requirements.
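The direct solve described above can be sketched in a few lines: with one Gaussian node centered on every training input, the interpolation matrix is square and the output weights fall out of a linear system. This is a generic exact-interpolation RBF sketch under our own assumptions (Gaussian basis, a single shared variance), not the paper's MATLAB code; the function names are ours.

```python
import numpy as np

def fit_exact_rbf(X, Y, variance):
    """Solve the output weights of an exact RBF network by linear algebra.

    One Gaussian node is centered on each training input, so the
    interpolation matrix Phi is square and the weights W satisfy
    Phi @ W = Y exactly. Hypothetical sketch, not the paper's code.
    """
    # Pairwise squared distances between inputs and centers (the inputs).
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-sq / (2.0 * variance))
    W = np.linalg.solve(Phi, Y)  # exact fit of the training targets
    return W

def rbf_predict(X_new, centers, W, variance):
    sq = ((X_new[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq / (2.0 * variance)) @ W
```

A reduced (RRBF-style) variant would keep the same prediction code but place far fewer centers and fit W by least squares instead of an exact solve.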

These networks are referred to as Reduced RBF (RRBF) networks.

RNN networks are very similar to the MLP network structure except in one respect. MLP interconnections only flow from the inputs to the outputs in one direction; there are no connections between the output layer and the previous layers. RNN networks, on the other hand, can possess such a feedback structure: the outputs of a node can be fed back as inputs to previous layers. This unique trait may lend itself well to localization using noisy measurements, as will be discussed later.

A major benefit of a neural network is that prior knowledge of the noise distribution is not required. Noisy distance measurements can be used directly to train the network with the actual coordinate locations. The neural network is capable of characterizing the noise and compensating for it to obtain an accurate position. This is unlike the Kalman filter, which depends upon knowledge of the noise distribution to localize [7].
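For reference, one predict/update cycle of the PV model discussed throughout this paper can be sketched as follows. This is a generic textbook constant-velocity Kalman filter with position-only measurements and deliberately simplified noise covariances, not the implementation from [16] or [7]; all names and parameter choices are illustrative.

```python
import numpy as np

def pv_kalman_step(x, P, z, dt, q, r):
    """One predict/update cycle of a textbook PV (constant-velocity)
    Kalman filter tracking 2-D position. State x = [px, py, vx, vy].
    Generic sketch; q and r are simplified scalar noise levels."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt            # position advances by velocity * dt
    Q = q * np.eye(4)                 # simplified process-noise covariance
    H = np.array([[1., 0., 0., 0.],   # we measure position only
                  [0., 1., 0., 0.]])
    R = r * np.eye(2)                 # measurement-noise covariance

    # Predict step.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step with the position measurement z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

The PVA model extends the state with acceleration terms but follows the same predict/update pattern.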

In any case, the power of neural networks lies in the fact that they can be used to model very complicated relationships easily. A detailed description can be found in [17].

4. EXPERIMENT DESIGN

We will explore the performance of the three neural networks described above using MIT Cricket sensors [18]. The distance measurements returned by the sensors are rarely constant and fluctuate often; during our experiment, the measured distances varied by as much as 32.5 cm. The distances from each of the beacons to the mobile node will be used as inputs, and the neural networks will output positions that correspond to the location of the mobile node. We will simulate the two-dimensional motion of an object by collecting distance measurements of a mobile node while moving it in a network of Cricket sensors.

Training data was collected to train the neural networks by utilizing the regular grid of a tile floor, where the sensors can be accurately located as shown in Figure 2. Each square tile measures 30 cm wide, which lends itself well to the Cricket sensors, which can be programmed to measure distance in centimeters.

Using a grid of 300 cm × 300 cm, beacons were placed at positions (0, 300), (300, 300), and (300, 0) of the grid. A representation of this grid can be seen in Figure 3. The training data was collected by placing the mobile node at each intersection of the tiles and collecting distance measurements to each of the three beacons S1, S2, and S3. The distance measurements to the beacons fluctuated constantly, and by collecting more than one set of distances to each of the beacons, we were able to capture the noise in the system. By training the neural networks with these multiple sets of fluctuating distances to beacons for each position, the accuracy of the neural networks improved as they became capable of filtering out the noise in the distance measurements. The positions with known locations for which distance measurements were collected are marked using + signs in Figure 3. For each of the 121 positions marked with a + sign, several sets of distance measurements were collected, resulting in a total of 1521 sets of distance measurements for training the networks.

    Figure 2: Experiment Test Bed

Figure 3: Locations of Training and Testing Data Collected (0-300 cm grid; legend: Training Points, Testing Points)

Once the training data was collected, the testing data was collected by moving the mobile node through the sensor network following a winding path. Again, the distances for each known position were collected; these testing positions are also marked in Figure 3. The distances will be input to the networks, which will output estimates of the X and Y coordinates for those distances.

The MLP neural network is a two-layer network composed of nine nodes in the first layer and two nodes in the output layer. The nodes in the first layer use the hyperbolic tangent sigmoid activation function, and the second layer uses a linear activation function. This network was trained in MATLAB for 200 epochs with a training goal of 0.001 error.
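The forward pass of the two-layer network just described (three distance inputs, nine tanh hidden nodes, two linear outputs) is small enough to write out directly. The weights below are random placeholders, since the trained values are not published; this is an illustrative sketch, not the MATLAB network from the experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
# Layer sizes from the paper: 3 distance inputs, 9 hidden nodes, 2 outputs.
m, n, p = 3, 9, 2
W1, b1 = rng.standard_normal((n, m)), rng.standard_normal(n)  # placeholder weights
W2, b2 = rng.standard_normal((p, n)), rng.standard_normal(p)  # (trained values not published)

def mlp_forward(d):
    """Forward pass: tanh hidden layer, then linear output layer,
    mapping three beacon distances to an (x, y) estimate."""
    h = np.tanh(W1 @ d + b1)  # hyperbolic tangent sigmoid activation
    return W2 @ h + b2        # linear output layer

xy = mlp_forward(np.array([223.6, 282.8, 223.6]))  # three beacon distances (cm)
```

A single estimate therefore costs one 9x3 and one 2x9 matrix-vector product plus nine tanh evaluations, which is why the MLP's per-estimate cost in Table 3 is so low.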

    The RNN neural network that will be used is almost thesame as the MLP network except that the outputs of thefirst layer will also be fed back to each of the nodes in thefirst layer as inputs.

Two forms of the RBF network will also be presented: the exact RBF and the RRBF. The first layer of the exact RBF network contains 1521 nodes; each training point has a node associated with it. The RBF network was trained in MATLAB using a variance of 473 until the difference in error between training iterations stabilized.

The RRBF network, on the other hand, has only 121 nodes, corresponding to the positions where data was collected. All 1521 sets of distances will be used to train the network with a variance of 33. Note that these variances were selected by plotting the localization error of the networks for a variety of different variances and selecting the one that results in the least error.
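The variance-selection procedure just described is a one-dimensional grid search: train one network per candidate variance, evaluate its localization error, and keep the minimizer. The sketch below shows only that search scaffold; `train_fn` and `error_fn` are hypothetical callables standing in for the MATLAB training and evaluation steps, and the toy error curve is invented purely to make the example runnable.

```python
import numpy as np

def select_variance(candidates, train_fn, error_fn):
    """Grid search mirroring the variance selection in the paper:
    train one network per candidate variance and keep the one with
    the least localization error. train_fn/error_fn are hypothetical
    stand-ins for the real training and evaluation routines."""
    errors = [error_fn(train_fn(v)) for v in candidates]
    best = int(np.argmin(errors))
    return candidates[best], errors

# Toy illustration: pretend the error curve bottoms out near 473.
cands = [100, 250, 473, 700, 1000]
best, errs = select_variance(cands,
                             train_fn=lambda v: v,
                             error_fn=lambda v: abs(v - 473))
```

In practice each `train_fn(v)` call retrains the full RBF or RRBF network, so the search cost scales linearly with the number of candidate variances.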

    5. RESULTS AND DISCUSSIONS

    5.1 Performance Comparison

Figures 4, 5, 6, 10, 11, and 12 show the localization accuracy of the RBF, MLP, RNN, PV, PVA, and RRBF methods, respectively.

Method   Distance Error per Estimate   (X, Y) RMSE        Net RMSE
RBF      5.049                         (5.651, 4.582)     7.275
MLP      5.726                         (5.905, 4.693)     7.543
RNN      5.738                         (5.936, 4.710)     7.576
PV       8.771                         (7.276, 7.537)     10.476
PVA      9.613                         (8.159, 7.937)     11.382
RRBF     9.998                         (8.177, 11.335)    13.977

Table 1: Comparison of Localization Errors (cm)

The RBF neural network has the best localization performance according to the average distance error per estimate, as given under Distance Error per Estimate in Table 1. It is followed by the MLP, the RNN, the PV and PVA models of the Kalman Filter, and then the RRBF neural network. However, a better metric to use is the Root Mean Square Error (RMSE) of the X and Y components of the estimates, which reveals the difference, if any, between the X and Y localizations [16].
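The metrics in Table 1 follow directly from their definitions: the mean Euclidean distance error per estimate, the per-axis RMSE, and the net RMSE obtained by combining the two axis values (e.g. for the RBF row, sqrt(5.651² + 4.582²) ≈ 7.275). A minimal sketch, with hypothetical array names:

```python
import numpy as np

def localization_metrics(est, truth):
    """Compute the Table 1 metrics for arrays of shape (N, 2):
    mean distance error per estimate, per-axis RMSE, and net RMSE.
    A sketch of the metric definitions, not the paper's code."""
    err = est - truth
    dist_err = np.mean(np.hypot(err[:, 0], err[:, 1]))     # mean distance error
    rmse_xy = np.sqrt(np.mean(err ** 2, axis=0))           # (RMSE_x, RMSE_y)
    net_rmse = np.sqrt(rmse_xy[0] ** 2 + rmse_xy[1] ** 2)  # combined net RMSE
    return dist_err, rmse_xy, net_rmse
```

Note that the net RMSE equals the root of the mean squared Euclidean distance, which is why it weights occasional large errors more heavily than the mean distance error does.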

Our results reveal that the neural networks, with the exception of the RRBF, have a higher percentage of errors of less than 10 cm, whereas the Kalman filters have a lower percentage of errors, but most of these errors are greater than 10 cm. Comparing the RBF with the MLP neural network, the MLP network has a lower percentage of errors, but errors of greater magnitude.

The RRBF network is also the only method that has errors greater than 40 cm, with a spike of about 90 cm. We can also see from Figure 15 this peculiar instance of an extremely large error in the RRBF's localization at about (267, 260) that significantly affected its RMSE. With significantly fewer nodes spanning the data space, certain locations are not properly accounted for, resulting in this singularity.


Figure 4: Tracking trajectory of RBF

Figure 5: Tracking trajectory of MLP

Figure 6: Tracking trajectory of RNN

Figure 7: Localization errors of RBF

Figure 8: Localization errors of MLP

Figure 9: Localization errors of RNN

Figure 10: Tracking trajectory of PV

Figure 11: Tracking trajectory of PVA

Figure 12: Tracking trajectory of RRBF

Figure 13: Localization errors of PV

Figure 14: Localization errors of PVA

Figure 15: Localization errors of RRBF


The RRBF network attempts to approximate the RBF network with fewer nodes, and it comes as no surprise that its performance is lower than the RBF's.

One reason why the Kalman filter makes fewer but larger mistakes may be that the process noise of the simulated motion of the object does not adhere to the assumed Gaussian characteristics [19]. Figures 7, 8, 9, 13, 14, and 15 reveal the location and magnitude of errors in the testing area shown in Figure 3.

It is also interesting to note that the Kalman filters display relatively larger errors than the neural networks at the edges of the test boundaries as well. This may be due to the fact that the Kalman filters iteratively close in on the localized position. At the boundaries, where the object's motion takes a sudden turn, the Kalman filter's estimates require several iterations before they can catch up with the object, resulting in larger errors.

It is also interesting to see how closely related the performance of the MLP and RNN networks is. The feedback interconnections in this instance do not seem to provide an advantage for the RNN network. However, just like the MLP's, the weights of the RNN are randomly initialized and trained thereafter, and it is quite possible that for some set of weights the RNN may indeed perform better than the MLP network.

Figure 16: Overlapping Error Ranges Associated with Localization Estimates

The intuition in using an RNN for localization with noisy measurements is that there is an error range associated with each estimate due to the noise. However, it may be possible to reduce this error range by utilizing the past estimate. In Figure 16, the error ranges for estimates P1 and P2 can be seen. By utilizing the estimate P1 and its associated error range, the error range for P2 can be narrowed to the intersection of the error ranges of P1 and P2.

The RNN's feedback mechanisms allow for this behavior. However, in order for this feature to be exploited, the step size of the moving object must be small enough that the error ranges overlap. If the step size is too large, as seems to be the case here, the RNN reduces to the MLP network. The step size of the moving object here is approximately 30 cm; however, given the variance of the distance measurements of 32.5 cm, each estimate has an error range with a radius of about 16 cm. The step size of this experiment just exceeds the error range of each estimate, so the features of the RNN network cannot be exploited.

In addition to the above analysis, the path of a moving object based on kinematics was simulated. Distances between beacons located at (300, 0), (300, 300), and (0, 300) and a moving mobile node were computed. These distances were fed to each of the different localization methods and their performance was analyzed. Figure 17 depicts the estimated path of each of these localization methods.

Figure 17: Simulated Motion Using Kinematics Equations (legend: Actual, Kalman PV, Kalman PVA, MLP, RBF, RRBF, RNN)

Method   Dist Error per Estimate   (X, Y) RMSE          Net RMSE
RBF      16.494                    (13.152, 10.830)     17.037
MLP      16.568                    (13.910, 11.594)     18.108
RNN      16.574                    (13.900, 11.622)     18.118
PV       23.586                    (16.893, 17.136)     24.063
PVA      24.845                    (17.774, 18.010)     25.303
RRBF     22.258                    (20.225, 14.330)     24.787

Table 2: RMSE and Sum of Distance Error for Simulated Path

Table 2 reveals that the RBF has the best performance, followed closely by the MLP and RNN. There is an interesting paradox in the performance of the RRBF. If the Distance Error per Estimate is used as the metric, then the RRBF performs better than the PV. However, if the Net RMSE is used, then the RRBF falls between the PV and the PVA. Examining the individual X and Y RMSE values, the RRBF possesses the worst X localization while possessing the best Y localization among the PV, PVA, and RRBF. One characteristic of the RMSE is that it is biased towards large errors, since squaring large errors gives them a greater contribution to the total. Due to the unpredictable nature of the RRBF, especially as seen in Figure 15 with the error spike of about 90 cm, it is best to rank it below the PVA method.

5.2 Computation Comparison

Thus far, only the accuracy of the localization methods has been examined, without any discussion of the computation requirements associated with them. As mentioned before, these localization methods will be implemented on an embedded system with limited capabilities. Based on the application, an appropriate localization method must be used that balances accuracy with the capabilities of the system.

The following analysis utilizes the number of floating point operations as a metric to compare the different methods. For simplicity of presentation, this analysis assumes that the embedded system can perform floating point operations. Further, this analysis only accounts for steady-state computation, meaning the initialization and setup computations are not considered. It is assumed that the neural networks are trained beforehand and the appropriate weights and biases are available.

Method   Number of Floating Point Operations
RBF      25857
MLP      153
RNN      315
PV       884
PVA      2220
RRBF     2057

Table 3: Comparison of Floating Point Operations of Localization Methods Per Estimate

As Table 3 reveals, the MLP is the least computationally intensive. It is followed by the RNN, the PV, the RRBF, the PVA, and finally the RBF. The RBF requires a very large number of floating point operations because of its large number of nodes. The results in Table 3 provide insight into the workings of these localization methods; however, it is difficult to generalize them because the size of the neural network may change with the application. The computation of the PV and PVA models of the Kalman filter, on the other hand, remains consistent [7].

A two-layer MLP, RBF, or RRBF can be implemented using two sets of nested loops. The MLP has complexity O(m²), where m is the greater number of nodes in the two layers. The RNN can also be implemented using two sets of nested loops; however, its complexity changes depending upon the number of nodes in the output layer. The computation at the first layer, due to the n² feedback connections, can become the bottleneck for the RNN.

Method   Order of Magnitude               Comment
RBF      O(m²)                            m is the greater number of nodes in the two layers.
MLP      O(m²)                            m is the greater number of nodes in the two layers.
RNN      O(p²) if p² > mn + n²,           p output nodes, n hidden nodes, m inputs.
         else O(mn + n²)
PV       O(k³)                            k is the number of elements in the state variable.
PVA      O(k³)                            k is the number of elements in the state variable.
RRBF     O(m²)                            m is the greater number of nodes in the two layers.

Table 4: Comparison of computational complexity between localization methods

It is difficult to arrive at a generalized statement comparing the computational complexity of these methods. It is possible to compare the PV and PVA models, and it is possible to compare the MLP, RNN, RBF, and RRBF networks. However, there are difficulties in trying to compare the Kalman filters with the neural networks, because there are no features shared between these two families of localization methods. The neural networks are a more amorphous modeling method where, in the end, the best network for the application is arrived at through trial and error.

Another reason why it is difficult to compare the Kalman filters and the neural networks in general is that the scalability of the neural networks is not known. If the area of localization increased from the 300 cm × 300 cm grid described above to a 400 cm × 400 cm grid, the noise characteristics of the distance measurements would also change. The ultrasound signals used to measure distances will attenuate differently over this larger distance. The noise characteristics of data for this new area of localization will be different and may require a much larger MLP than the one used above. The Kalman filter does not suffer from this problem: if the size of the localization area changes, the computational complexity of the Kalman filter will not change.

5.3 Memory Requirement Comparison

Comparison of the memory requirements of these localization methods is as problematic as attempting to compare computational complexity. It is not clear at this time how the neural networks will scale compared to the Kalman filter for different applications. For the specific experiment carried out in this paper, the memory usage of the neural networks as compared to the Kalman filters is described in Table 5. It should be noted that the memory usage described here is the steady-state memory required. It is also assumed that floats are four bytes long.

Method   Total Memory Usage (floats)          Number of Bytes
RBF      nm + np + 2n + 2p + m                42616
MLP      nm + np + 2n + 2p + m                280
RNN      n² + nm + np + 2n + 2p + m           604
PV       6k² + 4k + 4km + 5m + m²             736
PVA      6k² + 4k + 4km + 5m + m²             1344
RRBF     nm + np + 2n + 2p + m                3416

Table 5: Comparison of memory requirements between localization methods

    In the expressions for the two-layer neural networks, n isthe number of nodes in the first layer, p is the number ofnodes in the output layer, and m is the number of distancereadings or the number of inputs.

In the expressions for the two Kalman filters, k is the number of elements in the state variable and m is the number of distance readings. The expressions for the memory requirements of the Kalman filters include an additional k² + 2km + m² + k + m bytes of memory for temporary variables.
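The byte counts in Table 5 can be reproduced directly from these expressions by plugging in the sizes used in this paper: m = 3 distance inputs, p = 2 outputs, n = 9/1521/121 first-layer nodes for the MLP/RBF/RRBF, and 4-byte floats. The state sizes k = 4 (PV) and k = 6 (PVA) are inferred here from the 2-D tracking setup (position and velocity, plus acceleration for PVA) and match the table's byte counts.

```python
def nn_bytes(n, m, p):
    """Floats held by a two-layer feed-forward network (Table 5 rows
    for MLP/RBF/RRBF), at 4 bytes per float."""
    return 4 * (n * m + n * p + 2 * n + 2 * p + m)

def rnn_bytes(n, m, p):
    # The RNN adds n*n feedback weights in the first layer.
    return 4 * (n * n + n * m + n * p + 2 * n + 2 * p + m)

def kf_bytes(k, m):
    """Kalman filter steady-state storage (Table 5 rows for PV/PVA)."""
    return 4 * (6 * k * k + 4 * k + 4 * k * m + 5 * m + m * m)

# m = 3 distance inputs, p = 2 outputs; n per network; k state elements.
sizes = {"MLP": nn_bytes(9, 3, 2),     # 280
         "RBF": nn_bytes(1521, 3, 2),  # 42616
         "RRBF": nn_bytes(121, 3, 2),  # 3416
         "RNN": rnn_bytes(9, 3, 2),    # 604
         "PV": kf_bytes(4, 3),         # 736
         "PVA": kf_bytes(6, 3)}        # 1344
```

Every value agrees with Table 5, confirming that the tabulated byte counts are simply the float-count expressions multiplied by four.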

Clearly, the MLP has the lowest memory usage requirements. It is interesting to note that the underlying memory characteristics of the MLP, RBF, and RRBF are equivalent; this is also true for the Kalman filters. The memory required by the MLP network is less than half of that required by the PV Kalman filter. The large number of nodes spanning the data set results in the increased memory usage of the RBF. The RNN network also suffers due to its n² feedback interconnections.

6. CONCLUSION

The experimental results indicate that among the six localization algorithms, the RBF neural network has the best performance in terms of accuracy. As was mentioned in Section 3, the weights that determine the MLP and RNN neural networks are found by trial and error, unlike the weights of the RBF network. The MLP and RNN networks given in this paper were obtained after an extended search; however, it is possible that there are weights for the MLP and RNN networks that would result in performance exceeding the RBF's.

However, as far as the computational and memory requirements are concerned, the MLP is the best. In fact, the MLP's accuracy is only marginally lower than the RBF's. The MLP neural network offers the best trade-off between accuracy and resource requirements.

The RNN neural network also has the potential of exceeding the MLP's performance if the step size between localization requests is small.

It is also apparent that neural networks have weaker self-adaptivity than the Kalman filters, for the following reasons. First, neural networks perform well only in the area in which they have been trained. If the tracked object passes beyond the boundaries of the area where the neural network has been trained, the neural network will not be able to localize.

Second, when the beacons from which the distance measurements were used to train the network are moved, the neural network needs to be re-trained, and there is a possibility that the architecture of the neural network may need to change. The Kalman filters, on the other hand, do not suffer from this problem and can be used freely over any area once the appropriate noise parameters have been measured.

Compared with Kalman filter methods, the neural networks do have some advantages. The Kalman filters iteratively localize, correcting their estimates over time based on noise parameters that must follow a Gaussian distribution. The neural networks, on the other hand, localize in a single pass of the network. The Kalman filter uses the laws of kinematics to predict the location of the tracked object; if the tracked object's motion is random and spontaneous, the neural network's ability to localize in a single pass results in more accurate estimates. The Kalman filter requires several iterations before it begins to reach the accuracy of the MLP.

    In conclusion, where the noise parameters are not expected to change, localization using the MLP may be the best option: its high accuracy and minimal computational and memory requirements are highly desirable in embedded systems. If a flexible and easily modifiable method is required, then the Kalman filters may be the better option. The choice between the PV and PVA models of the Kalman filter, however, would depend on the application and the characteristics of the motion of the object.

    7. ACKNOWLEDGMENTS
    This work was supported in part by NSF Grants (GK-12 #0538457, CCF #0621493, CNS #0723093, and DRL #0737583).

    8. REFERENCES
    [1] Nicholas Roy and Sebastian Thrun. Coastal navigation with mobile robots. In Advances in Neural Information Processing Systems 12, pages 1043-1049, 1999.

    [2] N. Priyantha. Providing precise indoor location information to mobile devices. Master's thesis, Massachusetts Institute of Technology, January 2001.

    [3] N. Patwari. Location Estimation in Sensor Networks. Ph.D. dissertation, Electrical Engineering, University of Michigan, 2005.

    [4] A. Ward, A. Jones, and A. Hopper. A new location technique for the active office. IEEE Personal Communications, number 5, pages 42-47, October 1997.

    [5] A. Harter, A. Hopper, P. Steggles, A. Ward, and P. Webster. The anatomy of a context-aware application. In Proceedings of the 5th Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom '99), pages 59-68, 1999.

    [6] A. Smith, H. Balakrishnan, M. Goraczko, and N. Priyantha. Tracking moving devices with the Cricket location system. In MobiSys '04: Proceedings of the 2nd International Conference on Mobile Systems, Applications, and Services, pages 190-202, New York, NY, USA, 2004. ACM Press.

    [7] Yifeng Zhu and Ali Shareef. Comparisons of three Kalman filter tracking models in wireless sensor network. In Proceedings of the IEEE International Workshop on Networking, Architecture, and Storages (NAS '06), 2006.

    [8] R. Want, A. Hopper, V. Falcao, and J. Gibbons. The active badge location system. Technical Report 92.1, Olivetti Research Ltd. (ORL), 24a Trumpington Street, Cambridge CB2 1QA, 1992.

    [9] D. Moore, J. Leonard, D. Rus, and S. Teller. Robust distributed network localization with noisy range measurements. In SenSys '04: Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems, pages 50-61, New York, NY, USA, 2004. ACM Press.

    [10] Greg Welch, Gary Bishop, Leandra Vicci, Stephen Brumback, Kurtis Keller, and D'nardo Colucci. High-performance wide-area optical tracking: The HiBall tracking system. Presence: Teleoperators and Virtual Environments, 10(1):1-21, 2001.

    [11] C. Alippi, A. Mottarella, and G. Vanini. A RF map-based localization algorithm for indoor environments. In IEEE International Symposium on Circuits and Systems (ISCAS 2005), pages 652-655, 2005.

    [12] J. Hightower, G. Borriello, and R. Want. SpotON: An indoor 3D location sensing technology based on RF signal strength. Technical Report CSE 2000-02-02, University of Washington, February 18, 2000.

    [13] K. Lorincz and M. Welsh. A robust decentralized approach to RF-based location tracking. Technical Report TR-04-04, Harvard University, 2004.

    [14] L. M. Ni, Y. Liu, Y. C. Lau, and A. P. Patil. LANDMARC: Indoor location sensing using active RFID. In Proceedings of the First IEEE International Conference on Pervasive Computing and Communications (PerCom 2003), pages 407-415, 2003.

    [15] S. K. Chenna, Y. Kr. Jain, H. Kapoor, R. S. Bapi, N. Yadaiah, A. Negi, V. Seshagiri Rao, and B. L. Deekshatulu. State estimation and tracking problems: A comparison between Kalman filter and recurrent neural networks. In International Conference on Neural Information Processing, pages 275-281, 2004.

    [16] A. Shareef, Yifeng Zhu, Mohamad Musavi, and Bingxin Shen. Comparison of MLP neural network and Kalman filter for localization in wireless sensor networks. In Proceedings of the 19th IASTED International Conference on Parallel and Distributed Computing and Systems, pages 323-330. ACTA Press, 2007.

    [17] S. Haykin. Neural Networks. Prentice Hall, 1998.

    [18] Nissanka B. Priyantha, Anit Chakraborty, and Hari Balakrishnan. The Cricket location-support system. In MobiCom '00: Proceedings of the 6th Annual International Conference on Mobile Computing and Networking, pages 32-43, New York, NY, USA, 2000. ACM Press.

    [19] G. Welch and G. Bishop. An introduction to the Kalman filter. Technical Report TR 95-041, Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3175, 2006.