

Research Article

A Robust Data Interpolation Based on a Back Propagation Artificial Neural Network Operator for Incomplete Acquisition in Wireless Sensor Networks

Mingshan Xie,1,2 Mengxing Huang,1 Yong Bai,1 Zhuhua Hu,1 and Yanfang Deng1

1State Key Laboratory of Marine Resource Utilization in South China Sea, College of Information Science & Technology, Hainan University, Haikou 570228, China
2College of Network, Haikou University of Economics, Haikou 571127, China

Correspondence should be addressed to Yong Bai; bai@hainu.edu.cn

Received 13 July 2018; Revised 7 October 2018; Accepted 16 October 2018; Published 20 December 2018

Guest Editor: Aniello Falco

Copyright © 2018 Mingshan Xie et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The data space collected by a wireless sensor network (WSN) is the basis of data mining and data visualization. In the process of monitoring physical quantities with large time and space correlations, an incomplete acquisition strategy with data interpolation can be adopted to reduce the deployment cost. To improve the performance of data interpolation in such a scenario, we propose a robust data interpolation based on a back propagation artificial neural network operator. In this paper, a neural network learning operator is proposed based on the strong fault tolerance of artificial neural networks. The learning operator is trained by using the historical data of the data acquisition nodes of the WSN and is transferred to estimate the value of physical quantities at the locations where sensors are not deployed. The experimental results show that our proposed method yields smaller interpolation error than the traditional inverse-distance-weighted interpolation (IDWI) method.

1. Introduction

The purpose of a wireless sensor network (WSN) is to obtain the data field or data space of the physical world as accurately and completely as possible through acquisition technology. Obtaining the spatial-temporal distribution information of the monitored object accurately is an important part of forecasting, simulation, and prediction. However, in some scenarios, WSN can take an incomplete acquisition strategy, due to the development cost of the sensing devices, deployment environment factors, energy limitations, equipment aging, and other factors, or because it is not necessary to collect data in every corner of the monitoring area. The incomplete acquisition strategy is divided into three cases: (a) spatial incomplete acquisition strategy: the actual collected area is smaller than the area of interest, or the actual set of acquisition locations is a subset of all acquisition locations in the monitoring area; (b) temporal incomplete acquisition strategy: the actual collection time period is less than the time period in which all devices work (the sleeping schedule is a temporal incomplete acquisition strategy); (c) incomplete acquisition of attributes: the actual physical quantities collected are fewer than the physical quantities of interest.

Because the constraints of interpolation are relatively few, it is more appropriate to use an interpolation algorithm to complete or refine the entire data space in the case of spatial incomplete acquisition. The interpolation completion algorithm takes advantage of the strong correlation between the data in the data space. At present, data interpolation is the main method to complement the data space of the entire region. In [1], Ding and Song used linear interpolation theory to evaluate the working status of each node and the whole network coverage. In [2], Alvear et al. applied interpolation techniques for creating detailed pollution maps.

Hindawi, Journal of Sensors, Volume 2018, Article ID 7853695, 16 pages. https://doi.org/10.1155/2018/7853695

However, WSN is often affected by many unfavorable factors. For example, it is usually arranged in a harsh environment: the node failure rate is relatively high; it is very difficult to physically replace a failed sensor; the wireless communication network is susceptible to interference, attenuation, multipath, blind zones, and other unfavorable factors; data is prone to errors; security is not guaranteed; and so on. Therefore, WSN data interpolation technology must be highly fault tolerant to ensure high credibility and robustness of the completed data space [3].

Data interpolation is used to predict and estimate the information at an unknown location by means of known information. Transfer learning opens up a new path for data interpolation. The goal of transfer learning is to extract useful knowledge from one or more source domain tasks and apply it to new target tasks. It is essentially the transfer and reuse of knowledge. Transfer learning has gradually received the attention of scholars. In [4], the authors are motivated by the idea of transfer learning. They proposed a novel domain correction and adaptive extreme learning machine (DC-AELM) framework with transferring capability to realize knowledge transfer for interference suppression. To improve radar emitter signal recognition, Yang et al. use transfer learning to obtain a feature robust against signal-to-noise ratio (SNR) variation in [5]. In [6], the authors discuss the relationship between transfer learning and other related machine learning techniques, such as domain adaptation, multitask learning, and sample selection bias, as well as covariate shift.

Artificial neural networks have strong robustness. One of the requirements to ensure the accuracy of transfer learning is the robustness of the learning algorithm. Many scholars have combined artificial neural networks with transfer learning. In [7], Pan et al. propose a cascade convolutional neural network (CCNN) framework based on transfer learning for aircraft detection. It achieves highly accurate and efficient detection with relatively few samples. In [8], Park et al. showed that transfer learning of ImageNet-pretrained deep convolutional neural networks (DCNN) can be extremely useful when there are only a small number of Doppler radar-based spectrogram data.

The research aim of data interpolation of WSN is to complete the data space of the entire monitoring area by using the limited data of the acquisition nodes to estimate the data at the locations where sensors are not deployed. However, data errors of WSN due to various reasons have a great impact on the accuracy of data interpolation. Due to the strong robustness of artificial neural networks, an artificial neural network learning operator is generated by using historical measurement data of the limited data acquisition nodes in this paper. At the same time, this paper applies the learning property of the artificial neural network to the inverse-distance-weighted interpolation method, which is conducive to improving the precision and accuracy of data interpolation. On the basis of analyzing the demand of network models, this paper proposes a robust data interpolation based on a back propagation artificial neural network operator for incomplete acquisition in wireless sensor networks. The detailed steps of the algorithm are discussed, and the algorithm is analyzed based on the MATLAB tool and the measured data provided by the Intel Berkeley Research laboratory [9]. The experimental results demonstrate the fault-tolerant performance and lower error of our proposed method.

The main contributions of this paper are as follows:

(1) Aiming at data loss and disturbance errors, we propose a fault-tolerant completion algorithm based on the robustness of the artificial neural network.

(2) We combine the artificial neural network with the inverse-distance-weighted interpolation algorithm to obtain a novel back propagation artificial neural network operator.

(3) We use the inverse tangent function to reconcile the relationships among multiple prediction values.

The rest of the paper is organized as follows. In Section 2, we summarize the related work. Section 3 introduces the interpolation model under the condition of data error. Section 4 presents how to construct the learning operator set of data acquisition nodes. Section 5 elaborates how to generate the interpolation by the method based on the back propagation artificial neural network operator. We show the experimental results of our proposed methods compared with the inverse-distance-weighted interpolation in Section 6. The conclusions are given in Section 7.

2. Related Works

2.1. The Inverse-Distance-Weighted Interpolation Method. The inverse-distance-weighted interpolation (IDWI) method is also called "inverse-distance-weighted averaging" or the "Shepard method". The interpolation scheme is explicitly expressed as follows.

Given n locations whose plane coordinates are \((x_i, y_i)\) and whose values are \(d_i\), where \(i = 1, 2, \ldots, n\), the interpolation function is

\[
f(x, y) =
\begin{cases}
\dfrac{\sum_{i=1}^{n} d_i/\mathrm{dist}_i^{\,p}}{\sum_{i=1}^{n} 1/\mathrm{dist}_i^{\,p}}, & (x, y) \neq (x_i, y_i),\ i = 1, 2, \ldots, n, \\[2mm]
d_i, & (x, y) = (x_i, y_i),\ i = 1, 2, \ldots, n,
\end{cases}
\tag{1}
\]

where \(\mathrm{dist}_i = \sqrt{(x - x_i)^2 + (y - y_i)^2}\) is the horizontal distance between \((x, y)\) and \((x_i, y_i)\), \(i = 1, 2, \ldots, n\), and \(p\) is a constant greater than 0 called the weighted power exponent.

It can easily be seen that the interpolation \(f(x, y) = \sum_{i=1}^{n} (d_i/\mathrm{dist}_i^{\,p}) / \sum_{i=1}^{n} (1/\mathrm{dist}_i^{\,p})\) at the location \((x, y)\) is the weighted mean of \(d_1, d_2, \ldots, d_n\).
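Equation (1) is compact enough to sketch directly. The following Python/NumPy function is an illustrative implementation only (the paper's experiments use MATLAB); the function and parameter names are ours:

```python
import numpy as np

def idw_interpolate(x, y, xi, yi, di, p=2):
    """Inverse-distance-weighted (Shepard) interpolation of (1) at (x, y).

    xi, yi, di: coordinates and values of the n known locations.
    p: weighted power exponent (> 0).
    """
    dist = np.sqrt((x - xi) ** 2 + (y - yi) ** 2)
    # Second case of (1): (x, y) coincides with a known location.
    hit = np.isclose(dist, 0.0)
    if hit.any():
        return di[np.argmax(hit)]
    # First case of (1): weighted mean of d_1, ..., d_n.
    w = 1.0 / dist ** p
    return np.sum(di * w) / np.sum(w)
```

At a known location the function returns that location's value exactly; elsewhere it returns a weighted mean of all known values, with closer locations weighted more heavily.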

The application of inverse-distance-weighted interpolation is extensive. Because of its simple computation and few constraints, its interpolation precision is relatively high. In [10], Kang and Wang use the Shepard family of interpolants to interpolate the density value of any given computational point within a certain circular influence domain of the point. In [11], Hammoudeh et al. use a Shepard interpolation method to build a continuous map for a new WSN service called the map generation service.

From (1), we can see that the IDWI algorithm is sensitive to the accuracy of the data. However, WSN is usually deployed in a harsh environment, and the probability of errors in the collected data is high. Error tolerance of the interpolation algorithm is therefore required. This paper improves the robustness of the interpolation algorithm on the basis of the inverse-distance interpolation algorithm.

2.2. Artificial Neural Network. An artificial neural network (ANN) is an information processing paradigm inspired by biological nervous systems, such as how the brain processes information. ANNs, like people, have the ability to learn by example. An ANN is configured for a specific application, such as pattern recognition, function approximation, or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist among neurons. This is true for ANNs as well. They are made up of simple processing units, linked by weighted connections, to form structures that are able to learn relationships between sets of variables. This heuristic method can be useful for nonlinear processes that have unknown functional forms. The feedforward neural network, or multilayer perceptron (MLP), is the network most commonly used in engineering. MLP networks are normally arranged in three layers of neurons: the input layer and output layer represent the input and output variables, respectively, of the model; laid between them are one or more hidden layers that hold the network's ability to learn nonlinear relationships [12].

The natural redundancy of neural networks and the form of the activation function (usually a sigmoid) of neuron responses make them somewhat fault tolerant, particularly with respect to perturbation patterns. Most of the published work on this topic demonstrated this robustness by injecting limited (Gaussian) noise on a software model [13]. Velazco et al. proved the robustness of ANN with respect to bit errors in [13]. Venkitaraman et al. proved that the neural network architecture exhibits robustness to input perturbation: the output feature of the neural network exhibits Lipschitz continuity in terms of the input perturbation [14]. Artificial neural networks have strong robustness. The robustness of the algorithm is a requirement to ensure the accuracy of artificial neural network operator transferring. We can see from the literature [7, 8] that operator transferring can combine well with an artificial neural network. The learning operator in this paper also adopts an artificial neural network algorithm.

3. Problem Formulations

3.1. Data Acquisition Nodes. To assess the entire environmental condition, WSN collects data by deploying a certain number of sensors at locations in the monitoring area; thus, the physical quantity of the monitoring area is discretized, and the monitored physical quantity is digitized.

Definition 1. Interested locations in the whole monitoring area: they are the central locations of the segments of the monitoring area that we are interested in.

Sensors can be deployed at each interested location to capture data. The data at all interested locations reflects the information status of the entire monitoring area.

We assume that S is the set of interested locations, which is a \(1 \times n\) matrix:

\[
S = \{ s_1(x_1, y_1), s_2(x_2, y_2), s_3(x_3, y_3), \ldots, s_n(x_n, y_n) \},
\tag{2}
\]

where \(s_i(x_i, y_i)\) is the ith interested location in the monitoring area and \((x_i, y_i)\) is its coordinates. Due to the difficulty and limitations of deployment, not all interested locations can have sensors deployed. This paper studies the spatial incomplete acquisition strategy: we select a subset of S as the data acquisition nodes. \(|S|\) represents the cardinality of the set, that is, the number of elements of S; here \(|S| = n\).

Definition 2. Data acquisition nodes: they are the interested locations where the sensors are actually deployed.

The sensors are deployed at these interested locations so that these locations become data acquisition nodes. All data acquisition nodes in the monitoring area act as the sensing layer of the WSN, and the information is transmitted to the server through the devices of the transport layer.

In our research, when sensors are not deployed at an interested location, we use 0 as a placeholder for that location; when an interested location becomes a data acquisition node, we use 1 as its placeholder. Suppose that M is the set of data acquisition nodes:

\[
M = \{ s_i \mid s_i = 1,\ s_i \in S,\ i = 1, \ldots, n \},
\tag{3}
\]

where \(s_i = 1\) represents the ith interested location where sensors are deployed. \(|M|\) indicates the cardinality of the set M; it reflects the total number of elements in the set of data acquisition nodes. This paper investigates the case where multiple types of sensors are deployed at a data acquisition node. The data that the data acquisition node \(s_i\) can correctly collect at time \(t_l\) is defined as \(d_{s_i}(t_l) = (d_{s_i,1}(t_l), d_{s_i,2}(t_l), \ldots, d_{s_i,k}(t_l))\), which is the k-dimensional data perceived by the ith data acquisition location \(s_i\) in S. Physical quantities such as temperature and humidity can be measured at the same time. The data is defined as

\[
D_M(t_l) = \{ d_{s_i}(t_l) \mid s_i \in M,\ d_{s_i}(t_l) \in R^k \}.
\tag{4}
\]

If \(|M| < |S|\), then the WSN implements incomplete coverage; if \(|M| = |S|\), then the WSN implements complete coverage. The data of an interested location where sensors are not deployed can be assessed from the data of the data acquisition nodes. An interested location where sensors are not deployed is referred to as a non-data acquisition location.


3.2. Data Acquisition Error. Because the wireless communication network is susceptible to interference, attenuation, multipath, blind zones, and other unfavorable factors, the data error rate is high. Nodes and links in wireless sensor networks are inherently error-prone and unpredictable. The error data, which greatly deviates from the ideal true value, is divided into two types: data loss and data disturbance.

(1) Data Loss. Causes such as nodes that cannot work, links that cannot be established, or data that cannot be transmitted prevent the data of the corresponding data acquisition nodes from reaching the sink node.

(2) Data Disturbance. Due to a failure, the local function of the WSN is in an incorrect (or ambiguous) system state. This state may cause a deviation between the data measured by the sensors of the corresponding data acquisition node and the true value, or the signal is disturbed during transmission and the data received at the sink node deviates from the true value. The data that corresponds to the data acquisition node is not the desired result. The collected data oscillate in the data area near the true value. In this paper, we assume that the data disturbance obeys the Gaussian distribution.

The main idea of our method is based on the fundamental assumption that the sensing data of WSN are regarded as a vector indexed by the interested locations and recovered from a subset of sensing data. As demonstrated in Figure 1, the data acquisition consists of two stages: the data-sensing stage and the data-recovering stage. At the data-sensing stage, instead of deploying sensors and sensing data at all interested locations, a subset of interested locations, which are the shaded ones in the second subfigure, is selected to sense the physical quantity and deliver the sensing data to the sink node at each data collection round. Some locations are drawn with a fork in the second subfigure because their data is lost or disturbed; the fork represents the data errors. When the hardware or software of a network node fails or the communication links of the network are broken, the set of sensors which encounter data errors is only a subset of M. At the data-recovering stage, the sink node receives these incomplete sensing data over some data collection rounds, shown in the third subfigure, in which the shaded entries represent the valid sensing data and the white entries are unknown. We can then use them to recover the complete data by our method.

Here, we adopt a mask operator A to represent the process of collecting data that encounter errors:

\[
A(D_M(t_l)) = B(t_l),
\tag{5}
\]

where B is the data set that is actually received for interpolation. For the sake of clarity, the operator A can be specified as a vector product as follows:

\[
A(D_M(t_l)) = Q \odot D_M(t_l),
\tag{6}
\]

where \(\odot\) denotes the element-wise product of two vectors, that is, \(b_{s_i}(t_l) = d_{s_i}(t_l) \times q_{s_i}\); Q is a vector of \(1 \times |M|\); and \(b_{s_i}\) indicates the data that is actually received from the ith data acquisition node for interpolation.

\[
q_{s_i} =
\begin{cases}
1, & \text{if the data is not in error}, \\
0, & \text{if the data is lost}, \\
\mathrm{No}(d_{s_i}(t_l), \sigma^2)/d_{s_i}, & \text{if the data is disturbed},
\end{cases}
\tag{7}
\]

where \(\mathrm{No}(d_{s_i}(t_l), \sigma^2)\) represents a Gaussian distribution with a mean of \(d_{s_i}(t_l)\) and a variance of \(\sigma^2\).

In this paper, the error rate of the received data is defined as \(\|Q\|_{q_{s_i} \neq 1} / |M|\), where \(\|Q\|_{q_{s_i} \neq 1}\) is the number of non-1 elements of Q.
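As an illustration of the mask operator in (5)-(7), the following Python sketch applies random loss and Gaussian disturbance to a vector of sensed values and reports the resulting error rate. The probabilities, seed, and function names are our assumptions for illustration, not part of the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_operator(d, p_loss=0.1, p_disturb=0.1, sigma=0.5):
    """Apply the error mask Q of (6)-(7) to a vector d of true sensed values.

    Each entry is kept as-is (q = 1), zeroed (data loss, q = 0), or replaced
    by a Gaussian sample centred on the true value (data disturbance).
    """
    q = np.ones_like(d)
    u = rng.random(d.shape)
    lost = u < p_loss
    disturbed = (u >= p_loss) & (u < p_loss + p_disturb)
    q[lost] = 0.0
    q[disturbed] = rng.normal(d[disturbed], sigma) / d[disturbed]
    b = d * q                        # B(t_l) = Q ⊙ D_M(t_l)
    error_rate = np.mean(q != 1.0)   # ||Q||_{q ≠ 1} / |M|
    return b, q, error_rate
```

Lost entries arrive as zeros, disturbed entries oscillate near the true value, and the error rate counts every non-1 entry of Q, matching the definition above.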

3.3. Completion of Data Space with Interpolation. Due to conditional restrictions, there is no way to deploy sensors in \(S - M\). The data generated in \(S - M\) can be estimated by interpolation based on the data in M. The data space of the entire monitoring area is set to

\[
D_S(t_l) = D_M(t_l) \cup \hat{D}_{S-M}(t_l),
\tag{8}
\]

where \(D_M(t_l)\) represents the data set collected from the data acquisition nodes in M at epoch \(t_l\); \(D_{S-M}(t_l)\) is the data set that would be collected from the non-data acquisition locations \(S - M\) at epoch \(t_l\) in the ideal case, if sensors were deployed at the non-data acquisition locations; and \(\hat{D}_{S-M}(t_l)\) is the data interpolation set based on the data in M.

The problem definition is how to process the data \(D_M(t_l)\) so that \(\hat{D}_{S-M}(t_l)\) is as close as possible to \(D_{S-M}(t_l)\), that is, the problem of minimizing the error between \(\hat{D}_{S-M}(t_l)\) and \(D_{S-M}(t_l)\). Its mathematical expression is as follows:

\[
\arg\min \left\| \hat{D}_{S-M}(t_l) - D_{S-M}(t_l) \right\| \quad \text{s.t.}\ Q \neq \mathbf{1},
\tag{9}
\]

where \(\|\cdot\|\) is the Euclidean norm used to evaluate the error between \(\hat{D}_{S-M}(t_l)\) and \(D_{S-M}(t_l)\), and \(Q \neq \mathbf{1}\) indicates that Q is not an all-1 vector.

\[
\hat{D}_{S-M}(t_l) = \{ \hat{d}_{s_j}(t_l) \mid \hat{d}_{s_j}(t_l) = E(d'_{s_j}(t_l), d''_{s_j}(t_l), \ldots, d^n_{s_j}(t_l)),\ s_j \in S - M \},
\tag{10}
\]

where \(d'_{s_j}(t_l), d''_{s_j}(t_l), \ldots, d^n_{s_j}(t_l)\) represent the values close to \(d_{s_j}(t_l)\). Suppose \(s_i\) is the closest data acquisition node to \(s_j\). The value received from the data acquisition node nearest the non-data acquisition location is also close to \(d_{s_j}(t_l)\):

\[
\because \mathrm{dist}_{s_i \to s_j} = \min\left( \mathrm{dist}_{M \to s_j} \right), \quad s_i \in M,
\tag{11}
\]

where \(\mathrm{dist}_{M \to s_j}\) indicates the distance from each data acquisition node in M to the non-data acquisition location \(s_j\). The closer the information collected by the node, the greater the correlation [15].

\[
\therefore d'_{s_j}(t_l) = d_{s_i}(t_l),
\tag{12}
\]

where \(d_{s_i}(t_l)\) represents the value of the physical quantity actually collected by the data acquisition node \(s_i\).

Suppose that \(d''_{s_j}(t_l)\) represents the assessed value at the jth non-data acquisition location, which is obtained by the back propagation artificial neural network operator. In this paper, we obtain a \(d''_{s_j}(t_l)\) that is close to \(d_{s_j}(t_l)\). The data set of non-data acquisition locations for interpolation is

\[
\begin{aligned}
d''_{s_j}(t_l) &= \Upsilon_{s_j}(b_{s_i}(t_l)), \quad s_j \in S - M,\ s_i \in M, \\
D''_{S-M}(t_l) &= \Upsilon_{S-M}(B(t_l)),
\end{aligned}
\tag{13}
\]

where \(\Upsilon_{S-M}(\cdot)\) represents the learning operator used to assess \(d_{s_j}(t_l)\) and \(B(t_l)\) represents the received data set at epoch \(t_l\).

We can use the back propagation artificial neural network operator of the data acquisition node closest to the non-data acquisition location to predict the data of the non-data acquisition location.

Figure 1: Incomplete acquisition process based on our proposed method (interested locations → data acquisition nodes → incomplete data space with data errors → completed data space via the proposed method).

4. Learning Operator Set of Data Acquisition Nodes

The mathematical model of the learning operator for the reconstruction is as follows:

\[
\arg\min \left\| \Upsilon(\mathrm{Inp}) - \mathrm{Tar} \right\| \quad \text{s.t.}\ \mathrm{Inp} \neq \mathrm{Tar},
\tag{14}
\]

where Tar represents the learning goal, \(\Upsilon(\cdot)\) is the learning operator, and Inp, the input of \(\Upsilon(\cdot)\), can be individuals, variables, and even algorithms, functions, sets, and so on. Inp and Tar are different; if they did not differ, there would be no need to learn for reconstruction. The purpose of learning is to make \(\Upsilon(\mathrm{Inp})\) gradually approach Tar.

Because the data of WSN is error-prone, we need fault-tolerant and robust learning operators. The learning operator in this paper uses a back propagation (BP) artificial neural network. We can use the data of data acquisition nodes to predict the data of non-data acquisition locations, thus assessing the data space of the entire monitoring area. Because of the strong robustness and adaptability of the artificial neural network, we can use it to interpolate the data of non-data acquisition locations in the case of data error.

The BP artificial neural network is a multilayer (at least 3 layers) feedforward network based on error backpropagation. Because of its characteristics, such as nonlinear mapping, multiple input and multiple output, and self-organizing self-learning, the BP artificial neural network is well suited to dealing with complex nonlinear multiple-input multiple-output problems. The BP artificial neural network model is composed of an input layer, a hidden layer (which can be multilayer, but at least one layer), and an output layer. Each layer is composed of a number of juxtaposed neurons. The neurons in the same layer are not connected to each other, and the neurons in adjacent layers are connected by means of full interconnection [16].

After constructing the topology of the artificial neural network, it is necessary to learn and train the network in order to make it intelligent. For the BP artificial neural network, the learning process is accomplished by forward propagation and reverse correction propagation. As each data acquisition node is related to other data acquisition nodes, and each data acquisition node has historical data, the network can be trained with the historical data of the data acquisition nodes to generate the artificial neural network learning operator of each data acquisition node.

4.1. Transform Function of the Input Unit. The data of the non-data acquisition location \(s_j\) is assessed by using the inverse-distance learning operator of the data acquisition node \(s_i\) closest to \(s_j\). Because there is spatial correlation between interested locations in spatial acquisition, in this paper we adopt the IDWI algorithm combined with the BP artificial neural network algorithm.

At a certain epoch \(t_l\), the data set of all data acquisition nodes except \(s_i\) is as follows:

\[
B_{M-s_i}(t_l) =
\begin{bmatrix}
b_{s_o,1}(t_l) & \cdots & b_{s_o,1}(t_l) \\
\vdots & b_{s_q,i}(t_l) & \vdots \\
b_{s_m,k}(t_l) & \cdots & b_{s_m,k}(t_l)
\end{bmatrix}, \quad b_s \in M - s_i.
\tag{15}
\]

In this paper, the inverse-distance weight is used to construct the transform function. The transform function \(h_{s_i}(B_{M-s_i}(t_l))\) of the input unit \(B_{M-s_i}(t_l)\) of the artificial neural network with data acquisition node \(s_i\) is as follows:

\[
h_{s_i}(B_{M-s_i}(t_l)) = \frac{\left(1/\mathrm{dist}^{\gamma}_{s_q \to s_i}\right) \times B_{M-s_i}(t_l)}{\sum_{s_q \in M}^{|M|-1} 1/\mathrm{dist}^{\gamma}_{s_q \to s_i}}, \quad s_q \in M - s_i,\ s_i \in M,
\tag{16}
\]

where \(\mathrm{dist}_{s_q \to s_i}\) represents the distance between the data acquisition nodes \(s_i\) and \(s_q\), \(\gamma\) represents the weighted power exponent of the distance reciprocal, and \(\sum_{s_q \in M}^{|M|-1} 1/\mathrm{dist}^{\gamma}_{s_q \to s_i}\) represents the sum of the weighted reciprocals of the distances from the data acquisition node \(s_i\) to the rest of the data acquisition nodes.
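The transform in (16) scales each neighboring node's reading by its normalized inverse-distance weight. A minimal Python sketch, with argument names of our choosing, might look as follows:

```python
import numpy as np

def idw_transform(B, dists, gamma=2.0):
    """Inverse-distance transform h_{s_i} of (16) for the ANN input.

    B:     (|M|-1, k) array of readings from all acquisition nodes except s_i.
    dists: (|M|-1,) distances from each of those nodes to s_i.
    gamma: weighted power exponent of the distance reciprocal.
    """
    w = 1.0 / dists ** gamma          # 1 / dist^γ for each node
    return (B * w[:, None]) / w.sum() # normalize by the sum of weights
```

Because the weights are normalized, the transformed rows of a constant input sum back to that constant, so closer nodes dominate the input presented to the network without changing its overall scale.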

The artificial neural network requires a training set. We take the historical data of all data acquisition nodes over the period \(T = \{t_0, t_1, \ldots, t_l, \ldots, t_m\}\) as the training set. In practical engineering, it is feasible to obtain the data from data acquisition nodes over a period for learning.

In this paper, data collected from all data acquisition nodes are used as the training set \(B_{M-s_i}(T)\). \(B_{M-s_i}(T)\) is a third-order tensor, as shown in Figure 2, with \(B_{M-s_i}(T) \in R^{k \times |M-s_i| \times t_m}\). The elements of the training set are indexed by physical quantity, acquisition node, and epoch.

The matrix \(B_{M-s_i}(T)\) is obtained by

\[
B_{M-s_i}(T) =
\begin{bmatrix}
b_{s_o,1}(t_0) & \cdots & b_{s_o,1}(t_0) \\
\vdots & b_{s_q,i}(t_0) & \vdots \\
b_{s_m,k}(t_0) & \cdots & b_{s_m,k}(t_0) \\
b_{s_o,1}(t_1) & \cdots & b_{s_o,1}(t_1) \\
\vdots & b_{s_q,i}(t_1) & \vdots \\
b_{s_m,k}(t_1) & \cdots & b_{s_m,k}(t_1) \\
\vdots & \vdots & \vdots \\
b_{s_o,1}(t_l) & \cdots & b_{s_o,1}(t_l) \\
\vdots & b_{s_q,i}(t_l) & \vdots \\
b_{s_m,k}(t_l) & \cdots & b_{s_m,k}(t_l) \\
\vdots & \vdots & \vdots \\
b_{s_o,1}(t_m) & \cdots & b_{s_o,1}(t_m) \\
\vdots & b_{s_q,i}(t_m) & \vdots \\
b_{s_m,k}(t_m) & \cdots & b_{s_m,k}(t_m)
\end{bmatrix}, \quad b_s \in M - s_i.
\tag{17}
\]


In actual engineering, \(B_{M-s_i}(T)\) has a lot of noise, and it is necessary to clean it. If it is not cleaned, it will affect the estimation accuracy of the artificial neural network and the precision of learning. Data cleaning is the process of reexamining and verifying data to remove duplicate information, correct existing errors, and provide data consistency. We can use various filter algorithms to obtain the cleaned \(B_{M-s_i}(T)\).
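The paper leaves the choice of filter open. As one illustrative possibility (not prescribed by the authors), a sliding-median filter can remove isolated spikes from a single sensor's time series before training:

```python
import numpy as np

def median_clean(x, window=5):
    """Toy sliding-median filter for despiking one sensor's time series.

    The paper only says 'various filter algorithms' may be used; this
    median filter is one simple choice, with edge padding so the
    output keeps the input's length.
    """
    half = window // 2
    padded = np.pad(x, half, mode="edge")
    return np.array([np.median(padded[i:i + window]) for i in range(len(x))])
```

A median over a small window preserves slow trends in the physical quantity while discarding single-sample outliers, which matches the disturbance model of Section 3.2.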

4.2. Artificial Neural Network Learning Operator. The sensing data of a real network include several physical quantities, such as temperature, humidity, and illumination. Usually, a variety of sensors are deployed at an acquisition node, and multiple physical quantities are collected at the same time. Physical quantities at the same acquisition node have the same temporal and spatial variation trend, and in the process of recovery using the artificial neural network, the quantities mutually promote each other's estimation. The distance between data acquisition nodes and non-data acquisition locations is very easy to obtain. Based on the inverse-distance interpolation algorithm and the BP artificial neural network algorithm, we propose a multidimensional inverse-distance BP artificial neural network learning operator.

The BP artificial neural network, which is the most widely used one-way propagating multilayer feedforward network, is characterized by continuous adjustment of the network connection weights, so that any nonlinear function can be approximated with arbitrary accuracy. The BP artificial neural network is self-learning and adaptive and has robustness and generalization. The training of the BP artificial neural network is supervised learning. The training process of the BP artificial neural network is shown in Figure 3.

The input information is first transmitted to the nodes of the hidden layer through weighted threshold summation; then the output information of the hidden nodes is transmitted to the output nodes through weighted threshold summation after the operation of the transfer function of each element. Finally, the output result is given. The purpose of network learning is to obtain the right weights and thresholds. The training process consists of two parts: forward and backward propagation.

The BP artificial neural network is a multilayer feedforward network trained by the error backpropagation algorithm. Its structure has not only input nodes and output nodes but also one or more layers of hidden nodes. As demonstrated in Figure 3, according to the prediction error, the weights ν and ω are continuously adjusted, and finally the BP artificial neural network learning operator for reconstructing the data of data acquisition node s_i can be determined, as shown in the part ϒ_{s_i}(·) of Figure 3.

The input layer of the BP artificial neural network learning operator is the data collected at a certain time at all acquisition nodes except the s_i acquisition node, which is the vector h_{s_i}(B_{M−s_i}(t_l)). The input of the jth neuron in the hidden layer is calculated by

$$\Sigma_j = \sum_{q=1}^{M-1}\sum_{p=1}^{k}\nu_{qjp}\,h_{s_i}\!\left(B_{M-s_i}(t_l)\right)-\theta_j, \tag{18}$$

where ν_{qjp} represents the weight between the (q, p)th input neuron and the jth neuron in the hidden layer. The (q, p)th input neuron is h_{s_i}(b_{s_q}^{p}(t_l)), that is, the pth-dimension data of the s_q acquisition node at time t_l; θ_j is the threshold of the jth neuron in the hidden layer. The output of the neurons in the hidden layer is calculated by

$$ho_j = \frac{1}{1 + e^{-\Sigma_j}}. \tag{19}$$

Similarly, the output of each neuron in the output layer is denoted o_p.
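The forward pass of (18) and (19) can be sketched as follows. This is a minimal illustration, assuming the inputs h_{s_i}(B_{M−s_i}(t_l)) are flattened into a single vector and the output layer uses the same sigmoid step as the hidden layer; the function and variable names are ours, not from the paper:

```python
import numpy as np

def forward(inputs, v, theta, w, theta_out):
    """Forward pass of the learning operator.
    inputs: flattened h_si(B_{M-si}(t_l)) of length (M-1)*k
    v: input-to-hidden weights nu_qjp, shape ((M-1)*k, J)
    theta: hidden-layer thresholds theta_j, shape (J,)
    w: hidden-to-output weights, shape (J, k); theta_out: output thresholds."""
    sigma = v.T @ inputs - theta               # Eq. (18): weighted sums minus thresholds
    ho = 1.0 / (1.0 + np.exp(-sigma))          # Eq. (19): sigmoid outputs of hidden layer
    o = 1.0 / (1.0 + np.exp(-(w.T @ ho - theta_out)))  # same step for the output layer
    return ho, o
```

Because of the sigmoid, every hidden and output activation falls strictly between 0 and 1, which matches the error formula (21) below.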

Figure 2: The model of B_{M−s_i}(T) (a cube of measurements indexed by epoch t_0 … t_l … t_m, acquisition node s_0 … s_q … s_m, and physical quantity 1 … k).

Journal of Sensors

The sum of squared errors of all neurons in the output layer is the objective function value optimized by the BP artificial neural network algorithm, which is calculated by

$$E = \frac{1}{(M-1)\times k}\sum_{q=1}^{M-1}\sum_{p=1}^{k}\left(b_{s_i,p} - o_p\right)^2, \tag{20}$$

where b_{s_i,p} represents the expected output value of neuron o_p in the output layer, corresponding to the value of the pth physical quantity sensed by the data acquisition node s_i. According to the gradient descent method, the error of each neuron in the output layer is obtained by

$$E_p = o_p \times \left(1 - o_p\right) \times \left(b_{s_i,p} - o_p\right). \tag{21}$$

The weights and thresholds of the output layer can be adjusted by

$$\Delta\omega_{jp} = \eta \times E_p \times ho_j, \tag{22}$$

$$\Delta\theta_p = \eta \times E_p, \tag{23}$$

where η ∈ (0, 1) is the learning rate, which reflects the speed of training and learning. Similarly, the weights and thresholds of the hidden layer can be obtained. If the desired results cannot be obtained from the output layer, the network needs to constantly adjust the weights and thresholds, gradually reducing the error. The BP neural network has strong self-learning ability and can quickly obtain the optimal solution.
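A single gradient step of (21)–(23) for the output layer can be sketched as follows. This is a toy illustration with made-up numbers; the array names are ours:

```python
import numpy as np

eta = 0.5                              # learning rate in (0, 1)
o = np.array([0.60, 0.40])             # output-layer activations o_p
b = np.array([0.55, 0.50])             # tutor values b_si,p (expected outputs)
ho = np.array([0.30, 0.70, 0.20])      # hidden-layer outputs ho_j

Ep = o * (1.0 - o) * (b - o)           # Eq. (21): output-layer error terms
delta_w = eta * np.outer(ho, Ep)       # Eq. (22): increment for each (j, p) weight
delta_theta = eta * Ep                 # Eq. (23): increment for each output threshold
```

Note that the sign of each E_p follows the sign of the residual b_{s_i,p} − o_p, so over-estimated outputs are pushed down and under-estimated outputs pushed up.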

Figure 3: The training process of the BP artificial neural network. The input layer receives h_{s_i}(b_{s_q}^{p}(t_l)) for every acquisition node s_q ≠ s_i and physical quantity p = 1 … k over the epochs t_0 … t_m; the input-to-hidden weights ν_{qjp} and hidden-to-output weights ω_{jp} produce the outputs o_1 … o_k, and the errors against the tutor data b_{s_i}^{p}(T) are back-propagated to adjust the part ϒ_{s_i}(·).


The mathematical model of Figure 3 is as follows:

$$f_{o_p}\!\left(\sum_{j=1}^{J}\omega_j\, f_{\Sigma_j}\!\left(\sum_{q=1}^{M-1}\sum_{p=1}^{k}\nu_{qjp}\, h_{s_i}\!\left(B_{M-s_i}(T)\right)\right)\right) \longrightarrow b_{s_i,p}(T), \tag{24}$$

where B_{s_q}(T) is the training data, that is, the historical data of the acquisition nodes except s_i in the period T; h_{s_i}(·) is the transform function of the input layer; ν_{qjp} is the weight of the input layer; f_{Σ_j}(·) is the transfer function from the input layer to the hidden layer; ω is the weight from the hidden layer to the output layer; f_{o_p}(·) is the transfer function from the hidden layer to the output layer; and b_{s_i}(T) is the "supervisor supervision", that is, the historical data of acquisition node s_i in the period T.

From (14) and (24), the following formula can be obtained:

$$\Upsilon_{s_i}(\cdot) = f_{o_p}\!\left(\sum_{j=1}^{J}\omega_j\, f_{\Sigma_j}\!\left(\sum_{q=1}^{M-1}\sum_{p=1}^{k}\nu_{qjp}\,(\cdot)\right)\right)\,\Big|\;b_{s_i}(T), \tag{25}$$

where ϒ_{s_i}(·) represents the inverse-distance BP artificial neural network learning operator of acquisition node s_i, and |b_{s_i}(T) denotes that this neural network operator ϒ_{s_i}(·) is a learning operator trained with the data of s_i as tutor information.

The data collected by the data acquisition nodes in the monitoring area are related to each other. The data of a data acquisition node can be learned by inputting the data of the other data acquisition nodes into its learning operator. The learning operator set of the data acquisition nodes in the whole monitoring area is as follows:

$$\Upsilon(\cdot) = \left\{\Upsilon_{s_i}(\cdot) \mid s_i \in M\right\}. \tag{26}$$

5. Interpolation at Non-Data Acquisition Locations

We transfer the BP artificial neural network operators from data acquisition nodes to non-data acquisition locations: the learning operator of the data acquisition node closest to a non-data acquisition location is used to estimate the data at that location. There are four ways to implement learning operator transferring: sample transferring, feature transferring, model transferring, and relationship transferring. In this paper, model transferring (also called parameter transferring) is used because it combines best with the BP artificial neural network; that is, we use the pretrained BP artificial neural network to interpolate. The BP artificial neural network learning operator has strong robustness and adaptability. In the interested area, if the collected physical quantity is highly correlated and there is no situation in which the physical quantity changes drastically between the various interested locations, the learning operator can be transferred.

Since the construction of an artificial neural network requires historical data for training, and there are no historical data at a non-data acquisition location where no sensor is deployed, it is very difficult to construct an artificial neural network for a non-data acquisition location. However, we can predict the physical quantities of non-data acquisition locations by using the learning operator of the nearest data acquisition node.

5.1. Transform Function Corresponding to Nonacquisition Location s_j. First, we use the IDWI method to construct the transform function. The transform function applied to the input unit s_q of the artificial neural network of the acquisition node s_i is defined as h_{s_j}(B_{M−s_i}(t_l)):

$$h_{s_j}\!\left(B_{M-s_i}(t_l)\right) = \frac{B_{M-s_i}(t_l)\,/\,\mathrm{dist}^{\gamma}_{s_q\to s_j}}{\sum_{s_q\in M-s_i}^{M-1} 1/\mathrm{dist}^{\gamma}_{s_q\to s_j}}, \tag{27}$$

where dist_{s_q→s_j} represents the distance between s_q and s_j, γ represents the weighted power exponent of the distance reciprocal, and Σ_{s_q∈M−s_i}^{M−1} 1/dist^γ_{s_q→s_j} represents the sum of the weighted reciprocals of the distances from the non-data acquisition location s_j to the remaining acquisition nodes. The physical data collected from the remaining M − 1 data acquisition nodes except s_i are as follows:

$$B_{M-s_i} = \left\{b_{s_q}(t_l) \mid s_q \in M - s_i\right\}. \tag{28}$$
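Assuming the values b_{s_q}(t_l) and the distances dist_{s_q→s_j} are aligned over s_q ∈ M − s_i, the transform function (27) can be sketched as follows (names are ours):

```python
import numpy as np

def transform(values, dists, gamma=2.0):
    """Eq. (27): scale each b_sq(t_l) by its inverse-distance weight
    1/dist^gamma to s_j, normalised by the sum of the reciprocal weights."""
    w = 1.0 / np.power(dists, gamma)   # reciprocal distance weights
    return values * w / w.sum()        # normalised, per-node contribution
```

With γ = 2, a node twice as far away contributes a quarter of the weight, which is the usual Shepard-style falloff.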

5.2. Learning Operator Transferring. Since the data of the data acquisition node closest to the non-data acquisition location are the most important for it, being the most correlated with its data, we estimate the data of the non-data acquisition location with the data from the nearest data acquisition node and its learning operator.

Since the data we are targeting are spatially correlated, the smaller the distance between two interested locations is, the smaller the difference between their collected data is; conversely, the greater the distance, the greater the difference. We can therefore use the data of the data acquisition node close to the non-data acquisition location to assess the data at the non-data acquisition location.

Because the BP artificial neural network learning operator has strong robustness and adaptability, we can transfer the inverse-distance BP artificial neural network learning operator of s_i to s_j in order to estimate the data at s_j. The transform function h(·) does not change with time; it is a function of the distance between the sampling locations, h_{s_i}(B_{M−s_i}(t_l)) → h_{s_j}(B_{M−s_i}(t_l)). The number of input parameters is constant while the input parameters vary with time, so the change of the transform function does not affect the trend of the input parameters. The change of the transform function will not affect the accuracy of prediction, and the operator transferring can be implemented:

$$d''_{s_j}(t_l) = \Upsilon_{s_i}\!\left(h_{s_j}\!\left(B_{M-s_i}(t_l)\right)\right) = f_{o_p}\!\left(\sum_{j=1}^{J}\omega_j\, f_{\Sigma_j}\!\left(\sum_{q=1}^{M-1}\sum_{p=1}^{k}\nu_{qjp}\, h_{s_j}\!\left(B_{M-s_i}(t_l)\right)\right)\right). \tag{29}$$

The assessment network based on the BP artificial neural network operator is shown in Figure 4.

In this paper, our proposed method is an improvement built on the BP artificial neural network. According to spatial correlation, the physical quantities of a monitored and interested location close to a data acquisition node s_q can be approximated by the learning operator of s_q.

Suppose d̂_{s_j} represents the estimated value of the physical quantity at the interested location s_j. Due to conditional restrictions, no sensor is deployed at the interested location s_j. We choose the physical quantity of the data acquisition node s_i nearest to the non-data acquisition location s_j for estimation. We can use Algorithm 1 to determine s_i.
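Algorithm 1 is a linear scan for the minimum Euclidean distance over M; a sketch (dictionary-based, with our own naming):

```python
import math

def nearest_node(M, sj):
    """Return the id of the acquisition node in M closest to location sj.
    M maps node ids to (x, y) coordinates; sj is an (x, y) pair."""
    xj, yj = sj
    # min over nodes by Euclidean distance, as in Algorithm 1's mindist scan
    return min(M, key=lambda q: math.hypot(xj - M[q][0], yj - M[q][1]))
```

For example, with M = {"s7": (22.5, 8), "s18": (5.5, 10)} and s_j at (4, 12), the scan returns "s18".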

5.3. Assessment at the Non-Data Acquisition Location. s_i is the data acquisition node nearest to s_j, so we can use the learning operator of s_i to estimate d̂_{s_j}. This method is an improvement on the inverse-distance interpolation method. Because s_i is closest to s_j, the correlation of their data is the largest, and the data actually collected by s_i have the greatest impact on the predicted value at s_j:

$$\hat d_{s_j} = E\!\left(d'_{s_j}(t_l),\, d''_{s_j}(t_l)\right) = \alpha \times d'_{s_j}(t_l) + \beta \times d''_{s_j}(t_l), \quad s_i \in M,\; s_j \in S - M, \tag{30}$$

where α and β denote the weights of d'_{s_j}(t_l) and d''_{s_j}(t_l), respectively, used to estimate the physical quantity at s_j, with α + β = 1. d'_{s_j}(t_l) is the value of the actual measurement, so its credibility is higher. The closer s_i is to s_j, the greater the correlation between the physical quantity at s_j and the data collected by s_i, based on spatial correlation.

We assume that dist_{s_i→s_j} represents the distance between s_i and s_j. The influence weight of the data collected by s_i on the assessment of s_j decreases with increasing dist_{s_i→s_j}; conversely, the greater the distance dist_{s_i→s_j}, the smaller the impact. The range of dist_{s_i→s_j} is (0, +∞), and the inverse tangent function is an increasing function on (0, +∞). The function curve is shown in Figure 5.

We use the boundedness and monotonically increasing characteristics of the arctangent function curve and are inspired by the idea of IDWI. The formula for calculating β is as follows:

$$\beta = \frac{\arctan\left(\mathrm{dist}_{s_i\to s_j}\right)}{\pi/2}, \quad \mathrm{dist}_{s_i\to s_j} \in (0, +\infty). \tag{31}$$

This limits the value of β to the interval [0, 1). When dist_{s_i→s_j} is close to 0, β is close to 0 and 1 − β is close to 1, so the data measured by the data acquisition node dominate the value at the non-data acquisition location. When dist_{s_i→s_j} = 0, a sensor has in fact been deployed at s_j, so we do not need any value calculated by the prediction algorithm and directly use the actual measured data:

$$\hat d_{s_j} = \frac{\arctan\left(\mathrm{dist}_{s_i\to s_j}\right)}{\pi/2} \times d''_{s_j}(t_l) + \left(1 - \frac{\arctan\left(\mathrm{dist}_{s_i\to s_j}\right)}{\pi/2}\right) \times d'_{s_j}(t_l), \quad s_i \in M,\; s_j \in S - M. \tag{32}$$

If the interested location is still far from its nearest data acquisition node, this algorithm will cause a large error. However, since we use sensor placement based on iterative dividing into four subregions, the sensors of the data acquisition nodes are distributed omnidirectionally throughout the space rather than concentrated in one domain, so the error is not too large.
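The weighting of (30)–(32) can be sketched as follows (function and argument names are ours):

```python
import math

def blend(d_measured, d_operator, dist):
    """Eqs. (30)-(32): beta = arctan(dist)/(pi/2) weights the transferred
    operator estimate d''; 1 - beta weights the nearest actual measurement d'.
    As dist -> 0 the measured value dominates; as dist -> infinity the
    operator estimate dominates."""
    beta = math.atan(dist) / (math.pi / 2.0)   # Eq. (31), bounded in [0, 1)
    return beta * d_operator + (1.0 - beta) * d_measured
```

At dist = 0 the result is exactly the measured value, matching the paper's remark that a sensor deployed at s_j makes prediction unnecessary.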

6. Experiments and Evaluation

6.1. Parameter Setup. The data set we used is the measured data provided by the Intel Berkeley Research lab [9]. The data were collected from 54 data acquisition nodes in the Intel Berkeley Research lab between February 28th and April 5th, 2004. In this data set, the epoch is a monotonically increasing sequence number from each mote; two readings with the same epoch number were produced by different motes at the same time. There are some missing epochs in this data set. Mote IDs range from 1 to 54. In this experiment, we selected the data of the following 9 motes as the set of interested locations, because these 9 points share the same epochs:

$$S = \{s_7(22.5, 8),\; s_{18}(5.5, 10),\; s_{19}(3.5, 13),\; s_{21}(4.5, 18),\; s_{22}(1.5, 23),\; s_{31}(15.5, 28),\; s_{46}(34.5, 16),\; s_{47}(39.5, 14),\; s_{48}(35.5, 10)\}. \tag{33}$$

When the environment of the monitoring area is not very complicated, the collected physical quantity is highly spatially correlated, and it is feasible to use the placement based on iterative dividing into four subregions to deploy sensors. In this experiment, we use the method of iterative dividing into four subregions to select the interested locations from S as data acquisition nodes: the s_i(x_i, y_i) (s_i(x_i, y_i) ∈ S) closest to each deployment location generated by the method is selected into M.

Figure 4: The assessment network based on the BP artificial neural network operator. The training-time inputs h_{s_i}(b_{s_q}^{p}(T)) are replaced by h_{s_j}(b_{s_q}^{p}(t_l)) and fed to the transferred operator ϒ_{s_i}(·) to produce the estimates d̂''_{s_j}^{p}(t_l).

In this experiment, we set the dimension of the collected data to 2, that is, k = 2: we take the two physical quantities temperature and humidity. At least two data acquisition nodes are needed, i.e., |M| ≥ 2, because one data acquisition node must be the node closest to the interested location for the interpolation (s_i ≠ ∅), and the data of the other acquisition nodes are used as the training set of the artificial neural network, so M − s_i ≠ ∅ and |M − s_i| ≥ 1.

The epochs we chose are T = {33, 35, 42, 44, 51, 56, 61, 62, 70, 73, 86, 88, 89, 91, 95, 99, 102, 107, 112, 113, 123, 124, 127}. The data whose epoch is an element of T are used as the training set. Unfortunately, the actual sampled sensing data are always corrupted and some values go missing. We need genuinely clean sensing data to train the BP artificial neural networks and improve the interpolation precision.

In order not to lose the trend of the data during filtering, we simply use mean processing with a filter window along the temporal direction of the measurements to obtain the clean sensing data. If |d_{s_i}(t−1) − d_{s_i}(t)| ≫ δ × |d_{M−s_i}(t−1) − d_{M−s_i}(t)|, then d_{s_i}(t) can be replaced by the value calculated by

$$d_{s_i}(t) = \frac{d_{s_i}(t-1) + d_{s_i}(t+1)}{2}, \tag{34}$$

where δ is the adjusting coefficient. In this experiment, we take δ = 10.
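The cleaning rule around (34) can be sketched as follows; `neighbor_diff` stands in for the jumps |d_{M−s_i}(t−1) − d_{M−s_i}(t)| of the other nodes and, like the function name, is our own:

```python
import numpy as np

def clean(series, neighbor_diff, delta=10.0):
    """If a sample's temporal jump exceeds delta times the corresponding jump
    at the other nodes, replace it with the mean of its temporal neighbours,
    Eq. (34). Endpoints are left untouched."""
    out = series.copy()
    for t in range(1, len(series) - 1):
        if abs(series[t - 1] - series[t]) > delta * abs(neighbor_diff[t]):
            out[t] = (series[t - 1] + series[t + 1]) / 2.0  # Eq. (34)
    return out
```

A spike such as [18.0, 100.0, 18.2] with quiet neighbours is thus replaced by the midpoint 18.1, preserving the local trend.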

The filtered result of the temperature measurements is shown in Figure 6(a), and the actual sensing data are shown in Figure 6(b). The filtered result of the humidity measurements is shown in Figure 7(a), and the actual sensing data are shown in Figure 7(b).

When comparing the test results, we require that the epoch of the data supplied by each mote be the same, because in the actual application the data space of the monitoring area we build is the data space at a certain moment. We set t_l = 138; that is, in this experiment the 138th epoch was selected for interpolation. We compare the actual collected values at the 138th epoch with the interpolation calculated by the algorithms.

To evaluate the accuracy of the reconstructed measurement, we choose the mean relative error (MRE). It reflects the precision of the estimated data relative to the measured data. The formula for the calculation of the MRE is as follows [17]:

$$\mathrm{MRE} = \frac{1}{n}\sum_{i=1}^{n} \frac{\left|\hat d(x_i) - d(x_i)\right|}{d(x_i)}, \tag{35}$$

where d(x_i) is the actual acquisition value of the ith sensor, d̂(x_i) is the corresponding assessed value, and n is the total number of data acquisition nodes.
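Equation (35) in code form (names ours):

```python
import numpy as np

def mean_relative_error(actual, assessed):
    """Eq. (35): average over the n nodes of |assessed - actual| / actual."""
    actual = np.asarray(actual, dtype=float)
    assessed = np.asarray(assessed, dtype=float)
    return float(np.mean(np.abs(assessed - actual) / actual))
```

For instance, assessed values of 11 and 18 against actual values of 10 and 20 give relative errors of 0.1 each, hence an MRE of 0.1.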

6.2. Results of the Experiment. This experiment is in a small range: the collected physical quantities are highly correlated, and there is no situation in which the physical quantity changes drastically between the various interested locations, so the learning operator can be transferred.

In the case of data loss, we compare our method with the inverse-distance-weighted interpolation algorithm in terms of interpolation accuracy; here q_{s_i} = 0. Since the data acquisition nodes suffering data loss are random, we conducted 20 tests for a more accurate comparison. Obviously, the number of acquisition nodes with data loss must be less than or equal to the total number of acquisition nodes. The results are shown in Figures 8 and 9.

As can be seen from Figures 8 and 9, as the proportion of lost data among all collected data decreases, the interpolation error is gradually reduced. The curve of the proposed algorithm is relatively flat, indicating that it is less affected by data loss.

In the case of data loss, especially when the number of data acquisition nodes is relatively small, the interpolation error of our algorithm is much smaller than that of the inverse-distance-weighted interpolation.

In the case of data disturbance, we compare our method with the inverse-distance-weighted interpolation algorithm in terms of interpolation accuracy; here q_{s_i} = No(d̄_{s_i}(t_l), σ²), a Gauss-distributed disturbance. For the interpolation of temperature, we use the mean temperature of all acquisition nodes as the mean value of the Gauss distribution and set the parameters as d̄_{s_i}(t_l) = 18.47895 and σ² = 18; for the interpolation of humidity, we use the mean humidity of all acquisition nodes as the mean value of the Gauss distribution and set the parameters as d̄_{s_i}(t_l) = 39.44691 and σ² = 39. Again, we conducted 20 tests. The results are shown in Figures 10 and 11.

Figure 5: Inverse tangent function curve (y = arctan x, bounded by ±π/2).

Algorithm 1: Culling s_i from M.
    Input: M, s_j(x_j, y_j)
    Output: s_i
    Initialize mindist
    For each s_q in M:
        dist_{s_j→s_q} = ((x_j − x_q)² + (y_j − y_q)²)^{1/2}
        if mindist > dist_{s_j→s_q} then
            i = q; mindist = dist_{s_j→s_q}
        end if
    End for

It can be seen from Figures 10 and 11 that the variation curve of the proposed algorithm is relatively flat, while the curve of the inverse-distance-weighted interpolation algorithm fluctuates greatly. The interpolation error of the IDWI algorithm is affected not only by the data disturbance but also by the deployment locations of the data acquisition nodes. When 7 acquisition nodes are deployed, the error of the IDWI algorithm is the smallest: because sensor placement based on iterative dividing into four subregions is a near-uniform deployment, the interpolation error is small with 7 acquisition nodes.

When there are not many acquisition nodes, the density of acquisition nodes where data disturbance occurs is high, so the interpolation error is more prominent. As the number of deployed sensors increases, the interpolation error is also reduced.

As we can see from Figures 8–11, our algorithm is insensitive to errors and strongly robust when the data are wrong, while erroneous data greatly degrade the interpolation accuracy of the inverse-distance interpolation method. In particular, when the error rate is high, the relative error of our algorithm is much lower than that of the inverse-distance interpolation algorithm. When the error rate is 50%, our algorithm has a relative error of 0.1 while that of the inverse-distance interpolation algorithm is 3.5, as shown in Figure 9(a).

7. Conclusions

In this paper, we proposed a robust data interpolation based on a back propagation artificial neural network operator for incomplete acquisition in a wireless sensor network. Under the incomplete collection strategy of a WSN, the effect of complete acquisition can be approximately obtained by interpolation. In the case of limited data acquisition nodes, the data of the acquisition nodes are used for training to obtain the learning operator. Then, the learning operator of the acquisition node

Figure 7: The humidity sensing data from the Intel Berkeley Research lab: (a) the filtered sensing data; (b) the actual sensing data.

Figure 6: The temperature sensing data from the Intel Berkeley Research lab: (a) the filtered sensing data; (b) the actual sensing data.


Figure 9: The humidity sensing data from the Intel Berkeley Research lab: (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at data acquisition nodes increased from 2 to 8.

Figure 8: The temperature sensing data from the Intel Berkeley Research lab: (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at data acquisition nodes increased from 2 to 8.


closest to the non-data acquisition location is transferred to the non-data acquisition location for interpolation. Considering that the data collected by the WSN are prone to error, we analyzed the reasons for the error. In order to improve the fault tolerance and robustness of the interpolation algorithm, we proposed a BP artificial neural network learning

Figure 11: The humidity sensing data from the Intel Berkeley Research lab: (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at data acquisition nodes increased from 2 to 8.

Figure 10: The temperature sensing data from the Intel Berkeley Research lab: (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at data acquisition nodes increased from 2 to 8.


operator. From the experiments, we demonstrated that our algorithm is strongly robust and has a lower error in the case of data errors collected by the WSN. This method has strong potential for practical data visualization, data analysis, WSN monitoring, etc.

Data Availability

The measured data that support the findings of this study are supplied by the Intel Berkeley Research lab. They are available from http://db.csail.mit.edu/labdata/labdata.html.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This work was financially supported by the National Natural Science Foundation of China (Grant Nos. 61561017 and 61462022), the Innovative Research Projects for Graduate Students of Hainan Higher Education Institutions (Grant No. Hyb2017-06), the Hainan Province Major Science & Technology Project (Grant No. ZDKJ2016015), the project of the Natural Science Foundation of Hainan Province in China (Grant No. 617033), the open project of the State Key Laboratory of Marine Resource Utilization in South China Sea (Grant No. 2016013B), and the Key R&D Project of Hainan Province (No. ZDYF2018015).

References

[1] F. Ding and A. Song, "Development and coverage evaluation of ZigBee-based wireless network applications," Journal of Sensors, vol. 2016, Article ID 2943974, 9 pages, 2016.

[2] O. Alvear, W. Zamora, C. Calafate, J. C. Cano, and P. Manzoni, "An architecture offering mobile pollution sensing with high spatial resolution," Journal of Sensors, vol. 2016, Article ID 1458147, 13 pages, 2016.

[3] W. Xiao, The Key Technology of Adaptive Data Fault Tolerance in Wireless Sensor Networks, National Defense University of Science and Technology, Hunan, China, 2010.

[4] Z. Liang, F. Tian, C. Zhang, H. Sun, A. Song, and T. Liu, "Improving the robustness of prediction model by transfer learning for interference suppression of electronic nose," IEEE Sensors Journal, vol. 18, no. 3, pp. 1111–1121, 2017.

[5] Z. Yang, W. Qiu, H. Sun, and A. Nallanathan, "Robust radar emitter recognition based on the three-dimensional distribution feature and transfer learning," Sensors, vol. 16, no. 3, pp. 289–303, 2016.

[6] S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2010.

[7] B. Pan, J. Tai, Q. Zheng, and S. Zhao, "Cascade convolutional neural network based on transfer-learning for aircraft detection on high-resolution remote sensing images," Journal of Sensors, vol. 2017, Article ID 1796728, 14 pages, 2017.

[8] J. Park, R. Javier, T. Moon, and Y. Kim, "Micro-Doppler based classification of human aquatic activities via transfer learning of convolutional neural networks," Sensors, vol. 16, no. 12, pp. 1990–2000, 2016.

[9] http://db.csail.mit.edu/labdata/labdata.html.

[10] Z. Kang and Y. Wang, "Structural topology optimization based on non-local Shepard interpolation of density field," Computer Methods in Applied Mechanics and Engineering, vol. 200, no. 49-52, pp. 3515–3525, 2011.

[11] M. Hammoudeh, R. Newman, C. Dennett, and S. Mount, "Interpolation techniques for building a continuous map from discrete wireless sensor network data," Wireless Communications and Mobile Computing, vol. 13, no. 9, pp. 809–827, 2013.

[12] M. Saberi, A. Azadeh, A. Nourmohammadzadeh, and P. Pazhoheshfar, "Comparing performance and robustness of SVM and ANN for fault diagnosis in a centrifugal pump," in 19th International Congress on Modelling and Simulation, pp. 433–439, Perth, Australia, 2011.

[13] R. Velazco, P. Cheynet, J. D. Muller, R. Ecoffet, and S. Buchner, "Artificial neural network robustness for on-board satellite image processing: results of upset simulations and ground tests," IEEE Transactions on Nuclear Science, vol. 44, no. 6, pp. 2337–2344, 1997.

[14] A. Venkitaraman, A. M. Javid, and S. Chatterjee, "R3Net: random weights, rectifier linear units and robustness for artificial neural network," 2018, https://arxiv.org/abs/1803.04186.

[15] C. Zhang, X. Zhang, O. Li, et al., "WSN data collection algorithm based on compressed sensing under unreliable links," Journal of Communication, vol. 37, no. 9, pp. 131–141, 2016.

[16] H. L. Chen and W. Peng, "Improved BP artificial neural network in traffic accident prediction," Journal of East China Normal University (Natural Science), vol. 2017, no. 2, pp. 61–68, 2017.

[17] G. Y. Lu and D. W. Wong, "An adaptive inverse-distance weighting spatial interpolation technique," Computers & Geosciences, vol. 34, no. 9, pp. 1044–1055, 2008.



environment, the node failure rate is relatively high, it is very difficult to physically replace a failed sensor, and the wireless communication network is susceptible to interference, attenuation, multipath, blind zones, and other unfavorable factors. Data are prone to errors, security is not guaranteed, etc. Therefore, WSN data interpolation technology must be highly fault tolerant to ensure the high credibility and robustness of the completed data space [3].

Data interpolation is used to predict and estimate the information at an unknown location by means of known information. Transfer learning opens up a new path for data interpolation. The goal of transfer learning is to extract useful knowledge from one or more source domain tasks and apply it to new target tasks; it is essentially the transfer and reuse of knowledge. Transfer learning has gradually received the attention of scholars. In [4], the authors are motivated by the idea of transfer learning: they proposed a novel domain correction and adaptive extreme learning machine (DC-AELM) framework with transferring capability to realize knowledge transfer for interference suppression. To improve radar emitter signal recognition, Yang et al. use transfer learning in [5] to obtain a feature robust against signal noise rate (SNR) variation. In [6], the authors discuss the relationship between transfer learning and other related machine learning techniques, such as domain adaptation, multitask learning, and sample selection bias, as well as covariate shift.

Artificial neural networks have strong robustness, and one of the requirements for ensuring the accuracy of transfer learning is the robustness of the learning algorithm. Many scholars have therefore combined artificial neural networks with transfer learning. In [7], Pan et al. propose a cascade convolutional neural network (CCNN) framework based on transfer learning for aircraft detection; it achieves highly accurate and efficient detection with relatively few samples. In [8], Park et al. showed that transfer learning of ImageNet-pretrained deep convolutional neural networks (DCNN) can be extremely useful when only a small number of Doppler radar-based spectrogram data are available.

The research aim of data interpolation in a WSN is to complete the data space of the entire monitoring area by using the limited data of the acquisition nodes to estimate the data at the locations where no sensors are deployed. However, data errors of the WSN due to various causes have a great impact on the accuracy of data interpolation. Owing to the strong robustness of artificial neural networks, in this paper an artificial neural network learning operator is generated by using historical measurement data of the limited data acquisition nodes. At the same time, this paper applies the learning property of the artificial neural network to the inverse-distance-weighted interpolation method, which is conducive to improving the precision and accuracy of data interpolation. On the basis of analyzing the demands of the network model, this paper proposes a robust data interpolation based on a back propagation artificial neural network operator for incomplete acquisition in wireless sensor networks. The detailed steps of the algorithm are discussed, and the algorithm is analyzed based on the MATLAB tool and the measured data provided by the Intel Berkeley Research laboratory [9]. The experimental results are good evaluations of the fault tolerance and lower error of our proposed method.

The main contributions of this paper are as follows:

(1) Aiming at data loss and disturbance errors, we propose a fault-tolerant completion algorithm based on the robustness of the artificial neural network.

(2) We combine the artificial neural network with the inverse-distance-weighted interpolation algorithm to obtain a novel back propagation artificial neural network operator.

(3) We use the inverse tangent function to reconcile the relationships among multiple prediction values.

The rest of the paper is organized as follows. In Section 2, we summarize the related work. Section 3 introduces the interpolation model under the condition of data error. Section 4 presents how to construct the learning operator set of data acquisition nodes. Section 5 elaborates how to generate the interpolation by the method based on the back propagation artificial neural network operator. We show the experimental results of our proposed method compared with the inverse-distance-weighted interpolation in Section 6. The conclusions are given in Section 7.

2. Related Works

2.1. The Inverse-Distance-Weighted Interpolation Method. The inverse-distance-weighted interpolation (IDWI) method is also called "inverse-distance-weighted averaging" or the "Shepard method." The interpolation scheme is explicitly expressed as follows.

Given $n$ locations whose plane coordinates are $(x_i, y_i)$ and whose values are $d_i$, where $i = 1, 2, \ldots, n$, the interpolation function is

$$
f(x, y) =
\begin{cases}
\dfrac{\sum_{i=1}^{n} d_i/\mathrm{dist}_i^{\,p}}{\sum_{i=1}^{n} 1/\mathrm{dist}_i^{\,p}}, & \text{if } (x, y) \neq (x_i, y_i), \ i = 1, 2, \ldots, n, \\[2ex]
d_i, & \text{if } (x, y) = (x_i, y_i), \ i = 1, 2, \ldots, n,
\end{cases}
\tag{1}
$$

where $\mathrm{dist}_i = \sqrt{(x - x_i)^2 + (y - y_i)^2}$ is the horizontal distance between $(x, y)$ and $(x_i, y_i)$, $i = 1, 2, \ldots, n$, and $p$ is a constant greater than 0 called the weighted power exponent.

It can easily be seen that the interpolation $f(x, y) = \sum_{i=1}^{n} (d_i/\mathrm{dist}_i^{\,p}) \big/ \sum_{i=1}^{n} (1/\mathrm{dist}_i^{\,p})$ at a location $(x, y)$ is a weighted mean of $d_1, d_2, \ldots, d_n$.
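As a concrete illustration, the scheme of (1) can be sketched in a few lines of Python (the function name `idwi` and the argument layout are our own; the paper's experiments themselves use MATLAB):

```python
import math

def idwi(points, values, x, y, p=2.0):
    """Inverse-distance-weighted (Shepard) interpolation, following eq. (1).

    points: list of sample coordinates (x_i, y_i)
    values: list of sample values d_i
    p:      weighted power exponent (> 0)
    """
    num, den = 0.0, 0.0
    for (xi, yi), di in zip(points, values):
        dist = math.hypot(x - xi, y - yi)
        if dist == 0.0:
            # (x, y) coincides with a sample location: second branch of eq. (1)
            return di
        w = 1.0 / dist ** p
        num += di * w
        den += w
    return num / den
```

At a sample location the function returns the stored value exactly, matching the second branch of (1); elsewhere the result is always bounded by the minimum and maximum sample values, since it is a weighted mean of the $d_i$.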

The application of inverse-distance-weighted interpolation is extensive: its computation is simple, it imposes few constraints, and its interpolation precision is relatively high. In [10], Kang and Wang use the Shepard family of interpolants to interpolate the density value of any given computational point within a certain circular influence domain of the point. In [11], Hammoudeh et al. use a Shepard interpolation method to build a continuous map for a new WSN service called the map generation service.

Journal of Sensors

From (1), we can see that the IDWI algorithm is sensitive to the accuracy of the data. However, WSN is usually deployed in a harsh environment, and the probability that the collected data are erroneous is high, so error tolerance is required of the interpolation algorithm. This paper improves the robustness of the interpolation algorithm on the basis of the inverse-distance interpolation algorithm.

2.2. Artificial Neural Network. An artificial neural network (ANN) is an information processing paradigm inspired by biological nervous systems, such as how the brain processes information. ANNs, like people, have the ability to learn by example. An ANN is configured for a specific application, such as pattern recognition, function approximation, or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist among neurons. This is true for ANNs as well. They are made up of simple processing units which are linked by weighted connections to form structures that are able to learn relationships between sets of variables. This heuristic method can be useful for nonlinear processes that have unknown functional forms. The feedforward neural network, or multilayer perceptron (MLP), is the network most commonly used in engineering. MLP networks are normally arranged in three layers of neurons: the input layer and output layer represent the input and output variables of the model, respectively; laid between them is one or more hidden layers that hold the network's ability to learn nonlinear relationships [12].

The natural redundancy of neural networks and the form of the activation function (usually a sigmoid) of neuron responses make them somewhat fault tolerant, particularly with respect to perturbation patterns. Most of the published work on this topic demonstrated this robustness by injecting limited (Gaussian) noise on a software model [13]. Velazco et al. proved the robustness of ANN with respect to bit errors in [13]. Venkitaraman et al. proved that the neural network architecture exhibits robustness to input perturbation: the output feature of the neural network exhibits Lipschitz continuity in terms of the input perturbation [14]. Artificial neural networks thus have strong robustness. The robustness of the algorithm is a requirement to ensure the accuracy of artificial neural network operator transferring. We can see from the literature [7, 8] that operator transferring combines well with an artificial neural network. The learning operator in this paper also adopts an artificial neural network algorithm.

3. Problem Formulations

3.1. Data Acquisition Nodes. To assess the entire environmental condition, WSN collects data by deploying a certain number of sensors at locations in the monitoring area; thus, the physical quantity of the monitoring area is discretized, and the monitored physical quantity is digitized.

Definition 1. Interested locations in the whole monitoring area: they are the central locations of the segments of the monitoring area that we are interested in.

Sensors can be deployed at each interested location to capture data. The data at all interested locations reflects the information status of the entire monitoring area.

We assume that $S$ is the set of the interested locations, which is a $1 \times n$ matrix:

$$
S = \{s_1(x_1, y_1), s_2(x_2, y_2), s_3(x_3, y_3), \ldots, s_n(x_n, y_n)\}, \tag{2}
$$

where $s_i(x_i, y_i)$ is the $i$th interested location in the monitoring area and $(x_i, y_i)$ is its coordinates. Due to the difficulty and limitation of deployment, not all the interested locations can have sensors deployed. This paper studies the spatial incomplete collection strategy; we select a subset of $S$ as the data acquisition nodes. $|S|$ represents the cardinality of the set, that is, the number of elements of $S$: $|S| = n$.

Definition 2. Data acquisition nodes: they are the interested locations where the sensors are actually deployed.

The sensors are deployed at these interested locations so that these locations become data acquisition nodes. All data acquisition nodes in the monitoring area act as the sensing layer of the WSN, and the information is transmitted to the server through the devices of the transport layer.

In our research, when the sensors are not deployed at an interested location, we use 0 as its placeholder; when the interested location becomes a data acquisition node, we use 1 as its placeholder. Suppose that $M$ is the set of data acquisition nodes:

$$
M = \{s_i \mid s_i = 1, \ s_i \in S, \ i = 1, \ldots, n\}, \tag{3}
$$

where $s_i = 1$ represents the $i$th interested location where sensors are deployed. $|M|$ indicates the cardinality of the set $M$; it reflects the total number of elements in the set of data acquisition nodes. This paper investigates the case where multiple types of sensors are deployed at a data acquisition node. The data that the data acquisition node $s_i$ can correctly collect at time $t_l$ is defined as $d_{s_i}(t_l) = (d_{s_i,1}(t_l), d_{s_i,2}(t_l), \ldots, d_{s_i,k}(t_l))$, which is $k$-dimensional data perceived by the $i$th data acquisition location $s_i$ in $S$; physical quantities such as temperature, humidity, etc., can be measured at the same time. The data set is defined as

$$
D_M(t_l) = \{d_{s_i}(t_l) \mid s_i \in M, \ d_{s_i}(t_l) \in \mathbb{R}^k\}. \tag{4}
$$

If $|M| < |S|$, then the WSN implements incomplete coverage; if $|M| = |S|$, then the WSN implements complete coverage. The data of an interested location where the sensors are not deployed can be assessed from the data of the data acquisition nodes. An interested location where the sensors are not deployed is called a non-data acquisition location.


3.2. Data Acquisition Error. Because the wireless communication network is susceptible to interference, attenuation, multipath, blind zones, and other unfavorable factors, the data error rate is high. Nodes and links in wireless sensor networks are inherently erroneous and unpredictable. The error data, which greatly deviate from the ideal true value, are divided into two types: data loss and data disturbance.

(1) Data Loss. Causes such as nodes that cannot work, links that cannot be established, or data that cannot be transmitted prevent the data of the corresponding data acquisition nodes from reaching the sink node.

(2) Data Disturbance. Due to a failure, the local function of the WSN is in an incorrect (or ambiguous) system state. This state may cause a deviation between the data measured by the sensors of the corresponding data acquisition node and the true value, or the signal is disturbed during transmission so that the data received at the sink node deviates from the true value. The data that correspond to the data acquisition node are not the desired result; the collected data oscillate in the area near the true value. In this paper, we assume that the data disturbance obeys the Gaussian distribution.

The main idea of our method is based on the fundamental assumption that the sensing data of WSN can be regarded as a vector indexed by the interested locations and recovered from a subset of the sensing data. As demonstrated in Figure 1, the data acquisition consists of two stages: the data-sensing stage and the data-recovering stage. At the data-sensing stage, instead of deploying sensors and sensing data at all interested locations, a subset of interested locations, the shaded ones in the second subfigure, is selected to sense the physical quantity and deliver the sensing data to the sink node at each data collection round. Some locations are drawn with a fork in the second subfigure because their data are lost or disturbed; the fork represents the data errors. When the hardware or software of a network node fails or the communication links of the network are broken, the set of sensors which encounter data errors is only a subset of $M$. At the data-recovering stage, the sink node receives these incomplete sensing data over some data collection rounds, shown in the third subfigure, in which the shaded entries represent the valid sensing data and the white entries are unknown. We can then use them to recover the complete data by our method.

Here, we adopt a mask operator $A$ to represent the process of collecting data that encounters errors:

$$
A\big(D_M(t_l)\big) = B(t_l), \tag{5}
$$

where $B$ is the data set that is actually received for interpolation. For the sake of clarity, the operator $A$ can be specified as a vector product as follows:

$$
A\big(D_M(t_l)\big) = Q \odot D_M(t_l), \tag{6}
$$

where $\odot$ denotes the elementwise product of two vectors, i.e., $b_{s_i}(t_l) = d_{s_i}(t_l) \times q_{s_i}$, $Q$ is a vector of size $1 \times |M|$, and $b_{s_i}$ indicates the data that are actually received from the $i$th data acquisition node for interpolation.

$$
q_{s_i} =
\begin{cases}
1, & \text{if the data is not in error}, \\
0, & \text{if the data is lost}, \\
\mathrm{No}\big(d_{s_i}(t_l), \sigma^2\big)/d_{s_i}(t_l), & \text{if the data is disturbed},
\end{cases}
\tag{7}
$$

where $\mathrm{No}(d_{s_i}(t_l), \sigma^2)$ represents a Gaussian distribution with a mean of $d_{s_i}(t_l)$ and a variance of $\sigma^2$. In this paper, the error rate of the received data is defined as $\|Q\|_{q_{s_i} \neq 1} / |M|$, where $\|Q\|_{q_{s_i} \neq 1}$ is the number of non-1 elements of $Q$.
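The error model of (5)-(7) can be simulated in a short sketch; the function name, default probabilities, and seeding below are our own assumptions. Each reading is kept unchanged, zeroed out (loss, $q = 0$), or replaced by a Gaussian sample around the true value (disturbance, $q = \mathrm{No}(d, \sigma^2)/d$):

```python
import random

def apply_mask(data, loss_prob=0.1, disturb_prob=0.1, sigma=0.5, seed=0):
    """Simulate the mask operator A of eqs. (5)-(7) on a list of true readings."""
    rng = random.Random(seed)
    received = []
    for d in data:
        r = rng.random()
        if r < loss_prob:
            received.append(0.0)                  # data loss: q_{s_i} = 0
        elif r < loss_prob + disturb_prob:
            received.append(rng.gauss(d, sigma))  # disturbance: sample of No(d, sigma^2)
        else:
            received.append(d)                    # correct reception: q_{s_i} = 1
    return received
```

Feeding the output of such a mask into an interpolation algorithm is how the fault tolerance of the proposed method can be exercised experimentally.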

3.3. Completion of the Data Space with Interpolation. Due to conditional restrictions, there is no way to deploy sensors in $S - M$. The data generated in $S - M$ can be estimated by interpolation based on the data in $M$. The data space of the entire monitoring area is set to

$$
D_S(t_l) = D_M(t_l) \cup D_{S-M}(t_l), \tag{8}
$$

where $D_M(t_l)$ represents the data set collected from the data acquisition nodes in $M$ at epoch $t_l$, and $D_{S-M}(t_l)$ is the data set that would be collected from the non-data acquisition locations $S - M$ at epoch $t_l$ in the ideal case, if sensors were deployed at the non-data acquisition locations. $\hat{D}_{S-M}(t_l)$ denotes the data interpolation set based on the data in $M$.

The problem is how to process the data $D_M(t_l)$ so that $\hat{D}_{S-M}(t_l)$ is as close as possible to $D_{S-M}(t_l)$, that is, the problem of minimizing the error between $\hat{D}_{S-M}(t_l)$ and $D_{S-M}(t_l)$. Its mathematical expression is as follows:

$$
\begin{aligned}
& \arg\min \ \big\| \hat{D}_{S-M}(t_l) - D_{S-M}(t_l) \big\|, \\
& \text{s.t.} \ Q \neq \mathbf{1},
\end{aligned}
\tag{9}
$$

where $\|\cdot\|$ is the Euclidean norm, used to evaluate the error between $\hat{D}_{S-M}(t_l)$ and $D_{S-M}(t_l)$, and $Q \neq \mathbf{1}$ indicates that $Q$ is not an all-1 vector.

$$
\hat{D}_{S-M}(t_l) = \big\{ \hat{d}_{s_j}(t_l) \mid \hat{d}_{s_j}(t_l) = E\big(d'_{s_j}(t_l), d''_{s_j}(t_l), \ldots, d^{n}_{s_j}(t_l)\big), \ s_j \in S - M \big\}, \tag{10}
$$

where $d'_{s_j}(t_l), d''_{s_j}(t_l), \ldots, d^{n}_{s_j}(t_l)$ represent values close to $d_{s_j}(t_l)$. Suppose $s_i$ is the closest data acquisition node to $s_j$. The value received from the data acquisition node nearest to the non-data acquisition location is also close to $d_{s_j}(t_l)$:

$$
\because \ \mathrm{dist}_{s_i \to s_j} = \min\big(\mathrm{dist}_{M \to s_j}\big), \quad s_i \in M, \tag{11}
$$

where $\mathrm{dist}_{M \to s_j}$ indicates the distances from each data acquisition node in $M$ to the non-data acquisition location $s_j$. The closer the node from which the information is collected, the greater the correlation [15]:

$$
\therefore \ d'_{s_j}(t_l) = d_{s_i}(t_l), \tag{12}
$$

where $d_{s_i}(t_l)$ represents the value of the physical quantity actually collected by the data acquisition node $s_i$.

Suppose that $d''_{s_j}(t_l)$ represents the assessed value at the $j$th non-data acquisition location, which is obtained by the back propagation artificial neural network operator. In this paper, we obtain a $d''_{s_j}(t_l)$ that is close to $d_{s_j}(t_l)$. The data set of non-data acquisition locations for interpolation is

$$
d''_{s_j}(t_l) = \Upsilon_{s_j}\big(b_{s_i}(t_l)\big), \quad s_j \in S - M, \ s_i \in M,
$$
$$
D''_{S-M}(t_l) = \Upsilon_{S-M}\big(B(t_l)\big), \tag{13}
$$

where $\Upsilon_{S-M}(\cdot)$ represents the learning operator used to assess $d_{s_j}(t_l)$, and $B(t_l)$ represents the received data set at the epoch $t_l$.

We can use the back propagation artificial neural network operator of the data acquisition node closest to the non-data acquisition location to predict the data of the non-data acquisition location.

Figure 1: Incomplete acquisition process based on our proposed method (interested locations, data acquisition nodes, data errors, incomplete data space, and completed data space, indexed by interested location).

4. Learning Operator Set of Data Acquisition Nodes

The mathematical model of the learning operator for the reconstruction is as follows:

$$
\begin{aligned}
& \arg\min \ \big\| \Upsilon(\mathrm{Inp}) - \mathrm{Tar} \big\|, \\
& \text{s.t.} \ \mathrm{Inp} \neq \mathrm{Tar},
\end{aligned}
\tag{14}
$$

where $\mathrm{Tar}$ represents the learning goal and $\Upsilon(\cdot)$ is the learning operator applied to $\mathrm{Inp}$; $\mathrm{Inp}$ can be individuals, variables, and even algorithms, functions, sets, and so on. The input of $\Upsilon(\cdot)$ is $\mathrm{Inp}$. $\mathrm{Inp}$ and $\mathrm{Tar}$ are different; if they did not differ, there would be no need to learn for reconstruction. The purpose of learning is to make $\Upsilon(\mathrm{Inp})$ gradually approach $\mathrm{Tar}$.

Because the data of WSN are error-prone, we need fault-tolerant and robust learning operators. The learning operator in this paper uses a back propagation (BP) artificial neural network. We can use the data of the data acquisition nodes to predict the data of the non-data acquisition locations, thus assessing the data space of the entire monitoring area. Because of the strong robustness and adaptability of the artificial neural network, we can use it to interpolate the data of non-data acquisition locations in the case of data error.

The BP artificial neural network is a multilayer (at least 3-layer) feedforward network based on error backpropagation. Because of its characteristics, such as nonlinear mapping, multiple input and multiple output, and self-organizing self-learning, the BP artificial neural network is well suited for dealing with complex nonlinear multiple-input multiple-output problems. The BP artificial neural network model is composed of an input layer, a hidden layer (which can be multilayer, but at least one layer), and an output layer. Each layer is composed of a number of juxtaposed neurons. The neurons in the same layer are not connected to each other, and the neurons in adjacent layers are connected by means of full interconnection [16].

After constructing the topology of the artificial neural network, it is necessary to train the network in order to make it intelligent. For the BP artificial neural network, the learning process is accomplished by forward propagation and reverse correction propagation. As each data acquisition node is related to the other data acquisition nodes and each data acquisition node has historical data, the artificial neural network learning operator of a data acquisition node can be trained on the historical data of the data acquisition nodes.

4.1. Transform Function of the Input Unit. The data of the non-data acquisition location $s_j$ is assessed by using the inverse-distance learning operator of the data acquisition node $s_i$ closest to $s_j$. Because there is spatial correlation between interested locations in spatial acquisition, in this paper we adopt the IDWI algorithm combined with the BP artificial neural network algorithm.

At a certain epoch $t_l$, the data set of all data acquisition nodes other than $s_i$ is as follows:

$$
B_{M-s_i}(t_l) =
\begin{pmatrix}
b_{s_o,1}(t_l) & \cdots & b_{s_m,1}(t_l) \\
\vdots & b_{s_q,i}(t_l) & \vdots \\
b_{s_o,k}(t_l) & \cdots & b_{s_m,k}(t_l)
\end{pmatrix}, \quad b_{s} \in M - s_i. \tag{15}
$$

In this paper, the inverse-distance weight is used to construct the transform function. The transform function $h_{s_i}(B_{M-s_i}(t_l))$ of the input unit $B_{M-s_i}(t_l)$ of the artificial neural network of data acquisition node $s_i$ is as follows:

$$
h_{s_i}\big(B_{M-s_i}(t_l)\big) = \frac{\big(1/\mathrm{dist}^{\gamma}_{s_q \to s_i}\big) \times B_{M-s_i}(t_l)}{\sum_{s_q \in M}^{|M|-1} 1/\mathrm{dist}^{\gamma}_{s_q \to s_i}}, \quad s_q \in M - s_i, \ s_i \in M, \tag{16}
$$

where $\mathrm{dist}_{s_q \to s_i}$ represents the distance between the data acquisition nodes $s_q$ and $s_i$, $\gamma$ represents the weighted power exponent of the distance reciprocal, and $\sum_{s_q \in M}^{|M|-1} 1/\mathrm{dist}^{\gamma}_{s_q \to s_i}$ represents the sum of the weighted reciprocals of the distances from the data acquisition node $s_i$ to the rest of the data acquisition nodes.
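A minimal sketch of the transform function of (16), assuming each remaining node contributes a scalar reading; the weighting by the reciprocal distance to $s_i$ and the normalization by the summed weights follow the formula, while the function and argument names are our own:

```python
import math

def transform(node_data, node_xy, si_xy, gamma=2.0):
    """Inverse-distance transform h_{s_i} of eq. (16).

    node_data: readings of the nodes in M - s_i
    node_xy:   their (x, y) coordinates (all at positive distance from s_i,
               since s_i itself is excluded from the set)
    si_xy:     coordinates of the node s_i whose operator is being trained
    """
    weights = [1.0 / math.hypot(x - si_xy[0], y - si_xy[1]) ** gamma
               for (x, y) in node_xy]
    total = sum(weights)
    # each reading is scaled by its normalized inverse-distance weight
    return [w * d / total for w, d in zip(weights, node_data)]
```

A useful sanity property: when all nodes are equidistant from $s_i$, the transformed inputs sum to the plain arithmetic mean of the readings.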

The artificial neural network requires a training set. We take the historical data over the period $T$ of all data acquisition nodes as the training set, $T = (t_0, t_1, \ldots, t_l, \ldots, t_m)$. In practical engineering, it is feasible for us to obtain the data from the data acquisition nodes over a period for learning.

In this paper, the data collected from all data acquisition nodes are used as the training set $B_{M-s_i}(T)$. $B_{M-s_i}(T)$ is a third-order tensor, as shown in Figure 2: $B_{M-s_i}(T) \in \mathbb{R}^{k \times |M-s_i| \times t_m}$. The elements of the training set are indexed by physical quantity, acquisition node, and epoch.

The $B_{M-s_i}(T)$ matrix is obtained by

$$
B_{M-s_i}(T) =
\begin{pmatrix}
b_{s_o,1}(t_0) & \cdots & b_{s_m,1}(t_0) \\
\vdots & b_{s_q,i}(t_0) & \vdots \\
b_{s_o,k}(t_0) & \cdots & b_{s_m,k}(t_0) \\
b_{s_o,1}(t_1) & \cdots & b_{s_m,1}(t_1) \\
\vdots & b_{s_q,i}(t_1) & \vdots \\
b_{s_o,k}(t_1) & \cdots & b_{s_m,k}(t_1) \\
\vdots & \vdots & \vdots \\
b_{s_o,1}(t_m) & \cdots & b_{s_m,1}(t_m) \\
\vdots & b_{s_q,i}(t_m) & \vdots \\
b_{s_o,k}(t_m) & \cdots & b_{s_m,k}(t_m)
\end{pmatrix}, \quad b_{s} \in M - s_i. \tag{17}
$$

In actual engineering, $B_{M-s_i}(T)$ contains a lot of noise, and it is necessary to clean it; otherwise, it will affect the estimation accuracy of the artificial neural network and the precision of learning. Data cleaning is the process of reexamining and verifying data to remove duplicate information, correct existing errors, and provide data consistency. We can use various filter algorithms to obtain a cleaned $B_{M-s_i}(T)$.

4.2. Artificial Neural Network Learning Operator. The sensing data of a real network include several physical quantities, such as temperature, humidity, and illumination. Usually, a variety of sensors are deployed at an acquisition node, and multiple physical quantities are collected at the same time. Physical quantities at the same acquisition node have the same temporal and spatial variation trend, and in the process of recovery using the artificial neural network, the quantities mutually promote each other. The distance between data acquisition nodes and non-data acquisition locations is very easy to obtain. Based on the inverse-distance interpolation algorithm and the BP artificial neural network algorithm, we propose a multidimensional inverse-distance BP artificial neural network learning operator.

The BP artificial neural network, the most widely used one-way propagating multilayer feedforward network, is characterized by continuous adjustment of the network connection weights, so that any nonlinear function can be approximated with arbitrary accuracy. The BP artificial neural network is self-learning and adaptive and has robustness and generalization ability. The training of the BP artificial neural network is "supervised" learning. The training process of the BP artificial neural network is shown in Figure 3.

The input information is first transmitted to the nodes of the hidden layer through weighted threshold summation; then the output information of the hidden nodes is transmitted to the output nodes through weighted threshold summation, after the operation of the transfer function of each element. Finally, the output result is given. The purpose of network learning is to obtain the right weights and thresholds. The training process consists of two parts: forward and backward propagation.

The BP artificial neural network is a multilayer feedforward network trained by the error backpropagation algorithm. Its structure has not only input nodes and output nodes but also one or more hidden nodes. As demonstrated in Figure 3, the weights $v$ and $w$ are continuously adjusted according to the prediction error, and finally the BP artificial neural network learning operator for reconstructing the data of data acquisition node $s_i$ can be determined, as shown in the part $\Upsilon_{s_i}(\cdot)$ of Figure 3.

The input layer of the BP artificial neural network learning operator is the data collected at a certain time at all acquisition nodes except the $s_i$ acquisition node, which is the vector $h_{s_i}(B_{M-s_i}(t_l))$. The input of the $j$th neuron in the hidden layer is calculated by

$$
\Sigma_j = \sum_{q=1}^{|M|-1} \sum_{p=1}^{k} \nu_{qjp}\, h_{s_i}\big(B_{M-s_i}(t_l)\big) - \theta_j, \tag{18}
$$

where $\nu_{qjp}$ represents the weight between the $(q, p)$th input neuron and the $j$th neuron in the hidden layer. The $(q, p)$th input neuron is $h_{s_i}(b_{s_q,k}(t_l))$, that is, the $k$th-dimension data of the $s_q$ acquisition node at time $t_l$, and $\theta_j$ is the threshold of the $j$th neuron in the hidden layer. The output of a neuron in the hidden layer is calculated by

$$
ho_j = \frac{1}{1 + e^{-\Sigma_j}}. \tag{19}
$$

Similarly, the output of each neuron in the output layer is denoted $o_p$.

Figure 2: The model of $B_{M-s_i}(T)$ (axes: physical quantity, acquisition node, and epoch).

The sum of squared errors over all neurons in the output layer is the objective function value optimized by the BP artificial neural network algorithm, which is calculated by

$$
E = \frac{1}{(|M|-1) \times k} \sum_{q=1}^{|M|-1} \sum_{p=1}^{k} \big(b_{s_i,p} - o_p\big)^2, \tag{20}
$$

where $b_{s_i,p}$ represents the expected output value of neuron $o_p$ in the output layer, corresponding to the value of the $p$th physical quantity sensed by the data acquisition node $s_i$. According to the gradient descent method, the error of each neuron in the output layer is obtained by

$$
E_p = o_p \times (1 - o_p) \times \big(b_{s_i,p} - o_p\big). \tag{21}
$$

The weights and thresholds of the output layer can be adjusted by

$$
\Delta\omega = \partial \times E_p \times ho_j, \tag{22}
$$

$$
\Delta\theta_p = \partial \times E_p, \tag{23}
$$

where $\partial \in (0, 1)$ is the learning rate, which reflects the speed of training and learning. Similarly, the weights and thresholds of the hidden layer can be obtained. If the desired results cannot be obtained at the output layer, the weights and thresholds need to be adjusted continually, gradually reducing the error. The BP neural network has strong self-learning ability and can quickly obtain the optimal solution.
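The forward pass of (18)-(19) and the gradient-descent updates of (21)-(23) can be sketched for a single-output network as follows. This is a simplified illustration, not the paper's full operator: hidden-layer thresholds are kept fixed, only one output neuron is used, and all names are ours:

```python
import math

def sigmoid(z):
    """Logistic activation of eq. (19)."""
    return 1.0 / (1.0 + math.exp(-z))

def train_step(x, target, v, theta, w, theta_o, lr=0.5):
    """One forward/backward pass following eqs. (18)-(23).

    x:        transformed inputs h(.)
    v, theta: hidden-layer weights v[j][q] and thresholds theta[j]
    w:        hidden-to-output weights (mutated in place, like v)
    theta_o:  output threshold (returned updated)
    lr:       learning rate in (0, 1)
    """
    # forward pass: eqs. (18)-(19)
    sums = [sum(vj[q] * x[q] for q in range(len(x))) - theta[j]
            for j, vj in enumerate(v)]
    ho = [sigmoid(s) for s in sums]
    o = sigmoid(sum(w[j] * ho[j] for j in range(len(ho))) - theta_o)
    # backward pass: output error of eq. (21), updates of eqs. (22)-(23)
    ep = o * (1.0 - o) * (target - o)
    for j in range(len(w)):
        eh = ho[j] * (1.0 - ho[j]) * ep * w[j]   # hidden-layer error term
        w[j] += lr * ep * ho[j]
        for q in range(len(x)):
            v[j][q] += lr * eh * x[q]
    theta_o -= lr * ep
    return o, theta_o
```

Repeating `train_step` over the historical data drives the output toward the supervised target $b_{s_i,p}$, which mirrors how the operator $\Upsilon_{s_i}(\cdot)$ is fitted in this section.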

Figure 3: The training process of the BP artificial neural network (input layer, hidden layer, and output layer; error back propagation adjusts the weights $\nu$ and $\omega$ of the operator $\Upsilon_{s_i}(\cdot)$).


The mathematical model of Figure 3 is as follows:

$$
f_{o_p}\left( \sum_{j=1}^{J} \omega_j\, f_{\Sigma_j}\left( \sum_{q=1}^{|M|-1} \sum_{p=1}^{k} \nu_{qjp}\, h_{s_i}\big(B_{M-s_i}(T)\big) \right) \right) \longrightarrow b_{s_i,p}(T), \tag{24}
$$

where $B_{s_q}(T)$ is the training data, that is, the historical data of the acquisition nodes other than $s_i$ over the period $T$; $h_{s_i}(\cdot)$ is the transform function of the input layer; $\nu_{qj}$ is the weight of the input layer; $f_{\Sigma_j}(\cdot)$ is the transfer function from the input layer to the hidden layer; $\omega$ is the weight from the hidden layer to the output layer; $f_{o_p}(\cdot)$ is the transfer function from the hidden layer to the output layer; and $b_{s_i}(T)$ is the "supervisor" information, that is, the historical data of acquisition node $s_i$ over the period $T$.

From (14) and (24), the following formula can be obtained:

$$
\Upsilon_{s_i}(\cdot) = f_{o_p}\left( \sum_{j=1}^{J} \omega_j\, f_{\Sigma_j}\left( \sum_{q=1}^{|M|-1} \sum_{p=1}^{k} \nu_{qjp} \, \cdot \right) \right) \Bigg|_{b_{s_i}(T)}, \tag{25}
$$

where $\Upsilon_{s_i}(\cdot)$ represents the inverse-distance BP artificial neural network learning operator of acquisition node $s_i$, and $|_{b_{s_i}(T)}$ indicates that this neural network operator $\Upsilon_{s_i}(\cdot)$ is a learning operator trained with the data of $s_i$ as tutor information.

The data collected by each data acquisition node in the monitoring area are related to each other. The data of a data acquisition node can be learned by inputting the data of the other data acquisition nodes into the learning operator. The learning operator set of the data acquisition nodes in the whole monitoring area is as follows:

$$
\Upsilon(\cdot) = \{\Upsilon_{s_i}(\cdot) \mid s_i \in M\}. \tag{26}
$$

5. Interpolation at Non-Data Acquisition Locations

We transfer the BP artificial neural network operators from data acquisition nodes to non-data acquisition locations; that is, we use the learning operator of the data acquisition node closest to a non-data acquisition location to estimate the data of that location. There are four ways to implement learning operator transferring: sample transferring, feature transferring, model transferring, and relationship transferring. In this paper, model transferring (also called parameter transferring) is used, as it combines best with the BP artificial neural network: we can interpolate with the pretrained BP artificial neural network. The BP artificial neural network learning operator has strong robustness and adaptability. In the interested area, if the collected physical quantity is highly correlated and does not change drastically between the various interested locations, the learning operator can be transferred.

Since the construction of the artificial neural network requires historical data for training, and there are no historical data at a non-data acquisition location without deployed sensors, it is very difficult to construct an artificial neural network for the non-data acquisition location. However, we can predict the physical quantities of non-data acquisition locations by using the learning operator transferred from the nearest data acquisition node.

5.1. Transform Function Corresponding to the Nonacquisition Location $s_j$. First, we use the IDWI method to construct the transform function. The transform function of the artificial neural network input unit, corresponding to the non-data acquisition location $s_j$, is defined analogously to $h_{s_i}(B_{M-s_i}(t_l))$:

$$
h_{s_j}\big(B_{M-s_i}(t_l)\big) = \frac{B_{M-s_i}(t_l)\big/\mathrm{dist}^{\gamma}_{s_q \to s_j}}{\sum_{s_q \in M - s_i}^{|M|-1} 1/\mathrm{dist}^{\gamma}_{s_q \to s_j}}, \tag{27}
$$

where $\mathrm{dist}_{s_q \to s_j}$ represents the distance between $s_q$ and $s_j$, $\gamma$ represents the weighted power exponent of the distance reciprocal, and $\sum_{s_q \in M - s_i}^{|M|-1} 1/\mathrm{dist}^{\gamma}_{s_q \to s_j}$ represents the sum of the weighted reciprocals of the distances from the non-data acquisition location $s_j$ to the rest of the acquisition nodes. The physical data collected from the remaining $|M| - 1$ data acquisition nodes other than $s_i$ are as follows:

$$
B_{M-s_i} = \{b_{s_q}(t_l) \mid s_q \in M - s_i\}. \tag{28}
$$

5.2. Learning Operator Transferring. Since the data of the data acquisition node closest to the non-data acquisition location are the most important for that location, and its data are the most correlated with the data of the non-data acquisition location, we estimate the data of the non-data acquisition location with the data from the nearest data acquisition node and its learning operator.

Since the data we are targeting are spatially correlated, the smaller the distance between two interested locations, the smaller the difference between their collected data; conversely, the greater the distance, the greater the difference. We can therefore use the data of a data acquisition node close to a non-data acquisition location to assess the data at that location.

Because the BP artificial neural network learning operator has strong robustness and adaptability, in this paper we transfer the inverse-distance BP artificial neural network learning operator of $s_i$ to $s_j$ in order to estimate the data at $s_j$. The transform function $h(\cdot)$ does not change with time; it is a function of the distance between the sampling locations, $h_{s_i}(B_{M-s_i}(t_l)) \rightarrow h_{s_j}(B_{M-s_i}(t_l))$. The number of input parameters is constant while the input parameters vary with time, so the change of the transform function does not affect the trend of the input parameters. The change of the transform function therefore does not affect the accuracy of prediction, and the operator transferring can be implemented:

$$
d''_{s_j}(t_l) = \Upsilon_{s_i}\big(h_{s_j}(B_{M-s_i}(t_l))\big) = f_{s_i}\left( \sum_{j=1}^{J} \omega_j\, f_{z_j}\left( \sum_{q=1}^{|M|} \nu_{qj}\, h_{s_j}\big(B_{M-s_i}(t_l)\big) \right) \right). \tag{29}
$$

The assessment network based on the BP artificial neural network operator is shown in Figure 4.

In this paper, our proposed method is improved on the basis of the BP artificial neural network. According to spatial correlation, the physical quantities of a monitored interested location close to the data acquisition node $s_q$ can be approximated by the learning operator of $s_q$.

Suppose $\hat{d}_{s_j}$ represents the estimated value of the physical quantity of the interested location $s_j$. Due to conditional restrictions, no sensor is deployed at the interested location $s_j$. We choose the physical quantity of the data acquisition node $s_i$ nearest to the non-data acquisition location $s_j$ for estimation. We can use Algorithm 1 to determine $s_i$.
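The role of Algorithm 1, selecting the nearest acquisition node $s_i$ for a non-data acquisition location $s_j$ as in (11), reduces to a minimum-distance search. A sketch (the node list structure and the example coordinates below are illustrative, not from the paper):

```python
import math

def nearest_node(sj_xy, nodes):
    """Return the data acquisition node closest to location s_j, per eq. (11).

    sj_xy: (x, y) of the non-data acquisition location s_j
    nodes: list of (node_id, (x, y)) pairs for the nodes in M
    """
    return min(nodes, key=lambda n: math.hypot(n[1][0] - sj_xy[0],
                                               n[1][1] - sj_xy[1]))
```

With a handful of nodes a linear scan suffices; for a large $M$, a spatial index (e.g., a k-d tree) would be the natural refinement.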

5.3. Assessment at the Non-Data Acquisition Location. $s_i$ is the nearest data acquisition node to $s_j$, and we use the learning operator of the data acquisition node $s_i$ to estimate $\hat{d}_{s_j}$. This is an improvement on the inverse-distance interpolation method. Because $s_i$ is closest to $s_j$, the correlation of their data is the largest, and the data actually collected by $s_i$ have the greatest impact on the predicted value at $s_j$:

$$
\hat{d}_{s_j} = E\big(d'_{s_j}(t_l), d''_{s_j}(t_l)\big) = \alpha \times d'_{s_j}(t_l) + \beta \times d''_{s_j}(t_l), \quad s_i \in M, \ s_j \in S - M, \tag{30}
$$

where $\alpha$ and $\beta$ denote the weights of $d'_{s_j}(t_l)$ and $d''_{s_j}(t_l)$, respectively, used to estimate the physical quantity at $s_j$, with $\alpha + \beta = 1$. $d'_{s_j}(t_l)$ is an actually measured value, so its credibility is higher. Based on spatial correlation, the closer $s_i$ is to $s_j$, the greater the correlation between the physical quantity at $s_j$ and the data collected by $s_i$.

We assume that $\mathrm{dist}_{s_i \to s_j}$ represents the distance between $s_i$ and $s_j$. The influence weight of the data collected by $s_i$ on the assessment at $s_j$ decreases with increasing $\mathrm{dist}_{s_i \to s_j}$: the greater the distance $\mathrm{dist}_{s_i \to s_j}$, the smaller the impact. The range of $\mathrm{dist}_{s_i \to s_j}$ is $[0, +\infty)$. We find that the inverse tangent function is an increasing function on $[0, +\infty)$; the function curve is shown in Figure 5.

We use the boundedness and monotonically increasing characteristics of the arctangent function curve, inspired by the idea of IDWI. The formula for calculating $\beta$ is as follows:

$$
\beta = \frac{\arctan\big(\mathrm{dist}_{s_i \to s_j}\big)}{\pi/2}, \quad \mathrm{dist}_{s_i \to s_j} \in [0, +\infty). \tag{31}
$$

This limits the value of $\beta$ to the interval $[0, 1)$. When $\mathrm{dist}_{s_i \to s_j}$ is close to 0, $\beta$ is close to 0 and $1 - \beta$ is close to 1; the data measured by the data acquisition node then dominates the value at the non-data acquisition location. When $\mathrm{dist}_{s_i \to s_j} = 0$, a sensor is in fact deployed at $s_j$, so we do not need values calculated by the prediction algorithm and directly use the actual measured data:

$$d_{s_j} = \frac{\arctan\left(\mathrm{dist}_{s_i \to s_j}\right)}{\pi/2} \times d''_{s_j}(t_l) + \left(1 - \frac{\arctan\left(\mathrm{dist}_{s_i \to s_j}\right)}{\pi/2}\right) \times d'_{s_j}(t_l), \quad s_i \in M,\ s_j \in S - M \tag{32}$$

If the interested location of the interpolation is still far from the nearest data acquisition node, this algorithm causes a large error. However, since we use the sensor placement based on iteratively dividing four subregions, the sensors of the data acquisition nodes are spread omnidirectionally throughout the space rather than concentrated in a certain domain, so the error is not too large.
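The weighting in (31) and (32) is easy to check numerically. Below is a minimal Python sketch (the function names `beta` and `interpolate` are ours, not from the paper): `beta` implements the bounded arctan weight, and `interpolate` blends the measured value $d'$ of the nearest node with the ANN estimate $d''$.

```python
import math

def beta(dist):
    """Eq. (31): arctan-based weight; maps dist in [0, +inf) into [0, 1)."""
    return math.atan(dist) / (math.pi / 2)

def interpolate(d_measured, d_ann, dist):
    """Eq. (32): blend the nearest node's measured value d' with the
    ANN estimate d''. At dist = 0 the measured value is used directly."""
    b = beta(dist)
    return b * d_ann + (1.0 - b) * d_measured
```

As expected, the weight of the ANN estimate grows monotonically with distance but never reaches 1.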

6. Experiments and Evaluation

6.1. Parameter Setup. The data set we used is the measured data provided by the Intel Berkeley Research lab [9]. The data were collected from 54 data acquisition nodes in the Intel Berkeley Research lab between February 28th and April 5th, 2004. In this data set, the epoch is a monotonically increasing sequence number from each mote; two readings with the same epoch number were produced by different motes at the same time. There are some missing epochs in this data set. Mote IDs range from 1 to 54. In this experiment, we selected the data of the following 9 motes as the set of interested locations, because these 9 points share the same epochs:

$$S = \{s_7(22.5, 8),\ s_{18}(5.5, 10),\ s_{19}(3.5, 13),\ s_{21}(4.5, 18),\ s_{22}(1.5, 23),\ s_{31}(15.5, 28),\ s_{46}(34.5, 16),\ s_{47}(39.5, 14),\ s_{48}(35.5, 10)\} \tag{33}$$

When the environment of the monitoring area is not very complicated, the collected physical quantity has strong spatial correlation, and it is feasible to use the placement based on iteratively dividing four subregions to deploy sensors. In this experiment, we use the method of iteratively dividing four subregions to select the interested locations from $S$ as data acquisition nodes.

10 Journal of Sensors

Figure 4: The assessment network based on the BP artificial neural network operator (the operator $\Upsilon_{s_i}(\cdot)$ trained at the data acquisition node is transferred to the non-data acquisition location).


The $s_i(x_i, y_i) \in S$ closest to each deployment location generated by the method of iteratively dividing four subregions is selected into $M$.

In this experiment, we set the dimension of the collected data to 2, that is, $k = 2$: we take two physical quantities, temperature and humidity. At least two data acquisition nodes are needed, i.e., $|M| \ge 2$, because one data acquisition node must be the node closest to the interested location for interpolation ($s_i \notin \emptyset$), and the data of the other acquisition nodes are used as the training set of the artificial neural network ($M - s_i \not\subset \emptyset$, $|M - s_i| \ge 1$).

The epochs we chose are $T = \{33, 35, 42, 44, 51, 56, 61, 62, 70, 73, 86, 88, 89, 91, 95, 99, 102, 107, 112, 113, 123, 124, 127\}$. The data whose epoch is an element of $T$ are used as the training set. Unfortunately, the actual sampled sensing data are always corrupted, and some values go missing. We need clean sensing data to train the BP artificial neural networks to improve the interpolation precision.

In order not to lose the trend of data change in the process of data filtering, we simply apply mean processing with a filter window along the temporal direction of the measurements to obtain the clean sensing data. If $\left|d_{s_i}(t-1) - d_{s_i}(t)\right| \gg \delta \times \left|d_{M-s_i}(t-1) - d_{M-s_i}(t)\right|$, then $d_{s_i}(t)$ can be replaced by the value calculated as

$$d_{s_i}(t) = \frac{d_{s_i}(t-1) + d_{s_i}(t+1)}{2} \tag{34}$$

where $\delta$ is the adjusting coefficient. In this experiment, we take $\delta = 10$.
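The jump test and replacement rule around Eq. (34) can be sketched as follows. This is a hedged illustration, not the authors' code: the function name is ours, and detecting against the already-filtered previous sample (rather than the raw one) is our simplification to avoid re-triggering on the spike itself.

```python
def smooth_outliers(series, neighbor_series, delta=10.0):
    """Temporal mean filtering in the spirit of Eq. (34).

    If a sample's jump from the previous (already filtered) value exceeds
    `delta` times the jump seen by the other nodes over the same step,
    the sample is replaced by the mean of its temporal neighbours.
    """
    out = list(series)
    for t in range(1, len(series) - 1):
        own_jump = abs(out[t - 1] - series[t])
        ref_jump = abs(neighbor_series[t - 1] - neighbor_series[t])
        if own_jump > delta * ref_jump:
            # Eq. (34): replace with the mean of the two temporal neighbours.
            out[t] = (out[t - 1] + series[t + 1]) / 2.0
    return out
```

A spike that the other nodes do not see is smoothed away, while a change shared by all nodes (a real trend) is kept.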

The filtered result of the temperature measurements is shown in Figure 6(a), and the actual sensing data are shown in Figure 6(b). The filtered result of the humidity measurements is shown in Figure 7(a), and the actual sensing data are shown in Figure 7(b).

When comparing the test results, we require that the epoch of the data supplied by each mote be the same, because in an actual application the data space of the monitoring area we build is the data space at a certain moment. We assumed that $t_l = 138$; that is, the time we selected for interpolation was the 138th epoch. We compare the actual value collected at the 138th epoch with the interpolation calculated by the algorithms.

To evaluate the accuracy of the reconstructed measurements, we choose the mean relative error (MRE). It reflects the precision of the estimated data relative to the measured data. The formula for calculating the MRE is as follows [17]:

$$\mathrm{MRE} = \frac{1}{n} \sum_{i=1}^{n} \frac{\left| d(x_i) - \hat{d}(x_i) \right|}{d(x_i)} \tag{35}$$

where $d(x_i)$ is the actual acquisition value of the $i$th sensor, $\hat{d}(x_i)$ is the corresponding assessed value, and $n$ is the total number of data acquisition nodes.
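The MRE of Eq. (35) is straightforward to compute; a small sketch (the function name is ours):

```python
def mean_relative_error(actual, estimated):
    """Eq. (35): mean over all nodes of |d(x_i) - d_hat(x_i)| / d(x_i)."""
    n = len(actual)
    assert n == len(estimated) and n > 0
    return sum(abs(a - e) / abs(a) for a, e in zip(actual, estimated)) / n
```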

6.2. Results of the Experiment. This experiment covers a small range, the physical quantities of the collected data are highly correlated, and there is no situation in which the physical quantity changes drastically between the various interested locations, so the learning operator can be transferred.

In the case of data loss, we compare our method with the inverse-distance-weighted interpolation algorithm in terms of interpolation accuracy; then $q_{s_i} = 0$. Since the data acquisition nodes experiencing data loss are random, we conducted 20 tests for a more accurate comparison. Obviously, the number of acquisition nodes with data loss must be less than or equal to the total number of acquisition nodes. The results are shown in Figures 8 and 9.

As can be seen from Figures 8 and 9, as the proportion of lost data among all collected data decreases, the interpolation error is gradually reduced. The curve of the proposed algorithm is relatively flat, indicating that it is less affected by data loss.

In the case of data loss, especially when the number of data acquisition nodes is relatively small, the interpolation error of our algorithm is much smaller than that of the inverse-distance-weighted interpolation.

In the case of data disturbance, we compare our method with the inverse-distance-weighted interpolation algorithm in terms of interpolation accuracy; then $q_{s_i} = \mathrm{No}(d_{s_i}(t_l), \sigma^2)/d_{s_i}$. For both temperature and humidity, we use the mean value over all acquisition nodes as the mean of the Gauss distribution.

Figure 5: Inverse tangent function curve.

Input: $M$, $s_j(x_j, y_j)$
Output: $s_i$
Initialize mindist $= +\infty$
For each $s_q$ in $M$
&emsp;$\mathrm{dist}_{s_j \to s_q} = \left( (x_j - x_q)^2 + (y_j - y_q)^2 \right)^{1/2}$
&emsp;if mindist $> \mathrm{dist}_{s_j \to s_q}$
&emsp;then $i = q$, mindist $= \mathrm{dist}_{s_j \to s_q}$
&emsp;end if
End for

Algorithm 1: Culling $s_i$ from $M$.


For the interpolation of temperature, we set the parameters as $d_{s_i}(t_l) = 18.47895$ and $\sigma^2 = 18$; for the interpolation of humidity, we use the mean humidity of all acquisition nodes as the mean of the Gauss distribution, with parameters $d_{s_i}(t_l) = 39.44691$ and $\sigma^2 = 39$. Again, we ran 20 tests. The results are shown in Figures 10 and 11.

It can be seen from Figures 10 and 11 that the variation curve of the proposed algorithm is relatively flat, while the curve of the inverse-distance-weighted interpolation algorithm fluctuates greatly. The interpolation error of the IDWI algorithm is affected not only by data disturbance but also by the deployment locations of the data acquisition nodes. When 7 acquisition nodes are deployed, the error of the IDWI algorithm is the smallest: because sensor placement based on iteratively dividing four subregions is nearly uniform with 7 acquisition nodes, the interpolation error is small.

When there are not many acquisition nodes, the density of acquisition nodes where data disturbance occurs is large, so the interpolation error is more prominent. As the number of deployed sensors increases, the interpolation error is reduced.

As we can see from Figures 8–11, our algorithm is insensitive to errors and strongly robust when the data are erroneous, whereas erroneous data greatly degrade the interpolation accuracy of the inverse-distance interpolation method. In particular, when the error rate is high, the relative error of our algorithm is much lower than that of the inverse-distance interpolation algorithm: when the error rate is 50%, our algorithm has a relative error of 0.1 while that of the inverse-distance interpolation algorithm is 3.5, as shown in Figure 9(a).

7. Conclusions

In this paper, we proposed a robust data interpolation based on a back propagation artificial neural network operator for incomplete acquisition in a wireless sensor network. Under the incomplete collection strategy of a WSN, the effect of complete acquisition can be approximately obtained by interpolation. With limited data acquisition nodes, the data of the acquisition nodes are used for training to obtain the learning operator.

Figure 7: The humidity sensing data from the Intel Berkeley Research lab (humidity vs. epoch number and node index): (a) the filtered sensing data; (b) the actual sensing data.

Figure 6: The temperature sensing data from the Intel Berkeley Research lab (temperature vs. epoch number and node index): (a) the filtered sensing data; (b) the actual sensing data.


Figure 9: The humidity sensing data from the Intel Berkeley Research lab (mean relative error vs. number of data acquisition nodes): (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at data acquisition nodes increased from 2 to 8.

Figure 8: The temperature sensing data from the Intel Berkeley Research lab (mean relative error vs. number of data acquisition nodes): (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at data acquisition nodes increased from 2 to 8.


Then the learning operator of the acquisition node closest to the non-data acquisition location is transferred to that location for interpolation. Considering that the data collected by the WSN are prone to error, we analyzed the causes of the error. In order to improve the fault tolerance and robustness of the interpolation algorithm, we proposed a BP artificial neural network learning operator.

Figure 11: The humidity sensing data from the Intel Berkeley Research lab (mean relative error vs. number of data acquisition nodes): (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at data acquisition nodes increased from 2 to 8.

Figure 10: The temperature sensing data from the Intel Berkeley Research lab (mean relative error vs. number of data acquisition nodes): (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at data acquisition nodes increased from 2 to 8.


From the experiments, we demonstrated that our algorithm is strongly robust and yields a lower error when the data collected by the WSN contain errors. This method has strong potential for practical data visualization, data analysis, WSN monitoring, etc.

Data Availability

The measured data that support the findings of this study are supplied by the Intel Berkeley Research lab. They are available from http://db.csail.mit.edu/labdata/labdata.html.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This work was financially supported by the National Natural Science Foundation of China (Grant Nos. 61561017 and 61462022), the Innovative Research Projects for Graduate Students of Hainan Higher Education Institutions (Grant No. Hyb2017-06), the Hainan Province Major Science & Technology Project (Grant No. ZDKJ2016015), the project of the Natural Science Foundation of Hainan Province in China (Grant No. 617033), the open project of the State Key Laboratory of Marine Resource Utilization in South China Sea (Grant No. 2016013B), and the Key R&D Project of Hainan Province (No. ZDYF2018015).

References

[1] F. Ding and A. Song, "Development and coverage evaluation of ZigBee-based wireless network applications," Journal of Sensors, vol. 2016, Article ID 2943974, 9 pages, 2016.

[2] O. Alvear, W. Zamora, C. Calafate, J. C. Cano, and P. Manzoni, "An architecture offering mobile pollution sensing with high spatial resolution," Journal of Sensors, vol. 2016, Article ID 1458147, 13 pages, 2016.

[3] W. Xiao, The Key Technology of Adaptive Data Fault Tolerance in Wireless Sensor Networks, National Defense University of Science and Technology, Hunan, China, 2010.

[4] Z. Liang, F. Tian, C. Zhang, H. Sun, A. Song, and T. Liu, "Improving the robustness of prediction model by transfer learning for interference suppression of electronic nose," IEEE Sensors Journal, vol. 18, no. 3, pp. 1111–1121, 2017.

[5] Z. Yang, W. Qiu, H. Sun, and A. Nallanathan, "Robust radar emitter recognition based on the three-dimensional distribution feature and transfer learning," Sensors, vol. 16, no. 3, pp. 289–303, 2016.

[6] S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2010.

[7] B. Pan, J. Tai, Q. Zheng, and S. Zhao, "Cascade convolutional neural network based on transfer-learning for aircraft detection on high-resolution remote sensing images," Journal of Sensors, vol. 2017, Article ID 1796728, 14 pages, 2017.

[8] J. Park, R. Javier, T. Moon, and Y. Kim, "Micro-Doppler based classification of human aquatic activities via transfer learning of convolutional neural networks," Sensors, vol. 16, no. 12, pp. 1990–2000, 2016.

[9] http://db.csail.mit.edu/labdata/labdata.html.

[10] Z. Kang and Y. Wang, "Structural topology optimization based on non-local Shepard interpolation of density field," Computer Methods in Applied Mechanics and Engineering, vol. 200, no. 49-52, pp. 3515–3525, 2011.

[11] M. Hammoudeh, R. Newman, C. Dennett, and S. Mount, "Interpolation techniques for building a continuous map from discrete wireless sensor network data," Wireless Communications and Mobile Computing, vol. 13, no. 9, pp. 809–827, 2013.

[12] M. Saberi, A. Azadeh, A. Nourmohammadzadeh, and P. Pazhoheshfar, "Comparing performance and robustness of SVM and ANN for fault diagnosis in a centrifugal pump," in 19th International Congress on Modelling and Simulation, pp. 433–439, Perth, Australia, 2011.

[13] R. Velazco, P. Cheynet, J. D. Muller, R. Ecoffet, and S. Buchner, "Artificial neural network robustness for on-board satellite image processing: results of upset simulations and ground tests," IEEE Transactions on Nuclear Science, vol. 44, no. 6, pp. 2337–2344, 1997.

[14] A. Venkitaraman, A. M. Javid, and S. Chatterjee, "R3Net: random weights, rectifier linear units and robustness for artificial neural network," 2018, https://arxiv.org/abs/1803.04186.

[15] C. Zhang, X. Zhang, O. Li et al., "WSN data collection algorithm based on compressed sensing under unreliable links," Journal of Communication, vol. 37, no. 9, pp. 131–141, 2016.

[16] H. L. Chen and W. Peng, "Improved BP artificial neural network in traffic accident prediction," Journal of East China Normal University (Natural Science), vol. 2017, no. 2, pp. 61–68, 2017.

[17] G. Y. Lu and D. W. Wong, "An adaptive inverse-distance weighting spatial interpolation technique," Computers & Geosciences, vol. 34, no. 9, pp. 1044–1055, 2008.



Shepard interpolation method to build a continuous map for a new WSN service called the map generation service.

From (1), we can see that the IDWI algorithm is sensitive to the accuracy of the data. However, a WSN is usually deployed in a harsh environment, and the probability of collecting erroneous data is high, so error tolerance is required of the interpolation algorithm. This paper improves the robustness of the interpolation algorithm on the basis of the inverse-distance interpolation algorithm.

2.2. Artificial Neural Network. An artificial neural network (ANN) is an information processing paradigm inspired by biological nervous systems, such as how the brain processes information. ANNs, like people, have the ability to learn by example. An ANN is configured for a specific application, such as pattern recognition, function approximation, or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist among neurons, and this is true for ANNs as well. They are made up of simple processing units linked by weighted connections to form structures that are able to learn relationships between sets of variables. This heuristic method can be useful for nonlinear processes that have unknown functional forms. Among the different networks, the feedforward neural network, or multilayer perceptron (MLP), is most commonly used in engineering. MLP networks are normally arranged in three layers of neurons: the input layer and output layer represent the input and output variables of the model, and between them lie one or more hidden layers that hold the network's ability to learn nonlinear relationships [12].

The natural redundancy of neural networks and the form of the activation function (usually a sigmoid) of neuron responses make them somewhat fault tolerant, particularly with respect to perturbation patterns. Most of the published work on this topic demonstrated this robustness by injecting limited (Gaussian) noise on a software model [13]. Velazco et al. proved the robustness of ANNs with respect to bit errors in [13]. Venkitaraman et al. proved in [14] that a neural network architecture exhibits robustness to input perturbation: the output feature of the neural network is Lipschitz continuous with respect to the input perturbation. Artificial neural networks thus have strong robustness. The robustness of the algorithm is required to ensure the accuracy of artificial neural network operator transferring, and we can see from the literature [7, 8] that operator transferring combines well with an artificial neural network. The learning operator in this paper therefore also adopts an artificial neural network algorithm.

3. Problem Formulations

3.1. Data Acquisition Nodes. To assess the entire environmental condition, a WSN collects data by deploying a certain number of sensors at locations in the monitoring area; thus the physical quantity of the monitoring area is discretized, and the monitored physical quantity is digitized.

Definition 1. Interested locations in the whole monitoring area: they are the central locations of the segments of the monitoring area that we are interested in.

Sensors can be deployed at each interested location to capture data. The data at all interested locations reflect the information status of the entire monitoring area.

We assume that $S$ is the set of the interested locations, which is a $1 \times n$ matrix:

$$S = \{s_1(x_1, y_1), s_2(x_2, y_2), s_3(x_3, y_3), \ldots, s_n(x_n, y_n)\} \tag{2}$$

where $s_i(x_i, y_i)$ is the $i$th interested location in the monitoring area and $(x_i, y_i)$ are its coordinates. Due to the difficulty and limitations of deployment, sensors cannot be deployed at all interested locations. This paper studies the spatial incomplete collection strategy: we select a subset of $S$ as the data acquisition nodes. $|S|$ represents the cardinality of the set, that is, the number of elements of $S$: $|S| = n$.

Definition 2. Data acquisition nodes: they are the interested locations where the sensors are actually deployed.

The sensors are deployed at these interested locations so that these locations become data acquisition nodes. All data acquisition nodes in the monitoring area act as the sensing layer of the WSN, and the information is transmitted to the server through the devices of the transport layer.

In our research, when the sensors are not deployed at an interested location, we use 0 as a placeholder for it; when the interested location becomes a data acquisition node, we use 1 as a placeholder. Suppose that $M$ is the set of data acquisition nodes:

$$M = \{s_i \mid s_i = 1,\ s_i \in S,\ i = 1, \ldots, n\} \tag{3}$$

where $s_i = 1$ represents the $i$th interested location where sensors are deployed. $|M|$ indicates the cardinality of the set $M$; it reflects the total number of elements in the set of data acquisition nodes. This paper investigates the case where multiple types of sensors are deployed at a data acquisition node. The data that the data acquisition node $s_i$ can correctly collect at time $t_l$ is defined as $d_{s_i}(t_l) = (d_{s_i,1}(t_l), d_{s_i,2}(t_l), \ldots, d_{s_i,k}(t_l))$, which is the $k$-dimensional data perceived at the $i$th data acquisition location $s_i$ in $S$; physical quantities such as temperature and humidity can be measured at the same time. The data set is defined as

$$D_M(t_l) = \{d_{s_i}(t_l) \mid s_i \in M,\ d_{s_i}(t_l) \in R^k\} \tag{4}$$

If $|M| < |S|$, then the WSN implements incomplete coverage; if $|M| = |S|$, then the WSN implements complete coverage. The data of an interested location where sensors are not deployed can be assessed from the data of the data acquisition nodes. An interested location where the sensors are not deployed is called a non-data acquisition location.


3.2. Data Acquisition Error. Because the wireless communication network is susceptible to interference, attenuation, multipath, blind zones, and other unfavorable factors, the data error rate is high. Nodes and links in wireless sensor networks are inherently erroneous and unpredictable. The error data, which greatly deviate from the ideal true value, are divided into two types: data loss and data disturbance.

(1) Data Loss. Causes such as nodes that cannot work, links that cannot be connected, or data that cannot be transmitted prevent the data of the corresponding data acquisition nodes from reaching the sink node.

(2) Data Disturbance. Due to a failure, a local part of the WSN is in an incorrect (or ambiguous) system state. This state may cause a deviation between the data measured by the sensors of the corresponding data acquisition node and the true value; or the signal is disturbed during transmission, and the data received at the sink node deviate from the true value. The data corresponding to the data acquisition node are then not the desired result: the collected data oscillate in the region near the true value. In this paper, we assume that the data disturbance obeys the Gauss distribution.

The main idea of our method is based on the fundamental assumption that the sensing data of the WSN can be regarded as a vector indexed by the interested locations and recovered from a subset of the sensing data. As demonstrated in Figure 1, the data acquisition consists of two stages: the data-sensing stage and the data-recovering stage. At the data-sensing stage, instead of deploying sensors and sensing data at all interested locations, a subset of interested locations (the shaded ones in the second subfigure) is selected to sense the physical quantity and deliver the sensing data to the sink node at each data collection round. Some locations are drawn with a fork in the second subfigure because their data are lost or disturbed; the fork represents the data errors. When the hardware or software of a network node fails or the communication links of the network are broken, the set of sensors which encounter data errors is only a subset of $M$. At the data-recovering stage, the sink node receives these incomplete sensing data over some data collection rounds, shown in the third subfigure, in which the shaded entries represent the valid sensing data and the white entries are unknown. We then use them to recover the complete data by our method.

Here we adopt a mask operator $A$ to represent the process of collecting data that encounter errors:

$$A(D_M(t_l)) = B(t_l) \tag{5}$$

where $B$ is the data set that is actually received for interpolation. For the sake of clarity, the operator $A$ can be specified as a vector product as follows:

$$A(D_M(t_l)) = Q \odot D_M(t_l) \tag{6}$$

where $\odot$ denotes the element-wise product of two vectors, i.e., $b_{s_i}(t_l) = d_{s_i}(t_l) \times q_{s_i}$. $Q$ is a vector of size $1 \times |M|$, and $b_{s_i}$ indicates the data actually received from the $i$th data acquisition node for interpolation:

$$q_{s_i} = \begin{cases} 1, & \text{if the data is not in error} \\ 0, & \text{if the data is lost} \\ \dfrac{\mathrm{No}\left(d_{s_i}(t_l), \sigma^2\right)}{d_{s_i}}, & \text{if the data is disturbed} \end{cases} \tag{7}$$

where $\mathrm{No}(d_{s_i}(t_l), \sigma^2)$ represents a Gauss distribution with mean $d_{s_i}(t_l)$ and variance $\sigma^2$.

In this paper, the error rate of the received data is defined as $\|Q\|_{q \ne 1} / |M|$, where $\|Q\|_{q \ne 1}$ is the number of non-1 elements of $Q$.
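The mask operator of Eqs. (5)–(7) can be simulated directly. Below is a hedged Python sketch (the function names, the 'ok'/'lost'/'disturbed' status encoding, and the seeding are our illustrative assumptions, not from the paper): a lost reading becomes 0, and a disturbed reading becomes a Gaussian sample centred on the true value, matching $b_{s_i} = q_{s_i} \times d_{s_i}$.

```python
import random

def apply_mask(data, status, sigma2=1.0, seed=None):
    """Eqs. (5)-(7): element-wise b = q * d with q = 1 (correct),
    q = 0 (lost), or q = No(d, sigma^2)/d (disturbed)."""
    rng = random.Random(seed)
    out = []
    for d, s in zip(data, status):
        if s == 'ok':
            out.append(d)
        elif s == 'lost':
            out.append(0.0)                         # q = 0
        else:                                       # 'disturbed'
            out.append(rng.gauss(d, sigma2 ** 0.5))  # q*d ~ No(d, sigma^2)
    return out

def error_rate(status):
    """||Q||_{q != 1} / |M|: fraction of entries whose q is not 1."""
    return sum(1 for s in status if s != 'ok') / len(status)
```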

3.3. Completion of Data Space with Interpolation. Due to conditional restrictions, there is no way to deploy sensors in $S - M$. The data generated in $S - M$ can be estimated by interpolation based on the data in $M$. The data space of the entire monitoring area is set to

$$D_S(t_l) = D_M(t_l) \cup \hat{D}_{S-M}(t_l) \tag{8}$$

where $D_M(t_l)$ represents the data set collected from the data acquisition nodes in $M$ at epoch $t_l$; $D_{S-M}(t_l)$ is the data set that would be collected from the non-data acquisition locations $S - M$ at epoch $t_l$ in the ideal case where sensors were deployed there; and $\hat{D}_{S-M}(t_l)$ is the data interpolation set based on the data in $M$.

The problem definition is how to process the data $D_M(t_l)$ so that $\hat{D}_{S-M}(t_l)$ is as close as possible to $D_{S-M}(t_l)$, that is, the problem of minimizing the error between $\hat{D}_{S-M}(t_l)$ and $D_{S-M}(t_l)$. Its mathematical expression is as follows:

$$\begin{aligned} &\arg\min\ \left\| \hat{D}_{S-M}(t_l) - D_{S-M}(t_l) \right\| \\ &\text{s.t.}\ Q \ne \mathbf{1} \end{aligned} \tag{9}$$

where $\|\cdot\|$ is the Euclidean norm, used to evaluate the error between $\hat{D}_{S-M}(t_l)$ and $D_{S-M}(t_l)$, and $Q \ne \mathbf{1}$ indicates that $Q$ is not an all-1 vector.

$$\hat{D}_{S-M}(t_l) = \left\{ \hat{d}_{s_j}(t_l) \mid \hat{d}_{s_j}(t_l) = E\left(d'_{s_j}(t_l), d''_{s_j}(t_l), \ldots, d^{n}_{s_j}(t_l)\right),\ s_j \in S - M \right\} \tag{10}$$

where $d'_{s_j}(t_l), d''_{s_j}(t_l), \ldots, d^{n}_{s_j}(t_l)$ represent values close to $d_{s_j}(t_l)$. Suppose $s_i$ is the data acquisition node closest to $s_j$; the value received from the data acquisition node nearest the non-data acquisition location is also close to $d_{s_j}(t_l)$:

$$\because\ \mathrm{dist}_{s_i \to s_j} = \min\left(\mathrm{dist}_{M \to s_j}\right), \quad s_i \in M \tag{11}$$

where $\mathrm{dist}_{M \to s_j}$ denotes the distances from each data acquisition node in $M$ to the non-data acquisition location $s_j$. The closer the node from which information is collected, the greater the correlation [15]:

$$\therefore\ d'_{s_j}(t_l) = d_{s_i}(t_l) \tag{12}$$

where $d_{s_i}(t_l)$ represents the value of the physical quantity actually collected by the data acquisition node $s_i$.

Suppose that $d''_{s_j}(t_l)$ represents the assessed value at the $j$th non-data acquisition location, which is obtained by the back propagation artificial neural network operator. In this paper, we obtain a $d''_{s_j}(t_l)$ that is close to $d_{s_j}(t_l)$. The data set of non-data acquisition locations for interpolation is

$$d''_{s_j}(t_l) = \Upsilon_{s_j}\left(b_{s_i}(t_l)\right), \quad s_j \in S - M,\ s_i \in M,$$
$$D''_{S-M}(t_l) = \Upsilon_{S-M}\left(B(t_l)\right) \tag{13}$$

where $\Upsilon_{S-M}(\cdot)$ represents the learning operator used to assess $d_{s_j}(t_l)$ and $B(t_l)$ represents the received data set at epoch $t_l$.

We can use the back propagation artificial neural network operator of the data acquisition node closest to the non-data acquisition location to predict the data of that location.

Figure 1: Incomplete acquisition process based on our proposed method.

4. Learning Operator Set of Data Acquisition Nodes

The mathematical model of the learning operator for the reconstruction is as follows:

$$\begin{aligned} &\arg\min\ \left\| \Upsilon(\mathrm{Inp}) - \mathrm{Tar} \right\| \\ &\text{s.t.}\ \mathrm{Inp} \ne \mathrm{Tar} \end{aligned} \tag{14}$$

where $\mathrm{Tar}$ represents the learning goal, $\Upsilon(\cdot)$ is the learning operator, and $\mathrm{Inp}$ is its input; $\mathrm{Inp}$ can be individuals, variables, even algorithms or functions, sets, and so on. $\mathrm{Inp}$ and $\mathrm{Tar}$ are different; if they had no difference, there would be no need to learn for reconstruction. The purpose of learning is to make $\Upsilon(\mathrm{Inp})$ gradually approach $\mathrm{Tar}$.

Because the data of a WSN are error-prone, we need fault-tolerant and robust learning operators. The learning operator in this paper uses a back propagation (BP) artificial neural network. We can use the data of the data acquisition nodes to predict the data of a non-data acquisition location, thus assessing the data space of the entire monitoring area. Because of the strong robustness and adaptability of the artificial neural network, we can use it to interpolate the data of non-data acquisition locations even in the case of data errors.

The BP artificial neural network is a multilayer (at least 3 layers) feedforward network based on error backpropagation. Because of its characteristics, such as nonlinear mapping, multiple inputs and outputs, and self-organizing self-learning, the BP artificial neural network is well suited to dealing with complex nonlinear multiple-input multiple-output problems. The BP artificial neural network model is composed of an input layer, a hidden layer (which can be multilayer, but is at least one layer), and an output layer. Each layer is composed of a number of juxtaposed neurons. The neurons in the same layer are not connected to each other, and the neurons in adjacent layers are fully interconnected [16].

After constructing the topology of the artificial neural network, it is necessary to train the network in order to make it intelligent. For the BP artificial neural network, the learning process is accomplished by forward propagation and reverse correction propagation. As each data acquisition node is related to the other data acquisition nodes and each has historical data, the network can be trained on the historical data of the data acquisition nodes to generate the artificial neural network learning operator of each data acquisition node.

4.1. Transform Function of the Input Unit. The data of the non-data acquisition location s_j is assessed by using the inverse-distance learning operator of the data acquisition node s_i closest to s_j. Because there is spatial correlation between the interested locations in spatial acquisition, in this paper we adopt the IDWI algorithm combined with the BP artificial neural network algorithm.

At a certain epoch t_l, the data set of all data acquisition nodes other than s_i is as follows:

$$B_{M-s_i}(t_l) = \begin{bmatrix} b_{s_o}^{1}(t_l) & \cdots & b_{s_o}^{k}(t_l) \\ \vdots & b_{s_q}^{p}(t_l) & \vdots \\ b_{s_m}^{1}(t_l) & \cdots & b_{s_m}^{k}(t_l) \end{bmatrix},\quad s \in M - s_i \qquad (15)$$

In this paper, the inverse-distance weight is used to construct the transform function. The transform function h_{s_i}(B_{M-s_i}(t_l)) of the input unit B_{M-s_i}(t_l) of the artificial neural network of data acquisition node s_i is as follows:

$$h_{s_i}\left(B_{M-s_i}(t_l)\right) = \frac{\left(1/\operatorname{dist}_{s_q \to s_i}^{\gamma}\right) \times B_{M-s_i}(t_l)}{\sum_{s_q \in M-s_i}^{\,|M|-1} 1/\operatorname{dist}_{s_q \to s_i}^{\gamma}},\quad s_q \in M - s_i,\ s_i \in M \qquad (16)$$

where dist_{s_q→s_i} represents the distance between the data acquisition nodes s_i and s_q, and γ represents the weighted power exponent of the distance reciprocal; \sum_{s_q \in M-s_i}^{|M|-1} 1/\operatorname{dist}_{s_q \to s_i}^{\gamma} represents the sum of the weighted reciprocals of the distances from the data acquisition node s_i to the rest of the data acquisition nodes.

The artificial neural network requires a training set. We take the historical data over the period T of all data acquisition nodes as the training set, with T = {t_0, t_1, …, t_l, …, t_m}. In practical engineering, it is feasible to collect the data from the data acquisition nodes over a period for learning.

In this paper, the data collected from all data acquisition nodes are used as the training set B_{M-s_i}(T). B_{M-s_i}(T) is a three-order tensor, as shown in Figure 2: B_{M-s_i}(T) ∈ R^{k×|M-s_i|×t_m}. The elements of the training set are indexed by physical quantity, acquisition node, and epoch.

The B_{M-s_i}(T) matrix is obtained by

$$B_{M-s_i}(T) = \begin{bmatrix}
b_{s_o}^{1}(t_0) & \cdots & b_{s_o}^{k}(t_0) \\
\vdots & & \vdots \\
b_{s_m}^{1}(t_0) & \cdots & b_{s_m}^{k}(t_0) \\
b_{s_o}^{1}(t_1) & \cdots & b_{s_o}^{k}(t_1) \\
\vdots & & \vdots \\
b_{s_m}^{1}(t_1) & \cdots & b_{s_m}^{k}(t_1) \\
\vdots & & \vdots \\
b_{s_o}^{1}(t_m) & \cdots & b_{s_o}^{k}(t_m) \\
\vdots & & \vdots \\
b_{s_m}^{1}(t_m) & \cdots & b_{s_m}^{k}(t_m)
\end{bmatrix},\quad s \in M - s_i \qquad (17)$$

6 Journal of Sensors

In actual engineering, B_{M-s_i}(T) contains a lot of noise, and it is necessary to clean it; otherwise, it will degrade the estimation accuracy of the artificial neural network and the precision of learning. Data cleaning is the process of reexamining and verifying data to remove duplicate information, correct existing errors, and provide data consistency. Various filter algorithms can be used to obtain a clean B_{M-s_i}(T).

4.2. Artificial Neural Network Learning Operator. The sensing data of a real network include several physical quantities, such as temperature, humidity, and illumination. Usually, a variety of sensors are deployed at an acquisition node, and multiple physical quantities are collected at the same time. Physical quantities at the same acquisition node have the same temporal and spatial variation trend, and in the process of recovery using the artificial neural network, the quantities mutually reinforce each other. The distance between data acquisition nodes and non-data acquisition locations is very easy to obtain. Based on the inverse-distance interpolation algorithm and the BP artificial neural network algorithm, we propose a multidimensional inverse-distance BP artificial neural network learning operator.

The BP artificial neural network, the most widely used one-way propagating multilayer feedforward network, is characterized by continuous adjustment of the network connection weights so that any nonlinear function can be approximated with arbitrary accuracy. The BP artificial neural network is self-learning and adaptive and has robustness and generalization ability. Its training is a form of supervised learning. The training process of the BP artificial neural network is shown in Figure 3.

The input information is first transmitted to the nodes of the hidden layer through weighted threshold summation; then the output information of the hidden nodes is transmitted to the output nodes through weighted threshold summation after the operation of the transfer function of each element, and finally the output result is given. The purpose of network learning is to obtain the right weights and thresholds. The training process consists of two parts: forward and backward propagation.

The BP artificial neural network is a multilayer feedforward network trained by the error backpropagation algorithm. Its structure is a multilayer network that has not only input nodes and output nodes but also one or more layers of hidden nodes. As demonstrated in Figure 3, according to the prediction error, the weights v and w are continuously adjusted, and finally the BP artificial neural network learning operator for reconstructing the data of data acquisition node s_i can be determined, as shown in the part ϒ_{s_i}(·) of Figure 3.

The input layer of the BP artificial neural network learning operator is the data collected at a certain time at all acquisition nodes except the s_i acquisition node, that is, the vector h_{s_i}(B_{M-s_i}(t_l)). The input of the jth neuron in the hidden layer is calculated by

$$\Sigma_j = \sum_{q=1}^{|M|-1} \sum_{p=1}^{k} \nu_{qjp}\, h_{s_i}\left(B_{M-s_i}(t_l)\right) - \theta_j \qquad (18)$$

where ν_{qjp} represents the weight between the (q, p)th input neuron and the jth neuron in the hidden layer. The (q, p)th input neuron is h_{s_i}(b_{s_q}^{k}(t_l)), that is, the kth-dimension data of the s_q acquisition node at time t_l. θ_j is the threshold of the jth neuron in the hidden layer. The output of each neuron in the hidden layer is calculated by

$$ho_j = \frac{1}{1 + e^{-\Sigma_j}} \qquad (19)$$

Similarly, the output of each neuron in the output layer is denoted by o_p.
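The weighted threshold summation of Equation (18) followed by the sigmoid of Equation (19) amounts to a single hidden-layer forward pass; a minimal sketch, in which the flattened input vector and the names below are illustrative assumptions:

```python
import numpy as np

def forward_hidden(h_in, v, theta):
    """Hidden-layer forward pass following Eqs. (18) and (19).

    h_in  : (n_in,) flattened transformed input vector h_si(B_{M-si}(t_l)).
    v     : (n_hidden, n_in) weights between the input and hidden neurons.
    theta : (n_hidden,) thresholds of the hidden neurons.
    """
    sigma = v @ h_in - theta              # Eq. (18): weighted sum minus threshold
    return 1.0 / (1.0 + np.exp(-sigma))   # Eq. (19): sigmoid activation ho_j
```

With zero inputs and zero thresholds, every hidden neuron outputs 0.5, the midpoint of the sigmoid.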

Figure 2: The model of B_{M-s_i}(T). The elements of the three-order tensor are indexed by physical quantity (1…k), acquisition node (s_o…s_m), and epoch (t_0…t_m).


The sum of squared errors of all neurons in the output layer is the objective function value optimized by the BP artificial neural network algorithm, which is calculated by

$$E = \frac{1}{(|M|-1) \times k} \sum_{q=1}^{|M|-1} \sum_{p=1}^{k} \left(b_{s_i}^{p} - o_p\right)^2 \qquad (20)$$

where b_{s_i}^{p} represents the expected output value of neuron o_p in the output layer, corresponding to the value of the pth physical quantity sensed by the data acquisition node s_i. According to the gradient descent method, the error of each neuron in the output layer is obtained by

$$E_p = o_p \times \left(1 - o_p\right) \times \left(b_{s_i}^{p} - o_p\right) \qquad (21)$$

The weights and thresholds of the output layer can be adjusted by

$$\Delta\omega = \eta \times E_p \times ho_j \qquad (22)$$

$$\Delta\theta_p = \eta \times E_p \qquad (23)$$

where η ∈ (0, 1) is the learning rate, which reflects the speed of training and learning. Similarly, the weight and threshold updates of the hidden layer can be obtained. If the desired results cannot be obtained from the output layer, the weights and thresholds are adjusted repeatedly, gradually reducing the error. The BP neural network has strong self-learning ability and can quickly obtain the optimal solution.
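A sketch of the output-layer correction step of Equations (21)-(23); rendering the learning rate as `lr` and returning the increments rather than applying them in place are choices of this sketch, not mandated by the paper.

```python
import numpy as np

def output_layer_update(o, target, ho, lr=0.1):
    """Gradient-descent correction of the output layer, Eqs. (21)-(23).

    o      : (k,) current outputs o_p of the output neurons.
    target : (k,) expected outputs b_si^p (the supervising sensor readings).
    ho     : (J,) outputs of the hidden neurons.
    lr     : learning rate in (0, 1).
    """
    err = o * (1.0 - o) * (target - o)   # Eq. (21): error of each output neuron
    d_w = lr * np.outer(ho, err)         # Eq. (22): weight increments, shape (J, k)
    d_theta = lr * err                   # Eq. (23): threshold increments
    return err, d_w, d_theta
```

The `o * (1 - o)` factor is the derivative of the sigmoid of Equation (19), which is why the same activation appears in both the forward pass and the correction step.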

Figure 3: The training process of the BP artificial neural network. The input layer receives the transformed historical data h_{s_i}(b_{s_q}^{p}(t_l)) of all acquisition nodes except s_i; the input-to-hidden weights ν and hidden-to-output weights ω are adjusted by error back propagation against the supervising data b_{s_i}(T), yielding the operator ϒ_{s_i}(·).


The mathematical model of Figure 3 is as follows:

$$f_{op}\left(\sum_{j=1}^{J} \omega_j\, f_{\Sigma_j}\left(\sum_{q=1}^{|M|-1} \sum_{p=1}^{k} \nu_{qjp}\, h_{s_i}\left(B_{M-s_i}(T)\right)\right)\right) \longrightarrow b_{s_i}^{p}(T) \qquad (24)$$

where B_{s_q}(T) is the training data, that is, the historical data of the acquisition nodes except s_i in the period T; h_{s_i}(·) is the transform function of the input layer; ν_{qj} is the weight of the input layer; f_{Σ_j}(·) is the transfer function from the input layer to the hidden layer; ω is the weight from the hidden layer to the output layer; f_{op}(·) is the transfer function from the hidden layer to the output layer; and b_{s_i}(T) is the supervising data, that is, the historical data of acquisition node s_i in the period T.

From (14) and (24), the following formula can be obtained:

$$\Upsilon_{s_i}(\cdot) = f_{op}\left(\sum_{j=1}^{J} \omega_j\, f_{\Sigma_j}\left(\sum_{q=1}^{|M|-1} \sum_{p=1}^{k} \nu_{qjp}(\cdot)\right)\right)\Bigg|_{b_{s_i}(T)} \qquad (25)$$

where ϒ_{s_i}(·) represents the inverse-distance BP artificial neural network learning operator of acquisition node s_i, and |_{b_{s_i}(T)} denotes that this neural network operator ϒ_{s_i}(·) is a learning operator trained with the data of s_i as the tutor information.

The data collected by the data acquisition nodes in the monitoring area are related to each other. The data of one data acquisition node can be learned by inputting the data of the other data acquisition nodes into the learning operator. The learning operator set of the data acquisition nodes in the whole monitoring area is as follows:

$$\Upsilon(\cdot) = \left\{\Upsilon_{s_i}(\cdot) \mid s_i \in M\right\} \qquad (26)$$

5. Interpolation at Non-Data Acquisition Locations

We transfer the BP artificial neural network operators from data acquisition nodes to non-data acquisition locations. We can use the learning operator of the data acquisition node closest to a non-data acquisition location to estimate the data at that location. There are four ways to implement learning operator transferring: sample transferring, feature transferring, model transferring, and relationship transferring. In this paper, model transferring (also called parameter transferring) is used to better combine with the BP artificial neural network; that is, we interpolate with the pretrained BP artificial neural network. The BP artificial neural network learning operator has strong robustness and adaptability. In the interested area, if the collected physical quantity is highly correlated and does not change drastically between the various interested locations, the learning operator can be transferred.

Since the construction of an artificial neural network requires historical data for training, and there is no historical data at a non-data acquisition location where no sensor is deployed, it is very difficult to construct an artificial neural network for the non-data acquisition location. However, we can predict the physical quantities of non-data acquisition locations by using the learning operator of the nearest data acquisition node.

5.1. Transform Function Corresponding to Nonacquisition Location s_j. First, we use the IDWI method to construct the transform function. By analogy with the transform function h_{s_i}(B_{M-s_i}(t_l)) of the input unit of acquisition node s_i, the transform function for the non-data acquisition location s_j is defined as

$$h_{s_j}\left(B_{M-s_i}(t_l)\right) = \frac{B_{M-s_i}(t_l)/\operatorname{dist}_{s_q \to s_j}^{\gamma}}{\sum_{s_q \in M-s_i}^{\,|M|-1} 1/\operatorname{dist}_{s_q \to s_j}^{\gamma}} \qquad (27)$$

where dist_{s_q→s_j} represents the distance between s_q and s_j, and γ represents the weighted power exponent of the distance reciprocal; \sum_{s_q \in M-s_i}^{|M|-1} 1/\operatorname{dist}_{s_q \to s_j}^{\gamma} represents the sum of the weighted reciprocals of the distances from the non-data acquisition location s_j to the rest of the acquisition nodes. The physical data collected from the remaining |M|−1 data acquisition nodes except s_i are as follows:

$$B_{M-s_i} = \left\{b_{s_q}(t_l) \mid s_q \in M - s_i\right\} \qquad (28)$$

5.2. Learning Operator Transferring. Since the data of the data acquisition node closest to the non-data acquisition location is the most important to, and the most correlated with, the data of the non-data acquisition location, we estimate the data of the non-data acquisition location with the data from the nearest data acquisition node and its learning operator.

Since the data we are targeting is spatially correlated, the smaller the distance between two interested locations is, the smaller the difference of the collected data is; conversely, the greater the distance, the greater the difference. We can therefore use the data of a data acquisition node close to the non-data acquisition location to assess the data at the non-data acquisition location.

Because the BP artificial neural network learning operator has strong robustness and adaptability, in this paper we transfer the inverse-distance BP artificial neural network learning operator of s_i to s_j in order to estimate the data at s_j. The transform function h(·) does not change with time; it is a function of the distance between the sampling locations, h_{s_i}(B_{M-s_i}(t_l)) → h_{s_j}(B_{M-s_i}(t_l)). The number of input parameters is constant, while the input parameters vary with time, so the change of the transform function will not affect the trend of the input parameters. The change of the transform function will not affect the accuracy of prediction, and the operator transferring can be implemented:

$$d''_{s_j}(t_l) = \Upsilon_{s_i}\left(h_{s_j}\left(B_{M-s_i}(t_l)\right)\right) = f_{op}\left(\sum_{j=1}^{J} \omega_j\, f_{\Sigma_j}\left(\sum_{q=1}^{|M|-1} \nu_{qj}\, h_{s_j}\left(B_{M-s_i}(t_l)\right)\right)\right) \qquad (29)$$

The assessment network based on the BP artificial neural network operator is shown in Figure 4.

In this paper, our proposed method is improved on the basis of the BP artificial neural network. According to spatial correlation, the physical quantities of a monitored, interested location close to the data acquisition node s_q can be approximated by the learning operator of s_q.

Suppose \hat{d}_{s_j} represents the estimated value of the physical quantity at the interested location s_j. Due to conditional restrictions, no sensor is deployed at the interested location s_j. We choose the data acquisition node s_i nearest to the non-data acquisition location s_j for estimation. We can use Algorithm 1 to determine s_i.
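The nearest-node selection of Algorithm 1 is a minimum search over Euclidean distances; a runnable sketch, with a hypothetical dictionary-based data structure:

```python
import math

def nearest_node(nodes, sj):
    """Return the id of the acquisition node in M closest to location s_j.

    nodes : dict mapping a node id to its (x, y) coordinates.
    sj    : (x, y) coordinates of the non-data acquisition location.
    """
    xj, yj = sj
    # math.hypot computes sqrt(dx^2 + dy^2), the Euclidean distance of Algorithm 1
    return min(nodes, key=lambda q: math.hypot(xj - nodes[q][0], yj - nodes[q][1]))
```

`min` over the keys with a distance key-function replaces the explicit `mindist` bookkeeping of the pseudocode.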

5.3. Assessment at the Non-Data Acquisition Location. s_i is the data acquisition node nearest to s_j, and we use its learning operator to estimate \hat{d}_{s_j}. This is an improvement on the inverse-distance interpolation method. Because s_i is closest to s_j, the correlation of their data is the largest, and the data actually collected by s_i have the greatest impact on the predicted value at s_j:

$$\hat{d}_{s_j} = E\left(d'_{s_j}(t_l), d''_{s_j}(t_l)\right) = \alpha \times d'_{s_j}(t_l) + \beta \times d''_{s_j}(t_l),\quad s_i \in M,\ s_j \in S - M \qquad (30)$$

where α and β denote the weights of d'_{s_j}(t_l) and d''_{s_j}(t_l), respectively, used to estimate the physical quantity at s_j, and α + β = 1.

d'_{s_j}(t_l) is the value of the actual measurement, so its credibility is higher. The closer s_i is to s_j, the greater the correlation between the physical quantity at s_j and the data collected by s_i, based on spatial correlation.

We assume that dist_{s_i→s_j} represents the distance between s_i and s_j. The influence weight of the data collected by s_i on the assessment of s_j decreases with increasing dist_{s_i→s_j}; conversely, the greater dist_{s_i→s_j} is, the smaller the impact. The range of dist_{s_i→s_j} is [0, +∞). We find that the inverse tangent function is a bounded, increasing function on [0, +∞); the function curve is shown in Figure 5.

Using the boundedness and monotonically increasing characteristics of the arctangent function curve, and inspired by the idea of IDWI, the formula for calculating β is as follows:

$$\beta = \frac{\arctan\left(\operatorname{dist}_{s_i \to s_j}\right)}{\pi/2},\quad \operatorname{dist}_{s_i \to s_j} \in [0, +\infty) \qquad (31)$$

This limits the value of β to the interval [0, 1). When dist_{s_i→s_j} is close to 0, β is close to 0 and 1 − β is close to 1; then the value measured by the data acquisition node dominates the estimate at the non-data acquisition location. When dist_{s_i→s_j} = 0, a sensor is in fact deployed at s_j, so we do not need the value calculated by the prediction algorithm and directly use the actual measured data:

$$\hat{d}_{s_j} = \frac{\arctan\left(\operatorname{dist}_{s_i \to s_j}\right)}{\pi/2} \times d''_{s_j}(t_l) + \left(1 - \frac{\arctan\left(\operatorname{dist}_{s_i \to s_j}\right)}{\pi/2}\right) \times d'_{s_j}(t_l),\quad s_i \in M,\ s_j \in S - M \qquad (32)$$

If the interested location is still far from the nearest data acquisition node, this algorithm will produce a large error. However, since we use the sensor placement based on iteratively dividing four subregions, the sensors of the data acquisition nodes are spread omnidirectionally throughout the space rather than concentrated in a certain domain, so the error is not too large.
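The arctangent weighting of Equations (31) and (32) can be written compactly; `fuse_estimate` is an illustrative name for this sketch.

```python
import math

def fuse_estimate(d_measured, d_predicted, dist):
    """Fuse the nearest node's measurement with the transferred-operator
    prediction using the arctangent weight of Eqs. (31)-(32)."""
    beta = math.atan(dist) / (math.pi / 2.0)   # Eq. (31): beta in [0, 1)
    # Eq. (32): the farther s_j is from s_i, the more the prediction dominates
    return beta * d_predicted + (1.0 - beta) * d_measured
```

At `dist = 0` the measurement is returned unchanged, matching the remark that a co-located sensor needs no prediction; as `dist` grows, the weight shifts smoothly toward the operator's prediction.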

6. Experiments and Evaluation

6.1. Parameter Setup. The data set we used is the measured data provided by the Intel Berkeley Research lab [9]. The data was collected from 54 data acquisition nodes in the Intel Berkeley Research lab between February 28th and April 5th, 2004. In this data set, the epoch is a monotonically increasing sequence number from each mote; two readings with the same epoch number were produced by different motes at the same time. There are some missing epochs in the data set. Mote IDs range from 1 to 54. In this experiment, we selected the data of the following 9 motes as the set of interested locations, because these 9 motes share the same epochs:

$$S = \left\{s_7(22.5, 8),\ s_{18}(5.5, 10),\ s_{19}(3.5, 13),\ s_{21}(4.5, 18),\ s_{22}(1.5, 23),\ s_{31}(15.5, 28),\ s_{46}(34.5, 16),\ s_{47}(39.5, 14),\ s_{48}(35.5, 10)\right\} \qquad (33)$$

When the environment of the monitoring area is not very complicated, the collected physical quantity is strongly spatially correlated, and it is feasible to deploy sensors using the placement based on iteratively dividing four subregions. In this experiment, we use the method of iteratively dividing four subregions to select the interested locations from S as data acquisition nodes. The s_i(x_i, y_i) (s_i(x_i, y_i) ∈ S) closest to the deployment location


Figure 4: The assessment network based on the BP artificial neural network operator. The operator ϒ_{s_i}(·), trained on the inputs h_{s_i}(b^p(T)), is transferred: its inputs are replaced by h_{s_j}(b^p(t_l)), and its outputs become the estimates d''^p_{s_j}(t_l).


generated by the method of iteratively dividing four subregions is selected into M.

In this experiment, we set the dimension of the collected data to 2, that is, k = 2; we take two physical quantities, temperature and humidity. At least two data acquisition nodes are needed, i.e., |M| ≥ 2, because one data acquisition node must be the node closest to the interested location for interpolation (s_i ∉ ∅), and the data of the other acquisition nodes must serve as the training set of the artificial neural network (M − s_i ⊄ ∅, |M − s_i| ≥ 1).

The epochs we chose are T = {33, 35, 42, 44, 51, 56, 61, 62, 70, 73, 86, 88, 89, 91, 95, 99, 102, 107, 112, 113, 123, 124, 127}. The data whose epoch is an element of T are used as the training set. Unfortunately, the actual sampled sensing data are always corrupted, and some values go missing. We need clean sensing data to train the BP artificial neural networks in order to improve the interpolation precision.

In order not to lose the trend of data change in the process of data filtering, we simply use mean processing with a filter window along the temporal direction of the measurements to obtain the clean sensing data. If |d_{s_i}(t-1) − d_{s_i}(t)| ≫ δ × |d_{M-s_i}(t-1) − d_{M-s_i}(t)|, then d_{s_i}(t) is replaced by the value calculated as

$$d_{s_i}(t) = \frac{d_{s_i}(t-1) + d_{s_i}(t+1)}{2} \qquad (34)$$

where δ is the adjusting coefficient. In this experiment, we take δ = 10.
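The outlier test and the replacement rule of Equation (34) can be sketched as follows; the list-based interface and the name `clean_series` are assumptions of this sketch, and the neighbour-jump series is assumed to be precomputed.

```python
def clean_series(values, neighbor_jumps, delta=10.0):
    """Temporal mean filtering of one node's measurements (Eq. (34)).

    values         : readings d_si(t) for t = 0..m.
    neighbor_jumps : mean jump |d_{M-si}(t-1) - d_{M-si}(t)| of the other
                     nodes at each epoch (an assumed precomputed input).
    delta          : adjusting coefficient (10 in the experiments).
    """
    out = list(values)
    for t in range(1, len(values) - 1):
        if abs(values[t - 1] - values[t]) > delta * neighbor_jumps[t]:
            # Eq. (34): replace the outlier by the mean of its temporal neighbours
            out[t] = (values[t - 1] + values[t + 1]) / 2.0
    return out
```

Because the replacement is the mean of the two adjacent readings, the local trend of the series is preserved, which is the stated goal of the filter.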

The filtered temperature measurements are shown in Figure 6(a) and the actual sensing data in Figure 6(b); the filtered humidity measurements are shown in Figure 7(a) and the actual sensing data in Figure 7(b).

When comparing the test results, the epoch of the data supplied by each mote must be the same, because in an actual application the data space of the monitoring area is built at a certain moment. We assumed that t_l = 138; that is, the time we selected for interpolation was the 138th epoch. We compare the values actually collected at the 138th epoch with the interpolation calculated by the algorithms.

To evaluate the accuracy of the reconstructed measurement, we choose the mean relative error (MRE). It reflects the precision of the estimated data relative to the measured data. The formula for the calculation of MRE is as follows [17]:

$$\mathrm{MRE} = \frac{1}{n} \sum_{i=1}^{n} \frac{\left|\hat{d}(x_i) - d(x_i)\right|}{d(x_i)} \qquad (35)$$

where d(x_i) is the actual acquisition value of the ith sensor, \hat{d}(x_i) is the corresponding assessed value, and n is the total number of data acquisition nodes.
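Equation (35) translates directly into code; this one-liner is a sketch with an assumed list-based interface.

```python
def mean_relative_error(estimated, actual):
    """Mean relative error of Eq. (35): average of |d_hat - d| / d."""
    return sum(abs(e - a) / a for e, a in zip(estimated, actual)) / len(actual)
```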

6.2. Results of the Experiment. This experiment is conducted in a small range, the physical quantities of the collected data are highly correlated, and no physical quantity changes drastically between the various interested locations, so the learning operator can be transferred.

In the case of data loss, we compare our method with the inverse-distance-weighted interpolation algorithm in terms of interpolation accuracy; here q_{s_i} = 0. Since the data acquisition nodes suffering data loss are random, we conducted 20 tests for a more accurate comparison. Obviously, the number of acquisition nodes suffering data loss must be less than or equal to the total number of acquisition nodes. The results are shown in Figures 8 and 9.

As can be seen from Figures 8 and 9, as the proportion of lost data among all collected data decreases, the interpolation error is gradually reduced. The curve of the proposed algorithm is relatively flat, indicating that it is less affected by data loss.

In the case of data loss, especially when the number of data acquisition nodes is relatively small, the interpolation error of our algorithm is much smaller than that of the inverse-distance-weighted interpolation.

In the case of data disturbance, we again compare our method with the inverse-distance-weighted interpolation algorithm in terms of interpolation accuracy; here q_{s_i} = N(\bar{d}_{s_i}(t_l), σ²). For the interpolation of temperature, we use the mean temperature of all acquisition nodes as the mean value of the Gauss distribution. We set the parameters

Figure 5: Inverse tangent function curve, bounded by π/2 on [0, +∞).

Input: M, s_j(x_j, y_j)
Output: s_i
Initialize mindist ← +∞
For each s_q in M:
    dist_{s_j→s_q} ← ((x_j − x_q)² + (y_j − y_q)²)^{1/2}
    If mindist > dist_{s_j→s_q} then
        i ← q; mindist ← dist_{s_j→s_q}
    End if
End for

Algorithm 1: Culling s_i from M.


as \bar{d}_{s_i}(t_l) = 18.47895 and σ² = 18. For the interpolation of humidity, we use the mean humidity of all acquisition nodes as the mean value of the Gauss distribution, and we set the parameters as \bar{d}_{s_i}(t_l) = 39.44691 and σ² = 39. Again, we conducted 20 tests. The results are shown in Figures 10 and 11.
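The disturbance model used in this comparison draws a Gaussian value at the affected nodes. Whether the drawn value replaces or offsets the original reading is not fully specified in the text, so the replacement variant below is an assumption of this sketch, as are the function name and the dictionary interface.

```python
import random

def disturb(readings, node_ids, mean, sigma2):
    """Inject a Gaussian disturbance N(mean, sigma2) at the chosen nodes.

    readings : dict of node id -> measured value.
    node_ids : ids of the nodes where the disturbance occurs.
    mean     : mean of the Gauss distribution (mean reading of all nodes).
    sigma2   : variance of the Gauss distribution.
    """
    noisy = dict(readings)
    for i in node_ids:
        # Assumption of this sketch: the reading is replaced by the drawn value
        noisy[i] = random.gauss(mean, sigma2 ** 0.5)
    return noisy
```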

It can be seen from Figures 10 and 11 that the variation curve of the proposed algorithm is relatively flat, while the curve of the inverse-distance-weighted interpolation algorithm fluctuates greatly. The interpolation error of the IDWI algorithm is affected not only by data disturbance but also by the deployment locations of the data acquisition nodes. When 7 acquisition nodes are deployed, the error of the IDWI algorithm is the smallest, because the sensor placement based on iteratively dividing four subregions is then nearly uniform.

When there are not many acquisition nodes, the density of acquisition nodes at which data disturbance occurs is large, so the interpolation error is more prominent. As the number of deployed sensors increases, the interpolation error decreases.

As we can see from Figures 8–11, our algorithm is insensitive to errors and strongly robust when the data is wrong, while erroneous data greatly degrades the interpolation accuracy of the inverse-distance interpolation method. In particular, when the error rate is high, the relative error of our algorithm is much lower than that of the inverse-distance interpolation algorithm: when the error rate is 50%, our algorithm has a relative error of 0.1, while the relative error of the inverse-distance interpolation algorithm is 3.5, as shown in Figure 9(a).

7. Conclusions

In this paper, we proposed a robust data interpolation based on a back propagation artificial neural network operator for incomplete acquisition in a wireless sensor network. Under the incomplete collection strategy of a WSN, the effect of complete acquisition can be approximately obtained by interpolation. With a limited number of data acquisition nodes, the data of the acquisition nodes are used for training to obtain the learning operator. Then the learning operator of the acquisition node

Figure 7: The humidity sensing data from the Intel Berkeley Research lab: (a) the filtered sensing data; (b) the actual sensing data.

Figure 6: The temperature sensing data from the Intel Berkeley Research lab: (a) the filtered sensing data; (b) the actual sensing data.


Figure 9: The humidity sensing data from the Intel Berkeley Research lab: (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at data acquisition nodes increased from 2 to 8.

Figure 8: The temperature sensing data from the Intel Berkeley Research lab: (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at data acquisition nodes increased from 2 to 8.


closest to the non-data acquisition location is transferred to the non-data acquisition location for interpolation. Considering that the data collected by the WSN is prone to error, we analyzed the reasons for the error. In order to improve the fault tolerance and robustness of the interpolation algorithm, we proposed a BP artificial neural network learning

Figure 11: The humidity sensing data from the Intel Berkeley Research lab: (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at data acquisition nodes increased from 2 to 8.

Figure 10: The temperature sensing data from the Intel Berkeley Research lab: (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at data acquisition nodes increased from 2 to 8.


operator. The experiments demonstrated that our algorithm has strong robustness and a lower error in the presence of errors in the data collected by the WSN. This method has strong potential for practical data visualization, data analysis, WSN monitoring, etc.

Data Availability

The measured data that support the findings of this study are supplied by the Intel Berkeley Research lab. They are available from http://db.csail.mit.edu/labdata/labdata.html.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This work was financially supported by the National Natural Science Foundation of China (Grant Nos. 61561017 and 61462022), the Innovative Research Projects for Graduate Students of Hainan Higher Education Institutions (Grant No. Hyb2017-06), the Hainan Province Major Science & Technology Project (Grant No. ZDKJ2016015), the project of the Natural Science Foundation of Hainan Province in China (Grant No. 617033), the open project of the State Key Laboratory of Marine Resource Utilization in South China Sea (Grant No. 2016013B), and the Key R&D Project of Hainan Province (No. ZDYF2018015).

References

[1] F. Ding and A. Song, "Development and coverage evaluation of ZigBee-based wireless network applications," Journal of Sensors, vol. 2016, Article ID 2943974, 9 pages, 2016.

[2] O. Alvear, W. Zamora, C. Calafate, J. C. Cano, and P. Manzoni, "An architecture offering mobile pollution sensing with high spatial resolution," Journal of Sensors, vol. 2016, Article ID 1458147, 13 pages, 2016.

[3] W. Xiao, The Key Technology of Adaptive Data Fault Tolerance in Wireless Sensor Networks, National Defense University of Science and Technology, Hunan, China, 2010.

[4] Z. Liang, F. Tian, C. Zhang, H. Sun, A. Song, and T. Liu, "Improving the robustness of prediction model by transfer learning for interference suppression of electronic nose," IEEE Sensors Journal, vol. 18, no. 3, pp. 1111–1121, 2017.

[5] Z. Yang, W. Qiu, H. Sun, and A. Nallanathan, "Robust radar emitter recognition based on the three-dimensional distribution feature and transfer learning," Sensors, vol. 16, no. 3, pp. 289–303, 2016.

[6] S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2010.

[7] B. Pan, J. Tai, Q. Zheng, and S. Zhao, "Cascade convolutional neural network based on transfer-learning for aircraft detection on high-resolution remote sensing images," Journal of Sensors, vol. 2017, Article ID 1796728, 14 pages, 2017.

[8] J. Park, R. Javier, T. Moon, and Y. Kim, "Micro-Doppler based classification of human aquatic activities via transfer learning of convolutional neural networks," Sensors, vol. 16, no. 12, pp. 1990–2000, 2016.

[9] http://db.csail.mit.edu/labdata/labdata.html.

[10] Z. Kang and Y. Wang, "Structural topology optimization based on non-local Shepard interpolation of density field," Computer Methods in Applied Mechanics and Engineering, vol. 200, no. 49-52, pp. 3515–3525, 2011.

[11] M. Hammoudeh, R. Newman, C. Dennett, and S. Mount, "Interpolation techniques for building a continuous map from discrete wireless sensor network data," Wireless Communications and Mobile Computing, vol. 13, no. 9, pp. 809–827, 2013.

[12] M. Saberi, A. Azadeh, A. Nourmohammadzadeh, and P. Pazhoheshfar, "Comparing performance and robustness of SVM and ANN for fault diagnosis in a centrifugal pump," in 19th International Congress on Modelling and Simulation, pp. 433–439, Perth, Australia, 2011.

[13] R. Velazco, P. Cheynet, J. D. Muller, R. Ecoffet, and S. Buchner, "Artificial neural network robustness for on-board satellite image processing: results of upset simulations and ground tests," IEEE Transactions on Nuclear Science, vol. 44, no. 6, pp. 2337–2344, 1997.

[14] A. Venkitaraman, A. M. Javid, and S. Chatterjee, "R3Net: random weights, rectifier linear units and robustness for artificial neural network," 2018, https://arxiv.org/abs/1803.04186.

[15] C. Zhang, X. Zhang, O. Li et al., "WSN data collection algorithm based on compressed sensing under unreliable links," Journal of Communication, vol. 37, no. 9, pp. 131–141, 2016.

[16] H. L. Chen and W. Peng, "Improved BP artificial neural network in traffic accident prediction," Journal of East China Normal University (Natural Science), vol. 2017, no. 2, pp. 61–68, 2017.

[17] G. Y. Lu and D. W. Wong, "An adaptive inverse-distance weighting spatial interpolation technique," Computers & Geosciences, vol. 34, no. 9, pp. 1044–1055, 2008.

16 Journal of Sensors


3.2. Data Acquisition Error. Because the wireless communication network is susceptible to interference, attenuation, multipath, blind zones, and other unfavorable factors, the data error rate is high. Nodes and links in wireless sensor networks are inherently error-prone and unpredictable. The erroneous data, which greatly deviate from the ideal true value, are divided into two types: data loss and data disturbance.

(1) Data Loss. Causes such as node failure, broken links, or failed transmissions prevent the data of the corresponding data acquisition nodes from reaching the sink node.

(2) Data Disturbance. Due to a failure, a local part of the WSN enters an incorrect (or ambiguous) system state. This state may cause a deviation between the data measured by the sensors of the corresponding data acquisition node and the true value, or the signal is disturbed during transmission so that the data received at the sink node deviate from the true value. The data that correspond to the data acquisition node are not the desired result: the collected data oscillate in a region near the true value. In this paper, we assume that the data disturbance obeys a Gaussian distribution.

The main idea of our method rests on the fundamental assumption that the sensing data of the WSN can be regarded as a vector indexed by the interested locations and recovered from a subset of the sensing data. As demonstrated in Figure 1, the data acquisition consists of two stages: the data-sensing stage and the data-recovering stage. At the data-sensing stage, instead of deploying sensors and sensing data at all interested locations, a subset of interested locations (the shaded ones in the second subfigure) is selected to sense the physical quantity and deliver the sensing data to the sink node at each data collection round. Some locations are drawn with a fork in the second subfigure because their data is lost or disturbed; the fork represents data errors. When the hardware or software of a network node fails, or a communication link of the network is broken, the set of sensors which encounter data errors is a subset of M. At the data-recovering stage, the sink node receives these incomplete sensing data over some data collection rounds, shown in the third subfigure, in which the shaded entries represent the valid sensing data and the white entries are unknown. We can then use them to recover the complete data with our method.

Here, we adopt a mask operator $A$ to represent the process of collecting data that encounters errors:

$$A\left(D_M(t_l)\right) = B(t_l), \tag{5}$$

where $B$ is the data set that is actually received for interpolation. For the sake of clarity, the operator $A$ can be specified as a vector product as follows:

$$A\left(D_M(t_l)\right) = Q \odot D_M(t_l), \tag{6}$$

where $\odot$ denotes the element-wise product of two vectors, i.e., $b_{s_i}(t_l) = d_{s_i}(t_l) \times q_{s_i}$; $Q$ is a vector of size $1 \times |M|$; and $b_{s_i}$ indicates the data that is actually received from the $i$th data acquisition node for interpolation:

$$q_{s_i} = \begin{cases} 1, & \text{if the data is not in error}, \\ 0, & \text{if the data is lost}, \\ \mathrm{No}\left(d_{s_i}(t_l), \sigma^2\right)/d_{s_i}, & \text{if the data is disturbed}, \end{cases} \tag{7}$$

where $\mathrm{No}(d_{s_i}(t_l), \sigma^2)$ represents a Gaussian distribution with a mean of $d_{s_i}(t_l)$ and a variance of $\sigma^2$.

In this paper, the error rate of the received data is defined as $\|Q\|_{q_{s_i} \neq 1}/|M|$, where $\|Q\|_{q_{s_i} \neq 1}$ is the number of non-1 elements in $Q$.
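As a concrete illustration, the mask operator of (5)–(7) can be sketched in a few lines of NumPy. This is a minimal sketch, not part of the original method: the helper name `apply_mask`, the per-node status flags, and the sample readings are illustrative assumptions.

```python
import numpy as np

def apply_mask(d, flags, sigma2, rng):
    """Sketch of Eqs. (5)-(7): B(t_l) = Q (.) D_M(t_l).

    d      -- ideal readings of the |M| data acquisition nodes
    flags  -- per-node status: 'ok', 'lost', or 'disturbed' (illustrative)
    sigma2 -- variance of the Gaussian disturbance
    """
    q = np.ones_like(d)
    for i, flag in enumerate(flags):
        if flag == 'lost':
            q[i] = 0.0                                    # q = 0: data loss
        elif flag == 'disturbed':
            # reading drawn from No(d_i, sigma^2), so q = No(...)/d_i
            q[i] = rng.normal(d[i], np.sqrt(sigma2)) / d[i]
    b = q * d                                             # Eq. (6)
    error_rate = np.sum(q != 1.0) / len(d)                # non-1 entries of Q over |M|
    return b, error_rate

rng = np.random.default_rng(0)
b, rate = apply_mask(np.array([20.0, 21.0, 19.5, 20.5]),
                     ['ok', 'lost', 'disturbed', 'ok'], 0.25, rng)
```

With one lost and one disturbed node out of four, the error rate of (the paragraph above) comes out as 0.5.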

3.3. Completion of the Data Space with Interpolation. Due to conditional restrictions, there is no way to deploy sensors in $S - M$. The data generated in $S - M$ can be estimated by interpolation based on the data in $M$. The data space of the entire monitoring area is set to

$$D_S(t_l) = D_M(t_l) \cup D_{S-M}(t_l), \tag{8}$$

where $D_M(t_l)$ represents the data set collected from the data acquisition nodes in $M$ at epoch $t_l$, and $D_{S-M}(t_l)$ is the data set that would be collected from the non-data acquisition locations $S - M$ at epoch $t_l$ in the ideal case, if sensors were deployed at those locations. $\hat{D}_{S-M}(t_l)$ is the data interpolation set based on the data in $M$.

The problem is how to process the $D_M(t_l)$ data so that $\hat{D}_{S-M}(t_l)$ is as close as possible to $D_{S-M}(t_l)$, that is, the problem of minimizing the error between $\hat{D}_{S-M}(t_l)$ and $D_{S-M}(t_l)$. Its mathematical expression is as follows:

$$\begin{aligned} &\arg\min \left\| \hat{D}_{S-M}(t_l) - D_{S-M}(t_l) \right\| \\ &\text{s.t. } Q \neq \mathbf{1}, \end{aligned} \tag{9}$$

where $\|\cdot\|$ is the Euclidean norm used to evaluate the error between $\hat{D}_{S-M}(t_l)$ and $D_{S-M}(t_l)$, and $Q \neq \mathbf{1}$ indicates that $Q$ is not an all-1 vector.

$$\hat{D}_{S-M}(t_l) = \left\{ \hat{d}_{s_j}(t_l) \mid \hat{d}_{s_j}(t_l) = E\left(d'_{s_j}(t_l), d''_{s_j}(t_l), \ldots, d^{n}_{s_j}(t_l)\right),\ s_j \in S - M \right\}, \tag{10}$$

where $d'_{s_j}(t_l), d''_{s_j}(t_l), \ldots, d^{n}_{s_j}(t_l)$ represent values close to $d_{s_j}(t_l)$. Suppose $s_i$ is the closest data acquisition node to $s_j$. The value received from the data acquisition node nearest to the non-data acquisition location is also close to $d_{s_j}(t_l)$:

$$\because\ \operatorname{dist}_{s_i \to s_j} = \min\left(\operatorname{dist}_{M \to s_j}\right), \quad s_i \in M, \tag{11}$$

where $\operatorname{dist}_{M \to s_j}$ indicates the distances from each data acquisition node in $M$ to the non-data acquisition location $s_j$. The closer the location where the information is collected, the greater the correlation [15]:

$$\therefore\ d'_{s_j}(t_l) = d_{s_i}(t_l), \tag{12}$$

where $d_{s_i}(t_l)$ represents the value of the physical quantity actually collected by the data acquisition node $s_i$.

Suppose that $d''_{s_j}(t_l)$ represents the assessed value at the $j$th non-data acquisition location, which is obtained by the back propagation artificial neural network operator. In this paper, we obtain a $d''_{s_j}(t_l)$ that is close to $d_{s_j}(t_l)$. The data set of non-data acquisition locations for interpolation is

$$\begin{aligned} d''_{s_j}(t_l) &= \Upsilon_{s_j}\left(b_{s_i}(t_l)\right), \quad s_j \in S - M,\ s_i \in M, \\ D''_{S-M}(t_l) &= \Upsilon_{S-M}\left(B(t_l)\right), \end{aligned} \tag{13}$$

where $\Upsilon_{S-M}(\cdot)$ represents the learning operator used to assess $d_{s_j}(t_l)$, and $B(t_l)$ represents the received data set at epoch $t_l$. We can use the back propagation artificial neural network operator of the data acquisition node closest to the non-data acquisition location to predict the data of the non-data acquisition location.

Figure 1: Incomplete acquisition process based on our proposed method.

4. Learning Operator Set of Data Acquisition Nodes

The mathematical model of the learning operator for the reconstruction is as follows:

$$\begin{aligned} &\arg\min \left\| \Upsilon(\mathrm{Inp}) - \mathrm{Tar} \right\| \\ &\text{s.t. } \mathrm{Inp} \neq \mathrm{Tar}, \end{aligned} \tag{14}$$

where $\mathrm{Tar}$ represents the learning goal and $\Upsilon(\cdot)$ is the learning operator of $\mathrm{Inp}$; $\mathrm{Inp}$ can be individuals, variables, and even algorithms, functions, sets, and so on. The input of $\Upsilon(\cdot)$ is $\mathrm{Inp}$. $\mathrm{Inp}$ and $\mathrm{Tar}$ are different; if they did not differ, there would be no need to learn for reconstruction. The purpose of learning is to make $\Upsilon(\mathrm{Inp})$ gradually approach $\mathrm{Tar}$.

Because the data of a WSN are error-prone, we need fault-tolerant and robust learning operators. The learning operator in this paper uses a back propagation (BP) artificial neural network. We can use the data of the data acquisition nodes to predict the data of the non-data acquisition locations, thus assessing the data space of the entire monitoring area. Because of the strong robustness and adaptability of the artificial neural network, we can use it to interpolate the data of non-data acquisition locations in the presence of data errors.

The BP artificial neural network is a multilayer (at least 3-layer) feedforward network based on error backpropagation. Because of characteristics such as nonlinear mapping, multiple inputs and multiple outputs, and self-organizing self-learning, the BP artificial neural network is well suited to complex nonlinear multiple-input multiple-output problems. The BP artificial neural network model is composed of an input layer, a hidden layer (which can be multilayer, but is at least one layer), and an output layer. Each layer is composed of a number of juxtaposed neurons. The neurons in the same layer are not connected to each other, and the neurons in adjacent layers are fully interconnected [16].

After constructing the topology of the artificial neural network, it is necessary to train the network in order to make it intelligent. For the BP artificial neural network, the learning process is accomplished by forward propagation and reverse correction propagation. As each data acquisition node is related to the other data acquisition nodes, and each data acquisition node has historical data, the network can be trained on the historical data of the data acquisition nodes to generate the artificial neural network learning operator of each data acquisition node.

4.1. Transform Function of the Input Unit. The data of the non-data acquisition location $s_j$ is assessed by using the inverse-distance learning operator of the data acquisition node $s_i$ closest to $s_j$. Because there is spatial correlation between the interested locations, in this paper we combine the IDWI algorithm with the BP artificial neural network algorithm.

At a certain epoch $t_l$, the data set of all data acquisition nodes except $s_i$ is as follows:

$$B_{M-s_i}(t_l) = \begin{bmatrix} b_{s_o,1}(t_l) & \cdots & b_{s_o,k}(t_l) \\ \vdots & b_{s_q,i}(t_l) & \vdots \\ b_{s_m,1}(t_l) & \cdots & b_{s_m,k}(t_l) \end{bmatrix}, \quad b_s \in M - s_i. \tag{15}$$

In this paper, the inverse-distance weight is used to construct the transform function. The transform function $h_{s_i}(B_{M-s_i}(t_l))$ of the input unit $B_{M-s_i}(t_l)$ of the artificial neural network of data acquisition node $s_i$ is as follows:

$$h_{s_i}\left(B_{M-s_i}(t_l)\right) = \frac{\left(1/\operatorname{dist}^{\gamma}_{s_q \to s_i}\right) \times B_{M-s_i}(t_l)}{\sum_{s_q \in M}^{|M|-1} 1/\operatorname{dist}^{\gamma}_{s_q \to s_i}}, \quad s_q \in M - s_i,\ s_i \in M, \tag{16}$$

where $\operatorname{dist}_{s_q \to s_i}$ represents the distance between the data acquisition node $s_i$ and the data acquisition node $s_q$, $\gamma$ represents the weighted power exponent of the distance reciprocal, and $\sum_{s_q \in M}^{|M|-1} 1/\operatorname{dist}^{\gamma}_{s_q \to s_i}$ represents the sum of the weighted reciprocals of the distances from the data acquisition node $s_i$ to the rest of the data acquisition nodes.
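The inverse-distance transform of (16) can be sketched as follows; the function name `idw_transform` and the sample readings and distances are illustrative assumptions, with each node's reading scaled by its normalized inverse-distance weight.

```python
import numpy as np

def idw_transform(b, dists, gamma=2.0):
    """Sketch of Eq. (16): scale each reading of M - s_i by the weight
    1/dist^gamma of its node, normalized by the sum of all weights."""
    w = 1.0 / np.power(dists, gamma)   # inverse-distance weights
    return (b * w) / np.sum(w)         # weighted, normalized input vector

# readings of three nodes in M - s_i and their distances to s_i (illustrative)
h = idw_transform(np.array([18.0, 19.0, 21.0]), np.array([1.0, 2.0, 4.0]))
```

Note that summing the transformed vector recovers the classic IDWI weighted mean of the readings, which is why this transform pairs naturally with the interpolation baseline.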

The artificial neural network requires a training set. We take the historical data of all data acquisition nodes over the period $T = \{t_0, t_1, \ldots, t_l, \ldots, t_m\}$ as the training set. In practical engineering, it is feasible to collect data from the data acquisition nodes over a period for learning.

In this paper, the data collected from all data acquisition nodes are used as the training set $B_{M-s_i}(T)$. $B_{M-s_i}(T)$ is a third-order tensor, as shown in Figure 2: $B_{M-s_i}(T) \in \mathbb{R}^{k \times |M-s_i| \times t_m}$. The elements of the training set are indexed by physical quantity, acquisition node, and epoch.

The $B_{M-s_i}(T)$ matrix is obtained by

$$B_{M-s_i}(T) = \begin{bmatrix} b_{s_o,1}(t_0) & \cdots & b_{s_o,k}(t_0) \\ \vdots & b_{s_q,i}(t_0) & \vdots \\ b_{s_m,1}(t_0) & \cdots & b_{s_m,k}(t_0) \\ b_{s_o,1}(t_1) & \cdots & b_{s_o,k}(t_1) \\ \vdots & b_{s_q,i}(t_1) & \vdots \\ b_{s_m,1}(t_1) & \cdots & b_{s_m,k}(t_1) \\ \vdots & \vdots & \vdots \\ b_{s_o,1}(t_m) & \cdots & b_{s_o,k}(t_m) \\ \vdots & b_{s_q,i}(t_m) & \vdots \\ b_{s_m,1}(t_m) & \cdots & b_{s_m,k}(t_m) \end{bmatrix}, \quad b_s \in M - s_i. \tag{17}$$


In actual engineering, $B_{M-s_i}(T)$ contains a lot of noise, and it is necessary to clean it. If it is not cleaned, it will affect the estimation accuracy of the artificial neural network and the precision of learning. Data cleaning is the process of reexamining and verifying data to remove duplicate information, correct existing errors, and provide data consistency. We can use various filter algorithms to obtain a clean $B_{M-s_i}(T)$.

4.2. Artificial Neural Network Learning Operator. The sensing data of a real network include several physical quantities, such as temperature, humidity, and illumination. Usually, a variety of sensors are deployed at an acquisition node, and multiple physical quantities are collected at the same time. Physical quantities at the same acquisition node have the same temporal and spatial variation trend, and in the process of recovery with the artificial neural network, the quantities mutually reinforce each other. The distance between data acquisition nodes and non-data acquisition locations is easy to obtain. Based on the inverse-distance interpolation algorithm and the BP artificial neural network algorithm, we propose a multidimensional inverse-distance BP artificial neural network learning operator.

The BP artificial neural network, the most widely used one-way propagating multilayer feedforward network, is characterized by continuous adjustment of the network connection weights, so that any nonlinear function can be approximated with arbitrary accuracy. The BP artificial neural network is self-learning and adaptive and has robustness and generalization ability. The training of the BP artificial neural network is supervised learning. The training process of the BP artificial neural network is shown in Figure 3.

The input information is first transmitted to the nodes of the hidden layer through weighted threshold summation; then the output information of the hidden nodes is transmitted to the output nodes through weighted threshold summation, after the operation of the transfer function of each element. Finally, the output result is given. The purpose of network learning is to obtain the right weights and thresholds. The training process consists of two parts: forward and backward propagation.

The BP artificial neural network is a multilayer feedforward network trained by the error backpropagation algorithm. Its structure has not only input nodes and output nodes but also one or more hidden nodes. As demonstrated in Figure 3, according to the prediction error, the weights $v$ and $w$ are continuously adjusted, and finally the BP artificial neural network learning operator for reconstructing the data of data acquisition node $s_i$ can be determined, as shown in the part $\Upsilon_{s_i}(\cdot)$ of Figure 3.

The input of the BP artificial neural network learning operator is the data collected at a certain time at all acquisition nodes except the acquisition node $s_i$, that is, the vector $h_{s_i}(B_{M-s_i}(t_l))$. The input of the $j$th neuron in the hidden layer is calculated by

$$\Sigma_j = \sum_{q=1}^{|M|-1} \sum_{p=1}^{k} \nu_{qjp}\, h_{s_i}\left(B_{M-s_i}(t_l)\right) - \theta_j, \tag{18}$$

where $\nu_{qjp}$ represents the weight between the $(q, p)$th input neuron and the $j$th neuron in the hidden layer; the $(q, p)$th input neuron is $h_{s_i}(b_{s_q,k}(t_l))$, that is, the $k$th-dimension data of the $s_q$ acquisition node at time $t_l$; and $\theta_j$ is the threshold of the $j$th neuron in the hidden layer. The output of each neuron in the hidden layer is calculated by

$$ho_j = \frac{1}{1 + e^{-\Sigma_j}}. \tag{19}$$

Similarly, the output of each neuron in the output layer is denoted $o_p$.
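The hidden-layer forward pass of (18)–(19) can be sketched as a matrix product followed by a sigmoid; the helper name `hidden_forward` and the zero-valued toy inputs are illustrative assumptions, not part of the paper's network.

```python
import numpy as np

def hidden_forward(h_in, V, theta):
    """Sketch of Eqs. (18)-(19): weighted sum minus threshold, then sigmoid.

    h_in  -- flattened transformed inputs h_si(B_{M-s_i}(t_l)), length (|M|-1)*k
    V     -- input-to-hidden weights nu, shape (len(h_in), J)
    theta -- thresholds of the J hidden neurons
    """
    sigma = h_in @ V - theta               # Eq. (18)
    return 1.0 / (1.0 + np.exp(-sigma))    # Eq. (19): logistic activation

# toy dimensions: (|M|-1)*k = 6 inputs, J = 3 hidden neurons
ho = hidden_forward(np.zeros(6), np.ones((6, 3)), np.zeros(3))
```

With a zero input and zero thresholds, each hidden neuron outputs the sigmoid midpoint of 0.5, a quick sanity check on the activation.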

Figure 2: The model of $B_{M-s_i}(T)$, a third-order tensor indexed by physical quantity, acquisition node, and epoch.


The sum of squared errors over all neurons in the output layer is the objective function value optimized by the BP artificial neural network algorithm, which is calculated by

$$E = \frac{1}{(|M|-1) \times k} \sum_{q=1}^{|M|-1} \sum_{p=1}^{k} \left(b_{s_i,p} - o_p\right)^2, \tag{20}$$

where $b_{s_i,p}$ represents the expected output value of neuron $o_p$ in the output layer, corresponding to the value of the $p$th physical quantity sensed by the data acquisition node $s_i$. According to the gradient descent method, the error of each neuron in the output layer is obtained by

$$E_p = o_p \times (1 - o_p) \times \left(b_{s_i,p} - o_p\right). \tag{21}$$

The weights and thresholds of the output layer can be adjusted by

$$\Delta\omega = \partial \times E_p \times ho_j, \tag{22}$$

$$\Delta\theta_p = \partial \times E_p, \tag{23}$$

where $\partial \in (0, 1)$ is the learning rate, which controls the speed of training. Similarly, the weights and thresholds of the hidden layer can be obtained. If the desired results cannot be obtained from the output layer, the weights and thresholds are adjusted continually, gradually reducing the error. The BP neural network has a strong self-learning ability and can quickly obtain the optimal solution.
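The output-layer corrections of (20)–(23) can be sketched as below; the function name `output_layer_updates` and the single-neuron toy values are illustrative assumptions, with the learning rate standing in for the $\partial$ symbol of the text.

```python
import numpy as np

def output_layer_updates(o, target, ho_j, lr=0.5):
    """Sketch of Eqs. (20)-(23) for the output layer.

    o      -- sigmoid outputs o_p of the output layer
    target -- expected values b_{si,p} (supervision data of node s_i)
    ho_j   -- output of hidden neuron j feeding these weights
    lr     -- learning rate (the symbol written as a partial sign in the text)
    """
    E = np.mean((target - o) ** 2)         # Eq. (20): normalized squared error
    Ep = o * (1.0 - o) * (target - o)      # Eq. (21): per-neuron error term
    d_omega = lr * Ep * ho_j               # Eq. (22): weight correction
    d_theta = lr * Ep                      # Eq. (23): threshold correction
    return E, d_omega, d_theta

E, d_omega, d_theta = output_layer_updates(np.array([0.5]), np.array([1.0]), 0.8)
```

The factor $o_p(1-o_p)$ in (21) is the derivative of the sigmoid of (19), which is what makes these corrections a gradient-descent step.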

Figure 3: The training process of the BP artificial neural network.


The mathematical model of Figure 3 is as follows:

$$f_{o_p}\left(\sum_{j=1}^{J} \omega_j\, f_{\Sigma_j}\left(\sum_{q=1}^{|M|-1} \sum_{p=1}^{k} \nu_{qjp}\, h_{s_i}\left(B_{M-s_i}(T)\right)\right)\right) \longrightarrow b_{s_i,p}(T), \tag{24}$$

where $B_{M-s_i}(T)$ is the training data, that is, the historical data of the acquisition nodes except $s_i$ in the period $T$; $h_{s_i}(\cdot)$ is the transform function of the input layer; $\nu_{qjp}$ is the weight of the input layer; $f_{\Sigma_j}(\cdot)$ is the transfer function from the input layer to the hidden layer; $\omega$ is the weight from the hidden layer to the output layer; $f_{o_p}(\cdot)$ is the transfer function from the hidden layer to the output layer; and $b_{s_i}(T)$ is the supervision target, that is, the historical data of acquisition node $s_i$ in the period $T$.

From (14) and (24), the following formula can be obtained:

$$\Upsilon_{s_i}(\cdot) = f_{o_p}\left(\sum_{j=1}^{J} \omega_j\, f_{\Sigma_j}\left(\sum_{q=1}^{|M|-1} \sum_{p=1}^{k} \nu_{qjp}(\cdot)\right)\right)\Bigg|_{b_{s_i}(T)}, \tag{25}$$

where $\Upsilon_{s_i}(\cdot)$ represents the inverse-distance BP artificial neural network learning operator of acquisition node $s_i$, and $|_{b_{s_i}(T)}$ indicates that this neural network operator $\Upsilon_{s_i}(\cdot)$ is a learning operator trained with the data of $s_i$ as the supervision information.

The data collected by the data acquisition nodes in the monitoring area are related to each other. The data of one data acquisition node can be learned by feeding the data of the other data acquisition nodes into the learning operator. The learning operator set of the data acquisition nodes in the whole monitoring area is as follows:

$$\Upsilon(\cdot) = \left\{ \Upsilon_{s_i}(\cdot) \mid s_i \in M \right\}. \tag{26}$$

5. Interpolation at Non-Data Acquisition Locations

We transfer the BP artificial neural network operators from data acquisition nodes to non-data acquisition locations: the learning operator of the data acquisition node closest to a non-data acquisition location is used to estimate the data of that location. There are four ways to implement learning operator transferring: sample transferring, feature transferring, model transferring, and relationship transferring. In this paper, model transferring (also called parameter transferring) is used, because it combines well with the BP artificial neural network; that is, we can interpolate with the pretrained BP artificial neural network. The BP artificial neural network learning operator has strong robustness and adaptability. In the interested area, if the collected physical quantity is highly correlated and does not change drastically between the various interested locations, the learning operator can be transferred.

Since the construction of an artificial neural network requires historical data for training, and there is no historical data at a non-data acquisition location where no sensor is deployed, it is very difficult to construct an artificial neural network for the non-data acquisition location. However, we can predict the physical quantities of non-data acquisition locations by using the learning operator of the nearest data acquisition node.

5.1. Transform Function Corresponding to the Non-Data Acquisition Location $s_j$. First, we use the IDWI method to construct the transform function. By analogy with the transform function $h_{s_i}(B_{M-s_i}(t_l))$ of the acquisition node $s_i$, the transform function of the artificial neural network input unit for $s_j$ is defined as

$$h_{s_j}\left(B_{M-s_i}(t_l)\right) = \frac{B_{M-s_i}(t_l)/\operatorname{dist}^{\gamma}_{s_q \to s_j}}{\sum_{s_q \in M-s_i}^{|M|-1} 1/\operatorname{dist}^{\gamma}_{s_q \to s_j}}, \tag{27}$$

where $\operatorname{dist}_{s_q \to s_j}$ represents the distance between $s_q$ and $s_j$, $\gamma$ represents the weighted power exponent of the distance reciprocal, and $\sum_{s_q \in M-s_i}^{|M|-1} 1/\operatorname{dist}^{\gamma}_{s_q \to s_j}$ represents the sum of the weighted reciprocals of the distances from the non-data acquisition location $s_j$ to the rest of the acquisition nodes. The physical data collected from the remaining $|M|-1$ data acquisition nodes except $s_i$ are as follows:

$$B_{M-s_i} = \left\{ b_{s_q}(t_l) \mid s_q \in M - s_i \right\}. \tag{28}$$

5.2. Learning Operator Transferring. Since the data of the data acquisition node closest to the non-data acquisition location is the most correlated with the data of the non-data acquisition location, we estimate the data of the non-data acquisition location with the data from the nearest data acquisition node and its learning operator.

Since the data we are targeting are spatially correlated, the smaller the distance between two interested locations, the smaller the difference between their collected data; conversely, the greater the distance, the greater the difference. We can therefore use the data of a data acquisition node close to the non-data acquisition location to assess the data at the non-data acquisition location.

Because the BP artificial neural network learning operator has strong robustness and adaptability, in this paper we transfer the inverse-distance BP artificial neural network learning operator of $s_i$ to $s_j$ in order to estimate the data at $s_j$. The transform function $h(\cdot)$ does not change with time; it is a function of the distances between the sampling locations, $h_{s_i}(B_{M-s_i}(t_l)) \to h_{s_j}(B_{M-s_i}(t_l))$. The number of input parameters is constant, while the input parameters vary with time, so the change of the transform function does not affect the trend of the input parameters. The change of the transform function will not affect the accuracy of prediction, and the operator transferring can be implemented:

$$d''_{s_j}(t_l) = \Upsilon_{s_i}\left(h_{s_j}\left(B_{M-s_i}(t_l)\right)\right) = f_{o_p}\left(\sum_{j=1}^{J} \omega_j\, f_{\Sigma_j}\left(\sum_{q=1}^{|M|-1} \nu_{qj}\, h_{s_j}\left(B_{M-s_i}(t_l)\right)\right)\right). \tag{29}$$
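The transferred prediction of (29) amounts to running the network trained at $s_i$ on inputs transformed with distances to $s_j$ instead. The sketch below assumes illustrative toy weights, a linear output read-out for brevity, and the hypothetical name `transfer_predict`; none of these are prescribed by the paper.

```python
import numpy as np

def transfer_predict(b, dists_to_sj, V, theta, omega, gamma=2.0):
    """Sketch of Eq. (29): reuse the network of s_i unchanged; only the
    transform switches from h_si to h_sj (distances measured to s_j)."""
    w = 1.0 / np.power(dists_to_sj, gamma)
    h = (b * w) / np.sum(w)                           # h_sj(...), Eq. (27)
    hidden = 1.0 / (1.0 + np.exp(-(h @ V - theta)))   # hidden layer, Eqs. (18)-(19)
    return hidden @ omega                             # linear read-out (assumption)

# two remaining nodes, three hidden neurons, zero weights: a pure shape check
d_hat = transfer_predict(np.array([18.0, 19.0]), np.array([1.0, 2.0]),
                         np.zeros((2, 3)), np.zeros(3), np.full(3, 1.0 / 3.0))
```

The point of the design is that `V`, `theta`, and `omega` stay frozen from the training at $s_i$; only the distance vector changes.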

The assessment network based on the BP artificial neural network operator is shown in Figure 4.

In this paper, our proposed method is an improvement on the BP artificial neural network. According to spatial correlation, the physical quantities of a monitored interested location close to a data acquisition node $s_q$ can be approximated by the learning operator of $s_q$.

Suppose $\hat{d}_{s_j}$ represents the estimated value of the physical quantity at the interested location $s_j$. Due to conditional restrictions, no sensor is deployed at the interested location $s_j$. We choose the physical quantity of the data acquisition node $s_i$ nearest to the non-data acquisition location $s_j$ for estimation. We can use Algorithm 1 to determine $s_i$.

5.3. Assessment at the Non-Data Acquisition Location. $s_i$ is the nearest data acquisition node to $s_j$, so we can use the learning operator of the data acquisition node $s_i$ to estimate $\hat{d}_{s_j}$. This method is an improvement on the inverse-distance interpolation method. Because $s_i$ is closest to $s_j$, the correlation of their data is the largest, and the data actually collected by $s_i$ have the greatest impact on the predicted value at $s_j$:

$$\hat{d}_{s_j} = E\left(d'_{s_j}(t_l), d''_{s_j}(t_l)\right) = \alpha \times d'_{s_j}(t_l) + \beta \times d''_{s_j}(t_l), \quad s_i \in M,\ s_j \in S - M, \tag{30}$$

where $\alpha$ and $\beta$ denote the weights of $d'_{s_j}(t_l)$ and $d''_{s_j}(t_l)$, respectively, used to estimate the physical quantity at $s_j$, with $\alpha + \beta = 1$. $d'_{s_j}(t_l)$ is the value of an actual measurement, so its credibility is higher. The closer $s_i$ is to $s_j$, the greater the correlation between the physical quantity at $s_j$ and the data collected by $s_i$, based on spatial correlation.

We assume that $\operatorname{dist}_{s_i \to s_j}$ represents the distance between $s_i$ and $s_j$. The influence weight of the data collected by $s_i$ on the assessment at $s_j$ decreases with increasing $\operatorname{dist}_{s_i \to s_j}$; conversely, the greater the distance $\operatorname{dist}_{s_i \to s_j}$, the smaller the impact. The range of $\operatorname{dist}_{s_i \to s_j}$ is $(0, +\infty)$. We note that the inverse tangent function is an increasing function on $(0, +\infty)$; its curve is shown in Figure 5.

Using the boundedness and monotonically increasing character of the arctangent curve, and inspired by the idea of IDWI, the formula for calculating $\beta$ is as follows:

$$\beta = \frac{\arctan\left(\operatorname{dist}_{s_i \to s_j}\right)}{\pi/2}, \quad \operatorname{dist}_{s_i \to s_j} \in (0, +\infty). \tag{31}$$

This limits the value of $\beta$ to the interval $[0, 1)$. When $\operatorname{dist}_{s_i \to s_j}$ is close to 0, $\beta$ is close to 0 and $1 - \beta$ is close to 1; then the data measured by the data acquisition node dominates the value at the non-data acquisition location. When $\operatorname{dist}_{s_i \to s_j} = 0$, sensors are deployed at $s_j$ itself, and we do not need values calculated by the prediction algorithm; we directly use the actual measured data:

$$\hat{d}_{s_j} = \frac{\arctan\left(\operatorname{dist}_{s_i \to s_j}\right)}{\pi/2} \times d''_{s_j}(t_l) + \left(1 - \frac{\arctan\left(\operatorname{dist}_{s_i \to s_j}\right)}{\pi/2}\right) \times d'_{s_j}(t_l), \quad s_i \in M,\ s_j \in S - M. \tag{32}$$

If the interested location of the interpolation is still far from the nearest data acquisition node, this algorithm will incur a large error. However, since we use sensor placement based on iteratively dividing four subregions, the sensors of the data acquisition nodes are spread throughout the space rather than concentrated in a certain region, so the error is not too large.
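The distance-aware fusion of (30)–(32) can be sketched as below; the helper name `fuse_estimates` and the sample values are illustrative assumptions.

```python
import math

def fuse_estimates(d_prime, d_double_prime, dist):
    """Sketch of Eqs. (30)-(32): beta grows with the distance from the
    nearest acquisition node s_i to s_j, shifting weight from the
    measured value d' toward the network estimate d''."""
    beta = math.atan(dist) / (math.pi / 2)                 # Eq. (31): beta in [0, 1)
    return beta * d_double_prime + (1.0 - beta) * d_prime  # Eq. (32)

near = fuse_estimates(20.0, 22.0, 0.0)    # co-located: keep the measurement
far = fuse_estimates(20.0, 22.0, 1000.0)  # far away: lean on the network estimate
```

The bounded arctangent keeps $\beta$ below 1, so the measured value is never discarded entirely, matching the paper's remark that actual measurements carry higher credibility.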

6. Experiments and Evaluation

6.1. Parameter Setup. The data set we used is the measured data provided by the Intel Berkeley Research lab [9]. The data were collected from 54 data acquisition nodes in the Intel Berkeley Research lab between February 28th and April 5th, 2004. In this data set, the epoch is a monotonically increasing sequence number from each mote; two readings with the same epoch number were produced by different motes at the same time. Some epochs are missing from this data set. Mote IDs range from 1 to 54. In this experiment, we selected the data of the following 9 motes as the set of interested locations, because these 9 motes share the same epochs:

$$\begin{aligned} S = \{ & s_7(22.5, 8),\ s_{18}(5.5, 10),\ s_{19}(3.5, 13),\ s_{21}(4.5, 18),\ s_{22}(1.5, 23), \\ & s_{31}(15.5, 28),\ s_{46}(34.5, 16),\ s_{47}(39.5, 14),\ s_{48}(35.5, 10) \}. \end{aligned} \tag{33}$$

When the environment of the monitoring area is not very complicated, the collected physical quantity has strong spatial correlation, and it is feasible to deploy the sensors with the placement based on iteratively dividing four subregions. In this experiment, we use the method of iteratively dividing four subregions to select the interested locations from $S$ as data acquisition nodes: the $s_i(x_i, y_i)$ ($s_i(x_i, y_i) \in S$) closest to each deployment location generated by the method of iteratively dividing four subregions is selected into $M$.

Figure 4: The assessment network based on the BP artificial neural network operator.

In this experiment, we set the dimension of the collected data to 2, that is, $k = 2$: we take the two physical quantities temperature and humidity. At least two data acquisition nodes are needed, i.e., $|M| \geq 2$, because one data acquisition node should be the node closest to the interested location for interpolation ($s_i \neq \emptyset$), and the data of the other acquisition nodes should be used as the training set of the artificial neural network ($M - s_i \neq \emptyset$, $|M - s_i| \geq 1$).

The epochs we chose are $T = \{33, 35, 42, 44, 51, 56, 61, 62, 70, 73, 86, 88, 89, 91, 95, 99, 102, 107, 112, 113, 123, 124, 127\}$. The data whose epoch is an element of $T$ are used as the training set. Unfortunately, the actual sampled sensing data are always corrupted, and some values go missing. We need clean sensing data to train the BP artificial neural networks in order to improve the interpolation precision.

In order not to lose the trend of the data in the process of filtering, we simply use mean processing with a filter window along the temporal direction of the measurements to obtain the clean sensing data. If $\left|d_{s_i}(t-1) - d_{s_i}(t)\right| \gg \delta \times \left|d_{M-s_i}(t-1) - d_{M-s_i}(t)\right|$, then $d_{s_i}(t)$ is replaced by the value calculated as

$$d_{s_i}(t) = \frac{d_{s_i}(t-1) + d_{s_i}(t+1)}{2}, \tag{34}$$

where $\delta$ is the adjusting coefficient. In this experiment, we take $\delta = 10$.
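The cleaning rule of (34) can be sketched as a single pass over a node's time series; the function name `clean_series` and the toy readings (one obvious outlier) are illustrative assumptions.

```python
import numpy as np

def clean_series(d, d_rest, delta=10.0):
    """Sketch of Eq. (34): if a node's jump dwarfs the jump seen by the
    other nodes (by a factor delta), replace the sample with the mean
    of its two temporal neighbours."""
    d = d.astype(float).copy()
    for t in range(1, len(d) - 1):
        if abs(d[t - 1] - d[t]) > delta * abs(d_rest[t - 1] - d_rest[t]):
            d[t] = (d[t - 1] + d[t + 1]) / 2.0   # Eq. (34)
    return d

# one node's readings with a spike at t=1, against a calm reference series
cleaned = clean_series(np.array([20.0, 95.0, 20.4]), np.array([20.1, 20.2, 20.3]))
```

Comparing each jump against the jump of the other nodes, rather than against a fixed threshold, is what preserves genuine trends while removing isolated spikes.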

The filtered result of the temperature measurements is shown in Figure 6(a), and the actual sensing data are shown in Figure 6(b). The filtered result of the humidity measurements is shown in Figure 7(a), and the actual sensing data are shown in Figure 7(b).

When comparing the test results, the epoch of the data supplied by each mote must be the same, because in the actual application the data space of the monitoring area we build is the data space at a certain moment. We assumed that $t_l = 138$; that is, in this experiment the 138th epoch was selected for interpolation. We compare the values actually collected at the 138th epoch with the interpolation calculated by the algorithms.

To evaluate the accuracy of the reconstructed measurements, we choose the mean relative error (MRE). It reflects the precision of the estimated data relative to the measured data. The formula for the MRE is as follows [17]:

$$\mathrm{MRE} = \frac{1}{n} \sum_{i=1}^{n} \frac{\left| \hat{d}(x_i) - d(x_i) \right|}{d(x_i)}, \tag{35}$$

where $d(x_i)$ is the actual acquisition value of the $i$th sensor, $\hat{d}(x_i)$ is correspondingly the assessed value, and $n$ is the total number of data acquisition nodes.
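The MRE of (35) is a one-liner in NumPy; the sample values below are illustrative.

```python
import numpy as np

def mean_relative_error(d_hat, d):
    """Sketch of Eq. (35): mean of |estimate - actual| / actual."""
    return float(np.mean(np.abs(d_hat - d) / d))

# two nodes, each off by 1 unit on an actual value of 20
mre = mean_relative_error(np.array([19.0, 21.0]), np.array([20.0, 20.0]))
```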

6.2. Results of the Experiment. This experiment covers a small range, the physical quantities of the collected data are highly correlated, and the physical quantities do not change drastically between the various interested locations, so the learning operator can be transferred.

In the case of data loss, we compare our method with the inverse-distance-weighted interpolation algorithm in terms of interpolation accuracy; then $q_{s_i} = 0$. Since the data acquisition nodes suffering data loss are random, we conducted 20 tests for a more accurate comparison. Obviously, the number of acquisition nodes with data loss must be less than or equal to the total number of acquisition nodes. The results are shown in Figures 8 and 9.

As can be seen from Figures 8 and 9, as the proportion of lost data among all collected data decreases, the interpolation error is gradually reduced. The curve of the proposed algorithm is relatively flat, indicating that it is less affected by data loss.

In the case of data loss, especially when the number of data acquisition nodes is relatively small, the interpolation error of our algorithm is much smaller than that of the inverse-distance-weighted interpolation.

In the case of data disturbance, we compare our method with the inverse-distance-weighted interpolation algorithm in terms of interpolation accuracy; then $q_{s_i} = \mathrm{No}(d_{s_i}(t_l), \sigma^2)/d_{s_i}$. For the interpolation of temperature, we use the mean temperature of all acquisition nodes as the mean value of the Gaussian distribution. We set the parameters

Figure 5: The inverse tangent function curve.

Input: $M$, $s_j(x_j, y_j)$
Output: $s_i$
Initialize mindist
For each $s_q$ in $M$:
    $\operatorname{dist}_{s_j \to s_q} = \sqrt{(x_j - x_q)^2 + (y_j - y_q)^2}$
    if mindist $> \operatorname{dist}_{s_j \to s_q}$ then $i = q$; mindist $= \operatorname{dist}_{s_j \to s_q}$
    end if
End for

Algorithm 1: Culling $s_i$ from $M$.
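Algorithm 1 is a plain nearest-neighbour scan; a minimal sketch follows, where the function name `nearest_node`, the dictionary representation of $M$, and the sample coordinates (taken from the mote positions in (33)) are illustrative assumptions.

```python
import math

def nearest_node(M, sj):
    """Sketch of Algorithm 1: pick the acquisition node s_i in M
    closest (Euclidean distance) to the non-data location s_j."""
    best, mindist = None, math.inf
    for name, (x, y) in M.items():
        d = math.hypot(sj[0] - x, sj[1] - y)   # dist_{s_j -> s_q}
        if d < mindist:
            best, mindist = name, d
    return best

si = nearest_node({'s7': (22.5, 8.0), 's18': (5.5, 10.0)}, (6.0, 11.0))
```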


as $d_{s_i}(t_l) = 18.47895$ and $\sigma^2 = 1.8$. For the interpolation of humidity, we use the mean humidity of all acquisition nodes as the mean value of the Gaussian distribution. We set the parameters as $d_{s_i}(t_l) = 39.44691$ and $\sigma^2 = 3.9$. Again, we performed 20 tests. The results are shown in Figures 10 and 11.

It can be seen from Figures 10 and 11 that the variation curve of the proposed algorithm is relatively flat, while the curve of the inverse-distance-weighted interpolation algorithm fluctuates greatly. The interpolation error of the IDWI algorithm is affected not only by data disturbance but also by the deployment locations of the data acquisition nodes. When 7 acquisition nodes are deployed, the error of the IDWI algorithm is the smallest: because sensor placement based on iteratively dividing four subregions is nearly uniform when 7 acquisition nodes are deployed, the interpolation error is small.

When there are not many acquisition nodes, the density of the acquisition nodes where data disturbance occurs is large, so the interpolation error is more prominent. As the number of sensors that can be deployed increases, the interpolation error is also reduced.

As we can see from Figures 8–11, our algorithm is insensitive to errors and strongly robust when the data are wrong, while erroneous data greatly affect the interpolation accuracy of the inverse-distance interpolation method. In particular, when the error rate is high, the relative error of our algorithm is much lower than that of the inverse-distance interpolation algorithm. When the error rate is 50%, our algorithm has a relative error of 0.1, while the relative error of the inverse-distance interpolation algorithm is 3.5, as shown in Figure 9(a).
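The mean relative error used for the comparisons in Figures 8–11 can be computed as in this sketch (the helper name and the toy readings are illustrative, not values from the experiment):

```python
def mean_relative_error(estimates, truths):
    """Mean of |estimate - truth| / |truth| over all interested locations."""
    return sum(abs(e - t) / abs(t) for e, t in zip(estimates, truths)) / len(truths)

# Toy example: interpolated vs. actually measured temperatures
err = mean_relative_error([18.0, 19.5, 18.6], [18.0, 19.0, 18.0])
```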

7. Conclusions

In this paper, we proposed a robust data interpolation based on a back propagation artificial neural network operator for incomplete acquisition in a wireless sensor network. Under the incomplete collection strategy of WSN, the effect of complete acquisition can be approximately obtained by interpolation. In the case of limited data acquisition nodes, the data of the acquisition nodes are used for training to obtain the learning operator. Then, the learning operator of the acquisition node

[Figure 7: The humidity sensing data from the Intel Berkeley Research lab: (a) the filtered sensing data; (b) the actual sensing data.]

[Figure 6: The temperature sensing data from the Intel Berkeley Research lab: (a) the filtered sensing data; (b) the actual sensing data.]


[Figure 9: The humidity sensing data from the Intel Berkeley Research lab: (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at 2 to 8 data acquisition nodes.]

[Figure 8: The temperature sensing data from the Intel Berkeley Research lab: (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at 2 to 8 data acquisition nodes.]


closest to the non-data acquisition location is transferred to the non-data acquisition location for interpolation. Considering that the data collected by a WSN are prone to error, we analyzed the reasons for the error. In order to improve the fault tolerance and robustness of the interpolation algorithm, we proposed a BP artificial neural network learning

[Figure 11: The humidity sensing data from the Intel Berkeley Research lab: (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at 2 to 8 data acquisition nodes.]

[Figure 10: The temperature sensing data from the Intel Berkeley Research lab: (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at 2 to 8 data acquisition nodes.]


operator. From the experiments, we demonstrated that our algorithm has strong robustness, and it has a lower error in the case of data errors collected by the WSN. This method has strong potential for practical data visualization, data analysis, WSN monitoring, etc.

Data Availability

The measured data that support the findings of this study are supplied by the Intel Berkeley Research lab. They are available from http://db.csail.mit.edu/labdata/labdata.html.

Conflicts of Interest

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was financially supported by the National Natural Science Foundation of China (Grant Nos. 61561017 and 61462022), the Innovative Research Projects for Graduate Students of Hainan Higher Education Institutions (Grant No. Hyb2017-06), the Hainan Province Major Science & Technology Project (Grant No. ZDKJ2016015), the project of the Natural Science Foundation of Hainan Province in China (Grant No. 617033), the open project of the State Key Laboratory of Marine Resource Utilization in South China Sea (Grant No. 2016013B), and the Key R&D Project of Hainan Province (No. ZDYF2018015).

References

[1] F. Ding and A. Song, "Development and coverage evaluation of ZigBee-based wireless network applications," Journal of Sensors, vol. 2016, Article ID 2943974, 9 pages, 2016.

[2] O. Alvear, W. Zamora, C. Calafate, J. C. Cano, and P. Manzoni, "An architecture offering mobile pollution sensing with high spatial resolution," Journal of Sensors, vol. 2016, Article ID 1458147, 13 pages, 2016.

[3] W. Xiao, The Key Technology of Adaptive Data Fault Tolerance in Wireless Sensor Networks, National Defense University of Science and Technology, Hunan, China, 2010.

[4] Z. Liang, F. Tian, C. Zhang, H. Sun, A. Song, and T. Liu, "Improving the robustness of prediction model by transfer learning for interference suppression of electronic nose," IEEE Sensors Journal, vol. 18, no. 3, pp. 1111–1121, 2017.

[5] Z. Yang, W. Qiu, H. Sun, and A. Nallanathan, "Robust radar emitter recognition based on the three-dimensional distribution feature and transfer learning," Sensors, vol. 16, no. 3, pp. 289–303, 2016.

[6] S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2010.

[7] B. Pan, J. Tai, Q. Zheng, and S. Zhao, "Cascade convolutional neural network based on transfer-learning for aircraft detection on high-resolution remote sensing images," Journal of Sensors, vol. 2017, Article ID 1796728, 14 pages, 2017.

[8] J. Park, R. Javier, T. Moon, and Y. Kim, "Micro-Doppler based classification of human aquatic activities via transfer learning of convolutional neural networks," Sensors, vol. 16, no. 12, pp. 1990–2000, 2016.

[9] http://db.csail.mit.edu/labdata/labdata.html.

[10] Z. Kang and Y. Wang, "Structural topology optimization based on non-local Shepard interpolation of density field," Computer Methods in Applied Mechanics and Engineering, vol. 200, no. 49-52, pp. 3515–3525, 2011.

[11] M. Hammoudeh, R. Newman, C. Dennett, and S. Mount, "Interpolation techniques for building a continuous map from discrete wireless sensor network data," Wireless Communications and Mobile Computing, vol. 13, no. 9, pp. 809–827, 2013.

[12] M. Saberi, A. Azadeh, A. Nourmohammadzadeh, and P. Pazhoheshfar, "Comparing performance and robustness of SVM and ANN for fault diagnosis in a centrifugal pump," in 19th International Congress on Modelling and Simulation, pp. 433–439, Perth, Australia, 2011.

[13] R. Velazco, P. Cheynet, J. D. Muller, R. Ecoffet, and S. Buchner, "Artificial neural network robustness for on-board satellite image processing: results of upset simulations and ground tests," IEEE Transactions on Nuclear Science, vol. 44, no. 6, pp. 2337–2344, 1997.

[14] A. Venkitaraman, A. M. Javid, and S. Chatterjee, "R3Net: random weights, rectifier linear units and robustness for artificial neural network," 2018, https://arxiv.org/abs/1803.04186.

[15] C. Zhang, X. Zhang, O. Li et al., "WSN data collection algorithm based on compressed sensing under unreliable links," Journal of Communication, vol. 37, no. 9, pp. 131–141, 2016.

[16] H. L. Chen and W. Peng, "Improved BP artificial neural network in traffic accident prediction," Journal of East China Normal University (Natural Science), vol. 2017, no. 2, pp. 61–68, 2017.

[17] G. Y. Lu and D. W. Wong, "An adaptive inverse-distance weighting spatial interpolation technique," Computers & Geosciences, vol. 34, no. 9, pp. 1044–1055, 2008.



closer the information collected by the node, the greater the correlation [15].

$$\therefore\ d'_{s_j}(t_l) = d_{s_i}(t_l), \tag{12}$$

where $d_{s_i}(t_l)$ represents the value of the physical quantity actually collected by the data acquisition node $s_i$.

Suppose that $d''_{s_j}(t_l)$ represents the assessed value at the $j$th non-data acquisition location, which is obtained by the back propagation artificial neural network operator. In this paper, we obtain a $d''_{s_j}(t_l)$ that is close to $d_{s_j}(t_l)$. The data set of non-data acquisition locations for interpolation is

$$d''_{s_j}(t_l) = \Upsilon_{s_j}\!\left(b_{s_i}(t_l)\right),\quad s_j \in S - M,\ s_i \in M,$$
$$D''_{S-M}(t_l) = \Upsilon_{S-M}\!\left(B(t_l)\right), \tag{13}$$

where $\Upsilon_{S-M}(\cdot)$ represents the learning operator used to assess $d_{s_j}(t_l)$, and $B(t_l)$ represents the received data set at the epoch $t_l$.

We can use the back propagation artificial neural network operator of the data acquisition node closest to the non-data acquisition location to predict the data of the non-data acquisition location.

[Figure 1: Incomplete acquisition process based on our proposed method.]

4. Learning Operator Set of Data Acquisition Nodes

The mathematical model of the learning operator for the reconstruction is as follows:

$$\arg\min\ \left\|\Upsilon(\mathrm{Inp}) - \mathrm{Tar}\right\|,\quad \text{s.t. } \mathrm{Inp} \neq \mathrm{Tar}, \tag{14}$$

where Tar represents the learning goal and $\Upsilon(\cdot)$ is the learning operator; its input Inp can be individuals, variables, and even algorithms or functions, sets, and so on. Inp and Tar are different; if there were no difference between them, there would be no need to learn for reconstruction. The purpose of learning is to make $\Upsilon(\mathrm{Inp})$ gradually approach Tar.

Because the data of a WSN are error-prone, we need fault-tolerant and robust learning operators. The learning operator in this paper uses a back propagation (BP) artificial neural network. We can use the data of the data acquisition nodes to predict the data of the non-data acquisition locations, thus assessing the data space of the entire monitoring area. Because of the strong robustness and adaptability of the artificial neural network, we can use it to interpolate the data of non-data acquisition locations even in the case of data error.

The BP artificial neural network is a multilayer (at least 3-layer) feedforward network based on error backpropagation. Because of its characteristics, such as nonlinear mapping, multiple input and multiple output, and self-organizing self-learning, the BP artificial neural network is well suited to complex nonlinear multiple-input multiple-output problems. The BP artificial neural network model is composed of an input layer, a hidden layer (which can be multilayer, but at least one layer), and an output layer. Each layer is composed of a number of juxtaposed neurons. The neurons in the same layer are not connected to each other, and the neurons in adjacent layers are fully interconnected [16].

After constructing the topology of the artificial neural network, it is necessary to train the network in order to make it intelligent. For the BP artificial neural network, the learning process is accomplished by forward propagation and backward correction propagation. As each data acquisition node is related to the other data acquisition nodes and each data acquisition node has historical data, the historical data of the data acquisition nodes can be used for training to generate the artificial neural network learning operator of each data acquisition node.

4.1. Transform Function of the Input Unit. The data of the non-data acquisition location $s_j$ are assessed by using the inverse-distance learning operator of the data acquisition node $s_i$ closest to $s_j$. Because there is spatial correlation between the interested locations, in this paper we adopt the IDWI algorithm combined with the BP artificial neural network algorithm.

At a certain epoch $t_l$, the data set of all data acquisition nodes other than $s_i$ is as follows:

$$B_{M-s_i}(t_l) = \begin{bmatrix} b_{s_o,1}(t_l) & \cdots & b_{s_o,1}(t_l) \\ \vdots & b_{s_q,i}(t_l) & \vdots \\ b_{s_m,k}(t_l) & \cdots & b_{s_m,k}(t_l) \end{bmatrix},\quad b_s \in M - s_i. \tag{15}$$

In this paper, the inverse-distance weight is used to construct the transform function. The transform function $h_{s_i}(B_{M-s_i}(t_l))$ of the input unit $B_{M-s_i}(t_l)$ of the artificial neural network with data acquisition node $s_i$ is as follows:

$$h_{s_i}\!\left(B_{M-s_i}(t_l)\right) = \frac{\left(1/\mathrm{dist}^{\gamma}_{s_q\to s_i}\right)\times B_{M-s_i}(t_l)}{\sum_{s_q\in M}^{|M|-1} 1/\mathrm{dist}^{\gamma}_{s_q\to s_i}},\quad s_q \in M - s_i,\ s_i \in M, \tag{16}$$

where $\mathrm{dist}_{s_q\to s_i}$ represents the distance between the data acquisition node $s_i$ and the acquisition node $s_q$, $\gamma$ represents the weighted power exponent of the distance reciprocal, and $\sum_{s_q\in M}^{|M|-1} 1/\mathrm{dist}^{\gamma}_{s_q\to s_i}$ represents the sum of the weighted reciprocals of the distances from the data acquisition node $s_i$ to the rest of the data acquisition nodes.
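The transform of (16) can be sketched as follows, assuming the readings of the other nodes arrive as (distance, value) pairs (an illustrative simplification of the matrix form used above):

```python
def idw_transform(pairs, gamma=2.0):
    """Transform function h_si of (16): scale each node's reading by
    1/dist^gamma and normalize by the sum of the inverse-distance weights."""
    denom = sum(1.0 / d**gamma for d, _ in pairs)
    return [(1.0 / d**gamma) * v / denom for d, v in pairs]

# Two neighboring nodes at distances 1 and 2 from s_i
h = idw_transform([(1.0, 18.0), (2.0, 20.0)])
```

Note how the nearer node's reading dominates the transformed input, which is exactly the spatial-correlation assumption behind the IDWI weighting.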

The artificial neural network requires a training set. We take the historical data over the period $T = \{t_0, t_1, \ldots, t_l, \ldots, t_m\}$ of all data acquisition nodes as the training set. In practical engineering, it is feasible to collect the data from data acquisition nodes over a period for learning.

In this paper, the data collected from all data acquisition nodes are used as the training set $B_{M-s_i}(T)$. $B_{M-s_i}(T)$ is a three-order tensor, as shown in Figure 2, with $B_{M-s_i}(T) \in R^{k\times|M-s_i|\times t_m}$. The elements of the training set are indexed by physical quantity, acquisition node, and epoch.

The $B_{M-s_i}(T)$ matrix is obtained by

$$B_{M-s_i}(T) = \begin{bmatrix}
b_{s_o,1}(t_0) & \cdots & b_{s_o,1}(t_0) \\
\vdots & b_{s_q,i}(t_0) & \vdots \\
b_{s_m,k}(t_0) & \cdots & b_{s_m,k}(t_0) \\
b_{s_o,1}(t_1) & \cdots & b_{s_o,1}(t_1) \\
\vdots & b_{s_q,i}(t_1) & \vdots \\
b_{s_m,k}(t_1) & \cdots & b_{s_m,k}(t_1) \\
\vdots & \vdots & \vdots \\
b_{s_o,1}(t_l) & \cdots & b_{s_o,1}(t_l) \\
\vdots & b_{s_q,i}(t_l) & \vdots \\
b_{s_m,k}(t_l) & \cdots & b_{s_m,k}(t_l) \\
\vdots & \vdots & \vdots \\
b_{s_o,1}(t_m) & \cdots & b_{s_o,1}(t_m) \\
\vdots & b_{s_q,i}(t_m) & \vdots \\
b_{s_m,k}(t_m) & \cdots & b_{s_m,k}(t_m)
\end{bmatrix},\quad b_s \in M - s_i. \tag{17}$$


In actual engineering, $B_{M-s_i}(T)$ contains a lot of noise, and it is necessary to clean it; otherwise, it will affect the estimation accuracy of the artificial neural network and the precision of learning. Data cleaning is the process of reexamining and verifying data to remove duplicate information, correct existing errors, and provide data consistency. We can use various filter algorithms to obtain the cleaned $B_{M-s_i}(T)$.

4.2. Artificial Neural Network Learning Operator. The sensing data of a real network include several physical quantities, such as temperature, humidity, and illumination. Usually, a variety of sensors are deployed at an acquisition node, and multiple physical quantities are collected at the same time. The physical quantities at the same acquisition node have the same temporal and spatial variation trend, and in the process of recovery using the artificial neural network the quantities mutually promote each other. The distances between data acquisition nodes and non-data acquisition locations are very easy to obtain. Based on the inverse-distance interpolation algorithm and the BP artificial neural network algorithm, we propose a multidimensional inverse-distance BP artificial neural network learning operator.

The BP artificial neural network, the most widely used one-way propagating multilayer feedforward network, is characterized by continuous adjustment of the network connection weights so that any nonlinear function can be approximated with arbitrary accuracy. The BP artificial neural network is self-learning and adaptive and has robustness and generalization. The training of the BP artificial neural network is "supervised" learning. The training process of the BP artificial neural network is shown in Figure 3.

The input information is first transmitted to the nodes of the hidden layer through weighted threshold summation; then the output information of the hidden nodes is transmitted to the output nodes through weighted threshold summation after the operation of the transfer function of each element; finally, the output result is given. The purpose of network learning is to obtain the right weights and thresholds. The training process consists of two parts: forward propagation and backward propagation.

The BP artificial neural network is a multilayer feedforward network trained by the error backpropagation algorithm. Its structure has not only input nodes and output nodes but also one or more layers of hidden nodes. As demonstrated in Figure 3, according to the prediction error, the weights $v$ and $w$ are continuously adjusted, and finally the BP artificial neural network learning operator for reconstructing the data of data acquisition node $s_i$ can be determined, as shown in the part $\Upsilon_{s_i}(\cdot)$ of Figure 3.

The input layer of the BP artificial neural network learning operator is the data collected at a certain time at all acquisition nodes except the $s_i$ acquisition node, that is, the vector $h_{s_i}(B_{M-s_i}(t_l))$. The input of the $j$th neuron in the hidden layer is calculated by

$$\Sigma_j = \sum_{q=1}^{|M|-1}\sum_{p=1}^{k}\nu_{qjp}\, h_{s_i}\!\left(B_{M-s_i}(t_l)\right) - \theta_j, \tag{18}$$

where $\nu_{qjp}$ represents the weight between the $(q,p)$th input neuron and the $j$th neuron in the hidden layer. The $(q,p)$th input neuron is $h_{s_i}(b_{s_q,k}(t_l))$, that is, the $k$th-dimension data of the $s_q$ acquisition node at time $t_l$, and $\theta_j$ is the threshold of the $j$th neuron in the hidden layer. The output of a neuron in the hidden layer is calculated by

$$ho_j = \frac{1}{1 + e^{-\Sigma_j}}. \tag{19}$$

Similarly, the output of each neuron in the output layer is denoted $o_p$.
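Equations (18) and (19) for a single hidden neuron can be sketched as follows (the flat weight list is an illustrative simplification of $\nu_{qjp}$, and the sample values are hypothetical):

```python
import math

def hidden_output(inputs, weights, theta):
    """Forward pass of one hidden neuron: the weighted sum Sigma_j of (18)
    minus the threshold, then the logistic sigmoid ho_j of (19)."""
    sigma = sum(w * x for w, x in zip(weights, inputs)) - theta
    return 1.0 / (1.0 + math.exp(-sigma))

ho = hidden_output([0.5, 0.2], [0.8, -0.4], theta=0.1)
```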

[Figure 2: The model of $B_{M-s_i}(T)$; the elements are indexed by physical quantity ($k$), acquisition node ($s_o, \ldots, s_q, \ldots, s_m$), and epoch ($t_0, \ldots, t_l, \ldots, t_m$).]


The sum of squared errors of all neurons in the output layer is the objective function value optimized by the BP artificial neural network algorithm, which is calculated by

$$E = \frac{1}{(|M|-1)\times k\times k}\sum_{q=1}^{|M|-1}\sum_{p=1}^{k}\sum_{p=1}^{k}\left(b_{s_i,p} - o_p\right)^2, \tag{20}$$

where $b_{s_i,p}$ represents the expected output value of neuron $o_p$ in the output layer, corresponding to the value of the $p$th physical quantity sensed by the data acquisition node $s_i$. According to the gradient descent method, the error of each neuron in the output layer is obtained by

$$E_p = o_p \times (1 - o_p) \times (b_{s_i,p} - o_p). \tag{21}$$

The weights and thresholds of the output layer can be adjusted by

$$\Delta\omega = \partial \times E_p \times ho_j, \tag{22}$$

$$\Delta\theta_p = \partial \times E_p, \tag{23}$$

where $\partial \in (0,1)$ is the learning rate, which reflects the speed of training and learning. Similarly, the weights and thresholds of the hidden layer can be obtained. If the desired results cannot be obtained from the output layer, the weights and thresholds need to be adjusted continually, gradually reducing the error. The BP neural network has strong self-learning ability and can quickly obtain the optimal solution.
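Equations (21)–(23) amount to the standard delta rule for a sigmoid output neuron; a minimal sketch with illustrative values (the function name and learning rate are assumptions):

```python
def output_layer_update(o_p, target, ho_j, lr=0.5):
    """Error term E_p of (21) and the weight/threshold increments of
    (22) and (23) for one output neuron."""
    e_p = o_p * (1.0 - o_p) * (target - o_p)   # (21)
    delta_w = lr * e_p * ho_j                  # (22)
    delta_theta = lr * e_p                     # (23)
    return e_p, delta_w, delta_theta

e_p, dw, dth = output_layer_update(o_p=0.6, target=1.0, ho_j=0.5)
```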

[Figure 3: The training process of the BP artificial neural network: input layer, hidden layer, output layer, and error back propagation through the part $\Upsilon_{s_i}(\cdot)$.]


The mathematical model of Figure 3 is as follows:

$$f_{o_p}\!\left(\sum_{j=1}^{J}\sum_{p=1}^{k}\omega_j\, f_{\Sigma_j}\!\left(\sum_{q=1}^{|M|-1}\sum_{p=1}^{k}\nu_{qjp}\, h_{s_i}\!\left(B_{M-s_i}(T)\right)\right)\right) \longrightarrow b_{s_i,p}(T), \tag{24}$$

where $B_{s_q}(T)$ is the training data, that is, the historical data of the acquisition nodes except $s_i$ in the period $T$; $h_{s_i}(\cdot)$ is the transform function of the input layer; $\nu_{qj}$ is the weight of the input layer; $f_{\Sigma_j}(\cdot)$ is the transfer function of the input layer to the hidden layer; $w$ is the weight of the hidden layer to the output layer; $f_{o_p}(\cdot)$ is the transfer function of the hidden layer to the output layer; and $b_{s_i}(T)$ is the "supervisor supervision," that is, the historical data of acquisition node $s_i$ in the period $T$.

From (14) and (24), the following formula can be obtained:

$$\Upsilon_{s_i}(\cdot) = f_{o_p}\!\left(\sum_{j=1}^{J}\omega_j\, f_{\Sigma_j}\!\left(\sum_{q=1}^{|M|-1}\sum_{p=1}^{k}\nu_{qjp}(\cdot)\right)\right)\Bigg|_{b_{s_i}(T)}, \tag{25}$$

where $\Upsilon_{s_i}(\cdot)$ represents the inverse-distance BP artificial neural network learning operator of acquisition node $s_i$, and $|_{b_{s_i}(T)}$ indicates that this neural network operator $\Upsilon_{s_i}(\cdot)$ is a learning operator trained with the data of $s_i$ as tutor information.

The data collected by the data acquisition nodes in the monitoring area are related to each other. The data of a data acquisition node can be learned by inputting the data of the other data acquisition nodes into the learning operator. The learning operator set of the data acquisition nodes in the whole monitoring area is as follows:

$$\Upsilon(\cdot) = \left\{\Upsilon_{s_i}(\cdot) \mid s_i \in M\right\}. \tag{26}$$

5. Interpolation at Non-Data Acquisition Locations

We transfer the BP artificial neural network operators from data acquisition nodes to non-data acquisition locations; that is, we can use the learning operator of the data acquisition node closest to a non-data acquisition location to estimate its data. There are four ways to implement learning operator transferring: sample transferring, feature transferring, model transferring, and relationship transferring. In this paper, model transferring (also called parameter transferring) is used to better combine with the BP artificial neural network; that is, we use the pretrained BP artificial neural network to interpolate. The BP artificial neural network learning operator has strong robustness and adaptability. In the interested area, if the collected physical quantity is highly correlated and there is no situation in which the physical quantity changes drastically between the various interested locations, the learning operator can be transferred.

Since the construction of the artificial neural network requires historical data for training, and there are no historical data at a non-data acquisition location where no sensor is deployed, it is very difficult to construct the artificial neural network of the non-data acquisition location. However, we can predict the physical quantities of non-data acquisition locations by using the learning operator from the nearest data acquisition node.

5.1. Transform Function Corresponding to Nonacquisition Location $s_j$. First, we use the IDWI method to construct the transform function. The transform function of the artificial neural network input unit for the non-data acquisition location $s_j$ is defined as $h_{s_j}(B_{M-s_i}(t_l))$:

$$h_{s_j}\!\left(B_{M-s_i}(t_l)\right) = \frac{B_{M-s_i}(t_l)/\mathrm{dist}^{\gamma}_{s_q\to s_j}}{\sum_{s_q\in M-s_i}^{|M|-1} 1/\mathrm{dist}^{\gamma}_{s_q\to s_j}}, \tag{27}$$

where $\mathrm{dist}_{s_q\to s_j}$ represents the distance between $s_q$ and $s_j$, $\gamma$ represents the weighted power exponent of the distance reciprocal, and $\sum_{s_q\in M-s_i}^{|M|-1} 1/\mathrm{dist}^{\gamma}_{s_q\to s_j}$ represents the sum of the weighted reciprocals of the distances from the non-data acquisition location $s_j$ to the rest of the acquisition nodes. The physical data collected from the remaining $|M|-1$ data acquisition nodes except $s_i$ are as follows:

$$B_{M-s_i} = \left\{b_{s_q}(t_l) \mid s_q \in M - s_i\right\}. \tag{28}$$

5.2. Learning Operator Transferring. Since the data of the data acquisition node closest to the non-data acquisition location are important to the non-data acquisition location and are most correlated with its data, we estimate the data of the non-data acquisition location with the data from the nearest data acquisition node and its learning operator.

Since the data we are targeting are spatially correlated, the smaller the distance between two interested locations, the smaller the difference in the collected data; conversely, the greater the distance, the greater the difference. We can therefore use the data of a data acquisition node close to the non-data acquisition location to assess the data at the non-data acquisition location.

Because the BP artificial neural network learning operator has strong robustness and adaptability, in this paper we transfer the inverse-distance BP artificial neural network learning operator of $s_i$ to $s_j$ in order to estimate the data at $s_j$. The transform function $h(\cdot)$ does not change with time; it is a function of the distance between the sampling locations, $h_{s_i}(B_{M-s_i}(t_l)) \to h_{s_j}(B_{M-s_i}(t_l))$. The number of input parameters is constant, while the input parameters vary with time, so the change of the transform function will not affect the


trend of the input parameters. The change of the transform function will not affect the accuracy of prediction, and the operator transferring can be implemented:

$$d''_{s_j}(t_l) = \Upsilon_{s_i}\!\left(h_{s_j}\!\left(B_{M-s_i}(t_l)\right)\right) = f_{s_i}\!\left(\sum_{j=1}^{J}\omega_j\, f_{z_j}\!\left(\sum_{q=1}^{|M|}\nu_{qj}\, h_{s_j}\!\left(B_{M-s_i}(t_l)\right)\right)\right). \tag{29}$$

The assessment network based on the BP artificial neural network operator is shown in Figure 4.

Our proposed method improves on the BP artificial neural network. According to spatial correlation, the physical quantity of a monitored and interested location close to a data acquisition node $s_q$ can be approximated by the learning operator of $s_q$.

Suppose $\hat{d}_{s_j}$ represents the estimated value of the physical quantity at the interested location $s_j$. Due to conditional restrictions, no sensor is deployed at the interested location $s_j$. We choose the physical quantity of the data acquisition node $s_i$ nearest to the non-data acquisition location $s_j$ for estimation; $s_i$ can be determined with Algorithm 1.

5.3. Assessment at the Non-Data Acquisition Location. $s_i$ is the nearest data acquisition node to $s_j$. We can use the learning operator of the data acquisition node $s_i$ to estimate $\hat{d}_{s_j}$. This paper improves on the inverse-distance interpolation method. Because $s_i$ is closest to $s_j$, the correlation of their data is the largest, and the data actually collected by $s_i$ have the greatest impact on the predictive value at $s_j$:

$$\hat{d}_{s_j} = E\!\left(d'_{s_j}(t_l), d''_{s_j}(t_l)\right) = \alpha \times d'_{s_j}(t_l) + \beta \times d''_{s_j}(t_l),\quad s_i \in M,\ s_j \in S - M, \tag{30}$$

where $\alpha$ and $\beta$ denote the weights of $d'_{s_j}(t_l)$ and $d''_{s_j}(t_l)$, respectively, used to estimate the physical quantity at $s_j$, with $\alpha + \beta = 1$.

$d'_{s_j}(t_l)$ is the value of the actual measurement, so its credibility is higher. Based on spatial correlation, the closer $s_i$ is to $s_j$, the greater the correlation between the physical quantity at $s_j$ and the data collected by $s_i$.

Let $\mathrm{dist}_{s_i\to s_j}$ represent the distance between $s_i$ and $s_j$. The influence weight of the data collected by $s_i$ on the assessment of $s_j$ decreases with increasing $\mathrm{dist}_{s_i\to s_j}$; the greater the distance, the smaller the impact. The range of $\mathrm{dist}_{s_i\to s_j}$ is $(0, +\infty)$, and the inverse tangent function is an increasing function on $(0, +\infty)$. The function curve is shown in Figure 5.

We use the boundedness and monotonically increasing characteristics of the arctangent function curve and are inspired by the idea of IDWI. The formula for calculating $\beta$ is as follows:

$$\beta = \frac{\arctan\left(\mathrm{dist}_{s_i\to s_j}\right)}{\pi/2},\quad \mathrm{dist}_{s_i\to s_j} \in (0, +\infty). \tag{31}$$

We limit the value of $\beta$ to the interval $[0, 1]$. When $\mathrm{dist}_{s_i\to s_j}$ is close to 0, $\beta$ is close to 0 and $1 - \beta$ is close to 1; then the data measured by the data acquisition node is closer to the value at the non-data acquisition location. When $\mathrm{dist}_{s_i\to s_j} = 0$, a sensor is already deployed at $s_j$, so we do not need values calculated by the prediction algorithm and directly use the actual measured data.

$$\hat{d}_{s_j} = \frac{\arctan\left(\mathrm{dist}_{s_i\to s_j}\right)}{\pi/2}\times d''_{s_j}(t_l) + \left(1 - \frac{\arctan\left(\mathrm{dist}_{s_i\to s_j}\right)}{\pi/2}\right)\times d'_{s_j}(t_l),\quad s_i \in M,\ s_j \in S - M. \tag{32}$$
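The arctangent weighting of (31) and the blend of (32) can be sketched as follows (the sample distances and readings are illustrative, and the function name is an assumption):

```python
import math

def blend_estimate(d_measured, d_operator, dist):
    """Combine the nearest node's measurement d' and the operator
    estimate d'' with the arctan weight beta of (31), as in (32)."""
    beta = math.atan(dist) / (math.pi / 2.0)   # (31): beta in [0, 1)
    return beta * d_operator + (1.0 - beta) * d_measured

# At distance 0 the actual measurement is used unchanged
near = blend_estimate(18.0, 19.0, dist=0.0)
# At a large distance the operator estimate dominates
far = blend_estimate(18.0, 19.0, dist=1000.0)
```

The bounded, monotonic $\arctan$ curve is what keeps the weight well behaved for arbitrarily large distances, unlike a raw inverse-distance weight.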

If the interested location for interpolation is still far from the nearest data acquisition node, the algorithm will incur a large error. Since we use sensor placement based on iteratively dividing four subregions, the sensors of the data acquisition nodes cover the whole space omnidirectionally rather than being concentrated in a certain domain, so the error is not too large.

6. Experiments and Evaluation

6.1. Parameter Setup. The data set we used is the measured data provided by the Intel Berkeley Research lab [9]. The data were collected from 54 data acquisition nodes in the Intel Berkeley Research lab between February 28th and April 5th, 2004. In this case, the epoch is a monotonically increasing sequence number from each mote; two readings with the same epoch number were produced by different motes at the same time. There are some missing epochs in this data set. Mote IDs range from 1 to 54. In this experiment, we selected the data of the following 9 motes as the set of interested locations, because these 9 points share the same epochs:

$$S = \{s_7(22.5, 8), s_{18}(5.5, 10), s_{19}(3.5, 13), s_{21}(4.5, 18), s_{22}(1.5, 23), s_{31}(15.5, 28), s_{46}(34.5, 16), s_{47}(39.5, 14), s_{48}(35.5, 10)\}. \tag{33}$$

When the environment of the monitoring area is not very complicated, the acquired physical quantity has strong spatial correlation, and it is feasible to deploy sensors with the placement based on iteratively dividing four subregions. In this experiment, we use the method of iteratively dividing four subregions to select the interested locations from $S$ as data acquisition nodes. The $s_i(x_i, y_i)$ ($s_i(x_i, y_i) \in S$) closest to the deployment location


Figure 4: The assessment network based on the BP artificial neural network operator.


generated by the method of iteratively dividing four subregions is selected into M.

In this experiment, we set the dimension of the collected data to 2, that is, k = 2: we take two physical quantities, temperature and humidity. At least two data acquisition nodes are needed, i.e., |M| ≥ 2, because one data acquisition node must serve as the node closest to the interested location for interpolation, and the data of the other acquisition nodes must form the training set of the artificial neural network: M − si ≠ ∅, i.e., |M − si| ≥ 1.

The epochs we chose are T = {33, 35, 42, 44, 51, 56, 61, 62, 70, 73, 86, 88, 89, 91, 95, 99, 102, 107, 112, 113, 123, 124, 127}. The data whose epoch is an element of T are used as the training set. Unfortunately, the actual sampled sensing data are always corrupted, and some values go missing. We need real, clean sensing data to train the BP artificial neural networks to improve the interpolation precision.

In order not to lose the trend of the data in the process of data filtering, we simply use mean processing with a filter window along the temporal direction of the measurements to obtain the clean sensing data. If |d_{si}(t−1) − d_{si}(t)| ≫ δ × |d_{M−si}(t−1) − d_{M−si}(t)|, then d_{si}(t) is replaced by the value calculated as

d_{si}(t) = (d_{si}(t−1) + d_{si}(t+1)) / 2   (34)

where δ is the adjusting coefficient. In this experiment, we take δ = 10.
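A minimal sketch of this cleaning rule, assuming a single reference series `d_rest` stands in for the remaining nodes and comparing each jump against the already-cleaned previous sample; all series values are hypothetical:

```python
def clean_series(d, d_rest, delta=10.0):
    """If a sample's jump is far larger than the jump seen by the other
    nodes, replace it with the mean of its temporal neighbours, eq. (34)."""
    out = list(d)
    for t in range(1, len(d) - 1):
        jump = abs(out[t - 1] - d[t])              # |d_si(t-1) - d_si(t)|
        ref_jump = abs(d_rest[t - 1] - d_rest[t])  # |d_{M-si}(t-1) - d_{M-si}(t)|
        if jump > delta * ref_jump:
            out[t] = (d[t - 1] + d[t + 1]) / 2.0   # eq. (34)
    return out

temps = [18.1, 18.2, 95.0, 18.4, 18.5]    # hypothetical spike at t = 2
others = [18.0, 18.1, 18.2, 18.3, 18.4]
print(clean_series(temps, others))
```

Comparing against the cleaned previous sample keeps a single spike from also flagging its successor.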

The filtered result of the temperature measurements is shown in Figure 6(a), and the actual sensing data is shown in Figure 6(b). The filtered result of the humidity measurements is shown in Figure 7(a), and the actual sensing data is shown in Figure 7(b).

When comparing the test results, we require that the epoch of the data supplied by each mote is the same, because in the actual application the data space of the monitoring area we built is the data space at a certain moment. We assumed that tl = 138; that is, the time we selected for interpolation was the 138th epoch. We compare the actual collected values at the 138th epoch with the interpolation calculated by the algorithms.

To evaluate the accuracy of the reconstructed measurement, we choose the mean relative error (MRE). It reflects the precision of the estimated data relative to the measured data. The formula for the calculation of MRE is as follows [17]:

MRE = (1/n) Σ_{i=1}^{n} |d̂(x_i) − d(x_i)| / d(x_i)   (35)

where d(x_i) is the actual acquisition value of the ith sensor, d̂(x_i) is correspondingly the assessed value, and n is the total number of data acquisition nodes.
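Equation (35) is straightforward to compute; the node values below are hypothetical:

```python
def mean_relative_error(actual, assessed):
    """MRE of eq. (35): the average of |d_hat(x_i) - d(x_i)| / d(x_i)."""
    return sum(abs(a_hat - a) / a
               for a, a_hat in zip(actual, assessed)) / len(actual)

# Measured vs. interpolated values at four hypothetical nodes.
print(mean_relative_error([20.0, 20.0, 10.0, 40.0],
                          [22.0, 19.0, 11.0, 38.0]))  # approximately 0.075
```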

6.2. Results of the Experiment. This experiment is conducted in a small range, the physical quantity of the collected data is highly correlated, and there is no situation in which the physical quantity changes drastically between the various interested locations, so the learning operator can be transferred.

In the case of data loss, we compare our method with the inverse-distance-weighted interpolation algorithm in terms of interpolation accuracy; in this case, q_{si} = 0. Since the data acquisition nodes suffering data loss are random, we conducted 20 tests for a more accurate comparison. Obviously, the number of acquisition nodes with data loss must be less than or equal to the total number of acquisition nodes. The results are shown in Figures 8 and 9.

As can be seen from Figures 8 and 9, as the proportion of lost data among all collected data decreases, the interpolation error is gradually reduced. The curve of the proposed algorithm is relatively flat, indicating that it is less affected by data loss.

In the case of data loss, especially when the number of data acquisition nodes is relatively small, the interpolation error of our algorithm is much smaller than that of the inverse-distance-weighted interpolation.

In the case of data disturbance, we compare our method with the inverse-distance-weighted interpolation algorithm in terms of interpolation accuracy; in this case, q_{si} = No(d̄_{si}(t_l), σ²). For the interpolation of temperature, we use the mean temperature of all acquisition nodes as the mean value of the Gauss distribution. We set the parameters

Figure 5: The inverse tangent function curve.

Input: M, sj(xj, yj)
Output: si
Initialize mindist
For each sq in M
    dist_{sj→sq} = ((xj − xq)² + (yj − yq)²)^{1/2}
    if mindist > dist_{sj→sq} then
        i = q; mindist = dist_{sj→sq}
    end if
End for

Algorithm 1: Culling si from M.
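Algorithm 1 is a linear nearest-neighbour scan over M; a minimal sketch with hypothetical node names and coordinates in the style of the set S in (33):

```python
import math

def nearest_acquisition_node(M, sj):
    """Algorithm 1: return the node s_i in M closest to location s_j."""
    xj, yj = sj
    best, mindist = None, math.inf
    for name, (xq, yq) in M.items():
        dist = math.hypot(xj - xq, yj - yq)  # Euclidean distance dist_{sj->sq}
        if dist < mindist:
            best, mindist = name, dist
    return best, mindist

M = {"s7": (22.5, 8.0), "s18": (5.5, 10.0), "s46": (34.5, 16.0)}
print(nearest_acquisition_node(M, (20.0, 9.0)))  # s7 is the closest node
```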


as d̄_{si}(t_l) = 18.47895 and σ² = 18; for the interpolation of humidity, we use the mean humidity of all acquisition nodes as the mean value of the Gauss distribution and set the parameters as d̄_{si}(t_l) = 39.44691 and σ² = 39. Again, we did 20 tests. The results are shown in Figures 10 and 11.

It can be seen from Figures 10 and 11 that the variation curve of the proposed algorithm is relatively flat, while the curve of the inverse-distance-weighted interpolation algorithm fluctuates greatly. The interpolation error of the IDWI algorithm is affected not only by data disturbance but also by the deployment locations of the data acquisition nodes. When 7 acquisition nodes are deployed, the error of the IDWI algorithm is the smallest: because sensor placement based on iteratively dividing four subregions is nearly uniform, the interpolation error is small when 7 acquisition nodes are deployed.

In the case where there are not many acquisition nodes, the density of acquisition nodes at which data disturbance occurs is large, so the interpolation error is more prominent. As the number of deployed sensors increases, the interpolation error is also reduced.

As we can see from Figures 8–11, our algorithm is insensitive to errors and strongly robust when the data is erroneous, while erroneous data greatly degrades the interpolation accuracy of the inverse-distance interpolation method. In particular, when the error rate is high, the relative error of our algorithm is much lower than that of the inverse-distance interpolation algorithm. When the error rate is 50%, our algorithm has a relative error of 0.1, while the relative error of the inverse-distance interpolation algorithm is 3.5, as shown in Figure 9(a).

7. Conclusions

In this paper, we proposed a robust data interpolation based on a back propagation artificial neural network operator for incomplete acquisition in a wireless sensor network. Under the incomplete collection strategy of WSN, the effect of complete acquisition can be approximately obtained by interpolation. In the case of limited data acquisition nodes, the data of the acquisition nodes are used for training to obtain the learning operator. Then, the learning operator of the acquisition node

Figure 7: The humidity sensing data from the Intel Berkeley Research lab: (a) the filtered sensing data; (b) the actual sensing data.

Figure 6: The temperature sensing data from the Intel Berkeley Research lab: (a) the filtered sensing data; (b) the actual sensing data.


Figure 9: The humidity sensing data from the Intel Berkeley Research lab: (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at data acquisition nodes increasing from 2 to 8.

Figure 8: The temperature sensing data from the Intel Berkeley Research lab: (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at data acquisition nodes increasing from 2 to 8.


closest to the non-data acquisition location is transferred to the non-data acquisition location for interpolation. Considering that the data collected by WSN is prone to error, we analyzed the reasons for the error. In order to improve the fault tolerance and robustness of the interpolation algorithm, we proposed a BP artificial neural network learning

Figure 11: The humidity sensing data from the Intel Berkeley Research lab: (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at data acquisition nodes increasing from 2 to 8.

Figure 10: The temperature sensing data from the Intel Berkeley Research lab: (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at data acquisition nodes increasing from 2 to 8.


operator. From the experiments, we demonstrated that our algorithm has strong robustness and a lower error in the case of erroneous data collected by the WSN. This method has strong potential for practical data visualization, data analysis, WSN monitoring, etc.

Data Availability

The measured data that support the findings of this study are supplied by the Intel Berkeley Research lab. They are available from http://db.csail.mit.edu/labdata/labdata.html.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This work was financially supported by the National Natural Science Foundation of China (Grant Nos. 61561017 and 61462022), the Innovative Research Projects for Graduate Students of Hainan Higher Education Institutions (Grant No. Hyb2017-06), the Hainan Province Major Science & Technology Project (Grant No. ZDKJ2016015), the project of the Natural Science Foundation of Hainan Province in China (Grant No. 617033), the open project of the State Key Laboratory of Marine Resource Utilization in South China Sea (Grant No. 2016013B), and the Key R&D Project of Hainan Province (No. ZDYF2018015).

References

[1] F. Ding and A. Song, "Development and coverage evaluation of ZigBee-based wireless network applications," Journal of Sensors, vol. 2016, Article ID 2943974, 9 pages, 2016.

[2] O. Alvear, W. Zamora, C. Calafate, J. C. Cano, and P. Manzoni, "An architecture offering mobile pollution sensing with high spatial resolution," Journal of Sensors, vol. 2016, Article ID 1458147, 13 pages, 2016.

[3] W. Xiao, The Key Technology of Adaptive Data Fault Tolerance in Wireless Sensor Networks, National Defense University of Science and Technology, Hunan, China, 2010.

[4] Z. Liang, F. Tian, C. Zhang, H. Sun, A. Song, and T. Liu, "Improving the robustness of prediction model by transfer learning for interference suppression of electronic nose," IEEE Sensors Journal, vol. 18, no. 3, pp. 1111–1121, 2017.

[5] Z. Yang, W. Qiu, H. Sun, and A. Nallanathan, "Robust radar emitter recognition based on the three-dimensional distribution feature and transfer learning," Sensors, vol. 16, no. 3, pp. 289–303, 2016.

[6] S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2010.

[7] B. Pan, J. Tai, Q. Zheng, and S. Zhao, "Cascade convolutional neural network based on transfer-learning for aircraft detection on high-resolution remote sensing images," Journal of Sensors, vol. 2017, Article ID 1796728, 14 pages, 2017.

[8] J. Park, R. Javier, T. Moon, and Y. Kim, "Micro-Doppler based classification of human aquatic activities via transfer learning of convolutional neural networks," Sensors, vol. 16, no. 12, pp. 1990–2000, 2016.

[9] http://db.csail.mit.edu/labdata/labdata.html.

[10] Z. Kang and Y. Wang, "Structural topology optimization based on non-local Shepard interpolation of density field," Computer Methods in Applied Mechanics and Engineering, vol. 200, no. 49-52, pp. 3515–3525, 2011.

[11] M. Hammoudeh, R. Newman, C. Dennett, and S. Mount, "Interpolation techniques for building a continuous map from discrete wireless sensor network data," Wireless Communications and Mobile Computing, vol. 13, no. 9, pp. 809–827, 2013.

[12] M. Saberi, A. Azadeh, A. Nourmohammadzadeh, and P. Pazhoheshfar, "Comparing performance and robustness of SVM and ANN for fault diagnosis in a centrifugal pump," in 19th International Congress on Modelling and Simulation, pp. 433–439, Perth, Australia, 2011.

[13] R. Velazco, P. Cheynet, J. D. Muller, R. Ecoffet, and S. Buchner, "Artificial neural network robustness for on-board satellite image processing: results of upset simulations and ground tests," IEEE Transactions on Nuclear Science, vol. 44, no. 6, pp. 2337–2344, 1997.

[14] A. Venkitaraman, A. M. Javid, and S. Chatterjee, "R3Net: random weights, rectifier linear units and robustness for artificial neural network," 2018, https://arxiv.org/abs/1803.04186.

[15] C. Zhang, X. Zhang, O. Li et al., "WSN data collection algorithm based on compressed sensing under unreliable links," Journal of Communication, vol. 37, no. 9, pp. 131–141, 2016.

[16] H. L. Chen and W. Peng, "Improved BP artificial neural network in traffic accident prediction," Journal of East China Normal University (Natural Science), vol. 2017, no. 2, pp. 61–68, 2017.

[17] G. Y. Lu and D. W. Wong, "An adaptive inverse-distance weighting spatial interpolation technique," Computers & Geosciences, vol. 34, no. 9, pp. 1044–1055, 2008.



data acquisition location to predict the data of the non-data acquisition location.

4. Learning Operator Set of Data Acquisition Nodes

The mathematical model of the learning operator for the reconstruction is as follows:

arg min ‖ϒ(Inp) − Tar‖
s.t. Inp ≠ Tar   (14)

where Tar represents the learning goal, ϒ(·) is the learning operator, and Inp is its input; Inp can be individuals, variables, and even algorithms, functions, sets, and so on. Inp and Tar are different; if they had no difference, there would be no need to learn for reconstruction. The purpose of learning is to make ϒ(Inp) gradually approach Tar.

Because the data of WSN is error-prone, we need fault-tolerant and robust learning operators. The learning operator in this paper uses a back propagation (BP) artificial neural network. We can use the data of the data acquisition nodes to predict the data of a non-data acquisition location, thus assessing the data space of the entire monitoring area. Because of the strong robustness and adaptability of the artificial neural network, we can use it to interpolate the data of non-data acquisition locations in the case of data error.

The BP artificial neural network is a multilayer (at least 3-layer) feedforward network based on error backpropagation. Because of its characteristics, such as nonlinear mapping, multiple input and multiple output, and self-organizing self-learning, the BP artificial neural network is well suited to dealing with complex nonlinear multiple-input multiple-output problems. The BP artificial neural network model is composed of an input layer, a hidden layer (which can be multilayer but is at least one layer), and an output layer. Each layer is composed of a number of juxtaposed neurons. The neurons in the same layer are not connected to each other, and the neurons in adjacent layers are fully interconnected [16].

After constructing the topology of the artificial neural network, it is necessary to train the network in order to make it intelligent. For the BP artificial neural network, the learning process is accomplished by forward propagation and reverse correction propagation. As each data acquisition node is related to the other data acquisition nodes and each data acquisition node has historical data, the artificial neural network learning operator of a data acquisition node can be trained from the historical data of the data acquisition nodes.

4.1. Transform Function of the Input Unit. The data of the non-data acquisition location sj is assessed by using the inverse-distance learning operator of the data acquisition node si closest to sj. Because there is spatial correlation between interested locations in spatial acquisition, in this paper we adopt the IDWI algorithm combined with the BP artificial neural network algorithm.

At a certain epoch t_l, the data set of all data acquisition nodes except si is as follows:

B_{M−si}(t_l) = [ b_{so,1}(t_l) ⋯ b_{sm,1}(t_l) ; ⋮ b_{sq,p}(t_l) ⋮ ; b_{so,k}(t_l) ⋯ b_{sm,k}(t_l) ],  s_q ∈ M − si   (15)

In this paper, the inverse-distance weight is used to construct the transform function. The transform function h_{si}(B_{M−si}(t_l)) of the input unit B_{M−si}(t_l) of the artificial neural network with data acquisition node si is as follows:

h_{si}(B_{M−si}(t_l)) = ( (1/dist^γ_{sq→si}) × B_{M−si}(t_l) ) / ( Σ_{sq∈M−si}^{|M|−1} 1/dist^γ_{sq→si} ),  s_q ∈ M − si, s_i ∈ M   (16)

where dist_{sq→si} represents the distance between the data acquisition node si and the acquisition node sq, γ represents the weighted power exponent of the distance reciprocal, and Σ_{sq∈M−si}^{|M|−1} 1/dist^γ_{sq→si} represents the sum of the weighted reciprocals of the distances from the data acquisition node si to the rest of the data acquisition nodes.

The artificial neural network requires a training set. We take the historical data of all data acquisition nodes over the period T = {t_0, t_1, …, t_l, …, t_m} as the training set. In practical engineering, it is feasible to obtain the data from the data acquisition nodes over a period for learning.

In this paper, the data collected from all data acquisition nodes are used as the training set B_{M−si}(T). B_{M−si}(T) is a three-order tensor, as shown in Figure 2: B_{M−si}(T) ∈ R^{k×|M−si|×t_m}. The elements of the training set are indexed by physical quantity, acquisition node, and epoch.

The B_{M−si}(T) tensor is obtained by

B_{M−si}(T) = ( B_{M−si}(t_0), B_{M−si}(t_1), …, B_{M−si}(t_l), …, B_{M−si}(t_m) ),  with each slice B_{M−si}(t_l) = [ b_{so,1}(t_l) ⋯ b_{sm,1}(t_l) ; ⋮ ; b_{so,k}(t_l) ⋯ b_{sm,k}(t_l) ],  b_s ∈ M − si   (17)
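As a data-structure sketch, a three-order tensor indexed by (physical quantity, acquisition node, epoch) might be laid out as nested lists; all sizes and values here are hypothetical:

```python
# k physical quantities x |M - s_i| nodes x (m + 1) epochs, as in eq. (17).
k, nodes, epochs = 2, 3, 4
B = [[[0.0] * epochs for _ in range(nodes)] for _ in range(k)]

B[0][1][2] = 18.4  # temperature of node q = 1 at epoch t_2 (hypothetical)
B[1][1][2] = 39.1  # humidity of the same node at the same epoch
print(len(B), len(B[0]), len(B[0][0]))  # 2 3 4
```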


In actual engineering, B_{M−si}(T) contains a lot of noise, and it is necessary to clean it; if it is not cleaned, it will affect the estimation accuracy of the artificial neural network and the precision of learning. Data cleaning is the process of reexamining and verifying data to remove duplicate information, correct existing errors, and provide data consistency. We can use various filter algorithms to obtain the cleaned B_{M−si}(T).

4.2. Artificial Neural Network Learning Operator. The sensing data of a real network include several physical quantities, such as temperature, humidity, and illumination. Usually, a variety of sensors are deployed at an acquisition node, and multiple physical quantities are collected at the same time. Physical quantities at the same acquisition node have the same temporal and spatial variation trend, and in the recovery process using the artificial neural network, the quantities mutually promote each other. The distances between data acquisition nodes and non-data acquisition locations are very easy to obtain. Based on the inverse-distance interpolation algorithm and the BP artificial neural network algorithm, we propose a multidimensional inverse-distance BP artificial neural network learning operator.

The BP artificial neural network, the most widely used one-way propagating multilayer feedforward network, is characterized by continuous adjustment of the network connection weights so that any nonlinear function can be approximated with arbitrary accuracy. The BP artificial neural network is self-learning and adaptive and has robustness and generalization. The training of the BP artificial neural network is supervised learning. The training process of the BP artificial neural network is shown in Figure 3.

The input information is first transmitted to the nodes of the hidden layer through weighted threshold summation; then the output information of the hidden nodes is transmitted to the output nodes through weighted threshold summation after the operation of the transfer function of each element; finally, the output result is given. The purpose of network learning is to obtain the right weights and thresholds. The training process consists of two parts: forward and backward propagation.

The BP artificial neural network is a multilayer feedforward network trained by the error backpropagation algorithm. Its structure has not only input nodes and output nodes but also one or more hidden nodes. As demonstrated in Figure 3, according to the prediction error, the weights v and w are continuously adjusted, and finally the BP artificial neural network learning operator for reconstructing the data of data acquisition node si can be determined, as shown in the part ϒ_{si}(·) of Figure 3.

The input layer of the BP artificial neural network learning operator is the data collected at a certain time at all acquisition nodes except si, that is, the vector h_{si}(B_{M−si}(t_l)). The input of the jth neuron in the hidden layer is calculated by

Σ_j = Σ_{q=1}^{|M|−1} Σ_{p=1}^{k} ν_{qjp} h_{si}(B_{M−si}(t_l)) − θ_j   (18)

where ν_{qjp} represents the weight between the (q, p)th input neuron and the jth neuron in the hidden layer; the (q, p)th input neuron is h_{si}(b_{sq,p}(t_l)), that is, the pth-dimension data of acquisition node sq at time t_l; and θ_j is the threshold of the jth neuron in the hidden layer. The output of the neurons in the hidden layer is calculated by

ho_j = 1 / (1 + e^{−Σ_j})   (19)

Similarly, the output of each neuron in the output layer is denoted by o_p.

Figure 2: The model of B_{M−si}(T).


The sum of the squared errors of all neurons in the output layer is the objective function value optimized by the BP artificial neural network algorithm, which is calculated by

E = (1 / ((|M| − 1) × k)) Σ_{q=1}^{|M|−1} Σ_{p=1}^{k} (b_{si,p} − o_p)²   (20)

where b_{si,p} represents the expected output value of neuron o_p in the output layer, corresponding to the value of the pth physical quantity sensed by the data acquisition node si. According to the gradient descent method, the error of each neuron in the output layer is obtained by

E_p = o_p × (1 − o_p) × (b_{si,p} − o_p)   (21)

The weights and thresholds of the output layer can be adjusted by

Δω = ∂ × E_p × ho_j   (22)

Δθ_p = ∂ × E_p   (23)

where ∂ ∈ (0, 1) is the learning rate, which reflects the speed of training and learning. Similarly, the weights and thresholds of the hidden layer can be obtained. If the desired results cannot be obtained from the output layer, the weights and thresholds need to be adjusted constantly, gradually reducing the error. The BP neural network has strong self-learning ability and can quickly obtain the optimal solution.
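A numerical sketch of this update rule, with one hidden layer, sigmoid activations on both layers, and only the output-layer correction of (21)–(23) shown (the hidden-layer update is analogous); the network size, data, and learning rate are hypothetical, not the Intel-lab configuration:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(x, target, v, theta_h, w, theta_o, lr=0.5):
    """One forward pass and one output-layer update, following (18)-(23)."""
    # Hidden layer: weighted threshold summation, eqs. (18)-(19).
    ho = [sigmoid(sum(v[q][j] * x[q] for q in range(len(x))) - theta_h[j])
          for j in range(len(theta_h))]
    # Output layer (same sigmoid form).
    o = [sigmoid(sum(w[j][p] * ho[j] for j in range(len(ho))) - theta_o[p])
         for p in range(len(theta_o))]
    # Output-layer error, eq. (21).
    e = [o[p] * (1 - o[p]) * (target[p] - o[p]) for p in range(len(o))]
    # Weight and threshold updates, eqs. (22)-(23); the threshold sign
    # follows the net = (weighted sum) - theta convention of eq. (18).
    for j in range(len(ho)):
        for p in range(len(o)):
            w[j][p] += lr * e[p] * ho[j]
    for p in range(len(o)):
        theta_o[p] -= lr * e[p]
    return o

# Tiny hypothetical network: 2 inputs, 3 hidden neurons, 1 output.
v = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]
w = [[random.uniform(-0.5, 0.5)] for _ in range(3)]
theta_h, theta_o = [0.0, 0.0, 0.0], [0.0]
x, target = [0.4, 0.9], [0.7]

errs = [abs(target[0] - train_step(x, target, v, theta_h, w, theta_o)[0])
        for _ in range(200)]
print(errs[0], errs[-1])  # the error shrinks as training proceeds
```

In the paper, the hidden-layer weights ν and thresholds θ are adjusted analogously by backpropagating E_p.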

Figure 3: The training process of the BP artificial neural network.


The mathematical model of Figure 3 is as follows:

f_{op}( Σ_{j=1}^{J} ω_j f_{Σj}( Σ_{q=1}^{|M|−1} Σ_{p=1}^{k} ν_{qjp} h_{si}(B_{M−si}(T)) ) ) ⟶ b_{si,p}(T)   (24)

where B_{sq}(T) is the training data, that is, the historical data of the acquisition nodes except si in the period T; h_{si}(·) is the transform function of the input layer; ν_{qj} is the weight of the input layer; f_{Σj}(·) is the transfer function of the input layer to the hidden layer; ω is the weight of the hidden layer to the output layer; f_{op}(·) is the transfer function of the hidden layer to the output layer; and b_{si}(T) is the "supervisor" information, that is, the historical data of acquisition node si in the period T.

From (14) and (24), the following formula can be obtained:

ϒ_{si}(·) = f_{op}( Σ_{j=1}^{J} ω_j f_{Σj}( Σ_{q=1}^{|M|−1} Σ_{p=1}^{k} ν_{qjp}(·) ) ) | b_{si}(T)   (25)

where ϒ_{si}(·) represents the inverse-distance BP artificial neural network learning operator of acquisition node si, and | b_{si}(T) denotes that this neural network operator ϒ_{si}(·) is a learning operator trained with the data of si as the tutor information.

The data collected by each data acquisition node in the monitoring area are related to each other. The data of a data acquisition node can be learned by inputting the data of the other data acquisition nodes into the learning operator. The learning operator set of the data acquisition nodes in the whole monitoring area is as follows:

ϒ(·) = { ϒ_{si}(·) | si ∈ M }   (26)

5. Interpolation at Non-Data Acquisition Locations

We transfer the BP artificial neural network operators from data acquisition nodes to non-data acquisition locations; that is, we use the learning operator of the data acquisition node closest to a non-data acquisition location to estimate its data. There are four ways to implement learning operator transferring: sample transferring, feature transferring, model transferring, and relationship transferring. In this paper, model transferring (also called parameter transferring) is used to better combine with the BP artificial neural network; that is, we use the pretrained BP artificial neural network to interpolate. The BP artificial neural network learning operator has strong robustness and adaptability. In the interested area, if the collected physical quantity is highly correlated and there is no situation in which the physical quantity changes drastically between the various interested locations, the learning operator can be transferred.

Since the construction of an artificial neural network requires historical data for training, and there is no historical data at a non-data acquisition location where no sensors are deployed, it is very difficult to construct an artificial neural network for a non-data acquisition location. However, we can predict the physical quantities of non-data acquisition locations by using the learning operator of the nearest data acquisition node.

5.1. Transform Function Corresponding to the Nonacquisition Location sj. First, we use the IDWI method to construct the transform function, analogously to the transform function h_{si}(B_{M−si}(t_l)) of the artificial neural network input unit of the acquisition node si:

h_{sj}(B_{M−si}(t_l)) = ( (1/dist^γ_{sq→sj}) × B_{M−si}(t_l) ) / ( Σ_{sq∈M−si}^{|M|−1} 1/dist^γ_{sq→sj} )   (27)

where dist_{sq→sj} represents the distance between sq and sj, γ represents the weighted power exponent of the distance reciprocal, and Σ_{sq∈M−si}^{|M|−1} 1/dist^γ_{sq→sj} represents the sum of the weighted reciprocals of the distances from the non-data acquisition location sj to the rest of the acquisition nodes. The physical data collected from the remaining |M| − 1 data acquisition nodes except si are as follows:

B_{M−si} = { b_{sq}(t_l) | sq ∈ M − si }   (28)

5.2. Learning Operator Transferring. Since the data of the data acquisition node closest to the non-data acquisition location is important to the non-data acquisition location and is the most correlated with its data, we estimate the data of the non-data acquisition location with the data from the nearest data acquisition node and its learning operator.

Since the data we are targeting are spatially correlated, the smaller the distance between two interested locations is, the smaller the difference between their collected data is; conversely, the greater the distance, the greater the difference. We can therefore use the data of the data acquisition node close to the non-data acquisition location to assess the data at the non-data acquisition location.

Because the BP artificial neural network learning operator has strong robustness and adaptability, in this paper we transfer the inverse-distance BP artificial neural network learning operator of si to sj in order to estimate the data at sj. The transform function h(·) does not change with time; it is a function of the distance between the sampling locations, h_{si}(B_{M−si}(t_l)) ⟶ h_{sj}(B_{M−si}(t_l)). The number of input parameters is constant, while the input parameters vary with time, so the change of the transform function affects neither the trend of the input parameters nor the accuracy of prediction, and the operator transferring can be implemented:

d″_{sj}(t_l) = ϒ_{si}(h_{sj}(B_{M−si}(t_l))) = f_{op}( Σ_{j=1}^{J} ω_j f_{Σj}( Σ_{q=1}^{|M|−1} Σ_{p=1}^{k} ν_{qjp} h_{sj}(B_{M−si}(t_l)) ) )   (29)

The assessment network based on the BP artificial neural network operator is shown in Figure 4.

In this paper, our proposed method is improved on the basis of the BP artificial neural network. According to spatial correlation, the physical quantities of a monitored and interested location close to the data acquisition node sq can be approximated by the learning operator of sq.

Suppose d_{sj} represents the estimated value of the physical quantity of the interested location sj. Due to conditional restrictions, no sensor is deployed at the interested location sj. We choose the physical quantity of the data acquisition node si nearest to the non-data acquisition location sj for estimation; we can use Algorithm 1 to determine si.

5.3. Assessment at the Non-Data Acquisition Location. s_i is the nearest data acquisition node to s_j, so we can use the learning operator of s_i to estimate d̂_{s_j}. This method is an improvement on the inverse-distance interpolation method. Because s_i is closest to s_j, the correlation of their data is the largest, and the data actually collected by s_i have the greatest impact on the predictive value at s_j:

d̂_{s_j} = E(d′_{s_j}(t_l), d″_{s_j}(t_l)) = α × d′_{s_j}(t_l) + β × d″_{s_j}(t_l),  s_i ∈ M, s_j ∈ S − M,   (30)

where α and β denote the weights of d′_{s_j}(t_l) and d″_{s_j}(t_l), respectively, used to estimate the physical quantity at s_j, with α + β = 1.

d′_{s_j}(t_l) is based on the value of the actual measurement, so its credibility is higher. Based on spatial correlation, the closer s_i is to s_j, the greater the correlation between the physical quantity at s_j and the data collected by s_i.

We assume that dist_{s_i→s_j} represents the distance between s_i and s_j. The influence weight of the data collected by s_i on the assessment of s_j decreases with increasing dist_{s_i→s_j}: the greater the distance, the smaller the impact. The range of dist_{s_i→s_j} is [0, +∞), and the inverse tangent function is increasing and bounded on [0, +∞); its curve is shown in Figure 5.

We use the boundedness and the monotonically increasing character of the arctangent curve and are inspired by the idea of IDWI. The formula for calculating β is as follows:

β = arctan(dist_{s_i→s_j}) / (π/2),  dist_{s_i→s_j} ∈ [0, +∞).   (31)

This limits the value of β to the interval [0, 1). When dist_{s_i→s_j} is close to 0, β is close to 0 and 1 − β is close to 1, so the estimate stays close to the data measured by the data acquisition node. When dist_{s_i→s_j} = 0, a sensor is deployed at s_j itself, so no value calculated by the prediction algorithm is needed and the actual measured data is used directly:

d̂_{s_j} = (arctan(dist_{s_i→s_j}) / (π/2)) × d″_{s_j}(t_l) + (1 − arctan(dist_{s_i→s_j}) / (π/2)) × d′_{s_j}(t_l),  s_i ∈ M, s_j ∈ S − M.   (32)
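The arctan-weighted blend of (30)–(32) can be sketched as follows. This is a minimal illustration; the function name `blend_estimate` and its argument names are hypothetical, not from the paper:

```python
import math

def blend_estimate(d_measured, d_network, dist):
    """Combine the measurement-based value d' with the transferred-operator
    estimate d'' using the arctan weight of (31):
    beta = arctan(dist) / (pi / 2), so beta -> 0 as dist -> 0."""
    beta = math.atan(dist) / (math.pi / 2)
    # (32): d_hat = beta * d'' + (1 - beta) * d'
    return beta * d_network + (1 - beta) * d_measured
```

At dist = 0 the weight β vanishes and the measured value is returned unchanged, matching the remark that a sensor deployed at s_j makes prediction unnecessary.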

If the interested location is still far from its nearest data acquisition node, this algorithm will cause a large error. However, since we use the sensor placement based on iteratively dividing four subregions, the data acquisition nodes are spread omnidirectionally throughout the space rather than concentrated in a certain domain, so the error is not too large.

6. Experiments and Evaluation

6.1. Parameter Setup. The data set we used is the measured data provided by the Intel Berkeley Research lab [9]. The data were collected from 54 data acquisition nodes in the Intel Berkeley Research lab between February 28th and April 5th, 2004. In this data set, the epoch is a monotonically increasing sequence number from each mote; two readings with the same epoch number were produced by different motes at the same time. Some epochs are missing, and mote IDs range from 1 to 54. In this experiment, we selected the data of the following 9 motes as the set of interested locations, because these 9 points share the same epochs:

S = {s_7(22.5, 8), s_18(5.5, 10), s_19(3.5, 13), s_21(4.5, 18), s_22(1.5, 23), s_31(15.5, 28), s_46(34.5, 16), s_47(39.5, 14), s_48(35.5, 10)}.   (33)

When the environment of the monitoring area is not very complicated, the collected physical quantity is strongly spatially correlated, and it is feasible to deploy sensors using the placement based on iteratively dividing four subregions. In this experiment, we use this method to select the interested locations from S as data acquisition nodes. The s_i(x_i, y_i) (s_i(x_i, y_i) ∈ S) closest to each deployment location



Figure 4: The assessment network based on the BP artificial neural network operator.


generated by the method of iteratively dividing four subregions is selected into M.

In this experiment, we set the dimension of the collected data to 2, that is, k = 2; we take two physical quantities, temperature and humidity. At least two data acquisition nodes are needed, i.e., |M| ≥ 2: one data acquisition node must be the closest node to the interested location for interpolation (s_i ≠ ∅), and the data of the other acquisition nodes are used as the training set of the artificial neural network (M − s_i ≠ ∅, |M − s_i| ≥ 1).

The epochs we chose are T = {33, 35, 42, 44, 51, 56, 61, 62, 70, 73, 86, 88, 89, 91, 95, 99, 102, 107, 112, 113, 123, 124, 127}; the data whose epoch is an element of T are used as the training set. Unfortunately, actual sampled sensing data are always corrupted and some values go missing. We need clean sensing data to train the BP artificial neural networks and improve the interpolation precision.

In order not to lose the trend of data change during filtering, we simply use mean processing with a filter window along the temporal direction of the measurements to obtain the clean sensing data. If |d_{s_i}(t − 1) − d_{s_i}(t)| ≫ δ × |d_{M−s_i}(t − 1) − d_{M−s_i}(t)|, then d_{s_i}(t) is replaced by the value calculated as

d_{s_i}(t) = (d_{s_i}(t − 1) + d_{s_i}(t + 1)) / 2,   (34)

where δ is the adjusting coefficient; in this experiment, we take δ = 10.
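The filtering rule (34) can be sketched as below. The function name `clean_series`, the list-based interface, and the strict comparison standing in for "≫" are assumptions of this sketch; `d_rest` plays the role of the aggregate reading of the other nodes, and the default δ = 10 follows the text:

```python
def clean_series(d, d_rest, delta=10.0):
    """Temporal mean filter of (34): if a node's jump between consecutive
    epochs greatly exceeds delta times the concurrent jump of the other
    nodes, replace the sample with the mean of its temporal neighbours."""
    out = list(d)
    for t in range(1, len(d) - 1):
        if abs(d[t - 1] - d[t]) > delta * abs(d_rest[t - 1] - d_rest[t]):
            out[t] = (d[t - 1] + d[t + 1]) / 2
    return out
```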

The filtered temperature measurements are shown in Figure 6(a) and the actual sensing data in Figure 6(b); the filtered humidity measurements are shown in Figure 7(a) and the actual sensing data in Figure 7(b).

When comparing the test results, the epoch of the data supplied by each mote must be the same, because in an actual application the data space of the monitoring area is built at a certain moment. We set t_l = 138; that is, the 138th epoch was selected for interpolation, and we compare the values actually collected at the 138th epoch with the interpolations calculated by the algorithms.

To evaluate the accuracy of the reconstructed measurement, we choose the mean relative error (MRE), which reflects the precision of the estimated data relative to the measured data. The MRE is calculated as follows [17]:

MRE = (1/n) ∑_{i=1}^{n} |d̂(x_i) − d(x_i)| / d(x_i),   (35)

where d(x_i) is the actual acquisition value of the ith sensor, d̂(x_i) is the corresponding assessed value, and n is the total number of data acquisition nodes.
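Equation (35) translates directly into code; the function name here is hypothetical:

```python
def mean_relative_error(actual, estimated):
    """MRE of (35): the mean of |d_hat(x_i) - d(x_i)| / d(x_i)
    over all data acquisition nodes."""
    return sum(abs(e - a) / a for a, e in zip(actual, estimated)) / len(actual)
```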

6.2. Results of the Experiment. This experiment covers a small range, the collected physical quantities are highly correlated, and the physical quantity does not change drastically between the interested locations, so the learning operator can be transferred.

In the case of data loss, we compare our method with the inverse-distance-weighted interpolation algorithm in terms of interpolation accuracy; in this case q_{s_i} = 0. Since the data acquisition nodes suffering data loss are random, we conducted 20 tests for a more accurate comparison. Obviously, the number of acquisition nodes with data loss must be less than or equal to the total number of acquisition nodes. The results are shown in Figures 8 and 9.

As can be seen from Figures 8 and 9, as the proportion of lost data among all collected data decreases, the interpolation error is gradually reduced. The curve of the proposed algorithm is relatively flat, indicating that it is less affected by data loss.

In the case of data loss, especially when the number of data acquisition nodes is relatively small, the interpolation error of our algorithm is much smaller than that of the inverse-distance-weighted interpolation.

In the case of data disturbance, we compare our method with the inverse-distance-weighted interpolation algorithm in terms of interpolation accuracy; in this case, the disturbance is q_{s_i} = No(d̄_{s_i}(t_l), σ²_{d_{s_i}}). For the interpolation of temperature, we use the mean temperature of all acquisition nodes as the mean of the Gaussian distribution. We set the parameters

Figure 5: Inverse tangent function curve.

Input: M, s_j(x_j, y_j)
Output: s_i
Initialize mindist
For each s_q in M:
    dist_{s_j→s_q} = ((x_j − x_q)² + (y_j − y_q)²)^{1/2}
    if mindist > dist_{s_j→s_q} then
        i = q; mindist = dist_{s_j→s_q}
    end if
End for

Algorithm 1: Culling s_i from M.
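Algorithm 1 is a plain nearest-neighbour search and can be sketched as follows; the function and argument names are illustrative:

```python
import math

def nearest_node(nodes, sj):
    """Algorithm 1: return the id of the acquisition node in M closest to
    the non-data acquisition location sj = (x_j, y_j); nodes maps a node
    id to its (x, y) coordinates."""
    xj, yj = sj
    best_id, mindist = None, float("inf")
    for q, (xq, yq) in nodes.items():
        dist = math.hypot(xj - xq, yj - yq)  # Euclidean distance
        if dist < mindist:
            best_id, mindist = q, dist
    return best_id
```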


as d̄_{s_i}(t_l) = 18.47895 and σ² = 18. For the interpolation of humidity, we use the mean humidity of all acquisition nodes as the mean of the Gaussian distribution and set the parameters as d̄_{s_i}(t_l) = 39.44691 and σ² = 39. Again, we ran 20 tests. The results are shown in Figures 10 and 11.
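The disturbance values used in these tests can be sketched as below. The helper name is hypothetical, and note that Python's `random.gauss` takes a standard deviation, hence the square root of σ²:

```python
import random

def gaussian_disturbance(mean, var, n, seed=0):
    """Draw n disturbance values q_si ~ N(mean, var), e.g. mean = 18.47895
    and var = 18 for temperature as in the experiment."""
    rng = random.Random(seed)
    sigma = var ** 0.5  # random.gauss expects the standard deviation
    return [rng.gauss(mean, sigma) for _ in range(n)]
```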

It can be seen from Figures 10 and 11 that the variation curve of the proposed algorithm is relatively flat, while the curve of the inverse-distance-weighted interpolation algorithm fluctuates greatly. The interpolation error of the IDWI algorithm is affected not only by data disturbance but also by the deployment locations of the data acquisition nodes. When 7 acquisition nodes are deployed, the error of the IDWI algorithm is the smallest, because sensor placement based on iteratively dividing four subregions is nearly uniform at 7 nodes, so the interpolation error is small.

When there are not many acquisition nodes, the density of acquisition nodes where data disturbance occurs is large, so the interpolation error is more prominent. As the number of deployed sensors increases, the interpolation error is also reduced.

As we can see from Figures 8–11, our algorithm is insensitive to errors and robust when the data is wrong, while erroneous data greatly degrade the interpolation accuracy of the inverse-distance interpolation method. In particular, when the error rate is high, the relative error of our algorithm is much lower than that of the inverse-distance interpolation algorithm: when the error rate is 50%, our algorithm has a relative error of 0.1 while that of the inverse-distance interpolation algorithm is 3.5, as shown in Figure 9(a).

7. Conclusions

In this paper, we proposed a robust data interpolation based on a back propagation artificial neural network operator for incomplete acquisition in a wireless sensor network. Under the incomplete collection strategy of a WSN, the effect of complete acquisition can be approximated by interpolation. With a limited number of data acquisition nodes, the data of the acquisition nodes are used for training to obtain the learning operator. Then the learning operator of the acquisition node


Figure 7: The humidity sensing data from the Intel Berkeley Research lab: (a) the filtered sensing data; (b) the actual sensing data.


Figure 6: The temperature sensing data from the Intel Berkeley Research lab: (a) the filtered sensing data; (b) the actual sensing data.



Figure 9: The humidity sensing data from the Intel Berkeley Research lab: (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at data acquisition nodes increasing from 2 to 8.


Figure 8: The temperature sensing data from the Intel Berkeley Research lab: (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at data acquisition nodes increasing from 2 to 8.


closest to the non-data acquisition location is transferred to that location for interpolation. Considering that the data collected by a WSN are prone to error, we analyzed the causes of the errors. In order to improve the fault tolerance and robustness of the interpolation algorithm, we proposed a BP artificial neural network learning


Figure 11: The humidity sensing data from the Intel Berkeley Research lab: (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at data acquisition nodes increasing from 2 to 8.


Figure 10: The temperature sensing data from the Intel Berkeley Research lab: (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at data acquisition nodes increasing from 2 to 8.


operator. The experiments demonstrated that our algorithm is strongly robust and has a lower error when the data collected by the WSN contain errors. This method has strong potential for practical data visualization, data analysis, WSN monitoring, etc.

Data Availability

The measured data that support the findings of this study are supplied by the Intel Berkeley Research lab. They are available from http://db.csail.mit.edu/labdata/labdata.html.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This work was financially supported by the National Natural Science Foundation of China (Grant Nos. 61561017 and 61462022), the Innovative Research Projects for Graduate Students of Hainan Higher Education Institutions (Grant No. Hyb2017-06), the Hainan Province Major Science & Technology Project (Grant No. ZDKJ2016015), the Natural Science Foundation of Hainan Province in China (Grant No. 617033), the open project of the State Key Laboratory of Marine Resource Utilization in South China Sea (Grant No. 2016013B), and the Key R&D Project of Hainan Province (No. ZDYF2018015).

References

[1] F. Ding and A. Song, "Development and coverage evaluation of ZigBee-based wireless network applications," Journal of Sensors, vol. 2016, Article ID 2943974, 9 pages, 2016.

[2] O. Alvear, W. Zamora, C. Calafate, J. C. Cano, and P. Manzoni, "An architecture offering mobile pollution sensing with high spatial resolution," Journal of Sensors, vol. 2016, Article ID 1458147, 13 pages, 2016.

[3] W. Xiao, The Key Technology of Adaptive Data Fault Tolerance in Wireless Sensor Networks, National Defense University of Science and Technology, Hunan, China, 2010.

[4] Z. Liang, F. Tian, C. Zhang, H. Sun, A. Song, and T. Liu, "Improving the robustness of prediction model by transfer learning for interference suppression of electronic nose," IEEE Sensors Journal, vol. 18, no. 3, pp. 1111–1121, 2017.

[5] Z. Yang, W. Qiu, H. Sun, and A. Nallanathan, "Robust radar emitter recognition based on the three-dimensional distribution feature and transfer learning," Sensors, vol. 16, no. 3, pp. 289–303, 2016.

[6] S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2010.

[7] B. Pan, J. Tai, Q. Zheng, and S. Zhao, "Cascade convolutional neural network based on transfer-learning for aircraft detection on high-resolution remote sensing images," Journal of Sensors, vol. 2017, Article ID 1796728, 14 pages, 2017.

[8] J. Park, R. Javier, T. Moon, and Y. Kim, "Micro-Doppler based classification of human aquatic activities via transfer learning of convolutional neural networks," Sensors, vol. 16, no. 12, pp. 1990–2000, 2016.

[9] http://db.csail.mit.edu/labdata/labdata.html.

[10] Z. Kang and Y. Wang, "Structural topology optimization based on non-local Shepard interpolation of density field," Computer Methods in Applied Mechanics and Engineering, vol. 200, no. 49-52, pp. 3515–3525, 2011.

[11] M. Hammoudeh, R. Newman, C. Dennett, and S. Mount, "Interpolation techniques for building a continuous map from discrete wireless sensor network data," Wireless Communications and Mobile Computing, vol. 13, no. 9, pp. 809–827, 2013.

[12] M. Saberi, A. Azadeh, A. Nourmohammadzadeh, and P. Pazhoheshfar, "Comparing performance and robustness of SVM and ANN for fault diagnosis in a centrifugal pump," in 19th International Congress on Modelling and Simulation, pp. 433–439, Perth, Australia, 2011.

[13] R. Velazco, P. Cheynet, J. D. Muller, R. Ecoffet, and S. Buchner, "Artificial neural network robustness for on-board satellite image processing: results of upset simulations and ground tests," IEEE Transactions on Nuclear Science, vol. 44, no. 6, pp. 2337–2344, 1997.

[14] A. Venkitaraman, A. M. Javid, and S. Chatterjee, "R3Net: random weights, rectifier linear units and robustness for artificial neural network," 2018, https://arxiv.org/abs/1803.04186.

[15] C. Zhang, X. Zhang, O. Li et al., "WSN data collection algorithm based on compressed sensing under unreliable links," Journal of Communication, vol. 37, no. 9, pp. 131–141, 2016.

[16] H. L. Chen and W. Peng, "Improved BP artificial neural network in traffic accident prediction," Journal of East China Normal University (Natural Science), vol. 2017, no. 2, pp. 61–68, 2017.

[17] G. Y. Lu and D. W. Wong, "An adaptive inverse-distance weighting spatial interpolation technique," Computers & Geosciences, vol. 34, no. 9, pp. 1044–1055, 2008.



In actual engineering, B_{M−s_i}(T) has a lot of noise, and it is necessary to clean it; otherwise it will affect the estimation accuracy of the artificial neural network and the precision of learning. Data cleaning is the process of reexamining and verifying data to remove duplicate information, correct existing errors, and provide data consistency. Various filter algorithms can be used to obtain a clean B_{M−s_i}(T).

4.2. Artificial Neural Network Learning Operator. The sensing data of a real network include several physical quantities, such as temperature, humidity, and illumination. Usually a variety of sensors are deployed at an acquisition node, and multiple physical quantities are collected at the same time. Physical quantities at the same acquisition node have the same temporal and spatial variation trend, so in the recovery process using the artificial neural network, the quantities mutually reinforce each other. The distance between data acquisition nodes and non-data acquisition locations is easy to obtain. Based on the inverse-distance interpolation algorithm and the BP artificial neural network algorithm, we propose a multidimensional inverse-distance BP artificial neural network learning operator.

The BP artificial neural network, the most widely used one-way propagating multilayer feedforward network, is characterized by continuous adjustment of the network connection weights, so that any nonlinear function can be approximated with arbitrary accuracy. The BP artificial neural network is self-learning and adaptive and has robustness and generalization. Its training is supervised learning. The training process of the BP artificial neural network is shown in Figure 3.

The input information is first transmitted to the nodes of the hidden layer through weighted threshold summation; then, after the operation of each unit's transfer function, the output information of the hidden nodes is transmitted to the output nodes through weighted threshold summation, and finally the output result is given. The purpose of network learning is to obtain the right weights and thresholds. The training process consists of two parts: forward and backward propagation.

The BP artificial neural network is a multilayer feedforward network trained by the error backpropagation algorithm. Its structure is a multilayer network with not only input and output nodes but also one or more hidden layers of nodes. As demonstrated in Figure 3, ν and ω are continuously adjusted according to the prediction error, and finally the BP artificial neural network learning operator for reconstructing the data of data acquisition node s_i can be determined, as shown in the part Υ_{s_i}(·) of Figure 3.

The input layer of the BP artificial neural network learning operator is the data collected at a certain time at all acquisition nodes except s_i, that is, the vector h_{s_i}(B_{M−s_i}(t_l)). The input of the jth neuron in the hidden layer is calculated by

Σ_j = ∑_{q=1}^{|M|−1} ∑_{p=1}^{k} ν_{qjp} h_{s_i}(B_{M−s_i}(t_l)) − θ_j,   (18)

where ν_{qjp} represents the weight between the (q, p)th input neuron and the jth neuron in the hidden layer; the (q, p)th input neuron is h_{s_i}(b_{s_q,p}(t_l)), that is, the pth dimension of the data of acquisition node s_q at time t_l; and θ_j is the threshold of the jth neuron in the hidden layer. The output of a neuron in the hidden layer is calculated by

ho_j = 1 / (1 + e^{−Σ_j}).   (19)

Similarly, the output of each neuron in the output layer is denoted by o_p.


Figure 2: The model of B_{M−s_i}(T).


The sum of squared errors over all neurons in the output layer is the objective function value optimized by the BP artificial neural network algorithm, which is calculated by

E = (1 / ((|M| − 1) × k)) ∑_{q=1}^{|M|−1} ∑_{p=1}^{k} (b_{s_i,p} − o_p)²,   (20)

where b_{s_i,p} represents the expected output value of neuron o_p in the output layer, i.e., the value of the pth physical quantity sensed by data acquisition node s_i. According to the gradient descent method, the error of each neuron in the output layer is obtained by

E_p = o_p × (1 − o_p) × (b_{s_i,p} − o_p).   (21)

The weights and thresholds of the output layer can be adjusted by

Δω = ∂ × E_p × ho_j,   (22)

Δθ_p = ∂ × E_p,   (23)

where ∂ ∈ (0, 1) is the learning rate, which reflects the speed of training and learning. The weights and thresholds of the hidden layer are adjusted similarly. If the desired results cannot be obtained at the output layer, the weights and thresholds are adjusted continuously, gradually reducing the error. The BP neural network has strong self-learning ability and can quickly obtain the optimal solution.
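The output-layer update of (19) and (21)–(23) can be sketched for a single output neuron as follows. The function names are illustrative, and `lr` stands for the learning rate written as ∂ above:

```python
import math

def sigmoid(z):
    """Transfer function of (19)."""
    return 1.0 / (1.0 + math.exp(-z))

def output_layer_update(o, b, ho, w, theta, lr=0.5):
    """One gradient-descent step for output neuron p, following (21)-(23):
    error term E_p = o(1 - o)(b - o), weight updates lr * E_p * ho_j, and
    threshold update lr * E_p. Returns the updated weights and threshold."""
    e_p = o * (1.0 - o) * (b - o)                            # (21)
    w_new = [wj + lr * e_p * hoj for wj, hoj in zip(w, ho)]  # (22)
    theta_new = theta + lr * e_p                             # (23)
    return w_new, theta_new
```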


Figure 3: The training process of the BP artificial neural network.


The mathematical model of Figure 3 is as follows:

f_{o_p}( ∑_{j=1}^{J} ∑_{p=1}^{k} ω_j f_{Σ_j}( ∑_{q=1}^{|M|−1} ∑_{p=1}^{k} ν_{qjp} h_{s_i}(B_{M−s_i}(T)) ) ) → b_{s_i,p}(T),   (24)

where B_{s_q}(T) is the training data, that is, the historical data of the acquisition nodes except s_i in the period T; h_{s_i}(·) is the transform function of the input layer; ν_{qj} is the weight of the input layer; f_{Σ_j}(·) is the transfer function from the input layer to the hidden layer; ω is the weight from the hidden layer to the output layer; f_{o_p}(·) is the transfer function from the hidden layer to the output layer; and b_{s_i}(T) is the supervision information, that is, the historical data of acquisition node s_i in the period T.

From (14) and (24), the following formula can be obtained:

Υ_{s_i}(·) = f_{o_p}( ∑_{j=1}^{J} ω_j f_{Σ_j}( ∑_{q=1}^{|M|−1} ∑_{p=1}^{k} ν_{qjp}(·) ) ) | b_{s_i}(T),   (25)

where Υ_{s_i}(·) represents the inverse-distance BP artificial neural network learning operator of acquisition node s_i, and | b_{s_i}(T) indicates that this operator was trained with the data of s_i as tutor information.

The data collected by the data acquisition nodes in the monitoring area are correlated with each other, so the data of one data acquisition node can be learned by feeding the data of the other data acquisition nodes into its learning operator. The learning operator set of the data acquisition nodes in the whole monitoring area is

Υ(·) = {Υ_{s_i}(·) | s_i ∈ M}.   (26)

5. Interpolation at Non-Data Acquisition Locations

We transfer the BP artificial neural network operators from data acquisition nodes to non-data acquisition locations; the learning operator of the data acquisition node closest to a non-data acquisition location is used to estimate its data. There are four ways to implement learning operator transferring: sample transferring, feature transferring, model transferring, and relationship transferring. In this paper, model transferring (also called parameter transferring) is used to better combine with the BP artificial neural network; that is, we interpolate with the pretrained BP artificial neural network. The BP artificial neural network learning operator has strong robustness and adaptability: in the interested area, if the collected physical quantity is highly correlated and does not change drastically between the interested locations, the learning operator can be transferred.

Since the construction of an artificial neural network requires historical data for training, and there is no historical data at a non-data acquisition location deployed without sensors, it is very difficult to construct an artificial neural network for the non-data acquisition location. However, we can predict its physical quantities by using the learning operator of the nearest data acquisition node.

5.1. Transform Function Corresponding to the Nonacquisition Location s_j. First, we use the IDWI method to construct the transform function. The transform function applied to the artificial neural network input units (the readings of the nodes s_q) of the acquisition node s_i, evaluated for the nonacquisition location s_j, is defined as

h_{s_j}(B_{M−s_i}(t_l)) = (B_{M−s_i}(t_l) / dist^γ_{s_q→s_j}) / (∑_{s_q ∈ M−s_i}^{|M|−1} 1 / dist^γ_{s_q→s_j}),   (27)

where dist_{s_q→s_j} represents the distance between s_q and s_j, γ represents the weighted exponentiation of the distance reciprocal, and ∑_{s_q ∈ M−s_i}^{|M|−1} 1/dist^γ_{s_q→s_j} represents the sum of the weighted reciprocals of the distances from the non-data acquisition location s_j to the remaining acquisition nodes. The physical data collected by the remaining |M| − 1 data acquisition nodes except s_i are

B_{M−s_i} = {b_{s_q}(t_l) | s_q ∈ M − s_i}.   (28)
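The transform of (27) scales each remaining node's reading by its normalised inverse-distance weight before it enters the input layer. A minimal sketch follows; the function and argument names are assumed, as is the default exponent γ = 2:

```python
def idw_transform(readings, dists, gamma=2.0):
    """IDWI transform function of (27): weight each reading from the
    remaining nodes s_q by 1 / dist^gamma and normalise by the summed
    weights over M - s_i."""
    weights = [1.0 / d ** gamma for d in dists]
    total = sum(weights)
    return [r * w / total for r, w in zip(readings, weights)]
```

With two nodes at equal distance, for example, each reading is simply halved before entering the input layer.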

52 Learning Operator Transferring Since the data of thedata acquisition node closest to the non-data acquisitionlocation is important to the non-data acquisition locationand its data is most correlated with the data of the non-data acquisition location we estimate the data of the non-data acquisition location with data from the nearest dataacquisition node and its learning operator

Since the data we are targeting is spatially correlated thesmaller the distance between the two interested locations isthe smaller the difference of the collected data is Converselythe greater the difference of the collected data is We canuse the data of the data acquisition node close to thenon-data acquisition location to assess the data at thenon-data acquisition location

Because the BP artificial neural network learning opera-tor has strong robustness and adaptability we can transferthe inverse BP artificial neural network learning operator ofsi to sj in order to estimate the data at sj in this paper Thetransform function h sdot does not change with time and it isa function of the distance between the sampling locationshsi BMminussi tl ⟶ hsj BMminussi tl the number of input param-

eters is constant while the input parameters vary with timeSo the change of the transform function will not affect the

9Journal of Sensors

trend of input parameters The change of the transformfunction will not affect the accuracy of prediction and theoperator transferring can be implemented

dsjPrime tl =ϒsihsj BMminussi tl

= f si J

j=1ωj f z j

M

q=1νqjhsj BMminussi tl

29

The assessment network based on the BP artificial neuralnetwork operator is shown in Figure 4

In this paper our proposed method is improved on thebasis of the BP artificial neural network According to spatialcorrelation the physical quantities of the monitored andinterested location close to the data acquisition nodes sq canbe approximated by learning the operator of sq

Suppose dsj represents the estimated value of the physical

quantity of the interested location sj Due to conditionalrestrictions no sensor is deployed at the interested locationsj We choose the physical quantity of the data acquisitionnode si nearest to the non-data acquisition location sj forestimation We can use Algorithm 1 to achieve the determi-nation of si

53 Assessment at the Non-Data Acquisition Location si isthe nearest data acquisition node to sj We can use the learn-ing operator of the data acquisition node si to estimate dsj

This paper is an improvement on the inverse-distance inter-polation method Because si is closest to sj the correlation oftheir data is the largest The data collected actually by si havethe greatest impact on the predictive value at sj

dsj = E dprimesj tl dPrimesj tl

= α times dprimesj tl + β times dPrimesj tl emspsi isinM sj isin S minusM30

where α and β denote the weight of dprimesj tl and dPrimesj tl respectively to estimate physical quantities at sj α + β = 1

dprimesj tl is the value of the actual measurement so its

credibility is higher The closer si is to sj the greater thecorrelation between physical quantity at si and the datacollected by si based on spatial correlation

We denote by $\mathrm{dist}_{s_i \to s_j}$ the distance between $s_i$ and $s_j$. The influence weight of the data collected by $s_i$ on the assessment at $s_j$ decreases as $\mathrm{dist}_{s_i \to s_j}$ increases; the greater the distance, the smaller the impact. The range of $\mathrm{dist}_{s_i \to s_j}$ is $[0, +\infty)$, on which the arctangent function is increasing and bounded; its curve is shown in Figure 5. Exploiting the boundedness and monotonicity of the arctangent function, and inspired by the idea of IDWI, we calculate $\beta$ as follows:

$$\beta = \frac{\arctan\left(\mathrm{dist}_{s_i \to s_j}\right)}{\pi/2}, \quad \mathrm{dist}_{s_i \to s_j} \in [0, +\infty) \tag{31}$$

This limits $\beta$ to the interval $[0, 1)$. When $\mathrm{dist}_{s_i \to s_j}$ is close to 0, $\beta$ is close to 0 and $1 - \beta$ is close to 1, so the estimate is dominated by the data measured at the data acquisition node. When $\mathrm{dist}_{s_i \to s_j} = 0$, a sensor is in fact deployed at $s_j$, so no predicted value is needed and the actual measured data are used directly. Substituting (31) into (30) gives

$$\hat{d}_{s_j} = \frac{\arctan\left(\mathrm{dist}_{s_i \to s_j}\right)}{\pi/2} \times \hat{d}''_{s_j}(t_l) + \left(1 - \frac{\arctan\left(\mathrm{dist}_{s_i \to s_j}\right)}{\pi/2}\right) \times d'_{s_j}(t_l), \quad s_i \in M,\; s_j \in S - M \tag{32}$$

If the interested location is still far from the nearest data acquisition node, this algorithm will produce a large error. However, since we use the sensor placement based on iteratively dividing four subregions, the data acquisition nodes are spread throughout the space rather than concentrated in one domain, so the error is not too large.
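A minimal sketch of the arctan-weighted combination of (30)–(32) (our illustration; the function and parameter names are hypothetical):

```python
import math

def blend_estimate(d_measured, d_operator, dist):
    """Eqs. (30)-(32): beta grows from 0 toward 1 with distance,
    shifting weight from the measurement-based value d' to the
    transferred BP-operator estimate d''."""
    beta = math.atan(dist) / (math.pi / 2)          # eq. (31), beta in [0, 1)
    return beta * d_operator + (1 - beta) * d_measured  # eq. (32)
```

At zero distance the result is exactly the measured value, as the text requires.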

6. Experiments and Evaluation

6.1. Parameter Setup. The data set we used is the measured data provided by the Intel Berkeley Research lab [9]. The data were collected from 54 data acquisition nodes in the lab between February 28th and April 5th, 2004. In this data set, the epoch is a monotonically increasing sequence number from each mote; two readings with the same epoch number were produced by different motes at the same time. Some epochs are missing, and mote IDs range from 1 to 54. In this experiment, we selected the data of the following 9 motes as the set of interested locations, because these 9 motes share the same epochs:

$$S = \{s_7(22.5, 8),\, s_{18}(5.5, 10),\, s_{19}(3.5, 13),\, s_{21}(4.5, 18),\, s_{22}(1.5, 23),\, s_{31}(15.5, 28),\, s_{46}(34.5, 16),\, s_{47}(39.5, 14),\, s_{48}(35.5, 10)\} \tag{33}$$

When the environment of the monitoring area is not very complicated, the collected physical quantity has strong spatial correlation, and it is feasible to deploy sensors using the placement based on iteratively dividing four subregions. In this experiment, we use this method to select the interested locations from $S$ as data acquisition nodes: the $s_i(x_i, y_i)$ ($s_i(x_i, y_i) \in S$) closest to each deployment location generated by the method of iteratively dividing four subregions is selected into $M$.

Figure 4: The assessment network based on the BP artificial neural network operator.

In this experiment, we set the dimension of the collected data to 2, that is, $k = 2$: we take two physical quantities, temperature and humidity. At least two data acquisition nodes are needed, i.e., $|M| \ge 2$: one data acquisition node must be the closest node to the interested location for interpolation ($s_i \notin \emptyset$), and the data of the other acquisition nodes are used as the training set of the artificial neural network ($M - s_i \not\subset \emptyset$, $|M - s_i| \ge 1$).

The epochs we chose are $T = \{33, 35, 42, 44, 51, 56, 61, 62, 70, 73, 86, 88, 89, 91, 95, 99, 102, 107, 112, 113, 123, 124, 127\}$. The data whose epoch is an element of $T$ are used as the training set. Unfortunately, the actual sampled sensing data are always corrupted and some values are missing; we need clean sensing data to train the BP artificial neural networks and improve the interpolation precision.

To avoid losing the trend of the data during filtering, we simply apply mean processing with a filter window along the temporal direction of the measurements to obtain clean sensing data. If $\left|d_{s_i}(t-1) - d_{s_i}(t)\right| \gg \delta \times \left|\bar{d}_{M-s_i}(t-1) - \bar{d}_{M-s_i}(t)\right|$, then $d_{s_i}(t)$ is replaced by the value calculated as

$$d_{s_i}(t) = \frac{d_{s_i}(t-1) + d_{s_i}(t+1)}{2} \tag{34}$$

where $\delta$ is the adjusting coefficient; in this experiment we take $\delta = 10$.
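The filtering rule above can be sketched as follows (our illustration; function and parameter names are hypothetical, with `neighbor_mean_series` standing in for $\bar{d}_{M-s_i}$):

```python
def filter_outliers(series, neighbor_mean_series, delta=10.0):
    """Temporal mean filter of eq. (34): if the jump at node s_i is far
    larger than the average jump at the other nodes (scaled by delta),
    replace the sample with the mean of its temporal neighbors."""
    cleaned = list(series)
    for t in range(1, len(series) - 1):
        jump = abs(series[t - 1] - series[t])
        ref_jump = abs(neighbor_mean_series[t - 1] - neighbor_mean_series[t])
        if jump > delta * ref_jump:
            cleaned[t] = (series[t - 1] + series[t + 1]) / 2.0
    return cleaned
```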

The filtered temperature measurements are shown in Figure 6(a) and the actual sensing data in Figure 6(b); the filtered humidity measurements are shown in Figure 7(a) and the actual sensing data in Figure 7(b).

When comparing the test results, the epochs of the data supplied by the motes must be the same, because in an actual application the data space of the monitoring area is built at a certain moment. We set $t_l = 138$; that is, we selected the 138th epoch for interpolation and compared the values actually collected at the 138th epoch with the interpolated values calculated by the algorithms.

To evaluate the accuracy of the reconstructed measurements, we use the mean relative error (MRE), which reflects the precision of the estimated data relative to the measured data. The MRE is calculated as follows [17]:

$$\mathrm{MRE} = \frac{1}{n}\sum_{i=1}^{n} \frac{\left|d(x_i) - \hat{d}(x_i)\right|}{d(x_i)} \tag{35}$$

where $d(x_i)$ is the actual acquired value at the $i$th sensor, $\hat{d}(x_i)$ is the corresponding assessed value, and $n$ is the total number of data acquisition nodes.
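The MRE of (35) is straightforward to compute; a minimal sketch (hypothetical function name):

```python
def mean_relative_error(actual, estimated):
    """MRE of eq. (35): average of |d(x_i) - d_hat(x_i)| / d(x_i)."""
    assert len(actual) == len(estimated) and len(actual) > 0
    return sum(abs(a - e) / a for a, e in zip(actual, estimated)) / len(actual)
```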

6.2. Results of the Experiment. This experiment covers a small area, the collected physical quantities are highly correlated, and no physical quantity changes drastically between the various interested locations, so the learning operator can be transferred.

In the case of data loss, we compare the interpolation accuracy of our method with that of the inverse-distance-weighted interpolation algorithm; in this case $q_{s_i} = 0$. Since the data acquisition nodes suffering data loss are random, we conducted 20 tests for a more accurate comparison. Obviously, the number of acquisition nodes with data loss must be less than or equal to the total number of acquisition nodes. The results are shown in Figures 8 and 9.

As can be seen from Figures 8 and 9, as the proportion of lost data among all collected data decreases, the interpolation error gradually decreases. The curve of the proposed algorithm is relatively flat, indicating that it is less affected by data loss.

In the case of data loss, especially when the number of data acquisition nodes is relatively small, the interpolation error of our algorithm is much smaller than that of the inverse-distance-weighted interpolation.

In the case of data disturbance, we compare the interpolation accuracy of our method with that of the inverse-distance-weighted interpolation algorithm; in this case $q_{s_i} = No(\bar{d}_{s_i}(t_l), \sigma^2_{d_{s_i}})$. For the interpolation of temperature, we use the mean temperature of all acquisition nodes as the mean of the Gaussian distribution and set the parameters

Figure 5: Inverse tangent function curve.

Input: $M$, $s_j(x_j, y_j)$
Output: $s_i$
Initialize mindist
for each $s_q \in M$ do
    $\mathrm{dist}_{s_j \to s_q} \leftarrow \sqrt{(x_j - x_q)^2 + (y_j - y_q)^2}$
    if mindist $> \mathrm{dist}_{s_j \to s_q}$ then
        $i \leftarrow q$; mindist $\leftarrow \mathrm{dist}_{s_j \to s_q}$
    end if
end for

Algorithm 1: Culling $s_i$ from $M$.
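Algorithm 1 can be sketched in a few lines (our illustration; the `nodes` dictionary layout is an assumption):

```python
import math

def nearest_node(nodes, x_j, y_j):
    """Algorithm 1 sketch: return the id of the acquisition node in
    `nodes` (a dict id -> (x, y)) closest to the location (x_j, y_j)."""
    best_id, min_dist = None, float("inf")
    for node_id, (x_q, y_q) in nodes.items():
        d = math.hypot(x_j - x_q, y_j - y_q)  # Euclidean distance
        if d < min_dist:
            best_id, min_dist = node_id, d
    return best_id
```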


as $\bar{d}_{s_i}(t_l) = 18.47895$ and $\sigma^2 = 18$; for the interpolation of humidity, we use the mean humidity of all acquisition nodes as the mean of the Gaussian distribution and set $\bar{d}_{s_i}(t_l) = 39.44691$ and $\sigma^2 = 39$. Again we conducted 20 tests. The results are shown in Figures 10 and 11.

It can be seen from Figures 10 and 11 that the curve of the proposed algorithm is relatively flat, while the curve of the inverse-distance-weighted interpolation algorithm fluctuates greatly. The interpolation error of the IDWI algorithm is affected not only by the data disturbance but also by the deployment locations of the data acquisition nodes. When 7 acquisition nodes are deployed, the error of the IDWI algorithm is smallest, because the sensor placement based on iteratively dividing four subregions is then nearly uniform.

When there are few acquisition nodes, the density of acquisition nodes at which data disturbance occurs is high, so the interpolation error is more prominent. As the number of deployed sensors increases, the interpolation error decreases.

As we can see from Figures 8–11, our algorithm is insensitive to errors and strongly robust when the data are wrong, while erroneous data greatly degrade the interpolation accuracy of the inverse-distance interpolation method. In particular, when the error rate is high, the relative error of our algorithm is much lower: when the error rate is 50%, our algorithm has a relative error of 0.1 while the inverse-distance interpolation algorithm has 3.5, as shown in Figure 9(a).

7. Conclusions

In this paper, we proposed a robust data interpolation based on a back propagation artificial neural network operator for incomplete acquisition in a wireless sensor network. Under the incomplete collection strategy of a WSN, the effect of complete acquisition can be approximated by interpolation. With a limited number of data acquisition nodes, the data of the acquisition nodes are used for training to obtain the learning operator. Then the learning operator of the acquisition node

Figure 6: The temperature sensing data from the Intel Berkeley Research lab: (a) the filtered sensing data; (b) the actual sensing data.

Figure 7: The humidity sensing data from the Intel Berkeley Research lab: (a) the filtered sensing data; (b) the actual sensing data.


Figure 8: The temperature sensing data from the Intel Berkeley Research lab: (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at 2 to 8 data acquisition nodes.

Figure 9: The humidity sensing data from the Intel Berkeley Research lab: (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at 2 to 8 data acquisition nodes.


closest to the non-data acquisition location is transferred to the non-data acquisition location for interpolation. Considering that the data collected by a WSN are prone to errors, we analyzed the causes of these errors. To improve the fault tolerance and robustness of the interpolation algorithm, we proposed a BP artificial neural network learning

Figure 10: The temperature sensing data from the Intel Berkeley Research lab: (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at 2 to 8 data acquisition nodes.

Figure 11: The humidity sensing data from the Intel Berkeley Research lab: (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at 2 to 8 data acquisition nodes.

operator. From the experiments, we demonstrated that our algorithm is strongly robust and yields a lower error when the data collected by the WSN contain errors. This method has strong potential for practical data visualization, data analysis, WSN monitoring, and related applications.

Data Availability

The measured data that support the findings of this study are supplied by the Intel Berkeley Research lab and are available from http://db.csail.mit.edu/labdata/labdata.html.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This work was financially supported by the National Natural Science Foundation of China (Grant Nos. 61561017 and 61462022), the Innovative Research Projects for Graduate Students of Hainan Higher Education Institutions (Grant No. Hyb2017-06), the Hainan Province Major Science & Technology Project (Grant No. ZDKJ2016015), the Natural Science Foundation of Hainan Province, China (Grant No. 617033), the open project of the State Key Laboratory of Marine Resource Utilization in South China Sea (Grant No. 2016013B), and the Key R&D Project of Hainan Province (No. ZDYF2018015).

References

[1] F. Ding and A. Song, "Development and coverage evaluation of ZigBee-based wireless network applications," Journal of Sensors, vol. 2016, Article ID 2943974, 9 pages, 2016.

[2] O. Alvear, W. Zamora, C. Calafate, J. C. Cano, and P. Manzoni, "An architecture offering mobile pollution sensing with high spatial resolution," Journal of Sensors, vol. 2016, Article ID 1458147, 13 pages, 2016.

[3] W. Xiao, The Key Technology of Adaptive Data Fault Tolerance in Wireless Sensor Networks, National Defense University of Science and Technology, Hunan, China, 2010.

[4] Z. Liang, F. Tian, C. Zhang, H. Sun, A. Song, and T. Liu, "Improving the robustness of prediction model by transfer learning for interference suppression of electronic nose," IEEE Sensors Journal, vol. 18, no. 3, pp. 1111–1121, 2017.

[5] Z. Yang, W. Qiu, H. Sun, and A. Nallanathan, "Robust radar emitter recognition based on the three-dimensional distribution feature and transfer learning," Sensors, vol. 16, no. 3, pp. 289–303, 2016.

[6] S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2010.

[7] B. Pan, J. Tai, Q. Zheng, and S. Zhao, "Cascade convolutional neural network based on transfer-learning for aircraft detection on high-resolution remote sensing images," Journal of Sensors, vol. 2017, Article ID 1796728, 14 pages, 2017.

[8] J. Park, R. Javier, T. Moon, and Y. Kim, "Micro-Doppler based classification of human aquatic activities via transfer learning of convolutional neural networks," Sensors, vol. 16, no. 12, pp. 1990–2000, 2016.

[9] Intel Berkeley Research lab data, http://db.csail.mit.edu/labdata/labdata.html.

[10] Z. Kang and Y. Wang, "Structural topology optimization based on non-local Shepard interpolation of density field," Computer Methods in Applied Mechanics and Engineering, vol. 200, no. 49-52, pp. 3515–3525, 2011.

[11] M. Hammoudeh, R. Newman, C. Dennett, and S. Mount, "Interpolation techniques for building a continuous map from discrete wireless sensor network data," Wireless Communications and Mobile Computing, vol. 13, no. 9, pp. 809–827, 2013.

[12] M. Saberi, A. Azadeh, A. Nourmohammadzadeh, and P. Pazhoheshfar, "Comparing performance and robustness of SVM and ANN for fault diagnosis in a centrifugal pump," in 19th International Congress on Modelling and Simulation, pp. 433–439, Perth, Australia, 2011.

[13] R. Velazco, P. Cheynet, J. D. Muller, R. Ecoffet, and S. Buchner, "Artificial neural network robustness for on-board satellite image processing: results of upset simulations and ground tests," IEEE Transactions on Nuclear Science, vol. 44, no. 6, pp. 2337–2344, 1997.

[14] A. Venkitaraman, A. M. Javid, and S. Chatterjee, "R3Net: random weights, rectifier linear units and robustness for artificial neural network," 2018, https://arxiv.org/abs/1803.04186.

[15] C. Zhang, X. Zhang, O. Li et al., "WSN data collection algorithm based on compressed sensing under unreliable links," Journal of Communication, vol. 37, no. 9, pp. 131–141, 2016.

[16] H. L. Chen and W. Peng, "Improved BP artificial neural network in traffic accident prediction," Journal of East China Normal University (Natural Science), vol. 2017, no. 2, pp. 61–68, 2017.

[17] G. Y. Lu and D. W. Wong, "An adaptive inverse-distance weighting spatial interpolation technique," Computers & Geosciences, vol. 34, no. 9, pp. 1044–1055, 2008.



The sum of squared errors of all neurons in the output layer is the objective function minimized by the BP artificial neural network algorithm, calculated by

$$E = \frac{1}{(|M|-1) \times k}\sum_{q=1}^{|M|-1}\sum_{p=1}^{k}\left(b_{s_i p} - o_p\right)^2 \tag{20}$$

where $b_{s_i p}$ represents the expected output value of neuron $o_p$ in the output layer, corresponding to the value of the $p$th physical quantity sensed by the data acquisition node $s_i$. According to the gradient descent method, the error of each neuron in the output layer is obtained by

$$E_p = o_p \times \left(1 - o_p\right) \times \left(b_{s_i p} - o_p\right) \tag{21}$$

The weights and thresholds of the output layer can beadjusted by

$$\Delta\omega = \partial \times E_p \times ho_j \tag{22}$$

$$\Delta\theta_p = \partial \times E_p \tag{23}$$

where $\partial \in (0, 1)$ is the learning rate, which reflects the speed of training and learning. The weights and thresholds of the hidden layer are obtained similarly. If the desired results cannot be obtained from the output layer, the weights and thresholds are adjusted continually, gradually reducing the error. The BP neural network has strong self-learning ability and can quickly obtain the optimal solution.
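The output-layer updates of (21)–(23) can be sketched as follows (our illustration, assuming a sigmoid output layer as the $o_p(1-o_p)$ factor implies; names are hypothetical):

```python
def output_layer_update(o, b, h, lr=0.5):
    """Sketch of eqs. (21)-(23) for a sigmoid output layer:
    per-neuron error E_p = o_p(1-o_p)(b_p-o_p), weight update
    lr*E_p*h_j, threshold update lr*E_p. `o` are outputs, `b`
    expected values, `h` hidden activations, `lr` the learning rate."""
    errors = [op * (1 - op) * (bp - op) for op, bp in zip(o, b)]
    delta_w = [[lr * ep * hj for hj in h] for ep in errors]
    delta_theta = [lr * ep for ep in errors]
    return delta_w, delta_theta
```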

Figure 3: The training process of the BP artificial neural network.


The mathematical model of Figure 3 is as follows:

$$f_{o_p}\left(\sum_{j=1}^{J}\omega_j\, f_{\Sigma_j}\left(\sum_{q=1}^{|M|-1}\sum_{p=1}^{k}\nu_{qjp}\, h_{s_i}\left(B_{M-s_i}(T)\right)\right)\right) \longrightarrow b_{s_i p}(T) \tag{24}$$

where $B_{s_q}(T)$ is the training data, that is, the historical data of the acquisition nodes except $s_i$ in the period $T$; $h_{s_i}(\cdot)$ is the transform function of the input layer; $\nu_{qj}$ is the weight of the input layer; $f_{\Sigma_j}(\cdot)$ is the transfer function from the input layer to the hidden layer; $\omega$ is the weight from the hidden layer to the output layer; $f_{o_p}(\cdot)$ is the transfer function from the hidden layer to the output layer; and $b_{s_i}(T)$ is the "supervisor supervision," that is, the historical data of acquisition node $s_i$ in the period $T$.

From (14) and (24), the following formula can be obtained:

$$\Upsilon_{s_i}(\cdot) = f_{o_p}\left(\sum_{j=1}^{J}\omega_j\, f_{\Sigma_j}\left(\sum_{q=1}^{|M|-1}\sum_{p=1}^{k}\nu_{qjp}(\cdot)\right)\right)\Bigg|_{b_{s_i}(T)} \tag{25}$$

where $\Upsilon_{s_i}(\cdot)$ represents the inverse-distance BP artificial neural network learning operator of acquisition node $s_i$, and $|_{b_{s_i}(T)}$ indicates that this operator is trained with the data of $s_i$ as tutor information.

The data collected by the data acquisition nodes in the monitoring area are correlated with each other, so the data of one data acquisition node can be learned by inputting the data of the other data acquisition nodes into the learning operator. The set of learning operators of the data acquisition nodes in the whole monitoring area is

$$\Upsilon(\cdot) = \left\{\Upsilon_{s_i}(\cdot) \mid s_i \in M\right\} \tag{26}$$

5. Interpolation at Non-Data Acquisition Locations

We transfer the BP artificial neural network operators from data acquisition nodes to non-data acquisition locations: the learning operator of the data acquisition node closest to a non-data acquisition location is used to estimate the data there. There are four ways to implement learning operator transfer: sample transfer, feature transfer, model transfer, and relationship transfer. In this paper, model transfer (also called parameter transfer) is used because it combines well with the BP artificial neural network; that is, we interpolate with the pretrained BP artificial neural network. The BP artificial neural network learning operator has strong robustness and adaptability. In the interested area, if the collected physical quantity is highly correlated and no physical quantity changes drastically between the interested locations, the learning operator can be transferred.

Since constructing an artificial neural network requires historical data for training, and there is no historical data at a non-data acquisition location where no sensor is deployed, it is very difficult to construct an artificial neural network for such a location. However, we can predict its physical quantities by using the learning operator of the nearest data acquisition node.

5.1. Transform Function Corresponding to the Nonacquisition Location $s_j$. First, we use the IDWI method to construct the transform function. The transform function of the artificial neural network input unit $s_q$ of the acquisition node $s_i$ is defined as $h_{s_i}(B_{M-s_i}(t_l))$; for the nonacquisition location $s_j$, it is

$$h_{s_j}\left(B_{M-s_i}(t_l)\right) = \frac{B_{M-s_i}(t_l)/\mathrm{dist}^{\gamma}_{s_q \to s_j}}{\sum_{s_q \in M - s_i}^{|M|-1} 1/\mathrm{dist}^{\gamma}_{s_q \to s_j}} \tag{27}$$

where $\mathrm{dist}_{s_q \to s_j}$ represents the distance between $s_q$ and $s_j$, $\gamma$ is the weighting exponent of the distance reciprocal, and $\sum_{s_q \in M - s_i}^{|M|-1} 1/\mathrm{dist}^{\gamma}_{s_q \to s_j}$ is the sum of the weighted reciprocals of the distances from the non-data acquisition location $s_j$ to the remaining acquisition nodes. The physical data collected from the remaining $|M| - 1$ data acquisition nodes except $s_i$ are

$$B_{M-s_i} = \left\{b_{s_q}(t_l) \mid s_q \in M - s_i\right\} \tag{28}$$
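The inverse-distance transform of (27) can be sketched as follows (our illustration; the parallel-list layout and function name are assumptions):

```python
def idwi_transform(values, dists, gamma=2.0):
    """Sketch of the IDWI transform function of eq. (27): weight each
    neighboring node's reading by the inverse gamma-power of its
    distance to the target location s_j, normalized by the total
    inverse-distance weight. `values` and `dists` are parallel lists
    over the nodes in M - s_i."""
    weights = [1.0 / (d ** gamma) for d in dists]
    total = sum(weights)
    return [v * w / total for v, w in zip(values, weights)]
```

Closer nodes thus contribute larger transformed inputs to the network.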

5.2. Learning Operator Transferring. The data of the data acquisition node closest to the non-data acquisition location is the most correlated with the data of the non-data acquisition location, so we estimate the latter with the data from the nearest data acquisition node and its learning operator. Since the targeted data are spatially correlated, the smaller the distance between two interested locations, the smaller the difference of the collected data, and conversely. We can therefore use the data of a data acquisition node close to the non-data acquisition location to assess the data at that location.

Because the BP artificial neural network learning opera-tor has strong robustness and adaptability we can transferthe inverse BP artificial neural network learning operator ofsi to sj in order to estimate the data at sj in this paper Thetransform function h sdot does not change with time and it isa function of the distance between the sampling locationshsi BMminussi tl ⟶ hsj BMminussi tl the number of input param-

eters is constant while the input parameters vary with timeSo the change of the transform function will not affect the

9Journal of Sensors

trend of input parameters The change of the transformfunction will not affect the accuracy of prediction and theoperator transferring can be implemented

dsjPrime tl =ϒsihsj BMminussi tl

= f si J

j=1ωj f z j

M

q=1νqjhsj BMminussi tl

29

The assessment network based on the BP artificial neuralnetwork operator is shown in Figure 4

In this paper our proposed method is improved on thebasis of the BP artificial neural network According to spatialcorrelation the physical quantities of the monitored andinterested location close to the data acquisition nodes sq canbe approximated by learning the operator of sq

Suppose dsj represents the estimated value of the physical

quantity of the interested location sj Due to conditionalrestrictions no sensor is deployed at the interested locationsj We choose the physical quantity of the data acquisitionnode si nearest to the non-data acquisition location sj forestimation We can use Algorithm 1 to achieve the determi-nation of si

53 Assessment at the Non-Data Acquisition Location si isthe nearest data acquisition node to sj We can use the learn-ing operator of the data acquisition node si to estimate dsj

This paper is an improvement on the inverse-distance inter-polation method Because si is closest to sj the correlation oftheir data is the largest The data collected actually by si havethe greatest impact on the predictive value at sj

dsj = E dprimesj tl dPrimesj tl

= α times dprimesj tl + β times dPrimesj tl emspsi isinM sj isin S minusM30

where α and β denote the weight of dprimesj tl and dPrimesj tl respectively to estimate physical quantities at sj α + β = 1

dprimesj tl is the value of the actual measurement so its

credibility is higher The closer si is to sj the greater thecorrelation between physical quantity at si and the datacollected by si based on spatial correlation

We assume that distsi⟶sjrepresents the distance between

si and sj The influence weight of data collected by si onthe assessment of sj decreases with increasing distsi⟶sj

Conversely the greater the distance distsi⟶sj the smaller

the impact The change field of distsi⟶sjis 0 +infin We find

that the inverse tangent function is an increasing function on0 +infin The function curve is shown in Figure 5

We use the boundedness and monotonically increasingcharacteristics of the arctangent function curve and are

inspired by the idea of IDWI The formula for calculating βis as follows

β =arctan distsi⟶sj

π2emspdistsi⟶sj

isin 0 +infin 31

We limit the value of β to the interval [0 1] Whendistsi⟶sj

is close to 0 β is close to 0 and 1 minus β is close to 1

then the data measured by the data acquisition point is closerto the value at the non-data acquisition location Whendistsi⟶sj

= 0 it means that the sensors have deployed in sj

and we do not need other values calculated by the predictionalgorithm and directly use the actual measured data

dsj =arctan distsi⟶sj

π2times dPrimesj tl

+ 1 minusarctan distsi⟶sj

π2times dprimesj tl emspsi isinM sj isin S minusM

32

If the interested location of the interpolation is still farfrom the nearest data acquisition node then this algorithmwill cause a large error Since we are using the sensorplacement based on the iterative dividing four subregionsthe sensors of the data acquisition node are omnidirectionalthroughout the space not only concentrated in a certaindomain The error is not too large

6 Experiments and Evaluation

61 Parameter Setup The data set we used is the measureddata provided by the Intel Berkeley Research lab [9] The datais collected from 54 data acquisition nodes in the IntelBerkeley Research lab between February 28th and April 5th2004 In this case the epoch is a monotonically increasingsequence number from each mote Two readings from thesame epoch number were produced from different motes atthe same time There are some missing epochs in this dataset Mote IDs range from 1ndash54 In this experiment weselected the data of these 9 motes as a set of interestedlocations because these 9 points have the same epoch

S = s7 22 5 8 s18 5 5 10 s19 3 5 13 s21 4 5 18 s22 1 5 23 s31 15 5 28 s46 34 5 16 s47 39 5 14 s48 35 5 10

33

When the environment of the monitoring area is notvery complicated the physical quantity of acquisition is avery spatial correlation and it is feasible to use theplacement based on the iterative dividing four subregionsto deploy sensors In this experiment we use the methodof the iterative dividing four subregions to select theinterested locations from S as data acquisition nodes Theclosest si xi yi (si xi yi isin S) to the deployment location

10 Journal of Sensors

hsi(bso1(T))

hsi(bsq1(T))

hsi(bsm1(T))

hsi(bsok(T))

hsi(bsqk(T))

hsi(bsmk(T))

hsj(bso1(Tl))

hsj(bsq1(Tl))

hsj(bsm1(Tl))

hsj(bsok(Tl))

hsj(bsqk(Tl))

hsj(bsmk(Tl))

Output layerInput layer

Transferring

bso1(T)

bso1(tl)

bsq1(tl)

bsm1(tl)

bsok(tl)

bsqk(tl)

bsmk(tl)

bsq1(T)

bsm1(T)

bsok(T)

bsqk(T)

bsmk(T)

dsj1(t1)

ϒsi ()

ϒsi ()

Primeˆ

dsjk(t1)Primeˆ

bsi1(T)

bsik(T)

Figure 4 The assessment network based on the BP artificial neural network operator

11Journal of Sensors

generated by the method of the iterative dividing foursubregions is selected in M

In this experiment we set the dimension of the collecteddata to 2 that is k = 2 We take two physical quantitiestemperature and humidity Data acquisition nodes need atleast two ie M ge 2 Because one data acquisition nodeshould be the closest node from the interested location forinterpolation and si notin empty the data of other acquisition nodesshould be used as the training set of the artificial neuralnetwork M minus sinsubempty M minus si ge 1

The epochs we chose are T = 333542 44515661627073868889919599102107112113123124127 Thesedata whose epoch is an element in T are used as a training setepoch Unfortunately the actual sampling sensing data arealways corrupted and some values go missing We need realclean sensing data to train the BP artificial neural networksto improve the interpolation precision

In order not to lose the trend of data change during data filtering, we simply use mean processing with a filter window along the temporal direction of the measurements to obtain the clean sensing data. If $|d_{s_i}(t-1) - d_{s_i}(t)| \gg \delta \times |d_{M-s_i}(t-1) - d_{M-s_i}(t)|$, then $d_{s_i}(t)$ can be replaced by the value calculated as

$$d_{s_i}(t) = \frac{d_{s_i}(t-1) + d_{s_i}(t+1)}{2}, \qquad (34)$$

where $\delta$ is the adjusting coefficient. In this experiment, we take $\delta = 10$.
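The filtering rule above can be sketched as follows; the function name, the neighbour-comparison detail, and the example series are illustrative assumptions, not the authors' exact implementation.

```python
def filter_series(d, d_rest, delta=10.0):
    """Temporal mean filtering of one node's series d, following the rule
    around equation (34): if the jump at time t greatly exceeds delta
    times the jump seen by the remaining nodes (d_rest), the reading is
    treated as corrupted and replaced by the mean of its neighbours."""
    out = list(d)
    for t in range(1, len(d) - 1):
        jump = abs(out[t - 1] - d[t])          # compare against the already-filtered neighbour
        ref = abs(d_rest[t - 1] - d_rest[t])   # jump observed by the other nodes
        if jump > delta * ref:
            out[t] = (out[t - 1] + d[t + 1]) / 2.0  # equation (34)
    return out

# A spike of 150.0 in an otherwise smooth temperature series is smoothed out:
temps = [18.0, 18.1, 150.0, 18.3, 18.2]
rest = [18.0, 18.05, 18.1, 18.2, 18.2]
print(filter_series(temps, rest))
```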

The filtered result of the temperature measurements is shown in Figure 6(a), and the actual sensing data are shown in Figure 6(b). The filtered result of the humidity measurements is shown in Figure 7(a), and the actual sensing data are shown in Figure 7(b).

When comparing the test results, the epochs of the data supplied by the motes must be the same, because in the actual application the data space of the monitoring area we build is the data space at a certain moment. We set $t_l = 138$; that is, the 138th epoch was selected for interpolation. We compare the actual collected values at the 138th epoch with the interpolations calculated by the algorithms.

To evaluate the accuracy of the reconstructed measurement, we choose the mean relative error (MRE). It reflects the precision of the estimated data relative to the measured data. The MRE is calculated as follows [17]:

$$\mathrm{MRE} = \frac{1}{n}\sum_{i=1}^{n}\frac{\big|\hat{d}(x_i) - d(x_i)\big|}{d(x_i)}, \qquad (35)$$

where $d(x_i)$ is the actual acquisition value of the $i$th sensor, $\hat{d}(x_i)$ is the corresponding assessed value, and $n$ is the total number of data acquisition nodes.
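Equation (35) translates directly into code; a minimal sketch (the function name and the sample values are illustrative):

```python
def mean_relative_error(actual, assessed):
    """MRE of equation (35): the average of |d_hat(x_i) - d(x_i)| / d(x_i)
    over the n data acquisition nodes."""
    n = len(actual)
    return sum(abs(a_hat - a) / a for a, a_hat in zip(actual, assessed)) / n

# Two nodes, each off by 10% of the measured value -> MRE = 0.1
print(mean_relative_error([20.0, 10.0], [22.0, 9.0]))
```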

6.2. Results of the Experiment. This experiment is conducted over a small range; the collected physical quantities are highly correlated, and there is no situation in which a physical quantity changes drastically between the various interested locations, so the learning operator can be transferred.

In the case of data loss, we compare our method with the inverse-distance-weighted interpolation algorithm in terms of interpolation accuracy; in this case, $q_{s_i} = 0$. Since the data acquisition nodes suffering data loss are random, we conducted 20 tests for a more accurate comparison. Obviously, the number of acquisition nodes with data loss must be less than or equal to the total number of acquisition nodes. The results are shown in Figures 8 and 9.

As can be seen from Figures 8 and 9, as the proportion of lost data among all collected data decreases, the interpolation error is gradually reduced. The curve of the proposed algorithm is relatively flat, indicating that it is less affected by data loss.

In the case of data loss, especially when the number of data acquisition nodes is relatively small, the interpolation error of our algorithm is much smaller than that of the inverse-distance-weighted interpolation.

In the case of data disturbance, we compare our method with the inverse-distance-weighted interpolation algorithm in terms of interpolation accuracy; in this case, $q_{s_i} = N(\bar{d}_{s_i}(t_l), \sigma^2) - d_{s_i}$, where $N(\cdot,\cdot)$ denotes a Gaussian distribution. For the interpolation of temperature, we use the mean temperature of all acquisition nodes as the mean value of the Gaussian distribution. We set the parameters

Figure 5: Inverse tangent function curve (asymptotes at $-\pi/2$ and $\pi/2$).

Input: $M$, $s_j(x_j, y_j)$
Output: $s_i$
Initialize mindist
For each $s_q$ in $M$
    $\mathrm{dist}_{s_j \to s_q} = \big((x_j - x_q)^2 + (y_j - y_q)^2\big)^{1/2}$
    if mindist > $\mathrm{dist}_{s_j \to s_q}$
        then $i = q$; mindist = $\mathrm{dist}_{s_j \to s_q}$
    end if
End for

Algorithm 1: Culling $s_i$ from $M$.
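Algorithm 1 can be run as the short routine below; representing nodes as (id, x, y) tuples and the sample coordinates are illustrative assumptions.

```python
import math

def nearest_node(M, sj):
    """Algorithm 1: scan every candidate acquisition node s_q in M and
    return the id of the one at minimum Euclidean distance from s_j."""
    xj, yj = sj
    best, mindist = None, math.inf
    for (q, xq, yq) in M:
        dist = math.hypot(xj - xq, yj - yq)  # ((xj-xq)^2 + (yj-yq)^2)^(1/2)
        if dist < mindist:
            best, mindist = q, dist
    return best

# The node at (5.5, 10) is closest to the interested location (6, 11):
M = [(7, 22.5, 8.0), (18, 5.5, 10.0), (48, 35.5, 10.0)]
print(nearest_node(M, (6.0, 11.0)))  # -> 18
```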


as $\bar{d}_{s_i}(t_l) = 18.47895$ and $\sigma^2 = 18$; for the interpolation of humidity, we use the mean humidity of all acquisition nodes as the mean value of the Gaussian distribution and set the parameters as $\bar{d}_{s_i}(t_l) = 39.44691$ and $\sigma^2 = 39$. Again, we performed 20 tests. The results are shown in Figures 10 and 11.

It can be seen from Figures 10 and 11 that the variation curve of the proposed algorithm is relatively flat, while the curve of the inverse-distance-weighted interpolation algorithm fluctuates greatly. The interpolation error of the IDWI algorithm is affected not only by data disturbance but also by the deployment locations of the data acquisition nodes. When 7 acquisition nodes are deployed, the error of the IDWI algorithm is the smallest: because sensor placement based on iteratively dividing four subregions is nearly uniform when 7 acquisition nodes are deployed, the interpolation error is small.

When there are few acquisition nodes, the proportion of acquisition nodes at which data disturbance occurs is large, so the interpolation error is more prominent. As the number of deployed sensors increases, the interpolation error decreases.

As we can see from Figures 8-11, our algorithm is insensitive to errors and strongly robust when the data are erroneous, while erroneous data greatly degrade the interpolation accuracy of the inverse-distance interpolation method. In particular, when the error rate is high, the relative error of our algorithm is much lower than that of the inverse-distance interpolation algorithm. When the error rate is 50%, our algorithm has a relative error of 0.1, while the relative error of the inverse-distance interpolation algorithm is 3.5, as shown in Figure 9(a).

7. Conclusions

In this paper, we proposed a robust data interpolation based on a back propagation artificial neural network operator for incomplete acquisition in a wireless sensor network. Under the incomplete collection strategy of a WSN, the effect of complete acquisition can be approximated by interpolation. With limited data acquisition nodes, the data of the acquisition nodes are used for training to obtain the learning operator. Then, the learning operator of the acquisition node

Figure 7: The humidity sensing data from the Intel Berkeley Research lab: (a) the filtered sensing data; (b) the actual sensing data. (Axes: humidity vs. epoch number and node index.)

Figure 6: The temperature sensing data from the Intel Berkeley Research lab: (a) the filtered sensing data; (b) the actual sensing data. (Axes: temperature vs. epoch number and node index.)


Figure 9: The humidity sensing data from the Intel Berkeley Research lab: (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at from 2 to 8 data acquisition nodes. (Axes: mean relative error vs. number of data acquisition nodes; curves: inverse-distance-weighted interpolation and proposed method.)

Figure 8: The temperature sensing data from the Intel Berkeley Research lab: (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at from 2 to 8 data acquisition nodes. (Axes: mean relative error vs. number of data acquisition nodes; curves: inverse-distance-weighted interpolation and proposed method.)


closest to the non-data acquisition location is transferred to that location for interpolation. Considering that the data collected by a WSN are prone to error, we analyzed the causes of such errors. To improve the fault tolerance and robustness of the interpolation algorithm, we proposed a BP artificial neural network learning

Figure 11: The humidity sensing data from the Intel Berkeley Research lab: (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at from 2 to 8 data acquisition nodes.

Figure 10: The temperature sensing data from the Intel Berkeley Research lab: (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at from 2 to 8 data acquisition nodes.


operator. The experiments demonstrated that our algorithm is strongly robust and yields a lower error when the data collected by the WSN contain errors. This method has strong potential for practical data visualization, data analysis, WSN monitoring, etc.

Data Availability

The measured data that support the findings of this study are supplied by the Intel Berkeley Research lab. They are available from http://db.csail.mit.edu/labdata/labdata.html.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This work was financially supported by the National Natural Science Foundation of China (Grant Nos. 61561017 and 61462022), the Innovative Research Projects for Graduate Students of Hainan Higher Education Institutions (Grant No. Hyb2017-06), the Hainan Province Major Science & Technology Project (Grant No. ZDKJ2016015), the Natural Science Foundation of Hainan Province in China (Grant No. 617033), the open project of the State Key Laboratory of Marine Resource Utilization in South China Sea (Grant No. 2016013B), and the Key R&D Project of Hainan Province (No. ZDYF2018015).

References

[1] F. Ding and A. Song, "Development and coverage evaluation of ZigBee-based wireless network applications," Journal of Sensors, vol. 2016, Article ID 2943974, 9 pages, 2016.

[2] O. Alvear, W. Zamora, C. Calafate, J. C. Cano, and P. Manzoni, "An architecture offering mobile pollution sensing with high spatial resolution," Journal of Sensors, vol. 2016, Article ID 1458147, 13 pages, 2016.

[3] W. Xiao, The Key Technology of Adaptive Data Fault Tolerance in Wireless Sensor Networks, National Defense University of Science and Technology, Hunan, China, 2010.

[4] Z. Liang, F. Tian, C. Zhang, H. Sun, A. Song, and T. Liu, "Improving the robustness of prediction model by transfer learning for interference suppression of electronic nose," IEEE Sensors Journal, vol. 18, no. 3, pp. 1111-1121, 2017.

[5] Z. Yang, W. Qiu, H. Sun, and A. Nallanathan, "Robust radar emitter recognition based on the three-dimensional distribution feature and transfer learning," Sensors, vol. 16, no. 3, pp. 289-303, 2016.

[6] S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345-1359, 2010.

[7] B. Pan, J. Tai, Q. Zheng, and S. Zhao, "Cascade convolutional neural network based on transfer-learning for aircraft detection on high-resolution remote sensing images," Journal of Sensors, vol. 2017, Article ID 1796728, 14 pages, 2017.

[8] J. Park, R. Javier, T. Moon, and Y. Kim, "Micro-Doppler based classification of human aquatic activities via transfer learning of convolutional neural networks," Sensors, vol. 16, no. 12, pp. 1990-2000, 2016.

[9] http://db.csail.mit.edu/labdata/labdata.html.

[10] Z. Kang and Y. Wang, "Structural topology optimization based on non-local Shepard interpolation of density field," Computer Methods in Applied Mechanics and Engineering, vol. 200, no. 49-52, pp. 3515-3525, 2011.

[11] M. Hammoudeh, R. Newman, C. Dennett, and S. Mount, "Interpolation techniques for building a continuous map from discrete wireless sensor network data," Wireless Communications and Mobile Computing, vol. 13, no. 9, pp. 809-827, 2013.

[12] M. Saberi, A. Azadeh, A. Nourmohammadzadeh, and P. Pazhoheshfar, "Comparing performance and robustness of SVM and ANN for fault diagnosis in a centrifugal pump," in 19th International Congress on Modelling and Simulation, pp. 433-439, Perth, Australia, 2011.

[13] R. Velazco, P. Cheynet, J. D. Muller, R. Ecoffet, and S. Buchner, "Artificial neural network robustness for on-board satellite image processing: results of upset simulations and ground tests," IEEE Transactions on Nuclear Science, vol. 44, no. 6, pp. 2337-2344, 1997.

[14] A. Venkitaraman, A. M. Javid, and S. Chatterjee, "R3Net: random weights, rectifier linear units and robustness for artificial neural network," 2018, https://arxiv.org/abs/1803.04186.

[15] C. Zhang, X. Zhang, O. Li et al., "WSN data collection algorithm based on compressed sensing under unreliable links," Journal of Communication, vol. 37, no. 9, pp. 131-141, 2016.

[16] H. L. Chen and W. Peng, "Improved BP artificial neural network in traffic accident prediction," Journal of East China Normal University (Natural Science), vol. 2017, no. 2, pp. 61-68, 2017.

[17] G. Y. Lu and D. W. Wong, "An adaptive inverse-distance weighting spatial interpolation technique," Computers & Geosciences, vol. 34, no. 9, pp. 1044-1055, 2008.



The mathematical model of Figure 3 is as follows:

$$f_{op}\Big(\sum_{j=1}^{J}\sum_{p=1}^{k}\omega_j\, f_{\Sigma_j}\Big(\sum_{q=1}^{|M|-1}\sum_{p=1}^{k}\nu_{qjp}\, h_{s_i}\big(B_{M-s_i}(T)\big)\Big)\Big) \longrightarrow b_{s_i,p}(T), \qquad (24)$$

where $B_{M-s_i}(T)$ is the training data, that is, the historical data of the acquisition nodes except $s_i$ in the period $T$; $h_{s_i}(\cdot)$ is the transform function of the input layer; $\nu_{qjp}$ is the weight of the input layer; $f_{\Sigma_j}(\cdot)$ is the transfer function from the input layer to the hidden layer; $\omega_j$ is the weight from the hidden layer to the output layer; $f_{op}(\cdot)$ is the transfer function from the hidden layer to the output layer; and $b_{s_i}(T)$ is the supervision, that is, the historical data of acquisition node $s_i$ in the period $T$.

From (14) and (24), the following formula can be obtained:

$$\Upsilon_{s_i}(\cdot) = f_{op}\Big(\sum_{j=1}^{J}\omega_j\, f_{\Sigma_j}\Big(\sum_{q=1}^{|M|-1}\sum_{p=1}^{k}\nu_{qjp}(\cdot)\Big)\Big)\,\Big|\, b_{s_i}(T), \qquad (25)$$

where $\Upsilon_{s_i}(\cdot)$ represents the inverse-distance BP artificial neural network learning operator of acquisition node $s_i$, and $|\,b_{s_i}(T)$ denotes that this operator $\Upsilon_{s_i}(\cdot)$ is trained with the data of $s_i$ as the tutor (supervision) information.

The data collected by the data acquisition nodes in the monitoring area are correlated with one another. The data of one data acquisition node can be learned by feeding the data of the other data acquisition nodes into the learning operator. The set of learning operators of the data acquisition nodes in the whole monitoring area is as follows:

$$\Upsilon(\cdot) = \{\Upsilon_{s_i}(\cdot) \mid s_i \in M\}. \qquad (26)$$
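As a concrete illustration of training one learning operator per node, the sketch below fits a minimal one-hidden-layer BP network mapping the other nodes' readings to node $s_i$'s reading. The network size, learning rate, and training data are illustrative assumptions, not the authors' configuration.

```python
import math, random

def train_bp_operator(X, Y, hidden=4, epochs=2000, lr=0.05, seed=1):
    """Fit a tiny one-hidden-layer BP network: inputs X are the readings
    of the other acquisition nodes, targets Y are node s_i's readings
    (the supervision). Returns the trained operator as a callable."""
    rng = random.Random(seed)
    n_in = len(X[0])
    V = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]  # input -> hidden
    W = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]                          # hidden -> output

    def forward(x):
        h = [math.tanh(sum(v * xi for v, xi in zip(row, x))) for row in V]
        return sum(w * hj for w, hj in zip(W, h)), h

    for _ in range(epochs):
        for x, y in zip(X, Y):
            out, h = forward(x)
            err = out - y  # back-propagate the output error
            for j in range(hidden):
                grad_h = err * W[j] * (1.0 - h[j] ** 2)
                W[j] -= lr * err * h[j]
                for i in range(n_in):
                    V[j][i] -= lr * grad_h * x[i]
    return lambda x: forward(x)[0]

# Learn a simple correlated relation: s_i's reading is the mean of two neighbours.
X = [[0.1, 0.3], [0.2, 0.4], [0.5, 0.1], [0.3, 0.3], [0.4, 0.2]]
Y = [(a + b) / 2 for a, b in X]
op = train_bp_operator(X, Y)
```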

5. Interpolation at Non-Data Acquisition Locations

We transfer the BP artificial neural network operators from data acquisition nodes to non-data acquisition locations; the learning operator of the data acquisition node closest to a non-data acquisition location is used to estimate the data there. There are four ways to implement learning operator transferring: sample transferring, feature transferring, model transferring, and relationship transferring. In this paper, model transferring (also called parameter transferring) is used because it combines well with the BP artificial neural network; that is, we interpolate with a pretrained BP artificial neural network. The BP artificial neural network learning operator has strong robustness and adaptability. In the interested area, if the collected physical quantity is highly correlated and there is no situation in which the physical quantity changes drastically between the various interested locations, the learning operator can be transferred.

Since constructing an artificial neural network requires historical data for training, and there are no historical data at a non-data acquisition location where no sensor is deployed, it is very difficult to construct an artificial neural network for such a location. However, we can predict the physical quantities of non-data acquisition locations by using the learning operator from the nearest data acquisition node.

5.1. Transform Function Corresponding to Nonacquisition Location $s_j$. First, we use the IDWI method to construct the transform function. The transform function applied to the artificial neural network input unit $s_q$ of the acquisition node $s_i$ is defined as $h_{s_j}(B_{M-s_i}(t_l))$:

$$h_{s_j}\big(B_{M-s_i}(t_l)\big) = \frac{B_{M-s_i}(t_l)\,/\,\mathrm{dist}^{\gamma}_{s_q \to s_j}}{\sum_{s_q \in M-s_i}^{|M|-1} 1/\mathrm{dist}^{\gamma}_{s_q \to s_j}}, \qquad (27)$$

where $\mathrm{dist}_{s_q \to s_j}$ represents the distance between $s_q$ and $s_j$, $\gamma$ represents the exponent applied to the reciprocal distance, and $\sum_{s_q \in M-s_i}^{|M|-1} 1/\mathrm{dist}^{\gamma}_{s_q \to s_j}$ represents the sum of the weighted reciprocals of the distances from the non-data acquisition location $s_j$ to the remaining acquisition nodes. The physical data collected from the remaining $|M| - 1$ data acquisition nodes except $s_i$ are as follows:

$$B_{M-s_i} = \{b_{s_q}(t_l) \mid s_q \in M - s_i\}. \qquad (28)$$
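A sketch of the transform of equation (27); the component-wise weighted values sum back to the familiar IDWI estimate. The function name and the value of $\gamma$ are illustrative assumptions.

```python
def idw_transform(values, dists, gamma=2.0):
    """Equation (27): scale each acquisition-node reading b_sq(t_l) by
    1/dist^gamma, normalised by the sum of the weighted reciprocal
    distances, producing the input-layer values for location s_j."""
    weights = [1.0 / d ** gamma for d in dists]
    total = sum(weights)
    return [v * w / total for v, w in zip(values, weights)]

# Two nodes at distances 1 and 2; the transformed components sum to the
# classic inverse-distance-weighted estimate.
h = idw_transform([10.0, 20.0], [1.0, 2.0])
print(h, sum(h))
```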

5.2. Learning Operator Transferring. Since the data of the data acquisition node closest to the non-data acquisition location are the most important for, and the most correlated with, the data of that location, we estimate the data of the non-data acquisition location with the data from the nearest data acquisition node and its learning operator.

Since the data we are targeting are spatially correlated, the smaller the distance between two interested locations, the smaller the difference between their collected data; conversely, the greater the distance, the greater the difference. We can therefore use the data of a data acquisition node close to the non-data acquisition location to assess the data at that location.

Because the BP artificial neural network learning operator has strong robustness and adaptability, in this paper we transfer the inverse-distance BP artificial neural network learning operator of $s_i$ to $s_j$ in order to estimate the data at $s_j$. The transform function $h(\cdot)$ does not change with time; it is a function of the distance between the sampling locations, $h_{s_i}(B_{M-s_i}(t_l)) \to h_{s_j}(B_{M-s_i}(t_l))$. The number of input parameters is constant, while the input parameters vary with time, so the change of the transform function will not affect the


trend of the input parameters. The change of the transform function will not affect the accuracy of prediction, so the operator transferring can be implemented:

$$\hat{d}''_{s_j}(t_l) = \Upsilon_{s_i}\big(h_{s_j}(B_{M-s_i}(t_l))\big) = f_{op}\Big(\sum_{j=1}^{J}\omega_j\, f_{\Sigma_j}\Big(\sum_{q=1}^{|M|}\nu_{qj}\, h_{s_j}\big(B_{M-s_i}(t_l)\big)\Big)\Big). \qquad (29)$$
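Equation (29) is simply the composition of the transform and the transferred operator; a sketch with stand-in callables (both parameter names are hypothetical):

```python
def transferred_estimate(operator_si, transform_sj, readings):
    """Equation (29): the estimate d''_sj is obtained by feeding the
    IDW-transformed readings of the other nodes through the learning
    operator borrowed from the nearest acquisition node s_i."""
    return operator_si(transform_sj(readings))

# Stand-ins: identity transform and a summing "operator".
print(transferred_estimate(sum, lambda r: r, [1.0, 2.0]))  # -> 3.0
```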

The assessment network based on the BP artificial neural network operator is shown in Figure 4.

In this paper, our proposed method improves on the BP artificial neural network. According to spatial correlation, the physical quantities of a monitored and interested location close to a data acquisition node $s_q$ can be approximated by the learning operator of $s_q$.

Suppose $\hat{d}_{s_j}$ represents the estimated value of the physical quantity of the interested location $s_j$. Due to conditional restrictions, no sensor is deployed at the interested location $s_j$. We choose the data acquisition node $s_i$ nearest to the non-data acquisition location $s_j$ for estimation; Algorithm 1 determines $s_i$.

5.3. Assessment at the Non-Data Acquisition Location. $s_i$ is the data acquisition node nearest to $s_j$, so we can use the learning operator of $s_i$ to estimate $\hat{d}_{s_j}$. This method improves on the inverse-distance interpolation method: because $s_i$ is closest to $s_j$, the correlation between their data is the largest, and the data actually collected by $s_i$ have the greatest impact on the predicted value at $s_j$:

$$\hat{d}_{s_j} = E\big(d'_{s_j}(t_l), d''_{s_j}(t_l)\big) = \alpha \times d'_{s_j}(t_l) + \beta \times d''_{s_j}(t_l), \quad s_i \in M,\; s_j \in S - M, \qquad (30)$$

where $\alpha$ and $\beta$ denote the weights of $d'_{s_j}(t_l)$ and $d''_{s_j}(t_l)$, respectively, used to estimate the physical quantity at $s_j$, with $\alpha + \beta = 1$.

$d'_{s_j}(t_l)$ is the value derived from the actual measurement, so its credibility is higher. The closer $s_i$ is to $s_j$, the greater the correlation between the physical quantity at $s_j$ and the data collected by $s_i$, based on spatial correlation.

We assume that $\mathrm{dist}_{s_i \to s_j}$ represents the distance between $s_i$ and $s_j$. The influence weight of the data collected by $s_i$ on the assessment of $s_j$ decreases as $\mathrm{dist}_{s_i \to s_j}$ increases; conversely, the greater the distance $\mathrm{dist}_{s_i \to s_j}$, the smaller the impact. The range of $\mathrm{dist}_{s_i \to s_j}$ is $[0, +\infty)$. We note that the inverse tangent function is an increasing function on $[0, +\infty)$; its curve is shown in Figure 5.

We use the boundedness and the monotonically increasing character of the arctangent curve, inspired by the idea of IDWI. The formula for calculating $\beta$ is

$$\beta = \frac{\arctan\big(\mathrm{dist}_{s_i \to s_j}\big)}{\pi/2}, \quad \mathrm{dist}_{s_i \to s_j} \in [0, +\infty). \qquad (31)$$

This limits the value of $\beta$ to the interval $[0, 1]$. When $\mathrm{dist}_{s_i \to s_j}$ is close to 0, $\beta$ is close to 0 and $1 - \beta$ is close to 1; then the value measured at the data acquisition node dominates the estimate at the non-data acquisition location. When $\mathrm{dist}_{s_i \to s_j} = 0$, a sensor is in fact deployed at $s_j$, so no predicted value is needed and the actual measured data are used directly:

$$\hat{d}_{s_j} = \frac{\arctan\big(\mathrm{dist}_{s_i \to s_j}\big)}{\pi/2} \times d''_{s_j}(t_l) + \Big(1 - \frac{\arctan\big(\mathrm{dist}_{s_i \to s_j}\big)}{\pi/2}\Big) \times d'_{s_j}(t_l), \quad s_i \in M,\; s_j \in S - M. \qquad (32)$$
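The $\beta$ weighting of equations (31) and (32) can be checked in a few lines; the function name and the sample values are illustrative assumptions.

```python
import math

def blend_estimate(d_measured, d_operator, dist):
    """Equations (31)-(32): beta = arctan(dist)/(pi/2) grows from 0 to 1
    as the nearest acquisition node s_i gets farther from s_j, shifting
    weight from the measurement-based value d' to the transferred
    operator estimate d''."""
    beta = math.atan(dist) / (math.pi / 2.0)
    return beta * d_operator + (1.0 - beta) * d_measured

print(blend_estimate(18.0, 20.0, 0.0))  # dist = 0: use the measured value directly
print(blend_estimate(18.0, 20.0, 1.0))  # beta = 0.5: equal blend of both estimates
```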

If the interested location for interpolation is still far from the nearest data acquisition node, this algorithm will produce a large error. However, since we use the sensor placement based on iteratively dividing four subregions, the data acquisition nodes are spread throughout the space rather than concentrated in one region, so the error is not too large.

6. Experiments and Evaluation

6.1. Parameter Setup. The data set we used is the measured data provided by the Intel Berkeley Research lab [9]. The data were collected from 54 data acquisition nodes in the Intel Berkeley Research lab between February 28th and April 5th, 2004. In this data set, the epoch is a monotonically increasing sequence number from each mote; two readings with the same epoch number were produced by different motes at the same time. There are some missing epochs in this data set. Mote IDs range from 1 to 54. In this experiment, we selected the data of 9 motes as the set of interested locations, because these 9 points share the same epochs:

$$S = \{s_7(22.5, 8), s_{18}(5.5, 10), s_{19}(3.5, 13), s_{21}(4.5, 18), s_{22}(1.5, 23), s_{31}(15.5, 28), s_{46}(34.5, 16), s_{47}(39.5, 14), s_{48}(35.5, 10)\}. \qquad (33)$$


ive e

rror

(a)

Inverse-distance-weighted interpolationProposed method

86 8

Number of data lost nodes

76

Number of data acquisition nodes

4 5432 2

0

05

1

Mea

n re

lativ

e err

or

15

(b)

Figure 10 The temperature sensing data from the Intel Berkeley Research lab (a) data disturbance occurred at 1 data acquisition node(b) data disturbance occurred at data acquisition nodes increased from 2 to 8

15Journal of Sensors

operator From the experiments we demonstrated that ouralgorithm has strong robustness and it has a lower error inthe case of data errors collected by the WSN This methodhas strong potential for practical data visualization dataanalysis WSN monitoring etc

Data Availability

The measured data that support the findings of this study aresupplied by Intel Berkeley Research lab They are availablefrom httpdbcsailmitedulabdatalabdatahtml

Conflicts of Interest

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

This work was financially supported by the National NaturalScience Foundation of China (Grant Nos 61561017 and61462022) the Innovative Research Projects for GraduateStudents of Hainan Higher Education Institutions (GrantNo Hyb2017-06) the Hainan Province Major Science ampTechnology Project (Grant No ZDKJ2016015) the projectof the Natural Science Foundation of Hainan Province inChina (Grant No617033) the open project of the State KeyLaboratory of Marine Resource Utilization in South ChinaSea (Grant No 2016013B) and the Key RampD Project ofHainan Province (No ZDYF2018015)

References

[1] F Ding and A Song ldquoDevelopment and coverage evaluationof ZigBee-based wireless network applicationsrdquo Journal ofSensors vol 2016 Article ID 2943974 9 pages 2016

[2] O Alvear W Zamora C Calafate J C Cano and P ManzonildquoAn architecture offering mobile pollution sensing with highspatial resolutionrdquo Journal of Sensors vol 2016 ArticleID 1458147 13 pages 2016

[3] W Xiao The Key Technology of Adaptive Data Fault Tolerancein Wireless Sensor Networks National Defense University ofScience and Technology Hunan China 2010

[4] Z Liang F Tian C Zhang H Sun A Song and T LiuldquoImproving the robustness of prediction model by transferlearning for interference suppression of electronic noserdquo IEEESensors Journal vol 18 no 3 pp 1111ndash1121 2017

[5] Z Yang W Qiu H Sun and A Nallanathan ldquoRobustradar emitter recognition based on the three-dimensionaldistribution feature and transfer learningrdquo Sensors vol 16no 3 pp 289ndash303 2016

[6] S J Pan and Q Yang ldquoA survey on transfer learningrdquo IEEETransactions on Knowledge and Data Engineering vol 22no 10 pp 1345ndash1359 2010

[7] B Pan J Tai Q Zheng and S Zhao ldquoCascade convolutionalneural network based on transfer-learning for aircraft detec-tion on high-resolution remote sensing imagesrdquo Journal ofSensors vol 2017 Article ID 1796728 14 pages 2017

[8] J Park R Javier T Moon and Y Kim ldquoMicro-Doppler basedclassification of human aquatic activities via transfer learning

of convolutional neural networksrdquo Sensors vol 16 no 12pp 1990ndash2000 2016

[9] httpdbcsailmitedulabdatalabdatahtml

[10] Z Kang and YWang ldquoStructural topology optimization basedon non-local Shepard interpolation of density fieldrdquo ComputerMethods in Applied Mechanics and Engineering vol 200no 49-52 pp 3515ndash3525 2011

[11] M Hammoudeh R Newman C Dennett and S MountldquoInterpolation techniques for building a continuous map fromdiscrete wireless sensor network datardquo Wireless Communica-tions and Mobile Computing vol 13 no 9 pp 809ndash827 2013

[12] M Saberi A Azadeh A Nourmohammadzadeh andP Pazhoheshfar ldquoComparing performance and robustness ofSVM and ANN for fault diagnosis in a centrifugal pumprdquo in19th International Congress on Modelling and Simulationpp 433ndash439 Perth Australia 2011

[13] R Velazco P Cheynet J D Muller R Ecoffet and S BuchnerldquoArtificial neural network robustness for on-board satelliteimage processing results of upset simulations and groundtestsrdquo IEEE Transactions on Nuclear Science vol 44 no 6pp 2337ndash2344 1997

[14] A Venkitaraman A M Javid and S Chatterjee ldquoR3Netrandom weights rectifier linear units and robustness for artifi-cial neural networkrdquo 2018 httpsarxivorgabs180304186

[15] C Zhang X Zhang O Li et al ldquoWSN data collection algo-rithm based on compressed sensing under unreliable linksrdquoJournal of Communication vol 37 no 9 pp 131ndash141 2016

[16] H L Chen and W Peng ldquoImproved BP artificial neuralnetwork in traffic accident predictionrdquo Journal of East ChinaNormal University (Natural Science) vol 2017 no 2 pp 61ndash68 2017

[17] G Y Lu and D W Wong ldquoAn adaptive inverse-distanceweighting spatial interpolation techniquerdquo Computers ampGeosciences vol 34 no 9 pp 1044ndash1055 2008

16 Journal of Sensors

International Journal of

AerospaceEngineeringHindawiwwwhindawicom Volume 2018

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Active and Passive Electronic Components

VLSI Design

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Shock and Vibration

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawiwwwhindawicom

Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Control Scienceand Engineering

Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

SensorsJournal of

Hindawiwwwhindawicom Volume 2018

International Journal of

RotatingMachinery

Hindawiwwwhindawicom Volume 2018

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Navigation and Observation

International Journal of

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

Submit your manuscripts atwwwhindawicom

trend of input parameters. The change of the transform function will not affect the accuracy of prediction, and the operator transferring can be implemented:

$$d''_{s_j}(t_l) = \Upsilon_{s_i}\!\left(h_{s_j}\!\left(B_{M-s_i}(t_l)\right)\right) = f_{s_i}\!\left(\sum_{j=1}^{J}\omega_j\, f\!\left(z_j\!\left(\sum_{q=1}^{M}\nu_{qj}\, h_{s_j}\!\left(B_{M-s_i}(t_l)\right)\right)\right)\right). \tag{29}$$

The assessment network based on the BP artificial neural network operator is shown in Figure 4.
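As an illustrative sketch of the nested sums in (29) — a hidden layer weighted by ν_{qj} followed by output weights ω_j, with activation f — the forward pass can be written as below. This is our simplified single-output reading, not the authors' exact implementation, and all variable names are assumptions:

```python
import numpy as np

def forward(h, V, w, f=np.tanh):
    """Two-layer forward pass mirroring the nested sums of (29):
    hidden units f(sum_q V[q, j] * h[q]), then output f(sum_j w[j] * z_j)."""
    z = f(h @ V)     # inner sum over the M inputs built from the other nodes
    return f(z @ w)  # outer sum over the J hidden units

rng = np.random.default_rng(0)
h = rng.normal(size=4)       # h_{s_j}(B_{M-s_i}(t_l)): inputs from the other nodes
V = rng.normal(size=(4, 3))  # nu_{qj}: input-to-hidden weights
w = rng.normal(size=3)       # omega_j: hidden-to-output weights
print(forward(h, V, w))      # a single estimated output in (-1, 1)
```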

In this paper, our proposed method is an improvement on the basis of the BP artificial neural network. According to spatial correlation, the physical quantities at monitored and interested locations close to a data acquisition node s_q can be approximated by the learning operator of s_q.

Suppose d̂_{s_j} represents the estimated value of the physical quantity at the interested location s_j. Due to conditional restrictions, no sensor is deployed at s_j. We therefore choose the physical quantity of the data acquisition node s_i nearest to the non-data acquisition location s_j for estimation. We can use Algorithm 1 to determine s_i.
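The determination of s_i is a plain nearest-neighbour search over the acquisition nodes. A minimal sketch, with hypothetical node names and coordinates:

```python
import math

def nearest_node(M, sj):
    """Return the acquisition node in M closest to the interested location sj
    (Euclidean distance, as in Algorithm 1)."""
    xj, yj = sj
    best, mindist = None, math.inf
    for name, (xq, yq) in M.items():
        d = math.hypot(xj - xq, yj - yq)
        if d < mindist:
            best, mindist = name, d
    return best

# Hypothetical subset of the deployment coordinates
M = {"s7": (22.5, 8), "s18": (5.5, 10), "s46": (34.5, 16)}
print(nearest_node(M, (4.5, 18)))  # -> s18
```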

5.3. Assessment at the Non-Data Acquisition Location. s_i is the nearest data acquisition node to s_j, so we can use the learning operator of s_i to estimate d̂_{s_j}. Our method improves on the inverse-distance interpolation method: because s_i is closest to s_j, the correlation between their data is the largest, and the data actually collected by s_i have the greatest impact on the predicted value at s_j:

$$\hat{d}_{s_j} = E\!\left(d'_{s_j}(t_l),\, d''_{s_j}(t_l)\right) = \alpha \times d'_{s_j}(t_l) + \beta \times d''_{s_j}(t_l),\quad s_i \in M,\; s_j \in S-M, \tag{30}$$

where α and β denote the weights of d′_{s_j}(t_l) and d″_{s_j}(t_l), respectively, in estimating the physical quantity at s_j, with α + β = 1.

d′_{s_j}(t_l) is the value of the actual measurement, so its credibility is higher. Based on spatial correlation, the closer s_i is to s_j, the greater the correlation between the physical quantity at s_j and the data collected by s_i.

We assume that dist_{s_i⟶s_j} represents the distance between s_i and s_j. The influence weight of the data collected by s_i on the assessment at s_j decreases with increasing dist_{s_i⟶s_j}: the greater the distance, the smaller the impact. The range of dist_{s_i⟶s_j} is [0, +∞), and the inverse tangent function is an increasing function on [0, +∞). The function curve is shown in Figure 5.

We use the boundedness and monotonically increasing character of the arctangent curve and are inspired by the idea of IDWI. The formula for calculating β is as follows:

$$\beta = \frac{\arctan\left(\mathrm{dist}_{s_i\rightarrow s_j}\right)}{\pi/2},\quad \mathrm{dist}_{s_i\rightarrow s_j}\in[0,+\infty). \tag{31}$$

This limits the value of β to the interval [0, 1). When dist_{s_i⟶s_j} is close to 0, β is close to 0 and 1 − β is close to 1, so the data measured at the data acquisition node dominate the estimated value at the non-data acquisition location. When dist_{s_i⟶s_j} = 0, a sensor is already deployed at s_j; we then need no value calculated by the prediction algorithm and directly use the actual measured data:

$$\hat{d}_{s_j} = \frac{\arctan\left(\mathrm{dist}_{s_i\rightarrow s_j}\right)}{\pi/2}\times d''_{s_j}(t_l) + \left(1-\frac{\arctan\left(\mathrm{dist}_{s_i\rightarrow s_j}\right)}{\pi/2}\right)\times d'_{s_j}(t_l),\quad s_i\in M,\; s_j\in S-M. \tag{32}$$

If the interested location is still far from the nearest data acquisition node, this algorithm will cause a large error. However, since we use the sensor placement based on iteratively dividing four subregions, the sensors of the data acquisition nodes are distributed omnidirectionally throughout the space rather than concentrated in a certain domain, so the error is not too large.
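Under the notation above, the arctangent weighting of (31) and the blended estimate of (32) can be sketched as follows (the function and argument names are ours):

```python
import math

def blend_estimate(d_meas, d_net, dist):
    """Blend the transferred measurement d' (d_meas) and the BP-operator
    output d'' (d_net) using the arctangent weight beta of (31)-(32)."""
    beta = math.atan(dist) / (math.pi / 2)  # beta in [0, 1)
    return beta * d_net + (1 - beta) * d_meas

print(blend_estimate(18.0, 20.0, 0.0))     # -> 18.0 (sensor deployed at s_j: use the measurement)
print(blend_estimate(18.0, 20.0, 1000.0))  # approaches the network output 20.0
```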

6. Experiments and Evaluation

6.1. Parameter Setup. The data set we used is the measured data provided by the Intel Berkeley Research lab [9]. The data were collected from 54 data acquisition nodes in the Intel Berkeley Research lab between February 28th and April 5th, 2004. In this data set, the epoch is a monotonically increasing sequence number from each mote; two readings with the same epoch number were produced by different motes at the same time. There are some missing epochs in this data set. Mote IDs range from 1 to 54. In this experiment, we selected the data of the following 9 motes as the set of interested locations, because these 9 points have the same epochs:

$$S = \{s_7(22.5, 8),\, s_{18}(5.5, 10),\, s_{19}(3.5, 13),\, s_{21}(4.5, 18),\, s_{22}(1.5, 23),\, s_{31}(15.5, 28),\, s_{46}(34.5, 16),\, s_{47}(39.5, 14),\, s_{48}(35.5, 10)\}. \tag{33}$$

When the environment of the monitoring area is not very complicated, the collected physical quantity has strong spatial correlation, and it is feasible to deploy sensors using the placement based on iteratively dividing four subregions. In this experiment, we use the method of iteratively dividing four subregions to select the interested locations from S as data acquisition nodes: the s_i(x_i, y_i) (s_i(x_i, y_i) ∈ S) closest to each deployment location generated by the method of iteratively dividing four subregions is selected into M.

Figure 4: The assessment network based on the BP artificial neural network operator.

In this experiment, we set the dimension of the collected data to 2, that is, k = 2: we take two physical quantities, temperature and humidity. At least two data acquisition nodes are needed, i.e., |M| ≥ 2, because one data acquisition node must be the node closest to the interested location for interpolation (s_i ≠ ∅), and the data of the other acquisition nodes must serve as the training set of the artificial neural network (M − s_i ≠ ∅, i.e., |M − s_i| ≥ 1).

The epochs we chose are T = {33, 35, 42, 44, 51, 56, 61, 62, 70, 73, 86, 88, 89, 91, 95, 99, 102, 107, 112, 113, 123, 124, 127}. The data whose epoch is an element of T are used as the training set. Unfortunately, the actual sampled sensing data are always corrupted, and some values go missing. We need clean sensing data to train the BP artificial neural networks to improve the interpolation precision.

In order not to lose the trend of data change in the process of data filtering, we simply apply mean processing with a filter window along the temporal direction of the measurements to obtain the clean sensing data. If |d_{s_i}(t − 1) − d_{s_i}(t)| ≫ δ × |d_{M−s_i}(t − 1) − d_{M−s_i}(t)|, then d_{s_i}(t) is replaced by the calculated value

$$d_{s_i}(t) = \frac{d_{s_i}(t-1) + d_{s_i}(t+1)}{2}, \tag{34}$$

where δ is the adjusting coefficient. In this experiment, we take δ = 10.
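The jump test and neighbour averaging of (34) can be sketched as follows; the list-based implementation and function name are our assumptions:

```python
def smooth_outliers(d, d_ref, delta=10.0):
    """Replace any sample whose temporal jump greatly exceeds the reference
    jump of the remaining nodes (scaled by delta) with the mean of its
    neighbours, as in (34)."""
    out = list(d)
    for t in range(1, len(out) - 1):
        if abs(out[t - 1] - out[t]) > delta * abs(d_ref[t - 1] - d_ref[t]):
            out[t] = (out[t - 1] + out[t + 1]) / 2
    return out

# A spurious spike in the 3rd reading is smoothed to ~18.3
readings = [18.1, 18.2, 90.0, 18.4, 18.5]
reference = [18.0, 18.1, 18.2, 18.3, 18.4]
print(smooth_outliers(readings, reference))
```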

The filtered result of the temperature measurements is shown in Figure 6(a), and the actual sensing data are shown in Figure 6(b). The filtered result of the humidity measurements is shown in Figure 7(a), and the actual sensing data are shown in Figure 7(b).

When comparing the test results, we require that the epoch of the data supplied by each mote be the same, because in an actual application, the data space of the monitoring area we build is the data space at a certain moment. We set t_l = 138; that is, the 138th epoch was selected for interpolation. We compare the actual collected values at the 138th epoch with the interpolation calculated by the algorithms.

To evaluate the accuracy of the reconstructed measurement, we choose the mean relative error (MRE). It reflects the precision of the estimated data relative to the measured data. The formula for the calculation of MRE is as follows [17]:

$$\mathrm{MRE} = \frac{1}{n}\sum_{i=1}^{n}\frac{\left|\hat{d}(x_i) - d(x_i)\right|}{d(x_i)}, \tag{35}$$

where d(x_i) is the actual acquisition value of the i-th sensor, d̂(x_i) is correspondingly the assessed value, and n is the total number of data acquisition nodes.
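The MRE of (35) is straightforward to compute; a minimal sketch:

```python
def mean_relative_error(estimated, actual):
    """MRE of (35): average of |d_hat(x_i) - d(x_i)| / d(x_i) over all nodes."""
    n = len(actual)
    return sum(abs(e - a) / a for e, a in zip(estimated, actual)) / n

print(mean_relative_error([18.0, 40.0], [20.0, 40.0]))  # -> 0.05
```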

6.2. Results of the Experiment. This experiment is in a small range, the physical quantities of the collected data are highly correlated, and the physical quantity does not change drastically between the various interested locations, so the learning operator can be transferred.

In the case of data loss, we compare our method with the inverse-distance-weighted interpolation algorithm in terms of interpolation accuracy; here q_{s_i} = 0. Since the data acquisition nodes experiencing data loss are random, we conducted 20 tests for a more accurate comparison. Obviously, the number of acquisition nodes with data loss must be less than or equal to the total number of acquisition nodes. The results are shown in Figures 8 and 9.

As can be seen from Figures 8 and 9, as the proportion of lost data in all collected data decreases, the interpolation error is gradually reduced. The curve of the proposed algorithm is relatively flat, indicating that it is less affected by data loss.

In the case of data loss, especially when the number of data acquisition nodes is relatively small, the interpolation error of our algorithm is much smaller than that of the inverse-distance-weighted interpolation.

Figure 5: Inverse tangent function curve.

Algorithm 1: Culling s_i from M.
Input: M, s_j(x_j, y_j)
Output: s_i
Initialize mindist
For each s_q in M:
    dist_{s_j⟶s_q} = ((x_j − x_q)² + (y_j − y_q)²)^{1/2}
    if mindist > dist_{s_j⟶s_q}, then i = q, mindist = dist_{s_j⟶s_q}
End for

In the case of data disturbance, we compare our method with the inverse-distance-weighted interpolation algorithm in terms of interpolation accuracy; here q_{s_i} = No(d̄_{s_i}(t_l), σ²_{d_{s_i}}). For the interpolation of temperature, we use the mean temperature of all acquisition nodes as the mean value of the Gauss distribution and set the parameters as d̄_{s_i}(t_l) = 18.47895 and σ² = 18; for the interpolation of humidity, we use the mean humidity of all acquisition nodes as the mean value of the Gauss distribution and set the parameters as d̄_{s_i}(t_l) = 39.44691 and σ² = 39. Again, we did 20 tests. The results are shown in Figures 10 and 11.
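The two fault cases of the experiments — data loss (q_{s_i} = 0) and Gaussian data disturbance — can be simulated as follows. Treating the corrupted reading as the value of q is our reading of the text, and the function name is hypothetical:

```python
import random

def corrupt(reading, mode, rng, mean=None, variance=None):
    """Model the two fault cases of the experiments: data loss, where the
    reading becomes q = 0, and data disturbance, where it becomes a sample
    q ~ No(mean, variance)."""
    if mode == "loss":
        return 0.0
    if mode == "disturbance":
        return rng.gauss(mean, variance ** 0.5)  # gauss takes a standard deviation
    return reading

rng = random.Random(1)
print(corrupt(18.5, "loss", rng))  # -> 0.0
print(corrupt(18.5, "disturbance", rng, mean=18.47895, variance=18.0))
```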

It can be seen from Figures 10 and 11 that the variation curve of the proposed algorithm is relatively flat, while the curve of the inverse-distance-weighted interpolation algorithm fluctuates greatly. The interpolation error of the IDWI algorithm is affected not only by data disturbance but also by the deployment locations of the data acquisition nodes. When 7 acquisition nodes are deployed, the error of the IDWI algorithm is the smallest: the sensor placement based on iteratively dividing four subregions is then nearly uniform, so the interpolation error is small.

In the case where there are not many acquisition nodes, the density of acquisition nodes where data disturbance occurs is large, so the interpolation error is more prominent. As the number of deployed sensors increases, the interpolation error is reduced accordingly.

As we can see from Figures 8–11, our algorithm is insensitive to errors and strongly robust when the data are wrong, while erroneous data greatly affect the interpolation accuracy of the inverse-distance interpolation method. In particular, when the error rate is high, the relative error of our algorithm is much lower than that of the inverse-distance interpolation algorithm. When the error rate is 50%, our algorithm has a relative error of 0.1, while the relative error of the inverse-distance interpolation algorithm is 3.5, as shown in Figure 9(a).

Figure 6: The temperature sensing data from the Intel Berkeley Research lab: (a) the filtered sensing data; (b) the actual sensing data.

Figure 7: The humidity sensing data from the Intel Berkeley Research lab: (a) the filtered sensing data; (b) the actual sensing data.

Figure 8: The temperature sensing data from the Intel Berkeley Research lab: (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at data acquisition nodes increased from 2 to 8.

Figure 9: The humidity sensing data from the Intel Berkeley Research lab: (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at data acquisition nodes increased from 2 to 8.

Figure 10: The temperature sensing data from the Intel Berkeley Research lab: (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at data acquisition nodes increased from 2 to 8.

Figure 11: The humidity sensing data from the Intel Berkeley Research lab: (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at data acquisition nodes increased from 2 to 8.

7. Conclusions

In this paper, we proposed a robust data interpolation based on a back propagation artificial neural network operator for incomplete acquisition in a wireless sensor network. Under the incomplete collection strategy of WSN, the effect of complete acquisition can be approximately obtained by interpolation. In the case of limited data acquisition nodes, the data of the acquisition nodes are used for training to obtain the learning operator. Then the learning operator of the acquisition node closest to the non-data acquisition location is transferred to the non-data acquisition location for interpolation. Considering that the data collected by a WSN are prone to error, we analyzed the reasons for the error. In order to improve the fault tolerance and robustness of the interpolation algorithm, we proposed a BP artificial neural network learning operator. From the experiments, we demonstrated that our algorithm has strong robustness and a lower error in the case of data errors collected by the WSN. This method has strong potential for practical data visualization, data analysis, WSN monitoring, etc.

Data Availability

The measured data that support the findings of this study are supplied by the Intel Berkeley Research lab. They are available from http://db.csail.mit.edu/labdata/labdata.html.

Conflicts of Interest

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was financially supported by the National Natural Science Foundation of China (Grant Nos. 61561017 and 61462022), the Innovative Research Projects for Graduate Students of Hainan Higher Education Institutions (Grant No. Hyb2017-06), the Hainan Province Major Science & Technology Project (Grant No. ZDKJ2016015), the project of the Natural Science Foundation of Hainan Province in China (Grant No. 617033), the open project of the State Key Laboratory of Marine Resource Utilization in South China Sea (Grant No. 2016013B), and the Key R&D Project of Hainan Province (No. ZDYF2018015).

References

[1] F. Ding and A. Song, "Development and coverage evaluation of ZigBee-based wireless network applications," Journal of Sensors, vol. 2016, Article ID 2943974, 9 pages, 2016.

[2] O. Alvear, W. Zamora, C. Calafate, J. C. Cano, and P. Manzoni, "An architecture offering mobile pollution sensing with high spatial resolution," Journal of Sensors, vol. 2016, Article ID 1458147, 13 pages, 2016.

[3] W. Xiao, The Key Technology of Adaptive Data Fault Tolerance in Wireless Sensor Networks, National Defense University of Science and Technology, Hunan, China, 2010.

[4] Z. Liang, F. Tian, C. Zhang, H. Sun, A. Song, and T. Liu, "Improving the robustness of prediction model by transfer learning for interference suppression of electronic nose," IEEE Sensors Journal, vol. 18, no. 3, pp. 1111–1121, 2017.

[5] Z. Yang, W. Qiu, H. Sun, and A. Nallanathan, "Robust radar emitter recognition based on the three-dimensional distribution feature and transfer learning," Sensors, vol. 16, no. 3, pp. 289–303, 2016.

[6] S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2010.

[7] B. Pan, J. Tai, Q. Zheng, and S. Zhao, "Cascade convolutional neural network based on transfer-learning for aircraft detection on high-resolution remote sensing images," Journal of Sensors, vol. 2017, Article ID 1796728, 14 pages, 2017.

[8] J. Park, R. Javier, T. Moon, and Y. Kim, "Micro-Doppler based classification of human aquatic activities via transfer learning of convolutional neural networks," Sensors, vol. 16, no. 12, pp. 1990–2000, 2016.

[9] http://db.csail.mit.edu/labdata/labdata.html.

[10] Z. Kang and Y. Wang, "Structural topology optimization based on non-local Shepard interpolation of density field," Computer Methods in Applied Mechanics and Engineering, vol. 200, no. 49-52, pp. 3515–3525, 2011.

[11] M. Hammoudeh, R. Newman, C. Dennett, and S. Mount, "Interpolation techniques for building a continuous map from discrete wireless sensor network data," Wireless Communications and Mobile Computing, vol. 13, no. 9, pp. 809–827, 2013.

[12] M. Saberi, A. Azadeh, A. Nourmohammadzadeh, and P. Pazhoheshfar, "Comparing performance and robustness of SVM and ANN for fault diagnosis in a centrifugal pump," in 19th International Congress on Modelling and Simulation, pp. 433–439, Perth, Australia, 2011.

[13] R. Velazco, P. Cheynet, J. D. Muller, R. Ecoffet, and S. Buchner, "Artificial neural network robustness for on-board satellite image processing: results of upset simulations and ground tests," IEEE Transactions on Nuclear Science, vol. 44, no. 6, pp. 2337–2344, 1997.

[14] A. Venkitaraman, A. M. Javid, and S. Chatterjee, "R3Net: random weights, rectifier linear units and robustness for artificial neural network," 2018, https://arxiv.org/abs/1803.04186.

[15] C. Zhang, X. Zhang, O. Li et al., "WSN data collection algorithm based on compressed sensing under unreliable links," Journal of Communication, vol. 37, no. 9, pp. 131–141, 2016.

[16] H. L. Chen and W. Peng, "Improved BP artificial neural network in traffic accident prediction," Journal of East China Normal University (Natural Science), vol. 2017, no. 2, pp. 61–68, 2017.

[17] G. Y. Lu and D. W. Wong, "An adaptive inverse-distance weighting spatial interpolation technique," Computers & Geosciences, vol. 34, no. 9, pp. 1044–1055, 2008.


hsi(bso1(T))

hsi(bsq1(T))

hsi(bsm1(T))

hsi(bsok(T))

hsi(bsqk(T))

hsi(bsmk(T))

hsj(bso1(Tl))

hsj(bsq1(Tl))

hsj(bsm1(Tl))

hsj(bsok(Tl))

hsj(bsqk(Tl))

hsj(bsmk(Tl))

Output layerInput layer

Transferring

bso1(T)

bso1(tl)

bsq1(tl)

bsm1(tl)

bsok(tl)

bsqk(tl)

bsmk(tl)

bsq1(T)

bsm1(T)

bsok(T)

bsqk(T)

bsmk(T)

dsj1(t1)

ϒsi ()

ϒsi ()

Primeˆ

dsjk(t1)Primeˆ

bsi1(T)

bsik(T)

Figure 4 The assessment network based on the BP artificial neural network operator

11Journal of Sensors

generated by the method of the iterative dividing foursubregions is selected in M

In this experiment we set the dimension of the collecteddata to 2 that is k = 2 We take two physical quantitiestemperature and humidity Data acquisition nodes need atleast two ie M ge 2 Because one data acquisition nodeshould be the closest node from the interested location forinterpolation and si notin empty the data of other acquisition nodesshould be used as the training set of the artificial neuralnetwork M minus sinsubempty M minus si ge 1

The epochs we chose are T = 333542 44515661627073868889919599102107112113123124127 Thesedata whose epoch is an element in T are used as a training setepoch Unfortunately the actual sampling sensing data arealways corrupted and some values go missing We need realclean sensing data to train the BP artificial neural networksto improve the interpolation precision

In order to not lose the trend of data change in theprocess of data filtering we simply use the mean processingwith a filter window on the temporal direction of the mea-surements as the clean sensing data If dsi t minus 1 minus dsi t ≫δ times dMminussi t minus 1 minus dMminussi t then dsi t can be replaced bythe value calculated

dsi t =dsi t minus 1 + dsi t + 1

2 34

where δ is the adjusting coefficient In this experiment wetake δ = 10

The filtered result of the temperature measurements isshown in Figure 6(a) and the actual sensing data is shownin Figure 6(b) The filtered result of the humidity measure-ments is shown in Figure 7(a) and the actual sensing datais shown in Figure 7(b)

When comparing the test results we need that the epochof the data supplied by each mote is the same Because in theactual application the data space of the monitoring area webuilt is the data space at a certain moment We assumedthat tl = 138 In this experiment the time we selectedwas the 138th epoch for interpolation We compare theactual collected value at the 138th epoch with the interpola-tion calculated by algorithms

To evaluate the accuracy of the reconstructed measure-ment we choose the mean relative error (MRE) It reflectsthe precision of the estimated data relative to the mea-sured data The formula for the calculation of MRE is asfollows [17]

MRE =1nn

i=1

d xi minus d xid xi

35

where d xi is the actual acquisition value of the ith sensorCorrespondingly d xi is the assessed value n is the totalnumber of data acquisition nodes

6.2. Results of the Experiment. This experiment covers a small range, the physical quantities of the collected data are highly correlated, and there is no situation in which a physical quantity changes drastically between the various interested locations, so the learning operator can be transferred.

In the case of data loss, we compare our method with the inverse-distance-weighted interpolation algorithm in terms of interpolation accuracy; in this case, $q_{s_i} = 0$. Since the data acquisition nodes suffering data loss are random, we conducted 20 tests for a more accurate comparison. Obviously, the number of acquisition nodes with data loss must be less than or equal to the total number of acquisition nodes. The results are shown in Figures 8 and 9.

As can be seen from Figures 8 and 9, as the proportion of lost data among all collected data decreases, the interpolation error is gradually reduced. The curve of the proposed algorithm is relatively flat, indicating that it is less affected by data loss.

In the case of data loss, especially when the number of data acquisition nodes is relatively small, the interpolation error of our algorithm is much smaller than that of the inverse-distance-weighted interpolation.
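The inverse-distance-weighted baseline used in these comparisons can be sketched as below. This is a generic Shepard-type IDW estimator with an assumed power parameter of 2; the function name and interface are ours, and it is not necessarily the exact adaptive variant of [17]:

```python
import numpy as np

def idw_interpolate(known_xy, known_vals, query_xy, power=2.0):
    """Inverse-distance-weighted interpolation at one interested location.

    known_xy   : (n, 2) coordinates of the data acquisition nodes.
    known_vals : (n,) measured values at those nodes.
    query_xy   : (2,) coordinates of the interested location.
    power      : distance exponent (2 is an assumed, common choice).
    """
    known_xy = np.asarray(known_xy, dtype=float)
    known_vals = np.asarray(known_vals, dtype=float)
    d = np.linalg.norm(known_xy - np.asarray(query_xy, dtype=float), axis=1)
    if np.any(d == 0):
        # The query coincides with a node: return that node's value.
        return float(known_vals[np.argmin(d)])
    w = 1.0 / d ** power           # closer nodes get larger weights
    return float(np.sum(w * known_vals) / np.sum(w))
```

A query equidistant from two nodes simply averages their values, which is why a single erroneous node reading propagates directly into the IDW estimate.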

In the case of data disturbance, we compare our method with the inverse-distance-weighted interpolation algorithm in terms of interpolation accuracy; in this case, $q_{s_i} = \mathrm{No}(\bar{d}_{s_i}(t_l), \sigma^2_{d_{s_i}})$. For the interpolation of temperature, we use the mean temperature of all acquisition nodes as the mean value of the Gauss distribution. We set the parameters

[Figure 5: Inverse tangent function curve, showing y = arctan(x) with horizontal asymptotes at −π/2 and π/2.]

Input: M; the interested location s_j(x_j, y_j)
Output: s_i
Initialize mindist = ∞
For each s_q in M:
    dist_{s_j→s_q} = sqrt((x_j − x_q)² + (y_j − y_q)²)
    if mindist > dist_{s_j→s_q} then
        i = q; mindist = dist_{s_j→s_q}
    end if
End for

Algorithm 1: Culling s_i from M.
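In plain Python, the nearest-node search of Algorithm 1 amounts to the following linear scan; the dict-based interface and function name are our assumptions:

```python
import math

def nearest_node(nodes, query):
    """Find the acquisition node closest to an interested location.

    nodes : dict mapping node id q -> (x_q, y_q) for the nodes in M.
    query : (x_j, y_j), the interested location s_j.
    Returns the id i of the node s_i with minimal Euclidean distance.
    """
    xj, yj = query
    best_id, min_dist = None, math.inf   # "Initialize mindist"
    for q, (xq, yq) in nodes.items():
        dist = math.hypot(xj - xq, yj - yq)   # Euclidean distance
        if dist < min_dist:
            best_id, min_dist = q, dist
    return best_id
```

The returned node s_i is then culled from M, and the remaining nodes M − s_i serve as the training set.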

12 Journal of Sensors

as $\bar{d}_{s_i}(t_l) = 18.47895$ and $\sigma^2 = 18$; for the interpolation of humidity, we use the mean humidity of all acquisition nodes as the mean value of the Gauss distribution and set the parameters as $\bar{d}_{s_i}(t_l) = 39.44691$ and $\sigma^2 = 39$. Again, we conducted 20 tests. The results are shown in Figures 10 and 11.
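The disturbance model described above can be simulated as follows. The function name, the fixed seed, and the exact injection interface are our assumptions; only the Gaussian parameters (mean reading of all nodes, σ² = 18 for temperature or 39 for humidity) come from the experiment:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility (an assumption)

def disturb(values, sigma2, n_disturbed, rng=rng):
    """Inject Gaussian disturbance at randomly chosen acquisition nodes.

    values      : clean readings of all acquisition nodes at epoch t_l.
    sigma2      : variance of the Gauss distribution
                  (18 for temperature, 39 for humidity).
    n_disturbed : how many nodes receive a disturbed reading.
    """
    values = np.asarray(values, dtype=float).copy()
    idx = rng.choice(len(values), size=n_disturbed, replace=False)
    # Replace the chosen readings with draws from No(mean, sigma2),
    # where the mean is the mean reading of all acquisition nodes.
    values[idx] = rng.normal(loc=values.mean(), scale=np.sqrt(sigma2),
                             size=n_disturbed)
    return values
```

Repeating this injection 20 times with fresh random node choices mirrors the 20-test protocol of the experiment.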

It can be seen from Figures 10 and 11 that the variation curve of the proposed algorithm is relatively flat, while the curve of the inverse-distance-weighted interpolation algorithm fluctuates greatly. The interpolation error of the IDWI algorithm is affected not only by data disturbance but also by the deployment locations of the data acquisition nodes. When 7 acquisition nodes are deployed, the error of the IDWI algorithm is the smallest: sensor placement based on iteratively dividing four subregions is nearly uniform with 7 nodes, so the interpolation error is small.

When there are not many acquisition nodes, the proportion of acquisition nodes at which data disturbance occurs is large, so the interpolation error is more prominent. As the number of deployed sensors increases, the interpolation error is correspondingly reduced.

As we can see from Figures 8–11, our algorithm is insensitive to errors and strongly robust when the data is wrong, while erroneous data greatly degrades the interpolation accuracy of the inverse-distance-weighted method. In particular, when the error rate is high, the relative error of our algorithm is much lower than that of the inverse-distance-weighted algorithm: when the error rate is 50%, our algorithm has a relative error of 0.1, while that of the inverse-distance-weighted algorithm is 3.5, as shown in Figure 9(a).

7. Conclusions

In this paper, we proposed a robust data interpolation based on a back propagation artificial neural network operator for incomplete acquisition in a wireless sensor network. Under the incomplete collection strategy of a WSN, the effect of complete acquisition can be approximately obtained by interpolation. With a limited number of data acquisition nodes, the data of the acquisition nodes are used for training to obtain the learning operator. Then, the learning operator of the acquisition node

[Figure 7: The humidity sensing data from the Intel Berkeley Research lab: (a) the filtered sensing data; (b) the actual sensing data. Each panel plots humidity against epoch number and node index.]

[Figure 6: The temperature sensing data from the Intel Berkeley Research lab: (a) the filtered sensing data; (b) the actual sensing data. Each panel plots temperature against epoch number and node index.]


[Figure 9: Mean relative error for the humidity sensing data from the Intel Berkeley Research lab, comparing inverse-distance-weighted interpolation with the proposed method: (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at data acquisition nodes increased from 2 to 8.]

[Figure 8: Mean relative error for the temperature sensing data from the Intel Berkeley Research lab, comparing inverse-distance-weighted interpolation with the proposed method: (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at data acquisition nodes increased from 2 to 8.]


closest to the non-data-acquisition location is transferred to that location for interpolation. Considering that the data collected by a WSN is prone to error, we analyzed the reasons for the error. In order to improve the fault tolerance and robustness of the interpolation algorithm, we proposed a BP artificial neural network learning

[Figure 11: Mean relative error for the humidity sensing data from the Intel Berkeley Research lab, comparing inverse-distance-weighted interpolation with the proposed method: (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at data acquisition nodes increased from 2 to 8.]

[Figure 10: Mean relative error for the temperature sensing data from the Intel Berkeley Research lab, comparing inverse-distance-weighted interpolation with the proposed method: (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at data acquisition nodes increased from 2 to 8.]


operator. From the experiments, we demonstrated that our algorithm has strong robustness and a lower error in the presence of errors in the data collected by the WSN. This method has strong potential for practical data visualization, data analysis, WSN monitoring, and similar applications.

Data Availability

The measured data that support the findings of this study are supplied by the Intel Berkeley Research lab. They are available from http://db.csail.mit.edu/labdata/labdata.html.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This work was financially supported by the National Natural Science Foundation of China (Grant Nos. 61561017 and 61462022), the Innovative Research Projects for Graduate Students of Hainan Higher Education Institutions (Grant No. Hyb2017-06), the Hainan Province Major Science & Technology Project (Grant No. ZDKJ2016015), the project of the Natural Science Foundation of Hainan Province in China (Grant No. 617033), the open project of the State Key Laboratory of Marine Resource Utilization in South China Sea (Grant No. 2016013B), and the Key R&D Project of Hainan Province (No. ZDYF2018015).

References

[1] F. Ding and A. Song, "Development and coverage evaluation of ZigBee-based wireless network applications," Journal of Sensors, vol. 2016, Article ID 2943974, 9 pages, 2016.

[2] O. Alvear, W. Zamora, C. Calafate, J. C. Cano, and P. Manzoni, "An architecture offering mobile pollution sensing with high spatial resolution," Journal of Sensors, vol. 2016, Article ID 1458147, 13 pages, 2016.

[3] W. Xiao, The Key Technology of Adaptive Data Fault Tolerance in Wireless Sensor Networks, National Defense University of Science and Technology, Hunan, China, 2010.

[4] Z. Liang, F. Tian, C. Zhang, H. Sun, A. Song, and T. Liu, "Improving the robustness of prediction model by transfer learning for interference suppression of electronic nose," IEEE Sensors Journal, vol. 18, no. 3, pp. 1111–1121, 2017.

[5] Z. Yang, W. Qiu, H. Sun, and A. Nallanathan, "Robust radar emitter recognition based on the three-dimensional distribution feature and transfer learning," Sensors, vol. 16, no. 3, pp. 289–303, 2016.

[6] S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2010.

[7] B. Pan, J. Tai, Q. Zheng, and S. Zhao, "Cascade convolutional neural network based on transfer-learning for aircraft detection on high-resolution remote sensing images," Journal of Sensors, vol. 2017, Article ID 1796728, 14 pages, 2017.

[8] J. Park, R. Javier, T. Moon, and Y. Kim, "Micro-Doppler based classification of human aquatic activities via transfer learning of convolutional neural networks," Sensors, vol. 16, no. 12, pp. 1990–2000, 2016.

[9] http://db.csail.mit.edu/labdata/labdata.html.

[10] Z. Kang and Y. Wang, "Structural topology optimization based on non-local Shepard interpolation of density field," Computer Methods in Applied Mechanics and Engineering, vol. 200, no. 49-52, pp. 3515–3525, 2011.

[11] M. Hammoudeh, R. Newman, C. Dennett, and S. Mount, "Interpolation techniques for building a continuous map from discrete wireless sensor network data," Wireless Communications and Mobile Computing, vol. 13, no. 9, pp. 809–827, 2013.

[12] M. Saberi, A. Azadeh, A. Nourmohammadzadeh, and P. Pazhoheshfar, "Comparing performance and robustness of SVM and ANN for fault diagnosis in a centrifugal pump," in 19th International Congress on Modelling and Simulation, pp. 433–439, Perth, Australia, 2011.

[13] R. Velazco, P. Cheynet, J. D. Muller, R. Ecoffet, and S. Buchner, "Artificial neural network robustness for on-board satellite image processing: results of upset simulations and ground tests," IEEE Transactions on Nuclear Science, vol. 44, no. 6, pp. 2337–2344, 1997.

[14] A. Venkitaraman, A. M. Javid, and S. Chatterjee, "R3Net: random weights, rectifier linear units and robustness for artificial neural network," 2018, https://arxiv.org/abs/1803.04186.

[15] C. Zhang, X. Zhang, O. Li et al., "WSN data collection algorithm based on compressed sensing under unreliable links," Journal of Communication, vol. 37, no. 9, pp. 131–141, 2016.

[16] H. L. Chen and W. Peng, "Improved BP artificial neural network in traffic accident prediction," Journal of East China Normal University (Natural Science), vol. 2017, no. 2, pp. 61–68, 2017.

[17] G. Y. Lu and D. W. Wong, "An adaptive inverse-distance weighting spatial interpolation technique," Computers & Geosciences, vol. 34, no. 9, pp. 1044–1055, 2008.


generated by the method of the iterative dividing foursubregions is selected in M

In this experiment we set the dimension of the collecteddata to 2 that is k = 2 We take two physical quantitiestemperature and humidity Data acquisition nodes need atleast two ie M ge 2 Because one data acquisition nodeshould be the closest node from the interested location forinterpolation and si notin empty the data of other acquisition nodesshould be used as the training set of the artificial neuralnetwork M minus sinsubempty M minus si ge 1

The epochs we chose are T = 333542 44515661627073868889919599102107112113123124127 Thesedata whose epoch is an element in T are used as a training setepoch Unfortunately the actual sampling sensing data arealways corrupted and some values go missing We need realclean sensing data to train the BP artificial neural networksto improve the interpolation precision

In order to not lose the trend of data change in theprocess of data filtering we simply use the mean processingwith a filter window on the temporal direction of the mea-surements as the clean sensing data If dsi t minus 1 minus dsi t ≫δ times dMminussi t minus 1 minus dMminussi t then dsi t can be replaced bythe value calculated

dsi t =dsi t minus 1 + dsi t + 1

2 34

where δ is the adjusting coefficient In this experiment wetake δ = 10

The filtered result of the temperature measurements isshown in Figure 6(a) and the actual sensing data is shownin Figure 6(b) The filtered result of the humidity measure-ments is shown in Figure 7(a) and the actual sensing datais shown in Figure 7(b)

When comparing the test results we need that the epochof the data supplied by each mote is the same Because in theactual application the data space of the monitoring area webuilt is the data space at a certain moment We assumedthat tl = 138 In this experiment the time we selectedwas the 138th epoch for interpolation We compare theactual collected value at the 138th epoch with the interpola-tion calculated by algorithms

To evaluate the accuracy of the reconstructed measure-ment we choose the mean relative error (MRE) It reflectsthe precision of the estimated data relative to the mea-sured data The formula for the calculation of MRE is asfollows [17]

MRE =1nn

i=1

d xi minus d xid xi

35

where d xi is the actual acquisition value of the ith sensorCorrespondingly d xi is the assessed value n is the totalnumber of data acquisition nodes

62 Results of the Experiment This experiment is in a smallrange the physical quantity of the collected data is highlycorrelated and there is no situation in which the physical

quantity changes drastically between the various interestedlocations so the learning operator can be transferred

In the case of data loss we compare our method with theinverse-distance-weighted interpolation algorithm in termsof interpolation accuracy Then qsi = 0 Since the data acqui-sition nodes for data loss are random we conducted 20 testsfor a more accurate comparison Obviously it is necessarythat the number of acquisition node points for data loss beless than or equal to the total number of acquisition nodesThe results are shown in Figures 8 and 9

As can be seen from Figures 8 and 9 as the proportionof lost data in all collected data decreases the error of inter-polation is gradually reduced The curve of the proposedalgorithm is relatively flat indicating that it is less affectedby data loss

In the case of data loss especially when the number ofdata acquisition nodes is relatively small the interpolationerror of our algorithm is much smaller than that of theinverse-distance-weighted interpolation

In the case of data disturbance we compare ourmethod with the inverse-distance-weighted interpolationalgorithm in terms of interpolation accuracy Then qsi =No dsi tl σ

2 dsi For the interpolation of temperature weuse the mean temperature of all acquisition nodes as themean value of the Gauss distribution We set the parameters

minusπ2

π2

y

xO

Figure 5 Inverse tangent function curve

Input M sj xj yjOutput siInitialize mindistFor each si in M

distsj⟶sq= xj minus xq

2 + yj minus yq22

if mindist gt distsj⟶sq

then i = q mindist = distsj⟶sq

end ifEnd for

Algorithm 1 Culling si from M

12 Journal of Sensors

as dsi tl = 18 47895 and σ2 = 18 for the interpolation ofhumidity we use the mean humidity of all acquisition nodesas the mean value of the Gauss distribution We set theparameters as dsi tl = 39 44691 and σ2 = 39 Again we did20 tests The results are shown in Figures 10 and 11

It can be seen from Figures 10 and 11 that the variationcurve of the proposed algorithm is relatively flat while thecurve of the inverse-distance-weighted interpolation algo-rithm fluctuates greatly The interpolation error of the IDWIalgorithm is not only affected by data disturbance but alsoaffected by the deployment location of the data acquisitionnodes When 7 acquisition nodes are deployed the error ofthe IDWI algorithm is the smallest Because sensor place-ment based on the iterative dividing four subregions is nearuniform deployment when 7 acquisition nodes are deployedthe interpolation error is small

In the case where there are not many acquisition nodesthe density of the acquisition nodes where data disturbanceoccurs is large so the error of interpolation is more promi-nent The number of sensors that can be deployed increasesand the error of interpolation is also reduced

As we can see from Figures 8ndash11 our algorithm is insen-sitive to errors and strong in robustness when the data iswrong while the inverse-distance interpolation method hasa great influence on the interpolation accuracy of the errordata In particular when the error rate is high the relativeerror of our algorithm is much lower than that of theinverse-distance interpolation algorithm When the errorrate is 50 our algorithm has a relative error of 01 and therelative error of the inverse-distance interpolation algorithmis 35 as shown in Figure 9(a)

7 Conclusions

In this paper we proposed a robust data interpolation basedon a back propagation artificial neural network operator forincomplete acquisition in a wireless sensor network Underthe incomplete collection strategy of WSN the effect of com-plete acquisition can be approximately obtained by interpola-tion In the case of limited data acquisition nodes the data ofthe acquisition nodes are used to train to obtain the learningoperator Then the learning operator of the acquisition node

38150

39

40

50

Hum

idity

100

41

40

Epoch number

42

30

Node index

43

50 20100 0

(a)

minus10150

010

50

20

Hum

idity

100

30

40

Epoch number

40

30

Node index

50

50 20100 0

(b)

Figure 7 The humidity sensing data from the Intel Berkeley Research lab (a) the filtered sensing data (b) the actual sensing data

175150

18

50

185

Tem

pera

ture

100 40

19

Epoch number

30

Node index

195

50 20100 0

(a)

0150

50

50

Tem

pera

ture

100

100

40

Epoch number

30

Node index

150

50 20100 0

(b)

Figure 6 The temperature sensing data from the Intel Berkeley Research lab (a) the filtered sensing data (b) the actual sensing data

13Journal of Sensors

2 3 4 5 6 7 8Number of data acquisition nodes

Inverse-distance-weighted interpolationProposed method

002040608

112141618

222242628

3323436

Mea

n re

lativ

e err

or

(a)

Inverse-distance-weighted interpolationProposed method

0

05

1

15

8

2

25

Mea

n re

lativ

e err

or

3

35

6 8

Number of data lost nodes

76

Number of data acquisition nodes

4 5432 2

(b)

Figure 9 The humidity sensing data from the Intel Berkeley Research lab (a) data loss occurred at 1 data acquisition node (b) data lossoccurred at data acquisition nodes increased from 2 to 8

2 3 4 5 6 7 8Number of data acquisition nodes

0010203040506070809

11112131415161718

Mea

n re

lativ

e err

or

Inverse-distance-weighted interpolationProposed method

(a)

Inverse-distance-weighted interpolationProposed method

0

05

1

15

8

2

25

Mea

n re

lativ

e err

or

3

35

6 8

Number of data lost nodes

76

Number of data acquisition nodes

4 5432 2

(b)

Figure 8 The temperature sensing data from the Intel Berkeley Research lab (a) data loss occurred at 1 data acquisition node (b) data lossoccurred at data acquisition nodes increased from 2 to 8

14 Journal of Sensors

closest to the non-data acquisition location is transferred tothe non-data acquisition location for interpolation Consid-ering that the data collected by WSN is prone to error we

analyzed the reasons for the error In order to improve thefault tolerance and robustness of the interpolation algo-rithm we proposed a BP artificial neural network learning

2 3 4 5 6 7 8Number of data acquisition nodes

Inverse-distance-weighted interpolationProposed method

0

01

02

03

04

05

06

07

Mea

n re

lativ

e err

or

(a)

0

02

04

06

Mea

n re

lativ

e err

or

08

1

Inverse-distance-weighted interpolationProposed method

86 8

Number of data lost nodes

76

Number of data acquisition nodes

4 5432 2

(b)

Figure 11 The humidity sensing data from the Intel Berkeley Research lab (a) data disturbance occurred at 1 data acquisition node (b) datadisturbance occurred at data acquisition nodes increased from 2 to 8

2 3 4 5 6 7 8Number of data acquisition nodes

Inverse-distance-weighted interpolationProposed method

0

01

02

03

04

05

06

07M

ean

relat

ive e

rror

(a)

Inverse-distance-weighted interpolationProposed method

86 8

Number of data lost nodes

76

Number of data acquisition nodes

4 5432 2

0

05

1

Mea

n re

lativ

e err

or

15

(b)

Figure 10 The temperature sensing data from the Intel Berkeley Research lab (a) data disturbance occurred at 1 data acquisition node(b) data disturbance occurred at data acquisition nodes increased from 2 to 8

15Journal of Sensors

operator From the experiments we demonstrated that ouralgorithm has strong robustness and it has a lower error inthe case of data errors collected by the WSN This methodhas strong potential for practical data visualization dataanalysis WSN monitoring etc

Data Availability

The measured data that support the findings of this study aresupplied by Intel Berkeley Research lab They are availablefrom httpdbcsailmitedulabdatalabdatahtml

Conflicts of Interest

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

This work was financially supported by the National NaturalScience Foundation of China (Grant Nos 61561017 and61462022) the Innovative Research Projects for GraduateStudents of Hainan Higher Education Institutions (GrantNo Hyb2017-06) the Hainan Province Major Science ampTechnology Project (Grant No ZDKJ2016015) the projectof the Natural Science Foundation of Hainan Province inChina (Grant No617033) the open project of the State KeyLaboratory of Marine Resource Utilization in South ChinaSea (Grant No 2016013B) and the Key RampD Project ofHainan Province (No ZDYF2018015)

References

[1] F Ding and A Song ldquoDevelopment and coverage evaluationof ZigBee-based wireless network applicationsrdquo Journal ofSensors vol 2016 Article ID 2943974 9 pages 2016

[2] O Alvear W Zamora C Calafate J C Cano and P ManzonildquoAn architecture offering mobile pollution sensing with highspatial resolutionrdquo Journal of Sensors vol 2016 ArticleID 1458147 13 pages 2016

[3] W Xiao The Key Technology of Adaptive Data Fault Tolerancein Wireless Sensor Networks National Defense University ofScience and Technology Hunan China 2010

[4] Z Liang F Tian C Zhang H Sun A Song and T LiuldquoImproving the robustness of prediction model by transferlearning for interference suppression of electronic noserdquo IEEESensors Journal vol 18 no 3 pp 1111ndash1121 2017

[5] Z Yang W Qiu H Sun and A Nallanathan ldquoRobustradar emitter recognition based on the three-dimensionaldistribution feature and transfer learningrdquo Sensors vol 16no 3 pp 289ndash303 2016

[6] S J Pan and Q Yang ldquoA survey on transfer learningrdquo IEEETransactions on Knowledge and Data Engineering vol 22no 10 pp 1345ndash1359 2010

[7] B Pan J Tai Q Zheng and S Zhao ldquoCascade convolutionalneural network based on transfer-learning for aircraft detec-tion on high-resolution remote sensing imagesrdquo Journal ofSensors vol 2017 Article ID 1796728 14 pages 2017

[8] J Park R Javier T Moon and Y Kim ldquoMicro-Doppler basedclassification of human aquatic activities via transfer learning

of convolutional neural networksrdquo Sensors vol 16 no 12pp 1990ndash2000 2016

[9] httpdbcsailmitedulabdatalabdatahtml

[10] Z Kang and YWang ldquoStructural topology optimization basedon non-local Shepard interpolation of density fieldrdquo ComputerMethods in Applied Mechanics and Engineering vol 200no 49-52 pp 3515ndash3525 2011

[11] M Hammoudeh R Newman C Dennett and S MountldquoInterpolation techniques for building a continuous map fromdiscrete wireless sensor network datardquo Wireless Communica-tions and Mobile Computing vol 13 no 9 pp 809ndash827 2013

[12] M Saberi A Azadeh A Nourmohammadzadeh andP Pazhoheshfar ldquoComparing performance and robustness ofSVM and ANN for fault diagnosis in a centrifugal pumprdquo in19th International Congress on Modelling and Simulationpp 433ndash439 Perth Australia 2011

[13] R Velazco P Cheynet J D Muller R Ecoffet and S BuchnerldquoArtificial neural network robustness for on-board satelliteimage processing results of upset simulations and groundtestsrdquo IEEE Transactions on Nuclear Science vol 44 no 6pp 2337ndash2344 1997

[14] A Venkitaraman A M Javid and S Chatterjee ldquoR3Netrandom weights rectifier linear units and robustness for artifi-cial neural networkrdquo 2018 httpsarxivorgabs180304186

[15] C Zhang X Zhang O Li et al ldquoWSN data collection algo-rithm based on compressed sensing under unreliable linksrdquoJournal of Communication vol 37 no 9 pp 131ndash141 2016

[16] H L Chen and W Peng ldquoImproved BP artificial neuralnetwork in traffic accident predictionrdquo Journal of East ChinaNormal University (Natural Science) vol 2017 no 2 pp 61ndash68 2017

[17] G Y Lu and D W Wong ldquoAn adaptive inverse-distanceweighting spatial interpolation techniquerdquo Computers ampGeosciences vol 34 no 9 pp 1044ndash1055 2008

16 Journal of Sensors

International Journal of

AerospaceEngineeringHindawiwwwhindawicom Volume 2018

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Active and Passive Electronic Components

VLSI Design

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Shock and Vibration

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawiwwwhindawicom

Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Control Scienceand Engineering

Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

SensorsJournal of

Hindawiwwwhindawicom Volume 2018

International Journal of

RotatingMachinery

Hindawiwwwhindawicom Volume 2018

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Navigation and Observation

International Journal of

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

Submit your manuscripts atwwwhindawicom

as dsi tl = 18 47895 and σ2 = 18 for the interpolation ofhumidity we use the mean humidity of all acquisition nodesas the mean value of the Gauss distribution We set theparameters as dsi tl = 39 44691 and σ2 = 39 Again we did20 tests The results are shown in Figures 10 and 11

It can be seen from Figures 10 and 11 that the variationcurve of the proposed algorithm is relatively flat while thecurve of the inverse-distance-weighted interpolation algo-rithm fluctuates greatly The interpolation error of the IDWIalgorithm is not only affected by data disturbance but alsoaffected by the deployment location of the data acquisitionnodes When 7 acquisition nodes are deployed the error ofthe IDWI algorithm is the smallest Because sensor place-ment based on the iterative dividing four subregions is nearuniform deployment when 7 acquisition nodes are deployedthe interpolation error is small

In the case where there are not many acquisition nodesthe density of the acquisition nodes where data disturbanceoccurs is large so the error of interpolation is more promi-nent The number of sensors that can be deployed increasesand the error of interpolation is also reduced

As we can see from Figures 8ndash11 our algorithm is insen-sitive to errors and strong in robustness when the data iswrong while the inverse-distance interpolation method hasa great influence on the interpolation accuracy of the errordata In particular when the error rate is high the relativeerror of our algorithm is much lower than that of theinverse-distance interpolation algorithm When the errorrate is 50 our algorithm has a relative error of 01 and therelative error of the inverse-distance interpolation algorithmis 35 as shown in Figure 9(a)

Journal of Sensors

[Figure 6. The temperature sensing data from the Intel Berkeley Research lab: (a) the filtered sensing data; (b) the actual sensing data. Surface plots of temperature versus node index and epoch number.]

[Figure 7. The humidity sensing data from the Intel Berkeley Research lab: (a) the filtered sensing data; (b) the actual sensing data. Surface plots of humidity versus node index and epoch number.]

[Figure 8. The temperature sensing data from the Intel Berkeley Research lab: (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at data acquisition nodes increased from 2 to 8. Mean relative error versus number of data acquisition nodes (2 to 8), comparing inverse-distance-weighted interpolation with the proposed method.]

[Figure 9. The humidity sensing data from the Intel Berkeley Research lab: (a) data loss occurred at 1 data acquisition node; (b) data loss occurred at data acquisition nodes increased from 2 to 8. Mean relative error versus number of data acquisition nodes (2 to 8), comparing inverse-distance-weighted interpolation with the proposed method.]

[Figure 10. The temperature sensing data from the Intel Berkeley Research lab: (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at data acquisition nodes increased from 2 to 8. Mean relative error versus number of data acquisition nodes (2 to 8), comparing inverse-distance-weighted interpolation with the proposed method.]

[Figure 11. The humidity sensing data from the Intel Berkeley Research lab: (a) data disturbance occurred at 1 data acquisition node; (b) data disturbance occurred at data acquisition nodes increased from 2 to 8. Mean relative error versus number of data acquisition nodes (2 to 8), comparing inverse-distance-weighted interpolation with the proposed method.]

7. Conclusions

In this paper, we proposed a robust data interpolation based on a back propagation artificial neural network operator for incomplete acquisition in a wireless sensor network. Under the incomplete collection strategy of a WSN, the effect of complete acquisition can be approximately obtained by interpolation. In the case of limited data acquisition nodes, the data of the acquisition nodes are used for training to obtain the learning operator. Then the learning operator of the acquisition node closest to the non-data-acquisition location is transferred to that location for interpolation. Considering that the data collected by a WSN is prone to error, we analyzed the reasons for the error. In order to improve the fault tolerance and robustness of the interpolation algorithm, we proposed a BP artificial neural network learning operator. From the experiments, we demonstrated that our algorithm has strong robustness and a lower error when the data collected by the WSN contains errors. This method has strong potential for practical data visualization, data analysis, WSN monitoring, etc.
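The train-then-transfer idea summarized above can be sketched as follows. For illustration only, a one-neuron linear operator trained by gradient descent stands in for the paper's multi-layer BP network, and the node names, positions, and historical readings are invented for the example; the actual operator architecture and training setup are those described in the paper.

```python
import math

def train_operator(samples, lr=0.1, epochs=2000):
    """Fit y ~ w*x + b to one node's historical (x, y) pairs by
    gradient descent -- a linear stand-in for the BP learning operator."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in samples:
            err = (w * x + b) - y
            gw += err * x
            gb += err
        w -= lr * gw / len(samples)
        b -= lr * gb / len(samples)
    return lambda x: w * x + b

def transfer_interpolate(location, operators, positions, x):
    """Transfer the operator of the geographically nearest acquisition
    node to a non-data-acquisition location and evaluate it there."""
    nearest = min(positions, key=lambda n: math.dist(location, positions[n]))
    return operators[nearest](x)

# Invented historical data for two acquisition nodes.
history = {
    "A": [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)],   # node A follows y = 2x + 1
    "B": [(0.0, 2.0), (1.0, 2.5), (2.0, 3.0)],   # node B follows y = 0.5x + 2
}
positions = {"A": (0.0, 0.0), "B": (10.0, 0.0)}
operators = {n: train_operator(s) for n, s in history.items()}

# Location (1, 0) is closest to node A, so A's trained operator is transferred.
print(transfer_interpolate((1.0, 0.0), operators, positions, 3.0))  # ~ 7.0
```

The same selection rule extends directly to richer operators: only `train_operator` changes, while the nearest-node transfer step stays the same.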

Data Availability

The measured data that support the findings of this study are supplied by the Intel Berkeley Research lab. They are available from http://db.csail.mit.edu/labdata/labdata.html.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This work was financially supported by the National Natural Science Foundation of China (Grant Nos. 61561017 and 61462022), the Innovative Research Projects for Graduate Students of Hainan Higher Education Institutions (Grant No. Hyb2017-06), the Hainan Province Major Science & Technology Project (Grant No. ZDKJ2016015), the project of the Natural Science Foundation of Hainan Province in China (Grant No. 617033), the open project of the State Key Laboratory of Marine Resource Utilization in South China Sea (Grant No. 2016013B), and the Key R&D Project of Hainan Province (No. ZDYF2018015).

References

[1] F. Ding and A. Song, "Development and coverage evaluation of ZigBee-based wireless network applications," Journal of Sensors, vol. 2016, Article ID 2943974, 9 pages, 2016.
[2] O. Alvear, W. Zamora, C. Calafate, J. C. Cano, and P. Manzoni, "An architecture offering mobile pollution sensing with high spatial resolution," Journal of Sensors, vol. 2016, Article ID 1458147, 13 pages, 2016.
[3] W. Xiao, The Key Technology of Adaptive Data Fault Tolerance in Wireless Sensor Networks, National Defense University of Science and Technology, Hunan, China, 2010.
[4] Z. Liang, F. Tian, C. Zhang, H. Sun, A. Song, and T. Liu, "Improving the robustness of prediction model by transfer learning for interference suppression of electronic nose," IEEE Sensors Journal, vol. 18, no. 3, pp. 1111–1121, 2017.
[5] Z. Yang, W. Qiu, H. Sun, and A. Nallanathan, "Robust radar emitter recognition based on the three-dimensional distribution feature and transfer learning," Sensors, vol. 16, no. 3, pp. 289–303, 2016.
[6] S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2010.
[7] B. Pan, J. Tai, Q. Zheng, and S. Zhao, "Cascade convolutional neural network based on transfer-learning for aircraft detection on high-resolution remote sensing images," Journal of Sensors, vol. 2017, Article ID 1796728, 14 pages, 2017.
[8] J. Park, R. Javier, T. Moon, and Y. Kim, "Micro-Doppler based classification of human aquatic activities via transfer learning of convolutional neural networks," Sensors, vol. 16, no. 12, pp. 1990–2000, 2016.
[9] http://db.csail.mit.edu/labdata/labdata.html.
[10] Z. Kang and Y. Wang, "Structural topology optimization based on non-local Shepard interpolation of density field," Computer Methods in Applied Mechanics and Engineering, vol. 200, no. 49-52, pp. 3515–3525, 2011.
[11] M. Hammoudeh, R. Newman, C. Dennett, and S. Mount, "Interpolation techniques for building a continuous map from discrete wireless sensor network data," Wireless Communications and Mobile Computing, vol. 13, no. 9, pp. 809–827, 2013.
[12] M. Saberi, A. Azadeh, A. Nourmohammadzadeh, and P. Pazhoheshfar, "Comparing performance and robustness of SVM and ANN for fault diagnosis in a centrifugal pump," in 19th International Congress on Modelling and Simulation, pp. 433–439, Perth, Australia, 2011.
[13] R. Velazco, P. Cheynet, J. D. Muller, R. Ecoffet, and S. Buchner, "Artificial neural network robustness for on-board satellite image processing: results of upset simulations and ground tests," IEEE Transactions on Nuclear Science, vol. 44, no. 6, pp. 2337–2344, 1997.
[14] A. Venkitaraman, A. M. Javid, and S. Chatterjee, "R3Net: random weights, rectifier linear units and robustness for artificial neural network," 2018, https://arxiv.org/abs/1803.04186.
[15] C. Zhang, X. Zhang, O. Li et al., "WSN data collection algorithm based on compressed sensing under unreliable links," Journal of Communication, vol. 37, no. 9, pp. 131–141, 2016.
[16] H. L. Chen and W. Peng, "Improved BP artificial neural network in traffic accident prediction," Journal of East China Normal University (Natural Science), vol. 2017, no. 2, pp. 61–68, 2017.
[17] G. Y. Lu and D. W. Wong, "An adaptive inverse-distance weighting spatial interpolation technique," Computers & Geosciences, vol. 34, no. 9, pp. 1044–1055, 2008.


