Research Article
Fuzzy Counter Propagation Neural Network Control for a Class of Nonlinear Dynamical Systems

Vandana Sakhre,1 Sanjeev Jain,1 Vilas S. Sapkal,2 and Dev P. Agarwal3

1 Madhav Institute of Technology & Science, Gwalior 474005, India
2 SGB Amravati University, Amravati 444062, India
3 IIT Delhi, 110001, India

Correspondence should be addressed to Vandana Sakhre; vssakhre@gmail.com

Received 3 May 2015; Revised 1 August 2015; Accepted 3 August 2015

Academic Editor: Ezequiel Lopez-Rubio

Copyright © 2015 Vandana Sakhre et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A Fuzzy Counter Propagation Neural Network (FCPN) controller design is developed for a class of nonlinear dynamical systems. In this process, the weights connecting the instar and outstar, that is, the input-hidden and hidden-output layers respectively, are adjusted using Fuzzy Competitive Learning (FCL). The FCL paradigm adopts a learning principle that is used to calculate the proposed Best Matched Node (BMN). This strategy offers robust control of nonlinear dynamical systems. FCPN is compared with existing networks, the Dynamic Network (DN) and the Back Propagation Network (BPN), on the basis of Mean Absolute Error (MAE), Mean Square Error (MSE), Best Fit Rate (BFR), and so forth, and the proposed FCPN gives better results than DN and BPN. The effectiveness of the proposed FCPN algorithms is demonstrated through simulations of four nonlinear dynamical systems, a multiple input and single output (MISO) system, and the single input and single output (SISO) gas furnace Box-Jenkins time series data.
1. Introduction
With the rapid advance of computers, digitization has swept all fields of science and technology, with special emphasis on modeling and identification. Most physical processes are continuous in nature, but they are generally modeled in discrete time, with the derivative and integral replaced by easily realizable discrete operations. Taking these advantages, an enormous amount of research has been focused on discrete time methods, and many techniques based on linear and nonlinear discrete systems have been developed [1, 2]. Neural networks have been widely used for nonlinear dynamical systems [3-5]. Various control schemes, like back stepping control [6, 7], sliding mode control [8, 9], and control using soft computing [10], are a continuous source of interest. The nonlinear dynamical system is a generic problem which finds its application in every field of engineering. Techniques based on soft computing, fuzzy logic, and neural networks have also found application in the modeling of systems in various domains. The Feedforward Neural Network (FNN) is one of the most commonly used networks for this purpose. FNN with
the Back Propagation algorithm is another powerful network. Neural networks are the most popular approach due to their capability of modeling most nonlinear functions to any arbitrary degree of accuracy [11]. Most systems are designed using a feedforward network with gradient descent learning, but the gradient descent method encounters the problem of slow convergence. This convergence problem can be overcome by the use of Cauchy-Newton [12], Quasi-Newton, and Levenberg-Marquardt algorithms [13].
Optimal controllers for hybrid dynamical systems (HDS) have been developed using the Hamilton-Jacobi-Bellman (HJB) solution method [14, 15]. A novel Lyapunov-Krasovskii functional (LKF) with a triple integral term was constructed for exponential synchronization of complex dynamical networks with control packet loss and additive time varying delay [16]. Control of discrete time varying systems using a dynamical model is difficult. To overcome this, a new neural network approximation structure was developed to solve the optimal tracking problem of nonlinear discrete time varying systems using the reinforcement learning (RL) method [17]. To
Hindawi Publishing Corporation, Computational Intelligence and Neuroscience, Volume 2015, Article ID 719620, 12 pages, http://dx.doi.org/10.1155/2015/719620
enhance the performance of nonlinear multivariable systems, a fuzzy optimal controller based on the Takagi-Sugeno model was developed, which was used to minimize a quadratic performance index [18]. An ultimately stable training algorithm, inspired by adaptive observers, for black box identification of nonlinear discrete systems was developed using a state space recurrent neural network; the algorithm uses only input and output measurements in discrete time [19]. An indirect adaptive controller that uses a neural network was presented for the identification and control of an experimental pilot distillation column; this system is multivariable with unknown dynamics, and the neural network was trained using the Levenberg-Marquardt algorithm [20]. Adaptive finite time stabilization of a class of switched nonlinear systems with unknown nonlinear terms was investigated using neural networks. The finite law and adaptive law were constructed using a radial basis function neural network (RBFNN) to approximate the unknown packaged function, and stability analysis of the RBFNN was performed to evaluate the states of the closed loop systems [21]. An adaptive fuzzy output tracking control approach was proposed in [22] for single input single output (SISO) systems which are uncertain and nonlinear under arbitrary switching. An iterative learning control scheme was proposed for a class of nonlinear dynamic systems, which includes holonomic systems as its subsets, with linear feedback mechanisms and feedforward learning strategies [23].
In this proposed work, we have used an instar-outstar structure based CPN with Fuzzy Competitive Learning (FCL). We have designed a Fuzzy Counter Propagation Network to control some nonlinear dynamical systems. In the FCPN, the CPN model is trained by FCL algorithms. FCL learning is used for adjusting the weights and updating the Best Matched Node (BMN) in discrete time nonlinear dynamical systems.
The main contributions of this research are as follows.
(1) This paper contributes the approximation of a class of nonlinear dynamical systems using the Fuzzy Competitive Learning Based Counter Propagation Network (FCPN).

(2) FCPN is employed to optimize the Mean Absolute Error, Mean Square Error, Best Fit Rate, and so forth.

(3) The performance criteria of the proposed FCPN for nonlinear dynamical systems are effectively improved by compensating the reference signal and controller signal.
The paper is organized as follows. In Section 2, the problem formulation for a class of nonlinear dynamical systems is given. Section 3 contains a description of the Feedforward Neural Network (FNN) with Back Propagation Network, the Dynamic Network, and the Fuzzy Learning Based Counter Propagation Network. Dynamical learning for CPN and optimal learning of the dynamical system are presented in Section 4. The simulation results and comparison of different nonlinear dynamical models are presented in Section 5. Section 6 gives the conclusion of the study.
2. Problem Formulation
Let us consider four models of discrete time nonlinear dynamical systems [24] for single input single output (SISO) and multiple input and single output (MISO) systems; they are described by the difference equations (1)-(4), together with the Box-Jenkins time series data [10]. Let f: R^n -> R and g: R^m -> R be the nonlinear, continuous, differentiable functions of Model I-Model IV, approximated by FCPN to the desired degree of accuracy. Once the system has been parameterized, the performance evaluation is carried out by FCPN for (1) to (4).
Model I:

    y(k+1) = \sum_{i=0}^{n-1} \alpha_i y(k-i) + g[u(k), \ldots, u(k-m+1)]                (1)

Model II:

    y(k+1) = f[y(k), \ldots, y(k-n+1)] + \sum_{i=0}^{m-1} \beta_i u(k-i)                 (2)

Model III:

    y(k+1) = f[y(k), \ldots, y(k-n+1)] + g[u(k), \ldots, u(k-m+1)]                       (3)

Model IV:

    y(k+1) = f[y(k), \ldots, y(k-n+1), u(k), \ldots, u(k-m+1)]                           (4)
where [u(k), y(k)] represents the input-output pair of the system at time k and the order of the system is (n, m). FCL is used to learn the system defined by (1) to (4), and the performance of the FCPN can be measured by the error function

    E(k) = \frac{1}{2} [y(k) - \hat{y}(k)]^2                                             (5)

where u(k) is the input, y(k) is the system output, and \hat{y}(k) is the neural network output. Equation (5) can be written for the neural controller as

    E_c = \frac{1}{P} \sum_{p=1}^{P} [y(k) - y_c(k)]^2                                   (6)

where y_c(k) is the neural controller output, (6) is known as the minimized error function, and P is the total number of input patterns.
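As an illustration of how such a parameterized model behaves, the Model III difference equation (3) and the instantaneous error (5) can be sketched in Python. The particular f and g below are arbitrary placeholders for a second-order (n = 2, m = 1) case, not the functions used in this paper:

```python
import numpy as np

def model3_step(y_hist, u_hist, f, g):
    """One step of Model III: y(k+1) = f[y(k),...,y(k-n+1)] + g[u(k),...,u(k-m+1)]."""
    return f(y_hist) + g(u_hist)

def tracking_error(y, y_hat):
    """Instantaneous error E(k) = 0.5 * (y(k) - y_hat(k))**2 from Eq. (5)."""
    return 0.5 * (y - y_hat) ** 2

# Illustrative choices for f and g (assumptions, not from the paper):
f = lambda yh: 0.5 * np.tanh(yh[0]) + 0.1 * yh[1]
g = lambda uh: uh[0] ** 2

# y(k) = 1.0, y(k-1) = 0.5, u(k) = 0.2
y_next = model3_step(np.array([1.0, 0.5]), np.array([0.2]), f, g)
```

Any of the four model classes can be stepped in the same way by changing the histories passed to f and g.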
3. Back Propagation Network (BPN)

A neural network is one of the most popular intelligent systems, with the capability to approximate the target value to an arbitrary degree of accuracy. BPN is one
Figure 1: General architecture of BPN.
of the most commonly used networks for this purpose. The BPN consists of an input layer, a hidden layer, and an output layer, and possesses weighted interconnections. The BPN is based on supervised learning, in which the error is propagated back to the hidden layer. The aim of this neural network is to train the net to achieve a balance between the net's ability to respond (memorization) and its ability to give a reasonable response to input that is similar, but not identical, to that used in training (generalization). For a given set of training input-output pairs, this network provides a procedure for changing the weights to classify the given input patterns correctly. The weight update algorithm is based on the gradient descent method [25]. The network architecture is shown in Figure 1.
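The gradient descent weight update used by a BPN can be sketched for a one-hidden-layer network as follows. The layer sizes, learning rate, and single training pattern are illustrative assumptions, not values from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpn_step(x, t, V, W, lr=0.1):
    """One gradient descent (backprop) update for a 1-hidden-layer network.
    V: input->hidden weights, W: hidden->output weights (linear output layer)."""
    z = sigmoid(V @ x)                          # hidden activations
    y = W @ z                                   # network output
    e = y - t                                   # output error
    dW = np.outer(e, z)                         # gradient of 0.5*e^2 wrt W
    dV = np.outer((W.T @ e) * z * (1 - z), x)   # error propagated back to V
    return V - lr * dV, W - lr * dW

rng = np.random.default_rng(0)
V = rng.normal(size=(4, 2))
W = rng.normal(size=(1, 4))
x, t = np.array([0.5, -0.3]), np.array([0.2])
for _ in range(500):                            # repeated updates drive e -> 0
    V, W = bpn_step(x, t, V, W)
```

After the loop the network output W @ sigmoid(V @ x) closely matches the target, illustrating the memorization side of the memorization/generalization balance discussed above.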
3.1. Dynamic Network. Neural networks can be classified into two categories: (i) Static and (ii) Dynamic. In a Static Neural Network, the output can be calculated directly from the input through the feedforward network. In a Dynamic Network, the output depends on the current or previous inputs, outputs, or states of the network. Where the current output is a function of the current input and previous outputs, the network is known as a recurrent (feedback) network. The training processes of Static and Dynamic Networks differ in how the gradient or Jacobian is computed. A Dynamic Network contains delays and operates on a sequence of inputs. Dynamic Networks can have purely feedforward connections, or they can have some recurrent connections; they can be trained using Dynamic Back Propagation [26].
The general equation of the net input n^m(t) for the m-th layer is given by

    n^m(t) = \sum_{l \in L^f_m} \sum_{d \in DL^{m,l}} LW^{m,l}(d) a^l(t-d)
           + \sum_{l \in I_m} \sum_{d \in DI^{m,l}} IW^{m,l}(d) b^l(t-d) + b^m           (7)

where b^l(t) is the l-th input vector at time t, IW^{m,l} is the input weight between input l and layer m, LW^{m,l} is the layer weight between layer l and layer m, and b^m is the bias vector for layer m. DL^{m,l} is the set of all delays in the Tapped Delay Line (TDL) between layer l and layer m, and DI^{m,l} is the set of all delays in the TDL between input l and layer m. I_m is the set of indices of input vectors that connect to layer m, and L^f_m is the set of indices of layers that connect forward to layer m. The output of the m-th layer at time t is computed as

    a^m(t) = f^m(n^m(t))                                                                 (8)

A network with a TDL on the input with DI^{1,1} = {0, 1, 2} has output

    a(t) = f(n(t)) = f(\sum_d IW(d) p(t-d))
         = f[iw_{1,1}(0) p(t) + iw_{1,1}(1) p(t-1) + iw_{1,1}(2) p(t-2)]                 (9)

One simple architecture with delays DI^{1,1} = {0, 1, 2} for a feedforward Dynamic Network, known as the Dynamic Adaptive Linear Neuron, is shown in Figure 2.
3.2. Counter Propagation Network (CPN). The CPN is a multilayer feedforward network based on the combination of input, competitive, and output layers. The CPN model is instar-outstar: a three-layer neural network that performs input-output data mapping, that is, producing an output in response to an input vector, on the basis of Fuzzy Competitive Learning. In the CPN, the connection between the input layer and the competitive layer is an instar structure, and the connection between the competitive and output layers is an outstar structure. The Counter Propagation Network involves a two-stage training process. In the first stage, the input vector is clustered on the basis of Euclidean
Figure 2: Simple architecture of the Dynamic Adaptive Linear Neuron.
distance, and the performance of the network is improved using a linear topology. Using the Euclidean distance between the input and weight vectors, we evaluate the BMN. In Phase II, the desired response is obtained by adapting the weights from the competitive layer to the output layer [27]. Let x = [x_1, x_2, ..., x_n] and y* = [y*_1, y*_2, ..., y*_m] be the input and desired vectors, respectively; let v_{ij} be the weight between the input layer and the competitive layer, where 1 <= i <= n and 1 <= j <= p; and let w_{jk} be the weight between the competitive layer and the output layer, where 1 <= k <= m. The Euclidean distance between the input vector x and the weight vector v_j is

    D_j = \sum_{i=1}^{n} (x_i - v_{ij})^2,  where j = 1, 2, ..., p                       (10)

The architecture of CPN is shown in Figure 3.
3.3. Fuzzy Competitive Learning (FCL). FCL is used for adjusting the instar-outstar weights and updating the BMN for discrete time nonlinear dynamical systems. Learning of the CPN is divided into two phases: the Fuzzy Competitive Learning phase (unsupervised) and the Grossberg Learning phase (supervised) [28-31]. CPN is an efficient network for mapping input vectors of size n into clusters of size m. As in the traditional self-organizing map, the weight vector of every node is considered as a sample vector for the input pattern. The process is to determine which node's weight vector (say J) is most similar to the input pattern; that node is chosen as the BMN. The first phase is based on the closeness between the weight vector and the input vector, and the second phase is the weight update between the competitive and output layers for the desired response.
Let v_{ij} be the weight from input node i to competitive neuron j, and let w_{jk} be the weight between competitive node j and output neuron k. We propose a new learning rate calculation method for the weight update:

    \alpha(J) = \frac{\sum_{i=1}^{n} (v_{iJ} - x_i)^2}
                     {\sum_{j=1}^{p} \sum_{i=1}^{n} (v_{ij} - x_i)^2}                    (11)

where J denotes the index of the BMN and \alpha is known as the Fuzzy Competitive Learning rate.
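A small sketch of how the proposed learning rate (11) could be computed, assuming the instar weights are stored in a matrix with one column per competitive node (an implementation choice, not specified in the paper):

```python
import numpy as np

def fcl_rate(V, x):
    """Fuzzy Competitive Learning rate of Eq. (11).
    V[i, j]: weight from input i to competitive node j; x: input vector.
    J is the Best Matched Node (minimum squared Euclidean distance, Eq. (10))."""
    D = ((V - x[:, None]) ** 2).sum(axis=0)   # squared distances D_j, Eq. (10)
    J = int(np.argmin(D))                     # BMN index
    alpha = D[J] / D.sum()                    # Eq. (11)
    return J, alpha

# Two competitive nodes with weight vectors (0, 0) and (1, 1):
V = np.array([[0.0, 1.0],
              [0.0, 1.0]])
J, alpha = fcl_rate(V, np.array([0.1, 0.1]))
```

Note the design consequence: the closer the BMN already is to the input, the smaller the rate, so well-matched nodes receive only small adjustments.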
The Fuzzy Competitive Learning phase is described as follows.
3.3.1. Training Algorithm of FCL

Phase I (determination of the BMN). The steps for training the CPN are as follows.

Step 1. Initialize the instar-outstar weights.

Step 2. Perform Steps 3-8 until the stopping criteria for Phase I training fail. The stopping condition may be a fixed number of epochs, or that the learning rate has reduced to a negligible value.

Step 3. Perform Steps 4-6 for every input training vector X.

Step 4. Set the X-input layer activation to vector X.

Step 5. Compute the Best Matched Node (BMN) J: using the Euclidean distance, find the node Z_J whose distance from the input pattern is minimum. The Euclidean distance is calculated as

    D_j = \sum_{i=1}^{n} (x_i - v_{ij})^2,  where j = 1, 2, ..., p                       (12)

The minimum net input is

    z_{in,J} = \sum_{i=1}^{n} x_i v_{iJ}                                                 (13)

Step 6. Calculate the Fuzzy Competitive Learning rate (11) for the weight update using the BMN.

Step 7. Update the weights for unit Z_J:

    v_{iJ}(new) = v_{iJ}(old) + \alpha h(J, i, j)(x_i - v_{iJ})                          (14)

where i = 1, 2, ..., n and h(., .) is the neighborhood function around the BMN. Consider

    h(J, i, j) = \exp(-\|v_{ij} - v_{iJ}\|^2 / (2\sigma^2(t))),  h(J, i, j) \in [0, 1]   (15)

    \sigma(t) = \sigma_0 \exp(-t/T)                                                      (16)

Step 8. Test the stopping criteria.
Phase II (to obtain the desired response)

Step 1. Set the activation functions of the (x, y) input and output layers, respectively.

Step 2. Update the BMN J (Step 5 from Phase I). Also update the weights into unit Z_J as

    v_{iJ}(new) = v_{iJ}(old) + \alpha h(J, i, j)[x_i - v_{iJ}(old)]                     (17)
Figure 3: Architecture of the Counter Propagation Network.
Step 3. Update the weights from node Z_J to the output units as

    w_{Jk}(new) = w_{Jk}(old) + \beta h(J, j, k)[y_k - w_{Jk}(old)],  for 1 <= k <= m    (18)

Step 4. The learning rate \beta is calculated as

    \beta(J) = \frac{\sum_{k=1}^{m} (y_k - w_{Jk})^2}
                    {\sum_{j=1}^{p} \sum_{k=1}^{m} (y_k - w_{jk})^2}                     (19)

Step 5. Decrease the rate of learning such that \beta = \beta - \epsilon, where \epsilon is a small positive number.

Step 6. Test the stopping condition for Phase II (i.e., a fixed number of epochs, or the learning rate has reduced to a negligible value).
3.3.2. Testing Phase (to Test FCPN)

Step 1. Set the initial weights, that is, the weights obtained during training.

Step 2. Apply the FCPN to the input vector X.

Step 3. Find the unit J that is closest to vector X.

Step 4. Set the activations of the output units.

Step 5. Apply the activation function at y_k, where y_k = \sum_j z_j w_{jk}.
4. Dynamical Learning for CPN

Learning stability is a fundamental issue for CPN; however, there are few studies on the learning issue of FNN. BPN learning is not always successful because of its sensitivity to the learning parameters, and the optimal learning rate changes during the training process. Dynamical learning of CPN is carried out using Lemmas 1, 2, and 3.

Assumption. Let us assume \phi(x) is a sigmoid function, that is, a bounded, continuous, and increasing function. Since the input to the neural network in this model is bounded, we consider Lemmas 1 and 2 given below.
Lemma 1. Let \phi(x) be a sigmoid function, let \Omega be a compact set in R^n, and let f: R^n -> R be a continuous function on \Omega. Then for arbitrary \epsilon > 0 there exist an integer N and real constants c_i, \theta_i, and w_{ij}, i = 1, 2, ..., N, j = 1, 2, ..., n, such that

    \hat{f}(x_1, x_2, ..., x_n) = \sum_{i=1}^{N} c_i \phi(\sum_{j=1}^{n} w_{ij} x_j - \theta_i)   (20)

satisfies

    \max_{x \in \Omega} \|f(x_1, x_2, ..., x_n) - \hat{f}(x_1, x_2, ..., x_n)\| < \epsilon        (21)

Using Lemma 1, dynamical learning for a three-layer CPN can be formulated, where the hidden layer transfer functions are \phi(x) and the transfer function at the output layer is linear.

Let all vectors be column vectors, and let the subscript kp refer to the k-th component of the p-th output vector.
Let X = [x_1, x_2, ..., x_P] \in R^{L x P}, Y = [y_1, y_2, ..., y_P] \in R^{H x P}, O = [o_1, o_2, ..., o_P] \in R^{K x P}, and D = [d_1, d_2, ..., d_P] \in R^{K x P} be the input, hidden, output, and desired matrices, respectively, where L, H, and K denote the numbers of input, hidden, and output layer neurons. Let V and W represent the input-hidden and hidden-output layer weight matrices, respectively. The objective of network training is to minimize the error function J, where

    J = \frac{1}{2PK} \sum_{p=1}^{P} \sum_{k=1}^{K} (o_{kp} - d_{kp})^2                  (22)
4.1. Optimal Learning of the Dynamical System. Consider a three-layer CPN; the network error matrix is defined by the difference between the desired output and the FCPN output at any iteration t and is given as

    E_t = O_t - D = W_t Y_t - D = W_t V_t X - D                                          (23)

The objective of the network, the minimization of the error in (24), is defined as follows:

    J = \frac{1}{2PK} Tr(E_t E_t^T)                                                      (24)

where Tr represents the trace of a matrix. Using the gradient descent method, the updated weights are given by

    W_{t+1} = W_t - \beta_t \frac{\partial J}{\partial W_t},
    V_{t+1} = V_t - \beta_t \frac{\partial J}{\partial V_t}                              (25)

or

    W_{t+1} = W_t - \frac{\beta_t}{PK} E_t Y_t^T,
    V_{t+1} = V_t - \frac{\beta_t}{PK} W_t^T E_t X^T                                     (26)
Using (23)-(26), we have

    E_{t+1} E_{t+1}^T = (W_{t+1} Y_{t+1} - D)(W_{t+1} Y_{t+1} - D)^T                     (27)

To obtain the minimum error for the multilayer FNN, substitute Y_{t+1} = V_{t+1} X and the updates (26) into (27):

    E_{t+1} E_{t+1}^T = (W_{t+1} V_{t+1} X - D)(W_{t+1} V_{t+1} X - D)^T
    = [(W_t - \frac{\beta_t}{PK} E_t Y_t^T)(V_t - \frac{\beta_t}{PK} W_t^T E_t X^T) X - D]
      [(W_t - \frac{\beta_t}{PK} E_t Y_t^T)(V_t - \frac{\beta_t}{PK} W_t^T E_t X^T) X - D]^T
    = [E_t - \frac{\beta_t}{PK}(W_t W_t^T E_t X^T X + E_t Y_t^T V_t X)
         + \frac{\beta_t^2}{(PK)^2} E_t Y_t^T W_t^T E_t X^T X]
      [E_t - \frac{\beta_t}{PK}(W_t W_t^T E_t X^T X + E_t Y_t^T V_t X)
         + \frac{\beta_t^2}{(PK)^2} E_t Y_t^T W_t^T E_t X^T X]^T                         (28)

Expanding (28), taking the trace, and collecting terms by powers of \beta_t, the change in the error function between two successive iterations is J_{t+1} - J_t = (1/(2PK)) g(\beta_t). For simplicity, we omit the subscript t; that is,

    J_{t+1} - J_t = \frac{1}{2PK} (A\beta^4 + B\beta^3 + C\beta^2 + M\beta)              (29)

where

    A = \frac{1}{(PK)^4} Tr[E Y^T W^T E X^T X (E Y^T W^T E X^T X)^T]

    B = -\frac{1}{(PK)^3} Tr[W W^T E X^T X (E Y^T W^T E X^T X)^T
        + E Y^T V X (E Y^T W^T E X^T X)^T
        + E Y^T W^T E X^T X (W W^T E X^T X)^T
        + E Y^T W^T E X^T X (E Y^T V X)^T]

    C = \frac{1}{(PK)^2} Tr[E (E Y^T W^T E X^T X)^T + E Y^T W^T E X^T X E^T
        + W W^T E X^T X (W W^T E X^T X)^T
        + W W^T E X^T X (E Y^T V X)^T
        + E Y^T V X (W W^T E X^T X)^T
        + E Y^T V X (E Y^T V X)^T]

    M = -\frac{1}{PK} Tr[E (W W^T E X^T X)^T + E (E Y^T V X)^T
        + W W^T E X^T X E^T + E Y^T V X E^T]                                             (30)

so that

    g(\beta) = A\beta^4 + B\beta^3 + C\beta^2 + M\beta                                   (31)

and

    \frac{\partial g}{\partial \beta} = 4A (\beta^3 + a\beta^2 + b\beta + c)             (32)

where a = 3B/4A, b = 2C/4A, and c = M/4A.
Lemma 2. For the solution of a general real cubic equation, one uses the following. Let

    f(x) = x^3 + a x^2 + b x + c,
    D = 18abc - 4a^3 c + a^2 b^2 - 4b^3 - 27c^2                                          (33)

where D is the discriminant of f(x). Then we have the following:

(1) If D < 0, f(x) has one real root.
(2) If D >= 0, f(x) has three real roots:
    (a) if D > 0, f(x) has three distinct real roots;
    (b) if D = 0 and 6b - 2a^2 != 0, f(x) has one single root and one multiple root;
    (c) if D = 0 and 6b - 2a^2 = 0, f(x) has one root of multiplicity three.
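Lemma 2's classification can be checked numerically; the two cubics below are standard textbook examples, not taken from this paper:

```python
def cubic_discriminant(a, b, c):
    """Discriminant of x^3 + a x^2 + b x + c, as in Lemma 2 / Eq. (33):
    D = 18abc - 4a^3 c + a^2 b^2 - 4b^3 - 27c^2."""
    return 18*a*b*c - 4*a**3*c + a**2*b**2 - 4*b**3 - 27*c**2

# (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6: three distinct real roots -> D > 0
d_three = cubic_discriminant(-6.0, 11.0, -6.0)   # D = 4

# x^3 + x + 1: one real root (and a complex pair) -> D < 0
d_one = cubic_discriminant(0.0, 1.0, 1.0)        # D = -31
```

The sign of D thus decides, without root-finding, how many real candidates the learning-rate search in Lemma 3 has to compare.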
Lemma 3. For the polynomial g(\beta) given in (31), let the optimum \beta* = \beta_i be such that g(\beta_i) = min(g(\beta_1), g(\beta_2), g(\beta_3)), i \in {1, 2, 3}, where each \beta_i is a real root of \partial g / \partial \beta = 0. Then \beta* is the optimal learning rate, and the learning process is stable.
Proof. To find the stable learning range of \beta, consider the Lyapunov function

    V_t = J_t^2,  \Delta V_t = J_{t+1}^2 - J_t^2                                         (34)

The dynamical system is guaranteed to be stable if \Delta V_t < 0, that is, if J_{t+1} - J_t = (1/(2PK))(A\beta^4 + B\beta^3 + C\beta^2 + M\beta) < 0 from (29), where \beta is the learning rate; note that the input matrix remains the same during the whole training process. We must therefore find the range of \beta which satisfies g(\beta) = A\beta^4 + B\beta^3 + C\beta^2 + M\beta < 0. Since \partial g / \partial \beta has at least one real root (Lemma 2), one of its roots must give the optimum of g(\beta). Obviously, the minimum value of g(\beta) gives the largest reduction in J_t at each step of the learning process. Equation (31) shows that g(\beta) has two or four real roots, one of them being \beta = 0 (Lemma 2), so the minimizing \beta shows the largest reduction in error between two successive iterations. The minimum value is obtained by differentiating (31) with respect to \beta; from (32),

    \frac{\partial g}{\partial \beta} = 4A (\beta^3 + a\beta^2 + b\beta + c)             (35)

where a = 3B/4A, b = 2C/4A, and c = M/4A. Solving \partial g / \partial \beta = 0 gives the \beta which minimizes the error in (5).
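The root-selection step of Lemma 3 can be sketched as follows. The coefficients A, B, C, M passed in here are toy values; in practice they would come from the trace expressions in (30):

```python
import numpy as np

def optimal_beta(A, B, C, M):
    """Pick the real root of dg/dbeta = 4A b^3 + 3B b^2 + 2C b + M = 0
    that minimizes g(beta) = A b^4 + B b^3 + C b^2 + M b (Lemma 3 sketch)."""
    g = lambda b: A*b**4 + B*b**3 + C*b**2 + M*b
    roots = np.roots([4*A, 3*B, 2*C, M])          # stationary points of g
    real = roots[np.abs(roots.imag) < 1e-9].real  # keep the real roots
    return min(real, key=g)                       # root with smallest g

# Toy coefficients (illustrative): g(b) = b^4 - 2b^2, minimized at b = +/-1
beta = optimal_beta(1.0, 0.0, -2.0, 0.0)
```

Since g(0) = 0 always, a negative g at the chosen root guarantees J_{t+1} < J_t, which is exactly the stability condition (34) of the proof.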
5. Simulation Results

To demonstrate the effectiveness and merit of the proposed FCPN, simulation results are presented and discussed. Four different nonlinear dynamical systems and one general benchmark problem, the Box-Jenkins model with time series data, are considered.

5.1. Performance Criteria. For assessing the performance of FCPN and comparing it with the Dynamic and Back Propagation Networks, we have evaluated the various errors given below. For the four dynamical models, it is recommended to assess a subset of the following criteria.

Given N pairs of data points (y(k), \hat{y}(k)), where y(k) is the output of the system and \hat{y}(k) is the output of the controller, the Maximum Absolute Error (MAE) is

    J_{MAE} = J_{max} = \max_{1 <= k <= N} |y(k) - \hat{y}(k)|                           (36)

the Sum of Squared Errors (SSE) is

    J_{SSE} = \sum_{k=1}^{N} (y(k) - \hat{y}(k))^2                                       (37)

and the Mean Squared Error (MSE) is

    J_{MSE} = \frac{1}{N} \sum_{k=1}^{N} (y(k) - \hat{y}(k))^2                           (38)
Figure 4: (a) Mean Square Error of the system (Example 1) using FCPN. (b) Performance of the controller (target and FCPN output) using the FCPN algorithm for Example 1.
and/or the Root Mean Squared Error (RMSE) is

    J_{RMSE} = \sqrt{J_{MSE}}                                                            (39)

The measure of Variance Accounting Factor (VAF) is

    J_{VAF} = (1 - \frac{Var(y(k) - \hat{y}(k))}{Var(y(k))}) \cdot 100                   (40)

A related measure is the Normalized Mean Squared Error (NMSE),

    J_{NMSE} = \frac{\sum_{k=1}^{N} (y(k) - \hat{y}(k))^2}
                    {\sum_{k=1}^{N} (y(k) - \bar{y})^2}                                  (41)

and the Best Fit Rate (BFR) is

    J_{BFR} = (1 - \sqrt{\frac{\sum_{k=1}^{N} (y(k) - \hat{y}(k))^2}
                              {\sum_{k=1}^{N} (y(k) - \bar{y})^2}}) \cdot 100            (42)
Example 1. The nonlinear dynamical system [24] is governed by the following difference equation:

    y(k+1) = 0.3 y(k) + 0.6 y(k-1) + f[u(k)]                                             (43)

The nonlinear function in the system in this case is f(u) = u^3 + 0.3u^2 - 0.4u, where y and u are uniformly distributed in [-2, 2]. The objective of Example 1 is to control the system to track the reference output, given as 250 sample data points. The model has two inputs, y(k) and u(k), and a single output, y(k+1), and the system identification was initially performed with the system input being uniformly distributed over [-2, 2]. The FCPN controller uses the two inputs y(k) and u(k) to produce the output \hat{y}(k+1). The FCPN model governed by the difference equation \hat{y}(k+1) = 0.3 y(k) + 0.6 y(k-1) + N[u(k)] was used. Figures 4(a) and 4(b) show the error e(k+1) = y(k+1) - \hat{y}(k+1) and the outputs of the dynamical system and the FCPN.
Table 1: Calculated errors for Example 1.

NN model   MAE       SSE       MSE           RMSE     NMSE
FCPN       1.3364    0.1143    4.5709e-004   0.0214   7.1627e-004
Dynamic    5.8231    3.1918    0.0128        0.1130   0.0028
BPN        38.0666   48.1017   0.4810        0.6936   0.1035
As can be seen from the figure, the identification error is small even when the input is changed to the sum of two sinusoids, $u(k) = \sin(2\pi k/250) + \sin(2\pi k/25)$, at $k = 250$.

Table 1 lists the various calculated errors of the different NN models for Example 1. It can be seen from the table that the errors calculated for FCPN are minimal compared with those of the DN and BPN.
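As a sanity check of the plant model, the difference equation (43) can be iterated open loop. The sketch below (plain Python, hypothetical names; a caller-supplied input sequence stands in for the uniformly distributed $u$, and zero initial conditions are an assumption):

```python
def f(u):
    # Nonlinear term of Example 1: f(u) = u^3 + 0.3 u^2 - 0.4 u
    return u ** 3 + 0.3 * u ** 2 - 0.4 * u

def simulate_example1(u_seq, y0=0.0, y1=0.0):
    # Iterate y(k+1) = 0.3 y(k) + 0.6 y(k-1) + f(u(k)), equation (43)
    y = [y0, y1]
    for u in u_seq:
        y.append(0.3 * y[-1] + 0.6 * y[-2] + f(u))
    return y
```

Feeding in the sum-of-sinusoids input mentioned above reproduces the reference trajectory that the FCPN is trained to track.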
Example 2. The nonlinear dynamical system [24] is governed by the following difference equation:

$y(k+1) = f[y_p(k), y_p(k-1)] + u(k)$  (44)

where

$f[y_p(k), y_p(k-1)] = \frac{y_p(k)\, y_p(k-1)\, [y_p(k) + 2.5]}{1 + y_p^2(k) + y_p^2(k-1)}$  (45)
The objective of (44) is to control the system to track a reference output given 100 sample data points. The model has two inputs, $y(k)$ and $u(k)$, and a single output $y(k+1)$, and the system identification was initially performed with the system input uniformly distributed over $[-2, 2]$. The training and testing samples each contain 100 sample data points. The FCPN controller uses the two inputs $y(k)$ and $u(k)$ to produce the output $\hat{y}(k+1)$. The FCPN
Figure 5: (a) Mean Square Error of the system (Example 2) using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 2.
Table 2: Calculated errors for Example 2.

NN model   MAE       SSE      MSE           RMSE     NMSE
FCPN       0.1956    0.0032   3.2328e-005   0.0057   2.7164e-005
Dynamic    9.5152    2.9856   0.0299        0.1728   0.0308
BP         10.1494   4.5163   0.0452        0.2125   0.0323
network discussed earlier is used to identify the system from input-output data and is described by the equation

$\hat{y}(k+1) = N[y(k), y(k-1)] + u(k)$  (46)

For FCPN the identification process involves the adjustment of the weights of $N$ using FCL. The FCPN identifier needs some prior information concerning the input-output behavior of the system before identification can be undertaken. FCL is used to adjust the weights of the neural network so that the error $e(k+1) = y(k+1) - \hat{y}(k+1)$ is minimized, as shown in Figure 5(a). The behavior of the FCPN model for (46) is shown in Figure 5(b). The input $u(k)$ was assumed to be a random signal uniformly distributed in the interval $[-2, 2]$, and the weights in the neural network were adjusted accordingly.
Table 2 lists the various calculated errors of the different NN models for Example 2. It can be seen from the table that the errors calculated for FCPN are minimal compared with those of the DN and BPN.
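The plant of Example 2 can likewise be simulated by iterating (44)-(45). A minimal sketch (plain Python, hypothetical names; zero initial conditions are an assumption):

```python
def f2(y_k, y_km1):
    # Equation (45): f = y(k) y(k-1) [y(k) + 2.5] / (1 + y(k)^2 + y(k-1)^2)
    return y_k * y_km1 * (y_k + 2.5) / (1.0 + y_k ** 2 + y_km1 ** 2)

def simulate_example2(u_seq, y0=0.0, y1=0.0):
    # Iterate y(k+1) = f[y(k), y(k-1)] + u(k), equation (44)
    y = [y0, y1]
    for u in u_seq:
        y.append(f2(y[-1], y[-2]) + u)
    return y
```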
Example 3. The nonlinear dynamical system [23] is governed by the following difference equation:

$y(k+1) = \frac{y(k)}{1 + y(k)^2} + u^3(k)$  (47)

which corresponds to $f[y(k)] = y(k)/(1 + y(k)^2)$ and $g[u(k)] = u^3(k)$. The FCPN network of equation (47) is used to identify the system from input-output data. The weights in the
Table 3: Calculated errors for Example 3.

NN model   MAE       SSE       MSE           RMSE     NMSE
FCPN       0.3519    0.0032    3.1709e-005   0.0056   5.6371e-006
Dynamic    5.5652    3.6158    0.0362        0.1902   0.0223
BP         42.6524   46.9937   0.4699        0.6855   0.2892
neural network were adjusted at every instant of the discrete time steps.

The objective of (47) is to control the system to track a reference output given 100 sample data points. The model has two inputs, $y(k)$ and $u(k)$, and a single output $y(k+1)$, and the system identification was initially performed with the system input uniformly distributed over $[-2, 2]$. The training and testing samples contain 250 sample data points. The FCPN controller uses the two inputs $y(k)$ and $u(k)$ to produce the output $\hat{y}(k+1)$. The input data are generated using a uniform distribution on the interval $[-2, 2]$. The function $f$ obtained by FCPN is used to adjust the weights of the neural network so that the error $e(k+1) = y(k+1) - \hat{y}(k+1)$ is minimized, as shown in Figure 6(a). In Figure 6(b) the desired outputs of the system as well as the FCPN model are shown and are seen to be indistinguishable.
Table 3 lists the various calculated errors of the different NN models for Example 3. It can be seen from the table that the errors calculated for FCPN are minimal compared with those of the DN and BPN.
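The plant of Example 3, equation (47), can be simulated the same way. Again a minimal sketch with hypothetical names and a zero initial condition assumed:

```python
def f3(y):
    # f[y(k)] = y(k) / (1 + y(k)^2)
    return y / (1.0 + y ** 2)

def g3(u):
    # g[u(k)] = u(k)^3
    return u ** 3

def simulate_example3(u_seq, y0=0.0):
    # Iterate y(k+1) = f[y(k)] + g[u(k)], equation (47)
    y = [y0]
    for u in u_seq:
        y.append(f3(y[-1]) + g3(u))
    return y
```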
Example 4. The nonlinear dynamical system [24] is governed by difference equation (48); the generalized form of this equation is Model IV. In this example the system is assumed to be of the form

$y_p(k+1) = f[y_p(k), y_p(k-1), y_p(k-2), u(k), u(k-1)]$  (48)
Figure 6: (a) Mean Square Error of the system (Example 3) using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 3.
Figure 7: (a) Mean Square Error of the system (Example 4) using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 4.
where the unknown function $f$ has the form

$f[x_1, x_2, x_3, x_4, x_5] = \frac{x_1 x_2 x_3 x_5 (x_3 - 1) + x_4}{1 + x_3^2 + x_2^2}$  (49)
In the identification model, which is approximated by FCPN, Figure 7(b) shows the output of the system and of the model when the identification procedure was carried out with a random input signal uniformly distributed in the interval $[-1, 1]$. The performance of the model is studied, and the error $e(k+1) = y_p(k+1) - \hat{y}_p(k+1)$ is minimized, as shown in Figure 7(a). In Figure 7(b) the outputs of the system as well as the output of the FCPN model are shown. The input to the system and the identified model is given by $u(k) = \sin(2\pi k/250)$ for $k \le 500$ and $u(k) = 0.8 \sin(2\pi k/250) + 0.2 \sin(2\pi k/25)$ for $k > 500$.
Table 4 lists the various calculated errors of the different NN models for Example 4. It can be seen from the table that the errors calculated for FCPN are minimal compared with those of the DN and BPN.
Table 4: Calculated errors for Example 4.

NN model   MAE       SSE       MSE      RMSE     NMSE
FCPN       30.4348   2.1931    0.0027   0.0524   0.0186
Dynamic    32.8952   6.4059    0.0080   0.0895   0.0276
BP         53.1988   13.4174   0.0168   0.1295   0.0844
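Model IV of Example 4 can be simulated by keeping short histories of both the output and the input. A minimal sketch (plain Python, hypothetical names; zero initial conditions and $u(-1) = 0$ are assumptions):

```python
def f4(x1, x2, x3, x4, x5):
    # Equation (49): f = [x1 x2 x3 x5 (x3 - 1) + x4] / (1 + x3^2 + x2^2)
    return (x1 * x2 * x3 * x5 * (x3 - 1.0) + x4) / (1.0 + x3 ** 2 + x2 ** 2)

def simulate_example4(u_seq, y_init=(0.0, 0.0, 0.0)):
    # Iterate y(k+1) = f[y(k), y(k-1), y(k-2), u(k), u(k-1)], equation (48)
    y = list(y_init)          # stores y(k-2), y(k-1), y(k)
    u_prev = 0.0              # u(k-1), assumed zero before the first step
    for u in u_seq:
        y.append(f4(y[-1], y[-2], y[-3], u, u_prev))
        u_prev = u
    return y
```

Driving this with the piecewise-sinusoidal $u(k)$ quoted above reproduces the trajectory shown in Figure 7(b).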
Example 5. In this example we use the Box-Jenkins time series [32] of 296 pairs of data measured from a gas furnace system, with a single input $u(t)$, the gas flow rate, and a single output $y(t)$, the CO$_2$ concentration in the outlet gas. The training and testing samples contain 196 and 100 data points, respectively. The FCPN uses two inputs, the current state $y(k)$ and the desired state $y_d(k)$, to produce the output $\hat{y}(k)$ [33].
Figure 8(a) shows the mean squared control error of the FCPN method, and Figure 8(b) shows the performance of the controller with the FCPN algorithm. The results show that the FCPN algorithm enables an appropriate approximation using
Figure 8: (a) Mean Square Error of the system (Example 5) using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 5.
Table 5: BFR (%) values of the NN models for Examples 1–4.

NN model   Example 1   Example 2   Example 3   Example 4
FCPN       97.32       99.48       99.76       86.35
DN         94.69       79.32       85.08       83.39
BPN        67.83       79.28       46.2234     70.94
the fuzzy learning rate given in equation (11) of the Box-Jenkins time series data, based on calculation of the BMN.
Table 5 shows the BFR (%) of the various NN models for Examples 1–4. It can be observed from Table 5 that the Best Fit Rate found for FCPN is the maximum compared with DN and BPN, which shows the better performance of the FCPN network for nonlinear systems.
6 Conclusions
FCPN, a neural network control method, was developed and presented. It is based on the concept of combining the FCL algorithm with CPN; the key idea explored is the use of the FCL algorithm for training the instar-outstar weights. The performance of FCPN controller based training was tested on four nonlinear dynamical systems and on time series (Box-Jenkins) data and compared with the Dynamic Network and the standard Back Propagation algorithm. The comparative performances of the FCPN algorithm and the Dynamic Network, such as the number of iterations and performance measures like MAE, MSE, SSE, NMSE, BFR, and so forth, are summarized in Tables 1–4. It can be seen that the FCPN algorithm gives minimum errors compared with the Dynamic Network and the standard Back Propagation Network for all four models of nonlinear dynamical systems and the Box-Jenkins time series data. The results obtained from FCPN were compared over various error measures, and they clearly show that FCPN works much better for the control of nonlinear dynamical systems.
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgment
The authors gratefully acknowledge the financial assistance provided by the All India Council for Technical Education (AICTE) in the form of a Research Promotion Scheme (RPS) project in 2012.
References
[1] T. Soderstrom and P. Stoica, System Identification, Prentice Hall, New York, NY, USA, 1989.
[2] S. A. Billings, Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains, Wiley, Chichester, UK, 2013.
[3] M. Liu, "Decentralized control of robot manipulators: nonlinear and adaptive approaches," IEEE Transactions on Automatic Control, vol. 44, no. 2, pp. 357–363, 1999.
[4] C.-M. Lin, A.-B. Ting, M.-C. Li, and T.-Y. Chen, "Neural-network-based robust adaptive control for a class of nonlinear systems," Neural Computing and Applications, vol. 20, no. 4, pp. 557–563, 2011.
[5] I. Rivals and L. Personnaz, "Nonlinear internal model control using neural networks: application to processes with delay and design issues," IEEE Transactions on Neural Networks, vol. 11, no. 1, pp. 80–90, 2000.
[6] I. Kanellakopoulos, P. V. Kokotovic, and A. S. Morse, "Systematic design of adaptive controllers for feedback linearizable systems," IEEE Transactions on Automatic Control, vol. 36, no. 11, pp. 1241–1253, 1991.
[7] P. Kokotovic, "The joy of feedback: nonlinear and adaptive," IEEE Control Systems, vol. 12, no. 3, pp. 7–17, 1992.
[8] H. Elmali and N. Olgac, "Robust output tracking control of nonlinear MIMO systems via sliding mode technique," Automatica, vol. 28, no. 1, pp. 145–151, 1992.
[9] N. Sadati and R. Ghadami, "Adaptive multi-model sliding mode control of robotic manipulators using soft computing," Neurocomputing, vol. 71, no. 13–15, pp. 2702–2710, 2008.
[10] A. Kroll and H. Schulte, "Benchmark problems for nonlinear system identification and control using soft computing methods: need and overview," Applied Soft Computing Journal, vol. 25, pp. 496–513, 2014.
[11] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, no. 5, pp. 359–366, 1989.
[12] A. Bortoletti, C. Di Fiore, S. Fanelli, and P. Zellini, "A new class of quasi-Newtonian methods for optimal learning in MLP-networks," IEEE Transactions on Neural Networks, vol. 14, no. 2, pp. 263–273, 2003.
[13] G. Lera and M. Pinzolas, "Neighborhood based Levenberg-Marquardt algorithm for neural network training," IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1200–1203, 2002.
[14] A. K. Shah and D. M. Adhyaru, "Clustering based multiple model control of hybrid dynamical systems using HJB solution," Applied Soft Computing, vol. 31, pp. 103–117, 2015.
[15] Y. Cui, H. Zhang, Y. Wang, and Z. Zhang, "Adaptive neural dynamic surface control for a class of uncertain nonlinear systems with disturbances," Neurocomputing, vol. 165, pp. 152–158, 2015.
[16] R. Rakkiyappan, N. Sakthivel, and J. Cao, "Stochastic sampled-data control for synchronization of complex dynamical networks with control packet loss and additive time-varying delays," Neural Networks, vol. 66, pp. 46–63, 2015.
[17] B. Kiumarsi, F. L. Lewis, and D. S. Levine, "Optimal control of nonlinear discrete time-varying systems using a new neural network approximation structure," Neurocomputing, vol. 156, pp. 157–165, 2015.
[18] B. M. Al-Hadithi, A. Jimenez, and R. G. Lopez, "Fuzzy optimal control using generalized Takagi-Sugeno model for multivariable nonlinear systems," Applied Soft Computing Journal, vol. 30, pp. 205–213, 2015.
[19] M. A. Gonzalez-Olvera and Y. Tang, "Identification of nonlinear discrete systems by a state-space recurrent neurofuzzy network with a convergent algorithm," Neurocomputing, vol. 148, pp. 318–325, 2015.
[20] J. F. de Canete, P. Del Saz-Orozco, I. Garcia-Moral, and S. Gonzalez-Perez, "Indirect adaptive structure for multivariable neural identification and control of a pilot distillation plant," Applied Soft Computing, vol. 12, no. 9, pp. 2728–2739, 2012.
[21] M. Cai and Z. Xiang, "Adaptive neural finite-time control for a class of switched nonlinear systems," Neurocomputing, vol. 155, pp. 177–185, 2015.
[22] Y. Li, S. Tong, and T. Li, "Adaptive fuzzy backstepping control design for a class of pure-feedback switched nonlinear systems," Nonlinear Analysis: Hybrid Systems, vol. 16, pp. 72–80, 2015.
[23] T.-Y. Kuc, J. S. Lee, and K. Nam, "An iterative learning control theory for a class of nonlinear dynamic systems," Automatica, vol. 28, no. 6, pp. 1215–1221, 1992.
[24] K. S. Narendra and K. Parthasarathy, "Neural networks and dynamical systems," International Journal of Approximate Reasoning, vol. 6, no. 2, pp. 109–131, 1992.
[25] R. Hecht-Nielsen, "Theory of the back propagation neural network," Neural Networks, vol. 1, pp. 593–605, 1989.
[26] M. T. Hagan, H. B. Demuth, M. H. Beale, and O. De Jesus, Neural Network Design, Cengage Learning, 2nd edition, 2014.
[27] F.-J. Chang and Y.-C. Chen, "A counterpropagation fuzzy-neural network modeling approach to real time streamflow prediction," Journal of Hydrology, vol. 245, no. 1–4, pp. 153–164, 2001.
[28] A. Dwivedi, N. S. C. Bose, A. Kumar, P. Kandula, D. Mishra, and P. K. Kalra, "A novel hybrid image compression technique: wavelet-MFOCPN," in Proceedings of the 9th Asian Symposium on Information Display (ASID '06), pp. 492–495, 2006.
[29] C. J. C. Burges, P. Simard, and H. S. Malvar, Improving Wavelet Image Compression with Neural Networks, Microsoft Research, Redmond, Wash, USA, 2001.
[30] D. Woods, "Back and counter propagation aberrations," in Proceedings of the IEEE International Conference on Neural Networks, pp. 473–479, 1988.
[31] D. Mishra, N. S. C. Bose, A. Tolambiya et al., "Color image compression with modified forward-only counterpropagation neural network: improvement of the quality using different distance measures," in Proceedings of the 9th International Conference on Information Technology (ICIT '06), pp. 139–140, IEEE, Bhubaneswar, India, December 2006.
[32] G. E. Box and G. M. Jenkins, Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, Calif, USA, 1970.
[33] J. Kim and N. Kasabov, "HyFIS: adaptive neuro-fuzzy inference systems and their application to nonlinear dynamical systems," Neural Networks, vol. 12, no. 9, pp. 1301–1319, 1999.
enhance the performance of nonlinear multivariable systems, a fuzzy optimal controller based on the Takagi-Sugeno model was developed, which was used to minimize a quadratic performance index [18]. An ultimately stable training algorithm, inspired by an adaptive observer, was developed for black-box identification of nonlinear discrete systems using a state-space recurrent neural network; the algorithm uses only input and output measurements in discrete time [19]. An indirect adaptive controller that uses a neural network was presented for the identification and control of an experimental pilot distillation column; this system is multivariable with unknown dynamics, and the neural network was trained using the Levenberg-Marquardt algorithm [20]. Adaptive finite-time stabilization of a class of switched nonlinear systems with unknown nonlinear terms was investigated using a neural network; the finite law and adaptive law were constructed using a radial basis function neural network (RBFNN) to approximate the unknown packaged function, and stability analysis of the RBFNN was performed to evaluate the states of the closed-loop systems [21]. An adaptive fuzzy output tracking control approach was proposed in [22] for an uncertain nonlinear single input single output (SISO) system under arbitrary switching. An iterative learning control scheme was proposed for a class of nonlinear dynamic systems, which includes holonomic systems as a subset, with linear feedback mechanisms and feedforward learning strategies [23].
In this proposed work we use an instar-outstar structure based CPN with Fuzzy Competitive Learning (FCL). We design a Fuzzy Counter Propagation Network (FCPN) to control a class of nonlinear dynamical systems. In the FCPN, the CPN model is trained by FCL algorithms. FCL learning is used for adjusting the weights and updating the Best Matched Node (BMN) in discrete time nonlinear dynamical systems.
The main contributions of this research are as follows:

(1) This paper contributes an approximation for a class of nonlinear dynamical systems using the Fuzzy Competitive Learning Based Counter Propagation Network (FCPN).

(2) FCPN is employed to optimize the Mean Absolute Error, Mean Square Error, Best Fit Rate, and so forth.

(3) The performance criteria of the proposed FCPN for nonlinear dynamical systems are effectively improved by compensating the reference signal and controller signal.

The paper is organized as follows. In Section 2, the problem formulation for a class of nonlinear dynamical systems is given. Section 3 contains a description of the Feedforward Neural Network (FNN) with the Back Propagation Network, the Dynamic Network, and the Fuzzy Learning Based Counter Propagation Network. Dynamical learning for CPN and optimal learning of dynamical systems are presented in Section 4. The simulation results and a comparison of different nonlinear dynamical models are presented in Section 5. Section 6 gives the conclusion of the study.
2 Problem Formulation
Let us consider the four models of discrete time nonlinear dynamical systems [24], for single input single output (SISO) and multiple input and single output (MISO) systems, considered in this paper; they are described by difference equations (1)–(4) and the Box-Jenkins time series data [10]. Let $f: R^n \to R$ and $g: R^m \to R$ be the nonlinear continuous differentiable functions of Model I–Model IV, approximated by FCPN to the desired degree of accuracy. Once the system has been parameterized, the performance evaluation is carried out by FCPN for (1) to (4).

Model I:

$y(k+1) = \sum_{i=0}^{n-1} \alpha_i\, y(k-i) + g[u(k), \ldots, u(k-m+1)]$  (1)

Model II:

$y(k+1) = f[y(k), \ldots, y(k-n+1)] + \sum_{i=0}^{m-1} \beta_i\, u(k-i)$  (2)

Model III:

$y(k+1) = f[y(k), \ldots, y(k-n+1)] + g[u(k), \ldots, u(k-m+1)]$  (3)

Model IV:

$y(k+1) = f[y(k), \ldots, y(k-n+1), u(k), \ldots, u(k-m+1)]$  (4)

where $[u(k), y(k)]$ represents the input-output pair of the system at time $k$ and $(n, m)$ is its order. FCL is used to learn the system defined by (1) to (4), and the performance of FCPN is measured by the error function given in

$E(k) = \frac{1}{2}\,[y(k) - \hat{y}(k)]^2$  (5)

where $u(k)$ is the input, $y(k)$ is the system output, and $\hat{y}(k)$ is the neural network output. For the neural controller, (5) can be written as

$E_c = \frac{1}{P} \sum_{p=1}^{P} [y(k) - \hat{y}^c(k)]^2$  (6)

where $\hat{y}^c(k)$ is the neural controller output, (6) is known as the minimized error function, and $P$ is the total number of input patterns.
3 Back Propagation Network (BPN)
A neural network is one of the most popular intelligent systems, having the capability to approximate calculated and target values to an arbitrary degree of accuracy. BPN is one
Figure 1: General architecture of BPN.
of the most commonly used networks for this purpose. The BPN consists of an input layer, a hidden layer, and an output layer and possesses weighted interconnections. The BPN is based on supervised learning, in which the error is propagated back to the hidden layer. The aim of this neural network is to train the net to achieve a balance between the net's ability to respond (memorization) and its ability to give a reasonable response to input that is similar, but not identical, to that used in training (generalization). For a given set of training input-output pairs, this network provides a procedure for changing the weights to classify the given input patterns correctly. The weight update algorithm is based on the gradient descent method [25]. The network architecture is shown in Figure 1.
3.1 Dynamic Network. Neural networks can be classified into two categories: (i) Static and (ii) Dynamic. In a Static Neural Network the output can be calculated directly from the input through a feedforward network, whereas in a Dynamic Network the output depends on the current or previous inputs, outputs, or states of the network. When the current output is a function of the current input and previous outputs, the network is known as a recurrent (feedback) network. The training processes of Static and Dynamic Networks differ in how the gradient or Jacobian is computed. A Dynamic Network contains delays and operates on a sequence of inputs; it can have purely feedforward connections or some recurrent connections, and it can be trained using Dynamic Back Propagation [26].

The general equation of the net input $n^m(t)$ for the $m$th layer is

$n^m(t) = \sum_{l \in Lf_m} \sum_{d \in DL_{m,l}} LW^{m,l}(d)\, a^l(t-d) + \sum_{l \in I_m} \sum_{d \in DI_{m,l}} IW^{m,l}(d)\, p^l(t-d) + b^m$  (7)

where $p^l(t)$ is the $l$th input vector at time $t$, $IW^{m,l}$ is the input weight between input $l$ and layer $m$, $LW^{m,l}$ is the layer weight between layer $l$ and layer $m$, and $b^m$ is the bias vector for layer $m$. $DL_{m,l}$ is the set of all delays in the Tapped Delay Line (TDL) between layer $l$ and layer $m$, and $DI_{m,l}$ is the set of all delays in the TDL between input $l$ and layer $m$. $I_m$ is the set of indices of input vectors that connect to layer $m$, and $Lf_m$ is the set of indices of layers that connect to layer $m$. The output of the $m$th layer at time $t$ is computed as

$a^m(t) = f^m(n^m(t))$  (8)

A network with a TDL on the input with $DI_{1,1} = \{0, 1, 2\}$ has output

$a(t) = f(n(t)) = f\left(\sum_{d} iw_{1,1}(d)\, p(t-d)\right) = f[iw_{1,1}(0)\, p(t) + iw_{1,1}(1)\, p(t-1) + iw_{1,1}(2)\, p(t-2)]$  (9)

One simple architecture with delays $DI_{1,1} = \{0, 1, 2\}$ for a feedforward Dynamic Network, known as the Dynamic Adaptive Linear Neuron, is shown in Figure 2.
3.2 Counter Propagation Network (CPN). CPN is a multilayer feedforward network based on the combination of input, competitive, and output layers. The CPN model is instar-outstar: a three-layer neural network that performs input-output data mapping, producing an output in response to an input vector on the basis of Fuzzy Competitive Learning. In CPN the connection between the input layer and the competitive layer is an instar structure, and the connection between the competitive layer and the output layer is an outstar structure. The Counter Propagation Network involves a two-stage training process. In the first stage the input vector is clustered on the basis of Euclidean
Figure 2: Simple architecture of the Dynamic Adaptive Linear Neuron.
distance, and the performance of the network is improved using a linear topology. Using the Euclidean distance between the input and weight vectors, we evaluate the BMN. In Phase II the desired response is obtained by adapting the weights from the competitive layer to the output layer [27]. Let $x = [x_1\ x_2 \cdots x_n]$ and $y = [y_1^*\ y_2^* \cdots y_m^*]$ be the input and desired vectors, respectively; let $v_{ij}$ be the weight between the input layer and the competitive layer, where $1 \le i \le n$ and $1 \le j \le p$; and let $w_{jk}$ be the weight between the competitive layer and the output layer, where $1 \le k \le m$. The Euclidean distance between the input vector $x$ and the weight vector $v_j$ is

$D_j = \sum_{i=1}^{n} (x_i - v_{ij})^2, \quad j = 1, 2, \ldots, p$  (10)

The architecture of CPN is shown in Figure 3.
3.3 Fuzzy Competitive Learning (FCL). FCL is used for adjusting the instar-outstar weights and updating the BMN for discrete time nonlinear dynamical systems. Learning of CPN is divided into two phases: the Fuzzy Competitive Learning phase (unsupervised) and the Grossberg Learning phase (supervised) [28–31]. CPN is an efficient network for mapping an input vector of size $n$ into clusters of size $m$. As in a traditional self-organizing map, the weight vector of every node is considered a sample vector for the input pattern. The process is to determine which weight vector (say $J$) is most similar to the input pattern, chosen as the BMN. The first phase is based on the closeness between the weight vector and the input vector, and the second phase is the weight update between the competitive and output layers for the desired response.

Let $v_{ij}$ be the weight from input node $i$ to neuron $j$, and let $w_{jk}$ be the weight between competitive node $j$ and output neuron $k$. We propose a new learning rate calculation method for use in the weight update:

$\propto(J) = \frac{\sum_{i=1}^{n} (v_{iJ} - x_i)^2}{\sum_{j=1}^{m} \sum_{i=1}^{n} (v_{ij} - x_i)^2}$  (11)

where $J$ denotes the index of the BMN and $\propto$ is known as the Fuzzy Competitive Learning rate.
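The BMN search of (10) and the learning rate of (11) might be sketched as follows (plain Python, hypothetical names; `v` holds one weight vector per competitive node, so the rate is the winner's distance normalized by the total distance over all nodes):

```python
def distances(x, v):
    # Equation (10): D_j = sum_i (x_i - v_ij)^2 for each competitive node j
    return [sum((xi - vj[i]) ** 2 for i, xi in enumerate(x)) for vj in v]

def fcl_rate(x, v):
    # Equation (11): rate = D_J / sum_j D_j, with J the Best Matched Node
    d = distances(x, v)
    J = d.index(min(d))
    return J, d[J] / sum(d)
```

The rate therefore shrinks toward zero as the winning node approaches the input pattern, which is what lets the update of (14) settle.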
Fuzzy Competitive Learning phase is described as fol-lows
3.3.1 Training Algorithm of FCL

Phase I (for determination of the BMN). The steps for training CPN are as follows.

Step 1. Initialize the instar-outstar weights.

Step 2. Perform Steps 3–8 until the stopping criteria for Phase I training fail. The stopping condition may be a fixed number of epochs or the learning rate having reduced to a negligible value.

Step 3. Perform Steps 4–6 for every input training vector $X$.

Step 4. Set the $X$-input layer activation to vector $X$.

Step 5. Compute the Best Matched Node (BMN) $J$: using the Euclidean distance, find the node $Z_J$ whose distance from the input pattern is minimum. The Euclidean distance is calculated as

$D_j = \sum_{i=1}^{n} (x_i - v_{ij})^2, \quad j = 1, 2, \ldots, p$  (12)

The minimum net input is

$z_{in\,J} = \sum_{i=1}^{n} x_i v_{iJ}$  (13)

Step 6. Calculate the Fuzzy Competitive Learning rate (11) for the weight update using the BMN.

Step 7. Update the weights for unit $Z_J$:

$v_{iJ}(\text{new}) = v_{iJ}(\text{old}) + \propto h(J, i, j)(x_i - v_{iJ})$  (14)

where $i = 1, 2, 3, \ldots, n$ and $h(\cdot, \cdot, \cdot)$ is the neighborhood function around the BMN. Consider

$h(J, i, j) = \exp\left(-\frac{\|v_{ij} - v_{iJ}\|^2}{2\sigma^2(t)}\right), \quad h(J, i, j) \in [0, 1]$  (15)

$\sigma(t) = \sigma_0 \exp\left(-\frac{t}{T}\right)$  (16)

Step 8. Test the stopping criteria.
Phase II (to obtain the desired response)

Step 1. Set the activations of the input and output layers to $(x, y)$, respectively.

Step 2. Update the BMN $J$ (Step 5 from Phase I). Also update the weights into unit $Z_J$ as

$v_{iJ}(\text{new}) = v_{iJ}(\text{old}) + \alpha h(J, i, j)[x_i - v_{iJ}(\text{old})]$  (17)
Figure 3: Architecture of the Counter Propagation Network.
Step 3 Update the weights from node 119885119869to the output unit
as
119908119869119896(new) = 119908
119869119896(old) + 120573ℎ (119869 119895 119896) [119910
119895minus 119908119869119896(old)]
for 1 le 119896 le 119898(18)
Step 4 Learning rate 120573 is calculated as
120573 (119869) =sum119898
119896=1(119910119896minus 119908119869119896)2
sum119901
119895=1sum119898
119896=1(119910119896minus 119908119895119896)2 (19)
Step 5. Decrease the learning rate s.t. $\beta = \beta - \varepsilon$, where $\varepsilon$ is a small positive number.

Step 6. Test the stopping condition for Phase II (i.e., a fixed number of epochs is reached or the learning rate has reduced to a negligible value).
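Steps 3 and 4 of Phase II can be sketched as follows. This is a hedged sketch: the function name is illustrative, and the neighborhood factor $h(J, j, k)$ of (18) is taken as 1 at the BMN, with only the BMN row updated.

```python
import numpy as np

def outstar_update(y, W, J):
    """Phase II (supervised outstar) update for the BMN row.

    y : desired output vector, shape (m,)
    W : outstar weight matrix, shape (p, m); row j holds w_{j,:}
    J : index of the Best Matched Node
    """
    # Eq. (19): adaptive supervised learning rate beta(J) in [0, 1]
    beta = ((y - W[J]) ** 2).sum() / ((y[None, :] - W) ** 2).sum()
    W_new = W.copy()
    # Eq. (18) with h(J, j, k) = 1 at the BMN: move w_J toward y
    W_new[J] = W[J] + beta * (y - W[J])
    return W_new, beta
```

Because the numerator of (19) is one term of its denominator, the computed $\beta$ always lies in $[0, 1]$, so the update never overshoots the target.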
3.3.2. Testing Phase (to Test FCPN)

Step 1. Set the initial weights, that is, the weights obtained during training.

Step 2. Apply FCPN to the input vector $X$.

Step 3. Find the unit $J$ that is closest to vector $X$.

Step 4. Set the activations of the output units.

Step 5. Apply the activation function at $y_k$, where $y_k = \sum_j z_j w_{jk}$.
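The recall pass of the testing phase reduces to reading out the BMN's outstar weights, since the winner's activation is 1 and all other $z_j$ are 0. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def fcpn_recall(x, V, W):
    """Testing-phase recall (Steps 2-5): winner-take-all, then y_k = sum_j z_j w_jk."""
    # Step 3: unit J closest to x (minimum Euclidean distance, as in (12))
    J = int(np.argmin(((x[:, None] - V) ** 2).sum(axis=0)))
    # Step 4: activations z_J = 1, all other z_j = 0
    z = np.zeros(V.shape[1])
    z[J] = 1.0
    # Step 5: y_k = sum_j z_j w_jk, which is simply row J of W
    return z @ W
```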
4. Dynamical Learning for CPN

Learning stability is a fundamental issue for CPN; however, there are few studies on the learning issue of FNN. BPN learning is not always successful because of its sensitivity to the learning parameters, and the optimal learning rate changes during the training process. Dynamical learning of CPN is carried out using Lemmas 1, 2, and 3.
Assumption. Let us assume $\phi(x)$ is a sigmoid function, that is, a bounded, continuous, and increasing function. Since the input to the neural network in this model is bounded, we consider Lemmas 1 and 2 given below.
Lemma 1. Let $\phi(x)$ be a sigmoid function, let $\Omega$ be a compact set in $\mathbb{R}^n$, and let $f: \mathbb{R}^n \to \mathbb{R}$ be a continuous function on $\Omega$. Then for arbitrary $\epsilon > 0$ there exist an integer $N$ and real constants $c_i$, $\theta_i$, and $w_{ij}$, $i = 1, 2, \ldots, N$, $j = 1, 2, \ldots, n$, such that
$$\hat{f}(x_1, x_2, \ldots, x_n) = \sum_{i=1}^{N} c_i\, \phi\!\left(\sum_{j=1}^{n} w_{ij} x_j - \theta_i\right) \qquad (20)$$
satisfies
$$\max_{x \in \Omega} \left\| f(x_1, x_2, \ldots, x_n) - \hat{f}(x_1, x_2, \ldots, x_n) \right\| < \epsilon. \qquad (21)$$

Using Lemma 1, dynamical learning for a three-layer CPN can be formulated, where the hidden layer transfer functions are $\phi(x)$ and the transfer function at the output layer is linear.
Let all the vectors be column vectors, and let the subscript in $d_{kp}$ refer to a specific component of the output vector.
Let $X = [x_1, x_2, \ldots, x_P] \in \mathbb{R}^{L \times P}$, $Y = [y_1, y_2, \ldots, y_P] \in \mathbb{R}^{H \times P}$, $O = [o_1, o_2, \ldots, o_P] \in \mathbb{R}^{K \times P}$, and $D = [d_1, d_2, \ldots, d_P] \in \mathbb{R}^{K \times P}$ be the input, hidden, output, and desired vectors, respectively, where $L$, $H$, and $K$ denote the number of input, hidden, and output layer neurons. Let $V$ and $W$ represent the input-hidden and hidden-output layer weight matrices, respectively. The objective of network training is to minimize the error function $J$, where
$$J = \frac{1}{2PK} \sum_{p=1}^{P} \sum_{k=1}^{K} (o_{kp} - d_{kp})^2. \qquad (22)$$
4.1. Optimal Learning of Dynamical System. Consider a three-layer CPN; the network error matrix is defined by the difference between the desired and the FCPN output at any iteration and is given as
$$E_t = O_t - D = W_t Y_t - D = W_t V_t X - D. \qquad (23)$$
The objective of the network, minimization of the error, is defined as
$$J = \frac{1}{2PK} \operatorname{Tr}\!\left(E_t E_t^T\right), \qquad (24)$$
where $\operatorname{Tr}$ represents the trace of a matrix. Using the gradient descent method, the updated weights are given by
$$W_{t+1} = W_t - \beta_t \frac{\partial J}{\partial W_t}, \qquad V_{t+1} = V_t - \beta_t \frac{\partial J}{\partial V_t}, \qquad (25)$$
or
$$W_{t+1} = W_t - \frac{\beta_t}{PK} E_t Y_t^T, \qquad V_{t+1} = V_t - \frac{\beta_t}{PK} W_t^T E_t X^T. \qquad (26)$$
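One step of the updates (23)-(26) for the linear readout considered here ($Y = VX$, $O = WY$) can be sketched as follows; this is a minimal sketch, and the function name is illustrative.

```python
import numpy as np

def gd_step(W, V, X, D, beta):
    """One gradient-descent step of (25)-(26) for the three-layer linear model.

    X : inputs,  shape (L, P);  V : (H, L);  W : (K, H);  D : targets, (K, P)
    """
    K, P = D.shape
    Y = V @ X
    E = W @ Y - D                                   # eq. (23)
    W_new = W - (beta / (P * K)) * (E @ Y.T)        # eq. (26), hidden-output weights
    V_new = V - (beta / (P * K)) * (W.T @ E @ X.T)  # eq. (26), input-hidden weights
    J = np.trace(E @ E.T) / (2 * P * K)             # eq. (24), cost before the step
    return W_new, V_new, J
```

For a sufficiently small $\beta$ the cost $J$ decreases from one step to the next, consistent with the stability condition $\Delta V_t < 0$ analyzed in this section.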
Using (23)-(26), we have
$$E_{t+1} E_{t+1}^T = \left(W_{t+1} V_{t+1} X - D\right)\left(W_{t+1} V_{t+1} X - D\right)^T. \qquad (27)$$
To obtain the minimum error, substitute the updates (26) into (27); since $Y_{t+1} = V_{t+1} X$,
$$E_{t+1} E_{t+1}^T = \left[E_t - \frac{\beta_t}{PK}\left(W_t W_t^T E_t X^T X + E_t Y_t^T V_t X\right) + \frac{\beta_t^2}{(PK)^2}\, E_t Y_t^T W_t^T E_t X^T X\right] \times \left[E_t - \frac{\beta_t}{PK}\left(W_t W_t^T E_t X^T X + E_t Y_t^T V_t X\right) + \frac{\beta_t^2}{(PK)^2}\, E_t Y_t^T W_t^T E_t X^T X\right]^T. \qquad (28)$$
Taking the trace, subtracting $\operatorname{Tr}(E_t E_t^T)$, and collecting powers of $\beta_t$, the expansion yields terms in $\beta_t$, $\beta_t^2$, $\beta_t^3$, and $\beta_t^4$, so that
$$J_{t+1} - J_t = \frac{1}{2PK}\, g(\beta_t).$$
For simplification, omitting the subscript $t$, we have
$$J_{t+1} - J_t = \frac{1}{2PK}\left(A\beta^4 + B\beta^3 + C\beta^2 + M\beta\right), \qquad (29)$$
where
$$A = \frac{1}{(PK)^4} \operatorname{Tr}\!\left[E Y^T W^T E X^T X \left(E Y^T W^T E X^T X\right)^T\right],$$
$$B = -\frac{1}{(PK)^3} \operatorname{Tr}\!\left[W W^T E X^T X \left(E Y^T W^T E X^T X\right)^T + E Y^T V X \left(E Y^T W^T E X^T X\right)^T + E Y^T W^T E X^T X \left(W W^T E X^T X\right)^T + E Y^T W^T E X^T X \left(E Y^T V X\right)^T\right],$$
$$C = \frac{1}{(PK)^2} \operatorname{Tr}\!\left[E \left(E Y^T W^T E X^T X\right)^T + E Y^T W^T E X^T X\, E^T + W W^T E X^T X \left(W W^T E X^T X\right)^T + W W^T E X^T X \left(E Y^T V X\right)^T + E Y^T V X \left(W W^T E X^T X\right)^T + E Y^T V X \left(E Y^T V X\right)^T\right],$$
$$M = -\frac{1}{PK} \operatorname{Tr}\!\left[E \left(W W^T E X^T X\right)^T + E \left(E Y^T V X\right)^T + W W^T E X^T X\, E^T + E Y^T V X\, E^T\right] \qquad (30)$$
(the signs of the expansion of (28) are absorbed into $B$ and $M$),
and where
$$g(\beta) = A\beta^4 + B\beta^3 + C\beta^2 + M\beta, \qquad (31)$$
$$\frac{\partial g}{\partial \beta} = 4A\left(\beta^3 + a\beta^2 + b\beta + c\right), \qquad (32)$$
with $a = 3B/4A$, $b = 2C/4A$, and $c = M/4A$.
Lemma 2. For the solution of the general real cubic equation one uses the following. Let
$$f(x) = x^3 + ax^2 + bx + c, \qquad D = 18abc - 4a^3 c + a^2 b^2 - 4b^3 - 27c^2, \qquad (33)$$
where $D$ is the discriminant of $f(x)$. Then we have the following:

(1) If $D < 0$, $f(x)$ has one real root.
(2) If $D \ge 0$, $f(x)$ has three real roots:
    (a) if $D > 0$, $f(x)$ has three distinct real roots;
    (b) if $D = 0$ and $6b - 2a^2 \ne 0$, $f(x)$ has one single root and one multiple root;
    (c) if $D = 0$ and $6b - 2a^2 = 0$, $f(x)$ has one root of multiplicity three.
Lemma 3. For the polynomial $g(\beta)$ given in (31), if the optimum is $\beta = \beta_i$ such that $g(\beta_i) = \min(g(\beta_1), g(\beta_2), g(\beta_3))$, $i \in \{1, 2, 3\}$, where each $\beta_i$ is a real root of $\partial g / \partial \beta = 0$, then this optimum $\beta$ is the optimal learning rate and the learning process is stable.
Proof. To find the stable learning range of $\beta$, consider the Lyapunov function
$$V_t = J_t^2, \qquad \Delta V_t = J_{t+1}^2 - J_t^2. \qquad (34)$$
The dynamical system is guaranteed to be stable if $\Delta V_t < 0$, that is, if $J_{t+1} - J_t = (1/2PK)(A\beta^4 + B\beta^3 + C\beta^2 + M\beta) < 0$ by (29), where $\beta$ is the learning rate; the input matrix remains the same during the whole training process. We therefore seek the range of $\beta$ satisfying $g(\beta) = A\beta^4 + B\beta^3 + C\beta^2 + M\beta < 0$. Since $\partial g / \partial \beta$ has at least one real root (Lemma 2), one of these roots must give the optimum of $g(\beta)$. Obviously the minimum value of $g(\beta)$ gives the largest reduction in $J_t$ at each step of the learning process. Equation (31) shows that $g(\beta)$ has two or four real roots, one of which is $\beta = 0$ (Lemma 2), so the minimizer of $g$ yields the largest reduction in error between two successive steps; it is obtained by differentiating (31) with respect to $\beta$, which from (32) gives
$$\frac{\partial g}{\partial \beta} = 4A\left(\beta^3 + a\beta^2 + b\beta + c\right), \qquad (35)$$
where $a = 3B/4A$, $b = 2C/4A$, and $c = M/4A$. Solving $\partial g / \partial \beta = 0$ gives the $\beta$ which minimizes the error in (29).
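Lemma 3's recipe, solving the cubic $\partial g / \partial \beta = 0$ and keeping the real root that minimizes $g$, can be sketched numerically. The sketch below uses `numpy.roots` rather than the closed-form discriminant analysis of Lemma 2; the function name is illustrative.

```python
import numpy as np

def optimal_beta(A, B, C, M):
    """Optimal learning rate per Lemma 3 for g(beta) = A b^4 + B b^3 + C b^2 + M b."""
    # dg/dbeta = 4A b^3 + 3B b^2 + 2C b + M; Lemma 2 guarantees a real root
    roots = np.roots([4 * A, 3 * B, 2 * C, M])
    real = roots[np.abs(roots.imag) < 1e-9].real
    g = lambda b: A * b ** 4 + B * b ** 3 + C * b ** 2 + M * b
    # Pick the real stationary point giving the largest reduction g(beta)
    return float(min(real, key=g))
```

For example, with $g(\beta) = \beta^4 - 4\beta$ the only real stationary point is $\beta = 1$, where $g(1) = -3 < 0$, so the step is stable.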
5. Simulation Results

To demonstrate the effectiveness and merit of the proposed FCPN, simulation results are presented and discussed. Four different nonlinear dynamical systems and one general benchmark problem, the Box-Jenkins gas furnace model with time series data, are considered.

5.1. Performance Criteria. For assessing the performance of FCPN and comparing it with the Dynamic and Back Propagation Networks, we have evaluated the various errors given below. In the case of the four dynamical models, it is recommended to assess a subset of the following criteria.

Given $N$ pairs of data points $(y(k), \hat{y}(k))$, where $y(k)$ is the output of the system and $\hat{y}(k)$ is the output of the controller, the Maximum Absolute Error (MAE) is
$$J_{\text{MAE}} = J_{\max} = \max_{1 \le k \le N} \left| y(k) - \hat{y}(k) \right|, \qquad (36)$$
the Sum of Squared Errors (SSE) is
$$J_{\text{SSE}} = \sum_{k=1}^{N} \left(y(k) - \hat{y}(k)\right)^2, \qquad (37)$$
the Mean Squared Error (MSE) is
$$J_{\text{MSE}} = \frac{1}{N} \sum_{k=1}^{N} \left(y(k) - \hat{y}(k)\right)^2, \qquad (38)$$
Figure 4: (a) Mean Square Error of system (Example 1) using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 1 (target versus FCPN output).
and/or the Root Mean Squared Error (RMSE) is
$$J_{\text{RMSE}} = \sqrt{J_{\text{MSE}}}. \qquad (39)$$
The measure Variance Accounting Factor (VAF) is
$$J_{\text{VAF}} = \left(1 - \frac{\operatorname{Var}\left(y(k) - \hat{y}(k)\right)}{\operatorname{Var}\left(y(k)\right)}\right) \cdot 100. \qquad (40)$$
A related measure is the Normalized Mean Squared Error (NMSE):
$$J_{\text{NMSE}} = \frac{\sum_{k=1}^{N} \left(y(k) - \hat{y}(k)\right)^2}{\sum_{k=1}^{N} \left(y(k) - \bar{y}\right)^2}, \qquad (41)$$
and the Best Fit Rate (BFR) is
$$J_{\text{BFR}} = \left(1 - \sqrt{\frac{\sum_{k=1}^{N} \left(y(k) - \hat{y}(k)\right)^2}{\sum_{k=1}^{N} \left(y(k) - \bar{y}\right)^2}}\right) \cdot 100. \qquad (42)$$
Example 1. The nonlinear dynamical system [24] is governed by the difference equation
$$y(k+1) = 0.3\, y(k) + 0.6\, y(k-1) + f[u(k)]. \qquad (43)$$
The nonlinear function in the system in this case is $f(u) = u^3 + 0.3u^2 - 0.4u$, where $y$ and $u$ are uniformly distributed in $[-2, 2]$. The objective of Example 1 is to control the system to track the reference output given as 250 sample data points. The model has two inputs, $y(k)$ and $u(k)$, and a single output $y(k+1)$; system identification was initially performed with the system input uniformly distributed over $[-2, 2]$. The FCPN controller uses the two inputs $y(k)$ and $u(k)$ to produce the output $\hat{y}(k+1)$; the FCPN model governed by the difference equation $\hat{y}(k+1) = 0.3\, y(k) + 0.6\, y(k-1) + N[u(k)]$ was used. Figures 4(a) and 4(b) show the error $e(k+1) = y(k+1) - \hat{y}(k+1)$ and the outputs of the dynamical system and the FCPN.
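The plant (43) is straightforward to simulate when generating identification data; a sketch follows, assuming zero initial conditions (which the paper does not specify) and the uniform input described in the text.

```python
import numpy as np

def simulate_plant(u, y0=0.0, y1=0.0):
    """Simulate eq. (43): y(k+1) = 0.3 y(k) + 0.6 y(k-1) + f(u(k)),
    with f(u) = u^3 + 0.3 u^2 - 0.4 u."""
    f = lambda u: u ** 3 + 0.3 * u ** 2 - 0.4 * u
    y = np.zeros(len(u) + 1)
    y[0], y[1] = y0, y1
    for k in range(1, len(u)):
        y[k + 1] = 0.3 * y[k] + 0.6 * y[k - 1] + f(u[k])
    return y

# Identification input as in the text: uniform over [-2, 2], 250 samples
u = np.random.default_rng(0).uniform(-2.0, 2.0, 250)
y = simulate_plant(u)
```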
Table 1: Calculated errors for Example 1.

NN model   MAE       SSE       MSE           RMSE     NMSE
FCPN       1.3364    0.1143    4.5709e-004   0.0214   7.1627e-004
Dynamic    5.8231    3.1918    0.0128        0.1130   0.0028
BPN        38.0666   48.1017   0.4810        0.6936   0.1035
As can be seen from the figure, the identification error is small even when the input is changed to a sum of two sinusoids, $u(k) = \sin(2\pi k/250) + \sin(2\pi k/25)$, at $k = 250$.

Table 1 lists the various calculated errors of the different NN models for Example 1. It can be seen from the table that the errors calculated for FCPN are minimal compared to those of the DN and BPN.
Example 2. The nonlinear dynamical system [24] is governed by the difference equation
$$y(k+1) = f[y_p(k), y_p(k-1)] + u(k), \qquad (44)$$
where
$$f[y_p(k), y_p(k-1)] = \frac{y_p(k)\, y_p(k-1)\, [y_p(k) + 2.5]}{1 + y_p^2(k) + y_p^2(k-1)}. \qquad (45)$$
The objective of (44) is to control the system to track the reference output given 100 sample data points. The model has two inputs, $y(k)$ and $u(k)$, and a single output $y(k+1)$; system identification was initially performed with the system input uniformly distributed over $[-2, 2]$. Training and testing samples contain 100 sample data points each. The FCPN controller uses the two inputs $y(k)$ and $u(k)$ to produce the output $\hat{y}(k+1)$. The FCPN
Figure 5: (a) Mean Square Error of system (Example 2) using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 2 (target versus FCPN output).
Table 2: Calculated errors for Example 2.

NN model   MAE       SSE      MSE           RMSE     NMSE
FCPN       0.1956    0.0032   3.2328e-005   0.0057   2.7164e-005
Dynamic    9.5152    2.9856   0.0299        0.1728   0.0308
BP         10.1494   4.5163   0.0452        0.2125   0.0323
network discussed earlier is used to identify the system from input-output data and is described by the equation
$$\hat{y}(k+1) = N[y(k), y(k-1)] + u(k). \qquad (46)$$
For FCPN, the identification process involves adjusting the weights of $N$ using FCL. The FCPN identifier needs some prior information concerning the input-output behavior of the system before identification can be undertaken. FCL is used to adjust the weights of the neural network so that the error $e(k+1) = y(k+1) - \hat{y}(k+1)$ is minimized, as shown in Figure 5(a). The behavior of the FCPN model for (44) is shown in Figure 5(b). The input $u(k)$ was assumed to be a random signal uniformly distributed in the interval $[-2, 2]$, and the weights in the neural network were adjusted accordingly.

Table 2 lists the various calculated errors of the different NN models for Example 2. It can be seen from the table that the errors calculated for FCPN are minimal compared to those of the DN and BPN.
Example 3. The nonlinear dynamical system [23] is governed by the difference equation
$$y(k+1) = \frac{y(k)}{1 + y(k)^2} + u^3(k), \qquad (47)$$
which corresponds to $f[y(k)] = y(k)/(1 + y(k)^2)$ and $g[u(k)] = u^3(k)$. The FCPN network equation (47) is used to identify the system from input-output data; the weights in the neural network were adjusted at every discrete time step.

The objective of (47) is to control the system to track the reference output given 100 sample data points. The model has two inputs, $y(k)$ and $u(k)$, and a single output $y(k+1)$; system identification was initially performed with the system input uniformly distributed over $[-2, 2]$. Training and testing samples contain 250 sample data points. The FCPN controller uses the two inputs $y(k)$ and $u(k)$ to produce the output $\hat{y}(k+1)$. The input data are generated using a uniform distribution in the interval $[-2, 2]$. The function $f$ obtained by FCPN is used to adjust the weights of the neural network so that the error $e(k+1) = y(k+1) - \hat{y}(k+1)$ is minimized, as shown in Figure 6(a). In Figure 6(b) the desired outputs of the system as well as of the FCPN model are shown and are seen to be indistinguishable.

Table 3: Calculated errors for Example 3.

NN model   MAE       SSE       MSE           RMSE     NMSE
FCPN       0.3519    0.0032    3.1709e-005   0.0056   5.6371e-006
Dynamic    5.5652    3.6158    0.0362        0.1902   0.0223
BP         42.6524   46.9937   0.4699        0.6855   0.2892

Table 3 lists the various calculated errors of the different NN models for Example 3; the errors calculated for FCPN are minimal compared to those of the DN and BPN.
Example 4. The nonlinear dynamical system [24] is governed by difference equation (48); the generalized form of this equation is Model IV. In this example the system is assumed to be of the form
$$y_p(k+1) = f[y_p(k), y_p(k-1), y_p(k-2), u(k), u(k-1)], \qquad (48)$$
Figure 6: (a) Mean Square Error of system (Example 3) using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 3 (target versus FCPN output).
Figure 7: (a) Mean Square Error of system (Example 4) using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 4 (target versus FCPN output).
where the unknown function $f$ has the form
$$f[x_1, x_2, x_3, x_4, x_5] = \frac{x_1 x_2 x_3 x_5 (x_3 - 1) + x_4}{1 + x_3^2 + x_2^2}. \qquad (49)$$
In the identification model, which is approximated by FCPN, Figure 7(b) shows the output of the system and of the model when the identification procedure was carried out with a random input signal uniformly distributed in the interval $[-1, 1]$. The performance of the model is studied and the error $e(k+1) = y_p(k+1) - \hat{y}_p(k+1)$ is minimized, as shown in Figure 7(a). In Figure 7(b) the outputs of the system as well as of the FCPN model are shown. The input to the system and the identified model is given by $u(k) = \sin(2\pi k/250)$ for $k \le 500$ and $u(k) = 0.8\sin(2\pi k/250) + 0.2\sin(2\pi k/25)$ for $k > 500$.

Table 4 lists the various calculated errors of the different NN models for Example 4; the errors calculated for FCPN are minimal compared to those of the DN and BPN.
Table 4: Calculated errors for Example 4.

NN model   MAE       SSE       MSE      RMSE     NMSE
FCPN       30.4348   2.1931    0.0027   0.0524   0.0186
Dynamic    32.8952   6.4059    0.0080   0.0895   0.0276
BP         53.1988   13.4174   0.0168   0.1295   0.0844
Example 5. In this example we have used the Box-Jenkins time series [32] of 296 pairs of data measured from a gas furnace system, with single input $u(t)$ being the gas flow rate and single output $y(t)$ being the CO2 concentration in the outlet gas. Training and testing samples contained 196 and 100 data points, respectively. The FCPN uses two inputs, the current state $y(k)$ and the desired state $y_d(k)$, to produce the output $\hat{y}(k)$ [33].

Figure 8(a) shows the mean squared control error of the FCPN method. Figure 8(b) shows the performance of the controller with the FCPN algorithm. The results show that the FCPN algorithm enables an appropriate approximation using
Figure 8: (a) Mean Square Error of system (Example 5) using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 5 (desired versus FCPN output).
Table 5: BFR (%) values of NN models for various examples.

NN model   Example 1   Example 2   Example 3   Example 4
FCPN       97.32       99.48       99.76       86.35
DN         94.69       79.32       85.08       83.39
BPN        67.83       79.28       46.2234     70.94
fuzzy learning, as given in equation (11), of the Box-Jenkins time series data based on calculation of the BMN.
Table 5 shows the BFR (%) of the various NN models for Examples 1-4. It can be observed from Table 5 that the Best Fit Rate found for FCPN is the maximum compared to DN and BPN, which shows the better performance of the FCPN network for nonlinear systems.
6. Conclusions

FCPN is a neural network control method which was developed and presented here. It is based on combining the FCL algorithm with CPN; the key idea explored is the use of the FCL algorithm for training the instar-outstar weights. The performance of the FCPN controller was tested on four nonlinear dynamical systems and on time series (Box-Jenkins) data and compared with the Dynamic Network and the standard Back Propagation algorithm. The comparative performances of the FCPN algorithm and the Dynamic Network, such as the number of iterations and performance measures like MAE, MSE, SSE, NMSE, and BFR, are summarized in Tables 1-4. It can be seen that the FCPN algorithm gives minimum errors compared to the Dynamic Network and the standard Back Propagation Network for all four models of nonlinear dynamical systems and for the Box-Jenkins time series data. Results obtained from FCPN were compared for the various errors, and it is clearly shown that FCPN works much better for the control of nonlinear dynamical systems.
Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment

The authors gratefully acknowledge the financial assistance provided by the All India Council for Technical Education (AICTE) in the form of a Research Promotion Scheme (RPS) project in 2012.
References
[1] T. Soderstrom and P. Stoica, System Identification, Prentice Hall, New York, NY, USA, 1989.
[2] S. A. Billings, Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains, Wiley, Chichester, UK, 2013.
[3] M. Liu, "Decentralized control of robot manipulators: nonlinear and adaptive approaches," IEEE Transactions on Automatic Control, vol. 44, no. 2, pp. 357-363, 1999.
[4] C.-M. Lin, A.-B. Ting, M.-C. Li, and T.-Y. Chen, "Neural-network-based robust adaptive control for a class of nonlinear systems," Neural Computing and Applications, vol. 20, no. 4, pp. 557-563, 2011.
[5] I. Rivals and L. Personnaz, "Nonlinear internal model control using neural networks: application to processes with delay and design issues," IEEE Transactions on Neural Networks, vol. 11, no. 1, pp. 80-90, 2000.
[6] I. Kanellakopoulos, P. V. Kokotovic, and A. S. Morse, "Systematic design of adaptive controllers for feedback linearizable systems," IEEE Transactions on Automatic Control, vol. 36, no. 11, pp. 1241-1253, 1991.
[7] P. Kokotovic, "The joy of feedback: nonlinear and adaptive," IEEE Control Systems, vol. 12, no. 3, pp. 7-17, 1992.
[8] H. Elmali and N. Olgac, "Robust output tracking control of nonlinear MIMO systems via sliding mode technique," Automatica, vol. 28, no. 1, pp. 145-151, 1992.
[9] N. Sadati and R. Ghadami, "Adaptive multi-model sliding mode control of robotic manipulators using soft computing," Neurocomputing, vol. 71, no. 13-15, pp. 2702-2710, 2008.
[10] A. Kroll and H. Schulte, "Benchmark problems for nonlinear system identification and control using Soft Computing methods: need and overview," Applied Soft Computing Journal, vol. 25, pp. 496-513, 2014.
[11] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, no. 5, pp. 359-366, 1989.
[12] A. Bortoletti, C. Di Fiore, S. Fanelli, and P. Zellini, "A new class of Quasi-Newtonian methods for optimal learning in MLP-networks," IEEE Transactions on Neural Networks, vol. 14, no. 2, pp. 263-273, 2003.
[13] G. Lera and M. Pinzolas, "Neighborhood based Levenberg-Marquardt algorithm for neural network training," IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1200-1203, 2002.
[14] A. K. Shah and D. M. Adhyaru, "Clustering based multiple model control of hybrid dynamical systems using HJB solution," Applied Soft Computing, vol. 31, pp. 103-117, 2015.
[15] Y. Cui, H. Zhang, Y. Wang, and Z. Zhang, "Adaptive neural dynamic surface control for a class of uncertain nonlinear systems with disturbances," Neurocomputing, vol. 165, pp. 152-158, 2015.
[16] R. Rakkiyappan, N. Sakthivel, and J. Cao, "Stochastic sampled-data control for synchronization of complex dynamical networks with control packet loss and additive time-varying delays," Neural Networks, vol. 66, pp. 46-63, 2015.
[17] B. Kiumarsi, F. L. Lewis, and D. S. Levine, "Optimal control of nonlinear discrete time-varying systems using a new neural network approximation structure," Neurocomputing, vol. 156, pp. 157-165, 2015.
[18] B. M. Al-Hadithi, A. Jimenez, and R. G. Lopez, "Fuzzy optimal control using generalized Takagi-Sugeno model for multivariable nonlinear systems," Applied Soft Computing Journal, vol. 30, pp. 205-213, 2015.
[19] M. A. Gonzalez-Olvera and Y. Tang, "Identification of nonlinear discrete systems by a state-space recurrent neurofuzzy network with a convergent algorithm," Neurocomputing, vol. 148, pp. 318-325, 2015.
[20] J. F. de Canete, P. Del Saz-Orozco, I. Garcia-Moral, and S. Gonzalez-Perez, "Indirect adaptive structure for multivariable neural identification and control of a pilot distillation plant," Applied Soft Computing, vol. 12, no. 9, pp. 2728-2739, 2012.
[21] M. Cai and Z. Xiang, "Adaptive neural finite-time control for a class of switched nonlinear systems," Neurocomputing, vol. 155, pp. 177-185, 2015.
[22] Y. Li, S. Tong, and T. Li, "Adaptive fuzzy backstepping control design for a class of pure-feedback switched nonlinear systems," Nonlinear Analysis: Hybrid Systems, vol. 16, pp. 72-80, 2015.
[23] T.-Y. Kuc, J. S. Lee, and K. Nam, "An iterative learning control theory for a class of nonlinear dynamic systems," Automatica, vol. 28, no. 6, pp. 1215-1221, 1992.
[24] K. S. Narendra and K. Parthasarathy, "Neural networks and dynamical systems," International Journal of Approximate Reasoning, vol. 6, no. 2, pp. 109-131, 1992.
[25] R. Hecht-Nielsen, "Theory of the back propagation neural network," Neural Networks, vol. 1, pp. 593-605, 1989.
[26] M. T. Hagan, H. B. Demuth, M. H. Beale, and O. De Jesus, Neural Network Design, Cengage Learning, 2nd edition, 2014.
[27] F.-J. Chang and Y.-C. Chen, "A counterpropagation fuzzy-neural network modeling approach to real time streamflow prediction," Journal of Hydrology, vol. 245, no. 1-4, pp. 153-164, 2001.
[28] A. Dwivedi, N. S. C. Bose, A. Kumar, P. Kandula, D. Mishra, and P. K. Kalra, "A novel hybrid image compression technique: wavelet-MFOCPN," in Proceedings of the 9th Asian Symposium on Information Display (ASID '06), pp. 492-495, 2006.
[29] C. J. C. Burges, P. Simard, and H. S. Malvar, Improving Wavelet Image Compression with Neural Networks, Microsoft Research, Redmond, Wash, USA, 2001.
[30] D. Woods, "Back and counter propagation aberrations," in Proceedings of the IEEE International Conference on Neural Networks, pp. 473-479, 1988.
[31] D. Mishra, N. S. C. Bose, A. Tolambiya, et al., "Color image compression with modified forward-only counterpropagation neural network: improvement of the quality using different distance measures," in Proceedings of the 9th International Conference on Information Technology (ICIT '06), pp. 139-140, IEEE, Bhubaneswar, India, December 2006.
[32] G. E. Box and G. M. Jenkins, Time Series Analysis: Forecasting and Control, Holden Day, San Francisco, Calif, USA, 1970.
[33] J. Kim and N. Kasabov, "HyFIS: adaptive neuro-fuzzy inference systems and their application to nonlinear dynamical systems," Neural Networks, vol. 12, no. 9, pp. 1301-1319, 1999.
Computational Intelligence and Neuroscience
Figure 1: General architecture of BPN (input nodes $X_1, \ldots, X_n$ with signals $x_1, \ldots, x_n$; hidden nodes $Z_1, \ldots, Z_p$ with input-hidden weights $v_{ij}$; output nodes $Y_1, \ldots, Y_m$ with hidden-output weights $w_{jk}$, bias weights $w_{0k}$, outputs $y_1, \ldots, y_m$, and targets $t_1, \ldots, t_m$).
of the most commonly used networks for this purpose. The BPN consists of an input layer, a hidden layer, and an output layer with weighted interconnections, and it is trained by supervised learning in which the error is propagated back to the hidden layer. The aim of this neural network is to achieve a balance between the net's ability to respond to trained inputs (memorization) and its ability to give reasonable responses to inputs that are similar, but not identical, to those used in training (generalization). For a given set of training input-output pairs, the network provides a procedure for changing the weights so that the given input patterns are classified correctly. The weight update algorithm is based on the gradient descent method [25]. The network architecture is shown in Figure 1.
3.1. Dynamic Network. Neural networks can be classified into two categories: (i) Static and (ii) Dynamic. In a Static Neural Network the output is calculated directly from the input through a feedforward network, whereas in a Dynamic Network the output depends on the current or previous inputs, outputs, or states of the network. When the current output is a function of the current input and the previous output, the network is known as a recurrent (feedback) network. The training processes of Static and Dynamic Networks differ in how the gradient or Jacobian is computed. A Dynamic Network contains delays and operates on a sequence of inputs; it can have purely feedforward connections or some recurrent connections, and it can be trained using the Dynamic Back Propagation Network [26].
The general equation of the net input $n^m(t)$ for the $m$th layer is given by

$$n^m(t) = \sum_{l \in Lf^m} \sum_{d \in DL^{m,l}} LW^{m,l}(d)\, a^l(t-d) + \sum_{l \in I^m} \sum_{d \in DI^{m,l}} IW^{m,l}(d)\, p^l(t-d) + b^m \quad (7)$$

where $p^l(t)$ is the $l$th input vector at time $t$, $IW^{m,l}$ is the input weight between input $l$ and layer $m$, $LW^{m,l}$ is the layer weight between layer $l$ and layer $m$, and $b^m$ is the bias vector for layer $m$. $DL^{m,l}$ is the set of all delays in the Tapped Delay Line (TDL) between layer $l$ and layer $m$, and $DI^{m,l}$ is the set of all delays in the TDL between input $l$ and layer $m$. $I^m$ is the set of indices of input vectors that connect to layer $m$, and $Lf^m$ is the set of indices of layers that connect to layer $m$. The output of the $m$th layer at time $t$ is computed as

$$a^m(t) = f^m(n^m(t)) \quad (8)$$
The network has a TDL on the input with $DI^{1,1} = \{0, 1, 2\}$, and its output is represented as

$$a(t) = f(n(t)) = f\left(\sum_{d} iw^{1,1}(d)\, p(t-d)\right) = f\left[iw^{1,1}(0)\, p(t) + iw^{1,1}(1)\, p(t-1) + iw^{1,1}(2)\, p(t-2)\right] \quad (9)$$

One simple architecture with delays $DI^{1,1} = \{0, 1, 2\}$ for a feedforward Dynamic Network, known as the Dynamic Adaptive Linear Neuron, is shown in Figure 2.
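The tapped-delay-line computation of eq. (9) is straightforward to simulate. Below is a minimal sketch; the function name `tdl_neuron` and the example weights are illustrative assumptions, not from the paper:

```python
import numpy as np

def tdl_neuron(p, iw, f=lambda n: n):
    """Tapped-delay-line neuron of eq. (9): a(t) = f(sum_d iw(d) * p(t-d)).

    p  : 1-D input sequence
    iw : weights iw[0..D] on the delayed inputs (D = 2 in the text)
    f  : transfer function (identity for the linear neuron of Figure 2)
    """
    D = len(iw) - 1
    a = np.zeros(len(p))
    for t in range(len(p)):
        # delayed samples before t = 0 are treated as zero
        a[t] = f(sum(iw[d] * p[t - d] for d in range(D + 1) if t - d >= 0))
    return a

a = tdl_neuron(np.array([1.0, 2.0, 3.0, 4.0]), iw=[0.5, 0.3, 0.2])
```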
3.2. Counter Propagation Network (CPN). The CPN is a multilayer feedforward network consisting of an input layer, a competitive layer, and an output layer; its model is instar-outstar. It is a three-layer neural network that performs input-output data mapping, producing an output in response to an input vector on the basis of Fuzzy Competitive Learning. In the CPN, the connections between the input layer and the competitive layer form the instar structure, and the connections between the competitive layer and the output layer form the outstar structure. The CPN involves a two-stage training process. In the first stage the input vector is clustered on the basis of Euclidean
Figure 2: Simple architecture of the Dynamic Adaptive Linear Neuron (the input $p(t)$ passes through delay units $D$; the weighted signals $iw^{1,1}(0)$, $iw^{1,1}(1)$, $iw^{1,1}(2)$ are summed and passed through the transfer function $f$ to give $a(t)$).
distance, and the performance of the network is improved using a linear topology. Using the Euclidean distance between the input and weight vectors, we evaluate the BMN. In Phase II, the desired response is obtained by adapting the weights from the competitive layer to the output layer [27]. Let $x = [x_1, x_2, \ldots, x_n]$ and $y = [y^*_1, y^*_2, \ldots, y^*_m]$ be the input and desired vectors, respectively; let $v_{ij}$ be the weight between the input layer and the competitive layer, where $1 \le i \le n$ and $1 \le j \le p$; and let $w_{jk}$ be the weight between the competitive layer and the output layer, where $1 \le k \le m$. The Euclidean distance between the input vector $x$ and the weight vector $v_j$ is

$$D_j = \sum_{i=1}^{n} (x_i - v_{ij})^2, \quad j = 1, 2, \ldots, p \quad (10)$$
The architecture of CPN is shown in Figure 3
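The distance computation of eq. (10) and the selection of the Best Matched Node can be sketched in a few lines; the function name `best_matched_node` and the toy weight matrix are assumptions for illustration:

```python
import numpy as np

def best_matched_node(x, V):
    """Squared Euclidean distance of eq. (10) for each competitive node j,
    and the index J of the Best Matched Node (minimum distance)."""
    D = ((x[:, None] - V) ** 2).sum(axis=0)   # D_j = sum_i (x_i - v_ij)^2
    return int(np.argmin(D)), D

V = np.array([[1.0, 0.0],
              [0.0, 1.0]])       # column j holds the weights of node Z_j
x = np.array([1.0, 0.0])
J, D = best_matched_node(x, V)   # the first node matches exactly
```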
3.3. Fuzzy Competitive Learning (FCL). FCL is used for adjusting the instar-outstar weights and updating the BMN for discrete-time nonlinear dynamical systems. Learning of the CPN is divided into two phases: the Fuzzy Competitive Learning phase (unsupervised) and the Grossberg Learning phase (supervised) [28–31]. The CPN is an efficient network for mapping an input vector of size $n$ placed in clusters of size $m$. As in the traditional self-organizing map, the weight vector of every node is treated as a sample vector to be compared with the input pattern. The process is to determine which weight vector (say $J$) is most similar to the input pattern, chosen as the BMN. The first phase is based on the closeness between the weight vector and the input vector, and the second phase updates the weights between the competitive and output layers to obtain the desired response.
Let $v_{ij}$ be the weight from input node $i$ to competitive neuron $j$, and let $w_{jk}$ be the weight between competitive node $j$ and output neuron $k$. We propose a new learning rate calculation method for the weight update:

$$\propto(J) = \frac{\sum_{i=1}^{n} (v_{iJ} - x_i)^2}{\sum_{j=1}^{m} \sum_{i=1}^{n} (v_{ij} - x_i)^2} \quad (11)$$

where $J$ denotes the index of the BMN and $\propto$ is known as the Fuzzy Competitive Learning rate.
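The fuzzy rate of eq. (11) normalizes the BMN's distance by the total distance over all competitive nodes. A minimal sketch (the function name `fcl_rate` and the numeric example are illustrative assumptions):

```python
import numpy as np

def fcl_rate(x, V, J):
    """Fuzzy Competitive Learning rate of eq. (11):
    rate(J) = sum_i (v_iJ - x_i)^2 / sum_j sum_i (v_ij - x_i)^2."""
    num = ((V[:, J] - x) ** 2).sum()
    den = ((V - x[:, None]) ** 2).sum()
    return num / den

V = np.array([[1.0, 0.0],
              [0.0, 1.0]])       # columns are competitive nodes
x = np.array([0.5, 0.5])
rate0 = fcl_rate(x, V, 0)        # both nodes are equidistant here
```

By construction the rates over all nodes sum to one, which is what makes the rate "fuzzy": it behaves like a membership value of the input in the BMN's cluster.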
The Fuzzy Competitive Learning phase is described as follows.

3.3.1. Training Algorithm of FCL

Phase I (determination of the BMN). The steps for training the CPN are as follows.

Step 1. Initialize the instar-outstar weights.

Step 2. Perform Steps 3–8 until the stopping criteria for Phase I training fail. The stopping condition may be a fixed number of epochs or the learning rate reducing to a negligible value.
Step 3. Perform Steps 4–6 for each input training vector $X$.

Step 4. Set the $X$-input layer activation to vector $X$.

Step 5. Compute the Best Matched Node (BMN) $J$: using the Euclidean distance, find the node $Z_J$ whose distance from the input pattern is minimum. The Euclidean distance is calculated as

$$D_j = \sum_{i=1}^{n} (x_i - v_{ij})^2, \quad j = 1, 2, \ldots, p \quad (12)$$

The minimum net input is

$$z_{in,J} = \sum_{i=1}^{n} x_i v_{iJ} \quad (13)$$

Step 6. Calculate the Fuzzy Competitive Learning rate (11) for the weight update using the BMN.

Step 7. Update the weights for unit $Z_J$:

$$v_{iJ}(\text{new}) = v_{iJ}(\text{old}) + \propto h(J, i, j)(x_i - v_{iJ}) \quad (14)$$

where $i = 1, 2, 3, \ldots, n$ and $h(\cdot, \cdot)$ is the neighborhood function around the BMN:

$$h(J, i, j) = \exp\left(-\frac{\|v_{ij} - v_{iJ}\|^2}{2\sigma^2(t)}\right), \quad h(J, i, j) \in [0, 1] \quad (15)$$

$$\sigma(t) = \sigma_0 \exp\left(-\frac{t}{T}\right) \quad (16)$$
Step 8. Test the stopping criteria.

Phase II (to obtain the desired response)

Step 1. Set the activation functions of the $(x, y)$ input and output layers, respectively.

Step 2. Update the BMN $J$ (Step 5 from Phase I). Also update the weights into unit $Z_J$ as

$$v_{iJ}(\text{new}) = v_{iJ}(\text{old}) + \alpha h(J, i, j)\left[x_i - v_{iJ}(\text{old})\right] \quad (17)$$
Figure 3: Architecture of the Counter Propagation Network (input layer $X_1, \ldots, X_n$; competitive layer $Z_1, \ldots, Z_p$ containing the BMN; output layer $Y_1, \ldots, Y_m$ with desired outputs $yy_1, \ldots, yy_m$; the instar connections $v_{ij}$ are trained unsupervised with learning rate $\propto$ and the outstar connections $w_{jk}$ supervised with learning rate $\beta$).
Step 3. Update the weights from node $Z_J$ to the output units as

$$w_{Jk}(\text{new}) = w_{Jk}(\text{old}) + \beta h(J, j, k)\left[y_k - w_{Jk}(\text{old})\right], \quad 1 \le k \le m \quad (18)$$

Step 4. The learning rate $\beta$ is calculated as

$$\beta(J) = \frac{\sum_{k=1}^{m} (y_k - w_{Jk})^2}{\sum_{j=1}^{p} \sum_{k=1}^{m} (y_k - w_{jk})^2} \quad (19)$$
Step 5. Decrease the learning rate such that $\beta = \beta - \varepsilon$, where $\varepsilon$ is a small positive number.

Step 6. Test the stopping condition for Phase II (i.e., a fixed number of epochs or the learning rate having reduced to a negligible value).
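Phases I and II can be sketched as a single combined update around the BMN. This is a simplified illustration, not the paper's exact schedule: the function name `fcpn_train_step`, the array shapes, and the per-node evaluation of the neighborhood $h$ are interpretive assumptions.

```python
import numpy as np

def fcpn_train_step(x, y, V, W, sigma=1.0):
    """One sketched FCPN update: Phase I instar step (eqs. (11)-(16))
    followed by a Phase II outstar step (eqs. (18)-(19)).

    V : instar weights, shape (n, p); column j belongs to competitive node j
    W : outstar weights, shape (p, m); row j maps node j to the m outputs
    """
    V, W = V.copy(), W.copy()
    D = ((x[:, None] - V) ** 2).sum(axis=0)      # distances, eq. (12)
    J = int(np.argmin(D))                        # BMN

    alpha = D[J] / D.sum()                       # fuzzy instar rate, eq. (11)
    E = ((y[None, :] - W) ** 2).sum(axis=1)
    beta = E[J] / E.sum()                        # fuzzy outstar rate, eq. (19)

    # Gaussian neighborhood around the BMN column, eq. (15)
    h = np.exp(-((V - V[:, [J]]) ** 2).sum(axis=0) / (2.0 * sigma ** 2))

    V += alpha * h[None, :] * (x[:, None] - V)   # instar update, eq. (14)
    W[J] += beta * (y - W[J])                    # outstar update, eq. (18)
    return V, W, J

V = np.array([[1.0, 0.0],
              [0.0, 1.0]])
W = np.zeros((2, 1))
V2, W2, J = fcpn_train_step(np.array([1.0, 0.0]), np.array([1.0]), V, W)
```

Note a property of eq. (11) visible here: when the input coincides with the BMN's weight vector the fuzzy rate is zero, so a perfectly matched instar receives no further adjustment.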
3.3.2. Testing Phase (to Test the FCPN)

Step 1. Set the initial weights, that is, the weights obtained during training.

Step 2. Apply the FCPN to the input vector $X$.

Step 3. Find the unit $J$ that is closest to vector $X$.

Step 4. Set the activations of the output units.

Step 5. Apply the activation function at $y_k$, where $y_k = \sum_{j} z_j w_{jk}$.
4. Dynamical Learning for CPN

Learning stability is a fundamental issue for the CPN; however, there are few studies on the learning issue of FNNs. BPN learning is not always successful because of its sensitivity to the learning parameters, and the optimal learning rate changes during the training process. Dynamical learning of the CPN is carried out using Lemmas 1, 2, and 3.

Assumption. Let us assume $\phi(x)$ is a sigmoid function, that is, a bounded, continuous, and increasing function. Since the input to the neural network in this model is bounded, we consider Lemmas 1 and 2 given below.
Lemma 1. Let $\phi(x)$ be a sigmoid function, let $\Omega$ be a compact set in $\mathbb{R}^n$, and let $f: \mathbb{R}^n \to \mathbb{R}$ be a continuous function on $\Omega$. Then for arbitrary $\epsilon > 0$ there exist an integer $N$ and real constants $c_i$, $\theta_i$, and $w_{ij}$, $i = 1, 2, \ldots, N$, $j = 1, 2, \ldots, n$, such that

$$\hat{f}(x_1, x_2, \ldots, x_n) = \sum_{i=1}^{N} c_i \phi\left(\sum_{j=1}^{n} w_{ij} x_j - \theta_i\right) \quad (20)$$

satisfies

$$\max_{x \in \Omega} \left\|f(x_1, x_2, \ldots, x_n) - \hat{f}(x_1, x_2, \ldots, x_n)\right\| < \epsilon \quad (21)$$

Using Lemma 1, dynamical learning for a three-layer CPN can be formulated, where the hidden layer transfer function is $\phi(x)$ and the transfer function at the output layer is linear.

Let all vectors be column vectors, and let the subscripts of $d_{kp}$ refer to a specific output vector component.
Let $X = [x_1, x_2, \ldots, x_P] \in \mathbb{R}^{L \times P}$, $Y = [y_1, y_2, \ldots, y_P] \in \mathbb{R}^{H \times P}$, $O = [o_1, o_2, \ldots, o_P] \in \mathbb{R}^{K \times P}$, and $D = [d_1, d_2, \ldots, d_P] \in \mathbb{R}^{K \times P}$ be the input, hidden, output, and desired vectors, respectively, where $L$, $H$, and $K$ denote the numbers of input, hidden, and output layer neurons. Let $V$ and $W$ represent the input-hidden and hidden-output layer weight matrices, respectively. The objective of network training is to minimize the error function $J$, where

$$J = \frac{1}{2PK} \sum_{p=1}^{P} \sum_{k=1}^{K} \left(o_{kp} - d_{kp}\right)^2 \quad (22)$$
4.1. Optimal Learning of the Dynamical System. Consider a three-layer CPN. The network error matrix is defined by the difference between the desired and FCPN outputs at any iteration and is given as

$$E_t = O_t - D = W_t Y_t - D = W_t V_t X - D \quad (23)$$

The objective of the network, the minimization of the error, is defined as

$$J = \frac{1}{2PK} \, \mathrm{Tr}\left(E_t E_t^T\right) \quad (24)$$
where $\mathrm{Tr}$ represents the trace of a matrix. Using the gradient descent method, the updated weights are given by

$$W_{t+1} = W_t - \beta_t \frac{\partial J}{\partial W_t}, \qquad V_{t+1} = V_t - \beta_t \frac{\partial J}{\partial V_t} \quad (25)$$

or

$$W_{t+1} = W_t - \frac{\beta_t}{PK} E_t Y_t^T, \qquad V_{t+1} = V_t - \frac{\beta_t}{PK} W_t^T E_t X^T \quad (26)$$
Using (23)–(26), we have

$$E_{t+1} E_{t+1}^T = \left(W_{t+1} Y_{t+1} - D\right)\left(W_{t+1} Y_{t+1} - D\right)^T \quad (27)$$

To obtain the minimum error for the multilayer FNN, expanding and simplifying gives $J_{t+1} - J_t = (1/2PK)\, g(\beta)$, where

$$\begin{aligned}
E_{t+1} E_{t+1}^T &= \left(W_{t+1} V_{t+1} X - D\right)\left(W_{t+1} V_{t+1} X - D\right)^T \\
&= \left[\left(W_t - \frac{\beta_t}{PK} E_t Y_t^T\right)\left(V_t - \frac{\beta_t}{PK} W_t^T E_t X^T\right) X - D\right]
   \left[\left(W_t - \frac{\beta_t}{PK} E_t Y_t^T\right)\left(V_t - \frac{\beta_t}{PK} W_t^T E_t X^T\right) X - D\right]^T \\
&= \left[E_t - \frac{\beta_t}{PK}\left(W_t W_t^T E_t X^T X + E_t Y_t^T V_t X\right) + \frac{\beta_t^2}{(PK)^2} E_t Y_t^T W_t^T E_t X^T X\right] \\
&\quad \times \left[E_t - \frac{\beta_t}{PK}\left(W_t W_t^T E_t X^T X + E_t Y_t^T V_t X\right) + \frac{\beta_t^2}{(PK)^2} E_t Y_t^T W_t^T E_t X^T X\right]^T \\
&= E_t E_t^T - \frac{\beta_t}{PK}\left[E_t \left(W_t W_t^T E_t X^T X\right)^T + E_t \left(E_t Y_t^T V_t X\right)^T + W_t W_t^T E_t X^T X E_t^T + E_t Y_t^T V_t X E_t^T\right] \\
&\quad + \frac{\beta_t^2}{(PK)^2}\left[E_t \left(E_t Y_t^T W_t^T E_t X^T X\right)^T + E_t Y_t^T W_t^T E_t X^T X E_t^T + E_t Y_t^T V_t X \left(W_t W_t^T E_t X^T X\right)^T \right. \\
&\qquad \left. + W_t W_t^T E_t X^T X \left(E_t Y_t^T V_t X\right)^T + E_t Y_t^T V_t X \left(E_t Y_t^T V_t X\right)^T + W_t W_t^T E_t X^T X \left(W_t W_t^T E_t X^T X\right)^T\right] \\
&\quad - \frac{\beta_t^3}{(PK)^3}\left[W_t W_t^T E_t X^T X \left(E_t Y_t^T W_t^T E_t X^T X\right)^T + E_t Y_t^T V_t X \left(E_t Y_t^T W_t^T E_t X^T X\right)^T \right. \\
&\qquad \left. + E_t Y_t^T W_t^T E_t X^T X \left(W_t W_t^T E_t X^T X\right)^T + E_t Y_t^T W_t^T E_t X^T X \left(E_t Y_t^T V_t X\right)^T\right] \\
&\quad + \frac{\beta_t^4}{(PK)^4}\left[E_t Y_t^T W_t^T E_t X^T X \left(E_t Y_t^T W_t^T E_t X^T X\right)^T\right]
\end{aligned} \quad (28)$$
For simplification, omitting the subscript $t$, we have

$$J_{t+1} - J_t = \frac{1}{2PK}\left(A\beta^4 + B\beta^3 + C\beta^2 + M\beta\right) \quad (29)$$
where

$$A = \frac{1}{(PK)^4}\, \mathrm{Tr}\left[E Y^T W^T E X^T X \left(E Y^T W^T E X^T X\right)^T\right]$$

$$B = \frac{1}{(PK)^3}\, \mathrm{Tr}\left[W W^T E X^T X \left(E Y^T W^T E X^T X\right)^T + E Y^T V X \left(E Y^T W^T E X^T X\right)^T + E Y^T W^T E X^T X \left(W W^T E X^T X\right)^T + E Y^T W^T E X^T X \left(E Y^T V X\right)^T\right]$$

$$C = \frac{1}{(PK)^2}\, \mathrm{Tr}\left[E \left(E Y^T W^T E X^T X\right)^T + E Y^T W^T E X^T X E^T + E Y^T V X \left(W W^T E X^T X\right)^T + W W^T E X^T X \left(E Y^T V X\right)^T + E Y^T V X \left(E Y^T V X\right)^T + W W^T E X^T X \left(W W^T E X^T X\right)^T\right]$$

$$M = \frac{1}{PK}\, \mathrm{Tr}\left[E \left(W W^T E X^T X\right)^T + E \left(E Y^T V X\right)^T + W W^T E X^T X E^T + E Y^T V X E^T\right] \quad (30)$$
so that

$$g(\beta) = A\beta^4 + B\beta^3 + C\beta^2 + M\beta \quad (31)$$

$$\frac{\partial g}{\partial \beta} = 4A\left(\beta^3 + a\beta^2 + b\beta + c\right) \quad (32)$$

where $a = 3B/4A$, $b = 2C/4A$, and $c = M/4A$.
Lemma 2. For the solution of a general real cubic equation one uses the following. Let

$$f(x) = x^3 + ax^2 + bx + c, \qquad D = 18abc - 4a^3c + a^2b^2 - 4b^3 - 27c^2 \quad (33)$$

where $D$ is the discriminant of $f(x)$. Then we have the following:

(1) If $D < 0$, $f(x)$ has one real root.
(2) If $D \ge 0$, $f(x)$ has three real roots:
(a) if $D > 0$, $f(x)$ has three distinct real roots;
(b) if $D = 0$ and $6b - 2a^2 \ne 0$, $f(x)$ has one single root and one multiple root;
(c) if $D = 0$ and $6b - 2a^2 = 0$, $f(x)$ has one root of multiplicity three.
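The discriminant test of Lemma 2 is easy to check numerically. A small sketch (the function name `cubic_root_type` is an illustrative assumption):

```python
def cubic_root_type(a, b, c):
    """Classify the roots of f(x) = x^3 + a x^2 + b x + c using the
    discriminant D = 18abc - 4a^3 c + a^2 b^2 - 4b^3 - 27c^2 (Lemma 2)."""
    D = 18*a*b*c - 4*a**3*c + a**2*b**2 - 4*b**3 - 27*c**2
    if D < 0:
        return D, "one real root"
    if D > 0:
        return D, "three distinct real roots"
    return D, "repeated real roots"

# x^3 - x = x(x - 1)(x + 1): a = 0, b = -1, c = 0 gives D = 4 > 0
D, kind = cubic_root_type(0.0, -1.0, 0.0)
```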
Lemma 3. For the polynomial $g(\beta)$ given in (31), if the optimum $\beta = \beta_i$ satisfies $g(\beta_i) = \min\left(g(\beta_1), g(\beta_2), g(\beta_3)\right)$, $i \in \{1, 2, 3\}$, where $\beta_i$ is a real root of $\partial g / \partial \beta = 0$, then this optimum $\beta$ is the optimal learning rate and the learning process is stable.
Proof. To find the stable learning range of $\beta$, consider the Lyapunov function

$$V_t = J_t^2, \qquad \Delta V_t = J_{t+1}^2 - J_t^2 \quad (34)$$

The dynamical system is guaranteed to be stable if $\Delta V_t < 0$, that is, if $J_{t+1} - J_t = (1/2PK)\left(A\beta^4 + B\beta^3 + C\beta^2 + M\beta\right) < 0$ from (29), where $\beta$ is the learning rate; the input matrix remains the same during the whole training process. We therefore seek the range of $\beta$ satisfying $g(\beta) = A\beta^4 + B\beta^3 + C\beta^2 + M\beta < 0$. Since $\partial g / \partial \beta$ has at least one real root (Lemma 2), one of its roots must give the optimum of $g(\beta)$. Obviously, the minimum value of $g(\beta)$ gives the largest reduction in $J_t$ at each step of the learning process. Equation (31) shows that $g(\beta)$ has two or four real roots, one of them being $\beta = 0$ (Lemma 2), so the minimizing value of $\beta$ gives the largest error reduction between two successive steps. This minimum is obtained by differentiating (31) with respect to $\beta$, which from (32) gives

$$\frac{\partial g}{\partial \beta} = 4A\left(\beta^3 + a\beta^2 + b\beta + c\right) \quad (35)$$

where $a = 3B/4A$, $b = 2C/4A$, and $c = M/4A$. Solving $\partial g / \partial \beta = 0$ gives the $\beta$ that minimizes the error.
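The root-selection step of Lemma 3 can be sketched with a polynomial root finder; the function name `optimal_beta` and the toy coefficients are illustrative assumptions, not values from the paper:

```python
import numpy as np

def optimal_beta(A, B, C, M):
    """Optimal learning rate per Lemma 3: among the real roots of
    dg/dbeta = 4A b^3 + 3B b^2 + 2C b + M = 0, return the one that
    minimizes g(b) = A b^4 + B b^3 + C b^2 + M b."""
    g = lambda b: A*b**4 + B*b**3 + C*b**2 + M*b
    roots = np.roots([4.0*A, 3.0*B, 2.0*C, M])
    real = [r.real for r in roots if abs(r.imag) < 1e-9]
    return min(real, key=g)

# With A = 1, C = -1 (B = M = 0): g = beta^4 - beta^2,
# minimized at beta = +/- 1/sqrt(2), where g = -0.25
beta = optimal_beta(1.0, 0.0, -1.0, 0.0)
```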
5. Simulation Results

To demonstrate the effectiveness and merit of the proposed FCPN, simulation results are presented and discussed. Four different nonlinear dynamical systems and one general benchmark problem, the Box-Jenkins gas furnace model with time series data, are considered.
5.1. Performance Criteria. To assess the performance of the FCPN and compare it with the Dynamic and Back Propagation Networks, we evaluated the various errors given below. For the four dynamical models it is recommended to assess a subset of the following criteria.

Given $N$ pairs of data points $(y(k), \hat{y}(k))$, where $y(k)$ is the output of the system and $\hat{y}(k)$ is the output of the controller, the Maximum Absolute Error (MAE) is

$$J_{\text{MAE}} = J_{\max} = \max_{1 \le k \le N} \left|y(k) - \hat{y}(k)\right| \quad (36)$$
the Sum of Squared Errors (SSE) is

$$J_{\text{SSE}} = \sum_{k=1}^{N} \left(y(k) - \hat{y}(k)\right)^2 \quad (37)$$

the Mean Squared Error (MSE) is

$$J_{\text{MSE}} = \frac{1}{N} \sum_{k=1}^{N} \left(y(k) - \hat{y}(k)\right)^2 \quad (38)$$
Figure 4: (a) Mean Square Error of the system (Example 1) using FCPN (error of order $10^{-3}$ over 250 iterations). (b) Performance of the controller using the FCPN algorithm for Example 1 (target and FCPN output over 250 iterations).
and/or the Root Mean Squared Error (RMSE) is

$$J_{\text{RMSE}} = \sqrt{J_{\text{MSE}}} \quad (39)$$

The measure Variance Accounting Factor (VAF) is

$$J_{\text{VAF}} = \left(1 - \frac{\mathrm{Var}\left(y(k) - \hat{y}(k)\right)}{\mathrm{Var}\left(y(k)\right)}\right) \cdot 100 \quad (40)$$

A related measure is the Normalized Mean Squared Error (NMSE),

$$J_{\text{NMSE}} = \frac{\sum_{k=1}^{N} \left(y(k) - \hat{y}(k)\right)^2}{\sum_{k=1}^{N} \left(y(k) - \bar{y}\right)^2} \quad (41)$$

and the Best Fit Rate (BFR) is

$$J_{\text{BFR}} = \left(1 - \sqrt{\frac{\sum_{k=1}^{N} \left(y(k) - \hat{y}(k)\right)^2}{\sum_{k=1}^{N} \left(y(k) - \bar{y}\right)^2}}\right) \cdot 100 \quad (42)$$
Example 1. The nonlinear dynamical system [24] is governed by the following difference equation:

$$y(k+1) = 0.3y(k) + 0.6y(k-1) + f[u(k)] \quad (43)$$

The nonlinear function in the system in this case is $f(u) = u^3 + 0.3u^2 - 0.4u$, where $y$ and $u$ are uniformly distributed in $[-2, 2]$. The objective of Example 1 is to control the system to track the reference output, given as 250 sample data points. The model has two inputs, $y(k)$ and $u(k)$, and a single output, $y(k+1)$, and the system identification was initially performed with the system input uniformly distributed over $[-2, 2]$. The FCPN controller uses the two inputs $y(k)$ and $u(k)$ to produce the output $\hat{y}(k+1)$; the FCPN model governed by the difference equation $\hat{y}(k+1) = 0.3y(k) + 0.6y(k-1) + N[u(k)]$ was used. Figures 4(a) and 4(b) show the error $e(k+1) = y(k+1) - \hat{y}(k+1)$ and the outputs of the dynamical system and the FCPN.
Table 1: Calculated errors for Example 1.

NN model | MAE | SSE | MSE | RMSE | NMSE
FCPN | 1.3364 | 0.1143 | 4.5709e-004 | 0.0214 | 7.1627e-004
Dynamic | 5.8231 | 3.1918 | 0.0128 | 0.1130 | 0.0028
BPN | 38.0666 | 48.1017 | 0.4810 | 0.6936 | 0.1035
As can be seen from the figure, the identification error is small even when the input is changed to a sum of two sinusoids, $u(k) = \sin(2\pi k/250) + \sin(2\pi k/25)$, at $k = 250$.

Table 1 lists the various calculated errors of the different NN models for Example 1. It can be seen from the table that the errors calculated for FCPN are minimal compared to those of the DN and BPN.
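The plant of eq. (43) can be simulated directly to reproduce the reference data. A minimal sketch; the function name `plant` and the zero initial states are illustrative assumptions:

```python
import numpy as np

def plant(u, y_prev=0.0, y_curr=0.0):
    """Simulate eq. (43): y(k+1) = 0.3 y(k) + 0.6 y(k-1) + f[u(k)],
    with f(u) = u^3 + 0.3 u^2 - 0.4 u and initial states y(k-1), y(k)."""
    f = lambda v: v**3 + 0.3*v**2 - 0.4*v
    y = [y_prev, y_curr]
    for uk in u:
        y.append(0.3*y[-1] + 0.6*y[-2] + f(uk))
    return np.array(y[2:])

# 250 reference samples driven by a sinusoidal input, as in the example
y_ref = plant(np.sin(2*np.pi*np.arange(250)/250))
```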
Example 2. The nonlinear dynamical system [24] is governed by the following difference equation:

$$y(k+1) = f\left[y_p(k), y_p(k-1)\right] + u(k) \quad (44)$$

where

$$f\left[y_p(k), y_p(k-1)\right] = \frac{y_p(k)\, y_p(k-1)\left[y_p(k) + 2.5\right]}{1 + y_p^2(k) + y_p^2(k-1)} \quad (45)$$
The objective of (44) is to control the system to track the reference output, given 100 sample data points. The model has two inputs, $y(k)$ and $u(k)$, and a single output, $y(k+1)$, and the system identification was initially performed with the system input uniformly distributed over $[-2, 2]$. The training and testing samples each contain 100 sample data points. The FCPN controller uses the two inputs $y(k)$ and $u(k)$ to produce the output $\hat{y}(k+1)$. The FCPN
Figure 5: (a) Mean Square Error of the system (Example 2) using FCPN (error of order $10^{-4}$ over 100 iterations). (b) Performance of the controller using the FCPN algorithm for Example 2 (target and FCPN output over 100 iterations).
Table 2: Calculated errors for Example 2.

NN model | MAE | SSE | MSE | RMSE | NMSE
FCPN | 0.1956 | 0.0032 | 3.2328e-005 | 0.0057 | 2.7164e-005
Dynamic | 9.5152 | 2.9856 | 0.0299 | 0.1728 | 0.0308
BP | 10.1494 | 4.5163 | 0.0452 | 0.2125 | 0.0323
network discussed earlier is used to identify the system from input-output data and is described by the equation

$$\hat{y}(k+1) = N\left[y(k), y(k-1)\right] + u(k) \quad (46)$$

For the FCPN, the identification process involves the adjustment of the weights of $N$ using FCL. The FCPN identifier needs some prior information concerning the input-output behavior of the system before identification can be undertaken. FCL is used to adjust the weights of the neural network so that the error $e(k+1) = y(k+1) - \hat{y}(k+1)$ is minimized, as shown in Figure 5(a). The behavior of the FCPN model for (46) is shown in Figure 5(b). The input $u(k)$ was assumed to be a random signal uniformly distributed in the interval $[-2, 2]$, and the weights in the neural network were adjusted accordingly.
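The plant of eqs. (44)-(45) can likewise be simulated to generate identification data. A minimal sketch; the function name `plant2` and the zero initial states are illustrative assumptions:

```python
import numpy as np

def plant2(u, y_prev=0.0, y_curr=0.0):
    """Simulate eqs. (44)-(45): y(k+1) = f[y(k), y(k-1)] + u(k), where
    f = y(k) y(k-1) [y(k) + 2.5] / (1 + y(k)^2 + y(k-1)^2)."""
    y = [y_prev, y_curr]
    for uk in u:
        yk, ykm1 = y[-1], y[-2]
        f = yk * ykm1 * (yk + 2.5) / (1.0 + yk**2 + ykm1**2)
        y.append(f + uk)
    return np.array(y[2:])

# 100 training samples driven by inputs drawn uniformly from [-2, 2]
y_train = plant2(np.random.uniform(-2.0, 2.0, 100))
```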
Table 2 lists the various calculated errors of the different NN models for Example 2. It can be seen from the table that the errors calculated for FCPN are minimal compared to those of the DN and BPN.
Example 3. The nonlinear dynamical system [23] is governed by the following difference equation:

$$y(k+1) = \frac{y(k)}{1 + y(k)^2} + u^3(k) \quad (47)$$
which corresponds to $f[y(k)] = y(k)/(1 + y(k)^2)$ and $g[u(k)] = u^3(k)$. The FCPN network equation (47) is used to identify the system from input-output data. The weights in the
Table 3: Calculated errors for Example 3.

NN model | MAE | SSE | MSE | RMSE | NMSE
FCPN | 0.3519 | 0.0032 | 3.1709e-005 | 0.0056 | 5.6371e-006
Dynamic | 5.5652 | 3.6158 | 0.0362 | 0.1902 | 0.0223
BP | 42.6524 | 46.9937 | 0.4699 | 0.6855 | 0.2892
neural network were adjusted at every instant of the discrete time steps.

The objective of (47) is to control the system to track the reference output, given 100 sample data points. The model has two inputs, $y(k)$ and $u(k)$, and a single output, $y(k+1)$, and the system identification was initially performed with the system input uniformly distributed over $[-2, 2]$. The training and testing samples contain 250 sample data points. The FCPN controller uses the two inputs $y(k)$ and $u(k)$ to produce the output $\hat{y}(k+1)$. The input data are generated using a uniform distribution in the interval $[-2, 2]$. The function $f$ obtained by the FCPN is used to adjust the weights of the neural network so that the error $e(k+1) = y(k+1) - \hat{y}(k+1)$ is minimized, as shown in Figure 6(a). In Figure 6(b) the desired outputs of the system as well as the FCPN model are shown and are seen to be indistinguishable.
Table 3 lists the various calculated errors of the different NN models for Example 3. It can be seen from the table that the errors calculated for FCPN are minimal compared to those of the DN and BPN.
Example 4. The nonlinear dynamical system [24] is governed by difference equation (48); the generalized form of this equation is Model IV. In this example the system is assumed to be of the form

$$y_p(k+1) = f\left[y_p(k), y_p(k-1), y_p(k-2), u(k), u(k-1)\right] \quad (48)$$
Figure 6: (a) Mean Square Error of the system (Example 3) using FCPN (error of order $10^{-4}$ over 100 iterations). (b) Performance of the controller using the FCPN algorithm for Example 3 (target and FCPN output over 100 iterations).
Figure 7: (a) Mean Square Error of the system (Example 4) using FCPN (error of order $10^{-4}$ over 800 iterations). (b) Performance of the controller using the FCPN algorithm for Example 4 (target and FCPN output over 800 iterations).
where the unknown function $f$ has the form

$$f\left[x_1, x_2, x_3, x_4, x_5\right] = \frac{x_1 x_2 x_3 x_5 \left(x_3 - 1\right) + x_4}{1 + x_3^2 + x_2^2} \quad (49)$$
The identification model is approximated by the FCPN. Figure 7(b) shows the output of the system and the model when the identification procedure was carried out with a random input signal uniformly distributed in the interval $[-1, 1]$. The performance of the model is studied and the error $e(k+1) = y_p(k+1) - \hat{y}_p(k+1)$ is minimized, as shown in Figure 7(a). In Figure 7(b) the outputs of the system as well as the FCPN model are shown. The input to the system and the identified model is given by $u(k) = \sin(2\pi k/250)$ for $k \le 500$ and $u(k) = 0.8\sin(2\pi k/250) + 0.2\sin(2\pi k/25)$ for $k > 500$.
Table 4 lists the various calculated errors of the different NN models for Example 4. It can be seen from the table that the errors calculated for FCPN are minimal compared to those of the DN and BPN.
Table 4: Calculated errors for Example 4.

NN model | MAE | SSE | MSE | RMSE | NMSE
FCPN | 3.04348 | 2.1931 | 0.0027 | 0.0524 | 0.0186
Dynamic | 3.28952 | 6.4059 | 0.0080 | 0.0895 | 0.0276
BP | 5.31988 | 13.4174 | 0.0168 | 0.1295 | 0.0844
Example 5. In this example we used the Box-Jenkins time series [32] of 296 pairs of data measured from a gas furnace system, with the single input $u(t)$ being the gas flow rate and the single output $y(t)$ being the CO₂ concentration in the outlet gas. The training and testing samples contained 196 and 100 data points, respectively. The FCPN uses the two inputs, the current state $y(k)$ and the desired state $y_d(k)$, to produce the output $\hat{y}(k)$ [33].

Figure 8(a) shows the mean squared control errors of the FCPN method. Figure 8(b) shows the performance of the controller with the FCPN algorithm. The results show that the FCPN algorithm enables an appropriate approximation using
Figure 8: (a) Mean Square Error of the system (Example 5) using FCPN (error between about 0.44 and 0.60 over 100 iterations). (b) Performance of the controller using the FCPN algorithm for Example 5 (desired and FCPN output, CO₂ concentration between 45 and 60, over 100 iterations).
Table 5: BFR (%) values of NN models for various examples.

NN model    Example 1   Example 2   Example 3   Example 4
FCPN        97.32       99.48       99.76       86.35
DN          94.69       79.32       85.08       83.39
BPN         67.83       79.28       46.2234     70.94
fuzzy learning as given in (11) on the Box-Jenkins time series data, based on the calculation of the BMN.

Table 5 shows the BFR (%) of the NN models for Examples 1–4. It can be observed from Table 5 that the Best Fit Rate of FCPN is maximum compared with DN and BPN, which shows the better performance of the FCPN network for nonlinear systems.
6 Conclusions
FCPN is a neural network control method that was developed and presented. It is based on combining the FCL algorithm with CPN. The key idea explored is the use of the FCL algorithm for training the instar-outstar weights. The performance of FCPN-based training was tested on four nonlinear dynamical systems and on the Box-Jenkins time series data and compared with the Dynamic Network and the standard Back Propagation algorithm. The comparative performances of the FCPN algorithm and the Dynamic Network, such as the number of iterations and performance measures like MAE, MSE, SSE, NMSE, and BFR, are summarized in Tables 1–4. It can be seen that the FCPN algorithm gives minimum errors compared with the Dynamic Network and the standard Back Propagation Network for all four models of nonlinear dynamical systems and the Box-Jenkins time series data. The results obtained from FCPN were compared over the various error measures, clearly showing that FCPN performs much better for the control of nonlinear dynamical systems.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The authors gratefully acknowledge the financial assistance provided by the All India Council for Technical Education (AICTE) in the form of a Research Promotion Scheme (RPS) project in 2012.
References
[1] T. Söderström and P. Stoica, System Identification, Prentice Hall, New York, NY, USA, 1989.
[2] S. A. Billings, Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains, Wiley, Chichester, UK, 2013.
[3] M. Liu, "Decentralized control of robot manipulators: nonlinear and adaptive approaches," IEEE Transactions on Automatic Control, vol. 44, no. 2, pp. 357–363, 1999.
[4] C.-M. Lin, A.-B. Ting, M.-C. Li, and T.-Y. Chen, "Neural-network-based robust adaptive control for a class of nonlinear systems," Neural Computing and Applications, vol. 20, no. 4, pp. 557–563, 2011.
[5] I. Rivals and L. Personnaz, "Nonlinear internal model control using neural networks: application to processes with delay and design issues," IEEE Transactions on Neural Networks, vol. 11, no. 1, pp. 80–90, 2000.
[6] I. Kanellakopoulos, P. V. Kokotovic, and A. S. Morse, "Systematic design of adaptive controllers for feedback linearizable systems," IEEE Transactions on Automatic Control, vol. 36, no. 11, pp. 1241–1253, 1991.
[7] P. Kokotovic, "The joy of feedback: nonlinear and adaptive," IEEE Control Systems, vol. 12, no. 3, pp. 7–17, 1992.
[8] H. Elmali and N. Olgac, "Robust output tracking control of nonlinear MIMO systems via sliding mode technique," Automatica, vol. 28, no. 1, pp. 145–151, 1992.
[9] N. Sadati and R. Ghadami, "Adaptive multi-model sliding mode control of robotic manipulators using soft computing," Neurocomputing, vol. 71, no. 13–15, pp. 2702–2710, 2008.
[10] A. Kroll and H. Schulte, "Benchmark problems for nonlinear system identification and control using soft computing methods: need and overview," Applied Soft Computing, vol. 25, pp. 496–513, 2014.
[11] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, no. 5, pp. 359–366, 1989.
[12] A. Bortoletti, C. Di Fiore, S. Fanelli, and P. Zellini, "A new class of quasi-Newtonian methods for optimal learning in MLP-networks," IEEE Transactions on Neural Networks, vol. 14, no. 2, pp. 263–273, 2003.
[13] G. Lera and M. Pinzolas, "Neighborhood based Levenberg-Marquardt algorithm for neural network training," IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1200–1203, 2002.
[14] A. K. Shah and D. M. Adhyaru, "Clustering based multiple model control of hybrid dynamical systems using HJB solution," Applied Soft Computing, vol. 31, pp. 103–117, 2015.
[15] Y. Cui, H. Zhang, Y. Wang, and Z. Zhang, "Adaptive neural dynamic surface control for a class of uncertain nonlinear systems with disturbances," Neurocomputing, vol. 165, pp. 152–158, 2015.
[16] R. Rakkiyappan, N. Sakthivel, and J. Cao, "Stochastic sampled-data control for synchronization of complex dynamical networks with control packet loss and additive time-varying delays," Neural Networks, vol. 66, pp. 46–63, 2015.
[17] B. Kiumarsi, F. L. Lewis, and D. S. Levine, "Optimal control of nonlinear discrete time-varying systems using a new neural network approximation structure," Neurocomputing, vol. 156, pp. 157–165, 2015.
[18] B. M. Al-Hadithi, A. Jimenez, and R. G. Lopez, "Fuzzy optimal control using generalized Takagi-Sugeno model for multivariable nonlinear systems," Applied Soft Computing, vol. 30, pp. 205–213, 2015.
[19] M. A. Gonzalez-Olvera and Y. Tang, "Identification of nonlinear discrete systems by a state-space recurrent neurofuzzy network with a convergent algorithm," Neurocomputing, vol. 148, pp. 318–325, 2015.
[20] J. F. de Canete, P. Del Saz-Orozco, I. Garcia-Moral, and S. Gonzalez-Perez, "Indirect adaptive structure for multivariable neural identification and control of a pilot distillation plant," Applied Soft Computing, vol. 12, no. 9, pp. 2728–2739, 2012.
[21] M. Cai and Z. Xiang, "Adaptive neural finite-time control for a class of switched nonlinear systems," Neurocomputing, vol. 155, pp. 177–185, 2015.
[22] Y. Li, S. Tong, and T. Li, "Adaptive fuzzy backstepping control design for a class of pure-feedback switched nonlinear systems," Nonlinear Analysis: Hybrid Systems, vol. 16, pp. 72–80, 2015.
[23] T.-Y. Kuc, J. S. Lee, and K. Nam, "An iterative learning control theory for a class of nonlinear dynamic systems," Automatica, vol. 28, no. 6, pp. 1215–1221, 1992.
[24] K. S. Narendra and K. Parthasarathy, "Neural networks and dynamical systems," International Journal of Approximate Reasoning, vol. 6, no. 2, pp. 109–131, 1992.
[25] R. Hecht-Nielsen, "Theory of the back propagation neural network," Neural Networks, vol. 1, pp. 593–605, 1989.
[26] M. T. Hagan, H. B. Demuth, M. H. Beale, and O. De Jesus, Neural Network Design, Cengage Learning, 2nd edition, 2014.
[27] F.-J. Chang and Y.-C. Chen, "A counterpropagation fuzzy-neural network modeling approach to real time streamflow prediction," Journal of Hydrology, vol. 245, no. 1–4, pp. 153–164, 2001.
[28] A. Dwivedi, N. S. C. Bose, A. Kumar, P. Kandula, D. Mishra, and P. K. Kalra, "A novel hybrid image compression technique: wavelet-MFOCPN," in Proceedings of the 9th Asian Symposium on Information Display (ASID '06), pp. 492–495, 2006.
[29] C. J. C. Burges, P. Simard, and H. S. Malvar, Improving Wavelet Image Compression with Neural Networks, Microsoft Research, Redmond, Wash, USA, 2001.
[30] D. Woods, "Back and counter propagation aberrations," in Proceedings of the IEEE International Conference on Neural Networks, pp. 473–479, 1988.
[31] D. Mishra, N. S. C. Bose, A. Tolambiya et al., "Color image compression with modified forward-only counterpropagation neural network: improvement of the quality using different distance measures," in Proceedings of the 9th International Conference on Information Technology (ICIT '06), pp. 139–140, IEEE, Bhubaneswar, India, December 2006.
[32] G. E. Box and G. M. Jenkins, Time Series Analysis: Forecasting and Control, Holden Day, San Francisco, Calif, USA, 1970.
[33] J. Kim and N. Kasabov, "HyFIS: adaptive neuro-fuzzy inference systems and their application to nonlinear dynamical systems," Neural Networks, vol. 12, no. 9, pp. 1301–1319, 1999.
[Figure 2: Simple architecture of the Dynamic Adaptive Linear Neuron: input p(t) passes through tapped delays D with weights iw1,1(0), iw1,1(1), iw1,1(2) into a summing junction Σ and transfer function f, producing output a(t).]
distance, and the performance of the network is improved using a linear topology. Using the Euclidean distance between input and weight vectors, we evaluate the BMN. In Phase II the desired response is obtained by adapting the weights from the competitive layer to the output layer [27]. Let x = [x_1, x_2, ..., x_n] and y = [y*_1, y*_2, ..., y*_m] be the input and desired vectors, respectively; let v_ij be the weight between the input layer and the competitive layer, where 1 ≤ i ≤ n and 1 ≤ j ≤ p; and let w_jk be the weight between the competitive layer and the output layer, where 1 ≤ k ≤ m. The Euclidean distance between input vector x and weight vector v_ij is

D_j = Σ_{i=1}^{n} (x_i − v_ij)², where j = 1, 2, ..., p.  (10)
The architecture of CPN is shown in Figure 3
3.3. Fuzzy Competitive Learning (FCL). FCL is used for adjusting the instar-outstar weights and updating the BMN for discrete-time nonlinear dynamical systems. Learning of CPN is divided into two phases: the Fuzzy Competitive Learning phase (unsupervised) and the Grossberg Learning phase (supervised) [28–31]. CPN is an efficient network for mapping an input vector of size n into clusters of size m. As in a traditional self-organizing map, the weight vector of every node is treated as a sample vector compared against the input pattern. The process determines which weight vector (say that of node J) is most similar to the input pattern, and that node is chosen as the BMN. The first phase is based on the closeness between weight vector and input vector; the second phase is the weight update between the competitive and output layers to obtain the desired response.
Let v_ij be the weight from input node i to neuron j, and let w_jk be the weight between competitive node j and neuron k. We propose a new learning-rate calculation method for the weight update:

∝(J) = Σ_{i=1}^{n} (v_iJ − x_i)² / (Σ_{j=1}^{m} Σ_{i=1}^{n} (v_ij − x_i)²),  (11)

where J denotes the index of the BMN and ∝ is known as the Fuzzy Competitive Learning rate.
The Fuzzy Competitive Learning phase is described as follows.

3.3.1. Training Algorithm of FCL

Phase I (determination of the BMN). The steps for training CPN are as follows.

Step 1. Initialize the instar-outstar weights.

Step 2. Perform Steps 3–8 until the stopping criterion for Phase I training is met. The stopping condition may be a fixed number of epochs or the learning rate having reduced to a negligible value.
Step 3. Perform Steps 4–6 for each input training vector X.

Step 4. Set the X-input layer activation to vector X.

Step 5. Compute the Best Matched Node (BMN) J: using the Euclidean distance, find the node Z_J whose distance from the input pattern is minimum. The Euclidean distance is calculated as

D_j = Σ_{i=1}^{n} (x_i − v_ij)², where j = 1, 2, ..., p.  (12)
The minimum net input is

z_inJ = Σ_{i=1}^{n} x_i v_iJ.  (13)
Step 6. Calculate the Fuzzy Competitive Learning rate (11) for the weight update using the BMN.
Step 7. Update the weights for unit Z_J:

v_iJ(new) = v_iJ(old) + ∝ h(J, i, j)(x_i − v_iJ(old)),  (14)

where i = 1, 2, 3, ..., n and h(·, ·) is the neighborhood function around the BMN:

h(J, i, j) = exp(−‖v_ij − v_iJ‖² / 2σ²(t)), where h(J, i, j) ∈ [0, 1],  (15)

σ(t) = σ₀ exp(−t/T).  (16)
Step 8. Test the stopping criterion.
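The Phase I steps above can be sketched as a single update routine. The array shapes and the SOM-style application of the neighborhood-weighted update to every node are assumptions on top of the text, which states the update only for the BMN column:

```python
import numpy as np

def fcl_phase1_step(V, x, sigma):
    """One Phase I pass of Fuzzy Competitive Learning (Steps 3-7).

    V : (n, p) instar weight matrix, columns are competitive nodes.
    x : (n,) input vector.
    Uses eq. (12) for the BMN, eq. (11) for the fuzzy learning rate,
    eq. (15) for the neighborhood, and eq. (14) for the update.
    """
    d = ((V - x[:, None]) ** 2).sum(axis=0)        # D_j, squared distances (12)
    J = int(np.argmin(d))                          # Best Matched Node
    alpha = d[J] / d.sum()                         # fuzzy learning rate (11)
    # Gaussian neighborhood of each node around the BMN, eq. (15)
    h = np.exp(-((V - V[:, [J]]) ** 2).sum(axis=0) / (2.0 * sigma ** 2))
    V_new = V + alpha * h[None, :] * (x[:, None] - V)   # instar update (14)
    return V_new, J, alpha
```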
Phase II (to obtain the desired response)

Step 1. Set the activations of the input and output layers to (x, y), respectively.

Step 2. Update the BMN (J) (Step 5 from Phase I). Also update the weights into unit Z_J as

v_iJ(new) = v_iJ(old) + αh(J, i, j)[x_i − v_iJ(old)].  (17)
[Figure 3: Architecture of the Counter Propagation Network — input layer x_1, ..., x_n; competitive layer Z_1, ..., Z_p containing the BMN; and output layer Y_1, ..., Y_m. The instar weights v_ij are trained unsupervised with learning rate ∝, and the outstar weights w_jk are trained supervised with learning rate β toward the desired outputs.]
Step 3. Update the weights from node Z_J to the output units as

w_Jk(new) = w_Jk(old) + βh(J, j, k)[y_k − w_Jk(old)], for 1 ≤ k ≤ m.  (18)

Step 4. The learning rate β is calculated as

β(J) = Σ_{k=1}^{m} (y_k − w_Jk)² / (Σ_{j=1}^{p} Σ_{k=1}^{m} (y_k − w_jk)²).  (19)
Step 5. Decrease the learning rate such that β = β − ε, where ε is a small positive number.

Step 6. Test the stopping condition for Phase II (i.e., a fixed number of epochs or the learning rate having reduced to a negligible value).
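The Phase II outstar update in (18)-(19) can be sketched as follows; taking the neighborhood factor h as 1 for the winning node is an assumption:

```python
import numpy as np

def grossberg_phase2_step(W, J, y):
    """One Phase II (outstar/Grossberg) update, Steps 3-4.

    W : (p, m) hidden-output weight matrix, rows indexed by competitive node.
    J : index of the BMN.   y : (m,) desired output vector.
    beta follows eq. (19); the update follows eq. (18) with h = 1.
    """
    num = ((y - W[J]) ** 2).sum()          # sum_k (y_k - w_Jk)^2
    den = ((y[None, :] - W) ** 2).sum()    # sum_j sum_k (y_k - w_jk)^2
    beta = num / den                       # eq. (19)
    W_new = W.copy()
    W_new[J] = W[J] + beta * (y - W[J])    # eq. (18)
    return W_new, beta
```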
3.3.2. Testing Phase (to Test FCPN)

Step 1. Set the initial weights, that is, the weights obtained during training.

Step 2. Apply FCPN to the input vector X.

Step 3. Find the unit J that is closest to vector X.

Step 4. Set the activations of the output units.

Step 5. Apply the activation function at y_k, where y_k = Σ_j z_j w_jk.
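The testing phase reduces to a winner-take-all forward pass; a minimal sketch, with the one-hot competitive activation assumed:

```python
import numpy as np

def fcpn_predict(V, W, x):
    """Testing-phase forward pass (Steps 1-5): winner-take-all then outstar.

    With z a one-hot activation of the BMN, y_k = sum_j z_j w_jk
    reduces to the BMN's outstar row of W.
    """
    d = ((V - x[:, None]) ** 2).sum(axis=0)   # distance of x to each node
    J = int(np.argmin(d))                     # unit J closest to x
    z = np.zeros(V.shape[1])                  # competitive layer activation
    z[J] = 1.0
    y = z @ W                                 # y_k = sum_j z_j w_jk
    return y
```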
4. Dynamical Learning for CPN

Learning stability is a fundamental issue for CPN; however, there are few studies on the learning issue of FNN. BPN learning is not always successful because of its sensitivity to learning parameters, and the optimal learning rate changes during the training process. Dynamical learning of CPN is carried out using Lemmas 1, 2, and 3.

Assumption. Let us assume that φ(x) is a sigmoid function, that is, a bounded, continuous, and increasing function. Since the input to the neural network in this model is bounded, we consider Lemmas 1 and 2 given below.
Lemma 1. Let φ(x) be a sigmoid function, let Ω be a compact set in Rⁿ, and let f: Rⁿ → R be a continuous function on Ω. For arbitrary ε > 0 there exist an integer N and real constants c_i, θ_i, and w_ij (i = 1, 2, ..., N; j = 1, 2, ..., n) such that

f̂(x_1, x_2, ..., x_n) = Σ_{i=1}^{N} c_i φ(Σ_{j=1}^{n} w_ij x_j − θ_i)  (20)

satisfies

max_{x∈Ω} ‖f(x_1, x_2, ..., x_n) − f̂(x_1, x_2, ..., x_n)‖ < ε.  (21)
Using Lemma 1, dynamical learning for a three-layer CPN can be formulated, where the hidden-layer transfer functions are φ(x) and the output-layer transfer function is linear. Let all vectors be column vectors, and let the subscripts in d_kp refer to the kth component of the pth desired output vector.
Let X = [x_1, x_2, ..., x_P] ∈ R^{L×P}, Y = [y_1, y_2, ..., y_P] ∈ R^{H×P}, O = [o_1, o_2, ..., o_P] ∈ R^{K×P}, and D = [d_1, d_2, ..., d_P] ∈ R^{K×P} be the input, hidden, output, and desired matrices, respectively, where L, H, and K denote the numbers of input, hidden, and output layer neurons. Let V and W represent the input-hidden and hidden-output weight matrices, respectively. The objective of network training is to minimize the error function

J = (1/2PK) Σ_{p=1}^{P} Σ_{k=1}^{K} (o_kp − d_kp)².  (22)
4.1. Optimal Learning of Dynamical System. Consider a three-layer CPN; the network error matrix is defined by the difference between the FCPN output and the desired output at any iteration t:

E_t = O_t − D = W_t Y_t − D = W_t V_t X − D.  (23)

The objective of the network, the minimization of the error, is

J = (1/2PK) Tr(E_t E_t^T),  (24)
where Tr represents the trace of a matrix. Using the gradient descent method, the updated weights are given by

W_{t+1} = W_t − β_t ∂J/∂W_t,
V_{t+1} = V_t − β_t ∂J/∂V_t,  (25)

or

W_{t+1} = W_t − (β_t/PK) E_t Y_t^T,
V_{t+1} = V_t − (β_t/PK) W_t^T E_t X^T.  (26)
Using (23)–(26), we have

E_{t+1} E_{t+1}^T = (W_{t+1} Y_{t+1} − D)(W_{t+1} Y_{t+1} − D)^T.  (27)
To obtain the minimum error for the multilayer FNN, expanding the above equation and simplifying, we have

J_{t+1} − J_t = (1/2PK) g(β),

with

E_{t+1} E_{t+1}^T = (W_{t+1} V_{t+1} X − D)(W_{t+1} V_{t+1} X − D)^T

= [(W_t − (β_t/PK) E_t Y_t^T)(V_t − (β_t/PK) W_t^T E_t X^T) X − D] · [(W_t − (β_t/PK) E_t Y_t^T)(V_t − (β_t/PK) W_t^T E_t X^T) X − D]^T

= [E_t − (β_t/PK)(W_t W_t^T E_t X^T X + E_t Y_t^T V_t X) + (β_t²/(PK)²) E_t Y_t^T W_t^T E_t X^T X] · [E_t − (β_t/PK)(W_t W_t^T E_t X^T X + E_t Y_t^T V_t X) + (β_t²/(PK)²) E_t Y_t^T W_t^T E_t X^T X]^T

= E_t E_t^T
− (β_t/PK) [E_t (W_t W_t^T E_t X^T X)^T + E_t (E_t Y_t^T V_t X)^T + W_t W_t^T E_t X^T X E_t^T + E_t Y_t^T V_t X E_t^T]
+ (β_t²/(PK)²) [E_t (E_t Y_t^T W_t^T E_t X^T X)^T + E_t Y_t^T W_t^T E_t X^T X E_t^T + E_t Y_t^T V_t X (W_t W_t^T E_t X^T X)^T + W_t W_t^T E_t X^T X (E_t Y_t^T V_t X)^T + E_t Y_t^T V_t X (E_t Y_t^T V_t X)^T + W_t W_t^T E_t X^T X (W_t W_t^T E_t X^T X)^T]
− (β_t³/(PK)³) [W_t W_t^T E_t X^T X (E_t Y_t^T W_t^T E_t X^T X)^T + E_t Y_t^T V_t X (E_t Y_t^T W_t^T E_t X^T X)^T + E_t Y_t^T W_t^T E_t X^T X (W_t W_t^T E_t X^T X)^T + E_t Y_t^T W_t^T E_t X^T X (E_t Y_t^T V_t X)^T]
+ (β_t⁴/(PK)⁴) [E_t Y_t^T W_t^T E_t X^T X (E_t Y_t^T W_t^T E_t X^T X)^T].  (28)
For simplification, omitting the subscript t, we have

J_{t+1} − J_t = (1/2PK)(Aβ⁴ + Bβ³ + Cβ² + Mβ),  (29)
where

A = (1/(PK)⁴) Tr[E Y^T W^T E X^T X (E Y^T W^T E X^T X)^T],

B = (1/(PK)³) Tr[W W^T E X^T X (E Y^T W^T E X^T X)^T + E Y^T V X (E Y^T W^T E X^T X)^T + E Y^T W^T E X^T X (W W^T E X^T X)^T + E Y^T W^T E X^T X (E Y^T V X)^T],

C = (1/(PK)²) Tr[E (E Y^T W^T E X^T X)^T + E Y^T W^T E X^T X E^T + E Y^T V X (W W^T E X^T X)^T + W W^T E X^T X (E Y^T V X)^T + E Y^T V X (E Y^T V X)^T + W W^T E X^T X (W W^T E X^T X)^T],

M = (1/PK) Tr[E (W W^T E X^T X)^T + E (E Y^T V X)^T + W W^T E X^T X E^T + E Y^T V X E^T],  (30)
so that

g(β) = Aβ⁴ + Bβ³ + Cβ² + Mβ,  (31)

∂g/∂β = 4A(β³ + aβ² + bβ + c),  (32)

where a = 3B/4A, b = 2C/4A, and c = M/4A.
Lemma 2. For the solution of a general real cubic equation one uses the following. Let

f(x) = x³ + ax² + bx + c,
D = 18abc − 4a³c + a²b² − 4b³ − 27c²,  (33)

where D is the discriminant of f(x). Then we have the following:

(1) If D < 0, f(x) has one real root.
(2) If D ≥ 0, f(x) has three real roots:
  (a) if D > 0, f(x) has three distinct real roots;
  (b) if D = 0 and 6b − 2a² ≠ 0, f(x) has one single root and one multiple root;
  (c) if D = 0 and 6b − 2a² = 0, f(x) has one root of multiplicity three.
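Lemma 2's test can be sketched directly; the discriminant used here is the standard one for a monic cubic, which is how the garbled (33) is read, so treat the exact formula as an assumption:

```python
def cubic_discriminant(a, b, c):
    """Discriminant of the monic cubic x^3 + a x^2 + b x + c (Lemma 2)."""
    return 18*a*b*c - 4*a**3*c + a**2*b**2 - 4*b**3 - 27*c**2

def count_distinct_real_roots(a, b, c):
    """Number of distinct real roots, following the case split of Lemma 2."""
    D = cubic_discriminant(a, b, c)
    if D < 0:
        return 1                    # one real root plus a complex pair
    if D > 0:
        return 3                    # three distinct real roots
    # D == 0: either a triple root, or one single and one double root
    return 1 if 6*b - 2*a**2 == 0 else 2
```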
Lemma 3. For the polynomial g(β) given in (31), if the optimum β = β_i is such that g(β_i) = min(g(β_1), g(β_2), g(β_3)), i ∈ {1, 2, 3}, where the β_i are the real roots of ∂g/∂β = 0, then this optimum β is the optimal learning rate and the learning process is stable.
Proof. To find the stable learning range of β, consider the Lyapunov function

V_t = J_t², ΔV_t = J_{t+1}² − J_t².  (34)

The dynamical system is guaranteed to be stable if ΔV_t < 0, that is, if J_{t+1} − J_t = (1/2PK)(Aβ⁴ + Bβ³ + Cβ² + Mβ) < 0 by (29), where β is the learning rate; note that in the training process the input matrix remains the same throughout. We must therefore find the range of β satisfying g(β) = Aβ⁴ + Bβ³ + Cβ² + Mβ < 0. Since ∂g/∂β has at least one real root (Lemma 2), one of its roots must give the optimum of g(β). Obviously the minimum value of g(β) gives the largest reduction in J_t at each step of the learning process. Equation (31) shows that g(β) has two or four real roots, one of them being β = 0 (Lemma 2), so the β that minimizes g gives the largest error reduction between two successive steps; this minimum is obtained by differentiating (31) with respect to β, which from (32) gives

∂g/∂β = 4A(β³ + aβ² + bβ + c),  (35)

where a = 3B/4A, b = 2C/4A, and c = M/4A. Solving ∂g/∂β = 0 gives the β which minimizes the error.
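The procedure of Lemma 3 — among the real roots of ∂g/∂β, pick the one minimizing g — can be sketched with numpy; a minimal sketch of the selection step only:

```python
import numpy as np

def optimal_learning_rate(A, B, C, M):
    """Return the beta minimizing g(b) = A b^4 + B b^3 + C b^2 + M b
    among the real roots of dg/db = 4A b^3 + 3B b^2 + 2C b + M (Lemma 3)."""
    g = lambda b: A*b**4 + B*b**3 + C*b**2 + M*b
    roots = np.roots([4.0*A, 3.0*B, 2.0*C, M])    # roots of the derivative
    real = [r.real for r in roots if abs(r.imag) < 1e-9]
    return min(real, key=g)                       # root giving the smallest g
```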
5. Simulation Results

To demonstrate the effectiveness and merit of the proposed FCPN, simulation results are presented and discussed. Four different nonlinear dynamical systems and one general benchmark problem, the Box-Jenkins gas furnace model with time series data, are considered.

5.1. Performance Criteria. To assess the performance of FCPN and compare it with the Dynamic and Back Propagation Networks, we evaluate the errors given below. For the four dynamical models it is recommended to assess a subset of the following criteria.
Given N pairs of data points (y(k), ŷ(k)), where y(k) is the output of the system and ŷ(k) is the output of the controller, the Maximum Absolute Error (MAE) is

J_MAE = J_max = max_{1≤k≤N} |y(k) − ŷ(k)|;  (36)
the Sum of Squared Errors (SSE) is

J_SSE = Σ_{k=1}^{N} (y(k) − ŷ(k))²,  (37)

and the Mean Squared Error (MSE) is

J_MSE = (1/N) Σ_{k=1}^{N} (y(k) − ŷ(k))².  (38)
[Figure 4: (a) Mean Square Error of the system (Example 1) using FCPN, on the order of 10⁻³, over 250 iterations. (b) Performance of the controller using the FCPN algorithm for Example 1 (target vs. FCPN output).]
and/or the Root Mean Squared Error (RMSE) is

J_RMSE = √J_MSE.  (39)

The measure Variance Accounting Factor (VAF) is

J_VAF = (1 − Var(y(k) − ŷ(k)) / Var(y(k))) · 100.  (40)

A related measure is the Normalized Mean Squared Error (NMSE),

J_NMSE = Σ_{k=1}^{N} (y(k) − ŷ(k))² / Σ_{k=1}^{N} (y(k) − ȳ)²,  (41)

and the Best Fit Rate (BFR) is

J_BFR = (1 − √(Σ_{k=1}^{N} (y(k) − ŷ(k))² / Σ_{k=1}^{N} (y(k) − ȳ)²)) · 100.  (42)
Example 1. The nonlinear dynamical system [24] is governed by the difference equation

y(k + 1) = 0.3 y(k) + 0.6 y(k − 1) + f[u(k)].  (43)

The nonlinear function in the system in this case is f(u) = u³ + 0.3u² − 0.4u, where y and u are uniformly distributed in [−2, 2]. The objective of Example 1 is to control the system to track a reference output given as 250 sample data points. The model has two inputs, y(k) and u(k), and a single output, y(k + 1), and the system identification was initially performed with the system input uniformly distributed over [−2, 2]. The FCPN controller uses the two inputs y(k) and u(k) to produce the output ŷ(k + 1). The FCPN model governed by the difference equation ŷ(k + 1) = 0.3 y(k) + 0.6 y(k − 1) + N[u(k)] was used. Figures 4(a) and 4(b) show the error e(k + 1) = y(k + 1) − ŷ(k + 1) and the outputs of the dynamical system and the FCPN.
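A minimal simulation of the Example 1 plant, assuming zero initial conditions (the text does not state them):

```python
import numpy as np

def simulate_example1(u, y0=0.0, y1=0.0):
    """Simulate the Example 1 plant:
        y(k+1) = 0.3 y(k) + 0.6 y(k-1) + f(u(k)),
        f(u)   = u^3 + 0.3 u^2 - 0.4 u,
    with initial conditions y(0) = y0, y(1) = y1 (assumed zero)."""
    f = lambda v: v ** 3 + 0.3 * v ** 2 - 0.4 * v
    y = np.zeros(len(u) + 1)
    y[0], y[1] = y0, y1
    for k in range(1, len(u)):     # recursion needs y(k-1), so start at k = 1
        y[k + 1] = 0.3 * y[k] + 0.6 * y[k - 1] + f(u[k])
    return y
```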
Table 1: Calculated errors for Example 1.

NN model    MAE       SSE       MSE         RMSE     NMSE
FCPN        1.3364    0.1143    4.5709e−4   0.0214   7.1627e−4
Dynamic     5.8231    3.1918    0.0128      0.1130   0.0028
BPN         38.0666   48.1017   0.4810      0.6936   0.1035
As can be seen from the figure, the identification error is small even when the input is changed to a sum of two sinusoids, u(k) = sin(2πk/250) + sin(2πk/25), at k = 250.
Table 1 lists the calculated errors of the different NN models for Example 1. It can be seen from the table that the errors calculated for FCPN are minimum compared with those of DN and BPN.
Example 2. The nonlinear dynamical system [24] is governed by the difference equation

y(k + 1) = f[y_p(k), y_p(k − 1)] + u(k),  (44)

where

f[y_p(k), y_p(k − 1)] = y_p(k) y_p(k − 1) [y_p(k) + 2.5] / (1 + y_p²(k) + y_p²(k − 1)).  (45)
The objective of (44) is to control the system to track a reference output given 100 sample data points. The model has two inputs, y(k) and u(k), and a single output, y(k + 1), and the system identification was initially performed with the system input uniformly distributed over [−2, 2]. Training and testing samples contain 100 sample data points. The FCPN controller uses the two inputs y(k) and u(k) to produce the output ŷ(k + 1). The FCPN
[Figure 5: (a) Mean Square Error of the system (Example 2) using FCPN, on the order of 10⁻⁴, over 100 iterations. (b) Performance of the controller using the FCPN algorithm for Example 2 (target vs. FCPN output).]
Table 2: Calculated errors for Example 2.

NN model    MAE       SSE      MSE         RMSE     NMSE
FCPN        0.1956    0.0032   3.2328e−5   0.0057   2.7164e−5
Dynamic     9.5152    2.9856   0.0299      0.1728   0.0308
BP          10.1494   4.5163   0.0452      0.2125   0.0323
network discussed earlier is used to identify the system from input-output data and is described by the equation

ŷ(k + 1) = N[y(k), y(k − 1)] + u(k).  (46)

For FCPN, the identification process involves the adjustment of the weights of N using FCL. The FCPN identifier needs some prior information concerning the input-output behavior of the system before identification can be undertaken. FCL is used to adjust the weights of the neural network so that the error e(k + 1) = y(k + 1) − ŷ(k + 1) is minimized, as shown in Figure 5(a). The behavior of the FCPN model for (46) is shown in Figure 5(b). The input u(k) was assumed to be a random signal uniformly distributed in the interval [−2, 2]. The weights in the neural network were adjusted accordingly.
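The Example 2 plant (44)-(45) can be simulated as below; the zero initial conditions and the constant 2.5 are read from the reconstruction and should be treated as assumptions:

```python
import numpy as np

def f_example2(y0, y1):
    """f[y_p(k), y_p(k-1)] from (45)."""
    return y0 * y1 * (y0 + 2.5) / (1.0 + y0 ** 2 + y1 ** 2)

def simulate_example2(u, init0=0.0, init1=0.0):
    """Simulate y(k+1) = f[y(k), y(k-1)] + u(k), zero initial conditions assumed."""
    y = np.zeros(len(u) + 1)
    y[0], y[1] = init0, init1
    for k in range(1, len(u)):
        y[k + 1] = f_example2(y[k], y[k - 1]) + u[k]
    return y
```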
Table 2 lists the errors calculated for the different NN models for Example 2. It can be seen from the table that the errors calculated for FCPN are minimal compared with those of the DN and BPN.
Example 3. The nonlinear dynamical system [23] is governed by the following difference equation:

y(k + 1) = y(k) / (1 + y(k)^2) + u^3(k),   (47)
which corresponds to f[y(k)] = y(k)/(1 + y(k)^2) and g[u(k)] = u^3(k). The FCPN network equation (47) is used to identify the system from input-output data. Weights in the
Table 3: Calculated errors and BFR for Example 3.

NN model | MAE | SSE | MSE | RMSE | NMSE
FCPN | 0.3519 | 0.0032 | 3.1709e−005 | 0.0056 | 5.6371e−006
Dynamic | 5.5652 | 3.6158 | 0.0362 | 0.1902 | 0.0223
BP | 42.6524 | 46.9937 | 0.4699 | 0.6855 | 0.2892
neural networks were adjusted at every instant of the discrete time steps.

The objective of (47) is to control the system to track the reference output given 100 sample data points. The model has two inputs, y(k) and u(k), and a single output y(k + 1), and the system identification was initially performed with the system input being uniformly distributed over [−2, 2]. Training and testing samples contain 250 sample data points. The FCPN controller uses the two inputs y(k) and u(k) to produce the output y(k + 1). The input data is generated using a uniform distribution in the interval [−2, 2]. The function f obtained by FCPN is used to adjust the weights of the neural network so that the error e(k + 1) = ŷ(k + 1) − y(k + 1) is minimized, as shown in Figure 6(a). In Figure 6(b) the desired outputs of the system as well as the FCPN model are shown and are seen to be indistinguishable.
Table 3 lists the errors calculated for the different NN models for Example 3. It can be seen from the table that the errors calculated for FCPN are minimal compared with those of the DN and BPN.
Example 4. The nonlinear dynamical system [24] is governed by the difference equation (48). The generalized form of this equation is Model IV. In this example the system is assumed to be of the form

y_p(k + 1) = f[y_p(k), y_p(k − 1), y_p(k − 2), u(k), u(k − 1)],   (48)
[figure omitted: (a) plots Mean Square Error (×10^−4) against iteration; (b) plots the target and FCPN output against iteration]
Figure 6: (a) Mean Square Error of system (Example 3) using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 3.
[figure omitted: (a) plots Mean Square Error (×10^−4) against iteration over 800 samples; (b) plots the target and FCPN output against iteration]
Figure 7: (a) Mean Square Error of system (Example 4) using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 4.
where the unknown function f has the form

f[x1, x2, x3, x4, x5] = [x1 x2 x3 x5 (x3 − 1) + x4] / (1 + x3^2 + x2^2).   (49)
In the identification model, which is approximated by FCPN, Figure 7(b) shows the output of the system and the model when the identification procedure was carried out with a random input signal uniformly distributed in the interval [−1, 1]. The performance of the model is studied, and the error e(k + 1) between the plant output y_p(k + 1) and the model output is minimized, as shown in Figure 7(a). In Figure 7(b) the outputs of the system as well as the output of the FCPN model are shown. The input to the system and the identified model is given by u(k) = sin(2πk/250) for k ≤ 500 and u(k) = 0.8 sin(2πk/250) + 0.2 sin(2πk/25) for k > 500.
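For reference, the composite test input quoted above can be generated directly; a sketch, with the 800-sample horizon matching Figure 7:

```python
import numpy as np

# Example 4 test input: u(k) = sin(2*pi*k/250) for k <= 500, and
# 0.8 sin(2*pi*k/250) + 0.2 sin(2*pi*k/25) for k > 500.
k = np.arange(1, 801)
u = np.where(k <= 500,
             np.sin(2 * np.pi * k / 250),
             0.8 * np.sin(2 * np.pi * k / 250)
             + 0.2 * np.sin(2 * np.pi * k / 25))
```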
Table 4 lists the errors calculated for the different NN models for Example 4. It can be seen from the table that the errors calculated for FCPN are minimal compared with those of the DN and BPN.
Table 4: Calculated errors and BFR for Example 4.

NN model | MAE | SSE | MSE | RMSE | NMSE
FCPN | 30.4348 | 2.1931 | 0.0027 | 0.0524 | 0.0186
Dynamic | 32.8952 | 6.4059 | 0.0080 | 0.0895 | 0.0276
BP | 53.1988 | 13.4174 | 0.0168 | 0.1295 | 0.0844
Example 5. In this example we have used the Box-Jenkins time series [32] of 296 pairs of data measured from a gas furnace system, with the single input u(t) being the gas flow rate and the single output y(t) being the CO2 concentration in the outlet gas. Training samples and testing samples contained 196 and 100 data points, respectively. The FCPN uses two inputs, the current state y(k) and the desired state y_d(k), to produce an output ŷ(k) [33].
Figure 8(a) shows the mean squared control errors of the FCPN method. Figure 8(b) shows the performance of the controller with the FCPN algorithm. The results show that the FCPN algorithm enables an appropriate approximation using
[figure omitted: (a) plots Mean Square Error against iteration; (b) plots the desired and FCPN output against iteration]
Figure 8: (a) Mean Square Error of system (Example 5) using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 5.
Table 5: BFR (%) values of the NN models for the various examples.

NN model | Example 1 | Example 2 | Example 3 | Example 4
FCPN | 97.32 | 99.48 | 99.76 | 86.35
DN | 94.69 | 79.32 | 85.08 | 83.39
BPN | 67.83 | 79.28 | 46.2234 | 70.94
fuzzy learning, as given in equation (11), of the Box-Jenkins time series data, based on the calculation of the BMN.

Table 5 shows the BFR (%) of the various NN models for Examples 1–4. It can be observed from Table 5 that the Best Fit Rate found for FCPN is maximum compared with DN and BPN, which shows the better performance of the FCPN network for nonlinear systems.
6 Conclusions
FCPN is a neural network control method which was developed and presented. It is based on the concept of combining the FCL algorithm and CPN. The key idea explored is the use of the FCL algorithm for training the weights of the instar-outstar. The performance of the FCPN-controller-based training is tested using four nonlinear dynamical systems and on time series (Box-Jenkins) data and compared with the Dynamic Network and the standard Back Propagation algorithm. The comparative performances of the FCPN algorithm and the Dynamic Network, such as the number of iterations and performance functions like MAE, MSE, SSE, NMSE, BFR, and so forth, are summarized in Tables 1–4. It can be seen that the FCPN algorithm gives minimum errors compared with the Dynamic Network and the standard Back Propagation Network for all four models of nonlinear dynamical systems and the Box-Jenkins time series data. Results obtained from FCPN were compared for the various errors, and it is clearly shown that FCPN works much better for the control of nonlinear dynamical systems.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The authors gratefully acknowledge the financial assistance provided by the All India Council for Technical Education (AICTE) in the form of a Research Promotion Scheme (RPS) project in 2012.
References
[1] T. Soderstrom and P. Stoica, System Identification, Prentice Hall, New York, NY, USA, 1989.
[2] S. A. Billings, Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains, Wiley, Chichester, UK, 2013.
[3] M. Liu, "Decentralized control of robot manipulators: nonlinear and adaptive approaches," IEEE Transactions on Automatic Control, vol. 44, no. 2, pp. 357–363, 1999.
[4] C.-M. Lin, A.-B. Ting, M.-C. Li, and T.-Y. Chen, "Neural-network-based robust adaptive control for a class of nonlinear systems," Neural Computing and Applications, vol. 20, no. 4, pp. 557–563, 2011.
[5] I. Rivals and L. Personnaz, "Nonlinear internal model control using neural networks: application to processes with delay and design issues," IEEE Transactions on Neural Networks, vol. 11, no. 1, pp. 80–90, 2000.
[6] I. Kanellakopoulos, P. V. Kokotovic, and A. S. Morse, "Systematic design of adaptive controllers for feedback linearizable systems," IEEE Transactions on Automatic Control, vol. 36, no. 11, pp. 1241–1253, 1991.
[7] P. Kokotovic, "The joy of feedback: nonlinear and adaptive," IEEE Control Systems, vol. 12, no. 3, pp. 7–17, 1992.
[8] H. Elmali and N. Olgac, "Robust output tracking control of nonlinear MIMO systems via sliding mode technique," Automatica, vol. 28, no. 1, pp. 145–151, 1992.
[9] N. Sadati and R. Ghadami, "Adaptive multi-model sliding mode control of robotic manipulators using soft computing," Neurocomputing, vol. 71, no. 13–15, pp. 2702–2710, 2008.
[10] A. Kroll and H. Schulte, "Benchmark problems for nonlinear system identification and control using Soft Computing methods: need and overview," Applied Soft Computing Journal, vol. 25, pp. 496–513, 2014.
[11] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, no. 5, pp. 359–366, 1989.
[12] A. Bortoletti, C. Di Fiore, S. Fanelli, and P. Zellini, "A new class of Quasi-Newtonian methods for optimal learning in MLP-networks," IEEE Transactions on Neural Networks, vol. 14, no. 2, pp. 263–273, 2003.
[13] G. Lera and M. Pinzolas, "Neighborhood based Levenberg-Marquardt algorithm for neural network training," IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1200–1203, 2002.
[14] A. K. Shah and D. M. Adhyaru, "Clustering based multiple model control of hybrid dynamical systems using HJB solution," Applied Soft Computing, vol. 31, pp. 103–117, 2015.
[15] Y. Cui, H. Zhang, Y. Wang, and Z. Zhang, "Adaptive neural dynamic surface control for a class of uncertain nonlinear systems with disturbances," Neurocomputing, vol. 165, pp. 152–158, 2015.
[16] R. Rakkiyappan, N. Sakthivel, and J. Cao, "Stochastic sampled-data control for synchronization of complex dynamical networks with control packet loss and additive time-varying delays," Neural Networks, vol. 66, pp. 46–63, 2015.
[17] B. Kiumarsi, F. L. Lewis, and D. S. Levine, "Optimal control of nonlinear discrete time-varying systems using a new neural network approximation structure," Neurocomputing, vol. 156, pp. 157–165, 2015.
[18] B. M. Al-Hadithi, A. Jimenez, and R. G. Lopez, "Fuzzy optimal control using generalized Takagi-Sugeno model for multivariable nonlinear systems," Applied Soft Computing Journal, vol. 30, pp. 205–213, 2015.
[19] M. A. Gonzalez-Olvera and Y. Tang, "Identification of nonlinear discrete systems by a state-space recurrent neurofuzzy network with a convergent algorithm," Neurocomputing, vol. 148, pp. 318–325, 2015.
[20] J. F. de Canete, P. Del Saz-Orozco, I. Garcia-Moral, and S. Gonzalez-Perez, "Indirect adaptive structure for multivariable neural identification and control of a pilot distillation plant," Applied Soft Computing, vol. 12, no. 9, pp. 2728–2739, 2012.
[21] M. Cai and Z. Xiang, "Adaptive neural finite-time control for a class of switched nonlinear systems," Neurocomputing, vol. 155, pp. 177–185, 2015.
[22] Y. Li, S. Tong, and T. Li, "Adaptive fuzzy backstepping control design for a class of pure-feedback switched nonlinear systems," Nonlinear Analysis: Hybrid Systems, vol. 16, pp. 72–80, 2015.
[23] T.-Y. Kuc, J. S. Lee, and K. Nam, "An iterative learning control theory for a class of nonlinear dynamic systems," Automatica, vol. 28, no. 6, pp. 1215–1221, 1992.
[24] K. S. Narendra and K. Parthasarathy, "Neural networks and dynamical systems," International Journal of Approximate Reasoning, vol. 6, no. 2, pp. 109–131, 1992.
[25] R. Hecht-Nielsen, "Theory of the back propagation neural network," Neural Networks, vol. 1, pp. 593–605, 1989.
[26] M. T. Hagan, H. B. Demuth, M. H. Beale, and O. De Jesus, Neural Network Design, Cengage Learning, 2nd edition, 2014.
[27] F.-J. Chang and Y.-C. Chen, "A counterpropagation fuzzy-neural network modeling approach to real time streamflow prediction," Journal of Hydrology, vol. 245, no. 1–4, pp. 153–164, 2001.
[28] A. Dwivedi, N. S. C. Bose, A. Kumar, P. Kandula, D. Mishra, and P. K. Kalra, "A novel hybrid image compression technique: wavelet-MFOCPN," in Proceedings of the 9th Asian Symposium on Information Display (ASID '06), pp. 492–495, 2006.
[29] C. J. C. Burges, P. Simard, and H. S. Malvar, Improving Wavelet Image Compression with Neural Networks, Microsoft Research, Redmond, Wash, USA, 2001.
[30] D. Woods, "Back and counter propagation aberrations," in Proceedings of the IEEE International Conference on Neural Networks, pp. 473–479, 1988.
[31] D. Mishra, N. S. C. Bose, A. Tolambiya et al., "Color image compression with modified forward-only counterpropagation neural network: improvement of the quality using different distance measures," in Proceedings of the 9th International Conference on Information Technology (ICIT '06), pp. 139–140, IEEE, Bhubaneswar, India, December 2006.
[32] G. E. Box and G. M. Jenkins, Time Series Analysis: Forecasting and Control, Holden Day, San Francisco, Calif, USA, 1970.
[33] J. Kim and N. Kasabov, "HyFIS: adaptive neuro-fuzzy inference systems and their application to nonlinear dynamical systems," Neural Networks, vol. 12, no. 9, pp. 1301–1319, 1999.
[figure omitted: three-layer architecture showing the input layer x1…xn, the competitive (instar) layer Z1…Zp trained with the unsupervised learning rate α, the output (outstar) layer Y1…Ym trained with the supervised learning rate β, and the Best Matched Node (BMN)]
Figure 3: Architecture of the Counter Propagation Network.
Step 3. Update the weights from node Z_J to the output units as

w_Jk(new) = w_Jk(old) + β h(J, j, k) [y_k − w_Jk(old)], for 1 ≤ k ≤ m.   (18)

Step 4. The learning rate β is calculated as

β(J) = Σ_{k=1}^{m} (y_k − w_Jk)^2 / Σ_{j=1}^{p} Σ_{k=1}^{m} (y_k − w_jk)^2.   (19)
Step 5. Decrease the learning rate such that β = β − ε, where ε is a small positive number.

Step 6. Test the stopping condition for Phase II (i.e., a fixed number of epochs is reached or the learning rate has reduced to a negligible value).
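A minimal sketch of the Step 3-4 outstar update, assuming the neighbourhood function h(J, j, k) equals 1 at the Best Matched Node and 0 elsewhere; the weight and target values below are illustrative, not taken from the paper:

```python
import numpy as np

# Outstar (Phase II) update with the fuzzy learning rate of (19).
def outstar_update(w, y, J):
    # beta(J) from (19): the BMN's share of the total squared distance
    # between the target y and every outstar weight row.
    num = np.sum((y - w[J])**2)
    den = np.sum((y - w)**2)
    beta = num / den if den > 0 else 0.0
    w = w.copy()
    w[J] += beta * (y - w[J])             # Step 3, eq. (18) with h = 1
    return w, beta

w = np.array([[0.0, 0.0], [1.0, 1.0], [0.2, 0.9]])  # outstar weights
y = np.array([0.3, 1.0])                             # desired output
J = 2                                                # index of the BMN
w_new, beta = outstar_update(w, y, J)
```

Only the BMN row moves toward the target; the fuzzy rate β shrinks automatically as the winner approaches it.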
3.3.2 Testing Phase (to Test FCPN)

Step 1. Set the initial weights, that is, the weights obtained during training.

Step 2. Apply FCPN to the input vector X.

Step 3. Find the unit J that is closest to vector X.

Step 4. Set the activations of the output units.

Step 5. Apply the activation function at y_k, where y_k = Σ_j z_j w_jk.
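The testing phase above amounts to a nearest-node lookup followed by an outstar read-out. A sketch, assuming winner-take-all activations z and illustrative trained weights:

```python
import numpy as np

# Forward pass of the trained network: find the BMN (Step 3) and form
# the output y_k = sum_j z_j * w_jk (Step 5).
def fcpn_forward(x, v, w):
    d = np.sum((v - x)**2, axis=1)        # distance of x to each instar row
    J = int(np.argmin(d))                 # Best Matched Node
    z = np.zeros(v.shape[0])
    z[J] = 1.0                            # winner-take-all activation
    return z @ w, J                       # y_k = sum_j z_j w_jk

v = np.array([[0.0, 0.0], [1.0, 1.0]])    # instar weights (assumed trained)
w = np.array([[0.1, 0.9], [0.8, 0.2]])    # outstar weights (assumed trained)
y, J = fcpn_forward(np.array([0.9, 1.1]), v, w)
```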
4 Dynamical Learning for CPN

Learning stability is a fundamental issue for CPN; however, there are few studies on the learning issue of FNN. BPN learning is not always successful because of its sensitivity to the learning parameters, and the optimal learning rate changes during the training process. Dynamical learning of CPN is carried out using Lemmas 1, 2, and 3.

Assumption. Let us assume φ(x) is a sigmoid function, that is, a bounded, continuous, and increasing function. Since the input to the neural network in this model is bounded, we consider Lemmas 1 and 2 given below.
Lemma 1. Let φ(x) be a sigmoid function, let Ω be a compact set in R^n, and let f: R^n → R be a continuous function on Ω. Then for arbitrary ε > 0 there exist an integer N and real constants c_i, θ_i, and w_ij (i = 1, 2, …, N; j = 1, 2, …, n) such that

f̂(x1, x2, …, xn) = Σ_{i=1}^{N} c_i φ(Σ_{j=1}^{n} w_ij x_j − θ_i)   (20)

satisfies

max_{x∈Ω} ‖f(x1, x2, …, xn) − f̂(x1, x2, …, xn)‖ < ε.   (21)
Using Lemma 1, dynamical learning for a three-layer CPN can be formulated, where the hidden layer transfer function is φ(x) and the transfer function at the output layer is linear. Let all the vectors be column vectors, and let the subscripts of d_kp refer to a specific output vector component.
Let X = [x_1, x_2, …, x_P] ∈ R^{L×P}, Y = [y_1, y_2, …, y_P] ∈ R^{H×P}, O = [o_1, o_2, …, o_P] ∈ R^{K×P}, and D = [d_1, d_2, …, d_P] ∈ R^{K×P} be the input, hidden, output, and desired vectors, respectively, where L, H, and K denote the numbers of input, hidden, and output layer neurons. Let V and W represent the input-hidden and hidden-output layer weight matrices, respectively. The objective of the network training is to minimize the error function J, where

J = (1/2PK) Σ_{p=1}^{P} Σ_{k=1}^{K} (o_kp − d_kp)^2.   (22)
4.1 Optimal Learning of Dynamical System. Consider a three-layer CPN; the network error matrix is defined by the difference between the desired and FCPN outputs at any iteration t and is given as

E_t = O_t − D = W_t Y_t − D = W_t V_t X − D.   (23)

The objective of the network, the minimization of the error, is defined as

J = (1/2PK) Tr(E_t E_t^T),   (24)
where Tr represents the trace of a matrix. Using the gradient descent method, the updated weights are given by

W_{t+1} = W_t − β_t ∂J/∂W_t,
V_{t+1} = V_t − β_t ∂J/∂V_t,   (25)

or

W_{t+1} = W_t − (β_t/PK) E_t Y_t^T,
V_{t+1} = V_t − (β_t/PK) W_t^T E_t X^T.   (26)
Using (23)–(26), we have

E_{t+1} E_{t+1}^T = (W_{t+1} Y_{t+1} − D)(W_{t+1} Y_{t+1} − D)^T.   (27)
To obtain the minimum error for the multilayer FNN, substituting (26) into (27) and simplifying, we have

E_{t+1} E_{t+1}^T = (W_{t+1} V_{t+1} X − D)(W_{t+1} V_{t+1} X − D)^T
= [(W_t − (β_t/PK) E_t Y_t^T)(V_t − (β_t/PK) W_t^T E_t X^T) X − D] · [(W_t − (β_t/PK) E_t Y_t^T)(V_t − (β_t/PK) W_t^T E_t X^T) X − D]^T
= [E_t − (β_t/PK)(W_t W_t^T E_t X^T X + E_t Y_t^T V_t X) + (β_t^2/(PK)^2) E_t Y_t^T W_t^T E_t X^T X] · [E_t − (β_t/PK)(W_t W_t^T E_t X^T X + E_t Y_t^T V_t X) + (β_t^2/(PK)^2) E_t Y_t^T W_t^T E_t X^T X]^T
= E_t E_t^T
− (β_t/PK) [E_t (W_t W_t^T E_t X^T X)^T + E_t (E_t Y_t^T V_t X)^T + W_t W_t^T E_t X^T X E_t^T + E_t Y_t^T V_t X E_t^T]
+ (β_t^2/(PK)^2) [E_t (E_t Y_t^T W_t^T E_t X^T X)^T + E_t Y_t^T W_t^T E_t X^T X E_t^T + E_t Y_t^T V_t X (W_t W_t^T E_t X^T X)^T + W_t W_t^T E_t X^T X (E_t Y_t^T V_t X)^T + E_t Y_t^T V_t X (E_t Y_t^T V_t X)^T + W_t W_t^T E_t X^T X (W_t W_t^T E_t X^T X)^T]
− (β_t^3/(PK)^3) [W_t W_t^T E_t X^T X (E_t Y_t^T W_t^T E_t X^T X)^T + E_t Y_t^T V_t X (E_t Y_t^T W_t^T E_t X^T X)^T + E_t Y_t^T W_t^T E_t X^T X (W_t W_t^T E_t X^T X)^T + E_t Y_t^T W_t^T E_t X^T X (E_t Y_t^T V_t X)^T]
+ (β_t^4/(PK)^4) [E_t Y_t^T W_t^T E_t X^T X (E_t Y_t^T W_t^T E_t X^T X)^T],   (28)

so that

J_{t+1} − J_t = (1/2PK) g(β).
For simplification, omitting the subscript t, we have

J_{t+1} − J_t = (1/2PK)(Aβ^4 + Bβ^3 + Cβ^2 + Mβ),   (29)
where

A = (1/(PK)^4) Tr[E Y^T W^T E X^T X (E Y^T W^T E X^T X)^T],

B = (1/(PK)^3) Tr[W W^T E X^T X (E Y^T W^T E X^T X)^T + E Y^T V X (E Y^T W^T E X^T X)^T + E Y^T W^T E X^T X (W W^T E X^T X)^T + E Y^T W^T E X^T X (E Y^T V X)^T],

C = (1/(PK)^2) Tr[E (E Y^T W^T E X^T X)^T + E Y^T W^T E X^T X E^T + E Y^T V X (W W^T E X^T X)^T + W W^T E X^T X (E Y^T V X)^T + E Y^T V X (E Y^T V X)^T + W W^T E X^T X (W W^T E X^T X)^T],

M = (1/PK) Tr[E (W W^T E X^T X)^T + E (E Y^T V X)^T + W W^T E X^T X E^T + E Y^T V X E^T],   (30)
with

g(β) = Aβ^4 + Bβ^3 + Cβ^2 + Mβ,   (31)

∂g/∂β = 4A(β^3 + aβ^2 + bβ + c),   (32)

where a = 3B/4A, b = 2C/4A, and c = M/4A.
Lemma 2. For the solution of a general real cubic equation one uses the following. Let

f(x) = x^3 + ax^2 + bx + c,
D = −27c^2 + 18abc + a^2 b^2 − 4a^3 c − 4b^3,   (33)

where D is the discriminant of f(x). Then we have the following:

(1) If D < 0, f(x) has one real root.
(2) If D ≥ 0, f(x) has three real roots:
  (a) if D > 0, f(x) has three different real roots;
  (b) if D = 0 and 6b − 2a^2 ≠ 0, f(x) has one single root and one multiple root;
  (c) if D = 0 and 6b − 2a^2 = 0, f(x) has one root of multiplicity three.
Lemma 3. For the polynomial g(β) given in (31), if the optimum β = {β_i | g(β_i) = min(g(β_1), g(β_2), g(β_3)), i ∈ {1, 2, 3}}, where β_i is a real root of ∂g/∂β = 0, then this optimum β is the optimal learning rate and the learning process is stable.
Proof. To find the stable learning range of β, consider the Lyapunov function

V_t = J_t^2,  ΔV_t = J_{t+1}^2 − J_t^2,   (34)

and the dynamical system is guaranteed to be stable if ΔV_t < 0, that is, if J_{t+1} − J_t = (1/2PK)(Aβ^4 + Bβ^3 + Cβ^2 + Mβ) < 0 by (29), where β is the learning rate; note that the input matrix remains the same during the whole training process. We therefore seek the range of β which satisfies g(β) = Aβ^4 + Bβ^3 + Cβ^2 + Mβ < 0. Since ∂g/∂β has at least one real root (Lemma 2), one of its roots must give the optimum g(β). Obviously, the minimum value of g(β) gives the largest reduction in J_t at each step of the learning process. Equation (31) shows that g(β) has two or four real roots, one of them being β = 0 (Lemma 2), so the minimizing β gives the largest reduction in the error between two successive steps; this minimum is obtained by differentiating (31) with respect to β, which gives, from (32),

∂g/∂β = 4A(β^3 + aβ^2 + bβ + c),   (35)

where a = 3B/4A, b = 2C/4A, and c = M/4A. Solving ∂g/∂β = 0 gives the β which minimizes the error in (5).
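Lemma 3 reduces the choice of learning rate to minimising the quartic g(β) over the real roots of its derivative. A sketch, with illustrative coefficients rather than values computed from a network:

```python
import numpy as np

# Optimal learning rate per Lemma 3: among the real roots of
# dg/dbeta = 4A b^3 + 3B b^2 + 2C b + M = 0, pick the one minimising
# g(beta) = A b^4 + B b^3 + C b^2 + M b.
def optimal_beta(A, B, C, M):
    g = lambda b: A*b**4 + B*b**3 + C*b**2 + M*b
    roots = np.roots([4*A, 3*B, 2*C, M])       # stationary points of g
    real = roots[np.abs(roots.imag) < 1e-7].real
    return min(real, key=g)

# g(beta) = beta^4 - 2 beta^2 has minima at beta = +/-1 with g = -1.
beta = optimal_beta(A=1.0, B=0.0, C=-2.0, M=0.0)
```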
5 Simulation Results

To demonstrate the effectiveness and merit of the proposed FCPN, simulation results are presented and discussed. Four different nonlinear dynamical systems and one general benchmark problem, the Box-Jenkins model with time series data, are considered.
5.1 Performance Criteria. To assess the performance of FCPN and compare it with the Dynamic and Back Propagation Networks, we have evaluated the various errors given below. In the case of the four dynamical models it is recommended to use a subset of the following criteria.

Given N pairs of data points (y(k), ŷ(k)), where y(k) is the output of the system and ŷ(k) is the output of the controller, the Maximum Absolute Error (MAE) is

J_MAE = J_max = max_{1≤k≤N} |y(k) − ŷ(k)|,   (36)
the Sum of Squared Errors (SSE) is

J_SSE = Σ_{k=1}^{N} (y(k) − ŷ(k))^2,   (37)

the Mean Squared Error (MSE) is

J_MSE = (1/N) Σ_{k=1}^{N} (y(k) − ŷ(k))^2,   (38)
[figure omitted: (a) plots Mean Square Error (×10^−3) against iteration over 250 samples; (b) plots the target and FCPN output against iteration]
Figure 4: (a) Mean Square Error of system (Example 1) using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 1.
and/or the Root Mean Squared Error (RMSE) is

J_RMSE = sqrt(J_MSE).   (39)

The measure Variance Accounting Factor (VAF) is

J_VAF = (1 − Var(y(k) − ŷ(k)) / Var(y(k))) · 100.   (40)

A related measure is the Normalized Mean Squared Error (NMSE),

J_NMSE = Σ_{k=1}^{N} (y(k) − ŷ(k))^2 / Σ_{k=1}^{N} (y(k) − ȳ)^2,   (41)

and the Best Fit Rate (BFR) is

J_BFR = (1 − sqrt(Σ_{k=1}^{N} (y(k) − ŷ(k))^2 / Σ_{k=1}^{N} (y(k) − ȳ)^2)) · 100.   (42)
Example 1. The nonlinear dynamical system [24] is governed by the following difference equation:

y(k + 1) = 0.3 y(k) + 0.6 y(k − 1) + f[u(k)].   (43)

The nonlinear function in the system in this case is f(u) = u^3 + 0.3u^2 − 0.4u, where y and u are uniformly distributed in [−2, 2]. The objective of Example 1 is to control the system to track the reference output given 250 sample data points. The model has two inputs, y(k) and u(k), and a single output y(k + 1), and the system identification was initially performed with the system input being uniformly distributed over [−2, 2]. The FCPN controller uses the two inputs y(k) and u(k) to produce the output y(k + 1). The FCPN model governed by the difference equation y(k + 1) = 0.3 y(k) + 0.6 y(k − 1) + N[u(k)] was used. Figures 4(a) and 4(b) show the error e(k + 1) = y(k + 1) − ŷ(k + 1) and the outputs of the dynamical system and the FCPN.
Table 1: Calculated errors for Example 1.

NN model | MAE | SSE | MSE | RMSE | NMSE
FCPN | 1.3364 | 0.1143 | 4.5709e−004 | 0.0214 | 7.1627e−004
Dynamic | 5.8231 | 3.1918 | 0.0128 | 0.1130 | 0.0028
BPN | 38.0666 | 48.1017 | 0.4810 | 0.6936 | 0.1035
As can be seen from the figure, the identification error is small even when the input is changed to the sum of two sinusoids u(k) = sin(2πk/250) + sin(2πk/25) at k = 250.

Table 1 lists the errors calculated for the different NN models for Example 1. It can be seen from the table that the errors calculated for FCPN are minimal compared with those of the DN and BPN.
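The Example 1 plant and its two input regimes can be reproduced as follows; a sketch, where the random seed and the split between the uniform and sinusoidal inputs are illustrative:

```python
import numpy as np

# Example 1 plant (43): y(k+1) = 0.3 y(k) + 0.6 y(k-1) + f(u(k)),
# with f(u) = u^3 + 0.3 u^2 - 0.4 u.
f = lambda u: u**3 + 0.3 * u**2 - 0.4 * u

rng = np.random.default_rng(0)
u_train = rng.uniform(-2.0, 2.0, 250)          # identification input
k = np.arange(250, 500)
u_test = np.sin(2*np.pi*k/250) + np.sin(2*np.pi*k/25)  # test input

def simulate(u):
    y = np.zeros(len(u) + 1)
    for k in range(1, len(u)):
        y[k + 1] = 0.3 * y[k] + 0.6 * y[k - 1] + f(u[k])
    return y

y = simulate(np.concatenate([u_train, u_test]))
```

The homogeneous part (roots of z^2 − 0.3z − 0.6 lie inside the unit circle) is stable, so the trajectory stays bounded under both input regimes.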
Example 2. The nonlinear dynamical system [24] is governed by the following difference equation:

y(k + 1) = f[y_p(k), y_p(k − 1)] + u(k),   (44)

where

f[y_p(k), y_p(k − 1)] = [y_p(k) y_p(k − 1) (y_p(k) + 2.5)] / [1 + y_p^2(k) + y_p^2(k − 1)].   (45)
Theobjective of (44) is to control the system to track referenceoutput given 100 sample data points The model has twoinputs119910(119896) and 119906(119896) and single output119910(119896+1) and the systemidentification was initially performed with the system inputbeing uniformly distributed over [minus2 2] Training and testingsamples contain 100 sample data pointsThe FCPN controlleruses two inputs 119910(119896) and 119906(119896) to output 119910(119896 + 1) FCPN
Figure 5: (a) Mean Square Error of the system (Example 2) using FCPN (Mean Square Error, ×10⁻⁴, versus iteration). (b) Performance of the controller using the FCPN algorithm for Example 2 (target and FCPN output versus iteration).
Table 2: Various errors calculated for Example 2.

NN model    MAE       SSE      MSE           RMSE     NMSE
FCPN         0.1956   0.0032   3.2328e-005   0.0057   2.7164e-005
Dynamic      9.5152   2.9856   0.0299        0.1728   0.0308
BP          10.1494   4.5163   0.0452        0.2125   0.0323
network discussed earlier is used to identify the system from input-output data and is described by the equation

$$\hat{y}(k+1) = N[y(k), y(k-1)] + u(k). \quad (46)$$
For the FCPN, the identification process involves adjusting the weights of $N$ using FCL. The FCPN identifier needs some prior information about the input-output behavior of the system before identification can be undertaken. FCL is used to adjust the weights of the neural network so that the error $e(k+1) = y(k+1) - \hat{y}(k+1)$ is minimized, as shown in Figure 5(a). The behavior of the FCPN model for (16) is shown in Figure 5(b). The input $u(k)$ was assumed to be a random signal uniformly distributed in the interval $[-2, 2]$, and the weights in the neural network were adjusted accordingly.
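The instar-outstar adjustment described here follows the counterpropagation pattern: find the Best Matched Node (BMN), then move its input-side (instar) weights toward the input and its output-side (outstar) weights toward the desired output. A generic winner-take-all sketch of one such update (standard CPN-style learning, not the paper's exact FCL rule; the sizes, rates, and names are illustrative):

```python
import numpy as np

def cpn_step(x, d, V, W, alpha=0.1, beta=0.1):
    """One counterpropagation update: select the Best Matched Node by
    Euclidean distance, then move its instar weights toward the input x
    and its outstar weights toward the desired output d."""
    j = int(np.argmin(np.linalg.norm(V - x, axis=1)))  # BMN (winning hidden unit)
    V[j] += alpha * (x - V[j])                         # instar: input -> hidden
    W[:, j] += beta * (d - W[:, j])                    # outstar: hidden -> output
    return W[:, j].copy()                              # network output for x

rng = np.random.default_rng(2)
V = rng.normal(size=(8, 2))   # 8 hidden units, 2 inputs: y(k), u(k)
W = np.zeros((1, 8))          # single output y(k+1)
x, d = np.array([0.5, -1.0]), np.array([0.2])
y_hat = cpn_step(x, d, V, W)
```

In a full training loop this step would be repeated over all input-output pairs until the tracking error stops decreasing.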
Table 2 lists the errors calculated for the different NN models in Example 2. As the table shows, every error measure for the FCPN is lower than for the DN and the BPN.
Example 3. The nonlinear dynamical system [23] is governed by the following difference equation:

$$y(k+1) = \frac{y(k)}{1 + y(k)^2} + u^3(k), \quad (47)$$
which corresponds to $f[y(k)] = y(k)/(1 + y(k)^2)$ and $g[u(k)] = u^3(k)$. The FCPN network of (47) is used to identify the system from input-output data. Weights in the
Table 3: Various errors calculated for Example 3.

NN model    MAE       SSE       MSE           RMSE     NMSE
FCPN         0.3519    0.0032   3.1709e-005   0.0056   5.6371e-006
Dynamic      5.5652    3.6158   0.0362        0.1902   0.0223
BP          42.6524   46.9937   0.4699        0.6855   0.2892
neural network were adjusted at every discrete time step.

The objective of (47) is to control the system to track the reference output, given 100 sample data points. The model has two inputs, $y(k)$ and $u(k)$, and a single output, $y(k+1)$; system identification was initially performed with the system input uniformly distributed over $[-2, 2]$. The training and testing samples contain 250 sample data points. The FCPN controller uses the two inputs $y(k)$ and $u(k)$ to produce the output $\hat{y}(k+1)$. The input data are generated from a uniform distribution on the interval $[-2, 2]$. The function $f$ obtained by the FCPN is used to adjust the weights of the neural network so that the error $e(k+1) = y(k+1) - \hat{y}(k+1)$ is minimized, as shown in Figure 6(a). In Figure 6(b) the desired output of the system and that of the FCPN model are shown and are seen to be indistinguishable.
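A hedged sketch of the Example 3 plant (47), useful for regenerating the training data (the names, seed, and zero initial condition are illustrative assumptions, not from the paper):

```python
import numpy as np

def simulate_example3(u, y0=0.0):
    """Iterate y(k+1) = y(k) / (1 + y(k)^2) + u(k)^3 as in (47)."""
    y = [y0]
    for uk in u:
        y.append(y[-1] / (1.0 + y[-1]**2) + uk**3)
    return np.array(y[1:])

rng = np.random.default_rng(3)
u = rng.uniform(-2.0, 2.0, 100)   # uniform input in [-2, 2]
y = simulate_example3(u)
```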
Table 3 lists the errors calculated for the different NN models in Example 3. As the table shows, every error measure for the FCPN is lower than for the DN and the BPN.
Example 4. The nonlinear dynamical system [24] is governed by the difference equation (48); the generalized form of this equation is Model IV. In this example the system is assumed to be of the form

$$y_p(k+1) = f[y_p(k), y_p(k-1), y_p(k-2), u(k), u(k-1)], \quad (48)$$
Figure 6: (a) Mean Square Error of the system (Example 3) using FCPN (Mean Square Error, ×10⁻⁴, versus iteration). (b) Performance of the controller using the FCPN algorithm for Example 3 (target and FCPN output versus iteration).
Figure 7: (a) Mean Square Error of the system (Example 4) using FCPN (Mean Square Error, ×10⁻⁴, versus iteration). (b) Performance of the controller using the FCPN algorithm for Example 4 (target and FCPN output versus iteration).
where the unknown function $f$ has the form

$$f[x_1, x_2, x_3, x_4, x_5] = \frac{x_1 x_2 x_3 x_5 (x_3 - 1) + x_4}{1 + x_3^2 + x_2^2}. \quad (49)$$
In the identification model, which is approximated by the FCPN, Figure 7(b) shows the output of the system and of the model when the identification procedure was carried out with a random input signal uniformly distributed in the interval $[-1, 1]$. The performance of the model is studied, and the error $e(k+1) = y_p(k+1) - \hat{y}_p(k+1)$ is minimized, as shown in Figure 7(a). In Figure 7(b) the outputs of the system and of the FCPN model are shown. The input to the system and the identified model is given by $u(k) = \sin(2\pi k/250)$ for $k \le 500$ and $u(k) = 0.8\sin(2\pi k/250) + 0.2\sin(2\pi k/25)$ for $k > 500$.
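The Model IV plant of (48)-(49), together with the piecewise sinusoidal input just described, can be sketched as follows (zero initial conditions and function names are illustrative assumptions; this is not the paper's code):

```python
import numpy as np

def f5(x1, x2, x3, x4, x5):
    # Unknown function (49): [x1 x2 x3 x5 (x3 - 1) + x4] / (1 + x3^2 + x2^2)
    return (x1 * x2 * x3 * x5 * (x3 - 1.0) + x4) / (1.0 + x3**2 + x2**2)

def simulate_example4(u, y_init=(0.0, 0.0, 0.0)):
    """Iterate y_p(k+1) = f[y_p(k), y_p(k-1), y_p(k-2), u(k), u(k-1)] as in (48)."""
    y = list(y_init)
    u_prev = 0.0
    for uk in u:
        y.append(f5(y[-1], y[-2], y[-3], uk, u_prev))
        u_prev = uk
    return np.array(y[3:])

k = np.arange(800)
u = np.where(k <= 500,
             np.sin(2 * np.pi * k / 250),
             0.8 * np.sin(2 * np.pi * k / 250) + 0.2 * np.sin(2 * np.pi * k / 25))
y = simulate_example4(u)
```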
Table 4 lists the errors calculated for the different NN models in Example 4. As the table shows, every error measure for the FCPN is lower than for the DN and the BPN.
Table 4: Various errors calculated for Example 4.

NN model    MAE        SSE       MSE      RMSE     NMSE
FCPN        30.4348     2.1931   0.0027   0.0524   0.0186
Dynamic     32.8952     6.4059   0.0080   0.0895   0.0276
BP          53.1988    13.4174   0.0168   0.1295   0.0844
Example 5. In this example we use the Box-Jenkins time series [32] of 296 data pairs measured from a gas furnace system, with the single input $u(t)$ being the gas flow rate and the single output $y(t)$ being the CO₂ concentration in the outlet gas. The training and testing samples contain 196 and 100 data points, respectively. The FCPN uses two inputs, the current state $y(k)$ and the desired state $y_d(k)$, to produce an output $\hat{y}(k)$ [33].
Figure 8(a) shows the mean squared control error of the FCPN method, and Figure 8(b) shows the performance of the controller with the FCPN algorithm. The results show that the FCPN algorithm yields an appropriate approximation using
Figure 8: (a) Mean Square Error of the system (Example 5) using FCPN (Mean Square Error versus iteration). (b) Performance of the controller using the FCPN algorithm for Example 5 (desired and FCPN output versus iteration).
Table 5: BFR (%) values of the NN models for the various examples.

NN model   Example 1   Example 2   Example 3   Example 4
FCPN       97.32       99.48       99.76       86.35
DN         94.69       79.32       85.08       83.39
BPN        67.83       79.28       46.2234     70.94
fuzzy learning, as given in equation (11), on the Box-Jenkins time series data based on the calculation of the BMN.
Table 5 shows the BFR (%) of the various NN models for Examples 1-4. It can be observed from Table 5 that the Best Fit Rate of the FCPN is the highest, exceeding those of the DN and BPN, which demonstrates the better performance of the FCPN network for nonlinear systems.
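The BFR values in Table 5 follow the Best Fit Rate definition given in the paper's performance criteria: one minus the square root of the squared-error sum normalized by the output variance about its mean, expressed in percent. A small sketch (function name is illustrative):

```python
import numpy as np

def best_fit_rate(y, y_hat):
    """BFR (%) = (1 - sqrt(sum (y - y_hat)^2 / sum (y - mean(y))^2)) * 100."""
    num = np.sum((y - y_hat) ** 2)
    den = np.sum((y - np.mean(y)) ** 2)
    return (1.0 - np.sqrt(num / den)) * 100.0

y = np.array([1.0, 2.0, 3.0, 4.0])
print(best_fit_rate(y, y))  # a perfect fit gives 100.0
```

A predictor that only outputs the mean of $y$ scores a BFR of 0%, so the measure rewards tracking of the output's variation rather than just its level.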
6 Conclusions
FCPN is a neural network control method that was developed and presented here. It is based on combining the FCL algorithm with the CPN; the key idea explored is the use of the FCL algorithm for training the instar-outstar weights. The performance of the FCPN-based controller was tested on four nonlinear dynamical systems and on the Box-Jenkins time series data and compared with a Dynamic Network and the standard Back Propagation algorithm. The comparative performance of the FCPN and the Dynamic Network, in terms of the number of iterations and performance measures such as MAE, MSE, SSE, NMSE, and BFR, is summarized in Tables 1-4. The FCPN algorithm gives the smallest errors, compared with the Dynamic Network and the standard Back Propagation Network, for all four nonlinear dynamical system models and the Box-Jenkins time series data. The results obtained with the FCPN were compared across the various error measures, clearly showing that the FCPN performs much better for the control of nonlinear dynamical systems.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The authors gratefully acknowledge the financial assistance provided by the All India Council for Technical Education (AICTE) in the form of a Research Promotion Scheme (RPS) project in 2012.
References
[1] T. Soderstrom and P. Stoica, System Identification, Prentice Hall, New York, NY, USA, 1989.

[2] S. A. Billings, Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains, Wiley, Chichester, UK, 2013.

[3] M. Liu, "Decentralized control of robot manipulators: nonlinear and adaptive approaches," IEEE Transactions on Automatic Control, vol. 44, no. 2, pp. 357–363, 1999.

[4] C.-M. Lin, A.-B. Ting, M.-C. Li, and T.-Y. Chen, "Neural-network-based robust adaptive control for a class of nonlinear systems," Neural Computing and Applications, vol. 20, no. 4, pp. 557–563, 2011.

[5] I. Rivals and L. Personnaz, "Nonlinear internal model control using neural networks: application to processes with delay and design issues," IEEE Transactions on Neural Networks, vol. 11, no. 1, pp. 80–90, 2000.

[6] I. Kanellakopoulos, P. V. Kokotovic, and A. S. Morse, "Systematic design of adaptive controllers for feedback linearizable systems," IEEE Transactions on Automatic Control, vol. 36, no. 11, pp. 1241–1253, 1991.

[7] P. Kokotovic, "The joy of feedback: nonlinear and adaptive," IEEE Control Systems, vol. 12, no. 3, pp. 7–17, 1992.

[8] H. Elmali and N. Olgac, "Robust output tracking control of nonlinear MIMO systems via sliding mode technique," Automatica, vol. 28, no. 1, pp. 145–151, 1992.

[9] N. Sadati and R. Ghadami, "Adaptive multi-model sliding mode control of robotic manipulators using soft computing," Neurocomputing, vol. 71, no. 13–15, pp. 2702–2710, 2008.

[10] A. Kroll and H. Schulte, "Benchmark problems for nonlinear system identification and control using soft computing methods: need and overview," Applied Soft Computing Journal, vol. 25, pp. 496–513, 2014.

[11] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, no. 5, pp. 359–366, 1989.

[12] A. Bortoletti, C. Di Fiore, S. Fanelli, and P. Zellini, "A new class of quasi-Newtonian methods for optimal learning in MLP-networks," IEEE Transactions on Neural Networks, vol. 14, no. 2, pp. 263–273, 2003.

[13] G. Lera and M. Pinzolas, "Neighborhood based Levenberg-Marquardt algorithm for neural network training," IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1200–1203, 2002.

[14] A. K. Shah and D. M. Adhyaru, "Clustering based multiple model control of hybrid dynamical systems using HJB solution," Applied Soft Computing, vol. 31, pp. 103–117, 2015.

[15] Y. Cui, H. Zhang, Y. Wang, and Z. Zhang, "Adaptive neural dynamic surface control for a class of uncertain nonlinear systems with disturbances," Neurocomputing, vol. 165, pp. 152–158, 2015.

[16] R. Rakkiyappan, N. Sakthivel, and J. Cao, "Stochastic sampled-data control for synchronization of complex dynamical networks with control packet loss and additive time-varying delays," Neural Networks, vol. 66, pp. 46–63, 2015.

[17] B. Kiumarsi, F. L. Lewis, and D. S. Levine, "Optimal control of nonlinear discrete time-varying systems using a new neural network approximation structure," Neurocomputing, vol. 156, pp. 157–165, 2015.

[18] B. M. Al-Hadithi, A. Jimenez, and R. G. Lopez, "Fuzzy optimal control using generalized Takagi-Sugeno model for multivariable nonlinear systems," Applied Soft Computing Journal, vol. 30, pp. 205–213, 2015.

[19] M. A. Gonzalez-Olvera and Y. Tang, "Identification of nonlinear discrete systems by a state-space recurrent neurofuzzy network with a convergent algorithm," Neurocomputing, vol. 148, pp. 318–325, 2015.

[20] J. F. de Canete, P. Del Saz-Orozco, I. Garcia-Moral, and S. Gonzalez-Perez, "Indirect adaptive structure for multivariable neural identification and control of a pilot distillation plant," Applied Soft Computing, vol. 12, no. 9, pp. 2728–2739, 2012.

[21] M. Cai and Z. Xiang, "Adaptive neural finite-time control for a class of switched nonlinear systems," Neurocomputing, vol. 155, pp. 177–185, 2015.

[22] Y. Li, S. Tong, and T. Li, "Adaptive fuzzy backstepping control design for a class of pure-feedback switched nonlinear systems," Nonlinear Analysis: Hybrid Systems, vol. 16, pp. 72–80, 2015.

[23] T.-Y. Kuc, J. S. Lee, and K. Nam, "An iterative learning control theory for a class of nonlinear dynamic systems," Automatica, vol. 28, no. 6, pp. 1215–1221, 1992.

[24] K. S. Narendra and K. Parthasarathy, "Neural networks and dynamical systems," International Journal of Approximate Reasoning, vol. 6, no. 2, pp. 109–131, 1992.

[25] R. Hecht-Nielsen, "Theory of the back propagation neural network," Neural Networks, vol. 1, pp. 593–605, 1989.

[26] M. T. Hagan, H. B. Demuth, M. H. Beale, and O. De Jesus, Neural Network Design, Cengage Learning, 2nd edition, 2014.

[27] F.-J. Chang and Y.-C. Chen, "A counterpropagation fuzzy-neural network modeling approach to real time streamflow prediction," Journal of Hydrology, vol. 245, no. 1–4, pp. 153–164, 2001.

[28] A. Dwivedi, N. S. C. Bose, A. Kumar, P. Kandula, D. Mishra, and P. K. Kalra, "A novel hybrid image compression technique: wavelet-MFOCPN," in Proceedings of the 9th Asian Symposium on Information Display (ASID '06), pp. 492–495, 2006.

[29] C. J. C. Burges, P. Simard, and H. S. Malvar, Improving Wavelet Image Compression with Neural Networks, Microsoft Research, Redmond, Wash, USA, 2001.

[30] D. Woods, "Back and counter propagation aberrations," in Proceedings of the IEEE International Conference on Neural Networks, pp. 473–479, 1988.

[31] D. Mishra, N. S. C. Bose, A. Tolambiya et al., "Color image compression with modified forward-only counterpropagation neural network: improvement of the quality using different distance measures," in Proceedings of the 9th International Conference on Information Technology (ICIT '06), pp. 139–140, IEEE, Bhubaneswar, India, December 2006.

[32] G. E. Box and G. M. Jenkins, Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, Calif, USA, 1970.

[33] J. Kim and N. Kasabov, "HyFIS: adaptive neuro-fuzzy inference systems and their application to nonlinear dynamical systems," Neural Networks, vol. 12, no. 9, pp. 1301–1319, 1999.
119901(119896 minus 1) 119910
119901(119896 minus 2) 119906 (119896) 119906 (119896 minus 1)]
(48)
10 Computational Intelligence and Neuroscience
0 20 40 60 80 1000
1
2
Iteration
Mea
n Sq
uare
Err
ortimes10minus4
(a)
0 20 40 60 80 100
0
1
2
3
Iteration
Targ
et an
d FC
PN o
utpu
t
TargetFCPN
minus3
minus2
minus1
(b)
Figure 6 (a) Mean Square Error of system (Example 3) using FCPN (b) Performance of the controller using FCPN algorithm for Example 3
0 100 200 300 400 500 600 700 8000
1
2
3
4
5
6
Iteration
Mea
n Sq
uare
Err
or
times10minus4
(a)
0 100 200 300 400 500 600 700 800
0
05
1
Iteration
Targ
et an
d FC
PN o
utpu
t
TargetFCPN
minus1
minus05
(b)
Figure 7 (a)Mean Square Error of system (Example 4) using FCPN (b) Performance of the controller using FCPN algorithm for Example 4
where the unknown function 119891 has the form
119891 [1199091 1199092 1199093 1199094 1199095] =
1199091119909211990931199095(1199093minus 1) + 119909
4
1 + 11990923+ 11990922
(49)
In the identification model which is approximated by FCPNFigure 7(b) shows the output of the system and the modelwhen the identification procedure carried random inputsignal uniformly distributed in the interval [minus1 1] Theperformance of the model is studied and the error 119890(119896 +1) = 119873[119910
119901(119896 + 1)] is minimized as shown in Figure 7(a)
In Figure 7(b) the outputs of the system as well as outputof FCPN model are shown Input to the system and theidentified model is given by 119906(119896) = sin(2120587119896250) for 119896 le 500and 119906(119896) = 08 sin(2120587119896250) + 02 sin(212058711989625) for 119896 gt 500
Table 4 includes various calculated errors of differentNN models for Example 4 It can be seen from the tablethat various errors calculated for FCPN are minimum ascompared to the DN and BPN
Table 4 Calculated various errors and BFR for Example 4
NN models MAE SSE MSE RMSE NMSEFCPN 304348 21931 00027 00524 00186Dynamic 328952 64059 00080 00895 00276BP 531988 134174 00168 01295 00844
Example 5 In this example we have used Box-Jenkins timeseries [32] of 296 pairs of data measured from a gas furnacesystem with single input 119906(119905) being gas flow rate and singleoutput 119910(119905) being CO
2concentration in outlet gas Training
samples and testing samples contained 196 and 100 datapoints respectively The FCPN uses the two inputs thecurrent state 119910(119896) and the desired state 119910119889(119896) to produce anoutput which is 119910(119896) [33]
Figure 8(a) shows the mean squared control errors ofFCPN methods Figure 8(b) shows the performance of thecontroller with FCPN algorithm Result shows that FCPNalgorithm enable us to appropriate the approximation using
Computational Intelligence and Neuroscience 11
0 10 20 30 40 50 60 70 80 90 100044046048
05052054056058
06
Iteration
Mea
n Sq
uare
Err
or
(a)
0 10 20 30 40 50 60 70 80 90 10045
50
55
60
Iteration
Targ
et an
d FC
PN o
utpu
t
DesiredFCPN
(b)Figure 8 (a) Mean Square Error of system (Example 5) using FCPN (b) Performance of the controller using FCPN algorithm for Example 5
Table 5 BFR () values of NN models for various examples
NN models BFR ()Example 1 Example 2 Example 3 Example 4
FCPN 9732 9948 9976 8635DN 9469 7932 8508 8339BPN 6783 7928 462234 7094
fuzzy learning as given in equation (11) of Box Jenkins timeseries data based on calculation of BMN
Table 5 shows BFR () various NN models for all Exam-ples 1ndash5 respectively It can be observed from Table 5 that theBest Fit Rate found for FCPN is maximum as compared toDN and BPN which shows better performance of the FCPNnetwork for nonlinear system
6 Conclusions
FCPN is a neural network control method which was devel-oped and presented It is based on the concept of combiningFCL algorithm and CPN The key ideas explored are theuse of the FCL algorithm for training the weights of theinstar-outstar The performances of the FCPN controllerbased training are tested using four nonlinear dynamicalsystems and on time series (Box-Jenkins) data and comparedwith the Dynamic Network and standard Back Propagationalgorithm The comparative performances of the FCPNalgorithm and Dynamic Network such as the number ofiterations and performance functions like MAE MSE SSENMSE BFR and so forth of FCPN and Dynamic Networkerror are summarized in Tables 1ndash4 It can be seen that theFCPN algorithm gives minimum errors as compared to theDynamic Network and standard Back Propagation Networkfor all four models of nonlinear dynamical systems and Box-Jenkins time series data Results obtained from FCPN werecompared for various errors and it is clearly shown that FCPNworks much better for the control of nonlinear dynamicalsystems
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgment
The authors gratefully acknowledged the financial assistanceprovided by the All India Council of Technical Education(AICTE) in the form of Research Promotion Scheme (RPS)project in 2012
References
[1] T Soderstrom and P Stoica System Identification Prentice HallNew York NY USA 1989
[2] S A Billings Nonlinear System Identification NARMAXMeth-ods in the Time Frequency and Spatio-Temporal DomainsWiley Chichester UK 2013
[3] M Liu ldquoDecentralized control of robotmanipulators nonlinearand adaptive approachesrdquo IEEE Transactions on AutomaticControl vol 44 no 2 pp 357ndash363 1999
[4] C-M Lin A-B Ting M-C Li and T-Y Chen ldquoNeural-network-based robust adaptive control for a class of nonlinearsystemsrdquo Neural Computing and Applications vol 20 no 4 pp557ndash563 2011
[5] I Rivals and L Personnaz ldquoNonlinear internal model controlusing neural networks application to processes with delay anddesign issuesrdquo IEEETransactions onNeural Networks vol 11 no1 pp 80ndash90 2000
[6] I Kanellakopoulos P V Kokotovic and A S Morse ldquoSys-tematic design of adaptive controllers for feedback linearizablesystemsrdquo IEEE Transactions on Automatic Control vol 36 no11 pp 1241ndash1253 1991
[7] P Kokotovic ldquoThe joy of feedback nonlinear and adaptiverdquoIEEE Control Systems vol 12 no 3 pp 7ndash17 1992
[8] H Elmali andNOlgac ldquoRobust output tracking control of non-linear MIMO systems via sliding mode techniquerdquoAutomaticavol 28 no 1 pp 145ndash151 1992
12 Computational Intelligence and Neuroscience
[9] N Sadati and R Ghadami ldquoAdaptivemulti-model slidingmodecontrol of robotic manipulators using soft computingrdquo Neuro-computing vol 71 no 13ndash15 pp 2702ndash2710 2008
[10] A Kroll and H Schulte ldquoBenchmark problems for nonlinearsystem identification and control using Soft Computing meth-ods need and overviewrdquo Applied Soft Computing Journal vol25 pp 496ndash513 2014
[11] K Hornik M Stinchcombe and HWhite ldquoMultilayer feedfor-ward networks are universal approximatorsrdquo Neural Networksvol 2 no 5 pp 359ndash366 1989
[12] A Bortoletti C Di Fiore S Fanelli and P Zellini ldquoA new classof Quasi-Newtonian methods for optimal learning in MLP-networksrdquo IEEE Transactions on Neural Networks vol 14 no2 pp 263ndash273 2003
[13] G Lera and M Pinzolas ldquoNeighborhood based Levenberg-Marquardt algorithm for neural network trainingrdquo IEEE Trans-actions on Neural Networks vol 13 no 5 pp 1200ndash1203 2002
[14] A K Shah and D M Adhyaru ldquoClustering based multiplemodel control of hybrid dynamical systems usingHJB solutionrdquoApplied Soft Computing vol 31 pp 103ndash117 2015
[15] Y Cui H Zhang Y Wang and Z Zhang ldquoAdaptive neuraldynamic surface control for a class of uncertain nonlinearsystems with disturbancesrdquo Neurocomputing vol 165 pp 152ndash158 2015
[16] R Rakkiyappan N Sakthivel and J Cao ldquoStochastic sampled-data control for synchronization of complex dynamical net-works with control packet loss and additive time-varyingdelaysrdquo Neural Networks vol 66 pp 46ndash63 2015
[17] B Kiumarsi F L Lewis and D S Levine ldquoOptimal controlof nonlinear discrete time-varying systems using a new neuralnetwork approximation structurerdquo Neurocomputing vol 156pp 157ndash165 2015
[18] B M Al-Hadithi A Jimenez and R G Lopez ldquoFuzzy optimalcontrol using generalized Takagi-Sugeno model for multivari-able nonlinear systemsrdquoApplied SoftComputing Journal vol 30pp 205ndash213 2015
[19] MAGonzalez-Olvera andY Tang ldquoIdentification of nonlineardiscrete systems by a state-space recurrent neurofuzzy networkwith a convergent algorithmrdquoNeurocomputing vol 148 pp 318ndash325 2015
[20] J F de Canete P Del Saz-Orozco I Garcia-Moral and SGonzalez-Perez ldquoIndirect adaptive structure for multivariableneural identification and control of a pilot distillation plantrdquoApplied Soft Computing vol 12 no 9 pp 2728ndash2739 2012
[21] M Cai and Z Xiang ldquoAdaptive neural finite-time control for aclass of switched nonlinear systemsrdquo Neurocomputing vol 155pp 177ndash185 2015
[22] Y Li S Tong and T Li ldquoAdaptive fuzzy backstepping controldesign for a class of pure-feedback switched nonlinear systemsrdquoNonlinear Analysis Hybrid Systems vol 16 pp 72ndash80 2015
[23] T-Y Kuc J S Lee and K Nam ldquoAn iterative learning controltheory for a class of nonlinear dynamic systemsrdquo Automaticavol 28 no 6 pp 1215ndash1221 1992
[24] K S Narendra and K Parthasarathy ldquoNeural networks anddynamical systemsrdquo International Journal of Approximate Rea-soning vol 6 no 2 pp 109ndash131 1992
[25] R Hecht-Nielsen ldquoTheory of the back propagation neuralnetworkrdquo Neural Networks vol 1 pp 593ndash605 1989
[26] M T Hagan H B Demuth M H Beale and O De JesusNeural Network Design Cengage Learning 2nd edition 2014
[27] F-J Chang and Y-C Chen ldquoA counterpropagation fuzzy-neural network modeling approach to real time streamflowpredictionrdquo Journal of Hydrology vol 245 no 1ndash4 pp 153ndash1642001
[28] A Dwivedi N S C Bose A Kumar P Kandula D Mishraand P K Kalra ldquoA novel hybrid image compression techniquewavelet-MFOCPNrdquo in Proceedings of the 9th Asian Symposiumon Information Display (ASID rsquo06) pp 492ndash495 2006
[29] C J C Burges P Simard and H S Malvar Improving WaveletImage Compression with Neural Networks Microsoft ResearchRedmond Wash USA 2001
[30] D Woods ldquoBack and counter propagation aberrationsrdquo in Pro-ceedings of the IEEE International Conference on Neural Net-works pp 473ndash479 1988
[31] DMishra N S C Bose A Tolambiya et al ldquoColor image com-pression with modified forward-only counterpropagation neu-ral network improvement of the quality using different distancemeasuresrdquo in Proceedings of the 9th International Conferenceon Information Technology (ICIT rsquo06) pp 139ndash140 IEEEBhubaneswar India December 2006
[32] G E Box and G M Jenkins Time Series Analysis Forecastingand Control Holden Day San Francisco Calif USA 1970
[33] J Kim and N Kasabov ldquoHyFIS adaptive neuro-fuzzy inferencesystems and their application to nonlinear dynamical systemsrdquoNeural Networks vol 12 no 9 pp 1301ndash1319 1999
Computational Intelligence and Neuroscience 7
\[
\quad + EY^{T}VX\left(EY^{T}W^{T}EX^{T}X\right)^{T} + EY^{T}W^{T}EX^{T}X\left(WW^{T}EX^{T}X\right)^{T} + EY^{T}W^{T}EX^{T}X\left(EY^{T}VX\right)\Big]
\]

\[
C = \frac{1}{(PK)^{2}}\,\mathrm{Tr}\Big[\left(EY^{T}W^{T}EX^{T}X\right)^{T} + EY^{T}WEX^{T}XE^{T} + EY^{T}VX\left(WW^{T}EX^{T}X\right)^{T} + WW^{T}EX^{T}X + \left(EY^{T}VX\right)^{T} + EY^{T}VX\left(EY^{T}VX\right)^{T} + WW^{T}EX^{T}X\left(WW^{T}EX^{T}X\right)^{T}\Big]
\]

\[
M = \frac{1}{PK}\,\mathrm{Tr}\Big[E\left(WW^{T}EX^{T}X\right)^{T} + E\left(EY^{T}VX\right)^{T} + WW^{T}EX^{T}X + EY^{T}VXE^{T}\Big] \tag{30}
\]

where

\[
g(\beta) = A\beta^{4} + B\beta^{3} + C\beta^{2} + M\beta \tag{31}
\]

\[
\frac{\partial g}{\partial \beta} = 4A\left(\beta^{3} + a\beta^{2} + b\beta + c\right) \tag{32}
\]

where a = 3B/4A, b = 2C/4A, and c = M/4A.
Lemma 2. For the solution of a general real cubic equation

\[
f(x) = x^{3} + ax^{2} + bx + c,
\]
\[
D = 18abc - 4a^{3}c + a^{2}b^{2} - 4b^{3} - 27c^{2}, \tag{33}
\]

where D is the discriminant of f(x), we have the following:

(1) If D < 0, f(x) has one real root.
(2) If D ≥ 0, f(x) has three real roots:
    (a) if D > 0, f(x) has three distinct real roots;
    (b) if D = 0 and 6b − 2a² ≠ 0, f(x) has one simple root and one double root;
    (c) if D = 0 and 6b − 2a² = 0, f(x) has one root of multiplicity three.
Lemma 3. For the polynomial g(β) given in (31), if the optimum is β* = β_i such that g(β_i) = min(g(β₁), g(β₂), g(β₃)), i ∈ {1, 2, 3}, where β_i is a real root of ∂g/∂β, then the optimum β* is the optimal learning rate and the learning process is stable.
Proof. To find the stable learning range of β, consider the Lyapunov function

\[
V_{t} = J_{t}^{2}, \qquad \Delta V_{t} = J_{t+1}^{2} - J_{t}^{2} < 0. \tag{34}
\]

The dynamical system is guaranteed to be stable if ΔV_t < 0, that is, from (29), J_{t+1} − J_t = (1/2PK)(Aβ⁴ + Bβ³ + Cβ² + Mβ) < 0, where β is the learning rate; the input matrix remains the same during the whole training process. We therefore seek the range of β satisfying g(β) = Aβ⁴ + Bβ³ + Cβ² + Mβ < 0. By Lemma 2, ∂g/∂β has at least one real root, and one of these roots must give the optimum of g(β). Obviously the minimum value of g(β) gives the largest reduction in J_t at each step of the learning process. Equation (31) shows that g(β) has two or four real roots, one of them being β = 0 (Lemma 2), so the β that minimizes g(β) gives the largest reduction in error between two successive steps. This minimum is obtained by differentiating (31) with respect to β; from (32) we have

\[
\frac{\partial g}{\partial \beta} = 4A\left(\beta^{3} + a\beta^{2} + b\beta + c\right), \tag{35}
\]

where a = 3B/4A, b = 2C/4A, and c = M/4A. Solving ∂g/∂β = 0 gives the β which minimizes the error in (5).
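The root selection of Lemma 3 can be sketched as follows. This is an illustrative reading of (31)-(32), not the paper's code; the coefficients A, B, C, and M are assumed to be already computed from (30), and `numpy.roots` supplies the roots of the cubic ∂g/∂β = 0:

```python
import numpy as np

def optimal_learning_rate(A, B, C, M):
    """Pick the learning rate minimizing g(beta) = A*b^4 + B*b^3 + C*b^2 + M*b.

    Candidates are the real roots of dg/dbeta = 4A*b^3 + 3B*b^2 + 2C*b + M = 0
    (eq. (32)); per Lemma 3 the optimum is the real root with smallest g.
    """
    g = lambda beta: A*beta**4 + B*beta**3 + C*beta**2 + M*beta
    roots = np.roots([4.0*A, 3.0*B, 2.0*C, M])       # roots of dg/dbeta = 0
    real_roots = roots[np.abs(roots.imag) < 1e-9].real  # keep the real ones
    beta_opt = min(real_roots, key=g)                # largest reduction in J_t
    return beta_opt, g(beta_opt)
```

For instance, with A = 1, B = 0, C = −2, M = 0 the stationary points of g are 0 and ±1, and the optimum is at |β| = 1 with g = −1.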
5 Simulation Results
To demonstrate the effectiveness and merits of the proposed FCPN, simulation results are presented and discussed. Four different nonlinear dynamical systems and one general benchmark problem, the Box-Jenkins model with time series data, are considered.
5.1. Performance Criteria. To assess the performance of FCPN and compare it with the Dynamic and Back Propagation Networks, we have evaluated the errors given below. For the four dynamical models it is recommended to assess a subset of the following criteria.
Given N pairs of data points (y(k), ŷ(k)), where y(k) is the output of the system and ŷ(k) is the output of the controller, the Maximum Absolute Error (MAE) is

\[
J_{\mathrm{MAE}} = J_{\max} = \max_{1 \le k \le N} \left| y(k) - \hat{y}(k) \right|, \tag{36}
\]

the Sum of Squared Errors (SSE) is

\[
J_{\mathrm{SSE}} = \sum_{k=1}^{N} \left( y(k) - \hat{y}(k) \right)^{2}, \tag{37}
\]

and the Mean Squared Error (MSE) is

\[
J_{\mathrm{MSE}} = \frac{1}{N} \sum_{k=1}^{N} \left( y(k) - \hat{y}(k) \right)^{2}, \tag{38}
\]
Figure 4: (a) Mean Square Error of the system (Example 1) using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 1.
and/or the Root Mean Squared Error (RMSE) is

\[
J_{\mathrm{RMSE}} = \sqrt{J_{\mathrm{MSE}}}. \tag{39}
\]

The measure Variance Accounting Factor (VAF) is

\[
J_{\mathrm{VAF}} = \left( 1 - \frac{\operatorname{Var}\left( y(k) - \hat{y}(k) \right)}{\operatorname{Var}\left( y(k) \right)} \right) \cdot 100\%. \tag{40}
\]

A related measure is the Normalized Mean Squared Error (NMSE),

\[
J_{\mathrm{NMSE}} = \frac{\sum_{k=1}^{N} \left( y(k) - \hat{y}(k) \right)^{2}}{\sum_{k=1}^{N} \left( y(k) - \bar{y} \right)^{2}}, \tag{41}
\]

and the Best Fit Rate (BFR) is

\[
J_{\mathrm{BFR}} = \left( 1 - \sqrt{\frac{\sum_{k=1}^{N} \left( y(k) - \hat{y}(k) \right)^{2}}{\sum_{k=1}^{N} \left( y(k) - \bar{y} \right)^{2}}} \right) \cdot 100\%. \tag{42}
\]
Example 1. The nonlinear dynamical system [24] is governed by the following difference equation:

\[
y(k+1) = 0.3\,y(k) + 0.6\,y(k-1) + f\left[ u(k) \right]. \tag{43}
\]

The nonlinear function in the system in this case is f(u) = u³ + 0.3u² − 0.4u, where y and u are uniformly distributed in [−2, 2]. The objective of Example 1 is to control the system to track a reference output given as 250 sample data points. The model has two inputs, y(k) and u(k), and a single output y(k+1), and the system identification was initially performed with the system input being uniformly distributed over [−2, 2]. The FCPN controller uses the two inputs y(k) and u(k) to produce the output ŷ(k+1). The FCPN model governed by the difference equation ŷ(k+1) = 0.3y(k) + 0.6y(k−1) + N[u(k)] was used. Figures 4(a) and 4(b) show the error e(k+1) = y(k+1) − ŷ(k+1) and the outputs of the dynamical system and the FCPN model.
Table 1: Calculated errors for Example 1.

| NN model | MAE     | SSE     | MSE         | RMSE   | NMSE        |
| FCPN     | 1.3364  | 0.1143  | 4.5709e-004 | 0.0214 | 7.1627e-004 |
| Dynamic  | 5.8231  | 3.1918  | 0.0128      | 0.1130 | 0.0028      |
| BPN      | 38.0666 | 48.1017 | 0.4810      | 0.6936 | 0.1035      |
As can be seen from the figure, the identification error is small even when the input is changed to a sum of two sinusoids, u(k) = sin(2πk/250) + sin(2πk/25), at k = 250.
Table 1 lists the errors calculated for the different NN models on Example 1. It can be seen from the table that all errors calculated for FCPN are minimal compared with those of the DN and BPN.
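For reproducibility, the plant of Example 1 and its training data can be simulated as follows. The zero initial conditions and the random seed are assumptions of this sketch, not stated in the paper:

```python
import numpy as np

def simulate_example1(u, y0=0.0, y1=0.0):
    """Simulate the plant of Example 1, eq. (43):
    y(k+1) = 0.3 y(k) + 0.6 y(k-1) + f(u(k)), f(u) = u^3 + 0.3 u^2 - 0.4 u,
    starting from y(k-1) = y0 and y(k) = y1."""
    f = lambda u: u**3 + 0.3*u**2 - 0.4*u
    y = [y0, y1]
    for uk in u:
        y.append(0.3*y[-1] + 0.6*y[-2] + f(uk))
    return np.array(y[2:])

# Identification input: 250 samples uniformly distributed over [-2, 2],
# as in the experiment described above (seed chosen arbitrarily here).
rng = np.random.default_rng(0)
u_train = rng.uniform(-2.0, 2.0, 250)
y_train = simulate_example1(u_train)
```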
Example 2. The nonlinear dynamical system [24] is governed by the following difference equation:

\[
y_{p}(k+1) = f\left[ y_{p}(k), y_{p}(k-1) \right] + u(k), \tag{44}
\]

where

\[
f\left[ y_{p}(k), y_{p}(k-1) \right] = \frac{y_{p}(k)\, y_{p}(k-1) \left[ y_{p}(k) + 2.5 \right]}{1 + y_{p}^{2}(k) + y_{p}^{2}(k-1)}. \tag{45}
\]
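A minimal simulation of the plant (44)-(45) can be written as follows; the zero initial conditions are an assumption of this sketch:

```python
import numpy as np

def simulate_example2(u, y0=0.0, y1=0.0):
    """Simulate the plant of Example 2, eqs. (44)-(45):
    y(k+1) = y(k) y(k-1) (y(k) + 2.5) / (1 + y(k)^2 + y(k-1)^2) + u(k)."""
    y = [y0, y1]
    for uk in u:
        yk, ykm1 = y[-1], y[-2]
        f = yk * ykm1 * (yk + 2.5) / (1.0 + yk**2 + ykm1**2)  # eq. (45)
        y.append(f + uk)                                       # eq. (44)
    return np.array(y[2:])
```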
The objective of (44) is to control the system to track a reference output given 100 sample data points. The model has two inputs, y(k) and u(k), and a single output y(k+1), and the system identification was initially performed with the system input being uniformly distributed over [−2, 2]. Training and testing samples contain 100 sample data points each. The FCPN controller uses the two inputs y(k) and u(k) to produce the output ŷ(k+1). The FCPN
Figure 5: (a) Mean Square Error of the system (Example 2) using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 2.
Table 2: Calculated errors for Example 2.

| NN model | MAE     | SSE    | MSE         | RMSE   | NMSE        |
| FCPN     | 0.1956  | 0.0032 | 3.2328e-005 | 0.0057 | 2.7164e-005 |
| Dynamic  | 9.5152  | 2.9856 | 0.0299      | 0.1728 | 0.0308      |
| BP       | 10.1494 | 4.5163 | 0.0452      | 0.2125 | 0.0323      |
network discussed earlier is used to identify the system from input-output data and is described by the equation

\[
\hat{y}(k+1) = N\left[ y(k), y(k-1) \right] + u(k). \tag{46}
\]

For FCPN, the identification process involves the adjustment of the weights of N using FCL. The FCPN identifier needs some prior information concerning the input-output behavior of the system before identification can be undertaken. FCL is used to adjust the weights of the neural network so that the error e(k+1) = y(k+1) − ŷ(k+1) is minimized, as shown in Figure 5(a). The behavior of the FCPN model for (46) is shown in Figure 5(b). The input u(k) was assumed to be a random signal uniformly distributed in the interval [−2, 2], and the weights in the neural network were adjusted accordingly.
Table 2 lists the errors calculated for the different NN models on Example 2. It can be seen from the table that all errors calculated for FCPN are minimal compared with those of the DN and BPN.
Example 3. The nonlinear dynamical system [23] is governed by the following difference equation:

\[
y(k+1) = \frac{y(k)}{1 + y(k)^{2}} + u^{3}(k), \tag{47}
\]

which corresponds to f[y(k)] = y(k)/(1 + y(k)²) and g[u(k)] = u³(k). The FCPN network is used to identify the system (47) from input-output data. The weights in the
Table 3: Calculated errors for Example 3.

| NN model | MAE     | SSE     | MSE         | RMSE   | NMSE        |
| FCPN     | 0.3519  | 0.0032  | 3.1709e-005 | 0.0056 | 5.6371e-006 |
| Dynamic  | 5.5652  | 3.6158  | 0.0362      | 0.1902 | 0.0223      |
| BP       | 42.6524 | 46.9937 | 0.4699      | 0.6855 | 0.2892      |
neural network were adjusted at every discrete time step.
The objective of (47) is to control the system to track a reference output given 100 sample data points. The model has two inputs, y(k) and u(k), and a single output y(k+1), and the system identification was initially performed with the system input being uniformly distributed over [−2, 2]. Training and testing samples contain 250 sample data points. The FCPN controller uses the two inputs y(k) and u(k) to produce the output ŷ(k+1). The input data is generated using a uniform distribution on the interval [−2, 2]. The function f obtained by FCPN is used to adjust the weights of the neural network so that the error e(k+1) = y(k+1) − ŷ(k+1) is minimized, as shown in Figure 6(a). In Figure 6(b) the desired outputs of the system as well as the FCPN model are shown and are seen to be indistinguishable.
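It is worth noting why (47) is so amenable to tracking control: since g(u) = u³ is invertible, an exact one-step inversion controller exists. The sketch below is a hypothetical idealized baseline for intuition only; it is not the FCPN controller, and it assumes perfect knowledge of f:

```python
import numpy as np

def track_example3(y_ref, y0=0.0):
    """Idealized exact-inversion controller for the plant (47).

    Choosing u(k) = cbrt(y_ref(k+1) - y(k)/(1 + y(k)^2)) makes the plant
    output equal the reference in one step when the model is exact.
    """
    y = y0
    outputs = []
    for target in y_ref:
        u = np.cbrt(target - y / (1.0 + y**2))  # invert g(u) = u^3
        y = y / (1.0 + y**2) + u**3             # plant update, eq. (47)
        outputs.append(y)
    return np.array(outputs)
```

A learned controller such as FCPN approximates this inverse mapping from data instead of assuming f is known.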
Table 3 lists the errors calculated for the different NN models on Example 3. It can be seen from the table that all errors calculated for FCPN are minimal compared with those of the DN and BPN.
Example 4. The nonlinear dynamical system [24] is governed by the difference equation (48); the generalized form of this equation is Model IV. In this example the system is assumed to be of the form

\[
y_{p}(k+1) = f\left[ y_{p}(k), y_{p}(k-1), y_{p}(k-2), u(k), u(k-1) \right], \tag{48}
\]
Figure 6: (a) Mean Square Error of the system (Example 3) using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 3.
Figure 7: (a) Mean Square Error of the system (Example 4) using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 4.
where the unknown function f has the form

\[
f\left[ x_{1}, x_{2}, x_{3}, x_{4}, x_{5} \right] = \frac{x_{1} x_{2} x_{3} x_{5} \left( x_{3} - 1 \right) + x_{4}}{1 + x_{3}^{2} + x_{2}^{2}}. \tag{49}
\]
The identification model is approximated by FCPN. Figure 7(b) shows the output of the system and of the model when the identification procedure was carried out with a random input signal uniformly distributed in the interval [−1, 1]. The performance of the model is studied and the error e(k+1) = y_p(k+1) − ŷ_p(k+1) is minimized, as shown in Figure 7(a). In Figure 7(b) the outputs of the system as well as the FCPN model are shown. The input to the system and the identified model is given by u(k) = sin(2πk/250) for k ≤ 500 and u(k) = 0.8 sin(2πk/250) + 0.2 sin(2πk/25) for k > 500.
Table 4 lists the errors calculated for the different NN models on Example 4. It can be seen from the table that all errors calculated for FCPN are minimal compared with those of the DN and BPN.
Table 4: Calculated errors for Example 4.

| NN model | MAE     | SSE     | MSE    | RMSE   | NMSE   |
| FCPN     | 30.4348 | 2.1931  | 0.0027 | 0.0524 | 0.0186 |
| Dynamic  | 32.8952 | 6.4059  | 0.0080 | 0.0895 | 0.0276 |
| BP       | 53.1988 | 13.4174 | 0.0168 | 0.1295 | 0.0844 |
Example 5. In this example we have used the Box-Jenkins time series [32] of 296 pairs of data measured from a gas furnace system, with the single input u(t) being the gas flow rate and the single output y(t) being the CO2 concentration in the outlet gas. Training and testing samples contained 196 and 100 data points, respectively. The FCPN uses two inputs, the current state y(k) and the desired state y_d(k), to produce an output ŷ(k) [33].
Figure 8(a) shows the mean squared control errors of the FCPN method. Figure 8(b) shows the performance of the controller with the FCPN algorithm. The results show that the FCPN algorithm enables an appropriate approximation using
Figure 8: (a) Mean Square Error of the system (Example 5) using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 5.
Table 5: BFR (%) values of the NN models for the various examples.

| NN model | Example 1 | Example 2 | Example 3 | Example 4 |
| FCPN     | 97.32     | 99.48     | 99.76     | 86.35     |
| DN       | 94.69     | 79.32     | 85.08     | 83.39     |
| BPN      | 67.83     | 79.28     | 46.2234   | 70.94     |
fuzzy learning, as given in equation (11), of the Box-Jenkins time series data based on calculation of the BMN.
Table 5 shows the BFR (%) of the various NN models for Examples 1-4. It can be observed from Table 5 that the Best Fit Rate found for FCPN is maximal compared with DN and BPN, which shows the better performance of the FCPN network for nonlinear systems.
6 Conclusions
A neural network control method, FCPN, was developed and presented. It is based on the concept of combining the FCL algorithm with CPN. The key idea explored is the use of the FCL algorithm for training the instar-outstar weights. The performance of the FCPN controller was tested on four nonlinear dynamical systems and on time series (Box-Jenkins) data and compared with the Dynamic Network and the standard Back Propagation algorithm. The comparative performances of the FCPN algorithm and the Dynamic Network, such as the number of iterations and performance measures like MAE, MSE, SSE, NMSE, BFR, and so forth, are summarized in Tables 1-4. It can be seen that the FCPN algorithm gives minimal errors compared with the Dynamic Network and the standard Back Propagation Network for all four models of nonlinear dynamical systems and the Box-Jenkins time series data. The results obtained from FCPN were compared over the various errors, and it is clearly shown that FCPN works much better for the control of nonlinear dynamical systems.
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgment
The authors gratefully acknowledge the financial assistance provided by the All India Council for Technical Education (AICTE) in the form of a Research Promotion Scheme (RPS) project in 2012.
References
[1] T. Söderström and P. Stoica, System Identification, Prentice Hall, New York, NY, USA, 1989.
[2] S. A. Billings, Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains, Wiley, Chichester, UK, 2013.
[3] M. Liu, "Decentralized control of robot manipulators: nonlinear and adaptive approaches," IEEE Transactions on Automatic Control, vol. 44, no. 2, pp. 357-363, 1999.
[4] C.-M. Lin, A.-B. Ting, M.-C. Li, and T.-Y. Chen, "Neural-network-based robust adaptive control for a class of nonlinear systems," Neural Computing and Applications, vol. 20, no. 4, pp. 557-563, 2011.
[5] I. Rivals and L. Personnaz, "Nonlinear internal model control using neural networks: application to processes with delay and design issues," IEEE Transactions on Neural Networks, vol. 11, no. 1, pp. 80-90, 2000.
[6] I. Kanellakopoulos, P. V. Kokotovic, and A. S. Morse, "Systematic design of adaptive controllers for feedback linearizable systems," IEEE Transactions on Automatic Control, vol. 36, no. 11, pp. 1241-1253, 1991.
[7] P. Kokotovic, "The joy of feedback: nonlinear and adaptive," IEEE Control Systems, vol. 12, no. 3, pp. 7-17, 1992.
[8] H. Elmali and N. Olgac, "Robust output tracking control of nonlinear MIMO systems via sliding mode technique," Automatica, vol. 28, no. 1, pp. 145-151, 1992.
[9] N. Sadati and R. Ghadami, "Adaptive multi-model sliding mode control of robotic manipulators using soft computing," Neurocomputing, vol. 71, no. 13-15, pp. 2702-2710, 2008.
[10] A. Kroll and H. Schulte, "Benchmark problems for nonlinear system identification and control using Soft Computing methods: need and overview," Applied Soft Computing Journal, vol. 25, pp. 496-513, 2014.
[11] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, no. 5, pp. 359-366, 1989.
[12] A. Bortoletti, C. Di Fiore, S. Fanelli, and P. Zellini, "A new class of Quasi-Newtonian methods for optimal learning in MLP-networks," IEEE Transactions on Neural Networks, vol. 14, no. 2, pp. 263-273, 2003.
[13] G. Lera and M. Pinzolas, "Neighborhood based Levenberg-Marquardt algorithm for neural network training," IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1200-1203, 2002.
[14] A K Shah and D M Adhyaru ldquoClustering based multiplemodel control of hybrid dynamical systems usingHJB solutionrdquoApplied Soft Computing vol 31 pp 103ndash117 2015
[15] Y Cui H Zhang Y Wang and Z Zhang ldquoAdaptive neuraldynamic surface control for a class of uncertain nonlinearsystems with disturbancesrdquo Neurocomputing vol 165 pp 152ndash158 2015
[16] R Rakkiyappan N Sakthivel and J Cao ldquoStochastic sampled-data control for synchronization of complex dynamical net-works with control packet loss and additive time-varyingdelaysrdquo Neural Networks vol 66 pp 46ndash63 2015
[17] B Kiumarsi F L Lewis and D S Levine ldquoOptimal controlof nonlinear discrete time-varying systems using a new neuralnetwork approximation structurerdquo Neurocomputing vol 156pp 157ndash165 2015
[18] B M Al-Hadithi A Jimenez and R G Lopez ldquoFuzzy optimalcontrol using generalized Takagi-Sugeno model for multivari-able nonlinear systemsrdquoApplied SoftComputing Journal vol 30pp 205ndash213 2015
[19] MAGonzalez-Olvera andY Tang ldquoIdentification of nonlineardiscrete systems by a state-space recurrent neurofuzzy networkwith a convergent algorithmrdquoNeurocomputing vol 148 pp 318ndash325 2015
[20] J F de Canete P Del Saz-Orozco I Garcia-Moral and SGonzalez-Perez ldquoIndirect adaptive structure for multivariableneural identification and control of a pilot distillation plantrdquoApplied Soft Computing vol 12 no 9 pp 2728ndash2739 2012
[21] M Cai and Z Xiang ldquoAdaptive neural finite-time control for aclass of switched nonlinear systemsrdquo Neurocomputing vol 155pp 177ndash185 2015
[22] Y Li S Tong and T Li ldquoAdaptive fuzzy backstepping controldesign for a class of pure-feedback switched nonlinear systemsrdquoNonlinear Analysis Hybrid Systems vol 16 pp 72ndash80 2015
[23] T-Y Kuc J S Lee and K Nam ldquoAn iterative learning controltheory for a class of nonlinear dynamic systemsrdquo Automaticavol 28 no 6 pp 1215ndash1221 1992
[24] K S Narendra and K Parthasarathy ldquoNeural networks anddynamical systemsrdquo International Journal of Approximate Rea-soning vol 6 no 2 pp 109ndash131 1992
[25] R Hecht-Nielsen ldquoTheory of the back propagation neuralnetworkrdquo Neural Networks vol 1 pp 593ndash605 1989
[26] M T Hagan H B Demuth M H Beale and O De JesusNeural Network Design Cengage Learning 2nd edition 2014
[27] F-J Chang and Y-C Chen ldquoA counterpropagation fuzzy-neural network modeling approach to real time streamflowpredictionrdquo Journal of Hydrology vol 245 no 1ndash4 pp 153ndash1642001
[28] A Dwivedi N S C Bose A Kumar P Kandula D Mishraand P K Kalra ldquoA novel hybrid image compression techniquewavelet-MFOCPNrdquo in Proceedings of the 9th Asian Symposiumon Information Display (ASID rsquo06) pp 492ndash495 2006
[29] C J C Burges P Simard and H S Malvar Improving WaveletImage Compression with Neural Networks Microsoft ResearchRedmond Wash USA 2001
[30] D Woods ldquoBack and counter propagation aberrationsrdquo in Pro-ceedings of the IEEE International Conference on Neural Net-works pp 473ndash479 1988
[31] DMishra N S C Bose A Tolambiya et al ldquoColor image com-pression with modified forward-only counterpropagation neu-ral network improvement of the quality using different distancemeasuresrdquo in Proceedings of the 9th International Conferenceon Information Technology (ICIT rsquo06) pp 139ndash140 IEEEBhubaneswar India December 2006
[32] G E Box and G M Jenkins Time Series Analysis Forecastingand Control Holden Day San Francisco Calif USA 1970
[33] J Kim and N Kasabov ldquoHyFIS adaptive neuro-fuzzy inferencesystems and their application to nonlinear dynamical systemsrdquoNeural Networks vol 12 no 9 pp 1301ndash1319 1999
8 Computational Intelligence and Neuroscience
Figure 4: (a) Mean Square Error of the system in Example 1 using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 1.
and the Root Mean Squared Error (RMSE) is

$$ J_{\text{RMSE}} = \sqrt{J_{\text{MSE}}}. \tag{39} $$

The measure Variance Accounting Factor (VAF) is

$$ J_{\text{VAF}} = \left(1 - \frac{\operatorname{Var}\bigl(y(k) - \hat{y}(k)\bigr)}{\operatorname{Var}\bigl(y(k)\bigr)}\right) \cdot 100. \tag{40} $$

A related measure is the Normalized Mean Squared Error (NMSE),

$$ J_{\text{NMSE}} = \frac{\sum_{k=1}^{N} \bigl(y(k) - \hat{y}(k)\bigr)^2}{\sum_{k=1}^{N} \bigl(y(k) - \bar{y}\bigr)^2}, \tag{41} $$

and the Best Fit Rate (BFR) is

$$ J_{\text{BFR}} = \left(1 - \sqrt{\frac{\sum_{k=1}^{N} \bigl(y(k) - \hat{y}(k)\bigr)^2}{\sum_{k=1}^{N} \bigl(y(k) - \bar{y}\bigr)^2}}\right) \cdot 100\%. \tag{42} $$
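These fit measures can be computed directly from the measured output y(k) and the model output ŷ(k). A minimal sketch in Python (the function names are our own, not from the paper):

```python
import math

def mse(y, y_hat):
    # Mean Squared Error between measured y and model output y_hat
    return sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y)

def rmse(y, y_hat):
    # Eq. (39): J_RMSE = sqrt(J_MSE)
    return math.sqrt(mse(y, y_hat))

def vaf(y, y_hat):
    # Eq. (40): percentage of the output variance accounted for by the model
    def var(v):
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / len(v)
    resid = [a - b for a, b in zip(y, y_hat)]
    return (1.0 - var(resid) / var(y)) * 100.0

def nmse(y, y_hat):
    # Eq. (41): residual energy normalised by the spread of y about its mean
    y_bar = sum(y) / len(y)
    num = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    den = sum((a - y_bar) ** 2 for a in y)
    return num / den

def bfr(y, y_hat):
    # Eq. (42): Best Fit Rate in percent; 100 means a perfect fit
    return (1.0 - math.sqrt(nmse(y, y_hat))) * 100.0
```

A perfect model gives MSE = 0, VAF = 100, and BFR = 100, which is the sense in which the tables below rank the networks.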
Example 1. The nonlinear dynamical system [24] is governed by the difference equation

$$ y(k+1) = 0.3\,y(k) + 0.6\,y(k-1) + f[u(k)]. \tag{43} $$

The nonlinear function in the system in this case is f(u) = u^3 + 0.3u^2 − 0.4u, where y and u are uniformly distributed in [−2, 2]. The objective of Example 1 is to control the system to track a reference output given as 250 sample data points. The model has two inputs, y(k) and u(k), and a single output, y(k+1), and system identification was initially performed with the system input uniformly distributed over [−2, 2]. The FCPN controller uses the two inputs y(k) and u(k) to produce the output y(k+1); the FCPN model governed by the difference equation y(k+1) = 0.3y(k) + 0.6y(k−1) + N[u(k)] was used. Figures 4(a) and 4(b) show the error e(k+1) = y(k+1) − ŷ(k+1) and the outputs of the dynamical system and the FCPN.

Table 1: Calculated errors for Example 1 (decimal points reconstructed from the extracted values).

NN model   MAE      SSE      MSE         RMSE    NMSE
FCPN       1.3364   0.1143   4.5709e-04  0.0214  7.1627e-04
Dynamic    5.8231   3.1918   0.0128      0.1130  0.0028
BPN        38.0666  48.1017  0.4810      0.6936  0.1035

As can be seen from the figure, the identification error is small even when the input is changed to a sum of two sinusoids, u(k) = sin(2πk/250) + sin(2πk/25), at k = 250.

Table 1 lists the errors calculated for the different NN models on Example 1. It can be seen from the table that the errors calculated for FCPN are the minimum, compared with DN and BPN.
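The plant in (43) is straightforward to reproduce for generating identification data; a sketch in Python (function names and initial conditions are our own choices):

```python
import math

def f(u):
    # Nonlinearity of Example 1: f(u) = u^3 + 0.3*u^2 - 0.4*u
    return u ** 3 + 0.3 * u ** 2 - 0.4 * u

def simulate_example1(u_seq, y0=0.0, y1=0.0):
    # Plant of Eq. (43): y(k+1) = 0.3*y(k) + 0.6*y(k-1) + f(u(k))
    y = [y0, y1]
    for u in u_seq:
        y.append(0.3 * y[-1] + 0.6 * y[-2] + f(u))
    return y

# Sum-of-sinusoids test input mentioned in the text
u = [math.sin(2 * math.pi * k / 250) + math.sin(2 * math.pi * k / 25)
     for k in range(500)]
y = simulate_example1(u)
```

The linear part has poles inside the unit circle, so the output stays bounded for any bounded input, which is why identification data can be collected safely over [−2, 2].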
Example 2. The nonlinear dynamical system [24] is governed by the difference equation

$$ y(k+1) = f[y_p(k), y_p(k-1)] + u(k), \tag{44} $$

where

$$ f[y_p(k), y_p(k-1)] = \frac{y_p(k)\, y_p(k-1)\, [y_p(k) + 2.5]}{1 + y_p^2(k) + y_p^2(k-1)}. \tag{45} $$

The objective of (44) is to control the system to track a reference output given 100 sample data points. The model has two inputs, y(k) and u(k), and a single output, y(k+1), and system identification was initially performed with the system input uniformly distributed over [−2, 2]. Training and testing samples each contain 100 sample data points. The FCPN controller uses the two inputs y(k) and u(k) to produce the output y(k+1). The FCPN
Figure 5: (a) Mean Square Error of the system in Example 2 using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 2.
Table 2: Calculated errors for Example 2 (decimal points reconstructed from the extracted values).

NN model   MAE      MSE         SSE     RMSE    NMSE
FCPN       0.1956   3.2328e-05  0.0032  0.0057  2.7164e-05
Dynamic    9.5152   0.0299      2.9856  0.1728  0.0308
BP         10.1494  0.0452      4.5163  0.2125  0.0323

network discussed earlier is used to identify the system from input-output data and is described by the equation

$$ y(k+1) = N[y(k), y(k-1)] + u(k). \tag{46} $$

For FCPN, the identification process involves adjusting the weights of N using FCL. The FCPN identifier needs some prior information concerning the input-output behavior of the system before identification can be undertaken. FCL is used to adjust the weights of the neural network so that the error e(k+1) = y(k+1) − ŷ(k+1) is minimized, as shown in Figure 5(a). The behavior of the FCPN model for (44) is shown in Figure 5(b). The input u(k) was assumed to be a random signal uniformly distributed in the interval [−2, 2], and the weights in the neural network were adjusted accordingly.

Table 2 lists the errors calculated for the different NN models on Example 2. It can be seen from the table that the errors calculated for FCPN are the minimum, compared with DN and BPN.
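The plant of (44)–(45) can likewise be simulated to produce identification data; a sketch (names are ours, zero initial conditions assumed):

```python
def f2(y1, y2):
    # Eq. (45): f[y(k), y(k-1)] = y(k)*y(k-1)*(y(k)+2.5) / (1 + y(k)^2 + y(k-1)^2)
    return y1 * y2 * (y1 + 2.5) / (1.0 + y1 ** 2 + y2 ** 2)

def simulate_example2(u_seq, y0=0.0, y1=0.0):
    # Plant of Eq. (44): y(k+1) = f[y(k), y(k-1)] + u(k)
    y = [y0, y1]
    for u in u_seq:
        y.append(f2(y[-1], y[-2]) + u)
    return y
```

Note that the denominator in (45) is at least 1, so f2 is well defined everywhere; this is what keeps the plant easy to excite with a uniform random input on [−2, 2].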
Example 3. The nonlinear dynamical system [23] is governed by the difference equation

$$ y(k+1) = \frac{y(k)}{1 + y(k)^2} + u^3(k), \tag{47} $$

which corresponds to f[y(k)] = y(k)/(1 + y(k)^2) and g[u(k)] = u^3(k). The FCPN network with equation (47) is used to identify the system from input-output data. The weights in the

Table 3: Calculated errors for Example 3 (decimal points reconstructed from the extracted values).

NN model   MAE      SSE      MSE         RMSE    NMSE
FCPN       0.3519   0.0032   3.1709e-05  0.0056  5.6371e-06
Dynamic    5.5652   3.6158   0.0362      0.1902  0.0223
BP         42.6524  46.9937  0.4699      0.6855  0.2892

neural network were adjusted at every discrete time step.

The objective of (47) is to control the system to track a reference output given 100 sample data points. The model has two inputs, y(k) and u(k), and a single output, y(k+1), and system identification was initially performed with the system input uniformly distributed over [−2, 2]. Training and testing samples contain 250 sample data points, generated using a uniform distribution on the interval [−2, 2]. The FCPN controller uses the two inputs y(k) and u(k) to produce the output y(k+1). The function f obtained by FCPN is used to adjust the weights of the neural network so that the error e(k+1) = y(k+1) − ŷ(k+1) is minimized, as shown in Figure 6(a). In Figure 6(b) the desired output of the system and that of the FCPN model are shown and are seen to be indistinguishable.

Table 3 lists the errors calculated for the different NN models on Example 3. It can be seen from the table that the errors calculated for FCPN are the minimum, compared with DN and BPN.
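Equation (47) is first order in y, so its simulation is a one-line recursion; a sketch (function name is ours):

```python
def simulate_example3(u_seq, y0=0.0):
    # Plant of Eq. (47): y(k+1) = y(k)/(1 + y(k)^2) + u(k)^3
    y = [y0]
    for u in u_seq:
        y.append(y[-1] / (1.0 + y[-1] ** 2) + u ** 3)
    return y
```

With zero input the state decays toward zero, since |y/(1 + y^2)| < |y| for y ≠ 0, which matches the stable tracking behavior reported in Figure 6(b).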
Example 4. The nonlinear dynamical system [24] is governed by the difference equation (48); its generalized form is Model IV. In this example the system is assumed to be of the form

$$ y_p(k+1) = f[y_p(k),\, y_p(k-1),\, y_p(k-2),\, u(k),\, u(k-1)], \tag{48} $$
Figure 6: (a) Mean Square Error of the system in Example 3 using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 3.
Figure 7: (a) Mean Square Error of the system in Example 4 using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 4.
where the unknown function f has the form

$$ f[x_1, x_2, x_3, x_4, x_5] = \frac{x_1 x_2 x_3 x_5 (x_3 - 1) + x_4}{1 + x_3^2 + x_2^2}. \tag{49} $$

The identification model is approximated by FCPN. Figure 7(b) shows the output of the system and of the model when the identification procedure was carried out with a random input signal uniformly distributed in the interval [−1, 1]. The performance of the model is studied, and the error e(k+1) = y_p(k+1) − ŷ_p(k+1) is minimized, as shown in Figure 7(a). In Figure 7(b) the outputs of the system and of the FCPN model are shown. The input to the system and the identified model is given by u(k) = sin(2πk/250) for k ≤ 500 and u(k) = 0.8 sin(2πk/250) + 0.2 sin(2πk/25) for k > 500.

Table 4 lists the errors calculated for the different NN models on Example 4. It can be seen from the table that the errors calculated for FCPN are the minimum, compared with DN and BPN.

Table 4: Calculated errors for Example 4 (decimal points reconstructed from the extracted values).

NN model   MAE      SSE      MSE     RMSE    NMSE
FCPN       30.4348  2.1931   0.0027  0.0524  0.0186
Dynamic    32.8952  6.4059   0.0080  0.0895  0.0276
BP         53.1988  13.4174  0.0168  0.1295  0.0844
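The Model IV plant of (48)–(49) with the switched sinusoidal input above can be sketched as follows (function names and zero initial state are our own choices):

```python
import math

def f4(x1, x2, x3, x4, x5):
    # Eq. (49): f = (x1*x2*x3*x5*(x3 - 1) + x4) / (1 + x3^2 + x2^2)
    return (x1 * x2 * x3 * x5 * (x3 - 1.0) + x4) / (1.0 + x3 ** 2 + x2 ** 2)

def simulate_example4(u_seq):
    # Plant of Eq. (48): y(k+1) = f[y(k), y(k-1), y(k-2), u(k), u(k-1)]
    y = [0.0, 0.0, 0.0]
    u_prev = 0.0
    for u in u_seq:
        y.append(f4(y[-1], y[-2], y[-3], u, u_prev))
        u_prev = u
    return y

# Input signal described in the text: one sinusoid up to k = 500, two after
u = [math.sin(2 * math.pi * k / 250) if k <= 500
     else 0.8 * math.sin(2 * math.pi * k / 250) + 0.2 * math.sin(2 * math.pi * k / 25)
     for k in range(800)]
y = simulate_example4(u)
```

The denominator 1 + x2^2 + x3^2 damps the state-dependent term, which is why the response in Figure 7(b) stays in roughly the same range as the input.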
Example 5. In this example we use the Box-Jenkins time series [32] of 296 pairs of data measured from a gas furnace system, with the single input u(t) being the gas flow rate and the single output y(t) being the CO2 concentration in the outlet gas. Training and testing samples contain 196 and 100 data points, respectively. The FCPN uses two inputs, the current state y(k) and the desired state y_d(k), to produce an output y(k) [33].

Figure 8(a) shows the mean squared control error of the FCPN method, and Figure 8(b) shows the performance of the controller with the FCPN algorithm. The results show that the FCPN algorithm yields a good approximation using
Figure 8: (a) Mean Square Error of the system in Example 5 using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 5.
Table 5: BFR (%) values of the NN models for Examples 1–4 (decimal points reconstructed from the extracted values).

NN model   Example 1   Example 2   Example 3   Example 4
FCPN       97.32       99.48       99.76       86.35
DN         94.69       79.32       85.08       83.39
BPN        67.83       79.28       46.2234     70.94
fuzzy learning, as given in equation (11), on the Box-Jenkins time series data, based on the calculation of the BMN.

Table 5 shows the BFR (%) of the various NN models for Examples 1–4. It can be observed from Table 5 that the Best Fit Rate found for FCPN is the maximum, compared with DN and BPN, which shows the better performance of the FCPN network for nonlinear systems.
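Equation (11) and the exact FCL update rule fall outside this excerpt; as a rough illustration only, BMN selection and a winner-take-all update scaled by a fuzzy membership value are typically of the following form (all names and the membership weighting are our assumptions, not the paper's definitions):

```python
import math

def best_matched_node(x, weights):
    # BMN: index of the instar weight vector closest to the input x
    # (Euclidean distance used here as an assumed similarity measure)
    dists = [math.dist(x, w) for w in weights]
    return dists.index(min(dists))

def fcl_update(x, weights, memberships, lr=0.1):
    # Fuzzy competitive learning sketch: move only the winning node's
    # weights toward x, scaled by the learning rate and the winner's
    # fuzzy membership value
    j = best_matched_node(x, weights)
    mu = memberships[j]
    weights[j] = [w + lr * mu * (xi - w) for w, xi in zip(weights[j], x)]
    return j
```

Repeating this update over the training set clusters the instar weights around the input distribution, after which the outstar layer maps each winning node to its output response.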
6 Conclusions
FCPN, a neural network control method, was developed and presented. It is based on combining the FCL algorithm with CPN; the key idea explored is the use of the FCL algorithm for training the instar-outstar weights. The performance of the FCPN controller is tested on four nonlinear dynamical systems and on the Box-Jenkins time series data and compared with the Dynamic Network and the standard Back Propagation algorithm. The comparative performance of the FCPN algorithm and the Dynamic Network, in terms of the number of iterations and performance measures such as MAE, MSE, SSE, NMSE, and BFR, is summarized in Tables 1–4. It can be seen that the FCPN algorithm gives the minimum errors, compared with the Dynamic Network and the standard Back Propagation Network, for all four models of nonlinear dynamical systems and the Box-Jenkins time series data. The results obtained from FCPN were compared across the various error measures, clearly showing that FCPN works much better for the control of nonlinear dynamical systems.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The authors gratefully acknowledge the financial assistance provided by the All India Council for Technical Education (AICTE) in the form of a Research Promotion Scheme (RPS) project in 2012.
References
[1] T. Soderstrom and P. Stoica, System Identification, Prentice Hall, New York, NY, USA, 1989.
[2] S. A. Billings, Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains, Wiley, Chichester, UK, 2013.
[3] M. Liu, "Decentralized control of robot manipulators: nonlinear and adaptive approaches," IEEE Transactions on Automatic Control, vol. 44, no. 2, pp. 357–363, 1999.
[4] C.-M. Lin, A.-B. Ting, M.-C. Li, and T.-Y. Chen, "Neural-network-based robust adaptive control for a class of nonlinear systems," Neural Computing and Applications, vol. 20, no. 4, pp. 557–563, 2011.
[5] I. Rivals and L. Personnaz, "Nonlinear internal model control using neural networks: application to processes with delay and design issues," IEEE Transactions on Neural Networks, vol. 11, no. 1, pp. 80–90, 2000.
[6] I. Kanellakopoulos, P. V. Kokotovic, and A. S. Morse, "Systematic design of adaptive controllers for feedback linearizable systems," IEEE Transactions on Automatic Control, vol. 36, no. 11, pp. 1241–1253, 1991.
[7] P. Kokotovic, "The joy of feedback: nonlinear and adaptive," IEEE Control Systems, vol. 12, no. 3, pp. 7–17, 1992.
[8] H. Elmali and N. Olgac, "Robust output tracking control of nonlinear MIMO systems via sliding mode technique," Automatica, vol. 28, no. 1, pp. 145–151, 1992.
[9] N. Sadati and R. Ghadami, "Adaptive multi-model sliding mode control of robotic manipulators using soft computing," Neurocomputing, vol. 71, no. 13–15, pp. 2702–2710, 2008.
[10] A. Kroll and H. Schulte, "Benchmark problems for nonlinear system identification and control using soft computing methods: need and overview," Applied Soft Computing Journal, vol. 25, pp. 496–513, 2014.
[11] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, no. 5, pp. 359–366, 1989.
[12] A. Bortoletti, C. Di Fiore, S. Fanelli, and P. Zellini, "A new class of quasi-Newtonian methods for optimal learning in MLP-networks," IEEE Transactions on Neural Networks, vol. 14, no. 2, pp. 263–273, 2003.
[13] G. Lera and M. Pinzolas, "Neighborhood based Levenberg-Marquardt algorithm for neural network training," IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1200–1203, 2002.
[14] A. K. Shah and D. M. Adhyaru, "Clustering based multiple model control of hybrid dynamical systems using HJB solution," Applied Soft Computing, vol. 31, pp. 103–117, 2015.
[15] Y. Cui, H. Zhang, Y. Wang, and Z. Zhang, "Adaptive neural dynamic surface control for a class of uncertain nonlinear systems with disturbances," Neurocomputing, vol. 165, pp. 152–158, 2015.
[16] R. Rakkiyappan, N. Sakthivel, and J. Cao, "Stochastic sampled-data control for synchronization of complex dynamical networks with control packet loss and additive time-varying delays," Neural Networks, vol. 66, pp. 46–63, 2015.
[17] B. Kiumarsi, F. L. Lewis, and D. S. Levine, "Optimal control of nonlinear discrete time-varying systems using a new neural network approximation structure," Neurocomputing, vol. 156, pp. 157–165, 2015.
[18] B. M. Al-Hadithi, A. Jimenez, and R. G. Lopez, "Fuzzy optimal control using generalized Takagi-Sugeno model for multivariable nonlinear systems," Applied Soft Computing Journal, vol. 30, pp. 205–213, 2015.
[19] M. A. Gonzalez-Olvera and Y. Tang, "Identification of nonlinear discrete systems by a state-space recurrent neurofuzzy network with a convergent algorithm," Neurocomputing, vol. 148, pp. 318–325, 2015.
[20] J. F. de Canete, P. Del Saz-Orozco, I. Garcia-Moral, and S. Gonzalez-Perez, "Indirect adaptive structure for multivariable neural identification and control of a pilot distillation plant," Applied Soft Computing, vol. 12, no. 9, pp. 2728–2739, 2012.
[21] M. Cai and Z. Xiang, "Adaptive neural finite-time control for a class of switched nonlinear systems," Neurocomputing, vol. 155, pp. 177–185, 2015.
[22] Y. Li, S. Tong, and T. Li, "Adaptive fuzzy backstepping control design for a class of pure-feedback switched nonlinear systems," Nonlinear Analysis: Hybrid Systems, vol. 16, pp. 72–80, 2015.
[23] T.-Y. Kuc, J. S. Lee, and K. Nam, "An iterative learning control theory for a class of nonlinear dynamic systems," Automatica, vol. 28, no. 6, pp. 1215–1221, 1992.
[24] K. S. Narendra and K. Parthasarathy, "Neural networks and dynamical systems," International Journal of Approximate Reasoning, vol. 6, no. 2, pp. 109–131, 1992.
[25] R. Hecht-Nielsen, "Theory of the back propagation neural network," Neural Networks, vol. 1, pp. 593–605, 1989.
[26] M. T. Hagan, H. B. Demuth, M. H. Beale, and O. De Jesus, Neural Network Design, Cengage Learning, 2nd edition, 2014.
[27] F.-J. Chang and Y.-C. Chen, "A counterpropagation fuzzy-neural network modeling approach to real time streamflow prediction," Journal of Hydrology, vol. 245, no. 1–4, pp. 153–164, 2001.
[28] A. Dwivedi, N. S. C. Bose, A. Kumar, P. Kandula, D. Mishra, and P. K. Kalra, "A novel hybrid image compression technique: wavelet-MFOCPN," in Proceedings of the 9th Asian Symposium on Information Display (ASID '06), pp. 492–495, 2006.
[29] C. J. C. Burges, P. Simard, and H. S. Malvar, Improving Wavelet Image Compression with Neural Networks, Microsoft Research, Redmond, Wash, USA, 2001.
[30] D. Woods, "Back and counter propagation aberrations," in Proceedings of the IEEE International Conference on Neural Networks, pp. 473–479, 1988.
[31] D. Mishra, N. S. C. Bose, A. Tolambiya et al., "Color image compression with modified forward-only counterpropagation neural network: improvement of the quality using different distance measures," in Proceedings of the 9th International Conference on Information Technology (ICIT '06), pp. 139–140, IEEE, Bhubaneswar, India, December 2006.
[32] G. E. Box and G. M. Jenkins, Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, Calif, USA, 1970.
[33] J. Kim and N. Kasabov, "HyFIS: adaptive neuro-fuzzy inference systems and their application to nonlinear dynamical systems," Neural Networks, vol. 12, no. 9, pp. 1301–1319, 1999.
Computational Intelligence and Neuroscience 9
0 20 40 60 80 1000
1
2
3
Iteration
Mea
n Sq
uare
Err
ortimes10minus4
(a)
0 20 40 60 80 100
0
05
1
15
2
25
Iteration
Targ
et an
d FC
PN o
utpu
t
TargetFCPN
minus1
minus05
(b)Figure 5 (a) Mean Square Error of system (Example 2) using FCPN (b) Performance of the controller using FCPN algorithm for Example 2
Table 2 Calculated various errors for Example 2
NN models Different error calculationMAE SSE MSE RMSE NMSE
FCPN 01956 00032 32328119890 minus 005 00057 27164119890 minus 005
Dynamic 95152 29856 00299 01728 00308BP 101494 45163 00452 02125 00323
network discussed earlier is used to identify the system frominput-output data and is described by the equation
119910 (119896 + 1) = 119873 [119910 (119896) 119910 (119896 minus 1)] + 119906 (119896) (46)
For FCPN the identification process involves the adjustmentof the weights of 119873 using FCL FCPN identifier needs someprior information concerning the input-output behavior ofthe system before identification can be undertaken FCL isused to adjust the weights of the neural network so that theerror 119890(119896 + 1) = 119910(119896 + 1) minus 119910(119896 + 1) is minimized as shownin Figure 5(a) The behavior of the FCPN model for (16) isshown in Figure 5(b) The input 119906(119896) was assumed to be arandom signal uniformly distributed in the interval [minus2 2]The weights in the neural network were adjusted
Table 2 includes various calculated errors of differentNN models for Example 2 It can be seen from the tablethat various errors calculated for FCPN are minimum ascompared to the DN and BPN
Example 3 Thenonlinear dynamical system [23] is governedby the following difference equation
119910 (119896 + 1) =119910 (119896)
1 + 119910 (119896)2+ 1199063 (119896) (47)
which corresponds to 119891[119910(119896)] = 119910(119896)(1 + 119910(119896)2) and119892[119906(119896)] = 1199063(119896) FCPN network equation (47) is used toidentify the system from input-output data Weights in the
Table 3 Calculated various errors and BFR for Example 3
NN model MAE SSE MSE RMSE NMSEFCPN 03519 00032 31709119890 minus 005 00056 56371119890 minus 006
Dynamic 55652 36158 00362 01902 00223BP 426524 469937 04699 06855 02892
neural networks were adjusted for every instant of discretetime steps
The objective of (47) is to control the system to trackreference output given 100 sample data pointsThemodel hastwo inputs 119910(119896) and 119906(119896) and single output 119910(119896 + 1) and thesystem identification was initially performed with the systeminput being uniformly distributed over [minus2 2] Training andtesting samples contain 250 sample data points The FCPNcontroller uses two inputs 119910(119896) and 119906(119896) to output 119910(119896 + 1)The input data is generated using uniform distribution ininterval [minus2 2] The function 119891 obtained by FCPN is usedto adjust the weights of the neural network so that the error119890(119896 + 1) = 119910(119896 + 1) minus 119910(119896 + 1) is minimized as shown inFigure 6(a) In Figure 6(b) the desired outputs of the systemas well as the FCPN model are shown and are seen to beindistinguishable
Table 3 includes various calculated errors of differentNN models for Example 3 It can be seen from the tablethat various errors calculated for FCPN are minimum ascompared to the DN and BPN
Example 4 Thenonlinear dynamical system [24] is governedby the difference equation (48) Generalized form of thisequation is Model IV In this example the system is assumedto be of the form
119910119901(119896 + 1)
= 119891 [119910119901(119896) 119910
119901(119896 minus 1) 119910
119901(119896 minus 2) 119906 (119896) 119906 (119896 minus 1)]
(48)
10 Computational Intelligence and Neuroscience
0 20 40 60 80 1000
1
2
Iteration
Mea
n Sq
uare
Err
ortimes10minus4
(a)
0 20 40 60 80 100
0
1
2
3
Iteration
Targ
et an
d FC
PN o
utpu
t
TargetFCPN
minus3
minus2
minus1
(b)
Figure 6 (a) Mean Square Error of system (Example 3) using FCPN (b) Performance of the controller using FCPN algorithm for Example 3
Figure 7: (a) Mean Square Error of the system (Example 4) using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 4.
where the unknown function f has the form
$$f\left[x_1, x_2, x_3, x_4, x_5\right] = \frac{x_1 x_2 x_3 x_5 \left(x_3 - 1\right) + x_4}{1 + x_2^2 + x_3^2} \quad (49)$$
The identification model is approximated by FCPN. Figure 7(b) shows the output of the system and of the model when the identification procedure is carried out with a random input signal uniformly distributed in the interval [-1, 1]. The performance of the model is studied, and the error e(k+1) = y_p(k+1) - ŷ_p(k+1) between the plant and the identification model is minimized, as shown in Figure 7(a).
In Figure 7(b) the outputs of the system as well as the output of the FCPN model are shown. The input to the system and the identified model is given by u(k) = sin(2πk/250) for k ≤ 500 and u(k) = 0.8 sin(2πk/250) + 0.2 sin(2πk/25) for k > 500.
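As an illustration, the plant of (48)-(49) and the piecewise-sinusoidal test input above can be simulated directly. The sketch below assumes zero initial conditions, which the text does not specify:

```python
import math

def f(x1, x2, x3, x4, x5):
    # Plant nonlinearity of (49)
    return (x1 * x2 * x3 * x5 * (x3 - 1) + x4) / (1 + x2**2 + x3**2)

def u(k):
    # Piecewise-sinusoidal test input described in the text
    if k <= 500:
        return math.sin(2 * math.pi * k / 250)
    return 0.8 * math.sin(2 * math.pi * k / 250) + 0.2 * math.sin(2 * math.pi * k / 25)

def simulate(n_steps=800):
    # Iterate the difference equation (48); zero initial conditions assumed
    y = [0.0, 0.0, 0.0]
    for k in range(2, n_steps - 1):
        y.append(f(y[k], y[k - 1], y[k - 2], u(k), u(k - 1)))
    return y
```

The resulting sequence y_p(k) is the target signal the identification model is trained to reproduce.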
Table 4 lists the various calculated errors of the different NN models for Example 4. It can be seen from the table that the errors calculated for FCPN are minimum compared to those of DN and BPN.
Table 4: Calculated errors for Example 4.

NN models     MAE       SSE       MSE      RMSE     NMSE
FCPN          30.4348   2.1931    0.0027   0.0524   0.0186
Dynamic       32.8952   6.4059    0.0080   0.0895   0.0276
BP            53.1988   13.4174   0.0168   0.1295   0.0844
Example 5. In this example we have used the Box-Jenkins time series [32] of 296 pairs of data measured from a gas furnace system, with the single input u(t) being the gas flow rate and the single output y(t) being the CO2 concentration in the outlet gas. The training and testing samples contain 196 and 100 data points, respectively. The FCPN uses two inputs, the current state y(k) and the desired state y_d(k), to produce the output ŷ(k) [33].
Figure 8(a) shows the mean squared control errors of the FCPN method. Figure 8(b) shows the performance of the controller with the FCPN algorithm. The results show that the FCPN algorithm enables an accurate approximation, using
Figure 8: (a) Mean Square Error of the system (Example 5) using FCPN. (b) Performance of the controller using the FCPN algorithm for Example 5.
Table 5: BFR (%) values of NN models for various examples.

NN models   Example 1   Example 2   Example 3   Example 4
FCPN        97.32       99.48       99.76       86.35
DN          94.69       79.32       85.08       83.39
BPN         67.83       79.28       46.2234     70.94
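The paper does not restate its BFR formula in this section; the sketch below uses the common system-identification definition of the Best Fit Rate, 100·(1 − ‖y − ŷ‖ / ‖y − mean(y)‖) clipped at zero, which is an assumption:

```python
import math

def best_fit_rate(y_true, y_pred):
    # BFR (%): normalised fit of the model output to the target,
    # 100 for a perfect fit, 0 for a mean-value predictor (or worse).
    mean_y = sum(y_true) / len(y_true)
    num = math.sqrt(sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred)))
    den = math.sqrt(sum((yt - mean_y) ** 2 for yt in y_true))
    return max(0.0, 100.0 * (1.0 - num / den))
```

Under this definition, the values in Table 5 read as "percentage of the target variation explained" by each network.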
fuzzy learning as given in equation (11), of the Box-Jenkins time series data, based on calculation of the BMN.
Table 5 shows the BFR (%) of the various NN models for Examples 1-4. It can be observed from Table 5 that the Best Fit Rate found for FCPN is maximum compared to DN and BPN, which shows the better performance of the FCPN network for nonlinear systems.
6 Conclusions
FCPN is a neural network control method which was developed and presented here. It is based on combining the FCL algorithm with CPN; the key idea explored is the use of the FCL algorithm for training the instar-outstar weights. The performance of the FCPN controller is tested on four nonlinear dynamical systems and on the Box-Jenkins time series data and compared with the Dynamic Network and the standard Back Propagation algorithm. The comparative performances of the FCPN algorithm and the Dynamic Network, such as the number of iterations and the performance measures MAE, MSE, SSE, RMSE, NMSE, and BFR, are summarized in Tables 1-4. It can be seen that the FCPN algorithm gives minimum errors compared to the Dynamic Network and the standard Back Propagation Network for all four models of nonlinear dynamical systems and the Box-Jenkins time series data. The results obtained from FCPN were compared across the various error measures, and it is clearly shown that FCPN works much better for the control of nonlinear dynamical systems.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The authors gratefully acknowledge the financial assistance provided by the All India Council for Technical Education (AICTE) in the form of a Research Promotion Scheme (RPS) project in 2012.
References
[1] T. Söderström and P. Stoica, System Identification, Prentice Hall, New York, NY, USA, 1989.
[2] S. A. Billings, Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains, Wiley, Chichester, UK, 2013.
[3] M. Liu, "Decentralized control of robot manipulators: nonlinear and adaptive approaches," IEEE Transactions on Automatic Control, vol. 44, no. 2, pp. 357-363, 1999.
[4] C.-M. Lin, A.-B. Ting, M.-C. Li, and T.-Y. Chen, "Neural-network-based robust adaptive control for a class of nonlinear systems," Neural Computing and Applications, vol. 20, no. 4, pp. 557-563, 2011.
[5] I. Rivals and L. Personnaz, "Nonlinear internal model control using neural networks: application to processes with delay and design issues," IEEE Transactions on Neural Networks, vol. 11, no. 1, pp. 80-90, 2000.
[6] I. Kanellakopoulos, P. V. Kokotovic, and A. S. Morse, "Systematic design of adaptive controllers for feedback linearizable systems," IEEE Transactions on Automatic Control, vol. 36, no. 11, pp. 1241-1253, 1991.
[7] P. Kokotovic, "The joy of feedback: nonlinear and adaptive," IEEE Control Systems, vol. 12, no. 3, pp. 7-17, 1992.
[8] H. Elmali and N. Olgac, "Robust output tracking control of nonlinear MIMO systems via sliding mode technique," Automatica, vol. 28, no. 1, pp. 145-151, 1992.
[9] N. Sadati and R. Ghadami, "Adaptive multi-model sliding mode control of robotic manipulators using soft computing," Neurocomputing, vol. 71, no. 13-15, pp. 2702-2710, 2008.
[10] A. Kroll and H. Schulte, "Benchmark problems for nonlinear system identification and control using Soft Computing methods: need and overview," Applied Soft Computing Journal, vol. 25, pp. 496-513, 2014.
[11] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, no. 5, pp. 359-366, 1989.
[12] A. Bortoletti, C. Di Fiore, S. Fanelli, and P. Zellini, "A new class of Quasi-Newtonian methods for optimal learning in MLP-networks," IEEE Transactions on Neural Networks, vol. 14, no. 2, pp. 263-273, 2003.
[13] G. Lera and M. Pinzolas, "Neighborhood based Levenberg-Marquardt algorithm for neural network training," IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1200-1203, 2002.
[14] A. K. Shah and D. M. Adhyaru, "Clustering based multiple model control of hybrid dynamical systems using HJB solution," Applied Soft Computing, vol. 31, pp. 103-117, 2015.
[15] Y. Cui, H. Zhang, Y. Wang, and Z. Zhang, "Adaptive neural dynamic surface control for a class of uncertain nonlinear systems with disturbances," Neurocomputing, vol. 165, pp. 152-158, 2015.
[16] R. Rakkiyappan, N. Sakthivel, and J. Cao, "Stochastic sampled-data control for synchronization of complex dynamical networks with control packet loss and additive time-varying delays," Neural Networks, vol. 66, pp. 46-63, 2015.
[17] B. Kiumarsi, F. L. Lewis, and D. S. Levine, "Optimal control of nonlinear discrete time-varying systems using a new neural network approximation structure," Neurocomputing, vol. 156, pp. 157-165, 2015.
[18] B. M. Al-Hadithi, A. Jiménez, and R. G. López, "Fuzzy optimal control using generalized Takagi-Sugeno model for multivariable nonlinear systems," Applied Soft Computing Journal, vol. 30, pp. 205-213, 2015.
[19] M. A. González-Olvera and Y. Tang, "Identification of nonlinear discrete systems by a state-space recurrent neurofuzzy network with a convergent algorithm," Neurocomputing, vol. 148, pp. 318-325, 2015.
[20] J. F. de Canete, P. Del Saz-Orozco, I. Garcia-Moral, and S. Gonzalez-Perez, "Indirect adaptive structure for multivariable neural identification and control of a pilot distillation plant," Applied Soft Computing, vol. 12, no. 9, pp. 2728-2739, 2012.
[21] M. Cai and Z. Xiang, "Adaptive neural finite-time control for a class of switched nonlinear systems," Neurocomputing, vol. 155, pp. 177-185, 2015.
[22] Y. Li, S. Tong, and T. Li, "Adaptive fuzzy backstepping control design for a class of pure-feedback switched nonlinear systems," Nonlinear Analysis: Hybrid Systems, vol. 16, pp. 72-80, 2015.
[23] T.-Y. Kuc, J. S. Lee, and K. Nam, "An iterative learning control theory for a class of nonlinear dynamic systems," Automatica, vol. 28, no. 6, pp. 1215-1221, 1992.
[24] K. S. Narendra and K. Parthasarathy, "Neural networks and dynamical systems," International Journal of Approximate Reasoning, vol. 6, no. 2, pp. 109-131, 1992.
[25] R. Hecht-Nielsen, "Theory of the back propagation neural network," Neural Networks, vol. 1, pp. 593-605, 1989.
[26] M. T. Hagan, H. B. Demuth, M. H. Beale, and O. De Jesus, Neural Network Design, Cengage Learning, 2nd edition, 2014.
[27] F.-J. Chang and Y.-C. Chen, "A counterpropagation fuzzy-neural network modeling approach to real time streamflow prediction," Journal of Hydrology, vol. 245, no. 1-4, pp. 153-164, 2001.
[28] A. Dwivedi, N. S. C. Bose, A. Kumar, P. Kandula, D. Mishra, and P. K. Kalra, "A novel hybrid image compression technique: wavelet-MFOCPN," in Proceedings of the 9th Asian Symposium on Information Display (ASID '06), pp. 492-495, 2006.
[29] C. J. C. Burges, P. Simard, and H. S. Malvar, Improving Wavelet Image Compression with Neural Networks, Microsoft Research, Redmond, Wash, USA, 2001.
[30] D. Woods, "Back and counter propagation aberrations," in Proceedings of the IEEE International Conference on Neural Networks, pp. 473-479, 1988.
[31] D. Mishra, N. S. C. Bose, A. Tolambiya et al., "Color image compression with modified forward-only counterpropagation neural network: improvement of the quality using different distance measures," in Proceedings of the 9th International Conference on Information Technology (ICIT '06), pp. 139-140, IEEE, Bhubaneswar, India, December 2006.
[32] G. E. Box and G. M. Jenkins, Time Series Analysis: Forecasting and Control, Holden Day, San Francisco, Calif, USA, 1970.
[33] J. Kim and N. Kasabov, "HyFIS: adaptive neuro-fuzzy inference systems and their application to nonlinear dynamical systems," Neural Networks, vol. 12, no. 9, pp. 1301-1319, 1999.
10 Computational Intelligence and Neuroscience
0 20 40 60 80 1000
1
2
Iteration
Mea
n Sq
uare
Err
ortimes10minus4
(a)
0 20 40 60 80 100
0
1
2
3
Iteration
Targ
et an
d FC
PN o
utpu
t
TargetFCPN
minus3
minus2
minus1
(b)
Figure 6 (a) Mean Square Error of system (Example 3) using FCPN (b) Performance of the controller using FCPN algorithm for Example 3
0 100 200 300 400 500 600 700 8000
1
2
3
4
5
6
Iteration
Mea
n Sq
uare
Err
or
times10minus4
(a)
0 100 200 300 400 500 600 700 800
0
05
1
Iteration
Targ
et an
d FC
PN o
utpu
t
TargetFCPN
minus1
minus05
(b)
Figure 7 (a)Mean Square Error of system (Example 4) using FCPN (b) Performance of the controller using FCPN algorithm for Example 4
where the unknown function 119891 has the form
119891 [1199091 1199092 1199093 1199094 1199095] =
1199091119909211990931199095(1199093minus 1) + 119909
4
1 + 11990923+ 11990922
(49)
In the identification model which is approximated by FCPNFigure 7(b) shows the output of the system and the modelwhen the identification procedure carried random inputsignal uniformly distributed in the interval [minus1 1] Theperformance of the model is studied and the error 119890(119896 +1) = 119873[119910
119901(119896 + 1)] is minimized as shown in Figure 7(a)
In Figure 7(b) the outputs of the system as well as outputof FCPN model are shown Input to the system and theidentified model is given by 119906(119896) = sin(2120587119896250) for 119896 le 500and 119906(119896) = 08 sin(2120587119896250) + 02 sin(212058711989625) for 119896 gt 500
Table 4 includes various calculated errors of differentNN models for Example 4 It can be seen from the tablethat various errors calculated for FCPN are minimum ascompared to the DN and BPN
Table 4 Calculated various errors and BFR for Example 4
NN models MAE SSE MSE RMSE NMSEFCPN 304348 21931 00027 00524 00186Dynamic 328952 64059 00080 00895 00276BP 531988 134174 00168 01295 00844
Example 5 In this example we have used Box-Jenkins timeseries [32] of 296 pairs of data measured from a gas furnacesystem with single input 119906(119905) being gas flow rate and singleoutput 119910(119905) being CO
2concentration in outlet gas Training
samples and testing samples contained 196 and 100 datapoints respectively The FCPN uses the two inputs thecurrent state 119910(119896) and the desired state 119910119889(119896) to produce anoutput which is 119910(119896) [33]
Figure 8(a) shows the mean squared control errors ofFCPN methods Figure 8(b) shows the performance of thecontroller with FCPN algorithm Result shows that FCPNalgorithm enable us to appropriate the approximation using
Computational Intelligence and Neuroscience 11
0 10 20 30 40 50 60 70 80 90 100044046048
05052054056058
06
Iteration
Mea
n Sq
uare
Err
or
(a)
0 10 20 30 40 50 60 70 80 90 10045
50
55
60
Iteration
Targ
et an
d FC
PN o
utpu
t
DesiredFCPN
(b)Figure 8 (a) Mean Square Error of system (Example 5) using FCPN (b) Performance of the controller using FCPN algorithm for Example 5
Table 5 BFR () values of NN models for various examples
NN models BFR ()Example 1 Example 2 Example 3 Example 4
FCPN 9732 9948 9976 8635DN 9469 7932 8508 8339BPN 6783 7928 462234 7094
fuzzy learning as given in equation (11) of Box Jenkins timeseries data based on calculation of BMN
Table 5 shows BFR () various NN models for all Exam-ples 1ndash5 respectively It can be observed from Table 5 that theBest Fit Rate found for FCPN is maximum as compared toDN and BPN which shows better performance of the FCPNnetwork for nonlinear system
6 Conclusions
FCPN is a neural network control method which was devel-oped and presented It is based on the concept of combiningFCL algorithm and CPN The key ideas explored are theuse of the FCL algorithm for training the weights of theinstar-outstar The performances of the FCPN controllerbased training are tested using four nonlinear dynamicalsystems and on time series (Box-Jenkins) data and comparedwith the Dynamic Network and standard Back Propagationalgorithm The comparative performances of the FCPNalgorithm and Dynamic Network such as the number ofiterations and performance functions like MAE MSE SSENMSE BFR and so forth of FCPN and Dynamic Networkerror are summarized in Tables 1ndash4 It can be seen that theFCPN algorithm gives minimum errors as compared to theDynamic Network and standard Back Propagation Networkfor all four models of nonlinear dynamical systems and Box-Jenkins time series data Results obtained from FCPN werecompared for various errors and it is clearly shown that FCPNworks much better for the control of nonlinear dynamicalsystems
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgment
The authors gratefully acknowledged the financial assistanceprovided by the All India Council of Technical Education(AICTE) in the form of Research Promotion Scheme (RPS)project in 2012
References
[1] T Soderstrom and P Stoica System Identification Prentice HallNew York NY USA 1989
[2] S A Billings Nonlinear System Identification NARMAXMeth-ods in the Time Frequency and Spatio-Temporal DomainsWiley Chichester UK 2013
[3] M Liu ldquoDecentralized control of robotmanipulators nonlinearand adaptive approachesrdquo IEEE Transactions on AutomaticControl vol 44 no 2 pp 357ndash363 1999
[4] C-M Lin A-B Ting M-C Li and T-Y Chen ldquoNeural-network-based robust adaptive control for a class of nonlinearsystemsrdquo Neural Computing and Applications vol 20 no 4 pp557ndash563 2011
[5] I Rivals and L Personnaz ldquoNonlinear internal model controlusing neural networks application to processes with delay anddesign issuesrdquo IEEETransactions onNeural Networks vol 11 no1 pp 80ndash90 2000
[6] I Kanellakopoulos P V Kokotovic and A S Morse ldquoSys-tematic design of adaptive controllers for feedback linearizablesystemsrdquo IEEE Transactions on Automatic Control vol 36 no11 pp 1241ndash1253 1991
[7] P Kokotovic ldquoThe joy of feedback nonlinear and adaptiverdquoIEEE Control Systems vol 12 no 3 pp 7ndash17 1992
[8] H Elmali andNOlgac ldquoRobust output tracking control of non-linear MIMO systems via sliding mode techniquerdquoAutomaticavol 28 no 1 pp 145ndash151 1992
12 Computational Intelligence and Neuroscience
[9] N Sadati and R Ghadami ldquoAdaptivemulti-model slidingmodecontrol of robotic manipulators using soft computingrdquo Neuro-computing vol 71 no 13ndash15 pp 2702ndash2710 2008
[10] A Kroll and H Schulte ldquoBenchmark problems for nonlinearsystem identification and control using Soft Computing meth-ods need and overviewrdquo Applied Soft Computing Journal vol25 pp 496ndash513 2014
[11] K Hornik M Stinchcombe and HWhite ldquoMultilayer feedfor-ward networks are universal approximatorsrdquo Neural Networksvol 2 no 5 pp 359ndash366 1989
[12] A Bortoletti C Di Fiore S Fanelli and P Zellini ldquoA new classof Quasi-Newtonian methods for optimal learning in MLP-networksrdquo IEEE Transactions on Neural Networks vol 14 no2 pp 263ndash273 2003
[13] G Lera and M Pinzolas ldquoNeighborhood based Levenberg-Marquardt algorithm for neural network trainingrdquo IEEE Trans-actions on Neural Networks vol 13 no 5 pp 1200ndash1203 2002
[14] A K Shah and D M Adhyaru ldquoClustering based multiplemodel control of hybrid dynamical systems usingHJB solutionrdquoApplied Soft Computing vol 31 pp 103ndash117 2015
[15] Y Cui H Zhang Y Wang and Z Zhang ldquoAdaptive neuraldynamic surface control for a class of uncertain nonlinearsystems with disturbancesrdquo Neurocomputing vol 165 pp 152ndash158 2015
[16] R Rakkiyappan N Sakthivel and J Cao ldquoStochastic sampled-data control for synchronization of complex dynamical net-works with control packet loss and additive time-varyingdelaysrdquo Neural Networks vol 66 pp 46ndash63 2015
[17] B Kiumarsi F L Lewis and D S Levine ldquoOptimal controlof nonlinear discrete time-varying systems using a new neuralnetwork approximation structurerdquo Neurocomputing vol 156pp 157ndash165 2015
[18] B M Al-Hadithi A Jimenez and R G Lopez ldquoFuzzy optimalcontrol using generalized Takagi-Sugeno model for multivari-able nonlinear systemsrdquoApplied SoftComputing Journal vol 30pp 205ndash213 2015
[19] MAGonzalez-Olvera andY Tang ldquoIdentification of nonlineardiscrete systems by a state-space recurrent neurofuzzy networkwith a convergent algorithmrdquoNeurocomputing vol 148 pp 318ndash325 2015
[20] J F de Canete P Del Saz-Orozco I Garcia-Moral and SGonzalez-Perez ldquoIndirect adaptive structure for multivariableneural identification and control of a pilot distillation plantrdquoApplied Soft Computing vol 12 no 9 pp 2728ndash2739 2012
[21] M Cai and Z Xiang ldquoAdaptive neural finite-time control for aclass of switched nonlinear systemsrdquo Neurocomputing vol 155pp 177ndash185 2015
[22] Y Li S Tong and T Li ldquoAdaptive fuzzy backstepping controldesign for a class of pure-feedback switched nonlinear systemsrdquoNonlinear Analysis Hybrid Systems vol 16 pp 72ndash80 2015
[23] T-Y Kuc J S Lee and K Nam ldquoAn iterative learning controltheory for a class of nonlinear dynamic systemsrdquo Automaticavol 28 no 6 pp 1215ndash1221 1992
[24] K S Narendra and K Parthasarathy ldquoNeural networks anddynamical systemsrdquo International Journal of Approximate Rea-soning vol 6 no 2 pp 109ndash131 1992
[25] R Hecht-Nielsen ldquoTheory of the back propagation neuralnetworkrdquo Neural Networks vol 1 pp 593ndash605 1989
[26] M T Hagan H B Demuth M H Beale and O De JesusNeural Network Design Cengage Learning 2nd edition 2014
[27] F-J Chang and Y-C Chen ldquoA counterpropagation fuzzy-neural network modeling approach to real time streamflowpredictionrdquo Journal of Hydrology vol 245 no 1ndash4 pp 153ndash1642001
[28] A Dwivedi N S C Bose A Kumar P Kandula D Mishraand P K Kalra ldquoA novel hybrid image compression techniquewavelet-MFOCPNrdquo in Proceedings of the 9th Asian Symposiumon Information Display (ASID rsquo06) pp 492ndash495 2006
[29] C J C Burges P Simard and H S Malvar Improving WaveletImage Compression with Neural Networks Microsoft ResearchRedmond Wash USA 2001
[30] D Woods ldquoBack and counter propagation aberrationsrdquo in Pro-ceedings of the IEEE International Conference on Neural Net-works pp 473ndash479 1988
[31] DMishra N S C Bose A Tolambiya et al ldquoColor image com-pression with modified forward-only counterpropagation neu-ral network improvement of the quality using different distancemeasuresrdquo in Proceedings of the 9th International Conferenceon Information Technology (ICIT rsquo06) pp 139ndash140 IEEEBhubaneswar India December 2006
[32] G E Box and G M Jenkins Time Series Analysis Forecastingand Control Holden Day San Francisco Calif USA 1970
[33] J Kim and N Kasabov ldquoHyFIS adaptive neuro-fuzzy inferencesystems and their application to nonlinear dynamical systemsrdquoNeural Networks vol 12 no 9 pp 1301ndash1319 1999
Submit your manuscripts athttpwwwhindawicom
Computer Games Technology
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Distributed Sensor Networks
International Journal of
Advances in
FuzzySystems
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014
International Journal of
ReconfigurableComputing
Hindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Applied Computational Intelligence and Soft Computing
thinspAdvancesthinspinthinsp
Artificial Intelligence
HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014
Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Journal of
Computer Networks and Communications
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporation
httpwwwhindawicom Volume 2014
Advances in
Multimedia
International Journal of
Biomedical Imaging
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
ArtificialNeural Systems
Advances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
RoboticsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Computational Intelligence and Neuroscience
Industrial EngineeringJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Human-ComputerInteraction
Advances in
Computer EngineeringAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Computational Intelligence and Neuroscience 11
0 10 20 30 40 50 60 70 80 90 100044046048
05052054056058
06
Iteration
Mea
n Sq
uare
Err
or
(a)
0 10 20 30 40 50 60 70 80 90 10045
50
55
60
Iteration
Targ
et an
d FC
PN o
utpu
t
DesiredFCPN
(b)Figure 8 (a) Mean Square Error of system (Example 5) using FCPN (b) Performance of the controller using FCPN algorithm for Example 5
Table 5 BFR () values of NN models for various examples
NN models BFR ()Example 1 Example 2 Example 3 Example 4
FCPN 9732 9948 9976 8635DN 9469 7932 8508 8339BPN 6783 7928 462234 7094
fuzzy learning as given in equation (11) of Box Jenkins timeseries data based on calculation of BMN
Table 5 shows BFR () various NN models for all Exam-ples 1ndash5 respectively It can be observed from Table 5 that theBest Fit Rate found for FCPN is maximum as compared toDN and BPN which shows better performance of the FCPNnetwork for nonlinear system
6 Conclusions
FCPN is a neural network control method which was devel-oped and presented It is based on the concept of combiningFCL algorithm and CPN The key ideas explored are theuse of the FCL algorithm for training the weights of theinstar-outstar The performances of the FCPN controllerbased training are tested using four nonlinear dynamicalsystems and on time series (Box-Jenkins) data and comparedwith the Dynamic Network and standard Back Propagationalgorithm The comparative performances of the FCPNalgorithm and Dynamic Network such as the number ofiterations and performance functions like MAE MSE SSENMSE BFR and so forth of FCPN and Dynamic Networkerror are summarized in Tables 1ndash4 It can be seen that theFCPN algorithm gives minimum errors as compared to theDynamic Network and standard Back Propagation Networkfor all four models of nonlinear dynamical systems and Box-Jenkins time series data Results obtained from FCPN werecompared for various errors and it is clearly shown that FCPNworks much better for the control of nonlinear dynamicalsystems
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgment
The authors gratefully acknowledged the financial assistanceprovided by the All India Council of Technical Education(AICTE) in the form of Research Promotion Scheme (RPS)project in 2012
References
[1] T Soderstrom and P Stoica System Identification Prentice HallNew York NY USA 1989
[2] S A Billings Nonlinear System Identification NARMAXMeth-ods in the Time Frequency and Spatio-Temporal DomainsWiley Chichester UK 2013
[3] M Liu ldquoDecentralized control of robotmanipulators nonlinearand adaptive approachesrdquo IEEE Transactions on AutomaticControl vol 44 no 2 pp 357ndash363 1999
[4] C-M Lin A-B Ting M-C Li and T-Y Chen ldquoNeural-network-based robust adaptive control for a class of nonlinearsystemsrdquo Neural Computing and Applications vol 20 no 4 pp557ndash563 2011
[5] I Rivals and L Personnaz ldquoNonlinear internal model controlusing neural networks application to processes with delay anddesign issuesrdquo IEEETransactions onNeural Networks vol 11 no1 pp 80ndash90 2000
[6] I Kanellakopoulos P V Kokotovic and A S Morse ldquoSys-tematic design of adaptive controllers for feedback linearizablesystemsrdquo IEEE Transactions on Automatic Control vol 36 no11 pp 1241ndash1253 1991
[7] P Kokotovic ldquoThe joy of feedback nonlinear and adaptiverdquoIEEE Control Systems vol 12 no 3 pp 7ndash17 1992
[8] H Elmali andNOlgac ldquoRobust output tracking control of non-linear MIMO systems via sliding mode techniquerdquoAutomaticavol 28 no 1 pp 145ndash151 1992
12 Computational Intelligence and Neuroscience
[9] N Sadati and R Ghadami ldquoAdaptivemulti-model slidingmodecontrol of robotic manipulators using soft computingrdquo Neuro-computing vol 71 no 13ndash15 pp 2702ndash2710 2008
[10] A Kroll and H Schulte ldquoBenchmark problems for nonlinearsystem identification and control using Soft Computing meth-ods need and overviewrdquo Applied Soft Computing Journal vol25 pp 496ndash513 2014
[11] K Hornik M Stinchcombe and HWhite ldquoMultilayer feedfor-ward networks are universal approximatorsrdquo Neural Networksvol 2 no 5 pp 359ndash366 1989
[12] A Bortoletti C Di Fiore S Fanelli and P Zellini ldquoA new classof Quasi-Newtonian methods for optimal learning in MLP-networksrdquo IEEE Transactions on Neural Networks vol 14 no2 pp 263ndash273 2003
[13] G Lera and M Pinzolas ldquoNeighborhood based Levenberg-Marquardt algorithm for neural network trainingrdquo IEEE Trans-actions on Neural Networks vol 13 no 5 pp 1200ndash1203 2002
[14] A K Shah and D M Adhyaru ldquoClustering based multiplemodel control of hybrid dynamical systems usingHJB solutionrdquoApplied Soft Computing vol 31 pp 103ndash117 2015
Computational Intelligence and Neuroscience
[9] N. Sadati and R. Ghadami, “Adaptive multi-model sliding mode control of robotic manipulators using soft computing,” Neurocomputing, vol. 71, no. 13–15, pp. 2702–2710, 2008.
[10] A. Kroll and H. Schulte, “Benchmark problems for nonlinear system identification and control using soft computing methods: need and overview,” Applied Soft Computing Journal, vol. 25, pp. 496–513, 2014.
[11] K. Hornik, M. Stinchcombe, and H. White, “Multilayer feedforward networks are universal approximators,” Neural Networks, vol. 2, no. 5, pp. 359–366, 1989.
[12] A. Bortoletti, C. Di Fiore, S. Fanelli, and P. Zellini, “A new class of quasi-Newtonian methods for optimal learning in MLP-networks,” IEEE Transactions on Neural Networks, vol. 14, no. 2, pp. 263–273, 2003.
[13] G. Lera and M. Pinzolas, “Neighborhood based Levenberg-Marquardt algorithm for neural network training,” IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1200–1203, 2002.
[14] A. K. Shah and D. M. Adhyaru, “Clustering based multiple model control of hybrid dynamical systems using HJB solution,” Applied Soft Computing, vol. 31, pp. 103–117, 2015.
[15] Y. Cui, H. Zhang, Y. Wang, and Z. Zhang, “Adaptive neural dynamic surface control for a class of uncertain nonlinear systems with disturbances,” Neurocomputing, vol. 165, pp. 152–158, 2015.
[16] R. Rakkiyappan, N. Sakthivel, and J. Cao, “Stochastic sampled-data control for synchronization of complex dynamical networks with control packet loss and additive time-varying delays,” Neural Networks, vol. 66, pp. 46–63, 2015.
[17] B. Kiumarsi, F. L. Lewis, and D. S. Levine, “Optimal control of nonlinear discrete time-varying systems using a new neural network approximation structure,” Neurocomputing, vol. 156, pp. 157–165, 2015.
[18] B. M. Al-Hadithi, A. Jimenez, and R. G. Lopez, “Fuzzy optimal control using generalized Takagi-Sugeno model for multivariable nonlinear systems,” Applied Soft Computing Journal, vol. 30, pp. 205–213, 2015.
[19] M. A. Gonzalez-Olvera and Y. Tang, “Identification of nonlinear discrete systems by a state-space recurrent neurofuzzy network with a convergent algorithm,” Neurocomputing, vol. 148, pp. 318–325, 2015.
[20] J. F. de Canete, P. Del Saz-Orozco, I. Garcia-Moral, and S. Gonzalez-Perez, “Indirect adaptive structure for multivariable neural identification and control of a pilot distillation plant,” Applied Soft Computing, vol. 12, no. 9, pp. 2728–2739, 2012.
[21] M. Cai and Z. Xiang, “Adaptive neural finite-time control for a class of switched nonlinear systems,” Neurocomputing, vol. 155, pp. 177–185, 2015.
[22] Y. Li, S. Tong, and T. Li, “Adaptive fuzzy backstepping control design for a class of pure-feedback switched nonlinear systems,” Nonlinear Analysis: Hybrid Systems, vol. 16, pp. 72–80, 2015.
[23] T.-Y. Kuc, J. S. Lee, and K. Nam, “An iterative learning control theory for a class of nonlinear dynamic systems,” Automatica, vol. 28, no. 6, pp. 1215–1221, 1992.
[24] K. S. Narendra and K. Parthasarathy, “Neural networks and dynamical systems,” International Journal of Approximate Reasoning, vol. 6, no. 2, pp. 109–131, 1992.
[25] R. Hecht-Nielsen, “Theory of the back propagation neural network,” Neural Networks, vol. 1, pp. 593–605, 1989.
[26] M. T. Hagan, H. B. Demuth, M. H. Beale, and O. De Jesus, Neural Network Design, Cengage Learning, 2nd edition, 2014.
[27] F.-J. Chang and Y.-C. Chen, “A counterpropagation fuzzy-neural network modeling approach to real time streamflow prediction,” Journal of Hydrology, vol. 245, no. 1–4, pp. 153–164, 2001.
[28] A. Dwivedi, N. S. C. Bose, A. Kumar, P. Kandula, D. Mishra, and P. K. Kalra, “A novel hybrid image compression technique: wavelet-MFOCPN,” in Proceedings of the 9th Asian Symposium on Information Display (ASID ’06), pp. 492–495, 2006.
[29] C. J. C. Burges, P. Simard, and H. S. Malvar, Improving Wavelet Image Compression with Neural Networks, Microsoft Research, Redmond, Wash, USA, 2001.
[30] D. Woods, “Back and counter propagation aberrations,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 473–479, 1988.
[31] D. Mishra, N. S. C. Bose, A. Tolambiya et al., “Color image compression with modified forward-only counterpropagation neural network: improvement of the quality using different distance measures,” in Proceedings of the 9th International Conference on Information Technology (ICIT ’06), pp. 139–140, IEEE, Bhubaneswar, India, December 2006.
[32] G. E. Box and G. M. Jenkins, Time Series Analysis: Forecasting and Control, Holden Day, San Francisco, Calif, USA, 1970.
[33] J. Kim and N. Kasabov, “HyFIS: adaptive neuro-fuzzy inference systems and their application to nonlinear dynamical systems,” Neural Networks, vol. 12, no. 9, pp. 1301–1319, 1999.