

Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 873670, 10 pages
http://dx.doi.org/10.1155/2013/873670

Research Article
Chaotic Hopfield Neural Network Swarm Optimization and Its Application

Yanxia Sun,1 Zenghui Wang,2 and Barend Jacobus van Wyk1

1 Department of Electrical Engineering, Tshwane University of Technology, Pretoria 0001, South Africa
2 School of Engineering, University of South Africa, Florida 1710, South Africa

Correspondence should be addressed to Zenghui Wang; [email protected]

Received 7 February 2013; Accepted 20 March 2013

Academic Editor: Xiaojing Yang

Copyright © 2013 Yanxia Sun et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A new neural network based optimization algorithm is proposed. The presented model is a discrete-time, continuous-state Hopfield neural network, and the states of the model are updated synchronously. The proposed algorithm combines the advantages of traditional PSO, chaos, and Hopfield neural networks: particles learn from their own experience and the experiences of surrounding particles, their search behavior is ergodic, and convergence of the swarm is guaranteed. The effectiveness of the proposed approach is demonstrated using simulations and typical optimization problems.

1. Introduction

The discovery of chaos in astronomical, solar, fluid, and other systems sparked significant research in nonlinear dynamics exhibiting chaos. Chaos was found to be useful and to have great potential in many disciplines, such as mixing liquids with low power consumption, presenting outages in power systems, and biomedical engineering applications involving signals from the brain and heart, to name just a few [1]. Chaotic systems exhibit three important properties. Firstly, a deterministic system is said to be chaotic whenever its evolution sensitively depends on the initial conditions. Secondly, there is an infinite number of unstable periodic orbits embedded in the underlying chaotic set. Thirdly, the dynamics of the chaotic attractor is ergodic, which implies that during its temporal evolution the system ergodically visits small neighborhoods around every point in each one of the unstable periodic orbits embedded within the chaotic attractor. Although a chaotic system appears stochastic, it is generated by a deterministic nonlinear system. Lyapunov exponents quantitatively characterize the stochastic properties of dynamical systems: when a dynamical system is chaotic, there exists at least one Lyapunov exponent λ > 0. It is reported that chaotic behavior also exists in biological neurons and neural networks [2, 3]. Using chaos to develop novel optimization techniques gained much attention during the last decade. For a given energy or cost function, the chaotic ergodic orbits of a chaotic dynamic system used for optimization may eventually reach the global optimum, or a point close to it, with high probability [4, 5].

Since Hopfield and Tank [6] applied their neural network to the travelling salesman problem, neural networks have provided a powerful approach to a wide variety of optimization problems [7, 8]. However, the Hopfield neural network (HNN) often gets trapped in a local minimum. A number of modifications were made to Hopfield neural networks to escape from local minima. Some modifications, based on chaotic neural networks [9] and simulated annealing [10], were proposed to solve global optimization problems [11]. In [12–14] the guaranteed convergence of Hopfield neural networks is discussed.

Particle swarm optimization (PSO), developed by Clerc and Kennedy in 2002 [15], is a stochastic global optimization method based on the simulation of social behavior. In a particle swarm optimizer, individuals "evolve" by cooperating with other individuals over several generations. Each particle adjusts its flying according to its own flying experiences and the flying experience of its companions. Each individual is named a particle which, in fact, represents a potential solution to a problem. Each particle is treated as a point in a D-dimensional space. However, the PSO algorithm is likely


to temporarily get stuck and may need a long period of time to escape from a local extremum [16]. It is difficult to guarantee the convergence of the swarm, especially when random parameters are used. In order to improve the dynamical behavior of PSO, one can combine chaos with PSO algorithms to enhance the performance of PSO. In [17–19] chaos was applied to the PSO to avoid the PSO getting trapped in local minima.

PSO is motivated by the behavior of organisms such as fish schooling and bird flocking [20]. During the process, future particle positions (determined by velocity) can be regarded as particle intelligence [21]. Using a chaotic intelligent swarm system to replace the original PSO might be convenient for analysis while maintaining stochastic search properties. Most importantly, the convergence of a particle swarm initialized with random weights is not guaranteed.

In this paper we propose a chaotic Hopfield neural network swarm optimization (CHNNSO) algorithm. The rest of the paper is organized as follows. In Section 2 the preliminaries of Hopfield neural networks and PSO are described. The chaotic Hopfield neural network model is developed in Section 3. In Section 4 the dynamics of the chaotic Hopfield neural network is analyzed. Section 5 provides simulation results and comparisons. The conclusion is given in Section 6.

2. Preliminaries

2.1. Basic Hopfield Neural Network Theory [22]. A Hopfield net is a recurrent neural network having a synaptic connection pattern such that there is an underlying Lyapunov energy function for the activity dynamics. Started in any initial state, the state of the system evolves to a final state that is a (local) minimum of the Lyapunov energy function. The Lyapunov energy function decreases in a monotone fashion under the dynamics and is bounded below. Because of the existence of an elementary Lyapunov energy function for the dynamics, the only possible asymptotic result is a state on an attractor.

There are two popular forms of the model: binary neurons with discrete time, updated one at a time, and continuous-time graded neurons. In this paper the second kind of model is used. The dynamics of an n-neuron continuous Hopfield neural network is described by

du_i/dt = −u_i/τ + ∑_j T_ij x_j(t) + I_i. (1)

Here u_i ∈ (−∞, ∞) is the input of neuron i, and the output of neuron i is

x_i(t) = g_i[u_i(t)], (2)

where i = 1, 2, …, n; τ is a positive constant; I_i is the external input (e.g., sensory input or bias current) to neuron i and is sometimes called the "firing threshold" when replaced with −I_i; u_i is the mean internal potential of the neuron, which determines the output of neuron i; T_ij is the strength of synaptic input from neuron i to neuron j; and g is a monotone function that converts internal potential into firing rate. T is the matrix with elements T_ij. When T is

symmetric, the Lyapunov energy function is given by

J = −(1/2) ∑_ij T_ij x_i x_j − ∑_i I_i x_i + (1/τ) ∑_i ∫_0^{x_i} g^{−1}(Z) dZ, (3)

where g^{−1} is the inverse of the gain function g. There is a significant limiting case of this function when T has no diagonal elements and the input-output relation becomes a step going from 0 to a maximum firing rate (for convenience, scaled to 1). The third term of this Lyapunov function is then zero or infinite. With no diagonal elements in T, the minima of J are all located at corners of the hypercube 0 ≤ x_j ≤ 1. In this limit the states of the continuous variable system are stable.

Many optimization problems can be readily represented using Hopfield nets by transforming the problem into variables such that the desired optimization corresponds to the minimization of the respective Lyapunov energy function [6]. The dynamics of the HNN converges to a local Lyapunov energy minimum. If this local minimum is also the global minimum, the solution of the desired optimization task has been carried out by the convergence of the network state.
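The descent property described above can be illustrated numerically. The following minimal sketch (not from the paper; the network size, weights, inputs, and step size are arbitrary illustrative choices) Euler-integrates the dynamics (1)-(2) with a sigmoid gain and checks that the energy (3) decreases along the trajectory:

```python
import numpy as np

# Sketch: Euler integration of the continuous Hopfield dynamics (1)-(2)
# with a sigmoid gain, checking that the Lyapunov energy (3) decreases.
rng = np.random.default_rng(0)
n = 4
W = rng.standard_normal((n, n))
T = (W + W.T) / 2            # symmetric synaptic matrix, as (3) requires
np.fill_diagonal(T, 0.0)     # no self-connections
I = rng.standard_normal(n)   # external inputs I_i
tau, xi, dt = 1.0, 4.0, 0.01

def g(u):                    # sigmoid gain g
    return 1.0 / (1.0 + np.exp(-xi * u))

def energy(x):
    # Lyapunov energy (3); for this sigmoid the integral of g^{-1}
    # from 0 to x_i has the closed form (x ln x + (1-x) ln(1-x)) / xi
    integ = (x * np.log(x) + (1 - x) * np.log(1 - x)) / xi
    return -0.5 * x @ T @ x - I @ x + integ.sum() / tau

u = 0.1 * rng.standard_normal(n)
E = []
for _ in range(2000):
    x = g(u)
    E.append(energy(x))
    u = u + dt * (-u / tau + T @ x + I)   # Euler step of (1)

assert E[-1] < E[0]          # energy decreased along the trajectory
```

Because T is symmetric and the gain is monotone, the energy acts as a Lyapunov function, so the trajectory settles at a local minimum of J.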

2.2. Basic PSO Theory. Many real optimization problems can be formulated as the following functional optimization problem:

min f(X_i), X_i = [x_i^1, …, x_i^n],
s.t. x_i ∈ [a_i, b_i], i = 1, 2, …, n. (4)

Here f is the objective function, and X_i is the decision vector consisting of n variables.

The original particle swarm algorithm works by iteratively searching in a region and is concerned with the best previous success of each particle, the best previous success of the particle swarm, and the current position and velocity of each particle [20]. Every candidate solution of f(X_i) is called a "particle". The particle searches the domain of the problem according to

V_i(t + 1) = ωV_i(t) + c_1 R_1 (P_i − X_i(t)) + c_2 R_2 (P_g − X_i(t)), (5)
X_i(t + 1) = X_i(t) + V_i(t + 1), (6)

where V_i = [v_i^1, v_i^2, …, v_i^n] is the velocity of particle i; X_i = [x_i^1, x_i^2, …, x_i^n] represents the position of particle i; P_i represents the best previous position of particle i (indicating the best discoveries or previous experience of particle i); P_g represents the best previous position among all particles (indicating the best discovery or previous experience of the social swarm); ω is the inertia weight that controls the impact of the previous velocity of the particle on its current velocity and is sometimes adaptive [17]; R_1 and R_2 are two random weights whose components r_1^j and r_2^j (j = 1, 2, …, n) are chosen uniformly within the interval [0, 1], which might not guarantee the convergence of the particle trajectory; and c_1 and c_2 are positive constant parameters. Generally the value of each component in V_i should be clamped to the range [−v_max, v_max] to control excessive roaming of particles outside the search space.

Figure 1: Particle structure.
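As an illustration of the update rules (5)-(6), the following minimal sketch runs the basic PSO loop on the sphere objective f(X) = ∑ x²; the objective, swarm size, and coefficient values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Minimal PSO loop implementing updates (5)-(6) with velocity clamping.
rng = np.random.default_rng(1)
n_particles, dim, iters = 20, 2, 200
w, c1, c2, vmax = 0.7, 1.5, 1.5, 1.0      # illustrative parameter choices

f = lambda X: (X ** 2).sum(axis=1)          # objective f(X_i)
X = rng.uniform(-1, 1, (n_particles, dim))  # positions X_i
V = np.zeros((n_particles, dim))            # velocities V_i
P = X.copy()                                # personal bests P_i
Pf = f(P)
Pg = P[Pf.argmin()].copy()                  # global best P_g

for _ in range(iters):
    R1 = rng.uniform(0, 1, (n_particles, dim))  # random weights R_1, R_2
    R2 = rng.uniform(0, 1, (n_particles, dim))
    V = w * V + c1 * R1 * (P - X) + c2 * R2 * (Pg - X)   # eq. (5)
    V = np.clip(V, -vmax, vmax)             # clamp to [-vmax, vmax]
    X = X + V                               # eq. (6)
    fx = f(X)
    better = fx < Pf                        # update personal bests
    P[better], Pf[better] = X[better], fx[better]
    Pg = P[Pf.argmin()].copy()              # update global best

assert f(Pg[None])[0] < 1e-2   # swarm has closed in on the optimum at 0
```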

3. A Chaotic Hopfield Neural Network Model

From the introduction of basic PSO theory, every particle can be seen as the model of a single fish or a single bird. The position chosen by the particle can be regarded as a state of a neural network with a random synaptic connection. According to (5)-(6), the position components of particle i can be thought of as the output of a neural network, as shown in Figure 1.

In Figure 1, Rand1(·) and Rand2(·) are two independent and uniformly distributed random variables within the range [0, 1], which refer to r_1^j and r_2^j, respectively. p_i^j and p_g^j are the components of P_i and P_g, respectively. p_g^j is the previous best value amongst all particles, and p_i^j, as an externally applied input, is the jth element of the best previous position P_i, and it is coupled with the other components of P_i. The particles migrate toward a new position according to (5)-(6). This process is repeated until a defined stopping criterion is met (e.g., a maximum number of iterations or a sufficiently good fitness value).

As pointed out by Clerc and Kennedy [15], the powerful optimization ability of the PSO comes from the interaction amongst the particles. The analysis of the complex interaction amongst the particles in the swarm is beyond the scope of this paper, which focuses on the construction of a simple particle using a neural network perspective and on convergence issues. Artificial neural networks are composed of simple artificial neurons mimicking biological neurons. The HNN has the property that, as each neuron in a HNN updates, an energy function is monotonically reduced until the network stabilizes [23]. One can therefore map an optimization problem to a HNN such that the cost function of the problem corresponds to the energy function of the HNN, and the result of the HNN thus suggests a low-cost solution to the optimization problem. The HNN might therefore be a good choice to model particle behavior.

In order to approach P_g and P_i, the HNN model should include at least two neurons. For simplicity, the HNN model of each particle position component has two neurons whose outputs are x_i^j(t) and x_ip^j(t). In order to transform the problem into variables such that the desired optimization corresponds to the minimization of the energy function, the objective function should be determined first. As x_i^j(t) and x_ip^j(t) should approach p_g^j and p_i^j, respectively, (x_i^j(t) − p_g^j)^2 and (x_ip^j(t) − p_i^j)^2 can be chosen as two parts of the energy function. The third part of the energy function, (x_i^j(t) − x_ip^j(t))^2, is added to accompany (x_ip^j(t) − p_i^j)^2 to cause x_i^j(t) to tend towards p_i^j. Therefore the following HNN Lyapunov energy function is proposed for each particle:

J_i^j(t) = A(x_i^j(t) − p_g^j)^2 + B(x_ip^j(t) − p_i^j)^2 + C(x_i^j(t) − x_ip^j(t))^2, (7)

where A, B, and C are positive constants.

Here the neuron input-output function is chosen as a sigmoid function, given by (9) and (11). Equations (8) and (10) are the Euler approximation of (1) of the continuous Hopfield neural network [14]. The dynamics of component j of particle i is described by

u_i^j(t + 1) = k u_i^j(t) + α ∂J_i^j(t)/∂x_i^j(t), (8)
x_i^j(t + 1) = g_i(u_i^j(t + 1)) = 1/(1 + e^{−ξ u_i^j(t+1)}), (9)
u_ip^j(t + 1) = k u_ip^j(t) + α ∂J_i^j(t)/∂x_ip^j(t), (10)
x_ip^j(t + 1) = g_i(u_ip^j(t + 1)) = 1/(1 + e^{−ξ u_ip^j(t+1)}). (11)
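The partial derivatives of (7) that drive updates (8) and (10) are ∂J/∂x_i^j = 2A(x_i^j − p_g^j) + 2C(x_i^j − x_ip^j) and ∂J/∂x_ip^j = 2B(x_ip^j − p_i^j) − 2C(x_i^j − x_ip^j). A quick finite-difference sketch confirms them (the sample values below are arbitrary):

```python
# Per-component energy (7) and its partial derivatives used in (8)/(10),
# checked against central finite differences.
A, B, C = 0.02, 0.01, 0.01

def J(x, xp, pg, pi):                  # energy (7)
    return A*(x - pg)**2 + B*(xp - pi)**2 + C*(x - xp)**2

def dJ_dx(x, xp, pg, pi):              # dJ/dx_i^j, the gradient in (8)
    return 2*A*(x - pg) + 2*C*(x - xp)

def dJ_dxp(x, xp, pg, pi):             # dJ/dx_ip^j, the gradient in (10)
    return 2*B*(xp - pi) - 2*C*(x - xp)

x, xp, pg, pi, h = 0.3, 0.6, 0.2, 0.5, 1e-6
num_dx = (J(x + h, xp, pg, pi) - J(x - h, xp, pg, pi)) / (2 * h)
num_dxp = (J(x, xp + h, pg, pi) - J(x, xp - h, pg, pi)) / (2 * h)
assert abs(num_dx - dJ_dx(x, xp, pg, pi)) < 1e-9
assert abs(num_dxp - dJ_dxp(x, xp, pg, pi)) < 1e-9
```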

According to (5)-(6) and Figure 1, the PSO uses random weights to simulate birds flocking or fish searching for food. When birds flock or fish search for food, they exhibit chaos-like behavior, yet (8)–(11) do not generate chaos. Aihara et al. [9] proposed a kind of chaotic neuron which includes relative refractoriness in the model to simulate chaos in a biological brain. To use this result, the refractory terms z_i(t)(x_i^j(t) − I_0) and z_i(t)(x_ip^j(t) − I_0) are added to (8) and (10) to cause chaos. Equations (8) and (10) then become

u_i^j(t + 1) = k u_i^j(t) + α ∂J_i^j(t)/∂x_i^j(t) − z_i(t)(x_i^j(t) − I_0), (12)
u_ip^j(t + 1) = k u_ip^j(t) + α ∂J_i^j(t)/∂x_ip^j(t) − z_i(t)(x_ip^j(t) − I_0). (13)

In order to escape from chaos as time evolves, we set

z_i(t + 1) = (1 − β) z_i(t). (14)

In (8)–(14), ξ, α, and k are positive parameters; z_i(t) is the self-feedback connection weight (the refractory strength); β is the damping factor of the time-dependent z_i(t) (0 ≤ β ≤ 1); and I_0 is a positive parameter. All the parameters are fixed except z_i(t), which is varied.

The combination of (9) and (11)–(14) is called chaotic Hopfield neural network swarm optimization (CHNNSO). According to (8)–(14), the following procedure can be used to implement the proposed CHNNSO algorithm.

(1) Initialize the swarm: assign a random position in the problem hyperspace to each particle, and calculate the fitness function, which is given by the optimization problem whose variables correspond to the elements of the particle position coordinates.

(2) Synchronously update the positions of all the particles using (9) and (11)–(14), and change the two states every iteration.

(3) Evaluate the fitness function for each particle.

(4) For each individual particle, compare the particle's fitness value with its previous best fitness value. If the current value is better than the previous best value, then set this value as p_i^j and the current particle's position x_i^j as p_i^j; if P_i is updated, then reset z = z_0.

(5) Identify the particle that has the best fitness value. While the number of iterations is less than a certain value, if the particle with the best fitness value changes, reset z = z_0 to keep the particles chaotic and prevent premature convergence.

(6) Repeat steps (2)–(5) until a stopping criterion is met (e.g., a maximum number of iterations or a sufficiently good fitness value).
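The core of step (2) can be sketched for a single position component with p_g and p_i held fixed (an illustrative reduction, not the full algorithm; parameter values follow the set the paper gives later in (25), and the energy gradients of (7) are written out explicitly):

```python
import numpy as np

# Single-component sketch of the CHNNSO update: eqs. (9), (12)-(14).
A, B, C = 0.02, 0.01, 0.01
xi, alpha, beta = 150.0, 1.0, 0.001
k, I0, z0 = 1.1, 0.2, 0.3
p_i, p_g = 0.5, 0.2                        # fixed bests for illustration

def g(u):   # sigmoid (9)/(11); the exponent is clipped for float safety
    return 1.0 / (1.0 + np.exp(np.clip(-xi * u, -500.0, 500.0)))

rng = np.random.default_rng(2)
u, up = rng.uniform(-0.01, 0.01, 2)        # internal neuron states
x, xp = g(u), g(up)
z = z0
xs = []
for _ in range(500):
    dJ_dx = 2*A*(x - p_g) + 2*C*(x - xp)   # dJ/dx_i^j from (7)
    dJ_dxp = 2*B*(xp - p_i) - 2*C*(x - xp) # dJ/dx_ip^j from (7)
    u = k*u + alpha*dJ_dx - z*(x - I0)     # eq. (12)
    up = k*up + alpha*dJ_dxp - z*(xp - I0) # eq. (13)
    x, xp = g(u), g(up)                    # eqs. (9), (11)
    z = (1 - beta) * z                     # refractory decay, eq. (14)
    xs.append(x)

assert 0.0 <= min(xs) and max(xs) <= 1.0   # outputs stay in the sigmoid range
assert abs(z - z0 * (1 - beta) ** 500) < 1e-9   # z decays geometrically
```

The sketch only confirms that the outputs remain in the sigmoid range while z decays geometrically; the chaotic-then-convergent behavior discussed in Section 4 depends on the full algorithm with the resets of steps (4)-(5).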

As can be seen from (9) and (11), the particle position component x_i^j is located in the interval [−1, 1]. The optimization problem variable interval must therefore be mapped to [−1, 1] and vice versa using

x̄_j = −1 + 2(x_j − a_j)/(b_j − a_j), j = 1, 2, …, n,
x_j = a_j + (1/2)(x̄_j + 1)(b_j − a_j), j = 1, 2, …, n. (15)

Here a_j and b_j are the lower boundary and the upper boundary of x_j(t), respectively, and only one particle is analyzed for simplicity.
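A direct transcription of the two maps in (15) follows; the bounds used in the round-trip check are arbitrary sample values:

```python
# Interval mappings (15): problem variables in [a_j, b_j] <-> [-1, 1].
def to_unit(x, a, b):        # x in [a, b] -> y in [-1, 1]
    return -1.0 + 2.0 * (x - a) / (b - a)

def from_unit(y, a, b):      # y in [-1, 1] -> x in [a, b]
    return a + 0.5 * (y + 1.0) * (b - a)

a, b = -5.12, 5.12           # sample bounds, not from the paper
for x in (-5.12, 0.0, 1.3, 5.12):
    y = to_unit(x, a, b)
    assert -1.0 <= y <= 1.0
    assert abs(from_unit(y, a, b) - x) < 1e-12   # the maps are inverses
```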

4. Dynamics of Chaotic Hopfield Network Swarm Optimization

In this section the dynamics of the chaotic Hopfield network swarm optimization (CHNNSO) is analyzed. The first subsection discusses the convergence of the chaotic particle swarm. The second subsection discusses the dynamics of the simplest CHNNSO with different parameter values.

4.1. Convergence of the Particle Swarm

Theorem 1 (Wang and Smith [14]). If one has a network of neurons with arbitrarily increasing I/O functions, there exists a sufficient stability condition for a synchronous TCNN (transiently chaotic neural network), equation (12), namely

k > 1, 2k/β_max > −T′_min. (16)

Here 1/β_max = min(1/g′_i), where g′_i denotes the derivative of the neural I/O function for neuron i (in this paper the sigmoid function (9)), and T′_min is the minimum eigenvalue of the connection weight matrix of the dynamics of an n-neuron continuous Hopfield neural network.

Theorem 2. A sufficient stability condition for the CHNNSO model is k > 1.

Proof. When t → ∞,

z(t + 1) = 0. (17)

It then follows that the equilibria of (12) and (13) can be evaluated by

Δu_i^j(t + 1) = (k − 1) u_i^j(t) + α ∂J_i^j(t)/∂x_i^j(t),
Δu_ip^j(t + 1) = (k − 1) u_ip^j(t) + α ∂J_i^j(t)/∂x_ip^j(t). (18)

According to (7), we get

Δu_i^j(t + 1) = (k − 1) u_i^j(t) + 2Aα (x_i^j(t) − p_g^j) + 2Cα (x_i^j(t) − x_ip^j(t)) = 0, (19)


Δu_ip^j(t + 1) = (k − 1) u_ip^j(t) + 2Bα (x_ip^j(t) − p_i^j) − 2Cα (x_i^j(t) − x_ip^j(t)) = 0. (20)

T′_min = min(eigenvalue([2Aα + 2Cα, −2Cα; −2Cα, 2Bα + 2Cα]))
      = (A + B + 2C)α − α√((A − B)² + 4C²)
      = α(A + B + 2C − √((A − B)² + 4C²)). (21)

In this paper,

β_max = 1/min(1/g′) = ξ/4. (22)

It is clear that T′_min > 0, and the stability condition (16) is satisfied when ξ > 0. The above analysis verifies Theorem 2.
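The closed form (21) can be checked numerically against a direct eigenvalue computation (the sample values are those the paper adopts in (25)):

```python
import numpy as np

# Numerical check of (21): the closed form for T'_min matches the
# minimum eigenvalue of the 2x2 connection matrix.
A, B, C, alpha = 0.02, 0.01, 0.01, 1.0
M = np.array([[2*A*alpha + 2*C*alpha, -2*C*alpha],
              [-2*C*alpha,            2*B*alpha + 2*C*alpha]])
T_min = alpha * (A + B + 2*C - np.sqrt((A - B)**2 + 4*C**2))
assert np.isclose(np.linalg.eigvalsh(M).min(), T_min)
assert T_min > 0   # hence condition (16) reduces to k > 1, as Theorem 2 states
```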

Theorem 3. The particles converge to the sphere with center point p_g^j and radius R = max |x_ie^j − p_i^j| (x_ie^j is the final convergence equilibrium; if the optimization problem is in a two-dimensional plane, the particles finally lie on a circle).

It is easy to show that the particle model given by (7) and (8)–(14) has only one equilibrium as t → ∞, that is, z(t) = 0. Hence as t → ∞, X_i belongs to the hypersphere whose origin is P_g and whose radius is R. Solving (9), (11), (19), and (20) simultaneously, we get

((k − 1)/ξ) ln(x_i^j(t)/(1 − x_i^j(t))) + 2Aα (x_i^j(t) − p_g^j) + 2Cα (x_i^j(t) − x_ip^j(t)) = 0, (23)

((k − 1)/ξ) ln(x_ip^j(t)/(1 − x_ip^j(t))) + 2Bα (x_ip^j(t) − p_i^j) − 2Cα (x_i^j(t) − x_ip^j(t)) = 0. (24)

With (23) and (24) satisfied, there must exist final convergence equilibria x_ie^j and x_ipe^j. So the best place the particle swarm can find is centered at p_g^j, and the radius is R = max |x_ie^j − p_i^j|.

The above analysis therefore verifies Theorem 3.

4.2. Dynamics of the Simplest Chaotic Hopfield Neural Network Swarm. In this section the dynamics of the simplest particle swarm model is analyzed. Equations (7) and (8)–(13) are the dynamic model of a single particle, with the subscript i ignored. According to (7) and Theorem 3, the parameters A, B, and C control the final convergence radius. By trial and error, the parameters A, B, and C can be chosen in the range from 0.005 to 0.05. According to (16) and (22), k > 1 and ξ > 0. In the simulation, the results are better when k is in the neighborhood of 1.1 and ξ is in the neighborhood of 150. The parameters β and z(0) control the duration of the chaotic period. If β is too big and/or z(0) is too small, the system will quickly escape from chaos and performance will be poor. The parameter I_0 = 0.2 is standard in the literature on chaotic neural networks. The simulation showed that the model is not sensitive to the values of the parameters ξ and k; for example, 100 < ξ < 200 and 1 < k < 1.5 are feasible.

The values of the parameters in (7)–(14) are then set to

A = 0.02, B = C = 0.01, ξ = 150, α = 1, β = 0.001, z(0) = 0.3, k = 1.1, I_0 = 0.2, p_i = 0.5, p_g = 0.2. (25)

Figure 2 shows the time evolution of x_i(t), z(t), and the Lyapunov exponent λ of x_i(t). The Lyapunov exponent λ characterizes the rate of separation of infinitesimally close trajectories. A positive Lyapunov exponent is usually taken as an indication that the system is chaotic [1]. Here λ is defined as

λ = lim_{m→+∞} (1/m) ∑_{t=0}^{m−1} ln |dx_i(t + 1)/dx_i(t)|. (26)

At about 200 steps, z(t) decays to a small value and x_i(t) departs from chaos, which corresponds with the change of λ from positive to negative.
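Definition (26) can be illustrated on a simple map. The sketch below uses the logistic map x(t+1) = 4x(t)(1 − x(t)) as a stand-in system (not from the paper); its Lyapunov exponent is known analytically to be ln 2 > 0, that is, chaotic:

```python
import numpy as np

# Estimate the Lyapunov exponent of the logistic map via the finite-m
# average in (26); the exact value for r = 4 is ln 2.
def lyapunov(x0, m=100_000):
    x, total = x0, 0.0
    for _ in range(m):
        total += np.log(abs(4.0 - 8.0 * x))  # |dx(t+1)/dx(t)| for this map
        x = 4.0 * x * (1.0 - x)
    return total / m

lam = lyapunov(0.2)
assert lam > 0                        # positive exponent: chaos
assert abs(lam - np.log(2.0)) < 0.02  # close to ln 2 ~ 0.693
```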

According to Figure 2, the convergence process of a simple particle position follows the nonlinear bifurcation, making the particle converge to a stable fixed point from a strange attractor. In the following section it is shown that the fixed point is determined by the best previous position P_g among all particles and the best position P_i of the individual particle.

Remark 4. The proposed CHNNSO model is a deterministic chaos-Hopfield neural network swarm, which is different from existing PSOs with stochastic parameters. Its search orbits exhibit an evolutionary process of inverse period bifurcation from chaos to periodic orbits and then to a sink. As chaos is ergodic and the particle is always in a chaotic state at the beginning (e.g., in Figure 2), the particle can escape when trapped in local extrema. The proposed CHNNSO model will therefore in general not suffer from being easily trapped in a local optimum and will continue to search for a global optimum.

5. Numerical Simulation

To test the performance of the proposed algorithms, two famous benchmark optimization problems and an engineering optimization problem with linear and nonlinear constraints are used. The solutions to the two benchmark problems can be represented in the plane, and therefore the convergence of the CHNNSO can be clearly observed. The results of the third optimization problem, compared with those of other algorithms, are displayed in Table 1. We compare the CHNNSO with the original PSO [20].


Figure 2: Time evolutions of (a) x_i(t), (b) z(t), and (c) the Lyapunov exponent λ.

Table 1: The parameters of the Hartmann function (when n = 3 and q = 4).

i    a_i1   a_i2   a_i3   c_i   p_i1      p_i2     p_i3
1    3      10     30     1     0.3689    0.1170   0.2673
2    0.1    10     35     1.2   0.4699    0.4387   0.7470
3    3      10     30     3     0.1091    0.8732   0.5547
4    0.1    10     35     3.2   0.03815   0.5473   0.8828

5.1. The Rastrigin Function. To demonstrate the efficiency of the proposed technique, the famous Rastrigin function is chosen as a test problem. This function with two variables is given by

\[ f(X) = x_1^2 + x_2^2 - \cos(18 x_1) - \cos(18 x_2), \quad -1 < x_i < 1, \ i = 1, 2. \tag{27} \]

The global minimum is −2 and the minimum point is (0, 0). There are about 50 local minima arranged in a lattice configuration.
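Equation (27) transcribes directly into code (a plain restatement for checking values, not the authors' implementation):

```python
import math

def rastrigin2(x1, x2):
    """Two-variable Rastrigin variant from (27)."""
    return x1 ** 2 + x2 ** 2 - math.cos(18.0 * x1) - math.cos(18.0 * x2)

# Global minimum -2 at the origin, as stated in the text
assert rastrigin2(0.0, 0.0) == -2.0
```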

The proposed technique is applied with a population size of 20, and the maximum number of iterations is 20000. The chaotic particle swarm parameters are chosen as A = 0.02, B = C = 0.01, β = 0.001, z(0) = 0.3, α = 1, ξ = 150, k = 1.1, and I_0 = 0.2.

The position of every particle is initialized with a random value. The time evolution of the cost of the Rastrigin function is shown in Figure 3. The global minimum of −2 is obtained by the best particle with (x_1, x_2) = (0, 0).

From Figure 3 it can be seen that the proposed method gives good optimization results. Since there are two variables in the Rastrigin function, it is easy to show the final convergent particle states in the plane.

In Figure 4 the "∗"s are the best experiences of each particle, and the "+"s are the final states of the particles. The global minimum (0, 0) is included among both the "∗"s and the "+"s, and most "∗"s and "+"s overlap at the global minimum (0, 0). According to Theorem 3, the particles finally converge to a circle. For this Rastrigin problem, the particles' final states converge to the circle shown in Figure 4, and hence the global convergence of the particles is guaranteed.

Figure 3: Time evolution of the cost of the Rastrigin function.

Figure 5 displays the results when the original PSO [20] was used to optimize the Rastrigin function. In this numerical simulation, the particle swarm population size is also 20; parameters c_1 and c_2 are set to 2, ω is set to 1, and V_max is set equal to the dynamic range of each dimension. The "∗"s in Figure 5 are the final states of all the particles, corresponding to the "+"s in Figure 4. It is easy to see that the final states of the particles are irregular, even though the global minimum of −2 is obtained by the best experience of the particle swarm, that is, (x_1, x_2) = (0, 0), as shown in Figure 5.

By comparing the results obtained by the proposed CHNNSO in Figure 4 with the results of the original PSO in Figure 5, it can be seen that the final states of the particles of the proposed CHNNSO are attracted to the best experience of all the particles and that convergence is superior. The final states of CHNNSO particles are guaranteed to converge, which is not the case for the original PSO.


Figure 4: The final states and the best experiences of the particles achieved by the proposed CHNNSO for the Rastrigin function.

Figure 5: The final states of the particles achieved by the original PSO for the Rastrigin function.

In the constriction-factor version of PSO, parameters c_1 and c_2 are both set to 1.494, ω to 0.729, and V_max is set equal to the dynamic range of each dimension. The constriction factors c_1, c_2, and ω improve the convergence of a particle over time by damping the oscillations once the particle focuses on the best point in an optimal region. The main disadvantage of this method is that the particles may follow wider cycles and may not converge when the individual best performance is far from the neighborhood's best performance (two different regions) [24].

5.2. The Schaffer F6 Function. To further investigate the performance of the CHNNSO,

\[ f(X) = 0.5 + \frac{\sin^2 \sqrt{x_1^2 + x_2^2} - 0.5}{\left( 1.0 + 0.001 \left( x_1^2 + x_2^2 \right) \right)^2} \tag{28} \]

Figure 6: The final states and the best experiences of the particles achieved by the proposed CHNNSO for the Schaffer F6 function.

the Schaffer F6 function [25] is chosen. This function has a single global optimum at (0, 0) with f_min(x, y) = 0 and a large number of local optima. The global optimum is difficult to find because the value at the best local optimum differs by only about 10^−3 from the global minimum, and the local optima crowd around the global optimum. The proposed technique is applied with a population size of 30, the number of iterations is 100000, and the parameters of the CHNNSO are chosen as

\[ A = B = C = 0.01, \quad \beta = 0.0003, \quad z(0) = 0.3, \quad \xi = 155, \quad k = 1.1, \quad I_0 = 0.2. \tag{29} \]

The position of each particle is initialized with a random value. In Figure 6 the "∗"s are the best experiences of each particle. The global minimum of 2.674 × 10^−4 is obtained by the best particle with (x_1, x_2) = (−0.00999, 0.01294), which is included among the "∗"s. The "+"s are the final states of the particles. According to Theorem 3, the particles' final states converge to a circle, as shown in Figure 6, which confirms global convergence. From Figure 6 it is clearly seen that the particles' final states are attracted to the neighborhoods of the best experiences of all the particles and the convergence is good.
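Equation (28) can likewise be transcribed for checking values (a plain restatement, not the authors' implementation):

```python
import math

def schaffer_f6(x1, x2):
    """Schaffer's F6 function from (28)."""
    r2 = x1 ** 2 + x2 ** 2
    return 0.5 + (math.sin(math.sqrt(r2)) ** 2 - 0.5) / (1.0 + 0.001 * r2) ** 2

# Single global optimum at the origin with value 0, as stated in the text
assert schaffer_f6(0.0, 0.0) == 0.0
```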

Figure 7 shows the final particle states when the original PSO [20] was used to optimize the Schaffer F6 function. In this numerical simulation of the original PSO, the particle swarm population size is also 20, parameters c_1 and c_2 are both set to 2, ω is set to 1, and V_max is set equal to the dynamic range of each dimension. The "∗"s in Figure 7 are the final states of all the particles, corresponding to the "+"s in Figure 6. It is easy to see that the final states of the particles are irregular in Figure 7. The global minimum of 6.3299 × 10^−4 is obtained by the best particle with (x_1, x_2) = (0.01889 × 10^−3, −0.4665 × 10^−3). The best experience from the original PSO

is not as good as the best of the proposed CHNNSO.

Figure 7: The final states of the particles achieved by the original PSO for the Schaffer F6 function.

Comparing the results obtained from the proposed CHNNSO in Figure 6 and the original PSO in Figure 7, it is clearly seen that the particles' final states of the proposed CHNNSO are attracted to the best experience of all the particles and that the convergence is better than that of the original PSO. The CHNNSO can guarantee the convergence of the particle swarm, whereas the final states of the original PSO are irregular.

5.3. The Hartmann Function. The Hartmann function with n = 3 and q = 4 is given by

\[ f(x) = -\sum_{i=1}^{q} c_i \exp\left( -\sum_{j=1}^{n} a_{ij} \left( x_j - p_{ij} \right)^2 \right), \tag{30} \]

with x_j belonging to

\[ S = \left\{ x \in R^n \mid 0 \le x_j \le 1, \ 1 \le j \le n \right\}. \tag{31} \]

Table 1 shows the parameter values for the Hartmann function when n = 3 and q = 4. When n = 3, X_min = (0.114, 0.556, 0.882) and f(x_min) = −3.86.

The maximum number of iterations for the Hartmann function is 15000. In Figure 8 only a subset of the dimensions is pictured. The "+"s are the final states of the particles and the "∗"s denote the best experiences of all particles. From Figure 8 it can easily be seen that the final states of the particles converge to a circle with center point (0.0831, 0.5567, 0.8522) and radius 0.1942. The final particle states confirm Theorem 3, and final convergence is guaranteed.
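A direct transcription of (30) with the Table 1 values as printed (a sketch for checking the reported optimum, not the authors' code):

```python
import math

# Table 1 parameters for the Hartmann function (n = 3, q = 4), as printed
A = [[3.0, 10.0, 30.0], [0.1, 10.0, 35.0], [3.0, 10.0, 30.0], [0.1, 10.0, 35.0]]
C = [1.0, 1.2, 3.0, 3.2]
P = [[0.3689, 0.1170, 0.2673],
     [0.4699, 0.4387, 0.7470],
     [0.1091, 0.8732, 0.5547],
     [0.03815, 0.5473, 0.8828]]

def hartmann3(x):
    """Hartmann function (30) with the Table 1 parameters."""
    return -sum(C[i] * math.exp(-sum(A[i][j] * (x[j] - P[i][j]) ** 2
                                     for j in range(3)))
                for i in range(4))
```

Evaluating at the minimizer reported in the text gives a value close to the stated −3.86; the small residual gap is consistent with the printed values being rounded.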

Figure 8: The final states and the best experiences of the particles achieved by the proposed CHNNSO for the Hartmann function.

5.4. Design of a Pressure Vessel. The pressure vessel problem described in [26, 27] is an example with linear and nonlinear constraints that has been solved by a variety of techniques. The objective of the problem is to minimize the total cost of the material needed for forming and welding a cylindrical vessel. There are four design variables: x_1 (T_s, thickness of the shell), x_2 (T_h, thickness of the head), x_3 (R, inner radius), and x_4 (L, length of the cylindrical section of the vessel). T_s and T_h are integer multiples of 0.0625 inch, the available thicknesses of rolled steel plates, and R and L are continuous. The problem can be specified as follows:

\[
\begin{aligned}
\text{Minimize } & f(X) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3 \\
\text{subject to } & g_1(X) = -x_1 + 0.0193 x_3 \le 0, \\
& g_2(X) = -x_2 + 0.00954 x_3 \le 0, \\
& g_3(X) = -\pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 + 1296000 \le 0, \\
& g_4(X) = x_4 - 240 \le 0.
\end{aligned} \tag{32}
\]

The following ranges of the variables were used [27]:

\[ 0 < x_1 \le 99, \quad 0 < x_2 \le 99, \quad 10 \le x_3 \le 200, \quad 10 \le x_4 \le 200. \tag{33} \]
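The objective and constraints in (32) can be transcribed for checking the reported solutions (a sketch; note the printed solution in Table 2 is rounded, so the active constraints g_1 and g_3 vanish only approximately):

```python
import math

def vessel_cost(x1, x2, x3, x4):
    """Objective f(X) of the pressure-vessel problem (32)."""
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def vessel_constraints(x1, x2, x3, x4):
    """Constraint values g1..g4 of (32); feasible when all are <= 0."""
    return (-x1 + 0.0193 * x3,
            -x2 + 0.00954 * x3,
            -math.pi * x3 ** 2 * x4 - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000.0,
            x4 - 240.0)

# Check the best solution reported in Table 2 (rounded values, so use a tolerance)
g = vessel_constraints(0.8125, 0.4375, 42.09845, 176.6366)
assert all(v <= 1e-3 for v in g)
```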

de Freitas Vaz and da Graça Pinto Fernandes [26] proposed an algorithm to deal with constrained optimization problems. Here this algorithm [26] is combined with the CHNNSO to search for the global optimum. The proposed technique is applied with a population size of 20, and the maximum number of iterations is 20000. The values of the parameters in (7)–(14) are set to

\[ A = 0.02, \quad B = C = 0.01, \quad \xi = 150, \quad \alpha = 1, \quad \beta = 0.001, \quad z(0) = 0.3, \quad k = 1.1, \quad I_0 = 0.2, \quad p_i = 0.5, \quad p_g = 0.5. \tag{34} \]

From Table 2, the best solution obtained by the CHNNSO is better than the other two solutions previously reported.


Table 2: Comparison of the results for the pressure vessel design problem.

Design variable   This paper         Hu et al. [27]     Coello [28]
x_1 (T_s)         0.8125             0.8125             0.8125
x_2 (T_h)         0.4375             0.4375             0.4375
x_3 (R)           42.09845           42.09845           40.3239
x_4 (L)           176.6366           176.6366           200.0
g_1(X)            0                  0                  −0.03424
g_2(X)            −0.03588           −0.03588           −0.052847
g_3(X)            −1.1814 × 10^−11   −5.8208 × 10^−11   −27.105845
g_4(X)            −63.3634           −63.3634           −40.0
f(X)              6059.1313          6059.1313          6288.7445

6. Conclusion

This paper proposed a chaotic Hopfield neural network swarm optimization algorithm. It incorporates the particle swarm searching structure, which provides global optimization capability, into Hopfield neural networks, which guarantee convergence. In addition, by adding chaos generator terms to the Hopfield neural networks, the ergodic searching capability of the proposed algorithm is greatly improved. The decay factor introduced in the chaos terms ensures that the search converges to the global optimum after globally chaotic optimization. The experimental results on three classic benchmark functions showed that the proposed algorithm can guarantee the convergence of the particle swarm search and can escape from local extrema. The proposed algorithm therefore improves the practicality of particle swarm optimization. As this is a general particle model, techniques such as the local best version algorithm proposed in [29] can be used together with the new model; this will be explored in future work.

Acknowledgments

This work was supported by the China/South Africa Research Cooperation Programme (nos. 78673 and CS06-L02), the South African National Research Foundation Incentive Grant (no. 81705), the SDUST Research Fund (no. 2010KYTD101), and the Key Scientific Support Program of Qingdao City (no. 11-2-3-51-nsh).

References

[1] J. C. Sprott, Chaos and Time-Series Analysis, Oxford University Press, New York, NY, USA, 2004.
[2] K. Aihara and G. Matsumoto, Chaos in Biological Systems, Plenum Press, New York, NY, USA, 1987.
[3] C. A. Skarda and W. J. Freeman, "How brains make chaos in order to make sense of the world," Behavioral and Brain Sciences, vol. 10, pp. 161–165, 1987.
[4] L. Wang, D. Z. Zheng, and Q. S. Lin, "Survey on chaotic optimization methods," Computation Technology and Automation, vol. 20, pp. 1–5, 2001.
[5] B. Li and W. S. Jiang, "Optimizing complex functions by chaos search," Cybernetics and Systems, vol. 29, no. 4, pp. 409–419, 1998.
[6] J. J. Hopfield and D. W. Tank, "'Neural' computation of decisions in optimization problems," Biological Cybernetics, vol. 52, no. 3, pp. 141–152, 1985.
[7] Z. Wang, Y. Liu, K. Fraser, and X. Liu, "Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays," Physics Letters A, vol. 354, no. 4, pp. 288–297, 2006.
[8] T. Tanaka and E. Hiura, "Computational abilities of a chaotic neural network," Physics Letters A, vol. 315, no. 3-4, pp. 225–230, 2003.
[9] K. Aihara, T. Takabe, and M. Toyoda, "Chaotic neural networks," Physics Letters A, vol. 144, no. 6-7, pp. 333–340, 1990.
[10] S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, pp. 671–680, 1983.
[11] L. Chen and K. Aihara, "Chaotic simulated annealing by a neural network model with transient chaos," Neural Networks, vol. 8, no. 6, pp. 915–930, 1995.
[12] L. Chen and K. Aihara, "Chaos and asymptotical stability in discrete-time neural networks," Physica D, vol. 104, no. 3-4, pp. 286–325, 1997.
[13] L. Wang, "On competitive learning," IEEE Transactions on Neural Networks, vol. 8, no. 5, pp. 1214–1217, 1997.
[14] L. Wang and K. Smith, "On chaotic simulated annealing," IEEE Transactions on Neural Networks, vol. 9, no. 4, pp. 716–718, 1998.
[15] M. Clerc and J. Kennedy, "The particle swarm—explosion, stability, and convergence in a multidimensional complex space," IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
[16] J. Ke, J. X. Qian, and Y. Z. Qiao, "A modified particle swarm optimization algorithm," Journal of Circuits and Systems, vol. 10, pp. 87–91, 2003.
[17] B. Liu, L. Wang, Y. H. Jin, F. Tang, and D. X. Huang, "Improved particle swarm optimization combined with chaos," Chaos, Solitons and Fractals, vol. 25, no. 5, pp. 1261–1271, 2005.
[18] T. Xiang, X. Liao, and K. W. Wong, "An improved particle swarm optimization algorithm combined with piecewise linear chaotic map," Applied Mathematics and Computation, vol. 190, no. 2, pp. 1637–1645, 2007.
[19] C. Fan and G. Jiang, "A simple particle swarm optimization combined with chaotic search," in Proceedings of the 7th World Congress on Intelligent Control and Automation (WCICA '08), pp. 593–598, Chongqing, China, June 2008.
[20] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, December 1995.
[21] S. H. Chen, A. J. Jakeman, and J. P. Norton, "Artificial intelligence techniques: an introduction to their use for modelling environmental systems," Mathematics and Computers in Simulation, vol. 78, no. 2-3, pp. 379–400, 2008.
[22] J. J. Hopfield, "Hopfield network," Scholarpedia, vol. 2, article 1977, 2007.
[23] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 2554–2558, 1984.
[24] Y. del Valle, G. K. Venayagamoorthy, S. Mohagheghi, J. C. Hernandez, and R. G. Harley, "Particle swarm optimization: basic concepts, variants and applications in power systems," IEEE Transactions on Evolutionary Computation, vol. 12, no. 2, pp. 171–195, 2008.
[25] J. D. Schaffer, R. A. Caruana, L. J. Eshelman, and R. Das, "A study of control parameters affecting online performance of genetic algorithms for function optimization," in Proceedings of the 3rd International Conference on Genetic Algorithms, pp. 51–60, 1989.
[26] A. I. de Freitas Vaz and E. M. da Graça Pinto Fernandes, "Optimization of nonlinear constrained particle swarm," Technological and Economic Development of Economy, vol. 12, no. 1, pp. 30–36, 2006.
[27] X. Hu, R. C. Eberhart, and Y. Shi, "Engineering optimization with particle swarm," in Proceedings of the IEEE Swarm Intelligence Symposium, pp. 53–57, 2003.
[28] C. A. C. Coello, "Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art," Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 11-12, pp. 1245–1287, 2002.
[29] J. Kennedy, "Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance," Neural Networks, vol. 18, pp. 205–217, 1997.



to temporarily get stuck and may need a long period of time to escape from a local extremum [16]. It is difficult to guarantee the convergence of the swarm, especially when random parameters are used. In order to improve the dynamical behavior of PSO, one can combine chaos with PSO algorithms to enhance the performance of PSO. In [17–19] chaos was applied to the PSO to prevent it from getting trapped in local minima.

PSO is motivated by the behavior of organisms such as fish schooling and bird flocking [20]. During the process, future particle positions (determined by velocity) can be regarded as particle intelligence [21]. Using a chaotic intelligent swarm system to replace the original PSO might be convenient for analysis while maintaining stochastic search properties. Most importantly, the convergence of a particle swarm initialized with random weights is not guaranteed.

In this paper we propose a chaotic Hopfield neural network swarm optimization (CHNNSO) algorithm. The rest of the paper is organized as follows. In Section 2 the preliminaries of Hopfield neural networks and PSO are described. The chaotic Hopfield neural network model is developed in Section 3. In Section 4 the dynamics of the chaotic Hopfield neural network is analyzed. Section 5 provides simulation results and comparisons. The conclusion is given in Section 6.

2. Preliminaries

2.1. Basic Hopfield Neural Network Theory [22]. A Hopfield net is a recurrent neural network with a synaptic connection pattern such that there is an underlying Lyapunov energy function for the activity dynamics. Started in any initial state, the state of the system evolves to a final state that is a (local) minimum of the Lyapunov energy function. The Lyapunov energy function decreases monotonically under the dynamics and is bounded below. Because of the existence of an elementary Lyapunov energy function for the dynamics, the only possible asymptotic result is a state on an attractor.

There are two popular forms of the model: binary neurons with discrete time, updated one at a time, and graded neurons with continuous time. In this paper the second kind of model is used. The dynamics of an n-neuron continuous Hopfield neural network is described by

\[ \frac{du_i}{dt} = -\frac{u_i}{\tau} + \sum_{j} T_{ij} x_j(t) + I_i. \tag{1} \]

Here u_i ∈ (−∞, ∞) is the input of neuron i, and the output of neuron i is

\[ x_i(t) = g_i\left[ u_i(t) \right], \tag{2} \]

where i = 1, 2, …, n, and τ is a positive constant. I_i is the external input (e.g., sensory input or bias current) to neuron i and is sometimes called the "firing threshold" when replaced with −I_i. u_i is the mean internal potential of the neuron, which determines the output of neuron i. T_ij is the strength of synaptic input from neuron j to neuron i, and g is a monotone function that converts internal potential into the firing-rate input of the neuron. T is the matrix with elements T_ij. When T is

symmetric, the Lyapunov energy function is given by

\[ J = -\frac{1}{2} \sum_{i,j} T_{ij} x_i x_j - \sum_{i} I_i x_i + \frac{1}{\tau} \sum_{i} \int_{0}^{x_i} g^{-1}(Z)\, dZ, \tag{3} \]

where g^−1 is the inverse of the gain function g. There is a significant limiting case of this function when T has no diagonal elements and the input-output relation becomes a step going from 0 to a maximum firing rate (for convenience, scaled to 1). The third term of this Lyapunov function is then zero or infinite. With no diagonal elements in T, the minima of J are all located at corners of the hypercube 0 ≤ x_j ≤ 1. In this limit the states of the continuous-variable system are stable.

Many optimization problems can be readily represented using Hopfield nets by transforming the problem into variables such that the desired optimization corresponds to the minimization of the respective Lyapunov energy function [6]. The dynamics of the HNN converges to a local Lyapunov energy minimum. If this local minimum is also the global minimum, the desired optimization task has been carried out by the convergence of the network state.
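The energy-descent property described above can be illustrated with a minimal binary Hopfield net (an added sketch with arbitrary random weights, not part of the original paper):

```python
import numpy as np

def hopfield_energy(T, x, I):
    """Lyapunov energy J = -1/2 x^T T x - I^T x for binary neurons (no gain term)."""
    return -0.5 * x @ T @ x - I @ x

def async_update(T, x, I):
    """One sequential sweep; with symmetric T and zero diagonal,
    each flip can only lower (or keep) the energy."""
    x = x.copy()
    for i in range(len(x)):
        x[i] = 1.0 if T[i] @ x + I[i] >= 0 else 0.0
    return x

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
T = (W + W.T) / 2          # symmetric weights, as the theory requires
np.fill_diagonal(T, 0.0)   # no self-connections
I = rng.normal(size=8)
x = rng.integers(0, 2, size=8).astype(float)

e0 = hopfield_energy(T, x, I)
x = async_update(T, x, I)
e1 = hopfield_energy(T, x, I)
assert e1 <= e0 + 1e-9     # energy is non-increasing under the dynamics
```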

2.2. Basic PSO Theory. Many real optimization problems can be formulated as the following functional optimization problem:

\[ \min f(X_i), \quad X_i = \left[ x_i^1, \ldots, x_i^n \right], \quad \text{s.t. } x_i \in [a_i, b_i], \ i = 1, 2, \ldots, n. \tag{4} \]

Here f is the objective function and X_i is the decision vector consisting of n variables.

The original particle swarm algorithm works by iteratively searching in a region and is concerned with the best previous success of each particle, the best previous success of the particle swarm, and the current position and velocity of each particle [20]. Every candidate solution of f(X_i) is called a "particle." The particle searches the domain of the problem according to

\[ V_i(t+1) = \omega V_i(t) + c_1 R_1 \left( P_i - X_i(t) \right) + c_2 R_2 \left( P_g - X_i(t) \right), \tag{5} \]
\[ X_i(t+1) = X_i(t) + V_i(t+1), \tag{6} \]

where V_i = [v_i^1, v_i^2, …, v_i^n] is the velocity of particle i and X_i = [x_i^1, x_i^2, …, x_i^n] represents the position of particle i. P_i represents the best previous position of particle i (indicating the best discoveries or previous experience of particle i), and P_g represents the best previous position among all particles (indicating the best discovery or previous experience of the social swarm). ω is the inertia weight that controls the impact of the previous velocity of the particle on its current velocity and is sometimes adaptive [17]. R_1 and R_2 are two random weights whose components r_1^j and r_2^j (j = 1, 2, …, n) are chosen uniformly within the interval [0, 1], which might not guarantee the convergence of the particle trajectory. c_1 and c_2 are positive constant parameters. Generally, the value of each component of V_i should be clamped to the range [−V_max, V_max] to control excessive roaming of particles outside the search space.

Figure 1: Particle structure.
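The update rule (5)-(6) with velocity clamping can be sketched as follows (an illustration with hypothetical parameter values, not code from the paper):

```python
import numpy as np

def pso_step(X, V, P_i, P_g, w=0.9, c1=2.0, c2=2.0, vmax=1.0, rng=None):
    """One synchronous PSO iteration per (5)-(6); rows of X are particles."""
    if rng is None:
        rng = np.random.default_rng()
    R1 = rng.random(X.shape)      # random weights r_1^j ~ U[0, 1]
    R2 = rng.random(X.shape)      # random weights r_2^j ~ U[0, 1]
    V = w * V + c1 * R1 * (P_i - X) + c2 * R2 * (P_g - X)
    V = np.clip(V, -vmax, vmax)   # clamp each component to [-Vmax, Vmax]
    return X + V, V
```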

3. A Chaotic Hopfield Neural Network Model

From the introduction of basic PSO theory, every particle can be seen as a model of a single fish or a single bird. The position chosen by the particle can be regarded as a state of a neural network with a random synaptic connection. According to (5)-(6), the position components of particle i can be thought of as the output of a neural network, as shown in Figure 1.

In Figure 1, Rand1(⋅) and Rand2(⋅) are two independent and uniformly distributed random variables within the range [0, 1], which refer to r_1^j and r_2^j, respectively. p_i^j and p_g^j are the components of P_i and P_g, respectively: p_g^j is the previous best value among all particles, and p_i^j, as an externally applied input, is the jth element of the best previous position P_i, coupled with the other components of P_i. The particles migrate toward a new position according to (5)-(6). This process is repeated until a defined stopping criterion is met (e.g., a maximum number of iterations or a sufficiently good fitness value).

As pointed out by Clerc and Kennedy [15], the powerful optimization ability of the PSO comes from the interaction among the particles. The analysis of the complex interaction among the particles in the swarm is beyond the scope of this paper, which focuses on the construction of a simple particle from a neural network perspective and on convergence issues. Artificial neural networks are composed of simple artificial neurons mimicking biological neurons. The HNN has the property that as each neuron in a HNN updates, an energy function is monotonically reduced until the network stabilizes [23]. One can therefore map an optimization problem to a HNN such that the cost function of the problem corresponds to the energy function of the HNN, and the result of the HNN thus suggests a low-cost solution to the optimization problem. The HNN might therefore be a good choice to model particle behavior.

In order to approach P_g and P_i, the HNN model should include at least two neurons. For simplicity, the HNN model of each particle position component has two neurons, whose outputs are x_i^j(t) and x_ip^j(t). In order to transform the problem into variables such that the desired optimization corresponds to the minimization of the energy function, the objective function should be determined first. As x_i^j(t) and x_ip^j(t) should approach p_g^j and p_i^j, respectively, (x_i^j(t) − p_g^j)^2 and (x_ip^j(t) − p_i^j)^2 can be chosen as two parts of the energy function. The third part of the energy function, (x_i^j(t) − x_ip^j(t))^2, is added to accompany (x_ip^j(t) − p_i^j)^2 so as to cause x_i^j(t) to tend towards p_i^j. The following HNN Lyapunov energy function for each particle is therefore proposed:

\[ J_i^j(t) = A \left( x_i^j(t) - p_g^j \right)^2 + B \left( x_{ip}^j(t) - p_i^j \right)^2 + C \left( x_i^j(t) - x_{ip}^j(t) \right)^2, \tag{7} \]
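Equation (7) in code form, using the A, B, C values from Section 5.1 as defaults (a plain transcription):

```python
def particle_energy(x, xp, pg, pi, A=0.02, B=0.01, C=0.01):
    """Per-component Lyapunov energy J_i^j(t) from (7)."""
    return A * (x - pg) ** 2 + B * (xp - pi) ** 2 + C * (x - xp) ** 2

# The energy vanishes exactly when both neuron outputs sit on the two bests
assert particle_energy(0.5, 0.5, 0.5, 0.5) == 0.0
```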

where A, B, and C are positive constants.

Here the neuron input-output function is chosen as a sigmoid function, given by (9) and (11). Equations (8) and (10) are the Euler approximation of (1) for the continuous Hopfield neural network [14]. The dynamics of component j of particle i is described by

\[ u_i^j(t+1) = k u_i^j(t) - \alpha \frac{\partial J_i^j(t)}{\partial x_i^j(t)}, \tag{8} \]
\[ x_i^j(t+1) = g_i\left( u_i^j(t+1) \right) = \frac{1 - e^{-\xi u_i^j(t+1)}}{1 + e^{-\xi u_i^j(t+1)}}, \tag{9} \]
\[ u_{ip}^j(t+1) = k u_{ip}^j(t) - \alpha \frac{\partial J_i^j(t)}{\partial x_{ip}^j(t)}, \tag{10} \]
\[ x_{ip}^j(t+1) = g_i\left( u_{ip}^j(t+1) \right) = \frac{1 - e^{-\xi u_{ip}^j(t+1)}}{1 + e^{-\xi u_{ip}^j(t+1)}}. \tag{11} \]

According to (5)-(6) and Figure 1, the PSO uses random weights to simulate birds flocking or fish searching for food. When birds flock or fish search for food, they exhibit chaos-like behavior, yet (8)–(11) do not generate chaos. Aihara et al. [9] proposed a kind of chaotic neuron which includes relative refractoriness in the model to simulate chaos in a biological brain. To use this result, the terms −z_i(t)(x_i^j(t) − I_0) and −z_i(t)(x_ip^j(t) − I_0) are added to (8) and (10) to cause chaos. Equations (8) and (10) then become

\[ u_i^j(t+1) = k u_i^j(t) - \alpha \frac{\partial J_i^j(t)}{\partial x_i^j(t)} - z_i(t)\left( x_i^j(t) - I_0 \right), \tag{12} \]
\[ u_{ip}^j(t+1) = k u_{ip}^j(t) - \alpha \frac{\partial J_i^j(t)}{\partial x_{ip}^j(t)} - z_i(t)\left( x_{ip}^j(t) - I_0 \right). \tag{13} \]

In order to escape from chaos as time evolves, we set

\[ z_i(t+1) = (1 - \beta) z_i(t). \tag{14} \]

In (8)–(14), ξ, α, and k are positive parameters; z_i(t) is a self-feedback connection weight (the refractory strength); β is the damping factor of the time-dependent z_i(t) (0 ≤ β ≤ 1); and I_0 is a positive parameter. All the parameters are fixed except z_i(t), which is varied.

The combination of (9) and (11)–(14) is the proposed chaotic Hopfield neural network swarm optimization (CHNNSO). According to (8)–(14), the following procedure can be used for implementing the proposed CHNNSO algorithm.

(1) Initialize the swarm: assign a random position in the problem hyperspace to each particle and calculate the fitness function, whose variables correspond to the elements of the particle position coordinates.

(2) Synchronously update the positions of all the particles using (9) and (11)–(14), changing the two states at every iteration.

(3) Evaluate the fitness function for each particle.

(4) For each individual particle, compare the particle's fitness value with its previous best fitness value. If the current value is better, set it as the particle's best fitness value and set the current position $x_i^j$ as $p_i^j$; if $P_i$ is updated, reset $z = z_0$.

(5) Identify the particle that has the best fitness value. While the iteration count is below a certain value, if the particle with the best fitness value changes, reset $z = z_0$ to keep the particles chaotic and to prevent premature convergence.

(6) Repeat steps (2)–(5) until a stopping criterion is met (e.g., a maximum number of iterations or a sufficiently good fitness value).
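Steps (1)–(6) can be sketched in code. The following is an illustrative sketch rather than the authors' implementation: the swarm size and fitness function are arbitrary, the standard logistic sigmoid $1/(1+e^{-\xi u})$ is used in place of activation (9) for numerical stability, and the neuron states are clipped before activation (an added safeguard not in the original formulation).

```python
import numpy as np

# Illustrative sketch of the CHNNSO loop, steps (1)-(6).
# Parameter values follow (25); the sigmoid and the clipping of the
# neuron states are stand-ins for numerical stability, not the paper's
# exact activation (9).
A, B, C = 0.02, 0.01, 0.01
alpha, beta, k, xi, I0, z0 = 1.0, 0.001, 1.1, 150.0, 0.2, 0.3

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-xi * np.clip(u, -1.0, 1.0)))

def chnnso(fitness, n_particles=20, dim=2, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    u  = rng.uniform(-0.01, 0.01, (n_particles, dim))    # neuron states for x
    up = rng.uniform(-0.01, 0.01, (n_particles, dim))    # neuron states for x_p
    x, xp = sigmoid(u), sigmoid(up)
    z = np.full(n_particles, z0)                         # refractory strengths
    pbest = x.copy()                                     # step (1)
    pbest_val = np.apply_along_axis(fitness, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()               # global best position
    for _ in range(iters):                               # step (2)
        dJ_dx  = 2*A*(x - g) + 2*C*(x - xp)              # gradients of (7)
        dJ_dxp = 2*B*(xp - pbest) - 2*C*(x - xp)
        u  = k*u  + alpha*dJ_dx  - z[:, None]*(x  - I0)  # chaotic update (12)
        up = k*up + alpha*dJ_dxp - z[:, None]*(xp - I0)  # chaotic update (13)
        x, xp = sigmoid(u), sigmoid(up)
        z = (1.0 - beta) * z                             # decay (14)
        vals = np.apply_along_axis(fitness, 1, x)        # step (3)
        better = vals < pbest_val                        # step (4)
        pbest[better], pbest_val[better] = x[better], vals[better]
        z[better] = z0                                   # reset z, steps (4)-(5)
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())
```

The reset of $z$ to $z_0$ whenever a personal best improves is what keeps the early search chaotic, as prescribed in steps (4) and (5).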

As can be seen from (9) and (11), the particle position component $x_i^j$ is located in the interval $[-1, 1]$. The optimization problem variable interval must therefore be mapped to $[-1, 1]$ and vice versa using

$$x_j = -1 + \frac{2(x_j - a_j)}{b_j - a_j}, \quad j = 1, 2, \ldots, n,$$

$$x_j = a_j + \frac{1}{2}(x_j + 1)(b_j - a_j), \quad j = 1, 2, \ldots, n. \tag{15}$$

Here $a_j$ and $b_j$ are the lower and upper boundaries of $x_j(t)$, respectively, and only one particle is analyzed for simplicity.
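The mapping (15) between a problem interval $[a_j, b_j]$ and the particle interval $[-1, 1]$ is a pair of affine maps; the helper names below are illustrative.

```python
# Affine maps of (15) between a problem variable in [a, b] and the
# particle interval [-1, 1]; function names are illustrative.
def to_unit(x, a, b):
    return -1.0 + 2.0 * (x - a) / (b - a)

def from_unit(s, a, b):
    return a + 0.5 * (s + 1.0) * (b - a)
```

Applying `from_unit` after `to_unit` recovers the original variable, so the swarm can search in $[-1, 1]$ while fitness is evaluated in the problem's own units.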

4. Dynamics of Chaotic Hopfield Network Swarm Optimization

In this section, the dynamics of the chaotic Hopfield neural network swarm optimization (CHNNSO) is analyzed. The first subsection discusses the convergence of the chaotic particle swarm; the second subsection discusses the dynamics of the simplest CHNNSO with different parameter values.

4.1. Convergence of the Particle Swarm

Theorem 1 (Wang and Smith [14]). If one has a network of neurons with arbitrarily increasing I/O functions, there exists a sufficient stability condition for a synchronous TCNN (transiently chaotic neural network), equation (12), namely,

$$k > 1, \qquad \frac{2k}{\beta_{\max}} > -T'_{\min}. \tag{16}$$

Here $1/\beta_{\max} = \min(1/g_i')$, where $g_i'$ denotes the derivative of the neural I/O function for neuron $i$ (in this paper, the sigmoid function (9)), and $T'_{\min}$ is the minimum eigenvalue of the connection weight matrix of the dynamics of an $n$-neuron continuous Hopfield neural network.

Theorem 2. A sufficient stability condition for the CHNNSO model is $k > 1$.

Proof. When $t \to \infty$,

$$z(t+1) = 0. \tag{17}$$

It then follows that the equilibria of (12) and (13) can be evaluated by

$$\Delta u_i^j(t+1) = (k-1)\,u_i^j(t) + \alpha\,\frac{\partial J_i^j(t)}{\partial x_i^j(t)},$$

$$\Delta u_{ip}^j(t+1) = (k-1)\,u_{ip}^j(t) + \alpha\,\frac{\partial J_i^j(t)}{\partial x_{ip}^j(t)}. \tag{18}$$

According to (7), we get

$$\Delta u_i^j(t+1) = (k-1)\,u_i^j(t) + 2A\alpha\big(x_i^j(t) - p_g^j\big) + 2C\alpha\big(x_i^j(t) - x_{ip}^j(t)\big) = 0, \tag{19}$$

Journal of Applied Mathematics 5

$$\Delta u_{ip}^j(t+1) = (k-1)\,u_{ip}^j(t) + 2B\alpha\big(x_{ip}^j(t) - p_i^j\big) - 2C\alpha\big(x_i^j(t) - x_{ip}^j(t)\big) = 0. \tag{20}$$

$$T'_{\min} = \min\!\left(\mathrm{eig}\begin{pmatrix} 2A\alpha + 2C\alpha & -2C\alpha \\ -2C\alpha & 2B\alpha + 2C\alpha \end{pmatrix}\right) = (A + B + 2C)\alpha - \alpha\sqrt{(A-B)^2 + 4C^2} = \alpha\left(A + B + 2C - \sqrt{(A-B)^2 + 4C^2}\right). \tag{21}$$

In this paper,

$$\beta_{\max} = \frac{1}{\min(1/g')} = \frac{\xi}{4}. \tag{22}$$

It is clear that $T'_{\min} > 0$, so the stability condition (16) is satisfied when $\xi > 0$. The above analysis verifies Theorem 2.
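The closed form in (21) is easy to check numerically; parameter values are taken from (25), and `numpy` is assumed to be available.

```python
import numpy as np

# Numerical check of (21): the minimum eigenvalue of the 2x2 connection
# weight matrix equals alpha*(A + B + 2C - sqrt((A-B)^2 + 4C^2)).
# Parameter values follow (25).
A, B, C, alpha = 0.02, 0.01, 0.01, 1.0
W = np.array([[2*A*alpha + 2*C*alpha, -2*C*alpha],
              [-2*C*alpha,            2*B*alpha + 2*C*alpha]])
T_min = np.linalg.eigvalsh(W).min()
closed_form = alpha * (A + B + 2*C - np.sqrt((A - B)**2 + 4*C**2))
```

With $A = 0.02$ and $B = C = 0.01$ this gives $T'_{\min} \approx 0.0276 > 0$, consistent with the stability argument above.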

Theorem 3. The particles converge to the sphere with center point $p_g^j$ and radius $R = \max\|x_{ie}^j - p_i^j\|$, where $x_{ie}^j$ is the final convergence equilibrium. (If the optimization problem is in a two-dimensional plane, the particles are finally in a circle.)

It is easy to show that the particle model given by (7) and (8)–(14) has only one equilibrium as $t \to \infty$, that is, $z(t) = 0$. Hence, as $t \to \infty$, $X_i$ belongs to the hypersphere whose origin is $P_g$ and whose radius is $R$. Solving (9), (11), (19), and (20) simultaneously, we get

$$\frac{(k-1)\ln\!\big(1 - 1/x_i^j(t)\big)}{\xi} + 2A\alpha\big(x_i^j(t) - p_g^j\big) + 2C\alpha\big(x_i^j(t) - x_{ip}^j(t)\big) = 0, \tag{23}$$

$$\frac{(k-1)\ln\!\big(1 - 1/x_{ip}^j(t)\big)}{\xi} + 2B\alpha\big(x_{ip}^j(t) - p_i^j\big) - 2C\alpha\big(x_i^j(t) - x_{ip}^j(t)\big) = 0. \tag{24}$$

With (23) and (24) satisfied, there must exist final convergence equilibria $x_{ie}^j$ and $x_{ipe}^j$. So the best place the particle swarm can find is $p_g^j - x_{ie}^j(t+1)$, and the radius is $R = \max\|x_{ie}^j - p_i^j\|$.

The above analysis therefore verifies Theorem 3.

4.2. Dynamics of the Simplest Chaotic Hopfield Neural Network Swarm

In this section, the dynamics of the simplest particle swarm model is analyzed. Equations (7) and (8)–(13) are the dynamic model of a single particle, with the subscript $i$ ignored. According to (7) and Theorem 3, the parameters $A$, $B$, and $C$ control the final convergent radius; by trial and error, they can be chosen in the range from 0.005 to 0.05. According to (16) and (22), $k > 1$ and $\xi > 0$. In the simulations, the results are better when $k$ is in the neighborhood of 1.1 and $\xi$ is in the neighborhood of 150. The parameters $\beta$ and $z(0)$ control the duration of the chaotic period: if $\beta$ is too big and/or $z(0)$ is too small, the system will quickly escape from chaos and performance will be poor. The parameter $I_0 = 0.2$ is standard in the literature on chaotic neural networks. The simulations showed that the model is not sensitive to the values of the parameters $\xi$ and $k$; for example, $100 < \xi < 200$ and $1 < k < 1.5$ are feasible.

Then the values of the parameters in (7)–(14) are set to

$$A = 0.02, \quad B = C = 0.01, \quad \xi = 150, \quad \alpha = 1, \quad \beta = 0.001, \quad z(0) = 0.3, \quad k = 1.1, \quad I_0 = 0.2, \quad p_i = 0.5, \quad p_g = 0.2. \tag{25}$$

Figure 2 shows the time evolution of $x_i(t)$, $z(t)$, and the Lyapunov exponent $\lambda$ of $x_i(t)$. The Lyapunov exponent $\lambda$ characterizes the rate of separation of infinitesimally close trajectories; a positive Lyapunov exponent is usually taken as an indication that the system is chaotic [1]. Here $\lambda$ is defined as

$$\lambda = \lim_{m \to +\infty} \frac{1}{m} \sum_{t=0}^{m-1} \ln\left|\frac{dx_i(t+1)}{dx_i(t)}\right|. \tag{26}$$
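A finite-$m$ version of (26) can be computed for any scalar map with a known derivative. As a stand-in orbit (an assumption, not the particle dynamics of this paper), the chaotic logistic map $x \mapsto 4x(1-x)$ is used below; its exponent is known to be $\ln 2 \approx 0.693$.

```python
import math

# Finite-time estimate of the Lyapunov exponent (26):
#   lambda ~= (1/m) * sum_{t=0}^{m-1} ln |dx(t+1)/dx(t)|.
# The chaotic logistic map x -> 4x(1-x) serves as a stand-in orbit.
def lyapunov(f, df, x0, m=200000, transient=1000):
    x = x0
    for _ in range(transient):          # discard transient behaviour
        x = f(x)
    total = 0.0
    for _ in range(m):
        total += math.log(abs(df(x)))   # ln |f'(x(t))|
        x = f(x)
    return total / m

lam = lyapunov(lambda x: 4.0 * x * (1.0 - x), lambda x: 4.0 - 8.0 * x, x0=0.2)
```

A positive estimate signals chaos; after the transient dies out (as $z(t)$ decays in Figure 2), the corresponding estimate for the particle map turns negative.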

At about 200 steps, $z(t)$ decays to a small value and $x_i(t)$ departs from chaos, which corresponds with the change of $\lambda$ from positive to negative.

According to Figure 2, the convergence process of a simple particle position follows the nonlinear bifurcation, making the particle converge to a stable fixed point from a strange attractor. In the following section it is shown that the fixed point is determined by the best previous position $P_g$ among all particles and the best position $P_i$ of the individual particle.

Remark 4. The proposed CHNNSO model is a deterministic chaos-Hopfield neural network swarm, which is different from existing PSOs with stochastic parameters. Its search orbits exhibit an evolutionary process of inverse period bifurcation from chaos to periodic orbits and then to a sink. As chaos is ergodic and each particle is always in a chaotic state at the beginning (e.g., in Figure 2), a particle can escape when trapped in local extrema. The proposed CHNNSO model will therefore in general not suffer from being easily trapped in a local optimum and will continue to search for a global optimum.

5. Numerical Simulation

To test the performance of the proposed algorithm, two famous benchmark optimization problems and an engineering optimization problem with linear and nonlinear constraints are used. The solutions to the two benchmark problems can be represented in the plane, and therefore the convergence of the CHNNSO can be clearly observed. The results of the engineering optimization problem, compared with other algorithms, are displayed in Table 2. We will compare the CHNNSO with the original PSO [20].


Figure 2: Time evolutions of (a) $x_i(t)$, (b) $z(t)$, and (c) the Lyapunov exponent $\lambda$.

Table 1: The parameters of the Hartmann function (when $n = 3$ and $q = 4$).

i | a_i1 | a_i2 | a_i3 | c_i | p_i1    | p_i2   | p_i3
1 | 3    | 10   | 30   | 1   | 0.3689  | 0.1170 | 0.2673
2 | 0.1  | 10   | 35   | 1.2 | 0.4699  | 0.4387 | 0.7470
3 | 3    | 10   | 30   | 3   | 0.1091  | 0.8732 | 0.5547
4 | 0.1  | 10   | 35   | 3.2 | 0.03815 | 0.5743 | 0.8828

5.1. The Rastrigin Function. To demonstrate the efficiency of the proposed technique, the famous Rastrigin function is chosen as a test problem. This function with two variables is given by

$$f(X) = x_1^2 + x_2^2 - \cos(18x_1) - \cos(18x_2), \quad -1 < x_i < 1, \; i = 1, 2. \tag{27}$$

The global minimum is $-2$, and the minimum point is $(0, 0)$. There are about 50 local minima arranged in a lattice configuration.
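Equation (27) can be coded directly; evaluating at the origin recovers the stated minimum of $-2$.

```python
import math

# Two-variable Rastrigin variant (27); global minimum f(0, 0) = -2.
def rastrigin2(x1, x2):
    return x1**2 + x2**2 - math.cos(18.0 * x1) - math.cos(18.0 * x2)
```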

The proposed technique is applied with a population size of 20, and the maximum number of iterations is 20,000. The chaotic particle swarm parameters are chosen as $A = 0.02$, $B = C = 0.01$, $\beta = 0.001$, $z(0) = 0.3$, $\alpha = 1$, $\xi = 150$, $k = 1.1$, and $I_0 = 0.2$.

The position of every particle is initialized with a random value. The time evolution of the cost of the Rastrigin function is shown in Figure 3. The global minimum of $-2$ is obtained by the best particle with $(x_1, x_2) = (0, 0)$.

From Figure 3 it can be seen that the proposed method gives good optimization results. Since there are two variables in the Rastrigin function, it is easy to show the final convergent particle states in the plane.

In Figure 4, the "*"s are the best experiences of each particle, and the "+"s are the final states of the particles. The global minimum $(0, 0)$ is included in both the "*"s and the "+"s; most "*"s and "+"s overlap at the global minimum $(0, 0)$. According to Theorem 3, the particles will finally converge to

Figure 3: Time evolution of the cost $f(x)$ of the Rastrigin function.

a circle. For this Rastrigin problem, the particles' final states converge to the circle shown in Figure 4, and hence the global convergence of the particles is guaranteed.

Figure 5 displays the results when the original PSO [20] was used to optimize the Rastrigin function. In this numerical simulation, the particle swarm population size is also 20, the parameters $c_1$ and $c_2$ are set to 2, $\omega$ is set to 1, and $V_{\max}$ is set equal to the dynamic range of each dimension. The "*"s in Figure 5 are the final states of all the particles, corresponding to the "+"s in Figure 4. It is easy to see that the final states of the particles are ruleless, even though the global minimum of $-2$ is obtained by the best experience of the particle swarm, that is, $(x_1, x_2) = (0, 0)$, as shown in Figure 5.

By comparing the results obtained by the proposed CHNNSO in Figure 4 with the results of the original PSO in Figure 5, it can be seen that the final states of the particles of the proposed CHNNSO are attracted to the best experience of all the particles and that convergence is superior. The final states of the CHNNSO particles are guaranteed to converge, which is not the case for the original PSO implementation.


Figure 4: The final states and the best experiences of the particles achieved from the proposed CHNNSO for the Rastrigin function.

Figure 5: The final states of the particles achieved from the original PSO for the Rastrigin function.

When the parameters $c_1$ and $c_2$ are both set to 1.494 and $\omega$ to a value of 0.729, $V_{\max}$ is set equal to the dynamic range on each dimension. The constriction factors $c_1$, $c_2$, and $\omega$ are applied to improve the convergence of the particle over time by damping the oscillations once the particle is focused on the best point in an optimal region. The main disadvantage of this method is that the particles may follow wider cycles and may not converge when the individual best performance is far from the neighborhood's best performance (two different regions) [24].

5.2. The Schaffer's F6 Function. To further investigate the performance of the CHNNSO,

$$f(X) = 0.5 + \frac{\sin^2\!\sqrt{x_1^2 + x_2^2} - 0.5}{\big(1 + 0.001(x_1^2 + x_2^2)\big)^2} \tag{28}$$
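In its standard form [25], (28) can be coded as follows; the check $f(0,0) = 0$ recovers the stated global minimum.

```python
import math

# Schaffer's F6 function (28); the global minimum is f(0, 0) = 0.
def schaffer_f6(x1, x2):
    r2 = x1 * x1 + x2 * x2
    return 0.5 + (math.sin(math.sqrt(r2))**2 - 0.5) / (1.0 + 0.001 * r2)**2
```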

Figure 6: The final states and the best experiences of the particles achieved from the proposed CHNNSO for the Schaffer's F6 function.

the Schaffer's F6 function [25] is chosen. This function has a single global optimum at $(0, 0)$, where $f_{\min}(x, y) = 0$, and a large number of local optima. The global optimum is difficult to find because the value at the best local optimum differs by only about $10^{-3}$ from the global minimum, and the local optima crowd around the global optimum. The proposed technique is applied with a population size of 30, the number of iterations is 100,000, and the parameters of the CHNNSO are chosen as

$$A = B = C = 0.01, \quad \beta = 0.0003, \quad z(0) = 0.3, \quad \xi = 155, \quad k = 1.1, \quad I_0 = 0.2. \tag{29}$$

The position of each particle is initialized with a random value. In Figure 6, the "*"s are the best experiences of each particle. The global minimum of $2.674 \times 10^{-4}$ is obtained by the best particle with $(x_1, x_2) = (-0.00999, 0.01294)$, which is included in the "*"s. The "+"s are the final states of the particles. According to Theorem 3, the particles' final states converge on a circle, as shown in Figure 6, which demonstrates global convergence. From Figure 6 it is clearly seen that the particles' final states are attracted to the neighborhoods of the best experiences of all the particles, and the convergence is good.

Figure 7 shows the final particle states when the original PSO [20] was used to optimize the Schaffer's F6 function. In this numerical simulation of the original PSO, the particle swarm population size is also 20, the parameters $c_1$ and $c_2$ are both set to 2, $\omega$ is set to 1, and $V_{\max}$ is set equal to the dynamic range of each dimension. The "*"s in Figure 7 are the final states of all the particles, corresponding to the "+"s in Figure 6. It is easy to see that the final states of the particles are ruleless in Figure 7. The global minimum of $6.3299 \times 10^{-4}$ is obtained by the best particle with $(x_1, x_2) = (0.01889 \times 10^{-3}, -0.4665 \times 10^{-3})$. The best experience from the original PSO is not as good as the best of the proposed CHNNSO.

Comparing the results obtained from the proposed CHNNSO in Figure 6 and the original PSO in Figure 7, it is clearly seen that the particles' final states of the proposed


Figure 7: The final states of the particles achieved from the original PSO for the Schaffer's F6 function.

CHNNSO are finally attracted to the best experience of all the particles, and the convergence is better than that of the original PSO. The CHNNSO can guarantee the convergence of the particle swarm, but the final states of the original PSO are ruleless.

5.3. The Hartmann Function. The Hartmann function with $n = 3$ and $q = 4$ is given by

$$f(x) = -\sum_{i=1}^{q} c_i \exp\left(-\sum_{j=1}^{n} a_{ij}\big(x_j - p_{ij}\big)^2\right), \tag{30}$$

with $x_j$ belonging to

$$S = \{x \in \mathbb{R}^n \mid 0 \le x_j \le 1, \; 1 \le j \le n\}. \tag{31}$$

Table 1 shows the parameter values for the Hartmann function when $n = 3$ and $q = 4$.
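Using the coefficients of Table 1, (30) can be evaluated directly. Note that $p_{42} = 0.5743$ below follows the standard Hartmann 3-D coefficients; the digits appear transposed in some printings.

```python
import math

# Hartmann function (30) with n = 3, q = 4, coefficients from Table 1
# (p[3][1] = 0.5743 per the standard Hartmann 3-D parameter set).
a = [[3.0, 10.0, 30.0], [0.1, 10.0, 35.0],
     [3.0, 10.0, 30.0], [0.1, 10.0, 35.0]]
c = [1.0, 1.2, 3.0, 3.2]
p = [[0.3689, 0.1170, 0.2673], [0.4699, 0.4387, 0.7470],
     [0.1091, 0.8732, 0.5547], [0.03815, 0.5743, 0.8828]]

def hartmann3(x):
    return -sum(c[i] * math.exp(-sum(a[i][j] * (x[j] - p[i][j])**2
                                     for j in range(3)))
                for i in range(4))
```

Evaluating at the known minimizer reproduces $f(x_{\min}) \approx -3.86$.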

When $n = 3$, $X_{\min} = (0.114, 0.556, 0.852)$ and $f(x_{\min}) = -3.86$.

The cost of the Hartmann function is evolved for 15,000 iterations. In Figure 8, only two subdimensions are pictured: the "+"s are the final states of the particles, and the "*"s denote the best experiences of all particles. From Figure 8 it can easily be seen that the final states of the particles converge to the circle; the center point is $(0.0831, 0.5567, 0.8522)$ and the radius is 0.1942. The final particle states confirm Theorem 3, and the final convergence is guaranteed.

5.4. Design of a Pressure Vessel. The pressure vessel problem described in [26, 27] is an example which has linear and nonlinear constraints and has been solved by a variety of techniques. The objective of the problem is to minimize the total cost of the material needed for forming and welding a cylindrical vessel. There are four design variables: $x_1$ ($T_s$, thickness of the shell), $x_2$ ($T_h$, thickness of the head), $x_3$ ($R$,

Figure 8: The final states of the particles achieved from the original PSO for the Hartmann function.

inner radius), and $x_4$ ($L$, length of the cylindrical section of the vessel). $T_s$ and $T_h$ are integer multiples of 0.0625 inch, which are the available thicknesses of rolled steel plates, and $R$ and $L$ are continuous. The problem can be specified as follows:

$$\begin{aligned}
\text{Minimize} \quad & f(X) = 0.6224\,x_1 x_3 x_4 + 1.7781\,x_2 x_3^2 + 3.1661\,x_1^2 x_4 + 19.84\,x_1^2 x_3 \\
\text{subject to} \quad & g_1(X) = -x_1 + 0.0193\,x_3 \le 0, \\
& g_2(X) = -x_2 + 0.00954\,x_3 \le 0, \\
& g_3(X) = -\pi x_3^2 x_4 - \tfrac{4}{3}\pi x_3^3 + 1296000 \le 0, \\
& g_4(X) = x_4 - 240 \le 0.
\end{aligned} \tag{32}$$

The following ranges of the variables were used [27]:

$$0 < x_1 \le 99, \quad 0 < x_2 \le 99, \quad 10 \le x_3 \le 200, \quad 10 \le x_4 \le 200. \tag{33}$$
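As a sanity check, the objective and constraints (32) can be evaluated at the best solution reported in Table 2; the helper below is illustrative, not the authors' code.

```python
import math

# Objective and constraint values of the pressure vessel problem (32).
def pressure_vessel(x1, x2, x3, x4):
    f = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
         + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)
    g = [-x1 + 0.0193 * x3,                                   # g1
         -x2 + 0.00954 * x3,                                  # g2
         -math.pi * x3**2 * x4
         - (4.0 / 3.0) * math.pi * x3**3 + 1296000.0,         # g3
         x4 - 240.0]                                          # g4
    return f, g
```

Evaluating at $(0.8125, 0.4375, 42.09845, 176.6366)$ reproduces an objective of about $6.06 \times 10^3$ with the constraints (essentially) satisfied.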

de Freitas Vaz and da Graca Pinto Fernandes [26] proposed an algorithm to deal with constrained optimization problems. Here, this algorithm [26] is combined with the CHNNSO to search for the global optimum. The proposed technique is applied with a population size of 20, and the maximum number of iterations is 20,000. The values of the parameters in (7)–(14) are set to

$$A = 0.02, \quad B = C = 0.01, \quad \xi = 150, \quad \alpha = 1, \quad \beta = 0.001, \quad z(0) = 0.3, \quad k = 1.1, \quad I_0 = 0.2, \quad p_i = 0.5, \quad p_g = 0.5. \tag{34}$$

From Table 2, the best solution obtained by the CHNNSO matches the best previously reported solution [27] and improves on that of [28].


Table 2: Comparison of the results for the pressure vessel design problem.

Design variables | This paper | Hu et al. [27] | Coello [28]
x_1 (T_s) | 0.8125 | 0.8125 | 0.8125
x_2 (T_h) | 0.4375 | 0.4375 | 0.4375
x_3 (R)   | 42.09845 | 42.09845 | 40.3239
x_4 (L)   | 176.6366 | 176.6366 | 200.0
g_1(X)    | 0 | 0 | -0.03424
g_2(X)    | -0.03588 | -0.03588 | -0.052847
g_3(X)    | -1.1814 x 10^-11 | -5.8208 x 10^-11 | -27.105845
g_4(X)    | -63.3634 | -63.3634 | -40.0
f(X)      | 6059.1313 | 6059.1313 | 6288.7445

6. Conclusion

This paper proposed a chaotic Hopfield neural network swarm optimization algorithm. It incorporates the particle swarm searching structure, which has global optimization capability, into Hopfield neural networks, which guarantee convergence. In addition, by adding chaos generator terms into the Hopfield neural networks, the ergodic searching capability of the proposed algorithm is greatly improved. The decay factor introduced in the chaos terms ensures that the search converges to the global optimum after globally chaotic optimization. The experimental results on three classic benchmark functions showed that the proposed algorithm can guarantee the convergence of the particle swarm search and can escape from local extrema. Therefore, the proposed algorithm improves the practicality of particle swarm optimization. As this is a general particle model, techniques such as the local best version algorithm proposed in [29] can be used together with the new model; this will be explored in future work.

Acknowledgments

This work was supported by the China/South Africa Research Cooperation Programme (nos. 78673 and CS06-L02), the South African National Research Foundation Incentive Grant (no. 81705), the SDUST Research Fund (no. 2010KYTD101), and the Key Scientific Support Program of Qingdao City (no. 11-2-3-51-nsh).

References

[1] J. C. Sprott, Chaos and Time-Series Analysis, Oxford University Press, New York, NY, USA, 2004.
[2] K. Aihara and G. Matsumoto, Chaos in Biological Systems, Plenum Press, New York, NY, USA, 1987.
[3] C. A. Skarda and W. J. Freeman, "How brains make chaos in order to make sense of the world," Behavioral and Brain Sciences, vol. 10, pp. 161-165, 1987.
[4] L. Wang, D. Z. Zheng, and Q. S. Lin, "Survey on chaotic optimization methods," Computation Technology and Automation, vol. 20, pp. 1-5, 2001.
[5] B. Li and W. S. Jiang, "Optimizing complex functions by chaos search," Cybernetics and Systems, vol. 29, no. 4, pp. 409-419, 1998.
[6] J. J. Hopfield and D. W. Tank, "'Neural' computation of decisions in optimization problems," Biological Cybernetics, vol. 52, no. 3, pp. 141-152, 1985.
[7] Z. Wang, Y. Liu, K. Fraser, and X. Liu, "Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays," Physics Letters A, vol. 354, no. 4, pp. 288-297, 2006.
[8] T. Tanaka and E. Hiura, "Computational abilities of a chaotic neural network," Physics Letters A, vol. 315, no. 3-4, pp. 225-230, 2003.
[9] K. Aihara, T. Takabe, and M. Toyoda, "Chaotic neural networks," Physics Letters A, vol. 144, no. 6-7, pp. 333-340, 1990.
[10] S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, pp. 671-680, 1983.
[11] L. Chen and K. Aihara, "Chaotic simulated annealing by a neural network model with transient chaos," Neural Networks, vol. 8, no. 6, pp. 915-930, 1995.
[12] L. Chen and K. Aihara, "Chaos and asymptotical stability in discrete-time neural networks," Physica D, vol. 104, no. 3-4, pp. 286-325, 1997.
[13] L. Wang, "On competitive learning," IEEE Transactions on Neural Networks, vol. 8, no. 5, pp. 1214-1217, 1997.
[14] L. Wang and K. Smith, "On chaotic simulated annealing," IEEE Transactions on Neural Networks, vol. 9, no. 4, pp. 716-718, 1998.
[15] M. Clerc and J. Kennedy, "The particle swarm-explosion, stability, and convergence in a multidimensional complex space," IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58-73, 2002.
[16] J. Ke, J. X. Qian, and Y. Z. Qiao, "A modified particle swarm optimization algorithm," Journal of Circuits and Systems, vol. 10, pp. 87-91, 2003.
[17] B. Liu, L. Wang, Y. H. Jin, F. Tang, and D. X. Huang, "Improved particle swarm optimization combined with chaos," Chaos, Solitons and Fractals, vol. 25, no. 5, pp. 1261-1271, 2005.
[18] T. Xiang, X. Liao, and K. W. Wong, "An improved particle swarm optimization algorithm combined with piecewise linear chaotic map," Applied Mathematics and Computation, vol. 190, no. 2, pp. 1637-1645, 2007.
[19] C. Fan and G. Jiang, "A simple particle swarm optimization combined with chaotic search," in Proceedings of the 7th World Congress on Intelligent Control and Automation (WCICA '08), pp. 593-598, Chongqing, China, June 2008.
[20] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, Perth, Australia, December 1995.
[21] S. H. Chen, A. J. Jakeman, and J. P. Norton, "Artificial intelligence techniques: an introduction to their use for modelling environmental systems," Mathematics and Computers in Simulation, vol. 78, no. 2-3, pp. 379-400, 2008.
[22] J. J. Hopfield, "Hopfield network," Scholarpedia, vol. 2, article 1977, 2007.
[23] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 2554-2558, 1984.
[24] Y. del Valle, G. K. Venayagamoorthy, S. Mohagheghi, J. C. Hernandez, and R. G. Harley, "Particle swarm optimization: basic concepts, variants and applications in power systems," IEEE Transactions on Evolutionary Computation, vol. 12, no. 2, pp. 171-195, 2008.
[25] J. D. Schaffer, R. A. Caruana, L. J. Eshelman, and R. Das, "A study of control parameters affecting online performance of genetic algorithms for function optimization," in Proceedings of the 3rd International Conference on Genetic Algorithms, pp. 51-60, 1989.
[26] A. I. de Freitas Vaz and E. M. da Graca Pinto Fernandes, "Optimization of nonlinear constrained particle swarm," Technological and Economic Development of Economy, vol. 12, no. 1, pp. 30-36, 2006.
[27] X. Hu, R. C. Eberhart, and Y. Shi, "Engineering optimization with particle swarm," in Proceedings of the IEEE Swarm Intelligence Symposium, pp. 53-57, 2003.
[28] C. A. C. Coello, "Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art," Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 11-12, pp. 1245-1287, 2002.
[29] J. Kennedy, "Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance," Neural Networks, vol. 18, pp. 205-217, 1997.



Figure 1: Particle structure. (The figure shows the feedback input, the externally applied input $p_i^j$, the global best component $p_g^j$, the random weights $c_1\,\mathrm{Rand1}(\cdot)$ and $c_2\,\mathrm{Rand2}(\cdot)$, the weight $w_i(t)$, and the update from $x_i^j(t)$ to $x_i^j(t+1)$.)

and $c_2$ are the positive constant parameters. Generally, the value of each component in $V_i$ should be clamped to the range $[-V_{\max}, V_{\max}]$ to control excessive roaming of particles outside the search space.

3. A Chaotic Hopfield Neural Network Model

From the introduction of basic PSO theory, every particle can be seen as the model of a single fish or a single bird. The position chosen by the particle can be regarded as a state of a neural network with a random synaptic connection. According to (5)-(6), the position components of particle $i$ can be thought of as the output of a neural network, as shown in Figure 1.

In Figure 1, $\mathrm{Rand1}(\cdot)$ and $\mathrm{Rand2}(\cdot)$ are two independent, uniformly distributed random variables within the range $[0, 1]$, which refer to $r_1^j$ and $r_2^j$, respectively. $p_i^j$ and $p_g^j$ are the components of $P_i$ and $P_g$, respectively: $p_g^j$ is the previous best value amongst all particles, and $p_i^j$, as an externally applied input, is the $j$th element of the best previous position $P_i$, coupled with the other components of $P_i$. The particles migrate toward a new position according to (5)-(6). This process is repeated until a defined stopping criterion is met (e.g., a maximum number of iterations or a sufficiently good fitness value).

As pointed out by Clerc and Kennedy [15], the powerful optimization ability of the PSO comes from the interaction amongst the particles. The analysis of the complex interaction amongst the particles in the swarm is beyond the scope of this paper, which focuses on the construction of a simple particle using a neural network perspective and on convergence issues. Artificial neural networks are composed of simple artificial neurons mimicking biological neurons. The HNN has the property that, as each neuron in an HNN updates, an energy function is monotonically reduced until the network stabilizes [23]. One can therefore map an optimization problem to an HNN such that the cost function of the problem corresponds to the energy function of the HNN; the result of the HNN thus suggests a low-cost solution to the optimization problem. The HNN might therefore be a good choice to model particle behavior.

In order to approach $P_g$ and $P_i$, the HNN model should include at least two neurons. For simplicity, the HNN model of each particle position component has two neurons, whose outputs are $x_i^j(t)$ and $x_{ip}^j(t)$. In order to transform the problem into variables such that the desired optimization corresponds to the minimization of the energy function, the objective function should be determined first. As $x_i^j(t)$ and $x_{ip}^j(t)$ should approach $p_g^j$ and $p_i^j$, respectively, $(x_i^j(t) - p_g^j)^2$ and $(x_{ip}^j(t) - p_i^j)^2$ can be chosen as two parts of the energy function. A third part, $(x_i^j(t) - x_{ip}^j(t))^2$, is added to accompany $(x_{ip}^j(t) - p_i^j)^2$ so that $x_i^j(t)$ also tends towards $p_i^j$. The following HNN Lyapunov energy function for each particle is therefore proposed:

$$J_i^j(t) = A\,(x_i^j(t) - p_g^j)^2 + B\,(x_{ip}^j(t) - p_i^j)^2 + C\,(x_i^j(t) - x_{ip}^j(t))^2, \quad (7)$$

where $A$, $B$, and $C$ are positive constants.

Here the neuron input-output function is chosen as a sigmoid function, given by (9) and (11). Equations (8) and (10) are the Euler approximation of (1), the continuous Hopfield neural network [14]. The dynamics of component $j$ of particle $i$ is described by

$$u_i^j(t+1) = k\,u_i^j(t) + \alpha\,\frac{\partial J_i^j(t)}{\partial x_i^j(t)}, \quad (8)$$

$$x_i^j(t+1) = g_i\!\left(u_i^j(t+1)\right) = \frac{1}{1 + e^{-\xi u_i^j(t+1)}}, \quad (9)$$

$$u_{ip}^j(t+1) = k\,u_{ip}^j(t) + \alpha\,\frac{\partial J_i^j(t)}{\partial x_{ip}^j(t)}, \quad (10)$$

$$x_{ip}^j(t+1) = g_i\!\left(u_{ip}^j(t+1)\right) = \frac{1}{1 + e^{-\xi u_{ip}^j(t+1)}}. \quad (11)$$

According to (5)-(6) and Figure 1, the PSO uses random weights to simulate birds flocking or fish searching for food. When birds flock or fish search for food, they exhibit chaos-like behavior, yet (8)–(11) do not generate chaos. Aihara et al. [9] proposed a kind of chaotic neuron, which includes relative refractoriness in the model, to simulate chaos in a biological brain. To use this result, the terms $z_i(t)(x_i^j(t) - I_0)$ and $z_i(t)(x_{ip}^j(t) - I_0)$ are subtracted from (8) and (10) to induce chaos. Equations (8) and (10) then become

$$u_i^j(t+1) = k\,u_i^j(t) + \alpha\,\frac{\partial J_i^j(t)}{\partial x_i^j(t)} - z_i(t)\,(x_i^j(t) - I_0), \quad (12)$$

$$u_{ip}^j(t+1) = k\,u_{ip}^j(t) + \alpha\,\frac{\partial J_i^j(t)}{\partial x_{ip}^j(t)} - z_i(t)\,(x_{ip}^j(t) - I_0). \quad (13)$$

In order to escape from chaos as time evolves, we set

$$z_i(t+1) = (1 - \beta)\,z_i(t). \quad (14)$$

In (8)–(14), $\xi$, $\alpha$, and $k$ are positive parameters; $z_i(t)$ is the self-feedback connection weight (the refractory strength); $\beta$ is the damping factor of the time-dependent $z_i(t)$ ($0 \le \beta \le 1$); and $I_0$ is a positive parameter. All the parameters are fixed except $z(t)$, which is varied.

The combination of (9) and (11)–(14) is called chaotic Hopfield neural network swarm optimization (CHNNSO). According to (8)–(14), the following procedure can be used to implement the proposed CHNNSO algorithm:

(1) Initialize the swarm: assign a random position in the problem hyperspace to each particle and calculate the fitness function, which is given by the optimization problem whose variables correspond to the elements of the particle position coordinates.

(2) Synchronously update the positions of all the particles using (9) and (11)–(14), updating both neuron states at every iteration.

(3) Evaluate the fitness function for each particle.

(4) For each individual particle, compare the particle's fitness value with its previous best fitness value. If the current value is better than the previous best value, then set the current value as the best fitness value and the current particle's position $x_i^j$ as $p_i^j$; if $p_i$ is updated, reset $z = z_0$.

(5) Identify the particle that has the best fitness value. While the iteration count is below a certain value, if the particle with the best fitness value changes, reset $z = z_0$ to keep the particles chaotic and prevent premature convergence.

(6) Repeat steps (2)–(5) until a stopping criterion is met (e.g., a maximum number of iterations or a sufficiently good fitness value).
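The procedure above can be sketched in Python. This is a minimal illustration rather than the authors' implementation: the search domain is assumed to be the unit hypercube so sigmoid outputs can be used directly (the mapping (15) can rescale a real interval), the $z = z_0$ resets of steps (4) and (5) are omitted for brevity, and all function and variable names are illustrative.

```python
import math
import random

def chnnso(f, dim, n_particles=10, iters=1000, seed=0,
           A=0.02, B=0.01, C=0.01, alpha=1.0, beta=0.001,
           k=1.1, xi=150.0, z0=0.3, I0=0.2):
    """Sketch of the CHNNSO loop, eqs. (9) and (11)-(14); minimises f on [0,1]^dim."""
    rng = random.Random(seed)

    def g(u):
        # Sigmoid I/O function of (9)/(11), argument clamped to avoid overflow.
        return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, xi * u))))

    # Two internal states (and outputs) per particle component, as in the text.
    u  = [[rng.uniform(-0.01, 0.01) for _ in range(dim)] for _ in range(n_particles)]
    up = [[rng.uniform(-0.01, 0.01) for _ in range(dim)] for _ in range(n_particles)]
    x  = [[g(v) for v in row] for row in u]
    xp = [[g(v) for v in row] for row in up]
    pbest = [row[:] for row in x]              # personal bests p_i
    pval  = [f(row) for row in x]
    gidx  = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[gidx][:], pval[gidx]   # global best p_g
    z = z0
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                # Gradients of the energy (7) w.r.t. x and x_p.
                dJ_dx  = 2*A*(x[i][j] - gbest[j]) + 2*C*(x[i][j] - xp[i][j])
                dJ_dxp = 2*B*(xp[i][j] - pbest[i][j]) - 2*C*(x[i][j] - xp[i][j])
                u[i][j]  = k*u[i][j]  + alpha*dJ_dx  - z*(x[i][j]  - I0)   # (12)
                up[i][j] = k*up[i][j] + alpha*dJ_dxp - z*(xp[i][j] - I0)   # (13)
                x[i][j], xp[i][j] = g(u[i][j]), g(up[i][j])                # (9), (11)
            v = f(x[i])
            if v < pval[i]:                    # step (4): update personal best
                pval[i], pbest[i] = v, x[i][:]
                if v < gval:                   # step (5): update global best
                    gval, gbest = v, x[i][:]
        z *= 1.0 - beta                        # (14): damp the chaotic term
    return gbest, gval
```

With (25)-style parameters, the loop is chaotic while $z(t)$ is large and settles as $z(t) = (1-\beta)^t z(0)$ decays (half-life about $\ln 2 / \beta \approx 693$ steps for $\beta = 0.001$).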

As can be seen from (9) and (11), the particle position component $x_i^j$ is located in the interval $[-1, 1]$. The optimization problem variable interval must therefore be mapped to $[-1, 1]$ and vice versa using

$$\bar{x}_j = -1 + \frac{2\,(x_j - a_j)}{b_j - a_j}, \qquad x_j = a_j + \frac{1}{2}\,(\bar{x}_j + 1)(b_j - a_j), \qquad j = 1, 2, \ldots, n. \quad (15)$$

Here $a_j$ and $b_j$ are the lower and upper boundaries of $x_j(t)$, respectively. In the following, only one particle is analyzed for simplicity.
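The interval mapping (15) is straightforward to implement; a short sketch (function names are illustrative):

```python
def to_unit(x, a, b):
    # Forward map of (15): [a_j, b_j] -> [-1, 1]
    return -1.0 + 2.0 * (x - a) / (b - a)

def from_unit(xbar, a, b):
    # Inverse map of (15): [-1, 1] -> [a_j, b_j]
    return a + 0.5 * (xbar + 1.0) * (b - a)
```

The two functions are exact inverses of each other, so a search performed on $[-1, 1]$ can be reported back in the problem's native units without loss.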

4. Dynamics of Chaotic Hopfield Network Swarm Optimization

In this section, the dynamics of the chaotic Hopfield network swarm optimization (CHNNSO) is analyzed. The first subsection discusses the convergence of the chaotic particle swarm. The second subsection discusses the dynamics of the simplest CHNNSO with different parameter values.

4.1. Convergence of the Particle Swarm

Theorem 1 (Wang and Smith [14]). If one has a network of neurons with arbitrarily increasing I/O functions, there exists a sufficient stability condition for a synchronous TCNN (transiently chaotic neural network), equation (12), namely,

$$k > 1, \qquad \frac{2k}{\beta_{\max}} > -T'_{\min}. \quad (16)$$

Here $1/\beta_{\max} = \min(1/g_i')$, where $g_i'$ denotes the derivative with respect to time $t$ of the neural I/O function of neuron $i$ (in this paper, the sigmoid function (9)), and $T'_{\min}$ is the minimum eigenvalue of the connection weight matrix of the dynamics of an $n$-neuron continuous Hopfield neural network.

Theorem 2. A sufficient stability condition for the CHNNSO model is $k > 1$.

Proof. When $t \to \infty$,

$$z(t+1) = 0. \quad (17)$$

It then follows that the equilibria of (12) and (13) can be evaluated from

$$\Delta u_i^j(t+1) = (k-1)\,u_i^j(t) + \alpha\,\frac{\partial J_i^j(t)}{\partial x_i^j(t)}, \qquad \Delta u_{ip}^j(t+1) = (k-1)\,u_{ip}^j(t) + \alpha\,\frac{\partial J_i^j(t)}{\partial x_{ip}^j(t)}. \quad (18)$$

According to (7), we get

$$\Delta u_i^j(t+1) = (k-1)\,u_i^j(t) + 2A\alpha\,(x_i^j(t) - p_g^j) + 2C\alpha\,(x_i^j(t) - x_{ip}^j(t)) = 0, \quad (19)$$


$$\Delta u_{ip}^j(t+1) = (k-1)\,u_{ip}^j(t) + 2B\alpha\,(x_{ip}^j(t) - p_i^j) - 2C\alpha\,(x_i^j(t) - x_{ip}^j(t)) = 0. \quad (20)$$

$$T'_{\min} = \min\left(\text{eigenvalue}\begin{pmatrix} 2A\alpha + 2C\alpha & -2C\alpha \\ -2C\alpha & 2B\alpha + 2C\alpha \end{pmatrix}\right) = \alpha\left(A + B + 2C - \sqrt{(A-B)^2 + 4C^2}\right). \quad (21)$$

In this paper,

$$\beta_{\max} = \frac{1}{\min(1/g')} = \frac{\xi}{4}. \quad (22)$$

It is clear that $T'_{\min} > 0$, so the stability condition (16) is satisfied whenever $\xi > 0$. The above analysis verifies Theorem 2.

Theorem 3. The particles converge to the sphere with center point $p_g^j$ and radius $R = \max\|x_{ie}^j - p_i^j\|$, where $x_{ie}^j$ is the final convergence equilibrium. (If the optimization problem lies in a two-dimensional plane, the particles finally lie on a circle.)

Proof. It is easy to show that the particle model given by (7) and (8)–(14) has only one equilibrium as $t \to \infty$, that is, $z(t) = 0$. Hence, as $t \to \infty$, $X_i$ belongs to the hypersphere whose origin is $P_g$ and whose radius is $R$. Solving (9), (11), (19), and (20) simultaneously, we get

simultaneously we get

(119896 minus 1) ln (1 minus (1119909119895119894) (119905))

120585+ 2119860120572 (119909

119895

119894(119905) minus 119875

119895

119892)

+ 2119862120572 (119909119895

119894(119905) minus 119909

119895

119894119901(119905)) = 0

(23)

(119896 minus 1) ln (1 minus (1119909119895119894119901(119905)))

120585+ 2119861120572 (119909

119895

119894119901(119905) minus 119875

119895

119894)

minus 2119862120572 (119909119895

119894(119905) minus 119909

119895

119894119901(119905)) = 0

(24)

With (23) and (24) satisfied, there must exist final convergence equilibria $x_{ie}^j$ and $x_{ipe}^j$. The best region the particle swarm can find is therefore centered at $p_g^j$, with radius $R = \max\|x_{ie}^j - p_i^j\|$.

The above analysis verifies Theorem 3.

4.2. Dynamics of the Simplest Chaotic Hopfield Neural Network Swarm. In this section, the dynamics of the simplest particle swarm model is analyzed. Equations (7) and (8)–(13) constitute the dynamic model of a single particle, with the subscript $i$ omitted. According to (7) and Theorem 3, the parameters $A$, $B$, and $C$ control the final convergence radius; by trial and error, they can be chosen in the range from 0.005 to 0.05. According to (16) and (22), $k > 1$ and $\xi > 0$; in the simulations, the results are better when $k$ is in the neighborhood of 1.1 and $\xi$ is in the neighborhood of 150. The parameters $\beta$ and $z(0)$ control the duration of the chaotic phase: if $\beta$ is too big and/or $z(0)$ is too small, the system quickly escapes from chaos and performance is poor. The parameter $I_0 = 0.2$ is standard in the literature on chaotic neural networks. The simulations showed that the model is not sensitive to the values of the parameters $\xi$ and $k$; for example, $100 < \xi < 200$ and $1 < k < 1.5$ are feasible.

The values of the parameters in (7)–(14) are then set to

$$A = 0.02, \quad B = C = 0.01, \quad \xi = 150, \quad \alpha = 1, \quad \beta = 0.001, \quad z(0) = 0.3, \quad k = 1.1, \quad I_0 = 0.2, \quad p_i = 0.5, \quad p_g = 0.2. \quad (25)$$

Figure 2 shows the time evolution of $x_i(t)$, $z(t)$, and the Lyapunov exponent $\lambda$ of $x_i(t)$. The Lyapunov exponent $\lambda$ characterizes the rate of separation of infinitesimally close trajectories; a positive Lyapunov exponent is usually taken as an indication that the system is chaotic [1]. Here $\lambda$ is defined as

$$\lambda = \lim_{m \to +\infty} \frac{1}{m} \sum_{t=0}^{m-1} \ln\left|\frac{dx_i(t+1)}{dx_i(t)}\right|. \quad (26)$$

At about 200 steps, $z(t)$ decays to a small value and $x_i(t)$ departs from chaos, which corresponds with the change of $\lambda$ from positive to negative.
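Definition (26) can be estimated numerically by time-averaging $\ln|dx(t+1)/dx(t)|$ along an orbit. As a self-contained illustration (using the logistic map $x \mapsto 4x(1-x)$, whose Lyapunov exponent is known to be $\ln 2$; this example is an assumption outside the paper, chosen only because its exponent is known in closed form):

```python
import math

def lyapunov(fmap, dfmap, x0, m=200000, burn=1000):
    # Time-average of ln|f'(x_t)| along an orbit, per eq. (26).
    x = x0
    for _ in range(burn):          # discard the transient
        x = fmap(x)
    s = 0.0
    for _ in range(m):
        s += math.log(max(abs(dfmap(x)), 1e-300))  # guard against log(0)
        x = fmap(x)
    return s / m

# Logistic map x -> 4x(1-x); its exponent is ln 2 ~ 0.693.
lam = lyapunov(lambda x: 4.0 * x * (1.0 - x), lambda x: 4.0 - 8.0 * x, 0.2)
```

The same estimator, applied to the $x_i(t)$ orbit of the particle model, is what produces the $\lambda$ trace in Figure 2.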

According to Figure 2, the convergence process of a single particle position follows a nonlinear bifurcation, making the particle converge to a stable fixed point from a strange attractor. In the following section it is shown that the fixed point is determined by the best previous position $P_g$ among all particles and the best position $P_i$ of the individual particle.

Remark 4. The proposed CHNNSO model is a deterministic chaos-Hopfield neural network swarm, which differs from existing PSOs with stochastic parameters. Its search orbits exhibit an evolutionary process of inverse period bifurcation from chaos to periodic orbits and then to a sink. As chaos is ergodic and each particle is always in a chaotic state at the beginning (e.g., in Figure 2), a particle can escape when trapped in a local extremum. The proposed CHNNSO model will therefore in general not easily be trapped in a local optimum and will continue to search for a global optimum.

5. Numerical Simulation

To test the performance of the proposed algorithm, two famous benchmark optimization problems and an engineering optimization problem with linear and nonlinear constraints are used. The solutions of the two benchmark problems can be represented in the plane, and therefore the convergence of the CHNNSO can be clearly observed. The results for the third optimization problem, compared with other algorithms, are displayed in Table 2. We compare the CHNNSO with the original PSO [20].


Figure 2: Time evolutions of (a) $x_i(t)$, (b) $z(t)$, and (c) the Lyapunov exponent $\lambda$.

Table 1: The parameters of the Hartmann function (when $n = 3$ and $q = 4$).

| $i$ | $a_{i1}$ | $a_{i2}$ | $a_{i3}$ | $c_i$ | $p_{i1}$ | $p_{i2}$ | $p_{i3}$ |
|---|---|---|---|---|---|---|---|
| 1 | 3   | 10 | 30 | 1   | 0.3689  | 0.1170 | 0.2673 |
| 2 | 0.1 | 10 | 35 | 1.2 | 0.4699  | 0.4387 | 0.7470 |
| 3 | 3   | 10 | 30 | 3   | 0.1091  | 0.8732 | 0.5547 |
| 4 | 0.1 | 10 | 35 | 3.2 | 0.03815 | 0.5473 | 0.8828 |

5.1. The Rastrigin Function. To demonstrate the efficiency of the proposed technique, the famous Rastrigin function is chosen as a test problem. This function with two variables is given by

$$f(X) = x_1^2 + x_2^2 - \cos(18 x_1) - \cos(18 x_2), \quad -1 < x_i < 1, \; i = 1, 2. \quad (27)$$

The global minimum is $-2$ and the minimum point is $(0, 0)$. There are about 50 local minima arranged in a lattice configuration.
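The test function (27) is easy to transcribe; a short sketch:

```python
import math

def rastrigin2(x1, x2):
    # Eq. (27): f(X) = x1^2 + x2^2 - cos(18 x1) - cos(18 x2), -1 < x_i < 1.
    # Global minimum f(0, 0) = -2; many local minima on a lattice.
    return x1**2 + x2**2 - math.cos(18.0 * x1) - math.cos(18.0 * x2)
```

Any candidate solution produced by a swarm run can be scored with this function directly.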

The proposed technique is applied with a population size of 20, and the maximum number of iterations is 20000. The chaotic particle swarm parameters are chosen as $A = 0.02$, $B = C = 0.01$, $\beta = 0.001$, $z(0) = 0.3$, $\alpha = 1$, $\xi = 150$, $k = 1.1$, and $I_0 = 0.2$.

The position of every particle is initialized with a random value. The time evolution of the cost of the Rastrigin function is shown in Figure 3. The global minimum of $-2$ is obtained by the best particle with $(x_1, x_2) = (0, 0)$.

From Figure 3 it can be seen that the proposed method gives good optimization results. Since there are two variables in the Rastrigin function, it is easy to show the final convergent particle states in the plane.

In Figure 4, the "∗"s are the best experiences of each particle and the "+"s are the final states of the particles. The global minimum $(0, 0)$ is included in both the "∗"s and the "+"s; most "∗"s and "+"s overlap at the global minimum $(0, 0)$. According to Theorem 3, the particles will finally converge to

Figure 3: Time evolution of the Rastrigin function cost $f(x)$.

a circle. For this Rastrigin problem, the particles' final states converge to the circle shown in Figure 4, and hence the global convergence of the particles is guaranteed.

Figure 5 displays the results when the original PSO [20] was used to optimize the Rastrigin function. In this numerical simulation, the particle swarm population size is also 20, the parameters $c_1$ and $c_2$ are set to 2, and $\omega$ is set to 1; $V_{\max}$ is set equal to the dynamic range of each dimension. The "∗"s in Figure 5 are the final states of all the particles, corresponding to the "+"s in Figure 4. It is easy to see that the final states of the particles are ruleless, even though the global minimum of $-2$ is obtained by the best experience of the particle swarm, that is, $(x_1, x_2) = (0, 0)$, as shown in Figure 5.

By comparing the results obtained by the proposed CHNNSO in Figure 4 with the results of the original PSO in Figure 5, it can be seen that the final states of the particles of the proposed CHNNSO are attracted to the best experience of all the particles and that convergence is superior. The final states of CHNNSO particles are guaranteed to converge, which is not the case for original PSO implementations.


Figure 4: The final states and the best experiences of the particles achieved by the proposed CHNNSO for the Rastrigin function.

Figure 5: The final states of the particles achieved by the original PSO for the Rastrigin function.

In the constriction-factor variant of the PSO, the parameters $c_1$ and $c_2$ are both set to 1.494 and $\omega$ to 0.729, with $V_{\max}$ set equal to the dynamic range of each dimension. The constriction factors $c_1$, $c_2$, $\omega$ are applied to improve the convergence of the particle over time by damping the oscillations once the particle focuses on the best point in an optimal region. The main disadvantage of this method is that the particles may follow wider cycles and may not converge when the individual best performance is far from the neighborhood's best performance (two different regions) [24].

5.2. The Schaffer's F6 Function. To further investigate the performance of the CHNNSO, the following function is considered:

$$f(X) = 0.5 + \frac{\sin^2\!\sqrt{x_1^2 + x_2^2} - 0.5}{\left(1 + 0.001\,(x_1^2 + x_2^2)\right)^2}. \quad (28)$$
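Function (28) can be transcribed as follows (a sketch; the global minimum value 0 is attained at the origin):

```python
import math

def schaffer_f6(x1, x2):
    # Eq. (28): Schaffer's F6; global minimum f(0, 0) = 0,
    # surrounded by rings of near-optimal local minima.
    s = x1 * x1 + x2 * x2
    return 0.5 + (math.sin(math.sqrt(s))**2 - 0.5) / (1.0 + 0.001 * s)**2
```

The concentric rings of local minima that make this function hard for optimizers come from the oscillation of the $\sin^2$ term in the radial coordinate.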

Figure 6: The final states and the best experiences of the particles achieved by the proposed CHNNSO for the Schaffer's F6 function.

The Schaffer's F6 function [25], given by (28), has a single global optimum at $(0, 0)$ with $f_{\min}(x, y) = 0$, and a large number of local optima. The global optimum is difficult to find because the value at the best local optimum differs by only about $10^{-3}$ from the global minimum, and the local optima crowd around the global optimum. The proposed technique is applied with a population size of 30 and 100000 iterations, and the parameters of the CHNNSO are chosen as

$$A = B = C = 0.01, \quad \beta = 0.0003, \quad z(0) = 0.3, \quad \xi = 155, \quad k = 1.1, \quad I_0 = 0.2. \quad (29)$$

The position of each particle is initialized with a random value. In Figure 6, the "∗"s are the best experiences of each particle. The global minimum of $2.674 \times 10^{-4}$ is obtained by the best particle with $(x_1, x_2) = (-0.00999, 0.01294)$, which is included in the "∗"s. The "+"s are the final states of the particles. According to Theorem 3, the particles' final states converge in a circle, as shown in Figure 6, which confirms global convergence. From Figure 6 it is clearly seen that the particles' final states are attracted to the neighborhoods of the best experiences of all the particles and that the convergence is good.

Figure 7 shows the final particle states when the original PSO [20] was used to optimize the Schaffer's F6 function. In this numerical simulation of the original PSO, the particle swarm population size is also 20, the parameters $c_1$ and $c_2$ are both set to 2, and $\omega$ is set to 1; $V_{\max}$ is set equal to the dynamic range of each dimension. The "∗"s in Figure 7 are the final states of all the particles, corresponding to the "+"s in Figure 6. It is easy to see that the final states of the particles in Figure 7 are ruleless. The global minimum of $6.3299 \times 10^{-4}$ is obtained by the best particle with $(x_1, x_2) = (0.01889 \times 10^{-3}, -0.4665 \times 10^{-3})$. The best experience from the original PSO is not as good as the best of the proposed CHNNSO.

Comparing the results obtained from the proposed CHNNSO in Figure 6 and the original PSO in Figure 7, it is clearly seen that the particles' final states of the proposed


Figure 7: The final states of the particles achieved by the original PSO for the Schaffer's F6 function.

CHNNSO are finally attracted to the best experience of all the particles, and the convergence is better than that of the original PSO. The CHNNSO guarantees the convergence of the particle swarm, whereas the final states of the original PSO are ruleless.

5.3. The Hartmann Function. The Hartmann function with $n = 3$, $q = 4$ is given by

$$f(x) = -\sum_{i=1}^{q} c_i \exp\left(-\sum_{j=1}^{n} a_{ij}\,(x_j - p_{ij})^2\right), \quad (30)$$

with $x_j$ belonging to

$$S = \{x \in R^n \mid 0 \le x_j \le 1,\ 1 \le j \le n\}. \quad (31)$$

Table 1 shows the parameter values for the Hartmann function when $n = 3$, $q = 4$.

When $n = 3$, $X_{\min} = (0.114, 0.556, 0.882)$ and $f(x_{\min}) = -3.86$.
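Using the Table 1 parameters, (30) can be evaluated directly; a short sketch (the array names are illustrative):

```python
import math

# Parameters from Table 1 (n = 3, q = 4).
A3 = [[3.0, 10.0, 30.0], [0.1, 10.0, 35.0], [3.0, 10.0, 30.0], [0.1, 10.0, 35.0]]
C3 = [1.0, 1.2, 3.0, 3.2]
P3 = [[0.3689, 0.1170, 0.2673], [0.4699, 0.4387, 0.7470],
      [0.1091, 0.8732, 0.5547], [0.03815, 0.5473, 0.8828]]

def hartmann3(x):
    # Eq. (30): f(x) = -sum_i c_i exp(-sum_j a_ij (x_j - p_ij)^2)
    return -sum(ci * math.exp(-sum(aij * (xj - pij)**2
                                   for aij, xj, pij in zip(ai, x, pi)))
                for ai, ci, pi in zip(A3, C3, P3))
```

Evaluating at the minimizer reported in the text gives a value close to the stated $f(x_{\min}) \approx -3.86$.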

The maximum number of iterations for the Hartmann function is 15000. In Figure 8 only a subset of the dimensions is pictured: the "+"s are the final states of the particles, and the "∗"s denote the best experiences of all particles. From Figure 8 it can easily be seen that the final states of the particles converge to a circle with center point $(0.0831, 0.5567, 0.8522)$ and radius $0.1942$. The final particle states confirm Theorem 3, and final convergence is guaranteed.

5.4. Design of a Pressure Vessel. The pressure vessel problem described in [26, 27] is an example with linear and nonlinear constraints that has been solved by a variety of techniques. The objective of the problem is to minimize the total cost of the material needed for forming and welding a cylindrical vessel. There are four design variables: $x_1$ ($T_s$, thickness of the shell), $x_2$ ($T_h$, thickness of the head), $x_3$ ($R$,

Figure 8: The final states of the particles achieved by the proposed CHNNSO for the Hartmann function.

inner radius), and $x_4$ ($L$, length of the cylindrical section of the vessel). $T_s$ and $T_h$ are integer multiples of 0.0625 inch, the available thicknesses of rolled steel plates, while $R$ and $L$ are continuous. The problem can be specified as follows:

Minimize

$$f(X) = 0.6224\,x_1 x_3 x_4 + 1.7781\,x_2 x_3^2 + 3.1661\,x_1^2 x_4 + 19.84\,x_1^2 x_3$$

subject to

$$g_1(X) = -x_1 + 0.0193\,x_3 \le 0,$$
$$g_2(X) = -x_2 + 0.00954\,x_3 \le 0,$$
$$g_3(X) = -\pi x_3^2 x_4 - \tfrac{4}{3}\pi x_3^3 + 1296000 \le 0,$$
$$g_4(X) = x_4 - 240 \le 0. \quad (32)$$

The following ranges of the variables were used [27]:

$$0 < x_1 \le 99, \quad 0 < x_2 \le 99, \quad 10 \le x_3 \le 200, \quad 10 \le x_4 \le 200. \quad (33)$$

de Freitas Vaz and da Graca Pinto Fernandes [26] proposed an algorithm to deal with constrained optimization problems. Here this algorithm [26] is combined with the CHNNSO to search for the global optimum. The proposed technique is applied with a population size of 20, and the maximum number of iterations is 20000. The values of the parameters in (7)–(14) are set to

$$A = 0.02, \quad B = C = 0.01, \quad \xi = 150, \quad \alpha = 1, \quad \beta = 0.001, \quad z(0) = 0.3, \quad k = 1.1, \quad I_0 = 0.2, \quad p_i = 0.5, \quad p_g = 0.5. \quad (34)$$

From Table 2, the best solution obtained by the CHNNSO matches the best solution previously reported by Hu et al. [27] and improves on that of Coello [28].


Table 2: Comparison of the results for the pressure vessel design problem.

| Design variables | This paper | Hu et al. [27] | Coello [28] |
|---|---|---|---|
| $x_1$ ($T_s$) | 0.8125 | 0.8125 | 0.8125 |
| $x_2$ ($T_h$) | 0.4375 | 0.4375 | 0.4375 |
| $x_3$ ($R$) | 42.09845 | 42.09845 | 40.3239 |
| $x_4$ ($L$) | 176.6366 | 176.6366 | 200.0 |
| $g_1(X)$ | 0 | 0 | $-0.03424$ |
| $g_2(X)$ | $-0.03588$ | $-0.03588$ | $-0.052847$ |
| $g_3(X)$ | $-1.1814 \times 10^{-11}$ | $-5.8208 \times 10^{-11}$ | $-27.105845$ |
| $g_4(X)$ | $-63.3634$ | $-63.3634$ | $-40.0$ |
| $f(X)$ | 6059.1313 | 6059.1313 | 6288.7445 |
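The design in Table 2 can be checked against (32) directly; a sketch that evaluates the objective and the four constraints at the printed best solution (function names are illustrative):

```python
import math

def vessel_cost(x1, x2, x3, x4):
    # Objective of (32).
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def vessel_constraints(x1, x2, x3, x4):
    # g_1..g_4 of (32); each must be <= 0 for a feasible design.
    return (-x1 + 0.0193 * x3,
            -x2 + 0.00954 * x3,
            -math.pi * x3**2 * x4 - (4.0 / 3.0) * math.pi * x3**3 + 1296000.0,
            x4 - 240.0)
```

Evaluating the design from Table 2 confirms feasibility up to the rounding of the printed digits: $g_1$ and $g_3$ are essentially zero (active constraints), while $g_2$ and $g_4$ are strictly negative.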

6. Conclusion

This paper proposed a chaotic Hopfield neural network swarm optimization algorithm. It incorporates the particle swarm searching structure, with its global optimization capability, into Hopfield neural networks, which guarantee convergence. In addition, by adding chaos-generating terms to the Hopfield neural networks, the ergodic searching capability of the proposed algorithm is greatly improved. The decay factor introduced in the chaos terms ensures that the search converges to a global optimum after the globally chaotic optimization phase. The experimental results on three classic benchmark functions showed that the proposed algorithm can guarantee the convergence of the particle swarm search and can escape from local extrema. The proposed algorithm therefore improves the practicality of particle swarm optimization. As this is a general particle model, techniques such as the local best version algorithm proposed in [29] can be used together with the new model; this will be explored in future work.

Acknowledgments

This work was supported by the China/South Africa Research Cooperation Programme (nos. 78673 and CS06-L02), the South African National Research Foundation Incentive Grant (no. 81705), the SDUST Research Fund (no. 2010KYTD101), and the Key Scientific Support Program of Qingdao City (no. 11-2-3-51-nsh).

References

[1] J. C. Sprott, Chaos and Time-Series Analysis, Oxford University Press, New York, NY, USA, 2004.
[2] K. Aihara and G. Matsumoto, Chaos in Biological Systems, Plenum Press, New York, NY, USA, 1987.
[3] C. A. Skarda and W. J. Freeman, "How brains make chaos in order to make sense of the world," Behavioral and Brain Sciences, vol. 10, pp. 161–165, 1987.
[4] L. Wang, D. Z. Zheng, and Q. S. Lin, "Survey on chaotic optimization methods," Computation Technology Automation, vol. 20, pp. 1–5, 2001.
[5] B. Li and W. S. Jiang, "Optimizing complex functions by chaos search," Cybernetics and Systems, vol. 29, no. 4, pp. 409–419, 1998.
[6] J. J. Hopfield and D. W. Tank, "'Neural' computation of decisions in optimization problems," Biological Cybernetics, vol. 52, no. 3, pp. 141–152, 1985.
[7] Z. Wang, Y. Liu, K. Fraser, and X. Liu, "Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays," Physics Letters A, vol. 354, no. 4, pp. 288–297, 2006.
[8] T. Tanaka and E. Hiura, "Computational abilities of a chaotic neural network," Physics Letters A, vol. 315, no. 3-4, pp. 225–230, 2003.
[9] K. Aihara, T. Takabe, and M. Toyoda, "Chaotic neural networks," Physics Letters A, vol. 144, no. 6-7, pp. 333–340, 1990.
[10] S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, pp. 671–680, 1983.
[11] L. Chen and K. Aihara, "Chaotic simulated annealing by a neural network model with transient chaos," Neural Networks, vol. 8, no. 6, pp. 915–930, 1995.
[12] L. Chen and K. Aihara, "Chaos and asymptotical stability in discrete-time neural networks," Physica D, vol. 104, no. 3-4, pp. 286–325, 1997.
[13] L. Wang, "On competitive learning," IEEE Transactions on Neural Networks, vol. 8, no. 5, pp. 1214–1217, 1997.
[14] L. Wang and K. Smith, "On chaotic simulated annealing," IEEE Transactions on Neural Networks, vol. 9, no. 4, pp. 716–718, 1998.
[15] M. Clerc and J. Kennedy, "The particle swarm-explosion, stability, and convergence in a multidimensional complex space," IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
[16] J. Ke, J. X. Qian, and Y. Z. Qiao, "A modified particle swarm optimization algorithm," Journal of Circuits and Systems, vol. 10, pp. 87–91, 2003.
[17] B. Liu, L. Wang, Y. H. Jin, F. Tang, and D. X. Huang, "Improved particle swarm optimization combined with chaos," Chaos, Solitons and Fractals, vol. 25, no. 5, pp. 1261–1271, 2005.
[18] T. Xiang, X. Liao, and K. W. Wong, "An improved particle swarm optimization algorithm combined with piecewise linear chaotic map," Applied Mathematics and Computation, vol. 190, no. 2, pp. 1637–1645, 2007.
[19] C. Fan and G. Jiang, "A simple particle swarm optimization combined with chaotic search," in Proceedings of the 7th World Congress on Intelligent Control and Automation (WCICA '08), pp. 593–598, Chongqing, China, June 2008.
[20] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, December 1995.
[21] S. H. Chen, A. J. Jakeman, and J. P. Norton, "Artificial intelligence techniques: an introduction to their use for modelling environmental systems," Mathematics and Computers in Simulation, vol. 78, no. 2-3, pp. 379–400, 2008.
[22] J. J. Hopfield, "Hopfield network," Scholarpedia, vol. 2, article 1977, 2007.
[23] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 2554–2558, 1984.
[24] Y. del Valle, G. K. Venayagamoorthy, S. Mohagheghi, J. C. Hernandez, and R. G. Harley, "Particle swarm optimization: basic concepts, variants and applications in power systems," IEEE Transactions on Evolutionary Computation, vol. 12, no. 2, pp. 171–195, 2008.
[25] J. D. Schaffer, R. A. Caruana, L. J. Eshelman, and R. Das, "A study of control parameters affecting online performance of genetic algorithms for function optimization," in Proceedings of the 3rd International Conference on Genetic Algorithms, pp. 51–60, 1989.
[26] A. I. de Freitas Vaz and E. M. da Graca Pinto Fernandes, "Optimization of nonlinear constrained particle swarm," Technological and Economic Development of Economy, vol. 12, no. 1, pp. 30–36, 2006.
[27] X. Hu, R. C. Eberhart, and Y. Shi, "Engineering optimization with particle swarm," in Proceedings of the IEEE Swarm Intelligence Symposium, pp. 53–57, 2003.
[28] C. A. C. Coello, "Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art," Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 11-12, pp. 1245–1287, 2002.
[29] J. Kennedy, "Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance," Neural Networks, vol. 18, pp. 205–217, 1997.


To use this result, z_i(t)(x_i^j(t) − I_0) and z_i(t)(x_ip^j(t) − I_0) are added to (8) and (10) to cause chaos. Equations (8) and (10) then become

u_i^j(t + 1) = k u_i^j(t) + α ∂J_i^j(t)/∂x_i^j(t) − z_i(t)(x_i^j(t) − I_0),   (12)

u_ip^j(t + 1) = k u_ip^j(t) + α ∂J_i^j(t)/∂x_ip^j(t) − z_i(t)(x_ip^j(t) − I_0).   (13)

In order to escape from chaos as time evolves, we set

z_i(t + 1) = (1 − β) z_i(t).   (14)

In (8)–(14), ξ, α, and k are positive parameters; z_i(t) is the self-feedback connection weight (the refractory strength); β is the damping factor of the time-dependent z_i(t) (0 ≤ β ≤ 1); and I_0 is a positive parameter. All the parameters are fixed except z(t), which is varied.

The combination of (9) and (11)–(14) is the proposed chaotic Hopfield neural network swarm optimization (CHNNSO). According to (8)–(14), the following procedure can be used to implement the proposed CHNNSO algorithm.

(1) Initialize the swarm: assign a random position in the problem hyperspace to each particle and calculate the fitness function, whose variables correspond to the elements of the particle position coordinates.

(2) Synchronously update the positions of all the particles using (9) and (11)–(14), updating the two states (x_i^j and x_ip^j) at every iteration.

(3) Evaluate the fitness function for each particle.

(4) For each individual particle, compare the particle's fitness value with its previous best fitness value. If the current value is better, set it as the personal best fitness and set the current particle's position x_i^j as p_i^j; whenever p_i^j is updated, reset z = z_0.

(5) Identify the particle with the best fitness value. While the iteration count is below a certain value, if the particle with the best fitness value changes, reset z = z_0 to keep the particles chaotic and prevent premature convergence.

(6) Repeat steps (2)–(5) until a stopping criterion is met (e.g., a maximum number of iterations or a sufficiently good fitness value).

As can be seen from (9) and (11), each particle position component x_i^j lies in the interval [−1, 1]. The optimization problem variables must therefore be mapped to [−1, 1] and back using

x̄_j = −1 + 2(x_j − a_j)/(b_j − a_j), j = 1, 2, …, n,

x_j = a_j + (1/2)(x̄_j + 1)(b_j − a_j), j = 1, 2, …, n.   (15)

Here a_j and b_j are the lower and upper boundaries of x_j(t), respectively. For simplicity, only one particle is analyzed below.
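As a concrete illustration, the particle update (12)–(14) and the range mapping (15) can be sketched in Python. The sigmoid activation x = 1/(1 + e^(−ξu)) and the quadratic energy J = A(x − p_g)² + B(x_p − p_i)² + C(x − x_p)² are assumptions reconstructed from the gradient terms appearing in (19)–(20); the parameter values follow (25), so this is a sketch rather than the authors' exact implementation.

```python
import numpy as np

# Sketch of one CHNNSO particle (Eqs. (12)-(15)). The sigmoid and the
# quadratic energy J are assumptions reconstructed from Eqs. (19)-(20).
A, B, C = 0.02, 0.01, 0.01                     # energy weights, Eq. (25)
alpha, beta, k, xi, I0 = 1.0, 0.001, 1.1, 150.0, 0.2

def sigmoid(u):
    """Neuron I/O function, cf. Eqs. (9)/(11)."""
    return 1.0 / (1.0 + np.exp(np.clip(-xi * u, -500.0, 500.0)))

def step(u, up, z, pg, pi):
    """One synchronous update of the internal states (u, up) and decay of z."""
    x, xp = sigmoid(u), sigmoid(up)
    dJ_dx  = 2*A*(x - pg) + 2*C*(x - xp)       # dJ/dx,  cf. Eq. (19)
    dJ_dxp = 2*B*(xp - pi) - 2*C*(x - xp)      # dJ/dxp, cf. Eq. (20)
    u_new  = k*u  + alpha*dJ_dx  - z*(x  - I0)     # Eq. (12)
    up_new = k*up + alpha*dJ_dxp - z*(xp - I0)     # Eq. (13)
    return u_new, up_new, (1.0 - beta)*z           # Eq. (14)

def to_unit(x, a, b):
    """Eq. (15): map problem range [a, b] to [-1, 1]."""
    return -1.0 + 2.0*(x - a)/(b - a)

def from_unit(xbar, a, b):
    """Eq. (15): map [-1, 1] back to the problem range [a, b]."""
    return a + 0.5*(xbar + 1.0)*(b - a)
```

Here pg and pi stand for the swarm and personal best positions; a full CHNNSO run would wrap `step` inside the loop of steps (1)–(6) above.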

4. Dynamics of Chaotic Hopfield Network Swarm Optimization

In this section the dynamics of the chaotic Hopfield network swarm optimization (CHNNSO) is analyzed. The first subsection discusses the convergence of the chaotic particle swarm. The second subsection discusses the dynamics of the simplest CHNNSO with different parameter values.

4.1. Convergence of the Particle Swarm

Theorem 1 (Wang and Smith [14]). If one has a network of neurons with arbitrarily increasing I/O functions, there exists a sufficient stability condition for a synchronous TCNN (transiently chaotic neural network) of the form (12), namely

k > 1,  2k/β_max > −T′_min.   (16)

Here 1/β_max = min(1/g′_i), where g′_i denotes the derivative with respect to time t of the neural I/O function of neuron i (in this paper the sigmoid function (9)), and T′_min is the minimum eigenvalue of the connection weight matrix of the dynamics of an n-neuron continuous Hopfield neural network.

Theorem 2. A sufficient stability condition for the CHNNSO model is k > 1.

Proof. When t → ∞,

z(t + 1) = 0.   (17)

It then follows that the equilibria of (12) and (13) can be evaluated from

Δu_i^j(t + 1) = (k − 1) u_i^j(t) + α ∂J_i^j(t)/∂x_i^j(t),

Δu_ip^j(t + 1) = (k − 1) u_ip^j(t) + α ∂J_i^j(t)/∂x_ip^j(t).   (18)

According to (7) we get

Δu_i^j(t + 1) = (k − 1) u_i^j(t) + 2Aα (x_i^j(t) − p_g^j) + 2Cα (x_i^j(t) − x_ip^j(t)) = 0,   (19)


Δu_ip^j(t + 1) = (k − 1) u_ip^j(t) + 2Bα (x_ip^j(t) − p_i^j) − 2Cα (x_i^j(t) − x_ip^j(t)) = 0.   (20)

T′_min = min eigenvalue( [ 2Aα + 2Cα   −2Cα
                           −2Cα        2Bα + 2Cα ] )
       = (A + B + 2C)α − α √((A − B)² + 4C²)
       = α (A + B + 2C − √((A − B)² + 4C²)).   (21)

In this paper,

β_max = 1 / min(1/g′) = ξ/4.   (22)

It is clear that T′_min > 0, so the stability condition (16) is satisfied whenever ξ > 0 and k > 1. The above analysis verifies Theorem 2.
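The closed form in (21) is easy to check numerically. The sketch below, using A = 0.02, B = C = 0.01, and α = 1 from (25), confirms that T′_min is the smaller eigenvalue of the 2 × 2 weight matrix and is positive:

```python
import numpy as np

# Check Eq. (21): T'_min is the minimum eigenvalue of the symmetric
# weight matrix and is positive for the parameter values of Eq. (25).
A, B, C, alpha = 0.02, 0.01, 0.01, 1.0
W = np.array([[2*A*alpha + 2*C*alpha, -2*C*alpha],
              [-2*C*alpha,            2*B*alpha + 2*C*alpha]])
T_min_numeric = np.linalg.eigvalsh(W).min()
T_min_closed  = alpha*(A + B + 2*C - np.sqrt((A - B)**2 + 4*C**2))
```

Both expressions give T′_min ≈ 0.0276 > 0, so the stability condition holds for these parameters.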

Theorem 3. The particles converge to the hypersphere with center point p_g^j and radius R = max ‖x_ie^j − p_i^j‖, where x_ie^j is the final convergence equilibrium (if the optimization problem is two-dimensional, the particles finally lie on a circle).

It is easy to show that the particle model given by (7) and (8)–(14) has only one equilibrium as t → ∞, that is, z(t) = 0. Hence, as t → ∞, X_i belongs to the hypersphere whose origin is P_g and whose radius is R. Solving (9), (11), (19), and (20) simultaneously, we get

((k − 1)/ξ) ln(x_i^j(t)/(1 − x_i^j(t))) + 2Aα (x_i^j(t) − P_g^j) + 2Cα (x_i^j(t) − x_ip^j(t)) = 0,   (23)

((k − 1)/ξ) ln(x_ip^j(t)/(1 − x_ip^j(t))) + 2Bα (x_ip^j(t) − P_i^j) − 2Cα (x_i^j(t) − x_ip^j(t)) = 0.   (24)

With (23) and (24) satisfied, there must exist final convergence equilibria x_ie^j and x_ipe^j. The best position the particle swarm can find is therefore p_g^j, and the radius is R = max ‖x_ie^j − p_i^j‖.

The above analysis verifies Theorem 3.

4.2. Dynamics of the Simplest Chaotic Hopfield Neural Network Swarm. In this section the dynamics of the simplest particle swarm model is analyzed. Equations (7) and (8)–(13) give the dynamic model of a single particle, with the subscript i omitted. According to (7) and Theorem 3, the parameters A, B, and C control the final convergence radius. By trial and error, A, B, and C can be chosen in the range from 0.005 to 0.05. According to (16) and (22), k > 1 and ξ > 0. In the simulations the results are better when k is in the neighborhood of 1.1 and ξ is in the neighborhood of 150. The parameters β and z(0) control the duration of the chaotic phase: if β is too large and/or z(0) is too small, the system escapes from chaos too quickly and performance is poor. The parameter I_0 = 0.2 is standard in the literature on chaotic neural networks. The simulations showed that the model is not sensitive to the values of ξ and k; for example, 100 < ξ < 200 and 1 < k < 1.5 are feasible.

The values of the parameters in (7)–(14) are then set to

A = 0.02, B = C = 0.01, ξ = 150, α = 1, β = 0.001, z(0) = 0.3, k = 1.1, I_0 = 0.2, p_i = 0.5, p_g = 0.2.   (25)

Figure 2 shows the time evolution of x_i(t), z(t), and the Lyapunov exponent λ of x_i(t). The Lyapunov exponent λ characterizes the rate of separation of infinitesimally close trajectories, and a positive Lyapunov exponent is usually taken as an indication that the system is chaotic [1]. Here λ is defined as

λ = lim_{m→∞} (1/m) Σ_{t=0}^{m−1} ln |dx_i(t + 1)/dx_i(t)|.   (26)

At about 200 steps, z(t) decays to a small value and x_i(t) departs from chaos, which corresponds with the change of λ from positive to negative.
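The time average in (26) can be estimated numerically from a simulated trajectory. As a sanity check of the estimator itself (on a textbook example rather than on the CHNNSO dynamics), the sketch below applies (26) to the logistic map x(t+1) = 4x(t)(1 − x(t)), whose Lyapunov exponent is known to be ln 2 ≈ 0.693:

```python
import numpy as np

# Estimate Eq. (26) for the logistic map x(t+1) = 4x(1-x), whose exact
# Lyapunov exponent is ln 2. The derivative of the map is 4(1 - 2x).
def lyapunov_logistic(x0=0.123456, m=10000):
    x, acc = x0, 0.0
    for _ in range(m):
        acc += np.log(abs(4.0*(1.0 - 2.0*x)))   # ln|dx(t+1)/dx(t)|
        x = 4.0*x*(1.0 - x)
    return acc / m
```

Applying the same finite-sum estimate to the trajectory of x_i(t) yields the curve of λ shown in Figure 2(c).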

According to Figure 2, the convergence of a single particle position follows a nonlinear bifurcation process, with the particle converging from a strange attractor to a stable fixed point. In the following section it is shown that the fixed point is determined by the best previous position P_g among all particles and the best position P_i of the individual particle.

Remark 4. The proposed CHNNSO model is a deterministic chaos-Hopfield neural network swarm, which differs from existing PSOs with stochastic parameters. Its search orbits exhibit an evolutionary process of inverse period-doubling bifurcation, from chaos to periodic orbits and then to a sink. As chaos is ergodic and each particle is always in a chaotic state at the beginning (see, e.g., Figure 2), a particle can escape when trapped in a local extremum. The proposed CHNNSO model will therefore in general not be easily trapped in a local optimum and will continue to search for a global optimum.

5. Numerical Simulation

To test the performance of the proposed algorithm, two famous benchmark optimization problems and an engineering optimization problem with linear and nonlinear constraints are used. The solutions of the two benchmark problems can be represented in the plane, so the convergence of the CHNNSO can be clearly observed. The results of the third optimization problem, compared with those of other algorithms, are displayed in Table 2. We compare the CHNNSO with the original PSO [20].


Figure 2: Time evolutions of (a) x_i(t), (b) z(t), and (c) the Lyapunov exponent λ.

Table 1: The parameters of the Hartmann function (when n = 3 and q = 4).

i   a_i1   a_i2   a_i3   c_i   p_i1      p_i2      p_i3
1   3      10     30     1     0.3689    0.1170    0.2673
2   0.1    10     35     1.2   0.4699    0.4387    0.7470
3   3      10     30     3     0.1091    0.8732    0.5547
4   0.1    10     35     3.2   0.03815   0.5473    0.8828

5.1. The Rastrigin Function. To demonstrate the efficiency of the proposed technique, the famous Rastrigin function is chosen as a test problem. This function with two variables is given by

f(X) = x_1² + x_2² − cos(18x_1) − cos(18x_2),  −1 < x_i < 1, i = 1, 2.   (27)

The global minimum is −2 and the minimum point is (0, 0). There are about 50 local minima arranged in a lattice configuration.
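The test function (27) is straightforward to code; a minimal sketch:

```python
import numpy as np

def rastrigin2(x1, x2):
    """Two-variable Rastrigin function, Eq. (27)."""
    return x1**2 + x2**2 - np.cos(18.0*x1) - np.cos(18.0*x2)
```

At the minimum point (0, 0) the value is 0 + 0 − 1 − 1 = −2, matching the global minimum stated above.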

The proposed technique is applied with a population size of 20, and the maximum number of iterations is 20000. The chaotic particle swarm parameters are chosen as A = 0.02, B = C = 0.01, β = 0.001, z(0) = 0.3, α = 1, ξ = 150, k = 1.1, and I_0 = 0.2.

The position of every particle is initialized with a random value. The time evolution of the cost of the Rastrigin function is shown in Figure 3. The global minimum of −2 is obtained by the best particle at (x_1, x_2) = (0, 0).

From Figure 3 it can be seen that the proposed method gives good optimization results. Since there are two variables in the Rastrigin function, it is easy to show the final convergent particle states in the plane.

In Figure 4 the "∗"s are the best experiences of each particle and the "+"s are the final states of the particles. The global minimum (0, 0) is included in both sets, and most "∗"s and "+"s overlap at the global minimum (0, 0). According to Theorem 3, the particles will finally converge to

Figure 3: Time evolution of the cost of the Rastrigin function.

a circle. For this Rastrigin problem the particles' final states converge to the circle shown in Figure 4, and hence the global convergence of the particles is guaranteed.

Figure 5 displays the results when the original PSO [20] is used to optimize the Rastrigin function. In this numerical simulation the particle swarm population size is also 20, the parameters c_1 and c_2 are set to 2, ω is set to 1, and V_max is set equal to the dynamic range of each dimension. The "∗"s in Figure 5 are the final states of all the particles, corresponding to the "+"s in Figure 4. It is easy to see that the final states of the particles are irregular, even though the global minimum of −2 is obtained by the best experience of the particle swarm, that is, (x_1, x_2) = (0, 0), as shown in Figure 5.

Comparing the results obtained by the proposed CHNNSO in Figure 4 with those of the original PSO in Figure 5, it can be seen that the final states of the particles of the proposed CHNNSO are attracted to the best experience of all the particles and that convergence is superior. The final states of the CHNNSO particles are guaranteed to converge, which is not the case for the original PSO.


Figure 4: The final states and the best experiences of the particles achieved by the proposed CHNNSO for the Rastrigin function.

Figure 5: The final states of the particles achieved by the original PSO for the Rastrigin function.

When the parameters c_1 and c_2 are both set to 1.494, ω to 0.729, and V_max equal to the dynamic range of each dimension, the constriction factors c_1, c_2, and ω are applied to improve the convergence of the particle over time by damping the oscillations once the particle focuses on the best point in an optimal region. The main disadvantage of this method is that the particles may follow wider cycles and may not converge when the individual best performance is far from the neighborhood's best performance (two different regions) [24].

5.2. The Schaffer's F6 Function. To further investigate the performance of the CHNNSO,

f(X) = 0.5 + ((sin √(x_1² + x_2²))² − 0.5) / (1 + 0.001(x_1² + x_2²))²,   (28)

Figure 6: The final states and the best experiences of the particles achieved by the proposed CHNNSO for the Schaffer's F6 function.

the Schaffer's F6 function [25] is chosen. This function has a single global optimum at (0, 0) with f_min(x, y) = 0 and a large number of local optima. The global optimum is difficult to find because the value at the best local optimum differs by only about 10⁻³ from the global minimum, and the local optima crowd around the global optimum. The proposed technique is applied with a population size of 30 and 100000 iterations, and the parameters of the CHNNSO are chosen as

A = B = C = 0.01, β = 0.0003, z(0) = 0.3, ξ = 155, k = 1.1, I_0 = 0.2.   (29)

The position of each particle is initialized with a random value. In Figure 6 the "∗"s are the best experiences of each particle. The global minimum of 2.674 × 10⁻⁴ is obtained by the best particle at (x_1, x_2) = (−0.00999, 0.01294), which is included among the "∗"s. The "+"s are the final states of the particles. According to Theorem 3, the particles' final states converge to a circle, as shown in Figure 6, which confirms global convergence. From Figure 6 it is clearly seen that the particles' final states are attracted to the neighborhoods of the best experiences of all the particles and the convergence is good.
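A minimal sketch of (28), checking the value reported above for the best particle:

```python
import numpy as np

def schaffer_f6(x1, x2):
    """Schaffer's F6 function, Eq. (28)."""
    r2 = x1**2 + x2**2
    return 0.5 + (np.sin(np.sqrt(r2))**2 - 0.5) / (1.0 + 0.001*r2)**2
```

At the global optimum schaffer_f6(0, 0) = 0, and at the best particle position reported above, (−0.00999, 0.01294), the value is about 2.674 × 10⁻⁴, in agreement with the text.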

Figure 7 shows the final particle states when the original PSO [20] is used to optimize the Schaffer's F6 function. In this numerical simulation of the original PSO, the particle swarm population size is also 20, the parameters c_1 and c_2 are both set to 2, ω is set to 1, and V_max is set equal to the dynamic range of each dimension. The "∗"s in Figure 7 are the final states of all the particles, corresponding to the "+"s in Figure 6. It is easy to see that the final states of the particles in Figure 7 are irregular. The global minimum of 6.3299 × 10⁻⁴ is obtained by the best particle at (x_1, x_2) = (0.01889 × 10⁻³, −0.4665 × 10⁻³). The best experience from the original PSO is not as good as that of the proposed CHNNSO.

Comparing the results obtained from the proposed CHNNSO in Figure 6 and the original PSO in Figure 7, it is clearly seen that the particles' final states of the proposed


Figure 7: The final states of the particles achieved by the original PSO for the Schaffer's F6 function.

CHNNSO are finally attracted to the best experience of all the particles and the convergence is better than that of the original PSO. The CHNNSO can guarantee the convergence of the particle swarm, whereas the final states of the original PSO are irregular.

5.3. The Hartmann Function. The Hartmann function with n = 3 and q = 4 is given by

f(x) = −Σ_{i=1}^{q} c_i exp(−Σ_{j=1}^{n} a_ij (x_j − p_ij)²),   (30)

with x_j belonging to

S = {x ∈ R^n | 0 ≤ x_j ≤ 1, 1 ≤ j ≤ n}.   (31)

Table 1 shows the parameter values for the Hartmann function when n = 3 and q = 4. When n = 3, X_min = (0.114, 0.556, 0.882) and f(X_min) = −3.86.
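With the parameters of Table 1, (30) can be sketched as below. Evaluating at the rounded minimizer quoted above gives a value close to, though slightly above, the reported optimum −3.86; the small gap comes from the rounding of the coordinates of X_min.

```python
import numpy as np

# Hartmann function, Eq. (30), with the n = 3, q = 4 parameters of Table 1.
a = np.array([[3.0, 10.0, 30.0],
              [0.1, 10.0, 35.0],
              [3.0, 10.0, 30.0],
              [0.1, 10.0, 35.0]])
c = np.array([1.0, 1.2, 3.0, 3.2])
p = np.array([[0.3689,  0.1170, 0.2673],
              [0.4699,  0.4387, 0.7470],
              [0.1091,  0.8732, 0.5547],
              [0.03815, 0.5473, 0.8828]])

def hartmann3(x):
    """Eq. (30): negative sum of q Gaussian-like terms."""
    x = np.asarray(x, dtype=float)
    return -float(np.sum(c * np.exp(-np.sum(a * (x - p)**2, axis=1))))
```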

The maximum number of iterations for the Hartmann function is 15000. In Figure 8 only a two-dimensional projection is pictured. The "+"s are the final states of the particles and the "∗"s denote the best experiences of all particles. From Figure 8 it can easily be seen that the final states of the particles converge to a circle with center point (0.0831, 0.5567, 0.8522) and radius 0.1942. The final particle states confirm Theorem 3, and final convergence is guaranteed.

5.4. Design of a Pressure Vessel. The pressure vessel problem described in [26, 27] is an example with linear and nonlinear constraints that has been solved by a variety of techniques. The objective of the problem is to minimize the total cost of the material needed for forming and welding a cylindrical vessel. There are four design variables: x_1 (T_s, thickness of the shell), x_2 (T_h, thickness of the head), x_3 (R,

Figure 8: The final states of the particles achieved by the proposed CHNNSO for the Hartmann function.

inner radius), and x_4 (L, length of the cylindrical section of the vessel). T_s and T_h are integer multiples of 0.0625 inch, the available thicknesses of rolled steel plates, and R and L are continuous. The problem can be specified as follows:

Minimize  f(X) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3² + 3.1661 x_1² x_4 + 19.84 x_1² x_3

subject to  g_1(X) = −x_1 + 0.0193 x_3 ≤ 0,
            g_2(X) = −x_2 + 0.00954 x_3 ≤ 0,
            g_3(X) = −π x_3² x_4 − (4/3) π x_3³ + 1296000 ≤ 0,
            g_4(X) = x_4 − 240 ≤ 0.   (32)

The following ranges of the variables were used [27]:

0 < x_1 ≤ 99, 0 < x_2 ≤ 99, 10 ≤ x_3 ≤ 200, 10 ≤ x_4 ≤ 200.   (33)
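The formulation (32)–(33) can be checked at the best design reported in Table 2 below; the sketch evaluates the objective and the four constraints. With the rounded design values, g_1 and g_3 come out at roughly (numerically) zero rather than exactly zero, and the objective evaluates close to the value reported in Table 2.

```python
import math

# Pressure vessel objective and constraints, Eq. (32).
def objective(x1, x2, x3, x4):
    return (0.6224*x1*x3*x4 + 1.7781*x2*x3**2
            + 3.1661*x1**2*x4 + 19.84*x1**2*x3)

def constraints(x1, x2, x3, x4):
    g1 = -x1 + 0.0193*x3
    g2 = -x2 + 0.00954*x3
    g3 = -math.pi*x3**2*x4 - (4.0/3.0)*math.pi*x3**3 + 1296000.0
    g4 = x4 - 240.0
    return g1, g2, g3, g4
```

Evaluating at (0.8125, 0.4375, 42.09845, 176.6366), the best design in Table 2, gives an objective near 6.06 × 10³ with g_2 ≈ −0.0359 and g_4 ≈ −63.36, matching the table.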

de Freitas Vaz and da Graça Pinto Fernandes [26] proposed an algorithm for constrained optimization problems. Here this algorithm [26] is combined with the CHNNSO to search for the global optimum. The proposed technique is applied with a population size of 20, and the maximum number of iterations is 20000. The values of the parameters in (7)–(14) are set to

A = 0.02, B = C = 0.01, ξ = 150, α = 1, β = 0.001, z(0) = 0.3, k = 1.1, I_0 = 0.2, p_i = 0.5, p_g = 0.5.   (34)

From Table 2, the best solution obtained by the CHNNSO is better than the two solutions previously reported.


Table 2: Comparison of the results for the pressure vessel design problem.

Design variables   This paper       Hu et al. [27]   Coello [28]
x_1 (T_s)          0.8125           0.8125           0.8125
x_2 (T_h)          0.4375           0.4375           0.4375
x_3 (R)            42.09845         42.09845         40.3239
x_4 (L)            176.6366         176.6366         200.0
g_1(X)             0                0                −0.03424
g_2(X)             −0.03588         −0.03588         −0.052847
g_3(X)             −1.1814 × 10⁻¹¹  −5.8208 × 10⁻¹¹  −27.105845
g_4(X)             −63.3634         −63.3634         −40.0
f(X)               6059.1313        6059.1313        6288.7445

6. Conclusion

This paper proposed a chaotic Hopfield neural network swarm optimization algorithm. It incorporates the particle swarm search structure, which has global optimization capability, into Hopfield neural networks, which guarantee convergence. In addition, by adding chaos generator terms to the Hopfield neural networks, the ergodic searching capability of the proposed algorithm is greatly improved. The decay factor introduced in the chaos terms ensures that the search converges to the global optimum after a globally chaotic optimization phase. The experimental results on three classic benchmark functions showed that the proposed algorithm can guarantee the convergence of the particle swarm search and can escape from local extrema. The proposed algorithm therefore improves the practicality of particle swarm optimization. As this is a general particle model, techniques such as the local best version algorithm proposed in [29] can be used together with the new model. This will be explored in future work.

Acknowledgments

This work was supported by the China/South Africa Research Cooperation Programme (nos. 78673 and CS06-L02), the South African National Research Foundation Incentive Grant (no. 81705), the SDUST Research Fund (no. 2010KYTD101), and the Key Scientific Support Program of Qingdao City (no. 11-2-3-51-nsh).

References

[1] J. C. Sprott, Chaos and Time-Series Analysis, Oxford University Press, New York, NY, USA, 2004.
[2] K. Aihara and G. Matsumoto, Chaos in Biological Systems, Plenum Press, New York, NY, USA, 1987.
[3] C. A. Skarda and W. J. Freeman, "How brains make chaos in order to make sense of the world," Behavioral and Brain Sciences, vol. 10, pp. 161–165, 1987.
[4] L. Wang, D. Z. Zheng, and Q. S. Lin, "Survey on chaotic optimization methods," Computation Technology and Automation, vol. 20, pp. 1–5, 2001.
[5] B. Li and W. S. Jiang, "Optimizing complex functions by chaos search," Cybernetics and Systems, vol. 29, no. 4, pp. 409–419, 1998.
[6] J. J. Hopfield and D. W. Tank, "'Neural' computation of decisions in optimization problems," Biological Cybernetics, vol. 52, no. 3, pp. 141–152, 1985.
[7] Z. Wang, Y. Liu, K. Fraser, and X. Liu, "Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays," Physics Letters A, vol. 354, no. 4, pp. 288–297, 2006.
[8] T. Tanaka and E. Hiura, "Computational abilities of a chaotic neural network," Physics Letters A, vol. 315, no. 3-4, pp. 225–230, 2003.
[9] K. Aihara, T. Takabe, and M. Toyoda, "Chaotic neural networks," Physics Letters A, vol. 144, no. 6-7, pp. 333–340, 1990.
[10] S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, pp. 671–680, 1983.
[11] L. Chen and K. Aihara, "Chaotic simulated annealing by a neural network model with transient chaos," Neural Networks, vol. 8, no. 6, pp. 915–930, 1995.
[12] L. Chen and K. Aihara, "Chaos and asymptotical stability in discrete-time neural networks," Physica D, vol. 104, no. 3-4, pp. 286–325, 1997.
[13] L. Wang, "On competitive learning," IEEE Transactions on Neural Networks, vol. 8, no. 5, pp. 1214–1217, 1997.
[14] L. Wang and K. Smith, "On chaotic simulated annealing," IEEE Transactions on Neural Networks, vol. 9, no. 4, pp. 716–718, 1998.
[15] M. Clerc and J. Kennedy, "The particle swarm-explosion, stability, and convergence in a multidimensional complex space," IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
[16] J. Ke, J. X. Qian, and Y. Z. Qiao, "A modified particle swarm optimization algorithm," Journal of Circuits and Systems, vol. 10, pp. 87–91, 2003.
[17] B. Liu, L. Wang, Y. H. Jin, F. Tang, and D. X. Huang, "Improved particle swarm optimization combined with chaos," Chaos, Solitons and Fractals, vol. 25, no. 5, pp. 1261–1271, 2005.
[18] T. Xiang, X. Liao, and K. W. Wong, "An improved particle swarm optimization algorithm combined with piecewise linear chaotic map," Applied Mathematics and Computation, vol. 190, no. 2, pp. 1637–1645, 2007.
[19] C. Fan and G. Jiang, "A simple particle swarm optimization combined with chaotic search," in Proceedings of the 7th World Congress on Intelligent Control and Automation (WCICA '08), pp. 593–598, Chongqing, China, June 2008.
[20] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, December 1995.
[21] S. H. Chen, A. J. Jakeman, and J. P. Norton, "Artificial intelligence techniques: an introduction to their use for modelling environmental systems," Mathematics and Computers in Simulation, vol. 78, no. 2-3, pp. 379–400, 2008.
[22] J. J. Hopfield, "Hopfield network," Scholarpedia, vol. 2, article 1977, 2007.
[23] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 2554–2558, 1984.
[24] Y. del Valle, G. K. Venayagamoorthy, S. Mohagheghi, J. C. Hernandez, and R. G. Harley, "Particle swarm optimization: basic concepts, variants and applications in power systems," IEEE Transactions on Evolutionary Computation, vol. 12, no. 2, pp. 171–195, 2008.
[25] J. D. Schaffer, R. A. Caruana, L. J. Eshelman, and R. Das, "A study of control parameters affecting online performance of genetic algorithms for function optimization," in Proceedings of the 3rd International Conference on Genetic Algorithms, pp. 51–60, 1989.
[26] A. I. de Freitas Vaz and E. M. da Graça Pinto Fernandes, "Optimization of nonlinear constrained particle swarm," Technological and Economic Development of Economy, vol. 12, no. 1, pp. 30–36, 2006.
[27] X. Hu, R. C. Eberhart, and Y. Shi, "Engineering optimization with particle swarm," in Proceedings of the IEEE Swarm Intelligence Symposium, pp. 53–57, 2003.
[28] C. A. C. Coello, "Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art," Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 11-12, pp. 1245–1287, 2002.
[29] J. Kennedy, "Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance," Neural Networks, vol. 18, pp. 205–217, 1997.


Journal of Applied Mathematics 5

Δ119906119895

119894119901(119905 + 1) = (119896 minus 1) 119906

119895

119894119901(119905) + 2119861120572 (119909

119895

119894119901(119905) minus 119901

119895

119894)

minus 2119862120572 (119909119895

119894(119905) minus 119909

119895

119894119901(119905)) = 0

(20)

1198791015840

min = min(eigenvalue(2119860120572 + 2119862120572 minus2119862120572

minus2119862120572 2119861120572 + 2119862120572))

= (119860 + 119861 + 2119862) 120572 minus 120572radic(119860 minus 119861)2+ 41198622

= 120572(119860 + 119861 + 2119862 minus radic(119860 minus 119861)2+ 41198622)

(21)

In this paper

120573max =1

min (11198921015840)=120585

4 (22)

It is clear that 1198791015840min gt 0 and the stability condition(16) is satisfied when 120585 gt 0 The above analysis verifiesTheorem 2

Theorem 3 The particles converge to the sphere with centerpoint 119901119895

119892and radius 119877 = max 119909119895

119894119890minus 119901119895

119894 (119909119895119894119890is the final

convergence equilibria if the optimization problem is in two-dimensional plane the particles are finally in a circle)

It is easy to show that the particle model given by (7) and(8)ndash(14) has only one equilibrium as 119905 rarr infin that is 119911(119905) =0 Hence as 119905 rarr infin 119883

119894belongs to the hypersphere whose

origin is 119875119892and the radius is 119877 Solving (9) (11) (19) and (20)

simultaneously we get

\[
\frac{(k-1)\ln\left(1 - 1/x_{i}^{j}(t)\right)}{\xi} + 2A\alpha\left(x_{i}^{j}(t) - P_{g}^{j}\right) + 2C\alpha\left(x_{i}^{j}(t) - x_{ip}^{j}(t)\right) = 0, \tag{23}
\]
\[
\frac{(k-1)\ln\left(1 - 1/x_{ip}^{j}(t)\right)}{\xi} + 2B\alpha\left(x_{ip}^{j}(t) - P_{i}^{j}\right) - 2C\alpha\left(x_{i}^{j}(t) - x_{ip}^{j}(t)\right) = 0. \tag{24}
\]

With (23) and (24) satisfied, there must exist final convergence equilibria $x_{ie}^{j}$ and $x_{ipe}^{j}$. So the best place the particle swarm can find is $p_{g}^{j} - x_{ie}^{j}(t+1)$, and the radius is $R = \max\|x_{ie}^{j} - p_{i}^{j}\|$.

The above analysis therefore verifies Theorem 3.

4.2. Dynamics of the Simplest Chaotic Hopfield Neural Network Swarm. In this section the dynamics of the simplest particle swarm model is analyzed. Equations (7) and (8)–(13) give the dynamic model of a single particle, with the subscript $i$ ignored. According to (7) and Theorem 3, the parameters $A$, $B$, and $C$ control the final convergent radius; by trial and error, they can be chosen in the range from 0.005 to 0.05. According to (16) and (22), $k > 1$ and $\xi > 0$. In the simulations the results are better when $k$ is in the neighborhood of 1.1 and $\xi$ is in the neighborhood of 150. The parameters $\beta$ and $Z(0)$ control the duration of the chaotic phase: if $\beta$ is too big and/or $Z(0)$ is too small, the system quickly escapes from chaos and performance is poor. The parameter $I_{0} = 0.2$ is standard in the literature on chaotic neural networks. The simulations showed that the model is not sensitive to the values of the parameters $\xi$ and $k$; for example, $100 < \xi < 200$ and $1 < k < 1.5$ are feasible.

The values of the parameters in (7)–(14) are then set to
\[
A = 0.02, \quad B = C = 0.01, \quad \xi = 150, \quad \alpha = 1, \quad \beta = 0.001, \quad Z(0) = 0.3, \quad k = 1.1, \quad I_{0} = 0.2, \quad p_{i} = 0.5, \quad p_{g} = 0.2. \tag{25}
\]

Figure 2 shows the time evolution of $x_{i}(t)$, $z(t)$, and the Lyapunov exponent $\lambda$ of $x_{i}(t)$. The Lyapunov exponent $\lambda$ characterizes the rate of separation of infinitesimally close trajectories. A positive Lyapunov exponent is usually taken as an indication that the system is chaotic [1]. Here $\lambda$ is defined as

\[
\lambda = \lim_{m \to +\infty} \frac{1}{m} \sum_{t=0}^{m-1} \ln\left|\frac{dx_{i}(t+1)}{dx_{i}(t)}\right|. \tag{26}
\]

At about 200 steps, $z(t)$ decays to a small value and $x_{i}(t)$ departs from chaos, which corresponds with the change of $\lambda$ from positive to negative.
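Definition (26) can be estimated numerically for any one-dimensional map by averaging $\ln|f'(x(t))|$ along an orbit. The sketch below does this for the logistic map at $r = 3.9$ (an illustrative chaotic map, not the paper's model (7)–(14)); the estimate should come out positive, near the known value of about 0.5:

```python
import math

def lyapunov_exponent(f, df, x0, m=100000, transient=1000):
    """Estimate lambda = (1/m) * sum ln|dx(t+1)/dx(t)| along an orbit of f."""
    x = x0
    for _ in range(transient):          # discard transient behavior first
        x = f(x)
    total = 0.0
    for _ in range(m):
        total += math.log(abs(df(x)))   # ln|f'(x(t))| = ln|dx(t+1)/dx(t)|
        x = f(x)
    return total / m

r = 3.9                                  # chaotic regime of the logistic map
lam = lyapunov_exponent(lambda x: r * x * (1 - x),
                        lambda x: r * (1 - 2 * x),
                        x0=0.2)
assert 0.3 < lam < 0.7                   # positive exponent indicates chaos
```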

According to Figure 2, the convergence process of a single particle position follows a nonlinear bifurcation, making the particle converge to a stable fixed point from a strange attractor. In the following section it is shown that the fixed point is determined by the best previous position $P_{g}$ among all particles and the best position $P_{i}$ of the individual particle.

Remark 4. The proposed CHNNSO model is a deterministic chaos-Hopfield neural network swarm, which is different from existing PSOs with stochastic parameters. Its search orbits exhibit an evolutionary process of inverse period bifurcation from chaos to periodic orbits and then to a sink. As chaos is ergodic and the particle is always in a chaotic state at the beginning (e.g., in Figure 2), the particle can escape when trapped in local extrema. The proposed CHNNSO model will therefore in general not suffer from being easily trapped in a local optimum and will continue to search for a global optimum.
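The route described in Remark 4 (chaos, then inverse period bifurcation, then a sink) can be illustrated with a single transiently chaotic neuron in the style of Chen and Aihara [11], whose decaying self-feedback $z(t)$ plays the same role as the CHNNSO's decaying chaos term. All concrete values below ($k$, $\varepsilon$, $I_0$, $z(0)$, $\beta$) are illustrative assumptions taken from that literature, not the paper's equations (7)–(14):

```python
import math

# Illustrative transiently chaotic neuron (after Chen & Aihara [11]):
#   y(t+1) = k*y(t) - z(t)*(x(t) - I0),   x(t) = 1/(1 + exp(-y(t)/eps)),
#   z(t+1) = (1 - beta)*z(t)   <- the decaying self-feedback drives the
#                                 chaos -> periodic -> fixed-point route.
k, eps, I0, beta = 0.9, 0.004, 0.65, 0.001
y, z = 0.1, 0.08
xs, zs = [], []
for t in range(20000):
    x = 1.0 / (1.0 + math.exp(-y / eps))  # neuron output in (0, 1)
    y = k * y - z * (x - I0)
    z = (1.0 - beta) * z                  # chaos term decays geometrically
    xs.append(x)
    zs.append(z)

assert zs[-1] < 1e-6                      # the chaotic term has decayed away
assert all(0.0 <= x <= 1.0 for x in xs)   # output stays bounded throughout
assert abs(xs[-1] - 0.5) < 1e-2           # the orbit settles on a fixed point
```

Once $z(t)$ has decayed, the update reduces to $y(t+1) = k\,y(t)$ with $k < 1$, so the neuron output settles on a fixed point, mirroring the sink in Remark 4.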

5. Numerical Simulation

To test the performance of the proposed algorithm, two famous benchmark optimization problems and an engineering optimization problem with linear and nonlinear constraints are used. The solutions to the two benchmark problems can be represented in the plane, and therefore the convergence of the CHNNSO can be clearly observed. The results of the engineering optimization problem, compared with those of other algorithms, are displayed in Table 2. We compare the CHNNSO with the original PSO [20].


Figure 2: Time evolutions of (a) $x_{i}(t)$, (b) $z(t)$, and (c) the Lyapunov exponent $\lambda$ (horizontal axes: time, 0–14000 steps).

Table 1: The parameters of the Hartmann function (when $n = 3$ and $q = 4$).

$i$ | $a_{i1}$ | $a_{i2}$ | $a_{i3}$ | $c_{i}$ | $p_{i1}$ | $p_{i2}$ | $p_{i3}$
1 | 3 | 10 | 30 | 1 | 0.3689 | 0.1170 | 0.2673
2 | 0.1 | 10 | 35 | 1.2 | 0.4699 | 0.4387 | 0.7470
3 | 3 | 10 | 30 | 3 | 0.1091 | 0.8732 | 0.5547
4 | 0.1 | 10 | 35 | 3.2 | 0.03815 | 0.5473 | 0.8828

5.1. The Rastrigin Function. To demonstrate the efficiency of the proposed technique, the famous Rastrigin function is chosen as a test problem. This function with two variables is given by
\[
f(X) = x_{1}^{2} + x_{2}^{2} - \cos(18x_{1}) - \cos(18x_{2}), \quad -1 < x_{i} < 1, \; i = 1, 2. \tag{27}
\]

The global minimum is $-2$ and the minimum point is $(0, 0)$. There are about 50 local minima arranged in a lattice configuration.
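The reported minimum of (27) is easy to verify numerically; the sketch below checks $f(0, 0) = -2$ and scans a grid over $(-1, 1)^{2}$ to confirm nothing lies below it (an illustrative check, not the paper's algorithm):

```python
import math

def rastrigin2(x1, x2):
    """Two-variable Rastrigin function of (27)."""
    return x1 ** 2 + x2 ** 2 - math.cos(18 * x1) - math.cos(18 * x2)

# The reported global minimum: f(0, 0) = -2.
assert abs(rastrigin2(0.0, 0.0) + 2.0) < 1e-12

# A coarse grid scan over (-1, 1)^2 finds nothing below -2.
best = min(rastrigin2(i / 100.0, j / 100.0)
           for i in range(-99, 100) for j in range(-99, 100))
assert best >= -2.0
```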

The proposed technique is applied with a population size of 20, and the maximum number of iterations is 20000. The chaotic particle swarm parameters are chosen as $A = 0.02$, $B = C = 0.01$, $\beta = 0.001$, $z(0) = 0.3$, $\alpha = 1$, $\xi = 150$, $k = 1.1$, and $I_{0} = 0.2$.

The position of every particle is initialized with a random value. The time evolution of the cost of the Rastrigin function is shown in Figure 3. The global minimum at $-2$ is obtained by the best particle with $(x_{1}, x_{2}) = (0, 0)$.

From Figure 3 it can be seen that the proposed method gives good optimization results. Since there are two variables in the Rastrigin function, it is easy to show the final convergent particle states in the plane.

In Figure 4, the "∗"s are the best experiences of each particle and the "+"s are the final states of the particles. The global minimum $(0, 0)$ is included in both the "∗"s and the "+"s, and most "∗"s and "+"s overlap at the global minimum $(0, 0)$. According to Theorem 3, the particles will finally converge to

Figure 3: Time evolution of the Rastrigin function cost $f(x)$ (steps $\times 10^{4}$).

a circle. For this Rastrigin problem the particles' final states converge to the circle shown in Figure 4, and hence the global convergence of the particles is guaranteed.

Figure 5 displays the results when the original PSO [20] was used to optimize the Rastrigin function. In this numerical simulation the particle swarm population size is also 20, the parameters $c_{1}$ and $c_{2}$ are set to 2, $\omega$ is set to 1, and $V_{\max}$ is set equal to the dynamic range of each dimension. The "∗"s in Figure 5 are the final states of all the particles, corresponding to the "+"s in Figure 4. It is easy to see that the final states of the particles are ruleless, even though the global minimum of $-2$ is obtained by the best experience of the particle swarm, that is, $(x_{1}, x_{2}) = (0, 0)$, as shown in Figure 5.

By comparing the results obtained by the proposed CHNNSO in Figure 4 with the results of the original PSO in Figure 5, it can be seen that the final states of the particles of the proposed CHNNSO are attracted to the best experience of all the particles and that convergence is superior. The final states of CHNNSO particles are guaranteed to converge, which is not the case for the original PSO.


Figure 4: The final states and the best experiences of the particles achieved from the proposed CHNNSO for the Rastrigin function.

Figure 5: The final states of the particles achieved from the original PSO for the Rastrigin function.

In the constriction-factor variant of PSO, the parameters $c_{1}$ and $c_{2}$ are both set to 1.494 and $\omega$ to a value of 0.729, with $V_{\max}$ set equal to the dynamic range on each dimension. The constriction factors $c_{1}$, $c_{2}$, $\omega$ are applied to improve the convergence of the particle over time by damping the oscillations once the particle is focused on the best point in an optimal region. The main disadvantage of this method is that the particles may follow wider cycles and may not converge when the individual best performance is far from the neighborhood's best performance (two different regions) [24].
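For reference, the kind of global-best PSO used in these comparisons [20] can be sketched as follows, here with the constriction-style values $\omega = 0.729$ and $c_{1} = c_{2} = 1.494$ quoted above and the sphere function as a stand-in benchmark (an illustrative sketch, not the authors' implementation):

```python
import random

def pso_sphere(n_particles=20, dim=2, iters=200, seed=0):
    """Minimal global-best PSO with constriction-style parameters [20, 24]."""
    rng = random.Random(seed)
    w, c1, c2 = 0.729, 1.494, 1.494
    vmax = 10.0                      # dynamic range of each dimension [-5, 5]
    f = lambda x: sum(v * v for v in x)          # sphere benchmark
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]                # personal best positions
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = list(pbest[g]), pval[g]        # global best
    initial_gval = gval
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                vs[i][d] = max(-vmax, min(vmax, vs[i][d]))   # clamp to Vmax
                xs[i][d] += vs[i][d]
            fi = f(xs[i])
            if fi < pval[i]:
                pval[i], pbest[i] = fi, list(xs[i])
                if fi < gval:
                    gval, gbest = fi, list(xs[i])
    return initial_gval, gval

initial, final = pso_sphere()
assert final <= initial       # the global best never gets worse
assert final < 1.0            # constriction PSO converges well on the sphere
```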

5.2. Schaffer's F6 Function. To further investigate the performance of the CHNNSO,
\[
f(X) = 0.5 + \frac{\sin^{2}\left(\sqrt{x_{1}^{2} + x_{2}^{2}}\right) - 0.5}{\left(1.0 + 0.001\left(x_{1}^{2} + x_{2}^{2}\right)\right)^{2}}, \tag{28}
\]

Figure 6: The final states and the best experiences of the particles achieved from the proposed CHNNSO for Schaffer's F6 function.

the Schaffer's F6 function [25] is chosen. This function has a single global optimum at $(0, 0)$ with $f_{\min}(x, y) = 0$ and a large number of local optima. The global optimum is difficult to find because the value at the best local optimum differs by only about $10^{-3}$ from the global minimum, and the local optima crowd around the global optimum. The proposed technique is applied with a population size of 30, the number of iterations is 100000, and the parameters of the CHNNSO are chosen as

\[
A = B = C = 0.01, \quad \beta = 0.0003, \quad z(0) = 0.3, \quad \xi = 155, \quad k = 1.1, \quad I_{0} = 0.2. \tag{29}
\]

The position of each particle is initialized with a random value. In Figure 6, the "∗"s are the best experiences of each particle. The global minimum at $2.674 \times 10^{-4}$ is obtained by the best particle with $(x_{1}, x_{2}) = (-0.00999, 0.01294)$, which is included in the "∗"s. The "+"s are the final states of the particles. According to Theorem 3, the particles' final states converge in a circle as shown in Figure 6, which demonstrates global convergence. From Figure 6 it is clearly seen that the particles' final states are attracted to the neighborhoods of the best experiences of all the particles and the convergence is good.

Figure 7 shows the final particle states when the original PSO [20] was used to optimize the Schaffer's F6 function. In this numerical simulation of the original PSO, the particle swarm population size is also 20, the parameters $c_{1}$ and $c_{2}$ are both set to 2, $\omega$ is set to 1, and $V_{\max}$ is set equal to the dynamic range of each dimension. The "∗"s in Figure 7 are the final states of all the particles, corresponding to the "+"s in Figure 6. It is easy to see that the final states of the particles are ruleless in Figure 7. The global minimum $6.3299 \times 10^{-4}$ is obtained by the best particle with $(x_{1}, x_{2}) = (0.01889 \times 10^{-3}, -0.4665 \times 10^{-3})$. The best experience from the original PSO

is not as good as the best of the proposed CHNNSO.

Comparing the results obtained from the proposed CHNNSO in Figure 6 and the original PSO in Figure 7, it is clearly seen that the particles' final states of the proposed


Figure 7: The final states of the particles achieved from the original PSO for Schaffer's F6 function.

CHNNSO are finally attracted to the best experience of all the particles, and the convergence is better than that of the original PSO. The CHNNSO can guarantee the convergence of the particle swarm, whereas the final states of the original PSO are ruleless.

5.3. The Hartmann Function. The Hartmann function with $n = 3$, $q = 4$ is given by
\[
f(x) = -\sum_{i=1}^{q} c_{i} \exp\left(-\sum_{j=1}^{n} a_{ij}\left(x_{j} - p_{ij}\right)^{2}\right), \tag{30}
\]
with $x_{j}$ belonging to
\[
S = \left\{x \in R^{n} \mid 0 \le x_{j} \le 1, \; 1 \le j \le n\right\}. \tag{31}
\]

Table 1 shows the parameter values for the Hartmann function when $n = 3$, $q = 4$. When $n = 3$, $X_{\min} = (0.114, 0.556, 0.882)$ and $f(x_{\min}) = -3.86$.

The maximum number of iterations for the Hartmann function is 15000. Since the search space is three-dimensional, only the $(x_{1}, x_{2})$ subdimensions are pictured in Figure 8: the "+"s are the final states of the particles, and the "∗"s denote the best experiences of all particles. From Figure 8 it can easily be seen that the final states of the particles converge to the circle. The center point is $(0.0831, 0.5567, 0.8522)$ and the radius is 0.1942. The final particle states confirm Theorem 3, and the final convergence is guaranteed.
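Substituting the Table 1 parameters into (30) gives a quick sanity check. Note that evaluating at the printed minimizer $(0.114, 0.556, 0.882)$ yields roughly $-3.79$ rather than the reported $-3.86$, presumably because the printed point is rounded, so only a loose bound is asserted (an illustrative check, not part of the original study):

```python
import math

# Table 1 parameters for the Hartmann function (n = 3, q = 4).
a = [[3.0, 10.0, 30.0], [0.1, 10.0, 35.0], [3.0, 10.0, 30.0], [0.1, 10.0, 35.0]]
c = [1.0, 1.2, 3.0, 3.2]
p = [[0.3689, 0.1170, 0.2673], [0.4699, 0.4387, 0.7470],
     [0.1091, 0.8732, 0.5547], [0.03815, 0.5473, 0.8828]]

def hartmann3(x):
    """Hartmann function of (30) with the Table 1 parameters."""
    return -sum(c[i] * math.exp(-sum(a[i][j] * (x[j] - p[i][j]) ** 2
                                     for j in range(3)))
                for i in range(4))

val = hartmann3([0.114, 0.556, 0.882])   # minimizer as printed in the text
assert -3.9 < val < -3.7                 # close to the reported -3.86
assert hartmann3([0.0, 0.0, 0.0]) > val  # a corner of S is much worse
```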

5.4. Design of a Pressure Vessel. The pressure vessel problem described in [26, 27] is an example which has linear and nonlinear constraints and has been solved by a variety of techniques. The objective of the problem is to minimize the total cost of the material needed for forming and welding a cylindrical vessel. There are four design variables: $x_{1}$ ($T_{s}$, thickness of the shell), $x_{2}$ ($T_{h}$, thickness of the head), $x_{3}$ ($R$,

Figure 8: The final states of the particles achieved from the proposed CHNNSO for the Hartmann function (the $(x_{1}, x_{2})$ subdimensions).

inner radius), and $x_{4}$ ($L$, length of the cylindrical section of the vessel). $T_{s}$ and $T_{h}$ are integer multiples of 0.0625 inch, which are the available thicknesses of rolled steel plates, and $R$ and $L$ are continuous. The problem can be specified as follows:

\[
\begin{aligned}
\text{Minimize} \quad & f(X) = 0.6224x_{1}x_{3}x_{4} + 1.7781x_{2}x_{3}^{2} + 3.1661x_{1}^{2}x_{4} + 19.84x_{1}^{2}x_{3} \\
\text{subject to} \quad & g_{1}(X) = -x_{1} + 0.0193x_{3} \le 0, \\
& g_{2}(X) = -x_{2} + 0.00954x_{3} \le 0, \\
& g_{3}(X) = -\pi x_{3}^{2}x_{4} - \frac{4}{3}\pi x_{3}^{3} + 1296000 \le 0, \\
& g_{4}(X) = x_{4} - 240 \le 0.
\end{aligned} \tag{32}
\]

The following ranges of the variables were used [27]:
\[
0 < x_{1} \le 99, \quad 0 < x_{2} \le 99, \quad 10 \le x_{3} \le 200, \quad 10 \le x_{4} \le 200. \tag{33}
\]

de Freitas Vaz and da Graca Pinto Fernandes [26] proposed an algorithm to deal with constrained optimization problems. Here this algorithm [26] is combined with the CHNNSO to search for the global optimum. The proposed technique is applied with a population size of 20, and the maximum number of iterations is 20000. The values of the parameters in (7)–(14) are then set to
\[
A = 0.02, \quad B = C = 0.01, \quad \xi = 150, \quad \alpha = 1, \quad \beta = 0.001, \quad Z(0) = 0.3, \quad k = 1.1, \quad I_{0} = 0.2, \quad p_{i} = 0.5, \quad p_{g} = 0.5. \tag{34}
\]

From Table 2, the best solution obtained by the CHNNSO is better than the other two solutions previously reported.


Table 2: Comparison of the results for the pressure vessel design problem.

Design variables | This paper | Hu et al. [27] | Coello [28]
$x_{1}$ ($T_{s}$) | 0.8125 | 0.8125 | 0.8125
$x_{2}$ ($T_{h}$) | 0.4375 | 0.4375 | 0.4375
$x_{3}$ ($R$) | 42.09845 | 42.09845 | 40.3239
$x_{4}$ ($L$) | 176.6366 | 176.6366 | 200.0
$g_{1}(X)$ | 0 | 0 | $-0.03424$
$g_{2}(X)$ | $-0.03588$ | $-0.03588$ | $-0.052847$
$g_{3}(X)$ | $-1.1814 \times 10^{-11}$ | $-5.8208 \times 10^{-11}$ | $-27.105845$
$g_{4}(X)$ | $-63.3634$ | $-63.3634$ | $-40.0$
$f(X)$ | 6059.1313 | 6059.1313 | 6288.7445
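The best solution in Table 2 can be substituted back into (32) as a sanity check. Because the printed design variables are rounded to a few decimals, the active constraints $g_{1}$ and $g_{3}$ only come out near zero to within that rounding, so the assertions below use loose tolerances (an illustrative verification, not part of the original study):

```python
import math

def pressure_vessel(x):
    """Objective f and constraints g1..g4 of (32)."""
    x1, x2, x3, x4 = x
    f = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
         + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)
    g = [-x1 + 0.0193 * x3,
         -x2 + 0.00954 * x3,
         -math.pi * x3 ** 2 * x4 - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000,
         x4 - 240.0]
    return f, g

# Best solution of this paper / Hu et al. [27] from Table 2.
f, g = pressure_vessel([0.8125, 0.4375, 42.09845, 176.6366])
assert 6059.0 < f < 6060.5                     # cost near the tabulated value
assert g[0] < 1e-3 and g[1] < 0 and g[3] < 0   # g1 is active (~0)
assert abs(g[2]) < 5.0                         # g3 is active up to rounding
```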

6. Conclusion

This paper proposed a chaotic neural network swarm optimization algorithm. It incorporates the particle swarm searching structure, which has global optimization capability, into Hopfield neural networks, which guarantee convergence. In addition, by adding chaos generator terms into the Hopfield neural networks, the ergodic searching capability of the proposed algorithm is greatly improved. The decay factor introduced in the chaos terms ensures that the search converges to a global optimum after the globally chaotic optimization phase. The experimental results on three classic benchmark functions showed that the proposed algorithm can guarantee the convergence of the particle swarm search and can escape from local extrema. The proposed algorithm therefore improves the practicality of particle swarm optimization. As this is a general particle model, techniques such as the local best version algorithm proposed in [29] can be used together with the new model. This will be explored in future work.

Acknowledgments

This work was supported by the China/South Africa Research Cooperation Programme (nos. 78673 and CS06-L02), the South African National Research Foundation Incentive Grant (no. 81705), the SDUST Research Fund (no. 2010KYTD101), and the Key Scientific Support Program of Qingdao City (no. 11-2-3-51-nsh).

References

[1] J. C. Sprott, Chaos and Time-Series Analysis, Oxford University Press, New York, NY, USA, 2004.
[2] K. Aihara and G. Matsumoto, Chaos in Biological Systems, Plenum Press, New York, NY, USA, 1987.
[3] C. A. Skarda and W. J. Freeman, "How brains make chaos in order to make sense of the world," Behavioral and Brain Sciences, vol. 10, pp. 161–165, 1987.
[4] L. Wang, D. Z. Zheng, and Q. S. Lin, "Survey on chaotic optimization methods," Computation Technology Automation, vol. 20, pp. 1–5, 2001.
[5] B. Li and W. S. Jiang, "Optimizing complex functions by chaos search," Cybernetics and Systems, vol. 29, no. 4, pp. 409–419, 1998.
[6] J. J. Hopfield and D. W. Tank, "'Neural' computation of decisions in optimization problems," Biological Cybernetics, vol. 52, no. 3, pp. 141–152, 1985.
[7] Z. Wang, Y. Liu, K. Fraser, and X. Liu, "Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays," Physics Letters A, vol. 354, no. 4, pp. 288–297, 2006.
[8] T. Tanaka and E. Hiura, "Computational abilities of a chaotic neural network," Physics Letters A, vol. 315, no. 3-4, pp. 225–230, 2003.
[9] K. Aihara, T. Takabe, and M. Toyoda, "Chaotic neural networks," Physics Letters A, vol. 144, no. 6-7, pp. 333–340, 1990.
[10] S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, pp. 671–680, 1983.
[11] L. Chen and K. Aihara, "Chaotic simulated annealing by a neural network model with transient chaos," Neural Networks, vol. 8, no. 6, pp. 915–930, 1995.
[12] L. Chen and K. Aihara, "Chaos and asymptotical stability in discrete-time neural networks," Physica D, vol. 104, no. 3-4, pp. 286–325, 1997.
[13] L. Wang, "On competitive learning," IEEE Transactions on Neural Networks, vol. 8, no. 5, pp. 1214–1217, 1997.
[14] L. Wang and K. Smith, "On chaotic simulated annealing," IEEE Transactions on Neural Networks, vol. 9, no. 4, pp. 716–718, 1998.
[15] M. Clerc and J. Kennedy, "The particle swarm-explosion, stability, and convergence in a multidimensional complex space," IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
[16] J. Ke, J. X. Qian, and Y. Z. Qiao, "A modified particle swarm optimization algorithm," Journal of Circuits and Systems, vol. 10, pp. 87–91, 2003.
[17] B. Liu, L. Wang, Y. H. Jin, F. Tang, and D. X. Huang, "Improved particle swarm optimization combined with chaos," Chaos, Solitons and Fractals, vol. 25, no. 5, pp. 1261–1271, 2005.
[18] T. Xiang, X. Liao, and K. W. Wong, "An improved particle swarm optimization algorithm combined with piecewise linear chaotic map," Applied Mathematics and Computation, vol. 190, no. 2, pp. 1637–1645, 2007.
[19] C. Fan and G. Jiang, "A simple particle swarm optimization combined with chaotic search," in Proceedings of the 7th World Congress on Intelligent Control and Automation (WCICA '08), pp. 593–598, Chongqing, China, June 2008.
[20] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, December 1995.
[21] S. H. Chen, A. J. Jakeman, and J. P. Norton, "Artificial intelligence techniques: an introduction to their use for modelling environmental systems," Mathematics and Computers in Simulation, vol. 78, no. 2-3, pp. 379–400, 2008.
[22] J. J. Hopfield, "Hopfield network," Scholarpedia, vol. 2, article 1977, 2007.
[23] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 2554–2558, 1984.
[24] Y. del Valle, G. K. Venayagamoorthy, S. Mohagheghi, J. C. Hernandez, and R. G. Harley, "Particle swarm optimization: basic concepts, variants and applications in power systems," IEEE Transactions on Evolutionary Computation, vol. 12, no. 2, pp. 171–195, 2008.
[25] J. D. Schaffer, R. A. Caruana, L. J. Eshelman, and R. Das, "A study of control parameters affecting online performance of genetic algorithms for function optimization," in Proceedings of the 3rd International Conference on Genetic Algorithms, pp. 51–60, 1989.
[26] A. I. de Freitas Vaz and E. M. da Graca Pinto Fernandes, "Optimization of nonlinear constrained particle swarm," Technological and Economic Development of Economy, vol. 12, no. 1, pp. 30–36, 2006.
[27] X. Hu, R. C. Eberhart, and Y. Shi, "Engineering optimization with particle swarm," in Proceedings of the IEEE Swarm Intelligence Symposium, pp. 53–57, 2003.
[28] C. A. C. Coello, "Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art," Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 11-12, pp. 1245–1287, 2002.
[29] J. Kennedy, "Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance," Neural Networks, vol. 18, pp. 205–217, 1997.



Journal of Applied Mathematics 9

Table 2 Comparison of the results for pressure Vessel designproblem

Design variables Best solutions foundThis paper Hu et al [27] Coello [28]

1199091(119879119904) 08125 08125 08125

1199092(119879ℎ) 04375 04375 04375

1199093(119877) 4209845 4209845 403239

1199094(119871) 1766366 1766366 2000

1198921(119883) 0 0 003424

1198922(119883) minus003588 minus003588 minus0052847

1198923(119883) minus11814 lowast 10

minus11minus58208 lowast 10

minus11minus27105845

1198924(119883) minus633634 minus633634 minus400

119891(119883) 60591313 60591313 62887445

6 Conclusion

This paper proposed a chaotic neural networks swarmoptimization algorithm It incorporates the particle swarmsearching structure having global optimization capability intoHopfield neural networks which guarantee convergence Inaddition by adding chaos generator terms into Hopfieldneural networks the ergodic searching capability is greatlyimproved in the proposed algorithm The decay factorintroduced in the chaos terms ensures that the searchingevolves to convergence to global optimum after globallychaotic optimization The experiment results of three classicbenchmark functions showed that the proposed algorithmcan guarantee the convergence of the particle swarm search-ing and can escape from local extremum Therefore theproposed algorithm improves the practicality of particleswarm optimization As this is a general particle model sometechniques such as the local best version algorithm proposedin [29] can be used together with the new model This will beexplored in future work

Acknowledgments

This work was supported by ChinaSouth Africa ResearchCooperation Programme (nos 78673 and CS06-L02) SouthAfrican National Research Foundation Incentive Grant (no81705) SDUST Research Fund (no 2010KYTD101) and Keyscientific support program of Qingdao City (no 11-2-3-51-nsh)

References

[1] J C Sprott Chaos and Time-Series Analysis Oxford UniversityPress New York NY USA 2004

[2] K Aihara and G Matsumoto Chaos in Biological SystemsPlenum Press New York NY USA 1987

[3] C A Skarda and W J Freeman ldquoHow brains make chaos inorder tomake sense of theworldrdquoBehavioral andBrain Sciencesvol 10 pp 161ndash165 1987

[4] L Wang D Z Zheng and Q S Lin ldquoSurvey on chaotic opti-mization methodsrdquo Computation Technology Automation vol20 pp 1ndash5 2001

[5] B Li and W S Jiang ldquoOptimizing complex functions by chaossearchrdquo Cybernetics and Systems vol 29 no 4 pp 409ndash4191998

[6] J J Hopfield and DW Tank ldquolsquoNeuralrsquo computation of decisonsin optimization problemsrdquo Biological Cybernetics vol 52 no 3pp 141ndash152 1985

[7] Z Wang Y Liu K Fraser and X Liu ldquoStochastic stabilityof uncertain Hopfield neural networks with discrete and dis-tributed delaysrdquo Physics Letters A vol 354 no 4 pp 288ndash2972006

[8] T Tanaka and E Hiura ldquoComputational abilities of a chaoticneural networkrdquo Physics Letters A vol 315 no 3-4 pp 225ndash2302003

[9] K Aihara T Takabe and M Toyoda ldquoChaotic neural net-worksrdquo Physics Letters A vol 144 no 6-7 pp 333ndash340 1990

[10] S Kirkpatrick C D Gelatt Jr and M P Vecchi ldquoOptimizationby simulated annealingrdquo Science vol 220 no 4598 pp 671ndash6801983

[11] L Chen and K Aihara ldquoChaotic simulated annealing by aneural network model with transient chaosrdquo Neural Networksvol 8 no 6 pp 915ndash930 1995

[12] L Chen and K Aihara ldquoChaos and asymptotical stability indiscrete-time neural networksrdquo Physica D vol 104 no 3-4 pp286ndash325 1997

[13] L Wang ldquoOn competitive learningrdquo IEEE Transactions onNeural Networks vol 8 no 5 pp 1214ndash1217 1997

[14] L Wang and K Smith ldquoOn chaotic simulated annealingrdquo IEEETransactions on Neural Networks vol 9 no 4 pp 716ndash718 1998

[15] M Clerc and J Kennedy ldquoThe particle swarm-explosion sta-bility and convergence in a multidimensional complex spacerdquoIEEE Transactions on Evolutionary Computation vol 6 no 1pp 58ndash73 2002

[16] J Ke J X Qian and Y Z Qiao ldquoA modified particle swarmoptimization algorithmrdquo Journal of Circuits and Systems vol10 pp 87ndash91 2003

[17] B Liu L Wang Y H Jin F Tang and D X Huang ldquoImprovedparticle swarm optimization combined with chaosrdquo ChaosSolitons and Fractals vol 25 no 5 pp 1261ndash1271 2005

[18] T Xiang X Liao and K W Wong ldquoAn improved particleswarm optimization algorithm combined with piecewise linearchaotic maprdquo Applied Mathematics and Computation vol 190no 2 pp 1637ndash1645 2007

[19] C Fan and G Jiang ldquoA simple particle swarm optimizationcombined with chaotic searchrdquo in Proceedings of the 7th WorldCongress on Intelligent Control and Automation (WCICA rsquo08)pp 593ndash598 Chongqing China June 2008

[20] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 Perth Australia December 1995

[21] S H Chen A J Jakeman and J P Norton ldquoArtificial Intelli-gence techniques an introduction to their use for modellingenvironmental systemsrdquo Mathematics and Computers in Simu-lation vol 78 no 2-3 pp 379ndash400 2008

[22] J J Hopfield ldquoHopfield networkrdquo Scholarpedia vol 2 article1977 2007

[23] J J Hopfield ldquoNeurons with graded response have collectivecomputational properties like those of two-state neuronsrdquoProceedings of the National Academy of Sciences of the UnitedStates of America vol 81 no 10 pp 2554ndash2558 1984

[24] Y del Valle G K Venayagamoorthy S Mohagheghi J CHernandez and R G Harley ldquoParticle swarm optimization

10 Journal of Applied Mathematics

basic concepts variants and applications in power systemsrdquoIEEE Transactions on Evolutionary Computation vol 12 no 2pp 171ndash195 2008

[25] J D Schaffer R A Caruana L J Eshelman andRDas ldquoA studyof control parameters affectiong online performance of geneticalgorithms for function optimizationrdquo in Proceedings of the 3rdInternational Conference on Genetic Algorithms pp 51ndash60 1989

[26] A I de Freitas Vaz and E M da Graca Pinto FernandesldquoOptimization of nonlinear constrained particle swarmrdquo Tech-nological and Economic Development of Economy vol 12 no 1pp 30ndash36 2006

[27] X Hu R C Eberhart and Y Shi ldquoEngineering optimizationwith particle swarmrdquo in Proceedings of the IEEE Swarm Intelli-gence Symposium pp 53ndash57 2003

[28] C A C Coello ldquoTheoretical and numerical constraint-handling techniques used with evolutionary algorithms asurvey of the state of the artrdquo Computer Methods in AppliedMechanics and Engineering vol 191 no 11-12 pp 1245ndash12872002

[29] J Kennedy ldquoSmall worlds and mega-minds effects of neigh-borhood topology on particle swarm performancerdquo NeuralNetworks vol 18 pp 205ndash217 1997


Journal of Applied Mathematics 7

Figure 4: The final states and the best experiences of the particles achieved from the proposed CHNNSO for the Rastrigin function.

Figure 5: The final states of the particles achieved from the original PSO for the Rastrigin function.

When their parameters c1 and c2 are both set to 1.494 and ω to a value of 0.729, Vmax is set equal to the dynamic range on each dimension. The constriction factors c1, c2, and ω are applied to improve the convergence of the particle over time by damping the oscillations once the particle is focused on the best point in an optimal region. The main disadvantage of this method is that the particles may follow wider cycles and may not converge when the individual best performance is far from the neighborhood's best performance (two different regions) [24].
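The damped velocity update described above can be sketched as a generic PSO loop. This is a minimal illustration of the cited settings (ω = 0.729, c1 = c2 = 1.494, Vmax clamping) applied to a simple sphere objective; it is not the paper's CHNNSO, and the function and parameter names are illustrative only.

```python
import random

def pso_sphere(dim=2, n_particles=20, iters=200,
               w=0.729, c1=1.494, c2=1.494, vmax=5.0):
    """Generic PSO with damping factors w, c1, c2 and Vmax clamping.
    Objective: sphere function f(x) = sum(x_i^2), minimum 0 at the origin."""
    f = lambda x: sum(v * v for v in x)
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]          # personal best positions
    g = min(P, key=f)[:]           # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (g[d] - X[i][d]))
                # clamp each velocity component to the dynamic range (Vmax)
                V[i][d] = max(-vmax, min(vmax, V[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g, f(g)
```

With the damping factors the swarm contracts toward the best point found; removing the ω < 1 damping (as in the original PSO settings discussed below) tends to leave the particles oscillating.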

5.2. The Schaffer F6 Function. To further investigate the performance of the CHNNSO,

f(X) = 0.5 + [sin²(√(x1² + x2²)) − 0.5] / [1.0 + 0.001(x1² + x2²)]²,  (28)

Figure 6: The final states and the best experiences of the particles achieved from the proposed CHNNSO for the Schaffer F6 function.

the Schaffer F6 function [25], is chosen. This function has a single global optimum at (0, 0), where f_min(x, y) = 0, and a large number of local optima. The global optimum is difficult to find because the value at the best local optimum differs by only about 10⁻³ from the global minimum, and the local optima crowd around the global optimum. The proposed technique is applied with a population size of 30, the number of iterations is 100,000, and the parameters of the CHNNSO are chosen as

A = B = C = 0.01,  β = 0.0003,  z(0) = 0.3,
ξ = 1.55,  k = 1.1,  I0 = 0.2.  (29)

The position of each particle is initialized with a random value. In Figure 6 the "∗"s are the best experiences of each particle. The global minimum 2.674 × 10⁻⁴ is obtained by the best particle with (x1, x2) = (−0.00999, 0.01294), which is included in the "∗"s. The "+"s are the final states of the particles. According to Theorem 3, the particles' final states converge in a circle, as shown in Figure 6, which confirms global convergence. From Figure 6 it is clearly seen that the particles' final states are attracted to the neighborhoods of the best experiences of all the particles, and the convergence is good.
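The benchmark in (28) is straightforward to evaluate directly; a minimal sketch (the function name is our own) shows why the basin is hard to identify: the value 0 at the origin is surrounded by rings of near-optimal local minima.

```python
import math

def schaffer_f6(x1, x2):
    """Schaffer's F6 benchmark as in (28): f = 0 at the global
    optimum (0, 0), with rings of local minima around it."""
    s = x1 * x1 + x2 * x2
    return 0.5 + (math.sin(math.sqrt(s)) ** 2 - 0.5) / (1.0 + 0.001 * s) ** 2
```

For example, schaffer_f6(0.0, 0.0) evaluates exactly to 0, while points on the first ring of local minima around the origin score only slightly worse, which is what defeats purely greedy search.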

Figure 7 shows the final particle states when the original PSO [20] was used to optimize the Schaffer F6 function. In this numerical simulation of the original PSO the particle swarm population size is 20, the parameters c1 and c2 are both set to 2, ω is set to a value of 1, and Vmax is set equal to the dynamic range of each dimension. The "∗"s in Figure 7 are the final states of all the particles, corresponding to the "+"s in Figure 6. It is easy to see that the final states of the particles are ruleless in Figure 7. The global minimum 6.3299 × 10⁻⁴ is obtained by the best particle with (x1, x2) = (0.01889 × 10⁻³, −0.4665 × 10⁻³). The best experience from the original PSO is not as good as the best of the proposed CHNNSO.

Comparing the results obtained from the proposed CHNNSO in Figure 6 and the original PSO in Figure 7, it is clearly seen that the particles' final states of the proposed CHNNSO are finally attracted to the best experience of all the particles, and the convergence is better than that of the original PSO. The CHNNSO can guarantee the convergence of the particle swarm, but the final states of the original PSO are ruleless.

Figure 7: The final states of the particles achieved from the original PSO for the Schaffer F6 function.

5.3. The Hartmann Function. The Hartmann function with n = 3, q = 4 is given by

f(x) = −∑_{i=1}^{q} c_i exp(−∑_{j=1}^{n} a_ij (x_j − p_ij)²)  (30)

with x_j belonging to

S = { x ∈ Rⁿ | 0 ≤ x_j ≤ 1, 1 ≤ j ≤ n }.  (31)

Table 1 shows the parameter values for the Hartmann function when n = 3, q = 4.

When n = 3, X_min = (0.114, 0.556, 0.852) and f(x_min) = −3.86.

The cost of the Hartmann function is evolved over 15,000 iterations. In Figure 8 only the (x1, x2) subdimensions are pictured. In Figure 8 the "+"s are the final states of the particles and the "∗"s denote the best experiences of all particles. From Figure 8 it can easily be seen that the final states of the particles converge to the circle. The center point is (0.0831, 0.5567, 0.8522) and the radius is 0.1942. The final particle states confirm Theorem 3, and the final convergence is guaranteed.
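Equation (30) can be evaluated directly once the coefficients are fixed. Table 1 is not reproduced in this excerpt, so the sketch below uses the standard 3-dimensional Hartmann coefficients from the optimization literature (an assumption: the paper's Table 1 is presumed to contain these widely used values).

```python
import math

# Assumed standard Hartmann-3 coefficients (not copied from the paper's Table 1).
A = [[3.0, 10.0, 30.0], [0.1, 10.0, 35.0],
     [3.0, 10.0, 30.0], [0.1, 10.0, 35.0]]
C = [1.0, 1.2, 3.0, 3.2]
P = [[0.3689, 0.1170, 0.2673],
     [0.4699, 0.4387, 0.7470],
     [0.1091, 0.8732, 0.5547],
     [0.0381, 0.5743, 0.8828]]

def hartmann3(x):
    """Hartmann function (30) with n = 3, q = 4 on the unit cube (31)."""
    return -sum(C[i] * math.exp(-sum(A[i][j] * (x[j] - P[i][j]) ** 2
                                     for j in range(3)))
                for i in range(4))
```

With these coefficients the global minimum is approximately −3.86, attained near (0.114, 0.556, 0.852), consistent with the values quoted above.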

Figure 8: The final states and the best experiences of the particles achieved from the proposed CHNNSO for the Hartmann function.

5.4. Design of a Pressure Vessel. The pressure vessel problem described in [26, 27] is an example with linear and nonlinear constraints that has been solved by a variety of techniques. The objective of the problem is to minimize the total cost of the material needed for forming and welding a cylindrical vessel. There are four design variables: x1 (Ts, thickness of the shell), x2 (Th, thickness of the head), x3 (R, inner radius), and x4 (L, length of the cylindrical section of the vessel). Ts and Th are integer multiples of 0.0625 inch, which are the available thicknesses of rolled steel plates, and R and L are continuous. The problem can be specified as follows:

Minimize f(X) = 0.6224 x1 x3 x4 + 1.7781 x2 x3² + 3.1661 x1² x4 + 19.84 x1² x3

subject to g1(X) = −x1 + 0.0193 x3 ≤ 0,
g2(X) = −x2 + 0.00954 x3 ≤ 0,
g3(X) = −π x3² x4 − (4/3) π x3³ + 1,296,000 ≤ 0,
g4(X) = x4 − 240 ≤ 0.  (32)

The following ranges of the variables were used [27]:

0 < x1 ≤ 99,  0 < x2 ≤ 99,  10 ≤ x3 ≤ 200,  10 ≤ x4 ≤ 200.  (33)

de Freitas Vaz and da Graça Pinto Fernandes [26] proposed an algorithm to deal with constrained optimization problems. Here this algorithm [26] is combined with the CHNNSO to search for the global optimum. The proposed technique is applied with a population size of 20, and the maximum number of iterations is 20,000. The values of the parameters in (7)–(14) are set to

A = 0.02,  B = C = 0.01,  ξ = 1.50,
α = 1,  β = 0.001,  Z(0) = 0.3,
k = 1.1,  I0 = 0.2,
p_i = 0.5,  p_g = 0.5.  (34)

From Table 2, the best solution obtained by the CHNNSO is better than the other two solutions previously reported.


Table 2: Comparison of the results for the pressure vessel design problem.

Design variables | This paper | Hu et al. [27] | Coello [28]
x1 (Ts)  | 0.8125 | 0.8125 | 0.8125
x2 (Th)  | 0.4375 | 0.4375 | 0.4375
x3 (R)   | 42.09845 | 42.09845 | 40.3239
x4 (L)   | 176.6366 | 176.6366 | 200.0
g1(X)    | 0 | 0 | −0.03424
g2(X)    | −0.03588 | −0.03588 | −0.052847
g3(X)    | −1.1814 × 10⁻¹¹ | −5.8208 × 10⁻¹¹ | −27.105845
g4(X)    | −63.3634 | −63.3634 | −40.0
f(X)     | 6059.1313 | 6059.1313 | 6288.7445

6. Conclusion

This paper proposed a chaotic Hopfield neural network swarm optimization algorithm. It incorporates the particle swarm searching structure, which has global optimization capability, into Hopfield neural networks, which guarantee convergence. In addition, by adding chaos generator terms into the Hopfield neural networks, the ergodic searching capability of the proposed algorithm is greatly improved. The decay factor introduced in the chaos terms ensures that the search converges to the global optimum after globally chaotic optimization. The experimental results of three classic benchmark functions showed that the proposed algorithm can guarantee the convergence of the particle swarm search and can escape from local extrema. Therefore the proposed algorithm improves the practicality of particle swarm optimization. As this is a general particle model, techniques such as the local best version algorithm proposed in [29] can be used together with the new model. This will be explored in future work.

Acknowledgments

This work was supported by the China/South Africa Research Cooperation Programme (nos. 78673 and CS06-L02), the South African National Research Foundation Incentive Grant (no. 81705), the SDUST Research Fund (no. 2010KYTD101), and the Key Scientific Support Program of Qingdao City (no. 11-2-3-51-nsh).

References

[1] J. C. Sprott, Chaos and Time-Series Analysis, Oxford University Press, New York, NY, USA, 2004.
[2] K. Aihara and G. Matsumoto, Chaos in Biological Systems, Plenum Press, New York, NY, USA, 1987.
[3] C. A. Skarda and W. J. Freeman, "How brains make chaos in order to make sense of the world," Behavioral and Brain Sciences, vol. 10, pp. 161–165, 1987.
[4] L. Wang, D. Z. Zheng, and Q. S. Lin, "Survey on chaotic optimization methods," Computation Technology Automation, vol. 20, pp. 1–5, 2001.
[5] B. Li and W. S. Jiang, "Optimizing complex functions by chaos search," Cybernetics and Systems, vol. 29, no. 4, pp. 409–419, 1998.
[6] J. J. Hopfield and D. W. Tank, "'Neural' computation of decisions in optimization problems," Biological Cybernetics, vol. 52, no. 3, pp. 141–152, 1985.
[7] Z. Wang, Y. Liu, K. Fraser, and X. Liu, "Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays," Physics Letters A, vol. 354, no. 4, pp. 288–297, 2006.
[8] T. Tanaka and E. Hiura, "Computational abilities of a chaotic neural network," Physics Letters A, vol. 315, no. 3-4, pp. 225–230, 2003.
[9] K. Aihara, T. Takabe, and M. Toyoda, "Chaotic neural networks," Physics Letters A, vol. 144, no. 6-7, pp. 333–340, 1990.
[10] S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, pp. 671–680, 1983.
[11] L. Chen and K. Aihara, "Chaotic simulated annealing by a neural network model with transient chaos," Neural Networks, vol. 8, no. 6, pp. 915–930, 1995.
[12] L. Chen and K. Aihara, "Chaos and asymptotical stability in discrete-time neural networks," Physica D, vol. 104, no. 3-4, pp. 286–325, 1997.
[13] L. Wang, "On competitive learning," IEEE Transactions on Neural Networks, vol. 8, no. 5, pp. 1214–1217, 1997.
[14] L. Wang and K. Smith, "On chaotic simulated annealing," IEEE Transactions on Neural Networks, vol. 9, no. 4, pp. 716–718, 1998.
[15] M. Clerc and J. Kennedy, "The particle swarm—explosion, stability, and convergence in a multidimensional complex space," IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
[16] J. Ke, J. X. Qian, and Y. Z. Qiao, "A modified particle swarm optimization algorithm," Journal of Circuits and Systems, vol. 10, pp. 87–91, 2003.
[17] B. Liu, L. Wang, Y. H. Jin, F. Tang, and D. X. Huang, "Improved particle swarm optimization combined with chaos," Chaos, Solitons and Fractals, vol. 25, no. 5, pp. 1261–1271, 2005.
[18] T. Xiang, X. Liao, and K. W. Wong, "An improved particle swarm optimization algorithm combined with piecewise linear chaotic map," Applied Mathematics and Computation, vol. 190, no. 2, pp. 1637–1645, 2007.
[19] C. Fan and G. Jiang, "A simple particle swarm optimization combined with chaotic search," in Proceedings of the 7th World Congress on Intelligent Control and Automation (WCICA '08), pp. 593–598, Chongqing, China, June 2008.
[20] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, December 1995.
[21] S. H. Chen, A. J. Jakeman, and J. P. Norton, "Artificial intelligence techniques: an introduction to their use for modelling environmental systems," Mathematics and Computers in Simulation, vol. 78, no. 2-3, pp. 379–400, 2008.
[22] J. J. Hopfield, "Hopfield network," Scholarpedia, vol. 2, article 1977, 2007.
[23] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 3088–3092, 1984.
[24] Y. del Valle, G. K. Venayagamoorthy, S. Mohagheghi, J. C. Hernandez, and R. G. Harley, "Particle swarm optimization: basic concepts, variants and applications in power systems," IEEE Transactions on Evolutionary Computation, vol. 12, no. 2, pp. 171–195, 2008.
[25] J. D. Schaffer, R. A. Caruana, L. J. Eshelman, and R. Das, "A study of control parameters affecting online performance of genetic algorithms for function optimization," in Proceedings of the 3rd International Conference on Genetic Algorithms, pp. 51–60, 1989.
[26] A. I. de Freitas Vaz and E. M. da Graça Pinto Fernandes, "Optimization of nonlinear constrained particle swarm," Technological and Economic Development of Economy, vol. 12, no. 1, pp. 30–36, 2006.
[27] X. Hu, R. C. Eberhart, and Y. Shi, "Engineering optimization with particle swarm," in Proceedings of the IEEE Swarm Intelligence Symposium, pp. 53–57, 2003.
[28] C. A. C. Coello, "Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art," Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 11-12, pp. 1245–1287, 2002.
[29] J. Kennedy, "Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance," Neural Networks, vol. 18, pp. 205–217, 1997.


8 Journal of Applied Mathematics

0

25

25 5

minus25

minus25minus5

minus5

minus5

1199092

1199091

0

Figure 7Thefinal states of the particles achieved fromoriginal PSOfor Schafferrsquos F6 function

CHNNSO are finally attracted to the best experience of allthe particles and the convergence is better than that of theoriginal PSO The CHNNSO can guarantee the convergenceof the particle swarm but the final states of the original PSOare ruleless

53 The Hartmann Function The Hartmann function when119899 = 3 119902 = 4 is given by

119891 (119909) = minus

119902

sum

119894=1

119888119894exp(minus

119899

sum

119895=1

119886119894119895(119909119895minus 119901119894119895)2

) (30)

with 119909119895belonging to

119878 = 119909 isin 119877119899| 0 le 119909

119895le 1 1 le 119895 le 119899 (31)

Table 1 shows the parameter values for the Hartmann func-tion when 119899 = 3 119902 = 4

When 119899 = 3 119883min = (0114 0556 0882) 119891(119909min) =minus386

The time evolution of the cost of the Hartmann functionis 15000 In Figure 8 only subdimensions are picturedIn Figure 8 the ldquo+rdquos are the final states of the particlesand the ldquolowastrdquos denote the best experiences of all particlesFrom Figure 8 it can be easily seen that the final statesof the particles converge to the circle The center point is(00831 05567 08522) and the radius is 01942 The finalparticle states confirmTheorem 3 and the final convergencyis guaranteed

54 Design of a Pressure Vessel The pressure vessel problemdescribed in [26 27] is an example which has linear andnonlinear constraints and has been solved by a variety oftechniques The objective of the problem is to minimize thetotal cost of the material needed for forming and weldinga cylindrical vessel There are four design variables 119909

1(119879119904

thickness of the shell) 1199092(119879ℎ thickness of the head) 119909

3(119877

0 01 02 03035

04

045

05

055

06

065

07

075

08

1199092

1199091

minus02 minus01

Figure 8Thefinal states of the particles achieved fromoriginal PSOfor Hartmann function

inner radius) and 1199094(119871 length of the cylindrical section of

the vessel) 119879119904and 119879

ℎare integer multiples of 00625 inch

which are the available thickness of rolled steel plates and 119877and119871 are continuousTheproblemcan be specified as follows

Minimize 119891 (119883) = 06224119909111990931199094 + 1778111990921199092

3

+ 316611199092

11199094+ 1984119909

2

11199093

subject to 1198921 (119883) = minus1199091 + 001931199093 le 0

1198922 (119883) = minus1199092 + 0009541199093 le 0

1198923 (119883) = minus120587119909

2

31199094minus4

31205871199093+ 1296000 le 0

1198924 (119883) = 1199094 minus 240 le 0

(32)

The following range of the variables were used [27]

0 lt 1199091le 99 0 lt 119909

2le 99 10 le 119909

3le 200

10 le 1199094le 200

(33)

de Freitas Vas and de Graca Pinto Fernandes [26] proposedan algorithm to deal with the constrained optimization prob-lems Here this algorithm [26] is combined with CHNNSOto search for the global optimum The proposed techniqueis applied with a population size of 20 and the maximumnumber of iterations is 20000 Then the values of theparameters in (7)ndash(14) are set to

119860 = 002 119861 = 119862 = 001 120585 = 150

120572 = 1 120573 = 0001 119885 (0) = 03

119896 = 11 1198680= 02

119901119894= 05 119901

119892= 05

(34)

From Table 2 the best solution obtained by the CHNNSO isbetter than the other two solutions previously reported

Journal of Applied Mathematics 9

Table 2 Comparison of the results for pressure Vessel designproblem

Design variables Best solutions foundThis paper Hu et al [27] Coello [28]

1199091(119879119904) 08125 08125 08125

1199092(119879ℎ) 04375 04375 04375

1199093(119877) 4209845 4209845 403239

1199094(119871) 1766366 1766366 2000

1198921(119883) 0 0 003424

1198922(119883) minus003588 minus003588 minus0052847

1198923(119883) minus11814 lowast 10

minus11minus58208 lowast 10

minus11minus27105845

1198924(119883) minus633634 minus633634 minus400

119891(119883) 60591313 60591313 62887445

6 Conclusion

This paper proposed a chaotic neural networks swarmoptimization algorithm It incorporates the particle swarmsearching structure having global optimization capability intoHopfield neural networks which guarantee convergence Inaddition by adding chaos generator terms into Hopfieldneural networks the ergodic searching capability is greatlyimproved in the proposed algorithm The decay factorintroduced in the chaos terms ensures that the searchingevolves to convergence to global optimum after globallychaotic optimization The experiment results of three classicbenchmark functions showed that the proposed algorithmcan guarantee the convergence of the particle swarm search-ing and can escape from local extremum Therefore theproposed algorithm improves the practicality of particleswarm optimization As this is a general particle model sometechniques such as the local best version algorithm proposedin [29] can be used together with the new model This will beexplored in future work

Acknowledgments

This work was supported by ChinaSouth Africa ResearchCooperation Programme (nos 78673 and CS06-L02) SouthAfrican National Research Foundation Incentive Grant (no81705) SDUST Research Fund (no 2010KYTD101) and Keyscientific support program of Qingdao City (no 11-2-3-51-nsh)

References

[1] J. C. Sprott, Chaos and Time-Series Analysis, Oxford University Press, New York, NY, USA, 2004.

[2] K. Aihara and G. Matsumoto, Chaos in Biological Systems, Plenum Press, New York, NY, USA, 1987.

[3] C. A. Skarda and W. J. Freeman, "How brains make chaos in order to make sense of the world," Behavioral and Brain Sciences, vol. 10, pp. 161–165, 1987.

[4] L. Wang, D. Z. Zheng, and Q. S. Lin, "Survey on chaotic optimization methods," Computation Technology Automation, vol. 20, pp. 1–5, 2001.

[5] B. Li and W. S. Jiang, "Optimizing complex functions by chaos search," Cybernetics and Systems, vol. 29, no. 4, pp. 409–419, 1998.

[6] J. J. Hopfield and D. W. Tank, "'Neural' computation of decisions in optimization problems," Biological Cybernetics, vol. 52, no. 3, pp. 141–152, 1985.

[7] Z. Wang, Y. Liu, K. Fraser, and X. Liu, "Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays," Physics Letters A, vol. 354, no. 4, pp. 288–297, 2006.

[8] T. Tanaka and E. Hiura, "Computational abilities of a chaotic neural network," Physics Letters A, vol. 315, no. 3-4, pp. 225–230, 2003.

[9] K. Aihara, T. Takabe, and M. Toyoda, "Chaotic neural networks," Physics Letters A, vol. 144, no. 6-7, pp. 333–340, 1990.

[10] S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, pp. 671–680, 1983.

[11] L. Chen and K. Aihara, "Chaotic simulated annealing by a neural network model with transient chaos," Neural Networks, vol. 8, no. 6, pp. 915–930, 1995.

[12] L. Chen and K. Aihara, "Chaos and asymptotical stability in discrete-time neural networks," Physica D, vol. 104, no. 3-4, pp. 286–325, 1997.

[13] L. Wang, "On competitive learning," IEEE Transactions on Neural Networks, vol. 8, no. 5, pp. 1214–1217, 1997.

[14] L. Wang and K. Smith, "On chaotic simulated annealing," IEEE Transactions on Neural Networks, vol. 9, no. 4, pp. 716–718, 1998.

[15] M. Clerc and J. Kennedy, "The particle swarm-explosion, stability, and convergence in a multidimensional complex space," IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.

[16] J. Ke, J. X. Qian, and Y. Z. Qiao, "A modified particle swarm optimization algorithm," Journal of Circuits and Systems, vol. 10, pp. 87–91, 2003.

[17] B. Liu, L. Wang, Y. H. Jin, F. Tang, and D. X. Huang, "Improved particle swarm optimization combined with chaos," Chaos, Solitons and Fractals, vol. 25, no. 5, pp. 1261–1271, 2005.

[18] T. Xiang, X. Liao, and K. W. Wong, "An improved particle swarm optimization algorithm combined with piecewise linear chaotic map," Applied Mathematics and Computation, vol. 190, no. 2, pp. 1637–1645, 2007.

[19] C. Fan and G. Jiang, "A simple particle swarm optimization combined with chaotic search," in Proceedings of the 7th World Congress on Intelligent Control and Automation (WCICA '08), pp. 593–598, Chongqing, China, June 2008.

[20] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, December 1995.

[21] S. H. Chen, A. J. Jakeman, and J. P. Norton, "Artificial intelligence techniques: an introduction to their use for modelling environmental systems," Mathematics and Computers in Simulation, vol. 78, no. 2-3, pp. 379–400, 2008.

[22] J. J. Hopfield, "Hopfield network," Scholarpedia, vol. 2, article 1977, 2007.

[23] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 2554–2558, 1984.

[24] Y. del Valle, G. K. Venayagamoorthy, S. Mohagheghi, J. C. Hernandez, and R. G. Harley, "Particle swarm optimization: basic concepts, variants and applications in power systems," IEEE Transactions on Evolutionary Computation, vol. 12, no. 2, pp. 171–195, 2008.

[25] J. D. Schaffer, R. A. Caruana, L. J. Eshelman, and R. Das, "A study of control parameters affecting online performance of genetic algorithms for function optimization," in Proceedings of the 3rd International Conference on Genetic Algorithms, pp. 51–60, 1989.

[26] A. I. de Freitas Vaz and E. M. da Graca Pinto Fernandes, "Optimization of nonlinear constrained particle swarm," Technological and Economic Development of Economy, vol. 12, no. 1, pp. 30–36, 2006.

[27] X. Hu, R. C. Eberhart, and Y. Shi, "Engineering optimization with particle swarm," in Proceedings of the IEEE Swarm Intelligence Symposium, pp. 53–57, 2003.

[28] C. A. C. Coello, "Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art," Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 11-12, pp. 1245–1287, 2002.

[29] J. Kennedy, "Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance," Neural Networks, vol. 18, pp. 205–217, 1997.


