
J. C. Bansal et al. (eds.), Proceedings of Seventh International Conference on Bio-Inspired Computing: Theories and Applications (BIC-TA 2012), Advances in Intelligent Systems and Computing 201, DOI: 10.1007/978-81-322-1038-2_28, © Springer India 2013

Self Adaptive Acceleration Factor in Particle Swarm Optimization

Shimpi Singh Jadon, Harish Sharma, Jagdish Chand Bansal, and Ritu Tiwari

Abstract Particle swarm optimization (PSO), one of the leading swarm intelligence algorithms, outperforms other optimization algorithms in some fields, but it also has drawbacks: it easily falls into local optima and suffers from slow convergence in the later stages of the search. This paper proposes an improved version of PSO, called Self Adaptive Acceleration Factors in PSO (SAAFPSO), to balance exploration and exploitation. We convert the constant acceleration factors used in standard PSO into functions of a particle's fitness. A fitter particle gives more importance to the global best particle and less to itself, which helps avoid local convergence. In the later stages the particles become fitter, so all of them move towards the global best particle, which improves the convergence speed. Experiments are performed on 14 unbiased benchmark optimization functions and one real world engineering optimization problem (pressure vessel design), and the results show that the proposed SAAFPSO outperforms both standard PSO and artificial bee colony (ABC).

Shimpi Singh Jadon
ABV-Indian Institute of Information Technology and Management, Gwalior
Tel.: +09926262325, Fax: +0751-2449813, e-mail: [email protected]

Harish Sharma
ABV-Indian Institute of Information Technology and Management, Gwalior
Tel.: +09479810157, Fax: +0751-2449813, e-mail: [email protected]

Jagdish Chand Bansal
ABV-Indian Institute of Information Technology and Management, Gwalior
Tel.: +0751-2449819, Fax: +0751-2449813, e-mail: [email protected]

Ritu Tiwari
ABV-Indian Institute of Information Technology and Management, Gwalior
Tel.: +09479810157, Fax: +0751-2449813, e-mail: [email protected]



Key words: Particle Swarm Optimization, Artificial Bee Colony, Swarm intelligence, Acceleration factor, Optimization

1 Introduction

Inspired by the social behavior of fish schooling and bird flocking during food search, Kennedy and Eberhart [15], [8] developed a swarm intelligence based optimization technique called particle swarm optimization (PSO) in 1995. PSO is a population based, easy to understand and implement, robust meta-heuristic optimization algorithm. PSO can be a good choice for multimodal, non-convex, non-linear and complex optimization problems, but like any other evolutionary algorithm it also has drawbacks: it can get trapped in local optima [19], and it is computationally inefficient as measured by the number of function evaluations required [5]. These weaknesses restrict the applicability of PSO [18]. Researchers are continuously working towards two goals, increasing convergence speed and avoiding local optima, in order to widen PSO's applicability. As a result, a large number of PSO variants have been proposed [22], [19], [31], [32], [13], [5] to get rid of these weaknesses. However, achieving both goals simultaneously is difficult. For example, Liang et al. proposed the comprehensive learning PSO (CLPSO) [19], which aims at avoiding local optima, but the results show that it suffers from slow convergence. Ratnaweera et al. [22] proposed time varying acceleration factors to balance the cognitive and social components in the early and later stages. Zhan et al. [31] also adapted the acceleration factors, increasing or decreasing them depending on whether the search is in an exploration or an exploitation stage. Zhang et al. [32] studied the effect of these factors on position expectation and variance and suggested that setting the cognitive acceleration factor to 1.85 and the social acceleration factor to 2 works well for improving system stability. Gai-yun et al. [13] also worked on self adaptation of the cognitive and social factors. After analyzing these drawbacks of PSO, we propose self adaptive acceleration factors for the PSO algorithm.

The rest of the paper is organized as follows: standard PSO is explained in Section 2. In Section 3, the self adaptive acceleration factor PSO is proposed. In Section 4, the performance of the proposed strategy is analyzed. Finally, Section 5 concludes the paper.


2 Standard Particle Swarm Optimization Algorithm

PSO is an optimization technique which simulates the flocking behavior of birds. PSO maintains a dynamic population of active, interacting agents with very little inherent intelligence of their own. In PSO, the whole group is called a swarm and each individual is called a particle, which represents a candidate solution. The swarm finds food through social learning, by observing the behavior of nearby birds which appear to be near the food source. Initially, each particle is placed randomly within the search space and keeps in its memory its personal best position, known as pbest, the swarm's best position, known as gbest, and the current velocity V with which it is moving. Based on these three values, each particle updates its position. In this manner, the whole swarm moves in a better direction, following a collaborative trial-and-error method, and converges to a single best known solution.

For a D-dimensional search space, the i-th particle of the swarm at time step t is represented by a D-dimensional vector X_i^t = (x_{i1}, x_{i2}, ..., x_{iD}). The velocity of this particle is represented by another D-dimensional vector V_i^t = (v_{i1}, v_{i2}, ..., v_{iD}). The previously best visited position of the i-th particle is denoted by P_i^t = (p_{i1}, p_{i2}, ..., p_{iD}), and g is the index of the best particle in the swarm. The swarm moves using two equations: the velocity of the i-th particle is updated using the velocity update equation (1), and the position is updated using (2):

v_{id}^{t+1} = v_{id}^t + c_1 r_1 (p_{id}^t - x_{id}^t) + c_2 r_2 (p_{gd}^t - x_{id}^t)    (1)

x_{id}^{t+1} = x_{id}^t + v_{id}^{t+1}    (2)

where d = 1, 2, ..., D represents the dimension and i = 1, 2, ..., S represents the particle index, S being the size of the swarm. c_1 and c_2 are constants (usually c_1 = c_2), called the cognitive and social scaling parameters respectively, or simply the acceleration coefficients. r_1 and r_2 are random numbers in the range [0, 1] drawn from a uniform distribution.

The right hand side of the velocity update equation (1) consists of three terms. The first term, v_{id}^t, is the memory of the previous direction of movement; it can be thought of as a momentum term and prevents the particle from drastically changing direction. The second term, c_1 r_1 (p_{id}^t - x_{id}^t), is called the cognitive component or persistence; it draws the particle back towards its own previous best position and enables the local search in the swarm. The last term, c_2 r_2 (p_{gd}^t - x_{id}^t), is known as the social component; it allows an individual to compare itself with the others in its group and is responsible for the global search. The pseudo-code of particle swarm optimization is given in Algorithm 1.

Algorithm 1 Particle Swarm Optimization Algorithm:
  Initialize the parameters w, c_1 and c_2;
  Initialize the particle positions and their velocities in the search space;
  Evaluate the fitness of the individual particles;
  Store gbest and pbest;
  while stopping condition(s) not true do
    for each individual X_i do
      for each dimension d do
        (i) Evaluate the velocity v_{id} using (1);
        (ii) Evaluate the position x_{id} using (2);
      end for
    end for
    Evaluate the fitness of the updated particles;
    Update gbest and pbest;
  end while
  Return the individual with the best fitness as the solution;

Based on the neighborhood size, two versions of the PSO algorithm were initially presented in the literature: the global version, which is the original PSO (PSO-G), and the local version (PSO-L) [23]. The only difference between PSO-G and PSO-L is the term p_g in the social component of the velocity update equation (1): for PSO-G it refers to the best particle of the whole swarm, while for PSO-L it represents the best particle of the individual's neighborhood. The social network employed by PSO-G reflects a star topology, which offers faster convergence but is very likely to converge prematurely, whereas PSO-L uses a ring social network topology in which a smaller neighborhood is defined for each particle. It can easily be observed that, due to the lower particle inter-connectivity of PSO-L, it is less susceptible to being trapped in local minima, but at the cost of slower convergence. In general, PSO-G performs better on unimodal problems and PSO-L on multimodal problems.
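To make the update equations concrete, the following is a minimal Python sketch of the global version (PSO-G) built directly on equations (1) and (2); the swarm size, iteration budget, box clipping and sphere objective are illustrative choices, not settings prescribed by the paper.

import numpy as np

def pso_g(f, dim, lo, hi, swarm_size=50, iters=1000, c1=2.0, c2=2.0, seed=0):
    """Minimal PSO-G sketch following equations (1) and (2)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (swarm_size, dim))               # particle positions
    v = rng.uniform(-(hi - lo), hi - lo, (swarm_size, dim))  # particle velocities
    pbest = x.copy()                                         # personal best positions
    pbest_val = np.array([f(p) for p in x])                  # personal best values
    g = int(pbest_val.argmin())                              # index of the global best
    for _ in range(iters):
        r1 = rng.random((swarm_size, dim))                   # uniform r1 in [0, 1]
        r2 = rng.random((swarm_size, dim))                   # uniform r2 in [0, 1]
        v = v + c1 * r1 * (pbest - x) + c2 * r2 * (pbest[g] - x)  # equation (1)
        x = np.clip(x + v, lo, hi)                           # equation (2), clipped to the box
        val = np.array([f(p) for p in x])
        better = val < pbest_val                             # minimization
        pbest[better], pbest_val[better] = x[better], val[better]
        g = int(pbest_val.argmin())
    return pbest[g], pbest_val[g]

# Example: minimize the 10-dimensional sphere function
best_x, best_val = pso_g(lambda p: float(np.sum(p * p)), dim=10, lo=-100.0, hi=100.0)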

The velocity update equation in PSO determines the balance between the exploration and exploitation capabilities of PSO. In basic PSO, no bounds were defined for the velocity, because of which, in early iterations, particles far from gbest take large step sizes and are very likely to leave the search space. To control the velocity so that the particle update step size stays balanced, the concept of velocity clamping was introduced: whenever the velocity exceeds its bounds, it is set to its bounds. To avoid the use of velocity clamping and to balance exploration and exploitation, two new parameters, called the inertia weight [25] and the constriction factor [6], were introduced in the velocity update equation as:

v_{id}^{t+1} = w · v_{id}^t + c_1 r_1 (p_{id}^t - x_{id}^t) + c_2 r_2 (p_{gd}^t - x_{id}^t)    (3)

v_{id}^{t+1} = χ · (v_{id}^t + c_1 r_1 (p_{id}^t - x_{id}^t) + c_2 r_2 (p_{gd}^t - x_{id}^t))    (4)

where the inertia weight and the constriction factor are denoted by w and χ respectively. In the velocity update equation (1) of PSO, if we remove the first term, then a flying particle's velocity is determined only by its current position and its best position in history, and the velocity itself is memoryless. The search process of PSO without the first term can therefore be seen as a process in which the search space statistically shrinks through the generations, so that PSO starts to behave like a local search algorithm; only when the global optimum is within the initial search space is there a chance for PSO to find the solution. On the other hand, with the first term, the particles have a tendency to expand the search space, that is, they have the ability to explore new areas. Considering this, both w and χ were introduced to balance the global search and the local search, though with the constriction factor, velocity clamping was completely ignored. As far as the optimal values of these parameters are concerned, researchers are continuously trying to fine tune them and have suggested alternative values [25, 12, 20, 30, 10, 3, 17] that can be used depending on the nature of the problem, since different problems require different balances between local and global search ability. Eberhart and Shi [9] concluded that, as a rule of thumb, the best approach is to use the constriction factor while limiting V_max to X_max, or to use the inertia weight approach while selecting w, c_1 and c_2 according to the equation:

χ = 2 / |2 - φ - √(φ(φ - 4))|,  where φ = c_1 r_1 + c_2 r_2, φ > 4.    (5)
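As a quick check of equation (5), the sketch below computes χ under the common convention φ = c_1 + c_2 (i.e., taking r_1 = r_2 = 1); the setting c_1 = c_2 = 2.05 is a conventional choice from the literature, not a value prescribed by this paper.

import math

def constriction_factor(c1, c2):
    """chi from equation (5), assuming phi = c1 + c2 > 4 (r1 = r2 = 1)."""
    phi = c1 + c2
    if phi <= 4:
        raise ValueError("equation (5) requires phi > 4")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * (phi - 4.0)))

print(constriction_factor(2.05, 2.05))  # ~0.7298 for the conventional setting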

3 Self Adaptive Acceleration Factor in Particle Swarm Optimization

Although standard PSO can obtain a good solution at a significantly faster rate than other optimization techniques, it is weak at refining the optimum solution, mainly due to reduced diversity in the later stages of the search [2]. On the other hand, problem-based tuning of parameters is also important in PSO for obtaining the optimum solution accurately and efficiently [26]. The velocity update equation (1) of standard PSO contains three terms: the first term provides the global search capability, while the second and third terms provide the particle's cognitive and social information sharing capabilities respectively. A larger cognitive component forces a particle to move quickly towards its personal best position, and a larger social component forces it to move quickly towards the global best position. As can be seen from (1), the movement of the swarm towards the optimum solution is guided by the acceleration factors c_1 and c_2. Therefore, the acceleration coefficients c_1 and c_2 should be tuned carefully to get the desired solution.

Kennedy and Eberhart [15] explained that a larger cognitive component compared to the social component results in excessive wandering of individuals through the search space, while a larger social component may cause particles to converge prematurely towards a local optimum. Moreover, they advised setting both coefficients to 2, so that the mean of both factors in the velocity update equation is unity and particles overfly the target region about half the time. Since then, most researchers have used this setting extensively. Suganthan [27] tested linearly decreasing both factors over time and observed that the fixed value of 2 produces better solutions; however, based on empirical studies, he also advised that the acceleration factors should not always be equal to 2.


Generally, in population-based optimization methods, individuals should be spread through the entire feasible region in the early stages and should not gather at one point, while in the later stages the solutions should be fine tuned so that they concentrate towards the global optimum.

After analyzing these concerns, we propose fitness based acceleration factors as a new parameter strategy for PSO, simply to explore the search space in the early stages and to fine tune the solutions in the final stages. The proposed modification can be expressed mathematically as follows:

c_1 = 1.2 - prob_i    (6)

c_2 = 0.5 + prob_i    (7)

where c_1 and c_2 are the cognitive component's coefficient and the social component's coefficient respectively, and prob_i is the relative fitness of the i-th particle in the population at time t, calculated as:

prob_i = fit_i / max_fit    (8)

where fit_i is the fitness of the i-th particle and max_fit is the fitness of the best particle in the current population, so that prob_i lies in the range [0, 1]. The constants 1.2 and 0.5 in equations (6) and (7) are chosen based on empirical studies. In this way, after each iteration the probability of each particle is calculated and used to decide the values of the acceleration factors. Initially, each particle has low fitness, so the cognitive coefficient is large and the particles are better able to explore the search space; in the later stages, all particles have higher fitness, so the cognitive coefficient is small and the particles search in small neighborhoods of the existing solutions, i.e., they fine tune the existing solutions.
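The following is a minimal sketch of the proposed coefficient update, equations (6)-(8); it assumes fitness values are non-negative with larger values being better, so that prob_i lies in [0, 1].

import numpy as np

def saaf_coefficients(fitness):
    """Per-particle acceleration factors from equations (6)-(8)."""
    prob = fitness / fitness.max()   # equation (8): relative fitness in [0, 1]
    c1 = 1.2 - prob                  # equation (6): cognitive coefficient
    c2 = 0.5 + prob                  # equation (7): social coefficient
    return c1, c2

fit = np.array([0.1, 0.5, 1.0])
c1, c2 = saaf_coefficients(fit)
print(c1)  # [1.1 0.7 0.2]: the fittest particle has the smallest cognitive pull
print(c2)  # [0.6 1.0 1.5]: and the largest social pull towards gbest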

4 Experiments and Results

4.1 Test problems under consideration

To validate the discussion, we compare the proposed strategy with standard PSO over the 14 unbiased benchmark optimization functions given in Table 1. Further, to test the robustness of the proposed strategy, a well known engineering optimization problem, f_15, the pressure vessel design problem (confinement method) [28], is also solved. The considered engineering optimization problem is described as follows:

Pressure vessel design: the pressure vessel design problem is an optimization problem in which the total cost of the material, forming and welding of a cylindrical vessel is minimized [28]. Four design variables are involved: x_1 (T_s, shell thickness), x_2 (T_h, spherical head thickness), x_3 (R, radius of the cylindrical shell) and x_4 (L, shell length). The mathematical formulation of this typical constrained optimization problem is as follows:

f_15(x) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1611 x_1^2 x_4 + 19.84 x_1^2 x_3

subject to

g_1(x) = 0.0193 x_3 - x_1
g_2(x) = 0.00954 x_3 - x_2
g_3(x) = 750 × 1728 - π x_3^2 (x_4 + (4/3) x_3)

The search boundaries for the variables are

1.125 ≤ x_1 ≤ 12.5,
0.625 ≤ x_2 ≤ 12.5,
1.0 × 10^-8 ≤ x_3 ≤ 240 and
1.0 × 10^-8 ≤ x_4 ≤ 240.

The best known global optimum is f(1.125, 0.625, 55.8592, 57.7315) = 7197.729 [28]. For a successful run, the minimum error criterion is fixed at 1.0E-05, i.e., an algorithm is considered successful if it finds an error smaller than the acceptable error within the specified maximum number of function evaluations.
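For reference, a Python sketch of f_15 with the three constraints handled by a simple penalty; the penalty constant is an illustrative choice, since the paper does not spell out the details of its confinement method.

import math

def pressure_vessel(x, penalty=1e15):
    """f15: total vessel cost, with violations of g1-g3 penalized."""
    x1, x2, x3, x4 = x  # Ts, Th, R, L
    cost = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1611 * x1**2 * x4 + 19.84 * x1**2 * x3)
    g1 = 0.0193 * x3 - x1
    g2 = 0.00954 * x3 - x2
    g3 = 750 * 1728 - math.pi * x3**2 * (x4 + (4.0 / 3.0) * x3)
    violation = max(g1, 0.0) + max(g2, 0.0) + max(g3, 0.0)  # feasible iff all g <= 0
    return cost + penalty * violation

# Evaluate at the best known solution reported above
print(pressure_vessel([1.125, 0.625, 55.8592, 57.7315]))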


Table 1: Test problems

Test Problem | Objective function | Search Range | Optimum Value | D | Acceptable Error
Beale function | f_1(x) = [1.5 - x_1(1 - x_2)]^2 + [2.25 - x_1(1 - x_2^2)]^2 + [2.625 - x_1(1 - x_2^3)]^2 | [-4.5, 4.5] | f(3, 0.5) = 0 | 2 | 1.0E-05
Colville function | f_2(x) = 100[x_2 - x_1^2]^2 + (1 - x_1)^2 + 90(x_4 - x_3^2)^2 + (1 - x_3)^2 + 10.1[(x_2 - 1)^2 + (x_4 - 1)^2] + 19.8(x_2 - 1)(x_4 - 1) | [-10, 10] | f(1) = 0 | 4 | 1.0E-05
Branins's function | f_3(x) = a(x_2 - b x_1^2 + c x_1 - d)^2 + e(1 - f) cos x_1 + e | -5 ≤ x_1 ≤ 10, 0 ≤ x_2 ≤ 15 | f(-π, 12.275) = 0.3979 | 2 | 1.0E-05
Kowalik function | f_4(x) = ∑_{i=1}^{11} [a_i - x_1(b_i^2 + b_i x_2)/(b_i^2 + b_i x_3 + x_4)]^2 | [-5, 5] | f(0.1928, 0.1908, 0.1231, 0.1357) = 3.07E-04 | 4 | 1.0E-05
2D Tripod | f_5(x) = p(x_2)(1 + p(x_1)) + |x_1 + 50 p(x_2)(1 - 2 p(x_1))| + |x_2 + 50(1 - 2 p(x_2))| | [-100, 100] | f(0, -50) = 0 | 2 | 1.0E-04
Shifted Rosenbrock | f_6(x) = ∑_{i=1}^{D-1} (100(z_i^2 - z_{i+1})^2 + (z_i - 1)^2) + f_bias, z = x - o + 1, x = [x_1, x_2, ..., x_D], o = [o_1, o_2, ..., o_D] | [-100, 100] | f(o) = f_bias = 390 | 10 | 1.0E-01
Shifted Sphere | f_7(x) = ∑_{i=1}^{D} z_i^2 + f_bias, z = x - o | [-100, 100] | f(o) = f_bias = -450 | 10 | 1.0E-05
Shifted Rastrigin | f_8(x) = ∑_{i=1}^{D} (z_i^2 - 10 cos(2π z_i) + 10) + f_bias, z = x - o | [-5, 5] | f(o) = f_bias = -330 | 10 | 1.0E-02
Shifted Schwefel | f_9(x) = ∑_{i=1}^{D} (∑_{j=1}^{i} z_j)^2 + f_bias, z = x - o | [-100, 100] | f(o) = f_bias = -450 | 10 | 1.0E-05
Shifted Griewank | f_10(x) = ∑_{i=1}^{D} z_i^2/4000 - ∏_{i=1}^{D} cos(z_i/√i) + 1 + f_bias, z = x - o | [-600, 600] | f(o) = f_bias = -180 | 10 | 1.0E-05
Goldstein-Price | f_11(x) = (1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2)) (30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2)) | [-2, 2] | f(0, -1) = 3 | 2 | 1.0E-14
Easom's function | f_12(x) = -cos x_1 cos x_2 exp(-(x_1 - π)^2 - (x_2 - π)^2) | [-10, 10] | f(π, π) = -1 | 2 | 1.0E-13
Shubert | f_13(x) = -∑_{i=1}^{5} i cos((i+1)x_1 + 1) ∑_{i=1}^{5} i cos((i+1)x_2 + 1) | [-10, 10] | f(7.0835, 4.8580) = -186.7309 | 2 | 1.0E-05
Sinusoidal | f_14(x) = -[A ∏_{i=1}^{n} sin(x_i - z) + ∏_{i=1}^{n} sin(B(x_i - z))], A = 2.5, B = 5, z = 30 | [0, 180] | f(90 + z) = -(A + 1) | 10 | 1.00E-02


4.2 Experimental setting

To test the efficiency of SAAFPSO, it is compared with PSO and ABC. The following experimental setting is adopted for SAAFPSO, PSO and ABC over the considered problems:

Parameter setting for PSO and SAAFPSO:

• Swarm size NP = 50,
• Inertia weight w = 0.8,
• Acceleration coefficients c_1 = c_2 = 0.5 + log 2 for PSO [16], and as per equations (6) and (7) for SAAFPSO,
• Number of simulations/runs = 100.

Parameter setting for ABC:

• Colony size NP = 50 [7, 11],
• φ_ij = rand[-1, 1],
• Number of food sources SN = NP/2,
• limit = D × SN [14, 1],
• Number of simulations/runs = 100.

4.3 Results Comparison

Numerical results with the experimental setting of Subsection 4.2 are given in Table 2, which reports the standard deviation (SD), mean error (ME), average number of function evaluations (AFE) and success rate (SR). Table 2 shows that SAAFPSO outperforms PSO and ABC most of the time in terms of reliability, efficiency and accuracy. Some more intensive analyses based on the acceleration rate (AR) [21], performance indices and boxplots have also been carried out for the results of SAAFPSO, PSO and ABC.

Table 2: Comparison of the results of test problems

Test Function | Algorithm | SD | ME | AFE | SR
f1 | PSO | 2.78E-06 | 5.01E-06 | 2717 | 100
f1 | SAAFPSO | 2.71E-06 | 5.22E-06 | 2269.5 | 100
f1 | ABC | 1.66E-06 | 8.64E-06 | 16520.09 | 100
f2 | PSO | 1.98E-04 | 8.40E-04 | 52701 | 99
f2 | SAAFPSO | 1.98E-04 | 8.41E-04 | 22996 | 100
f2 | ABC | 1.03E-01 | 1.67E-01 | 199254.48 | 1
f3 | PSO | 3.94E-06 | 5.98E-06 | 30590 | 86
f3 | SAAFPSO | 3.43E-06 | 5.78E-06 | 22826 | 90
f3 | ABC | 6.83E-06 | 6.05E-06 | 1925.52 | 100
f4 | PSO | 8.28E-05 | 9.90E-05 | 39473.5 | 99
f4 | SAAFPSO | 1.07E-05 | 9.16E-05 | 21835.5 | 100
f4 | ABC | 7.33E-05 | 1.76E-04 | 180578.91 | 18
f5 | PSO | 3.54E-01 | 1.20E-01 | 33546 | 89
f5 | SAAFPSO | 2.86E-01 | 9.01E-02 | 29985.5 | 91
f5 | ABC | 2.69E-05 | 6.42E-05 | 8771.65 | 100
f6 | PSO | 7.97E+00 | 2.53E+00 | 188678 | 52
f6 | SAAFPSO | 8.35E+00 | 1.55E+00 | 86424.5 | 82
f6 | ABC | 1.05E+00 | 6.36E-01 | 176098.02 | 23
f7 | PSO | 1.35E-06 | 8.35E-06 | 15862 | 100
f7 | SAAFPSO | 1.56E-06 | 7.88E-06 | 8652.5 | 100
f7 | ABC | 2.42E-06 | 7.16E-06 | 9013.5 | 100
f8 | PSO | 5.35E+00 | 3.61E+01 | 200050 | 0
f8 | SAAFPSO | 5.23E+00 | 3.95E+01 | 200050 | 0
f8 | ABC | 1.21E+01 | 8.91E+01 | 200011.71 | 0
f9 | PSO | 3.22E+03 | 1.40E+03 | 200050 | 0
f9 | SAAFPSO | 3.39E+03 | 3.68E+03 | 200050 | 0
f9 | ABC | 3.54E+03 | 1.11E+04 | 200029.02 | 0
f10 | PSO | 3.01E-02 | 4.63E-02 | 197240.5 | 2
f10 | SAAFPSO | 5.54E-03 | 3.29E-03 | 121521.5 | 62
f10 | ABC | 2.21E-03 | 6.95E-04 | 61650.9 | 90
f11 | PSO | 2.85E-15 | 5.70E-15 | 9722.5 | 100
f11 | SAAFPSO | 2.82E-15 | 5.54E-15 | 7688 | 100
f11 | ABC | 5.16E-06 | 1.04E-06 | 109879.46 | 62
f12 | PSO | 2.90E-14 | 5.19E-14 | 9758.5 | 100
f12 | SAAFPSO | 3.18E-14 | 5.12E-14 | 8953.5 | 100
f12 | ABC | 4.44E-05 | 1.60E-05 | 181447.91 | 17
f13 | PSO | 6.25E-04 | 1.96E-04 | 89428 | 68
f13 | SAAFPSO | 8.86E-05 | 2.48E-05 | 34221.5 | 93
f13 | ABC | 5.34E-06 | 4.86E-06 | 4752.21 | 100
f14 | PSO | 2.91E-01 | 4.09E-01 | 178643 | 21
f14 | SAAFPSO | 3.67E-01 | 4.03E-01 | 163543.5 | 35
f14 | ABC | 1.83E-03 | 7.77E-03 | 54159.26 | 99
f15 | PSO | 3.17E-05 | 2.92E-05 | 91222 | 65
f15 | SAAFPSO | 9.36E-02 | 1.56E-02 | 87339 | 76
f15 | ABC | 1.07E+01 | 1.69E+01 | 200024.49 | 0

SAAFPSO, PSO and ABC are compared through SR, ME and AFE in Table 2. First, SR is compared for all the algorithms; if the algorithms cannot be distinguished on the basis of SR, the comparison is made on the basis of AFE, and ME is used only when comparison on the basis of both SR and AFE is not possible. The outcome of this comparison is summarized in Table 3, where '+' indicates that SAAFPSO is better than the considered algorithm and '-' indicates that it is not better or the difference is very small. The last row of Table 3 establishes the superiority of SAAFPSO over PSO and ABC.

Further, we compare the convergence speed of the considered algorithms by measuring the average number of function evaluations (AFE); a smaller AFE means a higher convergence speed.


Table 3: Summary of Table 2 outcome

Function | SAAFPSO vs PSO | SAAFPSO vs ABC
f1 | + | +
f2 | + | +
f3 | + | -
f4 | + | +
f5 | + | -
f6 | + | +
f7 | + | +
f8 | - | +
f9 | - | -
f10 | + | -
f11 | + | +
f12 | + | +
f13 | + | -
f14 | + | -
f15 | + | +
Total number of '+' signs | 13 | 9

Table 4: Acceleration rate (AR) of SAAFPSO compared to PSO and ABC

Test Problem | PSO | ABC
f1 | 1.197179996 | 7.27917603
f2 | 2.291746391 | 8.664745173
f3 | 1.340138439 | 0.084356436
f4 | 1.807767168 | 8.269969087
f5 | 1.118740725 | 0.292529723
f6 | 2.183154082 | 2.037593738
f7 | 1.833227391 | 1.041722046
f8 | 1 | 0.999808598
f9 | 1 | 0.999895126
f10 | 1.623091387 | 0.507325041
f11 | 1.264633195 | 14.29233351
f12 | 1.089908974 | 20.26558441
f13 | 2.613210993 | 0.13886621
f14 | 1.092327118 | 0.33116119
f15 | 1.044458947 | 2.290208154

In order to minimize the effect of the stochastic nature of the algorithms, the reported function evaluations for each test problem are averaged over 100 runs. To compare convergence speeds, we use the acceleration rate (AR) [21], defined as follows, based on the AFEs of the two algorithms ALGO and SAAFPSO:

AR = AFE_ALGO / AFE_SAAFPSO,    (9)

where ALGO ∈ {PSO, ABC} and AR > 1 means SAAFPSO is faster. To investigate the AR of the proposed algorithm compared to PSO and ABC, the results of Table 2 are analyzed and the AR values are calculated using equation (9). Table 4 shows a clear comparison between SAAFPSO and PSO, and between SAAFPSO and ABC, in terms of AR. It is clear from Table 4 that the convergence speed of SAAFPSO is the fastest among the considered algorithms.
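Equation (9) amounts to a single division of tabulated AFEs; for example, the f1 row of Table 2 reproduces the f1 entries of Table 4.

def acceleration_rate(afe_algo, afe_saafpso):
    """AR from equation (9); AR > 1 means SAAFPSO converges faster."""
    return afe_algo / afe_saafpso

# AFE values for f1 taken from Table 2
print(acceleration_rate(2717.0, 2269.5))    # PSO vs SAAFPSO: ~1.1972
print(acceleration_rate(16520.09, 2269.5))  # ABC vs SAAFPSO: ~7.2792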

For comparison in terms of consolidated performance, boxplot analyses have been carried out for all the considered algorithms; the boxplot is an efficient graphical tool for representing the empirical distribution of data [29]. The boxplots for PSO, SAAFPSO and ABC are shown in Figure 1. It is clear from this figure that SAAFPSO is better than the considered algorithms, as its interquartile range and median are comparatively low.

[Figure: boxplots of the average number of function evaluations (vertical axis, 0 to 2 × 10^5) for PSO, SAAFPSO and ABC.]

Fig. 1: Boxplot graphs of the average number of function evaluations

Further, to compare the considered algorithms while giving weighted importance to the success rate, the standard deviation and the average number of function evaluations, performance indices (PI) are calculated [4]. The PI values for PSO, SAAFPSO and ABC are calculated using the following equation:

PI = (1 / N_p) ∑_{i=1}^{N_p} (k_1 α_1^i + k_2 α_2^i + k_3 α_3^i)

where α_1^i = Sr^i / Tr^i;  α_2^i = Mf^i / Af^i if Sr^i > 0 and 0 if Sr^i = 0;  α_3^i = Mo^i / Ao^i,  for i = 1, 2, ..., N_p, and

• Sr^i = number of successful simulations/runs for the i-th problem,
• Tr^i = total number of simulations for the i-th problem,
• Mf^i = minimum of the average number of function evaluations used to obtain the required solution of the i-th problem,
• Af^i = average number of function evaluations used by an algorithm to obtain the required solution of the i-th problem,
• Mo^i = minimum of the standard deviations obtained for the i-th problem,
• Ao^i = standard deviation obtained by an algorithm for the i-th problem,
• N_p = total number of optimization problems evaluated.

The weights assigned to the success rate, the average number of function evaluations and the standard deviation are k_1, k_2 and k_3 respectively, where k_1 + k_2 + k_3 = 1 and 0 ≤ k_1, k_2, k_3 ≤ 1. To calculate the PIs, equal weights are assigned to two of the variables while the weight of the remaining variable varies from 0 to 1, as in [24]. The resulting cases are listed below, followed by a sketch of the PI computation:

1. k_1 = W, k_2 = k_3 = (1 - W)/2, 0 ≤ W ≤ 1;
2. k_2 = W, k_1 = k_3 = (1 - W)/2, 0 ≤ W ≤ 1;
3. k_3 = W, k_1 = k_2 = (1 - W)/2, 0 ≤ W ≤ 1.
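The following is a minimal sketch of the PI computation under these cases; the per-problem inputs would be taken from Table 2, and the numbers below are only illustrative.

import numpy as np

def performance_index(sr, tr, mf, af, mo, ao, k1, k2, k3):
    """PI of [4]: weighted mix of success rate, AFE ratio and SD ratio."""
    alpha1 = sr / tr                          # success-rate term
    alpha2 = np.where(sr > 0, mf / af, 0.0)   # AFE term, zero for unsolved problems
    alpha3 = mo / ao                          # standard-deviation term
    return float(np.mean(k1 * alpha1 + k2 * alpha2 + k3 * alpha3))

# Case (1) with W = 0.5, i.e. k1 = 0.5, k2 = k3 = 0.25, on two toy problems
sr = np.array([100.0, 90.0]); tr = np.array([100.0, 100.0])
mf = np.array([2269.5, 22826.0]); af = np.array([2717.0, 30590.0])
mo = np.array([1.7e-6, 3.4e-6]); ao = np.array([2.8e-6, 3.9e-6])
print(performance_index(sr, tr, mf, af, mo, ao, k1=0.5, k2=0.25, k3=0.25))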

The graphs corresponding to cases (1), (2) and (3) for PSO, SAAFPSO and ABC are shown in Figures 2(a), 2(b) and 2(c) respectively. In these figures, the horizontal axis represents the weights k_1, k_2 and k_3, while the vertical axis represents the PI.

In case (1), the average number of function evaluations and the standard deviation are given equal weights. The PIs of the considered algorithms are superimposed in Figure 2(a) for a performance comparison; it is observed that the PI of SAAFPSO is higher than those of the considered algorithms. In case (2), equal weights are assigned to the success rate and the standard deviation, and in case (3), equal weights are assigned to the success rate and the average function evaluations. It is clear from Figures 2(b) and 2(c) that the algorithms perform the same as in case (1).

5 Conclusion

In this paper, a self adaptive strategy for the acceleration factors is presented to balance exploration and exploitation in the search space. In the proposed strategy, the acceleration factors are adaptively modified by a probability which is a function of fitness; therefore, each individual has its own acceleration factors based on its fitness. The acceleration rate analysis shows that the proposed strategy speeds up the convergence of PSO. Experiments over 14 unbiased benchmark test functions and one real world engineering optimization problem show that the proposed strategy outperforms both PSO and basic ABC in terms of reliability, efficiency and accuracy.

References

[1] B. Akay and D. Karaboga. A modified artificial bee colony algorithm for real-parameter optimization. Information Sciences, 2010.


[Figure: three panels plot the performance index (vertical axis, 0 to 1) of PSO, SAAFPSO and ABC against the weight k_1, k_2 or k_3 respectively (horizontal axis, 0 to 1).]

Fig. 2: Performance index for test problems; (a) for case (1), (b) for case (2) and (c) for case (3).

[2] P. Angeline. Evolutionary optimization versus particle swarm optimization: philosophy and performance differences. In Evolutionary Programming VII, pages 601–610. Springer, 1998.

[3] M. S. Arumugam and M. V. C. Rao. On the performance of the particle swarm optimization algorithm with various inertia weight variants for computing optimal control of a class of hybrid systems. Discrete Dynamics in Nature and Society, 2006, 2006.

[4] J. C. Bansal and H. Sharma. Cognitive learning in differential evolution and its application to model order reduction problem for single-input single-output systems. Memetic Computing, pages 1–21, 2012.

[5] G. Ciuprina, D. Ioan, and I. Munteanu. Use of intelligent-particle swarm optimization in electromagnetics. IEEE Transactions on Magnetics, 38(2):1037–1040, 2002.

[6] M. Clerc. The swarm and the queen: towards a deterministic and adaptive particle swarm optimization. In Proceedings of the 1999 Congress on Evolutionary Computation (CEC 99), volume 3. IEEE, 1999.

[7] K. Diwold, A. Aderhold, A. Scheidler, and M. Middendorf. Performance evaluation of artificial bee colony optimization and new selection schemes. Memetic Computing, pages 1–14, 2011.

[8] R. Eberhart and J. Kennedy. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS '95), pages 39–43. IEEE, 1995.

[9] R. C. Eberhart and Y. Shi. Comparing inertia weights and constriction factors in particle swarm optimization. In Proceedings of the 2000 Congress on Evolutionary Computation, volume 1, pages 84–88. IEEE, 2000.

[10] R. C. Eberhart and Y. Shi. Tracking and optimizing dynamic systems with particle swarms. In Proceedings of the 2001 Congress on Evolutionary Computation, volume 1, pages 94–100. IEEE, 2001.

[11] M. El-Abd. Performance assessment of foraging algorithms vs. evolutionary algorithms. Information Sciences, 2011.

[12] J. Ememipour, M. M. S. Nejad, M. M. Ebadzadeh, and J. Rezanejad. Introduce a new inertia weight for particle swarm optimization. In Proceedings of the Fourth International Conference on Computer Sciences and Convergence Information Technology (ICCIT '09), pages 1650–1653. IEEE, 2009.

[13] W. Gai-yun and H. Dong-xue. Particle swarm optimization based on self-adaptive acceleration factors. In Proceedings of the Third International Conference on Genetic and Evolutionary Computing (WGEC '09), pages 637–640. IEEE, 2009.

[14] D. Karaboga and B. Basturk. Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems. Foundations of Fuzzy Logic and Soft Computing, pages 789–798, 2007.

[15] J. Kennedy and R. Eberhart. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, volume 4, pages 1942–1948. IEEE, 1995.

[16] J. J. Kim, S. Y. Park, and J. J. Lee. Experience repository based particle swarm optimization for evolutionary robotics. In ICCAS-SICE 2009, pages 2540–2544. IEEE, 2009.

[17] H. R. Li and Y. L. Gao. Particle swarm optimization algorithm with exponent decreasing inertia weight and stochastic mutation. In Proceedings of the Second International Conference on Information and Computing Science, pages 66–69. IEEE, 2009.

[18] X. D. Li and A. P. Engelbrecht. Particle swarm optimization: an introduction and its recent developments. In Genetic and Evolutionary Computation Conference, pages 3391–3414, 2007.

[19] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Transactions on Evolutionary Computation, 10(3):281–295, 2006.

[20] R. F. Malik, T. A. Rahman, S. Z. M. Hashim, and R. Ngah. New particle swarm optimizer with sigmoid increasing inertia weight. International Journal of Computer Science and Security (IJCSS), 1(2):35, 2007.

[21] S. Rahnamayan, H. R. Tizhoosh, and M. M. A. Salama. Opposition-based differential evolution. IEEE Transactions on Evolutionary Computation, 12(1):64–79, 2008.

[22] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Transactions on Evolutionary Computation, 8(3):240–255, 2004.

[23] R. C. Eberhart and J. Kennedy. A new optimizer using particle swarm theory. In Proceedings of the Sixth Symposium on Micro Machine and Human Science, pages 39–43, Nagoya, Japan, 1995.

[24] H. Sharma, J. Bansal, and K. Arya. Dynamic scaling factor based differential evolution algorithm. In Proceedings of the International Conference on Soft Computing for Problem Solving (SocProS 2011), December 20-22, 2011, pages 73–85. Springer, 2012.

[25] Y. Shi and R. Eberhart. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pages 69–73. IEEE, 1998.

[26] Y. Shi and R. Eberhart. Parameter selection in particle swarm optimization. In Evolutionary Programming VII, pages 591–600. Springer, 1998.

[27] P. N. Suganthan. Particle swarm optimiser with neighbourhood operator. In Proceedings of the 1999 Congress on Evolutionary Computation (CEC 99), volume 3. IEEE, 1999.

[28] X. Wang, X. Z. Gao, and S. J. Ovaska. A simulated annealing-based immune optimization method. In Proceedings of the International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning, pages 41–47, Porvoo, Finland, 2008.

[29] D. F. Williamson, R. A. Parker, and J. S. Kendrick. The box plot: a simple visual method to interpret data. Annals of Internal Medicine, 110(11):916, 1989.

[30] J. Xin, G. Chen, and Y. Hai. A particle swarm optimizer with multi-stage linearly-decreasing inertia weight. In Proceedings of the International Joint Conference on Computational Sciences and Optimization (CSO 2009), volume 1, pages 505–508. IEEE, 2009.

[31] Z. H. Zhan, J. Zhang, Y. Li, and H. S. H. Chung. Adaptive particle swarm optimization. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 39(6):1362–1381, 2009.

[32] W. Zhang, H. Li, Z. Zhang, and H. Wang. The selection of acceleration factors for improving stability of particle swarm optimization. In Proceedings of the Fourth International Conference on Natural Computation (ICNC '08), volume 1, pages 376–380. IEEE, 2008.
