
Engineering Optimization, Vol. 40, No. 11, November 2008, 1031–1049

Solving constrained optimization problems with hybrid particle swarm optimization

Erwie Zahara* and Chia-Hsin Hu

Department of Industrial Engineering and Management, St. John's University, Tamsui, Taiwan 251, Republic of China

(Received 17 August 2007; final version received 9 January 2008)

Constrained optimization problems (COPs) are very important in that they frequently appear in the real world. A COP, in which both the function and constraints may be nonlinear, consists of the optimization of a function subject to constraints. Constraint handling is one of the major concerns when solving COPs with particle swarm optimization (PSO) combined with the Nelder–Mead simplex search method (NM-PSO). This article proposes embedded constraint handling methods, which include the gradient repair method and the constraint fitness priority-based ranking method, as a special operator in NM-PSO for dealing with constraints. Experiments using 13 benchmark problems are explained and the NM-PSO results are compared with the best known solutions reported in the literature. Comparison with three different meta-heuristics demonstrates that NM-PSO with the embedded constraint operator is extremely effective and efficient at locating optimal solutions.

Keywords: constrained optimization; Nelder–Mead simplex search method; particle swarm optimization; constraint handling

1. Introduction

Constrained optimization is a mathematical programming technique that attempts to solve nonlinear objective functions, or deals with constraints that have a nonlinear relationship, or handles both at the same time. It has become an important branch of operations research and has a wide variety of applications in such areas as the military, economics, engineering optimization and management science (Nash and Sofer 1996). Constrained optimization problems (COPs) with $n$ variables and $m$ constraints may be written in the following canonical form:

$$\text{Minimize } f(x) \qquad (1)$$

$$\text{subject to } g_j(x) \le 0, \quad j = 1, \ldots, q$$

$$h_j(x) = 0, \quad j = q+1, \ldots, m$$

$$l_i \le x_i \le u_i, \quad i = 1, \ldots, n$$

*Corresponding author. Email: [email protected]


where $x = (x_1, x_2, \ldots, x_n)$ is a vector of $n$ decision variables, $f(x)$ is the objective function, $g_j(x) \le 0$ are the $q$ inequality constraints and $h_j(x) = 0$ are the $m - q$ equality constraints. The values $u_i$ and $l_i$ are the upper and lower bounds of $x_i$, respectively.

Traditionally, most of the methods found in the literature search for approximate solutions to COPs and assume that the objective and constraints are differentiable (Fung et al. 2002). When the objective and constraints are not differentiable, heuristic methods are employed. Optimization methods using heuristics, which have helped researchers formulate such stochastic algorithms as genetic algorithms (Tang et al. 1998), evolutionary algorithms (Runarsson and Yao 2005) and particle swarm optimization (PSO) (Dong et al. 2005), are of limited practical use in solving COPs owing to slow convergence rates. Fan and Zahara (2007) demonstrated the hybrid Nelder–Mead simplex search and particle swarm optimization method (NM-PSO) to be a promising and viable tool for solving unconstrained nonlinear optimization problems. This article demonstrates that the algorithm can also be adapted to solve constrained nonlinear optimization problems. Until recently, the most commonly used constraint handling technique for stochastic optimization was the penalty method (Coello Coello 2002). The penalty method converts a COP into an unconstrained problem by adding the constraints to the objective function as a penalty term. When a solution is deemed infeasible, meaning that it satisfies only some of the constraints, the objective value is penalized by a penalty set beforehand. The difficulty of this method lies in selecting an appropriate penalty value for the problem at hand. To overcome this drawback, some researchers have suggested replacing the constant penalty by a variable penalty sequence adjusted according to the degree of constraint violation (Farmani and Wright 2003). However, this type of scheme usually runs into the difficulty of selecting yet another set of parameters (Coello Coello 2002). In contrast, a constraint fitness priority-based ranking method (Dong et al. 2005) deals with the difficulty of selecting an appropriate penalty value by constructing a constraint fitness function. In this work, the constraint fitness priority-based ranking method (Dong et al. 2005) and a repair method using gradient information derived from the constraint set (Chootinan and Chen 2006) are embedded in NM-PSO to evaluate the particles during the search when solving COPs. The superior performance of the proposed NM-PSO is demonstrated by solving 13 benchmark COPs.

The remainder of this article is organized as follows. The gradient repair method and the constraint fitness priority-based ranking method for handling constraints are briefly discussed in Section 2. The framework of the hybrid Nelder–Mead simplex search and PSO is detailed in Section 3. In Section 4, the results obtained in this study are compared against the optimal or best known solutions of the 13 benchmark problems reported in the literature. The application of the proposed method to a ceramic grinding optimization problem is discussed in Section 5; finally, conclusions are drawn in Section 6 based on the results.

2. Constraint handling methods

This section introduces two constraint handling methods, the gradient repair method and the constraint fitness priority-based ranking method, which will be embedded in NM-PSO.

2.1. The gradient repair method

The gradient repair method was proposed by Chootinan and Chen (2006). It uses gradient information derived from the constraint set to systematically repair infeasible solutions by directing them toward the feasible region. The procedure is summarized as follows.


(1) For any solution, determine the degree of constraint violation $\Delta V$ by Equation (2):

$$V = \begin{bmatrix} g_{q \times 1} \\ h_{(m-q) \times 1} \end{bmatrix}_{m \times 1} \Longrightarrow \Delta V = \begin{bmatrix} \min\{0, -g(x)\} \\ -h(x) \end{bmatrix} \qquad (2)$$

where $V$ consists of the vectors of inequality constraints ($g$) and equality constraints ($h$) for the problem.

(2) Compute $\nabla_x V$, the matrix of derivatives of these constraints with respect to the solution vector (the $n$ decision variables), where $e$ is a small perturbation scalar:

$$\nabla_x V = \begin{bmatrix} \nabla_x g \\ \nabla_x h \end{bmatrix}_{m \times n} \approx \frac{1}{e} \begin{bmatrix} g(x \mid x_i = x_i + e) - g(x) \\ h(x \mid x_i = x_i + e) - h(x) \end{bmatrix}_{m \times n}, \quad \forall i = 1, 2, \ldots, n \qquad (3)$$

The relationship between the change of constraint violation $\Delta V$ and the change of the solution vector $\Delta x$ is then expressed by

$$\Delta V = \nabla_x V \times \Delta x \Longrightarrow \Delta x = \nabla_x V^{-1} \times \Delta V \qquad (4)$$

(3) Compute the Moore–Penrose inverse, or pseudoinverse, $\nabla_x V^{+}$, which approximates the inverse of $\nabla_x V$ and is used in its place in Equation (4).

(4) Update the solution vector by $x_{t+1} = x_t + \Delta x = x_t + \nabla_x V^{-1} \times \Delta V \approx x_t + \nabla_x V^{+} \times \Delta V$.

(5) Loop through steps 1 to 4 at most five times.
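As a concrete illustration, the following Python sketch implements the repair loop under stated assumptions: `constraints` is a user-supplied function returning the stacked vector $V(x)$ of the $q$ inequality values followed by the equality values, and only the violated rows enter the pseudoinverse, as in the worked example below. The function name and signature are illustrative, not from the article.

```python
import numpy as np

def gradient_repair(x, constraints, n_ineq, e=1e-6, max_loops=5):
    # Hedged sketch of the gradient repair method (Chootinan and Chen 2006).
    # `constraints(x)` returns the q inequality values g(x) stacked above the
    # equality values h(x); `n_ineq` = q. Names and signature are illustrative.
    x = np.asarray(x, dtype=float)
    for _ in range(max_loops):                       # step (5): at most five loops
        V = np.asarray(constraints(x), dtype=float)
        g, h = V[:n_ineq], V[n_ineq:]
        dV = np.concatenate([np.minimum(0.0, -g), -h])   # Equation (2)
        viol = np.abs(dV) > 1e-12                    # repair violated rows only
        if not viol.any():
            break                                    # already feasible
        J = np.empty((V.size, x.size))               # forward differences, Eq. (3)
        for i in range(x.size):
            xp = x.copy()
            xp[i] += e
            J[:, i] = (np.asarray(constraints(xp), dtype=float) - V) / e
        # Moore-Penrose pseudoinverse replaces the inverse, Equation (4).
        x = x + np.linalg.pinv(J[viol]) @ dV[viol]
    return x
```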

The above procedure is demonstrated using the following example, in which there are four inequality constraints and one equality constraint, given by Equation (5):

$$\begin{aligned} g_1 &: x_1^2 + x_2^2 - 25 \le 0 \\ g_2 &: x_1 - 3 \le 0 \\ g_3 &: -x_1 \le 0 \\ g_4 &: -x_2 \le 0 \\ h_1 &: x_1 + x_2 - 5 = 0 \end{aligned} \Longrightarrow V = \begin{bmatrix} x_1^2 + x_2^2 - 25 \\ x_1 - 3 \\ -x_1 \\ -x_2 \\ x_1 + x_2 - 5 \end{bmatrix} \Longrightarrow \nabla_x V = \begin{bmatrix} 2x_1 & 2x_2 \\ 1 & 0 \\ -1 & 0 \\ 0 & -1 \\ 1 & 1 \end{bmatrix} \qquad (5)$$

(1) Suppose the first iteration produces $[x_1, x_2] = [6, 0]$, which clearly violates $g_1$, $g_2$ and $h_1$. Calculate the degree of constraint violation $\Delta V$ (only the violated constraints are retained):

$$V = \begin{bmatrix} 11 \\ 3 \\ -6 \\ 0 \\ 1 \end{bmatrix} \Longrightarrow \Delta V = \begin{bmatrix} -11 \\ -3 \\ -1 \end{bmatrix}$$

(2) Compute $\nabla_x V$ and the Moore–Penrose inverse, or pseudoinverse, $\nabla_x V^{+}$ for the violated constraints:

$$\nabla_x V = \begin{bmatrix} 12 & 0 \\ 1 & 0 \\ 1 & 1 \end{bmatrix} \Longrightarrow \nabla_x V^{+} = \begin{bmatrix} 0.0828 & 0.0069 & 0 \\ -0.0828 & -0.0069 & 1 \end{bmatrix}$$

(3) Update the solution vector by $x_{t+1} = x_t + \Delta x \approx x_t + \nabla_x V^{+} \times \Delta V$:

$$x_2 = x_1 + \Delta x = \begin{bmatrix} 6 \\ 0 \end{bmatrix} + \begin{bmatrix} 0.0828 & 0.0069 & 0 \\ -0.0828 & -0.0069 & 1 \end{bmatrix} \begin{bmatrix} -11 \\ -3 \\ -1 \end{bmatrix} = \begin{bmatrix} 5.0685 \\ -0.0685 \end{bmatrix}$$


(4) The same process is repeated at most five times until the solution is finally moved to $[x_1, x_2] = [3, 2]$, which is in the feasible region.
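The sketch given after the repair procedure can be checked against this example. The helper below (ours, not from the article) encodes Equation (5); a single repair loop from the infeasible point $[6, 0]$ should reproduce the article's first update.

```python
import numpy as np

def example_constraints(x):
    # Equation (5): four inequalities followed by one equality.
    x1, x2 = x
    return np.array([x1**2 + x2**2 - 25,   # g1
                     x1 - 3,               # g2
                     -x1,                  # g3
                     -x2,                  # g4
                     x1 + x2 - 5])         # h1

# One repair loop from [6, 0]; the article's first update is
# approximately [5.0685, -0.0685].
print(gradient_repair([6.0, 0.0], example_constraints, n_ineq=4, max_loops=1))
```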

2.2. Constraint fitness priority-based ranking method

This method, proposed by Dong et al. (2005), avoids the difficulty of selecting an appropriate penalty value that arises in the penalty method. It introduces a constraint fitness function $C_f(x)$, computed from both the inequality and the equality constraints as follows.

For inequality constraints $g_j(x) \le 0$:

$$C_j(x) = \begin{cases} 1, & \text{if } g_j(x) \le 0 \\ 1 - \dfrac{g_j(x)}{g_{\max}(x)}, & \text{if } g_j(x) > 0 \end{cases} \qquad (6)$$

where $g_{\max}(x) = \max\{g_j(x),\ j = 1, 2, \ldots, q\}$ and $C_j(x)$ is the fitness level of point $x$ for constraint $j$.

For equality constraints $h_j(x) = 0$:

$$C_j(x) = \begin{cases} 1, & \text{if } h_j(x) = 0 \\ 1 - \dfrac{|h_j(x)|}{h_{\max}(x)}, & \text{if } h_j(x) \ne 0 \end{cases} \qquad (7)$$

where $h_{\max}(x) = \max\{|h_j(x)|,\ j = q+1, q+2, \ldots, m\}$.

The constraint fitness function evaluated at point $x$ is the weighted sum of the $C_j$'s, defined as

$$C_f(x) = \sum_{j=1}^{m} w_j C_j(x), \qquad \sum_{j=1}^{m} w_j = 1, \quad 0 \le w_j \le 1/m, \ \forall j \qquad (8)$$

where $w_j$ is a randomly generated weight for constraint $j$. The sum signifies the fitness level of point $x$ relative to the feasible domain $Q$: $C_f(x) = 1$ indicates that $x \in Q$, whereas for $0 < C_f(x) < 1$, the smaller $C_f(x)$ is, the less likely it is that $x$ lies in $Q$, or the further $x$ is from $Q$.
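A minimal sketch of this evaluation follows, assuming `ineq` and `eq` return the raw constraint values; the weight generation shown (uniform random weights normalized to sum to one) is an assumption, since the article only states that the weights are randomly generated.

```python
import numpy as np

def constraint_fitness(x, ineq, eq, rng=np.random.default_rng()):
    # Hedged sketch of Equations (6)-(8): the constraint fitness Cf(x).
    # `ineq(x)` -> array of q inequality values g_j(x), feasible when <= 0;
    # `eq(x)`   -> array of equality values h_j(x), feasible when = 0.
    g = np.asarray(ineq(x), dtype=float)
    h = np.asarray(eq(x), dtype=float)
    C = []
    gmax = g.max() if g.size else 0.0
    for gj in g:                                    # Equation (6)
        C.append(1.0 if gj <= 0.0 else 1.0 - gj / gmax)
    hmax = np.abs(h).max() if h.size else 0.0
    for hj in h:                                    # Equation (7)
        C.append(1.0 if abs(hj) <= 1e-12 else 1.0 - abs(hj) / hmax)
    w = rng.random(len(C))
    w /= w.sum()        # random weights normalized to sum to one (assumed)
    return float(np.dot(w, C))                      # Equation (8)
```

A fully feasible point returns exactly 1 regardless of the weights, since every $C_j$ is then 1.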

3. Hybrid Nelder–Mead and particle swarm optimization

The goal of integrating the Nelder–Mead simplex search method (NM) and particle swarm optimization (PSO) is to combine their advantages and avoid their disadvantages. For example, NM is a very efficient local search procedure, but its convergence is extremely sensitive to the selected starting point; PSO belongs to the class of global search procedures but requires considerable computational effort (Fan and Zahara 2007).

3.1. The Nelder–Mead simplex search method

The NM method, proposed by Nelder and Mead (1965), is a local search method designed for unconstrained optimization without using gradient information. The operations of this method rescale the simplex based on the local behaviour of the function by using four basic procedures: reflection, expansion, contraction and shrinkage. Through these procedures, the simplex successively improves itself and zeroes in on the optimum.

The steps of NM, which are modified to handle a minimization COP, are as follows; a condensed code sketch is given after the list.

(1) Initialization. An initial set of $N + 1$ vertex points is randomly generated in the search space. Evaluate the objective fitness and the constraint fitness at each vertex point of the simplex. For a maximization problem, it is convenient to transform it into a minimization problem by pre-multiplying the objective function by $-1$.

(2) Reflection. Determine $x_{high}$ and $x_{low}$, the vertices with the highest and the lowest function values, respectively. Let $f_{high}$, $f_{low}$ and $Cf_{high}$, $Cf_{low}$ represent the corresponding objective function values and constraint fitness values, respectively. Find $x_{cent}$, the centroid of the simplex excluding $x_{high}$ in the minimization case. Generate a new vertex $x_{refl}$ by reflecting the worst point according to

$$x_{refl} = (1 + \alpha) x_{cent} - \alpha x_{high} \quad (\alpha > 0) \qquad (9)$$

Nelder and Mead suggested $\alpha = 1$. If $Cf_{refl} < 1$ and $Cf_{refl} > Cf_{low}$, or $Cf_{refl} = 1$ and $f_{refl} < f_{low}$, then proceed with expansion; otherwise proceed with contraction.

(3) Expansion. After reflection, two possible expansion cases need to be considered.

(3.1) If $Cf_{refl} < 1$ and $Cf_{refl} > Cf_{low}$, the simplex is expanded in order to extend the search in the same direction, and the expansion point is calculated by Equation (10):

$$x_{exp} = \gamma x_{refl} + (1 - \gamma) x_{cent} \quad (\gamma > 1) \qquad (10)$$

Nelder and Mead suggested $\gamma = 2$. If $Cf_{exp} > Cf_{low}$ or $Cf_{exp} = 1$, the expansion is accepted by replacing $x_{high}$ with $x_{exp}$; otherwise, $x_{refl}$ replaces $x_{high}$.

(3.2) If $Cf_{refl} = 1$ and $f_{refl} < f_{low}$, the expansion point is calculated by Equation (10). If $Cf_{exp} = 1$ and $f_{exp} < f_{low}$, the expansion is accepted by replacing $x_{high}$ with $x_{exp}$; otherwise, $x_{refl}$ replaces $x_{high}$.

(3.3) Exit the algorithm if the stopping criteria are satisfied; otherwise go to step (2).

(4) Contraction. After reflection, there are two possible contraction cases to consider.

(4.1) If $Cf_{refl} < 1$ and $Cf_{refl} \le Cf_{low}$, the contraction vertex is calculated by Equation (11):

$$x_{cont} = \beta x_{high} + (1 - \beta) x_{cent} \quad (0 < \beta < 1) \qquad (11)$$

Nelder and Mead suggested $\beta = 0.5$. If $Cf_{cont} > Cf_{low}$ or $Cf_{cont} = 1$, the contraction is accepted by replacing $x_{high}$ with $x_{cont}$; otherwise do shrinking in step (5.1).

(4.2) If $Cf_{refl} = 1$ and $f_{refl} \ge f_{low}$, and if $f_{refl} \le f_{high}$, then $x_{refl}$ replaces $x_{high}$ and contraction by Equation (11) is carried out. If $f_{refl} > f_{high}$, then direct contraction, without the replacement of $x_{high}$ by $x_{refl}$, is performed. If $Cf_{cont} = 1$ and $f_{cont} < f_{low}$, the contraction is accepted by replacing $x_{high}$ with $x_{cont}$; otherwise do shrinking in step (5.2).

(4.3) Exit the algorithm if the stopping criteria are satisfied; otherwise go to step (2).

(5) Shrinkage.

(5.1) Following step (4.1), in which $Cf_{cont} \le Cf_{low}$ and contraction has failed, apply shrinkage to all points except $x_{low}$ by Equation (12):

$$x_i \leftarrow \delta x_i + (1 - \delta) x_{low} \quad (0 < \delta < 1) \qquad (12)$$

Nelder and Mead suggested $\delta = 0.5$.

(5.2) Contraction has failed in step (4.2), where $Cf_{cont} < 1$, or $Cf_{cont} = 1$ but $f_{cont} \ge f_{low}$. Shrink the entire simplex except $x_{low}$ by Equation (12).

(5.3) Exit the algorithm if the stopping criteria are satisfied; otherwise go to step (2).


3.2. Particle swarm optimization

Particle swarm optimization (PSO) is one of the latest evolutionary optimization techniques, developed by Eberhart and Kennedy (1995). The PSO concept is based on a metaphor of social interaction, such as bird flocking or fish schooling. Similar to genetic algorithms, PSO is population-based and evolutionary in nature, with one major difference: it does not implement filtering, i.e. all members of the population survive through the entire search process. PSO simulates a commonly observed social behaviour, in which members of a group tend to follow the leader, the best performer in the group. The procedure of PSO is reviewed below.

(1) Initialization. Randomly generate a swarm of potential solutions, called 'particles', and assign a random velocity to each.

(2) Velocity update. The particles are then 'flown' through hyperspace by updating their own velocity. The velocity update of a particle is dynamically adjusted, subject to its own past flight and those of its companions. The particle's velocity and position are updated using

$$V_{id}^{new}(t+1) = c_0 \times V_{id}^{old}(t) + c_1 \times rand() \times (p_{id}(t) - x_{id}^{old}(t)) + c_2 \times rand() \times (p_{gd}(t) - x_{id}^{old}(t)) \qquad (13)$$

$$x_{id}^{new}(t+1) = x_{id}^{old}(t) + V_{id}^{new}(t+1) \qquad (14)$$

where $c_1$ and $c_2$ are two positive constants, $c_0$ is an inertia weight and $rand()$ is a random value inside $(0, 1)$. Eberhart and Shi (2001) and Hu and Eberhart (2001) suggested $c_1 = c_2 = 2$ and $c_0 = 0.5 + rand()/2$. Equation (13) calculates a new velocity for each particle according to its previous velocity ($V_{id}$), the particle's previous best location ($p_{id}$) and the global best location ($p_{gd}$). Particle velocities on each dimension are clamped to a maximum velocity $V_{max}$, a fraction of the search space on each dimension. Equation (14) shows how each particle's position is updated in the search space.
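In code, the two updates are only a few lines. The sketch below assumes NumPy arrays for positions and velocities and uses the parameter settings quoted above; the function name and signature are illustrative.

```python
import numpy as np

def pso_update(x, v, p_best, g_best, v_max, rng=np.random.default_rng()):
    # Sketch of Equations (13) and (14) with c1 = c2 = 2 and the random
    # inertia weight c0 = 0.5 + rand()/2 quoted in the text.
    c1 = c2 = 2.0
    c0 = 0.5 + rng.random() / 2.0
    v_new = (c0 * v
             + c1 * rng.random(x.shape) * (p_best - x)    # cognitive term
             + c2 * rng.random(x.shape) * (g_best - x))   # social term
    v_new = np.clip(v_new, -v_max, v_max)   # clamp velocities to V_max
    return x + v_new, v_new                 # Equation (14)
```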

3.3. Hybrid NM-PSO method

Embedding the constraint handling methods in NM-PSO is now described. The population size of this hybrid NM-PSO approach is set at $21N + 1$ when solving an $N$-dimensional problem. The initial population is randomly generated in the problem search space. For every particle that violates the constraints, the gradient repair method is used to direct the infeasible solution toward the feasible region. In most cases, the repair method does move the solution into the feasible region; when it fails, the infeasible solutions are left as they are and the process continues with the next step. The constraint fitness priority-based ranking method is then adopted to evaluate the constraint fitness value of each particle and rank the particles according to their constraint fitness values. The particle with the greatest constraint fitness value, computed by Equation (8), is placed at the top. If two particles have the same constraint fitness value, their objective fitness values are compared to determine their relative position, and the one with the better objective fitness value is positioned in front of the other. The top $N + 1$ particles are then fed into the NM method to improve the $(N + 1)$th particle. The PSO method adjusts the other $20N$ particles by taking into account the positions of the $N + 1$ best particles. This procedure for adjusting the $20N$ particles involves selection of the global best particle, selection of the neighbourhood best particles and, finally, velocity updates. The global best particle of the population is determined according to the sorted fitness values. Next, the $20N$ particles are evenly divided into $10N$ neighbourhoods with two particles per neighbourhood. The particle with the better fitness value in each neighbourhood is denoted the neighbourhood best particle. By Equations (13) and (14), a velocity update for each of the $20N$ particles is then carried out. The $21N + 1$ particles are sorted in preparation for repeating the entire run. The process terminates when a certain convergence criterion is satisfied. Figure 1 summarizes the algorithm of NM-PSO, and a skeleton of one generation is sketched below. For further details of the hybrid NM-PSO for unconstrained optimization problems, see Fan and Zahara (2007).

Figure 1. The hybrid NM-PSO algorithm embedded with constraint handling methods.
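The following skeleton shows how the pieces fit together in one generation, under stated assumptions: `repair`, `nm_step` and `pso_step` stand in for the gradient repair, constrained NM and PSO routines sketched earlier, and per-particle velocity bookkeeping is elided for brevity.

```python
def nm_pso_generation(pop, N, f, Cf, repair, nm_step, pso_step):
    # Hedged skeleton of one NM-PSO generation for 21N+1 particles.
    pop = [repair(x) for x in pop]            # repair infeasible particles
    key = lambda x: (-Cf(x), f(x))            # rank by Cf first, then by f
    pop.sort(key=key)
    elite, rest = pop[:N + 1], pop[N + 1:]    # top N+1 particles go to NM
    elite = nm_step(elite)
    g_best = min(elite + rest, key=key)       # global best of the swarm
    for i in range(0, len(rest), 2):          # 10N neighbourhoods of two
        pair = rest[i:i + 2]
        n_best = min(pair, key=key)           # neighbourhood best particle
        rest[i:i + 2] = [pso_step(x, n_best, g_best) for x in pair]
    return elite + rest                       # re-sorted next generation
```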

4. Experimental results

In this study, the proposed NM-PSO was applied to solve 13 benchmark problems studied by Becerra and Coello (2006). This test suite is a set of difficult nonlinear functions for constrained optimization and includes four maximization problems: G02, G03, G08 and G12. The characteristics of these problems, namely the number of decision variables (n), the type of objective function, the size of the feasible region (k), the numbers of linear inequalities (LI), nonlinear equalities (NE) and nonlinear inequalities (NI), and the number of active constraints at the optimum (a), are summarized in Table 1. The size of the feasible region, empirically determined by simulation (Koziel and Michalewicz 1999), indicates how difficult it is to generate a feasible solution at random: the smaller k is, the harder it is to find a feasible point. The 13 problems are described completely in Appendix 1. The algorithm was coded in Matlab 7.0 and the simulations were run on a Pentium IV 2.4 GHz with 512 MB of memory.

Table 1. Characteristics of benchmark problems.

         n   Type of f(x)   k (%)    LI  NE  NI  a
G01     13   Quadratic       0.011    9   0   0  6
G02     20   Nonlinear      99.847    0   0   2  1
G03     10   Polynomial      0.002    0   1   0  1
G04      5   Quadratic      52.123    0   0   6  2
G05      4   Cubic           0.000    2   3   0  3
G06      2   Cubic           0.006    0   0   2  2
G07     10   Quadratic       0.000    3   0   5  6
G08      2   Nonlinear       0.856    0   0   2  0
G09      7   Polynomial      0.512    0   0   4  2
G10      8   Linear          0.001    3   0   3  6
G11      2   Quadratic       0.000    0   1   0  1
G12      3   Quadratic       4.779    0   0   1  0
G13      5   Exponential     0.000    0   3   0  3


During the simulations, it was found that the population size has a significant effect on the performance of NM-PSO. Table 2 shows the performance of NM-PSO with different population sizes for test problems G01, G04, G05 and G08. It is clearly seen that the population sizes 21N + 1 and 31N + 1 achieve the same performance, with better solutions than 11N + 1. Therefore, a population size of 21N + 1 was employed in the NM-PSO algorithm instead of 31N + 1, to decrease the number of function evaluations.

The task of optimizing each test function was executed 20 times by NM-PSO, and the search process was iterated up to 5000 times until the reference or a better solution was found. Table 3 summarizes the execution results of NM-PSO, including the worst and best solutions obtained, the means, medians and standard deviations of the solutions, the average number of function evaluations, the average number of iterations and the average computation time. For the sake of comparison, the table also gives the reference optimal values. In every problem, the best solutions are almost equal to the reference optimal solutions. For problems G05 and G10, the solutions obtained are better than the reference values. For problems G01, G03, G04, G05, G06, G08, G09, G11 and G12, optimal solutions were found consistently in all 20 runs. For problems G02 and G13, the optimal solutions were not consistently found. Finally, for problems G07 and G10, quite satisfactory global optima were found. In terms of efficiency, Table 3 shows that more than half of the test problems converged in fewer than 300 iterations (equivalently, less than 50 s), and problems G09 and G13 converged in fewer than 2800 iterations (less than 240 s). However, test problems G02, G07 and G10 never completely converged; they all stopped at the pre-set maximum number of iterations (5000) with a high standard deviation, signifying that the solutions found thus far are still quite far apart.

These 13 benchmark problems were solved by NM-PSO, and the solutions are now compared with those obtained by the cultural differential evolution (CDE) algorithm (Becerra and Coello 2006), the filter simulated annealing (FSA) method (Hedar and Fukushima 2006), the genetic algorithm (GA) (Chootinan and Chen 2006) and the evolutionary algorithm (EA) (Runarsson and Yao 2005). Table 4 lists the results of solving these problems by the various methods using 20 runs, in terms of the best, mean and worst solutions, the standard deviation and the average number of function evaluations of each method. Solutions to problems G12 and G13 by GA are not readily available and are therefore not included in the table. Better values in each category are highlighted in bold type.

Table 2. The effect of population size on NM-PSO.

                      Population size
                      11N+1          21N+1          31N+1
G01 (min)   Best      −15.000        −15.000        −15.000
            Mean      −14.998        −15.000        −15.000
            Worst     −14.997        −15.000        −15.000
            Std.      0.00107        0.00000        0.00000
G04 (min)   Best      −30665.5386    −30665.5386    −30665.5386
            Mean      −30665.2025    −30665.5386    −30665.5386
            Worst     −30663.8583    −30665.5386    −30665.5386
            Std.      0.751451       1.35e−005      1.52e−006
G05 (min)   Best      5126.359       5126.359       5126.359
            Mean      5126.372       5126.359       5126.359
            Worst     5126.397       5126.359       5126.359
            Std.      0.013854       2.70e−004      1.34e−004
G08 (max)   Best      0.095825       0.095825       0.095825
            Mean      0.068486       0.095825       0.095825
            Worst     0.025812       0.095825       0.095825
            Std.      0.037453       3.45e−008      1.12e−008


Table 3. Results of test problems using NM-PSO.

     Type  Reference optimal  Best          Median        Mean          Std.      Worst         Avg. func. evals  Avg. iters  Avg. time (s)
G01  Min   −15                −15           −15           −15           0.000000  −15           41959             142         12.875
G02  Max   0.803619           0.803619      0.796464      0.796431      0.002522  0.796422      2229858           5000        719.672
G03  Max   1                  1             1             1             0.000000  1             64108             282         48.085
G04  Min   −30665.5386        −30665.5386   −30665.5386   −30665.5386   0.000000  −30665.5386   19658             169         4.937
G05  Min   5126.4981          5126.359      5126.359      5126.359      0.000000  5126.359      25253             269         18.734
G06  Min   −6961.8138         −6961.824     −6961.824     −6961.824     0.000000  −6961.824     9856              198         2.617
G07  Min   24.3062091         24.3062       24.4669       24.4883       0.115070  24.7195       1129252           5000        306.547
G08  Max   0.095825           0.095825      0.095825      0.095825      0.000000  0.095825      2103              42          0.602
G09  Min   680.6300573        680.6300554   680.6300554   680.6300554   0.000000  680.6300554   422498            2643        112.461
G10  Min   7049.3307          7049.2969     7049.5581     7049.5652     0.198500  7049.9358     881161            5000        323.007
G11  Min   0.75               0.75          0.75          0.75          0.000000  0.75          743               14          0.320
G12  Max   1                  1             1             1             0.000000  1             923               12          0.344
G13  Min   0.0539498          0.053949      0.054533      0.054854      0.001260  0.058301      265548            2735        239.297


Table 4. Comparison results of test problems with CDE, FSA, GA and EA.

            Algorithm  Best           Mean           Worst          Std.        Avg. func. evals
G01 (Min)   CDE        −15.000000     −14.999996     −14.999993     0.000002    100100
            FSA        −14.999105     −14.993316     −14.979977     0.004813    205748
            GA         −15.000000     −15.000000     −15.000000     0.000000    95512
            EA         −15.000000     −15.000000     −15.000000     0.000000    122000
            NM-PSO     −15.000000     −15.000000     −15.000000     0.000000    41959
G02 (Max)   CDE        0.803619       0.724886       0.590908       0.070125    100100
            FSA        0.754912       0.371708       0.271311       0.098023    227832
            GA         0.801119       0.785746       0.745329       0.013700    331972
            EA         0.803619       0.753209       0.609330       0.037000    349600
            NM-PSO     0.803619       0.796431       0.796422       0.002522    2229858
G03 (Max)   CDE        0.995413       0.788635       0.639920       0.115214    100100
            FSA        1.000001       0.9991874      0.9915186      0.001653    314938
            GA         0.999980       0.999920       0.999790       0.000060    399804
            EA         1.000000       1.000000       1.000000       0.000000    339600
            NM-PSO     1.000000       1.000000       1.000000       0.000000    64108
G04 (Min)   CDE        −30665.5387    −30665.5387    −30665.5387    0.000000    100100
            FSA        −30665.5380    −30665.4665    −30664.6880    0.173218    86154
            GA         −30665.5386    −30665.5386    −30665.5386    0.000000    26981
            EA         −30665.5386    −30665.5386    −30665.5386    0.000000    66400
            NM-PSO     −30665.5386    −30665.5386    −30665.5386    0.000000    19658
G05 (Min)   CDE        5126.5709      5207.4107      5327.3905      69.225796   100100
            FSA        5126.4981      5126.4981      5126.4981      0.000000    47661
            GA         5126.4981      5126.4981      5126.4981      0.000000    39459
            EA         5126.4970      5126.4970      5126.4970      0.000000    62000
            NM-PSO     5126.3590      5126.3590      5126.3590      0.000000    25253
G06 (Min)   CDE        −6961.8139     −6961.8139     −6961.8139     0.000000    100100
            FSA        −6961.8139     −6961.8139     −6961.8139     0.000000    44538
            GA         −6961.8139     −6961.8139     −6961.8139     0.000000    13577
            EA         −6961.8139     −6961.8139     −6961.8139     0.000000    56000
            NM-PSO     −6961.8240     −6961.8240     −6961.8240     0.000000    9856
G07 (Min)   CDE        24.306209      24.306210      24.306212      0.000001    100100
            FSA        24.310571      24.379527      24.644397      0.071635    404501
            GA         24.329400      24.471900      24.835200      0.129000    428314
            EA         24.306000      24.337000      24.635000      0.041000    350000
            NM-PSO     24.306209      24.488300      24.719500      0.115070    1129252
G08 (Max)   CDE        0.095825       0.095825       0.095825       0.000000    100100
            FSA        0.095825       0.095825       0.095825       0.000000    56476
            GA         0.095825       0.095825       0.095825       0.000000    6217
            EA         0.095825       0.095825       0.095825       0.000000    49600
            NM-PSO     0.095825       0.095825       0.095825       0.000000    2103
G09 (Min)   CDE        680.6301       680.6301       680.6301       0.000000    100100
            FSA        680.6300       680.6364       680.6983       0.014517    324569
            GA         680.6303       680.6381       680.6538       0.006610    388453
            EA         680.6301       680.6301       680.6301       0.000000    310000
            NM-PSO     680.6301       680.6301       680.6301       0.000000    422498
G10 (Min)   CDE        7049.24806     7049.24827     7049.24848     0.000167    100100
            FSA        7059.86350     7059.32104     9398.64920     542.3421    243520
            GA         7049.26070     7049.56590     7051.68570     0.57000     572629
            EA         7049.40400     7082.22700     7258.54000     42.0000     344000
            NM-PSO     7049.29690     7049.56520     7049.93580     0.198500    881161
G11 (Min)   CDE        0.749900       0.757995       0.796455       0.017138    100100
            FSA        0.749900       0.749900       0.749900       0.000000    23722
            GA         0.750000       0.750000       0.750000       0.000000    7215
            EA         0.750000       0.750000       0.750000       0.000000    46400
            NM-PSO     0.750000       0.750000       0.750000       0.000000    743
G12 (Max)   CDE        1.000000       1.000000       1.000000       0.000000    100100
            FSA        1.000000       1.000000       1.000000       0.000000    59355
            EA         1.000000       1.000000       1.000000       0.000000    20400
            NM-PSO     1.000000       1.000000       1.000000       0.000000    923
G13 (Min)   CDE        0.056180       0.288324       0.392100       0.16710     100100
            FSA        0.053950       0.297720       0.438851       0.18865     120268
            EA         0.053942       0.111671       0.438804       0.14000     109200
            NM-PSO     0.053949       0.054854       0.058301       0.00126     265548

Upon examining the results in Table 4, it is seen that NM-PSO consistently found the global optimal solutions in 10 of the 13 test problems, while the other methods found the optimum for at most 8 test problems. In terms of computational cost, NM-PSO outperformed the other four methods in 8 of the 13 problems. Considering both solution quality and computational effort, NM-PSO is superior to the CDE method in 10 of the 13 test problems and surpasses the other three methods in all 13 problems, making NM-PSO a clear winner. The superiority of NM-PSO over the other methods can be attributed to two factors.

(1) NM-PSO can effectively reach a global optimum because of the mechanism of the gradient repair method, which ensures that no infeasible intermediate solution enters consideration and corrupts the solution quality. The gradient repair mechanism also helps transform a constrained problem into an unconstrained one once all constraints have been satisfied.

(2) The hybrid NM-PSO algorithm is a proven technique (Fan and Zahara 2007) for efficiently solving unconstrained optimization problems.

Additionally, NM-PSO is easy to use because it does not call for the selection of an appropriate penalty value for every test problem. It can therefore be concluded that NM-PSO is a robust and suitable tool for solving COPs.

The convergence behaviour of NM-PSO falls into two general patterns: the repair method either moves an infeasible solution into the feasible region the first time it is invoked, or it takes several iterations to become effective. Solving test problems G01 and G06 provides more insight into this behaviour. Figures 2 and 3 depict the relationship between the number of iterations, the value of the constraint fitness function and the value of the objective fitness function. Figure 2(a) shows the first pattern, in which the constraint fitness value reaches 1 after the repair procedure is called for the first time. Figure 2(b) shows that the objective fitness value keeps decreasing for minimization problem G01 until NM-PSO reaches the global optimum after 40 iterations. In the case of a maximization problem, the objective fitness value would continue to increase.

Figure 3(a) shows the second pattern, in which the constraint fitness value does not reach 1 until the repair method is called at the eighth iteration. Before this happens, the objective fitness value oscillates, as seen in Figure 3(b). Once the constraint fitness function reaches 1, the intermediate solutions satisfy all the constraints and the objective value begins to decrease steadily. Figures 2 and 3 convey one important message: the repair method sets NM-PSO on the right course of optimization, and once infeasible solutions have been repaired and all solutions satisfy the constraints, NM-PSO proceeds quickly to the global optimum.


Figure 2. (a) Relation between iteration and constraint fitness function for G01. (b) Relation between iteration and objective fitness function for G01.

Figure 3. (a) Relation between iteration and constraint fitness function for G06. (b) Relation between iteration and objective fitness function for G06.

It can be concluded that empowering the NM-PSO algorithm with constraint handling methodsis a recommended technique for solving COPs.

5. Application to a ceramic grinding optimization problem

Many engineering applications involve the use of advanced structural ceramics such as silicon carbide (SiC), silicon nitride, alumina or partially stabilized zirconium. These materials feature high strength at elevated temperatures, resistance to chemical degradation, wear resistance and low density. The effective use of these structural ceramics in functional applications demands the grinding of ceramic components with good surface finish and low surface damage. The main concern about the use of ceramics in industry is the complexity involved in machining them, because of their high hardness and low fracture toughness. Therefore, optimization in ceramic processing and manufacturing technology is necessary for the commercialization of ceramic use (Lee et al. 2007).

A model available in the literature (Gopal and Rao 2003) is adopted. The objective of the ceramic grinding optimization problem is to maximize the material removal rate (MRR) subject to a set of constraints comprising surface roughness, number of flaws and bounds on the input variables. The problem is a COP and is formulated as:

$$\text{Max } MRR = f \times d_c$$

subject to

$$R_a \le R_{a(\max)}, \quad R_a = 0.145\, d_c^{0.1939} f^{0.7071} M^{-0.2343}$$

$$N_c \le N_{c(\max)}, \quad N_c = 29.67\, d_c^{0.4167} f^{-0.8333}$$

$$5\,\mu\text{m} \le d_c \le 30\,\mu\text{m}, \quad 8.6\,\text{m/min} \le f \le 13.4\,\text{m/min}, \quad 120 \le M \le 500 \qquad (15)$$


where f is the table feed rate (m/min), dc is the depth of cut (μm), Ra(max) and Nc(max) are the maximum allowable values of surface roughness and number of flaws, respectively, and M is the grit size. This is therefore a three-dimensional problem in which the values of dc, f and M are determined by NM-PSO.

Table 5. Optimization results for the ceramic grinding optimization problem.

No.  Ra(max)  Nc(max)  dc       f        M         Ra      Nc  Best MRR
1    0.3       7        5.607   13.4000  481.0194  0.2986   7   75.1342
2    0.3       9        8.4878  12.1946  500.0000  0.3000   9  103.5048
3    0.3      11       11.5842  11.1977  500.0000  0.3000  11  129.7162
4    0.35      7        5.607   13.4000  387.2869  0.3142   7   75.1342
5    0.35      9       10.2485  13.4000  493.2479  0.3337   9  137.3293
6    0.35     11       15.3513  12.8906  500.0000  0.3500  11  197.8883
7    0.4       7        5.607   13.4000  436.5745  0.3055   7   75.1342
8    0.4       9       10.2485  13.4000  388.5176  0.3529   9  137.3293
9    0.4      11       16.5883  13.4000  400.0047  0.3848  11  222.2834
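As a quick plausibility check of Equation (15) against the first row of Table 5, the short snippet below evaluates the model at dc = 5.607 μm, f = 13.4 m/min and M = 481.0194 mesh; the helper function is ours, not from the article.

```python
def grinding_model(dc, f, M):
    # Equation (15): surface roughness, number of flaws and removal rate.
    Ra = 0.145 * dc**0.1939 * f**0.7071 * M**-0.2343
    Nc = 29.67 * dc**0.4167 * f**-0.8333
    MRR = f * dc
    return Ra, Nc, MRR

print(grinding_model(5.607, 13.4, 481.0194))
# roughly (0.2986, 7.0, 75.13), matching row 1 of Table 5
```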

The NM-PSO algorithm was run for 100 trials on this problem with a total of nine different combinations of constraints; the search process was iterated 100 times and the results are recorded in Table 5. In all nine cases, no violation of any constraint was observed.

Table 6. Statistical comparison between GA, PSO and NM-PSO.

No.  Ra(max)  Nc(max)  Algorithm  Best MRR   Mean MRR   Worst MRR   Std.       Func. evals
1    0.3       7       GA          75.1269    75.0220    74.6560    0.0995     10000
                       PSO         75.1342    75.1342    75.1342    7.18e−14   10000
                       NM-PSO      75.1342    75.1342    75.1342    2.94e−10    6784
2    0.3       9       GA         102.7890   101.6700    99.0790    0.8839     10000
                       PSO        102.9200   102.9200   102.9200    1.44e−14   10000
                       NM-PSO     103.5048   103.5048   103.5048    9.12e−06    6812
3    0.3      11       GA         128.8610   127.3100   124.9300    1.0011     10000
                       PSO        128.9840   128.9840   128.9840    0.0000     10000
                       NM-PSO     129.7162   129.7161   129.7160    0.0001      6804
4    0.35      7       GA          75.1266    75.0510    74.8750    0.557      10000
                       PSO         75.1342    75.1342    75.1342    7.18e−14   10000
                       NM-PSO      75.1342    75.1342    75.1342    4.11e−10    6812
5    0.35      9       GA         137.2880   136.9900   136.1200    0.2958     10000
                       PSO        137.3290   137.3290   137.3290    8.61e−14   10000
                       NM-PSO     137.3293   137.3293   137.3293    4.07e−08    6802
6    0.35     11       GA         196.7130   195.4200   192.8300    0.9251     10000
                       PSO        196.7710   196.7710   196.7710    1.44e−13   10000
                       NM-PSO     197.8883   197.8879   197.8867    0.0007      6792
7    0.4       7       GA          75.1336    75.0660    74.9490    0.0505     10000
                       PSO         75.1342    75.1342    75.1342    7.18e−14   10000
                       NM-PSO      75.1342    75.1342    75.1342    2.73e−09    6804
8    0.4       9       GA         137.3200   137.1000   136.2800    0.2007     10000
                       PSO        137.3290   137.3290   137.3290    8.61e−14   10000
                       NM-PSO     137.3293   137.3293   137.3293    2.33e−10    6798
9    0.4      11       GA         222.2720   221.9000   220.6000    0.31400    10000
                       PSO        222.2830   222.2830   222.2830    1.15e−13   10000
                       NM-PSO     222.2834   222.2834   222.2834    2.30e−08    6850


This means that NM-PSO is efficient and practical for this engineering problem. Thus, it is most advantageous to grind SiC at a depth of cut of dc = 5.607 μm and a feed rate of f = 13.40 m/min with a grit size of M = 481.0194 mesh. For a higher allowable number of flaws, an increase of 30 to 60% in MRR can be observed by increasing the allowable roughness value from 0.3 to 0.4 μm. This can be a good approach to achieving a specified MRR under constraints on surface roughness and surface damage.

To demonstrate the superiority and robustness of NM-PSO, the results are compared with those of GA and PSO (Gopal and Rao 2003) in Table 6. It can be observed that NM-PSO found better MRR solutions for Nc(max) equal to 9 or 11. For Nc(max) equal to 7, the solution is the same as that of PSO, but NM-PSO requires fewer function evaluations to reach the global optimum. The standard deviations associated with NM-PSO are quite small, signifying that NM-PSO converges with a high degree of accuracy.

6. Conclusions

In this article, NM-PSO with embedded constraint handling methods is proposed for solving COPs. Experimental results clearly illustrate the attractiveness of this method in handling several types of constraints. It produces competitive, if not better, solutions compared with CDE, FSA and GA, and appears to be a promising constraint handling technique. As an application, NM-PSO was employed to solve the optimization problem of silicon carbide grinding, where it proved superior to GA and PSO. In the future, NM-PSO will be applied to various real-world problems. Additionally, it would be desirable to explore the benefits of using a belief space in other types of problems. The authors are also interested in extending the approach to deal with multi-objective problems.

References

Becerra, R.L. and Coello, C.A., 2006. Cultured differential evolution for constrained optimization. Computer Methods in Applied Mechanics and Engineering, 195 (33–36), 4303–4322.

Chootinan, P. and Chen, A., 2006. Constraint handling in genetic algorithms using a gradient-based repair method. Computers and Operations Research, 33 (8), 2263–2281.

Coello Coello, C.A., 2002. Theoretical and numerical constraint handling techniques used with evolutionary algorithms: a survey of the state of the art. Computer Methods in Applied Mechanics and Engineering, 191 (11–12), 1245–1287.

Dong, Y., Tang, J.F., Xu, B.D., and Wang, D., 2005. An application of swarm optimization to nonlinear programming. Computers and Mathematics with Applications, 49 (11–12), 1655–1668.

Eberhart, R.C. and Kennedy, J., 1995. A new optimizer using particle swarm theory. Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October, 39–43. IEEE Service Center.

Eberhart, R.C. and Shi, Y., 2001. Tracking and optimizing dynamic systems with particle swarms. Proceedings of the Congress on Evolutionary Computation, Seoul, Korea, 27–30 May, 94–97. IEEE Service Center.

Fan, S.K. and Zahara, E., 2007. A hybrid simplex search and particle swarm optimization for unconstrained optimization. European Journal of Operational Research, 181 (2), 527–548.

Farmani, R. and Wright, J.A., 2003. Self-adaptive fitness formulation for constrained optimization. IEEE Transactions on Evolutionary Computation, 7 (5), 445–455.

Fung, R.Y.K., Tang, J.F., and Wang, D., 2002. Extension of a hybrid genetic algorithm for nonlinear programming problems with equality and inequality constraints. Computers and Operations Research, 29 (3), 261–274.

Gopal, A.V. and Rao, P.V., 2003. The optimization of the grinding of silicon carbide with diamond wheels using genetic algorithms. International Journal of Advanced Manufacturing Technology, 22 (7–8), 475–480.

Hedar, A.D. and Fukushima, M., 2006. Derivative-free filter simulated annealing method for constrained continuous global optimization. Journal of Global Optimization, 35 (4), 521–549.

Hu, X. and Eberhart, R.C., 2001. Tracking dynamic systems with PSO: where's the cheese? Proceedings of the Workshop on Particle Swarm Optimization, Indianapolis, IN, USA. Purdue School of Engineering and Technology.

Koziel, S. and Michalewicz, Z., 1999. Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization. Evolutionary Computation, 7 (1), 19–44.

Lee, T.S., Ting, T.O., Lin, Y.J., and Htay, T., 2007. A particle swarm approach for grinding process optimization analysis. International Journal of Advanced Manufacturing Technology, 33 (11–12), 1128–1135.

Nash, S.G. and Sofer, A., 1996. Linear and nonlinear programming. New York: McGraw-Hill.

Nelder, J.A. and Mead, R., 1965. A simplex method for function minimization. Computer Journal, 7, 308–313.

Runarsson, T.P. and Yao, X., 2005. Search biases in constrained evolutionary optimization. IEEE Transactions on Systems, Man and Cybernetics, 35 (2), 233–243.

Tang, J.F., Wang, D.W., Ip, A., and Fung, R.Y.K., 1998. A hybrid genetic algorithm for a type of nonlinear programming problems. Computers and Mathematics with Applications, 36 (5), 11–21.

Appendix 1: Benchmark problems

G01 (Minimize):

$$f(x) = 5\sum_{i=1}^{4} x_i - 5\sum_{i=1}^{4} x_i^2 - \sum_{i=5}^{13} x_i$$

subject to

$$g_1(x) = 2x_1 + 2x_2 + x_{10} + x_{11} - 10 \le 0$$
$$g_2(x) = 2x_1 + 2x_3 + x_{10} + x_{12} - 10 \le 0$$
$$g_3(x) = 2x_2 + 2x_3 + x_{11} + x_{12} - 10 \le 0$$
$$g_4(x) = -8x_1 + x_{10} \le 0$$
$$g_5(x) = -8x_2 + x_{11} \le 0$$
$$g_6(x) = -8x_3 + x_{12} \le 0$$
$$g_7(x) = -2x_4 - x_5 + x_{10} \le 0$$
$$g_8(x) = -2x_6 - x_7 + x_{11} \le 0$$
$$g_9(x) = -2x_8 - x_9 + x_{12} \le 0$$
$$0 \le x_i \le 1 \ (i = 1, \ldots, 9), \quad 0 \le x_i \le 100 \ (i = 10, 11, 12), \quad 0 \le x_{13} \le 1$$

G02 (Maximize):

$$f(x) = \left| \frac{\sum_{i=1}^{n} \cos^4(x_i) - 2\prod_{i=1}^{n} \cos^2(x_i)}{\sqrt{\sum_{i=1}^{n} i x_i^2}} \right|$$

subject to

$$g_1(x) = 0.75 - \prod_{i=1}^{n} x_i \le 0, \qquad g_2(x) = \sum_{i=1}^{n} x_i - 7.5n \le 0$$
$$n = 20, \quad 0 \le x_i \le 10 \ (i = 1, \ldots, n)$$

G03 (Maximize):

$$f(x) = (\sqrt{n})^n \prod_{i=1}^{n} x_i$$

subject to

$$h_1(x) = \sum_{i=1}^{n} x_i^2 - 1 = 0$$
$$n = 10, \quad 0 \le x_i \le 1 \ (i = 1, \ldots, n)$$

G04 (Minimize):

$$f(x) = 5.3578547 x_3^2 + 0.8356891 x_1 x_5 + 37.293239 x_1 - 40792.141$$

subject to

$$g_1(x) = 85.334407 + 0.0056858 x_2 x_5 + 0.0006262 x_1 x_4 - 0.0022053 x_3 x_5 - 92 \le 0$$
$$g_2(x) = -85.334407 - 0.0056858 x_2 x_5 - 0.0006262 x_1 x_4 + 0.0022053 x_3 x_5 \le 0$$
$$g_3(x) = 80.51249 + 0.0071317 x_2 x_5 + 0.0029955 x_1 x_2 + 0.0021813 x_3^2 - 110 \le 0$$
$$g_4(x) = -80.51249 - 0.0071317 x_2 x_5 - 0.0029955 x_1 x_2 - 0.0021813 x_3^2 + 90 \le 0$$
$$g_5(x) = 9.300961 + 0.0047026 x_3 x_5 + 0.0012547 x_1 x_3 + 0.0019085 x_3 x_4 - 25 \le 0$$
$$g_6(x) = -9.300961 - 0.0047026 x_3 x_5 - 0.0012547 x_1 x_3 - 0.0019085 x_3 x_4 + 20 \le 0$$
$$78 \le x_1 \le 102, \quad 33 \le x_2 \le 45, \quad 27 \le x_i \le 45 \ (i = 3, 4, 5)$$

G05 (Minimize):

$$f(x) = 3x_1 + 0.000001 x_1^3 + 2x_2 + (0.000002/3) x_2^3$$

subject to

$$g_1(x) = -x_4 + x_3 - 0.55 \le 0, \qquad g_2(x) = -x_3 + x_4 - 0.55 \le 0$$
$$h_3(x) = 1000\sin(-x_3 - 0.25) + 1000\sin(-x_4 - 0.25) + 894.8 - x_1 = 0$$
$$h_4(x) = 1000\sin(x_3 - 0.25) + 1000\sin(x_3 - x_4 - 0.25) + 894.8 - x_2 = 0$$
$$h_5(x) = 1000\sin(x_4 - 0.25) + 1000\sin(x_4 - x_3 - 0.25) + 1294.8 = 0$$
$$0 \le x_1 \le 1200, \quad 0 \le x_2 \le 1200, \quad -0.55 \le x_i \le 0.55 \ (i = 3, 4)$$

G06 (Minimize):

$$f(x) = (x_1 - 10)^3 + (x_2 - 20)^3$$

subject to

$$g_1(x) = -(x_1 - 5)^2 - (x_2 - 5)^2 + 100 \le 0$$
$$g_2(x) = (x_1 - 6)^2 + (x_2 - 5)^2 - 82.81 \le 0$$
$$13 \le x_1 \le 100, \quad 0 \le x_2 \le 100$$

G07 (Minimize):

$$f(x) = x_1^2 + x_2^2 + x_1 x_2 - 14x_1 - 16x_2 + (x_3 - 10)^2 + 4(x_4 - 5)^2 + (x_5 - 3)^2 + 2(x_6 - 1)^2 + 5x_7^2 + 7(x_8 - 11)^2 + 2(x_9 - 10)^2 + (x_{10} - 7)^2 + 45$$

subject to

$$g_1(x) = -105 + 4x_1 + 5x_2 - 3x_7 + 9x_8 \le 0$$
$$g_2(x) = 10x_1 - 8x_2 - 17x_7 + 2x_8 \le 0$$
$$g_3(x) = -8x_1 + 2x_2 + 5x_9 - 2x_{10} - 12 \le 0$$
$$g_4(x) = 3(x_1 - 2)^2 + 4(x_2 - 3)^2 + 2x_3^2 - 7x_4 - 120 \le 0$$
$$g_5(x) = 5x_1 + 8x_2 + (x_3 - 6)^2 - 2x_4 - 20 \le 0$$
$$g_6(x) = x_1^2 + 2(x_2 - 2)^2 - 2x_1 x_2 + 14x_5 - 6x_6 \le 0$$
$$g_7(x) = 0.5(x_1 - 8)^2 + 2(x_2 - 4)^2 + 3x_5^2 - x_6 - 30 \le 0$$
$$g_8(x) = -3x_1 + 6x_2 + 12(x_9 - 8)^2 - 7x_{10} \le 0$$
$$-10 \le x_i \le 10 \ (i = 1, \ldots, 10)$$

G08 (Maximize):

$$f(x) = \frac{\sin^3(2\pi x_1)\,\sin(2\pi x_2)}{x_1^3 (x_1 + x_2)}$$

subject to

$$g_1(x) = x_1^2 - x_2 + 1 \le 0$$
$$g_2(x) = 1 - x_1 + (x_2 - 4)^2 \le 0$$
$$0 \le x_i \le 10 \ (i = 1, 2)$$

G09 (Minimize):

$$f(x) = (x_1 - 10)^2 + 5(x_2 - 12)^2 + x_3^4 + 3(x_4 - 11)^2 + 10x_5^6 + 7x_6^2 + x_7^4 - 4x_6 x_7 - 10x_6 - 8x_7$$

subject to

$$g_1(x) = -127 + 2x_1^2 + 3x_2^4 + x_3 + 4x_4^2 + 5x_5 \le 0$$
$$g_2(x) = -282 + 7x_1 + 3x_2 + 10x_3^2 + x_4 - x_5 \le 0$$
$$g_3(x) = -196 + 23x_1 + x_2^2 + 6x_6^2 - 8x_7 \le 0$$
$$g_4(x) = 4x_1^2 + x_2^2 - 3x_1 x_2 + 2x_3^2 + 5x_6 - 11x_7 \le 0$$
$$-10 \le x_i \le 10 \ (i = 1, \ldots, 7)$$

G10 (Minimize):

$$f(x) = x_1 + x_2 + x_3$$

subject to

$$g_1(x) = -1 + 0.0025(x_4 + x_6) \le 0$$
$$g_2(x) = -1 + 0.0025(x_5 + x_7 - x_4) \le 0$$
$$g_3(x) = -1 + 0.01(x_8 - x_5) \le 0$$
$$g_4(x) = -x_1 x_6 + 833.33252 x_4 + 100 x_1 - 83333.333 \le 0$$
$$g_5(x) = -x_2 x_7 + 1250 x_5 + x_2 x_4 - 1250 x_4 \le 0$$
$$g_6(x) = -x_3 x_8 + 1250000 + x_3 x_5 - 2500 x_5 \le 0$$
$$100 \le x_1 \le 10000, \quad 1000 \le x_i \le 10000 \ (i = 2, 3), \quad 10 \le x_i \le 1000 \ (i = 4, \ldots, 8)$$

G11 (Minimize):

$$f(x) = x_1^2 + (x_2 - 1)^2$$

subject to

$$h(x) = x_2 - x_1^2 = 0$$
$$-1 \le x_i \le 1 \ (i = 1, 2)$$

G12 (Maximize):

$$f(x) = \frac{100 - (x_1 - 5)^2 - (x_2 - 5)^2 - (x_3 - 5)^2}{100}$$

subject to

$$g(x) = (x_1 - p)^2 + (x_2 - q)^2 + (x_3 - r)^2 - 0.0625 \le 0$$
$$0 \le x_i \le 10 \ (i = 1, 2, 3), \quad p, q, r = 1, 2, \ldots, 9$$

(a point is feasible if the constraint holds for at least one combination of p, q and r).

G13 (Minimize):

$$f(x) = e^{x_1 x_2 x_3 x_4 x_5}$$

subject to

$$h_1(x) = x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_5^2 - 10 = 0$$
$$h_2(x) = x_2 x_3 - 5 x_4 x_5 = 0$$
$$h_3(x) = x_1^3 + x_2^3 + 1 = 0$$
$$-2.3 \le x_i \le 2.3 \ (i = 1, 2), \quad -3.2 \le x_i \le 3.2 \ (i = 3, 4, 5)$$
