
A Constrained Global Optimization Method Based on Multi-Objective Particle Swarm Optimization

KAZUAKI MASUDA and KENZO KURIHARA
Kanagawa University, Japan

SUMMARY

This paper proposes a constrained global optimization method based on Multi-Objective Particle Swarm Optimization (MOPSO). A constrained optimization problem is transformed into another bi-objective problem which minimizes both the original objective function and the total amount of constraint violations. Then, the global optimum of the former problem is obtained as the Pareto optimal solution of the latter one having no constraint violation. In order to find that particular Pareto optimal solution, the proposed method introduces into MOPSO operations such as (a) restricting the number of Pareto optimal solutions obtained at each iteration of MOPSO to urge particles to approach the feasible set of the original constrained problem, (b) choosing the most promising Pareto optimal solution as the global best solution so as to exclude solutions dominated by it, and (c) adding Pareto optimal solutions if their number becomes too small, in order to recover the diversity of search. Numerical examples verify the effectiveness, efficiency, and wide applicability of the proposed method. For some famous engineering design problems, in particular, it can find solutions which are comparable to or better than the previously known best ones. © 2011 Wiley Periodicals, Inc. Electron Comm Jpn, 95(1): 43–54, 2012; Published online in Wiley Online Library (wileyonlinelibrary.com). DOI 10.1002/ecj.10385

Key words: constrained optimization; particle swarm optimization (PSO); multi-objective optimization; multi-objective particle swarm optimization (MOPSO).

1. Introduction

Optimization is a fundamental technique for solving various engineering problems. In particular, meta-heuristic (MH) algorithms have attracted much interest as practical optimization methods. Most MH algorithms solve problems effectively and efficiently by the cooperative and iterative use of two mechanisms: (a) generation of better solutions from currently promising solutions and (b) selection of more promising solutions based on the evaluation of the generated solutions. In nonlinear continuous optimization, multi-point-search MH algorithms such as genetic algorithms (GA) [1, 2], particle swarm optimization (PSO) [3, 4], and differential evolution (DE) [5, 6] have been applied successfully to unconstrained problems.

On the other hand, real optimization problems generally have constraints. Conventionally, constrained problems are solved by reducing them to unconstrained problems whose augmented objective function is formulated from the original objective function and the constraint functions; the penalty method [7–9], for example, is applied for this purpose. There are also many studies on solving constrained problems by MH algorithms, and in some methods, like Refs. 10–12, MH algorithms have been applied to optimize the augmented objective function of the reduced unconstrained problems. It is true that MH algorithms are highly capable of finding the global optimum of such objective functions, but there are no other significant advantages to combining the penalty method with multi-point-search MH algorithms. On the other hand, the authors have proposed a novel penalty method which utilizes PSO not only for optimizing the augmented function but also for updating the penalty parameter [13]. The effectiveness of the method has also been explained theoretically: under the specific condition that PSO is allowed to increase the penalty parameter of each search point separately, we proved that the global optimum of inequality-constrained problems can always be obtained whenever PSO succeeds in the global optimization of the corresponding unconstrained problems. However, we also found that the method is not suitable for certain constrained optimization problems, such as those with equality constraints and discrete constraints [14], and we are tackling such problems now.

© 2011 Wiley Periodicals, Inc.

Electronics and Communications in Japan, Vol. 95, No. 1, 2012
Translated from Denki Gakkai Ronbunshi, Vol. 131-C, No. 5, May 2011, pp. 990–999


Meanwhile, conventional MH algorithms have been further developed for solving multi-objective optimization problems, especially for finding their Pareto optimal sets efficiently in a multi-point-search framework [15–17]. In addition, multi-objective MH algorithms have been applied to constrained optimization problems [18]; specifically, instead of solving a constrained optimization problem directly, the method formulates a bi-objective optimization problem which optimizes the original objective function and minimizes the total amount of constraint violations [19], and searches for the Pareto optimal set of this bi-objective problem by using a multi-objective MH algorithm. The global optimum of the original problem coincides only with the Pareto optimal solutions of the bi-objective problem without any constraint violations. It is necessary to find multiple Pareto optimal solutions of the bi-objective problem by using multi-objective MH algorithms in order to avoid local minima of the original problem, but it is inefficient to find the entire Pareto optimal set when only the global optimum of the original problem is ultimately needed. In this context, it is important to control the balance between diversification and intensification of search in these algorithms.

In this study, we propose an efficient constrained optimization method which uses Multi-Objective Particle Swarm Optimization (MOPSO) [20] to find Pareto optimal solutions, having no constraint violation, of the bi-objective problem considered above. In the proposed method, the following mechanisms are added to the original MOPSO algorithm:

(a) update of the search points to improve the original objective function and/or to reduce the total amount of constraint violations;

(b) use of the newly introduced global best solution to eliminate the search points which are Pareto dominated by it; and

(c) diversification of search by increasing the number of potential Pareto optimal solutions whenever it gets too small.

We also show the effectiveness of the proposed method by numerical examples for various constrained optimization problems.

This paper is organized as follows. In Section 2, we formulate the bi-objective optimization problem for a constrained optimization problem and explain the advantages of using the bi-objective formulation. In Section 3, we give a brief description of MOPSO. In Section 4, we introduce the proposed method, which adds the above-mentioned mechanisms to the original MOPSO. In Section 5, we verify the usefulness of the proposed method by numerical examples. Conclusions and further work are presented in Section 6.

2. Formulation of Constrained Optimization Problem and Equivalent Bi-objective Optimization Problem

Consider the following constrained optimization problem:

minimize  f(x)  (1a)
subject to  g_l(x) ≤ 0, l = 1, ..., L;  h_m(x) = 0, m = 1, ..., M  (1b)
x ∈ D  (1c)

where f : R^N → R is the objective function, and x = [x_1, ..., x_N]^T is the decision variable vector. The functions g_l : R^N → R, l = 1, ..., L, define the inequality constraints, and h_m : R^N → R, m = 1, ..., M, define the equality constraints. Integer constraints and other discrete constraints can also be described as equality constraints. The set D ⊆ R^N denotes a connected set that covers the entire search domain of problem (1). Upper and lower bound constraints can be handled by defining D properly. In the following discussion, we assume that the objective function f is bounded below on D.

2.1 Formulation of bi-objective optimization problem

A feasible solution of problem (1) can be found by minimizing the amount of constraint violations. Accordingly, problem (1) can be replaced by a multi-objective optimization problem which minimizes the objective function f and the amount of constraint violations. Such a multi-objective problem can be formulated in two ways: one is to minimize the amounts of constraint violations separately as different objective functions, and the other is to minimize the total amount of constraint violations as a single objective function. When MH algorithms are applied to multi-objective optimization problems, the number of objective functions should be as small as possible, and hence we adopt the latter approach in this study.

For problem (1), the function µ : R^N → R for the total amount of constraint violations (below referred to as the total constraint violations) is defined on the set D as follows:

µ(x) = Σ_{l=1}^{L} max{0, g_l(x)} + Σ_{m=1}^{M} |h_m(x)|  (2)

The function µ becomes zero only when x is a feasible solution of problem (1); otherwise, µ(x) is always positive. An optimization problem which simultaneously minimizes the objective function f and the total constraint violations µ can be formulated as the following bi-objective optimization problem:

minimize  [f(x), µ(x)]^T  (3a)
subject to  x ∈ D  (3b)
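As an illustrative sketch, the total constraint violations µ of Eq. (2) and the objective pair of problem (3) might be computed as follows; the function names and the sample constraint are our own, not the paper's.

```python
# Sketch: total constraint violations mu(x) and the bi-objective pair.
def total_violation(x, ineq_constraints, eq_constraints):
    """mu(x): sum of max(0, g_l(x)) over inequalities plus |h_m(x)| over equalities."""
    mu = sum(max(0.0, g(x)) for g in ineq_constraints)
    mu += sum(abs(h(x)) for h in eq_constraints)
    return mu

def bi_objective(x, f, ineq_constraints, eq_constraints):
    """Objective vector (f(x), mu(x)) of the bi-objective problem (3)."""
    return (f(x), total_violation(x, ineq_constraints, eq_constraints))

# Hypothetical example: minimize f(x) = x0^2 + x1^2 subject to x0 + x1 >= 1,
# written in the g(x) <= 0 form as 1 - x0 - x1 <= 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
g = [lambda x: 1.0 - x[0] - x[1]]
print(bi_objective([0.5, 0.5], f, g, []))  # → (0.5, 0.0): feasible, mu = 0
```

A feasible point yields µ = 0, so the feasible set of problem (1) maps onto the µ = 0 axis of the objective space of problem (3).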


The global minimum of problem (1), that is, the feasible solution with the smallest value of f, coincides with a Pareto optimal solution x of problem (3) that satisfies µ(x) = 0. The Pareto optimal solutions of problem (3) are defined as follows.

[Definition 2.1] (Pareto dominance of solutions)

If the following condition holds for solutions x, x′ ∈ D of the bi-objective optimization problem (3):

f(x) ≤ f(x′) and µ(x) ≤ µ(x′),  (4a)
f(x) < f(x′) or µ(x) < µ(x′),  (4b)

then x is said to Pareto dominate x′, or x′ is said to be Pareto dominated by x.

In this paper, Pareto dominance of x over x′ is denoted by x ≻ x′.
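Definition 2.1 can be transcribed directly into code for objective vectors (f, µ); the helper name below is ours, not the paper's.

```python
# Sketch of the Pareto dominance test of Definition 2.1.
def dominates(a, b):
    """True if a = (f_a, mu_a) Pareto dominates b = (f_b, mu_b):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

print(dominates((1.0, 0.0), (2.0, 0.0)))  # → True: smaller f, equal mu
```

Note that two identical objective vectors do not dominate each other, since condition (4b) requires a strict improvement in at least one objective.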

[Definition 2.2] (Pareto optimal solutions)

If a solution x* ∈ D of the bi-objective optimization problem (3) is not dominated by any other solution x ∈ D, that is, if

there is no x ∈ D such that x ≻ x*,  (5)

then x* is said to be a Pareto optimal solution of problem (3).

In general, there are multiple Pareto optimal solutions. The set of Pareto optimal solutions in the variable space is called the Pareto optimal set, and the mapping of the Pareto optimal set to the objective space is called the Pareto front.

2.2 Advantages of using bi-objective optimization formulation

Constrained optimization methods can be divided into the following two broad groups.

(i) Methods that transform constrained optimization problems into unconstrained problems. In these methods, the global optimum is obtained through optimization of an augmented objective function which incorporates the constraints into the original objective function.

(ii) Methods that treat the objective function and the constraints separately. In these methods, the global optimum is obtained through improvement of both the objective function and the amount of constraint violations.

The transformation of a constrained optimization problem into a bi-objective optimization problem belongs to group (ii). We explain the advantages of this approach over group (i) in terms of the structure of the optimization problems to be solved, the features of MH algorithms, and especially the expected results when multi-objective MH algorithms are applied to problem (3).

In applying methods of group (i) to problem (1), global optimization of the augmented objective function may be difficult because of the existence of local minima. In addition, the augmented function needs adjustment via specific parameters to balance the scales of the original objective function and the constraints. If the parameters are not updated properly, these methods may fail to find any feasible solutions of problem (1) at all. On the other hand, in the method using the bi-objective optimization formulation, this problem of local optima does not exist, and the difference in objective function scales does not affect the search for Pareto optimal solutions by multi-objective MH algorithms.

In terms of the features of multi-point MH algorithms, controlling the balance between diversification and intensification of search is necessary to solve single-objective optimization problems efficiently. Specifically, the most promising points for the global optimum are selected from multiple search points, and the other points are converged to the selected ones for intensification, or are repelled from them for diversification. However, since the global optimality of solutions cannot be determined accurately without assumptions such as convexity of the objective function, it is fundamentally difficult to obtain the global optimum in this way. On the other hand, to solve multi-objective optimization problems by multi-objective MH algorithms, only search diversity needs to be maintained to find diverse Pareto optimal solutions. Furthermore, since multiple solutions that are not dominated by other solutions in a population (below referred to as Pareto optimal candidate solutions†) are used to generate new solutions, the diversity of search is maintained automatically. It might be difficult to arrange Pareto optimal candidate solutions uniformly over the Pareto optimal set, but it is relatively easy to attract the candidate solutions close to a part of the Pareto optimal set.

In addition, in multi-objective MH algorithms, improvement of a single Pareto optimal candidate solution contributes to improvement of the set of Pareto optimal candidate solutions (below referred to as the Pareto optimal candidate set). When a multi-objective MH algorithm is applied to problem (3), the probability of finding the global optimum of problem (1) is increased through improvement of Pareto optimal candidate solutions of problem (3) which may be infeasible for (1). Let us explain this point by an example in which problem (1) has discrete constraints such as integer constraints. In this case, the objective function values f of feasible solutions of problem (1) are also discrete, and therefore the attainable set of objective functions of problem (3) has the shape shown in Fig. 1. In the figure, the attainable set is shown by shading, and the points X, Y, and Z, with zero total constraint violations µ, correspond to local optima of problem (1). In particular, the point X, which achieves the minimum value of the objective function f, gives the global optimum of problem (1), and the bold line ascending to the left from point X shows the Pareto front of problem (3). Here we assume that a multi-objective MH algorithm has found the Pareto optimal candidate solutions shown by the points a to g around X, Y, and Z in the figure. We also assume that a new solution which has less total constraint violations than the point a is found in the vicinity of the point c. Then, because of Pareto dominance by the new solution, the points a and b will be removed from the Pareto optimal candidate set, and further search will be concentrated in the vicinity of the points X and Y. Accordingly, a multi-objective MH algorithm can automatically escape from local minima of problem (1) during the search for the Pareto optimal set of problem (3). In contrast, if problem (1) with discrete constraints is solved by methods of group (i), the search by MH algorithms will be trapped at a solution corresponding to a local optimum of problem (3) and will be terminated due to the absence of other local minima around it. Such a phenomenon was actually observed in our previous study [14], in which the penalty method was combined with PSO.

†Such solutions are selected according to Pareto dominance among solutions generated in the search, and do not necessarily coincide with true Pareto optimal solutions. Thus we call them Pareto optimal candidate solutions.

3. Multi-Objective Particle Swarm Optimization (MOPSO)

In this study, we use MOPSO [20] to solve the bi-objective optimization problem (3) introduced in Section 2. MOPSO is not the only multi-objective algorithm available, but we choose it because of its simplicity in updating search points.

In PSO [3, 4] and MOPSO, multiple search points called particles (individuals) form a swarm, and they share useful information to search for better solutions. In both PSO and MOPSO, every particle preserves the best solution it has obtained so far (the personal best). In addition, in PSO the swarm also preserves the best solution obtained so far by the whole swarm (the global best), and the global optimum is searched for by using these two kinds of best solutions. While PSO is used to find the single global optimum of a single-objective optimization problem, MOPSO is applied to find diverse Pareto optimal solutions of a multi-objective optimization problem. Therefore, PSO and MOPSO differ in how the best solutions are preserved by each particle and shared in the swarm.

To explain the detailed algorithm of MOPSO, let Q denote the number of particles, and K the total number of search iterations (the maximum number of generations). For the q-th particle in the k-th generation, k = 1, ..., K, q = 1, ..., Q, the position (the value of the decision variable vector of the optimization problem) and the velocity (the change of position) are denoted by x_q(k) and v_q(k), respectively. The personal best solution preserved by the q-th particle in the k-th generation is denoted by p_q(k). In addition, the Pareto optimal candidate set shared in the swarm in the k-th generation is denoted by S(k). When the number of elements of S(k), that is, the number of Pareto optimal candidate solutions in the k-th generation, is given by |S(k)|, the elements of S(k) are denoted by s_q′(k), q′ = 1, ..., |S(k)|.

When MOPSO is applied to the bi-objective optimization problem (3), the particles in the initial generation (k = 0) are arranged randomly in the set D, and their velocities are set to zero. After that (k > 0), the position and velocity of the q-th particle are updated by the following equations:

v_q(k+1) = w v_q(k) + c_1 r_1 (p_q(k) − x_q(k)) + c_2 r_2 (s_{r3}(k) − x_q(k))  (6)
x_q(k+1) = x_q(k) + v_q(k+1)  (7)

In Eq. (6), w, c_1, and c_2 are parameters (positive values) that control the search behavior of MOPSO; r_1 and r_2 are uniform random numbers that take real values in the interval [0, 1], and r_3 is a uniform random number that takes integer values from 1 to |S(k)|. For each particle, Eq. (6) gives a new velocity as a linear combination of the immediately previous velocity of the particle, the direction vector that guides the particle toward its personal best, and the direction vector that guides the particle toward a randomly selected Pareto optimal candidate solution. Then, Eq. (7) gives the new position of the particle.
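The particle update of Eqs. (6) and (7) can be sketched as follows; the function name and the default parameter values are illustrative assumptions, not those of the paper.

```python
import random

# Sketch of the MOPSO particle update of Eqs. (6) and (7).
def update_particle(x, v, pbest, candidates, w=0.7, c1=1.5, c2=1.5):
    """Return the new (position, velocity) of one particle.
    candidates: list of current Pareto optimal candidate solutions s_q'(k)."""
    s = random.choice(candidates)               # randomly chosen candidate (index r3)
    r1, r2 = random.random(), random.random()   # uniform on [0, 1]
    new_v = [w * vi
             + c1 * r1 * (pi - xi)              # attraction toward the personal best
             + c2 * r2 * (si - xi)              # attraction toward the chosen candidate
             for xi, vi, pi, si in zip(x, v, pbest, s)]
    new_x = [xi + nvi for xi, nvi in zip(x, new_v)]   # Eq. (7)
    return new_x, new_v
```

Here r_1 and r_2 are drawn once per particle per generation, matching the paper's description of them as scalar uniform random numbers (some PSO variants instead draw them per dimension).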

Fig. 1. Attainable set of the bi-objective problem (3) for the original problem (1) with discrete constraints.


The personal best of the q-th particle in MOPSO is updated according to Pareto dominance with the previous personal best as follows:

p_q(k+1) = x_q(k+1) if x_q(k+1) ≻ p_q(k), and p_q(k+1) = p_q(k) otherwise.  (8)

Here the initial personal best p_q(0) is set to x_q(0) for all q = 1, ..., Q. Equation (8) replaces the personal best only if the new solution x_q(k+1) Pareto dominates the previous personal best p_q(k).

The Pareto optimal candidate set is updated as follows. Consider the set S′(k+1) = S(k) ∪ {p_1(k+1), ..., p_Q(k+1)}, which adds the personal best solutions of all particles in the (k+1)-th generation to S(k) of the k-th generation. The new Pareto optimal candidate set S(k+1) is generated by selecting the elements of S′(k+1) that are not Pareto dominated by the others.
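The non-dominated selection that produces S(k+1) can be sketched as follows, here applied to objective vectors (f, µ); the quadratic-time filter and its name are our own simplification.

```python
# Sketch: keep only the points not Pareto dominated by any other point.
def nondominated(points):
    """points: list of objective vectors (f, mu).
    Returns the subset not dominated by any other element."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

print(nondominated([(1.0, 2.0), (2.0, 1.0), (2.0, 2.0)]))  # → [(1.0, 2.0), (2.0, 1.0)]
```

The point (2.0, 2.0) is removed because (1.0, 2.0) dominates it; the two surviving points are mutually non-dominated, as is typical of a Pareto candidate set.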

At the end of this section, let us explain the setting of the parameters w, c_1, and c_2 in MOPSO. It has been shown that the parameter setting changes the convergence behavior of PSO [22, 23]. In addition, methods of updating the parameters to control the balance between diversification and intensification of search have been proposed for PSO [24]. On the other hand, in MOPSO it is only necessary to maintain the diversity of search in order to obtain the entire Pareto optimal set. This mechanism is naturally implemented in MOPSO, since an increase in the number of Pareto optimal candidate solutions makes the selection of a Pareto optimal candidate solution in Eq. (6) more random. Consequently, the parameters can be kept fixed during the whole search in MOPSO.

4. Proposed Method

Certainly, one can find the global optimum of the constrained optimization problem (1) by solving the bi-objective optimization problem (3) using MOPSO. The more Pareto optimal solutions of problem (3) MOPSO finds, the more likely the global optimum of problem (1) is contained in the Pareto optimal candidate set. However, obtaining the entire Pareto optimal set of problem (3) is inefficient, since Pareto optimal solutions of problem (3) with nonzero total constraint violations cannot be the global optimum of problem (1). Thus, we propose an efficient method for finding Pareto optimal solutions of problem (3) without any constraint violations (µ = 0) by MOPSO. The proposed method can effectively control the balance between diversification and intensification of search for this purpose.

4.1 Limitation on number of Pareto optimal candidate solutions

In MOPSO, the Pareto optimal candidate solutions grow in number as the search proceeds. However, when problem (3) is solved by MOPSO, Pareto optimal candidate solutions with large values of the total constraint violations µ are unlikely to be close to the global optimum of problem (1). Thus, we limit the number of Pareto optimal candidate solutions to be propagated into the next generation to a predefined number Q_1. Suppose, for example, that the Pareto optimal candidate set S(k) was obtained in the k-th generation. If |S(k)| > Q_1 holds, then the candidate solutions are sorted in ascending order of the values of µ, and only the top Q_1 solutions are preserved for the next generation, while the rest are excluded from S(k).

If Pareto optimal candidate solutions with smaller values of µ are added in the next generation, the Pareto optimal candidate set will be closer to the feasible set of problem (1) after the removal of candidate solutions with large values of µ. Thus, feasible solutions of problem (1) are more likely to be found by this procedure. Furthermore, limiting the number of Pareto optimal candidate solutions can improve computational efficiency because fewer comparisons among candidate solutions are needed.
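The truncation step of Section 4.1 amounts to a sort-and-cut on µ; the names below are ours, not the paper's.

```python
# Sketch of the Section 4.1 truncation: keep at most Q1 candidates,
# preferring those with the smallest total constraint violations mu.
def truncate_candidates(S, Q1, mu):
    """S: list of candidate solutions; mu: maps a solution to its
    total constraint violations. Returns at most Q1 candidates."""
    if len(S) <= Q1:
        return list(S)
    return sorted(S, key=mu)[:Q1]   # ascending mu, top Q1 preserved
```

Because `sorted` is stable, candidates with equal µ keep their original relative order, so the cut is deterministic for a given S.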

4.2 Introduction and use of global best solution

The global optimum of problem (1) is usually a Pareto optimal solution of problem (3) with zero total constraint violations (µ = 0). In order to solve problem (3) by MOPSO effectively, we define the global best solution as the solution obtained so far that is most likely to become the global optimum of problem (1), and use it to guide the other particles toward that solution. Specifically, the global best solution is defined as:

• the Pareto optimal candidate solution of problem (3) with the smallest value of the original objective function f, when feasible solutions of problem (1) have already been found;

• the Pareto optimal candidate solution of problem (3) with the smallest value of the total constraint violations µ, when feasible solutions of problem (1) have not been found yet.

When the global best solution in the k-th generation is denoted by g(k), it is updated by the following rule:

g(k+1) = the candidate s_{q′}(k+1) with the smallest µ, if g(k) is infeasible and min_{q′} µ(s_{q′}(k+1)) < µ(g(k));
g(k+1) = the feasible candidate s_{q′}(k+1) with µ(s_{q′}(k+1)) = 0, whenever such a candidate exists;
g(k+1) = g(k), otherwise.  (9)

Here g(0) is selected as a Pareto optimal candidate solution in the initial generation with the smallest total constraint violations. It should be noted that s_{q′}(k+1), q′ = 1, ..., |S(k+1)|, are the Pareto optimal candidate solutions in the (k+1)-th generation preserved through the procedure of Section 4.1.

The cases in Eq. (9) can be understood as follows:

• In the first case: if the global best solution g(k) in the k-th generation is not feasible but a new Pareto optimal candidate solution that reduces the total constraint violations µ is obtained, it is accepted as the new global best solution g(k+1) in the (k+1)-th generation.

• In the second case: whenever a feasible Pareto optimal candidate solution with zero total constraint violations is obtained, it is always accepted as g(k+1), regardless of the feasibility of g(k).

• In the third case: otherwise, g(k) is preserved as g(k+1).

Note that there is at most one feasible Pareto optimal candidate solution in the Pareto optimal candidate set. If the global best solution g(k) in the k-th generation is feasible and a new solution that Pareto dominates g(k) is obtained and included in S(k+1) in the (k+1)-th generation, that solution is necessarily a feasible solution that decreases the original objective function f. On the other hand, if g(k) is feasible but no solution that Pareto dominates it can be obtained, g(k) is preserved in S(k+1). If g(k) is infeasible, the second case of Eq. (9) is applied only at the moment that a feasible solution is found for the first time. In any event, the feasible solution included in the Pareto optimal candidate set has the smallest value of the objective function f among all feasible solutions found so far, and it is accepted as the global best solution for the next generation. When Eq. (9) is applied, the total constraint violations µ of the global best solution is always nonincreasing. In contrast, the objective function f may either increase or decrease as long as no feasible solution has been found.
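One plausible reading of the three cases above, written as code; the function and argument names are our own, and this sketch assumes candidate solutions are compared through externally supplied f and µ.

```python
# Sketch of the global best update: three cases of the rule discussed above.
def update_gbest(gbest, candidates, f, mu):
    """gbest: current global best; candidates: Pareto optimal candidate
    solutions of the next generation; f, mu: objective and total-violation
    functions. Returns the new global best."""
    feasible = [s for s in candidates if mu(s) == 0]
    if feasible:
        # second case: a feasible candidate is always accepted
        return min(feasible, key=f)
    best = min(candidates, key=mu)
    if mu(best) < mu(gbest):
        # first case: an infeasible candidate that reduces mu is accepted
        return best
    return gbest  # third case: keep g(k)
```

With this rule µ(g(k)) never increases, mirroring the nonincreasing-violation property stated above.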

In order to improve the global best solution more effectively, we add the following procedure: if the global best solution g(k) has been improved by Eq. (9), and more than one Pareto optimal candidate solution distinct from g(k) is contained in the Pareto optimal candidate set S(k) of the k-th generation, then the Pareto optimal candidate solution which is equivalent to the global best solution g(k) is excluded from S(k). Figure 2 illustrates the global best solution and the Pareto optimal candidate set when the global best solution has been improved but no feasible solution has been found in the k-th generation. Due to the removal of the global best solution from the Pareto optimal candidate set, particles can frequently be moved in the descent direction of the objective function f by Eq. (6). Eventually, when MOPSO finds new Pareto optimal candidate solutions with less total constraint violations, the global best solution will be improved further.

Moreover, we basically exclude the Pareto optimal candidate solutions dominated by the global best solution from the Pareto optimal candidate set. This procedure corresponds to using the global best solution as a filter in Ref. 19, and thus has the effect of improving the entire Pareto optimal candidate set.

4.3 Diversification of search in case of few Pareto optimal candidate solutions

In MOPSO with the procedures explained in Sections 4.1 and 4.2, the number of Pareto optimal candidate solutions for problem (3) tends to decrease as the search proceeds. Moreover, after a feasible solution of problem (1) is found, the particles may easily converge to a single solution. In either case, there is a possibility of premature convergence due to loss of the diversity of search. In order to prevent such a situation, the Pareto optimal candidate solutions must be increased whenever there are only a few.

When the number of Pareto optimal candidate solutions drops below a predefined number Q_2 such that Q_1 > Q_2 ≥ 2, we rearrange all particles in the following way: their positions are reinitialized randomly in the set D, their velocities are set to zero, and their personal best solutions are set to the reinitialized positions. After that, their positions and velocities are updated by Eqs. (6) and (7). When this procedure is applied in the k-th generation, the terms of Eq. (6) involving v_q(k) and p_q(k) − x_q(k) vanish, and Eqs. (6) and (7) become as follows:

v_q(k+1) = c_2 r_2 (s_{r3}(k) − x_q(k))  (10)
x_q(k+1) = x_q(k) + v_q(k+1)  (11)

It should be noted that xq(k) is the reinitialized position ofthe q-th particle, q = 1, . . . , Q. According to Eq. (11),particles move from randomly chosen points on the set Dtoward a Pareto optimal candidate solution; and, as a result,new Pareto optimal candidate solutions can be added nearexisting candidate solutions.

Fig. 2. Global best solution of proposed method.

(10)

(11)
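Since Eqs. (6), (7), (10), and (11) are not reproduced in this excerpt, the sketch below assumes the standard inertia-weight PSO update v ← wv + c1 r1 (p − x) + c2 r2 (g − x), x ← x + v. With the velocity zeroed and the personal best set to the position, only the global-best attraction term survives, which matches the behavior the text attributes to Eq. (11). All names are hypothetical.

```python
import random

def rearrange_particles(Q, lower, upper, gbest, c2=2.0):
    """Particle rearrangement of Section 4.3 (a sketch under the
    assumptions stated above): reinitialize each position uniformly in
    D = [lower, upper]^n, zero the velocity, set the personal best to
    the new position, then take one PSO step.  Because v = 0 and p = x,
    the velocity update reduces to c2*r2*(gbest - x), so every particle
    moves from its random start point straight toward gbest."""
    n = len(gbest)
    new_positions = []
    for _ in range(Q):
        x = [random.uniform(lower, upper) for _ in range(n)]
        r2 = random.random()
        v = [c2 * r2 * (g_i - x_i) for g_i, x_i in zip(gbest, x)]
        new_positions.append([x_i + v_i for x_i, v_i in zip(x, v)])
    return new_positions
```

Because 0 ≤ c2 r2 < 2 with c2 = 2.0, each new position lies on the line through the random start point and the chosen Pareto optimal candidate, so fresh candidates tend to appear near existing ones, as the text observes.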


However, if this procedure fails to increase the number of Pareto optimal candidate solutions, we choose from xq(k + 1), q = 1, . . . , Q, those particles which lie in the set D and have smaller objective function values (f) than the global best solution, and add them to the Pareto optimal candidate set S(k + 1).

These solutions may in fact be dominated by the global best solution or by existing Pareto optimal candidate solutions, but in order to recover the diversity of search they are exceptionally preserved for the next generation when the number of Pareto optimal candidate solutions is smaller than Q2.

4.4 Simplification of update rule for personal best solutions

In Section 3, we presented Eq. (8), used in Ref. 20, as the update rule for the personal best solution according to Pareto dominance. However, since the personal best solution remains unchanged unless solutions that Pareto-dominate it are obtained, the number of Pareto optimal candidate solutions will not increase under this rule. On the contrary, when some Pareto optimal candidate solutions are eliminated by the procedures explained in Sections 4.1 and 4.2, there is a risk of a gradual decline in the number of candidates. Essentially, there is no need to update the personal best solution of each particle in terms of Pareto dominance. In this study we therefore use the following simple equation instead of Eq. (8):

Since the equality pq(k) = xq(k) holds frequently under Eq. (12), using this update rule effectively means ignoring the original role of the personal best solution in MOPSO. The use of Eq. (12) instead of Eq. (8) is also effective in reducing computational cost.
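For comparison, the two update rules can be sketched as follows. Eqs. (8) and (12) themselves are not reproduced in this excerpt, so both bodies are readings of the surrounding text rather than the paper's exact formulas; `F` is a hypothetical function mapping a position to its objective pair (f, v).

```python
def update_pbest_pareto(p_q, x_q, F):
    """Pareto-dominance rule in the spirit of Eq. (8) (per Ref. 20, as
    described in the text): replace the personal best only when the new
    position Pareto-dominates it."""
    f_new, f_old = F(x_q), F(p_q)
    no_worse = all(a <= b for a, b in zip(f_new, f_old))
    better = any(a < b for a, b in zip(f_new, f_old))
    return x_q if (no_worse and better) else p_q

def update_pbest_simple(p_q, x_q):
    """Simplified rule in the spirit of Eq. (12): the personal best
    simply tracks the current position, so no dominance comparison
    (and no extra objective evaluation) is needed."""
    return x_q
```

Under the simplified rule the pbest term c1 r1 (p − x) in the velocity update vanishes whenever p = x, which is why the text says the original role of the personal best solution is, in effect, ignored.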

5. Numerical Examples

We now present several numerical examples to confirm the effectiveness of the proposed method described in Section 4. In Section 5.1, we solve three example problems that minimize a simple two-variable nonlinear function under different constraints, and demonstrate that the proposed method is effective for problems with various types of constraints. In Sections 5.2 to 5.4, we further confirm the usefulness of the proposed method by applying it to three famous benchmark problems studied in Ref. 25 and other papers.

In this section, the parameters of MOPSO are set to w = 0.4, c1 = c2 = 2.0 for all problems, all particles, and all generations. The upper limit Q1 on the number of Pareto optimal candidate solutions is set equal to the number Q of particles in MOPSO for each problem, and the number Q2 that controls the timing of particle rearrangement is set to 2 for all problems.

5.1 Minimization of two-variable nonlinear function

Let us consider three constrained minimization problems for a two-variable nonlinear function. In each problem, the upper and lower bound constraints, –10 ≤ x1, x2 ≤ 10, are handled by setting D = [–10, 10]². Below we present the three example problems and the respective global optimal solutions x* found analytically. For each problem, the objective function is depicted in Fig. 3, and its contour diagram, with the location of the global optimum, in Fig. 4. In the latter diagram, the points Ex. 1 to Ex. 3 correspond to the global optima of Examples 1 to 3, respectively.

[Example 1: minimization problem with an inequality constraint]

[Example 2: minimization problem with an equality constraint]

[Example 3: minimization problem with an equality constraint and an integer constraint]

(12)

(13a)

(13b)

(14a)

(14b)

Fig. 3. Surface of example objective function.


For Examples 1 and 2, the total amount of constraint violations was computed by substituting Eq. (13b) or (14b), respectively, into Eq. (2). In Example 3, the integer constraint (15c) was transformed into the following equality constraints:

where, for real x, ⌊x⌋ denotes the largest integer equal to or smaller than x, and ⌈x⌉ the smallest integer equal to or larger than x. The total amount of constraint violations was thus defined by substituting Eqs. (15b), (16a), and (16b) into Eq. (2). Figures 5 to 7 plot the attainable sets of Examples 1 to 3, obtained from randomly generated feasible solutions in the region [–3, 3]² containing the global optimum. It can be clearly observed that the shape of the attainable set varies with the constraints; in particular, in Example 3, with its integer constraint, the objective function values of feasible solutions are discrete, and the shape of the attainable set resembles that in Fig. 1.
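An equality-constraint residual for an integer restriction can be built from the floor and ceiling functions as follows. The paper's Eqs. (16a) and (16b) are not reproduced in this excerpt, so the exact form below is an assumption: one standard construction whose residual is zero exactly when x is an integer.

```python
import math

def integer_violation(x):
    """Residual of the assumed equality constraint
        h(x) = (x - floor(x)) * (ceil(x) - x) = 0,
    which holds if and only if x is an integer; for non-integer x both
    factors are positive, so the residual is strictly positive."""
    return (x - math.floor(x)) * (math.ceil(x) - x)

print(integer_violation(3.0))   # -> 0.0
print(integer_violation(2.25))  # (2.25 - 2) * (3 - 2.25) -> 0.1875
```

A residual of this kind can be summed into the total constraint violation like any other equality constraint, which is how the integer constraint of Example 3 enters Eq. (2).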

To solve the three example problems, the number Q of particles in MOPSO was set to 10, and the maximum number of generations K to 500. Table 1 shows the feasible solutions most frequently obtained as the global best solution in the final generation over 10,000 trials with different particle initializations. The proposed method discovered the global optimum with the highest frequency for each example problem. Although the feasible region of Example 2 is comparatively narrow because of the equality constraint, and that of Example 3 consists of isolated points because of the integer constraint, the global optimum was obtained in both cases at high frequencies, above 80%.

5.2 Tension/compression spring design problem

The proposed method was applied to the following minimization problem with three variables and four inequality constraints:

(15a)

(15b)

(15c)

Fig. 4. Contour of example objective function.

(16a)

(16b)

Fig. 6. Attainable set of (3) for Example 2.

Fig. 5. Attainable set of (3) for Example 1.

Fig. 7. Attainable set of (3) for Example 3.

(17a)

(17b)


Table 2 shows the minimum, maximum, average, and standard deviation of the objective function values of the global best solutions obtained in the final generation over 100 trials, with the number Q of particles set to 30 and the maximum number K of generations set to 500. Feasible solutions with zero constraint violation were obtained in 93 of the 100 trials. Table 3 compares the best solutions obtained by the proposed method with the known best solutions [25]; the values of the constraint functions for the known best solutions (italicized figures) were calculated by the authors. The solutions obtained by the proposed method have objective function values comparable to those of the known best solutions while satisfying all the constraints. Feasible solutions with the objective function value (f) shown in Table 3 were obtained in 59 of the 100 trials. We may thus conclude that the proposed method achieves high global optimization capability.

5.3 Welded beam design problem

The proposed method was applied to the following minimization problem with four variables and seven inequality constraints:

Table 1. Results for Examples 1 to 3

Table 3. Comparison of best solutions for the tension/compression spring design problem

(17c)

(17d)

(17e)

(17f)

(18a)

(18b)

(18c)

(18d)

(18e)

(18f)

(18g)

(18h)

(18i)

Table 2. Statistical results of the proposed method with respect to objective function (f) values for the tension/compression spring design problem


Problem (18) is the same as the problem solved in Ref. 25. It should be noted, however, that modified versions exist in which the value 13,600 in Eq. (18c) is replaced by 13,000, or in which different coefficients are used for the function Pc.

As in Section 5.2, 100 trials were performed with the number Q of particles set to 30 and the maximum number K of generations set to 500. All of the resulting global best solutions in the final generation had zero constraint violation. Table 4 shows the statistics of the objective function values obtained over the 100 trials, and Table 5 compares the best solutions with the known best solutions [25]. The solutions obtained by the proposed method show little variance; moreover, the best of them differ from the known best solutions only in the value of x2, are feasible, and give a much smaller objective function value (f). Thus, the proposed method outperformed the known best solutions, which confirms its high effectiveness.

5.4 Pressure vessel design problem

The proposed method was applied to the following minimization problem with four variables and with inequality and discrete constraints:

(19a)

(19b)

(19c)

(19d)

(19e)

(19f)

(19g)

Discrete constraint (19g) is transformed into equality constraints as follows:

(20a)

(20b)

This yields a problem with two equality constraints and four inequality constraints. The total constraint violation was defined by substituting Eqs. (19c) to (19f), (20a), and (20b) into Eq. (2).

In this problem, 100 trials were performed with the number Q of particles set to 30 and the maximum number K of generations set to 1000. Table 6 shows the minimum, maximum, average, and standard deviation of the objective function values of the global best solutions obtained in the final generation. In this case, solutions with a total constraint violation below 1.0 × 10⁻⁸ were considered feasible. Feasible solutions were obtained in 91 of the 100 trials. As can be seen from Table 7, the best solutions obtained by the proposed method coincide with the known best solutions [25]. However, in this problem the value of the function g3 varies strongly with the variable x3. In our calculations, the solutions presented in Ref. 25 appear to violate the constraints, which can be explained by their rounding to four digits after the decimal point. Feasible solutions yielding the objective function value in Table 7 were obtained in 68 of the 100 trials. In addition, the known method [25] requires 81,000 function evaluations, whereas the proposed method requires only 30,000 evaluations each of the objective function and of the total constraint violation. These facts confirm that the proposed method offers not only high global optimization capability but also high computational efficiency.

Table 4. Statistical results of the proposed method with respect to objective function (f) values for the welded beam design problem

Table 5. Comparison of best solutions for the welded beam design problem

Table 6. Statistical results of the proposed method with respect to objective function (f) values for the pressure vessel design problem
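The total-violation measure used throughout Section 5 can be sketched as follows. Eq. (2) is not reproduced in this excerpt, so the sum below is a common form of such a measure (max(0, g_i) for inequality constraints g_i ≤ 0, |h_j| for equality constraints h_j = 0) and an assumption, as are all names.

```python
def total_violation(x, inequality, equality):
    """Total amount of constraint violations for a candidate x under
    the assumed form of Eq. (2): each inequality constraint g(x) <= 0
    contributes max(0, g(x)); each equality constraint h(x) = 0
    contributes |h(x)|.  The result is zero iff x is feasible."""
    v = sum(max(0.0, g(x)) for g in inequality)
    v += sum(abs(h(x)) for h in equality)
    return v

# Hypothetical toy constraints: g(x) = x - 1 <= 0 and h(x) = x - 2 = 0.
print(total_violation(3.0, [lambda x: x - 1.0], [lambda x: x - 2.0]))  # -> 3.0
```

With a definition of this kind, a solution is feasible exactly when the total violation is zero, which matches the feasibility criterion used above (up to the 1.0 × 10⁻⁸ tolerance of Section 5.4).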

6. Conclusions

We proposed a method of applying Multi-Objective Particle Swarm Optimization (MOPSO) to constrained optimization problems; in particular, a constrained minimization problem is transformed into a bi-objective optimization problem that minimizes both the original objective function and the total amount of constraint violations. While MOPSO generally searches for as many Pareto optimal solutions as possible, in this bi-objective problem only the feasible Pareto optimal solution needs to be found. The proposed method therefore modifies MOPSO in the following ways: (a) the number of Pareto optimal candidate solutions propagated to the next generation is limited, and solutions with larger constraint violations are removed; (b) the global best solution, which is likely to be closest to the global optimum of the original constrained optimization problem, is introduced and is always used to improve the Pareto optimal candidate set; (c) in order to preserve and recover the diversity of search, particles are rearranged randomly to add Pareto optimal candidate solutions whenever their number becomes too small. Numerical experiments confirmed the effectiveness and efficiency of the proposed method, as well as its applicability to various types of constrained optimization problems, including those with discrete constraints.

In the near future, we will study the global search capability of the proposed method mathematically. To enhance its applicability, we will apply the method to more problems with different numbers of variables and constraints. In addition, we plan to continue research on solving constrained optimization problems with other MH algorithms.

REFERENCES

1. Goldberg DE. Genetic algorithms in search, optimization, and machine learning. Addison-Wesley; 1989.

2. Mitchell M. An introduction to genetic algorithms. MIT Press; 1998.

3. Eberhart RC, Kennedy J. A new optimizer using particle swarm theory. Proc Sixth Symp Micro Machine and Human Science, p 39–43, 1995.

4. Kennedy J, Eberhart RC. Particle swarm optimization. Proc 1995 IEEE Int Conf Neural Networks, IV, p 1942–1948.

5. Price K, Storn R. Differential evolution. Dr. Dobb's Journal 1997;264:18–24.

6. Storn R, Price K. Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 1997;11:341–359.

7. Zangwill WI. Non-linear programming via penalty functions. Manage Sci 1967;13:344–358.

8. Aiyoshi E. Mathematical programming. Asakura Shoten; 1985. (in Japanese)

9. Luenberger DG. Linear and nonlinear programming (second edition). Springer; 2003.

10. Deb K. An efficient constraint handling method for genetic algorithms. Comput Methods Appl Mech Eng 2000;186:311–338.

11. Coello CAC. Use of a self-adaptive penalty approach for engineering optimization problems. Comput Ind 2000;41:113–127.

Table 7. Comparison of best solutions for pressure vessel design problem


12. Coello CAC. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art. Comput Methods Appl Mech Eng 2002;191:1245–1287.

13. Masuda K, Kurihara K, Aiyoshi E. A penalty approach to handle inequality constraints in particle swarm optimization. Proc 2010 Int Conf Systems, Man and Cybernetics (SMC 2010), p 2520–2525.

14. Masuda K, Kurihara K. An understanding of constrained optimization methods by using particle swarm optimization in the context of multi-objective optimization. Proc of Electronics, Information and Systems Conference, Electronics, Information and Systems Society, IEE of Japan, p 474–482, 2010. (in Japanese)

15. Deb K. Multi-objective genetic algorithms: Problem difficulties and construction of test problems. Evol Comput 1999;7:205–230.

16. Knowles JD, Corne DW. Approximating the nondominated front using the Pareto archived evolution strategy. Evol Comput 2000;8:149–172.

17. Deb K. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 2002;6:182–197.

18. Venter G, Haftka RT. Constrained particle swarm optimization using a bi-objective formulation. Struct Multidisc Optim 2010;40:65–76.

19. Fletcher R, Leyffer S. Nonlinear programming without a penalty function. Math Program Ser A 2002;91:239–269.

20. Coello CAC, Lechuga MS. MOPSO: A proposal for multi-objective particle swarm optimization. Proc of the 2002 Congress on Evolutionary Computation (CEC '02), p 1051–1056.

21. Clerc M, Kennedy J. The particle swarm—Explosion, stability, and convergence in a multidimensional complex space. IEEE Trans Evol Comput 2002;6:58–73.

22. Trelea IC. The particle swarm optimization algorithm: Convergence analysis and parameter selection. Inf Process Lett 2003;85:317–325.

23. Jiang M, Luo YP, Yang SY. Stochastic convergence analysis and parameter selection of the standard particle swarm optimization algorithm. Inf Process Lett 2007;102:8–16.

24. Shi Y, Eberhart RC. A modified particle swarm optimizer. Proc IEEE Int Conf Evolutionary Computation, p 69–73, 1998.

25. He Q, Wang L. A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization. Appl Math Comput 2007;186:1407–1422.

AUTHORS (from left to right)

Kazuaki Masuda (member) received a bachelor's degree from Keio University (Department of Applied Physics and Physico-Informatics, Faculty of Technology) in 2000, completed the M.E. program in fundamental science and technology at the Graduate School of Science and Technology in 2002, completed the doctoral program in 2005, and joined the faculty of Kanagawa University as a research associate. He has been an associate professor since 2008. His research interests are nonlinear dynamic systems and global optimization algorithms. He holds a D.Eng. degree, and is a member of SICE and other societies.

Kenzo Kurihara (member) completed the M.E. program at the University of Tokyo (Mechanical Engineering, Graduate School of Engineering) in 1974 and joined Hitachi, Ltd. Since 1996 he has been affiliated with Kanagawa University, and is now a professor. He was a visiting researcher at Vanderbilt University in 2000. His research interests are modeling and control of discrete systems, and planning and management of production systems. He holds a D.Eng. degree, and is a member of IEEE, JSME, IPSJ, SICE, and other societies.
