
Computers & Operations Research 31 (2004) 2453–2471
www.elsevier.com/locate/dsw

Hybrid genetic algorithm for optimization problems with permutation property

Hsiao-Fan Wang a,∗, Kuang-Yao Wu a,b

a Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu 300, Taiwan

b Department of Business Management, National United University, Miaoli 360, Taiwan

Abstract

Permutation property has been recognized as a common but challenging feature in combinatorial problems. Because of their complexity, recent research has turned to genetic algorithms to address such problems. Although genetic algorithms have been proven to facilitate the entire space search, they lack fine-tuning capability for obtaining the global optimum. Therefore, in this study a hybrid genetic algorithm was developed by integrating both the evolutional and the neighborhood search for permutation optimization.

Experimental results of a production scheduling problem indicate that the hybrid genetic algorithm outperforms the other methods, in particular for larger problems. Numerical evidence also shows that different input data from the initial, transient and steady states influence computation efficiency in different ways. Therefore, their properties have been investigated to facilitate the measure of the performance and the estimation of the accuracy.
© 2003 Elsevier Ltd. All rights reserved.

Keywords: Combinatorial optimization; Permutation property; Genetic algorithm; Neighborhood search; Evaluation and parameter determination; Scheduling example

1. Introduction

Combinatorial optimization problems are characterized by a finite number of feasible solutions. Let E = {e1, e2, …, en} be a finite set, Π a set of feasible solutions defined over E, and f : Π → R an objective function. A combinatorial optimization problem is to find a solution in Π whose objective value is minimum or maximum [1]. These optimizations contain a huge body of problems such as quadratic assignment problems, permutation flow-shop scheduling problems, traveling salesman problems, vehicle routing and scheduling problems, and the more complex cases of resource-constrained production scheduling problems. Among these problems, permutation has been recognized as being their most common property. Essentially, a common feature of permutation problems is that if a permutation scheme is determined, its corresponding solution can be easily derived with a problem-specific procedure. The general form of permutation optimization problems is expressed by Model (1).

∗ Corresponding author. Tel.: +886-35-742-654; fax: +826-35-722-204. E-mail address: [email protected] (H.-F. Wang).

0305-0548/$ - see front matter © 2003 Elsevier Ltd. All rights reserved. doi:10.1016/S0305-0548(03)00198-9

Table 1
Definitions of three problems in a permutation model

p:
- Quadratic assignment: unit pi is assigned at the ith location.
- Flow-shop scheduling: job pi is scheduled at the ith position of the sequence.
- Production scheduling (Wang and Wu, 2002): product pi is manufactured within the ith production time pattern.

x:
- Quadratic assignment: x_ki determines whether unit k is assigned at location i.
- Flow-shop scheduling: x_ki determines whether job k is done at position i.
- Production scheduling: x_kt determines whether product k is produced in period t.

h:
- Quadratic assignment: x_ki = 1 if pi = k; = 0 otherwise.
- Flow-shop scheduling: x_ki = 1 if pi = k; = 0 otherwise.
- Production scheduling: assign x_k1, x_k2, …, x_kT based on pattern i by a mapping algorithm, if pi = k.

v:
- Quadratic assignment: a function defined as Σ_i Σ_{j>i} Σ_k Σ_{l≠k} c_ij d_kl x_ik x_jl, where c_ij and d_kl are constants.
- Flow-shop scheduling: a logical arithmetic to measure makespan.
- Production scheduling: a linear programming model considering capacity and demand to minimize cost.

min z = f(p)

s.t. f(p) = v(h(p)),

p ∈ Π,                                                        (1)

where Π = {(e1, e2, …, en) | all permutations of ei, ei ∈ E, i = 1, 2, …, n}; p: a permutation (p1, p2, …, pn) in set Π; x: a decision solution (x1, x2, …, xd) obtained by h(p); h : p → x, a function or logical procedure which maps the permutation p in an ordinal permutation space into a decision solution x in a cardinal space in a one-to-one manner; v : x → R, a function, mathematical programming model or arithmetic procedure to evaluate a decision solution x; f : p → R, an objective function in terms of optimization, which is composed of h and v.

Model (1) enables us to present the structures and complexity of the permutation problems. Table 1 shows three typical problems of this model.
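The decomposition f(p) = v(h(p)) of Model (1) can be made concrete with a small sketch. Here the mapping h and the evaluation v are illustrative stand-ins (a toy linear assignment with a hypothetical cost matrix C), not the paper's production-scheduling model; only the structure of Model (1) is taken from the text:

```python
import itertools

E = [0, 1, 2, 3]                      # element labels e_i = i
C = [[0, 3, 1, 9], [3, 0, 7, 2],      # illustrative cost C[k][i]: cost of
     [1, 7, 0, 4], [9, 2, 4, 0]]      # placing object k at location i

def h(p):
    """Map permutation p into a decision solution x: x[k][i] = 1 iff p_i = k."""
    n = len(p)
    x = [[0] * n for _ in range(n)]
    for i, k in enumerate(p):
        x[k][i] = 1
    return x

def v(x):
    """Evaluate a decision solution (here: a simple linear assignment cost)."""
    return sum(C[k][i] for k, row in enumerate(x)
               for i, used in enumerate(row) if used)

def f(p):
    """Objective of Model (1): f(p) = v(h(p))."""
    return v(h(p))

# Exhaustive search is only viable for tiny n; the point of the paper is
# precisely that |Π| = n! forbids enumeration in practice.
best = min(itertools.permutations(E), key=f)
```

For n = 4 the 24 permutations can still be enumerated; Table 2 and Table 3 show how quickly n! outgrows any such enumeration.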

Even in a simple permutation, exact algorithms are not easy to design with moderate computational effort, as can be seen from the complexity theory [2]. A major trend in solving such hard problems is to utilize an effective heuristic search. In recent studies, some meta-heuristics such as Multi-start Local Search [3], Simulated Annealing (SA) [4], Tabu Search [5] and Genetic Algorithms (GAs) [6–8] have been commonly adopted. Reeves [9], and Aarts and Lenstra [10] have done a thorough survey of these approaches to commonly defined combinatorial problems. There are also several other studies addressing general permutation models. Tian et al. [11] applied SA to permutation problems. Their theoretical results show that the SA algorithm terminates with a global optimum if computation time is long enough. However, it is inapplicable in practical implementation [1]. Djerid and Portmann [12] addressed keeping good schemata using crossover operators in GAs for permutation problems. However, GAs, in general, are incapable of fine-tuning for obtaining the global optimum. Thus various modifications have been made by incorporating local search techniques into the evolution process for particular problems [3,13,14]. Alternatively, Yagiura and Ibaraki [15] proposed a hybrid GA for the permutation problems by employing dynamic programming as the mechanism of crossover and local search. But it is not a general approach because its applicability highly depends on the structures of the problems.

While GAs are good at global search but poor at convergence, local search is good at fine-tuning but poor at finding global optima. Therefore, to compensate each other for the general permutation model (1), the genetic algorithm and neighborhood search (NS) improvement are incorporated as the hybridization in this study. In the analysis of such hybridization, the pros-and-cons compensation between the GA and the NS is particularly addressed. The proposed hybrid GA was demonstrated with an illustrative example of a complex production scheduling problem. The detailed procedure is presented and the convergence behavior is analyzed in this paper.

The paper is organized as follows. Section 2 presents a foundation for designing a hybrid GA. Section 3 reports on the development of a hybrid GA approach for permutation optimization problems. In Section 4, the proposed method is illustrated by means of a production scheduling problem. The pure GA, the hybrid GA, a multi-start neighborhood search and a random sampling search are compared. An extension to mining data patterns is then discussed in Section 5. Finally, the conclusions of this study are drawn with possible directions for subsequent studies.

2. Foundation of the hybrid GA

In GAs a potential solution to a problem is inferred as an individual which can be represented as the genetic structure of a chromosome. Throughout the genetic evolution, because of the mechanism of selection, crossover and mutation, good-quality offspring are born from a population of chromosomes. Generation by generation, the stronger individuals, as better solutions to the problem, are the survivors in a competitive environment. GAs have experienced increasing interest from the combinatorial optimization community and have shown great promise for performance in many industrial engineering areas [8].

Although GAs perform well in a global search, they take a relatively long time to converge to a local optimum [3,13,14]. Local improvement procedures, on the other hand, can find the local optimum in a small region of the search space but they are typically poor in a global search. Therefore, local search procedures can be incorporated into a GA to improve its performance. Basically, the local search in this context can be considered as a kind of learning process during the lifetime of an individual string for which the selection of better solutions is based on the fitness at the end of the individual life. Such hybrid GAs have been used successfully to solve a wide variety of problems [13,14,16].

There are two concepts for incorporating a local search into the GA framework in terms of an evolution function: Lamarckian evolution and the Baldwin Effect [16]. The Baldwin effect allows an individual’s fitness (phenotype) to be determined based on learning, i.e. the application of local improvement. Like natural evolution, the result of the improvement does not change the genetic structure (genotype) of the individual, but it increases the individual’s chances of survival. In addition to determining an individual’s fitness by learning, Lamarckian evolution changes the genetic structure of an individual to reflect the result of the learning. It has been shown that the Lamarckian strategy is more efficient than the Baldwin strategy [16,17]. Therefore, “Lamarckian” learning is introduced in conjunction with the hybrid GA. Fig. 1 illustrates the structure of the proposed method in which a simple GA is employed for global exploration (evolution) among the population, and an NS procedure is used for local exploitation (learning) around chromosomes.

[Figure: flowchart with blocks Population, Initialization, Fitness Evaluation, Parent Selection, Mating Pool, Crossover, Mutation, Offspring, Learning, Replacement, Terminate? and Exit.]

Fig. 1. Structure of the proposed hybrid GA.
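The Lamarckian/Baldwinian distinction can be sketched in a few lines. The fitness function and the one-pass pairwise-swap improvement below are toy stand-ins for illustration, not the paper's NS procedure:

```python
# Toy sketch of the two learning schemes discussed above.

def fitness(p):
    """Toy objective: number of elements already at their home position (maximize)."""
    return sum(1 for i, g in enumerate(p) if g == i)

def improve(p):
    """One pass of local improvement: accept the first improving pairwise swap."""
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            q = p[:]
            q[i], q[j] = q[j], q[i]
            if fitness(q) > fitness(p):
                return q
    return p

def baldwinian(p):
    """Fitness is taken from the learned phenotype; the genotype p is unchanged."""
    return fitness(improve(p)), p

def lamarckian(p):
    """The learned result is written back into the genotype."""
    q = improve(p)
    return fitness(q), q
```

Under both schemes the individual competes with the improved fitness; only the Lamarckian scheme also inherits the improved gene-code, which is why it is adopted here.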

3. Design of a hybrid GA for permutation optimization problems

In the following, the chromosome syntax representation is analyzed first; then, by incorporating the NS into the GA, the related functions of the GA with a performance measure and stopping conditions are presented. Finally, the category relationship of the random sampling, the NS, the pure GA and the hybrid GA in the solution-seeking is outlined.

3.1. Chromosome syntax representation and population initialization

A key step in adopting GAs for permutation problems is to encode a permutation into a chromosome. Although binary encoding is used in Holland’s work [6], for Hamming cliffs it does not work well due to the high possibility of yielding infeasible solutions to permutation problems. Two methods have been considered to overcome this difficulty: object-based representation [8] and random-key representation [18]. As to whether offspring inherit anything from the parents, the object-based representation has the Lamarckian property but the random-key representation has not [8].


Table 2
Variant statistics of two exchanging strategies for the permutation problem

No. of   Size of        Pairwise exchanging                         Adjacent exchanging
objects  search space   Nbhd.      Max. no. of   Max. no. of        Nbhd.   Max. no. of   Max. no. of
                        size       transitions   local opt.         size    transitions   local opt.

n        n!             n(n−1)/2   n−1           2n!/(n(n−1))       n−1     n(n−1)/2      n!/(n−1)

5        1.200e2        10         4             1.200e1            4       10            3.000e1
6        7.200e2        15         5             4.800e1            5       15            1.440e2
7        5.040e3        21         6             2.400e2            6       21            8.400e2
8        4.032e4        28         7             1.440e3            7       28            5.760e3
9        3.629e5        36         8             1.008e4            8       36            4.536e4
10       3.629e6        45         9             8.064e4            9       45            4.032e5
11       3.992e7        55         10            7.258e5            10      55            3.992e6
12       4.790e8        66         11            7.258e6            11      66            4.355e7

To implement a hybrid GA with Lamarckian learning, the object-based representation is adopted. Without loss of generality, each element in E is labeled by serial numbers, i.e. ei = i, ∀i = 1, 2, …, n. The gene-code of the chromosome corresponding to Permutation p, named c, is represented by the integer string “p1 p2 … pn” and defined by the mapping function g : p → c. In general, the gene-code of chromosome k at generation s is denoted by

c_{s,k} = “c_{s,k,1} c_{s,k,2} … c_{s,k,n}”.

By using a random number generator, a population of size m, G0 = {c_{0,1}, c_{0,2}, …, c_{0,m}}, in which c_{0,k} ≠ c_{0,l} for k ≠ l, is initialized. The initial individuals are improved by a learning procedure and subsequently each improved one is placed back into the population. To maintain sufficient diversity in the initial generation, an improved individual that duplicates a current individual in the population is discarded rather than placed back into the population.
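The initialization just described might be sketched as follows; `fitness` and `learn` are toy stand-ins for the objective and the NS learning procedure, and only the distinctness and duplicate-discard rules are taken from the text:

```python
import random

def fitness(p):
    """Toy objective used only for illustration (maximize)."""
    return sum(1 for i, g in enumerate(p) if g == i)

def learn(p):
    """Stand-in for NS learning: accept the first improving pairwise swap."""
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            q = p[:]
            q[i], q[j] = q[j], q[i]
            if fitness(q) > fitness(p):
                return q
    return p

def init_population(n, m, seed=0):
    rng = random.Random(seed)
    pop = []
    while len(pop) < m:
        p = list(range(n))
        rng.shuffle(p)
        if p in pop:                     # enforce c_{0,k} != c_{0,l}
            continue
        q = learn(p)
        # discard the improved individual if it duplicates a current member,
        # keeping the original instead, to preserve diversity
        pop.append(p if q in pop else q)
    return pop
```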

3.2. Individual learning and fitness evaluation

Individual learning is performed by an iterative NS procedure. Within successive interchanges, the current solution is replaced with an elite (dominating) neighbor. The learning effect of the NS is considered as the evaluation rule for every individual. That is, the improved value of the objective function f in (1) is used to evaluate the fitness of the chromosomes.

Basically, the issues of scanning region, choosing elite and stopping criteria are considered when a simple NS procedure is designed.

(a) Scanning region. For the scanning region the neighborhood has to be defined. Two objects are interchanged in two strategies: (1) to obtain the adjacent neighborhood, two adjoining objects are interchanged; (2) to obtain the pairwise neighborhood, two objects in different positions are interchanged. Table 2 shows the comparative results of these two strategies and reveals that this kind of search scheme leads to many local optima when the scale of the problem increases. In particular, in finding a local optimum, the probability of being trapped by adjacent exchanging is higher than with pairwise exchanging.
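The two scanning regions can be sketched as neighborhood generators; for n objects the pairwise neighborhood has n(n−1)/2 members and the adjacent neighborhood n−1, matching the neighborhood-size columns of Table 2:

```python
def pairwise_neighbors(p):
    """All solutions reachable by interchanging two objects at any two positions."""
    n = len(p)
    out = []
    for i in range(n):
        for j in range(i + 1, n):
            q = list(p)
            q[i], q[j] = q[j], q[i]
            out.append(q)
    return out

def adjacent_neighbors(p):
    """All solutions reachable by interchanging two adjoining objects."""
    return [list(p[:i]) + [p[i + 1], p[i]] + list(p[i + 2:])
            for i in range(len(p) - 1)]
```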

(b) Choosing elite. This decision is to determine which neighboring solution replaces the current one. Denote by N(c, r, j) the jth neighbor of an individual with the current gene-code c, where r can be replaced by symbol “a” for the adjacent neighborhood and “p” for the pairwise neighborhood. Now let the kth-improving neighbor, named N∗(c, r, k), be located at the j_k-th neighbor, characterized by

f(g⁻¹(N(c, r, j_k))) < f(g⁻¹(N(c, r, j_{k−1})))

and

f(g⁻¹(N(c, r, j))) ≥ f(g⁻¹(N(c, r, j_{k−1}))) for j = j_{k−1} + 1, …, j_k − 1,

where N(c, r, j_0) = c, j_0 = 0 and k ≥ 1. Denote by k̄ the total number of successive improving neighbors. Then N(c, r, j_k̄) is the best-improving one. One must examine all neighbors for the best improvement, but it is not necessary to examine all neighbors to find the kth improvement. Basically, any successively improved neighbor can be considered as a candidate for the elite. However, a tradeoff must be made between the computation time and the quality of the elite. Hansen and Mladenović [19], and Johnson et al. [20] have examined the effects. The two extremes, the first-improvement and the best-improvement, are considered for comparison in this implementation.
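The two extremes might look as follows for the pairwise neighborhood (minimization assumed; the gene-code mapping g is omitted for brevity):

```python
def first_improvement(p, f):
    """Return the first pairwise neighbor better than p, or p itself."""
    n = len(p)
    for i in range(n):
        for j in range(i + 1, n):
            q = list(p)
            q[i], q[j] = q[j], q[i]
            if f(q) < f(p):
                return q
    return p

def best_improvement(p, f):
    """Scan all pairwise neighbors; return the best one if it improves on p."""
    n = len(p)
    best = p
    for i in range(n):
        for j in range(i + 1, n):
            q = list(p)
            q[i], q[j] = q[j], q[i]
            if f(q) < f(best):
                best = q
    return best
```

First-improvement may stop early in the scan; best-improvement always pays for the full neighborhood, which is the tradeoff discussed above.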

(c) Stopping criteria. By successively replacing the current solution with the elite neighbor, the local search works in an iterative fashion. In the pure local search, the procedure terminates when no further improvement exists (named until-no-improvement), i.e. k̄ = 0, in which case a local optimum is obtained. Thus, a multi-start method which searches from different starting solutions is used to increase the probability of hitting the global optimum. When designing a hybrid GA, attention should be paid to the equilibrium between the NS and the GA. If the current solution is replaced with its elite neighbor only once (named once-improvement), the burden of the computing resource on the NS can be lightened to a large extent and the promise of the good pattern explored by the GA is enhanced. But because, by improving once, the improvement of the individuals is not complete, the progress of the population could be suspended. Thus, the once-improvement stopping criterion can be adopted at the beginning so that the best solution can be exploited early. Then, when the best fitness of the previous generation cannot be improved under the once-improvement criterion, the until-no-improvement criterion takes over so that the population can progress. The benefit of this mixed strategy is shown later in the numerical case study.
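The two stopping criteria can be sketched in one loop (minimization assumed; the pairwise/first-improvement elite choice is inlined so the snippet is self-contained):

```python
def ns_learn(p, f, criterion="once"):
    """NS learning with criterion 'once' (one replacement at most) or
    'until' (replace until no pairwise swap improves f)."""
    current = list(p)
    while True:
        improved = None
        n = len(current)
        for i in range(n):
            for j in range(i + 1, n):
                q = list(current)
                q[i], q[j] = q[j], q[i]
                if f(q) < f(current):
                    improved = q
                    break
            if improved:
                break
        if improved is None:
            return current            # until-no-improvement: local optimum
        current = improved
        if criterion == "once":
            return current            # once-improvement: stop after one step
```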

3.3. Functions of the GA

3.3.1. Parent selection
The parents are randomly selected from the current population to produce children by genetic operations. Although the roulette-wheel selection [7] is one commonly used technique, it is not used for the proposed hybrid GA because the selected individuals in the population have already been evaluated with good fitness throughout the learning procedure. Therefore, each individual is chosen as a parent by using a discrete uniform sampling scheme.


Parent 1                                    2 9 1 5 4 8 3 7 6
Parent 2                                    8 9 6 3 4 1 2 7 5
Random crossover sites                        *       *
Copy actual position in common locations    . 9 . . 4 . . 7 .
Copy the subsequence from Parent 1          . 9 1 5 4 8 . 7 .
Copy the order from Parent 2                6 9 1 5 4 8 3 7 2

Fig. 2. Illustration of crossover with the same-site-copy-first principle.

3.3.2. Crossover operator
A crossover operator recombines the gene-codes of two parents and produces offspring such that the children inherit a set of building blocks from each parent. However, genetic operators are related to representation schemes. A survey of crossover operators for ordering applications was presented by Poon and Carter [21]. Regarding the permutation-based representation, the following crossover operators have been widely used: partially matched crossover (PMX), intending to keep the absolute positions of elements, and linear order crossover (LOX), intending to respect relative positions. The parents here are selected from the improved individuals, which could possess several positions or subsequences or orders of gene-codes consistent with the global optimal one. A crossover scheme is proposed as shown in Fig. 2. It works as follows:

Step 1: Any gene-code which occupies the same site in both parents has the top priority of being assigned to that site of the offspring. This step attempts to let the offspring inherit the actual positions of the promising gene-codes.

Step 2: The remaining sites in the offspring are assigned by the order of all gene-codes in Parent 1 within the sequence bounded by two randomly selected points. This step attempts to let the offspring inherit the promising subsequence.

Step 3: Any unassigned sites are placed in the order of appearance in Parent 2. This step attempts to let the offspring inherit the promising order. Steps 2 and 3 operate as LOX does.
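Steps 1–3 can be sketched as follows. The crossover sites are passed in explicitly (0-based, inclusive) rather than drawn at random, and skipping a window gene already fixed in Step 1 is one possible reading of Step 2:

```python
def same_site_copy_first_crossover(p1, p2, lo, hi):
    n = len(p1)
    child = [None] * n
    # Step 1: genes occupying the same site in both parents are copied first
    for i in range(n):
        if p1[i] == p2[i]:
            child[i] = p1[i]
    # Step 2: fill the bounded window [lo, hi] from Parent 1, skipping genes
    # already placed in Step 1
    for i in range(lo, hi + 1):
        if child[i] is None and p1[i] not in child:
            child[i] = p1[i]
    # Step 3: remaining sites take the unused genes in Parent 2's order (LOX-like)
    rest = iter(g for g in p2 if g not in child)
    for i in range(n):
        if child[i] is None:
            child[i] = next(rest)
    return child
```

With the parents of Fig. 2 and the window covering sites 2–6, this reproduces the offspring 6 9 1 5 4 8 3 7 2 shown there.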

3.3.3. Mutation operator
Mutation takes place on some newly formed children in order to prevent all solutions from converging to their particular local optima. There are several mutation operations for permutation problems, such as adjacent two-change, arbitrary two-change, arbitrary three-change and shift change [22]. Because our hybridization uses a two-change NS to improve offspring, the shift-change has a great ability to force a new solution to escape from the path of the NS. Thus, the shift-change is adopted as the mutation operator, as depicted in Fig. 3. It works as follows: select two sites at random, replace one selected site with the other and shift all sites within the sequence bounded by the two selected ones.
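The shift-change can be sketched in a few lines; site indices are 0-based and passed explicitly instead of being drawn at random:

```python
def shift_change(p, i, j):
    """Move the object at site j to site i, shifting the objects at
    sites i..j-1 one place toward j."""
    q = list(p)
    g = q.pop(j)      # remove the object at the second selected site
    q.insert(i, g)    # reinsert it at the first site, shifting the rest
    return q
```

For the selected string of Fig. 3, shift_change([6, 9, 1, 5, 4, 8, 3, 7, 2], 2, 5) yields the mutated string 6 9 8 1 5 4 3 7 2 shown there.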

3.3.4. Population replacement
An offspring is born through the crossover and mutation operators and subsequently improved by an NS procedure. The improved offspring are the candidates for the members of the population. The improved offspring that is better than any current individual in the population is inserted into the population, and the worst one in the population is deleted.

Selected String           6 9 1 5 4 8 3 7 2
Random Mutation Sites         *     *
Mutated String            6 9 8 1 5 4 3 7 2

Fig. 3. An example of the shift-change mutation.
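One reading of this replacement rule (with "better than any current individual" taken as better than the worst member, and minimization assumed) is:

```python
def replace(population, offspring, f):
    """Insert the improved offspring if it beats the worst member,
    deleting that worst member to keep the population size fixed."""
    worst = max(population, key=f)
    if f(offspring) < f(worst):
        population.remove(worst)
        population.append(offspring)
    return population
```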

3.4. Measure of performance and termination condition of the hybrid GA

In this design, the fitness value of any individual comes from several solutions provided by the NS. If complicated constraints are inherent in the evaluation model for permutation solutions, it would require considerable computation time. Thus, in performing the above Lamarckian learning operation, the algorithm records all of the evaluated solutions and their evaluation values in a memory table T. The solutions in the memory table are called “visited solutions”, each being different from the others. Therefore, the total time required by the hybrid GA operation depends on the number of solutions in the memory table, |T|, especially for problems with a complicated function f.

Basically, the number of visited solutions becomes the basic indicator in measuring the efficiency of the optimization. On the other hand, the effectiveness of the algorithm depends on the highest level of the fitness of visited solutions. A well-performing algorithm should be able to find the global optimal solution (effective result) with the least number of visited solutions (efficient rate). Based on these two criteria of efficiency and effectiveness, there are two ways to measure the performance:

(a) Fix the level of effectiveness in order to evaluate the resource consumption efficiency. That is, the number of visited solutions from the operation is not restricted while the threshold of the objective value is fixed. Whenever the objective value reaches the threshold, the computation is stopped and the number of visited solutions, namely t = |T|, is recorded.

(b) Fix the level of the resource consumption efficiency in order to evaluate the reached effectiveness. That is, the number of solutions from the operation is fixed but the threshold of the objective value is relaxed. Whenever the pre-defined number of visited solutions is obtained, the computation is stopped and the best objective value, namely z, is recorded.

Once the measures are established, the termination condition of the algorithm can be determined.
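The memory table and the two measures might be sketched as follows; `f_raw` is a toy stand-in for the expensive evaluation model:

```python
def f_raw(p):
    """Toy objective standing in for the costly evaluation model (minimize)."""
    return sum(abs(g - i) for i, g in enumerate(p))

class MemoryTable:
    def __init__(self):
        self.T = {}                   # visited solution -> objective value

    def evaluate(self, p):
        key = tuple(p)
        if key not in self.T:         # only new solutions cost evaluation time
            self.T[key] = f_raw(p)
        return self.T[key]

    def visited(self):
        return len(self.T)            # t = |T|

    def best(self):
        return min(self.T.values())   # best objective value z

# Measure (a): stop when best() reaches a fixed threshold, record visited().
# Measure (b): stop when visited() reaches a fixed budget, record best().
```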

3.5. Summary of the hybrid GA

From the above presentation, it can be seen that the functions of crossover, mutation and learning are designed subject to the permutation property. In summary, the features of the proposed hybrid GA include individual learning by the NS, with the elite chosen by the first improvement in the pairwise neighborhood, and the mixed stopping criterion of once-improvement and until-no-improvement. In the proposed hybrid version, different from the general version of the genetic local search that specifically searches over the subspace of the local optima, each piece of local improvement by the once-improving NS evolves in the mechanism of the GA.

[Figure: solution-seeking processes of RS, pure NS, multi-start NS, multi-start & restart NS, PGA and HGA, shown as sequences of natural individuals (initial solutions), improving solutions and learned individuals (improved solutions) over successive generations.]

Fig. 4. Category relationship of RS, NS, PGA and HGA in solution-seeking.

In Fig. 4, the similarities and differences of the relevant methods during their solution-seeking processes are displayed. For the hybrid GA (HGA), if its learning function is degenerated, it becomes a pure GA (PGA); if its evolution function is degenerated, it becomes an NS; and if its learning and evolution functions are degenerated, it will turn into a random sampling (RS).


Table 3
Basic characteristics of five numerical cases

D × T (a)                                    9 × 4     12 × 5    19 × 9    40 × 13   110 × 26

(i) Evaluation mode of a permutation solution through an LP
Constraints                                  582       909       2387      6749      35398
Real variables                               846       1404      3971      12040     66000
Computation time (b) (s)                     0.18      0.23      0.31      0.44      2.87

(ii) Permutation mode of all solutions
Search space                                 3.6E+05   4.8E+08   1.2E+17   8.2E+47   1.6E+178
Local opt. using pairwise search (max no.)   1.0E+04   7.3E+06   7.1E+14   1.0E+45   2.6E+174

(a) D and T are the numbers of product types and scheduling timeframes, respectively. The number of workstation types is fixed at 11.
(b) Computation time is the average of 1000 random samples completed with the Cplex 6 LP solver, run on a Pentium III 600 MHz. Input and output operations are excluded.

4. Evaluation and comparative studies

In this section, the proposed method is demonstrated by using a mixed integer programming (MIP) model proposed by Wang and Wu [23] for the problem of multi-product, multi-period and multi-resource production scheduling (M3PS), in which both an uninterrupted and an even production schedule are required. In that study, the search space of the MIP model was transformed into an optimum-wise pattern by a heuristic mining algorithm so that a permutation optimization problem could be formed as a reference model to be solved. The M3PS model, with the complex cost structure in which higher-order interactions reside, was formulated as a general permutation model in the third column of Table 1. This permutation model is too large for enumeration in practice and lacks the strong relaxations or valid inequalities for it to be tractable. Therefore, the existing solution procedures are not applicable.

Based on the production structure of a local LED manufacturer, five numerical cases were generated randomly, corresponding to small, medium and large-scale problems. The basic characteristics of these cases are shown in Table 3. With the increase of product types and time periods, the dimensions of the problems increase rapidly.

The experimental results consist of two parts. Part one in Section 4.1 is an evaluation of efficiency. By using a 9 × 4 small-size problem, the efficiency of different combinations of the parameters used in the HGA is evaluated. Then, the best combination is suggested for the following evaluation. Therefore, this part can also be seen as a preliminary study. Part two in Section 4.2 compares different algorithms with the five cases for the effectiveness evaluation.

4.1. Evaluation of efficiency

Computation efficiency in the GA depends on the parameters used; therefore the first evaluation policy of “evaluate the efficiency by fixing the level of effectiveness” was adopted. First the 9 × 4 case was taken as a benchmark to determine the optimal parameters of the HGA against the global optimal value pre-obtained by the CPLEX [24] mixed integer solver. Then the tests on the other four larger problems could be performed.

4.1.1. Determination of GA parameters
Generally, when performing the GA, three parameters must be preset: the crossover rate, the mutation rate and the population size. When performing the HGA designed in Section 3, we made good use of the small size of this case in order to search as wide a range as possible for the optimal combination of these three parameters. Thereby, the influential characters of all the preset parameter values could be observed in detail. For each combination (crossover rate, mutation rate, population size), the optimal search was implemented 30 times with different starting seeds, and the number of visited solutions generated by the time when the global optimum was obtained or t_i > 10^4, namely t*_i (i = 1, 2, …, 30), was recorded. Finally, the results from the 30 implementations were averaged as a performance indicator of each parameter combination:

t̄ = (Σ_{i=1}^{30} t*_i)/30.                                  (2)

Based on Mitchell’s suggestion [7] on the values of crossover and mutation rates, Fig. 5 shows the variation of the different parameter combinations. From the subsequent experiment, the optimal parameter combination was adopted as crossover rate = 0.85, mutation rate = 0.2 and population size = 4. Note that the value of the population size is clearly different from the 20 to 30 commonly used in the literature. This is because the performance indicator was dedicated to the number of visited solutions. During the calculation in the NS learning, the number of newly obtained solutions can be several times the population size. So, before the global optimum is reached, the number of visited solutions increases with the increase of the population size. This phenomenon can be seen in Fig. 5(c). On the other hand, when the population size is very small (2 or 3), there is no way to provide plenty of good block messages, thus the calculation efficiency decreases.

4.1.2. Evaluation of NS integration strategy

Integration of the NS is the key feature of the proposed HGA. To observe how it works, the GA parameters were first fixed at the optimal values obtained in Section 4.1.1. From Section 3.3, there are three operational strategies to be considered: the scanning region ΘR, the elite choice ΘE and the stopping criterion ΘS. In this benchmark test they were set as follows: ΘR = exchange two adjacent or pairwise positions; ΘE = take the first better or the best improvement found by ΘR; ΘS = exploit ΘE by iterating after one improvement or until no improvement is found.

Fig. 6 shows the results of these combinations. Since in some cases the HGA needs quite a long time to obtain the global optimum, the calculation was terminated when ti ≥ 10^4 (">" in Fig. 6). The figure shows that adopting ΘR = pairwise and ΘE = better improves computational efficiency. Although ΘS = once is not as good as ΘS = until in overall average, finding the global optimum with ΘS = once is much faster than with ΘS = until, as shown in the shaded bars of Table 4. Therefore, in the learning steps of the HGA, a two-stage process was proposed to combine the merits of these strategies by first adopting the pairwise/better/once strategy; and then, when the


Fig. 5. Variation of t̄ under different settings of the GA's parameters in the 9 × 4 case: (a) crossover rate (ΘM = 0.20, ΘP = 4), (b) mutation rate (ΘC = 0.85, ΘP = 4), (c) population size (ΘC = 0.85, ΘM = 0.20).

previous generation could not improve the best fitness value, the NS parameters were changed into the pairwise/better/until strategy. Table 4 lists the results of this mixed strategy.
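The three NS strategy knobs (ΘR, ΘE, ΘS) can be sketched on a permutation as follows. This is an illustrative reconstruction on a toy objective, not the authors' code; the function names and the objective are invented.

```python
def neighbors(perm, region="pairwise"):
    """Theta_R: 'adjacent' swaps positions i and i+1; 'pairwise' swaps any i < j."""
    n = len(perm)
    pairs = ([(i, i + 1) for i in range(n - 1)] if region == "adjacent"
             else [(i, j) for i in range(n) for j in range(i + 1, n)])
    for i, j in pairs:
        q = list(perm)
        q[i], q[j] = q[j], q[i]
        yield q

def ns_step(perm, f, region="pairwise", elite="better"):
    """Theta_E: return the first improving neighbor ('better') or the best one ('best')."""
    best = None
    for q in neighbors(perm, region):
        if f(q) < f(perm):
            if elite == "better":
                return q
            if best is None or f(q) < f(best):
                best = q
    return best  # None when no improving neighbor exists

def ns(perm, f, region="pairwise", elite="better", stop="until"):
    """Theta_S: 'once' applies a single improving step; 'until' iterates to a local optimum."""
    while True:
        q = ns_step(perm, f, region, elite)
        if q is None:
            return perm
        perm = q
        if stop == "once":
            return perm

# Toy objective for a minimization: number of out-of-place items vs. sorted order.
f = lambda p: sum(1 for i, v in enumerate(p) if v != i)
print(ns([3, 1, 0, 2], f))  # -> [0, 1, 2, 3]
```

The mixed (two-stage) ΘS schedule then amounts to calling `ns` with `stop="once"` in each generation and switching to `stop="until"` once the best fitness stops improving.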

4.2. Evaluation of effectiveness

This comparative study was meant to evaluate the effectiveness of the proposed method when the level of required efficiency is fixed. Because the global optima were unknown for all testing cases except the 9 × 4 case, and in consideration of the LP computing time (see Table 3) and the computer file-storing time, the termination criterion was set at "1000 different solutions have been found".


Fig. 6. Comparison of different NS operation combinations (ΘR = adjacent or pairwise; ΘE = better or best; ΘS = once, until or mixed) based on the 9 × 4 case, with ΘC = 0.85, ΘM = 0.20 and ΘP = 4; bars marked ">" indicate runs terminated at ti ≥ 10^4.

Table 4. Comparison^a of ΘS = once, until and mixed in the 9 × 4 case

Sample i  ΘS=once  ΘS=until  ΘS=mixed   Sample i  ΘS=once  ΘS=until  ΘS=mixed
  1          706      606       696        16        357      651       357
  2          367      599       578        17        187      487       187
  3          182      561       182        18        169      341       169
  4          109      447       109        19        343      635       343
  5         5161      555       797        20        382      559       382
  6          657      876       633        21         71      341        71
  7          222      664       222        22        306      650       306
  8          440      590       440        23        236      546       236
  9          267      597       267        24       2903      433       793
 10          576      648       415        25       1702      730      1067
 11          414      620       491        26        197      485       197
 12          526      515       526        27       1345      570       845
 13         3383      498       809        28       3425     2076      1603
 14          220      600       220        29       3981      521       695
 15          112      619       112        30       4073      419       654

^a Here (ΘC, ΘM, ΘP, ΘR, ΘE) = (0.85, 0.2, 4, pairwise, better), and the numbers in the columns titled ΘS = once, ΘS = until and ΘS = mixed are the values of t*_i.


The performance of the HGA with the best parameters obtained in Section 4.1, denoted A_HGA, was evaluated against the three methods below in five cases:

(a) Random Sampling, A_RS: 1000 different solutions were randomly generated.
(b) Neighborhood Search, A_NS: the tactics of "pairwise", "first improvement", "terminate when no further improvement exists", "multi-start" and "re-start when terminated" were adopted, where the multi-start quantity equalled the population size of the HGA.
(c) Pure Genetic Algorithm, A_PGA: it worked like A_HGA but excluded the learning procedure.

All methods were implemented 30 times with different starting seeds; A_HGA, A_PGA and A_NS started from the same 30 groups of initial solutions. For a given number of visited solutions t, each method yielded 30 corresponding best objective values z_i(t), i = 1, ..., 30. These were averaged to z̄(t) = Σ_{i=1}^{30} z_i(t) / 30 and compared with the objective value of the best solution z+ and of the worst solution z−, giving the performance indicator Φ(t) for all methods:

Φ(t) = (z− − z̄(t)) / (z− − z+).    (3)

The best and the worst solutions were found by gathering all visited solutions over the whole test. Since the indicator Φ(t) is normalized between 0 and 1, it can be used to evaluate and compare implementation effectiveness across the different cases. Fig. 7 presents the results of the five cases. Note that as the problems become larger, A_HGA performs increasingly better than the other methods.
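The normalized indicator of Eq. (3) can be sketched directly; the objective values below are fabricated for illustration (the paper averages over 30 seeds), and the example assumes a minimization problem, where z+ is the smallest and z− the largest value observed.

```python
def phi(z_bar_t, z_best, z_worst):
    """Phi(t) = (z- - zbar(t)) / (z- - z+), Eq. (3); 1 means the average run
    has reached the best solution found, 0 means it is stuck at the worst."""
    return (z_worst - z_bar_t) / (z_worst - z_best)

# Best objective values at some visit count t, one per seed (fabricated data).
z_t = [104.0, 101.5, 103.0, 102.5, 100.0, 105.0]
z_bar = sum(z_t) / len(z_t)
print(round(phi(z_bar, z_best=100.0, z_worst=120.0), 4))  # -> 0.8667
```

Because Φ(t) is dimensionless and bounded in [0, 1], curves from problems of very different scale (9 × 4 up to 110 × 26) can be overlaid on the same axes, as in Fig. 7.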

5. Discussion and applications

The test results and related study topics are discussed in this section.

5.1. Performance measure

When an algorithm is implemented, it passes through three states, namely the initial, transient and steady states, as shown in Fig. 8. Basically, the random sampling method performs a complete territory search without any direction. Depending only on random numbers drawn over the entire search space, the current solution of random sampling is independent of the previous one, i.e. the process from one solution to the next is unstructured. This implies that such a method is appropriate for the initial state, when the properties of the problem are still unclear.

If the number of visited solutions is fixed for some method, say Algorithm A_0, and its result is worse than that of A_RS, then the visited solutions have not effectively formed a good process structure. A "good process structure" means "good block evolvement" for the GA and "good path climbing" for the NS. If A_0 is designed with a mechanism that moves it toward a good structure, then as the number of visited solutions grows, its result gradually approaches that of A_RS; once it exceeds A_RS, it progresses into the transient state. In the transient state, Algorithm A_0 already possesses a good process structure and strides toward the global optimum until it finally reaches the steady


Fig. 7. Performance Φ(t) versus the number of visited solutions t for A_HGA, A_NS, A_PGA and A_RS in different cases: (a) D = 9, T = 4; (b) D = 12, T = 5; (c) D = 19, T = 9; (d) D = 40, T = 13; (e) D = 110, T = 26.


Fig. 8. States in computation: Φ(t) of an evaluated algorithm A_0 against A_RS over the initial, transient and steady states.

Fig. 9. Diffusion of the different methods (A_HGA, A_NS, A_PGA, A_RS) across the initial, transient and steady states in the 9 × 4, 12 × 5, 19 × 9, 40 × 13 and 110 × 26 cases, with ε = 0.005.

state. The state is "steady" if the value of Φ(t) stays close to 1 over a continuous period of time. Thus, we define the computation to be in the steady state when the following condition is satisfied:

1 − Φ(t) ≤ ε,    (4)

where ε is a sufficiently small positive value. While the process structure is in the steady state, the obtained best solution may reach the global optimum.
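The steady-state test of Eq. (4) can be sketched as a scan over a Φ(t) trace; the trace below is fabricated for illustration.

```python
def steady_entry(phi_trace, eps=0.005):
    """Return the first index t at which 1 - Phi(t) <= eps, per Eq. (4),
    or None if the trace never enters the steady state."""
    for t, ph in enumerate(phi_trace):
        if 1.0 - ph <= eps:
            return t
    return None

trace = [0.80, 0.90, 0.97, 0.991, 0.996, 0.999]  # made-up Phi(t) values
print(steady_entry(trace, eps=0.005))  # -> 4, the first t with 1 - Phi <= 0.005
```

A stricter reading of "close to 1 during a continuous period" would additionally require the condition to hold for a window of consecutive t values, which is a one-line extension of this check.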

In Fig. 7, except for the curves of A_PGA and A_RS in the 12 × 5 case, which are slightly intertwined, the curves of all the methods are consistent with the pattern of Fig. 8. Now let ε = 0.005 and take A_RS as the reference for all states; the changing characteristics over the different problem scales can then be summarized as follows, and as shown in Fig. 9.

• For small-sized problems, A_NS performs better than the others. However, within 1000 visited solutions, as the local optima are approached and the problem size grows, the performance of A_NS becomes worse than that of A_PGA. Therefore, the NS is appropriate for dealing with small-sized problems.

• The performance of A_PGA is not ideal. Except for the 40 × 13 case, its progress almost parallels that of A_RS. This is probably because the population size (= 4) is too small for the PGA to form good blocks; however, this needs further study.


Fig. 10. Performance of A_HGA in the transient state: Φ(t) versus 1 − √(1/t) for the 9 × 4, 12 × 5, 19 × 9, 40 × 13 and 110 × 26 cases.

• Within 1000 visited solutions, A_HGA reaches the steady state only in the 9 × 4 case. However, as the problem size grows, it performs increasingly better than A_PGA and A_NS.

5.2. The number of visited solutions

As shown in Fig. 8, if the steady state is desired, the number of visited solutions can be increased. But how many visited solutions should be preset? That is, if we can find a method to estimate the number of visited solutions the HGA needs to achieve the steady state, we will know how long it takes to obtain a proper solution with a given degree of accuracy.

Ho [25] has shown that the ultimate accuracy of estimation cannot be improved beyond 1/√Λ, where Λ is the length of the simulation experiment. With Λ defined as the number of visited solutions, the two-dimensional curve of 1 − √(1/Λ) can be plotted; its pattern is similar to the curves of the different methods in Fig. 7. In the spirit of Ho's indicator, one may ask: "how do the test results of this study relate to 1 − √(1/Λ)?" The data from the transient state of all the cases are used in the comparison. Fig. 10 shows a linear relationship between 1 − √(1/t) and Φ(t). Of particular note is the 9 × 4 case, whose computation process passed through all three states; a straight-line regression on the data of its transient state yields a linear formula with a correlation coefficient of 98.1%.

By the same token, the number of visited solutions A_HGA needs to achieve the steady state in the 110 × 26 case can be estimated. Using the data of its transient state, a linear regression formula with a correlation coefficient of 99.6% is obtained:

Φ̂(t) = 2.99426 (1 − √(1/t)) − 1.95291.    (5)


Based on Formula (5), when t grows from 1000 up to 2000, 3000, 4000 and 5000, the estimated values of Φ(t) are 0.9744, 0.9869, 0.9940 and 0.9990, respectively. When t grows from 1000 to 4174, the steady state with ε = 0.005 appears. Users can thus trade off the limitation on computation time against the degree of improvement in the computation result.
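The trade-off above can be reproduced by evaluating Eq. (5) directly and scanning for the smallest t at which 1 − Φ̂(t) ≤ ε = 0.005; small rounding differences from the values quoted in the text may appear.

```python
import math

def phi_hat(t):
    """Fitted transient-state model of Eq. (5) for the 110 x 26 case."""
    return 2.99426 * (1.0 - math.sqrt(1.0 / t)) - 1.95291

for t in (1000, 2000, 3000, 4000, 5000):
    print(t, round(phi_hat(t), 4))

# Smallest t satisfying the steady-state condition 1 - Phi <= 0.005 of Eq. (4).
t = 1000
while 1.0 - phi_hat(t) > 0.005:
    t += 1
print("steady state reached near t =", t)  # -> 4174, matching the paper
```

Solving Eq. (5) in closed form gives the same answer: 1 − Φ̂(t) ≤ ε reduces to √(1/t) ≤ (Φ̂⁻¹ threshold), i.e. t ≥ ((1 − ε + 1.95291)/2.99426 subtracted from 1, inverted and squared) ≈ 4173.3, so the first integer is 4174.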

6. Conclusion

Almost all optimization techniques are restricted by problem dimension, and the proposed hybrid GA aims to overcome this issue. By combining the efficiency of the GA in exploring the entire search space with the convergence ability of the NS, its accuracy and computational efficiency have been demonstrated in this study. When facing large-scale and complicated mathematical programming problems in practice, applying the hybrid GA can shorten lengthy analysis procedures by efficiently setting up the entire solution-seeking system, as illustrated here with the production scheduling example.

For any experimental analysis of algorithms, the evolvement characteristics of the computation process have to be determined in advance so that the performance results can be interpreted and improved. In this study, the concept of initial, transient and steady states was introduced to support both the analysis of the detailed implementation process of the algorithms and the estimation of efficiency for a given degree of accuracy.

Finally, based on the "adaptive neighborhood search" property of the proposed hybrid GA, "GA parameter adaptation techniques" should be considered in future development so that the hybrid GA can be made self-adaptive.

Acknowledgements

The authors gratefully acknowledge the financial support of the National Science Council, Taiwan, ROC, under project #NSC89-2213-E007-134.

References

[1] Nemhauser GL, Wolsey LA. Integer and combinatorial optimization. New York: Wiley; 1988.
[2] Garey MR, Johnson DS. Computers and intractability: a guide to the theory of NP-completeness. New York: Freeman; 1979.
[3] Houck CR, Joines JA, Kay MG. Comparison of genetic algorithms, random restart, and two-opt switching for solving large location–allocation problems. Computers & Operations Research 1996;23(6):587–96.
[4] Kirkpatrick S, Gelatt Jr CD, Vecchi MP. Optimization by simulated annealing. IBM Research Report RC 9355; 1982.
[5] Glover F. A user's guide to tabu search. Annals of Operations Research 1993;41:3–28.
[6] Holland JH. Adaptation in natural and artificial systems. 2nd ed. Ann Arbor: University of Michigan Press; 1975. Cambridge: MIT Press; 1992.
[7] Mitchell M. An introduction to genetic algorithms. Cambridge: MIT Press; 1996.
[8] Gen M, Cheng R. Genetic algorithms and engineering optimization. New York: Wiley; 1997.
[9] Reeves CR. Modern heuristic techniques for combinatorial problems. Oxford: Blackwell; 1993.
[10] Aarts E, Lenstra JK, editors. Local search in combinatorial optimization. Chichester: Wiley; 1997.


[11] Tian P, Ma J, Zhang D-M. Application of the simulated annealing algorithm to the combinatorial optimization problem with permutation property: an investigation of generation mechanism. European Journal of Operational Research 1999;118:81–94.
[12] Djerid L, Portmann M-C. How to keep good schemata using cross-over operators for permutation problems. International Transactions in Operational Research 2000;7:637–51.
[13] Renders J-M, Flasse S. Hybrid methods using genetic algorithms for global optimization. IEEE Transactions on Systems, Man and Cybernetics Part B 1996;26(2):243–58.
[14] Chu PC, Beasley JE. A genetic algorithm for the generalized assignment problem. Computers & Operations Research 1997;24(1):17–23.
[15] Yagiura M, Ibaraki T. The use of dynamic programming in genetic algorithms for permutation problems. European Journal of Operational Research 1996;92:387–401.
[16] Joines JA, Kay MG, King RE, Culbreth CT. A hybrid genetic algorithm for manufacturing cell design. Journal of the Chinese Institute of Industrial Engineers 2000;17(5):549–64.
[17] Whitley D, Gordan V, Mathias K. Lamarckian evolution, the Baldwin effect and function optimization. In: Davidor Y, et al., editors. Parallel problem solving from nature: PPSN III. Berlin: Springer-Verlag; 1994. p. 6–15.
[18] Bean J. Genetic algorithms and random keys for sequencing and optimization. ORSA Journal on Computing 1994;6(2):154–60.
[19] Hansen P, Mladenović N. First improvement may be better than best improvement: an empirical study. Les Cahiers du GERAD G-99-40, Montréal, Canada; 1999.
[20] Johnson DS, Bentley JL, McGeoch LA, Rothberg EE. Near-optimal solutions to very large traveling salesman problems. Monograph, 2003, to be published.
[21] Poon PW, Carter JN. Genetic algorithm crossover operators for ordering applications. Computers & Operations Research 1995;22(1):135–47.
[22] Murata T, Ishibuchi H, Tanaka H. Genetic algorithms for flowshop scheduling problems. Computers & Industrial Engineering 1996;30(4):1061–71.
[23] Wang HF, Wu KY. Modeling and analysis for multi-period, multi-product and multi-resource production scheduling. Journal of Intelligent Manufacturing 2003;14(3):297–309.
[24] ILOG. Using the CPLEX Callable Library. ILOG CPLEX Division; 1997.
[25] Ho YC. On the numerical solution of stochastic optimization problem. IEEE Transactions on Automatic Control 1997;42(5):727–9.