

Computer Physics Communications 121–122 (1999) 34–36. www.elsevier.nl/locate/cpc

Two physically motivated algorithms for combinatorial optimization: thermal cycling and iterative partial transcription

A. Möbius a,1, A. Díaz-Sánchez a,b, B. Freisleben c, M. Schreiber d, A. Fachat d, K.H. Hoffmann d, P. Merz c, A. Neklioudov a

a Institut für Festkörper- und Werkstofforschung, D-01171 Dresden, Germany
b Departamento de Física, Universidad de Murcia, E-30071 Murcia, Spain
c Fachbereich Elektrotechnik und Informatik, Universität-GH Siegen, D-57068 Siegen, Germany
d Institut für Physik, Technische Universität Chemnitz, D-09107 Chemnitz, Germany

Abstract

Among the various heuristic approaches to combinatorial optimization, local-search-based evolutionary algorithms have been particularly successful in recent years. We present two algorithms developed for jumping from local minimum to local minimum: Thermal cycling consists of cyclically heating and quenching by Metropolis and local search procedures, respectively, where the amplitude decreases during the process. Iterative partial transcription acts as a local search in the subspace spanned by the differing components of two approximate solutions, corresponding to the relaxation of a spin glass by flipping clusters. The high efficiency of the proposed procedures is illustrated for the traveling salesman problem. © 1999 Elsevier Science B.V. All rights reserved.

Combinatorial optimization problems occur in various fields of physics, engineering and economics. Many of them are difficult to solve since they are NP-hard, i.e., there is no known algorithm that finds the exact solution with an effort proportional to any power of the problem size. A popular such task is the traveling salesman problem (TSP): how to find the shortest roundtrip through a given set of cities [1]. Thus, for large-scale optimization problems, algorithms are needed which yield good approximations of the exact solution within a reasonable computing time, and which require only a modest effort in programming.

The conceptually simplest approximation algorithms are local search procedures. They are best understood by interpreting the approximate solutions as discrete points (states) in a high-dimensional hilly landscape, and the quantity to be optimized as the corresponding potential energy. These algorithms proceed iteratively, improving the solution by small modifications (moves): the neighborhood of the current state, defined by the set of permitted modifications of the solution (move class), is searched for states of lower energy. If such a state is found, it is substituted for the current state, and a new search is started; otherwise, the process stops. Usually, the chances of finding a global minimum in this way vanish exponentially as the problem size rises. They can be increased by taking certain moves of higher complexity into account, as in the Lin–Kernighan algorithm for the TSP [2].

1 E-mail: [email protected].
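Such a local search can be sketched for the TSP with the standard 2-opt move class. This is a minimal illustration, not the authors' code; the function names and the distance-matrix representation are assumptions made for this sketch:

```python
def tour_length(tour, dist):
    """Total length of the closed tour, given a distance matrix."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def two_opt_local_search(tour, dist):
    """Apply improving 2-opt moves until a local minimum is reached.

    A 2-opt move removes two edges (a,b) and (c,d) of the tour and
    reconnects it with (a,c) and (b,d), i.e. reverses a subchain.
    """
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # reversing the whole tour changes nothing
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # accept the move only if it lowers the tour length
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

Starting from a tour with crossing edges, the search removes the crossings and stops at a 2-opt local minimum; whether that minimum is global depends on the instance and the start tour.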

In order to overcome barriers between local minima, simulated annealing [3,4] assumes the sample (i.e., the approximate solution) to be in contact with a heat bath

0010-4655/99/$ – see front matter © 1999 Elsevier Science B.V. All rights reserved. PII: S0010-4655(99)00273-8


with a time-dependent temperature. Thus, moves increasing the energy are also taken into account. Slow cooling makes it possible to reach a particularly deep local minimum. Several proposals have been made to improve this concept, in particular to optimize the temperature schedule, or to adapt simulated annealing to parallel computer architectures, see, e.g., [5–8]. Genetic algorithms [9,10] offer another possibility to escape from local minima. They simulate an evolution process by operating on a population of individuals (approximate solutions), where new generations are produced through the application of genetic operators such as selection, crossover and mutation. Particularly effective seem to be algorithms in which the individuals are local minima [11–13]. Here, we present the basic ideas of two algorithms which, similar to the genetic local search approach [14,15], improve the current approximate solution by jumping from local minimum to local minimum, cf. [1,16].

When optimizing by means of simulated annealing, in practice, deep valleys attract the sample mainly by their area. However, it is tempting to make use of their depth. For that, we substitute the slow cooling by thermal cycling [17]: First, starting from the lowest state obtained so far, we randomly deposit energy into the system by means of a Metropolis process with some temperature T, which is terminated, however, after a certain number of successful steps in order to retain the gain of the previous cycles. This part is referred to as heating. Then we quench the system by means of a local search algorithm, where the move class considered can be more complex than in heating. Heating and quenching are cyclically repeated. The process continues until, within a "reasonable" CPU time, no further improvement can be found.
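The heating–quenching loop can be sketched as follows. This is a schematic illustration under simplifying assumptions, not the authors' implementation: the function names are hypothetical, and the geometric reduction of T with a fixed record-rate window stands in for the schedule described in the text.

```python
import math
import random

def thermal_cycle(state, energy, perturb, quench, T0=1.0,
                  n_heat=20, n_cycles=200, rate_threshold=0.05, seed=0):
    """Sketch of thermal cycling: alternate Metropolis heating with quenching.

    state   -- initial approximate solution (treated as immutable)
    energy  -- callable returning the energy (cost) of a state
    perturb -- callable returning a randomly modified copy of a state
    quench  -- local search callable mapping a state to a nearby local minimum
    T is kept constant as long as the rate of record-breaking cycles
    exceeds rate_threshold, then lowered.
    """
    rng = random.Random(seed)
    best, e_best = state, energy(state)
    T = T0
    records, window = 0, 0
    for _ in range(n_cycles):
        # heating: Metropolis walk from the best state, stopped after
        # n_heat accepted steps to retain the gain of previous cycles
        s, e = best, e_best
        accepted = 0
        while accepted < n_heat:
            cand = perturb(s)
            e_cand = energy(cand)
            if e_cand <= e or rng.random() < math.exp(-(e_cand - e) / T):
                s, e = cand, e_cand
                accepted += 1
        # quenching: descend to a local minimum
        s = quench(s)
        e = energy(s)
        window += 1
        if e < e_best:
            best, e_best = s, e
            records += 1
        # lower T when record-breaking final states become rare
        if window >= 20:
            if records / window < rate_threshold:
                T *= 0.8
            records, window = 0, 0
    return best, e_best
```

The callables make the sketch problem-independent: for the TSP, `quench` would be a local search over the chosen move class and `perturb` a single Metropolis move.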

The amount of energy (determined by T) deposited in a cycle decreases gradually. Our recipe for controlling T uses the rate at which the heating–quenching cycles reach final states with lower energies than all previous ones: we keep T constant as long as this rate exceeds a certain value. The lower this value, the better, on average, the final approximate solution should be.

The algorithm proposed is improved by starting the cycles at random from one of Na local minima held in an archive, instead of from the best state so far. In this simultaneous search, it is favorable to make use of "collective experience": If all archive states have a certain common feature, the global minimum is likely to also have this feature. Therefore, after changing T, we determine those degrees of freedom which are frozen out, and ignore the related excitations in heating during the subsequent cycles.

Fig. 1. Deviation, δL = Lmean − 27686, of the obtained mean tour length from the global optimum value versus CPU time, τCPU (in seconds per run for one PA8000 180 MHz processor of an HP K460), for the Padberg–Rinaldi problem. △: simulated annealing; ×, ▽, +, •: thermal cycling, differing by local search complexity, which increases from × to •. The related move classes (a), …, (d) are described in detail in [17]. Averages were taken from 20 runs. Error bars denote the 1σ region.

We have applied the above ideas to a series of symmetric traveling salesman problems, as well as to the Coulomb glass. In both cases, the corresponding codes work very efficiently. Fig. 1 demonstrates this for a TSP instance, the Padberg–Rinaldi 532 city problem (att532) [18]. For further tests see [17].

The efficiency of thermal cycling, as well as that of several other particularly effective Monte Carlo optimization procedures [1,12,13,16], rests on the consideration of local minima, modified by sophisticated operations. The idea of iterative partial transcription [19] is to increase the average "gain" of the related local searches by a fast post-processing, and consequently to reduce the average number of local search steps required to reach a certain energy. This is achieved using the information inherent in the transformation which maps one local minimum to another, complementing the original local search by an additional search in a small class of highly complex moves.

In detail, consider two approximate solutions, encoded as vectors v1 and v2. We look for decompositions of the transformation M mapping v1 to v2 into a product of two commuting transformations,

v2 = M(v1) = Mβ(Mα(v1)) = Mα(Mβ(v1)),


Table 1
Thermal cycling with IPTLS applied to six instances from [18]: smallest and largest tour lengths, Lmin and Lmax, number of runs obtaining the best known value, nbest, mean length, Lmean, and τCPU (see caption of Fig. 1), for a series of 20 runs

Problem    Lmin     Lmax     nbest   Lmean    τCPU
pcb442     50778    50912    19      50785    60
att532     27686    27704    16      27688    88
rat783     8806     8809     14      8806.6   112
pr2392     378032   378655   7       378158   9380
fl3795     28772    28772    20      28772    6050

such that Mα(v1) and Mβ(v1) are possible approximate solutions too, and that Mα and Mβ transcribe disjoint sets of components of v1 by the values of these components in v2. Our procedure performs an iterative search for appropriate pairs (Mα, Mβ) according to increasing complexity of Mα. If such a pair is found, the algorithm checks whether Mα improves v1. If yes, v1 is substituted by Mα(v1); otherwise v2 is substituted by Mα⁻¹(v2) = Mβ(v1). Then the search is restarted. The iteration stops if v1 = v2. If the output state, the current v1, differs from both input states, it is additionally exposed to a local search. We refer to this combination of iterative partial transcription and local search as IPTLS.

Applying this strategy to searching for the ground state of a spin glass, one has to compare two replicas and to identify isolated clusters of spins with respect to which the replicas differ, as proposed in [20]. Each of the clusters is flipped in that replica where this causes an energy decrease. For the TSP, we consider two tours and determine those subchains which include the same subset of cities in differing sequences, and have the same initial and final cities. The worse of these subchains is transcribed by the respective better one.
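For the TSP case, one sweep of this transcription step might look as follows. This is a simplified sketch, not the authors' code: it assumes both tours have already been aligned to a common start city and orientation, and the function names are hypothetical.

```python
def seg_len(seg, dist):
    """Open-path length of a city sequence, given a distance matrix."""
    return sum(dist[seg[k]][seg[k + 1]] for k in range(len(seg) - 1))

def partial_transcription(t1, t2, dist):
    """One sweep of partial transcription between two aligned tours.

    Wherever the two tours share a pair of boundary cities enclosing
    the same set of cities in differing orders, the worse subchain is
    overwritten by the better one.  Returns the (possibly improved)
    pair of tours.
    """
    n = len(t1)
    t1, t2 = list(t1), list(t2)
    i = 0
    while i < n - 1:
        if t1[i] != t2[i]:
            i += 1
            continue
        for j in range(i + 2, n):
            # same boundary cities, same city set, differing interior order
            if (t1[j] == t2[j]
                    and set(t1[i:j + 1]) == set(t2[i:j + 1])
                    and t1[i + 1:j] != t2[i + 1:j]):
                # transcribe the better subchain into the worse tour
                if seg_len(t1[i:j + 1], dist) <= seg_len(t2[i:j + 1], dist):
                    t2[i:j + 1] = t1[i:j + 1]
                else:
                    t1[i:j + 1] = t2[i:j + 1]
                break
        i += 1
    return t1, t2
```

In the full procedure this sweep would be iterated, together with a concluding local search, until the two tours coincide.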

We have performed a series of numerical experiments for several TSP instances [19]: Embedding IPTLS into multi-start local search caused a large efficiency increase, up to two orders of magnitude, and still more when improving an archive of states rather than a single state. Even when embedding IPTLS in thermal cycling, we observed a considerable acceleration. The efficiency of this combination of algorithms is illustrated by Table 1; see also [19].

References

[1] D.S. Johnson, L.A. McGeoch, in: E. Aarts, J.K. Lenstra (Eds.), Local Search in Combinatorial Optimization (Wiley, Chichester, 1997) 215.
[2] S. Lin, B. Kernighan, Operations Research 21 (1973) 498.
[3] S. Kirkpatrick, C.D. Gelatt, Jr., M.P. Vecchi, Science 220 (1983) 671.
[4] P. van Laarhoven, E.H.L. Aarts, Simulated Annealing: Theory and Applications (Kluwer, Dordrecht, 1992).
[5] A. Bolte, U.W. Thonemann, Eur. J. Oper. Res. 92 (1996) 402.
[6] F.P. Marín, Phys. Rev. Lett. 77 (1996) 5149.
[7] D. Janaki Ram, T.H. Sreenivas, K. Ganapathy Subramaniam, J. Parallel Distrib. Comp. 37 (1996) 207.
[8] J. Schneider, C. Froschhammer, I. Morgenstern, T. Husslein, J.M. Singer, Comp. Phys. Commun. 96 (1996) 173.
[9] J.H. Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence (University of Michigan Press, Ann Arbor, 1975).
[10] D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning (Addison-Wesley, Reading, MA, 1989).
[11] R.M. Brady, Nature 317 (1985) 804.
[12] H. Mühlenbein, M. Gorges-Schleuter, O. Krämer, Parallel Computing 7 (1988) 65.
[13] B. Freisleben, P. Merz, in: Proc. 1996 IEEE Int. Conf. Evolutionary Computation, Nagoya (IEEE, Piscataway, 1996) 616.
[14] P. Merz, B. Freisleben, in: Proc. 7th International Conference on Genetic Algorithms (1997) 465.
[15] P. Merz, B. Freisleben, in: Proc. 5th Conference on Parallel Problem Solving from Nature – PPSN V, Lecture Notes in Computer Sci. 1498 (Springer, Berlin, 1998) 765.
[16] O. Martin, S.W. Otto, E.W. Felten, Operation Res. Lett. 11 (1992) 219.
[17] A. Möbius, A. Neklioudov, A. Díaz-Sánchez, K.H. Hoffmann, A. Fachat, M. Schreiber, Phys. Rev. Lett. 79 (1997) 4297.
[18] G. Reinelt, ORSA J. Comput. 3 (1991) 376; www.iwr.uni-heidelberg.de/iwr/comopt/soft/TSPLIB95.
[19] A. Möbius, B. Freisleben, P. Merz, M. Schreiber, Phys. Rev. E, submitted.
[20] N. Kawashima, M. Suzuki, J. Phys. A: Math. Gen. 25 (1992) 1055.