



Krasnogor, N. and Smith, J. (2008) Memetic algorithms: The polynomial local search complexity theory perspective. Journal of Mathematical Modelling and Algorithms, 7 (3). pp. 3-24. ISSN 1570-1166

We recommend you cite the published version. The publisher's URL is http://dx.doi.org/10.1007/s10852-007-9070-9

Refereed: No


Disclaimer

UWE has obtained warranties from all depositors as to their title in the material deposited and as to their right to deposit such material.

UWE makes no representation or warranties of commercial utility, title, or fitness for a particular purpose or any other warranty, express or implied in respect of any material deposited.

UWE makes no representation that the use of the materials will not infringe any patent, copyright, trademark or other property or proprietary rights.

UWE accepts no liability for any infringement of intellectual property rights in any material deposited but will remove such material from public view pending investigation in the event of an allegation of any such infringement.



J Math Model Algor (2008) 7:3–24
DOI 10.1007/s10852-007-9070-9

Memetic Algorithms: The Polynomial Local Search Complexity Theory Perspective

Natalio Krasnogor · Jim Smith

Received: 1 August 2005 / Accepted: 16 July 2007 / Published online: 12 October 2007
© Springer Science + Business Media B.V. 2007

Abstract In previous work (Krasnogor, http://www.cs.nott.ac.uk/∼nxk/papers.html. In: Studies on the Theory and Design Space of Memetic Algorithms. Ph.D. thesis, University of the West of England, Bristol, UK, 2002; Krasnogor and Smith, IEEE Trans Evol Algorithms 9(6):474–488, 2005) we develop a syntax-only classification of evolutionary algorithms, in particular so-called memetic algorithms (MAs). When "syntactic sugar" is added to our model, we are able to investigate the polynomial local search (PLS) complexity of memetic algorithms. In this paper we show the PLS-completeness of whole classes of problems that occur when memetic algorithms are applied to the travelling salesman problem using a range of mutation, crossover and local search operators. Our PLS-completeness results shed light on the worst case behaviour that can be expected of a memetic algorithm under these circumstances. Moreover, we point out in this paper that memetic algorithms for graph partitioning and maximum network flow (both with important practical applications) also give rise to PLS-complete problems.

Electronic Supplementary Material The online version of this article (doi:10.1007/s10852-007-9070-9) contains supplementary material, which is available to authorized users.

N. Krasnogor (B)
Automated Scheduling, Optimisation and Planning Research Group,
School of Computer Science and IT, University of Nottingham, Nottingham, UK
e-mail: [email protected]
URL: http://www.cs.nott.ac.uk/∼nxk

J. Smith
Faculty of Computing, Engineering and Mathematical Sciences,
University of the West of England, Bristol, UK
e-mail: [email protected]
URL: http://www.cems.uwe.ac.uk/∼jsmith


Keywords Memetic algorithms · Genetic local search hybrids · Evolutionary hybrid algorithms · Travelling salesman problem · Graph partitioning · Polynomial local search · Multimeme algorithms · Meta-Lamarckian algorithms · Hyper-heuristics

Mathematics Subject Classifications (2000) 68R01 · 68Q25 · 68Q17 · 68T01

1 Introduction

There are just a few studies on the behaviour of evolutionary algorithms (EAs) that employ a purely complexity perspective, e.g. [6, 41, 46, 48]. The majority of these studies resort to important simplifications either of the algorithms analysed or of the problems to which they are applied. Furthermore, none of these studies recognises the fact that, by the very nature of EAs (and similarly for the local search heuristics within a memetic algorithm [MA]), it is generally unknown how many iterations the algorithm will need before stopping in a local optimum, let alone the global optimum. Due to these limitations on our knowledge, severe simplifications are needed to analyse the behaviour of EAs and MAs from a computational complexity viewpoint.

Polynomial local search (PLS) theory [22, 52] is a theoretical framework for analysing the behaviour of search algorithms that explicitly focuses on the relationships between problems (and instances thereof), the search algorithms applied to those problems, and the neighbourhoods they use. In this paper we argue that PLS complexity might provide new insights into the nature of search with EAs or MAs which are complementary to those afforded by other formal approaches. Specifically, we believe that the use of PLS theory can provide a fertile research ground where neither problems nor algorithms need be simplified in order to obtain rigorous worst case complexity results. The overall strategy taken in this paper, and proposed as the subject matter of future research, is this:

– The use of PLS complexity theory to analyse the interrelation of algorithms (e.g. different EAs and MAs) with their component parts (e.g. mutation, crossover, selection and local search operators) and with the problems they are applied to.

– The use of PLS theory to produce worst case bounds on the complexities of search for specific algorithm-problem pairs.

The contributions of this paper are to illustrate how PLS theory may be applied to the study of EAs and MAs, and to provide some examples of how this may enhance our understanding. Specifically, we show the PLS-completeness of not just one PLS problem but of whole classes of problems associated with the application of MAs to the travelling salesman problem using a range of mutation, crossover and local search operators. Our PLS-completeness results shed light on the worst case behaviour that can be expected of an MA belonging to the class of PLS problems analysed. Moreover, we point out in this paper that MAs for graph partitioning and maximum network flow (both with important practical applications) also give rise to PLS-complete problems.

The organisation of this paper is as follows. Section 2 provides background material. Section 2.1 provides a very succinct introduction to memetic algorithms, perhaps one of the most widely used search methods. In Section 2.2 we briefly survey some other results relating to the complexity of search with EAs and MAs, and motivate our use of PLS theory. Section 2.3 provides an overview of the necessary concepts in polynomial local search complexity theory needed in this paper. Section 3 specifies the neighbourhoods of a range of recombination and mutation operators that have been widely used when applying EAs to the travelling salesman problem (TSP). Section 4 proceeds to show how PLS theory may be applied to the study of MAs and specifies how we extended the PLS concepts of cost functions and neighbourhoods to cope with the specifics of population-based search. The following subsections then go on to show the PLS-completeness of MAs with some specific combinations of operators. Finally, Section 5 discusses how our PLS results may be extended to problem domains other than the TSP, and what the implications might be for the design of MAs, before conclusions are drawn in Section 6.

2 Background

2.1 Memetic Algorithms

Memetic algorithms (MAs) are metaheuristics designed to find solutions to complex and difficult optimisation problems [35]. They are evolutionary algorithms (EAs) that include a stage of individual optimisation or learning as part of their search strategy. MAs are also known by the name of hybrid genetic algorithms or genetic local search. A recent overview of the field is reported in [18] and in a variety of on-line resources such as [34]. A simple MA scheme is shown in Fig. 1. In the same way as other EAs [5], an MA keeps a (multi)set of solutions, called the population, and applies to it several perturbation operators such as mutation, crossover and, in the case of MAs, local search.

Fig. 1 A basic version of a memetic algorithm
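Before turning to complexity matters, it may help to see this loop in code. The following is a minimal Python sketch of the generic crossover–mutation–local search–selection cycle that a scheme such as the one in Fig. 1 describes; it is illustrative only, and every callable name (init_population, crossover, mutate, local_search, fitness, select) is an assumption supplied by the user, not an operator prescribed by the paper.

import random

def memetic_algorithm(init_population, crossover, mutate, local_search,
                      fitness, select, generations=100, rng=random):
    # Illustrative sketch: all callable arguments are user-supplied assumptions
    # (population initialiser, variation operators, local searcher, cost
    # function and survivor-selection rule), not an API from the paper.
    population = init_population()
    for _ in range(generations):
        offspring = []
        while len(offspring) < len(population):
            p1, p2 = rng.sample(population, 2)
            child = mutate(crossover(p1, p2))
            child = local_search(child)   # the step that turns an EA into an MA
            offspring.append(child)
        # survivor selection over parents and offspring (e.g. keep the best mu)
        population = select(population + offspring)
    return min(population, key=fitness)   # assuming minimisation, e.g. tour length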


The inclusion of a local search stage into the traditional evolutionary cycle of crossover-mutation-selection is a crucial deviation from canonical EAs, as it affects how local and global search are performed [16, 24, 25, 27, 28]. The reader should also note that the pseudo-code shown in Fig. 1 is just one possible way to hybridise an EA with local search. In fact, a great number of distinct architectures for MAs have been presented in the literature and even integrated into a formal syntactic model [24, 25].

2.2 Complexity Studies and Memetic Algorithms

In this section, we review some results on the complexity of search with EAs and extend them to search with MAs, following as closely as possible the definitions given in [22] and [52]. For detailed explanations of the local search complexity classes the reader is referred to these papers. The essence of the message we want to convey in this paper is that the computational complexity of search, in this case based on MAs, cannot be divorced from the theoretical studies performed for local search heuristics.

S. Rana, in her doctoral dissertation [43], studied the role of local optima in evolutionary search. The author recognised that while many researchers view search based on EAs (and in particular genetic algorithms (GAs)) as a search for useful schemata, others consider it a search in the space of local optima. She goes on to argue that, while certainly both aspects can hold a share of the truth, local optima are more important than schemata for what she called hybrid genetic algorithms, as illustrated by genetic local search (i.e. memetic algorithms).

More recently, Vose and Wright [47], while discussing the processing of schemata, argue that:

The notion that "leverage" or "critical advantages" are gained by contemplating a large number of redundant coordinates is dubious at best. The widespread belief that genetic algorithms are robust by virtue of their schema processing is, in view of the observations made in this paper, more the result of salesmanship than logical analysis.

We recognise that theoretical studies on the schema processing abilities of MAs might be relevant for dynamic studies. However, our point of view is that for the specific purpose of studying the worst case complexity of search with MAs, local optima and local search complexity are the more relevant features.

In Hart and Belew [19] the authors introduce the following problem (DGA-MAX):

Genetic algorithm combinatorial maximization problem
Instance: An integer l defining the combinatorial binary space B^l and a polynomially computable function f : B^l ↦ Z.
Solution: A binary string s ∈ B^l.
Measure: f(s).

Based on this formulation they prove the NP-completeness of the decision version and the NP-hardness of the optimisation version by means of a polynomial reduction of SAT to the decision version of DGA-MAX. Moreover, they conclude:

These results imply that there does not exist a fixed set of algorithmic parameters which enable the GA to optimise an arbitrary function in F (the family of deterministic pseudo-boolean functions whose values range over the integers).

and they go on to say:

Any discussion of the computational complexity of the GA must be relative to a specific class of functions. The assumptions that can be made about this class of functions are often critical to establishing interesting complexity bounds.

Arguably, provided that P ≠ NP, we believe that these results are much stronger than the ones presented (and much debated) in the no free lunch theorem [42, 50, 51]. This is because they represent a "computationally bounded" version of the NFL and suggest that it is not possible to derive any interesting or useful conclusions if we don't look at specific families of problems. Moreover, it is our belief that even this is not sufficient. To fully characterise the complexity of search, the association between the neighbourhood used by the search algorithm (be it local or global) and the problem to be solved should be explicitly considered. Local search complexity theory is an analytical framework that explicitly uses knowledge about the function to be solved and the operators employed in the search. Hence, we suggest that the best way to try to measure the complexity of evolutionary search in general, and memetic variants in particular, is in the context of local search complexity theory.

To do that we will need to recast the concept of "population", which somehow gives the global character to search by means of evolutionary algorithms, to that of "solution", which gives the local nature to (heuristic) local search.

2.3 Polynomial Local Search Theory

Following [22], we adopt the definition:

Definition 1 (Local Search Problem) A local search problem Π is a 4-tuple Π = (D_Π, S_Π, f_Π, N_Π) where:

– D_Π is a set of instances, e.g., Euclidean planar distance matrices for the TSP.
– S_Π(x) is a set of solutions to instance x ∈ D_Π, e.g., a set of TSP tours.
– f_Π(s, x) is a cost function for s ∈ S_Π(x) and x ∈ D_Π, e.g., the length of tours for TSP instances.
– N_Π(s, x) is a neighbourhood function that assigns to every s ∈ S_Π(x) and x ∈ D_Π a set of solutions, {n_s}, with n_s ∈ S_Π(x).

Please note that there is no notion of "locality" or "vicinity" in the definition of N_Π(s, x) given above. Although practitioners usually consider as neighbours solutions that are in some sense a few perturbations away from each other, in the formal definition that is not required. One solution can be in the neighbourhood of another even if they do not have any common features. Moreover, the formalisation does not prescribe how solutions for a local search problem are to be represented. Thus, although the seminal paper [22] used binary encodings for the solutions, any polynomial transformation would also be valid.

Thus if, for example, the local search problem at hand is a version of, let's say, the TSP and the neighbourhood used is a Farthest_Insertion(...), then a solution will be one TSP tour. On the other hand, if the neighbourhood used is a Genetic_Algorithm(...), then a solution will be a population of TSP tours. Moreover, it is important to understand that by changing "just" the neighbourhood explored, the whole local search problem being solved is changed as well, and, although the set of instances to be solved remains the same, the local search problem will be different. This is a direct product of the clear association made by the theory between a search problem and a neighbourhood.

A metaphorical way of understanding this concept is by reference to the fitness landscape of a problem [28, 31, 44], which is a graph L = (V, E). The set of vertices V are solutions to a particular problem instance; the set of edges E contains all the pairs of vertices from V considered to be "close" to each other. The closeness is determined either by a move operator or by a metric (which ultimately induces a move operator). The vertices of L are labelled according to the objective value assigned to that particular solution. If in the fitness landscape L the move operator changes, or the metric that defines closeness changes, then the graph itself changes. The observation made in [44] that one operator gives rise to one landscape (the one-operator one-landscape hypothesis) is also valid for local search problems as defined above.

Definition 2 (The Class Polynomial Local Search (PLS)) A problem Π is said to be in the class PLS if three polynomial time algorithms can be defined for it as follows:

– A_Π is an algorithm that, given x, recognises whether x ∈ D_Π and, if it is, generates an initial solution s_0 ∈ S_Π(x).
– B_Π is an algorithm that evaluates the cost of a solution s ∈ S_Π(x) for x ∈ D_Π, that is, the algorithm computes f_Π(s, x).
– C_Π is an algorithm that, given x ∈ D_Π and s ∈ S_Π(x), determines whether s is a local optimum under neighbourhood N_Π(s, x) and, if it is not, returns a solution s′ ∈ N_Π(s, x) with strictly better cost:

  • f_Π(s, x) < f_Π(s′, x) for a maximisation problem.
  • f_Π(s, x) > f_Π(s′, x) for a minimisation problem.
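To make Definition 2 concrete, the following Python sketch (our own illustration, not notation from [22] or [52]) expresses the three required algorithms as an interface, together with the standard local search procedure that repeatedly invokes algorithm C until it certifies a local optimum.

from typing import Optional, Protocol, TypeVar

Instance = TypeVar("Instance")
Solution = TypeVar("Solution")

class PLSProblem(Protocol[Instance, Solution]):
    # Algorithm A: recognise x and, if it is a valid instance, build some s0 in S(x).
    def initial_solution(self, x: Instance) -> Optional[Solution]: ...
    # Algorithm B: evaluate the cost f(s, x).
    def cost(self, s: Solution, x: Instance) -> float: ...
    # Algorithm C: return a strictly better neighbour of s, or None if s is a
    # local optimum under the neighbourhood N(s, x).
    def improve(self, s: Solution, x: Instance) -> Optional[Solution]: ...

def standard_local_search(problem: "PLSProblem", x):
    # Follow algorithm C until it certifies a local optimum; the number of
    # iterations taken here is exactly the quantity PLS theory reasons about.
    s = problem.initial_solution(x)
    if s is None:
        return None                 # x was not recognised as an instance
    while True:
        better = problem.improve(s, x)
        if better is None:
            return s                # local optimum reached
        s = better

Membership in PLS then amounts to exhibiting polynomial-time implementations of the three methods for the problem and neighbourhood at hand.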

The complexity studies of local search relate not only problems and their instances but also their corresponding local optima:

Definition 3 (PLS-Reduction) Given two local search problems Π_1, Π_2, a PLS-reduction consists of two functions, h and g, computable in polynomial time, such that:

– h : D_Π1 ↦ D_Π2, that is, for x ∈ D_Π1, h(x) ∈ D_Π2.
– g : (S_Π2(h(x)), D_Π1) ↦ S_Π1(x).
– ∀x ∈ D_Π1, if s is a local optimum for h(x) of Π_2, then g(s, x) is a local optimum for x.

Definition 4 (PLS-complete) We say that a problem Π ∈ PLS is PLS-complete if every other problem in PLS can be PLS-reduced to it.

We have, then, that PLS-complete problems are the hardest problems in PLS. It is usually much harder to show PLS-completeness than NP-completeness because the former requires not only a mapping between problems but also a mapping between the whole search spaces, with local optima (with respect to specific neighborhoods) preserved.

The class PLS is somewhere between the search versions of P and NP (P_s, NP_s), and we have the following lemma [22, 52]:

Lemma 1 (P_s ⊆ PLS ⊆ NP_s)

The following two problems, relevant to this paper, amongst many other important combinatorial optimisation problems and neighborhoods, have been proved to be PLS-complete:

– Π_TSP−1 = (TSP, S_TSP, f_TSP, K-Opt) [26].
– Π_TSP−2 = (TSP, S_TSP, f_TSP, Lin-Kernighan) [37].

with f(·) the usual tour length.

3 Some Genetic Neighborhoods for the TSP

In this section we describe some commonly used neighborhoods for the TSP and we compute some bounds on their sizes. The neighborhoods described here are useful not only for the TSP but also for many other problems where solutions might be represented as permutations of objects. In what follows we assume that a TSP tour is encoded as a permutation of cities, each of them represented by an integer. We define and compute trivial bounds on the size of these operators because we use them later to show that specific classes of MA, which use those neighborhoods for the search, belong to the class PLS and some are PLS-complete.

3.1 Crossover Neighborhoods

We analyse next four neighborhoods induced by the partially matched (PMX) [11], insertion (IX) [36], distance preserving (DPX) [8] and edge-3 recombination [49] crossovers.

The IX operates by randomly choosing two crossover points. The sub-tours obtained are inserted into the other donor's chromosome in a position where a city of the sub-tour that is being inserted occurs in that donor. Duplicates are deleted and the gaps closed. For example, taking the two tours and randomly selected crossover points below:

t1 = 1 2 3 | 4 5 6 7 | 8 9 10

t2 = 7 6 2 | 3 1 5 4 | 10 8 9

we insert 3 1 5 4 in one out of the four possible occurrences of the cities in t1. The choice is made randomly:

t′1 = 1 2 3 1 5 4 | 4 5 6 7 | 8 9 10

in this case we inserted the sub-tour in the position of city number 3. Next we delete the repetitions in t′1 and close the gaps:

t′′1 = 2 | 3 1 5 4 | 6 7 8 9 10

The same procedure is used to generate t′′2.
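A minimal Python sketch of the IX move just illustrated follows; the function below is our reading of the verbal description (donor sub-tour spliced in at the position of one of its cities, duplicates outside the block removed), not code from [36].

import random

def insertion_crossover(receiver, donor, rng=random):
    # Cut a random sub-tour from the donor and insert it into the receiver at
    # the position of a randomly chosen occurrence of one of its cities; then
    # delete the duplicates outside the inserted block and close the gaps.
    n = len(donor)
    i, j = sorted(rng.sample(range(n), 2))
    block = donor[i:j + 1]
    positions = [k for k, city in enumerate(receiver) if city in block]
    p = rng.choice(positions)
    extended = receiver[:p + 1] + block + receiver[p + 1:]
    block_set = set(block)
    return [city for k, city in enumerate(extended)
            if p + 1 <= k <= p + len(block) or city not in block_set]

With receiver = t1, donor = t2, the cut points of the example and the occurrence of city 3 as insertion point, the function reproduces t′′1 = 2 3 1 5 4 6 7 8 9 10.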


PMX works by generating two crossover points at random and interchanging the sub-tours contained between these two points, as in the example below:

t1 = 1 2 3 | 4 5 6 7 | 8 9 10

t2 = 7 6 2 | 3 1 5 4 | 10 8 9

we exchange the sub-tours 4 5 6 7 with 3 1 5 4 obtaining:

t′1 = 1 2 3 | 3 1 5 4 | 8 9 10

t′2 = 7 6 2 | 4 5 6 7 | 10 8 9.

To restore feasibility, note that there are repeated cities in each intermediate offspring, so we perform the reverse changes 'outside' the crossover points:

t′′1 = 6 2 7 | 3 1 5 4 | 8 9 10

t′′2 = 3 1 2 | 4 5 6 7 | 10 8 9.
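The repair step amounts to following the mapping induced by the exchanged segments until a city outside the segment is reached. A Python sketch along these lines (our rendering, not the original implementation of [11]) is:

import random

def pmx(parent1, parent2, rng=random):
    # Exchange the segment between two random cut points and repair the
    # duplicates outside the cut points through the segment mapping.
    n = len(parent1)
    i, j = sorted(rng.sample(range(n), 2))

    def make_child(receiver, donor):
        segment = donor[i:j + 1]
        mapping = dict(zip(segment, receiver[i:j + 1]))  # donor city -> receiver city
        child = list(receiver)
        child[i:j + 1] = segment
        in_segment = set(segment)
        for k in list(range(i)) + list(range(j + 1, n)):
            city = receiver[k]
            while city in in_segment:        # follow the chain out of the segment
                city = mapping[city]
            child[k] = city
        return child

    return make_child(parent1, parent2), make_child(parent2, parent1)

With the cut points of the example, pmx(t1, t2) returns the pair (6 2 7 3 1 5 4 8 9 10, 3 1 2 4 5 6 7 10 8 9), i.e. t′′1 and t′′2 above.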

The DPX operator copies to the offspring all the edges that are shared by the parents. Then, it completes the offspring with edges that do not belong to either of the parents, in such a way that it preserves the distance between parents in the newly created offspring. In Merz' Ph.D. thesis [28] the completion is done using nearest neighbour information; the example below is taken from that work:

t1 = 5 3 9 1 2 8 10 6 7 4

t2 = 1 2 5 3 9 4 8 6 10 7

By copying the common edges to the offspring we get the following intermediate set of sub-tours:

t′1 = 5 3 9 | 1 2 | 8 | 10 6 | 7 | 4

We then complete the gaps in such a way that no edge that is just in one of the parents will be included:

t′′1 = 6 10 5 3 9 8 7 2 1 4

The same procedure can be used to generate the second offspring.

Edge crossover is based on the idea that an offspring should be created as far as possible using only edges that are present in one or more parents. It has undergone a number of revisions over the years. Here we describe the most commonly used version: edge-3 crossover after [49], which is designed to ensure that common edges are preserved.

In order to achieve this, an edge table (also known as adjacency lists) is constructed, which, for each element, lists the other elements that are linked to it in the two parents. A "+" in the table indicates that the edge is present in both parents. The operator works as follows:

1. Construct the edge table.
2. Pick an initial element at random and put it in the offspring.
3. Set the variable current_element = entry.


4. Remove all references to current_element from the table.
5. Examine the list for current_element:

– If there is a common edge, pick that to be the next element.
– Otherwise pick the entry in the list which has the shortest list.
– Ties are split at random.

6. In the case of reaching an empty list, the other end of the offspring is examined for extension; otherwise a new element is chosen at random.

Clearly, only in the last case will so-called foreign edges be introduced. A sketch of this procedure in code is given below.
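Assuming cities are integer labels and that ties and restarts are resolved uniformly at random, the six steps can be sketched in Python as follows; this is our reading of the procedure, not the reference implementation of [49].

import random

def edge3_crossover(p1, p2, rng=random):
    # Build the edge table from both parents, marking edges common to both.
    n = len(p1)
    table = {c: {} for c in p1}
    for parent in (p1, p2):
        for i, c in enumerate(parent):
            for nb in (parent[i - 1], parent[(i + 1) % n]):
                table[c][nb] = nb in table[c]      # True once the edge is seen in both parents

    def remove_refs(city):                         # step 4
        for adj in table.values():
            adj.pop(city, None)

    offspring = [rng.choice(p1)]                   # step 2
    unvisited = set(p1) - {offspring[0]}
    remove_refs(offspring[0])
    while unvisited:
        adj = table[offspring[-1]]                 # step 5: examine the current list
        if adj:
            common = [c for c, shared in adj.items() if shared]
            if common:
                nxt = rng.choice(common)           # prefer a common ('+') edge
            else:
                shortest = min(len(table[c]) for c in adj)
                nxt = rng.choice([c for c in adj if len(table[c]) == shortest])
        else:                                      # step 6: empty list
            offspring.reverse()                    # try to extend the other end
            if table[offspring[-1]]:
                continue
            nxt = rng.choice(list(unvisited))      # foreign edge as a last resort
        offspring.append(nxt)
        unvisited.discard(nxt)
        remove_refs(nxt)
    return offspring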

We define next the neighbourhoods associated with the four crossover operators just described and provide worst case upper bounds on the size of the neighbourhoods induced. The superscript L in s^L is used to denote the fixed size of tours. Given two solutions, s_1 and s_2, to a TSP instance x of size L, we produce the set of all the solutions that can be generated by either PMX, IX, DPX or edge crossover using s_1 and s_2. The lemmas describing the bounds are given in the appendix.

Definition 5 (IX Neighbourhood) Given s^L_1, s^L_2 ∈ S_TSP(x) with L = |x|, x ∈ D_TSP, then N_IX = {s^L : s^L = IX(s^L_1, s^L_2)}.

The size of N_IX is bounded by O(L^3).

Definition 6 (PMX Neighbourhood) Given s^L_1, s^L_2 ∈ S_TSP(x) with L = |x|, x ∈ D_TSP, then N_PMX = {s^L : s^L = PMX(s^L_1, s^L_2)}.

The size of N_PMX is bounded by O(L^2).

Definition 7 (DPX Neighbourhood) Given s^L_1, s^L_2 ∈ S_TSP(x) with L = |x|, x ∈ D_TSP, then N_DPX = {s^L : s^L = DPX(s^L_1, s^L_2)}.

The size of N_DPX is bounded by an exponential in L.

Definition 8 (Edge Recombination Neighbourhood) Given s^L_1, s^L_2 ∈ S_TSP(x) with L = |x|, x ∈ D_TSP, then N_Edge = {s^L : s^L = Edge(s^L_1, s^L_2)}.

The size of N_Edge is bounded by an exponential in L.

3.2 Mutation Neighbourhoods

It is usual to find in TSP studies that the mutation operators used are related to the local search employed, e.g., using a 4-exchange (also called double bridge) in conjunction with 2-opt. We describe next some mutations for the TSP, namely, j-swap, j-insertion, j-inversion and j-exchange. The reader should note that there is no general agreement on the names given to the neighbourhoods that will be described, and the situation is even worse: there are authors that confuse the j-Opt with the j-Exchange (and sometimes even with the Lin-Kernighan neighbourhoods), see for example [30], page 251, where the description of the j-Opt neighbourhood is actually that of the j-Exchange.


As said before, the permutation t1 = 1 2 3 4 5 6 7 8 9 10 is the tour that goes from city 1 to 10 in the natural order. An inversion in a tour is done by selecting a sub-tour of size j at random and inverting its cities, effectively performing a 2-exchange move (see below):

t1 = 1 2 | 3 4 5 6 | 7 8 9 10

t2 = 1 2 | 6 5 4 3 | 7 8 9 10

where the cities of t1 that are inverted are 3, 4, 5 and 6, producing t2.

The swap move exchanges the position of two city blocks of size j in the permutation, i.e., if j = 1:

t1 = 1 .2 3 4 5 6 7 .8 9 10

t2 = 1 .8 3 4 5 6 7 .2 9 10

where we move from t1 to t2 by swapping cities 2 and 8. This is usually called a 2-swap move in the literature. Here, due to the notation introduced, this move is one of the potential 1-swaps on t1.

The insertion selects a sub-tour of size j and inserts it between two randomly selected cities; the example illustrates the move for j = 3:

t1 = 1 2 3. 4 5 6 | 7 8 9 | 10

t2 = 1 2 3 7 8 9 4 5 6 10

where the sub-tour 7 8 9 was inserted just after the 3. That is, the insertion move is an asymmetric version of the j-swap move where one of the blocks to be swapped is of length 0.
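The three moves described so far translate directly into list slicing. The sketch below (illustrative Python; block positions are drawn uniformly at random and wrap-around blocks are not considered) is one possible reading of the descriptions rather than a canonical implementation.

import random

def j_inversion(tour, j, rng=random):
    # Invert a randomly chosen block of j consecutive cities (a 2-exchange move).
    i = rng.randrange(len(tour) - j + 1)
    return tour[:i] + tour[i:i + j][::-1] + tour[i + j:]

def j_swap(tour, j, rng=random):
    # Exchange two non-overlapping blocks of j cities each.
    n = len(tour)
    i, k = sorted(rng.sample(range(n - j + 1), 2))
    while k - i < j:                               # resample until the blocks are disjoint
        i, k = sorted(rng.sample(range(n - j + 1), 2))
    return (tour[:i] + tour[k:k + j] + tour[i + j:k]
            + tour[i:i + j] + tour[k + j:])

def j_insertion(tour, j, rng=random):
    # Cut a block of j cities and re-insert it at a random position.
    n = len(tour)
    i = rng.randrange(n - j + 1)
    block, rest = tour[i:i + j], tour[:i] + tour[i + j:]
    p = rng.randrange(len(rest) + 1)
    return rest[:p] + block + rest[p:]

For example, j_swap(list(range(1, 11)), 1) can produce the 1-swap of cities 2 and 8 shown above, and j_insertion(list(range(1, 11)), 3) can produce 1 2 3 7 8 9 4 5 6 10.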

The j-exchange operator works by breaking j links and inserting j new links to rebuild a feasible tour; for j = 4 we have:

t1 = 1 2 3 4 5 6 7 8 9 10

t2 = 1 8 7 4 5 6 3 2 9 10

where the broken links in t1 used to generate t2 were {(1, 2), (8, 9), (7, 6), (4, 3)}.

Having introduced these moves, we can proceed to formally define the following neighborhoods, which are generalisations of those defined in [3] and [1]. The lemmas describing the bounds on the size of the neighbourhoods are also given in the appendix. We will need the following auxiliary higher order functions:

Definition 9 (apply_R_times)

apply_R_times : (S_TSP(x) ↦ S_TSP(x)) × S_TSP(x) −→ (S_TSP(x) ↦ S_TSP(x)) × S_TSP(x)

apply_R_times(f, x) = apply_(R−1)_times(f, f(x))


and

Definition 10 (snd)

snd : (α, β) ↦ β

snd(x, y) = y

The first one, apply_R_times, is an iteration function, and snd returns the second element of the pair passed to it as an argument.
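In Python the two auxiliary functions can be written as follows; we make the iteration count an explicit argument and take R = 0 as the base case, an assumption the recursive equation of Definition 9 leaves implicit, and the usage comment reuses the illustrative j_inversion move from the earlier sketch.

def apply_R_times(f, x, R):
    # Iterate f, returning the pair (f, f(...f(x)...)) after R applications.
    return (f, x) if R == 0 else apply_R_times(f, f(x), R - 1)

def snd(pair):
    # Return the second element of a pair.
    return pair[1]

# One application of a (hypothetical) 3-inversion move to a tour s:
# neighbour = snd(apply_R_times(lambda t: j_inversion(t, 3), s, 1))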

Definition 11 (j-Inversion Neighbourhood) Given s^L_1 ∈ S_TSP(x), with L = |x|, x ∈ D_TSP and 2 ≤ j < L, then N_j-Inversion = {s^L : s^L = snd(apply_1_times(j-Inversion, s^L_1))}.

The size of N_j-Inversion is bounded by O(L).

Definition 12 (j-Exchange Neighbourhood) Given s^L_1 ∈ S_TSP(x), with L = |x|, x ∈ D_TSP and 2 ≤ j ≤ L, then N_j-Exchange = {s^L : s^L = snd(apply_1_times(j-Exchange, s^L_1))}.

We want to stress that this is not the j-Opt neighbourhood; for a definition of the latter see [21] and references therein.

The size of N_j-Exchange is bounded by O(L^j).

Definition 13 (j-Swap Neighbourhood) Given s^L_1 ∈ S_TSP(x), with L = |x|, x ∈ D_TSP and 1 ≤ j ≤ L/2, then N_j-Swap = {s^L : s^L = snd(apply_1_times(j-Swap, s^L_1))}.

The size of N_j-Swap is bounded by O(L^2).

Definition 14 (j-Insertion Neighbourhood) Given s^L_1 ∈ S_TSP(x), with L = |x|, x ∈ D_TSP and 1 ≤ j ≤ L, then N_j-Insertion = {s^L : s^L = snd(apply_1_times(j-Insertion, s^L_1))}.

The size of N_j-Insertion is bounded by O(L^2).

4 The Local Search Complexity of a Family of Memetic Algorithms for the TSP

4.1 Preliminaries

In previous sections we formally defined some neighbourhoods used on the TSP problem and bounded their size. Here, we define a family of related local search problems as formalised in Definition 1 and later we prove their membership in PLS. To accomplish this goal we need to provide the four entities described in Definition 1. In this case the set of instances D_Π is the set of instances for the TSP. The set of solutions to instance x ∈ D_TSP, S_TSP(x), will be a set of multi-sets, i.e. populations, of tours which are solutions to x, where a multi-set is a set that can contain multiple instances of the same object (in this case, tours).

The cost function, f_TSP(s, x), is

f_TSP(s, x) = min_{t∈s}(tour_length(t))   (1)


that basically assigns to a population of tours the cost of the best tour in the population. The crucial part of the definition of the local search problems we want to analyse is the definition of the neighbourhood function N_TSP(s, x). We will base the neighbourhood definition on the operators defined in previous sections. For each of these MA models we will have a particular N_TSP(s, x) (defined later in the appropriate section).

We further define a set of crossover neighbourhoods Cros, and a set of mutation neighbourhoods Mut, as follows:

Cros = {N_IX, N_PMX}   (2)

and

Mut = {N_j-Inversion, N_j-Exchange, N_j-Swap, N_j-Insertion}   (3)

The elements of the sets Mut and Cros are those neighbourhoods for which, in previous sections, polynomially bounded sizes were found. The reader must also note that, in contrast to continuous domain optimisation, where a Poisson distribution is often used for the application of mutation, in combinatorial optimisation permutation-based problems like the ones we are considering, mutation is usually applied once per chromosome. That is, a candidate solution s′ to a continuous optimisation problem is potentially within the neighbourhood of any other solution s. This is because the Poisson distribution does not "remember" how many mutations have already been applied to s, rendering the search space a complete graph. However, a solution s to a permutation combinatorial problem will usually be mutated once per iteration, hence the underlying search space is not a complete graph.

With the sets Cros and Mut we formally define a family of local search problems by means of the 8 different combinations of genetic operators and the different D values of the taxonomy introduced in [25]:

Definition 15 (TSP-(D, M, R) Local Search Problem)

TSP-(D, M, R) = (TSP, S_TSP, f_TSP, N_TSP-D,M,R),

where TSP, S_TSP, f_TSP are as defined before and N_TSP-D,M,R is defined next for some values of D in the taxonomy of memetic algorithms.

The D value in the previous definition ranges from 0 to 15, each of which represents a different mode of operation of the genetic operators and the local searcher. In this paper we consider only the sub-range [0, 3]. In a memetic algorithm with D = 0 the only "generating functions" available to the algorithm are the genetic operators (i.e. there is no local search stage); when the MA is a D = 1, then local search is also present and is executed either before or after mutation; a D = 2 MA is one that coordinates the local searcher with the operations of the crossover (e.g. to restore feasibility to an offspring); finally, a memetic algorithm with index D = 3 is one that will schedule the operation of the local searcher before and/or after the mutation operator and before and/or after the crossover operator.


4.2 Complexity of N_TSP−0,M,R

By using a D = 0 in Definition 15 we are specifying a local search problem whose neighbourhood is based on a standard GA with crossover neighbourhood R and mutation neighbourhood M. N_TSP−0,M,R should produce, based on a given s ∈ S_TSP, which will be called the "parents" set P, a set of neighbour solutions. Recall that in the definition of these PLS problems a solution is actually a set of tours. We construct N_TSP−0,M,R incrementally:

Based on R ∈ Cros we build O′ = P ∪ (⋃_{t_i,t_j∈P; t_i≠t_j} R(t_i, t_j)), that is, to the current solution for a PLS problem we add all the possible tours that result from crossing over two other tours that already exist in that solution. Note that if one were to allow crossover between identical individuals, the inclusion of the first P in the right hand side would be unnecessary.

Next, using M ∈ Mut we have O′′ = O′ ∪ (⋃_{t_i∈O′} M(t_i)), where also all the possible mutated tours have been included in the current solution. We are now in a position to specify the neighbourhood:

N_TSP−0,M,R(P) = {U(O′′, P)}   (4)

This definition of the PLS neighbourhood depends on having an appropriate updating function U whose role is to define how the algorithm moves from the parents set P to the next generation.

The updating function must have a special form; we give some examples below (a code sketch of the construction follows the list):

– Sort O′′ ∪ P and return the μ = |P| best individuals as the new population. This is known as the plus strategy.
– Delete the worst tour from P and insert the best tour from O′′ in P. Return P. This is a variant of the GENITOR strategy.
– Return the μ best tours from O′′. This is known as the comma strategy.
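The construction of O′, O′′ and the single neighbour U(O′′, P) can be sketched as follows, under the assumptions that tours are hashable tuples, that crossover(t1, t2) and mutation(t) return the full sets of tours reachable by one application of R and M respectively, and that tour_length is a user-supplied cost function; the plus strategy from the first bullet is shown as one possible U.

from itertools import combinations

def exhaustive_neighbourhood(P, crossover, mutation, update):
    # O' : the parents plus every tour obtainable by crossing two parents.
    o_prime = set(P)
    for t1, t2 in combinations(P, 2):
        if t1 != t2:                        # the condition t_i != t_j in the union
            o_prime |= crossover(t1, t2)
    # O'' : additionally every tour obtainable by one mutation of a member of O'.
    o_double_prime = set(o_prime)
    for t in o_prime:
        o_double_prime |= mutation(t)
    # Eq. 4: the single neighbour of P is the population returned by U.
    return update(o_double_prime, P)

def plus_strategy(offspring, P, tour_length):
    # Keep the mu = |P| best tours of O'' union P (the 'plus' updating function).
    mu = len(P)
    return sorted(set(offspring) | set(P), key=tour_length)[:mu]

# e.g. exhaustive_neighbourhood(P, R, M,
#          lambda O, pop: plus_strategy(O, pop, tour_length))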

This sort of updating function was also used in [20] to prove some bounds on the average time complexity of evolutionary algorithms under certain conditions, and in [17] to analyse convergence properties of Evolutionary Pattern Search Problems. Rayward-Smith used a somewhat similar formulation to provide a simple, unified local search framework for genetic algorithms, simulated annealing and Tabu search [39]. Interestingly enough, [38] employ a similar construction to define an "Extended H System with Locally Evolving Splicing Rules" within the DNA computation paradigm. They also prove that these systems can generate any recursively enumerable language. The reader should note that their work focused on completeness and universality, from a language theory perspective, rather than on complexity issues as we are doing here.

The definition in Eq. 4 implies that |N_TSP−0,M,R| = |P| = μ. That is, the neighbourhood for this PLS problem is of constant size and is constructed by updating the original solution (the parents population P) with a multi-set containing all the possible combinations of applying or not the mutation and crossover operators to tours in the parents population. Thus, Eq. 4 defines an exhaustive GA in the sense that, given an initial population, it applies crossover and mutation in such a way that it generates all the individuals a standard GA (i.e. one that uses random numbers to generate the offspring population) might produce. As a result, an exhaustive GA cannot produce worse solutions than a standard GA that uses the same generating functions R and M and updating function U in one generation. Obviously, the cost of such a guarantee is paid in time, as each new generation of the exhaustive GA requires polynomial time in the size of the problem instance while a standard stochastic GA will use only constant time per generation, as the population size is generally kept independent of the problem size.

We will see next, however, that for the PLS problems studied here we are still within a polynomial time bounded situation. Moreover, Eq. 4 is an operational definition that effectively defines an algorithm, although an expensive one. Please note that a similar approach was also employed in practical realms. [23] used what they called "systematic crossover" to improve the performance of a genetic algorithm on the protein folding domain. In their approach, they generate all the possible crossover points of a pair of solutions, i.e., protein structures, and select the best of the offspring generated for the next generations. [45], again working on protein structure prediction, exhaustively search for the best way to reconnect two partial protein structures after crossover. In the first case a breadth first search was used, whereas in the latter a depth first search was executed. In any case, the use of exhaustive GAs is not too impractical a suggestion and, in our case, it helps to understand the search process while simultaneously defining a deterministic stopping criterion.

Auxiliary Lemma 10 (demonstrated in the appendix) shows that the size of O′′ is polynomially bounded for the crossover and mutation operators in Eqs. 2 and 3; we can then proceed to prove our main results.

Theorem 1 (TSP-(0, M, R) is in PLS)

Proof To prove the theorem we need to provide three polynomial time algorithms:

– A_TSP−(0,M,R)
– B_TSP−(0,M,R)
– C_TSP−(0,M,R)

as described in Definition 2.

Algorithm A_TSP−(0,M,R) can recognise in polynomial time an instance x ∈ D_TSP and construct an initial solution P consisting of μ tours. Clearly this can be done in polynomial time, i.e., applying a predefined number of times any of j-Inversion, j-Swap, j-Exchange or j-Insertion to a random permutation of cities.

Algorithm B_TSP−(0,M,R) is just Eq. 1; this is also a polynomial time procedure because it needs only to find out which of the μ tours in a solution s ∈ S_TSP is the best and return its cost.

Finally, we need to provide algorithm C_TSP−(0,M,R). It will be based on N_TSP−0,M,R. Given a solution P, C will compute N_TSP−0,M,R(P). If, for s ∈ N_TSP−0,M,R(P), f_TSP(s, x) ≥ f_TSP(P, x), then C reports that P is a local optimum. Otherwise it returns s. Because of Lemma 10, and because the neighbourhood used is of constant size μ, C runs in polynomial time; hence TSP-(0, M, R) is in PLS. □

We prove next the completeness of this PLS problem.


Theorem 2 (TSP-(0, M, R) is PLS-complete)

Proof To prove that TSP-(0, M, R) is PLS-complete we need only to prove that, besides being in PLS as Theorem 1 shows, it is PLS-hard. To accomplish this we will provide a PLS-reduction from Π_TSP−1 = (TSP, S_TSP, f_TSP, K-Opt) to it.

We begin by giving function h as the definition of the reduction required. In this case, and because both PLS problems deal with instances of the TSP, h is the identity function. We construct g(s, x), where s ∈ S_TSP(x) is a constant size population of tours that are solutions of x and x is an instance of the TSP, as follows:

g(s, x) = t | t ∈ s ∧ ¬∃r ∈ s with f_TSP(r) < f_TSP(t), that is, t is the best tour that appears in solution s of TSP-(0, M, R). The definition also requires that if s∗ is a local optimum under the neighbourhood used by TSP-(0, M, R) then it must, as well, be a local optimum for the neighbourhood used by Π_TSP−1. Suppose that indeed s∗ is a local optimum for TSP-(0, M, R) and let t∗ = g(s∗, x). It is easy to see that t∗ is a local optimum for K-Opt as well, for if it were not, then algorithm C in Theorem 1, by means of the neighbourhood in Eq. 4, would have produced a better solution by means of its j-exchange operator, and in this case s∗ would not have been a local optimum itself. Hence t∗ is a local optimum. With this we conclude the reduction. □

The previous proof is valid for any j ≥ K, where j is the number of links exchanged by the mutation operator j-Exchange and K the number of edges considered by K-opt, and for any constant size population. The implication is important in the sense that it says that a GA using a fixed size population is at least as hard as an iterative improvement heuristic (K-opt) that uses just one solution. That is, the proof is valid over a range of mutation and local search operators. Moreover, in the worst case, the population will not help overcome the (worst case) exponential number of iterations to reach a local optimum.

In the proof we kept M as the set of mutation operators even though we actually employed the j-Exchange move to do the analysis. The reason for this is that the other operators' neighbourhoods, e.g., j-Inversion, j-Swap and j-Insertion, are included in those of the more general j-Exchange: j-Inversion samples part of a 2-Exchange neighbourhood, j-Swap part of a 4-Exchange and j-Insertion that of a 3-Exchange neighbourhood.

4.3 Complexity of N_TSP−1,M,R,L, N_TSP−2,M,R,L, N_TSP−3,M,R,L

In the previous section we analysed the local search complexity properties of an MA that actually doesn't use local search, but rather only employs mutation, crossover and selection. Now we extend the analysis to MAs that are located higher in the taxonomy, that is, those MAs for which the number D is D > 0. If we consider architectures with D = 1 we are including a local search that is coordinated together with the mutation operator by means of the fine grain scheduler f_M. This scheduler will perform a local search before and/or after applying the mutation operator. If the elementary moves on which the helper relies are constrained to be those of the mutation operator, or a neighbourhood included in the mutation neighbourhood, then our proof of PLS-completeness is still valid and we can state the following theorems:

Theorem 3 (N_TSP−1,M,R,L is PLS-complete) For M and R as defined in Eqs. 3 and 2 respectively, N_TSP−1,M,R,L is PLS-complete when both L and M use the same elementary operator, or when the neighbourhood induced by L is strictly included in that of M.

Theorem 4 (N_TSP−2,M,R,L is PLS-complete) N_TSP−2,M,R,L is PLS-complete when both L and M use the same elementary operator, or when the neighbourhood induced by L is strictly included in that of M.

Theorem 5 (N_TSP−3,M,R,L is PLS-complete) N_TSP−3,M,R,L is PLS-complete when both L and M use the same elementary operator, or when the neighbourhood induced by L is strictly included in that of M.

Proof To prove the previous theorems the reader should only note that, if the preconditions of the theorems hold, then a given solution s∗, i.e. a local optimum, for those PLS problems will still be a local optimum for Π_TSP−1 when using the PLS-reduction introduced earlier. This is so because the local searcher can only improve tours by using the same core move that the mutation operator, j-exchange, uses (or a sampled subset of that neighbourhood). Hence local optima for the former will also be local optima for the latter. Moreover, they will in turn be local optima for K-opt. □

It is important to note that several of the MAs in the literature fulfil the precondition of the theorems. For example, in the case of the TSP, researchers usually implement MAs where the mutation is the "double-bridge" move and the local searcher a 2-exchange or a 2-Opt, which verifies the theorem conditions; see [18] or the extensive list of references in [25]. Moreover, those implementations that use a Lin-Kernighan heuristic as local search are justified in doing so not only from the empirical point of view, but also from our previous analyses. The latter heuristic employs a variable j-Exchange neighbourhood to improve solutions and hence it is not included in (nor does it necessarily include) the neighbourhoods of the frequently employed "double-bridge" mutation, rendering the memetic algorithm more robust. The results presented here are in agreement with the observation in [43] that the local optima of evolutionary algorithms should be those associated with the mutation operator landscape. A corollary of this is that if we want to produce associations (search problem, memetic algorithm with D = {1, 2, 3}) that are not a priori PLS-complete, then we must use orthogonal neighbourhoods or at least neighbourhoods that are not strictly included one in the other.

In retrospect, it is not surprising that all of the previous PLS problems are indeed PLS-complete and that the use of a population cannot help to overcome the worst case of an exponential path to a local minimum. Anybody familiar with the relative performances of K-opt and Lin–Kernighan (LK) heuristics running alone on instances of the TSPLIB [40], or within an MA, will be forced to acknowledge that the benefits of using the GA to provide the "global" search are marginal. It is well known that LK can produce tours about 2.5% (or less) above the optimum, leaving a small space for the GA to act.

5 Extensions

5.1 Alternative Models

An extension of the previous theoretical framework that leaves the results unchanged is to consider an alternative metric for the quality of a solution (population), that is, a different definition of Eq. 1. If instead of taking as the cost of the population that of the fittest individual, we take, for example, the average of the best μ < |P| individuals, the results will still hold.

For MAs where the neighbourhood of a given generating function is not complete but rather generated by random applications of the elementary move, i.e. non-exhaustive MAs, we can produce the following corollary:

Corollary 1 (Local optimality of non-exhaustive MA) The local optima of N_TSP−{0,1,2,3},M,R,L are also local optima for the non-exhaustive (random) version.

It is possible to verify the corollary by employing the same PLS-reduction (h, g) of the previous theorems. It is easy to see that if a solution s is a local optimum for the exhaustive version of the PLS problem then, for all randomly chosen samples of the complete neighbourhood, that is for any pivot rule, s will be a local optimum as well. Moreover, the local optimality of the sample is even valid for non-computable or non-polynomial pivot rules.

It should be clear that the limitation of the complexity and convergence analysis done so far is not related to the exhaustiveness of the neighbourhoods defined for the several genetic operators but to the fact that the updating function is an elitist function, that is, it always preserves the best individual. Theorems equivalent to 2, 3, 4, 5 can be obtained for random neighbourhoods if one assumes that the random number generator used to sample the neighbourhood works in polynomial time. In this case, a suitable modification to algorithm C of the PLS definition will be sufficient to provide such a proof.

However, if a "generational" kind of selection were used instead, the theorems would not be valid. Intuitively, a generational updating function can be seen as a simulation of random choices over the exhaustive neighbourhood and the simulation of perhaps non-polynomial pivot rules. It remains to be seen whether the theorems can be proved for the generational case or whether a more positive result can be expected in that case.

Another important observation is that the PLS-completeness results of this paper were based on a "Lamarckian" version of MA, i.e., the benefits of local search were coded back into the genotype of individuals. However, note that the mapping from the phenotype to the genotype is not used in these proofs. By a suitable modification of algorithm B_TSP−(D,M,R,L), with D ∈ {0, 1, 2, 3}, used in the PLS-completeness proofs, i.e., including the local searcher L as a bias in the original fitness function in Eq. 1, the polynomial local search problems described here can be recast as "Baldwinian" versions. The PLS-completeness proofs will still be valid because, by hypothesis, L runs in polynomial time. As a consequence, all that was said for Lamarckian MAs in this paper is valid for the Baldwinian counterparts as well.


5.2 PLS-Complexity in Other Problem Domains

Yannakakis [52] proves the PLS-completeness of various other local search problems, amongst them graph partitioning (GP) under the Kernighan–Lin, Swap, Fiduccia–Mattheyses and Fiduccia–Mattheyses-Swap neighbourhoods, and Max-Cut under the flip neighbourhood. Alekseeva et al. [2] prove the PLS-completeness of the p-median problem under the Kernighan–Lin, Swap and Fiduccia–Mattheyses neighbourhoods and discuss these in the context of variable neighbourhood search (VNS). These local search problems are related to MAs in a similar way as the local search problems described for the TSP. In particular, in [31] a memetic algorithm is employed with the Swap and Kernighan–Lin local search. The genetic operators described in that reference (and also in [28] and [29]) induce polynomial neighbourhoods. That is, for the problems and operators studied there it is possible to prove, using similar techniques to the ones we used here, the PLS-completeness of the relevant polynomial local search problems. Similarly, [4] and [33] are examples where PLS-completeness results will also follow. That is, the complexity results we present in this paper are not the result of specially contrived problems or algorithms, but they apply to algorithms as they are actually being used.

5.3 Implications of PLS Theory for the Design of Metaheuristics

The results of this paper with respect to memetic algorithms and, more generally, those of PLS theory should be of relevance to researchers and practitioners in metaheuristic design. Take the case of the variable neighbourhood search metaheuristic. The most commonly used template for VNS [15] is given in the pseudo-code below.

VNS():
Begin
    Select the set of neighbourhood structures Nk, k = 1, ..., kmax;
    Find an initial solution x;
    Choose a stopping condition;
    Repeat Until ( Stopping condition fulfilled ) Do
        Set k = 1;
        Repeat Until ( k == kmax ) Do
            SHAKING: Generate a point x′ from the kth neighbourhood of x,
                     with x′ ∈ Nk(x);
            LOCAL_SEARCH: Apply some local search method starting from x′;
                          Store the local optimum obtained in x′′;
            If (x′′ is better than x) Then
                x = x′′;    /∗ Improved solution found ∗/
                k = 1;
            Else
                /∗ When trapped in a local optimum relative ∗/
                /∗ to the neighbourhood being used,          ∗/
                /∗ change neighbourhood                      ∗/
                k = k + 1;
            Fi
        Od
    Od
    Return x;
End.


This basic pseudo-code of VNS is, assuming a minimisation problem, a descend-first improvement method. Several combinatorial problems have been solved using variable neighbourhood search, e.g., the TSP [15], facility location problems [12–14, 32] and their continuous version [7]. Other applications are discussed in [15]. When VNS is instantiated for permutation problems, e.g. facility location, p-median, etc., the kth neighbourhood is defined as a function of the (k−1)th and usually has a topology similar to the swap and insertion neighbourhoods described in this paper. Hence, the potential worst case complexity results from PLS theory are even more severe in VNS than in MAs. This is because, by construction, VNS depends on non-orthogonal, i.e. nested, neighbourhoods and on more elitist updating rules (see the pseudo-code above).
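For completeness, a minimal Python rendering of the VNS template above, assuming a minimisation problem, is sketched below; neighbourhoods is a list of shaking functions N_k, local_search and cost are user-supplied, and the stopping condition is reduced to a fixed iteration budget.

import random

def vns(x0, neighbourhoods, local_search, cost, max_iters=1000, rng=random):
    x = x0
    for _ in range(max_iters):                     # stopping condition: iteration budget
        k = 0
        while k < len(neighbourhoods):
            x_shaken = neighbourhoods[k](x, rng)   # SHAKING
            x_local = local_search(x_shaken)       # LOCAL_SEARCH
            if cost(x_local) < cost(x):            # improved solution found
                x, k = x_local, 0
            else:
                k += 1                             # change neighbourhood
    return x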

The design of Tabu Search could also benefit from a more detailed study of its PLS complexity. A PLS-complete result for Tabu Search associated with a given combinatorial optimisation problem could have important implications for the way the Tabu list is maintained and the aspiration criteria updated. A PLS-complete result would imply that, potentially, a very long Tabu list might need to be kept in order to make TS effective.

A more detailed study of PLS theory could also help us develop better and more challenging test generators. A PLS-complete problem associated with a given metaheuristic might provide easily recognisable local optima that are in fact difficult to attain. That is, PLS-complete problems would seem to provide a way of generalising the long path problems that have been used to test evolutionary algorithms.

6 Conclusions

Here we studied the problem of analysing the worst-case complexity of heuristics applied to a well-known problem, the two-dimensional and Euclidean TSP. The heuristics analysed were a family of memetic algorithms. Unlike other complexity approaches for heuristics, e.g. [6, 41, 46, 48], we restricted neither the algorithms used nor the problem they intend to solve.

By way of very simple arguments we proved the PLS-completeness of a family of memetic algorithms for the TSP. These complexity results imply that potentially very long paths to local optima exist even when the neighbourhoods used by the various memetic algorithms analysed are of polynomial size. In other words, the addition of a population to the evolutionary heuristic does not improve the worst-case behaviour beyond that of local search. However, we acknowledge that these results do not permit us to make any claims about the average case behaviour of the different algorithms.

In Section 5 we mentioned several other real-life problems and real-life algorithms for which similar results will hold. Although the results obtained here are not tight bounds on the worst-case complexity, they represent (to the best of our knowledge) the first such analysis of an evolutionary-based metaheuristic for an NP-hard problem.

Notably, a new complexity theory approach is emerging that (similarly to polynomial local search complexity theory) integrates knowledge of the problem and the particular algorithms used to solve it. This approach is based on the "Domination" concept recently introduced by Gutin and coworkers [9, 10]. We believe that the theory that emanates from this concept will be of relevance to the overall goal of comparing alternative heuristics in a principled manner and in complementing the worst case complexity results that can be achieved by PLS theory. Ultimately, the integration of both approaches should help to shed light on the complexity of search with evolutionary algorithms in general, and memetic algorithms in particular.

Acknowledgements The authors acknowledge useful discussions with S. Gustafson, R. Carr, W. Hart, M. Land, G. Gutin and S. Ahmadi and the valuable comments made by the reviewers. N.K. would like to acknowledge EPSRC and BBSRC for awards EP/D021847/1, EP/E017215/1, GR/T07534/01 and BB/C511764/1.

Appendix

The proofs for the lemmas below are in the supplementary material at http://www.cs.nott.ac.uk/∼nxk/PAPERS/suppMat.pdf.

Lemma 2 (Size of N_IX) The size of N_IX is bounded by O(L^3).

Lemma 3 (Size of N_PMX) The size of N_PMX is bounded by O(L^2).

Lemma 4 (Size of N_DPX) The size of N_DPX is bounded by an exponential in L.

Lemma 5 (Size of N_Edge) The size of N_Edge is bounded by an exponential in L.

Lemma 6 (Size of N_{j-Inversion}) The size of N_{j-Inversion} is bounded by O(L).

Lemma 7 (Size of N_{j-Exchange}) The size of N_{j-Exchange} is bounded by O(L^j).

Lemma 8 (Size of N_{j-Swap}) The size of N_{j-Swap} is bounded by O(L^2).

Lemma 9 (Size of N_{j-Insertion}) The size of N_{j-Insertion} is bounded by O(L^2).

Lemma 10 (|O′′| is polynomially bounded)
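As an informal illustration of how bounds of this kind can be checked, the following Python sketch enumerates a pairwise-swap neighbourhood of a tour of length L and verifies that it contains L(L − 1)/2 members, i.e. that it grows as O(L^2), in line with the quadratic bounds stated above; the swap operator used here is generic and only illustrative, not one of the operators analysed in the paper.

from itertools import combinations

def swap_neighbourhood(tour):
    """Yield every tour obtained by swapping the cities at two positions."""
    for i, j in combinations(range(len(tour)), 2):
        t = list(tour)
        t[i], t[j] = t[j], t[i]
        yield tuple(t)

# The neighbourhood of a tour of length L has L*(L-1)/2 members, i.e. O(L^2).
for L in (5, 10, 20):
    tour = tuple(range(L))
    assert sum(1 for _ in swap_neighbourhood(tour)) == L * (L - 1) // 2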

References

1. Aarts, E.H.L., Lenstra, J.K.: Introduction. In: Aarts, E.H.L., Lenstra, J.K. (eds.) Local Search in Combinatorial Optimization, pp. 1–17. Wiley, New York (1997)

2. Alekseeva, E., Kochetov, Y., Plyasunov, A.: Complexity of local search for the p-median problem. In: Proceedings of MEC-VNS: 18th Mini Euro Conference on VNS (2005)

3. Anderson, E.J., Glass, C.A., Potts, C.N.: Machine scheduling. In: Aarts, E.H.L., Lenstra, J.K. (eds.) Local Search in Combinatorial Optimization. Wiley, New York (1997)

4. Areibi, S., Moussa, M., Abdullah, H.: A comparison of genetic-memetic algorithms and other heuristic search techniques. In: Proceedings of the 2001 International Conference on Artificial Intelligence IC-AI 2001, Las Vegas, NV, USA (2001)

5. Bäck, T., Fogel, D.B., Michalewicz, Z.: Handbook of Evolutionary Computation. IOP Publishing (1997)


6. Baum, E.B., Boneh, D., Garrett, C.: Where genetic algorithms excel. Evol. Comput. 9(1), 93–124 (2001)

7. Brimberg, J., Hansen, P., Mladenovic, N., Taillard, E.: Improvements and comparison of heuristics for solving the multisource Weber problem. Oper. Res. 48(3), 444–460 (2000)

8. Freisleben, B., Merz, P.: New genetic local search operators for the traveling salesman problem. In: Voigt, H.-M., Ebeling, W., Rechenberg, I., Schwefel, H.-P. (eds.) Proceedings of the 4th Conference on Parallel Problem Solving from Nature – PPSN IV. Lecture Notes in Computer Science, vol. 1141, pp. 890–900. Springer (1996)

9. Gutin, G., Yeo, A.: Polynomial approximation algorithms for the TSP and the QAP with a factorial domination number. Discrete Appl. Math. 119, 107–116 (2002)

10. Gutin, G., Yeo, A., Zverovich, A.: Traveling salesman should not be greedy: domination analysis of greedy-type heuristics for the TSP. Discrete Appl. Math. 117, 81–86 (2002)

11. Goldberg, D.E., Lingle, R.: Alleles, loci, and the travelling salesman problem. In: Proceedings of the First International Conference on Genetic Algorithms and their Applications. Lawrence Erlbaum Associates (1985)

12. Hansen, P., Brimberg, J., Mladenovic, N., Urosevic, D.: Primal-dual variable neighborhood search for the simple plant location problem. INFORMS Journal on Computing (2007) (in press)

13. Hansen, P., Mladenovic, N.: Variable neighborhood search for the p-median. Location Sci. 5(4), 207–226 (1998)

14. Hansen, P., Mladenovic, N.: An introduction to variable neighborhood search. In: Metaheuristics, Advances and Trends in Local Search Paradigms for Optimization, pp. 433–458 (1999)

15. Hansen, P., Mladenovic, N.: Variable neighborhood search: principles and applications. European J. Oper. Res. 130, 449–467 (2001)

16. Hart, W.E.: Adaptive global optimization with local search. Ph.D. thesis, University of California, San Diego (1994)

17. Hart, W.E.: A convergence analysis of unconstrained and bound constrained evolutionary pattern search. Evol. Comput. 9(1) (2001)

18. Hart, W.E., Krasnogor, N., Smith, J.E. (eds.): Recent Advances in Memetic Algorithms and Related Search Technologies. Springer (2004)

19. Hart, W.E., Belew, R.K.: Optimizing an arbitrary function is hard for the genetic algorithm. In: Proceedings of the 4th International Conference on Genetic Algorithms, pp. 190–195 (June 1991)

20. He, J., Yao, X.: Drift analysis and average time complexity of evolutionary algorithms. Artif. Intell. 127, 57–85 (2001)

21. Johnson, D.S., McGeoch, L.A.: The traveling salesman problem: a case study. In: Aarts, E.H.L., Lenstra, J.K. (eds.) Local Search in Combinatorial Optimization, pp. 215–310. Wiley, New York (1997)

22. Johnson, D.S., Papadimitriou, C.H., Yannakakis, M.: How easy is local search? J. Comput. Syst. Sci. 37, 79–100 (1988)

23. Konig, R., Dandekar, T.: Improving genetic algorithms for protein folding simulations by systematic crossover. BioSystems 50, 17–25 (1999)

24. Krasnogor, N.: Studies on the Theory and Design Space of Memetic Algorithms. Ph.D. thesis, University of the West of England, Bristol, UK (2002). Available at http://www.cs.nott.ac.uk/∼nxk/papers.html

25. Krasnogor, N., Smith, J.E.: A tutorial for competent memetic algorithms: model, taxonomy and design issues. IEEE Trans. Evol. Comput. 9(5), 474–488 (2005)

26. Krentel, M.W.: Structure in locally optimal solutions. In: 30th Annual Symposium on Foundations of Computer Science, pp. 216–222. IEEE Computer Society Press, Los Alamitos, CA (1989)

27. Land, M.W.S.: Evolutionary algorithms with local search for combinatorial optimization. Ph.D. thesis, University of California, San Diego (1998)

28. Merz, P.: Memetic algorithms for combinatorial optimization problems: fitness landscapes and effective search strategies. Ph.D. thesis, Parallel Systems Research Group, Department of Electrical Engineering and Computer Science, University of Siegen (2000)

29. Merz, P., Freisleben, B.: Memetic algorithms and the fitness landscape of the graph bi-partitioning problem. In: Eiben, A.E., Bäck, T., Schoenauer, M., Schwefel, H.-P. (eds.) Proceedings of the 5th Conference on Parallel Problem Solving from Nature – PPSN V. Lecture Notes in Computer Science, vol. 1498, pp. 765–774. Springer (1998)

30. Merz, P., Freisleben, B.: Fitness landscapes and memetic algorithm design. In: Corne, D., Glover, F., Dorigo, M. (eds.) New Ideas in Optimization. McGraw-Hill (1999)


31. Merz, P., Freisleben, B.: Fitness landscapes, memetic algorithms, and greedy operators for graph bipartitioning. J. Evol. Comput. 8(1), 61–91 (2000)

32. Mladenovic, N.: A variable neighborhood algorithm: a new metaheuristic for combinatorial optimization. Technical report, abstract of papers presented at Optimization Days, Montreal, Canada (1995)

33. Moon, B.R., Lee, Y.S., Kim, C.Y.: Genetic VLSI circuit partitioning with two-dimensional geographic crossover and zigzag mapping. In: Proceedings of the 1997 ACM Symposium on Applied Computing, pp. 274–278. ACM Press (2001)

34. Moscato, P.: Memetic algorithms' home page, accessed (2005)

35. Moscato, P.A.: On evolution, search, optimization, genetic algorithms and martial arts: towards memetic algorithms. Technical Report Caltech Concurrent Computation Program Report 826, Caltech, Pasadena, CA (1989)

36. Muhlenbein, H., Gorges-Schleuter, M., Kramer, O.: Evolution algorithms in combinatorial optimization. Parallel Comput. 7, 65–85 (1988)

37. Papadimitriou, C.H.: The complexity of the Lin-Kernighan heuristic for the traveling salesman problem. SIAM J. Comput. 21, 450–465 (1992)

38. Paun, G., Rozenberg, G., Salomaa, A.: DNA Computing: New Computing Paradigms. Springer (1998)

39. Rayward-Smith, V.J.: A unified approach to tabu search, simulated annealing and genetic algorithms. In: Applications of Modern Heuristic Methods, pp. 17–38 (1995)

40. Reinelt, G.: TSPLIB (http://www.iwr.uni-heidelberg.de/iwr/comopt/soft/tsplib95/tsplib.html), accessed (November 2005)

41. Salustowicz, R.P., Schmidhuber, J.: Probabilistic incremental program evolution. Evol. Comput. 5(2), 123–141 (1997)

42. Schumacher, C., Vose, M.D., Whitley, L.D.: The no free lunch and problem description length. In: Spector, L., Goodman, E.D., Wu, A., Langdon, W.B., Voigt, H.M., Gen, M., Sen, S., Dorigo, M., Pezeshk, S., Garzon, M.H., Burke, E.K. (eds.) GECCO 2001: Proceedings of the Genetic and Evolutionary Computation Conference. Morgan Kaufmann (2001)

43. Rana, S.: The role of local optima in evolutionary search. Ph.D. thesis, Department of Computer Sciences, Colorado University (2000)

44. Jones, T.: Evolutionary algorithms, fitness landscapes and search. Ph.D. thesis, University of New Mexico, Albuquerque, NM (1995)

45. Unger, R., Moult, J.: Genetic algorithms for protein folding simulations. J. Mol. Biol. 231(1), 75–81 (1993)

46. Vitanyi, P.M.B.: A discipline of evolutionary programming. Theor. Comput. Sci. 241(1–2), 3–23 (2000)

47. Vose, M.D., Wright, A.H.: Form invariance and implicit parallelism. Evol. Comput. 9(3), 355–370 (2001)

48. Wegener, I., Scharnow, J., Tinnefeld, K.: Fitness landscapes based on sorting and shortest paths problems. In: Proceedings of Parallel Problem Solving from Nature VII. Lecture Notes in Computer Science (2002)

49. Whitley, D.: Permutations. In: Bäck, T., Fogel, D.B., Michalewicz, Z. (eds.) Evolutionary Computation 1: Basic Algorithms and Operators, chapter 33.3, pp. 274–284. Institute of Physics Publishing, Bristol (2000)

50. Wolpert, D., Macready, W.: No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, pp. 67–82 (1997)

51. Wolpert, D.H., Macready, W.G.: No free lunch theorems for search. Technical Report SFI-TR-95-02-010, Santa Fe Institute, New Mexico (1995)

52. Yannakakis, M.: Computational complexity. In: Aarts, E., Lenstra, J.K. (eds.) Local Search in Combinatorial Optimization, pp. 19–55. Wiley (1997)