

An overall-regional competitive self-organizing map neural network for the Euclidean traveling salesman problem

Neurocomputing 89 (2012) 1–11

Contents lists available at SciVerse ScienceDirect: Neurocomputing
journal homepage: www.elsevier.com/locate/neucom

Junying Zhang a,*, Xuerong Feng b, Bin Zhou a, Dechang Ren a

a School of Computer Science and Technology, Xidian University, Xi'an 710071, PR China
b School of Computing, Informatics and Decision Systems Engineering, Computer Science and Engineering Department, Ira A. Fulton Schools of Engineering, Arizona State University, AZ 85287, USA

ISSN 0925-2312, doi:10.1016/j.neucom.2011.11.024. This work was supported by the National Natural Science Foundation of China (Grant no. 61070137), the Major Research Plan of the National Natural Science Foundation of China (Grant no. 91130006), and the Key Program of the National Natural Science Foundation of China (Grant no. 60933009).

* Correspondence to: 161 Postbox, Xidian University, No. 2 Taibai Road, Xi'an 710071, PR China. Tel./fax: +86 29 88203692. E-mail addresses: [email protected] (J. Zhang), [email protected] (X. Feng), [email protected] (B. Zhou).

Article history:
Received 24 December 2010
Received in revised form 3 November 2011
Accepted 6 November 2011
Available online 23 February 2012
Communicated by B. Hammer

Keywords: Euclidean traveling salesman problem; Self-organizing map; Overall-regional competitive self-organizing map; Optimal tour

Abstract

The paper proposes a novel overall-regional competitive SOM (ORC-SOM) algorithm for solving symmetric Euclidean traveling salesman problems (TSPs). Two novel rules, the overall and regional competition rules, are introduced in the ORC-SOM. Overall competition is designed to make the winning neuron and its neighborhood neurons less competitive for outlining the tour, and regional competition is designed to make them more competitive for refining the tour, both compared with the standard SOM. An increasing radius with respect to iteration is designed for a smooth transition from more focus on outlining to more focus on refining the tour. Besides the topology preservation property and the convex-hull property, an additional significant property of an optimal tour for a complex TSP, referred to as the infiltration property, is introduced, and the feasibility of the ORC-SOM algorithm with respect to these properties is studied. Computational comparisons with typical SOM-based counterparts on two sets of benchmark TSP instances from TSPLIB demonstrate the superiority of the ORC-SOM in solution quality.

Crown Copyright © 2012 Published by Elsevier B.V. All rights reserved.

1. Introduction

As a typical NP-complete problem, the classical traveling salesman problem (TSP) has attracted extensive research effort on AI methods [1]. This is because of its vast real-life applications, such as printed-circuit-board manufacturing [2,3], data transmission in computer networks [4], power-distribution networks [5], image processing and pattern recognition [6], robot navigation [7], data partitioning [8], hole punching [9], vehicle routing [10], document locating [43], and so forth. Nowadays, the increasing diversity of applications requires that increasingly large-scale TSPs be solved with precision.

The TSP can be stated as a search for a roundtrip of minimal total length visiting each node exactly once. Since there are (N−1)!/2 possible roundtrips for an N-city TSP, it is impractical to apply a brute-force approach to the problem for large N [11]. Many heuristic methods have been developed in order to achieve a near-optimal solution of the problem. Heuristics such as simulated annealing (SA) [12], genetic algorithms (GAs) [13,14], tabu search [15,16], artificial ant systems [17], neural networks [18], etc., have demonstrated various degrees of strength and success on the TSP [19], among which neural networks are considered a promising approach for large-scale TSP problems [20].

Neural-network-based methods developed for solving the TSP can be grouped into two categories: the Hopfield-based neural networks initiated by Hopfield and Tank [18] and the self-organizing map (SOM) based neural networks initiated by Kohonen [21,22]. A Hopfield-based neural network generally uses a second-order energy function that determines its structure and behavior. Moreover, it uses negative feedback to minimize the energy function during the training of the network. In spite of its fast convergence speed, the major drawback of the Hopfield-based network for solving the TSP is getting caught in local minima of the energy function [23,24], which correspond to infeasible solutions of the problem. This confines the approach to being feasible only for small-scale TSPs. In contrast, the SOM has attracted much research interest in exploring and enhancing its capability to solve the TSP due to its intuitive appeal, relative simplicity, and promising performance [25]. It has been claimed that SOM-based neural networks can handle large-scale TSPs with low computational complexity [21]. Though their solution accuracy is not comparable to some state-of-the-art Operations Research heuristics, developing an SOM-based neural network that provides a good TSP solution remains a challenging endeavor [27].

There are generally three main streams of work to enhance the original SOM: (1) introducing variability of the network structure, which enables dynamic deletion and/or insertion of neurons, e.g., the SOM with a dynamic structure [22] and FLEXMAP with a growing structure [28]; (2) amending the competition criterion for updating network parameters, e.g., the guilty net developed by Burk and Damany [29] and the work by Favata and Walker [30]; (3) enhancing the learning rule, e.g., the KNIES [31,32] and the Expanding SOM (ESOM) [33], which is designed to acquire the convex-hull property of the TSP besides the neighborhood preserving property.

Fig. 1. Schematic SOM-like network for the TSP.

The SOM and other approaches have also been combined in pursuit of the optimal solution of a TSP. For example, a technique to reduce the complexity of a large-scale TSP instance, referred to as KNIES_DECOMPOSE, was proposed, which decomposes or partitions the problem into smaller sub-problems and solves it by an all-neural decomposition heuristic based on KNIES and the Euclidean Hamiltonian path problem [32]. Another example is the novel constructive-optimizer neural network (CONN) for the TSP [20], which uses a feedback structure similar to Hopfield-type neural networks and a competitive training algorithm similar to the SOM. The co-adaptive net not only involves unsupervised learning to train neurons, but also allows neurons to co-operate and compete amongst themselves depending on their situation [27]. These approaches focus more on reducing computational complexity.

Both solution quality and computational complexity are the main focuses. Most SOM-based approaches have a computational complexity of O(N^3), and some have lower computational complexity. Among these approaches, the KNIES NNs were claimed to be among the most efficient NNs presented in the literature for solution quality of the TSP [31,32]. Similarly, ESOM [33] and eISOM [34] were introduced as the most accurate NNs for the TSP. Additionally, CONN [20], the Co-adaptive Net [27] and the memetic SOM approach [35] were proposed as NNs with better performance than other NNs in terms of accuracy and/or CPU time [20,27,35].

In this paper, we propose a novel overall-regional competitive SOM (ORC-SOM) for improving the solution precision of TSPs, in which two new rules, the overall competition rule and the regional competition rule, are introduced. In overall competition, the winning neuron and its neighborhood neurons are set to be less competitive, and in regional competition they are set to be more competitive, both compared with the standard SOM [36]: less competition aims at outlining and more competition at refining the present tour. A radius is introduced to determine whether an input city stands close to or far from the present tour, and hence whether regional or overall competition is required; it is designed as an increasing function of iteration so that the focus changes gradually from outlining to refining the tour during the learning process. The feasibility of the solutions obtained with the proposed ORC-SOM algorithm is studied, in that the designed learning rule can acquire the infiltration property of the TSP besides the neighborhood preservation property and the convex-hull property.

We compared the proposed algorithm on solution quality, on a set of 16 benchmark TSP instances from TSPLIB [37], with seven typical SOM-based TSP algorithms of comparable computation and three TSP algorithms of less computation. We also compared the proposed algorithm on solution quality and CPU time with three typical SOM-based algorithms, SETSP [44], ESOM [33] and the co-adaptive net [27], on a set of 20 benchmark TSP instances also selected from TSPLIB. The results indicate the superiority of the ORC-SOM over its SOM-based counterparts in solution quality and CPU time. Furthermore, for most TSP instances the average solution quality over multiple runs of the algorithm is even better than the best solution of most SOM-based counterparts, especially for large-scale TSP instances.

The paper is organized as follows. In Section 2, the overall-regional competitive SOM (ORC-SOM) neural network for the TSP is presented, with the idea, formulation and algorithm given in the subsections. The feasibility analysis of the solution of the TSP by the ORC-SOM is then given in Section 3. In Section 4, experimental results are presented and compared with several existing SOM-based TSP algorithms and non-neural TSP algorithms. Finally, conclusions are drawn in Section 5.

2. Overall-regional competitive SOM neural network

A conceptually intuitive idea for using a standard SOM to solve a 2-D Euclidean TSP with N cities is simply to set the structure of the SOM to be circular [44], as shown in Fig. 1, initialize M neurons positioned at random in the two-dimensional input space (M > N), and train such a circular SOM with the cities of the TSP by a standard SOM learning algorithm. At convergence of the SOM, the arrangement of the cities by the inherent sequence of the neurons with which the cities have established one-to-one relationships is a near-optimal solution of the TSP. However, it has been empirically verified that the standard circular SOM is not effective in solution quality for the TSP [44].

2.1. Overall competition and regional competition: idea

The standard SOM simply learns an input city X by moving neurons toward it with some displacement during the whole learning process. However, in our opinion, the learning process should not be treated uniformly: it should proceed from outlining to refining the tour. In addition, at any moment (iteration), an input city that stands far from the present tour should update the tour less, so that outlining is performed, while one that stands close to the present tour should update the tour more, so that refining is performed, both compared with the standard SOM. For this purpose, we define the λ-region of a city X below.

Definition. The λ-region of an input city X, denoted ε_λ(X), is defined as the region of the input space whose Euclidean distance from the input city X is less than or equal to λ.

Now consider the situation of a fixed λ. When the input city X stands far from its winning neuron i(X), i.e., W_{i(X)} ∉ ε_λ(X), it is even farther away from all the other neurons, since the winning neuron is the closest neuron to the city X, i.e., W_j ∉ ε_λ(X) for j = 1, 2, ..., M, as shown in Fig. 2(a). This implies that the outline of the optimal tour has not been reached, and the present tour must be updated to obtain a better outline tour. Hence, all the neurons should compete more fairly, or equivalently, they should be less competitive. This indicates that they should move toward X with less displacement, compared with the standard SOM. For the case when the input city X stands close to its winning neuron i(X), i.e., W_{i(X)} ∈ ε_λ(X), shown in Fig. 2(b), it is reasonable to believe that the outline of the optimal tour around the input city X has been reached, and the present tour should be updated to refine it. In this sense, neurons within ε_λ(X) should be more competitive, i.e., they should be moved toward the input city X with larger displacement, compared with the standard SOM. In this context, the rule which moves neurons toward X with smaller displacement is called the overall competition rule, and the one which moves them toward X with larger displacement is referred to as the regional competition rule. The former is used for outlining, the latter for refining the tour.

Fig. 2. Demonstrative illustration of the overall-regional competitive learning rule: (a) the situation when overall competition is required; and (b) the situation when regional competition is required.

Fig. 3. DCF adaptation.

For pursuing the optimal tour solution of the TSP, the search process should be a smooth transition from outlining the tour at the beginning of the search to refining it at the end. Hence, in our proposed algorithm, the radius λ is set to be an increasing function of the iteration.

2.2. Overall competition and regional competition: formation

By specifying a displacement coefficient function (DCF)

η(n, d_{X,i(X)}, λ(n)) = exp(−d²_{X,i(X)} / (2λ²(n)) + 1/2)   (1)

the neuron update process with both overall competition and regional competition can be obtained by a simple modification of that of the standard SOM, written as

W_j(n+1) = W_j(n) + η(n, d_{X,i(X)}, λ(n)) η(n) h_{j,i(X)}(n) [X(n) − W_j(n)]   (2)

In Eq. (1), d_{X,i(X)} is the Euclidean distance between the input city X and the location W_{i(X)} of the corresponding winning neuron i(X) in the input space. Shown in Fig. 3 is the DCF with respect to d_{X,i(X)} for a fixed λ. It is seen that the DCF η(n, d_{X,i(X)}, λ(n)) has the property

η(n, d_{X,i(X)}, λ(n)) ≤ 1 for W_{i(X)} ∉ ε_λ(X), and > 1 for W_{i(X)} ∈ ε_λ(X)   (3)

This indeed makes the displacement of a neuron toward the input city smaller than that of the standard circular SOM for the case W_{i(X)} ∉ ε_λ(X), and larger for the case W_{i(X)} ∈ ε_λ(X). Hence, at any iteration n, when the winner lies outside ε_λ(X) the neurons compete under the overall competition rule to outline the tour, and when it lies within ε_λ(X) they compete under the regional competition rule to refine it.
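As a quick numerical check, the DCF of Eq. (1) crosses 1 exactly at d_{X,i(X)} = λ, which is what makes the property in Eq. (3) hold. A minimal sketch (the function name `dcf` is ours):

```python
import math

def dcf(d_win: float, lam: float) -> float:
    """Displacement coefficient function of Eq. (1):
    eta(n, d, lambda) = exp(-d^2 / (2*lambda^2) + 1/2)."""
    return math.exp(-d_win ** 2 / (2.0 * lam ** 2) + 0.5)

lam = 10.0
print(dcf(20.0, lam))  # winner outside the lambda-region: < 1 (overall competition)
print(dcf(10.0, lam))  # on the boundary d = lambda: exactly 1.0
print(dcf(2.0, lam))   # winner inside the lambda-region: > 1 (regional competition)
```

Note that the factor is bounded above by exp(1/2) ≈ 1.65, reached at d = 0, so "more competitive" never means a runaway step size.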

For a smooth transition from outlining to refining the tour for any neuron, we set the radius λ to be an increasing function of the iteration n:

λ(n) = λ∞ (2 / (1 + exp(−αn)) − 1)   (4)

where λ∞ is referred to as the final radius and α is the radius-related constant. We set λ∞ = cd/4 and α = 1/2 in this approach, where cd is the diagonal length of the rectangular frame defined by the smallest and the largest coordinates of the cities in the TSP. Intuitively, such an increasing function λ(n) reflects a smooth transition from overall competition at the beginning of the search, which focuses more on outlining, to regional competition at the end of the search, which focuses more on refining the tour. The relation of the radius λ(n) to the iteration n, shown in Fig. 4, demonstrates such a transition.

The learning rate η(n) and the topological neighborhood function h_{j,i(X)}(n) in Eq. (2) are set the same as those of a standard SOM in this approach.

2.3. ORC-SOM algorithm for the Euclidean TSP

We present the proposed ORC-SOM learning algorithm for solving the 2-D Euclidean TSP of N cities below.

Input: coordinates of N cities in two-dimensional space.
Output: an optimal or near-optimal tour.
Parameters: a circular SOM with two inputs and a suitable number M of output neurons, M > N; the number nmax of loops for termination.

Fig. 4. λ adaptation.

/* Learning process */

Step 1: Initialize M neurons to be statistically uniformly distributed on the rectangular frame defined by the smallest and the largest coordinates of the cities in the TSP, and let cd be the diagonal length of the frame; set the iteration number n = 0.
Step 2: Select a city X from the cities of the TSP at random, and feed it to the ORC-SOM.
Step 3: Find the winning neuron i(X) nearest to the input city X according to the Euclidean metric, i.e.,

i(X) = arg min_j ||X(n) − W_j(n)||, j = 1, 2, ..., M   (5)

Step 4: Train the neurons using the following update formula:

W_j(n+1) = W_j(n) + η(n, d_{X,i(X)}, λ(n)) η(n) h_{j,i(X)}(n) [X(n) − W_j(n)]   (6)

for j = i(X), i(X)±1, i(X)±2, ..., i(X)±σ(n), where η(n) is the learning rate, h_{j,i(X)}(n) is the neighborhood function given by

h_{j,i(X)}(n) = exp(−d²_{j,i(X)} / 2σ²(n))   (7)

and the DCF η(n, d_{X,i(X)}, λ(n)) is given by

η(n, d_{X,i(X)}, λ(n)) = exp(−d²_{X,i(X)} / (2λ²(n)) + 1/2)   (8)

In the above two formulas, d_{j,i(X)} is the distance between neuron i(X) and neuron j on the circular ring, d_{X,i(X)} is the Euclidean distance between X and W_{i(X)} in the input space, σ(n) is the effective width, and λ(n) is the radius function calculated by

λ(n) = (cd/4)(2 / (1 + exp(−n/2)) − 1)   (9)

Step 5: Update the effective width σ(n) and the learning rate η(n) with predetermined decreasing schemes. If the predetermined number of loops nmax has not been executed, go to Step 2 with n = n + 1.
Step 6: Arrange the cities by the inherent sequence of the neurons with which the cities have established one-to-one relationships, and then form a tour of the TSP. If any city is not mapped, go to Step 1 with some larger M; otherwise, output the tour.

In the above algorithm, Step 6 recognizes whether the resultant tour obtained by the ORC-SOM is a feasible solution of the TSP. If not, the number of neurons in the net is increased and the learning process is re-performed. It has been verified by our experiments that simply setting M to 2N or 3N is enough for all the TSP instances experimented with in this paper.
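Steps 1-6 above can be sketched end to end. The sketch below is our reading of the algorithm, not the authors' code: the exponential decay schedules for η(n) and σ(n), and all constants other than λ∞ = cd/4 and α = 1/2, are illustrative assumptions, and the Step 6 feasibility re-run with a larger M is omitted for brevity.

```python
import math
import random

def orc_som_tour(cities, m_factor=2, n_max=5000, seed=0):
    """Sketch of the ORC-SOM learning loop (Steps 1-6); schedules are our choices."""
    rng = random.Random(seed)
    n_cities = len(cities)
    m = m_factor * n_cities                       # M > N neurons on a ring
    xs, ys = zip(*cities)
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    c_d = math.hypot(x1 - x0, y1 - y0)            # diagonal of the bounding frame
    lam_inf = c_d / 4.0
    # Step 1: neurons spread uniformly at random over the bounding frame
    w = [[rng.uniform(x0, x1), rng.uniform(y0, y1)] for _ in range(m)]
    for n in range(n_max):
        t = n / n_max
        eta = 0.8 * (0.01 / 0.8) ** t             # decaying learning rate (assumed)
        sigma = max(1.0, (m / 2.0) ** (1.0 - t))  # decaying effective width (assumed)
        lam = lam_inf * (2.0 / (1.0 + math.exp(-0.5 * n)) - 1.0)  # Eq. (9)
        x = cities[rng.randrange(n_cities)]       # Step 2: random city
        # Step 3: winning neuron, Eq. (5)
        i_win = min(range(m),
                    key=lambda j: (w[j][0] - x[0]) ** 2 + (w[j][1] - x[1]) ** 2)
        d_win = math.hypot(w[i_win][0] - x[0], w[i_win][1] - x[1])
        # Eq. (8); while lam == 0 (n = 0) fall back to a plain SOM step
        dcf = math.exp(-d_win ** 2 / (2.0 * lam ** 2) + 0.5) if lam > 0 else 1.0
        # Step 4: update the winner and its ring neighbours, Eqs. (6)-(7)
        for off in range(-int(sigma), int(sigma) + 1):
            j = (i_win + off) % m                 # ring distance d_{j,i(X)} = |off|
            h = math.exp(-off ** 2 / (2.0 * sigma ** 2))
            w[j][0] += dcf * eta * h * (x[0] - w[j][0])
            w[j][1] += dcf * eta * h * (x[1] - w[j][1])
        # Step 5: the schedules above are functions of n, so nothing extra here
    # Step 6: order the cities by the ring index of their nearest neuron
    nearest = [min(range(m),
                   key=lambda j: (w[j][0] - cx) ** 2 + (w[j][1] - cy) ** 2)
               for cx, cy in cities]
    return sorted(range(n_cities), key=lambda c: nearest[c])
```

For example, `orc_som_tour([(0, 0), (1, 0), (1, 1), (0, 1)], n_max=1000)` returns a visiting order of the four city indices; a production version would also re-run with a larger `m_factor` when two cities map onto the same neuron, as Step 6 prescribes.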

The ORC-SOM degrades to the standard SOM when we set η(n, d_{X,i(X)}, λ(n)) to one, irrespective of d_{X,i(X)}, λ(n) and n.

3. Feasibility analysis

In this section, we study the computational complexity of the ORC-SOM, and then show that the converged solution of the algorithm has not only the neighborhood preservation and convex-hull properties, which most conventional SOM-based algorithms possess, but also the infiltration property, which is significant for a tour being optimal or near-optimal, especially for complex TSPs.

The computational complexity of the ORC-SOM is O(N^3), the same as that of the standard SOM and of the most typical SOM-based algorithms, such as ESOM [33], Budinich's SOM [38], and eISOM [34]. Actually, the only difference between the standard SOM and the ORC-SOM is the computation of Eq. (1), which does not change the computational complexity.

Theorem 1. The computational complexity of the ORC-SOM is O(N^3), and the storage complexity is O(N), where N is the number of cities.

Proof. The dominant steps at each iteration of the ORC-SOM are Steps 2, 3 and 4. First, every city is input, which takes O(N). For each city, Step 3 needs O(M) computations to search for the winning neuron, and Step 4 requires at most O(M) computations to update at most all neurons, where M is the number of neurons, set as M = kN. Hence each iteration takes O(N)(O(kN) + O(kN)) = O(N^2) computations. The number of iterations is O(N) [33], and in each iteration the outputs of all neurons are stored. Hence the overall computational complexity is O(N^3), and the storage complexity is O(N). □

3.1. Neighborhood preservation and convex-hull properties

In fact, the neighborhood preservation property of the SOM [36] is the property that makes SOM-based approaches able to estimate a tour of the TSP. It says that the mapped space can preserve the neighborhood order of the input space to some extent, i.e., input cities close in the input space generally activate neurons which are close in the mapped space [36]. The convex-hull property is one of the properties of the optimal tour of the TSP. It says that, for any optimal tour of a 2-D TSP, the cities located on the convex hull formed by the given cities must be visited in the same order as they appear on the convex hull [33].

The standard SOM has demonstrated satisfactory neighborhood preservation in many SOM applications [33,36,39], and the tour obtained from the converged circular standard SOM follows the convex-hull property, both verified empirically rather than theoretically. Hence, for studying the convergence property of the ORC-SOM, instead of a rigorous convergence analysis, we conduct a one-step trend analysis similar to that in [33] (in fact, a stochastic method based on bootstrapping was proposed to increase the reliability of the induced neighborhood structure [40], and the convergence of an SOM in high-dimensional space has been one of the long-standing open problems in neural network research [41,42]).

Let X be any input city, and let W_j(n), j = 1, 2, ..., M, be the weight vectors of the ORC-SOM at iteration n. Then, for any n ≥ 0, we have

||W_j(n+1) − X|| = ||W_j(n) − X + η(n, d_{X,i(X)}, λ(n)) η(n) h_{j,i(X)}(n) [X − W_j(n)]||
= |1 − η(n, d_{X,i(X)}, λ(n)) η(n) h_{j,i(X)}(n)| ||W_j(n) − X||,   (10)


or, writing the DCF as η̃, the learning rate as η and the neighborhood function as h for brevity,

||W_j(n+1) − X|| = |1 − η̃ηh| ||W_j(n) − X||   (11)

Comparatively, the weight vector of a standard SOM with the same weight vectors W_j(n) and input city X at iteration n is

W′_j(n+1) = W_j(n) + η′h′(X − W_j(n))   (12)

from which we have

||W′_j(n+1) − X|| = |1 − η′h′| ||W_j(n) − X||   (13)

If λ∞ is set large enough compared with d_{X,i(X)}, then as n → ∞, η̃ = exp(−d²_{X,i(X)} / (2λ²∞) + 1/2) approximately approaches a constant, say C. By simply setting h = h′ and η = η′/C, we have

||W_j(n+1) − X|| = ||W′_j(n+1) − X||, as n → ∞   (14)

This implies that the ORC-SOM will converge if the SOM does, and the asymptotic behavior of the ORC-SOM and the SOM will be equivalent, including both the neighborhood preservation property and the convex-hull property. This supports the feasibility of the ORC-SOM.
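The contraction identity in Eqs. (10)-(11) is easy to confirm numerically for a single update step (all values below are arbitrary illustrations of ours):

```python
import math

# Arbitrary illustrative values for one update step
w, x = [3.0, 4.0], [0.0, 0.0]          # W_j(n) and the input city X
dcf, eta, h = 1.3, 0.5, 0.8            # DCF, learning rate, neighbourhood value

# Eq. (2)/(6): move W_j toward X by the factor dcf * eta * h
w_next = [w[k] + dcf * eta * h * (x[k] - w[k]) for k in range(2)]

lhs = math.hypot(w_next[0] - x[0], w_next[1] - x[1])
rhs = abs(1.0 - dcf * eta * h) * math.hypot(w[0] - x[0], w[1] - x[1])
print(lhs, rhs)  # equal: the one-step contraction factor is |1 - dcf*eta*h|
```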

Fig. 5. Demonstration of the infiltration property for the TSP instance eil51: (a) the optimal tour, in which the infiltration has some depth; (b) a tour without enough infiltration; and (c) the tour obtained with the proposed ORC-SOM.

3.2. Infiltration property

Neighborhood preservation and the convex hull are not enough. By observing the optimal tours of many TSP instances, we found that in general the optimal tour of a complex TSP has an additional property, referred to as the infiltration property. To specify it, we introduce the sub-tour city-center and the city-center: the sub-tour city-center is the location center of the cities in a sub-tour, and the city-center is the location center of all cities in the TSP.

The optimal tour of a complex TSP generally follows the infiltration property: the sub-tour city-center of some sub-tour is, to some extent, very close to the city-center of the TSP.
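The property suggests a simple numeric proxy: the distance between the two centers. The function names, and the use of this distance as an "infiltration depth", are our illustration of the definition rather than the paper's:

```python
def centroid(points):
    """Location center of a list of (x, y) points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def infiltration_distance(cities, subtour_indices):
    """Distance between a sub-tour's city-center and the overall city-center;
    smaller values mean the sub-tour infiltrates deeper toward the center."""
    cx, cy = centroid(cities)
    sx, sy = centroid([cities[i] for i in subtour_indices])
    return ((sx - cx) ** 2 + (sy - cy) ** 2) ** 0.5
```

Comparing the same sub-tour (e.g., the sub-tour 26 to 7 of eil51 discussed below) across candidate tours, a smaller value indicates deeper infiltration toward the city-center.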

We take the TSP instance eil51 from TSPLIB [37] as an example to demonstrate the property. Shown in Fig. 5(a)-(c) are tours of this instance. All these tours follow both the neighborhood preservation property and the convex-hull property. However, they are an optimal, a non-optimal and a near-optimal tour, respectively. The most different sub-tours of these tours are the sub-tours 26→7. We now examine their main difference to explore the infiltration property of the optimal tour.

The sub-tour 26→7 shown in Fig. 5(a) is 26→8→22→1→32→11→38→5→37→17→4→18→47→12→46→51→27→6→48→23→7, and the one shown in Fig. 5(b) is 26→8→22→1→27→6→48→23→7. By visualizing the sub-tour city-center in Fig. 5(a) and (b) respectively and comparing it with the city-center, it is easily seen that the sub-tour city-center of the former is closer to the city-center than that of the latter. This indicates that the sub-tour 26→7 in the optimal tour infiltrates more toward the city-center than that of the non-optimal tour. In fact, a deep infiltration in the sub-tour 26→7 is, for this TSP instance, significant for a tour to be optimal or near-optimal. The sub-tour 26→7 shown in Fig. 5(c), i.e., 26→8→22→1→32→11→38→49→5→37→17→4→18→47→12→46→51→27→6→48→23→7, is very similar to that in the optimal tour in Fig. 5(a) (with the inclusion of an additional city indexed 49), and it is this very similarity that makes the tour near-optimal.

Hence, in addition to the neighborhood preservation property and the convex-hull property, the infiltration property plays a key role in leading a tour to be optimal or near-optimal. Next, we study the feasibility of the ORC-SOM with respect to the infiltration property. However, we are unable to verify it theoretically. Instead, we explain the infiltration property of the proposed ORC-SOM geometrically.

Shown in Fig. 6(a) and (b) are the cases of W_{i(X)} ∉ ε_λ(X) and W_{i(X)} ∈ ε_λ(X), respectively. In the case W_{i(X)} ∉ ε_λ(X), shown in Fig. 6(a), overall competition is adopted, and hence neurons move toward the input city X with smaller displacements, leading to a better outline tour of the TSP; in the case W_{i(X)} ∈ ε_λ(X), shown in Fig. 6(b), regional competition is adopted, and hence neurons move toward the input city X with larger displacements. Since smaller or larger displacement is relative to that of a standard SOM, the DCF values in the above two cases are smaller and larger than 1, respectively. Hence, for the input city X, the algorithm searches by overall competition to outline the tour when W_{i(X)} ∉ ε_λ(X), and by regional competition to refine the tour when W_{i(X)} ∈ ε_λ(X). Such regional competition leads to the infiltration property of the tour, which can be seen from Fig. 6(b). Since W_{i(X)} ∈ ε_λ(X), both the winning neuron i(X) and its neighborhood neurons i(X)−1 and i(X)+1 move toward the input city X with DCF values larger than 1. The result might be that the neighborhood neurons i(X)+1 and/or i(X)−1 are moved into the λ-region of the neighboring city Y and/or Z of the input city X, i.e., W_{i(X)+1} ∈ ε_λ(Y) and/or W_{i(X)−1} ∈ ε_λ(Z). This is generally helpful to the follow-up refining of the tour around the cities Y and/or Z: it is likely that the tour infiltrates to form a route Z→X→Y, or Z→...→X→...→Y, with enough infiltration. In fact, shown in Fig. 5(c) is the tour found with our proposed ORC-SOM. Its infiltration situation is very similar to that of the optimal tour shown in Fig. 5(a). This makes the tour near-optimal. We found from our experiments that such an infiltration property holds for most experimental instances. This gives the ORC-SOM a strong opportunity to find superior solutions.

Fig. 6. Adaptive behavior of neurons stimulated with the input city X for (a) overall competition, and (b) regional competition respectively.

4. Experiments and results

Experiments on the ORC-SOM were conducted on two sets of benchmark TSP instances, both selected from the TSPLIB [37]. For the first set of 16 TSP instances, we implemented and ran the ORC-SOM and compared its solution quality with that of seven typical SOM-based algorithms of comparable computation and three algorithms of less computation, where the solution quality of each instance under those algorithms is available from publications. The algorithms with comparable computation are KG (KNIES_TSP_Global), KL (KNIES_TSP_Local), KD (KNIES_DECOMPOSE) [32], SETSP (SOM Efficiently applied to the TSP) [44], Budinich (the SOM developed by Budinich) [38], ESOM (the Expanding SOM) [33], eISOM [34], together with an O(N³)-complex variant of SA (SA2) [33]. The algorithms with less computation are the O(N² log N)-complex co-adaptive net [27], the O(N²)-complex memetic SOM [35] and the O(N³)-complex constructive-optimizer neural network (CONN) [20]. The number of cities in the instances ranges from 51 to 2392, and the optimal tour lengths are available from the TSPLIB. For the second set of 20 TSP instances, with the number of cities ranging from 52 to 2319, we additionally implemented three typical SOM-based algorithms, i.e., the two computationally comparable SETSP and ESOM and the computationally cheaper co-adaptive net [27], for comparison on both solution quality and CPU time. All experiments were conducted on a desktop PC with a 2.40 GHz Pentium 4 processor running Windows XP Professional. The ORC-SOM was also compared on solution quality with several non-neural, possibly more computationally complex algorithms, including the O(N²)-complex 2-Opt and O(N³)-complex 4-Opt heuristics, an accurate O(N³)-complex variant of SA (SA1), a variant of a standard (pure) tabu search (TS) and the O(N³)-complex iterated tabu search (ITS), with their tour quality taken from publications.

We cite Ref. [20] for descriptions of the most typical SOM-based algorithms for the TSP. The basic idea of the KNIES NNs is to disperse the output neurons after SOM learning so that their statistics equal those of some cities. If all cities participate, this leads to the global version, KG; if only the represented cities are involved, it leads to the local version, KL. KD decomposes a large-scale TSP by clustering cities with the learning vector quantization approach, uses KG to find a tour among the cluster centers, and finally glues together the Hamiltonian paths computed by KNIES_HPP_Global [45] for each cluster. Budinich's SOM is an effective implementation of the traditional SOM that maps each city onto a linear order without ambiguity [38]. ESOM incorporates the neighborhood preserving and convex-hull properties of the TSP to generate shorter tours than Budinich's SOM [38]. eISOM integrates the above two TSP properties used in ESOM with a mechanism for dragging excited neurons toward the input cities [34]. The SETSP incorporates initialization and



J. Zhang et al. / Neurocomputing 89 (2012) 1–11 7

parameter adaptation into the standard SOM [44]. The co-adaptive net incorporates co-operation amongst neurons into the SOM approach, and many techniques for saving computation time are proposed [20]. The memetic SOM hybridizes the SOM within an evolutionary algorithm to solve the Euclidean TSP [35]. Finally, CONN uses a feedback structure similar to Hopfield-type neural networks and a competitive training algorithm similar to Kohonen-type self-organizing maps as a fast method for achieving near-optimal deterministic solutions to the TSP [20]. Each algorithm generally provides a different solution on each separate run due to randomness in its initialization, except for CONN, which provides a unique solution.

We evaluated the performance of a TSP algorithm by computing the percent difference (PD)

δ = (l − l_opt) / l_opt × 100    (15)

where l and l_opt are the tour length obtained by the algorithm and the optimal tour length, respectively. Since the optimal tour length is rounded to the nearest integer for all the TSP instances [46], while the distances reported in most references [31–34,38] are not rounded, we do not round the distances either, for the sake of fair comparison with those TSP algorithms.
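Eq. (15) is straightforward to compute; a minimal sketch:

```python
def percent_difference(tour_length, optimal_length):
    """Percent difference (PD) of Eq. (15): how much longer the found
    tour is than the optimal one, in percent."""
    return (tour_length - optimal_length) / optimal_length * 100.0

# e.g. a hypothetical tour of length 430 on eil51 (optimal length 426):
# percent_difference(430, 426) ≈ 0.939
```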

In our experiments, the M neurons were initialized to be statistically uniformly distributed on the rectangular frame defined by the smallest and the largest coordinates of the cities in the TSP. This initialization was used and proved efficient in [44]. The choice of the neuron number M is purely empirical, and M is only 2 or 3 times the number of cities N. The 16 TSP instances and the corresponding settings of M are given in Table 1. As in the standard SOM, the learning rate η(n) and the effective width σ(n) defining the topological neighborhood function h_{j,i(X)}(n) are set to decrease exponentially with respect to the iteration number n:

η(n) = η₀ exp(−n/τ_η)  and  σ(n) = σ₀ exp(−n/τ_σ)    (16)

The related parameters were set to η₀ = 0.3, τ_η = n_max, σ₀ = ⌈M/4⌉ and τ_σ = n_max/log(σ₀), where n_max was set to 1000 in our experiments. We set τ_σ = n_max/log(σ₀) so that the effective width σ(n) decreases to 1 at n = n_max.

Table 1
Parameter setting (the neuron number M) for each TSP instance in our experiments, where N is the number of cities.

Instance  eil51   st70     eil76   rd100   lin105   pr107  pr136   pr152
M         2N      3N       3N      3N      2N       2N     3N      2N

Instance  rat195  kroA200  pcb442  pr1002  pcb1173  d1655  vm1748  pr2392
M         2N      3N       2N      3N      2N       2N     3N      2N

Table 2
Comparing ORC-SOM with its seven computationally comparable and three computationally cheaper counterparts, including SA2, KG, KL, KD, SETSP, Budinich's SOM, ESOM, eISOM, co-adaptive net, memetic SOM and CONN, on the 16 benchmark TSP instances from TSPLIB in terms of percent difference (in the original typesetting, an underlined number is the best PD among the O(N³)-complex algorithms, and a bold-faced italic number is the best among all the algorithms).

Instance  # of    Optimal   Percent difference (δ%)
          cities  solution  SA2     KG      KL      KD      SETSP   Budinich  ESOM    eISOM   Co-adapt  Memetic  CONN    ORC-SOM  ORC-SOM (δ%)
eil51     51      426       2.3     2.86    2.86    3.50    2.22    3.10      2.10    2.56    2.89      2.52     2.6     0.9390   3.0023
st70      70      675       2.3     2.33    1.51    3.67    1.60    1.70      2.09    –       1.72      1.13     3.0     0.8889   2.1104
eil76     76      538       5.5     5.48    4.98    6.49    4.23    5.32      3.89    –       4.35      3.34     5.0     2.7881   4.6264
rd100     100     7910      4.1     2.62    2.09    4.89    2.60    3.16      1.96    –       3.64      1.00     3.6     1.4096   3.4011
lin105    105     14,379    1.9     1.29    1.98    2.18    1.30    1.71      0.25    –       1.08      0.03     0.38    0.0278   0.2420
pr107     107     44,303    1.5     0.42    0.73    10.83   0.41    1.32      1.48    –       4.41      0.14     2.8     0.2212   0.8731
pr136     136     96,772    4.9     5.15    4.53    1.93    4.40    5.20      4.31    –       4.65      0.73     2.3     3.1352   3.5534
pr152     152     73,682    2.6     1.29    0.97    3.24    1.17    2.04      0.89    –       2.06      1.58     0.79    1.1373   2.0846
rat195    195     2323      13.3    11.92   12.24   8.35    11.19   11.48     7.13    –       7.46      5.30     5.6     5.53     7.8338
kroA200   200     28,568    5.6     6.57    5.72    5.66    3.12    6.13      2.91    1.64    3.27      1.07     5.7     2.7479   3.2617
pcb442    442     50,778    9.2     10.45   11.07   8.00    10.16   8.43      7.43    6.11    7.58      3.59     5.8     6.5343   7.83
Average on above 11 TSPs    4.8364  4.58    4.4255  5.3400  3.8545  4.5082    3.1309  3.4367  3.9191    1.8573   3.4155  2.3054   3.5290
pr1002    1002    259,045   6.0     7.60    –       7.08    –       8.75      5.93    4.82    5.27      4.76     7.6     3.4809   4.91
pcb1173   1173    56,892    11.1    –       –       12.73   –       11.38     9.87    –       9.63      8.27     9.2     6.73     8.52
d1655     1655    62,128    13.2    –       –       12.76   –       15.18     11.35   –       10.09     10.26    7.7     7.26     9.24
vm1748    1748    336,556   7.9     –       –       8.63    –       10.19     7.27    –       6.84      6.96     8.4     6.57     7.05
pr2392    2392    378,032   8.2     –       –       11.52   –       10.3      8.5     6.44    7.86      7.34     8.9     6.85     7.16
Average on above 5 TSPs     9.28    7.60    –       10.544  –       11.16     8.584   5.63    7.938     7.518    8.36    6.178    7.376
Average on all the TSPs     6.2250  4.8317  4.4255  6.9663  3.8545  6.5869    4.8350  4.3140  5.175     3.626    4.961   3.516    4.731

Computation complexity: O(N³) for SA2, KG, KL, KD, SETSP, Budinich's SOM, ESOM and eISOM; O(N² log N) for the co-adaptive net; O(N²) for the memetic SOM; O(N³) for CONN and the ORC-SOM.

Table 2 summarizes the results on the 16 TSP instances obtained with the ORC-SOM and with the various TSP algorithms of comparable and of less computation. The first, second and third columns specify the TSP instance, the number of cities in that instance and the optimal tour length obtained from Ref. [46]. The PDs for the various SOM-based TSP algorithms are listed in the following columns, where the last two



columns are the results of the proposed algorithm. The PD values for SA2, KG, KL and KD are quoted from [32], SETSP from [44], Budinich's SOM and ESOM from [33], eISOM from [34], the co-adaptive net from [27], and the memetic SOM from [35]. These are the best values recorded after running the algorithms a number of times; the values for CONN are quoted from [20], each of which is a unique value obtained in a single run due to the deterministic character of CONN. Listed in the column "ORC-SOM (δ%)" is the average PD over multiple independent runs of the ORC-SOM algorithm, i.e., 100 runs for the first ten TSP instances, 50 runs for the 4 TSP instances that follow, and 5 runs for the last two TSP instances; the column "ORC-SOM" gives the best percent difference among the runs. The last row of Table 2 provides the computation complexity of each algorithm, where those for SA2, KD, KL, KG, Budinich's SOM, ESOM, eISOM, CONN and the co-adaptive net are quoted from [27], and that for the memetic SOM is quoted from [35]. One can observe from Table 2 that all the algorithms generate good tours.
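The schedules of Eq. (16) are easy to check numerically. The sketch below uses the stated settings (η₀ = 0.3, τ_η = n_max, σ₀ = ⌈M/4⌉, n_max = 1000) with natural logarithms, and confirms the property motivating the choice of τ_σ:

```python
import math

def eta(n, eta0=0.3, tau_eta=1000):
    """Learning-rate schedule of Eq. (16)."""
    return eta0 * math.exp(-n / tau_eta)

def sigma(n, M, n_max=1000):
    """Effective-width schedule of Eq. (16), with sigma0 = ceil(M/4)
    and tau_sigma = n_max / ln(sigma0) so that sigma(n_max) = 1."""
    sigma0 = math.ceil(M / 4)
    tau_sigma = n_max / math.log(sigma0)
    return sigma0 * math.exp(-n / tau_sigma)

# For a 100-city instance with M = 2N = 200 neurons:
# sigma(0, 200) = 50.0 and sigma(1000, 200) = 1.0 (up to floating point)
```

The choice τ_σ = n_max/log(σ₀) makes σ(n_max) = σ₀·exp(−log σ₀) = 1 exactly, so the neighborhood shrinks to a single neuron by the final iteration.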

We have the following observations from Table 2:

(1) Better solutions were found with the proposed ORC-SOM algorithm on 8 out of the 16 TSP instances: 4 out of the 11 small-scale instances and 4 out of the 5 large-scale instances. In particular, the solution obtained for the instance lin105 is exactly the optimal tour of that instance; its percent difference is nonzero only because of the rounding operation on distances.

(2) The proposed O(N³)-complex ORC-SOM was compared with the computationally comparable O(N³)-complex algorithms. From Table 2, for the first 11 small-scale TSP instances, the ORC-SOM generates tours 2.3054% longer than the optimal tours on average, while the best of the other algorithms, ESOM, generates tours 3.1309% longer on average; hence the ORC-SOM achieves at least a 0.8255% improvement over all the other algorithms. We now compare the average PDs over the last 5 large-scale TSP instances between the proposed ORC-SOM and all its counterparts except KL and SETSP, for which no solutions were reported on these 5 instances. The ORC-SOM generates tours 6.178% longer than the optimal ones on average. This average PD is not superior to that of the eISOM, for which, however, solutions were reported on only two of the 5 instances, one superior and one inferior to those of the ORC-SOM. We therefore compare the average PD of the ORC-SOM with all the other algorithms for which all 5 TSP solutions were reported: the ORC-SOM generates tours 6.178% longer than the optimal ones on average, while this number is 9.28% for SA2, 10.544% for KD, 11.16% for Budinich's SOM and 8.584% for ESOM. Hence, improvements of 3.102%, 4.366%, 4.982% and 2.406% were achieved by the ORC-SOM over these counterparts, respectively. Over all the instances, the ORC-SOM generates on average the best solutions compared with its counterparts.

(3) The proposed ORC-SOM was compared with the algorithms of less computation. From Table 2, for the first 11 small-scale instances, the memetic SOM is superior to the co-adaptive net and the ORC-SOM in generating the best tours on average, indicating that the memetic SOM is not only fast but also superior in obtaining more precise solutions, though only for small-scale instances. For the last 5 large-scale instances, the average PD of 6.178% achieved by the ORC-SOM is the smallest compared with those of the co-adaptive net and the memetic SOM, an improvement of 1.76% and 1.34%, respectively. This indicates that the ORC-SOM is suited to obtaining more precise solutions for large-scale problems, though at a higher computation complexity than the memetic SOM and the co-adaptive net.

(4) As previously explained, CONN provides its solution in a single run, which is one of its advantages. Hence, its solution quality should be compared with the mean PD of the ORC-SOM, i.e., the last column of Table 2. It is seen from Table 2 that the mean PD of the ORC-SOM is less than that of CONN for 6 out of the 11 small-scale instances and for 4 out of the 5 large-scale instances. This result, together with the lower computation of CONN, indicates that CONN is promising in balancing solution quality and computation complexity. Though it is not superior in obtaining better solutions for large-scale instances, it might be useful as a fast and effective initialization approach for the proposed ORC-SOM.

(5) When all the TSP instances are considered, the ORC-SOM generates tours 3.5156% longer than the optimal ones on average. This best performance is an improvement of 2.709%, 1.316%, 0.910%, 3.45%, 0.339%, 3.071%, 1.319%, 0.798%, 1.66%, 0.11% and 1.45% over those of all its counterparts, some of which reported no large-scale results (e.g., KL and SETSP) and some only a few instance results (e.g., KG and eISOM). This indicates that the proposed ORC-SOM is on average superior to its counterparts in solution quality.

(6) Comparing the last column with the 4th through 10th columns of Table 2 (i.e., all the computationally comparable algorithms except eISOM) on the last 5 large-scale instances, the average results over multiple runs of the ORC-SOM are better than the best results obtained by the other algorithms of the same computation complexity. For example, on the instance pcb1173, the proposed ORC-SOM generates tours 8.52% longer than the optimal one on average over multiple runs, which is less than all the best results obtained by SA2, KD, Budinich's SOM and ESOM, namely 11.1%, 12.73%, 11.38% and 9.87%. Hence the ORC-SOM shows some robustness, especially for large-scale instances: it is generally able to achieve, for large-scale TSPs, a better solution than the other algorithms. On the other hand, although the co-adaptive net finds solutions whose quality is inferior to that of the proposed approach, its O(N² log N) computation complexity makes it promising to combine the proposed approach with the co-adaptive net by further introducing co-operation amongst neurons in the proposed ORC-SOM: such a combination would not only train neurons by unsupervised learning as the proposed algorithm does, but also allow neurons to co-operate and compete amongst themselves depending on their situation [27].

We then compared the CPU time of the ORC-SOM with that of the typical SOM-based algorithms SETSP and ESOM, all of O(N³) complexity. Note that ESOM requires that the neuron number be set equal to the city number, that the activity value of each city be computed at the convergence of the network, and that the cities be ordered according to their activity values to obtain the output tour. For a fair comparison, these three settings were made exactly the same across the three algorithms as in the ESOM, and the maximal iteration number was simply set to 200. 20 TSP instances were selected at random from the TSPLIB: Berlin52, Ch130, Ch150, D657, D1291, D1655, D2103, Fl1400, Fl1577, kroA100, kroA150, kroB100, kroB150, kroD100, Lin318, Pr124, Rd100, Rl1304, Rl1323, and U2319, with city numbers ranging from 52 to 2319. For each TSP instance, each algorithm was run 20 times. The upper panel of Fig. 7 shows the average PD (top left subfigure) and the average CPU time (top right subfigure). Due to the non-uniformity of the city-number distribution over the TSP instances: some instances concentrate on very small values (e.g., around 100), while others are sparsely distributed over very large


[Fig. 7 here: four log–log subfigures plotting percent difference (%) and CPU time (s) against the number of cities, for ORC-SOM/SETSP/ESOM (upper panel) and for ORC-SOM/Co-adapt/Co-adapt* (lower panel).]

Fig. 7. Comparison of the ORC-SOM with the computationally comparable SETSP and ESOM (upper panel), and the computationally cheaper co-adaptive net (lower panel) in solution quality and CPU time.


values, we use log–log plots in Fig. 7 for better visualization. From the upper panel of Fig. 7, it is seen that the ORC-SOM outperforms SETSP and ESOM in average PD for most instances; the approximately linear relation between logarithmic CPU time and logarithmic city number, shown in the top right log–log subfigure, indicates that the three algorithms are of similar computation complexity. In fact, by curve fitting, the CPU time is estimated to be quadratic with respect to the city number in the original scales: it is approximately proportional to N²/10000. This is consistent with Theorem 1, since in this case the iteration number is fixed rather than the usual O(N). Additionally, SETSP is generally better than ESOM in both solution precision (top left subfigure) and CPU time (top right subfigure) for most instances, with the exception of one instance for which its CPU time is much larger than those of the other algorithms.
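The quadratic-growth estimate can be recovered from such measurements by fitting a straight line in log–log space, where the slope estimates the complexity exponent. The sketch below uses synthetic timings generated from the fitted N²/10000 relation (not the measured data):

```python
import numpy as np

# Hypothetical (N, seconds) pairs following t = N^2 / 10000
N = np.array([52, 130, 318, 657, 1291, 2319], dtype=float)
t = N ** 2 / 10000

# Fit log t = a * log N + b; the slope a estimates the exponent
slope, intercept = np.polyfit(np.log(N), np.log(t), 1)
print(round(slope, 2))  # prints 2.0 for exactly quadratic growth
```

With real timings the slope will only approximate the exponent, but a value near 2 supports the quadratic estimate quoted above.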

We also compared the ORC-SOM on these TSP instances with the co-adaptive net, whose computation complexity of O(N² log N) is less than that of the ORC-SOM. For a fair comparison, we set the number of neurons, the strategy for obtaining the output tour (ordering cities according to their activity values), and the maximum iteration number exactly as above, with all the other network parameters left at their default values. We refer to the co-adaptive net with these settings as 'co-adapt'. Its average result over 20 runs is shown in the lower panel of Fig. 7; the lower left subfigure shows that its average solution quality is worse than that of the ORC-SOM. This encouraged us to change the strategy for obtaining the output tour to that of the co-adaptive net in Ref. [27] while keeping the other settings unchanged, though the comparison is then somewhat unfair. We refer to the corresponding co-adaptive net as 'co-adapt*'. Its average result over 20 runs is also given in the lower panel of Fig. 7. Comparing the solution quality of the ORC-SOM, 'co-adapt' and 'co-adapt*' in the lower left subfigure, the ORC-SOM is superior to both. The lower right subfigure indicates that the CPU time of the ORC-SOM exceeds those of 'co-adapt' and 'co-adapt*', verifying that the complexity of the co-adaptive net is lower than that of the ORC-SOM. This indicates that the ORC-SOM is suited to more precise solutions at the cost of more CPU time compared with the co-adaptive net.

The PDs shown in the left panels of Fig. 7 are not very small, and some instances yield very poor solution accuracy, e.g., PDs of more than 100%. This is attributable to the limited iteration number and to the possibility that an algorithm does not converge, or converges to a local minimum, within the limited iterations. From Fig. 7, this appears not to be the case for the ORC-SOM, but to be the case for ESOM and 'co-adapt', especially for large-scale TSP instances.

We also compared the proposed O(N³)-complex ORC-SOM with several typical non-neural algorithms, including the O(N²)-complex 2-Opt and O(N³)-complex 4-Opt heuristics, an accurate O(N³)-complex variant of SA (SA1), a variant of a standard (pure) tabu search (TS) and the O(N³)-complex iterated tabu search (ITS). The results are given in Table 3, with the results of 2-Opt, 4-Opt, SA1, TS and ITS quoted from [26]. Due to their high computation complexity, no results are available for them on the 5 large-scale TSP instances. The average result over the first 11 small-scale TSPs in Table 3 shows that the average tour quality of the ORC-SOM is only a little inferior to that of 2-Opt, and inferior to those of 4-Opt, SA1, TS and ITS. However, the ORC-SOM is superior to some of the other algorithms on some TSP instances: on lin105 and pr107 the ORC-SOM is superior to 2-Opt, 4-Opt and TS, and on kroA200 it is superior to 2-Opt and TS. This indicates that the proposed


Table 3
Comparing ORC-SOM with the typical non-neural algorithms 2-Opt heuristic, 4-Opt heuristic, SA1, TS and ITS on the 16 benchmark TSP instances from TSPLIB in terms of average percent difference (in the original typesetting, best results are indicated by bold-faced italic text).

Instance                  2-Opt  4-Opt  SA1    TS     ITS    ORC-SOM
eil51                     2.28   1.83   0.02   0.09   0      3.0023
st70                      0.76   1.84   0.02   1.04   0      2.1104
eil76                     3.90   2.36   0      0      0      4.6264
rd100                     –      –      –      –      –      3.4011
lin105                    1.23   1.97   0.12   3.13   0      0.2420
pr107                     0.90   1.14   0      1.17   0      0.8731
pr136                     2.87   2.58   0.43   0.95   0      3.5534
pr152                     0.84   0.77   0.22   1.84   0      2.0846
rat195                    7.47   3.44   0.2    0.30   0      7.8338
kroA200                   4.18   2.88   0.41   3.68   0      3.2617
pcb442                    7.07   –      0.55   1.58   0.37   7.83
Average on above 11 TSPs  3.15   2.09   0.197  1.378  0.037  3.5290
pr1002                    –      –      –      –      –      4.91
pcb1173                   –      –      –      –      –      8.52
d1655                     –      –      –      –      –      9.24
vm1748                    –      –      –      –      –      7.05
pr2392                    –      –      –      –      –      7.16
Average on above 5 TSPs   –      –      –      –      –      7.376
Average on all the TSPs   3.15   2.09   0.197  –      0.037  4.7312
Computation complexity    O(N²)  O(N³)  O(N³)  –      O(N³)  O(N³)


ORC-SOM is promising in that it can sometimes achieve a better solution with less computation complexity.

5. Conclusions

In this paper, we propose a novel overall-regional competitive SOM (ORC-SOM) for solving the Euclidean TSP. In the ORC-SOM, two competition rules, overall competition and regional competition, are introduced into the circular standard SOM competitive learning algorithm: overall competition is employed for outlining the tour, while regional competition refines it toward the optimal or near-optimal solution of the TSP. A radius is introduced to determine whether an input city should be used for outlining or for refining the present tour; it varies gradually from zero to some positive value during learning, so that the search focuses more on overall competition for obtaining the outline at the beginning, and more on regional competition for the refinement of the tour later, such that optimal or near-optimal solutions can be discovered. It is noted that the optimal tour of the TSP has the infiltration property which, besides the neighborhood preservation and convex-hull properties, is significant for a tour to be optimal for complex TSPs. The feasibility of the solution obtained by the proposed ORC-SOM algorithm is studied by a theoretical one-step trend analysis of the asymptotic behavior of the ORC-SOM with respect to the neighborhood preservation and convex-hull properties, and by a geometrical analysis of the infiltration property of the tour obtained with the ORC-SOM algorithm. The algorithm is shown to have a computational complexity of O(N³), the same as that of most conventional SOM-based algorithms.

A comprehensive series of experiments on 16 benchmark TSPs selected from TSPLIB has been conducted to show the power of the ORC-SOM. Though the proposed method is intuitive and simple to implement, the experimental results demonstrate that, for most TSP instances, it outperforms several typical SOM-based algorithms, such as KG, KL, KD, SETSP, Budinich's SOM and ESOM, as well as the computationally comparable simulated annealing (SA2), not only in the total tour length of the TSP, but also in the robustness of the solution quality over multiple runs of the algorithm, especially for large-scale TSP instances. Further experimental results on another 20 benchmark TSPs selected from TSPLIB indicate that the CPU time used by the ORC-SOM is comparable to that of SETSP and less than that of ESOM, while higher solution quality is reached for most of the instances, and that the ORC-SOM is suited to more precise solutions at the cost of more CPU time compared with the co-adaptive net.

The ORC-SOM can be considered a simple extension of the standard SOM with an overall-regional competitive learning rule embedded. Its performance can be improved by a good choice of the radius-related parameters, i.e., the final radius λ₁ and the constant a; knowledge of the distribution of the cities in the TSP may help make better choices of these parameters. Furthermore, incorporating the proposed ORC-SOM into KNIES_DECOMPOSE (KD) is promising for discovering the optimal tour among represented cluster centers for larger-scale TSPs, as is using the fast CONN for initializing the neurons of the ORC-SOM. Incorporating co-operation among neurons, as the co-adaptive net does, is also promising for large-scale TSPs.

Acknowledgments

The authors are very grateful to the anonymous referees for their careful reading of this paper and their valuable and constructive comments, which helped improve the quality and readability of the paper. We would also like to acknowledge the generous support of the Chinese National Science Foundation under Grant nos. 61070137, 91130006 and 60933009. Additionally, we thank Langui Tu for his contribution and help in coding and performing the experiments.

References

[1] E.L. Lawler, J.K. Lenstra, IV reprint, in: A.G. Rinnoy Kan, D.B. Shmoys (Eds.),The Traveling Salesman Problem—A Guided Tour of Combinatorial Optimiza-tion, John Wiley & Sons, New York, 1990. (p. x474).

[2] K. Fujimura, K. Obu-Cann, H. Tokutaka, Optimization of surface componentmounting on the printed circuit board using SOM–TSP method, in: Proceed-ings of the 6th ICONIP, vol. 1, 1999, pp. 131–136.

[3] K. Fujimura, S. Fujiwaki, O.-C. Kwaw, H. Tokutaka, Optimization of electronicchip-mounting machine using SOM–TSP method with 5 dimensional data,Proc. ICII 4 (2001) 26–31.

[4] M.K. Mehmet Ali, F. Kamoun, Neural networks for shortest tour computationand routing in computer networks, IEEE Trans. Neural Networks 4 (5) (1993)941–953.

[5] T. Onoyama, T. Maekawa, S. Kubota, Y. Taniguchi, S. Tsuruta, Intelligentevolutional algorithm for distribution network optimization, Proc. Int. Conf.Control Appl. 2 (2002) 802–807.

[6] D. Banaszak, G.A. Dale, A.N. Watkins, J.D. Jordan, An optical technique fordetecting fatigue cracks in aerospace structures, in: Proceedings of the 18thICIASF, 1999, pp. 27/1–27/7.

[7] D. Barrel, J.-P. Perrin, E. Dombre, A. Liengeois, An evolutionary simulatedannealing algorithm for optimizing robotic task ordering, Proc. IEEE ISATP(1999) 157–162.

[8] C.-H. Cheng, W.-K. Lee, K.-F. Wong, A genetic algorithm-based clusteringapproach for database partitioning, IEEE Trans. Syst. Man Cybern. Part C,Appl. Rev. 32 (3) (2002) 215–230.

[9] N. Ascheuer, M. Junger, G. Reinelt, A branch and cut algorithm for theasymmetric traveling salesman problem with precedence constraints, Com-put. Optim. Appl. 17 (1) (2000) 61–84.

[10] G. Laporte, The vehicle routing problem: an overview of exact and approx-imate algorithms, Eur. J. Oper. Res. 59 (1992) 345–358.

[11] Wang Wei, Artificial Neural Network: Theory and Application, BeijingUniversity of Aeronautic and Astronautic Science and Technology Press,Beijing, 1995.

[12] S.G. Kirkpatrick Jr., C.D. Gelatt, M.P. Vecchi, Optimization by simulatedannealing, Science 220 (1983) 671–680.

[13] D.E. Goldberg, Genetic algorithms in search, optimization and machinelearning, Reading, Addisom-Wesley, MA, 1989.

[14] Xue-song Yan, Han-min Liu, Yan, et al., A fast evolutionary algorithm fortraveling salesman problem, in: Proceedings of the Third InternationalConference on Natural Computation, vol. 4, 2007, pp. 85–90.

Page 11: An overall-regional competitive self-organizing map neural network for the Euclidean traveling salesman problem

J. Zhang et al. / Neurocomputing 89 (2012) 1–11 11

[15] J. Knox, Tabu search performance on the symmetric traveling salesmanproblem, Comput. Oper. Res. 21 (1994) 867–876.

[16] Ning Yang, Ping Li, Baisha Mei, An angle-based crossover tabu search for thetraveling salesman problem, in: Proceedings of the Third InternationalConference on Natural Computation, vol. 4, 1994, pp. 512–516.

[17] M. Dorigo, L.M. Gambardella, Ant colony system: a cooperative learningapproach to the traveling salesman problem, IEEE Trans. Evol. Comput. 1 (1)(1997) 53–66.

[18] J.J. Hopfield, D.W. Tank, Neural computation of decisions in optimizationproblems, Biolog. Cybern. 52 (1985) 141–152.

[19] D.S. Johnson, L.A. McGeoch, The traveling salesman problem: a case study, in:Emile Aarts Jan Karel Lenstra (Ed.), Local Search in Combinatorial Optimiza-tion, John Wiley and Sons, New York, 1997, pp. 215–310.

[20] M. Saadatmand-Tarzjan, M. Khademi, M.-R. Akbarzadeh-T., H.A.A. Moghaddam,Novel constructive-optimizer neural network for the traveling salesman pro-blem, IEEE Trans. Syst. Man Cybern. Part B 37 (4) (2007) 754–770.

[21] T. Kohonen, Self-Organizing Maps, Springer-Verlag, New York, 1997.[22] B. Ang�eniol, G.D.L.C. Vaubois, J.Y.L. Texier, Self-organizing feature maps and

the traveling salesman problem, Neural Networks 4 (1) (1988) 289–293.[23] S.P. Coy, B.L. Golden, G.C. Runger, E.A. Wasil, See the forest before the trees:

fine-tuned learning and its application to the traveling salesman problem,IEEE Trans. Syst. Man Cybern. Part A, Syst. Hum. 28 (4) (1998) 454–464.

[24] K.A. Smith, An argument for abandoning the traveling salesman problem as aneural network benchmark, IEEE Trans. Neural Networks 7 (6) (1996)1542–1544.

[25] D.S. Johnson, L.A. McGeoch, Experimental analysis and heuristics for theSTSP, in: G. Gutin, Punmen (Eds.), Traveling Salesman Problem and itsVariations, Kluwer AcademicPublishers, Holland, 2002, pp. 369–443.

[26] Alfonsas Misevicius, Using iterated tabu search for the traveling salesmanproblem, Inf. Technol. Control 3 (32) (2004) 29–40.

[27] E.M. Cochrane, J.E. Beasley, The co-adaptive neural network approach to theEuclidean travelling salesman problem, Neural Networks 16 (2003)1499–1525.

[28] B. Fritzke, P. Wilke, FLEXMAP—a neural network with linear time and spacecomplexity for the traveling salesman problem, Int. Jt. Conf. Neural Networks(1991). 929-923.

[29] L.I. Burk, P. Damany, The guilty net for the traveling salesman problem,Comput. Oper. Res. 19 (3-4) (1992) 255–265.

[30] F. Favata, R. Walker, A study of the application of Kohonen-type neuralnetworks to the traveling salesman problem, Biol. Cybern. 64 (1991)463–468.

[31] N. Aras, B.J. Oommen, _I.K. Altinel, Kohonen network incorporating explicitstatistics and its application to the traveling salesman problem, NeuralNetworks 12 (9) (1999) 1273–1284.

[32] N. Aras, _I.K. Altinel, B.J. Oommen, Kohonen-like decomposition method forthe Euclidean traveling salesman problem—KNIES_DECOMPOSE, IEEE Trans.Neural Networks 14 (1) (2003) 869–890.

[33] K.S. Leung, H.D. Jin, Z.B. Xu, An expanding self-organizing neural network forthe traveling salesman problem, Neurocomputing 62 (2004) 267–292.

[34] Kwong-Sak Hui-Dong Jin, Man-Leung Leung, Zong-Ben Xu. Wong, An efficientself-organizing map designed by genetic algorithms for the traveling sales-man problem, IEEE Trans. Syst. Man Cybern. Part B 33 (6) (2003) 877–888.

[35] J.C. Créput, A. Koukam, A memetic neural network for the Euclidean traveling salesman problem, Neurocomputing 72 (4–6) (2009) 1250–1264.

[36] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd edition, Prentice Hall, London, 1999 (pp. 444–460).

[37] G. Reinelt, TSPLIB—a traveling salesman problem library, ORSA J. Comput. 3 (4) (1991) 376–384.

[38] M. Budinich, A self-organizing neural network for the traveling salesman problem that is competitive with simulated annealing, Neural Comput. 8 (1996) 416–424.

[39] S. Chao, H. Houkuan, A new data visualization algorithm based on SOM, J. Comput. Res. Dev. 43 (3) (2006) 429–435.

[40] P. Rousset, C. Guinot, B. Maillet, Understanding and reducing variability of SOM neighborhood structure, Neural Networks 19 (2006) 838–846.

[41] S. Lin, J. Si, Weight-value convergence of the SOM algorithm for discrete input, Neural Comput. 10 (4) (1998) 807–814.

[42] A.A. Sadeghi, Self-organization property of Kohonen’s map with general type of stimuli distribution, Neural Networks 11 (9) (1998) 1637–1643.

[43] T. Kohonen, S. Kaski, K. Lagus, J. Salojärvi, V. Paatero, et al., Organization of a massive document collection, IEEE Trans. Neural Networks 11 (3) (2000) 574–585.

[44] F.C. Vieira, A.D.D. Neto, An efficient approach to the traveling salesman problem using self-organizing maps, Int. J. Neural Syst. 13 (2) (2003) 59–66.

[45] İ.K. Altınel, N. Aras, B.J. Oommen, Fast, efficient and accurate solutions to the Hamiltonian path problem using neural approaches, Comput. Oper. Res. 27 (5) (2000) 461–494.

[46] G. Reinelt, TSPLIB95 [Internet]. Available from: <http://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/>.

Junying Zhang received her Ph.D. degree in Signal and Information Processing from Xidian University, Xi’an, China, in 1998. From 2001 to 2002, she was a visiting scholar at the Department of Electrical Engineering and Computer Science, The Catholic University of America, Washington, DC, USA, and in 2007, she was a visiting professor at the Department of Electrical Engineering and Computer Science, Virginia Polytechnic Institute and State University, USA. She is currently a professor in the School of Computer Science and Technology, Xidian University, Xi’an, China. Her research interests focus on intelligent information processing, including machine learning and its application to cancer-related bioinformatics, causative learning and pattern discovery.

Xuerong Feng received her M.S. and Ph.D. degrees from the Department of Computer Science at the University of Texas at Dallas, USA, in 2001 and 2005, respectively. She received her B.S. degree from Xidian University, Xi’an, People’s Republic of China, in 1991. She is now an assistant professor in the Computer Science and Engineering Department at Arizona State University, USA. Her research interests include computer algorithm design and analysis, algorithm optimization and bioinformatics.

Bin Zhou received his Bachelor’s and Master’s degrees, both in Computer Applications, from the School of Computer Science and Technology, Xidian University, PR China, in 2005 and 2007, respectively. His research interests now focus on artificial neural networks for combinatorial optimization problems.

Dechang Ren received his Master’s degree in Computer Applications from the School of Computer Science and Technology, Xidian University, PR China, in 2011. His research interests now focus on artificial neural networks for combinatorial optimization problems.