A new self-organizing maps strategy for solving the traveling salesman problem


Yanping Bai a,*, Wendong Zhang a, Zhen Jin b

a Key Lab of Instrument Science and Dynamic Measurement of Ministry of Education, North University of China, No. 3 Xueyuan Road, TaiYuan, ShanXi 030051, China
b Department of Applied Mathematics, North University of China, No. 3 Xueyuan Road, TaiYuan, ShanXi 030051, China

Accepted 15 August 2005

Abstract

This paper presents an approach to the well-known traveling salesman problem (TSP) using self-organizing maps (SOM). There are many types of SOM algorithms for solving the TSP in the literature; the purpose of this paper is to incorporate an efficient initialization method and to define a parameters adaptation law in order to achieve better results and a faster convergence. Aspects of parameters adaptation, the selection of the number of nodes of neurons, the index of the winner neurons and the effect of the initial ordering of the cities, as well as the initial synaptic weights of the modified SOM algorithm, are discussed. The complexity of the modified SOM algorithm is analyzed. The simulated results show an average deviation of 2.32% from the optimal tour length for a set of 12 TSP instances.
© 2005 Published by Elsevier Ltd.

Chaos, Solitons and Fractals 28 (2006) 1082–1089
doi:10.1016/j.chaos.2005.08.114
www.elsevier.com/locate/chaos
* Corresponding author. E-mail addresses: baiyp@nuc.edu.cn (Y. Bai), wdzhang@nuc.edu.cn (W. Zhang).

1. Introduction

The traveling salesman problem (TSP) is a typical combinatorial optimization problem. It can be understood as a search for the shortest closed tour that visits each city once and only once. The decision-problem form of the TSP is NP-complete [1], hence the great interest in efficient heuristics to solve it. Many recent heuristics utilize the paradigm of neural computation or related notions [2]. The first approach to the TSP via neural networks was the work of Hopfield and Tank in 1985 [3], which was based on the minimization of an energy function whose local minima should correspond to good solutions of a TSP instance. However, this approach does not assure feasibility; that is, not all minima of the energy function represent feasible solutions to the TSP.

The SOM algorithm, originally introduced by Kohonen [4], is an unsupervised learning algorithm that establishes a topological relationship among the input data. This is the main characteristic explored by this work and by many others found in the literature that approach the TSP via modifications of the SOM algorithm. There are many types of SOM algorithms for solving the TSP [5–10]; the purpose of this paper is to incorporate an efficient initialization method and to define a parameters adaptation law in order to achieve better results and a faster convergence.

This paper is organized as follows. The SOM algorithm is briefly described in Section 2. Parameters adaptation of the learning rate and the neighborhood function variance is presented in Section 3. The initialization methods are discussed in Section 4. Computational experiments and comparisons with other methods are presented in Section 5. The algorithm complexity is discussed in Section 6. Conclusions are presented in Section 7.

    2. Self-organizing maps and algorithm

There are many different types of self-organizing maps; however, they all share a common characteristic: the ability to assess the input patterns presented to the network, organize themselves to learn, on their own, based on similarities among the collective set of inputs, and categorize the inputs into groups of similar patterns. In general, self-organizing learning (unsupervised learning) involves the frequent modification of the network's synaptic weights in response to a set of input patterns. The weight modifications are carried out in accordance with a set of learning rules. After repeated applications of the patterns to the network, a configuration of some significance emerges.

The Kohonen self-organizing map belongs to a special class of neural networks denominated competitive neural networks, in which each neuron competes with the others to become activated. The result of this competition is the activation of only one output neuron at a given moment. The purpose of the Kohonen self-organizing map is to capture the topology and probability distribution of the input data [11]. This network generally involves an architecture consisting of a uni- or bi-dimensional array of neurons. The original uni-dimensional SOM topology is similar to a bar and is presented in Fig. 1. A competitive training algorithm is used to train the neural network. In this training mode, not only the winning neuron is allowed to learn; the neurons within a predefined radius of the winning neuron are also allowed to learn, with a learning rate that decreases as the cardinal distance from the winning neuron increases. During the training procedure, the synaptic weights of the neurons are gradually changed in order to preserve the topological information of the input data as it is introduced to the network.
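As a minimal sketch of this competitive rule (the formal TSP-specific version appears in the equations below; the function name and array shapes are our illustration, not the paper's notation), selecting the winner amounts to a nearest-neighbour search over the synaptic weight vectors:

    import numpy as np

    def winning_neuron(weights, x):
        """Index of the neuron whose weight vector is closest to input x.

        weights : (m, d) synaptic weight vectors, one row per neuron
        x       : (d,) input pattern
        """
        return int(np.argmin(np.linalg.norm(weights - x, axis=1)))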

To apply this approach to the TSP, a two-layer network is used, consisting of a two-dimensional input unit and m output units. The evolution of the network may be imagined geometrically as the stretching of a ring toward the coordinates of the cities. Fig. 2 represents the proposed ring structure. The input data are the coordinates of the cities, and the weights of the nodes are the coordinates of the points on the ring. The input data (a set of n cities) are presented to the network in a random order, and a competition based on Euclidean distance is held among the nodes.

Fig. 1. Uni-dimensional SOM structure.
Fig. 2. Toroidal SOM structure.

The winner node is the node J with the minimum distance to the presenting city, $J = \arg\min_j \{\|x_i - y_j\|_2\}$, where $x_i$ is the coordinate of city i, $y_j$ is the coordinate of node j and $\|\cdot\|_2$ is the Euclidean distance.

Furthermore, in each iteration the winner node and its neighbors move toward the presenting city i using the neighborhood function

$$f(\sigma, d) = \mathrm{e}^{-d^{2}/\sigma^{2}}, \qquad (1)$$

according to the following update rule:

$$y_j^{\mathrm{new}} = y_j^{\mathrm{old}} + \alpha f(\sigma, d)\,(x_i - y_j^{\mathrm{old}}), \qquad (2)$$

where $\alpha$ and $\sigma$ are the learning rate and the neighborhood function variance, respectively, and $d = \min\{|j - J|,\ m - |j - J|\}$ is the cardinal distance measured along the ring between nodes j and J, where $|\cdot|$ denotes absolute value and m is the number of neurons. The neighborhood function determines the influence of a city on the neighbor nodes of the winner. We note that the update of the neighbor nodes in response to an input creates a force that keeps nodes close to each other and creates a mechanism for shortening the constructed tour.

3. Parameters adaptation

The SOM algorithm has two adaptive parameters, the learning rate $\alpha$ and the neighborhood function variance $\sigma$. Kohonen [4] proposed an exponential evolution of these parameters in order to achieve a better convergence of the SOM algorithm, of the form:

$$\alpha_k = \alpha_0 \exp(-k/s_1), \qquad (3)$$

$$\sigma_k = \sigma_0 \exp(-k/s_2), \qquad (4)$$

where $k = 0, 1, 2, \ldots$ is the number of iterations and $s_1$ and $s_2$ are the exponential time constants. However, Eqs. (3) and (4) are heuristics and other methods can be adopted. It must be remembered that the parameters adaptation is crucial for the convergence of the algorithm [12].

During experiments considering the SOM applied to the TSP, new parameter adaptation heuristics were identified, which have led to a faster convergence. The adaptation laws proposed for these parameters are as follows:

$$\alpha_k = \frac{1}{\sqrt[4]{k}}, \qquad (5)$$

$$\sigma_k = \sigma_{k-1}(1 - 0.01k), \quad \sigma_0 = 10. \qquad (6)$$

Eq. (5) determines the evolution of the learning rate. There is no need to define an initial value for this parameter since it depends only on the number of iterations. Eq. (6) determines the evolution of the neighborhood function variance. This parameter requires an appropriate initialization, where $\sigma_0 = 10$ has been adopted.

4. Algorithm initialization

The initialization method is very important for the convergence of the algorithm and has a great influence on the processing time needed to achieve a reasonable topological map configuration for TSP instances.

4.1. Selecting the number of nodes of neurons and neighbor length

Since selecting a number of nodes equal to the number of cities makes the process of node separation more delicate and makes the algorithm sensitive to the initial configuration, the authors recommend selecting the number of nodes as twice the number of cities (m = 2n).
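To make the update concrete, the following is a minimal NumPy sketch of one complete iteration (all n cities presented once) combining Eqs. (1), (2), (5) and (6) with the m = 2n recommendation above. The function name `som_tsp_step`, the iteration cap and the small positive floor on $\sigma$ are our illustrative choices, not the authors': Eq. (5) is undefined at k = 0, so counting starts at k = 1, and Eq. (6) drives $\sigma$ to zero at k = 100, so the loop stops before that.

    import numpy as np

    def som_tsp_step(cities, nodes, k, sigma, rng):
        """One complete iteration: every city presented once, in random order.

        cities : (n, 2) city coordinates     nodes : (m, 2) ring weights
        k      : iteration counter (k >= 1)  sigma : current variance
        """
        m = len(nodes)
        alpha = 1.0 / k ** 0.25                        # Eq. (5): alpha_k = 1/k^(1/4)
        for i in rng.permutation(len(cities)):         # random presentation order
            x = cities[i]
            # Winner J: the node with minimum Euclidean distance to the city
            J = np.argmin(np.linalg.norm(nodes - x, axis=1))
            # Cardinal distance along the ring from every node j to J
            j = np.arange(m)
            d = np.minimum(np.abs(j - J), m - np.abs(j - J))
            f = np.exp(-d ** 2 / sigma ** 2)           # Eq. (1)
            nodes += alpha * f[:, None] * (x - nodes)  # Eq. (2)
        # Eq. (6), with a small floor so sigma stays positive (our guard)
        sigma = max(sigma * (1.0 - 0.01 * k), 1e-3)
        return nodes, sigma

    rng = np.random.default_rng(0)
    cities = rng.random((50, 2))                       # n = 50 random cities
    nodes = rng.random((2 * len(cities), 2))           # m = 2n, per Section 4.1
    sigma = 10.0                                       # sigma_0 = 10
    for k in range(1, 100):                            # Eq. (6) hits zero at k = 100
        nodes, sigma = som_tsp_step(cities, nodes, k, sigma, rng)

    # Read off the tour: order the cities by the ring index of their winner
    winners = [int(np.argmin(np.linalg.norm(nodes - c, axis=1))) for c in cities]
    tour = np.argsort(winners)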

4.2. The effect of the initial ordering of the cities

The order in which the cities are presented is redefined during each complete iteration. Starting the calculation with different sets of initial order will lead to different solutions. The authors recommend randomly redefining the order of the cities after each complete cycle of iterations to increase the robustness of the algorithm and reduce its dependence on the initial configuration and the order of the input data.
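As an illustration of that dependence (reusing the `som_tsp_step` sketch above, which already draws a fresh permutation on every cycle, exactly the reshuffling recommended here; the `tour_length` helper is ours), restarting the training with different random seeds, and hence different initial orders, generally yields tours of different length:

    def tour_length(cities, tour):
        """Length of the closed tour visiting the cities in the given order."""
        pts = cities[tour]
        return float(np.sum(np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1)))

    for seed in (0, 1, 2):                 # different seeds => different orders
        rng = np.random.default_rng(seed)
        nodes, sigma = rng.random((2 * len(cities), 2)), 10.0
        for k in range(1, 100):
            nodes, sigma = som_tsp_step(cities, nodes, k, sigma, rng)
        winners = [int(np.argmin(np.linalg.norm(nodes - c, axis=1))) for c in cities]
        print(seed, tour_length(cities, np.argsort(winners)))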

    4.3. The initialization method of synaptic weights of neurons

Our choice of the initial curve is virtually unlimited. The question is, therefore, what kind of curve is best to use for a TSP with a given number of cities, based on a well-defined performance measure such as the minimum execution time or the minimum tour length. There is no analytical answer to this question, but an attempt is made here to investigate the effect of starting the algorithm with different initial curves, using numerical experiments. Certainly, one cannot expect to investigate all possible initial curves.
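As one concrete example of an initial curve (a common choice in the SOM–TSP literature, offered here as an illustration rather than as the curve the authors settle on; the 0.1 radius is an arbitrary value), the ring weights can be placed on a small circle centered on the centroid of the cities:

    import numpy as np

    def circle_init(cities, radius=0.1):
        """Place m = 2n ring nodes on a circle around the city centroid."""
        m = 2 * len(cities)                # m = 2n, per Section 4.1
        theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
        circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
        return cities.mean(axis=0) + radius * circle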
