
A Fuzzy Hopfield-Tank Traveling Salesman Problem Model

    WILLIAM J. WOLFE, University of Colorado at Denver, Department of Computer Science and Engineering, Campus Box 109,
    P.O. Box 173364, Denver, CO 80217-3364, Email: wwolfe@cse.cudenver.edu

    (Received: March 1999; revised: June 1999; accepted: August 1999)

    This article describes a Hopfield-Tank model of the Euclidean traveling salesman problem (using Aiyer's subspace approach) that incorporates a fuzzy interpretation of the rows of the activation array. This fuzzy approach is used to explain the network's behavior. The fuzzy interpretation consists of computing the center of mass of the positive activations in each row. This produces real numbers along the time line, one for each city, and defines a tour in a natural way (referred to as a fuzzy tour). A fuzzy tour is computed at each iteration, and examination of such tours exposes fine features of the network dynamics. For example, these tours demonstrate that the network goes through (at least) three phases: centroid, monotonic, and nearest-city. The centroid (initial) phase of the network produces an approximation to the centroid tour (i.e., the tour that orders the cities by central angle with respect to the centroid of the cities). The second phase produces an approximation to the monotonic tour (i.e., the tour that orders the cities by their projection onto the principal axis of the cities). The third phase produces an approximation to a nearest-city tour, that is, a tour whose length is between the tour lengths of the nearest-city-best (best starting city) and nearest-city-worst (worst starting city) tours. Simulations are presented for up to 125 uniformly randomly generated cities.

    The Hopfield-Tank model[2, 13, 16, 17] of the Euclidean traveling salesman problem (TSP)[7, 8, 14] has attracted many researchers because it demonstrates how a highly interconnected network of simple processing units can solve a complex problem. The method is also interesting because it exemplifies a key property of parallel-distributed systems: the emergence of global characteristics from local interactions.[9, 10] It is often difficult to understand how local interactions can lead to a particular global effect, yet the analysis of such effects is crucial to improved understanding of recently developed models of biological, ecological, and economic systems. We take the TSP as a representative problem and show how a fuzzy approach exposes the network's organizational properties.

    We use Aiyer's subspace method[1] to construct a particular Hopfield-Tank TSP model. Briefly put, the subspace method is based on Hopfield's[2] original approach but with a more carefully chosen Energy Function. This is a direct consequence of analyzing the orthogonal projection onto the feasible (or valid) space. The result is a network that keeps the activation vector close to the feasible space, which in turn facilitates the interpretation of a final convergent state (i.e., the final state of the network corresponds closely to a permutation matrix). We show that for up to 100 uniformly randomly generated cities, the network produces tours of nearest-city quality (by nearest-city quality we mean that the tour length is between the tour lengths of the nearest-city-best (best starting city) and the nearest-city-worst (worst starting city) tours). The focus of this article, however, is to provide an explanation for the network's behavior, that is, to better understand the underlying mechanisms by which the network chooses a tour.

    To provide a finer analysis of the network's behavior, we introduce a fuzzy interpretation of each row of the activation array. This consists of calculating the center of mass (cm) of the positive activations in each row. These cms are real numbers, and therefore not restricted to the integers that represent the columns of the array. Thus, each row, and therefore each city, is associated with a real number on the time line, and the corresponding sequence is interpreted as the current tour (i.e., the fuzzy tour). Such a tour can be computed at any iteration. At the beginning of a run, as might be expected, the fuzzy tours are random, but as the network progresses, various patterns emerge, patterns that correlate with city statistics such as the centroid and principal components. It is shown that the network goes through three phases:
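As a concrete illustration, the row-wise center-of-mass computation just described can be sketched in a few lines of Python (the paper gives no code; the function name `fuzzy_tour` and the toy array are ours):

```python
def fuzzy_tour(act):
    """Given an n x n activation array act[x][i] (rows = cities,
    columns = time stops), return the cities ordered by the center
    of mass of the positive activations in each row."""
    n = len(act)
    cms = []
    for x in range(n):
        pos = [(i, a) for i, a in enumerate(act[x]) if a > 0]
        total = sum(a for _, a in pos)
        # center of mass of the positive activations: a real number
        # on the time line, not restricted to integer columns
        cm = sum(i * a for i, a in pos) / total if total > 0 else 0.0
        cms.append((cm, x))
    return [x for _, x in sorted(cms)]

# Example: city 1's mass sits near column 0, city 0's near column 2
act = [[0.1, 0.2, 0.9],
       [0.8, 0.1, 0.0]]
print(fuzzy_tour(act))  # [1, 0]
```

Because the cms are real-valued, two cities can be ranked even when their activations peak in the same integer column, which is what lets this interpretation expose fine structure at every iteration.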

    1. Centroid Phase. The fuzzy tours are approximations to the centroid tour, that is, the tour that orders the cities by central angle with respect to the center of mass of the cities.

    2. Monotonic Phase. The fuzzy tours are approximations to the monotonic tour, that is, the tour that orders the cities by their projections onto the principal axis of the cities.

    3. Nearest-City Phase. The fuzzy tours are approximations to a nearest-city tour, that is, a tour that starts from an arbitrary city and then chooses nearest neighbors until a tour is constructed.
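The three reference tours can each be computed directly from the city coordinates. The following sketch is our own (the paper supplies no code, and for simplicity the principal axis is passed in rather than estimated):

```python
import math

def centroid_tour(cities):
    # order cities by central angle about their center of mass
    cx = sum(x for x, _ in cities) / len(cities)
    cy = sum(y for _, y in cities) / len(cities)
    return sorted(range(len(cities)),
                  key=lambda i: math.atan2(cities[i][1] - cy,
                                           cities[i][0] - cx))

def monotonic_tour(cities, axis):
    # order cities by their projection onto a given axis (ax, ay)
    ax, ay = axis
    return sorted(range(len(cities)),
                  key=lambda i: cities[i][0] * ax + cities[i][1] * ay)

def nearest_city_tour(cities, start=0):
    # greedy: from the current city, always visit the nearest unvisited one
    unvisited = set(range(len(cities))) - {start}
    tour, cur = [start], start
    while unvisited:
        cur = min(unvisited,
                  key=lambda j: math.dist(cities[cur], cities[j]))
        tour.append(cur)
        unvisited.discard(cur)
    return tour
```

Running `nearest_city_tour` once per starting city and taking the best and worst results gives the nearest-city-best and nearest-city-worst tour lengths used as the quality benchmark in this article.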

    In simulations the network is initialized with very small random positive activations (less than 10^-30). Initially, the fuzzy tours are themselves random, but they quickly become approximations to the centroid tour. The emergence of the centroid tour is surprising, but it was first suggested by Aiyer.[1] We do not provide a proof that centroid tours must emerge, but it is evident in almost all simulations, with rare exceptions. However, a proof is provided that for small activations the network produces sinusoids of frequency 1 in each row of the activation array, and that the peaks of the sinusoids cluster around two columns of the network (halfway around the tour from each other). Empirical evidence is then presented that the peaks of the sinusoids organize themselves so that the corresponding fuzzy tour is a close approximation to the centroid tour.

    Subject classifications: Combinatorial optimization.
    Other key words: Hopfield-Tank, traveling salesman problem.

    INFORMS Journal on Computing, Vol. 11, No. 4, Fall 1999, p. 329. ISSN 0899-1499. © 1999 INFORMS

    The centroid configuration is just the initial phase, a manifestation of the linear behavior of the network (dominant when the activations are small). The corner-seeking (i.e., nonlinear) tendency of the network ultimately forces the activations to move toward a nearest-city tour. In between, there is a phase where the fuzzy tours correlate strongly with the monotonic tour. The relationship between the monotonic and nearest-city phases is critical to understanding the network's overall performance. Unfortunately, it is very difficult to see the correlation between these phases, but we provide strong empirical evidence that the principal axis of the cities plays an important role in tying these two phases together.

    1. Network Architecture

    Following Hopfield and Tank,[2] an n-city TSP problem is represented by an n x n grid of neurons (cities at possible time stops). A row of the grid represents a city at all time stops; a column of the grid represents a time stop at all cities. The neighboring columns of the grid are connected (symmetrically) with a connection strength of -d(x, y), where d(x, y) is the distance between cities x and y. That is, neuron (i, x) is connected to neurons (i ± 1, y), with wrap-around, with connection strength -d(x, y).
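A minimal sketch of this connection scheme, assuming (as in the standard Hopfield-Tank distance term) that the weights are the negated inter-city distances; the dictionary representation and the name `build_weights` are ours:

```python
def build_weights(dist, n):
    """Weight from neuron (i, x) to neuron (j, y): only neighboring
    columns (with wrap-around) are connected, with strength -d(x, y);
    all other pairs carry no distance-term weight."""
    w = {}
    for i in range(n):
        for x in range(n):
            for y in range(n):
                for j in ((i - 1) % n, (i + 1) % n):  # wrap-around
                    w[(i, x, j, y)] = -dist[x][y]
    return w
```

Since d(x, y) = d(y, x), these weights are symmetric, which is the property that lets the network be analyzed with an energy function.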

    Each neuron has an activation (a_ix), and we refer to the n x n array of activations as the activation array and to the n^2-long vector obtained from the array (by concatenating the rows) as the activation vector (a). The index i indicates the column of the array (i.e., the stop), and the index x indicates the row of the array (i.e., the city). The net input to a neuron is the sum of activations times the connection strengths (there are no external inputs):

    net_ix = Σ_y Σ_k w_ix,ky a_ky.  (1)

    1.1 The Subspace Approach

    So far, the network architecture is too simple to be effective. Following Aiyer,[1] we analyze the orthogonal projection into the feasible space. The Hopfield-Tank model defines the feasible space as the affine space containing the n x n matrices whose rows and columns sum to 1 (i.e., doubly stochastic matrices). In our model the feasible space (Σ0) is the space spanned by n x n matrices whose rows and columns sum to zero (Aiyer refers to this as the zero-sum space). The Hopfield-Tank model uses an external input to shift (i.e., parallel translation) Σ0 to the affine space containing the doubly stochastic matrices; otherwise the neural architectures are identical.

    The orthogonal projection consists of the following calculation:

    a_ix^proj = a_ix + actavg - rowavg_x - colavg_i,  (2)

    where actavg is the average of all the activations at that iteration, rowavg_x is the average of the activations in the x-th row, and colavg_i is the average of the activations in the i-th column. After projection, the array (a_ix^proj) has the property that the sum of any row or column is zero and is therefore in the feasible space.[6]
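Equation (2) amounts to "subtract the row and column means and add back the grand mean." A small pure-Python sketch (the function name is ours) makes the zero-sum property easy to check:

```python
def project_to_zero_sum(a):
    """Orthogonally project an n x n activation array onto the
    zero-sum (feasible) space: every row and every column of the
    result sums to zero."""
    n = len(a)
    actavg = sum(sum(row) for row in a) / (n * n)   # grand mean
    rowavg = [sum(row) / n for row in a]            # per-row means
    colavg = [sum(a[x][i] for x in range(n)) / n    # per-column means
              for i in range(n)]
    return [[a[x][i] + actavg - rowavg[x] - colavg[i]
             for i in range(n)]
            for x in range(n)]
```

Summing any row of the result, the n copies of actavg cancel the n copies of colavg_i in aggregate, and the n copies of rowavg_x cancel the row's own activations, giving exactly zero (and symmetrically for columns).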

    The effectiveness of the projection is a direct consequence of the analysis provided by Aiyer[1] (see also Gee et al.[3] and Aiyer et al.[4]). It helps to realize that R^(n^2) decomposes into the following mutually orthogonal subspaces:

    R^(n^2) = Σ1 ⊕ Σc ⊕ Σr ⊕ Σ0,  (3)

    where Σ1 is the one-dimensional space spanned by a constant vector (e.g., a_ij = 1); Σc is spanned by harmonics in the x-direction (e.g., sinusoids parallel to the rows of the neural grid); Σr is spanned by harmonics in the y-direction (e.g., sinusoids parallel to the columns of the neural grid); and Σ0 (the feasible space) is spanned by the two-dimensional harmonics that are orthogonal to Σ1, Σc, and Σr.
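The decomposition in Eq. (3) can be made concrete by splitting an arbitrary array into four pieces that sum back to the original and are pairwise orthogonal. This sketch is our own illustration (it uses row/column averages in place of explicit harmonic bases, which spans the same four subspaces):

```python
def decompose(a):
    """Split an n x n array into four mutually orthogonal pieces:
    constant, column-varying, row-varying, and zero-sum residual."""
    n = len(a)
    g = sum(sum(row) for row in a) / (n * n)        # grand mean
    rowavg = [sum(row) / n for row in a]
    colavg = [sum(a[x][i] for x in range(n)) / n for i in range(n)]
    const = [[g] * n for _ in range(n)]
    cpart = [[colavg[i] - g for i in range(n)] for _ in range(n)]
    rpart = [[rowavg[x] - g] * n for x in range(n)]
    zsum = [[a[x][i] - g - (colavg[i] - g) - (rowavg[x] - g)
             for i in range(n)] for x in range(n)]
    return const, cpart, rpart, zsum

def dot(u, v):
    # inner product of two n x n arrays viewed as n^2-long vectors
    n = len(u)
    return sum(u[x][i] * v[x][i] for x in range(n) for i in range(n))
```

The residual `zsum` is exactly the projection of Eq. (2), which is why projecting onto the feasible space is the same as discarding the other three components.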

    With these definitions the feasible space becomes a linear subspace of R^(n^2) (further details are provided in Aiyer,[1] Wolfe et al.,[5] and Wolfe and Ulmer[6]).

    The projection is introduced here, separate from the rest of the network architecture, for reasons of clarity, but the projection calculation can be e
