Neural Network to Solve the Traveling Salesman Problem

  • Neural Network to solve Traveling Salesman Problem

    Amit Goyal 01005009, Koustubh Vachhani 01005021, Ankur Jain 01D05007

  • Roadmap

    Hopfield Neural Network

    Solving TSP using Hopfield Network

    Modification of Hopfield Neural Network

    Solving TSP using Concurrent Neural Network

    Comparison between Neural Network and SOM for solving TSP

  • Background: Neural Networks

    A neural network is a computing device composed of processing elements called neurons.

    Its processing power comes from the interconnections between neurons.

    Common models include the Hopfield network, backpropagation, the perceptron, the Kohonen net, etc.

  • Associative Memory

    Produces, for any input pattern, a similar stored pattern.

    Retrieval is possible from just part of the data.

    Noisy input can also be recognized.

    (Figure: original, degraded and reconstructed images.)

  • Hopfield Network

    Recurrent network: feedback from output to input.

    Fully connected: every neuron is connected to every other neuron.

  • Hopfield Network

    Symmetric connections: the weights from unit i to unit j and from unit j to unit i are identical for all i and j.

    No self connections, so the weight matrix is zero-diagonal and symmetric.

    Logic levels are +1 and -1.

  • Computation

    For any neuron i, the input at instant t is Σ_{j=1..n, j≠i} w_ij σ_j(t), where σ_j(t) is the activation of the jth neuron.

    The threshold is 0, so the activation update is σ_i(t+1) = sgn(Σ_{j=1..n, j≠i} w_ij σ_j(t)), where

    sgn(x) = +1 if x > 0, and sgn(x) = -1 if x < 0.
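    A minimal sketch of this update rule (the names hopfield_step, W and sigma are illustrative, not from the slides):

    import numpy as np

    def sgn(x):
        # Threshold is 0: map the net input to the logic levels +1 / -1.
        return 1 if x > 0 else -1

    def hopfield_step(W, sigma, i):
        # Input to neuron i at this instant: sum over j != i of w_ij * sigma_j(t).
        net_input = sum(W[i, j] * sigma[j] for j in range(len(sigma)) if j != i)
        sigma[i] = sgn(net_input)          # sigma_i(t+1) = sgn(net input)
        return sigma

    # Example: a 3-neuron network with symmetric, zero-diagonal weights.
    W = np.array([[0.0, 1.0, -1.0],
                  [1.0, 0.0,  1.0],
                  [-1.0, 1.0, 0.0]])
    print(hopfield_step(W, np.array([1, -1, 1]), 0))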

  • Modes of Operation

    Synchronous: all neurons are updated simultaneously.

    Asynchronous (simple): only one randomly selected unit is updated at each step.

    Asynchronous (general): neurons update themselves independently and randomly, based on a probability distribution over time.

  • Stability

    The issue of stability arises because the Hopfield network contains feedback.

    The dynamics may lead to a fixed point, a limit cycle, or chaos.

    Fixed point: a unique point attractor.
    Limit cycle: the state space repeats itself in periodic cycles.
    Chaotic: an aperiodic strange attractor.

  • Procedure

    Store and stabilize the vector that has to be part of the memory.

    Find the values of the weights w_ij, for all i and j, such that the stored vector is stable in a Hopfield network of N neurons.

  • Weight Learning

    The weight learning rule is w_ij = 1/(N-1) σ_i σ_j.

    1/(N-1) is a normalizing factor; the term σ_i σ_j derives from Hebb's rule.

    If two connected neurons are ON, then the weight of the connection is such that the mutual excitation is sustained.

    Similarly, if two neurons inhibit each other, then the connection should sustain the mutual inhibition.

  • Multiple Vectors

    If multiple vectors σ^1, ..., σ^p need to be stored in memory, then the weights are given by (see the sketch below):

    w_ij = 1/(N-1) Σ_{m=1..p} σ_i^m σ_j^m
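    A minimal sketch of this learning rule (hebbian_weights is an illustrative name; the stored patterns are made-up bipolar vectors):

    import numpy as np

    def hebbian_weights(patterns):
        # w_ij = 1/(N-1) * sum_{m=1..p} sigma_i^m * sigma_j^m, with zero diagonal.
        patterns = np.asarray(patterns, dtype=float)
        _, N = patterns.shape
        W = patterns.T @ patterns / (N - 1)   # Hebbian outer-product sum
        np.fill_diagonal(W, 0.0)              # no self connections
        return W

    # Example: store two bipolar patterns of length 4.
    print(hebbian_weights([[1, -1, 1, -1],
                           [1,  1, -1, -1]]))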

  • Energy

    An energy is associated with the state of the system.

    Some patterns need to be made stable; this corresponds to a minimum-energy state of the system.

  • Energy Function

    The energy in state σ is E(σ) = -Σ_i Σ_{j≠i} w_ij σ_i σ_j.

    Let the pth neuron change its state from σ_p^initial to σ_p^final. Then

    E_initial = -Σ_{j≠p} w_pj σ_p^initial σ_j + T
    E_final   = -Σ_{j≠p} w_pj σ_p^final σ_j + T
    ΔE = E_final - E_initial

    where T collects the terms independent of σ_p.

  • Continued

    ΔE = -(σ_p^final - σ_p^initial) Σ_{j≠p} w_pj σ_j, i.e. ΔE = -Δσ_p Σ_{j≠p} w_pj σ_j.

    Thus ΔE = -Δσ_p × netinput_p.

    If σ_p changes from +1 to -1, then Δσ_p is negative and netinput_p is negative, and vice versa.

    So ΔE is always negative: the energy always decreases when a neuron changes state.
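    A small check of this property, using the slide's energy definition (the weights below are random but symmetric and zero-diagonal; the names are illustrative):

    import numpy as np

    def energy(W, sigma):
        # E(sigma) = -sum_i sum_{j != i} w_ij sigma_i sigma_j (diagonal of W is zero).
        return -float(sigma @ W @ sigma)

    rng = np.random.default_rng(0)
    N = 6
    A = rng.normal(size=(N, N))
    W = (A + A.T) / 2              # symmetric connections
    np.fill_diagonal(W, 0.0)       # no self connections
    sigma = rng.choice([-1, 1], size=N)
    for _ in range(50):            # asynchronous updates of randomly chosen neurons
        i = rng.integers(N)
        e_before = energy(W, sigma)
        sigma[i] = 1 if W[i] @ sigma > 0 else -1
        assert energy(W, sigma) <= e_before + 1e-12
    print("energy never increased")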

  • Applications of Hopfield Nets

    Hopfield nets are applied to optimization problems.

    Optimization problems maximize or minimize a function; in a Hopfield network, it is the energy that gets minimized.

  • Traveling Salesman Problem

    Given a set of cities and the distances between them, determine the shortest closed path passing through all the cities exactly once (a brute-force sketch follows below).
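    For very small instances the definition can be checked directly by enumeration; a sketch (the distance matrix is made up):

    from itertools import permutations

    def brute_force_tsp(d):
        # Exact shortest closed tour by enumerating permutations (O(N!), small N only).
        n = len(d)
        best_len, best_tour = float("inf"), None
        for perm in permutations(range(1, n)):        # fix city 0 as the start
            tour = (0,) + perm
            length = sum(d[tour[i]][tour[(i + 1) % n]] for i in range(n))
            if length < best_len:
                best_len, best_tour = length, tour
        return best_tour, best_len

    d = [[0, 2, 9, 10],
         [2, 0, 6, 4],
         [9, 6, 0, 3],
         [10, 4, 3, 0]]
    print(brute_force_tsp(d))     # ((0, 1, 3, 2), 18)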

  • Traveling Salesman Problem

    One of the classic and most heavily researched problems in computer science.

    The decision problem "Is there a tour with length less than k?" is NP-complete.

    The optimization problem "What is the shortest tour?" is NP-hard.

  • Hopfield Net for TSP

    N cities are represented by an N × N matrix of neurons.

    Each row has exactly one 1, each column has exactly one 1, and the matrix has exactly N 1s.

    ν_kj = 1 if city k is in position j; ν_kj = 0 otherwise.
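    A sketch of this representation (tour_to_matrix is an illustrative name):

    import numpy as np

    def tour_to_matrix(tour, n):
        # nu[k, j] = 1 if city k is in position j, 0 otherwise.
        nu = np.zeros((n, n), dtype=int)
        for position, city in enumerate(tour):
            nu[city, position] = 1
        return nu

    # Example: 4 cities visited in the order 2 -> 0 -> 3 -> 1.
    nu = tour_to_matrix([2, 0, 3, 1], 4)
    assert nu.sum(axis=0).tolist() == [1, 1, 1, 1]   # one city per position
    assert nu.sum(axis=1).tolist() == [1, 1, 1, 1]   # one position per city
    print(nu)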

  • Hopfield Net for TSP

    For each element of the matrix, take a neuron and fully connect the assembly with symmetric weights.

    Then find a suitable energy function E.

  • Determination of the Energy Function

    The energy function for TSP has four components, each enforcing one constraint.

    Each city can have no more than one position, i.e. each row can have no more than one activated neuron:

    E1 = (A/2) Σ_k Σ_i Σ_{j≠i} ν_ki ν_kj, with A a constant.

  • Energy Function (contd.)

    Each position contains no more than one city, i.e. each column contains no more than one activated neuron:

    E2 = (B/2) Σ_j Σ_k Σ_{r≠k} ν_kj ν_rj, with B a constant.

  • Energy Function (contd.)

    There are exactly N entries in the output matrix, i.e. there are N 1s in the output matrix:

    E3 = (C/2) (N - Σ_k Σ_i ν_ki)², with C a constant.

  • Energy Function (contd.)

    The fourth term incorporates the requirement of the shortest path:

    E4 = (D/2) Σ_k Σ_{r≠k} Σ_j d_kr ν_kj (ν_r(j+1) + ν_r(j-1))

    where d_kr is the distance between city k and city r.

    E_total = E1 + E2 + E3 + E4 (a sketch of these four terms follows below).
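    A sketch of these four terms computed directly from the neuron matrix ν and distance matrix d (function and parameter names are illustrative; positions wrap around modulo N):

    import numpy as np

    def tsp_energy(nu, d, A, B, C, D):
        n = nu.shape[0]
        # E1: at most one active neuron per row (one position per city).
        E1 = A / 2 * sum(nu[k, i] * nu[k, j]
                         for k in range(n) for i in range(n)
                         for j in range(n) if j != i)
        # E2: at most one active neuron per column (one city per position).
        E2 = B / 2 * sum(nu[k, j] * nu[r, j]
                         for j in range(n) for k in range(n)
                         for r in range(n) if r != k)
        # E3: exactly N ones in the whole matrix.
        E3 = C / 2 * (n - nu.sum()) ** 2
        # E4: tour-length term; the neighbours of position j are j+1 and j-1 (mod N).
        E4 = D / 2 * sum(d[k, r] * nu[k, j] * (nu[r, (j + 1) % n] + nu[r, (j - 1) % n])
                         for k in range(n) for r in range(n) if r != k
                         for j in range(n))
        return E1 + E2 + E3 + E4

    For a valid tour matrix, E1, E2 and E3 vanish and E4 reduces to D times the tour length (each edge is counted once in each direction).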

  • Energy Function (contd.)

    The energy can also be written as E = -Σ_{k,i} Σ_{r,j} W_(ki)(rj) ν_ki ν_rj

    where ν_ki denotes city k at position i and ν_rj denotes city r at position j.

    The output function is ν_ki = ½ (1 + tanh(u_ki / u0)), where u0 is a constant and u_ki is the net input.

  • Weight Value

    Comparing the above equation with the energy expression obtained previously gives

    W_(ki)(rj) = -A δ_kr (1 - δ_ij) - B δ_ij (1 - δ_kr) - C - D d_kr (δ_j(i+1) + δ_j(i-1))

    where δ is the Kronecker symbol: δ_kr = 1 when k = r, and δ_kr = 0 when k ≠ r.
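    A sketch constructing this weight tensor (tsp_weights is an illustrative name; position indices wrap modulo N):

    import numpy as np

    def tsp_weights(d, A, B, C, D):
        # W[k, i, r, j]: weight between neuron (city k, position i) and neuron (city r, position j).
        n = d.shape[0]
        delta = lambda a, b: 1.0 if a == b else 0.0   # Kronecker symbol
        W = np.zeros((n, n, n, n))
        for k in range(n):
            for i in range(n):
                for r in range(n):
                    for j in range(n):
                        W[k, i, r, j] = (-A * delta(k, r) * (1 - delta(i, j))
                                         - B * delta(i, j) * (1 - delta(k, r))
                                         - C
                                         - D * d[k, r] * (delta(j, (i + 1) % n)
                                                          + delta(j, (i - 1) % n)))
        return W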

  • Observation

    The choice of the constants A, B, C and D that provides a good solution varies between two extremes:

    Always obtaining legitimate loops (D is small relative to A, B and C).

    Giving heavier weight to the distances (D is large relative to A, B and C).

  • Observation (contd.)

    Local minima: the energy function is full of dips, valleys and local minima.

    Speed: fast, due to the rapid computational capacity of the network.

  • Concurrent Neural Network

    Proposed by N. Toomarian in 1988.

    It requires N·log(N) neurons to compute a TSP of N cities.

    It also has a much higher probability of reaching a valid tour.

  • Objective Function

    The aim is to minimize the distance between city k at position i and city r at position i+1:

    E_i = Σ_k Σ_{r≠k} Σ_i δ_ki δ_r(i+1) d_kr

    where δ is the Kronecker symbol.

  • Continued

    E_i = (1/N²) Σ_k Σ_{r≠k} d_kr Π_{i=1..ln(N)} [1 + (2i − 1) σ_ki] [1 + (2i − 1) σ_ri]

    where (2i − 1) = (2i − 1) [1 − Σ_{j=1..i-1} σ_i].

    Also, to ensure that two cities don't occupy the same position:

    E_error = Σ_k Σ_{r≠k} δ_kr

  • Solution

    E_error has the value 0 for any valid tour, so we have a constrained optimization problem to solve:

    E = E_i + λ E_error

    where λ is the Lagrange multiplier, to be calculated from the solution.

  • Minimization of the Energy Function

    We minimize the energy function, which is expressed in terms of σ_ki.

    The algorithm is an iterative procedure of the kind usually used for the minimization of quadratic functions.

    The iteration steps are carried out in the direction of steepest descent with respect to the energy function E.

  • Minimization of the Energy Function

    Differentiating the energy:

    dU_ki/dt = -∂E/∂σ_ki = -∂E_i/∂σ_ki - λ ∂E_error/∂σ_ki

    dλ/dt = ∂E/∂λ = E_error

    σ_ki = tanh(α U_ki), with α a constant.

  • Implementation

    The initial input matrix and the value of λ are randomly selected and specified.

    At each iteration, new values of σ_ki and λ are calculated in the direction of steepest descent of the energy function.

    The iterations stop either when convergence is achieved or when the number of iterations exceeds a user-specified limit (a generic sketch of this loop follows below).
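    A generic sketch of this iteration, assuming the energy terms E_i and E_error are supplied by the caller as functions; gradients are taken numerically, and all names and step sizes are illustrative rather than Toomarian's actual implementation:

    import numpy as np

    def concurrent_descent(e_dist, e_error, n_neurons, alpha=1.0, dt=0.01,
                           n_iter=1000, tol=1e-6, seed=0):
        rng = np.random.default_rng(seed)
        U = rng.normal(scale=0.1, size=n_neurons)   # random initial input (flattened matrix)
        lam = rng.uniform()                         # random initial Lagrange multiplier
        eps = 1e-5
        for _ in range(n_iter):
            sigma = np.tanh(alpha * U)              # sigma_ki = tanh(alpha * U_ki)
            total = lambda s: e_dist(s) + lam * e_error(s)
            grad = np.array([(total(sigma + eps * np.eye(n_neurons)[i])
                              - total(sigma - eps * np.eye(n_neurons)[i])) / (2 * eps)
                             for i in range(n_neurons)])
            U -= dt * grad                          # dU/dt = -dE/d(sigma): steepest descent
            lam += dt * e_error(sigma)              # d(lambda)/dt = E_error
            if np.linalg.norm(grad) < tol:
                break                               # convergence reached
        return np.tanh(alpha * U), lam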

  • Comparison: Hopfield vs. Concurrent NN

    The concurrent network converges faster than the Hopfield network.

    Its probability of achieving a valid tour is higher than that of the Hopfield network.

    The Hopfield network doesn't have a systematic way to determine the constant terms.

  • Comparison: SOM and Concurrent NN

    The data set consists of 52 cities in Germany and a subset of 15 cities.

    Both algorithms were run 80 times on the 15-city data set.

    The 52-city data set could be analyzed only using SOM; the concurrent neural network failed on it.

  • Result

    The concurrent neural network always converged and never missed any city, whereas SOM is capable of missing cities.

    The concurrent neural network is very erratic in behavior, whereas SOM detects every link in the shortest path more reliably.

    Overall, the concurrent neural network performed poorly compared to SOM.

  • Shortest Path Generated

    Concurrent Neural Network: 2127 km. Self-Organizing Map: 1311 km.

  • Behavior in Terms of Probability

    (Charts: Concurrent Neural Network vs. Self-Organizing Maps.)

  • Conclusion

    The Hopfield network can also be used for optimization problems.

    The concurrent neural network performs better than the Hopfield network and uses fewer neurons.

    Both the concurrent and the Hopfield neural networks are less efficient than SOM for solving TSP.

  • References

    N. K. Bose and P. Liang, Neural Network Fundamentals with Graphs, Algorithms and Applications, Tata McGraw-Hill, 1996.

    P. D. Wasserman, Neural Computing: Theory and Practice, Van Nostrand Reinhold Co., 1989.

    N. Toomarian, "A Concurrent Neural Network Algorithm for the Traveling Salesman Problem," ACM Proceedings of the Third Conference on Hypercube Concurrent Computers and Applications, pp. 1483-1490, 1988.

  • References

    R. Reilly, "Neural Network Approach to Solving the Traveling Salesman Problem," Journal of Computing Sciences in Colleges, pp. 41-61, October 2003.

    Wolfram Research Inc., Tutorial on Neural Networks, http://documents.wolfram.com/applications/neuralnetworks/NeuralNetworkTheory/2.7.0.html, 2004.

