
Page 1: Neural Network to solve Traveling Salesman Problem

Neural Network to solve Traveling Salesman Problem

Amit Goyal 01005009, Koustubh Vachhani 01005021, Ankur Jain 01D05007

Page 2: Neural Network to solve Traveling Salesman Problem

Roadmap

Hopfield Neural Network

Solving TSP using Hopfield Network

Modification of Hopfield Neural Network

Solving TSP using Concurrent Neural Network

Comparison between Neural Network and SOM for solving TSP

Page 3: Neural Network to solve Traveling Salesman Problem

Background

Neural Networks: computing devices composed of processing elements called neurons

Processing power comes from the interconnections between neurons

Various models: Hopfield, Backpropagation, Perceptron, Kohonen Net, etc.

Page 4: Neural Network to solve Traveling Salesman Problem

Associative memory

Associative memory produces, for any input pattern, a similar stored pattern

Retrieval is possible from part of the data

Noisy input can also be recognized

(Figure: Original, Degraded, Reconstruction)

Page 5: Neural Network to solve Traveling Salesman Problem

Hopfield Network

Recurrent network: feedback from output to input

Fully connected: every neuron is connected to every other neuron

Page 6: Neural Network to solve Traveling Salesman Problem

Hopfield Network

Symmetric connections: the connection weights from unit i to unit j and from unit j to unit i are identical for all i and j

No self-connections, so the weight matrix is zero-diagonal and symmetric

Logic levels are +1 and -1

Page 7: Neural Network to solve Traveling Salesman Problem

Computation

For any neuron i, the input at an instant t is

Σj=1 to n, j≠i wij σj(t)

where σj(t) is the activation of the jth neuron

Threshold function θ = 0

Activation: σi(t+1) = sgn(Σj=1 to n, j≠i wij σj(t))

where sgn(x) = +1 if x > 0 and sgn(x) = -1 if x < 0

Page 8: Neural Network to solve Traveling Salesman Problem

Modes of operation

Synchronous: all neurons are updated simultaneously

Asynchronous
Simple: only one unit, selected at random, is updated at each step
General: neurons update themselves independently and randomly, based on a probability distribution over time.
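As an illustration (not part of the original slides), here is a minimal NumPy sketch of both update modes, assuming a zero-diagonal symmetric weight matrix W and a ±1 state vector sigma; mapping sgn(0) to -1 is an assumption, since the slides leave x = 0 undefined.

```python
import numpy as np

def sgn(x):
    # +1 for x > 0, -1 for x < 0; x == 0 is mapped to -1 (assumption, undefined on the slide)
    return np.where(x > 0, 1.0, -1.0)

def synchronous_step(W, sigma):
    # Synchronous mode: every neuron is updated simultaneously from the same previous state.
    return sgn(W @ sigma)

def asynchronous_step(W, sigma, rng):
    # Simple asynchronous mode: one randomly selected unit is updated per step.
    i = rng.integers(len(sigma))
    sigma = sigma.copy()
    sigma[i] = sgn(W[i] @ sigma)
    return sigma
```

Because W has a zero diagonal, the term W[i] @ sigma already excludes the self-connection.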

Page 9: Neural Network to solve Traveling Salesman Problem

Stability

The issue of stability arises since there is feedback in the Hopfield network

It may lead to a fixed point, a limit cycle or chaos
Fixed point: a unique point attractor
Limit cycle: the state space repeats itself in periodic cycles
Chaotic: an aperiodic, strange attractor

Page 10: Neural Network to solve Traveling Salesman Problem

Procedure

Store and stabilize the vector which has to be part of the memory.

Find the values of the weights wij, for all i, j, such that <σ1, σ2, σ3, …, σN> is stable in a Hopfield network of N neurons.

Page 11: Neural Network to solve Traveling Salesman Problem

Weight learning

Weight learning is given by

wij = 1/(N-1) σi σj

1/(N-1) is a normalizing factor; σi σj derives from Hebb's rule

If two connected neurons are ON, then the weight of the connection is such that mutual excitation is sustained.

Similarly, if two neurons inhibit each other, then the connection should sustain the mutual inhibition.

Page 12: Neural Network to solve Traveling Salesman Problem

Multiple Vectors

If multiple vectors need to be stored in memory, like

<σ1^1, σ2^1, σ3^1, …, σN^1>
<σ1^2, σ2^2, σ3^2, …, σN^2>
…
<σ1^p, σ2^p, σ3^p, …, σN^p>

then the weights are given by:

wij = 1/(N-1) Σm=1 to p σi^m σj^m
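As a concrete illustration (mine, not from the slides), this storage rule can be written as follows, assuming the p patterns are the rows of a ±1 NumPy array.

```python
import numpy as np

def hebbian_weights(patterns):
    """Store p bipolar (+1/-1) patterns of length N using
    wij = 1/(N-1) * sum over m of sigma_i^m * sigma_j^m, with a zero diagonal."""
    patterns = np.asarray(patterns, dtype=float)
    p, N = patterns.shape
    W = patterns.T @ patterns / (N - 1)   # sum of outer products over the stored patterns
    np.fill_diagonal(W, 0.0)              # no self-connections
    return W
```

The resulting matrix is symmetric and zero-diagonal, matching the structure assumed on the earlier slides, and can be used directly with the update sketch above.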

Page 13: Neural Network to solve Traveling Salesman Problem

Energy

Energy is associated with the state of the system.

Some patterns need to be made stable; this corresponds to a minimum-energy state of the system.

Page 14: Neural Network to solve Traveling Salesman Problem

Energy function

Energy at state σ' = <σ1, σ2, σ3, …, σN>:

E(σ') = -½ Σi Σj≠i wij σi σj

Let the pth neuron change its state from σp(initial) to σp(final), so

Einitial = -½ Σj≠p wpj σp(initial) σj + T

Efinal = -½ Σj≠p wpj σp(final) σj + T

ΔE = Efinal – Einitial

where T is independent of σp

Page 15: Neural Network to solve Traveling Salesman Problem

Continued…

ΔE = -½ (σp(final) – σp(initial)) Σj≠p wpj σj

i.e. ΔE = -½ Δσp Σj≠p wpj σj

Thus: ΔE = -½ Δσp × (netinputp)

If σp changes from +1 to -1, then Δσp is negative and netinputp is negative, and vice versa.

So ΔE is always negative: the energy always decreases when a neuron changes state.
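To make the argument concrete, here is a small numerical check (an illustration under assumptions: random ±1 patterns, the Hebbian rule from the earlier slide, ties in sgn broken to -1) that the energy never increases during asynchronous updates:

```python
import numpy as np

def energy(W, sigma):
    # E = -1/2 * sum_i sum_{j != i} w_ij * sigma_i * sigma_j  (diagonal of W is zero)
    return -0.5 * sigma @ W @ sigma

rng = np.random.default_rng(0)
patterns = np.where(rng.standard_normal((3, 20)) > 0, 1.0, -1.0)  # three random +/-1 patterns
W = patterns.T @ patterns / (patterns.shape[1] - 1)               # Hebbian weights
np.fill_diagonal(W, 0.0)

sigma = np.where(rng.standard_normal(20) > 0, 1.0, -1.0)          # random initial state
prev = energy(W, sigma)
for _ in range(200):                                              # asynchronous updates
    i = rng.integers(len(sigma))
    sigma[i] = 1.0 if W[i] @ sigma > 0 else -1.0
    assert energy(W, sigma) <= prev + 1e-9                        # energy never increases
    prev = energy(W, sigma)
```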

Page 16: Neural Network to solve Traveling Salesman Problem

Applications of Hopfield Nets

Hopfield nets are applied to optimization problems.

Optimization problems maximize or minimize a function.

In a Hopfield network, the energy gets minimized.

Page 17: Neural Network to solve Traveling Salesman Problem

Traveling Salesman Problem

Given a set of cities and the distances between them, determine the shortest closed path passing through all the cities exactly once.

Page 18: Neural Network to solve Traveling Salesman Problem

Traveling Salesman Problem

One of the classic and most highly researched problems in the field of computer science.

The decision problem "Is there a tour with length less than k?" is NP-complete

The optimization problem "What is the shortest tour?" is NP-hard

Page 19: Neural Network to solve Traveling Salesman Problem

Hopfield Net for TSP

N cities are represented by an N × N matrix of neurons

Each row has exactly one 1

Each column has exactly one 1

Matrix has exactly N 1’s

σkj = 1 if city k is in position j

σkj = 0 otherwise
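For concreteness (a sketch of mine, not from the slides), the city-by-position encoding and the corresponding tour length can be written as follows, assuming dist is an N × N distance matrix.

```python
import numpy as np

def tour_to_matrix(tour, n):
    """sigma[k, j] = 1 if city k is visited at position j, 0 otherwise."""
    sigma = np.zeros((n, n), dtype=int)
    for position, city in enumerate(tour):
        sigma[city, position] = 1
    return sigma

def matrix_to_tour(sigma):
    """Inverse mapping, valid when sigma is a permutation matrix."""
    return [int(np.argmax(sigma[:, j])) for j in range(sigma.shape[1])]

def tour_length(tour, dist):
    # Closed tour: the salesman returns to the starting city.
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))
```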

Page 20: Neural Network to solve Traveling Salesman Problem

Hopfield Net for TSP

For each element of the matrix take a neuron and fully connect the assembly with symmetric weights

Finding a suitable energy function E

Page 21: Neural Network to solve Traveling Salesman Problem

Determination of Energy Function

The E function for TSP has four components, satisfying four constraints

Each city can have no more than one position, i.e. each row can have no more than one activated neuron

E1 = A/2 Σk Σi Σj≠i σki σkj    (A – constant)

Page 22: Neural Network to solve Traveling Salesman Problem

Energy Function (Contd..)

Each position contains no more than one city i.e. each column contains no more than one activated neuron

E2 = B/2 Σj Σk Σr≠k σkj σrj    (B – constant)

Page 23: Neural Network to solve Traveling Salesman Problem

Energy Function (Contd..)

There are exactly N entries in the output matrix i.e. there are N 1’s in the output matrix

E3 = C/2 (N – Σk Σi σki)²    (C – constant)

Page 24: Neural Network to solve Traveling Salesman Problem

Energy Function (cont..)

The fourth term incorporates the requirement of the shortest path

E4 = D/2 Σk Σr≠k Σj dkr σkj (σr(j+1) + σr(j-1))

where dkr is the distance between city-k and city-r

Etotal = E1 + E2 + E3 + E4
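A sketch of the four penalty terms (an illustration only; it assumes positions wrap around, so position N+1 means position 1, and takes the constants A, B, C, D as parameters):

```python
import numpy as np

def tsp_energy(sigma, dist, A, B, C, D):
    """Four-term Hopfield energy for TSP; sigma is the N x N city-by-position matrix."""
    N = sigma.shape[0]
    # E1: at most one active neuron per row (each city in at most one position)
    E1 = A / 2 * sum(sigma[k, i] * sigma[k, j]
                     for k in range(N) for i in range(N) for j in range(N) if j != i)
    # E2: at most one active neuron per column (each position holds at most one city)
    E2 = B / 2 * sum(sigma[k, j] * sigma[r, j]
                     for j in range(N) for k in range(N) for r in range(N) if r != k)
    # E3: exactly N active neurons in total
    E3 = C / 2 * (N - sigma.sum()) ** 2
    # E4: tour length, with neighbouring positions taken modulo N (closed tour)
    E4 = D / 2 * sum(dist[k, r] * sigma[k, j] * (sigma[r, (j + 1) % N] + sigma[r, (j - 1) % N])
                     for k in range(N) for r in range(N) if r != k for j in range(N))
    return E1 + E2 + E3 + E4
```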

Page 25: Neural Network to solve Traveling Salesman Problem

Energy Function (cont..)

The energy equation is also given by

E = -½ Σki Σrj w(ki)(rj) σki σrj

σki – city k at position i
σrj – city r at position j

Output function:

σki = ½ (1 + tanh(uki/u0))

where u0 is a constant and uki is the net input

Page 26: Neural Network to solve Traveling Salesman Problem

Weight Value

Comparing the above equations with the energy equation obtained previously:

W(ki)(rj) = -A δkr(1 – δij) – B δij(1 – δkr) – C – D dkr(δj(i+1) + δj(i-1))

Kronecker symbol δkr:
δkr = 1 when k = r
δkr = 0 when k ≠ r
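A sketch (an assumption of mine: the weights stored as a 4-index array W[k, i, r, j], with positions wrapping around modulo N) of how these weights could be assembled from the Kronecker deltas:

```python
import numpy as np

def tsp_weights(dist, A, B, C, D):
    """W[k, i, r, j]: weight from neuron (city k, position i) to neuron (city r, position j)."""
    N = dist.shape[0]
    W = np.zeros((N, N, N, N))
    for k in range(N):
        for i in range(N):
            for r in range(N):
                for j in range(N):
                    same_city = 1.0 if k == r else 0.0   # delta_kr
                    same_pos = 1.0 if i == j else 0.0    # delta_ij
                    adjacent = (1.0 if j == (i + 1) % N else 0.0) + (1.0 if j == (i - 1) % N else 0.0)
                    W[k, i, r, j] = (-A * same_city * (1 - same_pos)
                                     - B * same_pos * (1 - same_city)
                                     - C
                                     - D * dist[k, r] * adjacent)
    return W
```

With this W and the output function σki = ½ (1 + tanh(uki/u0)), the network can then be relaxed toward a low-energy state.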

Page 27: Neural Network to solve Traveling Salesman Problem

Observation

The choice of the constants A, B, C and D that provide a good solution varies between two regimes:

Always obtaining legitimate tours (D small relative to A, B and C)

Giving heavier weight to the distances (D large relative to A, B and C)

Page 28: Neural Network to solve Traveling Salesman Problem

Observation (cont..)

Local minima: the energy function is full of dips, valleys and local minima

Speed: fast, due to the rapid computational capacity of the network

Page 29: Neural Network to solve Traveling Salesman Problem

Concurrent Neural Network

Proposed by N. Toomarian in 1988

It requires N·log(N) neurons to solve the TSP for N cities.

It also has a much higher probability to reach a valid tour.

Page 30: Neural Network to solve Traveling Salesman Problem

Objective Function

The aim is to minimize the distance between city k at position i and city r at position i+1

Ei = Σk≠r Σr Σi δki δr(i+1) dkr

where δ is the Kronecker symbol

Page 31: Neural Network to solve Traveling Salesman Problem

Cont …

Ei = 1/N² Σk≠r Σr Σi dkr Πi=1 to ln(N) [1 + (2νi – 1) σki] [1 + (2µi – 1) σri]

where (2µi – 1) = (2νi – 1) [1 – Πj=1 to i-1 νj]

Also, to ensure that two cities do not occupy the same position:

Eerror = Σk≠r Σr δkr

Page 32: Neural Network to solve Traveling Salesman Problem

Solution

Eerror will have a value 0 for any valid tour.

So we have a constrained optimization problem to solve.

E = Ei + λ Eerror

λ is the Lagrange multiplier, to be calculated from the solution.

Page 33: Neural Network to solve Traveling Salesman Problem

Minimization of energy function

Minimize the energy function, which is expressed in terms of σki

The algorithm is an iterative procedure of the kind usually used for minimization of quadratic functions

The iteration steps are carried out in the direction of steepest descent with respect to the energy function E

Page 34: Neural Network to solve Traveling Salesman Problem

Minimization of energy function

Differentiating the energy

dUki/dt = -∂E/∂σki = -∂Ei/∂σki – λ ∂Eerror/∂σki

dλ/dt = ±∂E/∂λ = ±Eerror

σki = tanh(αUki), where α is a constant
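A rough Euler-integration sketch of these dynamics (my illustration under stated assumptions: E_i and E_error are generic differentiable functions of the output array, the gradients are taken numerically, and all helper names are hypothetical):

```python
import numpy as np

def numerical_grad(f, x, eps=1e-6):
    # Central-difference gradient; adequate for a small illustrative problem.
    g = np.zeros_like(x)
    for idx in np.ndindex(x.shape):
        d = np.zeros_like(x)
        d[idx] = eps
        g[idx] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def concurrent_step(U, lam, E_i, E_error, alpha=1.0, dt=0.01):
    """One steepest-descent step on E = E_i + lam * E_error.
    U drives the outputs sigma = tanh(alpha * U); lam follows dlam/dt = +E_error(sigma)."""
    sigma = np.tanh(alpha * U)
    dE_dsigma = numerical_grad(E_i, sigma) + lam * numerical_grad(E_error, sigma)
    U = U - dt * dE_dsigma                            # dU/dt = -dE/dsigma
    lam = lam + dt * E_error(np.tanh(alpha * U))      # gradient ascent on the constraint term
    return U, lam
```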

Page 35: Neural Network to solve Traveling Salesman Problem

Implementation

The initial input matrix and the value of λ are randomly selected and specified

At each iteration, new values of σki and λ are calculated in the direction of steepest descent of the energy function

Iterations will stop either when convergence is achieved or when the number of iterations exceeds a user specified number

Page 36: Neural Network to solve Traveling Salesman Problem

Comparison – Hopfield vs Concurrent NN

The Concurrent Neural Network converges faster than the Hopfield network

Its probability of reaching a valid tour is higher than that of the Hopfield network

The Hopfield network has no systematic way to determine the constant terms.

Page 37: Neural Network to solve Traveling Salesman Problem

Comparison – SOM and Concurrent NN

The data set consists of 52 cities in Germany and a subset of 15 of these cities.

Both algorithms were run 80 times on the 15-city data set.

The 52-city data set could be analyzed only using SOM; the Concurrent Neural Network failed to analyze it.

Page 38: Neural Network to solve Traveling Salesman Problem

Result

The Concurrent Neural Network always converged and never missed any city, whereas SOM is capable of missing cities.

The Concurrent Neural Network is very erratic in behavior, whereas SOM has higher reliability in detecting every link in the shortest path.

Overall, the Concurrent Neural Network performed poorly compared to SOM.

Page 39: Neural Network to solve Traveling Salesman Problem

Shortest path generated: Concurrent Neural Network (2127 km), Self-Organizing Maps (1311 km)

Page 40: Neural Network to solve Traveling Salesman Problem

Behavior in terms of probability

(Figures: Concurrent Neural Network, Self-Organizing Maps)

Page 41: Neural Network to solve Traveling Salesman Problem

Conclusion

The Hopfield network can also be used for optimization problems.

The Concurrent Neural Network performs better than the Hopfield network and uses fewer neurons.

Both the Concurrent and Hopfield neural networks are less efficient than SOM for solving TSP.

Page 42: Neural Network to solve Traveling Salesman Problem

References

N. K. Bose and P. Liang, "Neural Network Fundamentals with Graphs, Algorithms and Applications", Tata McGraw-Hill, 1996

P. D. Wasserman, “Neural computing: theory and practice”, Van Nostrand Reinhold Co., 1989

N. Toomarian, “A Concurrent Neural Network algorithm for the Traveling Salesman Problem”, ACM Proceedings of the third conference on Hypercube concurrent computers and applications, pp. 1483-1490, 1988.

Page 43: Neural Network to solve Traveling Salesman Problem

References

R. Reilly, "Neural Network approach to solving the Traveling Salesman Problem", Journal of Computing Sciences in Colleges, pp. 41-61, October 2003

Wolfram Research inc., “Tutorial on Neural Networks”, http://documents.wolfram.com/applications/neuralnetworks/NeuralNetworkTheory/2.7.0.html, 2004

Prof. P. Bhattacharyya, "Introduction to computing with Neural Nets", http://www.cse.iitb.ac.in/~pb/Teaching.html

Page 45: Neural Network to solve Traveling Salesman Problem

NP-complete NP-hard

When a decision version of a combinatorial optimization problem is proved to belong to the class of NP-complete problems, which includes well-known problems such as satisfiability, traveling salesman, the bin packing problem, etc., then the optimization version is NP-hard.

Page 46: Neural Network to solve Traveling Salesman Problem

NP-complete NP-hard

"Is there a tour with length less than k?" is NP-complete:

It is easy to determine if a proposed certificate has length less than k

The optimization problem "What is the shortest tour?" is NP-hard, since there is no easy way to determine whether a certificate is the shortest.

Page 47: Neural Network to solve Traveling Salesman Problem

Path lengths

(Figures: Concurrent Neural Network, Self-Organizing Maps)