Network Optimization Problems:
Models and Algorithms
In this handout:
• Approximation Algorithms
• Traveling Salesman Problem
Classes of discrete optimization problems
Class 1 problems have polynomial-time algorithms
for solving the problems optimally.
Ex.: Min. Spanning Tree problem
For Class 2 problems (NP-hard problems)
• No polynomial-time algorithm is known;
• And most likely none exists.
Ex.: Traveling Salesman Problem
Three main directions to solve
NP-hard discrete optimization problems:
• Integer programming techniques
• Heuristics
• Approximation algorithms
• We gave examples of the first two methods for TSP.
• In this handout, we present an approximation algorithm for TSP.
Definition of Approximation Algorithms
• Definition: An α-approximation algorithm is a polynomial-time algorithm which always produces a solution of value within α times the value of an optimal solution.
That is, for any instance of the problem
Zalgo / Zopt ≤ α  (for a minimization problem)
where Zalgo is the cost of the algorithm output,
Zopt is the cost of an optimal solution.
• α is called the approximation guarantee (or factor) of the algorithm.
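To make the definition concrete, here is a minimal Python check of the guarantee for a minimization problem (the function name and the sample values are hypothetical):

```python
def satisfies_guarantee(z_algo: float, z_opt: float, alpha: float) -> bool:
    """For a minimization problem, check Z_algo / Z_opt <= alpha."""
    return z_algo <= alpha * z_opt

# A tour of cost 19 against an optimum of cost 10 is within factor 2.
assert satisfies_guarantee(z_algo=19.0, z_opt=10.0, alpha=2.0)
```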
Some Characteristics of Approximation Algorithms
• Time-efficient (sometimes not as efficient as heuristics)
• Don’t guarantee an optimal solution
• Guarantee a good solution within some factor of the optimum
• Rigorous mathematical analysis to prove the approximation guarantee
• Often use algorithms for related problems as subroutines
Next we will give
an approximation algorithm for TSP.
An approximation algorithm for TSP
Given an instance of the TSP:
1. Find a minimum spanning tree (MST) for that instance.
(using the algorithm of the previous handout)
2. To get a tour, start from any node and traverse the arcs of the MST, taking shortcuts when necessary (a Python sketch follows the example below).
Example:
[Figure: Stage 1 shows the minimum spanning tree; Stage 2 shows the resulting tour. Starting from a fixed node, the red bold arcs form a tour.]
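The following is a minimal sketch of the algorithm in Python, assuming the instance is a complete graph given as a symmetric cost matrix; Prim's algorithm stands in for the MST routine of the previous handout, and all names are illustrative:

```python
import heapq

def mst_tour(cost):
    """2-approximate TSP tour: build an MST, then walk it in preorder,
    skipping past (shortcutting) nodes that were already visited.

    cost -- symmetric n x n matrix satisfying the triangle inequality.
    Returns a tour as a list of node indices, starting and ending at 0.
    """
    n = len(cost)
    # Step 1: Prim's algorithm for a minimum spanning tree rooted at node 0.
    children = {v: [] for v in range(n)}
    in_tree = [False] * n
    heap = [(0.0, 0, 0)]                    # (arc cost, node, parent)
    while heap:
        _, v, parent = heapq.heappop(heap)
        if in_tree[v]:
            continue
        in_tree[v] = True
        if v != parent:
            children[parent].append(v)
        for w in range(n):
            if not in_tree[w]:
                heapq.heappush(heap, (cost[v][w], w, v))
    # Step 2: preorder traversal of the MST; visiting each node only the
    # first time it is reached is exactly the shortcut step.
    tour, stack = [], [0]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    return tour + [0]                       # close the tour at the start node
```

Calling mst_tour(cost) on any cost matrix satisfying the triangle inequality returns a tour of cost at most twice the optimum, as proved below.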
Approximation guarantee for the algorithm
• In many situations, it is reasonable to assume that the triangle inequality holds for the cost function c: E → R defined on the arcs of the network G = (V, E):
cuw ≤ cuv + cvw  for any u, v, w ∈ V
• Theorem:
If the cost function satisfies the triangle inequality,
then the algorithm for TSP
is a 2-approximation algorithm.
[Figure: triangle on nodes u, v, w illustrating the inequality.]
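As a quick sanity check one might run before applying the algorithm, here is a brute-force test of the triangle inequality on a cost matrix (a sketch; the function name is hypothetical):

```python
def satisfies_triangle_inequality(cost):
    """Check c[u][w] <= c[u][v] + c[v][w] for every triple of nodes."""
    n = len(cost)
    return all(cost[u][w] <= cost[u][v] + cost[v][w]
               for u in range(n) for v in range(n) for w in range(n))
```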
Approximation guarantee for the algorithm (proof)
First let’s compare the optimal solutions of MST and TSP for any problem instance G = (V, E), c: E → R.
• Idea: Get a tour from Minimum spanning tree without increasing its cost too much (at most twice in our case).
Deleting any one arc from the optimal TSP tour leaves a spanning tree, and any spanning tree costs at least as much as a minimum spanning tree. Hence
Cost(Opt. TSP sol-n) ≥ Cost(of this tree) ≥ Cost(Opt. MST sol-n)   (*)
Approximation guarantee for the algorithm (proof)
The algorithm
• takes a minimum spanning tree,
• starts from any node,
• traverses the MST arcs, taking shortcuts when necessary,
to get a tour. What is the cost of the tour compared to the cost of the MST?
• Each tour (bold) arc e is a shortcut for a set of tree (thin) arcs f1, …, fk (or simply coincides with a tree arc).
[Figure: six-node example (nodes 1–6); starting from node 1, the red bold tour arcs shortcut the thin MST arcs.]
Approximation guarantee for the algorithm (proof)
• Based on the triangle inequality,
c(e) ≤ c(f1) + … + c(fk)
E.g., c15 ≤ c13 + c35 , and c23 = c23 (the tour arc coincides with the tree arc).
• But each tree (thin) arc
is shortcut exactly twice. (**)
E.g., tree arc 3-5 is shortcut by tour arcs 1-5 and 5-6. The following chain of inequalities concludes the proof,
by using the facts we obtained so far:
cost(our tour) = Σ over bold arcs e of c(e)
             ≤ 2 · Σ over thin arcs f of c(f)   (by the Δ-inequality and (**))
             = 2 · cost(MST)
             ≤ 2 · cost(optimal TSP)            (by (*))
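To see the guarantee in action, here is a small numeric check of this chain on a random Euclidean instance (a sketch reusing the mst_tour function above; the optimum is brute-forced, so n must stay small):

```python
import itertools, math, random

def tour_cost(tour, cost):
    return sum(cost[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))

# Random points in the unit square: Euclidean distances automatically
# satisfy the triangle inequality.
random.seed(0)
n = 8
pts = [(random.random(), random.random()) for _ in range(n)]
cost = [[math.dist(p, q) for q in pts] for p in pts]

algo = tour_cost(mst_tour(cost), cost)
opt = min(tour_cost([0] + list(perm) + [0], cost)
          for perm in itertools.permutations(range(1, n)))
assert algo <= 2 * opt               # the proven 2-approximation guarantee
print(f"ratio = {algo / opt:.3f}")   # in practice usually well below 2
```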
Performance of TSP algorithms in practice
• A more sophisticated algorithm (which again uses the MST algorithm as a subroutine) guarantees a solution within a factor of 1.5 of the optimum (Christofides).
• For many discrete optimization problems, there are benchmarks of instances on which algorithms are tested.
• For TSP, such a benchmark is TSPLIB.
• On TSPLIB instances, Christofides’ algorithm outputs solutions which are on average 1.09 times the optimum. For comparison, the nearest neighbor algorithm outputs solutions which are on average 1.26 times the optimum.
• A good approximation factor often leads to good performance in practice.
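For reference, a minimal sketch of the nearest neighbor heuristic mentioned above (names are illustrative; unlike the MST-based algorithm, it carries no constant-factor guarantee under the triangle inequality alone):

```python
def nearest_neighbor_tour(cost, start=0):
    """Greedy heuristic: repeatedly move to the closest unvisited node."""
    n = len(cost)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda w: cost[tour[-1]][w])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour + [start]            # close the tour at the start node
```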