03. Dynamic Programming

Post on 14-Jun-2015


Kamalesh Karmakar, Assistant Professor, Dept. of C.S.E., Meghnad Saha Institute of Technology

A dynamic-programming algorithm solves every subsubproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subsubproblem is encountered.

Dynamic programming is typically applied to optimization problems. In such problems there can be many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value. We call such a solution an optimal solution to the problem, as opposed to the optimal solution, since there may be several solutions that achieve the optimal value.

The development of a dynamic-programming algorithm can be broken into a sequence of four steps.

1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from computed information.
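As a minimal illustration of the bottom-up tabulation in step 3 (using Fibonacci numbers, a toy example not taken from these notes), each subproblem is solved exactly once and its answer saved in a table:

```python
def fib(n):
    """Compute the nth Fibonacci number bottom-up.

    table[i] holds F(i); each entry is computed exactly once
    from previously saved answers, never recomputed.
    """
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Without the table, the naive recursion would recompute the same subproblems exponentially many times; with it, the work is linear in n.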

A product of matrices is fully parenthesized if it is either a single matrix or the product of two fully parenthesized matrix products, surrounded by parentheses. Matrix multiplication is associative, and so all parenthesizations yield the same product. For example, if the chain of matrices is A1, A2, A3, A4, the product A1 A2 A3 A4 can be fully parenthesized in five distinct ways:

(A1(A2(A3A4))), (A1((A2A3)A4)), ((A1A2)(A3A4)), ((A1(A2A3))A4), (((A1A2)A3)A4).

The matrix-chain multiplication problem can be stated as follows: given a chain A1, A2, . . . , An of n matrices, where for i = 1, 2, . . . , n, matrix Ai has dimension pi−1 × pi, fully parenthesize the product A1 A2 · · · An in a way that minimizes the number of scalar multiplications.

Denote the number of alternative parenthesizations of a sequence of n matrices by P(n). When n = 1, there is just one matrix and therefore only one way to fully parenthesize the matrix product. When n ≥ 2, a fully parenthesized matrix product is the product of two fully parenthesized matrix subproducts, and the split between the two subproducts may occur between the kth and (k + 1)st matrices for any k = 1, 2, . . . , n − 1. Thus, we obtain the recurrence

P(n) = 1                                 if n = 1,
P(n) = Σ k=1..n−1  P(k) · P(n − k)       if n ≥ 2.

The minimum number of scalar multiplications to multiply the 6 matrices in the example is m[1, 6] = 15,125.

In the example, the call PRINT-OPTIMAL-PARENS(s, 1, 6) prints the parenthesization ((A1(A2A3))((A4A5)A6)).
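The recurrence and the reconstruction step can be sketched in Python (a reconstruction of the textbook MATRIX-CHAIN-ORDER and PRINT-OPTIMAL-PARENS procedures, not code from these notes; the six-matrix dimensions below are assumed, since the notes do not list them):

```python
def matrix_chain_order(p):
    """Bottom-up DP; matrix Ai has dimension p[i-1] x p[i].

    m[i][j] = minimum scalar multiplications to compute Ai..Aj;
    s[i][j] = the split point k achieving that minimum.
    """
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):           # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

def print_optimal_parens(s, i, j):
    """Rebuild the optimal parenthesization from the split table s."""
    if i == j:
        return "A%d" % i
    k = s[i][j]
    return "(" + print_optimal_parens(s, i, k) \
               + print_optimal_parens(s, k + 1, j) + ")"

# Assumed dimensions p0..p6 for the six-matrix example
p = [30, 35, 15, 5, 10, 20, 25]
m, s = matrix_chain_order(p)
```

With these assumed dimensions the sketch reproduces the figures quoted above: m[1][6] = 15125, and the recovered parenthesization is ((A1(A2A3))((A4A5)A6)).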

• A different dynamic-programming formulation solves the all-pairs shortest-paths problem on a directed graph G = (V, E).

• The Floyd-Warshall algorithm runs in O(V³) time.

• An intermediate vertex of a simple path p = v1, v2, . . . , vl is any vertex of p other than v1 or vl, that is, any vertex in the set {v2, v3, . . . , vl−1}.

Let d(k)ij be the weight of a shortest path from vertex i to vertex j for which all intermediate vertices are in the set {1, 2, . . . , k}. When k = 0, a path from vertex i to vertex j with no intermediate vertex numbered higher than 0 has no intermediate vertices at all. Such a path has at most one edge, and hence d(0)ij = wij. A recursive definition following the above discussion is given by

d(k)ij = wij                                          if k = 0,
d(k)ij = min( d(k−1)ij , d(k−1)ik + d(k−1)kj )        if k ≥ 1.

Its input is an n × n matrix W of edge weights, where wij = 0 if i = j, wij is the weight of directed edge (i, j) if i ≠ j and (i, j) ∈ E, and wij = ∞ otherwise. The procedure returns the matrix D(n) of shortest-path weights.

There are a variety of different methods for constructing shortest paths in the Floyd-Warshall algorithm. One way is to compute the matrix D of shortest-path weights and then construct the predecessor matrix Π from the D matrix. This method can be implemented to run in O(n³) time.

We can compute the predecessor matrix Π “on-line” just as the Floyd-Warshall algorithm computes the matrices D(k). Specifically, we compute a sequence of matrices Π(0), Π(1), . . . , Π(n), where Π = Π(n) and π(k)ij is defined to be the predecessor of vertex j on a shortest path from vertex i with all intermediate vertices in the set {1, 2, . . . , k}.

We can give a recursive formulation of π(k)ij. When k = 0, a shortest path from i to j has no intermediate vertices at all. Thus,

π(0)ij = NIL    if i = j or wij = ∞,
π(0)ij = i      if i ≠ j and wij < ∞.
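The two recurrences above can be sketched together in Python (an illustration, not code from these notes; vertices are 0-indexed here rather than 1 . . n, and the example graph is assumed):

```python
INF = float("inf")

def floyd_warshall(W):
    """W[i][j]: edge weight, 0 on the diagonal, INF when no edge.

    Returns (D, Pi): shortest-path weights and predecessor matrix.
    """
    n = len(W)
    D = [row[:] for row in W]
    # pi(0): predecessor of j on a path from i with no intermediates
    Pi = [[None if i == j or W[i][j] == INF else i for j in range(n)]
          for i in range(n)]
    for k in range(n):                      # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    Pi[i][j] = Pi[k][j]     # path to j now goes through k
    return D, Pi

# Assumed example: edges 0->1 (weight 3), 1->2 (weight 2), 0->2 (weight 8)
W = [[0, 3, 8],
     [INF, 0, 2],
     [INF, INF, 0]]
D, Pi = floyd_warshall(W)
```

In this example the direct edge 0 → 2 of weight 8 loses to the path through vertex 1, so D[0][2] = 5 and Pi[0][2] = 1.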


Bellman-Ford

The Bellman-Ford algorithm solves the single-source shortest-paths problem in the general case in which edge weights may be negative. Given a weighted, directed graph G = (V, E) with source s and weight function w : E → R, the Bellman-Ford algorithm returns a boolean value indicating whether or not there is a negative-weight cycle that is reachable from the source.

If there is such a cycle, the algorithm indicates that no solution exists. If there is no such cycle, the algorithm produces the shortest paths and their weights.
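A compact Python sketch along these lines (illustrative, not the notes' own code): relax every edge |V| − 1 times, then scan once more to detect a reachable negative-weight cycle:

```python
def bellman_ford(vertices, edges, s):
    """edges: list of (u, v, w) triples; returns (ok, d, pred).

    ok is False when a negative-weight cycle is reachable from s.
    """
    d = {v: float("inf") for v in vertices}
    pred = {v: None for v in vertices}
    d[s] = 0
    for _ in range(len(vertices) - 1):      # |V| - 1 relaxation passes
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
                pred[v] = u
    for u, v, w in edges:                   # any edge still relaxable?
        if d[u] + w < d[v]:
            return False, d, pred           # negative cycle reachable
    return True, d, pred
```

After |V| − 1 passes every shortest path (which has at most |V| − 1 edges) has been fully relaxed, so a further improvement is possible only if a reachable negative-weight cycle exists.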

Dijkstra’s Algorithm

Dijkstra’s algorithm solves the single-source shortest-paths problem on a weighted, directed graph G = (V, E) for the case in which all edge weights are nonnegative. In this section, therefore, we assume that w(u, v) ≥ 0 for each edge (u, v) ∈ E. As we shall see, with a good implementation, the running time of Dijkstra’s algorithm is lower than that of the Bellman-Ford algorithm.

Dijkstra’s algorithm maintains a set S of vertices whose final shortest-path weights from the source s have already been determined. The algorithm repeatedly selects the vertex u ∈ V − S with the minimum shortest-path estimate, adds u to S, and relaxes all edges leaving u. In the following implementation, we use a min-priority queue Q of vertices, keyed by their d values.
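The loop just described can be sketched with Python's heapq as the min-priority queue (an illustration under the nonnegative-weight assumption, not the notes' own code; the example graph is assumed):

```python
import heapq

def dijkstra(adj, s):
    """adj: dict vertex -> list of (neighbor, weight), weights >= 0."""
    d = {v: float("inf") for v in adj}
    d[s] = 0
    S = set()                  # vertices with final shortest-path weight
    Q = [(0, s)]               # min-priority queue keyed by d values
    while Q:
        du, u = heapq.heappop(Q)
        if u in S:
            continue           # stale queue entry; u already finalized
        S.add(u)
        for v, w in adj[u]:    # relax all edges leaving u
            if du + w < d[v]:
                d[v] = du + w
                heapq.heappush(Q, (d[v], v))
    return d

# Assumed example graph
adj = {"s": [("a", 10), ("b", 5)],
       "a": [("c", 1)],
       "b": [("a", 3), ("c", 9)],
       "c": []}
d = dijkstra(adj, "s")
```

Rather than implementing DECREASE-KEY, this sketch pushes a fresh entry on each relaxation and skips stale entries on pop, a common simplification with the same asymptotic behavior for a binary heap.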
