Post on 20-Dec-2015
1
Dynamic Programming
• Dynamic programming algorithms address problems whose solution is recursive in nature but has the following property: the direct implementation of the recursive solution results in identical recursive calls that are executed more than once.
• Dynamic programming implements such algorithms by evaluating the recurrence in a bottom-up manner, saving intermediate results that are later used in computing the desired solution.
2
Fibonacci Numbers
• Recursive definition:
  f(0) = 0,  f(1) = 1,  f(n) = f(n-1) + f(n-2) for n ≥ 2
• What is the recursive algorithm that computes Fibonacci numbers? What is its time complexity?
  – Note that it can be shown that f(n) ≈ (1/√5)·((1+√5)/2)^n
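To make the contrast with the exponential recursive algorithm concrete, here is a minimal bottom-up sketch in Python (the slides use pseudocode; Python is used here only for illustration):

```python
def fib(n):
    """Bottom-up Fibonacci: each value is computed exactly once, O(n) time."""
    if n < 2:
        return n
    prev, curr = 0, 1  # f(0), f(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr  # f(n) = f(n-1) + f(n-2)
    return curr
```

The naive recursive version recomputes f(n-2) inside both f(n) and f(n-1); the loop above saves each intermediate result instead.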
3
Computing the Binomial Coefficient
• Recursive Definition:
  C(n, k) = 1                           if k = 0 or k = n
  C(n, k) = C(n-1, k-1) + C(n-1, k)     if 0 < k < n
• Actual Value:
  C(n, k) = n! / (k! (n-k)!)
4
Computing the Binomial Coefficient
• What is the direct recursive algorithm for computing the binomial coefficient? How much does it cost?
  – Note that C(n, n/2) = n! / ((n/2)! (n/2)!) = Θ(2^n / √n)
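The recurrence above can be evaluated bottom-up (Pascal's triangle) so that each entry is computed once, giving O(nk) time instead of the exponential direct recursion. A Python sketch for illustration:

```python
def binomial(n, k):
    """C(n, k) via C(n, k) = C(n-1, k-1) + C(n-1, k), filled row by row."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        C[i][0] = 1                        # C(i, 0) = 1
        for j in range(1, min(i, k) + 1):  # C(i, i) = 1 falls out of the recurrence
            C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]
```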
5
Dynamic Programming
• Development of a dynamic programming solution to an optimization problem involves three steps:
  1. Characterize the structure of an optimal solution.
     • Optimal substructure: an optimal solution consists of sub-solutions that are themselves optimal.
     • Overlapping sub-problems.
  2. Recursively define the value of an optimal solution.
  3. Compute the value of an optimal solution in a bottom-up manner.
• Construct an optimal solution from the computed optimal value.
6
Longest Common Subsequence
• Problem Definition: Given two strings A and B over an alphabet Σ, determine the length of the longest subsequence that is common to A and B.
• A subsequence of A = a1a2…an is a string of the form ai1ai2…aik where 1 ≤ i1 < i2 < … < ik ≤ n.
• Example: Let Σ = {x, y, z}, A = xyxyxxzy, B = yxyyzxy, and C = zzyyxyz
  – LCS(A,B) = yxyzy.  Hence the length = 5.
  – LCS(B,C) = ?       Hence the length = ?
  – LCS(A,C) = ?       Hence the length = ?
7
Straight-Forward Solution
• Brute-force search– How many subsequences exist in a string of
length n?– How much time needed to check a string
whether it is a subsequence of another string of length m?
– What is the time complexity of the brute-force search algorithm of finding the length of the longest common subsequence of two strings of sizes n and m?
8
Dynamic Programming Solution
• Let L[i,j] denote the length of a longest common subsequence of a1a2…ai and b1b2…bj, which are prefixes of A and B of lengths n and m, respectively. Then
  L[i,j] = 0                           when i = 0 or j = 0
  L[i,j] = 1 + L[i-1,j-1]              when i > 0, j > 0, ai = bj
  L[i,j] = max(L[i-1,j], L[i,j-1])     when i > 0, j > 0, ai ≠ bj
9
LCS Algorithm
Algorithm LCS(A, B)
Input: A and B, strings of length n and m respectively
Output: length of an LCS of A and B

Initialize L[i,0] and L[0,j] to zero;
for i ← 1 to n do
  for j ← 1 to m do {
    if ai = bj then L[i,j] ← 1 + L[i-1,j-1]
    else L[i,j] ← max(L[i-1,j], L[i,j-1])
  }
return L[n,m];
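The pseudocode above transcribes directly into Python (an illustrative sketch, not part of the original slides):

```python
def lcs_length(A, B):
    """Length of the longest common subsequence, per the L[i,j] recurrence."""
    n, m = len(A), len(B)
    L = [[0] * (m + 1) for _ in range(n + 1)]  # row 0 and column 0 stay 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if A[i - 1] == B[j - 1]:
                L[i][j] = 1 + L[i - 1][j - 1]
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[n][m]
```

On the earlier example, lcs_length("xyxyxxzy", "yxyyzxy") returns 5, matching LCS = yxyzy.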
10
Example (Q7.5 pp. 220)
• Find the length of the longest common subsequence of A=xzyzzyx and B=zxyyzxz
11
Example (Cont.)

        x   z   y   z   z   y   x
    0   0   0   0   0   0   0   0
z   0
x   0
y   0
y   0
z   0
x   0
z   0
12
Complexity Analysis of LCS Algorithm
• What is the time and space complexity of the algorithm?
13
Matrix Chain Multiplication
• Assume matrices A, B, and C have dimensions 2×10, 10×2, and 2×10 respectively. The number of scalar multiplications using the standard matrix multiplication algorithm for
  – (A·B)·C is 2·10·2 + 2·2·10 = 80
  – A·(B·C) is 10·2·10 + 2·10·10 = 400
• Problem Statement: Find the order of multiplying n matrices in which the number of scalar multiplications is minimum.
14
Straight-Forward Solution
• Again, let us consider the brute-force method. We need to compute the number of different ways that we can parenthesize the product of n matrices.
  – e.g., how many different orderings do we have for the product of four matrices?
  – Let f(n) denote the number of ways to parenthesize the product M1M2…Mn.
• The last multiplication splits the chain as (M1M2…Mk)·(Mk+1Mk+2…Mn), so f(n) = Σ_{k=1}^{n-1} f(k)·f(n-k).
• What are f(1), f(2), and f(3)?
15
Catalan Numbers
• f(n) is the (n-1)-st Catalan number:
  f(n) = (1/n) · C(2(n-1), n-1)
• f(n) is approximately 4^n / n^1.5
16
Cost of Brute Force Method
• How many possibilities do we have for parenthesizing n matrices?
• How much does it cost to find the number of scalar multiplications for one parenthesized expression?
• Therefore, the total cost is
17
The Recursive Solution
• Since the number of columns of each matrix Mi is equal to the number of rows of Mi+1, we only need to specify the number of rows of all the matrices, plus the number of columns of the last matrix: r1, r2, …, rn+1 respectively (so Mi is an ri × ri+1 matrix).
• Let the cost of multiplying the chain Mi…Mj (denoted by Mi,j) be C[i,j].
• If k is an index between i+1 and j, what is the cost of multiplying Mi,j considering multiplying Mi,k-1 with Mk,j?
• Therefore, C[1,n] = min_{1 < k ≤ n} { C[1,k-1] + C[k,n] + r1·rk·rn+1 }
18
The Dynamic Programming Algorithm
C[1,1] C[1,2] C[1,3] C[1,4] C[1,5] C[1,6]
C[2,2] C[2,3] C[2,4] C[2,5] C[2,6]
C[3,3] C[3,4] C[3,5] C[3,6]
C[4,4] C[4,5] C[4,6]
C[5,5] C[5,6]
C[6,6]
19
MatChain Algorithm

Algorithm MatChain
Input: r[1..n+1] of positive integers (the matrix dimensions)
Output: least number of scalar multiplications required

for i := 1 to n do C[i,i] := 0;   // diagonal d0
for d := 1 to n-1 do              // for diagonals d1 to dn-1
  for i := 1 to n-d do {
    j := i + d;
    C[i,j] := ∞;
    for k := i+1 to j do
      C[i,j] := min{ C[i,j], C[i,k-1] + C[k,j] + r[i]·r[k]·r[j+1] };
  };
return C[1,n];
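The MatChain algorithm, adapted to Python's 0-based lists for illustration (the slide's r[i] corresponds to r[i-1] below):

```python
def mat_chain(r):
    """Least number of scalar multiplications to multiply the chain M1..Mn,
    where Mi is an r[i-1] x r[i] matrix (so r has n+1 entries).
    Fills C diagonal by diagonal, as in the MatChain algorithm."""
    n = len(r) - 1
    INF = float("inf")
    C = [[0] * (n + 1) for _ in range(n + 1)]  # C[i][j], 1-based indices
    for d in range(1, n):                      # diagonals d1 .. d(n-1)
        for i in range(1, n - d + 1):
            j = i + d
            C[i][j] = INF
            for k in range(i + 1, j + 1):
                C[i][j] = min(C[i][j],
                              C[i][k - 1] + C[k][j] + r[i - 1] * r[k - 1] * r[j])
    return C[1][n]
```

On the 2×10, 10×2, 2×10 chain from the earlier slide, mat_chain([2, 10, 2, 10]) returns 80, the cost of (A·B)·C.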
20
Example (Q7.11 pp. 221-222)
• Given as input 2 , 3 , 6 , 4 , 2 , 7 compute the minimum number of scalar multiplications:
21
Time and Space Complexity of MatChain
• Time Complexity:

  T(n) = Σ_{d=1}^{n-1} Σ_{i=1}^{n-d} Σ_{k=i+1}^{i+d} c
       = c · Σ_{d=1}^{n-1} Σ_{i=1}^{n-d} d
       = c · Σ_{d=1}^{n-1} d(n-d)
       = c · [ n·(n-1)n/2 - (n-1)n(2n-1)/6 ]
       = Θ(n³)

• Space Complexity: Θ(n²)
22
Assembly-Line Scheduling
• Two parallel assembly lines in a factory, lines 1 and 2
• Each line has n stations Si,1…Si,n
• For each j, S1, j does the same thing as S2, j , but it may take a different amount of assembly time ai, j
• Transferring away from line i after stage j costs ti, j
• Also entry time ei and exit time xi at beginning and end
23
Assembly-Lines
• Brute-force algorithm
  – Time complexity: O(n·2^n)
24
Finding Subproblem
• Pick some convenient stage of the process
  – Say, just before the last station
• What's the next decision to make?
  – Whether the last station should be S1,n or S2,n
• What do you need to know to decide which option is better?
  – What the fastest times are for S1,n and S2,n
25
Recursive Formula for Subproblem

Fastest time to any given station
  = min( fastest time through the previous station on the same line,
         fastest time through the previous station on the other line
           + the time it takes to switch lines )
  + the assembly time at this station
26
Recursive Formula (II)
• Let fi[j] denote the fastest possible time to get the chassis through Si,j
• We have the following formulas:
  f1[1] = e1 + a1,1
  f1[j] = min( f1[j-1] + a1,j ,  f2[j-1] + t2,j-1 + a1,j )
  (and symmetrically for f2[j])
• Total time:
  f* = min( f1[n] + x1 ,  f2[n] + x2 )
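A Python sketch of this recurrence (the station times in the test below are illustrative sample data, not from the slides; the symmetric f2 formula is applied as well):

```python
def fastest_way(a, t, e, x):
    """Assembly-line scheduling for two lines.
    a[i][j]: assembly time at station j of line i (i in {0, 1})
    t[i][j]: time to transfer away from line i after station j
    e[i], x[i]: entry and exit times of line i
    Returns f* = min(f1[n] + x1, f2[n] + x2), per the recurrence above."""
    n = len(a[0])
    f1 = e[0] + a[0][0]   # f1[1]
    f2 = e[1] + a[1][0]   # f2[1]
    for j in range(1, n):
        f1, f2 = (min(f1 + a[0][j], f2 + t[1][j - 1] + a[0][j]),
                  min(f2 + a[1][j], f1 + t[0][j - 1] + a[1][j]))
    return min(f1 + x[0], f2 + x[1])
```

Only the two values for the previous station are needed at each step, so the sketch keeps O(1) extra space (the full tables f1[1..n], f2[1..n] would be kept if the optimal route itself were to be reconstructed).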
27
28
Analysis + an example
• The only loop is lines 3-13, which iterates n-1 times: the algorithm is O(n).
• The array l records which line is used for each station number.
29
All-Pairs Shortest Paths
• Problem Statement: Let G = (V, E) be a directed graph in which each edge (i, j) has a nonnegative length l[i, j]. The all-pairs shortest path problem is to find the length of the shortest path from each vertex to all other vertices.
  – The set of vertices V = {1, 2, …, n}
  – l[i,j] = ∞ if (i, j) ∉ E, i ≠ j
• Brute force algorithms?
30
Dynamic Programming Solution
• Optimal Substructure Property: A shortest path from a vertex to another vertex consists of the concatenation of shortest sub-paths of the intermediate vertices.
• Definition: A k-path from vertex i to vertex j is a path that does not pass through any vertex in {k + 1, k + 2,…, n}– What is a 0-path? 1-path? … n-path?
o A nice property: every r-path is also an s-path whenever r ≤ s.
31
Dynamic Programming Solution
• Let p be the shortest path from i to j containing only vertices from the set {1, …, k}.
  – If vertex k is not in p, then a shortest (k-1)-path is also a shortest k-path.
  – If k is an intermediate vertex in p, then we break p down into p1 (i to k) and p2 (k to j).

  d(k)[i,j] = length[i,j]                                     if k = 0
  d(k)[i,j] = min( d(k-1)[i,j], d(k-1)[i,k] + d(k-1)[k,j] )   if 1 ≤ k ≤ n

[Figure: path p from i to j through intermediate vertex k, split into p1 and p2]
32
Floyd’s Algorithm
Algorithm Floyd
Input: an n×n matrix length[1..n, 1..n] such that length[i,j] is the weight of edge (i,j) in a directed graph G = ({1, 2, …, n}, E)
Output: a matrix D with D[i,j] = the length of a shortest path from i to j

D = length;   // copy the input matrix length into D
for k = 1 to n do
  for i = 1 to n do
    for j = 1 to n do
      D[i,j] = min{ D[i,j], D[i,k] + D[k,j] }
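Floyd's algorithm in Python, for illustration (0-based indices; the test uses the 4-vertex example from the next slide):

```python
def floyd(length):
    """All-pairs shortest paths. length[i][j] is the edge weight,
    inf if there is no edge; returns the matrix D of shortest distances.
    Three nested loops over n vertices: Theta(n^3) time."""
    n = len(length)
    D = [row[:] for row in length]  # copy the input matrix into D
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D
```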
33
Example
[Figure: a directed graph on vertices 1-4; the edge lengths are the finite off-diagonal entries of D(0) below]

34

Example (Cont.)

D(0)    1    2    3    4
  1     0   12    5    ∞
  2    15    0    8   11
  3     ∞    4    0    2
  4     1    5   11    0

D(1)    1    2    3    4
  1     0   12    5    ∞
  2    15    0    8   11
  3     ∞    4    0    2
  4     1    5    6    0

… …

D(4)    1    2    3    4
  1     0    9    5    7
  2    11    0    8   10
  3     3    4    0    2
  4     1    5    6    0
35
Time and Space Complexity
• Time Complexity: Θ(n³)
• Space Complexity: Θ(n²)
36
Greedy vs. DP
Greedy:
• Make a choice at each step.
• Make the choice before solving the subproblems.
• Solve top-down.

Dynamic programming:
• Make a choice at each step.
• The choice depends on knowing optimal solutions to subproblems, so solve the subproblems first.
• Solve bottom-up.
37
Coin changing
• The greedy algorithm works fine for this coin set: [100, 25, 10, 5, 1]
  – Prove the greedy-choice property
  – See Rosen (section 2.1, pages 128-129)
• The greedy method does not work in all cases
  – Coin set = {8, 5, 1}, change = 10:
    • Greedy solution = {8, 1, 1}
    • Optimal solution = {5, 5}
  – What if the coin set = {10, 6, 1} and change = 12?
38
Coin Changing: Dyn. Prog.
• A = 12, denom = [10, 6, 1]?
• What could be the sub-problems? Described by which parameters?
• How do we solve sub-problems?

  c(i, j) = c(i-1, j)                                  if denom[i] > j
  c(i, j) = min{ c(i-1, j), 1 + c(i, j - denom[i]) }   if denom[i] ≤ j

  (c(i, j) = minimum number of coins making change j using only the first i denominations)

• How do we solve the trivial sub-problems?
• In which order do I have to solve sub-problems?
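The c(i, j) recurrence answers both questions: the trivial row is i = 0 (no denominations), and the table is filled row by row so that c(i-1, ·) and the earlier entries of c(i, ·) already exist. A Python sketch:

```python
def min_coins(amount, denom):
    """c[i][j]: fewest coins for amount j using the first i denominations
    (unbounded supply), per the recurrence above."""
    INF = float("inf")
    n = len(denom)
    c = [[0] * (amount + 1) for _ in range(n + 1)]
    for j in range(1, amount + 1):
        c[0][j] = INF  # no denominations: amount j > 0 is impossible
    for i in range(1, n + 1):
        for j in range(1, amount + 1):
            if denom[i - 1] > j:
                c[i][j] = c[i - 1][j]
            else:
                c[i][j] = min(c[i - 1][j], 1 + c[i][j - denom[i - 1]])
    return c[n][amount]
```

For the slide's instance, min_coins(12, [10, 6, 1]) returns 2 (two 6-coins), where greedy would use 10 + 1 + 1.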
39
0/1 Knapsack Problem
• The greedy approach does not give an optimal solution:
  – n = 3, W = 30
  – weights = (20, 10, 5)
  – values = (180, 80, 50)
• Ratios = (180/20, 80/10, 50/5) = (9, 8, 10)
• Greedy solution (highest ratio first): (1, 0, 1), value = 180 + 50 = 230
• Optimal solution: (1, 1, 0), value = 180 + 80 = 260
40
0-1 Knapsack problem: brute-force approach
Let’s first solve this problem with a straightforward algorithm
• Since there are n items, there are 2^n possible combinations of items.
• We go through all combinations and find the one with the greatest total value and with total weight less than or equal to W.
• The running time will be O(2^n).
41
0-1 Knapsack problem: brute-force approach
• Can we do better?
• Yes, with an algorithm based on dynamic programming
• We need to carefully identify the subproblems
Let's try this: if items are labeled 1..n, then a subproblem would be to find an optimal solution for Sk = {items labeled 1, 2, …, k}
42
Defining a Subproblem
If items are labeled 1..n, then a subproblem would be to find an optimal solution for Sk
= {items labeled 1, 2, .. k}
• Let’s add another parameter: w, which will represent the exact weight for each subset of items
• The subproblem is to compute B[k,w]
43
Recursive Formula
• The best subset of Sk that has total weight w either contains item k or it does not.
• First case: wk > w. Item k can't be part of the solution, since if it were, the total weight would be > w, which is unacceptable.
• Second case: wk ≤ w. Then item k can be in the solution, and we choose the case with the greater value:

  B[k, w] = B[k-1, w]                              if wk > w
  B[k, w] = max{ B[k-1, w], B[k-1, w-wk] + bk }    otherwise
44
The 0/1 Knapsack Algorithm
• B[k,w] = value of the best selection from items 1..k with total weight at most w
• Base case: k = 0, no items to choose from; the total value is 0.
• The answer will be the largest (rightmost) value in the last row (k = n).
• Running time: O(nW).
• Note: not a polynomial-time algorithm if W is large.

  B[k, w] = B[k-1, w]                              if wk > w
  B[k, w] = max{ B[k-1, w], B[k-1, w-wk] + bk }    otherwise
45
The 0/1 Knapsack Algorithm
Algorithm 0-1Knapsack(S, W):
Input: set S of n items, each with benefit bi and weight wi; maximum weight W
Output: value of the best subset with weight ≤ W

for w ← 0 to W do
  B[0,w] ← 0
for k ← 1 to n do {
  for w ← 0 to wk - 1 do
    B[k,w] ← B[k-1,w]                  // item k cannot fit
  for w ← wk to W do
    B[k,w] ← max(B[k-1,w], B[k-1,w-wk] + bk)
}
return B[n,W];
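A direct Python transcription of the table-filling algorithm, for illustration:

```python
def knapsack(weights, values, W):
    """0/1 knapsack: B[k][w] = best value from items 1..k with weight <= w."""
    n = len(weights)
    B = [[0] * (W + 1) for _ in range(n + 1)]  # row k = 0 is all zeros
    for k in range(1, n + 1):
        wk, bk = weights[k - 1], values[k - 1]
        for w in range(W + 1):
            if wk > w:
                B[k][w] = B[k - 1][w]          # item k cannot fit
            else:
                B[k][w] = max(B[k - 1][w], B[k - 1][w - wk] + bk)
    return B[n][W]
```

On the worked example that follows (items (2,3), (3,4), (4,5), (5,6) with W = 5) this returns 7, and on the earlier greedy counterexample it returns the optimal 260.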
46
Example
Let’s run our algorithm on the following data:
n = 4 (# of elements)W = 5 (max weight)Elements (weight, benefit):(2,3), (3,4), (4,5), (5,6)
47
Example
for w ← 0 to W do
  B[0,w] ← 0

w\k   0   1   2   3   4
 0    0
 1    0
 2    0
 3    0
 4    0
 5    0
48
Example
B[k,w] ← max(B[k-1,w], B[k-1,w-wk] + bk)

Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)

w\k   0   1   2   3   4
 0    0   0   0   0   0
 1    0   0   0   0   0
 2    0   3   3   3   3
 3    0   3   4   4   4
 4    0   3   4   5   5
 5    0   3   7   7   7
49
Improvements
• Running time: O(nW).
• Note: not a polynomial-time algorithm if W is large (e.g., if W = n!, this is worse than 2^n).
• Improvement:
  – B[k,w] is computed from B[k-1,w] and B[k-1,w-wk].
  – Start from B[n,W] and see which entries B[i,j] actually need to be computed.
  – Compute only those.
  – There are at most 1 + 2 + 2² + 2³ + … + 2^(n-1) = 2^n - 1 of them.
• Worst-case complexity: O(min{nW, 2^n})
50
Summary
• 3 steps in a dynamic programming solution:
  1. Characterize the structure of an optimal solution.
  2. Recursively define the value of an optimal solution.
  3. Compute the value of an optimal solution in a bottom-up manner.
• Construct an optimal solution.
• We discussed DP solutions for (a) LCS, (b) MCM, (c) Production line, (d) All pairs shortest, (e) Knapsack, and (f) Coin change.