Chapter 15: Dynamic Programming II

Page 1: Chapter 15: Dynamic Programming II


Page 2: Chapter 15: Dynamic Programming II


Matrix Multiplication

• Let A be a matrix of dimension p x q and B be a matrix of dimension q x r

• Then, if we multiply matrices A and B, we obtain a resulting matrix C = AB whose dimension is p x r

• We can obtain each entry in C using q operations, so computing all of C takes pqr operations in total
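To make the operation count concrete, here is a minimal Python sketch (not part of the original slides) of the straightforward triple-loop multiplication; each of the p·r entries takes q multiplications, giving pqr in total:

    def mat_mul(A, B):
        # A is p x q, B is q x r; the product C = AB is p x r
        p, q, r = len(A), len(B), len(B[0])
        C = [[0] * r for _ in range(p)]
        for i in range(p):
            for j in range(r):
                for k in range(q):      # q multiplications per entry of C
                    C[i][j] += A[i][k] * B[k][j]
        return C                        # p*q*r multiplications in total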

Page 3: Chapter 15: Dynamic Programming II


Matrix Multiplication

Example:

How to obtain c1,2?

Let A be a 2 x 3 matrix and B be a 3 x 2 matrix, so that C = AB is 2 x 2. Then

c1,2 = a1,1 b1,2 + a1,2 b2,2 + a1,3 b3,2

Page 4: Chapter 15: Dynamic Programming II


Matrix Multiplication

• In fact, ((A1A2)A3) = (A1(A2A3)); that is, matrix multiplication is associative

Any way of placing the parentheses gives the same result

E.g., (((A1A2)A3)A4) = ((A1A2)(A3A4))

= (A1((A2A3)A4)) = ((A1(A2A3))A4)

= (A1(A2(A3A4)))

Page 5: Chapter 15: Dynamic Programming II


Matrix Multiplication

Question: Why do we bother with this?

Because different computation sequences may use different numbers of operations!

E.g., Let the dimensions of A1, A2, A3 be:

1x100, 100x1, 1x100, respectively

#operations to get ((A1A2)A3) = 100 + 100 = 200

#operations to get (A1(A2A3)) = 10000 + 10000 = 20000
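A quick Python check of these two counts (a sketch; cost(p, q, r) is just the pqr rule from the previous slide):

    def cost(p, q, r):
        # operations to multiply a (p x q) matrix by a (q x r) matrix
        return p * q * r

    # ((A1 A2) A3): A1 A2 is 1x100 times 100x1, giving a 1x1 result
    print(cost(1, 100, 1) + cost(1, 1, 100))      # 100 + 100 = 200
    # (A1 (A2 A3)): A2 A3 is 100x1 times 1x100, giving a 100x100 result
    print(cost(100, 1, 100) + cost(1, 100, 100))  # 10000 + 10000 = 20000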

Page 6: Chapter 15: Dynamic Programming II


Lemma: Suppose that the way to multiply B1,B2,…,Bn with the minimum #operations is to:

(i) first, obtain B1B2 … Bx

(ii) then, obtain Bx+1 … Bn

(iii) finally, multiply the matrices of part (i) and part (ii)

Then, the matrices in part (i) and part (ii) must be obtained with min #operations

Optimal Substructure (allows recursion)

Page 7: Chapter 15: Dynamic Programming II


Let fi,j denote the min #operations to obtain the product AiAi+1 … Aj

fi,i = 0

Let rk and ck denote #rows and #cols of Ak

Then, we have:

Optimal Substructure

Lemma: For any j > i,

fi,j = minx=i,…,j-1 { fi,x + fx+1,j + ri cx cj }

Page 8: Chapter 15: Dynamic Programming II


Define a function Compute_F(i,j) as follows:

Compute_F(i, j) /* Finding fi,j */
1. if (i == j) return 0;
2. m = ∞;
3. for (x = i, i+1, …, j-1) {
      g = Compute_F(i,x) + Compute_F(x+1,j) + ri cx cj;
      if (g < m) m = g;
   }
4. return m;

Recursive-Matrix-Chain
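A direct Python transcription of Compute_F (a sketch; it assumes the dimensions are passed as 1-indexed lists r and c with r[k], c[k] = #rows, #cols of Ak, a representation chosen here just for illustration):

    import math

    def compute_F(i, j, r, c):
        if i == j:
            return 0
        m = math.inf
        for x in range(i, j):                     # x = i, i+1, ..., j-1
            g = (compute_F(i, x, r, c) + compute_F(x + 1, j, r, c)
                 + r[i] * c[x] * c[j])
            if g < m:
                m = g
        return m

    # The 1x100, 100x1, 1x100 example; index 0 is a dummy for 1-based indexing
    r = [None, 1, 100, 1]
    c = [None, 100, 1, 100]
    print(compute_F(1, 3, r, c))                  # 200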

Page 9: Chapter 15: Dynamic Programming II


The recursion tree for the computation of RECURSIVE-MATRIX-CHAIN(p, 1, 4) (figure omitted)

Page 10: Chapter 15: Dynamic Programming II


Time Complexity

• Question: Time to get Compute_F(1,n)?

• By the substitution method, we can show that:
• Running time = Ω(3^n)
• Running time = O(n·3^n)

The recurrence:

T(1) ≥ 1
T(n) ≥ 1 + Σk=1…n-1 ( T(k) + T(n-k) + 1 )   for n > 1

which gives

T(n) ≥ 2 Σi=1…n-1 T(i) + n

Page 11: Chapter 15: Dynamic Programming II


Counting the number of parenthesizations

Let P(n) be the number of ways to fully parenthesize a product of n matrices. Then

P(n) = 1                          if n = 1
P(n) = Σk=1…n-1 P(k) P(n-k)       if n ≥ 2

whose solution is the Catalan number

P(n) = C(2n-2, n-1) / n = Ω( 4^n / n^(3/2) )

• Remark: On the other hand, the #operations for each possible way of placing the parentheses is computed at most once, so Running time = O( C(2n-2, n-1) / n )   (Catalan Number)

Page 12: Chapter 15: Dynamic Programming II


Overlapping Subproblems

Here, we can see that Compute_F(i,j) and Compute_F(i,j+1) share many COMMON subproblems: Compute_F(i,i+1), …, Compute_F(i,j-1)

So, in our recursive algorithm, there are many redundant computations!

Question: Can we avoid it?

Page 13: Chapter 15: Dynamic Programming II


Bottom-Up Approach

• We notice that fi,j depends only on fx,y with |x-y| < |i-j|

• Let us create a 2D table F to store all fi,j values once they are computed

• Then, compute fi,j for j-i = 1,2,…,n-1

Page 14: Chapter 15: Dynamic Programming II


Bottom-Up Approach

BottomUp_F( ) /* Finding min #operations */
1. for j = 1, 2, …, n, set F[j, j] = 0;
2. for (length = 1, 2, …, n-1) {
      Compute F[i, i+length] for all i;   // Based on F[x,y] with |x-y| < length
   }
3. return F[1, n];

Running Time = Θ(n^3)
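A runnable Python sketch of this bottom-up order (same 1-indexed r, c representation as in the earlier recursive sketch):

    def bottom_up_F(r, c):
        n = len(r) - 1                            # matrices A1 .. An
        F = [[0] * (n + 1) for _ in range(n + 1)]
        for length in range(1, n):                # j - i = 1, 2, ..., n-1
            for i in range(1, n - length + 1):
                j = i + length
                F[i][j] = min(F[i][x] + F[x + 1][j] + r[i] * c[x] * c[j]
                              for x in range(i, j))
        return F[1][n]

    print(bottom_up_F([None, 1, 100, 1], [None, 100, 1, 100]))   # 200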

Page 15: Chapter 15: Dynamic Programming II


Example:

A1 : 30 x 35   (p0 x p1)
A2 : 35 x 15   (p1 x p2)
A3 : 15 x 5    (p2 x p3)
A4 : 5 x 10    (p3 x p4)
A5 : 10 x 20   (p4 x p5)
A6 : 20 x 25   (p5 x p6)

Page 16: Chapter 15: Dynamic Programming II


The m and s table computed by MATRIX-CHAIN-ORDER for n = 6

Optimal Solution: ((A1 (A2 A3)) ((A4 A5) A6))

Page 17: Chapter 15: Dynamic Programming II


Example

m[2,5] = min {
   m[2,2] + m[3,5] + p1 p2 p5 = 0 + 2500 + 35·15·20 = 13000,
   m[2,3] + m[4,5] + p1 p3 p5 = 2625 + 1000 + 35·5·20 = 7125,
   m[2,4] + m[5,5] + p1 p4 p5 = 4375 + 0 + 35·10·20 = 11375
} = 7125

Page 18: Chapter 15: Dynamic Programming II


MATRIX_CHAIN_ORDER

MATRIX_CHAIN_ORDER(p)
   n ← length[p] − 1
   for i ← 1 to n
       do m[i, i] ← 0
   for l ← 2 to n
       do for i ← 1 to n − l + 1
              do j ← i + l − 1
                 m[i, j] ← ∞
                 for k ← i to j − 1
                     do q ← m[i, k] + m[k+1, j] + pi-1 pk pj
                        if q < m[i, j]
                           then m[i, j] ← q
                                s[i, j] ← k
   return m and s
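The same algorithm as runnable Python (a sketch following the convention above that Ai has dimensions p[i-1] x p[i]); on the dimension sequence 30, 35, 15, 5, 10, 20, 25 from the earlier example it returns m[1][6] = 15125 and reconstructs the optimal solution ((A1 (A2 A3)) ((A4 A5) A6)) shown earlier:

    import math

    def matrix_chain_order(p):
        n = len(p) - 1
        m = [[0] * (n + 1) for _ in range(n + 1)]
        s = [[0] * (n + 1) for _ in range(n + 1)]
        for l in range(2, n + 1):                 # l = chain length
            for i in range(1, n - l + 2):
                j = i + l - 1
                m[i][j] = math.inf
                for k in range(i, j):
                    q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                    if q < m[i][j]:
                        m[i][j] = q
                        s[i][j] = k               # best split point for Ai..Aj
        return m, s

    def parens(s, i, j):
        # rebuild the optimal parenthesization from the s table
        if i == j:
            return "A%d" % i
        k = s[i][j]
        return "(%s %s)" % (parens(s, i, k), parens(s, k + 1, j))

    m, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
    print(m[1][6], parens(s, 1, 6))   # 15125 ((A1 (A2 A3)) ((A4 A5) A6))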

Page 19: Chapter 15: Dynamic Programming II


Remarks

• Again, a slight change in the algorithm allows us to get the exact sequence of steps (or the parentheses) that achieves the minimum number of operations

• Also, we can make minor changes to the recursive algorithm and obtain a memoized version (whose running time is O(n^3))

Page 20: Chapter 15: Dynamic Programming II


When should we apply DP?

• Optimal substructure: an optimal solution to the problem contains optimal solutions to subproblems.
– Example: the matrix-chain multiplication problem

• Overlapping subproblems: a recursive algorithm revisits the same subproblem over and over again.

Page 21: Chapter 15: Dynamic Programming II


Common pattern in discovering optimal substructure

1. Show that a solution to the problem consists of making a choice, which leaves one or more subproblems to solve.

2. Suppose that for a given problem, you are given the choice that leads to an optimal solution. You need not be concerned with how to determine this choice; just assume that it has been given to you.

3. Given this choice, determine which subproblems ensue and how to best characterize the resulting space of subproblems.

Page 22: Chapter 15: Dynamic Programming II


Common pattern in discovering optimal substructure

4. Show that the solutions to the subproblems used within the optimal solution must themselves be optimal, usually via a “cut-and-paste” argument.

Page 23: Chapter 15: Dynamic Programming II


Optimal substructure

• Optimal substructure varies across problem domains in two ways:

1. how many subproblems are used in an optimal solution to the original problem, and

2. how many choices we have in determining which subproblem(s) to use in an optimal solution.

• Informally, running time depends on (# of subproblems overall) x (# of choices).

Page 24: Chapter 15: Dynamic Programming II


Refinement

• One should be careful not to assume that optimal substructure applies when it does not. Consider the following two problems, in which we are given a directed graph G = (V, E) and vertices u, v ∈ V.
– Unweighted shortest path: find a path from u to v consisting of the fewest edges. Good for dynamic programming.
– Unweighted longest simple path: find a simple path from u to v consisting of the most edges. Not good for dynamic programming.

Page 25: Chapter 15: Dynamic Programming II


Shortest path

• Shortest path has optimal substructure.

• Suppose p is a shortest path u -> v.
• Let w be any vertex on p.
• Let p1 be the portion of p going u -> w.
• Then p1 is a shortest path u -> w: otherwise, splicing a shorter u -> w path into p would yield a shorter u -> v path (cut-and-paste).

Page 26: Chapter 15: Dynamic Programming II


Longest simple path

• Does longest path have optimal substructure?

• Consider q -> r -> t, a longest simple path q -> t in a four-vertex example graph on {q, r, s, t} (figure omitted). Are its subpaths longest simple paths? No!

• Longest simple path q -> r is q -> s -> t -> r.

• Longest simple path r -> t is r -> q -> s -> t.

• Not only is there no optimal substructure here: we cannot even assemble a legal solution from solutions to subproblems, since combining the two subpaths above yields a walk that is not simple.

Page 27: Chapter 15: Dynamic Programming II


Overlapping subproblems

• These occur when a recursive algorithm revisits the same problem over and over.

• Solutions:
1. Bottom-up
2. Memoization (memoize the natural, but inefficient, recursive algorithm)

Page 28: Chapter 15: Dynamic Programming II


Top-down approach

RECURSIVE_MATRIX_CHAIN(p, i, j)   /* RMC for short */
   if i = j
      then return 0
   m[i, j] ← ∞
   for k ← i to j − 1
       do q ← RMC(p, i, k) + RMC(p, k+1, j) + pi-1 pk pj
          if q < m[i, j]
             then m[i, j] ← q
   return m[i, j]

Page 29: Chapter 15: Dynamic Programming II


The recursion tree for the computation of RECURSIVE-MATRIX-CHAIN(p, 1, 4) (figure omitted)

Page 30: Chapter 15: Dynamic Programming II


Memoization

• Alternative approach to dynamic programming: “Store, don’t recompute.”
– Make a table indexed by subproblem.
– When solving a subproblem: look it up in the table. If the answer is there, use it; else, compute the answer, then store it.
• In bottom-up dynamic programming, we go one step further: we determine in what order we’d want to access the table, and fill it in that way.

Page 31: Chapter 15: Dynamic Programming II


MEMOIZED_MATRIX_CHAIN

MEMOIZED_MATRIX_CHAIN(p)
   n ← length[p] − 1
   for i ← 1 to n
       do for j ← 1 to n
              do m[i, j] ← ∞
   return LOOKUP_CHAIN(p, 1, n)

Page 32: Chapter 15: Dynamic Programming II


LOOKUP_CHAIN

LOOKUP_CHAIN(p, i, j)
   if m[i, j] < ∞
      then return m[i, j]
   if i = j
      then m[i, j] ← 0
      else for k ← i to j − 1
               do q ← LOOKUP_CHAIN(p, i, k) + LOOKUP_CHAIN(p, k+1, j) + pi-1 pk pj
                  if q < m[i, j]
                     then m[i, j] ← q
   return m[i, j]

Time Complexity: O(n^3)
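A memoized Python sketch mirroring MEMOIZED_MATRIX_CHAIN and LOOKUP_CHAIN (same p convention as the bottom-up sketch; ∞ in the table marks a subproblem not yet solved):

    import math

    def memoized_matrix_chain(p):
        n = len(p) - 1
        m = [[math.inf] * (n + 1) for _ in range(n + 1)]
        return lookup_chain(m, p, 1, n)

    def lookup_chain(m, p, i, j):
        if m[i][j] < math.inf:        # already solved: just look it up
            return m[i][j]
        if i == j:
            m[i][j] = 0
        else:
            for k in range(i, j):
                q = (lookup_chain(m, p, i, k) + lookup_chain(m, p, k + 1, j)
                     + p[i - 1] * p[k] * p[j])
                if q < m[i][j]:
                    m[i][j] = q
        return m[i][j]

    print(memoized_matrix_chain([30, 35, 15, 5, 10, 20, 25]))   # 15125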

Page 33: Chapter 15: Dynamic Programming II


Homework

• Problem: 15.1 (Due: Nov. 11)
• Practice at home: 15.2-1, 15.2-3, 15.3-2, 15.3-5